Asymptotic Analysis of Random Walks

This book focuses on the asymptotic behaviour of the probabilities of large deviations of the trajectories of random walks with 'heavy-tailed' (in particular, regularly varying, sub- and semiexponential) jump distributions. Large deviation probabilities are of great interest in numerous applied areas, typical examples being ruin probabilities in risk theory, error probabilities in mathematical statistics and buffer-overflow probabilities in queueing theory. The classical large deviation theory, developed for distributions decaying exponentially fast (or even faster) at infinity, mostly uses analytical methods. If the fast decay condition fails, which is the case in many important applied problems, then direct probabilistic methods usually prove to be efficient. This monograph presents a unified and systematic exposition of large deviation theory for heavy-tailed random walks. Most of the results presented in the book appear in a monograph for the first time. Many of them were obtained by the authors.

Professor Alexander Borovkov works at the Sobolev Institute of Mathematics in Novosibirsk. Professor Konstantin Borovkov is a staff member in the Department of Mathematics and Statistics at the University of Melbourne.
Asymptotic Analysis of Random Walks: Heavy-Tailed Distributions

A.A. Borovkov and K.A. Borovkov

Translated by O.B. Borovkova
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi

The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org

© A.A. Borovkov and K.A. Borovkov 2008
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2008
Printed in the United States of America

A catalogue record for this publication is available from the British Library

ISBN 978-0-521-88117-3 hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
Contents

Notation
Introduction

1 Preliminaries
1.1 Regularly varying functions and their main properties
1.2 Subexponential distributions
1.3 Locally subexponential distributions
1.4 Asymptotic properties of 'functions of distributions'
1.5 The convergence of distributions of sums of random variables with regularly varying tails to stable laws
1.6 Functional limit theorems

2 Random walks with jumps having no finite first moment
2.1 Introduction. The main approach to bounding from above the distribution tails of the maxima of sums of random variables
2.2 Upper bounds for the distribution of the maximum of sums when α ≤ 1 and the left tail is arbitrary
2.3 Upper bounds for the distribution of the sum of random variables when the left tail dominates the right tail
2.4 Upper bounds for the distribution of the maximum of sums when the left tail is substantially heavier than the right tail
2.5 Lower bounds for the distributions of the sums. Finiteness criteria for the maximum of the sums
2.6 The asymptotic behaviour of the probabilities P(Sn ≥ x)
2.7 The asymptotic behaviour of the probabilities P(S̄n ≥ x)

3 Random walks with jumps having finite mean and infinite variance
3.1 Upper bounds for the distribution of S̄n
3.2 Upper bounds for the distribution of S̄n(a), a > 0
3.3 Lower bounds for the distribution of Sn
3.4 Asymptotics of P(Sn ≥ x) and its refinements
3.5 Asymptotics of P(S̄n ≥ x) and its refinements
3.6 The asymptotics of P(S̄(a) ≥ x) with refinements and the general boundary problem
3.7 Integro-local theorems on large deviations of Sn for index −α, α ∈ (0, 2)
3.8 Uniform relative convergence to a stable law
3.9 Analogues of the law of the iterated logarithm in the case of infinite variance

4 Random walks with jumps having finite variance
4.1 Upper bounds for the distribution of S̄n
4.2 Upper bounds for the distribution of S̄n(a), a > 0
4.3 Lower bounds for the distributions of Sn and S̄n(a)
4.4 Asymptotics of P(Sn ≥ x) and its refinements
4.5 Asymptotics of P(S̄n ≥ x) and its refinements
4.6 Asymptotics of P(S̄(a) ≥ x) and its refinements. The general boundary problem
4.7 Integro-local theorems for the sums Sn
4.8 Extension of results on the asymptotics of P(Sn ≥ x) and P(S̄n ≥ x) to wider classes of jump distributions
4.9 The distribution of the trajectory {Sk} given that Sn ≥ x or S̄n ≥ x

5 Random walks with semiexponential jump distributions
5.1 Introduction
5.2 Bounds for the distributions of Sn and S̄n, and their consequences
5.3 Bounds for the distribution of S̄n(a)
5.4 Large deviations of the sums Sn
5.5 Large deviations of the maxima S̄n
5.6 Large deviations of S̄n(a) when a > 0
5.7 Large deviations of S̄n(−a) when a > 0
5.8 Integro-local and integral theorems on the whole real line
5.9 Additivity (subexponentiality) zones for various distribution classes

6 Large deviations on the boundary of and outside the Cramér zone for random walks with jump distributions decaying exponentially fast
6.1 Introduction. The main method of studying large deviations when Cramér's condition holds. Applicability bounds
6.2 Integro-local theorems for sums Sn of r.v.'s with distributions from the class ER when the function V(t) is of index from the interval (−3, −1)
6.3 Integro-local theorems for the sums Sn when the Cramér transform for the summands has a finite variance at the right boundary point
6.4 The conditional distribution of the trajectory {Sk} given Sn ∈ Δ[x)
6.5 Asymptotics of the probability of the crossing of a remote boundary by the random walk

7 Asymptotic properties of functions of regularly varying and semiexponential distributions. Asymptotics of the distributions of stopped sums and their maxima. An alternative approach to studying the asymptotics of P(Sn ≥ x)
7.1 Functions of regularly varying distributions
7.2 Functions of semiexponential distributions
7.3 Functions of distributions interpreted as the distributions of stopped sums. Asymptotics for the maxima of stopped sums
7.4 Sums stopped at an arbitrary Markov time
7.5 An alternative approach to studying the asymptotics of P(Sn ≥ x) for sub- and semiexponential distributions of the summands
7.6 A Poissonian representation for the supremum S̄ and the time when it was attained

8 On the asymptotics of the first hitting times
8.1 Introduction
8.2 A fixed level x
8.3 A growing level x

9 Integro-local and integral large deviation theorems for sums of random vectors
9.1 Introduction
9.2 Integro-local large deviation theorems for sums of independent random vectors with regularly varying distributions
9.3 Integral theorems

10 Large deviations in trajectory space
10.1 Introduction
10.2 One-sided large deviations in trajectory space
10.3 The general case

11 Large deviations of sums of random variables of two types
11.1 The formulation of the problem for sums of random variables of two types
11.2 Asymptotics of P(m, n, x) related to the class of regularly varying distributions
11.3 Asymptotics of P(m, n, x) related to semiexponential distributions

12 Random walks with non-identically distributed jumps in the triangular array scheme in the case of infinite second moment. Transient phenomena
12.1 Upper and lower bounds for the distributions of S̄n and Sn
12.2 Asymptotics of the crossing of an arbitrary remote boundary
12.3 Asymptotics of the probability of the crossing of an arbitrary remote boundary on an unbounded time interval. Bounds for the first crossing time
12.4 Convergence in the triangular array scheme of random walks with non-identically distributed jumps to stable processes
12.5 Transient phenomena

13 Random walks with non-identically distributed jumps in the triangular array scheme in the case of finite variances
13.1 Upper and lower bounds for the distributions of S̄n and Sn
13.2 Asymptotics of the probability of the crossing of an arbitrary remote boundary
13.3 The invariance principle. Transient phenomena

14 Random walks with dependent jumps
14.1 The classes of random walks with dependent jumps that admit asymptotic analysis
14.2 Martingales on countable Markov chains. The main results of the asymptotic analysis when the jump variances can be infinite
14.3 Martingales on countable Markov chains. The main results of the asymptotic analysis in the case of finite variances
14.4 Arbitrary random walks on countable Markov chains

15 Extension of the results of Chapters 2–5 to continuous-time random processes with independent increments
15.1 Introduction
15.2 The first approach, based on using the closeness of the trajectories of processes in discrete and continuous time
15.3 The construction of a full analogue of the asymptotic analysis from Chapters 2–5 for random processes with independent increments

16 Extension of the results of Chapters 3 and 4 to generalized renewal processes
16.1 Introduction
16.2 Large deviation probabilities for S(T) and S̄(T)
16.3 Asymptotic expansions
16.4 The crossing of arbitrary boundaries
16.5 The case of linear boundaries

Bibliographic notes
References
Index
Notation

This list includes only the notation used systematically throughout the book.

Random variables and events
ξ1, ξ2, … are independent random variables (r.v.'s), assumed to be identically distributed in Chapters 1–11 and 16 (in which case ξj =d ξ)
ξ(a) = ξ − a
ξ^⟨y⟩ is the r.v. ξ 'truncated' at the level y: P(ξ^⟨y⟩ < t) = P(ξ < t)/P(ξ < y), t ≤ y
ξ^(λ) is an r.v. with distribution P(ξ^(λ) ∈ dt) = (e^{λt}/ϕ(λ)) P(ξ ∈ dt) (the Cramér transform)
ξ̄n = max_{k≤n} ξk
Sn = ξ1 + ··· + ξn
S̄n = max_{k≤n} Sk
S̄ = sup_{k≥0} Sk
S̲n = min_{k≤n} Sk
max_{k≤n} |Sk| ≡ max{S̄n, −S̲n}
Sn(a) = Σ_{i=1}^n ξi(a) ≡ Sn − an
S̄n(a) = max_{k≤n} Sk(a) = max_{k≤n} (Sk − ak)
S̄(a) = sup_{k≥0} Sk(a) = sup_{k≥0} (Sk − ak)
Sn^⟨y⟩ = Σ_{j=1}^n ξj^⟨y⟩
Sn^(λ) = Σ_{j=1}^n ξj^(λ)
τ1, τ2, … are independent identically distributed r.v.'s (τj > 0 in Chapter 16), τ =d τj
tk = τ1 + ··· + τk
Gn is one of the events {Sn ≥ x}, {S̄n ≥ x} or {max_{k≤n} (Sk − g(k)) ≥ 0}; in Chapters 15 and 16, GT = {sup_{t≤T} (S(t) − g(t)) ≥ 0}
1(A) is the indicator of the event A
=d, ≤d, ≥d are equality and inequalities between r.v.'s in distribution
⇒ is used to denote convergence of r.v.'s in distribution

Distributions and their characteristics
The notation ζ ⊂= G means that the r.v. ζ has the distribution G
The notation ζn ⊂=⇒ G means that the distributions of the r.v.'s ζn converge weakly to the distribution G (as n → ∞)
Fj is the distribution of ξj (Fj = F in Chapters 1–11, 16)
F+(t) = F([t, ∞)) ≡ P(ξ ≥ t), Fj,+(t) = Fj([t, ∞))
F−(t) = F((−∞, −t)) ≡ P(ξ < −t), Fj,−(t) = Fj((−∞, −t))
F(t) = F−(t) + F+(t), Fj(t) = Fj,−(t) + Fj,+(t)
FI(t) = ∫_0^t F(u) du, F^I(t) = ∫_t^∞ F(u) du
V(t), W(t), U(t) are regularly varying functions (r.v.f.'s) (in Chapters 1–4):
V(t) = t^{−α} L(t), α > 0; W(t) = t^{−β} LW(t), β > 0; U(t) = t^{−γ} LU(t), γ > 0
L(t), LW(t), LU(t), LY(t) are slowly varying functions (s.v.f.'s), corresponding to V, W, U, Y
V(t) = e^{−l(t)}, l(t) = t^α L(t), α ∈ (0, 1), L(t) an s.v.f. (in Chapter 5); l(t) = t^α L(t) is the exponent of a semiexponential distribution
V̂(t) = max{V(t), W(t)}
Fτ is the distribution of τ
G is the distribution of ζ
α, β are the exponents of the right and left regularly varying distribution tails of ξ respectively, or those of their regularly varying majorants (or minorants)
α̂ = max{α, β}, α̌ = min{α, β}
d = Var ξ = E(ξ − Eξ)²
f(λ) = E e^{iλξ} is the characteristic function (ch.f.) of ξ
g(λ) = E e^{iλζ} is the ch.f. of ζ
ϕ(λ) = E e^{λξ} is the moment generating function of ξ
(α, ρ) are the parameters of the limiting stable law
F_{α,ρ} is the (standard) stable distribution with parameters (α, ρ)
F_{α,ρ,+}(t) = F_{α,ρ}([t, ∞)), F_{α,ρ,−}(t) = F_{α,ρ}((−∞, −t)), t > 0
F_{α,ρ}(t) = F_{α,ρ,+}(t) + F_{α,ρ,−}(t), t > 0
Φ is the standard normal distribution; Φ(t) is the standard normal distribution function

Conditions on distributions
[ · , =] ⇔ F+(t) = V(t), t > 0
[ · , <] ⇔ F+(t) ≤ V(t), t > 0
[ · , >] ⇔ F+(t) ≥ V(t), t > 0
[=, · ] ⇔ F−(t) = W(t), t > 0
[<, · ] ⇔ F−(t) ≤ W(t), t > 0
[>, · ] ⇔ F−(t) ≥ W(t), t > 0
[=, =] ⇔ F+(t) = V(t), F−(t) = W(t), t > 0
[<, <] ⇔ F+(t) ≤ V(t), F−(t) ≤ W(t), t > 0
[>, >] ⇔ F+(t) ≥ V(t), F−(t) ≥ W(t), t > 0
[R_{α,ρ}] means that F(t) = t^{−α} LF(t), α ∈ (0, 2], where LF(t) is an s.v.f. and there exists the limit
    lim_{t→∞} F+(t)/F(t) =: ρ+ = (ρ + 1)/2 ∈ [0, 1]
[D_(h,q)], h ∈ (0, 2], are conditions on the smoothness of F(t) at infinity; see § 3.4
[D_(k,q)], k = 1, 2, …, are generalized conditions of the differentiability of F(t) at infinity; see § 4.4
[<] (for τ) means that Fτ(t) ≤ Vτ(t) := t^{−γ} Lτ(t), where Lτ is an s.v.f.

Scalings
b(n) = F^{(−1)}(1/n) if Eξ² = ∞, α < 2; b(n) = Y^{(−1)}(1/n) if Eξ² = ∞, α = 2; b(n) = √(nd) if d = Var ξ < ∞
σ(n) = V^{(−1)}(1/n) if Eξ² = ∞; σ(n) = √((α − 2) d n ln n) if d = Var ξ < ∞
σW(n) = W^{(−1)}(1/n)
σ1(n) = w1^{(−1)}(1/n), where w1(t) = t^{−2} l(t) (in Chapter 5)
σ2(n) = w2^{(−1)}(1/n), where w2(t) = t^{−2} l²(t) (in Chapter 5)

Combined conditions
[Q1]: Eξ² = ∞, [<, <], W(t) ≤ cV(t), x → ∞ and nV(x) → 0
[Q2]: Eξ² = ∞, [<, <], x → ∞ and nV(x) → 0

Distribution classes
L is the class of distributions with asymptotically locally constant tails (or of their distribution tails)
R is the class of distributions with right tails regularly varying at infinity (or the class of their tails); in Chapters 2–4, the condition F+ ∈ R coincides with condition [ · , =]
ER is the class of regularly varying exponentially decaying distributions (or of their tails)
Se is the class of semiexponential distributions (or of their tails); in Chapter 5, the condition F+ ∈ Se coincides with condition [ · , =]
S+ is the class of subexponential distributions on R+ (or of their tails)
S is the class of subexponential distributions (or of their tails)
Sloc is the class of locally subexponential distributions
C is the class of distributions satisfying the Cramér condition
Ms is the class of distributions satisfying the condition E|ξ|^s < ∞
𝒮 is the class of stable distributions
N_{α,ρ} is the domain of normal attraction to the stable law F_{α,ρ}

Miscellaneous
∼ is the relation of asymptotic equivalence: A ∼ B ⇔ A/B → 1
≍ is the relation of asymptotic comparability: A ≍ B ⇔ A = O(B), B = O(A)
x+ = max{0, x}
⌊x⌋ denotes the integral part of x
r = x/y
φ = sign λ
Π = Π(x, n) = nV(x), Π(y) = Π(y, n) = nV(y)
Δ[x) = [x, x + Δ), Δ > 0 (in Chapter 9, Δ[x) is a cube in R^d with edge length Δ)
(x, y) = Σ_{i=1}^d x^(i) y^(i) is the scalar product of the vectors x, y ∈ R^d
|x| = (x, x)^{1/2}
S_{d−1} = {x ∈ R^d : |x| = 1} is the unit sphere in R^d
e(x) = x/|x|, x ∈ R^d
Introduction
This book is concerned with the asymptotic behaviour of the probabilities of rare events related to large deviations of the trajectories of random walks whose jump distributions decay not too fast at infinity and possess some form of 'regular behaviour'. Typically, we will be considering regularly varying, subexponential, semiexponential and other similar distributions. For brevity, all these distributions will be referred to in this Introduction as 'regular'. The key words for the contents of the present book are thus: random walks, large deviations, slowly decaying and, in particular, regular distributions. The question of why these themes have become the subject of a separate monograph can be answered in terms of each of these key concepts.

• Random walks form a classical object of probability theory, the study of which is of tremendous theoretical interest. They constitute a mathematical model of great importance for applications in mathematical statistics (sequential analysis), risk theory, queueing theory and so on.

• Large deviations and rare events are of great interest in all these applied areas, since computing the asymptotics of large deviation probabilities enables one to find, for example, small error probabilities in mathematical statistics, small ruin probabilities in risk theory, small buffer overflow probabilities in queueing theory and so on.

• Slowly decaying and, in particular, regular distributions present, when one is studying large deviation probabilities, an alternative to distributions decaying exponentially fast at infinity (for which Cramér's condition holds; the meaning of the terms used here will be explained in more detail in what follows). The first classical results in large deviation theory were obtained for the case of distributions decaying exponentially fast. However, this condition of fast (exponential) decay fails in many applied problems.
Thus, for instance, the 'empirical distribution tails' for insurance claim sizes, for the sizes of files sent over the Internet and for many other data sets usually decay as a power function (see e.g. [1]). However, in the problems of, say, mathematical statistics, the assumption of fast decay of the distributions is often adequate as it reflects the nature of the problem. Therefore, both classes of distributions, regular and fast decaying, are of great interest. Random walks with fast decaying distributions will be considered in a separate book (a companion volume to the present monograph). The reason for this is that studying random walks with regular distributions requires a completely different approach from that used for fast decaying distributions: for regular distributions the large deviation probabilities are mostly formed by contributions from the distribution tails (i.e. on account of the large jumps in the random walk trajectory), while for distributions decaying exponentially fast they are formed by contributions from the 'central parts' of the distributions. In the latter case analytical methods prove to be efficient, and everything is determined by the behaviour of the Laplace transforms of the jump distributions. In the former case, direct probabilistic approaches prove to be more efficient. However, until now the results for regular distributions were of a disconnected character and referred to different special cases. The writing of the present monograph was undertaken as an attempt to present a unified exposition of the theory on the basis of a common approach; a large number of the results we present are new.

Before surveying the contents of the book, we will make a few further, more detailed, remarks of a general character in order to make the naturalness of the objects of study, and also the logic and structure of the monograph, clearer to the reader.

The application of probability theory as a mathematical discipline is most efficient when one is studying phenomena where a large number of random factors are present. The influence of such factors is, as a rule, additive (or can be reduced to such), especially in cases where the individual contribution of each factor is small.
Hence sums of random variables have always been (and will be) among the main objects of research in probability theory. A vast literature is devoted to the study of the main asymptotic laws describing the distributions of sums of large numbers of random summands (see e.g. [130, 152, 186, 223, 121, 122]). The most advanced results in this direction were obtained for sums of independent identically distributed (i.i.d.) random variables (r.v.'s).

Let ξ, ξ1, ξ2, … be i.i.d. (possibly vector-valued) r.v.'s. Put S0 := 0 and

    Sn := ξ1 + ··· + ξn,    n = 1, 2, …

The sequence {Sn; n ≥ 1} is called a random walk. The following assertions constitute the fundamental classical limit theorems for random walks.

1. The strong law of large numbers. If there exists a finite expectation Eξ then, as n → ∞,

    Sn/n → Eξ almost surely (a.s.).    (0.0.1)

One could call the value nEξ the first-order approximation to the sum Sn.
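As a quick numerical illustration of (0.0.1) (our own sketch, not part of the book; the Exp(1) jump distribution and the helper name `sample_mean` are assumptions made for the example), one can watch Sn/n settle near Eξ:

```python
import random

random.seed(1)  # reproducibility of the illustration

def sample_mean(n, draw=lambda: random.expovariate(1.0)):
    """S_n / n for a walk with n i.i.d. jumps produced by `draw`."""
    return sum(draw() for _ in range(n)) / n

# For Exp(1) jumps Eξ = 1, so by (0.0.1) S_n / n should approach 1,
# making nEξ = n the first-order approximation to S_n.
for n in (10, 1000, 100000):
    print(n, round(sample_mean(n), 3))
```

The fluctuations of Sn/n around Eξ shrink roughly like n^{−1/2} here, which is exactly the second-order (central limit) correction discussed next.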
2. The central limit theorem. If Eξ² < ∞ then, as n → ∞,

    ζn := (Sn − nEξ)/√(nd) ⇒ ζ ⊂= Φ,    (0.0.2)

where d = Var ξ = Eξ² − (Eξ)² is the variance of the r.v. ξ, the symbol ⇒ denotes the (weak) convergence of the r.v.'s in distribution and the notation ζ ⊂= F says that the r.v. ζ has the distribution F; in this case, F = Φ is the standard normal distribution (with parameters (0, 1)). One could call the quantity nEξ + ζ√(nd) the second-order approximation to Sn. Since the limiting distribution Φ in (0.0.2) is continuous, the relation (0.0.2) is equivalent to the following one: for any v ∈ R we have

    P(ζn ≥ v) → P(ζ ≥ v)    as n → ∞,
and, moreover, this convergence is uniform in v. In other words, for deviations of the form x = nEξ + v√(nd),

    P(Sn ≥ x) ∼ P(ζ ≥ (x − nEξ)/√(nd)) = 1 − Φ(v)    as n → ∞    (0.0.3)

uniformly in v ∈ [v1, v2], where −∞ < v1 ≤ v2 < ∞ are fixed numbers and Φ is the standard normal distribution function (here and in what follows, the notation A ∼ B means that A/B → 1 under the indicated passage to the limit).

3. Convergence to stable laws. If the expectation of the r.v. ξ is infinite or does not exist, then the first-order approximation for the sum Sn can only be found when the sum of the right and left tails of the distribution of ξ, i.e. the function

    F(t) := P(ξ ≥ t) + P(ξ < −t),    t > 0,

is regularly varying as t → ∞; it can be represented as

    F(t) = t^{−α} L(t),    (0.0.4)

where α ∈ (0, 1] and L(t) is a slowly varying function (s.v.f.) as t → ∞. The same can be said about the second-order approximation for Sn in the case when E|ξ| < ∞ but Eξ² = ∞. In this case, we have α ∈ [1, 2] in (0.0.4).

For these two cases, we have the following assertion. For simplicity's sake, assume that α < 2, α ≠ 1; we will also assume that Eξ = 0 when the expectation is finite (the 'boundary case' α = 1 is excluded to avoid the necessity of nontrivial centring of the sums Sn when Eξ = ±∞ or the expectation does not exist). Let F+(t) := P(ξ ≥ t), let (0.0.4) hold and let there exist the limit

    lim_{t→∞} F+(t)/F(t) =: ρ+ ∈ [0, 1]
(if ρ+ = 0 then the right tail of the distribution does not need to be regularly varying). Denote by

    F^{(−1)}(x) := inf{t > 0 : F(t) ≤ x},    x > 0,

the (generalized) inverse function for F, and put b(n) := F^{(−1)}(n^{−1}) = n^{1/α} L1(n), where L1 is also an s.v.f. (see § 1.1). Then, as n → ∞,

    Sn/b(n) ⇒ ζ^(α,ρ) ⊂= F_{α,ρ},    (0.0.5)

where F_{α,ρ} is the standard stable law with parameters α and ρ = 2ρ+ − 1. For completeness of exposition, we will present a formal proof of the above assertion for all α ∈ (0, 2] in § 1.5.

The relation (0.0.5), similarly to (0.0.3), means that, for x = vb(n),

    P(Sn ≥ x) ∼ F_{α,ρ,+}(v)    as n → ∞    (0.0.6)

uniformly in v ∈ [v1, v2] for fixed v1 ≤ v2 from (0, ∞), where F_{α,ρ,+}(v) = F_{α,ρ}([v, ∞)) is the right tail of the distribution F_{α,ρ}.

The assertions (0.0.2), (0.0.3) and (0.0.5), (0.0.6) give a satisfactory answer for the behaviour of the probabilities of the form P(Sn ≥ x) for large n only for deviations of the form x = vb(n), where v is moderately large and b(n) is the scaling factor in the respective limit theorem (b(n) = √(nd) in the case Eξ² < ∞, when we also assume that Eξ = 0). For example, in the finite variance case it is recommended to deal very carefully with the normal approximation values given by (0.0.3) for v > 3 and moderately large values of n (say, n ≤ 100). This leads to a natural setting for the problem on the asymptotic behaviour of the probabilities P(Sn ≥ x) for x ≫ b(n), all the more so since, as we have already noted, it is precisely such 'large deviations' that are often of interest in applied problems. Such probabilities are referred to as the probabilities of large deviations of the sums Sn.

So far we have only discussed questions related to the distributions of the partial sums Sn. However, in many applications (in particular, as already mentioned, in mathematical statistics, queueing theory, risk theory and other areas) questions related to the behaviour of the entire trajectory S1, S2, …, Sn are of no less importance. Thus, of interest is the problem of computing the probability

    P(g1(k) < Sk < g2(k); k = 1, …, n)    (0.0.7)

for two given sequences ('boundaries') g1(k) and g2(k), k = 1, 2, …, or the probability of the complementary event that the trajectory {Sk; k = 1, …, n} will leave at least once the corridor specified by the functions gi(k). These are
the so-called boundary problems for random walks. The simplest is the problem on the limiting distribution of the variables

    S̄n := max_{k≤n} Sk    and    S̄n(a) := max_{k≤n} (Sk − ak)

for a given constant a.

The following is known about the asymptotics of the probability (0.0.7). Let Eξ = 0, Eξ² < ∞ (and, without loss of generality, one can assume that in this case Eξ² = 1), and let the boundaries gi be given by the relations

    gi(k) = √n fi(k/n),    i = 1, 2,    (0.0.8)

where f1 < f2 are some fixed sufficiently regular (e.g. piecewise smooth) functions on [0, 1], f1(0) < 0 < f2(0). Then by the Kolmogorov–Petrovskii theorem (see e.g. [162]) the probability (0.0.7) converges as n → ∞ to the value P(0, 0) of the solution P(t, z) to the boundary problem for the parabolic equation

    ∂P/∂t + (1/2) ∂²P/∂z² = 0

in the region {(t, z) : 0 < t < 1, f1(t) < z < f2(t)} with boundary conditions

    P(t, fi(t)) = 0,    t ∈ [0, 1],    i = 1, 2,
    P(1, z) = 1,    z ∈ (f1(1), f2(1)).
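The corridor probability in (0.0.7) with the scaled boundaries (0.0.8) can also be estimated directly by simulation. The sketch below is our own illustration (not from the book): the flat boundaries f1 ≡ −1, f2 ≡ 1 and all function names are assumptions made for the example. For standard normal jumps the Brownian limit value of the staying probability for this corridor is (4/π) e^{−π²/8} ≈ 0.3708.

```python
import math
import random

random.seed(2)

def stays_inside(n, f1, f2):
    """One n-step walk with N(0,1) jumps; True if sqrt(n)*f1(k/n) < S_k < sqrt(n)*f2(k/n)
    for every k = 1..n, i.e. the path stays in the corridor (0.0.8)."""
    s, root_n = 0.0, math.sqrt(n)
    for k in range(1, n + 1):
        s += random.gauss(0.0, 1.0)
        if not (root_n * f1(k / n) < s < root_n * f2(k / n)):
            return False
    return True

def corridor_probability(n=400, paths=5000):
    f1 = lambda t: -1.0  # flat corridor, chosen only for this illustration
    f2 = lambda t: 1.0
    return sum(stays_inside(n, f1, f2) for _ in range(paths)) / paths

# The Brownian value for the corridor |w(t)| < 1 is ≈ 0.3708; the walk
# estimate should land nearby (discretization biases it slightly upwards,
# since a discrete walk can jump over a boundary between steps).
print(corridor_probability())
```

Increasing n shrinks the discretization bias, in line with the invariance principle discussed next.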
The above assertion also follows from the so-called invariance principle (also known as the functional central limit theorem). According to this principle, the distribution of the random polygon {ζn(t); t ∈ [0, 1]} with vertices at the points (k/n, Sk/√n), k = 0, 1, …, n, in the space C(0, 1) of continuous functions on [0, 1] (endowed with the σ-algebra of Borel sets generated by the uniform distance in C(0, 1)) converges weakly to the distribution of the standard Wiener process {w(t); t ∈ [0, 1]} in the space C(0, 1) as n → ∞. From this fact it follows that the probability (0.0.7) for the boundaries (0.0.8) converges to the quantity

    P(f1(t) < w(t) < f2(t); t ∈ [0, 1]),

which, in turn, is given by the solution to the above-mentioned boundary problem for the parabolic equation at the point (0, 0).

A similar 'invariance principle' holds in the case of convergence to stable laws, when Eξ² = ∞. In the case where gi(k) = b(n) fi(k/n), the probability (0.0.7) converges to the value

    P(f1(t) < ζ(t) < f2(t); t ∈ [0, 1]),

where ζ(·) is the corresponding stable process (the increments of the process ζ(·) on disjoint time intervals are independent of each other and have, up to a scaling transform, the distribution F_{α,ρ}).

Here we encounter the same problem: the above results do not give a satisfactory answer to the question of the behaviour of probabilities of the form (0.0.7)
(or the probabilities of the complementary events) in the case where g1 (k) and −g2 (k) are large in comparison with b(n). We again arrive at the large deviation problem but now in the context of boundary problems for random walks. One of the simplest examples here is the problem on the asymptotic behaviour of the probability P S n (a) x → 0 when, say, Eξ = 0,
a = 0,
x b(n),
whereas the case Eξ = 0,
a > 0,
x→∞
provides one with another example of a problem on large deviation probabilities; this, however, does not quite fit the above scheme. The above-mentioned problems on large deviations, along with a number of other related problems, constitute the main object of study in the present book. Here we should make the following important remark. Even while studying large deviation probabilities for Sn in the case Eξ = 0, Eξ 2 <√∞, it turns out that the nature of the asymptotics of P(Sn x) when x n ln n, and the methods used to find it, strongly depend on the behaviour of the tail P(ξ t) as t → ∞. If the tail vanishes exponentially fast, i.e. the so-called Cram´er condition is met: ϕ(λ) := Eeλξ < ∞
(0.0.9)
for some λ > 0, then, as we have already noted, the asymptotics in question will be formed, roughly speaking, in equal degrees by contributions from all the jumps in the trajectory of the random walk. In this case, the asymptotics are described by laws that are established mainly via analytical calculations and are determined in a wide zone of deviations by the analytic properties of the moment generating function ϕ(λ). If Cramér's condition does not hold then, as when studying the conditions for convergence to stable laws in (0.0.5), we have to assume the regular character of the tails F₊(t) = P(ξ ≥ t). Such an assumption can be either a condition of the form (0.0.4) or the condition

P(ξ ≥ t) = exp{−t^α L(t)},  α ∈ (0, 1),
(0.0.10)
where L(t) is an s.v.f. as t → ∞ possessing some smoothness properties. The class of tails of the form (0.0.4) will be called the class of regularly varying tails (distributions), and the class specified by the condition (0.0.10) (under some additional assumptions on the function L) will be called the class of semiexponential tails (distributions). In the cases (0.0.4) and (0.0.10), the asymptotics of P(Sn ≥ x) in situations where x grows fast enough as n → ∞ will, as a rule, be governed by a single
large jump. The methods of deriving the asymptotics, as well as the asymptotics themselves, turn out to be substantially different from those in the case where Cramér's condition (0.0.9) is met. The methods used here are mostly based on direct probabilistic approaches.

The main objects of study in the present monograph are problems on large deviations for random walks with jumps following regular distributions (in particular, regularly varying, (0.0.4), and semiexponential, (0.0.10), ones). There is a great deal of literature on these problems. This is especially so for the asymptotics of the distributions P(Sn ≥ x) of sums of r.v.'s (see below and also the bibliographic notes to Chapters 2–5). However, the results obtained in this direction so far have been, for the most part, disconnected and incomplete. Along with conditions of the form (0.0.4) and (0.0.10), one often encounters in the literature the so-called subexponentiality property of the right distribution tails, which is characterized by the following relation: for independent copies ξ1 and ξ2 of the r.v. ξ one has

P(ξ1^+ + ξ2^+ ≥ t) ∼ 2P(ξ^+ ≥ t)  as  t → ∞,    (0.0.11)
where x^+ = max{0, x} is the positive part of x. Distributions from both classes (0.0.4) and (0.0.10) possess this property. Roughly speaking, the classes (0.0.4) and (0.0.10) form a 'regular' part of the class of subexponential distributions. In this connection, it is important to note that the methods of study and the form of the asymptotics of interest for the classes (0.0.4) and (0.0.10) prove in many situations to be substantially different. That is why in this book we will study these classes separately, believing that this approach is methodologically well justified.

A short history of the problem. Research in the area of large deviations for random walks with heavy-tailed jumps began in the second half of the twentieth century. At first the main effort was, of course, concentrated on studying the large deviations of the sums Sn of r.v.'s. Here one should first of all mention the papers by C. Heyde [141, 145], S.V. Nagaev [201, 206], A.V. Nagaev [194, 195], D.H. Fuk and S.V. Nagaev [127], L.V. Rozovskii [237, 238] and others. These established the basic principle by which the asymptotics of P(Sn ≥ x) are formed: the main contribution to the probability of interest comes from trajectories that contain one large jump. Later on papers began appearing in which this principle was used to find the distribution of the maximum S̄n of the partial sums and also to solve more general boundary problems for random walks (I.F. Pinelis [225], V.V. Godovanchuk [131], A.A. Borovkov [40, 42]). Somewhat aside from these were papers devoted to the probabilities of large deviations of the maximum of a random walk with negative drift. The first general results were obtained by A.A. Borovkov in [42], while more complete versions
(for subexponential summands) were established by N. Veraverbeke [275] and D.A. Korshunov [178]. The authors of the present book began a systematic study of large deviations for random walks with regularly distributed jumps in their papers [51] and [63, 66]. Then the papers [52, 64, 54, 59, 60] and some others appeared, in which the derived results were extended to semiexponential and regular exponentially decaying distributions, to multivariate random walks, to the case of non-identically distributed summands and so on. As a result, there arose a whole range of interesting problems, unified by a general approach to their solution, and a system of interconnected and rather advanced results that were, as a rule, quite close to unimprovable. As these problems and results were, moreover, of considerable interest for applications, the idea of writing a monograph on all this became quite natural. The same applies to a related monograph, to be devoted to random walks with fast decaying jump distributions. More detailed bibliographic references will be given within the exposition in each chapter, and also in the bibliographic notes at the end of the book.

Now we will outline the contents of the book. Chapter 1 contains preliminary results and information that will be used in the sequel. In § 1.1 we present the basic properties of slowly varying and regularly varying functions, and in § 1.2 the classes of subexponential and semiexponential distributions are introduced. We give conditions characterizing these classes and establish a connection between them. The asymptotic properties of the so-called functions of (subexponential) distributions are studied in § 1.4. The above-mentioned §§ 1.1–1.4 constitute the first part of the chapter. In the second part of Chapter 1 (§§ 1.5, 1.6) we present known fundamental limit theorems of probability theory (already briefly mentioned above).
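The defining relation (0.0.11) is easy to observe numerically. The following Monte Carlo sketch (ours, not the book's; it assumes a standard Pareto law with tail P(ξ ≥ t) = t^{−α} for t ≥ 1, sampled by inverse transform) estimates P(ξ1 + ξ2 ≥ t) and compares it with P(ξ ≥ t); by subexponentiality the ratio should be close to 2 for large t:

```python
import random

random.seed(1)
alpha = 1.5          # tail index: P(xi >= t) = t**(-alpha) for t >= 1 (Pareto law)
t = 50.0             # deviation level
n = 1_000_000        # number of simulated pairs

def pareto():
    # inverse-transform sampling: if U is uniform on (0, 1], then U**(-1/alpha) is Pareto
    return (1.0 - random.random()) ** (-1.0 / alpha)

hits = sum(1 for _ in range(n) if pareto() + pareto() >= t)
# By (0.0.11), P(xi1 + xi2 >= t) ~ 2 * P(xi >= t) as t grows
ratio = (hits / n) / t ** (-alpha)
print(ratio)
```

For moderate t the estimated ratio sits somewhat above the limiting value 2 (the finite-t correction is of order Eξ/t, plus Monte Carlo noise); increasing t moves it towards 2. Inspecting the simulated pairs that realize the event also shows that one of the two summands almost always exceeds t on its own — the 'one large jump' principle described above.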
Section 1.5 contains a proof of the theorem on the convergence in distribution of the normed sums of i.i.d. r.v.'s to a stable law. A special feature of the proof is the use of an explicit form of the scaling sequence, which enables one to characterize the moderate deviation and large deviation zones using the same terms as those to be employed in Chapters 2 and 3. In § 1.6 we present functional limit theorems (the invariance principle) and the law of the iterated logarithm.

Chapters 2–5 are similar in their contents and structure to each other, and it is appropriate to review them as a whole. They are devoted to studying large deviation probabilities for random walks whose jump distributions belong to one of the following four distribution classes:

(1) regularly varying distributions (or distributions admitting regularly varying majorants or minorants) having no finite means (Chapter 2);
(2) distributions of the above type having finite means but infinite variances (Chapter 3);
(The first part of the section number stands for the chapter number.)
(3) distributions of the above type having finite variances (Chapter 4);
(4) semiexponential distributions (Chapter 5).

The first sections in all these chapters are devoted to bounding from above the probabilities P(S̄n ≥ x) and P(S̄n(a) ≥ x). The main approach to obtaining these bounds is the same in all four chapters (it is presented in § 2.1), but the results are different depending on the conditions imposed on the majorants of the distribution tails of the r.v. ξ. The same can be said about the lower bounds. The derived two-sided bounds prove to be sharp enough to obtain, under the conditions of Chapters 2–4 in the case when the tails P(ξ ≥ t) = V(t) are regularly varying, the asymptotics

P(Sn ≥ x) ∼ nV(x),  P(S̄n ≥ x) ∼ nV(x)
either in the widest possible zones of deviations x or in zones quite close to the latter. Each of these four chapters contains sections where, using more precise approaches, we establish the asymptotics of the probabilities P(S̄n ≥ x), P(S̄n(a) ≥ x), P(Sn ≥ x) and, moreover, asymptotic expansions for them as well (under additional conditions on the tails F₊(t)). An exception is Chapter 2, as under its assumptions the problems of deriving asymptotic expansions and studying the asymptotics of P(S̄n(a) ≥ x) are not meaningful. Furthermore,

• Chapter 2 contains a finiteness criterion for the supremum of cumulative sums (§ 2.5) and the asymptotics of the renewal function (§ 2.6).
• In Chapters 3 and 4 we obtain integro-local theorems on large deviations of Sn (§§ 3.7, 4.7).
• In Chapter 3 we find conditions for uniform relative convergence to a stable law on the entire axis and establish analogues of the law of the iterated logarithm in the case Eξ² = ∞ (§ 3.9).
• In Chapters 3 and 4 we find the asymptotics of the probability P(max_{k≤n}(Sk − g(k)) ≥ 0) of the crossing of an arbitrary boundary {g(k); k ≥ 1} prior to the time n by the random walk (§§ 3.6, 4.6).
• In Chapter 4 we consider the possibility of extending the results on the asymptotic behaviour of P(Sn ≥ x) and P(S̄n ≥ x) to wider classes of jump distributions (§ 4.8), and we describe the limiting behaviour of the trajectory {Sk; k = 1, . . . , n} given that Sn ≥ x or S̄n ≥ x (§ 4.9).

Chapters 2–5 are devoted to 'heavy-tailed' random walks, i.e. to situations when the jump distributions vanish at infinity more slowly than an exponential function, and indeed this is the main focus of the book. In Chapter 6, however, we present the main approach to studying the large
deviation probabilities for 'light-tailed' random walks – this is the case when the jump distributions vanish at infinity exponentially fast or even faster (i.e. they satisfy Cramér's condition (0.0.9) for some λ > 0). This is done for the sake of completeness of exposition, and also to ascertain that, in a number of cases, studying 'light-tailed' random walks can be reduced to the respective problems for the heavy-tailed distributions considered in Chapters 2–5. In § 6.1 we describe the main method for studying large deviation probabilities when Cramér's condition holds (the method is based on the Cramér transform and the integro-local Gnedenko–Stone–Shepp theorems) and also ascertain the bounds of its applicability. In §§ 6.2 and 6.3 we study integro-local theorems for sums of r.v.'s with light tails of the form

P(ξ ≥ t) = e^{−λ₊t} V(t),
0 < λ+ < ∞,
where V(t) is a function regularly varying as t → ∞. In a number of cases the methods presented in § 6.1 do not work for such distributions, but one can achieve success using the results of Chapters 3 and 4. In § 6.2 we consider the case when the index of the function V(t) belongs to the interval (−3, −1) (in this case one uses the results of Chapter 3); in § 6.3 we take the index of V(t) to be less than −3 (in this case one needs the results of Chapter 4). In § 6.4 we consider large deviations in more general boundary problems. However, here the exposition has to be restricted to several special types of boundary {g(k)}, as the nature of the boundary-crossing probabilities turns out to be quite complicated and sensitive to the particular form of the boundary.

Chapters 7–16 are devoted to some more specialized aspects of the theory of random walks and also to some generalizations of the results of Chapters 2–5 and their extensions to continuous-time processes. In Chapter 7 we continue the study of functions of subexponential distributions that we began in § 1.4. Now, for narrower classes of regularly varying and semiexponential distributions, we obtain wider conditions enabling one to find the desired asymptotics (§§ 7.1 and 7.2). In § 7.3 we apply the obtained results to study the asymptotics of the distributions of stopped sums and their maxima, i.e. the asymptotics of

P(Sτ ≥ x)  and  P(S̄τ ≥ x),
where the r.v. τ is either independent of {Sk} or is a stopping time for that sequence (§§ 7.3 and 7.4). In § 7.5 we discuss an alternative approach (to that presented in Chapters 3–5) to studying the asymptotics of P(S̄∞ ≥ x) in the case of subexponential distributions of the summands ξk with Eξk < 0. The approach is based on factorization identities and the results of § 1.3. Here we also obtain integro-local theorems and asymptotic expansions in the integral theorem under minimal conditions (in Chapters 3–5 the conditions were excessive, as there we also included the case n < ∞).
Chapter 8 is devoted to a systematic study of the asymptotics of the first hitting time distribution, i.e. of the probabilities P(η₊(x) ≥ n) as n → ∞, where η₊(x) := min{k ≥ 1 : Sk ≥ x}, and also of similar problems for η₋(x) := min{k ≥ 1 : Sk ≤ −x}. We classify the results according to the following three main criteria: (1) the value of x, distinguishing between the three cases x = 0, x > 0 fixed and x → ∞; (2) the drift direction (the value of the expectation Eξ, if it exists); (3) the properties of the distribution of ξ. In § 8.1 we consider the case of a fixed level x (usually x = 0) and different combinations of criteria (2) and (3). In § 8.2 we study the case when x → ∞ together with n, again with different combinations of criteria (2) and (3).

In Chapter 9 the results of Chapters 3 and 4 are extended to the multivariate case. Our attention is given mainly to integro-local theorems, i.e. to studying the asymptotics P(Sn ∈ Δ[x)), where Sn = Σ_{j=1}^n ξj is the sum of d-dimensional i.i.d. random vectors and

Δ[x) := {y ∈ R^d : xi ≤ yi < xi + Δ, i = 1, . . . , d}

is a cube with edge length Δ and a vertex at the point x = (x1, . . . , xd). The reason is that in the multivariate case the language and approach of integro-local theorems prove to be the most natural. Integral theorems are more difficult to prove directly, and they can easily be derived from the corresponding integro-local theorems. Another difficulty arising when one studies the probabilities of large deviations of the sums Sn of 'heavy-tailed' random vectors ξk consists in defining and classifying the very concept of a heavy-tailed multivariate distribution. In § 9.1, examples are given in which the main contribution to the probability for Sn to hit a remote cube Δ[x) comes not from trajectories with one large jump (as in the univariate case) but from those with exactly k large jumps, where k can be any integer between 1 and d > 1.
In § 9.2 we concentrate on the 'most regular' jump distributions and establish integro-local theorems for them, both when E|ξ|² = ∞ and when E|ξ|² < ∞; § 9.3 is devoted to integral theorems, which can be obtained using integro-local theorems as well as in a direct way. In the latter case, one has to impose conditions on the asymptotics of the probability that the remote set under consideration will be reached by one large jump.

We then return to univariate random walks. In Chapter 10 such walks are considered as processes, and we study there the probability of large deviations of such processes in their trajectory spaces. In other words, we study the asymptotics of

P(Sn(·) ∈ xA),
where Sn(t) = S_⌊nt⌋, t ∈ [0, 1], and A is a measurable set in the space D(0, 1) of functions without discontinuities of the second type (it is supposed that the set A is bounded away from zero). Under certain conditions on the structure of the set A the desired asymptotics are found for regularly varying jump distributions, both in the case of 'one-sided' sets (§ 10.2) and in the general case (§ 10.3). Here we use the results of Chapter 3 when Eξ² = ∞ and the results of Chapter 4 when Eξ² < ∞.

Chapters 11–14 are devoted to extending the results of Chapters 3 and 4 to random walks of a more general nature, when the jumps ξi are independent but not identically distributed. In Chapter 11 we consider the simplest problem of this kind, the large deviation probabilities of sums of r.v.'s of two different types. In § 11.1 we discuss a motivation for the problem and give examples. As before, we let Sn := Σ_{i=1}^n ξi and, moreover, Tm := Σ_{i=1}^m τi, where the r.v.'s τi are independent of each other and also of {ξk} and are identically distributed. We are interested in the asymptotics of the probabilities P(m, n, x) := P(Tm + Sn ≥ x) as x → ∞. In § 11.2 we study the asymptotics of P(m, n, x) for the case of regularly varying distributions and in § 11.3 for the case of semiexponential distributions.

In Chapters 12 and 13 we consider random walks with arbitrary non-identically distributed jumps ξj in the triangular array scheme, both in the case of an infinite second moment (Chapter 12 contains extensions of the results of Chapter 3) and in the case of a finite second moment (Chapter 13 is a generalization of Chapter 4). The order of exposition in Chapters 12 and 13 is roughly the same as in Chapters 3 and 4. In §§ 12.1 and 13.1 we obtain upper and lower bounds for P(S̄n ≥ x) and P(Sn ≥ x) respectively. The asymptotics of the probability of the crossing of an arbitrary remote boundary are found in §§ 12.2, 12.3 and 13.2.
Here we also obtain bounds, uniform in a, for the probabilities P(S̄n(a) ≥ x) and the distributions of the first crossing time of the level x → ∞. In § 12.4 we establish theorems on the convergence of random walks to random processes. On the basis of these results, in § 12.5 we study transient phenomena in the problem on the asymptotics of the distribution of S̄n(a) as n → ∞, a → 0. Similar results for random walks with jumps ξi having a finite second moment are established in § 13.3.

The results of Chapters 12 and 13 enable us in Chapter 14 to extend the main assertions of these chapters to the case of dependent jumps. In § 14.1 we give a description of the classes of random walks that admit an asymptotic analysis in the spirit of Chapters 12 and 13. These classes include:

(1) martingales with a common majorant of the jump distributions;
(2) martingales defined on denumerable Markov chains;
(3) martingales defined on arbitrary Markov chains;
(4) arbitrary random walks defined on arbitrary Markov chains.
For arbitrary Markov chains one can obtain essentially the same results as for denumerable ones, but the exposition becomes much more technical. For this reason, and also because in case (1) one can obtain (and in a rather simple way) only bounds for the distributions of interest, we will restrict ourselves in Chapter 14 to considering martingales and arbitrary random walks defined on denumerable Markov chains. In § 14.2 we obtain upper and lower bounds for, and also the asymptotics of, the probabilities P(Sn ≥ x) and P(S̄n ≥ x) for such walks in the case where the jumps of the walk can have infinite variance. The case of finite variance is considered in § 14.3. In § 14.4 we study arbitrary random walks defined on denumerable Markov chains.

Chapters 15 and 16 are devoted to extending the results of Chapters 2–5 to continuous-time processes. Chapter 15 contains such extensions to processes {S(t)} with independent increments. Two approaches are considered. The first is presented in § 15.2. It is based on using the closeness of the trajectories of the processes with independent increments to random polygons with vertices at the points (kΔ, S(kΔ)) for a small fixed Δ, where the S(kΔ) are clearly the sums of i.i.d. r.v.'s that we studied in Chapters 2–5. The second approach is presented in § 15.3. It consists of applying the same philosophy, based on singling out one large jump (now in the process {S(t)}), as that employed in Chapters 2–5. Using this approach, we can extend to the processes {S(t)} all the results of Chapters 3 and 4, including those for asymptotic expansions. The first approach (that of § 15.2) only allows one to extend the first-order asymptotics results.

Chapter 16 is devoted to the generalized renewal processes

S(t) := S_{ν(t)} + qt,  t ≥ 0,

where q is a linear drift coefficient and

ν(t) := Σ_{k=1}^∞ 1(tk ≤ t) = min{k ≥ 1 : tk > t} − 1,
tk := τ1 + · · · + τk, and the r.v.'s τj are independent of each other and of {ξk} and are identically distributed with finite mean aτ := Eτ1. It is assumed that the distribution tails P(ξ ≥ t) = V(t) of the r.v.'s ξ (and in some cases also the distribution tails P(τ ≥ t) = Vτ(t) of the r.v.'s τ) are regularly varying functions or are dominated by such. In § 16.2 we study the probabilities of large deviations of the r.v.'s S(T) and S̄(T) := max_{t≤T} S(t) under the assumption that the mean trend in the process is equal to zero: Eξ + qEτ = 0. Here substantial contributions to the probabilities P(S(T) ≥ x) and P(S̄(T) ≥ x) can come not only from large jumps ξj but also from large renewal intervals τj (especially when q > 0). Accordingly, in some deviation zones, to the natural (and expected) quantity H(T)V(x) (where H(T) := Eν(T)) giving the asymptotics of P(S(T) ≥ x) and P(S̄(T) ≥ x), we
may need to add, say, values of the form aτ^{−1}(T − x/q)Vτ(x/q), which can dominate when Vτ(t) ≫ V(t). The asymptotics of the probabilities P(S(T) ≥ x) and P(S̄(T) ≥ x) are studied in § 16.2 in a rather exhaustive way: for values of q of both signs, for different relations between V(t) and Vτ(t) or between x and T, and for all the large deviation zones. In § 16.3 we obtain asymptotic expansions for P(S(T) ≥ x) under additional assumptions on the smoothness of the tails V(t) and Vτ(t). The asymptotics of the probability P(sup_{t≤T}(S(t) − g(t)) ≥ 0) of the crossing of an arbitrary remote boundary g(t) by the process {S(t)} are studied in § 16.4. The case of a linear boundary g(t) is considered in greater detail in § 16.5.

Let us briefly list the main special features of the present book.

1. The traditional range of problems on limit theorems for the sums Sn is considerably extended in the book: we include the so-called boundary problems relating to the crossing of given boundaries by the trajectory of the random walk. In particular, this applies to problems, of widespread application, on the probabilities of large deviations of the maxima S̄n = max_{k≤n} Sk of sums of random variables.
2. The book is the first monograph in which the study of the above-mentioned wide range of problems is carried out in a comprehensive and systematic way and, as a rule, under minimal adequate conditions. It should fill a number of previously existing gaps.
3. In the book, for the first time in a monograph, asymptotic expansions (the asymptotics of second and higher orders) under rather general conditions, close to minimal, are studied for the above-mentioned range of problems. (Asymptotic expansions for P(Sn ≥ x) were also studied in [276], but for a narrow class of distributions.)
4. Along with classical random walks, a comprehensive asymptotic analysis is carried out for generalized renewal processes.
5.
For the first time in a monograph, multivariate large deviation problems for jump distributions regularly varying at infinity are touched upon.
6. For the first time, complete results on large deviations for random walks with non-identically distributed jumps in the triangular array scheme are obtained. Transient phenomena are studied for such walks with jumps having an infinite variance.

One may also note that the following are included:

• integro-local theorems for the sums Sn;
• a study of the structure of the classes of semiexponential and subexponential distributions;
• analogues of the law of the iterated logarithm for random walks with infinite jump variance;
• a derivation of the asymptotics of P(Sn ≥ x) and P(S̄n ≥ x) for random walks with dependent jumps, defined on Markov chains.

The authors are grateful to the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, the Russian Foundation for Basic Research (grant 05-01-00810) and the international association INTAS (grant 03-51-5018) for their much appreciated support during the writing of the book. The authors are also grateful to S.G. Foss for discussions of some aspects of the book. The writing of this monograph would have been a much harder task were it not for the constant technical support of T.V. Belyaeva, to whom the authors express their sincere gratitude.
For the reader's attention. We use := for 'is defined by', 'iff' for 'if and only if', and □ for the end of a proof. Parts of the exposition that are, from our viewpoint, of secondary interest are typeset in a small font.

A.A. Borovkov, K.A. Borovkov
1 Preliminaries
1.1 Regularly varying functions and their main properties

Regularly (and, in particular, slowly) varying functions play an important role in the subsequent exposition. In this section we will present those basic properties of the above-mentioned functions that will be used in what follows. We will often assume that the domain of the functions under consideration includes the right half-axis (0, ∞), where the functions are measurable and locally integrable.
1.1.1 General properties

Definition 1.1.1. A positive (Lebesgue) measurable function L(t) is said to be a slowly varying function (s.v.f.) as t → ∞ if, for any fixed v > 0,

L(vt)/L(t) → 1  as  t → ∞.    (1.1.1)
A function V(t) is said to be a regularly varying function (r.v.f.) of index −α ∈ R as t → ∞ if it can be represented as

V(t) = t^{−α} L(t),    (1.1.2)
where L(t) is an s.v.f. as t → ∞. The definition of an s.v.f. (r.v.f.) as t ↓ 0 is quite similar. In what follows, the term s.v.f. (r.v.f.) will always refer, unless otherwise stipulated, to a function which is slowly (regularly) varying at infinity. One can easily see that, similarly to (1.1.1), the convergence

V(vt)/V(t) → v^{−α}  as  t → ∞    (1.1.3)
for any fixed v > 0 is a characteristic property of regularly varying functions. Thus, an s.v.f. is an r.v.f. of index zero. Note that r.v.f.'s admit a definition that, at first glance, appears to be more general
than (1.1.3). One can define them as measurable functions such that, for all v > 0 from a set of positive Lebesgue measure, there exists the limit

lim_{t→∞} V(vt)/V(t) =: g(v).    (1.1.4)
In this case, one necessarily has g(v) ≡ v^{−α} for some α ∈ R and, moreover, (1.1.3) holds for all v > 0 (see e.g. p. 17 of [32]). The fact that the power function appears in the limit becomes natural from the obvious relation

g(v1v2) = lim_{t→∞} [V(v1v2t)/V(v2t)] × [V(v2t)/V(t)] = g(v1) g(v2),

which is equivalent to the Cauchy functional equation for h(u) := ln g(e^u): h(u1 + u2) = h(u1) + h(u2). It is well known that, in 'non-pathological' cases, this equation can only have a linear solution of the form h(u) = cu, which means that g(v) = v^c.
The following functions are typical representatives of the class of s.v.f.'s: the logarithmic function and its powers ln^γ t, γ ∈ R, linear combinations thereof, multiple logarithms, functions with the property that L(t) → L = const ≠ 0 as t → ∞, etc. An example of an oscillating bounded s.v.f. is provided by

L0(t) = 2 + sin(ln ln t),
t > 1.
(1.1.5)
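A quick numerical sketch (ours, not the book's) of Definition 1.1.1: for the slowly varying functions ln t and L0 from (1.1.5), the ratio L(vt)/L(t) approaches 1 as t grows, whereas for the power function t^{0.1}, which is regularly varying of non-zero index and hence not slowly varying, the ratio equals v^{0.1} for every t:

```python
import math

def L_log(t):
    return math.log(t)

def L_osc(t):
    # the oscillating bounded s.v.f. L0 of (1.1.5)
    return 2.0 + math.sin(math.log(math.log(t)))

def power(t):
    # t**0.1 is an r.v.f. of index 0.1, hence not an s.v.f.
    return t ** 0.1

v, t = 2.0, 1e30
r_log = L_log(v * t) / L_log(t)   # -> 1 as t -> infinity
r_osc = L_osc(v * t) / L_osc(t)   # -> 1 as t -> infinity
r_pow = power(v * t) / power(t)   # = 2**0.1 for every t, never close to 1
print(r_log, r_osc, r_pow)
```

Note how slowly the oscillating example approaches its limit: this is why the uniformity statements below (Theorems 1.1.2 and 1.1.4) require some care.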
We will need the following two fundamental properties of s.v.f.'s.

Theorem 1.1.2 (Uniform convergence theorem). If L(t) is an s.v.f. as t → ∞ then the convergence (1.1.1) holds uniformly in v on any interval [v1, v2] with 0 < v1 < v2 < ∞.

It follows from the assertion of the theorem that the uniform convergence (1.1.1) on an interval [1/M, M] will also take place in the case when, as t → ∞, the quantity M = M(t) increases to infinity slowly enough.

Theorem 1.1.3 (Integral representation). A positive function L(t) is an s.v.f. as t → ∞ iff for some t0 > 0 one has

L(t) = c(t) exp{ ∫_{t0}^t (ε(u)/u) du },  t ≥ t0,    (1.1.6)
where c(t) and ε(t) are measurable functions, with c(t) → c ∈ (0, ∞) and ε(t) → 0 as t → ∞. For example, for the function L(t) = ln t the representation (1.1.6) holds with c(t) = 1, t0 = e and ε(t) = (ln t)^{−1}.

Proof of Theorem 1.1.2. Put h(x) := ln L(e^x).
(1.1.7)
Then the property (1.1.1) of s.v.f.’s is equivalent to the following: for any u ∈ R, one has the convergence h(x + u) − h(x) → 0
(1.1.8)
as x → ∞. To prove the theorem, we have to show that this convergence is uniform in u ∈ [u1, u2] for any fixed ui ∈ R. To do this, it suffices to verify that the convergence (1.1.8) is uniform on the interval [0, 1]. Indeed, from the obvious inequality

|h(x + u′ + u″) − h(x)| ≤ |h(x + u′ + u″) − h(x + u′)| + |h(x + u′) − h(x)|    (1.1.9)

we have

|h(x + u) − h(x)| ≤ (u2 − u1 + 1) sup_{y∈[0,1]} |h(x + y) − h(x)|,  u ∈ [u1, u2].
For a given ε ∈ (0, 1) and any x > 0 put

Ix := [x, x + 2],  Ix* := {u ∈ Ix : |h(u) − h(x)| ≥ ε/2},  I*0,x := {u ∈ I0 : |h(x + u) − h(x)| ≥ ε/2}.

It is clear that the sets Ix* and I*0,x are measurable and differ from each other only by a translation by x, so that μ(Ix*) = μ(I*0,x), where μ is the Lebesgue measure. By virtue of (1.1.8), the indicator function of the set I*0,x converges to zero at any point u ∈ I0 as x → ∞. Therefore, by the dominated convergence theorem, the integral of this function, which is equal to μ(I*0,x), tends to 0, so that for large enough x0 one has μ(Ix*) < ε/2 when x ≥ x0. Further, for s ∈ [0, 1] the interval Ix ∩ Ix+s = [x + s, x + 2] has length 2 − s ≥ 1, so that when x ≥ x0 the set

(Ix ∩ Ix+s) \ (Ix* ∪ I*x+s)

has measure ≥ 1 − ε > 0 and is therefore non-empty. Let y be a point from this set. Then

|h(x + s) − h(x)| ≤ |h(x + s) − h(y)| + |h(y) − h(x)| < ε/2 + ε/2 = ε

for x ≥ x0, which proves the required uniformity on [0, 1] and hence on any other fixed interval as well. The theorem is proved.
where, as t → ∞, one has c(vt)/c(t) → c/c = 1 and vt vt ε(u) du = o(ln v) = o(1). du = o u u t
t
(1.1.11)
Now we prove that any s.v.f. admits the representation (1.1.6). In terms of the function (1.1.7), the required representation will be equivalent (after the change of variable t = e^x) to the relation

h(x) = d(x) + ∫_{x0}^x δ(y) dy,    (1.1.12)
where d(x) = ln c(e^x) → d ∈ R and δ(x) = ε(e^x) → 0 as x → ∞, and x0 = ln t0. Therefore it suffices to establish the representation (1.1.12) for the function h(x). First of all note that h(x) (like L(t)) is a locally bounded function. Indeed, by Theorem 1.1.2, for a large enough x0 and all x ≥ x0,

sup_{0≤y≤1} |h(x + y) − h(x)| < 1.
Hence for any x > x0 we have, by virtue of (1.1.9), the bound |h(x) − h(x0)| ≤ x − x0 + 1. Further, the local boundedness and measurability of the function h mean that it is locally integrable on [x0, ∞) and therefore can be represented for x ≥ x0 as

h(x) = ∫_{x0}^{x0+1} h(y) dy + ∫_0^1 (h(x) − h(x + y)) dy + ∫_{x0}^x (h(y + 1) − h(y)) dy.    (1.1.13)
The first integral in (1.1.13) is a constant that we will denote by d. The second tends to zero as x → ∞ owing to Theorem 1.1.2, so that

d(x) := d + ∫_0^1 (h(x) − h(x + y)) dy → d,  x → ∞.
As to the third integral in (1.1.13), by the definition of an s.v.f. for its integrand one has δ(y) := h(y + 1) − h(y) → 0 as y → ∞, which completes the proof of the representation (1.1.12).
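The representation just proved can also be checked numerically. In the sketch below (ours, not the book's) we take L(t) = ln t with c(t) = 1, t0 = e and ε(u) = 1/ln u, as in the example following Theorem 1.1.3, and evaluate the integral in (1.1.6) by the trapezoidal rule; after the change of variable u = e^x the integral becomes ∫_1^{ln t} dx/x:

```python
import math

def trapezoid(f, a, b, n=100_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

t = 1e6
# \int_{t0}^{t} eps(u)/u du with t0 = e and eps(u) = 1/ln u; substitute u = e^x
integral = trapezoid(lambda x: 1.0 / x, 1.0, math.log(t))
reconstructed = 1.0 * math.exp(integral)   # c(t) * exp(...) with c(t) = 1
print(reconstructed, math.log(t))          # the two values agree
```

The agreement is exact up to quadrature error, since ∫_1^{ln t} dx/x = ln ln t and exp(ln ln t) = ln t.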
1.1.2 The main asymptotic properties

In this subsection we will obtain several corollaries of Theorems 1.1.2 and 1.1.3.

Theorem 1.1.4. (i) If L1 and L2 are s.v.f.'s then L1 + L2, L1L2, L1^b and L(t) := L1(at + b), where a ≥ 0 and b ∈ R, are also s.v.f.'s.
(ii) If L is an s.v.f. then for any δ > 0 there exists a tδ > 0 such that

t^{−δ} ≤ L(t) ≤ t^δ  for all  t ≥ tδ.    (1.1.14)
In other words, L(t) = t^{o(1)} as t → ∞.

(iii) If L is an s.v.f. then for any δ > 0 and v0 > 1 there exists a tδ > 0 such that, for all v ≥ v0 and t ≥ tδ,

v^{−δ} ≤ L(vt)/L(t) ≤ v^δ.    (1.1.15)
(iv) (Karamata's theorem) If α > 1 then, for the r.v.f. V in (1.1.2), one has

V^I(t) := ∫_t^∞ V(u) du ∼ tV(t)/(α − 1)  as  t → ∞.    (1.1.16)

If α < 1 then

V_I(t) := ∫_0^t V(u) du ∼ tV(t)/(1 − α)  as  t → ∞.    (1.1.17)

If α = 1 then one has the equalities

V_I(t) = tV(t)L1(t)    (1.1.18)

and

V^I(t) = tV(t)L2(t)  if  ∫_0^∞ V(u) du < ∞,    (1.1.19)

where the Li(t) → ∞ as t → ∞, i = 1, 2, are s.v.f.'s.

(v) For an r.v.f. V of index −α < 0 put

σ(t) := V^{(−1)}(1/t) = inf{u : V(u) < 1/t}.

Then σ(t) is an r.v.f. of index 1/α:

σ(t) = t^{1/α} L1(t),    (1.1.20)

where L1 is an s.v.f. If the function L has the property

L(tL^{1/α}(t)) ∼ L(t)  as  t → ∞,    (1.1.21)

then

L1(t) ∼ L^{1/α}(t^{1/α}).    (1.1.22)
Similar assertions hold for functions that are slowly or regularly varying as t decreases to zero. Note that from Theorem 1.1.2 and the inequality (1.1.15) we also obtain the
6
Preliminaries
following property of s.v.f.’s: for any δ > 0 there exists a tδ > 0 such that for all t and v satisfying the inequalities t tδ , vt tδ one has (1 − δ) min{v δ , v −δ }
L(vt) (1 + δ) max{v δ , v −δ }. L(t)
(1.1.23)
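Before turning to the proof, the bounds (1.1.14) and (1.1.15) can be sanity-checked numerically for a concrete s.v.f. In the sketch below (ours, not the book's), L(t) = ln t, and the threshold t0 and exponent δ are illustrative choices standing in for tδ.

```python
import math

# A concrete s.v.f.: L(t) = ln t.  The thresholds below are illustrative
# choices playing the role of t_delta; they are not optimal.
L = lambda t: math.log(t)
delta = 0.2
t0 = 1e6

# (1.1.14): t^{-delta} <= L(t) <= t^{delta} for all t >= t0
for t in [1e6, 1e8, 1e10]:
    assert t ** (-delta) <= L(t) <= t ** delta

# (1.1.15): v^{-delta} <= L(vt)/L(t) <= v^{delta} for v >= v0 > 1 and t >= t0
v0 = 2.0
for v in [v0, 10.0, 1e4]:
    assert v ** (-delta) <= L(v * t0) / L(t0) <= v ** delta
```

Note that for smaller δ the threshold tδ grows very quickly (for δ = 0.1 one already needs t beyond 10^15 here), which is why the bounds only hold "for all sufficiently large t".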
Proof. Assertion (i) is evident (just observe that, to prove the last part of (i), one needs Theorem 1.1.2).

(ii) This property follows immediately from the representation (1.1.6) and the bound

    ∫_{t0}^{t} (ε(u)/u) du = ∫_{t0}^{ln t} (ε(u)/u) du + ∫_{ln t}^{t} (ε(u)/u) du = O(∫_{t0}^{ln t} du/u) + o(∫_{ln t}^{t} du/u) = o(ln t)

as t → ∞.

(iii) To prove this property, we notice that, in relation to the expression on the right-hand side of (1.1.10), for any fixed δ > 0 and v0 > 1 and all sufficiently large t one has

    v^{−δ/2} ≤ v0^{−δ/2} ≤ c(vt)/c(t) ≤ v0^{δ/2} ≤ v^{δ/2},    v ≥ v0,

and

    |∫_t^{vt} (ε(u)/u) du| ≤ (δ/2) ln v
(by virtue of (1.1.11)). From this (1.1.15) follows.

(iv) Owing to the uniform convergence theorem, one can choose an M = M(t) → ∞ as t → ∞ such that the convergence in (1.1.1) will be uniform in v ∈ [1, M]. Changing variables by putting u = vt, we obtain

    V^I(t) = t^{−α+1} L(t) ∫_1^∞ v^{−α} (L(vt)/L(t)) dv = t^{−α+1} L(t) [∫_1^M + ∫_M^∞].    (1.1.24)

If α > 1 then, as t → ∞,

    ∫_1^M v^{−α} (L(vt)/L(t)) dv ∼ ∫_1^M v^{−α} dv → 1/(α − 1),

whereas due to property (iii) one has, for δ = (α − 1)/2, the relation

    ∫_M^∞ v^{−α} (L(vt)/L(t)) dv ≤ ∫_M^∞ v^{−α+δ} dv = ∫_M^∞ v^{−(α+1)/2} dv → 0.

Together these two relations mean that

    V^I(t) ∼ t^{−α+1} L(t)/(α − 1) = tV(t)/(α − 1).
The case α < 1 is considered quite similarly, but taking into account that the convergence in (1.1.1) is uniform in v ∈ [1/M, 1] and also the equality

    ∫_0^1 v^{−α} dv = 1/(1 − α).

If α = 1 then the first integral on the right-hand side of (1.1.24) is

    ∫_1^M v^{−1} (L(vt)/L(t)) dv ∼ ∫_1^M v^{−1} dv = ln M,

so that if

    ∫_0^∞ V(u) du < ∞    (1.1.25)

then

    V^I(t) ≥ (1 + o(1)) L(t) ln M ≫ L(t)    (1.1.26)

and therefore

    L2(t) := V^I(t)/(tV(t)) = V^I(t)/L(t) → ∞    as t → ∞.

Now note that, by virtue of property (i), L2 is an s.v.f. if the function V^I(t) is such. But, for v > 1,

    V^I(t) = V^I(vt) + ∫_t^{vt} V(u) du,

where the integral clearly does not exceed (v − 1)L(t)(1 + o(1)). Owing to (1.1.26) this implies that V^I(vt)/V^I(t) → 1 as t → ∞, which completes the proof of (1.1.19).

That (1.1.18) is true in the subcase when (1.1.25) holds is almost obvious, since

    V_I(t) = tV(t) L1(t) = L(t) L1(t) = ∫_0^t V(u) du → ∫_0^∞ V(u) du,

so that, firstly, L1 is an s.v.f. by virtue of property (i) and, secondly, L1(t) → ∞ since L(t) → 0 owing to (1.1.26).
Now let α = 1 and ∫_0^∞ V(u) du = ∞. Then, if M = M(t) → ∞ sufficiently slowly, one obtains by the uniform convergence theorem a result similar to (1.1.26) (see also (1.1.24)):

    V_I(t) = ∫_0^1 v^{−1} L(vt) dv ≥ ∫_{1/M}^1 v^{−1} L(vt) dv ∼ L(t) ln M ≫ L(t).

Therefore L1(t) := V_I(t)/L(t) → ∞ as t → ∞. Further, also by an argument similar to the previous exposition, for v ∈ (0, 1) one has

    V_I(t) = V_I(vt) + ∫_{vt}^t V(u) du,

where the last integral does not exceed (1 − v)L(t)(1 + o(1)) ≪ V_I(t), so that V_I(t) (and also, by property (i), L1(t)) is an s.v.f. This completes the proof of property (iv).

(v) Clearly, by the uniform convergence theorem the quantity σ = σ(t) is a solution to the 'asymptotic equation'

    V(σ) ∼ 1/t    as t → ∞    (1.1.27)

(where the symbol ∼ can be replaced by the equality sign provided that the function V is continuous and monotonically decreasing). Representing σ in the form σ = t^{1/α} L1, L1 = L1(t), we obtain an equivalent relation

    L1^{−α} L(t^{1/α} L1) ∼ 1,    (1.1.28)

and it is obvious that

    t^{1/α} L1 → ∞    as t → ∞.    (1.1.29)

Fix an arbitrary v > 0. Substituting vt for t in (1.1.28) and for brevity putting L2 = L2(t) := L1(vt), we get the relation

    L2^{−α} L(t^{1/α} L2) ∼ 1,    (1.1.30)

since L(v^{1/α} t^{1/α} L2) ∼ L(t^{1/α} L2) owing to (1.1.29) (with L1 replaced by L2).

Now we will show by contradiction that (1.1.28)–(1.1.30) imply that L1 ∼ L2 as t → ∞, where the latter clearly means that L1 is an s.v.f. Indeed, the contrary assumption means that there exist a v0 > 1 and a sequence tn → ∞ such that

    un := L2(tn)/L1(tn) > v0,    n = 1, 2, . . .    (1.1.31)

(the possible alternative case can be considered in exactly the same way). Evidently, tn* := tn^{1/α} L1(tn) → ∞ by virtue of (1.1.29), so that from (1.1.28),
(1.1.29) and property (iii) with δ = α/2 we obtain that

    1 ∼ [L2^{−α}(tn) L(tn^{1/α} L2(tn))] / [L1^{−α}(tn) L(tn^{1/α} L1(tn))] = un^{−α} L(un tn*)/L(tn*) ≤ un^{−α} un^{α/2} = un^{−α/2} < v0^{−α/2} < 1.

We get a contradiction. Note that the above argument proves the uniqueness (up to the asymptotic equivalence) of the solution to the equation (1.1.27).

Finally, the relation (1.1.22) can be proved by directly verifying (1.1.27) for σ := t^{1/α} L^{1/α}(t^{1/α}): using (1.1.21), one has

    V(σ) = σ^{−α} L(σ) = L(t^{1/α} L^{1/α}(t^{1/α})) / (t L(t^{1/α})) ∼ L(t^{1/α}) / (t L(t^{1/α})) = 1/t.

The desired assertion now follows owing to the above-mentioned uniqueness of the solution to the asymptotic equation (1.1.27). Theorem 1.1.4 is proved.
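Karamata's asymptotics (1.1.16) lends itself to a direct numerical check. The following sketch is our illustration (the choice V(u) = u^{−α} ln u with α = 2.5 is arbitrary): it approximates V^I(t) on a log-spaced grid and compares it with tV(t)/(α − 1). For this particular V the exact ratio is 1 + 2/(3 ln t), so the convergence is slow, as is typical for slowly varying corrections.

```python
import math

alpha = 2.5
V = lambda u: u ** (-alpha) * math.log(u)   # an r.v.f. of index -alpha, L(u) = ln u

def tail_integral(t, s_max=40.0, n=20000):
    """Approximate V^I(t) = int_t^inf V(u) du via u = t*e^s (trapezoid in s)."""
    ds = s_max / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        u = t * math.exp(i * ds)
        total += w * V(u) * u * ds   # du = u ds
    return total

def karamata_ratio(t):
    return tail_integral(t) / (t * V(t) / (alpha - 1.0))

r1, r2 = karamata_ratio(1e4), karamata_ratio(1e8)
# the ratio approaches 1 from above, roughly like 1 + 2/(3 ln t)
assert 1.0 < r2 < r1 < 1.1
```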
1.1.3 The asymptotic properties of the transforms of r.v.f.'s (an Abelian type theorem)

For an r.v.f. V(t), its Laplace transform

    ψ(λ) := ∫_0^∞ e^{−λt} V(t) dt < ∞

is defined for any λ > 0. The following asymptotic relations hold true for the transform.

Theorem 1.1.5. Let V(t) be an r.v.f. (i.e. it has the form (1.1.2)).
(i) If α ∈ [0, 1) then

    ψ(λ) ∼ (Γ(1 − α)/λ) V(1/λ)    as λ ↓ 0.    (1.1.32)

(ii) If α = 1 and ∫_0^∞ V(t) dt = ∞ then

    ψ(λ) ∼ V_I(1/λ)    as λ ↓ 0,    (1.1.33)

where V_I(t) = ∫_0^t V(u) du → ∞ is an s.v.f. and, moreover, V_I(t) ≫ L(t) as t → ∞.
(iii) In any case, ψ(λ) ↑ V_I(∞) = ∫_0^∞ V(t) dt ≤ ∞ as λ ↓ 0.

Rewriting the relation (1.1.32), one obtains

    V(t) ∼ ψ(1/t)/(t Γ(1 − α))    as t → ∞.

Relations of this type will also hold true in the case when, instead of the regularity of the function V, we require its monotonicity and then assume that ψ(λ) is an r.v.f. as λ ↓ 0. Assertions of this kind are referred to as Tauberian theorems. In the present book we will not be using such theorems, so we will not dwell on them here.

Proof of Theorem 1.1.5. (i) For any fixed ε > 0 we have

    ψ(λ) = ∫_0^{ε/λ} + ∫_{ε/λ}^∞,    (1.1.34)

where owing to (1.1.17) one has the following relation for the first integral in the case α < 1:

    ∫_0^{ε/λ} e^{−λt} V(t) dt ≤ ∫_0^{ε/λ} V(t) dt ∼ εV(ε/λ)/(λ(1 − α))    as λ ↓ 0.    (1.1.35)
Making the change of variables λt = u, one can rewrite the second integral in (1.1.34) as follows:

    ∫_{ε/λ}^∞ e^{−λt} V(t) dt = (V(1/λ)/λ) ∫_ε^∞ e^{−u} u^{−α} (L(u/λ)/L(1/λ)) du = (V(1/λ)/λ) [∫_ε^2 + ∫_2^∞].    (1.1.36)

Here, as λ ↓ 0, each of the two integrals on the right-hand side converges to the respective integral of e^{−u} u^{−α}: for the former, this follows from the uniform convergence theorem (the convergence L(u/λ)/L(1/λ) → 1 holds uniformly in u ∈ [ε, 2]), whereas for the latter it is a consequence of (1.1.1) and the dominated convergence theorem (since, owing to Theorem 1.1.4(iii), for all sufficiently small λ one has L(u/λ)/L(1/λ) < u for u ≥ 2). Therefore

    ∫_{ε/λ}^∞ ∼ (V(1/λ)/λ) ∫_ε^∞ u^{−α} e^{−u} du.    (1.1.37)

Now observe that, as λ ↓ 0,

    εV(ε/λ)/λ = ε^{1−α} (L(ε/λ)/L(1/λ)) · V(1/λ)/λ,    where    ε^{1−α} L(ε/λ)/L(1/λ) → ε^{1−α}.
Since ε > 0 can be chosen arbitrarily small, this relation together with (1.1.35) and (1.1.37) completes the proof of (1.1.32).

(ii) Integrating by parts and again making the change of variables λt = u, we obtain for α = 1 and M > 1 that

    ψ(λ) = ∫_0^∞ e^{−λt} dV_I(t) = −∫_0^∞ V_I(t) de^{−λt} = ∫_0^∞ V_I(u/λ) e^{−u} du = ∫_0^{1/M} + ∫_{1/M}^M + ∫_M^∞.    (1.1.38)

By Theorem 1.1.4(iv), V_I(t) ≫ L(t) is an s.v.f. as t → ∞ and hence, for M = M(λ) → ∞ sufficiently slowly as λ ↓ 0, by the uniform convergence theorem the middle integral on the final right-hand side of (1.1.38) is

    ∫_{1/M}^M V_I(u/λ) e^{−u} du = V_I(1/λ) ∫_{1/M}^M (V_I(u/λ)/V_I(1/λ)) e^{−u} du ∼ V_I(1/λ) ∫_{1/M}^M e^{−u} du ∼ V_I(1/λ).

The other two integrals are negligibly small: since V_I(t) is an increasing function, the first does not exceed V_I(1/(λM))/M = o(V_I(1/λ)) and, for the second, by Theorem 1.1.4(iii) we have

    ∫_M^∞ V_I(u/λ) e^{−u} du = V_I(1/λ) ∫_M^∞ (V_I(u/λ)/V_I(1/λ)) e^{−u} du ≤ V_I(1/λ) ∫_M^∞ u e^{−u} du = o(V_I(1/λ)).

Hence (ii) is proved. Assertion (iii) is obvious.
1.1.4 The subexponential property

An important property of r.v.f.'s is that their regularity character is preserved under convolution. We will confine ourselves here to considering the case of probability distributions whose tails are r.v.f.'s.

Let ξ, ξ1, ξ2, . . . be independent identically distributed (i.i.d.) random variables (r.v.'s) with distribution F, and let the right tail of this distribution,

    F+(t) := F([t, ∞)) = P(ξ ≥ t),    t ∈ R,

be an r.v.f. (as t → ∞) of the form (1.1.2): F+(t) ≡ V(t) = t^{−α} L(t). We will denote the class of all such distributions with a fixed α ≥ 0 by R(α), and the class of all distributions with regularly varying right tails by R := ∪_{α≥0} R(α). It turns out that in this case, as x → ∞,

    P(ξ1 + ξ2 ≥ x) = V^{2*}(x) := V ∗ V(x) = −∫_{−∞}^∞ V(x − t) dV(t) ∼ 2V(x) = 2P(ξ ≥ x),    (1.1.39)

and, more generally, for any fixed n > 1,

    V^{n*}(x) := P(ξ1 + · · · + ξn ≥ x) ∼ nV(x)    as x → ∞.    (1.1.40)

In order to prove (1.1.39), introduce the events A = {ξ1 + ξ2 ≥ x} and Bi = {ξi < x/2}, i = 1, 2. Clearly

    P(A) = P(AB1) + P(AB2) − P(AB1B2) + P(A B̄1 B̄2),

where P(AB1B2) = 0, P(A B̄1 B̄2) = P(B̄1 B̄2) = V²(x/2) (here and in what follows B̄ denotes the complement of B) and

    P(AB1) = P(AB2) = ∫_{−∞}^{x/2} V(x − t) F(dt).

Therefore

    V^{2*}(x) = 2 ∫_{−∞}^{x/2} V(x − t) F(dt) + V²(x/2).    (1.1.41)

(We could have obtained the same result by integrating by parts the convolution in (1.1.39).) It remains to observe that V²(x/2) = o(V(x)) and

    ∫_{−∞}^{x/2} V(x − t) F(dt) = ∫_{−∞}^{−M} + ∫_{−M}^{M} + ∫_{M}^{x/2},    (1.1.42)

where, as can be easily seen, for any M = M(x) → ∞ as x → ∞ such that M = o(x) one has

    ∫_{−M}^{M} ∼ V(x)    and    ∫_{−∞}^{−M} + ∫_{M}^{x/2} = o(V(x)),

which proves (1.1.39). One can establish (1.1.40) in a similar way (we will prove this relation for a more general case in Theorem 1.2.12(iii) below).

The same assertions turn out also to be true for the so-called semiexponential distributions, i.e. distributions of which the right tails have the form

    F+(t) = e^{−t^{α} L(t)},    α ∈ (0, 1),    (1.1.43)

where L(t) is an s.v.f. as t → ∞ satisfying a certain smoothness condition (see Definition 1.2.22 below, p. 29). Considerable attention will be paid in this book to extending the asymptotic relation (1.1.40) to the case when n grows together with x, and also to refining this relation for distributions with both regularly varying and semiexponential tails.
Such distributions are the main representatives of the class of so-called subexponential distributions, of which the characteristic property is given by the relation (1.1.39). In the next section we will consider the main properties of these distributions.

1.2 Subexponential distributions

1.2.1 The main properties of subexponential distributions

Before giving any formal definitions, we will briefly describe the relationships between the classes of distributions that we are going to introduce and explain why we pay them different amounts of attention in different contexts.

As we have already noted in the Introduction, the main objects of study in this book are random walks in which the jump distributions have tails that are either regularly varying at infinity or semiexponential. In both cases, such distributions are typical representatives of the class of so-called subexponential distributions. The characteristic property of subexponential distributions is the asymptotic tail additivity of the convolutions of the original distributions, i.e. the fact that for the sums Sn = ξ1 + · · · + ξn of (any) fixed numbers n of i.i.d. r.v.'s ξi one has

    P(Sn ≥ x) ∼ nP(ξ1 ≥ x)    as x → ∞    (1.2.1)

(cf. (1.1.40)). Below we will establish a number of sufficient and necessary conditions that, to some extent, characterize the class S of subexponential distributions. These conditions show that the class S is the result of a considerable extension of the union of the class R of distributions with regularly varying tails and the class Se of distributions with semiexponential tails (see Definition 1.2.22 below, p. 29), which is obtained, roughly speaking, through the addition of functions that 'oscillate' between the functions from R or between those from Se. For such 'oscillating' functions the limit theorems to be obtained in Chapters 2–4 will, as a rule, be valid in much narrower zones than for elements of R or Se (see § 4.8). However, these functions usually do not appear in applications, where the assumption of their presence would look rather artificial. Therefore in what follows we will confine ourselves mostly to considering distributions from the classes R and Se. We will devote less attention to other distributions from the class S. Nevertheless, a number of properties of random walks will be established for the whole broad class S.

The difference between the class S, on the one hand, and the classes R and Se, on the other, is that the above-mentioned tail-additivity property (1.2.1) extends much further (in terms of the number n of summands in the sum of random variables) for distributions from R and Se than for arbitrary distributions from S. More precisely, for the classes R and Se, the relation (1.2.1) remains true also in the case when n grows rather fast with x (for more detail, see § 5.9). This allows one to advance much further in studying the asymptotic properties of the distributions of the r.v.'s Sn, S̄n = max_{k≤n} Sk etc. for the classes R and Se, whereas the zone in which it is natural to do this within the class S is quite narrow (see below). At the same time, in subsequent chapters we will study distributions from the classes R and Se separately, the reason being that the technical aspects and even the formulations of results for these distribution classes are different in many respects.

However, there are problems for which it is natural to conduct studies within the wider class of subexponential distributions. As an example of such a problem, we could mention here that of the asymptotic behaviour of the probability P(S̄ ≥ x) as x → ∞ in the case Eξi < 0, where S̄ = sup_{k≥0} Sk (see § 7.5).

Now we will give a formal definition of the class of subexponential distributions and discuss their basic properties and also the relations between this class and other important distribution classes to be considered in the present book.

Let ζ ∈ R be an r.v. with distribution G: G(B) = P(ζ ∈ B) for any Borel set B (recall that in this case we write ζ ⊂= G). By G(t) we denote the complement distribution function corresponding to the distribution of the r.v. ζ:

    G(t) := P(ζ ≥ t),    t ∈ R.

Similarly, to the distribution Gi there corresponds the function Gi(t), and so on. The function G(t) is also referred to as the (right) tail of the distribution G, but normally this term is used only when t > 0. Note that throughout this book, except in §§ 1.2–1.4, for the right distribution tails we will use notation of the form G+(t) (this should not lead to any confusion).

The convolution of the tails G1(t) and G2(t) is the function

    G1 ∗ G2(t) := −∫ G1(t − y) dG2(y) = ∫ G1(t − y) G2(dy) = P(Z2 ≥ t),

where Z2 = ζ1 + ζ2 is the sum of independent r.v.'s ζi ⊂= Gi, i = 1, 2. Clearly G1 ∗ G2(t) = G2 ∗ G1(t). By G^{2*}(t) = G ∗ G(t) we denote the convolution of the tail G(t) with itself, and put G^{(n+1)*}(t) = G ∗ G^{n*}(t), n ≥ 2. Similarly, the convolution G1 ∗ G2 of two distributions G1 and G2 is the measure G1 ∗ G2(B) = ∫ G1(B − t) G2(dt), and so on.

Definition 1.2.1. A probability distribution G on [0, ∞) belongs to the class S+ of subexponential distributions on the positive half-line if

    G^{2*}(t) ∼ 2G(t)    as t → ∞.    (1.2.2)

A probability distribution G on R belongs to the class S of subexponential distributions if the distribution G+ of the positive part ζ+ := max{0, ζ} of the r.v. ζ ⊂= G belongs to S+. An r.v. is said to be subexponential if its distribution is subexponential.
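For a concrete element of S+, the defining relation (1.2.2) — equivalently (1.1.39) — can be watched in simulation. The sketch below is our own illustration (the Pareto tail G(t) = t^{−α}, t ≥ 1, with α = 1.5, the level t and the sample size are all arbitrary choices): it compares the empirical frequency of {ζ1 + ζ2 ≥ t} with 2G(t), and also illustrates the 'single large jump' effect, in that the sum and the maximum exceed the level almost equally often.

```python
import random

random.seed(1)
alpha, t, n = 1.5, 100.0, 400_000

def pareto():
    # inverse transform: P(zeta >= u) = u^{-alpha} for u >= 1, so G is in S+
    return (1.0 - random.random()) ** (-1.0 / alpha)

count_sum = count_max = 0
for _ in range(n):
    z1, z2 = pareto(), pareto()
    count_sum += z1 + z2 >= t
    count_max += max(z1, z2) >= t

ratio = count_sum / (n * 2.0 * t ** (-alpha))   # estimates G^{2*}(t) / (2 G(t))
assert count_sum >= count_max   # zeta_i >= 1 > 0, so the sum dominates the maximum
assert 0.8 < ratio < 1.4        # ~1 up to Monte Carlo noise and lower-order terms
```

With these parameters the two exceedance counts differ only slightly, in line with the sum–max equivalence discussed in Remark 1.2.14 below.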
Remark 1.2.2. As we clearly always have

    (G+)^{2*}(t) = P(ζ1+ + ζ2+ ≥ t) ≥ P({ζ1+ ≥ t} ∪ {ζ2+ ≥ t}) = P(ζ1 ≥ t) + P(ζ2 ≥ t) − P(ζ1 ≥ t, ζ2 ≥ t) = 2G(t) − G²(t) = 2G+(t)(1 + o(1))

as t → ∞, subexponentiality is equivalent to the following property:

    limsup_{t→∞} (G+)^{2*}(t)/G+(t) ≤ 2.    (1.2.3)

Observe also that, since the relation (1.2.2) only makes sense when G(t) > 0 for all t ∈ R, any subexponential distribution has an unbounded (from the right) support.

Note that in the literature the notation S is normally used for the class of subexponential distributions on [0, ∞), whereas the general case is either ignored or just mentioned in passing as a possible extension obtained as above. This situation can be explained, on the one hand, by the fact that subexponentiality is, by definition, a property of right distribution tails only and, on the other, by a historical tradition: the class of subexponential distributions was originally introduced in the context of the theory of branching processes and then used mostly for modelling insurance claim sizes and service times in queueing theory, where the respective r.v.'s are always positive. In the context of general random walks, such a restrictive assumption is no longer natural. Moreover, such an approach (i.e. one confined to the class of positive r.v.'s) may well cause confusion, especially when the concept of subexponentiality is encountered for the first time. Thus, it is by no means obvious from the definition that the tail additivity property (1.2.2) holds for subexponential r.v.'s that can assume both negative and positive values; this fact requires a non-trivial proof and will be seen to be a consequence of Theorems 1.2.8 and 1.2.4(vi) (see Theorem 1.2.12(iii) below). Moreover, the fact that (1.2.2) holds for the distribution G of a signed r.v. ζ itself (and not for the distribution G+ of the r.v. ζ+) does not imply that G ∈ S (an example illustrating this observation is given in Remark 1.2.10 on p. 19). Therefore, to avoid vagueness and ambiguities, from the very beginning we will be considering the general case of distributions given on the whole real line.

As we have already seen in § 1.1.4, the class R of distributions with regularly varying right tails is a subset of the class S (see (1.1.39)).

Below we will see that in a similar way the so-called semiexponential distributions also belong to the class S (Theorem 1.2.21). One of the main properties of subexponential distributions G is that the corresponding functions G(t) are asymptotically locally constant in the following sense.
Definition 1.2.3. A function G(t) > 0 is said to be (asymptotically) locally constant (l.c.) if, for any fixed v,

    G(t + v)/G(t) → 1    as t → ∞.    (1.2.4)

In the literature, distributions with l.c. tails are often referred to as 'long-tailed distributions'; it would appear, however, that the term 'locally constant' better reflects the meaning of the concept. The class of all distributions G with l.c. tails G(t) will be denoted by L.

For future reference, we will present the main properties of l.c. functions in the form of a separate theorem.

Theorem 1.2.4.
(i) For an l.c. function G(t) the convergence (1.2.4) is uniform in v on any fixed bounded interval.
(ii) A function G(t) is l.c. iff, for some t0 > 0, it admits the following representation:

    G(t) = c(t) exp{∫_{t0}^t ε(u) du},    t ≥ t0,    (1.2.5)

where the functions c(t) and ε(t) are measurable, with c(t) → c ∈ (0, ∞) and ε(t) → 0 as t → ∞.
(iii) If G1(t) and G2(t) are l.c. then G1(t) + G2(t), G1(t)G2(t), G1^b(t) and G(t) := G1(at + b), where a > 0, b ∈ R, are also l.c. functions.
(iv) If G(t) is an l.c. function then, for any ε > 0,

    e^{εt} G(t) → ∞    as t → ∞.

In other words, any l.c. function G(t) can be represented as

    G(t) = e^{−l(t)},    l(t) = o(t)    as t → ∞.    (1.2.6)

(v) Let

    G^I(t) := ∫_t^∞ G(u) du < ∞

and let at least one of the two following conditions be met: (a) G(t) is an l.c. function, (b) G^I(t) is an l.c. function and G(t) is monotone. Then

    G(t) = o(G^I(t))    as t → ∞.    (1.2.7)

(vi) If G ∈ L then G^{2*}(t) ∼ (G+)^{2*}(t) as t → ∞.
Remark 1.2.5. It follows from the assertion of part (i) of the theorem that the uniform convergence in (1.2.4) on the interval [−M, M] will also hold in the case when M = M(t) increases unboundedly (together with t) slowly enough.

Remark 1.2.6. The coinage of the term 'subexponential distribution' was apparently due mostly to the fact that the tail of such a distribution decays, as t → ∞, more slowly than any exponential function e^{−εt} with ε > 0. According to Theorem 1.2.4(iv), this is actually a property of a much wider class L of distributions with l.c. tails (the relationship between the classes S and L is discussed in more detail below, in Theorems 1.2.8, 1.2.17 and 1.2.25).

Proof of Theorem 1.2.4. (i)–(iii) It is evident from Definitions 1.1.1 and 1.2.3 that G(t) is an l.c. function iff L(t) := G(ln t) is an s.v.f. Therefore the assertion of part (i) immediately follows from Theorem 1.1.2 (the uniform convergence theorem for s.v.f.'s), whereas those of parts (ii) and (iii) follow from Theorems 1.1.3 and 1.1.4(i) respectively.

The assertion of part (iv) follows from the integral representation (1.2.5).

(v) If condition (a) is met then, for any M > 0 and all sufficiently large t,

    G^I(t) > ∫_t^{t+M} G(u) du > (1/2) M G(t).

Since M is arbitrary, G^I(t) ≫ G(t). Further, if condition (b) is satisfied then

    G(t)/G^I(t) ≤ (1/G^I(t)) ∫_{t−1}^t G(u) du = G^I(t − 1)/G^I(t) − 1 → 0

as t → ∞.

(vi) Let ζ1 and ζ2 be independent copies of the r.v. ζ and let Z2 := ζ1 + ζ2, Z2^{(+)} := ζ1+ + ζ2+. Clearly ζi ≤ ζi+, so that

    G^{2*}(t) = P(Z2 ≥ t) ≤ P(Z2^{(+)} ≥ t) = (G+)^{2*}(t).    (1.2.8)

However, for any M > 0,

    G^{2*}(t) ≥ P(Z2 ≥ t, ζ1 > 0, ζ2 > 0) + Σ_{i=1,2} P(Z2 ≥ t, ζi ∈ [−M, 0]),

where the first term on the right-hand side equals P(Z2^{(+)} ≥ t, ζ1+ > 0, ζ2+ > 0), and the last two can be estimated as follows: since G ∈ L, for any ε > 0 and for all sufficiently large M and t,

    P(Z2 ≥ t, ζ1 ∈ [−M, 0]) ≥ P(ζ2 ≥ t + M, ζ1 ∈ [−M, 0]) = G(t) (G(t + M)/G(t)) [P(ζ1 ≤ 0) − P(ζ1 < −M)] ≥ (1 − ε) G(t) P(ζ1+ = 0) = (1 − ε) P(Z2^{(+)} ≥ t, ζ1+ = 0).

Thus we obtain for G^{2*}(t) the lower bound

    G^{2*}(t) ≥ P(Z2^{(+)} ≥ t, ζ1+ > 0, ζ2+ > 0) + (1 − ε) Σ_{i=1,2} P(Z2^{(+)} ≥ t, ζi+ = 0) ≥ (1 − ε) P(Z2^{(+)} ≥ t) = (1 − ε)(G+)^{2*}(t).

Thereby part (vi) is proved, as ε is arbitrarily small.

Later we will also need the following modification of the concept of an l.c. function.

Definition 1.2.7. Let ψ(t) ≥ 1 be a fixed non-decreasing function. A function G(t) > 0 is said to be ψ-asymptotically locally constant (ψ-l.c.) if, for any fixed v,

    G(t + vψ(t))/G(t) → 1    as t → ∞.    (1.2.9)

Clearly, any l.c. function will be ψ-l.c. for ψ ≡ 1 and, moreover, it will also be ψ-l.c. for a suitable (i.e. sufficiently slowly growing) function ψ(t) → ∞. In what follows, we will be using only monotone ψ-l.c. functions. For them the convergence in (1.2.9), as well as the convergence in (1.2.4), will obviously be uniform in v on any fixed bounded interval. In this case any ψ-l.c. function is l.c., so that the class of l.c. functions is at its broadest and includes all ψ-l.c. functions.

Observe that any r.v.f. is ψ-l.c. for any function ψ(t) = o(t). For example, the function G(t) = e^{−ct^α}, α ∈ (0, 1) (a Weibull distribution tail) is ψ-l.c. for ψ(t) = o(t^{1−α}). However, the exponential function G(t) = e^{−ct} is not ψ-l.c. for any ψ. Accordingly, any ψ-l.c. function decays (or grows) more slowly than the exponential function (cf. Theorem 1.2.4(iv)).

Now we return to our discussion of subexponential distributions. First we consider the relationship between the classes S and L.

Theorem 1.2.8. One has S ⊂ L, and therefore all the assertions of Theorem 1.2.4 hold true for subexponential distributions. This inclusion is strict: not every distribution from the class L is subexponential.

Remark 1.2.9. Some sufficient conditions for a distribution G ∈ L to belong to the class S are given below, in Theorems 1.2.17 and 1.2.25.
Remark 1.2.10. In the case when a distribution G is not concentrated on [0, ∞), the tail additivity condition (1.2.2) alone will be insufficient for the function G(t) to be l.c. (and hence for ensuring the 'subexponential decay' of the distribution tail, cf. Remark 1.2.6). It is this fact that explains the necessity of defining subexponentiality in the general case in terms of the condition (1.2.2) on the distribution G+ of the r.v. ζ+. As we will see below (Corollary 1.2.16), the subexponentiality of a distribution G on R is actually equivalent to the combination of the conditions (1.2.2) (on the distribution G itself) and G ∈ L. The following example shows that for r.v.'s assuming values of both signs, the condition (1.2.2), generally speaking, does not imply subexponential behaviour for G(t).

Example 1.2.11. Let μ > 0 be fixed and let the right tail of a distribution G have the form

    G(t) = e^{−μt} V(t),    (1.2.10)

where V(t) is an r.v.f. converging to zero as t → ∞ and such that

    g(μ) := ∫_{−∞}^∞ e^{μy} G(dy) < ∞.

We have (cf. (1.1.41), (1.1.42))

    G^{2*}(t) = 2 ∫_{−∞}^{t/2} G(t − y) G(dy) + G²(t/2),

where

    ∫_{−∞}^{t/2} G(t − y) G(dy) = e^{−μt} ∫_{−∞}^{t/2} e^{μy} V(t − y) G(dy) = e^{−μt} [∫_{−∞}^{−M} + ∫_{−M}^{M} + ∫_{M}^{t/2}].

One can easily see that, for M = M(t) → ∞ slowly enough as t → ∞, we get

    ∫_{−M}^{M} e^{μy} V(t − y) G(dy) ∼ g(μ) V(t),    whereas    e^{−μt} [∫_{−∞}^{−M} + ∫_{M}^{t/2}] = o(G(t)),

and

    G²(t/2) = e^{−μt} V²(t/2) ≤ c e^{−μt} V²(t) = o(G(t)).

Thus we obtain

    G^{2*}(t) ∼ 2g(μ) e^{−μt} V(t) = 2g(μ) G(t),    (1.2.11)

and it is clear that one can always find a distribution G (with a negative mean) such that g(μ) = 1. Then the relation (1.2.2) given in the definition of subexponentiality will hold true, although G(t) decays exponentially fast and therefore is not an l.c. function. Nevertheless, observe that the class of distributions that satisfy just the relation (1.2.2) is an extension of the class S, distributions from the former class possessing many properties of distributions from S.

Proof of Theorem 1.2.8. First we will prove that S ⊂ L. Since the definitions of both classes are given in terms of the right distribution tails, one can assume without loss of generality that G ∈ S+ (or just consider from the very beginning the distribution G+). For independent (non-negative) r.v.'s ζi ⊂= G we have, for t > 0,

    G^{2*}(t) = P(ζ1 + ζ2 ≥ t) = P(ζ1 ≥ t) + P(ζ1 + ζ2 ≥ t, ζ1 < t) = G(t) + ∫_0^t G(t − y) G(dy).    (1.2.12)

Since G(t) is a non-increasing function and G(0) = 1, we obtain for t > v > 0 that

    G^{2*}(t)/G(t) = 1 + ∫_0^v (G(t − y)/G(t)) G(dy) + ∫_v^t (G(t − y)/G(t)) G(dy) ≥ 1 + [1 − G(v)] + (G(t − v)/G(t)) [G(v) − G(t)].

Hence for large enough t (such that G(v) − G(t) > 0)

    1 ≤ G(t − v)/G(t) ≤ [G^{2*}(t)/G(t) − 2 + G(v)] / [G(v) − G(t)].

Since G ∈ S+, the final right-hand side tends to G(v)/G(v) = 1 as t → ∞, and therefore G ∈ L.

To complete the proof of the theorem, it suffices to give an example of a distribution G ∈ L \ S. Since in order to achieve this we will have to refer to a necessary condition for subexponentiality given below in Theorem 1.2.17, it seems natural to present such a construction after this theorem, which we do in Example 1.2.18 (see p. 27). One can also find examples of distributions from L that are not subexponential in [109, 229].

The next theorem states several important properties of subexponential distributions.
Theorem 1.2.12. Let G ∈ S. Then the following assertions hold true.
(i) If Gi(t)/G(t) → ci as t → ∞, ci ≥ 0, i = 1, 2, c1 + c2 > 0, then

    G1 ∗ G2(t) ∼ G1(t) + G2(t) ∼ (c1 + c2) G(t).

(ii) If G0(t) ∼ cG(t) as t → ∞, c > 0, then G0 ∈ S.
(iii) For any fixed n ≥ 2,

    G^{n*}(t) ∼ nG(t)    as t → ∞.    (1.2.13)

(iv) For any ε > 0 there exists a value b = b(ε) < ∞ such that, for all n ≥ 2 and all t,

    G^{n*}(t) ≤ b (1 + ε)^n G(t).

Remark 1.2.13. It is clear that the asymptotic relation G1(t) ∼ G2(t) as t → ∞ defines an equivalence relation on the set of distributions on R. Theorem 1.2.12(ii) means that the class S is closed with respect to that equivalence. One can easily see that in each equivalence subclass of S under this relation there is always a distribution with an arbitrarily smooth tail G(t). Indeed, let p(t) be an infinitely many times differentiable probability density on R vanishing outside [0, 1]; for instance, one can take p(x) = c exp{−1/[x(1 − x)]}, x ∈ (0, 1); p(x) = 0, x ∉ (0, 1). Now we will 'smooth' the function l(t) := − ln G(t), G ∈ S, by putting

    l0(t) := ∫ p(t − u) l(u) du,    G0(t) := e^{−l0(t)}.    (1.2.14)

It is evident that G0(t) is infinitely many times differentiable and, since l(t) is non-decreasing and we actually integrate only over [t − 1, t], one has l(t − 1) ≤ l0(t) ≤ l(t) and hence by Theorem 1.2.8

    G(t − 1)/G(t) ≥ G0(t)/G(t) ≥ 1,    where    G(t − 1)/G(t) → 1    as t → ∞.

Therefore G0 is equivalent to the original G. A simpler smoothing procedure that leads to a less smooth asymptotically equivalent tail consists in replacing l(t) by its linear interpolation with nodes at the points (k, l(k)), k being an integer. Thus, up to an additive term o(1), the function l(t) = − ln G(t), G ∈ S, can always be assumed arbitrarily smooth. The aforesaid is clearly applicable to the class L as well: this class is also closed with respect to the above equivalence, and in each of its equivalence subclasses there are arbitrarily smooth representatives.

Remark 1.2.14. Note that from Theorem 1.2.12(ii), (iii) it follows immediately that if G ∈ S then also G^{n*} ∈ S, n = 2, 3, . . . Moreover, if we denote by G^{n∨} the distribution of the maximum of i.i.d. r.v.'s ζ1, . . . , ζn ⊂= G then, from the obvious relation

    G^{n∨}(t) = 1 − (1 − G(t))^n ∼ nG(t)    as t → ∞    (1.2.15)
and Theorem 1.2.12(ii), one obtains that G^{n∨} also belongs to S.

The relations (1.2.15) and (1.2.13) show that, in the case of a subexponential G, the tail G^{n*}(t) of the distribution of the sum of a fixed number n of i.i.d. r.v.'s ζi ⊂= G is asymptotically equivalent (as t → ∞) to the tail G^{n∨}(t) of the maximum of these r.v.'s. This means that 'large' values of this sum are mainly due to the presence of a single 'large' summand ζi in it. One can easily see that this property is characteristic of subexponentiality.

Remark 1.2.15. Observe also that the converse of the assertion stated at the beginning of Remark 1.2.14 is true as well: if G^{n*} ∈ S for some n ≥ 2 then G ∈ S ([112]; see also Proposition A3.18 of [113]). That G^{n∨} ∈ S implies that G ∈ S follows in an obvious way from (1.2.15) and Theorem 1.2.12(ii).

Proof of Theorem 1.2.12. (i) First assume that c1c2 > 0 and that both distributions Gi are concentrated on [0, ∞). Fix an arbitrary ε > 0 and choose values M < N large enough that Gi(M) < ε, i = 1, 2, G(M) < ε and such that for t > N one has

    1 − ε < G(t − M)/G(t) < 1 + ε,    (1 − ε)ci < Gi(t)/G(t) < (1 + ε)ci    (1.2.16)

for i = 1, 2 (that the former pair of inequalities can be satisfied is seen from Theorem 1.2.8).

Let ζ ⊂= G and ζi ⊂= Gi, i = 1, 2, be independent r.v.'s. Then, for t > 2N, one has the representation

    G1 ∗ G2(t) = P1 + P2 + P3 + P4,    (1.2.17)

where (see Fig. 1.1)

    P1 := P(ζ1 ≥ t − ζ2, ζ2 ∈ [0, M)),
    P2 := P(ζ2 ≥ t − ζ1, ζ1 ∈ [0, M)),
    P3 := P(ζ2 ≥ t − ζ1, ζ1 ∈ [M, t − M)),
    P4 := P(ζ2 ≥ M, ζ1 ≥ t − M).

We will show that the first two terms on the right-hand side of (1.2.17) are asymptotically equivalent to c1 G(t) and c2 G(t), respectively, whereas the last two are negligibly small compared with G(t). Indeed, for P1 we have the obvious two-sided bounds

    (1 − ε)² c1 G(t) < G1(t)(1 − G2(M)) = P(ζ1 ≥ t, ζ2 ∈ [0, M)) ≤ P1 ≤ P(ζ1 ≥ t − M) = G1(t − M) ≤ (1 + ε)² c1 G(t)
[Figure 1.1 here: the (ζ1, ζ2) plane partitioned into the regions giving P1, P2, P3 and P4, with the boundary line ζ1 + ζ2 = t.]

Fig. 1.1. An illustration of the representation (1.2.17). Although M = o(t), the main contribution to the sum comes from the terms P1 and P2.
by virtue of (1.2.16); bounds for the term P2 can be obtained in a similar way. Further,

P4 = P(ζ2 ≥ M, ζ1 ≥ t − M) = G2(M) G1(t − M) < ε(1 + ε)² c1 G(t).

It remains to estimate P3 (note that now we will need the condition G ∈ S; so far we have only used the fact that G ∈ L). We have

P3 = ∫_{[M, t−M)} G2(t − y) G1(dy) ≤ (1 + ε) c2 ∫_{[M, t−M)} G(t − y) G1(dy),    (1.2.18)

where it is clear that, owing to (1.2.16), the last integral is equal to

P(ζ + ζ1 ≥ t, ζ1 ∈ [M, t − M))
    ≤ P(ζ ≥ t − M, ζ1 ∈ [M, t − M)) + P(ζ + ζ1 ≥ t, ζ ∈ [M, t − M))
    = G(t − M) G1([M, t − M)) + ∫_{[M, t−M)} G1(t − y) G(dy)
    ≤ ε(1 + ε) G(t) + (1 + ε) c1 ∫_{[M, t−M)} G(t − y) G(dy).    (1.2.19)
Next observe that, using an argument similar to that above, one can easily obtain
Preliminaries
(putting G1 := G2 := G) that

G^{2∗}(t) = (1 + θ1 ε) 2G(t) + ∫_{[M, t−M)} G(t − y) G(dy) + ε(1 + θ2 ε) G(t),

where |θi| ≤ 1, i = 1, 2. Since G^{2∗}(t) ∼ 2G(t) as G ∈ S+, this equality means that the integral on its right-hand side is o(G(t)). Now it follows from (1.2.18), (1.2.19) that P3 = o(G(t)) also, and hence the required assertion is proved in the case G ∈ S+. To extend the desired result to the case of distributions Gi on R, it suffices simply to repeat the argument from the proof of Theorem 1.2.4(vi).

The case when one of the ci is zero can be reduced to the case already considered, c1 c2 > 0. If, say, c1 = 0, c2 > 0 then one can introduce the distribution G̃1 := (G1 + G)/2, for which clearly G̃1(t)/G(t) → c̃1 = 1/2, and therefore, by virtue of the assertion that we have already proved,

1/2 + c2 ∼ G̃1 ∗ G2(t)/G(t) = [G1 ∗ G2(t) + G ∗ G2(t)]/(2G(t)) = G1 ∗ G2(t)/(2G(t)) + ((1 + c2)/2)(1 + o(1))

as t → ∞, so that G1 ∗ G2(t)/G(t) → c2 = c1 + c2.

(ii) Let G0+ be the distribution of the r.v. ζ0+ := max(0, ζ0), where ζ0 ⊂= G0. Since G0+(t) = G0(t) for t > 0, it follows immediately from part (i) with G1 = G2 = G0+ that (G0+)^{2∗}(t) ∼ 2G0(t), i.e. G0 ∈ S.
(iii) If G ∈ S then, by Theorems 1.2.4(vi) and 1.2.8, one has, as t → ∞, G^{2∗}(t) ∼ (G+)^{2∗}(t) ∼ 2G(t). The relation (1.2.13) follows straight away from part (i) using an induction argument.

(iv) One has G^{n∗}(t) ≤ G+^{n∗}(t), n ≥ 1 (cf. (1.2.8)). Hence it is clear that it suffices to consider the case G ∈ S+. Put

αn := sup_{t ≥ 0} G^{n∗}(t)/G(t).

Analogously to (1.2.12), for n ≥ 2 one has

G^{n∗}(t) = G(t) + ∫_0^t G^{(n−1)∗}(t − y) G(dy),

and therefore, for any M > 0,

αn ≤ 1 + sup_{0 ≤ t ≤ M} ∫_0^t [G^{(n−1)∗}(t − y)/G(t)] G(dy) + sup_{t > M} ∫_0^t [G^{(n−1)∗}(t − y)/G(t − y)] [G(t − y)/G(t)] G(dy)
   ≤ 1 + 1/G(M) + α_{n−1} sup_{t > M} [G^{2∗}(t) − G(t)]/G(t).

Since G ∈ S, for any ε > 0 there exists an M = M(ε) such that

sup_{t > M} [G^{2∗}(t) − G(t)]/G(t) < 1 + ε,

and hence

αn ≤ b0 + α_{n−1}(1 + ε),    b0 := 1 + 1/G(M),    α1 = 1.

From here one obtains recursively

αn ≤ b0 + b0(1 + ε) + α_{n−2}(1 + ε)² ≤ ··· ≤ b0 Σ_{j=0}^{n−1} (1 + ε)^j ≤ (b0/ε)(1 + ε)^n.

The theorem is proved.
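The 'single big jump' picture behind the relations (1.2.13) and (1.2.15) is easy to observe numerically. The following sketch (our illustration, not from the text) samples two independent Pareto-type r.v.'s with tail G(t) = t^{−α}, t ≥ 1, and compares the tails of the sum and of the maximum with the common asymptotics 2G(t):

```python
import random

# Illustration (not from the text): for a subexponential (here Pareto) tail
# G(t) = t**(-alpha), t >= 1, relations (1.2.13), (1.2.15) predict
#   P(z1 + z2 > t) ~ P(max(z1, z2) > t) ~ 2 * t**(-alpha)  as t grows.
def pareto(alpha, rng):
    # Inverse-transform sampling: U**(-1/alpha) is Pareto(alpha) on [1, inf).
    return rng.random() ** (-1.0 / alpha)

def tail_ratios(alpha=1.5, t=50.0, n=200_000, seed=1):
    rng = random.Random(seed)
    sum_exceeds = max_exceeds = 0
    for _ in range(n):
        z1, z2 = pareto(alpha, rng), pareto(alpha, rng)
        sum_exceeds += z1 + z2 > t
        max_exceeds += max(z1, z2) > t
    asymptotic = 2.0 * t ** (-alpha)      # the common tail asymptotics 2*G(t)
    return sum_exceeds / n / asymptotic, max_exceeds / n / asymptotic

r_sum, r_max = tail_ratios()
# Both ratios are close to 1; the small excess of r_sum over r_max comes from
# trajectories in which neither summand alone exceeds t.
```

With α = 1.5 and t = 50 both ratios come out close to 1, in agreement with the fact that a large value of the sum is typically produced by a single large summand.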
1.2.2 Sufficient conditions for subexponentiality

Now we will turn to a discussion of sufficient conditions for a given distribution G to belong to the class of subexponential distributions. As we already know, S ⊂ L (Theorem 1.2.8) and therefore the easily verified condition G ∈ L is, quite naturally, always present in conditions sufficient for G ∈ S. First we will present a simple assertion that clarifies the subexponentiality condition for signed r.v.'s; it follows from Theorems 1.2.8, 1.2.12(iii) and 1.2.4(vi).

Corollary 1.2.16. A distribution G belongs to S iff G ∈ L and G^{2∗}(t) ∼ 2G(t) as t → ∞.

The next theorem basically paraphrases the laconic definition of subexponentiality in terms of the relative smallness of the probabilities of certain simple events for a pair of independent r.v.'s with a common distribution G.

Theorem 1.2.17. (i) Let G ∈ S. Then, for any fixed p ∈ (0, 1), as t → ∞

G(pt) G((1 − p)t) = o(G(t)),    (1.2.20)
and for any M = M(t) → ∞ such that M ≤ pt ≤ t − M one has

∫_M^{pt} G(t − y) G(dy) = o(G(t)).    (1.2.21)

(ii) Conversely, let G ∈ L and, for some p ∈ (0, 1), let relation (1.2.20) hold. Moreover, suppose that for some M = M(t) → ∞ such that

c0 := lim sup_{t→∞} G(t − M)/G(t) < ∞    (1.2.22)

one has (1.2.21) with the upper integration limit replaced by max{p, 1 − p} t. Then G ∈ S.

In what follows, it will sometimes be convenient to use an equivalent form of condition (1.2.20) in terms of the function l(t) = − ln G(t):

l(pt) + l((1 − p)t) − l(t) → ∞    as t → ∞.    (1.2.23)

Fig. 1.2. An illustration relating to the proof of Theorem 1.2.17: for G ∈ S, all three expressions in the plot should be o(G(t)).
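Condition (1.2.23) is easy to check for concrete tails. A small sketch of our own (not from the text), with p = 1/2: for the Pareto tail, l(t) = β ln t, and for the Weibull tail, l(t) = t^{1/2}, the quantity 2l(t/2) − l(t) grows without bound, as (1.2.23) requires.

```python
import math

# Sketch (hypothetical check): evaluate the left-hand side of (1.2.23) with
# p = 1/2, i.e. 2*l(t/2) - l(t), for two standard heavy tails.
def pareto_l(t, beta=2.0):        # G(t) = t**(-beta):  l(t) = beta * ln t
    return beta * math.log(t)

def weibull_l(t, alpha=0.5):      # G(t) = exp(-t**alpha):  l(t) = t**alpha
    return t ** alpha

for l in (pareto_l, weibull_l):
    gaps = [2 * l(t / 2) - l(t) for t in (1e2, 1e4, 1e6, 1e8)]
    assert gaps == sorted(gaps)   # the gap grows along this grid ...
    # ... and in fact tends to infinity, so (1.2.23) holds for p = 1/2.
```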
Proof. To prove both parts of the theorem, it suffices to consider the case G ∈ S+.

(i) We will use the representation (1.2.17) (with G1 = G2 = G) for G^{2∗}(t). Clearly, when M < pt < t − M, one has

∫_M^{pt} G(t − y) G(dy) ≤ P3,    G(pt) G((1 − p)t) ≤ P3 + P4,

where the Pi were defined after (1.2.17) (see also Figs. 1.1, 1.2). The desired statements are now obvious since we showed in the proof of Theorem 1.2.12(i) that P3 + P4 = o(G(t)) when M = M(t) → ∞ as t → ∞.

(ii) One can easily see that, in the representation (1.2.17),

P3 + P4 ≤ ∫_M^{pt} G(t − y) G(dy) + ∫_M^{(1−p)t} G(t − y) G(dy) + G(pt) G((1 − p)t),

so that it suffices to show that P1 ≡ P2 ∼ G(t) as t → ∞. By Theorem 1.2.4(i), there exists an M1 = M1(t) → ∞ such that G(t − y)/G(t) → 1 uniformly in y ∈ [0, M1]. It remains to observe that

P1 = ∫_{[0, M1)} G(t − y) G(dy) + ∫_{[M1, M)} G(t − y) G(dy),
where the first integral is evidently (1 + o(1)) G(t) while the second does not exceed c0 G(t) G([M1, M)) = o(G(t)).

Now we will use the above theorem to construct an example of a distribution G ∈ L \ S (thus completing the proof of Theorem 1.2.8).

Example 1.2.18. According to Theorem 1.2.4(ii), G ∈ L if for the function l(t) = − ln G(t) one has the representation l(t) = ∫_0^t ε(u) du, where

ε(u) ≥ 0,    ε(u) → 0 as u → ∞,    ∫_0^∞ ε(u) du = ∞.    (1.2.24)
Define a piecewise constant function ε(t) as follows. Put t0 := 0, tk := 2^{k−1}, k = 1, 2, . . . , and let ε(t) := 1 for t ∈ [t0, t1) and, for k = 1, 2, . . . ,

ε(t) := t_n^{−1} ∫_0^{t_n} ε(u) du    for t ∈ [t_n, t_{n+1}), n = 2k − 1,
ε(t) := t_n^{−1} = (t_{n+1} − t_n)^{−1}    for t ∈ [t_n, t_{n+1}), n = 2k.

Clearly, by construction one has

l(t_{2k}) = ∫_0^{t_{2k−1}} ε(u) du + ∫_{t_{2k−1}}^{t_{2k}} ε(u) du = 2 l(t_{2k−1}),    k = 1, 2, . . . ,    (1.2.25)
and, moreover, all the conditions from (1.2.24) are satisfied. However, it follows from Theorem 1.2.17(i) that G ∉ S, since condition (1.2.20) (or equivalently condition (1.2.23)) is not satisfied for p = 1/2: by virtue of (1.2.25), the difference 2l(t) − l(2t) vanishes at the points t = t_{2k−1} and hence does not tend to ∞ as t → ∞.

Theorem 1.2.17(ii) enables one to obtain a number of more convenient conditions that are sufficient for subexponentiality. First of all observe that, putting
p = 1/2 and M = t/2 in this theorem, we immediately establish the following result from [109].

Corollary 1.2.19. If G ∈ L and lim sup_{t→∞} G(t)/G(2t) < ∞ then G ∈ S.

Theorem 1.2.21 below is also a simple corollary of Theorem 1.2.17. It extends the condition sufficient for subexponentiality that we established earlier (the regular variation of G(t) at infinity). Clearly, any r.v.f. G(t) possesses the property that if b > 0 is fixed then G(bt) ∼ c_b G(t) as t → ∞ (where c_b = b^{−α} when G(t) is of the form (1.1.2)). Following [42], we will now introduce the following class of functions.

Definition 1.2.20. A function G(t) is said to be an upper-power function if it is l.c. and, for any b > 1, there exists a c(b) > 0 such that

G(bt) > c(b) G(t),    t > 0,    (1.2.26)
where c(b) is bounded away from zero on any interval (1, b1), b1 < ∞. In other words, for any p ∈ (0, 1) one has G(pt) < c_p G(t) for some c_p < ∞, where c_p is bounded on any interval (p1, 1), 0 < p1 < 1.

One can easily see that if the function G(t) is non-increasing and the condition (1.2.26) is satisfied for some b > 1 then this condition will also be met for any b > 1 and c(b) will be bounded away from zero on any interval (1, b1), b1 < ∞. In the literature, the property (1.2.26) is often referred to as dominated variation (see e.g. § 1.4 of [113]).

It is clear that the class of upper-power functions is broader than the class of r.v.f.'s. In particular, it contains functions obtained by multiplying r.v.f.'s by 'slowly oscillating' factors m(t) that are bounded away from zero. It turns out that such functions still belong to S and, moreover, applying the above operation (under the additional condition that the function m(t) is bounded) to the tails of subexponential distributions does not take one outside S.

Theorem 1.2.21. (i) If G(t) is an upper-power function then G ∈ S.

(ii) Let the tail of a distribution G0 be of the form G0(t) = m(t) G(t), where G ∈ S and m(t) is an l.c. function such that for all t

0 < m1 ≤ m(t) ≤ m2 < ∞.    (1.2.27)

Then G0 ∈ S.

Proof. (i) This assertion follows from Corollary 1.2.19 and the above remarks.

(ii) It suffices to verify the conditions of Theorem 1.2.17(ii) for p = 1/2. Denote by M = M(t) → ∞ (as t → ∞) a function for which the convergence G(t + v)/G(t) → 1 is uniform in v ∈ [−M, M] (see Remark 1.2.5 on p. 17). Clearly, G0 ∈ L (cf. Theorem 1.1.4(i)). Therefore it remains only to verify (1.2.20) and (1.2.21) for G0(t), p = 1/2 and the chosen M = M(t). By virtue of (1.2.27) and the relation (1.2.20) for G ∈ S (which holds owing to Theorem 1.2.17(i)),

G0²(t/2) ≤ m2² G²(t/2) = o(G(t)) = o(G0(t)),

so that (1.2.20) also holds for G0(t). Further, quite similarly to the bounds (1.2.18), (1.2.19) from the proof of Theorem 1.2.12(i), one derives that

∫_M^{t/2} G0(t − y) G0(dy) = O( ∫_M^{t/2} G(t − y) G(dy) ) + o(G(t)) = o(G(t));
the last equality holds owing to the fact that G ∈ S and to the relation (1.2.21) for G(t) (which holds by Theorem 1.2.17(i)). Since G(t) = O(G0 (t)) by condition (1.2.27), the above bound immediately implies the relation (1.2.21) for G0 (t). The theorem is proved.
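Corollary 1.2.19's hypothesis — boundedness of G(t)/G(2t) — separates power-like tails from Weibull-type ones. A quick numerical sketch (ours, not from the text) makes the contrast visible:

```python
import math

# Sketch (hypothetical check): the ratio G(t)/G(2t) stays bounded for a
# regularly varying tail (it equals 2**beta for G(t) = t**(-beta)) but blows
# up for the Weibull tail G(t) = exp(-t**0.5), so Corollary 1.2.19 applies
# only to the former.
def ratio(tail, t):
    return tail(t) / tail(2 * t)

pareto = lambda t: t ** -2.0
weibull = lambda t: math.exp(-t ** 0.5)

pareto_ratios = [ratio(pareto, 10.0 ** k) for k in range(1, 5)]
weibull_ratios = [ratio(weibull, 10.0 ** k) for k in range(1, 5)]
# pareto_ratios are all exactly 2**2 = 4; weibull_ratios grow without bound.
```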
1.2.3 Further sufficient conditions for distributions to belong to S. Relationship to the class of semiexponential distributions

Theorem 1.2.17 essentially gives necessary and sufficient conditions for a distribution to belong to S. So far, using the theorem we have only established sufficient conditions from Theorem 1.2.21, which, as we will see from what follows, are quite narrow. To construct broader (in a certain sense) sufficient conditions, we will introduce the class of so-called semiexponential distributions. This is a rather wide class (in particular, it includes, along with the class R, the lognormal distribution, the Weibull distribution G(t) = e^{−t^α}, 0 < α < 1, and many others).

Definition 1.2.22. A distribution G belongs to the class Se of semiexponential distributions if

(i) G(t) = e^{−l(t)}, where, for some s.v.f. L(t), the following representation holds true:

l(t) = t^α L(t),    α ∈ [0, 1];    L(t) → 0 as t → ∞ when α = 1;    (1.2.28)

(ii) as t → ∞, for Δ = Δ(t) = o(t) one has

l(t + Δ) − l(t) = (α + o(1)) (Δ/t) l(t) + o(1).    (1.2.29)

We will denote the subclass of the distributions G ∈ Se for which the index of the function l(t) is equal to α by Se(α), α ∈ [0, 1].

Condition (ii) could be rewritten in the following equivalent form: for any fixed δ > 0, as t → ∞,

for α > 0:  l(t + Δ) − l(t) ∼ α Δ l(t)/t    if (Δ/t) l(t) > δ,
for α = 0:  l(t + Δ) − l(t) = o(Δ l(t)/t)   if (Δ/t) l(t) > δ,    (1.2.30)
for α ≥ 0:  l(t + Δ) − l(t) = o(1)          if (Δ/t) l(t) → 0.    (1.2.31)
As we will see below, in the proof of Theorem 1.2.36, it is this property that, in a sense, plays the central role in determining whether G ∈ Se. The first property in (1.2.28) (i.e. that the representation l(t) = t^α L(t), where L is an s.v.f., holds) is, in many aspects, a consequence of (1.2.29). This observation will be important when we compare the classes S and Se.

Remark 1.2.23. It is seen from Definition 1.2.22 that if G ∈ Se(α) for some α ∈ [0, 1] then the distribution G1 with tail G1(t) ∼ G(t) (or, which is the same, with l1(t) := − ln G1(t) = l(t) + o(1)) as t → ∞ also belongs to the subclass Se(α). Thus, like the class S (cf. Remark 1.2.13 on p. 21), each subclass Se(α) is closed with respect to the relation of asymptotic equivalence and can be partitioned in a similar way into equivalence subclasses. The last assertion is evidently applicable to the whole class Se as well.

Remark 1.2.24. From the above it follows that the definition of the class Se can be rewritten in the following way: G ∈ Se if G(t) = e^{−l(t)+o(1)}, where l(t) admits the representation (1.2.28), and for all Δ = o(t) one has

for α > 0:  l(t + Δ) − l(t) ∼ α Δ l(t)/t,
for α = 0:  l(t + Δ) − l(t) = o(Δ l(t)/t).    (1.2.32)
In this definition, the function l(t) (and therefore the function L(t)) can, without loss of generality, be assumed to be differentiable, since it can be 'smoothed' without leaving a given equivalence subclass (as in Remark 1.2.13). If L(t) is differentiable and L′(t) = o(L(t)/t) as t → ∞, then the relation (1.2.32) for all Δ = o(t) follows immediately from the representation (1.2.28).

Now we will turn to the relationship between the class Se (and its subclasses Se(α)) and the distribution classes that we have already considered. First we will show that

R ⊂ Se(0)    but    R ≠ Se(0).    (1.2.33)
That is, the class R of distributions with tails regularly varying at infinity proves to be strictly smaller than Se(0). For this and other reasons also (in particular, the importance of the class R in applications), in the consequent exposition we will single out this class and consider it separately from Se.
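The increment condition (1.2.29) is easy to verify numerically for a concrete element of Se. A sketch of our own (not from the text): for l(t) = t^{1/2} (the Weibull tail with α = 1/2), l(t + Δ) − l(t) is approximated by α(Δ/t)l(t) with relative error of a smaller order than Δ/t.

```python
import math

# Sketch (hypothetical check): for l(t) = t**0.5, an element of Se(1/2),
# condition (1.2.29) predicts l(t + D) - l(t) ≈ alpha * (D / t) * l(t).
def l(t):
    return math.sqrt(t)

alpha, t = 0.5, 1.0e8
for D in (1.0e3, 1.0e4, 1.0e5):          # all D = o(t)
    exact = l(t + D) - l(t)
    approx = alpha * (D / t) * l(t)
    rel_err = abs(exact - approx) / approx
    assert rel_err < D / t               # the error is of order (D/t)/4 here
```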
We now return to (1.2.33). The first relation is next to obvious. Indeed, for G ∈ R one has G(t) = t^{−β} L_G(t), β ≥ 0, where L_G(t) is an s.v.f., so that l(t) = − ln G(t) = β ln t − ln L_G(t), and hence, as one can easily verify, conditions (1.2.28) and (1.2.29) are met for α = 0.

To establish the second relation in (1.2.33), we will construct an example of a distribution G ∈ Se(0) \ R. Put l(t) := L0(t) ln t, where L0(t) is the s.v.f. from (1.1.5), which oscillates between the values 1 and 3. Clearly G(t) = e^{−l(t)} is not an r.v.f., whereas l(t) is an s.v.f. by Theorem 1.1.4(i), and it is not hard to verify that condition (1.2.29) holds with α = 0, so that G ∈ Se(0).

Now we will present a number of further assertions characterizing the class S and its connections with Se and its subclasses Se(α).

Theorem 1.2.25. Let G ∈ L and let the function l(t) = − ln G(t) possess the property

lim sup_{t→∞, u→1} [ l(t + u z(t)) − l(t) ] < 1    for    z(t) := t/l(t).    (1.2.34)

Then G ∈ S.

Remark 1.2.26. The property (1.2.34) can be rewritten in the following form: if Δ = Δ(t) ∼ z(t) as t → ∞ then, for all sufficiently large t, one has

l(t + Δ) − l(t) ≤ (α + o(1)) (Δ/t) l(t) + q    (1.2.35)

for 0 ≤ α < 1 and α + q < 1. Under broad conditions on the function l(t), this relation will also remain true for Δ = o(z(t)) (see below). Roughly speaking, the first term on the right-hand side of (1.2.35) bounds the rate of the 'differentiable growth' of the function l(t), whereas the second bounds the rate of its 'singular growth'. The jumps in the function l(t) must be o(t) as t → ∞ owing to the fact that G(t) is an l.c. function.

Remark 1.2.27. The relation (1.2.35) could be rewritten in the following equivalent form:

l(t) − l(t − Δ) ≤ (α + o(1)) (Δ/t) l(t) + q,    Δ = Δ(t) ∼ z(t),

for 0 ≤ α < 1 and α + q < 1. To verify this, one has to observe that z(t) = o(t) and that under condition (1.2.34) we have l(t + Δ) ∼ l(t) for Δ ∈ [0, z(t)]. Since the functions l(t) and z(t) = t/l(t) are 'equally smooth', we also have z(t + Δ) ∼ z(t) and Δ(t) ∼ Δ(t̃) for t̃ := t − Δ(t).

Prior to proving Theorem 1.2.25, we will state an important corollary of that assertion which establishes a connection between the distribution classes under consideration.
Corollary 1.2.28. (i) If, for some c > 0 and α ∈ [0, 1) and all Δ ∈ [c, z(t)], the function l(t) = − ln G(t) satisfies the inequality

l(t + Δ) − l(t) ≤ (α + o(1)) (Δ/t) l(t) + o(1)    as t → ∞    (1.2.36)

then G ∈ S.

(ii) For α ∈ [0, 1) one has Se(α) ⊂ S.

Proof of Corollary 1.2.28. (i) One can easily see that inequality (1.2.36) ensures that the conditions of Theorem 1.2.25 are met. Indeed, (1.2.36) implies (1.2.35) and therefore also (1.2.34) by virtue of Remark 1.2.26. It remains to demonstrate that G ∈ L. To this end, note that from (1.2.36) with Δ = c it follows that l(t + c) − l(t) = O(l(t)/t) + o(1), and hence we just have to show that l(t) = o(t). This relation follows immediately from (1.2.36) and the next lemma.

Lemma 1.2.29. If, for some γ0 < 1 and t0 < ∞,

l(t + z(t)) − l(t) ≤ γ0    for    t ≥ t0    (1.2.37)

then l(t) = o(t) as t → ∞ (in other words, z(t) → ∞). Observe that (1.2.34) implies (1.2.37).

Proof. Consider the increasing sequence

s0 := t0,    s_{n+1} := s_n + z(s_n),    n = 0, 1, 2, . . .    (1.2.38)

Owing to (1.2.37) and the obvious inequality

(x1 + x2)/(y1 + y2) ≥ min{x1/y1, x2/y2}    for    y1, y2 > 0

we obtain

z(s_{n+1}) = (s_n + z(s_n))/l(s_n + z(s_n)) ≥ (s_n + z(s_n))/(l(s_n) + γ0) ≥ min{z(s_n), z(s_n)/γ0}
           = z(s_n) ≥ z(s_{n−1}) ≥ ··· ≥ z(s_0) = z(t0) > 0.    (1.2.39)
Since for t ∈ [sn , sn+1 ] z(t) =
t sn sn z(sn ) ∼ z(sn ), = l(t) l(sn+1 ) l(sn ) + γ0 1 + γ0 /l(sn )
to complete the proof of the lemma we just have to show that z∞ = ∞. Assume the contrary, that z∞ < ∞, so that z(sn ) = (1 − ε1 (n))z∞ ,
ε1 (n) → 0 as n → ∞.
(1.2.40)
33
1.2 Subexponential distributions This implies that there exists an infinite subsequence {n } such that l(sn +1 ) − l(sn ) (1 − ε2 (n ))
z(sn ) , z∞
ε2 (n ) → 0.
(1.2.41)
Indeed, if this were not the case then there would exist a δ > 0 and an n0 < ∞ such that, for all n n0 , l(sn+1 ) − l(sn )
z(sn ) z∞ + δ
and therefore l(sn ) = l(sn0 ) +
n
l(sk ) − l(sk−1 )
k=n0 +1
l(sn0 ) +
1 z∞ + δ
n
z(sk−1 ) = l(sn0 ) +
k=n0 +1
sn − sn0 , z∞ + δ
whence lim inf n→∞ z(sn ) z∞ + δ, which would contradict the definition of z∞ . It remains to notice that from (1.2.40) and (1.2.41) one has l(sn + z(sn )) − l(sn ) (1 − ε1 (n ))(1 − ε2 (n )) → 1
as
n → ∞.
This contradicts (1.2.37), therefore z∞ = ∞. The lemma is proved. We return to the proof of Corollary 1.2.28. (ii) Since z(t) = o(t) when α < 1, the property (1.2.29) of the elements of Se(α) with α < 1 clearly implies (1.2.36). Corollary 1.2.28 is proved. The proof of Theorem 1.2.25 will be preceded by the following lemma. Lemma 1.2.30. If condition (1.2.37) is satisfied for the function l(t) then there exist t0 < ∞, γ < 1 and a continuous piecewise differentiable function h(t) such that, for all t t0 , |h(t) − l(t)| γ, γh(t) , t γ t h(t) , h(u) u
(1.2.42)
h (t)
(1.2.43) t0 u t.
(1.2.44)
In particular, h(t) h(t0 )(t/t0 )γ . Proof. Suppose that (1.2.37) holds. Since l(t) → ∞, we can assume without loss of generality that γ0 + 1 γ0 γ := . 1 − 1/l(t0 ) 2
34
Preliminaries
Using the sequence (1.2.38) with s0 = t0 , denote by h(t) the continuous piecewise linear function with nodes at the points (sn , l(sn )), n 0: h(t) := l(sn ) +
l(sn+1 ) − l(sn ) (t − sn ), z(sn )
t ∈ [sn , sn+1 ].
Since sn → ∞ as n → ∞ by virtue of (1.2.39), we have thus defined the function h(t) on the entire half-line [t0 , ∞) (its definition on the left of the point t0 is inessential for our purposes). It is evident that, owing to (1.2.37), on the interval [sn , sn+1 ] one has the inequalities l(sn ) l(t) l(sn+1 ) l(sn ) + γ0 , l(sn ) h(t) l(sn+1 ) l(sn ) + γ0 ,
(1.2.45)
which proves (1.2.42) since γ0 < γ < 1. Further, to obtain (1.2.43) note that for t ∈ [sn , sn+1 ] we have sn t − z(sn ) and therefore γ0 γ0 l(sn ) l(sn+1 ) − l(sn ) γ0 h(t) = z(sn ) z(sn ) sn t − z(sn ) γ0 h(t) γ0 h(t) = 1 − z(sn )/t t 1 − z(sn )/sn t γh(t) h(t) γ0 , = 1 − 1/l(sn ) t t
h (t) =
since l(sn ) l(t0 ). Finally, (1.2.44) is almost obvious from (1.2.43): integrating the latter inequality on the interval [u, t], we get ln
t h(t) γ ln , h(u) u
(1.2.46)
which proves (1.2.44). The lemma is proved. Proof of Theorem 1.2.25. We will make use of Theorem 1.2.17(ii) with p = 1/2 and M = z(t). Recall that, under the conditions of the theorem, by virtue of Remark 1.2.27 one has z(t) ∼ z( ! t ) for ! t := t − z(t). Therefore, by (1.2.34), G(t − M ) = exp{l(t) − l( ! t )} = exp{l( ! t + z(t)) − l( ! t )} < e G(t) for all sufficiently large t, and so condition (1.2.22) of Theorem 1.2.17(ii) is satisfied. Further, for the function h from Lemma 1.2.30 one has |l(t)−h(t)| < γ, t t0 . Therefore, if we establish relations (1.2.20), (1.2.21) for the distribution Gh with Gh (t) = e−h(t) then they will be valid for the original distribution G as well. So to simplify the exposition one could assume from the very beginning that the
35
1.2 Subexponential distributions
function l(t) has the properties (1.2.43), (1.2.44). From these properties it follows that for any v ∈ [0, 1/2] and t t0 one has l(t) − l(vt) − l((1 − v)t) [1 − vγ − (1 − v)γ ]l(t).
(1.2.47)
Next, use the inequality 1 − (1 − v)γ (2γ − 1)v γ ,
v ∈ [0, 1/2],
of which the left-hand (right-hand) side is a concave (convex) function on the indicated interval, the values of these functions coinciding with each other at the end points v = 0 and v = 1/2 of the interval. Together with (1.2.47) the inequality implies that l(t) − l(vt) − l((1 − v)t) −cv γ l(t),
c = 2 − 2γ > 0.
(1.2.48)
From this we immediately obtain (1.2.23), so that the condition (1.2.20) of Theorem 1.2.17 is satisfied (for any p ∈ (0, 1)). As we have already noted, under the conditions of Theorem 1.2.25 there exist t0 < ∞ and γ0 < 1 such that (1.2.37) holds true. As z(t) → ∞ by Lemma 1.2.29 (or simply by virtue of the assumption G ∈ L and Theorem 1.2.4(iv)), one has z(t) t0 for all sufficiently large t. Denote by {sn ; n 0} the sequence constructed according to (1.2.38) with initial value s0 = z(t). Further, inequality (1.2.48) implies that t/2 t/2 G(t − y) G(dy) = G(t) exp{l(t) − l(y) − l(t − y)} dl(y) M
z(t)
t/2 G(t)
exp{−c(y/t)γ l(y)} dl(y) =: G(t)I(t).
z(t)
(1.2.49) Put N := min{n 1 : sn t/2} and represent I(t) as a sum of integrals over the intervals [sn , sn+1 ). Observe that, according to (1.2.37), the increments of the function l(t) on each of these intervals do not exceed γ. Therefore, using the inequalities sn s0 + nz(s0 ) (n + 1)z(t), sn+1 sn+1 l(sn+1 ) − l(sn ) − + 1 1, z(sn+1 ) z(sn ) which are valid owing to (1.2.38), (1.2.39) and our choice of s0 , we obtain from
36
Preliminaries
the dominated convergence theorem that I(t)
N −1
sn+1
exp{−c(y/t)γ l(t)} dl(y)
n=0 s n ∞
N −1
exp{−c(sn /t)γ l(t)}
n=0
exp{−c(n + 1)γ l1−γ (t)} → 0 as
t → ∞.
n=0
Together with (1.2.49) this implies that condition (1.2.21) of Theorem 1.2.17 is satisfied. Thus, all the conditions of Theorem 1.2.17(ii) are met, and therefore G ∈ S. The theorem is proved. We will present one more important consequence of Theorem 1.2.25. First recall that, by virtue of Remark 1.2.13 (p. 21), for distributions G ∈ S one can always assume that the function l(t) = − ln G(t) (up to an additive o(1) correction term, t → ∞) is differentiable arbitrarily many times. A similar assertion holds for G ∈ Se as (cf. Remark 1.2.24 on p. 30). Theorem 1.2.31. Let G(t) = e−l(t)+o(1) , where l(t) is continuous and piecewise differentiable and lim sup z(t)l (t) < 1,
z(t) = t/l(t).
(1.2.50)
t→∞
Then the conditions of Theorem 1.2.25 are satisfied, so that G ∈ S. Proof. The condition (1.2.50) clearly means that, for some γ < 1 and t0 < ∞, one has l (t) γl(t)/t for t t0 . As shown in Lemma 1.2.30, this relation implies (1.2.46), so that ln
t+Δ l(t + Δ) γ ln , l(t) t
Hence, for Δ = o(t), t → ∞, one has Δ l(t + Δ) 1+ l(t) t
t t0 , Δ > 0.
γ
=1+
Δ (γ + o(1)). t
From here the relation (1.2.36) is immediate. It remains to use Corollary 1.2.28. Theorem 1.2.31, in its turn, implies the following. Corollary 1.2.32. For G ∈ S it suffices that lim sup t [ln(− ln G(t))] < 1.
(1.2.51)
t→∞
Next we will give one more consequence of the above results. It concerns conditions for G ∈ S that are, in a sense, close to the necessary ones and are not related to the differentiability properties of the function l(t) = − ln G(t). The condition to be presented will be expressed simply in terms of some restrictions
37
1.2 Subexponential distributions
on the asymptotic behaviour of the increments l(t + Δ) − l(t) for some function Δ = Δ(t) such that 0 < c Δ = o(z(t)) as t → ∞. First note that, for an l(t) = t−α L(t) with a ‘regular’ s.v.f. L(t), the derivative l (t) is close to αl(t)/t = α/z(t) (cf. Theorem 1.1.4(iv)), so that in this situation 1/z(t) characterizes the decay rate of the function l (t) as t → ∞. Theorem 1.2.31 shows that a sufficient condition for G ∈ S is that the ratio of l (t) and 1/z(t) is bounded away (from below) from unity as t → ∞. In the general case, we will consider the ratio r(t, Δ) := z(t)(l(t + Δ) − l(t))/Δ of the difference analogue (l(t + Δ) − l(t))/Δ of the derivative of l(t) and the function 1/z(t). Theorem 1.2.33. (i) Let there exist a function Δ = Δ(t) such that 0 < c Δ(t) = o(z(t)) as t → ∞
(1.2.52)
α+ (G, Δ) := lim sup r(t, Δ(t)) < 1.
(1.2.53)
and t→∞
Then G ∈ S. (ii) Conversely, if G ∈ S then, for any function Δ = Δ(t) c > 0, we have α− (G, Δ) := lim inf r(t, Δ(t)) 1. t→∞
(1.2.54)
Remark 1.2.34. Observe that (1.2.52) contains the condition that z(t) → ∞ as t → ∞ or, equivalently, that l(t) = o(t). The proof of part (ii) of the theorem shows that in this case one always has α− (G, Δ) 1. To prove the theorem we will need the following analogue of Lemma 1.2.30. Lemma 1.2.35. If the conditions of Theorem 1.2.33(i) are satisfied for l(t) then there exist t0 < ∞, γ < 1 and a continuous piecewise differentiable function h(t) such that |h(t) − l(t)| = o(1) as h (t)
γh(t) t
for all
t → ∞, t t0 .
(1.2.55) (1.2.56)
Proof. The proof of the lemma essentially repeats that of Lemma 1.2.30 but with z(t) replaced by Δ(t) when constructing the sequence (1.2.38): now we put sn+1 := sn + Δ(sn ). The value s0 is chosen to be such that r(t, Δ(t)) γ < 1 for all t s0 (its existence is ensured by condition (1.2.53)). It is clear that sn → ∞ as n → ∞ by virtue of (1.2.52).
38
Preliminaries
Denote by h(t) a continuous piecewise linear function with nodes (sn , l(sn )), n 0. Then clearly l(sn+1 ) − l(sn )
γΔ(sn ) =: rn = o(1) as z(sn )
n → ∞.
(1.2.57)
Since both l(t) and h(t) are non-decreasing functions we find, cf. (1.2.45), that the deviation of these functions from each other on the interval [sn , sn+1 ] also does not exceed rn . This proves (1.2.55). Further, from (1.2.57) and (1.2.55) we obtain that, for any given ε > 0 and for all sufficiently large n, one has h (t) =
γ γ(1 + ε)h(t) l(sn+1 ) − l(sn ) , Δ(sn ) z(sn ) t
t ∈ [sn , sn+1 ].
Increasing, if necessary, the values of the previously chosen quantities s0 and γ < 1, we arrive at (1.2.56). The lemma is proved. Proof of Theorem 1.2.33. (i) This assertion is an obvious consequence of Theorem 1.2.31 and Lemma 1.2.35. (ii) Assume that α− (G, Δ) > 1 for some function Δ(t) c > 0. Then, evidently, sn := sn−1 + Δ(sn−1 ) → ∞ as n → ∞ and, for a sufficiently large s0 , one has r(t, Δ(t)) 1 for all t s0 . Therefore, for all n 0 we have z(sn+1 ) =
sn + Δ(sn ) sn + Δ(sn ) = z(sn ). l(sn + Δ(sn )) l(sn ) + Δ(sn )/z(sn )
Thus the sequence {z(sn )} is non-increasing, so that l(sn ) sn /z(s0 ), n 0. Since sn → ∞, the above relation implies that l(t) = o(t), which is inconsistent with G ∈ S by Theorems 1.2.8 and 1.2.4(iv). The theorem is proved. It follows from the above assertions that the ‘regular’ part of the class S, i.e. the part consisting of the elements for which the upper and lower limits α± in (1.2.53) and (1.2.54) coincide with each other, is nothing other than the class Se of semiexponential distributions. More precisely, denote by S(α) ⊂ S the subclass of distributions G ∈ S for which there exists a function Δ = Δ(t) satisfying (1.2.52) and such that α+ (G, Δ) = α− (G, Δ) = α ∈ [0, 1]: "
there is a Δ such that (1.2.52) holds . (1.2.58) S(α) := G ∈ S : and α+ (G, Δ) = α− (G, Δ) = α Recall that Se(α) denotes the subclass of semiexponential distributions with index α ∈ [0, 1] (see Definition 1.2.22 on p. 29). Theorem 1.2.36. For α ∈ [0, 1), S(α) = Se(α). For α = 1 one has S(1) ⊂ Se(1).
1.2 Subexponential distributions
39
For distributions G ∈ Se(1), to ensure that G ∈ S(1) one needs, generally speaking, additional conditions on the s.v.f. L(t) → 0 in the representation (1.2.28). For more detail, see Remark 1.2.37 below. Proof. As we observed in Remark 1.2.23 (p. 30), the subclasses Se(α) are closed with respect to asymptotic equivalence: if G1 ∈ Se(α) and G1 (t) ∼ G(t) (or, equivalently, l1 (t) := − ln G1 (t) = l(t) + o(1)) as t → ∞ then G ∈ Se(α). Hence, to prove the inclusion S(α) ⊂ Se(α),
α ∈ [0, 1],
(1.2.59)
it suffices to show that, for any G ∈ S(α), there is an asymptotically equivalent distribution G1 ∈ Se(α). Assume that G ∈ S(α). To construct the required function G1 (t) we will employ the linear interpolation h(t) from the proof of Lemma 1.2.35 (it is obtained using the function Δ(t) from (1.2.58)) and put l1 (t) := h(t), G1 (t) := e−l1 (t) . The same argument as in Lemma 1.2.35, together with the condition α+ (G, Δ) = α− (G, Δ) = α, shows that there exists the limit lim t ln l1 (t) = α. t→∞
Therefore, (ln l1 (t)) = α/t + ε(t)/t, where ε(t) → 0 as t → ∞, so that t ln l1 (t) = α ln t + 1
ε(u) du u
and hence
t
l1 (t) = t L(t),
where L(t) = exp
α
0
ε(u) du . u
(1.2.60)
By virtue of Theorem 1.1.3, the last representation means that L(t) is an s.v.f. Since G ∈ S, by Theorem 1.2.12(ii) we have G1 ∈ S and therefore l1 (t) = o(t) (see Theorems 1.1.2(iv) and 1.1.3), so that L(t) = o(1) when α = 1. Further, it is clear that t+Δ
ε(u) du = o(Δ/t) u
as
Δ = o(t),
t
so that from (1.2.60) we obtain l1 (t + Δ) = l1 (t)
1+
Δ t
α
eo(Δ/t) .
Therefore
Δ l1 (t). t Thus the conditions of Definition 1.2.22 are satisfied for the function G1 (t). So l1 (t + Δ) − l1 (t) = (α + o(1))
40
Preliminaries
we have proved that G1 ∈ Se(α) (and hence that G ∈ Se(α)), which establishes (1.2.59). Now let G ∈ Se(α), α ∈ [0, 1). To prove that G ∈ S(α) it suffices, by virtue of Theorem 1.2.33(i), to show that there is a function Δ = Δ(t) satisfying (1.2.52), for which there exists the limit α+ (G, Δ) = α− (G, Δ) = α (note that we consider only the case α < 1). But the existence of such a function Δ(t) directly follows from the relation (1.2.29) in Definition 1.2.22. The relation implies that there exists a function ε = ε(t) that converges to zero slowly enough as t → ∞ and has the property that, for Δ(t) := ε(t)t/l(t) = o(z(t)), one has Δ(t) c > 0 and l(t + Δ(t)) − l(t) = (α + o(1))
Δ(t) l(t). t
The last relation means that r(t, Δ(t)) → α as t → ∞. The theorem is proved. Remark 1.2.37. In the boundary case where α+ (G, Δ) = 1 in (1.2.53), both G ∈ S and G ∈ / S are possible. Before discussing the conditions for a distribution G ∈ Se(1) to belong to the class of subexponential distributions, we will give an example of a distribution G ∈ Se(1) \ S. Example 1.2.38. Set l(t) := tL(t), where −1 k , t ∈ [22k , 22k+1 ], L(t) := t ∈ [22k+1 , 22k+2 ], (log2 t − k − 1)−1 ,
k = 1, 2, . . .
Clearly, L(t) ∼ 2/ log2 t, L (t) = 0 for t ∈ (22k , 22k+1 ) and L (t) = −
log2 e t(log2 t − k − 1)2
for t ∈ (22k+1 , 22k+2 ),
so that in any case L (t) = o(L(t)/t). Hence for Δ = Δ(t) = o(t) one has L(t + Δ) ∼ L(t) and l(t + Δ) − l(t) = ΔL(t + Δ) + t(L(t + Δ) − L(t)) = (1 + o(1))ΔL(t) + O(tL (t)Δ) = (1 + o(1))ΔL(t).
(1.2.61)
Thus the conditions for G ∈ Se(1) from Definition 1.2.22 are satisfied. However, for t = 22k+1 , 2l(t/2) − l(t) = t[L(t/2) − L(t)] = 22k+1 [L(22k ) − L(22k+1 )] = 0, so that condition (1.2.20) (or equivalently (1.2.23)) for p = 1/2 is not satisfied and therefore G ∈ S by Theorem 1.2.17(i).
41
1.2 Subexponential distributions
That a distribution G ∈ Se(1) belongs to the class S can only be proved under the additional assumption that the derivative L′(t) is regularly varying. Namely, the following assertion holds true.

Theorem 1.2.39. Assume that G ∈ Se(1) and that the function L(t) in the representation l(t) = −ln G(t) = tL(t) is differentiable for t > t₀, for a large enough t₀. If

ε(t) := −tL′(t)   (1.2.62)

is an s.v.f. at infinity and, for some γ ∈ (0, 1),

γε(t)L^{(−1)}(L(t)/γ) + ln ε(t) − ln L(t) → ∞,   (1.2.63)

where L^{(−1)}(u) := inf{t : L(t) < u} is the inverse of the function L(t), then G ∈ S.

Example 1.2.40. Consider a distribution G with l(t) = tL(t), where L(t) = ln^{−β} t, β > 0, for t > e. Clearly, in this case ε(t) = β ln^{−β−1} t and L^{(−1)}(u) = exp{u^{−1/β}}, so that for γ = 1/2 the left-hand side of (1.2.63) is equal to

2^{−1} β t^{2^{−1/β}} ln^{−β−1} t + ln β − ln ln t → ∞ as t → ∞.
Thus, the conditions of Theorem 1.2.39 are satisfied and therefore G ∈ S for all¹ β > 0.

Proof of Theorem 1.2.39. We will make use of Theorem 1.2.17(ii) with p = 1/2. First note that, for t > t₀,

L(t) = ∫_t^∞ (ε(u)/u) du,   (1.2.64)

so that ε(t) = o(L(t)) by Theorem 1.1.4(iv) (for the case α = 1). Therefore l′(t) = L(t) − ε(t) ∼ L(t), and hence, for M = o(t),

l(t) − l(t − M) ∼ ∫_{t−M}^t L(u) du ∼ M L(t).

Since L(t) → 0 as t → ∞, we obtain immediately that G ∈ L (choosing M := c), and also that condition (1.2.22) is satisfied for M := 1/L(t) → ∞ (note that 1/L(t) = o(t) by Theorem 1.1.4(ii)). Further, again by Theorem 1.1.4 (part (iv) in the case α = 0 and part (ii)) we have that, as t → ∞,

2l(t/2) − l(t) = t[L(t/2) − L(t)] = t ∫_{t/2}^t (ε(u)/u) du
≥ ∫_{t/2}^t ε(u) du ∼ tε(t) − (t/2)ε(t/2) ∼ (t/2)ε(t) → ∞,

since ε(t) is an s.v.f. Thus condition (1.2.23) is also met.
¹ It was claimed in [266] that, in the case when G(t) = exp{−t ln^{−β} t}, 'we can similarly show that G ∈ S' iff β > 1 (a remark at the bottom of p. 1001). This assertion appears to be imprecise.
Preliminaries
It remains to verify (1.2.21), namely, that the following expression tends to zero:

∫_M^{t/2} (G(t − y)/G(t)) G(dy) = ∫_M^{t/2} exp{l(t) − l(t − y) − l(y)} dl(y)
= ∫_M^{t/2} exp{ t[L(t) − L(t − y)] − l(y)[1 − L(t − y)/L(y)] } dl(y)
= ∫_M^N + ∫_N^{t/2} =: I₁ + I₂,
where we put N = N(t) := L^{(−1)}(L(t)/γ), so that N → ∞ and N = o(t) as t → ∞. Observe that, because ε(u) > 0 (as an s.v.f.), by virtue of (1.2.64) the function L(t) decreases monotonically and continuously for t > t₀, and therefore L^{(−1)}(v) is the unique solution to the equation L(L^{(−1)}(v)) = v, so that one has L(N) = L(t)/γ. Without restricting the generality, one can assume that M > t₀ and therefore the function L(y) decreases for y ≥ M. This implies that, for all sufficiently large t, one has L(y) ≥ L(N) = L(t)/γ for y ∈ [M, N] and, since L(t − y) ∼ L(t) as t → ∞ when y ∈ [M, N], for c := (1 − γ)/2 > 0 we obtain

I₁ ≤ ∫_M^N exp{−l(y)[1 − L(t − y)/L(y)]} dl(y) ≤ ∫_M^N e^{−cl(y)} dl(y) < (1/c) e^{−cl(M)}.
Further, for y ∈ [M, t/2] one has t − y ≥ y and hence L(t − y)/L(y) ≤ 1, and since ε(t) is an s.v.f. we obtain

t[L(t) − L(t − y)] = −t ∫_{t−y}^t (ε(u)/u) du ≤ − ∫_{t−y}^t ε(u) du = −(1 + o(1)) y ε(t).

Therefore, using the monotonicity of L and the equality L(N) = L(t)/γ, we get

I₂ ≤ ∫_N^{t/2} exp{t[L(t) − L(t − y)]} dl(y) ≤ ∫_N^{t/2} exp{−(1 + o(1)) y ε(t)} dl(y)
≤ ∫_N^{t/2} exp{−((1 + o(1))ε(t)/L(N)) l(y)} dl(y) = ∫_{l(N)}^{l(t/2)} exp{−((γ + o(1))ε(t)/L(t)) u} du
< (L(t)/((γ + o(1))ε(t))) exp{−((γ + o(1))ε(t)/L(t)) l(N)} ∼ (L(t)/(γε(t))) e^{−(1+o(1))ε(t)N} = o(1),

owing to (1.2.63). Thus condition (1.2.21) of Theorem 1.2.17(ii) is also satisfied, and hence G ∈ S. The theorem is proved.
Now we will give an example showing that the tails of distributions from S can have very large fluctuations around the tails of semiexponential distributions (much larger fluctuations than those discussed in Theorem 1.2.21).

Example 1.2.41. For a fixed δ ∈ (0, 1) put t_k := 2^k, t′_k := (1 + δ)t_k, k = 0, 1, . . . , and partition the region [1, ∞) into half-open intervals [t_k, t′_k), [t′_k, t_{k+1}), k = 0, 1, . . . Put l(1) := 0 and, for a fixed γ ∈ (0, 1), set

l(t) := l(t_k) for t ∈ [t_k, t′_k),
l(t) := l(t_k) + t^γ − ((1 + δ)t_k)^γ for t ∈ [t′_k, t_{k+1}),

so that l′(t) = 0 on intervals of the first kind and l′(t) = γt^{γ−1} inside intervals of the second kind. Clearly

l(t_{k+1}) = l(t_k) + 2^{(k+1)γ} − (1 + δ)^γ 2^{kγ} = l(t_k) + 2^{(k+1)γ} q₁,

where q₁ = 1 − ((1 + δ)/2)^γ. Hence

l(t_k) = q₁ Σ_{j=1}^k 2^{jγ} = q 2^{kγ} + O(1) = l₀(t_k) + O(1) as k → ∞,

where l₀(t) := qt^γ, q = q₁/(1 − 2^{−γ}). Thus the function l(t) 'oscillates' around l₀(t), and hence G(t) := e^{−l(t)} oscillates around G₀(t) := e^{−l₀(t)}, where evidently G₀ ∈ Se. Denote by l₁(t) the polygon with vertices at the 'lower nodes' (t′_k, l(t′_k)) of the function l(t). It is seen from the construction that l₁(t) = (1 + δ)^{−γ} l₀(t) + O(1) and l₁(t) ≤ l(t). Hence

l′(t) ≤ γt^{γ−1} = (γ/q) l₀(t)/t ≤ (γ(1 + δ)^γ/q)(l₁(t)/t)(1 + o(1)) ≤ (γ(1 + δ)^γ/q)(l(t)/t)(1 + o(1)).

Since γ < 1, for small enough δ > 0 we also have γ(1 + δ)^γ/q < 1, i.e. the conditions of Theorem 1.2.31 are satisfied and so G ∈ S. Observe also that the fluctuations of the function l(t) around l₀(t) = qt^γ are unbounded:

lim sup_{t→∞} (l(t) − l₀(t)) = lim sup_{t→∞} (l₀(t) − l(t)) = ∞.
This corresponds to unbounded 'relative' fluctuations of the function G(t) around the tail G₀(t) of the semiexponential distribution G₀. This example also shows that functions from S could be constructed in a similar way from 'pieces' of functions from the classes Se(α) with different values of α.

Remark 1.2.42. Definition 1.2.1 and the assertions of statements 1.2.4–1.2.36 could be extended in an obvious way to the case of finite measures G with G(R) = g ≠ 1. For them, the subexponentiality property can be stated as

G^{2∗}(t)/G(t) → 2g as t → ∞.

A few of the assertions in statements 1.2.4–1.2.21 will need minor obvious amendments. Thus, the assertion of Theorem 1.2.12(iii) will take the form

G^{n∗}(t)/G(t) → ng^{n−1},
whereas that of part (iv) of the same theorem will become

G^{n∗}(t)/G(t) ≤ bg^{n−1}(1 + ε)^n.

To prove these assertions, one has to introduce the subexponential distribution G̃ = g^{−1}G and make use of statements 1.2.4–1.2.36.
1.3 Locally subexponential distributions Along with subexponentiality one could also consider a more subtle property, that of local subexponentiality. To simplify the exposition we will confine ourselves here to dealing with distributions on the positive half-axis. The transition to the general case could be made in exactly the same way as in § 1.2.
1.3.1 Arithmetic distributions

We will start with the simpler discrete case, where one considers distributions (or measures) G = {g_k; k ≥ 0} on the set of integers. By the convolution of the sequences {g_k} and {f_k} we mean the sequence

(g ∗ f)_k := Σ_{j=0}^k g_j f_{k−j}.

We will denote the convolution of {g_k} with itself by {g_k^{2∗}} and, furthermore, put g_k^{(n+1)∗} := (g ∗ g^{n∗})_k, n ≥ 2. Clearly g_k^{n∗} = G^{n∗}({k}).

Definition 1.3.1. A sequence {g_k ≥ 0; k ≥ 0} with Σ_{k=0}^∞ g_k = 1 is said to be subexponential if

lim_{k→∞} g_{k+1}/g_k = 1,   (1.3.1)
lim_{k→∞} g_k^{2∗}/g_k = 2.   (1.3.2)

One can easily see that a regularly varying sequence

g_k = k^{−α−1} L(k), α > 1,

where L(t) is an s.v.f., will belong (after proper normalization) to the class of subexponential sequences, just as a semiexponential sequence g_k = e^{−k^α L(k)}, α ∈ (0, 1), does. Without normalization, on the right-hand side of (1.3.2) one should have 2g, where g = Σ_{k=0}^∞ g_k. For subexponential sequences there exist analogues of statements 1.2.4–1.2.21 (with the difference that the property (1.3.1) is now part of the definition and therefore requires no proof). We will restrict our attention to those assertions whose proofs require substantial changes compared with the respective arguments in § 1.2. These assertions are analogues of parts (iii) and (iv) of Theorem 1.2.12.
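The defining property (1.3.2) is easy to illustrate numerically (a sketch of ours; the particular sequence and the truncation level are arbitrary choices): for g_k proportional to (k + 1)^{−4}, a regularly varying sequence with α = 3 > 1, the ratio g_k^{2∗}/g_k computed by direct convolution approaches 2:

```python
# Numerical illustration of (1.3.2) for a regularly varying sequence:
# g_k proportional to (k+1)^(-4), i.e. g_k = k^(-alpha-1) L(k) with alpha = 3.
N = 4000
raw = [(k + 1) ** -4.0 for k in range(N)]
s = sum(raw)
g = [x / s for x in raw]            # normalize so that sum of g_k equals 1

def conv2(g, k):
    # (g * g)_k = sum_{j=0}^{k} g_j g_{k-j}
    return sum(g[j] * g[k - j] for j in range(k + 1))

for k in (100, 1000, 3999):
    print(k, conv2(g, k) / g[k])    # ratios approach 2 as k grows
```

The slow convergence of the ratio to 2 is typical: it mirrors the slowly varying corrections that the theory keeps track of via the choice of M in the proofs above.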
Theorem 1.3.2. Let {g_k; k ≥ 0} be a subexponential sequence. Then:

(i) for any fixed n ≥ 2,

lim_{k→∞} g_k^{n∗}/g_k = n;   (1.3.3)

(ii) for any ε > 0 there exists an M < ∞ such that, for all n ≥ 1 and k ≥ M,

g_k^{n∗}/g_k < (1/ε)(1 + ε)^{n+1}.

Proof. Similarly to the argument in the proof of Theorem 1.2.12(iii), we will make use of induction. Assume that (1.3.3) holds true. Then, for M ≤ k,

g_k^{(n+1)∗}/g_k = (1/g_k) Σ_{j=0}^k g_j g_{k−j}^{n∗} = Σ_{j=0}^{k−M} + Σ_{j=k−M+1}^k.   (1.3.4)
The first sum on the final right-hand side can be rewritten as follows:

Σ_{j=0}^{k−M} = Σ_{j=0}^{k−M} g_j (g_{k−j}^{n∗}/g_{k−j})(g_{k−j}/g_k),

where the ratio g_{k−j}^{n∗}/g_{k−j} can be made arbitrarily close to n for all j ≤ k − M by choosing M large enough, whereas

Σ_{j=0}^{k−M} g_j g_{k−j}/g_k = Σ_{j=0}^k g_j g_{k−j}/g_k − Σ_{j=k−M+1}^k g_j g_{k−j}/g_k → 1 + G(M) as k → ∞.   (1.3.5)

The latter relation follows from the fact that, by virtue of (1.3.2) and (1.3.1), for any fixed M one has, as k → ∞,

Σ_{j=0}^k g_j g_{k−j}/g_k = g_k^{2∗}/g_k → 2 and Σ_{j=k−M+1}^k g_j g_{k−j}/g_k ∼ Σ_{j=k−M+1}^k g_{k−j} → 1 − G(M).   (1.3.6)

Since G(M) = Σ_{j=M}^∞ g_j can be made arbitrarily small by choosing M large enough, the first sum on the final right-hand side of (1.3.4) can be made arbitrarily close to n for all sufficiently large k. The second sum on the final right-hand side of (1.3.4) satisfies the relation

Σ_{j=k−M+1}^k g_j g_{k−j}^{n∗}/g_k ∼ Σ_{j=0}^{M−1} g_j^{n∗}   (1.3.7)

as k → ∞ and, therefore, can be made arbitrarily close to 1 by choosing M large enough, since Σ_{j=0}^∞ g_j^{n∗} = 1. As the left-hand side of (1.3.4) does not depend on M, we have proved that g_k^{(n+1)∗}/g_k → n + 1 as k → ∞.
(ii) Put

α_n := sup_{k≥M} g_k^{n∗}/g_k.

Then

α_{n+1} = sup_{k≥M} Σ_{j=0}^k g_j g_{k−j}^{n∗}/g_k
≤ sup_{k≥M} Σ_{j=0}^{k−M} g_j (g_{k−j}^{n∗}/g_{k−j})(g_{k−j}/g_k) + sup_{k≥M} Σ_{j=k−M+1}^k g_j g_{k−j}^{n∗}/g_k.   (1.3.8)

One can easily see from (1.3.5) that, for large enough M, the first supremum in the second line of (1.3.8) does not exceed

α_n sup_{k≥M} Σ_{j=0}^{k−M} g_j g_{k−j}/g_k ≤ α_n (1 + ε),

while the second does not exceed 1 + ε owing to (1.3.7). Thus we have obtained

α_{n+1} ≤ α_n (1 + ε) + (1 + ε) for n ≥ 1.

Since α₁ = 1, it follows that α_n ≤ ((1 + ε)^{n+1} − 1)/ε. The theorem is proved.

Theorem 1.2.17 carries over to the case of sequences in an obvious way. Nor is it difficult to obtain an analogue of Theorem 1.2.21. An arithmetic distribution G = {g_k} with the properties (1.3.1), (1.3.2) could be called locally subexponential.
1.3.2 Non-lattice locally subexponential distributions

There are two ways of extending the concept of local subexponentiality to the case of non-lattice distributions. The first is to consider distributions having densities. A density h(t) = G(dt)/dt = −G′(t) will be called subexponential if, for any fixed v, as t → ∞,

h(t + v)/h(t) → 1, h^{2∗}(t) := ∫_0^t h(u)h(t − u) du ∼ 2h(t).   (1.3.9)

By Theorem 1.2.4(i) the convergence in the first relation in (1.3.9) is uniform in v ∈ [0, 1]. For distributions with subexponential densities a complete analogue of Theorem 1.3.2 holds true, the proof of which basically coincides with the argument used in the discrete case.

The second way of extending the concept of local subexponentiality does not
require the existence of a density. For a Δ > 0, denote by Δ[t) the half-open interval Δ[t) := [t, t + Δ) of length Δ, and let Δ₁[t) := [t, t + 1).

Definition 1.3.3. A distribution G on [0, ∞) is said to be locally subexponential (belonging to the class Sloc) if, for any fixed Δ > 0,

lim_{t→∞} G(Δ[t))/G(Δ₁[t)) = Δ,   (1.3.10)
lim_{t→∞} G^{2∗}(Δ[t))/G(Δ[t)) = 2.   (1.3.11)
It is not difficult to verify that if, for any fixed Δ > 0, as t → ∞,

G(Δ[t)) ∼ Δt^{−α−1} L(t), α > 1,   (1.3.12)

where L(t) is an s.v.f., then G ∈ Sloc (this will also follow from Theorem 1.3.6 below). Evidently, the relation (1.3.12) always holds in the case when

G(t) = ∫_t^∞ u^{−α−1} L(u) du.

It is also clear that if G has a subexponential density then G ∈ Sloc. If, for an arithmetic distribution G = {g_k; k ≥ 0}, the sequence {g_k} is subexponential, then we will also write G ∈ Sloc, although the property (1.3.10) will in this case hold only for integer-valued Δ ≥ 1 while (1.3.11) will be true for any Δ ≥ 1. In what follows, in the case of arithmetic distributions G we will always assume integer-valued Δ ≥ 1 in (1.3.10), (1.3.11).

The analogues of the main assertions about the properties of distributions from the class Sloc that we established in the discrete case have the following form in the non-lattice case.

Theorem 1.3.4. Let G ∈ Sloc. Then:

(i) for any fixed n ≥ 2 and Δ ≥ 1,

lim_{t→∞} G^{n∗}(Δ[t))/G(Δ[t)) = n;   (1.3.13)

(ii) for any ε > 0 there exist an M = M(ε) and a b = b(ε) such that, for all n ≥ 2, t ≥ M and any fixed Δ ≥ 1,

G^{n∗}(Δ[t))/G(Δ[t)) ≤ b(1 + ε)^n.
Proof. The proof of the theorem follows the same line of reasoning as that of Theorem 1.3.2.

(i) Here we will again use induction. Suppose that (1.3.13) is correct. Then, for M ∈ (0, t),

G^{(n+1)∗}(Δ[t))/G(Δ[t)) = ∫_0^{t+Δ} (G^{n∗}(Δ[t − u))/G(Δ[t))) G(du) = ∫_0^{t−M} + ∫_{t−M}^{t+Δ}.   (1.3.14)

In the first integral,

∫_0^{t−M} (G^{n∗}(Δ[t − u))/G(Δ[t − u)))(G(Δ[t − u))/G(Δ[t))) G(du),   (1.3.15)

the ratio G^{n∗}(Δ[t − u))/G(Δ[t − u)) can, by the induction hypothesis, be made arbitrarily close to n for all u ≤ t − M by choosing M large enough. Further,

∫_0^{t−M} (G(Δ[t − u))/G(Δ[t))) G(du) = G^{2∗}(Δ[t))/G(Δ[t)) − ∫_{t−M}^{t+Δ}.   (1.3.16)
To estimate the integral

∫_{t−M}^{t+Δ} (G(Δ[t − u))/G(Δ[t))) G(du)   (1.3.17)

we need the following lemma.

Lemma 1.3.5. Let the relation (1.3.10) hold for the distribution G, and let Q(v) be a bounded non-increasing function on R. Then, for any fixed Δ, M > 0,

lim_{t→∞} ∫_{t−M}^{t+Δ} Q(t − u) G(du)/G(Δ[t)) = (1/Δ) ∫_{−Δ}^M Q(v) dv.   (1.3.18)
Proof of Lemma 1.3.5. Consider a finite partition of [t − M, t + Δ) into K half-open intervals Δ₁, . . . , Δ_K of the form

Δ_k = [t − M + (k − 1)δ, t − M + kδ), k = 1, . . . , K, δ = (M + Δ)/K.

Then

Σ_{k=1}^K Q(M − δ(k − 1)) G(Δ_k)/G(Δ[t)) ≤ ∫_{t−M}^{t+Δ} Q(t − u) G(du)/G(Δ[t)) ≤ Σ_{k=1}^K Q(M − δk) G(Δ_k)/G(Δ[t)).   (1.3.19)

But G(Δ_k)/G(Δ[t)) → δ/Δ as t → ∞ by virtue of (1.3.10), so the sum on the left-hand side of (1.3.19) will converge, as t → ∞, to

(δ/Δ) Σ_{k=1}^K Q(M − δ(k − 1)).

A similar relation holds for the sum in the second line of (1.3.19). Now each of these sums can be made, by choosing a large enough K, arbitrarily close to

(1/Δ) ∫_{−Δ}^M Q(v) dv.

Since the integral on the first right-hand side of (1.3.19) does not depend on K, the lemma is proved.

Now we return to the proof of Theorem 1.3.4. Applying Lemma 1.3.5 to the functions Q(t) = G(t) and Q(t) = G(t + Δ), we obtain that the integral (1.3.17) converges as t → ∞ to

(1/Δ)(∫_{−Δ}^M G(v) dv − ∫_0^{M+Δ} G(v) dv) = (1/Δ)(∫_{−Δ}^0 G(v) dv − ∫_M^{M+Δ} G(v) dv)
= 1 − (1/Δ) ∫_M^{M+Δ} G(v) dv = 1 − r(M),
where r(M) ≤ G(M). From this it follows that, by choosing M large enough, one can make the integral (1.3.17) arbitrarily close to 1 as t → ∞, and therefore the integral on the left-hand side of (1.3.16) will, by virtue of (1.3.11), also be arbitrarily close to 1, while the first integral on the second right-hand side of (1.3.14) will be arbitrarily close to n. Again using Lemma 1.3.5, we obtain in a similar way that for the last integral in (1.3.14) one has

lim_{t→∞} ∫_{t−M}^{t+Δ} (G^{n∗}(Δ[t − u))/G(Δ[t))) G(du) = 1 − r_n(M),

where r_n(M) ≤ G^{n∗}(M) → 0 as M → ∞. Hence, for large t, the last integral in (1.3.14) can be made, by choosing large enough M, arbitrarily close to 1. Thus the left-hand side of (1.3.14) converges to n + 1 as t → ∞.

(ii) The proof of this assertion repeats that of Theorem 1.3.2(ii), with the same modifications related to Lemma 1.3.5 as were made in the proof of part (i). The theorem is proved.

A sufficient condition for a distribution to be locally subexponential is contained in the following assertion.
Theorem 1.3.6. Let G ∈ S and, for each fixed Δ > 0, as t → ∞,

G(Δ[t)) ∼ ΔG(t)v(t),   (1.3.20)

where v(t) → 0 is an upper-power function.¹ Then G ∈ Sloc.

Remark 1.3.7. If G ∈ R and the function G(t) is 'differentiable at infinity', i.e. for each fixed Δ > 0, as t → ∞,

G(t) − G(t + Δ) ∼ αΔt^{−α−1}L(t) = αΔG(t)/t,

then the conditions of Theorem 1.3.6 are clearly met. If G ∈ Se and the relation (1.2.29) from Definition 1.2.22 holds for each fixed Δ and without the additive term o(1) (i.e. l(t + Δ) − l(t) ∼ Δv(t), v(t) = αl(t)/t), then

G(t) − G(t + Δ) = e^{−l(t)}(1 − e^{l(t)−l(t+Δ)}) = e^{−l(t)}(1 − e^{−Δv(t)(1+o(1))}) ∼ G(t)Δv(t),

so that again the conditions of Theorem 1.3.6 are satisfied.
0
where by virtue of (1.3.20), (1.2.20) and the fact that v(t) is an upper-power function one has q = P(ζ1 ∈Δ[t/2), ζ2 ∈ Δ[t/2), Z2 < t + Δ) G2 (Δ[t/2)) cv 2 (t)G2 (t/2) = o(v(t)G(t)). If M = M (t) → ∞ slowly enough as t → ∞ then, since v(t) and G(t) are l.c. functions, we obtain from (1.3.20) that M
M G(Δ[t − y)) G(dy) ∼ Δ
0
v(t − y)G(t − y) G(dy) ∼ Δv(t)G(t). 0
Moreover, by (1.2.21) t/2 t/2 G(Δ[t − y)) G(dy) < cΔv(t) G(t − y) G(dy) = o(v(t)G(t)). M 1
See Definition 1.2.20 on p. 28.
M
Hence G^{2∗}(Δ[t)) ∼ 2Δv(t)G(t) ∼ 2G(Δ[t)), which immediately implies (1.3.11). The theorem is proved.

Subexponential and locally subexponential distributions will be used in §§ 4.8, 7.5, 7.6 and 8.2.
1.4 Asymptotic properties of ‘functions of distributions’

As before, let G denote the distribution of a r.v. ζ, so that G(B) = P(ζ ∈ B) for any Borel set B, and let G denote the tail of this distribution: G(t) = G([t, ∞)). Let g(λ) := Ee^{iλζ} be the characteristic function (ch.f.) of the distribution G, and let A(w) be a function of the complex variable w. In a number of problems (see e.g. § 7.5), the form of a desired distribution can be obtained in terms of certain transforms of that distribution (e.g. its ch.f.) that have the form A(g(λ)). The question is, what can one say about the asymptotics of the tails of the desired distribution given that the asymptotics of G(t) are known? It is the distribution that corresponds to the ch.f. A(g(λ)) (or to some other transform) to which we refer in the heading of this section. The next theorem answers, to some extent, the question posed.

Theorem 1.4.1. Let the distribution G of the r.v. ζ be subexponential, and let the function A(w) be analytic in the disk |w| ≤ 1. Then there exists a finite measure A such that the function A(g(λ)) admits a representation of the form

A(g(λ)) = ∫ e^{iλx} A(dx), Im λ = 0,   (1.4.1)

where A(t) := A([t, ∞)) ∼ A′(1)G(t) as t → ∞.

Proof. Since the domain of analyticity is always open, the function A(w) will be analytic in the region |w| ≤ 1 + δ for some δ > 0 and the following expansion will hold true:

A(w) = Σ_{k=0}^∞ A_k w^k, where |A_k| < c(1 + δ)^{−k}.
Hence the measure

A := Σ_{k=0}^∞ A_k G^{k∗}

is finite, with ∫ |A(dx)| ≤ Σ_{k=0}^∞ |A_k|. Moreover, the function

A(g(λ)) = Σ_{k=0}^∞ A_k g^k(λ)

is the Fourier transform of the measure A. This proves (1.4.1). Further,

A(t) = Σ_{k=0}^∞ A_k G^{k∗}(t),

where, due to the subexponentiality of G, for each k ≥ 1 one has

G^{k∗}(t)/G(t) → k as t → ∞.

Moreover, by Theorem 1.2.12(iv), for any ε > 0,

G^{k∗}(t)/G(t) ≤ b(1 + ε)^k.

Choosing ε = δ/2, we obtain from the dominated convergence theorem that, as t → ∞,

A(t)/G(t) = Σ_{k=0}^∞ A_k G^{k∗}(t)/G(t) → Σ_{k=0}^∞ A_k k = A′(1).

The theorem is proved.

One can also find the proof of (a somewhat more general form of) Theorem 1.4.1 in [84].

For locally subexponential distributions, we have the following analogue of Theorem 1.4.1. As before, let Δ[t) = [t, t + Δ) be a half-open interval of length Δ > 0.

Theorem 1.4.2. Let G ∈ Sloc and let the function A(w) be analytic in the unit disk |w| ≤ 1. Then there exists a finite measure A such that

A(g(λ)) = ∫ e^{iλx} A(dx)

and, for any fixed Δ, as t → ∞,

A(Δ[t)) ∼ A′(1)G(Δ[t)).
(1.4.2)
Proof. The proof of the theorem is quite similar to that of Theorem 1.4.1. As before, the measure A has the form

A = Σ_{k=0}^∞ A_k G^{k∗},

where for the coefficients A_k in the expansion of the function A(w) we have the inequalities |A_k| ≤ c(1 + δ)^{−k} for some δ > 0. Then one has to make use of Theorem 1.3.4, which states that

G^{k∗}(Δ[t))/G(Δ[t)) → k as t → ∞

and

G^{k∗}(Δ[t))/G(Δ[t)) < b(1 + ε)^k for k ≥ 2, t ≥ M.

Setting ε = δ/2, we can use the dominated convergence theorem to obtain (1.4.2). The theorem is proved.

One also has complete analogues of Theorem 1.4.2 in the case when the distribution G has a subexponential density, and also in the case when the distribution G = {g_k; k ≥ 0} is discrete and the sequence {g_k} is subexponential. Thus, in the arithmetic case, we have

Theorem 1.4.3. If the sequence {g_k} is subexponential and the function A(w) is analytic in the unit disk |w| ≤ 1, then there exists a finite discrete measure A = {a_k; k ≥ 0} such that A(g(w)) = Σ_{k=0}^∞ a_k w^k, where g(w) = Σ_{k=0}^∞ g_k w^k, and a_k ∼ A′(1)g_k as k → ∞.

Proof. The proof of the theorem repeats that of Theorem 1.4.2. One just has to use Theorem 1.3.2 instead of Theorem 1.3.4.

There arises the following natural question: is it possible to relax the assumption on the analyticity of A(w) in the above Theorems 1.4.1–1.4.3? We will concentrate on Theorems 1.4.1 and 1.4.3. They are the simplest in a series of results that enable one to find, given the asymptotics of G(t), the asymptotics of A(t) for the 'preimage' of the function A(g(λ)), where the class of functions A can be broader than that in Theorems 1.4.1 and 1.4.3 (see e.g. [83, 35, 42]). We will present here without proof some 'local theorems' for arithmetic distributions. These results are quite useful in asymptotic analysis (see e.g. § 8.2).

Theorem 1.4.4. Let {g_k = G({k}); k ≥ 0} be a probability distribution on the set of integers, let the sequence {g_k} be subexponential, g(w) = Σ_{k=0}^∞ g_k w^k, and let A(w) be a function analytic on the range of values of g(y) on the disk |y| ≤ 1. Then there exists a finite measure A = {a_k; k ≥ 0} such that

A(g(w)) = Σ_{k=0}^∞ a_k w^k   (1.4.3)

and, as k → ∞,

a_k ∼ A′(1)g_k.   (1.4.4)
If

A(w) = Σ_{k=0}^∞ A_k w^k, Σ_{k=0}^∞ |A_k| < ∞,   (1.4.5)

then the measure A can be identified with the measure Σ_k A_k G^{k∗}.

For the proof of the theorem, see [83].
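The coefficient asymptotics (1.4.4) are easy to observe numerically. In the sketch below (ours; the function A(w) = 1/(2 − w), which is analytic in a disk of radius 2 and hence on the range of g over the unit disk, and the truncation level are arbitrary illustrative choices), the coefficients a_k of A(g(w)) are obtained from the recursion (2 − g₀)a_k = 1_{k=0} + Σ_{j=1}^k g_j a_{k−j}, which follows from (2 − g(w))A(g(w)) = 1, and a_k/g_k is compared with A′(1) = 1:

```python
# Coefficients a_k of A(g(w)) for A(w) = 1/(2 - w), compared with A'(1) g_k.
# Illustrative choice: A'(w) = 1/(2 - w)^2, so A'(1) = 1.
N = 2001
raw = [(k + 1) ** -4.0 for k in range(N)]
s = sum(raw)
g = [x / s for x in raw]                 # subexponential-type g_k ~ c k^(-4)

a = [0.0] * N
a[0] = 1.0 / (2.0 - g[0])
for k in range(1, N):
    # power-series recursion from (2 - g(w)) * A(g(w)) = 1
    a[k] = sum(g[j] * a[k - j] for j in range(1, k + 1)) / (2.0 - g[0])

for k in (100, 1000, 2000):
    print(k, a[k] / g[k])                # ratios approach A'(1) = 1
```

The same recursion works for any A with a convergent power series; only the value A′(1) in the limit changes.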
It is not hard to see that, using a change of variables, one can extend the assertion of the theorem to the case when, instead of the assumption on the subexponentiality of the sequence {g_k}, one has

lim_{k→∞} g_{k+1}/g_k = 1/r, lim_{k→∞} g_k^{2∗}/g_k = 2g(r), r > 1,

and the function A(w) is analytic on the set of values g(y), |y| ≤ r. In [83] one can also find similar assertions for densities and for discrete distributions on the whole axis, under the assumption that conditions (1.3.1) and (1.3.2) hold as k → ±∞. Then the assertion (1.4.4) also holds as k → ±∞.

In the case when g_k ∼ ck^{−α−1} as k → ∞, one can consider, instead of functions A(w) that are analytic on the set of values g(y), |y| ≤ 1, functions of the form (1.4.5) as well, where

Σ_{k=0}^∞ |k|^r |A_k| < ∞, r > α + 1.

In this case, the assertion (1.4.4) remains valid (see Theorem 6 of [83]).

Next we will present an analogue of Theorem 1.4.4 for an extension of the class of regularly varying functions. Put d(b_k) := b_k − b_{k+1}.
∞
k=n
n=1 gn
< ∞, which is always true.
1.4 Asymptotic properties of ‘functions of distributions’
55
If (a2 ) is true then ∞
|d(kgk )|
k=n
∞
kgk < c
k=n
∞
k −1 (ln k)−2−ε ∼ c1 (ln n)−1−ε ,
k=n
and condition (a) will be satisfied since
∞ n=1
n−1 (ln n)−1−ε < ∞.
Theorem 1.4.5, just as Theorem 1.4.4, can be extended to the case of distributions on the whole real line (see [35, 40]). Note that the assertion of Theorems 1.4.4 and 1.4.5 on the representation (1.4.3) for the function A(g(w)) is a special case of the well-known Wiener–Levy theorem on the rings (Banach algebras) B of the generating functions of absolutely convergent series, which states the following. Let g ∈ B (i.e. g(w) = gk w k , |gk | < ∞) and let A(w) be a function analytic in a simply connected domain D, situated, generally speaking, on a multi-sheeted Riemann surface. If one can consider the range of values {g(w); |w| 1} as a set lying in D then A(g) also belongs to B, i.e. one has a representation of the form A(g(w)) =
∞
ak w k ,
k=1
∞
|ak | < ∞.
k=1
There arises a natural question: to what extent will the assertions of Theorems 1.4.2 and 1.4.3 remain true in the case of signed measures? For example, when studying the asymptotics of the distribution of the first passage time (see [58]), there arises a problem concerning an analogue of Theorem 1.4.4 in the case when the terms in the sequence {gk } can assume both positive and negative values. We will present here an analogue of Theorem 1.4.3, restricting ourselves to considering regularly varying sequences. In what follows, the relation ak ∼ cbk with c = 0 will be understood as ak = o(bk ) as k → ∞. Let L be a s.v.f., 0 < |c| < ∞, and −α
gn ∼ cn
L(n),
α > 1;
g n := |gn |,
g(w) :=
∞
g k wk .
(1.4.6)
k=0
Theorem 1.4.6. Suppose that {gk ; k 0} is a real-valued sequence of the form (1.4.6) and that a function A(w) is analytic in the disk |w| g(1). Then A(g(w)) can be represented as the series A(g(w)) =
∞ k=0
ak w k ,
∞ k=0
and, as k → ∞, ak ∼ A (g(1))gk .
|ak | < ∞,
56
Preliminaries
Proof. As before, let {gkn∗ } denote the nth convolution of the sequence {gk } with itself: (n+1)∗
=
gk
k
n 1,
gjn∗ gk−j ,
j=0
so that k gkn∗ wk = g n (w). It is not hard to see that gk2∗ ∼ 2g(1)gk and then, by induction, that gkn∗ ∼ ng n−1 (1)gk
as
k→∞
(1.4.7)
n∗ } is the for any fixed n. Further, it is evident that |gkn∗ | < g n∗ k where {g k n∗ k nth convolution of the sequence {g k := |gk |} with itself, so that gk w = g n (w). Clearly, the sequence {g k /g(1)} specifies a subexponential distribution and hence, by virtue of Theorem 1.3.2(ii), for any ε > 0 and all large enough k and n one has n
n−1 g n∗ (1) (1 + ε/2) . k gk g
(1.4.8)
Furthermore, from the relation A(g(w)) =
An g n (w)
we obtain ak =
∞
An gkn∗ =
n=0
+
nN
,
(1.4.9)
n>N
where, for each fixed N , one has from (1.4.7) that 1 → An ng n−1 (1) = A (g(1)) + rN gk nN
as
k→∞
(1.4.10)
nN
with rN → 0 as N → ∞. For the last sum in (1.4.9) we find by virtue of (1.4.8) that |An gkn∗ | |An | g k g n−1 (1)(1 + ε)n/2 . n>N
n>N
n>N
Since |An | c1 [g(1)(1 + ε)]−n for a suitable ε > 0, we see that the sequence |An |g n (1)(1 + ε)n/2 decays exponentially fast as n → ∞. Therefore c2 gk (1 + ε)−N/2 . (1.4.11) n>N
Comparing the relations (1.4.9)–(1.4.11) and noting that N can be chosen arbitrarily large, we obtain the assertion of the theorem.
1.5 The convergence of distributions of sums of random variables with regularly varying tails to stable laws
As is well known, in the case Eξ² < ∞ one has the central limit theorem, which states that the distributions of the normalized sums S_n = Σ_{i=1}^n ξ_i of independent r.v.'s ξ_i distributed as ξ converge to the normal law as n → ∞. If Eξ² = ∞ then the situation noticeably changes. In this case, the convergence of the distributions of the appropriately normalized sums S_n to a limiting law will only take place for r.v.'s with regularly varying distribution tails.

From the proof of the central limit theorem by the method of characteristic functions, it is seen that the nature of the limiting distribution for S_n is defined by the behaviour of the ch.f.

f(λ) := Ee^{iλξ}, λ ∈ R,

of ξ in the vicinity of zero. If Eξ = 0 and Eξ² = d < ∞ then, as n → ∞,

f(μ/√n) = 1 + f′(0)μ/√n + f″(0)μ²/(2n) + o(1/n) = 1 − dμ²/(2n) + o(1/n).   (1.5.1)

It is this relation that defines the asymptotic behaviour of the ch.f. f^n(μ/√n) of S_n/√n, which leads to the limiting normal law. In the case Eξ² = ∞ (so that f″(0) does not exist) we will use the same method, but, in order to obtain the 'right' asymptotics of f(μ/b(n)) under a suitable scaling b(n), we will have to impose regular variation conditions on the 'two-sided' tail

F(t) := F((−∞, −t)) + F([t, ∞)) = P(ξ ∉ [−t, t)),
t > 0.
As before, the functions

F₊(t) := F([t, ∞)) = P(ξ ≥ t), F₋(t) := F((−∞, −t)) = P(ξ < −t)

will be referred to as the right and the left tails of the distribution of ξ, respectively. Assume that the following condition holds for some α ∈ (0, 2] and ρ ∈ [−1, 1]:

[R_{α,ρ}] The two-sided tail F(t) = F₋(t) + F₊(t) is an r.v.f. at infinity, i.e. it has a representation of the form

F(t) = t^{−α} L_F(t), α ∈ (0, 2],   (1.5.2)

where L_F(t) is an s.v.f.; in addition, there exists the limit

lim_{t→∞} F₊(t)/F(t) =: ρ₊ = (ρ + 1)/2 ∈ [0, 1].   (1.5.3)

If ρ₊ > 0 then clearly the right tail F₊(t) (just like F(t)) is an r.v.f., i.e. it admits a representation of the form

F₊(t) = V(t) := t^{−α} L(t), α ∈ (0, 2], L(t) ∼ ρ₊ L_F(t)
(following § 1.1, we use the symbol V to denote an r.v.f.). If ρ₊ = 0 then the right tail F₊(t) = o(F(t)) need not be assumed to be regularly varying. It follows from (1.5.3) that there also exists the limit

lim_{t→∞} F₋(t)/F(t) =: ρ₋ = 1 − ρ₊.

If ρ₋ > 0 then, similarly, the left tail F₋(t) admits a representation of the form

F₋(t) = W(t) := t^{−α} L_W(t), α ∈ (0, 2], L_W(t) ∼ ρ₋ L_F(t).

If ρ₋ = 0 then the left tail F₋(t) = o(F(t)) is not assumed to be regularly varying. The parameters ρ± are connected to the parameter ρ from condition [R_{α,ρ}] by the relations ρ = ρ₊ − ρ₋ = 2ρ₊ − 1.

Evidently, for α < 2 one has Eξ² = ∞, so that the representation (1.5.1) ceases to hold and the central limit theorem is inapplicable. In what follows, in situations where Eξ exists and is finite we will always assume, without loss of generality, that Eξ = 0.

Since F(t) is non-increasing, the (generalized) inverse function F^{(−1)}(u), understood as F^{(−1)}(u) := inf{t > 0 : F(t) < u}, always exists. If F(t) is strictly monotone and continuous then b = F^{(−1)}(u) is the unique solution of the equation

F(b) = u, u ∈ (0, 1).
Put

ζ_n := S_n/b(n),

where the scaling factor b(n) is defined in the case α < 2 by

b(n) := F^{(−1)}(1/n).   (1.5.4)

It is obvious that in the case ρ₊ > 0 the scaling factor b(n) is connected to the function σ(n) = V^{(−1)}(1/n) introduced in Theorem 1.1.4(v) by σ(ρ₊^{−1} n) ∼ b(n). For α = 2 we put

b(n) := Y^{(−1)}(1/n),   (1.5.5)

where

Y(t) := 2t^{−2} ∫_0^t yF(y) dy = 2t^{−2} (∫_0^t yV(y) dy + ∫_0^t yW(y) dy)
∼ t^{−2} E[ξ²; −t ≤ ξ < t] =: t^{−2} L_Y(t)   (1.5.6)
and L_Y is an s.v.f. (see Theorem 1.1.4(iv)). From Theorem 1.1.4(v) it follows also that if (1.5.2) holds then

b(n) = n^{1/α} L_b(n), α ≤ 2,

where L_b is an s.v.f. Recall the notation V_I(t) = ∫_0^t V(y) dy, V^I(t) = ∫_t^∞ V(y) dy.

Theorem 1.5.1. Let condition [R_{α,ρ}] be satisfied. Then the following assertions hold true.

(i) For α ∈ (0, 2), α ≠ 1, and the scaling factor (1.5.4), we have

ζ_n ⇒ ζ^{(α,ρ)} as n → ∞,   (1.5.7)
where the distribution F_{α,ρ} of the r.v. ζ^{(α,ρ)} depends only on the parameters α and ρ and has a ch.f. f_{(α,ρ)}(λ) given by

f_{(α,ρ)}(λ) = Ee^{iλζ^{(α,ρ)}} = exp{|λ|^α B(α, ρ, φ)},   (1.5.8)

where φ = sign λ,

B(α, ρ, φ) = Γ(1 − α)(iρφ sin(απ/2) − cos(απ/2))   (1.5.9)

and for α ∈ (1, 2) we put Γ(1 − α) = Γ(2 − α)/(1 − α).

(ii) When α = 1, for the sequence ζ_n with scaling factor (1.5.4) to converge to a limiting law the former, generally speaking, needs to be centred. More precisely, we have

ζ_n − A_n ⇒ ζ^{(1,ρ)} as n → ∞,   (1.5.10)

where

A_n := (n/b(n))[V_I(b(n)) − W_I(b(n))] − ρC,   (1.5.11)

C ≈ 0.5772 is the Euler constant and

f_{(1,ρ)}(λ) = Ee^{iλζ^{(1,ρ)}} = exp{−π|λ|/2 − iρλ ln |λ|}.   (1.5.12)

If n[V_I(b(n)) − W_I(b(n))] = o(b(n)), then ρ = 0 and one can put A_n = 0.
If Eξ = 0 then

A_n = (n/b(n))[W^I(b(n)) − V^I(b(n))] − ρC.

If Eξ = 0 and ρ ≠ 0, then ρA_n → −∞ as n → ∞.

(iii) For α = 2 and the scaling factor (1.5.5),

ζ_n ⇒ ζ^{(2,ρ)} = ζ as n → ∞, f_{(2,ρ)}(λ) = Ee^{iλζ} = e^{−λ²/2},
so that $\zeta$ has the standard normal distribution, which is independent of $\rho$.

Remark 1.5.2. It is not difficult to verify (cf. Lemma 2.2.1 of [286]) that in the 'extreme' cases $\rho=\pm1$ the ch.f.'s (1.5.8), (1.5.12) of stable distributions with $\alpha<2$ admit the following simpler representations:
$$f^{(\alpha,1)}(\lambda)=\exp\bigl\{-\Gamma(1-\alpha)(-i\lambda)^\alpha\bigr\},\quad\alpha\in(0,2),\ \alpha\ne1;\qquad f^{(1,1)}(\lambda)=\exp\bigl\{(-i\lambda)\ln(-i\lambda)\bigr\};$$
$$f^{(\alpha,-1)}(\lambda)=f^{(\alpha,1)}(-\lambda),\quad\alpha\le2.$$

Remark 1.5.3. From the representation (1.5.11) for the centring sequence $\{A_n\}$ in the case $\alpha=1$ it follows that if there exists $E\xi=0$ then the boundedness of the sequence implies that $\rho=0$. The converse assertion, that in the case $E\xi=0$ the relation $\rho=0$ implies the boundedness of $\{A_n\}$, is false. Indeed, let $\xi$ be an r.v. with $E\xi=0$ such that for $t\ge t_0>0$ one has
$$V(t)=\frac{1}{2t\ln^2t},\qquad W(t)=V(t)\Bigl(1+\frac{1}{L_2(t)}\Bigr),\qquad L_2(t):=\ln\ln t.$$
Then $\rho=0$, $F(t)\sim t^{-1}\ln^{-2}t$, $b(n)\sim n\ln^{-2}n$ and
$$V^I(t)=\frac{1}{2\ln t},\qquad W^I(t)=V^I(t)+\frac{1+o(1)}{L_2(t)\ln t},$$
so that
$$W^I(t)-V^I(t)\sim\frac{1}{L_2(t)\ln t}.$$
Therefore
$$A_n=\frac{(1+o(1))\ln^2n}{L_2(b(n))\ln b(n)}-\rho C\sim\frac{\ln n}{\ln\ln n}\to\infty\quad\text{as}\quad n\to\infty.$$
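The formulas (1.5.8), (1.5.9) lend themselves to a quick numerical sanity check. The following Python sketch (an illustration added for the reader; the function names and tolerances are our own choices, not taken from the monograph) verifies two properties that any genuine ch.f. must have: $|f(\lambda)|\le1$ and the Hermitian symmetry $f(-\lambda)=\overline{f(\lambda)}$.

```python
import cmath
import math

def B(alpha, rho, phi):
    # B(alpha, rho, phi) from (1.5.9); for alpha in (1,2) use the convention
    # Gamma(1 - alpha) = Gamma(2 - alpha)/(1 - alpha) stated in the theorem.
    g = math.gamma(1 - alpha) if alpha < 1 else math.gamma(2 - alpha) / (1 - alpha)
    return g * (1j * rho * phi * math.sin(alpha * math.pi / 2)
                - math.cos(alpha * math.pi / 2))

def f_stable(lam, alpha, rho):
    # ch.f. (1.5.8) of the stable law F_{alpha,rho}, alpha != 1
    if lam == 0:
        return 1.0 + 0j
    phi = 1.0 if lam > 0 else -1.0
    return cmath.exp(abs(lam) ** alpha * B(alpha, rho, phi))

for alpha in (0.5, 1.5):
    for rho in (-1.0, 0.0, 0.7):
        for lam in (0.3, 1.0, 4.0):
            v = f_stable(lam, alpha, rho)
            assert abs(v) <= 1 + 1e-12                                # |f| <= 1
            assert abs(f_stable(-lam, alpha, rho) - v.conjugate()) < 1e-12
```

The check exploits the fact that the real part of $B(\alpha,\rho,\varphi)$ is negative for $\alpha\in(0,2)$, $\alpha\ne1$, so the exponent in (1.5.8) has non-positive real part.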
Remark 1.5.4. The last assertion of the theorem shows that the limiting distribution can be normal even in the case when $\xi$ has infinite variance.

Remark 1.5.5. If $\alpha<2$ then from the properties of s.v.f.'s (Theorem 1.1.4(iv)) we have that, as $t\to\infty$,
$$\int_0^t yF(y)\,dy=\int_0^t y^{1-\alpha}L_F(y)\,dy\sim\frac{1}{2-\alpha}\,t^{2-\alpha}L_F(t)=\frac{1}{2-\alpha}\,t^2F(t).$$
1.5 Convergence of distributions of sums of r.v.’s to stable laws
Hence for $\alpha<2$ one has $Y(t)\sim2(2-\alpha)^{-1}F(t)$,
$$Y^{(-1)}\Bigl(\frac{1}{n}\Bigr)\sim F^{(-1)}\Bigl(\frac{2-\alpha}{2n}\Bigr)\sim\Bigl(\frac{2}{2-\alpha}\Bigr)^{1/\alpha}F^{(-1)}\Bigl(\frac{1}{n}\Bigr)$$
(cf. (1.5.4)). However, when $\alpha=2$ and $d:=E\xi^2<\infty$, we have
$$Y(t)\sim t^{-2}d,\qquad b(n)=Y^{(-1)}\Bigl(\frac{1}{n}\Bigr)\sim\sqrt{nd}.$$
Thus, the scaling (1.5.5) is 'transitional' between the scaling (1.5.4) (up to a constant factor $(2/(2-\alpha))^{1/\alpha}$) and the standard scaling $\sqrt{nd}$ in the central limit theorem in the case $E\xi^2<\infty$. This also means that the scaling (1.5.5) is 'universal' and can be used for all $\alpha\le2$ (as it is in many texts on probability theory). However, as we will see later on, for $\alpha<2$ the scaling (1.5.4) is simpler and easier to deal with, and this is why it will be used in the present exposition.

We will present here a proof of Theorem 1.5.1 that essentially uses the explicit form of the scaling sequence $b(n)$ and thereby helps to establish a direct connection between the zones of 'normal' deviations (as in Theorem 1.5.1) and large deviations (as in Chapters 2 and 3).

Recall that $F_{\alpha,\rho}$ denotes the distribution of $\zeta^{(\alpha,\rho)}$. The parameter $\alpha$ assumes values from the half-interval $(0,2]$ and the parameter $\rho=\rho_+-\rho_-$ can assume any value from the closed interval $[-1,1]$. The role of the parameters $\alpha$ and $\rho$ will be clarified later, at the end of this section.

It follows from Theorem 1.5.1 that each law $F_{\alpha,\rho}$, $0<\alpha\le2$, $-1\le\rho\le1$, is limiting for the distributions of suitably normalized sums of i.i.d. r.v.'s. The law of large numbers implies that the degenerate distribution $I_a$ concentrated at some point $a$ is also a limiting one. The totality of all these distributions will be denoted by $S_0$. Further, it is not hard to see that if $F\in S_0$ then a distribution obtained from $F$ by scale and shift transformations, i.e. a distribution $F_{\{a,b\}}$ given, for some fixed $b>0$ and $a$, by the relation
$$F_{\{a,b\}}(B):=F\Bigl(\frac{B-a}{b}\Bigr),\qquad\text{where}\quad\frac{B-a}{b}=\{u\in\mathbb{R}:ub+a\in B\},$$
is also limiting (for the distributions of $(S_n-a_n)/b_n$ as $n\to\infty$, with suitable $\{a_n\}$ and $\{b_n\}$). It turns out that the class $S$ of all distributions obtained by such an extension of $S_0$ includes all the limiting laws for sums of i.i.d. r.v.'s.
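The comparison of the two scalings in Remark 1.5.5 can be illustrated numerically for a pure Pareto tail. In the sketch below (illustrative Python added here; the tail $F(t)=\min(1,t^{-\alpha})$ and all tolerances are our own choices) we check that $b(n)=F^{(-1)}(1/n)=n^{1/\alpha}$ satisfies $nF(b(n))=1$, and that $\int_0^t yF(y)\,dy\sim t^2F(t)/(2-\alpha)$, which is what produces the constant factor $(2/(2-\alpha))^{1/\alpha}$ above.

```python
alpha = 1.5
F = lambda t: min(1.0, t ** -alpha)      # pure Pareto tail P(xi >= t), t > 0

def b(n):
    # b(n) = F^{(-1)}(1/n) = n^{1/alpha} for this tail (cf. (1.5.4))
    return n ** (1 / alpha)

# defining property of the scaling: n F(b(n)) = 1
assert abs(1e6 * F(b(1e6)) - 1) < 1e-9

# int_0^t y F(y) dy = 1/2 + (t^{2-alpha} - 1)/(2 - alpha) exactly for this tail,
# and it is asymptotically t^2 F(t)/(2 - alpha), as in Remark 1.5.5
t = 1e8
integral = 0.5 + (t ** (2 - alpha) - 1) / (2 - alpha)
assert abs(integral / (t * t * F(t) / (2 - alpha)) - 1) < 1e-3
```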
Another characterization of the class $S$ of limiting distributions is possible.

Definition 1.5.6. A distribution $F$ is called stable if, for any $a_1$, $a_2$ and for any $b_1>0$ and $b_2>0$, there exist $a$ and $b>0$ such that $F_{\{a_1,b_1\}}*F_{\{a_2,b_2\}}=F_{\{a,b\}}$.
This definition implies that the convolution of a stable distribution $F$ with itself produces the same distribution $F$, up to scale and shift transformations (or, equivalently, for independent r.v.'s $\xi_i$ ⊂= $F$ one has $(\xi_1+\xi_2-a)/b$ ⊂= $F$ for suitable $a$ and $b>0$). In terms of ch.f.'s, stability is stated as follows. For any $b_1>0$, $b_2>0$ there exist $a$ and $b>0$ such that
$$f(\lambda b_1)f(\lambda b_2)=e^{i\lambda a}f(\lambda b),\qquad\lambda\in\mathbb{R}.\tag{1.5.13}$$
The class of all stable laws will be denoted by $S_S$. The remarkable fact is that the class $S$ of all limiting laws coincides with the class $S_S$ of all stable laws.

If, under a suitable scaling,
$$\zeta_n\Rightarrow\zeta^{(\alpha,\rho)}\quad\text{as}\quad n\to\infty,$$
then one says that the distribution $F$ of the summands $\xi$ belongs to the domain of attraction of the stable law $F_{\alpha,\rho}$. Theorem 1.5.1 means that if $F$ satisfies condition $[\mathbf{R}_{\alpha,\rho}]$ then $F$ belongs to the domain of attraction of the distribution $F_{\alpha,\rho}$. One can show that the converse is also true (see e.g. § 5, Chapter XVII of [122]): if $F$ belongs to the domain of attraction of the law $F_{\alpha,\rho}$ for an $\alpha<2$ then condition $[\mathbf{R}_{\alpha,\rho}]$ is satisfied.

Proof of Theorem 1.5.1. We follow the same path as when proving the central limit theorem using the relation (1.5.1). We will study the asymptotic properties of the ch.f. $f(\lambda)=Ee^{i\lambda\xi}$ in the vicinity of zero (more precisely, the asymptotics of $f(\mu/b(n))-1\to0$ as $b(n)\to\infty$) and show that, under condition $[\mathbf{R}_{\alpha,\rho}]$, for any $\mu\in\mathbb{R}$ one has
$$n\Bigl[f\Bigl(\frac{\mu}{b(n)}\Bigr)-1\Bigr]\to\ln f^{(\alpha,\rho)}(\mu)\tag{1.5.14}$$
(or some modification of this relation; see (1.5.51) below). From this it will follow that, for $\zeta_n=S_n/b(n)$,
$$f_{\zeta_n}(\mu)\to f^{(\alpha,\rho)}(\mu)\quad\text{as}\quad n\to\infty.\tag{1.5.15}$$
Indeed,
$$f_{\zeta_n}(\mu)=f^n\Bigl(\frac{\mu}{b(n)}\Bigr).$$
Since $f(\lambda)\to1$ as $\lambda\to0$, we have
$$\ln f_{\zeta_n}(\mu)=n\ln f\Bigl(\frac{\mu}{b(n)}\Bigr)=n\ln\Bigl[1+f\Bigl(\frac{\mu}{b(n)}\Bigr)-1\Bigr]=n\Bigl[f\Bigl(\frac{\mu}{b(n)}\Bigr)-1\Bigr]+R_n,$$
where |Rn | n|f(μ/b(n)) − 1|2 for all sufficiently large n and hence Rn → 0, owing to (1.5.14). From this we see that (1.5.14) implies (1.5.15). Thus, first we will study the asymptotics of f(λ) as λ → 0 and then establish (1.5.14). (i) Let α ∈ (0, 1). One has ∞ f(λ) = −
∞ eiλt dV (t) −
0
e−iλt dW (t).
(1.5.16)
0
Consider the first integral: ∞ ∞ iλt − e dV (t) = V (0) + iλ eiλt V (t) dt, 0
(1.5.17)
0
where, after the change of variables |λ|t = y, |λ| = 1/m, we obtain ∞ I+ (λ) := iλ
∞ iλt
e
V (t) dt = iφ
0
eiφy V (my) dy
(1.5.18)
0
with $\varphi=\operatorname{sign}\lambda$ (the trivial case $\lambda=0$ is excluded throughout the argument). Assume for the present that $\rho_+>0$. Then $V(t)$ is an r.v.f. at infinity and, for each $y$, owing to the properties of s.v.f.'s one has
$$V(my)\sim y^{-\alpha}V(m)\quad\text{as}\quad|\lambda|\to0.$$
So it is natural to expect that, as $|\lambda|\to0$,
$$I_+(\lambda)\sim i\varphi V(m)\int_0^\infty e^{i\varphi y}y^{-\alpha}\,dy=i\varphi V(m)A(\alpha,\varphi),\tag{1.5.19}$$
where
$$A(\alpha,\varphi):=\int_0^\infty e^{i\varphi y}y^{-\alpha}\,dy.\tag{1.5.20}$$
Let us assume that the relation (1.5.19) holds and, similarly, that
$$-\int_0^\infty e^{-i\lambda t}\,dW(t)=W(0)+I_-(\lambda),\tag{1.5.21}$$
where
$$I_-(\lambda):=-i\lambda\int_0^\infty e^{-i\lambda t}W(t)\,dt\sim-i\varphi W(m)\int_0^\infty e^{-i\varphi y}y^{-\alpha}\,dy=-i\varphi W(m)A(\alpha,-\varphi).\tag{1.5.22}$$
Since $V(0)+W(0)=1$, the relations (1.5.16)–(1.5.22) mean that, as $\lambda\to0$,
$$f(\lambda)=1+F(m)\,i\varphi\bigl[\rho_+A(\alpha,\varphi)-\rho_-A(\alpha,-\varphi)\bigr](1+o(1)).\tag{1.5.23}$$
One can find a closed-form expression for the integral $A(\alpha,\varphi)$. Observe that the contour integral along the boundary of the positive quadrant in the complex plane (closed as a contour) of the function $e^{iz}z^{-\alpha}$, which is analytic in the quadrant, is equal to zero. From this it is not hard to obtain that
$$A(\alpha,\varphi)=\Gamma(1-\alpha)\,e^{i\varphi(1-\alpha)\pi/2},\qquad\alpha>0.\tag{1.5.24}$$
(Note also that (1.5.20) is a table integral; its value, (1.5.24), can be found in handbooks; see e.g. integrals 3.761.4 and 3.761.9 of [134].) Thus, one has in (1.5.23) that
$$i\varphi\bigl[\rho_+A(\alpha,\varphi)-\rho_-A(\alpha,-\varphi)\bigr]=i\varphi\,\Gamma(1-\alpha)\Bigl[\rho_+\cos\frac{(1-\alpha)\pi}{2}+i\varphi\rho_+\sin\frac{(1-\alpha)\pi}{2}-\rho_-\cos\frac{(1-\alpha)\pi}{2}+i\varphi\rho_-\sin\frac{(1-\alpha)\pi}{2}\Bigr]$$
$$=\Gamma(1-\alpha)\Bigl[i\varphi(\rho_+-\rho_-)\cos\frac{(1-\alpha)\pi}{2}-\sin\frac{(1-\alpha)\pi}{2}\Bigr]=\Gamma(1-\alpha)\Bigl[i\varphi\rho\sin\frac{\alpha\pi}{2}-\cos\frac{\alpha\pi}{2}\Bigr]=B(\alpha,\rho,\varphi),$$
where $B(\alpha,\rho,\varphi)$ is defined in (1.5.9). Therefore, as $\lambda\to0$,
$$f(\lambda)-1=F(m)B(\alpha,\rho,\varphi)(1+o(1)).\tag{1.5.25}$$
Setting $\lambda:=\mu/b(n)$ (so that $m=b(n)/|\mu|$), where $b(n)$ is defined in (1.5.4), and taking into account that $F(b(n))\sim1/n$, we get
$$n\Bigl[f\Bigl(\frac{\mu}{b(n)}\Bigr)-1\Bigr]=nF\Bigl(\frac{b(n)}{|\mu|}\Bigr)B(\alpha,\rho,\varphi)(1+o(1))\sim|\mu|^\alpha B(\alpha,\rho,\varphi).\tag{1.5.26}$$
We have established the validity of (1.5.14) and hence that of assertion (i) of the theorem in the case $\alpha<1$, $\rho_+>0$.

If $\rho_+=0$ (resp. $\rho_-=0$) then the above argument remains valid provided we replace the term $V(m)$ (resp. $W(m)$) with zero or $o(W(m))$ (resp. $o(V(m))$). This follows from the fact that $F_+(t)$ (resp. $F_-(t)$) admits in this case a regularly varying majorant $V^*(t)=o(W(t))$ (resp. $W^*(t)=o(V(t))$). Similar remarks apply to what follows as well.

Thus the theorem is proved in the case $\alpha<1$ provided that we justify the
asymptotic equivalence in (1.5.19). To do that, it suffices to verify that the integrals
$$\int_0^\varepsilon e^{i\varphi y}V(my)\,dy\qquad\text{and}\qquad\int_M^\infty e^{i\varphi y}V(my)\,dy\tag{1.5.27}$$
can be made arbitrarily small compared with $V(m)$ by choosing $\varepsilon$ and $M$ appropriately. Note beforehand that, by virtue of Theorem 1.1.4(iii) (see (1.1.23)), for any $\delta>0$ there exists a $t_\delta>0$ such that for all $v\le1$, $vt\ge t_\delta$ one has
$$\frac{V(vt)}{V(t)}\le(1+\delta)v^{-\alpha-\delta}.$$
So, for $\delta<1-\alpha$, $t>t_\delta$,
$$\int_0^t V(u)\,du\le t_\delta+\int_{t_\delta}^t V(u)\,du=t_\delta+tV(t)\int_{t_\delta/t}^1\frac{V(vt)}{V(t)}\,dv\le t_\delta+tV(t)(1+\delta)\int_0^1 v^{-\alpha-\delta}\,dv=t_\delta+\frac{tV(t)(1+\delta)}{1-\alpha-\delta}\le ctV(t),\tag{1.5.28}$$
because $tV(t)\to\infty$ as $t\to\infty$. From this we obtain that
$$\Bigl|\int_0^\varepsilon e^{i\varphi y}V(my)\,dy\Bigr|\le\frac{1}{m}\int_0^{\varepsilon m}V(u)\,du\le c\varepsilon V(\varepsilon m)\sim c\varepsilon^{1-\alpha}V(m).$$
Since $\varepsilon^{1-\alpha}\to0$ as $\varepsilon\to0$, the required assertion concerning the first integral in (1.5.27) is proved.

The second integral in (1.5.27) is equal to
$$\int_M^\infty e^{i\varphi y}V(my)\,dy=\Bigl[\frac{1}{i\varphi}e^{i\varphi y}V(my)\Bigr]_M^\infty-\frac{1}{i\varphi}\int_M^\infty e^{i\varphi y}\,dV(my)=-\frac{1}{i\varphi}e^{i\varphi M}V(mM)-\frac{1}{i\varphi}\int_{mM}^\infty e^{i\varphi u/m}\,dV(u),$$
so that its absolute value does not exceed
$$2V(mM)\sim2M^{-\alpha}V(m).\tag{1.5.29}$$
Therefore, by choosing a suitable $M$, the value of the second integral in (1.5.27) can also be made arbitrarily small compared with $V(m)$. The relation (1.5.19) (and, together with it, the assertion of the theorem in the case $\alpha<1$) is proved.

Now let $\alpha\in(1,2)$; hence there exists a finite expectation $E\xi$ that, according
to our convention, will be assumed to be equal to zero. In this case,
$$f(\lambda)-1=\varphi\int_0^{|\lambda|}f'(\varphi u)\,du,\qquad\varphi=\operatorname{sign}\lambda,\tag{1.5.30}$$
and we have to find the asymptotic behaviour of
$$f'(\lambda)=-i\int_0^\infty te^{i\lambda t}\,dV(t)+i\int_0^\infty te^{-i\lambda t}\,dW(t)=:I_+^{(1)}(\lambda)+I_-^{(1)}(\lambda)\tag{1.5.31}$$
as $\lambda\to0$. Since $t\,dV(t)=d(tV(t))-V(t)\,dt$, integrating by parts yields
$$I_+^{(1)}(\lambda)=-i\int_0^\infty te^{i\lambda t}\,dV(t)=-i\int_0^\infty e^{i\lambda t}\,d(tV(t))+i\int_0^\infty e^{i\lambda t}V(t)\,dt$$
$$=-\lambda\int_0^\infty tV(t)\,e^{i\lambda t}\,dt+iV^I(0)-\lambda\int_0^\infty V^I(t)\,e^{i\lambda t}\,dt=iV^I(0)-\lambda\int_0^\infty\widetilde V(t)\,e^{i\lambda t}\,dt,\tag{1.5.32}$$
where by Theorem 1.1.4(iv) both the functions ∞ V (t) :=
V (u) du ∼
I
t
tV (t) α−1
as
t → ∞,
V I (0) < ∞,
and αtV (t) V! (t) := tV (t) + V I (t) ∼ α−1 are regularly varying. Letting, as before, m = 1/|λ|, m → ∞ (cf. (1.5.18), (1.5.19)), we obtain ∞ −λ
V! (t) e
iλt
dt = −φV! (m)
0
∞ ∼ −φ 0
(1)
I+ (λ) = iV I (0) −
∞
V! (my) eiφy dy
0
y −α+1 eiφy dy = −
αV (m) A(α − 1, φ), λ(α − 1)
αρ+ F (m) A(α − 1, φ)(1 + o(1)), λ(α − 1)
where the function A(α, φ) defined in (1.5.20) is equal to (1.5.24).
(1.5.33)
1.5 Convergence of distributions of sums of r.v.’s to stable laws
67
Similarly,
(1) I− (λ)
∞ =i 0
te−iλt dW (t)
∞
= −λ
tW (t) e 0
−iλt
∞
= −iW I (0) − λ
∞ dt − iW (0) − λ I
W I (t) e−iλt dt
0
' (t) e−iλt dt, W
0
where ∞ W (t) :=
W (u) du,
I
t
' (t) := tW (t) + W I (t) ∼ αtW (t) W α−1
and ∞ −λ 0
' (t) e−iλt dt ∼ − αW (m) A(α − 1, −φ). W λ(α − 1)
Therefore (1)
I− (λ) = iW I (0) −
αρ− F (m) A(α − 1, −φ)(1 + o(1)). λ(α − 1)
Hence, by virtue of (1.5.31), (1.5.33) and the equality V I (0)−W I (0) = Eξ = 0, one has f (λ) = −
$ αF (m) # ρ+ A(α − 1, φ) + ρ− A(α − 1, −φ) (1 + o(1)). λ(α − 1)
Now let us return to the relation (1.5.30). Since |λ| u−1 F (u−1 ) du ∼ α−1 F (|λ|−1 ) = α−1 F (m) 0
(see Theorem 1.1.4(iv)), we obtain, again using (1.5.24) and an argument like that
68
Preliminaries
employed in the proof for the case α < 1, that $ # 1 F (m) ρ+ A(α − 1, φ) + ρ− A(α − 1, −φ) (1 + o(1)) α−1 % Γ(2 − α) (2 − α)π (2 − α)π =− F (m) ρ+ cos + iφ sin α−1 2 2 & (2 − α)π (2 − α)π + ρ− cos (1 + o(1)) − iφ sin 2 2 Γ(2 − α) απ απ = (1 + o(1)) F (m) cos − iφρ sin α−1 2 2
f(λ) − 1 = −
= F (m)B(α, ρ, φ)(1 + o(1)).
(1.5.34)
We arrive once more at the relation (1.5.25) which, by virtue of (1.5.26), implies the assertion of the theorem in the case α ∈ (1, 2). (ii) The calculations in the case α = 1 are somewhat more intricate. We will again follow the relation (1.5.16), according to which f(λ) = 1 + I+ (λ) + I− (λ).
(1.5.35)
Rewrite the relation (1.5.18) for I+ (λ) as ∞ I+ (t) = iφ
eiφy V (my) dy 0
∞
∞ V (my) cos y dy −
= iφ 0
V (my) sin y dy.
(1.5.36)
0
Here the first integral on the right-hand side can be represented as the sum of two integrals, 1
∞ V (my) dy +
0
g(y)V (my) dy,
(1.5.37)
0
where
if y 1, if y > 1.
(1.5.38)
g(y)y −1 dy = C ≈ 0.5772
(1.5.39)
g(y) =
cos y − 1 cos y
Note (see e.g. integral 3.782 of [134]) that ∞ − 0
is the Euler constant. Since V (ym)/V (m) → y −1 as m → ∞, in a similar way
1.5 Convergence of distributions of sums of r.v.’s to stable laws
69
to before we obtain for the second integral in (1.5.37) the relation ∞ g(y)V (my) dy ∼ −CV (m).
(1.5.40)
0
Now consider the first integral in (1.5.37), 1 V (my) dy = m 0
−1
m
V (u) du = m−1 VI (m),
(1.5.41)
0
where t VI (t) =
V (u) du
(1.5.42)
0
can easily be seen to be an s.v.f. in the case α = 1 (see Theorem 1.1.4(iv)) and, moreover, if E|ξ| = ∞ then VI (t) → ∞ as t → ∞, whereas if E|ξ| < ∞ then VI (t) → VI (∞) < ∞. Thus, for the first term on the right-hand side of (1.5.36) we have Im I+ (λ) = φ(−CV (m) + m−1 VI (m)) + o(V (m)).
(1.5.43)
Next we will clarify the character of the dependence of VI (vt) on v when t → ∞. For any fixed v > 0, vt VI (vt) = VI (t) +
v V (u) du = VI (t) + tV (t) 1
t
V (yt) dy. V (t)
By Theorem 1.1.3 one has v 1
V (yt) dy ∼ V (t)
v 1
dy = ln v, y
so that VI (vt) = VI (t) + (1 + o(1)) tV (t) ln v =: AV (v, t) + tV (t) ln v,
(1.5.44)
where clearly AV (v, t) = VI (t) + o(tV (t)) as
t → ∞;
(1.5.45)
evidently VI (t) tV (t) by Theorem 1.1.4(iv). Hence, for λ = μ/b(n) (so that m = b(n)/|μ| and therefore V (m) ∼ ρ+ |μ|/n) we obtain from (1.5.43), (1.5.44) (in which one has to put t = b(n), v = 1/|μ|)
70
Preliminaries
that the following representation is valid as n → ∞: % & ρ+ μ ρ+ μ μ Im I+ (λ) = −C AV (|μ|−1 , b(n)) − + ln |μ| + o(n−1 ) n b(n) n ρ μ μ + = AV (|μ|−1 , b(n)) − (C + ln |μ|) + o(n−1 ). (1.5.46) b(n) n For the second term on the right-hand side of (1.5.36) we have ∞ Re I+ (λ) = −
∞ V (my) sin y dy ∼ −V (m)
0
y −1 sin y dy.
0
Because sin y ∼ y as y → 0, the last integral converges. Since Γ(γ) ∼ 1/γ as γ → 0, the value of this integral can be found to be (see (1.5.20) and (1.5.24)) π γπ = . (1.5.47) lim Γ(γ) sin γ→0 2 2 Thus, for λ = μ/b(n), π|μ| + o(n−1 ). (1.5.48) 2n In a similar way we can find an asymptotic representation for the integral I− (λ) (see (1.5.16)–(1.5.22)): Re I+ (λ) = −
∞ I− (λ) = −iφ
W (my)e−iφy dy
0
∞ = −iφ
∞ W (my) cos y dy −
0
W (my) sin y dy.
(1.5.49)
0
Comparing this with (1.5.36) and the subsequent computation of I+ (λ), we can immediately write that, for λ = μ/b(n) (cf. (1.5.46), (1.5.48)), 1 −μAW (|μ|−1 , b(n)) ρ− μ Im I− (λ) = − , + (C + ln |μ|) + o b(n) n n (1.5.50) π|μ|ρ− −1 Re I− (λ) = − + o(n ). 2n Thus we obtain from (1.5.35), (1.5.46), (1.5.48) and (1.5.50) that π|μ| iρμ μ −1=− − (C + ln |μ|) f b(n) n n $ iμ # + AV (|μ|−1 , b(n)) − AW (|μ|−1 , b(n)) + o(n−1 ). b(n) From (1.5.45) it follows that the second to last term here is equal to $ iμ # VI (b(n)) − WI (b(n)) + o(n−1 ), b(n)
1.5 Convergence of distributions of sums of r.v.’s to stable laws so that finally μ f b(n)
−1=−
where An =
π|μ| iρμ An − ln |μ| + iμ + o(n−1 ), 2n n n
71
(1.5.51)
$ n # VI (b(n)) − WI (b(n)) − ρC. b(n)
Therefore, cf. (1.5.14), (1.5.15), we obtain μ fζn −An (μ) = exp −iμAn f n b(n)
&" % μ = exp −iμAn + n ln 1 + f −1 b(n)
% & μ μ = exp −iμAn + n f − 1 + nO f b(n) b(n)
2 " . − 1
When α = 1 the functions VI and WI are slowly varying, by Theorem 1.1.4(iv), so by virtue of (1.5.51) we get 2 A2 1 μ − 1 c + n n f b(n) n n $ 1 1 # c1 VI (b(n))2 + WI (b(n))2 → 0. + n b(n) Since clearly μ −iμAn + n f b(n) we have
−1
→−
π|μ| − iρμ ln |μ|, 2
" π|μ| fζn −An (μ) → exp − − iρμ ln |μ| , 2
and so the relation (1.5.10) is proved. The assertions about the centring sequence {An } that follow (1.5.10) are obvious when one takes into account (1.5.3) and Theorem 1.1.4(iv). (iii) It remains to consider the case α = 2. We will follow the representations (1.5.30)–(1.5.32), according to which we have to find the asymptotics (as m = 1/|λ| → ∞) of (1)
(1)
f (λ) = I+ (λ) + I− (λ),
(1.5.52)
where (1) I+ (λ)
∞ = iV (0) − λ I
V! (t) e
iλt
0
∞ dt = iV (0) − φ I
0
V! (my) eiφy dy (1.5.53)
72
Preliminaries
and, by Theorem 1.1.4(iv), ∞ V (t) =
V! (t) = tV (t) + V I (t) ∼ 2tV (t) (1.5.54)
V (y) dy ∼ tV (t),
I
t
as t → ∞. Further, ∞
V! (my) eiφy dy =
0
∞
V! (my) cos y dy + φ
0
∞
V! (my) sin y dy.
(1.5.55)
0
Here the second integral on the right-hand side of (1.5.55) is asymptotically equivalent (as m → ∞, see (1.5.47)) to V! (m)
∞
y −1 sin y dy =
0
π ! V (m). 2
The first integral on the right-hand side of (1.5.55) equals 1
V! (my) dy +
0
∞
g(y)V! (my) dy,
0
where the function g(y) was defined in (1.5.38) and 1 0
V!I (t) :=
t
1 V! (my) dy = m
m 0
1 ! VI (m); V! (u) du = m
V! (u) du is an s.v.f. according to (1.5.54). Since
0
t 0
t2 V (t) 1 − uV (u) du = 2 2
t
t
u2 dV (u),
0
t V I (u) du = tV I (t) +
0
uV (u) du, 0
V I (t) ∼ tV (t), we obtain V!I (t) =
t
2
t
(uV (u) + V (u)) du = tV (t) + t V (t) − I
I
0
u2 dV (u)
0
t =− 0
u2 dV (u) + O(t2 V (t)),
(1.5.56)
1.5 Convergence of distributions of sums of r.v.’s to stable laws
73
where the last term is negligibly small because, owing to Theorem 1.1.4(iv), t
uV (u) du t2 V (t).
0
It is also clear that, as t → ∞,
# $ V!I (t) → V!I (∞) = E ξ 2 ; ξ > 0 ∈ (0, ∞].
As a result we obtain, cf. (1.5.40), that (1)
iπ ! V (m) − λV!I (m) + φC V! (m) + o(V! (m)) 2 = iV I (0) − λV!I (m)(1 + o(1)),
I+ (λ) = iV I (0) −
since V!I (t) tV! (t). In the same manner we find that 'I (m)(1 + o(1)), I− (λ) = −iW I (0) − λW (1)
'I is an s.v.f. and is obtained from the function W in the same way as V!I where W was from V . Since V I (0) = W I (0), we see now that the relation (1.5.52) leads to # $ 'I (m) (1 + o(1)). f (λ) = −λ V!I (m) + W Hence from (1.5.30) we get the representation 1/m
f(λ) − 1 = φ
1/m
# $ 'I (1/u) du u V!I (1/u) + W
f (φu) du = − 0
0
$ # $ 1 # 'I (m) ∼ − 1 E ξ 2 ; −m ξ < m ∼ − 2 V!I (m) + W 2 2m 2m 'I . Turning now to the definition of owing to (1.5.56) and a similar relation for W −2 the function Y (t) = t LY (t) in (1.5.6) and putting b(n) := Y (−1) (1/n), we obtain n n(f(λ) − 1) ∼ − Y 2
b(n) |μ|
λ := μ/b(n),
∼−
nμ2 μ2 Y (b(n)) → − . 2 2
The theorem is proved. With regard to the role of the parameters α and ρ we will note the following. The parameter α characterizes the decay rate of the functions Fα,ρ,− (t) := Fα,ρ ((−∞, −t))
and
Fα,ρ,+ (t) := Fα,ρ ([t, ∞))
74
Preliminaries
as t → ∞. The following assertion follows from the results of Chapters 2 and 3: If condition [Rα,ρ ] is met, α = 1, α < 2, and ρ+ > 0 then Sn P v ∼ nV (vb(n)) as v → ∞. b(n) Therefore, if v → ∞ slowly enough then, owing to the properties of s.v.f.’s, Sn P v ∼ v −α nV (b(n)) ∼ ρ+ v −α nF (b(n)) ∼ ρ+ v −α . b(n) However, by virtue of Theorem 1.5.1, if v → ∞ slowly enough then Sn v ∼ Fα,ρ,+ (v). P b(n)
(1.5.57)
From this it follows that, for ρ+ > 0, Fα,ρ,+ (v) ∼ ρ+ v −α
as v → ∞.
(1.5.58)
One can easily obtain a similar relation for the left tails as well: for ρ− > 0, Fα,ρ,− (v) ∼ ρ− v −α
as
v → ∞.
(1.5.59)
Note that, for ξ ⊂ = Fα,ρ , the asymptotic relation (1.5.57) turns into an exact equality if in it one replaces b(n) by bn := n1/α : Sn P v = Fα,ρ,+ (v). (1.5.60) bn $n # This follows from the observation that f(α,ρ) (λ/bn ) coincides with f(α,ρ) (λ) (see (1.5.8)) and therefore the distribution of the scaled sum Sn /bn coincides with that of ξ. The parameter ρ assuming values from the closed interval [−1, 1] is a measure of asymmetry of the distribution Fα,ρ . If, for instance, ρ = 1 (ρ− = 0) then for α < 1 the distribution Fα,1 will be concentrated on the right half-axis. This is seen from the fact that in this case the distribution Fα,1 could be considered as the limiting law for the scaled sums of i.i.d. r.v.’s ξk 0 (with F− (0) = 0). Since the distributions of such sums will all be concentrated on the right halfaxis, the limiting law must also have this property. Similarly, for ρ = −1, α < 1, the distribution Fα,−1 is concentrated on the left half-axis. In the case ρ = 0 (ρ+ = ρ− = 1/2) the ch.f. of the distribution Fα,0 will be real-valued, and the distribution Fα,0 itself will be symmetric. As we saw, the ch.f.’s f(α,ρ) (λ) of the stable laws Fα,ρ admit closed-form expressions. It is obvious that they all are absolutely integrable on R, and the same applies to the functions λk f(α,ρ) (λ), k 1. Therefore, all stable distributions have densities that are infinitely many times differentiable (see e.g. Chapter XV of [122]). As to the explicit formulae for these densities, they are only known for a few laws. To them belong, first of all, the normal law F2,ρ and the Cauchy distribution F1,0 , with density 2/(π 2 + 4t2 ), −∞ < t < ∞.
75
1.6 Functional limit theorems
An explicit form for another stable distribution can be obtained from a closedform expression for the distribution of the maximum of the Wiener process. This is the distribution F1/2,1 with parameters 1/2, 1 and density (up to a scale transform, cf. (1.5.58)) 1 √ e−1/2t , t>0 2π t3/2 (this is the density of the first passage time of level 1 by the standard Wiener process; see e.g. § 2, Chapter 18 of [49]).
1.6 Functional limit theorems 1.6.1 Preliminary remarks In this section, we will be interested in obtaining conditions ensuring the converk gence of random processes ζn (t) generated by the partial sums Sk = j=1 ξj of i.i.d. r.v.’s ξj . For example, we could consider ζn (t) :=
Snt , b(n)
t ∈ [0, 1],
(1.6.1)
where · denotes the integral part and b(n) is a scaling factor. To state the respective assertions, we will need two functional spaces, the space C = C(0, 1) of continuous functions g(t) on [0, 1] and the space D = D(0, 1) of functions g(t), t ∈ [0, 1], without discontinuities of the second kind, g(+0) = g(0), g(1 − 0) = g(1), which can be assumed, for definiteness, to be rightcontinuous: g(t + 0) = g(t), t ∈ [0, 1). We will suppose that the spaces C and D are endowed with the respective metrics ρC (g1 , g2 ) := sup |g1 (t) − g2 (t)| t∈[0,1]
and
) ( ρD (g1 , g2 ) := inf sup |g1 (t) − g2 (λ(t))| + sup |t − λ(t)| , λ
t
t
where the infimum is taken over all continuous monotone functions λ(t) such that λ(0) = 0, λ(1) = 1 (the Skorokhod metric). Consider further the measurable functional spaces H, BH , where H can be either C or D and BH is the σ-algebra generated by cylindric sets or the Borel σ-algebra generated by the metric ρH (for H = C or D these two σ-algebras coincide with each other; see e.g. §§ 1.3 and 3.14 of [28]). Denote by FH the class of functionals f on D possessing the following properties: (1) f is BD -measurable; (2) f is continuous in the metric ρH at points belonging to H. Now consider a sequence of processes {ζn (·); n 1} defined on D, BD .
76
Preliminaries
We will say that the processes ζn (·) H-converge as n → ∞ to a process ζ(·) given in H, BH if, for any functional f ∈ FH , one has f (ζn ) ⇒ f (ζ) (the symbol ⇒ denotes, as usual, weak convergence of the respective distributions). If H = C and P(ζn (·) ∈ C) = 1,
n 1,
(1.6.2)
then the C-convergence turns into the conventional weak convergence of distributions in the space C, BC . The requirement (1.6.2), however, does not need to be satisfied – as it does, for example, for the process defined in (1.6.1). In this case, to meet condition (1.6.2) one should construct a continuous polygon instead of (1.6.1). If H = D then the D-convergence is the weak convergence of distributions in D, BD . Since the class FC is wider than FD , the C-convergence is clearly stronger than the D-convergence.
1.6.2 Invariance principle Let the r.v.’s ξj have zero mean and a finite variance Eξj2 = d < ∞. Denote by w(·) the standard Wiener process, i.e. a process with homogeneous independent increments such that for t, u 0 the increment w(t + u) − w(t) has the normal distribution with parameters (0, u).√Let, as before, ζn (·) be a random polygon, say, of the form (1.6.1) with b(n) = nd. Theorem 1.6.1. Under the above assumptions, the sequence of the processes ζn (·) defined in (1.6.1) C-converges to the process w(·) as n → ∞. In the case when one takes the ζn (·) to be continuous polygons, this theorem on the weak convergence in C, FC of the distributions of ζn to that of w is known as the invariance principle. Since the functional f (g) := maxt∈[0,1] g(t) is ρC -continuous and also ρD -continuous, Theorem 1.6.1 implies, in particular, the following assertion on the distribution of S n := maxkn Sk (for simplicity we put d = 1): for any x, as n → ∞, −1/2 P(n S n x) = P(f (ζn ) x) → P max w(t) x t∈[0,1]
= 2P(w(1) x) = 2(1 − Φ(x)),
(1.6.3)
where Φ(x) is the standard normal distribution function. One can similarly obtain a closed-form expression for the limiting distribution for n−1/2 maxkn |Sk |. It will follow from these relations and the bounds for P(S n x) to be obtained in Chapter 4 that, in the case when Eξ 2 < ∞ and E[max{0, ξ}]α < ∞ for some
77
1.6 Functional limit theorems
α > 2, along with the convergence of distributions (1.6.3) as n → ∞ one also has the convergence of the moments v Sn E √ → E(w(1))v (1.6.4) n for any v < α, where w(t) := maxut w(u). Similar relations hold for the √ maxima maxkn |Sk / n| when E|ξ|α < ∞, α > 2. The assertion (1.6.4) can be strengthened somewhat. Let F+ (t) = P(ξ t) V (t), where V (t) is an r.v.f, and let g(t) be a continuous increasing function on [0, ∞) such that − g(t) dV (t) < ∞. Then, as n → ∞,
Eg
Sn √ n
→ Eg(w(1)).
(1.6.5)
One can also obtain from Theorem 1.6.1 the following fact extending (1.6.3). Let h+ (t) > h− (t) be two functions from D. Then, under the assumptions we have made, Sk k k < √ < h+ ; 1kn P h− n n n − P h− (t) < w(t) < h+ (t); t ∈ [0, 1] → 0
as
n → ∞ (1.6.6)
(Kolmogorov’s theorem [170, 172]). When addressing the problem of convergence rates in the invariance principle, the question how to measure these immediately arises. A natural way here is to study the decay rate (in n) of the Prokhorov distance [230, 74] between the distributions Pζn and Pw of the processes ζn (·) and w(·) respectively in C(0, 1). The Prokhorov distance between the distributions P and Q in C, BC is defined as follows: ρ(P, Q) := inf{ε : ρ(P, Q, ε) ε}, where ρ(P, Q, ε) := sup |P(B) − Q(B ε )| B∈BC ε
and B is the ε-neighbourhood of B: ( ) B ε := g ∈ C : inf ρC (g, h) < ε . h∈B
The following assertion holds true for the distance ρn := ρ(Pζn , Pw ) (see also the survey and bibliography of [74]).
78
Preliminaries
Theorem 1.6.2. Let Eξ = 0, Eξ 2 = 1. (i) If E|ξ|α < ∞ for some α > 2 then, as n → ∞,
ρn = o n−(α−2)/[2(α+1)] . β
(ii) If Eeλ|ξ| < ∞ for some 0 < β 1 and λ > 0 then, as n → ∞,
ρn = o n−1/2 (ln n)1/β . The rate of the convergence Pζn (B) → Pw (B) for sets B of a special form (for instance, for the sets appearing in the boundary problems (1.6.6)) admits a sharper bound (see [202, 244] and also the survey [44]): Theorem 1.6.3. If Eξ = 0, Eξ 2 = 1, E|ξ|3 < ∞ and the functions h± (t) satisfy the Lipschitz condition that, for an h < ∞, |h± (t + u) − h± (t)| h u,
0 t t + u 1,
then the absolute value of the difference on the left-hand side of (1.6.6) does not √ exceed ch E|ξ|3 / n. Recall that, under the conditions of Theorem 1.6.3 on the distribution of the r.v. ξ, the Berry–Esseen theorem states that cE|ξ|3 Sn sup P √ < x − Φ(x) < √ , n n x where c is an absolute constant (see e.g. § 8.5 of [49]).
1.6.3 A functional limit theorem on convergence to stable laws Now consider the case where Eξ 2 = ∞ and the distribution of ξ satisfies the regularity conditions [Rα,ρ ] of § 1.5. Put b(n) = F (−1) (1/n) when α < 2 (see (1.5.4)); when α = 2 the function b(n) is defined by the relation (1.5.5). When E|ξ| < ∞ we assume that Eξ = 0. Then, by virtue of Theorem 1.5.1, in the case α ∈ (0, 2], α = 1, we have the following convergence for ζn = Sn /b(n) as n → ∞: ζn ⇒ ζ (α,ρ) , where ζ (α,ρ) is an r.v. following the stable distribution Fα,ρ with ch.f. (1.5.8). In the case α = 1 one needs, generally speaking, centring as well: ζn − An ⇒ ζ (1,ρ) (see (1.5.10), (1.5.11)). Now consider in D, BD a right-continuous process ζ (α,ρ) (·) with homogeneous independent increments such that ζ (α,ρ) (1) has the distribution Fα,ρ . As before, let ζn (·) be defined by (1.6.1).
1.6 Functional limit theorems
79
Theorem 1.6.4. Under condition [Rα,ρ ] (i.e. the condition of Theorem 1.5.1) with α = 1 the processes ζn (·) D-converge to the process ζ (α,ρ) (·). Similarly to the preceding subsection, Theorem 1.6.4 implies the convergence of the distributions of S n /b(n) and maxkn |Sk |/b(n) to those of the maxima (α,ρ)
ζ (1) := maxt1 ζ (α,ρ) (t) and maxt1 |ζ (α,ρ) (t)| respectively. The rela√ tion (1.6.6) holds, too, if in it one replaces n and w(t) by b(n) and ζ(t) respectively. Also, it follows from the bounds of Chapter 3 for P(S n x) that for v < α, as n → ∞, v (α,ρ) v Sn E →E ζ (1) . b(n)
v A similar relation holds true for the moments E max |Sk |/b(n) as well. One kn
also has an assertion similar to (1.6.5). The proof of Theorem 1.6.4 in the more general case of non-identically distributed r.v.’s ξi in the triangular array scheme is presented in Chapter 12.
1.6.4 The law of the iterated logarithm The invariance principle is closely related to the following result, which characterizes the a.s. magnitude of oscillations in a random walk with zero drift. Theorem 1.6.5. Let Eξ = 0, d = Eξ 2 ∞. Then √ Sn P lim sup √ = d n→∞ 2n ln ln n
= 1.
Proof. In the case d < ∞ a proof can be found in e.g. [49, 122] and in the case d = ∞ in [261]. When the scaled sums Sn of i.i.d. r.v.’s converge to a non-normal stable law, by the ‘law of the iterated logarithm’ one means the following assertion, which is of a somewhat different character. Theorem 1.6.6. If, as n → ∞, one has ζn ⇒ ζ (α,ρ) , α < 2, then P lim sup |ζn |1/ln ln n = e1/α = 1. n→∞
In Chapter 3 we will present some extensions of this result.
2 Random walks with jumps having no finite first moment
2.1 Introduction. The main approach to bounding from above the distribution tails of the maxima of sums of random variables 2.1.1 Introduction Let ξ, ξ1 , ξ2 , . . . be i.i.d. r.v.’s with a common distribution F. Put S0 := 0 and consider the r.v.’s Sn :=
n
ξj ,
S n (a) := max(Sk − ak), a ∈ R, kn
j=1
S n := S n (0),
and the events Bj (v) := {ξj < y + vg(j)},
B(v) :=
n *
Bj (v),
v 0,
(2.1.1)
j=1
where the choice of the function g will depend on the distribution F. One of our main goals will be to obtain bounds for probabilities of the form P(Sn x),
P(S n (a) x)
and P(S n (a) x; B(v))
(2.1.2)
as x → ∞. Note that the probabilities P(S n (a) x; B(v)) will play an important role when we come to find the exact asymptotics of P(S n (a) x). As for the distribution of ξj , it will be assumed in Chapters 2–4 that its tails F− (t) = F((−∞, −t]) = P(ξj < −t), F+ (t) = F([t, ∞)) = P(ξj t),
t > 0,
are majorated or minorated by r.v.f.’s (at infinity). Majorants (or minorants) for the right tails F+ (t) will be denoted by V (t) and for the left tails F− (t) by W (t). In addition, in the case where V (t) (W (t)) is an r.v.f., we will be using for the respective index and s.v.f. the notation α and L(t) (β and LW (t) respectively): V (t) = t−α L(t), W (t) = t
−β
LW (t), 80
α > 0,
(2.1.3)
β > 0.
(2.1.4)
2.1 Introduction. Main approach to bounding distribution tails
81
Without loss of generality, we will assume that the functions V and W are monotone, with V (0) 1 and W (0) 1. In what follows, we will often use the following conditions on the asymptotic behaviour of the distribution tails under consideration: [ · , <]
F+ (t) V (t), t > 0,
[ · , >] F+ (t) V (t), t > 0, [<, · ]
F− (t) W (t), t > 0,
[>, · ]
F− (t) W (t), t > 0,
where V (t) and W (t) are of the forms (2.1.3) and (2.1.4) respectively. When studying the exact asymptotic behaviour of the probabilities P(Sn x) and P(S n (a) x), we will also be using the condition of regular variation of the tails: [ · , =]
F+ (t) = V (t), t > 0,
which is the intersection of the conditions [ · , <] and [ · , >] (with a common function V (t)), so that one could write: [ · , =] = [ · , <] ∩ [ · , >]. Recall that we agreed to denote the class of distributions satisfying condition [ · , =] by R (see p. 11). In a similar way, for a condition of the form F− (t) = W (t), t > 0, we will use the notation [=, · ] and will be considering the following intersections of conditions already introduced: [=, =] = [ · , =] ∩ [=, · ], [<, <] = [<, · ] ∩ [ · , <], [>, <] = [>, · ] ∩ [ · , <], [<, >] = [<, · ] ∩ [ · , >]. Since we will mainly be studying the probabilities of large deviations on the positive half-axis [0, ∞), the main parameter according to which the classification of different cases will be carried out will be the index α of the function V (t) in (2.1.3). We will often refer to it as the ‘right index’ (more precisely, the index of the right tail of the distribution of the r.v. ξ, or of its majorant or minorant).
2.1.2 On the main approach to bounding from above the distributions of S̄_n

In this subsection, we present a general approach to deriving upper bounds for probabilities of the form (2.1.2). Such bounds, to be obtained in Chapters 2–5, are essentially different from one another, but they also have a lot in common. In particular, they are derived using the same type of inequalities for truncated r.v.'s.
Random walks with jumps having no finite first moment
The arguments proving these inequalities follow a common scheme (see also [201, 40]), which can be described as follows. Suppose for the time being that the r.v. ξ satisfies Cramér's condition, i.e. that for some μ > 0 we have φ(μ) := E e^{μξ} < ∞. We will need the following relation, which could be referred to as a Kolmogorov–Doob-type inequality.

Lemma 2.1.1. For all n ≥ 1, x ≥ 0 and μ ≥ 0 one has

P(S̄_n ≥ x) ≤ e^{−μx} max{1, φ^n(μ)}.   (2.1.5)

Proof. Since η(x) := inf{k ≥ 1 : S_k ≥ x} ≤ ∞ is a Markov time, the event {η(x) = k} and the r.v. S_n − S_k are independent of each other, so that

φ^n(μ) = E e^{μS_n} ≥ Σ_{k=1}^{n} E[e^{μS_n}; η(x) = k]
       ≥ Σ_{k=1}^{n} E[e^{μ(x + S_n − S_k)}; η(x) = k] = e^{μx} Σ_{k=1}^{n} φ^{n−k}(μ) P(η(x) = k)
       ≥ e^{μx} min{1, φ^n(μ)} P(S̄_n ≥ x),
whence one immediately obtains (2.1.5). The lemma is proved.

If φ(μ) ≥ 1 for μ ≥ 0 (which is always the case when Eξ ≥ 0 exists) then the right-hand side of (2.1.5) will be equal to e^{−μx} φ^n(μ), while the inequality (2.1.5) itself can be obtained as a consequence of the well-known Doob inequality for submartingales (see e.g. § 3, Chapter 14 of [49]).

Now return to the case of 'heavy' tails, when Cramér's condition is not satisfied. Consider 'truncated' r.v.'s whose distribution coincides with the conditional law of ξ given that ξ < y, for some cut-off level y whose choice is at our disposal. Namely, introduce i.i.d. r.v.'s ξ_j^{(y)}, j = 1, 2, ..., with distribution function

P(ξ_j^{(y)} < t) := P(ξ < t | ξ < y) = P(ξ < t)/P(ξ < y),  t ≤ y,

and put

S_n^{(y)} := Σ_{j=1}^{n} ξ_j^{(y)},   S̄_n^{(y)} := max_{k≤n} S_k^{(y)}.
Using the notation B(v) from (2.1.1), we have

P(S̄_n ≥ x) ≤ P(B̄(0)) + P(S̄_n ≥ x; B(0)) ≤ nF₊(y) + P,   (2.1.6)

where

P := P(S̄_n ≥ x; B(0)) = [P(ξ < y)]^n P(S̄_n^{(y)} ≥ x).   (2.1.7)
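The decomposition (2.1.6)–(2.1.7) lends itself to a quick Monte Carlo illustration. The sketch below is our own, not part of the book's argument: it assumes Pareto jumps with F₊(t) = t^{−α}, and arbitrary illustrative choices of n, x and the cut-off y = x/2; it estimates P(S̄_n ≥ x), the truncated probability P, and the right-hand side nF₊(y) + P.

```python
import random

random.seed(12345)

alpha = 0.6                 # tail index (assumed): P(xi >= t) = t^{-alpha}, t >= 1
n, x = 50, 13000.0          # walk length and deviation level (illustrative)
y = x / 2.0                 # cut-off level, so r = x/y = 2
trials = 20000

def jump():
    # Pareto jump with F_+(t) = t^{-alpha} for t >= 1
    return (1.0 - random.random()) ** (-1.0 / alpha)

hit = trunc_hit = 0
for _ in range(trials):
    s, smax, big = 0.0, 0.0, False
    for _ in range(n):
        xi = jump()
        big = big or (xi >= y)
        s += xi
        smax = max(smax, s)
    if smax >= x:
        hit += 1
        if not big:
            trunc_hit += 1  # event {max_k S_k >= x; all jumps < y}, i.e. P of (2.1.7)

p_total = hit / trials                 # estimates P(max_k S_k >= x)
p_trunc = trunc_hit / trials           # estimates P
rhs = n * y ** (-alpha) + p_trunc      # estimates n F_+(y) + P, cf. (2.1.6)
print(p_total, rhs)
```

The estimate of P(S̄_n ≥ x) should fall below nF₊(y) + P, as (2.1.6) requires; the slack reflects the crude union bound nF₊(y) ≥ P(B̄(0)).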
Applying Lemma 2.1.1 to the r.v.'s ξ_j^{(y)}, we obtain that, for any μ ≥ 0,

P(S̄_n^{(y)} ≥ x) ≤ e^{−μx} max{1, [E e^{μξ^{(y)}}]^n}.

Since

E e^{μξ^{(y)}} = R(μ, y)/P(ξ < y),  where  R(μ, y) := ∫_{−∞}^{y} e^{μt} F(dt),

we arrive at the following inequality, which will form the basis of many subsequent considerations.

The basic inequality. For x, y, μ ≥ 0,

P ≡ P(S̄_n ≥ x; B(0)) ≤ e^{−μx} [max{P(ξ < y), R(μ, y)}]^n ≤ e^{−μx} max{1, R^n(μ, y)}.   (2.1.8)
The problem of bounding the probability P, and hence also the probability P(S̄_n ≥ x) by virtue of (2.1.6), reduces therefore to obtaining bounds for the integral R(μ, y). These bounds will differ, depending on the value of the index α and the 'thickness' of the left tail F₋(t). In Chapters 2–5 we will choose the cut-off level y appearing in (2.1.8) in such a way that y ≤ x and the ratio

r = x/y ≥ 1

is bounded. Thus the growth rates of x and y as y → ∞ will be the same (up to a bounded factor).

In Theorem 1.1.4(v) we introduced the function

σ(n) := V^{(−1)}(1/n),  n > 0,   (2.1.9)

where V^{(−1)} denotes the generalized inverse function for V: V^{(−1)}(u) := inf{v : V(v) ≤ u}. The deviations x = σ(n), like the deviations b(n) ∼ ρ₊^{−1/α} σ(n) used in § 1.5, play the important role of the 'standard deviation' for S_n. As seen from the definition, they are characterized by the fact that, under the assumption F₊ ≡ V, events of the form {ξ_j ≥ σ(n)} on average occur once during a time interval of length n. As shown in Theorem 1.1.4(v) (see (1.1.20)), under the assumption that (2.1.3) holds one has

σ(n) = n^{1/α} L_σ(n),  where L_σ is an s.v.f.   (2.1.10)
Deviations of magnitude x = sσ(n) with s → ∞ are naturally referred to as large deviations for S_n (or, more generally, for the whole trajectory S_1, ..., S_n of the random walk). Clearly, along with the quantity

s := s(x, n) = x/σ(n),

such deviations can be equivalently characterized by the quantity

Π := Π(x, n) = nV(x),

i.e. the mean number of occurrences of the events {ξ_j ≥ x}, j = 1, ..., n. Many bounds in the subsequent exposition will be given in terms of this quantity. The quantities s and Π are connected to each other by the following relations. Suppose that (2.1.3) holds for V. If s is fixed then, by virtue of the properties of r.v.f.'s, one has for x = sσ(n)

Π = nV(sσ(n)) → s^{−α}  as n → ∞.

Moreover, by Theorem 1.1.4(ii) we have s^{−α−δ} < Π < s^{−α+δ} for any δ > 0 and all sufficiently large s. Similar 'inverse' inequalities hold true for s. Along with the quantities Π and s, we will sometimes also use the functions

Π(y) := Π(y, n) = nV(y)  and  s(y) := s(y, n)  with  y = x/r,  r ≥ 1,

so that Π = Π(x) < Π(y) ∼ r^α Π, s(y) = s/r. We stress that it will always be assumed in what follows that

1 ≤ r = x/y ≤ c < ∞,  Π(y) < 1  (s(y) > 1).   (2.1.11)
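Under the extra assumption L ≡ 1 (pure power tail V(t) = t^{−α}), the objects just introduced can be computed in closed form, and the relation Π = nV(sσ(n)) = s^{−α} is exact rather than asymptotic. A minimal sketch (the function names are ours, not the book's):

```python
# Illustration of sigma(n), s and Pi for a pure power tail V(t) = t^{-alpha}.
# With the s.v.f. L == 1, sigma(n) = n^{1/alpha} and Pi = s^{-alpha} exactly.

def V(t, alpha):
    """Right-tail majorant: a regularly varying function of index -alpha."""
    return t ** (-alpha)

def sigma(n, alpha):
    """sigma(n) = V^{(-1)}(1/n), the 'standard deviation' scale of S_n."""
    return n ** (1.0 / alpha)

def Pi(x, n, alpha):
    """Pi(x, n) = n V(x): mean number of jumps >= x among n summands."""
    return n * V(x, alpha)

alpha, n, s = 0.7, 10 ** 6, 5.0
x = s * sigma(n, alpha)                  # deviation measured in units of sigma(n)
print(Pi(x, n, alpha), s ** (-alpha))    # the two numbers coincide
```

With a genuine slowly varying factor L the equality only holds in the limit, as stated in the text.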
In concluding this section we note that, in the cases where α < 1 or β < 1 (under conditions [ · , <] or [>, · ] respectively), the bounds for the probabilities (2.1.2) can differ substantially from one another, depending on the relative 'thicknesses' of the left and right distribution tails of ξ. In this connection, we will single out the following two possibilities:

(1) α < 1, the tail F₋(t), t > 0, is arbitrary;
(2) β < 1, the tail F₋(t) is substantially 'heavier' than F₊(t), t > 0.

The bounds in the first case are essentially bounds for the sums S_n when the r.v.'s ξ_j are non-negative (F₋(0) = 0).

2.2 Upper bounds for the distribution of the maximum of sums when α ≤ 1 and the left tail is arbitrary

As we have already noted, the considerations in this and many subsequent sections will be based on bounds for the probability P = P(S̄_n ≥ x; B) of (2.1.7), the main tool used for their derivation being the basic inequality (2.1.8).
Theorem 2.2.1. Let condition [ · , <] be satisfied with α < 1 and let r = x/y ≥ 1 be bounded. Then there exists a constant c < ∞ such that for all n ≥ 1 and x > 0 one has the following inequality for the probability P defined in (2.1.7):

P ≤ c Π^r(y),  r = x/y,  Π(y) = nV(y).   (2.2.1)
The constant c in (2.2.1) can be replaced by the expression (e/r)^r + ε(Π(y)), where ε(·) is a bounded function such that ε(v) ↓ 0 as v ↓ 0. The notation ε(v) (with or without indices) will be used in what follows for functions converging to zero (either as v ↓ 0 or as v → ∞, depending on the circumstances).

Remark 2.2.2. Observe that the bound (2.2.1) is of a universal character: up to minor modifications, it is applicable to all the types of random walks with 'heavy tails' that are discussed in detail in the present monograph (namely, for those with F ∈ R or F ∈ Se). The bound is rather sharp and admits the following graphical interpretation. From the subsequent exposition in Chapters 2–5 it will be seen that, for these classes of random walks, the main contribution to the probability of a large deviation S̄_n ≥ x (or S_n ≥ x) comes from trajectories having a single large jump of the order of magnitude of x, so that the asymptotics of P(S̄_n ≥ x) have the form

P(∪_{j=1}^{n} {ξ_j ≥ x}) ∼ nV(x).

If, however, none of the jumps ξ_j exceeds y = x/r, r > 1 (as is the case for our event {S̄_n ≥ x; B}), then reaching the level x by one jump is impossible – to cross the level, there need to be at least r 'large' jumps (assume for simplicity that r > 1 is an integer). Since the probability of a jump of a size comparable with y is of order V(y), the probability of having r jumps of that size among n independent summands will be of order (nV(y))^r. This is, up to a constant factor, the right-hand side of the bound (2.2.1) for the probability P(S̄_n ≥ x, all jumps < y).

For the case α = 1 we will state a separate assertion. Here, along with V(x) and s = x/σ(n), we will also need some additional characteristics. Recall that, according to Theorem 1.1.4(iv), when α = 1,

L₁(x) := (1/(xV(x))) ∫_0^x V(u) du = V_I(x)/(xV(x)) → ∞  as x → ∞

is an s.v.f. For δ > 0 put

Π^{(δ)}(y) := Π(y) L₁^{1+δ}(x) ∼ Π(y) L₁^{1+δ}(y).   (2.2.2)

It is not hard to see that, under broad conditions, L₁(σ(n)) ∼ L₁(n).
Theorem 2.2.3. Let condition [ · , <] be satisfied with α = 1. Then (2.2.1) holds true if, for some δ > 0 and c₁ < ∞, one has Π^{(δ)}(y) ≤ c₁ or Π^{(δ)}(x) ≤ c₁. For the last inequality to hold, it suffices that s = s(x, n) > [L₁(σ(n))]^{1+γ} for a fixed γ > 0.

The constant c in (2.2.1) can be replaced by (e/r)^r + ε(Π^{(δ)}(y)), where ε(·) is a bounded function such that ε(v) ↓ 0 as v ↓ 0.

The above-stated bounds enable one to obtain the next important result.

Corollary 2.2.4. Let condition [ · , <] be met. Then the following assertions hold true.

(i) If α < 1 then there exists a function ε(v) ↓ 0 as v ↓ 0 such that, for all n ≥ 1,

sup_{x: Π≤v} P(S̄_n ≥ x)/Π ≤ 1 + ε(v),  Π = nV(x),   (2.2.3)

or, equivalently,

sup_{x: s≥t} P(S̄_n ≥ x)/Π ≤ 1 + ε(1/t),  s = x/σ(n).

(ii) If α = 1 then, for any fixed δ > 0,

sup_{x: Π^{(δ)}≤v} P(S̄_n ≥ x)/Π ≤ 1 + ε(v).   (2.2.4)
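The content of Corollary 2.2.4 — that P(S̄_n ≥ x) is close to Π = nV(x) once Π is small — can be illustrated by simulation. A rough sketch, assuming pure Pareto jumps and our own choice of n, x and trial count:

```python
import random

random.seed(7)

alpha = 0.6                  # tail index (assumed), F_+(t) = t^{-alpha}, t >= 1
n = 30
x = 5e5                      # deep deviation: Pi = n V(x) is small
trials = 100000

def jump():
    return (1.0 - random.random()) ** (-1.0 / alpha)

hits = 0
for _ in range(trials):
    s = 0.0
    for _ in range(n):
        s += jump()
        if s >= x:           # jumps are positive, so max_k S_k >= x iff reached
            hits += 1
            break

Pi = n * x ** (-alpha)
ratio = (hits / trials) / Pi
print(Pi, ratio)             # ratio should be close to 1, up to Monte Carlo noise
```

The observed ratio hovers around 1, in line with the 'single big jump' picture of Remark 2.2.2.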
From this corollary it follows, in particular, that in the case α ≤ 1, for any δ > 0, one has

lim sup_{n→∞} sup_{x>n^{1/α+δ}} P(S̄_n ≥ x)/Π ≤ 1.   (2.2.5)

Proof of Corollary 2.2.4. By (2.1.6), condition [ · , <] and Theorem 2.2.1, one has

P(S̄_n ≥ x) ≤ (1 + cΠ^{r−1}(y)) Π(y).   (2.2.6)

Next assume without loss of generality that v < 1 and put r := 1 + |ln v|^{−1/2}, so that

y = x/(1 + |ln v|^{−1/2}).

Clearly, the relations Π ≤ v and v ↓ 0 imply that x → ∞ and r ↓ 1, and hence that L(y)/L(x) → 1. Therefore, there exists a function ε₁(v) ↓ 0 as v ↓ 0 such that, for Π ≤ v, we have L(y)/L(x) ≤ 1 + ε₁(v) and

V(y)/V(x) = r^α L(y)/L(x) ≤ (1 + |ln v|^{−1/2})^α (1 + ε₁(v)) =: 1 + ε₂(v),

so that Π(y) ≤ (1 + ε₂(v))Π. Moreover, when Π ≤ v, one has

Π^{r−1} ≤ (e^{ln v})^{1/√|ln v|} = e^{−√|ln v|} =: ε₃(v).
Substituting the above inequalities into (2.2.6), we find that

P(S̄_n ≥ x) ≤ [1 + c(1 + ε₂(v))^{r−1} ε₃(v)] (1 + ε₂(v)) Π =: (1 + ε(v)) Π,

where evidently ε(v) → 0. Thus (2.2.3) is established. The assertion (2.2.4) can be proved in exactly the same way, using Theorem 2.2.3. The corollary is proved.

Proof of Theorem 2.2.1. Let M := 2α/μ < y (μ → 0 is to be chosen later). To make use of the basic inequality (2.1.8) we have to bound

R(μ, y) = ∫_{−∞}^{y} e^{μt} F(dt) = ∫_{−∞}^{0} + ∫_{0}^{M} + ∫_{M}^{y} =: I₁ + I₂ + I₃,   (2.2.7)
where clearly

I₁ ≤ F₋(0).   (2.2.8)

Further, integration by parts yields

∫_a^b e^{μt} F(dt) = −F₊(t) e^{μt} |_a^b + μ ∫_a^b F₊(t) e^{μt} dt.   (2.2.9)
From this and condition [ · , <] we obtain

I₂ ≤ F₊(0) − e^{2α} F₊(M) + μ ∫_0^M V(t) e^{μt} dt,

where the integral increases unboundedly as μ → 0 in the case α < 1, but does not exceed

e^{2α} ∫_0^M V(t) dt = (e^{2α}/(1 − α)) M V(M)(1 + o(1)) ≤ (c/μ) V(1/μ)

by Theorem 1.1.4(iv). Hence we conclude that

I₂ ≤ F₊(0) + cV(1/μ).   (2.2.10)

Note that when α > 1 we obtain only I₂ ≤ F₊(0) + cμ (see (3.1.26) below), so the bound (2.2.10) will be invalid in this case.
Further, again from (2.2.9) and [ · , <],

I₃ = ∫_M^y e^{μt} F(dt) ≤ V(M) e^{2α} + μ ∫_M^y V(t) e^{μt} dt =: V(M) e^{2α} + I₃⁰.   (2.2.11)
Now we bound the second term on the right-hand side. In what follows, the value of μ will be chosen so that

λ = μy → ∞  (i.e. y ≫ 1/μ).   (2.2.12)

Using the change of variables (y − t)μ =: u, we obtain

I₃⁰ = e^{μy} V(y) ∫_0^{(y−M)μ} (V(y − u/μ)/V(y)) e^{−u} du.   (2.2.13)
Consider the integral on the right-hand side of (2.2.13). Since 1/μ ≪ y, the factor in the integrand

r_{y,μ}(u) := V(y − u/μ)/V(y)

converges to 1 for any fixed u. In order to apply the dominated convergence theorem to obtain that the integral on the right-hand side of (2.2.13) tends to

∫_0^∞ e^{−u} du = 1   (2.2.14)

as y → ∞, we have to estimate the growth rate of the function r_{y,μ}(u) as u increases. By virtue of the properties of r.v.f.'s (see Theorem 1.1.4(iii)), for all small enough μ (or sufficiently large M; recall that y − u/μ ≥ M in the integrand in (2.2.13)) one has

r_{y,μ}(u) ≤ (1 − u/(μy))^{−3α/2} =: g(u).

Since g(0) = 1 and μy − u ≥ Mμ = 2α, one has in this range the relation

(ln g(u))′ = 3α/(2(μy − u)) ≤ 3α/(4α) = 3/4,

and therefore ln g(u) ≤ 3u/4, so that r_{y,μ}(u) ≤ e^{3u/4}. This means that the integrand in (2.2.13) is dominated by the exponential e^{−u/4} and so the use of the dominated convergence theorem is justified. Hence, owing to the convergence of the integral in (2.2.13) to the limit (2.2.14), we obtain

I₃⁰ ∼ e^{μy} V(y) ∫_0^∞ e^{−u} du = e^{λ} V(y),
and it is not hard to find a function ε(λ) ↓ 0 as λ ↑ ∞ such that

I₃⁰ ≤ e^{λ} V(y)(1 + ε(λ)).   (2.2.15)

Summarizing (2.2.8)–(2.2.15), we get

R(μ, y) ≤ 1 + cV(1/μ) + e^{λ} V(y)(1 + ε(λ)).   (2.2.16)

Therefore, recalling that Π(y) = nV(y), one has

R^n(μ, y) ≤ exp{ncV(1/μ) + Π(y) e^{λ}(1 + ε(λ))}.   (2.2.17)
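The dominated-convergence step behind (2.2.15) can be checked numerically for a pure power function V(t) = t^{−α}: the integral in (2.2.13) is then close to ∫₀^∞ e^{−u} du = 1 once λ = μy is large. A crude quadrature sketch (all parameter values are our own illustrative choices):

```python
import math

# Quadrature check of the integral in (2.2.13) for V(t) = t^{-alpha}.

alpha = 0.7
y, mu = 1e6, 1e-3            # lambda = mu*y = 1000, M = 2*alpha/mu = 1400 < y
M = 2 * alpha / mu

def integrand(u):
    # (V(y - u/mu) / V(y)) * e^{-u} with V(t) = t^{-alpha}
    return ((y - u / mu) / y) ** (-alpha) * math.exp(-u)

top = min((y - M) * mu, 50.0)   # truncate where e^{-u} is negligible
N = 100000
h = top / N
total = 0.5 * (integrand(0.0) + integrand(top))
for i in range(1, N):
    total += integrand(i * h)
total *= h
print(total)                 # close to 1, as (2.2.14) predicts
```

For smaller λ the integral visibly exceeds 1, which is why the correction factor 1 + ε(λ) appears in (2.2.15).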
Now choose μ (or λ) to be a value that 'almost minimizes' the expression −μx + Π(y)e^λ = −λr + Π(y)e^λ. Namely, put

λ := ln(r/Π(y)).   (2.2.18)
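The choice (2.2.18) is not just convenient: the two-term expression −λr + Π(y)e^λ is convex in λ, and its derivative −r + Π(y)e^λ vanishes exactly at e^λ = r/Π(y), i.e. at λ = ln(r/Π(y)). A quick check on illustrative values (our own, with Π(y) < 1 ≤ r as in (2.1.11)):

```python
import math

r, Pi_y = 2.0, 0.01

def h(lam):
    # the expression -lambda*r + Pi(y)*e^lambda that lambda should minimize
    return -lam * r + Pi_y * math.exp(lam)

lam_star = math.log(r / Pi_y)        # the choice (2.2.18)

grid = [lam_star + k * 0.01 for k in range(-300, 301)]
best = min(grid, key=h)
print(lam_star, best, h(lam_star))   # h(lam_star) = -r*ln(r/Pi(y)) + r
```

The minimal value −r ln(r/Π(y)) + r is exactly the leading part of the bound for ln P obtained below.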
As we have already noted, if s(y) := s(y, n) = y/σ(n) → ∞ (where σ(n) = V^{(−1)}(1/n)) then Π(y) < s(y)^{−α+δ} → 0. Note also that, for such a choice of λ (or μ = y^{−1} ln(r/Π(y))), when Π(y) → 0 one has λ = μy ∼ −ln Π(y) → ∞, and hence the assumption y ≫ 1/μ that we made above in (2.2.12) is correct. From (2.1.8), (2.2.17) and (2.2.18) it follows that

ln P ≤ −xμ + cnV(1/μ) + Π(y) e^{λ}(1 + ε(λ)),   (2.2.19)

where Π(y)e^λ = r and, for any δ > 0 and large enough y, owing to Theorem 1.1.4(iii) one has

nV(1/μ) ≤ c₁ Π(y) V(y/|ln Π(y)|)/V(y) ≤ c₁ Π(y)|ln Π(y)|^{α+δ} → 0

as Π(y) → 0. Therefore (2.2.19) implies that

ln P ≤ −r ln(r/Π(y)) + r + ε₁(λ),
where ε₁(λ) ↓ 0 as λ → ∞ and one can assume without loss of generality that ln(r/Π(y)) ≥ 1. This completes the proof of the theorem.

Proof of Theorem 2.2.3. In the case α = 1 the scheme of the proof remains the same but the bounding of I₂ will be different. By Theorem 1.1.4(iv),

∫_0^M V(t) e^{μt} dt ≤ e^{2α} ∫_0^M V(t) dt = e^{2} M V(M) L₁(M),
where L₁(M) is an s.v.f., so that instead of (2.2.10) we obtain

I₂ ≤ F₊(0) + cV(1/μ) L₁(1/μ);

for simplicity's sake, we will now replace cL₁(1/μ) by L₁(1/μ). The bound for I₃ remains unchanged. As a result, in the case α = 1 one gets, instead of (2.2.17),

R^n(μ, y) ≤ exp{nV(1/μ) L₁(1/μ) + Π(y) e^{λ}(1 + ε(λ))}.

The choice of λ = ln(r/Π(y)) (μ = y^{−1} ln(r/Π(y))) also remains unchanged. Then, instead of (2.2.19), we have

ln P ≤ −xμ + nV(1/μ) L₁(1/μ) + r(1 + ε(λ)),

where

nV(1/μ) L₁(1/μ) ≤ c₂ nV(y/|ln Π(y)|) L₁(y/|ln Π(y)|) ≤ c₂ Π(y) L₁(y) |ln Π(y)|^{1+δ} L₁^{−δ/2}(y) → 0   (2.2.20)

as y → ∞, provided that

Π(y) < c₁ L₁^{−1−δ}(y),  δ > 0.   (2.2.21)
The relation (2.2.21) is clearly equivalent to the inequality Π^{(δ)}(y) < c₁. If we have Π^{(δ)} < c₁ then Π^{(δ)}(y) ∼ r^α Π^{(δ)} < c₁ r^α, and the sufficiency of the condition Π^{(δ)} < c₁ for (2.2.20) is also proved. Now let

s > [L₁(σ(n))]^{1/(α−δ)}.

Then, for any δ′ ∈ (0, δ/4),

Π ≤ nV(sσ(n)) < s^{−α+δ′} < s^{−δ′} [L₁(σ(n))]^{(−α+2δ′)/(α−δ)} < [L₁(x)]^{(−α+2δ′)/(α−δ)} < L₁^{−1−ε}(x)   (2.2.22)

for some ε > 0. This means that the inequality Π^{(δ)} < 1 holds true. The theorem is proved.

Remark 2.2.5. If the function L(t) is differentiable, L′(t) = o(L(t)/t) and F₊(t) = V(t) ≡ t^{−α}L(t), then the bound for I₃ in (2.2.11) can be improved:

I₃ ≤ α₁ e^{μy} V(y)/(μy)

for any α₁ > α and large enough y. This makes it possible to refine the assertion of Theorem 2.2.1 and obtain the bound

P ≤ c (Π(y)/|ln Π(y)|)^r.
2.3 Upper bounds for the distribution of the sum of random variables when the left tail dominates the right tail

To obtain the desired estimates for the large deviation probabilities, we will need bounds for the probabilities of 'small deviations' of the sums of negative r.v.'s. In this section, it will be convenient for us to mean by σ_W(n) any fixed function such that σ_W(n) ∼ W^{(−1)}(1/n), where W^{(−1)} is the function inverse to W(t) = t^{−β} L_W(t), L_W being an s.v.f. (in particular, one could put σ_W(n) := W^{(−1)}(1/n)). If, for instance, W(t) ∼ ct^{−β} then one could take σ_W(n) = (cn)^{1/β}. As before, the symbols c, C (with or without indices) will denote constants that are not necessarily the same when they appear in different formulae.

Theorem 2.3.1. Let the r.v.'s ξ_j ≤ 0 satisfy condition [>, · ] with β < 1. Then there exist c > 0, C < ∞ and δ > 0 such that, for u ≥ C σ_W^{β−1}(n) and all large enough n, one has

P(S_n/σ_W(n) ≥ −u) ≤ e^{−c u^{−δ}}.   (2.3.1)

If L_W(n) is a non-increasing function or L_W(n) → const as n → ∞ then one can put δ = β/(1 − β). In the general case, one can choose any δ < β/(1 − β).

Remark 2.3.2. Let L_W(t) ≡ 1 and C ≤ 1, for example. Then, for u = σ_W^{β−1}(n) = n^{−(1−β)/β}, we have from (2.3.1) that

P(S_n/σ_W(n) ≥ −u) ≡ P(S_n ≥ −n) ≤ e^{−cn}.

Now observe that, for any u > 0, including the case u < C σ_W^{β−1}(n), to obtain for P(S_n/σ_W(n) ≥ −u) a bound that would be better than e^{−cn} is, generally speaking, impossible. Indeed, if p₀ = P(ξ_j = 0) > 0 then P(S_n = 0) = p₀^n, so that for all u > 0

P(S_n/σ_W(n) ≥ −u) ≥ p₀^n.
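Theorem 2.3.1 can be probed by simulation. Below, ξ_j = −(1−U)^{−1/β} gives W(t) = t^{−β} exactly (so L_W ≡ 1, δ = β/(1−β), and the explicit constant c found in the proof applies); n, u and the trial count are our own choices:

```python
import math, random

random.seed(3)

beta = 0.5                    # left-tail index: W(t) = t^{-beta} exactly here
n = 200
sigma_w = n ** (1.0 / beta)   # sigma_W(n) = n^{1/beta} = n^2
u = 0.3                       # well above C*sigma_W^{beta-1}(n) = C/n
trials = 20000

hits = 0
for _ in range(trials):
    s = 0.0
    for _ in range(n):
        s -= (1.0 - random.random()) ** (-1.0 / beta)   # xi_j = -Pareto(beta)
    if s >= -u * sigma_w:
        hits += 1

p_mc = hits / trials
delta = beta / (1.0 - beta)
c = 0.5 * (1 - beta) * beta ** (beta / (1 - beta)) \
    * math.gamma(1.0 - beta) ** (1.0 / (1 - beta))
bound = math.exp(-c * u ** (-delta))
print(p_mc, bound)            # the Monte Carlo estimate sits below the bound
```

The Monte Carlo estimate falls well below the right-hand side of (2.3.1), consistent with the factor 1/2 built into the constant c in the proof.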
Proof of Theorem 2.3.1. For λ ≥ 0 and any z > 0 one has

φ^n(λ) ≡ ∫_{−∞}^{0} e^{λt} P(S_n ∈ dt) ≥ ∫_{−z}^{0} e^{λt} P(S_n ∈ dt) ≥ e^{−λz} P(S_n ≥ −z),

so that

P(S_n ≥ −z) ≤ e^{λz + n ln φ(λ)}.   (2.3.2)

Further, clearly

φ(λ) = ∫_{−∞}^{0} e^{λu} P(ξ ∈ du) = 1 − λ ∫_{−∞}^{0} e^{λu} P(ξ < u) du.
Since P(ξ < −t) ≥ W(t) = t^{−β} L_W(t), we get

φ(λ) ≤ 1 − λ ∫_0^∞ e^{−λt} W(t) dt = 1 − ∫_0^∞ e^{−v} W(v/λ) dv,

where, as λ → 0,

∫_0^∞ e^{−v} W(v/λ) dv = W(1/λ) (∫_0^∞ e^{−v} v^{−β} dv)(1 + o(1)) = W(1/λ) Γ(1 − β)(1 + o(1))

(see Theorem 1.1.5), Γ(·) being the gamma function. Thus

φ(λ) ≤ 1 − Γ(1 − β) W(1/λ)(1 + o(1)),

and therefore

ln φ(λ) ≤ −Γ(1 − β) W(1/λ)(1 + o(1)).

Put Γ := Γ(1 − β) and take z = uσ_W(n),
λ = (βΓ/u)^{1/(1−β)} σ_W^{−1}(n),

so that λ → 0 as u ≫ σ_W^{β−1}(n), and one has

λz + n ln φ(λ) ≤ u(βΓ/u)^{1/(1−β)} − nΓ W((u/βΓ)^{1/(1−β)} σ_W(n))(1 + o(1)).

First assume that L_W(n) is non-increasing or, alternatively, that L_W(n) → const as n → ∞. Then, for u/βΓ ≤ 1,

W((u/βΓ)^{1/(1−β)} σ_W(n)) ≥ (u/βΓ)^{−β/(1−β)} n^{−1}(1 + o(1)).   (2.3.3)

From here it follows that

λz + n ln φ(λ) ≤ −(1 − β) β^{β/(1−β)} Γ^{1/(1−β)} u^{−β/(1−β)}(1 + o(1)),   (2.3.4)

and hence, by virtue of (2.3.2),

P(S_n/σ_W(n) ≥ −u) ≤ e^{−c u^{−δ}(1+o(1))},  δ = β/(1 − β).   (2.3.5)
Now choose C sufficiently large that, for u ≥ C σ_W^{β−1}(n), the term 1 + o(1) in (2.3.5) exceeds 1/2 for all large enough n. Then (2.3.5) implies that the bound (2.3.1) holds with

c = (1/2)(1 − β) β^{β/(1−β)} Γ^{1/(1−β)},  δ = β/(1 − β).

In the general case, by the properties of s.v.f.'s (Theorem 1.1.4), for any ε > 0 and all large enough μσ_W(n) and n, one has

L_W(μσ_W(n)) ≥ μ^{ε} L_W(σ_W(n)),  W(μσ_W(n)) ≥ μ^{−β+ε} n^{−1}.
Putting μ := (u/βΓ)^{1/(1−β)} and repeating the above argument (see (2.3.3), (2.3.4)), we again get (2.3.5), but with δ = (β − ε)/(1 − β) < β/(1 − β). The theorem is proved.

Taking into account that, by virtue of (1.5.59),

F_{β,−1,−}(t) ∼ t^{−β}  as t → ∞,

we obtain the following assertion (see also (1.5.60)).

Corollary 2.3.3. Let F = F_{β,−1} be the stable distribution with parameters β and ρ = −1. One can assume without loss of generality that W(t) ∼ t^{−β}. Then, putting b_n := n^{1/β}, we obtain that b_n ∼ σ_W(n) = n^{1/β} and all the distributions of the r.v.'s S_n/b_n coincide with F, and therefore, for u > 0,

P(ξ ≥ −u) ≤ e^{−c u^{−δ}},  δ = β/(1 − β).   (2.3.6)

From (2.3.6) it follows that, for any k > 0,

E|ξ|^{−k} < ∞.   (2.3.7)
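The implication (2.3.6) ⇒ (2.3.7) rests on the superfast decay of P(ξ ≥ −u) as u ↓ 0. As a sanity check — using our own approximation of ξ ⊂= F_{β,−1} by a normalized sum of negative Pareto summands, not the book's construction — a Monte Carlo estimate of E|ξ|^{−3} remains bounded:

```python
import random

random.seed(11)

beta = 0.5
n = 200
sigma_w = n ** (1.0 / beta)   # n^2, since W(t) = t^{-beta} exactly here
trials = 20000
k = 3                         # order of the negative moment

acc = 0.0
for _ in range(trials):
    s = 0.0
    for _ in range(n):
        s -= (1.0 - random.random()) ** (-1.0 / beta)   # xi_j = -Pareto(beta)
    acc += (abs(s) / sigma_w) ** (-k)

est = acc / trials
print(est)   # a moderate finite number: values of |S_n|/sigma_W near 0 are too
             # rare (cf. (2.3.6)) to make the negative moment blow up
```

Without the semiexponential left-edge decay, a negative moment of order k ≥ β/(1−β) could easily be infinite.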
Remark 2.3.4. The exact asymptotics of P(ξ ≥ −u) as u → 0 for ξ ⊂= F_{β,−1} were found in Theorem 2.5.3 of [286]. Comparison with that result shows that the bound (2.3.6) is asymptotically tight (up to the value of the constant c).

Now we will obtain bounds for P(S_n ≥ −z) in the case of arbitrary r.v.'s ξ_j ≶ 0 satisfying conditions [>, <] with V(t) = o(W(t)). Recall the notation σ(n) = V^{(−1)}(1/n) (see (2.1.9)).

Theorem 2.3.5. Let condition [>, <] be satisfied and let V(t) = o(W(t)) as t → ∞. Then in the case α < 1 there exists a constant c₁ < ∞ such that, for z ≥ 0 and all n, one has

P(S_n ≥ −z) ≤ c₁ [nV(σ_W(n) − z) + exp{−c ((z + σ(n))/σ_W(n))^{−δ}}],   (2.3.8)

where c and δ are the same as in Theorem 2.3.1. If β < 1, α > 1, α ≠ 2 then (2.3.8) remains true if one replaces σ(n) on the right-hand side of (2.3.8) by the quantity an, where

a = E[ξ | ξ ≥ 0] = E[ξ; ξ ≥ 0]/P(ξ ≥ 0).

The cases of the 'threshold' values α = 1 and α = 2 require additional consideration. However, even in these cases, inequalities of the form (2.3.8) enable one to obtain meaningful bounds, since the fulfilment of condition [ · , <], say, in the case α = 1 implies that F₊(t) ≤ t^{−α′} for any α′ < 1 and all large enough t.
Proof. If ξ ≤ 0 then the desired assertion follows from Theorem 2.3.1. Now let p := P(ξ < 0) ∈ (0, 1) and let ξ_j^∓ be independent r.v.'s with the distributions

P(ξ_j^− < t) = P(ξ < t)/p,  t ≤ 0,   (2.3.9)

P(ξ_j^+ < t) = P(0 ≤ ξ < t)/(1 − p),  t ≥ 0,   (2.3.10)

respectively. Denote by ν ≤ n the number of negative jumps ξ_j in the sum S_n. Then

S_n = S_ν^− + S_{n−ν}^+,   (2.3.11)

where, for a fixed ν = m, the sums S_m^− and S_{n−m}^+, S_k^∓ = Σ_{j=1}^{k} ξ_j^∓, like the summands ξ_j^∓ in them, are independent. Furthermore, it is obvious (see e.g. Theorem 10, Chapter 5 of [49]) that

P(|ν − np| > εn) ≤ c₁ e^{−qn},   (2.3.12)
where q = q(ε) > 0 for ε > 0. Fix ν = m, m being a value from the interval (n(p − ε), n(p + ε)), where ε ≤ (1/2) min{p, 1 − p}. Then, for such an m, Theorem 2.3.1 is applicable to S_m^−. Now construct a new r.v. S⁻ ≤ 0 which is independent of S_{n−m}^+ and follows the distribution

P(S⁻ ≥ −v) = exp{−c (v/σ_W(m))^{−δ}},  v ≥ v_m := C σ_W^{β}(m),   (2.3.13)

P(S⁻ = 0) = exp{−c (v_m/σ_W(m))^{−δ}} =: P_m.   (2.3.14)

Then it is evident that, by virtue of Theorem 2.3.1, one has S_m^− ≤ S⁻ stochastically, and therefore

P(S_n ≥ −z | ν = m) = P(S_{n−m}^+ ≥ −z − S_m^−) ≤ P(S_{n−m}^+ ≥ −z − S⁻) ≤ P_m + Q_m,

where

Q_m := P(S_{n−m}^+ ≥ −z − S⁻; S⁻ < −v_m).

First consider the case α < 1. There are two possibilities:

(1) v_m − z ≤ σ(n),  (2) v_m − z > σ(n).   (2.3.15)
In the former case,

Q_m ≤ P(−z − σ(n) ≤ S⁻ < −v_m) + P(S_{n−m}^+ ≥ −z − S⁻; S⁻ < −z − σ(n)),   (2.3.16)

where, owing to (2.3.14), the first term on the right-hand side does not exceed

exp{−c ((z + σ(n))/σ_W(m))^{−δ}}  (≥ P_m).   (2.3.17)
By virtue of Corollary 2.2.4, the second term on the right-hand side does not exceed the value

c(n − m) E[V(−z − S⁻); S⁻ < −z − σ(n)].   (2.3.18)

First consider

E[V(−z − S⁻); −σ_W(n) < S⁻ < −z − σ(n)],   (2.3.19)

where one can assume without loss of generality that z < σ_W(n)/2 (otherwise the inequality (2.3.8) would not be meaningful). The expectation (2.3.19) can be rewritten as

∫_{−σ_W(n)}^{−z−σ(n)} V(−z − v) P(S⁻ ∈ dv).   (2.3.20)
Here, as u = v/σ_W(n) increases from −1 to −(z + σ(n))/σ_W(n), on the one hand the quantity V(−z − v) grows, as a power function of v (or of u), from the value V(σ_W(n) − z) to V(σ(n)). On the other hand, as u = v/σ_W(n) grows, by virtue of (2.3.13) the probability P(S⁻ ≥ uσ_W(n)) (and its density as well) decays much faster ('semiexponentially fast', as e^{−c|u|^{−δ}}), from the value e^{−c} down to exp{−c((z + σ(n))/σ_W(n))^{−δ}}. These simple qualitative considerations make it possible to omit tedious computations (the reader could reproduce them independently) and claim that (2.3.19), (2.3.20) admit an upper bound of the form cV(σ_W(n) − z). The second part of the integral in (2.3.18), which is equal to

E[V(−z − S⁻); S⁻ ≤ −σ_W(n)],

clearly does not exceed V(σ_W(n) − z). Summarizing the above argument (see (2.3.15)–(2.3.20)) and the fact that both m and n − m are between c₁n and c₂n, where 0 < c₁ < c₂ < 1, we obtain that
in case (1)

P(S_n ≥ −z | ν = m) ≤ exp{−c (v_m/σ_W(n))^{−δ}} + exp{−c ((z + σ(n))/σ_W(n))^{−δ}} + c₀ nV(σ_W(n) − z).   (2.3.21)

In case (2), the first term on the right-hand side of (2.3.16) will be absent, while the second can be estimated as before. Hence

P(S_n ≥ −z | ν = m) ≤ exp{−c (v_m/σ_W(n))^{−δ}} + c₀ nV(σ_W(n) − z).   (2.3.22)

Taking into account the inequality (2.3.12) and the fact that the derived bounds (2.3.21), (2.3.22) are uniform in m ∈ [(p − ε)n, (p + ε)n], we obtain

P(S_n ≥ −z) ≤ c e^{−qn} + exp{−c (v_m/σ_W(n))^{−δ}} + exp{−c ((z + σ(n))/σ_W(n))^{−δ}} + c₀ nV(σ_W(n) − z).   (2.3.23)

Finally, we note that

(v_m/σ_W(n))^{−δ} = C^{−δ} σ_W^{(1−β)δ}(n) ≥ c₁ n^h,  h ∈ (0, 1],
and that the minimum value of V(σ_W(n) − z) (attained at z = 0) has a power term of the form n^{−α/β}. Hence the first two terms on the right-hand side of (2.3.23) are negligibly small compared with the last one. This proves (2.3.8).

Now let β < 1, α ∈ (1, 2). Then, as will be shown in Corollary 3.1.2, the bound

P(S_n^+ ≥ x) ≤ cnV(x − an),  a = E(ξ | ξ > 0),   (2.3.24)

becomes valid for deviations x exceeding the threshold an + σ(n), where σ(n) = o(n). Hence for any fixed a′ > a one has (possibly with a different c) P(S_n^+ ≥ x) ≤ cnV(x) for x ≥ a′n (in the case α < 1, the bound from Corollary 2.2.4 is used in (2.3.18)). Therefore the two alternative possibilities (1), (2) become the following:

(1) v_m − z ≤ a′n,  (2) v_m − z > a′n.

The rest of the argument remains valid in this case, and thus the changes needed reduce to replacing σ(n) by a′n. Hence (2.3.8) will still be true (possibly with a somewhat different c or δ) if one replaces σ(n) by a′n. Finally, notice that (2.3.8) will remain true if we further replace a′n by an and c by c(a/a′)^δ.
The case α > 2 can be dealt with in a similar way. Here the bound (2.3.24) becomes valid for deviations x ≥ an + √((α − 2)n ln n) (see § 4.1). Otherwise, the argument remains unchanged. The theorem is proved.
2.4 Upper bounds for the distribution of the maximum of sums when the left tail is substantially heavier than the right tail

In this section, it will be assumed that β < α. One could expect that, in this case, owing to the domination of the left jump-distribution tail, the trajectory of the random walk will drift to −∞ with probability one, and therefore the bounds for the probabilities P(S̄_n ≥ x) will, starting from some point, be independent of n. If, to bound the probability P(S̄_n ≥ x), one used inequalities of the form P(S̄_n ≥ x) ≤ P(B̄) + P with the sets B = B(0) (as in the proof of Corollary 2.2.4), then one would not obtain the desired estimates. So instead we will use in these inequalities the sets B = B(v) from (2.1.1) with v > 0 and

g(j) = j^{1/γ},  γ ∈ (β, α).

Theorem 2.4.1. Let condition [>, <] be satisfied, the functions V(t) and W(t) being of the forms (2.1.3) and (2.1.4) respectively with β < min{1, α}. Then, for a suitable v and all n, as y → ∞,

P(v) := P(S̄_n ≥ x, B(v)) ≤ c [min{nV(y), y^{−(α−β)}}]^{r−ε}   (2.4.1)

with r = x/y, where ε = ε(v, γ) > 0 can be made arbitrarily small by choosing v > 0 and γ > β. Moreover, for any fixed ε > 0,

P(S̄_n ≥ x) ≤ cV(x) min{n, x^{β+ε}}.   (2.4.2)

Corollary 2.4.2. Under the conditions of Theorem 2.4.1, S̄ = sup_{n≥0} S_n is a proper r.v.

The assertion of the corollary follows in an obvious way from (2.4.2).

To prove Theorem 2.4.1, we will need bounds for the probability P = P(0) from (2.1.7). In the following lemma, to make computations easier, we will assume in addition that in (2.1.3) and (2.1.4) one has

L(t) = L + o(1),  L_W(t) = L_W + o(1)   (2.4.3)

as t → ∞, where L and L_W are constants.
Lemma 2.4.3. Let the conditions of Theorem 2.4.1 be satisfied and let (2.4.3) hold true. Then, for all n,

P ≤ min{c (nV(y))^r, c₁ y^{−r(α−β)} (ln y)^{−rβ}},  r = x/y.   (2.4.4)

Consequently, for all large enough y one has

P ≤ min{c (nV(y))^r, y^{−r(α−β)}},

where c is the same as in Theorem 2.2.1.

Note that the conditions (2.4.3) will be used only to obtain the second part of the inequality (2.4.4). Furthermore, they will not be needed for proving Theorem 2.4.1.

Proof. By virtue of the inequalities (2.1.8), the problem again reduces to that of bounding the quantity R(μ, y) from (2.2.7), with the same partition of that integral into the sub-integrals I₁, I₂, I₃. Now let us show that μ → 0 can be chosen so that R(μ, y) < 1. Since

I₁ = F₋(0) − μ ∫_0^∞ F₋(t) e^{−μt} dt ≤ F₋(0) − ∫_0^∞ e^{−u} W(u/μ) du

and we have assumed that β < 1, one has, as μ → 0,

∫_0^∞ e^{−u} W(u/μ) du ∼ W(1/μ) ∫_0^∞ e^{−u} u^{−β} du = Γ(1 − β) W(1/μ),

where Γ(·) is the gamma function, so that

I₁ ≤ F₋(0) − Γ(1 − β) W(1/μ)(1 + o(1)).   (2.4.5)

In the case α < 1, the bounds for the integrals I₂ and I₃ remain the same as in (2.2.10), (2.2.11) and (2.2.15). Hence for this case

R(μ, y) ≤ 1 − Γ(1 − β) W(1/μ)(1 + o(1)) + cV(1/μ) + V(y) e^{μy}(1 + ε(μy)),   (2.4.6)

where V(1/μ) = o(W(1/μ)). If α > 1 then instead of the term cV(1/μ) in (2.4.6) one would have cμ (see the remark after (2.2.10) and also the proof of Theorem 3.1.1 and the relations (3.1.27), (3.1.28) on p. 133). Because μ = o(W(1/μ)), all the subsequent arguments involving (2.4.6) will remain valid. For α = 1, instead of the term cV(1/μ) in (2.4.6) we have cμ|ln μ|; it is also obvious that the relation μ|ln μ| = o(W(1/μ)) is true, and that the subsequent argument will remain valid as well.
Thus one obtains from (2.4.6) the following bounds:

R(μ, y) ≤ 1 − Γ(1 − β) W(1/μ)(1 + o(1)) + V(y) e^{μy}(1 + o(1)) ≤ 1 + V(y) e^{μy}(1 + o(1)).   (2.4.7)

We will make use of both of these inequalities. First, we turn to the former and choose μ in such a way that

Γ(1 − β) W(1/μ) = V(y) e^{μy}.   (2.4.8)

To make finding the root μ of this equation easier, we will use the simplifying assumptions (2.4.3). Then for y ≫ 1/μ the equation (2.4.8) will take the form

y^{α} μ^{β} = c e^{μy}(1 + o(1)).   (2.4.9)

Putting μy =: λ, one can rewrite (2.4.9) as

β ln λ + (α − β) ln y = λ + c₁ + o(1).

From here it is seen that we can 'almost satisfy' the equation (2.4.9) by setting

λ = (α − β) ln y + β ln ln y + c₂.

With such a choice of λ (or μ) and a suitable c₂,

R(μ, y) ≤ 1 − Γ(1 − β) L_W λ^{β} y^{−β}(1 + o(1)) + y^{−α} L e^{λ}(1 + o(1))
        = 1 − y^{−β} ln^{β} y [L_W Γ(1 − β)(α − β)^{β}(1 + o(1)) − L e^{c₂}(1 + o(1))] < 1.

Therefore, by the basic inequality (2.1.8),

P ≤ e^{−μx} = e^{−rλ} = exp{−r[(α − β) ln y + β ln ln y + c₂]} = c₁ y^{−r(α−β)} (ln y)^{−rβ}.

Hence the second part of the inequality (2.4.4) follows. To prove the first part of (2.4.4), consider the second inequality in (2.4.7) and choose μ in the same way as in the proof of Theorem 2.2.1, i.e. put

μy = ln(r/Π(y)).

Then, following the computations (2.2.17)–(2.2.19), we similarly obtain

ln P ≤ −xμ + Π(y) e^{μy}(1 + ε(μy)) = −r ln(r/Π(y)) + r + o(1).

This yields the inequality (2.2.1) and proves the lemma.
Proof of Theorem 2.4.1. First, we will estimate

P(v) ≡ P(S̄_n ≥ x; B(v)).   (2.4.10)

For g(x) = x^{1/γ} and A > 1, put

m₁ := g^{(−1)}(x) = x^{γ},  m_k := x^{γ} A^{k−1},
M₀ := 0,  M_k := Σ_{j=1}^{k} m_j ≡ x^{γ} A_k,  A_k := (A^{k} − 1)/(A − 1) ≥ A^{k−1},   (2.4.11)
x_k := x + g(M_{k−1}) = x(1 + A_{k−1}^{1/γ}),
y_k := y + v g(M_k) = y(1 + v r A_k^{1/γ}),

for k = 1, 2, ... Here one can assume, for simplicity, that the m_k are integers.

For n ≤ M₁, owing to Lemma 2.4.3,

P(v) ≤ P(S̄_n ≥ x; ∩_{j=1}^{n} {ξ_j < y₁}) ≤ min{c (nV(y₁))^{r₁}, y₁^{−r₁(α−β)}},

where r₁ = x/y₁ = x/(y(1 + vrA₁^{1/γ})) can be made greater than r − ε by choosing v appropriately. This proves (2.4.1).

Now let n > M₁. Then

P(v) ≤ P(S̄_{m₁} ≥ x₁; ∩_{j=1}^{m₁} {ξ_j < y₁}) + P(S_{M₁} ≥ −M₁^{1/γ}; ∩_{j=1}^{M₁} {ξ_j < y₁})
      + P(S̄_n ≥ x, S̄_{m₁} < x, S_{M₁} < −M₁^{1/γ}; B(v)).   (2.4.12)
Arguing in the same way, we see that for n > M₂ the last term in (2.4.12) will not exceed

P(S̄_{m₂} ≥ x₂; ∩_{j=1}^{m₂} {ξ_j < y₂}) + P(S_{M₂} ≥ −M₂^{1/γ}; ∩_{j=1}^{M₂} {ξ_j < y₂})
  + P(S̄_n ≥ x, S̄_{M₂} < x, S_{M₂} < −M₂^{1/γ}; B(v)),

and so on. Thus, to get a bound for P(v), we need to obtain estimates for

Σ_{k=1}^{N} P(S̄_{m_k} ≥ x_k; ∩_{j=1}^{m_k} {ξ_j < y_k})   (2.4.13)

and

Σ_{k=1}^{N} P(S_{M_k} ≥ −M_k^{1/γ}; ∩_{j=1}^{M_k} {ξ_j < y_k}),   (2.4.14)

where N := min{k : M_k ≥ n}. By virtue of Lemma 2.4.3, for large enough y the former sum does not exceed the quantity

Σ_k y_k^{−r_k(α−β)},   (2.4.15)

where

r_k = x_k/y_k = x(1 + A_{k−1}^{1/γ})/(y(1 + v r A_k^{1/γ})) ≥ r − ε,  ε > 0,

for all k and a suitable v = v(r, A, ε). Therefore the sums (2.4.13) and (2.4.15) will not exceed

Σ_k y_k^{−(r−ε)(α−β)}.   (2.4.16)
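The key point — that r_k = x_k/y_k stays above r − ε uniformly in k once v is small enough — is elementary to verify numerically for the sequences (2.4.11). A sketch with arbitrary illustrative parameters (our own choices, not from the book):

```python
# Check that r_k = x_k / y_k for the sequences (2.4.11) stays above r - eps
# uniformly in k when v is small.

gamma_, A = 1.2, 2.0
x, y = 100.0, 50.0            # so r = x/y = 2
r = x / y
v, eps = 0.01, 0.2            # v < eps/r^2 would do; v = 0.01 suffices here

def A_k(k):
    return (A ** k - 1) / (A - 1)    # A_k = (A^k - 1)/(A - 1)

rks = []
for k in range(1, 40):
    xk = x * (1 + A_k(k - 1) ** (1 / gamma_))   # x_k = x(1 + A_{k-1}^{1/gamma})
    yk = y * (1 + v * r * A_k(k) ** (1 / gamma_))  # y_k = y(1 + v r A_k^{1/gamma})
    rks.append(xk / yk)

print(min(rks), r - eps)      # the minimum (attained at k = 1) exceeds r - eps
```

For large k the ratio even grows, since x_k and y_k both scale like A_k^{1/γ} but the factor v in y_k is small; the binding case is k = 1.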
But the A_k increase as a geometric progression (see (2.4.11)). The same can be said about the sequence 1 + rvA_k^{1/γ} (see the definition of y_k in (2.4.11)). Therefore the sums (2.4.13), (2.4.15) and (2.4.16) will not exceed c y^{−(r−ε)(α−β)}.

Next we estimate the sum (2.4.14). For brevity, let M_k^{1/γ} =: z_k. Denote by ν the number of events {ξ_j < 0} in M_k independent trials. Then, by (2.3.12),

P(S_{M_k} ≥ −z_k; ∩_{j=1}^{M_k} {ξ_j < y_k})
  = Σ_{i=1}^{M_k} P(S_{M_k} ≥ −z_k; ∩_{j=1}^{M_k} {ξ_j < y_k} | ν = i) P(ν = i)
  = Σ_{i=[M_k p₁]}^{[M_k p₂]} + O(e^{−qM_k}),   (2.4.17)
where p1 = F− (0) − ϕ > 0, p2 = F− (0) + ϕ < 1, ϕ > 0 and q = q(ϕ) > 0. Further, let ν = i ∈ [p1 Mk , p2 Mk ] be fixed. Then, cf. (2.3.11), + SMk = Si− + SM , k −i
k where Sk∓ = j=1 ξj∓ are independent sums of independent r.v.’s ξj∓ with distribution functions (2.3.9) and (2.3.10) respectively. Hence the first factor in the ith term on the right-hand side of (2.4.17) will not exceed M* k −i + P(Si− −2zk ) + P SM zk ; {ξj+ < yk } . (2.4.18) k −i j=1
Here the second term, by virtue of Theorem 2.2.1, does not exceed the quantity $c[M_k V(y_k)]^{r_k^*}$, where (see (2.4.11))
$$r_k^* = \frac{M_k^{1/\gamma}}{y_k} = \frac{xA_k^{1/\gamma}}{y\bigl(1 + vrA_k^{1/\gamma}\bigr)} \ge \frac{r}{1 + vr} > r - \varepsilon$$
for $v < \varepsilon/r^2$. Therefore, the second term in (2.4.18) is bounded by
$$c[M_k V(y_k)]^{r-\varepsilon} \le c_1\bigl[x^\gamma A_k\bigl(yA_k^{1/\gamma}\bigr)^{-\alpha}\bigr]^{r-\varepsilon} = c_2\bigl[y^{\gamma-\alpha}A_k^{1-\alpha/\gamma}\bigr]^{r-\varepsilon}$$
uniformly in $i$. But $A_k \ge A^{k-1}$, $A > 1$, $\gamma < \alpha$. Hence the sum in $k$ of these terms (see (2.4.14), (2.4.17)) does not exceed $c_3 y^{-(r-\varepsilon)(\alpha-\gamma)}$.
Now we will obtain a bound for the first term in (2.4.18), putting for brevity $M_k = m$, $i = mp$, $p \in [p_1, p_2]$. One has the following embedding for the event in the first term:
$$\bigl\{S_{mp}^- \ge -2m^{1/\gamma}\bigr\} \subset \bigcap_{j=1}^{mp}\bigl\{\xi_j^- \ge -2m^{1/\gamma}\bigr\}.$$
Hence
$$\mathbf P\bigl(S_{mp}^- \ge -2m^{1/\gamma}\bigr) \le \Bigl(1 - \frac{W(2m^{1/\gamma})}{F_-(0)}\Bigr)^{mp} \le \bigl(1 - cm^{-\beta/\gamma}\bigr)^{mp} < e^{-cp_1 m^{1-\beta/\gamma}} \eqno(2.4.19)$$
uniformly in $i \in [mp_1, mp_2]$. Taking into account that $m = M_k$ grows as a geometric progression, that $M_1 = x^\gamma$ and that $1 - \beta/\gamma > 0$, we obtain that the contribution of the first terms in (2.4.18) to the sum in (2.4.14) does not exceed $e^{-cy^{\gamma-\beta}}$. Note that a bound of the form (2.4.19) can be obtained in a different way, using Theorem 2.3.1.
We have proved that, for all large enough $y$, $P(v) < y^{-(r-\varepsilon)(\alpha-\beta)}$. Combining all the above bounds, we arrive at the second part of the inequality (2.4.1). It remains to notice that, for $n > M_1$, $nV(y) > cx^{\gamma-\alpha} \ge x^{\beta-\alpha}$, and therefore (2.4.1) is proved.
To obtain the second assertion of Theorem 2.4.1, we need to estimate
$$\mathbf P\bigl(\overline B(v)\bigr) \le \sum_{j=1}^n \mathbf P\bigl(\xi_j \ge y + vj^{1/\gamma}\bigr) \le \sum_{j=1}^n V\bigl(y + vj^{1/\gamma}\bigr) \le \int_0^n V\bigl(y + vt^{1/\gamma}\bigr)\,dt. \eqno(2.4.20)$$
If $n \le y^\gamma$ then the integral does not exceed $cnV(y)$. If $n \ge y^\gamma$ then one should represent the integral in (2.4.20) as the sum $\int_0^{y^\gamma} + \int_{y^\gamma}^n$, where the first integral
has already been shown to be bounded by $cy^\gamma V(y)$. The last integral, by virtue of Theorem 1.1.4(iv), does not exceed
$$c\int_y^\infty u^{\gamma-1} V(y + vu)\,du \sim c_1 y^\gamma V(y).$$
Therefore
$$\mathbf P\bigl(\overline B(v)\bigr) \le cV(y)\min\{y^\gamma, n\}. \eqno(2.4.21)$$
Now choosing $r \ge 1 + \varepsilon$ in (2.4.1), we obtain for $\mathbf P(\overline S_n \ge x)$ the same bound as in (2.4.21):
$$\mathbf P(\overline S_n \ge x) \le \mathbf P\bigl(\overline S_n \ge x;\ B(v)\bigr) + \mathbf P\bigl(\overline B(v)\bigr) \le cV(x)\min\{x^\gamma, n\}.$$
The theorem is proved.

Remark 2.4.4. It is not difficult to verify that, at the price of some complications in the derivation, the bounds (2.4.1), (2.4.2) could be made more precise. If we put $g(j) := j^{1/\beta}\ln^{-b} j$ and $m_1 := x^\beta\ln^b x$ then the quantity $\varepsilon$ in (2.4.1), (2.4.2) could be replaced by 0, but then a logarithmic factor would appear on the right-hand sides of these inequalities. Indeed, the only place in the proof of Theorem 2.4.1 that is sensitive to the approach of the parameter $\gamma$ to the value $\beta$ is the bounding of the first term in (2.4.18). But this bound is exponential (see (2.4.19) and what follows). Hence one could achieve a power decay rate in the bound by a suitable choice of $b$ in the new definition of the function $g$. Bounds for $\mathbf P(\overline B(v))$ would change in a similar way. Therefore one can actually obtain the bound
$$\mathbf P(\overline S_n \ge x) \le cV(x)\min\bigl\{n,\ x^\beta\ln^{b_1} x\bigr\} \eqno(2.4.22)$$
for a suitable $b_1 > 0$.
Since in the assertion of Theorem 2.4.1 we have an arbitrary fixed $\varepsilon > 0$ in the exponent, the simplifying assumptions (2.4.3) do not lead to a loss of generality. One could even assume that
$$V(t) \le t^{-\alpha'}, \qquad W(t) \ge t^{-\beta'},$$
where $\alpha' < \alpha$ and $\beta' > \beta$ are close to $\alpha$ and $\beta$ respectively. In that case the assertions (2.4.1), (2.4.2) would retain their form with $\varepsilon = \varepsilon(\beta', \alpha')$.
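The two regimes in the estimate (2.4.21) are easy to check numerically for a pure power majorant $V(t) = t^{-\alpha}$. The following is a minimal sketch, not from the text: all parameter values ($\alpha$, $\gamma$, $v$, $y$) are illustrative choices with $\gamma < \alpha$, and it simply compares the integral in (2.4.20) with $V(y)\min\{y^\gamma, n\}$.

```python
# Numerical sanity check of the bound (2.4.21) for a pure power
# majorant V(t) = t^{-alpha}.  All concrete values are illustrative.
alpha, gamma_, v = 0.8, 0.5, 1.0   # gamma < alpha, as in the book's setting

def V(t):
    return t ** (-alpha)

def tail_integral(y, n, steps=200_000):
    # midpoint rule for int_0^n V(y + v*t^{1/gamma}) dt
    h = n / steps
    return sum(V(y + v * ((i + 0.5) * h) ** (1.0 / gamma_)) * h
               for i in range(steps))

y = 50.0
ratios = []
for n in (5, 10**3, 10**6):            # covers n <= y^gamma and n >= y^gamma
    bound = V(y) * min(y ** gamma_, n)  # right-hand side of (2.4.21), c = 1 scale
    ratios.append(tail_integral(y, n) / bound)
print(ratios)
```

In both regimes the ratio stays bounded by a modest constant $c$, and for $n \ge y^\gamma$ it stabilizes, reflecting the convergence of the integral $\int_{y^\gamma}^{\infty}$.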
2.5 Lower bounds for the distributions of the sums. Finiteness criteria for the maximum of the sums

2.5.1 Lower bounds

In §§ 2.1–2.4 we presented upper bounds for the distributions of $S_n$ and $\overline S_n$. Now we will derive lower bounds for the distributions of $S_n$. These are substantially simpler and more general.
Here we will not need conditions on the existence of regularly varying majorants or minorants. Recall that
$$F_+(t) = \mathbf P(\xi \ge t), \qquad F_-(t) = \mathbf P(\xi < -t), \qquad F(t) = F_-(t) + F_+(t).$$

Theorem 2.5.1. Let $\{K(n) > 0;\ n \ge 1\}$ be an arbitrary sequence, and let $Q_n(u) := \mathbf P\bigl(S_n/K(n) < -u\bigr)$. Then, for $y = x + uK(n-1)$,
$$\mathbf P(S_n \ge x) \ge nF_+(y)\Bigl(1 - Q_{n-1}(u) - \frac{n-1}{2}\,F_+(y)\Bigr).$$

Proof. Put $G_n := \{S_n \ge x\}$ and $B_j := \{\xi_j < y\}$. Then
$$\mathbf P(S_n \ge x) \ge \mathbf P\Bigl(G_n;\ \bigcup_{j=1}^n \overline B_j\Bigr) \ge \sum_{j=1}^n \mathbf P(G_n \overline B_j) - \sum_{i<j\le n} \mathbf P(G_n \overline B_i \overline B_j) \ge \sum_{j=1}^n \mathbf P(G_n \overline B_j) - \frac{n(n-1)}{2}\,F_+^2(y).$$
Here, for $y = x + uK(n-1)$,
$$\mathbf P(G_n \overline B_j) = \int_y^\infty \mathbf P(S_{n-1} \ge x - t)\,\mathbf F(dt) \ge \mathbf P(S_{n-1} \ge x - y)\,F_+(y) = (1 - Q_{n-1}(u))F_+(y).$$
The theorem is proved.
Now we will present conditions under which one can obtain explicit estimates for $K(n)$ and $Q_n(u)$ in the case $\mathbf E\xi_j^2 = \infty$. We will say that condition $[\,\cdot\,, \lessgtr]$ is satisfied if, for some $c \ge 1$,
$$V(t) \le F_+(t) \le cV(t),$$
(2.5.1)
where $V(t)$ is given in (2.1.3). (If $c = 1$ then $[\,\cdot\,, \lessgtr]$ coincides with $[\,\cdot\,, =]$.) We will also be using condition $[\mathbf R_{\alpha,\rho}]$ from § 1.5: $F(t)$ is an r.v.f. with index $-\alpha$, $\alpha \in (0, 2)$, and, moreover, there exists the limit
$$\lim_{t\to\infty} \frac{F_-(t)}{F(t)} = \rho_-, \qquad \rho_- \in [0, 1], \qquad \rho = 1 - 2\rho_-.$$
When $\rho_- = 0$, one can assume that condition $[<, =]$ is met.
Under condition $[\mathbf R_{\alpha,\rho}]$ the scaled sums $S_n/b(n)$, $b(n) = F^{(-1)}(1/n)$, converge in distribution to the stable law $\mathbf F_{\alpha,\rho}$ with parameters $(\alpha, \rho)$ (see § 1.5; recall
that we assume that $\mathbf E\xi = 0$ when $\mathbf E\xi$ exists and is finite, and that centring may be needed in the case $\alpha = 1$):
$$\mathbf P\Bigl(\frac{S_n}{b(n)} \in \cdot\Bigr) \Rightarrow \mathbf F_{\alpha,\rho}(\cdot). \eqno(2.5.2)$$

Theorem 2.5.2. (i) Let condition $[<, \cdot\,]$ hold, $\beta < 1$, and let $\sigma_W(n) = W^{(-1)}(1/n)$. Then, for $y = x + u\sigma_W(n-1)$,
$$\mathbf P(S_n \ge x) \ge nF_+(y)\Bigl(1 - cu^{-\beta+\delta} - \frac{n-1}{2}\,F_+(y)\Bigr) \eqno(2.5.3)$$
for any fixed $\delta > 0$ and a suitable $c < \infty$.
(ii) Let, in addition, condition $[\,\cdot\,, \lessgtr]$ hold and let $W(t) \le c_1 V(t)$. Then for $x = s\sigma(n) \to \infty$ ($\sigma(n) = V^{(-1)}(1/n)$) one has
$$\mathbf P(S_n \ge x) \ge nV(x)(1 - \varepsilon(s)), \eqno(2.5.4)$$
where $\varepsilon(s) \downarrow 0$ as $s \uparrow \infty$.

Proof. Put $K(n) := \sigma_W(n)$ in Theorem 2.5.1. Then, applying Corollary 2.2.4 to the sums $-S_n$, we obtain
$$Q_n(u) = \mathbf P(-S_n \ge u\sigma_W(n)) \le c_1 nW(u\sigma_W(n)) \le cu^{-\beta+\delta} \eqno(2.5.5)$$
for any fixed $\delta > 0$ and all $u \ge 1$. This proves (2.5.3).
Now let, in addition, condition $[\,\cdot\,, \lessgtr]$ be met and $W(t) \le c_1 V(t)$, $x = s\sigma(n)$. Then $\sigma_W(n) \le c_2\sigma(n)$. Hence for $u = s^{1-\delta}$, $\delta \in (0, 1)$, we have
$$y = x + u\sigma_W(n-1) = s\sigma(n) + s^{1-\delta}\sigma_W(n-1) \le x(1 + c_2 s^{-\delta})$$
and therefore $F_+(y) \ge V(x)(1 - \varepsilon(s))$, where $\varepsilon(s) \downarrow 0$ as $s \uparrow \infty$. Choosing $\delta$ so that $(-\beta + \delta)(1 - \delta) \le -\beta/2$, and taking into account (2.5.1) and the inequality $nF_+(y) < cnV(s\sigma(n)) \sim cs^{-\alpha}$, we obtain from (2.5.3) the relation (2.5.4). The theorem is proved.

The following corollary for the case of regular tails follows in an obvious way from Theorem 2.5.2.

Corollary 2.5.3. Let condition $[\mathbf R_{\alpha,\rho}]$ hold with $\rho > -1$. Then, for $x = s\sigma(n)$ and $\Pi = \Pi(x) = nV(x)$, we have
$$\inf_{x:\ s > t} \frac{\mathbf P(S_n \ge x)}{\Pi} \ge 1 - \varepsilon(t), \eqno(2.5.6)$$
where $\varepsilon(t) \downarrow 0$ as $t \uparrow \infty$.
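The closeness of $\mathbf P(S_n \ge x)$ to $\Pi = nV(x)$ asserted by (2.5.6) (together with the matching upper bounds of § 2.2) can be illustrated by simulation. The sketch below is not from the text: it assumes a pure Pareto right tail $\mathbf P(\xi \ge t) = t^{-\alpha}$, $t \ge 1$, with $\alpha < 1$, and all parameter values are illustrative.

```python
# Monte Carlo illustration of (2.5.6): for a pure Pareto tail with
# alpha < 1, P(S_n >= x) is already close to Pi = nV(x) for moderate n
# once x lies deep in the large-deviation zone.  Illustrative parameters.
import random

random.seed(12345)
alpha, n, trials = 0.7, 40, 60_000
x = 7.0e4                       # far beyond the typical size n^{1/alpha} of S_n

def pareto():
    # inverse-transform sampling: P(xi >= t) = t^{-alpha}, t >= 1
    return (1.0 - random.random()) ** (-1.0 / alpha)

hits = sum(1 for _ in range(trials)
           if sum(pareto() for _ in range(n)) >= x)
p_hat = hits / trials
Pi = n * x ** (-alpha)          # Pi = nV(x)
ratio = p_hat / Pi
print(ratio)
```

With a fixed seed the estimated ratio lands close to 1, in line with the 'single big jump' mechanism behind the proof of Theorem 2.5.1.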
2.5.2 Finiteness criterion for the supremum of partial sums

It is not difficult to derive from Theorem 2.3.5 and the above lower bounds a finiteness criterion for $S = \sup_{k\ge 0} S_k$.

Theorem 2.5.4. (i) Let condition $[>, <]$ hold with $\beta < 1$, and let $V(t) = o(W(t))$. Then the convergence of the integral
$$\int_1^\infty \frac{V(t)\,dt}{tW(t)} < \infty \eqno(2.5.7)$$
implies that $S < \infty$ a.s.
(ii) Let condition $[<, >]$ hold with $\beta < 1$. Then the divergence of the integral
$$\int_1^\infty \frac{V(t)\,dt}{tW(t)} = \infty \eqno(2.5.8)$$
implies that $S = \infty$ a.s.
(iii) Let condition $[=, =]$ hold with $\beta < 1$, and let the limit
$$\lim_{t\to\infty} \frac{V(t)}{W(t)} \le \infty \eqno(2.5.9)$$
exist. Then the convergence of the integral in (2.5.7) is necessary and sufficient for $S < \infty$ a.s.

The proof of the theorem is given below (p. 107). In the general case (without assumptions $[\,\cdot\,, <]$, $[>, \cdot\,]$ on the existence of regularly varying majorants and minorants), the finiteness criterion for $S$ was obtained in [119]. It follows from the strong law of large numbers in the case when $\mathbf E\xi$ is undefined, and it has the following form. Put
$$m_-(x) := \int_0^x F_-(t)\,dt, \qquad m_+(x) := \int_0^x F_+(t)\,dt,$$
$$I_+ := \int_{0+}^\infty \frac{t}{m_-(t)}\,\mathbf F(dt), \qquad I_- := \int_{-\infty}^{0-} \frac{|t|}{m_+(|t|)}\,\mathbf F(dt).$$
Note that the integrands in $I_\pm$ are bounded in the vicinity of zero if $\xi$ can assume values of both signs with positive probabilities. The following theorem holds true (an alternative proof thereof, using Kingman's subadditive ergodic theorem, was given in [102]).

Theorem 2.5.5. If $\mathbf E|\xi| = \infty$ then:
(i) $\max\{I_+, I_-\} = \infty$;
(ii) $\lim_{n\to\infty} \dfrac{S_n}{n} = \infty$ a.s. $\iff I_- < \infty$;
(iii) $\lim_{n\to\infty} \dfrac{S_n}{n} = -\infty$ a.s. $\iff I_+ < \infty$;
(iv) $\limsup_{n\to\infty} \dfrac{\pm S_n}{n} = \infty$ a.s. $\iff I_- = I_+ = \infty$.

It follows from Theorem 2.5.5 that
$$S < \infty \ \text{a.s.} \iff I_+ < \infty.$$
Theorem 2.5.4 could be obtained as a consequence of Theorem 2.5.5. However, presenting the proof of Theorem 2.5.5 given in [119], which is based on quite different arguments, would take us too far from the mainstream of our exposition. For this reason, and also to illustrate the precision of the bounds that we have obtained, we will obtain the assertion of Theorem 2.5.4 in a simpler way – as a corollary to Theorems 2.3.5 and 2.5.2.
The next assertion ensues from Theorem 2.5.4.

Corollary 2.5.6. Let $\beta < 1$. If condition $[>, <]$ is satisfied and if we also have $V(t) \le cW(t)(\ln t)^{-1-\varepsilon}$, $\varepsilon > 0$, then $S < \infty$ a.s. If condition $[<, >]$ is met and $V(t) \ge cW(t)(\ln t)^{-1}$ then $S = \infty$ a.s.

Observe that condition $[>, <]$ in Theorem 2.5.4(i) means that there exists an r.v. $\widetilde\xi \overset{d}{\ge} \xi$ with regularly varying tails (i.e. with a distribution satisfying condition $[=, =]$) for which (2.5.7) holds true. A similar remark is valid for condition $[<, >]$ in Theorem 2.5.4(ii).
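For concrete power tails the integrals $I_\pm$ of Theorem 2.5.5 can be evaluated numerically, and one recovers the same convergence dichotomy as in the integral test (2.5.7). The sketch below is not from the text: it assumes $F_+(t) = t^{-\alpha}$ and $F_-(t) = t^{-\beta}$ for $t \ge 1$, so that $I_+$ converges precisely when $\alpha > \beta$, and all parameter values are illustrative.

```python
# Numerical evaluation of truncated versions of I_+ for power tails
# F+(t) = t^{-alpha}, F-(t) = t^{-beta} (t >= 1).  Here
# m_-(t) = 1 + (t^{1-beta} - 1)/(1 - beta), and F(dt) on the positive
# axis has density alpha * t^{-alpha-1}.  Illustrative parameters.
import math

def I_plus_partial(alpha, beta, T, steps=4000):
    # midpoint rule on a logarithmic grid over [1, T]
    total = 0.0
    for i in range(steps):
        a = math.exp(math.log(T) * i / steps)
        b = math.exp(math.log(T) * (i + 1) / steps)
        t = math.sqrt(a * b)
        m_minus = 1.0 + (t ** (1.0 - beta) - 1.0) / (1.0 - beta)
        total += (t / m_minus) * alpha * t ** (-alpha - 1.0) * (b - a)
    return total

conv = [I_plus_partial(0.8, 0.4, T) for T in (1e4, 1e8)]  # alpha > beta: converges
div  = [I_plus_partial(0.4, 0.8, T) for T in (1e4, 1e8)]  # alpha < beta: diverges
print(conv, div)
```

The truncated integrals stabilize when $\alpha > \beta$ and keep growing like $T^{\beta-\alpha+\cdots}$ when $\alpha < \beta$, mirroring the behaviour of $\int V(t)\,dt/(tW(t))$.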
Proof of Theorem 2.5.4. We will need the following auxiliary assertion.

Lemma 2.5.7. The integral (2.5.7) converges iff
$$\sum_{n=1}^\infty V(\sigma_W(n)) = \sum_{n=1}^\infty V\bigl(W^{(-1)}(1/n)\bigr) < \infty. \eqno(2.5.10)$$

Proof. Clearly, convergence of the series in (2.5.10) is equivalent to convergence of the integral
$$\int_1^\infty V\bigl(W^{(-1)}(1/t)\bigr)\,dt.$$
Using the change of variables $W^{(-1)}(1/t) = u$ (i.e. $t = 1/W(u)$) and putting,
for brevity, $W^{(-1)}(1) =: w$, we see that the above integral is equal to
$$-\int_w^\infty \frac{V(u)\,dW(u)}{W^2(u)} < c_1\sum_k \frac{V(2^k)W(2^k)}{W^2(2^k)} = c_1\sum_k \frac{V(2^k)}{W(2^k)} < c_2\int_1^\infty \frac{V(t)\,dt}{tW(t)}.$$
Similar converse inequalities are also true. The lemma is proved.

Now we return to the proof of Theorem 2.5.4.
(i) We will make use of the following well-known criterion for the finiteness of $S$ (see e.g. § 7, Chapter XII of [122] or § 2, Chapter 11 of [49]): $S < \infty$ a.s. iff
$$\sum_{n=1}^\infty \frac{\mathbf P(S_n > 0)}{n} < \infty \quad \Bigl(\text{or}\ \sum_{n=1}^\infty \frac{\mathbf P(S_n \ge 0)}{n} < \infty\Bigr). \eqno(2.5.11)$$
If condition $[>, <]$ is satisfied with $\alpha < 1$ and $\beta < 1$ then, by Theorem 2.3.5 with $z = 0$, one has
$$\mathbf P(S_n > 0) \le c_1\biggl(nV(\sigma_W(n)) + \exp\Bigl\{-c\Bigl(\frac{\sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\}\biggr). \eqno(2.5.12)$$
Here, by virtue of Theorem 1.1.4(iii), for any fixed $\varepsilon > 0$ and all large enough $n$,
$$V(\sigma_W(n)) = V\Bigl(\frac{\sigma_W(n)}{\sigma(n)}\,\sigma(n)\Bigr) \ge \Bigl(\frac{\sigma_W(n)}{\sigma(n)}\Bigr)^{-\alpha-\varepsilon}\frac{1}{n},$$
and therefore
$$nV(\sigma_W(n)) \ge \Bigl(\frac{\sigma_W(n)}{\sigma(n)}\Bigr)^{-\alpha-\varepsilon}.$$
At the same time, for any $k > 0$ one has $\exp\{-ct^{-\delta}\} \ll t^{k}$ as $t \to 0$. Hence
$$\exp\Bigl\{-c\Bigl(\frac{\sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\} \le nV(\sigma_W(n)),$$
and so by (2.5.12)
$$\mathbf P(S_n > 0) \le cnV(\sigma_W(n)). \eqno(2.5.13)$$
This, together with (2.5.11) and Lemma 2.5.7, proves the first assertion of the theorem in the case $\alpha < 1$. If $\alpha \ge 1$, $\beta < 1$, then the integral (2.5.7) converges. However, in that case condition $[\,\cdot\,, <]$ is satisfied for any $\alpha' \in (\beta, 1)$, and therefore $S < \infty$ a.s. by virtue of the above argument.
(ii) Now we will prove the second assertion of the theorem. To simplify the exposition, assume that at least one of the two alternative inequalities
$$F_+(t) \le c_1 W(t) \quad\text{or}\quad F_+(t) \ge c_2 W(t), \qquad 0 < c_2,\ c_1 < \infty, \eqno(2.5.14)$$
holds true for all large enough $t$. If the second inequality holds then condition $[<, >]$ holds with $V(t) = c_2 W(t)$. Therefore there exist r.v.'s $\widetilde\xi_j \overset{d}{\le} \xi_j$ with the distribution
$$\mathbf P\bigl(\widetilde\xi_j < -t\bigr) = W(t), \qquad \mathbf P\bigl(\widetilde\xi_j \ge t\bigr) = c_2 W(t), \qquad t \ge 0$$
(one can always assume that $1 - W(0) = c_2 W(0)$), such that for the sums $\widetilde S_n := \sum_{j=1}^n \widetilde\xi_j$ the following holds true. For some $c > 0$, the r.v.'s $\widetilde S_n/c\sigma_W(n)$ converge in distribution (as $n \to \infty$) to the stable law $\mathbf F_{\beta,\rho}$ with parameters $\beta$ and $\rho > -1$, so that $F_{\beta,\rho,+}(0) =: q > 0$. Therefore
$$\mathbf P(S_n \ge 0) \ge \mathbf P\bigl(\widetilde S_n \ge 0\bigr) \to q > 0.$$
From here and (2.5.11) it follows that $S = \infty$ a.s.
Now suppose that the first inequality in (2.5.14) holds true. We will make use of the following lower bound for $\mathbf P(S_n \ge 0)$, which is derived from Theorems 2.5.1 and 2.5.2 (with $x = 0$, $y = u\sigma_W(n)$):
$$\mathbf P(S_n \ge 0) \ge nF_+(y)\Bigl(1 - Q_{n-1}(u) - \frac{n-1}{2}\,F_+(y)\Bigr), \eqno(2.5.15)$$
where
$$Q_{n-1}(u) = \mathbf P\Bigl(\frac{S_{n-1}}{\sigma_W(n)} < -u\Bigr).$$
Construct an r.v. $\widetilde\xi_j := \min\{\xi_j, 0\} \le \xi_j$ and put $\widetilde S_n := \sum_{j=1}^n \widetilde\xi_j$. Then evidently
$$Q_n(u) \le \mathbf P\Bigl(\frac{\widetilde S_n}{\sigma_W(n)} < -u\Bigr) \to F_{\beta,-1,-}(-u),$$
where $\mathbf F_{\beta,-1}$ is the stable distribution with parameters $(\beta, -1)$. Further,
$$\frac{n-1}{2}\,F_+(y) \le \frac{c_1 n}{2}\,W(u\sigma_W(n)) \sim \frac{c_1 u^{-\beta}}{2}$$
as $n \to \infty$. Hence for large enough fixed $u$ we will have
$$Q_{n-1}(u) + \frac{n-1}{2}\,F_+(y) \le \frac12,$$
so that by virtue of (2.5.15)
$$\mathbf P(S_n \ge 0) \ge \frac{n}{2}\,V(u\sigma_W(n)) \sim \frac{nu^{-\beta}}{2}\,V(\sigma_W(n)).$$
This means that if the series in (2.5.10) diverges (see Lemma 2.5.7) then the series in (2.5.11) also diverges and $S = \infty$ a.s.
(iii) The third assertion of Theorem 2.5.4 follows from the first two. Indeed, if the integral in (2.5.7) converges then $V(t) = o(W(t))$ as $t \to \infty$ (the limit in (2.5.9) is equal to zero), and it follows from assertion (i) that $S < \infty$ a.s. Now let the integral (2.5.7) diverge. Then it follows from assertion (ii) that $S = \infty$ a.s. (owing to (2.5.9), one of the alternatives in (2.5.14) is always true). The theorem is proved.

2.6 The asymptotic behaviour of the probabilities $\mathbf P(S_n \ge x)$

As in §§ 2.2–2.4, we will distinguish here between the following two cases:
(1) $\alpha < 1$, the tail $F_-(t)$ does not dominate $F_+(t)$, $t > 0$;
(2) $\beta < 1$, the tail $F_-(t)$ is 'much heavier' than $F_+(t)$, $t > 0$.
First we consider the former possibility.

Theorem 2.6.1. Let condition $[\mathbf R_{\alpha,\rho}]$, $\alpha < 1$, $\rho > -1$, be satisfied. Then, for $x = s\sigma(n)$,
$$\sup_{s \ge t}\,\Bigl|\frac{\mathbf P(S_n \ge x)}{nV(x)} - 1\Bigr| \le \varepsilon(t) \downarrow 0, \qquad \sup_{s \ge t}\,\Bigl|\frac{\mathbf P(\overline S_n \ge x)}{nV(x)} - 1\Bigr| \le \varepsilon(t) \downarrow 0$$
as $t \uparrow \infty$.

Proof. The relations follow immediately from Corollaries 2.2.4 and 2.5.3.

In § 3.7 we will present integro-local theorems on the asymptotic behaviour of the probability $\mathbf P(S_n \in [x, x + \Delta))$, $\Delta > 0$, when $W(t) \le cV(t)$.
Now consider the latter possibility (case (2)). Here (and also in many other considerations in the sequel) we will be using the following standard approach, which is related to the bounds for the distributions of sums of truncated r.v.'s that we obtained in §§ 2.2 and 2.4. In agreement with our previous notation, we put
$$B_j = \{\xi_j < y\}, \qquad B = \bigcap_{j=1}^n B_j,$$
and let $G_n := \{S_n \ge x\}$. Then $\mathbf P(G_n) = \mathbf P(G_n B) + \mathbf P(G_n \overline B)$, where
$$\sum_{j=1}^n \mathbf P(G_n \overline B_j) \ge \mathbf P(G_n \overline B) \ge \sum_{j=1}^n \mathbf P(G_n \overline B_j) - \sum_{i<j\le n} \mathbf P(G_n \overline B_i \overline B_j).$$
Since $\mathbf P(G_n \overline B_i \overline B_j) \le \mathbf P(\overline B_1)^2 = F_+^2(y)$, we have
$$\mathbf P(G_n) = \mathbf P(G_n B) + \sum_{j=1}^n \mathbf P(G_n \overline B_j) + O\bigl((nF_+(y))^2\bigr). \eqno(2.6.1)$$
Our first task is to estimate the probability $\mathbf P(G_n B)$. The bounds for it that we obtained in §§ 2.1–2.4 prove to be insufficient for our purposes in this section. We will need additional bounds; unfortunately, to derive them, one has to assume that $\beta < \alpha$. Bounds in the case $\alpha = \beta$ are apparently much harder to obtain.

Theorem 2.6.2. Let condition $[>, <]$ with $\beta < \alpha < 1$ be satisfied. Then the following assertions hold true.
(i) For $y = n^{1/\gamma}$, any fixed $\gamma > \beta$, $\varepsilon > 0$ such that $\theta := 1 - \beta/\gamma - \varepsilon > 0$ and all large enough $n$ one has
$$\mathbf P(G_n B) \le n^{-\theta x n^{-1/\gamma}} e^{-n^\theta}. \eqno(2.6.2)$$
(ii) For $y = x/r$, any fixed $r \ge 1$, $\varepsilon > 0$ and all small enough values of $nV(x)$ one has
$$\mathbf P(G_n B) \le \bigl[nV(y)\bigr]^r e^{-ny^{-\beta-\varepsilon}}. \eqno(2.6.3)$$
Remark 2.6.3. Note that, for $y = n^{1/\gamma}$ and all large enough $n$, it follows from the inequality (2.6.2) that
$$\mathbf P(G_n B) \le e^{-n^\theta}, \qquad 0 < \theta < 1 - \alpha/\gamma, \eqno(2.6.4)$$
and this bound cannot be substantially sharpened when
$$|x| \le n^v, \qquad v < 1 - \frac{\beta}{\gamma} + \frac{1}{\gamma} \in \Bigl(\frac{1}{\gamma}, \frac{1}{\beta}\Bigr).$$
However, if $y = x/r$ and $nV(x) \to 0$ fast enough then (2.6.3) turns into the inequality
$$\mathbf P(G_n B) \le \bigl[nV(y)\bigr]^r; \eqno(2.6.5)$$
this also cannot be improved.

Corollary 2.6.4. Let condition $[>, <]$ with $\beta < \alpha < 1$ be satisfied.
(i) If $\gamma > \beta$ and $|x| \le n^v$, $v \in [1/\gamma, 1/\beta)$, then
$$\mathbf P(S_n \ge x) \le nV\bigl(n^{1/\gamma}\bigr) + O\bigl(e^{-n^\theta}\bigr), \qquad 0 < \theta < 1 - \beta/\gamma. \eqno(2.6.6)$$
(ii) If $nV(x) \to 0$ then
$$\mathbf P(S_n \ge x) \le nV(x)(1 + o(1)). \eqno(2.6.7)$$
Proof. The proof of Corollary 2.6.4 is next to obvious. One has to use the inequality
$$\mathbf P(G_n) \le \mathbf P(\overline B) + \mathbf P(G_n B) \le nV(y) + \mathbf P(G_n B).$$
Then the bound (2.6.6) follows from (2.6.4), and the inequality (2.6.7) from (2.6.5), provided that we put $r := 1 + |\ln nV(x)|^{-1/2} \sim 1$.
Observe that for $x = 0$ the bound (2.6.6) is weaker than the bound obtained in Theorem 2.3.5. It is also clear that, for $\gamma < \alpha$, the zones of the deviations $x$ in (2.6.6) and in (2.6.7) do overlap.

Proof of Theorem 2.6.2. Following the standard argument from (2.1.8), we obtain
$$\mathbf P(G_n B) \le e^{-\mu x} R^n(\mu, y), \eqno(2.6.8)$$
where
$$R(\mu, y) = \int_{-\infty}^{y} e^{\mu t}\,\mathbf F(dt).$$
Further, following (2.4.5), (2.4.6) and taking into account the observation that $V(t) = o(W(t))$ as $t \to \infty$, we obtain that, as $\mu \to 0$, $\mu y \to \infty$,
$$R(\mu, y) \le 1 - \Gamma W(1/\mu)(1 + o(1)) + V(y)e^{\mu y}(1 + o(1)), \eqno(2.6.9)$$
where $\Gamma = \Gamma(1 - \beta)$. Hence
$$\mathbf P(G_n B) \le \exp\bigl\{-\mu x - n\Gamma W(1/\mu)(1 + o(1)) + nV(y)e^{\mu y}(1 + o(1))\bigr\}. \eqno(2.6.10)$$
(i) First put
$$y := n^{1/\gamma}, \qquad \mu := \frac{\theta\ln n}{y}, \qquad \theta \in (0, (\alpha - \beta)/\gamma),$$
for some $\gamma > \beta$. Then for the terms on the right-hand side of (2.6.10) we will have
$$n\Gamma W(1/\mu) \sim n\Gamma\theta^\beta W(y/\ln n) \sim n^{1-\beta/\gamma}L_1(n), \eqno(2.6.11)$$
$$nV(y)e^{\mu y} = n^{1-\alpha/\gamma+\theta}L_2(n), \eqno(2.6.12)$$
where $L_1$, $L_2$ are s.v.f.'s and $1 - \alpha/\gamma + \theta < 1 - \alpha/\gamma + (\alpha - \beta)/\gamma = 1 - \beta/\gamma$. Hence the term (2.6.12) will be negligibly small compared with (2.6.11). This means that, for any fixed $\varepsilon > 0$ and all large enough $n$,
$$\ln \mathbf P(G_n B) \le -\theta x n^{-1/\gamma}\ln n - n^{1-\beta/\gamma-\varepsilon}.$$
This proves (2.6.2).
(ii) Now put
$$y := \frac{x}{r}, \qquad r \ge 1, \qquad \mu := -\frac{1}{y}\ln nV(y),$$
where we assume that $nV(x) \to 0$ as $x \to \infty$. Then for the terms in (2.6.10) we have
$$nV(y)e^{\mu y} = 1, \qquad n\Gamma W(1/\mu) = n\Gamma W\bigl(y|\ln nV(y)|^{-1}\bigr) > ny^{-\beta-\varepsilon}$$
for any fixed $\varepsilon > 0$ and all large enough $y$. This means that, under the above conditions,
$$\ln \mathbf P(G_n B) \le r\ln nV(y) - ny^{-\beta-\varepsilon},$$
which is equivalent to (2.6.3). The theorem is proved.

Now we will formulate the main assertion of the present section.

Theorem 2.6.5. Let condition $[=, =]$ with $\beta < \alpha < 1$ be satisfied. Then, for $x \ge -n^{1/\gamma}$, $\gamma \in (\beta, \alpha)$, $\max\{x, n\} \to \infty$, one has
$$\mathbf P(S_n \ge x) = n\mathbf EV(x - \zeta\sigma_W(n))(1 + o(1)), \eqno(2.6.13)$$
where $\zeta \le 0$ is an r.v. following the stable distribution $\mathbf F_{\beta,-1}$ with parameters $(\beta, -1)$ (i.e. the limiting law for $S_n/\sigma_W(n)$ as $n \to \infty$). That is,
$$\mathbf P(S_n \ge x) \sim \begin{cases} nV(\sigma_W(n))\,\mathbf E(-\zeta)^{-\alpha} & \text{if } x = o(\sigma_W(n)),\ n \to \infty,\\ nV(\sigma_W(n))\,\mathbf E(b - \zeta)^{-\alpha} & \text{if } x \sim b\sigma_W(n),\ 0 < b < \infty,\ n \to \infty,\\ nV(x) & \text{if } \sigma_W(n) = o(x). \end{cases} \eqno(2.6.14)$$

Remark 2.6.6. Note that in the case when $|x| = o(\sigma_W(n))$ the asymptotics of $\mathbf P(S_n \ge x)$ do not depend on $x$. As was shown in [197], the assertion of the theorem will remain true in the case $\alpha = \beta$, $V(t) = o(W(t))$. For $\sigma_W(n) = o(x)$ the assertion follows from the bounds of §§ 2.2 and 2.5.

From Theorem 2.6.5 one can obtain corollaries describing the asymptotic behaviour of the renewal function
$$H(t) = \sum_{n=1}^\infty \mathbf P(S_n \ge t) \eqno(2.6.15)$$
or, equivalently, of the mean time spent by the trajectory $\{S_n\}$ above the level $t$.

Corollary 2.6.7. Let condition $[=, =]$ with $\beta < \alpha < 1$ be met, and let $t$ be an arbitrary fixed number. Then the following assertions hold true.
(i) $H(t) < \infty$ iff
$$\sum_{n=1}^\infty nV(\sigma_W(n)) < \infty. \eqno(2.6.16)$$
(ii) If (2.6.16) is true then, as $x \to \infty$,
$$H(-x) \sim \frac{\mathbf E(-\zeta)^{-\beta}}{W(x)}.$$

From part (i) of the corollary it follows, in particular, that these implications hold true:
$$\{\alpha > 2\beta\} \Longrightarrow \{H(t) < \infty\} \Longrightarrow \{\alpha \ge 2\beta\}.$$
In the case when $\xi \le 0$ the following, more general, assertion was proved in [118]: Let $\beta < 1$,
$$F_{-,I}(t) := \int_0^t F_-(u)\,du.$$
Then the relations
$$F_{-,I}(t) \sim \frac{t^{1-\beta}L_W(t)}{1-\beta} \quad\text{as}\quad t \to \infty, \eqno(2.6.17)$$
where $L_W$ is an s.v.f., and
$$H(-x) \sim \bigl[\Gamma(\beta+1)\Gamma(2-\beta)\bigr]^{-1}\,\frac{1}{x^{-\beta}L_W(x)} \quad\text{as}\quad x \to \infty \eqno(2.6.18)$$
are equivalent. If, instead of (2.6.17), the stronger condition that $F_-(t) = W(t) \sim t^{-\beta}L_W(t)$ is satisfied then (2.6.18) and Theorem 2.6.2 imply that
$$H(-x) \sim \bigl[\Gamma(\beta+1)\Gamma(1-\beta)\bigr]^{-1}\,\frac{1}{W(x)} \quad\text{as}\quad x \to \infty$$
and
$$\mathbf E(-\zeta)^{-\beta} = \bigl[\Gamma(\beta+1)\Gamma(1-\beta)\bigr]^{-1}.$$
Moreover, for $\xi \le 0$, [118] also contains a local renewal theorem that describes the asymptotic behaviour of $H(-x-\Delta) - H(-x)$, for an arbitrary fixed $\Delta > 0$, as $x \to \infty$ in the case when $F_-(t) = W(t)$, $1/2 < \beta < 1$:
$$H(-x-\Delta) - H(-x) \sim \bigl[\Gamma(\beta)\Gamma(1-\beta)\bigr]^{-1}\,\frac{\Delta}{xW(x)}.$$
In the case when $\beta \in (0, 1/2]$, a similar relation was obtained, but only for the upper limit $\limsup_{x\to\infty}\bigl[H(-x-\Delta) - H(-x)\bigr]\,xW(x)$.
In the lattice case, these results were extended in [128, 279] to the class of r.v.'s $\xi$ that can assume values of both signs.
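The gamma-products in the constants above simplify via Euler's reflection formula $\Gamma(b)\Gamma(1-b) = \pi/\sin(\pi b)$, so that, for instance, $[\Gamma(\beta+1)\Gamma(1-\beta)]^{-1} = \sin(\pi\beta)/(\pi\beta)$. A quick numerical confirmation of this elementary identity:

```python
# Check [Gamma(beta+1) * Gamma(1-beta)]^{-1} = sin(pi*beta)/(pi*beta)
# for several values of beta in (0, 1), using Euler's reflection formula
# Gamma(b)Gamma(1-b) = pi / sin(pi*b) and Gamma(b+1) = b*Gamma(b).
import math

checked = []
for beta in (0.3, 0.5, 0.7, 0.9):
    lhs = 1.0 / (math.gamma(beta + 1.0) * math.gamma(1.0 - beta))
    rhs = math.sin(math.pi * beta) / (math.pi * beta)
    checked.append(abs(lhs - rhs))
print(checked)
```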
Proof of Corollary 2.6.7. (i) The first assertion of the corollary follows in an obvious way from the first relation in (2.6.14).
(ii) To prove the second assertion, put $n_v := v/W(x)$ and, for fixed $\varepsilon > 0$ and $N < \infty$, partition the sum $H(-x)$ (see (2.6.15)) into three separate sums:
$$H(-x) = \sum_{n \le n_\varepsilon} + \sum_{n_\varepsilon < n < n_N} + \sum_{n \ge n_N}. \eqno(2.6.19)$$
The first sum does not exceed $n_\varepsilon$. To estimate the terms in the second sum, we will use the theorem on the convergence of $S_n/\sigma_W(n)$ in distribution to the stable law $\mathbf F_{\beta,-1}$. For brevity, set $Q(t) := \mathbf F_{\beta,-1}([-t, \infty))$. Then, for $n_\varepsilon < n < n_N$,
$$\mathbf P(S_n \ge -x) = \mathbf P\Bigl(\frac{S_n}{\sigma_W(n)} \ge -\frac{x}{\sigma_W(n)}\Bigr) \sim Q\Bigl(\frac{x}{\sigma_W(n)}\Bigr)$$
as $x \to \infty$, so that
$$\sum_{n_\varepsilon < n < n_N} \mathbf P(S_n \ge -x) \sim \sum_{n_\varepsilon < n < n_N} Q\Bigl(\frac{x}{\sigma_W(n)}\Bigr) \sim \int_{n_\varepsilon}^{n_N} Q\Bigl(\frac{x}{\sigma_W(t)}\Bigr)\,dt.$$
(2.6.20)
Next we change the variables, setting $x/\sigma_W(t) = u$. Then, for any fixed $u$, we will have
$$\sigma_W(t) = x/u, \qquad t \sim W^{-1}(x/u) \sim W^{-1}(x)\,u^{-\beta}.$$
Since $d\bigl(W^{-1}(x)u^{-\beta}\bigr) = -\beta u^{-\beta-1}W^{-1}(x)\,du$, it is natural to expect that the integral in (2.6.20) will be asymptotically equivalent to
$$\beta W^{-1}(x)\int_{u_N}^{u_\varepsilon} Q(u)\,u^{-\beta-1}\,du, \eqno(2.6.21)$$
where
$$u_v = \frac{x}{\sigma_W(n_v)} = \frac{x}{\sigma_W(vW^{-1}(x))} \sim v^{-1/\beta}.$$
(Since $W(\sigma_W(t)) \sim 1/t$, by putting $z := \sigma_W(W^{-1}(x))$ we obtain the relation $W(z) \sim W(x)$, and therefore $z \sim x$.) Hence the integral in (2.6.21) is asymptotically equivalent to
$$\int_{N^{-1/\beta}}^{\varepsilon^{-1/\beta}} Q(u)\,u^{-\beta-1}\,du,$$
and by choosing small enough $\varepsilon$ and $1/N$ it can be made arbitrarily close to the
quantity
$$Q := \int_0^\infty Q(u)\,u^{-\beta-1}\,du = \frac{1}{\beta}\int_0^\infty u^{-\beta}\,dQ(u) = \frac{1}{\beta}\,\mathbf E(-\zeta)^{-\beta}$$
(that the integrals converge at zero follows from Corollary 2.3.3). To make the transition from the integral in (2.6.20) to (2.6.21) more formal, one should use integration by parts to compute
$$\int_{u_\varepsilon}^{u_N} Q(u)\,d\bigl(W^{-1}(x/u)\bigr).$$
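The integration-by-parts identity behind the computation of $Q$, namely $\int_0^\infty Q(u)u^{-\beta-1}\,du = \beta^{-1}\int_0^\infty u^{-\beta}\,dQ(u)$, can be sanity-checked numerically with a toy distribution function that, like the stable d.f. here, vanishes faster than any power as $u \to 0$. The stand-in $Q(u) = e^{-1/u}$ and the grid parameters below are assumptions for illustration only; for this choice both sides equal $\Gamma(1-\beta+\ldots)$-type values, here $\Gamma(0.6)$.

```python
# Sanity check of int_0^inf Q(u) u^{-beta-1} du = (1/beta) int u^{-beta} dQ(u)
# for the toy d.f. Q(u) = exp(-1/u); both sides equal Gamma(1 - beta) with
# beta = 0.4 shifted notation -- concretely Gamma(0.6) for beta = 0.6 below.
import math

beta = 0.6

def Q(u):  return math.exp(-1.0 / u)
def dQ(u): return math.exp(-1.0 / u) / (u * u)   # Q'(u)

def log_grid_integral(f, lo=1e-3, hi=1e5, steps=20_000):
    total = 0.0
    for i in range(steps):
        a = lo * (hi / lo) ** (i / steps)
        b = lo * (hi / lo) ** ((i + 1) / steps)
        u = math.sqrt(a * b)                      # log-midpoint
        total += f(u) * (b - a)
    return total

lhs = log_grid_integral(lambda u: Q(u) * u ** (-beta - 1.0))
rhs = log_grid_integral(lambda u: u ** (-beta) * dQ(u)) / beta
print(lhs, rhs)
```

Both quadratures agree (and match the closed form $\Gamma(0.6) \approx 1.489$), confirming that the boundary terms vanish thanks to the fast decay of $Q$ at zero.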
Thus, by choosing appropriate values of $\varepsilon$ and $N$, the sum of the first two terms on the right-hand side of (2.6.19) multiplied by $W(x)$ can be made arbitrarily close to $\beta Q$.
Next we will show that the third term on the right-hand side of the representation (2.6.19) is $o(W^{-1}(x))$ as $N \to \infty$. For a fixed $N$ and $n \ge n_N$, we have $\sigma_W(n) \ge \sigma_W(NW^{-1}(x)) \sim N^{1/\beta}x$. Therefore, by Theorem 2.3.5,
$$\mathbf P(S_n \ge -x) \le c_1\biggl(nV(\sigma_W(n) - x) + \exp\Bigl\{-c\Bigl(\frac{x + \sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\}\biggr) \le c_2\biggl(nV(\sigma_W(n)) + \exp\Bigl\{-c\Bigl(\frac{x + \sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\}\biggr).$$
Note that the first term on the right-hand side summed over all $n \ge n_N$ converges to zero as $x \to \infty$ ($n_N \to \infty$). So it remains to bound
$$\sum_{n \ge n_N} \exp\Bigl\{-c\Bigl(\frac{x + \sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\} \sim \int_{n_N}^\infty \exp\Bigl\{-c\Bigl(\frac{x + \sigma(t)}{\sigma_W(t)}\Bigr)^{-\delta}\Bigr\}\,dt = n_N\int_1^\infty \exp\Bigl\{-c\Bigl(\frac{x + \sigma(n_N u)}{\sigma_W(n_N u)}\Bigr)^{-\delta}\Bigr\}\,du.$$
Since $1/\beta - 1/\alpha \ge 1/2\beta$ by virtue of (2.6.16), for any $\varepsilon_1 \in (0, 1 - \beta/\alpha)$ and all large enough $N$ we have
$$\sigma_W(n_N u) \sim (Nu)^{1/\beta}\sigma_W(W^{-1}(x)) \sim (Nu)^{1/\beta}x, \qquad \sigma(n_N u) \sim (Nu)^{1/\alpha}\sigma(W^{-1}(x)) < (Nu)^{1/\alpha}x^{\beta/\alpha+\varepsilon_1},$$
so that
$$A(u) := \frac{x + \sigma(n_N u)}{\sigma_W(n_N u)} \le 2(Nu)^{-1/\beta} + (Nu)^{1/\alpha-1/\beta}x^{\beta/\alpha-1+\varepsilon_1} \le (Nu)^{-1/2\beta}.$$
Hence
$$n_N\int_1^\infty e^{-cA(u)^{-\delta}}\,du \le n_N\int_1^\infty e^{-c(Nu)^{\delta/2\beta}}\,du = W^{-1}(x)\int_N^\infty e^{-cv^{\delta/2\beta}}\,dv = o\bigl(W^{-1}(x)\bigr)$$
as $N \to \infty$. The corollary is proved.

Proof of Theorem 2.6.5. Return to the representation (2.6.1) and bound the term
$$\mathbf P(G_n \overline B_j) = \mathbf P(S_{n-1} + \xi_n \ge x,\ \xi_n \ge y) = \mathbf P(S_{n-1} \ge x - y)\,\mathbf P(\xi_n \ge y) + \mathbf P(\xi_n \ge x - S_{n-1},\ S_{n-1} < x - y)$$
$$= \mathbf P(S_{n-1} \ge x - y)\,\mathbf P(\xi_n \ge y) + \mathbf E\bigl[V(x - S_{n-1});\ S_{n-1} < x - y\bigr]. \eqno(2.6.22)$$
First let $|x| \le n^{1/\gamma}$, $y = n^{1/\gamma}$, $\gamma \in (\beta, \alpha)$. Then, by virtue of (2.6.6),
$$\mathbf P(S_{n-1} \ge x - y)\,\mathbf P(\xi_n \ge y) \le n\bigl[V(n^{1/\gamma})\bigr]^2(1 + o(1)).$$
The 'power part' of this expression is $n^{1-2\alpha/\gamma}$, where $1 - 2\alpha/\gamma < -\alpha/\beta$ for $\gamma$ close enough to $\beta$. So, for such $\gamma$,
$$\mathbf P(S_{n-1} \ge x - y)\,\mathbf P(\xi_n \ge y) = o\bigl(V(\sigma_W(n))\bigr). \eqno(2.6.23)$$
The second term on the right-hand side of (2.6.22) (in which we will replace $n - 1$ by $n$ for simplicity) can be rewritten as
$$\mathbf E\bigl[V(x - S_n);\ S_n < x - y\bigr] = E_1 + E_2; \eqno(2.6.24)$$
the quantities $E_i$, $i = 1, 2$, will be defined and bounded in what follows. For a small fixed $\varepsilon > 0$, put
$$E_1 := \mathbf E\bigl[V(x - S_n);\ -\varepsilon\sigma_W(n) \le S_n < x - y\bigr]. \eqno(2.6.25)$$
It is not difficult to see from Theorem 2.3.5 that
$$E_1 \le \mathbf E\bigl[V(x - S^*);\ -\varepsilon\sigma_W(n) \le S^* \le x - y\bigr], \eqno(2.6.26)$$
where $S^*$ is an r.v. which has the following distribution: at the point $x - y < 0$ it has an atom of size $\mathbf P(S^* = x - y) = cnV(\sigma_W(n))$ for a suitable $c > 0$, and on the interval $(-\varepsilon\sigma_W(n), x - y)$ it has density $f_n(t)$, which is the derivative of the second term on the right-hand side of (2.3.8) (with $z$ replaced by $-t$). This means that
$$\int_{-u}^{x-y} f_n(t)\,dt = \exp\Bigl\{-c\Bigl(\frac{u + \sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\} - \exp\Bigl\{-c\Bigl(\frac{y - x + \sigma(n)}{\sigma_W(n)}\Bigr)^{-\delta}\Bigr\}$$
and
$$\mathbf P(x - y > S_n \ge -u) \le \mathbf P(S^* = x - y) + \int_{-u}^{x-y} f_n(t)\,dt \le \mathbf P(x - y \ge S^* \ge -u).$$
From here, using the same argument as that employed while bounding the integral (2.3.20) (the fast decay of $f_n(t)$ as $t \to 0$), we obtain that
$$E_1 \le V(y)\,\mathbf P(S^* = x - y) + \int_{-\varepsilon\sigma_W(n)}^{x-y} V(x - t)f_n(t)\,dt \le V(y)\,cnV(\sigma_W(n)) + V(x + \varepsilon\sigma_W(n))\,e^{-c\varepsilon^{-\delta}}. \eqno(2.6.27)$$
Since x = o(σW (n)), the last term on the right-hand side of (2.6.27) is asymptot−δ ically equivalent to ε−α e−cε V (σW (n)) and, by choosing a suitable ε, it can be made arbitrarily small compared with V (σW (n)). The first term on the right-hand side of (2.6.27) is o(V (σW (n))) because nV (n1/γ ) → 0 as γ < α. Now consider # $ E2 := E V (x − Sn ); Sn < −εσW (n) . (2.6.28) Since x = o(σW (n)), we obtain here that, for Sn = O(σW (n)), −α Sn V (σW (n)). V (x − Sn ) ∼ − σW (n) The function (−t)−α is continuous and bounded on (−∞, −ε). Furthermore, taking into account the continuity of the limiting stable distribution Fβ,−1 of the r.v. ζ 0 at the point −ε we obtain, as a consequence of the weak convergence Sn /σW (n) ⇒ ζ, the following relation: # $ E2 ∼ E (−ζ)−α ; ζ < −ε V (σW (n)). (2.6.29) Since by Corollary 2.3.3 $ # E (−ζ)−α ; ζ < −ε → E(−ζ)−α < ∞
(2.6.30)
as ε → 0, we finally obtain (see (2.6.1), (2.6.22)–(2.6.29)) that P(Gn Bj ) ∼ E(−ζ)−α V (σW (n)) ∼ EV (x − ζσW (n)).
(2.6.31)
Now it follows from (2.6.1) and Theorem 2.6.2 that P(Sn x) = nEV (x − ζσW (n))(1 + o(1))
2 θ + O e−n + O nV (n1/γ ) ,
θ > 0,
(2.6.32)
2 where nV (n1/γ ) = o nV (σW (n)) , so that for the respective exponents we
2.6 The asymptotic behaviour of the probabilities P(Sn x)
119
have 2(1 − α/γ) < 1 − α/β for a suitable γ ∈ (β, α). This proves (2.6.13) and (2.6.14) in the case |x| n1/γ . Now let x n1/γ , γ ∈ (β, α) (and therefore nV (x) → 0), y = x/2 (r = 2). Then again we will have (2.6.22), where by virtue of (2.6.7) P(Sn−1 x − y)P(ξn y) cnV (x)2 .
(2.6.33)
One now obtains, cf. (2.6.23),
cnV 2 (x) = o min{V (x), V (σW (n))} .
(2.6.34)
Further, instead of (2.6.24) we will now make use of the representation # $ E V (x − Sn ); Sn < x − y = E0 + E1 + E2 , where, by Theorem 2.3.5, # $ E0 : = E V (x − Sn ); 0 Sn < x − y V (y) P(Sn 0)
cnV (x)V (σW (n)) = o min{V (x), V (σW (n))} . When considering the integrals E1 and E2 (defined in (2.6.25) and (2.6.28), respectively), we will distinguish between three possibilities: (1) x = o(σW (n)), (2) x ∼ bσW (n), 0 < b < ∞, (3) σW (n) = o(x). In the first case all the considerations in (2.6.26)–(2.6.30) remain valid (upon replacing the quantity x − y by 0 in (2.6.25), (2.6.26)), so that we again obtain the relation (2.6.31) and then also (2.6.32), where the bounding terms, in accordance with Theorem 2.6.5 and the new value y = x/2, should be replaced by
O (nV (x))2 = o min{nV (x), nV (σW (n))} . Now let x ∼ bσW (n), 0 < b < ∞. Since the function (b − t)−α is continuous and bounded on the whole half-line (−∞, 0), it follows from the weak convergence Sn /σW (n) ⇒ ζ that # $ E1 + E2 = E V (x − Sn ); Sn 0 ∼ E(b − ζ)−α V (σW (n)) ∼ EV (x − ζσW (n)). By virtue of (2.6.33), (2.6.34), the same equivalence will hold for P(Gn Bj ). In case (3), evidently E0 + E1 + E2 ∼ V (x) ∼ EV (x − ζσW (n)), and the same equivalence will also hold for P(Gn Bj ). The subsequent transition to (2.6.32) uses the argument that we have already employed above, the residual terms in (2.6.32) being replaced by O (nV (x))2 . This completes the proof of the theorem.
120
Random walks with jumps having no finite first moment
Remark 2.6.8. In § 3.7 we will also present the so-called integro-local theorems on large deviations of Sn , i.e. assertions on the asymptotic behaviour of the probability P(Sn ∈ [x, x + Δ)) as x → ∞, Δ = o(x). 2.7 The asymptotic behaviour of the probabilities P(S n x) As before, we will again distinguish between the two cases mentioned at the beginning of the previous section. In the case (1) α < 1, the tail F− (t) does not dominate F+ (t), t > 0, the asymptotic behaviour of P(S n x) was established in Theorem 2.6.1. Now we will consider the second case: (2) β < 1, the tail F− (t) is ‘much heavier’ than F+ (t), t > 0. Theorem 2.7.1. Let condition [=, =] with β < min{1, α} be satisfied. Then the following assertions hold true. (i) For all n, as x → ∞, P(S n x) ∼
n
EV (x − ζσW (j)),
(2.7.1)
j=1
where the r.v. ζ 0, as before, has the stable distribution Fβ,−1 with parameters (β, −1). (ii) If x σW (n) then, for all n, as x → ∞, P(S n x) ∼ nV (x).
(2.7.2)
(iii) If x → ∞, n → ∞ then when σW (n)/x c > 0 or when σW (n)/x → 0 slowly enough, one has P(S n x) ∼ where
V (x) C β, α, σW (n)/x , W (x)
⎛ C(β, α, t) := βE⎝|ζ|−β
⎞ −tζ v β−1 (1 + v)−α dv ⎠ ,
(2.7.3)
(2.7.4)
0
so that P(S x) ∼ where
and B(β, γ) :=
V (x) C(β, α, ∞), W (x)
C(β, α, ∞) = βEζ −β B(β, α − β) 1 0
uβ−1 (1 − u)γ−1 du is the beta function.
(2.7.5)
(2.7.6)
2.7 The asymptotic behaviour of the probabilities P(S n x)
121
Remark 2.7.2. As B(β, 0) = ∞, it appears that the asymptotics of P(S x) in the case α = β will be different from cV (x)/W (x). Since for β < α, as x → ∞, ∞ x
V (t) dt 1 V (x) ∼ , tW (t) α − β W (x)
it is not unreasonable to conjecture that a general asymptotic representation for P(S x) in the case β α will have the form ∞ P(S x) ∼ c(β, α)
V (t) dt . tW (t)
x
Remark 2.7.3. Comparison with Theorem 2.6.5 shows that, along with (2.7.1), we can also write n P(Sj x) P(S n x) ∼ . (2.7.7) j j=1 It is not difficult to see that this relation will also remain true in the case when there exists Eξ1 = −a < 0 and the function V (t) = F+ (t) is subexponential, since, in that case, as x → ∞, P(S n x) ∼
n
V (x + aj) ∼
j=1
n P(Sj x) j j=1
(for the first relation see [178, 51, 63, 66, 52]). Since the functions V (x), W (x) do not appear in (2.7.7), there arises the question whether the last relation holds under broader conditions as well. In § 7.6 we will present assertions (Theorem 7.6.1 and its consequences) confirming the conjecture (2.7.7) in the case where n = ∞ and the distribution F is subexponential. Set η(x) := min{k : Sk x}. Then P(η(x) = n) = P(S n x) − P(S n−1 x), and the relation (2.7.7) makes quite plausible the conjecture that P(Sn x) n (under some additional conditions, this asymptotic relation can be extracted from [63]). However, this last relation, like (2.7.7), cannot be universal. As shown in [37, 68], under broad assumptions, when Cram´er’s condition is met the probabilities P(η(x) = n) and P(Sn x) will have the same asymptotics (up to a constant factor). P(η(x) = n) ∼
Proof of Theorem 2.7.1. The proof, like that of Theorem 2.6.5, will again be based on a relation of the form (2.6.1) but with different Gn and Bj . Put Gn := {S n x}
122
Random walks with jumps having no finite first moment
and set

  B_j := B_j(v) = {ξ_j < y + v j^{1/γ}},   γ ∈ (β, α),   B = ⋂_{j=1}^n B_j

(here, as in § 2.4, we assume that β < min{1, α}). Then, under condition [·, =], we have

  P(G_n) = P(G_n B) + Σ_{j=1}^n P(G_n B̄_j) + O( [ Σ_{j=1}^n V(y + v j^{1/γ}) ]² ).   (2.7.8)
A bound for P(G_n B) is contained in Theorem 2.4.1, which, in particular, includes the following: let condition [>, <] with β < min{1, α} be satisfied. Then, for a suitable v and all n, as y → ∞,

  P(G_n B) ≤ c x^{−(α−β)(r−ε)},   (2.7.9)
  P(S̄_n ≥ x) ≤ c V(x) min{n, x^{β+ε}},   (2.7.10)

where r = x/y and ε > 0 is an arbitrarily small fixed number.

A bound for the last term in (2.7.8) follows from the inequality

  Σ_{j=1}^n V(y + v j^{1/γ}) ≤ n V(y),

which holds for all n, and from the relations below, which hold as n → ∞ and in which we put v = 1 for simplicity (the value of v can only influence a constant factor that will appear at some stage). Using the change of variables t^{1/γ} = uy, we have

  Σ_{j=1}^n V(y + j^{1/γ}) ∼ ∫_0^n V(y + t^{1/γ}) dt
    = γ y^γ ∫_0^{n^{1/γ} y^{−1}} V(y + uy) u^{γ−1} du
    ∼ γ y^γ V(y) ∫_0^{n^{1/γ} y^{−1}} (1 + u)^{−α} u^{γ−1} du
    = y^γ V(y) b(γ − 1, −α, n^{1/γ}/y),   (2.7.11)

where

  b(γ − 1, −α, t) := γ ∫_0^t (1 + u)^{−α} u^{γ−1} du.
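For the pure power tail V(t) = t^{−α} (no slowly varying factor) the relation (2.7.11) can be checked numerically. The following sketch is ours, with arbitrarily chosen parameters subject to β < γ < 1 < α; the function and variable names are ad hoc.

```python
# Numerical sanity check (ours) of (2.7.11) for V(t) = t**(-alpha).
def b_func(gamma, alpha, t, steps=200_000):
    # midpoint rule for b(gamma-1, -alpha, t) = gamma * int_0^t (1+u)^(-alpha) u^(gamma-1) du;
    # the integrand is integrable at 0 since gamma > 0
    h = t / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += (1.0 + u) ** (-alpha) * u ** (gamma - 1.0)
    return gamma * total * h

def check_2_7_11(alpha=1.6, gamma=0.9, y=50.0, n=4000):
    V = lambda t: t ** (-alpha)
    lhs = sum(V(y + j ** (1.0 / gamma)) for j in range(1, n + 1))
    rhs = y ** gamma * V(y) * b_func(gamma, alpha, n ** (1.0 / gamma) / y)
    return lhs, rhs
```

For a pure power tail the second "∼" in (2.7.11) is an equality, so the sum and the right-hand side agree here up to the sum-versus-integral error of order V(y).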
2.7 The asymptotic behaviour of the probabilities P(S̄_n ≥ x)
Since b(γ − 1, −α, ∞) < ∞ and b(γ − 1, −α, t) ∼ t^γ as t → 0, we obtain for the last term in (2.7.8) the bound

  [ Σ_{j=1}^n V(y + v j^{1/γ}) ]² ≤ c [ V(y) min{n, y^γ} ]².   (2.7.12)
Now we bound the main terms in (2.7.8):

  P(G_n B̄_j) = P(S̄_n ≥ x; ξ_j ≥ y + v j^{1/γ}),   (2.7.13)

where, for brevity, we will put y_j := y + v j^{1/γ}. To evaluate P(G_n B̄_j), note that, by virtue of (2.7.10) with y = rx, r = const ∈ (0, 1),

  p(j, x) := P(S̄_{j−1} ≥ x, ξ_j ≥ y_j) ≤ c V(x) min{j, y^γ} V(y_j).   (2.7.14)
Further,

  P(G_n B̄_j) = P(S̄_n ≥ x, S̄_{j−1} < x, ξ_j ≥ y_j) + O(p(j, x)),   (2.7.15)

where

  P(S̄_n ≥ x, S̄_{j−1} < x, ξ_j ≥ y_j) = P(S_{j−1} + ξ_j + S̄*_{n−j} ≥ x, S̄_{j−1} < x, ξ_j ≥ y_j)

and the sums S̄*_{n−j} are defined in the same way as the S̄_{n−j} and are independent of the r.v.'s ξ_1, . . . , ξ_j. Setting

  Z_{j,n} := S_{j−1} + S̄*_{n−j}   (2.7.16)

and again using equalities of the form (2.7.15), we obtain

  P(G_n B̄_j) = P(Z_{j,n} + ξ_j ≥ x, ξ_j ≥ y_j) + O(p(j, x)).   (2.7.17)

Now we introduce the events C_j := {Z_{j,n} < x − y − v j^{1/γ}} and represent {Z_{j,n} + ξ_j ≥ x, ξ_j ≥ y_j} as a sum of intersections of itself with the events C_j and C̄_j. Since

  {Z_{j,n} + ξ_j ≥ x; C_j} ⊂ {ξ_j ≥ x − Z_{j,n} > y + v j^{1/γ} = y_j},

then

  P(Z_{j,n} + ξ_j ≥ x, ξ_j ≥ y_j; C_j) = P(Z_{j,n} + ξ_j ≥ x; C_j).   (2.7.18)

We will consider the right-hand side of this equality (which will form the main term in P(G_n B̄_j)) later on. For the complement C̄_j, we have the following bound:

  P̄(j, x) := P(Z_{j,n} + ξ_j ≥ x, ξ_j ≥ y_j; C̄_j) ≤ V(y_j) P(Z_{j,n} ≥ x − y − v j^{1/γ}),   (2.7.19)
where, by virtue of the bound (2.7.10), Theorem 2.6.5 and relations of the form (x − y)/2 = cx, one has

  P( Z_{j,n} ≥ x − y − v j^{1/γ} )
    ≤ P( S_{j−1} ≥ (x − y)/2 ) + P( S̄*_{n−j} ≥ (x − y)/2 − v j^{1/γ} )
    = O( j V(x − v j^{1/γ} + σ_W(j)) ) + O( V(x) min{n, x^γ} )
    = O( j V(x + j^{1/γ}) ) + O( V(x) min{n, x^γ} ).   (2.7.20)
Multiplying the right-hand side of the last relation by V(y_j) (see (2.7.19)) yields an upper bound for P̄(j, x).

Now we turn to considering the main term in the probability P(G_n B̄_j), which we singled out in (2.7.18):

  P(Z_{j,n} + ξ_j ≥ x; C_j) = E[ V(x − Z_{j,n}); Z_{j,n} < x − y − v j^{1/γ} ],   (2.7.21)

and note that it has the same form as the left-hand side of (2.6.24) provided that there we replace S_n by Z_{j,n} and y by y_j. Furthermore, Z_{j,n} possesses all the properties of S_{j−1} that will allow us to use here the reasoning from the proof of Theorem 2.6.5 (see (2.6.24)–(2.6.31)). Indeed:

(1) S_{j−1} ≤ Z_{j,n} ≤ S_{j−1} + S̄*_∞, where S̄*_∞ ≥ 0 is a proper r.v.;

(2) therefore Z_{j,n} has the same weak convergence property as S_{j−1}:

  Z_{j,n}/σ_W(j) ⇒ ζ   as j → ∞;

(3) Z_{j,n} admits basically the same large deviation bounds (see (2.7.10) and Theorem 2.6.5) as were used in (2.6.24)–(2.6.31) (up to arbitrarily small changes in the exponents), so that

  P(Z_{j,n} ≥ x) ≤ P( S_{j−1} > x/2 − j^{1/γ} ) + P( S̄_n > x/2 + j^{1/γ} )
    ≤ c j V(x + σ_W(j)) + c V(x + j^{1/γ}) min{n, (x + j^{1/γ})^γ}.

From this it follows that the computation of the integral (2.6.24) can be carried over, without any substantial changes, to the computation of the integral (2.7.21). The reader could verify this by repeating the argument from the proof of Theorem 2.6.5 in the present situation. Therefore one can conclude that (see (2.6.31))

  E[ V(x − Z_{j,n}); Z_{j,n} < x − y − v j^{1/γ} ] ∼ E V(x − ζ σ_W(j)).

It remains to bound Σ_{j=1}^n p(j, x) and Σ_{j=1}^n P̄(j, x), where p(j, x) and P̄(j, x) were defined in (2.7.14) and (2.7.19). To estimate the first sum we observe that, cf. (2.7.11), (2.7.12),

  Σ_{j=1}^n min{j, y^γ} V(y_j) ≤ c V(y) [ min{n, y^γ} ]².   (2.7.22)
Therefore (see (2.7.14))

  p(x) := Σ_{j=1}^n p(j, x) ≤ c [ V(x) min{n, x^γ} ]².   (2.7.23)

For P̄(x) := Σ_{j=1}^n P̄(j, x) we obtain in a similar way that, owing to (2.7.20),

  P̄(x) = O( [ V(x) min{n, x^γ} ]² ).   (2.7.24)
Note that the order of magnitude of the previous bounds (2.7.12), (2.7.23) does not exceed that of P̄(x). Now taking into account the bound (2.7.9) we obtain that, as x → ∞,

  P(S̄_n ≥ x) = Σ_{j=1}^n E V(x − ζ σ_W(j)) (1 + o(1)) + O( x^{−(α−β)(r−ε)} ) + O( P̄(x) ),   (2.7.25)

where a bound for P̄(x) is given in (2.7.24). The main term here does not exceed (see (2.7.11))

  c V(x) min{n, x^{β′}}   (2.7.26)

for any β′ < β. Therefore, owing to (2.7.24) and the fact that x^γ V(x) → 0 as x → ∞, the quotient of P̄(x) and the main term in (2.7.25) tends to zero. Summarizing, we obtain

  P(S̄_n ≥ x) = Σ_{j=1}^n E V(x − ζ σ_W(j)) (1 + o(1)).
This proves (2.7.1). The relation (2.7.2) then follows in an obvious way.

Now we will prove (2.7.3). For m < n we have that, as m → ∞,

  I_{m,n} := Σ_{j=m}^n E V(x − ζ σ_W(j)) ∼ ∫_m^n E V(x − ζ σ_W(u)) du.

The change of variables σ_W(u) = t (u = 1/W(t)) leads to the integral

  − ∫_{σ_W(m)}^{σ_W(n)} E V(x − ζ t) dW(t)/W²(t).   (2.7.27)
Since the increment of W(t) on the interval (t, t(1 + Δ)) for a small fixed Δ has the asymptotics −Δβ W(t) as t → ∞, and since the integrand in (2.7.27) does not change much over that interval, the above integral is asymptotically equivalent to

  β ∫_{σ_W(m)}^{σ_W(n)} E V(x − ζ t) dt/(t W(t)) = β ∫_{σ_W(m)/x}^{σ_W(n)/x} E V(x(1 − ζ v)) dv/(v W(vx))
(in other words, we can act as if the function W(t) were differentiable with W′(t) ∼ −β W(t)/t). If v = const > 0 or v → 0 slowly enough as x → ∞ then

  V(x(1 − ζ v)) ∼ (1 − ζ v)^{−α} V(x),   W(vx) ∼ v^{−β} W(x).

Therefore when σ_W(m)/x → 0 (or m W(x) → 0) slowly enough, we have

  I_{m,n} ∼ (β V(x)/W(x)) ∫_{σ_W(m)/x}^{σ_W(n)/x} v^{β−1} E(1 − ζ v)^{−α} dv ∼ (V(x)/W(x)) C(β, α, σ_W(n)/x),

whereas I_{1,m} ∼ m V(x) = o( V(x)/W(x) ). This proves (2.7.3).

To prove the remaining relations (2.7.5), (2.7.6) from the assertion of Theorem 2.7.1 one uses the equality

  ∫_0^∞ v^{β−1} (1 + v)^{−α} dv = B(β, α − β).
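The beta-integral identity used in this last step is easy to confirm numerically. The following check is ours (the parameter values 0 < β < α are arbitrary); it compares a simple quadrature with Γ(β)Γ(α − β)/Γ(α).

```python
import math

# Numerical confirmation (ours) of int_0^inf v^(beta-1) (1+v)^(-alpha) dv
# = B(beta, alpha - beta) = Gamma(beta)*Gamma(alpha-beta)/Gamma(alpha),
# valid for 0 < beta < alpha.
def beta_integral(beta, alpha, cut=2000.0, steps=400_000):
    # midpoint rule on [0, cut]; the neglected tail is O(cut**(beta - alpha))
    h = cut / steps
    total = 0.0
    for i in range(steps):
        v = (i + 0.5) * h
        total += v ** (beta - 1.0) * (1.0 + v) ** (-alpha)
    return total * h

beta, alpha = 0.7, 1.9
numeric = beta_integral(beta, alpha)
exact = math.gamma(beta) * math.gamma(alpha - beta) / math.gamma(alpha)
```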
The theorem is proved. In concluding this chapter we note that a number of sections related to the present topics have been included in Chapter 3 because these sections relate equally to both chapters. Among them are: (1) integro-local theorems on large deviations (§ 3.7); (2) uniform limit theorems for sums of random variables, acting on the whole axis (§ 3.8); (3) iterated logarithm-type laws (§ 3.9).
3 Random walks with jumps having finite mean and infinite variance
In this chapter we will assume that the r.v. ξ has a finite expectation and, moreover, that Eξ = 0. Along with the functionals S_n and S̄_n = max_{k≤n} S_k of the random walk {S_k}, which were studied in Chapter 2, we will consider here the functionals

  S̄_n(a) := max_{k≤n} (S_k − ak)

(in the last chapter, studying S̄_n(a) would have been meaningless, since there the asymptotics of P(S̄_n(a) ≥ x) were essentially independent of a). We will also study in this chapter a more general 'boundary problem' on the asymptotic behaviour of the probability

  P( max_{k≤n} (S_k − g(k)) ≥ 0 )

for a given 'boundary' {g(k)}, min_{k≤n} g(k) → ∞. Moreover, we will obtain uniform limit theorems (i.e. limit theorems acting on the whole real line) for S_n and S̄_n, as well as analogues of the law of the iterated logarithm.
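The functional S̄_n(a) is straightforward to compute along a single trajectory. The sketch below is our illustration (the jump values are made up): it evaluates max_{1≤k≤n}(S_k − ak) and shows that S̄_n(a) is nonincreasing in the drift a.

```python
def max_drift_adjusted(jumps, a):
    """Compute max_{1<=k<=n} (S_k - a*k) for the partial sums of `jumps`."""
    s = 0.0
    best = float("-inf")
    for k, xi in enumerate(jumps, start=1):
        s += xi                      # S_k
        best = max(best, s - a * k)  # compare S_k - a*k with the running max
    return best

jumps = [2.0, -1.0, 5.0, -3.0, 1.0]      # partial sums: 2, 1, 6, 3, 4
print(max_drift_adjusted(jumps, 0.0))    # a = 0: plain maximum S_bar_n = 6.0
print(max_drift_adjusted(jumps, 1.0))    # a = 1: max(S_k - k) = 3.0
```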
3.1 Upper bounds for the distribution of S̄_n

First consider the 'basic case', where the majorants V(t) and W(t) for the right and left tails, respectively, have the indices −α and −β, α ∈ [1, 2), β ∈ (1, 2). Recall the notation we used earlier (cf. (2.1.1), (2.1.7)),

  B_j = {ξ_j < y},   B = B(0) = ⋂_{j=1}^n B_j,   P = P(S̄_n ≥ x; B),

and also the convention that the ratio r = x/y ≥ 1 remains bounded in all our formulations.

Theorem 3.1.1. Let the conditions Eξ = 0, [<, <] with α ∈ [1, 2), β ∈ (1, 2) and

  W(t) ≤ c V(t)   (3.1.1)
be satisfied. Then the inequalities (2.2.1) from Theorem 2.2.1 and (2.2.3) from Corollary 2.2.4 hold true, i.e. for all n

  P ≤ c₁ [Π(y)]^r,   Π(y) = n V(y),   r = x/y,   (3.1.2)

and

  sup_{n,x: Π(x)≤v} P(S̄_n ≥ x)/Π(x) ≤ 1 + ε(v),   ε(v) ↓ 0 as v ↓ 0.   (3.1.3)

If (3.1.1) does not hold then (3.1.2) remains true for all n and y such that, for some c₂ < ∞,

  n W( y/|ln n V(y)| ) < c₂.   (3.1.4)

For (3.1.4) to hold it suffices that we have either

  |ln n V(y)| < [n W(y)]^{−1/(β+ε)},   n W(y) < c₃,   (3.1.5)

or

  n W( y/ln y ) < c₄,   (3.1.6)

for some ε > 0 and c₃, c₄ < ∞.

Note that, for values of y such that, say, nW(y) > 1, nV(y) > 1, the inequality (3.1.2) will, as a rule, be trivial.

The constant c₁ in (3.1.2), provided that (3.1.1) holds, admits the representation c₁ = (e/r)^r + ε(Π(y)), ε(v) ↓ 0 as v ↓ 0. In the general case,

  c₁ = (e/r)^r + ε₁(Π(y)) + ε₂( n W( y/|ln Π(y)| ) ),

where ε_i(v) ↓ 0 as v ↓ 0 (cf. Theorem 2.2.1).

If condition [<, <] is met but (3.1.1) is not then an analogue of Corollary 2.2.4 has the following form.

Corollary 3.1.2. (i) The inequality

  sup_{n,x: Π≤v, Π_W≤1} P(S̄_n ≥ x)/Π ≤ 1 + ε(v)   (3.1.7)

holds true with

  Π = n V(x),   Π_W = n W( y/|ln n V(y)| ),   ε(v) ↓ 0 as v ↓ 0.

(ii) Put V̂(t) := max{V(t), W(t)}, Π̂ := n V̂(x). Then

  sup_{n,x: Π̂≤v} P(S̄_n ≥ x)/Π̂ ≤ 1 + ε(v).   (3.1.8)
Proof. The proof of the first assertion repeats, with obvious amendments, the argument from the proof of Corollary 2.2.4. Assertion (ii) follows from (3.1.3) if one replaces both majorants in condition [<, <] by V̂.

If one puts S̲_n := min_{k≤n} S_k then it clearly follows from (3.1.8) that

  sup_{n,x: Π̂≤v} max{ P(S̄_n ≥ x), P(S̲_n < −x) }/Π̂ ≤ 1 + ε(v).   (3.1.9)

Let Ŝ_n := max_{k≤n} |S_k| = max{S̄_n, −S̲_n}. Theorem 3.1.1 implies the following result as well.

Corollary 3.1.3. If the condition [<, <] is satisfied then

  sup_{n,x: Π̂≤v} P(Ŝ_n ≥ x)/[ n(V(x) + W(x)) ] ≤ 1 + ε(v).   (3.1.10)

The quantity ε(v) in the relation (3.1.10) has the same meaning as in Theorem 3.1.1 and Corollary 3.1.2.

Proof of Corollary 3.1.3. We will make use of the inequality

  P(Ŝ_n ≥ x) ≤ P(S̄_n ≥ x) + P(S̲_n ≤ −x).

If c₁ V(t) ≤ W(t) ≤ c₂ V(t) for some 0 < c₁ < c₂ < ∞ then (3.1.10) follows from (3.1.3). If V(t) = o(W(t)) then, fixing a δ > 0 and taking as a majorant for the right tail the function δW(t) > V(t) instead of V(t), we will obtain from what has already been said that (3.1.10) holds true with V(x) + W(x) replaced by W(x)(1 + δ). But since δ > 0 is arbitrary and V(x) = o(W(x)), the quantity W(x)(1 + δ) can be replaced in (3.1.10) by W(x) or by V(x) + W(x). This proves (3.1.10). One can argue in the same way when W(t) = o(V(t)). The corollary is proved.

Corollary 3.1.4. Under the conditions [<, <] and n ≤ x^γ, 1 < γ < min{α, β}, the inequalities (3.1.2), (3.1.3) always hold true.

The corollary is obvious since, as y → ∞, we have

  y^γ V(y) → 0,   y^γ W( y/ln y ) → 0.

Without loss of generality one can assume that y > e, so that the inequality (3.1.6), and therefore also (3.1.4) and (3.1.2), hold true.

Remark 3.1.5. Conditions (3.1.4)–(3.1.6) are essential for the inequalities (3.1.2), (3.1.3) because, when W(y) ≫ V(y) and the left tail is regularly varying, the deviations y for which nW(y) > 1 will fall (even when nV(y) → 0) into the 'normal deviations zone', where the distributions of the scaled sums S_n can be approximated by the limiting stable law.
Note also that the inequality

  y > σ_W(n) ln σ_W(n),   σ_W(n) = W^{(−1)}(1/n),

is sufficient for (3.1.6) to be met. Indeed, for such a y one has y/ln y > c₁ σ_W(n), so that

  n W( y/ln y ) < c₂ n W(σ_W(n)) ≤ c₂.

Now consider the general case, when α and β can each assume any value from [1, ∞) and Eξ = 0. In other words, in comparison with Theorem 3.1.1, one now also admits the values

  α ≥ 2,   β = 1,   β ≥ 2.

The formulation of the corresponding assertion will be more cumbersome and will require the introduction of some new notation. It is important to note that the case Eξ² < ∞ (when necessarily α ≥ 2, β ≥ 2) will be considered in more detail in Chapter 4, so that the general assertion below will be devoted mostly to the 'threshold' values of α and β, which are equal to 1 and 2 respectively. These values do not play any significant role in the subsequent exposition. Put

  𝔚 := ∫_0^∞ t W(t) dt,   𝔙 := ∫_0^∞ t V(t) dt,   (3.1.11)

and introduce the following unboundedly increasing s.v.f.'s:

  L₁(t) := (1/(t W(t))) ∫_t^∞ W(u) du ≡ W^I(t)/(t W(t))   if β = 1;

  L₂(t) := (1/(t² W(t))) ∫_0^t du ∫_u^∞ W(v) dv ≡ (1/(t² W(t))) ∫_0^t W^I(u) du   if β = 2, 𝔚 = ∞;

  L₃(t) := (1/(t² V(t))) ∫_0^t u V(u) du   if α = 2, 𝔙 = ∞.

Further, put

  W_β(t) := W(t) L_β(t)   if 𝔚 = ∞, β = 1 or 2;
  W_β(t) := [Γ(2 − β)/(β − 1)] W(t)   if β ∈ (1, 2);   (3.1.12)
  W_β(t) := 𝔚 t^{−2}   if 𝔚 < ∞, β ≥ 2,

and

  V_α(t) := [1/(2 − α)] V(t)   if α ∈ [1, 2);
  V_α(t) := V(t) L₃(t)   if 𝔙 = ∞, α = 2;   (3.1.13)
  V_α(t) := 𝔙 t^{−2}   if 𝔙 < ∞, α ≥ 2.
Along with the product Π = n V(x), the quantity

  Π* := n [ V_α( y/|ln Π| ) + W_β( y/|ln Π| ) ]   (3.1.14)

will also play an important role in the cases we are dealing with here.

Theorem 3.1.6. Let the conditions [<, <] with α ≥ 1, β ≥ 1 and Eξ = 0 be satisfied. If Π* < c₁ then

  P ≤ c [n V(y)]^r,   r = x/y.   (3.1.15)

For the boundedness of n W_β( y/|ln Π| ) it suffices that, for some c < ∞,

  n W( y/ln y ) L_β( y/ln y ) < c   if β = 1 or β = 2, 𝔚 = ∞,   (3.1.16), (3.1.17)
  y > c n^{1/2} ln(n + 1)   if 𝔚 < ∞, β ≥ 2.   (3.1.18)

For the boundedness of n V_α( y/|ln Π| ) it suffices that

  Π < c   if α < 2,   (3.1.19)
  Π L₃^{1+ε}(y) < c, ε > 0,   if α = 2, 𝔙 = ∞,   (3.1.20)
  y > c n^{1/2} ln(n + 1)   if 𝔙 < ∞, α ≥ 2.   (3.1.21)
All the above conditions are satisfied if

  y > n^{1/θ+ε}   (3.1.22)

for some ε > 0, where θ = min{α, β, 2}.

The following analogue of Corollary 3.1.2 holds true.

Corollary 3.1.7. If the conditions [<, <] with α ≥ 1, β ≥ 1 and Eξ = 0 are met then

  sup_{n,x: Π*≤v} P(S̄_n ≥ x)/Π ≤ 1 + ε(v),

where ε(v) ↓ 0 as v ↓ 0.

Proof. The proof of the corollary repeats, with some obvious changes, the proof of Corollary 2.2.4.

There also exist analogues of the inequalities (3.1.8)–(3.1.10).

Example 3.1.8. Consider now an example with the threshold value β = 1, where, for some t₀ > 0,

  W(t) = 1/(t ln² t)   for t > t₀.

Here

  W₁(t) = W(t) L₁(t) = (1/t) ∫_t^∞ W(u) du = 1/(t ln t),
and the quantity n W₁(y/ln y) that appears in (3.1.16) will be given by

  n W₁( y/ln y ) = n ln y / ( y ln(y/ln y) ) < c₁   for y > cn.

Since in the case of finite Eξ one always has n V(n) → 0 (i.e. Π → 0 for x ≥ cn) as n → ∞, the assertion (3.1.15) will hold in this example for all y > cn.
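The computation in Example 3.1.8 can be verified numerically. The check below is ours: in the variable s = ln u one has W(u) du = ds/s², so that ∫_t^N W(u) du = 1/ln t − 1/ln N, confirming W^I(t) → 1/ln t as N → ∞ and hence W₁(t) = W^I(t)/t = 1/(t ln t).

```python
import math

# Numerical check (ours) of Example 3.1.8 for W(u) = 1/(u * ln(u)**2).
def tail_integral(t, big_n=1e9, steps=200_000):
    a, b = math.log(t), math.log(big_n)   # substitute s = ln u, du/u = ds
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        s = a + (i + 0.5) * h
        total += h / (s * s)              # integrand becomes 1/s**2
    return total

t = 100.0
numeric = tail_integral(t)
closed = 1.0 / math.log(t) - 1.0 / math.log(1e9)
```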
Proof. The proof of Theorem 3.1.1 mostly follows the argument used to prove Theorem 2.2.1. To use the basic inequality (2.1.8), we again have to obtain a bound for R(μ, y) in (2.2.7), where, under the present circumstances,

  I₁ = ∫_{−∞}^0 e^{μt} F(dt) = F₋(0) + μ ∫_{−∞}^0 t F(dt) + ∫_{−∞}^0 (e^{μt} − 1 − μt) F(dt).

Here

  ∫_{−∞}^0 (e^{μt} − 1 − μt) F(dt) = μ ∫_0^∞ (1 − e^{−μt}) F₋(t) dt.

Since 1 − e^{−μt} ≥ 0 for μ, t > 0, the last integral does not exceed

  μ ∫_0^∞ W(t)(1 − e^{−μt}) dt = μ² ∫_0^∞ W^I(t) e^{−μt} dt,

where, by virtue of Theorem 1.1.4(iv), for β > 1

  W^I(t) = ∫_t^∞ W(u) du ∼ t W(t)/(β − 1)   as t → ∞,   (3.1.23)

and therefore, by Theorem 1.1.5, as μ → 0,

  μ² ∫_0^∞ W^I(t) e^{−μt} dt ∼ μ W^I(1/μ) ∫_0^∞ e^{−u} u^{−β+1} du ∼ [W(1/μ)/(β − 1)] Γ(2 − β).

Thus,

  I₁ ≤ F₋(0) + μ ∫_{−∞}^0 t F(dt) + [W(1/μ)/(β − 1)] Γ(2 − β)(1 + o(1)).   (3.1.24)

With the same choice of M = 2α/μ as in the proof of Theorem 2.2.1, we have the following bound for the integral I₂:

  I₂ ≡ ∫_0^M e^{μt} F(dt) ≤ F₊(0) + μ ∫_0^M t F(dt) + ∫_0^M (e^{μt} − 1 − μt) F(dt),
where, by the inequality e^{μt} − 1 ≤ μt(e^{2α} − 1)/(2α), which holds for t ≤ M,

  ∫_0^M (e^{μt} − 1 − μt) F(dt) ≤ μ ∫_0^M (e^{μt} − 1) V(t) dt ≤ [μ²(e^{2α} − 1)/(2α)] V*(M),

where

  V*(t) := ∫_0^t u V(u) du ∼ V(t) t²/(2 − α).   (3.1.25)

Therefore

  I₂ ≤ F₊(0) + μ ∫_0^∞ t F(dt) + [2α(e^{2α} − 1)/(2 − α)] V(2α/μ),   (3.1.26)

and hence (using F₋(0) + F₊(0) = 1 and the fact that the linear terms add up to μ Eξ = 0)

  I₁ + I₂ ≤ 1 + c₁ W(1/μ) + c₂ V(1/μ).   (3.1.27)
For I₃, which is given by (2.2.11), (2.2.13), one can obtain the same bound as in Theorem 2.2.1. Thus, under the conditions of the present section, we obtain

  R(μ, y) ≤ 1 + c₁ W(1/μ) + c₂ V(1/μ) + V(y) e^{μy}(1 + ε(λ)),   (3.1.28)

where λ = μy (cf. (2.2.16)). The choice of the 'optimal'

  μ = (1/y) ln( r/Π(y) )

is also analogous to that in Theorem 2.2.1 (see (2.2.18)). Therefore, cf. (2.2.19), we obtain

  ln P ≤ −xμ + c₁ n W(1/μ) + c₂ n V(1/μ) + e^λ Π(y)(1 + ε(λ))
    ≤ −r ln( r/Π(y) ) + r + ε₁(Π(y)) + c₃ n W( y/|ln Π(y)| ),   (3.1.29)

where ε₁(v) ↓ 0 as v ↓ 0. If (3.1.1) holds then the last term on the right-hand side of (3.1.29) can be omitted (one can include it in ε₁(Π(y))). If (3.1.4) holds then

  ln P ≤ −r ln( r/Π(y) ) + c.

This proves (3.1.2), and hence also (3.1.3).

Now we will verify that conditions (3.1.5), (3.1.6) are sufficient for (3.1.4). Let (3.1.5) be satisfied. Then, for any ε > 0 and all large enough y,

  n W( y/|ln n V(y)| ) ≤ n W(y) |ln n V(y)|^{β+ε/2}
    ≤ [n W(y)]^{1−(β+ε/2)/(β+ε)} = [n W(y)]^{ε/[2(β+ε)]} < c.

If (3.1.6) is met then, owing to the relation

  |ln n V(y)| = −ln n − ln V(y) < −ln V(y) < (α + 1) ln y,   (3.1.30)
which holds for large enough y, we will have

  n W( y/|ln n V(y)| ) ≤ c₁ n W( y/ln y ) < c₂.
The inequality (3.1.3) is proved in the same way as Corollary 2.2.4. The theorem is proved.

Proof of Theorem 3.1.6. First consider the case β = 1. Here, owing to Theorem 1.1.4(iv), instead of (3.1.23) we have

  W^I(t) = t W(t) L₁(t),   (3.1.31)

where L₁(t) → ∞ as t → ∞ and is an s.v.f. So instead of (3.1.24) we will have

  I₁ ≤ F₋(0) + μ ∫_{−∞}^0 t F(dt) + W(1/μ) L₁(1/μ).   (3.1.32)

If β ≥ 2 then (3.1.23) remains valid and, as μ → 0,

  μ² ∫_0^∞ W^I(t) e^{−μt} dt ∼ W(1/μ) L₂(1/μ)   if 𝔚 = ∞,
  μ² ∫_0^∞ W^I(t) e^{−μt} dt ∼ μ² 𝔚   if 𝔚 < ∞,   (3.1.33)

where 𝔚 ≤ ∞ is defined in (3.1.11); if 𝔚 = ∞ then

  L₂(t) := (1/(t² W(t))) ∫_0^t W^I(u) du → ∞,   t → ∞,   (3.1.34)

is an s.v.f. (see Theorem 1.1.4(iv)).

Now we make use of the notation W_β(t) from (3.1.12). Then (3.1.32) will hold in all the cases considered above, provided that we replace the term W(1/μ)L₁(1/μ) on the right-hand side of (3.1.32) by W_β(1/μ); in the case β > 2, in accordance with (3.1.12) we put this term equal to W₂(1/μ) = 𝔚μ².

We have a similar situation with the parameter α. In the case α ≥ 2, instead of (3.1.25) we will have

  V*(t) = ∫_0^t u V(u) du = t² V(t) L₃(t)   if 𝔙 = ∞,
  V*(t) = ∫_0^t u V(u) du ∼ 𝔙   if 𝔙 < ∞,

where L₃(t) is an unboundedly increasing s.v.f. Using the notation V_α(t) introduced in (3.1.13), it can be seen that, for all α ≥ 1, we will have instead of (3.1.26), (3.1.27) the inequalities

  I₂ ≤ F₊(0) + μ ∫_0^∞ t F(dt) + V_α(1/μ),   I₁ + I₂ ≤ 1 + W_β(1/μ) + V_α(1/μ).

For Π(y) = n V(y) and large enough y, we will have (cf. (2.2.19))

  ln P ≤ −xμ + 2n [W_β(1/μ) + V_α(1/μ)] + Π(y) e^λ (1 + ε(λ)).

Again putting

  μ = (1/y) ln( r/Π(y) ),
we will obtain the previous result provided that (see (3.1.14))

  Π* = n [ W_β(1/μ) + V_α(1/μ) ] ≤ c   as y → ∞.

That is, for (3.1.15) to hold we need

  n W_β( y/|ln Π(y)| ) < c,   n V_α( y/|ln Π(y)| ) < c.   (3.1.35)

Now we turn to conditions sufficient for (3.1.35). The cases α ∈ [1, 2), β ∈ (1, 2) have already been dealt with in Theorem 3.1.1. In the cases β = 1, 2 and 𝔚 = ∞, we have

  n W_β( y/|ln Π(y)| ) = n W( y/|ln Π(y)| ) L_β( y/|ln Π(y)| ).

Since |ln Π(y)| < (α + 1) ln y for sufficiently large y (see (3.1.30)), the first of the required inequalities (3.1.35) will hold if

  n W( y/ln y ) L_β( y/ln y ) < c₁.

If 𝔚 < ∞ then the first of the inequalities (3.1.35) will hold if

  𝔚 n |ln Π(y)|² y^{−2} < c.

It suffices to take y > n^{1/2} ln(n + 1). Then, by (3.1.30),

  n |ln Π(y)|²/y² ≤ n (α + 1)² ln² y / y² < (α + 1)² · n ln²( n^{1/2} ln(n + 1) ) / ( n ln²(n + 1) ) ≤ c₁

for all n ≥ 1. The sufficiency of conditions (3.1.16)–(3.1.18) is proved.

The case α ≥ 2 is dealt with in a similar way. When α = 2 and 𝔙 < ∞, it again suffices to take y > n^{1/2} ln(n + 1). If α = 2, 𝔙 = ∞ then (cf. (2.2.20))

  n V₂( y/|ln Π(y)| ) ≤ c₁ Π(y) L₃(y) |ln Π(y)|^{2+δ} < L₃^{−δ/2}(y) → 0

when Π(y) < L₃^{−2−δ}(y). As was shown in (2.2.21) and (2.2.22), the last inequality holds when

  s(y, n) = y/σ(n) > L₃^{1+ε}(σ(n))

or when Π^{(ε)} ≡ Π L₃^{1+ε}(x) ≤ 1. That conditions (3.1.19)–(3.1.21) are sufficient is proved. The sufficiency of condition (3.1.22) can be verified in an obvious way. The theorem is proved.
for all n 1. The sufficiency of conditions (3.1.16)–(3.1.18) is proved. The case α 2 is dealt with in a similar way. When α = 2 and V < ∞, it again suffices to take y > n1/2 ln(n + 1). If α = 2, V = ∞ then (cf. (2.2.20)) « „ y −δ/2 (y) → 0 nV2 c1 Π(y)L3 (y)| ln Π(y)|2+δ < L3 | ln Π(y)| when Π(y) < L3−2−δ (y). As was shown in (2.2.21) and (2.2.22), the last inequality holds when y s(y, n) = (σ(n)) > L1+ε 3 σ(n) or when Π(ε) ≡ ΠL1+ε (x) 1. 3 That conditions (3.1.19)–(3.1.21) are sufficient is proved. The sufficiency of condition (3.1.22) can be verified in an obvious way. The theorem is proved.
In concluding this section, we will obtain bounds for the moments of the r.v.’s Sn = max |Sk | = max{S n , |S n |}, kn
where
S n = min Sk . kn
= nV (x), and put Recall the notation V (t) = max{V (t), W (t)}, Π σ (n) := V (−1) (1/n).
136
Random walks with finite mean and infinite variance
The functions V (t), σ (n) are obviously regularly varying together with V (t) and W (t). It is also clear that, owing to (3.1.10), sup b x: Πv
P(Sn x) 2 + ε(v), Π
ε(v) ↓ 0
as
v ↓ 0.
(3.1.36)
Corollary 3.1.9. Let condition [<, <] be satisfied, and let g(t) be an r.v.f. of index γ ∈ (0, min{α, β}). Then Eg(Sn ) cg( σ (n)).
(3.1.37)
Proof. We can assume without loss of generality that g is a differentiable t+1 function (otherwise instead of g(t) we could consider the function g!(t) = t g(u)du, which is asymptotically equivalent to it). Therefore, owing to the convergence g(t)V (t) → 0 as t → ∞ we have Eg(Sn ) =
∞
g(t) P(Sn ∈ dt) c1 +
0
∞
g (t) P(Sn t) dt.
(3.1.38)
0
Further, the function V (t) is an r.v.f. (together with V (t) and W (t)), and hence it where ρ = min{α, β} and admits a representation of the form V (t) = t−ρ L(t), is an s.v.f. Now consider, along with V (t), the differentiable function L V! (t) := ρ
∞
= V (t). u−ρ−1 L(u)du ∼ t−ρ L(t)
t
! (−1)
Clearly σ !(n) := V
(1/n) ∼ σ (n) and, along with (3.1.36), we also have
P(Sn > x) c2 nV! (x)
xσ !(n) = σ (n)(1 + o(1)).
for
(3.1.39)
Hence, by virtue of Theorem 1.1.4(iv) and (3.1.39), for the integral on the final right-hand side of (3.1.38) we obtain σ e (n)
g (t)P(Sn t) dt
0
and ∞ σ e(n)
g (t) dt g σ !(n)
0
g (t)P(Sn t) dt c2 n
σ e (n)
∞
g (t)V! (t) dt
σ e(n)
3 ∞ 4 ρ−1 ! = c2 n ρ g(t)t L(t) dt − g σ !(n) V σ !(n)
σ e(n)
!(n) V! σ !(n) ∼ c3 g σ !(n) ∼ c3 g σ (n) . ∼ c3 ng σ Together with (3.1.38) this establishes (3.1.37). The corollary is proved.
3.2 Upper bounds for the distribution of S̄_n(a), a > 0

In this section we will derive upper bounds for the probabilities

  P(S̄_n(a) ≥ x)   and   P(S̄_n(a) ≥ x; B(v)),

where, as before, Eξ = 0,

  S̄_n(a) = max_{k≤n} (S_k − ak),   a > 0,   B(v) = ⋂_{j=1}^n B_j(v).

Here we will put g(j) = aj, so that

  B_j(v) = {ξ_j < y + vaj},   v > 0.

Clearly, S̄_n(a) is nothing other than the value of the maximum S̄_n in the case where the summands ξ̃_j := ξ_j − a have a negative mean −a. This interpretation of S̄_n(a) does not exclude situations with small values of a. To make formulations for such situations precise, we will introduce the triangular array scheme, where the distribution of ξ depends on n (the approach used when studying so-called transitional phenomena; see Chapter 12). Note, however, that the value of n in the forthcoming assertions can be fixed or infinite. So it is often more convenient to assume that the distribution of ξ (and therefore also the tails V(t) = V_a(t) and W(t) = W_a(t)) will depend on some varying parameter that, when a → 0, could be identified with a. With regard to V_a and W_a we will assume that a rather stringent condition on the character of their dependence on a is met. Namely, we will assume that the following uniformity condition is satisfied.

[U] There exist tails V₀, W₀, belonging to the same classes as those to which the tails V and W belonged (we retain the same exponents −α and −β), such that

  sup_a | V_a(t)/V₀(t) − 1 | → 0,   sup_a | W_a(t)/W₀(t) − 1 | → 0   as t → ∞.

This condition will be substantially broadened in Chapter 12. It is not difficult to see that all the bounds obtained in this chapter so far will remain true, under condition [U], for the triangular array scheme as well. The reason is that all the arguments and asymptotic analyses that we used for functionals of V = V_a, W = W_a can be replaced, introducing a small error ε, by the same reasoning for the same functionals of the fixed functions V₀(t), W₀(t) for t ≥ t_ε with a suitable t_ε.

In what follows, we will still be considering the case of centred r.v.'s ξ, so that Eξ = 0, while the tails V and W can, generally speaking, depend on the parameter a → 0 or another parameter, provided that condition [U] is satisfied. The index a indicating this dependence will be omitted for brevity.
Below we will need bounds for the probabilities that the crossing of the boundary x + ak occurred after a given time. The first crossing time is given by the
r.v.

  η(x) := min{k : S_k − ak ≥ x}.

Put

  m := min{n, x/a},   r = x/y.
To make our notation for conditions on the range of values of n and x more compact, in what follows we will be using condition [Q], which is satisfied when at least one of the following two conditions is met:

[Q₁] W(t) ≤ cV(t),   x → ∞ and nV(x) → 0,

or

[Q₂] x → ∞ and n V̂( x/ln x ) < c,   where V̂(t) = max{V(t), W(t)},

so that [Q] = [Q₁] ∪ [Q₂].
(3.2.1)
where r0 = r/(1 + vr) 4r/5, r > 5/4. If a a0 for a fixed a0 > 0 and mV (x) → 0 then, without loss of generality, one can assume that condition [Q1 ] is met with n replaced in it by m. The inequality (3.2.1) remains true for any a without assuming [Q], provided that V (x) on the right-hand side of (3.2.1) is replaced by V (x). (ii) Under the conditions of part (i), P(S n (a) x) cmV (x).
(3.2.2)
If, under the conditions of part (i), condition [Q] is met with n = n1 := x/a then, for bounded values of t and all large enough x, cxV (x) 1−α xt t P ∞ > η(x) , (3.2.3) a a where the constant c can be found explicitly. If t → ∞ together with x then the inequality (3.2.3) remains true if the exponent 1 − α is replaced in it by 1 − α + ε for any fixed ε > 0.
3.2 Upper bounds for the distribution of S n (a), a > 0
139
The assertions stated in part (i) after (3.2.1) hold true for the inequalities (3.2.2), (3.2.3) as well. Note that if a a0 = const > 0, β > 1 then condition [Q2 ] with n replaced by m is always satisfied. Proof. For n x/a the assertion follows from Theorem 3.1.1 and Corollary 3.1.4. Now let n > x/a. Without loss of generality, we can assume that n1 := x/a is an integer. Then, for n0 := 0,
nk := 2k−1 n1 = 2k−1 x/a,
k = 1, 2, . . . ,
we have P(S n (a) x; B(v))
∞
pk := P η(x) ∈ (nk , nk+1 ]; B(v) .
pk ,
k=0
(3.2.4) Put xk := ank = x2k−1 ,
yk := y + avnk = y + xv2k−1 .
For any k such that nk n, we have B(v) ⊂
nk *
{ξj < y + vank } =
j=1
nk *
{ξj < yk } =: B (k)
j=1
and therefore, by Theorem 3.1.1 with n1 V (x) = xa−1 V (x) 1, # $r0 p0 = P(S n1 (a) x; B(v)) P(S n1 x; B (1) ) c n1 V (x) , where r0 :=
x1 r . ≡ y1 1 + vr
Similarly, for k 1 (that condition [Q] is met for nk = n1 2k−1 , xk = x2k−1 follows from the fact that it is satisfied for n1 and x1 = x), we obtain
ank ; B (k) pk = P η(x) ∈ (nk , nk+1 ]; B(v) P Snk 2 nk ank ank * + P Snk < {ξj < yk+1 } P S nk x + ; 2 2 j=1 nk * k−2 (k) = P Snk x2 ;B {ξj yk+1 } + P S nk x(1 + 2k−2 ); $r $r # # c1 nk V (x2k−2 ) k + c2 nk V (x(1 + 2k−2 )) k ,
j=1
(3.2.5)
140
Random walks with finite mean and infinite variance
where rk :=
x2k−2 r2k−2 1 = ↑ , k−1 yk 1 + vr2 2v
(3.2.6)
rk :=
x(1 + 2k−2 ) r(1 + 2k−2 ) 1 = ↑ k yk+1 1 + vr2 4v
(3.2.7)
as k → ∞, v 1/4r. Note that here rk r0 =
r , 1 + vr
rk r1 =
3r r, 2(1 + 2vr)
k 1,
v
1 . 4r
So min{rk , rk } = r0 = r/(1 + vr) 4r/5 for v 1/4r. Summarizing, we obtain that, for 0 < ε < α − 1, v 1/4r and all k, we have % k & r0 ,x -r0 x2 c1 V (x2k ) V (x)2−(α−1−ε)k . pk c a a Hence from (3.2.4) &r % ∞ xV (x) 0 P S n (a) x; B(v) p k c2 , a k=0
r0 > 1,
r>
5 . 4
This proves the inequality (3.2.1). Now we will show that when a a0 > 0 one can assume, without loss of generality, that the condition W (t) cV (t) is satisfied. Indeed, for h > 0 introduce (h)
ξj := max{−h, ξj } + ah ,
ah := E(ξj + h; ξj −h) < 0,
which are centred versions of the r.v.’s ξj ‘truncated’ at the level −h, h > 0, and endow the notation S n (a), corresponding to the r.v.’s (h) ξj , with a left superscript (h) . Then, clearly, S n (a) (h) S n (a + ah ), where ah → 0 as h → ∞ and all the conditions of the form [<, <], W (t) < cV (t) are satisfied for (h) ξj . This gives us the required bound for P(S n (a) x) with a ‘slightly diminished’ value of a. The last assertion of part (i) of the theorem follows in an obvious way from the above argument and Corollary 3.1.2. The proof of part (ii) of the theorem follows the same avenue. Using an argument similar to that above, we find that, for n x/a, k 1, P η(x) ∈ (nk , nk+1 ] ank ank ank + P Snk < P S nk x + P Snk 2 2 2 P Snk x2k−2 + P S nk x(1 + 2k−2 ) cnk V (x2k );
3.3 Lower bounds for the distribution of Sn from this it follows that
P(∞ > η(x) nk ) = P ∞ > η(x)
x2k−1 a
c1
141
x2k−1 V (x2k ). a
For a fixed t, this yields xV (x) 1−α xt P ∞ > η(x) c2 t . a a If t → ∞ then, owing to the properties of r.v.f.’s, one can replace t1−α in this inequality by t1−α+ε for any fixed ε > 0. The theorem is proved.
3.3 Lower bounds for the distribution of Sn Lower bounds for the distributions of the sums Sn are based, as in § 2.5, on the general result of Theorem 2.5.1 and are similar to the bounds from Theorem 2.5.2. Recall that V (t) = max{V (t), W (t)},
σ (n) = V (−1) (1/n),
α ˇ = min{α, β}.
Theorem 3.3.1. Let condition [<, <] with β ∈ (1, 2) be satisfied and let Eξ = 0. Then, for y = x + u σ (n − 1) and any fixed δ > 0, we have
n−1 ˇ F+ (y) . − P(Sn x) nF+ (y) 1 − cu−α+δ 2
(3.3.1)
If, moreover, condition [ · , ≶] is satisfied (see (2.5.1)), x σ (n) then P(Sn x) nV (x)(1 + o(1)).
(3.3.2)
Proof. By virtue of Theorem 2.5.1, n−1 F+ (y) , (3.3.3) P(Sn x) nF+ (y) 1 − Qn−1 (u) − 2 where y = x + uK(n − 1) and Qn (u) = P Sn /K(n) < −u . Corollary 3.1.2 implies that # $ (3.3.4) P(Sn < −x) nV (x) 1 + o(nV (x)) . Put K(n) := σ (n). Then, for any fixed δ > 0 and all large enough t, ˇ . Qn (u) cnV uV (−1) (1/n) cu−α+δ This proves (3.3.1). If x = s σ (n), s → ∞, u → ∞, u = o(s) then x ∼ y. Therefore, under condition [ · , ≶], we obtain nF+ (y) nV (x)(1 + o(1)), nF+ (y) → 0. This establishes (3.3.2). The theorem is proved.
142
Random walks with finite mean and infinite variance
One can also give uniform versions of the inequalities stated in Theorem 3.3.1. For instance, the following assertion holds true. Recall that condition [Rα,ρ ] (see §§ 1.5 and 2.5) means that F (t) is an r.v.f. at infinity with index −α, α ∈ (0, 2), and there exists the limit F+ (t) lim = ρ+ . t→∞ F (t) Corollary 3.3.2. Let condition [Rα,ρ ] be satisifed, α ∈ (1, 2), ρ+ > 0. Then, for x = sσ(n), Π = Π(x) = nV (x), inf
x: st
P(Sn x) 1 − ε(t), Π
(3.3.5)
ε(t) ↓ 0 as t ↑ ∞. The corollary follows in an obvious way from Theorem 3.3.1. Lower bounds for P(S n (a) x) are contained in Theorem 4.3.3.
3.4 Asymptotics of P(Sn x) and its refinements Some assertions on the first-order asymptotics of P(Sn x) follow immediately from the upper and lower bounds presented above. Theorem 3.4.1. Let Eξ = 0, α ∈ (1, 2) and condition [<, =] be satisfied. If W (t) cV (t) then there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, for x = sσ(n), P(Sn x) (3.4.1) − 1 ε(t), sup Π x: st P(S n x) sup (3.4.2) − 1 ε(t). Π x: st If W (t) cV (t) but nW (x/ln x) < c then the asymptotic equivalence relations P(Sn x) ∼ P(S n x) ∼ nV (x)
as
x→∞
remain valid. In the latter case it is more difficult to determine a parameter for which uniformity would hold. There is no doubt, however, that uniform analogues of (3.4.1), (3.4.2) will remain true if nW (x/ln x) < v ↓ 0. Proof. The proof follows in an obvious way from Theorems 3.1.1 and 3.3.1. One just has to note that each of the two pairs of relations W (t) cV (t), x σ(n) and W (t) V (t), nW (x/ln x) < c implies that x σ (n) = V (−1) (1/n).
Below we will extend the above assertions to more general situations (this could already be partially done with the help of the bounds from §§ 3.2 and 3.3). Moreover, we will also obtain refinements of these assertions, i.e. estimates of the rate of convergence of the difference

P(Sn ≥ x)/(nV(x)) − 1

to zero and asymptotic expansions for P(Sn ≥ x). Later on, analogous results will be obtained for P(S̄n ≥ x) and P(S̄n(a) ≥ x) as well.

The need for deriving second- and higher-order asymptotics arises from the fact that the first-order asymptotics (the main terms of the asymptotics) often provide rather imprecise approximations to the probabilities under consideration (to some extent, this will be illustrated by numerical examples below). The very fact that the same function nV(x) appears in Theorem 3.4.1 as an approximation to the substantially different probabilities P(Sn ≥ x) ≤ P(S̄n ≥ x) already indicates that the quality of this approximation cannot be very good. To this one should add that the same approximation is valid for the probability P(ξ̄n ≥ x) as well, where ξ̄n = max_{j≤n} ξj. Indeed,

P(ξ̄n ≥ x) = 1 − (1 − F+(x))^n = nF+(x) − (n(n − 1)/2) F+(x)² + O((nF+(x))³),

so that if F+(x) = V(x) then

P(ξ̄n ≥ x) = nV(x) + O((nV(x))²),   (3.4.3)

and we obtain the same first-order approximation as in (3.4.1), (3.4.2). We will see below that, although nV(x) is a lower-quality approximation to the probabilities of the form P(Sn ≥ x) and P(S̄n ≥ x) than it is to P(ξ̄n ≥ x), it can be substantially improved with the help of even the first term of the asymptotic expansions. It is of interest to observe that, in the case α < 2, sometimes the approximation nV(x) to P(Sn ≥ x) (but not to P(S̄n ≥ x)) is, in a certain sense, as good as it is to P(ξ̄n ≥ x) (see Remark 3.4.5).

To obtain refinements of the limit theorems, we will need additional smoothness conditions on the function V(t). In the conditions below, q(t) denotes a function such that q(t) → 0 as t → ∞.

[Dh,q] Condition [ · , =] is satisfied and, in addition, there exists a function L1(t) → α as t → ∞ such that, as t → ∞ and Δ → 0,

V(t(1 + Δ)) = V(t)[1 + O(|Δ|^h) + o(q(t))]                    if h ≤ 1,
V(t(1 + Δ)) = V(t)[1 − L1(t)Δ + O(|Δ|^h) + o(q(t))]           if 1 < h ≤ 2.   (3.4.4)

For h = 1 we will need a stronger condition.
[D(1,q)] Condition [ · , =] is satisfied and, in addition, there exists a function L1(t) → α as t → ∞ such that, as t → ∞ and Δ → 0,

V(t(1 + Δ)) = V(t)[1 − L1(t)Δ + o(Δ) + o(q(t))].   (3.4.5)

For example, under condition [ · , =] the distribution tail of the first positive sum in the walk {Sk} will always satisfy condition [D(1,q)] with q(t) = t^{−1} (see § 7.5). This is also true for the Cramér transforms of the distributions we consider in Chapter 6.

Condition [D(1,q)] means that the tail V can be represented as a sum of two components: one is ‘differentiable at infinity’ (it satisfies condition [D(1,0)]) while the other can be arbitrary, the only restriction being that its absolute value should admit a majorant of the order o(q(t)). Similar remarks hold true for the other conditions [D( · , · )] as well.

Furthermore, along with conditions [Dh,q] and [D(1,q)] one can consider conditions [Dh,O(q)] and [D(1,O(q))], which are formulated as follows.

[Dh,O(q)], [D(1,O(q))] Conditions [Dh,q], [D(1,q)] respectively are satisfied with the remainder term o(q(t)) in them replaced by O(q(t)).

In the lattice case, when the r.v. ξ is integer-valued with the greatest common divisor of its values equal to 1, V(t) will be a step function. In this case, one should assume that t and Δt in (3.4.4) and (3.4.5) are also integer-valued.

If the function L(t) in the representation V(t) = t^{−α}L(t) is differentiable and L′(t) = o(L(t)/t), then in (3.4.4) with h > 1 and in (3.4.5) we can identify the function −L1(t) with tV′(t)/V(t) ∼ −α, where V′(t) is the derivative of V(t), and put q(t) ≡ 0.

As will be seen from the proofs below, the accuracy of the asymptotic representations for P(Sn ≥ x) and P(S̄n ≥ x) will depend on the sensitivity of the tails V(t) to relatively small variations in t. In this sense, conditions [Dh,q] reflect the nature of things. We will also see (Remark 3.4.8) that, in the case where q(t) = o(t^{−h}), condition [Dh,q] can be replaced by [Dh,0].

Remark 3.4.2. It is not hard to see the following.
(1) If the functions Vi(t) satisfy conditions [Dh,qi], i = 1, 2, then V(t) := V1(t) + V2(t) satisfies [Dh,q] with

q(t) = q1(t)V1,0(t) + q2(t)V2,0(t),   Vi,0(t) = Vi(t)/(V1(t) + V2(t)).

In the case h > 1, one should take, in the representation (3.4.4) for V(t), L1(t) = L1,1(t)V1,0(t) + L1,2(t)V2,0(t), where L1,i(t) is the function from the representation (3.4.4) corresponding to Vi(t), i = 1, 2.

(2) If V1(t) satisfies condition [Dh,q1] and q(t) < cq1(t)V1(t) then V(t) := V1(t) + q(t) also satisfies [Dh,q1].
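For the model case of a pure power tail V(t) = t^{−α} (chosen here purely for illustration; it is not an example from the text), conditions (3.4.4) and (3.4.5) hold with L1(t) ≡ α and q(t) ≡ 0, since (1 + Δ)^{−α} = 1 − αΔ + O(Δ²). A minimal numerical check of this expansion:

```python
# Verify V(t(1+Delta)) = V(t) * (1 - alpha*Delta + O(Delta^2)) for V(t) = t^(-alpha),
# i.e. condition [D_{h,0}] with h = 2 and L1(t) = alpha.  Values are illustrative only.
alpha, t = 1.5, 100.0

def V(u):
    return u ** (-alpha)

for delta in (1e-2, 1e-3, 1e-4):
    ratio = V(t * (1 + delta)) / V(t)      # equals (1 + delta)^(-alpha)
    linear = 1 - alpha * delta             # the first-order term with L1 = alpha
    err = abs(ratio - linear)
    # the remainder is ~ alpha*(alpha+1)/2 * delta^2 = 1.875 * delta^2 here
    assert err <= 2 * delta ** 2, (delta, err)
    print(f"delta={delta:g}  remainder={err:.3e}")
```

The ratio err/Δ² stabilizes near α(α + 1)/2, confirming that the O(|Δ|^h) term in (3.4.4) is sharp for this tail.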
Remark 3.4.3. Clearly, if conditions [<, <] and [Q] = [Q1] ∪ [Q2] (introduced on p. 138) are met then, by virtue of Theorem 3.1.1, we have

P(S̄n ≥ x) ≤ nV(x)(1 + o(1))

and, moreover, if [Q2] is satisfied,

P(S̲n < −x) ≤ nW(x)(1 + o(1)),

where S̲n = min_{k≤n} Sk. Furthermore, it is not difficult to see that condition [Q] implies the convergence nV(x) → 0 and therefore that the conditions of Corollary 3.1.2 are satisfied, which, in turn, implies (3.1.9). Also, note that the condition nV(x) → 0 as x → ∞ always holds for deviations x > cn, since nV(n) → 0 as n → ∞ when E|ξ| < ∞.

Theorem 3.4.4. Let the conditions Eξ = 0, [<, =] with α ∈ (1, 2) and [Q] be satisfied. Then the following assertions hold true.

(i) One has P(Sn ≥ x) = nV(x)(1 + o(1)).

(ii) If condition [Dh,q] is satisfied then

P(Sn ≥ x) = nV(x)[1 + o(q(x)) + r_{n,x}],   (3.4.6)

where

r_{n,x} = O(nV(x))          if h > α̌ := min{α, β},
r_{n,x} = O((σ̂(n)/x)^h)    if h < α̌,   (3.4.7)

and the bounds O(·) are uniform in n and x satisfying nV(x) ≤ ε for an arbitrary fixed ε > 0.

(iii) If condition [D(1,q)] is met then

P(Sn ≥ x) = nV(x)[1 + o(q(x)) + O(σ̂(n)/x)].   (3.4.8)

The remainder term O(σ̂(n)/x) is uniform in the same zones for n, x as in part (ii).

(iv) If conditions [Dh,O(q)] and [D(1,O(q))] are satisfied instead of conditions [Dh,q] and [D(1,q)] respectively, then in (3.4.6) and (3.4.8) the remainder term o(q(x)) must be replaced by O(q(x)).

In the lattice case, all the assertions of the theorem remain true for integer-valued x.

The so-called integro-local theorems on the asymptotics of P(Sn ∈ [x, x + Δ)), Δ > 0, as x → ∞, will be obtained in § 3.7 and Chapter 9.
Remark 3.4.5. The assertion of Theorem 3.4.4 concerning the uniformity of O(·) means that if, say, the conditions h > α̌, nV(x) ≤ ε are met then r_{n,x} ≤ cnV(x). One should note that σ(n)/x → 0 iff nV(x) → 0. More precisely, if x = sσ(n) then nV(x) ∼ s^{−α} for a fixed s, and nV(x) ≤ s^{−α+ε} = o(s^{−1}) as s → ∞ for any ε, 0 < ε < α − 1. Therefore,

nV(x) = o(σ(n)/x)   as s → ∞   (3.4.9)

(or as nV(x) → 0). Similarly,

nV(x) = o((σ̂(n)/x)^h)   as nV(x) → 0, for h < α̌.   (3.4.10)

It is also clear that

(σ̂(n)/x)^h ≤ c nV(x)   for h > α̌,
nV(x) ≤ c (σ̂(n)/x)^h   for h < α̌.   (3.4.11)

It will follow from the proof of the theorem that the bound (3.4.7) can be written in the universal form

r_{n,x} = O((σ̂(n)/x)^h + n x^{−h} ∫_{σ̂(n)}^{x} u^{h−1} V(u) du),

which coincides with (3.4.7) when h ≠ α̌.

Remark 3.4.6. If W(t) ≤ cV(t) and h > α then, by virtue of Theorem 3.4.4, for P(Sn ≥ x) one has an approximation nV(x)[1 + O(nV(x))] of the same form as the approximation (3.4.3) for P(ξ̄n ≥ x). This situation is different from that in the case α > 2, where, under certain conditions, we have P(Sn ≥ x) = nV(x)[1 + cnx^{−2} + · · · ] (see Corollary 4.4.5).

Proof of Theorem 3.4.4. Put Gn := {Sn ≥ x} and r = x/y = 2. It follows from Theorem 3.1.1 (see (3.1.2)) that

P(Sn ≥ x) = P(Gn) = P(Gn B) + P(Gn B̄) = P(Gn B̄) + O((nV(x))²),   (3.4.12)
where for the first term on the right-hand side we have

Σ_{j=1}^n P(Gn B̄j) ≥ P(Gn B̄) ≥ Σ_{j=1}^n P(Gn B̄j) − Σ_{i<j≤n} P(Gn B̄i B̄j)
 = Σ_{j=1}^n P(Gn B̄j) + O((nV(x))²).   (3.4.13)

Therefore

P(Gn) = Σ_{j=1}^n P(Gn B̄j) + O((nV(x))²).   (3.4.14)
Further,

P(Gn B̄j) = P(Gn B̄n) = P(Sn ≥ x, ξn ≥ y)
 = P(S_{n−1} + ξn ≥ x, ξn ≥ y)
 = P(S_{n−1} ≥ x − y, ξn ≥ y) + P(S_{n−1} + ξn ≥ x, S_{n−1} < x − y).

Since the r.v.’s ξn and S_{n−1} are independent of each other, the first term on the right-hand side does not exceed cnV²(x), owing to (3.1.9). The second term can be rewritten as

E[V(x − S_{n−1}); S_{n−1} < x/2] = P1 + P2,   (3.4.15)

where

P1 := E[V(x − S_{n−1}); S_{n−1} < −x/2],   P2 := E[V(x − S_{n−1}); −x/2 ≤ S_{n−1} < x/2].

As we have already noted, condition [Q] implies the convergence nV(x) → 0. Using (3.1.9) to bound the left tails, we obtain

P1 ≤ V(3x/2) P(S_{n−1} < −x/2) ≤ V(3x/2) cnV̌(x/2) = O(nV(x)V̌(x)).   (3.4.16)

(i) By virtue of (3.4.12)–(3.4.16), to prove this part of the theorem it suffices to show that P2 = V(x)(1 + o(1)). It follows from condition [Q] and (3.1.9) that if ε → 0 slowly enough as x → ∞ then for M = εx one has

P(|S_{n−1}| > M) → 0.   (3.4.17)

Hence

P2 = E[V(x − S_{n−1}); |S_{n−1}| < M] + o(V(x)).   (3.4.18)

But V(x − S) ∼ V(x) for |S| < M, so that P2 = V(x)(1 + o(1)). Assertion (i) is proved.

(ii) If the smoothness condition [Dh,q] is satisfied then one can obtain more precise results for P(Sn ≥ x). In this case, for h > 1 we have, by virtue of (3.4.4), that

P2 = V(x) E[1 + L1(x)S_{n−1}/x + o(q(x)) + O(|S_{n−1}/x|^h); |S_{n−1}| < x/2].   (3.4.19)
Set

T_n^k := E[S_{n−1}^k; |S_{n−1}| ≥ x/2],   (3.4.20)
I_n^h := E[|S_{n−1}|^h; |S_{n−1}| < x/2].   (3.4.21)

Since ES_{n−1} = 0, we have

P2 = V(x)[1 + o(q(x)) − T_n^0 + O(x^{−1}|T_n^1| + x^{−h}I_n^h)],   (3.4.22)

and now the main problem consists of bounding T_n^k and I_n^h. First we show that, for k = 0, 1 and h > α̌,

|T_n^k| ≤ c x^k nV(x).   (3.4.23)

The inequality T_n^0 ≡ P(|S_{n−1}| ≥ x/2) ≤ cnV̌(x) is contained in (3.1.9). For k = 1, integrating by parts and again using (3.1.9), we obtain

|T_n^1| ≤ E[|S_{n−1}|; |S_{n−1}| ≥ x/2] = −∫_{x/2}^∞ u dP(|S_{n−1}| ≥ u)
 = (x/2) P(|S_{n−1}| ≥ x/2) + ∫_{x/2}^∞ P(|S_{n−1}| ≥ u) du
 ≤ cxnV̌(x) + cn ∫_{x/2}^∞ V̌(u) du ≤ c1 xnV̌(x)   (3.4.24)

by virtue of Theorem 1.1.4(iv). This proves (3.4.23). Similarly,

I_n^h ≤ ∫_0^{x/2} hu^{h−1} P(|S_{n−1}| ≥ u) du = ∫_0^{σ̂(n)} + ∫_{σ̂(n)}^{x/2}.   (3.4.25)

Here the first integral on the right-hand side does not exceed

h ∫_0^{σ̂(n)} u^{h−1} du = σ̂^h(n).

Further, by (3.1.9) and Theorem 1.1.4(iv), the second integral on the right-hand side admits the bound

hn ∫_{σ̂(n)}^{x/2} u^{h−1} V̌(u) du ≤ hn ∫_1^x u^{h−1} V̌(u) du ∼ c x^h nV̌(x)   if h > α̌,
hn ∫_{σ̂(n)}^{x/2} u^{h−1} V̌(u) du ≤ hn ∫_{σ̂(n)}^∞ u^{h−1} V̌(u) du ∼ c σ̂^h(n)   if h < α̌

(recall that V̌(σ̂(n)) ∼ 1/n). Therefore, owing to (3.4.11),

I_n^h ≤ c x^h nV(x)   if h > α̌,   I_n^h ≤ c σ̂^h(n)   if h < α̌.

Together with the relations (3.4.14)–(3.4.16), (3.4.18), (3.4.22) and (3.4.23), this proves (3.4.6), (3.4.7).

(iii) Now assume that condition [D(1,q)] is satisfied. Then, instead of (3.4.19), one has

P2 = V(x) E[1 + L1(x)S_{n−1}/x + o(q(x)) + o(|S_{n−1}|/x); |S_{n−1}| < x/2].

Using the same argument as above, we obtain

P2 = V(x)[1 + o(q(x)) + o(σ̂(n)/x)].

(iv) The last assertion of Theorem 3.4.4 follows from the previous considerations in an obvious way, since replacing the term o(q(t)) in conditions (3.4.4), (3.4.5) by O(q(t)) does not cause any changes in the proof apart from replacing the bounds o(q(x)) in the final answers (3.4.6), (3.4.8) by O(q(x)). Theorem 3.4.4 is proved.

Remark 3.4.7. Since E|Sn|^h ≥ c > 0 for n ≥ 1, adding the quantity O(x^{−h}) to O(|S_{n−1}/x|^h) in (3.4.19) will not affect the estimation of the terms on the right-hand side of that relation. Therefore, conditions [Dh,q] with q(t) ≤ ct^{−h} and [Dh,0] lead to the same results.

Remark 3.4.8. As we have already said, the bound o(q(x)) ‘passes’ from condition [Dh,q] to (3.4.19) and (3.4.6) without any change and has no influence on the rest of the argument in the proof. The same applies to the proofs of other theorems. Hence, in what follows, in the proofs related to condition [Dh,q] we can assume that in fact condition [Dh,0] is met and then just add the term o(q(x)) to the right-hand sides of the respective relations, from the very beginning to the final result.
3.5 Asymptotics of P(S̄n ≥ x) and its refinements

Some assertions on the first-order asymptotics of P(S̄n ≥ x) follow from the bounds of §§ 3.2 and 3.3 and were given in Theorem 3.4.1. In § 3.4 we also presented a few remarks on the precision of the first-order approximations and stated the smoothness conditions [Dh,q] on the tails V(t) that are necessary for deriving higher-order approximations. The following theorem is the main result of the present section.
Theorem 3.5.1. Let conditions [<, =] and [Q] with α ∈ (1, 2) be satisfied. Then the following assertions hold true.

(i) One has

P(S̄n ≥ x) = nV(x)(1 + o(1)).   (3.5.1)

(ii) If condition [Dh,q] is met for some h ∈ (1, 2] then

P(S̄n ≥ x) = nV(x)[1 + (L1(x)/(nx)) Σ_{j=1}^{n−1} ES̄j + o(q(x)) + r_{n,x}],   (3.5.2)

where the form of r_{n,x} is given in Theorem 3.4.4 and the convergence is uniform in n and x satisfying the condition nV(x) ≤ ε for an arbitrary fixed ε > 0. If condition [D(1,q)] is satisfied then (3.5.2) holds with r_{n,x} = o(σ̂(n)/x).

(iii) If condition [Dh,q] is satisfied for some h ≤ 1 then

P(S̄n ≥ x) = nV(x)[1 + o(q(x)) + O((σ̂(n)/x)^h)].   (3.5.3)

(iv) If conditions [Dh,O(q)] and [D(1,O(q))] are satisfied instead of conditions [Dh,q] and [D(1,q)] respectively, then the remainder term o(q(x)) in (3.5.2) and (3.5.3) should be replaced by O(q(x)).

In the lattice case, when the r.v. ξ is integer-valued with the greatest common divisor of its values equal to 1, all the assertions of the theorem remain true for integer-valued x.

Note also that Remarks 3.4.5 and 3.4.6 concerning Theorem 3.4.4 are also applicable to Theorem 3.5.1.

Corollary 3.5.2. If conditions [D(1,q)] and [Rα,ρ] with α ∈ (1, 2) and ρ > −1 are satisfied then, as nV(x) → 0, one has

P(S̄n ≥ x) = nV(x)[1 + (α²Eζ̄(1)/(α + 1)) b(n)/x + o(q(x)) + o(σ(n)/x)],   (3.5.4)

where ζ̄(1) := sup_{u≤1} ζ(u) is the supremum of the values ζ(u) of the stable process limiting for S_{⌊nu⌋}/b(n). The bounds o(·) are uniform in n and x such that nV(x) ≤ ε for an arbitrary ε → 0. Recall that

b(n) = F^{(−1)}(1/n),   σ(n) ∼ F^{(−1)}(1/(ρ+ n)) = b(ρ+ n) ∼ ρ+^{1/α} b(n),

where ρ+ = (ρ + 1)/2, and that clearly σ(n)/x → 0 iff nV(x) ≤ ε → 0.
Proof of Theorem 3.5.1. (i) Here we will put Gn := {S̄n ≥ x}. The same argument as we used at the beginning of the proof of Theorem 3.4.4 leads to an analogue of (3.4.14) of the form

P(S̄n ≥ x) = P(Gn) = Σ_{j=1}^n P(Gn B̄j) + O((nV(x))²).   (3.5.5)

Again choosing r ≡ x/y = 2, we have

P(Gn B̄j) = P(S̄n ≥ x, ξj ≥ y)
 = P(S̄n ≥ x, ξj ≥ y, S̄_{j−1} ≥ x) + P(S̄n ≥ x, ξj ≥ y, S̄_{j−1} < x)
 = P(S̄n ≥ x, ξj ≥ y, S̄_{j−1} < x) + O(jV²(x)),   (3.5.6)

where the last bound follows from Theorem 3.1.1. Next we observe that

{S̄n ≥ x, ξj ≥ y, S̄_{j−1} < x} = {Sj + S̄^{(j)}_{n−j} ≥ x, ξj ≥ y, S̄_{j−1} < x},

where S̄^{(j)}_{n−j} := max_{0≤m≤n−j}(S_{m+j} − Sj). Setting

Z_{j,n} := S_{j−1} + S̄^{(j)}_{n−j}   (3.5.7)

and noting that

S_{j−1} ≤ Z_{j,n} ≤_d S̄_{n−1}   (3.5.8)

(the last inequality holding in distribution), we obtain, again using Theorem 3.1.1, that

P(Gn B̄j) = P(Sj + S̄^{(j)}_{n−j} ≥ x, ξj ≥ y, S̄_{j−1} < x) + O(jV²(x))
 = P(Sj + S̄^{(j)}_{n−j} ≥ x, ξj ≥ y) + O(P(ξj ≥ y, S̄_{j−1} ≥ x)) + O(jV²(x))
 = P(ξj + Z_{j,n} ≥ x, ξj ≥ y, Z_{j,n} < x/2)
   + P(ξj + Z_{j,n} ≥ x, ξj ≥ y, Z_{j,n} ≥ x/2) + O(jV²(x))
 = P(ξj + Z_{j,n} ≥ x, Z_{j,n} < x/2) + O(nV²(x)).   (3.5.9)

Here the last equality holds owing to the relation

{ξj + Z_{j,n} ≥ x, ξj ≥ y, Z_{j,n} < x/2} = {ξj + Z_{j,n} ≥ x, Z_{j,n} < x/2}

and the bound

P(ξj ≥ y, Z_{j,n} ≥ x/2) ≤ P(ξj ≥ y) P(S̄_{n−1} ≥ x/2) = O(nV²(x)).
Further, since

P(ξj + Z_{j,n} ≥ x, Z_{j,n} < x/2)
 = E[E[I(ξj ≥ x − Z_{j,n}) | S_{j−1}, ξ_{j+1}, . . . , ξn]; Z_{j,n} < x/2]
 = E[V(x − Z_{j,n}); Z_{j,n} < x/2],   (3.5.10)

we obtain from (3.5.5) and (3.5.9) that

P(Gn) = Σ_{j=1}^n E[V(x − Z_{j,n}); Z_{j,n} < x/2] + O((nV(x))²).   (3.5.11)

As in (3.4.15), we split the summands E[V(x − Z_{j,n}); Z_{j,n} < x/2] in (3.5.11) into two terms E_{j,1} and E_{j,2}, where, using (3.1.9) to estimate the left tails, we have

E_{j,1} := E[V(x − Z_{j,n}); Z_{j,n} ≤ −x/2] = O(nV(x)V̌(x)).   (3.5.12)

The remainder of the proof of part (i) is completely parallel to the corresponding part of the proof of Theorem 3.4.4(i).
(3.5.13)
Using ESj = 0 and condition [Dh,q ] with 1 < h 2, we can write (recall that L1 (x) ∼ α and that, according to Remark 3.4.8, in these proofs we can use condition [Dh,0 ] instead of [Dh,q ]): & % Zj,n h Zj,n x Ej,2 = V (x)E 1 + L1 (x) ; |Zj,n | < +O x x 2
(j) ES n−j x = V (x) 1 − P |Zj,n | > + L1 (x) 2 x & % & " % Zj,n h Zj,n x ; |Zj,n | < x + O E . ; |Zj,n | − L1 (x)E x 2 x 2 (3.5.14) Setting, for k = 0, 1, # k $ k Tj,n := E Zj,n ; |Zj,n | x/2 , # $ h := E |Zj,n |h ; |Zj,n | < x/2 , Ij,n
(3.5.15) (3.5.16)
one can rewrite the representation (3.5.14) for Ej,2 as
" (j) 1 −h h ES n−j Tj,n 0 Ej,2 = V (x) 1−Tj,n +L1 (x) −L1 (x) +O x Ij,n . (3.5.17) x x d
h completely repeats From (3.5.8) we have |Zj,n | Sn , so that the bounding of Ij,n
3.5 Asymptotics of P(S n x) and its refinements
153
k = O(xk nV (x)) are also obtained in a that of Inh in § 3.4. The bounds for Tj,n similar way to those for Tnk in § 3.4. When conditions [D(1,q) ] and [Q] are met, the proof of (3.5.2) with a remainder σ (n)/x) is carried out similarly to the argument in the proof of term rn,x = O( Theorem 3.4.4(iii). Part (ii) of Theorem 3.5.1 is proved.
(iii), (iv) The proofs of the last two parts of the theorem are also parallel to the corresponding arguments from the proof of Theorem 3.4.4. Theorem 3.5.1 is proved.

Proof of Corollary 3.5.2. The representation (3.5.4) follows from (3.5.2). First, observe that L1(x) → α and that, under the conditions of Corollary 3.5.2, as j → ∞, the distributions of Sj/b(j) converge weakly to the stable law Fα,ρ (see § 1.5). Further, the r.v.’s |Sj|/b(j), j ≥ 1, are uniformly integrable. This follows from Theorem 3.1.1:

P(S̄j/b(j) ≥ v) ≤ cjV(vb(j)) ≤ cjv^{−α+δ}V(b(j)) ≤ cv^{−α+δ}   (3.5.18)

for any δ > 0 and all sufficiently large v. Therefore, ES̄j/b(j) → Eζ̄(1) as j → ∞ (see also [146]), where ζ̄(1) = max_{u≤1} ζ(u) and ζ(u) is the stable process that is limiting for S_{⌊nu⌋}/b(n). Since b(j) = j^{1/α}Lb(j), Lb(t) being an s.v.f. at infinity, we have

ES̄j = (ES̄j/b(j)) b(j) = (Eζ̄(1) + o(1)) j^{1/α}Lb(j) = j^{1/α}L0(j),

where L0 is also an s.v.f. Hence, by Theorem 1.1.4(iv),

Σ_{j=1}^{n−1} ES̄j ∼ Σ_{j=1}^{n−1} j^{1/α}L0(j) ∼ ∫_1^n x^{1/α}L0(x) dx
 ∼ n^{1/α+1}L0(n)/(1/α + 1) ∼ (αEζ̄(1)/(α + 1)) nb(n).   (3.5.19)

Corollary 3.5.2 is proved.

Example 3.5.3. We will illustrate Theorem 3.5.1 by a numerical example. Consider symmetrically distributed r.v.’s ξj with

F+(t) = P(ξ1 ≥ t) = 0.5 for 0 ≤ t < 4/9,   F+(t) = (4/27) t^{−3/2} for t ≥ 4/9.

Obviously, conditions [=, =] (with α = β = 3/2), Eξ1 = 0 and [Dh,0] (with h = 2) are satisfied, and L1(t) = 3/2. Let us take n = 10 and demonstrate how much the relation (3.5.2) can improve the first-order approximation (3.5.1) for P(S̄n ≥ x).
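A simulation of this kind can be sketched as follows (a minimal illustration, not the authors' code; the number of trajectories and the test point x are chosen here arbitrarily). By symmetry, |ξ| ≥ 4/9 a.s. with P(|ξ| ≥ t) = 2F+(t) = (8/27)t^{−3/2} for t ≥ 4/9, so |ξ| can be sampled by inversion as (4/9)U^{−2/3} with U uniform on (0, 1]:

```python
import random

random.seed(1)
n = 10  # as in Example 3.5.3

def sample_xi():
    # |xi| = (4/9) * U^(-2/3) gives P(|xi| >= t) = (8/27) t^(-3/2), t >= 4/9;
    # a symmetric random sign then yields E xi = 0 and the tail F_+ of the example.
    u = 1.0 - random.random()  # uniform on (0, 1], avoids u == 0
    magnitude = (4.0 / 9.0) * u ** (-2.0 / 3.0)
    return magnitude if random.random() < 0.5 else -magnitude

def mc_tail_max(x, trials=20000):
    # Monte Carlo estimate of P(max_{k<=n} S_k >= x)
    hits = 0
    for _ in range(trials):
        s = 0.0
        for _ in range(n):
            s += sample_xi()
            if s >= x:
                hits += 1
                break
    return hits / trials

x = 20.0
first_order = n * (4.0 / 27.0) * x ** (-1.5)   # nV(x), the approximation (3.5.1)
estimate = mc_tail_max(x)
print(f"nV(x) = {first_order:.4f},  Monte Carlo estimate = {estimate:.4f}")
```

Since the correction term in (3.5.2) is positive here (L1 > 0, ES̄j > 0), the Monte Carlo value should tend to lie somewhat above nV(x), which is the effect Figure 3.1 displays.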
Fig. 3.1. Illustrating Theorem 3.5.1: a comparison of the approximations (3.5.1) (lower smooth curve) and (3.5.2) (upper smooth curve) with P(S̄n ≥ x).
Figure 3.1 depicts plots of the crude approximation nV(x) (the main term in the approximation (3.5.1)), the second-order approximation (the right-hand side of (3.5.2) without remainders) and a Monte Carlo estimator of the tail P(S̄n ≥ x) (to obtain this, we simulated 10^5 trajectories {Sk; k ≤ n}; the values ES̄j were also estimated from simulations).

3.6 The asymptotics of P(S(a) ≥ x) with refinements and the general boundary problem

Now we consider the asymptotic behaviour of the probabilities P(S̄n(a) ≥ x) as x → ∞ for the maxima S̄n(a) = max_{k≤n}(Sk − ak) when Eξ = 0, a > 0.

Theorem 3.6.1. Let conditions [<, =] with α ∈ (1, 2) and [Q] with n replaced by m = min{n, x/a} be satisfied. Then the following assertions hold true.

(i) For an arbitrary fixed a0 < ∞, uniformly in n and a ∈ (0, a0) we have

P(S̄n(a) ≥ x) = (Σ_{j=1}^n V(x + ja)) (1 + o(1)).   (3.6.1)
(ii) Under condition [D1,q], for an arbitrary fixed a1 > 0, uniformly in n and a > a1 we have

P(S̄n(a) ≥ x) = Σ_{j=1}^n V(x + ja)[1 + o(q(x + ja))] + O(mσ̂(m)x^{−1}V̌(x)),   (3.6.2)

where m := min{n, x} and

V̌(t) = max{V(t), W(t)},   σ̂(n) = V̌^{(−1)}(1/n) = max{σ(n), σW(n)}.
Remark 3.6.2. It will be seen from the proof of Theorem 3.6.1 that if a ≥ a1 > 0, then (3.6.1) will also hold when condition [<, =] is replaced by [ · , =]. Indeed, the only place in the proof of part (i) of the theorem where we use [<, =] is in the derivation of the bound (3.6.19). In the case a ≥ a1 > 0 one could use, instead of that relation, the bound E_{j,1} = o(jV(x + aj)), which is due to the law of large numbers.

Note that in Theorem 3.6.1, in contrast with the results of §§ 3.4 and 3.5, there are no upper restrictions on n (for example, of the form nV(x) → 0), and therefore one could put n = ∞. So we obtain the following results for S(a) := S̄∞(a).

Corollary 3.6.3. Let condition [<, =] with α ∈ (1, 2) and W(t) < c/(t ln t) be satisfied.

(i) As x → ∞,

P(S(a) ≥ x) = Σ_{j=1}^∞ V(x + ja) (1 + o(1)) = (1/a) ∫_x^∞ V(u) du (1 + o(1))
 = (xV(x)/(a(α − 1))) (1 + o(1)).   (3.6.3)

(ii) If, moreover, condition [Dh,q] holds with h = 1 then, as x → ∞,

P(S(a) ≥ x) = Σ_{j=1}^∞ V(x + ja)[1 + o(q(x + ja))] + O(σ̂(x)V̌(x)).   (3.6.4)
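The closed form in (3.6.3) is just the comparison of the sum Σ_{j≥1} V(x + ja) with the integral (1/a)∫_x^∞ V(u) du, together with ∫_x^∞ u^{−α} du = x^{1−α}/(α − 1) when V is a pure power (for a general r.v.f. this step is Theorem 1.1.4(iv)). A quick numerical illustration with the assumed model tail V(u) = u^{−α} (values chosen only for this sketch):

```python
# Compare the tail sum in (3.6.3) with the closed form x*V(x)/(a*(alpha-1))
# for the illustrative pure-power tail V(u) = u^(-alpha).
alpha, a, x = 1.5, 1.0, 100.0

def V(u):
    return u ** (-alpha)

# Truncate sum_{j>=1} V(x + j*a) at M terms; bound the remainder by the
# integral of V beyond x + M*a, divided by a.
M = 1_000_000
partial = sum(V(x + j * a) for j in range(1, M + 1))
tail_bound = (x + M * a) ** (1 - alpha) / ((alpha - 1) * a)

closed_form = x * V(x) / (a * (alpha - 1))
print(f"sum = {partial:.5f} (+ tail <= {tail_bound:.5f}), closed form = {closed_form:.5f}")
```

For these parameters the truncated sum plus its tail bound agrees with the closed form to within a fraction of a per cent, consistent with the first-order character of (3.6.3).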
Observe that in the last assertion we have σ̂(x) = o(x). A more accurate asymptotic representation for P(S(a) ≥ x) will be found in § 7.5.

Now we will consider the case of general boundaries. For a given boundary g = {g(k)}, we set

Gn := {max_{k≤n}(Sk − g(k)) ≥ 0}.

Further, let Gx,n be the class of boundaries {g(k) = g(k, x); k = 0, 1, . . . , n} such that

min_{1≤k≤n} g(k) ≥ cx

for a fixed c > 0. It is obvious that, in the case a ≥ 0, the boundary g(k) = x + ak belongs to Gx,∞ and that G∞ = {S(a) ≥ x}. From the applications viewpoint, the most interesting boundaries are of the following two types:

(1) g(k) = x + gk, k = 0, 1, . . . , where the sequence {gk; k ≥ 0} does not depend on x, inf_{k≥0} gk > −∞;
(2) g(k) = xf(k/n), where usually one considers only the values k ≤ n and the function f(t) is given on [0, 1], does not depend on x or n and is such that inf_{t∈[0,1]} f(t) > 0.

Evidently, boundaries of both types belong to the class Gx,n for any n.

Theorem 3.6.4. Let conditions [<, =] with α ∈ (1, 2) and [Q] be satisfied. Then, for g ∈ Gx,n, the following two assertions hold true.

(i) As x → ∞, one has

P(Gn) = Σ_{j=1}^n V(g∗(j)) (1 + o(1)) + O(n²V(x)V̌(x)),   (3.6.5)

where g∗(j) := min_{j≤k≤n} g(k).

(ii) If condition [Dh,q] with h = 1 is met then, as x → ∞,

P(Gn) = Σ_{j=1}^n V(g∗(j))[1 + o(q(x)) + O(σ̂(n)/x)].   (3.6.6)

The bounds O(·) in (i), (ii) are uniform in g ∈ Gx,n for n and x satisfying nV(x) ≤ ε for an arbitrary fixed ε > 0.

Remark 3.6.5. In Theorems 3.6.1 and 3.6.4 one can consider the case h > 1 also, but this would not lead to substantially stronger results. As was the case in Theorems 3.4.4 and 3.5.1, conditions of the form [Dh,q] can be replaced by [Dh,O(q)] with o(q(x)) in (3.6.6) changed to O(q(x)).

If, say, g(k) = x + gk, where the sequence {gk} depends neither on x nor on n, and gk ↑ ∞ as k ↑ ∞ sufficiently fast, then we can obtain from Theorem 3.6.4 assertions for the probability P(G∞) which are similar to (3.6.3) and (3.6.4). For example, under the conditions [<, =], W(t) < c1V(t), one would have, as x → ∞,

P(sup_{k≥0}(Sk − g(k)) ≥ 0) = Σ_{k=1}^∞ V(x + gk) (1 + o(1)).   (3.6.7)

In the case where the sequence {gk} is dominated by a linear sequence, (3.6.7) follows from Theorems 3.2.1 and 3.6.4.

Corollary 3.6.6. Let the following conditions be met: [<, =] with α ∈ (1, 2), W(t) < c1V(t), gk ↑, gk ≤ gk for some constant g > 0, and Σ_{k=1}^∞ V(x + gk) > cxV(x). Then (3.6.7) holds true.
Proof. Note that condition [Q] is always satisfied when n = cx. Therefore, by virtue of Theorem 3.6.4, to prove the corollary it suffices to verify that the probability that the boundary {x + gk} will be crossed after a time n = tx, where t → ∞ together with x (and slowly enough), is negligibly small compared with xV(x). But this follows immediately from Theorem 3.2.1(ii) (see (3.2.3)). The corollary is proved.

It is not difficult to obtain an analogue of Corollary 3.6.6 for regularly varying sequences {gk}. For simplicity, we will restrict ourselves to the case gk = gk^γ, γ ∈ (0, 1).

Theorem 3.6.7. Let the following conditions be met: [<, =] with α ∈ (1, 2), gk = gk^γ, γ ∈ (1/α̌, 1), α̌ = min{α, β}, g > 0. Then, as x → ∞,

P(sup_{k≥0}(Sk − gk) ≥ x) ∼ Σ_{k=1}^∞ V(x + gk) ∼ x^{1/γ} V(x) ∫_0^∞ (1 + gu^γ)^{−α} du.   (3.6.8)

Proof. For n = x^{α̌−ε}, ε > 0, condition [Q] is met and so by Theorem 3.6.4

P(sup_{k≤n}(Sk − gk) ≥ x) ∼ Σ_{k=1}^n V(x + gk),

where, after the change of variables t = ux^{1/γ}, we obtain

Σ_{k=1}^n V(x + gk) ∼ ∫_0^n V(x + gt^γ) dt = x^{1/γ} ∫_0^{nx^{−1/γ}} V(x(1 + gu^γ)) du
 ∼ x^{1/γ} V(x) ∫_0^{x^{α̌−ε−1/γ}} (1 + gu^γ)^{−α} du.

Since α̌ − ε − 1/γ > 0 for ε < α̌ − 1/γ, it may be seen that the probability P(sup_{k≤n}(Sk − gk) ≥ x) is asymptotically equivalent to the right-hand side of (3.6.8). It remains to show that the probability that the boundary {x + gk^γ} will be crossed after a time n = x^{α̌−ε} is o(x^{1/γ}V(x)). This can be done in exactly the same way as in the proof of Theorem 3.2.1(ii). One just has to put nk = x^{1/γ}2^{(k−1)/γ}, k = 1, 2, . . . The theorem is proved.

We can draw a number of conclusions about the distribution of S̄n(−a) from the above results. It is well known that, for a > 0, in the normal deviations zone the distribution of S̄n(−a) is close to that of an + Sn (see [142]). This proximity takes place in the large deviations zone as well. More precisely, Theorem 3.6.4 implies the following result.
Corollary 3.6.8. Let the following conditions be met: [<, =], α ∈ (1, 2), [Q], the function g(k), k ≤ n, is non-increasing, g(n) = cx. Then

P(max_{k≤n}(Sk − g(k)) ≥ 0) = nV(g(n))(1 + o(1)),

where the remainder term o(1) is uniform in x and n such that nV(x) ≤ ε → 0. In particular, for g(k) = a(n − k) + x with a fixed a > 0, one has

P(max_{k≤n}(Sk + ak) − an ≥ x) = nV(x)(1 + o(1))   (3.6.9)

uniformly in x and n such that nV(x) ≤ ε → 0.

Corollary 3.6.8 follows from Theorem 3.6.4(i) and the obvious fact that, for non-increasing {g(k)}, one has g∗(j) = g(n), j ≤ n. The assertion (3.6.9) could also be obtained from the inequalities

Sn ≤ max_{k≤n}(Sk + ak) − an ≤ S̄n
and Theorems 3.4.4 and 3.5.1. It is also possible to obtain refinements of the representation (3.6.9) using an argument similar to those employed in Theorems 3.4.4 and 3.5.1. We will illustrate this in more detail in §§ 4.6 and 5.7.

To prove Theorems 3.6.1 and 3.6.4 we will use Theorem 3.2.1.

Proof of Theorem 3.6.1. Let

Gn := {S̄n(a) ≥ x},   Bj(v) := {ξj < y + jva},   B(v) := ∩_{j=1}^n Bj(v),   m := min{n, x/a}.

It will be convenient for us to state a number of intermediate results in the form of lemmata.

Lemma 3.6.9. Let conditions [<, <] with α ∈ (1, 2) and [Q] with n replaced by m = min{n, x/a} be satisfied. For all n and v ≤ min{1/4r, (r − 2)/2r}, r > 5/4, one has

P(Gn) = Σ_{j=1}^n P(Gn B̄j(v)) + O((mV(x))²).   (3.6.10)

Proof. Clearly

P(Gn) = P(Gn B(v)) + P(Gn B̄(v)).   (3.6.11)
As in (3.4.13), for the last term we can write

Σ_{j=1}^n P(Gn B̄j(v)) ≥ P(Gn B̄(v)) ≥ Σ_{j=1}^n P(Gn B̄j(v)) − Σ_{i<j≤n} P(Gn B̄i(v)B̄j(v))
 = Σ_{j=1}^n P(Gn B̄j(v)) + O((mV(x))²),   (3.6.12)

where the form of the remainder follows from the bounds

Σ_{i<j≤n} P(Gn B̄i(v)B̄j(v)) ≤ (Σ_{j=1}^n P(B̄j(v)))²

and

Σ_{j=1}^n P(B̄j(v)) ≤ Σ_{j=1}^n V(y + jva) ∼ ∫_0^n V(y + vat) dt
 ≤ min{ nV(y), (1/(va)) ∫_y^∞ V(u) du } ≤ cmV(x).   (3.6.13)

Applying Theorem 3.2.1(i) with v ≤ min{1/4r, (r − 2)/2r} (which ensures that r0 ≥ 2) completes the proof of the lemma.

Next we will obtain a representation for the summands in (3.6.10). Put

S̄^{(j)}_{n−j}(a) := max{0, ξ_{j+1} − a, . . . , Sn − Sj − (n − j)a},
Z_{j,n}(a) := S_{j−1} + S̄^{(j)}_{n−j}(a).

Lemma 3.6.10. Let δ ∈ (0, 1) and a0 ∈ (0, ∞) be fixed. Then, uniformly in a ∈ (0, a0),

P(Gn B̄j(v)) = E[V(x + aj − Z_{j,n}(a)); Z_{j,n}(a) ≤ δ(x + aj)]
 + O(V(x + aj) min{j, x/a}V̌(x) + min{n, x/a + j}V²(x + aj)).   (3.6.14)

Moreover, for z ≥ σ̂(j),

P(Z_{j,n}(a) ≥ z) ≤ c[j + min{n, z/a}]V(z),
P(|Z_{j,n}(a)| ≥ z) ≤ c[min{n, z/a}V(z) + jV̌(z)].   (3.6.15)
Random walks with finite mean and infinite variance
(j), Proof. First we will obtain bounds for the distribution of Zj,n (a). For z σ we find from Corollary 3.1.2 and Theorem 3.2.1(ii) that P(Zj,n (a) z) P(Sj−1 z/2) + P(S n−j (a) z/2) c jV (z) + mV (z) . Further, since (j)
|Zj,n (a)| |Sj−1 | + S n−j (a), for z σ (j) we have P(|Zj,n (a)| z) P(|Sj−1 | z/2) + P(S n (a) z/2) # $ c j V (z) + min{n, z/a}V (z) . The inequalities (3.6.15) are proved. Now we will establish (3.6.14). Clearly P(Gn B j (v)) = P(S n (a) x, ξj y + jva) = P(S n (a) x, ξj > y + jva, S j−1 (a) < x) + P(S n (a) x, ξj y + jva, S j−1 (a) x) = P(S n (a) x, ξj y + jva, S j−1 (a) < x) + ρn,j,x , (3.6.16) where, by Theorem 3.2.1(ii), ρn,j,x cV (y + jva) min{j, x/a}V (x).
(3.6.17)
Further, P S n (a) x, ξj y + jva, S j−1 (a) < x (j) = P Sj − aj + S n−j (a) x, ξj y + jva, S j−1 (a) < x = P ξj + Zj,n (a) x + aj, ξj y + jva + O min{j, x/a}V (x)V (y + jva) = P ξj + Zj,n (a) x + aj, ξj y + jva, Zj,n (a) < x + aj − y − jva + O min{j, x/a}V (x)V (x + aj) + min{n, x/a + j}V 2 (x + aj) . (3.6.18) To obtain the last relation, we used (3.6.15) and the fact that, for the chosen values of r and v, c1 (x + aj) y + jva c2 (x + aj), c3 (x + aj) x + aj − y − jva c4 (x + aj). The event {ξj y + jva} under the probability symbol on the right-hand side of (3.6.18) is redundant. Therefore, owing to the independence of the r.v.’s ξj
3.6 Asymptotics of P(S(a) x) and the boundary problem
161
and Zj,n (a), the probability on the final right-hand side of (3.6.18) can be represented (up to an additive term O min{n, x/a + j}V 2 (x + aj) ) as $ # E V (x + aj − Zj,n (a)); Zj,n (a) δ(x + aj) . Collecting bounds from (3.6.17) and (3.6.15), we obtain (3.6.14). Lemma 3.6.10 is proved. We now return to the proof of Theorem 3.6.1. Let x(j) := δ(x + aj). Represent the expectation in (3.6.14) as a sum Ej,1 + Ej,2 , where # $ Ej,1 := E V (x + ja − Zj,n (a)); Zj,n (a) < −x(j) , # $ Ej,2 := E V (x + ja − Zj,n (a)); |Zj,n (a)| x(j) , and note that, since E|ξ| < ∞, the deviations x + aj of Sj have the property that maxj0 j V (x + aj) → 0 as x → ∞, so that one can use the inequality (3.1.9) for them. Hence, owing to Zj,n (a) Sj−1 , we obtain Ej,1 cV (x + aj) P(Sj−1 < −x(j)) < c1 jV (x + aj)W (x + aj). (3.6.19) Moreover, by virtue of (3.6.15), for δ → 0 slowly enough we have Ej,2 = V (x + aj)(1 + o(1)). So, for m = min{n, x/a} we get n
Ej,1 < c1 W (x)
j=1
n
jV (x + aj) ∼ cW (x)mV (x) = o(mV (x))
j=1
(cf. (3.6.13)) and n
Ej,2 =
j=1
n
V (x + aj) (1 + o(1)),
j=1
where c1 mV (x) <
n j=1
V (x + aj) ∼
1 a
x+an
V (u) du < c2 mV (x). x
Summing the remainder terms in (3.6.14) over j yields a term O m2 V 2 (x) = o(mV (x)). Hence from (3.6.11) and Lemmata 3.6.9 and 3.6.10 we obtain the first assertion of Theorem 3.6.1. (ii) Using condition [Dh,0 ] with h = 1 (see Remark 3.4.8), one can write # $ Ej,2 = E V (x + aj − Zj,n (a)); |Zj,n (a)| x(j) & % Zj,n (a) (3.6.20) ; |Zj,n (a)| x(j) . = V (x + ja) E 1 + O x + ja
162
Random walks with finite mean and infinite variance
(Note that, had we assumed that condition [Dh,0 ] is met with 1 < h 2 then (j) on the right-hand side of (3.6.20) a term proportional to ES n−j (a)/(x + aj) would also appear, but the latter expression tends to infinity as n − j → ∞ since ES ∞ (a) = ∞ in the case Eξ 2 = ∞.) Now introduce the quantities 0 Tj,n := P(|Zj,n (a)| x(j)), 1 := E[Zj,n (a); |Zj,n (a)| x(j)], Ij,n
(cf. (3.5.15), (3.5.16) in the proof of Theorem 3.5.1), so that the expectation in (3.6.20) can be written as 0 1 + O (x + aj)−1 Ij,n 1 − Tj,n . 0 is part We need to obtain an upper bound for this expression. A bound for Tj,n of (3.6.15). Using that result, we obtain n
∑_{j=1}^n V(x + aj) T^0_{j,n} ≤ c ∫_1^n min{n, x/a + t} V²(x + at) dt ≤ c1 m²V²(x).
Now consider the bounds related to I^1_{j,n}. For m(t) := min{n, t/a},
I^{1,+}_{j,n} := −∫_0^{x(j)} t dP(Zj,n ≥ t) ≤ ∫_0^{x(j)} P(Zj,n ≥ t) dt
≤ ∫_0^{x(j)} P(Sj−1 ≥ t/2) dt + ∫_0^{x(j)} P(S̄n−j(a) ≥ t/2) dt
≤ 2E(Sj−1; Sj−1 ≥ 0) + c ∫_0^{x(j)} m(t)V(t) dt,
where, for mj = min{n, x + aj}, owing to Theorem 1.1.4(iv) we have
∫_0^{x(j)} m(t)V(t) dt ≤ c min{x²(j)V(x(j)), n²V(n)} = c mj²V(mj)
(recall that in part (ii) of Theorem 3.6.1 we assumed that a a1 > 0).
3.6 Asymptotics of P(S(a) x) and the boundary problem
163
Similarly,
I^{1,−}_{j,n} := ∫_{−x(j)}^0 |t| dP(Zj,n < t) ≤ ∫_{−x(j)}^0 P(Zj,n < t) dt ≤ ∫_{−x(j)}^0 P(Sj−1 < u) du ≤ E(|Sj−1|; Sj−1 < 0).
Therefore
I^1_{j,n} = I^{1,+}_{j,n} + I^{1,−}_{j,n} ≤ 2E|Sj−1| + c mj²V(mj).
Here, to simplify our argument, we will assume that condition [=, =] is met. A remark on how to obtain the necessary bound in the case where only condition [Q] is satisfied can be found at the end of the proof. As before, let σ̂(n) = max{σ(n), σW(n)}. If condition [=, =] is satisfied then Sn/b(n) converge in distribution to an r.v. ζ following a stable distribution and, moreover, E|Sn|/b(n) → E|ζ| < ∞ (cf. (3.5.18)), and the quantities b(n) and σ̂(n) are of the same order of magnitude. Hence E|Sj−1| < c σ̂(j) and
∑_{j=1}^n V(x + aj) I^1_{j,n}/(x + aj) ≤ c1 ∑_{j=1}^n V(x + aj)σ̂(j)/(x + aj) + c2 ∑_{j=1}^n V(x + aj)mj²V(mj)/(x + aj). (3.6.21)
The first sum on the right-hand side does not exceed
c3 (V(x)/x) min{nσ̂(n), xσ̂(x)} = c3 (V(x)/x) mσ̂(m),
while the second is bounded by
c4 (V(x)/x) min{n³V(n), x³V(x)} = c4 (V(x)/x) m³V(m).
Collecting the above bounds and taking into account that mV (x) = o(σ(m)/x), we obtain P(Gn ) =
∑_{j=1}^n V(x + aj) + O(m²V²(x) + (V(x)/x)mσ̂(m) + (V(x)/x)m³V(m)) = ∑_{j=1}^n V(x + aj) + O(mσ̂(m)x⁻¹V(x)).
This proves (3.6.2). If condition [=, =] is not satisfied then one should use,
instead of (3.6.21), the bounds for the right and left distribution tails of Sj−1 from (3.1.9) and Theorem 3.4.4. Theorem 3.6.1 is proved.
Proof of Theorem 3.6.4. Now let
Gn := {max_{k≤n}(Sk − g(k)) ≥ 0},
Bj := {ξj ≥ y},  B := ⋃_{j=1}^n Bj.
Since min_{k≤n} g(k) = cx and, without loss of generality, we can assume that c = 1, we have P(GnB̄) ≤ P ≡ P(S̄n ≥ x; B̄), where P ≤ c(nV(y))^r by Theorem 3.1.1. Therefore P(Gn) = P(GnB) + O((nV(y))^r). Moreover, cf. (3.4.14) and (3.5.5), we have P(GnB) =
∑_{j=1}^n P(GnBj) + O((nV(x))²). (3.6.22)
Taking r = 2, we obtain from this that P(Gn ) =
∑_{j=1}^n P(Gn; ξj ≥ y) + O((nV(y))²). (3.6.23)
Next consider P(Gn; ξj ≥ y). We will use an argument quite similar to that employed in the proofs of Theorems 3.4.4 and 3.5.1. First, using the independence of ξj and S̄j−1, we can infer from Theorem 3.1.1 that
P(Gn; ξj ≥ y) = P(Gn; ξj ≥ y, S̄j−1 < x) + P(Gn; ξj ≥ y, S̄j−1 ≥ x)
= P(Gn; ξj ≥ y, S̄j−1 < x) + O(jV²(x)). (3.6.24)
Note that if the event Gn occurs and S̄j−1 < x then max_{j≤k≤n}(Sk − g(k)) ≥ 0. Introduce the r.v.'s
Mj,n := max_{0≤k≤n−j}(Sk+j − Sj − g(k + j)) + g∗(j) = max_{0≤k≤n−j}(Sk+j − g(k + j)) + g∗(j) − Sj.
Again using the bound
P(ξj ≥ y, S̄j−1 ≥ x) = O(jV²(x))
(3.6.25)
and setting Gj,n := {max_{j≤k≤n}(Sk − g(k)) ≥ 0}, we obtain from (3.6.24) that
P(Gn; ξj ≥ y) = P(Gj,n; ξj ≥ y, S̄j−1 < x) + O(jV²(x))
= P(Gj,n; ξj ≥ y) + O(jV²(x))
= P(Sj + Mj,n ≥ g∗(j), ξj ≥ y, Sj−1 + Mj,n < x/2) + O(jV²(x))
+ P(Sj + Mj,n ≥ g∗(j), ξj ≥ y, Sj−1 + Mj,n ≥ x/2). (3.6.26)
Now put
Zj,n := Sj−1 + Mj,n,
j = 1, . . . , n.
(3.6.27)
Clearly, Mj,n ≤ S̄^{(j)}_{n−j} and therefore
Zj,n ≤ Sj−1 + S̄^{(j)}_{n−j} ≤_d S̄n−1. (3.6.28)
However, there exists a kj ∈ {0, 1, ..., n − j} such that g(kj + j) − g∗(j) = 0 and
Mj,n ≥ min_{0≤k≤n−j}(Sk+j − Sj).
Hence
Zj,n ≥_d min_{k≤n−1} Sk. (3.6.29)
Since, from the above discussion, P(ξj ≥ y, Zj,n ≥ x/2) = O(nV²(x)) and the events {Sj + Mj,n ≥ g∗(j)} and {ξj < x/2, Zj,n < x/2} are mutually exclusive (recall that y = x/2, g∗(j) ≥ x), we obtain from (3.6.26) that
P(Gn; ξj ≥ y) = P(ξj + Zj,n ≥ g∗(j), Zj,n < x/2) + O(nV²(x)).
The probability on the right-hand side can be rewritten as
P(ξj + Zj,n ≥ g∗(j), Zj,n < x/2)
= E[E[I(ξj ≥ g∗(j) − Zj,n) | Sj−1, ξj+1, ..., ξn]; Zj,n < x/2]
= E[V(g∗(j) − Zj,n); Zj,n < x/2], (3.6.30)
so that from (3.6.23) we have P(Gn) =
∑_{j=1}^n E[V(g∗(j) − Zj,n); Zj,n < x/2] + O(nV²(x)). (3.6.31)
As in the proofs of the previous theorems, we split the expectation on the right-hand side of (3.6.30) into two parts, Ej,1 and Ej,2:
Ej,1 := E[V(g∗(j) − Zj,n); Zj,n < −x/2] = O(nV(x)V(x)) (3.6.32)
(by virtue of (3.6.29) and (3.1.9)) and
Ej,2 := E[V(g∗(j) − Zj,n); |Zj,n| ≤ x/2].
(3.6.33)
Now P(|Zj,n| ≤ x/2) = 1 − P(Zj,n > x/2) − P(Zj,n < −x/2), and, for any fixed v, the probabilities P(Zj,n > xv) and P(Zj,n < −xv) can be estimated using the inequalities (3.6.28), (3.6.29) and (3.1.9):
P(|Zj,n| ≥ xv) ≤ P(max_{k≤n−1}|Sk| ≥ xv) = O(nV(x)).
Thus Ej,2 = V(g∗(j))(1 + o(1)), which, together with (3.6.31) and (3.6.32), proves the first assertion of Theorem 3.6.4.
To prove the second, consider in more detail the term (3.6.33) which, by virtue of condition [D1,0] (see Remark 3.4.8), can be rewritten as
Ej,2 = E[V(g∗(j) − Zj,n); |Zj,n| ≤ x/2] = V(g∗(j)) E[1 + O(Zj,n/g∗(j)); |Zj,n| ≤ x/2]. (3.6.34)
(If condition [Dh,0] with h > 1 is met then an additional term EMj,n will appear on the right-hand side but, in the case of a general boundary {g(k)}, computing this term is rather difficult.) The computation of the expectation on the right-hand side of (3.6.34) using the inequalities (3.6.28), (3.6.29) can be carried out in exactly the same way as in Theorem 3.5.1. Theorem 3.6.4 is proved.

3.7 Integro-local theorems on large deviations of Sn for index −α, α ∈ (0, 2)

Let Δ[x) := [x, x + Δ) be a half-open interval of length Δ > 0 with left end point x. The present section deals with the asymptotics of the probabilities
P(Sn ∈ Δ[x))
(3.7.1)
as x → ∞ for various Δ = o(x). It is natural to term the corresponding assertions integro-local theorems, retaining the term ‘local theorem’ for assertions relating to the density of Sn (when it exists) and to the probabilities P(Sn = k) in the lattice case. Integro-local theorems are of independent interest but can also be very useful for finding the asymptotic behaviour, as x → ∞, of integrals of the form E(f (Sn ); Sn x) for broad classes of functions f . In Chapter 6 we will use them to describe the large deviation probabilities for random walks with regular exponentially decaying jump distributions. In the multivariate case, integro-local theorems are the most convenient and natural type of assertion on the asymptotics of large deviation probabilities for sums of random vectors (see Chapter 9).
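Though the book's analysis is entirely analytic, the flavour of such an integro-local asymptotics can be previewed numerically. The following sketch (not from the book; it assumes standard Pareto jumps with V(t) = t^{−α} for t ≥ 1 and α = 0.7 < 1, so no centering is needed) compares a Monte Carlo estimate of P(Sn ∈ Δ[x)) with the first-order approximation n(V(x) − V(x + Δ)) ≈ nΔ·αV(x)/x:

```python
import numpy as np

# Illustrative sketch, not from the book: jumps are standard Pareto,
# P(xi >= t) = t**(-alpha) for t >= 1, i.e. V(t) = t**(-alpha).
rng = np.random.default_rng(1)
alpha, n, x, delta, N = 0.7, 5, 100.0, 10.0, 200_000

# numpy's pareto() samples the Lomax distribution on [0, inf);
# shifting by 1 gives the classical Pareto on [1, inf).
xi = rng.pareto(alpha, size=(N, n)) + 1.0
S = xi.sum(axis=1)

p_mc = np.mean((S >= x) & (S < x + delta))       # P(S_n in [x, x + delta))
p_th = n * (x**-alpha - (x + delta)**-alpha)     # ~ n * delta * alpha * V(x) / x
print(p_mc, p_th)
```

At moderate x the two numbers agree only up to a factor close to 1; the integro-local theorems of this section assert that the ratio tends to 1 as x → ∞ with Δ = o(x).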
In what follows, we will be using the smoothness condition [D(1,q) ] in a form somewhat different from the previous one (the interpretations of the function q and the increment Δ being different from those in (3.4.5)). This is merely for convenience of exposition. Thus we write [D(1,q) ]
As t → ∞, for Δ ∈ [Δ1, Δ2], Δ1 ≤ Δ2 = o(t), one has
V(t) − V(t + Δ) = V1(t)[Δ(1 + o(1)) + o(q(t))],  V1(t) = αV(t)/t, (3.7.2)
where the term o(q(t)) does not depend on Δ and q(t) is an r.v.f. of the form q(t) = t^{−γq}Lq(t), γq ≥ −1, Lq(t) being an s.v.f. Here the remainder o(1) is assumed to be uniform in Δ ∈ [Δ1, Δ2] in the following sense: for any fixed function δu ↓ 0 as u ↑ ∞, there exists a function εu ↓ 0 as u ↑ ∞ such that o(1) in (3.7.2) can be replaced by ε(t, Δ) ≤ εu for Δ2 ≤ δu·u and all t ≥ u.
If, in the important special case Δ = Δ1 = Δ2 = const (Δ an arbitrary fixed number), we have V(t) − V(t + Δ) = V1(t)(Δ + o(1)) as t → ∞ then clearly condition [D(1,q)] is satisfied with q(t) ≡ 1 (here the assumption on the uniformity of o(1) disappears).
The relation (3.7.2) follows from (3.4.5): we just substitute the quantities q(t)/t and Δ/t for q(t) and Δ respectively in (3.4.5) (with V1(t) = L1(t)V(t)/t). In particular, when Δ ≫ q(t) we obtain from (3.7.2) the representation V(t) − V(t + Δ) = V1(t)Δ(1 + o(1)).
Now let the following condition be satisfied.
[D] The s.v.f. L(t) is differentiable for t ≥ t0 > 0 and, moreover, L′(t) = o(L(t)/t) as t → ∞.
Under condition [D], the function V(t) is also differentiable for t ≥ t0 and one can identify the function V1(t) in (3.7.2) with the derivative V1(t) = −V′(t) (for t ≥ t0) and put q(t) ≡ 0.
In the lattice (arithmetic) case, when ξ is integer-valued with lattice span equal to 1, condition [D(1,q)] stipulates that (3.7.2) holds for Δ = 1.
Observe also that the desired asymptotics for (3.7.1) could be obtained from Theorem 3.4.4 but only when Δ ≫ σ̂(n). Now recall the notation α̌ = min{α, β}. In what follows, we will consider the following two alternatives:
(1) α < 1,  x > n^{1/γ}; (3.7.3)
(2) α ∈ (1, 2),  Eξ = 0,  x > n^{1/γ}, (3.7.4)
where γ < α̌ is an arbitrary fixed number. Now we can state the main assertion.
Theorem 3.7.1. Assume that conditions [<, =] and [D(1,q)] in the form (3.7.2) are satisfied. Then, in the cases (3.7.3), (3.7.4), for Δ ∈ [Δ1, Δ2], Δ2 = o(x), Δ1 ≥ max{cq(x), x^{−γ0}} for an arbitrary fixed γ0 ≥ −1, the following relation holds true. As N → ∞,
P(Sn ∈ Δ[x)) = nV1(x)Δ(1 + o(1)),
V1(x) = αV(x)/x, (3.7.5)
where the term o(1) in (3.7.5) is uniform in x, n and Δ ∈ [Δ1, Δ2] such that
x ≥ max{N, n^{1/γ}},  Δ1 ≥ max{cq(x), x^{−γ0}},  Δ2 ≤ xεN
for a fixed function εN ↓ 0 as N ↑ ∞. In the lattice (arithmetic) case, assertion (3.7.5) holds true for integer-valued x and Δ ≥ max{1, q(x), x^{−γ0}}.
By the uniformity of o(1) in Theorem 3.7.1 we understand the existence of a function δ(N) ↓ 0 as N ↑ ∞ (depending on εN) such that the term o(1) in (3.7.5) can be replaced by a function δ(x, n, Δ), with |δ(x, n, Δ)| ≤ δ(N).
If we had assumed in the theorem that n → ∞ then, when constructing the uniformity domain, we could have replaced the inequality x ≥ max{N, n^{1/γ}} by x ≥ n^{1/γ}. This would have ensured the required convergence x → ∞. The parameter N was added to cover the case when n remains bounded as x → ∞.
If q(x) → 0 as x → ∞, γ0 > 0 then Δx := max{q(x), x^{−γ0}} → 0, and Theorem 3.7.1 implies an 'almost local theorem': for Δ = Δx and x → ∞, n ≤ x^γ,
P(Sn ∈ Δ[x))/Δ ∼ nV1(x). (3.7.6)
When q(t) = t^{−γq}, one can put γ0 := γq and then the range of Δ will be of the form Δ ≥ x^{−γ0}. If Δ → 0 at a yet faster rate then the relation (3.7.6) does not need to be true (as the distributions of ξ and Sn do not necessarily have densities). However, if condition [D] is satisfied then the density V′(t) ∼ −αV(t)/t will exist for t ≥ t0 and a stronger assertion will hold true.
Theorem 3.7.2. Let conditions [<, =], [D] and [Q] be met. Then the distribution of Sn can be represented as a sum of two measures, P(Sn ∈ ·) = Pn,1(·) + Pn,2(·), where the measure Pn,1, for any fixed r > 1 and all large enough x, has the property
Pn,1([x, ∞)) < (nV(x))^r.
The measure Pn,2 is absolutely continuous w.r.t. the Lebesgue measure, with the density Pn,2(dx)/dx = −nV′(x)(1 + o(1)) as x → ∞. The term o(1) is uniform in x and n such that nV(x) < εx (or σ̂(n)/x < εx) for any fixed function εx → 0 as x → ∞.
Note that Pn,2([x, ∞)) ∼ nV(x) and therefore that the distribution of Sn can be written in the form
P(Sn ≥ x) = Pn,2([x, ∞))[1 + O((nV(x))^r)]
for any fixed r > 0, x → ∞, nV(x) < εx → 0.
Proof. The proof of Theorem 3.7.1 follows the scheme of that of Theorem 3.4.4. For y < x, put
Gn := {Sn ∈ Δ[x)},
Bj = {ξj < y},  B = ⋂_{j=1}^n Bj. (3.7.7)
Then
P(Gn) = P(GnB) + P(GnB̄), (3.7.8)
where
∑_{j=1}^n P(GnB̄j) ≥ P(GnB̄) ≥ ∑_{j=1}^n P(GnB̄j) − ∑_{i<j≤n} P(GnB̄iB̄j). (3.7.9)
We will split the proof into three steps: (1) bounding P(GnB), (2) bounding P(GnB̄iB̄j), i ≠ j, and (3) evaluating P(GnB̄j).
(1) Bounding P(GnB). We will make use of the crude inequality
P(GnB) ≤ P(S̄n ≥ x; B).
(3.7.10)
By Theorems 2.2.1 and 3.1.1, for x > max{N, n^{1/γ}}, N → ∞, we have
P(GnB) < c[nV(y)]^r,  r = x/y. (3.7.11)
Choose r so that
(nV(x))^r ≤ nV1(x)Δ (3.7.12)
for x ≥ max{N, n^{1/γ}}, Δ ≥ max{x^{−γ0}, q(x)} and γ < α̌ (for such x and n, condition [Q] is always met). If n → ∞ then one can assume that N < n^{1/γ} ≤ x. Putting n = x^γ and comparing the powers of x on both sides of (3.7.12), we see that (3.7.12) will hold provided that
r − 1 > (1 + γ0)/(α − γ). (3.7.13)
For n < xγ the inequality (3.7.12) will hold true all the more. Hence for r that satisfy the inequality (3.7.13) we have P(Gn B) = o(nV1 (x)Δ).
(3.7.14)
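The role of the crude truncation bound (3.7.10)–(3.7.11) can be illustrated by simulation (a sketch, not from the book, with standard Pareto jumps, α = 0.7, and r = x/y = 2): once all jumps are kept below y = x/2, reaching level x requires several moderately large jumps at once, and the probability drops to the order of (nV(y))².

```python
import numpy as np

# Illustrative sketch, not from the book: with every jump below y = x/2,
# P(S_n >= x) is of order (n V(y))**2 rather than n V(x).
rng = np.random.default_rng(2)
alpha, n, N = 0.7, 5, 300_000
x, y = 400.0, 200.0

xi = rng.pareto(alpha, size=(N, n)) + 1.0    # standard Pareto jumps
S = xi.sum(axis=1)
no_big_jump = xi.max(axis=1) < y

p_trunc = np.mean((S >= x) & no_big_jump)    # P(S_n >= x; all xi_j < y)
bound = (n * y**-alpha) ** 2                 # (n V(y))**r with r = x / y = 2
one_jump = n * x**-alpha                     # n V(x), the one-jump term
print(p_trunc, bound, one_jump)
```

The simulated truncated probability falls well below both the bound (nV(y))² and the one-jump term nV(x), which is what makes the error term in (3.7.14) negligible.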
(2) Bounding P(GnB̄iB̄j). It suffices to bound P(GnB̄n−1B̄n). Let
δ := 1/r < 1/2,  Hk := {v : v < (1 − kδ)x + Δ},  k = 1, 2.
Then
P(GnB̄n−1B̄n) = ∫_{H2} P(Sn−2 ∈ dz) ∫_{H1} P(z + ξ ∈ dv, ξ ≥ δx) P(v + ξ ∈ Δ[x), ξ ≥ δx).
(3.7.15)
Since in the region H1 we have x − v > δx − Δ, by condition [D(1,q)] the last factor on the right-hand side of (3.7.15) has the form ΔV1(x − v)(1 + o(1)) ≤ cΔV1(x) as x → ∞. Hence, for large enough x, the integral over H1 in (3.7.15) will not exceed the quantity cΔV1(x) P(z + ξ ∈ H1, ξ ≥ δx) ≤ cΔV1(x)V(δx). It is obvious that the integral over H2 in (3.7.15) admits the same upper bound. This implies that
∑_{i<j≤n} P(GnB̄iB̄j) ≤ c1 Δn²V1(x)V(x) = o(ΔnV1(x)). (3.7.16)
(3) Evaluating P(GnB̄j). Owing to (3.7.8), (3.7.12) and (3.7.16), the summands P(GnB̄j) will determine the main term in the asymptotics of P(GnB̄) and P(Gn). By virtue of condition [D(1,q)],
P(GnB̄n) = ∫_{H1} P(Sn−1 ∈ dz) P(ξ ∈ Δ[x − z), ξ ≥ δx)
= ∫_{H1} P(Sn−1 ∈ dz) P(ξ ∈ Δ[x − z))
= ∫_{H1} P(Sn−1 ∈ dz) V1(x − z)[(1 + o(1))Δ + o(q(x − z))]. (3.7.17)
As before, set
σ̂(n) := max{V^{(−1)}(1/n), W^{(−1)}(1/n)},
where V^{(−1)} and W^{(−1)} are the functions inverse to V and W respectively. Then we obtain from Corollaries 2.2.4 and 3.1.2 (see also (3.1.9)) that P(|Sn−1| > Mσ̂(n)) → 0 as M → ∞. Therefore
E[V1(x − Sn−1); |Sn−1| < Mσ̂(n)] ∼ V1(x)
when M → ∞, Mσ̂(n) = o(n^{1/γ}), x ≥ n^{1/γ}, γ < α̌ (in this case x ≫ Mσ̂(n)). Moreover, as M → ∞, one obviously has that
E[V1(x − Sn−1); Sn−1 ∈ (−∞, −Mσ̂(n))] = o(V1(x)),
E[V1(x − Sn−1); Sn−1 ∈ (Mσ̂(n), (1 − δ)x + Δ)] = o(V1(x)).
The above discussion implies that
∫_{H1} P(Sn−1 ∈ dz) V1(x − z) = E[V1(x − Sn−1); Sn−1 ≤ x(1 − δ) + Δ] ∼ V1(x).
Similarly,
∫_{H1} P(Sn−1 ∈ dz) V1(x − z) q(x − z) < cV1(x)q(x).
Hence, for Δ ≥ q(x), by virtue of (3.7.17) we obtain P(GnB̄n) ≤ ΔV1(x)(1 + o(1)). Similarly, using (3.7.17) one finds that
P(GnB̄n) ≥ ∫_{−∞}^{(1−δ)x} P(Sn−1 ∈ dz) P(ξ ∈ Δ[x − z)) ∼ ΔV1(x). (3.7.18)
From (3.7.17) and (3.7.18) we obtain P(GnB̄n) = ΔV1(x)(1 + o(1)). Together with (3.7.8)–(3.7.12) and (3.7.16) this yields P(Gn) = ΔnV1(x)(1 + o(1)). The required uniformity of the bound o(1) is obvious from the above argument. The theorem is proved.
Remark 3.7.3. To bound P(GnB) (see step (1) of the proof of Theorem 3.7.1), instead of the crude inequality (3.7.10) one could use more precise approaches, which would yield stronger results. For more detail, see Remark 4.7.3 below.
Proof of Theorem 3.7.2. Using the set B defined in (3.7.7), put
Pn,1(A) := P(Sn ∈ A; B),
Pn,2(A) := P(Sn ∈ A; B̄).
The desired bound for Pn,1([x, ∞)) coincides with the bound P(GnB) ≤ c(nV(x))^r from the proof of Theorem 3.7.1 (see (3.7.11)). The evaluation of the density of Pn,2 is done in the same way as that of the probability P(GnB̄) in Theorem 3.7.1. One has (cf. (3.7.17))
P(Sn ∈ dx; B̄n)/dx = ∫_{−∞}^{(1−δ)x} P(Sn−1 ∈ dz) P(ξ ∈ dx − z)/dx = −∫_{−∞}^{(1−δ)x} P(Sn−1 ∈ dz) V′(x − z)
= −E[V′(x − Sn−1); Sn−1 ≤ (1 − δ)x].
The subsequent computation of the last integral does not differ from the respective argument in the proofs of Theorems 3.4.4, 3.7.1 etc. The estimation of P(Sn ∈ dx; B̄n−1B̄n)/dx can be dealt with in a similar way. The theorem is proved.
Remark 3.7.4. If n → ∞ and the order of magnitude of the deviations is fixed, say, by
x ∼ A(n) := n^{1/γ}LA(n),
(3.7.19)
where γ < α ˇ and LA is an s.v.f., then one can suggest another approach to proving integro-local theorems. With this approach, condition [D(1,q) ] (see (3.4.5)) can be substantially weakened to the following condition: [DA ] We assume that n → ∞ and condition [Rα,ρ ] with ρ > −1 is satisfied. Moreover, for Δn = εn b(n), where εn is a sequence converging to zero, we have V (x) − V (x + Δn ) = Δn V1 (x)(1 + o(1)),
V1 (x) = αV (x)/x. (3.7.20)
Condition (3.7.20) can be expressed in terms of the deviations x → ∞ only. Resolving the relation x ∼ A(n) in (3.7.19) for n, we obtain n ∼ A(−1) (x), Δn = εn b(A(−1) (x)). Taking εn to be an s.v.f., we arrive at the equality Δn = xγ/α LΔ (x), where LΔ is an s.v.f. Since 0 < γ/α < 1, one has, as x → ∞, xγ/α LΔ (x) = o(x),
xγ/α LΔ (x) → ∞,
and in this sense condition [DA ] is much weaker than condition [D(1,0) ], because it requires regular variation of the increments V (x) − V (x + Δ) only for very large Δ values. If condition [DA ] is satisfied then, for any fixed Δ > 0, the relation (3.7.5) holds true. In the arithmetic case, one should put Δ = 1. The scheme of the proof of this assertion remains the same: as before, one
should use (3.7.14) and then observe that the principal contribution to the main term P(GnB̄j) = P(Sn ∈ Δ[x), ξn ≥ y) comes from the integral
J := ∫_y^∞ P(ξ ∈ dt) P(Sn−1 ∈ Δ[x − t); |Sn−1| < Nn b(n)),
where Nn is an unboundedly increasing sequence. It is evident that
J ∼ ∫_{x−Nn b(n)}^{x+Nn b(n)} P(ξ ∈ dt) P(Sn−1 ∈ Δ[x − t)) ∼ ∑_{k∈In} ∫_{tk}^{tk+1} P(ξ ∈ dt + x) P(Sn−1 ∈ Δ[−t)),
where tk := kΔn and In := [−Nn/εn, Nn/εn]. If the ξi are non-lattice then, by the Stone–Shepp theorem (see [258, 259] and also § 8.4 of [32]),
P(Sn−1 ∈ Δ[−t)) ∼ (Δ/b(n)) f(−tk/b(n)),  t ∈ [tk, tk+1),
where f is the density of the stable distribution Fα,ρ. Therefore, the principal part of the integral in question is asymptotically equivalent to
∑_{k∈In} P(ξ ∈ [x + tk, x + tk+1)) (Δ/b(n)) f(−tk/b(n))
∼ Δ ∑_{k∈In} (Δn/b(n)) V1(x) f(−tk/b(n)) = ΔV1(x) ∑_{k∈In} εn f(−tk/b(n)) ∼ ΔV1(x).
Estimation of the P(GnB̄iB̄j) can be dealt with in a similar way (cf. (3.7.16)).
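The Riemann-sum step used above, ∑_{k∈In} εn f(−tk/b(n)) → ∫f = 1 as εn → 0 and Nn → ∞, is elementary and can be checked numerically. In the sketch below (not from the book) the standard normal density stands in for the stable density f, which has no convenient closed form:

```python
import math

# Illustrative sketch, not from the book: a Riemann sum of a probability
# density over the expanding grid t_k = k * eps approaches the total mass 1.
def riemann_mass(eps, cutoff):
    kmax = int(cutoff / eps)
    def f(u):  # standard normal density as a stand-in for the stable density
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(eps * f(k * eps) for k in range(-kmax, kmax + 1))

coarse = riemann_mass(0.5, 4.0)
fine = riemann_mass(0.01, 8.0)
print(coarse, fine)
```

Even the coarse grid already captures essentially all of the mass, which is why the choice of the sequence εn in the argument above is not delicate.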
3.8 Uniform relative convergence to a stable law

The assertions of this and the next sections are consequences of the results of §§ 2.2–2.5 and §§ 3.1, 3.3. They will be valid for values of the parameter α from the interval (0, 2). Denote by Nα,ρ the domain of normal attraction to the stable law Fα,ρ, i.e. the class of distributions F for which condition [Rα,ρ] holds with L(t) → L = const as t → ∞. For distributions F ∈ Nα,ρ, the inverse function F^{(−1)} has a simple explicit asymptotics:
F^{(−1)}(1/n) = b(n) ∼ (Ln)^{1/α},  V^{(−1)}(1/n) = σ(n) ∼ ρ+^{1/α}b(n) ∼ (Lnρ+)^{1/α}.
(3.8.1)
It is obvious that the stable distribution Fα,ρ itself also belongs to Nα,ρ, and that, for any F ∈ Nα,ρ with ρ > −1 and all v ≥ v0 > 0,
nV(vb(n)) ∼ nρ+F(vb(n)) ∼ ρ+v^{−α}  as n → ∞.
(3.8.2)
The property (3.8.2) enables one to obtain the following assertion on uniform relative convergence to a stable law.
Theorem 3.8.1. Let condition [Rα,ρ], ρ > −1, α ∈ (0, 2), be satisfied and let Eξ = 0 provided that E|ξ| < ∞. In this case, F ∈ Nα,ρ iff
sup_{t≥0} |P(Sn/b(n) ≥ t)/Fα,ρ,+(t) − 1| → 0 (3.8.3)
as n → ∞, where Fα,ρ,+(t) = Fα,ρ([t, ∞)).
The assertion of the theorem means that for F ∈ Nα,ρ the large deviation problem for P(Sn ≥ x) is, in a sense, non-existent: the limiting law Fα,ρ gives a good approximation P(Sn ≥ x) ∼ Fα,ρ,+(x/b(n)) uniformly in all x ≥ 0. In the central limit theorem on convergence to the normal law, this is only possible when the ξj are normally distributed.
An assertion of the form (3.8.3) (with a convergence rate estimate) also follows from the results of [24] but under the much stronger condition that the pseudomoments of the orders γ > α are finite: ∫|t|^γ |F − Fα,ρ|(dt) < ∞, which necessarily entails a high rate of convergence of F(t) − Fα,ρ(t) to zero as |t| → ∞.
Proof of Theorem 3.8.1. Sufficiency. Let F ∈ Nα,ρ. Theorems 2.6.1 and 3.4.1 imply (see (3.4.1)) that, for any sequence t = tn → ∞ and for x = sσ(n),
sup_{s≥t} |P(Sn ≥ x)/(nV(x)) − 1| → 0, (3.8.4)
P(Sn sb(n)) sup − 1 → 0, nV (sb(n)) st
where, by virtue of (3.8.2), nV (sb(n)) ∼ ρ+ s−α ∼ Fα,ρ,+ (s). So (3.8.4) can also be written as P(Sn sb(n)) sup − 1 → 0 Fα,ρ,+ (s) st
(3.8.5)
as n → ∞, t = tn → ∞. However, it follows from Theorem 1.5.1 and the continuity of Fα,ρ that, for any fixed t > 0,
sup_{0≤s≤t} |P(Sn ≥ sb(n))/Fα,ρ,+(s) − 1| → 0. (3.8.6)
This means that there exists a sequence tn → ∞ increasing sufficiently slowly that (3.8.6) remains valid with t replaced by tn. Together with (3.8.5), this proves (3.8.3).
Necessity. It follows from (3.8.3), (3.8.4) that
nV(tb(n)) ∼ ρ+t^{−α},
nF (tb(n)) ∼ t−α .
Hence F (tb) ∼ t−α F (b),
L(tb) ∼ L(b)
(3.8.7)
for any sequences t and b tending to infinity. But this is only possible when L(b) → L = const as b → ∞. Assuming the contrary – for instance, that L(b) → ∞ as b → ∞ – one can find a sequence b′ such that b′/b → ∞ and
L(b′) > L²(b).
(3.8.8)
Setting t := b′/b in (3.8.7) we obtain L(b′) ∼ L(b), which contradicts (3.8.8). The theorem is proved.
An assertion similar to Theorem 3.8.1 can also be obtained for the distribution of S̄n. First we observe that it follows from the 'invariance principle' for the case of convergence to stable laws (see § 1.6) that, as n → ∞,
S̄n/b(n) ⇒ ζ̄(1),
(3.8.9)
where ζ(u) is a stable process corresponding to the distribution Fα,ρ (for this process, ζ(1) has the distribution Fα,ρ), and ζ̄(t) = sup_{u≤t} ζ(u). Denote by Hα,ρ the distribution of ζ̄(1). Then, using an argument like the one above, it follows from Theorems 3.4.1, 2.6.1 and 3.8.1 and the relations (3.8.2) and S̄n ≥ Sn that
Hα,ρ,+(t) := Hα,ρ([t, ∞)) ∼ t^{−α}  as t → ∞.
(3.8.10)
Note that the convergence (3.8.9) can also be derived from the results of [141] (that paper also gives an explicit expression for Hα,ρ).
Theorem 3.8.2. Under the conditions of Theorem 3.8.1, F ∈ Nα,ρ iff
sup_{t>0} |P(S̄n ≥ tb(n))/Hα,ρ,+(t) − 1| → 0
as n → ∞.
Proof. The proof of Theorem 3.8.2 repeats that of Theorem 3.8.1. One just has to replace Sn by S̄n and Fα,ρ by Hα,ρ everywhere.
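A small simulation conveys the content of Theorem 3.8.1 (a sketch, not from the book): for a one-sided standard Pareto distribution, which lies in Nα,ρ with ρ = 1, L = 1 and b(n) = n^{1/α}, the normalized tail probabilities P(Sn ≥ s·b(n)) are already close to the limiting tail ≈ s^{−α} over a range of s:

```python
import numpy as np

# Illustrative sketch, not from the book: standard Pareto jumps, alpha = 0.7,
# so b(n) = n**(1/alpha) and the limiting tail behaves like s**(-alpha).
rng = np.random.default_rng(3)
alpha, n, N = 0.7, 50, 100_000
b_n = n ** (1.0 / alpha)

S = (rng.pareto(alpha, size=(N, n)) + 1.0).sum(axis=1)

ratios = [np.mean(S >= s * b_n) / s**-alpha for s in (10.0, 30.0)]
print(ratios)
```

The ratios stay near 1 across the range of s, in contrast with the normal case, where the relative error of the limiting approximation blows up in the large-deviation zone.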
3.9 Analogues of the law of the iterated logarithm in the case of infinite variance

The upper and lower bounds for the distributions of S̄n and Sn obtained in this chapter also enable one to establish assertions of the law of the iterated logarithm type for the sequence {Sn} in the case Eξj² = ∞.
Theorem 3.9.1. (i) Let condition [·, <], α < 1, or the conditions [<, <], α ∈ (1, 2), Eξj = 0, W(t) ≤ c1V(t) be satisfied. Then, for any ε > 0,
lim sup_{n→∞} Sn/(σ(n)(ln n)^{1/α+ε}) < 1  a.s. (3.9.1)
(ii) Let the conditions [<, ≶], α < 1, W(t) ≤ c1V(t) or the conditions [Rα,ρ], α ∈ (1, 2), ρ > −1, Eξ = 0 be satisfied. Then, for any ε > 0,
lim sup_{n→∞} Sn/(σ(n)(ln n)^{1/α−ε}) > 1  a.s. (3.9.2)
It is not hard to see that, since ε > 0 is arbitrary, the relations (3.9.1) and (3.9.2) are respectively equivalent to the assertions
lim sup_{n→∞} Sn/(σ(n)(ln n)^{1/α+ε}) = 0  a.s.,
lim sup_{n→∞} Sn/(σ(n)(ln n)^{1/α−ε}) = ∞  a.s.
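These a.s. growth bounds can be eyeballed in simulation (a sketch, not from the book; jumps are centred standard Pareto with α = 1.5 and EX = α/(α − 1) = 3, so that V(t) ≈ t^{−3/2} and σ(n) ≈ n^{2/3}): along simulated trajectories the running maximum stays below σ(n)(ln n)^{1/α+ε}.

```python
import numpy as np

# Illustrative sketch, not from the book: xi = X - 3 with X standard
# Pareto(alpha = 1.5), so E xi = 0, V(t) ~ t**(-1.5), sigma(n) ~ n**(2/3).
rng = np.random.default_rng(4)
alpha, eps, n, walks = 1.5, 0.5, 100_000, 10

bound = n ** (1.0 / alpha) * np.log(n) ** (1.0 / alpha + eps)
exceed = 0
for _ in range(walks):
    xi = rng.pareto(alpha, size=n) + 1.0 - 3.0
    if np.max(np.cumsum(xi)) >= bound:
        exceed += 1
print(exceed, bound)
```

With these parameters each trajectory exceeds the bound with probability of order nV(bound) ≈ 0.01, so exceedances are rare but not impossible, exactly as a lim sup statement with an arbitrarily small ε > 0 permits.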
Let ln⁺t := ln max{1, t}. Theorem 3.9.1 implies the following.
Corollary 3.9.2. Let the conditions [<, ≶], α < 1, W(t) ≤ c1V(t) or the conditions [Rα,ρ], α ∈ (1, 2), ρ > −1, Eξ = 0 be satisfied. Then
lim sup_{n→∞} (ln⁺Sn − ln σ(n))/ln ln n = 1/α  a.s. (3.9.3)
The relation (3.9.3) can also be written as
lim sup_{n→∞} (Sn/σ(n))^{1/ln ln n} = e^{1/α}  a.s. (3.9.4)
Observe that if, for the s.v.f. L(t) from the representation V (t) = t−α L(t), one has | ln L(t)| ln ln t as t → ∞ then the function Lσ (n) from the representation σ(n) = n1/α Lσ (n) will have the same property, and in (3.9.1)–(3.9.4) one can replace σ(n) by n1/α . The formulation (3.9.3) justifies, to some extent, the use of the term ‘the law of the iterated logarithm’, since it contains the scaling factor ln ln n; in the assertions for the sums Sn themselves (not for ln+ Sn ) it is absent. There is a large number of papers devoted to analogues of the law of the iterated logarithm in the infinite variance case (see e.g. the bibliographies in [190, 164]). However, in order to
obtain (3.9.4), in all of them rather strong conditions on the ξj are imposed – for instance, that their distribution belongs to the normal domain of attraction of a stable law: F ∈ Nα,ρ. Theorems 3.9.1–3.9.4 extend these results.
Proof of Theorem 3.9.1. If one follows the classical way of proving the law of the iterated logarithm using the Borel–Cantelli lemma then the problem reduces to the following (see e.g. Chapter 19 of [49]): to demonstrate (3.9.1), one has to show that
∑_k P(S̄nk ≥ xk) < ∞, (3.9.5)
where nk := A , xk := σ(nk )(ln nk )1/α+ε and A > 1 is an arbitrary fixed number. To prove (3.9.2), one has to establish that P(Snk − Snk−1 yk ) = ∞ k
k
or, equivalently, that
P(Smk yk ) = ∞,
(3.9.6)
k
where mk := nk − nk−1 = nk (1 − A−1 ) + i, i assumes the values 0 or 1 and yk := σ(nk )(ln nk )1/α−ε . First we prove (3.9.5) and (3.9.1). By virtue of Corollaries 2.2.4 and 3.1.2, for x σ(n) we have P(S n x) cnV (x). Putting x := σ(n)(ln n)1/α+ε , we obtain by Theorem 1.1.4(iii) that, for any fixed δ > 0, P(S n x) c(ln n)−(1/α+ε)(α−δ) as n → ∞. For δ := α2 ε/3 and small enough ε we have (1/α + ε)(α − δ) > 1 + αε/2,
P(S nk xk ) c1 k −(1+αε/2) .
This means that the series (3.9.5) converges, and hence (3.9.1) holds true. Now we prove (3.9.6) and (3.9.2). By virtue of Theorem 2.5.2 and Corollary 3.3.2, for x > σ(n) and m := n(1 − A−1 ) we have P(Sm x) cnV (x). Setting x := σ(n)(ln n)1/α−ε , one obtains P(Sm x) (ln n)−(1/α−ε)(α+δ) , where (1/α − ε)(α + δ) < 1 − αε/2 for δ := α2 ε/2. This yields P(Smk yk ) c1 k −(1−αε/2) , which means that the series (3.9.6) diverges and hence that (3.9.2) is true. The theorem is proved.
178
Random walks with finite mean and infinite variance
It follows from Theorem 3.9.1 that, provided that the conditions [<, ≶], α < 1 and W (t) cV (t) are satisfied, if for the random walk {Sk } there exists an ‘exact’ upper boundary, i.e. a function ψ(n) such that lim sup n→∞
Sn =1 ψ(n)
a.s.,
then this boundary is of the form σ(n)(ln n)1/α+o(1) . If, however, W (t) V (t) as t → ∞ then, under assumptions that are otherwise the same, one can only find ‘upper bounds’ for the upper boundary and the form of these bounds (when they exist) will, generally speaking, be different. First consider the case α < 1. Theorem 3.9.3. Let the following conditions be satisfied: [>, <] with α < 1 and W (t) V (t)(ln t)γ for some γ > 1 and all large enough t. Then, for any fixed ε > 0, lim sup n→∞
Sn (ln n)ε < −1 σW (n)
a.s.
Sn (ln n)ε = −∞ σW (n)
a.s.
(3.9.7)
or, equivalently, lim sup n→∞
Here one should note the fact that the upper bound proves to be negative and, moreover, for any ε > 0, lim sup Sn + σW (n)(ln n)−ε < 0, n→∞ sup Sn + σW (n)(ln n)−ε < ∞ a.s. n0
Proof of Theorem 3.9.3. We will follow the same line of reasoning, based on the Borel–Cantelli lemma, as was used to prove Theorem 3.9.1(i). Similarly to (3.9.5), (3.9.6), it suffices to verify that, with probability 1, there will occur only finitely many events ) ( k 1, Ck := max Sn −xk , nk−1 nnk
where nk := Ak , xk := σW (nk )(ln n)−ε and A > 1 is an arbitrary fixed num ber. To this end, it suffices, in turn, to show that the series k P(Ck ) converges. Now P(Ck ) Pk,1 + Pk,2 , where Pk,1 := P(Snk−1 −2xk ),
Pk,2 = P(S mk xk )
and mk := nk − nk−1 ∼ nk (1 − A−1 ). Here by Theorem 2.3.5 (see (2.3.8)) one
179
3.9 Analogues of the law of the iterated logarithm has Pk,1 c1 nk−1 V (σW (nk−1 ) − 2xk )
2xk + σ(nk−1 ) + exp −c σW (nk−1 )
−δ "
(3.9.8)
for δ > 0, c1 < ∞, 0 < c < ∞. Since xk+1 = o(σW (nk )) as k → ∞, we obtain (replacing, for convenience, the index k − 1 by k) that nk V (σW (nk ) − 2xk+1 ) ∼ nk V (σW (nk )) cnk W (σW (nk ))(ln nk )−γ ∼ c(ln nk )−γ ∼ ck−γ ln A. Now consider the second term on the right-hand side of (3.9.8). It is not hard to see that, for any fixed ε1 > 0 and all large enough n, σ(n) = V (−1) (1/n) = inf {t : V (t) 1/n} inf t : W (t)(ln t)−γ 1/n < σW (n)(ln n)−γ/β+ε1 . Therefore
2xk+1 + σ(nk ) σW (nk )
−δ
−δ c1 (ln nk )−ε + (ln nk )−γ/β+ε1
−δ c2 k −ε + k −γ/β+ε1 c3 k v with v := min εδ, (γ/β − ε1 )δ > 0 provided that ε1 < γ/β. Hence the second v summand in (3.9.8) decays no slower than e−c3 k k −γ . From this it follows that Pk,1 ck−γ and Pk,1 < ∞. k
Now consider the terms Pk,2 . By virtue of Theorem 2.4.1 (see (2.4.2)), for any ε2 > 0, Pk,2 c1 mk V (xk ) c2 nk V (xk ) = c2 nk V σW (nk )(ln nk )−ε c2 (ln nk )αε+ε2 nk W (σW (nk ))(ln σW (nk ))−γ ∼ c3 k αε−γ+ε2 . Next we observe that it will suffice to prove (3.9.7) for any ε ∈ (0, (γ − 1)/3α). If it holds for any ε from this interval then it will certainly hold for ε (γ − 1)/3α. Now, if ε ∈ (0, (γ − 1)/3α) then, for ε2 < (γ − 1)/3, αε − γ + ε2 (γ − 1)/3 − γ + ε2 −1 − (γ − 1)/3 < −1. Hence k Pk,2 < ∞. The theorem is proved. If, in the assumptions of Theorem 3.9.3, we were to require in addition that condition [Rα,ρ ] (with ρ = −1) is met then the distribution of Sn /σW (n) would converge to the respective stable law and so, for any fixed ε > 0, infinitely many events An = {Sn −εσW (n)} would occur. Therefore, if there exists an upper
boundary {ψ(n)} for {Sn } then, by Theorem 3.9.3, it will have to be of the form ψ(n) = −σW (n)ψ1 (n), where ψ1 (n) → 0 and | ln ψ1 (n)| = o(ln ln n) as n → ∞. Now consider the problem on the upper boundary of the random walk in the case Eξ = 0, [<, <], β > 1, W (t) V (t). As in Theorem 3.9.3, the result here will be different from the assertion of Theorem 3.9.1, where we assumed that W (t) c1 V (t). Theorem 3.9.4. Let the following conditions be satisfied: [<, <] with β > 1, Eξ = 0 and V (t) < cW (t) for some c < ∞. Then, for any fixed v > 0, lim sup n→∞
Sn
a.s.
(3.9.9)
Sn =0 σW (n) ln n
a.s.
(3.9.10)
or, equivalently, lim sup n→∞
Proof. Similarly to (3.9.5), (3.9.6), it suffices to verify that, with probability 1, there will occur only finitely many events ( ) max Sn xk , Ck = k 1, nk−1 nnk
where nk := Ak , xk := vσW (nk ) ln nk and A > 1 is an arbitrary fixed number. We will use Theorem 3.1.1 in the case where condition (3.1.6) is satisfied. First note that, for x := vσW (n) ln n, the latter condition is met. Indeed, x vσW (n) ln n nW ∼ nW ln x ln σW (n) ∼ nv −β β β W (σW (n)) ∼ v −β β β = const < ∞. Therefore, by virtue of Theorem 3.1.1, for any ε > 0 and all large enough n, P(Ck ) P(S nk xk ) < c1 nk V (xnk ) < c2 nk W (vσW (nk ) ln nk ) c3 nk W (σW (nk ))(ln nk )−β+ε c4 k −β+ε . Hence, for ε < (β − 1)/2 one has P(Ck ) c4 k −1−(β−1)/2 ,
P(Ck ) < ∞.
k
That (3.9.10) and (3.9.9) are equivalent to one another follows from the fact that, in the case Eξ = 0, one has lim supn→∞ Sn = ∞ (see e.g. Chapter 10 of [49]). The theorem is proved. Remark 3.9.5. It can seen from the proof of Theorem 3.9.4 that the factor ln n multiplying σW (n) in (3.9.9) appears there not from the upper bound for P(Ck ) but rather from the condition (3.1.6) for the applicability of Theorem 3.1.1. This
observation, together with a comparison with Theorem 3.9.1 (the conditions of Theorems 3.9.1 and 3.9.4 have, for β > 1, a non-empty intersection), leads to a natural conjecture that the above-mentioned factor ln n could be replaced by a slower growing function (say, by (ln n)γ with γ < 1/β). Remark 3.9.6. It is obvious that Theorems 3.9.1–3.9.4 also enable one to construct bounds for lower boundaries for {Sk } (by considering the reflected random walk {−Sk }).
4 Random walks with jumps having finite variance
In this chapter, we will assume that Eξ = 0,
d := Eξ² < ∞.
As before, the main objects of study will be the probabilities of large deviations of Sn , S n , S n (a) and also the general boundary problem on the asymptotics of
P max(Sk − g(k)) 0 , kn
where the boundary g(k) is such that minkn g(k) =: x → ∞.
4.1 Upper bounds for the distribution of S̄_n

On the one hand, by the central limit theorem, as n → ∞,

    P(S_n ≥ x) ∼ 1 − Φ(x/√(nd))

uniformly in x ∈ (0, N_n √n), where N_n → ∞ slowly enough. On the other hand, as we saw in § 1.1.4, when condition [·, =], V ∈ R is met and x → ∞, one has

    P(S_n ≥ x) ∼ nV(x) = nP(ξ ≥ x)    (4.1.1)

for any fixed n (and hence also for an n growing sufficiently slowly). These two asymptotics 'interlock' with each other in the following way. If, in addition to the above-mentioned conditions, it is also assumed that E(ξ²; |ξ| > t) = o(1/ln t) as t → ∞ and that x > √n, then, as n → ∞,

    P(S_n ≥ x) ∼ 1 − Φ(x/√(nd)) + nV(x)    (4.1.2)

(Corollary 7 of [237]; see also [206]¹). The representation (4.1.2) will also be discussed below in § 4.7.2 of the present chapter.

In what follows, we will need a value σ(n) that will characterize the range of deviations of S_n where the asymptotics of P(S_n ≥ x) change from the 'normal' asymptotics, 1 − Φ(x/√(nd)), to the asymptotics nV(x) describing P(S_n ≥ x) for large enough x. We will denote this value, as before, by σ(n). It is defined as the deviation for which the asymptotics e^{−(x²/2nd)(1+o(1))} and nV(x) 'almost coincide'. It is easily seen that one can put

    σ(n) = √((α − 2) nd ln n),    (4.1.3)

σ(n) being the principal part of the solution for x of the equation

    −x²/2nd = ln nV(x) = ln n − α ln x + o(ln x).
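The balance defining σ(n) in (4.1.3) can be checked numerically. The sketch below is an illustration of ours, not from the book: it assumes a pure Pareto tail V(t) = t^{−α} with d = 1 and confirms that at x = σ(n) the logarithms of the two asymptotics agree to leading order in ln n (the discrepancy is of order ln ln n).

```python
import math

def log_normal_tail(x, n, d=1.0):
    # log of the 'normal' asymptotics e^{-x^2/(2 n d)}
    return -x * x / (2 * n * d)

def log_heavy_tail(x, n, alpha):
    # log of n V(x) for the assumed pure Pareto tail V(t) = t^{-alpha}
    return math.log(n) - alpha * math.log(x)

alpha = 4.0
ratios = []
for n in (10**6, 10**9, 10**12):
    sigma = math.sqrt((alpha - 2) * n * math.log(n))  # (4.1.3) with d = 1
    ratios.append(log_heavy_tail(sigma, n, alpha) / log_normal_tail(sigma, n))
print([round(r, 3) for r in ratios])
```

Both logarithms equal −((α − 2)/2) ln n up to terms of order ln ln n, so the printed ratios decrease slowly towards 1 as n grows.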
Recall that, under the assumptions of the preceding chapters, the deviations x at which the approximation by a stable law was replaced with an approximation by the quantity nV(x) were of the form F^{(−1)}(1/n), which had a power factor n^{1/α}, α ≤ 2.

Remark 4.1.1. To avoid confusion over the definition of σ(n) for n = 1 (we do not exclude the value n = 1), we just put σ(1) := 1.

It will be assumed throughout the present section that

    d = 1,   x > √n,

so that nV(x) → 0 as x → ∞. As before, we will use the notation

    B_j = B_j(0) = {ξ_j < y},   B = ∩_{j=1}^{n} B_j

(see (2.1.1)) with y = x/r, where r ≥ 1 is fixed.

Theorem 4.1.2. Let the conditions [·, <], α > 2, Eξ = 0 and Eξ² = 1 be satisfied. Then the following assertions hold true.

(i) For any fixed h > 1, s₀ > 0, for x = sσ(n), s ≥ s₀ and all small enough Π = nV(x), one has

    P ≡ P(S̄_n ≥ x; B) ≤ e^r (Π(y)/r)^{r−θ},    (4.1.4)
¹ In [206], the representation (4.1.2) was given under the additional assumption that E|ξ|^{2+δ} < ∞, δ > 0 (Theorem 1.9), with a reference to [194]. The latter paper in fact dealt only with the case x ≥ n^{1/2} ln n (then (4.1.2) turns into (4.1.1)) but without additional moment conditions. According to [191], the result presented in [206] was obtained in A.V. Nagaev's Dr. Sci. thesis (On large deviations for sums of independent random variables, Institute of Mathematics of the Academy of Sciences of the UzSSR, Tashkent, 1970). In [225] the representation (4.1.2) was obtained under the assumption that F(t) = O(t^{−α}) as t → ∞.
where Π(y) = nV(y),

    θ = (hr²/4s²)(1 + b ln s/ln n),   b = 2α/(α − 2).    (4.1.5)

(ii) For any fixed h > 1, τ > 0, for x = sσ(n), s² < (h − τ)/2 and all large enough n, we have

    P ≤ e^{−x²/2nh}.    (4.1.6)
Corollary 4.1.3. (i) If x = sσ(n), s → ∞ then, for any ε > 0 and all small enough Π = nV(x),

    P ≤ Π^{r−ε}.    (4.1.7)

(ii) If s² > c ln n then

    P ≤ c₁ Π^{r}.    (4.1.8)

(iii) If s is fixed and r = 2s²/h ≥ 1 then, as n → ∞,

    P ≤ cΠ^{s²/h + o(1)}.

In particular, for s² > 2h and all large enough n,

    P ≤ cΠ².    (4.1.9)
Corollary 4.1.4. (i) If s → ∞ then, for any δ > 0 and all small enough nV(x),

    P(S̄_n ≥ x) ≤ nV(x)(1 + δ).    (4.1.10)

(ii) If s² ≥ h + τ for a fixed τ > 0 then, for all small enough nV(x),

    P(S̄_n ≥ x) ≤ cnV(x).    (4.1.11)

(iii) For any fixed h > 1, τ > 0, for s² < (h − τ)/2 and all large enough n,

    P(S̄_n ≥ x) ≤ e^{−x²/2nh}.    (4.1.12)

Remark 4.1.5. It is not hard to verify that, as in Corollaries 2.2.4 and 3.1.2, there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, along with (4.1.10), one has

    sup_{x: s≥t} P(S̄_n ≥ x)/nV(x) ≤ 1 + ε(t),   x = sσ(n).
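The first-order relation P(S̄_n ≥ x) ≈ nV(x) behind (4.1.10), (4.1.11) can be probed by simulation. The sketch below is illustrative only: the centred Pareto jump with P(ξ ≥ t) = (t + 3/2)^{−3} (so α = 3) and all parameters are our own choices, and only an order-of-magnitude agreement is expected at such moderate x.

```python
import random

random.seed(1)
c = 1.5  # Pareto(3) on [1, inf) has mean 3/2; subtracting it centres the jump

def draw_xi():
    u = 1.0 - random.random()          # uniform on (0, 1]
    return u ** (-1.0 / 3.0) - c       # P(xi >= t) = (t + c)^(-3)

n, x, trials = 50, 25.0, 20000
hits = 0
for _ in range(trials):
    s, m = 0.0, 0.0
    for _ in range(n):
        s += draw_xi()
        if s > m:
            m = s                      # running maximum S-bar
    if m >= x:
        hits += 1

estimate = hits / trials
approx = n * (x + c) ** (-3)           # n V(x) with the exact tail
print(estimate, approx)
```

At these finite parameters the Monte Carlo estimate typically exceeds nV(x) by a few tens of per cent, consistent with the second-order correction terms discussed in § 4.4.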
Proof of Corollary 4.1.3. Since θ → 0 as s → ∞, assertion (i) follows in an obvious way from (4.1.4). We now prove the second assertion. Since y = sσ(n)/r, for any δ > 0 one has

    T := r/Π(y) = r/nV(y) < c₁ s^{α+δ} n^{(α+δ)/2 − 1}.

From this we obtain

    θ ln T ≤ (hr²/4s²)(1 + b ln s/ln n) [ ln c₁ + (α + δ) ln s + ((α + δ)/2 − 1) ln n ].

Clearly, for s² > c ln n the right-hand side of this inequality is bounded. Together with (4.1.4), this proves (4.1.8).

If s ≡ x/σ(n) is fixed and nV(x) → 0 then necessarily n → ∞. Therefore,

    r − θ = r − (hr²/4s²)(1 + b ln s/ln n) = ψ(r, s) + o(1),

where the function

    ψ(r, s) := r − hr²/4s²

achieves at the point r₀ = 2s²/h its maximum in r, which is equal to

    ψ(r₀, s) = s²/h.    (4.1.13)

Hence

    r₀ − θ = s²/h + o(1).    (4.1.14)

In particular, for s² > 2h and all large enough n, we obtain

    r₀ − θ > 2.    (4.1.15)
This establishes (4.1.9). Corollary 4.1.3 is proved.

Proof of Corollary 4.1.4. The proof, as well as parallel arguments from Chapters 2 and 3 (cf. Corollaries 2.2.4 and 3.1.2), is based on the inequality (2.1.6):

    P(S̄_n ≥ x) ≤ nV(y) + P.    (4.1.16)

Assertion (i) follows from (4.1.7) by setting r := 1 + 2ε and using standard arguments from calculus. We now prove (ii). If s → ∞, the assertion follows from (i). If s is bounded then, setting r := r₀ and assuming without loss of generality that s² > h + τ (for a suitable h > 1 and a τ that is somewhat smaller than that in condition (ii)), we obtain from (4.1.4), (4.1.13) and (4.1.14) that

    P(S̄_n ≥ x) ≤ nV(y) + c(nV(y))^{1+τ/2} ≤ 2nV(x/r₀) ∼ 2r₀^{α} nV(x).

Assertion (iii) follows from inequalities (4.1.6) and (4.1.16); these imply that

    P(S̄_n ≥ x) ≤ nV(x) + e^{−x²/2nh},    (4.1.17)

where, for s² < (h − τ)/2 and n → ∞,

    e^{−x²/2nh} > exp{ −(h − τ)(α − 2) ln n/4h } = n^{−(h−τ)(α−2)/4h} ≥ n^{−(α−2)/4} ≥ nV(√n) ≥ nV(x)    (4.1.18)
(recall that x > √n). Hence the second term on the right-hand side of (4.1.17) dominates. Slightly changing h (if necessary), we obtain (4.1.12). Corollary 4.1.4 is proved.

Proof of Theorem 4.1.2. (i) We will follow the same line of reasoning as in the proofs of Theorems 2.2.1 and 3.1.1. The main elements will again be the basic inequality (2.1.8) and bounds for R(μ, y). In the present context, however, the partition of R(μ, y) into 'subintegrals' will differ from (2.2.7). Put M(v) := v/μ, so that M := 2α/μ = M(2α), and write R(μ, y) = I₁ + I₂, where, for a fixed ε > 0,

    I₁ := ∫_{−∞}^{M(ε)} e^{μt} F(dt) = ∫_{−∞}^{M(ε)} ( 1 + μt + (μ²t²/2) e^{μθ(t)} ) F(dt)    (4.1.19)

with 0 ≤ θ(t)/t ≤ 1. Now

    ∫_{−∞}^{M(ε)} F(dt) ≤ 1,   ∫_{−∞}^{M(ε)} t F(dt) = −∫_{M(ε)}^{∞} t F(dt) ≤ 0    (4.1.20)

and

    ∫_{−∞}^{M(ε)} t² e^{μθ(t)} F(dt) ≤ e^{ε} ∫_{−∞}^{M(ε)} t² F(dt) ≤ e^{ε} =: h.    (4.1.21)

Therefore

    I₁ ≤ 1 + μ²h/2.    (4.1.22)

Next we will bound

    I₂ := −∫_{M(ε)}^{y} e^{μt} dF₊(t) ≤ V(M(ε)) e^{ε} + μ ∫_{M(ε)}^{y} V(t) e^{μt} dt.    (4.1.23)

First consider, for M(ε) < M < y, the integral

    I₂,₁ := μ ∫_{M(ε)}^{M} V(t) e^{μt} dt = ∫_{ε}^{2α} V(v/μ) e^{v} dv.    (4.1.24)

As μ → 0, V(v/μ) e^{v} ∼ V(1/μ) g(v), where the function g(v) := v^{−α} e^{v} is convex on (0, ∞). Hence

    I₂,₁ ≤ ((2α − ε)/2) V(1/μ) [ g(ε) + g(2α) ] (1 + o(1)) ≤ c V(1/μ).    (4.1.25)

The integral

    I₂,₂ := μ ∫_{M}^{y} V(t) e^{μt} dt

can be dealt with in the same way as I₃ in (2.2.11)–(2.2.15), which yields

    I₂,₂ ≤ V(y) e^{μy} (1 + ε(λ)),   λ := μy,    (4.1.26)

ε(λ) ↓ 0 as λ ↑ ∞. Summing up (4.1.22)–(4.1.26), we obtain

    R(μ, y) ≤ 1 + μ²h/2 + cV(1/μ) + V(y) e^{μy} (1 + ε(λ)),    (4.1.27)

and so

    Rⁿ(μ, y) ≤ exp{ nμ²h/2 + cnV(1/μ) + nV(y) e^{μy} (1 + ε(λ)) }.    (4.1.28)

Now we will choose

    μ := (1/y) ln T,   T := r/nV(y),

so that λ = ln T (cf. (2.2.18)). Then (4.1.28) will become

    Rⁿ(μ, y) ≤ exp{ nμ²h/2 + cnV(1/μ) + r(1 + ε(λ)) },    (4.1.29)

where, as before, by Theorem 1.1.4(iii) for any δ > 0,

    nV(1/μ) = nV(y/ln T) ∼ nV(y/|ln nV(y)|) ≤ cnV(y) |ln nV(y)|^{α+δ} → 0    (4.1.30)

since nV(y) → 0. Therefore from (2.1.8) we obtain (cf. (2.2.19))

    ln P ≤ −r ln T + r + (nh/2y²) ln² T + ε₁(T)
        = −[ r − (nh/2y²) ln T ] ln T + r + ε₁(T),    (4.1.31)

where ε₁(T) ↓ 0 as T ↑ ∞. For x = sσ(n), σ(n) = √((α − 2)n ln n) and nV(x) → 0, we have

    ln T = −ln nV(x) + O(1)
        = −ln n + α ln s + (α/2) ln n + O( ln ln n + |ln L(sσ(n))| )
        = ((α − 2)/2) ln n ( 1 + b ln s/ln n )(1 + o(1)),    (4.1.32)

where b = 2α/(α − 2) (the term o(1) appears in the last equality owing to our assumption that n + s → ∞). Hence

    (nh/2y²) ln T = (hr²/4s²)( 1 + b ln s/ln n )(1 + o(1)),

so that, by virtue of (4.1.31),

    ln P ≤ −[ r − (h'r²/4s²)( 1 + b ln s/ln n ) ] ln T + r

for any h' > h > 1 and all small enough nV(x). This proves the first assertion of Theorem 4.1.2.

(ii) Since we are assuming everywhere that x > √n, we have s = x/σ(n) > 1/√((α − 2) ln n). Hence it suffices to prove (4.1.6) for values of s that satisfy

    n^{−γ} < s² < (h − τ)/2

for an arbitrarily small fixed γ > 0. This corresponds to the following range of values for x²:

    c n^{1−γ} ln n < x² < (1/2)(h − τ)(α − 2) n ln n.    (4.1.33)

In this case, as will be shown below, the principal contribution to the exponent on the right-hand side of (4.1.28) will come from the quadratic term nμ²h/2, and so we will put

    μ := x/nh

(this is the value minimizing −μx + nμ²h/2). Then, for y = x (so that r = 1, λ = x²/nh), we obtain from (2.1.8) and (4.1.28) that

    ln P ≤ −μx + nμ²h/2 + cnV(1/μ) + nV(y) e^{μy} (1 + ε(λ))
        = −x²/2nh + cnV(nh/x) + nV(x) e^{x²/nh} (1 + ε(λ)).    (4.1.34)

Here x²/2nh > 1/2h, whereas the last two terms on the right-hand side are negligibly small as n → ∞. Indeed, owing to the second inequality in (4.1.33),

    cnV(nh/x) ≤ cnV( c'√(n/ln n) ) → 0   as n → ∞.

Further, using the first inequality in (4.1.33), we find that

    nV(x) ≤ n^{(2−α)/2 + γ},

where the choice of γ is at our disposal. Moreover, by virtue of (4.1.33),

    x²/nh ≤ (1/2h)(h − τ)(α − 2) ln n = ( (α − 2)/2 − τ(α − 2)/2h ) ln n.

Hence, for γ < τ(α − 2)/2h,

    nV(x) e^{x²/nh} ≤ n^{−τ(α−2)/2h + γ} → 0

as n → ∞. Thus

    ln P ≤ −x²/2nh + o(1),

where the term o(1) in the last relation can be removed by slightly changing the value of h > 1. (Formally, we have proved that, for h > 1 and all large enough n, the inequality (4.1.6) holds with h on its right-hand side replaced by h' > h, where one could take, for instance, h' = h + (h − 1)/2. Since the value h > 1 can also be made arbitrarily close to 1, by choosing a suitable h, the assertion that we have obtained is equivalent to that in the formulation of Theorem 4.1.2.) This proves (4.1.6). The theorem is proved.
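The choice r₀ = 2s²/h made in the proof of Corollary 4.1.3 is simply the maximizer of the exponent function ψ(r, s) = r − hr²/4s²; a short numerical check (sample values of s and h are ours):

```python
def psi(r, s, h):
    # exponent function from the proof of Corollary 4.1.3
    return r - h * r * r / (4 * s * s)

s, h = 3.0, 1.1
r0 = 2 * s * s / h                      # the maximizing value of r
print(r0, psi(r0, s, h), s * s / h)     # psi(r0, s) equals s^2/h, as in (4.1.13)
```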
Comparing the assertions of Theorem 4.1.2 and Corollaries 4.1.3, 4.1.4, we see that, roughly speaking, the range of possible values of s splits into two subranges: s < 1/2 and s > 1/2. In each subrange one can obtain quite satisfactory and, in a certain sense, unimprovable bounds for the probabilities P and P(S̄_n ≥ x).

Now consider the 'threshold' case α = 2, Eξ² < ∞. In this situation, finding the asymptotics of the solution σ(n) (for x) of the equation

    −x²/2n = ln n + ln V(x) ≡ ln n − α ln x + ln L(x)    (4.1.35)

is rather difficult, because it depends now on the s.v.f. L. Because of this, to describe the probabilities of deviations that are close to normal (i.e. they lie in the region where the bound (4.1.6) is valid), we will need two conditions. The first one assumes straight away that the function e^{−x²/nh} dominates nV(x), i.e. it is supposed that x is such that

    nV(x) e^{x²/nh} → 0.    (4.1.36)

The second condition stipulates that

    V(t) = O( 1/(t² ln t) )   as t → ∞,    (4.1.37)

which, in essence, leads to no loss of generality in the threshold case α = 2, Eξ² < ∞. In the present context, we will consider, for the case α = 2, deviations of the form x = s√(n ln n), where the parameter s will clearly have a meaning somewhat different from that it previously had. As before,

    r = x/y,   Π(y) = nV(y),   Π = nV(x).
Theorem 4.1.6. Let the conditions [·, <], α = 2, Eξ = 0, Eξ² = 1 and x = s√(n ln n) be satisfied. Then the following assertions hold true.

(i) For any fixed s₀ > 0 and all s ≥ s₀, one has

    P ≤ e^{r} (Π(y)/r)^{r+θ},   θ = o(s^{−2}) + O( ln s/(s² ln n) )    (4.1.38)

as nV(x) → 0.

(ii) In addition, let conditions (4.1.36), (4.1.37) be met. Then, for any fixed h > 1 and all large enough n, one has (4.1.6).

Corollary 4.1.7. (i) If s ≥ s₀, nV(x) → 0 then

    P ≤ cΠ^{r + o(1)}.    (4.1.39)

(ii) If s² > c₁ ln n then

    P ≤ cΠ^{r}.    (4.1.40)

Corollary 4.1.8. (i) If s ≥ s₀, nV(x) → 0 then

    P(S̄_n ≥ x) ≤ nV(x)(1 + o(1)).    (4.1.41)

(ii) Let conditions (4.1.36), (4.1.37) be satisfied. Then, for any fixed h > 1 and all large enough n, one has (4.1.12).

Proof of Theorem 4.1.6. (i) For s ≥ s₀, the reasoning in the proof of the first part of Theorem 4.1.2 remains unchanged, up to relations (4.1.28), (4.1.29). However, in contrast with (4.1.32), for x = s√(n ln n) we now have

    ln T ≡ ln r − ln nV(y) = −ln r − ln nV(x) + o(1),

where

    ln nV( s√(n ln n) ) = −2 ln s − ln ln n + ln L( s√(n ln n) ) = −2 ln s + o(ln n) + o(ln s).

Hence

    ln T = 2 ln s − ln r + o(ln n) + o(ln s),    (4.1.42)

and therefore

    (nh/2y²) ln T = (hr²/(2s² ln n)) ( 2 ln s − ln r + o(ln n) + o(ln s) ) = o(s^{−2}) + O( ln s/(s² ln n) ),

so that (4.1.31) becomes

    ln P ≤ −[ r + o(s^{−2}) + O( ln s/(s² ln n) ) ] ln T + r + o(1).

The first assertion of the theorem is proved.

(ii) Now we will prove the second assertion in the case when conditions (4.1.36), (4.1.37) are met. Again letting μ := x/nh, we obtain the relation (4.1.34), in which, as in Theorem 4.1.2, we need to bound the last two terms. From condition (4.1.36) it necessarily follows that s → 0 as nV(x) → 0 and so, by virtue of (4.1.37),

    nV(nh/x) = o( nV(√(n/ln n)) ) = o(1)

as n → ∞. That the second term to be bounded in (4.1.34) converges to zero follows directly from (4.1.36). Theorem 4.1.6 is proved.

Proof. The proofs of Corollaries 4.1.7 and 4.1.8 are also similar to the previous proofs. Their assertions follow from Theorem 4.1.6 and the fact that if nV(x) → 0, s ≥ s₀, then n + s → ∞. This implies that θ = o(1). Moreover, owing to (4.1.42), θ ln Π = O(θ ln T) = o(1), provided that s² > c ln n. The relations (4.1.39), (4.1.40) are proved. To obtain inequality (4.1.41), one should use relations (4.1.16) and (4.1.39), in the latter setting r := 1 + ε, where ε tends to zero so slowly that

    P ≤ cΠ^{1+ε/2},   Π^{ε/2} → 0.

Corollaries 4.1.7 and 4.1.8 are proved.

In conclusion, we state a consequence of Theorems 4.1.2 and 4.1.6 that is concerned with bounds for P(Ŝ_n ≥ x), where

    Ŝ_n := max_{k≤n} |S_k| = max{ S̄_n, |S̲_n| },   S̲_n := min_{k≤n} S_k.

Let, as before, V̂(t) := max{V(t), W(t)}, α̂ := max{α, β}.

Corollary 4.1.9. Let the following conditions be satisfied: [<, <], Eξ² < ∞ and

    x > √( (α̂ − 2 + ε) n ln n )

for some ε > 0. Then, for all small enough nV̂(x),

    P(Ŝ_n ≥ x) ≤ cnV̂(x).

This assertion follows from Corollary 4.1.4(ii) and Corollary 4.1.8(i), applied to S̄_n and S̲_n.
4.2 Upper bounds for the distribution of S̄_n(a), a > 0

This section does not differ much from § 3.2. As in the latter, we will put

    B(v) = ∩_{j=1}^{n} B_j(v),   B_j(v) = { ξ_j < y + vaj },   v > 0,

    η(x) = min{ k : S_k − ak ≥ x }.

We will not exclude the case a → 0 as x → ∞ and will consider here the triangular array scheme, assuming that the uniformity condition [U] from § 3.2 (see p. 137) is satisfied.

Theorem 4.2.1. (i) Assume that the conditions [·, <], α > 2, Eξ = 0, Eξ² = d < ∞ and [U] are satisfied. Then, for v ≤ 1/4r, a ∈ (0, a₀) for an arbitrary fixed a₀ < ∞, all n and all x such that x → ∞, x > c|ln a|/a, we have

    P(S̄_n(a) ≥ x; B(v)) ≤ c₁ [mV(x)]^{r∗},    (4.2.1)

    P(S̄_n(a) ≥ x) ≤ c₁ mV(x),    (4.2.2)

where the constants c, c₁ are defined below in the proof of the theorem and

    m := min{ n, x/a },   r∗ := r/(2(1 + vr)),   r ≡ x/y > 5/2.

For any bounded or sufficiently slow-growing values t, we have

    P( ∞ > η(x) ≥ xt/a ) ≤ (c₂ xV(x)/a) t^{1−α}.    (4.2.3)

If, together with x, t tends to infinity at an arbitrary rate then the inequality (4.2.3) will remain true provided that one replaces in it the exponent 1 − α by 1 − α + ε with an arbitrary fixed ε > 0.

(ii) Let the conditions of part (i) of the theorem be met, with the following amendments for the range of x and n: now we assume that

    x < c|ln a|/a,   n ≥ n₁ = x/a.    (4.2.4)

Then, for any fixed h > 1 and all large enough n₁,

    P(S̄_n(a) ≥ x) ≤ c₁ e^{−xa/2hd}.    (4.2.5)

If x = o(|ln a|/a) then, for any bounded or sufficiently slow-growing t,

    P( ∞ > η(x) ≥ xt/a ) ≤ c₁ e^{−γxat},    (4.2.6)

where the constants c, c₁, γ are defined below in the proof of the theorem. In particular, for xa ≥ c₂ > 0,

    P( ∞ > η(x) ≥ xt/a ) ≤ c₁ e^{−γ̃t},   γ̃ = γc₂.

The theorem implies the following result.

Corollary 4.2.2. Let the conditions [·, <], α > 2, Eξ = 0, Eξ² < ∞ and [U] be satisfied. Then there exist constants c and γ > 0 such that, for any x and n ≥ n₁ := x/a,

    P(S̄_n(a) ≥ x) ≤ c max{ e^{−γxa}, (x/a) V(x) }.

Proof of Theorem 4.2.1. The proof of part (i) basically repeats the argument used to prove Theorem 3.2.1. One just has to observe that, to prove (4.2.1), one should now use Theorems 4.1.2 and 4.1.6 for n ≤ c₀x/a with some c₀ < ∞. First note that if the value s ≡ x/√((α − 2)n ln n) in the conditions of Theorem 4.1.2 is such that θ ≤ r/2 (for the definition of θ see (4.1.5)) then all the
bounds in the proof of Theorem 3.2.1 will remain valid provided that in them we replace r_k and r'_k by r_k/2 and r'_k/2 respectively. Hence the inequality (4.2.1) will hold for r∗ = r₀/2 (cf. the assertion of Theorem 3.2.1, p. 138). It remains to discover for which x and a, in the case n ≤ c₀x/a, one has θ ≤ r/2 or, equivalently,

    s² ≡ c₁ x²/(n ln n) > c₂,    (4.2.7)

where c₁ and c₂ are determined by the values of α and r. The inequality (4.2.7) will hold if

    x > c|ln a|/a   or   a > (c' ln x)/x.

Indeed, in this case,

    n ln n ≤ (c₀x/a) ln(c₀x/a) (1 + o(1)) ≤ (c₀x²/(c' ln x)) ln x (1 + o(1)) ≤ (c₀/c') x² (1 + o(1)).    (4.2.8)

From this it follows that s² ≥ c₁c'/c₀ ≥ c₂ for c' ≥ c₂c₀/c₁, and so (4.2.7) holds true. This proves (4.2.1).

Similarly, to prove (4.2.2) we use Corollaries 4.1.4 and 4.1.8, in which one requires that the condition x²/(n ln n) > c₃, with c₃ an explicitly known constant, is met. We are again in a situation where we have to determine under what conditions the relation (4.2.7) holds. The only difference is in the values of the constants. Otherwise, the proof of Theorem 3.2.1(i) remains unchanged.

(ii) Now we prove the second assertion. Using an argument similar to that involving (4.2.7), (4.2.8), one can verify that the inequality s² < (h − τ)/2 (see Theorem 4.1.2(ii)) will hold for n ≥ n₁ provided that x < c|ln a|/a (or a > (c ln x)/x). We will again follow the argument in the proof of Theorem 3.2.1 but will obtain different bounds, since we will now be using Theorem 4.1.2(ii), see (4.1.6). For example, instead of (3.2.5) we will obtain that, for a fixed k, n_k = n₁2^{k−1} and n₁ = x/a, the inequality

    p_k = P( η(x) ∈ (n_k, n_{k+1}]; B(v) )
        ≤ exp{ −x² 2^{2(k−2)}/2n_k hd } + exp{ −x² (1 + 2^{k−2})²/2n_k hd }
        ≤ 2 exp{ −xa 2^{k−2}/4hd }    (4.2.9)

will hold if the pairs (n_k, x2^{k−2}) and (n_k, x(1 + 2^{k−2})) (see (3.2.5)), together with the pair (n₁, x), still satisfy the conditions of Theorem 4.1.2(ii). As k grows, starting from some point these conditions will no longer be met and then one will have to make use of the assertion of Theorem 4.1.2(i). However, when summing
the p_k, the main contribution to the sum will be due to the first few summands, since, for n = n₁ = x/a,

    exp{ −x²/2nhd } = exp{ −xa/2hd } ≥ (x/a) V(x)

for x < c|ln a|/a and a suitable c. It also follows from these relations that

    P(B̄(v)) ≤ Σ_{j=1}^{n} V(y + avj) ≤ (cx/a) V(x) = o( e^{−xa/2hd} ).

The above argument implies (4.2.5). If x = o(|ln a|/a) then, for each fixed k, the bound (4.2.9) will be valid, and the right-hand side of (4.2.9) will dominate xV(x)/a. Therefore, by virtue of (4.2.9),

    P( ∞ > η(x) ≥ n₁2^{k−1} ) ≤ c exp{ −xa 2^{k−2}/4h }.

This implies (4.2.6). The theorem is proved.

Note that analogues of Corollary 3.6.6 and Theorem 3.6.7 (under the conditions [·, <] and γ > 1/2) remain valid in the present set-up. As can be seen from the proof, one can make a more precise statement regarding the admissible growth rate for t in (4.2.6). If, for instance, x = v/a, a → 0, v = const then one can take t < c|ln a| with a suitable c.
4.3 Lower bounds for the distributions of S_n and S̄_n(a)

First we consider lower bounds for P(S_n ≥ x).

Theorem 4.3.1. Let Eξ = 0, Eξ² = d < ∞. Then, for y = x + u√((n − 1)d), u > 0,

    P(S_n ≥ x) ≥ nF₊(y) [ 1 − u^{−2} − ((n − 1)/2) F₊(y) ].    (4.3.1)

Proof. The assertion will follow from Theorem 2.5.1 if we put K(n) := √(nd) and make use of the Chebyshev inequality, which implies that

    Q_n(u) ≡ P( S_n/K(n) < −u ) ≤ u^{−2}.

The following results are obvious consequences of Theorem 4.3.1.

Corollary 4.3.2. (i) If x → ∞, x ≫ √n then, as u → ∞,

    P(S_n ≥ x) ≥ nF₊(y)(1 + o(1)).
(ii) If, moreover, condition [·, >] is satisfied then

    P(S_n ≥ x) ≥ nV(x)(1 + o(1)).

Thus, a lower bound for P(S_n ≥ x) with right-hand side nF₊(y) holds under weaker conditions on x than the upper bounds (the latter require x ≥ √(n ln n)) and in the absence of a regular domination condition of the form [·, <]. Some lower bounds for P(S_n ≥ x) were obtained in [205], while lower bounds for P(|S_n| ≥ x) were established in [208].

Next we will obtain lower bounds for P(S̄_n(a) ≥ x). The case Eξ² = ∞ will not be excluded. Suppose that

    Eξ = 0,   b_γ := E|ξ|^{γ} < ∞   for some γ ∈ (1, 2].    (4.3.2)

Set

    Z_γ(x, t) := Σ_{j=1}^{n} F₊( x + aj + (j − 1)^{1/γ} t b ),

where, as usual, a > 0. Clearly, Z_γ(x, t) → 0 as x → ∞.

Theorem 4.3.3. For all n, x and t > 0,

    P(S̄_n(a) ≥ x) ≥ Z_γ(x, t) [ 1 − (1 + ε(t)) t^{−γ} − Z_γ(x, t) ],    (4.3.3)

where ε(t) ≡ 0 for γ = 2, and ε(t) → 0 as t → ∞ for γ < 2.

Let

    I_γ(x, t) := ∫_{1}^{n+1} F₊( x + au + (u − 1)^{1/γ} t b ) du ≤ Z_γ(x, t).

Given that (4.3.3) holds, it is not hard to find values t₀ > 1, z₀ = z₀(t₀) > 0 such that, for Z_γ(x, t) < z₀, t > t₀,

    P(S̄_n(a) ≥ x) ≥ I_γ(x, t) [ 1 − (1 + ε(t)) t^{−γ} − I_γ(x, t) ].    (4.3.4)

Indeed, observe that for t > t₀ > 1 and c := sup_t (1 + ε(t)), the function g(z) := z(1 − ct^{−γ} − z) increases monotonically on [0, z₀], where z₀ = z₀(t₀) > 0. Hence, for Z_γ(x, t) < z₀, the right-hand side of (4.3.3) exceeds

    I_γ(x, t) [ 1 − (1 + ε(t)) t^{−γ} − I_γ(x, t) ].

It is not difficult to give explicit values for t₀, z₀. For instance, when γ = 2, one can put t₀ := 2, z₀ := 3/8.

Corollary 4.3.4. Assume that the conditions (4.3.2) and [·, <] are satisfied. Then, as x → ∞,

    P(S̄_n(a) ≥ x) ≥ (1/a) ∫_{x}^{x+an} V(u) du (1 + o(1)).    (4.3.5)
Proof of Corollary 4.3.4. Put t := ln x in (4.3.4). Then, as x → ∞,

    I_γ(x, t) ≥ ∫_{1}^{n+1} V( x + au + (u − 1)^{1/γ} b ln x ) du = ((1 + o(1))/a) ∫_{x}^{x+an} V(u) du.

Since I_γ(x, t) → 0 as x → ∞, the bound (4.3.5) follows from (4.3.4).

Proof of Theorem 4.3.3. Let

    G_n := { S̄_n(a) ≥ x },   B_j := { ξ_j < x + aj + (j − 1)^{1/γ} t b }.

Using the same argument as in the proof of Theorem 2.5.1, we obtain

    P(G_n) ≥ Σ_{j=1}^{n} P(G_n B̄_j) − ( Σ_{j=1}^{n} P(B̄_j) )²,    (4.3.6)

where, evidently,

    Σ_{j=1}^{n} P(B̄_j) = Z_γ(x, t).    (4.3.7)

Further,

    P(G_n B̄_j) > P( S_{j−1} ≥ −(j − 1)^{1/γ} t b; B̄_j )
        = F₊( x + aj + (j − 1)^{1/γ} t b ) [ 1 − P( S_{j−1} < −(j − 1)^{1/γ} t b ) ].    (4.3.8)

If γ = 2 then, by the Chebyshev inequality, P( S_{j−1} < −(j − 1)^{1/2} t b ) ≤ t^{−2}. If γ < 2 then condition [<, <] with α = β = γ, V(t) = W(t) = b_γ t^{−γ} is fulfilled and therefore, by virtue of Corollary 3.1.2,

    P( S_{j−1} < −(j − 1)^{1/γ} t b ) ≤ (1 + ε(t)) (j − 1) V( (j − 1)^{1/γ} t b ) = (1 + ε(t)) t^{−γ},

where ε(t) → 0 as t → ∞. It follows from (4.3.8) and the above discussion that

    Σ_{j=1}^{n} P(G_n B̄_j) ≥ Z_γ(x, t) [ 1 − (1 + ε(t)) t^{−γ} ]

and hence, by virtue of (4.3.6) and (4.3.7),

    P(G_n) ≥ Z_γ(x, t) [ 1 − (1 + ε(t)) t^{−γ} − Z_γ(x, t) ].

The theorem is proved.
4.4 Asymptotics of P(S_n ≥ x) and its refinements

First-order asymptotics for the probabilities P(S_n ≥ x) and P(S̄_n ≥ x) in the case s = x/σ(n) → ∞ (recall that σ(n) = √((α − 2)n ln n)) immediately follow from Corollary 4.1.4 (see also Remark 4.1.5) and Corollary 4.3.2(ii). Namely, we have the following results.

Theorem 4.4.1. Let the conditions [·, =], α > 2, Eξ = 0 and Eξ² < ∞ be satisfied. Then there exists a function ε(t) ↓ 0 as t ↑ ∞ such that

    sup_{x: s≥t} | P(S_n ≥ x)/nV(x) − 1 | ≤ ε(t),    (4.4.1)

    sup_{x: s≥t} | P(S̄_n ≥ x)/nV(x) − 1 | ≤ ε(t).    (4.4.2)

Remark 4.4.2. It will be seen from the proof of Theorem 4.4.4 below that the probabilities P(S_n ≥ x) and P(S̄_n ≥ x) are uniformly asymptotically equivalent to nV(x) for s > c > 1 provided that n → ∞. Note also that the relation

    lim_{n→∞} P(S_n ≥ x)/nV(x) = 1

for s > c > 1 (i.e. for x > cσ(n), c > 1) also follows from the uniform representation (4.1.2). It appears that a similar representation

    P(S̄_n ≥ x) ∼ 2[ 1 − Φ(x/√(nd)) ] + nV(x),   x > √n,   n → ∞,    (4.4.3)

holds for S̄_n (with the same consequences for P(S̄_n ≥ x)/nV(x)); it was established in [225] under the additional condition that F(t) = O(t^{−α}) as t → ∞.

To obtain more precise results, one needs additional smoothness conditions on the function V(t). First we will formulate 'first-order' and 'second-order' smoothness conditions. It will be assumed in the remaining part of this chapter that condition [·, =] is satisfied. As in § 3.4, let q(t) be a function vanishing as t → ∞ (for instance, an r.v.f. of index −γ, γ ≥ 0). The conditions below are similar to conditions [D(h,q)] from § 3.4 (see p. 144).

[D(1,q)] The following representation is valid:

    V(t(1 + Δ)) − V(t) = V(t) [ −L₁(t)Δ(1 + ε(Δ, t)) + o(q(t)) ],

where L₁(t) = α + o(1) and ε(Δ, t) → 0 as |Δ| → 0, t → ∞.

As we observed in § 3.4, the distribution tail of the first positive sum in a random walk will always satisfy condition [D(1,q)] with q(t) = t^{−1}, provided that condition [·, =] is met (see § 7.5). If the function L(t) in the representation V(t) = t^{−α}L(t) is continuously differentiable for all t ≥ t₀, starting from some t₀ > 0, and if

    L'(t) = o( L(t)/t )   as t → ∞,
then condition [D(1,0)] is satisfied. Indeed, in this case,

    V'(t) = −(V(t)/t) L₁(t),   L₁(t) = α − tL'(t)/L(t) = α + o(1)

as t → ∞. Integrating V'(u) from t to t + Δt yields

    V(t + Δt) − V(t) = V'(t)Δt(1 + ε(Δ, t)) = −V(t)L₁(t)Δ(1 + ε(Δ, t)),    (4.4.4)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and hence condition [D(1,q)] is met for q(t) ≡ 0.

The next, stronger, smoothness condition [D(2,q)] has a similar form:

[D(2,q)] For some t₀ > 0, the function V(t) is continuously differentiable for t ≥ t₀,

    L₁(t) := −V'(t)t/V(t) = α − tL'(t)/L(t)

is an s.v.f. and, as t → ∞,

    V(t(1 + Δ)) − V(t) = V(t) [ −L₁(t)Δ + L₂(t)(Δ²/2)(1 + ε(Δ, t)) + o(q(t)) ],

where L₂(t) = α(α + 1) + o(1) and ε(Δ, t) → 0 as |Δ| → 0, t → ∞.

It is evident that if

    V''(t) = α(α + 1) (V(t)/t²)(1 + o(1))    (4.4.5)

exists then tL'(t)/L(t) → 0 as t → ∞, and so condition [D(2,q)] is satisfied for q(t) ≡ 0. The reason is that in this case

    V(t + Δt) − V(t) = V'(t)Δt + ∫_{t}^{t+Δt} ∫_{t}^{v} V''(u) du dv
        = V'(t)Δt + V''(t)(Δ²t²/2)(1 + o(1))
        = V(t) [ −L₁(t)Δ + (α(α + 1)/2) Δ² (1 + ε(Δ, t)) ],    (4.4.6)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and, by virtue of (4.4.5),

    L₁(t) = −V'(t)t/V(t) = α + o(1).
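The convergence L₁(t) → α can be seen numerically for a concrete regularly varying tail; the sketch below (an illustration of ours, with V(t) = t^{−α} ln t, so L(t) = ln t and L'(t) = 1/t = o(L(t)/t)) compares a finite-difference estimate of L₁(t) with the exact value α − 1/ln t.

```python
import math

alpha = 2.5

def V(t):
    # regularly varying tail with s.v.f. L(t) = ln t (our example)
    return t ** (-alpha) * math.log(t)

delta = 1e-4
errs = []
for t in (1e3, 1e6, 1e9):
    # (V(t(1+delta)) - V(t)) / (V(t) delta) ~ -L1(t) by the expansion above
    fd = -(V(t * (1 + delta)) - V(t)) / (V(t) * delta)
    errs.append(abs(fd - (alpha - 1 / math.log(t))))
print(errs)
```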
Clearly, condition [D(1,q)] (like condition [D(2,q)]) is not a differentiability condition in the usual sense, as it specifies the values of the increments V(t(1 + Δ)) − V(t) only for Δ > q(t). In the case q(t) = 0 this condition could be referred to as 'differentiability at infinity'. Now we will formulate a general smoothness condition [D(k,q)], k ≥ 1.
[D(k,q)] For some t₀ > 0, the function V(t) is k − 1 times continuously differentiable for t ≥ t₀, and

    V(t(1 + Δ)) − V(t) = V(t) [ Σ_{j=1}^{k−1} (−1)^{j} L_j(t) Δ^{j}/j! + (−1)^{k} L_k(t) (Δ^{k}/k!)(1 + ε(Δ, t)) + o(q(t)) ],    (4.4.7)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and |ε(Δ, t)| < c for t ≥ t₀ and |Δ| < 1 − δ, where δ > 0 is a fixed arbitrary number. The s.v.f.'s L_j are defined, for j ≤ k − 1, by the equalities

    L_j(t) = (−1)^{j} (t^{j}/V(t)) d^{j}V(t)/dt^{j}.    (4.4.8)

The functions L_j with j ≤ k have the property that, as t → ∞,

    L_j(t) = α(α + 1)···(α + j − 1)(1 + o(1)).    (4.4.9)

It can be seen that the expansion in (4.4.7) differs from an expansion for the power function t^{−α} only by the presence of slowly varying factors that are asymptotically equivalent to 1. However, replacing these factors by unity is, of course, impossible, since that could lead to the introduction of errors whose orders of magnitude would exceed those of the subsequent terms of the expansion.

Similarly to the above discussion, we can observe that if the kth derivative

    V^{(k)}(t) ≡ d^{k}V(t)/dt^{k} = (−1)^{k} α(α + 1)···(α + k − 1) (V(t)/t^{k})(1 + o(1))    (4.4.10)

exists then condition [D(k,0)] is met. Also, it is not hard to see that condition [D(k,0)] implies [D(j,0)] for j < k.

One could give the following sufficient conditions for (4.4.10): if the kth derivative V^{(k)}(t) exists for t ≥ t₀ and is monotone on that half-line then (4.4.10) holds and so [D(k,0)] is satisfied. Indeed, in this case, for some t₁ ≥ t₀ the function V^{(k)}(t) will have the same sign on the whole interval (t₁, ∞), and therefore the (k − 1)th derivative V^{(k−1)}(t) will be monotone on this interval. Continuing this kind of reasoning, we conclude that all the derivatives V^{(k−2)}(t), …, V'(t) will be monotone 'at infinity'. It is known, however (see e.g. Theorem 1.7.2 of [32]), that the monotonicity of V'(t) and regular variation of V(t) imply that

    V'(t) ∼ −α t^{−α−1} L(t),   t → ∞.

From this and the monotonicity of V''(t) it follows, in turn, that

    V''(t) ∼ (−1)² α(α + 1) t^{−α−2} L(t),   t → ∞,
and so on, up to the relation (4.4.10).

Remark 4.4.3. (i) As in § 3.4, one can also consider, along with [D(k,q)], conditions [D(k,O(q))] that differ from [D(k,q)] in that in them the remainder term o(q(t)) in (4.4.7) is replaced by O(q(t)).

(ii) As in § 3.4, one could also consider conditions [D(h,q)] with a fractional value of h and a remainder term of order |Δ|^{h}.

Comments on how the assertions in the forthcoming theorems would change if we switched to the conditions mentioned in Remark 4.4.3 will be presented after the respective theorems.

Now we state the main assertion of this section. Recall that we are assuming everywhere that Eξ = 0, d = Eξ² < ∞ and that V̂(t) = max{V(t), W(t)}.

Theorem 4.4.4. Let the following conditions be satisfied: [·, =], α > 2, and, for some k ≥ 1, [D(k,q)] holds and E|ξ|^{k} < ∞ (the latter is only required when k > 2). Then, as x → ∞, we have

    P(S_n ≥ x) = nV(x) [ 1 + Σ_{j=2}^{k} (L_j(x)/(j! x^{j})) ES_{n−1}^{j} + o(n^{k/2} x^{−k}) + o(q(x)) ]    (4.4.11)

uniformly in n ≤ cx²/ln x for some c > 0.

Since

    ES²_{n−1} = (n − 1)d,   ES³_{n−1} = (n − 1)Eξ³,   ES⁴_{n−1} = 3(n − 1)(n − 2)d² + O(n)

and

    L₂(x) = α(α + 1) + o(1),   L₄(x) = α(α + 1)(α + 2)(α + 3) + o(1),

we obtain the following.

Corollary 4.4.5. Let the conditions of Theorem 4.4.4 be met. Then the following assertions hold as x → ∞.

(i) If k = 2 then

    P(S_n ≥ x) = nV(x) [ 1 + (α(α + 1)(n − 1)d/2x²)(1 + o(1)) ].    (4.4.12)

(ii) If k = 4 then

    P(S_n ≥ x) = nV(x) [ 1 + (n − 1)L₂(x)d/2x² + (n − 1)L₃(x)Eξ³/3!x³
        + (α(α + 1)(α + 2)(α + 3)/4!) · 3(n − 1)(n − 2)d²/x⁴ · (1 + o(1)) ].
Both relations hold uniformly in n ≤ cx²/ln x for some c > 0.

Remark 4.4.6. In the lattice case, when ξ is an integer-valued r.v. and the greatest common divisor of its possible values is equal to 1, it is not hard to construct difference analogues of conditions [D(k,q)] and then to obtain a complete analogue of Theorem 4.4.4 for integer-valued x.

Remark 4.4.7. For the special case where the principal part of V(t) is a linear combination of negative powers of t, a number of refinements of the relation P(S_n ≥ x) ∼ nV(x) were derived in [276] under some additional restrictive assumptions and conditions, which are sometimes irrelevant. It will be seen easily from the proofs below that all the main assertions of the present section could be transferred, in a standard way, to the case where V(t) is a linear combination of terms of the form t^{−α(i)} L^{(i)}(t) satisfying the respective smoothness condition. See also Remark 3.4.2.

Remark 4.4.8. The terms ES^{j}_{n−1} in (4.4.11) are clearly polynomials in √n of orders not exceeding j. Observe, however, that the asymptotic representation (4.4.11) is, generally speaking, not a 'pure form' asymptotic expansion in powers of (√n/x), which is the case in (4.4.12), since the L_j(x) could be quite complicated s.v.f.'s. If L(t) ≡ L = const for t ≥ t₀ then L_j(x) ≡ α(α + 1)···(α + j − 1), and (4.4.11) becomes an asymptotic expansion in powers of (√n/x).

Proof of Theorem 4.4.4. Let G_n := {S_n ≥ x} and, as before, put

    B_j = {ξ_j < y},   B = ∩_{j=1}^{n} B_j,   r = x/y > 2.
Then, similarly to § 3.4, by virtue of Corollary 4.1.3 we will again have representations of the form (3.4.12), (3.4.13), which yield, for n ≤ cx²/ln x and a suitable c, that

$$P(G_n) = \sum_{j=1}^{n} P(G_n \bar B_j) + O\bigl((nV(x))^{2}\bigr). \tag{4.4.13}$$

So the main problem consists in finding asymptotic representations for

$$P(G_n \bar B_j) = P(G_n \bar B_n) = P(S_{n-1} + \xi_n \ge x,\ \xi_n \ge y) = P(S_{n-1} \ge x-y,\ \xi_n \ge y) + P(S_{n-1} < x-y,\ S_{n-1}+\xi_n \ge x). \tag{4.4.14}$$
Here, owing to Corollary 4.1.4,

$$P(S_{n-1} \ge x-y,\ \xi_n \ge y) = P(S_{n-1} \ge x-y)\,P(\xi_n \ge y) \le cnV^{2}(x). \tag{4.4.15}$$

Further, for y = δx, δ = 1/r < 1/2,

$$P(\xi_n \ge x - S_{n-1},\ S_{n-1} < x-y) = E\bigl[V(x-S_{n-1});\ S_{n-1} < (1-\delta)x\bigr] = E_1 + E_2, \tag{4.4.16}$$

where

$$E_1 := E\bigl[V(x-S_{n-1});\ S_{n-1} \le -(1-\delta)x\bigr], \qquad E_2 := E\bigl[V(x-S_{n-1});\ |S_{n-1}| < (1-\delta)x\bigr].$$

First, assume that condition [D(k,0)] holds. Setting t := x and Δ := −S_{n−1}/x in (4.4.7), we obtain

$$E_2 = V(x)\,E\biggl[1 + \sum_{j=1}^{k}\frac{L_j(x)}{j!}\Bigl(\frac{S_{n-1}}{x}\Bigr)^{j} + \frac{L_k(x)}{k!}\Bigl(\frac{S_{n-1}}{x}\Bigr)^{k}\varepsilon\Bigl(\frac{S_{n-1}}{x},x\Bigr);\ \frac{S_{n-1}}{x} < 1-\delta\biggr]. \tag{4.4.17}$$

Next we will find out by how much the truncated moments of Sn on the right-hand side of (4.4.17) differ from the complete moments.

Lemma 4.4.9. Let Eξ = 0, Eξ² = 1 and E|ξ|^b < ∞ for some b ≥ 2. Then the sequence {(S̄n/√n)^b; n ≥ 1} is uniformly integrable.
Proof. First note that {|Sn/√n|^b; n ≥ 1} is uniformly integrable. This follows from the fact that, under the condition E|ξ|^b < ∞, one has convergence of moments in the central limit theorem: as n → ∞,

$$E\Bigl|\frac{S_n}{\sqrt n}\Bigr|^{b} \to \int |t|^{b}\,d\Phi(t),$$

where Φ(t) is the standard normal distribution function (see e.g. § 5.10, Chapter IV of [224]). To complete the proof, it suffices to use Kolmogorov's inequality:

$$P\Bigl(\frac{\overline S_n}{\sqrt n} \ge t\Bigr) \le 2\,P\Bigl(\frac{S_n}{\sqrt n} \ge t - \sqrt 2\Bigr)$$

(see e.g. (3.13) in Chapter III of [224] or § 2, Chapter 10 of [49]).
By Lemma 4.4.9, for j ≤ k,

$$E\bigl[|S_n^{j}|;\ |S_n| \ge (1-\delta)x\bigr] \le \frac{n^{k/2}}{((1-\delta)x)^{k-j}}\,E\biggl[\Bigl(\frac{|S_n|}{\sqrt n}\Bigr)^{k};\ \frac{|S_n|}{\sqrt n} \ge \frac{(1-\delta)x}{\sqrt n}\biggr] = o\bigl(n^{k/2}x^{j-k}\bigr),$$

and therefore replacing the truncated moments by the complete moments in the relation (4.4.17) introduces an error of order o(n^{k/2}x^{−k}). Further, since, as x → ∞,

$$\varepsilon(\Delta,x) \le c \ \ \text{for } |\Delta| < 1-\delta, \qquad \varepsilon(\Delta,x) \to 0 \ \ \text{for } |\Delta| \to 0,$$

we obtain, again owing to uniform integrability, that

$$E\biggl[\Bigl(\frac{S_{n-1}}{x}\Bigr)^{k}\varepsilon\Bigl(\frac{S_{n-1}}{x},x\Bigr);\ \frac{S_{n-1}}{x} < 1-\delta\biggr] = o\bigl(n^{k/2}x^{-k}\bigr).$$

Therefore (4.4.17) yields

$$E_2 = V(x)\biggl[1 + \sum_{j=1}^{k}\frac{L_j(x)}{j!\,x^{j}}\,ES_{n-1}^{j}\biggr] + o\bigl(n^{k/2}x^{-k}\bigr). \tag{4.4.18}$$

It remains to bound the integral E1 in (4.4.16). Applying Corollary 4.1.4 (or Corollary 4.1.8) to bound the probabilities of negative deviations of Sn−1, we have

$$E_1 \le V(x)\,P\bigl(S_{n-1} \le -(1-\delta)x\bigr) \le cV(x)\,nW(x). \tag{4.4.19}$$

Summarizing, we obtain from (4.4.13)–(4.4.19) the assertion of the theorem under condition [D(k,0)]. If q(t) ≢ 0 then, as in § 3.4, the term

$$o(q(x))\,P\Bigl(\frac{S_{n-1}}{x} < 1-\delta\Bigr) = o\bigl(q(x)\bigr)$$

is added to the right-hand side of (4.4.17), owing to condition [D(k,q)], and this term passes unchanged, through all the calculations, into the representation (4.4.11). The theorem is proved.

The proof of the assertion in Remark 4.4.2 is a simplified version of that of Theorem 4.4.4; we leave it to the reader.

All the remarks from § 3.4 on how the assertion of Theorem 4.4.4 changes when one replaces condition [D(k,q)] by [D(k,O(q))] remain valid in the present section. One can also obtain a modification of the assertion of the theorem in the case where condition [D(h,q)] holds for a fractional value of h (see Remark 4.4.3).
4.5 Asymptotics of P(S̄n ≥ x) and its refinements
As was the case for P(Sn ≥ x), the first-order asymptotics of P(S̄n ≥ x) follow directly from Corollaries 4.1.4 and 4.3.2 and are contained in Theorem 4.4.1. Now we will present refinements of that theorem. As in the previous section, we will need the smoothness conditions [D(k,q)].

Theorem 4.5.1. Let condition [·, =] with α > 2 be satisfied and let, for some k ≥ 1, condition [D(k,q)] hold and E|ξ|^k < ∞ (the latter is only required when k > 2). Then, as x → ∞,

$$P(\overline S_n \ge x) = nV(x)\biggl[1 + \frac{L_1(x)}{nx}\sum_{j=1}^{n-1}E\overline S_{n-j} + \frac1n\sum_{j=1}^{n-1}\sum_{i=2}^{k}\frac{L_i(x)}{i!\,x^{i}}\biggl(E\overline S_{n-j}^{i} + \sum_{l=2}^{i}\binom{i}{l}ES_{j-1}^{l}\,E\overline S_{n-j}^{i-l}\biggr) + o\bigl(n^{k/2}x^{-k}\bigr) + o\bigl(q(x)\bigr)\biggr] \tag{4.5.1}$$

uniformly in n ≤ cx²/ln x for some c > 0.

Remarks 4.4.3 and 4.4.7, which relate to Theorem 4.4.4, remain valid for the above theorem as well.

To obtain simpler asymptotic representations from this theorem, note that by Kolmogorov's inequality (or by virtue of Corollary 4.1.4) the r.v.'s n^{−1/2}S̄n are uniformly integrable. Therefore it follows from the invariance principle that

$$n^{-1/2}\overline S_n \Rightarrow \sqrt d\,\overline w(1), \qquad E\,n^{-1/2}\overline S_n \to \sqrt d\,E\overline w(1) = \sqrt{\frac{2d}{\pi}}$$

as n → ∞, where d = Eξ², {w(t); t ≥ 0} is the standard Wiener process, w̄(t) := max_{u≤t} w(u), P(w̄(1) > t) = 2(1 − Φ(t)), and Φ is the standard normal distribution function. Since, moreover, L1(x) → α, we obtain the following.

Corollary 4.5.2. If conditions [·, =], α > 2, and [D(1,q)] are satisfied then, as n → ∞,

$$P(\overline S_n \ge x) = nV(x)\biggl[1 + \frac{2^{3/2}\alpha\sqrt{nd}}{3\sqrt\pi\,x}\,(1+o(1)) + o\bigl(q(x)\bigr)\biggr] \tag{4.5.2}$$

uniformly in x from the zone n ≤ cx²/ln x.

One could compute higher-order terms of the asymptotic expansion (4.5.1) in a similar way. Observe also that one could easily derive from Theorem 4.5.1 asymptotic expansions for the joint distribution of Sn and S̄n. Namely, for any fixed u > 1, on the one hand, we clearly have

$$P(\overline S_n \ge x,\ S_n < ux) = P(\overline S_n \ge x) - P(S_n \ge ux).$$
On the other hand, for a fixed u ∈ (0, 1),

$$P(\overline S_n < x,\ S_n \ge ux) = P(S_n \ge ux) - P(\overline S_n \ge x) + P(\overline S_n \ge x,\ S_n < ux), \tag{4.5.3}$$

where, under condition [=, =], the last term is negligibly small (the order of magnitude being n²V(x)W(x)), since the event {S̄n ≥ x, Sn < ux} requires, roughly speaking, two large jumps (in opposite directions) in the random walk. More precisely, letting η(x) := min{k : Sk ≥ x}, we have

$$P(\overline S_n \ge x,\ S_n < ux) \le \sum_{k=1}^{n} P(\eta(x)=k)\,P\bigl(S_{n-k} < -(1-u)x\bigr) \le c_1 nW\bigl((1-u)x\bigr)\,P(\overline S_n \ge x) \le c_2 n^{2}W(x)V(x).$$

Under conditions [=, =] and [D(k,q)] on the tails V(t) and W(t), the required asymptotic expansions follow immediately from the above relation and (4.5.3).

Proof of Theorem 4.5.1. According to the above remarks (see Remark 3.4.8 and the end of the proof of Theorem 4.4.4), without loss of generality we can assume from the very beginning that condition [D(k,0)] is met and then just add a remainder term o(q(x)) to the final result. Our proof will repeat, in many aspects, the arguments in the proofs of Theorems 4.4.4 and 3.5.1.

Let Gn := {S̄n ≥ x}. As in (4.4.13) and (3.5.5), for n ≤ cx²/ln x with a suitable c and r = x/y > 2 we have

$$P(G_n) = \sum_{j=1}^{n} P(G_n \bar B_j) + O\bigl((nV(x))^{2}\bigr). \tag{4.5.4}$$

Furthermore, as in (3.5.6) and (3.5.9), for y = δx, δ = 1/r < 1/2 and

$$\overline S^{(j)}_{n-j} := \max_{0\le m\le n-j}(S_{m+j} - S_j),$$

we have

$$P(G_n \bar B_j) = P(\overline S_n \ge x,\ \xi_j \ge y) = P(\overline S_n \ge x,\ \xi_j \ge y,\ \overline S_{j-1} < x) + O\bigl(jV^{2}(x)\bigr) = P\bigl(S_j + \overline S^{(j)}_{n-j} \ge x,\ \xi_j \ge y\bigr) + O\bigl(jV^{2}(x)\bigr).$$

As before, set

$$Z_{j,n} := S_{j-1} + \overline S^{(j)}_{n-j}. \tag{4.5.5}$$
Then

$$P\bigl(S_j + \overline S^{(j)}_{n-j} \ge x,\ \xi_j \ge y\bigr) = P(\xi_j + Z_{j,n} \ge x,\ \xi_j \ge y) = P\bigl(\xi_j + Z_{j,n} \ge x,\ \xi_j \ge \delta x;\ Z_{j,n} < (1-\delta)x\bigr) + P\bigl(\xi_j + Z_{j,n} \ge x,\ \xi_j \ge \delta x,\ Z_{j,n} \ge (1-\delta)x\bigr).$$

The event {ξj ≥ δx} is clearly redundant in the last term. Moreover, noting that ξj and Zj,n are independent and that Zj,n ≤_d S̄n−1, we see that the last term does not exceed

$$P(\xi_j \ge \delta x)\,P\bigl(\overline S_{n-1} \ge (1-\delta)x\bigr) = O\bigl(nV^{2}(x)\bigr).$$

Therefore

$$P(G_n \bar B_j) = P\bigl(\xi_j + Z_{j,n} \ge x,\ Z_{j,n} < (1-\delta)x\bigr) + O\bigl(nV^{2}(x)\bigr) = E\bigl[V(x - Z_{j,n});\ Z_{j,n} < (1-\delta)x\bigr] + O\bigl(nV^{2}(x)\bigr),$$

so that

$$P(G_n) = \sum_{j=1}^{n} E\bigl[V(x - Z_{j,n});\ Z_{j,n} < (1-\delta)x\bigr] + O\bigl((nV(x))^{2}\bigr). \tag{4.5.6}$$

Using reasoning similar to that in the previous sections (cf. (3.5.12), (3.5.13) and (4.4.16)), we have E[V(x − Zj,n); Zj,n < (1−δ)x] = Ej,1 + Ej,2, where

$$E_{j,1} := E\bigl[V(x - Z_{j,n});\ Z_{j,n} \le -(1-\delta)x\bigr] = O\bigl(nV(x)W(x)\bigr).$$

Now consider

$$E_{j,2} := E\bigl[V(x - Z_{j,n});\ |Z_{j,n}| < (1-\delta)x\bigr]$$

and use the inequalities

$$S_{j-1} \le Z_{j,n} \le_d \overline S_{n-1} \tag{4.5.7}$$

and condition [D(k,0)] with t = x, Δ = −Zj,n/x. We obtain

$$E_{j,2} = V(x)\,E\biggl[1 + \sum_{i=1}^{k}\frac{L_i(x)}{i!}\Bigl(\frac{Z_{j,n}}{x}\Bigr)^{i} + \frac{L_k(x)}{k!}\Bigl(\frac{Z_{j,n}}{x}\Bigr)^{k}\varepsilon\Bigl(\frac{Z_{j,n}}{x},x\Bigr);\ \frac{Z_{j,n}}{x} < 1-\delta\biggr]. \tag{4.5.8}$$

Next note that, by virtue of Lemma 4.4.9 and the inequalities (4.5.7), the family of r.v.'s {(Zj,n/√n)^k; 1 ≤ j ≤ n < ∞}
is uniformly integrable. Hence here, as in (4.4.17) and (4.4.18), we can replace the truncated moments of Zj,n on the right-hand side of (4.5.8) by the complete moments and then discard the last term. This will introduce an error of order o(n^{k/2}x^{−k}). Thus we obtain

$$E_{j,2} = V(x)\biggl[1 + \sum_{i=1}^{k}\frac{L_i(x)}{i!\,x^{i}}\,EZ_{j,n}^{i}\biggr] + o\bigl(n^{k/2}x^{-k}\bigr).$$

It remains to note that

$$EZ_{j,n}^{i} = E\bigl(S_{j-1} + \overline S^{(j)}_{n-j}\bigr)^{i} = \sum_{l=0}^{i}\binom{i}{l}ES_{j-1}^{l}\,E\overline S_{n-j}^{i-l}$$

and, in particular, EZj,n = ES̄n−j. Summing up in (4.5.6) the derived expressions for Ej,1 + Ej,2, we obtain (4.5.1) (for q(t) ≡ 0). The theorem is proved.

Example 4.5.3. We will illustrate the results of Corollary 4.4.5(i) and Corollary 4.5.2 by a numerical example. Consider symmetric r.v.'s ξj with
$$F_+(t) = P(\xi \ge t) = \begin{cases} 0.5 & \text{for } 0 \le t < \dfrac{1}{\sqrt 3},\\[2mm] \dfrac{1}{6\sqrt 3}\,t^{-3} & \text{for } t \ge \dfrac{1}{\sqrt 3}. \end{cases}$$

So the conditions [=, =] (with α = β = 3), Eξ = 0 and Eξ² = 1 are satisfied, and condition [D(k,0)] clearly holds for any k ≥ 1. Let us take n = 10 and compare our results with the 'crude approximation' nV(x) in a reasonable range of x values. Figure 4.1 depicts the plots of the 'crude approximation' nV(x), the principal part of the approximation (4.4.12) and a Monte Carlo estimator of P(Sn ≥ x); we simulated 10⁶ trajectories {Sk; k ≤ n}. Figure 4.2 displays the same 'crude approximation' nV(x), the principal part of the approximation from Corollary 4.5.2 and an estimator of P(S̄n ≥ x) (also obtained using simulation). We see that the approximation precision increases substantially in both cases. The fit of the plots in Fig. 4.1 is much better, for the simple reason that the approximation (4.4.12) used there is in fact an asymptotic expansion with two 'correction terms' (the coefficient of n^{1/2}/x equals zero, and the first non-trivial correction term is of order n/x²), whereas (4.5.2) is an expansion with only one such term.
Fig. 4.1. Illustration of Corollary 4.4.5(i), comparing the approximations nV(x) (lower smooth line) and (4.4.12) (upper smooth line) for P(Sn ≥ x) (see Example 4.5.3).
Fig. 4.2. Illustration of Corollary 4.5.2, comparing the approximations nV(x) (lower smooth line) and (4.5.2) (upper smooth line) for P(S̄n ≥ x) (see Example 4.5.3).
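A reader wishing to reproduce such an experiment can do so in a few lines. The sketch below (ours, not the authors') samples the distribution of Example 4.5.3 by inverse transform — |ξ| = U^{−1/3}/√3 with a random sign, U uniform on (0,1) — and compares a Monte Carlo estimate of P(Sn ≥ x) with the crude approximation nV(x) and with the principal part of (4.4.12); the point x and the sample size are our arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, x, alpha, d = 10, 20.0, 3, 1.0
V = lambda t: t**-3 / (6 * np.sqrt(3))  # V(t) = P(ξ ≥ t) for t ≥ 1/√3

# Inverse transform: P(|ξ| ≥ t) = (√3·t)^{-3}, so |ξ| = U^{-1/3}/√3.
trials, hits = 2_000_000, 0
for _ in range(4):                      # simulate in chunks to keep memory modest
    m = trials // 4
    xi = rng.random((m, n)) ** (-1 / 3) / np.sqrt(3)
    xi *= rng.choice([-1.0, 1.0], size=(m, n))
    hits += int(np.count_nonzero(xi.sum(axis=1) >= x))
est = hits / trials

crude = n * V(x)
refined = crude * (1 + alpha * (alpha + 1) * (n - 1) * d / (2 * x**2))
print(f"MC {est:.3e}   nV(x) {crude:.3e}   (4.4.12) {refined:.3e}")
```

With these values the corrected approximation lies some 13% above nV(x), and the Monte Carlo estimate should sit noticeably closer to the former, in line with Fig. 4.1.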
4.6 Asymptotics of P(S̄(a) ≥ x) and its refinements. The general boundary problem

Now we will turn to more general problems on the crossing of an arbitrary boundary {g(j)} by the trajectory {Sj}. The boundary types which are most often encountered in applications were described in § 3.6.

4.6.1 Asymptotics of P(S̄(a) ≥ x)

First we turn our attention to the important special case where g(j) = x + aj for a fixed a > 0 and set, as before,

$$\overline S_n(a) = \max_{j\le n}(S_j - aj),$$
assuming that Eξ = 0, Eξ² = d < ∞.

Theorem 4.6.1. Let condition [<, =], α > 2, be met. Then the following assertions hold true.

(i) Uniformly in n = 1, 2, . . . ,

$$P(\overline S_n(a) \ge x) = \sum_{j=1}^{n} V(x+ja)\,(1+o(1)), \qquad x \to \infty.$$

(ii) If, for some k ≥ 1, the conditions [D(k,q)] and E|ξ|^k < ∞ hold (the latter is only required when k > 2), and q(t) is an r.v.f., then, as x → ∞,

$$P(\overline S_n(a) \ge x) = \sum_{j=1}^{n} V(x+aj)\biggl[1 + \sum_{i=1}^{k}\frac{L_i(x+aj)}{i!\,(x+aj)^{i}}\biggl(E\overline S_{n-j}^{i}(a) + \sum_{l=2}^{i}\binom{i}{l}ES_{j-1}^{l}\,E\overline S_{n-j}^{i-l}(a)\biggr)\biggr] + O\bigl(mV(x)(m\widehat V(x) + xV(x))\bigr) + o\bigl(xV(x)q(x)\bigr), \tag{4.6.1}$$

where m = min{n, x}.

(iii) If we additionally require in part (ii) that E(ξ⁺)^{k+1} < ∞ then ES̄^k(a) < ∞, where S̄(a) := S̄∞(a), and the representation (4.6.1) holds for n = ∞. In this case,

$$P(\overline S(a) \ge x) = \sum_{j=1}^{\infty} V(x+aj)\biggl[1 + \sum_{i=1}^{k}\frac{L_i(x+aj)}{i!\,(x+aj)^{i}}\biggl(E\overline S^{i}(a) + \sum_{l=2}^{i}\binom{i}{l}ES_{j-1}^{l}\,E\overline S^{i-l}(a)\biggr)\biggr] + O\bigl(x^{2}V(x)\widehat V(x)\bigr) + o\bigl(xV(x)q(x)\bigr). \tag{4.6.2}$$

Remark 4.6.2. As will be seen in the proof of the theorem, the assertion of part (ii) is obtained under the assumption that α < k + 1. If E(ξ⁺)^{k+1} < ∞ then this can be improved with the help of Lemma 4.6.6 (see below).

Remarks 4.4.3 and 4.4.7, which relate to Theorem 4.4.4, remain valid in the present subsection.
Corollary 4.6.3. If the conditions [<, =], α > 2, [D(2,0)] and E(ξ⁺)³ < ∞ are satisfied then

$$P(\overline S(a) \ge x) = \sum_{j=1}^{\infty} V(x+aj)\biggl[1 + \frac{L_1(x+aj)}{x+aj}\,E\overline S(a) + \frac{L_2(x+aj)}{2(x+aj)^{2}}\bigl(E\overline S^{2}(a) + (j-1)d\bigr)\biggr] + O\bigl(x^{2}V(x)\widehat V(x)\bigr)$$
$$= \sum_{j=1}^{\infty} V(x+aj)\biggl[1 + \frac{\alpha\,E\overline S(a)}{x+aj} + \frac{\alpha(\alpha+1)jd}{2(x+aj)^{2}}\biggr] + o\bigl(V(x)\bigr). \tag{4.6.3}$$

Observing that, owing to [D(2,0)] and the properties of r.v.f.'s, one has, as x → ∞,

$$\sum_{j=1}^{\infty} V(x+aj) = \frac1a\int_x^{\infty} V(u)\,du - \frac12 V(x) + o(V(x)),$$

$$\sum_{j=1}^{\infty} \frac{V(x+aj)}{x+aj} \sim \frac1a\int_x^{\infty} \frac{V(u)}{u}\,du \sim \frac{V(x)}{a\alpha},$$

$$\sum_{j=1}^{\infty} \frac{jV(x+aj)}{(x+aj)^{2}} \sim \frac1{a^{2}}\int_0^{\infty} \frac{V(x+u)\,u}{(x+u)^{2}}\,du = \frac1{a^{2}}\biggl[\int_x^{\infty}\frac{V(t)}{t}\,dt - x\int_x^{\infty}\frac{V(t)}{t^{2}}\,dt\biggr] = \frac1{a^{2}}\biggl[\frac{V(x)}{\alpha} - \frac{V(x)}{\alpha+1}\biggr](1+o(1)) \sim \frac{V(x)}{a^{2}\alpha(\alpha+1)},$$

we obtain from (4.6.3) the following 'integral' representation:

$$P(\overline S(a) \ge x) = \frac1a\int_x^{\infty} V(u)\,du + \Bigl(\frac{d}{2a^{2}} + \frac{E\overline S(a)}{a} - \frac12\Bigr)V(x) + o(V(x)). \tag{4.6.4}$$
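The first of the summation formulas used here — the source of the −V(x)/2 correction in (4.6.4) — is just the Euler–Maclaurin approximation Σ_{j≥1} V(x+aj) ≈ a^{−1}∫_x^∞ V(u) du − V(x)/2. As an illustration outside the original text, with a pure power tail of our choosing, it is easy to confirm numerically:

```python
import numpy as np

# Check: Σ_{j≥1} V(x+aj) ≈ (1/a)∫_x^∞ V(u)du − V(x)/2 for V(u) = u^{-α}.
a, x, alpha = 2.0, 50.0, 3
V = lambda u: u ** -alpha

j = np.arange(1, 200_000)        # truncation error of the tail is negligible here
lhs = V(x + a * j).sum()
rhs = (x ** (1 - alpha) / (alpha - 1)) / a - V(x) / 2  # exact integral minus V(x)/2
print(lhs, rhs)
```

The two values agree to within the next Euler–Maclaurin term, of order aV′(x), which is exactly the o(V(x)) allowance in (4.6.4).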
It is of interest to note that the functions L1 (x), L2 (x) appearing in condition [D(2,0) ], as well as the third moment of ξ + , are not present in the representation (4.6.4). That the conditions [D(2,0) ] and E(ξ + )3 < ∞ are not needed for (4.6.4) will be confirmed in Theorem 7.5.8, where the assertion (4.6.4) is obtained without the above-mentioned conditions, see p. 361. (One should note that, in the notation of Chapter 7, the function V (x) =F+ (x) in Theorem 7.5.8 is the ∞ tail of the r.v. ξ − a, so that the expression a−1 x F+ (u) du from § 7.5 is the
4.6 Asymptotics of P(S(a) x) and the general boundary problem 211 same as 1 a
∞
1 V (u + a) du = a
x
∞ V (u) du − V (x) + o(V (x)) x
in our current notation. This is why the term −V (x)/2 in (4.6.4) is replaced by +V (x)/2 in Theorem 7.5.8.) The above means that the methods used to prove Theorem 4.6.1 are not quite adequate in the case n = ∞, i.e. when we are studying the asymptotics of P(S(a) x). Proof of Theorem 4.6.1. (i) The proof of the theorem will consist of several stages, which we will formulate as lemmata. The first is an analogue of Lemma 3.6.9. Set m := min{x, n}, Gn := {S n (a) x},
Bj (v) := {ξj < y + vj},
B(v) :=
n *
Bj (v).
j=1
Lemma 4.6.4. Let condition [ · , <], α > 2, hold. Then, for v min{a/2r, (r − 2)/2r},
r > 2,
and for all n one has, as x → ∞, P(S n (a) x) =
n
P(Gn B j (v)) + O (mV (x))2 .
(4.6.5)
j=1
Proof. The proof the lemma repeats that of Lemma 3.6.9. The only difference is that now, instead of using Theorem 3.2.1, one should use Theorem 4.2.1. The subsequent proof of Theorem 4.6.1 involves arguments quite similar to those in the proofs of Theorems 3.6.1 and 4.5.1. As in § 3.2, put (j) S n−j (a) := max 0, ξj+1 − a, . . . , Sn − Sj − (n − j)a , (j)
Zj,n (a) := Sj−1 + S n−j (a). First we will obtain representations for the summands in (4.6.5). Lemma 4.6.5. Assume that condition [<, <], α > 2, is satisfied. Then, for δ ∈ (0, 1), P(Gn B j (v)) $ # = E V (x + aj − Zj,n (a)); Zj,n (a) < δ(x + aj) # $ + O V (x + aj) min{j, x}V (x) + min{n, x + aj}V (x + aj) √ and, for z c j ln j, a suitable c and any j n, # $ P(Zj,n (a) z) c1 j + min{n, z} V (z), # $ P |Zj,n (a)| z c1 min{n, z}V (z) + j V (z) .
(4.6.6)
(4.6.7) (4.6.8)
212
Random walks with jumps having finite variance
Proof. First we derive bounds for the distribution tails of Zj,n (a) and |Zj,n (a)|. √ For z > c j ln j, we find from Corollary 4.1.4 and Theorem 4.2.1 that P(Zj,n (a) z) P Sj−1 z/2 + P S n−j (a) z/2 # $ c1 jV (z) + min{n, z}V (z) # $ (4.6.9) c2 j + min{n, z} V (z). Further, since d
√
(j)
|Zj,n (a)| |Sj−1 | + S n−j (a),
we have, for z > c j ln j, P |Zj,n (a)| z P |Sj−1 | z/2 + P S n (a) z/2 # $ c2 j V (z) + min{n, z}V (z) .
(4.6.10)
(4.6.11)
Next we obtain a representation for P(Gn B j (v)) = P S n (a) x, ξj y + jv = P S n (a) x, ξj y + jv, S j−1 (a) < x + ρn,j,x , (4.6.12) where, by Theorem 4.2.1, ρn,j,x cV (y + jv) min{j, x}V (x).
(4.6.13)
By virtue of (4.6.7), we have, similarly to § 3.6 (cf. (3.6.18)), that P S n (a) x, ξj y + jv, S j−1 (a) < x = P ξj + Zj,n (a) x + aj, ξj y + jv + O min{j, x}V (x)V (y + jv) = P ξj + Zj,n (a) x + aj, ξj y + jv, Zj,n (a) < x + aj − y − jv + O min{j, x}V (x)V (x + aj) + O min{n, x + aj}V 2 (x + aj) . Here we have used the fact that, for the chosen values of r and v, one always has c1 (x + aj) x − y + j(a − v) c2 (x + aj), c3 (x + aj) y + jv c4 (x + aj). As in § 3.6, this yields (4.6.6). Lemma 4.6.5 is proved. As before, we will split the expectation in (4.6.6) into two parts: # $ Ej,1 := E V (x + aj − Zj,n (a)); Zj,n (a) −x(j) , # $ Ej,2 := E V (x + aj − Zj,n (a)); |Zj,n (a)| < x(j) , where x(j) = δ(x + aj) and Ej,1 c1 V (x + aj) P(Sj−1 −x(j)) cjV (x + aj)W (x + aj).
(4.6.14)
4.6 Asymptotics of P(S(a) x) and the general boundary problem 213 Owing to (4.6.8), as x → ∞, P |Zj,n (a)| (x + aj)ε → 0 uniformly in j for any fixed ε > 0. This means that Ej,2 = V (x + aj)(1 + o(1)). By virtue of (4.6.5) and (4.6.12)–(4.6.14), this proves the first part of the theorem. (ii) Now let condition [D(k,q) ] be satisfied and q(t) be an r.v.f. Then we have the following expansion, similar to (4.4.17): % k i Li (x + aj) Zj,n (a) Ej,2 = V (x + aj) E 1 + i! x + aj i=1 k Lk (x) Zj,n (a) Zj,n (a) + ε , x + aj k! x + aj x + aj & Zj,n (a) <δ . + o(q(x + aj)); (4.6.15) x + aj In order to pass to ‘complete’ expectations, we will have to bound the quantities # i $ i := E |Zj,n (a)|; |Zj,n (a)| x(j) , i k − 1; Tj,n # $ k k Ij,n := E |Zj,n (a)|; |Zj,n (a)| < x(j) , % & Zj,n (a) k,ε k , x + aj ; |Zj,n (a)| < x(j) . Ij,n := E |Zj,n (a)| ε x + aj Let mj := min{n, x + aj}. Lemma 4.6.6.
k/2 cj# k $ Ij,n c j k/2 + mk+1 V (mj ) j
if E(ξ + )k+1 < ∞, if α < k + 1;
k,ε k = o(Ij,n ), Ij,n i Tj,n
# $ c(x + aj)i j V (x + aj) + mj V (x + aj) ,
(4.6.16) (4.6.17)
i k.
(4.6.18)
Proof. As is well known, if E(ξ + )k+1 < ∞ then ES k (a) < ∞ (this could also be derived easily from Theorem 4.2.1). Therefore, if E(ξ + )k+1 < ∞ then ES n (a)k ES(a)k < c < ∞ and, since E|Sn |k ck nk/2 , we obtain E|Zj,n (a)|k cj k/2 . The first inequality in (4.6.16) is proved. Now consider the case where α < k + 1 and therefore E(ξ + )k+1 = ∞. Then,
214
Random walks with jumps having finite variance
using the bound (4.2.2), we have # $ E |Zj,n (a)|k ; |Zj,n (a)| < x(j) # $ (j) (j) c1 E |Sj−1 |k + (S n−j (a))k ; |Sj−1 | x(j), S n−j (a) 2x(j)
+ x (j) P Sj−1 k
< −x(j) c2 j k/2 + c2
2x(j)
z k−1 min{n, z}V (z) dz. 0
When 2x(j) n, the integral on the right-hand side does not exceed 2x(j)
z k V (z) dz c3 x(j)k+1 V (x + aj). 0
If 2x(j) > n it does not exceed
2x(j)
c3 nk+1 V (n) + n
z k−1 V (z) dz
c4 nk+1 V (n).
n
Thus the integral is bounded from above by cmk+1 V (mj ). This proves the secj ond inequality in (4.6.16). Since the function ε z/(x + aj), x + aj is bounded and tends to zero when z = o(x + aj), and since in the subsequent argument the upper integration limit x(j) = δ(x + aj) can be chosen with an arbitrarily small fixed δ > 0, the relation (4.6.17) is also proved. i . It follows from the inequality (4.6.10) that It remains to consider Tj,n &
% x(j) x(j) i i i + (x + aj) P |Sj−1 | Tj,n c E |Sj−1 |; |Sj−1 | 2 2 % & x(j) i + E S n−j (a); S n−j (a) 2 " x(j) i + (x + aj) P S n−j (a) , (4.6.19) 2 where, by virtue of Corollary 4.1.4 and Theorem 4.2.1(i), & % x(j) i < c1 j(x + aj)i V (x + aj), E |Sj−1 |; |Sj−1 | 2 & % x(j) i < c1 min{n, x + aj}(x + aj)i V (x + aj). E S n−j (a); S n−j (a) 2 This, together with (4.6.19), implies (4.6.18). The lemma is proved. Now we return to (4.6.5) and start gathering up the bounds that we have obtained. First we consider the sum of the remainder terms that appear in (4.6.6). It
4.6 Asymptotics of P(S(a) x) and the general boundary problem 215 is not hard to see that n
# $ V (x + aj) min{j, x}V (x) + min{n, x + aj}V (x + aj) cm2 V 2 (x),
j=1
so that P(S n (a) x) =
n (Ej,1 + Ej,2 ) + O m2 V 2 (x) , j=1
where, by virtue of (4.6.14), n
Ej,1 < cm2 V (x)W (x).
j=1
To compute that
n j=1
Ej,2 , we have to use (4.6.15) and Lemma 4.6.6, which implies
n V (x + aj) j=1
(x + aj)i
i Tj,n c
n # $ jV (x + aj)V (x + aj) + mj V 2 (x + aj) j=1
# $ c1 m2 V (x)V (x) + mxV 2 (x) .
Thus, for α < k + 1, P(S n (a) x) =
n j=1
V (x + aj) 1 +
⎡
+ o⎣
k Li (x + aj) i=1
n V (x + aj)
i!(x +
i EZj,n (a) aj)i
⎤ V (mj ) ⎦ j k/2 + mk+1 j
(x + aj)k + O m2 V (x)V (x) + O mxV 2 (x) + o xV (x)q(x) , j=1
where n V (x + aj) j=1 n j=1
(x + aj)k
j k/2 cmk/2+1 x−k V (x),
V (x + aj) k+1 m V (mj ) cmk+2 V (m)x−k V (x) (x + aj)k j =c
mk+1 V (m) mxV 2 (x) cmxV 2 (x), xk+1 V (x)
since the power factor of the function tk+1 V (t) has the exponent k + 1 − α > 0, while m x. The second assertion of Theorem 4.6.1 is proved. In the case when E(ξ + )k+1 < ∞, the bounds could be improved owing to Lemma 4.6.6. (iii) The third assertion of the theorem is an obvious consequence of the second. The theorem is proved.
216
Random walks with jumps having finite variance
Proof of Corollary 4.6.3. The first equality in (4.6.3) is obvious from (4.6.2). To proof the second, it suffices to observe that L1 (x) ∼ α,
L2 (x) ∼ α(α + 1),
x2 V (x) = o(1).
4.6.2 The general boundary problem Now we will turn to the general boundary problem on the asymptotic behaviour of the probability
P max(Sk − g(k)) 0 , kn
where {g(k)} is a boundary from the class Gx,n , for which minkn g(k) = cx, c > 0 (cf. p. 155). The following analogue of Theorem 3.6.4 holds true. Theorem 4.6.7. Assume that the conditions [<, =] and Eξ 2 < ∞ are satisfied. Then, as x → ∞, 3 n 4
P max(Sk − g(k)) 0 = V (g∗ (j)) (1 + o(1)) + O n2 V (x)V (x) , kn
j=1
where g∗ (j) = minjkn g(k) and the remainder terms are uniform in g ∈ Gx,n in the zone of n and x such that n < c1 x2 /ln x for a suitable c1 > 0. Proof. The proof of Theorem 4.6.7 basically repeats the argument proving Theorem 3.6.4(i), with a few obvious changes since now Eξ 2 < ∞. So we will omit it. We also have a complete analogue of Corollary 3.6.6 for the case under consideration, and, in particular, an assertion on the asymptotics of the probabilities P S n (−a) − an x for a > 0, where S n (−a) := max(Sk + ak). kn
Under condition [D(k,q) ], one could carry out a more complete analysis of the distribution of S n (−a)−an. We will confine ourselves to the case when condition [D(2,0) ] is met. Theorem 4.6.8. Let conditions [<, =], α > 2, and [D(2,0) ] be satisfied. Then, for a fixed a > 0, as x → ∞, % n L1 (x) P(S n (−a) − an x) = nV (x) 1 + Eζj xn j=1 & 3/2 −3 α(α − 1)n + (1 + o(1)) + O n x , 2x2
4.7 Integro-local theorems for the sums Sn
217
where ζj := − minkj (Sk +ak) and the remainder terms are uniform in the zone n cx2 /ln x for some c > 0. If n N → ∞, then the above representation takes the form % & 3/2 −3 αEζ α(α − 1)n nV (x) 1 + (1 + o(1)) + , (1 + o(1)) + O n x x 2x2 where ζ = ζ∞ , Eζ < ∞ and the remainder terms are uniform in the zone n ∈ [N, cx2 /ln x], c > 0. Proof. The proof of Theorem 4.6.8 follows the scheme of the proof of Theorems 4.4.4, refg4p5t1 and 4.6.1 and uses the relations Sn S n (−a) − an S n ,
S n (−a) − an = Sn + ζn ,
d
where ζn = ζn . A detailed scheme of the proof is given in § 5.7. We will not present it here to avoid repetition. It is seen from the theorem that the first terms in the expansions for the distributions of Sn and S n (−a) − an coincide when x = o(n). 4.7 Integro-local theorems for the sums Sn 4.7.1 Integro-local theorems on the large deviations of Sn In this section, as in §3.7, we will study the asymptotic behaviour of the proba bilities P Sn ∈ Δ[x) , where Δ[x) := [x, x + Δ), x → ∞, Δ = o(x). For remarks on the formulation of this problem, see § 3.7. Integro-local theorems will be used in Chapters 6–8. In the multivariate case, integro-local theorems provide the most natural form for assertions on the asymptotics of large deviation probabilities (see Chapter 9). We will need condition [D(1,q) ] in the form (3.7.2): [D(1,q) ]
As t → ∞, for Δ ∈ [Δ1 , Δ2 ], Δ1 Δ2 = o(t), one has # $ V1 (t) = αV (t)/t, V (t) − V (t + Δ) = V1 (t) Δ(1 + o(1)) + o(q(t)) , (4.7.1) where the term o(q(t)) does not depend on Δ, and q(t) = t−γq Lq (t), γq −1, is an r.v.f. The remainder term o(1) is assumed here to be uniform in Δ in the same sense as in (3.7.2). If, in the important special case where Δ = Δ1 = Δ2 = const (Δ is an arbitrary fixed number), one has V (t) − V (t + Δ) = V1 (t)(Δ + o(1)) as t → ∞ then clearly condition [D(1,q) ] will hold with q(t) ≡ 1 (here the assumption on the uniformity of o(1) disappears).
218
Random walks with jumps having finite variance
In the lattice case, when the lattice span is equal to 1, we assume that Δ 1 and that Δ and t in (4.7.1) are integer-valued. For remarks on the form (4.7.1) of condition [D(1,q) ], which is somewhat different from that used in § 4.4, see § 3.7 (the differences are only related to the convenience of exposition). Condition (4.7.1) will always hold with q(t) ≡ 0 provided that L(t) is differentiable, L (t) = o(L (t)/t). Then one can identify V1 (t) with −V (t). Theorem 4.7.1. Let Eξ = 0, Eξ 2 < ∞ √ and let conditions [ · , =] with α > 2 and [D(1,q) ] be satisfied. Then, for x N n ln n, Δ cq(x), as N → ∞, P(Sn ∈ Δ[x)) = ΔnV1 (x)(1 + o(1)),
V1 (x) =
αV (x) , x
(4.7.2)
where the remainder term o(1) in (4.7.2) is uniform in x, n and Δ such that √ x N n ln n, max{x−γ0 , q(x)} Δ xεN for any fixed γ0 −1 and any fixed function εN ↓ 0 as N ↑ ∞. By the uniformity of o(1) in (4.7.2) we understand the existence of a function δ(N ) ↓ ∞ as N ↑ ∞ (depending on εN ) such that the term o(1) in (4.7.2) could be replaced by a function δ(x, n, Δ) with |δ(x, n, Δ)| δ(N ). In the lattice case, the assertion (4.7.2) remains valid for integer-valued x and Δ max{1, cq(x)}. All the remarks following Theorem 3.7.1 remain valid here. For Δ = const, x n1/(b−1) , where b is such that E|ξ|b < ∞, the assertions of Theorem 4.7.1 can be derived from Theorem 4.4.4. Now we will require that, in addition, the following condition be satisfied: [D]
For some t0 > 0 and all t t0 , the function V (t) (L(t)) is differentiable, αV (t) L(t) L (t) = o as t → ∞. V (t) ∼ − t t
Then the next analogue of Theorem 3.7.2 will hold true. √ Theorem 4.7.2. Let the conditions [ · , =], α > 2, [D] and x n ln n be met. Then the distribution of Sn can be represented as a sum of two measures: P(Sn ∈ ·) = Pn,1 (·) + Pn,2 (·), where the measure Pn,1 has, for any fixed r and all large enough x, the property Pn,1 [x, ∞) [nV (x)]r . The measure Pn,2 has density Pn,2 (dx) = −nV (x)(1 + o(1)), dx
4.7 Integro-local theorems for the sums Sn 219 √ where the reminder o(1) is uniform in x and n such that n ln n/x < εx for any fixed function εx → 0 as x → ∞. Proof. The proof of Theorem 4.7.1 follows the scheme of the proof of Theorem 3.7.1. For y < x, set Gn := {Sn ∈ Δ[x)},
Bj := {ξj < y},
B=
n *
Bj .
(4.7.3)
j=1
Then P(Gn ) = P(Gn B) + P(Gn B),
(4.7.4)
where n
P(Gn B j ) P(Gn B)
j=1
n
P(Gn B j ) −
j=1
P(Gn B i B j ).
(4.7.5)
i<jn
As in the proof of Theorem 3.7.1, we will split the argument into three stages: bounding P(Gn B), bounding P(Gn B i B j ), i = j, and evaluating P(Gn B j ). (1) Bounding P(Gn B). We will use the crude inequality P(Gn B) P(Sn x; B)
(4.7.6)
and Theorem 4.1.2. Owing to the latter, for x = ry, a fixed r > 2, any δ > 0 and √ x N n ln n, N → ∞, one has P(Sn x; B) (nV (y))r−δ
(4.7.7)
(see Corollary 4.1.3). Now choose r such that (nV (x))r−δ nΔV1 (x)
(4.7.8)
√ for x n, Δ cq(x). Setting n = x2 and comparing the powers of x on the right- and left-hand sides of (4.7.8), we obtain, taking into account the condition Δ x−γ0 , that for (4.7.8) to hold it suffices that r is chosen in such a way that (2 − α)(r − δ) < 1 − α − γ0 . For α > 2, this is equivalent to r>
α − 1 + γ0 . α−2
For such an r, owing to (4.7.6)–(4.7.8) we will have P(Gn B) = o nΔV1 (x) .
(4.7.9)
(2) Bounding P(Gn B i B j ) is done in the same way as in the proof of Theorem 3.7.1 and results in the inequality (3.7.16).
220
Random walks with jumps having finite variance
(3) Evaluating P(Gn B j ). This is based, as in the proof of Theorem 3.7.1, on the representation (3.7.17), which yields # $ P(Gn B n ) ΔE V1 (x − Sn−1 ); Sn−1 < (1 − δ)x + Δ (4.7.10) = ΔV1 (x)(1 + o(1)). √ The last since, due to the Chebyshev inequality, one # relation holds for x n √ $ √ has E V1 (x−Sn−1 ); |Sn−1 | M n ∼ V1 (x) as M → ∞ and M n = o(x). Moreover, the following obvious bounds are valid: $ # √ E V1 (x − Sn−1 ); Sn−1 ∈ (M n, (1 − δ)x + Δ) = o(V1 (x)) and
# √ $ E V1 (x − Sn−1 ); Sn−1 ∈ (−∞, −M n ) = o(V1 (x))
as M → ∞. Similarly, by virtue of (3.7.17) one finds that (1−δ)x
P(Sn−1 ∈ dz) P ξ ∈ Δ[x − z) ∼ ΔV1 (x).
P(Gn B n )
(4.7.11)
−∞
From (4.7.10) and (4.7.11) we obtain P(Gn B n ) = ΔV1 (x)(1 + o(1)). Together with (4.7.4), (4.7.9) and (3.7.16), this gives P(Gn ) = ΔnV1 (x)(1 + o(1)). The required uniformity of the term o(1) follows in an obvious way from the above argument. The theorem is proved. Remark 4.7.3. To bound P(Gn B) (see stage (1) of the proof of Theorem 4.7.1) one could use more refined approaches instead of the crude inequality (4.7.6), which would lead to stronger results. Note that P(Gn B) = P(B) P(Gn | B), where n y y y P(Gn | B) = P(Sn ∈ Δ[x)), Sn = ξj j=1 y
and the ξj
are ‘truncated’ (at the level y) r.v.’s, following the distribution P(ξ y < t) =
P(ξ < t) , P(ξ < y)
t y,
so that (cf. § 2.1) ϕy (λ) := Eeλξ
y
=
R(λ, y) , P(ξ < y)
R(λ, y) = E(eλξ ; ξ < y).
Since the r.v. ξ y is bounded from above, we have ϕy (λ) < ∞ for λ > 0, so
4.7 Integro-local theorems for the sums Sn
221
that one can perform a Cram´er transform on the distribution of ξ y and introduce r.v.’s ξj with the distribution P(ξ ∈ dt) =
eλt P(ξ y ∈ dt) . ϕy (λ)
(4.7.12)
Then (see e.g. § 8, Chapter 8 of [49] or § 6.1 of the present text) P(Sny ∈ dt) = e−λt ϕny (λ) P(Sn ∈ dt), n where Sn = nj=1 ξj . Since P(B) = P(ξ < y) , we have P(Sny ∈ Δ[x)) e−λx ϕny (λ) P(Sn ∈ Δ[x)) and, therefore, P(Gn B) e−λx Rn (λ, y) P(Sn ∈ Δ[x)).
(4.7.13)
Now the product e−λx Rn (λ, y) in (4.7.13) is exactly the quantity which is nory mally used to bound P(Sn x) and which was the main object of our studies in § 4.1. In particular, we established there that, for λ=
1 r ln , y nV (y)
x → ∞,
(4.7.14)
and any δ > 0, one has # $−r+δ e−λx Rn (λ, y) nV (x) , so that
# $−r+δ P(Sn ∈ Δ[x)). P(Gn B) nV (x)
(4.7.15)
This inequality is more precise than (4.7.6), (4.7.7), because of the presence of the factor P(Sn ∈ Δ[x)) on the right-hand side of (4.7.15). Indeed, for this factor we have the following bounds, owing to the well-known results for the concentration function: for all n 1, (Δ + 1) for all Δ, c P(Sn ∈ Δ[x)) √ × n Q(Δ) for Δ 1, where Q(Δ) := supt P(ξ ∈ Δ[t)) 1 is the concentration function of the r.v. ξ (see e.g. Lemma 9 and Theorem 11 in Chapter 3 of [224]). This means that, for any δ > 0, Δ > 0, as x → ∞ we have # $−r+δ Δ + 1 √ . P(Gn B) nV (x) n
(4.7.16)
It is not hard to see that if ξ has a bounded density V1 (x) and V1 (x) = O(V (x)) as x → ∞ then the Cram´er transform (4.7.12) for the value of λ given in (4.7.14)
222
Random walks with jumps having finite variance
will result in a distribution of ξ that also possesses a bounded density, and therefore Q(Δ) cΔ. In this case we will also have # $−r+δ Δ √ P(Gn B) nV (x) n
(4.7.17)
for any Δ 1. The inequality (4.7.16) enables one to obtain the bound (4.7.9) for smaller values of r, and the inequality (4.7.17) enables one to obtain (4.7.9) for any Δ 1 (not necessarily for Δ q(x)). Proof. The proof of Theorem 4.7.2 is similar to that of Theorem 3.7.2 and hence will be omitted. Remark 4.7.4. If n → ∞ and the order of magnitude of the deviations is fixed as follows: x ∼ A(n) := n1/γ LA (n), γ < 2, where LA is an s.v.f., then an analogue of Remark 3.7.4 is valid under the conditions of the present chapter. This remark concerns the possibility of an alternative approach to obtaining integrolocal theorems when condition [D(1,q) ] is substantially relaxed and replaced by √ condition (3.7.20) for x ∼ A(n), Δn = εn n, εn = o(1) as n → ∞. The latter is equivalent to the condition V (x) − V (x + Δ) = ΔV1 (x)(1 + o(1)) for Δ = xγ/2 LΔ (x), where LΔ is an s.v.f., x → ∞. The scheme of the proof of the integro-local theorem remains the same as in Remark 3.7.4. One just lets √ b(n) := nd, takes f to be the standard normal density and uses the Gnedenko– Stone–Shepp theorem [130, 254, 258, 259]. In conclusion note that, under the conditions of this section, one could also obtain asymptotic expansions in the integro-local theorem, as in § 4.4.
4.7.2 Integro-local theorems valid on the whole real line
An integral theorem for the sums Sn of r.v.'s with regularly varying distributions, which is valid on the whole real line, is given by (4.1.2). Now we will present a similar integro-local representation, i.e. a representation for P(Sn ∈ Δ[x)), that will be valid on the whole half-line x ≥ 0. The complexity of the proofs of the respective results, which were obtained in the recent paper [192], is somewhat beyond the level of the present monograph. So we will present these results without proofs; the latter can be found in [192]. As before, we will consider two distribution types, non-lattice and arithmetic. For these distributions we will need an additional smoothness condition [D1], which is a simplified version of condition [D(1,1)], see (4.7.1):
[D1] In the non-lattice case, for any fixed Δ > 0, as t → ∞,
V(t) − V(t + Δ) ∼ ΔαV(t)/t.
In the arithmetic case, for integer-valued k → ∞,
V(k) − V(k + 1) ∼ αV(k)/k.
As we have already pointed out, the function V1(t) := αV(t)/t in condition [D1] plays the role of the derivative of −V(t), and is asymptotically equivalent to the latter when the derivative exists and behaves regularly enough at infinity. We will also need the condition
E(|ξ|²; ξ < −t) = o(1/ln t)  as t → ∞,   (4.7.18)
which controls the decay rate of the left tail F−(t) as t → ∞.
First we consider the non-lattice case, where we can assume without loss of generality that Eξ = 0, d = Var ξ = 1.
Theorem 4.7.5. Let ξ be a non-lattice r.v.,
Eξ = 0,  Var ξ = 1,   (4.7.19)
and let the conditions [·, =] with α > 2, [D1] and (4.7.18) be satisfied. Then, for any fixed Δ > 0,
P(Sn ∈ Δ[x)) ∼ (Δ/√(2πn)) e^{−x²/2n} + ΔnαV(x)/x   (4.7.20)
uniformly in x ≥ √n. In particular, for any fixed ε > 0,
P(Sn ∈ Δ[x)) ∼ (Δ/√(2πn)) e^{−x²/2n}  if √n ≤ x ≤ (1 − ε)√((α − 2)n ln n),
P(Sn ∈ Δ[x)) ∼ ΔnαV(x)/x  if x ≥ (1 + ε)√((α − 2)n ln n).   (4.7.21)
If x ≤ (1 − ε)√((α − 2)n ln n) then condition [D1] is redundant.
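As a numerical aside (ours, not the book's), the two competing terms in (4.7.20) can be compared directly for the model tail V(x) = x^{−α}. The sketch below locates the deviation level at which the Gaussian term and the heavy-tail term balance, and checks that it has the same order of magnitude as the threshold √((α − 2)n ln n) from (4.7.21); the parameter values are arbitrary choices of ours.

```python
import math

# Compare the two terms of (4.7.20) for the pure Pareto-type tail
# V(x) = x**(-alpha), and bisect for the level where they balance.
alpha, n, delta = 4.0, 10_000, 1.0

def gauss_term(x):
    return delta / math.sqrt(2 * math.pi * n) * math.exp(-x * x / (2 * n))

def heavy_term(x):
    return delta * n * alpha * x ** (-alpha) / x

# At x = sqrt(n) the Gaussian term dominates; at x = n the heavy tail does.
lo, hi = math.sqrt(n), float(n)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if gauss_term(mid) > heavy_term(mid):
        lo = mid
    else:
        hi = mid

predicted = math.sqrt((alpha - 2) * n * math.log(n))
print(round(lo), round(predicted))  # same order of magnitude
```

The bisected crossover and √((α − 2)n ln n) agree up to a bounded factor, as the theorem's ε-corridors suggest.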
Now consider the arithmetic case. Here condition (4.7.19) does restrict generality, and we are dealing with arbitrary values of Eξ and d = Var ξ.
Theorem 4.7.6. Let ξ be an arithmetic r.v., and let the conditions [·, =] for α > 2, [D1] and (4.7.18) be satisfied. Then
P(Sn − nEξ = x) ∼ (1/√(2πnd)) e^{−x²/2dn} + nαV(x)/x
uniformly in x ≥ √n. In particular, for any fixed ε > 0,
P(Sn − nEξ = x) ∼ (1/√(2πnd)) e^{−x²/2nd}  if x ≤ (1 − ε)√(d(α − 2)n ln n),
P(Sn − nEξ = x) ∼ nαV(x)/x  if x ≥ (1 + ε)√(d(α − 2)n ln n).
If x ≤ (1 − ε)√(d(α − 2)n ln n) then condition [D1] is redundant.
Note that, generally speaking, Theorems 4.7.5 and 4.7.6 do not imply the integral theorem (4.1.2), because in that theorem condition [D1] was not assumed. It is clear that Theorems 4.7.5 and 4.7.6 could be combined into one assertion about the asymptotics of P(Sn − nEξ ∈ Δ[x)), without the assumption (4.7.19), where Δ is an arbitrary fixed positive number in the non-lattice case and Δ = 1 in the arithmetic case. Observe also that the assertions of Theorems 4.7.5 and 4.7.6 for deviations x ≥ √(n ln n) and x = O(√n) have already been proved in Theorem 4.7.1 and the Gnedenko–Stone–Shepp theorem (see [130, 258, 254]) respectively. For a proof for the remaining zone √n ≤ x = O(√(n ln n)), see [192].

4.8 Extension of results on the asymptotics of P(Sn ≥ x) and P(S̄n ≥ x) to wider classes of jump distributions

The main assumption under which we established in Chapters 3 and 4 the asymptotics
P(Sn ≥ x) ∼ nP(ξ ≥ x),  P(S̄n ≥ x) ∼ nP(ξ ≥ x)   (4.8.1)
in the respective deviation zones was that the distribution F of the summands ξi is regularly varying at infinity, i.e. that condition [·, =] is satisfied. Now we will show that the asymptotics (4.8.1) remain valid for a wider class of distributions as well, but possibly in narrower deviation zones.
First assume that Eξ² < ∞. Recall that the definitions of ψ-locally constant (ψ-l.c.) and upper-power functions can be found in Chapter 1 (see pp. 18, 28).
Theorem 4.8.1. Let the following conditions be satisfied:
(1) [·, <], α > 2, and Eξ² < ∞;
(2) the function F₊(t) is both upper-power and ψ-l.c.
Let, moreover, x → ∞ and
n < x²/(2c ln x),  n < c1ψ²(x),  nV²(x) = o(F₊(x))   (4.8.2)
for some c > α − 2, c1 < ∞. Then (4.8.1) holds true.
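The first relation in (4.8.1) can be probed by a quick Monte Carlo experiment (ours, not from the book). We take centred Pareto jumps with P(Y > t) = t^{−α}, t ≥ 1 — a regularly varying, hence qualifying, tail — and compare the empirical P(Sn ≥ x) with nP(ξ ≥ x) at a deviation level x well above √(n ln n). All parameter values are illustrative assumptions.

```python
import random

# Single-big-jump check: P(Sn >= x) should be close to n * P(xi >= x)
# for heavy-tailed jumps.  Jumps: xi = Y - E[Y], with Y Pareto(alpha) on [1, inf).
random.seed(0)
alpha, n, x, trials = 3.0, 10, 20.0, 200_000
mean_y = alpha / (alpha - 1)              # E[Y] = 1.5 for alpha = 3

hits = 0
for _ in range(trials):
    # inverse transform: (1 - U)**(-1/alpha) is Pareto(alpha) on [1, inf)
    s = sum((1.0 - random.random()) ** (-1.0 / alpha) - mean_y for _ in range(n))
    if s >= x:
        hits += 1

empirical = hits / trials
predicted = n * (x + mean_y) ** (-alpha)  # n * P(xi >= x) = n * P(Y >= x + E[Y])
ratio = empirical / predicted
print(ratio)                              # should be of order 1
```

The ratio is close to 1 (finite-n corrections push it slightly above), illustrating that the maximum jump carries the large-deviation probability.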
Remark 4.8.2. By virtue of Theorem 1.2.21(i), any distribution F that satisfies condition (2) of Theorem 4.8.1 is subexponential and hence, for any fixed n, one has P(Sn ≥ x) ∼ nF₊(x) as x → ∞. As we saw in Chapter 1, subexponential distributions do not need to behave 'regularly' at infinity; for instance, they can fluctuate between two regularly varying (or subexponential) functions (see Theorem 1.2.21(ii) and Example 1.2.41).
Remark 4.8.3. The last condition in (4.8.2) is always satisfied provided that F₊(t) > ct^{−α−ε} for some ε < α − 2. Indeed, in that case, for n < x² and any ε₁ ∈ (0, α − 2 − ε), by Theorem 1.1.4(i) we have
nV²(x) = nx^{−2α}L²(x) = o(x^{−2α+2+ε₁}) = o(F₊(x)),
since −2α + 2 + ε₁ < −α − ε.
Remark 4.8.4. It is easily seen that Theorem 4.8.1 together with Theorem 4.8.6 below, which covers the case Eξ² = ∞, includes the following as special cases:
(a) situations when the distribution F is of 'extended regular variation', i.e. when, for some 0 < α1 ≤ α2 < ∞ and any b > 1,
b^{−α2} ≤ lim inf_{x→∞} F₊(bx)/F₊(x) ≤ lim sup_{x→∞} F₊(bx)/F₊(x) ≤ b^{−α1};   (4.8.3)
(b) the somewhat more general case of distributions of 'intermediate regular variation' (introduced in [89] and sometimes also referred to as distributions with 'consistently varying tails'), i.e. distributions with the property
lim_{b↓1} lim inf_{x→∞} F₊(bx)/F₊(x) = 1.   (4.8.4)
Under the assumption that the r.v. ξ = ξ′ − Eξ′ was formed by centring a non-negative r.v. ξ′ ≥ 0, the first of the asymptotic relations (4.8.1) was obtained in the cases (a) and (b) in [90] and [210], respectively. Note that in [210] condition [·, <] was not assumed for any α > 1, but the large deviations theorem was established there only in the zone x ≥ δn, where δ > 0 is fixed.
Remark 4.8.5. Since a ψ-l.c. function F₊ with ψ(t) = t is an s.v.f., one can assume in condition (2) of Theorem 4.8.1 that necessarily ψ(t) = o(t).
Proof of Theorem 4.8.1. First consider P(Sn ≥ x). As condition [·, <] is satisfied, the bounds of § 4.1 hold true. Therefore, as in our previous arguments, we obtain, using the method of 'truncated' r.v.'s, that, for a fixed δ ∈ (0, 1),
P(Sn ≥ x) = nP(Sn ≥ x, ξn ≥ δx) + O((nV(x))²)
(cf. (4.4.13)), where, for M ∈ (0, (1 − δ)x/√n),
P(Sn ≥ x, ξn ≥ δx) = P(Sn−1 + ξn ≥ x, ξn ≥ δx)
  = P(Sn−1 ≥ (1 − δ)x) F₊(δx) + ∫_{−∞}^{(1−δ)x} P(Sn−1 ∈ dt) F₊(x − t)
  = o(F₊(x)) + E[F₊(x − Sn−1); |Sn−1| < M√n]
  + E[F₊(x − Sn−1); Sn−1 ∈ (−∞, −M√n] ∪ [M√n, (1 − δ)x]].   (4.8.5)
Since M√n < c1Mψ(x) and, by condition (2), one has F₊(x − t) ∼ F₊(x) as x → ∞ when |t| < c1Mψ(x) and M → ∞ slowly enough, for such an M we have
E[F₊(x − Sn−1); |Sn−1| < M√n] = F₊(x)(1 + o(1)).
The last term on the right-hand side of (4.8.5) is clearly o(F₊(x)), as F₊(x) is an upper-power function and P(|Sn−1| ≥ M√n) → 0. Therefore
P(Sn ≥ x, ξn ≥ δx) = F₊(x)(1 + o(1)).
Since (nV(x))² = o(nF₊(x)), the first relation in (4.8.1) is proved.
Now we turn to the distribution of S̄n. Since P(S̄n ≥ x) ≥ P(Sn ≥ x), it suffices to verify that P(S̄n ≥ x) ≤ nF₊(x)(1 + o(1)). Again using our previous arguments (see (4.5.4)), we have
P(S̄n ≥ x) = Σ_{k=1}^{n} P(S̄n ≥ x, ξk ≥ δx) + O((nV(x))²).   (4.8.6)
Here
P(S̄n ≥ x, ξk ≥ δx) = Pk + O(nV(x)F₊(x)),
where
Pk := P(S̄k−1 < δx/2, ξk ≥ δx, S̄n ≥ x)
   = P(S̄k−1 < δx/2, ξk ≥ δx, Sk ≥ x)
   + ∫_{−∞}^{x} P(S̄k−1 < δx/2, ξk ≥ δx, Sk ∈ dt, S̄^{(k)}_{n−k} ≥ x − t),   (4.8.7)
where S̄^{(k)}_{n−k} := max_{0≤m≤n−k}(S_{m+k} − Sk) ᵈ= S̄_{n−k} is independent of {S1, ..., Sk}.
Now the first term on the final right-hand side of (4.8.7) does not exceed
P(Sk−1 < δx/2, Sk ≥ x) = ∫_{−∞}^{δx/2} P(Sk−1 ∈ dt) F₊(x − t)
  = E[F₊(x − Sk−1); Sk−1 < δx/2]
  = E[F₊(x − Sk−1); |Sk−1| < M√n] + E[F₊(x − Sk−1); Sk−1 ∈ (−∞, −M√n] ∪ [M√n, δx/2]].
In exactly the same way as in (4.8.5), we can verify that the right-hand side of the last equality has the form F₊(x)(1 + o(1)). Next consider the last term on the right-hand side of (4.8.7). It does not exceed
∫_{−δx/2}^{δx/2} P(Sk−1 ∈ du) ∫_{δx}^{x−u} P(ξ ∈ dv) P(S̄_{n−k} ≥ x − (u + v)) + o(F₊(x)).   (4.8.8)
Fix a number u, |u| < δx/2. For simplicity, let u = 0. Then, since F₊ is a ψ-l.c. function, we have
∫_{x−Mψ(x)}^{x} P(ξ ∈ dv) P(S̄_{n−k} ≥ x − v) ≤ P(ξ ∈ (x − Mψ(x), x])
  = F₊(x − Mψ(x)) − F₊(x) = o(F₊(x)).   (4.8.9)
Now put Z(v) := P(S̄_{n−k} ≥ x − v) and consider
∫_{δx}^{x−Mψ(x)} P(ξ ∈ dv) Z(v) = −F₊(v)Z(v)|_{δx}^{x−Mψ(x)} + ∫_{δx}^{x−Mψ(x)} F₊(v) dZ(v)
  ≤ F₊(δx)Z(δx) + F₊(δx)Z(x − Mψ(x)).   (4.8.10)
Since
Z(x − Mψ(x)) = P(S̄_{n−k} ≥ Mψ(x)) ≤ P(S̄_{n−k} > cM√n) → 0
as M → ∞, the right-hand side of (4.8.10) is o(F₊(x)). If we now consider, instead of (4.8.9), (4.8.10), the expression
(∫_{δx}^{x−u−Mψ(x)} + ∫_{x−u−Mψ(x)}^{x−u}) P(ξ ∈ dv) P(S̄_{n−k} ≥ x − u − v)
for any fixed u, |u| < δx/2, then all the computations will remain valid (with obvious minor changes, which, however, do not affect the final bound o(F₊(x))). This means that the bound o(F₊(x)) remains true for the integrals in (4.8.7), (4.8.8) as well. Comparing (4.8.6), (4.8.7) with the derived bounds, we obtain that P(S̄n ≥ x) = nF₊(x)(1 + o(1)). The theorem is proved.
The case Eξ² = ∞, E|ξ| < ∞, Eξ = 0 is dealt with in exactly the same way.
Theorem 4.8.6. Let the following conditions be satisfied:
(1) [<, <] for α ∈ (1, 2), W(t) ≤ cV(t);
(2) F₊(t) is both upper-power and ψ-l.c. with ψ(t) = σ(t) = V^{(−1)}(1/t).
Then relation (4.8.1) holds true provided that
x ≥ σ(n),  n < c/V(ψ(x)),  c < ∞,  nV²(x) = o(F₊(x)).   (4.8.11)
Proof. The proof is similar to that of Theorem 4.8.1. One just has to replace in (4.8.5) the integration region {Sn−1 < M√n} by {Sn−1 < Mσ(n)}, where, by virtue of (4.8.11), we have σ(n) < c1ψ(x), and then to use the fact that P(Sn−1 ≥ Mσ(n)) → 0 as M → ∞. Similar changes should be made in the second part of the proof, which deals with S̄n. The theorem is proved.
Remark 4.8.7. If the condition W(t) < cV(t) does not hold, β ∈ (1, 2), then the assertion of Theorem 4.8.6 will remain true provided that we replace x ≥ σ(n) in (4.8.11) by x ≥ σ∗(n), where σ∗ = σ∗(n) is such that nW(σ∗/ln σ∗) < c. Indeed, for such values of x, all the bounds for P(S̄n ≥ x) that we used in the proof of Theorem 4.8.6 will remain valid, owing to Theorem 3.1.1.
4.9 The distribution of the trajectory {Sk} given that Sn ≥ x or S̄n ≥ x

As stated in the Introduction, the trajectories of the random walk {Sk; k = 1, ..., n} that give the biggest contribution to the asymptotics of the probabilities P(Sn ≥ x) and P(S̄n ≥ x) (such trajectories could also be called the 'shortest' or 'most typical') are of a completely different character in the Cramér case (0.0.9) as compared with the case when the tails P(ξ ≥ t) are regularly varying. In the Cramér case these trajectories are close to the straight path connecting the points (0, 0) and (n, x). Owing to the results obtained above, it is natural to expect that here the scaled trajectory {S_{nt}; t ∈ [0, 1]} of the walk will be close to a step function with a single jump at a random time that is uniformly distributed over (0, 1). More precisely, we have the following result. Set
η(x) := min{k : Sk ≥ x},
χ(x) := Sη(x) − x,
and denote by E(t) the following random step process on [0, 1]:
E(t) := 0 for t ≤ ω;  E(t) := 1 for t > ω,
where ω is an r.v. with the uniform distribution on [0, 1].
Theorem 4.9.1. Let the conditions [·, =], α > 2, and Eξ² < ∞ be met. Then, for x ≥ √(n ln n), i ≤ n and a fixed z > 0,
P(η(x) = i, χ(x) ≥ zx | S̄n ≥ x) ∼ P(η(x) = i, χ(x) ≥ zx | Sn ≥ x) ∼ (1 + z)^{−α}/n.   (4.9.1)
The conditional distribution of the process {x^{−1}S_{nt}; t ∈ [0, 1]} given Sn ≥ x (or S̄n ≥ x) converges weakly in the Skorokhod space D(0, 1), as n → ∞, to the distribution of the process {ζα E(t); t ∈ [0, 1]}. Here the r.v. ζα and the process {E(t)} are independent of each other and P(ζα ≥ y) = y^{−α}, y ≥ 1. Analogous assertions will hold true under the conditions of Chapter 3 when α ∈ (1, 2), x ≥ σ(n).
Proof. Put
Gi−1 := {S̄i−1 < x},  Yi−1 := V(x + zx − Si−1) 1(Gi−1),
and let ε = ε(x) → 0 as x → ∞ be such that εx ≥ √(n ln n). Then, for i ≤ n,
P(η(x) = i, χ(x) ≥ zx) = P(Gi−1; Si−1 + ξi ≥ x + zx) = EYi−1 = E1 + E2 + E3 + E4,
where
E1 := E[Yi−1; |Si−1| < εx] ∼ V(x + zx),
E2 := E[Yi−1; Si−1 ≤ −εx] = o(V(x)),
E3 := E[Yi−1; Si−1 ∈ [εx, x/2]] = o(V(x)),
E4 := E[Yi−1; Si−1 ∈ (x/2, x)] ≤ V(zx)P(Si−1 > x/2) ≤ cnV(x)V(zx) = o(V(x)).
Thus we obtain that, for i ≤ n,
P(η(x) = i, χ(x) ≥ zx) ∼ V(x + zx) ∼ (1 + z)^{−α} V(x).   (4.9.2)
Moreover, because
P(η(x) = i, χ(x) ≥ zx, Sn < x) ≤ P(η(x) = i, χ(x) ≥ zx) P(Sn−i < −zx)
and the last factor tends to zero, we also have
P(η(x) = i, χ(x) ≥ zx, Sn ≥ x) ∼ (1 + z)^{−α} V(x).   (4.9.3)
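Theorem 4.9.1's conclusions can also be observed in simulation (our illustration, not the book's). Conditioned on {Sn ≥ x}, the first-passage epoch η(x) should be roughly uniform on {1, ..., n} and the overshoot should satisfy P(χ(x) ≥ zx | Sn ≥ x) ≈ (1 + z)^{−α}, in line with (4.9.1)–(4.9.3). The centred Pareto jump law and all parameter values below are our own choices.

```python
import random

# Conditioned-walk experiment: record the first-passage time eta(x) and
# overshoot chi(x) on trajectories with Sn >= x.
random.seed(1)
alpha, n, x, trials = 3.0, 10, 25.0, 300_000
mean_y = alpha / (alpha - 1)

times, overshoots = [], []
for _ in range(trials):
    s, hit_time, hit_over = 0.0, None, None
    for k in range(1, n + 1):
        s += (1.0 - random.random()) ** (-1.0 / alpha) - mean_y
        if hit_time is None and s >= x:
            hit_time, hit_over = k, s - x
    if hit_time is not None and s >= x:          # condition on Sn >= x
        times.append(hit_time)
        overshoots.append(hit_over)

m = len(times)
mean_time = sum(times) / m / n                   # near 1/2 if eta is ~uniform
frac_big = sum(o >= x for o in overshoots) / m   # ~ (1 + 1)**(-alpha) = 0.125
```

With z = 1 the theoretical conditional overshoot probability is 2^{−α} = 0.125 for α = 3, and the normalized jump epoch averages about 1/2, matching the single-uniform-jump picture.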
Since
P(S̄n ≥ x) ∼ P(Sn ≥ x) ∼ nV(x)
(4.9.4)
by Theorem 4.4.1, the relations (4.9.2) and (4.9.3) immediately imply (4.9.1). Furthermore, it is evident that the probabilities
P(max_{j≤i} |Sj−1| < εx, ξi ≥ x, max_{j≤n−i} |S^{(i)}_j| < εx)
have the same asymptotics as (4.9.2). From this and (4.9.4) one can easily derive the convergence of the conditional distributions of {x^{−1}S_{nt}; t ∈ [0, 1]} in D(0, 1): it suffices to prove the convergence of the finite-dimensional distributions (which is obvious) and verify the compactness (tightness) conditions. The latter can be done in a standard way (see e.g. Chapter 3 of [28]), and we will leave this to the reader.
The following assertion could be obtained in a similar way, using the integro-local theorems of § 4.7. As before, let ξ̄n = max_{k≤n} ξk.
Theorem 4.9.2. Let the conditions [·, =], α > 2, Eξ² < ∞, [D(1,q)] with q(t) ≡ 1 and x ≥ √(n ln n) be satisfied, and let Δ > 0 be an arbitrary fixed number. Then, as n → ∞,
P(η(x) = i | Sn ∈ Δ[x)) ∼ 1/n.
The conditional distribution of the process {x^{−1}S_{nt}; t ∈ [0, 1]} given the event Sn ∈ Δ[x) converges weakly in D(0, 1), as n → ∞, to the distribution of the process {E(t); t ∈ [0, 1]}.
The limiting distribution for the conditional laws of (ξ̄n − x)/√n given the event Sn ∈ Δ[x) coincides with the limiting distribution for −Sn/√n (which is clearly normal).
The last assertion follows in an obvious way from the relation Si−1 + ξ̄n + Sn − Si ∈ Δ[x), which clearly holds on the event {ωn = i} ∩ {Sn ∈ Δ[x)}, where we put ωn := min{k ≥ 1 : ξk = ξ̄n}, and from the fact that the limiting distributions for
(Sn−1 + δn)/√n,  |δn| ≤ Δ,  and  Sn/√n
are the same.
Along with the assertions of Theorems 4.9.1 and 4.9.2, which provide first-order approximations to the conditional distributions of {x^{−1}S_{nt}}, one can also obtain a second-order approximation. What happens here is that, roughly speaking, under the condition Sn ∈ Δ[x) the processes {S_{nt}} approach (in distribution) the process
(x − w(1)√n) E(t) + √n w(t),
where w(t) is the standard Wiener process (we assume that Eξ² = 1), which is independent of E(t). We will state the above claim in a more precise form. Let
E_{ωn}(t) := 0 for t ≤ ωn;  E_{ωn}(t) := 1 for t > ωn.
Theorem 4.9.3. Let the conditions of Theorem 4.9.2 be met. Then, as n → ∞, the conditional distribution of the process
(1/√n)(S_{nt} − xE(t)),  t ∈ [0, 1],
given Sn ∈ Δ[x), converges weakly in D(0, 1) to the distribution of the process {w(t) − w(1)E(t)}, where the processes {w(t)} and {E(t)} are independent of each other.
Proof. To prove the convergence of the finite-dimensional distributions we will follow the arguments in §§ 4.4 and 4.7. Consider the trajectory {S*_{nt}} that is obtained from {S_{nt}} by replacing in the latter its maximum jump ξ̄n (which will be unique with probability tending to 1) by zero, so that ξ̄n ∈ x − S*_n + Δ[0) (given Sn ∈ Δ[x)). It can be seen from the proofs of the theorems of §§ 4.4 and 4.7 that, for each i, the conditional distribution of the trajectory {S*_{nt}} given ξi ≥ x/r 'almost coincides' with the distribution of the sequence of cumulative sums of independent r.v.'s (distributed as ξ), one of which (the ith one) is replaced by zero. As in §§ 4.4 and 4.7, we can represent the principal part of the probability P(Gn) of the desired event Gn as the sum Σ_{j=1}^{n} P(GnBj), and then it is not hard to derive from the above observations that, for each t, the sequence
(1/√n)(S_{nt} − ξ̄n E_{ωn}(t)) = (1/√n)(S_{nt} − (x − S*_n + δn)E_{ωn}(t)),  |δn| ≤ Δ,
given Sn ∈ Δ[x), will converge in distribution as n → ∞ to the same r.v. as the sequence S*_{nt}/√n. In other words, the 'conditional process'
(1/√n)(S_{nt} − xE_{ωn}(t))
will converge to the same limiting process as
(1/√n)(S*_{nt} − S*_n E_{ωn}(t)),
i.e. to the process {w(t) − w(1)E(t)}. Compactness (tightness) conditions are verified in the standard way.
One could also obtain second-order approximations to the conditional distributions of the process {S_{nt}} given Sn ≥ x. In the case Eξ² = ∞ it is not difficult to obtain, using the same arguments,
complete analogues of Theorems 4.9.1–4.9.3 under condition [<, =] with α < 2 and W(t) < cV(t). For example, the following analogue of Theorem 4.9.1 holds true. Let γ ∈ (0, α) be an arbitrary fixed number.
Theorem 4.9.4. Let conditions [<, =] with α < 2 and W(t) < cV(t) and [D(1,q)] with q(t) ≡ 1 be satisfied. Then, for x > n^{1/γ} and any fixed Δ > 0, as n → ∞,
P(η(x) = i | Sn ∈ Δ[x)) ∼ 1/n.
The conditional distribution of the process {x^{−1}S_{nt}; t ∈ [0, 1]} given the event Sn ∈ Δ[x) converges weakly in D(0, 1), as n → ∞, to the distribution of the process {E(t); t ∈ [0, 1]}.
If, moreover, condition [Rα,ρ] holds then the limit of the conditional laws of (ξ̄n − x)/b(n), b(n) = F^{(−1)}(1/n), given Sn ∈ Δ[x) coincides with the limiting distribution of −Sn/b(n) (i.e. with the stable distribution Fα,−ρ; for simplicity we assume that α ≠ 1).
In the statement of the analogue of Theorem 4.9.3 in the case Eξ² = ∞, one should replace the Wiener process {w(t)} by the stable process {ζ^{(α,ρ)}(t)} and the scaling sequence √n by b(n).
5 Random walks with semiexponential jump distributions
5.1 Introduction

In Chapters 5 and 6 we will be studying random walks with jumps for which the distributions decay at infinity in a regular manner but faster than any power function. For such walks, too, one can obtain a rather complete description of the asymptotics of the large deviation probabilities, using methods close to those developed in Chapters 2–4. Two distribution classes will be considered. This chapter is devoted to studying random walks with semiexponential jump distributions, which were introduced in Definition 1.2.22 (p. 29). In Chapter 6, however, we will study exponentially decaying distributions of the form
P(ξ ≥ t) = V0(t)e^{−λ₊t},  λ₊ > 0,
where V0(t) is an r.v.f. such that ∫_1^∞ tV0(t) dt < ∞. In this case, the left derivative ϕ′(λ₊) of the function ϕ(λ) = Ee^{λξ} is finite, and so the existing analytic methods for studying the asymptotics of, say, the probabilities P(Sn ≥ x) and P(S̄n ≥ x) will not work for deviations x such that x/n > ϕ′(λ₊)/ϕ(λ₊) (see [37]). This means that one has to look for 'mixed' approaches that would use the results of Chapters 2–4 (for more detail, see Chapter 6).
So, the present chapter deals with large deviation problems for the class Se of semiexponential distributions, which have the form
P(ξ ≥ t) = V(t) = e^{−l(t)},
(5.1.1)
where
(1) the function l(t) admits the representation
l(t) = t^α L(t),
0 < α < 1,
L(t) is an s.v.f. at infinity,
(5.1.2)
(2) as t → ∞, for Δ = o(t) one has
l(t + Δ) − l(t) = (αΔl(t)/t)(1 + o(1)) + o(1).
(5.1.3)
In other words, (2) means that
l(t + Δ) − l(t) ∼ αΔl(t)/t
(5.1.4)
if Δ = o(t) and Δl(t)/t > ε for a fixed ε > 0, and that l(t + Δ) − l(t) = o(1)
(5.1.5)
if Δl(t)/t → 0.
Along with the notation F ∈ Se, which indicates that the distribution F is semiexponential, we will sometimes use the equivalent notation V ∈ Se (or F₊ ∈ Se), where V (or F₊) is the right tail of F. This is quite natural, as conditions (5.1.3)–(5.1.5) refer to the right distribution tails only. Note that, in the present chapter, we will exclude the extreme cases α = 0 and α = 1 (cf. Definition 1.2.22).
Some properties of semiexponential distributions were studied in § 1.2. Recall that if the function L(t) is differentiable and L′(t) = o(L(t)/t) as t → ∞ then l′(t) ∼ αl(t)/t, the property (5.1.3) always holds and (5.1.4) is true for all Δ = o(t), so that the second remainder term o(1) in (5.1.3) is absent.
If the distribution of ζ satisfies Cramér's condition, ϕ(λ) = Ee^{λζ} < ∞ for some λ > 0, then, under rather broad conditions, the distribution of ξ = ζ² will be semiexponential. Assume, for example, that, in the representation
P(ζ ≥ t) = e^{−λ₊t+h(t)},
λ+ = sup{λ : ϕ(λ) < ∞} > 0,
(5.1.6)
the function h(t) = o(t) is differentiable for t ≥ t0 > 0 and h′(t) → 0 as t → ∞. Then
P(ξ ≥ t) = e^{−λ₊√t + h(√t)},
so that the function l(t) = λ₊√t − h(√t) has the property that
l(t + Δ) − l(t) = (λ₊Δ/(2√t))(1 + o(1))
as t → ∞, Δ = o(t), and hence the relations (5.1.2), (5.1.3) hold for α = 1/2. It is obvious that the same could also be said about the distribution of the sum χ² := ζ₁² + ··· + ζ_k², where the r.v.'s ζj ᵈ= ζ are independent.
In this chapter, by conditions [·, =] and [·, <] we will understand, respectively, the relation (5.1.1) and the inequality P(ξ ≥ t) ≤ V(t), where the function V(t) = e^{−l(t)} satisfies (5.1.2) and (5.1.3).
For semiexponential distributions, the description of the asymptotics of large deviation probabilities differs substantially from that presented in Chapters 2–4
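The α = 1/2 computation above is easy to verify numerically (a sketch of ours): for l(t) = λ₊√t (taking h ≡ 0), the increment l(t + Δ) − l(t) should agree with αΔl(t)/t = λ₊Δ/(2√t) when Δ = o(t). The values of λ₊, t and Δ below are arbitrary.

```python
import math

# Check property (5.1.3) for the semiexponential index function
# l(t) = lam * sqrt(t), i.e. alpha = 1/2, L == lam.
lam, alpha = 2.0, 0.5

def l(t):
    return lam * math.sqrt(t)

t, D = 1.0e8, 1.0e4                 # D = o(t)
increment = l(t + D) - l(t)
predicted = alpha * D * l(t) / t    # = lam * D / (2 * sqrt(t))
ratio = increment / predicted
print(ratio)                        # very close to 1
```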
for the case of regularly varying distributions. The difference is, first of all, in the presence of a rather extensive 'intermediate' large deviation zone, where the asymptotics of the probabilities in question (say, P(Sn ≥ x)) will, roughly speaking, be intermediate between the so-called Cramér approximation (which is valid for 'moderately large deviations') and an approximation of the form nV(x) due to the maximum jump.
It will be assumed throughout Chapter 5 that the following conditions hold true, along with the semiexponentiality of P(ξ ≥ t):
Eξ = 0,  Eξ² = 1,  E|ξ|^b < ∞ for a b ≥ ⌊1/(1−α)⌋ + 2.   (5.1.7)
n k=1
ξk ,
S n (a) = max(Sk − ak), kn
S n = S n (0).
(5.1.8)
Now we will briefly review what is known about the probabilities of large deviations of the functionals (5.1.8) for distributions of the form (5.1.1), (5.1.2) and distributions close to this form, in the zone of ‘moderately large deviations’ where the Cram´er approximation is valid. Problems on ‘moderately large deviations’ of the sums Sn in the case of the above-mentioned distributions were studied in [152, 201, 220, 221, 280, 281, 247, 212, 206, 238] and a number of other papers, where, in particular, the following was proved. Let (5.1.7) be true and, for some function h(t) that is close, in some sense, to an r.v.f. of index α ∈ (0, 1) (conditions on the function h(t) vary between publications), let (5.1.9) E eh(ξ) ; ξ 0 < ∞. (In papers [152, 201, 220, 221, 280, 281, 247], the condition E eh(|ξ|) < ∞
(5.1.10)
was used.) Further, let σ1 (n) be the solution to the equation x2 = nh(x). Then, for the probability P(Sn x), we have the following Cram´er approximation, which holds uniformly in x σ1 (n): % & 0 x (5.1.11) e−nΛκ (x/n) (1 + o(1)). P(Sn x) = 1 − Φ √ n Here Φ(t) is the standard normal distribution function, x x x2 Λ0κ := Λκ − 2, n n 2n Λκ (x/n) is the ‘truncated Cram´er series’, i.e. a truncated series expansion corresponding to the ‘deviation function’ of ξ (which is formally defined only when
236
Random walks with semiexponential jump distributions
Cram´er’s condition is satisfied), Λκ (t) :=
κ v j tj j=2
j!
: 1 + 1, κ := 1−α 9
,
(5.1.12)
where v2 = 1, v3 = γ3 , v4 = γ4 − 3γ32 and so on (see e.g. [152, 201, 220]) and the γj are semi-invariants of the distribution of ξ, so that γ2 = 1. It follows from (5.1.11) that, in particular, √ n −n Λκ (x/n) (5.1.13) e P(Sn x) ∼ √ 2π x √ for x σ1 (n), x n → ∞, and that x P(Sn x) ∼ 1 − Φ √ (5.1.14) n for x = o(n2/3 ). Recall that an ∼ bn means that an /bn → 1 as n → ∞. If F ∈ Se then (5.1.9) will be satisfied for the function h(t) = l(t) − ln t for t 1 because, in that case, for α ∈ (0, 1) one has E eh(ξ) ; ξ 1 =
∞ 1
dl(t) < ∞. t
This means that, for semiexponentially distributed summands, the approximations (5.1.11)–(5.1.14) hold in the deviation zone x σ1 (n), where the function σ1 (n) is clearly of the form σ1 (n) = n1/(2−α) L1 (n),
L1 is an s.v.f.
(see Theorem 1.1.4(v)). Similar results for S n were established in [2]: under condition (5.1.9) one has % & 0 x (5.1.15) e−nΛκ (x/n) (1 + o(1)) P(S n x) = 2 1 − Φ √ n uniformly in x σ1 (n); that paper also contains a representation of the form (5.1.11) for P(S n (−a) − an x), a > 0. Beyond the deviation zone x σ1 (n) = n1/(2−α) L1 (n) one can identify two further zones, σ1 (n) < x σ2 (n) and x > σ2 (n), in which the asymptotics of P(Sn x) will be different. In the deviation zone x σ2 (n) = n1/(2−2α) L2 (n) (the asymptotics of σ2 (n) will be made more precise later on), the ‘maximum jump principle’ is valid, i.e. the principal contribution to large deviations of Sn comes from ξ n = maxjn ξj , so that P(Sn x) ∼ P(ξ n x) = nV (x) 1 + o(1) . (5.1.16) The asymptotic representation (5.1.16) for x n1/(2−2α) was obtained in [238, 196, 227, 51]. Concerning results on the large deviations of Sn for distributions
237
5.1 Introduction
satisfying (5.1.10), see also [206, 191] (these references also contain more complete bibliographies). The asymptotics of P(S n x) in cases when they coincide with those for P(Sn x) were established in [227]. In [238] theorems on the asymptotics of P(Sn x) valid on the whole real line were considered. In particular, that paper gives the form of P(Sn x) in the intermediate zone x ∈ σ1 (n), σ2 (n) but it does this under conditions whose meaning is quite difficult to comprehend. The intermediate deviation zone σ1 (n) < x < σ2 (n) was also considered in [195]. The paper deals with the rather special case when the distribution F has density f (t) ∼ e−|t|
α
|t| → ∞;
as
the asymptotics of P(Sn x) were found there for α > 1/2 in the form of recursive relations, whence one cannot extract, in the general case, the asymptotics of P(Sn x) in explicit form. Asymptotic representations for P(S n x) in the intermediate deviation zone studied in [52]. σ1 (n) < x < σ2 (n) and also in the zone x σ2 (n) were The following asymptotic representation for P S n (a) x , a > 0, was established in [178] (see also [275]): for all so-called strongly subexponential distributions V (t), as x → ∞, for all n one has 1 P(S n (a) x) ∼ a
x+an
V (u) du.
(5.1.17)
x
D.A. Korshunov has communicated to us that the sufficient conditions from [178] for a distribution to be strongly subexponential will be satisfied for laws from the class Se. The content of the present chapter is, in many aspects, similar to that of Chapters 2–4. We will be using the same approaches as in the above chapters but in a modified form. Owing to more severe technical difficulties, the class of problems will be narrowed: we will not deal with the probability that a trajectory will cross a given arbitrary boundary, although in principle there are no obstacles to deriving this. In § 5.8 we present integro-local and integral theorems for Sn that are valid on the whole real line. They cover all deviation zones including boundary regions. In particular, they improve the integral theorems for Sn established in the previous sections and the above-mentioned literature. The complexity of the technique used to prove these results, which were presented in the recent paper [73], is, however, somewhat beyond the level of the other material in the present monograph. This is why the respective theorems are presented in § 5.8 without proofs; the latter can be found in [73].
238
Random walks with semiexponential jump distributions
5.2 Bounds for the distributions of Sn and S n , and their consequences 5.2.1 Upper bounds for P(S n x) Similarly to Chapter 4, introduce a function σ1 = σ1 (n) that characterizes the 2 zone of deviations x where the ‘normal’ asymptotics e−x /2n and the ‘maximum jump’ asymptotics nV (x) will be approximately the same. More precisely, we will define such deviations x as solutions to the equation x2 = − ln nV (x). 2n
(5.2.1)
As we are only interested in the asymptotics of σ1 (n), this is the same as the solution to the equation x2 = − ln V (x) = l(x). 2n It will somewhat be more convenient for us to consider instead the equation x2 = nl(x),
(5.2.2)
of which the solution differs from that of the original equation by a bounded factor. It is not hard to see that, under the conditions of the present chapter, the function σ1 = σ1 (n) will have the form σ1 (n) = n1/(2−α) L1 (n),
(5.2.3)
where L1 (n) is an s.v.f. (see Theorem 1.1.4(v); note that σ1 (n) denoted in § 5.1 a formally different but quite close quantity, see (5.4.13) on p. 253). The deviation zone x σ1 (n) will be referred to as the Cram´er’s zone; the zone σ1 (n) < x σ2 (n), where σ2 (n) is to be defined below, will be called intermediate and the zone x > σ2 (n) will be called the zone of validity of the maximum jump principle (the asymptotics of P(S n x) in this zone coincide with those of P(ξ n x) ∼ nV (x)). Put w1 (t) := −t−2 ln V (t) = t−2 l(t) = tα−2 L(t).
(5.2.4)
One can assume, without loss of generality, that the function w1 (t) is decreasing. Then the equation (5.2.2) can be rewritten as w1 (x) = 1/n, and σ1 (n) is simply (−1) the value of w1 , the function inverse to w1 , at the point 1/n: (−1)
σ1 (n) = w1
(1/n).
It is not difficult to see that, if L satisfies the condition L tL1/(2−α) (t) ∼ L(t) as t → ∞, (−1)
then w1
(5.2.5)
(5.2.6)
(u) has the form (−1)
w1
(u) ∼ u1/(α−2) L1/(2−α) u1/(α−2) ,
(5.2.7)
5.2 Bounds for the distributions of Sn and S n 239 so that L1 (n) ∼ L1/(2−α) n1/(2−α) (see Theorem 1.1.4(v)). Note that although condition (5.2.6) is quite broad it is not always satisfied. For example, it fails for the s.v.f. L(t) = exp{ln t/ ln ln t}. Since the boundary σ1 (n) of the Cram´er deviation zone depends on n, it could be equivalently characterized by the inequalities 1 w1 (x) 1 n< w1 (x) n
in the Cram´er zone, in the intermediate zone.
Thus, deviations could be characterized both by the quantity x s1 := σ1 (n) (cf. Chapter 4; s1 1 for the Cram´er zone) and the quantity π1 := π1 (x) = nw1 (x) = nxα−2 L(x)
(5.2.8)
(π1 1 for the Cram´er zone); observe that, for a fixed s1 , w1 (σ(n)) ∼ sα−2 π1 (x) = nw1 (s1 σ(n)) ∼ nsα−2 1 1 as n → ∞. In some cases, it will be more convenient for us to use the characteriztic π1 (in which the argument x will often be omitted; when the argument of π1 (·) is different from x, it will always be included). As before, let Bj = {ξj < y},
B=
n *
P = P(S n x; B).
Bj ,
j=1
Theorem 5.2.1. Let condition [ · , <] with V ∈ Se be satisfied. Then there exists a constant c < ∞ (its explicit form could be obtained from the proof) such that, for any fixed h > 1, all n and all sufficiently large y, x r= , P c[nV (y)]r−π1 (y)h/2 , (5.2.9) y where π1 is defined in (5.2.8). If, for fixed h > 1 and ε > 0, one has π1 h 1 + ε then, for y = x and all large enough n, P e−x
2
/2nh
.
(5.2.10)
If the deviation y is characterized by the relation y = sσ1 (n) for a fixed s > 0 then (5.2.9) holds true with π1 (y) replaced by sα−2 (1+o(1)). If y = x, s2−α < h then the relation (5.2.10) is true. Now we will give a few consequences of Theorem 5.2.1. Along with the function w1 (t) (see (5.2.4)), we introduce the function w2 (t) := w1 (t)l(t) = t−2 l2 (t) = t2α−2 L2 (t),
(5.2.11)
240
Random walks with semiexponential jump distributions
which will be assumed, like w1 (t), to be monotonically decreasing, so that the (−1) inverse function w2 (·) is defined for it. Set (−1)
σ2 (n) := w2
(1/n) = n1/(2−2α) L2 (n),
(5.2.12)
where L2 is an s.v.f. that could be found explicitly under the additional assumption (5.2.6) (as was the case for L1). Further, let r0 be the minimal solution of the equation

    1 = r − (π1 h/2) r^{2−α},

which always exists when π1 h < 2^{α−1}. Obviously,

    r0 − 1 ∼ π1 h/2  as π1 → 0.

Here and in what follows, h > 1 is, as before, an arbitrary fixed number, which can be chosen arbitrarily close to 1.
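The minimal root r0 and its small-π1 expansion can be checked numerically. A sketch (function name and parameter values are ours); for small π1 h the fixed-point map below is a contraction, so the iteration converges to the minimal root:

```python
# Quick numerical sketch: find the minimal root r0 of
#     1 = r - (pi1*h/2) * r**(2 - alpha)
# by iterating r <- 1 + (pi1*h/2) * r**(2 - alpha), then check r0 - 1 ~ pi1*h/2.
def r0(pi1, h, alpha, iters=200):
    r = 1.0
    for _ in range(iters):
        r = 1.0 + 0.5 * pi1 * h * r ** (2.0 - alpha)
    return r

alpha, h = 0.5, 1.01
root = r0(1e-4, h, alpha)
print(root - 1.0, 0.5 * 1e-4 * h)           # nearly equal, as the expansion says
print(root - 0.5 * 1e-4 * h * root ** 1.5)  # residual check: this is 1
```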
Corollary 5.2.2.

(i) If π1 h < 2^{α−1} then

    P(S̄n ≥ x) ≤ c n V(x/r0).    (5.2.13)

If π1 l(x) ≤ c or, equivalently, x ≥ c2 σ2(n), then

    P(S̄n ≥ x) ≤ c1 n V(x).    (5.2.14)

(ii) If π1 h ≥ 2^{α−1} then

    P(S̄n ≥ x) < c n V(x)^{1/(2π1 h)}.    (5.2.15)

(iii) Let h > 1, ε > 0 be any fixed numbers. If π1 h ≥ 1 + ε then, for all large enough n, one has

    P(S̄n ≥ x) ≤ e^{−x²/(2nh)} = V(x)^{1/(2π1 h)}.    (5.2.16)
Remark 5.2.3. As in Remark 4.1.5 (see also Corollaries 2.2.4 and 3.1.2), it is not difficult to verify that there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, for x = s2 σ2(n),

    sup_{x: s2 ≥ t} P(S̄n ≥ x)/(n V(x)) ≤ 1 + ε(t).
Proof of Theorem 5.2.1. The scheme of the proof remains the same as before. The main tool is again the basic inequality (2.1.8), in which one has to bound the integral R(μ, y). The integral I1 (see (4.1.19)) from the representation R(μ, y) = I1 + I2 admits the same upper bound as that obtained in the proof of Theorem 4.1.2 with M(ε) = ε/μ. Set e^ε = h. Then (see (4.1.22))

    I1 ≤ 1 + μ²h/2.    (5.2.17)
An upper bound for

    I2 = ∫_{M(ε)}^{y} e^{μt} F(dt) ≤ V(M(ε)) h + μ ∫_{M(ε)}^{y} V(t) e^{μt} dt    (5.2.18)

(see (4.1.23)) will now be somewhat different. We will represent the last term as the sum

    μ ∫_{M(ε)}^{M} e^{f(t)} dt + μ ∫_{M}^{y} e^{f(t)} dt =: I2,1 + I2,2,    f(t) := −l(t) + μt,    (5.2.19)

where the quantity M will be chosen below, and study the properties of the function f. First assume for simplicity that the function L from the representation (5.1.2) is differentiable and, moreover, that

    L'(t) = o(L(t)/t),    l'(t) is decreasing.    (5.2.20)

In this case,

    l'(t) = (α l(t)/t) (1 + L'(t) t/(α L(t))) ∼ α l(t)/t    as t → ∞.

Then the minimum of f(t) is attained at t0 = λ(μ), where λ(·) = (l')^{(−1)}(·) is the function inverse to l'(·), so that

    l'(λ(μ)) ≡ μ,    λ(μ) = μ^{1/(α−1)} L*(μ),

where L*(μ) is an s.v.f. as μ → 0.
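In the model case L ≡ 1 (an assumption of ours, made only to get closed forms) the inverse λ(·) is explicit, which makes the shape of (5.2.21)–(5.2.22) easy to verify:

```python
# Model check (L ≡ 1, so l(t) = t**alpha and l'(t) = alpha*t**(alpha-1); the
# general case only adds a slowly varying factor L*(mu)): the inverse function
# lambda_(mu) = (l')^(-1)(mu) is explicit here.
alpha = 0.4

def lprime(t):
    return alpha * t ** (alpha - 1.0)

def lambda_(mu):
    return (mu / alpha) ** (1.0 / (alpha - 1.0))

print(lprime(lambda_(1e-3)))  # recovers 1e-3, i.e. l'(lambda(mu)) ≡ mu
```

Note that λ(μ) → ∞ as μ → 0 because 1/(α−1) < 0, in agreement with the text.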
Put

    μ := v l'(y),    (5.2.21)

where v > 1 is to be chosen later (more precisely, it will be convenient for us to choose μ, thereby specifying v as well). It is clear that λ(μ) < y for v > 1. Observe that, for v ≈ 1/α > 1, the value

    f(y) = −l(y) + v l'(y) y ≈ l(y)(vα − 1)

could be made small, and so e^{f(y)} becomes comparable with 1. Note also that

    y ≡ λ(μ/v) ∼ v^{1/(1−α)} λ(μ),    v > 1.    (5.2.22)

In the following argument we will keep in mind that v ≥ 1 + ε, ε > 0. Let M := γλ(μ), where γ is some point from the interval (1, α^{1/(α−1)}) (for example, its midpoint). Then, on the one hand, for t ≥ M we have

    f'(t) ≥ f'(M) ∼ −γ^{α−1} l'(λ(μ)) + μ ∼ μ(1 − γ^{α−1}) = cμ,    c > 0.    (5.2.23)
On the other hand, setting for brevity λ := λ(μ), one has

    f(M) ∼ −l(γλ) + μγλ ∼ −(l'(γλ) γλ)/α + μγλ ∼ (1 − γ^{α−1}/α) μγλ,    (5.2.24)

where γ^{α−1} > α. Since, by Theorem 1.1.4(ii),

    l(M(ε)) = l(ε/μ) > μ^{−α+δ},    μλ(μ) > μ^{α/(α−1)+δ}

for any δ > 0 and all small enough μ, we see that the quantity I2,1 from (5.2.19) can be bounded, owing to the above-mentioned properties of the function f, as follows:

    I2,1 ≤ μ M e^{−μ^{−α+δ}} = μ γλ(μ) e^{−μ^{−α+δ}} = o(μ²)    (5.2.25)
as μ → 0. It is obvious that the term V(M(ε))h in (5.2.18) admits a bound of the same kind.

To evaluate I2,2 in the case when y > M (i.e. when v^{1/(1−α)} > γ), we will use the inequality (5.2.23), which means that when computing the integral it suffices to consider only its part over a neighbourhood of the point y. For t = y − u, u = o(y), we have, owing to (5.2.21), that

    f(t) − f(y) = l(y) − l(y − u) − μu = l'(y) u (1 + o(1)) − μu ∼ μu (1/v − 1).

So, for U = o(y), U ≫ μ^{−1},

    μ ∫_{y−U}^{y} e^{f(t)} dt ∼ μ e^{f(y)} ∫_{0}^{U} e^{μu(1/v−1)} du ≤ e^{f(y)} v/(v − 1).

The integral μ ∫_{M}^{y−U} e^{f(t)} dt can be evaluated in a similar way, with the result o(e^{f(y)}). Therefore,

    I2,2 ≤ (v/(v − 1)) e^{f(y)} (1 + o(1)).    (5.2.26)
Now we can complete the bounding of the integral R(μ, y) in the basic inequality (2.1.8). Collecting together (5.2.17), (5.2.18), (5.2.25) and (5.2.26), we obtain

    R(μ, y) ≤ 1 + (μ²h/2)(1 + o(1)) + (v/(v − 1)) e^{f(y)} (1 + o(1)),

and hence

    R^n(μ, y) ≤ exp{ (nhμ²/2)(1 + o(1)) + (vn/(v − 1)) e^{f(y)} (1 + o(1)) }.    (5.2.27)

Put

    μ := (1/y) ln T,    T := c/(nV(y)) ≡ r(1 − α)/(nV(y)),

and observe that (5.2.9) becomes trivial for π1(y) = n w1(y) > 2r/h (the right-hand side of this inequality will then be unboundedly increasing). For deviations y such that π1(y) ≤ 2r/h, one has n ≤ c1 y^{2−α+ε} for any ε > 0, and therefore

    μ = (1/y) ln T ∼ −(1/y) ln nV(y) ∼ l(y)/y ∼ l'(y)/α.
This means that we have v ≈ 1/α > 1 in (5.2.21) and that all the assumptions made about μ and v hold true. Further, as in § 4.1, we find that

    ln P ≤ −μx + (nhμ²/2)(1 + o(1)) + (nV(y)/(1 − α)) e^{μy} (1 + o(1)),    (5.2.28)

where, by virtue of our choice of μ, we have

    (nV(y)/(1 − α)) e^{μy} = r,    −μx + nhμ²/2 = (−r + ρ) ln T,

    ρ := (nh/(2y²)) ln T = −(nh/(2y²)) ln(nV(y)/c),    c = r(1 − α).

Here (see (5.2.4))

    −ln V(y)/y² = l(y) y^{−2} = w1(y) = π1(y)/n.    (5.2.29)

Therefore, assuming for simplicity that c ≤ n, we see that ρ ≤ π1(y)h/2 and hence

    P ≤ c1 [nV(y)]^{r − π1(y)h/2}    (5.2.30)

(if c > n then one should add an o(1) to the exponent in (5.2.30) and then remove that summand by slightly increasing h). This proves the first part of the theorem.

Now we will consider the Cramér deviation zone, where π1 = n w1(x) can be large. Here we put

    y := x,    μ := x/(nh),

so that

    μ = x w1(x)/(π1 h) = l(x)/(x π1 h) ∼ l'(x)/(α π1 h),    v ∼ 1/(α π1 h)

(see (5.2.21)), and the condition v > 1, which we assumed after (5.2.21), could fail for large π1. If v > γ^{1−α} (or, equivalently, x = y > M; see (5.2.22)) then all the bounds for R(μ, y) obtained above will still hold and one will again obtain (5.2.27). If, however, v ≤ γ^{1−α} then I2,2 will disappear from the above considerations, and likewise the last term in the exponent on the right-hand side of (5.2.27) will disappear too. In this case we will immediately obtain the second assertion of the theorem.
Thus it only remains to consider the case (α π1 h)^{−1} > γ^{1−α} > 1, and for this case to bound the last term in the exponent in (5.2.27). For μ = y/(nh) = x/(nh), the logarithm of that term is equal to

    H = μy + ln nV(y) + O(1) = (x²/n)(1/h − w1(y) n) + ln n + O(1) = (x²/n)(1/h − π1) + ln n + O(1).

If π1 h ≥ 1 + ε and x² ≫ n ln n then H → −∞ as n → ∞. Now observe that if n^{1/2} < x < n^{1/2+ε} for a small enough ε > 0 then

    π1 = n w1(x) → ∞,    (x²/n) π1 = x² w1(x) = l(x) ≫ ln n,

and hence again H → −∞. Therefore the last summand on the right-hand side of (5.2.28) is negligibly small, and

    ln P ≤ −(x²/(2nh))(1 + o(1)) + o(1);
the last term, o(1), could be removed by slightly increasing the value of h.

The theorem is proved under the assumption (5.2.20). If (5.2.20) does not hold then one should use the representation (5.1.3), which implies that the increments l(t + Δ) − l(t) (and it was the behaviour of these increments that was important in the above analysis) behave in exactly the same way as under the assumption (5.2.20), up to error terms o(1) that change neither the qualitative conclusions nor the bounds for the integrals. The value l'(y) used in the above considerations should be replaced by l^{(1)}(y) := α l(y)/y (cf. (5.1.3)). Then all the assertions in (5.2.21)–(5.2.30) will remain true. The theorem is proved.

Proof of Corollary 5.2.2. We have

    P(S̄n ≥ x) ≤ nV(y) + P ≤ nV(y) + c [nV(y)]^{r − π1(y)h/2}.    (5.2.31)
Our aim is to choose y (or r = x/y) as close to the optimal value as possible. Observe that, for r comparable with 1, one has, as x → ∞,

    π1(y) = π1(x/r) ∼ r^{2−α} π1,    l(y) ∼ r^{−α} l(x).

Moreover, recall that we are considering deviations x > √n, which corresponds to the situation ln n < 2 ln x ≪ l(x). Therefore the factors n will play a secondary role in (5.2.31), and so we can consider only the behaviour of the factors V(y) (raised to the respective powers). The logarithm of the second term on the right-hand side of (5.2.31) has the form (recall that π1 = π1(x))

    ln [V(y)]^{r − π1(y)h/2} = −l(x) ( r^{1−α} − (hπ1/2) r^{2−2α} ) (1 + o(1)),    (5.2.32)
where the right-hand side attains its minimum value

    −(l(x)/(2π1 h))(1 + o(1))

in the vicinity of the point r̄ := (π1 h)^{1/(α−1)}. For r = r̄,

    ln V(y) = −l(x) (π1 h)^{−α/(α−1)} (1 + o(1)).    (5.2.33)

Therefore, if (π1 h)^{−α/(α−1)} ≥ (2π1 h)^{−1} (or, equivalently, π1 h ≥ 2^{α−1}) then the logarithm of the second term on the right-hand side of (5.2.31) will be at least as large as (5.2.33), and we could choose r0 = r̄ as the desired optimal value of r. Moreover, since the power of n in this term is equal to (2π1 h)^{−1} ≤ 2^{−α} < 1, we obtain from (5.2.31) that

    P(S̄n ≥ x) ≤ c n V(x)^{(1+o(1))/(2π1 h)},

where the factor 1 + o(1) could be replaced by 1 on slightly changing the value of h. This proves (5.2.15).

If π1 h < 2^{α−1} then one should take r0 to be the value for which both terms on the right-hand side of (5.2.31) are roughly equal, i.e. r0 is chosen as the minimal solution of the equation

    1 = r − (π1 h/2) r^{2−α},

so that
    r0 = 1 + π1 h/2 + (2 − α)(π1 h/2)² + ···
and r0 − 1 ∼ π1 h/2 as π1 → 0. In this case,

    P(S̄n ≥ x) ≤ c n V(x/r0).

This proves (5.2.13).

The inequality (5.2.14) is a consequence of (5.2.13). Indeed, setting 1/r0 = 1 − θ, we obtain that θ = (1/2) π1 h (1 + o(1)) as π1 → 0, and

    V(x/r0) = exp{−l(x(1 − θ))} = exp{−l(x) + αθ l(x)(1 + o(1))}
            = exp{−l(x) + (α/2) π1 h l(x)(1 + o(1))}.    (5.2.34)

This implies (5.2.14). Finally, the assertion (5.2.16) follows from the first inequality in (5.2.31) with y = x, the bound (5.2.10) and the fact that, for π1 h ≥ 1 + ε and x > √n, one has

    exp{−x²/(2nh)} = exp{−l(x)/(2π1 h)} ≥ V(x) exp{ l(x)(1 + 2ε)/(2 + 2ε) } ≥ n V(x).

The corollary is proved.
Remark 5.2.4. Unfortunately, Theorem 5.2.1 does not enable one to obtain inequalities of the form P(S̄n ≥ x) ≤ nV(x)(1 + o(1)) for x ≫ σ2(n) (i.e. when π1 l(x) → 0 as x → ∞). Indeed, we will have the asymptotic equivalence V(y) ∼ V(x) in the main inequality (5.2.31) only if r ≡ x/y is of the form r = 1 + θ, where θ l(x) → 0 as x → ∞ (cf. (5.2.34)). But, for such a θ, the second term on the right-hand side of (5.2.31) will be asymptotically equivalent to cnV(x), so that the whole of this right-hand side will be asymptotically equivalent to (1 + c)nV(x), c > 0. Bounds of that form have already been obtained in (5.2.14), under the weaker assumption π1 l(x) < c.
5.2.2 Lower bounds for P(Sn ≥ x)

Lower bounds for P(Sn ≥ x) will follow from Theorem 4.3.1, the assertion of which is not related to the condition of regular variation of the tails P(ξ ≥ t). In particular, the theorem implies that, for y = x + u√n, as u → ∞,

    P(Sn ≥ x) ≥ n F+(y)(1 + o(1)).

If condition [ · , =] with V ∈ Se is met then

    V(y) = V(x + u√n) = e^{−l(x + u√n)},

where

    l(x + u√n) − l(x) = (α u√n l(x)/x)(1 + o(1)) + o(1) = o(1),

provided that n ≪ x²/l²(x) = 1/w2(x) or, equivalently, that x ≫ σ2(n) and u → ∞ slowly enough. Therefore, in the specified range of x-values, one has V(y) ∼ V(x). We have proved the following assertion.

Corollary 5.2.5. Let condition [ · , =] with V ∈ Se be satisfied. Then there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, for x = s2 σ2(n),

    P(Sn ≥ x)/(n V(x)) ≥ 1 − ε(s2).

In view of the absence of a similar opposite inequality (see Remark 5.2.4), one cannot derive from here the exact asymptotics

    P(Sn ≥ x) ∼ n V(x),    P(S̄n ≥ x) ∼ n V(x)

in the zone x ≫ σ2(n). These asymptotics will be obtained below, in Theorems 5.4.1 and 5.5.1.
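The key step above, V(x + u√n) ∼ V(x) in the zone n ≪ 1/w2(x), is easy to see numerically. A sketch (the model tail V(x) = exp{−x^α} and the concrete numbers are our illustrative choices, placed well inside that zone):

```python
# Numerical sketch: with V(x) = exp(-l(x)), l(x) = x**alpha, the ratio
# V(x + u*sqrt(n))/V(x) tends to 1 when n << x**2/l(x)**2 = 1/w2(x),
# i.e. when x >> sigma2(n).
import math

alpha = 0.5

def l(t):
    return t ** alpha

x, n, u = 1e12, 1e4, 3.0      # here x**2/l(x)**2 = 1e12, far above n
ratio = math.exp(l(x) - l(x + u * math.sqrt(n)))   # V(x + u*sqrt(n)) / V(x)
print(ratio)  # close to 1
```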
5.3 Bounds for the distribution of S̄n(a)

We will begin with upper bounds. As in Chapters 3 and 4, the main elements of these bounds are inequalities for

    P(a, v) := P(S̄n(a) ≥ x; B(v)),

where

    B(v) = ⋂_{j=1}^{n} Bj(v),    Bj(v) = {ξj < y + vj}.

Put

    z := z(x) = x/(α l(x)) = o(x).    (5.3.1)

Note that the value z(x) is an increment of x such that, for a fixed t, one has l(x + zt) − l(x) = t(1 + o(1)) or, equivalently,

    V(x + zt) ∼ e^{−t} V(x).    (5.3.2)

In situations where the argument of the function z(·) is different from x, it will be indicated explicitly.

Theorem 5.3.1. Suppose that condition [ · , <] with V ∈ Se is satisfied, and that δ ∈ (0, 1), ε ∈ (0, 1), a > 0 are fixed. Then, for y ≥ εx and v ≤ a(1 − δ)/r,

    P(a, v) ≤ c ( min{z(y), n} )^{r+1} V^r(y),    r = x/y.    (5.3.3)

Note that the bound (5.3.3) is not unimprovable; the factor (min{z(y), n})^{r+1} could be replaced by (min{z(y), n})^r. This is a result of the use of the crude inequalities (5.3.8) in the proof of the theorem. Deriving an exact bound requires additional effort. However, the inequality (5.3.3) will prove to be sufficient for finding the exact asymptotics of P(S̄n(a) ≥ x). Owing to the above-mentioned deficiency of the inequality (5.3.3), one cannot derive from it the following assertion, the proof of which will be constructed in a different way.

Theorem 5.3.2. Let condition [ · , <] with V ∈ Se be satisfied, and let a > 0 be a fixed number. Then

    P(S̄n(a) ≥ x) ≤ c m V(x),    m := min{z(x), n}.    (5.3.4)
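The defining property (5.3.2) of the increment z(x) can be illustrated directly. A sketch with the model choice l(x) = x^α (ours, not the book's):

```python
# Numerical sketch of (5.3.1)-(5.3.2): with V(x) = exp(-l(x)), l(x) = x**alpha,
# the increment z(x)*t, z(x) = x/(alpha*l(x)) = 1/l'(x), scales the tail by
# almost exactly e**(-t) for fixed t.
import math

alpha = 0.5

def l(t):
    return t ** alpha

x = 1e10
z = x / (alpha * l(x))                        # z(x) = 1/l'(x)
factor1 = math.exp(l(x) - l(x + z * 1.0))     # V(x + z)/V(x)
factor2 = math.exp(l(x) - l(x + z * 2.0))     # V(x + 2z)/V(x)
print(factor1, math.exp(-1.0))
print(factor2, math.exp(-2.0))
```

The agreement improves as x grows, since the curvature term in l is of smaller order.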
To prove Theorem 5.3.1, we will need an auxiliary assertion. Set

    S(l, r) := Σ_{j=1}^{n} j^l V^r(y + vj).    (5.3.5)

Lemma 5.3.3. If V ∈ Se then

    S(l, r) ≤ c Γ(l + 1) min{ A^{l+1}, n^{l+1}/Γ(l + 2) } V^r(y),    (5.3.6)
where A = z(y)/(rv) and the constant c can be chosen arbitrarily close to 1 for large enough n.

Proof. Clearly S(l, r) ≤ c I(l, r), where

    I(l, r) := ∫_0^n t^l V^r(y + vt) dt = v^{−(l+1)} ∫_0^{nv} u^l V^r(y + u) du.

For u ≤ nv = o(y) we have

    V^r(y + u) = V^r(y) exp{ −(ur/z(y))(1 + o(1)) }.

Since

    ∫_0^A t^l e^{−t} dt ≤ min{ Γ(l + 1), A^{l+1}/(l + 1) },

we obtain

    ∫_0^{nv} u^l V^r(y + u) du ≤ V^r(y) (z(y)/r)^{l+1} ∫_0^{nvr/z(y)} t^l e^{−t(1+o(1))} dt
        ≤ c V^r(y) (z(y)/r)^{l+1} min{ Γ(l + 1), (nvr/z(y))^{l+1}/(l + 1) }.
This bound, proving (5.3.6), will obviously remain valid for arbitrary nv as well. The lemma is proved.

Proof of Theorem 5.3.1. For n ≤ z(y) we have

    y1 := y + vn ≤ y + v z(y) ∼ y,    π1(y) = n w1(y) ≤ z(y) y^{−2} l(y) ∼ 1/(αy),

and therefore

    x/y1 − π1(y)h/2 ≥ x/(y(1 + v z(y)/y)) − (h + o(1))/(2αy) ≥ r − rv z(y)/y + O(1/y) = r + O(1/l(y)).

Hence, by Theorem 5.2.1,

    P(a, v) ≤ P( S̄n ≥ x; ⋂_{j=1}^{n} {ξj < y1} ) ≤ c [nV(y1)]^{x/y1 − π1(y)h/2} ≤ c1 [nV(y1)]^{r}.

Now let n be arbitrary. First we bound the probability

    P(Sn − an ≥ x; B(v)) ≤ P( Sn ≥ x + an; ⋂_{j=1}^{n} {ξj < y + vn} ).
We will use Theorem 5.2.1, there taking x to be x1 = x + an and y and r to be y1 = y + vn and r1 = x1 /y1 respectively, so that r1 r
x + an x + a(1 − δ)n
for
v
a(1 − δ) . r
(5.3.7)
By virtue of Theorem 5.2.1, # $r1 −hπ1 (y1 )/2 P(Sn − an x; B(v)) c nV (y1 ) , where π1 (y1 ) = nw1 (y1 ) = ny1α−2 L(y1 ) = o min{1, n/x} for y εx and x → ∞. However, r1 r 1 + f (n/x) , where, owing to (5.3.7), f (t) :=
atδ 1 + at −1= c min{1, t}. 1 + at(1 − δ) 1 + at(1 − δ)
Hence, for all large enough x, # $r hπ1 (y1 ) r, P(Sn − an x; B(v)) c nV (y + vn) . 2 This leads to the bounds n P(Sj − aj x; B(v)) P (a, v) r1 −
j=1 n
c
# $r+1 r j r V r (y + vj) c1 min{z(y), n} V (y).
(5.3.8)
j=1
The last inequality uses Lemma 5.3.3. Theorem 5.3.1 is proved. Proof of Theorem 5.3.2. For n z, the assertion of the theorem follows from Corollary 5.2.2. Indeed, in this case l(x) → 0, αx and therefore the conditions of the second assertion of Corollary 5.2.2(i) are satisfied. Hence π1 l(x) zl2 (x)x−2 ∼
P(S n (a) x) P(S n x) cnV (x). For n z, we will make use of Corollary 7.5.4 below (see also [275]), which states that ∞ 1 V (x + t) dt (1 + o(1)). P(S(a) x) = a 0
Therefore (see the proof of Lemma 5.3.3) P(S n (a) x) P(S(a) x) czV (x).
250
Random walks with semiexponential jump distributions
The theorem is proved. The assertion of Theorem 5.3.2 can also be obtained as a consequence of the results of [178], where the relation (5.1.17) was established for so-called strongly subexponential distributions. Sufficient conditions for a distribution to belong to the class of strongly subexponential distributions will be met for any V ∈ Se. As in § 5.2, lower bounds will follow from those of § 4.3.
5.4 Large deviations of the sums Sn In this section we will obtain the first-order approximation for P(Sn x) together with a more detailed description of the asymptotics of this probability.
5.4.1 Preliminary considerations We will introduce an additional smoothness condition on the r.v.f. l(t) = tα L(t) from (5.1.1), which will be needed for deriving asymptotic expansions. This condition is similar to condition [D(2,q) ] from § 4.4. [D] The s.v.f. L(t) is continuously differentiable for t t0 and some t0 > 0, and, as t → ∞, one has L (t) = o(L(t)/t) and, for Δ = o(t), l(t + Δ) − l(t) = l (t)Δ +
α(α − 1) l(t) 2 Δ 1 + o(1) + o q(t) , 2 2 t
(5.4.1)
where q(t) 1 is a non-increasing non-negative function. Condition [D] with q(t) ≡ 0 will be met provided there exists l (t) = α(α − 1)
l(t) 1 + o(1) . 2 t
(5.4.2)
In this case, t+Δ
v
l (t) +
l(t + Δ) − l(t) = t
l (u)du dv
t
α(α − 1) l(t) 2 = l (t)Δ + 1 + o(1) . Δ 2 t2
(5.4.3)
If a function l1 (t) has a second derivative with the property (5.4.2) and if (5.4.4) l(t) = l1 (t) + o q(t) , then condition (5.4.1) will be satisfied but with l(t) replaced on its right-hand side by l1 (t) and with the function q(t) from (5.4.4). In fact, it is this, more general, form of condition [D] that is actually required for refining the asymptotics of P(Sn x). However, the form of condition [D] presented above appears preferable as it simplifies the exposition.
5.4 Large deviations of the sums Sn
251
Condition [D] with q(t) ≡ 0 (see (5.4.2)) could be referred to as second-order differentiability at infinity. In the lattice (arithmetic) case, t and Δ are assumed to be integer-valued, whereas condition [D] takes this form: [D] The s.v.f. L(t) is such that L(t + 1) − L(t) = o L(t)/t as t → ∞ and for Δ = o(t), Δ 2, one has # $ α(α − 1) l(t) 2 Δ (1 + o(1)) + o q(t) , l(t + Δ) − l(t) = l(t + 1) − l(t) Δ + 2 2 t where q(t) 1 is a non-increasing non-negative function. All sufficiently smooth r.v.f.’s l(t) = tα L(t) will satisfy (5.4.2), and hence condition [D] as well. For example, for L(t) = (ln t)γ we have γ l (t) = α(α − 1) tα−2 (ln t)γ (1 + o(1)). L (t) = (ln t)γ−1 = o(L(t)/t), t Now consider conditions of another type. They will ensure that approximations of the form (5.1.11), (5.1.15) hold true. Had the relations (5.1.9) or (5.1.10) been satisfied, one would not need these new conditions. But the property V ∈ Se (see (5.1.1)–(5.1.3)) does not imply (5.1.9) and even less does it imply the excessive (concerning left tails) condition (5.1.10). There is little doubt, however, that (5.1.11) holds (possibly in a somewhat smaller deviation zone; see below for more details) only when conditions (5.1.1)–(5.1.3) are satisfied. Since such a result, to the best of our knowledge, has not been obtained yet we can assume, along with (5.1.1)–(5.1.3), only what we need, namely, that the Cram´er approximation (5.1.11) holds true. (For an approximation of the form (5.1.16), we use the term maximum jump approximation). In order to state the main assertions, we will discuss in some detail the characteristics of the Cram´er deviation zone, the so-called intermediate zone and the zone where the maximum jump approximation holds true (the ‘extreme zone’). First consider the boundary of the Cram´er deviation zone. 
Recall that we defined it as the value x = σ1 = σ1 (n), for which the logarithmic Cram´er approximation x2 (5.4.5) 2n is of the same order of magnitude as the logarithmic maximum jump approximation ln P(Sn x) ∼ −
ln P(Sn x) ∼ ln n V (x). In other words, we consider the equation x2 = − ln nV (x), 2n or, equivalently from the viewpoint of the asymptotics of σ1 (n), the equation x2 = l(x). 2n
(5.4.6)
252
Random walks with semiexponential jump distributions
It will, however, be convenient for us to amend further this equation for x = σ1 by removing the coefficient 1/2 from its left-hand side, which results in (5.2.2). Using the function w1 (t) = t−2 l(t), which we introduced in (5.2.4), we obtain a solution to the equation (5.2.2) in the form (−1)
σ1 (n) = w1
(1/n),
(5.4.7)
(−1)
is the function inverse to w1 . For simplicity, we assume that the where w1 functions l(t) and w1 (t) are continuous and monotone for t t0 and some t0 > 0, so that one could write 1 1 for n > w1 (σ1 (n)) = n w1 (t0 ) (the transition to the general case merely complicates the notation somewhat). Clearly, the solution to (5.4.6) is equal to σ1 (2n); it differs from σ1 (n) only by a constant factor, which is close to 21/(2−α) (see also (5.4.9) below). If the function L(t) has the property (5.4.8) L tL1/(2−α) (t) ∼ L(t) as t → ∞ (for example, all the powers (ln t)γ , γ > 0, and all functions varying even more slowly will possess this property) then it is not hard to see that we will have (5.2.7) or, equivalently, (5.4.9) σ1 (n) = n1/(2−α) L1 (n), L1 (n) ∼ L1/(2−α) n1/(2−α) . We will assign deviations x = s1 σ1 (n) with s1 1 to the Cram´er deviation zone. When s1 → ∞, we will assign them either to the intermediate or to the extreme zones. One could equivalently characterize deviations using the quantity π1 = π1 (x) = nw1 (x).
(5.4.10)
It is obvious that if s1 = x/σ1 (n) is fixed then π1 (x) = nw1 (sσ1 (n)) ∼ sα−2 , 1
(5.4.11)
so that the deviations x belong to the Cram´er zone if π1 (x) > 1. Now observe that, under the conditions [ · , <] with V ∈ Se, Eξ = 0 and E|ξ|b <∞ for b κ + 1, κ = 1/(1 − α) + 1, we have the following ‘Cram´er approximation’ property: [CA]
The uniform approximation (5.1.11) holds in the zone 0 < x σ+ (n) := σ1 (n)(1 − ε(n)),
(5.4.12)
where ε(n) → 0 as n → ∞. Indeed, if condition [ · , <] with V ∈ Se is met then, as we have already noted
5.4 Large deviations of the sums Sn
253
in § 5.1, (5.1.9) holds for the function h(t) = l(t) − ln t. It is not difficult to see that the function σ ∗ (n), which is the solution to the equation x2 = nh(x) and which specifies the zone where (5.1.11) takes place, has the form σ ∗ (n) = σ1 (n)(1 − ε(n)),
(5.4.13)
where σ1 (n) is defined in (5.4.7), ε(n) > 0 and, moreover, owing to the representation (5.1.3) we have ε(n) ∼
α ln n ↓0 (2 − α)2 n 2 − α
as n → ∞, as required. It remains to observe that in the papers that established (5.1.11) (Lemma 1b of [238] and Theorem 2.1 of [206]), it was also assumed that the function h in (5.1.9) satisfies certain conditions that are met if V ∈ Se. Thus, when V ∈ Se and (5.4.12) holds, the Cram´er approximation [CA] always holds true. Note that for negative deviations one has t t < 0, (5.4.14) P(Sn < t) = Φ √ (1 + o(1)), n uniformly in the zone
t σ− (n) := −(1 − ε) (b − 2)n ln n,
ε > 0.
(5.4.15)
If condition (5.1.10) is met then clearly an approximation of the form (5.1.11) will also hold for P(Sn < −t). Likewise, when studying the asymptotics of P(S n x) we will be using the following property. [CA] The uniform approximation (5.1.15) holds in the zone 0 < x σ+ (n) = σ1 (n)(1 − ε(n)), where ε(n) → 0 as n → ∞. It is not hard to establish, using an argument similar to that above and the results of [2], that [CA] always holds if the conditions [ · , <] with V ∈ Se and (5.1.10) are satisfied. As was the case with [CA], the expected result is that for [CA] to hold it suffices that condition [ · , <] with V ∈ Se is met. Now we return to characterizing the deviation zones. Recall that in § 5.2 we introduced the function w2 (t) = w1 (t) l(t) = t−2 l2 (t) = t2α−2 L2 (t)
(5.4.16)
(see (5.2.11)) and put (−1)
σ2 (n) = w2
(1/n) = n1/(2−2α) L2 (n).
(5.4.17)
Under condition (5.4.8) the s.v.f. L2 , as well as L1 , can be found explicitly. It will be seen from the assertions to be presented below that the deviations x = s2 σ2 (n) σ1 (n) will belong to the intermediate zone when s2 → 0 and to the maximum jump approximation zone when s2 → ∞.
254
Random walks with semiexponential jump distributions
One could equivalently characterize deviations x σ1 (n) using the quantity π2 (x) = π2 = nw2 (x),
(5.4.18)
assigning deviations to the intermediate zone if π2 → ∞. To state the main assertion, we will need some additional preliminary considerations. Introduce the function g2 (t) := l(x − t) +
t2 . 2n
(5.4.19)
Assume that the function L(t) (or, equivalently, the function l(t)) is continuously differentiable for t t0 > 0 and that l (t) ∼ αl(t)/t as t → ∞. Then g2 (t) will be differentiable for t x − t0 . If the deviations x belong to the intermediate zone, i.e. (see (5.4.10), (5.4.11)) π1 = nw1 (x) = nx−2 l(x) → 0,
(5.4.20)
then, for any fixed ε > 0, the function g2 (t) will attain its minimum on the interval (0, εx) at a point t∗ > 0, t∗ = o(x). Indeed, on the one hand, g2 (t) < 0 for t 0. On the other hand, for large enough x, εx εx g2 (εx) = −l (x(1 − ε)) + = −αxα−1 (1 − ε)α−1 L(x)(1 + o(1)) + n n $ x# α−1 ε − α(1 − ε) π1 (1 + o(1)) > 0 (5.4.21) = n by virtue of (5.4.20), which implies that t∗ = o(x). In what follows, along with l(x) an important role will be played by the function x x1−α 1 ∼ = . (5.4.22) z = z(x) := l (x) αl(x) αL(x) In terms of z, the property (5.4.20) can be rewritten as n xz.
(5.4.23)
To find an approximation to t∗ , observe that d = −l (x)(1 + o(1)), l(x − t) dt t=t∗ and hence g2 (t∗ ) = 0 = −l (x)(1 + o(1)) +
t∗ . n
From here we find that t∗ = n l (x)(1 + o(1)) =
n (1 + o(1)) = o(n). z
(5.4.24)
5.4 Large deviations of the sums Sn
255
Now note that if t = o(n) then for the function Λκ from (5.1.12) we have nΛκ
t n
t2 = (1 + o(1)), 2n
Λκ
t n
=
t (1 + o(1)). n
(5.4.25)
Therefore, all the above discussion (and, in particular, the relation (5.4.24)) will remain true if, in the definition (5.4.19) of g2 (t), we substitute nΛκ (t/n) for the function t2 /2n, i.e. if by t∗ we understand the point where the function gκ (t) = gκ (t, x, n) := l(x − t) + nΛκ
t n
(5.4.26)
attains its minimum value. Put M = M (x, n) := min gκ (t, x, n). t
(5.4.27)
Clearly M l(x) and, moreover, owing to (5.4.24) and (5.4.25), one has M = l(x − t∗ ) + nΛκ
t∗ n
= l(x) −
n (l (x))2 (1 + o(1)) 2
2 α2 nα2 l(x) nw2 (x)(1 + o(1)) (1 + o(1)) = l(x) − = l(x) − 2 x 2 α2 nw1 (x)(1 + o(1)) . = l(x) 1 − (5.4.28) 2 Hence, if π2 = nw2 (x) → 0 (i.e. the deviations belong to the maximum jump approximation zone) then M = l(x) + o(1).
(5.4.29)
If the deviations are in the intermediate zone (i.e. π1 = nw1 (x) → 0) then M = l(x)(1 + o(1)).
(5.4.30)
If condition [D] holds with q(t) ≡ 0 (or (5.4.2) holds true) and κ = 2 then we can find more precise expressions for t∗ and M . In this case, for t = o(x), π1 (x) = nw1 (x) → 0 we have g2 (t) = −l (x − t) +
$ t t # 1 + α(α − 1)π1 (x)(1 + o(1)) . = −l (x) + n n
So the solution t∗ to the equation g2 (t) = 0 will have the form # $ t∗ = nl (x) 1 − α(α − 1)π1 (x)(1 + o(1)) .
256
Random walks with semiexponential jump distributions
From this we find that (t∗ )2 2n α(α − 1) l(x) ∗ 2 (t∗ )2 (t ) (1 + o(1)) + = l(x) − t∗ l (x) + 2 2 x 2n 2 # $ = l(x) − n l (x) 1 − α(α − 1)π1 (x)(1 + o(1)) 2 α(α − 1) π1 (x)n l (x) (1 + o(1)) + 2 2 2 n l (x) + − n l (x) α(α − 1)π1 (x)(1 + o(1)). 2 Putting, for brevity, 2 π2∗ (x) := n l (x) ∼ α2 π2 (x), M = l(x − t∗ ) +
one obtains π2∗ (x) α(α − 1) + π1 (x)π2∗ (x)(1 + o(1)) 2 2 π ∗ (x) α3 (α − 1) = l(x) − 2 (5.4.31) + π1 (x)π2 (x)(1 + o(1)). 2 2 The case κ = 3 can be considered in a similar way but the resulting expressions will be somewhat different. M = l(x) −
5.4.2 Limit theorems for large deviations of Sn Now we can state the main assertion. Theorem 5.4.1. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied. Then the following assertions hold true. (i) If condition [D] holds with q(t) ≡ 1 then in the intermediate and extreme deviation zones one has P(Sn x) = ne−M (1 + ε(x, n)),
(5.4.32)
where ε(x, n) = o(1) uniformly (see Remark 5.4.4 below) in the range of values x, n such that x → ∞. n → ∞, s1 = σ1 (n) The functions M = M (x, n), σ1 (n) and σ2 (n) are defined and described in (5.4.27)–(5.4.31), (5.4.7), (5.4.17). The condition s1 → ∞ can be replaced by π1 → 0. (ii) Let n → ∞ and s2 = x/σ2 (n) → ∞ (or, equivalently, π2 = nw2 (x) → 0). Then, uniformly in x and n from that range, P(Sn x) = nV (x)(1 + o(1)).
(5.4.33)
5.4 Large deviations of the sums Sn
257
(iii) If condition [D] holds and s2 = x/σ2 (n) → ∞ (or, equivalently, n = o(z 2 ), where z = 1/l (x) ∼ x/αl(x)), then % n − 1 (α − 1)(n − 1) + P(Sn x) = n V (x) 1 + (1 + o(1)) 2z 2 2xz & √ + O ( n/z)3 + o q(x) , (5.4.34) where the remainder terms are uniform in the range of x and n specified in part (ii). Remark 5.4.2. It will be seen below from the proof of the theorem that the value ne−M from (5.4.32), which gives the asymptotics of P(Sn x) in the intermediate zone, is simply the main part of the convolution of the Cram´er approximation and the extreme zone approximation nV (x) (see part (ii) of the theorem). Remark 5.4.3. In the lattice case, we have a complete analogue of the assertions (5.4.32) and (5.4.34) for the values of x on the lattice. Remark 5.4.4. The uniformity in part (i) of the theorem is understood in the following sense. For any sequences n → ∞, s1 → ∞ one can construct an ε = ε (n , s1 ) → 0 such that in (5.4.32) one has ε(x, n) ε for all n n , s1 s1 . The uniformity of the remainder terms in the assertion (5.4.34) is to be understood in a similar way. Remark 5.4.5. It can be seen from (5.4.28) that e−M = V (x)eα
2
π2 (1+o(1))/2
,
and hence in the case π2 → ∞ (i.e in the intermediate deviation zone) we have e−M V (x), so that the asymptotics (5.4.32) and nV (x) are quite different. It follows from (5.4.30), however, that the ‘crude asymptotics’ (i.e. the asymptotics of ln P(Sn x)) will coincide in the intermediate and extreme zones (when π1 → 0, see (5.4.28), (5.4.30)); then ln P(Sn x) = (1 + o(1)) ln nV (x) = (1 + o(1)) ln V (x). Remark 5.4.6. When b > 3 one can obtain more complete asymptotic expansions in part (ii) of the theorem. Using Theorem 5.4.1, one can find, for example, the asymptotics of the distribution tail of χ2n := ζ12 + · · · + ζn2 , where the i.i.d. r.v.’s ζi satisfy Cram´er’s condition (see (5.1.6)). To prove the theorem, we will need the following auxiliary assertion. Lemma 5.4.7. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied,
258
Random walks with semiexponential jump distributions
and an r.v. ξ be independent of Sn . Then, for n → ∞ and x σ+ , 1 + o(1) P(ξ + Sn x, σ− Sn < σ+ ) = √ 2πn
σ+
V (x − t)e−nΛκ (t/n) dt,
σ−
(5.4.35) where σ± were defined in (5.4.12), (5.4.15). Proof. First note that the function % & 2 x Qn (x) := 1 − Φ √ e−nΛκ (x/n)+x /2n n on the right-hand side of (5.1.11) is differentiable and that qn (x) := −Qn (x) = √
1 e−nΛκ (x/n) (1 + o(1)) 2πn
as n → ∞ uniformly in [σ− , σ+ ]. Moreover, if points v1 < v2 from [σ− , σ+ ] are such that the integral v2 qn (t) dt v1
is comparable with Qn (v1 ) (i.e. has the same order of magnitude) then we obtain from (5.1.11) that P Sn ∈ [v1 , v2 ) =
v2 qn (t) dt (1 + o(1)). v1
For a given Δ > 0, partition the segment [σ− , σ+ ] into semi-open intervals of the form Δk := [uk−1 , uk ), where the uk are defined as solutions to the equations 1 Qn (uk ) = e−Δk , 2 uk 1 Δk Φ √ = e , 2 n
k 0, k < 0,
and assume for simplicity that N+ := −
1 ln 2Qn (σ+ ), Δ
N− :=
1 σ− ln 2Φ √ Δ n
<0
are integers, so that u0 = 0, uN− = σ− and uN+ = σ+ . (To make N+ integervalued, it suffices to slightly decrease, if necessary, the value of σ+ ; a similar remark applies to N− .) Since, for a fixed Δ > 0, the integral qn (t) dt = Qn (uk−1 ) − Qn (uk ) = Qn (uk−1 )(1 − e−Δ ), 0 < k N+ , Δk
5.4 Large deviations of the sums Sn
259
is comparable with Qn (uk−1 ) it follows from the previous argument that P(Sn ∈ Δk ) = qn (t) dt (1 + o(1)). (5.4.36) Δk
It is clear that the representation (5.4.36) remains valid for negative k > N− as well. Further, note that for large k one has |Δk | := uk − uk−1 = o(uk ). This follows √ from the fact that, for uk n, √ u2 1 ln 2Qn (uk ) ∼ k , uk ∼ 2nΔk. Δ 2nΔ As |Δk | = o(uk ), we obtain (slightly decreasing the value of σ+ if needed) that (5.4.36) holds for the ‘boundary’ interval ΔN+ +1 = [uN+ , uN+ +1 ) as well. The same remark applies to ΔN− . Now we can proceed to prove (5.4.35). Rewrite the probability on the left-hand side of (5.4.35) as k=−
$$
p := \int_{\sigma_-}^{\sigma_+} V(x-t)\, P(S_n \in dt) = \sum_{k=N_-+1}^{N_+} \int_{\Delta_k} V(x-t)\, P(S_n \in dt)
\leq \sum_{k=N_-+1}^{N_+} V(x-u_k)\, P(S_n \in \Delta_k)
= \sum_{k=N_-+1}^{N_+} V(x-u_k) \int_{\Delta_k} q_n(v)\, dv\,(1+o(1)),
$$
where the last equality holds by (5.4.36). By the definition of the quantities $u_k$, the integrals $\int_{\Delta_k}$ of the function $q_n(v)$ over the intervals $\Delta_k$ have the following properties:
$$
\int_{\Delta_k} = e^{\Delta}\int_{\Delta_{k+1}} \ \text{ for } k \geq 1, \qquad
\int_{\Delta_0} = \int_{\Delta_1}, \qquad
\int_{\Delta_k} = e^{-\Delta}\int_{\Delta_{k+1}} \ \text{ for } k < 0.
$$
Therefore, continuing the above chain of inequalities, we obtain
$$
p \leq e^{\Delta} \sum_{k=N_-+1}^{N_+} V(x-u_k)\int_{\Delta_{k+1}} q_n(v)\, dv\,(1+o(1))
\leq e^{\Delta} \sum_{k=N_-+2}^{N_++1}\int_{\Delta_k} V(x-v)\, q_n(v)\, dv\,(1+o(1))
\leq e^{\Delta} \int_{u_{N_-}}^{u_{N_++1}} V(x-t)\, q_n(t)\, dt\,(1+o(1)).
$$
On the right-hand side we have the integral $\int_{\sigma_-}^{\sigma_+ + |\Delta_{N_++1}|} V(x-t)\, q_n(t)\, dt$, where, as we observed above, $|\Delta_{N_++1}| = o(u_{N_+}) = o(\sigma_+)$. The asymptotics of exactly the same integral but for limits $\sigma_-$ and $\sigma_+$ will be studied below (see (5.4.49)–(5.4.54)). From these computations it will follow in an obvious way that, for deviations $x \geq \sigma_+$, the asymptotics in question are determined by an integral over a subinterval of $(\sigma_-, \sigma_+)$ that is 'far' from the endpoints $\sigma_\pm$, and so will not depend on small relative variations in the upper integration limit. This means that, as $n \to \infty$ and $x \geq \sigma_+$,
$$
\int_{\sigma_-}^{\sigma_+ + |\Delta_{N_+}|} \sim \int_{\sigma_-}^{\sigma_+}
\qquad\text{and}\qquad
p \leq e^{\Delta}\,(1+o(1)) \int_{\sigma_-}^{\sigma_+}.
$$
In exactly the same way one finds that
$$
p \geq e^{-\Delta}\,(1+o(1)) \int_{\sigma_-}^{\sigma_+}.
$$
Since $\Delta > 0$ is arbitrary and $p$ does not depend on $\Delta$, it follows that
$$
p = (1+o(1)) \int_{\sigma_-}^{\sigma_+}.
$$
The lemma is proved.

Proof of Theorem 5.4.1. (i) Let
$$
G_n := \{S_n \geq x\}, \qquad B_j := \{\xi_j < y\}, \qquad B = \bigcap_{j=1}^{n} B_j.
$$
Then
$$
P(S_n \geq x) = P(G_n B) + P(G_n \overline B), \qquad (5.4.37)
$$
where $P(G_n B)$ was bounded in Theorem 5.2.1:
$$
P(G_n B) \leq c\,\big(nV(y)\big)^{r - \pi_1(y)h/2}, \qquad r = \frac xy, \quad \pi_1(y) = n\, w_1(y). \qquad (5.4.38)
$$
Since by assumption $\pi_1 = \pi_1(x) \to 0$, we see that for $y = \delta x$, where $\delta \in (0,1)$ is fixed, one has $\pi_1(y) \to 0$ and
$$
V(y)^{r - \pi_1(y)h/2} = V(y)^{r + o(1)} = \exp\big\{-l(\delta x)\,(1/\delta + o(1))\big\} = \exp\big\{-l(x)\,(\delta^{\alpha-1} + o(1))\big\} \leq V(x)^{1+\gamma'} \qquad (5.4.39)
$$
for any fixed $\gamma' \leq \delta^{\alpha-1} - 1$ ($> 0$) and all large enough $x$. For $x \geq \sqrt n$ the same bound will clearly hold for $P(G_n B)$ as well.
For the second term on the right-hand side of (5.4.37), we have
$$
\sum_{j=1}^{n} P(G_n \overline B_j) \geq P(G_n \overline B)
\geq \sum_{j=1}^{n} P(G_n \overline B_j) - \sum_{i<j\leq n} P(G_n \overline B_i \overline B_j)
\geq \sum_{j=1}^{n} P(G_n \overline B_j) - \frac{n(n-1)}{2}\,\big(P(\xi_1 \geq y)\big)^2, \qquad (5.4.40)
$$
so that for $y = \delta x$, $\delta \in (0,1)$, $x \geq \sqrt n$ the following holds:
$$
P(G_n \overline B) = \sum_{j=1}^{n} P(G_n \overline B_j) + O\big((nV(y))^2\big).
$$
Therefore
$$
P(G_n) = \sum_{j=1}^{n} P(G_n \overline B_j) + O\big(V^{1+\gamma}(x)\big) \qquad (5.4.41)
$$
for some $\gamma \in (0, \min\{1, \gamma'\})$. Thus the main problem is now to evaluate the terms
$$
P(G_n \overline B_j) = P(G_n \overline B_n) = P(S_{n-1} + \xi_n \geq x,\ \xi_n \geq y)
= P(\xi_n \geq y,\ S_{n-1} \geq x-y) + P(S_{n-1} < x-y,\ S_{n-1} + \xi_n \geq x). \qquad (5.4.42)
$$
Here the first term in the sum is equal to $P^{(1)} := V(y)\, P(S_{n-1} \geq x-y)$. From Corollary 5.2.2(i) we obtain that, for $y = \delta x$,
$$
P^{(1)} \leq c\, n\, V(\delta x)\, V\Big(x(1-\delta)\Big(1 - \frac12\,\pi_1(x(1-\delta))\,h\Big)\Big). \qquad (5.4.43)
$$
The following evaluations are insensitive to the asymptotics of $L(x)$, so, for simplicity, we will put for the present $L(x) \equiv 1$. Then we obtain from (5.4.43) that
$$
P^{(1)} \leq c\, n\,\exp\Big\{-(\delta x)^\alpha - \Big[x(1-\delta)\Big(1 - \frac12\,\pi_1(x(1-\delta))\,h\Big)\Big]^\alpha\Big\}
= c\, n\,\exp\big\{-x^\alpha\,[\delta^\alpha + (1-\delta)^\alpha + O(\pi_1)]\big\}
= c\, n\,\exp\big\{-x^\alpha\,[1 + \gamma(\delta) + O(\pi_1)]\big\}, \qquad (5.4.44)
$$
where the function $\gamma(\delta) := \delta^\alpha + (1-\delta)^\alpha - 1$ is concave on $[0,1]$ and symmetric with respect to the point $\delta = 1/2$, with $\gamma(0) = 0$, $\gamma(1/2) = 2^{1-\alpha} - 1$ and $\gamma'(0) = \infty$. Hence $\gamma(\delta) > 0$ for $\delta \in (0,1)$ and, for any $\delta \in (0, 1/2)$, one has $\gamma(\delta) > 2\delta\,(2^{1-\alpha} - 1)$. In the general case, for an arbitrary s.v.f. $L$, we will have $l(x)$ instead of $x^\alpha$ on the right-hand side of (5.4.44), and the quantity $\gamma(\delta)$ will acquire a factor $(1+o(1))$.
As a result, the right-hand side of (5.4.44) admits an upper bound $c\,n\,(V(x))^{1+\gamma}$, $\gamma > 2\gamma(\delta)/3$, which yields for $x \geq \sqrt n$
$$
n P^{(1)} \leq (V(x))^{1+\gamma}, \qquad \gamma > \frac{\gamma(\delta)}{2} > 0. \qquad (5.4.45)
$$
Now consider the second term on the right-hand side of (5.4.42):
$$
P^{(2)} := P\big(S_{n-1} < x-y,\ S_{n-1} + \xi_n \geq x\big)
= E\big[V(x - S_{n-1});\ S_{n-1} < x-y\big] = E_1 + E_2 + E_3, \qquad (5.4.46)
$$
where, for $\sigma_- = -c\sqrt{n\ln n}$ (see (5.4.15) and also (5.4.12)),
$$
E_1 := E\big[V(x - S_{n-1});\ S_{n-1} < \sigma_-\big], \qquad
E_2 := E\big[V(x - S_{n-1});\ \sigma_- < S_{n-1} \leq \sigma_+\big], \qquad
E_3 := E\big[V(x - S_{n-1});\ \sigma_+ < S_{n-1} \leq x(1-\delta)\big]. \qquad (5.4.47)
$$
We will begin by bounding $E_1$. Since $|\sigma_-| \gg \sqrt n$, by the central limit theorem we have $P(S_n < \sigma_-) \to 0$ and therefore
$$
E_1 \leq V(x - \sigma_-)\, P(S_n < \sigma_-) = o\big(V(x)\big). \qquad (5.4.48)
$$
Next consider $E_2$. By Lemma 5.4.7 (to simplify the notation, we will replace $n-1$ by $n$; this will change nothing in the asymptotics of $E_2$),
$$
E_2 = \frac{1+o(1)}{\sqrt{2\pi n}} \int_{\sigma_-}^{\sigma_+} V(x-t)\, e^{-n\Lambda_\kappa(t/n)}\, dt
= \frac{1+o(1)}{\sqrt{2\pi n}} \int_{\sigma_-}^{\sigma_+} e^{-g_\kappa(t)}\, dt. \qquad (5.4.49)
$$
We have already discussed the properties of the function $g_\kappa(t) = l(x-t) + n\Lambda_\kappa(t/n)$ (see pp. 254–256). For $t = o(x)$, $t = o(n)$ we have, by virtue of condition [D], that
$$
l(x-t) = l(x - t^*) - (t - t^*)\, l'(x - t^*) + \frac{\alpha(\alpha-1)\, l(x-t^*)}{2(x-t^*)^2}\,(t-t^*)^2\,(1+o(1)) + o(1).
$$
Here the last term, $o(1)$, on the right-hand side could be omitted (i.e. one can assume that [D] holds with $q \equiv 0$), since $e^{o(1)} \sim 1$ and hence that term does not affect the first-order asymptotics dealt with in part (i) of the theorem. Further,
$$
\frac{t^2}{2n} = \frac{(t^*)^2}{2n} + \frac{t^*(t-t^*)}{n} + \frac{(t-t^*)^2}{2n}.
$$
It follows that, when $\pi_1 \to 0$ (i.e. when $x^{-2} l(x) = o(1/n)$),
$$
g_\kappa(t) = g_\kappa(t^*) + \frac{(t-t^*)^2}{2n}\,(1+o(1)). \qquad (5.4.50)
$$
Now we show that the minimum point $t^*$, together with its $\sqrt n$-neighbourhood, lies inside the integration interval $[\sigma_-, \sigma_+]$. If $s_1 \to \infty$ so slowly that $L(s_1\sigma_1(n)) \sim L(\sigma_1(n))$ then, by virtue of (5.4.24),
$$
t^* \sim \alpha n\,\frac{l(x)}{x} = \alpha n x\, w_1(x) = \alpha n s_1 \sigma_1(n)\, w_1(s_1\sigma_1(n)) \sim \alpha s_1^{\alpha-1}\sigma_1(n) = o(\sigma_1(n)). \qquad (5.4.51)
$$
If $s_1 \to \infty$ at a faster rate then it is even more obvious that
$$
t^* = o(\sigma_1(n)) \qquad (5.4.52)
$$
holds. Moreover, since $\sigma_+ \sim \sigma_1$ we have
$$
t^* = o(\sigma_+(n)). \qquad (5.4.53)
$$
As $\sqrt n = o(\sigma_+(n))$, along with (5.4.53) the following also holds: $t^* + c\sqrt n \leq \sigma_+$. Finally, it is evident that $t^* > 0$ and $\sqrt n \ll |\sigma_-|$. The above, together with (5.4.49) and (5.4.50), enables one to conclude that
$$
E_2 = e^{-g_\kappa(t^*)}\,(1+o(1)) = e^{-M}\,(1+o(1)).
$$
It remains to bound the quantity
$$
E_3 = E\big[V(x - S_{n-1});\ \sigma_+ \leq S_{n-1} < x(1-\delta)\big]
= -\int_{\sigma_+}^{x(1-\delta)} V(x-u)\, dP(S_{n-1} \geq u)
$$
$$
= -V(x-u)\, P(S_{n-1} \geq u)\,\Big|_{\sigma_+}^{x(1-\delta)}
+ \int_{\sigma_+}^{x(1-\delta)} P(S_{n-1} > u)\, V(x-u)\, l'(x-u)\, du. \qquad (5.4.54)
$$
By virtue of Corollary 5.2.2 and the fact that $l'(x) \to 0$ as $x \to \infty$, we have
$$
E_3 \leq V(x-\sigma_+)\,P(S_{n-1} \geq \sigma_+) + \int_{\sigma_+}^{x(1-\delta)} V(x-u)\, l'(x-u)\, c\, n\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big)\, du
$$
$$
\leq \bigg[c\, n\, V(x-\sigma_+)\, V\Big(\sigma_+\Big(1 - \frac{\pi_1(\sigma_+)h}{2}\Big)\Big) + c\, n \int_{\sigma_+}^{x(1-\delta)} V(x-u)\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big)\, du\bigg]\, o(1), \qquad (5.4.55)
$$
where $\pi_1(u) \leq \pi_1(\sigma_+) \sim \pi_1(\sigma_1) = 1$. We have already estimated products of the form
$$
V(x-u)\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big),
$$
which are present in (5.4.55). This was done in (5.4.43) and (5.4.44), but for the case where $u$ is comparable with $x$, whereas in (5.4.55) one could have $u = o(x)$. In the latter case, for $h \leq 4/3$ and $u \geq \sigma_+$,
$$
V(x-u)\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big)
= \exp\Big\{-l(x-u) - l\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big)\Big\}
\leq \exp\Big\{-l(x) + \frac{\alpha u\, l(x)}{x}\,(1+o(1)) - l\Big(u\Big(1 - \frac h2\Big)\Big)(1+o(1))\Big\}
$$
$$
\leq \exp\Big\{-l(x) + \frac{\alpha u\, l(x)}{x}\,(1+o(1)) - 3^{-\alpha}\, l(u)\,(1+o(1))\Big\},
$$
where $l(x)/x = o(l(u)/u)$ for $u = o(x)$. Therefore, for such $u \to \infty$ one has
$$
V(x-u)\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big) \leq V(x)\, V(u)^{\gamma_1}
$$
for some fixed $\gamma_1 > 0$. From here and (5.4.55) it follows that, for large enough $n$,
$$
E_3 \leq V(x)\, V(\sigma_1)^{\gamma} \qquad (5.4.56)
$$
for some $\gamma \in (0,1)$. Collecting together the relations (5.4.48), (5.4.54) and (5.4.56) and noticing that $V(x) \leq e^{-M}$, we obtain $P^{(2)} = e^{-M}(1+o(1))$. This, together with (5.4.41) and (5.4.45), proves the first assertion of the theorem.

(ii) In this case, the bound for $P^{(1)}$ remains true, and the only difference will
be in how we evaluate $P^{(2)}$. Instead of (5.4.46), (5.4.47), one now needs to use another partition of the integration range $S_{n-1} < x-y$. To simplify the exposition, we first assume that the function $l(t)$ is differentiable and that
$$
l'(t) \sim \alpha\, l(t)/t, \qquad t \to \infty. \qquad (5.4.57)
$$
Further, for $z = 1/l'(x) \sim x/\alpha l(x)$, set $P^{(2)} = E_1 + E_2 + E_3$, where
$$
E_1 := E\big[V(x - S_{n-1});\ S_{n-1} < -z\big], \qquad
E_2 := E\big[V(x - S_{n-1});\ |S_{n-1}| \leq z\big], \qquad
E_3 := E\big[V(x - S_{n-1});\ z < S_{n-1} < x(1-\delta)\big]. \qquad (5.4.58)
$$
The quantity $E_1$ can be bounded using Chebyshev's inequality:
$$
E_1 \leq V(x+z)\, P(S_{n-1} < -z) = V(x)\, o\big(n^{b/2} z^{-b}\big). \qquad (5.4.59)
$$
Evaluating $E_2$ is quite trivial: since $V(x+v) = V(x)(1+o(1))$ for $v = o(z)$, $0 < c_1 < V(x+v)/V(x) < c_2 < \infty$ for $|v| \leq z$, and $z \gg \sqrt n$ by the assumptions of the theorem, we have from the central limit theorem that
$$
E_2 = V(x)\,(1+o(1)). \qquad (5.4.60)
$$
In order to bound $E_3$ we need to distinguish between the following two cases: $z \leq \sigma_1$ and $z > \sigma_1$. In the second case, by virtue of Corollary 5.2.2(i) we find, cf. (5.4.55), that
$$
E_3 = E\big[V(x - S_{n-1});\ z \leq S_{n-1} < x(1-\delta)\big]
\leq V(x-z)\,P(S_{n-1} \geq z) + \int_z^{x(1-\delta)} V(x-u)\, l'(x-u)\, c\, n\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big)\, du, \qquad (5.4.61)
$$
where $\pi_1(u)h/2 \leq h/2$ for $u \geq z \geq \sigma_1$. Repeating the argument that follows (5.4.55), we find that, for $z \leq u \leq x(1-\delta)$,
$$
V(x-u)\, V\Big(u\Big(1 - \frac{\pi_1(u)h}{2}\Big)\Big) \leq V(x)\, V(u)^{\gamma_1}, \qquad \gamma_1 > 0,
$$
and therefore
$$
E_3 \leq V(x)\, V(z)^{\gamma}, \qquad 0 < \gamma < 1. \qquad (5.4.62)
$$
If $z \leq \sigma_1$ then we split the integral $E_3$ into two parts,
$$
E_{3,1} := E\big[V(x - S_{n-1});\ z \leq S_{n-1} < \sigma_1\big]
$$
and
$$
E_{3,2} := E\big[V(x - S_{n-1});\ \sigma_1 \leq S_{n-1} < x(1-\delta)\big]. \qquad (5.4.63)
$$
The integral $E_{3,2}$ coincides with $E_3$ in the preceding considerations and therefore admits an upper bound $V(x)V(\sigma_1)^{\gamma} \leq V(x)V(z)^{\gamma}$, $\gamma > 0$. For $E_{3,1}$ we obtain, in a way similar to that used in our previous analysis, that
$$
E_{3,1} = E\big[V(x - S_{n-1});\ S_{n-1} \in [z, \sigma_1)\big] = -\int_z^{\sigma_1} V(x-u)\, dP(S_{n-1} \geq u)
$$
$$
\leq V(x-z)\,P(S_{n-1} \geq z) + \int_z^{\sigma_1} P(S_{n-1} \geq u)\, V(x-u)\, l'(x-u)\, du
\leq V(x-z)\, e^{-z^2/2nh} + \int_z^{\sigma_1} V(x-u)\, l'(x-u)\, e^{-u^2/2nh}\, du, \qquad (5.4.64)
$$
where the last inequality follows from Corollary 5.2.2(ii). For $u \leq \sigma_1 = o(x)$ we have
$$
l(x-u) = l(x) - \frac{\alpha\, l(x)\, u}{x}\,\big(1+o(1)\big).
$$
Observe that, for $u \geq z \sim x/\alpha l(x)$ and $\pi_2(x) \to 0$, one has
$$
c\, n\,\frac{l(x)}{xu} \leq c\,\frac{n\, l^2(x)}{x^2} = c\,\pi_2(x) \to 0.
$$
Hence for $u \geq z$ and large enough $x$ we obtain the inequality
$$
V(x-u)\, e^{-u^2/2nh} \leq V(x)\, e^{-u^2/3nh},
$$
so that $E_{3,1} \leq c\,V(x)\, e^{-z^2/3nh}$, and therefore
$$
E_3 \leq V(x)\,\big(V^{\gamma}(z) + e^{-z^2/3nh}\big). \qquad (5.4.65)
$$
Since $z^2/n \sim 1/\alpha^2\pi_2(x) \to \infty$, collecting up the above bounds leads to the relation (5.4.33).

In the general case, when (5.4.57) does not necessarily hold, one should put $z := x/\alpha l(x)$ and then replace the integrals containing $l'(x-u)\, du$ by sums of integrals with respect to $du\, l(x-u)$ over intervals of length $\Delta_0 z$ for a small fixed $\Delta_0 > 0$; on these intervals the increments of the function $l$ behave in the same way as when (5.4.57) holds true (see (5.1.3)).

(iii) It remains to establish the asymptotic expansion (5.4.34). To this end, one needs to do a more detailed analysis of the integral $E_2$; all the other bounds remain the same. Recall that $z \gg \sqrt n$ in the deviation zone under consideration. We have
$$
E_2 = E\big[e^{-l(x - S_{n-1})};\ |S_{n-1}| \leq z\big], \qquad (5.4.66)
$$
where, for $S = o(x)$, owing to condition [D] (see (5.4.1)) we have
$$
l(x-S) = l(x) - l'(x)\,S + \frac{\alpha(\alpha-1)\,l(x)}{2x^2}\,S^2\,(1+o(1)) + o\big(q(x)\big)
$$
or, equivalently,
$$
l(x) - l(x-S) = \frac Sz + \frac{(1-\alpha)\,S^2}{2xz}\,(1+o(1)) + o\big(q(x)\big). \qquad (5.4.67)
$$
Clearly, for $|S| \leq z$, $|l(x-S) - l(x)| \leq c < \infty$, and the Taylor expansion of the function $e^{l(x)-l(x-S)}$ in the powers of the difference $l(x) - l(x-S)$ yields
$$
e^{l(x)-l(x-S)} = 1 + \frac Sz + \frac{(1-\alpha)\,S^2}{2xz}\,(1+o(1)) + o\big(q(x)\big) + \frac{S^2}{2z^2} + O\Big(\frac{|S|^3}{z^3}\Big), \qquad (5.4.68)
$$
where the remainders $o(\cdot)$ and $O(\cdot)$ are uniform in $|S| \leq z$. Substituting the sum $S_{n-1}$ for $S$ in (5.4.68) and using (5.4.67) and the fact that $b = 1/(1-\alpha) + 2 \geq 3$, so that $E|\xi|^3 < \infty$ and $E|S_n|^3 = O(n^{3/2})$, we obtain
$$
E_2 = V(x)\Big(E\Big[1 + \frac{S_{n-1}}{z} + \frac{(1-\alpha)\,S_{n-1}^2}{2xz}\,(1+o(1)) + o\big(q(x)\big) + \frac{S_{n-1}^2}{2z^2};\ |S_{n-1}| \leq z\Big] + O\big(n^{3/2} z^{-3}\big)\Big). \qquad (5.4.69)
$$
Next note that, for $k \leq b$,
$$
T_k := E\big[|S_{n-1}|^k;\ |S_{n-1}| > z\big] = E\Big[\frac{|S_{n-1}|^b}{|S_{n-1}|^{b-k}};\ |S_{n-1}| > z\Big] \leq z^{k-b}\, E|S_{n-1}|^b = O\big(z^{k-b} n^{b/2}\big). \qquad (5.4.70)
$$
Returning to (5.4.69) and observing that $E S_{n-1} = 0$, $E S_{n-1}^2 = n-1$ and $n z^{-b} = o(n^{3/2} z^{-3})$, we have
$$
E_2 = V(x)\Big[1 + \frac{n-1}{2z^2} + \frac{(1-\alpha)(n-1)}{2xz}\,(1+o(1)) + o\big(q(x)\big) + O\Big(\frac{n^{3/2}}{z^3}\Big)\Big].
$$
Now taking into account the bounds (5.4.59) and (5.4.65) for $E_1$ and $E_3$ respectively, we obtain (5.4.34). The uniformity of the bounds claimed in the statement of Theorem 5.4.1 can be verified in an obvious way since, for all the terms $o(\cdot)$ and $O(\cdot)$, one can give explicit bounds in the form of functions of $x$ and $n$. The theorem is proved.
5.5 Large deviations of the maxima $\overline S_n$
As noted, the asymptotics of $P(\overline S_n \geq x)$ in the zone of moderately large deviations $x \leq \sigma_+(n) \sim \sigma_1(n)$ ($\sigma_1$ is defined in (5.4.7)) were studied in [2], where it was established that under condition (5.1.10) one has the representation (5.1.15). In the present section, as in § 5.4, we will deal with deviations $x \geq \sigma_1(n)$. We observe that condition (5.1.10) is somewhat excessive for (5.1.15) (cf. the discussion in the previous section), and so we will simply assume that property [CA] (see p. 253) is satisfied. Recall that the expected result here is that condition (5.1.9) (or $[\,\cdot\,, <]$ with $V \in Se$) will imply [CA].

Theorem 5.5.1. (i) Let the conditions (5.1.7), $[\,\cdot\,, =]$ with $V \in Se$, [D] with $q = 1$ and [CA] be satisfied. Then, in the intermediate deviation zone, one has
$$
P(\overline S_n \geq x) = 2n\, e^{-M}\,(1 + \varepsilon(x,n)), \qquad (5.5.1)
$$
where $\varepsilon(x,n) = o(1)$ uniformly (see Remark 5.4.4) for values of $x$ and $n$ such that
$$
n \to \infty, \qquad s_1 = \frac{x}{\sigma_1(n)} \to \infty, \qquad s_2 = \frac{x}{\sigma_2(n)} \to 0. \qquad (5.5.2)
$$
The quantities $M = M(x,n)$, $\sigma_1(n)$, $\sigma_2(n)$ are defined in (5.4.27)–(5.4.31), (5.4.7) and (5.4.17). The conditions $s_1 \to \infty$, $s_2 \to 0$ can be replaced by $\pi_1 \to 0$, $\pi_2 \to \infty$.

(ii) Let $n \to \infty$ and $s_2 \to \infty$ (or, equivalently, $\pi_2 \to 0$). Then, uniformly in $x$ and $n$ from that range,
$$
P(\overline S_n \geq x) = nV(x)\,(1+o(1)). \qquad (5.5.3)
$$
(iii) If condition [D] is satisfied and $s_2 \to \infty$ (or, equivalently, $n = o(z^2)$, where $z = 1/l'(x) \sim x/\alpha l(x)$) then
$$
P(\overline S_n \geq x) = nV(x)\bigg\{1 + \frac{1}{zn}\sum_{i=1}^{n-1} E\,\overline S_i
+ \frac{1}{2z^2}\bigg[\frac{(n-1)(n-2)}{2n} + \frac1n\sum_{j=1}^{n-1} E\,\overline S{}_j^{\,2}\bigg]\Big[1 + \frac{(1-\alpha)z}{x}\,(1+o(1))\Big]
+ o\big(q(x)\big) + O\big((\sqrt n/z)^3\big)\bigg\}, \qquad (5.5.4)
$$
where the remainder terms are uniform in the range of values of $x$ and $n$ stated in part (ii).
Observe that here, owing to the invariance principle and the uniform integrability of $\overline S_n/\sqrt n$ and $\overline S{}_n^{\,2}/n$ (see Lemma 4.4.9 on p. 202), we have
$$
E\,\frac{\overline S_j}{\sqrt j} \sim \sqrt{\frac{2}{\pi}}, \qquad E\,\frac{\overline S{}_j^{\,2}}{j} \sim 1, \qquad j \to \infty. \qquad (5.5.5)
$$
Hence Theorem 5.5.1 immediately implies the following.

Corollary 5.5.2. Under the conditions of Theorem 5.5.1(iii), as $n \to \infty$,
$$
P(\overline S_n \geq x) = nV(x)\Big[1 + \frac{2^{3/2}}{3\sqrt\pi}\,\frac{\sqrt n}{z}\,(1+o(1))\Big]. \qquad (5.5.6)
$$
(Compare Corollary 4.5.2, where a similar (but not identical) representation was obtained for $V \in R$.)

Remark 5.5.3. Remarks 5.4.2–5.4.6 following Theorem 5.4.1 remain valid in this setup as well. Moreover, as already observed, condition [CA] in Theorem 5.5.1 is likely to be superfluous.

Proof of Theorem 5.5.1. (i) Set $G_n := \{\overline S_n \geq x\}$. Then, cf. (5.4.37),
$$
P(\overline S_n \geq x) = P(G_n B) + P(G_n \overline B),
$$
where $B$ has the same meaning as before and $P(G_n B)$ can be bounded with the help of Corollary 5.2.2 in exactly the same way as in § 5.4 (see (5.4.38) and (5.4.39)):
$$
P(G_n B) \leq c\, V^{1+\gamma}(x), \qquad \gamma > 0.
$$
The relation (5.4.41) also remains valid, so that the main problem here consists in evaluating, for $y = \delta x$, the probabilities
$$
P(G_n \overline B_j) = P\big(G_n \overline B_j \{\overline S_{j-1} \geq x-y\}\big) + P\big(G_n \overline B_j \{\overline S_{j-1} < x-y\}\big) =: P^{(1)} + P^{(2)}. \qquad (5.5.7)
$$
Owing to Corollary 5.2.2(i) the first term, $P^{(1)}$, satisfies
$$
P^{(1)} \leq c\, n V(\delta x)\, V\Big((1-\delta)x\Big(1 - \frac12\,\pi_1((1-\delta)x)\,h\Big)\Big). \qquad (5.5.8)
$$
We have already bounded such expressions in § 5.4 (see (5.4.43)–(5.4.45)):
$$
n P^{(1)} \leq V^{1+\gamma}(x), \qquad \gamma > 0. \qquad (5.5.9)
$$
For the second term on the right-hand side of (5.5.7), we have
$$
P^{(2)} = P\big(\xi_j + Z_{j,n} \geq x,\ \xi_j \geq y,\ \overline S_{j-1} < x-y\big),
$$
where
$$
Z_{j,n} := S_{j-1} + \overline S{}_{n-j}^{(j)}, \qquad j = 0, \ldots, n-1, \qquad (5.5.10)
$$
and $\overline S{}_{n-j}^{(j)} \stackrel{d}{=} \overline S_{n-j}$ is independent of $\xi_1, \ldots, \xi_j$. Since
$$
\{S_{j-1} < x-y\} = \{\overline S_{j-1} < x-y\} \cup \{\overline S_{j-2} \geq x-y,\ S_{j-1} < x-y\},
$$
one can again use the bounds (5.5.8), (5.5.9) to write
$$
P(G_n \overline B_j) = P\big(\xi_j \geq x - Z_{j,n},\ \xi_j \geq y,\ S_{j-1} < x-y\big) + O\big(V^{1+\gamma}(x)\big)
= E\big[V(x - Z_{j,n});\ S_{j-1} + \overline S{}_{n-j}^{(j)} < x-y\big] + O\big(V^{1+\gamma}(x)\big). \qquad (5.5.11)
$$
We have obtained the same expectation as in (5.4.46) but with $S_{n-1}$ replaced by $Z_{j,n}$. If we again make use of the representation
$$
E\big[V(x - Z_{j,n});\ Z_{j,n} < x-y\big] =: E_{1,j} + E_{2,j} + E_{3,j} \qquad (5.5.12)
$$
of the expectation as a sum of three integrals of the form (5.4.47), then the evaluations of the first and third integrals will simply repeat our computations from the previous section, because

(1) $S_{j-1} \leq Z_{j,n} \stackrel{d}{\leq} \overline S_{n-1}$, and

(2) there, for bounding the distribution of $S_{n-1}$ we actually used bounds for $\overline S_{n-1}$ (see Corollary 5.2.2).

Now consider
$$
E_{2,j} = E\big[V(x - Z_{j,n});\ \sigma_- \leq Z_{j,n} < \sigma_+\big], \qquad (5.5.13)
$$
where $\sigma_\pm$ are the same as in (5.4.47) (i.e. they are given by (5.4.12) and (5.4.15)). To evaluate this integral, we need to know the asymptotics of $P(Z_{j,n} \geq t)$.

Lemma 5.5.4. Suppose that $\delta n < j \leq (1-\delta)n$ and $\sqrt n \ll t < \delta\sigma_+$ for some fixed $\delta \in (0, 1/2)$. Then
$$
P(Z_{j,n} \geq t) \sim 2P(S_{n-1} \geq t). \qquad (5.5.14)
$$
Proof. Clearly,
$$
P(Z_{j,n} \geq t) = \int_{-\infty}^{t} P(S_{j-1} \in du)\, P(\overline S_{n-j} \geq t-u) + P(S_{j-1} \geq t).
$$
Split the integral into two parts:
$$
\int_{-\infty}^{t} = \int_{-\infty}^{\sigma_-} + \int_{\sigma_-}^{t} =: I_1 + I_2,
$$
where, by virtue of Corollary 5.2.2(iii),
$$
I_1 \leq P(S_{j-1} < \sigma_-)\, P(\overline S_{n-j} \geq t - \sigma_-) \leq e^{-t^2/2(n-j)h}. \qquad (5.5.15)
$$
Since $|\sigma_-| = o(\sigma_+)$ and therefore
$$
t - u \leq t - \sigma_- < \delta\sigma_+(n)\,(1+o(1)) < \sigma_+(\delta n) \leq \sigma_+(n-j), \qquad (5.5.16)
$$
the probability $P(\overline S_{n-j} \geq t-u)$ in the integrand of $I_2$ can be estimated using the uniform approximation (5.1.15). Hence
$$
I_2 = 2\int_{\sigma_-}^{t} P(S_{j-1} \in du)\Big[1 - \Phi\Big(\frac{t-u}{\sqrt{n-j}}\Big)\Big]
\exp\Big\{-(n-j)\,\Lambda_\kappa\Big(\frac{t-u}{n-j}\Big) + \frac{(t-u)^2}{2(n-j)}\Big\}\,(1+o(1))
= 2\int_{\sigma_-}^{t} P(S_{j-1} \in du)\, P(S_{n-j} \geq t-u)\,(1+o(1)).
$$
Bounding such an integral but for limits $-\infty$ and $\sigma_-$ (which again leads to the bound (5.5.15)) and observing that an integral of this kind from $t$ to $\infty$ does not exceed $P(S_{j-1} \geq t)$, we obtain
$$
I_2 = 2\int_{-\infty}^{\infty} P(S_{j-1} \in du)\, P(S_{n-j} \geq t-u)\,(1+o(1))
+ O\Big(\exp\Big\{-\frac{t^2}{2(n-j)h}\Big\}\Big) - 2\theta\, P(S_{j-1} \geq t), \qquad 0 < \theta \leq 1.
$$
Therefore, using Corollary 5.2.2(iii) to bound $P(S_{j-1} \geq t)$, we have
$$
P(Z_{j,n} \geq t) = 2P(S_{n-1} \geq t)\,(1+o(1))
+ O\Big(\exp\Big\{-\frac{t^2}{2(n-j)h}\Big\} + \exp\Big\{-\frac{t^2}{2jh}\Big\}\Big). \qquad (5.5.17)
$$
As the value of $h$ can be arbitrarily close to 1, we can always choose it in such a way that $((1-\delta)h)^{-1} > 1 + \delta/2$. Then
$$
\frac{t^2}{2(n-j)h} - \frac{t^2}{2n} > \frac{\delta t^2}{4n}.
$$
The same inequality will hold for $t^2/2jh - t^2/2n$. Since $t \gg \sqrt n$, the remainder term in (5.5.17) will be $o\big(P(S_{n-1} \geq t)\big)$ and hence
$$
P(Z_{j,n} \geq t) = 2P(S_{n-1} \geq t)\,(1+o(1)).
$$
The lemma is proved.

We return to the evaluation of $E_{2,j}$ in (5.5.13). First we compute
$$
E'_{2,j} := E\big[V(x - Z_{j,n});\ v\sqrt n \leq Z_{j,n} < \delta\sigma_+\big], \qquad (5.5.18)
$$
where $v \to \infty$ slowly enough. This problem is quite similar to that of calculating $E_2$, see (5.4.47), (5.4.49), because, according to (5.5.17), the asymptotics of $P(Z_{j,n} \geq t)$ in the zone $\sqrt n \ll t < \delta\sigma_+$ coincide (up to the factor $2(1+o(1))$) with those of the probability $P(S_{n-1} \geq t)$, which determines the integral $E_2$ in (5.4.47) and (5.4.49). Therefore, cf. Lemma 5.4.7, we have
$$
E'_{2,j} = \frac{2(1+o(1))}{\sqrt{2\pi n}}\int_{v\sqrt n}^{\delta\sigma_+} V(x-t)\, e^{-n\Lambda_\kappa(t/n)}\, dt. \qquad (5.5.19)
$$
The evaluation of this integral repeats our calculations in (5.4.49)–(5.4.54), since (5.4.50) is still true and $t^* + c\sqrt n \leq \sigma_+$. Moreover,
$$
t^* \sim \alpha n\,\frac{l(x)}{x} = \alpha\sqrt{n\pi_2} \gg \sqrt n.
$$
Consequently we obtain, as in (5.4.54), the following asymptotics for (5.5.19):
$$
2e^{-M}\,(1+o(1)). \qquad (5.5.20)
$$
The difference between $E_{2,j}$ and $E'_{2,j}$ can be estimated without difficulty using the inequality
$$
S_{j-1} \leq Z_{j,n} \stackrel{d}{\leq} \overline S_{n-1} \qquad (5.5.21)
$$
and calculations which we have already done above, and so this part of the proof is left to the reader. The result is that for $\delta n \leq j \leq (1-\delta)n$ the integral $E_{2,j}$ also has the asymptotics (5.5.20). Furthermore, owing to (5.5.21), for all $j$ one has $E_{2,j} \leq 2e^{-M}(1+o(1))$. Summing up the above results, we obtain
$$
P(G_n) = (1+o(1))\sum_{j=0}^{n-1} E\big[V(x - Z_{j,n});\ \sigma_- \leq Z_{j,n} < \sigma_+\big] + o\big(V(x)\big)
= (1+o(1))\,\big[2(n - 2\delta n)\,e^{-M} + r_n\big],
$$
where $0 \leq r_n \leq 4\delta n\, e^{-M}(1+o(1))$. Since $\delta > 0$ can be chosen arbitrarily small, this implies the first assertion of Theorem 5.5.1.

(ii) Next we prove the asymptotics (5.5.3). As in the proof of Theorem 5.4.1(ii), we will need to split the integral
$$
E = E\big[V(x - Z_{j,n});\ Z_{j,n} < x-y\big] \qquad (5.5.22)
$$
into three parts, in a way that differs from the representation (5.5.12). We write
$E = E'_{1,j} + E'_{2,j} + E'_{3,j}$, cf. (5.4.58), where
$$
E'_{1,j} := E\big[V(x - Z_{j,n});\ Z_{j,n} < -z\big], \qquad
E'_{2,j} := E\big[V(x - Z_{j,n});\ |Z_{j,n}| \leq z\big], \qquad
E'_{3,j} := E\big[V(x - Z_{j,n});\ z < Z_{j,n} < (1-\delta)x\big]. \qquad (5.5.23)
$$
Owing to (5.5.21), the quantities $E'_{1,j}$, $E'_{3,j}$ can be bounded using the same argument as in the proof of Theorem 5.4.1 (see (5.4.59), (5.4.62) and (5.4.65)). Since for $s_2 = x/\sigma_2(n) \to \infty$ one has $z \gg \sqrt n$, and the main mass of the distribution of $Z_{j,n}$ is concentrated in a $\sqrt n$-neighbourhood of 0, we obtain as before (cf. (5.4.60)) that
$$
E'_{2,j} = V(x)\,(1+o(1)).
$$
This proves (5.5.3).

(iii) Now we will prove the asymptotic expansion (5.5.4). Again, the argument is quite similar to previous calculations. Using (5.4.67) and (5.4.68), we find that
$$
E'_{2,j} = V(x)\, E\Big[1 + \frac{Z_{j,n}}{z} + \frac{(1-\alpha)Z_{j,n}^2}{2xz}\,(1+o(1)) + \frac{Z_{j,n}^2}{2z^2} + o\big(q(x)\big) + O\Big(\frac{|Z_{j,n}|^3}{z^3}\Big);\ |Z_{j,n}| \leq z\Big]. \qquad (5.5.24)
$$
Here we will need results that are somewhat more detailed than (5.4.70). Since $Z_{j,n} \geq S_{j-1}$, the quantities
$$
T^-_{k,j} := E\big[|Z_{j,n}|^k;\ Z_{j,n} < -z\big], \quad k = 0, 1, 2, \qquad
I^- := E\big[|Z_{j,n}|^3;\ Z_{j,n} \in (-z, 0)\big]
$$
can be bounded as before:
$$
T^-_{k,j} \leq c\, n^{b/2}\, z^{k-b}, \qquad I^- = O\big(n^{3/2}\big).
$$
Further, since $z \gg \sqrt n$ (again compare our previous calculations, see (5.4.70)), one has
$$
T^+_{k,j} := E\big[Z_{j,n}^k;\ Z_{j,n} > z\big] \leq E\big[\overline S{}_{n-1}^{\,k};\ \overline S_{n-1} > z\big] = O\big(z^{k-b} n^{b/2}\big), \qquad
I^+ := E\big[Z_{j,n}^3;\ Z_{j,n} \in (0, z)\big] \leq E\,\overline S{}_{n-1}^{\,3} \leq c\, n^{3/2}.
$$
Taking into account that
$$
E Z_{j,n} = E\,\overline S_{n-j}, \qquad E Z_{j,n}^2 = j - 1 + E\,\overline S{}_{n-j}^{\,2},
$$
we finally obtain
$$
E'_{2,j} = V(x)\Big[1 + \frac{E\,\overline S_{n-j}}{z} + \frac{1}{2z^2}\big(j-1+E\,\overline S{}_{n-j}^{\,2}\big)\Big(1 + \frac{(1-\alpha)z}{x}\,(1+o(1))\Big) + o\big(q(x)\big) + O\big((\sqrt n/z)^3\big)\Big].
$$
Combining this with the previously derived estimates, we arrive at (5.5.4). The theorem is proved.
5.6 Large deviations of $\overline S_n(a)$ when $a > 0$

As already stated, the following asymptotic representation (a first-order approximation) was proved in [178] (see also [275]) for the functional
$$
\overline S_n(a) = \max_{k \leq n}(S_k - ak)
$$
in the case $a > 0$: for all $n$, as $x \to \infty$,
$$
P(\overline S_n(a) \geq x) \sim \frac1a \int_x^{x+an} V(u)\, du. \qquad (5.6.1)
$$
This representation holds true for all so-called strongly subexponential distributions $F$. It can be shown that the class $Se$ of semiexponential distributions is embedded in the class of strongly subexponential distributions (for sufficient conditions for a distribution to belong to the latter, see [178]). In the present section, we will supplement (5.6.1) with asymptotic expansions for $P(\overline S_n(a) \geq x)$ that hold as $x \to \infty$ uniformly in $n = 1, 2, \ldots$ As before, let
$$
z = z(x) = \frac{1}{l'(x)}.
$$
The argument of the function $z(\cdot)$ will only be shown when it is different from $x$.

Theorem 5.6.1. Let conditions (5.1.7) and $[\,\cdot\,, =]$ with $V \in Se$ be satisfied. Moreover, assume that $a > 0$ is fixed, condition [D] holds and
$$
x_j = x + aj, \qquad z_j = z(x_j), \qquad m = \min\{n, z\}.
$$
Then, as $x \to \infty$, uniformly in all $n$,
$$
P(\overline S_n(a) \geq x) = \sum_{j=1}^{n} V(x_j)\Big[1 + \frac{E\,\overline S_{n-j}(a)}{z_j} + \frac{1}{2z_j^2}\big(j-1+E\,\overline S{}_{n-j}^{\,2}(a)\big)\Big(1 + \frac{(1-\alpha)z_j}{x_j}\,(1+o(1))\Big)\Big] + V(x)\Big[O\Big(\frac{m^{5/2}}{z^3}\Big) + o\big(m\, q(x)\big)\Big]. \qquad (5.6.2)
$$
In particular,
$$
P(\overline S_n(a) \geq x) = \frac za\, V(x)\,\big(1 - e^{-an/z}\big)\,(1+o(1)). \qquad (5.6.3)
$$
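The closed form (5.6.3) arises from the integral in (5.6.1) because $V(x+u) \approx V(x)e^{-u/z}$ for $u = o(x)$. A numerical sanity check (not from the book) for the model tail $V(u) = e^{-u^\alpha}$; the parameter values below are arbitrary sample choices:

```python
# Illustrative check that (1/a) * integral_x^{x+an} V(u) du matches
# (z/a) * V(x) * (1 - e^{-an/z}) for V(u) = exp(-u^alpha), z = x^(1-alpha)/alpha.
import math

alpha, a, n = 0.5, 2.0, 300
x = 40_000.0
z = x**(1 - alpha) / alpha      # z = 1/l'(x) = 400 here

def V(u):
    return math.exp(-u**alpha)

# (1/a) * integral of V over [x, x + a*n] by the midpoint rule
steps = 100_000
hstep = a * n / steps
integral = sum(V(x + (i + 0.5) * hstep) for i in range(steps)) * hstep / a

closed_form = (z / a) * V(x) * (1 - math.exp(-a * n / z))
assert abs(integral - closed_form) / closed_form < 1e-2
```

Here $an/z = 1.5$ is of order one, so neither the regime $an \ll z$ (where the right-hand side is $\sim nV(x)$) nor $an \gg z$ (where it is $\sim (z/a)V(x)$) dominates, and the exponential factor is genuinely tested.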
Remark 5.6.2. Condition [D] is not needed for (5.6.3).

Remark 5.6.3. To simplify the right-hand side of (5.6.2), one could replace the sums by integrals (see Lemma 5.6.4 below) and use the relations
$$
E\,\overline S_{n-j}(a) \to E\,\overline S(a) \ \text{ as } n-j \to \infty; \qquad x_j \sim x, \quad z_j \sim z \ \text{ for } j = o(x).
$$
This, however, would introduce new errors due to the above approximations (see Corollary 5.6.5 below).

The sums in (5.6.2) and the respective integrals can be computed using the following lemma and its refinements. For a fixed $v > 0$, put
$$
S(k,r) := \sum_{j=0}^{n-1} j^k\, V^r(x + vj). \qquad (5.6.4)
$$
Then clearly
$$
S(k,r) = I(k,r)\,(1+\varepsilon_n) \leq c\, I(k,r), \qquad (5.6.5)
$$
where $\varepsilon_n \to 0$ as $n \to \infty$ and
$$
I(k,r) = \int_0^n t^k\, V^r(x+vt)\, dt = \frac{1}{v^{k+1}}\int_0^{nv} u^k\, V^r(x+u)\, du. \qquad (5.6.6)
$$
Lemma 5.6.4. Let $V \in Se$. Then the following assertions hold true.

(i)
$$
S(0,1) = \frac zv\, V(x)\big(1 - e^{-nv/z}\big)(1+o(1)), \qquad
S(1,1) = \frac{z^2}{v^2}\, V(x)\Big(1 - e^{-nv/z} - \frac{nv}{z}\, e^{-nv/z}\Big)(1+o(1)). \qquad (5.6.7)
$$
(ii) Under condition [D],
$$
S(0,1) = \frac1v \int_{x-v/2}^{x+(n-1/2)v} V(u)\, du\, \big[1 + O(z^{-2}) + o(q(x))\big]
= \frac1v \int_x^{x+nv} V(u)\, du\, \big[1 + O(z^{-1})\big] \qquad (5.6.8)
$$
and, setting $A := rvn/z$,
$$
I(k,r) = \Big(\frac{z}{rv}\Big)^{k+1} V^r(x) \int_0^{rvn/z} t^k e^{-t}\, dt\, \Big[1 + O\Big(\frac zx\Big) + o(q(x))\Big]
= k!\,\Big(\frac{z}{rv}\Big)^{k+1} V^r(x)\Big(1 - e^{-A}\sum_{j=0}^{k}\frac{A^j}{j!}\Big)\Big[1 + O\Big(\frac zx\Big) + o(q(x))\Big]. \qquad (5.6.9)
$$
In particular,
$$
I(0,1) = \frac zv\, V(x)\big(1 - e^{-vn/z}\big)\Big[1 + O\Big(\frac zx\Big) + o(q(x))\Big]. \qquad (5.6.10)
$$
Proof. (i) For $n \leq z x^\gamma$, $\alpha > \gamma > 0$, and $u \leq vn = o(x)$ one has
$$
V(x+u) = V(x)\, e^{-(u/z)(1+o(1)) + o(1)}
$$
owing to (5.1.3), and therefore
$$
S(1,1) = \sum_{j=0}^{n-1} j\, V(x+vj) = V(x)\sum_{j=0}^{n-1} j\, e^{-(vj/z)(1+o(1))+o(1)}
\sim V(x)\int_0^n t\, e^{-vt/z}\, dt = V(x)\,\frac{z^2}{v^2}\int_0^{nv/z} u\, e^{-u}\, du
= V(x)\,\frac{z^2}{v^2}\Big(1 - e^{-nv/z} - \frac{nv}{z}\, e^{-nv/z}\Big). \qquad (5.6.11)
$$
For $n > z x^\gamma$ this result will clearly remain true, since
$$
\int_{z x^\gamma}^{\infty} u\, V(x+u)\, du = z^2\, V(x)\, O\big(e^{-x^\gamma}\big).
$$
The calculation of $S(0,1)$ is even simpler.

(ii) Now assume that [D] is satisfied. Then, for $|u - x_j| < c$,
$$
V(u) = V(x_j) + (u-x_j)\,V'(x_j) + O\big((u-x_j)^2\, V(x_j)\, z_j^{-2}\big) + o\big(V(x_j)\, q(x_j)\big),
\qquad\text{where}\quad V'(x_j) \sim -\frac{V(x_j)}{z_j}.
$$
Therefore
$$
V(x_j) = \frac1v \int_{x_j - v/2}^{x_j + v/2} V(u)\, du\, \big[1 + O(z_j^{-2}) + o(q(x_j))\big], \qquad (5.6.12)
$$
$$
S(0,1) = \frac1v \int_{x-v/2}^{x+v(n-1/2)} V(u)\, du\, \big[1 + O(z^{-2}) + o(q(x))\big]. \qquad (5.6.13)
$$
This proves the first equality in (5.6.8). Now we prove the second equality in (5.6.8) and also (5.6.10). For $n \leq z x^\gamma$, $\alpha > \gamma > 0$, and $u \leq nv = o(x)$ one has
$$
V(x+u) = V(x)\, e^{-u/z}\Big[1 + O\Big(\frac{u^2}{xz}\Big) + o(q(x))\Big]. \qquad (5.6.14)
$$
Making the change of variables $u/z = t$, we obtain
$$
vI(0,1) = \int_0^{nv} V(x+u)\, du = z V(x)\int_0^{nv/z} e^{-t}\Big[1 + O\Big(\frac{t^2 z}{x}\Big) + o(q(x))\Big]\, dt
= z V(x)\big(1 - e^{-nv/z}\big)\Big[1 + O\Big(\frac zx\Big) + o(q(x))\Big].
$$
For $n > z x^\gamma$ this result will clearly remain true because
$$
\int_{z x^\gamma}^{\infty} V(x+u)\, du = z V(x)\, O\big(e^{-x^\gamma}\big).
$$
This proves (5.6.10). Since $\int_{y-v/2}^{y} V(u)\, du \sim (v/2)\,V(y)$ as $y \to \infty$ and
$$
V(x) = O\Big(\frac1z \int_x^\infty V(u)\, du\Big),
$$
the previous relation proves the second equality in (5.6.8) as well. In a similar way, we have
$$
I(k,r) = \int_0^n t^k\, V^r(x+vt)\, dt = \frac{z^{k+1}}{v^{k+1}}\, V^r(x)\int_0^{vn/z} t^k e^{-tr}\, dt\, \Big[1 + O\Big(\frac zx\Big) + o(q(x))\Big].
$$
Since
$$
\int_0^A u^k e^{-u}\, du = k!\Big(1 - e^{-A}\sum_{j=0}^{k}\frac{A^j}{j!}\Big),
$$
the previous relation implies (5.6.9). The lemma is proved.

Note that, using the inequality
$$
\int_0^A t^k e^{-t}\, dt \leq \min\Big\{k!,\ \frac{A^{k+1}}{k+1}\Big\},
$$
one obtains from Lemma 5.6.4 the bound
$$
S(k,r) \leq k!\, V^r(x)\,\min\Big\{\Big(\frac{z}{rv}\Big)^{k+1},\ \frac{n^{k+1}}{(k+1)!}\Big\}\,(1+\varepsilon_n), \qquad (5.6.15)
$$
where $\varepsilon_n \to 0$ as $n \to \infty$. In particular,
$$
S(0,1) \leq V(x)\,\min\Big\{\frac zv,\ n\Big\}\,(1+\varepsilon_n). \qquad (5.6.16)
$$
From Theorem 5.6.1 and Lemma 5.6.4 one can easily derive the following result.

Corollary 5.6.5. As $x \to \infty$, $n \to \infty$, under condition [D] one has:

(i)
$$
P(\overline S_n(a) \geq x) = \frac1a \int_{x-a/2}^{x+(n-1/2)a} V(u)\, du\, \Big[1 + \frac{E\,\overline S(a)}{z} + o\Big(\frac1z\Big) + o(q(x))\Big];
$$
(ii)
$$
P(\overline S_n(a) \geq x) = \frac za\, V(x)\big(1 - e^{-na/z}\big)\Big[1 + \frac{E\,\overline S(a)}{z} + o(q(x))\Big]
+ \frac{1}{2a^2}\, V(x)\Big(1 - e^{-na/z} - \frac{na}{z}\, e^{-na/z}\Big) + o\big(V(x)\big).
$$
Assertion (i) follows from (5.6.2) (taking into account only the first two terms of the expansion) and (5.6.8), and (ii) follows from (5.6.7) and (5.6.2) (taking into account the first three terms).

Proof of Theorem 5.6.1. Here we will put
$$
G_n := \{\overline S_n(a) \geq x\}, \qquad B_j(v) := \{\xi_j < y + jv\}, \qquad B(v) = \bigcap_{j=1}^{n} B_j(v).
$$
As usual, we have $P(G_n) = P(G_n B(v)) + P(G_n \overline{B(v)})$. A bound for the first term on the right-hand side was obtained in Theorem 5.3.1: for $v \leq (1-\delta)/r$ with $\delta \in (0,1)$ a fixed number,
$$
P(G_n B(v)) \leq c\,\min\{z^{r+1}(y),\ n^r\}\, V^r(y), \qquad (5.6.17)
$$
where, as before, $z(y) = 1/l'(y) \sim y/\alpha l(y)$ and $r = x/y \geq 1$. For the second term one has (cf. (5.4.40))
$$
P(G_n \overline{B(v)}) = \sum_{j=1}^{n} P(G_n \overline B_j(v)) + O\bigg(\Big(\sum_{j=1}^{n} V(y+vj)\Big)^2\bigg),
$$
where, by virtue of Lemma 5.6.4 (see (5.6.16)),
$$
\sum_{j=1}^{n} V(y + vj) \leq c\,\min\{z(y),\ n\}\, V(y).
$$
So, for $y = \delta x$,
$$
\Big(\sum_{j=1}^{n} V(y+vj)\Big)^2 \leq c\, m^2\, V^2(\delta x) < V^{1+\gamma}(x)
$$
for sufficiently large $x$, where $\gamma \in (0,\ 2\delta^\alpha - 1)$ can be chosen arbitrarily close to $2\delta^\alpha - 1 > 0$ when $\delta > 2^{-1/\alpha}$. Further, by (5.6.17) the probability $P(G_n B(v))$ also admits an upper bound $V^{1+\gamma}(x)$ with $\gamma \in (0,\ \delta^{\alpha-1} - 1)$. Summarizing, the above yields
$$
P(G_n) = \sum_{j=1}^{n} P(G_n \overline B_j(v)) + O\big(V^{1+\gamma}(x)\big), \qquad \gamma > 0. \qquad (5.6.18)
$$
Now consider (again in a similar way to that used in the preceding exposition) the representation
$$
P(G_n \overline B_j(v)) = P\big(G_n \overline B_j(v)\{\overline S_{j-1}(a) \geq x-y\}\big) + P\big(G_n \overline B_j(v)\{\overline S_{j-1}(a) < x-y\}\big) =: P_{1,j} + P_{2,j}. \qquad (5.6.19)
$$
By Theorem 5.3.2 the first term on the right-hand side satisfies
$$
P_{1,j} \leq c\,\min\{j,\ z(x-y)\}\, V(x-y)\, V(y+jv) \leq c\, V(\widetilde y_j)\, V(\widetilde x_j - \widetilde y_j)\,\min\{j,\ z(\widetilde x_j - \widetilde y_j)\},
$$
where $\widetilde x_j := x + jv$ and $\widetilde y_j := y + jv$. Again setting for simplicity $L(x) \equiv 1$ (as in (5.4.44)), we obtain
$$
V(\widetilde y_j)\, V(\widetilde x_j - \widetilde y_j) = \exp\big\{-(\rho^\alpha + (1-\rho)^\alpha)\,\widetilde x_j^{\,\alpha}\big\},
$$
where $\rho = \rho_j = \widetilde y_j/\widetilde x_j \geq \delta$, $\delta < 1/2$. Furthermore, if $j \leq x/v$ then we will also have $\rho \leq (1+\delta)/2 < 1$. Recall that we studied the function $\gamma(\rho) = \rho^\alpha + (1-\rho)^\alpha - 1 > 0$ on p. 261 (see (5.4.44) and (5.4.45)). As in those calculations, we obtain the inequalities
$$
V(\widetilde y_j)\, V(\widetilde x_j - \widetilde y_j) \leq V^{1+\gamma}(\widetilde x_j), \qquad
P_{1,j} \leq V^{1+\gamma}(\widetilde x_j), \qquad \gamma > 0 \ \text{ for } j \leq \frac xv.
$$
Hence, by virtue of Lemma 5.6.4 (see (5.6.15)), for $n \leq x/v$,
$$
\sum_{j=1}^{n} P_{1,j} \leq S(0, 1+\gamma) \leq c\,\min\{z,\ n\}\, V^{1+\gamma}(x). \qquad (5.6.20)
$$
However, if $j \geq n_0 = x/v$ then $\widetilde y_j \geq \widetilde y_{n_0} = (1+\delta)x$ and $P_{1,j} \leq c\, z((1-\delta)x)\, V(\widetilde y_j)$, so that
$$
\sum_{j=n_0}^{n} P_{1,j} \leq c\, z^2\, V((1+\delta)x) \leq V^{1+\gamma}(x), \qquad \gamma > 0,
$$
and therefore the inequality (5.6.20) holds for all $n$.

Now consider the second term on the right-hand side of (5.6.19). For
$$
S_j(a) := S_j - aj, \qquad Z_{j,n}(a) := S_{j-1} + \overline S{}_{n-j}^{(j)}(a),
$$
where $\overline S{}_{n-j}^{(j)}(a) \stackrel{d}{=} \overline S_{n-j}(a)$ is independent of $\xi_1, \ldots, \xi_j$, we obtain (cf. (5.5.10), (5.5.11))
$$
P_{2,j} = P\big(\overline S_n(a) \geq x,\ \xi_j \geq \widetilde y_j,\ \overline S_{j-1}(a) < x-y\big)
= P\big(S_{j-1} + \xi_j - aj + \overline S{}_{n-j}^{(j)}(a) \geq x,\ \xi_j \geq \widetilde y_j,\ \overline S_{j-1}(a) < x-y\big)
$$
$$
= P\big(\xi_j \geq x - Z_{j,n}(a) + aj,\ \xi_j \geq \widetilde y_j,\ \overline S_{j-1}(a) < x-y\big) + O\big(V^{1+\gamma}(x)\big)
= E\big[V(x - Z_{j,n}(a) + aj);\ Z_{j,n}(a) < x - \widetilde y_j + aj\big] + O\big(V^{1+\gamma}(x)\big). \qquad (5.6.21)
$$
As before, let
$$
x_j = x + ja, \qquad z_j = z(x_j). \qquad (5.6.22)
$$
Then the principal part of (5.6.21) can be written in a form similar to (5.5.22), (5.5.23):
$$
E^{(j)} = E\big[V(x_j - Z_{j,n}(a));\ Z_{j,n}(a) < x_j - \widetilde y_j\big] = E_{1,j} + E_{2,j} + E_{3,j},
$$
where
$$
E_{1,j} := E\big[V(x_j - Z_{j,n}(a));\ Z_{j,n}(a) < -z_j\big], \qquad
E_{2,j} := E\big[V(x_j - Z_{j,n}(a));\ |Z_{j,n}(a)| \leq z_j\big],
$$
$$
E_{3,j} := E\big[V(x_j - Z_{j,n}(a));\ z_j < Z_{j,n}(a) < x_j - \widetilde y_j = (1-\delta)x_j + (a\delta - v)j\big]. \qquad (5.6.23)
$$
To evaluate these expectations, we will need bounds for the distribution of $Z_{j,n}(a)$. Clearly
$$
S_{j-1} \leq Z_{j,n}(a) \stackrel{d}{\leq} S_{j-1} + \zeta, \qquad (5.6.24)
$$
where $\zeta \stackrel{d}{=} \overline S(a)$ is independent of $S_{j-1}$, and
$$
P(\zeta \geq x) \leq \widehat V(x) := c\, z\, V(x) \qquad (5.6.25)
$$
(see Theorem 5.3.2 or (5.6.3) and Lemma 5.6.4). By virtue of the first inequality in (5.6.24), the term $E_{1,j}$ admits the same bound as before (see (5.4.59)):
$$
E_{1,j} \leq V(x_j)\, O\big(j^{3/2} z_j^{-3}\big) \qquad (5.6.26)
$$
(since $b \geq 3$, one can use Chebyshev's inequality with exponent 3). In order to estimate $E_{3,j}$, we will need bounds for the distribution of $S_j + \zeta$ (see (5.6.24)). In what follows, by $\pi_1 = \pi_1(x)$ we will understand the value (5.2.8) with $n = j$, i.e. $\pi_1 = \pi_1(x) = j\, l(x)\, x^{-2}$. Recall that the relations $\pi_1(x) \leq 1$ and $x \geq \sigma_1(j)$ are equivalent to each other.

Lemma 5.6.6. We have
$$
P_j(x) := P(S_j + \zeta \geq x) \leq
\begin{cases}
c\, e^{-x^2/2hj} & \text{for } x \leq \sigma_1(j),\\
\widehat V((1-\Delta)x) & \text{for } x \geq \sigma_1(j),
\end{cases} \qquad (5.6.27)
$$
where $\Delta \leq \frac12\,\pi_1(x)\,h\,(1+o(1))$ provided that $\pi_1(x) \to 0$, and $\Delta \leq \Delta_0 < 1$ for $\pi_1 \leq 1$.

Corollary 5.6.7.
$$
P_j(x) \leq
\begin{cases}
c\, V(x)^{1/2\pi_1 h} & \text{for } \pi_1 \geq 1,\\
V(x)^{(1-\Delta)^\alpha(1+o(1))} & \text{for } \pi_1 \leq 1,
\end{cases}
$$
where $\Delta$ has the properties described in Lemma 5.6.6.

Proof. The assertions of the corollary follow from the relations
$$
e^{-x^2/(2hj)} = e^{-l(x)/(2\pi_1 h)} = V(x)^{1/(2\pi_1 h)}
\qquad\text{and}\qquad
V((1-\Delta)x) = V(x)^{(1-\Delta)^\alpha(1+o(1))}.
$$
Proof of Lemma 5.6.6. First consider the case $x \leq \sigma_1(j)$ (i.e. $\pi_1 \geq 1$). Using Corollary 5.2.2(ii) and (5.6.25) and integrating by parts, we find that
$$
P_j(x) = \int_0^\infty P(\zeta \in dt)\, P(S_j \geq x-t)
\leq P(\zeta \geq x) + \int_0^x e^{-(x-t)^2/(2jh)}\, P(\zeta \in dt)
= e^{-x^2/(2jh)} + \int_0^x e^{-(x-t)^2/(2jh)}\,\frac{x-t}{jh}\, P(\zeta \geq t)\, dt
$$
$$
\leq e^{-x^2/(2jh)} + c\int_0^x e^{-(x-t)^2/(2jh)}\,\frac{x-t}{jh}\, z(t)\, e^{-l(t)}\, dt \qquad (5.6.28)
$$
(in this calculation we have assumed for simplicity that the inequality (5.2.16) in Corollary 5.2.2 holds for all $x \leq \sigma_1(j)$; the changes one should make for $x$ such that $l(x) \leq 2\ln j$ are obvious, and are left to the reader). In the integrand of the last integral in (5.6.28) one has
$$
\frac{x-t}{jh}\, z(t) \leq \frac{x^2}{jh\, l(x)} = \frac{1}{\pi_1(x)h} \leq \frac1h, \qquad
-\frac{(x-t)^2}{2jh} = -\frac{x^2}{2jh} + p(t), \quad p(t) := \frac{xt}{jh} - \frac{t^2}{2jh}.
$$
After the change of variables $t = ux$, $u \in (0,1)$, the function $p(t) - l(t)$ takes the form
$$
p(t) - l(t) = \frac{x^2}{jh}\Big(u - \frac{u^2}{2}\Big) - u^\alpha l(x)\,(1+o(1))
= l(x)\Big[\frac{u - u^2/2}{\pi_1(x)h} - u^\alpha(1+o(1))\Big]
$$
$$
\leq \frac{l(x)}{h}\Big(u - \frac{u^2}{2} - h u^\alpha\Big)(1+o(1))
\leq -\frac{(h-1)\,l(x)\,u^\alpha}{h}\,(1+o(1)) \sim -\frac{h-1}{h}\, l(t)
$$
as $t \to \infty$. Since for $x \leq \sigma_1(j)$ one has $j/x \to \infty$ as $x \to \infty$, the part of the last integral in (5.6.28) over the interval $(j/x,\ x)$ does not exceed
$$
e^{-x^2/(2jh)} \int_{j/x}^{x} e^{-(1-1/h)\,l(t)(1+o(1))}\, dt = o\big(e^{-x^2/(2jh)}\big).
$$
Because $p(t) \leq 1/h$ for $t \in (0,\ j/x)$, the remaining part of that integral will not exceed
$$
c\, e^{-x^2/(2jh)} \int_0^{j/x} e^{1/h - l(t)}\, dt \leq c_1\, e^{-x^2/(2jh)}.
$$
This proves the first inequality of the lemma.

Now let $x \geq \sigma_1(j)$ ($\pi_1 \leq 1$). Then, for $\delta \in (0,1)$, we have by virtue of (5.6.25) and Corollary 5.2.2(i) (assuming for simplicity that the inequality (5.2.13) holds
5.6 Large deviations of S n (a) when a > 0
for all $x \ge \sigma_1(j)$) that
$$\begin{aligned} P_j(x) &\le \mathbf{P}(S_j \ge x) + \int_{-\infty}^x \mathbf{P}(S_j \in dt)\, V(x-t)\\ &\le cjV\Bigl(\Bigl(1 - \frac{\pi_1 h}{2}\Bigr)x\Bigr) + V(x) + \int_0^{\sigma_1(j)} e^{-t^2/(2jh)}\, V(x-t)\, \frac{t\, dt}{jh} + cj \int_{\sigma_1(j)}^x V\Bigl(\Bigl(1 - \frac{\pi_1(t) h}{2}\Bigr)t\Bigr) V(x-t)\, l'(t)\, dt. \end{aligned} \eqno(5.6.29)$$
Here the first two terms on the right-hand side can clearly be written, as $x \to \infty$, in the form $V((1-\Delta)x)$, where $\Delta \le \pi_1 h/2$ and $h \in (1,2)$ can still be chosen arbitrarily close to 1 (this new value of $h$ differs from that in (5.6.29)).
The third term (we will denote it by $I_3$) is an analogue of the integral $E_2$ in (5.4.49) (for $n = j$), where the function $g_\kappa(t)$ is now replaced by $g_2(t) = l(x-t) + t^2/2jh$ and the power factor $1/\sqrt{j}$ is replaced by $tz(x-t)/j$. If $\pi_1 \to 0$ ($s_1 = x/\sigma_1(j) \to \infty$) then, using an argument quite similar to that in (5.4.50)–(5.4.54), we obtain for the third term the value
$$c\, t^* z(x - t^*)\, j^{-1/2} e^{-g_2(t^*)},$$
where, as in (5.4.24) and (5.4.51), for the maximum point $t^*$ of the integrand one has $t^* \sim \alpha j l(x) h/x = \alpha \pi_1 h x$. But
$$g_2(t^*) = l(x - t^*) + \frac{(t^*)^2}{2jh} = l(x) - \frac{1}{2}\,\alpha^2 hj\, \frac{l^2(x)}{x^2}\,(1+o(1)) = l(x)\Bigl(1 - \frac{1}{2}\,\alpha^2 \pi_1 h\,(1+o(1))\Bigr).$$
Summarizing, we have obtained that the third term on the right-hand side of (5.6.29) also admits a bound of the form
$$I_3 \le V((1-\Delta)x), \qquad \Delta \le \frac{\pi_1 h}{2} = o(1) \ \text{as } \pi_1 \to 0. \eqno(5.6.30)$$
If, however, $\pi_1 \ge \pi_0 > 0$ then $x \le \pi_0^{1/(\alpha-2)}\sigma_1(j)$. In this case, we consider the following two alternatives: $t^* \le \sigma_1/2$ and $t^* > \sigma_1/2$. In the former, one has $I_3 \le \sigma_1 V(\sigma_1/2)$; in the latter,
$$I_3 \le \sigma_1 e^{-\sigma_1^2/8jh} = \sigma_1 e^{-l(\sigma_1)/8\pi_1 h} = \sigma_1 V^{1/8\pi_1 h}(\sigma_1).$$
Since $\sigma_1 \ge x\pi_0^{1/(2-\alpha)}$, in both cases
$$I_3 \le c\, x V^{\gamma_1}(x) \le V^{\gamma}(x)$$
for some fixed $\gamma > 0$ and large enough $j$ (or $x$). Thus, the first relation in (5.6.30) holds for all $\pi_1 \le 1$ and $\Delta \le \Delta_0 < 1$.
It remains to bound the last integral in (5.6.29). We have already estimated the part $\int_{\sigma_1}^{(1-\delta)x}$ of this integral (see the bound for $E_3$ in (5.4.55), (5.4.56)); this gives an upper bound $V(x)V^\gamma(\sigma_1)$. Estimating the remaining part $\int_{(1-\delta)x}^{x}$ of the integral is also not difficult. One can easily see that it is equivalent to
$$cjV\Bigl(x\Bigl(1 - \frac{\pi_1 h}{2}\Bigr)\Bigr)\, l'(x) \int_0^\infty V(u)\, du = V((1-\Delta)x), \qquad \Delta = \frac{\pi_1 h}{2}(1 + o(1)),$$
where $\Delta = o(1)$ as $\pi_1 \to 0$ and $\Delta \le \Delta_0 < 1$ for $\pi_1 \le 1$. The lemma is proved.

Now we can continue evaluating the terms in (5.6.23). If we assume for simplicity that $v \ge a\delta$ then the form of $E_{3,j}$ in (5.6.23) is almost the same as that of $E_3$ in (5.4.61) (or $E_{3,j}$ in (5.5.23)):
$$E_{3,j} \le \mathbf{E}\bigl[V(x_j - Z_{j,n}(a));\ z_j \le Z_{j,n}(a) < (1-\delta)x_j\bigr],$$
where $x_j = x + aj$, $z_j = z(x_j)$. However, here we cannot directly use the results of § 5.4 (see (5.4.65)), since in (5.4.65) we employed the condition $\pi_2 \to 0$.
To bound $E_{3,j}$ we will consider, as in § 5.4, the two alternatives $z_j \le \sigma_1(j)$ and $z_j > \sigma_1(j)$. If $z_j > \sigma_1(j)$ then we can basically repeat the computations from (5.4.61), (5.4.62); this yields
$$E_{3,j} \le V(x_j)\, V^\gamma(z_j), \qquad \gamma > 0. \eqno(5.6.31)$$
If $z_j \le \sigma_1(j)$ then $E_{3,j}$, like $E_3$ in (5.4.63), should be split into two parts:
$$E_{3,j,1} := \mathbf{E}\bigl[V(x_j - Z_{j,n}(a));\ z_j \le Z_{j,n}(a) < \sigma_1(j)\bigr]$$
and
$$E_{3,j,2} := \mathbf{E}\bigl[V(x_j - Z_{j,n}(a));\ \sigma_1(j) \le Z_{j,n}(a) < (1-\delta)x_j\bigr].$$
Since $V(z_j) \ge V(\sigma_1(j))$, the integral $E_{3,j,2}$ can clearly be bounded by the right-hand side of (5.6.31). So we only have to bound $E_{3,j,1}$. Using an argument similar to that in § 5.4 (cf. (5.4.64)), we obtain from Lemma 5.6.6 that
$$E_{3,j,1} \le V(x_j - z_j)\, e^{-z_j^2/(2jh)} + \int_{z_j}^{\sigma_1(j)} V(x_j - u)\, l'(x_j - u)\, e^{-u^2/(2jh)}\, du.$$
Since $\sigma_1(j) = o(x_j)$, we have
$$l(x_j - u) = l(x_j) - \frac{u}{z_j}(1 + o(1)), \qquad u \in [z_j, \sigma_1(j)],$$
and the integrand in the last integral will not exceed
$$V(x_j)\, \exp\Bigl\{\frac{u}{z_j}(1 + o(1)) - \frac{u^2}{2jh}\Bigr\}. \eqno(5.6.32)$$
First let $j \le x^\theta$, where $1 - \alpha < \theta < \min\{1,\ 2 - 2\alpha\}$. Then the ratio of the terms in the exponent in (5.6.32) is of order
$$\frac{u z_j}{j} \ge \frac{z_j^2}{j} = \frac{(x+j)^2}{j\, l^2(x+j)} \ge \frac{x^{2-\theta}}{l^2(x)} \to \infty,$$
since $2 - \theta > 2\alpha$. This means that the second term in the exponent in (5.6.32) will dominate, and so the value of (5.6.32), for large enough $x$, will not exceed $V(x_j)\, e^{-z_j^2/3jh}$. Thus, for $j \le x^\theta$ we have an inequality similar to (5.4.65):
$$E_{3,j} \le V(x_j)\bigl(V^\gamma(z_j) + e^{-z_j^2/3jh}\bigr), \qquad \gamma > 0. \eqno(5.6.33)$$
If $j > x^\theta$ then for $E_{3,j,1}$ we will use the obvious bound
$$E_{3,j,1} \le V(x_j - \sigma_1(j)). \eqno(5.6.34)$$
To estimate $E_{3,j,2}$, one should use the second inequality in Lemma 5.6.6, which implies that, for $v \ge \sigma_1(j)$,
$$\mathbf{P}(Z_{j,n}(a) \ge v) \le P_j(v) \le V((1-\Delta)v), \qquad \Delta = o(1) \ \text{as } \pi_1(j) \to 0.$$
Using this inequality, one can bound $E_{3,j,2}$ in exactly the same way as in our estimation of the integral $E_3$ in (5.4.55) and (5.4.56), which yields the bound $E_{3,j,2} \le V(x_j)V^\gamma(\sigma_1(j))$ for some $\gamma > 0$. Therefore, by virtue of (5.6.34) we can use the crude inequality
$$E_{3,j} \le V(x_j - \sigma_1(j)). \eqno(5.6.35)$$
Now we evaluate $E_{2,j}$ in (5.6.23). Again making use of expansions of the form (5.4.3), (5.4.67), (5.4.68), we obtain, cf. (5.4.69):
$$\begin{aligned} E_{2,j} &= V(x_j)\,\mathbf{E}\Bigl[1 + \frac{Z_{j,n}(a)}{z_j} + \frac{Z_{j,n}^2(a)}{2z_j^2} + \frac{(1-\alpha)Z_{j,n}^2(a)}{2x_j z_j}(1+o(1)) + o(q(x_j));\ |Z_{j,n}(a)| \le z_j\Bigr]\\ &= V(x_j)\Bigl[1 + \frac{\mathbf{E}Z_{j,n}(a)}{z_j} + \frac{\mathbf{E}Z_{j,n}^2(a)}{2z_j^2}\Bigl(1 + \frac{(1-\alpha)z_j}{x_j}(1+o(1))\Bigr) + R_{n,j}\Bigr], \end{aligned} \eqno(5.6.36)$$
where
$$|R_{n,j}| \le R_j^{(0)} + R_j^{(1)} + R_j^{(2)} + R_j^{(3)} + o(q(x_j)),$$
$$R_j^{(k)} \le \frac{c}{z_j^k}\,\mathbf{E}\bigl[|Z_{j,n}(a)|^k;\ |Z_{j,n}(a)| > z_j\bigr], \ k = 0,1,2, \qquad R_j^{(3)} \le z_j^{-3}\,\mathbf{E}\bigl[|Z_{j,n}(a)|^3;\ |Z_{j,n}(a)| \le z_j\bigr]. \eqno(5.6.37)$$
Since the r.v. $\zeta$ has finite moments of all orders we see that, owing to the inequality (5.6.24), the moments of $Z_{j,n}(a)$ over the set $\{|Z_{j,n}(a)| > z_j\}$ admit the same bounds as the moments of $S_j$ over the set $\{|S_j| > z_j\}$. Hence one can bound the quantities $R_j^{(k)}$, $k = 0,1,2$, in exactly the same way as $T_k$ in (5.4.70):
$$R_j^{(k)} \le j^{3/2} z_j^{-3}, \qquad k = 0,1,2; \eqno(5.6.38)$$
for $k = 3$ we have the same bound,
$$R_j^{(3)} \le z_j^{-3}\,\mathbf{E}|Z_{j,n}(a)|^3 \le c\, j^{3/2} z_j^{-3}. \eqno(5.6.39)$$
Hence the total remainder term in the sum $\sum_j E_{2,j}$, by virtue of (5.6.36)–(5.6.39) and Lemma 5.6.4 (it is obvious that (5.6.5) and the first relation in (5.6.9) remain true for fractional $k \ge 0$ as well), will not exceed
$$\sum_{j=1}^n V(x_j)\,|R_{n,j}| \le c\Biggl[\sum_{j=1}^n \frac{j^{3/2}}{z_j^3}\, V(x_j) + o\Bigl(\sum_{j=1}^n q(x_j)V(x_j)\Bigr)\Biggr] \le c_1 V(x)\Bigl[\frac{m^{5/2}}{z^3} + o\bigl(mq(x)\bigr)\Bigr], \eqno(5.6.40)$$
where $m = \min\{z, n\}$. To obtain the last relation in (5.6.40), we used the monotonicity of $q(t)$ and the inequalities
$$\sum_{j=1}^n q(x_j)V(x_j) \le q(x)\sum_{j=1}^n V(x_j) \le c\, m\, q(x)V(x).$$
Since, furthermore,
$$\mathbf{E}Z_{j,n}(a) = \mathbf{E}\overline{S}_{n-j}(a), \qquad \mathbf{E}Z_{j,n}^2(a) = j - 1 + \mathbf{E}\overline{S}{}^2_{n-j}(a),$$
the sum $\sum_{j=1}^n E_{2,j}$ has exactly the same form as the right-hand side of (5.6.2).
To complete the proof of the theorem, it suffices to show that the contributions of all the remaining terms in the representation for $\mathbf{P}(G_n)$, which we obtain from (5.6.18), (5.6.19), (5.6.21) and (5.6.23), will be 'absorbed' by the remainder terms on the right-hand side of (5.6.2).
For the sum $\sum_{j=1}^n P_{1,j}$, this follows from (5.6.20). That the remainder term $O(V^{1+\gamma}(x))$ from (5.6.21) is negligible is obvious. Further, comparison of the bounds (5.6.26) and (5.6.38) shows that the total contribution of the terms $E_{1,j}$ does not exceed the right-hand side of the inequality (5.6.40), which bounds the total remainder term for $\sum_{j=1}^n E_{2,j}$. Similarly, a comparison of the bounds (5.6.33) and (5.6.38) shows that the same assertion is true for the total contribution from the terms $E_{3,j}$, $j \le m_\theta := \min\{n, x^\theta\}$. Finally, the contribution of the terms $E_{3,j}$ with $j > m_\theta$ is bounded, by virtue of (5.6.35) and Lemma 5.6.4, by the sum
$$\sum_{j > m_\theta} E_{3,j} \le \sum_{j > m_\theta} V(x + aj - \sigma_1(j)) \le \sum_{j > m_\theta} V(x + aj/2) \le c\, z\, V(x + a x^\theta/2) \le V(x)\, e^{-x^\psi}, \qquad \psi > 0.$$
Here the last inequality follows from the relation
$$l\Bigl(x + \frac{a x^\theta}{2}\Bigr) = l(x)\Bigl(1 + \frac{\alpha a x^{\theta-1}}{2}(1 + o(1))\Bigr),$$
where, since $\theta > 1 - \alpha$, one has
$$\frac{\alpha a x^{\theta-1}}{2}\, l(x) \ge x^\gamma, \qquad \gamma > 0.$$
Theorem 5.6.1 is proved.
Observe that the monotonicity of $q(t)$, assumed in condition [D], was used only in (5.6.40). We have not needed it in the theorems of §§ 5.4 and 5.5.

5.7 Large deviations of $\overline{S}_n(-a)$ when $a > 0$

In this section, we will study the asymptotics of the probability
$$\mathbf{P}(\overline{S}_n(-a) - an \ge x) \quad \text{as } x \to \infty, \eqno(5.7.1)$$
where
$$\overline{S}_n(-a) = \max_{k \le n}(S_k + ak), \qquad a > 0, \quad \mathbf{E}\xi = 0.$$
Possible approaches to solving this problem were mentioned in § 3.6. One approach is based on the observation that for $a > 0$ the r.v. $\overline{S}_n(-a)$ will, in a certain sense, 'almost coincide' with $S_n + an$. More precisely,
$$S_n \le \overline{S}_n(-a) - an = S_n + \zeta_n, \eqno(5.7.2)$$
where, for $\xi_i(-a) = \xi_i + a$ and $S_n(-a) = S_n + an$, the r.v.
$$\zeta_n := \max\bigl\{0,\ -\xi_n(-a),\ -\xi_n(-a) - \xi_{n-1}(-a),\ \ldots,\ -S_n(-a)\bigr\} \eqno(5.7.3)$$
has the same distribution as
$$\max_{k \le n}\bigl(-S_k(-a)\bigr) = -\min_{k \le n} S_k(-a)$$
and is dominated in distribution by the proper r.v.
$$\zeta := \sup_{k \ge 0}\bigl(-S_k(-a)\bigr) = -\inf_{k \ge 0}(S_k + ak); \eqno(5.7.4)$$
moreover,
$$\mathbf{E}\zeta^{b-1} < \infty \quad \text{if} \quad \mathbf{E}|\xi_j|^b < \infty. \eqno(5.7.5)$$
However, evaluating the large deviation probabilities in question on the basis of (5.7.2) is difficult and inconvenient for at least two reasons: (1) the r.v.'s $\zeta_n$ and $S_n$ are dependent; (2) bounds for probabilities of large deviations of $\zeta_n$ require additional conditions on the left distribution tails of the $\xi_i$, and these conditions prove to be superfluous for the problems we consider here.
Another approach uses the cruder inequalities
$$S_n \le \overline{S}_n(-a) - an \le \overline{S}_n. \eqno(5.7.6)$$
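The inequalities (5.7.6) are pathwise: they hold for every realization of the walk, not just in distribution, so they can be checked directly on a simulated trajectory. The following sketch (our illustration, not from the book; the Gaussian jump distribution and the value a = 0.5 are arbitrary choices) does exactly that:

```python
# Sanity check of the pathwise inequalities (5.7.6):
#   S_n <= max_{k<=n}(S_k + a k) - a n <= max_{k<=n} S_k   for a > 0.
import numpy as np

rng = np.random.default_rng(0)
a, n = 0.5, 1000
xi = rng.standard_normal(n)               # i.i.d. jumps with zero mean
S = np.cumsum(xi)                         # partial sums S_1, ..., S_n
k = np.arange(1, n + 1)

S_n = S[-1]                               # S_n
max_Sk = S.max()                          # max of S_k over k <= n
max_shifted = (S + a * k).max() - a * n   # max of (S_k + a k) minus a n

print(S_n <= max_shifted <= max_Sk)       # True for every path
```

The middle quantity dominates $S_n$ because the maximum over $k$ is at least the term $k = n$, and is dominated by $\max_k S_k$ because $ak \le an$ for $k \le n$.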
These inequalities and results already derived give the ‘correct’ asymptotics of (5.7.1) in the zone x σ2 (n) and an ‘almost correct’ asymptotics (up to a factor 2) in the intermediate zone σ1 (n) x σ2 (n). In this section we will use the same approach as in §§ 5.2–5.4. We will assume that condition [D] is satisfied, and, for the intermediate deviation zone, the following condition also (cf. conditions [CA], [CA]). [CA ∗] The uniform approximation given by the right-hand side of the relation (5.1.11) holds for P(S n (−a) − an x) in the zone 0 < x < σ+ (n) = σ1 (n) (1 − ε(n)), where ε(n) → 0 as n → ∞. It was established in [2] that condition [CA ∗] will hold provided that (5.1.10) is satisfied. As before, it is likely that, under conditions (5.1.1), (5.1.2) and for large enough b > 2 (see (5.1.7)), condition [CA ∗] will be redundant. Following the same exposition path as in §§ 5.2–5.4, it will be noticed that, owing to inequality (5.7.2), the exposition undergoes no serious changes. So we will present below only a sketch of the proof of the following main assertion. Theorem 5.7.1. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied. Then the following statements hold true. (i) The assertions of Theorem 5.4.1(i), (ii) remain true if one substitutes in them S n (−a) − an for Sn and condition [CA ∗] for [CA].
(ii) Let condition [D] hold and $s_2 = x/\sigma_2(n) \to \infty$ (i.e. $\pi_2 = nw_2(x) \to 0$). Then, uniformly in $x$ and $n$ from that range,
$$\begin{aligned} \mathbf{P}(\overline{S}_n(-a) - an \ge x) &= nV(x)\Biggl[1 + \frac{1}{zn}\sum_{j=1}^n \mathbf{E}\zeta_j + \frac{n-1}{2z^2} + \frac{(1-\alpha)(n-1)}{2xz}(1+o(1)) + O\Bigl(\frac{n^{3/2}}{z^3}\Bigr) + o(q(x))\Biggr]\\ &= nV(x)\Biggl[1 + \frac{\mathbf{E}\zeta}{z} + \frac{n}{2z^2} + \frac{(1-\alpha)n}{2xz}(1+o(1)) + O\Bigl(\frac{n^{3/2}}{z^3}\Bigr) + o\bigl(z^{-1} + q(x)\bigr)\Biggr], \end{aligned} \eqno(5.7.7)$$
P Gn B j {S j−1 (−a) − a(j − 1) < x − y} , B j = {ξj y}, which, up to the respective higher-order correction terms (owing to (5.7.2), (5.7.6), all the bounds we used before remain valid), could be written as (j) P Sj−1 + ξj + S n−j (−a) − a(n − j) x,
(j) Sj−1 + S n−j (−a) − a(n − j) < x − y # $ = P ξj + Zj,n x, Zj,n < x − y = E V (x − Zj,n ); Zj,n < x − y , (5.7.8) where (j)
Zj,n := Sj−1 + S n−j (−a) − a(n − j) (j)
d
and the r.v. S n−j (−a) = S n−j (−a) is independent of ξ1 , . . . , ξj . By an argument similar to that in § 5.5 (see Lemma 5.5.4), we can verify that, in the deviation zone x < δσ+ , one has for Zj,n the same approximations, of the form (5.1.11), as those that hold for Sn (all the evaluations remain true, owing to (5.7.6)). Therefore the computation of the asymptotics of P(Gn ) leads to the same result as in Theorem 5.4.1. This proves the first assertion of the theorem.
When deriving asymptotic expansions for probability (5.7.1), there will be some changes compared with the proof of Theorem 5.4.1. We will have to use the representation
$$Z_{j,n} \stackrel{d}{=} S_{n-1} + \zeta^*_{n-j},$$
where the $\zeta^*_{n-j} \stackrel{d}{=} \zeta_{n-j}$ are defined, in the appropriate way, by the last $n-j$ summands in the sum $S_{n-1}$. Further, one should use, as before, the decompositions (5.4.67) and (5.4.68). We obtain, cf. (5.4.69), (5.5.24), (5.6.36),
$$\mathbf{P}(G_n) = nV(x)\Biggl[1 + \frac{1}{zn}\sum_{j=1}^n \mathbf{E}\zeta_{n-j} + \frac{1}{2z^2 n}\sum_{j=1}^n \mathbf{E}Z^2_{j,n}\Bigl(1 + \frac{(1-\alpha)z}{x}(1+o(1))\Bigr) + R_n(x)\Biggr], \eqno(5.7.9)$$
where $|R_n(x)| \le c\, n^{3/2} z^{-3}$. Next, we use the relations
$$\mathbf{E}Z_{j,n} = \mathbf{E}(S_{n-1} + \zeta^*_{n-j}) = \mathbf{E}\zeta_{n-j} \to \mathbf{E}\zeta \quad \text{as } n - j \to \infty;$$
$$\mathbf{E}Z^2_{j,n} = (n-1) + 2\,\mathbf{E}S_{n-1}\zeta^*_{n-j} + \mathbf{E}\zeta^2_{n-j}.$$
Here $\mathbf{E}S_{n-1}\zeta^*_{n-j} = o(\sqrt{n})$ because $S_{n-1}/\sqrt{n}$ and $\zeta^*_{n-j}$ are asymptotically independent and, for all $n$,
$$\mathbf{E}\zeta_n^2 \le \mathbf{E}\zeta^2 < \infty \eqno(5.7.10)$$
owing to (5.7.5) since $b \ge 3$. Therefore
$$\mathbf{E}Z^2_{j,n} = n + o(\sqrt{n}). \eqno(5.7.11)$$
Substituting this in (5.7.9), we obtain the desired assertion. The theorem is proved.
5.8 Integro-local and integral theorems on the whole real line

In this section, we present integro-local theorems on the asymptotic behaviour of $\mathbf{P}(S_n \in \Delta[x))$ and integral theorems on the asymptotics of $\mathbf{P}(S_n \ge x)$, $x \to \infty$, that are valid on the whole real line, i.e. in all deviation zones, including the boundary zones. In particular, these results improve the integral theorems for $S_n$, both those obtained in the preceding sections and those obtained in the literature cited in § 5.1. The complexity of the proofs of the above-mentioned results, which were obtained in the recent paper [73], renders them somewhat beyond the level of the present monograph. So we will present the theorems below without proofs; these can be found in [73].
When studying the asymptotics of $\mathbf{P}(S_n \ge x)$ in § 5.4, we found that, in the case $s_1 = x/\sigma_1(n) \to \infty$, an important role is played by the function
$$M(x,n) = \min_{0 \le t \le x} M(x,t,n), \qquad M(x,t,n) = l(x-t) + n\Lambda_\kappa\Bigl(\frac{t}{n}\Bigr)$$
(see Theorem 5.4.1 and (5.4.27), (5.1.12)), whose properties were studied in § 5.4.1. To study the asymptotics of $\mathbf{P}(S_n \ge x)$ in the transitional zone $x = s_1\sigma_1(n)$ for a fixed $s_1 \in (0,\infty)$, we will need the function
$$M_\delta(x,n) := \min_{0 \le t \le (1-\delta)x} M(x,t,n) \quad \text{with} \quad \delta = \delta_\alpha := \frac{1-\alpha}{2-\alpha},$$
so that $M(x,n) = M_0(x,n)$. It turns out, however, that, as $s_1 \to \infty$, the asymptotic properties of $M_\delta$ and $M_0$ coincide and therefore do not depend on $\delta$. So, by the function $M$ in the discussion in § 5.4 one can understand the function $M_\delta$, and everywhere in what follows we can assume that $\delta = \delta_\alpha$ and denote the function $M_\delta$ with $\delta = \delta_\alpha$ by $M$. This will lead to no confusion.
The asymptotics of $M(x,n)$ in the zone $x = s_1\sigma_1(n)$ with a fixed $s_1$ are 'transitional' from the asymptotics of $l(x)$ to that of $x^2/2n$; these asymptotics correspond respectively to the cases $s_1 \to \infty$ and $s_1 \to 0$. In this connection, it is of interest to consider the ratio limits (recall that $\delta = \delta_\alpha$)
$$G_1(s_1) := \lim_{n\to\infty} \frac{M(x,n)}{l(x)} \quad \text{and} \quad G_2(s_1) := \lim_{n\to\infty} \frac{2nM(x,n)}{x^2}, \eqno(5.8.1)$$
where $x = s_1\sigma_1(n)$. Since for $t = o(n)$ one has
$$n\Lambda_\kappa\Bigl(\frac{t}{n}\Bigr) \sim \frac{t^2}{2n}, \eqno(5.8.2)$$
we obtain for $x = s_1\sigma_1(n)$ and $n \to \infty$ that, putting $t := px$ for $p \in [0, 1-\delta]$ and using (5.2.3) and the relation $l(\sigma_1(n)) \sim \sigma_1^2(n)/n$, one has
$$\begin{aligned} M(x,n) &\sim \min_{0 \le p \le 1-\delta}\Bigl[(1-p)^\alpha l(x) + \frac{p^2 x^2}{2n}\Bigr]\\ &= l(x) \min_{0 \le p \le 1-\delta}\Bigl[(1-p)^\alpha + \frac{p^2 s_1^2 \sigma_1^2(n)}{2n\, l(s_1\sigma_1(n))}\Bigr] \sim l(x) \min_{0 \le p \le 1-\delta}\Bigl[(1-p)^\alpha + \frac{p^2 s_1^{2-\alpha}}{2}\Bigr] \end{aligned}$$
and so
$$M(x,n) \sim \frac{x^2}{2n} \min_{0 \le p \le 1-\delta}\bigl[2s_1^{\alpha-2}(1-p)^\alpha + p^2\bigr].$$
From here it follows that the limits in (5.8.1) are equal to the values of
$$G_i(s) = \min_{0 \le p \le 1-\delta} H_i(s,p), \qquad i = 1,2, \eqno(5.8.3)$$
at $s = s_1$, where
$$H_1(s,p) := (1-p)^\alpha + \frac{1}{2}\, p^2 s^{2-\alpha}, \qquad H_2(s,p) := 2s^{\alpha-2}(1-p)^\alpha + p^2.$$
Now we will study the properties of the functions $G_i(s)$. Denote by $p(s)$ the maximal value of $p \in [0, 1-\delta]$ at which the minimum in (5.8.3) is attained, so that $G_i(s) = H_i(s, p(s))$ for $i = 1,2$ (clearly, $p(s)$ does not depend on $i$). The quantity
$$s_0 := \frac{2-\alpha}{(2-2\alpha)^{(1-\alpha)/(2-\alpha)}} \eqno(5.8.4)$$
will play an important role in what follows.

Lemma 5.8.1. If $\mathrm{Var}\,\xi = 1$ then the functions $G_1(s)$, $G_2(s)$ and $p(s)$ have the following properties.
(i) The functions $G_1(s)$ and $G_2(s)$ are related to each other by
$$G_1(s) = \frac{s^{2-\alpha}}{2}\, G_2(s);$$
the function $G_2(s)$ is decreasing, while $G_1(s)$ is increasing for $s > 0$. Moreover,
$$G_2(s_0) = 1, \qquad G_2(s) \to \infty \ \text{as } s \to 0, \qquad G_1(s) \uparrow 1 \ \text{as } s \to \infty. \eqno(5.8.5)$$
(ii) The function $p(s)$ is continuous and positive for $s \ge 0$ and decreasing for $s \ge s_0$, and
$$p(s_0) = \frac{\alpha}{2-\alpha}, \qquad p(s) \sim \frac{\alpha}{s^{2-\alpha}} \ \text{as } s \to \infty.$$
If we waived the condition $d \equiv \mathrm{Var}\,\xi = 1$ then, for $t = o(n)$, instead of (5.8.2) we would have
$$n\Lambda_\kappa\Bigl(\frac{t}{n}\Bigr) \sim \frac{t^2}{2nd},$$
and instead of (5.8.1) we would obtain, for $x = s_1 d^{1/(2-\alpha)}\sigma_1(n)$ and any fixed $s_1 > 0$, the relation
$$M(x,n) \sim G_1(s_1)\, l(x) \sim \frac{x^2}{2nd}\, G_2(s_1).$$
Equally obvious changes consequent on replacing $\mathrm{Var}\,\xi = 1$ by $\mathrm{Var}\,\xi = d$ could be made in what follows as well. So, from here on, we will assume in the present section, without loss of generality, that $\mathrm{Var}\,\xi = 1$ unless stipulated otherwise.
Now we will state the main result on crude (logarithmic) asymptotics in the integral theorem setup.
Theorem 5.8.2. Let $F \in \mathcal{Se}$. Then, for $x \ge \sqrt{n}$, $x = s_1\sigma_1(n)$,
$$\ln \mathbf{P}(S_n \ge x) \sim \begin{cases} -x^2/2n & \text{if } x \gg \sqrt{n},\ s_1 \le s_0,\\ -G_1(s_1)\, l(x) & \text{if } s_1 \ge s_0, \end{cases} \eqno(5.8.6)$$
where $G_1(s_1)l(x) \sim G_2(s_1)x^2/2n$ for each fixed $s_1 > 0$.

It follows from Theorem 5.8.2 that the 'point' $x = s_0\sigma_1(n)$ separates the Cramér deviation zone from the 'non-Cramér' zone. One can also show that there exists a common representation for the right-hand side of (5.8.6) in the form $-M_0(x,n)$, since
$$M_0(x,n) \sim \begin{cases} x^2/2n & \text{if } x \gg \sqrt{n},\ s_1 \le s_0,\\ G_1(s_1)\, l(x) & \text{if } s_1 \ge s_0, \end{cases}$$
i.e. for $-M_0(x,n)$ one has the same representation as for the left-hand side of (5.8.6).
Next we will state an 'exact' integral theorem. Put
$$c_1(s) := \Bigl(\frac{1}{2}\, H_2''(s, p(s))\Bigr)^{-1/2} \quad \text{for } s \ge s_0,$$
where $s_0$ is defined in (5.8.4) and $H_2''(s,p)$ is the second derivative of the function $H_2(s,p)$ with respect to $p$, and let
$$c_1(s) := c_1(s_0) \quad \text{for } s \le s_0.$$
√ √ 0 n Φ −x/ n e−Λκ (x/n)n ∼ √ e−Λκ (x/n)n x 2π
(5.8.7)
√ n and c1 (s1 ) → 1 as s1 → ∞. In particular, for any fixed ε > 0, ⎧ √ √ 0 if x n, s1 s0 − ε, ⎨ Φ −x/ n e−Λκ (x/n)n P(Sn x) ∼ ⎩ nc1 (s1 ) e−M (x,n) if s1 s0 + ε.
for x
294
Random walks with semiexponential jump distributions
In the ‘extreme’ deviation zone x = s2 σ2 (n) → ∞, s2 c = const > 0, one has P(Sn x) ∼ nc2 (s2 )e−l(x) , where
c2 (s) := exp
α2 2s2−2α
(5.8.8)
" →1
as
s → ∞.
Theorem 5.8.3 is an analogue of the results of [238], which hold on the whole √ real line (the zone of normal deviations x < n, n → ∞, is covered by the central limit theorem), but it has a simpler form. It is also an analogue of the √ representation (4.1.2), which is uniform over the range x n and holds for regularly varying tails F+ (t). Now we will turn to integro-local theorems. Here we will need the following additional condition. [D1 ]
For non-lattice distributions, for any fixed Δ > 0, as t → ∞,
l(t) ; t for arithmetic distributions, for integer-values k → ∞, l(t + Δ) − l(t) ∼ Δα
l(k + 1) − l(k) ∼ α
l(k) . k
The function αl(t)/t in condition [D1 ] plays the role of the derivative l (t) and is asymptotically equivalent to it, provided that the latter exists and is sufficiently regular. Observe that, in the zone of deviations that are close to normal, this condition will not be required (see Theorems 5.8.4 and 5.8.5 below) while in the large deviation zone it could be relaxed (cf. § 4.8). For x = s1 σ1 (n), put m(x) := c(m) (s1 )α
l(x) , x
where c(m) (s1 ) := (1 − p(s1 ))α−1 for s1 s0 , and c(m) (s1 ) := c(m) (s0 ) for s1 s0 . By Lemma 5.8.1(ii), l(x) x as s1 → ∞. Furthermore, we can show that, for any fixed Δ > 0, in the zone s1 s0 one has p(s1 ) → 0,
c(m) (s1 ) → 1,
m(x) ∼ α
M (x + Δ, n) − M (x, n) ∼ Δm(x), so that the function m(x) plays the role of the derivative of M (x, n) with respect to x. First we will state an integro-local theorem in the non-lattice case.
295
5.8 Integro-local and integral theorems on the whole line
Theorem 5.8.4. Assume that F ∈ Se and that the conditions E|ξ|κ < ∞ and [D1 ] are met, where κ is defined in Theorem 5.8.3. Then, for any fixed Δ > 0, √ uniformly in x = s1 σ1 (n) → ∞, x n, Δ e−Λκ (x/n)n + Δnm(x)c1 (s1 ) e−M (x,n) , P Sn ∈ Δ[x) ∼ √ 2πn
(5.8.9)
where one has m(x)c1 (s1 ) ∼ αl(x)/x as s1 → ∞. In particular, for any fixed ε > 0, ⎧ √ ⎪ −Λκ (x/n)n ⎪ Δ if x n, s1 s0 − ε, ⎨ √2πn e P Sn ∈ Δ[x) ∼ ⎪ ⎪ ⎩ Δnm(x)c1 (s1 ) e−M (x,n) if s1 s0 + ε. In the ‘extreme’ deviation zone x = s2 σ2 (n) → ∞, s2 c = const > 0, we have l(x) P(Sn ∈ Δ[x)) ∼ Δnα c2 (s2 )e−l(x) , (5.8.10) x where the ci (si ), i = 1, 2, are defined in Theorem 5.8.3. If x = O(σ2 (n)) then condition [D1 ] is superfluous. Now consider the arithmetic case. We cannot assume here without losing generality that a = Eξ = 0, d = Var ξ = 1. Hence the following theorem is stated for arbitrary a and d. Theorem 5.8.5. Suppose that F ∈ Se and that the conditions E|ξ|κ < ∞ and [D1 ] are satisfied, where κ is defined in Theorem 5.8.3. Then, uniformly √ in integer-valued x = s1 b2/(2−α) σ1 (n) → ∞, x n, the following holds: 1 P Sn − an = x ∼ √ e−Λκ (x/n)n + nm(x)c1 (s1 )e−M (x,n) , 2πnd where m(x)c1 (s1 ) ∼ αl(x)/x as s1 → ∞. In particular, for any fixed ε > 0, ⎧ √ 1 ⎪ −Λκ (x/n)n ⎪√ if x n, s1 s0 − ε, ⎨ 2πnd e P Sn − an = x ∼ ⎪ ⎪ if s1 s0 + ε. ⎩ nm(x)c1 (s1 ) e−M (x,n) In the ‘extreme’ deviation zone x = s2 σ2 (n) → ∞, s2 c = const > 0, for integer-valued x one has l(x) P Sn − an = x ∼ nα c2 (s2 )e−l(x) , x
(5.8.11)
where ci (si ), i = 1, 2, are defined in Theorem 5.8.3. If x = O(σ2 (n)) then condition [D1 ] is superfluous. Note that, in the deviation zone x σ2 (n) Theorem 5.8.3 will, generally speaking, not follow from Theorems 5.8.4 and 5.8.5 (indeed, Theorem 5.8.3 does
296
Random walks with semiexponential jump distributions
not contain condition [D1 ]). Theorems 5.8.4 and 5.8.5 are analogues of Theorems 4.7.5 and 4.7.6. Further, observe that the case when n does not grow is not excluded in Theorems 5.8.2–5.8.5. In this case s2 := x/σ2 (n) → ∞, c2 (s2 ) → 1 as x → ∞, and the assertions (5.8.8), (5.8.10) and (5.8.11), with c2 (s2 ) replaced on their righthand sides by 1, can also be obtained from the known properties of subexponential and locally subexponential distributions (see e.g. §§ 1.2 and 1.3).
5.9 Additivity (subexponentiality) zones for various distribution classes

We have seen that, under the conditions of Chapters 2–5, the following subexponentiality property holds for the sums $S_n$ of i.i.d. r.v.'s $\xi_1, \ldots, \xi_n$ with $\mathbf{E}\xi = 0$: for any fixed $n$, as $x \to \infty$,
$$\mathbf{P}(S_n \ge x) \sim nV(x), \qquad V(x) = \mathbf{P}(\xi \ge x). \eqno(5.9.1)$$
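For $n = 2$, the relation (5.9.1) can be observed numerically by computing the distribution of $S_2$ exactly via discrete convolution for a regularly varying jump distribution. The sketch below (our illustration, not from the book; the tail index $\alpha = 1.5$, the truncation level and the evaluation point are arbitrary choices, and truncation makes this only an approximation of a genuinely heavy-tailed law) shows that the ratio $\mathbf{P}(S_2 \ge x)/2V(x)$ is close to 1:

```python
# Subexponentiality check for n = 2: P(S_2 >= x) is close to 2 V(x)
# for a (truncated) distribution with tail V(t) of order t^{-1.5}.
import numpy as np

N = 5000                          # truncation level of the discretized law
k = np.arange(1, N + 1)
pmf = k ** (-2.5)                 # P(xi = k) proportional to k^{-(alpha+1)}, alpha = 1.5
pmf /= pmf.sum()

pmf2 = np.convolve(pmf, pmf)      # exact distribution of S_2; index j carries the value j + 2
x = 500
V_x = pmf[k >= x].sum()           # V(x) = P(xi >= x)
S2_tail = pmf2[x - 2:].sum()      # P(S_2 >= x)
ratio = S2_tail / (2 * V_x)
print(ratio)                      # close to 1
```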
This property could also be referred to as tail additivity with respect to the addition of random variables. This term is more justified in the case of non-identically distributed independent summands $\xi_1, \ldots, \xi_n$. In this case, instead of (5.9.1) one has (see Chapters 12 and 13)
$$\mathbf{P}(S_n \ge x) \sim \sum_{j=1}^n V_j(x),$$
where $V_j(x) = \mathbf{P}(\xi_j \ge x)$.
It follows from the results of Chapters 2–5 that the relation (5.9.1) remains valid also for all $n \le n(x)$, where $n(x) \to \infty$ as $x \to \infty$. If $n(x)$ is an 'unimprovable' value with this property then it could be called the boundary of the additivity (subexponentiality) zone.
When considering the whole class $\mathcal{S}$ of subexponential distributions, one can only say about the boundary $n(x)$ of the additivity zone for a distribution $F \in \mathcal{S}$ that it ought to be a sufficiently slowly growing function of $x$ (that depends on $F$). On the one hand, the more regular the behaviour of $F$ at infinity, the greater the boundary $n(x)$. On the other hand, there exists a trivial upper bound
$$n(x) < \frac{1}{V(x)}, \eqno(5.9.2)$$
dictated by the fact that $nV(x)$ approximates a probability and therefore one must have $nV(x) \le 1$. It will be seen in what follows that, in a number of cases, this trivial bound is essentially attained.
For the distribution classes $\mathcal{R}(\alpha)$ and $\mathcal{Se}(\alpha)$, which we considered in Chapters 1–5 (see pp. 11, 29), the boundaries of the additivity zones can be found explicitly. It turns out that we always have
$$n(x) = x^\gamma L_n(x), \eqno(5.9.3)$$
where $L_n(x)$ is an s.v.f. and $\gamma > 0$. More precisely, the following assertions hold true.
(i) If $F \in \mathcal{R}(\alpha)$ and condition $[<, =]$ with $\alpha \in (0,2)$ is satisfied then, under suitable conditions on the majorant $W$ of the left distribution tail (for example, $W(t) < cV(t)$), one has
$$n(x) = \frac{x^\alpha \varepsilon(x)}{L(x)}, \eqno(5.9.4)$$
where $\varepsilon(x)$ is an arbitrary s.v.f. such that $\varepsilon(x) \to 0$ as $x \to \infty$ and $L(x)$ is the s.v.f. from the representation $V(x) = x^{-\alpha}L(x)$, so that $n(x) = o\bigl(V^{-1}(x)\bigr)$. This means that the trivial bound (5.9.2) is essentially attained in this case. The additivity boundary $n(x)$ cannot assume values comparable with $V^{-1}(x)$, since for $n \sim cV^{-1}(x)$ the distribution of $S_n/V^{(-1)}(1/n)$ is approximated by a stable law. (If condition $[\mathbf{R}_{\alpha,\rho}]$ holds for $\rho > -1$ then the scaling sequence $F^{(-1)}(1/n)$ from § 1.5 will differ from $V^{(-1)}(1/n)$ just by a constant factor; if $n \gg V^{-1}(x)$ then $\mathbf{P}(S_n \ge x) \sim F_{\alpha,\rho,+}(0)$.) Thus, for $F \in \mathcal{R}(\alpha)$ with $\alpha \in (0,2)$, in the representation (5.9.3) one has
$$\gamma = \gamma(F) = \alpha, \qquad L_n(x) = \frac{\varepsilon(x)}{L(x)}.$$
(ii) If $F \in \mathcal{R}(\alpha)$ for $\alpha > 2$ and $\mathbf{E}\xi^2 < \infty$ then, from the discussion in Chapter 4,
$$n(x) = \frac{x^2}{c \ln x} \eqno(5.9.5)$$
for any $c > \alpha - 2$. If $n$ assumes values in the vicinity of $x^2/(\alpha-2)\ln x$ then for the asymptotics of $\mathbf{P}(S_n \ge x)$ we will have a 'mixed' approximation, given by the sum of $nV(x)$ and $1 - \Phi(x/\sqrt{nd})$, where $\Phi$ is the standard normal distribution function and $d = \mathbf{E}\xi_1^2$ (see (4.1.2)). For $n > x^2/c_1\ln x$, $c_1 < \alpha - 2$ (in particular, for $n \sim cx^2$), we will have the normal approximation. If $n \gg x^2$ then $\mathbf{P}(S_n \ge x) \to 1/2$. Hence, in the case $F \in \mathcal{R}(\alpha)$, $\alpha > 2$, in the representation (5.9.3) one can let
$$\gamma = \gamma(F) = 2, \qquad L_n(x) = \frac{1 + \varepsilon(x)}{(\alpha-2)\ln x}, \eqno(5.9.6)$$
where $\varepsilon(x) \to 0$ as $x \to \infty$.
(iii) If $F \in \mathcal{Se}(\alpha)$, $\alpha \in (0,1)$, then, by virtue of the results of the present chapter (see Theorem 5.4.1(ii)),
$$n(x) = \frac{x^{2-2\alpha}\varepsilon(x)}{L^2(x)},$$
where $\varepsilon(x)$ is an arbitrary s.v.f. vanishing at infinity and $L(x)$ is the s.v.f. from the representation
$$\mathbf{P}(\xi \ge t) = e^{-l(t)}, \qquad l(t) = t^\alpha L(t), \quad \alpha \in (0,1).$$
If $n$ is comparable with or greater than $x^{2-2\alpha}L^{-2}(x)$ but satisfies the relation $n \ll x^{2-\alpha}/L(x)$ then there is another approximation for $\mathbf{P}(S_n \ge x)$ (see Theorem 5.4.1(i)). For $n \ge x^{2-\alpha}/L(x)$, we have first the 'Cramér approximation' and then the normal one. If $n \gg x^2$ then $\mathbf{P}(S_n \ge x) \to 1/2$. Thus, for $F \in \mathcal{Se}(\alpha)$ in (5.9.3) we have
$$\gamma = \gamma(F) = 2 - 2\alpha, \qquad L_n(x) = \frac{\varepsilon(x)}{L^2(x)}.$$
Summarizing the above, we can now plot the dependence of the main parameter $\gamma = \gamma(F)$, which characterizes the additivity boundary $n(x)$, on the parameter $\alpha$ that specifies the classes $\mathcal{R}(\alpha)$ and $\mathcal{Se}(\alpha)$ containing the distribution $F$ (see Fig. 5.1).

[Fig. 5.1: two plots of $\gamma(F)$ against $\alpha$, one for $F \in \mathcal{R}(\alpha)$ and one for $F \in \mathcal{Se}(\alpha)$.]

Fig. 5.1. The plots show the values of $\gamma$ in the representation (5.9.3) for which one or another approximation holds: 1, additivity (subexponentiality) zone; 2, approximation by a stable law; 3, normal approximation; 4, Cramér's approximation; 5, intermediate approximation.
For exponentially decaying distributions the tail additivity property, generally speaking, does not hold. For example, if $\mathbf{P}(\xi \ge t) \sim ce^{-\mu t}$ as $t \to \infty$, $\mu > 0$, then $\mathbf{P}(S_2 \ge t) \sim c_1 t e^{-\mu t} \gg 2\mathbf{P}(\xi \ge t)$. For distributions with a negative mean, however, the additivity property becomes possible. As was shown in § 1.2 (Example 1.2.11 on p. 19), if $\mathbf{P}(\xi \ge t) = e^{-\mu t}V(t)$, where $V(t)$ is an integrable r.v.f., then $\varphi(\mu) = \mathbf{E}e^{\mu\xi} < \infty$ and $\mathbf{P}(S_2 \ge t) \sim 2\varphi(\mu)\mathbf{P}(\xi \ge t)$. If the distribution $F$ is such that $\varphi(\mu) = 1$ (which is only possible in the case $\varphi'(0) = \mathbf{E}\xi < 0$) then it will have the additivity (subexponentiality) property. Using Cramér transforms of the distributions of the r.v.'s $\xi$ and $S_n$, the problem
on the asymptotics of P(Sn x) can be reduced to the respective problem for sums of r.v.’s with distributions from R (for this reduction, we do not need the equality ϕ(μ) = 1; see § 6.2).
6 Large deviations on the boundary of and outside the Cramér zone for random walks with jump distributions decaying exponentially fast
6.1 Introduction. The main method of studying large deviations when Cramér's condition holds. Applicability bounds

In this chapter, in contrast with the rest of the book, we will assume that the distribution of $\xi$ satisfies Cramér's condition: $\varphi(\lambda) := \mathbf{E}e^{\lambda\xi} < \infty$ for some $\lambda > 0$. In this case, methods for studying the probabilities of large deviations of $S_n$ are quite well developed and go back to Cramér's paper [95] (see also [233, 16, 219, 259, 120, 49] etc.). Somewhat later, these methods were extended in [37, 44, 69, 70] to enable the study of $\overline{S}_n$ and also the solution of a number of other problems related to the crossing of given boundaries by the trajectory of a random walk.
The basis of the modern approach to studying the probabilities of large deviations of $S_n$, including integro-local theorems, consists of the following two main elements:
(1) the Cramér transform and the reduction of the problem to integro-local theorems in the normal deviation zone;
(2) the Gnedenko–Stone–Shepp integro-local theorems [130, 254, 258, 259] on the distribution of $S_n$.
We will briefly describe the essence of the approach. As before, let $\Delta[x) := [x, x+\Delta)$ be a half-open interval of length $\Delta > 0$ with left endpoint $x$. We will study the asymptotics of $\mathbf{P}(S_n \in \Delta[x))$ as $n \to \infty$. (Note that earlier we assumed that $x \to \infty$ but often made no such assumption with regard to $n$. Now the unbounded growth of $x$ will follow from that of $n$.)
We say that an r.v. $\xi^{(\lambda)}$ follows the Cramér transform of the distribution (or the
conjugate distribution¹) of $\xi$ if
$$\mathbf{P}(\xi^{(\lambda)} \in dt) = \frac{e^{\lambda t}\, \mathbf{P}(\xi \in dt)}{\varphi(\lambda)}, \eqno(6.1.1)$$
so that
$$\mathbf{E}e^{\mu\xi^{(\lambda)}} = \frac{\varphi(\lambda + \mu)}{\varphi(\lambda)}.$$
The Cramér transform of the distribution of $S_n$ has the form
$$\frac{e^{\lambda t}\, \mathbf{P}(S_n \in dt)}{\varphi^n(\lambda)}. \eqno(6.1.2)$$
It is clear that the moment generating function of this distribution is equal to $\bigl(\varphi(\lambda+\mu)/\varphi(\lambda)\bigr)^n$. So we obtain the following remarkable fact: the transform (6.1.2) corresponds to the distribution of the sum
$$S_n^{(\lambda)} := \xi_1^{(\lambda)} + \cdots + \xi_n^{(\lambda)}$$
of i.i.d. r.v.'s $\xi_1^{(\lambda)}, \ldots, \xi_n^{(\lambda)}$ following the same distribution as $\xi^{(\lambda)}$. From this it follows that
$$\mathbf{P}(S_n \in dt) = \varphi^n(\lambda)\, e^{-\lambda t}\, \mathbf{P}(S_n^{(\lambda)} \in dt),$$
and therefore
$$\mathbf{P}(S_n \in \Delta[x)) = \varphi^n(\lambda)\, e^{-\lambda x} \int_x^{x+\Delta} e^{(x-t)\lambda}\, \mathbf{P}(S_n^{(\lambda)} \in dt) = \varphi^n(\lambda)\, e^{-\lambda x} \int_0^\Delta e^{-\lambda u}\, \mathbf{P}\bigl(S_n^{(\lambda)} - x \in du\bigr). \eqno(6.1.3)$$
Since e^{−λu} → 1 uniformly in u ∈ [0, Δ] as Δ → 0, we have, for such Δ's, that

    P(Sn ∈ Δ[x)) ∼ ϕ^n(λ) e^{−λx} P(S_n^(λ) − x ∈ Δ[0));      (6.1.4)

hence, knowing the asymptotics of P(S_n^(λ) − x ∈ Δ[0)) would mean that we also know the desired asymptotics of P(Sn ∈ Δ[x)).
Now note that Eξ^(λ) = ϕ′(λ)/ϕ(λ) = (ln ϕ(λ))′. Clearly, Eξ^(λ) is an increasing function of λ because (ln ϕ(λ))″ > 0. This function increases from the value

    θ− := inf{ϕ′(λ)/ϕ(λ) : λ ∈ (λ−, λ+)}

to the value

    θ+ := sup{ϕ′(λ)/ϕ(λ) : λ ∈ (λ−, λ+)},
¹ In actuarial and finance-related literature, this distribution is often referred to as the Esscher transformed or exponentially tilted distribution; see e.g. p. 31 of [113].
where

    λ− := inf{λ : ϕ(λ) < ∞},    λ+ := sup{λ : ϕ(λ) < ∞}.

Therefore, if

    θ := x/n ∈ (θ−, θ+)      (6.1.5)

then we can choose a λ = λ(θ) such that

    Eξ^(λ) = ϕ′(λ)/ϕ(λ) = θ.      (6.1.6)

In this case we have E S_n^(λ) − x = nθ − x = 0, which means that the probability P(S_n^(λ) − θn ∈ Δ[0)) on the right-hand side of (6.1.4) will refer to the normal deviation zone. Observe also that

    Eξ = ϕ′(0) ∈ [θ−, θ+],    λ(θ±) = λ±.

Now consider the lattice case, for which the distribution of ξ is concentrated on the set {a + kh; k = . . . , −1, 0, 1, . . .}, where h is the maximal value possessing this property. Without loss of generality, we can put h = 1. Moreover, when studying the distribution of Sn, we can set a = 0 (sometimes it is more convenient to set Eξ = 0, but in such a case, generally speaking, a ≠ 0). When h = 1 and a = 0, the distribution of ξ is said to be arithmetic.
In the arithmetic case, analogues of the relations (6.1.3), (6.1.4) have a somewhat simpler form: for integer-valued x,

    P(Sn = x) = ϕ^n(λ) e^{−λx} P(S_n^(λ) − θn = 0),      (6.1.7)
where, as before, E(ξ^(λ) − θ) = 0 for λ = λ(θ), θ ∈ (θ−, θ+).
Thus, to study the asymptotics of P(Sn ∈ Δ[x)), we should turn to integro-local theorems for the sums Zn := ζ1 + · · · + ζn of i.i.d. r.v.'s ζ1, ζ2, . . . in the normal deviation zone in the non-lattice case, and to local theorems for these sums in the lattice case.
Let F̃ be a stable distribution with a density f. In the lattice case, we have Gnedenko's theorem [130], as follows.

Theorem 6.1.1. Assume that ζ has a lattice distribution with span h = 1 and an arbitrary a. The relation

    lim_{n→∞} sup_k | b_n P(Zn = na + k) − f((na + k − a_n)/b_n) | = 0      (6.1.8)

holds, for suitable a_n, b_n, iff the distribution of ζ belongs to the domain of attraction of the stable law F̃. Moreover, here the sequence (a_n, b_n) is the same as that ensuring convergence of the distribution of (Zn − a_n)/b_n to the law F̃.
Proof. The proof of Theorem 6.1.1 can be found in §§ 49, 50 of [130]; see also Theorem 4.2.1 of [152] and Theorem 8.4.1 of [32].

In the non-lattice case, the following Stone–Shepp integro-local theorem holds true.

Theorem 6.1.2. Suppose that ζ is a non-lattice r.v. Then a necessary and sufficient condition for the existence of a_n, b_n such that

    lim_{n→∞} sup_{Δ∈[Δ1,Δ2]} sup_x | b_n P(Zn ∈ Δ[x)) − Δ f((x − a_n)/b_n) | = 0      (6.1.9)

for any fixed 0 < Δ1 < Δ2 < ∞ is that the distribution of ζ belongs to the domain of attraction of the stable distribution F̃. Moreover, here the sequence (a_n, b_n) is the same as that ensuring convergence of the distribution of (Zn − a_n)/b_n to the law F̃.

Proof. The proof of this assertion can be found in [254, 258, 259, 120, 32].

Remark 6.1.3. The assertions of Theorems 6.1.1 and 6.1.2 can be presented in a unified form, using instead of (6.1.8) the common assertion (6.1.9), where in the lattice case the variable x assumes the values an + k, k = . . . , −1, 0, 1, . . . , and Δ ≥ 1 is integer-valued.

Remark 6.1.4. If the Cramér condition on the characteristic function,

    [C]  p := lim sup_{|t|→∞} |g(t)| < 1,  where g(t) := E e^{itζ},

is satisfied then the assertion of Theorem 6.1.2 can be made stronger: the relation (6.1.9) will hold uniformly in Δ ∈ [e^{−p1 n}, Δ2] for any fixed p1 < p and Δ2 < ∞ (see [259]).

Note that we want to apply Theorems 6.1.1 and 6.1.2 in the representations (6.1.4), (6.1.7) to the sums S_n^(λ) − θn with λ = λ(θ), where the value θ = x/n and hence that of λ(θ) will depend, generally speaking, on n (as, for instance, is the case when x = θ0 n + b n^α, α ∈ (0, 1), θ0 = const). This means that we will have to deal with the triangular array scheme and so will need versions of Theorems 6.1.1 and 6.1.2 that are uniform in λ. Such versions were established in [72], and in part in [259].
In the general situation, conditions for the uniformity are rather cumbersome, but in the special case when ζ = ξ^(λ(θ)) − θ, θ ∈ [θ̂−, θ̂+] with θ− < θ̂− < θ̂+ < θ+, one needs no additional conditions, and Theorems 6.1.1 and 6.1.2 take more concrete forms. In particular, we have the following result.

Theorem 6.1.5. If the r.v. ξ is non-lattice and ζ = ξ^(λ(θ)) − θ then, uniformly in θ = x/n ∈ [θ̂−, θ̂+], one has the relation (6.1.9), where

    f(t) = (1/√(2π)) e^{−t²/2}
is the density of the standard normal distribution, a_n = 0 and b_n² = n d(θ) with d(θ) := Var ξ^(λ(θ)). In particular,

    lim_{n→∞} sup_{Δ∈[Δ1,Δ2]} | √(n d(θ)) P(S_n^(λ(θ)) − θn ∈ Δ[0)) − Δ/√(2π) | = 0.      (6.1.10)

One should also note that the Cramér transform (6.1.1) converts the distribution of ξ into distributions that are absolutely continuous with respect to the original one and also with respect to each other.
If θ− > −∞ and ϕ″(λ(θ−)) < ∞ (i.e. Var ξ^(λ(θ−)) < ∞) then the uniformity in θ = x/n in Theorems 6.1.1 and 6.1.2 will hold on the interval [θ−, θ̂+]. Similarly, for θ+ < ∞ and ϕ″(λ(θ+)) < ∞, one will have uniformity in θ ∈ [θ̂−, θ+].
Thus, under the conditions of Theorem 6.1.5, we have

    P(S_n^(λ(θ)) − θn ∈ Δ[0)) ∼ Δ/√(2πn d(θ)),    n → ∞,      (6.1.11)

uniformly in Δ ∈ [Δ1, Δ2]. If the distribution of ξ satisfies condition [C] then the interval [Δ1, Δ2] can be replaced by [e^{−p1 n}, Δ2] for some p1 > 0.
The assertion of Theorem 6.1.5 follows from the uniform versions of Theorems 6.1.1 and 6.1.2 in an obvious way.
If the conditions that ϕ″(λ(θ±)) are finite do not hold then, for θ = x/n ↑ θ+ (θ ↓ θ− > −∞), one will need versions of Theorems 6.1.1 and 6.1.2 with a non-normal stable density f (see § 6.2). Corresponding changes will have to be made in assertions (6.1.10), (6.1.11) as well.
Further, note that assertions (6.1.9), (6.1.11), being true in the non-lattice case for any fixed Δ, will also hold for Δ = Δn → 0 slowly enough. Therefore, returning to (6.1.4), we obtain from Theorem 6.1.5 the following statement. Again let [θ̂−, θ̂+] be any interval located inside (θ−, θ+).

Theorem 6.1.6. In the non-lattice case, uniformly in θ = x/n ∈ [θ̂−, θ̂+], as n → ∞ and Δ = Δn → 0 slowly enough, one has, for λ = λ(θ),

    P(Sn ∈ Δ[x)) ∼ ϕ^n(λ) e^{−λx} Δ/√(2πn d(θ)).      (6.1.12)

In the lattice case, for values of x of the form an + k, one has

    P(Sn = x) ∼ ϕ^n(λ) e^{−λx} 1/√(2πn d(θ)).      (6.1.13)
The uniformity in θ could be extended to the entire intervals [θ−, θ̂+] and [θ̂−, θ+] if θ− > −∞, ϕ″(λ−) < ∞ and θ+ < ∞, ϕ″(λ+) < ∞, respectively. The case θ+ < ∞, ϕ″(λ+) = ∞ and θ ∼ θ+ will be considered in § 6.2.
Note that the product ϕ^n(λ(θ)) e^{−λ(θ)x} in (6.1.12) and (6.1.13) can be written as

    [ϕ(λ(θ)) e^{−λ(θ)θ}]^n = e^{−nΛ(θ)},

where

    Λ(θ) = λ(θ)θ − ln ϕ(λ(θ))

is the large deviation rate function (a.k.a. the deviation function, or the Legendre transform of ln ϕ(λ)), given by

    Λ(θ) := sup_λ (λθ − ln ϕ(λ)).

In this form, the deviation function appears in a natural way when finding the asymptotics of P(Sn ∈ Δ[x)) using an alternative approach employing the inversion formula and the saddle-point method. The properties of the deviation function as a probabilistic characteristic are well known (see e.g. [38, 49, 67]). In particular, it is known that the function Λ(θ) is non-negative, lower semicontinuous, convex and analytic on the interval (θ−, θ+), and that

    Λ(Eξ) = 0,    Λ′(θ) = λ(θ) > 0  for  θ > Eξ.
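Since λθ − ln ϕ(λ) is concave in λ, the supremum defining Λ(θ) is easy to evaluate numerically. A minimal sketch, assuming a standard normal jump so that ln ϕ(λ) = λ²/2 and the supremum is known in closed form to be Λ(θ) = θ²/2 (a toy example for illustration only, not from the text):

```python
# Numerical evaluation of the deviation function
#   Lambda(theta) = sup_lam (lam*theta - ln phi(lam))
# for a standard normal jump, where ln phi(lam) = lam**2/2.
def log_phi(lam):
    return lam ** 2 / 2.0

def Lambda(theta, lo=-50.0, hi=50.0):
    # g(lam) = lam*theta - log_phi(lam) is concave; locate its maximum
    # by ternary search over [lo, hi]
    for _ in range(300):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if m1 * theta - log_phi(m1) < m2 * theta - log_phi(m2):
            lo = m1
        else:
            hi = m2
    lam = (lo + hi) / 2.0
    return lam * theta - log_phi(lam)

print(round(Lambda(0.0), 9))   # Lambda(E xi) = 0
print(round(Lambda(1.5), 6))   # known closed form: theta**2/2 = 1.125
```

Any other ln ϕ can be substituted for `log_phi` as long as it is finite on the search interval; for boundary cases (λ+ < ∞) the interval would have to be restricted accordingly.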
In terms of the function Λ(θ), the relation (6.1.12) takes the form

    P(Sn ∈ Δ[x)) ∼ Δ e^{−nΛ(θ)}/√(2πn d(θ)).

From here (or from (6.1.12)) one can easily obtain an integral theorem as well. Since, for θ < θ+ and v/n → 0, one has

    Λ(θ + v/n) = Λ(θ) + λ(θ) v/n + o(v/n),

we see that, for any fixed v, as n → ∞,

    P(Sn ∈ Δ[x + v)) ∼ Δ e^{−nΛ(θ)−vλ(θ)}/√(2πn d(θ)).      (6.1.14)

This implies the following.

Corollary 6.1.7. Let θ = x/n > Eξ. Then in the non-lattice case, uniformly in θ = x/n ∈ [θ̂−, θ̂+], for any fixed Δ ≤ ∞, one has

    P(Sn ∈ Δ[x)) ∼ (e^{−nΛ(θ)}/√(2πn d(θ))) ∫_0^Δ e^{−vλ(θ)} dv
                 = (1 − e^{−λ(θ)Δ}) e^{−nΛ(θ)} / (λ(θ)√(2πn d(θ))).      (6.1.15)

In particular, for Δ = ∞,

    P(Sn ≥ x) ∼ e^{−nΛ(θ)} / (λ(θ)√(2πn d(θ))).      (6.1.16)

In the lattice case, we similarly obtain from (6.1.13) that, for values of x of the form an + k, one has

    P(Sn ≥ x) ∼ e^{−nΛ(θ)} / ((1 − e^{−λ(θ)})√(2πn d(θ))).      (6.1.17)
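For standard normal jumps, all the ingredients of (6.1.16) are explicit (Λ(θ) = θ²/2, λ(θ) = θ, d(θ) = 1) and P(Sn ≥ x) is available in closed form, so the quality of the approximation can be inspected directly. A hypothetical toy check (the parameter choices are illustrative only):

```python
import math

# Compare the exact tail P(S_n >= x) for standard normal jumps with the
# right-hand side of (6.1.16); here S_n ~ N(0, n).
def exact_tail(n, x):
    return 0.5 * math.erfc(x / math.sqrt(2.0 * n))

def cramer_tail(n, x):
    theta = x / n
    # e^{-n Lambda(theta)} / (lambda(theta) * sqrt(2 pi n d(theta)))
    return math.exp(-n * theta ** 2 / 2.0) / (theta * math.sqrt(2.0 * math.pi * n))

n = 400
x = 0.25 * n
print(round(exact_tail(n, x) / cramer_tail(n, x), 3))  # ratio close to 1
```

The residual discrepancy at moderate n is of the order of the next term of the Mills-ratio expansion of the normal tail, and shrinks as x√n grows.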
The general integro-local theorem (6.1.15), (6.1.13) was obtained, in a somewhat different form, in [259].
The asymptotic relations (6.1.16), (6.1.17) show, in particular, the degree of precision of the following crude bound, which is obtained directly from Chebyshev's inequality: for x ≥ nEξ,

    P(Sn ≥ x) ≤ inf_{λ>0} e^{−λx} E e^{λSn} = inf_{λ>0} e^{−λx+n ln ϕ(λ)} = e^{−nΛ(θ)}.      (6.1.18)
We emphasize that, when studying one-sided deviations, one of the main conditions in the basic Theorem 6.1.6 is the relation

    θ = x/n ≤ θ̂+ < θ+.

This inequality, describing the so-called Cramér zone of right-sided deviations, sets applicability bounds for methods based on the Cramér transform and also on all the other analytic methods leading to asymptotics of the form (6.1.12)–(6.1.17). If θ+ = ∞ then there are no such bounds. If θ+ < ∞, λ+ = ∞ then necessarily P(ξ ≤ θ+) = 1, i.e. the r.v. ξ is bounded. This case is scarcely interesting from the large deviations viewpoint, and we will not consider it here. There remains the little-studied (for deviations x > θ+ n) possibility

    0 < λ+ < ∞,    θ+ = ϕ′(λ+)/ϕ(λ+) < ∞,      (6.1.19)

where necessarily one has ϕ(λ+) < ∞, ϕ′(λ+) < ∞. It follows from these relations that the function V(t) from the representation

    P(ξ ≥ t) = e^{−λ+ t} V(t),    λ+ > 0,      (6.1.20)

has the property that tV(t) is integrable:

    ∫_0^∞ t V(t) dt < ∞.      (6.1.21)
Indeed, the condition ϕ′(λ+) < ∞ implies that E(ξe^{λ+ξ}; ξ > 0) < ∞, and therefore, as t → ∞,

    V(t) = e^{λ+ t} P(ξ ≥ t) ≤ t^{−1} E(ξe^{λ+ξ}; ξ ≥ t) = o(t^{−1}).      (6.1.22)
From (6.1.22) and the last relation, integrating by parts, we obtain

    ∞ > E(ξe^{λ+ξ}; ξ > 0) ≥ E(ξe^{λ+ξ}; 0 < ξ < N)
      = −∫_0^N t e^{λ+ t} d(e^{−λ+ t} V(t)) = λ+ ∫_0^N t V(t) dt − ∫_0^N t dV(t)
      ≥ λ+ ∫_0^N t V(t) dt − N V(N) → λ+ ∫_0^∞ t V(t) dt  as  N → ∞,

which establishes (6.1.21).
Moreover, if 1/V(t) is an upper-power function (see p. 28) then

    V(t) ≤ (2c/t) ∫_{t/2}^t V(u) du ≤ (4c/t²) ∫_{t/2}^t u V(u) du = o(t^{−2})
owing to (6.1.21). It is not difficult to construct an example showing that, without additional assumptions on the behaviour of V(t), the bound V(t) = o(t^{−1}) is essentially unimprovable.
It turns out that one can find the asymptotics of the probabilities of large deviations x > θ+ n in the case (6.1.19), (6.1.20) only under the additional assumption of regular variation of the function V; this is quite similar to what we saw in Chapters 2–5 and in contrast with the 'Cramér theory' presented at the beginning of the present section. Moreover, the asymptotics of P(Sn ≥ x) are determined in this case not only by the function V(t) but also by the values of λ+, ϕ(λ+) and ϕ′(λ+), which depend on the entire distribution of ξ. (We did not have this phenomenon in the situations considered in Chapters 2–5; in particular, in the Cramér deviation zone there was no dependence on the asymptotics of the non-exponential factor V(t).) Thus, in this sense, we will be dealing with 'transitional' asymptotics, whose place is between the asymptotics of Chapters 2–5 and the asymptotics (6.1.15)–(6.1.17) in the Cramér deviation zone.
In what follows, we will consider the class ER of distributions (or, equivalently, functions of the form (6.1.20)) with the property that the function V(t) in (6.1.20) belongs to the class R of regularly varying functions. The class ER will be referred to as the class of regularly varying exponentially decaying distributions. We will distinguish between two subclasses of ER, for which one has, respectively,

    ∫_0^∞ t² V(t) dt = ∞      (6.1.23)

and

    ∫_0^∞ t² V(t) dt < ∞.      (6.1.24)
For studying these one will have to use the results of Chapters 3 and 4, respectively. Functions V ∈ R of index from the interval (−3, −2) correspond to the subclass (6.1.23) (together with (6.1.21)), whereas functions of index less than −3 correspond to the subclass (6.1.24).
One can also obtain results, similar to (but, unfortunately, technically more complicated than) those to be derived below in §§ 6.2 and 6.3, for the class ESe of distributions with the property that the function V in (6.1.20) is from the class Se of semiexponential functions. In this case one should make use of integro-local theorems. As mentioned in the Introduction, a companion volume to the present text will be devoted to a more detailed study of random walks with distributions fast decaying at infinity.
6.2 Integro-local theorems for sums Sn of r.v.'s with distributions from the class ER when the function V(t) is of index from the interval (−3, −1)

In the previous section, we used integro-local theorems in the normal deviation zone and the Cramér transform to obtain theorems of the same type in the whole Cramér deviation zone. Now we will turn to integro-local theorems on the boundary of, and outside, the Cramér zone, where the approaches presented in § 6.1 do not work.
In this section, we will consider the possibility (6.1.23), i.e. the case when the distribution of ξ has the form

    P(ξ ≥ t) = e^{−λ+ t} V(t),    λ+ > 0,      (6.2.1)

where

    V(t) = t^{−α−1} L(t),      (6.2.2)

α ∈ (0, 2) and L is an s.v.f. (For reasons that will become clear later on, it is convenient in this section to denote the index of the r.v.f. V(t) by −α − 1.)
Formally, the case α ∈ (0, 1) is not related to the group of distributions with λ+ < ∞ and θ+ < ∞, which was specified by (6.1.19), since, for such indices,

    ∫_1^∞ t V(t) dt = ∫_1^∞ t^{−α} L(t) dt = ∞

and therefore ϕ′(λ+) = ∞, θ+ = ϕ′(λ+)/ϕ(λ+) = ∞. This means that we can use the approaches discussed in § 6.1 to study the probabilities P(Sn ≥ x) with x ∼ cn for any fixed c. It turns out, however, that the approaches presented in § 6.2.2 below (first and foremost, this refers to the use of the Cramér transform at the fixed point λ+) enable one to study the asymptotics of P(Sn ∈ Δ[x)) when x ≫ n also (such deviations x could be called super-large; under Cramér's
condition, along with normal deviations O(√n), one usually distinguishes between moderately large deviations, when √n ≪ x = o(n), and the 'usual' large deviations x ∼ cn).

6.2.1 Large deviations on the boundary of the Cramér zone: the case α ∈ (1, 2)

Now we return to the case α ∈ (1, 2) and put

    y := x − nθ+.      (6.2.3)
We will assume in this subsection that y = o(n) as n → ∞ (in fact, the range of the values of y to be considered will be somewhat narrower). This means that θ = x/n remains in the vicinity of the point θ+, i.e. next to the right-hand boundary of the Cramér zone (θ−, θ+). Applying the Cramér transform (6.1.1) at the fixed point λ+, we obtain that, as Δ → 0 (cf. (6.1.4)),

    P(Sn ∈ Δ[x)) ∼ ϕ^n(λ+) e^{−λ+ x} P(S_n^(λ+) − θ+ n ∈ Δ[y)).      (6.2.4)

Now we will find out what kind of distribution will describe the jumps

    ζ := ξ^(λ+) − θ+.      (6.2.5)
We have

    P(ζ ∈ dt) = e^{λ+(t+θ+)} P(ξ ∈ dt + θ+)/ϕ(λ+) = [λ+ V(t + θ+) dt − dV(t + θ+)]/ϕ(λ+).      (6.2.6)

From this and Theorem 1.1.4, one derives that

    V_ζ(t) := P(ζ ≥ t) = (λ+ t V(t)/(α ϕ(λ+)))(1 + o(1))
            = (λ+ t^{−α} L(t)/(α ϕ(λ+)))(1 + o(1)) = t^{−α} L_ζ(t),      (6.2.7)

where

    L_ζ(t) ∼ λ+ L(t)/(α ϕ(λ+))

is clearly also an s.v.f.
Let ζ1, ζ2, . . . be independent copies of the r.v. ζ. Since Eζ = Eξ^(λ+) − θ+ = 0, we obtain, by virtue of the results of § 1.5, that, as n → ∞, the distribution of the scaled sums Zn/bn, where

    Zn = Σ_{i=1}^n ζi,    bn = V_ζ^{(−1)}(1/n) ∼ (λ+/(α ϕ(λ+)))^{1/α} V^{(−1)}(1/n),      (6.2.8)
will converge to the stable law F_{α,1} with parameters (α, 1) (the left distribution tail of ζ decays exponentially fast). Hence, owing to Theorem 6.1.2, one has (6.1.9), where a_n = 0, b_n = V_ζ^{(−1)}(1/n) and f = f_{α,1} is the density of the stable law F_{α,1}. This means that if Δ = Δn → 0 slowly enough then

    P(Zn ∈ Δ[y)) = f_{α,1}(y/bn) Δ/bn + o(Δ/bn).

In other words, when |y| = O(bn) (i.e. for n > c/V_ζ(y), or when nV_ζ(y) decays slowly enough) one has

    P(Zn ∈ Δ[y)) ∼ f_{α,1}(y/bn) Δ/bn.

Returning to (6.2.4) and noting that ϕ^n(λ+) e^{−λ+ x} = e^{−nΛ(θ+)−λ+ y}, we obtain the following assertion.

Theorem 6.2.1. Assume that conditions (6.2.1), (6.2.2) with α ∈ (1, 2) are satisfied. Then, in the non-lattice case, for y = x − θ+ n = o(n) and for Δ = Δn → 0 slowly enough as n → ∞, one has the representation

    P(Sn ∈ Δ[x)) = (Δ e^{−nΛ(θ+)−λ+ y}/bn) [f_{α,1}(y/bn) + o(1)],      (6.2.9)

where the bn are defined in (6.2.8). In the arithmetic case, for integer-valued x,

    P(Sn = x) = (e^{−nΛ(θ+)−λ+ y}/bn) [f_{α,1}(y/bn) + o(1)].      (6.2.10)
In both cases the remainder terms are uniform in x (in y) in the zone where nV_ζ(y) ≥ εn, εn → 0 slowly enough as n → ∞.

Remark 6.2.2. The function Λ(θ) is linear for θ ≥ θ+:

    Λ(θ) = Λ(θ+) + (θ − θ+)λ+,    θ ≥ θ+.      (6.2.11)

Hence for y > 0 the argument of the exponent in (6.2.9), (6.2.10) can also be written in the form −nΛ(θ).

Corollary 6.2.3. In the non-lattice case, under conditions (6.2.1), (6.2.2), for any sequence εn → 0 slowly enough as n → ∞, one has

    P(Sn ∈ Δ[x)) = (e^{−nΛ(θ+)−λ+ y}/(λ+ bn)) (1 − e^{−λ+ Δ}) [f_{α,1}(y/bn) + o(1)]

uniformly in y, |y| < εn n, and Δ ≥ εn.
Proof. The assertion of the corollary follows from Theorem 6.2.1 owing to the continuity of the function f_{α,1} and the fact that, as n → ∞,

    ∫_0^Δ e^{−λ+ v} f_{α,1}((y + v)/bn) dv = f_{α,1}(y/bn) [(1 − e^{−λ+ Δ})/λ+ + o(1)]

uniformly in y and Δ.

Thus, Theorem 6.2.1 and Corollary 6.2.3 describe the asymptotics of the probabilities P(Sn ∈ Δ[x)) when |y| = O(V_ζ^{(−1)}(1/n)). We will need another approach when y ≫ V_ζ^{(−1)}(1/n).
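The tail asymptotics (6.2.7) rest on an integration-by-parts identity: up to the shift by θ+ (cf. (6.2.6)), ϕ(λ+) V_ζ(t) = V(t) + λ+ ∫_t^∞ V(u) du, and it is the second term that dominates. In the pure-power toy case V(t) = t^{−α−1}, λ+ = 1, L ≡ 1 (hypothetical numbers, not from the text), the integral equals t^{−α}/α, so one can watch the ratio of the exact expression to the asymptotic form λ+ t V(t)/α tend to 1:

```python
# Pure-power toy check of (6.2.7): V(t) = t**(-alpha-1), lambda_+ = 1.
alpha, lam = 1.5, 1.0

def V(t):
    return t ** (-alpha - 1.0)

for s in (10.0, 100.0, 1000.0):
    exact = V(s) + lam * s ** (-alpha) / alpha   # V(s) + lam * int_s^inf V(u) du
    asymp = lam * s * V(s) / alpha               # asymptotic form from (6.2.7)
    print(s, round(exact / asymp, 4))            # ratio = 1 + alpha/(lam*s) -> 1
```

The printed ratios decrease towards 1 like 1 + α/(λ+ s), which also indicates the relative error incurred when (6.2.7) is used at finite t.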
6.2.2 Large deviations and super-large deviations outside the Cramér zone: the case α ∈ (0, 2)

In this subsection, we will assume that x and n are such that

    y = x − nθ+ → ∞,    y ≫ σ(n) := V_ζ^{(−1)}(1/n),

where the tail V_ζ(t) of the r.v. ζ = ξ^(λ+) − θ+ is given in (6.2.7). Observe that the left tail of ζ decays at infinity exponentially fast, so that condition [<, =] with W_ζ(t) = o(V_ζ(t)), σ(n) = σ̃(n), always holds for ζ. Furthermore, under the conditions of the present subsection, deviations y ≫ σ(n) for Zn belong to the large deviation zone, and so the approach used in the previous subsection is no longer applicable. Instead, to approximate P(Zn ∈ Δ[y)) we can now use the integro-local theorems of § 3.7. This results in the following assertion. Recall that, in the case under consideration, we have θ = x/n > θ+, Eξ^(λ+) = θ+ and

    λ(θ) = λ+,    Λ(θ) = θλ+ − ln ϕ(λ+) = Λ(θ+) + λ+ y/n.
Theorem 6.2.4. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 2) be met. Then, in the non-lattice case, for Δ = Δy → 0 slowly enough as y → ∞, we have the representation

    P(Sn ∈ Δ[x)) = Δ e^{−nΛ(θ)} (nλ+ V(y)/ϕ(λ+)) (1 + o(1))      (6.2.12)

as N → ∞, where the remainder term o(1) is uniform in y and n such that y ≥ max{N, n^{1/γ}} for some fixed γ < α.
In the arithmetic case, the representation (6.2.12) holds for Δ = 1 and integer-valued x → ∞, provided that we replace the factor λ+ on the right-hand side of (6.2.12) by 1 − e^{−λ+}.

Remark 6.2.5. Note that in this theorem, in contrast with Theorem 6.1.6 and Corollary 6.1.7, generally speaking, we do not assume that n → ∞.
Proof of Theorem 6.2.4. It follows from (6.2.6) that, for any Δ = o(t),

    P(ξ^(λ+) − θ+ ∈ Δ[t))
      = ϕ^{−1}(λ+) [ V(t + θ+) − V(t + θ+ + Δ) + λ+ ∫_{t+θ+}^{t+θ++Δ} V(u) du ]
      = ϕ^{−1}(λ+) [ V(t + θ+) − V(t + θ+ + Δ) + λ+ Δ V(t)(1 + o(1)) ].

Since for Δ = o(t) one has V(t + θ+) − V(t + θ+ + Δ) = o(V(t)), we obtain that the distribution of ζ = ξ^(λ+) − θ+ has the property that

    P(ζ ∈ Δ[t)) = (λ+ V(t)/ϕ(λ+)) [(1 + o(1))Δ + o(1)].      (6.2.13)

In other words, the tail V_ζ(t) := P(ζ ≥ t) (see (6.2.7)) satisfies condition [D_(1,q)] from § 3.7 in the form (3.7.2):

    V_ζ(t) − V_ζ(t + Δ) = V_1(t) [(1 + o(1))Δ + o(q(t))]

with

    q(t) ≡ 1,    V_1(t) = λ+ V(t)/ϕ(λ+),    V_ζ(t) ∼ t λ+ V(t)/(α ϕ(λ+)).      (6.2.14)

Hence we can use Theorem 3.7.1. Applying the theorem with γ0 = 0, we obtain that, for any Δ ≥ c, Δ = o(y), one has

    P(S_n^(λ+) − θ+ n ∈ Δ[y)) = (nλ+ V(y)/ϕ(λ+)) (1 + o(1)) Δ      (6.2.15)

as N → ∞, where the remainder term o(1) is uniform in y, n and Δ satisfying the inequalities y ≥ max{N, n^{1/γ}} for some fixed γ < α and c ≤ Δ ≤ y εN for an arbitrary function εN ↓ 0 as N ↑ ∞. Since c is arbitrary, the relation (6.2.15) will still be true for Δ = Δy → 0 slowly enough. It only remains to use (6.2.4). The demonstration in the lattice case is quite similar. The theorem is proved.

Corollary 6.2.6. Assume that (6.2.1), (6.2.2) hold true. In the non-lattice case, for any Δ ≥ Δy, where Δy → 0 slowly enough as y → ∞, one has

    P(Sn ∈ Δ[x)) = (e^{−nΛ(θ)} nV(y)/ϕ(λ+)) (1 − e^{−λ+ Δ}) (1 + o(1)).      (6.2.16)
In particular, when Δ = ∞ we obtain

    P(Sn ≥ x) = (e^{−nΛ(θ)} nV(y)/ϕ(λ+)) (1 + o(1))      (6.2.17)
as N → ∞, where the remainders o(1) are uniform in y ≥ max{N, n^{1/γ}} for any fixed γ < α and Δ ≥ Δy. In the arithmetic case, the relation (6.2.17) holds for integer-valued x → ∞.

Proof. The proof of Corollary 6.2.6 is almost obvious from Theorem 6.2.4. Indeed, in the non-lattice case, for small Δ one has (6.2.12) and, moreover, for any fixed v > 0,

    nΛ(θ + v/n) = nΛ(θ) + λ+ v.

Therefore, for small Δ we have

    P(Sn ∈ Δ[x + v)) = (Δ nλ+ V(y + v)/ϕ(λ+)) e^{−nΛ(θ)} e^{−λ+ v} (1 + o(1)),

and so, for arbitrary Δ ≥ Δy,

    P(Sn ∈ Δ[x)) = (nλ+ e^{−nΛ(θ)}/ϕ(λ+)) ∫_0^Δ V(y + v) e^{−λ+ v} dv (1 + o(1))
                 = ϕ^{−1}(λ+) nV(y) e^{−nΛ(θ)} (1 − e^{−λ+ Δ}) (1 + o(1)),      (6.2.18)
6.2.3 Super-large deviations in the case α ∈ (0, 1) As already noted, in the case α ∈ (0, 1) one has θ+ = ∞ and therefore the probabilities of deviations of the form cn for any fixed c can be obtained within the framework of § 6.1. It turns out that, using the same approaches as in §§ 6.2.1 and 6.2.2, in the case α ∈ (0, 1) one can also find the asymptotics of P(Sn x) for x n. For Δ → 0 one has (cf. (6.1.4), (6.2.4)) (6.2.19) P Sn ∈ Δ[x) ∼ ϕn (λ+ )e−λ+ x P Sn(λ+ ) ∈ Δ[y) , where ζ := ξ (λ+ ) follows the distribution P(ζ ∈ dt) =
eλ+ t λ+ V (t) dt − dV (t) P(ξ ∈ dt) = , ϕ(λ+ ) ϕ(λ+ )
so that Vζ (t) := P(ζ t) =
λ+ tV (t) (1 + o(1)) = t−α Lζ (t), αϕ(λ+ )
Lζ (t) ∼
λ+ L(t) . αϕ(λ+ )
As in § 6.2.1, it follows from this that the scaled sums Zn/bn (see (6.2.8)) converge in distribution to the stable law F_{α,1} with parameters (α, 1). Therefore, by virtue of Theorem 6.1.2, for Δ = Δn → 0 slowly enough,

    P(Zn ∈ Δ[x)) = f_{α,1}(x/bn) Δ/bn + o(Δ/bn),

where f_{α,1} is the density of the distribution F_{α,1} (recall that in the case of convergence to a stable law with α < 1, no centring of the sums Zn is needed). Returning to (6.2.19), we obtain, as in § 6.2.1, the following assertion.

Theorem 6.2.7. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 1) be satisfied and let x ≫ n. Then, in the non-lattice case, for Δ = Δn → 0 slowly enough as n → ∞, one has

    P(Sn ∈ Δ[x)) = (Δ/bn) ϕ^n(λ+) e^{−λ+ x} [f_{α,1}(x/bn) + o(1)],

where the bn were defined in (6.2.8). In the arithmetic case, for integer-valued x,

    P(Sn = x) = (1/bn) ϕ^n(λ+) e^{−λ+ x} [f_{α,1}(x/bn) + o(1)].

Corollary 6.2.8. In the non-lattice case, as n → ∞,

    P(Sn ≥ x) = (1/(λ+ bn)) ϕ^n(λ+) e^{−λ+ x} [f_{α,1}(x/bn) + o(1)].

The above assertions could be made uniform in the same way as in § 6.2.1. They give exact asymptotics of the desired probabilities only when x is comparable with bn ∼ n^{1/α} L_b(n), where L_b is an s.v.f. If x grows at a faster rate then the following analogue of Theorem 6.2.4 should be used. Let γ < α be an arbitrary fixed number.

Theorem 6.2.9. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 1) be satisfied. Then, as N → ∞, for x ≥ max{N, n^{1/γ}} and Δ = Δx → 0 slowly enough, in the non-lattice case one has

    P(Sn ∈ Δ[x)) = Δ ϕ^{n−1}(λ+) e^{−λ+ x} nλ+ V(x) (1 + o(1)).      (6.2.20)

In the arithmetic case, the assertion (6.2.20) remains true for integer-valued x and Δ = 1, provided that we replace the factor λ+ on the right-hand side of (6.2.20) by 1 − e^{−λ+}.

Corollary 6.2.10. Under conditions (6.2.1), (6.2.2), for α ∈ (0, 1), N → ∞ and x ≥ max{N, n^{1/γ}} one has

    P(Sn ≥ x) = ϕ^{n−1}(λ+) e^{−λ+ x} nV(x) (1 + o(1)).

We will omit the proofs of Theorem 6.2.9 and Corollary 6.2.10 as they are completely analogous to the proofs of Theorem 6.2.4 and Corollary 6.2.6, respectively.
6.3 Integro-local theorems for the sums Sn when the Cramér transform for the summands has a finite variance at the right boundary point

The structure of this section is similar to that of § 6.2. First we will consider deviations

    y = x − nθ+ = O(√n)

under the assumption that ϕ″(λ+) < ∞ (here and elsewhere in the present chapter, by the derivatives at the end points λ± we understand the respective one-sided derivatives) and then, in the second part of the section, turn to deviations

    y ≫ √(n ln n)

under conditions (6.2.1), (6.2.2) with α > 2. Note that in the first part of the section the assumptions (6.2.1), (6.2.2) will not be needed.
6.3.1 Large deviations on the boundary of the Cramér zone

We will assume in this subsection that ϕ″(λ+) < ∞ (under the assumptions of § 6.2 this condition was not met). Approaches to studying the asymptotics of P(Sn ∈ Δ[x)) in this case are close to those used in § 6.1 to prove Theorem 6.1.6. The difference is that here we will use the Cramér transform at the fixed point λ+ (as in § 6.2). For the r.v. ζ = ξ^(λ+) − θ+ we have

    E e^{λξ^(λ+)} = ϕ(λ + λ+)/ϕ(λ+),    Eζ = 0,
    b := Eζ² = ϕ″(λ+)/ϕ(λ+) − (ϕ′(λ+)/ϕ(λ+))² < ∞.      (6.3.1)
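The quantities θ+ = ϕ′(λ+)/ϕ(λ+) and b in (6.3.1) are easy to compute numerically for a concrete distribution. A sketch for a hypothetical density f(t) = c e^{−t}(1 + t)^{−5} on t ≥ 0 (so that λ+ = 1 and ϕ″(λ+) < ∞; the example is not from the text, and direct calculation gives the exact values θ+ = 1/3, b = 2/9):

```python
# Moments of the Cramér transform at lambda_+ = 1 for the toy density
# f(t) = c * exp(-t) * (1+t)**(-5), t >= 0.  The tilted weight
# e^{lambda_+ t} f(t) is proportional to (1+t)**(-5), so the constant c cancels.
N, hi = 800000, 200.0
dt = hi / N

s0 = s1 = s2 = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    w = (1.0 + t) ** (-5.0)        # tilted weight, up to the constant c
    s0 += w * dt
    s1 += t * w * dt
    s2 += t * t * w * dt

theta_p = s1 / s0                  # theta_+ = phi'(lambda_+)/phi(lambda_+)
b = s2 / s0 - theta_p ** 2         # b = Var xi^(lambda_+), cf. (6.3.1)
print(round(theta_p, 4), round(b, 4))
```

The truncation point and step are chosen so that the neglected tail of the t² moment is far below the rounding level; for heavier tilted tails (smaller polynomial index) both would have to be enlarged.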
Theorem 6.3.1. Let ϕ″(λ+) < ∞. Then, in the non-lattice case, for y = x − θ+ n = o(n) and Δ = Δn → 0 slowly enough as n → ∞, one has

    P(Sn ∈ Δ[x)) = (Δ e^{−nΛ(θ+)−λ+ y}/√(2πnb)) [e^{−y²/(2bn)} + o(1)],      (6.3.2)

where the remainder o(1) is uniform in x (in y).
In the arithmetic case, for integer-valued x one has

    P(Sn = x) = (e^{−nΛ(θ+)−λ+ y}/√(2πnb)) [e^{−y²/(2bn)} + o(1)].

Recall that, for θ ≥ θ+,

    λ(θ) = λ+,    e^{−nΛ(θ)} = e^{−nΛ(θ+)−λ+ y} = ϕ^n(λ+) e^{−λ+ x}.

The assertion of Theorem 6.3.1 extends, in the case ϕ″(λ+) < ∞, the applicability zone of Theorem 6.1.6 up to the boundary values θ = x/n ∼ θ+. We also have the following analogue of Corollary 6.1.7.
Corollary 6.3.2. In the non-lattice case, for any Δ ≥ Δn, where Δn → 0 slowly enough, one has

    P(Sn ∈ Δ[x)) = (e^{−nΛ(θ+)−λ+ y−y²/(2bn)}/(λ+ √(2πnb))) (1 − e^{−λ+ Δ}) (1 + o(1))      (6.3.3)

uniformly in y, |y| ≤ c√n. In particular, for Δ = ∞,

    P(Sn ≥ x) ∼ e^{−nΛ(θ+)−λ+ y−y²/(2bn)}/(λ+ √(2πnb)).      (6.3.4)

In the arithmetic case, for integer-valued x one has

    P(Sn ≥ x) ∼ e^{−nΛ(θ+)−λ+ y−y²/(2bn)}/((1 − e^{−λ+})√(2πnb)).

Corollary 6.3.2 follows from Theorem 6.3.1 in an obvious way.

Proof of Theorem 6.3.1. For the distribution of ζ = ξ^(λ+) − θ+ we have relations (6.3.1), which imply that b = Var ζ < ∞. Hence the distribution of the normalized sums Zn = ζ1 + · · · + ζn converges to the normal law, and the integro-local Theorem 6.1.2 holds true in the non-lattice case:

    P(Zn ∈ Δ[y)) = (Δ e^{−y²/(2nb)}/√(2πnb)) (1 + o(1))      (6.3.5)

uniformly in Δ ∈ [Δ1, Δ2]. It only remains to make use of (6.2.4). In the arithmetic case, the proof is similar. The theorem is proved.
6.3.2 Large deviations outside the Cramér zone

We will assume in this section that y = x − nθ+ ≫ √(n ln n). Such deviations are large for Zn, and so the relations (6.3.5) are not meaningful for them. However, we can now use the integro-local theorems from § 4.7. As a result, we can prove the following assertion.

Theorem 6.3.3. Let conditions (6.2.1), (6.2.2) with α > 2 be satisfied, and let

    y = x − nθ+ ≫ √(n ln n).

Then, in the non-lattice case with Δ = Δy → 0 slowly enough as y → ∞, we have

    P(Sn ∈ Δ[x)) = Δ e^{−nΛ(θ)} (nλ+ V(y)/ϕ(λ+)) (1 + o(1))      (6.3.6)

as N → ∞, where the term o(1) is uniform in y and n such that y ≥ N√(n ln n). In the arithmetic case, the assertion (6.3.6) holds true for integer-valued x and Δ = 1, if we replace the factor λ+ on the right-hand side of (6.3.6) by 1 − e^{−λ+}.

Remark 6.2.5 following Theorem 6.2.4 is valid for the above theorem as well.
Proof. The proof of Theorem 6.3.3 essentially repeats that of Theorem 6.2.4. Under the new conditions, we still have the representation (6.2.13), and hence condition [D_(1,q)] of § 3.7 holds in the form (3.7.2), where q and V_1 are given in (6.2.14). Therefore the conditions of Theorem 4.7.1 are met and so, by virtue of that theorem, for Δ = Δy → 0 slowly enough as y → ∞, one has

    P(Zn ∈ Δ[y)) = Δ nV_1(y) (1 + o(1)),    V_1(t) = λ+ V(t)/ϕ(λ+),

as N → ∞, where the term o(1) is uniform in y and n such that y ≥ N√(n ln n). It remains to make use of (6.2.4). The proof in the lattice case is completely analogous. The theorem is proved.

Theorem 6.3.3 implies the following.

Corollary 6.3.4. Under the conditions of Theorem 6.3.3, for any Δ ≥ Δy,

    P(Sn ∈ Δ[x)) = (e^{−nΛ(θ)} nV(y)/ϕ(λ+)) (1 − e^{−λ+ Δ}) (1 + o(1)).

In particular,

    P(Sn ≥ x) = (e^{−nΛ(θ)} nV(y)/ϕ(λ+)) (1 + o(1))

as N → ∞, where the remainder o(1) is uniform in y ≥ N√(n ln n). The assertion remains true in the lattice case as well.
Assertions that are close to the theorems and corollaries of §§ 6.2 and 6.3 were obtained for narrower deviation zones in [27] (Lemmata 2, 3).

Remark 6.3.5. Observe that, under the assumptions of the present chapter, the integro-local theorems could be obtained from the corresponding integral theorems (provided that the latter are available). Indeed, for y = x − θ+ n > 0,

    e^{−nΛ(θ)} = e^{−nΛ(θ+)−λ+ y},

and therefore P(Sn ≥ x) decays in x exponentially fast (in the presence of other, not so quickly varying, factors). Hence, for any fixed Δ > 0, the asymptotics of the probability P(Sn ∈ Δ[x)) will differ from that of P(Sn ≥ x) just by a constant factor (1 − e^{−λ+ Δ}). So the headings of §§ 6.2 and 6.3 reflect the methodological side of the subject rather than its essence, since the assertions in these sections were obtained with the help of the integro-local theorems of Chapters 3 and 4. Theorems 6.2.4 and 6.3.3 show that the integro-local theorems of §§ 3.7 and 4.7, together with the Cramér transform, are effective tools for studying large deviation probabilities outside the Cramér zone.
In concluding this section we observe that, under the conditions assumed in it, one can also obtain asymptotic expansions in the integro-local and integral theorems, in a way similar to that used in § 4.4 (see also [64]).

6.4 The conditional distribution of the trajectory {Sk} given Sn ∈ Δ[x)

The present section is an analogue of § 4.9. For F ∈ ER, the conditional distribution of the trajectory {Sk; 1 ≤ k ≤ n} given Sn ≥ x and that given Sn ∈ Δ[x) will not differ much from each other. So, for convenience of exposition, we will restrict ourselves to conditioning on the events {Sn ∈ Δ[x)} only.
First observe that, in the Cramér deviation zone (when θ := x/n < θ+), in contrast with the results of § 4.9, for x, n → ∞ the process {x^{−1} S_{nt}; t ∈ [0, 1]} given Sn ∈ Δ[x) converges in distribution to the deterministic process {ζ1(t) ≡ t; t ∈ [0, 1]}. This is the 'first-order approximation' in trajectory space. The 'second-order approximation' states that the (conditional) process {n^{−1/2}(S_{nt} − xt); t ∈ [0, 1]} given Sn ∈ Δ[x) converges in distribution to the Brownian bridge process {w(t) − tw(1)}, where {w(t)} is the standard Wiener process (see [45, 47]).
In the present section, we will show that, outside the Cramér deviation zone, when y = x − θ+ n → ∞, we have for the conditional distributions of the sums {Sk} of r.v.'s with distributions from the class ER a picture that is intermediate between two absolutely different behaviour types: that described above and that established in § 4.9.
As in § 4.9, let E(·) denote a step process on [0, 1]:
0 for t ω, E(t) := 1 for t > ω, where ω is an r.v. uniformly distributed over [0, 1]. Theorem 6.4.1. Let conditions (6.2.1), (6.2.2) be satisfied, Δ > 0 be an arbitrary fixed number, n → ∞ and one of the following two conditions hold true: (1) α ∈ (1, 2), y = x − θ+ n n1/γ , where γ < α is some fixed number; √ (2) α > 2, y = x − θ+ n n ln n. Then the conditional distribution of the process −1 y (Snt − θ+ nt); t ∈ [0, 1] given the event Sn ∈ Δ[x) will converge weakly in D(0, 1) to the distribution of E(t). Proof. We will present only a sketch of the proof. It is not difficult to give a more detailed and formal argument, using [45, 47] and Chapters 3 and 4. The Cram´er transform of the distribution of the sequence S1 , . . . , Sn at the point λ+ has the property that the distribution of the process {Snt } in the zone (λ )
+ of deviations of order x coincides with the distribution of {Snt − θ+ nt } (up
to a factor ϕ^n(λ+)e^{−λ+x}, cf. (6.2.4), which then disappears when we switch to the conditional distribution given Sn ∈ Δ[x)). The conditional distribution of the process {S^{(λ+)}_{nt} − θ+nt} given Sn ∈ Δ[x) coincides with the conditional distribution of

Z_{nt} := S^{(λ+)}_{nt} − θ+nt

given Zn ∈ Δ[y), where y = x − θ+n. Now, the r.v.'s ζ = ξ^{(λ+)} − θ+ have a regularly varying right tail Vζ and an exponentially fast decaying left tail. Hence we can make use of Theorems 4.9.2 and 4.9.4 to describe the conditional distribution of Z_{nt} given Zn ∈ Δ[y). Therefore, we will obtain the conditional behaviour of the original trajectory S_{nt} by taking the sum of the trajectory θ+t and the trajectory described in Theorems 4.9.2 and 4.9.4. This completes the proof of the theorem.

The assertion of Theorem 6.4.1 means that the trajectory n^{−1}S_{nt} given the event Sn ∈ Δ[x) can be represented as

n^{−1}S_{nt} = θ+t + yn^{−1}E(t) + rn(t),

where rn = O(n^{−1}b(n)) and

b(n) = V_ζ^{(−1)}(1/n) when α ∈ (1, 2),  b(n) = √n when α > 2.
As in § 4.9, one could also obtain here an approximation of higher order by determining the behaviour of the remainder rn. Namely, as in Theorem 4.9.3, one can show that conditioning on Sn ∈ Δ[x) results in the weak convergence

(1/b(n)) [S_{nt} − θ+nt − yE_{ωn}(t)] ⇒ ζ(t) − ζ(1)E(t),

where the ωn have the same meaning as in Theorem 4.9.3, but refer to the r.v.'s ζi, and ζ(t) is the corresponding stable process; {ζ(t)} and {E(t)} are independent.
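The limiting object of Theorem 6.4.1 — a single random step of height 1 at a uniformly distributed time ω — is easy to simulate. The following sketch (an illustration assumed by the editor, not from the original text) draws paths of E(·) and checks the elementary fact that E E(t) = P(ω < t) = t.

```python
import random

def sample_E(rng):
    """Draw omega ~ U[0, 1) and return the step path E(t) = 1{t > omega}."""
    omega = rng.random()
    return lambda t: 0.0 if t <= omega else 1.0

rng = random.Random(12345)
n_paths = 20000
t0 = 0.3

# Empirical mean of E(t0): should be close to t0, since E(t0) = 1
# exactly when omega < t0, an event of probability t0.
mean_at_t0 = sum(sample_E(rng)(t0) for _ in range(n_paths)) / n_paths
print(mean_at_t0)
```

In the theorem, the conditional trajectory y^{−1}(S_{nt} − θ+nt) converges to exactly such a path: the large deviation is produced by one big jump at a time asymptotically uniform on [0, n].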
6.5 Asymptotics of the probability of the crossing of a remote boundary by the random walk

6.5.1 Introduction

As in Chapters 3 and 4, we will study here the asymptotics of the probability P(Gn) of the crossing of a remote boundary g(·) = {g(k); k = 1, . . . , n} by the trajectory {Sk; k = 1, . . . , n}:

Gn := { max_{k≤n} (Sk − g(k)) ≥ 0 }.
Without loss of generality, we will assume in this section that Eξ = 0. We have a situation similar to the one that we had when studying the asymptotics of P(Sn ≥ x). When the boundary g(·) lies, roughly speaking, in the
Cramér deviation zone, the asymptotics of P(Gn) have been studied fairly thoroughly (see e.g. [36, 37, 38, 44]). If, for instance, g(n) = x, g(k) = ∞ for k < n,

θ = x/n < θ+ ≡ ϕ′(λ+)/ϕ(λ+)  and  λ+ = sup{λ : ϕ(λ) < ∞},

then for P(Gn) = P(Sn ≥ x) we obtain the asymptotics (6.1.16), (6.1.17). The asymptotic behaviour of P(S̄n ≥ x) and also that of the joint distribution P(S̄n ≥ x, Sn < x − y) for such x values was studied in detail in [33, 34, 37] (for lattice r.v.'s ξ, and also when the distribution of ξ has an absolutely continuous component).

In the general case, first assume for simplicity that the boundary g(·) has the form

g(k) = xf(k/n),  k = 1, . . . , n,  inf_{0≤t≤1} f(t) > 0,  (6.5.1)

where f(t), t ∈ [0, 1], is a fixed piecewise continuous function. As shown in [36], a determining role for the behaviour of the probabilities P(Gn) is played by the level lines pθ(t), t ∈ [0, 1], pθ(1) = θ, which are given implicitly as solutions to the equation

tΛ(pθ(t)/t) = Λ(θ),  t ∈ [0, 1],  (6.5.2)
for fixed values of the parameter θ. The relation (6.5.2) is obtained by equating the expressions kΛ(pθ(k/n)n/k) and nΛ(θ). Owing to the well-known logarithmic asymptotics [80]

−ln P(Sk ≥ y) ∼ kΛ(y/k)  as  k → ∞,  (6.5.3)
the above values are the principal terms of the asymptotics for the quantities −ln P(Sk ≥ pθ(k/n)n) and −ln P(Sn ≥ θn) respectively (cf. (6.1.16), (6.1.17)). Therefore, any two points on a given level line pθ(t), t ∈ [0, 1], have the property that, for a suitably scaled random walk (for which both the time and the space coordinates are 'compressed' n times each), the probabilities that these points will be reached (in the respective times) are roughly the same.

As was shown in [36], the functions pθ(·) are concave and θt ≤ pθ(t) ≤ θ for t ∈ [0, 1]. Moreover, in the general case, on the left of the point

tθ := min{1, Λ(θ)/Λ(θ+)}  (6.5.4)

the function pθ is linear with a slope coefficient whose value does not depend on θ:

pθ(t) = λ+^{−1}Λ(θ) + g+t,  t ∈ [0, tθ],  g+ := λ+^{−1} ln ϕ(λ+),  (6.5.5)
whereas on the right of tθ this function is analytic with a derivative continuous at tθ. The deviation function Λ(θ) increases for θ > 0, so that for θ ≥ θ+ one has tθ = 1 owing to (6.5.4), and hence (6.5.5) holds for all t ∈ [0, 1]. This also implies that pθ(t) is an increasing function of θ for θ > 0. Moreover, it is obvious from (6.5.2) and (6.5.4) that

t ∈ [0, tθ)  iff  pθ(t)/t > θ+.  (6.5.6)
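The implicit equation (6.5.2) defining the level lines is easy to solve numerically. The following sketch (an assumed illustration: it takes standard Gaussian jumps, for which Λ(v) = v²/2 and (6.5.2) has the closed-form solution pθ(t) = θ√t, used here only as a cross-check; note that in the Gaussian case λ+ = ∞, so the linear initial segment of (6.5.5) is absent) solves (6.5.2) by bisection.

```python
import math

def Lambda(v):
    # Deviation (rate) function for standard Gaussian jumps -- an assumed
    # concrete example, not the general Lambda of the chapter.
    return v * v / 2.0

def level_line(theta, t, lo=1e-9, hi=100.0, iters=200):
    """Solve t*Lambda(p/t) = Lambda(theta) for p by bisection (eq. (6.5.2))."""
    target = Lambda(theta)
    f = lambda p: t * Lambda(p / t) - target  # increasing in p for p > 0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

theta = 1.5
for t in (0.25, 0.5, 1.0):
    # Closed form for the Gaussian case: p_theta(t) = theta * sqrt(t).
    print(t, level_line(theta, t), theta * math.sqrt(t))
```

The computed values exhibit the general properties stated above: pθ(·) is concave, pθ(1) = θ, and θt ≤ pθ(t) ≤ θ.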
Now let θ∗ > 0 = Eξ be the minimum value of θ for which a curve from the family {pθ(·); θ > 0} will 'touch' the scaled boundary xn^{−1}f(·); cf. (6.5.1). Then, using the well-known logarithmic asymptotics (6.5.3) and the crude bounds

max_{k≤n} P(Sk ≥ xf(k/n)) ≤ P(Gn) ≤ Σ_{k=1}^{n} P(Sk ≥ xf(k/n)) ≤ n max_{k≤n} P(Sk ≥ xf(k/n))

(or the results of [36]), one can easily verify that, as n → ∞,

ln P(Gn) ∼ −nΛ(θ∗).  (6.5.7)
In other words, the probability that the random walk will cross the boundary (6.5.1) has the same logarithmic asymptotics as the probability that the random walk will be above the point g(k) = xf(k/n) = npθ∗(k/n) at the time k := ⌊nt∗⌋, where t∗ := inf{t > 0 : pθ∗(t) = xn^{−1}f(t)} is the point where the level lines 'touch' the scaled boundary for the first time.

In the case when t∗ lies in the 'regularity interval' (tθ∗, 1) (i.e. when we are within the Cramér deviation zone), the exact asymptotics of P(Gn) were obtained in [36] under rather broad assumptions on f. In the present chapter, we are dealing with the alternative situation

t∗ ∈ (0, tθ∗),  (6.5.8)

and assuming, as in §§ 6.2 and 6.3, that

λ+ < ∞,  ϕ(λ+) < ∞.  (6.5.9)
In this case, the nature of the asymptotics of P(Gn) will be quite different from that in the case when the boundary g(·) lies within the Cramér deviation zone. Obtaining the asymptotics of P(Gn) in the case (6.5.8) is more difficult than in the Cramér case t∗ ∈ (tθ∗, 1); moreover, in the present situation there exists no common (universal) law for P(Gn). As we have already noted, most often one encounters in applications boundaries that belong to one of the following two main types:

(1) g(k) = x + gk, k ≥ 1, where the sequence {gk} depends neither on x nor on n;
(2) g(k) = xf(k/n), where f(t) > 0 is a function on [0, 1] that depends neither on x nor on n.

In this section, we will somewhat change the form in which we represent the boundaries {g(k)} and will write

g(k) = x + g_{k,x,n},  k = 1, 2, . . . ;

this includes representations (1) and (2) as special cases. In addition, in each of the typical cases to be dealt with below we will impose on {g_{k,x,n}} a few specific conditions. Recall that

g+ = λ+^{−1} ln ϕ(λ+) > 0  (6.5.10)

(because ϕ′(0) = Eξ = 0 and ϕ(λ) is a convex function). The above-mentioned 'typical cases' of boundaries refer to situations where the level lines first touch the boundary at the left endpoint, at the right endpoint or at a middle point, respectively. They are characterized by the following sets of conditions [A·].

[A0]
For some fixed k0 ≥ 1 and γ > 0,

g_{k,x,n} ≥ (g+ + γ)k  for  k > k0  (6.5.11)

for all large enough x and n, and there exists a fixed number a (independent of x and n) such that, for any fixed k ≥ 1,

g_{k,x,n} → ak  as  x, n → ∞.  (6.5.12)
Note that, owing to (6.5.11), a ≥ g+ + γ necessarily holds and also that condition (6.5.12) actually stipulates convergence on an initial segment of the boundary {g_{k,x,n}; k = 1, . . . , n} (the relation (6.5.12) holds uniformly in k ≤ N, where the value N = N(x, n) → ∞ grows slowly enough as x, n → ∞, so that for, say, k > √n the values of g_{k,x,n} could be quite far from ak). Condition (6.5.12) could be referred to as asymptotic local linearity of the boundary at zero.

An example of a boundary for which condition [A0] is met is given by a boundary of the form (6.5.1) with x ∼ cn, f(0) = 1 and f(t) − f(0) ≥ c^{−1}(g+ + γ)t and such that the function f(t) has a finite right derivative f′(0) > c^{−1}(g+ + γ) at t = 0 (in this case a = cf′(0)).

The next condition has a similar form but refers to the terminal segment of the boundary {g_{k,x,n}; k = 1, . . . , n}.

[An] For some fixed k0 ≥ 1 and γ > 0,

g_{n−k,x,n} ≥ −(g+ − γ)k  for  k > k0
for all large enough x and n, g_{n,x,n} = 0 and there exists a fixed value b (independent of x and n) such that, for any fixed k ≥ 1,

g_{n−k,x,n} → −bk  as  x, n → ∞.  (6.5.13)

Remarks similar to those we made regarding condition [A0] apply to [An] as well (in particular, that b ≤ g+ − γ). The case when the level lines first touch the boundary inside the temporal interval is described by the following combination of the above conditions.

[Am] There exists a sequence m = m(n) → ∞ such that n − m → ∞, g_{m,x,n} = 0 and the sequences

{g_{m+k,x,n}; k ≥ 1}  and  {g_{m−k,x,n}; k ≥ 1}

possess the properties stated in conditions [A0] and [An] respectively.

Observe that, in the above-listed cases, the level lines pθ∗(t) 'touch' the boundary n^{−1}g(⌊nt⌋) at a non-zero angle. To get an idea of what happens in the case of a smooth contact of these lines, we will consider the following condition:

[A0,n]
g(k) = x + g+k,  k = 1, . . . , n.

The analysis of the cases [A·] below will show how diverse the asymptotic behaviour of P(Gn) can be. Including in our considerations the general case of 'smooth contact' of the level lines with the boundary would lead to an even wider variety of different types of asymptotic behaviour and to much more complicated proofs. At the same time, as we will see, under conditions [A·] the case of an arbitrary boundary can often be reduced to that of a linear boundary.

6.5.2 Boundaries under condition [A0]

In this case, the boundary g(·) increases so fast that the most likely scenario for its crossing is that the random walk exceeds the boundary at the very beginning of the time interval. Hence the asymptotics of P(Gn) will be the same as that of P(G∞), the probability that the boundary g(·) will ever be crossed. Prior to stating the main result observe that, when a > g+, for the random walk

Sk(a) := Sk − ak,  k = 0, 1, . . . ,

generated by the r.v.'s ξj(a) := ξj − a, we have Eξj(a) = −a < −g+ < 0 (see (6.5.10)) and therefore

S̄(a) := sup_{k≥0} Sk(a) < ∞  a.s.

Hence the first ladder epoch

ηa := η(0+, a) = inf{k ≥ 1 : Sk(a) > 0}  (6.5.14)
is an improper r.v., P(ηa < ∞) < 1.

Since any r.v.f. is an upper-power function (see Definition 1.2.20, p. 28), the following assertion holds true for a distribution class that is wider than ER.

Theorem 6.5.1. Assume that the following conditions are met:

F+(t) = e^{−λ+t}V(t),  ϕ(λ+) < ∞,  V(t) is upper-power.  (6.5.15)

If condition [A0] holds then, as x → ∞ and n → ∞,

P(Gn) ∼ c(a)F+(x),  (6.5.16)

where

c(a) := (1 − P(ηa < ∞)) / [(e^{λ+a}e^{−λ+g+} − 1)(1 − E[exp{λ+S_{ηa}(a)}; ηa < ∞])].  (6.5.17)

We will need two lemmata to prove the theorem. The first follows immediately from Theorem 12, Chapter 4 of [42].

Lemma 6.5.2. If (6.5.15) holds and ϕ_{(a)}(λ+) := Ee^{λ+ξ(a)} < 1 then, as x → ∞,

P(S̄(a) ≥ x) ∼ c(a)F+(x),

where c(a) is given in (6.5.17).

Lemma 6.5.3. Assume that (6.5.15) holds true. Then, for any a > g+, we have ϕ_{(a)}(λ+) = e^{−λ+(a−g+)} < 1 and, for any fixed N ≥ 1, as x → ∞,

P(S_N(a) ≥ x) ∼ N ϕ_{(a)}^{N−1}(λ+) F+(x)  (6.5.18)

and

P(sup_{k>N} Sk(a) ≥ x) ≤ cN ϕ_{(a)}^{N}(λ+) F+(x)(1 + o(1)).
(6.5.19)
Proof of Lemma 6.5.3. From the definition of g+ = λ+^{−1} ln ϕ(λ+) we have

ϕ_{(a)}(λ+) = e^{−λ+a}ϕ(λ+) = e^{−λ+(a−g+)} < 1  for  a > g+.

Further, repeating the argument from Example 1.2.11 (p. 19), we can establish that

P(S2(a) ≥ x) ∼ 2ϕ_{(a)}(λ+)F+(x)  as  x → ∞

(see (1.2.11)). The demonstration of (6.5.18) is completed by induction, in exactly the same way as in the above-mentioned computation in Example 1.2.11.

Now we will prove (6.5.19). Clearly

sup_{k>N} Sk(a) =d S_{N+1}(a) + S̄,
where S̄ =d S̄(a) is an r.v. independent of S_{N+1}(a). Owing to Lemma 6.5.2 and the fact that V(t) is an l.c. function, we have

P(S̄ ≥ t) ≤ cP(ξ ≥ t) ≤ c1 P(ξ − a ≥ t),  t ≥ 0,

whereas for t < 0 the above inequality is obvious. Therefore, by virtue of (6.5.18),

P(sup_{k>N} Sk(a) ≥ x) = ∫ P(S_{N+1}(a) ∈ dt) P(S̄ ≥ x − t)
 ≤ c1 ∫ P(S_{N+1}(a) ∈ dt) P(ξ − a ≥ x − t) = c1 P(S_{N+2}(a) ≥ x)
 ≤ c2 (N + 2) ϕ_{(a)}^{N+1}(λ+) F+(x)(1 + o(1)).
The lemma is proved.

Proof of Theorem 6.5.1. Fix an N ≥ k0 and put

ε = ε_{x,n}(N) := max_{k≤N} |g_{k,x,n} − ak|.

Clearly, for n ≥ N,

P(S̄(a) ≥ x + ε) − P(sup_{k>N} Sk(a) ≥ x + ε) ≤ P(sup_{k≤N} Sk(a) ≥ x + ε)
 ≤ P(Gn) ≤ P(sup_{k≤N} Sk(a) ≥ x − ε) + P(sup_{k>N} Sk(g+ + γ) ≥ x)
 ≤ P(S̄(a) ≥ x − ε) + cN e^{−λ+γN} F+(x)(1 + o(1)),

owing to (6.5.19) and the obvious equality ϕ_{(g++γ)}(λ+) = e^{−λ+γ}. Again using Lemmata 6.5.2 and 6.5.3, we obtain, after dividing the left- and right-hand sides of the previous relation by c(a)F+(x), the inequality

(F+(x + ε)/F+(x))(1 + o(1)) − cN e^{−λ+γN} ≤ P(Gn)/(c(a)F+(x)) ≤ (F+(x − ε)/F+(x))(1 + o(1)) + cN e^{−λ+γN},

since a − g+ ≥ γ. By choosing a large enough N one can make the term N e^{−λ+γN} arbitrarily small. To complete the proof of the theorem, it remains for us to notice that

F+(x ± ε)/F+(x) = e^{∓λ+ε} V(x ± ε)/V(x) → 1,

since, for any fixed N, one has that ε → 0 as x, n → ∞ owing to (6.5.12), and since V(x) is an l.c. function.
6.5.3 Boundaries under condition [An]

It follows from what we said in § 6.5.1 that, in the case [An], the 'most likely' crossing time of the boundary g(·) (given that the event Gn has occurred) will be close to the right-end point n. Recall that [An] implies the inequality b ≤ g+ − γ. Let

τk := −ξk^{(λ+)} + b,  Tk := τ1 + · · · + τk.  (6.5.20)

Clearly,

Eτ1 = b − θ+ < g+ − θ+ = λ+^{−1}(ln ϕ(λ+) − λ+θ+) = −Λ(θ+)/λ+ < 0.  (6.5.21)

Therefore

T̄ := sup_{k≥0} Tk < ∞  a.s.,  (6.5.22)

and we have from (6.1.1) that

ϕτ(λ) := Ee^{λτ1} = e^{λb} ϕ(λ+ − λ)/ϕ(λ+),

and hence

ϕτ(λ+) = e^{−λ+(g+−b)} ≤ e^{−λ+γ} < 1.  (6.5.23)
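The algebra behind (6.5.23) is elementary: evaluating ϕτ(λ) at λ = λ+ gives e^{λ+b}ϕ(0)/ϕ(λ+) = e^{λ+b}/ϕ(λ+) = e^{−λ+(g+−b)}, since ϕ(0) = 1 and g+ = λ+^{−1} ln ϕ(λ+). A numeric sketch with assumed placeholder values (any λ+ > 0, ϕ(λ+) > 1 and b ≤ g+ − γ reproduce the identity):

```python
import math

# Assumed placeholder values -- purely illustrative, not from the text.
lam_plus = 1.3
M = 2.5                             # stands for phi(lambda_+)
g_plus = math.log(M) / lam_plus     # g_+ = ln(phi(lambda_+)) / lambda_+
gamma = 0.4
b = g_plus - gamma                  # b <= g_+ - gamma

# phi_tau(lambda_+) = e^{lambda_+ b} * phi(0) / phi(lambda_+), phi(0) = 1:
phi_tau_at_lam = math.exp(lam_plus * b) / M

# (6.5.23): this equals e^{-lambda_+ (g_+ - b)} and is < 1.
print(phi_tau_at_lam, math.exp(-lam_plus * (g_plus - b)))
```

The bound ϕτ(λ+) ≤ e^{−λ+γ} < 1 is what makes the geometric series Σ ϕτ^k(λ+) in the proof of (6.5.38) converge.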
Now put y := x − θ+n ≡ (θ − θ+)n.

Theorem 6.5.4. For α > 1, let

F+(t) = e^{−λ+t}V(t),  V(t) = t^{−α−1}L(t),  L(t) be an s.v.f.,  (6.5.24)

let condition [An] be met, n → ∞ and s > 0 be fixed. Then, for α ∈ (1, 2), uniformly in y ≥ n^{1/α′} for any fixed α′ < α, one has

P(Gn; Sn ≥ x − s) = e^{−nΛ(θ)} (nV(y)/ϕ(λ+)) [E(e^{λ+T̄}; T̄ < s) + e^{λ+s}P(T̄ ≥ s)](1 + o(1))  (6.5.25)

and

P(Gn) = e^{−nΛ(θ)} (nV(y)/ϕ(λ+)) Ee^{λ+T̄}(1 + o(1)),  Ee^{λ+T̄} < ∞.  (6.5.26)
In the case α > 2, these relations hold uniformly in y ≥ Nn√(n ln n) for any fixed sequence Nn → ∞.

Proof. To simplify the argument, we will confine ourselves to considering the special case of a linear boundary

g(k) = x − b(n − k),  k = 1, . . . , n,  b ≤ g+ − γ.  (6.5.27)
The changes that need to be made in the proof in the case of general (asymptotically locally linear) boundaries satisfying condition [An] amount to adding an argument similar to that used in the proof of Theorem 6.5.1.

First we will show that, if the random walk does cross the linear boundary (6.5.27) during the time interval [0, n] then, with a high probability, this will occur at the very end of the interval. For j ≤ n (the choice of j will be made later), introduce the event

Gn,j := { max_{n−j≤i≤n} (Si − g(i)) ≥ 0 }.

On the one hand, it is obvious that P(Gn) ≥ P(Gn,j) ≥ P(Sn ≥ x). On the other hand, for the event difference G′n,j := Gn \ Gn,j, we have from Corollaries 6.2.6 and 6.3.4 the following bound:

P(G′n,j) ≤ Σ_{k=1}^{n−j} P(Sk ≥ g(k)) ≤ cnV(y) Σ_{k=1}^{n−j} e^{−kΛ(g(k)/k)}
 = cnV(y)e^{−nΛ(θ)} Σ_{k=1}^{n−j} exp{nΛ(θ) − kΛ(g(k)/k)}.  (6.5.28)
Since both the argument values, θ and g(k)/k > θ+, of the function Λ in (6.5.28) lie, in the case under consideration, in the interval where this function is linear (see (6.2.11)), we conclude that the argument of the exponential on the right-hand side of (6.5.28) is equal to

n[λ+ x/n − ln ϕ(λ+)] − k[λ+ (x − b(n − k))/k − ln ϕ(λ+)] = λ+(b − g+)(n − k) ≤ −λ+γ(n − k).

Therefore
P(G′n,j) ≤ cnV(y)e^{−nΛ(θ)} Σ_{k=1}^{n−j} e^{−λ+γ(n−k)} ≤ c1 e^{−λ+γj} P(Sn ≥ x),

by virtue of Corollaries 6.2.6 and 6.3.4. By choosing a large enough j, this probability could be made arbitrarily small relative to P(Sn ≥ x) and hence also relative to P(Gn) and P(Gn; Sn ≥ x − s) ≥ c P(Sn ≥ x − s). This means that we only have to evaluate the probabilities P(Gn,j) and P(Gn,j; Sn ≥ x − s).

Next we observe that, for any s > 0, one has

P(Gn,j; Sn ≥ x − s) = P(Sn ≥ x) + ∫_{−s}^{0} P(Gn,j; Sn ∈ x + du).  (6.5.29)
In the range of x values with which we are dealing, one can make the Cramér transform according to (6.1.1) and switch to the random walk {Zi}, which was introduced in (6.2.5) and (6.2.8). Then, for u < 0, putting sk := x1 + · · · + xk, k = 1, 2, . . . , and

A := {(x1, . . . , xn) : max_{n−j≤k≤n} (sk − g(k)) > 0, sn ∈ x + du},

we obtain the representation

P(Gn,j; Sn ∈ x + du) = ∫_A P(ξ1 ∈ dx1, . . . , ξn ∈ dxn)
 = ϕ^n(λ+) ∫_A e^{−λ+sn} P(ξ1^{(λ+)} ∈ dx1, . . . , ξn^{(λ+)} ∈ dxn)
 = e^{−nΛ(θ)} e^{−λ+u} P(Hn,j | Zn = y + u) P(Zn ∈ y + du),  (6.5.30)

where the event

Hn,j := { max_{n−j≤i≤n} (Zi − h(i)) ≥ 0 }

means that the random walk {Zi} crossed the boundary

h(i) := g(i) − θ+i = y + (θ+ − b)(n − i) =: y + h_{n,i},
i = 1, . . . , n,  (6.5.31)

during the time interval [n − j, n]. Note that the slope coefficient of this linear boundary is negative (see (6.5.21)). Thus the integral in (6.5.29) takes the form

e^{−nΛ(θ)} ∫_{−s}^{0} e^{−λ+u} P(Hn,j | Zn = y + u) P(Zn ∈ y + du).  (6.5.32)
Since we have not assumed that V(t) is absolutely continuous (even for large enough t), we will now have to approximate the last integral by a finite sum. Fix a Δ > 0 such that k := s/Δ is an integer, and put ym := y − mΔ, m = 1, 2, . . . It is clear that the relative error of approximating the integral in (6.5.32) by the sum

Σ := Σ_{m=1}^{k} e^{λ+mΔ} P(Hn,j | Zn ∈ Δ[ym)) P(Zn ∈ Δ[ym))  (6.5.33)

does not exceed

max_{m≤k} (e^{λ+mΔ} − e^{λ+(m−1)Δ}) ≤ λ+ e^{λ+s} Δ
and so could be made uniformly arbitrarily small by choosing a small enough Δ.
Now consider the conditional probabilities

P(Hn,j | Zn ∈ Δ[ym))
 = P(max_{n−j≤i≤n} (Zi − y − h_{n,i}) ≥ 0 | Zn ∈ Δ[ym))
 ≤ P(max_{n−j≤i≤n} (Zi − Zn − (m − 1)Δ − h_{n,i}) ≥ 0 | Zn ∈ Δ[ym))
 = P(max_{i≤j} T_i^{(n)} ≥ (m − 1)Δ | Zn ∈ Δ[ym)),  (6.5.34)

where T_i^{(n)} is the time-reversed random walk with jumps τ_i^{(n)} := τ_{n−i+1}:

T_i^{(n)} := τ_1^{(n)} + · · · + τ_i^{(n)} = Tn − T_{n−i},  i = 0, 1, . . . , n.
Observe that the event under the last probability sign is expressed in terms of the r.v.'s ζ_{n−j+1}, ζ_{n−j+2}, . . . , ζn only (see (6.2.5), (6.2.8)), whereas their joint conditional distribution given Zn ∈ Δ[ym) will be close to the unconditional distribution of ζ1, . . . , ζj. Indeed, following the remark made at the beginning of § 4 in [45], for any fixed bounded Borel set B we have

P(ζn ∈ B | Zn ∈ Δ[ym)) = ∫_B P(ζn ∈ du | Zn ∈ Δ[ym))
 = ∫_B P(ζn ∈ du) P(Z_{n−1} ∈ Δ[ym − u)) / P(Zn ∈ Δ[ym)) → P(ζ1 ∈ B),

because the ratio of the probabilities in the last integrand converges to unity, owing to Corollaries 6.2.6 and 6.3.4. One can show in the same way that for any fixed j the joint conditional distribution of the random vector (ζ_{n−j+1}, ζ_{n−j+2}, . . . , ζn) given Zn ∈ Δ[ym) tends to the unconditional distribution of (ζ1, . . . , ζj). Therefore

P(Hn,j | Zn ∈ Δ[ym)) ≤ P(max_{i≤j} Ti ≥ (m − 1)Δ)(1 + o(1)).  (6.5.35)

We can establish in a similar way that

P(Hn,j | Zn ∈ Δ[ym)) ≥ P(max_{i≤j} Ti ≥ mΔ)(1 + o(1)).  (6.5.36)

Note that

P(max_{i≤j} Ti ≥ u) = P(T̄ ≥ u) + εj,  (6.5.37)

where εj = εj(u) → 0 as j → ∞ uniformly in u ∈ [0, s], since T̄ < ∞ a.s. Now combining (6.5.35) and (6.5.37) and using Theorems 6.2.4 and 6.3.3, we obtain for the sum (6.5.33) the upper bound

Σ ≤ n (λ+V(y)/ϕ(λ+)) Σ_{m=1}^{k} Δe^{λ+mΔ} [P(T̄ ≥ (m − 1)Δ) + εj] (1 + o(1)).
Now, fixing an arbitrarily small δ > 0, one can always choose j large enough that εj < δ and Δ small enough that the relative error of approximating the integral in (6.5.32) by the sum (6.5.33) and the relative error of the approximation

Σ_{m=1}^{k} Δe^{λ+mΔ} P(T̄ ≥ (m − 1)Δ) ≈ ∫_0^s e^{λ+u} P(T̄ ≥ u) du

will each not exceed δ. Together with a similar argument for the lower bound (using (6.5.36) instead of (6.5.35)), this shows that, for j = j(n) → ∞ slowly enough as n → ∞, the integral in (6.5.32) is asymptotically equivalent to

(nV(y)/ϕ(λ+)) ∫_0^s λ+ e^{λ+u} P(T̄ ≥ u) du = (nV(y)/ϕ(λ+)) [E(e^{λ+T̄}; T̄ < s) + e^{λ+s}P(T̄ ≥ s) − 1]
(we have used integration by parts). Thereby the asymptotic behaviour of the second term on the right-hand side of (6.5.29) is established. Next we observe that the asymptotic behaviour of the first term on the right-hand side of (6.5.29) has already been established in Corollaries 6.2.6 and 6.3.4. Taking into account the remark we made (at the beginning of the proof of the theorem) concerning the possibility of replacing Gn with Gn,j, the sum of these two asymptotics will give the desired representation (6.5.25).

To prove (6.5.26), note that P(Gn,j) = P(Gn,j; Sn ≥ x − s) + P(Gn,j; Sn < x − s), where

P(Gn,j; Sn < x − s) ≤ P(Gn,j) P(min_{k≤j}(Sk − bk) < −s)

and, for a fixed j, the second factor on the right-hand side of this inequality could be made arbitrarily small by choosing a large enough s. Again turning to the remark about replacing Gn with Gn,j, we conclude that the desired result will follow immediately from (6.5.25) once we have shown that Ee^{λ+T̄} < ∞, since in this case

E(e^{λ+T̄}; T̄ < s) + e^{λ+s}P(T̄ ≥ s) → Ee^{λ+T̄} < ∞  as  s → ∞.  (6.5.38)

To prove (6.5.38), observe that from (6.5.23) and Chebyshev's inequality we have

P(T̄ ≥ t) ≤ Σ_{k≥0} P(Tk ≥ t) ≤ e^{−λ+t} Σ_{k≥0} ϕτ^k(λ+) ≤ e^{−λ+t}/(1 − e^{−λ+γ}),
and hence Ee^{λT̄} < ∞ for any λ ∈ [0, λ+). By the monotone convergence theorem, it remains to show that lim_{λ↑λ+} Ee^{λT̄} < ∞. Now, the last relation follows from the factorization identity (see e.g. § 18 of [42] or § 3, Chapter 11 of [49])

Ee^{λT̄} = w(λ)/(1 − ϕτ(λ)),  (6.5.39)

where w(λ) is a function that is analytic in the half-plane Re λ > 0 and continuous for Re λ ≥ 0. Indeed, we see from (6.5.23) that the right-hand side of (6.5.39) is analytic for Re λ ∈ (0, λ+) and continuous in the closed strip Re λ ∈ [0, λ+]. Thus we have proved (6.5.38).

The uniform smallness of the remainder terms in (6.5.25), (6.5.26) follows from the uniform smallness in the assertions of Theorems 6.2.4 and 6.3.3 and their corollaries. Theorem 6.5.4 is proved.
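The integration by parts used in the proof of Theorem 6.5.4, ∫_0^s λ+ e^{λ+u} P(T̄ ≥ u) du = E(e^{λ+T̄}; T̄ < s) + e^{λ+s}P(T̄ ≥ s) − 1, can be checked numerically. The sketch below (an assumed toy choice: it takes T̄ exponentially distributed with rate μ > λ+, which is not the actual supremum T̄ of the theorem but makes both sides computable in closed form) compares a Riemann-sum approximation of the left-hand side with the closed-form right-hand side.

```python
import math

lam, mu, s = 0.7, 2.0, 3.0   # assumed toy parameters, mu > lam

def tail(u):
    """P(T >= u) for the toy choice T ~ Exp(mu)."""
    return math.exp(-mu * u)

# Left-hand side: midpoint Riemann sum of int_0^s lam*e^{lam u}*P(T >= u) du.
n = 200000
h = s / n
lhs = sum(lam * math.exp(lam * (i + 0.5) * h) * tail((i + 0.5) * h)
          for i in range(n)) * h

# Right-hand side: E[e^{lam T}; T < s] + e^{lam s} P(T >= s) - 1, with
# E[e^{lam T}; T < s] = mu/(mu - lam) * (1 - e^{-(mu - lam) s}) for Exp(mu).
rhs = (mu / (mu - lam)) * (1 - math.exp(-(mu - lam) * s)) \
      + math.exp(lam * s) * tail(s) - 1.0
print(lhs, rhs)
```

Both sides agree; letting s → ∞ in this identity is exactly how (6.5.38) is used to pass from (6.5.25) to (6.5.26).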
6.5.4 Boundaries under condition [Am]

In this case, the 'most likely' time of the crossing of the boundary g(·) by the random walk is in the vicinity of m.

Theorem 6.5.5. Let conditions (6.5.24) and [Am] be satisfied, n → ∞, and let

θ := x/m,  y := x − θ+m ≡ (θ − θ+)m.

Then, for α ∈ (1, 2), uniformly in y ≥ n^{1/α′} for any fixed α′ < α,

P(Gn) = e^{−mΛ(θ)} (mV(y)/ϕ(λ+)) Ee^{λ+ max{T̄, S̄(a)}} (1 + o(1)),

where T̄ and S̄(a) are independent copies of the r.v.'s defined in (6.5.22) and (6.5.14) respectively. In the case α > 2, the above representation for the probability P(Gn) holds uniformly in y ≥ Nn√(n ln n) for any fixed sequence Nn → ∞.

Proof. Using an argument similar to (6.5.30) but transforming the distributions of the first m r.v.'s ξi only, we obtain for u < 0 and

A := { max_{k≤n} (sk − g(k)) ≥ 0, sm ∈ x + du }

the representation

P(Gn; Sm ∈ x + du)
 = ϕ^m(λ+) ∫_A e^{−λ+sm} P(ξ1^{(λ+)} ∈ dx1, . . . , ξm^{(λ+)} ∈ dxm, ξ_{m+1} ∈ dx_{m+1}, . . . , ξn ∈ dxn)
 = e^{−mΛ(θ)} e^{−λ+u} P(Hm ∪ Gm,n,u | Zm = y + u) P(Zm ∈ y + du),  (6.5.40)
where the events

Hm := { max_{k≤m} (Zk − h(k)) ≥ 0 },  h(k) := g(k) − θ+k,  k = 1, . . . , m,

and

Gm,n,u := { max_{k≤n−m} [(S_{m+k} − Sm) − (g(m + k) − x)] ≥ −u }

are independent. It is not difficult to see that, for a fixed Zm equal to y + u, where y ≡ x − θ+m ≡ g(m) − θ+m, the occurrence of the event Hm is equivalent to the occurrence of the event

{ max_{k≤m} [(Tm − T_{m−k}) + g(m) − g(m − k) − bk] ≥ −u }.

Using the argument from the proof of Theorem 6.5.4 we obtain that, for a fixed Zm = y + u,

P(Hm | Zm) → P(T̄ ≥ −u)  as  n → ∞

(here we again have to consider events of the form Gn,j and Hn,j in order to approximate integrals by finite sums etc.). Thus the right-hand side of (6.5.40) takes the form

e^{−mΛ(θ)} e^{−λ+u} [P(max{T̄, S̄(a)} ≥ −u) + o(1)] P(Zm ∈ y + du).

The rest of the proof follows the argument establishing Theorem 6.5.4.

6.5.5 Boundaries under condition [A0,n]

The case [A0,n] adjoins, in a certain sense, both [A0] and [An]. Recall that here we are considering a boundary of the form

g(k) = g(0) + g+k,
k ≥ 1,  (6.5.41)

and that for ξj(g+) := ξj − g+ we have ϕ_{(g+)}(λ+) ≡ Ee^{λ+ξ1(g+)} = 1. As was the case for [A0], in the case [A0,n] we can establish the asymptotics of P(Gn) for a distribution class that is wider than ER. We will use the r.v.'s Sk(a) and ηa, which were introduced before Theorem 6.5.1 (see p. 323).

Theorem 6.5.6. Let F+(t) = e^{−λ+t}V(t), where the function V(t) satisfies condition (6.1.21),

∫_0^∞ tV(t) dt < ∞.

If x → ∞, n → ∞ in such a way that

x/n + g+ < θ+ − γ
for a fixed γ > 0 then, for the boundary (6.5.41), we have

P(Gn) ∼ c0 e^{−λ+x},  (6.5.42)

where

c0 := (1 − P(η_{g+} < ∞)) / (λ+ E[S_{η_{g+}}(g+) exp{λ+ S_{η_{g+}}(g+)}; η_{g+} < ∞]).  (6.5.43)

In the case when x/n + g+ > θ+ − γ, the asymptotics of P(Gn) can depend on n.

Proof. We have
P(Gn) ≤ P(S̄(g+) ≥ x) ≤ P(Gn) + P(sup_{k>n} Sk(g+) ≥ x).

The probability in the middle part of this relation was evaluated in Theorem 11, Chapter 4 of [40] and has the following form.

Lemma 6.5.7. Under the conditions of Theorem 6.5.6, as x → ∞,

P(S̄(g+) ≥ x) ∼ c0 e^{−λ+x},

where c0 is defined in (6.5.43).

To complete the proof of Theorem 6.5.6, it remains to demonstrate that, as x → ∞, n → ∞,

P(sup_{k>n} Sk(g+) ≥ x) = o(e^{−λ+x}).

As before, let θ∗ = θ∗(x, n) be the minimum value of the parameter θ for which a curve from the family of level lines {pθ(·); θ > 0} will 'touch' the boundary n^{−1}g(⌊nt⌋) (although the level lines were introduced on p. 320 when considering a boundary of the form xf(k/n), the concept does not depend on the specific form of the boundary and retains its meaning in the general case). Since in the case under consideration λ+ < ∞ and θ+ < ∞ (cf. (6.1.19)) and hence Λ(θ+) < ∞, by virtue of (6.5.4) we have tθ > 0 for any fixed θ > 0. Therefore, there is a straight-line segment with a slope coefficient g+ at the beginning of any level line pθ(·). As the boundary (6.5.41) itself has a constant slope coefficient g+, and the level lines are concave, it is clear that the 'contact' of pθ∗(t) and n^{−1}g(⌊nt⌋) takes place over an entire interval at the initial part of the boundary and, in particular, for t = 0 one has

pθ∗(0) = g(0)/n = x/n = Λ(θ∗(x, n))/λ+.  (6.5.44)

Further, from the property pθ(1) = θ, the concavity of the level lines and (6.5.5), it follows owing to the assumptions of the theorem that

θ∗ = pθ∗(1) ≤ x/n + g+ ≤ θ+ − γ.
Therefore, Λ(θ∗)/Λ(θ+) ≤ 1 − c1, c1 > 0, so that by virtue of (6.5.4) we have tθ∗ ≤ 1 − c2, c2 > 0. From this it is easily seen that, for some c3 > 0 and all k ≥ n,

x/k + g+ ≥ θ∗(x, k) + c3

and hence

Λ(x/k + g+) ≥ Λ(θ∗(x, k) + c3) > Λ(θ∗(x, k)) + c4,  c4 > 0.

So, owing to inequality (6.1.18) and to (6.5.44), we have

P(sup_{k>n} Sk(g+) ≥ x) ≤ Σ_{k>n} P(Sk ≥ x + g+k) ≤ Σ_{k>n} e^{−kΛ(x/k+g+)}
 ≤ Σ_{k>n} e^{−kΛ(θ∗(x,k))−c4k} = e^{−λ+x} Σ_{k>n} e^{−c4k} = O(e^{−λ+x−c4n}) = o(e^{−λ+x}),

as required. The theorem is proved.
7

Asymptotic properties of functions of regularly varying and semiexponential distributions. Asymptotics of the distributions of stopped sums and their maxima. An alternative approach to studying the asymptotics of P(Sn ≥ x)

The present chapter continues § 1.4, where we studied the asymptotic properties of functions of subexponential distributions. More precisely, assuming that ζ is an r.v. with a subexponential distribution and that g(λ) = Ee^{iλζ} is its ch.f., we studied there properties of the preimage A(x) corresponding to the asymptotic function A(g(λ)), where A(w) is a given function of a complex variable:

A(x) := A([x, ∞)),  ∫ e^{iλx} A(dx) = A(g(λ)).

We refer to the measure A as a function of the distribution of the r.v. ζ. In this chapter, we will identify ζ with the original r.v. ξ considered in Chapters 1–6. (In Chapters 6 and 8 and in a number of other places as well, ζ is actually identified with r.v.'s other than ξ, and this is why this notation was introduced in § 1.4.) Thus, in this chapter, g(λ) = f(λ) ≡ Ee^{iλξ}, where ξ has a distribution F, F+(t) = F([t, ∞)) = V(t), and V is defined in (1.1.2) in the case when F ∈ R and in (5.1.1)–(5.1.3) (p. 233; see also (1.2.28) and (1.2.29)) for F ∈ Se.

In the case when the function A(w) is analytic in the unit disk |w| < 1, including the boundary |w| = 1, and the distribution F is subexponential, the asymptotics of the preimage A(x) were studied in § 1.4. If, however, we assume that the distribution F belongs to R or Se then one can broaden the conditions on A, and the conditions we have to impose will now depend on the value of a := Eξ.

7.1 Functions of regularly varying distributions

In this section, we will assume only that

A(w) = Σ_{k=0}^{∞} ak w^k,
where the series Σ_{k=0}^{∞} ak is absolutely convergent, and that there exists A′(1) = Σ_{k=0}^{∞} k ak > 0. In some cases, we will also assume sufficiently fast (but not exponentially fast) decay of the sequence

T(n) := Σ_{k=n}^{∞} ak → 0,  n → ∞,

where T(n) > 0 for all large enough n. Furthermore, for a non-integer t we will put T(t) := T(⌊t⌋). As in § 1.4, the main object of study in this section will be the asymptotics of the preimage A(x), which clearly admits a representation of the form

A(x) = Σ_{n=0}^{∞} an P(Sn ≥ x).  (7.1.1)
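The objects entering the representation (7.1.1) — the series value A′(1) = Σ k ak and the tail T(n) = Σ_{k≥n} ak — are easy to compute for concrete coefficient sequences. The sketch below (an assumed illustration, not from the text) takes geometric coefficients ak = (1 − p)p^k, the weights that arise for geometric compounds such as ruin-type quantities, for which A′(1) = p/(1 − p) and T(n) = p^n in closed form.

```python
# Assumed illustration: geometric coefficients a_k = (1 - p) * p**k.
p = 0.6
K = 2000  # truncation point; the remaining tail is negligible for p = 0.6

a = [(1 - p) * p**k for k in range(K)]

A_prime_1 = sum(k * a[k] for k in range(K))   # A'(1) = sum_k k * a_k
T10 = sum(a[k] for k in range(10, K))         # T(10) = sum_{k >= 10} a_k

print(A_prime_1, p / (1 - p))   # closed form: p / (1 - p)
print(T10, p ** 10)             # closed form: p ** 10
```

For such coefficients T(n) decays exponentially fast, so only the term A′(1)V(x) survives in Theorem 7.1.1; the cases (ii2) and (iv2) below concern the opposite situation, where T itself is regularly varying and contributes a second term.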
Theorem 7.1.1. Let F ∈ R, x → ∞.

(i) If a = Eξ < 0 then, without any additional conditions,

A(x) ∼ A′(1)V(x).  (7.1.2)

(ii) Let a = 0, α > 2 and Eξ² < ∞ (we will assume without loss of generality that Eξ² = 1). Then the following assertions hold true.

(ii1) If

T(x²) = o(V(x))  (7.1.3)

then we have (7.1.2).

(ii2) If

T ∈ R  (7.1.4)

then

A(x) ∼ A′(1)V(x) + BT(x²),  (7.1.5)

where

B = 2^{γ−1}π^{−1/2}Γ(γ + 1/2)  (7.1.6)

and −γ < −1 is the index of the r.v.f. T.

(iii) Let a = 0, α ∈ (1, 2) and the condition [<, =] with W ∈ R be met. Further, let at least one of the following two conditions be satisfied:

(iii1) W(t) ≤ cV(t);

(iii2) W(t) ≥ c1V(t),  (7.1.7)

T(1/W(x/ln x)) = o(V(x)).  (7.1.8)

Then (7.1.2) holds true.
(iv) Let a > 0 and one of the following two conditions be satisfied:

(iv1) T(x) = o(V(x));  (7.1.9)

(iv2) T ∈ R.  (7.1.10)

Then, in the former case, (7.1.2) holds true. In the latter case,

A(x) ∼ A′(1)V(x) + T(x/a).  (7.1.11)

The condition F ∈ R of Theorem 7.1.1 could be broadened by replacing R by the wider distribution classes described in § 4.8.

Proof. (i) First let a = Eξ < 0. Then, uniformly in n, as x → ∞,

P(Sn ≥ x) = P(Sn − an ≥ x − an) ∼ nV(x − an),  (7.1.12)

so that for any k one has

P(Sk ≥ x)/V(x) → k  as  x → ∞,  P(Sk ≥ x)/(kV(x)) < c.

Therefore, by virtue of (7.1.1) and the dominated convergence theorem,

A(x)/V(x) ∼ Σ_{k=0}^{∞} k ak = A′(1).
This proves (7.1.2).

(ii) Now let a = 0, Eξ² = 1. In this case, as x → ∞,
$$\mathbf{P}(S_n \geqslant x) = nV(x)(1 + o(1)) \qquad (7.1.13)$$
uniformly in $n \leqslant n_1 = n_1(x) = c_1x^2/\ln x$ for some $c_1 \in \big(0, 1/[2(\alpha-2)]\big)$ (see Remark 4.4.2 and Theorem 4.4.1). Represent the preimage A(x) from (7.1.1) as
$$A(x) = \sum_{n=0}^{\infty} a_n \mathbf{P}(S_n \geqslant x) = \Sigma_1 + \Sigma_2 + \Sigma_3, \qquad (7.1.14)$$
where
$$\Sigma_1 := \sum_{n < n_1}, \qquad \Sigma_2 := \sum_{n_1 \leqslant n < n_2}, \qquad \Sigma_3 := \sum_{n \geqslant n_2},$$
$n_2 = n_2(x) = c_2(1+\varepsilon)x^2/\ln x$, $c_2 = 1/[2(\alpha-2)]$ and ε > 0. Then it follows from (7.1.13) that
$$\Sigma_1 = \sum_{n < n_1} n a_n V(x)(1 + o(1)) \sim A'(1)V(x).$$
Before we start evaluating the sums Σ2 and Σ3 , observe that the arguments in the subsequent proof of the theorem in the cases (ii1 ), (ii2 ) (see (7.1.3), (7.1.4))
are completely analogous to each other, and that the proof in the former case can easily be derived from that in the latter case. So we will consider only the second situation, T ∈ R.

It follows from the uniform representation
$$\mathbf{P}(S_n \geqslant x) = nV(x)(1 + o(1)) + \Big[1 - \Phi\Big(\frac{x}{\sqrt{n}}\Big)\Big](1 + o(1)), \qquad (7.1.15)$$
valid for $x > \sqrt{n}$ (see p. 182), that we have
$$\mathbf{P}(S_n \geqslant x) \sim 1 - \Phi\Big(\frac{x}{\sqrt{n}}\Big)$$
uniformly in $n \geqslant n_2$. Hence, using the Abel transform, we obtain
$$\Sigma_3 = \sum_{n \geqslant n_2} a_n \mathbf{P}(S_n \geqslant x) \sim \sum_{n \geqslant n_2} a_n\Big[1 - \Phi\Big(\frac{x}{\sqrt{n}}\Big)\Big] = T(n_2)\Big[1 - \Phi\Big(\frac{x}{\sqrt{n_2}}\Big)\Big] + \sum_{n \geqslant n_2} T(n)f(x,n), \qquad (7.1.16)$$
where
$$f(x,n) := \Phi\Big(\frac{x}{\sqrt{n}}\Big) - \Phi\Big(\frac{x}{\sqrt{n+1}}\Big) = \frac{1}{\sqrt{2\pi}}\int_{x/\sqrt{n+1}}^{x/\sqrt{n}} e^{-t^2/2}\,dt.$$
Therefore, since $T(t) = t^{-\gamma}L_T(t)$ is an r.v.f. and $x/\sqrt{n_2} \to \infty$, we have
$$\sum_{n \geqslant n_2} T(n)f(x,n) = \frac{1}{\sqrt{2\pi}}\sum_{n \geqslant n_2} T(n)\int_{x/\sqrt{n+1}}^{x/\sqrt{n}} e^{-t^2/2}\,dt \sim \frac{1}{\sqrt{2\pi}}\int_0^{x/\sqrt{n_2}} T(x^2/t^2)\,e^{-t^2/2}\,dt$$
$$\sim \frac{1}{\sqrt{2\pi}}\int_0^{\infty} \frac{t^{2\gamma}L_T(x^2/t^2)}{x^{2\gamma}}\,e^{-t^2/2}\,dt = \frac{T(x^2)}{\sqrt{2\pi}}\int_0^{\infty} \frac{L_T(x^2/t^2)}{L_T(x^2)}\,t^{2\gamma}e^{-t^2/2}\,dt.$$
It can easily be seen from Theorem 1.1.4 that the last integral is asymptotically equivalent to
$$\int_0^{\infty} t^{2\gamma}e^{-t^2/2}\,dt = 2^{\gamma-1/2}\int_0^{\infty} u^{\gamma-1/2}e^{-u}\,du = 2^{\gamma-1/2}\,\Gamma(\gamma + 1/2),$$
so that the second term on the right-hand side of (7.1.16) is $2^{\gamma-1}\pi^{-1/2}\Gamma(\gamma+1/2)\,T(x^2)(1 + o(1))$.
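The reduction of the Gaussian-type integral to the constant B in (7.1.6) can be checked numerically. The following sketch (ours, not part of the book) compares a direct quadrature of $(2\pi)^{-1/2}\int_0^\infty t^{2\gamma}e^{-t^2/2}\,dt$ with the closed form $2^{\gamma-1}\pi^{-1/2}\Gamma(\gamma+1/2)$; the value γ = 1.7 is an arbitrary sample choice.

```python
import math

def B_quadrature(gamma, upper=12.0, steps=120000):
    # trapezoidal approximation of (2*pi)^(-1/2) * integral_0^upper t^(2g) e^(-t^2/2) dt;
    # the integrand is negligible beyond t = 12
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * t ** (2 * gamma) * math.exp(-t * t / 2)
    return total * h / math.sqrt(2 * math.pi)

def B_closed_form(gamma):
    # the constant B = 2^(gamma-1) * pi^(-1/2) * Gamma(gamma + 1/2) from (7.1.6)
    return 2 ** (gamma - 1) * math.gamma(gamma + 0.5) / math.sqrt(math.pi)

print(B_quadrature(1.7), B_closed_form(1.7))
```

For γ = 1/2 the constant reduces to $1/\sqrt{2\pi}$, which gives an easy independent check.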
Further, for the first term on the right-hand side of (7.1.16) we have
$$T(n_2)\Big[1 - \Phi\Big(\frac{x}{\sqrt{n_2}}\Big)\Big] \sim c_3\,T\Big(\frac{x^2}{\ln x}\Big)(\ln x)^{-1/2}e^{-c_4\ln x} = o(T(x^2)).$$
Therefore the third sum $\Sigma_3 = \sum_{n \geqslant n_2}$ is asymptotically equivalent to $BT(x^2)$ (see (7.1.6)). The bound
$$\Sigma_2 = \sum_{n_1 \leqslant n < n_2} a_n \mathbf{P}(S_n \geqslant x) = o(V(x)) + o(T(x^2))$$
for the remaining sum Σ₂ follows from previous calculations since, in the interval $n \in [n_1, n_2]$, one has
$$\mathbf{P}(S_n \geqslant x) < 2\Big[nV(x) + 1 - \Phi\Big(\frac{x}{\sqrt{n}}\Big)\Big]$$
owing to (7.1.15). We observe that the right-hand sides of the above estimates for the sums and integrals are not sensitive to constant factors in the definition of the boundaries $n_1 = n_1(x)$, $n_2 = n_2(x)$. This proves the second assertion of the theorem.

(iii) Now let a = 0, α ∈ (1, 2). First assume that (7.1.8) is true. Split the series (7.1.1) representing A(x) into two parts:
$$A(x) = \Sigma_1 + \Sigma_2, \qquad \text{where} \quad \Sigma_1 = \sum_{n \leqslant n(x)}, \qquad n(x) := \frac{1}{W(x/\ln x)}.$$
Then, by Theorem 3.4.4(i), $\mathbf{P}(S_n \geqslant x) \sim nV(x)$ uniformly in $n \leqslant n(x)$, and therefore
$$\Sigma_1 = \sum_{n \leqslant n(x)} a_n \mathbf{P}(S_n \geqslant x) \sim A'(1)V(x).$$
Owing to (7.1.8) the second sum does not exceed $T(n(x)) = o(V(x))$. This proves the asymptotics (7.1.2).

If condition (7.1.7) is satisfied then, by virtue of Theorem 3.4.4(i), the equivalence $\mathbf{P}(S_n \geqslant x) \sim nV(x)$ holds for $n \leqslant n(x) := 1/V(x)$. Since $T(n(x)) = T(1/V(x)) = o(V(x))$, we again obtain (7.1.2), in a similar way to our previous argument.

(iv) Now let a > 0. Then, for $n \leqslant n(x) := xa^{-1}(1-\varepsilon)$, ε > 0, we have $\mathbf{P}(S_n \geqslant x) \sim nV(x-an)$. Assume that condition (7.1.9) is met. As
$$\frac{V(x-an)}{V(x)} < c \quad \text{for} \quad n < n(x), \qquad \frac{V(x-an)}{V(x)} \to 1 \quad \text{for} \quad n = o(x),$$
we obtain, in a similar way to our previous calculations, that
$$A(x) = A'(1)V(x)(1 + o(1)) + \Sigma_2, \qquad (7.1.17)$$
where $|\Sigma_2| \leqslant |T(n(x))| \sim c_1|T(x)| = o(V(x))$.

Now let T ∈ R. By the law of large numbers $S_n/n \to a$ as n → ∞, so that we have $\mathbf{P}(S_n \geqslant x) \to 1$ for $n \geqslant xa^{-1}(1+\varepsilon)$ and hence (assuming for definiteness that $T(xa^{-1}) > 0$)
$$T(xa^{-1}(1+\varepsilon))(1 + o(1)) \leqslant \Sigma_2 \leqslant T(xa^{-1}(1-\varepsilon)).$$
As ε is arbitrarily small, this implies that $\Sigma_2 \sim T(xa^{-1})$ and therefore, together with (7.1.17), proves the relation (7.1.11). The theorem is proved.

Remark 7.1.2. In the third part of the theorem, the asymptotics of A(x) remain unknown for a rather broad class of cases, for which neither (7.1.7) nor (7.1.8) holds; for example, in the case when $W(t) \gg V(t)$ and $T(1/W(t))$ is comparable with or greater than V(t). It is not difficult to determine the form of the asymptotics, but we could not give a rigorous proof thereof owing to the absence of estimates for $\mathbf{P}(S_n \geqslant x)$ in the ‘intermediate’ zone
$$n_1(x) := \frac{1}{W(x/\ln x)} \leqslant n \leqslant n_2(x) := \frac{N(x)}{W(x)}$$
(see Theorem 3.4.4), where N(x) → ∞ slowly enough that one has the uniform asymptotic equivalence
$$\mathbf{P}\Big(\frac{S_n}{b(n)} \geqslant \frac{x}{b(n)}\Big) \sim F_{\beta,-1,+}\Big(\frac{x}{b(n)}\Big) \qquad \text{for} \quad n > n_2.$$
Here $b(n) = W^{(-1)}(1/n)$ and $F_{\beta,-1}$ is the stable law with parameters (β, −1). To obtain the asymptotics of A(x) in the case when
$$W(t) \gg V(t), \qquad T \in \mathcal{R},$$
we need to split the series (7.1.1) representing A(x) into three parts, cf. (7.1.14), over the ranges $n \leqslant n_1(x)$, $n_1(x) < n \leqslant n_2(x)$ and $n > n_2(x)$ respectively. As before, we can then derive that $\Sigma_1 \sim A'(1)V(x)$. Further, setting for brevity $n_2(x) =: n_2$ and using summation by parts, we obtain
$$\Sigma_3 = \sum_{n > n_2} a_n \mathbf{P}\Big(\frac{S_n}{b(n)} \geqslant \frac{x}{b(n)}\Big) \sim T(n_2)\,F_{\beta,-1,+}\Big(\frac{x}{b(n_2)}\Big) + \sum_{n > n_2} T(n)f(x,n), \qquad (7.1.18)$$
where
$$f(x,n) = F_{\beta,-1,+}\Big(\frac{x}{b(n+1)}\Big) - F_{\beta,-1,+}\Big(\frac{x}{b(n)}\Big) = \int_{x/b(n+1)}^{x/b(n)} f(u)\,du,$$
f being the density of $F_{\beta,-1}$. Hence, setting $T_1(t) := T(1/W(t))$, we see that the second term on the right-hand side of (7.1.18) is asymptotically equivalent to
$$\int_0^{x/b(n_2)} T_1(xu^{-1})f(u)\,du = T_1(x)\int_0^{x/b(n_2)} \frac{T_1(xu^{-1})}{T_1(x)}\,f(u)\,du \sim T_1(x)\int_0^{\infty} u^{\beta\gamma}f(u)\,du,$$
where −γ is the index of the r.v.f. T ∈ R. It remains to evaluate the ‘intermediate’ sum Σ₂. It is possible to obtain the desired bound $\Sigma_2 = o\big(V(x) + T(1/W(x))\big)$, but this would require rather cumbersome calculations, which we will not present here. They would lead to the asymptotics
$$A(x) \sim A'(1)V(x) + BT\big(1/W(x)\big), \qquad B := \int_0^{\infty} t^{\beta\gamma}f(t)\,dt.$$
Remark 7.1.3. If we assume that $\mathbf{P}(\xi \in \Delta[x)) \sim \Delta\alpha V(x)/x$ as x → ∞, where $\Delta[x) = [x, x+\Delta)$, and that the integro-local theorems for $\mathbf{P}(S_n \in \Delta[x))$ hold true (see §§ 3.7, 4.7 and 9.2), then we can derive, in a way similar to our previous calculations, the asymptotics of $\mathbf{A}(\Delta[x)) = A(x) - A(x+\Delta)$ as x → ∞.
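The first assertion of Theorem 7.1.1 lends itself to a direct numerical check: for an integer-valued toy distribution, the series (7.1.1) can be evaluated exactly by repeated convolution. The sketch below (ours, not from the book; all parameter choices are illustrative) takes a jump with negative mean and a regularly varying right tail, a geometric weight sequence $a_n$, and compares $A(x)$ with $A'(1)V(x)$.

```python
import numpy as np

# Toy integer-valued jump: P(xi = -1) = 0.8 (so E xi < 0), plus a regularly
# varying right tail P(xi = k) proportional to k^(-alpha-1) on {1, ..., K}.
alpha, K = 2.5, 300
tail = np.arange(1, K + 1, dtype=float) ** (-(alpha + 1))
tail *= 0.2 / tail.sum()            # total mass 0.2 on the positive part
pmf = np.zeros(K + 2)               # index i corresponds to the value i - 1
pmf[0] = 0.8                        # P(xi = -1)
pmf[2:] = tail

p, x, A_x, Etau = 0.5, 150, 0.0, 0.0
conv = np.array([1.0])              # distribution of S_0 (unit mass at 0)
for n in range(1, 41):
    conv = np.convolve(conv, pmf)   # distribution of S_n on {-n, ..., nK}
    a_n = (1 - p) * p ** (n - 1)    # a_n = P(tau = n) for a geometric tau
    Etau += n * a_n                 # A'(1) = sum n a_n
    A_x += a_n * conv[x + n:].sum() # P(S_n >= x): value v sits at index v + n
V_x = pmf[x + 1:].sum()             # V(x) = P(xi >= x)
ratio = A_x / (Etau * V_x)          # Theorem 7.1.1(i) predicts ratio near 1
print(round(ratio, 3))
```

The ratio is not exactly 1 at finite x (the theorem is asymptotic, and $V(x-an) \ne V(x)$), but it is already close for these moderate values.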
7.2 Functions of semiexponential distributions

In this section, we use the notation of § 7.1 but assume that $F_+ = V \in \mathcal{Se}$.

Theorem 7.2.1. Let V ∈ Se, α ∈ (0, 1). Suppose that the series $\sum_{n=0}^{\infty} n a_n = A'(1) > 0$ converges.

(i) If a := Eξ < 0 then, as x → ∞,
$$A(x) \sim A'(1)V(x). \qquad (7.2.1)$$

(ii) Let a = 0, Eξ² < ∞. If
$$|a_n| < \exp\{-n^{\alpha/(2-\alpha)}L_1(n)\} \qquad (7.2.2)$$
for a suitable s.v.f. L₁ (see the proof below) then (7.2.1) holds true.

(iii) Let a > 0 and, for some ε > 0,
$$T(xa^{-1}(1-\varepsilon)) = o(V(x)). \qquad (7.2.3)$$
Then (7.2.1) holds true.
As in Theorem 7.1.1, the condition V ∈ Se can be relaxed in the above assertion. Moreover, in parts (ii) and (iii) of the theorem one can obtain asymptotics of A(x), depending on the function T, in the case when T ∈ Se and $T(t^2) \geqslant cV(t)$ for a = 0, and also when $T(t/a) \geqslant cV(t)$ for a > 0.

Proof. (i) The first assertion of the theorem is proved in exactly the same way as in Theorem 7.1.1.

(ii) To prove the second assertion, we will make use of the following results (see Theorem 5.4.1(ii) and Corollary 5.2.2):
$$\mathbf{P}(S_n \geqslant x) \sim nV(x) \quad \text{for} \quad n = o\Big(\frac{x^2}{l^2(x)}\Big) \qquad (7.2.4)$$
and
$$\mathbf{P}(S_n \geqslant x) < cn\exp\Big\{-l(x)\Big[1 - \frac{bnl(x)}{x^2}\Big](1 + o(1))\Big\} \quad \text{for} \quad n < c_1\frac{x^2}{l(x)}, \qquad (7.2.5)$$
where the constants b and c₁ are known. First put $n_1(x) := x^2/l^2(x)$, $n_2(x) := c_1x^2/l(x) > n_1(x)$ and split the series (7.1.1) into three parts as in (7.1.14) with
$$\Sigma_1 = \sum_{n < n_1(x)}, \qquad \Sigma_2 = \sum_{n_1(x) \leqslant n < n_2(x)}, \qquad \Sigma_3 = \sum_{n \geqslant n_2(x)}. \qquad (7.2.6)$$
In the first sum, we have the relation (7.2.4) for $n = o(n_1(x))$ and the inequality $\mathbf{P}(S_n \geqslant x) < cnV(x)$ for $n \leqslant n_1(x)$. Therefore, by an argument similar to before, $\Sigma_1 \sim A'(1)V(x)$.

Owing to (7.2.5), for large enough x the sum Σ₂ will not exceed
$$cV(x)\sum_{n_1(x)}^{n_2(x)} n a_n \exp\Big\{\frac{2bn}{n_1(x)}\Big\}.$$
By condition (7.2.2), the absolute value of the latter sum is less than or equal to
$$\sum_{n_1(x)}^{n_2(x)} n\exp\Big\{-n^{\alpha/(2-\alpha)}L_1(n) + \frac{2bn}{n_1(x)}\Big\}, \qquad (7.2.7)$$
where
$$\frac{2b}{n_1(x)} - n^{(2\alpha-2)/(2-\alpha)}L_1(n) < \frac{2b}{n_1(x)} - n_2(x)^{(2\alpha-2)/(2-\alpha)}L_1(n). \qquad (7.2.8)$$
The power factor in the last two terms is the same, $x^{2\alpha-2}$. Hence one can always choose an s.v.f. $L_1(n)$ such that the difference in (7.2.8) will be less than $-3(\ln n)/n_1(x)$. In this case, the sum (7.2.7) will not exceed
$$\sum_{n \geqslant n_1(x)} n\exp\Big\{-\frac{3n\ln n}{n_1(x)}\Big\} \leqslant \sum_{n \geqslant n_1(x)} n^{-2} \sim \frac{1}{n_1(x)} = o(1).$$
Thus, the second sum in (7.2.6) is o(V(x)). The sum Σ₃ in (7.2.6) is bounded by
$$\sum_{n \geqslant n_2(x)} \exp\{-n^{\alpha/(2-\alpha)}L_1(n)\} \leqslant \exp\{-n_2(x)^{\alpha/(2-\alpha)}L_2(n_2(x))\},$$
where L₂ is an s.v.f. and the power-function part of $n_2(x)^{\alpha/(2-\alpha)}$ is equal to $x^\alpha$. Therefore, choosing a suitable $L_1(n)$ (or $L_2(n)$), one can always obtain $n_2(x)^{\alpha/(2-\alpha)}L_2(n_2(x)) \geqslant l(x)$. This means that the third sum in (7.2.6) is also o(V(x)).

(iii) Finally, consider the case a > 0. As in (7.1.14) and (7.2.6), we split the series representing A(x) into three parts, setting $n_1(x) := z/a$ in (7.2.6) (where, as before, $z = z(x) = x/\alpha l(x)$) and $n_2(x) = xa^{-1}(1-\varepsilon)$ for a fixed ε > 0. Since
$$\frac{V(x-an)}{V(x)} \to 1 \quad \text{for} \quad n = o(z), \qquad \frac{V(x-an)}{V(x)} < c \quad \text{for} \quad n < z,$$
we have here that, owing to Theorem 5.4.1 and Corollary 5.2.2,
$$\mathbf{P}(S_n \geqslant x) = \mathbf{P}(S_n - an \geqslant x - an) \sim nV(x) \quad \text{for} \quad n = o(z), \qquad \mathbf{P}(S_n \geqslant x) < cnV(x) \quad \text{for} \quad n < z$$
(the inequality $n \leqslant x^2/l^2(x)$ clearly holds in these cases). Therefore, in (7.2.6),
$$\Sigma_1 \equiv \sum_{n < n_1} = V(x)A'(1)(1 + o(1)).$$
In the sum Σ₂ from (7.2.6), we have to deal with summands of the form $a_n\mathbf{P}(S_n - an \geqslant x')$, where the deviations $x' := x - an > \varepsilon x$ satisfy the conditions $n < xa^{-1}(1-\varepsilon) \leqslant c_1(x')^2/l(x')$, and hence for them (7.2.5) holds. Moreover, owing to (7.2.3) one can assume that $|a_n| < V(an)$. Therefore
$$|a_n|\mathbf{P}(S_n \geqslant x) \leqslant cn\exp\Big\{-l(an) - l(x')\Big[1 - \frac{2bnl(x')}{(x')^2}\Big]\Big\}. \qquad (7.2.9)$$
Further, we will split this sum Σ₂ in turn into two sub-sums, one over the range $an \in [z, \varepsilon x]$ and the other over $an \in [\varepsilon x, x(1-\varepsilon)]$. In the first range, for small ε one has
$$l(x') = l(x - an) = l(x) - \frac{an}{z}(1 + o(1)),$$
so that, for any δ ∈ (0, α) and all sufficiently large x,
$$l(x') + l(an) \geqslant l(x)\Big[1 - \frac{\alpha an}{x}(1 + o(1)) + \Big(\frac{an}{x}\Big)^{\alpha-\delta}\Big] \geqslant l(x)\Big[1 + \frac{1}{2}\Big(\frac{an}{x}\Big)^{\alpha-\delta}\Big]. \qquad (7.2.10)$$
Hence for $an \geqslant z$ one obtains
$$l(x') + l(an) - l(x) \geqslant \frac{1}{2\alpha^{\alpha-\delta}}\,l(x)^{1-\alpha+\delta}. \qquad (7.2.11)$$
Since for $an < \varepsilon x$ and a suitable δ > 0 we have for the ‘remainder’ term in (7.2.9) the bound
$$\frac{nl(x')}{(x')^2} = o\Big(\Big(\frac{n}{x}\Big)^{\alpha-\delta}\Big),$$
it follows from (7.2.10) that a bound of the form (7.2.11) will remain true for
$$l(an) + l(x')\Big[1 - \frac{2bnl(x')}{(x')^2}\Big] - l(x).$$
From this and (7.2.9) we obtain
$$|a_n|\mathbf{P}(S_n \geqslant x) \leqslant cnV(x)\exp\{-c\,l(x)^{1-\alpha+\delta}\}.$$
This clearly shows that the sub-sum over the range $an \in [z, \varepsilon x]$ admits the bound o(V(x)).

For the sub-sum corresponding to $an \in [\varepsilon x, x(1-\varepsilon)]$, we can use the representation obtained in Chapter 5 (see (5.4.44)):
$$l(an) + l(x - an) - l(x) = l(x)\,\gamma\Big(\frac{an}{x}\Big)(1 + o(1)),$$
where the function $\gamma(v) = v^\alpha + (1-v)^\alpha - 1 \geqslant 0$ is concave and symmetric around the point v = 1/2 and $\gamma(v) \geqslant \gamma(\varepsilon) > 0$ for $v \in [\varepsilon, 1-\varepsilon]$, ε < 1/2. From this, we can again easily derive that the sub-sum over the range $an \in [\varepsilon x, x(1-\varepsilon)]$ is o(V(x)). Therefore the same bound holds for the whole sum Σ₂.

It remains to bound the third sum Σ₃ in (7.2.6). This sum is taken over the range $an \geqslant x(1-\varepsilon)$ and so, owing to (7.2.3), is also o(V(x)). The theorem is proved.
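The quantity $z = x/\alpha l(x)$ enters the proof because, for a semiexponential index function, $l(x) - l(x-y) \approx y/z = \alpha y\,l(x)/x$ when $y = o(x)$; this is just the first-order Taylor expansion of l. A quick numerical check (ours, for the model case $l(t) = t^\alpha$, which is an illustrative special case of the index functions used in the book):

```python
# Model semiexponential index function l(t) = t^alpha with 0 < alpha < 1
alpha = 0.6
l = lambda t: t ** alpha
x, y = 1.0e6, 1.0e3               # y = o(x)
exact = l(x) - l(x - y)           # the true decrement of the index function
approx = alpha * y * l(x) / x     # i.e. y / z with z = x / (alpha * l(x))
print(exact, approx)
```

The relative error of the approximation is of order y/x, which is what the (1 + o(1)) factor in the proof absorbs.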
7.3 Functions of distributions interpreted as the distributions of stopped sums. Asymptotics for the maxima of stopped sums

One could interpret the assertions of the previous sections and § 1.4 as follows. Let us depart from the representation (7.1.1). Given that
$$a_k \geqslant 0, \qquad A(1) = \sum_{k=0}^{\infty} a_k < \infty,$$
we can assume without loss of generality that A(1) = 1 and consider $A(\mathsf{f}(\lambda))$ as the ch.f. of an r.v. S for which
$$\mathbf{E}e^{i\lambda S} = A(\mathsf{f}(\lambda))$$
and which can be represented as
$$S = S_\tau = \xi_1 + \cdots + \xi_\tau, \qquad (7.3.1)$$
where the r.v. τ is independent of $\{\xi_i\}$, $\mathbf{P}(\tau = k) = a_k$. In this case, $A'(1) = \mathbf{E}\tau$ and our Theorem 1.4.1 implies, for instance, the following result (see also e.g. Theorem A3.20 of [113]).

Corollary 7.3.1. Let F ∈ S and $\mathbf{E}(1+\delta)^\tau < \infty$ for some δ > 0. Then
$$\mathbf{P}(S_\tau \geqslant x) \sim \mathbf{E}\tau\,V(x), \qquad x \to \infty. \qquad (7.3.2)$$
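Relation (7.3.2) is easy to probe by simulation. The sketch below (ours; the Pareto jump, the shift and the geometric τ are arbitrary illustrative choices satisfying the hypotheses) compares the empirical tail of $S_\tau$ with $\mathbf{E}\tau\,V(x)$; the agreement is only approximate because of Monte Carlo noise and finite-x bias.

```python
import random

random.seed(1)
ALPHA, SHIFT, P_GEO = 2.0, 3.0, 0.5     # xi = Pareto(2) - 3, so E xi = 2 - 3 < 0
x, n_sim, hits, tau_sum = 30.0, 200000, 0, 0
for _ in range(n_sim):
    tau = 1                             # tau geometric: P(tau = k) = 0.5^k, E tau = 2
    while random.random() < P_GEO:
        tau += 1
    # S_tau: sum of tau i.i.d. shifted Pareto jumps (U^(-1/a) is Pareto(a))
    s = sum((1.0 - random.random()) ** (-1.0 / ALPHA) - SHIFT for _ in range(tau))
    tau_sum += tau
    if s >= x:
        hits += 1
V_x = (x + SHIFT) ** (-ALPHA)           # V(x) = P(xi >= x), exact here
emp, pred = hits / n_sim, (tau_sum / n_sim) * V_x
print(emp, pred)
```

Here $\mathbf{E}(1+\delta)^\tau < \infty$ for any δ < 1, so the corollary applies.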
One can similarly reformulate, in terms of the distribution of τ and the tails $T(n) = \mathbf{P}(\tau \geqslant n)$, the assertions of Theorems 7.1.1 and 7.2.1.

In regard to the above probabilistic interpretation of Theorems 7.1.1 and 7.2.1, there arise the following two natural problems.

(1) To study the asymptotics of $\mathbf{P}(\overline{S}_\tau \geqslant x)$, where $\overline{S}_n = \max_{k \leqslant n} S_k$.

(2) To establish under what conditions the relation (7.3.2) will remain valid when τ is an arbitrary stopping (Markov) time, not necessarily independent of $\{\xi_i\}$ (i.e. an r.v. defined on a common probability space together with $\{\xi_i\}$ and such that, for any $n \geqslant 0$, the event $\{\tau \leqslant n\}$ belongs to the σ-algebra $\sigma(\xi_1, \ldots, \xi_n)$ generated by $\xi_1, \ldots, \xi_n$).

This and the next sections are devoted to solving the above problems. In the present section we will concentrate on analogues of Theorems 7.1.1 and 7.2.1 for $\overline{S}_\tau$ in the case when τ is independent of $\{\xi_i\}$. Since the asymptotics of $\mathbf{P}(S_n \geqslant x)$ and $\mathbf{P}(\overline{S}_n \geqslant x)$ are close to each other in many deviation zones, the assertions below do not differ much from those of Theorems 7.1.1 and 7.2.1.

Theorem 7.3.2. Assume that F ∈ R, x → ∞ and that an integer-valued r.v. τ with Eτ < ∞ is independent of $\{\xi_i\}$.

(i) If a := Eξ < 0 then
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \sim \mathbf{E}\tau\,V(x). \qquad (7.3.3)$$

(ii) If a = 0, α > 2, Eξ² = 1 and at least one of the conditions (7.1.3), (7.1.4) holds true then
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \sim \mathbf{E}\tau\,V(x) + 2BT(x^2), \qquad (7.3.4)$$
where B is defined in (7.1.6).

(iii) Let a = 0, F satisfy condition [<, =] with α ∈ (1, 2), and let at least one of the conditions (7.1.7) and (7.1.8) be met. Then (7.3.3) holds true.
(iv) Let a > 0 and one of the conditions (7.1.9) and (7.1.10) be met. Then, in the former case, (7.3.3) holds true. In the latter case,
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \sim \mathbf{E}\tau\,V(x) + T(x/a). \qquad (7.3.5)$$

As was the case with Theorem 7.1.1, the condition F ∈ R could be broadened by replacing R by the wider distribution classes described in § 4.8.

Proof of Theorem 7.3.2. We can essentially repeat the argument used to prove Theorem 7.1.1. Instead of (7.1.1), we need to start with the relation
$$\mathbf{P}(\overline{S}_\tau \geqslant x) = \sum_{n=0}^{\infty} a_n \mathbf{P}(\overline{S}_n \geqslant x), \qquad (7.3.6)$$
where for the probabilities $\mathbf{P}(\overline{S}_n \geqslant x)$ we have, as a rule, the same bounds as for $\mathbf{P}(S_n \geqslant x)$ (see Chapters 3 and 4). More precisely, the following hold.

(i) The proof of the first assertion repeats verbatim the argument from § 7.1 for the case a < 0.

(ii) The proof of the second assertion differs from that for Theorem 7.1.1 in two respects. These refer to the evaluation of analogues of the sums Σ₂ and Σ₃. In the sum $\Sigma_3 = \sum_{n \geqslant n_2(x)} a_n \mathbf{P}(\overline{S}_n \geqslant x)$ one should take the summation limit $n_2(x) := x^2/N(x)$, where N(x) → ∞ slowly enough that, uniformly in $n \geqslant n_2(x)$,
$$\mathbf{P}(\overline{S}_n \geqslant x) \sim 2\Big[1 - \Phi\Big(\frac{x}{\sqrt{n}}\Big)\Big].$$
The subsequent evaluation of the sum Σ₃ differs from the calculations in § 7.1 (see (7.1.16) etc.) only by the presence of the factor 2, and therefore we will eventually obtain $\Sigma_3 \sim 2BT(x^2)$.

To bound $\Sigma_2 = \sum_{n_1(x)}^{n_2(x)} a_n \mathbf{P}(\overline{S}_n \geqslant x)$, with the same value of the summation limit $n_1(x) = c_1x^2/\ln x$, we use inequalities from Corollary 4.1.4. This yields $\Sigma_2 = o(V(x)) + o(T(x^2))$ and completes the proof of (7.3.4).

(iii), (iv) There are no substantial changes in the proofs of the third and fourth parts of the theorem compared with those of § 7.1, since the same bounds are valid for $\mathbf{P}(S_n \geqslant x)$ and $\mathbf{P}(\overline{S}_n \geqslant x)$ in the respective calculations, and $\overline{S}_n/n \to a$ as n → ∞. The theorem is proved.

Theorem 7.3.3. Assume that F ∈ Se, α ∈ (0, 1) and that an integer-valued r.v. τ with Eτ < ∞ is independent of $\{\xi_i\}$.

(i) If a := Eξ < 0 then
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \sim \mathbf{E}\tau\,V(x), \qquad x \to \infty. \qquad (7.3.7)$$
(ii) Let a = 0, Eξ² < ∞. If
$$a_n < \exp\{-n^{\alpha/(2-\alpha)}L_1(n)\}$$
for a suitable s.v.f. L₁ then (7.3.7) holds true.

(iii) Let a > 0 and, for some ε > 0, $T(xa^{-1}(1-\varepsilon)) = o(V(x))$. Then (7.3.7) holds true.

Proof. The proof of Theorem 7.3.3 does not differ from that of Theorem 7.2.1 because, under the conditions of the theorem, we have the same estimates for $\mathbf{P}(S_n \geqslant x)$ and $\mathbf{P}(\overline{S}_n \geqslant x)$.

The remarks that we made following Theorems 7.1.1 and 7.2.1 remain valid for Theorems 7.3.2 and 7.3.3 (in Remark 7.1.2, instead of $F_{\beta,-1}$ we now need the distribution of the maximum of a stable process $\{\zeta(t);\ t \in [0,1]\}$ with stationary independent increments, for which ζ(1) has the distribution $F_{\beta,-1}$).

7.4 Sums stopped at an arbitrary Markov time

In this section we will concentrate on the second class of problems mentioned in § 7.3. They deal with the conditions under which the asymptotic laws established in §§ 7.1–7.3 remain valid in the case when τ is an arbitrary stopping time. It turns out that in this case the asymptotics of $\mathbf{P}(\overline{S}_\tau \geqslant x)$ are, in a certain sense, more accessible than those of $\mathbf{P}(S_\tau \geqslant x)$.

7.4.1 Asymptotics of P(S̄_τ ≥ x)

We introduce the class $S^*$ of distributions F with a finite mean a = Eξ that have the property
$$\int_0^t F_+(u)F_+(t-u)\,du \sim 2\,\mathbf{E}\xi^+\,F_+(t) \qquad \text{as} \quad t \to \infty, \qquad (7.4.1)$$
where $\xi^+ = \max\{0, \xi\}$. It is not difficult to verify that R and Se are subclasses of $S^*$. It is also known that if $F \in S^*$ then F ∈ S and a distribution with the tail $F^I(t) := \int_t^{\infty} F_+(u)\,du$ will also belong to S (see [166]).

The following rather general assertion was obtained in [126] (this paper also contains a comprehensive bibliography on related results).

Theorem 7.4.1. Let $F \in S^*$, a < 0 and τ be an arbitrary stopping time. Then
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \sim \mathbf{E}\tau\,F_+(x) \qquad \text{as} \quad x \to \infty. \qquad (7.4.2)$$
For the case $\tau = \min\{k : S_k < 0\}$ this assertion was proved in [8]. Similar results for the asymptotics of the trajectory maxima over cycles of an ergodic Harris Markov chain were obtained in [56].

Now let a be arbitrary.

Corollary 7.4.2. Let $F \in S^*$ and let there exist a function h(x) such that, as x → ∞, one has h(x) → ∞, h(x) = o(x) and
$$\frac{F_+(x-h(x))}{F_+(x)} \to 1, \qquad \mathbf{P}(\tau > h(x)) = o(F_+(x)). \qquad (7.4.3)$$
Then (7.4.2) holds true.

The first relation in (7.4.3) means that $F_+(t)$ is an h-l.c. function (see p. 18). The second relation clearly implies that Eτ < ∞.

Proof. Choose a number b > a = Eξ and introduce i.i.d. r.v.'s $\xi_i(b) \stackrel{d}{=} \xi - b$, so that $\mathbf{E}\xi_i(b) = a - b < 0$. Then, on the one hand, for $S_n(b) := \sum_{i=1}^n \xi_i(b)$, $\overline{S}_n(b) := \max_{k \leqslant n} S_k(b)$, we will have
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \leqslant \mathbf{P}(\overline{S}_\tau(b) + b\tau \geqslant x) \leqslant \mathbf{P}(b\tau > bh(x)) + \mathbf{P}(\overline{S}_\tau(b) \geqslant x - bh(x))$$
$$= o(F_+(x)) + \mathbf{E}\tau\,F_+(x - bh(x) + b)(1 + o(1)) \sim \mathbf{E}\tau\,F_+(x),$$
owing to Theorem 7.4.1. On the other hand, clearly $\xi > \xi(b)$, $\overline{S}_\tau \geqslant \overline{S}_\tau(b)$ and therefore
$$\mathbf{P}(\overline{S}_\tau \geqslant x) \geqslant \mathbf{P}(\overline{S}_\tau(b) \geqslant x) \sim \mathbf{E}\tau\,F_+(x+b) \sim \mathbf{E}\tau\,F_+(x),$$
since $F_+$ is l.c. The corollary is proved.

Corollary 7.4.3. (i) Let F ∈ R and, in the case $a \geqslant 0$, in addition let
$$\mathbf{P}(\tau > x) = o(F_+(x)). \qquad (7.4.4)$$
Then (7.4.2) holds true.

(ii) Let F ∈ Se and, in the case $a \geqslant 0$, in addition let
$$\mathbf{P}(\tau > z(x)) = o(e^{-l(x)}), \qquad (7.4.5)$$
where $z(x) = x/\alpha l(x)$. Then (7.4.2) holds true.

Theorems 7.1.1 and 7.2.1 show that conditions (7.4.4), (7.4.5) are essential for (7.4.2) to hold.

Proof. (i) If (7.4.4) holds then clearly there exists a function h(x) = o(x) such that $\mathbf{P}(\tau > h(x)) = o(F_+(x))$. This means that conditions (7.4.3) of Corollary 7.4.2 will be met.
(ii) If F ∈ Se and (7.4.5) holds true then, similarly, there exists a function $h(x) = o(z(x))$ such that $\mathbf{P}(\tau > h(x)) = o(F_+(x))$, and hence (7.4.3) is true. The corollary is proved.

7.4.2 On the asymptotics of P(S_τ ≥ x)

Now we turn to the asymptotic behaviour of $S_\tau$. Unfortunately, under the conditions of Theorem 7.4.1, it is impossible to obtain a result of the form
$$\mathbf{P}(S_\tau \geqslant x) \sim \mathbf{E}\tau\,F_+(x) \qquad \text{as} \quad x \to \infty. \qquad (7.4.6)$$
This is demonstrated by the following simple example. Let a = Eξ < 0 and $\tau = \eta_- := \min\{k \geqslant 1 : S_k \leqslant 0\}$. Then, clearly, $S_\tau \leqslant 0$ and $\mathbf{P}(S_\tau \geqslant x) = 0$ for all x > 0, so that (7.4.6) cannot hold under any conditions.

Thus, for (7.4.6) to hold true, we must narrow down the class of stopping times. The following assertion was obtained in [136] (on p. 43 of this paper the authors state that it could easily be extended to the case F ∈ R).

Theorem 7.4.4. Let $F_+(x) \sim x^{-\alpha}$ and $F_-(x) = O(x^{-\beta})$ as x → ∞, where α, β > 0. Further, let τ be an arbitrary stopping time such that
$$\mathbf{P}(\tau > n) = o(n^{-r}) \qquad \text{as} \quad n \to \infty, \qquad (7.4.7)$$
where $r > \max\{1, \alpha/\beta\}$ and
$$r \geqslant \begin{cases} \alpha/2 & \text{if } \mathbf{E}\xi = 0,\\ \alpha & \text{if } \mathbf{E}\xi \ne 0.\end{cases}$$
Then $\mathbf{P}(S_\tau \geqslant x) \sim \mathbf{E}\tau\,x^{-\alpha}$ as x → ∞.

Bounds for $\mathbf{P}(S_\tau \geqslant x)$ obtained under very broad conditions can be found in [76, 46].

In connection with the above example for $\tau = \eta_-$, observe that condition (7.4.7) is not satisfied in that situation (indeed, by virtue of Theorem 8.2.3 we have $\mathbf{P}(\eta_- > n) > cn^{-\alpha}$), so that condition (7.4.7) is essential for the assertion of Theorem 7.4.4: it restricts the class of stopping times under consideration. Another restriction of this class was considered in § 7.1; there we assumed that τ was independent of $\{\xi_i\}$. Now we will consider another restriction.

Suppose we are given two sequences (boundaries), $\{g_+(k)\}$ and $\{g_-(k)\}$, such that $g_+(k) > -g_-(k)$ for $k \geqslant 1$. We will call τ a boundary stopping time if
$$\tau := \inf\{k \geqslant 1 : S_k \geqslant g_+(k) \ \text{or} \ S_k \leqslant -g_-(k)\}. \qquad (7.4.8)$$
Observe that the differences $g_\pm(k) \mp ak$, a = Eξ, cannot grow too fast as k → ∞, as then τ might be an improper r.v., owing to the law of the iterated logarithm. For example, in the case Eξ = 0, Eξ² < ∞ it is natural to consider only functions satisfying $g_\pm(k) < c\sqrt{k\ln k}$.
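The contrast between $S_\tau$ and $\overline{S}_\tau$ in the example with $\tau = \eta_-$ above is easy to see in a simulation: $S_\tau$ never exceeds x, while the running maximum still obeys (7.4.2). A rough sketch (ours; the shifted-Pareto jump is an arbitrary illustrative choice with negative mean):

```python
import random

random.seed(7)
ALPHA, SHIFT = 2.0, 3.0                # xi = Pareto(2) - 3, E xi = -1 < 0
x, n_sim = 25.0, 200000
hits_stop, hits_max, tau_sum = 0, 0, 0
for _ in range(n_sim):
    s, s_max, tau = 0.0, 0.0, 0
    while True:                        # stop at eta_- = min{k >= 1 : S_k <= 0}
        s += (1.0 - random.random()) ** (-1.0 / ALPHA) - SHIFT
        tau += 1
        s_max = max(s_max, s)
        if s <= 0.0:
            break
    tau_sum += tau
    if s >= x:
        hits_stop += 1                 # {S_tau >= x}: impossible, since S_tau <= 0
    if s_max >= x:
        hits_max += 1                  # {max_{k <= tau} S_k >= x}
F_plus = (x + SHIFT) ** (-ALPHA)       # F_+(x) = P(xi >= x)
print(hits_stop, hits_max / n_sim, (tau_sum / n_sim) * F_plus)
```

The frequency of $\{\overline{S}_\tau \geqslant x\}$ tracks $\mathbf{E}\tau\,F_+(x)$ (up to Monte Carlo noise and finite-x bias), while $\{S_\tau \geqslant x\}$ never occurs.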
The values of $g_-(k)$ in the definition (7.4.8) may all be infinite. In this case, we will have a boundary stopping time corresponding to a single boundary $g(k) = g_+(k)$. In accordance with the above observation, we will assume in this case that
$$g(k) \leqslant c + k, \qquad k \geqslant 1. \qquad (7.4.9)$$
Note that if g(k) is non-decreasing then, clearly, $S_\tau \equiv \overline{S}_\tau$ and so all the assertions of the previous subsection become applicable.

First we consider the case Eξ = 0.

Theorem 7.4.5. Let Eξ = 0, the distribution of ξ satisfy condition [<, =] with $V = F_+ \in \mathcal{R}$, τ be a boundary stopping time and let conditions (7.4.4), (7.4.9) be met. Then (7.4.6) holds true.

Proof. Since $S_\tau \leqslant \overline{S}_\tau$, Corollary 7.4.3 implies that it suffices to verify that
$$\mathbf{P}(S_\tau \geqslant x) \geqslant \mathbf{E}\tau\,F_+(x)(1 + o(1)). \qquad (7.4.10)$$
Let N = N(x) be a function such that, as x → ∞,
$$N \to \infty, \qquad N = o(x), \qquad [V(x) + W(x)]N^2 \to 0. \qquad (7.4.11)$$
Fix an arbitrary ε > 0 and introduce the events
$$A_k := \{\xi_k \geqslant (1+\varepsilon)x\}, \qquad k = 1, 2, \ldots$$
One can easily see that, for the indicator function of the event in question, we have $I(S_\tau \geqslant x) \geqslant I_1 - I_2$, where
$$I_1 := \sum_{k=1}^{\min\{\tau,N\}} I(A_k;\ S_\tau \geqslant x), \qquad I_2 := \sum_{\nu=1}^{\min\{\tau,N\}} (\nu-1)\,I\big(\nu \text{ events } A_k \text{ occurred prior to the time } \min\{\tau,N\}\big).$$
Here
$$\mathbf{E}I_1 = \mathbf{E}\sum_{k=1}^{N} I(A_k;\ S_\tau \geqslant x,\ \tau \geqslant k), \qquad \mathbf{E}I_2 \leqslant \frac{1}{2}\big[NV(x)\big]^2(1 + o(1)).$$
As τ is a boundary stopping time and g(k) < c + k, N = o(x), we see that, for $k \leqslant N$,
$$A_k \cap \{S_{k-1} > -\varepsilon x/2,\ \tau \geqslant k\} \subset A_k \cap \{S_\tau \geqslant x,\ \tau \geqslant k\}.$$
Therefore
$$\mathbf{E}I_1 \geqslant \sum_{k=1}^{N} \mathbf{P}\Big(S_{k-1} \geqslant -\frac{\varepsilon x}{2},\ \tau \geqslant k\Big)\mathbf{P}(A_k)$$
$$= F_+((1+\varepsilon)x)\sum_{k=1}^{N}\Big[\mathbf{P}(\tau \geqslant k) - \mathbf{P}\Big(\tau \geqslant k,\ S_{k-1} < -\frac{\varepsilon x}{2}\Big)\Big]$$
$$\geqslant F_+((1+\varepsilon)x)\Big[\mathbf{E}\tau(1 + o(1)) - \sum_{k=1}^{N} \mathbf{P}\Big(S_{k-1} < -\frac{\varepsilon x}{2}\Big)\Big]$$
$$\geqslant F_+((1+\varepsilon)x)\Big[\mathbf{E}\tau(1 + o(1)) - (1 + o(1))\sum_{k=1}^{N}(k-1)W\Big(\frac{\varepsilon x}{2}\Big)\Big]$$
$$= F_+((1+\varepsilon)x)\big[\mathbf{E}\tau + o(1) + O(N^2W(x))\big] = \mathbf{E}\tau\,F_+((1+\varepsilon)x) + o(F_+(x)).$$
Since ε is arbitrary, this implies (7.4.10). The theorem is proved.

Transition to the case a ≠ 0 under the conditions of Theorem 7.4.5 can be done in a way similar to that used in Corollaries 7.4.2 and 7.4.3.

Now we return to the general case (7.4.8) where τ is a stopping time specified by two boundaries. Here the sign of Eξ does not play any important role, whereas conditions on the distribution F can be relaxed. Set
$$\theta_\pm(x) := \sup\{k : g_\pm(k) \leqslant x\},$$
where we let $\theta_\pm(x) = \infty$ if $\sup_k g_\pm(k) \leqslant x$ (the $\theta_\pm(x)$ are generalized inverse functions for $g_\pm(k)$). It is evident that $\theta_\pm(x) \to \infty$ as x → ∞; moreover, $\theta_\pm(x)$ may be equal to ∞ for finite values of x.

Lemma 7.4.6. Assume that Eτ < ∞ and that h(x) is an arbitrary function with the properties
$$h(x) < \frac{x}{2}, \qquad h(x) \to \infty \quad \text{as} \quad x \to \infty.$$

(i) The following lower bound holds true:
$$\mathbf{P}(S_\tau \geqslant x) \geqslant F_+(x + h(x))\big[\mathbf{E}\tau - \delta(x)\big], \qquad (7.4.12)$$
where δ(x) → 0 as x → ∞. If
$$g_\pm(k) \leqslant g < \infty \qquad (7.4.13)$$
then, for x > g,
$$\mathbf{P}(S_\tau \geqslant x) \geqslant F_+(x + g)\,\mathbf{E}\tau. \qquad (7.4.14)$$
(ii) The following upper bound holds true:
$$\mathbf{P}(S_\tau \geqslant x) \leqslant F_+(x - h(x))\mathbf{E}\tau + F_+(x/2)\delta(x) + \sum_{k \geqslant \theta_+(x/2)} \mathbf{P}(\tau > k). \qquad (7.4.15)$$
If $g_+(k) \leqslant g_+ < \infty$ then, for $x \geqslant g_+$,
$$\mathbf{P}(S_\tau \geqslant x) \leqslant F_+(x - g_+)\,\mathbf{E}\tau. \qquad (7.4.16)$$

Proof. (i) Let
$$\theta(x) := \min\{\theta_-(h(x)),\ \theta_+(x)\}, \qquad C_k := \{S_j \in (-g_-(j), g_+(j)),\ j \leqslant k\} \equiv \{\tau > k\}.$$
It is clear that θ(x) → ∞ as x → ∞. We have
$$\mathbf{P}(S_\tau \geqslant x) = \sum_{k=0}^{\infty} \mathbf{P}(\tau = k+1,\ S_{k+1} \geqslant x) \geqslant \sum_{k=0}^{\theta(x)-1} \mathbf{P}(C_k;\ S_k + \xi_{k+1} \geqslant x), \qquad (7.4.17)$$
where the inequality holds owing to the fact that $g_+(k+1) \leqslant x$ for k < θ(x). Since for such k one has $g_-(k) \leqslant h(x)$, we obtain
$$\mathbf{P}(S_k + \xi_{k+1} \geqslant x \mid C_k) \geqslant F_+(x + h(x)), \qquad (7.4.18)$$
so that
$$\mathbf{P}(S_\tau \geqslant x) \geqslant F_+(x + h(x))\sum_{k < \theta(x)} \mathbf{P}(\tau > k).$$
This implies (7.4.12). In the case (7.4.13), using the equality in (7.4.17) and noting that the left-hand side of (7.4.18) is not less than $F_+(x+g)$, we obtain
$$\mathbf{P}(S_\tau \geqslant x) \geqslant F_+(x+g)\sum_{k=0}^{\infty} \mathbf{P}(\tau > k) = \mathbf{E}\tau\,F_+(x+g).$$

(ii) Using the equality in (7.4.17) we can split the sum on its right-hand side into three parts as follows:
$$\mathbf{P}(S_\tau \geqslant x) = \sum_{k < \theta_+(h(x))} + \sum_{k=\theta_+(h(x))}^{\theta_+(x/2)-1} + \sum_{k \geqslant \theta_+(x/2)}.$$
In the first sum (over $k < \theta_+(h(x))$) one has $g_+(k+1) \leqslant h(x) < x$,
$$\mathbf{P}(\tau = k+1,\ S_{k+1} \geqslant x) = \mathbf{P}(S_k + \xi_{k+1} \geqslant x \mid C_k)\,\mathbf{P}(C_k) \leqslant F_+(x - h(x))\,\mathbf{P}(C_k).$$
For the terms in the second sum one has $g_+(k+1) \leqslant x/2$,
$$\mathbf{P}(\tau = k+1,\ S_{k+1} \geqslant x) = \mathbf{P}(S_k + \xi_{k+1} \geqslant x \mid C_k)\,\mathbf{P}(C_k) \leqslant F_+(x/2)\,\mathbf{P}(C_k).$$
Therefore
$$\mathbf{P}(S_\tau \geqslant x) \leqslant F_+(x - h(x))\sum_{k < \theta_+(h(x))}\mathbf{P}(\tau > k) + F_+(x/2)\sum_{k=\theta_+(h(x))}^{\theta_+(x/2)}\mathbf{P}(\tau > k) + \sum_{k \geqslant \theta_+(x/2)}\mathbf{P}(\tau > k).$$
Since $\theta_+(h(x)) \to \infty$ as x → ∞ and Eτ < ∞, we have
$$\delta(x) := \sum_{k \geqslant \theta_+(h(x))}\mathbf{P}(\tau > k) \to 0$$
as x → ∞. Hence
$$\mathbf{P}(S_\tau \geqslant x) \leqslant F_+(x - h(x))\mathbf{E}\tau + F_+(x/2)\delta(x) + \sum_{k \geqslant \theta_+(x/2)}\mathbf{P}(\tau > k).$$
This proves (7.4.15). If $g_+(k) \leqslant g_+$ then the left-hand side in (7.4.18) does not exceed $F_+(x - g_+)$ for $x \geqslant g_+$, and so (7.4.17) implies (7.4.16). The lemma is proved.

Theorem 7.4.7. Let τ be a boundary stopping time (7.4.8) with Eτ < ∞ and let $F_+$ be an l.c. function.

(i) If (7.4.13) is true then we have (7.4.6).

(ii) Assume that $F_+$ has the property $F_+(x/2) < cF_+(x)$ and, moreover,
$$\sum_{k \geqslant \theta_+(x/2)}\mathbf{P}(\tau > k) = o(F_+(x)) \qquad \text{as} \quad x \to \infty. \qquad (7.4.19)$$
Then (7.4.6) holds true.

It is evident that functions $F_+ \in \mathcal{R}$ satisfy the conditions of the theorem.

Proof. The first assertion of the theorem is obvious from (7.4.14) and (7.4.16). To obtain the second assertion, observe that for an l.c. $F_+(x)$ there always exists a function h(x) tending to infinity slowly enough that $F_+(x - h(x)) \sim F_+(x)$ as x → ∞. Therefore it remains for us to make use of (7.4.12) and (7.4.15). The theorem is proved.

The most difficult condition to verify in Theorem 7.4.7 is (7.4.19). It can be simplified for special boundaries $g_+(k)$. For example, suppose that $g_+(k) = k^\gamma$, γ ∈ (0, 1). Then $\theta_+(x) \sim x^{1/\gamma}$ and, in the case $F_+(x) = x^{-\alpha}L(x)$, L being an s.v.f., condition (7.4.19) will be satisfied provided that
$$\mathbf{P}(\tau > k) < k^{\theta}, \qquad \theta < -\alpha\gamma - 1.$$
Indeed, in this case the sum on the left-hand side in (7.4.19) does not exceed $cx^{(\theta+1)/\gamma}$, where $(\theta+1)/\gamma < -\alpha$.

If Eξ² < ∞ and $g_+(k) + g_-(k) \leqslant cn^\gamma$ for γ < 1/2 and all $k \leqslant n$ then it is not difficult to obtain the bound
$$\mathbf{P}(\tau > n) \leqslant (1-p)^{n^{1-2\gamma}},$$
where p > 0 is the minimum probability that the random walk will reach a boundary during the time interval $[jn^{2\gamma}, (j+1)n^{2\gamma}]$ having started from inside the strip.

Remark 7.4.8. Recall that the l.c. property used in Theorem 7.4.7, generally speaking, does not imply that F is subexponential (see Theorem 1.2.8). Thus Theorem 7.4.7 provides us with an example of a situation where the relation $\mathbf{P}(S_n \geqslant x) \sim nF_+(x)$ (as x → ∞) may not hold for each fixed n, but there exists a stopping time τ such that (7.4.6) holds true.

7.5 An alternative approach to studying the asymptotics of P(S_n ≥ x) for sub- and semiexponential distributions of the summands

As we stated in § 1.4, there is a class of large deviation problems for random walks that are analysed more naturally using not the techniques developed in the main part of this monograph for regularly varying and semiexponential distributions but, rather, asymptotic analysis within the wider class of subexponential distributions. For example, one such problem is that on the asymptotics of $\mathbf{P}(S \geqslant x)$, $S = \sup_{k \geqslant 0} S_k$, in the case Eξ < 0 (see also problems from Chapter 8). For such problems, analytic approaches different from those used in Chapters 2–5 and in subsequent chapters prove to be the most effective. These approaches are based on factorization identities and employ the asymptotic analysis of subexponential distributions presented in §§ 1.2–1.4 and also in the present chapter. For upper-power distributions the asymptotics of $\mathbf{P}(S \geqslant x)$ were obtained in that way in [39, 42]. For the class of subexponential distributions, these results were established, using the same approaches, in [275].

The present section is devoted to presenting the above-mentioned approaches. It lies somewhat outside the mainstream exposition in Chapters 2–5.

7.5.1 An integral theorem on the first order asymptotics of P(S ≥ x)

As before, let the $\xi_i$ be independent and identically distributed, Eξ < 0,
$$S = \sup_{k \geqslant 0} S_k, \qquad S_n = \sum_{i=1}^{n} \xi_i \qquad \text{and} \qquad F_+(x) = \mathbf{P}(\xi \geqslant x).$$
Set
$$F_+^I(t) := \int_t^{\infty} F_+(u)\,du.$$
A main aim of this subsection is to prove the following assertion (see also [275]).

Theorem 7.5.1. If the function $F_+^I(t)$ is subexponential ($F_+^I \in S$) then
$$\mathbf{P}(S \geqslant x) \sim -\frac{1}{\mathbf{E}\xi}\,F_+^I(x) \qquad \text{as} \quad x \to \infty. \qquad (7.5.1)$$
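Relation (7.5.1) can be probed by simulating the supremum of a negative-drift walk directly. The sketch below (ours; the shifted-Pareto jump and the truncation level are arbitrary illustrative choices) compares the empirical frequency of $\{S \geqslant x\}$ with $F_+^I(x)/|\mathbf{E}\xi|$, which for this jump equals $(x+3)^{-3/2}/(3/2 \cdot 4/3)$.

```python
import random

random.seed(3)
ALPHA, SHIFT = 2.5, 3.0                # xi = Pareto(ALPHA) - SHIFT
MEAN = ALPHA / (ALPHA - 1) - SHIFT     # E xi = -4/3 < 0
x, n_sim, hits = 25.0, 20000, 0
for _ in range(n_sim):
    s, s_max = 0.0, 0.0
    while s > -150.0:                  # with negative drift the sup is attained
        s += (1.0 - random.random()) ** (-1.0 / ALPHA) - SHIFT
        s_max = max(s_max, s)          # early, so this truncation is nearly harmless
    if s_max >= x:
        hits += 1
emp = hits / n_sim
# F_+^I(x) = (x + SHIFT)^(1 - ALPHA) / (ALPHA - 1) for this tail
pred = (x + SHIFT) ** (1 - ALPHA) / (ALPHA - 1) / (-MEAN)
print(emp, pred)
```

The agreement is loose (Monte Carlo noise, truncation, and the fact that (7.5.1) is only asymptotic), but the integrated-tail order of magnitude is clearly visible.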
Theorem 7.5.1 undoubtedly differs from the theorems in Chapters 2–5 on the distribution of S in the cases of regularly varying and semiexponential distributions F, in that it is more general and complete. Finding the asymptotics of $\mathbf{P}(\overline{S}_n \geqslant x)$ (or $\mathbf{P}(\overline{S}_n(a) \geqslant x)$ if Eξ = 0, a > 0) for subexponential distributions in the case of a finite growing n, however, requires much more effort and additional conditions (see e.g. [178]). If we also take into account that one needs to refine the asymptotics and find distributions of other functionals of the random walk $\{S_k\}$ then the separate analysis of regularly varying and semiexponential distributions becomes justified.

Note also that the condition $F_+^I \in S$ is not only sufficient but also necessary for the asymptotics (7.5.1); see [177].

First we will state a factorization identity to be used in what follows. Let
$$\eta_+ := \inf\{k \geqslant 1 : S_k > 0\}, \qquad \eta_- := \inf\{k \geqslant 1 : S_k \leqslant 0\}. \qquad (7.5.2)$$
On the events $\{\eta_\pm < \infty\}$ we can define r.v.'s $\chi_\pm := S_{\eta_\pm}$, which are respectively the first positive and the first non-positive sums. It is clear that in the case Eξ < 0 we have
$$\mathbf{P}(\eta_- < \infty) = 1, \qquad \mathbf{P}(\eta_+ < \infty) = \mathbf{P}(S > 0) =: p < 1.$$
Further, set $\mathsf{f}(\lambda) := \mathbf{E}e^{i\lambda\xi}$ and introduce an r.v. χ with distribution U, given by the following relation:
$$\mathbf{P}(\chi < t) := \mathbf{P}(\chi_+ < t \mid \eta_+ < \infty) \equiv \frac{1}{p}\,\mathbf{P}(\chi_+ < t,\ \eta_+ < \infty), \qquad t > 0.$$

Lemma 7.5.2. If Eξ < 0 then, for Im λ = 0,
$$1 - \mathsf{f}(\lambda) = \big(1 - p\,\mathbf{E}e^{i\lambda\chi}\big)\big(1 - \mathbf{E}e^{i\lambda\chi_-}\big), \qquad (7.5.3)$$
$$\mathbf{E}e^{i\lambda S} = \frac{1-p}{1 - p\,\mathbf{E}e^{i\lambda\chi}}. \qquad (7.5.4)$$

Proofs of these assertions can be found, for instance, in [42, 49, 122]. The latter identity can easily be derived directly from the representation $S = \sum_{i=1}^{\nu} \chi_i$, where the r.v. ν does not depend on $\{\chi_i\}$ (the $\chi_i \stackrel{d}{=} \chi$ are independent) and is the number of ‘upper ladder points’ in the sequence $\{S_k\}$, $\mathbf{P}(\nu = k) = (1-p)p^k$, k = 0, 1, 2, . . .
Proofs of these assertions can be found, for instance, in [42, 49, 122]. The ν latter identity can easily be derived directly from the representation S = i=1 χi , d
where the r.v. ν does not depend on {χi } (the χi = χ are independent) and is the number of ‘upper ladder points’ in the sequence {Sk }, P(ν = k) = (1 − p)pk , k = 0, 2, . . .
356
Asymptotic properties of functions of distributions
Proof of Theorem 7.5.1. We will split the proof into two steps. First we will find the asymptotics of the tail U (x) := P(χ x). Lemma 7.5.3. If Eξ < 0 and F+ (x) = o F+I (x)
(7.5.5)
as x → ∞ then U (x) ∼
F+I (x) , a− p
(7.5.6)
where a− := −Eχ− < ∞. Proof. The proof of the lemma follows a scheme used in [39, 42]. Let
0 for t 0, δ(t) := F0 (t) := δ(t) − P(ξ < t), 1 for t > 0, so that F0 (t) → 0 as t → ±∞. Then 1−f(λ) and 1/(1 − Eeiλχ− ) can be written, respectively, as 1 − f(λ) =
iλx
e
1 =− 1 − Eeiλχ−
dF0 (x),
0 eiλx dH(−x),
−∞
where H(x) is the renewal function for the r.v. −χ− 0. Differentiating the identity (7.5.3) at λ = 0 we obtain Eξ = (1 − p)Eχ− , so that a− := −Eχ− = −
Eξ >0 1−p
(7.5.7)
is finite and therefore H(x) ∼ x/a− as x → ∞. Hence the identity (7.5.3) can be rewritten as 0 −
eiλx dF0 (x)
eiλx dH(−x) = 1 − p Eeiλχ , −∞
which implies that, for x > 0, ∞ pU (x) =
H(t − x) F(dt).
(7.5.8)
x
The renewal function H(t) has the following property (see e.g. § 1, Chapter 9 of [49]): for any ε > 0 there exists an N < ∞ such that
t/a− ≤ H(t) < (t + N)/(a− − ε).    (7.5.9)
7.5 An alternative approach to studying P(Sn x)
357
Therefore
(1/a−) ∫_x^∞ (t − x) F(dt) ≤ p U(x) < (1/(a− − ε)) ∫_x^∞ (t − x + N) F(dt),
where
∫_x^∞ (t − x) F(dt) = −∫_x^∞ (t − x) dF+(t) = ∫_x^∞ F+(t) dt ≡ F+^I(x).
From this we find that
1/a− ≤ p U(x)/F+^I(x) ≤ (1/(a− − ε)) (1 + N F+(x)/F+^I(x)).
Since the right-hand side converges to 1/(a− − ε) as x → ∞, and ε > 0 is arbitrary, the lemma is proved.
Now we can proceed to the second stage of the proof of the theorem. Let F+^I ∈ S. Then, according to Theorem 1.2.4(v) and Theorem 1.2.8, we have F+(x) = o(F+^I(x)), and so (7.5.5) holds. Further, it follows from (7.5.4) that
Ee^{iλS} = A(g(λ)),    (7.5.10)
where g(λ) := Ee^{iλχ} and A(z) = (1 − p)/(1 − pz). The function A(z) is analytic in the disk |z| < 1/p, and so we can make use of Theorem 1.4.1, according to which
P(S ≥ x) ∼ A′(1) P(χ ≥ x) = p U(x)/(1 − p) ∼ F+^I(x)/(a−(1 − p))
due to (7.5.6). It remains to use identity (7.5.7). The theorem is proved.
Corollary 7.5.4. If the tail F+(x) = P(ξ ≥ x) is regularly varying or semiexponential then the conditions of Theorem 7.5.1 are satisfied and therefore (7.5.1) holds true.
Proof. The assertion is nearly obvious: if the function F+ is regularly varying then F+^I is also regularly varying, by virtue of Theorem 1.1.4(iv), and hence is subexponential. The same claim holds for semiexponential tails F+ (see § 1.3). The fact that
U(x) ∼ (1/(a− p)) ∫_x^∞ F+(u) du
and hence that for an l.c. F+(u) we have
U(Δ[x)) = U(x) − U(x + Δ) ∼ (Δ/(a− p)) F+(x),    (7.5.11)
suggests that the function U(x) will, under broad conditions, be locally subexponential (see Definition 1.3.3, p. 47). This means that one could sharpen the 'integral' theorem 7.5.1 and obtain, along with the latter, an 'integro-local' theorem on the asymptotics of P(S ∈ Δ[x)) as well.
7.5.2 An integro-local theorem on the asymptotics of P(S ≥ x)
The objective of this subsection is to give a proof of the following refinement of Theorem 7.5.1.
Theorem 7.5.5. Let the distribution of ξ be non-lattice, Eξ < 0, F+ be an l.c. function and F+^I ∈ S. Further, assume that, as x → ∞,
F+^I(x) ∼ F+(x)/v(x),    (7.5.12)
where v(x) → 0 is an upper-power function (see Definition 1.2.20, p. 28). Then, for any fixed Δ > 0,
P(S ∈ Δ[x)) ∼ −(Δ/Eξ) F+(x).    (7.5.13)
If the distribution of ξ is arithmetic then, under the same conditions, for integer-valued x → ∞,
P(S = x) ∼ −F+(x)/Eξ.    (7.5.14)
Corollary 7.5.6. Let Eξ < 0. If F ∈ R or F ∈ Se, α ∈ (0, 1), then (7.5.13) ((7.5.14) in the arithmetic case) holds true.
This result could be strengthened by replacing the conditions F+ ∈ R and F+ ∈ Se by the weaker conditions of Theorems 1.2.25, 1.2.31 and 1.2.33. In the case when Eξ² < ∞, Corollary 7.5.6 also follows from Theorem 7.5.8 below.
An assertion close to the claims of Theorem 7.5.5 and Corollary 7.5.6 was obtained in [26, 11]. In [11], to ensure the validity of the relations (7.5.13), (7.5.14) the authors used the condition that F belongs to the distribution class S*, which is characterized by the relation (7.4.1).
Proof of Corollary 7.5.6. If F ∈ R then the assertion of the corollary is almost obvious. In this case F+(t) = V(t) := t^{−α} L(t) is an l.c. function, and, by Theorem 1.1.4(iv),
F+^I(x) = ∫_x^∞ u^{−α} L(u) du ∼ x^{−α+1} L(x)/(α − 1) = F+(x)/v(x),
where v(x) = (α − 1)/x. The conditions of Theorem 7.5.5 are satisfied.
Now let F ∈ Se. Then F+(t) = e^{−l(t)},
l(x + u) − l(x) ∼ αul(x)/x    as x → ∞, u = o(x), ul(x)/x → ∞.    (7.5.15)
Set v(x) := αl(x)/x and then choose N = N(x) such that N = o(x) and N v(x) → ∞. Then, by virtue of (7.5.15),
∫_x^{x+N} e^{−l(t)} dt = e^{−l(x)} ∫_0^N e^{−(l(x+u)−l(x))} du
  = e^{−l(x)} ∫_0^N e^{−u v(x)(1+o(1))} du
  = (e^{−l(x)}/v(x)) ∫_0^{N v(x)} e^{−y(1+o(1))} dy ∼ F+(x)/v(x).
Repeating the same argument for
∫_{x+N}^{x+N+N(x+N)} e^{−l(t)} dt
and so on, we obtain
F+^I(x) ∼ F+(x)/v(x).
This proves (7.5.12) and also that F+^I ∈ Se and therefore that F+^I ∈ S. The conditions of Theorem 7.5.5 are again satisfied. The corollary is proved.
Proof of Theorem 7.5.5. The scheme of this proof is roughly the same as in [42, 275]. It is based on two elements: the well-known facts about factorization identities, and Theorems 1.3.6 and 1.4.2 (1.4.3 in the discrete case). The first element can be presented in the form of the following assertion, most of whose parts are well known. To formulate it, we will need the notation introduced after the statement of Theorem 7.5.1 (see p. 355) and also some factorization identities.
Lemma 7.5.7. Let F+ be an l.c. function, Eξ < 0 and let x → ∞. Then the following assertions hold true.
(i) Along with (7.5.3), (7.5.4), we have
U(x) ∼ F+^I(x)/(a− p),    a− = −Eχ−,    (7.5.16)
U(Δ[x)) ∼ Δ F+(x)/(a− p)    (7.5.17)
for any fixed Δ > 0. In the arithmetic case, the quantity Δ in (7.5.17) has to be integer-valued.
(ii) If Eξ² < ∞ then, for non-lattice ξ's,
U(x) = F+^I(x)/(a− p) + (b/p) F+(x) + o(F+(x)),    (7.5.18)
where b := a^{(2)}/(2a−²) and a^{(2)} := Eχ−² < ∞. In the arithmetic case (7.5.18) remains true, but with a somewhat different coefficient of F+(x) (a^{(2)} is replaced by a^{(2)} + a−).
Proof. (i) Since, in the case of an l.c. F+, the relation (7.5.5) follows from Theorem 1.2.4(v), the assertion (7.5.16) was obtained in Lemma 7.5.3. Now we prove (7.5.17) (formally, (7.5.16) does not imply (7.5.17)). From the representation (7.5.8) and the local renewal theorem (see e.g. (10) in § 4, Chapter 9 of [49]), we obtain that for an l.c. F+ one has
p U(Δ[x)) = ∫_x^{x+Δ} H(t − x) F(dt) + ∫_{x+Δ}^∞ [H(t − x) − H(t − x − Δ)] F(dt)
  = o(F+(x)) + (Δ/a−) F+(x + Δ)(1 + o(1)) = (Δ/a−) F+(x) + o(F+(x)).
This proves (7.5.17).
(ii) If Eξ² < ∞ then, differentiating the identity (7.5.3) twice at λ = 0, we obtain Eχ−² < ∞. Therefore
H(t) = t/a− + b + ε(t),
where ε(t) → 0 as t → ∞ (see e.g. Appendix 1 of [42] or [120, 49]). Hence, owing to (7.5.8),
U(x) = (1/p) ∫_x^∞ H(t − x) F(dt) = F+^I(x)/(a− p) + b F+(x)/p + ε₁(x),
where
ε₁(x) := (1/p) ∫_0^∞ ε(v) F(x + dv) = o(F+(x))
when F+ is an l.c. function. The lemma is proved.
Now we are in a position to prove the theorem. We will restrict ourselves to the non-lattice case and will make use of Theorem 1.3.6, where we take G to be the distribution U. That conditions (1.3.20) are satisfied follows from Lemma 7.5.7
and (7.5.12), and therefore U ∈ Sloc, owing to the above-mentioned theorem. It remains to use the representation (7.5.4), which has the form
Ee^{iλS} = A(g(λ)),    (7.5.19)
where χ ⊂= U, g(λ) = Ee^{iλχ}, A(w) = (1 − p)/(1 − pw).
The function A(w) is analytic in the disk |w| < 1/p and so, by Theorem 1.4.2,
P(S ∈ Δ[x)) ∼ A′(1) U(Δ[x)).
From this, Lemma 7.5.7 and the relation (7.5.7) we derive (7.5.13). The theorem is proved.
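Both the integral asymptotics (7.5.1), P(S ≥ x) ∼ F+^I(x)/|Eξ|, and the integro-local relation (7.5.13) can be checked by simulation for a concrete heavy-tailed walk. The sketch below is our illustration only: the Pareto index α = 2.5, the downward shift, the checkpoint x = 20, the window Δ = 2 and the sample sizes are all arbitrary choices, and at such a moderate x the first-order asymptotics are accurate only up to a few tens of percent.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, shift = 2.5, 2.5                 # xi = Y - shift, P(Y > t) = t^-alpha (t >= 1)
mean_xi = alpha / (alpha - 1) - shift   # = -5/6, so the drift is negative

def tail(x):                            # F_+(x) = P(xi >= x) for x >= 0
    return (x + shift) ** (-alpha)

def tail_int(x):                        # F_+^I(x) = integral of F_+ over (x, inf)
    return (x + shift) ** (1.0 - alpha) / (alpha - 1.0)

x0, delta = 20.0, 2.0
pred_int = tail_int(x0) / abs(mean_xi)       # Theorem 7.5.1 prediction
pred_loc = delta * tail(x0) / abs(mean_xi)   # Theorem 7.5.5 prediction

n_paths, horizon, chunk = 100_000, 600, 10_000
hits_int = hits_loc = 0
for _ in range(n_paths // chunk):
    # rng.pareto(a) + 1 samples the classical Pareto with P(Y > t) = t^-a
    y = rng.pareto(alpha, size=(chunk, horizon)) + 1.0
    m = np.cumsum(y - shift, axis=1).max(axis=1)   # running-maximum proxy for S
    hits_int += int((m >= x0).sum())
    hits_loc += int(((m >= x0) & (m < x0 + delta)).sum())
emp_int = hits_int / n_paths
emp_loc = hits_loc / n_paths
```

With a horizon of 600 steps the truncation of the supremum is negligible at this level, so `emp_int`/`pred_int` and `emp_loc`/`pred_loc` should both be of order 1.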
7.5.3 A refinement of the integral theorem for the maxima of sums
In this subsection we will obtain another refinement of Theorem 7.5.1, which contains the next term in the asymptotic expansion for P(S ≥ x). Such a refinement in the case F ∈ R was established in Corollary 3 of [63], but under additional moment and smoothness conditions on F+. As will follow from Theorem 7.5.8 below, these additional conditions prove to be superfluous.
For M/G/1 queueing systems, i.e. in the case ξ = ζ − τ, where the r.v.'s ζ ≥ 0 and τ ≥ 0 are independent and τ follows the exponential distribution, a refinement of the first-order asymptotics for the distribution of S (or, equivalently, for the limiting waiting-time distribution in M/G/1 systems) was obtained, under the assumption that ζ has a heavy-tailed density of a special form, in [267].
Theorem 7.5.8. Let F ∈ R or F ∈ Se, α ∈ (0, 1), Eξ < 0, Eξ² < ∞ and the distribution of ξ be non-lattice. Then
P(S ≥ x) = −F+^I(x)/Eξ + c F+(x) + o(F+(x)),    (7.5.20)
where
c = b/(1 − p) − 2a+ p/(Eξ(1 − p)),    b = Eχ−²/(2(Eχ−)²),    p = P(η+ < ∞),    a+ = Eχ.
In the arithmetic case, this representation remains true for integer-valued x and somewhat different values of c (see Lemma 7.5.7). One also has¹
c = Eξ²/(2(Eξ)²) − ES/Eξ.
¹ The calculation of the constant c to be found in [63] contains an error: in the corresponding expansion, the terms that correspond to the second derivatives of F+ and so account for the dependence of c on the variance of ξ were not taken into account.
For remarks on why the coefficient c of F+(x) in (7.5.20) and the corresponding coefficient in the representation (4.6.4), which was obtained under more restrictive conditions, are different, see the end of Corollary 4.6.3 (p. 210).
Remark 7.5.9. Observe that the asymptotic expansion (7.5.20) was obtained for the classes R and Se only. The question whether (7.5.20) holds for the entire class S remains open.
Remark 7.5.10. In the case Eξ² < ∞, Theorem 7.5.8 clearly implies Corollary 7.5.6. As in Corollary 7.5.6, the conditions F ∈ R and F ∈ Se can be relaxed. For example, instead of F ∈ R it suffices to assume that
(1) F+ has the property that, for any fixed v, F+(x + v ln x)/F+(x) → 1 as x → ∞;
(2) F+ is an upper-power function;
(3) F+ has a regularly varying majorant V such that x^{−1} V(x) = o(F+(x)) as x → ∞.
The scheme of the proof of Theorem 7.5.8 is basically the same as for Theorem 7.5.5: it is based on factorization identities, Lemma 7.5.7(ii) and direct calculations relating the distribution of S to the distributions of the sums
Z_n := Σ_{i=1}^n ζ_i,    ζ_i := χ_i − a+,
where the r.v.'s χ_i ⊂= χ are independent. We will denote the distribution of the r.v. ζ = χ − a+ by G. At the last stage of the proof we will need an additional proposition refining the asymptotics of P(Z_n ≥ x). It is an insignificant modification (and simplification) of Theorem 3 of [63] in the case G ∈ R, and of Theorem 2.1 of [52] in the case G ∈ Se.
In the case G ∈ R (G+(t) = t^{−α_G} L_G(t)), consider the following smoothness condition¹, which was introduced in § 4.7 (see p. 217):
[D_{(1,q)}] As t → ∞, Δ → 0,
G+(t(1 + Δ)) − G+(t) = −G+(t)[Δ α_G (1 + o(1)) + o(q(t))],    (7.5.21)
where q(t) → 0.
As observed in Remark 3.4.8 and in the proofs of Theorems 4.4.4 and 4.5.1, the remainder term o(q(t)) can be directly transferred to the final answer (cf. (7.5.23), (7.5.25) below), after which one can simply assume that condition [D_{(1,0)}] is met.
¹ This is a relaxed version of condition [D₁] of [63]: the latter did not have the term o(q(t)) on the right-hand side of (7.5.21). It almost coincides with condition [D_{(1,q)}] from [66], which contained O(q(t)) instead of o(q(t)).
In the case G ∈ Se, condition [D_{(1,q)}] takes the following form. As usual, put z(t) := t/l(t).
[D_{(1,q)}] As t → ∞, Δ = o(l^{−1}(t)) (or Δt = o(z(t))),
G+(t(1 + Δ)) − G+(t) = −G+(t)[Δ α_G l(t)(1 + o(1)) + o(q(t))].    (7.5.22)
(This condition is a relaxed version of condition [D₁] from [52].)
In the case G ∈ R, α_G < 2, we will need the inverse function
σ(n) := G+^{(−1)}(1/n).
The above-mentioned refinement of the asymptotics of P(Z_n ≥ x) is contained in the following assertion.
Theorem 7.5.11. Let Eζ = 0.
(i) If Eζ² < ∞, G ∈ R, α_G < 2 and condition [D_{(1,q)}] holds then
P(Z_n ≥ x) = nG+(x)[1 + o(q(x)) + o(√n x^{−1})]    (7.5.23)
uniformly in n < cx²/ln x for some c > 0.
(ii) If G ∈ R, α_G ∈ (1, 2), G−(t) ≤ cG+(t) and condition [D_{(1,q)}] is met then
P(Z_n ≥ x) = nG+(x)[1 + o(q(x)) + o(σ(n) x^{−1})]    (7.5.24)
uniformly in n < ε/G+(x), ε > 0.
(iii) If G ∈ Se, α_G ∈ (0, 1), Eζ² < ∞ and (7.5.22) holds then
P(Z_n ≥ x) = nG+(x)[1 + o(q(x)) + o(√n z^{−1})],    (7.5.25)
where z = z(x) = x/l(x), uniformly in n = o(z²).
Proof. (i) When G ∈ R, Eζ² < ∞, the assertion (7.5.23) is a direct consequence of Theorem 4.4.4 for k = 1.
(ii) The assertion (7.5.24) is a direct consequence of Theorem 3.4.4(iii).
(iii) The case G ∈ Se can be dealt with in the same way. All the calculations from the proof of Theorem 5.4.1(iii) remain valid here except for the estimate of the quantity E₂, which was introduced in (5.4.58):
E₂ = E[G+(x − Z_{n−1}); |Z_{n−1}| ≤ z],
and which gives the principal part of the desired asymptotics. More precisely, owing to (5.4.58), (5.4.59) and (5.4.65), one has the representation
P(Z_n ≥ x) = nE₂ + nG+(x) o(nz^{−2})    for z ≥ √n.    (7.5.26)
Since |G+(x − vz) − G+(x)| < cG+(x) for |v| ≤ 1, the part of the integral E₂ over the set εz < |Z_{n−1}| ≤ z, where ε tends to zero slowly enough, can be bounded, owing to Chebyshev's inequality, by
G+(x) O(P(εz ≤ |Z_{n−1}| ≤ z)) = G+(x) o(nz^{−2}).
Therefore we just need to evaluate
E[G+(x − Z_{n−1}); |Z_{n−1}| ≤ εz],
where ε → 0. For this term, owing to [D_{(1,q)}] and the relations EZ_{n−1} = 0, E|Z_{n−1}| = O(√n), we have, again assuming that ε → 0 slowly enough,
E[G+(x − Z_{n−1}); |Z_{n−1}| ≤ εz]
  = G+(x) E[1 + α_G Z_{n−1} z^{−1} + o(|Z_{n−1}| z^{−1}) + o(q(x)); |Z_{n−1}| ≤ εz]
  = G+(x)[1 + o(√n z^{−1}) + o(q(x)) + O(P(|Z_{n−1}| > εz)) + E(|Z_{n−1}| z^{−1}; |Z_{n−1}| > εz)]
  = G+(x)[1 + o(√n z^{−1}) + o(q(x))].
Together with (7.5.26), this proves (7.5.25). Theorem 7.5.11 is proved.
Proof of Theorem 7.5.8. It follows from the identity (7.5.4) that
P(S ≥ x) = (1 − p) Σ_{n=0}^∞ p^n P(Z_n ≥ x − a+ n),    (7.5.27)
where Z_n = Σ_{i=1}^n (χ_i − a+), a+ = Eχ and the χ_i are independent and follow the same distribution as χ. Hence, owing to Lemma 7.5.7(ii), the probability P(χ ≥ x) = U(x) has the form (7.5.18) and therefore
G+(t) ≡ P(χ − a+ ≥ t) = U(t + a+) = F+^I(t + a+)/(a− p) + b F+(t)/p + o(F+(t)).    (7.5.28)
Now let us verify that G+(t) satisfies condition [D_{(1,q)}]. First let F ∈ R, Eζ² < ∞. Then, as t → ∞, Δ → 0,
G+(t(1 + Δ)) − G+(t) = −(1/(a− p)) ∫_{t+a+}^{t(1+Δ)+a+} F+(u) du + o(F+(t))
  = −(Δt/(a− p)) F+(t)(1 + o(1)) + o(F+(t)).    (7.5.29)
Since by Lemma 7.5.7(i)
G+(t) ∼ F+^I(t)/(a− p) ∼ t F+(t)/(a− p(α − 1)),
we obtain
G+(t(1 + Δ)) − G+(t) = −G+(t)[Δ(α − 1)(1 + o(1)) + o(t^{−1})],
which means that condition [D_{(1,q)}] is met for q(t) = t^{−1} and an α-parameter value α_G equal to α − 1, corresponding to the index of G+.
365 √
x, as
P(Zn x − a+ n)
√ n = nG+ (x − a+ n) 1 + o x % I & √ F+ (x − a+ (n − 1)) bF+ (x) n + + o F+ (x) =n 1+o a− p p x % I F (x) a+ (n − 1)F+ (x) + (1 + o(1)) =n + a− p a− p & √ bF+ (x) n + . (7.5.30) + o F+ (x) 1+o p x
Now we return to the identity (7.5.27) and split the sum on its right-hand side √ √ into two parts, a sum over n < x and a sum over n x. Then the second part √ will not exceed cp x , so that √ P(S x) = O p x +
F+I (x) 2a+ F+ (x)p bF+ (x) + + o F+ (x) . + 2 a− (1 − p) a− (1 − p) 1−p (7.5.31)
This proves (7.5.20). Now consider the case F ∈ R, Eζ 2 = ∞. (Since ζ −a+ , the left tail G− (t) of the distribution ζ disappears for t < −a+ and therefore αG = α − 1 2.) Owing to (7.5.29) and Lemma 7.5.7(ii), condition [D(1,q) ] is again satisfied for q(t) = t−1 . We find from (7.5.24) that (cf. (7.5.30)) σ(n) . P(Zn x − a+ n) = nG(x − a+ n) 1 + o x This again yields (7.5.31) and therefore (7.5.20). Now assume that F ∈ Se. As before, we will verify that G satisfies [D(1,q) ] (see (7.5.22)). Since here F+I (t) z(t)F+ (t) t ∼ , z(t) = , a− p αa− p l(t) it follows from (7.5.29) with Δt = o z(t) that l(t(1 + Δ)) − l(t) = o(1), G+ (t) ∼
Δt F+ (t)(1 + o(1)) + o F+ (t) a− p # $ = −G+ (t) Δαl(t)(1 + o(1)) + o(z −1 (t)) .
G+ (t(1 + Δ)) − G+ (t) = −
Thus condition [D(1,q) ] is satisfied for q(t) = z −1 (t). Hence, by virtue of Theorem 7.5.11(iii), we again find (cf. (7.5.30)) that, for z = z(x), √ P(Zn x − a+ n) = nG(x − a+ n) 1 + o( nz −1 ) uniformly in n = o min{x, z 2 } . As before, we turn to (7.5.27) and split the
366
Asymptotic properties of functions of distributions
sum in this representation into three parts: for n1 = εz , n2 = εx , where ε = ε(x) → 0 slowly enough (so that ε > z −1/2 ), we write ∞
=
n=0
n1
n2
+
+
n=n1 +1
n=0
∞
=: Σ1 + Σ2 + Σ3 .
n=n2 +1
Then
√ Σ3 = O e−cεx = O e−cεl(x)z = O e−cl(x) z = o F+ (x) .
(7.5.32)
In the sum Σ1 we have n = o(z). For such an n, one clearly has l(x − a+ n) = l(x) + o(1) and so (cf. (7.5.30)) we obtain % I F (x) a+ (n − 1)F+ (x) bF+ (x) + + P(Zn x − a+ n) = n + a− p a− p p & √ n , 1+o + o F+ (x) z so that, as in (7.5.31), (1 − p)Σ1 =
F+I (x) a+ F+ (x)p bF+ (x) + + o F+ (x) . + 2 a− (1 − p) 2a− (1 − p) (1 − p)
(7.5.33)
In the sum Σ2 , we have n εx, π1 ≡ nl(x)x−2 ε/z → 0 and therefore, owing to Corollary 5.2.2(i), for a c1 < ∞, π1 h x − a+ n , r0 = 1 + (1 + o(1)). P(Zn x − a+ n) c1 nG+ r0 2 But, for n = o(x) and all small enough π1 , x − a+ n l > l(x − a+ n)(1 − cπ1 ) r0 = l(x) − αa+ nz −1 + o max{1, nz −1 } + O nz −2 > l(x) − c2 nz −1 for a c2 > 0. Hence ( ) n exp −l(x) + c2 + n ln p + ln n z n1 +1 ) ( εz ln p c3 exp −l(x) + 2
" √ z c3 exp −l(x) + ln p = o F+ (x) . 2
(1 − p)Σ2 c3
∞
From this and (7.5.32), (7.5.33) we obtain (7.5.20). The theorem is proved.
The coefficient c in (7.5.20) also admits a somewhat different representation. Since b = a^{(2)}/(2a−²) and, owing to factorization identities,
a− = −Eξ/(1 − p),    a+ = (1 − p) ES/p,    Eξ² = a^{(2)}(1 − p) − 2Eξ ES,
we have
b = (Eξ² + 2Eξ ES)(1 − p)/(2(Eξ)²),
c = (Eξ² + 2Eξ ES)/(2(Eξ)²) − 2ES/Eξ = Eξ²/(2(Eξ)²) − ES/Eξ.
7.6 A Poissonian representation for the supremum S and the time when it was attained
We will complete this chapter with a remark that is somewhat outside the mainstream exposition of the book and applies to arbitrary random walks (without any conditions on the distribution F) for which S = sup_{k≥0} S_k < ∞ a.s. Let (τ, Z), (τ₁, Z₁), (τ₂, Z₂), . . . be a sequence of i.i.d. random vectors with distribution
P(τ = j, Z ≥ t) = P(S_j ≥ t)/(Dj),    j ≥ 1,    t > 0,    (7.6.1)
where
D = Σ_{j=1}^∞ P(S_j > 0)/j < ∞.
The latter condition is equivalent to the relation S < ∞ a.s. (see e.g. § 7, Chapter XII of [122] or § 2, Chapter 11 of [49]). Here we make no additional assumptions on the distribution of ξ_j. Denote by θ the time when the supremum value S is first attained in the random walk {S_k}:
θ := min{k ≥ 0 : S_k = S},
and by ν an r.v. which is independent of {(τ_j, Z_j); j ≥ 1} and has the Poisson distribution with parameter D.
Theorem 7.6.1. If D < ∞ then the following representation holds true:
(θ, S) =d (τ₁, Z₁) + · · · + (τ_ν, Z_ν),    (7.6.2)
where the right-hand side is equal to (0, 0) when ν = 0. In particular,
S =d Σ_{j=1}^ν Z_j,
where
P(Z ≥ t) =: U(t) = (1/D) Σ_{j=1}^∞ P(S_j ≥ t)/j,    t > 0.
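Two consequences of the representation (7.6.2) are easy to test by simulation from a single batch of paths: P(S = 0) = P(ν = 0) = e^{−D}, and ES = Eν·EZ = Σ_j E S_j⁺/j (the Spitzer-type series). The sketch below is ours, not the authors': the Gaussian step law N(−0.5, 1), the horizon at which the series and the supremum are truncated, and the sample size are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, horizon = 40_000, 200
walks = np.cumsum(rng.normal(-0.5, 1.0, size=(n_paths, horizon)), axis=1)
j = np.arange(1, horizon + 1)

# D = sum_j P(S_j > 0)/j; with this drift the terms beyond `horizon` are negligible.
D_hat = ((walks > 0).mean(axis=0) / j).sum()

# P(S = 0) = P(the walk never goes positive) should equal e^{-D}.
p_never_pos = (walks.max(axis=1) <= 0).mean()

# From (7.6.2): E S = E nu * E Z = sum_j E S_j^+ / j, computed two ways.
es_series = (np.maximum(walks, 0.0).mean(axis=0) / j).sum()
es_direct = np.maximum(walks.max(axis=1), 0.0).mean()
```

Since both estimates of ES come from the same sample, their Monte Carlo errors are strongly correlated and the agreement is typically much closer than the individual standard errors suggest.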
If the distribution U(t) is subexponential then, as x → ∞, for a suitable function n(x) → ∞ we have P(Z₁ + · · · + Z_n ≥ x) ∼ nU(x) for all n ≤ n(x). In this case,
P(S ≥ x) ∼ Σ_{k=0}^∞ P(ν = k) k U(x) = Eν U(x) = Σ_{j=1}^∞ P(S_j ≥ x)/j.
The required subexponentiality of U(t) will apparently follow from the subexponentiality of F. In any case, U(t) has this property under the conditions of Theorem 2.7.1 in the case E|ξ_j| = ∞ (see also (2.7.7)). This is also true when the conditions Eξ_j = −a < 0 and [ · , =], V ∈ R, are satisfied. Indeed, in this case, for all n as x → ∞,
P(S_n ≥ x) = P(S_n + an ≥ x + an) ∼ nF+(x + an),
so that by Theorem 1.1.4(iv)
U(x) = (1/D) Σ_{n=1}^∞ P(S_n ≥ x)/n ∼ (1/D) Σ_{n=1}^∞ F+(x + an)
  ∼ (1/(aD)) ∫_x^∞ F+(t) dt ∼ x F+(x)/(aD(α − 1)).    (7.6.3)
The distribution U(t) will be subexponential for semiexponential distributions F as well; this can be verified in the same way as (7.6.3) (see Chapter 5).
(see formula (10) in § 15 of [42]). Now, the integral representation Eω
P
jν
τj iλ
e
P
jν
Zj
for the sum on the right-hand side of (7.6.2) has exactly this form. The theorem is proved. Results similar to Theorem 7.6.1 can also be obtained for the distribution of (θ∗ , S), where θ∗ := max{k 0 : Sk = S} (see [42]).
8 On the asymptotics of the first hitting times
8.1 Introduction
For x ≥ 0, let
η+(x) := inf{k ≥ 1 : S_k > x},    η−(x) := inf{k ≥ 1 : S_k ≤ −x},
where we put η+(x) = ∞ (η−(x) = ∞) if S_k ≤ x (S_k > −x) for all k = 1, 2, . . . The objective of the present chapter is to find the asymptotics of, and bounds for, the probabilities P(η−(x) = n) and P(η+(x) = n), or for their integral analogues P(n < η−(x) < ∞) and P(n < η+(x) < ∞), as n → ∞. The level x ≥ 0 can be either fixed or growing with n. Special attention will be paid to the random variables η± = η±(0).
Since {η+(x) > n} = {S̄_n ≤ x}, where S̄_n := max_{k≤n} S_k, the asymptotics of P(η+(x) > n) → 0 can be considered as the asymptotics of the probabilities of small deviations of S̄_n.
In the study of the above-mentioned asymptotics, the determining role is played by the 'drift' in the random walk, which we will characterize using the quantities
D = D+ := Σ_{k=1}^∞ P(S_k > 0)/k    and    D− := Σ_{k=1}^∞ P(S_k ≤ 0)/k,    (8.1.1)
where clearly D+ + D− = ∞. Let p := P(η+ = ∞). It is well known (see e.g. [122, 49]) that
{D+ < ∞, D− = ∞} ⇐⇒ {η− < ∞ a.s., p > 0} ⇐⇒ {inf_k S_k = −∞, sup_k S_k < ∞ a.s.}    (8.1.2)
and
{D+ = ∞, D− = ∞} ⇐⇒ {η− < ∞, η+ < ∞ a.s.} ⇐⇒ {inf_k S_k = −∞, sup_k S_k = ∞ a.s.}.
Also, it is obvious that a relation symmetric to (8.1.2) holds in the case {D− < ∞, D+ = ∞}. Taking into account this symmetry, we can confine ourselves to considering the case
A₀ = {D− = ∞, D+ = ∞}
On the asymptotics of the first hitting times
and only one of the two possibilities A− = {D− = ∞, D+ < ∞}
or
A+ = {D− < ∞, D+ = ∞}.
If Eξ = a exists then A0 = {a = 0},
A− = {a < 0},
A+ = {a > 0}.
In what follows, the classification of the results will be made according to the following three main criteria: (1) the value of x (we will distinguish between the cases x = 0, a fixed value of x > 0 and x → ∞); (2) the direction of the drift (one of the possibilities A0 , A± ); (3) the character of the distribution of ξ. Accordingly, the present chapter has the following structure. In § 8.2 we study the case of a fixed level x, mostly when x = 0: §§ 8.2.1–8.2.3 are devoted to the case A− , x = 0, for various distributions classes for ξ; § 8.2.4 deals with the case A0 . The relationship between the distributions of η± (x) and η± for a fixed x > 0 is discussed in § 8.2.5. In § 8.3 we consider the case x → ∞ together with n; §§ 8.3.1–8.3.3 deal with various distribution classes for the law of ξ. The asymptotics of the distributions of η± (x) do not always depend on the exact asymptotic behaviour of the distribution tails of ξ. But sometimes, in the cases of regularly varying or exponentially fast decaying tails F+ , the desired asymptotics are closely related to each other (cf. Chapter 6). Therefore exponentially fast decaying distributions are also considered in the present chapter. A survey of results, which is close to the exposition of this chapter, was presented in [193].
8.2 A fixed level x 8.2.1 The case A− with x = 0. Introduction Assuming that a = Eξ < 0 exists, we will concentrate here on studying the asymptotics of the probabilities P(η− > n)
and
P(η+ = n) as
n → ∞,
(8.2.1)
and also on the closely related asymptotics of the probability P(θ = n) for the time θ := min{n ≥ 0 : S_n = S̄} at which the random walk first attains its maximum value S̄ = sup_{k≥0} S_k. This relation has a simple form. Since
P(θ = n) = P(S̄_{n−1} < S_n = S̄_n) P(max_{k≥n}(S_k − S_n) = 0)
(the first factor on the right-hand side is equal to 1 for n = 0) and, by the duality principle for random walks (see e.g. § 2, Chapter XII of [122]),
P(S̄_{n−1} < S̄_n = S_n) = P(η− > n),
it follows that
P(θ = n) = p P(η− > n)    (8.2.2)
for p = P(S = 0) = P(η+ = ∞) = 1/Eη− (with regard to the last two equalities, see Theorem 8.2.1(i) below or [122, 49, 42]).
We will consider the following distribution classes, which we have already dealt with, and their extensions:
• the class R of distributions with regularly varying right tails
F+(t) = V(t) = t^{−α} L(t),    α ≥ 0,    (8.2.3)
where L(t) is an s.v.f., and
• the class Se of semiexponential distributions with tails
F+(t) = V(t) = e^{−l(t)},    l(t) = t^α L(t),    α ∈ (0, 1),    (8.2.4)
where L(t) is an s.v.f. such that, for Δ = o(t), t → ∞, and any fixed ε > 0,
l(t + Δ) − l(t) ∼ αΔl(t)/t    if αΔl(t)/t > ε;    l(t + Δ) − l(t) → 0    if αΔl(t)/t → 0    (8.2.5)
(see also (5.1.1)–(5.1.5), p. 233). According to Remark 1.2.24, a sufficient condition for (8.2.5) is that the function L(t) is differentiable for all large enough t and that L′(t) = o(L(t)/t), t → ∞. Another sufficient condition for (8.2.5) is that l(t) = l₁(t) + o(1), t → ∞, where l₁(t) satisfies (8.2.5).
Some properties of distributions from the classes R and Se were studied in Chapter 1. In particular, we saw there that distributions from these classes are subexponential (see § 1.1.4 and Theorem 1.2.36 respectively). It turns out that if we somewhat extend the classes R and Se by allowing the functions F+ to 'oscillate slowly' (see below), then distributions from the new classes will still possess the properties of the distributions F ∈ R and F ∈ Se that are required for our purposes in this chapter. So we will deal with such extensions as well. An alternative to R and Se is
• the class C of exponentially fast decaying distributions, i.e. distributions that satisfy the right-sided Cramér condition:
λ+ := sup{λ : ϕ(λ) < ∞} > 0,    where ϕ(λ) := Ee^{λξ}.
Let λ₀ be the point at which the minimum min_λ ϕ(λ) =: ϕ̂ is attained. We will distinguish between the two possibilities:
(1) λ₀ ≤ λ+, ϕ′(λ₀) = 0;    (2) λ₀ = λ+, ϕ′(λ+) < 0.    (8.2.6)
It is evident that in case (1) we always have ϕ′(λ₀) = 0 if λ₀ < λ+ and Eξ < 0.
Now we will turn to previously known results. As usual, by the symbol c, with or without indices, we will denote a constant, not necessarily with the same meaning in different formulae. In the case of bounded lattice-valued ξ's, the complete asymptotic analysis of P(η±(x) = n) for all x and n → ∞ was presented in [33, 34]. For an extended version of the class R, the asymptotics P(θ = n) ∼ cV(−an), and hence that of P(η− > n), were obtained in Chapter 4 of [42]. It was also established there that, for the class C in the case λ₀ < λ+, one has
P(θ = n) ∼ c ϕ̂^n/n^{3/2}    (∼ p P(η− > n) owing to (8.2.2)).    (8.2.7)
Comprehensive results on the asymptotics of the probabilities P(η−(x) > n) and P(n < η+(x) < ∞) in the case of a fixed x ≥ 0 for the classes R and C were obtained in [116, 32, 104, 27, 193]. In this connection, one should note that some results from Theorems 8.2.3, 8.2.12, 8.2.14 and 8.2.16 below are already known. Nevertheless, we present them here for completeness of exposition. Necessary and sufficient conditions for the finiteness of Eη−^γ, γ > 0, and some relevant problems were studied in [143, 154, 138, 160]. It was found, in particular, that Eη−^γ < ∞, γ > 0, iff E(ξ⁺)^γ < ∞, where ξ⁺ = max{0, ξ} (see e.g. [143]). Unimprovable bounds for P(η− > n) were given in § 43 of [50].
Because of the established relationship (8.2.2) between the distributions of the r.v.'s θ and η−, we will concentrate on the distributions of η± in what follows. We will begin with results that clarify the relationship between the distributions of η− and η+, which is of independent interest.
Introduce an r.v. ζ having the generating function
H(z) := Ez^ζ = (1 − Ez^{η−})/((1 − z) Eη−)
(Eη− < ∞ owing to our assumption that Eξ < 0), so that
P(ζ = k) = P(η− > k)/Eη−,    k = 0, 1, . . . ,
and put
p(z) := E(z^{η+} | η+ < ∞),    p := P(η+ = ∞) = P(S = 0),    q := 1 − p = P(η+ < ∞).
Moreover, let η be an r.v. with distribution P(η = k) = P(η+ = k | η+ < ∞) (and generating function p(z)); let η₁, η₂, . . . be independent copies of η,
T₀ := 0,    T_k := η₁ + · · · + η_k,    k ≥ 1,
and let ν be an r.v. with the geometric distribution P(ν = k) = pq^k, k ≥ 0, independent of {η_j}.
Theorem 8.2.1. The following assertions hold true.
(i)
D = Σ_{k=1}^∞ P(S_k > 0)/k = −ln p,    p = 1/Eη−.    (8.2.8)
(ii)
H(z) = (1 − q)/(1 − q p(z)),    (8.2.9)
and therefore the distribution of η+ completely determines that of η−, and vice versa.
(iii) For any n ≥ 0,
P(η− > n) = p^{−1} P(T_ν = n) > P(η+ = n).    (8.2.10)
(iv) If the sequence {P(η+ = n)/q; n ≥ 1} is subexponential then, as n → ∞,
P(η− > n) ∼ (1/p²) P(η+ = n).    (8.2.11)
If the sequence
b_n := P(S_n > 0)/(nD),    n ≥ 1,    (8.2.12)
is subexponential then, as n → ∞,
P(η− > n) ∼ (e^D/n) P(S_n > 0),    P(η+ = n) ∼ (e^{−D}/n) P(S_n > 0).    (8.2.13)
(v) The r.v. ζ has an infinitely divisible distribution and admits the representation
ζ =d ω₁ + · · · + ω_ν,    (8.2.14)
where ω₁, ω₂, . . . are independent copies of an r.v. ω with distribution
P(ω = k) = P(S_k > 0)/(kD),    k = 1, 2, . . . ,
and the r.v. ν is independent of {ω_i} and has the Poisson distribution with parameter D.
It follows from (8.2.14) that
P(η− > n) = Eη− P(ω₁ + · · · + ω_ν = n).
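Parts (i) and (v) lend themselves to a numerical check. For a compound Poisson sum with integer-valued summands, Panjer's recursion gives P(ζ = n) = (1/n) Σ_{k=1}^n q_k P(ζ = n − k) with q_k = P(S_k > 0), because here λ·k·g_k = D·k·q_k/(kD) = q_k; combined with (8.2.15) and Eη− = e^D this predicts P(η− > n) = e^D p_n. The sketch below is our illustration (step law N(−0.5, 1), truncation at 60 steps, sample size all chosen by us), checking this and Eη− = e^D from one batch of simulated paths:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, horizon = 100_000, 60
pos = np.cumsum(rng.normal(-0.5, 1.0, size=(n_paths, horizon)), axis=1) > 0

q = pos.mean(axis=0)                       # q[k-1] estimates P(S_k > 0)
D = (q / np.arange(1, horizon + 1)).sum()  # Spitzer series, truncated

# Compound-Poisson pmf of zeta via Panjer's recursion:
# p_n = (1/n) * sum_{k=1}^n q_k * p_{n-k},  p_0 = e^{-D}
p = np.zeros(6)
p[0] = np.exp(-D)
for n in range(1, 6):
    p[n] = (q[:n][::-1] @ p[:n]) / n
pred_tail = np.exp(D) * p                  # predicted P(eta_- > n), n = 0..5

# Empirical tails: {eta_- > n} = {S_1 > 0, ..., S_n > 0}
run = np.logical_and.accumulate(pos, axis=1)
emp_tail = np.concatenate(([1.0], run[:, :5].mean(axis=0)))

# E eta_- = sum_{n >= 0} P(eta_- > n); the truncation at 60 is negligible here.
e_eta_emp = 1.0 + run.mean(axis=0).sum()
```

At n = 2, for instance, the recursion reduces to the identity P(S₁ > 0, S₂ > 0) = (q₁² + q₂)/2, a special case of (8.2.14).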
Remark 8.2.2. The assertion of the theorem that the distributions of the r.v.'s η± determine each other looks somewhat surprising, as a similar assertion for the first positive and first non-positive sums χ+ := S_{η+} and χ− := S_{η−} would be wrong. Indeed, if F−(t) = ce^{−ht} for t ≥ 0, c < 1, h > 0, then for any F+ we would have P(χ− < −t) = e^{−ht}, t > 0, whereas
1 − E(e^{iλχ+}; η+ < ∞) = (1 − f(λ))(iλ + h)/(iλ),    f(λ) := Ee^{iλξ}.
Therefore the distributions P(χ+ > x, η+ < ∞) will be different for different tails F+(t), t > 0.
Proof. The proof of Theorem 8.2.1 generally follows the path used in § 21 of [42] to study the asymptotics of P(θ = n). It is based on the factorization identities
1 − Ez^{η−} = exp{ −Σ_{n=1}^∞ (z^n/n) P(S_n ≤ 0) },    (8.2.16)
1 − E(z^{η+}; η+ < ∞) = exp{ −Σ_{n=1}^∞ (z^n/n) P(S_n > 0) },    (8.2.17)
|z| ≤ 1 (see e.g. [122, 49, 42]). Since
1/(1 − z) = e^{−ln(1−z)} = exp{ Σ_{n=1}^∞ z^n/n },
the identities (8.2.16), (8.2.17) can be rewritten as
Σ_{n=0}^∞ z^n P(η− > n) = (1 − Ez^{η−})/(1 − z) = exp{ Σ_{n=1}^∞ (z^n/n) P(S_n > 0) },    (8.2.18)
Σ_{n=1}^∞ z^n P(η+ = n) = 1 − exp{ −Σ_{n=1}^∞ (z^n/n) P(S_n > 0) }.    (8.2.19)
We now consider the parts of the theorem in turn.
(i) Letting z = 1 in (8.2.18), (8.2.19), we obtain D = −ln p, p = 1/Eη−.
(ii) The assertion (8.2.9) follows from comparing (8.2.18) with (8.2.19).
(iii) The relation (8.2.9) is clearly equivalent to the equality
1/(1 − Σ_{n=1}^∞ z^n P(η+ = n)) = Σ_{n=0}^∞ z^n P(η− > n),
of which the left-hand side is equal to
Σ_{k=0}^∞ q^k Σ_{n=0}^∞ z^n P(T_k = n) = p^{−1} E z^{T_ν}.
This immediately implies (8.2.10).
(iv) Consider the entire functions
A−(v) := e^{vD}    and    A+(v) := 1 − e^{−vD}.
For |z| ≤ 1, the relations (8.2.18), (8.2.19) can be rewritten in the form
Σ_{n=0}^∞ z^n P(η− > n) = A−(b(z)),    (8.2.20)
Σ_{n=1}^∞ z^n P(η+ = n) = A+(b(z)),    (8.2.21)
where
b(z) := (1/D) Σ_{n=1}^∞ z^n P(S_n > 0)/n ≡ Σ_{n=1}^∞ z^n b_n
and the b_n were defined in (8.2.12). Since {b_n} is subexponential by assumption, it only remains to make use of the known theorems (see § 1.4 or e.g. [83]) on functions of distributions (specified by the functions A± in (8.2.20), (8.2.21)). As A± are entire functions and A−′(1) = De^D, A+′(1) = De^{−D}, we obtain by virtue of (8.2.20), (8.2.21) and Theorem 1.4.3 that
P(η− > n) ∼ b_n A−′(1) = e^D P(S_n > 0)/n = Eη− P(S_n > 0)/n,
P(η+ = n) ∼ b_n A+′(1) = e^{−D} P(S_n > 0)/n = P(η+ = ∞) P(S_n > 0)/n.
This proves (8.2.13). The relation (8.2.11) can be proved in exactly the same way, taking into account the fact that the function
A(v) := (1 − q)/(1 − qv)
is analytic in the disk |v| < 1/q, A′(1) = q/(1 − q), and that H(z) = A(p(z)) owing to (8.2.9). Therefore, as n → ∞,
P(η− > n)/Eη− ∼ (q/(1 − q)) · P(η+ = n)/q,
which (taking account of (8.2.8)) is equivalent to (8.2.11).
(v) The assertion (8.2.14) follows from the observation that
H(z) = exp{ Σ_{n=1}^∞ z^n P(S_n > 0)/n − D } = exp{ D(Q(z) − 1) },    (8.2.22)
where
Q(z) = Σ_{n=1}^∞ z^n P(S_n > 0)/(nD).
∞ z n P(Sn > 0) . nD n=1
It is clear that the right-hand side of (8.2.22) is the generating function of the random sum ω1 + · · · + ων . The theorem is proved. Now we will turn to ‘explicit’ asymptotic representations for the distributions of the r.v.’s η± in terms of the distribution F. 8.2.2 The case Eξ < 0 for x = 0 when F belongs to the classes R or Se or to their extensions Theorem 8.2.3. Let a = Eξ < 0 and either F ∈ R or F ∈ Se, where we assume in the latter case that α < 1/2. Then, as n → ∞, P(η− > n) ∼ eD V (|a|n), −D
P(η+ = n) ∼ e
(8.2.23)
V (|a|n),
(8.2.24)
where D is defined in (8.1.1), (8.2.8) and eD = Eη− ,
e−D = P(S = 0) = P(η+ = ∞).
(8.2.25)
The same assertion holds for the ‘dual’ pair of r.v.’s 0 := inf{k 1 : Sk 0}, η+
0 η− := inf{k : Sk < 0}.
Theorem 8.2.4. Assume that the conditions of Theorem 8.2.3 are satisfied. Then, as n → ∞, 0
0 P(η− > n) ∼ eD V (|a|n), 0
0 = n) ∼ e−D V (|a|n), P(η+
where D0 :=
∞ P(Sk 0) , k
k=1
0
0 eD = Eη− =
1 . = ∞)
0 P(η+
Remark 8.2.5. Both assertions of Theorem 8.2.3 have a simple ‘physical’ interpretation. First note that, under the conditions of the theorem, the probability that during time m a large jump of size x ∼ cn or bigger will occur is approximately equal to mV (x). Further, the rare event {η− > n} where n is large occurs, roughly speaking, when there is a large jump of size |a|n in a time interval of length η− (with mean Eη− ); after that, the random walk will need, on average, a time n to reach the negative half-axis. Therefore it is natural to expect that P(η− > n) ∼ Eη− V (|a|n). The result P(S η− > x) ∼ Eη− V (x) of [8] admits a similar interpretation. The rare event {η+ = n} occurs when first (prior to time n) the trajectory stays
8.2 A fixed level x
377
above zero (the probability of this is close to P(S = 0); at time n the trajectory will be in a ‘neighbourhood’ of the point an) and then, at time n, there occurs a large jump of size −an. These considerations explain, to some extent, the asymptotics (8.2.24). Proof of Theorem 8.2.3. The assertions (8.2.23) and (8.2.24) are simple consequences of Theorem 8.2.1(iv) (see (8.2.12), (8.2.13)). We simply have to find the asymptotics of {bn } as n → ∞ and verify that this sequence is subexponential. The probability P(Sn > 0) can be written in the form P(Sn0 > |a|n), where 0 Sk = Sk − ak, ESk0 = 0. Now if F ∈ R then P(Sn0 > |a|n) ∼ nV (|a|n)
(8.2.26)
(see e.g. §§ 3.4 and 4.4). The same relation holds if F ∈ Se (that is, conditions (8.2.4), (8.2.5) are satisfied) and α < 1/2 (see § 5.4; for α 1/2 the deviations x = |a|n do not belong to the region where (8.2.26) holds true). Thus, under the conditions of Theorem 8.2.3, bn ∼
V (|a|n) , D
and so the sequence {bn } is subexponential (see p. 44). The theorem is proved. It follows from Theorem 8.2.3 that the relation (8.2.11) holds as n → ∞ and t that, for any increasing function Q(t), we have, putting QI (t) := 0 Q(u) du, the following equivalences: EQ(η− ) < ∞ ⇐⇒ E[Q(ξ/|a|); ξ > 0] < ∞, E[Q(η+ ); η+ < ∞] < ∞ ⇐⇒ E[QI (ξ/|a|); ξ > 0] < ∞. Theorem 8.2.4 can be proved in exactly the same way, using the ‘dual’ identities
" ∞ 0 zk P(Sk < 0) , 1 − Ez η− = exp − k k=1
k " 0 0 z < ∞ = exp − 1 − E z η + ; η+ P(Sk 0) . k The duality seen in Theorems 8.2.3 and 8.2.4 is present everywhere in the remainder of the chapter as well. Because of its obviousness, its further description will be omitted in what follows. One can broaden the conditions under which the relations (8.2.23), (8.2.24) remain true. First we consider the case of finite variance. Theorem 8.2.6. Let Eξ 2 < ∞.
378
On the asymptotics of the first hitting times
(i) Suppose that, instead of the condition F ∈ R, the following conditions hold in Theorem 8.2.3: (i1 ) [ · , <], V ∈ R, and nV 2 (n) = o F+ (|a|n) ; (i2 ) for any fixed h ∈ (0, 1) and some c < ∞, for all large enough t one has F+ (ht) < cF+ (t); (i3 ) for any fixed √ M < ∞ one has F+ (t + u) ∼ F+ (t) as t → ∞ for |u| < M t. Then the relations (8.2.23), (8.2.24) hold true with V replaced by F+ . (ii) Suppose that, instead of the condition F ∈ Se, the following conditions hold in Theorem 8.2.3: (ii1 ) [ · , <], V ∈ Se, α < 1/2; (ii2 ) for any fixed γ ∈ (α, 1) and some c < ∞, for all large enough t one has v(th)/v(t) > chγ , where v(t) := − ln F+ (t); (ii3 ) for any fixed √ M < ∞ one has v(t + u) − v(t) = o(1) as t → ∞, |u| M t. Then the relations(8.2.23), (8.2.24) hold true with V replaced by F+ . Proof. We can again use the argument employed to prove Theorem 8.2.3. One just has to verify that the new, broader, conditions still imply (8.2.26) and the subexponentiality of the sequence {bn }. (i) Let conditions (i1 )–(i3 ) be satisfied. Using ‘truncation’ (at the level h|a|n) to analyze the asymptotics of P(Sn0 > an) (see Chapters 3 and 4), one obtains from (i1 )–(i3 ) that, for h ∈ (0, 1), P(Sn0 > |a|n) = nP(Sn0 > |a|n, ξn − a > h|a|n) + O (nV (h|a|n))2 , where, for M → ∞ slowly enough, one has, by virtue of Chebyshev’s inequality and (i2 ), (i3 ), that P(Sn0 > |a|n, ξn − a > h|a|n) 0 + ξn − a > |a|n, ξn − a > h|a|n) = P(Sn−1 0 = P Sn−1 > (1 − h)|a|n F+ (|a|(hn − 1)) (1−h)|a|n
0 P(Sn−1 ∈ dt) F+ (|a|(n − 1) − t)
+ −∞
(8.2.27)
# √ $ 0 0 = o(V (n)) + E F+ (|a|(n − 1) − Sn−1 ); Sn−1 <M n $ # √ 0 0 ); M n Sn−1 < (1 − h)|a|n + E F+ (|a|(n − 1) − Sn−1 = F+ (|a|n) + o F+ (n) . This proves the relation P Sn0 > |a|n ∼ nF+ (|a|n).
8.2 A fixed level x
379
The subexponentiality of {bn }, bn ∼ D −1 F+ (|a|n), follows from the relations (supposing for simplicity that n is even) n
n/2−1
bk bn−k = 2
k=0
bk bn−k + b2n/2 ,
k=0
where
M
n/2−1
=
k=0
√
n
k=0
n/2−1
+
k=M
√
n+1
and, owing to (i2 ), (i3 ), M
√
n
k=0
n/2−1
bk bn−k ∼ bn ,
k=M
n/2−1
bk bn−k < cF+ (|a|n)
√ n+1
k=M
√
bk = o(bn ).
n+1
(ii) The case when conditions (ii1 )–(ii3 ) are met is dealt with in a similar manner, using the results of Chapter 5. Theorem 8.2.6 is proved. The case Eξ 2 = ∞ can be considered in exactly the same way. For simplicity let α ∈ (1, 2) in (8.2.3), and let F− (t) ≡ P(ξ < −t) cF+ (t).
(8.2.28)
Theorem 8.2.7. Under the above conditions, the first assertion of Theorem 8.2.6 √ remains true in the case (8.2.28), provided that we replace the range |u| < M t in (i3 ) by |u| < M σ(t), σ(t) = V (−1) (1/t). If the condition P(ξ < −t) cF+ (t) does not hold then Theorem 8.2.7 still remains true, but with a different function σ(t) (see Chapter 3). Proof. The proof of Theorem 8.2.7 completely repeats that of Theorem 8.2.6. √ 0 < M n} by the range One just replaces in (8.2.27) the deviation range {Sn−1 0 {Sn−1 < M σ(n)} and then uses bounds from Chapter 3. One could also consider the cases α = 1, α = 2. Now we will derive bounds for the probabilities (8.2.1). Since, owing to Theorem 8.2.1(iii), one has P(η+ = n) < P(η− > n), we will obtain bounds only for P(η− > n). which satisfies the conTheorem 8.2.8. Assume that there exists a distribution F ditions of Theorem 8.2.6 and is such that F+ (t) V (t) := F(t) for all t (in d
P(ξ t) = V (t), ξ satisfies the conditions of other words, one has ξ ξ, Theorem 8.2.3, if Eξ = a < 0). Then, for any ε > 0 and all large enough n, P(η− > n) V |a|n(1 − ε) Eη− . (8.2.29)
380
On the asymptotics of the first hitting times
∈ Se, α 1/2 then we still have (8.2.29) (in the case F ∈ Se, mulIf F tiplying V |a|n(1 − ε) bya constant factor makes no difference since a small variation in ε can change V |a|n(1 − ε) by more than a constant factor). d
Proof. The assertion (8.2.29) is next to obvious, since η− η− , where η− is Therefore, by virtue of Theo= F. defined as η− but for i.i.d. r.v.’s ξ1 , ξ2 , . . . ⊂ rem 8.2.3, P(η− > n) P( η− > n) ∼ V (| a|n)E η− ,
n → ∞.
(8.2.30)
Now we will redefine the distribution of ξ as follows,
F+ (t) for t M, P(ξ t) = V (t) for t > V (−1) (F+ (M )), and then, for a given ε > 0, choose a value M such that Eξ =: a < a + ε, E η− < Eη− + ε. This, together with (8.2.30), implies (8.2.29). To prove the second assertion, we can again make use of the above argument, n except that there is no relation of the form (8.2.26) for the sums Sn = i=1 ξi . If V ∈ Se, α 1/2 then, from the results of Chapter 5, it follows only that, for any ε > 0 and all large enough n, 1−ε an > | a|n cn V (| a|n) = cne−l(|ba|n)(1−ε) (8.2.31) P Sn − (see Corollary 5.2.2, p. 240). Since l(tn) ∼ tα l(n) for a fixed t > 0, we see that when a < a+ε one can bound the right-hand side of (8.2.31) (slightly changing ε if needed) by ne−l(|a|n(1−ε)) = nV |a|n(1 − ε) . η− > n) do not exceed the Owing to (8.2.18), the probabilities P(η− > n) P( coefficients in the Taylor expansion of "
∞ zk P(Sk > 0) , exp k k=0
where
1 P(Sk > 0) V |a|k(1 − ε) k for large enough k. Therefore these probabilities will also not exceed the coefficients in the expansion of "
∞ exp B z k bk , k=0
where bk ∼ B −1 V (|a|k(1 − ε)), B :=
M k=1
∞
k=1 bk
P(Sk > 0) +
= 1, and
k>M
V |a|k(1 − ε) .
8.2 A fixed level x
381
Now set A (v) := eBv , so that A (1) = BeB . As the sequence {bk } is subexponential, we obtain from Theorem 1.4.3 that P(η− > n) < V |a|n(1 − ε) BeB 1 + o(1) . Since the value of V |a|n(1 − ε) can be diminished BeB times (or by any other fixed factor) by slightly changing ε, we have obtained the second assertion of the theorem. The theorem is proved. Corollary 8.2.9. If
E := E el(ξ) ; ξ > 0 < ∞,
where l(t) = tα L(t) is a non-decreasing function, α ∈ (0, 1), and L(t) is an s.v.f. then, for any ε > 0, E el(|a|(1−ε)η+ ) ; η+ < ∞ < ∞. Eel(|a|(1−ε)η− ) < ∞, This assertion follows directly from Theorem 8.2.8 and Chebyshev’s inequality P(ξ x) Ee−l(x) . Remark 8.2.10. In the case V ∈ Se the assertion (8.2.29) admits a simpler proof, based on the crude inequalities (for, say, α < 1/2) P(η− > n) P(Sn > 0) P(Sn > 0)(1 + o(1)) nV (| a|n) V |a|n(1 − ε) . Remark 8.2.11. In the assertion (8.2.29) of Theorem 8.2.8 one can assume that ε = ε(n) → 0 as n → ∞. The rate of convergence ε(n) → 0 can be obtained from the bounds for ε in (8.2.31), which are found in Corollary 5.2.2, and from estimates for the distance between a and a. 8.2.3 The case Eξ < 0 for x = 0 when F belongs to the class C First we consider the possibility (1) in (8.2.6) for the case F ∈ C. Recall that ϕ(λ) = Eeλξ , λ0 > 0 is the point where the function ϕ(λ) attains its minimum value ϕ := minλ ϕ(λ) and that λ+ = sup{λ : ϕ(λ) < ∞}. In the case under consideration, one has λ0 λ+ , ϕ (λ0 ) = 0, which includes the following two substantially different subcases: (1a) λ0 < λ+ ; (1b) λ0 = λ+ ,
ϕ (λ+ ) = 0.
In case (1b), the fact that both quantities ϕ(λ+ ) and ϕ (λ+ ) are finite implies that (see (6.1.20), (6.1.21)) F+ (t) = e
−λ+ t
∞ V (t),
tV (t)dt < ∞. 0
(8.2.32)
382
On the asymptotics of the first hitting times
In this situation, we will need the following condition: [B]
Either ϕ (λ+ ) < ∞, or ϕ (λ+ ) = ∞ and ∞ V (t) =
V (u) du ∈ R.
I
(8.2.33)
t
The relation (8.2.33) means that V I (t) = t−α L0 (t),
(8.2.34)
where α ∈ [1, 2] and L0 is an s.v.f. Now return to the general case λ0 λ+ and consider the distribution, conjugate to F (its Cram´er transform): F(λ0 ) (dt) := (λ0 )
Let the ξi
eλ0 t F(dt). ϕ
(8.2.35)
be independent r.v.’s with distribution F(λ0 ) . Put Sn(λ0 ) :=
n
(λ0 )
ξi
.
i=1
(λ ) 2 (λ ) < ∞ and therefore Then clearly Eξi 0 = 0. If λ0 < λ+ then d := E ξi 0 (λ0 ) √ the distribution of Sn / nd converges weakly to the normal law as n → ∞. If λ0 = λ+ then (cf. (6.2.6), (6.2.7), p. 309) V0 (t) := P
(λ ) ξ1 0
∞
t =
eλ0 u F(du) ϕ
t
=
1 ϕ
∞
λ0 I V (t) ∈ R. λ0 V (u) du − dV (u) ∼ ϕ
t
(8.2.36) This means (see § 1.5) that, provided that the conditions λ0 = λ+ and [B] are (λ ) satisfied, the distribution of ξi 0 belongs to the domain of attraction of the stable law Fα,1 with parameters (α, 1), α ∈ [1, 2] (note that the left tail of the distribution F(λ0 ) decays exponentially fast; for α = 2, the limiting law Fα,1 is normal). The distribution Fα,1 has a continuous density f . Put √ ∞ nd if d < ∞, P(Sk > 0) Dϕ := , σn := (−1) k kϕ V0 (1/n) if d = ∞, k=1 (−1)
:= inf{v : V0 (v) u} is the (generalized) inverse of the funcwhere V0 tion V0 . Recall that a distribution F is said to be non-lattice if its support is not part
8.2 A fixed level x
383
of a set of the form {b + hk; k = 0, ±1, ±2, . . .}, 0 b h, where h can be assumed, without loss of generality, to be equal to 1. A distribution F is called arithmetic if the r.v. ξ ⊂ = F is integer-valued with lattice span equal to 1. Theorem 8.2.12. Assume that F ∈ C, λ0 λ+ , ϕ (λ0 ) = 0 and, in the case λ0 = λ+ , that the conditions (8.2.32) and [B] are met. Then, in the non-lattice case, P(η− > n) ∼ eDϕ
f (0)ϕn , λ0 nσn
P(η+ = n) ∼ e−Dϕ
(8.2.37)
f (0)ϕn , λ0 nσn
(8.2.38)
where f (0) is the value of the density of the limiting stable law Fα,1 at 0. If the r.v. ξ has an arithmetic distribution then the factor 1/λ0 on the right-hand sides of (8.2.37), (8.2.38) should be replaced by e−λ0 /(1 − e−λ0 ). (λ0 )
Proof. The distributions of Sn and Sn
have a similar relation to that in (8.2.35): P(Sn ∈ dt) = ϕn e−λ0 t P Sn(λ0 ) ∈ dt
(see e.g. (6.1.3)), so that ∞ P(Sn > 0) = ϕ
n 0
e−λ0 t P Sn(λ0 ) ∈ dt
∞
= ϕ λ0 n
e−λ0 u P Sn(λ0 ) ∈ (0, u] du.
(8.2.39)
0
Repeating the argument from the proofs of Theorems 6.2.1 and 6.3.1, we obtain λ0 ϕn f (0) P(Sn > 0) ∼ σn
∞
ue−λ0 u du =
0
ϕn f (0) . λ0 σn
This result also follows from Corollaries 6.2.3 and 6.3.2. Further, we will again make use of the factorization identities (8.2.18), (8.2.19), where, having made the change of variables s = zϕ, we will proceed as in the proof of Theorem 8.2.1, using in (8.2.20), (8.2.21) the function ∞ s = bϕ,k sk bϕ (s) := b ϕ k=1
instead of b(z). The sequence bϕ,k =
P(Sk > 0) f (0) ∼ Dϕ kϕk λ0 kσk Dϕ
(8.2.40)
384
On the asymptotics of the first hitting times
is subexponential. Therefore, again applying results from § 1.4, we obtain P(η− > n)ϕ−n ∼ eDϕ
f (0) . λ0 kσk Dϕ
In the arithmetic case, we use the local theorem P(Sn(λ0 ) = k) − 1 f k σn σn
→0
k
as n → ∞ (see Theorem 4.2.2 of [152]), which implies that P(Sn > 0) = ϕn
∞ k=1
ϕn e−λ0 f (0) e−λ0 k P Sn(λ0 ) = k ∼ . (1 − e−λ0 )σn
Again, the same result follows from Corollaries 6.2.3 and 6.3.2. The proof of the second assertion of the theorem is completely analogous. The theorem is proved. Remark 8.2.13. Note the cases λ0 < λ+ or λ0 = λ+ , ϕ (λ+ ) < ∞, √ that, in √ one has f (0) = 1/ 2π, σn = nd and d = ϕ (λ0 )/ϕ in the relations (8.2.37), (8.2.38). Observe also that one could consider the lattice non-arithmetic case as well, when ξ assumes the values b + hk, k = 0, ±1, . . . , 0 < |b| < h. In this case the coefficient of (nσn )−1 on the right-hand sides of (8.2.37), (8.2.38) will depend on n, oscillating within fixed limits. Now consider possibility (2) in (8.2.6): λ0 = λ+ , ϕ (λ+ ) < 0. Then we still have (8.2.32). Suppose that V ∈ R, i.e. V (t) = t−α−1 L(t),
α > 1,
(8.2.41)
where L is an s.v.f. Theorem 8.2.14. Assume that F ∈ C, λ0 = λ+ and ϕ (λ+ ) < 0 and also that conditions (8.2.32), (8.2.41) are satisfied. Then P(η− > n) ∼ eDϕ ϕn−1 V (a+ n), P(η+ = n) ∼ e−Dϕ ϕn−1 V (a+ n), where a+ = −Eξ (λ0 ) = −ϕ (λ+ )/ϕ > 0. Proof. Under the conditions of Theorem 8.2.14 one still has the equality (8.2.39) for P(Sn > 0), in which λ0 can be replaced by λ+ , so that the problem again reduces to finding the asymptotics of P Sn(λ0 ) ∈ (0, u] = P Sn(λ0 ) + a+ n ∈ (a+ n, a+ n + u] .
8.2 A fixed level x
385
We have (λ ) P ξ1 0 ∈ (t, t + u] =
t+u
eλ+ v F(dv) ϕ
t t+u
=
λ+ V (v) dv − ϕ
t
t+u
dV (v) λ+ u ∼ V (t). ϕ ϕ
t
Again repeating the argument from the proofs of Theorems 6.2.1 and 6.3.1 we obtain that, for any fixed u > 0, λ+ u nV (a+ n). P Sn(λ0 ) ∈ (0, u] ∼ ϕα
(8.2.42)
This assertion also follows from Corollaries 6.2.3 and 6.3.2. As before, from (8.2.42) we find that, by virtue of (8.2.39), P(Sn > 0)ϕ
−n
∞ ∼ λ+
e−λ+ u u du
0
nV (a+ n) λ+ nV (a+ n) = . ϕα ϕα
Thus, using the notation (8.2.40) we have bϕ,k =
V (a+ k) P(Sk > 0)ϕ−k ∼ . Dϕ k ϕαDϕ
As in Theorem 8.2.12 the sequence {bϕ,k } is subexponential, and hence the remainder of the previous proof will remain valid. The theorem is proved. It appears that it would not be very difficult to extend the assertion of Theorem 8.2.14 to the case V ∈ Se, α < 1/2. Observe that the assumption (8.2.41), as may be seen from the relation (8.2.42), corresponds to the non-lattice case. The lattice case can be considered in exactly the same way, but this will require a change in the form of condition (8.2.41). Note also that the local theorems 2.1 and 3.1 of [54] for the lattice case were obtained earlier in [105].
8.2.4 The case A0 for x = 0 First note that we have here the following analogue of Theorem 8.2.1. Theorem 8.2.15. For the case A0 we have that "
∞ 1 1 − Ez η− zk , = exp P(Sk > 0) = 1−z k 1 − Ez η+ k=1
Eη± = ∞ and that the distribution of η− is uniquely determined by that of η+ and vice versa.
386
On the asymptotics of the first hitting times
Proof. The proof of the theorem follows in an obvious way from (8.2.17), (8.2.18) and the fact that P(η+ < ∞) = 1. Now, for γ ∈ (0, 1) put
" ∞ z k Δk D(z) := exp − . k
Δn := P(Sn 0) − γ,
(8.2.43)
k=1
Theorem 8.2.16. The following assertions hold true for the case A0 . (i) The relation P(η− > n) ∼
n−γ L(n) Γ(1 − γ)
as
n → ∞, γ ∈ (0, 1), L is an s.v.f., (8.2.44)
holds iff n−1
n
P(Sk 0) → γ
(8.2.45)
k=1
or, equivalently, iff P(Sn 0) → γ.
(8.2.46)
(ii) If (8.2.44) or (8.2.45) holds then necessarily L(n) ∼ D(1 − n−1 ) and P(η− (x) > n) → r(x), P(η− > n)
n → ∞,
for any fixed x 0, where the function r(x) is given in explicit form. (iii) If Eξ = 0, Eξ 2 < ∞ then γ = 1/2 and |Δn | n
< ∞.
(8.2.47)
This means that the function L(n) in the relation (8.2.44) can be replaced by D(1), 0 < D(1) < ∞. Proof. The proof of Theorem 8.2.16 can be found in [32], pp. 381–2 (see [32] for a more detailed bibliography; the equivalence of conditions (8.2.45) and (8.2.46) was established in [106]). It is evident that an assertion symmetric to (8.2.44) holds for η+ , η+ (x) (with γ and Δn replaced by 1 − γ and −Δn respectively): P(η+ > n) ∼
nγ−1 L−1 (n) . Γ(γ)
If Eξ 2 = ∞ (assuming that if Eξ is finite then Eξ = 0) then, under the well known regularity conditions on the distribution tails of ξ (see p. 57 and Theorem 1.5.1), we have, as n → ∞, P(Sn 0) → Fα,ρ ((−∞, 0]) = Fα,ρ,− (0) ≡ γ,
8.2 A fixed level x
387
where Fα,ρ is a stable law with parameters α ∈ (0, 2], ρ ∈ [−1, 1]. For symmetric ξ’s one has γ = 1/2, P(Sn 0) −
1 c 1 = P(Sn = 0) < √ , 2 2 n
so that the series (8.2.47) is convergent and
" ∞ 1 zn P(Sn = 0) . D(z) = exp − 2 n=1 n It may be seen from the above that here, in contrast with the case A− , the influence of the distribution tails of ξ on the asymptotics (8.2.1) is less significant. There is a vast literature on the rate of convergence of F (n) (t) := P(Sn /σn < t) (for a suitable scaling sequence σn ) to Fα,ρ,− (t). In particular, one can find in it conditions that are sufficient for convergence of the series (8.2.47) in the case Eξ 2 = ∞. For illustration purposes, we will present here just one of the results known to us : r Let νr := |x (F(dx) − Fα,ρ (dx))| < ∞ for some r > α, and assume that x(F(dx) − Fα,ρ (dx)) = 0 in the case α 1. Then supF (n) (t) − Fα,ρ,− (t) cνr n1−r/α t
(see [216, 217]). The convergence (8.2.47) clearly occurs in such a case. Now we will obtain a refinement of Theorem 8.2.16. Theorem 8.2.17. (i) If in the case A0 we have that Δn ∼ cn−γ
as
n → ∞,
0 c < ∞,
γ ∈ (0, 1),
(8.2.48)
then P(η− = n) ∼
γn−γ−1 D(1), Γ(1 − γ)
0 < D(1) < ∞.
(8.2.49)
(ii) If Eξ = 0, E|ξ|3 < ∞ and the distribution ξ is either lattice or satisfies the condition lim sup |f(λ)| < 1, |λ|→∞
then γ = 1/2 and the relations (8.2.48), (8.2.49) hold true. Similar relations hold for P(η+ = n).
388
On the asymptotics of the first hitting times
Concerning the asymptotics of P(η+ = n) under weaker conditions, see remarks after Theorem 8.2.18 below. Proof of Theorem 8.2.17. Owing to (8.2.16) and (8.2.43), we have "
∞ ∞ z k z k Δk Ez η− = 1 − exp −γ = 1 − (1 − z)γ D(z). − k k k=1
(8.2.50)
k=1
The asymptotics of the coefficients ak in the expansion of the function ∞
a(z) := −(1 − z)γ =
ak z k
k=0
are well known: an ∼
γn−1−γ . Γ(1 − γ)
(8.2.51)
To find the asymptotics of the coefficients dk in the expansion D(z) =
∞
dk z k ,
k=0
we will make use of Theorem 1.4.6 (p. 55) with A(w) = ew , gk = −Δk /k to claim that dn ∼ A (g(1))gn = −
D(1) Δn ∼ cD(1)n−γ−1 , n
n → ∞.
(8.2.52)
Now, from (8.2.50) with ε = ε(n) → 0 slowly enough as n → ∞, we have (assuming for simplicity that εn is integer-valued) P(η− = n) =
n
ak dn−k =
k=0
εn k=0
(1−ε)n
+
k=εn+1
n
+
,
k=(1−ε)n+1
where, by virtue of the equality a(1) = 0, εn k=0
=
εn
ak dn (1 + o(1)) = o(dn ).
k=0
Further, n
=
k=(1−ε)n+1
n
an (1 + o(1)) dn−k ∼ D(1)an ,
k=(1−ε)n+1
and so from (8.2.51), (8.2.52) we obtain that (1−ε)n
k=εn+1
This proves (8.2.49).
# $2 n (εn)−1−γ = cε−2−2γ n−1−2γ = o(an ).
8.2 A fixed level x
389
The second assertion of the theorem follows from the observation that, under the conditions of part (ii), one has cEξ 3 1 1 Δn = P(Sn 0) − = √ + o √ 2 n n (see e.g. Theorem 21 in Chapter V of [224]), and so (8.2.48) holds with γ = 1/2. The theorem is proved.
8.2.5 A fixed level x > 0 The asymptotics of the distributions of r.v.’s η± (x) for x > 0 differ from the respective asymptotics for η± by factors that depend only on x. Namely, the following assertion was established in [116, 32, 104, 27, 193]. Theorem 8.2.18. Let the conditions of one of Theorems 8.2.3 (for the class R), 8.2.12, 8.2.14, 8.2.16 ((8.2.45)) be satisfied. Then, for the cases A− and A0 , as n → ∞, for any fixed x 0 we have P(η− (x) > n) → r− (x), P(η− > n) P(n < η+ (x) < ∞) → r+ (x), P(n < η+ < ∞)
(8.2.53)
where the functions r± (x) are given in explicit form. In the case Eξ = 0, d = Eξ 2 < ∞ the following local theorem was obtained in [3]: if the distribution of ξ is arithmetic then there exists a function U (x) such that for any fixed x 0 we have lim n3/2 P(η+ (x) = n) = U (x),
n→∞
(8.2.54)
√ where U (x) ∼ x/ 2πd as x → ∞. If, in addition, E|ξ|3 < ∞ then (8.2.54) holds true for non-lattice distributions as well [193].1 In the case of bounded lattice-valued r.v.’s ξ, asymptotic expansions for P(η± (x) = n) were derived in [33, 34]. In the next section, we will find the asymptotics of the functions r± (x) from (8.2.53) as x → ∞. In the present subsection we will restrict ourselves to giving a simple proof of Theorem 8.2.18 and the explicit form of the functions r± (x) in two cases, when Eξ = 0, Eξ 2 < ∞ and when Eξ = 0 and condition [Rα,ρ ], ρ ∈ (−1, 1), (see p. 57) is met. For the case Eξ < 0 we will prove the first of the assertions (8.2.53). 1
A more general result claiming that Eξ = 0, Eξ 2 < ∞ would suffice for (8.2.54) was published in [117]. This theorem, however, proved to be incorrect since its conditions contain no restrictions on the structure of F (as communicated to us by A.A. Mogulskii). A paper by A.A. Mogulskii with a correct version of this result was submitted to Siberian Advances in Mathematics.
390
On the asymptotics of the first hitting times
Proof of Theorem 8.2.18. Let Eξ 0 (we will assume that Eξ 2 < ∞ in the Eξ = 0), let η1 , η2 , . . . and χ1 , χ2 , . . . be independent r.v.’s that are distributed respectively as η− and χ− , and let Tk :=
k
ηi ,
Hk :=
i=1
k
χi ,
τ (x) := min{k : Hk −x}.
i=1
Then it is well known (see e.g. § 17 of [42]) that E|χ− | < ∞,
η− (x) = Tτ (x) ,
Eτ (x) < ∞.
Denote by Fn the σ-algebra generated by (η1 , χ1 ), (η2 , χ2 ), . . . , (ηn , χn ). Then obviously {τ (x) n} ∈ Fn , so that τ (x) will be a stopping time. By virtue of Theorems 8.2.3 and 8.2.16(i) (in the cases under consideration, when Eξ = 0 and either Eξ 2 < ∞ or condition [Rα,ρ ], ρ ∈ (−1, 1), is satisfied, clearly the limit (8.2.45) will always exist), the distribution tail of ηj is an r.v.f. Since ηi > 0 one has maxkn Hk = Hn and so one can make use of Corollary 7.4.2. It is evident that P(τ (x) > n) decays exponentially fast as n → ∞, and hence the conditions of that theorem are met and therefore P η− (x) > n = P(Tτ (x) > n) ∼ Eτ (x)P(η− > n), (8.2.55) where Eτ (x) =
∞
P(Hk −x) = r− (x).
k=1
The first assertion in (8.2.53) is proved. The second assertion in the case Eξ = 0, Eξ 2 < ∞ is proved in exactly the same way. If F ∈ C, Eξ < 0 then the function r− (x) will be of a more complex nature. Since by the renewal theorem Eτ (x) ∼
x E|χ− |
as
x → ∞,
it is natural to expect from (8.2.55) and (8.2.23) that, under the conditions of Theorem 8.2.3, for Eξ = a < 0 and large x, x xEη− P η− (x) > n ∼ V |a|n = V |a|n . E|χ− | |a|
(8.2.56)
For x = o(n), this fact (in its more precise formulation) will be proved in the next section. Observe also that relations similar to (8.2.55), (8.2.56), could also be obtained for the distribution tails of χ− (x) = Sη− (x) + x, x 0. Indeed, using the factorization identity (see e.g. formula (1) in Chapter 4 of [42]) 1 − Eeiλχ− = Eη− EeiλS 1 − f(λ) , f(λ) = Eeiλξ ,
8.3 A growing level x
391
one can easily demonstrate that, in the case of an l.c. left tail W (t) = P(ξ < −t), we have, as t → ∞, P(χ− < −t) ∼ Eη− W (t) (for more detail, see e.g. § 22 of [42]). This corresponds to the above reasoning and the representation χ− = ξ1 + · · · + ξη− . A similar representation holds for χ− (x), which makes it natural to expect that, as t → ∞, P(χ− (x) < −t) ∼ Eη− (x) W (t). 8.3 A growing level x In this section, it will be convenient for us to change the direction of the drift when it is non-zero, so that now the main case is A+ = {a > 0} and the main object of study will be the time η+ (x) of the first crossing of the level x → ∞. Accordingly, the regular variation condition will now be imposed on the left tail: F− (t) = P(ξ < −t) = W (t) ≡ t−β LW (t),
(8.3.1)
where β > 1 and LW (t) is an s.v.f. at infinity (this is condition [=, · ] as used in Chapters 3 and 4). In the present section, we will deal only with the tail classes R and C and also with distributions from the class Ms , for which E|ξ|s < ∞. It is clear that the intersection of the classes Ms and R is non-empty. Since {η+ (x) > n} = {S n x}, we can also view our problem as that on ‘small deviations’ of the maximum S n , if x does not grow fast enough. 8.3.1 The distribution class C For distributions from the class C, a comprehensive study of the asymptotics of P η+ (x) > n = P(S n x) for arbitrary a = Eξ was made in [33, 34, 37]. The method papers consists in finding double transforms (in x and used in these in n) of P η+ (x) > n in terms of solutions to Wiener–Hopf-type equations (or, equivalently, in terms of the factorization components; these components were found in [33, 34] in explicit form) and then using asymptotic inversion of these transforms. For this method to work, one has to assume that the following condition is met. [Ca ]
The distribution F of ξ has an absolutely continuous component.
The same approach (in its simpler form) is also applicable in the case ξ is lattice-valued (see [33, 34]). Along with condition [Ca ], we will also need Cram´er’s condition [C0 ] on the ch.f. f of the distribution F:
392
On the asymptotics of the first hitting times
[C0 ]
lim sup |f(λ)| < 1. |λ|→∞
Clearly, [Ca ] ⊂ [C0 ]. To formulate the corresponding results we will need some notation. As before, let θ = x/n, (8.3.2) ϕ(λ) = Eeλξ , Λ(θ) = sup λθ − ln ϕ(λ) λ
and λ+ = sup{λ : ϕ(λ) < ∞}, θ− = lim
λ↓λ−
ϕ (λ) , ϕ(λ)
λ− = inf{λ : ϕ(λ) < ∞}, θ+ = lim
λ↑λ+
ϕ (λ) . ϕ(λ)
The deviation function Λ(θ) is analytic in (θ− , θ+ ), and the supremum in (8.3.2) is attained at the point λ(θ) = Λ (θ) (see e.g. § 8, Chapter 8 of [49]). The next assertion follows from the results of [37]. Theorem 8.3.1. Let F ∈ C, condition [Ca ] be satisfied and n → ∞. x = o(1) then n cx P(η(x) = n) ∼ 3/2 e−nΛ(θ) , n
(i) If θ− < 0 < θ+ and x → ∞, θ =
where c is a known constant admitting a closed-form expression in terms of the factorization components of the function 1 − zf(λ). (ii) If 0 < θ1 θ θ2 < θ+ then c(θ) P(η(x) = n) ∼ √ e−nΛ(θ) , n where c(θ) is a known analytic function, also admitting a closed-form expression in terms of the above factorization components. From this theorem one can easily obtain asymptotic representations for P(η(x) > n),
P(S x),
P(η(x) > n| η(x) < ∞)
and so on. Note also that there are no conditions on the sign of a = Eξ in the statement of the theorem. As we have already observed, the method of proof of Theorem 8.3.1 is analytic and is based on the idea of factorization and inverting double transforms. It was discovered relatively recently that this approach is not the only possible one for solving the problems under consideration. Using direct probabilistic methods, it was shown in [45, 47] that, in a number of cases, the asymptotics of the probabilities of large (and small) deviations of S n can be found from the respective asymptotics for Sn . In particular, the following result holds true.
8.3 A growing level x
393
Theorem 8.3.2. Let a distribution F ∈ C be either lattice or satisfy condition [C0 ] and let θ = x/n satisfy the inequalities θ < a, 0 < θ1 θ θ2 < θ+ . Then, as n → ∞, P(η(x) > n) ∼ c(θ) P(Sn x),
(8.3.3)
where the function c(θ) is defined below (see 8.3.5). In fact, the relation (8.3.3) and the form of the function c(θ) are simple corollaries of the following general fact. Consider an r.v. ξ (λ(θ)) that is the Cram´er transform of ξ at the point λ(θ), eλ(θ)t P ξ (λ(θ)) ∈ dt = P(ξ ∈ dt), ϕ(λ(θ))
Eξ (λ(θ)) = θ.
n (λ(θ)) (λ(θ)) (λ(θ)) := , where the r.v.’s ξi are independent copies of Put Sn i=1 ξi (λ(θ)) the r.v. ξ . For simplicity assume that the distribution of Sn has a density and that x/n → θ = const. Then, for any fixed k, the conditional distribution of the random vector (ξn , ξn−1 , . . . , ξn−k ) given Sn ∈ x − dy, where y = o(x) (for instance, y is fixed), converges as n → ∞ to the distribution (λ(θ)) (λ(θ)) (λ(θ)) (see [45, 47]). From here it is not difficult to see , ξ2 , . . . , ξk of ξ1 that P S n x| Sn ∈ x − dy → P T (λ(θ)) y , (λ(θ)) < ∞ a.s. Now we use the total probability where T (λ(θ)) := supk0 −Sk formula for P(S n x) (conditioning on the value of Sn ) and take into account the facts that, for θ ∈ (θ− , θ+ ), θ < a, Λ (θ) −nΛ(θ) √ e P(Sn x) ∼ (8.3.4) λ(θ) 2πn (cf. (6.1.16); observe that Λ (θ) = 1/d(θ)) and P(Sn ∈ x − dy) ∼ λ(θ)eλ(θ)y dy. P(Sn x) Thus we arrive at (8.3.3) with ∞ c(θ) = λ(θ)
eλ(θ)y P(T (λ(θ)) y) dy < ∞
(8.3.5)
0
(here λ(θ) < 0). The above assumption that x/n → θ = const does not restrict the generality, since all the assertions that we have used hold uniformly in θ. Moreover, it is not difficult to derive from (8.3.3) and (8.3.5) the asymptotics of P η+ (x) = n .
394
On the asymptotics of the first hitting times
8.3.2 The distribution class Ms For distributions from the class Ms , in the absence of any regularity assumptions on the function (8.3.1) one can obtain the asymptotics of the probability P(η+ (x) > n) only in case A0 = {a = 0}. It will be assumed everywhere in what follows that, in the lattice case, the value of the level x → ∞ belongs to the respective lattice. √ Theorem 8.3.3. Let F ∈ M3 , a = 0, Eξ 2 = 1, x → ∞ and x = o n . Then 8 2 P(η+ (x) > n) ∼ x . (8.3.6) πn This assertion immediately follows from the convergence rate estimate & % c x 1 sup P S n x − 2 Φ √ c < ∞, − < √ , 2 n n x ∞. Here Φ is the from [203] (see also [204, 202]), which holds when E|ξ|3 < √ standard normal distribution function. Since Φ(u) − 1/2 ∼ u/ 2π as u → 0, we obtain (8.3.6). If the distribution of ξ is lattice or condition [C0 ] is satisfied then, provided that F ∈ Ms for s > 3, one can also study asymptotic expansions for P S n x as x → ∞ (see [204, 37, 175, 176]); it is also possible to obtain similar expansions for x P(η+ (x) = n) ∼ √ (8.3.7) 2πn3/2 (i.e. the local limit theorems for η+ (x)). As far as we know, the problem when F ∈ Ms with a > 0 and x → ∞ remains open. The next subsection is devoted to studying the problem for distributions from the class R.
8.3.3 Asymptotics of P(η_+(x) > n) under the conditions x → ∞, a > 0 and [=, ·] with W ∈ R

Along with the conditions listed in the heading of this subsection, in the case β ∈ (1, 2) (see (8.3.1)) we may in addition need condition [·, <]:

P(ξ ≥ t) ≤ V(t),   V ∈ R,   (8.3.8)

i.e. the majorant V has the form V(t) = t^{−α} L(t), α > 1, where L is an s.v.f. Put z := an − x.

Theorem 8.3.4. Let a > 0, x → ∞, x < an, and let condition [=, ·] with W ∈ R be satisfied. Then the following assertions hold true.
8.3 A growing level x

(i) If Eξ² < ∞, β > 2 and z ≫ √(n ln n), then

P(η_+(x) > n) ∼ (x/a) W(z).   (8.3.9)
(ii) The relation (8.3.9) remains true when β ∈ (1, 2), condition (8.3.8) is met and n and z are such that

nW(z) → 0,   nV(z/ln z) → 0.   (8.3.10)

The meaning of the assertion (8.3.9) is rather simple: the principal contribution to the probability P(S̄_n < x) comes from trajectories that have a negative jump of size about x − an at one of the first x/a steps (prior to the crossing of the boundary x by the 'drift line' a(k) = ES_k = ak). It follows from Theorem 8.3.4 and the relation P(S_n < x) ∼ nW(z) (see Chapters 3 and 4) that

P(S̄_n < x) ∼ (x/(an)) P(S_n < x),

so that we will still have the asymptotics (8.3.3), but with a very simple function c(θ) = θ/a instead of a rather complex factor c(θ) (the case θ → 0 is not excluded).

Proof of Theorem 8.3.4. We will split the argument into three steps.

(1) A lower bound. Let

G_n := {S̄_n < x},   B_j := {ξ_j < −z(1 + ε)},   v := (x/a)(1 − ε),

where ε > 0 is a fixed small number. Then, assuming for simplicity that v is an integer, we will have (cf. (2.6.1))

P(G_n) ≥ P(G_n ∩ ∪_{j=1}^v B_j) ≥ Σ_{j=1}^v P(G_n B_j) + O((xW(z))²).

It is obvious that, for j ≤ v, P(G_n | B_j) → 1 as x → ∞ owing to the invariance principle. As xW(z) ≤ anW(z) → 0, we have

P(G_n) ≥ Σ_{j=1}^v P(B_j)(1 + o(1)) + O((xW(z))²) ∼ (x/a)(1 − ε) W(z(1 + ε)).

Since ε > 0 is arbitrary, we finally obtain

P(G_n) ≥ (x/a) W(z)(1 + o(1)).   (8.3.11)
(2) An upper bound for P(G_n) for truncated summands. Set

C_j := {ξ_j > −z/r},   r > 1,   C := ∩_{j=1}^n C_j.
Then

P(G_n) = P(G_n C) + P(G_n C̄),   (8.3.12)

where C̄ is the complement of C,

P(G_n C̄) ≤ Σ_{j=1}^n P(G_n C̄_j),   (8.3.13)

P(G_n C) ≤ P(S_n − an ≤ −z; C).   (8.3.14)

Lemma 8.3.5. If Eξ² < ∞, β > 2 and x ≫ √(n ln n) then

P(G_n C) ≤ (nW(z))^{r+o(1)}.

If β ∈ (1, 2) and conditions (8.3.8), (8.3.10) are met then

P(G_n C) ≤ c (nW(z))^r.

The assertion of the lemma follows from the bound (8.3.14) and Theorem 4.1.2 (or Corollary 4.1.3) and Theorem 3.1.1 (or Corollary 3.1.2). If we choose an r > β/(β − 1) then clearly

(nW(z))^r = o(xW(z)),   P(G_n C) = o(xW(z)).   (8.3.15)
(3) An upper bound for (8.3.13). We have

P(G_n C̄_j) = P(S̄_{j−1} < x, ξ_j ≤ −z/r, S_{j−1} + ξ_j + S̄^{(j)}_{n−j} < x),

where the r.v. S̄^{(j)}_{n−j} =_d S̄_{n−j} is independent of ξ_1, …, ξ_j. Since ξ_j does not depend on ξ_1, …, ξ_{j−1} or S̄^{(j)}_{n−j}, one has

P(G_n C̄_j) ≤ E W(max(z/r, −x + S̄^{(j)}_{n−j} + S_{j−1}))
≤ E W(max(z/r, −x + S_{n−1})) = E W(max(z/r, z + S⁰_{n−1}))
∼ W(z) E[(max(1/r, 1 + S⁰_n/z))^{−β}],

where S⁰_n = S_n − an. Since S⁰_n/z →_p 0 under the respective conditions (for z ≫ √(n ln n) in part (i) of the theorem and under condition (8.3.10) in part (ii)), we obtain that

P(G_n C̄_j) ≤ W(z)(1 + o(1)).

Moreover,

P(G_n C̄_j) ≤ W(z/r) P(S̄_{j−1} < x) = W(z/r) P(η_+(x) ≥ j).
Hence

Σ_{j=1}^n P(G_n C̄_j) = Σ_{j≤(1+ε)x/a} + Σ_{j>(1+ε)x/a}
≤ (x/a)(1 + ε) W(z) + W(z/r) Σ_{j>(1+ε)x/a} P(η_+(x) ≥ j).   (8.3.16)

Here W(z/r) ≤ cW(z), and we find from the strong law of large numbers and the renewal theorem (Eη_+(x) = Σ_j P(η_+(x) > j) ∼ x/a) that the second term on the right-hand side of (8.3.16) is o(x)W(z). Therefore

Σ_{j=1}^n P(G_n C̄_j) ≤ (x/a) W(z)(1 + o(1)).

Comparing this inequality with (8.3.12)–(8.3.15), we obtain the same bound for the probability P(G_n). This, together with the lower bound (8.3.11), completes the proof of the theorem.

Corollary 8.3.6. Assume that the conditions of Theorem 8.3.4 are satisfied and Eξ² < ∞. Then, for a fixed x ≥ 0, there exist constants c_± = c_±(x) such that, for all sufficiently large n, we have

c_− < P(η_+(x) > n)/W(n) < c_+.   (8.3.17)
Proof. First assume that there exists a subsequence n → ∞ such that

P(η_+(x) > n) ≥ r(n) W(n),   (8.3.18)

where r(n) → ∞, r(n) = o(n). Set y := r(n). Then, by Theorem 8.3.4(i), for all sufficiently large n we have x < y and

P(η_+(x) > n) < P(η_+(y) > n) ∼ (r(n)/a) W(an),

which contradicts (8.3.18). This proves the second inequality in (8.3.17). The first inequality in (8.3.17) follows from the relations

P(η_+(x) > n) ≥ P(ξ_1 < −2an) P(S̄_{n−1} < x + 2an) ∼ W(2an) P(S̄_{n−1} < x + 2an),

where the second factor on the final right-hand side tends to 1 as n → ∞. The corollary is proved.
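Theorem 8.3.4 can also be probed numerically. In the sketch below (our own illustration; the distribution and all parameters are our choices) we take ξ = 4 − ζ with ζ Pareto of index 3/2, so that a = Eξ = 1 and the left tail is W(t) = P(ξ < −t) = (t + 4)^{−3/2} ∈ R, and compare the empirical P(η_+(x) > n) with (x/a)W(z), z = an − x. At these moderate sizes only order-of-magnitude agreement can be expected.

```python
import numpy as np

# Monte Carlo sketch for Theorem 8.3.4: P(eta_+(x) > n) ~ (x/a) W(z), z = an - x.
# Jumps xi = 4 - zeta, zeta Pareto(3/2) on [1, inf): a = E xi = 1 and
# W(t) = P(xi < -t) = (t + 4)**(-1.5).  All parameters are illustrative.
rng = np.random.default_rng(1)
n, x, a, n_walks = 200, 100.0, 1.0, 20_000

zeta = rng.uniform(size=(n_walks, n)) ** (-2.0 / 3.0)   # Pareto(3/2) samples
walks = np.cumsum(4.0 - zeta, axis=1)
p_emp = np.mean(walks.max(axis=1) < x)                  # {eta_+(x) > n}

z = a * n - x
p_asym = (x / a) * (z + 4.0) ** (-1.5)                  # (x/a) W(z), cf. (8.3.9)
print(p_emp, p_asym)
```

The dominant trajectories in the simulation are exactly those described after (8.3.10): one early negative jump of size about z, after which the drift line never recovers to the level x.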
9 Integro-local and integral large deviation theorems for sums of random vectors
9.1 Introduction

Let ξ_1, ξ_2, … be independent d-dimensional random vectors, d ≥ 1, having the same distribution as ξ ⊂= F; let S_n = Σ_{i=1}^n ξ_i, and let Δ[x) be the cube in R^d with edge length Δ > 0 and a vertex at the point x = (x^{(1)}, …, x^{(d)}):

Δ[x) := {y ∈ R^d : x^{(i)} ≤ y^{(i)} < x^{(i)} + Δ, i = 1, …, d}.

Let (x, y) = Σ_{i=1}^d x^{(i)} y^{(i)} be the standard scalar product. The main objective of this chapter is to study the asymptotics of

P(S_n ∈ Δ[x))   (9.1.1)

for

t := |x| = √((x, x)) ≥ σ(n),   Δ ∈ [Δ_1, Δ_2],   t^{−γ} ≤ Δ_1 < Δ_2 = o(t),

where γ > −1 and the sequence σ(n) determines the zone of large deviations (see below; σ(n) = √(n ln n) in the case Eξ = 0, E|ξ|² < ∞).

In the case when the Cramér condition holds,

E e^{(λ,ξ)} < C < ∞   (9.1.2)

in the neighbourhood of a point λ_0 ≠ 0, a comprehensive study of the asymptotics of P(S_n ∈ Δ[x)) as (λ_0, x) → ∞ was presented in [69, 70]. The method of studying the asymptotics of (9.1.1) in Cramér's case (9.1.2), which is based on using the Cramér transform and was described in § 6.1 in the univariate setting, remains fully applicable in the multivariate case as well. In this chapter we will study the asymptotics of (9.1.1) in the case when condition (9.1.2) does not hold.

In the univariate case, the asymptotics of (9.1.1) were found in §§ 3.7 and 4.7 (for regularly varying tails in the cases Eξ² = ∞ and Eξ² < ∞ respectively). The problem becomes more complicated in the multivariate case d > 1, as there are no 'collective' limit theorems on the asymptotics of (9.1.1) in that situation; while in the Cramér case the asymptotics of these probabilities were determined
by the properties of the analytic function E e^{(λ,ξ)} (see e.g. [69, 70, 49, 224]), in the case when (9.1.2) is not satisfied these asymptotics will depend strongly on the 'configuration' of the distribution F (more precisely, on the configuration of certain level lines). The examples below illustrate this statement.

Example 9.1.1. Let ξ = (ξ^{(1)}, …, ξ^{(d)}), where the r.v.'s ξ^{(i)} are independent and have distribution tails

V^{(i)}(u) := P(ξ^{(i)} ≥ u) = u^{−α_i} L_i(u),

the L_i being s.v.f.'s at infinity such that the functions V_i satisfy condition [D_{(1,0)}] (see pp. 167, 217). Then, in the case E|ξ|² < ∞ and min_i x^{(i)} ≫ √(n ln n), we clearly have from Theorem 4.7.1 that

P(S_n ∈ Δ[x)) = Δ^d n^d ∏_{i=1}^d V_1^{(i)}(x^{(i)}) (1 + o(1)),   (9.1.3)

where V_1^{(i)} is the function V_1 from (4.7.1) corresponding to the coordinate ξ^{(i)}.

Example 9.1.2. The character of the asymptotics of (9.1.1) is similar, in a sense, to (9.1.3) in the case when ξ = (0, …, 0, ξ^{(i)}, 0, …, 0) with probability p_i > 0, i = 1, …, d, Σ_{i=1}^d p_i = 1, and the r.v.'s ξ^{(i)} again satisfy condition [D_{(1,0)}]. In this case, the components of the vector ξ are strongly dependent. By the law of large numbers (or by the central limit theorem for the multinomial scheme),

P(S_n ∈ Δ[x)) ∼ Δ^d Σ_{Σ n_i = n} (n!/(n_1! ⋯ n_d!)) p_1^{n_1} ⋯ p_d^{n_d} ∏_{i=1}^d n_i V_1^{(i)}(x^{(i)}) ∼ Δ^d n^d ∏_{i=1}^d p_i V_1^{(i)}(x^{(i)}).   (9.1.4)

In these two examples, the asymptotics of P(ξ ∈ Δ[x)) as |x| → ∞ are 'star-shaped': for equal α_i = α, say, and for the same values of t = |x|, the values of P(ξ ∈ Δ[x)) along the coordinate axes (more precisely, when x^{(i)} ≫ max_{j≠i} x^{(j)} for some i) are much larger than, for instance, those on the 'diagonal' x^{(1)} = ⋯ = x^{(d)}.

Example 9.1.3. If, however, P(ξ ∈ Δ[x)) ∼ Δ^d V_1(x), where V_1(x) has the form

V_1(x) = v(t) g(e(x)),   t = |x|,   e(x) := x/t,   (9.1.5)

g being a continuous positive function on the unit sphere S_{d−1} in R^d, then, as we will see below, under broad assumptions on the r.v.f. v the asymptotics of (9.1.1) will have the form

P(S_n ∈ Δ[x)) ∼ Δ^d n V_1(x)   (9.1.6)

and therefore will be of a substantially different character compared with (9.1.3), (9.1.4).
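The factor n in (9.1.6) reflects the one-large-jump mechanism. Its univariate analogue P(S_n ≥ x) ≈ n P(ξ ≥ x) is cheap to check by simulation; below is a sketch with our own choice of distribution (a centred Pareto with index 5/2, so that Eξ² < ∞), not one taken from the text.

```python
import numpy as np

# Single-big-jump heuristic in d = 1: P(S_n >= x) ~ n P(xi >= x) for a centred
# jump with regularly varying right tail.  xi = zeta - E zeta with zeta
# Pareto(5/2) on [1, inf), E zeta = 5/3.  Illustrative parameters.
rng = np.random.default_rng(2)
n, x, n_walks = 20, 40.0, 200_000

zeta = rng.uniform(size=(n_walks, n)) ** (-1.0 / 2.5)  # Pareto(5/2) samples
s_n = (zeta - 5.0 / 3.0).sum(axis=1)                   # centred sums
p_emp = np.mean(s_n >= x)
p_asym = n * (x + 5.0 / 3.0) ** (-2.5)                 # n P(xi >= x)
print(p_emp, p_asym)
```

The empirical probability exceeds nP(ξ ≥ x) somewhat at these finite sizes (the other n − 1 jumps help the big one), but the two agree in order of magnitude.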
Large deviation theorems for sums of random vectors
Example 9.1.4. Combining Examples 9.1.1 and 9.1.3, i.e. considering random vectors ξ with independent subvectors satisfying conditions of the form (9.1.5), one can obtain for P(S_n ∈ Δ[x)) asymptotics of the form

Δ^d n^k ∏_{i=1}^k V_1^{(i)}(x)
for any k, 1 ≤ k ≤ d, where k will be determined, roughly speaking, by the minimum number of large 'regular' jumps required to move from 0 to Δ[x).

One could give other examples illustrating the fact that, even for distributions F possessing the property of 'directional tail' regular variation, the asymptotics of the large deviation probabilities will essentially depend on the 'configuration' of the distribution. Moreover, in some cases, the very setting-up of the large deviation problem proves to be difficult (a possible general setup will be illustrated by Theorem 9.2.4 below). It is seemingly due to this reason that there is only a rather small number of papers on large deviations in the multivariate 'non-Cramérian' case, most of the publications known to us being devoted mainly to the 'isotropic' situation, when the rate of decay of the distribution in different directions is given by the same r.v.f.

First of all, observe that the convergence of the scaled sums S_n to a non-Gaussian limiting law takes place iff P(|ξ| ≥ t) is an r.v.f.,

P(|ξ| ≥ t) = t^{−α} L(t),   L(t) an s.v.f.,   (9.1.7)

where α ∈ (0, 2), and the distribution S_t(B) := P(e(ξ) ∈ B | |ξ| ≥ t) on the unit sphere S_{d−1} converges weakly to a limiting distribution S:

S_t ⇒ S   as   t → ∞   (9.1.8)

(Theorem 4.2 of [243]; see also Corollary 6.20 of [5]; further references, as well as a detailed discussion of the problem of the convergence of S_n under operator scaling, can be found in [189]).

In the special case when F has a bounded density f that admits a representation of the form

f(x) = t^{−α−d} [h(e(x)) + θ(x)ω(t)],   t = |x| > 1,   (9.1.9)

where h(e) is a continuous function on S_{d−1}, |θ(x)| ≤ 1 and ω(t) = o(1) as t → ∞, the large deviation problem was considered in [283, 200]. In [283], a local limit theorem for the density f_n of S_n was established for large deviations along 'non-singular directions', i.e. directions satisfying e(x) ∈ {e ∈ S_{d−1} : h(e) ≥ δ} for a fixed δ > 0. The theorem states that, uniformly in such directions, one
has f_n(x) ∼ nf(x) in the zone t ≫ n^{1/α}. This result was complemented in [200] by an analysis of the asymptotics of f_n(x) along 'singular directions' (i.e. for e ∈ S_{d−1} such that h(e) = 0) for an even narrower distribution class (in particular, it was assumed that there are only finitely many such singular directions and that the density f(x) decays along such directions as a power function of order −β − d, β > α). The main result of [200] shows that the principal contribution to the probabilities of large deviations along singular directions comes not from trajectories with a single large jump (which was the case in the univariate problem, and also for non-singular directions when d > 1) but from those with two large jumps. In § 9.2.2 below we will investigate this phenomenon under substantially broader assumptions, when the principal contribution comes from trajectories with k ≥ 2 jumps.

In a more general case, when only condition (9.1.7) is met (with α > 0), an integral-type large deviation theorem describing the behaviour of the probabilities P(S_n ∈ xA) when the set A ⊂ R^d is bounded away from 0 and has a 'regular' boundary was obtained in [151], together with its functional version.

As we have already said, the above-mentioned difficulties related to the distribution configuration are absent in the univariate case, and the large deviation problem itself is quite well studied in that case, the known results including asymptotic expansions and large deviation theorems in the space of trajectories (see Chapters 2–8; a more complete bibliography can be found in these chapters and in the respective bibliographic notes at the end of the book).

We emphasize once again that in the multivariate case we encounter a situation where, for a distribution with 'heavy' tails, the principal contribution to the probability that a set in the large deviation zone will be reached is due not to one large jump (as in the univariate case) but to several large jumps. In Examples 9.1.1 and 9.1.2, say, in order to hit a cube Δ[x) located at the diagonal x^{(1)} = ⋯ = x^{(d)}, the random walk will need d large jumps. All other trajectories will be far less likely to reach that cube.

In the present chapter, we will concentrate on the 'most regular' distribution types (for example, those of the form (9.1.5)) and also on distributions that are more general versions of the laws in Examples 9.1.1 and 9.1.2. Moreover, we would like to point out that, in the multivariate case, the language of integro-local theorems is the most natural and convenient one, because:

(1) describing the asymptotics of the probability that a 'small' remote cube will be hit is much easier than finding such asymptotics for an arbitrary remote set;

(2) in this case, the large deviation problem admits a complete (in a certain sense) solution, since knowing such 'local' probabilities for small cubes allows one to easily obtain the 'integral' probability of hitting a given arbitrary set;

(3) in integro-local theorems, one needs to impose essentially no additional
conditions on the distribution F, in contrast with integral theorems; in essence, we can obtain local limit theorems without assuming the existence of a density.
Moreover, integro-local theorems on the asymptotics of the probabilities (9.1.1) are of substantial independent interest: for a broad class of functions f, they prove to be quite useful for evaluating integrals of the form E[f(S_n); S_n ∈ tA] as t → ∞, where A is a 'solid' set that is bounded away from the origin. The usefulness of integro-local theorems was demonstrated in Chapters 6–8 (e.g. in §§ 6.2 and 6.3, where such theorems helped us to study the asymptotics of large deviation probabilities for distributions from the class ER).

In § 9.2, we will study the asymptotics of (9.1.1) under the assumption of regular behaviour of P(ξ ∈ Δ[x)) as x escapes to infinity within a given cone, when the principal contribution to the probability P(S_n ∈ Δ[x)) comes from trajectories containing a single large jump. Assuming that Eξ = 0, we consider both the case E|ξ|² < ∞ and the case E|ξ|² = ∞. We will also deal with a more general approach to the problem, in which regularity conditions are imposed on P(S_j ∈ Δ[x); B^{(j)}) for a fixed j and events B^{(j)} that mean that all the jumps ξ_1, …, ξ_j are large (for a more precise formulation, see (9.2.11) below). In this case, it may happen that the principal contribution to the probability P(S_n ∈ Δ[x)) belongs to trajectories of a 'zigzag' shape, which contain several large jumps.

Integral large deviation theorems (i.e. theorems on the asymptotics of the probability P(S_n ∈ tA) as t → ∞) are obtained in § 9.3 as corollaries of the integro-local theorems of § 9.2. We also consider an alternative approach to proving integral theorems that is not related to integro-local theorems. Of course, the asymptotics of (9.1.1) can be found when ξ has a regularly varying density. In this case, one can obtain an asymptotic representation for the density of S_n (cf. Theorem 8 in [63]).
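Before moving on, note that the weak convergence (9.1.8) of the conditional distribution of directions is easy to probe empirically. A sketch (our own illustration; distribution and thresholds are our choices) for a vector with i.i.d. Pareto(3/2) coordinates, whose limiting spectral measure concentrates on the coordinate axes:

```python
import numpy as np

# Empirical version of S_t(B) = P(e(xi) in B | |xi| >= t), cf. (9.1.8).
# xi has i.i.d. Pareto(3/2) coordinates, so for large t the direction e(xi)
# concentrates near the coordinate axes.  Illustrative parameters.
rng = np.random.default_rng(3)
xi = rng.uniform(size=(200_000, 2)) ** (-2.0 / 3.0)   # Pareto(3/2) coordinates
norms = np.linalg.norm(xi, axis=1)

t = 50.0
big = xi[norms >= t]                                   # condition on |xi| >= t
directions = big / np.linalg.norm(big, axis=1, keepdims=True)   # e(xi)
near_axis = np.mean(directions.max(axis=1) > 0.95)     # within ~18 deg of an axis
print(len(big), near_axis)
```

Almost all conditioned points have one dominant coordinate, the empirical counterpart of the 'star-shaped' tails of Example 9.1.1.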
9.2 Integro-local large deviation theorems for sums of independent random vectors with regularly varying distributions

9.2.1 The case when the main contribution to the probability that Δ[x) will be hit is due to trajectories containing one large jump

We will begin with a more precise formulation (compared with that in (9.1.5)) of the conditions under which we will be studying the asymptotics of (9.1.1). Consider the asymptotics of P(ξ ∈ Δ[x)) in a cone, i.e. for x such that t = |x| → ∞ and e(x) = x/t ∈ Ω, where the subset Ω ⊂ S_{d−1} of the unit sphere, which characterizes the cone, is assumed to be open. An analogue [D_Ω] of condition [D_{(1,0)}] (see §§ 3.7 and 4.7) has here the following form.
9.2 Random vectors with regularly varying distributions
Introduce the half-space Π(e) := {v ∈ R^d : (v, e) ≥ 0} and put

Π(Ω) := ∪_{e∈Ω} Π(e).

It is evident that if Ω contains a hemisphere then Π(Ω) coincides with the whole space R^d.

[D_Ω] There exist a sector Ω ⊂ S_{d−1} and functions Δ_1 = Δ_1(t) ≥ t^{−γ} for some γ > −1, Δ_2 = Δ_2(t) = o(t) such that, for any Δ ∈ [Δ_1, Δ_2] and e(x) ∈ Ω, one has

P(ξ ∈ Δ[x)) = Δ^d V_1(x)(1 + o(1))   as t → ∞,   (9.2.1)

where

V_1(x) = v(t) g(e(x)),   t = |x|,   e(x) = x/t,   v(t) = t^{−α−d} L(t),

L(t) is an s.v.f. at infinity and g(e) is a continuous function on Ω that satisfies on this set the inequalities 0 < g_1 < g(e) < g_2 < ∞. Moreover, for any constant c_1 > 0 and all y ∈ Π(Ω), |y| ≥ c_1 t, we have

P(ξ ∈ Δ[y)) ≤ c_2 Δ^d v(t).   (9.2.2)

The remainder term o(1) in (9.2.1) is assumed to be uniform in the following sense: there exists a function ε_u ↓ 0 as u ↑ ∞ such that the o(1) term in (9.2.1) can be replaced by ε(x, Δ) ≤ ε_u for all t ≥ u → ∞, e(x) ∈ Ω, Δ ∈ [Δ_1, Δ_2].

Clearly, under condition [D_Ω] one has

P(|ξ| ∈ Δ[t), e(ξ) ∈ Ω) ∼ c(Ω) t^{d−1} v(t) Δ,   (9.2.3)

so that the left-hand side of (9.2.3) follows the same asymptotics as the probability P(ξ ∈ Δ[t)) in the univariate case under condition [D_{(1,0)}] (see §§ 3.7 and 4.7).

Note that condition (9.2.2) is essential for the claim of our theorem below. It does not allow the 'most plausible' of the trajectories hitting the cube Δ[x) to contain two (or more) large jumps (i.e. to reach the set Δ[x) along a 'zigzag' trajectory, as was the case in Example 9.1.1). Condition (9.2.2) can be weakened, depending on the deviation zone.

Denote by ∂Ω the boundary of the set Ω in S_{d−1}, by U_ε(∂Ω) the ε-neighbourhood of ∂Ω in S_{d−1} and by Ω_ε = Ω \ U_ε(∂Ω) the 'ε-interior' of Ω. First we consider the case of finite variance.

Theorem 9.2.1. Let Eξ = 0, E|ξ|² < ∞, α > 2 and condition [D_Ω] be satisfied. Then, for t = |x| ≫ √(n ln n) and e(x) ∈ Ω_ε for some ε > 0,

P(S_n ∈ Δ[x)) = Δ^d n V_1(x)(1 + o(1)),   (9.2.4)

where the remainder term o(1) is uniform in the following sense. For any sequence s(n) → ∞, there exists a sequence R(n) → 0 such that the o(1) term
in (9.2.4) can be replaced by R such that |R| ≤ R(n) for all t > s(n)√(n ln n), e(x) ∈ Ω_ε and Δ ∈ [Δ_1, Δ_2].

Proof. We will follow the scheme of the proofs of Theorems 3.7.1 and 4.7.1. Using notation similar to before, we put

G_n := {S_n ∈ Δ[x)},   B_j := {(ξ_j, e(x)) < t/r},   r > 1,   B := ∩_{j=1}^n B_j,

and again make use of the relations (3.7.8) and (3.7.9) ((4.7.4) and (4.7.5)).

(1) Bounding P(G_n B). It is not hard to see that, under the conditions [D_Ω] and e(x) ∈ Ω, as t → ∞,

P((ξ, e(x)) ≥ t) ≤ V_0(t),   where V_0(t) ∼ c t^d v(t) = c t^{−α} L(t).   (9.2.5)

Hence, cf. §§ 3.7 and 4.7, we obtain from Corollary 4.1.3 that for any δ > 0, as t → ∞,

P(G_n B) ≤ (nV_0(t))^{r−δ},

where we choose r > 2 and δ in such a way that, for t ≫ √n and Δ ≥ t^{−γ}, we have

(nV_0(t))^{r−δ} ≤ nΔ^d v(t).

Using an argument like that following (4.7.8) (p. 219), one can easily see that it suffices to choose r − δ in such a way that

r − δ > 1 + (1 + γ)d/(α − 2).

Therefore, for such an r and δ,

P(G_n B) = o(nΔ^d V_1(x)).   (9.2.6)

(2) Bounding P(G_n B̄_{n−1} B̄_n). Let

A_k := {v ∈ R^d : (v, e(x)) < t(1 − k/r) + Δ√d},   k = 1, 2.   (9.2.7)

Then G_n B̄_{n−1} B̄_n ⊂ {S_{n−2} ∈ A_2} and

P(G_n B̄_{n−1} B̄_n) = ∫_{z∈A_2} P(S_{n−2} ∈ dz) ∫_{v∈A_1} P(z + ξ ∈ dv, (ξ, e(x)) ≥ t/r) × P(v + ξ ∈ Δ[x), (ξ, e(x)) ≥ t/r).   (9.2.8)

Since for v ∈ A_1 one has x − v ∈ Π(Ω), |x − v| > t/r − Δ√d, we see from (9.2.2) (with y = x − v) that the last factor in (9.2.8) does not exceed cΔ^d v(t). Therefore (see also (9.2.5)), the integral over the range A_1 in (9.2.8) is less than

cΔ^d v(t) P((ξ, e(x)) ≥ t/r) ≤ c_1 Δ^d v(t) V_0(t).
It is evident that the same bound holds for the integral over A_2. Hence

Σ_{i≠j} P(G_n B̄_i B̄_j) ≤ c_1 Δ^d n² v(t) V_0(t) = o(nΔ^d V_1(x)).   (9.2.9)
(3) Evaluating P(G_n B̄_n). Since G_n B̄_n ⊂ {S_{n−1} ∈ A_1} by the definition of the set A_1, we obtain that, by virtue of condition [D_Ω] (recall that x − z ∈ Π(Ω) for z ∈ A_1),

P(G_n B̄_n) = ∫_{A_1} P(S_{n−1} ∈ dz) P(ξ ∈ Δ[x − z), (ξ, e(x)) ≥ t/r)
≤ E[P(ξ ∈ Δ[x − S_{n−1}) | S_{n−1}); S_{n−1} ∈ A_1]
≤ Δ^d E[V_1(x − S_{n−1}); |S_{n−1}| < εt](1 + o(1)) + cΔ^d v(t) P(S_{n−1} ∈ A_1, |S_{n−1}| ≥ εt).

Here e(x − z) ∈ Ω for |z| < εt, and P(|S_{n−1}| ≥ εt) → 0 for t ≫ √n. Hence

P(G_n B̄_n) ≤ Δ^d V_1(x)(1 + o(1)).

Setting A_0 := {v ∈ R^d : (v, e(x)) < t(1 − 1/r) − Δ√d}, we find in a similar way that

P(G_n B̄_n) ≥ ∫_{A_0} P(S_{n−1} ∈ dz) P(ξ ∈ Δ[x − z))
≥ Δ^d E[V_1(x − S_{n−1}); |S_{n−1}| < εt](1 + o(1)) = Δ^d V_1(x)(1 + o(1)).

This means that P(G_n B̄_n) = Δ^d V_1(x)(1 + o(1)). Together with (9.2.6), (9.2.9) and the relations (4.7.4), (4.7.5), this implies

P(G_n) = nΔ^d V_1(x)(1 + o(1)).

As before, the required uniformity of the bounds follows from the argument used in the proof. The theorem is proved.

Remark 9.2.2. Observe that if we had used more precise bounds for the integrals in the proof of the above theorem, then condition (9.2.2) could have been relaxed to the following:

P(ξ ∈ Δ[x − z)) ≤ cΔ^d v(t) |z|²/n

for e(x) ∈ Ω_ε, z ∈ A_1, |z| ≥ εt, t ≫ √n (cf. condition (9.2.12) below).
Now consider the case of infinite variance and assume that condition [D_Ω] of the present subsection holds for α < 2. For such an α, one should complement condition [D_Ω] by adding the following requirement: for any e ∈ S_{d−1} we have

P((ξ, e) ≥ t) ≤ W(t),   (9.2.10)

where W(t) = t^{−α_W} L_W(t) is an r.v.f. of index −α_W with α_W ≤ α.

Theorem 9.2.3. Let condition [D_Ω], complemented in the above-mentioned way, be satisfied. Then, in each of the following two cases:

(1) α < 1, t = |x| > n^{1/θ}, θ < α;
(2) α ∈ (1, 2), Eξ exists and Eξ = 0, α_W > 1 and t > n^{1/θ}, θ < α_W,

we will have the relation (9.2.4) for all x such that e(x) ∈ Ω_ε, t = |x| → ∞ and all Δ ∈ [Δ_1, Δ_2]. The statements of Theorems 3.7.1 and 9.2.1 concerning the uniformity of the remainder term o(1) in t ≥ t* → ∞, t > n^{1/θ}, e(x) ∈ Ω_ε, Δ ∈ [Δ_1, Δ_2] remain true in this case.

Proof. The proof of Theorem 9.2.3 is quite similar to that of Theorems 3.7.1 (p. 169) and 9.2.1, and so is left to the reader. We just note that, at the third stage of the argument (see the proofs of Theorems 3.7.1, 9.2.1), it will be necessary to use the relation P(|S_{n−1}| > M σ_W(n)) → 0 as M → ∞, where σ_W(n) = W^{(−1)}(1/n), which follows from Corollaries 2.2.4 and 3.1.2.
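The scaling σ_W(n) = W^{(−1)}(1/n) appearing in the proof is easy to illustrate: for the pure power function W(t) = t^{−α_W} one has σ_W(n) = n^{1/α_W}. The sketch below (our own illustration, with symmetrized Pareto jumps of our choosing) checks numerically that |S_n| rarely exceeds a large multiple of σ_W(n).

```python
import numpy as np

# For W(t) = t**(-a_w), sigma_W(n) = W^{(-1)}(1/n) = n**(1/a_w) is the natural
# scaling of S_n in the infinite-variance case.  Symmetrized Pareto(3/2) jumps
# in d = 1; all parameters are illustrative.
rng = np.random.default_rng(4)
a_w, n, n_walks = 1.5, 1000, 5_000

zeta = rng.uniform(size=(n_walks, n)) ** (-1.0 / a_w)   # Pareto(3/2) magnitudes
signs = rng.choice([-1.0, 1.0], size=(n_walks, n))
s_n = (signs * zeta).sum(axis=1)                        # mean-zero heavy-tailed sums

sigma_w = n ** (1.0 / a_w)                              # = W^{(-1)}(1/n)
tail = np.mean(np.abs(s_n) > 10 * sigma_w)              # should be small
print(sigma_w, tail)
```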
9.2.2 The case when the main contribution to the probability that Δ[x) will be hit is due to 'zigzag' trajectories containing several large jumps

We return to the case E|ξ|² < ∞ and consider some more general conditions that would cover Examples 9.1.1–9.1.4 and allow situations when the most likely trajectories to reach Δ[x) are of 'zigzag' shape. For simplicity we can restrict ourselves to the case when the cone from Theorem 9.2.1 coincides with the positive orthant, i.e.

Ω = {e = (e^{(1)}, …, e^{(d)}) ∈ S_{d−1} : e^{(i)} ≥ 0}.

As before, we use the notation

t = |x|,   B̄_j = {(ξ_j, e(x)) ≥ t/r},   B^{(k)} = ∩_{j=1}^k B̄_j.

In this subsection, we will be using the following conditions.
For a fixed j and the values of Δ specified in condition [D_Ω], we have

P(S_j ∈ Δ[x); B^{(j)}) = Δ^d v_{(j)}(t) g_{(j)}(e(x))(1 + o(1))   (9.2.11)

for t = |x| → ∞, e(x) ∈ Ω_ε, ε > 0, where v_{(j)} is an r.v.f. of the form

v_{(j)}(t) = t^{−α_{(j)}−d} L_{(j)}(t),   α_{(j)} > 2,

L_{(j)} is an s.v.f., the g_{(j)}(e) > 0 are continuous functions on Ω_ε, and the remainder term o(1) in (9.2.11) is uniform in the same sense as in condition [D_Ω]. Moreover, for e(x) ∈ Ω_ε, z ∈ A_1 (see (9.2.7)), |z| ≥ εt, t ≫ √n, we have

P(S_j ∈ Δ[x − z); B^{(j)}) ≤ cΔ^d v_{(j)}(t) |z|²/n.   (9.2.12)

We will also need the following condition.

[D*_Ω] Let there exist a k possessing the following properties:
(1) for all j ≤ k, conditions (9.2.11) and (9.2.12) are satisfied;

(2) x is such that v_{(j)}(t) = o(v_{(k)}(t) n^{k−j}) as t = |x| → ∞ for j < k;

(3) P(S_n ∈ Δ[x); B̃^{(k+1)}) = o(Δ^d v_{(k)}(t) n^k), where

B̃^{(k+1)} = ∪_{1≤i_1<⋯<i_{k+1}≤n} B̄_{i_1} ⋯ B̄_{i_{k+1}}

means that k + 1 or more events of the form B̄_i have occurred.

The following condition is sufficient for (3) to hold:

(3_1) x is such that, as t = |x| → ∞, one has the relation (9.2.12) for j = k + 1, and

n v_{(k+1)}(t) = o(v_{(k)}(t)).   (9.2.13)

The sufficiency of this condition will be demonstrated after the proof of Theorem 9.2.4. Observe that conditions (2) and (3_1) are of the same type and have a rather simple probabilistic interpretation: they mean that, for deviations x, the product n^j v_{(j)}(t), t = |x|, attains its maximum value at j = k, and that this value is much greater than n^j v_{(j)}(t) both for j < k and for j = k + 1. As we will see below in Theorem 9.2.4, it is the product n^k v_{(k)}(t) that determines the asymptotics of the probabilities P(S_n ∈ Δ[x)), and the value of k (found by comparing the quantities n^j v_{(j)}(t)) gives the number of large jumps (with sizes of order t) required for the most probable trajectories {S_i} to reach the set Δ[x). The factor n^k gives (up to a factorial factor) the asymptotics of the number of allocations of k large jumps to n places.

Now we will explain why condition [D*_Ω] holds in Examples 9.1.1–9.1.3.
In Example 9.1.1 (the case of independent components ξ^{(i)}), for directions e(x) ∈ Ω_ε, j ≤ k = d and large enough r, we have

P(S_j ∈ Δ[x); B^{(j)}) ∼ P(S_j ∈ Δ[x)) ∼ (jΔ)^d ∏_{i=1}^d V_1^{(i)}(x^{(i)}),   (9.2.14)

so that one can put

v_{(j)}(t) = ∏_{i=1}^d V_1^{(i)}(t)   for all j ≤ d.   (9.2.15)

(Recall that, in the sector e(x) ∈ Ω_ε with ε > 0, the value of t = |x| and those of all the components x^{(i)} are of the same order of magnitude as t → ∞.)

The relations (9.2.14) can be clarified as follows. For instance, in the case j = 2 the main contribution to the probability P(S_2 ∈ Δ[x); B^{(2)}) is from the 'trajectories' {S_1, S_2} for which the two large jumps (whose sizes are comparable with t = |x|) occur in different directions, each of which belongs to one of two disjoint groups of coordinates: one direction is from a group (i_1, …, i_s), 1 ≤ s ≤ d − 1, and the other is from the complementary group of d − s coordinates. Hence the probability P(S_2^{(i_l)} ∈ Δ[x^{(i_l)}); B^{(2)}), l = 1, …, s, will in fact be asymptotically equivalent to the probability of the event {S_2^{(i_l)} ∈ Δ[x^{(i_l)})} when there is only one large jump (we denote the latter event by B^{(i_l)}(1)), whereas

P(S_2^{(i_l)} ∈ Δ[x^{(i_l)}); B^{(i_l)}(1)) ∼ P(S_2^{(i_l)} ∈ Δ[x^{(i_l)})) ∼ 2Δ V_1^{(i_l)}(x^{(i_l)})

(see § 4.7). This proves (9.2.14) when j = 2. The case j > 2 can be dealt with in exactly the same way (although the argument is more tedious).

For j = d + 1, the relation (9.2.14) is no longer correct. In this case,

P(S_{d+1} ∈ Δ[x); B^{(d+1)}) ≍ Δ^d v_{(d)}(t) t Σ_{i=1}^d V_1^{(i)}(t)   (9.2.16)

(we write a ≍ b if a = O(b) and b = O(a)). For example, when d = 2 the principal contribution to P(S_3 ∈ Δ[x); B^{(3)}) comes from trajectories for which either S_3^{(1)} contains two large jumps and S_3^{(2)} just one (this comprises the event B^{(1)}(2)B^{(2)}(1)) or the other way around (the event B^{(1)}(1)B^{(2)}(2)). Now, for some c_1, c_2, 0 < c_1 < c_2 < 1,

P(S_3^{(1)} ∈ Δ[x^{(1)}); B^{(1)}(2)B^{(2)}(1))
∼ ∫_{c_1 x^{(1)}}^{c_2 x^{(1)}} P(S_2^{(1)} ∈ du, one large jump in S_2^{(1)}) P(ξ_3^{(1)} ∈ Δ[x^{(1)} − u))
∼ 2Δ ∫_{c_1 x^{(1)}}^{c_2 x^{(1)}} V_1^{(1)}(u) V_1^{(1)}(x^{(1)} − u) du
∼ cΔ x^{(1)} [V_1^{(1)}(x^{(1)})]².

Moreover, for the second component we clearly have

P(S_3^{(2)} ∈ Δ[x^{(2)}); B^{(2)}(1)) ∼ 3Δ V_1^{(2)}(x^{(2)}).

Since analogous relations hold true for the event B^{(1)}(1)B^{(2)}(2), this proves the relation (9.2.16). The case d > 2 is dealt with in a similar (but more cumbersome) way. Since

t V_1^{(i)}(t) ∼ α_i V_i(t)   and   n max_i V_i(t) → 0   for t ≫ √n,
the above relations prove (9.2.11) for j ≤ k = d, and they also prove that condition (2) and relation (9.2.13) hold for Example 9.1.1.

Now we prove (9.2.12). For j ≤ d, under additional conditions of the form P(ξ^{(i)} ∈ Δ[y^{(i)})) ≤ cΔ V_1^{(i)}(|y^{(i)}|) for y^{(i)} < 0, we have

P(S_j ∈ Δ[x − z); B^{(j)}) ≤ c(j) Δ^d ∏_{i=1}^d V_1^{(i)}(t),   t = |x|,   (9.2.17)

for all e(x) ∈ Ω_ε and z ∈ A_1, |z| ≥ εt. This proves (9.2.12) for j ≤ d. For j = d + 1 we obtain, cf. (9.2.16), that for e(x) ∈ Ω_ε and z ∈ A_1, |z| > εt,

P(S_j ∈ Δ[x − z)) ≤ Δ^d v_{(d)}(t) t Σ_{i=1}^d V_1^{(i)}(t).

Since nt max_i V_1^{(i)}(t) → 0 for t ≫ √n, we arrive at (9.2.14).

One can verify in a similar way that condition [D*_Ω] is satisfied in Examples 9.1.2 (with k = d) and 9.1.3 (with k = 1). It is not difficult to see that in Example 9.1.2 we have P(S_j ∈ Δ[x); B^{(j)}) = 0 for j < d, e(x) ∈ Ω_ε and t = |x| > 0.

It is clear that condition [D*_Ω] will be satisfied for a much wider class of distributions F. For instance, this applies to distributions similar to those from Example 9.1.1 when there is a weak enough dependence between the components of ξ,
and also to distributions for which P((ξ, x) ≥ u) ∼ h(e(x)) u^{−f(e(x))}, where the functions h(·) and f(·) satisfy some additional conditions, and so on.

Theorem 9.2.4. Let Eξ = 0, E|ξ|² < ∞ and condition [D*_Ω] be satisfied. Then, for t = |x| ≫ √(n ln n) and e(x) ∈ Ω_ε for some ε > 0, we have

P(S_n ∈ Δ[x)) = Δ^d n^k v_{(k)}(t) g_{(k)}(e(x))(1 + o(1)),

where the term o(1) is uniform in the same sense as in Theorem 9.2.1.

Proof. The scheme of the argument is somewhat different in this case. We will again make use of the equality (4.7.4), but will apply the latter representation to the probability P(G_n B̄):

P(G_n B̄) = Σ_{i=1}^n P(G_n B̄^{(1)}_i) + Σ_{1≤i<j≤n} P(G_n B̄^{(2)}_{ij}) + ⋯ + Σ_{1≤i_1<⋯<i_k≤n} P(G_n B̄^{(k)}_{i_1,…,i_k}) + P(G_n B̃^{(k+1)}),   (9.2.18)

where B̄^{(j)}_{i_1,…,i_j} is the event that the j events B̄_{i_1}, …, B̄_{i_j} of the form B̄_m occurred during a time n, and B̃^{(k+1)} is the event that k + 1 or more events of the form B̄_m occurred.

(1) Bounding P(G_n B). By integrating over the cubes Δ[x) we find from (9.2.11) that

P(B̄_j) ∼ c v_{(1)}(t) t^d = c t^{−α_{(1)}} L_{(1)}(t) =: v(t).

Hence, again using Corollary 4.1.3, we obtain that, for any δ > 0 and large enough t,

P(G_n B) ≤ (n v(t))^{r−δ}.

Now choose r − δ in such a way that

(n v(t))^{r−δ} = o(Δ^d n^k v_{(k)}(t)).

To this end, for t ≥ n^{1/2} it suffices to take
r − δ > 1 + (γ + α_{(k)} − 2k)/(α_{(1)} − 2)

(cf. the bound for P(G_n B) in the proof of Theorem 9.2.1). Thus

P(G_n B) = o(Δ^d n^k v_{(k)}(t)).   (9.2.19)
9.2 Random vectors with regularly varying distributions (j)
(2) Bounding P(Gn B i1 ,...,ij ), j k. We have P
(j) Gn B i1 ,...,ij
=
P Sn−j ∈ dz;
n−j *
Bi P Sj ∈ Δ[x − z); B (j) ,
i=1
A1
(9.2.20) where A1 is defined√by (9.2.7). Clearly, x − z ∈ Π(Ω) for z ∈ A1 and, moreover, |x − z| > t/r − Δ d ∼ t/r. Therefore we can apply condition (9.2.12) to the second factor in the integrand in (9.2.20). Note that this factor is asymptotically equivalent to the right-hand side of (9.2.11) for |z| < ε1 t as ε1 → 0. In the first factor, n−j * n−j n Bi = 1 − P(B 1 ) 1 − c1 v(1) (t) → 1, P i=1
n−j * P Sn−j < ε1 t, Bi → 1
P Sn−j < ε1 t → 1,
i=1
√
assuming that ε1 → 0 slowly enough that ε1 t n. Now we will represent the integral in (9.2.20) as the sum + . |z|<εt
A1 ∩{|z|εt}
Then, owing to [D∗Ω ] and the previous discussion, the first integral will be equivd alent to Δ v(j) (t)g(j) e(x) . By virtue of (9.2.12), the second integral will not exceed % & 2 1 d Sn−j ; Sn−j ∈ A1 , |Sn−j | ε1 t = o Δd v(j) (t) . cΔ v(j) (t) E n Therefore (j) P Gn Bi1 ,...,ij = Δd v(j) (t)g(j) e(x) (1 + o(1)).
(9.2.21)
(3) The asymptotics of P(G_n). Now we can return to (4.7.4) and (9.2.18). Collecting together the estimates (9.2.19) and (9.2.21), we obtain from [D*_Ω] that

P(G_n) = o(Δ^d n^k v_{(k)}(t)) + Σ_{j=1}^k Δ^d v_{(j)}(t) g_{(j)}(e(x)) n^j (1 + o(1))
= Δ^d n^k v_{(k)}(t) g_{(k)}(e(x))(1 + o(1)).

The theorem is proved.

Now we will demonstrate that condition (9.2.13) is sufficient for part (3) of condition [D*_Ω].
For $j = k+1$, let conditions (9.2.12) and (9.2.13) be satisfied. Then

$P\bigl(G_n \widetilde B^{(k+1)}\bigr) \le \sum_{1 \le i_1 < \dots < i_{k+1} \le n} P\bigl(G_n \bar B_{i_1} \cdots \bar B_{i_{k+1}}\bigr) \le n^{k+1} P\bigl(G_n \bar B^{(k+1)}\bigr), \qquad (9.2.22)$

where, cf. (9.2.20),

$P\bigl(G_n \bar B^{(k+1)}\bigr) = \int_{A_1} P\bigl(S_{n-k-1} \in dz\bigr)\, P\bigl(S_{k+1} \in \Delta[x-z);\ \bar B^{(k+1)}\bigr).$

Since $x - z \in \Pi(\Omega)$ for $z \in A_1$ and $|x - z| > r^{-1} t\,(1 + o(1))$, we see that, owing to (9.2.12) for $j = k+1$, the second factor in the integrand on the right-hand side will not exceed

$c\,\Delta^d v^{(k+1)}(t)\, \frac{|z|^2}{n},$

and so the integral itself is $O\bigl(\Delta^d v^{(k+1)}(t)\bigr)$. Therefore, by virtue of (9.2.22) and (9.2.13),

$P\bigl(G_n \widetilde B^{(k+1)}\bigr) = O\bigl(n^{k+1} \Delta^d v^{(k+1)}(t)\bigr) = o\bigl(n^k \Delta^d v^{(k)}(t)\bigr).$

The sufficiency of condition (9.2.13) is proved.
9.3 Integral theorems

9.3.1 Integral theorems via the corresponding integro-local theorems

The integro-local theorems that we proved in the previous sections allow one to obtain easily the asymptotics of the probability that the sum $S_n$ of random vectors hits an arbitrary remote set. For example, one can find the asymptotics of probabilities of the form

$P(S_n \in tA) \to 0, \qquad t \to \infty,$

where $A$ is a fixed solid set, bounded away from 0. Let, for the sake of definiteness, $E\xi = 0$, $E|\xi|^2 < \infty$, and let $A$ be a set in $\mathbb R^d$ that, together with the vector $\xi$, has the following properties:

(1) The inequalities

$\inf\{|v| : v \in A\} > 0, \qquad \sup_{v \in A}\, \sup\{\varepsilon > 0 : U_\varepsilon(\{v\}) \subset A\} > 0 \qquad (9.3.1)$

hold true, i.e. the set $A$ is bounded away from the origin and is solid (contains an open subset).

(2) Condition $[D_\Omega]$ from § 9.2 is satisfied for some $\alpha > 2$, $\gamma > -1$, $\varepsilon > 0$, and

$\Omega(A) := \{e(v) : v \in A\} \subset \Omega_\varepsilon, \qquad (9.3.2)$

where $\Omega_\varepsilon = \Omega \setminus U_\varepsilon(\partial\Omega)$ is the $\varepsilon$-interior of $\Omega$.
In the present context, the function $g(e)$ does not need to be everywhere positive on $\Omega(A)$; here, the relation (9.2.1) has to be written in the form $P(\xi \in \Delta[x)) = \Delta^d V_1(x) + o(\Delta^d v(t))$, $t = |x|$, and it should be assumed that $\int_{\Omega(A)} g(e)\, de > 0$.

(3) For any fixed $M > 0$, the Lebesgue measure $\mu$ of the intersection of the $\varepsilon$-neighbourhood $U_\varepsilon(\partial A)$ of the boundary $\partial A$ of $A$ with the ball $U^M(\{0\})$ tends to 0 as $\varepsilon \to 0$:

$\mu\bigl(U_\varepsilon(\partial A) \cap U^M(\{0\})\bigr) \to 0. \qquad (9.3.3)$

It is obvious that the property (9.3.3) will hold for all sets $A$ with smooth enough boundaries.

Theorem 9.3.1. Under the above conditions (1)–(3), as $t \to \infty$ and $t \gg \sqrt{n \ln n}$, one has

$P(S_n \in tA) = n t^{-\alpha} L(t) \int_A |v|^{-\alpha-d}\, g\bigl(e(v)\bigr)\, dv\; (1 + o(1)), \qquad (9.3.4)$
where the functions $L$ and $g$ are defined in condition $[D_\Omega]$ (see p. 403).

The nature of the large deviation probabilities for the sums $S_n$ clearly remains the same as before: the right-hand side of (9.3.4) can be written as

$nP(\xi \in tA)(1 + o(1)), \qquad (9.3.5)$

so that the main contribution to the probability of the event $\{S_n \in tA\}$ comes from the $n$ events $\{\xi_j \in tA\}$, $j = 1, \dots, n$. With the help of Theorem 9.2.3 the reader will encounter no difficulties in proving a similar assertion in the case $E|\xi|^2 = \infty$.

As we have already said, Theorem 9.3.1 demonstrates the following useful property of integro-local theorems. When calculating the asymptotics of the probabilities $P(S_n \in tA)$, these theorems enable one to proceed as if $S_n$ had a density and we knew its asymptotics, although condition $[D_\Omega]$ contains no assumptions on the existence of densities.

Proof of Theorem 9.3.1. Let $\mathbb Z^d_\Delta$ be a lattice in $\mathbb R^d$ with span $\Delta = o(t)$. Denote by $U^b := U^b(\{0\})$ the ball of radius $b > 0$ whose centre is at the origin, let $A^{(t,M)}$ be the set of all points $z \in \mathbb Z^d_\Delta$ for which $\Delta[z) \subset tA \cap U^{tM}$, and let $\bar A^{(t,M)}$ be the minimal set of points $z \in \mathbb Z^d_\Delta$ such that

$tA \cap U^{tM} \subset \bigcup_{z \in \bar A^{(t,M)}} \Delta[z).$

Then, clearly,

$\sum_{z \in A^{(t,M)}} P\bigl(S_n \in \Delta[z)\bigr) \le P\bigl(S_n \in tA \cap U^{tM}\bigr) \le \sum_{z \in \bar A^{(t,M)}} P\bigl(S_n \in \Delta[z)\bigr). \qquad (9.3.6)$
It is evident that, owing to conditions (9.3.1)–(9.3.3) and Theorem 9.2.1, each sum in (9.3.6) is asymptotically equivalent to

$n \sum_{z \in tA \cap \mathbb Z^d_\Delta \cap U^{tM}} \Delta^d\, |z|^{-\alpha-d} L(|z|)\, g\bigl(e(z)\bigr) \sim n \int_{u \in tA \cap U^{tM}} |u|^{-\alpha-d} L(|u|)\, g\bigl(e(u)\bigr)\, du \sim n t^{-\alpha} L(t) \int_{v \in A \cap U^{M}} |v|^{-\alpha-d}\, g\bigl(e(v)\bigr)\, dv.$

It only remains to notice that, for $\bar U^{tM} := \mathbb R^d \setminus U^{tM}$,

$P\bigl(S_n \in tA \cap \bar U^{tM}\bigr) = o\bigl(n t^{-\alpha} L(t)\bigr)$

as $M \to \infty$, which can be verified either using relations of the form (9.2.4) or using bounds for the probability that $S_n$ hits a half-space of the form $\Pi(e, tM) = \{v \in \mathbb R^d : (v, e) \ge tM\}$, $e \in \Omega$. The theorem is proved.
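The message of (9.3.5) — that $S_n$ lands in a remote set essentially only when a single summand does — is easy to probe numerically. The following sketch is my own illustration, not a computation from the book: it uses a hypothetical rotation-invariant jump distribution in $\mathbb R^2$ with a Pareto radius and the set $A = \{v : |v| \ge 1\}$, so that $P(\xi \in tA)$ is available in closed form; all parameter choices are assumptions made for the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rotation-invariant jumps in R^2: Pareto(alpha = 3) radius and a
# uniform angle, so E xi = 0, E|xi|^2 < infinity, and P(|xi| >= s) = s**(-3).
alpha, n, t, trials = 3.0, 5, 10.0, 200_000

r = rng.pareto(alpha, size=(trials, n)) + 1.0      # P(r >= s) = s**(-alpha), s >= 1
phi = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n))
S_x = (r * np.cos(phi)).sum(axis=1)                # coordinates of S_n
S_y = (r * np.sin(phi)).sum(axis=1)

p_walk = np.mean(np.hypot(S_x, S_y) >= t)          # P(S_n in tA) for A = {|v| >= 1}
p_jump = n * t ** (-alpha)                         # n P(xi in tA), exact here
ratio = p_walk / p_jump
print(p_walk, p_jump, ratio)                       # ratio is of order 1
```

At moderate $t$ the ratio sits somewhat above 1 (pre-asymptotic corrections), but it is of order 1, as (9.3.5) predicts.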
9.3.2 A direct approach to integral theorems

The relations (9.3.4), (9.3.5) indicate that a somewhat different approach to proving integral large deviation theorems in $\mathbb R^d$, $d > 1$, may also be possible. Instead of condition $[D_\Omega]$ one could postulate those properties of the set $A$ and vector $\xi$ that turn out to be essential for the asymptotics (9.3.5) to hold true. Namely, in place of the rather restrictive condition $[D_\Omega]$, we will assume that condition $[A]$, to be stated below, is satisfied.

For simplicity we will first restrict ourselves to the case when $E\xi = 0$, $E|\xi|^2 < \infty$ and the set $A$ can be separated from the origin by a hyperplane, i.e. there exist a $b > 0$ and a vector $e \in S^{d-1}$ such that

$A \subset \Pi(e, b) = \{v \in \mathbb R^d : (v, e) \ge b\}. \qquad (9.3.7)$

(A remark concerning the transition to the general case, when $A$ is bounded away from the origin by a ball $U^b$ for some $b > 0$, will be made after the proof of Theorem 9.3.2.)

Condition $[A]$ has the following form:

$[A]$ (1) We have the relation

$P(tA) := P(\xi \in tA) \sim V(t) F(A), \qquad (9.3.8)$

where $V(t) = t^{-\alpha} L(t)$ ($\alpha > 2$ when $E|\xi|^2 < \infty$), $L$ is an s.v.f. at infinity and $F(A)$ is a functional defined on a suitable class of sets. This functional, the set $A$ and the vector $\xi$ are such that

$P(z + tA) \sim P(tA) \quad\text{for } |z| = o(t),\ t \to \infty. \qquad (9.3.9)$
The property (9.3.9) simply expresses the continuity of the functional $F$: we have $F(v + A) \sim F(A)$ as $|v| \to 0$. In Theorem 9.3.1 the role of the functional $F$ is played by the integral on the right-hand side of (9.3.4).

(2) Condition (9.3.7) is met and, for the corresponding $e \in S^{d-1}$,

$P\bigl((\xi, e) \ge t\bigr) \le cV(t) \qquad (9.3.10)$

for some $c < \infty$.
Theorem 9.3.2. Let condition $[A]$ be satisfied. Then, for $t \gg \sqrt{n \ln n}$,

$P(S_n \in tA) \sim nV(t) F(A) \sim nP(\xi \in tA). \qquad (9.3.11)$

Proof. The proof of this result follows our previous approaches. Set

$G_n := \{S_n \in tA\}, \qquad B_j := \{(\xi_j, e) < \rho t\} \ \text{for}\ \rho < b, \qquad B = \bigcap_{j=1}^n B_j,$

and again make use of the representations (4.7.4), (4.7.5).

(1) Bounding $P(G_n B)$. Since $G_n \subset G_{n,e} := \{(S_n, e) \ge bt\}$, we have the inequality $P(G_n B) \le P(G_{n,e} B)$, where, owing to the bounds from § 4.1, for any $\delta > 0$ and as $t \to \infty$,

$P(G_{n,e} B) \le c\bigl(nV(t)\bigr)^{r-\delta}, \qquad r := \frac{b}{\rho} > 1.$

Hence

$P(G_n B) = o\bigl(nV(t)\bigr). \qquad (9.3.12)$

(2) The bound $P(G_n \bar B_{n-1} \bar B_n) \le P(\bar B_{n-1} \bar B_n) < c\bigl(V(\rho t)\bigr)^2$ is obvious here, so that

$\sum_{i \ne j} P(G_n \bar B_i \bar B_j) \le c\bigl(nV(t)\bigr)^2 = o\bigl(nV(t)\bigr). \qquad (9.3.13)$

(3) Evaluating $P(G_n \bar B_n)$. We have

$P(G_n \bar B_n) = \int P(S_{n-1} \in dz)\, P\bigl(z + \xi \in tA,\ (\xi, e) \ge \rho t\bigr) = \int_{|z| < M\sqrt n} + \int_{|z| \ge M\sqrt n}. \qquad (9.3.14)$

Here, in the first integral on the right-hand side, for $\rho < b$,

$P\bigl(z + \xi \in tA,\ (\xi, e) \ge \rho t\bigr) = P(z + \xi \in tA) \sim P(\xi \in tA)$

owing to condition (9.3.9), so that when $M \to \infty$ slowly enough,

$\int_{|z| < M\sqrt n} \sim P(\xi \in tA).$
In the second integral on the right-hand side of (9.3.14),

$P\bigl(z + \xi \in tA,\ (\xi, e) \ge \rho t\bigr) \le P\bigl((\xi, e) \ge \rho t\bigr) \le cV(t),$

so that, as $M \to \infty$,

$\int_{|z| \ge M\sqrt n} = o\bigl(V(t)\bigr).$

The above means that $P(G_n \bar B_n) \sim P(\xi \in tA)$. Together with (9.3.12), (9.3.13), this establishes (9.3.11). The theorem is proved.

The transition to the case where $A$ is bounded away from the origin by a ball of radius $b > 0$ can be made by, for example, partitioning the set $A$ into finitely many subsets $A_1, \dots, A_K$ (one could, say, take the intersections of $A$ with each of the $K = 2^d$ coordinate orthants), in such a way that each of the subsets lies in a half-space $\Pi(e_k, b_k)$ for suitable $e_k \in S^{d-1}$ and $b_k > 0$, $k = 1, \dots, K$. After that, the above argument is applied to each of the subsets $A_1, \dots, A_K$. Instead of (9.3.10) one should now, of course, assume that, for all $k = 1, \dots, K$ and some $c_k < \infty$,

$P\bigl((\xi, e_k) \ge t\bigr) < c_k V(t).$

The transition to the case $E|\xi|^2 = \infty$ can be made in a way similar to that used in Theorem 9.2.3 (see also § 3.7).
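Condition (9.3.9) is worth seeing in the simplest concrete case. The snippet below is my own illustration (not the authors' computation): it takes $A$ to be the half-space $\Pi(e, b)$ and assumes a hypothetical pure Pareto tail for the projection $(\xi, e)$; regular variation alone then forces $P(z + tA)/P(tA) \to 1$ whenever $(z, e) = o(t)$.

```python
# Shift-continuity (9.3.9) for the half-space A = {v : (v, e) >= b}: here
# P(xi in z + tA) = P((xi, e) >= bt + (z, e)), and with a hypothetical pure
# Pareto projection tail P((xi, e) >= s) = s**(-alpha) the ratio tends to 1
# whenever (z, e) = o(t) -- the regular variation of V does all the work.
alpha, b = 3.0, 1.0

def tail(s):                     # assumed tail of the projection (xi, e)
    return s ** (-alpha)

pairs = [(1e2, 1.0), (1e4, 10.0), (1e6, 100.0)]   # (t, (z, e)) with (z, e)/t -> 0
ratios = [tail(b * t + z) / tail(b * t) for t, z in pairs]
print(ratios)                    # -> approaches 1: about 0.971, 0.997, 0.9997
```

The same computation with $(z, e)$ proportional to $t$ would give a ratio bounded away from 1, which is why (9.3.9) is restricted to $|z| = o(t)$.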
10 Large deviations in trajectory space
10.1 Introduction

We return now to the one-dimensional random walks studied in Chapters 2–8. The most general of the problems considered in those chapters was that on the crossing of a given remote boundary by the trajectory of the walk. This is one of the simplest problems on large deviations in the space of trajectories. Now we will consider a more general setting of this problem.

Let $D(0,1)$ be the space of functions $f(\cdot)$ on $[0,1]$ without discontinuities of the second kind, endowed with the uniform norm $\|f\| = \sup_{t\in[0,1]} |f(t)|$ and the respective $\sigma$-algebra $\mathcal B$ of Borel (or cylindrical) sets. For definiteness, we will assume that the elements $f \in D(0,1)$ are right-continuous. Define a random process $\{S_n(t)\}$ in $D(0,1)$ by letting

$S_n(t) := S_{\lfloor nt \rfloor}, \qquad t \in [0,1],$

where, as before, $S_k = \sum_{i=1}^k \xi_i$, the r.v.'s $\xi_i$ are independent and have the same distribution as $\xi$, and $\lfloor nt \rfloor$ is the integral part of $nt$. Then the probability

$P\bigl(S_n(\cdot) \in xA\bigr) \qquad (10.1.1)$

is defined for any $A \in \mathcal B$. If $E\xi = 0$ and the set $A$ is bounded away from zero (for a more precise definition, see below) then, provided that $x \to \infty$ fast enough, the probability (10.1.1) will tend to zero, and so there arises the problem of studying the asymptotics of this probability. This problem is referred to as the problem on probabilities of large deviations in the trajectory space.

As we have already said, the simplest problems of this kind were dealt with in Chapters 2–8, where, for a given function $g$, we considered a set

$A = A_g = \Bigl\{f : \sup_{t\in[0,1]} \bigl(f(t) - g(t)\bigr) \ge 0\Bigr\}.$

In § 10.2 we will consider a problem on so-called one-sided large deviations in trajectory space under the assumption that condition $[\,\cdot\,, =]$ of Chapters 3 and 4 is met. In § 10.3, the results obtained in the previous section will be extended to the general case.
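For experimentation, the step process $S_n(t) = S_{\lfloor nt \rfloor}$ and the crossing event defining $A_g$ can be coded directly on a grid. The sketch below is a hypothetical helper of mine; the names `trajectory` and `crosses` are not from the book.

```python
import numpy as np

def trajectory(xi, t):
    """S_n(t) = S_{floor(n t)}: the right-continuous step process on [0, 1]."""
    S = np.concatenate([[0.0], np.cumsum(xi)])        # S_0, S_1, ..., S_n
    n = len(xi)
    return S[np.minimum((n * np.asarray(t)).astype(int), n)]

def crosses(xi, g, x, grid):
    """Event {S_n(.) in x * A_g} = {sup_t (S_n(t) - x g(t)) >= 0} on a grid."""
    return np.max(trajectory(xi, grid) - x * g(grid)) >= 0

grid = np.linspace(0.0, 1.0, 1001)
xi = np.array([0.0, 5.0, 0.0, 0.0])                   # one big jump at step 2
print(crosses(xi, lambda u: np.ones_like(u), 4.0, grid))  # True: walk reaches 5 >= 4
```

The grid only approximates the supremum over $[0,1]$, but since both $S_n(\cdot)$ and typical boundaries $g$ are piecewise simple, a grid refining the points $k/n$ already evaluates the event exactly.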
10.2 One-sided large deviations in trajectory space
To formulate the main assertions, we will need some notation and notions. Introduce the step functions

$e_{v,t} = e_{v,t}(u) := \begin{cases} 0 & \text{for } u < t, \\ v & \text{for } u \ge t, \end{cases} \qquad t \in [0,1],\ v \in \mathbb R,$

and the number set

$A(t) := \{v : e_{v,t} \in A\}, \qquad (10.2.1)$

which could be called the section of the set $A \in \mathcal B$ at the point $t$. Define the measure $\mu_{A(t)}$ of the set $A(t)$ by

$\mu_{A(t)} := \alpha \int_{A(t)} u^{-\alpha-1}\, du, \qquad (10.2.2)$

where $-\alpha$ is the index of the r.v.f. $V(t)$ in condition $[\,\cdot\,, =]$. We will need the following conditions on the set $A$.

$[A_0]$ The set $A$ is bounded away from zero, i.e. $\inf_{f \in A} \|f\| > 0$.

In this case, one can assume without loss of generality that

$\inf_{f \in A} \|f\| = 1. \qquad (10.2.3)$

The set $A$ will be said to be one-sided if the following condition is met:

$[A_{0+}]$ $\inf_{f \in A}\, \sup_{u \in [0,1]} f(u) > 0.$

It is evident that condition $[A_{0+}]$ implies $[A_0]$ and, moreover, one can assume without loss of generality that

$\inf_{f \in A}\, \sup_{u \in [0,1]} f(u) = 1, \qquad A(t) \subset [1, \infty). \qquad (10.2.4)$

The next condition is related to the structure of the sets $A(t)$.

$[A_1]$ For each $t \in [0,1]$, the set $A(t)$ is a union of finitely many intervals (segments, half-intervals). The dependence of the boundaries of these intervals on $t$ is such that the function $\mu_{A(t)}$ is Riemann integrable.

Thus, under condition $[A_1]$ we can define the measure

$\mu(A) := \int_0^1 \mu_{A(t)}\, dt. \qquad (10.2.5)$
Remarks on a possible relaxation of condition $[A_1]$ can be found after Theorem 10.2.3.

Further, we will need a continuity condition. Take an arbitrary $f \in D(0,1)$ and add to it a jump of a variable size at a point $t$, i.e. consider the family of functions $f_{v,t}(u) := f(u) + e_{v,t}(u)$. Those values of $v$ for which $f_{v,t} \in A$ form the set

$A_f(t) := \{v \in \mathbb R : f_{v,t} \in A\}. \qquad (10.2.6)$

Our continuity condition (for the measure $\mu_{A(t)}$) consists in the following.

$[A_2]$ For each $t \in [0,1]$, the set $A_f(t)$ is a finite union of intervals (segments, half-intervals) and, as $\|f\| \to 0$, the set $A_f(t)$ converges 'in measure' to $A(t)$: uniformly in $t \in [0,1]$,

$\alpha \int_{A_f(t)} u^{-\alpha-1}\, du \to \mu_{A(t)}. \qquad (10.2.7)$
Example 10.2.1. The boundary problems considered in Chapters 3–5. In these problems, we had

$A = \Bigl\{f \in D(0,1) : \sup_{t\in[0,1]} \bigl(f(t) - g(t)\bigr) \ge 0\Bigr\}$

for a given function $g$. If $\inf_{t\in[0,1]} g(t) > 0$ and $g \in D(0,1)$ then conditions $[A_0]$–$[A_2]$ are satisfied, and the sets $A(t)$ and $A_f(t)$ are half-intervals of the form $[v, \infty)$: we have

$A(t) = [g_*(t), \infty), \qquad g_*(t) = \inf_{u \in (t,1]} g(u),$

$\mu_{A(t)} = \alpha \int_{g_*(t)}^{\infty} u^{-\alpha-1}\, du = g_*^{-\alpha}(t), \qquad \mu(A) = \int_0^1 g_*^{-\alpha}(t)\, dt.$
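The suffix infimum $g_*(t)$ and the measure $\mu(A)$ from Example 10.2.1 are easy to evaluate on a grid. The following is my own illustrative sketch; the function name `mu_boundary` and the test boundaries are assumptions made for the example.

```python
import numpy as np

def mu_boundary(g, alpha, npts=100_001):
    """mu(A) = int_0^1 g_*(t)^(-alpha) dt for the boundary set A_g, where
    g_*(t) = inf_{u in (t, 1]} g(u) is computed as a running suffix minimum."""
    t = np.linspace(0.0, 1.0, npts)
    g_star = np.minimum.accumulate(g(t)[::-1])[::-1]   # suffix infimum on the grid
    vals = g_star ** (-alpha)
    h = t[1] - t[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

alpha = 3.0
print(mu_boundary(lambda u: 2.0 * np.ones_like(u), alpha))  # constant g = 2: 2^(-3) = 0.125
print(mu_boundary(lambda u: 2.0 - u, alpha))                # decreasing g: g_* = g(1) = 1, mu = 1
```

For a constant boundary $g \equiv h$ this reproduces $\mu(A) = h^{-\alpha}$, and for a decreasing boundary $g_*$ collapses to the terminal value $g(1)$, exactly as the definition $g_*(t) = \inf_{u\in(t,1]} g(u)$ dictates.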
Example 10.2.2. The set

$A = \Bigl\{f \in D(0,1) : \int_0^1 f(u)\, du \ge b\Bigr\}, \qquad b > 0,$

satisfies all the conditions $[A_0]$–$[A_2]$. In this situation, the sets $A(t)$ and $A_f(t)$ are, as was the case in Example 10.2.1, half-intervals of the form $[v, \infty)$: we have $A(t) = [b/(1-t), \infty)$ and

$\mu_{A(t)} = \alpha \int_{b/(1-t)}^{\infty} u^{-\alpha-1}\, du = \Bigl(\frac{1-t}{b}\Bigr)^{\alpha}, \qquad \mu(A) = \int_0^1 \Bigl(\frac{u}{b}\Bigr)^{\alpha} du = \frac{b^{-\alpha}}{\alpha+1}.$
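The closed form $\mu(A) = b^{-\alpha}/(\alpha+1)$ of Example 10.2.2 can be double-checked by direct quadrature of $\mu_{A(t)} = ((1-t)/b)^\alpha$. This is a quick verification of mine; the values $\alpha = 2$, $b = 2$ are chosen arbitrarily.

```python
import numpy as np

# Check mu(A) = b^(-alpha)/(alpha + 1) for A = {f : int_0^1 f(u) du >= b}
# against direct quadrature of the section measure mu_{A(t)} = ((1-t)/b)^alpha.
alpha, b = 2.0, 2.0
t = np.linspace(0.0, 1.0, 1_000_001)
mu_At = ((1.0 - t) / b) ** alpha
h = t[1] - t[0]
mu_quad = h * (mu_At.sum() - 0.5 * (mu_At[0] + mu_At[-1]))  # trapezoid rule
mu_closed = b ** (-alpha) / (alpha + 1.0)
print(mu_quad, mu_closed)        # both close to 1/12 = 0.08333...
```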
Now we can formulate the main assertion. First consider one-sided sets $A$ and the case of finite variance $E\xi^2 < \infty$.

Theorem 10.2.3. Let $E\xi = 0$, $E\xi^2 < \infty$ and assume that conditions $[\,\cdot\,, =]$ with $V \in \mathcal R$ and $\alpha > 2$, $[A_{0+}]$, $[A_1]$ and $[A_2]$ are satisfied. Then, as $n \to \infty$,

$P\bigl(S_n(\cdot) \in xA\bigr) = \mu(A)\, nV(x)\,(1 + o(1)), \qquad (10.2.8)$

where the measure $\mu(A)$ is defined in (10.2.1), (10.2.2) and (10.2.5), and the remainder term $o(1)$ is uniform in the zone $n < cx^2/\ln x$ for a suitable $c$. Regarding the value of $c$, see Remark 4.4.2.

Remark 10.2.4. The assertion of the theorem is substantially different from those on the asymptotics of $P(S_n(\cdot) \in xA)$ in the 'Cramér case', when $Ee^{\lambda\xi} < \infty$ for some $\lambda > 0$. The foremost difference is that in the Cramér case one can, generally, find only the asymptotics of $\ln P(S_n(\cdot) \in xA)$. In addition, the nature of that asymptotics proves to be completely different from (10.2.8) (see e.g. [38] and also Chapter 6).

Proof of Theorem 10.2.3. The scheme of the proof is the same as that used in Chapters 2–5. For

$G_n := \{S_n(\cdot) \in xA\}, \qquad B_j = \{\xi_j < y\}, \qquad B = \bigcap_{j=1}^n B_j,$

we have, for $x = ry$, $r > 2$,

$P(G_n) = P(G_n B) + P(G_n \bar B), \qquad (10.2.9)$

$P(G_n \bar B) = \sum_{j=1}^n P(G_n \bar B_j) + O\bigl((nV(x))^2\bigr), \qquad (10.2.10)$

where, owing to the obvious inclusion $G_n \subset \{\bar S_n \ge x\}$ (see (10.2.4)) and Theorem 4.1.2, $P(G_n B) = o\bigl((nV(x))^2\bigr)$.

As before, it remains to evaluate $P(G_n \bar B_j)$. Let

$S_n^j(t) := S_n(t) - e_{\xi_j,\, j/n}(t), \qquad j \le n, \qquad (10.2.11)$

so that the process $S_n^j(t)$ is obtained from $S_n(t)$ by removing the jump $\xi_j$ at the point $j/n$. For $\varepsilon > 0$, put

$C_j := \Bigl\{\max_{t\in[0,1]} |S_n^j(t)| < \varepsilon x\Bigr\}. \qquad (10.2.12)$

It is obvious that $\xi_j$ and $\{S_n^j(t)\}$ are independent of each other and that, for
$\varepsilon = \varepsilon(x) \to 0$ slowly enough (but in such a way that $\varepsilon x \gg \sqrt n$), we have from the Kolmogorov inequality that

$\max_{j \le n} P(\bar C_j) \to 0.$
This implies that, for such an $\varepsilon$,

$P(G_n \bar B_j) = P(G_n \bar B_j C_j) + o\bigl(V(x)\bigr). \qquad (10.2.13)$

Now let $\mathcal F_n^j$ be the $\sigma$-algebra generated by $\xi_1, \dots, \xi_{j-1}, \xi_{j+1}, \dots, \xi_n$. Then

$P(G_n \bar B_j C_j) = E\bigl[P(G_n \bar B_j \mid \mathcal F_n^j);\ C_j\bigr]. \qquad (10.2.14)$

Furthermore, observe that $x^{-1} S_n(\cdot) \in U_\varepsilon\bigl(e_{x^{-1}\xi_j,\, j/n}\bigr)$ on the event $C_j$, where $U_\varepsilon(f)$ denotes the $\varepsilon$-neighbourhood of $f$ in the uniform norm. Next we note also that, for fixed $\sigma$-algebra $\mathcal F_n^j$, the event $G_n \bar B_j$ in (10.2.14) is equivalent to the event $\bar B_j \cap \{\xi_j \in xA_f(j/n)\}$ with $f(\cdot) = x^{-1} S_n^j(\cdot)$, so that $A_f(t)$ does not depend on $\xi_j$. By condition $[A_2]$, on the set $C_j$ one has

$P\bigl(\bar B_j \cap \{\xi_j \in xA_f(j/n)\} \mid \mathcal F_n^j\bigr) = P\bigl(\xi_j \ge y,\ x^{-1}\xi_j \in A(j/n)\bigr)(1 + o(1)) \qquad (10.2.15)$

and

$P\bigl(\xi_j \in xA(j/n)\bigr) = V(x)\, \mu_{A(j/n)}\, (1 + o(1)). \qquad (10.2.16)$

The event $\{\xi_j \ge y\}$ on the right-hand side of (10.2.15) is 'redundant' owing to (10.2.4), whereas the equality in (10.2.16) holds because the set $A(j/n)$ is a union of a finite number of intervals and $V(ux) \sim u^{-\alpha} V(x)$ as $x \to \infty$. Since $P(C_j) \to 1$, we obtain from (10.2.14)–(10.2.16) that

$P(G_n \bar B_j C_j) = V(x)\, \mu_{A(j/n)}\, (1 + o(1)). \qquad (10.2.17)$

Combining this relation with (10.2.9), (10.2.10) and (10.2.13) and using the uniformity in condition $[A_2]$ and (10.2.15), (10.2.16), we obtain

$P(G_n) = V(x) \sum_{j=1}^n \mu_{A(j/n)}\, (1 + o(1)) = \mu(A)\, nV(x)\,(1 + o(1)).$

The uniformity of $o(1)$ in (10.2.8) follows from that of the bounds from § 4.1. The theorem is proved.

Remark 10.2.5. An assertion close to Theorem 10.2.3 was obtained in [225] in the case when $\xi$ has a regularly varying density

$-F_+'(t) = \alpha t^{-\alpha-1} L(t), \qquad L \text{ an s.v.f. at infinity.}$

In this case, conditions $[A_1]$ and $[A_2]$ could be somewhat relaxed with regard to the assumption that $A(t)$ and $A_f(t)$ are unions of finite collections of intervals.
The same paper [225] also contains a 'transient' assertion for deviations $x \asymp \sqrt n$, where the approximation for $P(G_n)$ includes the term $P\bigl(w(\cdot) \in xA\bigr)$ as well, $w(\cdot)$ being the standard Wiener process.

An assertion on the asymptotics of the probability $P(S_n(\cdot) \in xA)$ in the case $E\xi = 0$, $E\xi^2 = \infty$ has a similar form, as follows.

Theorem 10.2.6. Let $E\xi = 0$, $E\xi^2 = \infty$ and let conditions $[<, =]$ with $W, V \in \mathcal R$, $[A_{0+}]$, $[A_1]$ and $[A_2]$ be satisfied. If $W(t) < cV(t)$ then the relation (10.2.8) holds for $x \gg \sigma(n) \equiv V^{(-1)}(1/n)$. If the condition $W(t) < cV(t)$ is not satisfied then (10.2.8) will still be true, but only for values of $n$ and $x$ such that $nW(x/\ln x) < c$.

The assertions on the uniformity of the remainder term $o(1)$ in (10.2.8) in the respective range of $n$-values and an analogue of the above remark on relaxing conditions $[A_1]$ and $[A_2]$ remain valid in this case as well. Under more stringent conditions on the tail $F_+(t)$, such a relaxation was given in paper [131], which contains an assertion close to Theorem 10.2.6.

Proof. The proof of Theorem 10.2.6 repeats, up to obvious changes, the proof of Theorem 10.2.3. Now, to obtain the desired bounds for the probabilities $P(G_n B)$, $P(\bar S_n \ge \varepsilon x)$ and $P\bigl(\min_{k \le n} S_k < -\varepsilon x\bigr)$, one just needs the corresponding results of Chapter 3. Therefore a more detailed exposition of the proof will be omitted.
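Theorem 10.2.3 can be probed by simulation for the set $A$ of Example 10.2.2. The sketch below is my own Monte Carlo illustration, not the authors' computation; the centred Pareto jump law and all parameters are assumptions chosen so that $V$ and $\mu(A)$ are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of (10.2.8) for A = {f : int_0^1 f(u) du >= b} (Example
# 10.2.2) with centred Pareto jumps: xi = P - E[P], P ~ Pareto(3) on [1, inf),
# so E xi = 0, E xi^2 < inf and V(x) = P(xi >= x) = (x + 1.5)^(-3).
alpha, n, x, b, trials = 3.0, 20, 25.0, 1.0, 300_000
mean = alpha / (alpha - 1.0)                        # E P = 1.5

xi = rng.pareto(alpha, size=(trials, n)) + 1.0 - mean
w = np.arange(n - 1, -1, -1)                        # int_0^1 S_n(u) du = sum_j xi_j (n-j)/n
integral = xi @ w / n

p_mc = np.mean(integral >= x * b)                   # P(S_n(.) in xA)
p_thm = (b ** (-alpha) / (alpha + 1.0)) * n * (x + mean) ** (-alpha)  # mu(A) n V(x)
print(p_mc, p_thm, p_mc / p_thm)                    # ratio of order 1
```

The weights $(n-j)/n$ encode exactly the section analysis of Example 10.2.2: a jump $v$ at time $j/n$ contributes $v(1 - j/n)$ to the integral of the step trajectory.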
10.3 The general case

We will begin with the so-called two-sided boundary problem, where, for given functions $g_+(t) > 0$, $g_-(t) < 0$ possessing the properties

$\inf_{t\in[0,1]} g_+(t) = 1, \qquad \sup_{t\in[0,1]} g_-(t) = -g, \qquad g > 0, \qquad (10.3.1)$

we want to find the asymptotic behaviour of the probability of the event $\bar G_n$ complementary to

$G_n := \Bigl\{ x g_-\Bigl(\frac kn\Bigr) < S_k < x g_+\Bigl(\frac kn\Bigr);\ k = 1, \dots, n \Bigr\}.$

Clearly $\bar G_n = G_n^+ \cup G_n^-$, where

$G_n^+ = \Bigl\{\max_{k\le n}\Bigl(S_k - x g_+\Bigl(\frac kn\Bigr)\Bigr) \ge 0\Bigr\}, \qquad G_n^- = \Bigl\{\min_{k\le n}\Bigl(S_k - x g_-\Bigl(\frac kn\Bigr)\Bigr) \le 0\Bigr\}.$

Hence

$P(\bar G_n) = P(G_n^+) + P(G_n^-) - P(G_n^+ G_n^-). \qquad (10.3.2)$
The asymptotics of $P(G_n^\pm)$ were found (under the appropriate conditions) in Chapters 3 and 4 (they are of the form $c_+ nV(x)$ and $c_- nW(x)$ respectively). Now we will show that

$P(G_n^+ G_n^-) = o\bigl(P(\bar G_n)\bigr). \qquad (10.3.3)$

To avoid repeating very similar formulations in the cases $E\xi^2 < \infty$, $E\xi = 0$ and $E\xi^2 = \infty$, $E\xi = 0$ (cf. e.g. Theorems 10.2.3 and 10.2.6), we will introduce a 'united' condition $[Q^*]$, which ensures that the approximations

$P(\bar S_n \ge x) \sim P(S_n \ge x) \sim nV(x), \qquad (10.3.4)$

$P(\underline S_n \le -x) \sim P(S_n \le -x) \sim nW(x), \qquad (10.3.5)$

where $\underline S_n = \min_{k \le n} S_k$, hold true (not necessarily both at the same time), or that the respective upper bounds $cnV(x)$ and $cnW(x)$ for the probabilities in question will be valid.

In the case $E\xi^2 < \infty$, $E\xi = 0$, condition $[Q^*]$ means that $x > c\sqrt{n \ln n}$, $n \to \infty$, and that one of the following three alternative sets of conditions is satisfied:

(1) $[=, =]$, $W(x) \sim c_1 V(x)$, $c_1 > 0$, $\alpha > 2$, $c > \sqrt{\alpha - 2}$; $\qquad$ (10.3.6)

(2) $[<, =]$, $W(x) = o\bigl(V(x)\bigr)$, $\alpha > 2$, $c > \sqrt{\beta - 2}$; $\qquad$ (10.3.7)

(3) $[=, <]$, $V(x) = o\bigl(W(x)\bigr)$, $\beta > 2$, $c > \sqrt{\alpha - 2}$. $\qquad$ (10.3.8)

It follows from Remark 4.4.2 that under conditions (10.3.6) we will have the relations (10.3.4), (10.3.5). Under condition (10.3.7), we will have (10.3.4) and the bound $P(\underline S_n \le -x) < c_2 nW(x)$ (see Remark 4.4.2 and Corollary 4.1.4). A similar (symmetric) assertion holds under condition (10.3.8).

In the case $E\xi^2 = \infty$, $E\xi = 0$, condition $[Q^*]$ means that $x \to \infty$ and that one of the following three alternative sets of conditions is satisfied:

(1) $[=, =]$, $\alpha \in (1, 2)$, $W(x) \sim c_1 V(x)$, $c_1 > 0$, $nV(x) \to 0$; $\qquad$ (10.3.9)

(2) $[<, =]$, $\alpha \in (1, 2)$, $W(x) = o\bigl(V(x)\bigr)$, $nV\bigl(\frac{x}{\ln x}\bigr) \to 0$; $\qquad$ (10.3.10)

(3) $[=, <]$, $\beta \in (1, 2)$, $V(x) = o\bigl(W(x)\bigr)$, $nW\bigl(\frac{x}{\ln x}\bigr) \to 0$. $\qquad$ (10.3.11)

It follows from Theorem 3.4.1 that under conditions (10.3.9) we will have the asymptotics (10.3.4), (10.3.5). If (10.3.10) holds true then we will have (10.3.4) and the bound $P(\underline S_n \le -x) < c_2 nW(x)$. A similar (symmetric) situation takes place when (10.3.11) is satisfied.

We have the following bound for $P(G_n^+ G_n^-)$.
Lemma 10.3.1. Let condition $[Q^*]$ be satisfied. Then

$P(G_n^+ G_n^-) \le c n^2 \bigl[V(x)\, W\bigl(x(1+g)\bigr) + W(gx)\, V\bigl(x(1+g)\bigr)\bigr] \le c_1 n^2 V(x) W(x), \qquad (10.3.12)$
where $g$ is defined in (10.3.1).

Proof. Let

$\eta_\pm := \min\Bigl\{k > 0 : \pm S_k \ge \pm x g_\pm\Bigl(\frac kn\Bigr)\Bigr\}.$

Since $\{\eta_\pm > k\} \in \mathcal F_k := \sigma(\xi_1, \dots, \xi_k)$, we have

$P(G_n^+ G_n^-) \le \sum_{k=1}^n P(\eta_+ = k,\ \eta_- > k)\, P\bigl(\underline S_{n-k} \le -x(1+g)\bigr) + \sum_{k=1}^n P(\eta_- = k,\ \eta_+ > k)\, P\bigl(\bar S_{n-k} \ge x(1+g)\bigr). \qquad (10.3.13)$
By virtue of condition $[Q^*]$,

$P\bigl(\bar S_{n-k} \ge x(1+g)\bigr) \le c(n-k)\, V\bigl(x(1+g)\bigr) \le cn\, V\bigl(x(1+g)\bigr),$

$P\bigl(\underline S_{n-k} \le -x(1+g)\bigr) \le c(n-k)\, W\bigl(x(1+g)\bigr) \le cn\, W\bigl(x(1+g)\bigr).$

Hence (10.3.13) implies the inequality

$P(G_n^+ G_n^-) \le cn\, W\bigl(x(1+g)\bigr)\, P(\bar S_n \ge x) + cn\, V\bigl(x(1+g)\bigr)\, P(\underline S_n \le -gx).$

Again using bounds from Chapters 3 and 4, we obtain (10.3.12). The lemma is proved.

Lemma 10.3.1 and the relation (10.3.2) imply the following result.

Theorem 10.3.2. Let condition $[Q^*]$ be satisfied. Then

$P(\bar G_n) = \bigl(P(G_n^+) + P(G_n^-)\bigr)(1 + o(1)). \qquad (10.3.14)$
The asymptotics of $P(G_n^\pm)$ were described in Theorems 3.6.4 and 4.6.7.

Now we can obtain in a similar way extensions of Theorems 10.2.3 and 10.2.6 to the general case of 'two-sided' deviations. We will again assume that our conditions $[A_0]$–$[A_2]$ are satisfied. In the case of 'two-sided' deviations, the sets $A(t)$ and $A_f(t)$ from conditions $[A_1]$ and $[A_2]$ could contain intervals from the negative half-line $(-\infty, 0)$ as well. Put

$A^+(t) := A(t) \cap (0, \infty), \qquad A^-(t) := A(t) \cap (-\infty, 0).$

Then, under conditions $[A_1]$ and $[A_2]$, there will exist measures $\mu_{A^\pm(t)}$ on the real line and measures $\mu^\pm(A)$ on $D(0,1)$ such that

$\mu^\pm(A) := \int_0^1 \mu_{A^\pm(t)}\, dt.$
We define the sets $A_f^\pm(t)$ in a similar way and assume in condition $[A_2]$ that, as $\|f\| \to 0$,

$\alpha \int_{A_f^\pm(t)} |u|^{-\alpha-1}\, du \to \mu_{A^\pm(t)}$

uniformly in $t \in [0,1]$. As in the previous section (see Examples 10.2.1 and 10.2.2), one can easily verify that the above-mentioned conditions are satisfied in the two-boundary problem stated at the beginning of this section and also in the problem on the asymptotics of the probabilities

$P\Bigl(\int_0^1 S_n(u)\, du \ge x\Bigr) \quad\text{or}\quad P\Bigl(\Bigl|\int_0^1 S_n(u)\, du\Bigr| \ge x\Bigr).$
Now we can formulate the following assertion.

Theorem 10.3.3. Let conditions $[Q^*]$, $[A_1]$ and $[A_2]$ be satisfied. Then

$P\bigl(S_n(\cdot) \in xA\bigr) = n\bigl[\mu^+(A)\, V(x) + \mu^-(A)\, W(x)\bigr](1 + o(1)). \qquad (10.3.15)$

Proof. Since, owing to (10.2.3),

$G_n = \{S_n(\cdot) \in xA\} \subset \{\bar S_n \ge x\} \cup \{\underline S_n \le -x\},$

we can write $G_n = G_n^+ \cup G_n^-$, where

$G_n^+ := G_n \cap \{\bar S_n \ge x\} \quad\text{and}\quad G_n^- := G_n \cap \{\underline S_n \le -x\}.$

Then, by virtue of Lemma 10.3.1,

$P(G_n^+ G_n^-) \le P(\bar S_n \ge x,\ \underline S_n \le -x) \le c n^2 V(x) W(x).$
Owing to (10.3.2), it remains to find the asymptotics of $P(G_n^\pm)$. Assume, for simplicity and definiteness, that condition $[=, =]$ is satisfied. It follows from the standard arguments of the previous chapters (e.g. from the argument in the proof of Theorem 3.5.1 on the asymptotics of $P(H_n)$, where $H_n = \{\bar S_n \ge x\}$) that, for $B_j = \{\xi_j < y\}$ with $y = x/r$, $r > 1$ fixed,

$P(G_n^+) = P(G_n H_n) = \sum_{j=1}^n P(G_n H_n \bar B_j) + o\bigl(nV(x)\bigr),$

where, cf. (10.2.13),

$P(G_n H_n \bar B_j) = P(G_n H_n \bar B_j C_j) + o\bigl(V(x)\bigr) = P(G_n \bar B_j C_j) + o\bigl(V(x)\bigr);$

the $C_j$ were defined in (10.2.12). Thus,

$P(G_n^+) = \sum_{j=1}^n P(G_n \bar B_j C_j) + o\bigl(nV(x)\bigr);$
Large deviations in trajectory space
we note that the asymptotics of P(Gn B j Cj ) was established in the proofs of Theorems 10.2.3 and 10.2.6. Combining this with similar results for P(G− n ), we obtain the assertion of the theorem. If condition [<, =] with W (x) = o V (x) holds then finding the exact asymp− totics of P(G− n ) is replaced by establishing the bound P(Gn ) < cnW (x), and in this case, the assertion (10.3.15) will remain true. The case [=, <] with V (x) = o W (x) can be dealt with in a similar way. The theorem is proved.
11 Large deviations of sums of random variables of two types
Chapters 11–13 of the book deal with random walks with non-identically distributed jumps. In the present chapter, we will consider a rather special problem (compared with the material of Chapters 12 and 13) on the asymptotics of the distributions of sums of r.v.’s of two types. The results of this chapter will be used in Chapter 13. We will start with a discussion of motivations and applications of the problem.
11.1 The formulation of the problem for sums of random variables of two types Let ξ1 , ξ2 , . . . and τ1 , τ2 , . . . be two independent sequences of independent r.v.’s, which are independent copies of the r.v.’s ξ and τ with distributions Fξ and Fτ respectively, and let E|ξ| < ∞, Put Sn :=
n
E|τ | < ∞.
Tm :=
ξi ,
i=1
m
τi .
i=1
The aim is to study the asymptotics of the probabilities P (m, n, x) := P Tm + Sn x → 0
(11.1.1)
as x → ∞. Without losing generality, we may assume that m n. The numbers m and n can be either fixed or unboundedly increasing. If n → ∞ (m → ∞) then, again without loss of generality, it may be assumed that Eξ = 0
(Eτ = 0).
Studying the asymptotics of P (m, n, x) is of interest in a range of problems. For example, papers [13, 124] (which also contain discussions of motivations and applications to queueing theory) deal with the asymptotics of P η ∗ (τ ) > n → 0 427
428
Large deviations of sums of random variables of two types
as n → ∞, where the r.v. τ 0, η ∗ (t) = min{n 1 : Sn∗ > t} is the first crossing time of the level t by the random walk Sn∗ = ni=1 ξi∗ , n = 1, 2, . . . and the r.v.’s ξi∗ 0 are i.i.d. and independent of τ . Evidently {η ∗ (τ ) > n} = {Sn∗ τ }. If we let ξi := a − ξi∗ , where a = Eξi∗ , then {η ∗ (τ ) > n} = {Sn + τ an} so that P(η∗ (τ ) > n) = P (1, n, an).
(11.1.2)
Hence we have arrived at problem (11.1.1) under the following special assumptions: (1) m = 1, x = an; (2) ξ a, τ 0. Observe that in [13, 124] it was assumed that the distribution Fτ is close to distributions from the class Se. Analysing the asymptotics of P (m, n, x) for m = 1 could be of interest in a number of other problems, unrelated to (11.1.2). Thus to some extent, one can reduce to such an analysis the problem on the probabilities of large deviations of semi-Markov processes defined on a Markov chain {Y (n)} with a positive atom, say, at a point y0 . Let X(n) be the value of the semi-Markov process at the time of the nth visit of the chain to y0 . Then X(n) = X(1) + ξ1 + · · · + ξn , where ξk is the increment of the semi-Markov process on the kth of the cycles formed by the visits of {Y (j)} to y0 . In this case the distribution of τ := X(1) is, generally speaking, different from that of ξ1 , when Y (0) = y0 . Another example is provided by a renewal process with a time of the first renewal τ > 0 and time intervals ξ1 , ξ2 , . . . between subsequent renewals (ξi 0). As is well known, in the case of a stationary process one has here 1 P(τ t) = Eξ
∞ P(ξ u) du.
(11.1.3)
t
Studying the large deviations of the time of the nth renewal, we will arrive at problem (11.1.1) with m = 1. Many results on the asymptotics of P (m, n, x) for a fixed m can be easily extended to the case when m = ν is a Markov time for the sequence {τi }, and one is interested in the asymptotics of P (ν, n, x) or P (ν, n − ν, x). Such probabilities can arise, for example, in the theory of semi-Markov processes. In what follows, we will consider problem (11.1.1) in its general form. As before, we will deal with the two classes of ‘right-sided distribution tails’:
(1) the class $\mathcal R$ of distributions with regularly varying tails,

$F \in \mathcal R \iff F_+(t) = V(t) = t^{-\alpha} L(t), \qquad \alpha > 1, \qquad (11.1.4)$

where $L(t)$ is an s.v.f. at infinity;

(2) the class $\mathcal{Se}$ of distributions with semiexponential tails,

$F \in \mathcal{Se} \iff F_+(t) = V(t) = e^{-l(t)}, \qquad l(t) = t^\alpha L(t), \qquad \alpha \in (0, 1), \qquad (11.1.5)$

where $L(t)$ is an s.v.f. at infinity possessing certain smoothness properties (see Definition 1.2.22 or (5.1.3)).

Also, we will sometimes write $V \in \mathcal R$ ($V \in \mathcal{Se}$) in the cases when $V(t)$ is the right tail of a distribution from the respective class. For these classes, we will study the asymptotics of $P(m,n,x)$ both in the case of fixed $m$ and when $m \to \infty$, $m \le n$. When dealing with the class of semiexponential distributions, it will be assumed that $n \to \infty$. Because of the technical difficulties involved, we will not consider all possible growth rates of $m$.
11.2 Asymptotics of P(m, n, x) related to the class of regularly varying distributions

For the characteristics of the distributions of the r.v.'s $\tau$ and $\xi$ we will use, as a rule, the same symbols, if necessary endowing them with subscripts $\tau$ and $\xi$ respectively. Recall that we assumed without losing generality that $m \le n$. If $n \to \infty$ ($m \to \infty$) then we also assume that $E\xi = 0$ ($E\tau = 0$). The relations $a(x,m,n) \sim b(x,m,n)$ and $a(x,m,n) \gg b(x,m,n)$ respectively mean that

$\frac{a(x,m,n)}{b(x,m,n)} \to 1 \qquad\text{and}\qquad \frac{a(x,m,n)}{b(x,m,n)} \to \infty$

as $x \to \infty$ (the dependence of the ranges for $m$ and $n$ on $x$ will be specified in the respective assertions). For an r.v. $\zeta$, its distribution will be denoted by $F_\zeta$, so that $F_{\zeta,\pm}(t)$ will stand for the tails of that distribution:

$F_{\zeta,+}(t) = P(\zeta \ge t), \qquad F_{\zeta,-}(t) = P(\zeta < -t).$
Condition $[\,\cdot\,,\,\cdot\,]_\zeta$ will denote condition $[\,\cdot\,,\,\cdot\,]$, which we introduced in § 2.1, but for the distribution $F_\zeta$. If, say, condition $[<,<]_\zeta$ is satisfied then $V_\zeta(t)$ and $W_\zeta(t)$ will denote the regularly varying majorants for $F_{\zeta,+}(t)$ and $F_{\zeta,-}(t)$ respectively. If condition $[=,=]_\zeta$ is met then $V_\zeta(t)$ and $W_\zeta(t)$ will denote the regularly varying tails themselves: $F_{\zeta,+}(t) = V_\zeta(t) = t^{-\alpha_\zeta} L_\zeta(t)$, $F_{\zeta,-}(t) = W_\zeta(t) = t^{-\beta_\zeta} L_{W_\zeta}(t)$.

Theorem 11.2.1. Let $x \to \infty$ and at least one of the following three sets of conditions hold (the first two sets are symmetric with respect to $\tau$ and $\xi$):

(1) $[\,\cdot\,, =]_\tau$, $E\tau^2 < \infty$, $\alpha_\tau > 2$; $[\,\cdot\,, <]_\xi$, $nV_\xi(x) = o(mV_\tau(x))$;

(2) $[\,\cdot\,, =]_\xi$, $E\xi^2 < \infty$, $\alpha_\xi > 2$; $[\,\cdot\,, <]_\tau$, $mV_\tau(x) = o(nV_\xi(x))$;

(3) $[\,\cdot\,, =]_\tau$, $E\tau^2 < \infty$, $\alpha_\tau > 2$; $[\,\cdot\,, <]_\xi$, $E\xi^2 < \infty$, $\alpha_\xi > 2$.

Then, for $x \gg \sqrt{n \ln(n+1)}$,

$P(m, n, x) \sim mV_\tau(x) + nV_\xi(x). \qquad (11.2.1)$
Now let the following set of conditions be met:

(1a) $[<, =]_\tau$, $\alpha_\tau \in (1, 2)$; $[\,\cdot\,, <]_\xi$, $\alpha_\xi > 2$, $nV_\xi(x) = o(mV_\tau(x))$.

Then (11.2.1) will hold true provided that the quantities $x$ and $m$ satisfy the relations

$mV_\tau\Bigl(\frac{x}{\ln x}\Bigr) \to 0, \qquad mW_\tau\Bigl(\frac{x}{\ln x}\Bigr) \to 0. \qquad (11.2.2)$

If $\alpha_\xi \in (1, 2)$ in conditions (1a) then, in order to have (11.2.1), we require in addition that condition $[<,<]_\xi$ be met and

$nW_\xi\Bigl(\frac{x}{\ln x}\Bigr) \to 0.$

Analogues (2a), (3a) of the sets of conditions (2), (3), in the cases specified by conditions $[<,=]_\xi$, $[\,\cdot\,,<]_\tau$ and $[<,=]_\xi$, $[<,=]_\tau$, $\alpha_\xi \in (1,2)$, $\alpha_\tau \in (1,2)$ respectively, are formulated in a similar way. Under these conditions, the relation (11.2.1) will hold true.

Proof. Let $G := G_{n,m} = \{T_m + S_n \ge x\}$. The proof is based on the representation

$P(G) = P\Bigl(G;\ S_n < \frac x2\Bigr) + P\Bigl(G;\ T_m < \frac x2\Bigr) + P\Bigl(T_m \ge \frac x2,\ S_n \ge \frac x2\Bigr), \qquad (11.2.3)$

where the first two terms on the right-hand side are evaluated in the same way, owing to their symmetry. If condition (1) is met then, for $\delta \in (0, 1/2)$, one has

$P\Bigl(G;\ S_n < \frac x2\Bigr) = E\bigl[F_{T_m,+}(x - S_n);\ |S_n| \le \delta x\bigr] + R_1 + R_2, \qquad (11.2.4)$

where

$R_1 \le F_{S_n,+}(\delta x)\, F_{T_m,+}\Bigl(\frac x2\Bigr), \qquad R_2 \le F_{S_n,-}(\delta x)\, F_{T_m,+}\bigl((1+\delta)x\bigr).$

Here, by virtue of the results of § 4.1, for $x \gg \sqrt{n \ln(n+1)}$ we have the inequalities

$F_{S_n,+}(\delta x) \le c\, nV_\xi(x), \qquad F_{T_m,+}\Bigl(\frac x2\Bigr) \le c\, mV_\tau(x),$

so that $R_1 \le c\, mn\, V_\tau(x) V_\xi(x)$.
It is obvious that the last term in (11.2.3) admits the same upper bound. For $R_2$, we have $R_2 = o(mV_\tau(x))$. It remains to evaluate the first term on the right-hand side of (11.2.4). Choosing a small enough $\delta > 0$, we can make its ratio to $mV_\tau(x)$ arbitrarily close to 1. Therefore

$P\Bigl(G;\ S_n < \frac x2\Bigr) \sim mV_\tau(x).$

In exactly the same way one can show that

$P\Bigl(G;\ T_m < \frac x2\Bigr) \le c\, nV_\xi(x).$

Combining the above results and taking into account the conditions (1), we obtain that $P(G) \sim mV_\tau(x)$, which proves (11.2.1).

The proof of (11.2.1) under conditions (2) and (3) is similar to the above argument.

If $\alpha_\tau \in (1, 2)$ then the asymptotic relations used in the above proofs will be valid provided that the relations (11.2.2) hold true. Having conditions (11.2.2) enables one to claim that, under conditions (1a) with $E\tau^2 = \infty$, as $m \to \infty$ (see § 3.1),

$P(T_m < -\delta x) \le c\, mW_\tau(x) = o(1), \qquad P(T_m \ge x) \sim mV_\tau(x).$
Now repeating the rest of the above argument, we arrive at (11.2.1). The case $\alpha_\xi \in (1, 2)$ can be dealt with in the same way. The theorem is proved.

In problem (11.1.2) we have $\xi_i \le a$, and so those of the conditions (1) of Theorem 11.2.1 that concern the r.v. $\xi$ are always satisfied. In the problem on a stationary renewal process, when $[\,\cdot\,, =]_\xi$ holds true, we see that conditions (3) and (3a) of Theorem 11.2.1 are satisfied, so that

$P(\tau + S_n \ge x) \sim \Bigl(n + \frac{x}{(\alpha - 1)E\xi}\Bigr) V_\xi(x).$

Extending the results of Theorem 11.2.1 to the case when $m = \nu$ is a random stopping time for $\{\tau_j\}$ does not require any additional considerations provided that we impose the conditions of the theorem directly on the sum $\tau^* := T_\nu$ and study the probabilities $P(1, n, x)$ constructed for the r.v.'s $\tau^*$ and $\{\xi_j\}$. If these conditions are imposed on $\tau_j$ and $\nu$ then conditions (2) will be satisfied provided that $V_\tau(x) + F_{\nu,+}(x) = o(nV_\xi(x))$ since, owing to the results of [76], one has $P(T_\nu \ge x) \le c\bigl(V_\tau(x) + F_{\nu,+}(x)\bigr)$. To state conditions (1) and (3) in terms
Large deviations of sums of random variables of two types
of $\tau_j$ and $\nu$, we need to know the nature of $\nu$ in more detail. If, for instance, condition $[\,\cdot\,,=]_\tau$ is met, $F_\nu \in \mathcal{R}$ and $\{\tau_j\}$ and $\nu$ are independent then
$$P(T_\nu \ge x) = \sum_{k=1}^{\infty} P(\nu = k)\, P(T_k \ge x) \sim E\nu\, P(\tau \ge x) + P\Big(\nu \ge \frac{x}{E\tau}\Big),$$
and so reformulating conditions (1) and (3) in terms of $\tau_j$ and $\nu$ will cause no difficulties.
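The "one big jump" asymptotics behind results of the form (11.2.1), namely $P(T_m + S_n \ge x) \approx mV_\tau(x) + nV_\xi(x)$, can be checked by simulation. The sketch below is not from the book: it assumes centred Pareto-type jumps with illustrative tails $V_\tau(x) = x^{-3/2}$ and $V_\xi(x) = x^{-3}$, and all function names are ours.

```python
import random

def pareto(alpha, rng):
    # Pareto(alpha) on [1, inf): P(X >= x) = x**(-alpha) for x >= 1
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def estimate_two_type_tail(m, n, x, a_tau, a_xi, trials, seed=0):
    """Monte Carlo estimate of P(T_m + S_n >= x) for centred Pareto jumps."""
    rng = random.Random(seed)
    mean_tau = a_tau / (a_tau - 1.0)  # mean of Pareto(alpha), alpha > 1
    mean_xi = a_xi / (a_xi - 1.0)
    hits = 0
    for _ in range(trials):
        t_m = sum(pareto(a_tau, rng) - mean_tau for _ in range(m))
        s_n = sum(pareto(a_xi, rng) - mean_xi for _ in range(n))
        if t_m + s_n >= x:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    m = n = 20
    x = 50.0
    est = estimate_two_type_tail(m, n, x, 1.5, 3.0, 100_000)
    pred = m * x ** -1.5 + n * x ** -3.0  # m*V_tau(x) + n*V_xi(x)
    print(est, pred)
```

With $m = n = 20$ and $x = 50$ the estimate typically lands within a few tens of per cent of the prediction; the agreement improves as $x$ grows, at the cost of the event becoming rarer.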
11.3 Asymptotics of P(m, n, x) related to semiexponential distributions

In this section, it will be assumed that $n \to \infty$ whereas $m$ can be either fixed or growing with $n$. However, we will not cover all growth rates for $m \to \infty$, since dealing with some of them requires serious technical difficulties to be overcome. Regarding $\xi_i$, we will assume that
$$E\xi_i = 0, \qquad E\xi_i^2 = 1, \qquad E|\xi_i|^b < \infty, \quad b > 2. \tag{11.3.1}$$
As in the previous section, functions $V(t)$ of the form (11.1.5) but with subscripts $\tau$ or $\xi$ will play the role of the tails or their majorants for the distributions of $\tau$ and $\xi$. As in Chapter 5, a condition of the form $[\,\cdot\,,=]_\zeta$ or $[\,\cdot\,,<]_\zeta$ will now mean respectively that the distribution of $\zeta$ is semiexponential, $F_{\zeta,+} = V_\zeta \in Se$, or that the right tail $F_{\zeta,+}$ of the distribution admits a majorant $V_\zeta \in Se$, and so on. The relation $V \in Se$ means that the function $l(t) = t^\alpha L(t)$ in (11.1.5) has the property (see (5.1.3)) that, for $\Delta = o(t)$, $t \to \infty$,
$$l(t+\Delta) - l(t) = \frac{\alpha \Delta\, l(t)}{t}\,(1 + o(1)) + o(1). \tag{11.3.2}$$
We will also need a stronger condition, [D], introduced in § 5.4 (see (5.4.1)), with $q(t) = 0$. This condition requires that, for $\Delta = o(t)$, $t \to \infty$,
$$l(t+\Delta) - l(t) = l'(t)\Delta + \frac{\alpha(\alpha-1)}{2}\,\frac{l(t)}{t^2}\,\Delta^2\,(1 + o(1)). \tag{11.3.3}$$
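For a concrete semiexponential exponent, the increment relation (11.3.2) is easy to verify numerically. A minimal sketch, assuming the model case $l(t) = t^\alpha$ with $\alpha = 1/2$ (so the s.v.f. $L \equiv 1$); the helper names are ours:

```python
def l(t, alpha=0.5):
    # model semiexponential exponent l(t) = t**alpha (s.v.f. L == 1)
    return t ** alpha

def increment_error(t, delta, alpha=0.5):
    """Relative error of the first-order approximation in (11.3.2)."""
    exact = l(t + delta, alpha) - l(t, alpha)
    approx = alpha * delta * l(t, alpha) / t
    return abs(exact - approx) / approx

if __name__ == "__main__":
    print(increment_error(1e6, 1e3))  # small, since delta = o(t)
```

The leftover error is of the order of the second-order term made explicit in (11.3.3), i.e. $\alpha(\alpha-1)\,l(t)\Delta^2/(2t^2)$, so it shrinks as $\Delta/t$ decreases.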
Note that condition [D] is somewhat excessive for the assertions below and could be relaxed (cf. [13, 124, 51]). If the function l(t) is defined just for integral t (the lattice case) then we assume that a modification of condition [D] presented in § 5.4 (see p. 251) is satisfied with q(t) = 0. In what follows, we will need the basic facts and results presented in §§ 5.1–5.3. As in § 11.2, the notation introduced in these sections will be endowed with subscripts ξ and τ , respectively. For example, if condition [ · , =]τ (with Fτ ∈ Se) or condition [ · , <]τ is satisfied then lτ , ατ will denote the respective characteristics l and α, which are present in (11.1.5) and which correspond to the r.v. τ . We will also endow any further notation with subscripts τ and ξ when it is necessary
to indicate to which distribution it corresponds. For instance, we will denote by $g_\kappa^{(\tau,\xi)}$ the functions $g_\kappa$ in (5.4.26) with $l(x-t)$ replaced by $l_\tau(x-t)$, while the deviation function $\Lambda_\kappa = \Lambda_\kappa^{(\xi)}$ and the parameters $\kappa = \kappa_\xi = 1/(1-\alpha) + 1$, $\alpha = \alpha_\xi$, still correspond to $\xi$. Let
$$M^{(\tau,\xi)} = M^{(\tau,\xi)}(x,n) := \min_t g_\kappa^{(\tau,\xi)}(t,x,n) \tag{11.3.4}$$
(cf. (5.4.27)). Then, cf. (5.4.28), we have
$$M^{(\tau,\xi)} = l_\tau(x)\left(1 - \frac{\alpha_\tau^2}{2}\, n w_1^{(\tau)}(x)(1+o(1))\right) = l_\tau(x) - \frac{\alpha_\tau^2}{2}\, n w_2^{(\tau)}(x)(1+o(1)), \tag{11.3.5}$$
where $w_i^{(\tau)}(x) = l_\tau^i(x)\,x^{-2}$, $i = 1, 2$, so that corollaries similar to (5.4.29) and (5.4.30) will hold true. The functions $\sigma_i^{(\tau)}(n) = w_i^{(\tau)(-1)}(1/n)$ for $i = 1, 2$ are introduced in a similar way. Thus, according to the above convention, in Theorem 5.4.1 one has $M = M^{(\xi,\xi)}$, $w_i = w_i^{(\xi)}$ and $\sigma_i = \sigma_i^{(\xi)}$.

Theorem 11.3.1. Let $m$ be fixed, $n \to \infty$.

(i) Let condition $[\,\cdot\,,=]_\xi$ be satisfied (so that the Cramér approximation (5.1.11) holds true) and, for some $t_0 > 0$,
$$F_{\tau,+}(t) \le cV_\xi(t), \qquad t \ge t_0. \tag{11.3.6}$$
Then, for all $x \ge 0$,
$$P(m,n,x) \sim P(S_n \ge x). \tag{11.3.7}$$
(ii) Let conditions $[\,\cdot\,,=]_\tau$ and $[\,\cdot\,,<]_\xi$ be met and
$$l(t) \ge (1+\theta)\, l_\tau(t) \tag{11.3.8}$$
for $t \ge t_0$ and some $\theta > 0$, $t_0 > 0$, where $l$ corresponds to $V_\xi$ and $l_\tau$ satisfies condition [D]. Then, for $s_1^{(\tau)} = x/\sigma_1^{(\tau)}(n) \to \infty$,
$$P(m,n,x) \sim m\,e^{-M^{(\tau,\xi)}(x,n)}, \tag{11.3.9}$$
where $M^{(\tau,\xi)} = M^{(\tau,\xi)}(x,n)$ is defined in (11.3.4), (11.3.5). If $s_2^{(\tau)} = x/\sigma_2^{(\tau)}(n) \to \infty$ then
$$P(m,n,x) \sim mV_\tau(x). \tag{11.3.10}$$
(iii) Let conditions $[\,\cdot\,,=]_\tau$ and $[\,\cdot\,,=]_\xi$ be met, $s_1^{(\tau)} \to \infty$, $s_1^{(\xi)} \to \infty$ and $l_\tau$, $l_\xi$ satisfy [D]. Then
$$P(m,n,x) \sim m\,e^{-M^{(\tau,\xi)}(x,n)} + n\,e^{-M^{(\xi,\xi)}(x,n)}. \tag{11.3.11}$$
Here, as in (11.3.9), (11.3.10), one has
$$P(m,n,x) \sim mV_\tau(x) + nV_\xi(x) \tag{11.3.12}$$
provided that $s_2^{(\tau)} \to \infty$, $s_2^{(\xi)} \to \infty$.

Remark 11.3.2. If $n$ is fixed then the assertion (11.3.7) will be satisfied under the conditions of Theorem 11.3.1(i) once we have $F_{\tau,+}(t) = o(V_\xi(t))$ as $t \to \infty$. The condition that $F_{\tau,+}(t) \le cV_\xi(t)$ for $t \ge t_0$ can be relaxed in the case $F_\tau \in Se$ to the condition that $l_\tau(t) \ge l_\xi(t) - v\ln t$ for a suitable $v$, which is chosen depending on the range of the deviations $x$. It is not difficult to see from Theorem 5.4.1 that a more substantial relaxation of this condition is impossible.

Remark 11.3.3. Condition (11.3.8) can be relaxed to the condition that $l(t) \ge l_\tau(t) + \gamma(t)$, where $\gamma(t) = o(l_\tau(t))$ but $\gamma(t) \to \infty$ fast enough.

In problem (11.1.2) one has $\xi \equiv a$, and so the conditions of part (ii) of the theorem are met. Note also that problem (11.1.2) was considered in [13, 124] under broader conditions on the function $l$. As we have already observed, in Theorem 11.3.1 condition [D] could be relaxed.

In the problem on a stationary renewal process (see (11.1.3)), if $F_\xi \in Se$ then the conditions of part (iii) of the theorem are satisfied and
$$V_\tau(x) \sim \frac{x^{1-\alpha}}{\alpha E\xi}\, V_\xi(x).$$
Proof of Theorem 11.3.1. (i) First suppose that $m = 1$ and that $c \le 1$ in (11.3.6). Put $G := \{\tau + S_n \ge x\}$. Then
$$P(1,n,x) = P(G) = P(G;\ \tau > t_0) + P(G;\ \tau \le t_0).$$
The r.v.'s $\tau$ and $\xi_{n+1}$ can be given on a common probability space by setting $\tau := F_{\tau,+}^{(-1)}(\omega)$, $\xi_{n+1} := F_{\xi,+}^{(-1)}(\omega)$, where $\omega$ is an r.v. that is uniformly distributed over $[0,1]$ and independent of $S_n$. Then $\xi_{n+1} \ge \tau$ for $\omega \le \omega_0 = F_{\tau,+}(t_0)$ and $\{\omega < \omega_0\} \subseteq \{\tau > t_0\}$. Therefore
$$P(G) \le P(S_{n+1} \ge x;\ \omega \le \omega_0) + P(S_n + t_0 \ge x;\ \omega > \omega_0)$$
$$= P(S_{n+1} \ge x) + P(S_n + t_0 \ge x)\,P(\omega > \omega_0) - P(S_{n+1} \ge x;\ \omega > \omega_0). \tag{11.3.13}$$
Further, for an arbitrary fixed numerical sequence $\omega_n \uparrow 1$ as $n \to \infty$, put $N_n := -F_{\xi,+}^{(-1)}(\omega_n)$. Then, clearly, $\xi_{n+1} \ge -N_n$ for $\omega \le \omega_n$ and
$$P(S_{n+1} \ge x;\ \omega > \omega_0) \ge P(S_{n+1} \ge x;\ \omega_n \ge \omega > \omega_0) \ge P(S_n - N_n \ge x;\ \omega_n \ge \omega > \omega_0)$$
$$= P(S_n \ge x + N_n)\, P(\omega \in (\omega_0, \omega_n]). \tag{11.3.14}$$
Now observe that the asymptotics of $P(S_n \ge x)$ as $n \to \infty$, $x \ge 0$ will not change if we replace $n$ by $n+1$, and $x$ by $x+y$, where
$$|y| = o(n^\gamma), \qquad \gamma < \frac{1-\alpha}{2-\alpha} < \frac12.$$
That is,
$$P(S_{n+1} \ge x + y) \sim P(S_n \ge x). \tag{11.3.15}$$
This follows from the approximation (5.1.11) and Theorem 5.4.1. (On the junction of the Cramér and intermediate deviation zones one could use either a refinement of Theorem 5.4.1 or theorems from [238], which are valid on the whole real line.) Therefore, choosing $\omega_n \uparrow 1$ in such a way that $N_n = o(n^\gamma)$, we obtain $P(\omega \in (\omega_0, \omega_n]) \to P(\omega > \omega_0) =: p$, and hence, owing to (11.3.13)–(11.3.15),
$$P(G) \le P(S_n \ge x)(1+o(1)) + P(S_n \ge x)\,p\,(1+o(1)) - P(S_n \ge x)\,p\,(1+o(1)) = P(S_n \ge x)(1+o(1)).$$
At the same time, for the same reasons, for any sequence $N_n \to \infty$ such that $N_n = o(n^\gamma)$ we have
$$P(G) \ge P(G;\ \tau > -N_n) \ge P(S_n \ge x + N_n)\,P(\tau > -N_n) = P(S_n \ge x)(1+o(1)).$$
This proves (11.3.7) in the case when $m = 1$ and $c \le 1$ in (11.3.6).

When $c > 1$, we put $k := \lceil c\rceil + 1$. Since $F_\xi \in \mathcal S$ owing to Theorem 1.2.36, we have from Theorem 1.2.12(iii) that $F_{S_k,+}(t) \sim kF_{\xi,+}(t)$ as $t \to \infty$, and one can assume without loss of generality that
$$F_{\tau,+}(t) \le F_{S_k,+}(t), \qquad t \ge t_0.$$
Therefore, in a similar way to the previous argument, one can define the r.v.'s $\tau$ and $S_k^{(n)} := \xi_{n+1} + \cdots + \xi_{n+k}$ on a common probability space by setting $\tau := F_{\tau,+}^{(-1)}(\omega)$, $S_k^{(n)} := F_{S_k^{(n)},+}^{(-1)}(\omega)$, where $\omega$ is an r.v. that is uniformly distributed over $[0,1]$ and independent of $S_n$, and this will ensure that $S_k^{(n)} \ge \tau$ for $\omega \le \omega_0$. The subsequent argument remains basically unchanged from the case when $c \le 1$ (one simply replaces $S_{n+1}$ by $S_{n+k}$). If $m > 1$, one should repeat the above argument $m$ times.

(ii) For simplicity we will again assume first that $m = 1$. Then, cf. (11.2.3), for any fixed $\delta > 0$ we have $P(1,n,x) = P(G) = P_{(1)} + P_{(2)} + P_{(3)}$, where
$$P_{(1)} := P\big(S_n \ge (1-\delta)x\big)\,P(\tau > \delta x), \quad P_{(2)} := P\big(G;\ S_n < (1-\delta)x\big), \quad P_{(3)} := P(G;\ \tau \le \delta x). \tag{11.3.16}$$
We will begin by bounding $P_{(1)}$. One can assume without loss of generality that $l(t) = (1+\theta)l_\tau(t)$ for $t > t_0$. Then, along with $s_1^{(\tau)} \to \infty$ we will have $s_1 \to \infty$, where $s_1$ corresponds to the tail $V_\xi$, so that $\pi_1 \equiv n x^{-2} l(x) \to 0$ and, by virtue of Corollary 5.2.2(i),
$$P(S_n \ge x) \le nV_\xi(x)^{1+o(1)}. \tag{11.3.17}$$
Hence, for large enough $x$,
$$P_{(1)} \le nV_\tau(\delta x)\exp\{-l((1-\delta)x)(1+o(1))\} \le n\exp\{-l_\tau(\delta x) - l_\tau((1-\delta)x)\}.$$
Here $l_\tau(\delta x) + l_\tau((1-\delta)x) \sim l_\tau(x)\psi(\delta)$, where $\psi(\delta) = \delta^{\alpha_\tau} + (1-\delta)^{\alpha_\tau} > 1$ for $\delta \in (0,1)$. Therefore
$$l_\tau(\delta x) + l_\tau((1-\delta)x) \ge l_\tau(x)(1+\psi_1), \qquad \psi_1 > 0 \quad\text{for}\quad \delta > 0$$
(see also (5.4.43)–(5.4.54)), so that for $s_1^{(\tau)} \to \infty$ and all large enough $x$,
$$P_{(1)} \le V_\tau(x)^{1+\psi}, \qquad \psi > 0. \tag{11.3.18}$$
Further,
$$P_{(2)} = E\big[V_\tau(x - S_n);\ S_n < (1-\delta)x\big].$$
Expressions of this kind were studied in (5.4.46)–(5.4.56). It follows from those considerations that if $s_1^{(\tau)} \to \infty$ and condition [D] is met for $l_\tau$ then
$$P_{(2)} \sim e^{-M^{(\tau,\xi)}(x,n)}. \tag{11.3.19}$$
It remains to bound
$$P_{(3)} = \int_{-\infty}^{\delta x} P(S_n \ge x - t)\, P(\tau \in dt).$$
Owing to (11.3.17), as $s_1 \to \infty$,
$$P_{(3)} \le n\int_{-\infty}^{\delta x} e^{-l(x-t)(1+o(1))}\, P(\tau \in dt) \le n\exp\{-l((1-\delta)x)(1+o(1))\}$$
$$= n\exp\{-l(x)(1-\delta)^\alpha(1+o(1))\} \le n\exp\{-l_\tau(x)(1-\delta)^\alpha(1+\theta)(1+o(1))\},$$
where the last inequality follows from (11.3.8). Hence, for small enough $\delta$ we have
$$P_{(3)} \le e^{-l_\tau(x)(1+\theta/2)} \tag{11.3.20}$$
for all large enough $x$. Since $M^{(\tau,\xi)}(x,n) = l_\tau(x)(1+o(1))$, we derive from (11.3.18)–(11.3.20) that (11.3.9) holds true for $m = 1$. When considering an arbitrary $m > 1$, one should note that since $V_\tau$ is subexponential we have
$$P(T_m \ge x) = e^{-l_\tau(x) + \ln m + o(1)}. \tag{11.3.21}$$
If $s_2^{(\tau)} \to \infty$ and condition [D] holds for $l_\tau$ then one can see from the calculations in (5.4.58)–(5.4.65) that $M^{(\tau,\xi)}(x,n) = l_\tau(x) + o(1)$, and hence (11.3.10) holds true. The second assertion of the theorem is proved.

(iii) Now we will prove the third assertion. Taking into account the assertions of parts (i), (ii) of the theorem and somewhat simplifying the problem, we will confine ourselves to considering the case
$$l_\tau(t) \le l_\xi(t) \le l_\tau(t)(1+\theta), \qquad t \ge t_0, \tag{11.3.22}$$
for some $\theta > 0$, $t_0 > 0$, which is complementary to the case (11.3.8) and the inequality $l_\tau(t) \ge l_\xi(t)$ in part (i) of the theorem. We will follow the argument in the proof of part (ii) and the representations (11.3.16). Nothing changes in the bounding of $P_{(1)}$ and the evaluation of $P_{(2)}$. It is not hard to see that, by virtue of Theorem 5.4.1,
$$P_{(3)} \sim n\,e^{-M^{(\xi,\xi)}(x,n)} \sim P(S_n \ge x).$$
The transition to the case of arbitrary $m > 1$ and also to the situation where $s_2^{(\tau)} \to \infty$, $s_2^{(\xi)} \to \infty$ can be made in a similar way to the previous argument. The theorem is proved.

If $m \to \infty$, $n \to \infty$ then studying the asymptotics of $P(m,n,x)$ could prove to be more difficult. We will note here the following two cases for which this problem turns out to be simple:

(1) $F_\tau \in Se$, $F_\xi \in Se$, $s_2^{(\tau)} \to \infty$, $s_2^{(\xi)} \to \infty$;

(2) $r = m/n$ is a fixed rational number, $n \to \infty$.

In the first case, as we have already observed,
$$M^{(\tau,\xi)}(x,n) = l_\tau(x) + o(1), \qquad M^{(\xi,\xi)}(x,n) = l_\xi(x) + o(1)$$
(here we have assumed for simplicity that $E\tau^2 = E\xi^2 = 1$; if $E\tau^2 \ne E\xi^2$ then one should consider somewhat different functions). Hence the minimum
$$\min_t\Big[M^{(\tau,\tau)}(x-t,\,m) + n\Lambda_{\kappa_\xi}^{(\xi)}(t/n)\Big], \tag{11.3.23}$$
which we will need to evaluate when calculating an integral for
$$P\big(T_m + S_n \ge x,\ S_n < (1-\delta)x\big)$$
(cf. $P_{(2)}$ in (11.3.16)) and when using approximations of Theorem 5.4.1 for the distribution of $T_m$, will be attained at $t^* = o(x)$, as will the minimum in (5.4.26), (5.4.27). Therefore the value (11.3.23) will again have the form $M^{(\tau,\xi)}(x,n) + o(1) = l_\tau(x) + o(1)$. From this, using the representation (11.3.16) and computing integrals, which are similar to those we dealt with in § 5.4, we obtain the following assertion.

Theorem 11.3.4. Let one of the following three conditions be satisfied:

(1) $[\,\cdot\,,<]_\tau$, $[\,\cdot\,,=]_\xi$, $mV_\tau(x) = o(nV_\xi(x))$;

(2) $[\,\cdot\,,=]_\tau$, $[\,\cdot\,,<]_\xi$, $nV_\xi(x) = o(mV_\tau(x))$;

(3) $[\,\cdot\,,=]_\tau$, $[\,\cdot\,,=]_\xi$.

Then, as $s_2^{(\tau)} \to \infty$, $s_2^{(\xi)} \to \infty$,
$$P(m,n,x) \sim mV_\tau(x) + nV_\xi(x). \tag{11.3.24}$$

Now consider the second case mentioned on p. 437, when $m/n = r = m_1/n_1$, $n \to \infty$, $m_1$ and $n_1$ being fixed integers. In other words, $m = m_1 k$, $n = n_1 k$, $k \to \infty$. Clearly,
$$T_m + S_n = H_k, \tag{11.3.25}$$
where $H_k := \sum_{i=1}^k \eta_i$ and the r.v.'s $\eta_i$ are independent and distributed as
$$\eta = T_{m_1} + S_{n_1}. \tag{11.3.26}$$
Here, under conditions (1)–(3) of Theorem 11.3.4 we have $F_\eta \in Se$, and so it remains to make use of Theorem 5.4.1 to obtain an approximation for $P(m,n,x) = P(H_k \ge x)$.
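The regrouping behind (11.3.25)–(11.3.26) is a purely deterministic identity: for $m = m_1 k$, $n = n_1 k$ the sum $T_m + S_n$ can be recomputed as a sum of $k$ blocks, each built from $m_1$ jumps $\tau_j$ and $n_1$ jumps $\xi_j$. A small sketch (our own helper, with arbitrary illustrative distributions):

```python
import math
import random

def regrouped_sum(taus, xis, m1, n1):
    """Regroup T_m + S_n into k blocks eta_i = (m1 taus) + (n1 xis)."""
    assert len(taus) % m1 == 0 and len(xis) % n1 == 0
    k = len(taus) // m1
    assert len(xis) // n1 == k
    return sum(
        sum(taus[i * m1:(i + 1) * m1]) + sum(xis[i * n1:(i + 1) * n1])
        for i in range(k)
    )

if __name__ == "__main__":
    rng = random.Random(42)
    m1, n1, k = 2, 3, 100
    taus = [rng.expovariate(1.0) for _ in range(m1 * k)]
    xis = [rng.gauss(0.0, 1.0) for _ in range(n1 * k)]
    direct = sum(taus) + sum(xis)
    print(math.isclose(direct, regrouped_sum(taus, xis, m1, n1)))
```

Up to floating-point reordering, the two evaluations agree; the probabilistic content of (11.3.26) is only that the blocks are i.i.d. copies of $\eta$, which then feeds into Theorem 5.4.1.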
12 Random walks with non-identically distributed jumps in the triangular array scheme in the case of infinite second moment. Transient phenomena
Chapters 12 and 13 deal with the general problem on random walks with independent non-identically distributed jumps in the triangular array scheme. We consider the cases of both finite and infinite variances for jumps with regularly varying distributions (analogues of Chapters 3 and 4). Introducing the triangular array scheme enables us, in particular, to study so-called transient phenomena, when the drift in the random walk vanishes in the limit. In applications to queueing theory, this corresponds to a heavy traffic situation. The transition to the case of random walks with non-identically distributed jumps extends the applicability of the results obtained in Chapters 2–5 (see the Introduction). In particular, this enables one to evaluate risks in insurance problems, where one deals with claims of different types that are time-inhomogeneous, to study the reliability of queueing systems with customers of different types and so on. Almost all the main results of Chapters 2–5 can be extended to random walks with independent non-identically distributed jumps in the triangular array scheme – of course, under some additional conditions ensuring the uniformity of the regular variation in the jump distributions. In this chapter, we will consider random walks whose jumps have zero means, infinite variance and ‘averaged distributions’ that are close to being regularly varying (or which admit regularly varying majorants or minorants).
12.1 Upper and lower bounds for the distributions of $\overline{S}_n$ and $S_n$

Let $\xi_1, \xi_2, \ldots$ be independent r.v.'s following the distributions $F_1, F_2, \ldots$ respectively. The distributions $F_j$ may also depend on some parameter. As in the classical triangular array scheme, this parameter may be the number of summands $n$, so that $F_j = F_j^{(n)}$ will depend on both $j$ and $n$. In the case when the sequence $\xi_1, \xi_2, \ldots$ is infinite, it would be natural to consider other parameters as well (for example, this is the case when one studies transient phenomena, see § 12.5). For brevity we will omit in what follows the superscript $(n)$ indicating that we are dealing with the triangular array scheme.
Non-identically distributed jumps with infinite second moments
As before, let
$$S_n = \sum_{i=1}^n \xi_i, \qquad \overline{S}_n = \max_{k\le n} S_k.$$
Next we will go through all the main stages of deriving the asymptotics of the probabilities P(Sn x) and P(S n x) presented in Chapter 3, but will do that now under new, more general, conditions.
12.1.1 Extensions of Lemma 2.1.1 and of the main inequality (2.1.8)

The extension of Lemma 2.1.1 for $P(\overline{S}_n \ge x)$ has the following form. Suppose for a moment that Cramér's condition is satisfied:
$$\varphi_i(\mu) := E e^{\mu\xi_i} < \infty, \qquad i = 1,\ldots,n, \tag{12.1.1}$$
for some $\mu > 0$.

Lemma 12.1.1. For all $n \ge 1$, $x \ge 0$, $\mu \ge 0$,
$$P(\overline{S}_n \ge x) \le e^{-\mu x}\max_{k\le n}\prod_{i=1}^k \varphi_i(\mu). \tag{12.1.2}$$
This inequality can also be written as
$$P\Big(\max_{k\le n} S_k \ge x\Big) \le e^{-\mu x}\max_{k\le n} E e^{\mu S_k}.$$

Proof. The proof follows the argument from § 2.1. Put $\eta(x) := \inf\{k : S_k \ge x\}$. Then, since for $k \le n$ the event $\{\eta(x) = k\}$ and the r.v.'s $S_n - S_k$ are independent, we have
$$\prod_{i=1}^n \varphi_i(\mu) = E e^{\mu S_n} \ge \sum_{k=1}^n E\big[e^{\mu S_n};\ \eta(x) = k\big] \ge \sum_{k=1}^n E\big[e^{\mu(x + S_n - S_k)};\ \eta(x) = k\big]$$
$$= e^{\mu x}\sum_{k=1}^n \prod_{i=k+1}^n \varphi_i(\mu)\, P(\eta(x) = k) \ge e^{\mu x}\min_{j\le n}\prod_{i=j+1}^n \varphi_i(\mu) \sum_{k=1}^n P(\eta(x) = k)$$
$$= e^{\mu x}\, P(\overline{S}_n \ge x)\, \min_{j\le n}\prod_{i=j+1}^n \varphi_i(\mu).$$
Observing that
$$\prod_{i=1}^n \varphi_i(\mu)\,\Big[\min_{k\le n}\prod_{i=k+1}^n \varphi_i(\mu)\Big]^{-1} = \max_{k\le n}\prod_{i=1}^k \varphi_i(\mu)$$
completes the proof of the lemma.

If the Cramér condition (12.1.1) is not satisfied then we will make use of 'truncated' (at level $y$) versions $\xi_i^{(y)}$ of the r.v.'s $\xi_i$, which have the distribution functions
$$P\big(\xi_i^{(y)} < t\big) = P(\xi_i < t \mid \xi_i < y) = \frac{P(\xi_i < t)}{P(\xi_i < y)}, \qquad t \le y.$$
As before, let
$$B_i = \{\xi_i < y\}, \qquad B = \bigcap_{i=1}^n B_i.$$
Our aim is to bound the probability $P := P(\overline{S}_n \ge x;\ B)$. Repeating the argument from § 2.1 and using Lemma 12.1.1, we obtain the following analogue of the basic inequality (2.1.8):
$$P(\overline{S}_n \ge x;\ B) \le e^{-\mu x}\max_{k\le n}\prod_{i=1}^k R_i(\mu, y), \tag{12.1.3}$$
where
$$R_i(\mu, y) := \int_{-\infty}^{y} e^{\mu t}\, F_i(dt).$$
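Lemma 12.1.1 itself is easy to test numerically when the $\varphi_i$ are known in closed form. A sketch assuming independent $N(0,\sigma_i^2)$ jumps, for which $\varphi_i(\mu) = e^{\mu^2\sigma_i^2/2}$; the function names and the choice $\mu = x/\sum\sigma_i^2$ (the minimizer when the maximum is attained at $k=n$) are ours:

```python
import math
import random

def exp_chebyshev_bound(sigmas, x, mu):
    """Right-hand side of (12.1.2) for independent N(0, sigma_i^2) jumps,
    where phi_i(mu) = exp(mu**2 * sigma_i**2 / 2)."""
    best = 0.0
    log_prod = 0.0
    for s in sigmas:
        log_prod += mu * mu * s * s / 2.0
        best = max(best, log_prod)  # max over k <= n of the prefix product
    return math.exp(-mu * x + best)

def mc_max_tail(sigmas, x, trials, seed=0):
    """Monte Carlo estimate of P(max_k S_k >= x)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0.0
        for sig in sigmas:
            s += rng.gauss(0.0, sig)
            if s >= x:
                hits += 1
                break
    return hits / trials

if __name__ == "__main__":
    sigmas = [0.5 + i * 0.05 for i in range(20)]  # non-identically distributed
    x = 10.0
    mu = x / sum(s * s for s in sigmas)
    print(mc_max_tail(sigmas, x, 50_000), exp_chebyshev_bound(sigmas, x, mu))
```

The Monte Carlo estimate must sit below the bound for every choice of $\mu \ge 0$; the bound is loose (here by roughly a factor of three), as is typical for exponential Chebyshev-type inequalities.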
12.1.2 Upper bounds for the distribution of the maximum of sums

To obtain the desired bounds, we will need, as in Chapter 3, conditions on the existence of regularly varying majorants, but this time for the averaged distributions
$$F := \frac1n\sum_{j=1}^n F_j$$
with tails
$$F_\pm(t) = \frac1n\sum_{j=1}^n F_{j,\pm}(t), \qquad F_{j,-}(t) = F_j((-\infty, -t)), \quad F_{j,+}(t) = F_j([t, \infty))$$
(the conditions will be of a form uniform in the parameter of the triangular array scheme). Since we are studying in this section the distributions of $S_n$ and $\overline{S}_n$ with finite $n$, the parameter of the scheme can be identified with $n$.
The conditions below could be referred to as uniform averaged regular variation conditions. They relate to the properties of regularly varying functions presented in Theorems 1.1.2 and 1.1.4. Following our previous conventions, we will say that the averaged distribution $F$ satisfies condition $[<,<]_U$ (resp. $[<,=]_U$) if
$$F_-(t) \le W(t), \quad F_+(t) \le V(t) \qquad \big(\text{resp. } F_-(t) \le W(t), \quad F_+(t) = V(t)\big), \tag{12.1.4}$$
where $V(t)$, $W(t)$ may depend on the triangular array scheme parameter $n$ and will possess the properties of an r.v.f. in the sense of conditions $[U_1]$, $[U_2]$ below. In situations where it is important to stress the dependence of the distribution $F$ and the majorants $V$, $W$ on $n$, we will write $F^{(n)}$, $V^{(n)}$, $W^{(n)}$ respectively. The above-mentioned condition on the averaged right tails has the following form. Let $\alpha > 0$ be fixed.

$[U_1]$ For any given $\delta > 0$ there exists a $t_\delta < \infty$ such that, for all $n$, $t \ge t_\delta$ and $v \in [1/2, 2]$,
$$\left|\frac{V^{(n)}(vt)}{V^{(n)}(t)} - v^{-\alpha}\right| \le \delta. \tag{12.1.5}$$
The second condition is stated in terms of uniform inequalities.

$[U_2]$ For any given $\delta > 0$ there exists a $t_\delta < \infty$ such that, for all $n$, $v \le v_0 < 1$ and $tv \ge t_\delta$,
$$v^{-\alpha+\delta} \le \frac{V^{(n)}(vt)}{V^{(n)}(t)} \le v^{-\alpha-\delta}. \tag{12.1.6}$$
Denote by [U] the condition that both conditions $[U_1]$, $[U_2]$ are satisfied for a common $\alpha \in (1,2)$ that does not depend on $n$: $[U] = [U_1]\cap[U_2]$. Without loss of generality, we will identify the values of $t_\delta$ in conditions $[U_1]$ and $[U_2]$.

Observe that it follows from condition [U] that for any given $\delta > 0$ there exists a $t_\delta < \infty$ such that, for all $t$ and $v$ satisfying the inequalities $t \ge t_\delta$, $tv \ge t_\delta$, we have (cf. (1.1.23))
$$(1-\delta)\min\{v^\delta, v^{-\delta}\} \le \frac{v^\alpha\, V^{(n)}(vt)}{V^{(n)}(t)} \le (1+\delta)\max\{v^\delta, v^{-\delta}\}. \tag{12.1.7}$$
It is not difficult to see that the converse is also true: (12.1.7) implies [U]. Condition [U] on the tails $W$ has exactly the same form but with $\alpha$ replaced by $\beta$. We will say that the averaged distribution $F$ satisfies condition [U] if the functions $V$ and $W$ corresponding to $F$ satisfy $[U_1]$ and $[U_2]$.
Remark 12.1.2. When studying the large deviations of $\overline{S}_n$ and $S_n$ in the case $n \to \infty$, condition [U] can be somewhat relaxed and replaced by the following condition:

$[U_\infty]$ For any given $\delta > 0$ there exist $n_\delta, t_\delta < \infty$ such that (12.1.7) holds for all $n \ge n_\delta$, $t \ge t_\delta$, $tv \ge t_\delta$.

A condition sufficient for $[U_\infty]$ is condition [UR], which will be presented in § 12.4 below (see p. 464). The latter condition requires that, for some fixed (i.e. independent of $n$) r.v.f. $G(t)$ and $\rho_+ \in (0,1)$,
$$F^{(n)}(t) = \frac{H(n)}{n}\, G(t)(1+o(1)), \qquad F_+^{(n)}(t) = \rho_+ F^{(n)}(t)(1+o(1)) \tag{12.1.8}$$
as $t \to \infty$, $n \to \infty$, where $H(n) \to \infty$, $H(n) \le n$.

A simple sufficient condition for the existence of the right regularly varying majorants for the averaged distributions is the boundedness of the averaged one-sided moments: for an $\alpha \ge 1$,
$$\frac1n\sum_{j=1}^n E\big(\xi_j^\alpha;\ \xi_j \ge 0\big) \le c_\alpha < \infty.$$
In this case, clearly, for $t > 0$, $F_+(t) \le V(t) \equiv c_\alpha t^{-\alpha}$, and condition [U] will be satisfied. A similar remark is valid for the left tails.

If each distribution $F_j$ satisfies condition $[<,<]_U$ uniformly in $n$ and $j$ with common $\alpha$ and $\beta$ values (the s.v.f.'s $L_j$ in the representations $V_j = t^{-\alpha}L_j(t)$ can be different, however) then this condition will be met for the averaged distribution $F$ as well. For example, consider condition $[U_2]$ on the tails $V_j$. If, for each $n$ and $j$, we have, for $v \ge v_0 > 1$ and $t \ge t_\delta$,
$$V_j(vt) \le V_j(t)\,v^{-\alpha+\delta}$$
then, obviously, the same inequality will hold for the sums as well: $V(vt) \le V(t)\,v^{-\alpha+\delta}$. Now if each distribution $F_j$ satisfies $[<,<]_U$ but the value $\alpha = \alpha_j$ does depend on $j$, $1 < \alpha_* \le \alpha_j \le \alpha^* < 2$, then it remains unclear whether condition $[<,<]_U$ will hold for the averaged distribution $F$ as well. It turns out, however, that when $[<,<]_U$ holds for each $F_j$ (uniformly in $j$, $n$), one can also obtain the desired bounds. In this connection, we will distinguish between the following two forms of condition $[<,<]_U$:

(1) condition $[<,<]_U$ on the average, when relations (12.1.5), (12.1.6) hold true;
(2) the individual conditions $[<,<]_U$, in which the relations (12.1.5), (12.1.6) hold for each $V_j$ uniformly in $n$ and $j$, the exponent $\alpha$ in (12.1.5), (12.1.6) being replaced by $\alpha_j$. The same properties are assumed for the tails $W_j$.

In addition to condition $[<,<]_U$ (in one or another form), to obtain the necessary asymptotics in the desired simple form we will also need some further conditions to prevent too rapid 'thinning' (degeneration) of the tails $F_-(t)$ and $F_+(t)$ (or $W(t)$, $V(t)$) in the triangular array scheme as $n \to \infty$ (say, too rapid convergence $V(t_1) \to 0$ or $W(t_2) \to 0$ for some fixed $t_1$ or $t_2$ respectively as $n \to \infty$). The degree of 'thinning' of the averaged distribution $F$ under conditions (12.1.8) is described by the ratio $H(n)/n$, which may tend to zero. In the case of too rapid 'thinning', a substantial role in bounding the probabilities $P(\overline{S}_n \ge x)$ may be played by the 'central parts' of the distributions $F$ rather than their tails.

We will need the following numerical characteristics:
$$J_j^V(t) := \int_0^t uV_j(u)\,du, \qquad J^V(t) := \frac1n\sum_{j=1}^n J_j^V(t),$$
$$J_j^W(t) := \int_0^t\Big(\int_u^\infty W_j(v)\,dv\Big)du = \int_0^\infty \min\{u, t\}\,W_j(u)\,du, \qquad J^W(t) := \frac1n\sum_{j=1}^n J_j^W(t),$$
$$J(t) := J^V(t) + J^W(t).$$
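The characteristics $J^V$, $J^W$ are ordinary one-dimensional integrals and, for power majorants, have closed forms against which a quadrature can be checked. A sketch assuming the model majorant $V(t) = \min\{1, t^{-\alpha}\}$ (our choice, not the book's):

```python
def V(t, alpha=1.5):
    # model regularly varying majorant, truncated at 1 so that V <= 1
    return min(1.0, t ** -alpha)

def JV_numeric(t, alpha=1.5, steps=200_000):
    # J^V(t) = integral_0^t u * V(u) du, midpoint rule
    h = t / steps
    return sum((i + 0.5) * h * V((i + 0.5) * h, alpha) * h for i in range(steps))

def JV_exact(t, alpha=1.5):
    # for t >= 1: 1/2 + (t**(2-alpha) - 1) / (2 - alpha); for t <= 1: t^2/2
    if t <= 1.0:
        return t * t / 2.0
    return 0.5 + (t ** (2.0 - alpha) - 1.0) / (2.0 - alpha)

if __name__ == "__main__":
    print(JV_numeric(10.0), JV_exact(10.0))
```

Since $\alpha < 2$, the integrand $uV(u) = u^{1-\alpha}$ for $u > 1$, so $J^V(t)$ grows like $t^{2-\alpha}$, which is exactly why the quantity $J(t_\delta)$ at a fixed $t_\delta$ stays bounded in condition [N] below.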
The condition ensuring not too rapid 'thinning' of the tails (or their non-degeneracy) as $n \to \infty$ has the following form.

[N] For some $\delta \in (0, \min\{\beta-1,\ 2-\alpha\})$ and for all $n$ and $x$ such that $x \to \infty$, $nV(x) < 1$, we have
$$V(x) \ge cJ(t_\delta)\,x^{-2}, \tag{12.1.9}$$
where $t_\delta$ is from conditions [U].

Since for $\alpha, \beta > 1$ and a fixed $t_\delta < \infty$ the quantity $J(t_\delta)$ is bounded, it is obvious that we always have (12.1.9) provided that $V(x) \ge cx^{-2}$ in the range $nV(x) < 1$ (recall that $V(x) = V^{(n)}(x)$ depends on $n$). Another condition sufficient for (12.1.9) is the inequality
$$V(t_\delta) \ge c_1 J(t_\delta), \tag{12.1.10}$$
where $t_\delta$ corresponds to a $\delta < \min\{\beta-1,\ 2-\alpha\}$. (Recall that $t_\delta$ specifies the range of $t$ where the regular variation properties hold with precision characterised by $\delta$.) Indeed, in this case, as $x \to \infty$,
$$V(x) = V(t_\delta)\,\frac{V(x)}{V(t_\delta)} \ge c_1 J(t_\delta)\left(\frac{x}{t_\delta}\right)^{-\alpha-\delta},$$
so that
$$x^2\, V(x)\, J^{-1}(t_\delta) \ge c_1 t_\delta^{\alpha+\delta}\, x^{2-\alpha-\delta} \to \infty.$$
Consider the following example.

Example 12.1.3. Let $\zeta_i$ be i.i.d. r.v.'s following a common symmetric distribution with majorant $V_{(\zeta)}$ for the right tail. Put
$$\xi_i := \begin{cases} \zeta_i & \text{with probability } \delta(n),\\ 0 & \text{with probability } 1-\delta(n),\end{cases}$$
where $\delta(n) \to 0$ as $n \to \infty$. Here $V(t) = \delta(n)V_{(\zeta)}(t)$, condition $[<,<]_U$ is met, $J(t_\delta) = c\,\delta(n) \to 0$,
$$V(t_\delta) = \delta(n)V_{(\zeta)}(t_\delta) = c^{-1}J(t_\delta)\,V_{(\zeta)}(t_\delta) = c_1 J(t_\delta),$$
and the sufficient condition (12.1.10) is satisfied. Thus, if the 'thinning' occurs owing to the concentration of the probability mass at zero, this does not prevent the desired conditions [U] and [N] from being satisfied. However, if
$$\xi_i := \begin{cases} \zeta_i & \text{with probability } \delta(n),\\ \zeta_i\,\mathrm{I}\big(|\zeta_i| < 1\big) & \text{with probability } 1-\delta(n),\end{cases}$$
then the situation will be quite different. For simplicity, let $t_\delta = 1$ for a suitable $\delta$, and $\delta(n) = n^{-\gamma}$, $\gamma \in (0,1)$, $V_{(\zeta)}(x) = Lx^{-\alpha}$ for $x > 1$. It is clear that condition (12.1.10) is not satisfied since $V(1) \to 0$, $J(1) > c > 0$ as $n \to \infty$. Condition (12.1.9) will be met if $\delta(n)V_{(\zeta)}(x) \ge cx^{-2}$ or, equivalently, if $n^\gamma x^{\alpha-2} < c$. This relation will follow from the inequality $nV(x) = n^{1-\gamma}Lx^{-\alpha} < 1$ only if $\alpha\gamma/(1-\gamma) < 2-\alpha$ or, equivalently, if $\gamma < (2-\alpha)/2$. When $\gamma > (2-\alpha)/2$, condition [N] will not be satisfied, and the principal contribution to the probabilities of large deviations of $\overline{S}_n$ and $S_n$ may well come from the central parts of the distributions of the $\xi_i$ (or the $\zeta_i$) rather than from their tails.

Now we can formulate the main assertion. Recall that by condition $[<,<]_U$ on the distribution $F$ we understand the existence of majorants $V$ and $W$ for the tails of $F$ which satisfy condition [U], and that $P = P(\overline{S}_n \ge x;\ B)$ and $\Pi(x) = nV(x)$.

Theorem 12.1.4. Let $E\xi_j = 0$, $j = 1,\ldots,n$, let the averaged distribution $F$ satisfy conditions $[<,<]_U$ and [N] with $\alpha \in [1,2)$, $\beta \in (1,2)$, and let $r = x/y \ge 1$ be fixed. Then the following assertions hold true.
(i) If $W(t) \le c_1 V(t)$ then, for all $n$,
$$P \le c\,\Pi(x)^r, \tag{12.1.11}$$
$$\sup_{n,x:\ \Pi(x)\le v}\ \frac{P(\overline{S}_n \ge x)}{\Pi(x)} \le 1 + \varepsilon(v), \tag{12.1.12}$$
where $\varepsilon(v) \downarrow 0$ as $v \downarrow 0$.

(ii) If $W(t) \ge c_1 V(t)$ for all large enough $t$ then the above inequalities will remain true for all $n$ and $x$ such that
$$nW\Big(\frac{x}{\ln x}\Big) < c_2 < \infty. \tag{12.1.13}$$
(iii) The assertions of parts (i), (ii) of the theorem remain true if, instead of stipulating condition $[<,<]_U$ on the averaged distribution $F$, we assume that each distribution $F_j$, $j = 1,\ldots,n$, satisfies condition $[<,<]_U$ with $\alpha = \alpha_j$, $\beta = \beta_j$ uniformly in $j$ and $n$, where
$$1 < \alpha_* = \min_{j\le n}\alpha_j \le \max_{j\le n}\alpha_j = \alpha^* < 2, \qquad 1 < \beta_* = \min_{j\le n}\beta_j \le \max_{j\le n}\beta_j = \beta^* < 2.$$
In all the assertions where it is assumed that $n \to \infty$, we can replace condition [U] by $[U_\infty]$.

Put
$$\widehat{V}(t) := \max\{V(t),\ W(t)\}, \qquad \widehat{\Pi}(x) = n\widehat{V}(x), \qquad \underline{S}_n = \min_{k\le n} S_k.$$
If in condition $[<,<]_U$ we replace both majorants by $\widehat{V}(t)$ then (12.1.12) will imply

Corollary 12.1.5. Let the preliminary conditions of Theorem 12.1.4 (those stated prior to part (i)) be satisfied. Then
$$\sup_{n,x:\ \widehat{\Pi}(x)\le v}\ \frac{\max\big\{P(\overline{S}_n \ge x),\ P(\underline{S}_n < -x)\big\}}{\widehat{\Pi}(x)} \le 1 + \varepsilon(v), \tag{12.1.14}$$
where $\varepsilon(v) \downarrow 0$ as $v \downarrow 0$.

Let $\check{S}_n := \max_{k\le n}|S_k|$, $F(t) := V(t) + W(t)$. Theorem 12.1.4 also implies the following analogue of Corollary 3.1.3.

Corollary 12.1.6. If the preliminary conditions of Theorem 12.1.4 (those stated prior to part (i)) are met then
$$\sup_{n,x:\ \widehat{\Pi}(x)\le v}\ \frac{P(\check{S}_n \ge x)}{nF(x)} \le 1 + \varepsilon(v). \tag{12.1.15}$$
Remark 12.1.7. The assertions of Theorem 12.1.4 and its corollaries can also be stated in somewhat different terms. The uniformity condition [U] makes it natural to introduce classes $\mathcal{F}$ of distributions that satisfy this condition. A class $\mathcal{F}$ will be characterized by the function $t_\delta$ in conditions [U] and also the constant $c$ in condition [N]. In these terms, the assertion of Theorem 12.1.4 states that the main inequalities (12.1.11), (12.1.12) of the theorem hold uniformly in all distributions $\{F_i\}$ from the class $\mathcal{F}$; moreover, the function $\varepsilon(\cdot)$ in (12.1.12), (12.1.14) is specified by the characteristics of the class $\mathcal{F}$ only.

Note that we could have avoided introducing condition [N]. In that case, however, the quantity $J(t_\delta)$ for some $\delta$ such that $\alpha\pm\delta \in (1,2)$, $\beta\pm\delta \in (1,2)$ ($t_\delta$ is from condition [U]) would appear in the bounds for the probabilities $P$ and $P(\overline{S}_n \ge x)$ and so would make the formulations more complicated. Nevertheless, we will take this path in Chapter 13 (see Theorem 13.1.2), where the quantity $\chi$ that characterizes the 'thinning' rate of the tails $V_i(t)$ is present in the formulations of the main assertions. Observe also that the majorants $V(t)$, $W(t)$ in conditions [N] can be identified on the segment $[0, t_\delta]$ with the tails $F_+(t)$ and $F_-(t)$ respectively.

Remark 12.1.8. As we have already noted, when studying the probabilities of large deviations of $S_n$ and $\overline{S}_n$ as $n \to \infty$, one can use the relaxed version $[U_\infty]$ of conditions [U] described in Remark 12.1.2. In this case (12.1.11) will hold for all large enough $n$ but instead of (12.1.12) one should write
$$\sup_{x:\ \Pi(x)\le v}\ \frac{P(\overline{S}_n \ge x)}{\Pi(x)} \le 1 + \varepsilon(v, n),$$
where $\varepsilon(v,n) \to 0$ as $v \to 0$, $n \to \infty$. The same observation applies to Corollaries 12.1.5 and 12.1.6.

In paper [127] upper bounds were obtained for the distribution of $S_n$ in terms of 'truncated' moments, without any assumptions being made on the existence of regularly varying majorants (such majorants always exist in the case of finite moments but their decay rate will not be the true one). This makes the bounds from [127] more general, in a sense, but also substantially more cumbersome. One cannot derive from them bounds for $S_n$ of the form (12.1.11), (12.1.12).

Proof of Theorem 12.1.4. The proof mostly repeats that of Theorem 3.1.1, but the arguments need to be modified for the triangular array scheme under condition [U]. We will consider in detail only those steps where the fact that the $\xi_j$ are now non-identically distributed requires substantial amendments. As before, the proof is based on the basic inequality (12.1.3), which reduces the problem to bounding the product
$$\max_{k\le n}\prod_{j=1}^k R_j(\mu, y), \qquad R_j(\mu, y) = \int_{-\infty}^{y} e^{\mu t}\,F_j(dt).$$
Since $\ln(1+v) \le v$ for $v > -1$, we have $R_j(\mu,y) = e^{\ln[1+(R_j(\mu,y)-1)]} \le e^{R_j(\mu,y)-1}$, so that
$$\prod_{j=1}^k R_j(\mu,y) \le \exp\Big\{\sum_{j=1}^k \big(R_j(\mu,y)-1\big)\Big\}. \tag{12.1.16}$$
Now we will bound the right-hand side of this inequality for $k = n$. Letting
$$R(\mu,y) := \frac1n\sum_{j=1}^n R_j(\mu,y),$$
we have
$$R(\mu,y) = \int_{-\infty}^{y} e^{\mu t}\,F(dt) = I_1 + I_2 + I_3,$$
where
$$I_1 := \int_{-\infty}^{0}, \qquad I_2 := \int_{0}^{M}, \qquad I_3 := \int_{M}^{y}, \qquad M = M(2\alpha) = \frac{2\alpha}{\mu}. \tag{12.1.17}$$
One can bound the integrals $I_1$, $I_2$ in exactly the same way as in § 3.1 but this time using condition [U]. As in § 3.1, we obtain
$$I_1 = F_-(0) + \mu\int_{-\infty}^{0} t\,F(dt) + \mu\int_0^\infty \big(1-e^{-\mu t}\big) F_-(t)\,dt,$$
where the last integral does not exceed
$$\mu\int_0^\infty W(t)\big(1-e^{-\mu t}\big)\,dt = \mu^2\int_0^\infty W^I(t)\,e^{-\mu t}\,dt, \qquad W^I(t) = \int_t^\infty W(u)\,du.$$
By virtue of conditions [U] and the dominated convergence theorem, we obtain that, as $t \to \infty$,
$$W^I(t) = t\int_1^\infty W(tv)\,dv \sim tW(t)\int_1^\infty v^{-\beta}\,dv = \frac{tW(t)}{\beta-1}$$
uniformly in $n$. Now let $\delta < \beta - 1$. Then, for $t_\delta$ from condition $[U_2]$ and such that
$$W^I(t) \le (1+\delta)\,\frac{tW(t)}{\beta-1} \qquad\text{for } t \ge t_\delta,$$
one has the inequality
$$\mu^2\int_0^\infty W^I(t)\,e^{-\mu t}\,dt \le \mu^2\int_0^{t_\delta} W^I(t)\,dt + \frac{(1+\delta)\mu^2}{\beta-1}\int_{t_\delta}^\infty tW(t)\,e^{-\mu t}\,dt,$$
where, owing to condition [U], as $\mu \to 0$,
$$\mu^2\int_{t_\delta}^\infty tW(t)\,e^{-\mu t}\,dt = W(1/\mu)\int_{\mu t_\delta}^\infty \frac{uW(u/\mu)}{W(1/\mu)}\,e^{-u}\,du \sim W(1/\mu)\int_0^\infty u^{1-\beta}e^{-u}\,du = W(1/\mu)\,\Gamma(2-\beta).$$
Thus, for $\mu \to 0$,
$$I_1 \le F_-(0) + \mu\int_{-\infty}^{0} t\,F(dt) + J^W(t_\delta)\mu^2 + \frac{W(1/\mu)}{\beta-1}\,\Gamma(2-\beta)\big(1+o(1)\big), \tag{12.1.18}$$
where
$$J^W(t) = \int_0^t\Big(\int_u^\infty W(v)\,dv\Big)du = \int_0^\infty \min\{u,t\}\,W(u)\,du.$$
∞ e F(dt) F+ (0) + μ μt
0
M t F(dt) + (eμt − 1 − μt) F(dt),
0
0
where, using an argument similar to that for (3.1.26) (see p. 133), we find that the last integral does not exceed 1 2 2α μ (e − 1)V ∗ (M ), 2α
∗
t
V (t) :=
uV (u) du. 0
Next, for δ such that α + δ < 2, t
−2
∗
V (t) = t
−2
t
−2
tδ
uV (u) du = t 0
1 uV (u) du +
0
J (tδ )t V
vV (tv) dv
tδ /t −2
1 + (1 + δ)V (t) 0
v 1−α−δ dv,
450
Non-identically distributed jumps with infinite second moments t where J V (t) := 0 uV (u) du. Hence ∞ I2 F+ (0) + μ
t F(dt) + J V (tδ ) + cV (1/μ), 0
which yields, together with (12.1.18), that I1 + I2 1 + J(tδ )μ2 + cF(1/μ),
(12.1.19)
where J(t) = J V (t) + J W (t), F(t) = V (t) + W (t). Now we turn to bounding the integral y I3 =
e F(dt) V (M )e μt
2α
y +μ
M
M 2α
(cf. (2.2.9), (2.2.11)). Since V (M )e evaluate the integral I30
cV (1/μ), the main task will be to
(y−M )μ
y := μ
V (t)eμt dt
V (t)e dt = e V (y) μt
μy
0
M
V (y − u/μ) −u e du. V (y)
(12.1.20)
Repeating the corresponding arguments from § 2.2 (see (2.2.13)–(2.2.15)) and using condition [U], we conclude that there exists a function ε(λ) → 0 as λ = μy → ∞ such that I30 eμy V (y)(1 + ε(λ)).
(12.1.21)
As the final result, we obtain the inequality (in which it is convenient to multiply both sides by $n$)

$$n\bigl(R(\mu,y)-1\bigr) \le nJ(t_\delta)\,\mu^2 + cnF(1/\mu) + ne^{\mu y}V(y)\bigl(1+\varepsilon(\lambda)\bigr), \tag{12.1.22}$$

where $\varepsilon(\lambda)\to 0$ as $\lambda\to\infty$.

Now observe that, bounding the integrals $R_j(\mu,y)$ for each of the r.v.'s $\xi_j$ and again using a partition $R_j(\mu,y) = I_{1,j} + I_{2,j} + I_{3,j}$ of the form (12.1.17), we will similarly obtain

$$I_{1,j} = F_{j,-}(0) + \mu\mathbf E(\xi_j;\,\xi_j<0) + \mu\int_0^\infty (1-e^{-\mu t})\,F_{j,-}(t)\,dt,$$

$$I_{2,j} \le F_{j,+}(0) + \mu\mathbf E(\xi_j;\,\xi_j\ge 0) + \int_0^M (e^{\mu t}-1-\mu t)\,F_j(dt), \qquad I_{3,j} = \int_M^y e^{\mu t}\,F_j(dt),$$

and

$$R_j(\mu,y) - 1 \le \mu\int_0^\infty (1-e^{-\mu t})\,F_{j,-}(t)\,dt + \int_0^M (e^{\mu t}-1-\mu t)\,F_j(dt) + \int_M^y e^{\mu t}\,F_j(dt). \tag{12.1.23}$$
In the inequalities (12.1.23) the right-hand sides are non-negative, and hence the sums

$$\sum_{j=1}^k \bigl(R_j(\mu,y)-1\bigr), \qquad k\le n,$$

will not exceed the sum of the right-hand sides of (12.1.23) over all $j\le n$. Now we have already shown that the last sum is bounded by the right-hand side of (12.1.22). It is evident that

$$\max_{k\le n}\sum_{j=1}^k \bigl(R_j(\mu,y)-1\bigr)$$

admits the same upper bound. Therefore (see (12.1.16)),

$$\max_{k\le n}\prod_{j=1}^k R_j(\mu,y) \le \exp\Bigl\{\max_{k\le n}\sum_{j=1}^k \bigl(R_j(\mu,y)-1\bigr)\Bigr\} \le \exp\bigl\{nJ(t_\delta)\,\mu^2 + cnF(1/\mu) + nV(y)e^{\mu y}(1+\varepsilon(\lambda))\bigr\}. \tag{12.1.24}$$

As in § 3.1, let

$$\mu := \frac{r}{y}\,\ln\frac{1}{\Pi(y)}, \qquad \Pi(y) = nV(y).$$

Then, cf. the calculations on p. 133, we obtain

$$\ln P \le -r\ln\frac{r}{\Pi(y)} + r + \varepsilon_1\bigl(\Pi(y)\bigr) + cnW\Bigl(\frac{y}{|\ln\Pi(y)|}\Bigr) + nJ(t_\delta)\,y^{-2}\ln^2\frac{r}{\Pi(y)}, \tag{12.1.25}$$

where $\varepsilon_1(v)\to 0$ as $v\to 0$. If $\Pi(y)\to 0$ then $\Pi(y)\ln^2\Pi(y)$ also tends to zero. Hence the last term on the right-hand side of (12.1.25) will vanish when $\Pi(y)\to 0$, provided that

$$J(t_\delta)\,y^{-2}\,V^{-1}(y) < c, \tag{12.1.26}$$

where we recall that the quantity $t_\delta$ from condition [U] corresponds to a $\delta$ such that $\beta-\delta>1$, $\alpha+\delta<2$. Thus, the last term in (12.1.25) is bounded provided that (12.1.26) holds true
and $\Pi(x)\le 1$. It will converge to zero if (12.1.26) holds true and $\Pi(x)\to 0$. This means that, when condition [N] holds, the last term on the right-hand side of (12.1.25) can be included in the term $\varepsilon_1\bigl(\Pi(y)\bigr)$.

We have obtained for $\ln P$ an inequality that coincides with (3.1.29) (where $V$ and $W$ now have a new meaning). The subsequent argument repeats verbatim that in the proof of Theorem 3.1.1, and hence will be omitted. The theorem is proved.
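The asymptotic relation $\mu^2\int_{t_\delta}^\infty tW(t)e^{-\mu t}\,dt \sim W(1/\mu)\Gamma(2-\beta)$ used above is easy to check numerically. A minimal Python sketch (an illustration only, not part of the proof; the pure power tail $W(t)=t^{-\beta}$ with $\beta=1.5$ and the values of $\mu$, $t_\delta$ are our own illustrative choices):

```python
import math

# Illustration (not part of the proof): for the pure power tail
# W(t) = t^{-beta}, beta in (1, 2), check numerically that
#   mu^2 * int_{t_delta}^infty t W(t) e^{-mu t} dt  ~  W(1/mu) * Gamma(2 - beta)
# as mu -> 0.  Here mu and t_delta are illustrative choices.

def lhs(mu, beta=1.5, t_delta=1.0, nodes=200_000):
    # trapezoidal rule on a log-spaced grid; the integrand decays like e^{-mu t},
    # so truncating the upper limit at 60/mu is harmless
    a, b = math.log(t_delta), math.log(60.0 / mu)
    h = (b - a) / nodes
    total, prev = 0.0, None
    for i in range(nodes + 1):
        t = math.exp(a + i * h)
        y = t ** (2.0 - beta) * math.exp(-mu * t)   # t*W(t)*e^{-mu t} * t (dt = t dlog t)
        if prev is not None:
            total += 0.5 * (prev + y) * h
        prev = y
    return mu ** 2 * total

def rhs(mu, beta=1.5):
    return mu ** beta * math.gamma(2.0 - beta)      # W(1/mu) * Gamma(2 - beta)

ratio = lhs(1e-4) / rhs(1e-4)
print(ratio)   # approaches 1 as mu -> 0
```

For smaller $\mu$ the ratio moves closer to 1, the deficit being the (vanishing) contribution of the range $[0,\mu t_\delta]$.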
12.1.3 Lower bounds for the distributions of the sums $S_n$

In the present subsection we will establish analogues of the lower bounds from Theorems 2.5.1 and 3.3.1. Set $S_n^j := S_n - \xi_j$.

Theorem 12.1.9. Let $K(n)>0$ be an arbitrary sequence, and let

$$Q_n^j(t) := \mathbf P\bigl(S_n^j < -tK(n)\bigr).$$

Then, for $y = x + tK(n)$,

$$\mathbf P(S_n\ge x) \ge \sum_{j=1}^n F_{j,+}(y)\bigl(1 - Q_n^j(t)\bigr) - \frac12\bigl(nF_+(y)\bigr)^2.$$

Proof. Put

$$G_n = \{S_n\ge x\}, \qquad B_j = \{\xi_j < y\}.$$

Then

$$\mathbf P(S_n\ge x) \ge \mathbf P\Bigl(G_n;\,\bigcup_{j=1}^n \overline B_j\Bigr) \ge \sum_{j=1}^n \mathbf P(G_n\overline B_j) - \sum_{i<j\le n}\mathbf P(G_n\overline B_i\overline B_j) \ge \sum_{j=1}^n \mathbf P(G_n\overline B_j) - \frac12\Bigl(\sum_{j=1}^n F_{j,+}(y)\Bigr)^2.$$

For $y = x+tK(n)$ we have

$$\mathbf P(G_n\overline B_j) = \int_y^\infty F_j(du)\,\mathbf P(S_n^j\ge x-u) \ge \mathbf P(S_n^j\ge x-y)\,F_{j,+}(y) = F_{j,+}(y)\bigl(1-Q_n^j(t)\bigr).$$

The theorem is proved.

Recall the notation

$$\widehat V(t) = \max\{V(t),\,W(t)\}, \qquad \widehat\sigma(n) = \widehat V^{(-1)}(1/n). \tag{12.1.27}$$
Theorem 12.1.10. Let the averaged distribution $\mathbf F$ satisfy the conditions $[{<},{<}]_U$, [N] with $\alpha\in[1,2)$, $\beta\in(1,2)$, and let $y = x + t\widehat\sigma(n)$. Then, for $t\to\infty$,

$$\mathbf P(S_n\ge x) \ge nF_+(y)\bigl(1+o(1)\bigr). \tag{12.1.28}$$

If, in addition, the conditions $[{<},{=}]_U$ and $x\gg\widehat\sigma(n)$ are met then

$$\mathbf P(S_n\ge x) \ge nV(x)\bigl(1+o(1)\bigr). \tag{12.1.29}$$

Proof. We will make use of Theorem 12.1.9, putting $K(n) := \widehat\sigma(n)$. Then

$$Q_n^j(t) = \mathbf P(S_n^j < -\widehat x), \qquad \widehat x := t\widehat\sigma(n).$$

Owing to condition [U$_1$], for any fixed $t$ one has

$$n\widehat V(\widehat x) = n\widehat V\bigl(t\widehat\sigma(n)\bigr) \sim t^{-\check\alpha}, \qquad \check\alpha = \min\{\alpha,\beta\}.$$

The right-hand side of the last relation tends to zero as $t\to\infty$, and hence Corollary 12.1.5 will be applicable for large enough $t$, which implies that

$$Q_n^{(j)}(t) \le cn\widehat V(\widehat x) \le c_1t^{-\check\alpha}\to 0.$$

Since, moreover, $nF_+(y)\to 0$ as $t\to\infty$, we obtain (12.1.28). The inequality (12.1.29) evidently follows from (12.1.28) since, when $t\to\infty$, $t = o\bigl(x/\widehat\sigma(n)\bigr)$ and the conditions of the theorem are met, we have $x\sim y$, $F_+(y) = V(y)\sim V(x)$. The theorem is proved.

Example 12.1.11. Let $\xi_i = c_i\zeta_i$, where the $\zeta_i$ are i.i.d. r.v.'s following a common distribution $\mathbf F_\zeta$, $\mathbf E\zeta_i = 0$, $F_{\zeta,+}(t) = V_{(\zeta)}(t) = Lt^{-\alpha}$, $\alpha\in(1,2)$, and $F_{\zeta,-}(t)\le V_{(\zeta)}(t)$ for $t\ge t_1$, for some $t_1<\infty$. Then $V_i(t) = V_{(\zeta)}(t/c_i)$, $W_i(t)\le V_{(\zeta)}(t/c_i)$ for $t\ge c_it_1$. If $c_* := \min_{i\le n}c_i\ge c_->0$ and $c^* := \max_{i\le n}c_i\le c_+<\infty$, where $c_\pm$ do not depend on $n$, then conditions [U] and [N] will clearly be satisfied.

Now let $c_i\downarrow 0$ as $i\to\infty$. Then $\xi_i\stackrel{p}{\to}0$. Condition [U] will evidently be met ($V_i$ and $W_i$ are just power functions for $t\ge c_it_1\to 0$ as $i\to\infty$). Further, for simplicity letting $t_1 = 1$, $c_i\le 1$, we obtain

$$J_i^V(1) = \int_0^1 uV_i(u)\,du = c_i^2\int_0^{1/c_i} vV_{(\zeta)}(v)\,dv \le \frac{c_i^2}{2} + Lc_i^2\int_1^{1/c_i} v^{1-\alpha}\,dv \le \frac{c_i^2}{2} + \frac{Lc_i^\alpha}{2-\alpha} \le bc_i^\alpha, \qquad b := \frac12 + \frac{L}{2-\alpha}.$$

Hence, for $t\ge 1$,

$$V_i(t) = Lt^{-\alpha}c_i^\alpha \ge \frac{L}{b}\,J_i^V(1)\,t^{-2},$$

which proves the inequality (12.1.9) of condition [N]. Thus, conditions [U] and
[N] are satisfied, and so the assertions of Theorems 12.1.4 and 12.1.10 and those of their corollaries will hold true for the sequence $\xi_i = c_i\zeta_i$.

If $c_i\uparrow\infty$ as $i\to\infty$ then conditions [U] and [N] are, generally speaking, not satisfied. In this case, however, one can reduce the problem to the previous one, to a certain extent. Introduce new independent r.v.'s

$$\xi_i^* := \frac{\xi_{n-i+1}}{c_n} = \frac{c_{n-i+1}}{c_n}\,\zeta_{n-i+1}, \qquad i = 1,\dots,n,$$

so that we are again dealing with a representation of the form $\xi_i^* = c_i^*\zeta_i$ with decreasing coefficients $c_i^* = c_{n-i+1}/c_n$, $i = 1,\dots,n$, but this time in a 'triangular array scheme', since the $c_i^*$ depend on $n$. Note that here

$$S_n^* = \sum_{i=1}^n \xi_i^* = \frac{S_n}{c_n}, \qquad \mathbf P(S_n\ge x) = \mathbf P(S_n^*\ge x^*) \quad\text{for}\quad x^* = \frac{x}{c_n}.$$
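The two bounds derived in Example 12.1.11 — $J_i^V(1)\le bc_i^\alpha$ and the condition-[N] inequality $V_i(t)\ge (L/b)J_i^V(1)t^{-2}$ — can also be verified numerically for concrete parameter values. A sketch (assuming, for illustration, $L=1$, $\alpha=1.5$, $t_1=1$, and using the trivial truncation $V_i(u)\le 1$ for the quadrature):

```python
import math

# Numerical check (illustration only) of the bounds in Example 12.1.11:
#   J_i^V(1) = int_0^1 u V_i(u) du <= b c_i^alpha,  b = 1/2 + L/(2 - alpha),
# and the condition-[N] inequality V_i(t) >= (L/b) J_i^V(1) t^{-2} for t >= 1.
# L = 1, alpha = 1.5, t_1 = 1 are illustrative choices.
L, alpha = 1.0, 1.5
b = 0.5 + L / (2.0 - alpha)

def V_i(t, c):
    # majorant of the tail of xi_i = c * zeta_i: V_(zeta)(t/c), capped at 1
    return min(1.0, L * (t / c) ** (-alpha))

def J_i(c, nodes=100_000):
    # trapezoidal rule for int_0^1 u V_i(u) du
    h = 1.0 / nodes
    s = 0.0
    for k in range(nodes + 1):
        u = k * h
        y = u * V_i(u, c) if u > 0 else 0.0
        s += y if 0 < k < nodes else 0.5 * y
    return s * h

checks = []
for c in [0.9, 0.5, 0.1, 0.01]:
    J = J_i(c)
    checks.append(J <= b * c ** alpha)          # J_i^V(1) <= b c^alpha
    checks.append(all(V_i(t, c) >= (L / b) * J * t ** (-2)
                      for t in [1.0, 2.0, 10.0, 100.0]))
print(all(checks))
```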
12.2 Asymptotics of the crossing of an arbitrary remote boundary

For the probabilities of the crossing of linear horizontal boundaries and for the distribution of $S_n$, one can obtain the desired asymptotics from the bounds of the previous section. The next result follows from Theorems 12.1.4 and 12.1.10.

Theorem 12.2.1. Let $\mathbf E\xi_j = 0$ and the averaged distribution $\mathbf F$ satisfy conditions $[{<},{=}]_U$ and [N] with $\alpha\in(1,2)$. Then the following assertions hold true.

(i) If $W(t) < cV(t)$ and $x\gg\sigma(n) = V^{(-1)}(1/n)$ (i.e. $nV(x)\to 0$) then

$$\mathbf P(S_n\ge x) = nV(x)(1+o(1)), \tag{12.2.1}$$
$$\mathbf P(\overline S_n\ge x) = nV(x)(1+o(1)). \tag{12.2.2}$$

(ii) If $W(t) > cV(t)$ as $t\to\infty$ then the relations (12.2.1), (12.2.2) remain true for $n$ and $x$ such that

$$nW\Bigl(\frac{x}{\ln x}\Bigr) < c.$$

The assertion of the theorem could be stated in a uniform version, as in Theorem 3.4.1. Under condition [UR] with $H(n) = n$ (see p. 464), the relation $\mathbf P(S_n\ge x)\sim nV(x)$ was obtained for $x>cn$ in [218].

The assertion of Theorem 12.2.1 can be extended to the case of arbitrary boundaries. For that case, however, imposing conditions on the averaged distributions proves to be insufficient. To make the exposition more compact, we will introduce, as in Chapter 3, the following conditions (cf. p. 138):

[Q] At least one of the following two conditions is met:
[Q$_1$] $W(t)\le cV(t)$, $x\to\infty$ and $nV(x)\to 0$;
[Q$_2$] $x\to\infty$ and $n\widehat V(x/\ln x) < c$, where $\widehat V(t) = \max\{V(t),\,W(t)\}$.
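The 'one big jump' asymptotics of Theorem 12.2.1(i) can be illustrated by simulation. The following sketch (an illustration, not a proof; the centred Pareto jump distribution and all parameter values are our own choices) compares the empirical $\mathbf P(S_n\ge x)$ with $nV(x)$:

```python
import numpy as np

# Monte Carlo illustration (not a proof) of P(S_n >= x) ~ n V(x):
# xi = eta - E eta with P(eta > t) = t^{-alpha}, t >= 1 (Pareto, alpha = 1.5),
# so that E xi = 0 and V(x) = P(xi > x) = (x + 3)^{-alpha}.
rng = np.random.default_rng(0)
alpha, n, x, n_samples = 1.5, 50, 200.0, 100_000

eta = 1.0 + rng.pareto(alpha, size=(n_samples, n))  # P(eta > t) = t^{-alpha}
xi = eta - alpha / (alpha - 1.0)                    # centred: E eta = 3
S_n = xi.sum(axis=1)

empirical = (S_n >= x).mean()
theory = n * (x + alpha / (alpha - 1.0)) ** (-alpha)  # n V(x)
ratio = empirical / theory
print(empirical, theory, ratio)  # ratio of order 1 when nV(x) is small
```

The agreement improves as $x$ grows with $nV(x)\to 0$, exactly as the theorem prescribes.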
Condition [Q] for the averaged distributions was assumed to be satisfied in Theorem 12.2.1(i),(ii).

In this section, we will consider boundaries $\{g(k)\}$ of the same form as in § 3.6 and will assume that

$$\min_{k\le n} g(k) = cx, \qquad c>0, \tag{12.2.3}$$

where $c$ does not depend on $n$ (or on any other triangular array scheme parameter). The class of all boundaries satisfying (12.2.3) will be denoted by $\mathcal G_{x,n}$. As in § 3.6, we will study the asymptotics of the probability $\mathbf P(G_n)$ of the event

$$G_n = \Bigl\{\max_{k\le n}\bigl(S_k - g(k)\bigr)\ge 0\Bigr\}.$$

Theorem 12.2.2. Let conditions $[{<},{=}]_U$ be satisfied for the distributions $\mathbf F_j$ uniformly in $j$ and $n$ (with common $\alpha\in(1,2)$ and $\beta$). Moreover, let $x$, $n$ and the averaged distribution $\mathbf F$ satisfy conditions [Q] and [N]. Then

$$\mathbf P(G_n) = \Bigl[\sum_{j=1}^n V_j\bigl(g_*(j)\bigr)\Bigr](1+o(1)) + O\bigl(n^2V(x)\widehat V(x)\bigr), \tag{12.2.4}$$

where $g_*(j) = \min_{j\le k\le n} g(k)$ and the terms $o(\cdot)$ and $O(\cdot)$ are uniform over the class $\mathcal G_{x,n}$ of boundaries and the class $\mathcal F$ of distributions $\{\mathbf F_j\}$ satisfying conditions [U], [Q] and [N].

It follows from relation (12.2.4) that if condition [Q] is met and

$$\max_{k\le n} g(k) < c_1x \tag{12.2.5}$$

then

$$\mathbf P(G_n) \sim \sum_{j=1}^n V_j\bigl(g_*(j)\bigr). \tag{12.2.6}$$
Proof. The proof of Theorem 12.2.2 is quite similar to that of Theorem 3.6.4. Let, as before,

$$B_j = \{\xi_j < y\}, \qquad B = \bigcap_{j=1}^n B_j, \qquad P = \mathbf P(\overline S_n\ge x;\,B),$$

and assume without loss of generality that $c = 1$ in (12.2.3). Since the averaged distributions also satisfy condition $[{<},{=}]_U$, it follows from Theorem 12.1.4 that

$$\mathbf P(G_nB) \le P \le \bigl(cn\widehat V(x)\bigr)^r.$$

Hence, for $r = 2$,

$$\mathbf P(G_n) = \mathbf P(G_n\overline B) + O\bigl((n\widehat V(x))^2\bigr).$$

Therefore, as in (3.6.23), we obtain

$$\mathbf P(G_n) = \sum_{j=1}^n \mathbf P(G_n;\,\xi_j\ge y) + O\bigl((n\widehat V(x))^2\bigr).$$

The analysis of the terms $\mathbf P(G_n;\,\xi_j\ge y)$ under the new conditions also repeats that from § 3.6, with some insignificant obvious amendments, which can be summarized as follows. Set $V_{(j)}(x) := \sum_{k=1}^j V_k(x)$. Then, instead of (3.6.24), we will have the relation

$$\mathbf P(G_n;\,\xi_j\ge y) = \mathbf P(G_n;\,\xi_j\ge y,\ \overline S_{j-1}<x) + O\bigl(V_{(j)}(x)V_j(x)\bigr).$$

With the same definition of the quantities

$$M_{j,n} := \max_{0\le k\le n-j}\bigl(S_{k+j} - g(k+j) + g_*(j) - S_j\bigr)$$

as in (3.6.25), we will have, instead of (3.6.26), the relation

$$\mathbf P(G_n;\,\xi_j\ge y) = \mathbf P\bigl(S_j + M_{j,n}\ge g_*(j),\ \xi_j\ge y,\ Z_{j,n}<x/2\bigr) + \mathbf P\bigl(S_j + M_{j,n}\ge g_*(j),\ \xi_j\ge y,\ Z_{j,n}\ge x/2\bigr) + O\bigl(V_j(x)V_{(j)}(x)\bigr),$$

where $\overline S_n = \max_{k\le n} S_k$ and

$$Z_{j,n} := S_{j-1} + M_{j,n} \stackrel{d}{=} \overline S_{n-1}.$$

Hence

$$\mathbf P\bigl(\xi_j\ge y,\ Z_{j,n}\ge x/2\bigr) = O\bigl(V_j(x)\,n\widehat V(x)\bigr),$$

and we obtain, cf. the calculations in § 3.6, that

$$\mathbf P(G_n;\,\xi_j\ge y) = \mathbf P\bigl(\xi_j + Z_{j,n}\ge g_*(j),\ Z_{j,n}<x/2\bigr) + O\bigl(V_j(x)\,n\widehat V(x)\bigr), \tag{12.2.7}$$

where the principal part on the right-hand side of (12.2.7) can be written as

$$\mathbf E\bigl[V_j\bigl(g_*(j) - Z_{j,n}\bigr);\ Z_{j,n}<x/2\bigr] = V_j\bigl(g_*(j)\bigr)(1+o(1)) + O\bigl(V_j(x)\,n\widehat V(x)\bigr)$$

(cf. (3.6.30)–(3.6.33)). By virtue of the theorems of the previous section, all the terms $o(\cdot)$ and $O(\cdot)$ that appear in the above argument are uniform over the class $\mathcal G_{x,n}$ of boundaries and the class $\mathcal F$ of distributions $\{\mathbf F_j\}$ satisfying conditions [U], [Q] and [N] (see Remark 12.1.7). Collecting together the above results, we obtain the assertion of the theorem. The theorem is proved.
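The principal term of (12.2.4) is straightforward to evaluate for a concrete boundary, since $g_*(j) = \min_{j\le k\le n}g(k)$ is a suffix minimum of the boundary values. A small sketch (the boundary and the tail majorants $V_j$ here are purely illustrative choices):

```python
import numpy as np

# Evaluating the principal term sum_j V_j(g_*(j)) of (12.2.4) for a concrete
# (illustrative) boundary: g_*(j) = min_{j <= k <= n} g(k) is a suffix minimum.
n, x, alpha = 100, 50.0, 1.5
k = np.arange(1, n + 1)
g = x + np.abs(k - n // 2)                # a V-shaped boundary with min g(k) = x
V = lambda t, j: (1.0 + 0.1 * j) * t ** (-alpha)   # illustrative majorants V_j

g_star = np.minimum.accumulate(g[::-1])[::-1]      # suffix minima g_*(j)
principal = sum(V(g_star[j - 1], j) for j in k)
print(g_star.min(), principal)
```

Note that $g_*(\cdot)$ is non-decreasing and lies below $g(\cdot)$, in accordance with its definition.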
12.3 Asymptotics of the probability of the crossing of an arbitrary remote boundary on an unbounded time interval. Bounds for the first crossing time

In this section we will deal with boundaries $\{g(k)\}$ of the form

$$g(k) = x + g_k, \qquad k = 1,2,\dots, \tag{12.3.1}$$

where the infinite sequence $\{g_k;\,k\ge 1\}$ lies between two linear functions:

$$c_1ak - p_1x \le g_k \le c_2ak + p_2x, \qquad p_1<1, \quad k = 1,2,\dots \tag{12.3.2}$$
We assume that the constants $0<c_1\le c_2<\infty$ and $p_i>0$, $i = 1,2$, do not depend on the parameter of the triangular array scheme, while the parameter $a\in(0,a_0)$, $a_0 = \mathrm{const}<\infty$, can tend to 0. Recall that in our triangular array scheme both the distributions $\{\mathbf F_j\}$ and the boundary $\{g(k)\}$ can depend on the parameter of the scheme, so that, generally speaking, the sequence $\{g_k\}$ and also the value of $a$ will depend on that parameter. In particular, if the parameter of the scheme coincides with $n$ then we have $a = a(n)$, whereas the constants $c_i$ and $p_i$ in (12.3.2) will not depend on $n$. The parameter of the scheme can, of course, be different: thus, it could be the parameter $a\to 0$, which we identify with the value $-\mathbf E\xi_k = a\to 0$ in the problem on transient phenomena for the distribution of $\overline S_n$ when $\mathbf E\xi_k<0$.

If the boundary is such that

$$g_n\le bx, \tag{12.3.3}$$

then the first term on the right-hand side of assertion (12.2.4) of Theorem 12.2.2 will be minorated by the function $cnV(x)$ and hence will dominate under condition [Q]. Therefore, under the above conditions, it will follow from Theorem 12.2.2 that

$$\mathbf P\Bigl(\max_{k\le n}(S_k - g_k)\ge x\Bigr) \sim \sum_{j=1}^n V_j(x+g_{j,n}), \tag{12.3.4}$$

where $g_{j,n} := \min_{j\le k\le n} g_k$. Now, the inequality (12.3.3) means, together with (12.3.2), that $c_1an\le(b+p_1)x$. This presents one more (in addition to condition [Q]) upper constraint on $n$. The question of what will happen for $n>(b+p_1)x/c_1a$ and, in particular, in the case $n = \infty$, remains open.

To answer this question, we will need, as in § 3.6, bounds for the first crossing time of the boundary $\{g(k)\}$ that are similar to the results of § 3.2. For this, we will have to impose some homogeneous domination
conditions on the tails $\{V_j\}$. Let

$$V_{(1)}(x) := \frac1{n_1}\sum_{j=1}^{n_1} V_j(x), \qquad W_{(1)}(x) := \frac1{n_1}\sum_{j=1}^{n_1} W_j(x), \qquad n_1 := \frac{x}{a}, \tag{12.3.5}$$

where we assume for simplicity that $n_1$ is an integer. The above-mentioned homogeneity condition has the following form:

[H] For $n_k = 2^{k-1}n_1$, all $k = 1,2,\dots$ and $t>0$,

$$c_{(1)}V_{(1)}(t) \le \frac1{n_k}\sum_{j=n_k}^{2n_k} V_j(t) \le c_{(2)}V_{(1)}(t), \qquad c_{(1)}W_{(1)}(t) \le \frac1{n_k}\sum_{j=n_k}^{2n_k} W_j(t) \le c_{(2)}W_{(1)}(t).$$

Let

$$\overline S_n(a) := \max_{k\le n}(S_k - ak), \qquad \eta(x,a) := \inf\{k:\ S_k - ak\ge x\},$$

with $\eta(x,a) = \infty$ on the event $\{\overline S_\infty(a)<x\}$, and

$$B_j(v) := \{\xi_j < y + vaj\}, \qquad v>0, \qquad B(v) := \bigcap_{j=1}^n B_j(v).$$
The following theorem bounds probabilities of the form $\mathbf P\bigl(\overline S_n(a)\ge x;\,B(v)\bigr)$, $\mathbf P\bigl(\overline S_n(a)\ge x\bigr)$ and $\mathbf P\bigl(\infty>\eta(x,a)\ge xt/a\bigr)$. For $n\le n_1\equiv x/a$, these bounds can be easily obtained from Theorem 12.1.4, since $\{\overline S_n\ge 2x\}\subset\{\overline S_n(a)\ge x\}\subset\{\overline S_n\ge x\}$. So it will be assumed below that $n\ge n_1$.

Theorem 12.3.1. (i) Let the averaged distribution

$$\mathbf F_{(1)} := \frac1{n_1}\sum_{j=1}^{n_1}\mathbf F_j$$

with $n_1 = x/a$ satisfy conditions $[{<},{<}]_U$, [Q] and [N], with $n$ replaced by $n_1$ in the last two and with $\alpha\in(1,2)$. Furthermore, let condition [H] be met. Then, for $v\le 1/4r$, $r = x/y$, one has the inequality

$$\mathbf P\bigl(\overline S_n(a)\ge x;\ B(v)\bigr) \le \bigl[cn_1V_{(1)}(x)\bigr]^{r_0}, \tag{12.3.6}$$

where $r_0 = r/(1+vr)\ge 4r/5$ and $r>5/4$. If $a\ge a_1 = \mathrm{const}>0$ then one can assume without loss of generality that condition [Q$_1$] is satisfied with $n$ replaced in it by $n_1$.
12.3 Crossing of a boundary on an unbounded time interval

The inequality (12.3.6), with $V_{(1)}(x)$ on its right-hand side replaced by

$$\widehat V_{(1)}(x) := \max\bigl\{V_{(1)}(x),\,W_{(1)}(x)\bigr\},$$

holds true for any value of $a$ without the assumption that condition [Q] is satisfied.

(ii) Under the conditions of part (i) we have

$$\mathbf P\bigl(\overline S_n(a)\ge x\bigr) \le cn_1V_{(1)}(x), \tag{12.3.7}$$

and, for values of $t$ that are bounded or grow slowly enough,

$$\mathbf P\Bigl(\infty>\eta(x,a)\ge\frac{xt}{a}\Bigr) \le cn_1V_{(1)}(x)\,t^{1-\alpha}. \tag{12.3.8}$$

If $t\to\infty$ together with $x$, and we assume nothing about its growth rate, then the inequality (12.3.8) remains true provided that the exponent $1-\alpha$ in it is replaced by $1-\alpha+\varepsilon$ for any fixed $\varepsilon>0$.

The assertions stated in part (i) after the inequality (12.3.6) hold true for the inequalities (12.3.7), (12.3.8) as well.

All the above inequalities are uniform in $a$ and over the classes of distributions $\{\mathbf F_j\}$ and boundaries $\{g(k)\}$ that satisfy the uniformity conditions from part (i). In other words, the constants $c$ in (12.3.6)–(12.3.8) depend only on the parameters that characterize the classes of $\{\mathbf F_j\}$ and $\{g(k)\}$.

Proof. The proof of Theorem 12.3.1 completely repeats that of Theorem 3.2.1, but under more general conditions. One just needs to use condition [H]. In the proof of the theorem, this property is used when bounding the maxima of sums on the intervals $[n_k, 2n_k]$. References to Theorem 3.1.1 in the proof of Theorem 3.2.1 should be replaced by references to Theorem 12.1.4 and its corollaries. Note that, owing to condition [H], conditions [U], [Q] and [N] of Theorem 12.1.4 will be satisfied on the intervals $[0,n_k]$, $[n_k,2n_k]$ with $n_k = n_12^{k-1}$, $k = 1,2,\dots$ Consider, for instance, condition [N] on the interval $[0,n_k]$, i.e. for $n = n_k$. By virtue of [H] and condition [N] on $[0,n_1]$, we have

$$J^V(t_\delta) = \int_0^{t_\delta} uV(u)\,du \le c_1\int_0^{t_\delta} uV_{(1)}(u)\,du \le c_2V_{(1)}(t_\delta) \le c_3V(t_\delta).$$

A similar inequality holds for $J^W$. This demonstrates that condition [N] holds on $[0,n_k]$. The theorem is proved.
Corollary 12.3.2. Let the conditions of Theorem 12.3.1(i) except for [Q] be met. Then the inequalities (12.3.7), (12.3.8) will hold true provided that we replace $V_{(1)}$ on their right-hand sides by $\widehat V_{(1)}$.

The assertion will follow in an obvious way from Theorem 12.3.1(i) if one uses the common majorant $\widehat V_{(1)}$ for both tails. Then condition [Q] will always be satisfied (one does not need to assume that $n_1\widehat V_{(1)}(x)$ is small; if it were not then the inequalities in question would become meaningless).

Let $\eta_g(x) := \min\{k:\ S_k - g_k\ge x\}$.

Corollary 12.3.3. Let the conditions of Theorem 12.3.1(i) and conditions (12.3.1), (12.3.2) be met. Then, for values of $t$ that are bounded or grow slowly enough,

$$\mathbf P\Bigl(\infty>\eta_g(x)\ge\frac{xt}{a}\Bigr) \le \frac{cxV_{(1)}(x)}{a}\,t^{1-\alpha}. \tag{12.3.9}$$

If $t\to\infty$ together with $x$ and we assume nothing about its growth rate then the inequality (12.3.9) remains true provided that the exponent $1-\alpha$ in it is replaced by $1-\alpha+\varepsilon$ for any fixed $\varepsilon>0$.

Proof. The assertions of the corollary follow from Theorem 12.3.1 and the inequality (see (12.3.2))

$$\mathbf P\Bigl(\infty>\eta_g(x)\ge\frac{xt}{a}\Bigr) \le \mathbf P\Bigl(\infty>\eta\bigl(x(1-p_1),\,c_1a\bigr)\ge\frac{xt}{a}\Bigr).$$

It only remains to notice that if we replace $x$ and $a$ in the bound (12.3.8) by $x(1-p_1)$ and $c_1a$ respectively then the right-hand side of the bound will change by an asymptotically constant factor.

Put

$$g_k^* := \min_{j\ge k} g_j = g_{k,\infty}$$

and observe that $g_k^*\uparrow$, $g_k^*\le g_{k,n}\le g_k$, and (12.3.2) implies that

$$c_1ak - p_1x \le g_k^* \le c_2ak + p_2x. \tag{12.3.10}$$
P sup(Sk − gk ) x =
k0
∞
as
x → ∞.
(12.3.11)
Vj (x +
gj∗ )
(1 + o(1)),
(12.3.12)
j=1
where the term o(·) is uniform in a and over the classes of the distributions {Fj } and boundaries {g(k)} satisfying the conditions of the theorem.
Proof. For an arbitrary fixed $t\ge 1$ and

$$T := \frac1{c_1}\,(c_2t + p_1 + p_2) > t,$$

put

$$n := \frac{xt}{a}\equiv n_1t, \qquad N := \frac{xT}{a}\equiv n_1T.$$

It follows from condition (12.3.2) that, for $j\le n$, one has

$$g_{j,N} = g_{j,\infty} = g_j^*. \tag{12.3.13}$$

Further,

$$\mathbf P\Bigl(\sup_{k\ge 0}(S_k - g_k)\ge x\Bigr) = \mathbf P\bigl(\eta_g(x)\le N\bigr) + \mathbf P\bigl(\infty>\eta_g(x)>N\bigr), \tag{12.3.14}$$

where, by virtue of Theorem 12.2.2, we have the approximation (12.2.4) for the first term on the right-hand side of (12.3.14). This implies (see also (12.3.13) and [H]) that

$$\mathbf P\bigl(\eta_g(x)\le N\bigr) = \Bigl[\sum_{j=1}^n V_j(x+g_j^*) + \sum_{j=n+1}^N V_j(x+g_{j,N})\Bigr](1+o(1)) + O\bigl(N\widehat V_{(1)}(x)V_{(1)}(x)\bigr), \tag{12.3.15}$$

where the remainder term $O(\cdot)$ is $o\bigl(xV_{(1)}(x)/a\bigr)$ owing to condition (12.3.11). Since $g_{j,N}\ge g_j^*$, we obtain from (12.3.14), (12.3.15) and Corollary 12.3.3 that

$$\mathbf P\Bigl(\sup_{k\ge 0}(S_k - g_k)\ge x\Bigr) \le \Bigl[\sum_{j=1}^\infty V_j(x+g_j^*)\Bigr](1+o(1)) + o\Bigl(\frac{x}{a}\,\widehat V_{(1)}(x)\Bigr) + \frac{cx\widehat V_{(1)}(x)}{a}\,T^{1-\alpha}. \tag{12.3.16}$$

It also follows from (12.3.14) and (12.3.15) that

$$\mathbf P\Bigl(\sup_{k\ge 0}(S_k - g_k)\ge x\Bigr) \ge \Bigl[\sum_{j=1}^n V_j(x+g_j^*)\Bigr](1+o(1)) + o\Bigl(\frac{x}{a}\,\widehat V_{(1)}(x)\Bigr). \tag{12.3.17}$$

Therefore, in order to prove (12.3.12) it suffices to verify that

$$\sum_{j=1}^\infty V_j(x+g_j^*) > c\,\frac{x}{a}\,V_{(1)}(x), \tag{12.3.18}$$

$$\sum_{j=n+1}^\infty V_j(x+g_j^*) < c\,\frac{x}{a}\,V_{(1)}(x)\,t^{1-\alpha}, \tag{12.3.19}$$

i.e. that the sum on the left-hand side of (12.3.19) can be made, by choosing a suitable $t$, arbitrarily small compared with the left-hand side of (12.3.18), uniformly over the above-mentioned classes of boundaries and distributions. Since
by choosing a suitable $t$ (or, equivalently, $T$) the last term on the right-hand side of (12.3.16) can also be made arbitrarily small relative to $\sum_{j=1}^\infty V_j(x+g_j^*)$, while the second to last term possesses this property by virtue of (12.3.18), we see that (12.3.12) follows from (12.3.16), (12.3.17).

We now prove (12.3.18). This relation follows from the properties of the sequence $g_j^*$ and the inequality (we use here the right-hand relation in (12.3.10))

$$\sum_{j=1}^\infty V_j(x+g_j^*) > \sum_{j=1}^{n_1} V_j(x+g_{n_1}^*) \ge \sum_{j=1}^{n_1} V_j\bigl((1+c_2+p_2)x\bigr) \ge \frac{cx}{a}\,V_{(1)}(x).$$

Next we will prove (12.3.19). Owing to (12.3.10) and [H],

$$\sum_{j=n+1}^\infty V_j(x+g_j^*) \le \sum_{j=n+1}^\infty V_j\bigl((1-p_1)x + c_1aj\bigr) = \Bigl[\sum_{j=n+1}^{2n} + \sum_{j=2n+1}^{4n} + \cdots\Bigr] \le \frac{c_{(2)}xt}{a}\sum_{k=0}^\infty 2^kV_{(1)}\bigl((1-p_1)x + 2^kc_1tx\bigr).$$

Now by condition [U] the function $V_{(1)}(x)$ behaves like an r.v.f., so that

$$\sum_{k=0}^\infty 2^kV_{(1)}\bigl((1-p_1)x + 2^kc_1tx\bigr) \sim cV_{(1)}\bigl((1-p_1+c_1t)x\bigr).$$

(To obtain a more formal proof of this relation, consider, for $n_1 = x/a$, the sum

$$\frac1{n_1}\sum_{j=1}^{n_1}\sum_{k=0}^\infty 2^kV_j\bigl((1-p_1)x + 2^kc_1tx\bigr)$$

and for each $j$ make use of condition [U].) From these relations we find that

$$\sum_{j=n+1}^\infty V_j(x+g_j^*) \le \frac{cxtV_{(1)}(x)}{a(1-p_1+c_1t)^\alpha}.$$

This proves (12.3.19). The required assertions, and hence Theorem 12.3.4, are proved.

If we strengthen somewhat the homogeneity conditions for the tails and boundaries then we can obtain a simpler form for the right-hand side of (12.3.12). Introduce a new condition
[H$_\Delta$] Let condition [H] be met and, for any fixed $\Delta>0$ and

$$n_\Delta := \Bigl\lfloor\frac{x\Delta}{a}\Bigr\rfloor = n_1\Delta,$$

let there exist a number $g>0$ and majorants $V_j$, $W_j$ for $\mathbf F_j$, $j\ge 1$, such that, as $x\to\infty$, the following relations hold uniformly in $k$:

$$\frac1{n_\Delta}\sum_{j=kn_\Delta+1}^{(k+1)n_\Delta} V_j(x) \sim V_{(1)}(x), \tag{12.3.20}$$

$$\frac1{n_\Delta}\sum_{j=kn_\Delta+1}^{(k+1)n_\Delta} W_j(x) \le cW_{(1)}(x), \tag{12.3.21}$$

$$\frac1{n_\Delta}\bigl(g_{(k+1)n_\Delta} - g_{kn_\Delta}\bigr) \sim ga. \tag{12.3.22}$$
Corollary 12.3.5. Suppose that the averaged distribution

$$\mathbf F_{(1)} = \frac1{n_1}\sum_{j=1}^{n_1}\mathbf F_j$$

satisfies conditions $[{<},{=}]_U$, $\alpha\in(1,2)$, [N] and [Q] with $n = n_1$. Moreover, let condition [H$_\Delta$] be met and, for definiteness, $g_1 = o(x)$. Then

$$\mathbf P\Bigl(\sup_{k\ge 0}(S_k - g_k)\ge x\Bigr) \sim \frac1{ga}\int_x^\infty V_{(1)}(u)\,du \sim \frac{xV_{(1)}(x)}{ga(\alpha-1)}. \tag{12.3.23}$$
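For a pure power tail the two expressions in (12.3.23) agree exactly, since $\int_x^\infty u^{-\alpha}\,du = x^{1-\alpha}/(\alpha-1)$. A quick numerical confirmation (the values of $\alpha$, $g$, $a$, $x$ below are illustrative):

```python
import math

# For V_(1)(u) = u^{-alpha} the two expressions in (12.3.23),
#   (1/(g a)) * int_x^infty V_(1)(u) du   and   x V_(1)(x) / (g a (alpha - 1)),
# coincide exactly; here we confirm this by numerical integration
# (alpha, g, a, x are illustrative values).
alpha, g, a, x = 1.5, 2.0, 0.1, 100.0

def integral_tail(x0, nodes=400_000, cut=1e9):
    # trapezoidal rule on a log grid for int_{x0}^{cut} u^{-alpha} du
    lo, hi = math.log(x0), math.log(cut)
    h = (hi - lo) / nodes
    s = 0.0
    for i in range(nodes + 1):
        u = math.exp(lo + i * h)
        y = u ** (1.0 - alpha)            # u^{-alpha} * u, since du = u d(log u)
        s += y if 0 < i < nodes else 0.5 * y
    return s * h

lhs = integral_tail(x) / (g * a)
rhs = x * x ** (-alpha) / (g * a * (alpha - 1.0))
print(lhs, rhs)   # nearly equal (truncation at `cut` costs O(cut^{1-alpha}))
```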
Note that if the boundary $\{g_k\}$ increases faster than linearly then condition (12.3.2) will not be satisfied and the asymptotics of the left-hand side of (12.3.23) could be different (cf. Theorem 3.6.7).

Proof of Corollary 12.3.5. We will begin by observing that the relations (12.3.22) and $g_1 = o(x)$ also hold for the sequence $\{g_j^*\}$. Further, owing to (12.3.20), (12.3.22), we obtain that, as $x\to\infty$,

$$\sum_{j=1}^\infty V_j(x+g_j^*) = \sum_{k=0}^\infty\ \sum_{j=kn_\Delta+1}^{(k+1)n_\Delta} V_j(x+g_j^*) \le \sum_{k=0}^\infty\ \sum_{j=kn_\Delta+1}^{(k+1)n_\Delta} V_j(x+g_1^*+n_\Delta kga) \sim n_\Delta\sum_{k=0}^\infty V_{(1)}(x+g_1^*+n_\Delta kga)$$
$$= \frac{x\Delta}{a}\sum_{k=0}^\infty V_{(1)}\bigl((1+o(1)+\Delta kg)x\bigr) \sim \frac{x\Delta}{a}\,V_{(1)}(x)\sum_{k=0}^\infty \frac{1}{(1+o(1)+\Delta kg)^\alpha}$$
$$= \frac{xV_{(1)}(x)}{ga}\bigl(1+\varepsilon(\Delta)\bigr)\int_0^\infty\frac{dv}{(1+v)^\alpha} = \bigl(1+\varepsilon(\Delta)\bigr)\,\frac{xV_{(1)}(x)}{ga(\alpha-1)}, \tag{12.3.24}$$

where $\varepsilon(\Delta)\to 0$ as $\Delta\to 0$. A similar converse asymptotic inequality holds as well. As the left-hand side of (12.3.24) does not depend on $\Delta$, we infer that

$$\sum_{j=1}^\infty V_j(x+g_j^*) \sim \frac{xV_{(1)}(x)}{ga(\alpha-1)}. \tag{12.3.25}$$
12.4 Convergence in the triangular array scheme of random walks with non-identically distributed jumps to stable processes 12.4.1 A theorem on convergence to a stable law In this subsection we will establish the convergence of the distributions of Sn /b(n) under a suitable scaling b(n) to a stable law under certain (not the most general) conditions; this will be used in the next section when studying transient phenomena. These conditions could be referred to as ‘uniform regular variation’ and formulated as follows (recall our convention that when we want to emphasize the (n) fact that the averaged distribution F depends on n we write F (n) , F+ ). [UR] There exist an increasing r.v.f. H(n) = nh Lh (n) → ∞
as
n→∞
(Lh is an s.v.f.) that satisfies H(n) n and a fixed r.v.f. G(t) = t−α L(t) (where α ∈ (1, 2) and L is an s.v.f. that does not depend on n) such that the following two conditions, (1) and (2), are met. (1) nF (n) (t) = 1. t→∞, n→∞ H(n)G(t) lim
In other words, for any given δ > 0 there exist nδ , tδ < ∞ for which nF (n) (t) − 1 < δ. sup nnδ , ttδ H(n)G(t)
(12.4.1)
12.4 Convergence of random walks to stable processes
465
(2) There exists the limit (n)
nF+ (t) = ρ+ ∈ [0, 1]. t→∞, n→∞ H(n)G(t) lim
(12.4.2)
Observe that, for $\rho_+>0$, condition [UR] implies condition [U] of § 12.1. Indeed, by virtue of the properties of r.v.f.'s, for any given $[c_1,c_2]$ and $\delta>0$ there exists a $t_\delta<\infty$ such that

$$\left|\frac{v^\alpha G(vt)}{G(t)} - 1\right| \le \delta \qquad\text{for all}\quad t\ge t_\delta,\ v\in[c_1,c_2], \tag{12.4.3}$$

and such that

$$(1-\delta)\min\{v^\delta,v^{-\delta}\} < \frac{v^\alpha G(vt)}{G(t)} < (1+\delta)\max\{v^\delta,v^{-\delta}\} \tag{12.4.4}$$

for all $t\ge t_\delta$ and $vt\ge t_\delta$ (see Theorem 1.1.2 and (1.1.23)). Therefore, from (12.4.1)–(12.4.3) with $\rho_+>0$,

$$\frac{F_+^{(n)}(tv)\,v^\alpha}{F_+^{(n)}(t)} \sim \frac{G(tv)\,v^\alpha}{G(t)} \sim 1,$$

which, together with the uniformity of these relations, means that [U$_1$] is satisfied. Condition [U$_2$] is verified in a similar way, with the help of (12.4.1)–(12.4.4).

Condition [UR] also implies the non-degeneracy condition [N] of § 12.1, but only when $H(n)$ grows fast enough. Indeed, as we have already noted, a sufficient condition for (12.1.9) has the form $F_+(x)\ge cx^{-2}$ in the range $nF_+(x)<1$. Under our present conditions,

$$F_+(x) \sim \frac{H(n)}{n}\,\rho_+G(x) = \rho_+x^{-2}\,\frac{H(n)}{n}\,x^2G(x). \tag{12.4.5}$$

Let

$$b(n) := G^{(-1)}\Bigl(\frac1{H(n)}\Bigr) = n^{h/\alpha}L_b(n), \qquad L_b \text{ an s.v.f.} \tag{12.4.6}$$

Then, on the boundary of the range $nF_+(x)<1$, the deviations $x$ will have the form $x = vb(n)$. (For such an $x$ we have $nF_+(x)\sim H(n)\rho_+G\bigl(vb(n)\bigr)\sim\rho_+v^{-\alpha}$.) Therefore, for the factor following $x^{-2}$ in (12.4.5), one has at the point $x = vb(n)$ that

$$\frac{H(n)}{n}\,x^2G(x) \sim \frac{v^2b^2(n)\,v^{-\alpha}}{n}\to\infty \tag{12.4.7}$$

for $b^2(n)\gg n$ (which is always true for $h>\alpha/2$). Owing to (12.4.5), this means that $F_+(x)\gg x^{-2}$ for $x>vb(n)$, as was to be demonstrated.

We also need an upper restriction on the tails $\mathbf F_j$. Namely, we will assume that the following condition of 'negligibility' of the summands $\xi_j/b(n)$ is met.
[S] For any $\varepsilon>0$, as $n\to\infty$,

$$\max_{j\le n}\mathbf P\bigl(|\xi_j|>\varepsilon b(n)\bigr)\to 0$$

(or, equivalently, $\max_{j\le n}F_j\bigl(\varepsilon b(n)\bigr)\to 0$).

Now we can formulate our theorem on convergence to a stable law. As in § 1.5, put

$$\zeta_n := \frac{S_n}{b(n)},$$

where $b(n)$ is defined in (12.4.6).

Theorem 12.4.1. Assume that $\mathbf E\xi_j = 0$, that the averaged tails $F^{(n)}(\cdot)$ satisfy condition [UR] with $\alpha\in(1,2)$, and that condition [S] is satisfied. Then, as $n\to\infty$ and for $b(n)\gg\sqrt n$, we have

$$\zeta_n\Rightarrow\zeta^{(\alpha,\rho)}, \qquad \rho = 2\rho_+-1,$$

where $\zeta^{(\alpha,\rho)}$ follows the stable distribution $\mathbf F_{\alpha,\rho}$ with ch.f. $f_{(\alpha,\rho)}(\cdot)$ defined in Theorem 1.5.1 (see (1.5.8), (1.5.9)).

Remark 12.4.2. If $b(n)\sim c\sqrt n$ as $n\to\infty$ then, as will be seen from the proof, under all the other conditions of the theorem, the (infinitely divisible) distribution that is limiting for $\zeta_n$ will contain, along with $\mathbf F_{(\alpha,\rho)}$, a normal component. If $b(n) = o(\sqrt n)$ then the limiting distribution for the scaled sums $S_n$ will be normal (under the scaling $\sqrt n$). Similar assertions undoubtedly hold in the cases $\alpha\in(0,1)$, $\alpha = 1$, $\alpha = 2$ as well, but we will not dwell on them in the present chapter.

Observe that all the conditions of the theorem except [S] refer to the averaged distributions, in the same way as in the Lindeberg condition in the central limit theorem for the case of finite variances.

Example 12.4.3. An analogue of Example 12.1.3. Let the $\zeta_i$ be i.i.d. r.v.'s that follow a common symmetric distribution having fixed regularly varying tails $G^\zeta_\pm(t)$, $G^\zeta(t) = G^\zeta_+(t) + G^\zeta_-(t)$, with exponent $\alpha\in(1,2)$, and let

$$\xi_i = \begin{cases}\zeta_i & \text{with probability } \delta(n),\\ 0 & \text{with probability } 1-\delta(n),\end{cases}$$

where $\delta(n)\to 0$ as $n\to\infty$. It is evident that here $F(t) = \delta(n)G^\zeta(t)$ and that all the conditions of Theorem 12.4.1 will be satisfied provided that the function $H(n) = n\delta(n)\to\infty$ as $n\to\infty$ is such that $b(n)\gg\sqrt n$.

Example 12.4.4. An analogue of Example 12.1.11 under the condition $[{=},{=}]$ with $W_\zeta(t) = cV_{(\zeta)}(t)$ for $t\ge t_1$. Here $\xi_i = c_i\zeta_i$, $V_i(t) = V_{(\zeta)}(t/c_i)$ and
$W_i(t) = cV_{(\zeta)}(t/c_i)$ for $t\ge t_1$, where $V_{(\zeta)}(t) = Lt^{-\alpha}$ for $t\ge t_0>0$. This means that, for $t\ge t_0\max_{i\le n}c_i$,

$$nF_+(t) = \sum_{i=1}^n V_{(\zeta)}(t/c_i) = V_{(\zeta)}(t)\sum_{i=1}^n c_i^\alpha, \qquad nF_-(t) = cV_{(\zeta)}(t)\sum_{i=1}^n c_i^\alpha.$$

Put $H(n) := \sum_{i=1}^n c_i^\alpha$. Condition [UR] will clearly be satisfied provided that $H(n)\to\infty$ as $n\to\infty$. For simplicity let $c_i = i^{-\gamma}$, $\gamma>0$. Then, for $\gamma<1/\alpha$, as $n\to\infty$,

$$H(n)\sim n^hL_h \qquad\text{with}\quad h = 1-\alpha\gamma,\ L_h = (1-\alpha\gamma)^{-1},$$

and, furthermore, $G(t) = (1+c)V_{(\zeta)}(t)$,

$$b(n) = G^{(-1)}\Bigl(\frac1{H(n)}\Bigr) \sim bn^{1/\alpha-\gamma}, \qquad b = \left(\frac{(1+c)L}{1-\alpha\gamma}\right)^{1/\alpha}.$$

Since $b(n)\to\infty$ as $n\to\infty$, the above relations imply that condition [S] will be met. One can easily verify that the second condition in [UR] holds with $\rho_+ = 1/(1+c)$. Thus, the r.v.'s $\xi_i$ in Example 12.1.11 with $c_i = i^{-\gamma}$ and $\gamma<1/\alpha-1/2$ satisfy the conditions of Theorem 12.4.1. The case when $c_i\uparrow$ as $i\to\infty$ (see Example 12.1.11) can be dealt with in a similar way.

Proof of Theorem 12.4.1. There are two ways to prove the convergence of the distributions of $\zeta_n$. The first consists in modifying the argument in the proof of Theorem 1.5.1. Owing to the negligibility of the summands $\xi_j/b(n)$ (condition [S]), the problem reduces to proving the convergence

$$\sum_{j=1}^n\left[f_j\Bigl(\frac{\lambda}{b(n)}\Bigr) - 1\right] \to \ln f_{(\alpha,\rho)}(\lambda),$$

where $f_j$ is the ch.f. of $\xi_j$; observe that

$$\sum_{j=1}^n\bigl[f_j(\mu) - 1\bigr] = n\int\bigl(e^{i\mu t}-1\bigr)\mathbf F(dt).$$

That is, the problem reduces to studying the asymptotics of $\int e^{i\mu t}\,n\mathbf F(dt)$ or, which is essentially the same, the asymptotics of $\int e^{i\mu t}\,\mathbf G(dt)$ as $\mu\to 0$ (where $\mathbf G$ is the distribution with tails $\rho_\pm G(t)$, $G(t)$ being the function from condition [UR]), which is what we did in the proof of the above-mentioned Theorem 1.5.1.

We will use the second way, which is based on a general theorem on convergence to infinitely divisible laws (see e.g. § 19 of [130]). If we fix the limiting distribution in this theorem to be $\mathbf F_{\alpha,\rho}$, $\alpha\in(1,2)$, then the theorem will take the following form.
Let $\mathbf E\xi_j = 0$ and condition [S] be satisfied. Then, in order to have the convergence

$$\zeta_n\Rightarrow\zeta^{(\alpha,\rho)} \qquad\text{as}\quad n\to\infty,$$

it is necessary and sufficient that the following two conditions are met:

(1) for any fixed $t>0$,

$$nF_\pm\bigl(tb(n)\bigr)\to\rho_\pm t^{-\alpha}, \qquad \rho = 2\rho_+-1 = 1-2\rho_-; \tag{12.4.8}$$

(2)

$$\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\ n\int_{|t|<\varepsilon} t^2\,\mathbf F\bigl(dt\,b(n)\bigr) = 0. \tag{12.4.9}$$

To prove Theorem 12.4.1, it suffices to verify that condition [UR] implies conditions (12.4.8), (12.4.9). That (12.4.8) holds is next to obvious: for each fixed $t$, as $n\to\infty$,

$$nF_\pm\bigl(tb(n)\bigr) \sim H(n)\rho_\pm G\bigl(tb(n)\bigr) \sim \rho_\pm t^{-\alpha}H(n)G\bigl(b(n)\bigr) \sim \rho_\pm t^{-\alpha}$$

(if $\rho_\pm = 0$ then $nF_\pm\bigl(tb(n)\bigr)\to 0$). To verify (12.4.9), first consider

$$I_+ := n\int_0^\varepsilon t^2\,\mathbf F\bigl(dt\,b(n)\bigr) = -\frac{n}{b^2(n)}\int_0^{\varepsilon b(n)} u^2\,dF_+(u) \le \frac{2n}{b^2(n)}\int_0^{\varepsilon b(n)} uF_+(u)\,du.$$

Let $\rho_+>0$ and let $t_1$, $n_1$ be such that, owing to condition [UR], we have the inequality $F_+(u) < 2\rho_+H(n)n^{-1}G(u)$ for $u\ge t_1$, $n\ge n_1$. Then, for $n\ge n_1$,

$$I_+ \le \frac{2nt_1^2}{b^2(n)} + \frac{4H(n)\rho_+}{b^2(n)}\int_{t_1}^{\varepsilon b(n)} uG(u)\,du. \tag{12.4.10}$$

By virtue of Theorem 1.1.4(iv), the second term on the right-hand side of (12.4.10) is asymptotically equivalent, as $n\to\infty$ ($b(n)\to\infty$), to

$$\frac{4H(n)\rho_+}{b^2(n)(2-\alpha)}\,\bigl(\varepsilon b(n)\bigr)^2G\bigl(\varepsilon b(n)\bigr) \sim \frac{4\varepsilon^{2-\alpha}\rho_+}{2-\alpha}\,H(n)G\bigl(b(n)\bigr) \sim \frac{4\varepsilon^{2-\alpha}\rho_+}{2-\alpha}.$$

The first term on the right-hand side of (12.4.10) tends to zero when $b(n)\gg\sqrt n$. This implies that

$$\lim_{\varepsilon\to 0}\limsup_{n\to\infty} I_+ = 0.$$

The quantity $I_-$ can be introduced and bounded in a similar way (if $\rho_\pm = 0$ then $\limsup_{n\to\infty}I_\pm = 0$). Condition (12.4.9) and, hence, Theorem 12.4.1 are proved.

Observe that if $I_+ + I_-$ had a positive limit (with $b(n)\sim c\sqrt n$; see (12.4.10)) then the limiting distribution for $S_n$ would have a normal component.
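The normalization in (12.4.8) becomes completely explicit in the setting of Example 12.4.3. A sketch (assuming, for illustration, $G^\zeta(t) = t^{-\alpha}$ with symmetric tails, so $\rho_\pm = 1/2$, and $\delta(n) = 1/\ln n$, which gives $H(n) = n/\ln n$ and $b(n) = H(n)^{1/\alpha}$):

```python
import math

# Illustration of (12.4.8) for Example 12.4.3 with illustrative choices
# G^zeta(t) = t^{-alpha} symmetric (rho_+ = rho_- = 1/2), delta(n) = 1/ln n:
#   F(t) = delta(n) t^{-alpha},  H(n) = n delta(n),  b(n) = H(n)^{1/alpha},
# so n F_+(t b(n)) = rho_+ t^{-alpha} exactly for every n.
alpha, rho_plus = 1.5, 0.5
ok, grows = [], []
for n in [10**4, 10**6, 10**8]:
    delta = 1.0 / math.log(n)
    H = n * delta
    b = H ** (1.0 / alpha)                          # b(n) = G^{(-1)}(1/H(n))
    for t in [1.0, 2.0, 10.0]:
        lhs = n * rho_plus * delta * (t * b) ** (-alpha)   # n F_+(t b(n))
        ok.append(abs(lhs - rho_plus * t ** (-alpha)) < 1e-9)
    grows.append(b / math.sqrt(n))                  # b(n) >> sqrt(n) for alpha < 2
print(all(ok), grows)
```

Here $b(n)/\sqrt n = n^{1/\alpha-1/2}(\ln n)^{-1/\alpha}\to\infty$ for $\alpha<2$, so the theorem's condition $b(n)\gg\sqrt n$ holds.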
469
12.4 Convergence of random walks to stable processes 12.4.2 Convergence to stable processes
Denote by ζ^{(α,ρ)}(·) a homogeneous stable process with independent increments on [0,1] such that ζ^{(α,ρ)}(1) follows the distribution F_{α,ρ}, and put

ζ_n(t) := S_{⌊nt⌋}/b(n),   t ∈ [0,1],

where {b(n)} is a suitable scaling sequence. Let D(0,1) be the space of functions on [0,1] without discontinuities of the second kind, endowed with the Skorokhod metric

ρ_D(f₁, f₂) = inf_λ sup_t [ |f₂(t) − f₁(λ(t))| + |λ(t) − t| ],

where the infimum is taken over all continuous increasing functions λ(·) on [0,1] such that λ(0) = 0, λ(1) = 1 (see also p. 75). In this subsection we will obtain conditions ensuring the weak convergence, as n → ∞, of the distributions of the processes ζ_n(·) in the space D(0,1) to the law of ζ^{(α,ρ)}(·).

Fix an arbitrary Δ ∈ (0,1) and consider the totality of the r.v.'s ξ_{k+1}, ..., ξ_{k+nΔ}. For these r.v.'s, introduce the averaged distribution

F_{(k,Δ)} := (1/(nΔ)) ∑_{j=k+1}^{k+nΔ} F_j   (12.4.11)

and the averaged tails

F_{(k,Δ),±}(t) := (1/(nΔ)) ∑_{j=k+1}^{k+nΔ} F_{j,±}(t),   F_{(k,Δ)}(t) := F_{(k,Δ),+}(t) + F_{(k,Δ),−}(t).
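The scaled partial-sum process ζ_n introduced above is straightforward to simulate. The following minimal sketch (our own illustration, with i.i.d. symmetric Pareto-type jumps so that ρ_+ = ρ_− = 1/2 and H(n) = n) builds the path of ζ_n on the grid {k/n}:

```python
import random

def zeta_n_path(n, alpha, seed=0):
    """Return the values zeta_n(k/n) = S_k / b(n), k = 0..n, for i.i.d. symmetric
    jumps with P(|xi| > t) = t**(-alpha) for t >= 1, and b(n) = n**(1/alpha)."""
    rng = random.Random(seed)
    b = n ** (1.0 / alpha)
    s, path = 0.0, [0.0]
    for _ in range(n):
        magnitude = rng.random() ** (-1.0 / alpha)   # Pareto(alpha)-tailed magnitude
        s += magnitude if rng.random() < 0.5 else -magnitude
        path.append(s / b)
    return path

path = zeta_n_path(1000, alpha=1.5, seed=42)
print(len(path), path[0])
```

For α ∈ (1, 2) such paths exhibit the occasional large jumps characteristic of the limiting stable process.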
To obtain the convergence of the processes ζ_n(·) to a homogeneous stable process, we will need homogeneous uniform regular variation conditions, which are based on condition [UR].

[HUR] There exist a fixed r.v.f. G(t) = t^{−α}L(t), where α ∈ (1,2) and L is an s.v.f., and an r.v.f. H(n) = n^h L_h(n) ≤ n, where h ∈ (0,1] and L_h is an s.v.f., such that, for any fixed Δ > 0, we have the limits

lim_{t→∞, n→∞} nF_{(k,Δ)}(t)/(H(n)G(t)) = 1,   lim_{t→∞, n→∞} nF_{(k,Δ),+}(t)/(H(n)G(t)) = ρ_+,

where the convergence is uniform in k ≤ n(1−Δ).

As in (12.4.6), put

b(n) := G^{(−1)}(1/H(n)) = n^{h/α} L_b(n).
Non-identically distributed jumps with infinite second moments
Theorem 12.4.5. Let Eξ_j = 0 and let the conditions [HUR] and b(n) ≫ √n be satisfied. Then, as n → ∞,

ζ_n(·) ⇒ ζ^{(α,ρ)}(·),   (12.4.12)

i.e. the distributions of the processes ζ_n(·) converge weakly in D(0,1) to the distribution of the process ζ^{(α,ρ)}(·).

Remark 12.4.6. As in Remark 12.4.2, we note that for b(n) ∼ c√n the limiting process for ζ_n(·) will have a Wiener component. For b(n) = o(√n), it will coincide with the Wiener process (under the scaling √n).

The conditions of Theorem 12.4.5 will clearly be satisfied if there exists a sequence n₀ = o(n) as n → ∞ such that the averaged distributions

F_{(k)}(t) := (1/n₀) ∑_{j=k+1}^{k+n₀} F_j(t)   (12.4.13)

satisfy condition [HUR] uniformly in both k and n (under the usual assumption that Eξ_j = 0).

Proof of Theorem 12.4.5. To demonstrate the convergence (12.4.12), we need to verify the following two conditions (see e.g. § 15 of [28]).

(1) The finite-dimensional distributions of ζ_n(·) converge to the respective distributions of ζ^{(α,ρ)}(·).

(2) The compactness (tightness) condition is satisfied for the family of distributions of the processes ζ_n(·) in the space D(0,1).

That the convergence of the finite-dimensional distributions holds is obvious from Theorem 12.4.1 and the conditions of Theorem 12.4.5, since the latter mean that the F_{(k,Δ)} satisfy all the conditions of Theorem 12.4.1 with common α and ρ values. Indeed, condition [HUR] clearly implies that condition [UR] of Theorem 12.4.1 is met. The negligibility condition [S] also follows from condition [HUR]: for j ∈ (k, k+nΔ) and all large enough n,

P(|ξ_j| > εb(n)) ≤ nΔ F_{(k,Δ)}(εb(n)) ≤ 2ΔH(n)G(εb(n)) ≤ 3Δε^{−α}.

Since Δ can be chosen arbitrarily small, whereas the left-hand side of the above relation does not depend on Δ, we necessarily have

max_{j≤n} P(|ξ_j| > εb(n)) → 0   as n → ∞.
For the compactness condition to be satisfied it suffices that, for t₁ < t < t₂,

lim sup_{n→∞} P(|ζ_n(t) − ζ_n(t₁)| ≥ v, |ζ_n(t₂) − ζ_n(t)| ≥ v) ≤ c v^{−2γ} |t₂ − t₁|^{1+δ}   (12.4.14)

for some c < ∞, γ > 0, δ > 0 (see e.g. Theorem 15.6 of [28]). Now, the conditions of Theorem 12.4.5 imply that the conditions of Corollary 12.1.5 are met. This allows us to conclude that, for Δ = t − u > 0, m := ⌊nt⌋ − ⌊nu⌋ ∼ nΔ and S'_m := S_{⌊nt⌋} − S_{⌊nu⌋}, we have as n → ∞

P(|ζ_n(t) − ζ_n(u)| ≥ v) = P(|S'_m| ≥ vb(n)) ≤ c m F(vb(n)) ∼ cΔ n F(vb(n)) ∼ cΔ H(n)G(vb(n)) ∼ cΔ v^{−α} H(n)G(b(n)) ∼ cΔ v^{−α}.   (12.4.15)

Since the events under the probability symbol in (12.4.14) are independent we find that, owing to (12.4.15), for all large enough n this probability will not exceed 2c²v^{−2α}(t₂ − t₁)². This means that inequality (12.4.14) holds true. The theorem is proved.

Remark 12.4.7. The asymptotic inequality (12.4.15) (and hence the proof of compactness in (12.4.14) as well) can also be obtained, for a fixed large v, with the help of Theorem 12.4.1, using the limiting stable distribution to approximate P(|S'_m| ≥ vb(n)).

Remark 12.4.8. It is evident that if we consider, for a fixed T > 0, a sequence of ⌊nT⌋ independent r.v.'s ξ₁, ..., ξ_{⌊nT⌋} that satisfy the conditions of Theorem 12.4.5 (uniformly in k ≤ nT − nΔ) then the weak convergence (12.4.12) will also hold for the processes ζ_n(·) in the space D(0,T).
12.5 Transient phenomena

12.5.1 Introduction

In boundary-crossing problems for random walks, by transient phenomena one usually understands the following. Let ζ, ζ₁, ζ₂, ... be i.i.d. r.v.'s and let

Z_n := ∑_{i=1}^n ζ_i,   Z := sup_{k≥0} Z_k.

It is well known that if Eζ = −a < 0 then Z is a proper r.v., and that Z = ∞ a.s. if a ≤ 0. We call 'transient' the phenomena observed in the behaviour of the distribution of Z (or of that of the maximum of finitely many cumulative sums) as a ↓ 0. It is natural to expect that the r.v. Z will grow in probability in this case. The problem is to find out how fast this growth is and to determine the respective limiting distribution.

To make the formulation of the problem more precise, we introduce a triangular array scheme, in which the distribution of ζ depends on some varying parameter that can be identified in our case with a, and then consider the situation when a → 0. Sometimes it is more convenient to consider the traditional triangular array scheme, i.e. sequences ζ_{1,n}, ζ_{2,n}, ... for which the distribution of ζ_{k,n} depends on the series number n. In this case, a = a(n) depends on n in such a way that a(n) → 0 as n → ∞. However, in problems dealing with the distribution of Z, introducing a triangular array scheme with parameter n may prove to be artificial, as this parameter could be absent in the formulation of the problem. Moreover, since everywhere in what follows the scaling factors will all be functions of a (rather than of n), the first approach to introducing a triangular array scheme (i.e. using parameter a) often appears to be preferable.

We will assume in what follows that we are considering a triangular array scheme in which the distribution of ζ = ζ^{(a)} depends on a, but for brevity we will omit the superscript (a) indicating this dependence. Further, it will be convenient for us to deal with centred r.v.'s ξ_i = ζ_i + a, so that Eξ_i = 0. Then the desired functional Z of the sequence of partial sums Z_k will have the form

Z = S(a) = sup_{k≥0} (S_k − ak),   S_k = ∑_{i=1}^k ξ_i.

We will also consider the maxima of finitely many partial sums:

S̄_n(a) = max_{k≤n} (S_k − ak).

Studying transient phenomena was stimulated, to a great extent, by applications. In queueing problems, transient phenomena arise when a queueing system works under 'heavy traffic' conditions, i.e. when the intensity of the arrival flow approaches the maximum possible service rate. In this case, for example, for the simplest single channel queueing system, in which the queue length (or the waiting time) at time n can be described by S̄_n(a), we will have a situation where a is small. It is known that if Eξ² =: d_a → d < ∞ as a → 0 then, for n = ⌊Ta^{−2}⌋, there exists a limiting distribution for aS̄_n(a): for any z ≥ 0,

lim_{a→0} P(aS̄_n(a) ≥ z) = P( max_{t≤T} (√d w(t) − t) ≥ z ),   (12.5.1)

where w(·) is the standard Wiener process [165, 231]. In particular, when T = ∞ the right-hand side of (12.5.1) becomes e^{−2z/d}. For more detail, see e.g. § 25 of [42].

In the case Eξ² = ∞, all we know is a number of results concerning the limiting distribution for Z, which have been obtained for some special types of distribution of ζ; see [93, 78, 94]. These results do not explain the nature of the scaling functions and limiting distributions in the general case. The analytical methods employed in the above-mentioned papers are related to factorization identities and have certain limitations. This makes the problems under consideration, in their general formulation, hard to address using such methods.
12.5 Transient phenomena
473
12.5.2 The main limit theorems In this subsection, in accordance with the general aim of the chapter we will deal with a more general problem formulation, where the ξi are non-identically distributed independent r.v.’s in the triangular array scheme and max Eξi2 = ∞. i
Under these conditions, finding the limiting distributions is possible only when the distributions of the partial sums Sn converge (after suitable scaling) to a stable law. More precisely, we will use Theorems 12.4.1 and 12.4.5. This means that we need to assume that the conditions of these theorems are satisfied. The main one is condition [HUR] of § 12.4. To simplify the formulations and proofs we will restrict ourselves to considering the most important special case, when H(n) = n (in which ‘degeneration’ of the tails is impossible). Clarifications concerning what the results and their proofs would look like in a more general setup are given after the proof of Theorem 12.5.2 below. In the case H(n) = n, condition [HUR] means that there exists an r.v.f. G(t) = t−α L(t), where α ∈ (1, 2) and L is an s.v.f., such that for any fixed Δ > 0 one has F(k,Δ) (t) F(k,Δ),+ (t) lim = 1, lim = ρ+ (12.5.2) t→∞, n→∞ t→∞, n→∞ G(t) G(t) and the convergence is uniform in k (F(k,Δ) was defined in (12.4.11)). We will take a to be the parameter of the triangular array scheme. This means that the increasing n values of will be considered as functions n = n(a) → ∞ as a → 0. These functions will be specified below, under conditions (12.5.2). Set D(t) := tG(t), d(v) := D(−1) (v) = inf u : D(u) < v ,
n1 := n1 (a) =
d(a) . a (12.5.3)
Theorem 12.5.1. Assume that Eξi = 0 and that conditions (12.5.2) are met with n replaced in them by n1 = d(a)/a. Further, let a → 0 and n = n(a) be such that there exists the limit n(a) lim = T < ∞. a→0 n1 (a) Then we have the convergence in distribution S n (a) ⇒ Z (α,ρ) (T ) := sup ζ (α,ρ) (u) − u . d(a) uT
(12.5.4)
One can show that the distribution of Z (α,ρ) (T ) is continuous. This, together with Theorem 12.5.1, means that, for all v, S n (a) (12.5.5) v → P Z (α,ρ) (T ) v as a → 0. P d(a)
474
Non-identically distributed jumps with infinite second moments
Proof of Theorem 12.5.1. As before, let b(n) = G−1 (1/n). First note that d(a) = b(n1 ).
(12.5.6)
This follows from the equalities (we assume for simplicity that the function G(t) is continuous) 1 a = a ≡ D d(a) = d(a)G d(a) , G d(a) = , d(a) n1 so that d(a) = G−1 (1/n1 ) ≡ b(n1 ). Hence S n (a) Sk k ak = max ζn1 = max − kn b(n1 ) kn d(a) b(n1 ) n1
−
kθ , n1
(12.5.7)
where, by (12.5.6), θ := Further, the functional
n1 a d(a) = = 1. b(n1 ) b(n1 )
fT (ζ) := sup ζ(u) − u uT
is continuous in the Skorokhod metric and has the property that fT (ζn1 ) = max ζn1 (k/n1 ) − k/n1 kn
(assuming for simplicity that n = T n1 ) and so, by virtue of (12.5.7), S n (a) = fT (ζn1 ). d(a) It remains to make use of Theorem 12.4.5 and Remark 12.4.8. The theorem is proved. Now consider the case n n1 and, in particular, the situation when n = ∞. Put Z (α,ρ) := Z (α,ρ) (∞). Theorem 12.5.2. Let the conditions of Theorem (12.5.1) be met, and let T = ∞. Then, as a → 0, S(a) ⇒ Z (α,ρ) . d(a) Proof. As before, let η(x, a) = inf{k : Sk − ak x}. Put, for brevity, ζ(n, a) :=
S n (a) . d(a)
(12.5.8)
12.5 Transient phenomena Fix a large number T1 and set nT1 = n1 T1 . Then, for any fixed v > 0, P ζ(∞, a) v = P ζ(nT1 , a) v + P ∞ > η vd(a), a > nT1 ,
475
(12.5.9)
where, 12.5.1, the first term on the right-hand side converges by virtue of Theorem to P Z (α,ρ) (T1 ) v as a → 0. Further, it is not difficult to see that the conditions of the theorem imply that also satisfied are the conditions of Corollary 12.3.2 (i.e. conditions [<, <]U , [H] and [N]; that the first two are met is obvious whereas, regarding condition [N], see (12.4.7) and the comments following condition [UR] of § 12.4.1). Therefore, for the second term on the right-hand side of (12.5.9), we obtain from Corollary 12.3.2 and (12.3.8) with x = vd(a) that T1 x P ∞ > η(vd(a), a) > nT1 = P ∞ > η(x, a) > av 1−α cn1 V(1) (x) T1 /v 1−α c1 n1 G(x) T1 /v ,
(12.5.10)
where V(1) corresponds to the averaged distribution F(1) =
n1 1 Fj n1 j=1
nT1 (one could also consider the averaged distributions F(T1 ) = n−1 j=1 Fj ). Now T1 observe that, for the last expression in (12.5.10), D vd(a) v −α d(a) G vd(a) = ∼ D d(a) ∼ v −α , n1 G(x) = a va a and hence, uniformly in a, c1 1−α , P η(vd(a), a) > nT1 T v 1 where the right-hand side can be made arbitrarily small by choosing a suitable T1 . It follows from the above that lim sup P ζ(∞, a) v P Z (α,ρ) (T1 ) v + R(v, T1 ) a→0 P Z (α,ρ) v + R(v, T1 ), (12.5.11) lim inf P ζ(∞, a) v P Z (α,ρ) (T1 ) v a→0 = P Z (α,ρ) v − r(v, T1 ), where R(v, T1 ) c1 v −1 T11−α , and that r(v, T1 ) := P Z (α,ρ) v − P Z (α,ρ) (T1 ) v ↓ 0 as T1 → ∞. Therefore, the right-hand sides in (12.5.11) differ from P Z (α,ρ) v only by
476
Non-identically distributed jumps with infinite second moments
summands that can be made arbitrarily small by choosing a suitable T1 . But the left-hand sides in (12.5.11) do not depend on T1 . Therefore there exists the limit lim P ζ(∞, a) v = P Z (α,ρ) v . a→∞
The theorem is proved. Now we will turn to conditions [HUR] in their more general form (i.e. in the case H(n) ≡ n). For a function H(n) = nh Lh (n), where Lh (·) is an s.v.f., the considerations become much more tedious, beginning with the construction of the scaling functions d(a) and n1 (a). So, for simplicity we will put Lh (n) ≡ Lh = const (in this case, b(n) = √ G(−1) (1/H(n)) = nh/α Lb (n) n ). We based our choice of the scaling functions d(a) and n1 (a) in (12.5.3) on the relation an = b(n), which states that the value of the drift an and that of the standard deviation b(n) at time n are ‘comparable’ with each other. It followed from this relation that nG(an) = 1,
an = D(−1) (a) = d(a),
anG(an) = a,
n1 = n1 (a) =
d(a) . a
Now such ‘comparability’ relations lead to the equalities H(n)G(an) = 1,
H(an)G(an) = ah .
Therefore, putting D(v) := H(v)G(v),
d(u) := D (−1) (u),
(12.5.12)
we obtain d(ah ) . a Repeating the argument from the proof of Theorem 12.5.1 we find that an = d(ah ),
S n (a) ⇒ Z (α,ρ) (T ), d(a)
n1 = n1 (a) =
where T = lim
a→0
(12.5.13)
n n1 (a)
and the functions d(·) and n1 (a) are defined in (12.5.12), (12.5.13). This follows from the fact that the main equality (12.5.6) in the proof of the theorem now becomes d(ah ) = b(n1 ), which is a direct consequence of the initial ‘comparability’ relation an = b(n) with n = n1 (a) = d(ah )/a inserted into it. The rest of the argument remains unchanged.
12.5.3 On the explicit form of the limiting distributions It is scarcely possible to find closed-form expressions in the general case for the proper limiting distributions in Theorems 12.5.1 and 12.5.2 (i.e. the distributions of Z (α,ρ) (T ); the probabilities of large deviations of S n (a)/d(a) were studied in Theorem 12.3.1 and Corollary 12.3.2). In some special cases, however, one can do this for T = ∞. One of these is the case when F+ (t) = o(F− (t)) or, equivalently, when ρ = −1. Then one has G(t) ∼ F− (t), and so it is natural to replace the parameter α by the parameter β that describes the behaviour of the left tail F− (t) as t → ∞.
477
12.5 Transient phenomena Theorem 12.5.3. For β ∈ (1, 2),
P Z
(β,−1)
−bv
v =e
,
where b :=
β−1 Γ(2 − β)
1/(β−1)
.
If we use the convention Γ(1 − β) = Γ(2 − β)/(1 − β) for β ∈ (1, 2) then b −1(β−1) . See also Theorem 12.5.6 below. can also be written as b = −Γ(1 − β) Proof. It is clear that when finding the limiting distribution for Z (β,−1) one can assume without loss of generality that the ξi are i.i.d. r.v.’s and that F− (t) is an r.v.f. of index −β. Further, in the case when F+ (t) = o(F− (t)) as t → ∞, one has ρ = −1 and hence the distributions of the process ζ (β,−1) (·) and the r.v. Z (β,−1) will not change if we replace the tail F+ (t) by any other tail F+∗ (t) with the property that F+∗ (t) = o(F− (t)) as t → ∞; it is only required that condition [HUR] is still satisfied. This will be the case if we put F+∗ (t) := qe−γt for t > 0, γ > 0, q ∈ (0, 1). In other words, we can assume without loss of generality (from the viewpoint of finding the distribution of Z (β,−1) ) that F+ (t) ≡ P(ξj t) = qe−γt , ∞ where q < 1, g = q/a+ and a+ = E(ξ + ) = 0 u F(du), so that we still have Eξj = 0. Now recall that in the case of exponential right tails the distribution of S(a) is known explicitly. Namely, in this case (see e.g. § 20 of [40] or § 5, Chapter 11, of [49]) EeλS(a) = p +
(1 − p)λ1 , λ1 − λ
p = P(S(a) = 0),
so that the distribution of S(a) is also exponential: P S(a) x = (1 − p)e−λ1 x ,
x > 0.
(12.5.14)
(12.5.15)
Here λ1 > 0 is the solution to the equation ψ(λ) = 1, where ψ(λ) := Eeλ(ξ−a) = e−aλ
∞ eλt F(dt),
−∞
and it is clear that p → 0 as a → 0. To find λ1 , an asymptotic representation for ψ(λ) as λ → 0 is needed. Lemma 12.5.4. As λ → 0, ψ(λ) = 1 − λa + b1 F− (1/λ)(1 + o(1)),
b1 := b1−β =
Γ(2 − β) , β−1 (12.5.16)
and (−1) (a/b1 ) ∼ bd(a) as λ−1 1 ∼D
a → 0.
(12.5.17)
478
Non-identically distributed jumps with infinite second moments
Proof. We have eaλ ψ(λ) = I1 + I2 , where 0 I1 :=
e F(dt) = F− (0) + λ −∞
0
Here
0 λt
0 t F(dt) +
−∞
(eλt − 1 − λt) F(dt). −∞
−∞
t F(dt) = −E(ξ + ) = −a+ ,
(e
∞ ∞ −λt 2 − 1 − λt) F(dt) = λ (1 − e )F− (t) dt = λ e−λt F−I (t) dt,
0 λt
−∞
0
0
where, by virtue of Theorem 1.1.4(iv), as t → ∞, ∞ F−I (t)
=
F− (u) du ∼ t
tF− (t) . β−1
Therefore, as λ → 0 (λt = u), it follows from Theorem 1.1.5(i) that ∞ Γ(2 − β) 1 2 F− (1/λ). e−λt F−I (t) dt ∼ λ2 Γ(1 − (β − 1)) F−I (1/λ) ∼ λ λ β−1 0
Thus I1 = F− (0) − λa+ +
Γ(2 − β) F− (1/λ)(1 + o(1)). β−1
Also, it is obvious that since E(ξ + )2 < ∞ we have, as λ → 0, ∞ I2 :=
eλt F(dt) = F+ (0) + λa+ + O(λ2 ),
e−aλ = 1 − aλ + O(λ2 ).
0
This proves (12.5.16). Further, the desired value λ1 solves the equation λa = b1 F− (1/λ)(1 + o(1)), or, equivalently, λa = b1 G(1/λ)(1 + o(1)) or D(1/λ) ∼ a/b1 , since in our case F− (t) ∼ G(t) as t → ∞. From this we find that −1/(β−1)
λ−1 1 ∼ d(a/b1 ) ∼ b1
d(a).
The lemma is proved. Returning to (12.5.15) and putting x = vd(a), where v > 0 is fixed, we obtain that S(a) v ∼ e−bv , P Z (β,−1) v = e−bv . P d(a) The theorem is proved.
479
12.5 Transient phenomena
The second case, for which a closed-form expression for the distribution of Z (α,ρ) can be found, is the case when F− (t) = o(F+ (t)), i.e. when ρ = 1. This time, however, one can only find explicitly the ch.f. of Z (α,1) . Theorem 12.5.5. For α ∈ (1, 2) we have (α,1) iφ|λ|α−1 = 1− A(α − 1, φ) EeiλZ α−1
−1
,
φ = sign λ,
where ∞ A(γ, φ) =
eiφv v−γ dv = Γ(1 − γ)eiφ(1−γ)π/2
0
(see (1.5.20), (1.5.24) on p. 64). The following relationship between the ch.f.’s of Z (α,1) and ζ (α−1,1) is easily established: −1 ln f(α−1,1) (λ) iλZ (α,1) = 1− . Ee α−1 Proof of Theorem 12.5.5. Using essentially the same argument as in the proof of Theorem 12.5.3, one can assume without loss of generality, when one is concerned with finding the distribution of Z (α,1) , that the ξj are i.i.d. and that F− (t) ≡ P(ξj < −t) = qe−γt
for t −1,
where γ = −qa−1 > 0 and a− := E min{0, ξ} are fixed. Then we still have Eξj = 0 and, for a ∈ (0, 1), the first negative sum χ− in the random walk {Sk − ak; k 1} will be exponentially distributed: P(χ− < −t) = e−γt ,
t 0,
Eeiλχ− =
γ . γ + iλ
Using factorization identities (see e.g. relation (1) from § 18 of [40] or § 5, Chapter 11 of [49]), we obtain EeiλS(a) =
piλ p 1 − Eeiλχ− = , 1 − f(a) (λ) (1 − f(a) (λ))(γ + iλ)
where f(a) (λ) := Eeiλ(ξ−a) ,
p = P S(a) = 0 .
Clearly, as λ → 0, we have 1 − f(a) (λ) = 1 − e−iλa f(λ) = 1 − 1 − iλa + O(λ2 ) (f(λ) − 1) + 1 = iλa + O(λ2 ) − (f(λ) − 1)(1 + o(1)).
(12.5.18)
480
Non-identically distributed jumps with infinite second moments
We have already found the asymptotic behaviour of f(λ) − 1 as λ → 0. It follows from (1.5.34) and (12.5.18) that, as m = 1/|λ| → ∞, F+ (m) 1 − f(a) (λ) = iλa + A(α − 1, φ)(1 + o(1)) α−1 % & iF+ (m) = iλa 1 − A(α − 1, φ)(1 + o(1)) . λa(α − 1) Therefore EeiλS(a) =
% &−1 iF+ (m) p 1− A(α − 1, φ)(1 + o(1)) (γ + iλ)−1 . a λa(α − 1)
Setting λ := μ/d(a) for a fixed μ and a → 0, we find that EeiμS(a)/d(a) % &−1 −1 iφD d(a)/|μ| iμ p 1+ 1− A(α − 1, φ)(1 + o(1)) = , aγ a(α − 1) d(a)γ where a−1 D d(a)/|μ| ∼ |μ|α−1 and φ = sign μ. Since S(a)/d(a) ⇒ Z (α,1) , this implies that necessarily p/aγ → 1 and −1 iφ|μ|α−1 EeiμS(a)/d(a) → 1 − A(α − 1, φ) . α−1 The theorem is proved. Observe that the exponential character of the distribution of Z (β,−1) can be established (without computing the parameter b) in a much easier way than in Theorem 12.5.3, using the fact that there are no positive jumps in the process ζ (β,ρ) (u) − u when ρ = −1. The desired exponentiality will follow from the next assertion. Theorem 12.5.6. Let {ζ(t); t 0} be an arbitrary homogenous process with independent increments such that Z = supt0 ζ(t) < ∞ a.s. For the r.v. Z to have the exponential distribution P(Z > x) = e−bx ,
x 0,
b > 0,
(12.5.19)
it is necessary and sufficient that the trajectories ζ(·) are continuous from above (do not have positive jumps). Proof. Put η(x) := inf t 0 : ζ(t) > x if Z > x and η(x) := ∞ if Z x. The r.v. χ(x) := ζ η(x) −x is defined on the event {Z > x}. For the definiteness let the trajectories of the process ζ(·) be right-continuous. Then χ(x) 0 and, moreover, χ(x) ≡ 0 iff the process ζ(t) is continuous from above. For an arbitrary y ∈ (0, x), we have # $ P (x) := P(Z > x) = E P Z > x| Fη(y) ; η(y) < ∞ , (12.5.20)
12.5 Transient phenomena
481
where Fη(y) is the σ-algebra generated by the trajectory of the process ζ(·) on the time interval [0, η(y)]. Owing to the strong Markov property of the process ζ(·), on the event {η(y) < ∞} one has P Z > x| Fη(y) = P Z > x − y − χ(y)| χ(y) , (12.5.21) where Z := supt0 ζ (t) and ζ (·) is an independent copy of the process ζ(·). Therefore, if the process ζ(·) is continuous from above then P Z > x| Fη(y) = P(Z > x − y) = P (x − y), which, together with (12.5.20), implies that P (x) = P (x − y) P(η(y) < ∞) = P (y)P (x − y). Solutions to this equation in the class of decreasing functions have the form P (x) = e−bx ,
b ≥ 0.
(see e.g. § 6, Chapter XVII of [121]). Since P(Z > x) → 0 as x → ∞, we have b > 0 and hence (12.5.19) is true. Now let (12.5.19) hold true. Assume that the process ζ(·) has positive jumps and so P(χ(y) > 0) > 0. Then, by virtue of (12.5.19)–(12.5.21), P(Z > x − y − χ(y)) > P (x − y),
P (x) > P (y)P (x − y).
This contradicts (12.5.19). The theorem is proved.
13 Random walks with non-identically distributed jumps in the triangular array scheme in the case of finite variances
In the present chapter we will extend the main results of Chapter 4 to the case of random walks with independent non-identically distributed jumps ξi in the triangular array scheme when Eξi2 < ∞.
13.1 Upper and lower bounds for the distributions of S n and Sn Let ξ1 , ξ2 , . . . be independent r.v.’s, following respective distributions F1 , F2 , . . . The distributions Fj can also depend on some parameter. As in the classical triangular array scheme this could be the number of summands n, so that the (n) Fj = Fj depend both on j and n. All the remarks that we made in Chapter 12 concerning the use of other parameters in the scheme apply to the present situation as well. As before, let n ξi , S n = max Sk . Sn = kn
i=1
In what follows, we will go through all the main stages of bounding and evaluating the probabilities P(Sn x) and P(S n x) presented in Chapter 4, but now under new, more general, conditions.
13.1.1 Upper bounds for the distributions of S n . The first version of the conditions First of all, an extension of the basic inequality (2.1.8) to the case of non-identically distributed jumps has the same form as in § 12.1 (see Lemma 12.1.1 and inequality (12.1.3) on p. 441). Further, as in § 12.1 we will introduce a regular variation condition, but this time for the right averaged tails 1 Fj,+ (t), n j=1 n
F+ (t) =
where Fj,+ (t) = Fj ([t, ∞)).
482
13.1 Upper and lower bounds for the distributions of S n and Sn
483
The conditions [U] and [ · , <]U that we are about to introduce are similar to conditions [U] and [<, <]U from § 12.1. In the present chapter, we say that condition [ · , <]U is satisfied if the averaged (n) tail F+ (t) = F+ (t) admits a majorant V (t) = V (n) (t) possessing the property [U] = [U1 ] ∩ [U2 ] for a fixed α > 2, where conditions [U1 ] and [U2 ] have the following form. [U1 ] For any given δ > 0 there exists a tδ such that, for all n, t tδ and v ∈ [1/2, 2], V (vt) −α (13.1.1) V (t) − v δ. [U2 ] For any given δ > 0 there exists a tδ such that, for all n, t and v for which t tδ , tv tδ , one has v −α+δ
V (vt) v −α−δ . V (t)
(13.1.2)
One can assume, without loss of generality, that the tδ ’s in [U1 ] and [U2 ] are one and the same function. An equivalent form of condition [U] was given in (12.1.7) (see p. 442). Remark 13.1.1. While studying the probabilities of large deviations of S n and Sn in the case n → ∞, condition [U] can be somewhat relaxed and replaced by the following condition. [U∞ ] For any given δ > 0 there exist nδ and tδ such that the uniform inequalities (13.1.1), (13.1.2) hold for all n nδ , t tδ and tv tδ . A simple sufficient condition for the existence of regularly varying right majorants of index α for the averaged distributions is the boundedness of the averaged one-sided moments of order α: n 1 E(ξjα ; ξj 0) < cα < ∞. n j=1 In this case, clearly, for t > 0, F+ (t) V (t) ≡ cα t−α , and so condition [U] will be met. Along with condition [U] on the averaged majorant one could consider, as in § 12.1, an alternative version of this condition, in which condition [ · , <] and the relations (13.1.1), (13.1.2) hold uniformly for each majorant V1 , . . . , Vn , with respective exponents α1 , . . . , αn : 2 < α∗ min αi max αi α∗ , in
in
(13.1.3)
484
Non-identically distributed jumps with finite variances
where α∗ and α∗ do not depend on n. If α∗ = α∗ = α then the above ‘individual’ version of condition [ · , <]U will imply the ‘averaged’ version (13.1.1), (13.1.2) (see § 12.1). If the αj are different then it is no longer clear whether the averaged condition [ · , <]U will be met. In this connection one can consider, as in § 12.1, two versions of condition [ · , <]: (1) the averaged version and (2) the individual one. In contrast with § 12.1, in the formulation of the main theorem of this section we will not use condition [N] preventing too rapid ‘thinning’ of the distribution tails (this condition will be introduced later) but, rather, will use an alternative approach, introducing the quantity J(n, δ) := tδα+δ nV (tδ ),
(13.1.4)
where tδ is from condition [U]. The quantity J(n, δ) will be part of the bounds to be derived. Now we can state the main assertion of the present section. Put di := Eξi2 ,
D = Dn :=
n
di ,
Bj = {ξj < y},
i=1
n *
B=
Bj
j=1
and let y = x/r, r 1, ρ = ρ(n, δ) :=
J(n, δ) , Dn
σ(n) =
(α − 2)D ln D,
x = sσ(n),
(13.1.5) where δ > 0 will be chosen later on. Without loss√of generality, we will assume that D 1 and, as was the case in § 4.1, that x D. In the individual version of condition [ · , <]U , the parameter α in (13.1.4), n (13.1.5) should be replaced by α∗ , and one should put V (t) := n−1 j=1 Vj (t). Theorem 13.1.2. Let Eξj = 0 and let the averaged distribution F satisfy condition [ · , <]U . Then the following assertions hold true. (i) For any fixed h > 1, for s 1 and all small enough nV (x), one has r−θ nV (y) , (13.1.6) P ≡ P(S n x; B) er r where
ln s hr2 , θ := 2 1 + χ + b 4s ln D
χ := −
2 ln ρ , α − 2 ln D
b :=
2α , α−2
and the value δ, which determines the quantity ρ = ρ(n, δ), depends on the chosen h > 1 and will be specified in the proof. If D → ∞, ln ρ = o(ln D) then one can put hr 2 ln s θ := 2 1 + b . (13.1.7) 4s ln D
13.1 Upper and lower bounds for the distributions of S n and Sn
485
(ii) Let D = Dn → ∞ as n → ∞. Then, for any fixed h > 1, τ > 0, for x = sσ(n),
s2 < (h − τ )/2
and all large enough n, one has 2
P e−x
/2Dh
.
(13.1.8)
(iii) If, instead of the averaged condition [ · , <]U we require that the ‘individual’ version of this condition is met, in which each distribution Fj separately satisfies condition [ · , <]U uniformly in j and n and the relations (13.1.3) hold true, then the assertions of parts (i), (ii) of the theorem remain true for σ(n) = (α∗ − 2)D ln D. In this case, one should replace α in the first assertion by α∗ = maxin αi . In the second assertion the inequality (13.1.8) will hold for x = sσ(n),
s
(h − τ )(α∗ − 2) , 2(α∗ − 2)
α∗ = min αi . in
In all the assertions in which it is assumed that n → ∞, condition [U] can be replaced by [U∞ ]. We see that, in the case where ln ρ = o(ln D), the first assertion of Theorem 13.1.2 essentially repeats the second assertion of Theorem 4.1.2, the number n in the representation for θ being replaced by D. A similar remark applies to the second assertion. Now we will turn our attention to the quantity χ (or, equivalently, to the value −ln ρ/ln D), which is present in (13.1.6). Owing to Chebyshev’s inequality, we have J(n, δ) nV (t) Dn t−2 , J(n, δ) tα+δ−2 Dn , ρ= tδα−2+δ ≡ c1 . δ Dn (13.1.9) It is also clear that the inequality (13.1.6) remains true if in it we replace χ by max{0, χ} (this can only make the inequality cruder), so that one can assume without loss of generality that χ 0 (ρ 1). The most ‘dangerous’ values for the quality of the bounds are the small values of ρ. Lower bounds for ρ can be obtained provided that the following condition is satisfied. [N]
For some δ < min{1, α − 2} and γ 0, J(n, δ) = tδα+δ nV (tδ ) cDn−γ .
(13.1.10)
We note straight away that if condition [N] is met for some δ > 0 then it will also be satisfied for any other fixed δ1 < δ, e.g. for δ1 = δ/2. Indeed, since the relation V (tδ1 ) V (tδ )(tδ1 /tδ )−α−δ holds owing to [U2 ], we have −γ 1 1 J(n, δ1 ) = tα+δ nV (tδ1 ) tδα+δ nV (tδ )tδδ11 −δ = J(n, δ)t−δ δ1 δ1 c1 Dn ,
486
Non-identically distributed jumps with finite variances
1 where c1 = ct−δ δ1 . If tδ does not increase too rapidly as δ → 0 (for instance, if tδ = eo(1/δ) ) then the constant c1 ceo(1) will be essentially the same as c. If condition [N] is satisfied then
J(n, δ) cDn−γ ,
ln ρ ln c − (1 + γ) ln Dn .
Hence 0−
ln c ln ρ 1+γ− . ln Dn ln Dn
This means that under condition [N] the quantity χ = χ(n, δ) 0 admits, for any fixed δ > 0, an upper bound which is independent of n, and therefore θ → 0 as s → ∞. Let condition [ · , <]U be satisfied (in the averaged or the ‘individual’ version). Then Theorem 13.1.2 and the above imply the following results. Corollary 13.1.3. If x = sσ(n) and condition [N] is met then, under the conditions of Theorem 13.1.2 (i), (iii), for any ε > 0 and all large enough s, r−ε . (13.1.11) P < nV (x) Corollary 13.1.4. If x = sσ(n) and condition [N] is met then, under the conditions of Theorem 13.1.2 (i), (iii), for any ε > 0 and all large enough s, P(S n x) nV (x)(1 + ε).
(13.1.12)
Assume that Dn → ∞ as n → ∞. Then, for any fixed h > 1 and τ > 0, for x = sσ(n), s2 < (h − τ )/2 and all large enough n, we have P(S n x) e−x
2
/2Dh
.
Corollary 13.1.4 is proved in the same way as Corollary 4.1.4. Remark 13.1.5. As we have already noted, if one studies the probabilities of large deviations of Sn and S n as n → ∞ then one could use a relaxed version [U∞ ] of condition [U] (see Remark 13.1.1). In this case, (13.1.11) and (13.1.12) will hold true for large enough n. In paper [127] upper bounds for the distribution of Sn were obtained in terms of ‘truncated’ moments, without making any assumptions on the existence of regularly varying majorants (such majorants always exist in the case of finite moments but their decay rate will not be the true one). This makes the bounds from [127] more general in a sense but also substantially more cumbersome. One cannot derive from them bounds for Sn of the form (13.1.11), (13.1.12). Some further bounds for the distribution of Sn were obtained in papers [240, 226, 197, 206]. Now we consider an example illustrating the assertions of Theorem 13.1.2 and Corollaries 13.1.3 and 13.1.4 (it is an analogue of Example 12.1.11 on p. 453).
13.1 Upper and lower bounds for the distributions of S n and Sn
487
Example 13.1.6. Let $\xi_i = c_i\zeta_i$, where the $\zeta_i$ are i.i.d. r.v.'s satisfying the conditions
$$E\zeta_i = 0,\qquad E\zeta_i^2 = 1,\qquad P(\zeta \ge t) = V_{(\zeta)}(t) := Lt^{-\alpha}\quad\text{for}\quad t \ge t_0 > 0.$$
Then $V_i(t) = V_{(\zeta)}(t/c_i)$ for $t \ge c_it_0$. If $c_* = \inf_{i\ge1} c_i > 0$ and $c^* = \sup_{i\ge1} c_i < \infty$ then
$$D_n = \sum_{i=1}^n c_i^2 \sim c_{(n)}n,\qquad J(n,\delta) \sim c^{(n)}n,\qquad \rho \sim \frac{c^{(n)}}{c_{(n)}},$$
and $\ln\rho = o(\ln D)$ as $n\to\infty$; here $c_{(n)} \ge c_*^2 > 0$ and $c^{(n)} \le c^* < \infty$ are bounded sequences, so that in (13.1.6) one can use the representation (13.1.7).

Now let $c_i \downarrow 0$ as $i\to\infty$. In our case, in condition [U] and in the formulations that follow it, one can put $\alpha_i = \alpha_* = \alpha^* = \alpha$, $\delta = 0$ and $t_\delta = t_0$. Assume for simplicity that $c_i = i^{-\gamma}$, $\gamma > 0$. Then, for $\gamma < 1/\alpha$,
$$D_n \sim \frac{n^{1-2\gamma}}{1-2\gamma},\qquad J(n,\delta) = t_0^\alpha\sum_{i=1}^n V_{(\zeta)}(t_0 i^{\gamma}) = L\sum_{i=1}^n i^{-\alpha\gamma} \sim \frac{Ln^{1-\alpha\gamma}}{1-\alpha\gamma}.$$
From this it follows that $\rho \sim cn^{-\gamma(\alpha-2)}$ as $n\to\infty$ and that
$$-\frac{\ln\rho}{\ln D} \sim \frac{\gamma(\alpha-2)}{1-2\gamma},\qquad \chi \sim \frac{2\gamma}{1-2\gamma}.$$
Thus, in this case we have the relations (13.1.6), (13.1.11) and (13.1.12), in which one can put
$$\theta = \frac{hr^2}{4s^2}\Bigl(1 + \frac{2\gamma}{1-2\gamma} + b\,\frac{\ln s}{\ln D}\Bigr).$$
If $c_i \uparrow \infty$ as $i\to\infty$ then condition [U] is not satisfied. However, as in Example 12.1.11, the problem can then be reduced, in a sense, to the previous one. Introduce new independent r.v.'s
$$\xi_i^* := \frac{\xi_{n-i+1}}{c_n} = \frac{c_{n-i+1}}{c_n}\,\zeta_{n-i+1},\qquad i = 1,\dots,n,$$
so that again we have a representation of the form $\xi_i^* = c_i^*\zeta_i$ with decreasing coefficients $c_i^* = c_{n-i+1}/c_n$, but now in a 'triangular array scheme', since the $c_i^*$ depend on $n$. We have
$$S_n^* = \sum_{i=1}^n \xi_i^* = \frac{S_n}{c_n},\qquad P(S_n \ge x) = P(S_n^* \ge x^*)\quad\text{for}\quad x^* = \frac{x}{c_n}.$$
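The asymptotic formulas for $D_n$ and $J(n,\delta)$ in this example are easy to verify numerically. The following sketch is merely an illustration; the parameter values $\gamma = 0.2$, $\alpha = 3$, $L = 1$ and the cut-off $n = 10^6$ are ad hoc choices, not taken from the text:

```python
# Numerical check of the asymptotics in Example 13.1.6 for c_i = i^(-gamma):
#   D_n = sum c_i^2 ~ n^(1-2*gamma)/(1-2*gamma),
#   J(n, delta) = L * sum i^(-alpha*gamma) ~ L * n^(1-alpha*gamma)/(1-alpha*gamma).
# gamma=0.2, alpha=3.0, L=1.0 are illustrative choices (gamma < 1/alpha holds).

gamma, alpha, L = 0.2, 3.0, 1.0
n = 10**6

D_n = sum(i ** (-2 * gamma) for i in range(1, n + 1))
J_n = L * sum(i ** (-alpha * gamma) for i in range(1, n + 1))

D_asym = n ** (1 - 2 * gamma) / (1 - 2 * gamma)
J_asym = L * n ** (1 - alpha * gamma) / (1 - alpha * gamma)

print(D_n / D_asym)  # both ratios approach 1 as n grows
print(J_n / J_asym)
```

For sums of the form $\sum i^{-s}$ with $0 < s < 1$ the relative error of the asymptotic equivalent decays like $n^{-(1-s)}$, so at $n = 10^6$ both ratios are already within a fraction of a per cent of 1.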
Proof of Theorem 13.1.2. The proof follows the scheme used in that of Theorem 4.1.2. We will go into detail only where the fact that the ξi are non-identically distributed requires us to amend the argument.
Non-identically distributed jumps with finite variances
(i) As before, the proof of such an assertion will be based on the main inequality (12.1.3) (see p. 441), which reduces the problem to bounding the sums (see (12.1.16))
$$R(\mu,y) = \frac{1}{n}\sum_{j=1}^n R_j(\mu,y),\qquad R_i(\mu,y) = \int_{-\infty}^{y} e^{\mu t}\,F_i(dt),$$
so that
$$R(\mu,y) = \int_{-\infty}^{y} e^{\mu t}\,F(dt).$$
Exactly these integrals were estimated in the proofs of Theorems 3.1.1 and 4.1.2. We again put $M(v) = v/\mu$. Then, in complete analogy with (4.1.19)–(4.1.21) (see p. 186), we obtain $R(\mu,y) = I_1 + I_2$, where
$$I_1 = \int_{-\infty}^{M(\varepsilon)} e^{\mu t}\,F(dt) \le 1 + \frac{\mu^2hD}{2n},\qquad h = e^{\varepsilon}.$$
Next we will bound
$$I_2 = -\int_{M(\varepsilon)}^{y} e^{\mu t}\,dF_+(t) \le V(M(\varepsilon))e^{\varepsilon} + \mu\int_{M(\varepsilon)}^{y} V(t)e^{\mu t}\,dt.$$
First consider, for $M(\varepsilon) < M := M(2\alpha) < y$, the integral
$$I_{2,1} := \mu\int_{M(\varepsilon)}^{M} V(t)e^{\mu t}\,dt.$$
Owing to condition [U] the asymptotic equivalence relation (4.1.24) will be uniform in $n$, and therefore we have an analogue of the inequality (4.1.25): $I_{2,1} \le cV(1/\mu)$. The integral
$$I_{2,2} := \mu\int_{M}^{y} V(t)e^{\mu t}\,dt$$
is bounded using the same considerations as those employed in § 2.2, see (2.2.11)–(2.2.15) (p. 88), and in § 12.1 when estimating $I_3$. We have
$$I_{2,2} \le V(M)e^{2\alpha} + \mu\int_{M}^{y} V(t)e^{\mu t}\,dt \equiv V(M)e^{2\alpha} + I_3^0,$$
where $I_3^0 = \mu\int_M^y V(t)e^{\mu t}\,dt$. Next we derive that (cf. (12.1.20), (12.1.21))
$$I_3^0 \le e^{\mu y}V(y)\bigl(1+\varepsilon(\lambda)\bigr),$$
where $\varepsilon(\lambda)\to0$ as $\lambda = \mu y\to\infty$. As a result, we obtain
$$n\bigl(R(\mu,y)-1\bigr) \le \frac{\mu^2hD}{2} + cnV\Bigl(\frac{1}{\mu}\Bigr) + nV(y)e^{\mu y}\bigl(1+\varepsilon(\lambda)\bigr)$$
(cf. (4.1.27)). Since for $k \le n$ one has
$$\sum_{i=1}^k d_i \le D,\qquad \sum_{i=1}^k V_i(t) \le nV(t),$$
inequalities of the same form (with the same right-hand sides) will also hold for the quantities
$$\max_{k\le n}\sum_{i=1}^k \bigl(R_i(\mu,y)-1\bigr),\qquad \sum_{i=1}^k \bigl(R_i(\mu,y)-1\bigr).$$
Therefore, as in (12.1.24), we obtain
$$\max_{k\le n}\prod_{i=1}^k R_i(\mu,y) \le \exp\Bigl\{\max_{k\le n}\sum_{i=1}^k \bigl(R_i(\mu,y)-1\bigr)\Bigr\} \le \exp\Bigl\{\frac{\mu^2hD}{2} + cnV\Bigl(\frac{1}{\mu}\Bigr) + nV(y)e^{\mu y}\bigl(1+\varepsilon(\lambda)\bigr)\Bigr\}. \qquad (13.1.13)$$
Next we put, as in § 4.1,
$$\mu := \frac{1}{y}\ln T,\qquad T := \frac{r}{nV(y)}.$$
Then $\lambda = \ln T\to\infty$ as $nV(y)\to0$, and (cf. (4.1.29))
$$\max_{k\le n}\prod_{i=1}^k R_i(\mu,y) \le \exp\Bigl\{\frac{\mu^2hD}{2} + cnV\Bigl(\frac{1}{\mu}\Bigr) + r\bigl(1+\varepsilon(\lambda)\bigr)\Bigr\}.$$
Here, by virtue of [U], one has, uniformly in $n$,
$$V\Bigl(\frac{1}{\mu}\Bigr) = V\Bigl(\frac{y}{\ln T}\Bigr) \sim cV\Bigl(\frac{y}{|\ln nV(y)|}\Bigr) \le cV(y)\bigl|\ln nV(y)\bigr|^{\alpha+\delta},\qquad \delta > 0,$$
so that
$$nV\Bigl(\frac{1}{\mu}\Bigr) \le cnV(y)\bigl|\ln nV(y)\bigr|^{\alpha+\delta} \to 0\qquad\text{as}\quad nV(y)\to0. \qquad (13.1.14)$$
Hence (cf. (4.1.31))
$$\ln P \le -r\ln T + r + \frac{hD}{2y^2}\ln^2T + \varepsilon_1(T) = -\Bigl(r - \frac{hD}{2y^2}\ln T\Bigr)\ln T + r + \varepsilon_1(T), \qquad (13.1.15)$$
where $\varepsilon_1(T)\to0$ as $T\to\infty$. Owing to $[U_1]$, we have $\ln T = -\ln nV(x) + O(1)$. Next we will bound $nV(x)$ from below. It follows from $[U_2]$ that for $x > t_\delta$ one has
$$nV(x) \ge J(n,\delta)x^{-\alpha-\delta}. \qquad (13.1.16)$$
Hence, putting $J := J(n,\delta)$, $\rho := J/D$ and $\alpha' := \alpha+\delta$, we obtain that, for $x = s\sigma(n)\to\infty$, $\sigma(n) = \sqrt{(\alpha'-2)D\ln D}$,
$$nV(x) \ge Jx^{-\alpha'} = Js^{-\alpha'}(\alpha'-2)^{-\alpha'/2}D^{-\alpha'/2}(\ln D)^{-\alpha'/2} = cD^{(2-\alpha')/2}s^{-\alpha'}(\ln D)^{-\alpha'/2}\rho. \qquad (13.1.17)$$
Therefore
$$\ln T = -\ln nV(x) + O(1) \le \frac{\alpha'-2}{2}\ln D + \alpha'\ln s - \ln\rho + \frac{\alpha'}{2}\ln\ln D + O(1) \qquad (13.1.18)$$
$$= \frac{\alpha'-2}{2}\,\ln D\,\Bigl(1 + \frac{2\alpha'}{\alpha'-2}\,\frac{\ln s}{\ln D} - \frac{2}{\alpha'-2}\,\frac{\ln\rho}{\ln D}\Bigr)(1+o(1)).$$
Since $s \ge 1$, $\rho \le 1$ and
$$\frac{2\alpha'}{\alpha'-2} \le \frac{2\alpha}{\alpha-2},\qquad \frac{1}{\alpha'-2} \le \frac{1}{\alpha-2},$$
we obtain
$$\ln T \le \frac{\alpha'-2}{2}\,\ln D\,\Bigl(1 + b\,\frac{\ln s}{\ln D} - \frac{2}{\alpha-2}\,\frac{\ln\rho}{\ln D}\Bigr)(1+o(1)).$$
Why the relative remainder term in (13.1.18) has the form o(1) can be explained as follows. The convergence x → ∞, which is necessary for nV (x) → 0, means that either s → ∞ or D → ∞. If s → ∞, D < c then the value in the large parentheses in (13.1.18) increases unboundedly. Hence the terms ln ln D + O(1) translate into a factor o(1) times the expression in the large parentheses. Alternatively, if D → ∞ then also ln D → ∞, and we again obtain o(1) for the same reason. If ln ρ = o(ln D) (which is only possible when D → ∞) then the representation (13.1.18) will coincide with (4.1.32) provided that in the latter we replace n by D.
As in § 4.1, (13.1.18) implies that
$$\frac{hD}{2y^2}\ln T \le \frac{hr^2}{4s^2}\Bigl(1+\frac{\delta}{\alpha-2}\Bigr)\Bigl(1 + b\,\frac{\ln s}{\ln D} - \frac{2}{\alpha-2}\,\frac{\ln\rho}{\ln D}\Bigr)(1+o(1)). \qquad (13.1.19)$$
Since $\delta$ can be chosen arbitrarily small we obtain that, owing to (13.1.15), one has the following bound with a new value of $h$, somewhat greater than that in (13.1.19):
$$\ln P \le r - \Bigl[r - \frac{hr^2}{4s^2}\Bigl(1 + b\,\frac{\ln s}{\ln D} + \chi\Bigr)\Bigr]\ln T.$$
This proves the first assertion of the theorem.

(ii) Now we will prove the second part of the theorem. We will make use of (13.1.13), where we put
$$\mu := \frac{x}{Dh}.$$
Then, for $y = x$ ($r = 1$),
$$\ln P \le -\mu x + \frac{\mu^2hD}{2} + cnV\Bigl(\frac{1}{\mu}\Bigr) + nV(y)e^{\mu y}(1+o(1)) = -\frac{x^2}{2Dh} + cnV\Bigl(\frac{Dh}{x}\Bigr) + e^{x^2/Dh}nV(x)(1+o(1)). \qquad (13.1.20)$$
Here, for $s^2 \ge c$, we have from $[U_2]$ that
$$nV\Bigl(\frac{Dh}{x}\Bigr) \le c_1nV\Bigl(\sqrt{\frac{D}{\ln D}}\Bigr),$$
where, for $t \ge t_\delta$,
$$V(t) \le V(t_\delta)\Bigl(\frac{t}{t_\delta}\Bigr)^{-\alpha+\delta}.$$
Hence for $\delta < \alpha - 2$ we find from (13.1.4) and (13.1.9) that, for $\alpha'' := \alpha - \delta$,
$$nV\Bigl(\frac{Dh}{x}\Bigr) \le c_2D^{-\alpha''/2}J \le c_3D^{1-\alpha''/2} \to 0\qquad\text{as}\quad D\to\infty. \qquad (13.1.21)$$
Consider the last term in (13.1.20). When
$$\frac{1}{(\alpha''-2)\ln D} \le s^2 \le \frac{(\alpha-2)(h-\tau)}{2(\alpha''-2)} \qquad (13.1.22)$$
(recall that $x \ge \sqrt{D}$), we find in a similar way, using (13.1.9), that
$$nV(x) \le c_1x^{-\alpha''}J(n,\delta) < c_2s^{-\alpha''}D^{1-\alpha''/2}(\ln D)^{-\alpha''/2}$$
and
$$\frac{x^2}{Dh} \le \frac{(h-\tau)(\alpha''-2)\ln D}{2h} = \Bigl(\frac{\alpha''-2}{2} - \frac{\tau(\alpha''-2)}{2h}\Bigr)\ln D. \qquad (13.1.23)$$
Therefore
$$nV(x)e^{x^2/Dh} \le cD^{-\tau(\alpha''-2)/2h}(\ln D)^{\alpha''/2} \to 0 \qquad (13.1.24)$$
as $D\to\infty$. Thus,
$$\ln P \le -\frac{x^2}{2Dh} + o(1). \qquad (13.1.25)$$
The term $o(1)$ could be removed from (13.1.25) by slightly changing $h > 1$. Since by choosing a suitable $\delta$ one can make the ratio $(\alpha''-2)/(\alpha-2)$ in (13.1.22) arbitrarily close to 1, we can replace the upper bound for $s^2$ in (13.1.22) by $s^2 \le (h-\tau)/2$, again slightly changing, if necessary, either $h$ or $\tau$ (see the end of the proof of Theorem 4.1.2 for a remark concerning this observation). This completes the proof of the second assertion.

(iii) To prove the third assertion, which uses 'individual' conditions [U] for the distributions $F_j$, one bounds the quantities $R_j(\mu,y)$ separately for each $j$. The whole argument that we used above to derive 'averaged' bounds will remain valid, as in this case the uniformity condition [U] will be satisfied for 'individual' distributions. Further, the relations (13.1.16), (13.1.18) will also remain true provided that in them we replace $\alpha$ by $\alpha^* = \max_{j\le n}\alpha_j$ and put $\alpha' = \alpha^* + \delta$. In the proof of the second assertion, one replaces $\alpha$ by $\alpha_* = \min_{j\le n}\alpha_j$ and lets $\alpha'' = \alpha_* - \delta$. Thus the inequalities (13.1.23), (13.1.24) will hold true when
$$\frac{1}{(\alpha^*-2)\ln D} \le s^2 \le \frac{(h-\tau)(\alpha_*-2)}{2(\alpha^*-2)}.$$
The theorem is proved.
13.1.2 Upper bounds for the distributions of $\overline{S}_n$. The second version of the conditions

Along with condition [ · , <]_U one can consider a similar but somewhat simpler version thereof, which is analogous to condition [UR] of Chapter 12. In this way one can get rid of the additional and not quite convenient condition [N], which was present in Corollaries 13.1.3 and 13.1.4.

We will say that condition [ · , <]_UR is satisfied if $F_+(t)$ admits a majorant $V(t)$ such that

[UR] $$\lim_{n\to\infty,\ t\to\infty} \frac{nV(t)}{H(n)V_0(t)} = 1, \qquad (13.1.26)$$

where $H(n) \le n$ is a non-decreasing function, $H(1) = 1$ and $V_0(t)$ is an r.v.f. that is fixed (independent of $n$). If we study the asymptotics of $P(\overline{S}_n \ge x)$ when the value $n$ remains fixed as $x\to\infty$ then condition [ · , <]_UR means that the regularly varying majorant $V(t) = V_0(t)$ is fixed.
It is not difficult to see that condition [UR] implies [U]. The relation (13.1.26) clearly means that, for a given $\delta > 0$, there exist an $n_\delta$ and a $t_\delta$ such that
$$nV(t) = \bigl(1+\delta(n,t)\bigr)H(n)V_0(t),\qquad\text{where } |\delta(n,t)| \le \delta \text{ for } n \ge n_\delta,\ t \ge t_\delta.$$

Theorem 13.1.7. Let $E\xi_j = 0$ and let the averaged distribution $F$ satisfy condition [ · , <]_UR. Then the following assertions hold true.

(i) The assertion of Theorem 13.1.2(i) remains true, provided that there we put $\rho := \min\{1, H(n)/D_n\}$, so that $\rho \ge D_n^{-1}$, $\chi \le 2/(\alpha-2)$.

(ii) Let $n\to\infty$, $D = D_n\to\infty$. Then, for
$$x = s\sigma(n) \ge \sqrt{D},\qquad s \le s_0,\qquad H(n) \le D^{\alpha/2 - s_0^2(\alpha-2)} \qquad (13.1.27)$$
and all large enough $n$, we have (13.1.8).

Note that in Example 13.1.6 with $c_i = i^{-\gamma}$, $\gamma \ge 0$, one has
$$D_n \sim \frac{n^{1-2\gamma}}{1-2\gamma},\qquad H(n) \sim n^{1-\alpha\gamma} \ge 1$$
for $\gamma \le 1/\alpha$, so that the last inequality in (13.1.27) is equivalent to
$$1-\alpha\gamma < \frac{\alpha}{2}(1-2\gamma) - s_0^2(\alpha-2)(1-2\gamma),$$
that is, $s_0^2 < (2(1-2\gamma))^{-1}$.

Proof of Theorem 13.1.7. (i) The proof of the first assertion of the theorem repeats that of Theorem 13.1.2(i) up to the relation (13.1.14). Further, for any fixed $\delta > 0$ we have
$$nV\Bigl(\frac{1}{\mu}\Bigr) \sim H(n)V_0\Bigl(\frac{1}{\mu}\Bigr) \sim cH(n)V_0\Bigl(\frac{y}{|\ln(H(n)V_0(y))|}\Bigr) \le cH(n)V_0(y)\bigl|\ln(H(n)V_0(y))\bigr|^{\alpha+\delta} \to 0$$
as $nV(y) = H(n)V_0(y) \to 0$. Hence the relation (13.1.15) remains true.

As in § 13.1.1, we will now bound $nV(x)$ from below. By virtue of (13.1.26), we have, for $n \ge n_\delta$, $x \ge t_\delta$,
$$nV(x) \ge (1-\delta)H(n)V_0(x).$$
Setting $\alpha^* := \alpha+\delta$, $x = s\sigma(n)$ and $\sigma(n) := \sqrt{(\alpha^*-2)D\ln D}$, we obtain, cf. (13.1.17), that
$$nV(x) \ge cH(n)s^{-\alpha^*}D^{-\alpha^*/2}(\ln D)^{-\alpha^*/2} = cs^{-\alpha^*}D^{(2-\alpha^*)/2}(\ln D)^{-\alpha^*/2}\rho,$$
where $\rho := H(n)/D_n$. This implies (13.1.18) but with a new value of $\rho$, with
regard to which one can again assume that $\rho \le 1$ (if a value $\rho > 1$ is replaced by $\rho = 1$ then the bounds (13.1.18) and (13.1.6) can only become cruder). The entire subsequent argument in the proof of Theorem 13.1.2(i) remains valid. Since $\rho \ge 1/D_n$, we have $\chi \le 2/(\alpha-2)$. In the case of bounded $n$, the argument becomes even simpler.

(ii) The proof of the second assertion also differs little from the argument demonstrating Theorem 13.1.2. All the calculations up to (13.1.21) remain the same (except that the reference to condition $[U_2]$ should be replaced by a reference to condition [UR]). Instead of (13.1.21) we will have, by virtue of [UR], that, for $\alpha'' = \alpha - \delta$, $n\to\infty$ and $s \le s_0$,
$$nV\Bigl(\frac{Dh}{x}\Bigr) = H(n)V_0\Bigl(\frac{Dh}{x}\Bigr)(1+o(1)) \le cH(n)\Bigl(\frac{D}{\ln D}\Bigr)^{-\alpha''/2} \to 0$$
for small enough $\delta > 0$, owing to (13.1.27). Further, for the last term in (13.1.20), we find, for
$$\bigl((\alpha''+2)\ln D\bigr)^{-1} \le s^2 \le s_0^2$$
(recall that $x \ge \sqrt{D}$), that
$$nV(x) \le cH(n)x^{-\alpha''} \le cH(n)D^{-\alpha''/2},\qquad \frac{x^2}{Dh} \le \frac{s^2(\alpha-2)\ln D}{h}.$$
Therefore
$$nV(x)e^{x^2/Dh} \le cH(n)D^{s_0^2(\alpha-2)/h - \alpha''/2} \to 0$$
as $n\to\infty$, when $H(n) \le D^{\alpha/2 - s_0^2(\alpha-2)}$ and $\delta$ is small enough. Together with (13.1.21), this demonstrates (13.1.25) and hence the second assertion of the theorem as well. The theorem is proved.

It is not hard to see that the assertions of Corollaries 13.1.3 and 13.1.4 will remain true provided that in them we replace conditions [ · , <]_U and [N] by condition [ · , <]_UR.
13.1.3 Lower bounds for the distributions of the sums $S_n$

The lower bounds are based on the main inequality of Theorem 12.1.9, which states that
$$P(S_n \ge x) \ge \sum_{j=1}^n F_{j,+}(y)\bigl(1 - Q_n^j(u)\bigr) - \frac{1}{2}\bigl(n\overline{F}_+(y)\bigr)^2,$$
where
$$Q_n^j(u) = P\Bigl(\frac{S_n^j}{K(n)} < -u\Bigr),\qquad S_n^j = S_n - \xi_j,$$
$K(n) > 0$ is an arbitrary sequence and $y = x + uK(n)$. Let
$$D_n^j := D_n - d_j,\qquad \overline{D}_n = \max_{j\le n} D_n^j \le D_n.$$
Observe that since $\overline{D}_n = D_n - \min_{j\le n} d_j$ we always have $\overline{D}_n \ge D_n(1-1/n)$ for $n \ge 2$, so that $\overline{D}_n = D_n(1+o(1))$ as $n\to\infty$. Set $K(n) := \sqrt{D_n}$. Then, by Chebyshev's inequality,
$$Q_n^j(u) = P\Bigl(\frac{S_n^j}{\sqrt{D_n}} < -u\Bigr) \le P\Bigl(\frac{S_n^j}{\sqrt{D_n^j}} < -u\Bigr) \le u^{-2},$$
and we obtain the following assertion.

Theorem 13.1.8. Let $E\xi_j = 0$, $E\xi_j^2 = d_j < \infty$. Then, for $y = x + u\sqrt{D_n}$,
$$P(S_n \ge x) \ge n\overline{F}_+(y)\Bigl(1 - u^{-2} - \frac{1}{2}\,n\overline{F}_+(y)\Bigr).$$
The form of this assertion almost coincides with that of Theorem 4.3.1.

Corollary 13.1.9. For $x^2 \gg D_n$, $u\to\infty$,
$$P(S_n \ge x) \ge n\overline{F}_+(y)(1+o(1)).$$
Proof of Corollary 13.1.9. If $x^2 \gg D_n$ then
$$n\overline{F}_+(y) = \sum_{j=1}^n F_{j,+}(y) \le \sum_{j=1}^n F_{j,+}(x) \le \sum_{j=1}^n \frac{d_j}{x^2} = \frac{D_n}{x^2} \to 0.$$
Letting $u\to\infty$, we obtain the desired assertion from Theorem 13.1.8.

Corollary 13.1.10. Let $x^2 \gg D_n$ and let the averaged distribution $F$ satisfy conditions [ · , >] and $[U_1]$ or conditions [ · , >] and [UR]. Then
$$P(S_n \ge x) \ge nV(x)(1+o(1)).$$
Proof. The proof of the corollary is obvious. It follows from Corollary 13.1.9, because if the conditions $[U_1]$ (or [UR]), $x^2 \gg D_n$ and $u = o(x/\sqrt{D_n})$ are satisfied then we have $x \sim y$ and so $nV(y) \sim nV(x)$.
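The message of this subsection — that for heavy-tailed zero-mean jumps the sum exceeds a high level essentially due to one large jump, so that $P(S_n \ge x)$ is close to $n\overline{F}_+(x)$ — can be illustrated by a small Monte Carlo experiment. The jump law below (a symmetric Pareto-type tail $P(|\xi| > t) = t^{-3}$, $t \ge 1$, so $E\xi = 0$ and $E\xi^2 = 3 < \infty$) and all numeric parameters are illustrative choices, not from the text:

```python
import random

# Monte Carlo illustration of the 'one big jump' principle behind
# Theorem 13.1.8 and Corollaries 13.1.9-13.1.10: for zero-mean
# heavy-tailed jumps, P(S_n >= x) ~ n*V(x) for large x, V(t) = P(xi >= t).
# Illustrative jump law: symmetric Pareto, P(|xi| > t) = t^(-3), t >= 1.

random.seed(1)
n, x, trials = 10, 25.0, 300_000
alpha = 3.0

hits = 0
for _ in range(trials):
    s = 0.0
    for _ in range(n):
        mag = random.random() ** (-1.0 / alpha)        # |xi| via inverse transform
        s += mag if random.random() < 0.5 else -mag    # random sign -> mean zero
    if s >= x:
        hits += 1

estimate = hits / trials
prediction = n * 0.5 * x ** (-alpha)   # n * V(x), here V(t) = t^(-3)/2
print(estimate, prediction)
```

With the fixed seed the empirical frequency typically lands within a few tens of per cent of $nV(x)$; the residual gap reflects the Gaussian contribution and second-order tail terms, which disappear as $x/\sqrt{D_n}$ grows.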
13.2 Asymptotics of the probability of the crossing of an arbitrary remote boundary

Under somewhat excessive conditions, the desired asymptotics for the distributions of $\overline{S}_n$ and $S_n$ can be obtained from the bounds derived in § 13.1. As in our previous exposition, we will say that the averaged distribution
$$F = \frac{1}{n}\sum_{j=1}^n F_j$$
satisfies condition [ · , =]_U ([ · , =]_UR) if it satisfies condition [ · , =] with the function $V$ satisfying condition [U] ([UR]) of § 13.1. Recall also that condition [N] from § 13.1 has the following form:

[N] For some $\delta < \min\{1, \alpha-2\}$ and $\gamma \ge 0$,
$$J(n,\delta) = t_\delta^{\alpha+\delta}\,nV(t_\delta) \ge cD^{-\gamma},$$
where $t_\delta$ is from condition [U].

Theorem 13.2.1. Let $E\xi_j = 0$ and the averaged distribution $F$ satisfy the conditions [ · , =]_U, [N] and let $x \gg \sqrt{D_n\ln D_n}$. Then
$$P(S_n \ge x) = nV(x)(1+o(1)),\qquad P(\overline{S}_n \ge x) = nV(x)(1+o(1)). \qquad (13.2.1)$$
The assertion remains true if we replace conditions [ · , =]_U and [N] by condition [ · , =]_UR.

Under condition [UR] with $H(n) = n$ (see p. 492), the asymptotic relation $P(S_n \ge x) \sim nV(x)$ for $x > cn$ was established in [218].

The assertion of Theorem 13.2.1 can be sharpened. Namely, one can find an explicit value of $c$ such that (13.2.1) holds true for $x = s\sqrt{(\alpha-2)D\ln D}$, $s \ge c$. To achieve this, one would need to do a more detailed analysis, similar to that in the proof of Theorem 4.4.4 (cf. Remark 4.4.2 on p. 197). The assertion of the theorem could be stated in a uniform version, like that of Theorem 4.4.1. This follows from the fact that all the upper and lower bounds obtained above are explicit and uniform.

Moreover, Theorem 13.2.1 can be extended to the case of arbitrary boundaries. Then, however, it is more natural to use 'individual' conditions (see § 13.1), where each tail $F_{j,+}$ satisfies condition [ · , =]_U. As before, let $G_{x,n}$ be the class of boundaries $\{g(k)\}$ for which $\min_{k\le n} g(k) = cx$, $c > 0$. The following analogue of Theorem 4.6.7 for the probability of the event
$$G_n = \Bigl\{\max_{k\le n}\bigl(S_k - g(k)\bigr) \ge 0\Bigr\}$$
holds true.

Theorem 13.2.2. Let $E\xi_j = 0$ and the distributions $F_j$ satisfy condition [ · , =]_U uniformly in $j$ and $n$ with a common $\alpha = \alpha_j$, $j = 1,\dots,n$. Moreover, let the averaged distribution $F$ satisfy condition [N]. Then there exists a $c_1 < \infty$ such that, for $x > c_1\sqrt{D\ln D}$, $x\to\infty$,
$$P(G_n) = \Bigl[\sum_{j=1}^n V_j\bigl(g_*(j)\bigr)\Bigr](1+o(1)) + O\bigl(n^2V^2(x)\bigr), \qquad (13.2.2)$$
where $g_*(j) = \min_{k\ge j} g(k)$ and the remainder terms $o(\cdot)$ and $O(\cdot)$ are uniform over the class $G_{x,n}$ of boundaries and the class $\mathcal{F}$ of distributions $\{F_j\}$ that satisfy conditions [ · , =]_U and [N].

The above assertion remains true if conditions [ · , =]_U and [N] are replaced by [ · , =]_UR.

Recall that when $n\to\infty$ condition [U] can be replaced by a weaker condition, $[U_\infty]$ (see Remark 13.1.1). It follows from (13.2.2) that when
$$\max_{k\le n} g(k) < c_1x \qquad (13.2.3)$$
we have
$$P(G_n) \sim \sum_{j=1}^n V_j\bigl(g_*(j)\bigr). \qquad (13.2.4)$$
Proof. The proof of Theorem 13.2.2 is similar to those of Theorems 3.6.4 (see p. 156) and 12.2.1, its argument repeating almost verbatim the respective arguments for those theorems (up to obvious amendments due to the fact that now $d_j = E\xi_j^2$). Therefore it will be omitted.

We could also obtain here analogues of all the main assertions of § 12.3 on the probability of the crossing of an arbitrary boundary on an unbounded time interval. We will briefly discuss them. Consider a class of boundaries $\{g(k)\}$ of the form
$$g(k) = x + g_k,\qquad k = 1, 2, \dots, \qquad (13.2.5)$$
which are defined on the whole axis and lie between two linear functions:
$$c_1ak - p_1x \le g_k \le c_2ak + p_2x,\qquad p_1 < 1,\qquad k = 1, 2, \dots, \qquad (13.2.6)$$
where the constants $0 < c_1 \le c_2 < \infty$ and $p_i > 0$, $i = 1, 2$, do not depend on the parameter of the triangular array scheme, and the variable $a \in (0, a_0)$, $a_0 = \mathrm{const} > 0$, can tend to zero. We are interested in deriving asymptotic representations for $P(G_n)$ that (1) are uniform in $a$ as $a\to0$, and (2) hold for $n$ growing faster than $cx/a$, in particular, for $n = \infty$.
Put $n_1 := x/a$ and introduce, as in § 12.3, the averaged majorant
$$V_{(1)}(x) := \frac{1}{n_1}\sum_{j=1}^{n_1} V_j(x)$$
(we assume for simplicity that $x/a$ is an integer). We will need the homogeneity condition

[H] For $n_k = 2^{k-1}n_1$ and all $k = 1, 2, \dots$ and $t > 0$,
$$c^{(1)}V_{(1)}(t) \le \frac{1}{n_k}\sum_{j=n_k}^{2n_k} F_{j,+}(t) \le c^{(2)}V_{(1)}(t),\qquad \frac{c^{(1)}}{n_1}D_{n_1} \le \frac{1}{n_k}D_{n_k} \le \frac{c^{(2)}}{n_1}D_{n_1}.$$
The second line of inequalities basically means that the values $D_n$ grow almost linearly with $n$. As before, let
$$\overline{S}_n(a) = \max_{k\le n}(S_k - ak),\qquad \eta(x,a) = \inf\{k: S_k - ak \ge x\},$$
$$B_j(v) = \{\xi_j < y + vaj\},\qquad v > 0,\qquad B(v) = \bigcap_{j=1}^n B_j(v).$$
The following theorem bounds probabilities of the form $P\bigl(\overline{S}_n(a) \ge x;\ B(v)\bigr)$, $P\bigl(\overline{S}_n(a) \ge x\bigr)$ and $P\bigl(\infty > \eta(x,a) \ge xt/a\bigr)$ for $n \ge n_1 = x/a$. (For $n \le n_1$ such bounds can be obtained from Theorem 13.2.2.) In what follows, by condition [U] we understand condition $[U_\infty]$ (see Remark 13.1.1).

Theorem 13.2.3. (i) Let the averaged distribution
$$F_{(1)} = \frac{1}{n_1}\sum_{j=1}^{n_1} F_j,\qquad n_1 = x/a, \qquad (13.2.7)$$
satisfy conditions [ · , <]_U and [N] (with $n$ replaced in the latter by $n_1$). Moreover, let condition [H] be met. Then, for $n \ge n_1$,
$$x > \frac{c|\ln a|}{a},\qquad v \le \frac{1}{4r},\qquad r = \frac{x}{y} > \frac{5}{2},$$
we have the inequalities
$$P\bigl(\overline{S}_n(a) \ge x;\ B(v)\bigr) \le c_1\bigl[n_1V_{(1)}(x)\bigr]^{r_1}, \qquad (13.2.8)$$
$$P\bigl(\overline{S}_n(a) \ge x\bigr) \le c_1n_1V_{(1)}(x), \qquad (13.2.9)$$
where $r_1 = r/[2(1+vr)]$ and the constants $c$ and $c_1$ are defined in the proof. Moreover, for any fixed or slowly enough growing $t$,
$$P\Bigl(\infty > \eta(x,a) \ge \frac{xt}{a}\Bigr) \le c_2n_1V_{(1)}(x)t^{1-\alpha}. \qquad (13.2.10)$$
If $t\to\infty$ together with $x$ at an arbitrary rate then the inequality (13.2.10) will remain true provided that we replace the exponent $1-\alpha$ on its right-hand side by $1-\alpha+\varepsilon$, where $\varepsilon > 0$ is an arbitrary fixed number.

(ii) Let the conditions of part (i) be satisfied when the range of $x$ is
$$x < \frac{c|\ln a|}{a},$$
and assume additionally that $D_n\to\infty$ as $n\to\infty$. Then, for any fixed $h > 1$ and all large enough $n$,
$$P\bigl(\overline{S}_n(a) \ge x\bigr) \le c_1e^{-xa/2dh},\qquad d := \frac{D_{n_1}}{n_1}.$$
If $x = o(|\ln a|/a)$ then, for any fixed or slowly enough growing $t$,
$$P\Bigl(\infty > \eta(x,a) \ge \frac{xt}{a}\Bigr) \le c_1e^{-\gamma t}, \qquad (13.2.11)$$
where the constants $\gamma$ and $c_1$ can be found explicitly.

The assertion of the theorem remains true if conditions [ · , =]_U and [N] are replaced by [ · , =]_UR.

It follows from the theorem that there exist constants $\gamma > 0$ and $c < \infty$ such that, for $n \ge n_1 = x/a$ and all $x$, one has
$$P\bigl(\overline{S}_n(a) \ge x\bigr) \le c\max\Bigl\{e^{-\gamma xa/d},\ \frac{x}{a}V(x)\Bigr\}. \qquad (13.2.12)$$

Proof. The proof of the theorem repeats those of Theorems 4.2.1 and 12.3.1. The only difference is that now for $n \ge n_1$ we use Theorem 13.1.2 (instead of Theorem 4.1.2, as was the case in § 4.2). In comparison with Theorem 12.3.1, the exponent of the product $n_1V_{(1)}(x)$ in (13.2.8) is different (the $r_0$ in (12.3.6) is replaced by $r_1 = r_0/2$ in (13.2.8)). This is due to the fact that, as in Theorem 4.2.1, we can only use inequality (13.1.6) of Theorem 13.1.2 with the exponent $r/2$ on its right-hand side if we ensure that $\theta \le r/2$. This last inequality will hold if
$$s^2 = \frac{cx^2}{n_1\ln n_1} > c_3 \qquad (13.2.13)$$
with a suitable constant $c_3$. As in (4.2.8), one can verify that (13.2.13) will be true provided that
$$x > \frac{c_4|\ln a|}{a}\qquad\text{or}\qquad a > \frac{c_4\ln x}{x}$$
for a suitable $c_4$. The rest of the argument does not differ from that used to prove Theorems 4.2.1 and 12.3.1. The theorem is proved.
Now we return to considering boundaries of the more general form (13.2.5). Let $\eta_g(x) := \min\{k: S_k - g_k \ge x\}$.

Corollary 13.2.4. Let the conditions of Theorem 13.2.3(i) and conditions (13.2.5), (13.2.6) be satisfied. Then, for $t$ bounded or growing slowly enough,
$$P\Bigl(\infty > \eta_g(x) \ge \frac{xt}{a}\Bigr) \le \frac{cxV_{(1)}(x)}{a}\,t^{1-\alpha}. \qquad (13.2.14)$$
If $t\to\infty$ at an arbitrary rate then the inequality (13.2.14) will remain true provided that we replace the exponent $1-\alpha$ on its right-hand side by $1-\alpha+\varepsilon$, where $\varepsilon > 0$ is an arbitrary fixed number.

Proof. The assertion of the corollary follows from Theorem 13.2.3 and the inequality (see (13.2.6))
$$P\Bigl(\infty > \eta_g(x) \ge \frac{xt}{a}\Bigr) \le P\Bigl(\infty > \eta\bigl((1-p_1)x,\ c_1a\bigr) \ge \frac{xt}{a}\Bigr).$$

Observe that a similar corollary can be obtained under the conditions of Theorem 13.2.3(ii). Now put
$$g_j^* := \min_{k\ge j} g_k$$
and identify the parameter of the triangular array scheme with $a$. It is not hard to see that $\{g_j^*\}$ is non-decreasing and, together with $\{g_j\}$, satisfies the inequalities (13.2.6) (cf. (12.3.10)).

Theorem 13.2.5. Let $E\xi_j = 0$ and let the distributions $F_j$ satisfy condition [ · , =]_U uniformly in $j$ and $a$, with a common $\alpha = \alpha_j$, $j = 1, 2, \dots$ Moreover, let conditions [H], (13.2.5) and (13.2.6) be met and the averaged distribution $F_{(1)}$ (see (13.2.7)) with $n = n_1 = x/a$ satisfy condition [N]. Further, let $x > c|\ln a|/a$ for a suitable $c$ (see the proof of Theorem 13.2.3). Then
$$P\Bigl(\sup_{k\ge0}(S_k - g_k) \ge x\Bigr) = \Bigl[\sum_{j=1}^{\infty} V_j(x + g_j^*)\Bigr](1+o(1)), \qquad (13.2.15)$$
where the remainder term o(·) is uniform in a and also over the classes of distributions {Fj } and boundaries {gk } that satisfy the conditions of the theorem. The assertion of the theorem remains true if we replace conditions [ · , =]U and [N] by [ · , =]UR . Proof. The proof of the theorem repeats, with obvious amendments, the proof of Theorem 12.3.4.
If we make the homogeneity conditions for the tails and boundaries somewhat stronger then it is possible to obtain a simpler representation for the right-hand side of (13.2.15). Consider the following condition.

[H_Δ] Let condition [H] hold and, for any fixed $\Delta > 0$ and
$$n_\Delta := \Bigl\lfloor\frac{x\Delta}{a}\Bigr\rfloor = \lfloor n_1\Delta\rfloor,$$
let there exist a $g > 0$ and majorants $V_j$ such that uniformly in $k$ one has
$$\frac{1}{n_\Delta}\sum_{j=kn_\Delta+1}^{(k+1)n_\Delta} V_j(x) \sim V_{(1)}(x),\qquad \frac{1}{n_\Delta}\bigl(D_{(k+1)n_\Delta} - D_{kn_\Delta}\bigr) \sim \frac{D_{n_1}}{n_1},\qquad \frac{1}{n_\Delta}\bigl(g_{(k+1)n_\Delta} - g_{kn_\Delta}\bigr) \sim ga$$
as $n\to\infty$.

Corollary 13.2.6. Let the averaged distribution $F_{(1)}$ (see (13.2.7)) satisfy conditions [ · , =]_U and [N] with $n = n_1$. Moreover, assume that $x > c|\ln a|/a$, condition [H_Δ] is met and, for definiteness, $g_1 = o(x)$. Then
$$P\Bigl(\sup_{k\ge0}(S_k - g_k) \ge x\Bigr) \sim \frac{1}{ga}\int_x^{\infty} V_{(1)}(u)\,du \sim \frac{xV_{(1)}(x)}{ga(\alpha-1)}.$$
The above assertion remains true if conditions [ · , =]_U and [N] are replaced by [ · , =]_UR.

Proof. The proof of the corollary repeats that of Corollary 12.3.5.

Corollary 13.2.6 implies the following.

Corollary 13.2.7. Let $\xi_1, \xi_2, \dots$ be i.i.d. r.v.'s ($F_j = F$, $V_j = V$, $d_j = d < \infty$) and let condition [ · , =]_U and condition [N] with $n = n_1 = x/a$ be satisfied. Then, if $x > c|\ln a|/a$ (see Theorem 13.2.3), we have
$$P\bigl(\overline{S}(a) \ge x\bigr) \sim \frac{xV(x)}{a(\alpha-1)}.$$
The assertion remains true if conditions [ · , =]_U and [N] are replaced by condition [ · , =]_UR.

It is clear that the assertions of Corollaries 13.2.6 and 13.2.7 can be stated in a uniform version, as in Theorem 13.2.5.
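The second equivalence in Corollary 13.2.6 rests on the standard property of regularly varying functions, $\int_x^\infty V(u)\,du \sim xV(x)/(\alpha-1)$. A quick numerical sanity check of this property, with the illustrative choice $V(u) = u^{-\alpha}\ln u$ (regularly varying of index $-\alpha$; not a tail from the text):

```python
import math

# Check: for a regularly varying V(u) = u^(-alpha)*L(u) with alpha > 1,
#   integral_x^infty V(u) du ~ x*V(x)/(alpha - 1)  as x -> infty,
# the identity behind Corollaries 13.2.6 and 13.2.7.
# Illustrative choice: V(u) = u^(-alpha) * ln(u), alpha = 3.

alpha = 3.0
V = lambda u: u ** (-alpha) * math.log(u)

def tail_integral(x, steps=200_000, t_max=40.0):
    # substitute u = x*e^t: integral_x^infty V(u) du = integral_0^infty V(x e^t) * x e^t dt
    h = t_max / steps
    total = 0.0
    for k in range(steps + 1):
        w = 0.5 if k in (0, steps) else 1.0   # trapezoidal weights
        u = x * math.exp(k * h)
        total += w * V(u) * u
    return total * h

x = 1e6
ratio = tail_integral(x) / (x * V(x) / (alpha - 1))
print(ratio)  # tends to 1 as x grows; here 1 + 1/(2 ln x)
```

For this particular $V$ the exact ratio is $1 + 1/(2\ln x)$, so the slow convergence visible numerically matches the regular-variation theory.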
13.3 The invariance principle. Transient phenomena
There is no need for us to go into much detail in this section, since the invariance principle and transient phenomena have already been well covered in the existing literature (the single reservation being that transient phenomena have been studied for i.i.d. jumps only). We will review here the main results (for completeness of exposition) and give brief explanations in those cases where the results are extended.
13.3.1 The invariance principle

As before, let $\xi_1, \xi_2, \dots, \xi_n$ be independent r.v.'s in the triangular array scheme,
$$E\xi_j = 0,\qquad d_j = E\xi_j^2 < \infty,\qquad D_n = \sum_{j=1}^n d_j.$$
In this case, the convergence of the distribution of $S_n/\sqrt{D_n}$ to the limiting normal law is determined by the Lindeberg condition:

[L] For any fixed $\tau > 0$, as $n\to\infty$,
$$\frac{1}{D_n}\sum_{j=1}^n E\bigl[\xi_j^2;\ |\xi_j| > \tau\sqrt{D_n}\bigr] \to 0$$

(see e.g. § 4, Chapter 8 of [49] or § 4, Chapter VIII of [122]). To ensure that the processes
$$\zeta_n(t) := \frac{S_{nt}}{\sqrt{D_n}},\qquad t \in [0,1], \qquad (13.3.1)$$
converge to the standard Wiener process $w(\cdot)$, one also needs the homogeneity condition

$[H^D_\Delta]$ For any fixed $\Delta > 0$, $n_\Delta = \Delta n$ and all $k \le 1/\Delta$,
$$\frac{1}{n_\Delta}\bigl(D_{(k+1)n_\Delta} - D_{kn_\Delta}\bigr) \sim \frac{D_n}{n}\qquad\text{as}\quad n\to\infty.$$
Let $C(0,T)$ be the space of continuous functions on $[0,T]$, endowed with the uniform metric. The trajectories of $\zeta_n(\cdot)$ can be considered as elements of the space $D(0,1)$ of functions without discontinuities of the second kind.

Theorem 13.3.1. Let conditions [L] and $[H^D_\Delta]$ be satisfied. Then, for any measurable functional $f$ on $D(0,1)$ that is continuous in the uniform metric at the points of the space $C(0,1)$, we have the weak convergence of distributions as $n\to\infty$: $f(\zeta_n) \Rightarrow f(w)$, where $w$ is the standard Wiener process.
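Theorem 13.3.1 can be probed numerically through the maximum functional, for which the Wiener limit is explicit: $P(\max_{t\le1} w(t) \ge z) = 2(1-\Phi(z))$ by the reflection principle. The sketch below uses two alternating zero-mean, unit-variance jump laws — an illustrative triangular-array choice, not from the text:

```python
import math, random

# Monte Carlo sketch of the invariance principle (Theorem 13.3.1) for the
# maximum functional: with non-identically distributed zero-mean jumps,
#   P( max_{k<=n} S_k >= z*sqrt(D_n) )  ->  2*(1 - Phi(z)).
# Illustrative jump laws (d_j = 1 for both, so D_n = n): uniform on
# (-sqrt(3), sqrt(3)) at even steps, symmetric two-point +-1 at odd steps.

random.seed(7)
n, z, runs = 400, 1.0, 10_000
threshold = z * math.sqrt(n)

exceed = 0
for _ in range(runs):
    s = 0.0
    for k in range(n):
        if k % 2 == 0:
            step = random.uniform(-math.sqrt(3.0), math.sqrt(3.0))
        else:
            step = 1.0 if random.random() < 0.5 else -1.0
        s += step
        if s >= threshold:
            exceed += 1
            break                      # first crossing is enough for the max

estimate = exceed / runs
limit = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))  # 2*(1 - Phi(z))
print(estimate, limit)
```

The discrete-time maximum slightly undershoots the Wiener limit (an $O(n^{-1/2})$ overshoot correction), so the estimate is a touch below $2(1-\Phi(1)) \approx 0.317$ at $n = 400$.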
Along with the process (13.3.1), one often considers a continuous process $\widetilde{\zeta}_n(\cdot)$ defined as a polygon with nodes at the points
$$\Bigl(\frac{k}{n},\ \frac{S_k}{\sqrt{D_n}}\Bigr),\qquad k = 1,\dots,n.$$
In this case, the assertion of Theorem 13.3.1 can be formulated as the weak convergence of the distributions of $\widetilde{\zeta}_n(\cdot)$ and $w(\cdot)$ in the metric space $C(0,1)$ endowed with the σ-algebra of Borel sets (which coincides with the σ-algebra generated by cylinders).

Proof. The proof of Theorem 13.3.1 follows the standard path: one needs to establish the convergence of finite-dimensional distributions and the compactness of the family of distributions of $\zeta_n(\cdot)$ (see e.g. [28, 44]).

It will be convenient for us to rewrite the assertion of Theorem 13.3.1 in a somewhat different form. Let the totality of the independent r.v.'s $\xi_1, \xi_2, \dots, \xi_n$ be extended (or shortened) to the sequence $\xi_1, \xi_2, \dots, \xi_{nT}$, $T > 0$, and let this new sequence also satisfy conditions [L] and $[H^D_\Delta]$ (in condition [L] we sum up the first $nT$ r.v.'s, and in condition $[H^D_\Delta]$ the value $k$ varies from 1 to $T/\Delta$). The process $\zeta_n(\cdot)$ (see (13.3.1)) will now be defined on the segment $[0,T]$.

Corollary 13.3.2. If the totality of independent r.v.'s $\xi_1, \dots, \xi_{nT}$ satisfies conditions [L] and $[H^D_\Delta]$ then, for any measurable functional $f$ on $D(0,T)$ that is continuous in the uniform metric at the points of $C(0,T)$, we have the weak convergence of distributions as $n\to\infty$: $f(\zeta_n) \Rightarrow f(w)$, where $w$ is the standard Wiener process on $[0,T]$.
13.3.2 Transient phenomena

The essence of transient phenomena was described in sufficient detail in § 12.5. Here there is no real difference. The main result for i.i.d. summands $\xi_j$ has already been given in § 12.5 (see (12.5.1)): for any $z \ge 0$,
$$\lim_{a\to0} P\bigl(a\overline{S}_n(a) \ge z\bigr) = P\Bigl(\max_{t\le T}\bigl(\sqrt{d}\,w(t) - t\bigr) \ge z\Bigr), \qquad (13.3.2)$$
where $w$ is the standard Wiener process, $n = Ta^{-2}$ and $d = \lim_{a\to0} E\xi_j^2$ (see e.g. § 25, Chapter 4 of [42] or [165, 231]). For $n = \infty$ ($T = \infty$) the right-hand side of (13.3.2) is equal to $e^{-2z/d}$. Below we will extend these results to the case of non-identically distributed independent jumps $\xi_j$ in the triangular array scheme. As before, let $F_{(1)}$ be the averaged distribution
$$F_{(1)} = \frac{1}{n_1}\sum_{j=1}^{n_1} F_j\qquad\text{with}\qquad n_1 = a^{-2}.$$

Theorem 13.3.3. Let condition [L] be met for $n = n_1T$ for any fixed $T$, and
$$\frac{D_n}{n} \to d\qquad\text{as}\qquad n\to\infty. \qquad (13.3.3)$$
Further, let the averaged distribution $F_{(1)}$ satisfy conditions [ · , <]_U and [N] (with $n$ replaced in them by $n_1 = a^{-2}$). If condition [H] of § 13.2 is met then (13.3.2) holds true for $n = Ta^{-2}$. In particular, for $n = \infty$,
$$\lim_{a\to0} P\bigl(a\overline{S}(a) \ge z\bigr) = e^{-2z/d}. \qquad (13.3.4)$$
The assertion of the theorem remains true if conditions [ · , =]_U and [N] are replaced by [ · , =]_UR.

Note that the condition that, for some $\alpha > 2$,
$$\frac{1}{n_1}\sum_{j=1}^{n_1} E|\xi_j|^{\alpha} < c < \infty \qquad (13.3.5)$$
implies conditions [<, <]_UR and [L]. The condition (13.3.5) with $|\xi_j|^\alpha$ replaced by $(\xi_j^+)^\alpha$ ($v^+ = \max\{0, v\}$) implies [ · , <]_UR and a 'right-sided' Lindeberg condition.

Proof of Theorem 13.3.3. The argument repeats that used in the proof of Theorem 12.5.1 but now it will employ Theorem 13.2.3 and Corollary 13.3.2. Since (13.3.3) implies that $[H^D_\Delta]$ holds for $n = n_1T$, we see that the conditions of Corollary 13.3.2 are met for $n = n_1T$. Similarly, the representation (12.5.7) with
$$\zeta_{n_1}(t) = \frac{S_{n_1t}}{\sqrt{D_{n_1}}},\qquad n_1 = a^{-2},$$
could be written as
$$a\overline{S}_n(a) = a\sqrt{D_{n_1}}\,\max_{k\le n}\Bigl(\frac{S_k}{\sqrt{D_{n_1}}} - \frac{ak}{\sqrt{D_{n_1}}}\Bigr) = a\sqrt{D_{n_1}}\,\max_{k\le n}\Bigl(\zeta_{n_1}\Bigl(\frac{k}{n_1}\Bigr) - \frac{k\theta}{n_1}\Bigr),$$
where $a\sqrt{D_{n_1}} \sim \sqrt{d}$ and $\theta = an_1/\sqrt{D_{n_1}} \sim 1/\sqrt{d}$. For $T < \infty$, the functional
$$f_T(\zeta) := \sup_{u\le T}\bigl(\zeta(u) - u/\sqrt{d}\bigr)$$
is continuous in the uniform metric and has the property that
$$\sqrt{d}\,f_T(\zeta_{n_1}) = \sqrt{d}\,\max_{k\le n}\Bigl(\zeta_{n_1}\Bigl(\frac{k}{n_1}\Bigr) - \frac{k}{\sqrt{d}\,n_1}\Bigr) = a\overline{S}_n(a)(1+o(1)) + o(1).$$
Hence (13.3.2) holds true by virtue of Corollary 13.3.2. If $n = \infty$ ($T = \infty$) then one should make use of Theorem 13.2.3 (more precisely, of the inequality (13.2.11)) in the same way as we used Theorem 12.3.1 to prove Theorem 12.5.2 in the case $T = \infty$.
To find the closed-form expression (13.3.4) for the limiting distribution of $a\overline{S}(a)$, one could use, as in Theorem 12.5.2, the 'invariance principle' (13.3.2) proved in the first part of the theorem. That result states that, under the conditions of Theorem 13.3.3, the limiting law for $a\overline{S}(a)$ does not depend on the distributions $F_j$. Therefore, when finding the desired limiting distribution we can assume that the r.v.'s $\xi_j$ are identically distributed ($F_j = F_{(1)}$) and have the right tail $V(t) = P(\xi_1 \ge t) = qe^{-\lambda_+t}$, where $q < 1$, $q\lambda_+^{-1} = E\xi_1^+$. In this case, the distribution of $\overline{S}(a)$ is well known to be exponential also (see e.g. § 20 of [42]; cf. (12.5.14)):
$$Ee^{\lambda\overline{S}(a)} = p + \frac{(1-p)\lambda_1}{\lambda_1 - \lambda},$$
so that
$$P\bigl(\overline{S}(a) \ge v\bigr) = (1-p)e^{-\lambda_1v}\quad\text{for}\quad v > 0,\qquad p = P\bigl(\overline{S}(a) = 0\bigr),$$
where $\lambda_1$ is a solution to the equation $\psi(\lambda) = 1$, $\psi(\lambda) = Ee^{\lambda(\xi-a)}$. Next we note that in our case, as $\lambda\to0$,
$$\psi(\lambda) = 1 - \lambda a + \frac{d\lambda^2}{2}(1+o(1)).$$
Therefore $\lambda_1 \sim 2a/d$ as $a\to0$. Hence
$$P\Bigl(\overline{S}(a) \ge \frac{z}{a}\Bigr) = \exp\Bigl\{-\frac{z}{a}\cdot\frac{2a}{d}(1+o(1))\Bigr\} = e^{-2z/d}(1+o(1)).$$
The theorem is proved.
14 Random walks with dependent jumps
14.1 The classes of random walks with dependent jumps that admit asymptotic analysis

The results of Chapters 12 and 13 for random walks with independent non-identically distributed jumps can be used for the asymptotic analysis of some types of random walks with dependent jumps. Since everywhere in Chapters 12 and 13 we used the condition $E\xi_j = 0$, it is natural to attempt to extend the results of these chapters to martingales, in the first place. The martingale property, however, does not play a decisive role. Another broad class of random walks for which one could study large deviations in a similar way are 'arbitrary' walks (not necessarily martingales) defined on Markov chains. The following four basic classes of random walks, which are rather close to each other, admit asymptotic analysis of the same type as in Chapters 12 and 13.

1. Martingales with a common majorant for the jump distributions. Let a sequence of r.v.'s $\xi_1, \xi_2, \dots$ be given on a basic probability space $(\Omega, \mathfrak{F}, P)$ endowed with a family of increasing σ-algebras $\mathfrak{F}_1 \subseteq \mathfrak{F}_2 \subseteq \cdots \subseteq \mathfrak{F}$ such that $\xi_n$ is $\mathfrak{F}_n$-measurable, $n \ge 1$. The stochastic sequence
$$\{S_n, \mathfrak{F}_n;\ n \ge 1\},\qquad S_n := \sum_{j=1}^n \xi_j,$$
forms a martingale if
$$E|\xi_n| < \infty\qquad\text{and}\qquad E(\xi_{n+1}\,|\,\mathfrak{F}_n) = 0,\qquad n \ge 1. \qquad (14.1.1)$$
Denote by $F_j(B,\omega) := P(\xi_j \in B\,|\,\mathfrak{F}_{j-1})$ the conditional distribution of $\xi_j$ given $\mathfrak{F}_{j-1}$ and by
$$F_{j,+}(t,\omega) := F_j([t,\infty),\omega),\qquad F_{j,-}(t,\omega) := F_j((-\infty,-t),\omega)$$
its tails. Assume that there exist regularly varying majorants $V(t)$, $W(t)$ such that a.s.
$$F_{j,+}(t,\omega) \le V(t),\qquad F_{j,-}(t,\omega) \le W(t),\qquad t > 0. \qquad (14.1.2)$$
It may be seen from the analysis in § 12.1 that when deriving upper bounds for the distributions $P(\overline{S}_n \ge x)$, $\overline{S}_n = \max_{k\le n} S_k$, we have not used the distributions $F_j$ themselves but rather the majorants for their tails. In the present case, we have such majorants in the uniform version (14.1.2). This enables one to obtain the required bounds for the probabilities of large deviations of $\overline{S}_n$ (of the form $nV(x)$) for the class of martingales under consideration.

2. Martingales defined on countable Markov chains. Another class of martingales, which will be discussed in more detail in §§ 14.2 and 14.3 below and for which one can derive more advanced and precise results, is a modification of the above general model. This class contains martingales defined on countable Markov chains.

Let $X = \{X_k;\ k \ge 1\}$ be a time-homogeneous ergodic Markov chain with a countable state space. We will assume that all the states are essential. One can identify, without loss of generality, the state space $\mathcal{X}$ of the chain with the set $\mathbb{N} = \{1, 2, \dots\}$. Ergodicity of the chain means that, for any fixed $i, j \in \mathcal{X}$, there exist the limits
$$\lim_{n\to\infty} P(X_n = j\,|\,X_1 = i) = \pi_j > 0,\qquad \sum_{j=1}^{\infty}\pi_j = 1. \qquad (14.1.3)$$
Further, on each of the states $j \in \mathcal{X}$ of the chain let an r.v. $\xi(j)$ be given whose distribution $F_{(j)}$ depends on $j$, but in such a way that, for any $j$,
$$E\xi(j) = 0. \qquad (14.1.4)$$
Consider an array of independent r.v.'s {ξk(j); k, j ≥ 1}, which is independent of the chain X and in which the ξk(j) are distributed as ξ(j), and form the sums

Sn := Σ_{k=1}^n ξk(Xk).   (14.1.5)
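The construction (14.1.3)–(14.1.5) is easy to prototype. The sketch below simulates a walk of this kind on a two-state chain; the transition probabilities and the centred two-point jump laws are illustrative choices, not taken from the text.

```python
import random

# Sketch of the walk (14.1.5): jumps xi_k(X_k) whose distribution depends on the
# current state of an ergodic Markov chain X. Two-state chain and centred
# two-point jump laws; all concrete numbers are made up for illustration.
P = {1: [(1, 0.9), (2, 0.1)],   # state -> [(next_state, probability), ...]
     2: [(1, 0.5), (2, 0.5)]}

def step(state, rng):
    u, acc = rng.random(), 0.0
    for nxt, prob in P[state]:
        acc += prob
        if u < acc:
            return nxt
    return P[state][-1][0]

def jump(state, rng):
    # xi(j) with E xi(j) = 0 (condition (14.1.4)): a centred two-point law
    # whose spread depends on the state
    scale = 1.0 if state == 1 else 5.0
    return scale if rng.random() < 0.5 else -scale

def walk(n, rng, x0=1):
    s, x, path = 0.0, x0, []
    for _ in range(n):
        x = step(x, rng)
        s += jump(x, rng)          # S_n = sum over k of xi_k(X_k), cf. (14.1.5)
        path.append(s)
    return s, max(path)            # (S_n, max_{k<=n} S_k)

rng = random.Random(0)
sn, sbar = walk(10_000, rng)
```

Given the whole trajectory of X, the jumps are independent but non-identically distributed, which is exactly the situation treated in Chapters 12 and 13.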
If we denote by Fn the σ-algebra generated by X1, . . . , Xn; ξ1(X1), . . . , ξn(Xn), then the stochastic sequence {Sn, Fn; n ≥ 1} will be a martingale. Under suitable assumptions on the distributions of ξ(j), which will be stated in §§ 14.2 and 14.3, one can extend to such random walks all the main results of the asymptotic analysis of Chapters 12 and 13, without great effort and without using any additional constructions. The reason for this is that for any fixed trajectory X(n) := {X1, . . . , Xn},
the sequence {ξ1(X1), . . . , ξn(Xn)} will consist of independent non-identically distributed r.v.'s whose distributions satisfy the conditions of Chapters 12 and 13. Then, averaging the derived asymptotic results over the set X1n of 'ergodic' trajectories (P(X(n) ∈ X1n) → 1 as n → ∞), we will obtain a sufficiently simple explicit answer. The probability of trajectories that are not from X1n will be negligibly small, and such trajectories will not affect the form of the desired asymptotics.

3. Martingales defined on arbitrary Markov chains. The above random walk model could be extended to the case of a Markov chain with an arbitrary state space X. Assume that for each z ∈ X one is given a distribution F(z) on the real line such that ∫ t F(z)(dt) = 0. This time, we will make use of a somewhat different, more 'economical', way of defining the random walk. Let Q(z)(v) be the v-quantile of F(z), i.e. Q(z)(v) is the generalized inverse of the distribution function F(z)((−∞, t)):

Q(z)(v) = inf{t : F(z)((−∞, t)) > v}.

Further, let ω1, ω2, . . . be a sequence of i.i.d. r.v.'s that are uniformly distributed over [0, 1] and independent of X. Then we put ξk(z) := Q(z)(ωk) and, as before, define the random walk {Sn} by (14.1.5). It is obvious that Eξk(z) = 0 for any z ∈ X and that the random walk (14.1.5) is again a martingale, which is now defined on an arbitrary Markov chain. Provided appropriate restrictions are imposed on the distributions F(z), all the considerations from §§ 14.2 and 14.3 below which are devoted to the case of countable chains will remain valid for these more general walks. We single out the countable chains for the sole reason of simplifying the exposition and avoiding technical difficulties concerning measurability, integrability, convergence etc.

4. Random walks defined on Markov chains. This term will be used for the processes described in items 2 and 3 above, in the case when the martingale property (i.e.
the property that Eξ(z) ≡ 0) is not assumed. We will assume only that a(z) := Eξ(z) is finite and, for the reasons mentioned at the end of item 3, will confine ourselves to considering walks defined on countable Markov chains. The possibility of studying such more general processes is due to the fact that, in the decomposition

Sn = An + Sn0,   An := Σ_{k=1}^n a(Xk),   Sn0 := Sn − An,

the sequence {Sn0} forms a martingale since E(ξk(z) − a(z)) = 0 whereas, as n
increases, the sequence {An} will behave, with probability close to 1, almost as a linear function aπ n, aπ := Σ_j πj a(j). This allows one to use the results of §§ 14.2 and 14.3 on the crossing of arbitrary fixed boundaries. In the case when ξj(Xj) = f(Xj), where f is a given function on X and {Xk} forms a Harris Markov chain (having a positive recurrent state z0), the asymptotics of P(Sn ≥ x) in terms of regularly varying distribution tails of the sums of the quantities f(Xk) taken over a cycle formed by consecutive visits to z0 was studied in [187].
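The 'economical' quantile construction of item 3 can be sketched as follows; the state-dependent Laplace distributions below are purely illustrative stand-ins for F(z).

```python
import math
import random

# A single uniform sequence omega_1, omega_2, ... drives the jumps in every
# state via the quantile transform xi_k(z) = Q_(z)(omega_k). Here F_(z) is a
# centred Laplace law whose scale grows with the state z (an assumption made
# for illustration only).
def Q(z, v):
    # v-quantile of the centred Laplace distribution with scale z
    scale = float(z)
    if v < 0.5:
        return scale * math.log(2.0 * v)
    return -scale * math.log(2.0 * (1.0 - v))

rng = random.Random(1)
omegas = [rng.random() for _ in range(50_000)]
# the same omegas serve every state, coupling the jumps xi_k(z) across z,
# while each Q_(z)(omega_k) has mean zero, as required in item 3
mean_state1 = sum(Q(1, w) for w in omegas) / len(omegas)
mean_state3 = sum(Q(3, w) for w in omegas) / len(omegas)
```

The point of the coupling is that one uniform sequence defines the walk simultaneously on every state of the chain.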
14.2 Martingales on countable Markov chains. The main results of the asymptotic analysis when the jump variances can be infinite

In this section, we will consider in more detail random walks formed by the martingales (14.1.4), (14.1.5) and defined on ergodic Markov chains with countably many states. Let F(j) be the distribution of the jump ξ(j) (in the walk {Sk}) corresponding to state j. Put

F(j),+(t) := F(j)([t, ∞)),   F(j),−(t) := F(j)((−∞, −t)).

Consider the following conditions.

[<, <]U There exist regularly varying majorants V(j), W(j), V∗ and W∗ such that, for t > 0,

F(j),+(t) ≤ V(j)(t) ≤ V∗(t),   F(j),−(t) ≤ W(j)(t) ≤ W∗(t).   (14.2.1)

[<, =]U For t > 0,

F(j),+(t) = V(j)(t) ≤ V∗(t),   F(j),−(t) ≤ W(j)(t) ≤ W∗(t),   (14.2.2)

where V(j), W(j) satisfy condition [U] of uniformity in j from § 12.1 with exponents αj ≥ 1, βj > 1.

Further, let π = (π1, π2, . . .) be the stationary distribution of the chain X. Set

Vπ(t) := Σ_{j=1}^∞ πj V(j)(t),   Wπ(t) := Σ_{j=1}^∞ πj W(j)(t),   (14.2.3)

and assume that, for some p > 0,

Vπ(t) ≥ pV∗(t),   Wπ(t) ≥ pW∗(t).   (14.2.4)
Condition (14.2.4) is essentially non-restrictive. For example, let there exist a 'heaviest' tail

V(j∗)(t) = max_j V(j)(t).
Then, clearly, one can put V∗(t) := V(j∗)(t), and the first relation in (14.2.4) will always hold with p = πj∗. It is not hard to see that, under the above conditions, the averaged majorants Vπ, Wπ will also be r.v.f.'s. We will assume that the respective exponents are α ∈ [1, 2) and β ∈ (1, 2), so that the 'heaviest' of the tails V(j) and W(j) have infinite second moments.

Consider the random walk {Sn; n ≥ 1} defined in (14.1.5) when the jumps ξk(z), z ∈ X, are given by the quantiles Q(z)(ωk); the r.v.'s ω1, ω2, . . . are independent of each other and of X and are uniformly distributed over [0, 1]. As before, let S̄n = max_{k≤n} Sk. We have the following analogue of Theorem 12.1.4.

Theorem 14.2.1. Assume that the random walk (14.1.5), (14.1.4) satisfies the conditions X1 ≤ N < ∞ and [<, <]U (see (14.2.1)–(14.2.4)). Then the following assertions hold true.

(i) If W∗(t) < cV∗(t) then

sup_{x: nV∗(x)≤v} P(S̄n ≥ x) / (nVπ(x)) ≤ 1 + ε(v, n),   (14.2.5)

sup_{n,x: nV∗(x)≤v} P(S̄n ≥ x) / (nV∗(x)) ≤ 1 + ε(v),   (14.2.6)

where ε(v, n) → 0 as v → 0, n → ∞ and ε(v) → 0 as v → 0.

(ii) If W∗(t) > cV∗(t) then the inequalities (14.2.5), (14.2.6) remain true for all n and x such that

nW∗(x/ln x) ≤ c1 < ∞.   (14.2.7)

There will also be analogues of Corollaries 12.1.5 and 12.1.6. Note that, unlike (14.2.6), the inequality (14.2.5) will prove to be asymptotically exact under broad conditions. In this inequality, however, it is assumed that n → ∞ (in contrast with (14.2.6) and the assertion of Theorem 12.1.4).

Proof of Theorem 14.2.1. If we fix the trajectory X(n) = {X1, . . . , Xn} then the sequence ξ1(X1), . . . , ξn(Xn) will consist of independent r.v.'s, with distributions F(X1), . . . , F(Xn) respectively, that satisfy the 'individual' conditions [<, <]U of § 12.1. For a fixed j, the frequency of occurrence of the event {Xk = j} in the trajectory X(n) will, for large values of n, be close to nπj. Hence the majorant of the averaged tail for the sequence ξ1(X1), . . . , ξn(Xn) will, in a sense, be close to

Vπ(t) = Σ_{j=1}^∞ πj V(j)(t).
More precisely, for a given ε > 0 we can split the set of trajectories X(n) into two parts X1n and X2n, where X1n is defined as the collection of (z1, . . . , zn) such that

(1/n) Σ_{k=1}^n V(zk)(t) ≤ Vπ(t)(1 + ε).   (14.2.8)

The set X2n comprises all the other trajectories. By virtue of the ergodic theorem and (14.2.1)–(14.2.4), for any fixed ε > 0 we clearly have P(X(n) ∈ X1n) → 1 as n → ∞. Therefore, this relation will still be true when ε = εn → 0 slowly enough as n → ∞.

Denote by Fn^X the σ-algebra generated by the trajectories X(n). Then

P(S̄n ≥ x) = E[P(S̄n ≥ x | Fn^X); X(n) ∈ X1n] + E[P(S̄n ≥ x | Fn^X); X(n) ∈ X2n].   (14.2.9)

In each term on the right-hand side, to compute the probability P(S̄n ≥ x | Fn^X) we can use Theorem 12.1.4, of which all the conditions are satisfied. For the first term, it is obvious that condition [<, <]U is met. Condition [N] for the majorant Vπ(t)(1 + εn) of the averaged distribution holds because Vπ is a fixed r.v.f., independent of n, whereas the factor 1 + εn tends to 1 as n → ∞ and affects nothing. (In this case, conditions [<, <] and [UR] of Chapter 12 will also be met for the averaged distribution.) For the second term (for the set X2n), one should use the majorants V∗, W∗, which also clearly satisfy conditions [U] and [N] since they do not depend on n.

Owing to the above, under the conditions of the first assertion of the theorem we have

sup_{n,x: nVπ(x)≤v} P(S̄n ≥ x | Fn^X) / (nVπ(x)(1 + εn)) ≤ 1 + ε(v)

on the set {X(n) ∈ X1n}, and

sup_{n,x: nV∗(x)≤v} P(S̄n ≥ x | Fn^X) / (nV∗(x)) ≤ 1 + ε(v)

on the set {X(n) ∈ X2n}, where ε(v) → 0 as v → 0. Hence, by virtue of (14.2.4) and (14.2.9),

sup_{x: nVπ(x)≤v} P(S̄n ≥ x) / (nVπ(x)) ≤ [P(X(n) ∈ X1n) + (1/p) P(X(n) ∈ X2n)](1 + ε1(v, n)),   (14.2.10)

where ε1(v, n) → 0 as v → 0, n → ∞. This proves (14.2.5). The inequality (14.2.6) and the second assertion of the theorem are proved in exactly the same way.
Next we will obtain lower bounds for the probabilities P(Sn ≥ x). Set V̂(t) := max{V∗(t), W∗(t)}, σ̂(n) := V̂^(−1)(1/n) and

Fπ,+(t) := Σ_{j=1}^∞ πj F(j),+(t).

Theorem 14.2.2. Let the conditions X1 ≤ N, [<, <]U, y = x + uσ̂(n) and u → ∞ be satisfied. Then, as n → ∞,

P(Sn ≥ x) ≥ nFπ,+(y)(1 + o(1)).   (14.2.11)

If, in addition, x ≫ σ̂(n) and condition [<, =]U is met then

P(Sn ≥ x) ≥ nVπ(x)(1 + o(1)).   (14.2.12)

Proof. We can follow the same path as in the proof of Theorem 14.2.1. Take X1n to be the set of trajectories (z1, . . . , zn) for which (cf. (14.2.8))

|(1/(nVπ(t))) Σ_{k=1}^n V(zk)(t) − 1| < εn,   |(1/(nFπ,+(t))) Σ_{k=1}^n F(zk),+(t) − 1| < εn,

where εn → 0 as n → ∞ slowly enough that P(X(n) ∈ X1n) → 1. Then

P(Sn ≥ x) ≥ E[P(Sn ≥ x | Fn^X); X(n) ∈ X1n],

where, owing to Theorem 12.3.1, one has

P(Sn ≥ x | Fn^X) ≥ nFπ,+(y)(1 + o(1))

on the set {X(n) ∈ X1n}, so that

P(Sn ≥ x) ≥ nFπ,+(y) P(X(n) ∈ X1n)(1 + o(1)) = nFπ,+(y)(1 + o(1)).

This proves (14.2.11). If x ≫ σ̂(n) and condition [<, =]U is met then, for u = o(x/σ̂(n)), u → ∞, we obtain y ∼ x, Fπ,+(y) ∼ Vπ(x), which means that (14.2.12) holds true. The theorem is proved.

As in § 12.2, it will be convenient for us to introduce condition [Q], which states that at least one of the following two conditions is met:
[Q1] W∗(t) ≤ cV∗(t), x → ∞ and nV∗(x) → 0;

[Q2] x → ∞ and nV̂(x/ln x) < c, where V̂(t) := max{V∗(t), W∗(t)}.
The next result follows in an obvious way from Theorems 14.2.1 and 14.2.2.
Theorem 14.2.3. Let the random walk {Sn} (see (14.1.5)) satisfy the conditions X1 ≤ N and [<, =]U. Then, under the conditions n → ∞, x ≫ σ̂(n) and [Q], we have

P(Sn ≥ x) = nVπ(x)(1 + o(1)),   P(S̄n ≥ x) = nVπ(x)(1 + o(1)).   (14.2.13)
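A crude Monte Carlo experiment illustrates the first-order answer nVπ(x) in a toy case: a symmetric two-state chain with symmetric Pareto jumps of state-dependent scale. The tail index, scales and threshold below are arbitrary illustrative choices, and the agreement is only to first order.

```python
import random

# For a centred walk on an ergodic two-state chain with regularly varying
# jumps, P(max_{k<=n} S_k >= x) should be close to n*V_pi(x) for large x.
ALPHA, SCALES, N, X, TRIALS = 1.5, {1: 1.0, 2: 2.0}, 100, 400.0, 20_000

def v_tail(j, t):
    # V_(j)(t) = P(xi(j) >= t) for a symmetric Pareto jump of scale SCALES[j]
    return 0.5 * min(1.0, (t / SCALES[j]) ** -ALPHA)

def max_of_walk(rng):
    s, smax, state = 0.0, 0.0, 1
    for _ in range(N):
        state = state if rng.random() < 0.5 else 3 - state   # uniform stationary law
        mag = SCALES[state] * (1.0 - rng.random()) ** (-1.0 / ALPHA)
        s += mag if rng.random() < 0.5 else -mag             # centred heavy-tailed jump
        smax = max(smax, s)
    return smax

rng = random.Random(42)
p_emp = sum(max_of_walk(rng) >= X for _ in range(TRIALS)) / TRIALS
n_v_pi = N * 0.5 * (v_tail(1, X) + v_tail(2, X))             # V_pi = (V_(1)+V_(2))/2
ratio = p_emp / n_v_pi
```

At moderate thresholds the ratio sits somewhat above 1, reflecting the sub-asymptotic corrections that the o(1) terms in (14.2.13) absorb.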
It is seen that the main results of §§ 12.1 and 12.2 can be extended without much effort to martingales defined on Markov chains. We could also extend the other results of Chapter 12 in exactly the same way. One just needs to keep in mind the following observations.

(1) While considering the problem on the crossing of an arbitrary boundary {g(k)} (an analogue of Theorem 12.2.2) we will have, for X1 ≤ N and boundaries {g∗(k)} that grow not too rapidly, that, by virtue of the ergodic theorem,

E Σ_{k=1}^n V(Xk)(g∗(k)) ∼ Σ_{k=1}^n Vπ(g∗(k)).   (14.2.14)

(2) While proving convergence to a stable law, one should make use of the fact that, on a suitable set X1n with P(X(n) ∈ X1n) → 1, the averaged distribution n^{−1} Σ_{j=1}^n F(Xj) will be close to Fπ. We take the function G = Gπ to be given by Gπ(t) = Fπ,+(t) + Fπ,−(t) and assume its regular variation. Then condition [Rα,ρ] will have the form

lim_{t→∞} Fπ,+(t) / Gπ(t) = ρ+.

(3) While proving the convergence of finite-dimensional distributions in the invariance principle (an analogue of Theorem 12.4.5), the set X1n should be somewhat narrowed so that the event {X(n) ∈ X1n} would also imply the event {X_{nt1} ≤ N, . . . , X_{ntk} ≤ N}. This is needed to prove that the joint distribution of

(S_{nt1}/b(n), . . . , S_{ntk}/b(n))

converges to the respective finite-dimensional distribution of the stable process.

(4) The proofs of all the limit theorems are based on the fact that, for an initial chain state X1 ≤ N, the distributions, averaged over long time intervals, will have tails close to Fπ,+, Fπ,−.

(5) While studying transient phenomena, to avoid making the exposition too complicated one should avoid introducing the triangular array scheme. We could assume that all the tails V(j), W(j) and Vπ, Wπ are fixed and then, in the analysis of the sequences S̄n(a) = max_{k≤n}(Sk − ak), suppose that only the parameter a is changing (a → 0).
14.3 Martingales on countable Markov chains. The main results of the asymptotic analysis in the case of finite variances

As in the previous section, we will deal here with a random walk {Sn} defined on a countable Markov chain X (see (14.1.4), (14.1.5)). Consider the following conditions.

[ · , <]U We have d(j) := Var ξ(j) < c < ∞, and there exist regularly varying majorants V(j), V∗ such that

F(j),+(t) ≤ V(j)(t) ≤ V∗(t).

[ · , =]U The same, but with F(j),+(t) = V(j)(t) ≤ V∗(t).

In both cases, V(j) satisfies the uniformity condition [U] of § 13.1 (see p. 483) with exponent α(j) > 2. As before, we will also assume that (cf. (14.2.4))

Vπ(t) = Σ_{j=1}^∞ πj V(j)(t) ≥ pV∗(t)   (14.3.1)

for some p < 1 (see the remark following (14.2.4)).

Recall that Fn^X is the σ-algebra generated by X(n). For any X1 ≤ N, we have

ESn² = E(Σ_{k=1}^n ξk(Xk))² = E Σ_{k,m=1}^n E(ξk(Xk) ξm(Xm) | Fn^X) = E Σ_{k=1}^n d(Xk) ∼ n Σ_{j=1}^∞ πj d(j) =: n dπ.   (14.3.2)

Further, put σ(n) := √(n ln n), x = sσ(n) and assume, as in § 4.1, that x ≫ √n (i.e. s² ≫ (ln n)^{−1}).

Theorem 14.3.1. Let the conditions X1 ≤ N and [ · , <]U be satisfied. Then the following assertions hold true.

(i) For s ≥ 1,

P(S̄n ≥ x) ≤ nVπ(x)(1 + ε(s)),   (14.3.3)

where ε(s) → 0 as s → ∞.

(ii) If s < c then, for any fixed h > 1 and all large enough n,

P(S̄n ≥ x) ≤ e^{−x²/(2hn dπ)}.   (14.3.4)

One can give a closed-form expression for the constant c.
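The variance identity (14.3.2) is easy to check numerically; the two-state chain and the jump scales below are illustrative stand-ins.

```python
import random

# Check of E S_n^2 ~ n*d_pi, d_pi = sum_j pi_j d(j), for a martingale on a
# two-state chain with symmetric +-scale jumps, so that d(j) = scale_j**2.
SCALES, N, TRIALS = {1: 1.0, 2: 3.0}, 400, 4000

def endpoint(rng):
    s, state = 0.0, 1
    for _ in range(N):
        state = state if rng.random() < 0.5 else 3 - state   # uniform stationary law
        s += SCALES[state] if rng.random() < 0.5 else -SCALES[state]
    return s

rng = random.Random(3)
second_moment = sum(endpoint(rng) ** 2 for _ in range(TRIALS)) / TRIALS
d_pi = 0.5 * SCALES[1] ** 2 + 0.5 * SCALES[2] ** 2
ratio = second_moment / (N * d_pi)   # should be close to 1 by (14.3.2)
```

Here the conditional independence of the jumps given the trajectory of X makes the cross terms in (14.3.2) vanish, which is what the simulation confirms.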
Proof. The proof is based on Theorem 13.1.2 and the approach used in the proof of Theorem 14.2.1. We will again make use of the relation (14.2.9). For the first assertion of the theorem, we will define the set X1n in the same way as in the proof of Theorem 14.2.1: for (z1, . . . , zn) ∈ X1n, one has (14.2.8). Then, repeating the argument from the proof of Theorem 14.2.1, we obtain (14.3.3) by virtue of Corollary 13.1.4.

For the second assertion, the set X1n will be defined as a collection of trajectories (z1, . . . , zn) for which

(1/n) Σ_{k=1}^n d(zk) ≤ dπ(1 + εn),

where εn → 0 so slowly that P(X(n) ∈ X1n) → 1 as n → ∞. Again repeating the considerations with separate calculations of P(S̄n ≥ x | Fn^X) on the sets {X(n) ∈ X1n} and {X(n) ∈ X2n}, and using the condition d(j) ≤ c and Corollary 13.1.4, we arrive at (14.3.4). The theorem is proved.

Next we will obtain lower bounds.

Theorem 14.3.2. Let the conditions X1 ≤ N and [ · , =]U be satisfied. Then, as n → ∞, for x ≫ √n we have

P(Sn ≥ x) ≥ nVπ(x)(1 + o(1)).

Proof. Choose the set X1n in (14.2.9) so that for X(n) ∈ X1n one has

|(1/n) Σ_{k=1}^n V(Xk)(t) − Vπ(t)| ≤ εn Vπ(t),

where εn → 0 slowly enough that P(X(n) ∈ X1n) → 1 as n → ∞. Then

P(Sn ≥ x) ≥ E[P(Sn ≥ x | Fn^X); X(n) ∈ X1n],

where, for y = x + u√n, the conditional probability on the right-hand side will, by virtue of Theorem 13.1.8, be greater than

nFπ,+(y)(1 − εn)(1 − cu^{−2} − (1/2) nFπ,+(y)(1 + εn)).

If x ≫ √n, u → ∞, u = o(x/√n), then y ∼ x, nFπ,+(y) < cnx^{−2} = o(1) and

P(Sn ≥ x) ≥ nVπ(x)(1 − εn) P(X(n) ∈ X1n) = nVπ(x)(1 − ε′n),

where εn, ε′n → 0 as n → ∞. The theorem is proved.

Theorems 14.3.1 and 14.3.2 imply the following result.
Theorem 14.3.3. Let the conditions X1 ≤ N and [ · , =]U be satisfied. Then, as n → ∞, for x ≫ √(n ln n),

P(Sn ≥ x) = nVπ(x)(1 + o(1)),   P(S̄n ≥ x) = nVπ(x)(1 + o(1)).

As in the previous section, we see that all the main results of §§ 13.1 and 13.2 can be extended without great effort to martingales defined on Markov chains. We could also extend in exactly the same way all the other results of Chapter 13. In this connection, one should bear in mind the observations made at the end of the previous section.

Remark 14.3.4. The condition V(j)(t) ≤ V∗(t) ≤ p^{−1} Vπ(t) is, in a sense, excessive. The tails V(j)(t) could get 'thicker' as j increases, at a rate depending on how fast the πj vanish as j → ∞. In this case, of course, the series Σ πj V(j)(t) should converge and represent an r.v.f. It is of interest to determine the character of the former dependence. For example, it would be interesting to find out whether the assertions of Theorems 14.3.1–14.3.3 remain true when

πj ≤ c j^{−γ}, γ > 1,   V(j)(t) ≤ min{1, j^β V1(t)},
β < γ − 1.
14.4 Arbitrary random walks on countable Markov chains

In this section, we will deal with the random walks described in §§ 14.2 and 14.3 but without assuming the martingale property. That is, we will consider random walks for which the function a(z) := Eξ(z) need not be identically equal to zero.
14.4.1 The case of infinite variances

First we consider the case where the random walk (14.1.5) satisfies conditions (14.2.1)–(14.2.4) with α ∈ [1, 2), β ∈ (1, 2) and the condition

aπ := Σ_j πj a(j) = 0   (14.4.1)

(when studying the distribution of Sn the last condition does not amount to a restriction of generality). Put

Sn = An + Sn0,   An := Σ_{k=1}^n a(Xk),   Sn0 := Sn − An.   (14.4.2)
The sequence {Sn0 } forms a martingale and could be studied as in § 14.2. It remains to estimate the sequence {An }. The most natural way of doing this is to
split the trajectory of the chain X into cycles by repeated visits to the initial state X1, which is assumed to be fixed. Let

τ := min{k > 1 : Xk = X1} − 1   and   ζ := Aτ − A1
be the length of such a cycle and the increment of the sequence {An} on one cycle respectively. It is not hard to see that Eζ = aπ Eτ = aπ/πX1 and therefore, by (14.4.1),

Eζ = 0.   (14.4.3)
To simplify the subsequent considerations, we will assume that the absolute value of the function a(z) is bounded; without loss of generality, one can stipulate that

|a(z)| < 1.   (14.4.4)
Then clearly |ζ| ≤ τ and ζ̄ ≤ τ, where ζ̄ := Āτ − A1 = max_{k≤τ} Ak − A1.
This means that the crucial part of what follows will be obtaining bounds for the tail P(τ > t).

Upper bounds for P(S̄n ≥ x)

Assume that there exists an r.v.f. Vτ(t) such that

P(τ > t) ≤ Vτ(t) = t^{−γ} Lτ(t), γ > 1.   (14.4.5)
For finite Markov chains, the probability P(τ > t) decays exponentially fast as t → ∞, and so condition (14.4.5) is satisfied for any γ > 0. If the chain is countable and the increments ϑ(z) := X2 − X1 of the chain, given X1 = z, have the following properties: there exists an r.v. ϑ such that

(1) ϑ(z) ≤ ϑ in distribution for all large enough z, (2) Eϑ < 0, (3) P(ϑ > t) ≤ Vϑ(t), where Vϑ(t) is an r.v.f.,

then it is not hard to demonstrate that P(τ > t) ≤ cVϑ(t) (see [50]; the easiest way to show this is to use Theorem 3, § 43 of [50]).

Now we can state an extension of Theorem 14.2.1 to the case a(z) ≢ 0.

Theorem 14.4.1. Assume that the random walk (14.1.5) satisfies the conditions X1 ≤ N < ∞ and [<, <]U (see (14.2.1)–(14.2.4)) and that the relations (14.4.1), (14.4.4) and (14.4.5) hold true. Moreover, let

Vτ(t) = o(V∗(t))   as t → ∞.   (14.4.6)
Then the assertions (14.2.5)–(14.2.7) of Theorem 14.2.1 remain true for all n and x such that, for any fixed v > 0,

e^{−vn} ≤ nV∗(x),   nV∗(x) → 0.   (14.4.7)
Observe that condition (14.4.6) necessarily implies the inequality γ ≥ α. Condition (14.4.7) means that the deviations x are 'exponentially' bounded from above.

Proof of Theorem 14.4.1. Owing to the representation (14.4.2), we have S̄n ≤ Ān + S̄n0, where

Ān := max_{k≤n} Ak,   S̄n0 := max_{k≤n} Sk0,
so that, for any δ ∈ (0, 1),

P(S̄n ≥ x) ≤ P(Ān ≥ δx) + P(S̄n0 ≥ (1 − δ)x).

It suffices to show that

P(Ān ≥ x) = o(nV∗(x)),   (14.4.8)
since if (14.4.8) holds and δ = δ(x) → 0 slowly enough as x → ∞, we will also have P(Ān ≥ δx) = o(nV∗(x)) (see Theorem 14.2.1).

Let τ1, τ2, . . . be independent copies of the r.v. τ and let aτ := Eτ. Set Tk := Σ_{j=1}^k τj and denote by ητ(n) := min{k : Tk > n} the minimum number of cycles 'covering' the time interval [1, n]. It follows from Lemma 16.2.6 that, for n+ := (n + un)/aτ, u ≥ 1/2, one has

P(ητ(n) > n+) ≤ e^{−cu²n} if Eτ² < ∞,   P(ητ(n) > n+) ≤ e^{−cuh(u)n} if (14.4.5) holds for a γ ∈ (1, 2),   (14.4.9)

where h(u) = u^{1/(γ−1)} Lh(u) is the inverse of the function u(λ) := λ^{−1} Vτ(λ^{−1}).
P An x; ητ (n) n+ P max ζ k x/2 + P 1 + Z n+ x/2 , kn+
(14.4.11)
14.4 Arbitrary random walks on countable Markov chains
519
where the ζ k are independent copies of the r.v. ζ and Zn :=
n
ζk ,
Z n := max Zk .
k=1
Clearly,
kn
P max ζ k x/2 n+ Vτ (x/2) = o nV ∗ (x) , kn+
and, using (14.4.3)–(14.4.6), we obtain P(Z n+ x/2 − 1) n+ Vτ (x/2)(1 + o(1)) = o nV ∗ (x) as nV ∗ (x) → 0. This means that (14.4.8) holds true. The theorem is proved. Lower bounds for P(Sn x) As above, by δ > 0 we will understand a function δ = δ(x) → 0 slowly enough as x → ∞. The choice of δ will be made in the proof of Theorem 14.4.2 below. Theorem 14.4.2. Assume that the conditions of Theorem 14.2.2 are met for the random walk (14.1.5) and, moreover, that (14.4.1) and (14.4.4)–(14.4.7) hold true. Then, for y = (1 + δ)x + u σ (n), n → ∞, u → ∞, we have (14.4.12) P(Sn x) nFπ,+ (y)(1 + o(1)) + o nV ∗ (x) . If, in addition, x σ (n) and condition [<, =]U is satisfied then P(Sn x) nVπ (x)(1 + o(1)).
(14.4.13)
Proof. Owing to the representation (14.4.2), P(Sn x) P(Sn0 (1 + δ)x, A n −δx) P(Sn0 (1 + δ)x) − P(A n < −δx),
(14.4.14)
where A n := minkn Ak . An upper bound for P(A n < −x) can be obtained in exactly the same way as the bound for P(An δx) above (see (14.4.9)– (14.4.11)). Therefore we obtain (cf. (14.4.8)) P(A n < −x) = o nV ∗ (x) . Moreover, if δ(x) → 0 slowly enough then, clearly, we will also have P(A n < −δx) = o nV ∗ (x) . A lower bound for the first term on the right-hand side of (14.4.14) can be obtained from Theorem 14.2.2. This proves (14.4.12). The assertion (14.4.13) follows from (14.4.12) in the same way as (14.2.12) followed from (14.2.11). The theorem is proved.
520
Random walks with dependent jumps Exact asymptotics
The following extension of Theorem 14.2.3 to the case a(z) ≡ 0 holds true. Theorem 14.4.3. Assume that the random walk (14.1.5) satisfies the conditions X1 N and [<, =] and, moreover, that the relations (14.4.1), (14.4.4)–(14.4.7) hold true. If n → ∞ and the conditions x σ (n) and [Q] are met then (14.2.13) holds true. Proof. The proof follows in an obvious way from Theorems 14.4.1 and 14.4.2. Remark 14.4.4. If Vτ (t) V ∗ (t) as t → ∞ then it may happen that the decisive role in determining the asymptotics of P(S n x) will be played by the ‘deterministic’ walk {An } on the Markov chain X, i.e. the asymptotics will be determined by that of P(An x), which can also be studied with the help of cycles. In the case aπ = 0, one should use the asymptotic linearity Ak ∼ aπ k (with probability close to 1) and the asymptotics (14.2.14) for the probability of the crossing of an arbitrary boundary by a martingale walk.
14.4.2 The case of finite variances Let all the assumptions made at the beginning of § 14.3 hold true. As at the end of the previous subsection, assume that a(z) ≡ 0 and that conditions (14.4.1) and (14.4.4) are satisfied. Then the relation (14.3.2) should be replaced by E(Sn0 )2 ∼ ndπ ,
dπ :=
∞
πj d(j) .
j=1
Concerning the cycle length τ, we will assume that Eτ 2 < ∞ and that (14.4.5) holds for a γ α > 2. Then, repeating the argument of the previous subsection and using Theorem 14.3.1, we will obtain, in the notation of § 14.3, the following assertion. Theorem 14.4.5. Let the conditions X1 N , [<, <]U , (14.4.1), (14.4.4) and (14.4.5) be satisfied. Moreover, let Vτ (t) V ∗ (t) −vn
e
∗
as
nV (x)
t→∞ ∗
as nV (x) → 0, for any fixed v > 0.
Then, for s 1, P(S n x) nVπ (x)(1 + ε(s)), where ε(s) → 0 as s → ∞.
(14.4.15) (14.4.16)
14.4 Arbitrary random walks on countable Markov chains
521
Concerning the lower bounds, the following extension of Theorem 14.3.2 to the case when a(z) ≡ 0 holds true. Theorem 14.4.6. Assume that the following conditions are satisfied: X1 N , √ [ · , =]U , (14.4.1), (14.4.4), (14.4.5), (14.4.15) and (14.4.16). Then, for x n, P(Sn x) nVπ (x)(1 + o(1)). Proof. The proof of the theorem repeats the argument of § 14.4.1 concerning the derivation of the lower bounds. Theorems 14.4.5 and 14.4.6 imply the following result. Theorem√14.4.7. Let the conditions of Theorem 14.4.6 be met. Then, as n → ∞, for x n ln n one has P(Sn x) = nVπ (x)(1 + o(1)), P(S n x) = nVπ (x)(1 + o(1)). Remark 14.4.4 remains applicable to the cases aπ = 0 and Vτ (t) V ∗ (t).
15

Extension of the results of Chapters 2–5 to continuous-time random processes with independent increments

15.1 Introduction

The objective of this and the next chapter is to extend the main results of Chapters 2–5 to the following two classes of continuous-time processes:

(1) processes with independent increments;
(2) compound (generalized) renewal processes.

The two classes clearly have a substantial intersection: the collection of compound Poisson processes.

For the first class, we will consider two approaches to studying large deviation probabilities. The first approach uses rather crude estimates of the closeness of the trajectories of discrete- and continuous-time processes to reduce the problem to the already available results of Chapters 2–5 for discrete-time processes. This approach, however, does not enable one to extend to the continuous-time case any results related to asymptotic expansions. The second approach consists in constructing, in the continuous-time case, a complete analogue of the whole procedure used to derive the desired asymptotics in Chapters 2–5. In this way, one can obtain analogues of all the results from those chapters.

The distribution of any right-continuous process with homogeneous independent increments {S(t)} is completely specified by the ch.f. Ee^{iλS(t)} = e^{tψ(λ)}, where ψ(λ) admits a Lévy–Khinchin representation, which we write in the following form:

ψ(λ) = iλq − λ²d/2 + ∫_{|u|<1} (e^{iλu} − 1 − iλu) G[0](du) + ∫_{|u|≥1} (e^{iλu} − 1) G(du).   (15.1.1)
As in the case of the standard representation for ψ(λ) (using a single integral, see e.g. [129, 122]), the measures G[0], G will be referred to as spectral measures. For these measures, we have

G[0]({0}) = G[0](R \ (−1, 1)) = 0,   ∫ u² G[0](du) < ∞,   Θ := G(R) = G(R \ (−1, 1)) < ∞.

The constant d is the variance of the Wiener component of the process.

There are two ways of describing 'local' properties of the distributions of the process {S(t)}: either via the distribution F of the increments ξ = ξ1 := S(1) ⊂= F (we will use this approach in § 15.2) or via the tails G±(t) of the spectral measure G (to be used in § 15.3). There is a direct relationship between the distribution of ξ and the tails of the spectral measure, which is perhaps easiest to grasp using the following argument. The function ψ(λ) is decomposed in (15.1.1) into four summands, which correspond to the representation

ξ = ξ(1) + ξ(2) + ξ(3) + ξ(4),   (15.1.2)
where the ξ(i) are independent r.v.'s: ξ(1) ≡ q; ξ(2) has the normal distribution with parameters (0, d); ξ(3) corresponds to the third term on the right-hand side of (15.1.1) and has an entire ch.f., so that the tails of ξ(3) decay faster than any exponential function. The above means that the sum ξ(1) + ξ(2) + ξ(3) of the first three terms on the right-hand side of (15.1.2) follows a distribution whose tails decay at infinity faster than any exponential function. The term ξ(4) in (15.1.2) corresponds to the fourth term on the right-hand side of (15.1.1) and has a ch.f. e^{ψ(4)(λ)}, with

ψ(4)(λ) = ∫ (e^{iλu} − 1) G(du) = Θ ∫ (e^{iλu} − 1) G[1](du),   (15.1.3)

where Θ = G(R \ (−1, 1)) = G(R) and the probability measure G[1] := Θ^{−1} G is concentrated on R \ (−1, 1). The ch.f. (15.1.3) corresponds to a compound Poisson process with jumps following the distribution G[1] and jump intensity Θ, so that the distribution F(4) of ξ(4) will have the tail

F(4),+(t) = e^{−Θ} Σ_{k=1}^∞ (Θ^k/k!) G[1],+^{k∗}(t),   (15.1.4)
where G[1],+^{k∗} is the tail of the kth convolution G[1]^{k∗} of the distribution G[1] with itself. Since for subexponential distributions G[1] we have

G[1],+^{k∗}(t) ∼ k G[1],+(t)
as t → ∞, we obtain from (15.1.4) that, in this case,

F(4),+(t) ∼ G[1],+(t) e^{−Θ} Σ_{k=1}^∞ kΘ^k/k! = ΘG[1],+(t) = G+(t),   (15.1.5)
so that the tails F+(t) ∼ F(4),+(t) and G+(t) are equivalent.

Now, as we have already said, the tails of ξ(1) + ξ(2) + ξ(3) decay faster than any exponential function and therefore, when considering measures F or G with 'heavy' (l.c. or subexponential) tails, they do not affect the tails of F or G in any way (see Chapter 11). Hence, in the case of 'heavy' tails (of G or F), we have F+(t) ∼ F(4),+(t), and so for subexponential G[1] (see (15.1.5))

F+(t) ∼ ΘG[1],+(t) = G+(t).

Thus, we have proved the following assertion (see also Theorem A3.22 of [113] and § 8.2.7 of [32]).

Theorem 15.1.1. If the distribution G[1] is subexponential then so is F, and F+(t) ∼ G+(t) as t → ∞.

A converse assertion, stating that F ∈ S implies G[1] ∈ S, so that again F+(t) ∼ G+(t) holds, is true as well ([112]; see the bibliography in [32]). Similar assertions hold true for the classes R (see [110]) and Se.

Now let the tail F+(t) (and hence also F(4),+(t)) satisfy condition [ · , <]. Then it follows from the inequality

G[1],+(t) Θ e^{−Θ} ≤ F(4),+(t)

that G[1] also satisfies condition [ · , <]. And, vice versa, if G[1] satisfies [ · , <] with a subexponential majorant VG then, for any fixed ε > 0 and all large enough t, F+(t) ≤ (Θ + ε)VG(t), and so F also satisfies condition [ · , <] (with majorant (Θ + ε)VG). It is evident that the above observations equally apply to the left tails. Thus, conditions in terms of the tails of ξ can be considered as conditions on the tails G±(t), and vice versa.

It also follows from what was said above about the role of the terms in the sum ξ(1) + ξ(2) + ξ(3) that, in all considerations concerning 'heavy' (l.c. or subexponential) tails, one can, without loss of generality, put G[0] ≡ 0 in (15.1.1): this will not affect the behaviour of the tails of F or G.
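The equivalence (15.1.5) can be observed numerically for a compound Poisson r.v. with subexponential jumps; the Pareto jump law, the intensity and the threshold below are arbitrary illustrative choices.

```python
import random

# For xi_(4) compound Poisson with jump law G_[1] (here Pareto, hence
# subexponential) and intensity Theta, P(xi_(4) >= t) ~ Theta * G_[1],+(t).
THETA, ALPHA, T, SAMPLES = 2.0, 1.5, 100.0, 200_000
E = 2.718281828459045

def pareto(rng):
    # Pareto(ALPHA) on [1, infinity)
    return (1.0 - rng.random()) ** (-1.0 / ALPHA)

def poisson(rng, lam):
    # Knuth's multiplication method (fine for small lam)
    limit, k, p = E ** -lam, 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(7)
hits = 0
for _ in range(SAMPLES):
    s = sum(pareto(rng) for _ in range(poisson(rng, THETA)))
    hits += s >= T
tail_emp = hits / SAMPLES
tail_asym = THETA * T ** -ALPHA    # Theta * G_[1],+(T)
ratio = tail_emp / tail_asym
```

The ratio sits near 1, with the mild excess coming from the 'one big jump plus typical remainder' effect that the o(1) terms absorb.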
15.2 The first approach, based on using the closeness of the trajectories of processes in discrete and continuous time

Let {S(u); u ∈ [0, T]} be a separable process with homogeneous independent increments, S(0) = 0. One can assume, without loss of generality, that the trajectories of S(·) belong to the space D(0, T) of functions without discontinuities of the second kind (say, right-continuous). We will show that all the theorems of Chapters 2–5 concerning the asymptotics of large deviation probabilities in boundary-crossing problems (except for asymptotic expansions) will remain true in the continuous-time case under the respective conditions on the increments of the process.

We will need an auxiliary assertion. Let

ξk := S(k) − S(k − 1),   k = 1, 2, . . .
If T is an integer (in most problems this can be assumed without loss of generality) then, in the notation of the previous chapters, we have S(T) = ST. This means that, under the respective conditions on the tails of ξ, all the assertions concerning the asymptotics of P(Sn ≥ x) carry over to the probabilities P(S(T) ≥ x) with T = n. As we saw in § 15.1, for processes with independent increments, the regular variation or semiexponentiality of the tail F+(t) = P(ξ ≥ t) is equivalent to the corresponding regularity property of the spectral function G+(t) in the Lévy–Khinchin representation for the ch.f. of ξ. From this and a representation of the form (15.1.5) for the tail of S(u), it also follows that if F ∈ S or G[1] ∈ S then, for any fixed u,

P(S(u) ≥ t) ∼ uF+(t)   as t → ∞.   (15.2.1)

Similar assertions hold for the negative tails as well. Moreover, we always have

S(u) → 0 in probability as u → 0.
This means that the above-noted equivalence of the asymptotics of P(S(T ) x) and P(ST x) (for integer-valued T → ∞) will also hold for non-integer T , because T = T + {T } and S(T ) = Sn + S({T }), where the terms on the right-hand side are independent and n = T . (As usual, T and {T } denote respectively the integral and fractional parts of T .) Under these conditions we can use the results of Chapter 11 on the asymptotics of large deviation probabilities for sums of r.v.’s of two types (for classes R and Se). As before, let σ(T ) = V (−1) (1/T ) and σW (T ) = W (−1) (1/T ). We have obtained the following assertion. Theorem 15.2.1. (i) Let the conditions [Rα,ρ ], α ∈ (0, 1), ρ > −1 and T V (x) → 0 be satisfied for the distribution F (or G[1] ). Then P S(T ) x = T V (x)(1 + o(1)). (15.2.2)
526
Extension to processes with independent increments
(ii) Let α ∈ (1, 2), ES(1) = 0 and the conditions [<, =] and W (t) cV (t) be satisfied for some c < ∞, T V (x) → 0. Then (15.2.2) holds true. If the condition W (t) < cV (t) does not hold then (15.2.2) will still be true provided that x → 0. TW | ln T V (x)| √ (iii) If α > 2, ES(1) = 0, ES 2 (1) < ∞ and x T ln T then (15.2.2) holds true. (iv) If the distribution of S(1) (or the spectral function of the process) is semiexponential then the assertions of Theorem 5.5.1 (i), (ii) remain true for S(T ) (provided that in them we replace n by T ). In the above assertions, the remainder terms o(1) are uniform in the same sense as in the theorems of Chapters 2–5. One can also obtain in a similar way analogues of other assertions from Chapters 2–5 for continuous-time processes with independent increments. To illustrate this statement, we will demonstrate that the assertions of Chapters 2–5 concerning boundary-crossing problems will also remain true in the present case. As before, consider the class Gx,T of boundaries g(t) such that inf g(t) = x.
tT
(15.2.3)
We will narrow this set to a smaller class Gx,T,ε by adding the following requirement to condition (15.2.3): For some s.v.f. ε = ε(x) → 0 as x → ∞, g(k − 1) < g(k)(1 + ε),
inf t∈[k−1,k]
g(t) > g(k)(1 − ε),
k = 1, . . . , T .
As before (see e.g. § 6.5), we can concentrate on boundaries g(·) of one of the following two types: (1) g(t) = x + gt , where gt does not depend on x; (2) g(t) = xf (t/T ), t T, where f (·) depends neither on x nor on T and inf u∈[0,1] f (u) = 1. In the first case g(·) ∈ Gx,T,ε for all non-decreasing functions gt that grow more slowly than any exponential function. In the second, g(·) ∈ Gx,T,ε provided that f is continuous. Theorem 15.2.2. The following assertions hold true. (i) Let the conditions [=, =] with W, V ∈ R and β < min{1, α} be satisfied
15.2 The first approach: closeness of trajectories
527
for the distribution F (or G[1] ). Then, for all T , as x → ∞, T P(S(T ) x) ∼
EV x − ζσW (u) du,
(15.2.4)
0
where, as before, σW (u) = W (−1) (1/u) and ζ follows the stable law Fβ,−1 . All the corollaries (2.7.2), (2.7.3), (2.7.5) of (2.7.1) remain true. In particular, for S = S(∞), P(S x) ∼
V (x) C(β, α, ∞), W (x)
where the function C is defined in (2.7.4). (ii) Let α ∈ (1, 2), Eξ = 0 and the conditions [Rα,ρ ], ρ > −1 and T V (x) → 0 be satisfied. Then P S(T ) x = T V (x) 1 + o(1) . (iii) Let α ∈ (1, 2), Eξ = 0 and conditions [<, =] (W, V ∈ R), W (t) cV (t) and T V (x) → 0 be satisfied. Then, for a g ∈ Gx,T,ε and an event ( ) (15.2.5) GT := sup (S(u) − g(u)) 0 , uT
we have T P(GT ) = (1 + o(1))
V g∗ (u) du,
(15.2.6)
0
where g∗ (u) = inf uvT g(v). If the condition W (t) cV (t) is not met then (15.2.6) will still be true provided that x TW → 0. | ln T V (x)| (iv) Let α > 2, Eξ = 0, Eξ 2 < ∞ and the conditions [ · , =] (V ∈ R) and g ∈ Gx,T,ε be satisfied. Then, as x → ∞, T P(GT ) = (1 + o(1))
V g∗ (u) du + o T 2 V (x)V (x)
0
when T cx2 / ln x for a suitable c > 0. (v) If the distribution of ξ (or the spectral function) is semiexponential and the conditions of Theorem 5.5.1 are satisfied (with n replaced in them by T ) then the assertion of Theorem 5.5.1 (ii) (provided that in it we replace n by T and P(S n x) by P(S(T ) x)) will also hold true for P(S(T ) x).
528
Extension to processes with independent increments
(vi) Let α > 1, Eξ = 0 and condition [ · , =] (V ∈ R) be satisfied. Put S(T, a) := sup S(t) − at . tT
Then, as x → ∞, we have for any T the relation
1 P S(T, a) x = (1 + o(1)) a
x+aT
V (u) du. x
In the semiexponential case, the assertion of Theorem 5.6.1 (i) (with obvious changes in notation) remains true. In all the assertions of the theorem, the remainder terms o(1) are uniform in the same sense as in the theorems of Chapters 2–5. As was the case for random walks in discrete time, one can remove any restrictions on the growth of T from the assertions of parts (iii)–(v) (as in part (vi)√of the theorem), provided that g(t) increases fast enough (for example, if g(t) > t ln t in the case Eξ 2 < ∞). To prove Theorem 15.2.2 and to be able to reduce our problems to the corresponding ones for discrete-time random walks, we will need two auxiliary assertions. The first one is an analogue of the well-known Kolmogorov inequality, which refers to the distribution of S(T ) = maxuT S(u). Let ⎧ (−1) ⎨ W (1/T ) if α ∈ (0, 1), (−1) (−1) (1/T ), V (1/T ) if α ∈ (1, 2), Eξ = 0, σ !(T ) := max W ⎩ √ T if Eξ 2 < ∞, Eξ = 0. Lemma 15.2.3. Assume that either the conditions [<, <] with α ∈ (0, 1) ∪ (1, 2) and Eξ = 0 for α > 1 or the conditions Eξ 2 < ∞ and Eξ = 0 are satisfied for the distribution F (or G[1] ). Then, for any ε ∈ (0, 1), there exists an M = M (ε) such that, for all T , x σ !(T ), P(S(T ) x)
1 P S(T ) x − M σ !(T ) . 1−ε
(15.2.7)
If T is fixed (for example, T = 1) then the conditions [<, <] and Eξ = 0 become superfluous, and in (15.2.7) one can put σ !(T ) = 1. A similar assertion holds in the semiexponential case as well. Remark 15.2.4. The second assertion of the lemma together with the fact that S(T ) S(T ) implies that the distribution tails of S(1) = ξ and S(1) behave in the same way: they simultaneously satisfy conditions [ · , <] or [ · , =] with a common function V .
15.2 The first approach: closeness of trajectories
529
Remark 15.2.5. Note that the assertion of Theorem 15.2.2 that concerns the asymptotics of P(S(T ) x) follows immediately from Lemma 15.2.3 and requires no additional considerations. Indeed, assuming for simplicity that T is an integer, we have P(S(T ) x) P(S T x) ∼ T V (x), !(T ) ∼ T V (x) P(S(T ) x) (1 − ε)−1 P ST x − M σ as ε → 0, M σ !(T ) = o(x). Proof of Lemma 15.2.3. Let η(x) := inf t > 0 : S(t) x . Then, owing to the strong Markov property of the process {S(t)}, we have P S(T ) x − M σ !(T ) # $ E P S(T ) x − M σ !(T ) η(x) ; η(x) T # $ E P S(T − η(x)) −M σ !(T ) η(x) ; η(x) T . (15.2.8) It follows from the bounds of Chapters 2 and 3 (see Corollaries 2.2.4 and 3.1.2) in the case α < 2 and from the Chebyshev inequality in the case Eξ2 < ∞ that, under the conditions stated in the lemma, P S(T − u) < −M σ !(T ) P S(T − u) < −M σ !(T − u) → 0 asM → ∞. Hence, for any given ε > 0, there exists an M = M (ε) such that P S(T − u) > −M σ !(T ) 1 − ε for all u ∈ [0, T ]. Owing to (15.2.8) this means that P S(T ) x − M σ !(T ) (1 − ε)P(η(x) T ). The last inequality is obviously equivalent to the assertion of the lemma. The proof in the case of a fixed T proceeds in exactly the same way and is based on the observation that, for any fixed u, P S(u) −M → 1 as M → ∞. The lemma is proved. Lemma 15.2.6. Let the conditions of Lemma 15.2.3 be met. Then, for all large enough z, P(S(1) z, S(1) < z/2) 2V (z/2)W (z/2). Proof. Set A := S(1) z, S(1) < z/2 . Again making use of the strong Markov property of the process {S(t)}, we obtain from Lemma 15.2.3 that, for
530
Extension to processes with independent increments
all large enough z,
# $ P(A) = E P A η(z) ; η(z) < 1 , E P S(1 − η(z)) < −z/2 η(z) ; η(z) < 1 W (z/2) P η(z) < 1
3 2
W (z/2) P(S(1) > z/2) 2W (z/2)V (z/2),
where we have used the almost obvious relation (see (15.2.1)) sup P(S(u) < −x) (1 + o(1)) P(S(1) < −x) u1
as x → ∞. The lemma is proved. Proof of Theorem 15.2.2. The proofs of the assertions of the theorem are all quite similar to each other. First we will consider the asymptotics of P(GT ), where the event GT is defined in (15.2.5). For simplicity, let T be an integer (for a remark on how to pass to the case of a general T, see the end of the proof below). Set ( ) G∂T,g := max Sk − g(k) 0 kT
and consider the boundary gε (k) := (1 − 4ε)g(k), where ε = ε(x) → 0 is the s.v.f. at infinity that appeared in the definition of the class Gx,T,ε , so that εg(k) → ∞. Then P(GT ) P G∂T,g , (15.2.9) ∂ ∂ (15.2.10) P(GT ) P GT,gε + P GT,gε GT . We already know the asymptotics of the right-hand side of (15.2.9) and that of the first term on the right-hand side of (15.2.10): they coincide with the asymptotics of P G∂T,g , which we studied in Chapters 2–5 and found to be given by T V g∗ (k) (1 + o(1)). k=1
We will show that the second term on the right-hand side of (15.2.10) is negligibly small compared to the first. We have T + ∂ GT,gε GT ⊂ Hk , k=1
where Hk =
(
sup
) # $ S(u) − g(u) 0, S(k − 1) < gε (k − 1), S(k) < gε (k) ,
u∈[k−1,k)
531
15.2 The first approach: closeness of trajectories so that T ∂ P GT,gε GT P(Hk ).
(15.2.11)
k=1
The value of the probability P(Hk ) can only increase if we ‘move’ the initial value S(k − 1) of the process {S(u)} on the time interval [k − 1, k] to the point gε (k − 1) = (1 − 4ε)g(k − 1). Then, setting Sk (u) := S(k − 1 + u) − S(k − 1),
S k (1) := sup Sk (u), u∈[0,1]
we will obtain P(Hk ) P
# $ sup gε (k − 1) + Sk (u) − g(k − 1 + u) 0,
u∈[0,1]
Sk (1) < gε (k) − gε (k − 1) .
(15.2.12)
Here, for any u ∈ [0, 1], we have by virtue of the condition g(·) ∈ Gx,T,ε that gε (k − 1) + Sk (u) − g(k − 1 + u) < S k (1) + gε (k − 1) − (1 − ε)g(k) < S k (1) + (1 + ε)(1 − 4ε)g(k) − (1 − ε)g(k) < S k (1) − 2εg(k) and Sk (1) − gε (k) + gε (k − 1) > Sk (1) − (1 − 4ε)g(k) + (1 − ε)(1 − 4ε)g(k) > Sk (1) − εg(k). Hence the event under the probability symbol on the right-hand side of (15.2.12) implies the event S k (1) − 2εg(k) 0, Sk (1) − εg(k) < 0 = S k (1) 2εg(k), Sk (1) < εg(k) . Applying Lemma 15.2.6 with z = 2εg(k), we obtain P(Hk ) 2V εg(k) W εg(k) . Here, for any δ > 0 and all large enough x, V εg(k) W εg(k) ε−α−β−δ V g(k) W g(k) ε−α−β−δ V g(k) W (x), where, evidently, ε−α−β−δ W (x) → 0
532
Extension to processes with independent increments
since ε → 0 is an s.v.f. Therefore P(Hk ) = o V g(k) = o V g∗ (k) and, by virtue of (15.2.10) and (15.2.11), T V g∗ (k) ∼ V g∗ (k) du. P(GT ) = (1 + o(1)) T
k=1
0
Thus the first four assertions of Theorem 15.2.2 are proved. Assertion (v) can be proved in exactly the same way. For part (vi), if T is not an integer then, in the above considerations, one should put n = T , v = {T } and add the term # $ P sup S(u) − g(u) 0, S(n) < (1 − 4ε)g(n), S(T ) < (1 − 4ε)g(T ) u∈[n,T ]
T to the sum k=1 P(Hk ). This term can then be dealt with in exactly the same way. The theorem is proved.
15.3 The construction of a full analogue of the asymptotic analysis from Chapters 2–5 for random processes with independent increments As we have already noted in § 15.1, an arbitrary continuous-time process {S(t)} with homogeneous independent increments can be represented, in accordance with (15.1.1) and (15.1.2), as a sum of two independent processes: S(t) = S[0] (t) + S[1] (t),
(15.3.1)
where S[0] (t), like ξ(1) + ξ(2) + ξ(3) , has an entire ch.f. while {S[1] (t)} is a compound Poisson process with jumps ζj , |ζj | 1. In what follows we will, in turn, represent the component {S[1] (t)} as a sum of two independent processes {S 1: S[1] (t) = S
15.3 Analogue of the asymptotic analysis from Chapters 2–5
533
time interval [0, T ] and by Jy,2 the event that the process has at least two jumps on that interval. Then P Jy,1 = Θy T e−Θy T ∼ G+ (y)T provided that G+ (y)T → 0 and, likewise, P(Jy,2 ) = 1 − e−Θy T − Θy T e−Θy T ∼
1 (Θy T )2 . 2
(15.3.2)
15.3.1 Upper bounds Now we will turn to the general scheme for studying the asymptotics of large deviation probabilities in Chapters 2–5. Its first stage consisted in deriving bounds for the distribution of the maxima of partial sums. In the present case, we require bounds for S(T ) := sup S(t). tT
They will be based on the following analogue of Lemma 2.1.1. Assume for a moment that ϕ(μ) := EeμS(1) < ∞ for some μ > 0. Lemma 15.3.1. For any x > 0, μ 0, T > 0, P S(T ) x e−μx max 1, ϕT (μ) .
(15.3.3)
Proof. The proof is similar to that in the discrete-time case. As before, putting η(x) := min t > 0 : S(t) x , we have ϕT (μ) = EeμS(T )
T
P η(x) ∈ dt Eeμ(x+S(T −t))
0
T = eμx
P η(x) ∈ dt ϕT −t (μ) eμx P S(T ) x min 1, ϕT (μ) .
0
This implies (15.3.3). The lemma is proved. Applying the lemma to the process {S
y (e
r(μ, y) := −∞
μu
− 1)G(du) =
eμu G(du) − 1 + G+ (y) −∞
(15.3.4)
534
Extension to processes with independent increments
(cf. (2.1.8)). Our task is to bound y RG (μ, y) − 1 :=
eμu G(du) − 1. −∞
This coincides with the problem on evaluating the asymptotics of y R(μ, y) − 1 =
eμu F(du) − 1, −∞
which we considered in Chapters 2–5 (see (2.2.7) on p. 87 etc.). We saw that in all the upper bounds for R(μ, y) − 1 in Chapters 2–5 under conditions [ · , <] or [<, <], the right-hand sides included the summands cV (1/μ) + eμy V (y)
(15.3.5)
(see (2.2.11) and (2.4.6)). Further, observe that the quantity r(μ, y), which we are aiming to bound, differs from RG (μ, y) only by the term G+ (y) (see (15.3.4)). The value μ chosen to optimize bounds for R(μ, y) in (2.2.11) and (2.4.6), was such that 1/μ y. Hence, under condition [ · , <], the additional term G+ (y) ∼ F+ (y) in (15.3.4) is negligibly small compared with each summand in (15.3.5). Therefore, all the upper bounds derived in Chapters 2–5 for R(μ, y) can be carried over to the quantity r(μ, y) in (15.3.4). So we have proved the following assertion. Lemma 15.3.2. All the upper bounds obtained for P = P(S n x; B(0)) in Chapters 2–5 also hold true for P( S
15.3 Analogue of the asymptotic analysis from Chapters 2–5
535
The last inequality, together with the results of Chapters 2–5 and Lemma 15.3.2, implies the next theorem. Theorem 15.3.3. For the probabilities P(S(T ) x), all the upper bounds derived in Chapters 2–5 for P(S n x) (see Theorems 2.2.1, 3.1.1 and 4.1.2 and their corollaries) hold true with n replaced by T and under conditions [<, <], [ · , =] etc. (in the above-listed theorems) imposed on the tails of the distributions G[1] (or F). Similarly, the inequalities for P(S n (a) x) and P S n (a) x; B(v) of Chapters 2–5 can be carried over to the case of continuous-time processes; this leads to corresponding inequalities for P(S(T, a) x) and P S
S
and {S
G+ y + vg(t) dt.
536
Extension to processes with independent increments
Returning to (15.3.6), we obtain the inequality P(S(T, a) x) P S
T
G+ y + vg(t) dt.
0
We have established the following result. Theorem 15.3.4. For P S(T, a) x andP S 0, t 0, be an arbitrary function and let QT (u) := P S(T )/K(T ) < −u . Then, for y = x + uK(T ), # $ P S(T ) x Π(T, y) 1 − QT (u) − Π(T, y) , (15.3.7) where Π(T, y) := 1 − e−T G+ (y) ∼ T G+ (y), provided that T G+ (y) → 0. (It is also clear that Π(T, y) T G+ (y).) Proof. Since P(Sy (T ) y) P(J y,0 ) = Π(T, y) and, for any two events A and B, P(AB) 1 − P(A) − P(B), we have
P S(T ) x = P S
The theorem is proved. It follows from the last theorem that P(S(T ) x) T G(x)(1 + o(1)) provided that the following two conditions are met:
(15.3.8)
15.3 Analogue of the asymptotic analysis from Chapters 2–5 (1) K(T ) is such that, as u → ∞. P S(T ) −uK(T ) → 0,
537
(15.3.9)
(2) x K(T ), T G(x) → 0. √ Since the function K(T ) = T satisfies condition (15.3.9) in the case Eξ = 0, Eξ 2 < ∞, we obtain the next assertion. √ Corollary 15.3.6. If Eξ = 0, Eξ 2 < ∞ then (15.3.8) holds true for x T . As we have already observed, Theorem 15.3.5 is a complete analogue of Theorem 2.5.1. The same applies to the proof of the theorem and to its corollaries. To avoid almost verbatim repetitions, carrying over the corollaries of Theorem 2.5.1 to the case of continuous-time processes {S(t)} is left to the reader. As in Chapters 2 and 3, these corollaries imply the following results. 1. Theorems on the uniform convergence of the distributions of the scaled value of S(T ) to a stable law (see Theorems 3.8.1 and 3.8.2). As in the above-mentioned theorems, let Nα,ρ be the domain of normal attraction of the stable law Fα,ρ , i.e. the class of distributions F (or G[1] ) that satisfy condition [Rα,ρ ] and have the property that, in the representation F+ (t) = V (t) = t−α L(t), one has L(t) → L = const as t → ∞. Theorem 15.3.7. Let F (or G[1] ) satisfy [Rα,ρ ], ρ > −1, α ∈ (0, 1) ∪ (1, 2) and Eξ = 0 when α > 1. In this case, the inclusion F ∈ Nα,ρ (or G[1] ∈ Nα,ρ ) is equivalent to each of the following two relations: as T → ∞, P (S(T )/b(T ) t) sup − 1 → 0, F (t) t0 α,ρ,+ P S(T )/b(T ) t − 1 → 0, sup Hα,ρ,+ (t) t0 −1/α
where b(T ) = ρ+ V (−1) (1/T ), ρ+ = (ρ + 1)/2 and Hα,ρ,+ is the right distribution tail of supu1 ζ(u), {ζ(t)} being a stable process with homogeneous independent increments, corresponding to Fα,ρ . 2. Analogues of the law of the iterated logarithm for the processes {S(t)} in the infinite variance case. Theorem 15.3.8. Let F (or G[1] ) satisfy the conditions [<, ≶] with α < 1 and W (t) c1 V (t) or the conditions [Rα,ρ ], α ∈ (1, 2), ρ > −1 and Eξ = 0. Then lim sup T →∞
1 ln+ S(T ) − ln σ(T ) = . ln ln T α
(15.3.10)
538
Extension to processes with independent increments
15.3.3 Asymptotic analysis in the boundary-crossing problems Along with the upper and lower bounds discussed above, the results of the more detailed asymptotic analysis from Chapters 2–5, including asymptotic expansions, will also remain valid for continuous-time processes {S(t)} with homogeneous independent increments. Reproducing all the results and their proofs (in their somewhat modified form) would take too much space and would contain many almost verbatim repetitions of the exposition from Chapters 2–5. Therefore we will restrict ourselves to presenting, for illustration purposes, assertions concerning the asymptotic analysis of P(S(T ) x) only in the case Eξ = 0, Eξ 2 < ∞. The form of the asymptotics of the probability P(S(T ) x) (its principal part) follows immediately from the upper and lower bounds we obtained above (cf. e.g. Theorem 4.4.1). Theorem 15.3.9. Let the distribution F (or G[1] ) satisfy the conditions [ · , =] (V ∈ R) with α > 2 and Eξ 2 < ∞, x = sσ(T ), σ(T ) = (2 − α)T ln T . Then there exists a function ε(u) ↓ 0 as u ↑ ∞ such that P(S(T ) x) sup − 1 ε(u). T G+ (x) x: su Now we will turn to refinements of this theorem. The following result is an analogue of Theorem 4.5.1. Theorem 15.3.10. Let condition [ · , =] (V ∈ R) with α > 2 be met and let, for some k 1, the distribution F (or G[1] ) satisfy conditions [D(k,q) ] from § 4.4 and E|ξ k | < ∞ (the latter is only required for k > 2). Then there exists a c < ∞ such that, as x → ∞,
T L1 (x) P(S(T ) x) = T G(x) 1 + ES(t) dt Tx 0
1 + T
k i=2
Li (x) i!xi
T % ES i (T − t) 0
+
i i l=2
+ o T k/2 x−k + o q(x)
"
l
ES (t)S l
i−l
& (T − t) dt
uniformly in T cx2 / ln x, where the Li (t) are the s.v.f.’s from the decomposition (4.4.7) for the function V (t) = G+ (t). Theorem 15.3.10 implies the next result (cf. § 4.5).
15.3 Analogue of the asymptotic analysis from Chapters 2–5
539
Corollary 15.3.11. Let the conditions of Theorem √15.3.10 with k = 1 be satisfied, Eξ 2 = 1 . Then, as T → ∞, uniformly in x c T ln T , √ & % 23/2 α T √ (1 + o(1)) + o q(x) . P(S(T ) x) = T G+ (x) 1 + 3 πx Proof of Theorem 15.3.10. As we have already said, the scheme of the proof here is the same as in the discrete-time case. The main change is that summation is now replaced by integration. For GT = S(T ) x , y = x/r, we have P(GT ) = P GT Jy,0 + P GT Jy,1 + P GT Jy,2 , where as before the Jy,k , k = 0, 1, 2, are the events that, on the time interval [0, T ], the trajectory of {S(t)} has respectively 0, 1 or at least 2 jumps √ of size ζ y. By virtue of Theorem 15.3.4 and the relation (15.3.2), for x c T ln T , r > 2, we have 2 2 P GT Jy,0 = O T V (x) , P Jy,2 = O T V (x) . It remains to consider the term P GT Jy,1 . Denote by J(dt, dv) the event that on the interval [0, T ] the trajectory {S(t)} has just one jump of size ζ y and, moreover, that ζ ∈ (v, v + dv), the time of the jump belonging to (t, t + dt). It is clear that, for v y > 1, P J(dt, dv) = e−tG+ (y) dt G(dv)e−(T −t)G+ (y) = e−T G+ (y) dt G(dv) and that
T ∞
P GT Jy,1 = 0
P GT J(dt, dv) .
(15.3.11)
y
Further, observe that, as the process {S(t)} is stochastically continuous, the distributions of S(t − 0) and S(t) and also those of S(t − 0) and S(t) will coincide. Next note that S 0, T ∞ 0
P S(t − 0) εx; J(dt, dv)
y
T ∞ c 0
y
2 tG+ (x)e−T G+ (y) G(dv) dt c1 T G+ (x) . (15.3.12)
540
Extension to processes with independent increments
The following bounds are obtained in a similar way: T ∞ 0
T ∞ 0
P S(t) −εx; J(dt, dv) cT 2 G+ (x)W (x)
y
= o T G+ (x)T k/2 x−k ,
2 P S (t) (T − t) εx; J(dt, dv) c T G+ (x) ,
(15.3.13) (15.3.14)
y
where
# $ S (t) (z) := sup S(v + t) − S(t) . vz
The above means that, under the probability symbol on the right-hand side of the representation (15.3.11), we can add or remove the events (t) S(t − 0) < εx , |S(t − 0)| < εx , S (T − t) < εx , the errors thus introduced being bounded by the right-hand sides of the relations (15.3.12)–(15.3.14). Therefore, putting εT,x := T G+ (x)T k/2 x−k ,
Zt,T := S(t − 0) + S (t) (T − t),
we will have, owing to (15.3.11), the following representation: P GT Jy,1 T ∞ = 0
P GT ; S(t − 0) < εx; J(dt, dv) + o(εT,x )
y
−T G+ (y)
T ∞
=e
0
T ∞ = 0
T =
dt G(dv) P S(t − 0) < εx, v + Zt,T x + o(εT,x )
y
dt G(dv) P |S(t − 0)| < εx, S (t) (T − t) < εx,
y
v + Zt,T x + o(εT,x )
# $ E G+ x − Zt,T ; |S(t − 0)| < εx, S (t) (T − t) < εx dt + o(εT,x ).
0
We have come to the problem of evaluating , E G+ x − S(t − 0) − S (t) (T − t) ;
|S(t − 0)| < εx, S (t) (T − t) < εx ,
(15.3.15)
15.3 Analogue of the asymptotic analysis from Chapters 2–5
541
which is identical to the problem of evaluating the terms in (4.5.6) on p. 206. The subsequent argument does not differ from that in the proof of Theorem 4.5.1. The theorem is proved. We will also state a theorem on asymptotic expansions for the probabilities P S(T, a) x , where S(T, a) = suptT S(t) − at (an analogue of Theorem 4.6.1(ii)). Theorem 15.3.12. Let condition [ · , =] (V ∈ R), α > 2, be met, and, for some k 1, let the distribution F (or G[1] ) satisfy conditions [D(k,q) ] from § 4.4 with an r.v.f. q(t) and E|ξ k | < ∞ (the latter is only required when k > 2). Then, as x → ∞, uniformly in all T, P(S(T, a) x) T = 0
k Li (x + au) G+ (x + au) 1 + i!(x + au)i i=1
% &" i i ES l (u) ES i−l (T − u, a) du × ES i (T − u, a) + l l=2 + O mG+ (x)(mG(x) + xG(x)) + o xG+ (x)q(x) , := max{G+ (t), W (t)} and W (t) is the r.v.f. domwhere m := min{T, x}, G(t) inating G− (t). All the remarks following Theorem 4.6.1 and an analogue of Corollary 4.6.3 will remain true in the present situation. The proofs of these assertions are similar to the arguments presented in § 4.6 (up to obvious changes, illustrated by the proof of Theorem 15.3.10). Note, however, that while finding the asymptotics of P(S(∞, a) x) it would seem to be more natural to use the approaches presented in § 7.5, where we studied the asymptotics of P(S(a) x) for random walks under conditions broader than those in Chapters 3–5. As we have observed already in §§ 15.1 and 15.2, when studying large deviation probabilities for processes with independent increments with ‘heavy’ tails, the component {S[0] (t)} of the process (see (15.3.1)) that corresponds to the first three terms in the representation (15.1.2) does not affect the asymptotics of the desired distributions, because it does not influence the behaviour of the tails of the measures F and G at infinity. This allows one, in particular, to reduce the problem on the first-order asymptotics for P(S(∞, a) x) to the corresponding problem for random walks in discrete time considered in Chapters 2–5. This is due to the fact that for compound Poisson processes (corresponding to the fourth term in (15.1.2)) such a reduction is quite simple. Indeed, let {S(t)} be a compound Poisson process with jump intensity Θ and jump distribution P(ζj ∈ dz) = GΘ (dz). The jump epochs tk in the process
542
Extension to processes with independent increments k {S(t)} have the form tk = j=1 τj , where the r.v.’s τj are independent and P(τj t) = e−Θt , t > 0. Clearly, S(∞, a) ≡ sup S(t) − at = sup S(tk ) − atk = S ∗ := sup Sk∗ , t0
where Sk∗ :=
k0
k
∗ ∗ j=1 ξj , ξj
k0
:= ζj − aτj and the r.v.’s τj , ζj are independent. If
a∗ := Eξi∗ = Eζ − aΘ−1 < 0 then the first- and second-order asymptotics of the probability P(S(∞, a) x) = P(S ∗ x) can be found with the help of Theorems 7.5.1 and 7.5.8, where we should replace ξ by ξ ∗ and V (t) by ∗
∞
∗
V (t) := P(ξ t) = Θ
e−Θu GΘ,+ (t + au) du ∼ GΘ,+ (t)
0
(the last relation holds when GΘ,+ (t) := GΘ,+ ([t, ∞)) is an l.c. function), so that ∞ ∞ 1 1 GΘ,+ (u) du = G+ (u) du. P(S(∞, a) x) ∼ ∗ |a | |ES(1) − a| x
x
That the last answer has this particular integral form is in no way related to the assumption that {S(t)} is a compound Poisson process; it will clearly remain true for more general processes with independent increments. In a somewhat formal way, the same assertion could be obtained by, for example, using approaches from § 15.2. The above reduction to discrete-time random walks (when studying the distribution of S(∞, a)) will be carried out in a more general case (for generalized renewal processes) in § 16.1.
16 Extension of the results of Chapters 3 and 4 to generalized renewal processes
16.1 Introduction In the present chapter, we will continue carrying over the results of Chapters 3 and 4 to continuous-time random processes. Here we will deal with generalized renewal processes (GRPs), which are defined as follows. Let τ, τ1 , τ2 , . . . be a sequence of positive independent i.i.d. r.v.’s with a finite mean aτ := Eτ . Put t0 := 0, tk := τ1 + · · · + τk ,
k = 1, 2, . . . ,
and let ν(t) :=
∞
1(tk t),
t 0,
k=1
be the (right-continuous) renewal process generated by that sequence. Recall the obvious relation {ν(t) < k} = {tk > t}, which will often be used in what follows, and note that ν(t) + 1 is the first hitting time of the level t by the random walk {tk , k 0}: ν(t) + 1 = min{k 1 : tk > t}. Further, suppose that S0 = 0,
Sk = ξ1 + · · · + ξk ,
k 1,
is a random walk generated by a sequence of i.i.d. r.v.’s ξ, ξ1 , ξ2 , . . . , which is independent of {τi ; i 1}. Definition 16.1.1. A continuous-time process S(t) := Sν(t) + qt,
t 0,
where q is a real number, is called a generalized renewal process with linear drift q. 543
544
Extension to generalized renewal processes
Note that in the previous chapter, which was devoted to processes with independent increments, we have already considered an important special case of GRPs, that of compound Poisson processes. Our main aim will be to approximate probabilities of the form
" (16.1.1) GT := sup(S(t) − g(t)) 0 tT
for sufficiently ‘high’ boundaries {g(t); t ∈ [0, T ]} (assuming, without loss of generality, that T 1; the case T → 0 is trivial). The most important special cases of such events GT are, for sufficiently large x, the events {S(T ) x}
{S(T ) x},
and
(16.1.2)
where S(T ) := suptT S(t). Regarding the distributions F and Fτ of the r.v.’s ξ and τ respectively, we will assume satisfied a number of conditions, which are mostly expressed in terms of the asymptotic behaviour at infinity of their tails F+ (t) = P(ξ t),
F− (t) = P(ξ < −t),
Fτ (t) := P(τ t),
t > 0.
As in our previous exposition, the conditions F+ ∈ R and Fτ ∈ R respectively mean that F+ (t) = V (t) := t−α L(t), Fτ (t) = Vτ (t) := t
−γ
Lτ (t),
α > 1,
L is an s.v.f.,
(16.1.3)
γ > 1,
Lτ is an s.v.f.
(16.1.4)
The problem on the asymptotic behaviour of the probabilities P(GT ) is a generalization of similar problems for random walks {Sk } with regularly varying tails, which were studied in detail in Chapters 3 and 4. The asymptotic behaviour of the large deviation probabilities for GRPs was studied less extensively, although such processes, like random walks, are widely used in applications. One of the most important is the famous Sparre Andersen model in risk theory [257, 268, 9]. Throughout the chapter we will be using the notation aξ := Eξ,
aτ := Eτ,
H(t) := Eν(t),
t 0,
so that H(t) is the renewal function for the sequence {tk ; k 0}. In [188], an analogue of the uniform representation (4.1.2) was given for GRPs under the additional assumptions that, apart from condition (16.1.3) on the distribution tail of aτ ξ − aξ τ , one also has E|ξ|2+δ < ∞ and Eτ 2+δ < ∞ for some δ > 0 and the distribution Fτ has a bounded density. In [169], in the case q = 0 the authors established the relation P S(T ) − ES(T ) x ∼ H(T )V (x) as T → ∞, x δT, (16.1.5) for any fixed δ > 0 under the assumption that the distribution tail of the r.v. ξ 0 satisfies the condition of ‘extended regular variation’ (see § 4.8) and that, for the
545
16.1 Introduction
process {ν(t)} (which in [169] can be of a more general form than a renewal process), the following condition holds: for some ε > 0 and c > 0, eck P(ν(T ) > k) → 0 as T → ∞. (16.1.6) k>(1+ε)T /aτ
The paper [169] also contains sufficient conditions for (16.1.6) to hold for renewal processes: Eτ 2 < ∞ and Fτ (t) 1 − e−bt , t > 0, for some b > 0 (Lemma 2.3 of [169]). It was shown in [262] that condition (16.1.6) is always met for renewal processes with aτ < ∞ without any additional assumptions on the distribution Fτ . Observe also that the last fact immediately follows from inequality (16.2.13) below, in the proof of Lemma 16.2.8(i). In this chapter, we will not only extend the zone of x values for which the relation (16.1.5) holds true (and this will be done for arbitrary q) but also establish exact asymptotic behaviour for the probabilities P(GT ) in the case of a much wider class of events GT of the form (16.1.1). First of all, note that the problem on the asymptotics of P(S(∞) x) as x → ∞ can be reduced to that on the asymptotics of P(S x) for the maxima S = supk0 Sk of ordinary random walks, considered in Chapters 3 and 4. This follows from the observation that, for q 0, one has S(∞) = sup(Sk + qtk ) =: Z
(16.1.7)
k0
and for q > 0 d
S(∞) = sup(Sk−1 + qtk ) = qτ1 + sup[Sk−1 + q(tk − τ1 )] = qτ + Z, k1
k1
(16.1.8) where the r.v.’s τ and Z are independent and Z = supk0 Zk for the random walk k Zk := j=1 ζj generated by i.i.d. r.v.’s ζj := ξj + qτj . These representation imply the following result. Theorem 16.1.2. Let a := Eζ = aξ + qaτ < 0. Then the following assertions hold true. I. If q 0 and F+ ∈ R then
1 P S(∞) x ∼ |a|
∞ F+ (t)dt ∼ x
xV (x) . (α − 1)|a|
II. If q > 0 and one of the following three conditions is met, (i) F+ ∈ R and Fτ (t) = o V (t) as t → ∞, (ii) F+ ∈ R and Fτ ∈ R, (iii) Fτ ∈ R and F+ (t) = o Vτ (t) as t → ∞,
(16.1.9)
546
Extension to generalized renewal processes
then
1 P S(∞) x ∼ |a|
∞
F+ (t) + Fτ (t/q) dt.
(16.1.10)
x
Note that, in cases II(i) and II(iii), the second and the first summands respectively in the integrand in (16.1.10) become negligibly small and so can be omitted. Observe also that the first relation in (16.1.9) was obtained in [115]. We will need the following well-known assertion, whose first part is a consequence of Theorem 12, Chapter 4 of [42] (see also Corollary 3.6.3), while the second follows from the main theorem of [178]. Theorem 16.1.3. If F+ ∈ R and aξ < 0 then, as x → ∞,
1 P Sx ∼ |aξ |
∞ V (t) dt ∼ x
xV (x) (α − 1)|aξ |
(16.1.11)
and, moreover, 1 P Sn x ∼ |aξ |
∼
x+n|a ξ|
V (t) dt x
xV (x) − (x + n|aξ |)V (x + n|aξ |) (α − 1)|aξ |
(16.1.12)
uniformly in n ≥ 1. Note that the first asymptotic relation in (16.1.11) was established in [42] in a more general case (later on, it was shown in [275, 115] that a sufficient condition for this relation is that F_+^I is the tail of a subexponential distribution; the necessity of this condition was proved in [177]), while the first relation in (16.1.12) was obtained in [178] for so-called strongly subexponential distributions. The second relations in each of the formulae (16.1.11), (16.1.12) are consequences of (16.1.3) and Theorem 1.1.4(iv).

Proof of Theorem 16.1.2. I. In the case q ≤ 0 one has (16.1.7), the tail F_{ζ,+} of the distribution F_ζ of ζ := ξ + qτ being asymptotically equivalent to the tail F_+ = V ∈ R:

    F_{ζ,+}(t) = ∫_0^∞ V(t − qu) dF_τ(u)
               = V(t) [ ∫_0^M (V(t − qu)/V(t)) dF_τ(u) + θF_τ(M) ],    0 < θ < 1,
where, for M = M (t), increasing to infinity slowly enough as t → ∞, the expression in the square brackets converges to 1 by the uniform convergence
theorem for r.v.f.’s (Theorem 1.1.2). Hence (16.1.9) follows immediately from (16.1.11) with a_ξ replaced in it by a.

II. When q > 0 one has the representation (16.1.8). In case (i) it is easily seen, cf. (1.1.39)–(1.1.42), that

    F_{ζ,+}(t) = P(ξ + qτ ≥ t) ∼ F_+(t),    (16.1.13)

so that, by virtue of (16.1.11),

    P(Z ≥ x) ∼ (1/|a|) ∫_x^∞ F_{ζ,+}(t) dt.    (16.1.14)

Therefore the tail of Z is also an r.v.f. and, moreover, F_τ(t) = o(P(Z ≥ t)). Hence, cf. (16.1.13), we obtain that P(qτ + Z ≥ x) ∼ P(Z ≥ x), which coincides in this case with (16.1.10) by virtue of (16.1.13), (16.1.14) (see also Chapter 11). In case (ii), again using calculations similar to those in (1.1.39)–(1.1.42), we have F_{ζ,+}(t) ∼ V(t) + V_τ(t/q), so that F_{ζ,+} ∈ R and therefore, as in case (i),

    P(Z ≥ x) ∼ (1/|a|) ∫_x^∞ F_{ζ,+}(t) dt ∼ c x F_{ζ,+}(x)
and P(qτ + Z ≥ x) ∼ P(Z ≥ x). Case (iii) is considered similarly to (i), (ii). The theorem is proved.

For T < ∞ such a simple reduction of the problem on the asymptotic behaviour of P(S(T) ≥ x) to the respective results for random walks is impossible, and so we will devote a special section (§ 16.5) to studying the asymptotics of P(G_T) for linear boundaries in the whole spectrum of deviations. A possible way of doing the asymptotic analysis of the probabilities of the form P(G_T) (in the first place for the events (16.1.2)) for a GRP is to use the decomposition

    P(G_T) = ∑_{n=0}^∞ P(G_T | ν(T) = n) P(ν(T) = n).    (16.1.15)
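The decomposition (16.1.15) is easy to exercise in a simulation. The sketch below is purely illustrative (the parameter values, the Pareto jump law and the Exp(1) renewal intervals are our choices, not the book’s): conditionally on ν(T) = n one has S(T) = S_n + qT, so sampling the GRP amounts to sampling ν(T) and the walk {S_n} separately.

```python
import random

random.seed(1)

def pareto_jump(alpha):
    # Jump with regularly varying tail: P(jump >= t) = t**(-alpha), t >= 1
    return random.random() ** (-1.0 / alpha)

def simulate_ST(T, q, alpha, shift):
    """One realization of S(T) = sum_{j <= nu(T)} xi_j + qT with tau_j ~ Exp(1)."""
    t, s = 0.0, 0.0
    while True:
        t += random.expovariate(1.0)          # renewal interval tau_j (so a_tau = 1)
        if t > T:
            break
        s += pareto_jump(alpha) - shift       # centred jump xi_j
    return s + q * T

T, q, alpha = 30.0, -0.5, 2.5
shift = alpha / (alpha - 1.0) + q             # enforces a_xi + q a_tau = 0, cf. (16.1.16)
samples = [simulate_ST(T, q, alpha, shift) for _ in range(20000)]

def p_hat(x):
    """Monte Carlo estimate of P(S(T) >= x)."""
    return sum(s >= x for s in samples) / len(samples)

print(p_hat(10.0), p_hat(20.0))
```

Since both estimates are computed from the same sample, p_hat is automatically non-increasing in x, which gives a cheap internal consistency check.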
If, on the set {ν(T) = n}, the conditional probability P(G_T | ν(t), t ≤ T) does not depend on the behaviour of the trajectory of the renewal process {ν(t)} inside the interval [0, T], i.e.

    P(G_T | ν(T) = n) = P(G_T | {ν(t); t ≤ T}, ν(T) = n)

(which holds, for example, for the event G_T = {S(T) ≥ x}, when one has P(G_T | ν(T) = n) = P(S_n + qT ≥ x)), then we will say that partial factorization takes place, since in this case the problem reduces to studying the processes {S_n} and {ν(t)} separately. It then turns out that the asymptotic behaviour of P(G_T) (including some asymptotic expansions) can be derived from the known results for random walks presented in Chapters 3 and 4 and some bounds for renewal processes.

In the general case, however, one may need to employ another approach, which directly follows the basic scheme for studying random walks with regularly varying jump distribution tails developed in Chapters 3 and 4. Namely, along with ‘truncated versions’ of the jumps ξ_i, we will now truncate the renewal-interval lengths τ_i. The main contribution to the probability of the event G_T will again be due to trajectories containing exactly one large jump, but now the latter can comprise not only a large ξ_i but also a large τ_i (when q > 0).

The asymptotic behaviour of the probabilities of the events (16.1.2) is mostly determined by simple relations for the mean values a_ξ and a_τ, the linear drift coefficient q and the quantities x and T. Since the mean number of jumps per time unit is equal to 1/a_τ, the rate of the mean trend in the process {S(t)} clearly equals a/a_τ, where a := a_ξ + qa_τ is the mean trend of the GRP per renewal interval. Therefore the event {S(T) ≥ x}, say, will be a large deviation when x is much greater than aT/a_τ. To simplify the exposition, we will assume here and in what follows that the mean trend of the process S(t) per renewal interval is equal to zero:

    a := a_ξ + qa_τ = 0.    (16.1.16)
It is clear that, when considering boundary-crossing problems, such an assumption does not restrict generality. Indeed, let {S(t)} be a general GRP with a linear drift. For this process, the event G_T will clearly coincide with the event G_T^0 = {sup_{t≤T}(S^0(t) − g^0(t)) > 0} of the same type, but for the ‘centred’ process S^0(t) := S(t) − (a/a_τ)t with zero mean trend and for the boundary g^0(t) := g(t) − (a/a_τ)t. In particular, the event {S(T) ≥ x} will become, for the new process {S^0(t)}, the event of the crossing of a linear boundary of the form x − (a/a_τ)t.

Next we will discuss briefly how the nature of the asymptotics of P(S(T) ≥ x) (the simplest of the probabilities under consideration) depends on the value of a_ξ, under the assumption (16.1.16). First let a_ξ ≥ 0 (and therefore q ≤ 0 owing to (16.1.16)). Clearly, starting from the ‘vicinity of zero’ at a time t < T and moving along a straight line with slope coefficient q, at time T we will be even further below the (high) level x. This means that the occurrence of the event {S(T) ≥ x}, whether in the presence or absence of long renewal intervals (it is during these intervals that the process S(t) moves along straight lines with slope coefficient q ≤ 0), will be very unlikely in the absence of large jumps ξ_i. In this case, the asymptotics of large deviation probabilities will be similar to those established for ordinary random walks: in
the respective zone of x values and under the assumption that F_+ ∈ R, one has

    P(S(T) ≥ x) ∼ H(T)V(x),    x → ∞    (16.1.17)

(recall that H(T) is simply the mean number of jumps in the process {S(t)} during the time interval [0, T], so that (16.1.17) is a natural generalization of the asymptotic relation

    P(S_n ≥ x) ∼ nV(x)    (16.1.18)

for random walks). If T → ∞ then, by the renewal theorem, the relation (16.1.17) clearly becomes

    P(S(T) ≥ x) ∼ (T/a_τ) V(x)    (16.1.19)

for an appropriate range of values x → ∞.

If a_ξ < 0 (so that q > 0 owing to (16.1.16)) then we have to distinguish between the two cases qT ≤ x and qT > x. In the first case, for the event {S(T) ≥ x} to occur a large jump ξ_j is necessary (with a dominating probability). Moreover, it turns out that, while the relations (16.1.17)–(16.1.19) still hold true when x − qT > δT (δ > 0 is fixed), in the ‘threshold’ situation when x − qT → ∞ with x − qT = o(T) the presence of a long renewal interval could make it much easier for the level x to be reached (it would then suffice for the ‘large’ jump ξ_j to have a substantially smaller value). This could be reflected by the appearance of an additional term on the right-hand side of (16.1.19).

In the second case, one has in any case to take into account the possibility of the occurrence of the event {S(T) ≥ x} due to a very long renewal interval τ_k. For this to happen, such an interval should appear close to the starting point of the trajectory of S(t). Indeed, owing to the zero mean trend in the process, the value of S(t_{k−1}) will be ‘moderate’ in the absence of large jumps ξ_j, j < k. Therefore, if t_{k−1} is close enough to T then, starting at the point with co-ordinates (t_{k−1}, S(t_{k−1})) and moving along a straight line with slope coefficient q, it will already be impossible to reach the level x by the time T. Roughly speaking, the occurrence of the event {S(T) ≥ x} due to a large τ_k is only possible when q(T − a_τ k) > x, i.e. when

    k < (T − x/q)/a_τ.

Then, for the event {S(T) ≥ x} to occur, it will suffice for the large jump at the beginning of the trajectory to have the property τ_k > x/q since, both before and after that long renewal interval, the process {S(t)} will not deviate far from the mean trend line, i.e. it will be moving roughly ‘horizontally’.
Assuming the regularity of the tail F_τ(t) of the distribution of τ, we arrive at the conclusion that, in such a case, one could expect to have asymptotic results of the form

    P(S(T) ≥ x) ∼ (T/a_τ) V(x) + ((T − x/q)/a_τ) F_τ(x/q).    (16.1.20)
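The two summands in (16.1.20) can easily be evaluated to see which scenario dominates; a minimal sketch, with illustrative tails of our own choosing (V(x) = x^{−3}, so α = 3, and F_τ(t) = t^{−1.5}, a τ-tail much heavier than the ξ-tail):

```python
def approx_16_1_20(T, x, q, a_tau, V, F_tau):
    """The two terms of the heuristic approximation (16.1.20), valid for x < qT."""
    jump_term = (T / a_tau) * V(x)                        # one large jump xi_j
    interval_term = ((T - x / q) / a_tau) * F_tau(x / q)  # one long interval tau_k
    return jump_term, interval_term

V = lambda t: t ** -3.0        # illustrative jump tail, alpha = 3
F_tau = lambda t: t ** -1.5    # illustrative renewal-interval tail, gamma = 1.5
jump, interval = approx_16_1_20(T=1000.0, x=500.0, q=1.0, a_tau=1.0, V=V, F_tau=F_tau)
print(jump, interval)
```

With these parameters the long-renewal-interval term exceeds the large-jump term by several orders of magnitude, matching the discussion above: for x substantially below qT the contribution of a single long τ_k dominates.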
The corresponding results are established in Theorem 16.2.1 below, with the help of the representation (16.1.15). They include delicate ‘transient’ phenomena that occur when x ∼ qT. Moreover, it turns out that one can also obtain the first few terms of the asymptotic expansion for the probability P(S(T) ≥ x) (Theorem 16.3.1). The derivation of more complete expansions is substantially harder than solving the same problem for ordinary random walks, owing to the more complex structure of the process {S(t)}.

Arguments similar to those above also hold in the case of more general boundaries. In particular, the relations (16.1.17)–(16.1.20) remain true (under the respective conditions) for the probabilities P(S̄(T) ≥ x) (Theorem 16.2.3). For boundaries of a general form, we will restrict ourselves to considering cases where the occurrence of the event G_T due to long renewal intervals is impossible, as otherwise the asymptotic analysis would become rather tedious. (An exceptional case is that of boundaries for which the right endpoint is the lowest, i.e. g(T) = min_{0≤t≤T} g(t). For such boundaries we will show in Corollary 16.4.4 that all the asymptotic results obtained in this chapter for the probabilities P(S(T) ≥ x) remain valid for the probabilities P(G_T).) Namely, we consider classes of boundaries {g(t); t ∈ [0, T]} satisfying the conditions x ≤ inf_{t≥0} g(t) ≤ Kx (K > 1 is fixed), under the additional assumptions that x ≥ δT when q ≤ 0 and inf_{t≤T}(g(t) − qt) ≥ δT when q > 0. It will be shown that, cf. the results in §§ 3.6 and 4.6 on the asymptotics of the boundary-crossing probabilities for a random walk, the main term in the probability P(G_T), as x → ∞, has the form

    ∫_0^T V(g_*(t)) dH(t),

where g_*(t) := inf_{t≤s≤T} g(s) (Theorem 16.4.1). If, moreover, T → ∞ then the above integral is asymptotically equivalent to

    (1/a_τ) ∫_0^T V(g_*(t)) dt.
For linear boundaries g(t) = x + gt, we will obtain more complete results in § 16.5. For a finite large T, we can find the asymptotics of P(G_T) including the case when the ‘deterministic drift’ line qt can cross the boundary x + gt. As in our analysis of the distributions of S(T) and S̄(T), we also study the ‘threshold case’, when x + gT and qT are relatively close to each other. Note that the case of linear boundaries with T = ∞ is covered by Theorem 16.1.2.

In conclusion, we note that studying large deviation probabilities for S(T) in the case when the components of the vectors (τ_i, ξ_i) are dependent is also possible, but only for some special joint distributions of (τ_i, ξ_i) (those that are regularly
varying at infinity). The representation

    P(S(T) ≥ x) = ∑_{n=1}^∞ ∫_0^T P(T_n ∈ dt, S_n + qT ≥ x) P(τ_{n+1} > T − t)
(16.1.21)

reduces the problem to one of analysing the joint distribution of the vector of sums (T_n, S_n). Such an analysis, enabling one to find the asymptotics of (16.1.21), was given in Chapter 9 for some basic types of joint distributions of (τ_i, ξ_i) that display regular variation at infinity.

16.2 Large deviation probabilities for S(T) and S̄(T)

We now introduce the conditions [Q_T] and [Φ_T], which are respectively similar to condition [Q] of Chapter 3 and the conditions that we often used in Chapter 4.

[Q_T] One has F_+ ∈ R for α ∈ (1, 2), and at least one of the following two conditions holds true:
(i) F_−(t) ≤ cV(t), t > 0, and TV(x) → 0;
(ii) F_−(t) ≤ W(t), t > 0, W ∈ R, x → ∞ and T[V(x/ln x) + W(x/ln x)] < c.

Clearly, [Q_T] is simply condition [Q] with n replaced in it by T; see p. 138.

[Φ_T]
One has F_+ ∈ R for α > 2, d := Var ξ < ∞ and x → ∞; moreover, for some c > 1, we have

    x > c √( (α − 2) a_τ^{−1} d T ln T ).
Furthermore, we will also need a condition of the form [ · , <] on the distribution F_τ. It will be convenient for us to denote it by [<]:

[<] For some γ > 1, one has F_τ(t) ≤ V_τ(t) := t^{−γ} L_τ(t), where L_τ is an s.v.f.

It is evident that, by the Chebyshev inequality, for condition [<] to hold for some L_τ(t) = o(1) as t → ∞ it suffices that Eτ^γ < ∞.

Theorem 16.2.1. Let either condition [Q_T] or condition [Φ_T] be satisfied, and let δ > 0 be an arbitrary fixed number. Suppose also that (16.1.16) holds, i.e. that the mean trend in the process {S(t); t ≥ 0} is equal to zero. Then the following assertions hold true.

I. The case q ≤ 0
(i) As x → ∞, uniformly in T values that satisfy x ≥ δT, one has

    P(S(T) ≥ x) ∼ H(T)V(x).    (16.2.1)

If T → ∞ then the term H(T) on the right-hand side can be replaced by T/a_τ.
(ii) If condition [<] is met for a γ ∈ (1, 2) then (16.2.1) holds as x → ∞ uniformly in the range of T values satisfying, along with [Q_T] or [Φ_T], the condition x ≥ T^{1/γ} L_z(T) for a suitable s.v.f. L_z (a way of choosing the function L_z(t) is indicated in Lemma 16.2.8(ii) below).
(iii) If α ∈ (1, 2) and F_τ(t) = o(V(t)) as t → ∞, or if α > 2 and Eτ^2 < ∞, then (16.2.1) holds without any additional assumptions on x and T (apart from those in conditions [Q_T] and [Φ_T]).
II. The case q > 0
(i) The relation (16.2.1) holds as x → ∞ uniformly in T values that satisfy x_* := x − qT ≥ δT.
(ii) Let F_τ ∈ R. If

    T → ∞,    x_* → ∞    and    x_* = o(T),

then, for α ∈ (1, 2),

    P(S(T) ≥ x) ∼ (T/a_τ) V(x)

and, for α > 2,

    P(S(T) ≥ x) ∼ (T/a_τ) V(x) + x_*^2 V(x_*) V_τ(T) / (q^2 a_τ^2 (α − 1)(α − 2)).
(iii) If F_τ(t) = o(V(t)) as t → ∞ then the assertions of parts I(i), I(ii) hold without any additional assumptions on x and T (apart from those in conditions [Q_T] and [Φ_T] and in the above-mentioned parts of the theorem).
(iv) Let condition F_τ ∈ R hold for a γ ∈ (1, 2) ∪ (2, ∞) (see (16.1.4)) and let

    x_* = x − qT → −∞,    x ≥ T^{1/γ} L_z(T)

for a suitable s.v.f. L_z (a way of choosing the function L_z(t) is indicated in Lemma 16.2.8(ii); for γ > 2 the latter inequality always holds owing to conditions [Q_T] or [Φ_T]). Then

    P(S(T) ≥ x) ∼ (1/a_τ) [ T V(x) + (T − x/q) V_τ(x/q) ]    as T → ∞.
Remark 16.2.2. If the conditions α > 2 and [<] with L_τ = const are met in part I(ii) of the theorem then L_z(t) = c ln^{1−1/γ}(t), so that the relation (16.2.1) will hold as x → ∞ uniformly in T values that satisfy x ≥ cT^{1/γ} ln^{1−1/γ} T for a large enough c.
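The scaling in Remark 16.2.2 can be verified directly. The check below is our own: taking L_τ ≡ 1, i.e. V_τ(t) = t^{−γ}, one gets u(λ) = λ^{γ−1} and h(t) = t^{1/(γ−1)}, and then z_0 = (c_+ ln T)^{(γ−1)/γ} T^{1/γ} solves the asymptotic equation z_0 h(z_0/T) ∼ c_+ ln T exactly.

```python
import math

gamma, c_plus = 1.5, 10.0      # illustrative values: gamma in (1, 2), c_+ large

def h(t):
    # inverse of u(lam) = lam**(gamma - 1), the case V_tau(t) = t**(-gamma)
    return t ** (1.0 / (gamma - 1.0))

ratios = []
for T in (1e3, 1e6, 1e9):
    z0 = (c_plus * math.log(T)) ** ((gamma - 1.0) / gamma) * T ** (1.0 / gamma)
    ratios.append(z0 * h(z0 / T) / (c_plus * math.log(T)))  # should equal 1
print(ratios)
```

The ratios equal 1 up to floating-point error, confirming z_0 = cT^{1/γ} ln^{1−1/γ} T in this constant-L_τ case.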
A few comments regarding the second half of Theorem 16.2.1 are in order. As already stated in § 16.1, case II (the presence of a positive linear drift qt) is more complicated than case I. In case II(i) of the theorem, the level x is much higher than the point qT in whose neighbourhood we could find ourselves at time T owing to the linear drift over a very long renewal interval. In this case the asymptotics of the probability P(S(T) ≥ x) will remain of the same form as in case I: roughly speaking, to cross the level x one still needs the presence of a large jump ξ_j during the time interval [0, T].

In case II(ii) the level x is still above qT but the difference between the two values is relatively small: x_* = x − qT = o(T). In this transient situation, the form of the asymptotics of the probability P(S(T) ≥ x) is determined by the ‘thickness’ of the right tail of the distribution of ξ_j. For a ‘thick’ tail (when α ∈ (1, 2)) the asymptotics remain the same and do not depend on the distribution of the renewal interval length (of which we require that F_τ ∈ R). When the distribution tail of ξ_j is ‘thinner’ (α > 2), the asymptotics acquire an additional term which depends on the distribution of τ_k and which may or may not dominate. Its presence is caused by the following (relatively probable) possibility: one renewal interval, starting at the very beginning of the time interval [0, T], proves to be very long (and covers the time point t = T), the sum of the jumps ξ_j ‘accumulated’ during the initial time interval (prior to the start of that renewal interval) being large enough for the process to negotiate the ‘gap’ between qT and x. This, in its turn, is also a ‘large deviation’ and is due to a single ‘moderately large’ jump ξ_j exceeding the value x_* (hence the factor V(x_*)).
The presence of the factor x_*^2 can be explained by the fact that, roughly speaking, the number of such jumps, at the beginning of the trajectory of the process, that would have to correspond to these large values of τ_k and ξ_j is of order x_*, the sequences {τ_k} and {ξ_j} being independent of each other. Therefore the probability of the required combination of events will be of the order of magnitude of the product of x_* V(x_*) and x_* F_τ(T).

In case II(iv) the level x is already substantially lower than qT (owing to the condition x − qT → −∞), and hence the probability that the process will be above that level increases by a contribution corresponding to the presence of a single very large τ_k at the beginning of the interval [0, T]. The form of this additional term can be explained as follows. Roughly speaking, in the absence of large deviations in the random walk {S_n}, for the process S(t) to exceed the level x by the time T it suffices that one of the first (roughly) (T − x/q)/a_τ renewal intervals is long (> x/q). In this case, the trajectory of S(t) will oscillate about zero until the start of the long renewal interval, and then during that interval it will move along a straight line with slope coefficient q > 0 (and this will take the trajectory above the level x). After that (if t is still less than T), the trajectory will again oscillate at an approximately constant level (and therefore will still be above the level x by the time T).

The very narrow transient case x_* = O(1) (when the values of qT and x are
almost the same) proves to be quite difficult. This case is not considered in the present exposition and is not covered by Theorem 16.2.1.

As was the case for the ordinary random walks, the first-order asymptotics of the probabilities P(S̄(T) ≥ x) of large deviations of the maximum of the process turn out to be of the same form as those for P(S(T) ≥ x). This is due to the same reason: if (say, due to a large jump ξ_j) the process {S(t)} crosses a high level x somewhere inside the interval [0, T] then, with a high probability, it will stay in a ‘neighbourhood’ of the point S(t_j) until the end of the interval [0, T] (recall that our process has a zero mean trend). Hence the events {S̄(T) ≥ x} and {S(T) ≥ x} prove to be almost equivalent. The process can cross the level x during an interval of its linear growth, but even then the above remains true as well.

Theorem 16.2.3. All the assertions of Theorem 16.2.1 remain true, under the respective assumptions, for the probability P(S̄(T) ≥ x).

As we saw in § 4.8 in the case of random walks {S_n}, the asymptotics

    P(S_n ≥ x) ∼ nF_+(x),    P(S̄_n ≥ x) ∼ nF_+(x)    (16.2.2)
(valid in the respective deviation zones) extend to a wider distribution class than R (possibly for narrower deviation zones). Using the partial factorization approach, which is based on the relations (16.2.2), one can show that a similar situation occurs for the GRPs as well. We will give here just the following simple corollary (further results can be derived in a similar way, using Theorem 4.8.6 and bounds from Lemmata 16.2.6–16.2.8).

Theorem 16.2.4. Suppose that the distribution of ξ satisfies the conditions of Theorem 4.8.1 and the relations (4.8.2) hold true (with n replaced in them by T), and let Eτ^2 < ∞. Assume also that (16.1.16) is met, and that δ > 0 is an arbitrary fixed number. Then, uniformly in T such that x ≥ (δ + max{0, q})T, one has

    P(S(T) ≥ x) ∼ P(S̄(T) ≥ x) ∼ H(T)F_+(x).

If T → ∞ then the term H(T) on the right-hand side can be replaced by T/a_τ.

To prove the theorems we will need a few auxiliary results on the deviations of the renewal process ν(t) from its mean value H(t) ∼ t/a_τ. We will state these results as separate lemmata. First we will note the following.

Remark 16.2.5. To simplify the computations, we will be assuming in all the proofs of the present chapter that

    a_τ = 1.    (16.2.3)
Clearly, this does not lead to any loss of generality. One can easily see how to
make the transition from the results obtained under this assumption to the general case. One just has to scale the time by a factor a_τ, i.e. to make the following changes in the derived results: replace T by T/a_τ and F_τ(·) by F_τ(a_τ ·) (and, correspondingly, H(·) by H(a_τ ·) and so on, so that, for example, the value H(T) will remain unchanged). Further, q should be replaced by qa_τ, and g(·) by g(a_τ ·) (so that, say, in the case of a linear boundary g(t) = x + gt one replaces, in the relations obtained in the case (16.2.3), the coefficient g by ga_τ). Precisely this will be done when the results of our theorems are stated (convention (16.2.3) is not used therein).

Denote by

    t_k^0 := t_k − k,    k = 0, 1, 2, . . . ,

the centred random walk generated by the sequence {τ_i}, and by Λ_θ(v) the deviation function for the r.v. θ_j := 1 − τ_j:

    Λ_θ(v) := sup_λ {vλ − ln φ_θ(λ)},    φ_θ(λ) := Ee^{λθ} = e^λ Ee^{−λτ}.    (16.2.4)
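The deviation function (16.2.4) can be computed numerically for concrete interval laws. The sketch below is ours, for the illustrative case τ ~ Exp(1) (so a_τ = 1 and φ_θ(λ) = e^λ/(1 + λ) for λ > −1); in this case the supremum has the closed form Λ_θ(v) = −v − ln(1 − v) for v < 1, which we use to check a grid-search approximation.

```python
import math

def phi_theta(lam):
    # E exp(lam * theta) with theta = 1 - tau, tau ~ Exp(1); valid for lam > -1
    return math.exp(lam) / (1.0 + lam)

def Lambda_theta(v, grid=20000, lam_max=50.0):
    """Grid-search approximation of sup_lam {v*lam - ln phi_theta(lam)}, lam >= 0."""
    return max(v * (lam_max * i / grid) - math.log(phi_theta(lam_max * i / grid))
               for i in range(grid + 1))

for v in (0.1, 0.5, 0.9):
    exact = -v - math.log(1.0 - v)      # closed form for the Exp(1) case
    print(v, Lambda_theta(v), exact)
```

For v > 0 the optimal λ equals v/(1 − v) > 0, so restricting the grid search to λ ≥ 0 loses nothing; the quadratic lower bound Λ_θ(v) ≥ v²/2, used in part (ii) of the lemma below, is also visible in these values.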
Lemma 16.2.6. (i) For all z ≥ 0 and n ≥ 1,

    P( min_{k≤n} t_k^0 ≤ −z ) ≤ e^{−nΛ_θ(z/n)}.    (16.2.5)

In particular,
(ii) if Eτ^2 < ∞ then Λ_θ(v) ≥ cv^2 for v > 0 and some c > 0. Therefore, for all z ≥ 0, n ≥ 1,

    P( min_{k≤n} t_k^0 ≤ −z ) ≤ e^{−cz^2/n}.    (16.2.6)

(iii) Assume that condition [<] holds for a γ ∈ (1, 2). Then u(λ) := λ^{−1} V_τ(λ^{−1}) is a regularly varying function converging to zero as λ → 0. The generalized inverse of u(λ),

    h(t) = u^{(−1)}(t) := sup{λ : u(λ) ≤ t},    (16.2.7)

has the form h(t) = t^{1/(γ−1)} L_h(t), where L_h(t) is an s.v.f. as t → 0. For n ≥ 1 and z > 0, one has

    P( min_{k≤n} t_k^0 ≤ −z ) ≤ e^{−czh(z/n)}.    (16.2.8)
Proof. (i) The inequality (16.2.5) follows immediately from Lemma 2.1.1 (it is obtained by minimizing the right-hand side of (2.1.5) with respect to μ, see p. 82).
(ii) If Eτ^2 < ∞ then there exists Λ_θ''(0) = 1/Var τ. Moreover, as is well known, the function Λ_θ(v) is convex (being the Legendre transform of a convex function, see e.g. § 8, Chapter 8 of [49]). Hence Λ_θ(v) ≥ cv^2 for v ∈ [0, 1] (Λ_θ(v) = ∞ for v > 1). This proves (16.2.6).
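A Monte Carlo illustration of the bound (16.2.5) just proved (our own check, again for the illustrative case τ ~ Exp(1), where Λ_θ(v) = −v − ln(1 − v)): the empirical frequency of {min_{k≤n} t_k^0 ≤ −z} should not exceed e^{−nΛ_θ(z/n)}.

```python
import math
import random

random.seed(7)
n, z, trials = 100, 20.0, 4000
hits = 0
for _ in range(trials):
    s, low = 0.0, 0.0
    for _ in range(n):
        s += random.expovariate(1.0) - 1.0   # increment of t_k^0 = t_k - k
        low = min(low, s)                    # running minimum of the centred walk
    hits += low <= -z
emp = hits / trials

v = z / n
bound = math.exp(-n * (-v - math.log(1.0 - v)))  # e^{-n Lambda_theta(z/n)}
print(emp, bound)
```

The exponential Chebyshev-type bound is of course not tight; for these parameters the empirical probability comes out well below the bound e^{−nΛ_θ(0.2)} ≈ 0.099.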
(iii) Here the asymptotics of Λ_θ(v) as v → 0 will differ from that in the case Eτ^2 < ∞. Integrating by parts yields the representation

    Ee^{−λτ} = 1 − λ ∫_0^∞ F_τ(t) e^{−λt} dt = 1 − λ + λ^2 ∫_0^∞ F_τ^I(t) e^{−λt} dt,
where, by virtue of condition [<] and Theorem 1.1.4(iv),

    F_τ^I(t) := ∫_t^∞ F_τ(u) du ≤ ∫_t^∞ V_τ(u) du = ((1 + ε(t))/(γ − 1)) t V_τ(t),    ε(t) = o(1)

as t → ∞. Therefore, as λ → 0,

    ∫_0^∞ F_τ^I(t) e^{−λt} dt ≤ (1/(γ − 1)) ∫_0^∞ (1 + ε(t)) t V_τ(t) e^{−λt} dt
        = ((1 + o(1))/(γ − 1)) ∫_0^∞ t V_τ(t) e^{−λt} dt ∼ Γ(2 − γ) V_τ(1/λ) / ((γ − 1) λ^2)
by Theorem 1.1.5. Thus, for c > 0 large enough, we get

    Ee^{−λτ} ≤ 1 − λ + cV_τ(1/λ) ≤ e^{−λ + cV_τ(1/λ)},    λ ∈ [0, 1],

and therefore

    φ_θ(λ) = e^λ Ee^{−λτ} ≤ e^{cV_τ(1/λ)},    λ ∈ [0, 1].

Again using Lemma 2.1.1, we obtain

    P( min_{k≤n} t_k^0 ≤ −z ) ≤ e^{−λz + cnV_τ(1/λ)}.    (16.2.9)
The assertions of the theorem regarding the regular variation at zero of the function u(λ) = λ^{−1} V_τ(λ^{−1}) and its inverse h(t) = u^{(−1)}(t) are obvious. Further, for λ = h(z/(cnγ)) the exponent on the right-hand side of (16.2.9) equals

    −λz + cnV_τ(1/λ) = (−z + cnu(λ))λ ≤ (−z + z/γ) h(z/(cnγ))
                     = −(z(γ − 1)/γ) h(z/(cnγ)) ≤ −c_1 z h(z/n)

for z/n ≤ 1, since h(t) is an r.v.f. as t → 0. This establishes (16.2.8) in the case z ≤ n. For z > n, the inequality (16.2.8) is trivial. Lemma 16.2.6 is proved.

The next result follows from Corollary 3.1.2 and Remark 4.1.5.
Lemma 16.2.7. If condition [<] holds for a γ ∈ (1, 2) ∪ (2, ∞) then there exists a function ε(v) ↓ 0 as v ↓ 0 such that

    sup_{x,n: nV_τ(x)≤v}  P( max_{k≤n} t_k^0 ≥ x ) / (nV_τ(x))  ≤ 1 + ε(v).

When γ = 2, this relation holds under the additional restriction that x ≥ c√(n ln n), where c > 0 is fixed.

In the following lemma, V(t) is an arbitrary r.v.f.

Lemma 16.2.8. (i) Let n_+ := T + z_0 and z_0 = εx, where ε = ε(x) > 0, and let x ≥ δT for an arbitrary fixed δ > 0. Then, for any k ≥ 0 and m ≥ 0, uniformly in the zone x ≥ δT one has the relation

    ∑_{n>n_+} n^k P(ν(T) = n) = o((TV(x))^m)    as x → ∞    (16.2.10)
provided that ε = ε(x) → 0 slowly enough as x → ∞.
(ii) Let condition [<] be met with a γ ∈ (1, 2) and let n_+ = T + z_0, where z_0 is a solution to the asymptotic equation

    z_0 h(z_0/T) ∼ c_+ ln T    (16.2.11)
for a sufficiently large c_+; the function h(t) is defined in (16.2.7). Then z_0 = T^{1/γ} L_z(T), where L_z is an s.v.f., and for any k ≥ 0 and m ≥ 0 the relation (16.2.10) holds uniformly in the zone x ≤ cT, where c > 0 is an arbitrary fixed number.

Observe that, if condition [<] is satisfied with L_τ ≡ const then z_0 = cT^{1/γ} ln^{1−1/γ} T.

Proof of Lemma 16.2.8. (i) Changing variables and integrating by parts yields the bound

    ∑_{n>n_+} n^k P(ν(T) = n) ≤ ∫_{n_+}^∞ u^k P(ν(T) ∈ du)
        = (T + z_0)^k P(ν(T) > T + z_0) + k ∫_{z_0}^∞ (T + z)^{k−1} P(ν(T) > T + z) dz.    (16.2.12)
Assuming for simplicity that T + z is an integer, we note that

    {ν(T) > T + z} ⊆ {t_{T+z} < T} = { ∑_{j≤T+z} θ_j > z },    θ_j = 1 − τ_j.
Therefore, using the deviation function (16.2.4), we get from the Chebyshev inequality the following bound: for z ≥ z_0,

    P(ν(T) > T + z) ≤ P( ∑_{j=1}^{T+z} θ_j > z )
        ≤ exp{ −(T + z) Λ_θ(z/(T + z)) } ≤ exp{ −(T + z) Θ(εδ) },    (16.2.13)
where Θ(u) := Λ_θ(u/(1 + u)) > 0 for u > 0 and we have used the fact that the function Θ(u) is increasing, u := z/T ≥ z_0/T = εx/T ≥ εδ. Hence the right-hand side of (16.2.12) does not exceed

    (T + εx)^k e^{−(T+εx)Θ(εδ)} + k ∫_{εx}^∞ (T + z)^{k−1} e^{−(T+z)Θ(εδ)} dz = o((TV(x))^m),    (16.2.14)

when ε → 0 slowly enough.
(ii) In this case the inequality in the first line of (16.2.13) and Lemma 16.2.6(iii) give the bound
    P(ν(T) > T + z) ≤ exp{ −czh(z/(T + z)) }.    (16.2.15)

Since one can assume that z_0 < T, we see that the right-hand side of (16.2.12) is bounded from above by

    c_1 T^k e^{−c_2 z_0 h(z_0/T)} + c_1 T^{k−1} e^{−c_2 z_0 h(z_0/T)} ∫_{z_0}^T dz + c_1 ∫_T^∞ z^{k−1} e^{−c_3 z} dz
        ≤ 2c_1 T^k exp{ −(c_2 c_+/2) ln T } + c_4 e^{−c_3 T/2} = o(T^{−c_5})

for any given c_5, once c_+ is large enough. As one has x ≤ cT, the lemma is proved.
Proof of Theorem 16.2.1. As stated earlier (on p. 548), when considering the event {S_n ≥ x}, one could make use of partial factorization. Letting S_n^0 := S_n − a_ξ n ≡ S_n + qn (the last relation holds by virtue of (16.2.3) and (16.1.16)), rewrite (16.1.15) as

    P(S(T) ≥ x) = ∑_{n≥0} P( S_n^0 ≥ x − q(T − n) ) P(ν(T) = n)
                = ∑_{n<n_−} + ∑_{n_−≤n≤n_+} + ∑_{n>n_+},    (16.2.16)

where the values n_± are chosen according to the situation. It will be convenient
to estimate these three sums separately and, depending on the situation, to do this in different ways. The contribution of the last sum will always be negligibly small and the middle sum will reduce to the main term of the form H(T)V(x), while the first sum will contribute substantially only in those cases when the presence of a long renewal interval can ‘help’ the trajectory of the process S(t) to exceed the level x by the time T.

I. The case q ≤ 0.
(i) In this part we put n_± := T ± εx, where ε = ε(x) → 0 as x → ∞ slowly enough that, uniformly in the specified zone of the T-values, one has

    P( ν(T) ∉ [n_−, n_+] ) = o(1)    as x → ∞    (16.2.17)

(since x ≥ δT, this is always possible owing to the law of large numbers for renewal processes). As

    x − q(T − n) ≥ x ≥ δT > δn    for n < n_−,

in this case by Corollaries 3.1.2 and 4.1.4 one has

    P( S_n^0 ≥ x − q(T − n) ) = O(nV(x)) = O(TV(x)),    n < n_−,    (16.2.18)

and hence we obtain from (16.2.17) that

    ∑_{n<n_−} = o(TV(x)).

Further, by Lemma 16.2.8(i) the following relation holds uniformly in the zone x ≥ δT:

    ∑_{n>n_+} ≤ P(ν(T) > n_+) = o(TV(x)),    (16.2.19)
so that it remains only to estimate the middle sum in the second line of (16.2.16). Since without loss of generality we can assume that |qε| < 1/2 and εδ ≤ 1, we have for n ≤ n_+ that

    x − q(T − n) ≥ (1 + qε)x ≥ x/2 ≥ (δ/4)(T + x/δ) ≥ (δ/4) n_+ ≥ (δ/4) n.

Therefore, by Theorems 3.4.1 and 4.4.1,

    ∑_{n_−≤n≤n_+} = (1 + o(1)) ∑_{n_−≤n≤n_+} nV(x − q(T − n)) P(ν(T) = n)
                  = (1 + o(1)) V(x) ∑_{n_−≤n≤n_+} n P(ν(T) = n)
                  = (1 + o(1)) V(x) ∑_{n>0} n P(ν(T) = n) + o(TV(x)) ∼ H(T)V(x)    (16.2.20)

using the bounds from Lemma 16.2.8(i) and from (16.2.17). If T → ∞ then H(T) ∼ T by the renewal theorem. Part I(i) of the theorem is proved.
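The renewal facts invoked here, concentration of ν(T) around T/a_τ and H(T) = Eν(T) ∼ T/a_τ, are easy to see in a simulation. The following sketch is illustrative only (Exp(1) intervals, so a_τ = 1 and ν(T) is Poisson-distributed):

```python
import random

random.seed(3)

def nu(T):
    """Number of renewals in [0, T] for Exp(1) renewal intervals."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > T:
            return count
        count += 1

T, N = 200.0, 2000
vals = [nu(T) for _ in range(N)]
H_hat = sum(vals) / N            # Monte Carlo estimate of H(T) = E nu(T)
print(H_hat / T)                  # should be close to 1/a_tau = 1
```

The sample mean of ν(T)/T clusters tightly around 1, which is exactly the uniform integrability and law-of-large-numbers input used in the proof above.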
(ii) It follows from I(i) that here we could restrict ourselves to considering the case x ≤ cT (so that automatically T → ∞). Put n_± = T ± εz_0 (without loss of generality, one can assume for simplicity that n_± are integers), where ε = ε(x) → 0 slowly enough as x → ∞ and z_0 = T^{1/γ} L_z(T) is defined in Lemma 16.2.8(ii), and then again turn to the representation (16.2.16). For n < n_− we have x − q(T − n) ≥ x, and (16.2.18) will hold by virtue of the conditions imposed on x. If we show that

    TV_τ(z_0) → 0    (16.2.21)

then, when ε tends to zero slowly enough, we will also have TV_τ(εz_0) → 0 and therefore, by Lemma 16.2.7,

    P(ν(T) < n_−) = P( t_{T−εz_0}^0 > εz_0 ) = O(TV_τ(εz_0)) = o(1).    (16.2.22)

Thus for the first sum on the right-hand side of (16.2.16) we have

    ∑_{n<n_−} = O( TV(x) P(ν(T) < n_−) ) = o(TV(x)).
Now we will verify (16.2.21). To this end, consider a sequence of i.i.d. r.v.’s {τ̂_j} such that

    P(τ̂_j ≥ t) = V_τ(t),    t ≥ t_0 > 0.

On the one hand, as for the original r.v.’s τ_j, we have the bound (16.2.8) for the sums t̂_k^0 := ∑_{j=1}^k (τ̂_j − Eτ̂_j). Therefore, using an argument similar to that in the proof of Lemma 16.2.8(ii), for any given c > 0 one has

    P( t̂_T^0 ≤ −z_0 ) = o(T^{−c}),    T → ∞.    (16.2.23)

On the other hand, putting b_τ(T) := V_τ^{(−1)}(1/T) one can easily see that, by virtue of Theorem 1.5.1, the distribution of t̂_T^0/b_τ(T) converges as T → ∞ to the stable law F_{γ,1} with parameters γ and 1. Since the support of that law is the whole real axis when γ > 1, this, together with (16.2.23), means that z_0 ≫ b_τ(T) and hence (16.2.21) holds true.

Further, to prove (16.2.19) in the case under consideration we put m_+ := T + z_0 and write

    ∑_{n>n_+} = ∑_{n_+<n≤m_+} + ∑_{n>m_+}.    (16.2.24)
Owing to Corollary 3.1.2 and Lemma 16.2.6(iii), the first sum on the right-hand side does not exceed

    ∑_{n_+<n≤m_+} P( S_n^0 ≥ x − |q|z_0 ) P(ν(T) = n) ≤ ∑_{n_+<n≤m_+} P( S_n^0 ≥ cx ) P(ν(T) = n)
        = O( TV(x) P( t_{T+εz_0}^0 < −εz_0 ) ) = O( TV(x) exp{−c_1 εz_0 h(εz_0/T)} )
        = O( TV(x) exp{−c_2 ε^{γ/(γ−1)} ln T} ) = o(TV(x))    (16.2.25)

(we have also used (16.2.11) and the fact that h(εz_0/T) ∼ ε^{1/(γ−1)} h(z_0/T) provided that ε → 0 slowly enough, which holds since h is an r.v.f. as t → 0), while the second sum is o(TV(x)) owing to Lemma 16.2.8(ii). Finally, for n ∈ [n_−, n_+] one has x − q(T − n) ∼ x in the specified zones of x values, and therefore one can evaluate the middle sum in the second line of (16.2.16) in the same way as in part I(i), but this time using (16.2.22) and Lemma 16.2.8(ii) to replace the sum ∑_{n_−≤n≤n_+} by ∑_{n>0}.

(iii) First consider the case where α ∈ (1, 2) and F_τ(t) = o(V(t)) as t → ∞. We can again assume that x < cT (the case x ≥ δT was dealt with in part I(i)) and take n_± = (1 ± ε)T, ε = ε(x) → 0 as x → ∞. It is obvious that the distribution tail of τ is dominated by an r.v.f. V_τ(t) = o(V(t)) as t → ∞ and also that, if ε → 0 slowly enough, then condition [Q_T] will still hold when we replace x by εT and the function V by V_τ. Hence from Lemma 16.2.7 we obtain

    P(ν(T) ≥ n_+) = P(t_{n_+} ≤ T) = P(t_{n_+}^0 ≤ −εT) = O(TV_τ(εT)) = o(TV(x)),
    P(ν(T) < n_−) = P(t_{n_−} > T) = P(t_{n_−}^0 > εT) = O(TV_τ(εT)) = o(TV(x))

(when ε tends to 0 slowly enough), so that P( ν(T) ∉ [n_−, n_+) ) = o(TV(x)). Moreover, for n ∈ [n_−, n_+) we have P( S_n^0 ≥ x − q(T − n) ) ∼ nV(x). Now the desired assertion can easily be obtained, using computations similar to those in (16.2.20). We will only note that, in order to replace the sum

    ∑_{n_−≤n≤n_+} n P(ν(T) = n)    by    ∑_{n>0} n P(ν(T) = n),
we will have to prove the required smallness of

    ∑_{n>n_+} n P(ν(T) = n) = o(T)    (16.2.26)

in a somewhat different way. Namely, since as T → ∞ one has the convergence ν(T)/T → 1 a.s. by the law of large numbers for renewal processes, and since Eν(T)/T = H(T)/T → 1 by the renewal theorem, we conclude that the r.v.’s ν(T)/T ≥ 0 are uniformly integrable. Therefore, when ε tends to 0 slowly enough, we get

    E( ν(T)/T ; ν(T)/T > 1 + ε ) → 0    as T → ∞,

which is equivalent to (16.2.26).

When α > 2 and Eτ^2 < ∞, put n_± = T ± ε√(T ln T) and let ε = ε(x) tend to 0 slowly enough. Then again the relation (16.2.18) clearly holds for n < n_−. Combining this with the observation that

    P( ν(T) ∉ [n_−, n_+] ) = P( |ν(T) − T| > ε√(T ln T) ) → 0    (16.2.27)

by the central limit theorem for renewal processes (see e.g. § 5, Chapter 9 of [49]), we have ∑_{n<n_−} = o(TV(x)), while the relation ∑_{n>n_+} = o(TV(x)) is established in the same way as in the proof of part I(ii), but this time we choose m_+ = T + c_1 √(T ln T), where c_1 is large enough, and use Lemma 16.2.6(i). Finally, for n ∈ [n_−, n_+] we again have x − q(T − n) ∼ x when x > c√(T ln T) (condition [Φ_T]), and the proof is completed in the same way as in the previous parts of the theorem.

II. The case q > 0.
(i) If x − qT ≥ δT then all the arguments from the proof of part I(i) remain valid without any significant changes.
(ii) Turning to the representation (16.2.16) with n_± = (1 ± ε)T (assuming for simplicity that T and εT are integers; similar assumptions will be tacitly made in what follows as well), one can easily verify that, as in the proof of part I(i), one has ∑_{n>n_+} = o(TV(x)) (note that x ∼ qT in the case under consideration) and ∑_{n_−≤n≤n_+} = (1 + o(1)) TV(x). Thus it remains to consider

    ∑_{n<n_−} = P( S_{ν(T)}^0 ≥ x_* + qν(T), ν(T) < εT )
              + P( S_{ν(T)}^0 ≥ x_* + qν(T), εT ≤ ν(T) < n_− ).    (16.2.28)
16.2 Large deviation probabilities for S(T ) and S(T )
563
Clearly, the last probability does not exceed
P(max_{n≤n_−} S^0_n ≥ qεT) P(ν(T) < n_−) ≤ cT V(εT) P(t^0_{(1−ε)T} > εT) ≤ cT V(εT) · T(εT)^{−γ} = o(T V(T))   (16.2.29)
when ε tends to 0 slowly enough (the first two relations in (16.2.29) follow from Corollaries 3.1.2 and 4.1.4 and Lemma 16.2.7 respectively). To estimate the first term on the right-hand side of (16.2.28), fix a small enough δ > 0 and introduce events C^{(k)}, k ≥ 0, meaning that exactly k of the r.v.'s τ_j, j ≤ εT, satisfy τ_j ≥ δT whereas all the rest satisfy τ_j < δT. Clearly, the probability we require is equal to
P(S(T) ≥ x, ν(T) < εT) = ∑_{0 ≤ k ≤ 1/δ+1} P(S(T) ≥ x, ν(T) < εT; C^{(k)}).   (16.2.30)
We will show that the main contribution to the sum of probabilities on the right-hand side comes from the second summand (k = 1), which corresponds to the presence of exactly one long renewal interval τ_j (covering most of the segment [0, T]). It is this large τ_j that will be responsible for 'driving' the trajectory of the process S(t) = S_{ν(t)} + qt along a straight line, with slope coefficient q > 0, beyond the level x = qT + x_* by time T. The total contribution of all the other terms (k ≠ 1) will be negligibly small relative to T V(x), and we already know that P(S(T) ≥ x) ≥ (1 + o(1))T V(x).
We will start with the case k = 0. By Theorems 3.1.1 and 4.1.2, for any given b > 0 we have
P(S(T) ≥ x, ν(T) < εT; C^{(0)}) ≤ P(t^0_{εT} > (1 − ε)T, τ_i < δT, i ≤ εT) = O((εT Vτ(T))^m) = O(T^{−b})   (16.2.31)
provided that δ < 1/m, where m is large enough. Therefore this probability is o(T V(x)).
The main contribution to the sum (16.2.30), as we have already said, is from the term with k = 1. This case requires a detailed analysis. Consider the events
C_j^{(1)} := {τ_j ≥ δT; τ_i < δT, i ≤ εT, i ≠ j},   1 ≤ j ≤ εT,   (16.2.32)
so that C^{(1)} = ⋃_{j≤εT} C_j^{(1)}. We are interested in the second term in (16.2.30),
namely,
∑_{j<εT} P(S^0_{ν(T)} ≥ x_* + qν(T), ν(T) < εT; C_j^{(1)})
= O(∑_{j<εT} P(ν(T) < j, τ_i < δT, i ≤ j))
+ ∑_{j<εT} P(S^0_j ≥ x_* + qj) P(ν(T) = j; C_{j+1}^{(1)})
+ ∑_{j<εT} ∑_{j≤n<εT} P(S^0_n ≥ x_* + qn) P(ν(T) = n; C_j^{(1)}).   (16.2.33)
The first term on the right-hand side of (16.2.33) corresponds to the situation when the first overshoot t_{n+1} > T occurs for a 'small' n (< εT) and all the τ_i, i ≤ n + 1, are 'small' (not exceeding δT). As we know, the probability of such an event is very small. The second term corresponds to the case when the first overshoot t_{j+1} > T occurs due to a very long last renewal interval τ_{j+1}, all the previous τ_i, i ≤ j, being less than δT. This situation, as one could expect, will be the most probable and will give the main contribution to the probability we require. Finally, the last term on the right-hand side of (16.2.33) corresponds to the following case: the first overshoot t_{n+1} > T occurs on a short renewal interval τ_{n+1} < δT, but one of the previous renewal intervals τ_j was long (τ_j ≥ δT) while all the rest were short (τ_i < δT for i ≤ n, i ≠ j). Since, as we know, the sum of these short τ_i will be small with a high probability, for this event to occur we will need the value of τ_j to hit a relatively small neighbourhood of the point T, which is unlikely when the distribution of τ_j is regular.
Now we will formally estimate all the terms on the right-hand side of (16.2.33). For any fixed b > 0, the first term on the right-hand side of (16.2.33) will not exceed
c ∑_{j<εT} P(t^0_j > T − j, τ_i < δT, i ≤ j) ≤ c1 ∑_{j<εT} (jVτ(T))^m ≤ c2 (εT)^{m+1} Vτ(T)^m = o(T^{−b})   (16.2.34)
by virtue of Theorems 3.1.1 and 4.1.2, when δ < 1/m and m is large enough. Therefore this term is negligible.
The last term in (16.2.33) is also negligible. Indeed, one can easily see that
sup_n P(S^0_n ≥ x_* + qn) ≤ P(sup_n (S^0_n − qn) ≥ x_*) ≍ x_* V(x_*)   (16.2.35)
by Theorem 16.1.3, the maximum being attained at a value n ≍ x_* (we write a ≍ b if a = O(b) and b = O(a)), and also that, for n > j > x_*, one has
P(S^0_n ≥ x_* + qn) ∼ nV(x_* + qn) ≤ cjV(x_* + qj).
Therefore, the above-mentioned term does not exceed
c ∑_{j≤x_*} x_* V(x_*) P(j ≤ ν(T) < εT; C_j^{(1)}) + c ∑_{x_*<j<εT} jV(x_* + qj) P(j ≤ ν(T) < εT; C_j^{(1)}).   (16.2.36)
Now note that {j ≤ ν(T) < εT} = {t_j ≤ T < t_{εT}}, so that in the case when t_{j−1} + (t_{εT} − t_j) < 2εT, for this event to occur one requires (1 − 2ε)T ≤ τ_j < T. Hence in each sum in (16.2.36) we can give the following upper bound for the probabilities:
P(j ≤ ν(T) < εT; C_j^{(1)})
≤ P(t_{j−1} + (t_{εT} − t_j) ≥ 2εT; C_j^{(1)}) + P(t_{j−1} + (t_{εT} − t_j) < 2εT, (1 − 2ε)T ≤ τ_j < T)
≤ P(t^0_{εT−1} ≥ εT + 1, τ_i < δT, i < εT) + P((1 − 2ε)T ≤ τ_j < T)
= o(T^{−b}) + Vτ((1 − 2ε)T) − Vτ(T) = o(Vτ(T))
similarly to (16.2.34) and according to the assumption on the regular behaviour of the tail Vτ(t). Therefore the entire sum (16.2.36) is negligibly small relative to
Vτ(T) (x_*² V(x_*) + ∑_{x_*<j<εT} jV(x_* + qj)).   (16.2.37)
Here by Theorem 1.1.4(iv) we get for α > 2
∑_{x_*<j<εT} jV(x_* + qj) ≤ c ∫_{x_*}^∞ t^{−α+1} L(x_* + qt) dt ∼ c1 x_*^{−α+2} L(x_*) = c1 x_*² V(x_*),
and for α ∈ (1, 2)
∑_{x_*<j<εT} jV(x_* + qj) ≤ c ∫_1^{εT} t^{−α+1} L(x_* + qt) dt ∼ c1 (εT)² V(εT).   (16.2.38)
Using a bound similar to (16.2.29) we find that the corresponding contribution to the sum (16.2.37) will be o(T V(T)). Thus we have shown that the last sum in (16.2.33) is o(x_*² V(x_*)Vτ(T) + T V(T)).
Now consider the middle sum on the right-hand side of (16.2.33). As above,
one can easily show that for j < εT and for any fixed b > 0 one has
P(ν(T) = j; C_{j+1}^{(1)}) = P(t_j < 2εT, t_{j+1} > T; C_{j+1}^{(1)}) + o(T^{−b})
= (1 + o(1)) Vτ(T) P(τ_1 < δT, …, τ_{εT} < δT) + o(T^{−b}) ∼ Vτ(T),   (16.2.39)
since P(τ_1 < δT)^{εT} = (1 − Vτ(δT))^{εT} = exp{−εT Vτ(δT)(1 + o(1))} → 1. Hence the above-mentioned middle sum is
(1 + o(1)) Vτ(T) ∑_{j<εT} jV(x_* + qj).
It follows from (16.2.38) that for α ∈ (1, 2) this expression is o(T V(T)) (when ε → 0 slowly enough). For α > 2 one has
∑_{j<εT} jV(x_* + qj) ∼ (1/q) ∫_0^{εT} [(x_* + qt)V(x_* + qt) − x_* V(x_* + qt)] dt
= q^{−2} ∫_{x_*}^{x_*+qεT} [uV(u) − x_* V(u)] du ∼ q^{−2} [x_*² V(x_*)/(α − 2) − x_*² V(x_*)/(α − 1)] = x_*² V(x_*)/(q²(α − 1)(α − 2))
by Theorem 1.1.4(iv), when ε → 0 slowly enough (so that x_* = o(εT)). Thus we have shown that, under our assumptions, one has
P(S(T) ≥ x, ν(T) < εT; C^{(1)}) = (1 + o(1)) x_*² V(x_*)Vτ(T)/(q²(α − 1)(α − 2)) + o(T V(T)),   (16.2.40)
where, clearly, for α ∈ (1, 2) the first term on the right-hand side is o(T V(T)), so that its 'power component' has the form
x_*^{2−α} T^{−γ} = (x_*/T)^{2−α} T^{2−γ−α} = o(T^{−(γ−1)} × T^{1−α}),   γ > 1.
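The constant obtained in the last display can be checked numerically. The sketch below is an added illustration, not part of the original text; the values of α, q and x are arbitrary assumptions, and V(t) = t^{−α} plays the role of a regularly varying tail with L ≡ 1:

```python
# Numerical check (illustration with arbitrary parameters) of the relation
#   sum_{j >= 1} j V(x + q j)  ~  x^2 V(x) / (q^2 (alpha - 1)(alpha - 2)),
# valid for alpha > 2, here with the pure power tail V(t) = t**(-alpha).
alpha, q, x = 3.0, 2.0, 50.0

def V(t):
    return t ** (-alpha)

s = sum(j * V(x + q * j) for j in range(1, 200_000))  # truncated series
closed_form = x**2 * V(x) / (q**2 * (alpha - 1) * (alpha - 2))

print(round(s / closed_form, 2))  # -> 1.0
```

The discrete sum tracks the integral approximation used in the proof very closely once x is large compared with the lattice step.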
Now consider the term with k = 2 in the sum (16.2.30) (it corresponds to the presence of two large values among the τ_j, j ≤ εT). In this case, the random walk {t_n} can first cross the level T in the following alternative ways: (1) by the first of the two 'large' jumps τ_j ≥ δT, or (2) by the second of these jumps, or (3) by one of the 'small' jumps τ_j < δT.
In case (1) it can be shown as before that the probability of the event
{S(T) ≥ x, τ_1 < δT, …, τ_{ν(T)} < δT, τ_{ν(T)+1} ≥ δT, ν(T) < εT}
will be of the order of the value (16.2.40) that we found in the case k = 1. Since the event C^{(2)} also includes another large jump τ_n with n > ν(T) + 1, we will obtain, owing to the mutual independence of the r.v.'s, a value of the order of magnitude of the quantity (16.2.40) multiplied by T Vτ(T). In a similar way one can show that the contribution corresponding to case (2) will also be small. In case (3), the probability that the random walk {t_n} will exceed the level T in the absence of large jumps τ_j > δT, j ≤ ν(T) < εT, will be very small (by Theorems 3.1.1 and 4.1.2).
In the same way one can consider the case k ≥ 3. Since the total number of probabilities in the sum (16.2.30) corresponding to that case is bounded, their total contribution will be small relative to (16.2.40). This observation completes the proof of part II(ii) of the theorem.
(iii) In this case, the only difference from the argument proving part I of the theorem is that we have to give another bound for ∑_{n<n_−}:
P(ν(T) < n_−) ≤ P(max_{k≤T} t^0_k > εx) = O(T Vτ(εx)) = o(T V(x)),   (16.2.41)
when ε → 0 slowly enough. The case x < δT can be considered in a way similar to that used to prove part I(iii). (iv) In this case, one can estimate the sum nn− for n− = T −εx in the same way as above, which leads to the expression (1 + o(1))T V (x) (since T → ∞ in the case under consideration, we have H(T ) ∼ T ). However, the sum n 0 is fixed First we obtain an upper bound for n x/2) = O(T V (x))
(16.2.42)
by virtue of Corollaries 3.1.2 and 4.1.4. Since P(ν(T ) < n− ) = o(1) owing to (16.2.22), by virtue of the previous bound and the independence of the sequences
{τj } and {ξj } we have 0 = P Sν(T ) + q(T − ν(T )) x, ν(T ) < n− , Y x/2 + o(T V (x)), n
and it remains to estimate the probability on the right-hand side of this equality. Denoting it by P1 , we get x P1 P Y + q(T − ν(T )) > x, Y 2 x−Y x = P ν(T ) < T − ,Y q 2 x x − Y = P t0T −(x−Y )/q > ,Y q 2 x/2 x−z = P t0T −(x−z)/q > q
P(Y ∈ dz)
0
x/2 ∼ 0
x−z T− q
Vτ
x−z q
P(Y ∈ dz)
using the asymptotic relation from Theorem 3.4.1 for distribution tails of the sums t0n (condition [Q] holds for the r.v.’s τ for x T 1/γ Lz (T ) owing to (16.2.21)). Since, when ε tends to 0 slowly enough, the probability P(Y > εx) vanishes by x/2 εx Corollaries 3.1.2 and 4.1.4, we obtain that 0 ∼ 0 and hence (16.2.43) P1 (T − x/q) Vτ (x/q)(1 + o(1)). At the same time, as P minkT Sk0 −εx = o(1) when ε tends to 0 slowly enough (for α > 2 this follows from the Kolmogorov inequality and for α ∈ (1, 2) it follows from the bounds of Corollary 3.1.3), we obtain
0 0 P Sν(T ) + q(T − ν(T )) x, ν(T ) < n− , min Sk > −εx n
kT
P −εx + q(T − ν(T )) > x, ν(T ) < n− P min Sk0 > −εx kT 0 ∼ P ν(T ) < T − (1 − ε)x/q = P tT −(1−ε)x/q > (1 − ε)x/q ∼ (T − x/q) Vτ (x/q),
where the last relation follows from Theorems 3.4.1 and 4.4.1 (for γ ∈ (1, 2) one should use (16.2.21)) and from the fact that T − (1 − ε)x/q → ∞ in the case under consideration when −x∗ = q(T − x/q) > δx. Thus, ∼ (T − x/q) Vτ (x/q), n
which completes our analysis of the case x∗ < −δx. • The case x∗ = o(x). Note that T ∼ x/q in this case, and put nT := T − x/q,
nT + := (1 + ε)nT ,
where ε tends to 0 slowly enough as x → ∞. As in the first case, we simply have to estimate the sum
∑_{n<n_−} = ∑_{n<n_{T+}} + ∑_{n_{T+} ≤ n < n_T+εT} + ∑_{n_T+εT ≤ n < n_−}.   (16.2.44)
We begin with the second sum on the right-hand side. Observing that n − nT nε/(1 + ε) for n nT + and also that {ν(T ) < z} = {ν(T ) < z} = {tz > T }, where z = min{k z : k ∈ Z}, we obtain = P Sn0 q(n − nT ) P(ν(T ) = n) nT + n
nT + n
P Sn0 qnε/(1 + ε) P(ν(T ) = n)
nT + n
c
zV (εz) dz P(ν(T ) < z) nT + nT+εT
=c
zV (εz) dz P t0z > T − z
nT +
= czV (εz) P nT+εT
−c
t0z
nT +εT > T − z nT +
P(t0z > T − z) dz (zV (εz))
(16.2.45)
nT +
by integrating by parts. Since we have T − z x/q − εT ∼ x/q z in the integration interval, by Theorems 3.4.1 and 4.4.1 one has the relation P t0z > T − z ∼ zVτ (T − z) ∼ zVτ (x/q) = O(zVτ (T )) for z ∈ [nT + , nT + εT ]. Hence the first term in the last line of (16.2.45) is nT +εT 2 O z V (εz)Vτ (T ) = O (εT )2 V (ε2 T )Vτ (T ) + n2T V (nT )Vτ (T ) nT +
= o T V (T ) + nT Vτ (T )
(16.2.46)
(recall that Vτ (T ) = T −γ Lτ (T ) with γ > 1 and ε → 0 slowly enough). The second term in the last line of (16.2.45) is of the order of nT+εT
Vτ (T )
z dz (zV (εz)) nT +
⎛
nT +εT 2 ⎝ − = Vτ (T ) z V (εz) nT +
nT+εT
⎞ zV (εz) dz ⎠ ,
nT +
and since, by Theorem 1.1.4(iv), b
zV (εz) dz = O a2 V (εa) − b2 V (εb) ,
aε, bε → ∞,
a
this term is also bounded by the right-hand side of (16.2.46). Further, the last sum in (16.2.44) can be estimated as follows:
∑_{n_T+εT ≤ n < n_−} P(S^0_n ≥ q(n − n_T)) P(ν(T) = n) ≤ ∑_{n_T+εT ≤ n < n_−} P(S^0_n ≥ qεT) P(ν(T) = n)
= O(T V(εT) P(ν(T) < n_−)) = O(T V(εT) P(t^0_{n_−} > εx)) = O(T² V(εT) Vτ(εT)) = o(T V(T)),   (16.2.47)
cf. (16.2.46) (to get the second to last relation we used (16.2.21)). To estimate the first sum ∑_{n<n_{T+}},
and note that, by the law of large numbers, the probability P(Dc ) tends to 0 when ε vanishes slowly enough as T → ∞. Therefore P ν(T ) < nT + ; D = P t0nT + > (1 + ε)x/q − εT P(D) = O nT Vτ (x) P(D) = o(nT Vτ (x)) by virtue of independence of the sequences {τk } and {ξj }. Hence 0 = P Sν(T ) + q(T − ν(T )) x, ν(T ) < nT + ; D + o(nT Vτ (T )), n
and it remains to estimate the probability on the right-hand side of the above relation, which we will denote by P2 . Let z± = (x ± εnT )/q and note that, on the one hand, P2 P εnT + q(T − ν(T )) > x = P(ν(T ) < T − z− ).
On the other hand, since T − z+ = (1 − ε/q)nT < nT + , we have P2 P −εnT + q(T − ν(T )) > x, ν(T ) < nT + ; D = P(ν(T ) < T − z+ ) P(D) ∼ P(ν(T ) < T − z+ ). Now we use the fact that z± ∼ x/q → ∞ and T − z± = nT (1 ∓ ε/q) ∼ T − x/q = −x∗ → ∞, while T − z± = o(x) = o(z± ), and therefore P(ν(T ) < T − z± ) = P(t0T −z± > z± ) ∼ (T − z± ) Vτ (z± ) ∼ (T − x/q) Vτ (x/q) by Theorems 3.4.1 and 4.4.1. This relation, together with (16.2.44)–(16.2.47), immediately implies that = (1 + o(1))P2 + o(T V (x) + nT Vτ (x)) n
= (1 + o(1))(T − x/q) Vτ(x/q) + o(T V(x)),
which completes the proof of Theorem 16.2.1.
Proof of Theorem 16.2.3. It is easiest to get the desired result by using the obvious relation
P(S̄(T) ≥ x) ≥ P(S(T) ≥ x)   (16.2.48)
and then establishing an upper bound of the form
P(S̄(T) ≥ x) ≤ (1 + o(1)) P(S(T) ≥ x).   (16.2.49)
This relation holds, roughly speaking, because, given the event {S̄(T) ≥ x}, it is rather unlikely that the quantity S(T) will be noticeably lower than the (high) level x: such a 'dive' of the trajectory {S(t)} after it has achieved the level x would mean the presence of yet another (independent) large deviation on the time interval [0, T].
To simplify the exposition, we will again assume that aτ = 1. First consider case I(i) in Theorem 16.2.1 (q ≤ 0, x ≥ δT). Clearly, if we put η_x := inf{n > 0 : S(t_n) ≥ x}, then, by virtue of the asymptotics of P(S(T) ≥ x) found in Theorem 16.2.1, I(i),
P(S̄(T) ≥ x) = P(S̄(T) ≥ x, S(T) ≥ (1 − ε)x) + P(S̄(T) ≥ x, S(T) < (1 − ε)x)
≤ P(S(T) ≥ (1 − ε)x) + ∑_{n≥1} P(η_x = n, t_n < T; S(T) − S(t_n) < −εx)
≤ (1 + o(1)) P(S(T) ≥ x) + P(inf_{t≤T} S(t) < −εx) P(S̄(T) ≥ x),
where we also made use of the independence of the segments {S(t); t ≤ t_n} and {S(t_n + t) − S(t_n); t ≥ 0} of the trajectory of the process. The desired assertion now follows from the convergence
P(inf_{t≤T} S(t) < −εx) ≤ P(min_{n≤2T} S(t_n) − |q| max_{n≤2T+1} τ_n < −εx) + P(ν(T) > 2T)
≤ P(min_{n≤2T} S(t_n) < −εx/2) + P(max_{n≤2T+1} τ_n > εx/(2|q|)) + P(ν(T) > 2T) → 0
as T → ∞, which holds owing to the law of large numbers and the obvious relation
P(max_{n≤2T+1} τ_n > εx/(2|q|)) = O(T Vτ(εx)) = o(1)
when ε tends to 0 slowly enough.
Cases I(ii), I(iii) and II(iii) are dealt with in almost the same way.
In case II(i), we have q > 0, and the crossing of the level x is possible not only by a jump but also on a linear growth interval. The number of the renewal interval on which the process {S(t)} first crosses the level x is equal to
η_x := inf{n > 0 : max{S(t_n), S(t_n) − ξ_n} > x}.
Next we have to make use of the inequality
P(η_x = n, t_n < T, S(T) − max{S(t_n), S(t_n) − ξ_n} < −εx) ≤ P(η_x = n, t_n < T, inf_{t≤T} S(t) − |ξ| < −εx),
where ξ is independent of {S(t)}. All the subsequent calculations have to be modified accordingly.
In case II(ii), when x_* = x − qT = o(T), such a simple approach proves to be insufficient. We will again begin with inequality (16.2.48), but to derive (16.2.49) we have to repeat all the steps in the proof of Theorem 16.2.1, this time to derive an upper bound for P(S̄(T) ≥ x). The modifications that have to be made on the way are relatively minor and do not cause any particular difficulties. Thus, instead of the bounds for the probability
P(S(T) ≥ x, εT ≤ ν(T) ≤ n_−) = P(S^0_{ν(T)} ≥ x_* + qν(T), εT ≤ ν(T) ≤ n_−)
in (16.2.28), (16.2.29) (with n_− = (1 − ε)T), we could use the following observations to bound a similar expression for S̄(T). Put T′ := (1 − ε/4)T (again assuming for simplicity that both T and T′ are integers). Then:
(1) by Lemma 16.2.6(i) one has
P(ν(T) − ν(T′) > εT/2) ≤ P(ν(εT/4) ≥ εT/2) = P(t_{εT/2} ≤ εT/4) = P(t^0_{εT/2} ≤ −εT/4) ≤ e^{−cεT};
(2) one has the following inclusion:
{S̄(T′) ≥ x, ν(T) ≤ n_−} = {S_{ν(t)} ≥ x_* + q(T − t) for some t ≤ T′, ν(T) ≤ n_−} ⊂ {max_{n≤n_−} S_n ≥ qεT/4, ν(T) ≤ n_−};
(3) if ε tends to 0 slowly enough then x − qT′ ≥ 0 and one has
{max_{t∈[T′,T]} S(t) ≥ x, εT ≤ ν(T) ≤ n_−, ν(T) − ν(T′) ≤ εT/2}
⊂ {S^0_{ν(t)} ≥ x − qt + qν(t) for some t ∈ [T′, T], εT ≤ ν(T) ≤ n_−, ν(T′) ≥ εT/2}
⊆ {max_{n≤n_−} S^0_n ≥ qεT/2; ν(T) ≤ n_−}.
From (1)–(3) it clearly follows that
P(S̄(T) ≥ x, εT ≤ ν(T) ≤ n_−)
≤ P(ν(T) − ν(T′) > εT/2) + P(S̄(T′) ≥ x, ν(T) ≤ n_−) + P(max_{t∈[T′,T]} S(t) ≥ x, εT ≤ ν(T) ≤ n_−, ν(T) − ν(T′) ≤ εT/2)
≤ e^{−cεT} + P(max_{n≤n_−} S_n ≥ qεT/4) P(ν(T) ≤ n_−) + P(max_{n≤n_−} S^0_n ≥ qεT/2; ν(T) ≤ n_−)
≤ e^{−cεT} + 2P(max_{n≤n_−} S_n ≥ qεT/4) P(ν(T) ≤ n_−) = o(T V(T))   (16.2.50)
by virtue of (16.2.29), when ε vanishes slowly enough.
The main contribution to an analogue of the right-hand side of (16.2.33) for S̄(T) will also be from the middle sum, which can be estimated as follows: since
{S̄(T) ≥ x} = {sup_{t≤T}(S_{ν(t)} + qt) ≥ x} ⊂ {max_{j≤ν(T)} S_j ≥ x_*},
we have
∑_{j<εT} P(S̄(T) ≥ x; ν(T) = j, C^{(1)}_{j+1}) ≤ ∑_{j<εT} P(max_{n≤j}(S^0_n − qn) ≥ x_*) P(ν(T) = j; C^{(1)}_{j+1})
≤ (1 + o(1)) Vτ(T) ∑_{j<εT} jV(x_* + qj)   (16.2.51)
by (16.1.12) and (16.2.39). After that, the argument proceeds as in the proof of Theorem 16.2.1 in case II(ii).
In case II(iv) one can modify the argument from the proof of Theorem 16.2.1 in a similar way.
Proof. The proof of Theorem 16.2.4 in the case q ≥ 0 follows the argument in the proof of Theorem 16.2.1, case I(i). There exists a function ψ1(t) such that ψ1(t)/ψ(t) → ∞ as t → ∞ and, moreover, the ψ-l.c. function F+(t) is ψ1-l.c. as well (cf. the remark made after Definition 1.2.7, p. 18). Let n_± := T ± ψ1(x). Since ψ(x) > c√T (see (4.8.2)) and Eτ² < ∞ by the conditions of the theorem, we see that (16.2.17) will hold true owing to the Chebyshev inequality (or the central limit theorem for renewal processes). Further, (16.2.18) and the first equality in (16.2.20) (with V replaced in them by F+) hold true owing to the choice of n_± and the fact that F+ is a ψ1-l.c. function. The last equality in (16.2.20) follows from (16.2.17) and the observation that, cf. (16.2.12), we have the following bound, using Lemma 16.2.6(ii):
∑_{n>n_+} n P(ν(T) = n) ≤ (T + ψ1(x)) P(ν(T) > T + ψ1(x)) + ∫_{ψ1(x)}^∞ P(ν(T) > T + z) dz
≤ (T + ψ1(x)) exp{−c1ψ1²(x)/(T + ψ1(x))} + ∫_{ψ1(x)}^∞ exp{−c1z²/(T + z)} dz = o(T).
Therefore P(S(T) ≥ x) ∼ H(T)F+(x) in the case under consideration. The modifications that one needs to make in the proof in the case q < 0, and also when considering the asymptotics of P(S̄(T) ≥ x), are equally elementary. The theorem is proved.
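The proofs in this section lean repeatedly on the 'single big jump' asymptotics P(S^0_n ≥ x) ∼ nV(x) for a centred walk with regularly varying jump tail. The Monte Carlo sketch below is an added illustration, not part of the original text; the Pareto exponent, walk length, level and acceptance band are arbitrary assumptions (the band is wide to absorb finite-n corrections and sampling noise):

```python
import numpy as np

# Monte Carlo illustration (arbitrary parameters) of the single-big-jump
# asymptotics  P(S_n >= x) ~ n V(x),  V(x) = P(xi >= x),
# for centred Pareto(alpha) jumps with alpha = 2.5 (so E xi^2 < infinity).
rng = np.random.default_rng(1)
alpha, n, n_sims = 2.5, 50, 200_000

eta = rng.pareto(alpha, size=(n_sims, n)) + 1.0  # P(eta > t) = t**(-alpha), t >= 1
xi = eta - alpha / (alpha - 1.0)                 # centre the jumps: E xi = 0
S = xi.sum(axis=1)

x = 50.0
V = (x + alpha / (alpha - 1.0)) ** (-alpha)      # V(x) = P(xi >= x)
ratio = (S >= x).mean() / (n * V)                # should be of order 1
```

For x well above the diffusive scale √n, the empirical exceedance frequency stays within a constant factor of nV(x), which is the mechanism exploited throughout the chapter.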
16.3 Asymptotic expansions
In this section we will state and prove an assertion establishing for the probabilities P(S(T) ≥ x) asymptotic expansions similar to those obtained in § 4.4 for
random walks. Our approach will consist in using partial factorization and relations of the form (16.2.16) in conjunction with the above-mentioned results for random walks. As in § 4.4, we will need additional conditions of the form [D(k,q)] on the distribution of the random jumps ξ_j (see p. 199). Moreover, we will also need additional conditions on the distribution Fτ. First of all, since the asymptotic formulae for P(S(T) ≥ x) will include the variance of ν(T), the minimum moment condition on τ will be aτ,2 := Eτ² < ∞. In the case when qT > x, and thus the event {S(T) ≥ x} could occur because of the presence of large intervals between the jumps in the process, we will also require the tail Fτ(t) to be regular. Since we will only be considering events of the form {S(T) ≥ x}, we can again assume without loss of generality that the mean trend in the process is equal to zero (assumption (16.1.16)). Recall that H(t) = Eν(t) denotes the renewal function for the process {ν(t)}, and put H_m(t) := Eν^m(t), m ≥ 2.
Theorem 16.3.1. Suppose that the distribution of ξ is non-lattice, conditions (16.1.3) hold for α > 2, d = Var ξ < ∞, the right tail of the distribution of ξ satisfies condition [D(2,0)], aτ,2 < ∞, and the mean trend in the process S(t) is equal to zero. Let δ > 0 be an arbitrary fixed number.
I. In the case q ≥ 0 the following assertions hold true.
(i) Uniformly in T satisfying x ≥ δT, as x → ∞,
P(S(T) ≥ x)
= V(x) { H(T) + (q aτ L1(x)/x) [(T/aτ + 1) H(T) − H2(T)]
+ (α(α + 1)/(2x²)) [(H2(T) − H(T)) d + q² aτ² ((T/aτ + 1)² H(T) − 2(T/aτ + 1) H2(T) + H3(T))] + o((1 + T²) x^{−2}) }.   (16.3.1)
(ii) If T → ∞ then, uniformly in x ≥ δT,
P(S(T) ≥ x) = V(x) [ T/aτ + (aτ,2/(2aτ²) − 1) − (3αqT/x)(aτ,2/(2aτ²) − 1) + (α(α + 1)T²/(2x²)) (d/aτ² + q²(aτ,2/aτ² − 1)) + o(1) ].   (16.3.2)
If Fτ is a lattice distribution then the relation (16.3.2) holds for T values from the lattice.
II. In the case q < 0, the following assertions hold true.
(i) The relations (16.3.1) and (16.3.2) remain true uniformly in the zone x ≥ (q + δ)T.
(ii) If Fτ ∈ R with exponent γ > 2 (see (16.1.4)) then, for x ≥ δT, T − x/q → ∞, the relation (16.3.2) holds with an additional term on the right-hand side of the form
aτ^{−1}(T − x/q) Vτ(x/q)(1 + o(1)),   T → ∞.   (16.3.3)
If, moreover, E|ξ|³ < ∞ and condition [D(1,0)] holds for the distribution tail of τ then the remainder term o(1) in (16.3.3) can be replaced by o((T − x/q)^{−1/2}).
Remark 16.3.2. Repeating the argument from the proof of Theorem 16.3.1, one can easily verify that, under condition [D(1,0)] on the right distribution tail of ξ, the error in the approximation of the probabilities of the form P(S_n ≥ x) proves to be too large and so 'masks' the effects introduced by the randomness of the jump epochs in the process {S(t)}. Therefore, non-trivial asymptotic expansions for P(S(T) ≥ x) can be obtained only by imposing condition [D(2,0)] on V(t) = P(ξ ≥ t) and assuming that E(ξ² + τ²) < ∞. However, imposing additional conditions (say, [D(3,0)] on the right distribution tail of ξ and/or Eτ³ < ∞) does not lead to any substantial improvement of the results.
Proof of Theorem 16.3.1. Cases I and II(i). We will again make use of the decomposition (16.2.16) with n_± = T ± εx, where ε → 0 slowly enough as x → ∞ (with the natural convention that n_− = 0 for T − εx ≤ 0). We will also assume, as usual, that aτ = 1 (so that aξ = −q by (16.1.16)) and that d = 1. To estimate the sum ∑_{n<n_−}, observe that x − q(T − n) ≥ cx uniformly in 0 ≤ n ≤ T, where
c = 1 for q ≤ 0,   c = δ/(q + δ) for q > 0, x − qT ≥ δT.   (16.3.4)
Hence, using Lemma 16.2.7 (with γ = 2) to bound the probability P(t^0_{n_−} > εx) in the following formula, we obtain
∑_{n<n_−} ≤ max_{n≤n_−} P(S^0_n > cx) P(ν(T) < n_−) = O(T V(x)) P(t^0_{n_−} > εx) = O(T V(x)) n_− Vτ(εx) = O(T² V(x) x^{−2} ε^{−2} Lτ(εx)) = o(T² V(x) x^{−2})   (16.3.5)
when ε vanishes slowly enough.
Moreover, by Lemma 16.2.6(ii) the last sum in (16.2.16) can be bounded as follows:
∑_{n>n_+} ≤ P(ν(T) > n_+) ≤ exp{−cε²x²/(T + εx)} = o(T² V(x) x^{−2}),   (16.3.6)
and so it remains to consider the middle sum ∑_{n_− ≤ n ≤ n_+}. Using the obvious inequality P(ξ − aξ > t) = V(t + aξ) = V(t − q) and our observation (16.3.4), we have from condition [D(2,0)] for V(t) and Corollary 4.4.5(i) (p. 200) the following relation for all n_− ≤ n ≤ n_+ and the values of x specified in the conditions of the theorem:
P(S^0_n ≥ x − q(T − n))
= nV(x[1 − (q/x)(T − n + 1)]) [1 + (α(α + 1)(n − 1)d)/(2(x − q(T − n))²) (1 + o(1))]
= nV(x) [1 + (q(T − n + 1)/x) L1(x) + (α(α + 1)/2)(q(T − n + 1)/x)² (1 + o(1))] × [1 + (α(α + 1)(n − 1)d)/(2(x − q(T − n))²) (1 + o(1))]
= nV(x) [1 + (q(T − n + 1)/x) L1(x) + (α(α + 1)/2)((q(T − n + 1)/x)² + ((n − 1)/x²) d)(1 + o(1))] + O((|T − n| + 1) n x^{−3})   (16.3.7)
(note that on the right-hand side of the first line we have nV(x(1 + Δ)), where Δ := −q(T − n + 1)/x is indeed o(1) for n_− ≤ n ≤ n_+, x → ∞; we have used this to substitute the representation for V(x(1 + Δ)) from condition [D(2,0)], taking into account (4.4.9)). If we substitute the expressions for the respective probabilities into the sum ∑_{n_− ≤ n ≤ n_+} from (16.2.16) and replace that sum by a sum over all n > 0, the error introduced thereby will be of the order of
V(x) (∑_{n<n_−} + ∑_{n>n_+}) (n + n|T − n|/x + (n² + n(T − n)²)/x²).   (16.3.8)
Here the first sum, by virtue of Lemma 16.2.7 (with γ = 2), is
O((T + T²/x + T³/x²) P(ν(T) < n_−)) = O(T P(t^0_{T−εx} > εx)) = o(T² x^{−2})
(cf. (16.3.5)), while the second sum is
O(∑_{n>n_+} (n + n²/x + n³/x²) P(ν(T) = n)) = o(T² x^{−2})
by Lemma 16.2.8(i), so that the total error (16.3.8) is o(T² V(x) x^{−2}). Thus we obtain from (16.2.16) and (16.3.5)–(16.3.8) that
P(S(T) ≥ x) = V(x) { Eν(T) + (qL1(x)/x) E[ν(T)(T − ν(T) + 1)]
+ (α(α + 1)/(2x²)) [ q² E[ν(T)(T − ν(T) + 1)²] + (Eν²(T) − Eν(T)) d ] (1 + o(1)) }
+ O(x^{−3} E(ν²(T)(|T − ν(T)| + 1))) + o(T² V(x) x^{−2}).   (16.3.9)
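The consistency of the moment representation (16.3.9) with the expansions (16.3.1) and (16.3.2) can be checked in the exactly solvable case of exponential τ, where ν(T) is Poisson with mean T, so that H(T) = T, H2(T) = T² + T, H3(T) = T³ + 3T² + T, aτ = 1 and aτ,2 = 2. The bracket expressions coded below are a rendering of (16.3.1) and (16.3.2) under the simplifying assumptions aτ = 1 and L1(x) ≈ 1, with arbitrary parameter values; this is an added sketch, not part of the original text:

```python
# Consistency check: for exponential tau, nu(T) ~ Poisson(T), so
#   H(T) = T,  H2(T) = T**2 + T,  H3(T) = T**3 + 3*T**2 + T,
# and the q/x correction term in (16.3.1) vanishes exactly
# ((T + 1) H(T) - H2(T) = 0), as does the constant a_{tau,2}/2 - 1 in (16.3.2).
alpha, d, q, T, x = 3.0, 1.0, 0.5, 200.0, 1000.0

H, H2, H3 = T, T**2 + T, T**3 + 3*T**2 + T

# bracket of (16.3.1), assumed form with a_tau = 1 and L1(x) ~ 1
full = (H
        + (q / x) * ((T + 1) * H - H2)
        + alpha * (alpha + 1) / (2 * x**2)
        * ((H2 - H) * d + q**2 * ((T + 1)**2 * H - 2 * (T + 1) * H2 + H3)))

# bracket of (16.3.2) with a_tau = 1, a_{tau,2} = 2
expansion = T + alpha * (alpha + 1) * T**2 * (d + q**2) / (2 * x**2)

print(abs(full - expansion) < 1e-6 * expansion)  # -> True
```

In this special case the two brackets agree exactly, since all the o(·) corrections happen to vanish for the Poisson counting process.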
It remains to estimate the remainder term O(·) (this will complete the proof of the representation (16.3.1) in cases I(i) and II(i)) and also the coefficients expressed in terms of the expectations in the case when T → ∞. We will start with the latter problem and recall the well-known fact that, for aτ = 1 and aτ,2 < ∞, Eν(T ) = T + (aτ,2 /2 − 1) + o(1), Var ν(T ) = (aτ,2 − 1)T + o(T )
(16.3.10) (16.3.11)
(see e.g. § 12, Chapter XIII of [121] and § 4, Chapter XI of [122]; in the lattice case it is assumed that T belongs to the lattice). Hence # $ E ν(T )(T − ν(T ) + 1) = −Var ν(T ) − (Eν(T ))2 + (T + 1)Eν(T ) = 3(1 − aτ,2 /2)T + o(T ), 2
(16.3.12)
2
Eν (T ) = T (1 + o(1)). Now we estimate the moment E[ν(T )(T − ν(T ) + 1)2 ]. By the law of large numbers and the central limit theorem for renewal processes (see e.g. § 5, Chapter 9 of [49]), as T → ∞, we have ν(T ) →1 T
a.s.,
ζT :=
T − ν(T ) + 1 √ ⊂ =⇒ N (0, aτ,2 − 1), T
(16.3.13)
where the last relation means that the distributions of ζT converge weakly to the indicated normal law. Note that this weak convergence takes place together with the convergence of the first and second moments of ζT (cf. (16.3.10)–(16.3.11)),
so that, in particular, the r.v.’s ζT2 are uniformly integrable. Now write $ # T −2 E ν(T )(T − ν(T ) + 1)2 % & ν(T ) 2 ν(T ) =E ζT ; − 1 ε T T % & % & ν(T ) 2 ν(T ) ν(T ) 2 ν(T ) +E ζT ; <1−ε +E ζT ; >1+ε T T T T =: E1 + E2 + E3 and note that, owing to the above, E1 ∼ EζT2 ∼ T −1 Var ν(T ) → aτ,2 − 1, # $ E2 < E ζT2 ; ν(T ) < (1 − ε)T = o(1), # $ E3 < T −2 E ν(T )3 ; ν(T ) > (1 + ε)T = o(1) as T → ∞ (assuming that ε = ε(T ) tends to 0 slowly enough). Here the estimate for E2 follows from the uniform integrability of ζT2 and the law of large numbers (owing to which P(ν(T ) < (1 − ε)T ) = o(1)), while the last relation follows from the inequalities (16.2.12) and (16.2.13) in the proof of Lemma 16.2.8. Thereby we have proved that $ # T → ∞. (16.3.14) E ν(T )(T − ν(T ) + 1)2 = (aτ,2 − 1)T 2 (1 + o(1)), Finally, for the remainder term O(·) in (16.3.9) we have, from the relations (16.3.10)–(16.3.11), the Cauchy–Bunyakovskii inequality and the almost obvious relation Eν 4 (T ) ∼ T 4 as T → ∞ (cf. (16.3.13) and Lemma 16.2.8), the bound # $ x−3 E ν 2 (T )(|T − ν(T )| + 1) , 1/2 x−3 Eν 2 (T ) + Eν 4 (T ) E(T − ν(T ))2 = O x−3 1 + T 5/2 = o 1 + T 2 x−2 . (16.3.15) Now the assertions of parts I(i), (ii) and II(i) of Theorem 16.3.1 can be obtained by substituting the estimates (16.3.10)–(16.3.15) into the representation (16.3.9). One should just note that, for x δT , we have T k /xk = O(1), x−1 = O(T −1 ) and T 2 V (x)x−2 = O(V (x)) and that 1 T2 1 1 + 2 . 1/2 x x x 2T Case II(ii). In this case we clearly have to consider also the possibility that the event {S(T ) x} occurs as a result of the presence of a very long renewal interval on [0, T ]. Using the representation (16.2.16) together with the results for large deviation probabilities in the random walk {Sn } and for the moments of the renewal process is no longer sufficient. Now we will have to introduce truncated versions of not only the random jumps ξj but also of the renewal intervals τj .
580
Extension to generalized renewal processes
Let n± := (1 ± ε)T (note that x T in this part of the theorem) and B :=
n+ *
{ξj < y},
j=1
C :=
n− *
{τj < y},
j=1
where y x is such that r := x/y > max{1, x/T }. It is evident that P(B) = O(T V (T )),
P(C) = O(T Vτ (T )).
(16.3.16)
By virtue of Lemma 16.2.8(i), it suffices to estimate the probability of the event D := {S(T ) x; ν(T ) n+ }. It is obvious that P(D) = P(DB C) + P(DBC) + P(DB) =: P1 + P2 + P3 , where
P1 P(B) P(C) = O T 2 V (T ) Vτ (T )
(16.3.17)
(16.3.18)
and P2 = P DBC; ν(T ) < n− + P DBC; ν(T ) n− .
(16.3.19)
By Corollary 4.1.3, the first term on the right-hand side of the last equality does not exceed P B P(ν(T ) < n− ; C) n+ V (y) P t0n− > εT ; C = O T V (T ) (T Vτ (εT ))r = O T 2 V (T )Vτ (T ) , when ε vanishes slowly enough, while the second term does not exceed P DB; ν(T ) n− − P DB C; ν(T ) n− = P S(T ) x, ν(T ) ∈ [n− , n+ ]; B + O T 2 V (T )Vτ (T ) by virtue of the bound in (16.3.18). The first term on the right-hand side coincides, up to O(T 2 V 2 (T )), with P(S(T ) x, ν(T ) ∈ [n− , n+ ]) (we again make use of Corollary 4.1.3), and it can be estimated in exactly the same way as in the proof of part I of the theorem; this leads to the expression on the right-hand side of (16.3.2). It remains to consider the term P3 in (16.3.17). When for the distribution of τ we assume only condition Fτ ∈ R, the required assertion can be derived in exactly the same way as in the proof of Theorem 16.2.1, II(iv). Now we turn to the case where we also impose condition [D(1,0) ] on the function Vτ from (16.1.4) and require that E|ξ|3 < ∞. Introduce the notation nT := T − x/q,
nT ± := (1 ± ε)nT ;
581
16.3 Asymptotic expansions
as usual, we will assume for simplicity that all these quantities (as well as T itself) are integers. We have P3 = P S(T ) x, ν(T ) n+ ; B =
P Sn0 q(n − nT ); B P(ν(T ) = n)
nn+
# $ 1(n < nT ) − P Sn0 q(n − nT ); B
= P(ν(T ) < nT ) −
nn+
= P(t0nT > x/q) − +
P
× P(ν(T ) = n) P Sn0 < q(n − nT ); B P(ν(T ) = n)
n
Sn0
q(n − nT ); B P(ν(T ) = n).
(16.3.20)
nT nn+
By the central limit theorem, the two sums in the last lines are, roughly speaking, the integrals of the left and right tails of an (almost) normal distribution with respect to the distribution of ν(T ), i.e. a measure that, owing to the regularity of the distribution of τ , will be close to a multiple of the Lebesgue measure in the region where the values of the integrands are noticeably large. Therefore, as we will show next, owing to the symmetry of the normal distribution the contributions of these sums effectively cancel each other out and so, with high precision, the main term will be given by 1/2 P t0nT > x/q = nT Vτ (x/q)(1 + o(nT /x)). (16.3.21) This follows from Theorem 4.4.4 (for k = 1) and the fact that, by condition [D(1,0) ] on the distribution tail of τ 0 = τ − 1, one has the representation Vτ 0 (t) := P(τ 0 t) = Vτ (t + 1) = Vτ (t)(1 + O(1/t)).
(16.3.22)
So, we turn to the sums on the right-hand side of (16.3.20). Replacing them by the sums and nT − n
nT n
respectively leads to the introduction of errors E1 and E2 , for which we have the following bounds. The uniform integrability of the sequence {(Sn0 )2 /n} implies that $ # % 0 2 & E (Sn0 )2 ; |Sn0 | > qεnT √ 1 (Sn ) |Sn0 | 2E ; √ > qε nT = o(1) ε 2 nT ε n n uniformly in n < nT − when ε vanishes slowly enough. Using this relation and
582
Extension to generalized renewal processes
the Chebyshev inequality, one obtains E1 P Sn0 < −qεnT P(ν(T ) = n) n
# $ E (Sn0 )2 ; |Sn0 | > qεnT P(ν(T ) = n) q 2 ε2 n2T n
1 0 x 1 P(ν(T ) < nT − ) = o P tnT − > + εnT =o = o(Vτ (T )). nT nT q For the second error, we have E2 = P Sn0 > q(n − nT ); B P(ν(T ) = n) nT + nn+
=
+
nT + n
nT +εT n
+
.
(16.3.23)
n− nn+
Since n−nT x/q −εT cx for n n− , the last sum above is estimated, from Corollary 4.1.3, as O(T 2 V 2 (T )) = o(V (T )). According to (16.2.47), the middle sum on the right-hand side of (16.3.23) is O(T 2 V (εT )Vτ (εT )) = o(V (T )), because Vτ (t) = t−γ Lτ (t), γ > 2, and ε → 0 slowly enough. Finally, the first sum, as shown in (16.2.45), (16.2.46), is
O (εT )2 V ε2 T Vτ (T ) + n2T V (nT ) Vτ (T ) = o(V (T ) + Vτ (T )). Turning back to (16.3.20), after the change in the summation limits we see that, to complete the proof of the theorem in case II(ii), it remains to show that J1 − J2 := P Sn0 q(n − nT ); B P(ν(T ) = n) nT − n
−
P Sn0 > q(n − nT ); B P(ν(T ) = n)
nT nnT +
1/2 = o nT Vτ (T ) .
(16.3.24)
First note that here we can remove the event B from the expression under the probability symbol; the error introduced thereby will certainly be negligible. Indeed, since for x δT one has T − nT + = (1 + ε)x/q − εT ((1 + ε)δ/q − ε)T > δ1 T,
δ1 > 0,
one obtains from (16.3.16) that P · · · ; B P(ν(T ) = n) P B; ν(T ) < nT + nT − nnT +
= P(B) P t0nT + > T − nT + = O T 2 Vτ (T ) V (T ) = o(Vτ (T )).
Further, put
$$z(u) := q(u-n_T)/\sqrt{ud}, \qquad d = \operatorname{Var}\xi,$$
16.3 Asymptotic expansions
and observe that, by the Berry–Esseen theorem (see e.g. Appendix 5 of [49]), the following asymptotic representations hold true uniformly in $n\in[n_{T-},n_{T+}]$:
$$\mathbf{P}\bigl(S^0_n\le q(n-n_T)\bigr) = \Phi(z(n)) + O\bigl(n_T^{-1/2}\bigr), \qquad \mathbf{P}\bigl(S^0_n>q(n-n_T)\bigr) = \Phi(-z(n)) + O\bigl(n_T^{-1/2}\bigr),$$
where Φ denotes the standard normal distribution function. Substituting these representations into the sums in (16.3.24) (now without the event B under the probability symbols), we see that, by Theorem 4.4.4 (in the case k = 1) and in view of (16.3.22), the contribution of the terms $O(n_T^{-1/2})$ to these sums does not exceed
$$cn_T^{-1/2}\bigl[\mathbf{P}(\nu(T)<n_{T+}) - \mathbf{P}(\nu(T)<n_{T-})\bigr] = cn_T^{-1/2}\bigl[\mathbf{P}\bigl(t^0_{n_{T+}}>x/q-\varepsilon n_T\bigr) - \mathbf{P}\bigl(t^0_{n_{T-}}>x/q+\varepsilon n_T\bigr)\bigr]$$
$$= cn_T^{-1/2}\bigl[n_{T+}V_\tau(x/q-\varepsilon n_T)\bigl(1+o\bigl(n_T^{-1/2}\bigr)\bigr) - n_{T-}V_\tau(x/q+\varepsilon n_T)\bigl(1+o\bigl(n_T^{-1/2}\bigr)\bigr)\bigr].$$
Here the terms with $o(n_T^{-1/2})$ will yield a term of order
$$O\bigl(n_T^{-1/2}\times n_TV_\tau(x)\times o\bigl(n_T^{-1/2}\bigr)\bigr) = o(V_\tau(T)),$$
while, owing to condition $[\mathsf{D}_{(1,0)}]$ on the distribution tail $V_\tau(t)$ of the r.v. τ and in view of (16.3.22), the remaining main terms in the sum are equal to
$$cn_T^{-1/2}\bigl[n_T\bigl(V_\tau(x/q-\varepsilon n_T)-V_\tau(x/q+\varepsilon n_T)\bigr) + \varepsilon n_T\bigl(V_\tau(x/q-\varepsilon n_T)+V_\tau(x/q+\varepsilon n_T)\bigr)\bigr]$$
$$= c(1+\varepsilon)n_T^{1/2}\Bigl[V_\tau\Bigl(\frac xq\Bigl(1-q\varepsilon\frac{n_T}x\Bigr)\Bigr) - V_\tau\Bigl(\frac xq\Bigl(1+q\varepsilon\frac{n_T}x\Bigr)\Bigr)\Bigr] + o\bigl(n_T^{1/2}V_\tau(T)\bigr)$$
$$= 2\gamma(c+o(1))\,q\varepsilon\,\frac{n_T^{3/2}}x\,V_\tau\Bigl(\frac xq\Bigr) + o\bigl(n_T^{1/2}V_\tau(T)\bigr) = o\bigl(n_T^{1/2}V_\tau(T)\bigr).$$
Therefore, up to negligibly small terms (i.e. terms of order not exceeding the error order stated in part II(ii) of the theorem), the sums in (16.3.24) are equal respectively to
$$J_1 = \sum_{n_{T-}\le n<n_T}\Phi(z(n))\,\mathbf{P}(\nu(T)=n), \qquad J_2 = \sum_{n_T\le n<n_{T+}}\Phi(-z(n))\,\mathbf{P}(\nu(T)=n).$$
Integrating by parts and changing variables by putting $w=n_T+s$, we obtain
$$J_1 = \int_{n_{T-}}^{n_T}\Phi(z(w))\,d_w\bigl[\mathbf{P}(\nu(T)<w)-\mathbf{P}(\nu(T)<n_T)\bigr]$$
$$= -\Phi(z(n_{T-}))\bigl[\mathbf{P}(\nu(T)<n_{T-})-\mathbf{P}(\nu(T)<n_T)\bigr] - \int_{n_{T-}}^{n_T}\bigl[\mathbf{P}(\nu(T)<w)-\mathbf{P}(\nu(T)<n_T)\bigr]\,d_w\Phi(z(w))$$
$$= O\Bigl(\Phi\bigl(-q\varepsilon n_T^{1/2}d^{-1/2}\bigr)\,n_TV_\tau(T)\Bigr) - \int_{-\varepsilon n_T}^{0}\bigl[\mathbf{P}(t_{n_T+s}>T)-\mathbf{P}(t_{n_T}>T)\bigr]\,d_s\Phi\Bigl(\frac{qs}{\sqrt{(n_T+s)d}}\Bigr).$$
Denoting the last integral by $J_1'$ and assuming for simplicity that $n_T+s$ is an integer, we note that, by Theorem 4.4.4 (for k = 1), the integrand in $J_1'$ is equal, by virtue of (16.3.22), to
$$[\cdots] = \mathbf{P}\Bigl(t^0_{n_T+s}>\frac xq-s\Bigr) - \mathbf{P}\Bigl(t^0_{n_T}>\frac xq\Bigr) = (n_T+s)\,V_\tau\Bigl(\frac xq-s\Bigr)\bigl(1+o\bigl(n_T^{-1/2}\bigr)\bigr) - n_T\,V_\tau\Bigl(\frac xq\Bigr)\bigl(1+o\bigl(n_T^{-1/2}\bigr)\bigr).$$
It is clear that integrating the remainder terms will yield $o\bigl(n_T^{1/2}V_\tau(T)\bigr)$, whereas the sum of the main terms is, by condition $[\mathsf{D}_{(1,0)}]$, equal to
$$n_T\Bigl[V_\tau\Bigl(\frac xq-s\Bigr)-V_\tau\Bigl(\frac xq\Bigr)\Bigr] + sV_\tau\Bigl(\frac xq-s\Bigr) = n_TV_\tau\Bigl(\frac xq\Bigr)\frac{\gamma qs}x(1+o(1)) + sV_\tau\Bigl(\frac xq\Bigr)(1+o(1))$$
$$= s\Bigl(1+\frac{\gamma qn_T}x\Bigr)V_\tau\Bigl(\frac xq\Bigr)(1+o(1)), \qquad -\varepsilon n_T\le s<0.$$
Therefore, up to the term $O\bigl(\Phi(-q\varepsilon n_T^{1/2}d^{-1/2})\,n_TV_\tau(T)\bigr) = o\bigl(n_T^{1/2}V_\tau(T)\bigr)$, one has
$$J_1 = -\Bigl(1+\frac{\gamma qn_T}x\Bigr)V_\tau\Bigl(\frac xq\Bigr)\int_{-\varepsilon n_T}^{0}(1+o(1))\,s\;d_s\Phi\Bigl(\frac{qs}{\sqrt{(n_T+s)d}}\Bigr)$$
$$= -\Bigl(1+\frac{\gamma qn_T}x\Bigr)V_\tau\Bigl(\frac xq\Bigr)\frac{\sqrt{n_Td}}q\int_{-q\varepsilon\sqrt{n_T/d}}^{0}(1+o(1))\,v\,d\Phi(v) = \frac1q\sqrt{\frac{n_Td}{2\pi}}\Bigl(1+\frac{\gamma qn_T}x\Bigr)V_\tau\Bigl(\frac xq\Bigr)(1+o(1)).$$
In a similar way we can establish that
$$J_2 = \frac1q\sqrt{\frac{n_Td}{2\pi}}\Bigl(1+\frac{\gamma qn_T}x\Bigr)V_\tau\Bigl(\frac xq\Bigr)(1+o(1)) + o\bigl(n_T^{1/2}V_\tau(T)\bigr),$$
so that $J_1-J_2 = o\bigl(n_T^{1/2}V_\tau(T)\bigr)$. Theorem 16.3.1 is proved.
16.4 The crossing of arbitrary boundaries

In this section we consider the problem of the crossing of an arbitrary boundary $\{g(t);\ t\in[0,T]\}$ by the trajectory of the process $\{S(t)\}$, when $\inf_{t\le T}g(t)$ tends to infinity fast enough, so that the event
$$G_T := \Bigl\{\sup_{t\le T}\bigl(S(t)-g(t)\bigr)\ge 0\Bigr\}$$
belongs to the large deviation zone in which we are interested. We will again assume, without loss of generality, that the mean trend in the process {S(t)} is equal to zero (i.e. (16.1.16) holds true). Note that we have already considered the special case of a flat boundary g(t) ≡ x in Theorem 16.2.3.
As everywhere in this chapter, an important factor in this situation is the possibility that the event $G_T$ may occur as a result of a very long renewal interval. To avoid cumbersome computations, we will exclude this possibility in the present section. For $q\le 0$ it simply does not exist, while in the case $q>0$ we will require the 'gap' between the boundary g(t) and the trajectory of the deterministic linear drift qt, which appears in the definition of the process {S(t)}, to be wide enough:
$$\inf_{0\le t\le T}\bigl(g(t)-qt\bigr) > \delta T \qquad(16.4.1)$$
for a fixed δ > 0. In this case, in the absence of large jumps $\xi_j$ and/or $\tau_j$, the process $S(t) = S_{\nu(t)}+qt$ moves along the line of its mean trend (with a slope equal to zero), i.e. it stays in an εT-neighbourhood of its initial value, which is zero. The presence of a single large $\tau_j$ means that, over the respective renewal interval, the value of S(t) will increase linearly at the rate q > 0, while outside the interval it will just oscillate about the mean trend line. Therefore the above-mentioned condition (16.4.1) makes the probability that the process will cross the boundary g(t) during that long renewal interval very small. The presence of two or more large jumps $\tau_j$ and/or $\xi_j$ will, as usual, be very unlikely. Therefore, as in the random walk case, the process {S(t)} can actually cross a high boundary only as a result of a single large jump $\xi_j$. At the time t of the jump it suffices if, instead of crossing the boundary g(t) itself, the process just exceeds the level
$$g_*(t) := \inf_{t\le s\le T}g(s),$$
since afterwards its trajectory will again move along the (horizontal) mean trend
line and, at some time, will rise above the boundary, which by that time will have already 'dropped' down to the level $g_*(t)$.
As with the random walks, in the case of a general boundary we can only give the main term of the asymptotics. Owing to the 'averaging' of the jump epochs in the process {S(t)}, the answer will have the form of an integral with respect to the renewal measure dH(t) (see (16.4.2) below). As T → ∞, the renewal theorem allows one to replace dH(t) by $a_\tau^{-1}\,dt$.
Now we will state the main result of the section. Denote by $G_{x,K}$ the class of all measurable boundaries g(t) such that
$$x \le \inf_{t\ge 0}g(t) \le Kx, \qquad x>0, \quad K\in(1,\infty).$$
Theorem 16.4.1. Let either condition $[Q_T]$ or condition $[\Phi_T]$ hold for the distribution of $\xi_j$, and let δ > 0 and K > 1 be arbitrary fixed numbers. Assume that the mean trend of the process {S(t)} is equal to zero.
I. In the case $q\le 0$, the relation
$$\mathbf{P}(G_T) = (1+o(1))\int_0^T V(g_*(t))\,dH(t), \qquad x\to\infty, \qquad(16.4.2)$$
holds uniformly in $g\in G_{x,K}$, $x\ge\delta T$. If, moreover, condition [<] with γ ∈ (1, 2) holds for the distribution of τ then the relation (16.4.2) holds as x → ∞ uniformly in the range of values of T satisfying (along with $[Q_T]$ or $[\Phi_T]$) the condition $x\ge T^{1/\gamma}L_z(T)$ with a suitable s.v.f. $L_z$ (a way of choosing the function $L_z(t)$ is indicated in Lemma 16.2.8(ii)).
II. In the case q > 0 the following assertions hold true.
(i) The relation (16.4.2) holds uniformly in $g\in G_{x,K}$ for
$$x_* := \inf_{t\le T}\bigl(g(t)-qt\bigr) \ge \delta T.$$
(ii) If $F_\tau(t) = o(V(t))$ as t → ∞ then all the assertions stated in part I of the theorem remain true.
Remark 16.4.2. It is not difficult to see that, as T → ∞, one has the relation
$$\int_0^T V(g_*(t))\,dH(t) \sim \frac1{a_\tau}\int_0^T V(g_*(t))\,dt \quad\bigl(\asymp TV(x)\bigr) \qquad(16.4.3)$$
uniformly in $g\in G_{x,K}$, so that in this case the integral on the right-hand side of (16.4.2) can be replaced by the expression on the right-hand side of (16.4.3). Indeed, assuming for simplicity that the functions H(t) and $V(g_*(t))$ have no common jump points, and integrating twice by parts (recall that $g_*(t)$ is a nondecreasing function, so that $V(g_*(t))$ is a function of bounded variation), we obtain by the renewal theorem that the integral on the left-hand side of (16.4.3)
equals
$$V(g_*(T))H(T) - \int_0^T H(t)\,dV(g_*(t))$$
$$= \frac T{a_\tau}\,V(g_*(T)) + o(TV(x)) - \frac1{a_\tau}\int_0^T t\,dV(g_*(t)) - \int_0^T\Bigl(H(t)-\frac t{a_\tau}\Bigr)dV(g_*(t))$$
$$= \frac1{a_\tau}\int_0^T V(g_*(t))\,dt + o(TV(x)) - \int_0^T\Bigl(H(t)-\frac t{a_\tau}\Bigr)dV(g_*(t)).$$
One can easily see that the last integral is o(TV(x)) as well, since by the renewal theorem $H(t)-t/a_\tau = o(t)$ as t → ∞, so that
$$\Bigl|\int_0^T\Bigl(H(t)-\frac t{a_\tau}\Bigr)dV(g_*(t))\Bigr| \le \Bigl(\int_0^{\varepsilon T}+\int_{\varepsilon T}^T\Bigr)\Bigl|H(t)-\frac t{a_\tau}\Bigr|\,\bigl|dV(g_*(t))\bigr| \le c\varepsilon T\,V(g_*(0)) + o\bigl(TV(g_*(\varepsilon T))\bigr) = o(TV(x))$$
when ε vanishes slowly enough.
The following corollary follows immediately from Theorem 16.4.1 and the definition of an r.v.f.
Corollary 16.4.3. Let f(s), s ∈ [0, 1], be a given measurable function taking values in an interval $[c_1,c_2]\subset(0,\infty)$. Then, for the boundary
$$g(t) := xf(t/T), \qquad 0\le t\le T,$$
under the respective conditions on x from Theorem 16.4.1 and, in the case q > 0, under the condition that
$$x_* = \inf_{0\le s\le 1}\bigl(xf(s)-qsT\bigr) \ge \delta T,$$
we have
$$\mathbf{P}(G_T) \sim \frac{TV(x)}{a_\tau}\int_0^1 f_*^{-\alpha}(s)\,ds \qquad\text{as } T\to\infty,$$
where $f_*(s) := \inf_{s\le u\le 1}f(u)$, $0\le s\le 1$.
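The flat-boundary case f ≡ 1 of this corollary can be checked by simulation. The sketch below is our own illustration, not the book's: it assumes q = 0 and degenerate unit renewal intervals (so that $a_\tau=1$, $H(t)=\lfloor t\rfloor$ and S(t) reduces to a centred random walk), together with arbitrary illustrative parameter values; the corollary then predicts $\mathbf{P}(G_T)\approx TV(x)$.

```python
import numpy as np

# Flat boundary f = 1, q = 0, deterministic unit renewal intervals
# (a simplifying assumption for illustration only): S(t) is a centred
# random walk and Corollary 16.4.3 predicts
#   P(G_T) = P(max_{n<=T} S_n >= x) ~ T * V(x),  V(x) = x^{-alpha}.

rng = np.random.default_rng(1)
alpha = 2.5                       # assumed tail index of the jumps xi
T, x = 100, 60.0                  # illustrative horizon and level

num_paths = 50_000
u = rng.random((num_paths, T))
xi = u ** (-1.0 / alpha) - alpha / (alpha - 1.0)   # centred Pareto jumps
walk_max = np.cumsum(xi, axis=1).max(axis=1)

empirical = (walk_max >= x).mean()
predicted = T * x ** (-alpha)     # T * V(x)
print(f"P(G_T)~{empirical:.2e}  T*V(x)={predicted:.2e}")
```

As with the previous sketch, the two quantities are expected to agree only up to a bounded factor at these moderate values of x and T.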
Indeed, one just has to observe that
$$V(g_*(t)) = V\bigl(xf_*(t/T)\bigr) = \frac{V(xf_*(t/T))}{V(x)}\,V(x) = f_*^{-\alpha}(t/T)\,\frac{L(xf_*(t/T))}{L(x)}\,V(x) = (1+o(1))f_*^{-\alpha}(t/T)\,V(x)$$
uniformly on [0, T ] by virtue of Theorem 1.1.2. In contrast with Theorems 16.2.1 and 16.2.3, we do not consider here the case when q > 0 and x∗ → ∞ but x∗ = o(T ). As we saw in those theorems, in this case there appears the possibility that the boundary will be crossed owing to a combination of two rare events: first, at the very beginning of the interval [0, T ], the process exceeds a linear boundary of the form x∗ + qt and then, during a very long renewal interval, the process is taken above the boundary g(t) by the deterministic positive linear drift. It is clear that considering such a possibility in the case of an arbitrary boundary will be much more difficult than in the situation of Theorems 16.2.1 and 16.2.3. Therefore, in the present exposition we will restrict ourselves to analysing the above situation in the important special case of a linear boundary; this will be done in the next section. Another exceptional case is when the terminal point g(T ) of the boundary is the ‘lowest’ one, i.e. the function g(t) attains there its minimal value on [0, T ]: g∗ (t) = g(T ) =: x
for all t ∈ [0, T ].
(16.4.4)
Then from the evident relations
$$\mathbf{P}(S(T)\ge x) \le \mathbf{P}(G_T) \le \mathbf{P}\bigl(\overline S(T)\ge x\bigr), \qquad \overline S(T) := \sup_{t\le T}S(t),$$
and Theorems 16.2.1 and 16.2.3 we immediately obtain the following result.
Corollary 16.4.4. Under the assumptions of Theorem 16.2.1, all the asymptotic representations established in that theorem for $\mathbf{P}(S(T)\ge x)$ will remain true for $\mathbf{P}(G_T)$ uniformly over all the boundaries satisfying (16.4.4).
Proof of Theorem 16.4.1. As usual, we assume that $a_\tau=1$, so that $a_\xi=-q$ due to (16.1.16). Put $\bar t_j := t_j$ for $j\le\nu(T)$, $\bar t_{\nu(T)+1} := T$, and let $\bar\tau_{j+1} := \bar t_{j+1}-\bar t_j$, $j\le\nu(T)$. For $j\le\nu(T)$, introduce the variables
$$\widehat g(j) := \inf_{\bar t_j\le t<\bar t_{j+1}}g(t), \qquad \widehat g_+(j) := \widehat g(j)+|q|\bar\tau_{j+1}, \qquad \widehat g_-(j) := \inf_{\bar t_j\le t<\bar t_{j+1}}\bigl(g(t)-q(t-\bar t_j)\bigr).$$
It is not hard to see that, for the events
$$G := \bigcup_{j\le\nu(T)}\bigl\{S_j+q\bar t_j\ge\widehat g(j)\bigr\}, \qquad G_\pm := \bigcup_{j\le\nu(T)}\bigl\{S_j+q\bar t_j\ge\widehat g_\pm(j)\bigr\}, \qquad(16.4.5)$$
one has the following inclusions:
$$G_+ \subseteq G_T \subseteq G \quad\text{for } q\le 0, \qquad(16.4.6)$$
$$G \subseteq G_T \subseteq G_- \quad\text{for } q>0. \qquad(16.4.7)$$
These are the main relations that will be used for proving the theorem. Furthermore, we will need the following observation: for the events
$$D := \Bigl\{\sup_{t\le T}|\nu(t)-t|\le\varepsilon x\Bigr\}, \qquad D_+ := \Bigl\{\sup_{t\le T}(\nu(t)-t)\le\varepsilon x\Bigr\}, \qquad(16.4.8)$$
where ε = ε(x) vanishes slowly enough as x → ∞, for $x\ge\delta T$ one has
$$\mathbf{P}(\overline D)\to 0, \qquad \mathbf{P}(\overline D_+) = o(TV(x)), \qquad T\to\infty. \qquad(16.4.9)$$
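The law-of-large-numbers effect behind the first relation in (16.4.9) is easy to visualize by simulation. The sketch below is our own illustration and not material from the book: exponential renewal intervals and all numerical values are assumptions, and the deviation is checked at the renewal epochs, which determines $\sup_{t\le T}|\nu(t)-t|$ up to one interval.

```python
import numpy as np

# Renewal LLN behind (16.4.9): with a_tau = 1 the centred epochs
# t_k - k, for t_k <= T, stay in a band of width eps*x with probability
# close to 1, which is what makes the events D, D_+ 'typical'.

rng = np.random.default_rng(2)
T, x, eps = 200, 200.0, 0.25        # illustrative; x comparable with T

num_paths, n_epochs = 2_000, 2 * T  # 2T epochs comfortably cover [0, T]
tau = rng.exponential(1.0, size=(num_paths, n_epochs))
t_k = np.cumsum(tau, axis=1)
k = np.arange(1, n_epochs + 1)

# deviation |t_k - k| over the epochs that fall inside [0, T]
dev = np.where(t_k <= T, np.abs(t_k - k), 0.0).max(axis=1)
freq = (dev > eps * x).mean()
print(f"estimated P(deviation > eps*x) = {freq:.4f}")
```

Since the typical deviation is of order √T while εx is of order T here, the estimated probability is essentially zero.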
The former relation holds by the law of large numbers for renewal processes (see e.g. § 5, Chapter 9 of [49] or Chapter 2 of [138]), while the latter follows from the relations
$$\Bigl\{\sup_{t\le T}(\nu(t)-t)>\varepsilon x\Bigr\} \subset \bigcup_{k\le\varepsilon x+T}\{k-t_k>\varepsilon x\} = \Bigl\{\min_{k\le\varepsilon x+T}t^0_k<-\varepsilon x\Bigr\}$$
and Lemma 16.2.6(i). Indeed, for $n := \varepsilon x+T$ we have
$$z := \varepsilon x \ge b_\varepsilon(\varepsilon x+T) \equiv b_\varepsilon n \quad\text{with}\quad b_\varepsilon := \frac{\varepsilon\delta}{1+\varepsilon\delta}.$$
Since ε → 0 arbitrarily slowly, both $b_\varepsilon\le z/n$ and $\Lambda_\theta(b_\varepsilon)$ (see (16.2.4)) will also tend to zero arbitrarily slowly. Therefore it follows from (16.2.5) and the inequality $\Lambda_\theta(z/n)\ge\Lambda_\theta(b_\varepsilon)$ that
$$\mathbf{P}\Bigl(\min_{k\le\varepsilon x+T}t^0_k<-\varepsilon x\Bigr) \le \exp\bigl\{-(\varepsilon x+T)\Lambda_\theta(b_\varepsilon)\bigr\} = o(TV(x))$$
if ε tends to 0 slowly enough.
I. The case $q\le 0$. From (16.4.6) we conclude that
$$\mathbf{P}(G_T) \ge \mathbf{P}(G_+) \ge \mathbf{P}(G_+D) = \mathbf{P}\Bigl(\bigcup_{j\le\nu(T)}\bigl\{S^0_j\ge\widehat g_+(j)-q(\bar t_j-j)\bigr\};\ D\Bigr) = \mathbf{P}\Bigl(\bigcup_{j\le\nu(T)}\bigl\{S^0_j\ge(1+o(1))\widehat g_+(j)\bigr\};\ D\Bigr), \qquad(16.4.10)$$
since $|\bar t_j-j|\le\varepsilon x$, $j\le\nu(T)$, on the event D, whereas $\widehat g_+(j)\ge x$ for any boundary $g\in G_{x,K}$. Now put
$$\widehat g^{\,+}_*(j) := \min_{j\le k\le\nu(T)}\widehat g_+(k)$$
and note that one has the following double-sided inequality on the event D:
$$g_*(\bar t_j) \le \widehat g^{\,+}_*(j) \le g_*(\bar t_j)+2|q|\varepsilon x, \qquad j\le\nu(T).$$
Using this observation and the fact that $V((1+o(1))t) = (1+o(1))V(t)$ as t → ∞, we see that by Theorems 3.6.4 and 4.6.7 the conditional probability of the event $\bigcup_{j\le\nu(T)}\{\cdots\}$ appearing in the last line of (16.4.10), given $t_1,t_2,\ldots$, is equal on the event D to
$$(1+o(1))\sum_{j\le\nu(T)}V\bigl((1+o(1))\,\widehat g^{\,+}_*(j)\bigr) = (1+o(1))\sum_{j\le\nu(T)}V\bigl(\widehat g^{\,+}_*(j)\bigr) = (1+o(1))\sum_{j\le\nu(T)}V\bigl(g_*(\bar t_j)\bigr) \le \nu(T)\,V(x). \qquad(16.4.11)$$
To find the unconditional probability of the event of interest, it only remains to find the expectation of the expression above. To this end, first note that, for any bounded measurable function h(t), t ∈ (0, ∞),
$$\mathbf{E}\sum_{0<j\le\nu(T)}h(t_j) = \mathbf{E}\int_0^T h(t)\,d\nu(t) = \int_0^T h(t)\,dH(t), \qquad T<\infty.$$
Hence
$$\mathbf{E}\Bigl[(1+o(1))\sum_{j\le\nu(T)}V\bigl(g_*(\bar t_j)\bigr)\Bigr] \sim \int_0^T V(g_*(t))\,dH(t) \ge cTV(x) \qquad(16.4.12)$$
(using $g(t)\le Kx$). Moreover, since the r.v.'s ν(T)/T are uniformly integrable (see e.g. p. 56 of [138] or our argument following (16.2.26)) and one has (16.4.9), we have
$$\mathbf{E}\Bigl[(1+o(1))\sum_{j\le\nu(T)}V\bigl(g_*(\bar t_j)\bigr);\ \overline D\Bigr] \le cV(x)\,\mathbf{E}\bigl(\nu(T);\ \overline D\bigr) = o(TV(x)). \qquad(16.4.13)$$
From this and (16.4.10)–(16.4.12) we conclude that
$$\mathbf{P}(G_T) \ge (1+o(1))\int_0^T V(g_*(t))\,dH(t). \qquad(16.4.14)$$
At the same time, from (16.4.6) it also follows that
$$\mathbf{P}(G_T) \le \mathbf{P}(G) = \mathbf{P}(GD) + \mathbf{P}\bigl(G\overline DD_+\bigr) + O\bigl(\mathbf{P}(\overline D_+)\bigr). \qquad(16.4.15)$$
Repeating the argument used to estimate the probability $\mathbf{P}(G_+D)$, we obtain
$$\mathbf{P}(GD) = (1+o(1))\int_0^T V(g_*(t))\,dH(t). \qquad(16.4.16)$$
Further, on the event $\overline DD_+$, for sufficiently large x we have
$$-q(\bar t_j-j) = |q|\,t^0_j \ge -|q|\varepsilon x \ge -\tfrac12\,\widehat g(j), \qquad j\le\nu(T),$$
so that
$$\mathbf{P}\bigl(G\overline DD_+\bigr) = \mathbf{P}\Bigl(\bigcup_{j\le\nu(T)}\bigl\{S^0_j\ge\widehat g(j)-q(\bar t_j-j)\bigr\};\ \overline DD_+\Bigr) \le \mathbf{P}\Bigl(\bigcup_{j\le\nu(T)}\Bigl\{S^0_j\ge\tfrac12\widehat g(j)\Bigr\};\ \overline DD_+\Bigr) \le cV(x)\,\mathbf{E}\bigl(\nu(T);\ \overline D\bigr) = o(TV(x)), \qquad(16.4.17)$$
cf. (16.4.13). From (16.4.15)–(16.4.17) and (16.4.9) we obtain
$$\mathbf{P}(G_T) \le (1+o(1))\int_0^T V(g_*(t))\,dH(t),$$
which, together with (16.4.14), establishes (16.4.2) in the case under consideration.
If condition [<] is satisfied for a γ ∈ (1, 2) then one considers the events D and $D_+$ with the quantity εx replaced in them by $\varepsilon z_0$, where $z_0 = T^{1/\gamma}L_z(T)$. As in the proof of Theorem 16.2.1, I(iii), one can verify that in this case the relation (16.4.9) will hold as well (assuming, without loss of generality, that $x\le cT$). The remaining part of the proof is carried out in the same way as before, but with εx replaced by $\varepsilon z_0$.
II. The case q > 0. (i) Using the left-hand side of (16.4.7) and repeating the argument proving (16.4.14), we obtain
$$\mathbf{P}(G_T) \ge \mathbf{P}(G) \ge \mathbf{P}(GD) \ge (1+o(1))\int_0^T V(g_*(t))\,dH(t).$$
On the other hand, from the right-hand side of (16.4.7) we find that
$$\mathbf{P}(G_T) \le \mathbf{P}(G_-) = \mathbf{P}(G_-D) + \mathbf{P}\bigl(G_-\overline D\bigr). \qquad(16.4.18)$$
Since on the event D one has
$$\bar\tau_{j+1}\le 2\varepsilon x, \qquad |\bar t_j-j|\le\varepsilon x, \qquad j\le\nu(T),$$
we obtain
$$G_-D = D\cap\bigcup_{j\le\nu(T)}\Bigl\{S^0_j\ge\inf_{\bar t_j\le t<\bar t_{j+1}}\bigl[g(t)-q(t-\bar t_j)\bigr]-q(\bar t_j-j)\Bigr\} \subset D\cap\bigcup_{j\le\nu(T)}\bigl\{S^0_j\ge\widehat g(j)-3\varepsilon x\bigr\} = D\cap\bigcup_{j\le\nu(T)}\bigl\{S^0_j\ge(1+o(1))\widehat g(j)\bigr\}, \qquad(16.4.19)$$
so that, using an argument similar to that leading from (16.4.10) to (16.4.14), one finds that
$$\mathbf{P}(G_-D) \le (1+o(1))\int_0^T V(g_*(t))\,dH(t).$$
To bound the last probability in (16.4.18), note that
$$g(t)-qt \ge cx, \qquad t\in[0,T], \qquad(16.4.20)$$
with $c=\tfrac12\min\{\delta/q,1\}$. Indeed, if $T\ge x/2q$ then by the conditions of the theorem one has $g(t)-qt\ge\delta T\ge\delta x/2q$, while if $T<x/2q$ then $g(t)-qt\ge x-qT>x/2$. Therefore
$$\widehat g_-(j)-q(\bar t_j-j) = \inf_{\bar t_j\le t<\bar t_{j+1}}\bigl(g(t)-qt\bigr)+qj \ge cx \qquad(16.4.21)$$
by virtue of (16.4.20), and hence
$$\mathbf{P}\bigl(G_-\overline D\bigr) \le cV(x)\,\mathbf{E}\bigl(\nu(T);\ \overline D\bigr) = o(TV(x)),$$
cf. (16.4.13). This completes the proof of part II(i) of the theorem.
(ii) It is not hard to see that the only complication one encounters when considering the case q > 0 is in estimating the probability $\mathbf{P}(G_-\overline D)$ in (16.4.18). Assuming that $F_\tau(t)=o(V(t))$, one can easily see that $\mathbf{P}(\overline D)=o(TV(x))$ (cf. the proof of Theorem 16.2.1, I(iii); one just has to replace $|\nu(T)-T|$ by $\sup_{t\le T}|\nu(t)-t|$, while all the estimates used will remain true). This establishes the required assertion.
16.5 The case of linear boundaries

In this section we will consider a more special problem on the asymptotic behaviour of the probability of the event $G_T$ introduced in (16.1.1), in the case of linear boundaries of the form $\{g(t)=x+gt;\ 0\le t\le T\}$ as x → ∞. This behaviour will depend on the relationship between the parameters of the problem, and in some particular cases it will immediately follow from the results derived above. Therefore we will give a general picture of what happens in that special case and will formulate new results only in the form of several separate theorems. As before, we assume that the zero mean trend condition (16.1.16) is met.
First of all note that the case T = O(1) is covered by Theorem 16.4.1 on the crossing of arbitrary boundaries. In this case the conditions of Theorem 16.4.1 reduce to the requirement that the distribution of $\xi_j$ satisfies (1.3); when α ∈ (1, 2) the additional requirement is that the condition $W(t)\le cV(t)$, t > 0, holds for the majorant of the left tail, and when α > 2 it is that $\mathbf{E}\xi_j^2<\infty$. The relation (16.4.2) established in Theorem 16.4.1 now takes the form
$$\mathbf{P}(G_T) \sim H(T)V(x), \qquad x\to\infty.$$
Further, the case $g\le 0$ is covered by Corollary 16.4.4, which holds for boundaries satisfying (16.4.4). Clearly, in that case it suffices to consider the situation when T < ∞ (since by virtue of (16.1.16) and the law of large numbers one has $\mathbf{P}(G_T)=1$ when T = ∞). Then, according to Corollary 16.4.4,
$$\mathbf{P}(G_T) \sim \mathbf{P}\bigl(S(T)\ge x+gT\bigr), \qquad(16.5.1)$$
provided that the conditions of Theorem 16.2.1 are satisfied (with x replaced in them by $x_0:=x+gT$), in which case the right-hand side of (16.5.1) can be replaced by the representations established in the above-mentioned theorem for the probability $\mathbf{P}(S(T)\ge x_0)$.
Thus, it remains only to consider the situation of an increasing linear boundary g(t) = x + gt, g > 0, when either T → ∞ or T = ∞. The latter case is evidently covered by Theorem 16.1.2. Indeed, for the GRP $S'(t):=S(t)-gt$ with a negative mean trend ($a'=a_\xi+(q-g)a_\tau=-ga_\tau$ by (16.1.16)) we have $G_\infty=\{\overline{S'}(\infty)\ge x\}$, so that the probability $\mathbf{P}(G_\infty)$ will be asymptotically equivalent, under the respective conditions, to the right-hand sides of the relations (16.1.9), (16.1.10) with |a| replaced in them by $ga_\tau$. Note that the condition $q\le 0$ (q > 0) of Theorem 16.1.2 will translate then into $r:=g-q\ge 0$ (r < 0 respectively).
Now we turn to the case of a (finite) T → ∞. As one can expect from earlier results in the present chapter, the asymptotic behaviour of the probability $\mathbf{P}(G_T)$ will depend, generally speaking, on whether the condition $g\ge q$ is met (the condition excludes the possibility that the boundary will be crossed in any way other than by one of the jumps $\xi_j$).

• The case $r=g-q\ge 0$

Write
$$\mathbf{P}(G_T) = \mathbf{P}(G_\infty) - \mathbf{P}\bigl(G_\infty\overline{G_T}\bigr). \qquad(16.5.2)$$
The behaviour of the first probability on the right-hand side of (16.5.2) was considered in Theorem 16.1.2. On the event $G_\infty\overline{G_T}$ the crossing of the boundary g(t) = x + gt occurs (and, owing to the assumption $r\ge 0$, it occurs at one of the jumps $\xi_j$) at a time t ∈ (T, ∞), and, as we know, with a very high probability this will be a 'large' jump. Since a combination of two or more 'large' jumps is quite unlikely, on this event the trajectory {S(t); t ≤ T} will, with high probability, only 'moderately' deviate from the (horizontal) mean trend line. Therefore, one can expect that the relation S(T) = o(x + gT) will hold true and that hence the probability that the boundary x + gt, t > T, is crossed will be close to the probability of the crossing of the boundary (x + gT) + gt, t > 0, by the original process {S(t); t ≥ 0}, i.e. the probability will be close to the quantity $(x+gT)V(x+gT)/((\alpha-1)ga_\tau)$ by virtue of Theorem 16.1.2. More precisely, the following theorem holds true.
Theorem 16.5.1. Let g > 0, $r=g-q\ge 0$, let either condition $[Q_T]$ or condition $[\Phi_T]$ be satisfied for the distribution of $\xi_j$ and let the mean trend in the process {S(t)} be equal to zero. Then, as x → ∞ and T → ∞,
$$\mathbf{P}(G_T) \sim \frac1{(\alpha-1)ga_\tau}\bigl[xV(x)-(x+gT)V(x+gT)\bigr]. \qquad(16.5.3)$$
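In the proof that follows, the right-hand side of (16.5.3) arises as the value of $(1/a_\tau)\int_0^T V(x+gt)\,dt$. For a pure power tail $V(y)=y^{-\alpha}$ this identity is exact, which the following sketch confirms by numerical quadrature; it is our own illustration, with arbitrary parameter values not taken from the book.

```python
import numpy as np

# For V(y) = y^{-alpha} one has the exact identity
#   (1/a_tau) * int_0^T V(x + g t) dt
#     = [x V(x) - (x+gT) V(x+gT)] / ((alpha-1) g a_tau),
# which is how formula (16.5.3) arises.  Parameter values are illustrative.

alpha, g, a_tau = 2.5, 0.7, 1.3
x, T = 40.0, 500.0

def V(y):
    return y ** (-alpha)

# trapezoidal quadrature of the left-hand side
t = np.linspace(0.0, T, 200_001)
f = V(x + g * t)
lhs = float(np.sum((f[:-1] + f[1:]) * np.diff(t)) / 2.0) / a_tau

rhs = (x * V(x) - (x + g * T) * V(x + g * T)) / ((alpha - 1) * g * a_tau)
print(lhs, rhs)    # the two values agree to quadrature accuracy
```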
Proof. If T = O(x) then we are in the situation of Theorem 16.4.1 (I or II(i)); moreover, $x_*\equiv\inf_{t\le T}(g(t)-qt)=x\ge\delta T$ for some fixed δ > 0, $g_*(t)=g(t)=x+gt$ and T → ∞. Therefore, by virtue of the above-mentioned theorem and (16.4.3), one has
$$\mathbf{P}(G_T) \sim \frac1{a_\tau}\int_0^T V(x+gt)\,dt \sim \frac1{(\alpha-1)ga_\tau}\bigl[xV(x)-(x+gT)V(x+gT)\bigr]$$
by Theorem 1.1.4(iv).
Now if x = o(T) then the desired assertion will follow from (16.5.2). Indeed, as we have already noted, by Theorem 16.1.2 one has
$$\mathbf{P}(G_\infty) \sim \frac{xV(x)}{(\alpha-1)ga_\tau}. \qquad(16.5.4)$$
Further, putting $x_0:=(x+gT)/2$ we conclude that, on the event $\{S(T)\le x_0\}$, the crossing of the boundary x + gt in the time interval (T, ∞) is only possible at a time $t\ge t_{\nu(T)+1}$, and, moreover, that
$$S(t_{\nu(T)+1}) \le S(T)+(t_{\nu(T)+1}-T)q+\xi_{\nu(T)+1} \le x_0+(t_{\nu(T)+1}-T)g+\xi_{\nu(T)+1}.$$
Therefore the aforementioned crossing also implies that, for some t > 0,
$$S(t+t_{\nu(T)+1})-S(t_{\nu(T)+1}) \ge x+(t_{\nu(T)+1}+t)g-\bigl[x_0+(t_{\nu(T)+1}-T)g+\xi_{\nu(T)+1}\bigr] = x_0-\xi_{\nu(T)+1}+gt.$$
Hence
$$\mathbf{P}\bigl(G_\infty\overline{G_T}\bigr) \le \mathbf{P}\Bigl(\sup_{t>T}\bigl(S(t)-g(t)\bigr)\ge 0,\ S(T)\le x_0\Bigr) + \mathbf{P}\bigl(S(T)>x_0\bigr)$$
$$\le \mathbf{P}\Bigl(\sup_{t>0}\bigl[S(t+t_{\nu(T)+1})-S(t_{\nu(T)+1})-(x_0-\xi_{\nu(T)+1}+gt)\bigr]\ge 0\Bigr) + O(TV(x_0))$$
$$\le \mathbf{P}(\xi\ge x_0/2) + \mathbf{P}\Bigl(\sup_{t>0}\bigl[S(t)-(x_0/2+gt)\bigr]\ge 0\Bigr) + O(TV(x_0)) = O\bigl(V(x_0)+x_0V(x_0)+TV(x_0)\bigr) = o(xV(x))$$
(here we used Theorem 16.2.1 to bound $\mathbf{P}(S(T)>x_0)$, Theorem 16.1.2 to bound the last probability in the formula, and also the obvious relation $(x_0+T)V(x_0)=O(TV(T))=o(xV(x))$). So we obtain from (16.5.2) that $\mathbf{P}(G_T)\sim\mathbf{P}(G_\infty)$, where the right-hand side was given in (16.5.4). Since $(x+gT)V(x+gT)=o(xV(x))$ in the case under consideration, the assertion (16.5.3) remains true.

• The case $r=g-q<0$

We have
$$x_* \equiv \inf_{t\le T}\bigl(g(t)-qt\bigr) = (x+gT)-qT = x+rT.$$
If $x_*\ge\delta T$ for a fixed δ > 0 then we are in the situation of Theorem 16.4.1, and hence (16.5.3) will hold true (cf. Theorem 16.5.1). Now suppose that $x_*\to\infty$, but $x_*=o(T)$. Along with the events D and $D_+$ from (16.4.8), introduce the event
$$D_- := \Bigl\{\inf_{t\le T}\bigl(\nu(t)-t\bigr)\ge-\varepsilon T\Bigr\}$$
(note that $x\asymp T$ in the case under consideration, so that in the definition of the events D and $D_\pm$ one can replace x with T, and this we have done). Now write
$$\mathbf{P}(G_T) = \mathbf{P}(G_TD) + \mathbf{P}\bigl(G_T\overline D\bigr), \qquad(16.5.5)$$
where, using the events G and $G_\pm$ introduced in (16.4.5) and arguing as in the proof of Theorem 16.4.1, II, we obtain that
$$(1+o(1))\int_0^T V(g_*(t))\,dH(t) \le \mathbf{P}(GD) \le \mathbf{P}(G_TD) \le \mathbf{P}(G_-D) \le (1+o(1))\int_0^T V(g_*(t))\,dH(t).$$
In our case of a linear boundary g(t) = x + gt with g > 0, one clearly has $g_*(t)=g(t)$, so that the asymptotic behaviour of the integral (and therefore that of the probability $\mathbf{P}(G_TD)$) will be given by the right-hand side of (16.5.3).
To estimate the second probability on the right-hand side of (16.5.5), observe that $D=D_-D_+$ and therefore $\overline D=\overline D_-D_+ + \overline D_+$. Hence
$$\mathbf{P}\bigl(G_T\overline D\bigr) = \mathbf{P}\bigl(G_T\overline D_-D_+\bigr) + \mathbf{P}\bigl(G_T\overline D_+\bigr).$$
The last term does not exceed $\mathbf{P}(\overline D_+)=o(TV(x))$, according to (16.4.9), so that we just have to bound the probability
$$\mathbf{P}\bigl(G_T\overline D_-D_+\bigr) = \mathbf{P}\Bigl(\sup_{t\le T}\bigl(S_{\nu(t)}+qt-(x+gt)\bigr)\ge 0;\ \overline D_-D_+\Bigr) = \mathbf{P}\Bigl(\sup_{t\le T}\bigl[S^0_{\nu(t)}-(x+rt+q\nu(t))\bigr]\ge 0;\ \overline D_-D_+\Bigr) \qquad(16.5.6)$$
Extension to generalized renewal processes
(assuming, as usual, for simplicity that aτ = 1). Here it will be convenient for us to estimate supt(1−ε)T and sup(1−ε)T 0, on the event D+ one clearly has the inequality # 0 0 $ sup Sν(t) − (x + rt + qν(t)) sup Sν(t) − (x + rt) t(1−ε)T
t(1−ε)T
sup
0 Sν(t) − x∗ + rεT
t(1−ε)T
max Sn0 − |r|εT. nT
Therefore, P
# 0 $ Sν(t) − (x + rt + qν(t)) 0; D − D+
sup
t(1−ε)T
P max Sn0 |r|εT ; D− D+ nT
P max Sn0 |r|εT P(D− ) = o(T V (x)), nT
cf. (16.2.29) (and using (16.4.9) again). At the same time, on the event {ν((1 − ε)T ) εT }D+ we have the bound sup (1−ε)T
# 0 $ Sν(t) − (x + rt + qν(t))
sup (1−ε)T
0 [Sν(t) − x∗ − εqT ]
max n(1+ε)T
Sn0 − εqT
so that we obtain, again in a way similar to (16.2.29), that P
sup (1−ε)T
$ # 0 Sν(t) − (x + rt + qν(t)) 0, ν((1 − ε)T ) εT ; D− D+
= o(T V (x)).
(16.5.7)
Summing up the above, we see that P(GT DD+ ) = P
sup (1−ε)T
# 0 $ Sν(t) − (x + rt + qν(t)) 0, ν((1 − ε)T ) < εT ; D− D+
+ o(T V (x)).
Now observe that, in the expression under the probability symbol on the righthand side of the last relation, one can: (1) remove D− since {ν((1 − ε)T ) < εT } ⊂ D − ; (2) replace the event {ν((1 − ε)T ) < εT } by {ν(T ) < 3εT }, since the
597
16.5 The case of linear boundaries symmetric difference between these events is the sum {ν((1 − ε)T ) < εT, ν(T ) 3εT }
+ {ν((1 − ε)T ) εT, ν(T ) < 3εT }, (16.5.8) where, by Lemma 16.2.6(i), P ν((1 − ε)T ) < εT, ν(T ) 3εT = P(ν(εT ) 2εT ) = P t02εT −εT e−2εT Λθ (1/2) = o(T V (x)) when ε vanishes slowly enough, and the contribution of the second event in the sum (16.5.8) to the probability of interest is o(T V (x)) by (16.5.7); (3) remove D+ since P(D + ) = o(T V (x)) according to (16.4.9). Thus, we simply have to estimate P1 := P
sup (1−ε)T
0 [Sν(t) − (x + rt + qν(t))] 0, ν(T ) < 3εT .
We will represent the probability as a sum of the form (16.2.30), but with the events C (k) defined now in a somewhat different way: for exactly k of the r.v.’s τj , j 3εT , one has the inequality τj δT , whereas for the rest one has τj < δT . In exactly the same way as in the proof of Theorem 16.2.1, II(ii), we verify that the contribution to this sum of the terms with k = 1 is negligibly small. The case k = 1 is also considered in a way similar to that used in the argument in the above-mentioned proof, but instead of (16.2.33) we use here the following representation (for a small enough δ > 0): P1 = O
P(ν(T ) < j, τi < δT, i j)
j<3εT
+
P
j<3εT
+
(1)
sup (1−ε)T
j<3εT jn<3εT
0 [Sν(t) − (x + rt + qν(t))] 0, ν(T ) = j; Cj+1
P
sup (1−ε)T
0 [Sν(t) − (x + rt + qν(t))] 0, (1)
ν(T ) = n; Cj (1)
(16.5.9)
(the events $C^{(1)}_j$ are now defined as in (16.2.32) but with ε replaced by 3ε). Under the assumption that condition [<] is met, the first term on the right-hand side of (16.5.9) is negligibly small owing to (16.2.34). Observing that the probabilities in the last sum in (16.5.9) do not exceed
$$\mathbf{P}\Bigl(\max_{k\le n}\bigl[S^0_k-(x_*+qk)\bigr]\ge 0\Bigr)\,\mathbf{P}\bigl(\nu(T)=n;\ C^{(1)}_j\bigr) \le \mathbf{P}\Bigl(\sup_n\bigl[S^0_n-qn\bigr]\ge x_*\Bigr)\,\mathbf{P}\bigl(\nu(T)=n;\ C^{(1)}_j\bigr),$$
we use (16.2.35)–(16.2.38) to obtain the bound $o\bigl(x_*^2V(x_*)V_\tau(T)+TV(T)\bigr)$ for the whole sum.
Now we turn to the middle term in (16.5.9). As for (16.2.39), one can neglect the event $\{t_j\ge 4\varepsilon T\}\cap C^{(1)}_{j+1}$, so that this term will become (up to negligibly small terms, which we discard)
$$\sum_{j<3\varepsilon T}\mathbf{P}\Bigl(\sup_{(1-\varepsilon)T<t\le T}\bigl[S^0_{\nu(t)}-(x+rt+q\nu(t))\bigr]\ge 0,\ t_j<4\varepsilon T,\ t_{j+1}>T;\ C^{(1)}_{j+1}\Bigr).$$
On the event $\bigcup_{j<3\varepsilon T}\{t_j<4\varepsilon T,\ t_{j+1}>T\}$ the crossing of the boundary prior to the time $t_{\nu(T)}$ can clearly only occur with a probability of order $O(\varepsilon TV(x))=o(TV(x))$. Therefore, under the assumption that $F_\tau\in\mathcal R$, the above sum (again up to discarded negligibly small terms) is equal to
$$\sum_{j<3\varepsilon T}\mathbf{P}\bigl(S^0_j\ge x_*+qj\bigr)\,\mathbf{P}\bigl(t_j<4\varepsilon T,\ t_{j+1}>T;\ C^{(1)}_{j+1}\bigr) = (1+o(1))V_\tau(T)\sum_{j<3\varepsilon T}jV(x_*+qj)$$
(again cf. (16.2.39); we have used relations of the form (16.1.18) for the random walk $\{S^0_j\}$). Therefore we obtain (cf. the derivation of (16.2.40)) that $P_1=o(TV(T))$ for α ∈ (1, 2), whereas for α > 2 we have
$$P_1 = (1+o(1))\,\frac{x_*^2V(x_*)V_\tau(T)}{q^2(\alpha-1)(\alpha-2)} + o(TV(T)),$$
which completes the proof of the following theorem.
Theorem 16.5.2. Let g > 0, r = g − q < 0, $x_*=x+rT\to\infty$ but $x_*=o(T)$, and let $F_\tau\in\mathcal R$, the mean trend of the process {S(t)} being equal to zero.
(i) If the distribution of the jumps $\xi_j$ satisfies condition $[Q_T]$ then
$$\mathbf{P}(G_T) \sim \frac1{(\alpha-1)ga_\tau}\bigl[xV(x)-(x+gT)V(x+gT)\bigr].$$
(ii) If condition $[\Phi_T]$ holds for the distribution of $\xi_j$ then
$$\mathbf{P}(G_T) \sim \frac1{(\alpha-1)ga_\tau}\bigl[xV(x)-(x+gT)V(x+gT)\bigr] + \frac{x_*^2V(x_*)V_\tau(T)}{q^2a_\tau(\alpha-1)(\alpha-2)}.$$
Now assume that $x_*=x+rT\to-\infty$. We will restrict ourselves to the case when $x_*<-\delta T$ and $x\ge\delta T$ for a fixed δ > 0. As before, starting with the representation (16.5.5) we verify that the probability $\mathbf{P}(G_TD)$ asymptotically behaves as the right-hand side of (16.5.3):
$$\mathbf{P}(G_TD) \sim \frac1{(\alpha-1)ga_\tau}\bigl[xV(x)-(x+gT)V(x+gT)\bigr], \qquad(16.5.10)$$
which means that it simply remains to evaluate (16.5.6). If $F_\tau(t)=o(V(t))$ as t → ∞ then
$$\mathbf{P}\bigl(G_T\overline D_-D_+\bigr) \le \mathbf{P}\bigl(\overline D_-D_+\bigr) \le \mathbf{P}\Bigl(\max_{k\le(1+\varepsilon)T}t^0_k>\varepsilon T\Bigr) = o(TV(T)), \qquad(16.5.11)$$
cf. (16.2.41), so that we just obtain (16.5.3). In the general case, put
$$Y := \max_{k\le(1+\varepsilon)T}S^0_k$$
and observe that $\mathbf{P}(Y>x/2)=O(TV(x))$, cf. (16.2.42). Hence $\mathbf{P}\bigl(\overline D_-\{Y>x/2\}\bigr)=o(TV(x))$ by virtue of (16.4.9), as $\overline D_-\subset\overline D$. Therefore
$$\mathbf{P}\bigl(G_T\overline D_-D_+\bigr) = \mathbf{P}\bigl(G_T\overline D_-D_+\{Y\le x/2\}\bigr) + o(TV(x)). \qquad(16.5.12)$$
Further, the event
$$G_TD_+ = \Bigl\{\sup_{t\le T}\bigl[S^0_{\nu(t)}-(x+rt+q\nu(t))\bigr]\ge 0\Bigr\}D_+$$
implies that
$$\nu(t) < \frac{|r|t}q - \frac{x-Y}q \quad\text{for some } t\le T,$$
or, equivalently, that
$$t^0_{(1-g/q)t-(x-Y)/q} > \frac{gt}q + \frac{x-Y}q \quad\text{for some } t\le T.$$
Setting
$$k_z := \Bigl(1-\frac gq\Bigr)T - \frac{x-z}q, \qquad(16.5.13)$$
we see that the previous relation means that, for some $k\le k_Y$, one has
$$t^0_k > \frac g{q-g}\,k + x_Y, \qquad x_z := \frac{x-z}{q-g}.$$
From this we see that if $F_\tau\in\mathcal R$ then by (16.1.12) (applied to the r.v.'s $\xi'_j:=\tau^0_j-g/(q-g)$ with $\mathbf{E}\xi'_j=-g/(q-g)<0$) we have for the probability on the right-hand side of (16.5.12) (denote it by $P_2$) the bound
$$P_2 \le \int_0^{x/2}\mathbf{P}\Bigl(\max_{k\le k_z}\Bigl[t^0_k-\frac{gk}{q-g}\Bigr]>x_z\Bigr)\,\mathbf{P}(Y\in dz) \sim \frac{q-g}{(\gamma-1)g}\int_0^{x/2}\Bigl[x_zV_\tau(x_z)-\Bigl(x_z+\frac{gk_z}{q-g}\Bigr)V_\tau\Bigl(x_z+\frac{gk_z}{q-g}\Bigr)\Bigr]\mathbf{P}(Y\in dz).$$
Since $\mathbf{P}(Y>\varepsilon x)=o(1)$ when ε tends to 0 sufficiently slowly, we obtain, cf. (16.2.43), the bound
$$P_2 \le (1+o(1))\,\frac{q-g}{(\gamma-1)g}\Bigl[\frac x{q-g}\,V_\tau\Bigl(\frac x{q-g}\Bigr) - \frac{x+gT}q\,V_\tau\Bigl(\frac{x+gT}q\Bigr)\Bigr].$$
At the same time,
$$P_2 \ge \mathbf{P}\Bigl(G_T\overline D_-D_+;\ \min_{k\le(1+\varepsilon)T}S^0_k>-\varepsilon x\Bigr) \ge \mathbf{P}\Bigl(\sup_{t\le T}\bigl[-\varepsilon x-(x+rt+q\nu(t))\bigr]>0;\ \overline D_-D_+\Bigr)\times\mathbf{P}\Bigl(\min_{k\le(1+\varepsilon)T}S^0_k>-\varepsilon x\Bigr)$$
$$= (1+o(1))\,\mathbf{P}\Bigl(\inf_{t\le T}\Bigl[\nu(t)-\frac{|r|t}q\Bigr]<-\frac{(1+\varepsilon)x}q\Bigr) + o(TV(x)), \qquad(16.5.14)$$
since we have the following: (1) $\mathbf{P}\bigl(\min_{k\le(1+\varepsilon)T}S^0_k\le-\varepsilon x\bigr)=o(1)$ when ε → 0 slowly enough; (2) the presence of the event $\overline D_-$ under the probability symbol in the middle line of (16.5.14) is superfluous, as
$$\Bigl\{\sup_{t\le T}\bigl[-\varepsilon x-(x+rt+q\nu(t))\bigr]>0\Bigr\} \subset \overline D_-;$$
(3) removing the event $D_+$ from under the above-mentioned probability symbol will introduce, owing to (16.4.9), an error of order o(TV(x)). The asymptotic behaviour of the probability on the right-hand side of (16.5.14) can be derived similarly to the way in which we evaluated the probability of the
event (16.5.13); this leads to the inequality
$$P_2 \ge (1+o(1))\,\frac{q-g}{(\gamma-1)g}\Bigl[\frac x{q-g}\,V_\tau\Bigl(\frac x{q-g}\Bigr) - \frac{x+gT}q\,V_\tau\Bigl(\frac{x+gT}q\Bigr)\Bigr] + o(TV(x)).$$
Therefore
$$\mathbf{P}\bigl(G_T\overline D_-D_+\bigr) = o(TV(x)) + (1+o(1))\,\frac{q-g}{(\gamma-1)g}\Bigl[\frac x{q-g}\,V_\tau\Bigl(\frac x{q-g}\Bigr) - \frac{x+gT}q\,V_\tau\Bigl(\frac{x+gT}q\Bigr)\Bigr].$$
Together with (16.5.5), (16.5.10) and (16.5.11), the above relation establishes the following result.
Theorem 16.5.3. Let g > 0, r = g − q < 0, $x_*=x+rT\le-\delta T$ and $x\ge\delta T$ for a fixed δ > 0, T → ∞. Assume that the distribution of the jumps $\xi_j$ satisfies either condition $[Q_T]$ or condition $[\Phi_T]$ and that the mean trend in the process {S(t)} is equal to zero. Then the following assertions hold true.
(i) If $F_\tau(t)=o(V(t))$ as t → ∞ then the relation (16.5.3) holds true for the probability $\mathbf{P}(G_T)$.
(ii) If $F_\tau\in\mathcal R$, γ > 1 (see (16.1.4)), then
$$\mathbf{P}(G_T) \sim \frac1{(\alpha-1)ga_\tau}\bigl[xV(x)-(x+gT)V(x+gT)\bigr] + \frac{q-g}{(\gamma-1)ga_\tau}\Bigl[\frac x{q-g}\,V_\tau\Bigl(\frac x{q-g}\Bigr) - \frac{x+gT}q\,V_\tau\Bigl(\frac{x+gT}q\Bigr)\Bigr].$$
Bibliographic notes
Chapter 1 The concept of slow variation first appeared in [158] (where only continuous functions were considered). Somewhat earlier, the closely related concept of a very slowly oscillating sequence (v.s.o.s.; in German, sehr langsam oszillirende Folge) had been introduced in [249, 250] when studying various types of convergence in the context of extending the conditions of applicability of Tauberian theorems. By definition, a v.s.o.s. $\{s_n\}$ is characterized by the property that, for any subsequence of indices k = k(n) such that k/n → u ∈ (0, ∞) as n → ∞, one has $s_k-s_n\to 0$ (i.e. $l_k/l_n\to 1$ for $l_n:=e^{s_n}$). In [250] analogues of Theorems 1.1.2 and 1.1.4(ii) for v.s.o.s.'s were obtained, and it was noted that the very idea of a v.s.o.s. can easily be extended to functions of a continuous variable.

The assertions of Theorems 1.1.2 and 1.1.3 were established in [158] for continuous functions; for arbitrary measurable functions they were first obtained in [174]. The proofs of these theorems presented in this book are close to those in [32] (see ibid. for other versions of the proofs and further historical comments). According to [251], the traditional notation L for s.v.f.'s goes back to [158], although the notation $x^\alpha L(x)$ for functions displaying regular variation type behaviour can be seen already in [250]. The publication of the books [130] and [122] played an important role in introducing r.v.f.'s into probability theory. For a more detailed account of the properties of s.v.f.'s and r.v.f.'s see [32, 251].

The class of subexponential distributions was introduced in [81]. This paper also contained the proof of the first part of Theorem 1.2.8. Our proof of Theorem 1.2.12(iii) mostly follows the exposition in [15] (the term 'subexponential distribution' apparently appeared in that monograph). According to [32], subexponential distributions on the whole real line were first considered in [137].
Theorem 1.2.21(i) was obtained in [133, 166]; Theorem 1.2.21(ii) was established in [166]. In conjunction with the relations l′(t) → 0 and tl′(t) → ∞ as t → ∞, the sufficient condition for a distribution G to belong to the class S (and simultaneously to a narrower class S*) from Corollary 1.2.32 was obtained in [166] (as a corollary of Theorem 3 in that paper). Along with a number of other conditions sufficient for G ∈ S, it was presented in [133] (see ibid. for a more complete bibliography); see also [81, 266, 229, 114]. Surveys of the main properties of 'heavy-tailed' distributions can be found in [113, 9, 252, 235]. A proof of Theorem 1.3.2 can be found in [83]. A more general case was considered in [242]. A proof of Theorem 1.4.1 (in a somewhat more general form) can be found in [84]. Locally subexponential distributions were studied in [10]; some results of that paper are close to the results of §§ 1.3 and 1.4. For instance, Corollary 2 and Assertion 4 are close to our Theorem 1.3.4, while Theorem 2 is close to our Theorem 1.4.2. Some results presented in §§ 1.2–1.4 were established in [55]. The general theory of the convergence of sums of independent r.v.'s was presented in [130] (see also [122]; for results in the multivariate case, historic comments and further
bibliography, see e.g. [5, 189]). The role of the contribution of the maximum summands to the sum Sn in the case of convergence to a non-Gaussian stable distribution was studied in [97, 7] (see also [182, 100, 98]). The monographs [286, 86, 273, 245, 153] are concerned with stable distributions and processes; see also [256, 211, 25, 246]. The invariance principle (Theorem 1.6.1) was established in [107] (and in [230] for nonidentically distributed summands ξj ; for more details, see e.g. [28, 74]). The functional limit theorem on convergence to stable processes was obtained in [255]; conditions for the convergence of arbitrary functionals of the processes of partial sums were found in [41, 43, 75]. The law of the iterated logarithm was first obtained in the case of the uniform discrete distribution of ξ in [161]. Then this result was extended in [171] to the case of independent non-identically distributed bounded r.v.’s. In the case of i.i.d. r.v.’s with finite variance the law of the iterated logarithm was established in [140], and the converse result was obtained in [261]. The assertion of Theorem 1.6.6 was first obtained in [147] in the case when F belongs to the domain of normal attraction of a stable law Fα,ρ , α < 2, |ρ| < 1; it was then generalized in [274]. Detailed surveys of results related to the law of the iterated logarithm can be found in our § 3.9, in § 5.5.4 of [260] and in [31].
Chapters 2 and 3 Theorem 2.5.5 was established in [119]. An alternative proof of this theorem, using the subadditive Kingman theorem, was given in [102]. The assertion of Theorem 2.6.5 was obtained in [197]. In [118] a local renewal theorem for ξ ≥ 0 was obtained in the case when condition [ · , =] holds with 1/2 < α < 1 (for α ∈ (0, 1/2] the paper gave an upper bound only). In the lattice case, these results were extended in [128, 279] to the case of r.v.'s ξ assuming both negative and positive values. Upper bounds for the distributions of Sn in terms of truncated moments and without assuming the existence of majorants were obtained in [201, 127] (and also in [206]). The asymptotics of P(Sn ≥ x) ∼ nV(x) for x ≥ cn were established in [207]. Some results of Chapters 2 and 3 were obtained in [51, 54, 57, 66]. An analogue of the uniform representation (4.1.2) for x > n^{1/α} in the case when F belongs to the domain of attraction of a stable law with exponent α was obtained in [238, 239]. An analogue of the law of the iterated logarithm (the assertion of Corollary 3.9.2) was first obtained in [82] for the case of symmetric stable summands (it was noted in that paper that, as had been pointed out by V. Strassen, this assertion immediately follows from the results of [163] and also that, with the help of the results of [96], it could be extended to the case of distributions F that belong to a special subset of the domain of normal attraction of Fα,ρ, α < 2). For distributions F from the entire domain of attraction of a stable law Fα,ρ, α < 2, |ρ| < 1, this law was obtained in [147], while some refinements and extensions to it were established in [190]. See also the bibliography in [190, 164] and detailed surveys of results related to the law of the iterated logarithm presented in § 5.5.4 of [260] and in [31]. Some assertions from § 3.9 were obtained in [51].
Chapter 4 For bibliographies concerning the asymptotic equivalence

P(Sn ≥ x) ∼ nV(x),   P(S̄n ≥ x) ∼ nV(x)

under condition [ · , =] with α > 2, see e.g. [225, 237, 191, 206]. The uniform representation (4.1.2) for x > √n under the minimal additional condition
that E(ξ²; |ξ| > t) = o(1/ln t), t → ∞, was established in Corollary 7 of [237]. In [206] the representation (4.1.2) was presented in Theorem 1.9 under the additional condition that E|ξ|^{2+δ} < ∞, δ > 0, with a reference to [194]; the latter, in fact, only dealt with the zone x ≥ n^{1/2} ln n (when (4.1.2) degenerates into (4.1.1)), but under no additional moment conditions. According to [191], the result presented in [206] was obtained in the doctoral thesis of A.V. Nagaev (On large deviations for sums of independent random variables, Institute of Mathematics of the Academy of Sciences of the UzSSR, Tashkent, 1970). In [225] the representation (4.1.2) was obtained under the assumption that F(t) = O(t^{−α}) as t → ∞ (Corollary 2). Under the additional assumption that F(t) = O(t^{−α}) as t → ∞, the representation (4.4.3) was established for x/√n → ∞ in Corollary 1 of [225]. Upper bounds for the distributions of Sn in terms of truncated moments and without assuming the existence of majorants were obtained in [201, 127] (and also in [206]). These inequalities are, in a sense, more general but also much more cumbersome. One cannot derive from them the assertions of Corollary 4.1.4 (even for the sums Sn). Some lower bounds for P(Sn ≥ x) were found in [205]. Lower bounds for P(|Sn| ≥ x) were obtained in [208]. In [194] the following integral and integro-local theorems for the distribution of Sn were presented. Let F ∈ R, α > 2, x/ln x > √n and n → ∞. Then P(Sn > x) ∼ nV(x). If, moreover, P(ξ ∈ Δ[t)) ∼ αΔV(t)/t as t → ∞ and 0 < c ≤ Δ = o(t) then P(Sn ∈ Δ[x)) ∼ nαΔV(x)/x. Under the assumption that the r.v. ξ = ξ′ − Eξ′ was obtained by centring a non-negative r.v. ξ′ ≥ 0, the first of the asymptotic relations (4.8.2) was obtained in [90] for distributions with tails of extended regular variation (i.e. in the case when (4.8.3) holds true) and in [210] for distributions with 'intermediate regular variation' of the tails (i.e.
in the case when (4.8.4) holds; see also [89, 91, 156, 248]), but only in the zone x ≥ δn, where δ > 0 is fixed. The assertion of Theorem 4.9.1 in a narrower case (in particular, under the assumption that x = cn) was obtained in Theorem 3.1 of [108]. The limiting behaviour (as x → ∞) of the conditional distribution of

\left\{\frac{S_{\eta(x)t}(a)}{N(x)},\ t \in [0,1]\right\}, \quad \frac{S_{\eta(x)}(a) - x}{N(x)}, \quad \frac{S_{\eta(x)-1}(a)}{\eta(x)}, \quad \frac{\eta(x)}{N(x)}, \qquad N(x) := \frac{F_+^I(x)}{F_+(x)},

under the condition that η(x) := inf{k : S_k − ak ≥ x} < ∞ was studied in [12] in the case when Eξ = 0, a > 0 and either F ∈ R for α > 1 or F belongs to the maximum domain of attraction of the Gumbel distribution (the latter condition is known to be equivalent to the property that, for any u > 0,

\lim_{t\to\infty} \frac{F_+(t + N(t)u)}{F_+(t)} = e^{-u},

see e.g. Proposition 3.1 of [12] or Theorem 3.3.27 of [113]; an alternative characterization of this class was given by Theorem 3.3.26 of [113]). The paper also dealt with the asymptotics of the conditional distribution of ξ_{η(x)} and some others; in particular, it was shown there that, in the case when F ∈ R, α > 1,

\lim_{x\to\infty} P\bigl(x^{-1}\xi_{\eta(x)} > u \bigm| \eta(x) < \infty\bigr) = \bigl(1 + \alpha(1 - u^{-1})\bigr)u^{-\alpha}, \qquad u > 0.
Some results of Chapter 4 were established in [51, 54, 63].
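A hedged numerical sketch of the subexponential property underlying the relation P(Sn ≥ x) ∼ nV(x) discussed above (our own illustration, not taken from the works cited; the tail index and deviation level are arbitrary choices): for n = 2 and a Pareto tail V(y) = y^{−α}, the convolution tail can be computed by simple midpoint quadrature and compared with 2V(x).

```python
# Sketch (our own, not from the cited papers): for a Pareto r.v. with density
# f(t) = a * t**(-a - 1) on [1, inf) and tail V(y) = y**(-a), subexponentiality
# gives P(xi_1 + xi_2 > x) ~ 2 V(x) as x -> infinity: one big jump dominates.
a = 2.0      # tail index alpha (arbitrary choice > 1)
x = 200.0    # deviation level (arbitrary, large compared with E xi = 2)

def f(t):
    return a * t ** (-a - 1.0)

def V(y):    # tail P(xi > y); equals 1 to the left of the support
    return 1.0 if y <= 1.0 else y ** (-a)

# P(S_2 > x) = int_1^inf f(t) V(x - t) dt; for t >= x - 1 the factor V(x - t)
# is 1, so that piece contributes exactly the Pareto tail V(x - 1).
N = 200_000
lo, hi = 1.0, x - 1.0
h = (hi - lo) / N
mid = sum(f(lo + (k + 0.5) * h) * V(x - (lo + (k + 0.5) * h)) for k in range(N)) * h
p2 = mid + V(hi)

ratio = p2 / (2.0 * V(x))
print(round(ratio, 3))   # close to 1, and it approaches 1 as x grows
assert 0.95 < ratio < 1.2
```

Increasing x brings the ratio closer to 1, in line with the 'single big jump' heuristic mentioned in Chapter 9 below.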
Chapter 5 Problems on 'moderately large' deviations of the sums Sn for semiexponential and related distributions were studied in [152, 201, 220, 221, 280, 281, 247, 212, 206, 238] and some other papers, where, in particular, the validity of the Cramér approximation (5.1.11)
for P(Sn ≥ x) was established under the condition that E(e^{h(ξ)}; ξ ≥ 0) < ∞ (or Ee^{h(ξ)} < ∞) for some function h(t) that is close, in a sense, to an r.v.f. of index α ∈ (0, 1) (conditions on the function h(t) vary from paper to paper). Similar results for P(S̄n ≥ x) were obtained in [2]. The asymptotic representation P(Sn ≥ x) ∼ nV(x) for x ≥ n^{1/(2−2α)} was obtained in [238, 196, 227, 51]. For results concerning large deviations of Sn in the case of distributions satisfying the condition Ee^{h(|ξ|)} < ∞, see also [206, 191] (one can find there more complete bibliographies as well). In [227] the asymptotics of P(S̄n ≥ x) were established in cases where they coincide with the asymptotics of P(Sn ≥ x). Theorems on the asymptotics of P(Sn ≥ x) that would be valid on the whole real line were considered in [238]. In particular, the paper gives the form of P(Sn ≥ x) in the intermediate zone x ∈ (σ1(n), σ2(n)), but it does that under conditions of which the meaning is quite difficult to comprehend. The intermediate deviation zone σ1(n) < x < σ2(n) was also considered in [195]. The paper deals with the rather special case when the distribution F has density

f(t) ∼ e^{−|t|^α}  as |t| → ∞;
the asymptotics of P(Sn ≥ x) were found there for α > 1/2 in the form of recursive relations, from which one cannot extract, in the general case, a closed-form expression for the asymptotics of P(Sn ≥ x). Close results on the first-order asymptotics of P(Sn ≥ x) were obtained in [205] (they are also discussed in [206]) for the class of distributions with P(ξ ≥ t) = e^{−l(t)}, where the function l(t) is thrice differentiable and its derivatives satisfy a number of conditions. Asymptotic representations for P(S̄n ≥ x) in the intermediate deviation zone σ1(n) < x < σ2(n) and also in the zone x ≥ σ2(n) were studied in [52]. The asymptotic representation (5.1.17) for P(S̄n(a) ≥ x), a > 0, which holds, as x → ∞, for all n and all so-called strongly subexponential distributions, was established in [178] (see also [275]); sufficient conditions from [178] for a distribution to be strongly subexponential are satisfied for F ∈ Se. A number of results of Chapter 5 were obtained in [51, 52].
Chapter 6 In the Cramér case, methods for studying the probabilities of large deviations of Sn are quite well developed and go back to [95] (see also [233, 16, 219, 259, 120, 49] etc.). Somewhat later, these methods were extended in [37, 44, 69, 70] to enable one to study S̄n and also to solve a number of other problems related to the crossing of given boundaries by the trajectory of a random walk. Theorem 6.1.1 was established by B.V. Gnedenko (see §§ 49 and 50 of [130]; see also Theorem 4.2.1 of [152] and Theorem 8.4.1 of [32]); the multivariate case was considered in [243]. Sufficiency in Theorem 6.1.2 was established in [259] (cf. [120]; the finite variance case was studied earlier in [254]) and the proof of the necessary part was presented in § 8.4 of [32]. Uniform versions of Theorems 6.1.1 and 6.1.2 were established in [72] and, to some extent, in [259]. The properties of the deviation function Λ (which is sometimes also referred to as the Chernoff function [80] or rate function) are discussed in [38, 49, 67, 101]. The monograph [101] is devoted to the analysis of the crude (logarithmic) asymptotics of large deviation probabilities. In the case when the distribution of ξ has a density of the form e^{−λ_+ t} t^{γ−1} L(t), L(t) being an s.v.f. at infinity, γ > 0, a number of results on the large deviations of Sn (including (6.1.15)) were obtained in [241]. Some assertions close to the theorems and corollaries of §§ 6.2 and 6.3 were obtained for narrower deviation zones in [27] (Lemmata 2 and 3).
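A small sketch of the deviation function mentioned above (our own illustration; the grid-based Legendre transform below is an assumption of this sketch, not the book's construction): Λ(θ) = sup_λ [λθ − ln Ee^{λξ}], which for ξ ~ N(0, 1) equals θ²/2.

```python
# Sketch (our own illustration): the deviation (rate) function is the Legendre
# transform Lambda(theta) = sup_lambda [lambda * theta - ln E e^{lambda * xi}].
# For xi ~ N(0, 1) the cumulant function is lambda**2 / 2, so Lambda(theta)
# should come out as theta**2 / 2.
def cumulant(lam):
    return lam * lam / 2.0   # ln E e^{lam * xi} for a standard normal xi

def deviation_fn(theta):
    # crude sup over a lambda grid; the maximiser here is lambda = theta
    grid = [i / 1000.0 for i in range(-5000, 5001)]
    return max(lam * theta - cumulant(lam) for lam in grid)

for theta in (0.5, 1.0, 2.0):
    assert abs(deviation_fn(theta) - theta * theta / 2.0) < 1e-3
print(deviation_fn(1.0))  # approximately 0.5
```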
A detailed investigation of the 'lower sub-zone' in the boundary case θ ≡ x/n = θ+ is presented in [198], where the values of x up to which the 'classical' exact asymptotic expansions for P(Sn ≥ x) remain correct were found. The paper also contains an integral analogue of Theorem 6.2.1. The multivariate case was considered in [284] in a very special situation. Some assertions of Theorems 6.2.1 and 6.3.1 can be obtained from [259]. The main results of § 6.5 were obtained in [64].
Chapter 7 Corollaries 1 and 2 of [272] contain conditions sufficient for (7.3.2) that are close to the conditions of Theorems 7.1.1 and 7.2.1. The assertion of Theorem 7.4.1 was obtained in [126] (see ibid. for a more complete bibliography on related results). In the special case when τ is the number of the first positive sum, it was established in [8]. A necessary and sufficient condition for (7.4.6) in the case F ∈ R was obtained in [135]. This condition has the form

\lim_{n\to\infty} \limsup_{t\to\infty} \bigl[P(S_\tau \ge t,\ \tau > n) - P(S_n \ge t,\ \tau > n)\bigr]/P(\xi \ge t) = 0.
The assertion of Theorem 7.4.4 was obtained in [136]. For upper-power distributions, the asymptotics of P(S ≥ x) from the assertion of Theorem 7.5.1 was obtained in [39, 42]. For the class of subexponential distributions, Theorem 7.5.1 was established in [275] (see also [213, 282]; special cases were considered in [278, 269]). That the condition F_+^I ∈ S is necessary for the asymptotics (7.5.1) was proved in [177]; in the special case when ξ = ζ − τ, where the r.v.'s τ, ζ ≥ 0 are independent, τ has an exponential distribution (which corresponds to M/G/1 queueing systems), this result was obtained in [213, 115]. The case Eξ = −∞ was studied in [103]. Assertions close to Theorem 7.5.5 and Corollary 7.5.6 were obtained in [26, 11]. The condition used in [11] to ensure that (7.5.13), (7.5.14) hold true is that F+ belongs to the distribution class S*, which is characterized by the property (7.4.1). The assertion of Theorem 7.5.8 in the case F ∈ R, under additional moment conditions and assumptions concerning the smoothness of F+, was established in Corollary 3 of [63]. For queueing systems of the type M/G/1 (see above), under the assumption that ζ has a heavy-tailed density of a special form, such a refinement for the distribution of S (or for the limiting distribution for the waiting time in M/G/1) was obtained in [267]. See also [18, 19, 20]. Theorem 7.5.11 is a slight modification (and simplification) of Theorem 3 of [63] in the case G ∈ R and of Theorem 2.1 of [52] in the case G ∈ Se.
Chapter 8 A complete asymptotic analysis (including the derivation of asymptotic expansions) of P(η±(x) = n) for all x and n → ∞ for bounded lattice-valued ξ was given in [33, 34]. For an extension of the class R, the asymptotics P(θ = n) ∼ cV(−an), and hence the asymptotics of P(η− > n) as well, was found in Chapter 4 of [42]. Comprehensive results on the asymptotics of P(η−(x) > n) and P(n < η+(x) < ∞) in the case of a fixed x ≥ 0 for the classes R and C were obtained in [116, 32, 104, 27, 193]. Necessary and sufficient conditions for the finiteness of Eη_−^γ, γ > 0, and some relevant problems were studied in [143, 154, 138, 160]. Unimprovable bounds for P(η− > n) were given in § 43 of [50]. Note also that the local theorems 2.1 and 3.1 of [54] in the lattice case were established earlier, in [105]. For a proof of Theorem 8.2.16 and bibliographic comments on it, see [32], pp. 381–2; that conditions (8.2.45) and (8.2.46) are equivalent was established in [106]. The assertion of Theorem 8.2.17(ii) was established in [39].
Theorem 8.2.18 was established in [116, 32, 104, 27, 193]. In the case when Eξ = 0, Eξ² < ∞ and the distribution F is arithmetic, local limit theorems for the asymptotics of P(η±(x) = n) as n → ∞ were obtained in [3]. In the non-lattice case, it was shown in [193] that (8.2.54) holds true under the additional condition that E|ξ|³ < ∞. A more general result claiming that Eξ² < ∞ would suffice for (8.2.54) was published in [117]. However, as communicated to us by A.A. Mogulskii, this result is incorrect as its conditions contain no restrictions on the structure of F (a paper by A.A. Mogulskii with a correct version of this theorem was submitted to Siberian Adv. Math.). In the case of bounded lattice-valued r.v.'s ξ, asymptotic expansions for P(η±(x) = n) were derived in [33, 34]. For distributions F ∈ C, a comprehensive study of the asymptotics of the probabilities P(η+(x) > n) = P(S̄n < x) for arbitrary a = Eξ, under the assumption that F has an absolutely continuous component, was made in [33, 34, 37]. The assertion of Theorem 8.3.1 follows from the results of [37], while that of Theorem 8.3.2 follows from bounds in [45, 47]. The assertion of Theorem 8.3.3 follows immediately from the convergence rate estimate obtained in [203] (see also [204, 202]). An asymptotic expansion for P(S̄n ≥ x) as x → ∞ in the cases when Eξ = 0, E|ξ|^s < ∞, s > 3, and the distribution of ξ is either lattice or satisfies the Cramér condition lim sup_{|λ|→∞} |f(λ)| < 1, was studied in [204, 37, 175, 176]. The exposition in Chapter 8 is close to [58]. A survey of results close to those presented in Chapter 8 is contained in [193].
Chapter 9 Necessary and sufficient conditions for partial sums Sn of i.i.d. random vectors in R^d to converge to a non-Gaussian stable distribution were found in [243]; further references, as well as a detailed discussion of the problem on the convergence of Sn under operator scaling, can be found in [189]. The concept of regular variation in the multivariate setup was discussed in [23]. In the case when the Cramér condition Ee^{(λ,ξ)} < C < ∞ holds in a neighbourhood of a point λ0 ≠ 0, a comprehensive study of the asymptotic behaviour of P(Sn ∈ Δ[x)) as (λ0, x) → ∞ was presented in [69, 70]. In the special case when F has a bounded density f(x), which admits a representation of the form (9.1.9), the large deviation problem was considered in [283, 200]; see also [199]. In [283], a local limit theorem for the density f_n(x) of Sn was established for large deviations along 'non-singular directions'. This result was complemented in [200] by an analysis of the asymptotics of f_n(x) along 'singular directions' for an even narrower distribution class (in particular, it was assumed that there are only finitely many such directions, and that the density f(x) decays along such directions as a power function of order −β ≤ −d, β > α). The main result of [200] shows that the principal contribution to the probabilities of large deviations along singular directions comes not from trajectories with a single large jump (which is the case in the univariate problem and also for non-singular directions when d > 1) but from those with two large jumps. The case when ξ follows a lattice distribution (with probabilities regularly varying at infinity) was studied in [285]. In a more general case, when only conditions (9.1.7) (with α > 0) and (9.1.8) are met, an integral-type large deviation theorem, describing the behaviour of the probabilities P(Sn ∈ xA) when the set A ⊂ R^d is bounded away from 0 and has a 'regular' boundary, was obtained in [151] together with its functional version.
The main results presented in Chapter 9 were obtained in [54].
Chapter 10 An assertion close to Theorem 10.2.3 was obtained in [225] in the case when ξ has a density regularly varying at infinity. In this case, conditions [A1] and [A2] could be somewhat relaxed: there it is assumed that the sets A(t) and Af(t) are unions of finite collections of intervals. The same paper [225] contains a 'transient' assertion for deviations x of order √n, where an approximation for P(Gn) includes the term P(w(·) ∈ xA), where w(·) is the standard Wiener process. An assertion close to Theorem 10.2.6 was obtained in [131] under more stringent conditions on F+(t) and somewhat relaxed versions of conditions [A1] and [A2]. In [150] a theorem was established, which, in the case of Markov processes in R^d with weakly dependent increments and distributions regularly varying at infinity, enables one to make a transition from the large deviation results for one-dimensional distributions of the process to those for the trajectories of the process. This result was used in [151] to obtain a functional version of the large deviation theorem in the case when one has (9.1.7) (for an α > 0) and (9.1.8).
Chapter 11 The problem on the asymptotics of P(1, n, an) as n → ∞, when ξ ≤ a < ∞, τ ≥ 0 and the distribution of τ is close to semiexponential, was studied in [13, 124] (see ibid. for motivations and applications to queueing theory).
Chapter 12 In [127] upper bounds for the distribution of Sn in terms of 'truncated' moments were obtained without any assumptions on the existence of regularly varying majorants (such majorants always exist in the case of finite moments, but their decay rate will not be the true one). This makes bounds from [127] in a sense more general than those presented in § 12.1 but also substantially more cumbersome. One cannot derive from them bounds for Sn of the form (12.1.11), (12.1.12). In the case Eξ² → d < ∞, the limiting relation (12.5.1) in the problem on transient phenomena was obtained in [165, 231] (see also [42]). Only some partial results for some special distributions of ξ were known in the case Eξ² = ∞ (see [93, 78, 94]). Under condition [UR] with h(n) = n, the relation P(Sn ≥ x) ∼ nV(x) for x > cn was obtained in [218]. The main results presented in Chapter 12 were obtained in [59, 61].
Chapter 13 In [127] upper bounds for the distribution of Sn in terms of 'truncated' moments were obtained without any assumptions on the existence of regularly varying majorants (such majorants always exist in the case of finite moments, but their decay rate will not be the true one). This makes bounds from [127] more general in a sense, but also substantially more cumbersome. One cannot derive from them bounds for Sn of the form (13.1.11), (13.1.12). Inequalities for P(|Sn| ≥ x) close to our lower bounds for P(Sn ≥ x) were obtained in [208]. The crude inequality

P(S_n \ge x) \ge \frac{1}{2}\sum_{j=1}^{n} F_{j,+}(2x) \quad\text{for } x \ge 2\sqrt{D_n}

was obtained in [206]. Some bounds for the distribution of Sn were also established in [240, 226]. Under condition [UR] with H(n) = n, the relation P(Sn ≥ x) ∼ nV(x) for x > cn was obtained in [218]. The main result (13.3.2) for transient phenomena in the i.i.d. case with Eξ² → d < ∞ was established in [165, 231] (see also § 25, Chapter 4 of [42]).
The main results presented in Chapter 13 were obtained in [60].
Chapter 14 In the case when ξj = f(Xj), where f is a given function on X and {Xk} forms a Harris Markov chain (having a positive recurrent state z0), the asymptotics of P(Sn ≥ x) in terms of regularly varying distribution tails of the sums of the quantities f(Xk) taken over a cycle (formed by consecutive visits to a fixed positive recurrent atom) were studied in [187]. The asymptotics of P(S(a) ≥ x) in the case when the Sn are partial sums of a sequence of moving averages of i.i.d. r.v.'s satisfying conditions of a subexponential type were established in [179]. The asymptotics of the distribution of the maximum of a random walk, defined on a finite Markov chain, were studied in [6]. An analogue of Theorem 7.5.1 for processes with 'modulated increments' (i.e. processes the distributions of whose increments depend on the value of an unobservable regenerative process), both in discrete and in continuous time, was obtained in [125, 123]. See also [155, 14, 4, 139]. For sequences {ξj} of r.v.'s satisfying a mixing condition and such that F ∈ R, results on the large deviations of Sn were obtained in [99]; for autoregressive processes with random coefficients (i.e. when ξn = An ξn−1 + Yn, where the sequences of positive i.i.d. r.v.'s {An}, {Yn} are independent and Yn ⊂= FY ∈ R), results on the large deviations of Sn were obtained in [173, 263].
Chapter 15 The properties of processes with independent increments are described in the monographs [256, 245, 25, 246, 273]. Theorem 15.1.1 (including the converse assertion) was proved in [112] (see also Theorem A3.22 of [113] and § 8.2.7 of [32]; more precisely, it was shown in [112] that the following assertions are equivalent: (1) F ∈ S, (2) G[1] ∈ S, (3) F+(t) ∼ G[1],+(t) as t → ∞). A similar assertion for the class R was obtained in [110]. The asymptotics of the tails of subadditive functionals of processes with independent increments in the case when G[1] ∈ R were studied in [79]. The asymptotics of P(S(∞) ≥ x) and some other characteristics of a process with independent increments in the case when the spectral measure has a subexponential right tail was studied in [168].
Chapter 16 The assertion of Theorem 16.1.2, I was proved in [115]. The first assertion of Theorem 16.1.3 follows from Theorem 12 in Chapter 4 of [42], where it was established in a more general case (later on, it was shown in [275, 115] that a sufficient condition for this assertion is that F_+^I ∈ S; that this is a necessary condition as well was proved in [177], see also the remarks on Theorem 7.5.1 above), while the second assertion follows from the main theorem of [178] (where it was proved for strongly subexponential distributions). An analogue of the uniform representation (4.1.2) for generalized renewal processes, under additional moment assumptions and the condition that the distribution Fτ has a bounded density, was presented in [188]. In the case q = 0 and under the assumptions that the distribution tail of the r.v. ξ ≥ 0 is of 'extended regular variation' (see § 4.8) and that the process {ν(t)} satisfies condition (16.1.6) for some ε > 0 and c > 0, the relation (16.1.5) for any fixed δ > 0 was established in [169] (the process {ν(t)} was assumed there to be of a more general form than a renewal process). This latter paper also contains sufficient
conditions for (16.1.6) to hold for a renewal process: Eτ² < ∞ and Fτ(t) ≥ 1 − e^{−bt}, t > 0, for some b > 0 (Lemma 2.3 of [169]). In [262] it was shown that condition (16.1.6) is always met for renewal processes with aτ < ∞, without any additional assumptions regarding Fτ. Note that the last fact follows also from the inequality (16.2.13). The main results presented in Chapter 16 were obtained in [65].
References
[1]
[2]
[3] [4] [5] [6] [7] [8]
[9] [10]
[11]
[12] [13] [14] [15] [16] [17]
Adler, R., Feldman, R. and Taqqu, R.S., eds. A Practical Guide to Heavy Tails: Statistical Techniques for Analysing Heavy-tailed Distributions (Birkh¨auser, Boston, 1998). Aleˇskjaviˇcene, A.K. On the probabilities of large deviations for the maximum of sums of independent random variables. I, II. Theory Probab. Appl., 24 (1979), 16– 33, 322–337. Alili, L. and Doney, R.A. Wiener-Hopf factorization revisited and some applications. Stochastics and Stoch. Reports., 66 (1999), 87–102. Alsmeyer, G. and Sbignev, M. On the tail behaviour of the supremum of a random walk defined on a Markov chain. Yokohama Math. J., 46 (1999), 139–159. Araujo, A. and Gin´e, E. The Central Limit theorem for Real and Banach Valued Random Variables (Wiley, New York, 1980). Arndt, K. Asymptotic properties of the distribution of the supremum of a random walk on a Markov chain. Theory Probab. Appl., 25 (1980), 309–324. Arov, D.Z. and Bobrov, A.A. The extreme members of a sample and their role in the sum of independent variables. Theory Probab. Appl., 5 (1960), 377–396. Asmussen, S. Subexponential asymptotics for stochastic processes: extremal behaviour, stationary distributions and first passage probabilities. Ann. Appl. Probab., 8 (1998), 354–374. Asmussen, S. Ruin Probabilities (World Scientific, Singapore, 2000). Asmussen, S., Foss, S. and Korshunov, D. Asymptotics for sums of random variables with local subexponential behaviour. J. Theoretical Probab., 16 (2003), 489– 518. Asmussen, S., Kalashnikov, V., Konstantinides, D., Kl¨uppelberg, C. and Tsitsiashvili, G. A local limit theorem for random walk maxima with heavy tails. Statist. Probab. Letters, 56 (2002), 399–404. Asmussen, S. and Kl¨uppelberg, C. Large deviations results for subexponential tails, with applications to insurance risk. Stoch. Proc. Appl., 64 (1996), 103–125. Asmussen, S., Kl¨uppelberg, C. and Sigman, K. Sampling of subexponential times with queueing applications. Stoch. Proc. Appl., 79 (1999), 265–286. Asmussen, S. 
and Møller, J.R. Tail asymptotics for M/G/1 type queueing processes with subexponential increments. Queueing Systems, 33 (1999), 153–176. Athreya, K.B. and Ney, P.E. Branching Processes (Springer, Berlin, 1972). Bahadur, R. and Ranga Rao, R. On deviations of the sample mean. Ann. Math. Statist., 31 (1960), 1015–1027. Baltrunas, A. On the asymptotics of one-sided large deviation probabilities. Lithua-
611
612 [18] [19] [20] [21] [22]
[23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] [36]
[37]
[38] [39] [40] [41]
References nian Math. J., 35 (1995), 11–17. Baltrunas, A. Second-order asymptotics for the ruin probability in the case of very large claims. Siberian Math. J., 40 (1999), 1226–1235. Baltrunas, A. Second order behaviour of ruin probabilities. Scandinavian Actuar. J., (1999), 120–133. Baltrunas, A. The rate of convergence in the precise large deviation theorem. Probab. Math. Statist., 22 (2002), 343–354. Baltrunas, A. Second-order tail behavior of the busy period distribution of certain GI/G/1 queues. Lithuanian Math. J., 42 (2002), 243–254. Baltrunas, A., Daley, D.J. and Kl¨uppelberg, C. Tail behaviour of the busy period of a GI/GI/1 queue with subexponential service times. Stochastic Process. Appl., 11 (2004), 237–258. Basrak, B., Davis, R.A. and Mikosch, T. A characterization of multivariate regular variation. Ann. Appl. Probab., 12 (2002), 908–920. Bentkus, V., Bloznelis, M. Nonuniform estimate of the rate of convergence in the CLT with stable limit distribution. Lithuanian Math. J., 29 (1989), 8–17. Bertoin, J. L´evy Processes (Cambridge University Press, Cambridge, 1996). Bertoin, J. and Doney, R.A. On the local behaviour of ladder height distributions. J. Appl. Probab., 31 (1994), 816–821. Bertoin, J. and Doney, R.A. Some asymptotic results for transient random walks. Adv. Appl. Probab., 28 (1996), 207–226. Billingsley, P. Convergence of Probability Measures (Wiley, New York, 1968). Bingham, N.H. Maxima of sums of random variables and suprema of stable processes. Z. Wahrscheinlichkeitstheorie und verw. Geb., 26 (1973), 273–296. Bingham, N.H. Limit theorems in fluctuation theory. Adv. Appl. Probab., 5 (1973), 554–569. Bingham, N.H. Variants of the law of the iterated logarithm. Bull. London Math. Soc., 18 (1986), 433–467. Bingham, N.H., Goldie, C.M. and Teugels, J.L. Regular Variation (Cambridge University Press, Cambridge, 1987). Borovkov, A.A. Limit theorems on the distributions of maxima of sums of bounded lattice random variables. I. Theory Probab. 
Appl., 5 (1960), 125–155. Borovkov, A.A. Limit theorems on the distributions of maxima of sums of bounded lattice random variables. II. Theory Probab. Appl., 5 (1960), 341–355. Borovkov, A.A. Remarks on Wiener’s and Blackwell’s theorems. Theory Probab. Appl., 9 (1964), 303–312. Borovkov, A.A. Analysis of large deviations in boundary-value problems with arbitrary boundaries. I, II. Siberian Math. J., 5 (1964), 253–289, 750–767. (In Russian.) Borovkov, A.A. New limit theorems in boundary problems for sums of independent random variables. Select Transl. Math. Stat. Probab., 5 (1965), 315–372. (Original publication in Russian: Siberian Math. J., 3 (1962), 645–694.) Borovkov, A.A. Boundary-value problems for random walks and large deviations in function spaces. Theory Probab. Appl., 12 (1967), 575–595. Borovkov, A.A. Factorization identities and properties of the distribution of the supremum of sequential sums. Theory Probab. Appl., 15 (1970), 359–402. Borovkov, A.A. Notes on inequalities for sums of independent variables. Theory Probab. Appl., 17 (1972), 556–557. Borovkov, A.A. The convergence of distributions of functionals of stochastic processes. Russian Math. Surveys, 271(1) (1972), 1–42.
Borovkov, A.A. Stochastic Processes in Queueing Theory (Springer, New York, 1976).
Borovkov, A.A. Convergence of measures and random processes. Russian Math. Surveys, 31(2) (1976), 1–69.
Borovkov, A.A. Boundary problems, the invariance principle and large deviations. Russian Math. Surveys, 38(4) (1983), 259–290.
Borovkov, A.A. On the Cramér transform, large deviations in boundary value problems, and the conditional invariance principle. Siberian Math. J., 36 (1995), 417–434.
Borovkov, A.A. Unimprovable exponential bounds for distributions of sums of a random number of random variables. Theory Probab. Appl., 40 (1995), 230–237.
Borovkov, A.A. On the limit conditional distributions connected with large deviations. Siberian Math. J., 37 (1996), 635–646.
Borovkov, A.A. Limit theorems for time and place of the first boundary passage by a multidimensional random walk. Dokl. Math., 55 (1997), 254–256.
Borovkov, A.A. Probability Theory (Gordon and Breach, Amsterdam, 1998).
Borovkov, A.A. Ergodicity and Stability of Stochastic Processes (Wiley, Chichester, 1998).
Borovkov, A.A. Estimates for the distribution of sums and maxima of sums of random variables when the Cramér condition is not satisfied. Siberian Math. J., 41 (2000), 811–848.
Borovkov, A.A. Probabilities of large deviations for random walks with semiexponential distributions. Siberian Math. J., 41 (2000), 1061–1093.
Borovkov, A.A. Large deviations of sums of random variables of two types. Siberian Adv. Math., 4 (2002), 1–24.
Borovkov, A.A. Integro-local and integral limit theorems on the large deviations of sums of random vectors: regular distributions. Siberian Math. J., 43 (2002), 402–417.
Borovkov, A.A. On subexponential distributions and asymptotics of the distribution of the maximum of sequential sums. Siberian Math. J., 43 (2002), 995–1022, 1253–1264.
Borovkov, A.A. Asymptotics of crossing probability of a boundary by the trajectory of a Markov chain. Heavy tails of jumps. Theory Probab. Appl., 47 (2003), 584–608.
Borovkov, A.A. Large deviations probabilities for random walks in the absence of finite expectations of jumps. Probab. Theory Relat. Fields, 125 (2003), 421–446.
Borovkov, A.A. On the asymptotic behavior of the distributions of first-passage times. I, II. Math. Notes, 75 (2004), 23–37, 322–330.
Borovkov, A.A. Large deviations for random walks with nonidentically distributed jumps having infinite variance. Siberian Math. J., 46 (2005), 35–55.
Borovkov, A.A. Asymptotic analysis for random walks with nonidentically distributed jumps having finite variance. Siberian Math. J., 46 (2005), 1020–1038.
Borovkov, A.A. Transient phenomena for random walks with nonidentically distributed jumps with infinite variances. Theory Probab. Appl., 50 (2005), 199–213.
Borovkov, A.A. and Borovkov, K.A. Probabilities of large deviations for random walks with a regular distribution of jumps. Dokl. Math., 61 (2000), 162–164.
Borovkov, A.A. and Borovkov, K.A. On probabilities of large deviations for random walks. I. Regularly varying distribution tails. Theory Probab. Appl., 46 (2001), 193–213.
Borovkov, A.A. and Borovkov, K.A. On probabilities of large deviations for random walks. II. Regular exponentially decaying distributions. Theory Probab. Appl., 49 (2005), 189–206.
Borovkov, A.A. and Borovkov, K.A. Large deviation probabilities for generalized renewal processes with regularly varying jump distributions. Siberian Adv. Math., 15 (2006), 1–65.
Borovkov, A.A. and Boxma, O.J. Large deviation probabilities for random walks with heavy tails. Siberian Adv. Math., 13 (2003), 1–31.
Borovkov, A.A. and Mogulskii, A.A. Large Deviations and the Testing of Statistical Hypotheses (Proceedings of the Institute of Mathematics, vol. 19. Nauka, Novosibirsk, 1992).
Borovkov, A.A. and Mogulskii, A.A. The second rate function and the asymptotic problems of renewal and hitting the boundary for multidimensional random walks. Siberian Math. J., 37 (1996), 647–682.
Borovkov, A.A. and Mogulskii, A.A. Integro-local limit theorems including large deviations for sums of random vectors. I. Theory Probab. Appl., 43 (1999), 1–12.
Borovkov, A.A. and Mogulskii, A.A. Integro-local limit theorems including large deviations for sums of random vectors. II. Theory Probab. Appl., 45 (2001), 3–22.
Borovkov, A.A. and Mogulskii, A.A. Limit theorems in the boundary hitting problem for a multidimensional random walk. Siberian Math. J., 42 (2001), 245–270.
Borovkov, A.A. and Mogulskii, A.A. Integro-local theorems for sums of independent random vectors in the series scheme. Math. Notes, 79 (2006), 468–482.
Borovkov, A.A. and Mogulskii, A.A. Integro-local and integral theorems for sums of random variables with semiexponential distributions. Siberian Math. J., 47 (2006), 990–1026.
Borovkov, A.A., Mogulskii, A.A. and Sakhanenko, A.I. Limit Theorems for Random Processes (Current Problems in Mathematics. Fundamental Directions 82. Vsesoyuz. Inst. Nauchn. i Tekhn. Inform. (VINITI), Moscow, 1995). (In Russian.)
Borovkov, A.A. and Pečerski, E.A. Weak convergence of measures and random processes. Z. Wahrscheinlichkeitstheorie verw. Geb., 28 (1973), 5–22.
Borovkov, A.A. and Utev, S.A. Estimates for distributions of sums stopped at Markov time. Theory Probab. Appl., 38 (1993), 214–225.
Borovkov, K.A. A note on differentiable mappings. Ann. Probab., 13 (1985), 1018–1021.
Boxma, O.J. and Cohen, J.W. Heavy-traffic analysis for the GI/G/1 queue with heavy-tailed distributions. Queueing Systems, 33 (1999), 177–204.
Braverman, M., Mikosch, T. and Samorodnitsky, G. Tail probabilities of subadditive functionals of Lévy processes. Ann. Appl. Probab., 12 (2002), 69–100.
Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on the sums of observations. Ann. Math. Statist., 23 (1952), 493–507.
Chistyakov, V.P. A theorem on sums of independent positive random variables and its application to branching random processes. Theory Probab. Appl., 9 (1964), 640–648.
Chover, J. A law of the iterated logarithm for stable summands. Proc. Amer. Math. Soc., 17 (1966), 441–443.
Chover, J., Ney, P.E. and Wainger, S. Functions of probability measures. J. Analyse Math., 26 (1973), 255–302.
Chover, J., Ney, P.E. and Wainger, S. Degeneracy properties of subcritical branching processes. Ann. Probab., 1 (1973), 663–673.
Chow, Y.S. and Lai, T.L. Some one-sided theorems on the tail distribution of sample sums with applications to the last time and largest excess of boundary crossing. Trans. Amer. Math. Soc., 120 (1975), 108–123.
Christoph, G. and Wolf, W. Convergence Theorems with a Stable Limit Law (Akademie-Verlag, Berlin, 1992).
Cline, D.B.H. Convolution tails, product tails and domains of attraction. Probab. Theory Relat. Fields, 72 (1986), 529–557.
Cline, D.B.H. Convolution of distributions with exponential and subexponential tails. J. Austral. Math. Soc. Ser. A, 43 (1987), 347–365.
Cline, D.B.H. Intermediate regular and Π variation. Proc. London Math. Soc., 68 (1994), 594–616.
Cline, D.B.H. and Hsing, T. Large deviations probabilities for sums and maxima of random variables with heavy or subexponential tails. Texas A&M University preprint (1991).
Cline, D.B.H. and Samorodnitsky, G. Subexponentiality of the product of independent random variables. Stoch. Proc. Appl., 49 (1994), 75–98.
Cohen, J.W. Some results on regular variation for distributions in queueing and fluctuation theory. J. Appl. Probab., 10 (1973), 343–353.
Cohen, J.W. A heavy-traffic theorem for the GI/G/1 queue with Pareto-type service time distributions. J. Appl. Math. Stochastic Anal., 11 (1998), 247–254.
Cohen, J.W. Random walk with a heavy-tailed jump distribution. Queueing Systems, 40 (2002), 35–73.
Cramér, H. Sur un nouveau théorème-limite de la théorie des probabilités. Actualités Sci. Indust., 736 (1938), 5–23.
Cramér, H. On asymptotic expansions for sums of independent random variables with a limiting stable distribution. Sankhya A, 25 (1963), 12–24.
Darling, D.A. The influence of the maximum term in the addition of independent random variables. Trans. Amer. Math. Soc., 73 (1952), 95–107.
Davis, R.A. Stable limits for partial sums of dependent random variables. Ann. Probab., 11 (1983), 262–269.
Davis, R.A. and Hsing, T. Point processes and partial sum convergence for weakly dependent random variables with infinite variance. Ann. Probab., 23 (1995), 879–917.
Davydov, Yu.A. and Nagaev, A.V. On the role played by extreme summands when a sum of independent and identically distributed random vectors is asymptotically α-stable. J. Appl. Probab., 41 (2004), 437–454.
Dembo, A. and Zeitouni, O. Large Deviations Techniques and Applications (Jones and Bartlett, London, 1993).
Denisov, D.F. and Foss, S.G. On transience conditions for Markov chains and random walks. Siberian Math. J., 44 (2003), 44–57.
Denisov, D., Foss, S. and Korshunov, D. Tail asymptotics for the supremum of a random walk when the mean is not finite. Queueing Systems, 46 (2004), 15–33.
Doney, R.A. On the asymptotic behaviour of first passage times for a transient random walk. Probab. Theory Relat. Fields, 81 (1989), 239–246.
Doney, R.A. A large deviation local limit theorem. Math. Proc. Cambridge Phil. Soc., 105 (1989), 575–577.
Doney, R.A. Spitzer's condition and ladder variables in random walks. Probab. Theory Relat. Fields, 101 (1995), 577–580.
Donsker, M.D. An invariance principle for certain probability limit theorems. Mem. Amer. Math. Soc., 6 (1951), 1–12.
Durrett, R. Conditional limit theorems for random walks with negative drift. Z. Wahrscheinlichkeitstheorie verw. Geb., 52 (1980), 277–287.
Embrechts, P. and Goldie, C.M. On closure and factorization theorems for subexponential and related distributions. J. Austral. Math. Soc. Ser. A, 29 (1980), 243–256.
Embrechts, P. and Goldie, C.M. Comparing the tail of an infinitely divisible distribution with integrals of its Lévy measure. Ann. Probab., 9 (1981), 468–481.
Embrechts, P. and Goldie, C.M. On convolution tails. Stoch. Proc. Appl., 13 (1982), 263–278.
Embrechts, P., Goldie, C.M. and Veraverbeke, N. Subexponentiality and infinite divisibility. Z. Wahrscheinlichkeitstheorie verw. Geb., 49 (1979), 335–347.
Embrechts, P., Klüppelberg, C. and Mikosch, T. Modelling Extremal Events (Springer, New York, 1997).
Embrechts, P. and Omey, E. A property of long-tailed distributions. J. Appl. Prob., 21 (1982), 80–87.
Embrechts, P. and Veraverbeke, N. Estimates for the probability of ruin with special emphasis on the possibility of large claims. Insurance: Math. Econom., 1 (1982), 55–72.
Emery, D.J. Limiting behaviour of the distribution of the maxima of partial sums of certain random walks. J. Appl. Probab., 9 (1972), 572–579.
Eppel, N.S. A local limit theorem for first passage time. Siberian Math. J., 20(1) (1979), 130–138.
Erickson, K.B. Strong renewal theorems with infinite mean. Trans. Amer. Math. Soc., 151 (1970), 263–291.
Erickson, K.B. The strong law of large numbers when the mean is undefined. Trans. Amer. Math. Soc., 185 (1973), 371–381.
Feller, W. On regular variation and local limit theorems, in Proc. Fifth Berkeley Symp. Math. Stat. Prob. II(1), ed. Neyman, J. (University of California Press, Berkeley, 1967), pp. 373–388.
Feller, W. An Introduction to Probability Theory and its Applications I, 3rd edn (Wiley, New York, 1968).
Feller, W. An Introduction to Probability Theory and its Applications II, 2nd edn (Wiley, New York, 1971).
Foss, S., Konstantopoulos, T. and Zachary, S. The principle of a single big jump: discrete and continuous time modulated random walks with heavy-tailed increments. arxiv.org/pdf/math.PR/0509605 (2005).
Foss, S. and Korshunov, D. Sampling at a random time with a heavy-tailed distribution. Markov Proc. Related Fields, 6 (2000), 543–568.
Foss, S.G. and Zachary, S. Asymptotics for the maximum of a modulated random walk with heavy-tailed increments, in Analytic Methods in Applied Probability. Amer. Math. Soc. Transl. Ser. 2, 207 (2002), 37–52.
Foss, S.G. and Zachary, S. The maximum on a random time interval of a random walk with long-tailed increments and negative drift. Ann. Appl. Probab., 13 (2003), 37–53.
Fuk, D.H. and Nagaev, S.V. Probability inequalities for sums of independent random variables. Theory Probab. Appl., 16 (1971), 643–660. (Also, ibid., 21 (1976), 896.)
Garsia, A. and Lamperti, J. A discrete renewal theorem with infinite mean. Comment. Math. Helv., 36 (1962), 221–234.
Gikhman, I.I. and Skorokhod, A.V. Introduction to the Theory of Random Processes (Dover, Mineola, 1996). (Translated from the 1965 Russian original.)
Gnedenko, B.V. and Kolmogorov, A.N. Limit Distributions for Sums of Independent Random Variables (Addison-Wesley, Reading, 1954). (Translated from the 1949 Russian original.)
Godovanchuk, V.V. Probabilities of large deviations for sums of independent random variables attracted to a stable law. Theory Probab. Appl., 23 (1978), 602–608.
Goldie, C.M. Subexponential distributions and dominated-variation tails. J. Appl. Prob., 15 (1978), 440–442.
Goldie, C.M. and Klüppelberg, C. Subexponential distributions, in A Practical Guide to Heavy Tails: Statistical Techniques for Analysing Heavy-tailed Distributions, ed. Adler, R. et al. (Birkhäuser, Boston, 1998), pp. 435–454.
Gradshtein, I.S. and Ryzhik, I.M. Table of Integrals, Series, and Products (Academic Press, New York, 1980).
Greenwood, P. Asymptotics of randomly stopped sequences with independent increments. Ann. Probab., 1 (1973), 317–321.
Greenwood, P. and Monroe, I. Random stopping preserves regular variation of process distributions. Ann. Probab., 5 (1977), 42–51.
Grübel, R. Asymptotic analysis in probability theory using Banach-algebra techniques. Habilitationsschrift, Universität Essen (1984).
Gut, A. Stopped Random Walks (Springer, Berlin, 1988).
Hansen, N.R. and Jensen, A.T. The extremal behaviour over regenerative cycles for Markov additive processes with heavy tails. Stoch. Proc. Appl., 115 (2005), 579–591.
Hartman, P. and Wintner, A. On the law of the iterated logarithm. Amer. J. Math., 63 (1941), 169–176.
Heyde, C.C. A contribution to the theory of large deviations for sums of independent random variables. Z. Wahrscheinlichkeitstheorie verw. Geb., 7 (1967), 303–308.
Heyde, C.C. A limit theorem for random walks with drift. J. Appl. Probab., 4 (1967), 144–150.
Heyde, C.C. Asymptotic renewal results for a natural generalization of classical renewal theory. J. Roy. Statist. Soc. Ser. B, 29 (1967), 141–150.
Heyde, C.C. On large deviation problems for sums of random variables which are not attracted to the normal law. Ann. Math. Statist., 38 (1967), 1575–1578.
Heyde, C.C. On large deviation probabilities in the case of attraction to a nonnormal stable law. Sankhya A, 30 (1968), 253–258.
Heyde, C.C. On the maximum of sums of random variables and the supremum functional for stable processes. J. Appl. Probab., 6 (1969), 419–429.
Heyde, C.C. A note concerning behaviour of iterated logarithm type. Proc. Amer. Math. Soc., 23 (1969), 85–90.
Heyman, D.P. and Lakshman, T.V. Source models for VBR broadcast-video traffic. IEEE/ACM Trans. Netw., 4 (1996), 40–48.
Höglund, T. A unified formulation of the central limit theorem for small and large deviations from the mean. Z. Wahrscheinlichkeitstheorie verw. Geb., 49 (1979), 105–117.
Hult, H. and Lindskog, F. Extremal behaviour for regularly varying stochastic processes. Stoch. Proc. Appl., 115 (2005), 249–274.
Hult, H., Lindskog, F., Mikosch, T. and Samorodnitsky, G. Functional large deviations for multivariate regularly varying random walks. Ann. Appl. Probab., 15 (2005), 2651–2680.
Ibragimov, I.A. and Linnik, Yu.V. Independent and Stationary Sequences of Random Variables (Wolters-Noordhoff, Groningen, 1971). (Translated from the 1965 Russian original.)
Janicki, A. and Weron, A. Simulation and Chaotic Behavior of α-stable Stochastic Processes (Marcel Dekker, New York, 1994).
Janson, S. Moments for first-passage and last-exit times, the minimum, and related quantities for random walks with positive drift. Adv. Appl. Probab., 18 (1986), 865–879.
Jelenković, P.R. and Lazar, A.A. Subexponential asymptotics of a Markov-modulated random walk with queueing applications. J. Appl. Probab., 35 (1998), 325–347.
Jelenković, P.R. and Lazar, A.A. Asymptotic results for multiplexing subexponential on-off processes. Adv. Appl. Probab., 31 (1999), 394–421.
Jelenković, P.R., Lazar, A.A. and Semret, N. The effect of multiple time scales and subexponentiality of MPEG video streams on queueing behavior. IEEE J. Sel. Areas Commun., 15 (1997), 1052–1071.
Karamata, J. Sur un mode de croissance régulière des fonctions. Mathematica (Cluj), 4 (1930), 38–53.
Karamata, J. Sur un mode de croissance régulière. Théorèmes fondamentaux. Bull. Soc. Math. France, 61 (1933), 55–62.
Kesten, H. and Maller, R.A. Two renewal theorems for general random walks tending to infinity. Probab. Theory Relat. Fields, 106 (1996), 1–38.
Khintchine, A.Ya. Über einen Satz der Wahrscheinlichkeitsrechnung. Fund. Math., 6 (1924), 9–20.
Khintchine, A.Ya. Asymptotische Gesetze der Wahrscheinlichkeitsrechnung (Springer, Berlin, 1933). (In German. Russian translation published by ONTI NTKP, 1936.)
Khintchine, A.Ya. Two theorems on stochastic processes with stable increment distributions. Matem. Sbornik, 3(45) (1938), 577–584. (In Russian.)
Khokhlov, Yu.S. The law of the iterated logarithm for random vectors with an operator-stable limit law. Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. (1995), 62–69. (In Russian.)
Kingman, J.F.C. On queues in heavy traffic. J. Royal Statist. Soc. Ser. B, 24 (1962), 383–392.
Klüppelberg, C. Subexponential distributions and integrated tails. J. Appl. Probab., 25 (1988), 132–141.
Klüppelberg, C. Subexponential distributions and characterizations of related classes. Probab. Theory Relat. Fields, 82 (1989), 259–269.
Klüppelberg, C., Kyprianou, A.E. and Maller, R.A. Ruin probabilities and overshoots for general Lévy insurance risk processes. Ann. Appl. Probab., 14 (2004), 1766–1801.
Klüppelberg, C. and Mikosch, T. Large deviations of heavy-tailed random sums with applications in insurance and finance. J. Appl. Probab., 34 (1997), 293–308.
Kolmogoroff, A.N. Über die Grenzwertsätze der Wahrscheinlichkeitsrechnung. Math. Annalen, 101 (1929), 120–126.
Kolmogoroff, A.N. Über das Gesetz des iterierten Logarithmus. Math. Annalen, 101 (1929), 126–135.
Kolmogoroff, A.N. Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math. Annalen, 104 (1931), 415–458.
Konstantinides, D. and Mikosch, T. Large deviations and ruin probabilities for solutions to stochastic recurrence equations with heavy-tailed innovations. Ann. Probab., 33 (2005), 1992–2035.
Korevaar, J., van Aardenne-Ehrenfest, T. and de Bruijn, N.G. A note on slowly oscillating functions. Nieuw Arch. Wiskunde (2), 23 (1949), 77–86.
Korolyuk, V.S. On the asymptotic distribution of the maximal deviations. Dokl. Akad. Nauk SSSR, 142 (1962), 522–525. (In Russian.)
Korolyuk, V.S. Asymptotic analysis of distributions of maximum deviations in a lattice random walk. Theory Probab. Appl., 7 (1962), 383–401.
Korshunov, D.A. On distribution tail of the maximum of a random walk. Stoch. Proc. Appl., 72 (1997), 97–103.
Korshunov, D.A. Large deviations probabilities for maxima of sums of independent random variables with negative mean and subexponential distribution. Theory Probab. Appl., 46 (2001), 355–366.
Korshunov, D.A., Shlegel, S. and Shmidt, F. Asymptotic analysis of random walks with dependent heavy-tailed increments. Siberian Math. J., 44 (2003), 833–844.
Landau, E. Darstellung und Begründung einiger neuerer Ergebnisse der Funktionentheorie, 2nd edn (Springer, Berlin, 1929).
Leland, W.E., Taqqu, M.S., Willinger, W. and Wilson, D.V. On the self-similar nature of Ethernet traffic, in Proc. SIGCOMM '93 (1993), pp. 183–193.
LePage, R., Woodroofe, M. and Zinn, J. Convergence to a stable distribution via order statistics. Ann. Probab., 9 (1981), 624–632.
Linnik, Yu.V. On the probability of large deviations for the sums of independent variables, in Proc. Fourth Berkeley Symp. Math. Stat. Prob. 2 (University of California Press, Berkeley, 1961), pp. 289–306.
Linnik, Yu.V. Limit theorems for sums of independent variables taking into account large deviations. I, II. Theory Probab. Appl., 6 (1961), 113–148, 345–360.
Linnik, Yu.V. Limit theorems for sums of independent variables taking into account large deviations. III. Theory Probab. Appl., 7 (1962), 115–129.
Loève, M. Probability Theory, 4th edn (Springer, New York, 1977).
Malinovskii, V.K. Limit theorems for Harris Markov chains. II. Theory Probab. Appl., 34 (1989), 252–265.
Malinovskii, V.K. Limit theorems for stopped random sequences. II. Large deviations. Theory Probab. Appl., 41 (1996), 70–90.
Meerschaert, M.M. and Scheffler, H.-P. Limit Distributions for Sums of Independent Random Vectors: Heavy Tails in Theory and Practice (Wiley, New York, 2001).
Mikosch, T. The law of the iterated logarithm for independent random variables outside the domain of partial attraction of the normal law. Vestnik Leningradskogo Univ. Mat. Mech. Astronom., 3 (1984), 35–39. (In Russian.)
Mikosch, T. and Nagaev, A.V. Large deviations of heavy-tailed sums with applications in insurance. Extremes, 1 (1998), 81–110.
Mogulskii, A.A. Integro-local theorem for sums of random variables with regularly varying distributions valid on the whole real line. Siberian Math. J. (In press.)
Mogulskii, A.A. and Rogozin, B.A. A local theorem for the first hitting time of a fixed level by a random walk. Siberian Adv. Math., 15 (2005), 1–27.
Nagaev, A.V. Limit theorems that take into account large deviations when Cramér's condition is violated. Izv. Akad. Nauk UzSSR Ser. Fiz.-Mat. Nauk, 13 (1969), 17–22. (In Russian.)
Nagaev, A.V. Integral limit theorems taking large deviations into account when Cramér's condition does not hold. I, II. Theory Probab. Appl., 14 (1969), 51–64, 193–208.
Nagaev, A.V. On a property of sums of independent random variables. Theory Probab. Appl., 22 (1977), 326–338.
Nagaev, A.V. On the asymmetric problem of large deviations when the limit law is stable. Theory Probab. Appl., 28 (1983), 670–680.
Nagaev, A.V. Cramér large deviations when the extreme conjugate distribution is heavy-tailed. Theory Probab. Appl., 43 (1998), 405–421.
Nagaev, A.V. and Zaigraev, A. Multidimensional limit theorems allowing large deviations for densities of regular variation. J. Multivariate Anal., 67 (1998), 385–397.
Nagaev, A.V. and Zaigraev, A. New large deviation local theorems for sums of independent and identically distributed random vectors when the limit is α-stable. Bernoulli, 11 (2005), 665–688.
Nagaev, S.V. Some limit theorems for large deviations. Theory Probab. Appl., 10 (1965), 214–235.
Nagaev, S.V. On the speed of convergence in a boundary problem. I, II. Theory Probab. Appl., 15 (1970), 163–186, 403–429.
Nagaev, S.V. On the speed of convergence of the distribution of maximum sums of independent random variables. Theory Probab. Appl., 15 (1970), 309–314.
Nagaev, S.V. Asymptotic expansions for the maximum of sums of independent random variables. Theory Probab. Appl., 15 (1970), 514–515.
Nagaev, S.V. Large deviations for sums of independent random variables, in Trans. Sixth Prague Conf. on Information Theory, Random Processes and Statistical Decision Functions (Academia, Prague, 1973), pp. 657–674.
Nagaev, S.V. Large deviations of sums of independent random variables. Ann. Probab., 7 (1979), 745–789.
Nagaev, S.V. On the asymptotic behavior of one-sided large deviation probabilities. Theory Probab. Appl., 26 (1981), 362–366.
Nagaev, S.V. Probabilities of large deviations in Banach spaces. Math. Notes, 34 (1983), 638–640.
Nagaev, S.V. and Pinelis, I.F. Some estimates for large deviations and their application to the strong law of large numbers. Siberian Math. J., 15 (1974), 153–158.
Ng, K.W., Tang, Q., Yan, J.-A. and Yang, H. Precise large deviations for sums of random variables with consistently varying tails. J. Appl. Prob., 41 (2004), 93–107.
Nikias, C.L. and Shao, M. Signal Processing with Alpha-stable Distributions and Applications (Wiley, New York, 1995).
Osipov, L.V. On probabilities of large deviations for sums of independent random variables. Theory Probab. Appl., 17 (1972), 309–331.
Pakes, A. On the tails of waiting-time distributions. J. Appl. Probab., 12 (1975), 555–564.
Pakshirajan, R.P. and Vasudeva, R. A law of the iterated logarithm for stable summands. Trans. Amer. Math. Soc., 232 (1977), 33–42.
Park, K. and Willinger, W., eds. Self-similar Network Traffic and Performance Evaluation (Wiley, New York, 2000).
Paulauskas, V.I. Estimates of the remainder term in limit theorems in the case of stable limit law. Lithuanian Math. J., 14 (1974), 127–146.
Paulauskas, V.I. Uniform and nonuniform estimates of the remainder term in a limit theorem with a stable limit law. Lithuanian Math. J., 14 (1974), 661–672.
Paulauskas, V. and Skuchaĭte, A. Some asymptotic results for one-sided large deviation probabilities. Lithuanian Math. J., 43 (2003), 318–326.
Petrov, V.V. Generalization of Cramér's limit theorem. Uspehi Matem. Nauk, 9 (1954), 195–202. (In Russian.)
Petrov, V.V. Limit theorems for large deviations when Cram´er’s condition is violated. Vestnik Leningrad Univ. Math., 19 (1963), 49–68. (In Russian.) Petrov, V. V. Limit theorems for large deviations violating Cram´er’s condition. II. Vestnik Leningrad. Univ. Ser. Mat. Meh. Astronom., 19 (1964), 58–75. (In Russian.) Petrov, V.V. On the probabilities of large deviations for sums of independent random variables. Theory Probab. Appl., 10 (1965), 287–298. Petrov, V.V. Sums of Independent Random Variables (Springer, Berlin, 1975). (Translated from the 1972 Russian original.) Petrov, V.V. Limit Theorems of Probability Theory: Sequences of Independent Random Variables (Clarendon Press, Oxford University Press, New York, 1987). (Translated from the 1987 Russian original.) Pinelis, I.F. A problem on large deviations in a space of trajectories. Theory Probab. Appl., 26 (1981), 69–84. Pinelis, I.F. On certain inequalitites for large deviations. Theory Probab. Appl., 26 (1981), 419–420. Pinelis, I.F. Asymptotic equivalence of the probabilities of large deviations for sums and maxima of independent random variables. Trudy Inst. Mat., 5 (1985), 144–173. (In Russian.) Pinelis, I. Exact asymptotics for large deviation probabilities, with applications, in Modelling Uncertainty. Internat. Ser. Oper. Res. Management Sci. 46 (Kluver, Boston, 2002), pp. 57–93. Pitman, E.J.G. Subexponential distribution functions. J. Austral. Math. Soc. Ser. A., 29 (1980), 337–347. Prokhorov, Yu.V. Convergence of random processes and limit theorems in probability theory. Theory Probab. Appl., 1 (1956), 157–214. Prokhorov, Yu.V. Transition phenomena in queueing processes. Litovsk. Math. Sb., 3 (1963), 199–206. (In Russian.) Resnick, S.I. Extreme Values, Regular Variation, and Point Processes (Springer, New York, 1987). Richter, V. Multi-dimensional local limit theorems for large deviations. Theory Probab. Appl., 3 (1958), 100–106. Rogozin, B.A. 
On the constant in the definition of subexponential distributions. Theory Probab. Appl., 44 (2001), 409–412. Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. Stochastic Processes for Insurance and Finance (Wiley, New York, 1999). Rozovskii, L.V. An estimate for probabilities of large deviations. Math. Notes, 42 (1987), 590–597. Rozovskii, L.V. Probabilities of large deviations of sums of independent random variables with common distribution function in the domain of attraction of the normal law. Theory Probab. Appl., 34 (1989), 625–644. Rozovskii, L.V. Probabilities of large deviations on the whole axis. Theory Probab. Appl., 38 (1994), 53–79. Rozovskii, L.V. Probabilities of large deviations for sums of independent random variables with a common distribution function from the domain of attraction of an asymmetric stable law. Theory Probab. Appl., 42 (1998), 454–482. Rozovskii, L.V. A lower bound for the probabilities of large deviations of the sum of independent random variables with finite variances. J. Math. Sci., 109 (2002), 2192–2209. Rozovskii, L.V. Superlarge deviations of a sum of independent random variables having a common absolutely continuous distribution under the Cram´er condition.
622
References
Theory Probab. Appl., 48 (2003), 108–130. [242] Rudin, W. Limits of ratios of tails of measures. Ann. Probab., 1 (1973), 982–994. [243] Rvacheva, E.L. On domains of attraction of multi-dimensional distributions. Select. Transl. Math. Statist. Probab., 2 (1962), 183–205. (Original publication in Russian: L’vov. Gos. Univ. Uˇc. Zap. Ser. Meh.-Mat., 3 (1954), 5–44.) [244] Sahanenko, A.I. On the speed of convergence in a boundary problem. Theory Probab. Appl., 19 (1974), 399–403. [245] Samorodnitsky, G. and Taqqu, M. Stable Non-Gaussian Random Processes (Chapman & Hall, New York, 1994). [246] Sato, K. L´evy Processes and Infinitely Divisible Distributions (Cambridge University Press, Cambridge, 1999). [247] Saulis, L. and Statuleviˇcius, V. A. Limit Theorems for Large Deviations (Kluwer, Dordrecht, 1991). (Translated and revised from the 1989 Russian original.) [248] Schlegel, S. Ruin probabilities in perturbed risk models. Insurance Math. Econom., 22 (1998), 93–104. ¨ [249] Schmidt, R. Uber der Borelsche Summirungsverfahren. Schriften der K¨onigsberger gelehrten Gesellschaft, 1 (1925), 202–256. ¨ [250] Schmidt, R. Uber divergentente Folgen und lineare Mittelbildungen. Mathem. Zeitschrift, 22 (1925), 89–152. [251] Seneta, E. Regularly Varying Functions (Springer, Berlin, 1976). [252] Sigman, K. A primer on heavy-tailed distributions. Queueing Systems, 33 (1999), 261–275. [253] Sgibnev, M.S. Banach algebras of functions with the same asymptotic behavior at infinity. Siberian Math. J., 22 (1981), 467–473. [254] Shepp, L.A. A local limit theorem. Ann. Math. Statist., 35 (1964), 419–423. [255] Skorokhod, A.V. Limit theorems for stochastic processes with independent increments. Theory Probab. Appl., 2 (1957), 138–171. [256] Skorokhod, A.V. Random Processes with Independent Increments (Kluwer, Dordrecht, 1991). (Translated and revised from the 1964 Russian original.) [257] Sparre Andersen, E. 
On the collective theory of risk in the case of contagion between the claims, in Trans. XVth Internat. Congress of Actuaries II (New York, 1957), pp. 219–229. [258] Stone, C. A local limit theorem for nonlattice multi-dimensional distribution functions. Ann. Math. Statist., 36 (1965), 546–551. [259] Stone, C. On local and ratio limit theorems, in Proc. Fifth Berkeley Symp. Math. Stat. Prob. II(2), ed. Neyman, J. (University of California Press, Berkeley, 1967), pp. 217–224. [260] Stout, W. Almost Sure Convergence (Academic Press, New York, 1974). [261] Strassen, V. A converse to the law of the iterated logarithm. Z. Wahrscheinlichkeitstheorie verw. Geb., 4 (1966), 265–268. [262] Tang, Q., Su, Ch., Jiang, T. and Zhang, J. Large deviations for heavy-tailed random sums in compound renewal model. Statist. Probab. Lett., 52 (2001), 91–100. [263] Tang, Q. and Tsitsiashvili, G. Precise estimates for the ruin probability in finite horizon in a discrete-time model with heavy-tailed insurance and financial risks. Stoch. Proc. Appl., 108 (2004), 299–325. [264] Tang, Q. and Yang, J. A sharp inequality for the tail probabilities of sums of i.i.d. r.v.’s with dominantedly varying tails. Sci. China A, 45 (2002), 1006–1011. [265] Teugels, J.L. The sub-exponential class of probability distributions. Theory Probab. Appl., 19 (1974), 821–822. [266] Teugels, J.L. The class of subexponential distributions. Ann. Probab., 3 (1975),
1000–1011.
[267] Teugels, J.L. and Willekens, E. Asymptotic expansions for waiting time probabilities in an M/G/1 queue with long tailed service time. Queueing Systems Theory Appl., 10 (1992), 295–311.
[268] Thorin, O. Some remarks on the ruin problem in case the epochs of claims form a renewal process. Skand. Aktuarietidskr. (1970), 29–50.
[269] Thorin, O. and Wikstad, N. Calculation of ruin probabilities when the claim distribution is lognormal. Astin Bulletin, 9 (1976), 231–246.
[270] Tkačuk, S.G. Local limit theorems, allowing for large deviations, in the case of stable limit laws. Izv. Akad. Nauk UzSSR Ser. Fiz.-Mat. Nauk, 17 (1973), 30–33. (In Russian.)
[271] Tkačuk, S.G. A theorem on large deviations in Rs in case of a stable limit law, in Random Processes and Statistical Inference, 4 (Fan, Tashkent, 1974), pp. 178–184.
[272] Shneer, V.V. Estimates for the distributions of the sums of subexponential random variables. Siberian Math. J., 45 (2004), 1143–1158.
[273] Uchaikin, V.V. and Zolotarev, V.M. Chance and Stability (VSP Press, Utrecht, 1999).
[274] Vasudeva, R. Chover's law of the iterated logarithm and weak convergence. Acta Math. Hungar., 44 (1984), 215–221.
[275] Veraverbeke, N. Asymptotic behaviour of Wiener–Hopf factors of a random walk. Stoch. Proc. Appl., 5 (1977), 27–37.
[276] Vinogradov, V. Refined Large Deviation Limit Theorems (Longman, Harlow, 1994).
[277] Vinogradov, V.V. and Godovanchuk, V.V. Large deviations of sums of independent random variables without several maximal summands. Theory Probab. Appl., 34 (1989), 512–515.
[278] von Bahr, B. Asymptotic ruin probabilities when exponential moments do not exist. Scand. Actuarial Journal (1975), 6–10.
[279] Williamson, J.A. Random walks and Riesz kernels. Pacific J. Math., 25 (1968), 393–415.
[280] Wolf, W. On probabilities of large deviations in the case in which Cramér's condition is violated. Math. Nachr., 70 (1975), 197–215. (In Russian.)
[281] Wolf, W. Asymptotische Entwicklungen für Wahrscheinlichkeiten grosser Abweichungen. Z. Wahrscheinlichkeitstheorie verw. Geb., 40 (1977), 239–256.
[282] Zachary, S. A note on Veraverbeke's theorem. Queueing Systems, 46 (2004), 9–14.
[283] Zaigraev, A. Multivariate large deviations with stable limit laws. Probab. Math. Statist., 19 (1999), 323–335.
[284] Zaigraev, A.Yu. and Nagaev, A.V. Abelian theorems, limit properties of conjugate distributions, and large deviations for sums of independent random vectors. Theory Probab. Appl., 48 (2004), 664–680.
[285] Zaigraev, A.Yu., Nagaev, A.V. and Jakubowski, A. Probabilities of large deviations of the sums of lattice random vectors when the original distribution has heavy tails. Discrete Math. Appl., 7 (1997), 313–326.
[286] Zolotarev, V.M. One-dimensional Stable Distributions (American Mathematical Society, Providence RI, 1986). (Translated from the 1983 Russian original.)
Index
Abelian type theorem, 9
arithmetic distribution, 44, 302
boundary stopping time, 349
condition
  Cramér's, xx, 234, 303, 371, 398
  Lindeberg's, 502
conjugate distribution, 301, 382
convolution, 14
  of sequences, 44
Cramér
  approximation, 235, 251–253, 298, 433
  condition, xx, 234, 303, 371, 398
  deviation zone, 252
  transform, 301, 382, 393
density, subexponential, 46
deviation function, 305, 321, 392, 555
deviations
  moderately large, 309
  normal, 309
  super-large, 308
distribution
  arithmetic, 302
  classes, see below
  conjugate, 301, 382
  exponentially tilted, 301
  function of, 335
  locally subexponential, 46, 47, 358
  regularly varying exponentially decaying, 307
  semiexponential, 29, 233
  stable, 61
  strongly subexponential, 237, 250, 274, 546
  subexponential, 14
  tail, 11, 14, 57
domain of attraction, 62
Esscher transform, 301
exponentially tilted distribution, 301
extreme deviation zone, 252
function of distribution, 335
function
  deviation, 305, 321, 392, 555
  generalized inverse, 58, 83, 508
  locally constant (l.c.), 16, 348, 358–360
  regularly varying (r.v.f.), 1
  slowly varying (s.v.f.), 1
  upper-power, 28, 224, 307, 358, 362
  ψ-locally constant (ψ-l.c.), 18, 224
generalized
  inverse function, 58, 83, 508
  renewal process, 543
inequality, Kolmogorov–Doob type, 82
integral representation theorem, 2
intermediate deviation zone, 252
invariance principle, 76, 395
iterated logarithm, law of the, 79
Kolmogorov–Doob type inequality, 82
large deviation rate function, 305, 321, 392
law of the iterated logarithm, 79
level lines, 320
Lindeberg condition, 502
locally constant function, 16, 348, 358–360
locally subexponential distribution, 46, 47, 358
Markov time, 345, 428
martingale, 506
moderately large deviations, 309
normal deviations, 309
partial factorization, 548, 554
process
  generalized renewal, 543
  renewal, 543
  stable, 78, 469
  Wiener, 75, 76, 472
random walk, 14
  defined on a Markov chain, 508
regularly varying exponentially decaying distribution, 307
regularly varying function (r.v.f.), 1
renewal process, 543
semiexponential distribution, 29, 233
sequence
  subexponential, 44
  convolution of, 44
Skorokhod metric, 75
slowly varying function (s.v.f.), 1
stable
  distribution, 61
  process, 78, 469
stopping time, 345, 428
  boundary, 349
strongly subexponential distribution, 237, 250, 274, 546
subexponential
  density, 46
  distribution, 14
  sequence, 44
super-large deviations, 308
tail, two-sided, 57
Tauberian theorem, 10
theorem
  Abelian type, 9
  integral representation, 2
  Tauberian, 10
  uniform convergence, 2
  Wiener–Lévy, 55
time, stopping, 345, 428
transform
  Cramér, 301, 382, 393
  Esscher, 301
  Laplace, 9
transient phenomena, 439, 471, 503
uniform convergence theorem, 2
upper-power function, 28, 224, 307, 358, 362
Wiener process, 75, 76, 472, 502, 503
Wiener–Lévy theorem, 55
ψ-(asymptotically) locally constant (ψ-l.c.) function, 18, 224
Boundary classes
Gx,K, 586
Gx,n, 155, 455
Gx,T,ε, 526
Gx,T, 526
Conditions and properties
[<], 551
[·, <] ([·, >], [·, =]), 81, 234
[<, ·] ([>, ·], [=, ·]), 81
[<, <] ([>, <], [<, >], [=, =]), 81
[·, ≶], 104
[·, <]U, 483, 514
[·, =]U, 514
[<, <]U ([<, =]U), 509
[·, <]UR, 492
[A], 414
[A0+], 418
[A0,n], 323
[A0], 322, 418
[A1], 418
[A2], 419
[Am], 323
[An], 322
[B], 382
[C], 303
[C0], 392
[Ca], 391
[CA], 252
[CA], 253
[CA∗], 288
[D], 167, 218, 250
[D1], 223, 294
[DA], 172
[D(1,O(q))], 144
[D(1,q)], 144, 167, 197, 217, 362, 363
[D(2,q)], 198
[Dh,q], 143
[D(k,q)], 199
[DΩ], 403, 406
[D∗Ω], 407
[H], 458, 498
[HΔ], 462, 501
[HDΔ], 502
[HUR], 469
[N], 444, 485
[Q], 138, 454, 512
[QT], 551
[Q∗], 423
[Rα,ρ], 57
[S], 466
[U], 137, 442, 483
[U1], [U2], 442, 483
[U∞], 443, 483
[UR], 464, 492
[ΦT], 551
Distribution classes
C, 371
ER, 307
ESe, 308
L, 16
Ms, 391
R, 11, 15, 371
R(α), 11
S, 13, 14
S(α), 38
S+, 14
Sloc, 47
Se, 371
Se(α), 29
S∗, 347, 358