Continuous Semi-Markov Processes
Boris Harlamov

Series Editor: Nikolaos Limnios
First published in Great Britain and the United States in 2008 by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
6 Fitzroy Square
London W1T 5DX
UK
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.iste.co.uk
www.wiley.com
© ISTE Ltd, 2008

The rights of Boris Harlamov to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Harlamov, Boris.
Continuous semi-Markov processes / Boris Harlamov.
p. cm.
Includes index.
ISBN 978-1-84821-005-9
1. Markov processes. 2. Renewal theory. I. Title.
QA274.7.H35 2007
519.2'33--dc22
2007009431

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-005-9

Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire.
Contents

Introduction . . . 9

Chapter 1. Stepped Semi-Markov Processes . . . 17
1.1. Random sequence . . . 17
1.2. Markov chain . . . 20
1.3. Two-dimensional Markov chain . . . 25
1.4. Semi-Markov process . . . 29
1.5. Stationary distributions . . . 32

Chapter 2. Sequences of First Exit Times and Regeneration Times . . . 37
2.1. Basic maps . . . 38
2.2. Markov times . . . 42
2.3. Deducing sequences . . . 46
2.4. Correct exit and continuity . . . 55
2.5. Time of regeneration . . . 63

Chapter 3. General Semi-Markov Processes . . . 71
3.1. Definition of a semi-Markov process . . . 72
3.2. Transition function of a SM process . . . 79
3.3. Operators and SM walk . . . 82
3.4. Operators and SM process . . . 91
3.5. Criterion of Markov property for SM processes . . . 103
3.6. Intervals of constancy . . . 110

Chapter 4. Construction of Semi-Markov Processes using Semi-Markov Transition Functions . . . 115
4.1. Realization of an infinite system of pairs . . . 116
4.2. Extension of a measure . . . 121
4.3. Construction of a measure . . . 124
4.4. Construction of a projective system of measures . . . 127
4.5. Semi-Markov processes . . . 133

Chapter 5. Semi-Markov Processes of Diffusion Type . . . 137
5.1. One-dimensional semi-Markov processes of diffusion type . . . 138
5.1.1. Differential equation . . . 138
5.1.2. Construction of SM process . . . 143
5.1.3. Some properties of the process . . . 159
5.2. Multi-dimensional processes of diffusion type . . . 168
5.2.1. Differential equations of elliptic type . . . 168
5.2.2. Neighborhood of arbitrary form . . . 171
5.2.3. Neighborhood of spherical form . . . 178
5.2.4. Characteristic operator . . . 188

Chapter 6. Time Change and Semi-Markov Processes . . . 197
6.1. Time change and trajectories . . . 198
6.2. Intrinsic time and traces . . . 206
6.3. Canonical time change . . . 211
6.4. Coordination of function and time change . . . 223
6.5. Random time changes . . . 228
6.6. Additive functionals . . . 233
6.7. Distribution of a time run along the trace . . . 242
6.8. Random curvilinear integrals . . . 252
6.9. Characteristic operator and integral . . . 264
6.10. Stochastic integral . . . 268
6.10.1. Semi-martingale and martingale . . . 268
6.10.2. Stochastic integral . . . 275
6.10.3. Ito-Dynkin's formula . . . 277

Chapter 7. Limit Theorems for Semi-Markov Processes . . . 281
7.1. Weak compactness and weak convergence . . . 281
7.2. Weak convergence of semi-Markov processes . . . 289

Chapter 8. Representation of a Semi-Markov Process as a Transformed Markov Process . . . 299
8.1. Construction by operator . . . 300
8.2. Comparison of processes . . . 302
8.3. Construction by parameters of Lévy formula . . . 307
8.4. Stationary distribution . . . 311

Chapter 9. Semi-Markov Model of Chromatography . . . 325
9.1. Chromatography . . . 326
9.2. Model of liquid column chromatography . . . 328
9.3. Some monotone semi-Markov processes . . . 332
9.4. Transfer with diffusion . . . 337
9.5. Transfer with final absorption . . . 346

Bibliography . . . 361
Index . . . 369
Introduction
Semi-Markov processes are a generalization of Markov processes, with their own methods of investigation and their own field of application. In the 1950s, some practical problems, especially queueing (mass-service) problems, compelled researchers to look for an adequate mathematical description. Attempts to apply Markov models to these problems were sometimes unsatisfactory because they forced unjustified conclusions about the exponential distribution of the corresponding time intervals. In order to remove this limitation, P. Lévy [LEV 54] and W. Smith [SMI 55] almost simultaneously proposed the class of stepped processes called semi-Markov processes. For these processes the Markov property with respect to a fixed non-random time instant is not fulfilled in general, but they remain Markov with respect to some random times, for example, the jump times of their trajectories. Therefore, such a process is not a Markov process, although it inherits some important properties of Markov processes. The theory of stepped semi-Markov processes has since been developed intensively. An exhaustive bibliography of the corresponding works was compiled by Teugels [TEU 76] (see also Koroluk, Brodi, and Turbin [KOR 74], Koroluk and Turbin [KOR 76], Cheong, de Smit, and Teugels [CHE 73], Nollau [NOL 80]). The list of works devoted to practical use of this model is very long (see Kovalenko [KOV 65], Korlat, Kuznecov, Novikov, and Turbin [KOR 91], etc.). We will consider a more general class of random processes with trajectories in a metric space which are continuous from the right and have limits from the left at every point of the half-line. These processes, called semi-Markov processes (in the general sense), have the Markov property with respect to any intrinsic Markov time, such as the first exit time from an open set or any finite iteration of such times.
The class of semi-Markov processes includes as a sub-class all stepped semi-Markov processes, and also all strong Markov processes, among them continuous ones. Apparently general semi-Markov processes [HAR 74] were first defined in [HAR 71b], under the name of processes with independent sojourn times.
These processes were called continuous semi-Markov processes in [HAR 72], according to the property of their first exit streams. For these processes it is impossible or inconvenient to use specific Markov methods of investigation such as semi-group theory, infinitesimal operators, parabolic differential equations or stochastic differential equations. Thus, it is necessary to develop new methods, or to modernize traditional methods of investigation which do not use the simple Markov property. Some methods are borrowed, firstly, from works on stepped semi-Markov processes (see Lévy [LEV 54], Pyke [PYK 61], Smith [SMI 55, SMI 66], etc.), and secondly, from works on Markov processes (see Dynkin [DYN 59, DYN 63], and also [KEM 70, BLU 68, MEY 67], etc.). Some ideas on non-standard description of processes are taken from [COX 67]; [HAR 69, HAR 71a] were attempts at such a description. An investigation of some particular types of continuous non-Markovian semi-Markov processes can be found in [CIN 79]. Problems of ergodicity for processes with embedded stepped semi-Markov processes are investigated by Shurenkov [SHU 77]. The simplest element of the description of a semi-Markov process is the pair consisting of the first exit time and the corresponding exit position for the process leaving some open set. However, from the classical point of view such a pair is not simple, because it is not convenient to describe it by finite-dimensional distributions of the process ([KOL 36], [GIH 71], etc.). One of the problems we solve in this book is to eliminate this difficulty by changing the initial elementary data on the process. In our case the distributions of the above first exit pairs, or joint distributions for a finite number of such pairs, become the initial data. From an engineering point of view it is not difficult to make a device which records first exit times and their compositions, and the values of the process at these times.
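The first exit pair is easy to compute from a sampled trajectory. The following sketch is our illustration only (the grid step, the interval radius and the function name are our own choices, not the book's): it records the first exit time and exit position of a discretized Wiener path from an open interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_exit_pair(times, path, x0, r):
    """First exit (time, position) of a discretized path from the open set (x0-r, x0+r).

    Returns (None, None) if the path never leaves the set on this grid.
    """
    outside = np.abs(path - x0) >= r       # complement of the open interval
    idx = np.argmax(outside)               # index of the first True, or 0 if none
    if not outside[idx]:
        return None, None
    return times[idx], path[idx]

# Discretized Wiener path started at 0
dt = 1e-4
t = np.arange(0.0, 10.0, dt)
w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), t.size - 1))))

tau, pos = first_exit_pair(t, w, x0=0.0, r=0.5)
print(tau, pos)  # exit position lies near +0.5 or -0.5, up to discretization error
```

The same routine, applied to a nested family of open sets, yields exactly the stream of first exit pairs (time, position) that serves here as the elementary data on the process.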
This makes it possible to describe the process in a simple, economical and complete way. Such an approach also has some theoretical advantages over the classical approach. In terms of the first exits, the continuity problem of the process has a trivial solution. In addition, the necessary and sufficient conditions for the process to converge weakly are simpler than in classical terms. The same can be said for time change problems. However, when accepting this point of view one must be ready to meet difficulties of another kind. Efforts in this direction would hardly be justified if there did not exist a class of processes for which a description in terms of first exit times is natural and convenient. This is the class of semi-Markov processes. For them, joint distributions of a finite number of the first exit pairs are determined by repeatedly integrated semi-Markov kernels. The semi-Markov process possesses the Markov property with respect to the first exit time from an open set; this is a simple example of such times. The most general class of such times is the class of intrinsic Markov times, which plays an important role in the theory of semi-Markov processes. One of the main features of the class of semi-Markov processes is its closure with respect to time change transformations of a natural type, which in general do not preserve the usual Markov property. One of the main problems in the theory of semi-Markov processes is the problem of representing such a process in the form of
a Markov process, transformed by a time change. At present a simple variant of this problem for stepped semi-Markov processes (the Lévy hypothesis) is solved under some additional assumption (see [YAC 68]). In general, this problem is not yet solved. General semi-Markov processes are the natural generalization of general Markov processes (just as stepped semi-Markov processes generalize stepped Markov processes). Both of these are mathematical models of real phenomena in the natural sciences and the humanities. They serve to organize human experience and to extend the prognostic possibilities of human beings, as any mathematical model does. Evidently semi-Markov models cover a wider circle of phenomena than Markov processes. They are more complex due to the absence of some simplifying assumptions, but they are more suitable for application to natural phenomena than Markov processes. The use of semi-Markov models makes it possible to give a quantitative description of some new features of the process trajectories which are not accessible using the Markov model. The characteristic feature of the semi-Markov process is a set of intervals of constancy in its trajectory. For example, such a simple operation as truncation transforms a Wiener process W into a non-Markov continuous semi-Markov process $W_a^b$:

$$ W_a^b(t) = \begin{cases} a, & W(t) \le a, \\ W(t), & a < W(t) < b, \\ b, & W(t) \ge b, \end{cases} \qquad (t \ge 0), $$

where a < b. This process contains intervals of constancy on two fixed levels, but the most typical semi-Markov process possesses intervals of constancy at random spatial positions. In general, such intervals are a consequence of a property of the so-called "time run" process with respect to the sequence of states (trace) of the process, namely that it has conditionally independent increments. The famous result of Lévy requires for such a time run process a Poisson field of jumps, which turn into intervals of constancy for the original process itself.
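The truncation above is straightforward to simulate. The sketch below is our own illustration (the levels a, b, the step size and the seed are arbitrary choices): it clips a discretized Wiener path at a and b and measures the time spent on the two fixed levels, i.e. in intervals of constancy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized Wiener path
dt = 1e-3
n = 100_000
w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

# Truncation at levels a < b: W_a^b(t) = min(max(W(t), a), b)
a, b = -1.0, 1.0
w_ab = np.clip(w, a, b)

# Fraction of grid time spent on the two fixed levels (intervals of constancy);
# on a horizon of 100 time units the path almost surely reaches at least one level
frac_a = float(np.mean(w_ab == a))
frac_b = float(np.mean(w_ab == b))
print(frac_a, frac_b)
```

The clipped path is still continuous, but a positive fraction of time sits on the levels a and b, which is exactly what destroys the Markov property at fixed times.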
This theoretical result finds an unexpected interpretation in chromatography (see [GUT 81]) and in other fields of engineering where flow of liquid or gas in a porous medium is used. The continuous semi-Markov model for this movement is reasonable due to the very small inertia of the particles of the substance being filtered. Therefore, the assumption about independence of the time intervals taken by non-overlapping parts of the filter while the particle moves through it is very natural. Moreover, the chromatography curve is a record of the distribution of the first exit time of the particle from a given space interval. Well-known interpretations of the intervals of constancy of the particle's trajectory are intervals of delay of particles on the hard phase (reversible adsorption). Such a movement is an experimental fact. A mathematical theory of reliability is another field of application of continuous semi-Markov processes. For example, an adequate description of abrasive wear
can be given by the inverse process with independent positive increments, i.e. a particular case of a continuous semi-Markov process [VIN 90]. The advantage of such a description appears when analyzing some problems of reliability [HAR 99]. General semi-Markov processes, and in particular continuous ones, can find application in those fields where stepped semi-Markov processes are applied, because the former is the limit of a sequence of the latter. Another possible field of application appears when one looks for an adequate model of a Markov-type process transformed by varying its parameters (see [KRY 77]). The class of semi-Markov processes is more stable under such transformations than the class of Markov processes. There are many problems of optimal control for random processes which can be reduced to an optimal choice of a Markov time at which to begin a control action (see [ARK 79, GUB 72, DYN 75, KRY 77, LIP 74, MAI 77, ROB 77, SHI 76]). As a rule, such a Markov time is the first exit time of the process from some set of states. However, the consequence of such a control may be the loss of Markovness. An example of such a control is a time change depending on the position of the system. Let (ξ(t)) be a stationary random process, and (ψ(t)) a time change, i.e. a strictly increasing map R₊ → R₊, where ψ(0) = 0 and ψ(t) ∼ t (t → ∞). Let (ξ̃(t)) be the transformed process. Then

$$ \int_0^{\tilde T} \tilde\xi(t)\,dt = \int_0^{T} \xi(t)\,\psi'(t)\,dt, $$

where $\tilde T = \inf \psi^{-1}[T, \infty) \sim T$ $(T \to \infty)$. The time average of the transformed process can be greater than that of the original process if ψ′(t) is chosen in a suitable form. For example, this would be the case if ψ′(t) = f(ξ(t)), where f is an increasing function. In this case ψ is uniquely determined once the function ξ is determined. A similar enlargement of the average is possible when, for fixed ξ, the time change is a non-degenerate random process. For example, let ψ(t) be a Poisson process with intensity f(ξ(t)) (a Cox process [COX 67]). Then

$$ E\int_0^{\tilde T} \tilde\xi(t)\,dt = E\int_0^{T} \xi(t)\,d\psi(t) = E\int_0^{T} \xi(t)\,f\bigl(\xi(t)\bigr)\,dt \sim \int_0^{T} \xi(t)\,f\bigl(\xi(t)\bigr)\,dt \qquad (T \to \infty). $$

If the first form determines a random time change, then the second form of time change is doubly random. The main distinction between these two kinds of time changes is in their relation to Markovness: the former preserves the Markov property and the latter does not. The process of the second type appears to be semi-Markovian, and the class of such processes is invariant with respect to such a time change. The answer to the practical question of which form of time change is preferable depends on engineering possibilities.
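A toy computation illustrates the enlargement of the time average under a time change with ψ′(t) = f(ξ(t)) for an increasing f. Every concrete choice below is ours, not the book's: i.i.d. uniform samples stand in for the stationary process ξ on a unit grid, and f(x) = 2x is normalized so that E f(ξ) = 1, hence ψ(t) ∼ t.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the stationary process xi(t), sampled on a unit time grid
xi = rng.uniform(0.0, 1.0, 1_000_000)

# Increasing weight f with E[f(xi)] = 1, so the time change psi(t) ~ t
f = 2.0 * xi                          # f(x) = 2x; E[2*xi] = 1 for xi ~ U(0, 1)

plain_avg = xi.mean()                 # time average of the original process, about 1/2
transformed_avg = (xi * f).mean()     # average after the change psi'(t) = f(xi(t)), about 2/3
print(plain_avg, transformed_avg)
```

That the weighted average exceeds the plain one is exactly the positive correlation E[ξ f(ξ)] ≥ E[ξ] E[f(ξ)] for an increasing f (Chebyshev's correlation inequality).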
Let us pay attention to another aspect of the problem. In [HAR 90] the possibility of checking the hypothesis of Markovness of a one-dimensional continuous process is discussed. It was noted that it is impossible in principle to reveal Markovness of a process with an uncountable set of states when every one-dimensional distribution of the process is continuous. In addition, it is impossible to check this hypothesis for discrete time and a continuous set of states, and also for continuous time and a multi-dimensional (d ≥ 2) space of states. This follows from the fact that it is impossible to have a reasonable statistical estimate of the conditional probability P(A | ξ(t) = x) for a random process ξ if P(ξ(t) = x) = 0 for every t ∈ R. The last condition can be true for the one-dimensional process too. However, in this case, in order to check Markovness, we can use the infinite sequence of random times at which the trajectory of the process crosses the level x. We can organize a sequence of Markov times containing iterated first exit times from the interval (−∞, x). For a strictly Markov process any such sequence of marked point processes determines a stepped semi-Markov process. The first test must check this property of the point process, although it does not guarantee Markovness of the original process. Therefore, if the first test is positive, the second test must check Markovness with the help of the distributions of the first exit times from small neighborhoods of an initial point. For a proper Markov process this distribution has a special limit property as the diameters of the neighborhoods tend to zero. Later on we consider two corresponding criteria of Markovness for semi-Markov processes. It seems convincing enough that continuous semi-Markov processes are worth investigating. Their practical usefulness is connected with numerical calculations. Here we will not cover computer problems.
The main theme of this book is the theoretical aspects of continuous semi-Markov models, and there happen to be a lot of such aspects. The book is divided into chapters, sections, subsections, and paragraphs. Every chapter has its own enumeration of divisions, as well as of theorems, lemmas, and propositions. All these items, except subsections and paragraphs, have double numbers. For example, in 3.14 the number 3 refers to the chapter, and 14 is the number of this object within the chapter. Subsections have triple numbers. For example, 5.2.3 means the following: 5 is the chapter number, 2 is that of the section, and 3 is the number of the subsection within the section. Paragraphs have single numbers. A reference to a paragraph inside a given chapter is given as item 9, where 9 is its number. A reference to a paragraph from another chapter is given as item 2.4, where 2 is the chapter number and 4 is the paragraph number within this chapter. It is necessary to include in Chapter 1 some information about stepped semi-Markov processes; it is assumed that the reader is familiar with this concept. The main focus is on constructing a measure of the stepped semi-Markov process with the help of a method which will later be generalized to general semi-Markov processes. In addition, we give without proof the main result of the theory of
stepped semi-Markov processes, namely the ergodic theorem and the corresponding formula for their stationary distribution. In Chapter 2 a method of investigating a process with the help of embedded point processes (streams) of the first exits relative to a sequence of subsets of the given metric space of states is discussed. Deducing sequences of subsets, the main tool for analyzing properties of random processes connected with first exit times, are defined. The structure of the set of regeneration times for a measurable family of probability measures is investigated. Such a name is assigned to a set of Markov times for which the Markov property for the given family of measures is fulfilled. In Chapter 3 general semi-Markov processes are defined and investigated. Such a process is characterized by a measurable family of probability measures whose set of regeneration times includes every first exit time from an open set. Semi-Markov transition functions, transition generating functions, and lambda-characteristic operators are considered. The conditions necessary for a semi-Markov process to be Markov are analyzed. In Chapter 4 the semi-Markov process is constructed on the basis of an a priori given family of semi-Markov transition functions. This is considered as a special case of the more general problem of how to construct a process from a consistent system of marked point processes, namely streams which can be interpreted as the streams of first exit pairs (time, position) from open sets for some process. In Chapter 5 semi-Markov diffusion processes in a finite-dimensional space are considered. They admit a description in terms of elliptic partial differential equations. In Chapter 6 properties of trajectories of semi-Markov processes are investigated. For any trajectory, a class of trajectories is defined in such a way that a trajectory from this class differs from the original one only by some time change.
This class can be interpreted as the sequence of states which the system passes through, without taking into account the time spent in these states. We call it the trace of the trajectory. An individual trajectory from this class is distinguished by the time run along the trace. The measure of the semi-Markov process induces a distribution on the set of all traces. This distribution possesses a Markov property of a special kind; in this respect the projection is related to a Markov process. The special property of the semi-Markov process is exposed by its conditional distribution of the time run along the trace. The character of this distribution is reflected in properties of trajectories of the semi-Markov process. In Chapter 7 we consider conditions for a sequence of stepped semi-Markov processes to converge weakly to a general semi-Markov process. Results related to semi-Markov processes are obtained as a consequence of general theorems about weak
compactness and weak convergence of probability measures in the space D. These theorems, formulated in terms of the first exit times, have some comparative advantages over similar results in terms of the usual finite-dimensional distributions. In Chapter 8 the problem of representing the semi-Markov process in the form of a Markov process transformed by a random time change is solved. We demonstrate two methods of solving this problem, with construction of the corresponding Markov process and time change. The first solution is formulated in terms of the so-called lambda-characteristic operator of the process; the corresponding Markov process is represented by its infinitesimal operator. The second solution is obtained in terms of the Lévy decomposition of the conditional process with independent positive increments (the process of time run); the corresponding Markov process is determined by the distribution of its random trace, and the conditional distribution by its time run along the trace. These distributions are used to derive formulae for the stationary distributions of the corresponding semi-Markov processes. In Chapter 9 we consider some applications of continuous semi-Markov processes. It seems natural that they can serve as an adequate model for carrying a substance through a porous medium. This follows both from some premises about independence and from experimental data on such a phenomenon as chromatography. The latter gives the purest example of a continuous semi-Markov process. We develop semi-Markov models for both liquid and gas chromatography. A similar application has a geological origin: we consider a continuous semi-Markov model of accumulation of a substance which consists of particles moving up to some non-Markov time and remaining in the stop position forever. Following accepted convention, we use notation like P(A, B) instead of P(A ∩ B).
We will also omit braces in expressions like A = {Xn ∈ C} when used as an argument of a measure or expectation: P(A) = P(Xn ∈ C), E(f; A) = E(f; Xn ∈ C), and so on. The author would like to express his deep appreciation for the help given by Yu. V. Linnik, who supported the first works of the author on continuous semi-Markov processes. The author expresses sincere gratitude to all participants of the seminar on probability theory and mathematical statistics at the St Petersburg Department of the Steklov Mathematical Institute of the Russian Academy of Sciences, and especially to I. A. Ibragimov, for their assistance with this work.
Chapter 1
Stepped Semi-Markov Processes
Before investigating general semi-Markov processes, we consider some properties of stepped semi-Markov processes. In this special case we show the structure of the class of first exit times and corresponding first exit positions of the process. For stepped semi-Markov processes the main element of their constructive description is the so-called semi-Markov kernel, determining the distribution of the pair consisting of a time and a position of the process just after the first jump from the initial point. With the help of these kernels it is possible to construct the family of probability measures on the set of all stepped right-continuous functions, depending on the initial points of the functions. How are the measures of the family coordinated? The answer to this question for the family of semi-Markov distributions (semi-Markov family) follows easily from the construction of an individual measure of the family. In the case of general semi-Markov processes this coordination condition will be accepted as an essential part of the definition.
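A stepped semi-Markov trajectory can be sketched directly from a semi-Markov kernel. In the toy sketch below (entirely our own illustration: the two-state embedded chain, the Gamma sojourn parameters and the function name are assumed, not from the book) the kernel factors into an embedded transition matrix and state-dependent, non-exponential sojourn distributions, so the process is semi-Markov without being Markov at fixed times.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy semi-Markov kernel on states {0, 1}: embedded Markov chain plus
# Gamma-distributed (non-exponential) holding times per state
P = np.array([[0.0, 1.0],
              [0.7, 0.3]])          # embedded transition probabilities
shape = np.array([2.0, 0.5])        # Gamma shape per state (assumed)
scale = np.array([1.0, 2.0])        # Gamma scale per state (assumed)

def sample_path(x0, n_jumps):
    """Return jump times and visited states of a stepped semi-Markov trajectory."""
    times, states = [0.0], [x0]
    x, t = x0, 0.0
    for _ in range(n_jumps):
        t += rng.gamma(shape[x], scale[x])   # sojourn time in the current state
        x = int(rng.choice(2, p=P[x]))       # next state of the embedded chain
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = sample_path(0, 1000)
print(times[-1], states[:10])
```

Between jumps the trajectory is constant; at each jump time the process regenerates, which is exactly the Markov property with respect to the jump times described above.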
1.1. Random sequence

1. Random element of a set

According to the fundamental work of Kolmogorov [KOL 36], a probability model begins with a set of elementary events Ω. In this set a system A of subsets (events) is chosen, representing a sigma-algebra, on which a normalized measure P is given. This measure defines the probability P(B) of any event B ∈ A. While defining a random element of some set S, one assumes that in S a sigma-algebra of subsets B is also chosen. Then any measurable function v : Ω → S (defined on Ω and having values in S) is said to be a stochastic element of the set S. Take a simple example: a random variable is a measurable map v : Ω → R ≡ (−∞, ∞) with respect to the Borel sigma-algebra of subsets B(R) on R. The basic characteristic of a random element of a set S (sometimes called an S-valued random variable) is its distribution, namely a new probability measure Q given on B, connected with the initial measure P by the condition: (∀B ∈ B) Q(B) = P(v⁻¹(B)), where, according to the definition of measurability of the map v, the pre-image v⁻¹(B) of the set B belongs to A, and hence the measure P is defined on it. In what follows, probability constructions within the framework of the given model are connected only with the measure Q, or with a set of such measures on the same sigma-algebra B. The connection between them is either not taken into account at all or is formulated in terms of these measures. If this is the case, it is natural to take as (Ω, A) the measurable space (S, B) itself. A stochastic element in this case is each point s ∈ S. To be more precise, in order to remain within the framework of the initial nomenclature, it is the identity mapping E : S → S ((∀s ∈ S) E(s) = s). This implies another definition of a stochastic element of a set S: it is a measurable space with a probability measure (S, B, Q). A point from this space, called a realization (or a sample value) of the random element, is not a part of probability theory, which studies only probabilities of various measurable subsets. Any question about an individual realization usually concerns the probability for a measurable subset of S to contain or not contain this realization.

2. Random sequence

According to the definition of a random element of any set, a random sequence of points from a set X is a triple (Z, F, P), where Z is the space of all such sequences: z ∈ Z ⇔ z = (z0, z1, z2, ...) (zk ∈ X), F is a sigma-algebra of subsets of the set Z, and P is a probability measure on F. Let Xn(z) = zn be the n-th term of a sequence z (the n-th coordinate). Thus Xn is a map Z → X.
The natural requirement on the sigma-algebra F consists of the measurability of this map, so we assume the set X to be measurable, i.e. a sigma-algebra of its subsets is determined. We will assume that X is a complete, separable, locally compact metric space with the Borel sigma-algebra of subsets B(X). In particular, if X is finite or countable (infinite), then B(X) is generated by all distinct points. Let N = {1, 2, 3, ...} and N0 ≡ N ∪ {0}. The measurability of every map Xn (n ∈ N0) makes the σ-algebra F rich enough to contain any event which can be connected with sequences. Let F = σ(Xn, n ∈ N0), i.e. the least σ-algebra with respect to which all maps Xn are measurable. The measure P is defined for all events of the form
Bn = {X0 ∈ A0, X1 ∈ A1, . . . , Xn ∈ An},  Ak ∈ B(X). [1.1]

On the other hand, consistent values of P on such events determine this measure on the sigma-algebra F uniquely. This statement (with a definition of the consistency) is a special case of the Kolmogorov theorem, which in turn is a special case of the theorem on extension of a measure from an algebra to the generated sigma-algebra. It is the basis of the constructive determination of a measure P on (Z, F). Such a representation of a measure is natural for a sequence of independent random elements and for a sequence of elements connected in a Markov chain.

3. Sequence of independent random variables

In this case

P(Bn) = P(X0 ∈ A0) P(X1 ∈ A1) · · · P(Xn ∈ An).

The constructive representation of P reduces to a sequence of probability distributions (measures) (Fn)∞n=0 on B(X), where

Fk(Ak) = P(Xk ∈ Ak) = (P ∘ Xk⁻¹)(Ak)  (k ≥ 0),

and P(Bn) = F0(A0) · · · Fn(An). Here and later on we use the notation f ∘ g for the superposition of two functions: (f ∘ g)(x) = f(g(x)). In particular, all measures Fn can be identical (the case of independent and identically distributed members of a sequence).

4. Conditional probabilities

According to the definition,
P(B1) = ∫_{X0∈A0} P(X1 ∈ A1 | X0) P(dz),

where P(C | X0) is a conditional probability of an event C with respect to the X-valued random variable X0. This formula can be rewritten with a change of variables. It is known (see Neveu [NEV 69]) that P(C | X0), as an X0-measurable function, can be represented as gC(X0), where (with fixed C ∈ F) gC is a measurable function on X. Then, by the rule of change of variables,

∫_{X0⁻¹A0} gC(X0(z)) P(dz) = ∫_{A0} gC(x) (P ∘ X0⁻¹)(dx) = ∫_{A0} gC(x) F0(dx).

We will also use the more intuitive notation in this and similar cases: gC(x) = P(C | X0 = x), F0(dx) = P(X0 ∈ dx). Thus, we obtain

P(B1) = ∫_{A0} P(X1 ∈ A1 | X0 = x) P(X0 ∈ dx).
In a more general case we have

P(Bn) = ∫_{A0×A1×···×An−1} P(Xn ∈ An | X0 = x0, . . . , Xn−1 = xn−1)
× P(Xn−1 ∈ dxn−1 | X0 = x0, . . . , Xn−2 = xn−2) · · · × P(X1 ∈ dx1 | X0 = x0) P(X0 ∈ dx0).

This generalization of the previous formula represents an integral variant of the formula of total probability in terms of conditional distributions. It can also be used for a constructive representation of a measure P. Namely, we determine F0, and also for any n ≥ 1 the kernel Fn(A | x0, . . . , xn−1), where for given (x0, . . . , xn−1) ∈ X^n it is a probability measure on B(X), and for given A ∈ B(X) it is a B(X^n)-measurable function on X^n with values in [0, 1]. With the help of these functions the probability

P(Bn) = ∫_{A0×A1×···×An−1} Fn(An | x0, . . . , xn−1) Fn−1(dxn−1 | x0, . . . , xn−2) · · · F1(dx1 | x0) F0(dx0)

is determined. The collection of such probabilities with fixed n determines a probability measure Pn on the set of all finite sequences (z0, z1, . . . , zn). The set (Pn)∞n=0 is consistent in the sense that Pn(B′n) = Pn−1(Bn−1), if Bn−1 = {X0 ∈ A0, . . . , Xn−1 ∈ An−1} and B′n = {X0 ∈ A0, . . . , Xn−1 ∈ An−1, Xn ∈ X}. Such a set (Pn) determines a unique probability measure P on F (Ionescu-Tulcea theorem; see Neveu [NEV 69]), for which P(Bn) = Pn(Bn).

1.2. Markov chain

5. Markov sequence

The constructive representation of a measure P in terms of conditional distributions is most natural for a Markov sequence. In this case

P(Xn+1 ∈ An+1 | X0, . . . , Xn) = P(Xn+1 ∈ An+1 | Xn)

for any n ∈ N0. The equality is understood almost everywhere on Z with respect to the measure P. It means that

P(Xn+1 ∈ An+1 | X0 = x0, . . . , Xn = xn) = P(Xn+1 ∈ An+1 | Xn = xn)

almost everywhere on X^{n+1} with respect to the measure P ∘ (X0, . . . , Xn)⁻¹, where

(P ∘ (X0, . . . , Xn)⁻¹)(A0 × · · · × An) = P(X0 ∈ A0, . . . , Xn ∈ An).
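For a finite state space the constructive determination of P from an initial distribution and a transition kernel can be checked numerically. The sketch below is only an illustration: the state space, the initial distribution F0, the kernel (a stochastic matrix) and the target event are all invented, and a Monte Carlo estimate of P(Bn) is compared with the product of kernel values.

```python
import random

random.seed(0)

F0 = [0.5, 0.3, 0.2]                 # initial distribution F0 on X = {0, 1, 2}
H = [[0.1, 0.6, 0.3],                # transition kernel F(. | x) as a matrix
     [0.4, 0.4, 0.2],
     [0.25, 0.25, 0.5]]

def sample(dist):
    # draw an index according to a probability vector
    u, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if u < acc:
            return i
    return len(dist) - 1

def path(n):
    # simulate (X0, ..., Xn) step by step from the kernels,
    # as in the Ionescu-Tulcea construction
    z = [sample(F0)]
    for _ in range(n):
        z.append(sample(H[z[-1]]))
    return z

# Monte Carlo estimate of P(X0 = 0, X1 = 1, X2 = 2) versus the product formula
N = 200_000
hits = sum(1 for _ in range(N) if path(2) == [0, 1, 2])
exact = F0[0] * H[0][1] * H[1][2]    # = 0.5 * 0.6 * 0.2 = 0.06
print(abs(hits / N - exact) < 0.005)
```

With the fixed seed the empirical frequency agrees with the product formula to within sampling error.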
The theory of Markov sequences (chains) is a well-developed branch of probability theory (see Kemeny and Snell [KEM 70], etc.). We consider these chains only to develop the Markov formalism (regeneration property) for a family of measures, which will be convenient later.

6. Shift and Markov property

Besides the map Xn : Z → X we will also consider the map θn : Z → Z, where θn(z0, z1, z2, . . .) = (zn, zn+1, zn+2, . . .) (shift). Thus (∀z ∈ Z) Xk(θnz) = zn+k = Xn+k(z), which can be written as the functional equality Xk ∘ θn = Xn+k (k, n ∈ N0). Furthermore, the event {Xn+k ∈ A} can be written as {Xk ∘ θn ∈ A} = θn⁻¹{Xk ∈ A}. And hence
{Xn+1 ∈ An+1, . . . , Xn+k ∈ An+k} = ⋂_{i=1}^{k} θn⁻¹{Xi ∈ An+i} = θn⁻¹ ⋂_{i=1}^{k} {Xi ∈ An+i}.
On the other hand, by the Markov property, for any k ∈ N and An+i ∈ B(X)

P(⋂_{i=1}^{k} θn⁻¹{Xi ∈ An+i} | X0, . . . , Xn) = P(⋂_{i=1}^{k} θn⁻¹{Xi ∈ An+i} | Xn) = P(θn⁻¹ ⋂_{i=1}^{k} {Xi ∈ An+i} | Xn).

In addition,

P(Xn ∈ An, θn⁻¹B | X0, . . . , Xn) = I_{Xn∈An} P(θn⁻¹B | X0, . . . , Xn),
P(Xn ∈ An, θn⁻¹B | Xn) = I_{Xn∈An} P(θn⁻¹B | Xn)  (B ∈ F),

where IA(z) = 1 if z ∈ A and IA(z) = 0 if z ∉ A. From here it follows that

P(θn⁻¹ ⋂_{i=0}^{k} {Xi ∈ An+i} | X0, . . . , Xn) = P(θn⁻¹ ⋂_{i=0}^{k} {Xi ∈ An+i} | Xn).

Since F is generated by events of the form ⋂_{i=0}^{k} {Xi ∈ Ai}, for any event B ∈ F we have

P(θn⁻¹B | X0, . . . , Xn) = P(θn⁻¹B | Xn). [1.2]
7. Notation for integrals

The following notation relates to an integral of a measurable real function with respect to a measure. If the measure is a probability measure, this integral is called an expectation of this function (random variable). Let μ be a measure on a measurable space (S, B), f a measurable numerical function on S and A ∈ B. Designate

μ(f; A) = ∫_A f dμ. [1.3]

This notation, common in the general theory of measure, is convenient when dealing with many measures. In the special case when μ = P, a probability measure, this integral is the expectation of the random variable f · IA and is designated as E(f · IA) = E(f; A).

8. A Markov property in terms of integration

Using the previous notation, according to the definition of conditional probability we can write

P(θn⁻¹B, A) = E(P(θn⁻¹B | X0, . . . , Xn); A),

where A = {X0 ∈ A0, . . . , Xn ∈ An}. By the theorem on extension of a measure (see [KOL 36]) this equality is also true for any A ∈ Fn ≡ σ(Xk, k ≤ n). In the Markov case for these A we have

P(θn⁻¹B, A) = E(P(θn⁻¹B | Xn); A).

Thus, the following equality can be considered as a definition of the Markov sequence:

P(θn⁻¹B, A) = E(P(θn⁻¹B | Xn); A) [1.4]

for any n ∈ N, B ∈ F, A ∈ Fn. Furthermore, we shall encounter cases when this equality is fulfilled for random n (i.e., depending on z). We refer to such deterministic or random numbers as regeneration times. Thus, the Markov sequence is a sequence for which any fixed number (constant instant) is a regeneration time.

9. Markov family of measures

To specify the concept of regeneration, we define a homogenous Markov property and pass from one measure to a consistent family of measures. Let us assume that
for any n ∈ N there is a transition function Fn : X × B(X) → [0, 1], i.e. a kernel Fn(A | x) ∈ [0, 1] (x ∈ X, A ∈ B(X)), such that

P(Xn ∈ An | Xn−1) = Fn(An | Xn−1)

P-almost surely (a.s.). From here it follows that (P ∘ Xn−1⁻¹)-a.s. on B(X)

P(Xn ∈ An | Xn−1 = x) = Fn(An | x)  (x ∈ X).

Thus, for a Markov sequence we have the equality

P(Bn) = ∫_{A0} F1,...,n(A1 × · · · × An | x) F0(dx),

where F0 ≡ P ∘ X0⁻¹ and, F0-a.s.,

F1,...,n(A1 × · · · × An | x) = ∫_{A1×···×An−1} Fn(An | xn−1) Fn−1(dxn−1 | xn−2) · · · F1(dx1 | x)
= P(X1 ∈ A1, . . . , Xn ∈ An | X0 = x).

The latter probabilities determine a probability measure Px^(1) on the set of sequences starting from the point x. In other words, the family of transition functions (Fn)∞1 determines the distribution of a Markov sequence up to the initial point, i.e. the family of distributions (Px^(1))x∈X. The same method is applicable for determining a family of distributions (Px^(m))x∈X with the help of the subfamily (Fn)∞m, where m ≥ 1. The connection between these families can be found by considering the Px^(1)-measure of the event Bm+n of the form [1.1]. This event can be represented as Bm ∩ θm⁻¹B̃n, where B̃n = ⋂_{i=1}^{n} {Xi ∈ Am+i}. Thus

Px^(1)(Bm+n) = Ex^(1)(P_{Xm}^(m+1)(B̃n); Bm).

From here for any x ∈ X, B ∈ F, C ∈ Fm it is true that

Px^(1)(θm⁻¹B, C) = Ex^(1)(P_{Xm}^(m+1)(B); C).

As in the previous case we obtain the equality

Px^(n+1)(θ_{m+n}⁻¹B, C) = Ex^(n+1)(P_{Xm+n}^(m+n+1)(B); C), [1.5]

where m, n ∈ N and C ∈ Fm+n. It is a consistency condition for the family of probability measures (Px^(n)) (n ∈ N0, x ∈ X).
10. Condition of regeneration

The family of transition functions (Fn) determines the consistent family of distributions (Px^(n))x∈X, n ∈ N. We deal generally with a homogenous family of transition functions, when Fn = F1 ≡ F, and with the corresponding homogenous (in time) family of distributions on F: Px^(n) = Px^(1) ≡ Px. For such a family we have

P(θn⁻¹B | Xn = x) = Px(B), (P ∘ Xn⁻¹)-a.s.,

and the Markov property can be defined as

P(θn⁻¹B | X0, . . . , Xn) = PXn(B), P-a.s.,

and also for any x ∈ X, B ∈ F, A ∈ Fn

Px(θn⁻¹B, A) = Ex(PXn(B); A)  (∀n ∈ N0). [1.6]

It is a condition of consistency inside the family of measures (Px). We call this property a condition of regeneration. We come to another definition of a Markov sequence, where it is determined up to the initial point by a consistent Markov family of probability measures. This point of view on the process will be used to define a general semi-Markov process. Let us emphasize that in the case of Markov sequences the family of distributions (Px) and the transition function F determine each other.
11. Distributions of the ſrst exit Let us consider the ſrst exit time of the sequence z from the set Δ ∈ B(X):
σΔ (z) = min n ≥ 0 : zn ∈ Δ , and XσΔ is the corresponding ſrst exit position for the case, when σΔ (z) < ∞: XσΔ (z) = XσΔ (z) (z). We assume σΔ (z) = 0 and XσΔ (z) = z0 , if z0 ∈ Δ. For any m ∈ N we have
∞
σΔ ≥ m = X0 ∈ Δ, . . . , Xk−1 ∈ Δ, Xk ∈ Δ ∈ F, k=m
and also {XσΔ ∈ A} ∈ F, where A ∈ B(X). Let FσΔ (m, A | x) = Px σΔ ≤ m, X σΔ ∈ A ,
Then it follows from the definition of the first exit time that for z0 ∈ Δ, σΔ(z) = 1 + σΔ(θ1(z)) ≥ 1 and XσΔ = (XσΔ ∘ θ1)(z). From here for m ≥ 1 we have

Px(σΔ ≤ m, XσΔ ∈ A) = Px(σΔ = 1, X1 ∈ A) + Px(1 < σΔ, 1 + σΔ ∘ θ1 ≤ m, XσΔ ∘ θ1 ∈ A)
= F(A\Δ | x) + Px(θ1⁻¹{σΔ ≤ m − 1, XσΔ ∈ A}, X1 ∈ Δ)
= F(A\Δ | x) + Ex(P_{X1}(σΔ ≤ m − 1, XσΔ ∈ A); X1 ∈ Δ).

Hence

FσΔ(m, A | x) = F(A\Δ | x) + ∫_Δ FσΔ(m − 1, A | x1) F(dx1 | x)

is an integral equation with respect to the kernel FσΔ for the Markov sequence. For x ∈ Δ we assume FσΔ(0, A | x) = I_{A\Δ}(x).

1.3. Two-dimensional Markov chain

12. Markov renewal process

We designate R+ = [0, ∞), R̄+ = [0, ∞]. Let W be the set of all sequences of pairs w = (vn, zn)∞n=0, where vn ∈ R̄+ and zn ∈ X̄ ≡ X ∪ {∞}, with ∞ ∉ X (some additional point), and (1) vn = ∞ ⇔ zn = ∞; (2) v0 = 0; (3) vn < ∞ ⇒ 0 < v1 < · · · < vn; (4) vn → ∞ (n → ∞). It is a new space of elementary events, on which we consider a special class of Markov sequences. The space of states in this case is R̄+ × X̄, which we consider as a new metric space with some metric generated by the real line metric and that of the space X. It would be possible in this case to use all the previous notation and conclusions for random sequences with values in the new metric space. However, taking into account some special properties of the sequences, we give some new notation. Let ζ(w) be the number of the last finite vn. If such a number does not exist, let us assume that ζ(w) = ∞. Let X̂n(w) = vn and X̄n(w) = zn. Then X̃n(w) = (X̂n(w), X̄n(w)). On the set {X̂n < ∞} ≡ {ζ ≥ n} we define the map θ̂n : W → W by the equality

θ̂n(w) = ((0, zn), (vn+1 − vn, zn+1), . . .),
n , n ∈ N0 ); Q be a probabilwhich differs a little from that of θn . Let B = σ(X ity measure on B. The random sequence (W, B, Q) is said to be a two-dimensional Markov sequence with conditionally independent increments of the ſrst component, n < ∞} Q-a.s. if (∀n ∈ N0 ) (∀B ∈ B) on a set {X 0 , . . . , X n = Q θ−1 B | X n . Q θn−1 B | X n n = ∞} function θn is not deſned. On this set the Markov property On a set {X consists of determined passage (∞, ∞) → (∞, ∞), thus n+1 = (∞, ∞) | X 0 , . . . , X n = Q X n = 1 n+1 = (∞, ∞) | X Q X n = (∞, ∞)). This sequence also has the short name: Markov renewal process. (X 13. Homogenous family of distributions In order to deſne homogenous in time (number) sequences we consider transition functions (kernels) for passage from X to B(R × X). Let (Fn )∞ n=1 be a sequence of the kernels Fn ([0, t] × A | x) ∈ [0, 1], where t ≥ 0, A ∈ B(X), x ∈ X. We deſne them as n ≤ t, X n+1 ∈ A | X n = Fn+1 [0, t] × A | X n n+1 − X Q X n < ∞}. From here Q-a.s. on the set {X n ≤ t, X n+1 ∈ A | X n = x = Fn+1 [0, t] × A | x n+1 − X Q X −1
(Q◦X n )-a.s. on (X, B(X)). We call the sequence deſned above homogenous in time if Fn = F1 ≡ F . With the help of these kernels as well as in case of simple Markov sequences, we obtain a homogenous in time family of distributions (Qx )x∈X on B, coordinated by the condition of the two-dimensional Markov property with conditionally independent increments of the ſrst component (condition of regeneration). So we have ∀n ∈ N0 ∀B ∈ B ∀A ∈ Bn ∀x ∈ X
n < ∞ = EQ Q (B); A ∩ X n < ∞ , Qx θn−1 B, A ∩ X x Xn 0 , . . . , X n ), EQ is an integral with respect to the measure Qx . where Bn = σ(X x Thus n − X 1 − X 0 ≤ t1 , X 1 ∈ A1 , . . . , X n−1 ≤ tn , X n ∈ An Qx X 0 ∈ A0 , X n [1.7] = IA0 (x) F 0, ti , dxi | xi−1 . A1 ×···×An i=1
(Ai ∈ B(X), x0 = x). The measure Qx defined on sets of this form can be uniquely extended to the whole sigma-algebra B. We assume that F([0, ∞) × X | x) ≤ 1 (sub-probability kernel). The condition (∃x ∈ X) F([0, ∞) × X | x) < 1 reflects a possibility of finite ζ(w): Qx(ζ < ∞) > 0. Thus Qx(ζ = 0) = 1 − F([0, ∞) × X | x) and, moreover,

Qx(ζ = n) = ∫_{X^n} (1 − F([0, ∞) × X | xn)) ∏_{i=1}^{n} F([0, ∞) × dxi | xi−1). [1.8]

If (∀x ∈ X) F([0, ∞) × X | x) = 1, then Qx(ζ = ∞) = 1. Thus, we again pass from an individual Markov chain with distribution Q to a Markov chain given up to the initial point, with a consistent family of distributions (Qx)x∈X. We are interested in conditions on the a priori given kernel F under which the functions Qx constructed by formula [1.7] can be extended to a measure on (W, B). The necessary and sufficient condition for this property requires the kernel F (satisfying the usual measurability conditions) to be such that

(∀t > 0) (∀x ∈ X)  F^{*n}([0, t] × X | x) −→ 0 (n −→ ∞),

where F^{*n}([0, t] × S | x) is the convolution

F^{*n}([0, t] × S | x) = ∫_0^t F^{*(n−1)}([0, t − t1] × S | x1) F(dt1, dx1 | x),
(S ∈ B(X)). This condition formalizes the requirement vn → ∞ as n → ∞ for all w ∈ W. A sufficient condition for this property is the existence of an ε > 0 such that (∀x ∈ X) F([0, ε] × X | x) < 1 − ε. Let

f(λ, A | x) = ∫_0^∞ e^{−λt} F(dt × A | x).

Then f(λ, X | x) < 1 − ε + εe^{−λε} ≤ δ < 1. Furthermore,

∫_0^∞ e^{−λt} F^{*n}(dt × X | x) = f^{*n}(λ, X | x),

where

f^{*n}(λ, A | x) = ∫_X f^{*(n−1)}(λ, A | x1) f(λ, dx1 | x) ≤ δ^n −→ 0.
We also have

f^{*n}(λ, X | x) = ∫_0^∞ e^{−λt} F^{*n}(dt × X | x) ≥ ∫_0^t e^{−λt1} F^{*n}(dt1 × X | x) ≥ e^{−λt} F^{*n}([0, t] × X | x).

14. Independence of increments of the first component

Let us consider, on the set {X̂n < ∞}, the conditional probability Qx(B̂n | X̄0, . . . , X̄n) of the event

B̂n = {X̂1 − X̂0 ≤ t1, . . . , X̂n − X̂n−1 ≤ tn}.

We will prove that

Qx(B̂n | X̄0, . . . , X̄n) = ∏_{k=1}^{n} Qx(X̂k − X̂k−1 ≤ tk | X̄k−1, X̄k) [1.9]

Qx-a.s., where

Qx(X̂1 − X̂0 ≤ t | X̄0, X̄1) = Qx(X̂1 ≤ t | X̄1).

It is enough to prove the equality of integrals of these functions with respect to the measure Qx over any set of the form

B′n = {X̄0 ∈ A0, . . . , X̄n ∈ An},

where Ak ∈ B(X). For n = 1 the formula being proved turns into an identity. Let it be correct for n ≥ 1. Then, using the regenerative property of the family (Qx), we obtain

EQx(Qx(B̂n+1 | X̄0, . . . , X̄n+1); B′n+1) = Qx(B̂n+1, B′n+1)
= Qx(B̂n ∩ {X̂n+1 − X̂n ≤ tn+1}, B′n ∩ {X̄n+1 ∈ An+1})
= EQx(Qx(X̂n+1 − X̂n ≤ tn+1, X̄n+1 ∈ An+1 | X̄n); B̂n ∩ B′n)
= EQx(Qx(B̂n | X̄0, . . . , X̄n) Qx(X̂n+1 − X̂n ≤ tn+1, X̄n+1 ∈ An+1 | X̄n); B′n).

Using the definition of conditional probability with respect to the measure Qx(· | X̄n) we obtain the Qx-a.s. equality

Qx(X̂n+1 − X̂n ≤ tn+1, X̄n+1 ∈ An+1 | X̄n)
= EQx(Qx(X̂n+1 − X̂n ≤ tn+1 | X̄n, X̄n+1); X̄n+1 ∈ An+1 | X̄n).
This implies that the preceding expression is equal to

EQx(Qx(B̂n | X̄0, . . . , X̄n) Qx(X̂n+1 − X̂n ≤ tn+1 | X̄n, X̄n+1); B′n+1).

We denote F(t | x, y) = Qx(X̂1 ≤ t | X̄1 = y). Such a transition function will be covered in Chapter 7.

1.4. Semi-Markov process

15. Stepped random process

Some questions about two-dimensional Markov sequences concern an interpretation of an element of W as a step-function ξ on [0, ∞). Let D0 be the set of step-functions ξ : R+ → X having on each finite interval a finite number of points of discontinuity, constant on each interval between neighboring points of discontinuity, and continuous from the right (which defines them at points of discontinuity). The set of measures of Markov renewal processes defined above determines a stepped semi-Markov process, that is, a semi-Markov process in the simple case when it can be traced step by step through its sequence of states. Let the map f : W → D0 put in correspondence to a sequence w ∈ W a function ξ = f(w) such that (∀t ∈ R+) ξ(t) = X̄n(w) ⇐ X̂n(w) ≤ t < X̂n+1(w). From the condition X̂n(w) → ∞ (n → ∞) it follows that the function ξ is defined everywhere on R+. Let F be the σ-algebra of subsets of the set D0 generated by all maps Xt : D0 → X (t ∈ R+), where Xtξ = ξ(t). Thus, f is a measurable map (with respect to B and F), which for any measure Q on B induces a measure P = Q ∘ f⁻¹ on F. Up to now we did not restrict measures Q on B by a condition on the neighboring values X̄n, X̄n+1. Now, for convenience of use of the representation ξ = f(w), we assume that Q is concentrated on W0 ⊂ W, where W0 consists of sequences with unequal neighboring values X̄n, X̄n+1 (n ∈ N0). More precisely, let (∀w ∈ W0) X̂n < ∞ ⇒ X̄n ≠ X̄n+1. When the original measure Q does not satisfy this condition, we can enlarge X, using the space X′ = X × N0, where the additional coordinate counts the numbers of the elements of a sequence. The new W contains the elements w = (w0, w1, w2, . . .), where

wn = (vn, (zn, n)) if vn < ∞, and wn = (∞, ∞) if vn = ∞,

i.e. it realizes the case when z′n ≡ (zn, n) ≠ (zn+1, n + 1) ≡ z′n+1. An example of a Markov renewal process for which it is necessary to accept such an enlargement is a simple
renewal process. It corresponds to the case where X consists of one element. Furthermore, we will consider this example in more detail. However, we start by introducing some notation connected with functions ξ ∈ D0.

16. Shift and stop operators

Let θt : D0 → D0 be the shift operator, (θtξ)(s) = ξ(t + s) (s, t ∈ R+); and let τn = σ0^n be the time of the n-th discontinuity of the function ξ (n ≥ 1). This time we also consider for ξ ∈ D0 with the number of jumps less than n; in this case we assume τn(ξ) = ∞ and Xτn(ξ) = ∞. Let αt : D0 → D0 be the stopping operator, (αtξ)(s) = ξ(s ∧ t), where s ∧ t = min(s, t), s ∈ R+, t ∈ R̄+. This operator is useful for the representation of sigma-algebras of subsets of the set D0 used in the theory of general semi-Markov processes. Let us consider the sigma-algebra of subsets (events) preceding a time t. From the obvious relation Xs ∘ αt = Xs∧t it follows that

Ft = σ(Xs ∘ αt, s ∈ R+) = αt⁻¹σ(Xs, s ∈ R+) = αt⁻¹F

(see Neveu [NEV 69]). Now we show another representation for F. Let Xτ(ξ) = X_{τ(ξ)}(ξ) (if τ(ξ) = ∞, we suppose X_{τ(ξ)}(ξ) = ∞), and let F′ = σ(τn, Xτn, n ∈ N0). We have (∀t ∈ R+) Xt = Xτn ⇐ τn ≤ t < τn+1. Hence Xt is an F′-measurable function, i.e. F ⊂ F′. On the other hand,

{τ1 > s} = lim_{n→∞} ⋂_{k=1}^{n} {X_{sk/n} = X0} ∈ F,
{τ1 ≤ s, Xτ1 ∈ S} = lim_{n→∞} ⋃_{k=1}^{n} ⋂_{l=1}^{k−1} {X_{sl/n} = X0, X_{sk/n} ≠ X0, X_{sk/n} ∈ S} ∈ F. [1.10]
The inclusions {τn ≤ s, Xτn ∈ S} ∈ F are proved similarly, i.e. F′ ⊂ F. We also consider the sigma-algebra

Fτn = σ(τk, Xτk, k ∈ {0, 1, . . . , n})  (n ∈ N0).

It is the sigma-algebra of events preceding the time of the n-th jump (including the n-th jump). It can also be represented with the help of the stopping operator as Fτn = ατn⁻¹F. Essentially, the function ατnξ coincides with the function ξ up to the
moment of the n-th jump, and afterwards is equal to the constant value (ατnξ)(t) = Xτnξ (t ≥ τn). Hence τk(ατnξ) = ∞ and Xτk(ατnξ) = ∞ if k > n. Therefore

Fτn = σ(τk ∘ ατn, Xτk ∘ ατn, k ∈ N0) = ατn⁻¹F.

Note that for w ∈ W0, X̂n(w) = τn(f(w)) and X̄n(w) = Xτn(f(w)). Furthermore, f(θ̂nw) = θτn(f(w)) on the set {X̂n < ∞} = {τn ∘ f < ∞}. From here the functional equalities follow: X̂n = τn ∘ f; X̄n = Xτn ∘ f (on the set {τn = ∞} assume Xτn = ∞); and f ∘ θ̂n = θτn ∘ f on the set {X̂n < ∞}.

17. Semi-Markov consistency condition

Now we can express a consistency condition for the family of measures (Px), generated by the family of measures (Qx), in terms of the maps θt and αt. Let us prove that

Px(θτn⁻¹B, A ∩ {τn < ∞}) = Ex(PXτn(B); A ∩ {τn < ∞}), [1.11]

where B ∈ F, A ∈ Fτn (regeneration condition for the family (Px)). Using the representation Fτn = ατn⁻¹F, it is possible to put A = ατn⁻¹C, where C ∈ F. Then

Px(θτn⁻¹B, ατn⁻¹C ∩ {τn < ∞}) = Qx(f⁻¹θτn⁻¹B, B′n) = Qx(θ̂n⁻¹f⁻¹B, B′n),

where B′n = f⁻¹ατn⁻¹C ∩ {τn ∘ f < ∞}. Evidently, it belongs to the sigma-algebra Bn. From here, using the condition of regeneration of the family (Qx), we obtain

EQx(Q_{X̄n}(f⁻¹B); B′n) = EQx(P_{X̄n}(B); B′n) = EQx(P_{Xτn∘f}(B); B′n) = Ex(PXτn(B); ατn⁻¹C ∩ {τn < ∞}).

The family (Px), satisfying condition [1.11], is called a semi-Markov (stepped) family of measures. The random process, determined up to the initial point by this family of measures, is said to be a stepped semi-Markov process. For the semi-Markov family of measures (Px) the kernel F([0, t] × A | x) means a conditional distribution of the pair (time, point) of the first jump:

Px(τ1 ≤ t, Xτ1 ∈ A) = Qx(X̂1 ≤ t, X̄1 ∈ A) = F([0, t] × A | x).

We call this kernel a semi-Markov transition function, and the corresponding kernel f(λ, A | x) a transition generating function of the stepped semi-Markov process.
Furthermore, the regeneration condition of a semi-Markov family of measures (Px) will be an initial concept. We will not deduce it from properties of simpler objects, as in the case of step-functions. To prepare this passage, we derive the formula

Px(θσΔ⁻¹B, A ∩ {σΔ < ∞}) = Ex(PXσΔ(B); A ∩ {σΔ < ∞}),

where Δ is an open set, θσΔ(ξ) = θ_{σΔ(ξ)}(ξ) (determined on {σΔ < ∞}), B ∈ F and A ∈ FσΔ; FσΔ is the sigma-algebra of events preceding the time σΔ: FσΔ = ασΔ⁻¹F. For stepped semi-Markov processes this property can be proved (for an arbitrary measurable Δ):

Px(θσΔ⁻¹B, ασΔ⁻¹A ∩ {σΔ < ∞}) = Σ_{k=0}^{∞} Px(θσΔ⁻¹B, ασΔ⁻¹A ∩ {σΔ < ∞}, σΔ = τk)
= Σ_{k=0}^{∞} Px(θτk⁻¹B, ατk⁻¹A ∩ {τk < ∞}, σΔ = τk).

On the other hand, {σΔ = τk} = {X0 ∈ Δ, Xτ1 ∈ Δ, . . . , Xτk−1 ∈ Δ, Xτk ∉ Δ} ∈ Fτk and consequently, according to the semi-Markov property of the family (Px), it is possible to replace θτk⁻¹B by PXτk(B) (with corresponding integration). We obtain

Σ_{k=0}^{∞} Ex(PXτk(B); ατk⁻¹A ∩ {τk < ∞}, σΔ = τk) = Ex(PXσΔ(B); ασΔ⁻¹A ∩ {σΔ < ∞}).
1.5. Stationary distributions for a semi-Markov process

18. Parts of an interval of constancy and their distributions

Let

Rt−(ξ) = t − τNt(ξ),  Rt+(ξ) = τNt+1(ξ) − t,

where Nt(ξ) = sup{n : τn(ξ) ≤ t} is the number of jumps of the function ξ up to the time t. These functions give the distances from t to the nearest jump time from the left and from the right. We have

Px(Rt− ≥ r, Rt+ ≥ s, Xt ∈ A) = Σ_{n=0}^{∞} Px(Xτn ∈ A, τn ≤ t − r, τn+1 ≥ t + s).
If τn < ∞, it is obvious that τn+1 = τn + τ1 ∘ θτn (the latter sum is usually designated as τn +̇ τ1). Hence {τn+1 > t + s} = {τ1 ∘ θτn > t + s − τn}. On the set {τn = s1} the last event is represented as θτn⁻¹{τ1 > t + s − s1}, which makes it possible to use the condition of regeneration for any term of this series:

Σ_{n=0}^{∞} Px(τ1 ∘ θτn > t + s − τn, Xτn ∈ A, τn ≤ t − r)
= Σ_{n=0}^{∞} ∫_0^{t−r} Ex(PXτn(τ1 > t + s − s1); τn ∈ ds1, Xτn ∈ A)
= Σ_{n=0}^{∞} ∫_0^{t−r} ∫_A P_{x1}(τ1 > t + s − s1) Px(τn ∈ ds1, Xτn ∈ dx1).

The strict deduction of this formula can be obtained with the help of a passage to the limit with a partition of the interval [0, t − r] into n parts with n → ∞. So we have

Px(Rt− ≥ r, Rt+ ≥ s, Xt ∈ A) = ∫_0^{t−r} ∫_A P_{x1}(τ1 > t + s − s1) U(x, ds1 × dx1), [1.12]

where U(x, [0, t] × A) = Σ_{n=0}^{∞} Px(τn ≤ t, Xτn ∈ A). For fixed x this kernel is the intensity measure of some random point field. So, let N(B × A) = Σ_{n=0}^{∞} I_{B×A}(τn, Xτn) be the number of pairs (time, point) of discontinuity belonging to a set B × A (B ∈ B(R+), A ∈ B(X)). Then

Ex N(B × A) = Σ_{n=0}^{∞} Ex I_{B×A}(τn, Xτn) = Σ_{n=0}^{∞} Px(τn ∈ B, Xτn ∈ A) = U(x, B × A).
19. Renewal processes

The equations for the distributions of Rt− and Rt+ are quite simple in the case X = {x} (X consists of one point), i.e. for renewal processes. According to the common scheme, the renewal process is a Markov renewal process with the space of states X = N0 and with deterministic passages {X̄n = n} → {X̄n+1 = n + 1}. The formal family of distributions (Pk)k∈N0 on (D0, F) is reduced to one distribution P0, since this family is homogenous in k ∈ N0, namely Pk(B − k) = P0(B), where for B ∈ F, k ∈ N0, B − k = {ξ ∈ D0 : ξ + k ∈ B}. Moreover,

P0(τ1 ≤ t1, τ2 − τ1 ≤ t2, . . . , τn − τn−1 ≤ tn) = F([0, t1]) F([0, t2]) · · · F([0, tn]),
where F(A) = F(A × {1} | 0) = F(A × {k} | k − 1) (∀k ∈ N). The measure U(x, B × A) is reduced in this case to the measure U(B) = Σ_{n=0}^{∞} P0(τn ∈ B). The analysis of properties of the measure U is the main problem of renewal theory. In this theory the following property is proved. For any non-lattice distribution F with finite expectation m > 0 it is true that

U([a + t, b + t]) −→ (b − a)/m [1.13]

for any a, b (a < b) (key renewal theorem; see Feller, II [FEL 67]). For lattice distributions with a pitch h the similar inference is correct for all values from the set {kh : k ∈ N0}. From here it follows that if F([0, ∞)) = 1 and m < ∞, then for any directly Riemann integrable function ψ on R+ there exists a limit

lim_{t→∞} ∫_0^t ψ(t − s1) U(ds1) = (1/m) ∫_0^∞ ψ(t) dt. [1.14]

From here

P0(Rt− ≥ r, Rt+ ≥ s) = ∫_0^{t−r} P0(τ1 > t + s − s1) U(ds1) −→ (1/m) ∫_{r+s}^{∞} P0(τ1 > t) dt  (t −→ ∞).

We have found that the distributions of Rt− and Rt+ are asymptotically identical, with limit density p0(s) = P0(τ1 > s)/m, where

m = E0 τ1 = ∫_0^∞ P0(τ1 > s) ds = ∫_0^∞ x F(dx).
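The limit density p0(s) = P0(τ1 > s)/m can be checked by simulation for a concrete non-lattice F. Below F is uniform on [0, 2] (so m = 1), and the tail of Rt+ at a large t is compared with (1/m)∫_s^∞ P0(τ1 > u) du = (2 − s)²/4; all specifics are invented for the sketch.

```python
import random

random.seed(3)

def overshoot(t):
    # R_t^+ for one renewal trajectory with U(0, 2) inter-renewal times
    s = 0.0
    while s <= t:
        s += random.uniform(0.0, 2.0)
    return s - t

# limiting tail: P(R_t^+ >= s) -> (1/m) * integral_s^inf P0(tau1 > u) du
# for U(0, 2): m = 1 and the tail integral equals (2 - s)^2 / 4 on [0, 2]
t, s, N = 50.0, 0.5, 100_000
est = sum(1 for _ in range(N) if overshoot(t) >= s) / N
limit = (2 - s) ** 2 / 4
print(abs(est - limit) < 0.01)
```

At t = 50 the renewal process is already close to equilibrium, so the empirical tail matches the limiting one within Monte Carlo error.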
20. Ergodic theorem

Let us consider the transition function F(dt, dx1 | x) of a stepped semi-Markov process. Let (∀x ∈ X) F(R+, X | x) = 1. The kernel H(dx1 | x) = F(R+, dx1 | x) is a transition function of some Markov chain, which we call the embedded Markov chain of the given semi-Markov process. A Markov chain is said to be ergodic if there exists a stationary probability distribution ν(dx1) such that

ν(A) = ∫_X H(A | x1) ν(dx1)  (A ∈ B(X)),

and for any x ∈ X this distribution can be obtained as a weak limit of the sequence of measures H^{*n}(dx1 | x) (n → ∞), where

H^{*(n+1)}(A | x) = ∫_X H^{*n}(A | x1) H(dx1 | x),

and this limit is independent of x. For a stepped semi-Markov process the following generalization of formula [1.14] is true.
THEOREM 1.1. Let the embedded Markov chain of the given semi-Markov process be ergodic, and let it have as its stationary distribution ν(dx). In addition, for any x ∈ X, let the distribution F(dt, X | x) be a non-lattice probability distribution on R+. Then for any directly Riemann integrable function ψ(t, x) (t ≥ 0, x ∈ X) the following property is true:

lim_{t→∞} ∫_X ∫_0^t ψ(t − s1, y) U(x, ds1 × dy) = (1/M) ∫_X ∫_0^∞ ψ(t, y) dt ν(dy), [1.15]

where

M = ∫_X ∫_0^∞ Py(τ1 > t) dt ν(dy).
Proof. See Shurenkov [SHU 89, p. 107]. A directly Riemann integrable function is an absolutely integrable function which can be approximated uniformly by step-functions having steps of equal length (see Shurenkov [SHU 89, p. 80]).

21. Semi-Markov process

Let us consider probability [1.12] for a stepped semi-Markov process. The right part of this equality can be represented as

∫_0^t ∫_X ψ(t − s1, y) U(x, ds1 × dy),

where ψ(t, y) = Py(τ1 − s > t) IA(y) I_{[r,∞)}(t). The direct Riemann integrability of the integrand in t follows from its monotonicity for t > r. Using Theorem 1.1, we obtain for t → ∞

Px(Xt ∈ A, Rt− ≥ r, Rt+ ≥ s) −→ (1/M) ∫_A ∫_{r+s}^{∞} Py(τ1 > t) dt ν(dy). [1.16]

In particular,

Px(Xt ∈ A) −→ (1/M) ∫_A m(y) ν(dy),

where

m(y) = ∫_0^∞ Py(τ1 > t) dt,  M = ∫_X m(y) ν(dy).
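The limit Px(Xt ∈ A) → (1/M)∫_A m(y) ν(dy) says that the stationary weight of a state is its mean sojourn time weighted by the embedded chain's stationary law ν. A sketch for a two-state stepped semi-Markov process with exponential sojourns (all numbers invented): here ν = (1/2, 1/2), m(0) = 1, m(1) = 3, so the limit probability of state 1 is 0.75.

```python
import random

random.seed(4)

m_state = [1.0, 3.0]             # mean sojourn times m(y) (exponential sojourns)
# embedded chain: next state uniform on {0, 1}, so nu = (0.5, 0.5)
nu = [0.5, 0.5]
M = sum(nu[y] * m_state[y] for y in range(2))                # = 2.0
stationary = [nu[y] * m_state[y] / M for y in range(2)]      # [0.25, 0.75]

def state_at(t, x0=0):
    # run the stepped process until time t and report X_t
    x, s = x0, 0.0
    while True:
        s += random.expovariate(1.0 / m_state[x])
        if s > t:
            return x
        x = 0 if random.random() < 0.5 else 1

N, t = 50_000, 100.0
est = sum(1 for _ in range(N) if state_at(t) == 1) / N
print(abs(est - stationary[1]) < 0.01)
```

The empirical occupation frequency of state 1 at a large t agrees with m(1)ν(1)/M, not with ν(1) alone: sojourn lengths reweight the embedded chain.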
Chapter 2
Sequences of First Exit Times and Regeneration Times
In this chapter the technique of investigating processes with trajectories continuous from the right and having limits from the left (càdlàg) is developed. For this purpose we use embedded point processes (streams) of first exits corresponding to sequences of subsets of the metric space of states. These sequences can depend on a trajectory (random) or can be independent of it (non-random). We will consider either non-random sequences of subsets or a special kind of random sequence. In the latter case each subset is a spherical neighborhood of the point of the first exit from the previous subset. If the radii of all balls are identical, then for any one trajectory the set of first exit times is finite on each bounded interval, and, hence, an ordinary stream of first exits is defined. With a non-random sequence of sets there is a danger of accumulation of an infinite number of first exit times on a finite interval, even if all these sets are balls of one radius. However, in a sigma-compact phase space it is possible to choose a non-random sequence of balls with identical small radius (a deducing sequence) for which such accumulation does not happen for any trajectory. The method of deducing sequences is convenient for the analysis of properties of random processes connected with first exit times, and for proofs of measurability of various maps. The trajectory space D[0,1](X) is usually equipped with the Skorokhod metric (see [GIH 71]). In our case a modification of this metric for the space D = D[0,∞)(X), called the Stone-Skorokhod metric, is more natural [STO 63]. The times and points of the first exit, generally speaking, are not continuous maps with respect to this metric. However, they are continuous on the sets of all functions ξ ∈ D "correctly going out" from the appropriate regions. At the same time any open set Δ can "be deformed as little as desired" to a set Δε so that the set of functions "correctly going out" from Δε is a set of probability 1 [HAR 72]. It enables us to have at our disposal a rather rich class of
sets "of correct exit almost surely" in tasks connected with continuity of times and positions of the first exit. The most fruitful direction for research of processes with the help of first exit times is, of course, the theory of semi-Markov processes, for which the first exit time from any open set Δ ⊂ X is a regeneration time. In other words, such a process has a Markov property with respect to the first exit time from any open set. Problems of regeneration times have been investigated by many authors (see Shurenkov [SHU 77], Mainsonneuve [MAI 71, MAI 74], Mürmann [MUR 73], Smith [SMI 55], Taksar [TAK 80]). In this chapter we study the structure of classes of regeneration times connected with families of probability measures depending on a parameter x ∈ X. The closure of such a class with respect to some natural compositions of Markov times makes it possible to restrict attention to a rather small set of these times when defining a semi-Markov process for its constructive exposition.

2.1. Basic maps

We will consider a space of càdlàg functions (see Gihman and Skorokhod [GIH 71]). In this section elementary properties of four basic maps of this space, (τ, Xt, θt, αt), and of their compositions are proved. Two classes of maps τ are introduced. Their definitions are based on properties of Markov times and times of the first exit from open sets. For completeness we repeat some definitions from Chapter 1.

1. Space of functions and its maps

Let R+ = [0, ∞); let X be a complete sigma-compact metric space with a metric ρ; let D = DR+(X) be the set of all continuous from the right and having limits from the left (càdlàg) functions ξ : R+ → X; R̄+ = R+ ∪ {∞}, where ∞ is a point not belonging to R+ and closing this set from the right; let T be the set of all maps τ : D → R̄+. For any t ∈ R+ the following maps are defined:

Xt : D → X, where Xt(ξ) = ξ(t), a one-coordinate projection;
θt : D → D, where (∀s ∈ R+) (θtξ)(s) = ξ(s + t), a shift.
The following map is defined for all t ∈ R̄+: αt : D → D, where (∀s ∈ R+) (αt ξ)(s) = ξ(s ∧ t), a stopping. We also consider compositions of the previous maps, namely, for any τ ∈ T the following maps are defined: Xτ : {τ < ∞} → X, where Xτ(ξ) = Xτ(ξ)(ξ),
Sequences of First Exit Times
θτ : {τ < ∞} → D, where θτ(ξ) = θτ(ξ)(ξ); ατ : D → D, where ατ(ξ) = ατ(ξ)(ξ). An important role in many subsequent constructions is played by the operation of non-commutative sum +̇ for functions τ ∈ T (see Itô and McKean [ITO 65], Blumenthal and Getoor [BLU 68]). For τ1 < ∞ let us set τ1 +̇ τ2 = τ1 + τ2 ◦ θτ1. The sum is defined to be equal to ∞ on the set {τ1 = ∞} (τ1, τ2 ∈ T).

2. Classes of maps

The definition of the following subsets of the set T reflects basic properties of Markov times and times of the first exit from open sets.
Ta = {τ ∈ T : (∀t ∈ R+) {τ ≤ t} = αt−1{τ ≤ t}}, [2.1]

Tb = {τ ∈ T : (∀t ∈ R+) {τ ≤ t} = αt−1{τ < ∞}}. [2.2]

3. Properties of basic maps

PROPOSITION 2.1.
Xt ◦ θs = Xs+t, ∀s, t ∈ R+, [2.3]
θt ◦ θs = θs+t, ∀s, t ∈ R+, [2.4]
αt ◦ αs = αs∧t, ∀s, t ∈ R+, [2.5]
Xt ◦ αs = Xs∧t, ∀t ∈ R+, s ∈ R̄+, [2.6]
θt ◦ αt+s = αs ◦ θt, ∀t ∈ R+, s ∈ R̄+. [2.7]
Proof. It immediately follows from the definitions; for example, for [2.7]:

Xu θt αt+s(ξ) = X(t+u)∧(t+s)(ξ) = Xt+(u∧s)(ξ) = Xu αs θt(ξ).

4. Properties of classes of maps

PROPOSITION 2.2. The following properties of the classes Ta and Tb hold:

Tb ⊂ Ta, [2.8]
∀τ ∈ Ta, ∀t ∈ R+: {τ ≤ t} ⊂ {τ = τ ◦ αt}, [2.9]
∀τ ∈ Ta: τ = τ ◦ ατ. [2.10]
Proof. [2.8]. Let τ ∈ Tb and t ∈ R+. Since αt ◦ αt = αt,

{τ ≤ t} = αt−1{τ < ∞} = αt−1 αt−1{τ < ∞} = αt−1{τ ≤ t}. [2.11]
On the other hand, (∀ξ ∈ D) α∞ξ = ξ, and consequently the equality is true for t = ∞.

[2.9]. Let τ ∈ Ta, t ∈ R+, ξ ∈ D, τ(ξ) ≤ t and τ(ξ) < τ(αt(ξ)). Then (∃s ≤ t) τ(ξ) ≤ s, τ(αt(ξ)) > s. From here τ(αs(ξ)) ≤ s (since αs−1{τ ≤ s} = {τ ◦ αs ≤ s}) and τ(αs(αt(ξ))) = τ(αs(ξ)) > s (since {τ > s} = αs−1{τ > s}). This is a contradiction. The case τ(ξ) > τ(αt(ξ)) is treated similarly.

[2.10]. Let τ ∈ Ta, t ∈ R̄+. It is obvious that {τ = t} = αt−1{τ = t} and {τ = ∞} = α∞−1{τ = ∞}. Then

D = ∪t {τ = t} = ∪t {τ ◦ αt = t, τ = t} = ∪t {τ ◦ ατ = τ, τ = t} = {τ ◦ ατ = τ},

where the union is taken over all t ∈ R̄+.

5. Properties of the combined maps

PROPOSITION 2.3. Let τ1, τ2 ∈ T. Then

Xτ2 ◦ θτ1 = Xτ1 +̇ τ2, [2.12]
θτ2 ◦ θτ1 = θτ1 +̇ τ2. [2.13]

In addition, let τ2 ∈ Ta. Then

ατ2 ◦ ατ1 = ατ1∧τ2, [2.14]
Xτ2 ◦ ατ1 = Xτ1∧τ2, [2.15]
θτ2 ◦ ατ2 +̇ τ1 = ατ1 ◦ θτ2. [2.16]

For equalities [2.15] and [2.16] the domain of the map on the left is contained in the domain of the map on the right.

Proof. [2.12]. The common domain of both the left and right maps is {τ2 ◦ θτ1 < ∞, τ1 < ∞} = {τ1 +̇ τ2 < ∞}. Furthermore,

Xτ2 θτ1(ξ) = Xτ2(θτ1 ξ)(θτ1 ξ) = X(τ1 +̇ τ2)(ξ)(ξ).
[2.13] is proved similarly.

[2.14]. The domain of both maps is the whole D. Thus ατ2 ατ1(ξ) = ατ2(ατ1(ξ))∧τ1(ξ)(ξ). We have further τ2(ξ) ≤ τ1(ξ) ⇔ τ2(ατ1(ξ)) ≤ τ1(ξ) (τ2 ∈ Ta). From here τ2(ατ1(ξ)) ∧ τ1(ξ) = τ2(ξ) ∧ τ1(ξ).

[2.15]. The domain of the left map {τ2 ◦ ατ1 < ∞} belongs to the domain of the right map {τ1 ∧ τ2 < ∞}, because if τ1(ξ) = τ2(ξ) = ∞, then τ2 ατ1(ξ) = τ2(ξ) = ∞. Thus Xτ2 ατ1(ξ) = Xτ2(ατ1(ξ))∧τ1(ξ)(ξ) = Xτ1∧τ2(ξ).

[2.16]. The domain of the left map {τ2 ◦ ατ2 +̇ τ1 < ∞} belongs to the domain of the right map {τ2 < ∞}, because if τ2(ξ) = ∞, then τ2 ατ2 +̇ τ1(ξ) = τ2(ξ) = ∞. Let τ2 ατ2 +̇ τ1(ξ) < ∞. For t ∈ R+ we have Xt θτ2 ατ2 +̇ τ1(ξ) = Xs(ξ), where s = (t + τ2 ατ2 +̇ τ1(ξ)) ∧ (τ2(ξ) + τ1(θτ2 ξ)), and also Xt ατ1 θτ2(ξ) = Xz(ξ), where z = (t ∧ τ1 θτ2(ξ)) + τ2(ξ). Since τ2 ∈ Ta and ατ2 = ατ2 ◦ ατ2 +̇ τ1, we have τ2 ατ2 +̇ τ1(ξ) = τ2(ξ) and s = z.

6. Associativity of the operation +̇

PROPOSITION 2.4. For any τi ∈ T (i = 1, 2, 3),

(τ1 +̇ τ2) +̇ τ3 = τ1 +̇ (τ2 +̇ τ3).

Proof. It is an immediate corollary of formula [2.13] from proposition 2.3.

7. A semigroup property

PROPOSITION 2.5. The following properties hold:
(1) τ1, τ2 ∈ Ta ⇒ τ1 +̇ τ2 ∈ Ta;
(2) τ1, τ2 ∈ Tb ⇒ τ1 +̇ τ2 ∈ Tb;
(3) let A be some set of indexes; then, if for any α ∈ A τα ∈ Ta, then ∨α∈A τα ∈ Ta, and if A is finite, then ∧α∈A τα ∈ Ta;
(4) let A be some set of indexes; then, if for any α ∈ A τα ∈ Tb, then ∨α∈A τα ∈ Tb, and if A is finite, then ∧α∈A τα ∈ Tb.
Proof. (1) Let τ1, τ2 ∈ Ta. Then (∀t ∈ R+)

{τ1 +̇ τ2 ≤ t} = ∪a≤t {τ1 = a, τ2 ◦ θa ≤ t − a} = ∪a≤t {τ1 ◦ αt = a, τ2 ◦ αt−a ◦ θa ≤ t − a} = ∪a≤t {τ1 ◦ αt = a, τ2 ◦ θa ◦ αt ≤ t − a} = αt−1 ∪a≤t {τ1 = a, τ2 ◦ θa ≤ t − a} = αt−1 {τ1 +̇ τ2 ≤ t}.

(2) Let τ1, τ2 ∈ Tb. Then (∀t ∈ R+)

{τ1 +̇ τ2 ≤ t} = ∪a≤t {τ1 = a, τ2 ◦ θa ≤ t − a} = ∪a≤t {τ1 ◦ αt = a, τ2 ◦ αt−a ◦ θa < ∞} = ∪a≤t {τ1 ◦ αt = a, τ2 ◦ θa ◦ αt < ∞} = αt−1 {τ1 ≤ t, τ2 ◦ θτ1 < ∞} = αt−1 {τ1 < ∞, τ2 ◦ θτ1 < ∞} = αt−1 {τ1 +̇ τ2 < ∞}.

(3) Let (∀α ∈ A) τα ∈ Ta. Then {∨α τα ≤ t} = ∩α {τα ≤ t} = αt−1 ∩α {τα ≤ t} = αt−1 {∨α τα ≤ t}. Furthermore, {τα1 ∧ τα2 ≤ t} = {τα1 ≤ t} ∪ {τα2 ≤ t} = αt−1({τα1 ≤ t} ∪ {τα2 ≤ t}) = αt−1 {τα1 ∧ τα2 ≤ t} (α1, α2 ∈ A).

(4) Let (∀α ∈ A) τα ∈ Tb. Then {∨α τα ≤ t} = ∩α {τα ≤ t} = αt−1 ∩α {τα < ∞} ⊃ αt−1 {∨α τα < ∞}. Furthermore, {τα1 ∧ τα2 ≤ t} = {τα1 ≤ t} ∪ {τα2 ≤ t} = αt−1({τα1 < ∞} ∪ {τα2 < ∞}) = αt−1 {τα1 ∧ τα2 < ∞}.

2.2. Markov times

Various notation concerning Markov times will now be introduced (see Dellacherie [DEL 72], Dynkin [DYN 59, DYN 63], Loève [LOE 62], Neveu [NEV 69], Shiryaev [SHI 80], Protter [PRO 77], etc.). The measurability of basic maps, the representation of the sigma-algebra of events preceding a Markov time τ ∈ MT, and also the characterization of Markov times (propositions 2.8 and 2.9), known in the literature as the Galmarino theorem, are discussed (see Itô and McKean [ITO 65, p. 113]). The results of this section are not new. The concrete form of the probability space, of the basic maps and of the classes Ta (T̃a) facilitates presentation of the results and their proofs.

8. Sigma-algebra of subsets

Let F = σ(Xt, t ∈ R+) be the least sigma-algebra of subsets of the set D such that all maps Xt are measurable with respect to it. Let
Ft = σ(Xs, s ≤ t),  Ft+ = ∩ε>0 Ft+ε  (t ∈ R+).
Let us designate by
MT = {τ ∈ T : (∀t ∈ R+) {τ ≤ t} ∈ Ft}
the set of Markov times with respect to the family of sigma-algebras (Ft)t≥0, and by

MT+ = {τ ∈ T : (∀t ∈ R+) {τ ≤ t} ∈ Ft+}

the same with respect to the family of sigma-algebras (Ft+)t≥0. Let

Fτ = {B ∈ F : (∀t ∈ R+) B ∩ {τ ≤ t} ∈ Ft}  (τ ∈ MT),

Fτ+ = {B ∈ F : (∀t ∈ R+) B ∩ {τ ≤ t} ∈ Ft+}  (τ ∈ MT+)

be the sigma-algebras of events preceding a Markov time τ. Furthermore, we will connect to each τ ∈ MT+ one more sigma-algebra contained in Fτ+, more convenient for research of properties of regeneration times. We designate by B(E) the Borel sigma-algebra of subsets of a topological space E. If (A, A) and (E, E) are two sets with sigma-algebras of subsets, then ϕ ∈ A/E means that the map ϕ : A → E is measurable with respect to the sigma-algebras A and E. If B ⊂ A, then A ∩ B means the trace of the σ-algebra A on the set B (i.e. all sets of the form A ∩ B, where A ∈ A).

9. The Stone metric

We frequently use the map βτ : D → Y, where Y = (R+ × X) ∪ {∞}, ∞ ∉ R+ × X, τ ∈ T and

βτ(ξ) = ∞ if τ(ξ) = ∞;  βτ(ξ) = (τ(ξ), Xτ(ξ)) if τ(ξ) < ∞.

Let us consider the one-to-one map B : Y → Y′, where Y′ = ([0, 1) × X) ∪ {1} and 1 ∉ [0, 1) × X:

B(y) = 1 if y = ∞;  B(y) = ((2/π) arctan t, x) if y = (t, x) (t ∈ R+, x ∈ X).

The Stone metric ρY′ on the set Y′ is the following function:

ρY′((t1, x1), (t2, x2)) = ((1 − t1) ∧ (1 − t2)) arctan ρ(x1, x2) + |t1 − t2|,
ρY′(1, (t, x)) = ρY′((t, x), 1) = 1 − t,  ρY′(1, 1) = 0,

where t, t1, t2 ∈ [0, 1) and x, x1, x2 ∈ X. A proof of the metric axioms for the function ρY′ does not present difficulties (see Stone [STO 63]).
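The Stone metric admits a direct computation. The following sketch is my own illustration, not part of the book: it assumes X = R with ρ(x1, x2) = |x1 − x2| and encodes the compactifying point 1 (respectively ∞) as None. It implements ρY′ and the map B, and lets one spot-check the metric axioms numerically.

```python
import math

def rho_Yp(y1, y2):
    """Stone metric on Y' = ([0,1) x X) u {1} for X = R.
    The distinguished point 1 is encoded as None; other points are pairs (t, x)."""
    if y1 is None and y2 is None:
        return 0.0                      # rho(1, 1) = 0
    if y1 is None:
        return 1.0 - y2[0]              # rho(1, (t, x)) = 1 - t
    if y2 is None:
        return 1.0 - y1[0]
    (t1, x1), (t2, x2) = y1, y2
    return min(1.0 - t1, 1.0 - t2) * math.atan(abs(x1 - x2)) + abs(t1 - t2)

def B(y):
    """The one-to-one map B : Y -> Y': infinity (None) goes to 1 (None),
    and (t, x) goes to ((2/pi) arctan t, x)."""
    if y is None:
        return None
    t, x = y
    return ((2.0 / math.pi) * math.atan(t), x)

def rho_Y(y1, y2):
    """The induced metric on Y: rho_Y(y1, y2) = rho_Y'(B(y1), B(y2))."""
    return rho_Yp(B(y1), B(y2))
```

Checking symmetry and the triangle inequality over randomly sampled triples of points is a quick way to see why the metric axioms "do not present difficulties" here.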
On the set Y we consider the metric ρY of the form

ρY(y1, y2) = ρY′(B(y1), B(y2))  (y1, y2 ∈ Y),

converting Y into a separable metric space. Thus, we have B(Y) = σ({∞}, [0, t] × S : t ∈ R+, S ∈ B(X)).

10. Measurability of combined maps

PROPOSITION 2.6. Let τ, τ1, τ2 ∈ F/B(R̄+). Then
(1) Xτ ∈ F ∩ {τ < ∞}/B(X);
(2) θτ ∈ F ∩ {τ < ∞}/F;
(3) ατ ∈ F/F;
(4) τ1 +̇ τ2 ∈ F/B(R̄+);
(5) βτ ∈ F/B(Y).

Proof. (1) and (2): see Blumenthal and Getoor [BLU 68, p. 34]; (3), (4), (5): obvious.

11. Representation of sigma-algebra

PROPOSITION 2.7. For any τ ∈ MT the following representation holds: Fτ = ατ−1 F.

Proof. For any t ≥ 0 let us check the equality Ft = αt−1 F. For this aim we use the representation of an induced sigma-algebra: for any measurable map M it is true that M−1 σ(B : B ∈ A) = σ(M−1 B : B ∈ A), where A is a family of sets [NEV 69]. From here

αt−1 σ(Xs, s < ∞) = σ(Xs ◦ αt, s < ∞) = σ(Xs∧t, s < ∞) = σ(Xs, s ≤ t).

The condition B ∈ αt−1 F means that (∃B′ ∈ F) B = αt−1 B′. Hence, in particular, B = αt−1 B (to prove it, it is necessary to use the property αt ◦ αt = αt). Note
that for B ∈ Fτ the property (∀t ≥ 0) B ∩ {τ = t} ∈ Ft holds. Consequently, B ∩ {τ = t} = αt−1 B ∩ {τ = t}. From here, for this B the following representation is true:

ατ−1 B = ∪t∈R̄+ (ατ−1 B ∩ {τ = t}) = ∪t∈R̄+ (αt−1 B ∩ {τ = t}) = ∪t∈R̄+ (B ∩ {τ = t}) = B.
However, this means that B ∈ ατ−1 F. For the inverse relation we should show that (∀s ≥ 0) (∀S ∈ B(X)) (∀t ≥ 0) ατ−1(Xs−1 S) ∩ {τ ≤ t} ∈ Ft. Since ατ−1(Xs−1 S) = Xs∧τ−1 S, this follows from the equalities

αt−1(Xs∧τ−1 S ∩ {τ ≤ t}) = Xt∧s∧τ−1 S ∩ αt−1{τ ≤ t} = Xs∧τ−1 S ∩ {τ ≤ t}.
A similar theorem is proved in Shiryaev [SHI 76, p. 22]. 12. Relation of classes Let
T̃a = {τ ∈ T : (∀t ∈ R+) {τ < t} = αt−1{τ < t}}.

PROPOSITION 2.8. The following properties hold: (1) MT ⊂ Ta; (2) MT+ ⊂ T̃a.

Proof. It is an obvious corollary of proposition 2.7.

13. Measurability of times

PROPOSITION 2.9. Let τ ∈ F/B(R̄+). Then (1) τ ∈ Ta ⇒ τ ∈ MT; (2) τ ∈ T̃a ⇒ τ ∈ MT+.

Proof. It is an obvious corollary of proposition 2.7; see Shiryaev [SHI 76, p. 24] (for notation see proposition 2.8(2)).
2.3. Time of the first exit and deducing sequences

Various notation connected with streams of the first exit is introduced. Apparently, the simplest stream of the first exit is a sequence of points from the space R+ × X, where each next pair is the time and position of the first exit from a spherical neighborhood of the position of the previous pair. If the radii of all balls are identical, then for any function ξ ∈ D the sequence of times of the first exit for such a stream has no points of condensation on a finite part of the half-line. Despite the simplicity and naturalness of such a stream, it is not suitable for research of some properties of the process. A fixed sequence of open sets is more convenient for applications of the corresponding streams of the first exits. The sequences guaranteeing for all ξ ∈ D a lack of points of condensation in a finite part of the half-line are referred to as deducing [HAR 74, HAR 80b]. There are deducing sequences composed of balls as small as may be desired. This fact implies that any function can be approximated uniformly on the whole half-line by step-functions corresponding to streams of the first exit of such a kind. With any τ ∈ MT+ we connect a somewhat smaller sigma-algebra than Fτ+, namely Fτ = σ(ατ, τ), where only τ itself depends on "an infinitesimally close future". It is measurable with respect to the new sigma-algebras by definition. For τ ∈ MT the new definition coincides with the old one, see item 8. The method of deducing sequences is applied in order to derive the form of representation of the sigma-algebra Fτ1 +̇ τ2 (τ1, τ2 ∈ MT+), which does not hold for F(τ1 +̇ τ2)+. This representation will be used in order to prove the "semigroup" property (with respect to +̇) of the set of regeneration times.

14.
Time of the first exit

We designate by A the class of all open subsets of the set X, by Ā the class of all closed subsets of X (see Kelley [KEL 68]); [Δ] is the closure and ∂Δ is the boundary of a set Δ ⊂ X; B(x, r) is an open ball with center x ∈ X and radius r > 0. The time of the first exit from a set Δ ⊂ X is the function σΔ ∈ T of the form

σΔ = inf{t ≥ 0 : ξ(t) ∉ Δ} if (∃t ≥ 0) ξ(t) ∉ Δ;  σΔ = ∞ if (∀t ≥ 0) ξ(t) ∈ Δ

(see Dynkin [DYN 59], Blumenthal and Getoor [BLU 68], Keilson [KEI 79], etc.). Thus σX = ∞. Sometimes it is useful to have the first exit time from an empty set, which naturally can be defined as zero: σØ = 0. We also consider the case when the choice of Δ depends on ξ, i.e. the time of the first exit σG, where G = G(ξ) ∈ A and σG(ξ) = σG(ξ)(ξ). For example, σB(x,r)(ξ), where x = X0(ξ). We designate the latter Markov time as σr, i.e. σr ≡ σB(X0,r). The most common time of this kind,
with which we will now deal, is the time σB(X0,r)∩Δ (Δ ∈ A), which, obviously, is equal to σr ∧ σΔ. Let us accept the following notation for sums of times of the first exit. Let r > 0 and let δ = (Δ1, Δ2, ...) be a sequence of sets Δi ⊂ X. Then

σrn = σrn−1 +̇ σr (n ∈ N, σr0 = 0),  σδn = σδn−1 +̇ σΔn (n ∈ N, σδ0 = 0).

Let T1 ⊂ T. Designate by T(T1) the "semigroup" with respect to the operation +̇ generated by the class of functions T1. It means that τ1, τ2 ∈ T(T1) ⇒ τ1 +̇ τ2 ∈ T(T1).

15. Properties of the first exit times

Let us designate by
T̃b = {τ ∈ T̃a : (∀t ∈ R+) {τ ≤ t} ⊃ αt−1{τ < ∞}},

where T̃a is the class from proposition 2.8.

PROPOSITION 2.10. The following properties hold:
(1) σΔ, σr ∈ Tb ∩ MT (r > 0, Δ ∈ A);
(2) σΔ, σ[r] ∈ T̃b ∩ MT+ (r > 0, Δ ∈ Ā), where σ[r](ξ) = σ[B(X0(ξ),r)](ξ).

Proof. (1) In Gihman and Skorokhod [GIH 73, p. 194] it is proved that for any Δ ∈ A, σΔ ∈ MT. It is similarly proved that σr ∈ MT. So we have to prove that (∀t ∈ R+)

{σΔ ≤ t} = αt−1{σΔ < ∞},  {σr ≤ t} = αt−1{σr < ∞}.

Since MT ⊂ Ta (proposition 2.8), (∀t ∈ R+) {σΔ ≤ t} ⊂ αt−1{σΔ < ∞}. Let σΔ αt(ξ) < ∞. Then (αt ξ)−1(X\Δ) ≠ Ø. If Xt αt(ξ) = Xt(ξ) ∈ X\Δ, then σΔ(ξ) ≤ t. If Xt αt(ξ) ∈ Δ, then (∀t1 ≥ t) Xt1 αt(ξ) = Xt(ξ) ∈ Δ and, hence, (∃t2 < t) Xt2 αt(ξ) = Xt2(ξ) ∈ X\Δ and σΔ(ξ) < t. From here αt−1{σΔ < ∞} ⊂ {σΔ ≤ t}. For σr the proof is similar.

(2) The property σΔ ∈ MT+ (Δ ∈ Ā) follows from the possibility of representing σΔ as a limit of a decreasing sequence of Markov times τn ∈ MT (see Blumenthal and
Getoor [BLU 68, p. 34]). Based on proposition 2.8, MT+ ⊂ T̃a. We have to check that (∀t ∈ R+)

{σΔ ≤ t} ⊃ αt−1{σΔ < ∞},

which is proved like that of item (1). The time σ[r] is investigated in a similar manner.

16. Inequalities for the first exit time

PROPOSITION 2.11. Let Δ ⊂ X and r > 0. Then
(1) t2 ∈ [t1, t1 +̇ σΔ] ⇒ t1 +̇ σΔ = t2 +̇ σΔ;
(2) t1 ≤ t2 ⇒ t1 +̇ σΔ ≤ t2 +̇ σΔ;
(3) t1 ≤ t2 ⇒ t1 +̇ σr ≤ t2 +̇ σ2r.

Proof. (1) It follows from the representation

(t1 +̇ σΔ)(ξ) = inf{t ≥ t1 : ξ(t) ∉ Δ} if (∃t ≥ t1) ξ(t) ∉ Δ;  (t1 +̇ σΔ)(ξ) = ∞ if (∀t ≥ t1) ξ(t) ∈ Δ

(see Maisonneuve [MAI 71, MAI 74], Blumenthal and Getoor [BLU 68]).
(2) It follows from (1).
(3) It follows from an obvious relation: (∀ξ ∈ D)

t2 ∈ [t1, t1 + σr θt1(ξ)] ⇒ (∀t ∈ [t1, t1 + σr θt1(ξ)]) ρ(ξ(t2), ξ(t)) ≤ ρ(ξ(t2), ξ(t1)) + ρ(ξ(t1), ξ(t)) < 2r.

17. Deducing sequence

Let us pay attention to the difference between the sequences (σrn) and (σδn). The former is strictly increasing in n (up to the first n at which it becomes infinite), while the latter increases at step n only if Xσδn−1 belongs to the next set Δn; otherwise the next term of the sequence is equal to the preceding one. The sequence of points "waits" for the next subset covering the last point. It can happen that such a covering subset is absent (in spite of the sequence being infinite). However, there exist sequences of arbitrarily small rank that "service" any function from D. For a given subset A ⊂ X, a sequence of subsets of the set X is referred to as deducing from A if δ = (Δ1, Δ2, ...) (Δi ⊂ X) and for any ξ ∈ D and t ∈ R+ there exists n ∈ N such that Xσδn(ξ) ∈ A or σδn(ξ) > t.
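These definitions are directly computable for discretized trajectories. The sketch below is my own illustration, not the book's formalism: it takes X = R, represents ξ by its samples on a uniform grid, and builds σΔ, the non-commutative sum +̇ and the iterated times σδn of item 14, with sets Δ given as membership predicates.

```python
import math

def first_exit(xi, dt, Delta):
    """sigma_Delta = inf{t >= 0 : xi(t) not in Delta} on the grid; math.inf if none."""
    for k, x in enumerate(xi):
        if not Delta(x):
            return k * dt
    return math.inf

def ncsum(tau1, tau2, xi, dt):
    """Non-commutative sum tau1 +. tau2 = tau1 + tau2 o theta_{tau1}:
    evaluate tau1, then evaluate tau2 on the shifted path theta_{tau1} xi."""
    t1 = tau1(xi, dt)
    if t1 == math.inf:
        return math.inf                      # the sum is infinite on {tau1 = inf}
    return t1 + tau2(xi[int(round(t1 / dt)):], dt)

def sigma_delta_n(xi, dt, delta):
    """sigma_delta^n = sigma_delta^{n-1} +. sigma_{Delta_n}, iterated over delta."""
    t = 0.0
    for Delta in delta:
        if t == math.inf:
            break
        t = t + first_exit(xi[int(round(t / dt)):], dt, Delta)
    return t
```

For instance, with ξ(t) = t sampled with step 1 and δ = ({x < 2}, {x < 4}), the two exits occur at times 2 and 4; repeating the same set twice leaves the second term equal to the first, which is exactly the "waiting" behavior of (σδn) described above.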
Let DS(A1, A) be the set of all sequences deducing from A composed of elements of a class A1 of subsets of the set X. Other abbreviations are:

DS(A1) = DS(A1, X),  DS(A) = DS(A, A),  DS = DS(A, X),

and also

DS(r, A) = DS({B(x, r) : x ∈ X}, A)  (r > 0),  DS(r) = DS(r, X).

A sequence deducing from X is said to be deducing without attributes.
18. Stepped images of functions

For any δ ∈ DS and r > 0 the maps Lδ and Lr of type D → D are defined: (∀t ∈ R+)

Xt Lδ ξ = Xσδn(ξ),  Xt Lr ξ = Xσrn(ξ),

where σδn(ξ) ≤ t < σδn+1(ξ) and σrn(ξ) ≤ t < σrn+1(ξ) respectively.

19. Measurability of maps

PROPOSITION 2.12. For any δ ∈ DS(A ∪ Ā) and r > 0 the following measurability conditions hold:

Lδ ∈ F/F,  Lr ∈ F/F.
Proof. It follows from the F/B(Y)-measurability of all βσδn and βσrn (n ∈ N0) (see proposition 2.6(5)). The measurability of σΔ and σr can be found in Gihman and Skorokhod [GIH 73, p. 194]. The closedness of the sets MT and MT+ under the operation +̇ is proved, for example, by Itô and McKean [ITO 65, p. 114].
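The uniform approximation behind these stepped images is easy to see numerically. Below is a hedged sketch, again with X = R and sampled paths rather than the book's construction verbatim, of the map Lr: the approximating path holds the value attained at each successive first exit from the ball B(current value, r), and therefore stays within r of the original path.

```python
def L_r(xi, r):
    """Discrete stepped image of a sampled path: hold the position taken at the
    successive first-exit times from balls B(anchor, r)."""
    out = []
    anchor = xi[0]
    for x in xi:
        if abs(x - anchor) >= r:   # first exit from the open ball on the grid
            anchor = x             # the value at the exit time becomes the new level
        out.append(anchor)
    return out
```

The sup-distance between ξ and Lr ξ is then strictly less than r, which is the estimate ρ(Xt(ξ), Xt(Lδ ξ)) < r used in proposition 2.13 below.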
20. Existence of deducing sequences

THEOREM 2.1. The following assertions hold:
(1) for any covering A0 of the set X, where A0 ⊂ A, there exists a deducing sequence composed of elements of this covering;
(2) for any A ∈ A and r > 0 there exists δ = (Δ1, Δ2, ...) ∈ DS(A) such that Δi ⊂ A and (∀i ∈ N) diam Δi ≤ r.
Proof. (1) Let X = ∪n=1∞ Xn, where each Xn belongs to K, the set of all compact subsets of X. Let An be a finite covering of Xn with An ⊂ A0. For any ξ ∈ D and t ∈ R+ there is n ∈ N such that ξ(s) ∈ Xn for all s ≤ t. Suppose that ξ(s) ∈ Xn for all s ≤ t and that σδ(ξ) ≤ t for every finite sequence δ = (Δ1, ..., Δm), where Δi ∈ An (1 ≤ i ≤ m). Then t0 = sup_δ σδ(ξ) ≤ t, and there is a sequence {δm : m ∈ N} with tm ↑ t0, where tm = σδm(ξ). Let x = lim m→∞ Xtm(ξ); x ∈ Xn since Xn is closed. Let us consider two possibilities:
(a) (∃Δ′ ∈ An) x, ξ(t0) ∈ Δ′;
(b) (∃Δ′, Δ′′ ∈ A0) x ∈ Δ′, ξ(t0) ∈ Δ′′\Δ′.
In case (a) (∃m ∈ N) (∀s ∈ [σδm(ξ), t0)) ξ(s) ∈ Δ′. From here

(σδm +̇ σΔ′)(ξ) = (t0 +̇ σΔ′)(ξ) > t0

(see proposition 2.11(1)). It is a contradiction.
In case (b) (∃m ∈ N) (∀s ∈ [σδm(ξ), t0)) ξ(s) ∈ Δ′, ξ(t0) ∈ Δ′′\Δ′. From here (σδm +̇ σΔ′)(ξ) = t0 and (σδm +̇ σΔ′ +̇ σΔ′′)(ξ) > t0. It is a contradiction. Hence, there is a finite sequence δ′ = (Δ1, ..., Δl) (Δi ∈ An) such that σδ′(ξ) > t.
The set of all finite sequences composed from sets Δ ∈ An (n ∈ N) is countable. Let us enumerate all these sequences by the positive integers and compose from them one infinite sequence δ. Then for any ξ ∈ D and t ∈ R+ there is a finite subsequence δ′ such that σδ′(ξ) > t. Let δ′ = (Δk+1, ..., Δk+m) for some k and m. Then σ(Δ1,...,Δk+m) = σ(Δ1,...,Δk) +̇ σδ′. Under proposition 2.11(2) we have σ(Δ1,...,Δk+m)(ξ) ≥ σδ′(ξ) > t. Hence, δ ∈ DS.

(2) From the previous assertion it follows that DS(r) = DS(r, X) ≠ Ø. Let δ′ = (Δ1 ∩ A, Δ2 ∩ A, ...), where δ = (Δ1, Δ2, ...) ∈ DS(r). We assume that

(∃ξ ∈ D) (∃t ∈ R+) (∀n ∈ N) Xσδ′n(ξ) ∉ A, σδ′n(ξ) ≤ t.

Besides, there exists n0 ∈ N such that σδn0(ξ) > t. As A is open, XσΔ1∩A(ξ) ∉ A implies XσΔ1∩A(ξ) ∉ Δ1 and, hence, under proposition 2.11(1)

σΔ1(ξ) = (σΔ1∩A +̇ σΔ1)(ξ) = σΔ1∩A(ξ).

Let σδ′k(ξ) = σδk(ξ) (k < n). Again we have

Xσδ′k+1(ξ) = XσΔk+1∩A θσδ′k(ξ) ∉ A ⟹ XσΔk+1∩A θσδ′k(ξ) ∉ Δk+1

and, hence,

σδk+1(ξ) = (σδk +̇ σΔk+1)(ξ) = (σδ′k +̇ σΔk+1∩A +̇ σΔk+1)(ξ) = (σδ′k +̇ σΔk+1∩A)(ξ) = σδ′k+1(ξ).

From here σδ′n0(ξ) = σδn0(ξ). It is a contradiction.

21. Covering of a compact set

We designate by K the set of all compact subsets of the set X (see the proof of the previous theorem).

THEOREM 2.2. Let δ = (Δ1, Δ2, ...) ∈ DS and (∀n ∈ N) Δn ≠ X. Then (∀K ∈ K) (∃r > 0) (∀x ∈ K) (∀n ∈ N) (∃k > n) B(x, r) ⊂ Δk.

Proof. Let δ = (Δ1, Δ2, ...), where (∀n ∈ N) Δn ≠ X, and suppose (∃x ∈ X) (∀ε > 0) (∃n ∈ N) (∀k ≥ n) B(x, ε)\Δk ≠ Ø. Then, obviously, there exists a sequence (rn)1∞ (rn > 0, rn ↓ 0) such that (∀n ∈ N) (∀k ≥ n) B(x, rn)\Δk ≠ Ø. We consider a sequence (xn)0∞, where x0 ∈ X, x1 ∈ B(x, rk1)\Δk1, ..., xn ∈ B(x, rkn)\Δkn, ...; here k1 is the number of the first set in the sequence δ covering x0, and (∀n ∈ N) kn+1 is the number of the first set after Δkn covering xn. Let us construct ξ ∈ D as follows: ξ(t) = xn for t ∈ [2 − 2^(1−n), 2 − 2^(−n)) (n ∈ N0). If the sequence (kn) is finite and kM is its last term, we set ξ(t) = xM for t ≥ 2 − 2^(−M). If (kn) is infinite, we set ξ(t) = x for t ≥ 2. Then

(σΔ1 +̇ ··· +̇ σΔkn)(ξ) = Σi=0..n 2^(−i) < 2,

i.e. σδn(ξ) does not tend to ∞, and δ ∉ DS. Hence, if δ ∈ DS and (∀n ∈ N) Δn ≠ X, then (∀x ∈ X) (∃εx > 0) (∀n ∈ N) (∃k ≥ n) B(x, εx) ⊂ Δk.
Let

r(x) = max{r : (∀n ∈ N) (∃k ≥ n) B(x, r) ⊂ Δk}.

Then r(x) is a continuous function. Indeed, if ρ(x1, x2) < c < r(x1) ∧ r(x2), then B(x1, r) ⊂ Δk ⇒ B(x2, r − c) ⊂ Δk, i.e. r(x2) ≥ r(x1) − c. Similarly r(x1) ≥ r(x2) − c. From here |r(x1) − r(x2)| < c, and therefore on any compact set K ⊂ X the function r(x) has a positive minimum.
22. Representation of sigma-algebra

PROPOSITION 2.13. Let rm ↓ 0 and δm ∈ DS(rm). Then

F = σ(βσδm^(k−1) ; k, m ∈ N) = σ(βσrm^(k−1) ; k, m ∈ N).

The same holds for δm ∈ DS({[B(x, rm)]}x∈X) and σ[rm].

Proof. Let δ = (Δ1, Δ2, ...) ∈ DS(r). Then (∀t ∈ R+) (∀ξ ∈ D) (∃n ∈ N) t ∈ [σδn−1(ξ), σδn(ξ)). From here Xt(ξ), Xt(Lδ ξ) ∈ Δn and ρ(Xt(ξ), Xt(Lδ ξ)) < r. Hence, Xt(ξ) = lim m→∞ Xt(Lδm ξ). Therefore (∀S ∈ B(X))

(Xt ◦ Lδm)−1 S ∈ σ(βσδm^(k−1) ; k, m ∈ N),

hence Xt−1 S ∈ σ(βσδm^(k−1) ; k, m ∈ N). The statement for the maps σr and for sequences of closed sets can be proved similarly.

23. Note about representation of sigma-algebra

The map Lr corresponds to a so-called "random" deducing sequence of balls depending on the choice of ξ. For non-random deducing sequences a rather stronger statement holds, namely, F = σ(σδm^k ; k, m ∈ N). Indeed, in each Δm^i ∈ δm let some interior point xm^i ∈ Δm^i be chosen. Then for each ξ there is the function L*δm(ξ) with (L*δm(ξ))(t) = xm^i, where σδm^(i−1)(ξ) ≤ t < σδm^i(ξ) (i ∈ N), σδm^0(ξ) = 0. Thus ρ(ξ(t), (L*δm(ξ))(t)) < rm, as xm^i, ξ(t) ∈ Δm^i.
24. Representation of sigma-algebra in terms of a Markov time

PROPOSITION 2.14. Let τ ∈ MT+. Then F = σ(ατ, τ, θτ).

Proof. Under proposition 2.13 we have F = σ(βτ′ ; τ′ ∈ T(σΔ, Δ ∈ A)) (see item 14). Let δ = (Δ1, Δ2, ...) ∈ DS, t ∈ R+, S ∈ B(X). Then for any n ≥ 0 we have

{βσδn ∈ [0, t] × S} = {σδn ≤ t, Xσδn ∈ S}
= {τ = 0, σδn ≤ t, Xσδn ∈ S} ∪ ∪k=1..n {τ ∈ (σδk−1, σδk], σδn ≤ t, Xσδn ∈ S} ∪ {τ > σδn, σδn ≤ t, Xσδn ∈ S}.

We have

{τ = 0, σδn ≤ t, Xσδn ∈ S} = {τ = 0, σδn ◦ θτ ≤ t, Xσδn ◦ θτ ∈ S},

{τ ∈ (σδk−1, σδk], σδn ≤ t, Xσδn ∈ S} = {τ ∈ (σδk−1, σδk], τ + σδ^(k,n) ◦ θτ ≤ t, Xσδ^(k,n) ◦ θτ ∈ S},

where σδ^(k,n) = σΔk +̇ ··· +̇ σΔn, and besides

{τ > σδn, σδn ≤ t, Xσδn ∈ S} = {τ > σδn, σδn ◦ ατ ≤ t, Xσδn ◦ ατ ∈ S}.

Finally,

{σδn < τ} = ∪t′∈R̃+ {σδn ◦ ατ ≤ t′, t′ < τ},

where R̃+ is the set of all rational t ∈ R+. Hence, all components of the set {βσδn ∈ [0, t] × S} belong to σ(ατ, τ, θτ). The same is true for the set {βσδn = ∞}.

25. One more sigma-algebra of preceding events

For τ ∈ MT+ let Fτ = σ(ατ, τ). Under proposition 2.7, for τ ∈ MT

Fτ = σ(ατ) = σ(ατ, τ),

i.e. the new definition is consistent with the old one (see item 8).
26. One more representation of a sigma-algebra

Let us denote by R̃+ the set of all rational t ∈ R+ (this notation is also used in the proof of the previous proposition).

THEOREM 2.3. For any τ1, τ2 ∈ MT+ we have

Fτ1 +̇ τ2 = σ(Fτ1, θτ1−1 Fτ2).

Proof. By item 25, Fτ1 +̇ τ2 = σ(ατ1 +̇ τ2, τ1 +̇ τ2). We have

σ(Fτ1, θτ1−1 Fτ2) = σ(ατ1, τ1, ατ2 ◦ θτ1, τ2 ◦ θτ1),

and ατ1 = ατ1 ◦ ατ1 +̇ τ2 ∈ σ(ατ1 +̇ τ2)/F, τ1 ∈ σ(ατ1 +̇ τ2, τ1 +̇ τ2)/B(R̄+), because

{τ1 < t} = {τ1 < t, τ1 < τ1 +̇ τ2} ∪ {τ1 < t, τ1 = τ1 +̇ τ2}
= ({τ1 ◦ ατ1 +̇ τ2 < t} ∩ ∪t′∈R̃+ {τ1 ◦ ατ1 +̇ τ2 ≤ t′, t′ < τ1 +̇ τ2}) ∪ {τ1 +̇ τ2 < t, τ1 ≥ τ1 +̇ τ2} ∈ σ(ατ1 +̇ τ2, τ1 +̇ τ2)

(see proposition 2.14). It is not difficult to show that formula [2.16] remains correct with Ta replaced by T̃a. Hence

ατ2 ◦ θτ1 = θτ1 ◦ ατ1 +̇ τ2 ∈ σ(ατ1 +̇ τ2)/(F ∩ {τ1 < ∞}),
τ2 ◦ θτ1 = τ1 +̇ τ2 − τ1 ∈ σ(ατ1 +̇ τ2, τ1 +̇ τ2)/B(R̄+).

From here σ(Fτ1, θτ1−1 Fτ2) ⊂ Fτ1 +̇ τ2. On the other hand, τ1 +̇ τ2 ∈ σ(τ1, τ2 ◦ θτ1). Under proposition 2.14, F = σ(ατ1, τ1, θτ1). For all S ∈ F and t ∈ R+ we have

(ατ1 +̇ τ2)−1 ατ1−1 S = ατ1−1 S ∈ σ(ατ1),

and also

(ατ1 +̇ τ2)−1 {τ1 < t} = {τ1 ◦ ατ1 +̇ τ2 < t, τ2 ◦ θτ1 > 0} ∪ {τ1 ◦ ατ1 +̇ τ2 < t, τ2 ◦ θτ1 = 0}
= {τ1 < t, τ2 ◦ θτ1 > 0} ∪ {τ1 ◦ ατ1 < t, τ2 ◦ θτ1 = 0} ∈ σ(τ1, ατ1, τ2 ◦ θτ1),

and

(ατ1 +̇ τ2)−1 θτ1−1 ατ2−1 S = θτ1−1 ατ2−1 S ∈ σ(ατ2 ◦ θτ1).
From here ατ1 +̇ τ2 ∈ σ(Fτ1, θτ1−1 Fτ2)/F and

Fτ1 +̇ τ2 ⊂ σ(Fτ1, θτ1−1 Fτ2).

2.4. Correct exit and continuity of points of the first exit

A property of a correct first exit of a trajectory ξ ∈ D from a set Δ ⊂ X is defined (see [HAR 77]). It is shown that the class of sets from which a trajectory has a correct exit with probability one is rather rich. On the other hand, the pair of the first exit βσΔ is continuous on the set of functions correctly exiting from Δ, with respect to the Stone-Skorokhod metric on D (see Gihman and Skorokhod [GIH 71], Skorokhod [SKO 56, SKO 61], Stone [STO 63]).

27. Correct exit

For any Δ ⊂ X and r > 0 let
Δ+r = {x ∈ X : ρ(x, Δ) < r},  Δ−r = {x ∈ X : ρ(x, X\Δ) > r}.
Let us say that the function ξ ∈ D has a correct first exit from Δ if
(1) ξ(0) ∉ ∂Δ;
(2) βσΔ+r ξ → βσΔ ξ (r ↓ 0);
(3) βσΔ−r ξ → βσΔ ξ (r ↓ 0).
Let Π(Δ) be the set of all ξ ∈ D having a correct first exit from the set Δ. In the case when Δ = B(ξ(0), r) for given ξ (the choice of Δ depends on ξ), we designate the previous set as Π(r) ≡ Π1(r). Let
Π(Δ1, Δ2) = (Π(Δ1) ∩ {σΔ1 = ∞}) ∪ (Π(Δ1) ∩ θσΔ1−1 Π(Δ2)),

where Δ1, Δ2 ⊂ X and, according to the definition of θτ, θτ−1 A ⊂ {τ < ∞}; and

Π(Δ1, ..., Δm) = ∪k=1..m−1 (∩i=1..k θτi−1−1 Π(Δi) ∩ {τk = ∞}) ∪ ∩i=1..m θτi−1−1 Π(Δi),

where τ0 = 0, τi = τi−1 +̇ σΔi, and

Π(Δ1, Δ2, ...) = ∩m=1..∞ Π(Δ1, ..., Δm).

Similarly one defines Πm(r1, ..., rm) (ri > 0) and Π(r1, r2, ...). In particular, all ri can be equal to each other: Πm(r) = Πm(r, ..., r).
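For a one-dimensional path and a half-line Δ = (−∞, c), the correct-exit conditions (2) and (3) can be probed numerically. The sketch below is an illustration under these simplifying assumptions, not part of the book: it computes the exit pair βσΔ together with its perturbed versions for Δ+r and Δ−r; for a path crossing the level c strictly, both converge to βσΔ as r ↓ 0.

```python
import math

def exit_pair(xi, dt, Delta):
    """beta_{sigma_Delta}: the pair (time, position) of the first exit,
    or None, playing the role of the point infinity."""
    for k, x in enumerate(xi):
        if not Delta(x):
            return (k * dt, x)
    return None

def exit_pairs(xi, dt, c, r):
    """Exit pairs from Delta-r, Delta and Delta+r for Delta = (-inf, c);
    here Delta+r = (-inf, c+r) and Delta-r = (-inf, c-r)."""
    minus = exit_pair(xi, dt, lambda x: x < c - r)
    base  = exit_pair(xi, dt, lambda x: x < c)
    plus  = exit_pair(xi, dt, lambda x: x < c + r)
    return minus, base, plus
```

For the ramp ξ(t) = t and c = 1, the three exit times are c − r, c and c + r, so both perturbed pairs approach βσΔ at speed r.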
28. Discontinuity and interval of constancy

Let us designate

h(ξ, t) = ∪ {Δ ∈ δ : σΔ(ξ) ≤ t}  (ξ ∈ D, t ∈ R+).

PROPOSITION 2.15. The following assertions hold:
(1) if βσΔ−r ξ ↛ βσΔ ξ (r ↓ 0, Δ ∈ A), then there exists a point of discontinuity t ≤ σΔ(ξ) of the function ξ for which ξ(t − 0) ∈ ∂Δ;
(2) let δ be a family of open sets such that for any Δ1, Δ2 ∈ δ (Δ1 ≠ Δ2) either [Δ1] ⊂ Δ2 or [Δ2] ⊂ Δ1; besides, let Δ ∈ δ be not a maximal element in δ and βσΔ+r ξ ↛ βσΔ ξ (r ↓ 0); then

{t ∈ R+ : h(ξ, t) = Δ}

is an interval of constancy of the function h(ξ, ·).
29. Countable subfamily COROLLARY 2.1. For any ξ ∈ D the family δ from proposition 2.15(2) contains nothing more than countable subfamily δ such that ξ ∈ Π(Δ) with Δ ∈ δ . Proof. It follows from denumerability of a set of all point of discontinuities of function ξ and all intervals of a constancy of function h(ξ, ·). 30. Correct exit with probability 1. THEOREM 2.4. Let δ be a family of concentric balls (B(x0 , r))r>0 (x0 ∈ X). For any probability measure P on D and any measurable map τ : D → R+ there exists no more than countable subfamily of balls δ ⊂ δ such that for Δ ∈ δ P θτ−1 D\Π(Δ) > 0.
Proof. We consider the B(D)-measurable map fk,n : D → X, where fk,n(ξ) = ξ(tk,n − 0) and tk,n is the k-th discontinuity of the function ξ with jump size ≥ 1/n. Then both maps fk,n ◦ θτ and ρ(fk,n ◦ θτ, x0) on {τ < ∞} are also B(D)-measurable. From here the set of those r > 0 for which

P({ρ(fk,n ◦ θτ, x0) = r} ∩ {τ < ∞}) > 0

is not more than countable. Let us consider the B(D)-measurable map h(·, t) : D → R, where h(ξ, t) = sup{r : σB(x0,r)(ξ) ≤ t}. From here h = h(·, ·) is a B(D)-measurable map of D into D1 = D[0,∞)(R1). Let gk,n : D1 → R be the function where gk,n(ξ) is the value of ξ ∈ D1 on the k-th interval of constancy of length not less than 1/n. Obviously, this map is B(D1)-measurable. Then the map gk,n ◦ h ◦ θτ is measurable on {τ < ∞}, and the set of those r for which

P({gk,n ◦ h ◦ θτ = r} ∩ {τ < ∞}) > 0

is not more than countable. Since for Δ = B(x0, r)

θτ−1(D\Π(Δ)) ⊂ {τ < ∞} ∩ (∪k,n {ρ(fk,n ◦ θτ, x0) = r} ∪ ∪k,n {gk,n ◦ h ◦ θτ = r}),

the theorem is proved.
31. Note about the family of balls

A similar theorem holds for a family of concentric balls depending on the choice of ξ, like (B(ξ(0), r))r>0: there exists no more than a countable set of those r for which P(θτ−1(D\Π1(r))) > 0.

32. Deducing sequences of correct exit

COROLLARY 2.2. For any probability measure P on D and any sequence (rm)1∞ (rm ↓ 0) there exists a sequence (δm)1∞ (δm ∈ DS(r′m), r′m ≤ rm) such that P(∩m=1..∞ Π(δm)) = 1.
Proof. It follows from theorem 2.1 and the obvious fact that for any δ ∈ DS the replacement of any open set Δ ∈ δ by a larger one yields a new sequence which is also deducing.
33. Note about a set

According to the note of item 31 and the denumerability of the set of those r for which P(Π(r)) < 1, there is a sequence (rm)1∞ for which P(∩m=1..∞ Π(rm)) = 1, where

Π(rm) = ∩k=1..∞ Πk(rm).
34. The Skorokhod metric

Let ϕ be a map of the set D into the set of all maps η : [0, 1] → Y′ (see item 9), defined as follows: (∀ξ ∈ D)

η(t) = (ϕ(ξ))(t) = 1 if t = 1;  η(t) = (t, ξ(tan(πt/2))) if t ∈ [0, 1).

Let D′ = ϕD. Then D′ is the set of all càdlàg maps η : [0, 1] → Y′ continuous from the left at the point 1 (with respect to the metric ρY′) with η(1) = 1. The Skorokhod metric on the set D′ is the function ρD′ of the following form (see Gihman and Skorokhod [GIH 71, p. 497]):

ρD′(η1, η2) = inf λ∈Λ[0,1] [ sup t≤1 ρY′(η1(t), η2(λ(t))) + sup t≤1 |t − λ(t)| ],

where η1, η2 ∈ D′ and Λ[0, t] is the set of all increasing maps of the interval [0, t] onto itself (t > 0). For any ξ1, ξ2 ∈ D we define

ρD(ξ1, ξ2) = ρD′(ϕ(ξ1), ϕ(ξ2)).

The metric ρD is referred to as the Stone-Skorokhod metric. It is known [GIH 71] that this metric transforms D into a separable metric space. The metric ρC′ is similarly defined on the set C′ = ϕC, where C is the set of all continuous ξ ∈ D:

ρC′(η1, η2) = sup {ρY′(η1(t), η2(t)) : 0 ≤ t ≤ 1}  (η1, η2 ∈ C′).

It generates the topology of uniform convergence on all finite intervals. Thus, for ξ1, ξ2 ∈ C

ρC(ξ1, ξ2) = ρC′(ϕ(ξ1), ϕ(ξ2)).
35. Borel sigma-algebra

PROPOSITION 2.16. The following representations hold:

F = B(D, ρD),  F ∩ C = B(C, ρC).

Proof. For the proof of these statements, see Gihman and Skorokhod [GIH 71].

36. Distance in the Skorokhod metric

Let ξ1, ξ2 ∈ D and let t be a continuity point of both functions. Let us designate by ρDt(ξ1, ξ2) the distance in the Skorokhod metric on the interval [0, t]:

ρDt(ξ1, ξ2) = inf λ∈Λ[0,t] [ sup s≤t ρ(ξ1(s), ξ2(λ(s))) + sup s≤t |s − λ(s)| ].

Similarly, for ξ1, ξ2 ∈ C,

ρCt(ξ1, ξ2) = sup s≤t ρ(ξ1(s), ξ2(s)).
37. Estimate from above

THEOREM 2.5. The following estimates hold:
(1) let ξ1, ξ2 ∈ D and let t be a continuity point of both functions; then

ρD(ξ1, ξ2) ≤ (4/π) ρDt(ξ1, ξ2) + π/2 − arctan t;

(2) for any ξ1, ξ2 ∈ C and t > 0,

ρC(ξ1, ξ2) ≤ ρCt(ξ1, ξ2) + π/2 − arctan t.

Proof. (1) Let a(s) = (2/π) arctan s (s ∈ R+). We have

ρD(ξ1, ξ2) = inf λ∈Λ[0,1] [ sup s<1 ρY′((s, ξ1(a−1(s))), (λ(s), ξ2(a−1(λ(s))))) + sup s<1 |s − λ(s)| ]
≤ inf λ∈Λ[0,a(t)] [ sup s≤a(t) ρY′((s, ξ1(a−1(s))), (λ(s), ξ2(a−1(λ(s)))))
∨ sup a(t)<s<1 ρY′((s, ξ1(a−1(s))), (s, ξ2(a−1(s)))) + sup s≤a(t) |s − λ(s)| ].

Furthermore,

sup s≤a(t) ρY′((s, ξ1(a−1(s))), (λ(s), ξ2(a−1(λ(s)))))
= sup s≤a(t) [ ((1 − s) ∧ (1 − λ(s))) arctan ρ(ξ1(a−1(s)), ξ2(a−1(λ(s)))) + |s − λ(s)| ]
≤ sup s≤a(t) arctan ρ(ξ1(a−1(s)), ξ2(a−1(λ(s)))) + sup s≤a(t) |s − λ(s)|
= arctan sup s≤t ρ(ξ1(s), ξ2(λ1(s))) + sup s≤t |a(s) − a(λ1(s))|
≤ arctan sup s≤t ρ(ξ1(s), ξ2(λ1(s))) + (2/π) sup s≤t |s − λ1(s)|,

where λ1 = a1−1 ◦ λ ◦ a1 is an increasing map of [0, t] onto itself and a1 is the restriction of a to [0, t]. Besides,

sup a(t)<s<1 ρY′((s, ξ1(a−1(s))), (s, ξ2(a−1(s)))) = sup a(t)<s<1 (1 − s) arctan ρ(ξ1(a−1(s)), ξ2(a−1(s)))
≤ (1 − a(t)) (π/2) = π/2 − arctan t.

From here

ρD(ξ1, ξ2) ≤ inf λ1∈Λ[0,t] [ ( arctan sup s≤t ρ(ξ1(s), ξ2(λ1(s))) ∨ (π/2 − arctan t) ) + (2/π) sup s≤t |s − λ1(s)| ]
≤ inf λ1∈Λ[0,t] (4/π) [ arctan sup s≤t ρ(ξ1(s), ξ2(λ1(s))) + sup s≤t |s − λ1(s)| ] + π/2 − arctan t
≤ (4/π) ρDt(ξ1, ξ2) + π/2 − arctan t.

(2) It is proved similarly.
38. Estimates for metrics

PROPOSITION 2.17. The following properties hold:

(1) (∀t ∈ R+) (∀ε > 0) (∃δ > 0): for ξ1, ξ2 ∈ D,

ρD(ξ1, ξ2) < δ ⟹ (∃λ ∈ Λ[0, ∞)) sup_{s≤t} ρ(ξ1(s), ξ2(λ(s))) + sup_{s≤t} |s − λ(s)| < ε;

(2) (∀ξ1, ξ2 ∈ C) (∀t ∈ R+) (∀ε > 0) (∃δ > 0): ρC(ξ1, ξ2) < δ ⟹ ρD^t(ξ1, ξ2) < ε.

Proof. (1) If ρD(ξ1, ξ2) < δ, then (∃λ ∈ Λ[0, 1]) sup_{s<1} |s − λ(s)| < δ and

sup_{s<1} ((1 − s) ∧ (1 − λ(s))) arctan ρ(ξ1(a⁻¹(s)), ξ2(a⁻¹(λ(s)))) < δ,

where a(t) = (2/π) arctan t (t ∈ R+). Let t ∈ R+ and δ < (1 − a(t))/(1 + 2/π). Then sup_{s≤t} |a(s) − λ(a(s))| < δ, and since

|arctan s − arctan r| ≥ |s − r| / (1 + (s ∨ r)²),

we have sup_{s≤t} |s − a⁻¹(λ(a(s)))| < (π/2) δ (1 + tan²((π/2)(a(t) + δ))), where

s ∨ a⁻¹(λ(a(s))) = a⁻¹(a(s) ∨ λ(a(s))) < a⁻¹(a(t) + δ) = tan((π/2)(a(t) + δ)) < ∞.

Furthermore,

sup_{s≤t} ((1 − a(s)) ∧ (1 − λ(a(s)))) arctan ρ(ξ1(s), ξ2(a⁻¹(λ(a(s))))) < δ,

and since (1 − a(s)) ∧ (1 − λ(a(s))) > 1 − a(t) − δ,

sup_{s≤t} ρ(ξ1(s), ξ2(a⁻¹(λ(a(s))))) < tan(δ/(1 − a(t) − δ)).

Let us choose δ such that (π/2) δ (1 + tan²((π/2)(a(t) + δ))) ≤ ε/2 and tan(δ/(1 − a(t) − δ)) ≤ ε/2. Note that a⁻¹ ∘ λ ∘ a ∈ Λ[0, ∞). So the first statement is proved.

(2) The second statement is proved similarly.

39. Continuity

THEOREM 2.6. For any k ∈ N and Δ1, …, Δk ∈ A the map β(Δ1, …, Δk) is continuous on Π(Δ1, …, Δk).
Proof. Let Δ ∈ A, σΔ(ξ) = ∞, ξ ∈ Π(Δ) and ρD(ξn, ξ) → 0 (n → ∞). Then, by proposition 2.17, (∀t ∈ R+) (∀ε > 0) (∃δ_{ε,t} > 0) (∃n0 ∈ N) (∀n > n0) the inequality ρD(ξn, ξ) < δ_{ε,t} is sufficient for the conditions

sup_{s≤t} ρ(ξ(s), ξn(λn(s))) < ε,  [2.17]

sup_{s≤t} |s − λn(s)| < ε  [2.18]

to be fulfilled for some λn ∈ Λ[0, ∞). Let ε > 0 be such that σ_{Δ−ε}(ξ) > t. Then σΔ(ξn ∘ λn) > t and, hence, λn⁻¹(σΔ(ξn)) > t, σΔ(ξn) > λn(t) and σΔ(ξn) > t − ε. From here, since t and ε are arbitrary, we obtain

σΔ(ξn) → ∞,  β_{σΔ}(ξn) → β_{σΔ}(ξ).  [2.19]

Let σΔ(ξ) < ∞ and ρD(ξn, ξ) → 0 (n → ∞). Then the same conditions are sufficient for inequalities [2.17] and [2.18]. Let us take ε and t such that σ_{Δ+ε}(ξ) ≤ t. Then σ_{Δ−ε}(ξ) ≤ σΔ(ξn ∘ λn) ≤ σ_{Δ+ε}(ξ). Evidently, σΔ(ξn ∘ λn) = λn⁻¹(σΔ(ξn)). Hence, the previous inequality is equivalent to the inequality

λn(σ_{Δ−ε}(ξ)) ≤ σΔ(ξn) ≤ λn(σ_{Δ+ε}(ξ)),  [2.20]

and consequently

σ_{Δ−ε}(ξ) − ε ≤ σΔ(ξn) ≤ σ_{Δ+ε}(ξ) + ε.  [2.21]

Let us consider two possibilities:

(a) σΔ(ξ) is a continuity point of the function ξ; then

ρ(X_{σΔ}(ξ), X_{σΔ}(ξn ∘ λn)) ≤ ρ(ξ(σΔ(ξ)), ξ(σΔ(ξn ∘ λn))) + ρ(ξ(σΔ(ξn ∘ λn)), (ξn ∘ λn)(σΔ(ξn ∘ λn)))
< ε + sup{ρ(ξ(σΔ(ξ)), ξ(s)) : σ_{Δ−ε}(ξ) ≤ s ≤ σ_{Δ+ε}(ξ)};

from here β_{σΔ}(ξn) → β_{σΔ}(ξ);

(b) σΔ(ξ) is a discontinuity point of the function ξ; since ξ ∈ Π(Δ), we have ξ(σΔ(ξ) − 0) ∈ Δ and (∃r > 0)

X_{σ_{Δ−r}}(ξ) ∈ Δ,  X_{σ_{Δ−r}}(ξ) = X_{σΔ}(ξ);

since ρ(ξ(s), (ξn ∘ λn)(s)) < ε (s ≤ t), for ε < r we have σΔ(ξn ∘ λn) ≥ σΔ(ξ); from here

ρ(X_{σΔ}(ξ), X_{σΔ}(ξn ∘ λn)) ≤ ρ(X_{σΔ}(ξ), ξ(σΔ(ξn ∘ λn))) + ρ(ξ(σΔ(ξn ∘ λn)), (ξn ∘ λn)(σΔ(ξn ∘ λn)))
≤ ε + sup{ρ(X_{σΔ}(ξ), ξ(s)) : σΔ(ξ) ≤ s ≤ σ_{Δ+ε}(ξ)};

therefore, by right-continuity of the functions ξ, we have

ρ(X_{σΔ}(ξ), X_{σΔ}(ξn)) → 0,  β_{σΔ}(ξn) → β_{σΔ}(ξ).

Let Δ1, …, Δk ∈ A, τ0 = 0, τk = τ_{k−1} +̇ σ_{Δk}, τ_{k−1}(ξ) < ∞, ξ ∈ Π(Δ1, …, Δk), ρD(ξn, ξ) → 0 and β_{τi}(ξn) → β_{τi}(ξ) (i ≤ k − 1, n → ∞, k ∈ N), and for all n > n0

X_{τi}(ξn) ∈ Δ_{i+1} ⟺ X_{τi}(ξ) ∈ Δ_{i+1},  X_{τi}(ξn) ∉ Δ_{i+1} ⟺ X_{τi}(ξ) ∉ [Δ_{i+1}]

(see item 27 for the definition of Π(Δ)). There are three possibilities:

(a) τk(ξ) = ∞; then X_{τ_{k−1}}(ξ) ∈ Δk, and if ε > 0 is such that (τ_{k−1} +̇ σ_{Δk−ε})(ξ) > t, then τk(ξn) > t − ε; from here τk(ξn) → ∞ and β_{τk}(ξn) → β_{τk}(ξ);

(b) τk(ξ) < ∞, X_{τk}(ξ) ∉ Δk, and τk(ξ) is a continuity point of the function ξ; evidently, in this case β_{τk}(ξn) → β_{τk}(ξ);

(c) τk(ξ) < ∞, X_{τk}(ξ) ∉ Δk, and τk(ξ) is a discontinuity point of the function ξ; moreover, ξ(τk − 0) ∈ Δk. In this case we also obtain β_{τk}(ξn) → β_{τk}(ξ).

40. Note about a set

The same technique as in the proof of theorem 2.6 can be applied to prove the following statement: (∀k ∈ N) (∀r > 0) the function β_{σr^k} is continuous on Πk(r).

2.5. Time of regeneration

Semi-Markov processes, which will be defined in the next chapter, are based on the concept of a regeneration time. It is a Markov time with respect to which the given family of measures possesses the Markov property. Let us recall that the Markov property of the process at the time τ ∈ MT is the independence of the future (after τ) from the past (before τ) given the fixed present (at τ) (see Shurenkov [SHU 77], Blumenthal and Getoor [BLU 68], Keilson [KEI 79], Meyer [MEY 73], etc.). Unfortunately, the most appropriate term for this case, "Markov time", is used in a wide sense without relation to any probability measure (see Gihman and Skorokhod [GIH 73], Dynkin [DYN 63], Dynkin and Yushkevich [DYN 67], Itô and McKean [ITO 65], etc.). The term "time of Markov interference of chance", which is also sometimes used, does not seem to us good enough, mainly because of its length. The term "time of regeneration", generally speaking, is also occupied. It is usually used in a narrower sense, namely, when the independence of the future from the past holds only with respect to the hitting time of one fixed position. The "family" of measures of such a process consists of one measure (see Maisonneuve [MAI 71, MAI 74], Mürmann [MUR 73], Smith [SMI 55], Taksar [TAK 80], etc.). The most correct term for this class of times would be "Markov regeneration times". However, we will use a shorter name.
In this section some definitions connected with a regeneration time are discussed, and the structure of the set of regeneration times is investigated. The closure of this set with respect to the operation +̇ allows us, in the following chapter, to give a definition of semi-Markov processes using a rather small set of Markov times.
41. Measures, integrals, space of functions

Let (E, E) be a set with a sigma-algebra of subsets, μ a measure on E, A ∈ E, and f ∈ E/B(R) an integrable function (with respect to μ). In item 1.7 we introduced the designation μ(f; A) for the integral of the function f with respect to the measure μ over the set A. In the simplest case μ(f) = μ(f; E). For probability measures we reserve the notations Ex(f; A), EQx(f; A) and so on. Let (Bi, Bi) be sets with sigma-algebras of subsets, and let fi ∈ E/Bi be a measurable map E → Bi (i = 1, …, k). Let us designate by μ ∘ (f1, …, fk)⁻¹ the measure induced by the measure μ and the maps fi on the measurable space (B1 × ··· × Bk, B1 ⊗ ··· ⊗ Bk). It means that for any S = S1 × ··· × Sk (Si ∈ Bi)

μ ∘ (f1, …, fk)⁻¹(S) = μ(f1⁻¹S1 ∩ ··· ∩ fk⁻¹Sk).

Let C0(A) be the set of continuous and bounded real functions on a topological space A.
42. Admissible family of probability measures

A family of probability measures (Px) = (Px)_{x∈X} on (D, F) is said to be admissible if
(a) (∀x ∈ X) Px(X0 = x) = 1;
(b) for any B ∈ F the map x ↦ Px(B) is B(X)-measurable.
43. Time of regeneration

A time τ ∈ MT+ is said to be a regeneration time of an admissible family of measures (Px) if for any x ∈ X and B ∈ F, Px-a.s.,

Px(θτ⁻¹B | Fτ) = P_{Xτ}(B) on the set {τ < ∞}.

Let RT(Px) be the set of regeneration times of an admissible family (Px). When the family in question is clear from the context, we simply write RT.
44. Lambda-continuous family of measures

For any admissible set of measures (Px), any k ∈ N, λi > 0, and any bounded B(Xk)-measurable function ϕ we define a function of x ∈ X:

R(λ1, …, λk; ϕ | x) = Px ∫_{0≤t1≤···≤tk<∞} ϕ(X_{t1}, …, X_{tk}) exp(−Σ_{i=1}^k λi t̄i) Π dti,

where t̄i = ti − ti−1, t0 = 0. Considering this function as an operator on the set of argument functions, we can treat it as a multi-dimensional generalization of the λ-potential operator R(λ; ϕ | x) ≡ Rλ(ϕ) (see Blumenthal and Getoor [BLU 68, p. 41]). An admissible set of measures (Px) is said to be λ-continuous if for any k ∈ N, any λ1, …, λk > 0, and any ϕ1, …, ϕk ∈ C0 we have R(λ1, …, λk; ϕ | ·) ∈ C0, where C0 = C0(X) and

ϕ(x1, …, xk) = ϕ1(x1) ··· ϕk(xk),  (x1, …, xk) ∈ Xk.
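The one-dimensional case k = 1 can be made concrete by Monte Carlo for a toy process (a hypothetical example of ours, not from the book): start at x, sit there an Exp(μ) time T, jump once to a point y, and stay there forever. For this process the potential R(λ; ϕ | x) = Ex ∫₀^∞ e^{−λt} ϕ(Xt) dt integrates in closed form pathwise, and its expectation is (ϕ(x)(1 − m) + ϕ(y)m)/λ with m = μ/(μ + λ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy process (assumed for illustration): X_t = x for t < T, X_t = y for
# t >= T, with T ~ Exp(mu).  Then
#   R(lambda; phi | x) = (phi(x)*(1 - m) + phi(y)*m) / lam,  m = mu/(mu+lam).
mu, lam = 2.0, 0.7
x, y = 0.0, 1.0
phi = lambda z: z + 1.0

T = rng.exponential(1.0 / mu, size=200_000)
# The path integral int_0^inf e^{-lam t} phi(X_t) dt in closed form per path:
vals = phi(x) * (1 - np.exp(-lam * T)) / lam + phi(y) * np.exp(-lam * T) / lam
mc = vals.mean()

m = mu / (mu + lam)
exact = (phi(x) * (1 - m) + phi(y) * m) / lam
print(mc, exact)   # the two agree to Monte Carlo accuracy
```

The same scheme extends to k > 1 by integrating over ordered time tuples, which is exactly what the multi-dimensional operator R(λ1, …, λk; ϕ | x) does.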
45. Other forms of the definition

A class of subsets of a set is said to be a pi-system if it is closed with respect to all pairwise intersections of its elements (see Dynkin [DYN 59, p. 9]).

PROPOSITION 2.18. Let A and A1 be pi-systems of sets for which F = σ(A), Fτ = σ(A1), and let V and Vτ be classes of real functions such that F = σ(V), Fτ = σ(Vτ). The following conditions are then equivalent:
(a) τ ∈ RT;
(b) Px(θτ⁻¹B ∩ B1) = Ex(P_{Xτ}(B); B1 ∩ {τ < ∞});
(c) Ex(f ∘ θτ; B1) = Ex(E_{Xτ}(f); B1 ∩ {τ < ∞});
(d) Ex((f ∘ θτ) · f1) = Ex(E_{Xτ}(f) · f1; {τ < ∞});
(e) Ex(f1; θτ⁻¹B) = Ex(P_{Xτ}(B) · f1; {τ < ∞}),
where each of the conditions (b)–(e) is fulfilled for any x ∈ X, B ∈ A, B1 ∈ A1, f ∈ V, f1 ∈ Vτ correspondingly.

Proof. The proof follows from sigma-additivity of the measures Px and the definitions of conditional expectation and of a regeneration time (see, e.g., Gihman and Skorokhod [GIH 73, p. 57]).
46. Properties of the set of regeneration times

THEOREM 2.7. The following properties of RT hold:
(1) τ1, τ2 ∈ RT ⟹ τ1 +̇ τ2 ∈ RT;
(2) if (∀n ∈ N) τn ∈ RT and τ = τn on Bn ∈ F_{τn}, where Bi ∩ Bj = Ø (i ≠ j) and {τ < ∞} ⊂ ∪_{n=1}^∞ Bn, then τ ∈ RT;
(3) if (∀n ∈ N) τn ∈ RT, τn ≥ τ, τn → τ, and (Px) is a lambda-continuous family, then τ ∈ RT;
(4) if (∀n ∈ N) τn ∈ RT, τn ≤ τ, τn → τ, (Px) is a lambda-continuous family, and also (∀x ∈ X) Px(X_{τn} ↛ Xτ, τ < ∞) = 0, then τ ∈ RT.
Proof. (1) From theorem 2.3 and proposition 2.18(d) it follows that it is enough to prove

Ex(f ∘ θ_{τ1+̇τ2} · (f2 ∘ θ_{τ1}) · f1) = Ex(E_{X_{τ1+̇τ2}}(f) · (f2 ∘ θ_{τ1}) · f1),

where f1 ∈ F_{τ1}/B[0, 1], f2 ∈ F_{τ2}/B[0, 1], f ∈ F/B[0, 1], {f1 = 0} ⊃ {τ1 = ∞} and {f2 = 0} ⊃ {τ2 = ∞}. It follows from proposition 2.18(d) and the equalities

Ex(f ∘ θ_{τ1+̇τ2} · (f2 ∘ θ_{τ1}) · f1) = Ex((f ∘ θ_{τ2}) ∘ θ_{τ1} · (f2 ∘ θ_{τ1}) · f1)
= Ex(E_{X_{τ1}}((f ∘ θ_{τ2}) · f2) · f1)
= Ex(E_{X_{τ1}}(E_{X_{τ2}}(f) · f2) · f1)
= Ex((E_{X_{τ2}}(f) · f2) ∘ θ_{τ1} · f1)
= Ex(E_{X_{τ2}∘θ_{τ1}}(f) · (f2 ∘ θ_{τ1}) · f1)
= Ex(E_{X_{τ1+̇τ2}}(f) · (f2 ∘ θ_{τ1}) · f1).

(2) Since F_{τn} ⊂ F_{τn+} for τn ∈ RT, we have (∀t ∈ R+)

{τ < t} = ∪_{n=1}^∞ ({τn < t} ∩ Bn) ∈ Ft.

From here τ ∈ MT+. For such τ, according to the definition in item 25, Fτ = σ(ατ, τ), and also (B ∈ F, t ∈ R+)

ατ⁻¹B ∩ Bn = α_{τn}⁻¹B ∩ Bn ∈ F_{τn},  {τ < t} ∩ Bn = {τn < t} ∩ Bn ∈ F_{τn}.

Therefore B ∩ Bn ∈ F_{τn} for any B ∈ Fτ, and according to proposition 2.18(c) we have

Ex(f ∘ θτ; B ∩ {τ < ∞}) = Σ_{n=1}^∞ Ex(f ∘ θ_{τn}; B ∩ Bn ∩ {τ < ∞})
= Σ_{n=1}^∞ Ex(E_{X_{τn}}(f); B ∩ Bn ∩ {τ < ∞})
= Ex(E_{Xτ}(f); B ∩ {τ < ∞}),

where f ∈ F/B[0, 1] and B ∈ Fτ.

(3) Obviously, τ ∈ MT+ (see Blumenthal and Getoor [BLU 68, p. 32]). Besides, Fτ ⊂ F_{τn}. Really, ατ = ατ ∘ α_{τn}; hence ατ is an F_{τn}-measurable map, and also {τ < t} = {τ < t ≤ τn} ∪ {τn < t} ∈ F_{τn}, where

{τ < t ≤ τn} = α_{τn}⁻¹({τ < t} ∩ {t ≤ τn}) ∈ F_{τn}.

Furthermore, {τn < ∞} ⊂ {τ < ∞} and {τ < ∞} = ∪_{n=1}^∞ {τn < ∞}. Let

f = ∫_{0≤t1≤···≤tk<∞} Π_{i=1}^k exp(−λi t̄i) ϕi(X_{ti}) dti,

where λ1, …, λk > 0 and ϕ1, …, ϕk ∈ C0. Then (∀ξ ∈ D) f ∘ θ_{τn}(ξ) → f ∘ θτ(ξ), and

E_{X_{τn}(ξ)}(f) = R(λ1, …, λk; ϕ | X_{τn}(ξ)) → R(λ1, …, λk; ϕ | Xτ(ξ)) = E_{Xτ(ξ)}(f),

where ϕ(x1, …, xk) = ϕ1(x1) ··· ϕk(xk). From here

Ex(f ∘ θτ; B ∩ {τ < ∞}) = lim_{n→∞} Ex(f ∘ θ_{τn}; B ∩ {τn < ∞})
= lim_{n→∞} Ex(E_{X_{τn}}(f); B ∩ {τn < ∞})
= Ex(E_{Xτ}(f); B ∩ {τ < ∞}),

where B ∈ Fτ. Hence, by proposition 2.18(c), τ ∈ RT.
(4) Obviously, τ ∈ MT+ (see Blumenthal and Getoor [BLU 68, p. 32]). Let Cτ = {τ = ∞} ∪ {τ < ∞, X_{τn} → Xτ}. Evidently, Cτ ∈ Fτ and (∀x ∈ X) Px(Cτ) = 1, and besides

Fτ ∩ Cτ = σ(∪_{n=1}^∞ F_{τn}) ∩ Cτ.

Really, {τ ≤ t} = ∩_{n=1}^∞ {τn ≤ t} ∈ σ(∪_{n=1}^∞ F_{τn}), and (∀t ∈ R+) on the set Cτ we have Xt ∘ α_{τn} → Xt ∘ ατ. Let f be as in part (3) of the proof. Then, because of lambda-continuity, on the set Cτ ∩ {τ < ∞} it is true that E_{X_{τn}}(f) → E_{Xτ}(f) (n → ∞). Let δ ∈ DS(r) and Px(θτ⁻¹Π(δ)) = 1. Then

|f ∘ θτ − f ∘ θ_{τn}| ≤ |f ∘ θτ − fr ∘ θτ| + |fr ∘ θτ − fr ∘ θ_{τn}| + |fr ∘ θ_{τn} − f ∘ θ_{τn}|,

where

fr = f ∘ Lδ = ∫_{0≤t1≤···≤tk} Π_{i=1}^k exp(−λi t̄i) (ϕi ∘ X_{ti} ∘ Lδ) dti.

Let us assume, without loss of generality, that all ϕi (i = 1, …, k) are uniformly continuous. Since (∀ξ ∈ D) (∀t ∈ R+) ρ(Xt(Lδξ), Xt(ξ)) < r, the first and third terms of the right-hand side of the previous inequality tend to zero as r → 0 uniformly in n. On the other hand,

fr = ∫ Π_{i=1}^k ϕi(X_{σδ^{li}}) exp(−Σ_{i=1}^k λi t̄i) Π dti,

where the integral is taken over the region

{0 ≤ t1 ≤ ··· ≤ tk; σδ^{l1} ≤ t1 < σδ^{l1+1}, …, σδ^{lk} ≤ tk < σδ^{lk+1}},

and hence fr is a continuous function of σδ^{li}, σδ^{li+1}, i = 1, …, k. Let l ≥ 0 and δ = (Δ1, Δ2, …). Consider two possibilities on {τ < ∞}:

(a) Xτ ∉ ∪_{i=1}^l [Δi]. Here Px-a.s. (∃n0 ∈ N) (∀n > n0)

X_{τn} ∉ ∪_{i=1}^l Δi.

From here τn +̇ σδ^l = τn → τ = τ +̇ σδ^l, and on Cτ ∩ {τ < ∞} we have X_{σδ^l} ∘ θ_{τn} = X_{τn} → Xτ = X_{σδ^l} ∘ θτ.

(b) (∃j ≤ l) Xτ ∈ Δj ∩ ∩_{i=1}^{j−1} (X∖[Δi]). Here Px-a.s. (∃n0 ∈ N) (∀n > n0)

X_{τn} ∈ Δj ∩ ∩_{i=1}^{j−1} (X∖Δi).

From here τn +̇ σδ^l → τ +̇ σδ^l (see proposition 2.11(1)), and on Cτ ∩ {τ < ∞} we have X_{σδ^l} ∘ θ_{τn} → X_{σδ^l} ∘ θτ.

From here it follows that Px-a.s. fr ∘ θ_{τn} → fr ∘ θτ, where Px(θτ⁻¹Π(δ)) = 1. Then, under corollary 2.2, Px-a.s. on {τ < ∞} we have f ∘ θ_{τn} → f ∘ θτ. For any B ∈ σ(∪_{n=1}^∞ F_{τn}) and A = ∩_{n=1}^∞ {τn < ∞} we have

Ex(lim inf_{n→∞} f ∘ θ_{τn}; A ∩ B) ≤ lim inf_{n→∞} Ex(f ∘ θ_{τn}; A ∩ B)
≤ lim inf_{n→∞} Ex(f ∘ θ_{τn}; {τn < ∞} ∩ B)
= lim inf_{n→∞} Ex(E_{X_{τn}}(f); {τn < ∞} ∩ B)
≤ Ex(lim sup_{n→∞} E_{X_{τn}}(f); A ∩ B).

Since {τ < ∞} ⊂ ∩_{n=1}^∞ {τn < ∞} and {τ < ∞} ∈ σ(∪_{n=1}^∞ F_{τn}), we obtain

Ex(f ∘ θτ; {τ < ∞} ∩ B) ≤ Ex(E_{Xτ}(f); {τ < ∞} ∩ B).

Interchanging the positions of f ∘ θ_{τn} and E_{X_{τn}}(f) in the previous inequalities, we obtain

Ex(f ∘ θτ; {τ < ∞} ∩ B) ≥ Ex(E_{Xτ}(f); {τ < ∞} ∩ B).

From here, by proposition 2.18(c), τ ∈ RT.
Chapter 3
General Semi-Markov Processes
In this chapter general semi-Markov processes are defined and investigated. In such a process the set of regeneration times contains all first exit times from open subsets of its space of states. Remember that we consider a more general class of regeneration times than that of renewal theory (Smith [SMI 55]); namely, a regeneration time is a Markov time for which the corresponding admissible family of measures has the Markov property. Obvious examples of semi-Markov processes are a strictly Markov process (Dynkin [DYN 63]) and a stepped semi-Markov process (Korolyuk and Turbin [KOR 76]). As will be shown later on, the class of semi-Markov processes is far from being exhausted by these. Among them there are continuous non-Markov semi-Markov processes [HAR 71b]. Other possible definitions of semi-Markov processes are considered (see Gihman and Skorokhod [GIH 73], Chacon and Jamison [CHA 79], Çinlar [CIN 79]). A large role in the analysis of semi-Markov processes is played by semi-Markov transition functions. An appropriate countable family of semi-Markov transition functions defines the process uniquely. The problem of existence of a semi-Markov process with a given coordinated family of transition functions is considered in the next chapter. In the second half of this chapter the conditions are analyzed under which a semi-Markov process is Markovian. Properties of semi-Markov transition functions making it possible to judge the Markovness of a given semi-Markov process are of special interest. The so-called lambda-characteristic operator defined for this purpose (an analog of the characteristic operator of Dynkin) depends linearly on λ (the parameter of the Laplace transformation) only when the process is Markov. For some types of semi-Markov processes it is possible to judge the Markovness of the process by a property of its trajectories. One such property, the absence of intervals of constancy, is considered at the end of the chapter.
3.1. Definition of a semi-Markov process

The introduced definition enables us to look at any strictly Markov process as a semi-Markov process of a special kind. Some other definitions are also considered.

1. Semi-Markov process

A semi-Markov (SM) process is said to be a process determined by an admissible family of probability measures (Px) for which (∀Δ ∈ A) σΔ ∈ RT(Px). The notation (Px) ∈ SM means that the process with the family of measures (Px) is semi-Markovian.

2. Strictly Markov process

This is a process for which any time τ ∈ ST is a regeneration time. Hence, any strictly Markov process is semi-Markovian.

3. The first exit from a spherical neighborhood

PROPOSITION 3.1. Let (Px) ∈ SM. Then (∀r > 0) σr ∈ RT(Px).

Proof. We have (∀x ∈ X) Px(X0 = x) = 1 and Px(σr = σΔ) = 1, where Δ = B(x, r). Further, F_{σr} ∩ {X0 = x} = F_{σΔ} ∩ {X0 = x}, and for any B ∈ F_{σr}

B = B ∩ {X0 = x} ∈ F_{σr} ∩ {X0 = x} ⊂ F_{σΔ}.

From here, for any f ∈ F/B(R),

Ex(f ∘ θ_{σr}; B ∩ {σr < ∞}) = Ex(f ∘ θ_{σΔ}; B ∩ {σΔ < ∞})
= Ex(E_{X_{σΔ}}(f); B ∩ {σΔ < ∞})
= Ex(E_{X_{σr}}(f); B ∩ {σr < ∞}).

4. Jump- and step-functions

A function ξ ∈ D is said to be a jump-function if (∀t ∈ R+) (∃δ > 0) (∀h ∈ (0, δ)) ξ(t) = ξ(t + h), i.e. ξ is right-continuous in the discrete topology (Kelly [KEL 68]). The function ξ is said to be stepped if it is a jump-function and has a finite number of jumps on each bounded interval. D′ denotes the class of all jump-functions and D0 the class of all step-functions.
5. Semi-Markov walk

A semi-Markov walk is an admissible set of probability measures (Px) for which
(1) (∀x ∈ X) Px(D0) = 1;
(2) σ0 ∈ RT(Px), where σ0(ξ) = inf{t ≥ 0 : ξ(t) ≠ ξ(0)} is the first exit time from the initial point

(see Gihman and Skorokhod [GIH 73], Kovalenko [KOV 65], Korolyuk, Brodi and Turbin [KOR 74], Korolyuk and Turbin [KOR 76], Nollau [NOL 80], Pyke [PYK 61], Smith [SMI 55, SMI 66], etc.).

6. Properties of SM walks

PROPOSITION 3.2. Let (Px) be an admissible set of probability measures. Then
(1) if (∀x ∈ X) Px(D′) = 1 and (∀Δ ∈ A) σΔ ∈ RT, then σ0 ∈ RT;
(2) if (∀x ∈ X) Px(D0) = 1 and σ0 ∈ RT, then (∀Δ ∈ A) σΔ ∈ RT, i.e. the semi-Markov walk is a semi-Markov process.

Proof. (1) We have (∀ξ ∈ D) σ0(ξ) = lim_{r↓0} σr(ξ). Furthermore, (∀r > 0) F_{σ0} ⊂ F_{σr} and (∀t > 0) X_{σr+t} → X_{σ0+t} (r → 0). From here (∀k ∈ N) (∀t1, …, tk ∈ R+) (∀f ∈ C(Xk)) (∀B ∈ F_{σ0})

Ex(f ∘ (X_{t1}, …, X_{tk}) ∘ θ_{σ0}; B ∩ {σ0 < ∞})
= lim_{r↓0} Ex(f ∘ (X_{t1}, …, X_{tk}) ∘ θ_{σr}; B ∩ {σ0 < ∞})
= lim_{r↓0} Ex(E_{X_{σr}}(f ∘ (X_{t1}, …, X_{tk})); B ∩ {σ0 < ∞})
= Ex(E_{X_{σ0}}(f ∘ (X_{t1}, …, X_{tk})); B ∩ {σ0 < ∞}).

The latter equality follows from the convergence X_{σr} → X_{σ0} (r → 0) in the discrete topology on the set D′ and the property σr ∈ RT; see proposition 3.1.

(2) We have (∀ξ ∈ D0) (∀Δ ∈ A) (∃k ∈ N0) σΔ(ξ) = σ0^k(ξ), where σ0^k = σ0^{k−1} +̇ σ0 (k ∈ N, σ0^0 = 0). Under theorem 2.7(1), (∀k ∈ N) σ0^k ∈ RT, and hence under theorem 2.7(2) σΔ ∈ RT.

7. Quasi-continuity from the left

An admissible family of measures (Px) is said to be quasi-continuous from the left on the set of its regeneration times if for any non-decreasing sequence (τn), τn ∈ RT, we have (∀x ∈ X) Px(X_{τn} ↛ Xτ, τ < ∞) = 0, where τ = lim τn (Gihman and Skorokhod [GIH 73], Dynkin [DYN 59], Blumenthal and Getoor [BLU 68]). Let us denote by QC the class of all such families.
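The decomposition σΔ = σ0^k used in the proof of proposition 3.2(2) is easy to see on a simulated stepped trajectory. The sketch below is a hypothetical example of ours (not from the book): a walk on the integer lattice with Weibull-distributed sojourn times (non-exponential, so the walk is not Markov as a one-dimensional process), stepping ±1 with equal probability. The first exit time from an "open set" of lattice states is, by construction, the k-th iterate of σ0:

```python
import random

random.seed(1)

def walk_until_exit(Delta, max_steps=100_000):
    """Simulate the toy semi-Markov walk started at 0 (assumed to lie in
    Delta) and return the list of (jump time, new state) pairs up to and
    including the first jump that leaves Delta."""
    t, state, path = 0.0, 0, []
    for _ in range(max_steps):
        t += random.weibullvariate(1.0, 0.5)   # one sigma_0 sojourn
        state += random.choice((-1, 1))        # embedded +-1 step
        path.append((t, state))
        if state not in Delta:
            break
    return path

Delta = {-1, 0, 1}            # lattice analog of an open set around 0
path = walk_until_exit(Delta)
k = len(path)                 # number of sigma_0 iterations performed
sigma_Delta = path[-1][0]     # first exit time from Delta = sigma_0^k
print(k, sigma_Delta)
```

Each sojourn is one application of σ0; summing k of them (with the shifted starting states) is exactly the iterate σ0^k = σ0^{k−1} +̇ σ0 that equals σΔ for step trajectories.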
8. Quasi-continuity from the left on RT

The proof of the following theorem is based on an idea borrowed from the work of Dynkin [DYN 63].

THEOREM 3.1. If for any compact set K ⊂ X and r > 0

lim_{c→0} sup_{x∈K} Px(σr ≤ c) = 0,

then the admissible family of measures (Px) belongs to QC.

Proof. Let Kn ↑ X, Kn ∈ K, and let τN ∈ RT, τN ↑ τ. Then

Px(X_{τN} ↛ Xτ, τ < ∞) = Px(∪_n {X_{τN} ↛ Xτ, τ < σ_{Kn}}),

Px(X_{τN} ↛ Xτ, τ < σ_{Kn}) = Px(∪_m ∩_N ∪_{k≥N} {ρ(X_{τk}, Xτ) ≥ 2/m}, τ < σ_{Kn})
≤ Σ_m Px(∩_M ∪_{N≥M} {σ_{1/m} ∘ θ_{τN} ≤ h, τN < σ_{Kn}})

with any h > 0. In addition, (∀N ≥ 1)

Px(σ_{1/m} ∘ θ_{τN} ≤ h, τN < σ_{Kn}) = Ex(P_{X_{τN}}(σ_{1/m} ≤ h); τN < σ_{Kn}) ≤ sup_{x∈Kn} Px(σ_{1/m} ≤ h).

From here

Px(∩_M ∪_{N≥M} {σ_{1/m} ∘ θ_{τN} ≤ h, τN < σ_{Kn}}) ≤ sup_{x∈Kn} Px(σ_{1/m} ≤ h) → 0

as h → 0. From here Px(X_{τN} ↛ Xτ, τ < σ_{Kn}) = 0 and Px(X_{τN} ↛ Xτ, τ < ∞) = 0.

9. Other semi-Markov families of measures

Let A0 = ∪_{m=1}^∞ Am, where Am = {Δim, i ∈ N}, Δim ∈ A, ∪_{i=1}^∞ Δim = X and diam Δim ≤ rm, rm → 0. Let A0 ⊂ A1 ⊂ A. An admissible set of measures (Px) is called:
(1) semi-Markov with respect to A0, or an SM(A0)-process, if (∀Δ ∈ A0) σΔ ∈ RT;
(2) semi-Markov with respect to (rm)₁^∞, or an SM(rm)-process, if (∀m ∈ N) σ_{rm} ∈ RT.

10. On correspondence of definitions of SM processes

PROPOSITION 3.3. Let (Px) ∈ SM(A0), and let (Px) be lambda-continuous. Then
(1) if Δ is a closed set, then σΔ ∈ RT;
(2) if (Px) ∈ QC, then (∀Δ ∈ A) σΔ ∈ RT (i.e. (Px) is a SM process).

Proof. (1) Let Δ be a closed set and δm ∈ DS(Am). Then σΔ ≤ σΔ ∘ L_{δm} ≤ σ_{Δ+rm}, whence σΔ ∘ L_{δm} → σΔ (m → ∞). From theorem 2.7(2) it follows that σΔ ∘ L_{δm} ∈ RT, and from theorem 2.7(3) it follows that σΔ ∈ RT.

(2) Let Δ ∈ A and δm ∈ DS(Am). Then σ_{Δ−rm} ∘ L_{δm} ≤ σΔ and X_{σ_{Δ−rm}∘L_{δm}} ∈ Δ. By theorem 2.7(2), σ_{Δ−rm} ∘ L_{δm} ∈ RT, τn = ∨_{m=1}^n (σ_{Δ−rm} ∘ L_{δm}) ∈ RT and τn ↑ τ ≤ σΔ. On the other hand, from the quasi-continuity it follows that

Px(X_{τn} ↛ Xτ, τ < ∞) = Px(Xτ ∈ Δ, τ < ∞) = Px(τ < σΔ, τ < ∞) = 0.

Hence, Px(σΔ = τ) = 1 and Px(X_{τm} ↛ X_{σΔ}, σΔ < ∞) = 0. By theorem 2.7(4), σΔ ∈ RT.

Note that the previous proposition remains valid if the condition (Px) ∈ SM(A0) is replaced by the condition (Px) ∈ SM(rm).

11. Sojourn time in current position

For any function ξ ∈ D we define a real non-negative function J(ξ) on the positive half-line as follows:
J(ξ)(t) = t − (0 ∨ sup{s ≤ t : ξ(s) ≠ ξ(t)})  (t ≥ 0).

For a step-function this function was considered in item 1.18: J(ξ)(t) = R_{t−}(ξ). It is a piecewise-linear function of saw-tooth form, giving the sojourn time in the current position since the last arrival at it before t. On the set of all real functions on the positive half-line we consider a projection Xt and a shift θt similar to those on D. Let us designate (XτJ)(ξ) = X_{τ(ξ)}(J(ξ)), where τ(ξ) < ∞, τ ∈ MT+. Evidently, (∀τ ∈ MT+) XτJ ∈ Fτ ∩ {τ < ∞}/B(R+) (a measurable function).
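For a step trajectory, J(ξ)(t) is simply the time elapsed since the last jump. A minimal sketch (ours, assuming every listed jump changes the state, so each jump time is a point where ξ(s) ≠ ξ(t) for s just before it):

```python
def J(jump_times, t):
    """Sojourn time in the current position at time t for a step function
    whose (sorted) jump times are given; each jump changes the state."""
    last = 0.0                 # last arrival at the current position
    for u in jump_times:
        if u <= t:
            last = u
        else:
            break
    return t - last

jumps = [1.0, 2.5, 4.0]
print(J(jumps, 0.7))   # 0.7 (no jump yet: sojourn since time 0)
print(J(jumps, 3.0))   # 0.5 (last jump at 2.5)
print(J(jumps, 4.0))   # 0.0 (the path has just jumped)
```

This reproduces the saw-tooth shape: J grows linearly with slope 1 between jumps and resets to 0 at every jump, which is exactly the second coordinate of the two-dimensional process (ξ, J(ξ)) used below.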
12. Semi-Markov process by Gihman and Skorokhod

A semi-Markov process in the sense of Gihman and Skorokhod (SMGS-process) is an admissible set of probability measures (Px) on D with a special property. In order to formulate it, let us determine a family of probability measures (Px,s) depending on two parameters x ∈ X and s ≥ 0, where Px,0(B) = Px(B), and for s > 0 and x such that Px(σ0 > s) > 0

Px,s(B) = Px(θs⁻¹B, σ0 > s) / Px(σ0 > s).

The special property of such a family is as follows: for any x ∈ X, τ ∈ MT and B ∈ F,

Px(θτ⁻¹B | Fτ) = P_{Xτ, XτJ}(B).  [3.1]

In other words, the two-dimensional process (ξ, J(ξ)) is strictly Markovian. This definition corresponds to that of a stepped semi-Markov process given in Gihman and Skorokhod [GIH 73, p. 345] (see also Korolyuk and Turbin [KOR 76] and Çinlar [CIN 79]). In [GIH 73] the Markov property of the two-dimensional process is proved when the original process is a stepped Markov process. In order to prove the strictly Markov property of this process, let us consider the construction of a Markov time on the set of stepped trajectories.

13. Markov time for step-functions

On the set of all step-functions D0 we consider the sigma-algebra of Borel subsets (generated by the Stone-Skorokhod metric). In this case it has the form F = σ(τk, X_{τk}; k ∈ N0), where τ0 = 0, τ1 = σ0, τk = σ0^k. Let us determine as usual a Markov time with respect to the increasing family of sigma-algebras αt⁻¹F (t ≥ 0), and let us consider such a time τ. By proposition 2.2 (formula [2.10]), we have τ = τ ∘ ατ. On the other hand, for any step-function ξ we have ατξ = α_{τ_{Nτ}}ξ, where Nτ(ξ) is the number of jumps of the function ξ up to the time τ(ξ): Nτ(ξ) = n ⟺ τn(ξ) ≤ τ(ξ) < τn+1(ξ). Hence, τ is an F_{τ_{Nτ}}-measurable random variable.

14. Correspondence of classes SM and SMGS

PROPOSITION 3.4.
Proof. (1) It is obvious that XσΔ J = 0 for any Δ ∈ A on {σΔ < ∞}. Then B | FσΔ = PXσΔ ,XσΔ J (B) = PXσΔ ,0 (B) Px θσ−1 Δ = PXσΔ (B) B ∈ F, σΔ < ∞ . (2) Let (Px ) be a family of measures of a semi-Markov walk (see proposition 3.2). We have (∀x ∈ X) Px (σ0 > 0) = 1. Let us consider the (not empty) family (Px,s ) deſned above and check for it the property [3.1], which is equivalent to the following property: Px θτ−1 B ∩ A = Ex PXτ ,Xτ J (B); A , for any A ∈ Fτ ∩ {τ < ∞}. It is sufſcient to check this property for sets A and B of the form
B = τ1 ≤ z, Xτ1 ∈ S ∩ θτ−1 B A = Ak ∩ τk ≤ τ < τk+1 , 1 (see proposition 2.18(b)), where Ak ∈ Fτk , B ∈ F, S ∈ B(X), z > 0, k ∈ N0 . In this case Px θτ−1 B ∩ A
= Px τ1 θτ ≤ z, Xτ +˙ τ1 ∈ S ∩ θτ−1+˙ τ B ∩ Ak ∩ τk ≤ τ < τk+1 . 1
˙ τ1 = τk + ˙ τ1 . Hence By proposition 2.11(1) it follows that τ +
B ∩ Ak ∩ τk ≤ τ < τk+1 Px τk+1 − τ ≤ z, Xτk+1 ∈ S ∩ θτ−1 k+1
= Px τ − τk < τ1 θτk ≤ z + τ − τk ∩ θτ−1 Xτ1 ∈ S ∩ θτ−1 B 1 k
∩ Ak ∩ τ − τk ≥ 0 . Representing this expression as an integral on the set of all possible values of Fτk measurable random variable τ − τk with the help of the semi-Markov property, we obtain ∞
Ex PXτk t < τ1 ≤ z + t ∩ Xτ1 ∈ S ∩ θτ−1 B ; Ak ∩ τ − τk ∈ dt 1 0
=
0
∞
Ex PXτk θt−1 B ∩ τ1 > t ; Ak ∩ τ − τk ∈ dt .
Using deſnition of the two-parametric family we obtain ∞
Ex PXτk ,t (B)PXτk τ1 > t ; Ak ∩ τ − τk ∈ dt . 0
78
Continuous Semi-Markov Processes
Repeatedly using the semi-Markov property (for inverse conclusion) we have ∞
Ex PXτk ,t (B); Ak ∩ τ − τk ∈ dt ∩ τ1 θτk > t 0
= Ex PXτk ,τ −τk (B); Ak ∩ τ − τk ≥ 0 ∩ τk+1 > τ .
15. Example of a SM but not a SMGS-process

Let us consider a family of degenerate probability measures (Px) with the phase space X = R. For any x ∈ X the measure Px is concentrated on a unique trajectory. For x = 0 this trajectory is the function ξ0 (Figure 3.1), where ξ0(t) = 0 for t ∈ [0, 1) ∪ [2, ∞), and ξ0(t) = (−1/2)^{n−1} for t ∈ [2 − 2^{−n+1}, 2 − 2^{−n}) (n ≥ 1). For a point x = (−1/2)^{n−1} the measure Px is concentrated on the corresponding shift of the function ξ0.

[Figure 3.1. Graph of the function ξ0(t): a step trajectory equal to 0 on [0, 1) and on [2, ∞), with values (−1/2)^{n−1} on intervals shrinking towards the accumulation point t = 2.]
For other points x the trajectories are constant: Px(σ0 = ∞) = 1. These points can be excluded from the space of states without violating the consistency of the remaining measures. This is a jump process, but not a stepped process. Its two-dimensional process (ξ, J(ξ)) is not strictly Markov, because the first time of accumulation of infinitely many jumps is not a regeneration time of this family of measures. Hence, the process is not a SMGS-process. The set of distributions of the process at the first jump times does not determine the process either. However, for any open set Δ the time σΔ is a regeneration time of the family (Px). Therefore, it is a SM process. For any initial point the corresponding function ξ can be constructed as a limit of a sequence of step-functions ξ ∘ Lr (r → 0), each having a finite number of jumps for every r > 0.
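The trajectory ξ0 is easy to code directly from its definition (a sketch of ours; the indexing below takes n ≥ 1, so that ξ0 = 1 on [1, 1.5), −1/2 on [1.5, 1.75), and so on, as in Figure 3.1). Counting the jump points 2 − 2^{−n} shows why ξ0 is a jump function but not a step function — its jumps accumulate at t = 2:

```python
import math

def xi0(t):
    """The degenerate trajectory from item 15, started at x = 0."""
    if t < 1.0 or t >= 2.0:
        return 0.0
    # t lies in [2 - 2^{1-n}, 2 - 2^{-n}) for n = floor(-log2(2 - t)) + 1:
    n = math.floor(-math.log2(2.0 - t)) + 1
    return (-0.5) ** (n - 1)

print(xi0(1.2), xi0(1.6), xi0(2.0))   # 1.0 -0.5 0.0

# Jumps occur at the points 2 - 2^{-n}: their number in [1, u) grows
# without bound as u approaches 2, so xi0 has infinitely many jumps on
# the bounded interval [1, 2) and is therefore not stepped.
def jumps_before(u, n_max=60):
    return sum(1 for n in range(n_max) if 1.0 <= 2 - 2.0 ** (-n) < u)

print(jumps_before(1.9), jumps_before(1.99))
```

At every fixed t the function is right-continuous in the discrete topology (it holds its value for a short while), which is exactly the jump-function property of item 4; finiteness of jumps on bounded intervals is what fails.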
3.2. Transition function of a SM process

The possibility of a constructive representation of a SM process is connected with its transition function (see Gihman and Skorokhod [GIH 73], Kuznetsov [KUZ 80]). In contrast to the transition function of a Markov process (Dynkin [DYN 63], Blumenthal and Getoor [BLU 68], etc.), a parameter of the transition function of a SM process is a set. For lambda-continuous processes these sets can be taken from some countable sub-class of all open sets. For a lambda-continuous process on the line such a sub-class can consist of all intervals with rational ends (two parameters: the extreme points of an interval). A basic condition for a family of SM transition functions to be coordinated is an integral equation for transition functions with set-parameters where one set contains the other. This equation is a consequence of the regeneration property of the first exit time from an open set. In terms of the Laplace transformation of the transition functions (in t) these equations take a very simple form (see Feller [FEL 67]).

16. Transition function

Let (Px) be an admissible set of probability measures and τ ∈ ST+. Designate

Fτ(B | x) = Px((τ, Xτ) ∈ B)  (B ∈ B(R+ × X)),

fτ(λ, S | x) ≡ ∫_0^∞ e^{−λt} Fτ(dt × S | x) = Ex(e^{−λτ}; Xτ ∈ S, τ < ∞)

(λ ≥ 0, S ∈ B(X)). The kernel Fτ is said to be a semi-Markov transition function of the family (Px) corresponding to a Markov time τ; fτ is its Laplace transformation in time. We call the kernel fτ a semi-Markov transition generating function. It is clear that Fτ(B | ·) and fτ(λ, S | ·) are B(X)-measurable functions of x, and Fτ(· | x), fτ(λ, · | x) are sub-probability measures on the appropriate measurable spaces. Many properties of semi-Markov processes (e.g., theorem 3.1) can be formulated in terms of these transition functions.

17. Properties of the transition functions

PROPOSITION 3.5. Let (Px) ∈ SM, τ1 ∈ RT(Px) and τ2 ∈ MT+. Then

(1) F_{τ1+̇τ2}([0, t) × S | x) = ∫_0^t ∫_X F_{τ1}(dt1 × dx1 | x) F_{τ2}([0, t − t1) × S | x1);

(2) f_{τ1+̇τ2}(λ, S | x) = ∫_X f_{τ1}(λ, dx1 | x) f_{τ2}(λ, S | x1).

Proof. (1) Using the representation

{τ1 +̇ τ2 < t} = ∪_{n=1}^∞ ∪_{k=1}^n {τ1 ∈ [t(k−1)/n, tk/n), τ2 ∘ θ_{τ1} < t(n−k)/n},

the first formula follows from the left-continuity in t of the cumulative distribution function Fτ([0, t) × S | x) and from the regeneration condition. Actually,

Px(τ1 ∈ [a, b), τ2 ∘ θ_{τ1} ∈ [c, d), X_{τ1+̇τ2} ∈ S) = Ex(P_{X_{τ1}}(τ2 ∈ [c, d), X_{τ2} ∈ S); τ1 ∈ [a, b))
= ∫_X F_{τ1}([a, b) × dx1 | x) F_{τ2}([c, d) × S | x1).

(2) The second formula follows from the first one, but it is easier to prove it directly:

f_{τ1+̇τ2}(λ, S | x) = Ex(exp(−λ(τ1 +̇ τ2)); τ1 +̇ τ2 < ∞, X_{τ1+̇τ2} ∈ S)
= Ex(exp(−λτ1) E_{X_{τ1}}(exp(−λτ2); τ2 < ∞, X_{τ2} ∈ S); τ1 < ∞)
= ∫_X f_{τ1}(λ, dx1 | x) f_{τ2}(λ, S | x1).
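In the special case where the law of the increment does not depend on the state reached (a simplifying assumption of ours, not a hypothesis of the proposition), formula (2) reduces to a scalar product of Laplace transforms, which is easy to verify by Monte Carlo. For i.i.d. Exp(μ) sojourn times, f_{σ0}(λ) = μ/(μ + λ), and for the two-step time σ0 +̇ σ0 the convolution formula gives (μ/(μ + λ))²:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent Exp(mu) sojourns T1, T2 (state-independent, so the
# kernel convolution of proposition 3.5(2) collapses to a product):
mu, lam = 1.5, 0.8
T1 = rng.exponential(1.0 / mu, 500_000)
T2 = rng.exponential(1.0 / mu, 500_000)

# Left side: Laplace transform of the two-step time, estimated directly.
mc = np.exp(-lam * (T1 + T2)).mean()
# Right side: product of the one-step transition generating functions.
exact = (mu / (mu + lam)) ** 2
print(mc, exact)   # close to each other
```

With state dependence the product is replaced by integration of f_{τ2}(λ, S | x1) against the kernel f_{τ1}(λ, dx1 | x), but the mechanism — regeneration at τ1 factorizes the exponential — is the same.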
In this case formula (1) follows from formula (2) as a consequence of the properties of the Laplace transformation (see [DIT 65]).

18. One sufficient condition for equality of measures

PROPOSITION 3.6. Let P and P′ be two probability measures on D, let Am be a countable open covering of X of rank rm (i.e. (∀Δ ∈ Am) diam Δ ≤ rm), where rm ↓ 0, and let δm ∈ DS(Am). Let (∀k, m ∈ N) (∀λi > 0) (∀ϕi ∈ C0)

E(f_{m,k}; σ_{δm}^k < ∞) = E′(f_{m,k}; σ_{δm}^k < ∞),

where

f_{m,k} = Π_{i=0}^k exp(−λi σ_{δm}^i) (ϕi ∘ X_{σ_{δm}^i}).

Then P = P′ on F.

Proof. From the properties of the Laplace transformation (see Feller [FEL 67]) it follows that (∀k, m ∈ N)

P ∘ (β_{σ_{δm}^0}, …, β_{σ_{δm}^k})⁻¹ = P′ ∘ (β_{σ_{δm}^0}, …, β_{σ_{δm}^k})⁻¹.

From here, by the Caratheodory extension theorem (see Gihman and Skorokhod [GIH 71], Loève [LOE 62], Shiryaev [SHI 80]), we have P ∘ L_{δm}⁻¹ = P′ ∘ L_{δm}⁻¹. Let f ∈ C0(D). Since (∀ξ ∈ D)

sup_{t∈R+} ρ(L_{δm}ξ(t), ξ(t)) ≤ rm → 0,

we have ρD(L_{δm}ξ, ξ) → 0 and

|E′(f) − E(f)| ≤ |E′(f) − E′ ∘ L_{δm}⁻¹(f)| + |E ∘ L_{δm}⁻¹(f) − E(f)|
≤ E′|f − f ∘ L_{δm}| + E|f − f ∘ L_{δm}| → 0.

19. Criterion of equality of SM processes

PROPOSITION 3.7. Let (Px) and (P′x) be two semi-Markov processes on D such that (∀m ∈ N) (∀Δ ∈ Am) f_{σΔ} = f′_{σΔ}, i.e. their transition generating functions are identical (Am is defined in proposition 3.6). Then (Px) = (P′x).

Proof. We have (∀δm ∈ DS(Am)) (∀λi ≥ 0) (∀ϕi ∈ C0)

Ex( Π_{i=1}^k exp(−λi (σ_{δm}^i − σ_{δm}^{i−1})) ϕi(X_{σ_{δm}^i}); σ_{δm}^k < ∞ )
= ∫_{X^k} Π_{i=1}^k f_{σ_{Δim}}(λi, dxi | x_{i−1}) ϕi(xi),

where δm = (Δ1m, Δ2m, …), σ_{δm}^0 = 0, x0 = x. From here, by proposition 3.6, we obtain (Px) = (P′x).

Let us note that propositions 3.6 and 3.7 remain valid if in their conditions the times σΔ (Δ ∈ Am) are replaced by σ_{rm} (m ∈ N) and, accordingly, σ_{δm}^k by σ_{rm}^k. For the semi-Markov functions F_{σΔ} and f_{σΔ} we will also use the simplified labels FΔ and fΔ accordingly.

20. Sufficient condition of the semi-Markov property

PROPOSITION 3.8. Let (Px) be an admissible family of probability measures such that for any x ∈ X, ϕ ∈ C0, Δ ∈ A, λ > 0 and τ ∈ T(σΔ, Δ ∈ A), Px-a.s. on the set {τ < ∞}

Ex(exp(−λ σΔ ∘ θτ) ϕ(X_{τ+̇σΔ}); τ +̇ σΔ < ∞ | Fτ) = fΔ(λ, ϕ) ∘ Xτ,
82
Continuous Semi-Markov Processes
where
fΔ (λ, ϕ | x) =
X
ϕ x1 fΔ λ, dx1 | x ,
fΔ (λ, ϕ) = fΔ (λ, ϕ | ·).
Then the family of measures $(P_x)$ is semi-Markovian.

Proof. From properties of the Laplace transformation it follows that for any measurable function $\varphi_1$, on $\{\tau < \infty\}$ $P_x$-a.s.
\[
E_x\bigl(\varphi_1 \circ \beta_{\sigma_\Delta} \circ \theta_\tau \mid \mathcal{F}_\tau\bigr) = E_{X_\tau}\bigl(\varphi_1 \circ \beta_{\sigma_\Delta}\bigr).
\]
It follows by induction that for any finite set $(\Delta_1, \ldots, \Delta_k)$ ($\Delta_i \in \mathcal{A}$) and $(\varphi_1, \ldots, \varphi_k)$ ($\varphi_i \in \mathcal{B}(Y)/\mathcal{B}(\mathbb{R})$) the following equality is fulfilled:
\[
E_x\Bigl(\prod_{i=1}^{k}\varphi_i\bigl(\beta_{\sigma_{\Delta_i}}\bigr) \circ \theta_\tau \Bigm| \mathcal{F}_\tau\Bigr)
= E_{X_\tau}\Bigl(\prod_{i=1}^{k}\varphi_i\bigl(\beta_{\sigma_{\Delta_i}}\bigr)\Bigr),
\]
and hence for all $B \in \mathcal{F}$ $P_x(\theta_\tau^{-1}B \mid \mathcal{F}_\tau) = P_{X_\tau}(B)$.

3.3. Operators and SM walk

Multi-dimensional lambda-potential operators are also used so that Markov processes can be characterized within the class of semi-Markov processes; the characterization is based on the Markov condition expressed in terms of the lambda-potential operator. With the help of the transition generating function, a differential (difference) operator similar to the characteristic operator of a Markov process is defined. The duality of the differential and integral operators is proved.

21. Integral operator

An integral operator $R$ on the set of $\mathcal{B}(X^n)$-measurable functions is defined in item 2.44. For further investigations a summary of this operator will be useful. Let us denote by $B_0$ the class of all bounded measurable functions. For $\tau \in MT$ we define the function
\[
R_\tau\bigl(\lambda_1, \ldots, \lambda_n;\ \varphi \mid x\bigr)
= E_x\int_{s_n < \tau}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,\varphi\bigl(X_{s_1}, \ldots, X_{s_n}\bigr)\,dt_i,
\]
where $s_i = t_1 + \cdots + t_i$, $\lambda_i \ge 0$, $\varphi \in B_0(X^n)$. We will use the operator $R_\tau$ generally on the class of functions $\varphi = \varphi_1 \otimes \cdots \otimes \varphi_n$ ($\varphi_i \in B_0$), i.e. $\varphi(x_1, \ldots, x_n) = \varphi_1(x_1)\cdots\varphi_n(x_n)$, denoting
\[
R_\tau\bigl(\lambda_1, \ldots, \lambda_n;\ \varphi_1, \ldots, \varphi_n \mid x\bigr)
= E_x\int_{s_n < \tau}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,\varphi_i \circ X_{s_i}\,dt_i.
\]
We call this operator a multi-dimensional lambda-potential operator on the interval $[0, \tau)$ ($\tau \le \infty$). If $\tau \equiv \infty$ we omit the subscript $\tau$.

From the definition of the operator $L_r$ (see item 2.18) it follows that $(\forall \xi \in D)\ (\forall t \ge 0)\ \rho(\xi(t), L_r\xi(t)) < r$. Hence $L_r\xi(s) \to \xi(s)$ as $r \to 0$, uniformly in $s \in \mathbb{R}_+$. Consequently, any semi-Markov process $(P_x)$ can be represented as a limit of the sequence of SM walks $(P_x \circ L_r^{-1})_{x \in X}$ as $r \to 0$. Chapter 7 of this book is devoted to problems of convergence of sequences of processes. Here we consider a method of evaluating the operator $R_{\sigma_\Delta}(\lambda_1, \ldots, \lambda_n)$ for a SM process when there exists a sequence of Markov times $\tau_r \circ L_r$ ($r \to 0$) converging to $\sigma_\Delta$ $P_x$-a.s., where $\tau_r$ is a Markov time on the set of step-functions, depending on $r$ (see item 13). If $\Delta$ is a closed set, then for a general process such a Markov time can be taken as $\sigma_\Delta \circ L_r$. If $\Delta$ is an open set and the process belongs to the class QC, then such a Markov time can be taken as $\sigma_{\Delta_{-r}} \circ L_r$ (see item 7 and proposition 3.3). Both these approximations are good enough if, for the given $x$, $P_x(\Pi(\Delta)) = 1$, i.e. we can use the property of the correct exit (see item 2.27). In this case $P_x(\sigma_\Delta \circ L_r \to \sigma_\Delta) = 1$, and also $(\forall n \in \mathbb{N})\ (\forall \varphi_i \in C_0)\ (\forall \lambda_i > 0)$
\[
R_{\sigma_\Delta}\bigl(\lambda_1, \ldots, \lambda_n;\ \varphi_1, \ldots, \varphi_n \mid x\bigr)
= \lim_{r \to 0} R^r_{\sigma_\Delta}\bigl(\lambda_1, \ldots, \lambda_n;\ \varphi_1, \ldots, \varphi_n \mid x\bigr),
\]
where
\[
R^r_{\sigma_\Delta}\bigl(\lambda_1, \ldots, \lambda_n;\ \varphi_1, \ldots, \varphi_n \mid x\bigr)
= E_x\int_{s_n < \sigma_\Delta \circ L_r}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,\bigl(\varphi_i \circ X_{s_i} \circ L_r\bigr)\,dt_i
\]
($s_n = t_1 + \cdots + t_n$). This stepped approximation can be expressed in terms of transition generating functions. For example,
\[
R^r(\lambda; \varphi) = \sum_{k=0}^{\infty} f^k_{\sigma_r}\Bigl(\lambda;\ \varphi\,\frac{1 - f_{\sigma_r}(\lambda; 1)}{\lambda}\Bigr),
\]
where $f_{\sigma_r}(\lambda; \varphi) = E_\cdot\bigl(e^{-\lambda\sigma_r}(\varphi \circ X_{\sigma_r});\ \sigma_r < \infty\bigr)$ is an integral with respect to the transition generating function.

22. Markov property for the operator

In the theory of Markov processes, the one-dimensional operator $R(\lambda; \varphi) = R(\lambda; \varphi \mid \cdot) = R_\lambda\varphi$ corresponds to the definition of a resolvent for a semi-group of operators; it is called a lambda-potential operator (see item 2.44, Dynkin [DYN 63, p. 44], Gihman and Skorokhod [GIH 73, p. 139], Blumenthal and Getoor [BLU 68, p. 41], Hunt [HAN 62], etc.).
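The series representation of item 21 and the resolvent connection just mentioned can be checked together in a small finite-state example. The sketch below is only an illustration under assumed data (a hypothetical two-state walk with exponential holding times, rates `mu` and jump matrix `p`, so that the kernel is available in closed form); it encodes $f_{\sigma_r}(\lambda;\cdot)$ as a matrix $F$ with $F[x,y] = E_x(e^{-\lambda\sigma};\,X_\sigma = y)$, sums the series, and compares with the linear-solve form $(I-F)^{-1}g$ and, in this Markov special case, with the resolvent $(\lambda I - Q)^{-1}\varphi$.

```python
import numpy as np

lam = 0.7
# Hypothetical two-state SM walk with exponential holding times (rates mu)
# and jump matrix p; then F[x, y] = E_x(e^{-lam*sigma}; X_sigma = y).
mu = np.array([1.0, 2.5])
p = np.array([[0.0, 1.0],
              [1.0, 0.0]])
F = (mu / (mu + lam))[:, None] * p
phi = np.array([1.0, 3.0])

g = phi * (1.0 - F @ np.ones(2)) / lam   # phi * (1 - f_sigma(lam; 1)) / lam
R_series = np.zeros(2)                   # partial sums of sum_k F^k g
term = g.copy()
for _ in range(300):
    R_series += term
    term = F @ term

R_exact = np.linalg.solve(np.eye(2) - F, g)        # closed form of the series
Q = mu[:, None] * (p - np.eye(2))                  # generator of this Markov case
R_resolvent = np.linalg.solve(lam * np.eye(2) - Q, phi)
print(R_series, R_exact, R_resolvent)              # all three agree
```

The agreement with the resolvent is special to the exponential-holding (Markov) case; for a general holding-time law only the series and the linear-solve form coincide.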
It is not difficult to show that for a Markov process
\[
R_{\sigma_\Delta}\bigl(\lambda_1, \ldots, \lambda_n;\ \varphi_1, \ldots, \varphi_n\bigr)
= R_{\sigma_\Delta}\bigl(\lambda_1;\ \varphi_1\,R_{\sigma_\Delta}\bigl(\lambda_2, \ldots, \lambda_n;\ \varphi_2, \ldots, \varphi_n\bigr)\bigr). \qquad [3.2]
\]
For an admissible family of measures determining these multi-dimensional operators, this condition is a sufficient condition of Markovness (up to the first exit time from the open set $\Delta$). Namely, if for any $n \ge 1$, $\varphi_i \in C_0$, $\lambda_i > 0$ condition [3.2] is fulfilled on the set $\Delta$, then for any $x \in \Delta$ and almost all $t \in \mathbb{R}_+$ the Markov property for the measure $P_x$ is fulfilled:
\[
(\forall B \in \mathcal{F})\ (\forall A \in \mathcal{F}_t)\qquad
P_x\bigl(\theta_t^{-1}B,\ A \cap \{t < \sigma_\Delta\}\bigr)
= E_x\bigl(P_{X_t}(B);\ A \cap \{t < \sigma_\Delta\}\bigr).
\]
For a semi-Markov family of measures which is lambda-continuous on the interval $[0, \sigma_\Delta)$ this Markov property is fulfilled for any $t \in \mathbb{R}_+$. A proof of this assertion follows from the Laplace transformation properties (see, e.g., [DIT 66]).
23. Difference operator

For any $\lambda \ge 0$, $\varphi \in B_0$, $x \in X$ let
\[
f_\lambda(\varphi \mid x) \equiv f_{\sigma_0}(\lambda, \varphi \mid x)
= E_x\bigl(e^{-\lambda\sigma_0}\,\varphi \circ X_{\sigma_0};\ \sigma_0 < \infty\bigr).
\]
Under the condition $P_x(0 < \sigma_0 < \infty) > 0$ we define the operators
\[
A_\lambda\varphi = \frac{f_\lambda\varphi - \varphi}{m_0} \quad (\lambda \ge 0),
\qquad
A_\lambda\varphi = \frac{f_\lambda\varphi - \varphi}{\lambda^{-1}\bigl(1 - f_\lambda I\bigr)} \quad (\lambda > 0),
\]
where $f_\lambda I(x) = E_x(e^{-\lambda\sigma_0};\ \sigma_0 < \infty)$, $f_\lambda\varphi = f_\lambda(\varphi \mid \cdot)$, and $m_0(x) = E_x(\sigma_0;\ \sigma_0 < \infty)$ is the expectation of the first time at which the initial state changes. These operators are said to be lambda-characteristic operators (of the first and second kind correspondingly). In order to formulate properties of SM walks we use the operators of the second kind.
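As a small worked instance (with hypothetical numbers: a holding rate $a$ at a fixed point $x$, and assumed values of $\varphi(x)$ and of the jump average $(H\varphi)(x)$), both kinds of operators are available in closed form when the sojourn time is exponential; then $f_\lambda I = a/(a+\lambda)$, $m_0 = 1/a$, and the second-kind operator applied to $I$ gives exactly $-\lambda$.

```python
a, lam = 2.0, 0.5
phi_x, H_phi = 1.0, 0.8   # assumed phi(x) and E(phi at the post-jump state); hypothetical

f_lam_I = a / (a + lam)          # f_lambda(I | x) for an exponential sojourn
f_lam_phi = f_lam_I * H_phi      # sojourn time independent of the jump target
m0 = 1.0 / a                     # E_x(sigma_0)

A_first = (f_lam_phi - phi_x) / m0                        # first kind
A_second = (f_lam_phi - phi_x) / ((1.0 - f_lam_I) / lam)  # second kind
A_second_I = (f_lam_I - 1.0) / ((1.0 - f_lam_I) / lam)    # second kind applied to I

print(A_first, A_second, A_second_I)   # A_second_I equals -lam
```

Letting $\lambda \to 0$, both kinds reduce to the same difference $a((H\varphi)(x) - \varphi(x))$, which for the exponential case is the generator value.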
24. Connection of operators

In the following theorem we use the denotations ($\lambda \ge 0$, $\lambda_i > 0$, $\varphi, \varphi_i \in B_0$):
\[
R(k, n) = R_{\sigma_\Delta}\bigl(\lambda_k, \ldots, \lambda_n;\ \varphi_k, \ldots, \varphi_n \mid x\bigr) \quad (1 \le k \le n),
\qquad
R_\lambda^\Delta\varphi = R_{\sigma_\Delta}(\lambda; \varphi) \quad (\Delta \in \mathcal{A} \cup \overline{\mathcal{A}}),
\]
\[
\psi_k^n = \prod_{i=k}^{n}\varphi_i,
\qquad
\Pi_i(k, n) = \prod_{k \le j \le n,\ j \ne i}\bigl(\lambda_j - \lambda_i\bigr)^{-1} \quad (\lambda_j \ne \lambda_i),
\]
\[
\Phi_{k,n} = \psi_k^n\,\Pi_k(k, n) + \sum_{j=k}^{n-1}\psi_k^j\,\Pi_k(k, j+1)\,A_{\lambda_k}R(j+1, n)
\]
($1 \le k \le n$). $N_\Delta$ is the number of jumps of a step-function up to the time $\sigma_\Delta$: $N_\Delta = n \Longleftrightarrow \tau_n < \sigma_\Delta = \tau_{n+1}$. Evidently, $\sigma_\Delta < \infty$ implies $N_\Delta < \infty$. These events cannot be identical if the number of jumps is finite on the whole half-line.

THEOREM 3.2. Let $(P_x) \in SM$ and $\Delta \in \mathcal{B}(X)$. Then

(1) if $(P_x)$ is a SM walk, then
(a) $R_\lambda^\Delta A_\lambda\varphi = f_{\sigma_\Delta}(\lambda; \varphi) - \varphi$ ($\lambda > 0$);
(b) if $P_x(\sigma_\Delta < \infty) = 1$, then $R_0^\Delta A_0\varphi(x) = f_{\sigma_\Delta}(0; \varphi \mid x) - \varphi(x)$;

(2) if $P_x(\sigma_0 > 0) = 1$ and $P_x(X_{\sigma_0} \ne x \mid \sigma_0 < \infty) = 1$, then $A_{\lambda_1}R(1, n)(x) = -\Phi_{1,n}(x)$.

Proof. (1) Using the denotation $\tau_k = \sigma_0^k$, we have for $\lambda > 0$
\[
R_\lambda^\Delta A_\lambda\varphi
= E_\cdot\Bigl(\int_0^{\sigma_\Delta} e^{-\lambda t}\,A_\lambda\varphi \circ X_t\,dt;\ N_\Delta < \infty\Bigr)
+ E_\cdot\Bigl(\int_0^{\sigma_\Delta} e^{-\lambda t}\,A_\lambda\varphi \circ X_t\,dt;\ N_\Delta = \infty\Bigr).
\]
The first term of this sum is equal to
\[
\sum_{n=0}^{\infty} E_\cdot\Bigl(\sum_{k=0}^{n} A_\lambda\varphi \circ X_{\tau_k}\,\lambda^{-1} e^{-\lambda\tau_k}\bigl(1 - e^{-\lambda\,\sigma_0 \circ \theta_{\tau_k}}\bigr);\ N_\Delta = n\Bigr)
= \sum_{k=0}^{\infty} E_\cdot\bigl(A_\lambda\varphi \circ X_{\tau_k}\,\lambda^{-1} e^{-\lambda\tau_k}\bigl(1 - e^{-\lambda\,\sigma_0 \circ \theta_{\tau_k}}\bigr);\ k \le N_\Delta < \infty\bigr).
\]
The second term is equal to
\[
\sum_{k=0}^{\infty} E_\cdot\bigl(A_\lambda\varphi \circ X_{\tau_k}\,\lambda^{-1} e^{-\lambda\tau_k}\bigl(1 - e^{-\lambda\,\sigma_0 \circ \theta_{\tau_k}}\bigr);\ N_\Delta = \infty\bigr),
\]
which when added to the first term gives
\[
\sum_{k=0}^{\infty} E_\cdot\bigl(A_\lambda\varphi \circ X_{\tau_k}\,\lambda^{-1} e^{-\lambda\tau_k}\bigl(1 - e^{-\lambda\,\sigma_0 \circ \theta_{\tau_k}}\bigr);\ k \le N_\Delta\bigr).
\]
Since $\{k \le N_\Delta\} \in \mathcal{F}_{\tau_k}$, the semi-Markov property can be used to obtain the following:
\[
\sum_{k=0}^{\infty} E_\cdot\bigl(A_\lambda\varphi \circ X_{\tau_k}\,\lambda^{-1} e^{-\lambda\tau_k}\bigl(1 - f_{\sigma_0}(\lambda; I) \circ X_{\tau_k}\bigr);\ k \le N_\Delta\bigr)
= \sum_{k=0}^{\infty} E_\cdot\bigl(\bigl(f_{\sigma_0}(\lambda; \varphi) - \varphi\bigr) \circ X_{\tau_k}\,e^{-\lambda\tau_k};\ k \le N_\Delta\bigr)
\]
\[
= \sum_{k=0}^{\infty} E_\cdot\bigl(e^{-\lambda(\tau_k + \sigma_0 \circ \theta_{\tau_k})}\,\varphi \circ X_{\tau_{k+1}} - e^{-\lambda\tau_k}\,\varphi \circ X_{\tau_k};\ k \le N_\Delta\bigr)
\]
\[
= \sum_{n=0}^{\infty} E_\cdot\Bigl(\sum_{k=0}^{n}\bigl(e^{-\lambda\tau_{k+1}}\,\varphi \circ X_{\tau_{k+1}} - e^{-\lambda\tau_k}\,\varphi \circ X_{\tau_k}\bigr);\ N_\Delta = n\Bigr)
+ E_\cdot\Bigl(\sum_{k=0}^{\infty}\bigl(e^{-\lambda\tau_{k+1}}\,\varphi \circ X_{\tau_{k+1}} - e^{-\lambda\tau_k}\,\varphi \circ X_{\tau_k}\bigr);\ N_\Delta = \infty\Bigr)
\]
\[
= \sum_{n=0}^{\infty} E_\cdot\bigl(e^{-\lambda\tau_{n+1}}\,\varphi \circ X_{\tau_{n+1}} - \varphi \circ X_0;\ N_\Delta = n\bigr) - E_\cdot\bigl(\varphi \circ X_0;\ N_\Delta = \infty\bigr)
\]
\[
= E_\cdot\bigl(e^{-\lambda\sigma_\Delta}\,\varphi \circ X_{\sigma_\Delta} - \varphi \circ X_0;\ N_\Delta < \infty\bigr) - E_\cdot\bigl(\varphi \circ X_0;\ N_\Delta = \infty\bigr)
= f_{\sigma_\Delta}(\lambda; \varphi) - \varphi.
\]
So assertion (a) is proved. Assertion (b) is proved similarly. In this case we do not need to construct the $\mathcal{F}_{\tau_k}$-measurable event $\{N_\Delta \ge k\}$ with the help of the two non-measurable parts $\{k \le N_\Delta < \infty\}$ and $\{N_\Delta = \infty\}$. The second distinction is that, after replacing $(A_0\varphi \cdot m_0) \circ X_{\tau_k}$ with $(f_{\sigma_0}(0; \varphi) - \varphi) \circ X_{\tau_k}$ and using the semi-Markov property, we have to show the domain of integration explicitly, taking into account that $\{N_\Delta \ge k\} \subset \{\tau_k < \infty\}$ and $\{\tau_{k+1} < \infty\} \subset \{\tau_k < \infty\}$. Namely,
\[
E_x\bigl(\bigl(f_{\sigma_0}(0; \varphi) - \varphi\bigr) \circ X_{\tau_k};\ N_\Delta \ge k\bigr)
= E_x\bigl(\varphi \circ X_{\tau_{k+1}}\,I\{\tau_{k+1} < \infty\} - \varphi \circ X_{\tau_k}\,I\{\tau_k < \infty\};\ N_\Delta \ge k\bigr).
\]
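Assertion (a) admits an exact finite-dimensional check in the Markov special case (exponential holding times), where all the objects are matrices. The data below are hypothetical; with generator $Q$ one has $A_\lambda\varphi = (Q - \lambda)\varphi$ for the second-kind operator, and the identity $R_\lambda^\Delta A_\lambda\varphi = f_{\sigma_\Delta}(\lambda; \varphi) - \varphi$ reduces to two linear solves.

```python
import numpy as np

lam = 0.6
# Hypothetical 3-state Markov (hence semi-Markov) walk; the Delta-exit is
# taken as the first hitting time of state 2.
a = np.array([1.0, 2.0, 1.5])                # holding rates
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])              # jump matrix
Q = a[:, None] * (P - np.eye(3))             # generator
phi = np.array([2.0, -1.0, 0.5])

# Second-kind operator: A_lam(phi) = (f_lam(phi) - phi) / (lam^{-1}(1 - f_lam(I)))
F = (a / (a + lam))[:, None] * P             # kernel f_{sigma_0}(lam, dy | x)
A_phi = (F @ phi - phi) / ((1 - F @ np.ones(3)) / lam)

# Interior states {0, 1}; sigma_Delta = first hitting time of state 2.
I = [0, 1]
M = lam * np.eye(2) - Q[np.ix_(I, I)]
R_A = np.linalg.solve(M, A_phi[I])                 # R^Delta_lam(A_lam phi)
f_Delta = np.linalg.solve(M, Q[I, 2] * phi[2])     # f_{sigma_Delta}(lam; phi)
print(R_A, f_Delta - phi[:2])                      # equal, as in assertion (a)
```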
(2) We have at point $x$:
\[
f_{\lambda_1}R(1, n) - R(1, n)
= E_\cdot\Bigl(-\int_{s_n < \sigma_0}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,\varphi_i \circ X_{s_i}\,dt_i\Bigr)
- \sum_{k=1}^{n-1} E_\cdot\Bigl(\int_{S_k}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,\varphi_i \circ X_{s_i}\,dt_i\Bigr),
\]
where $S_k = \{s_k < \sigma_0 \le s_{k+1},\ s_n < \sigma_\Delta\}$. The first term is equal to
\[
\psi_1^n(x)\,E_\cdot\Bigl(-\int_{s_n < \sigma_0}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,dt_i\Bigr).
\]
In the second term the summand with number $k$ is equal to
\[
E_\cdot\Bigl(-\int_{S_k}\prod_{i=1}^{n} e^{-\lambda_i t_i}\,\varphi_i \circ X_{s_i}\,dt_i\Bigr)
= E_\cdot\Bigl(-\int_{s_k < \sigma_0}\prod_{i=1}^{k} e^{-\lambda_i t_i}\,\varphi_i \circ X_{s_i}\,dt_i
\times \int_{\sigma_0 \le s_{k+1},\ s_n < \sigma_\Delta}\prod_{i=k+1}^{n} e^{-\lambda_i t_i}\,\varphi_i \circ X_{s_i}\,dt_i\Bigr).
\]
Assuming $t'_{k+1} = s_{k+1} - \sigma_0$, $t'_i = t_i$ for $i \ge k+2$ and $s'_i = t'_{k+1} + \cdots + t'_i$ for $i \ge k+1$, and using the property $\sigma_\Delta = \sigma_0 \dotplus \sigma_\Delta$, we obtain the intrinsic integral to be equal to
\[
e^{-\lambda_{k+1}(\sigma_0 - s_k)}\int_{\mathbb{R}_+^{n-k} \cap \{s'_n < \sigma_\Delta \circ \theta_{\sigma_0}\}}\prod_{i=k+1}^{n} e^{-\lambda_i t'_i}\,\varphi_i \circ X_{\sigma_0 + s'_i}\,dt'_i.
\]
−λk+1 σ0
−e
R(k + 1, n) ◦ Xσ0
=
ψ1k
E·
−λk+1 σ0
−e
k
−(λi −λk+1 )ti
e
sk <σ0 i=1
R(k + 1, n) ◦ Xσ0
k
sk <σ0 i=1
ϕi ◦ Xsi dti
−(λi −λk+1 )ti
e
dti
.
88
Continuous Semi-Markov Processes
Let us suppose temporarily that λi = λj (i = j), and apply the formula −λk+1 τ1
e
k
−(λi −λk+1 )ti
e
dti =
sk <τ i=1
k+1
e−λi τ1 Πi (1, k + 1),
[3.3]
i=1
which can be easily proved by induction using the following identity. LEMMA 3.1. For any n ≥ 2 and λi = λj (i = j) n
Πi (1, n) = 0
n ≥ 2, λj = λi .
i=1
Proof. The given identity is equivalent to the following one: ∀x = λi (1 ≤ i ≤ n − 1) n−1 1 1 = . Πi (1, n − 1) λ λ1 − x · · · λn−1 − x i−x i=1
We prove it by induction. The identity is evident for n = 2. Let this identity be true for a given n. Multiplying both parts of the latter equality by (λn − x)−1 we have n−1 1 1 = Πi (1, n − 1) λ1 − x · · · λ n − x λi − x λn − x i=1
1 1 1 − Πi (1, n − 1) = λn − λi λi − x λn − x i=1 n−1 n−1 1 1 + − . Πi (1, n) Πi (1, n) = λi − x λn − x i=1 i=1 n−1
However, the coefſcient before the fraction in the second term is equal to Πn (1, n) by inductive supposition, which proves the identity for the order n + 1.
Let us continue the proof of the theorem. For λi > 0 (i ≤ n) and λn+1 = 0 from identity [3.3] we obtain
n
sn <τ1 i=1
n e−λi ti dti = 1 − e−λi τ1 λ−1 i Πi (1, n). i=1
[3.4]
General Semi-Markov Processes
Hence, fλ1 R(1, n) − R(1, n) n = ψ1n E· e−λi τ1 − 1 λ−1 i Πi (1, n) i=1
−
n−1
ψ1k E·
k+1
e
R(k + 1, n) ◦ Xτ1 Πi (1, k + 1)
i=1
k=1
= ψ1n
−λi τ1
n
fλi 1 − 1 Πi (1, n) λ−1 i
i=1
−
n−1
ψ1k
fλi R(k + 1, n)Πi (1, k + 1)
i=1
k=1
= ψ1n
k+1
n
fλi 1 − 1 Πi (1, n) λ−1 i
i=1
−
n−1
ψ1k
k=1
k+1
fλi R(k + 1, n) − R(k + 1, n) Πi (1, k + 1) ≡ Φ1,n .
i=1
From here Aλ1 R(1, n) = Φ1,n
λ1 . 1 − fλ1 1
LEMMA 3.2. Under the conditions of the theorem the following equality is fair Φ1,n λ1 / fλ1 1 − 1 = Φ1,n . Proof. In order to be concise, let us designate di z = fλi z − z μi = 1 − fλi 1 /λ1 ,
z ∈ B0 .
We have to prove ψ1n Π1 (1, n)μ1 +
n−1
ψ1j Π1 (1, j + 1)d1 R(j + 1, n)
j=1
=
ψ1n
n i=1
Πi (1, n)μi +
n−1 j=1
ψ1j
j+1 i=1
Πi (1, j + 1) di R(j + 1, n).
89
90
Continuous Semi-Markov Processes
We prove this equality by induction on n. Assuming Π1 (1, 1) = 1, we see the equality to be fair for n = 1. Let it also be fair for all natural numbers up to n − 1 (n ≥ 2). Let us ſnd the difference of parts of the equality to be proven: ψ1n
n
Πi (1, n)μi +
n−1
i=2
ψ1j
j=1
= ψ1n
n
Πi (1, j + 1) di R(j + 1, n)
i=2
Πi (1, n)μi +
i=2
j+1
n−1
ψ1j
j=2
j
Πi (1, j + 1) di R(j + 1, n) + Ψ,
i=2
where Ψ=
n−1
ψ1k Πk+1 (1, k + 1) dk+1 R(k + 1, n).
k=1
Substituting the expression dk+1 R(k + 1, n) and using the inductive supposition, we discover that it is equal to n Πk+1 (k + 1, n)μk+1 − −ψk+1
n−1
j ψk+1 Πk+1 (k + 1, j + 1) dk+1 R(j + 1, n).
j=k+1
Using evident relations n = ψ1n , ψ1k ψk+1
Πk+1 (1, k + 1)Πk+1 (k + 1, j + 1) = Πk+1 (1, j + 1), we obtain Ψ=−
n−1
ψ1n Πk+1 (1, n)μk+1
k=1
−
n−2 n−1
ψ1j Πk+1 (1, j + 1) dk+1 R(j + 1, n).
k=1 j=k+1
Changing the order of summands and varying variables, we see the function −Ψ to be equal to the remain part of the difference to be estimated. The second assertion of the theorem is proved under condition λi = λj (i = j). However, the function Φ1,n is determined for all λi = λ1 (domain of Π1 (1, n)). The difference of interest is being extended on this region by continuity. In order to extend it on the region of all λi > 0 the function Φ1,n should be redetermined. At the points of
General Semi-Markov Processes
91
n-dimensional space of positive vectors (λ1 , . . . , λn ), where some coordinates coincide, let us determine Φ1,n as a limit of this function values on a sequence of vectors with pair-wise different coordinates. It is not difſcult to show that existence of these limits depends on the existence of derivatives of Aλ g by λ, but this function for any g ∈ B0 has derivatives of any order at any point. 3.4. Operators and general SM processes Both differential and integral operators for a general semi-Markov process are deſned and being investigated. In order to evaluate the integral operator on the set of continuous functions a stepped approximation of a semi-Markov process can be used, transforming the process to a semi-Markov walk (item 21). However, if the ſrst exit time from the initial point is equal to zero, it is not convenient to use formulae of SM walks in order to derive conditions of duality of the differential and integral operators for a general semi-Markov process. It is more rational to derive them directly by similar methods. 25. Family of lambda-continuous measures on the interval Let Δ be a measurable set. An admissible family (Px ) is said to be lambda-continuous on an interval [0, σΔ ) at point x if (∀n ∈ N) (∀λi > 0) (∀ϕi ∈ C0 ) the function RσΔ (λ1 , . . . , λn ; ϕ1 , . . . , ϕn ) is continuous at point x; the lambda-continuity of a family means the lambda-continuity of it on the whole half-line (see item 2.44). It is not difſcult to show that weak continuity of the family (Px ) with respect to the metric ρD implies lambda-continuity and lambda-continuity on the interval [0, σΔ ) (see theorem 2.4 and item 2.34). In Chapter 7 it will be for “almost all” Δ ∈ A ∪ A shown that weak continuity is equivalent to lambda-continuity. Lambda-continuity plays an important role while proving the regeneration property of some Markov times. 
Thus, for example, if equation [3.2] is fulſlled for a lambda-continuous SM family (Px ) for all n ∈ N, λi > 0, and ϕi ∈ C0 , then the family is strictly Markovian. It follows from the property that the class RT(Px ) is closed with respect to convergence from above (see theorem 2.7, and Blumenthal and Getoor [BLU 68]). 26. Lambda-characteristic operator We are interested in properties of kernels fσr (λ, dx1 | x) for small r > 0 (see denotations of item 16 and 2.14). Let
Xr = x ∈ X : Px σr < ∞ > 0
92
Continuous Semi-Markov Processes
and X0 = r>0 Xr . Evidently, X0 = {x ∈ X : Px (σ0 < ∞) > 0}. In most cases we will assume that (∀x ∈ X0 ) Px (σ0 < ∞) = 1 and X0 is an open set. It is not difſcult to show that the set X0 is open if the family (Px ) is lambda-continuous on the whole X. For x ∈ Xr let us consider the value Arλ (ϕ | x) =
1 fσr (λ; ϕ | x) − ϕ(x) , mr (x)
where λ ≥ 0, mr (x) = Px (σr ; σr < ∞), ϕ ∈ B0 . For λ > 0 and τ = ∞ let us determine e−λτ (ϕ ◦ πτ ) = 0. In this case fσr (λ, ϕ | x) ≡ Px e−λσr ϕ ◦ πσr ; σr < ∞ = Px e−λσr ϕ ◦ πσr . For x ∈ X0 let Vλ (x) mean the set of all ϕ ∈ B0 , what the limit Aλ (ϕ | x) = lim Arλ (ϕ | x) r→0
is determined for. If function ϕ is such that this limit exists for any x ∈ S, then it determines the function Aλ (ϕ | x) on S.In other words, the operator Aλ : Vλ (S) → B(S) is determined, where Vλ (S) = x∈S Vλ (x), B(S) is the set of all measurable real functions on S. The operator Aλ , determined above, is said to be a lambdacharacteristic operator (of the ſrst kind) (see Dynkin [DYN 63, p. 201]). Evidently, the characteristic operator of Dynkin, if it is determined for given ϕ and x, coincides with the lambda-characteristic operator for λ = 0. We distinguish strict (narrow) and wide sense of the deſnition of the operator Aλ . It depends on whether this operator is a limit for an arbitrary sequence of Δ ↓ x, or only for decreasing sequence of balls. For the purposes of this chapter it is enough to consider the operators in a wide sense. Furthermore, we will give some sufſcient conditions for the existence of lambda-characteristic operators in a strict sense. Some properties of SM processes are convenient to formulate in terms of lambdacharacteristic operators of the second kind: Aλ ϕ = limr→0 Arλ ϕ, where Arλ ϕ =
fσr (λ; ϕ) − ϕ , −1 1 − fσr (λ; I) λ
which is determined at point x on the set of functions Vλ (x). Using operators Aλ is actual for processes with mr = ∞ (∀r > 0). For example, it is actual for process of maxima of the standard Wiener process (see Chapter 9). In this case existence of the function Aλi I is not necessary. Instead the limit λ1 ≥ 0, λ2 > 0 a λ1 , λ2 | x = lim ar λ1 , λ2 | x r→0
is used, where
1 − fσr λ1 ; I | x . ar λ1 , λ2 | x = 1 − fσr λ2 ; I | x
General Semi-Markov Processes
93
In particular, ar (0, λ | x) =
Px σ r = ∞ . 1 − fσr (λ; I | x)
The existence condition for this limit is denoted as I ∈ Vλλ21 (x). Evidently, if I ∈ Vλi (x) (i = 1, 2), then a λ1 , λ2 | x = Aλ1 (I | x)/Aλ2 (I | x), and if operator Aλ is determined, then Aλ ϕ = −λ Aλ ϕ/Aλ I. From here Aλ I = −λ. Obviously, A0 ϕ = A0 ϕ if the last operator is determined. The meaning of the ratio (1 − fσr (λ; I))/λ|λ=0 is accepted to be a limit of this expression as λ → 0. 27. Local and pseudo-local operators According to deſnitions item 26 with Px (σ0 > 0) > 0 (i.e. there exists an interval of constancy at the start point of the trajectory) and λ > 0 we have Ex e−λσ0 ϕ ◦ Xσ0 − ϕ(x) . Aλ (ϕ | x) = m0 (x) It corresponds to the deſnition of the operator of the ſrst kind from item 23. In this sense the operator Aλ ϕ, deſned in item 26, is a generalization of the operators of the second kind from item 23. In contrast to a Markov process, when the probability Px (σ0 > 0) can accept only two values: 0 and 1, a semi-Markov process admits this probability equal to any value in the interval [0, 1]. At the time σ0 the trajectory of the SM process can behave differently. It can be a.s. a point of continuity of the process. In this case Aλ ϕ = ϕ Aλ I. If σ0 is a point of discontinuity with a positive probability, then the operator Aλ at point x depends, in general, on all the values of the function ϕ. The operator Aλ is said to be local at point x if (∀ϕ1 , ϕ2 ) ϕ1 ∈ Vλ (x) and ϕ1 = ϕ2 in some neighborhood of point x implies ϕ2 ∈ Vλ (x) and Aλ (ϕ1 | x) = Aλ (ϕ2 | x). Similarly, a local property of the operator Aλ is deſned. For locality of A0 at point x it is necessary and sufſcient that (∀R > 0) Px σr = σR < ∞ −→ 0 (r −→ 0) mr (x) (see Gihman and Skorokhod [GIH 73], Dynkin [DYN 63]). Furthermore, the rather weaker property of a process is used, which can be expressed by properties of its lambda-characteristic operator for λ > 0.
94
Continuous Semi-Markov Processes
The operator Aλ is said to be pseudo-local at point x if for any R > 0 Ex 1 − e−λσr ; σr = σR < ∞ −→ 0 (r −→ 0). mr (x)
[3.5]
Correspondingly, the deſnition of a pseudo-local operator Aλ uses another normalization: Ex 1 − e−λσr ; σr = σR < ∞ −→ 0 (r −→ 0). [3.6] Px 1 − e−λσr Here we assume Px (σ0 < ∞) = 1. Thus, a local operator of the process having a ſnite interval of constancy at the beginning of its trajectory and admitting discontinuity at the end of this interval cannot be pseudo-local. Let us note that pseudo-locality of operators Aλ and Aλ does not depend on λ: if an operator is pseudo-local for one λ > 0, it is pseudo-local for any λ > 0. In order to prove this assertion the inequality (∀λ1 , λ2 , 0 < λ1 < λ2 ) (∀t > 0) can be used: 1<
λ2 1 − e−λ2 t < , 1 − e−λ1 t λ1
[3.7]
which follows from monotone decreasing of the function (1 − e−x )/x. 28. Example of a process with discontinuous trajectories The locality of the operator A0 and the pseudo-locality of Aλ for any λ > 0 follows from a.s. continuity of trajectories of the process (Px ). Let us show an example of a SM process with discontinuous trajectories such that (∀x ∈ X) (∀λ > 0) its operator Aλ is pseudo-local, but A0 is not local at point x. Let X = R+ and Px be probability measures of the process ξ(t) = x + t + Nt , where Nt is the Poisson process with intensity c. Moreover 1 − e−ct , t < r, Px σr ≤ t = 1, t ≥ r, 1 mr (x) = 1 − e−cr , c r Ex 1 − e−λσr = 1 − e−λt ce−ct dt + 1 − e−λr e−cr 0
= 1 − e−(c+λ)r
λ , c+λ
Ex 1 − e−λσr ; σr = σR ≤ λrPx σr = σR . However, for r < R < 1 Px (σr = σR ) = Px (Nr > 0) = cr + o(r) (r → 0).
General Semi-Markov Processes
95
29. Correspondence of characteristic operators THEOREM 3.3. Let (Px ) be an admissible family. The following assertions are fair: (1) if x ∈ X0 , I(·) ∈ Vλ (x), and operator Aλ is pseudo-local at point x, and the function ϕ is continuous at point x, then (a) (ϕ ∈ V0 (x)) ⇔ (ϕ ∈ Vλ (x)); (b) Aλ (ϕ | x) = A0 (ϕ | x) + ϕ(x)(Aλ (I | x) − A0 (I | x)); (2) if x ∈ X0 , I ∈ Vλμ (x) (λ, μ > 0), and operator Aλ is pseudo-local, and function ϕ is continuous at point x, then (a) (ϕ ∈ Vλ (x)) ⇔ (ϕ ∈ Vμ (x)), (b) λ−1 Aλ (ϕ | x) + ϕ(x) = a(μ, λ | x)(μ−1 Aμ (ϕ | x) + ϕ(x)).
Proof. (1) In the case of Px (σr < ∞) = 1 we have Arλ (ϕ | x) = Ar0 (ϕ | x) + ϕ(x) Arλ (I | x) − Ar0 (I | x) +
1 Ex e−λσr − 1 ϕ ◦ Xσr − ϕ ◦ X0 ; σr < ∞ . mr (x)
In this case 1 −λσr E e − 1 ϕ ◦ X − ϕ ◦ X σr 0 mr (x) x ≤ − sup ϕ(x) − ϕ x1 λ x1 ∈B(x,R)
+ 2 sup ϕ(x) x∈X
1 Ex 1 − e−λσr ; σr = σR < ∞ . mr (x)
From continuity and boundedness of the function ϕ it proves the convergence of both the right and left parts of the equality. (2) We have λ Ex e−λσr ϕ Xσr − ϕ X0 Ex 1 − e−λσr Ex − 1 − e−λσr ϕ Xσr + 1 − e−μσr ϕ Xσr = λ−1 Ex 1 − e−λσr
Arλ (ϕ | x) =
+
λ ar μ, λ | x Arμ (ϕ | x) μ
96
Continuous Semi-Markov Processes
λ ar (μ, λ | x)Arμ (ϕ | x) μ Ex 1 − e−λσr ϕ Xσr − ϕ X0 −λ Ex 1 − e−λσr Ex 1 − e−μσr ϕ Xσr − ϕ X0 + λar (μ, λ | x) . Ex 1 − e−μσr
= −λϕ(x) + λϕ(x)ar (μ, λ | x) +
According to pseudo-locality two last terms tend to zero. Convergence of the remaining terms to a limit proves the second assertion. 30. Difference of operators of the ſrst kind Let us note as a simple consequence of theorem 3.3(1) the following formula: Aμ (ϕ | x) − Aλ (ϕ | x) = Aμ (I | x) − Aλ (I | x) ϕ(x), which fulſlls for the same x, ϕ, λ, μ, which is sufſcient for theorem 3.3. 31. The meaning of the operator In the following theorem we use denotations from theorem 3.2. THEOREM 3.4. Let (Px ) ∈ SM; Δ ∈ A; x ∈ X0 ∩ Δ; the family (Px ) be lambdacontinuous on the interval [0, σΔ ) at point x; for any λ > 0 the operator Aλ be pseudo-local at point x; and also for any λ1 , λ2 > 0 I ∈ Vλλ21 . Then (∀μ ≥ 0) (∀n ∈ N) (∀λi > 0, λi = λ1 ) (∀ϕi ∈ C0 ) R(1, n) ∈ Vμ (x), Aλ1 R(1, n) | x = −Φ1,n (x). where Φ1,n (x) corresponds to denotations of theorem 3.2 while substituting into the formula the meaning of Aλ , determined in item 26. Proof. According to theorem 3.3(2), and due to continuity of R(1, n) it is sufſcient to prove the existence of a limit of function Arλ1 R(1, n) (r → 0), and to ſnd its meaning. In theorem 3.2(2) it is proved that under B(x, r) ⊂ Δ fσr λ1 , R(1, n) | x − R(1, n)(x) n −λi ti = −Ex e ϕi ◦ Xsi dti sn <σr i=1
−
n−1
[3.8]
Ex e−λk+1 σr R(k + 1, n) ◦ Xσr
k=1
×
k
sk <σr i=1
−(λi −λk+1 )ti
e
ϕi ◦ Xsi dti
.
General Semi-Markov Processes
97
In this case we cannot take the function ϕi out of the integral as r > 0. However, using continuity of ϕi at point x, and the existence of limits a(λi , λj | x), and formula [3.4], we obtain for r → 0, and λi = λj (i = j) #n Ex sn <σr i=1 e−λ1 ti ϕi ◦ Xsi dti 1 − fλr1 (I | x) λ−1 1 −→ ψ1n (x)
n
Πi (1, n)a λi , λ1 | x ,
i=1
and also −λk+1 σr
Ex e
R(k + 1, n) ◦ Xσr
∼
ψ1k (x)Ex
= ψ1k (x)
k+1
−λk+1 σr
e
k
−(λi −λk+1 )ti
e
sk <σr i=1
R(k + 1, n) ◦ Xσr
ϕi ◦ Xsi dti
k
−(λi −λk+1 )ti
e
dti
sk <σr i=1
Πi (1, k + 1) fλri R(k + 1, n) | x − R(k + 1, n)(x) .
i=1 r Divided by λ−1 1 (1 − fλ1 (I | x)), the last expression is equal to
ψ1k (x)
λ1 Πi (1, k + 1) ar λk+1 , λ1 Arλk+1 R(k + 1, n) | x λk+1 i=1
k+1
+ λ1 ar λk+1 , λ1 | x − ar λi , λ1 | x R(k + 1, n | x) + εr ,
where εr is equal to Ex e−λ1 σr − e−λk+1 σr R(k + 1, n) ◦ Xσr − R(k + 1, n) ◦ X0 , 1 − fλr1 (I | x λ−1 1 and tends to zero, according to the pseudo-locality of the operator Aλ and the continuity of R(k + 1, n). Now convergence to a limit of the function Arλ1 R(1, n | x) can be proved by induction on n ≥ 1 based for n = 1: σ Ex 0 r e−λ1 t ϕ1 ◦ Xt dt Arλ1 R(1, 1 | x) = . 1 − fλr1 (I | x) λ−1 1 It converges to the limit ϕ1 (x). Taking into account theorem 3.3, and the evident relation a(λ3 , λ2 ) × a(λ2 , λ1 ) = a(λ3 , λ1 ) we can write the obtained formula as
98
Continuous Semi-Markov Processes
1,n , where follows Aλ1 R(1, n | x) = −Φ 1,n = ψ1n (x) Φ
n
Πi (1, n)
i=1
+
n−1
ψ1k (x)
k+1
λ1 a λ i , λ1 | x λi
Πi (1, k + 1)
i=1
k=1
λ1 a λi , λ1 | x Aλi R(k + 1, n) | x . λi
1,n , but it can be shown in the same way as lemma It remains to prove that Φ1,n = Φ 3.2 is proved. 32. Another representation of operators Under conditions of theorem 3.4 the formula Aλ1 R(1, n) = −Φ1,n is fair, where Φ1,n = ψ1n
n
Πi (1, n)
i=1
− λ1
n−1
λ1 a λ i , λ1 λi
ψ1k R(k + 1, n)
k=1
k+1
Πi (1, k + 1)a λi , λ1 .
i=1
1,n Using the identity from lemma 3.1 this formula follows from Aλ1 R(1, n) = −Φ and the substitution λ1 a λi , λ1 Aλi R(k + 1, n) = Aλ1 R(k + 1, n) + λ1 R(k + 1, n) λi − λ1 R(k + 1, n)a λi , λ1 . It is true due to theorem 3.3(2). 33. Inverse operator Theorem 3.2(1) makes plausible suppositions that RσΔ (λ; Aλ ϕ) = fσΔ (λ; ϕ) − ϕ (λ ≥ 0) for a class of function wide enough. Let us denote
(λ ≥ 0), Wλ (Δ) = ϕ ∈ Vλ (Δ) : RσΔ λ; Aλ ϕ = fσΔ (λ; ϕ) − ϕ
Wλ (Δ) = ϕ ∈ Vλ (Δ) : RσΔ λ; Aλ ϕ = fσΔ (λ; ϕ) − ϕ (λ > 0). 34. Domain of the inverse operator PROPOSITION 3.9. Let (Px ) ∈ SM and Δ ⊂ X0 be such that at least one of the two following conditions is fulſlled (for denotations, see item 7):
General Semi-Markov Processes
99
(a) Δ is a closed set; (b) Δ is an open set, and (Px ) ∈ QC. (1) If (∀x ∈ Δ) Px (σΔ ) < ∞, ϕ ∈ C0 and the following conditions are fulſlled: (A) (∃M > 0) (∀r > 0) supx∈Δ |Ar0 (ϕ | x)| ≤ M ; (B) (∀x ∈ Δ) Ar0 (ϕ | x) → A0 (ϕ | x) (r → 0); (C) the function A0 ϕ is continuous on Δ; then ϕ ∈ W0 (Δ). (2) If λ > 0, ϕ ∈ C0 and the following conditions are fulſlled: (D) (∃M > 0) (∀r > 0) supx∈Δ |Arλ (ϕ | x)| ≤ M ; (E) (∀x ∈ Δ) Arλ (ϕ | x) → Aλ (ϕ | x) (r → 0); (F) the function Aλ ϕ is continuous on Δ; then ϕ ∈ Wλ (Δ). Proof. In fact, the following formulae are proved in theorem 3.2(1): Rσr Δ(r) 0; Ar0 ϕ = fσΔ(r) Lr (0; ϕ) − ϕ, Rσr Δ(r) λ; Arλ ϕ = fσΔ(r) Lr (λ; ϕ) − ϕ (λ > 0), where ϕ ∈ C0 , Δ(r) is a measurable set depending on r. If condition (a) is fulſlled we assume Δ(r) = Δ. If condition (b) is fulſlled we assume Δ(r) = Δ−r (denotation, see items 10, 21 and 26). In both cases for r → 0 fσΔ(r) ◦Lr (0; ϕ) → fσΔ (0; ϕ) and fσΔ(r) ◦Lr (λ; ϕ) → fσΔ (λ; ϕ) on the set Δ (see proposition 3.3), and also for any x ∈ Δ |σΔ(r) ◦ Lr − σΔ | → 0 Px -a.s. According to the Egorov theorem [KOL 72, p. 269], for any x ∈ Δ and ε > 0 there exists a set Eε ⊂ Δ with the measure μ0 (Δ\Eε ) ≡ RσΔ (0; IΔ\Eε | x) < ε, on which the convergence Ar0 ϕ → A0 ϕ is uniform. In this case σΔ(r) ◦Lr σΔ r Ex A dt − E A dt ϕ ◦ X ◦ L ϕ ◦ X t r x 0 t 0 0
0
= ε 1 + ε2 + ε3 + ε4 , where ε1 ≤ M Ex σΔ(r) ◦ Lr − σΔ , σΔ A0 ϕ ◦ Xt − A0 ϕ ◦ Xt ◦ Lr dt , ε 2 ≤ Ex 0
ε 3 ≤ Ex
σΔ 0
r A0 ϕ ◦ Xt ◦ Lr − A0 ϕ ◦ Xt ◦ Lr IE ◦ Xt dt , ε
ε4 ≤ 2M Ex
σΔ 0
IΔ\Eε ◦ Xt dt
≤ 2M ε.
100
Continuous Semi-Markov Processes
The ſrst of these members tends to zero under both conditions (a) and (b); the second one tends to zero because of boundedness condition (A) and continuity condition (B); the third one tends to zero because of uniform convergence on the chosen set; the fourth one can be made as small as is desired at the expense of choice of ε. The ſrst assertion is proved. Similarly, the second assertion can also be proved.
Note that in the second assertion a size of the set Δ is not speciſed at all. In particular, it can be Δ = X, which corresponds to integration on the whole half-line. In this case the second assertion establishes sufſcient conditions for the equality R(λ; Aλ ϕ) = −ϕ.
35. Dynkin’s formulae Let us note that formulae Rτ λ; Aλ ϕ = fτ (λ; ϕ) − ϕ (λ > 0), Rτ 0; A0 ϕ = fτ (0; ϕ) − ϕ (τ < ∞),
[3.9] [3.10]
justiſed in proposition 3.9 for τ = σΔ , can be considered as variants of the Dynkin’s formulae (see [DYN 63, p. 190]), which sets the connection between the inſnitesimal operator of a Markov process, the resolvent of a semi-group of transition operators, and the semi-Markov transition generating function of the Markov process. While deriving them, the Markov properties of the process were essentially being used. Therefore our formulae, derived for a general semi-Markov process, can be considered as some generalizations of the Dynkin’s formulae. Dynkin applied these formulae in order to prove the existence of the characteristic operator on the same class of functions that the inſnitesimal operator is determined on. In the case of a general semiMarkov process the Dynkin’s formulae in terms of inſnitesimal operator are not in general true at least because for a non-Markov semi-Markov process the inſnitesimal operator does not coincide with the characteristic one. We will apply these formulae in order to prove the existence of the lambda-characteristic operator in strict sense, if this operator is determined in wide sense, i.e. in such a view as it is included in the formulae, proved in proposition 3.9.
36. Operators in a strict sense Up to now operators Aλ and Aλ have been considered as limits of fractions determined by a sequence of balls (B(x, r))r>0 for r → 0. Proposition 3.9 opens possibility
General Semi-Markov Processes
101
for the limits Δ (ϕ | x), λ (ϕ | x) = lim A A λ Δ↓x
Aλ (ϕ | x) = lim AΔ λ (ϕ | x) Δ↓x
to exist for any admissible sequence of neighborhoods of point x. Here fσΔ (λ; ϕ | x) − ϕ(x) Δ A λ (ϕ | x) = mΔ (x) fσΔ (λ; ϕ | x) − ϕ(x) AΔ λ (ϕ | x) = −1 1 − fσΔ (λ; I | x) λ
(λ ≥ 0), (λ > 0),
mΔ (x) = Ex (σΔ ; σΔ < ∞). The convergence Δn ↓ x of a sequence of neighborhoods (Δn )∞ 1 of point x means that (∀ε > 0) (∃m ≥ 1) (∀n > m) Δn ⊂ B(x, ε). λ ϕ and Aλ ϕ, determined on corresponding classes of functions (eviThe operators A dently, more narrow than that deſned in item 26), are said to be lambda-characteristic operators in strict sense (of the ſrst and second kind, correspondingly). Let Vλ (x) and λ (x) be domains of the operators in strict sense at point x. V For the difference operators, determined in item 23, evidently there is no distinction between strict and narrow sense. In general it is not an evident fact (maybe it is not true). In any case the existence of operators in strict sense requires a special analysis. For operators in strict sense it is natural to re-formulate the deſnition of pseudo-locality in item 27. The limit in this deſnition with respect to a sequence of balls must be replaced by a limit with respect to a sequence of neighborhoods of more general forms. So, operators Aλ and Aλ are said to be pseudo-local in strict sense at point x if for any R > 0 the following ratios Ex 1 − e−λσΔn ; σΔn = σR Ex 1 − e−λσΔn ; σΔn = σR , (λ > 0) mΔn (x) 1 − fσΔn (λ; I | x) tend to zero for any admissible sequence Δn ↓ x. This property takes place, for example, if Px (C) = 1. Admissibility has to be corrected in every partial case. Since we admit sequences of closed sets, the ſrst condition of admissibility of the sequence (Δn ), converging to {x}, we assume the condition (∀n ≥ 1) (∃r > 0) B(x, r) ⊂ Δn . From pseudo-locality in strict sense it follows the theorem about representation of λ ϕ and Aλ ϕ, similar to theorem 3.3 and formulae item 30, is fair. On the operators A other hand, from representation Δ Δ Δ AΔ λ (ϕ | x) = A0 (ϕ | x) + ϕ(x) Aλ (I | x) − A0 (I | x) +
1 Ex e−λσΔ − 1 ϕ ◦ XσΔ − ϕ ◦ X0 mΔ (x)
102
Continuous Semi-Markov Processes
the problem of pseudo-locality in the strict sense reduces to the existence problem for operators in the strict sense.

37. Existence of operators in strict sense

THEOREM 3.5. Let (Px) ∈ SM, x ∈ X0, Δn ↓ x. Then
(1) if (∃n ≥ 1) ϕ ∈ W0(Δn) and the function A0ϕ is continuous at the point x, then ϕ ∈ V0(x) and Ã0(ϕ | x) = A0(ϕ | x);
(2) if (∃n ≥ 1) ϕ ∈ Wλ(Δn) and the function Aλϕ is continuous at the point x, then ϕ ∈ Ṽλ(x) and Ãλ(ϕ | x) = Aλ(ϕ | x) (λ > 0).

Proof. According to the definition of the operators R_{σΔ} from item 21, the following estimates hold:
inf_{x1∈Δ} A0(ϕ | x1) ≤ R_{σΔ}(0; A0ϕ | x) / m_Δ(x) ≤ sup_{x1∈Δ} A0(ϕ | x1),
inf_{x1∈Δ} Aλ(ϕ | x1) ≤ R_{σΔ}(λ; Aλϕ | x) / (λ^{−1}(1 − f_{σΔ}(λ; I | x))) ≤ sup_{x1∈Δ} Aλ(ϕ | x1).
According to the definition of the regions W from item 33, this means that
inf_{x1∈Δ} A0(ϕ | x1) ≤ A_0^Δ(ϕ | x) ≤ sup_{x1∈Δ} A0(ϕ | x1),
inf_{x1∈Δ} Aλ(ϕ | x1) ≤ A_λ^Δ(ϕ | x) ≤ sup_{x1∈Δ} Aλ(ϕ | x1).
Existence of the limits follows from the continuity of the operators in a neighborhood of the point x.

38. Arbitrary Markov time

The Dynkin formulae [3.9] and [3.10] (see item 35), having been proven under the stated suppositions for the time τ = σΔ, can be justified for τ from a wider sub-class of Markov times. Let us denote by Tλ (λ > 0) the class of all Markov times for which the Dynkin formula [3.9] is fulfilled. Let the class of ϕ ∈ C0 be such that the operator Aλ is determined and bounded on it.

PROPOSITION 3.10. Let (Px) be an admissible family of probability measures on D. Then
(1) if τ1, τ2 ∈ Tλ and τ1 ∈ RT(Px), then τ1 +̇ τ2 ∈ Tλ;
General Semi-Markov Processes
(2) if τn ∈ Tλ (n ≥ 1) and τn ↓ τ, then τ ∈ Tλ.

Proof. (1) Using the regeneration property, for any integrable function ψ we have
R_{τ1 +̇ τ2}(λ; ψ) = P·( ∫_0^{τ1 +̇ τ2} e^{−λt} ψ ∘ π_t dt )
 = P·( ∫_0^{τ1} e^{−λt} ψ ∘ π_t dt ) + P·( e^{−λτ1} ∫_0^{τ2 ∘ θ_{τ1}} e^{−λt} ψ ∘ π_t ∘ θ_{τ1} dt )
 = R_{τ1}(λ; ψ) + f_{τ1}(λ; R_{τ2}(λ; ψ)).
Setting ψ = Aλϕ, we obtain the required equality
R_{τ1 +̇ τ2}(λ; Aλϕ) = f_{τ1 +̇ τ2}(λ; ϕ) − ϕ.
(2) The second assertion follows from the boundedness of Aλϕ and the right-continuity of the process trajectories, which imply that f_{τn}(λ; ϕ) → f_τ(λ; ϕ) as n → ∞.
The application of the Dynkin formulae to more general Markov times is connected, first, with a generalization of theorem 3.2(1) to semi-Markov walks and, second, with the convergence of a sequence of embedded semi-Markov walks to the semi-Markov process, together with the corresponding limit formulae.

3.5. Criterion of Markov property for SM processes

In the class of SM walks the sub-class of Markov processes is characterized by an exponential distribution of the sojourn time in each state from the sequence of states of the process. In addition, if F(dt, dx1 | x) is a transition function of a semi-Markov walk, in the Markov case it is representable as
H(dx1 | x) exp(−a(x)t) dt,
where H(dx1 | x) is a Markov kernel on the phase space X and a(x) is some positive function. The sub-class of Markov processes in the class of general semi-Markov processes is also characterized by a property of the semi-Markov transition function. This characteristic property is exhibited in the asymptotic behavior of the semi-Markov transition function F_{σr}(dt, dx1 | x) as r → 0. This asymptotic behavior is reflected in properties of the lambda-characteristic operator Aλ. Under some regularity conditions the semi-Markov process is Markovian only in the case when the family of operators (Aλ) is a linear function of the parameter λ.
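The factorized Markov form of the transition function can be illustrated by a toy simulation (the three-state rates and kernel below are hypothetical, chosen purely for illustration): drawing the sojourn time and the next state independently reproduces a semi-Markov walk of this form, and the sojourn in state x is exponential with mean 1/a(x).

```python
import random

# hypothetical 3-state example: holding rates a(x) and Markov kernel H(. | x)
rates = {0: 1.0, 1: 2.0, 2: 0.5}
kernel = {0: [0.0, 0.7, 0.3], 1: [0.5, 0.0, 0.5], 2: [0.9, 0.1, 0.0]}

def step(x):
    """One transition of the walk from state x: the pair (sojourn, next state)
    is drawn with independent components, as in the factorized form above."""
    sojourn = random.expovariate(rates[x])
    next_state = random.choices((0, 1, 2), weights=kernel[x])[0]
    return sojourn, next_state

random.seed(1)
samples = [step(0)[0] for _ in range(20000)]   # sojourn times observed in state 0
mean_sojourn = sum(samples) / len(samples)
print(abs(mean_sojourn - 1.0 / rates[0]) < 0.05)
```

The exponential sojourn law is exactly what makes the embedded walk Markov; any other sojourn distribution in `step` would keep the process semi-Markov but destroy the Markov property.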
39. Necessary condition of the Markov property

THEOREM 3.6. Let (Px) ∈ SM; let the multi-dimensional potential operators of the process satisfy equation [3.2] (Markov condition); let (Px) be a lambda-continuous family at the point x on the interval [0, σΔ), and I ∈ ⋂_{λ≥0} Vλ(x). Then
(1) if Px(σ0 < ∞) = 1 and Px(Xσ0 = x, 0 < σ0 < ∞) = 1, then
f_{σ0}(λ; ϕ | x) = f_{σ0}(0; ϕ | x) · (1/m0(x)) / (1/m0(x) + λ);
(2) if Aλ is pseudo-local at the point x, then Aλ(I | x) = −λ.

Proof. (1) Using theorem 3.2 we obtain the formula
A_{λ1} R(1, 2) = (ϕ1/(λ2 − λ1)) A_{λ1} R(2, 2) + ϕ1 ϕ2/(λ2 − λ1).
On the other hand, from equation [3.2] it follows that A_{λ1} R(1, 2) = ϕ1 R(2, 2). Hence we obtain the equation
A_{λ1} R(2, 2) = −ϕ2 + (λ2 − λ1) R(2, 2).   [3.11]
Hence the relation
A_{λ1} R(2, 2) − A_{λ3} R(2, 2) = (λ3 − λ1) R(2, 2)   (λ3 > 0, λ3 ≠ λ1)
holds. In this relation, valid for any ϕ2 ∈ C0 and λ2 > 0, we can use the limit lim_{λ2→∞} λ2 R(2, 2) = ϕ2 (see, e.g., [DIT 65]) and replace R(2, 2) by an arbitrary function ϕ2 ∈ C0:
A_{λ1} ϕ2 − A_{λ3} ϕ2 = (λ3 − λ1) ϕ2.
Consequently, for any λ3 > 0 the relation
(f_{λ1} ϕ2 − ϕ2)/(λ1^{−1}(1 − f_{λ1} I)) + λ1 ϕ2 = (f_{λ3} ϕ2 − ϕ2)/(λ3^{−1}(1 − f_{λ3} I)) + λ3 ϕ2
holds. Letting λ3 → 0 on the right-hand side, we obtain
(f_{λ1} ϕ2 − ϕ2)/(λ1^{−1}(1 − f_{λ1} I)) + λ1 ϕ2 = (f_0 ϕ2 − ϕ2)/m0.
From here we obtain
f_{λ1} ϕ2 = ϕ2 f_{λ1} I + (f_0 ϕ2 − ϕ2)(1 − f_{λ1} I)/(m0 λ1).
105
For the given point x ∈ Δ we consider a sequence of continuous functions ϕn , equal to one at point x, and equal to zero outside of the ball B(x, rn ) (rn → 0). Substituting ϕn instead of ϕ2 , and passing to a limit, we get an equation with respect to fλ1 I, and its solution fλ1 I =
1/m0 . 1/m0 + λ1
Substituting the obtained meaning in the previous formula, we obtain fλ1 ϕ2 = f0 ϕ2
1/m0 , 1/m0 + λ1
which correspond to conditional mutual independence of the ſrst exit time and that of position from the position x with an exponent distribution of the ſrst exit time. It is a well-known Markov property in its traditional view. (2) Formula [3.11] with corresponding deſnition of the operator Aλ stays true in the case when operator Aλ is determined and pseudo-local at point x. Therefore Aλ1 R(2, 2) = ϕ2
Aλ I Aλ1 I − λ2 − λ1 R(2, 2) 1 . λ1 λ1
Applying theorem 3.3 to the operator of a continuous function R(2, 2), we obtain λ2 Aλ1 I Aλ1 I − R(2, 2) − A0 I . A0 R(2, 2) = ϕ2 λ1 λ1 From here for any λ3 > 0 (λ3 = λ1 )
ϕ2 R(2, 2) − λ2
Aλ1 I Aλ3 I − λ1 λ3
= 0.
For the given point x let ϕ2 (x) = 1 and ϕ2 (x1 ) < 1 for x1 = x. Since x ∈ X0 −1 R(2, 2)(x) < λ−1 2 for any λ2 > 0. Therefore, it is necessary that λ3 Aλ3 (I | x) = −1 λ1 Aλ1 (I | x). Hence, there exists b(x) such that (∀λ ≥ 0) Aλ (I | x) = −λb(x), where 0 ≤ b(x) ≤ 1. Let b(x) < 1. In this case we have two limits: −Arλ (I | x) −→ b(x) (r −→ 0), r→0 λ −Arλ (I | x) −→ 1 (λ −→ 0). lim λ→0 λ lim
106
Continuous Semi-Markov Processes
From here a partial derivative of the function −Arλ (I | x)/λ on λ for λ = 0 must not be bounded (tends to −∞ as r → 0). In this case 1 − fσr (λ; I | x) 1 −Arλ (I | x) = = Ex 1 − e−λσr λ λmr (x) λ mr (x) σr 1 1 1 −λt −λσr t Ex Ex σr = e dt = e dt , mr (x) mr (x) 0 0 r E σ 2 1 x r ∂ Aλ (I | x) = 1 Ex σ r 2 . te−λσr t dt ≤ mr (x) ∂λ λ 2mr (x) 0 Consequently, under this supposition there must be a non-bounded ratio of the second moment to the ſrst one as r → 0. Let us show that it is not true. LEMMA 3.3. Let P be a probability measure on (D, F ); (∃r0 > 0) (∃n ≥ 1), E(σrn0 ) < ∞; and for any r (0 < r ≤ r0 ) the moment E(σr0 ) be positive. Then the ratio E((σr )n )/E(σr ) is a non-decreasing function of r on the interval (0, r0 ). Proof. Let 0 < r1 < r2 ≤ r0 . Let us represent the product of integrals as an integral on a product of spaces. We have n n E σ r1 − E σ r1 E σ r2 E σ r2 n n σr2 ξ1 σr1 ξ2 − σr1 ξ1 σr2 ξ2 P dξ1 P dξ2 = D2
=
D2
n−1 n−1 σr2 ξ1 P dξ1 P dξ2 ≥ 0. σr2 ξ1 σr1 ξ2 − σr1 ξ1
Since for a Markov process under conditions of theorem 3.6 the ratio Ex ((σr0 )2 )/ Ex (σr0 ) is ſnite (see [GIH 73, p. 195]), the supposition that b(x) < 1 leads to contradiction. Note that for a Markov process the result of lemma 3.3 can be strengthened as follows. PROPOSITION 3.11. If conditions of theorem 3.6(2) are fulſlled, then for any n ≥ 2 and x ∈ X n /E σr = 0. lim E σr r→0
General Semi-Markov Processes
107
Proof. It follows from the Taylor formula for function e−x in a neighborhood of zero and that of the proven theorem. 40. Correlation with inſnitesimal operator The Markov property of a stochastically continuous process (Px ) makes it possible to use some results of the theory of contractive operator semi-groups. In particular, if for any x ∈ Δ Px (σΔ > t) < 1, then the operator TtΔ (ϕ | x) = Ex (ϕ ◦ Xt ; σΔ > t) is contractive. In addition, let us suppose that the semi-group of operators (TtΔ )t≥0 is continuous from the right on t and is a Feller semi-group, i.e. every operator of the semi-group maps the set of all continuous bounded functions C0 in itself. Let AΔ be an inſnitesimal operator of the semi-group of such operators. Let the domain of the operators be a set of such functions ϕ ∈ C0 that for any x ∈ Δ there exists the limit AΔ (ϕ | x) = lim
t→0
1 Δ T (ϕ | x) − ϕ(x) . t t
Then for any function g ∈ C0 and λ > 0 there exists a solution of the operator equation λϕ − AΔ ϕ = g and this solution is represented by the formula ϕ = RλΔ g ∈ C0 , where, according to our denotations, RλΔ g = RσΔ (λ; g) (see Dynkin [DYN 63, p. 43], Venzel [VEN 75], Gihman and Skorokhod [GIH 73], Ito [ITO 63]). It is well-known that such a Markov process is strictly Markovian [DYN 63, p. 144] and consequently, it is semi-Markovian. From here, according to formula [3.8], for n = 1 we have σr Δ r r Δ −λt e g Xt dt . fλ (ϕ | x) − ϕ(x) = fλ Rλ g | x − Rλ (g | x) = −Ex 0
Hence we obtain the ſrst Dynkin’s formula: σr e−λt λϕ − AΔ ϕ ◦ Xt dt . fλr (ϕ | x) − ϕ(x) = −Ex
[3.12]
Passing to a limit as λ → 0, we obtain the second Dynkin’s formula: σr Δ A ϕ ◦ Xt dt . f0r (ϕ | x) − ϕ(x) = Ex
[3.13]
0
0
Δ From the ſrst formula we obtain AΔ λ ϕ = A ϕ − λϕ. It is the lambda-characteristic operator of the second kind for a semi-Markov process, stopped at the ſrst exit time Δ from Δ. From the second formula it follows that AΔ 0 ϕ = A ϕ. It is the lambdacharacteristic operator of the ſrst kind for this process with λ = 0. If the condition of Δ Δ pseudo-locality is fulſlled, then AΔ λ and A0 does not depend on Δ. Hence A also
108
Continuous Semi-Markov Processes
does not depend on Δ. In particular, from the ſrst formula it follows that for any μ > 0 Aλ ϕ + λϕ = Aμ ϕ + μϕ. Comparing this formula with that of theorem 3.3(2), we obtain in this case a(μ, λ) = μ/λ. This means that Arλ I/λ ∼ Arμ I/μ as r → 0. Using boundedness of these ratios, and passing to converging subsequences, by lemma 3.3, we convince ourself that Aλ I = −λ. 41. When R belongs to W THEOREM 3.7. Let (Px ) ∈ SM, and Δ ⊂ X0 be such that at least one of two following conditions is fulſlled (see item 7): (a) Δ is a closed set; (b) Δ is an open set, and (Px ) ∈ QC. Besides, let (Px ) be lambda-continuous family on interval [0, σΔ ), and also for any λ1 , λ2 > 0 I ∈ Vλλ21 (Δ), and function a(λ1 , λ2 ) is continuous on Δ. Then (∀n ∈ N) (∀λi > 0, λi = λj ) (∀ϕi ∈ C0 ) R(1, n) ∈ Wλ1 (Δ), RσΔ λ1 ; Aλ1 R(1, n) = −R(1, n), where R(1, n) = RσΔ (λ1 , . . . , λn ; ϕ1 , . . . , ϕn ) (see item 24). Proof. It is sufſcient to check that for function R(1, n) (which is continuous, under the theorem condition) all the conditions of proposition 3.9(2) are fulſlled for positive functions ϕi . Convergence to a limit follows from theorem 3.4. Continuity of the limit follows from formula of item 32, and that of the theorem conditions. Let us show boundedness of a pre-limit function. In order to do it we use representation [3.8] from the proof of theorem 3.4: fσr λ1 ; R(1, n) − R(1, n) n −λi ti n e dti ≤ Ψ1 E· sn <σr i=1
+
n−1
Ψk1 R(k
−λk+1 σr
+ 1, n)E· e
k
−(λi −λk+1 )ti
e
sk <σr i=1
k=1
where B(x, r) ⊂ Δ, Ψnk =
n i=k
sup ϕi (x) : x ∈ Δ ,
R(k + 1, n) = sup R(k + 1, n)(x) : x ∈ Δ .
dti
,
General Semi-Markov Processes
109
Let 0 < λ0 ≤ λi (i = 1, . . . , n). In this case R(k + 1, n) ≤ Ψnk+1 λn−k . Furthermore, 0 it is not difſcult to show that n −λi ti e 1 − e−λ0 σr . dti ≤ λ−n 0 sn <σr i=1
Similarly the other integral can be estimated. Thus we obtain r Aλ R(1, n) | x ≤ nλ−n λ1 Ψn1 . 0 1 Hence, R(1, n) ∈ Wλ1 (Δ). The ſnal formula we obtain from the evident equality fσΔ (λ1 , R(1, n)) = 0, which follows from the relation σΔ ◦ θσΔ = 0. 42. Sufſcient condition of the Markov property THEOREM 3.8. Let the conditions of theorem 3.7 be fulſlled and (∀λ, μ > 0) a(λ, μ) = λ/μ on Δ. Then for the semi-Markov family (Px ) the Markov property [3.2] is fulſlled. Proof. By theorem 3.7 the following equality is fulſlled: RσΔ λ1 ; Aλ1 R(1, n) = −R(1, n). Besides, according to formula of item 32, and that of lemma 3.1, we have Aλ1 R(1, n) = −
n−1
ψ1k R(k
+ 1, n)
k+1
Πi (1, k + 1)λi .
i=1
k=1
LEMMA 3.4. For any positive λi with λi ≠ λj (i ≠ j) and n ≥ 3 it is true that
Σ_{i=1}^n Π_i(1, n) λi = 0;
for n = 2 the left-hand side is equal to −1.

Proof. We have
Σ_{i=1}^n Π_i(1, n) λi = Σ_{i=1}^{n−1} Π_i(1, n) λi + λn Π_n(1, n)
 = Σ_{i=1}^{n−1} Π_i(1, n−1) ( λn/(λn − λi) − 1 ) + λn Π_n(1, n)
 = λn Σ_{i=1}^{n} Π_i(1, n) − Σ_{i=1}^{n−1} Π_i(1, n−1).
By lemma 3.1, the latter expression is equal to zero for n ≥ 3. The case n = 2 is evident; it also follows from this expression if we define Π_1(1, 1) = 1.

Applying this lemma we obtain A_{λ1} R(1, n) = −ϕ1 R(2, n). Thus equation [3.2] is satisfied.

3.6. Intervals of constancy and SM processes

Intervals of constancy play an important role in the theory of semi-Markov processes. Every lambda-continuous semi-Markov process with trajectories without intervals of constancy is a Markov process. A partial converse of this result holds as well. Thus, the absence of intervals of constancy in trajectories has to be reflected in properties of the lambda-characteristic operator of the process. These properties are established for some intervals of constancy of a special form. The absence of intervals of constancy for a Markov process can be used for a new approach to the correlation between a lambda-characteristic operator and an infinitesimal one. The complete solution of the problem of intervals of constancy will be considered in Chapter 6 in terms of the Lévy measure.

43. Intervals of constancy

The function ξ ∈ D is said to have no interval of constancy on the right of a point t ≥ 0 if for any ε > 0 the function ξ is not constant on the interval [t, t + ε). Let Dt* be the set of functions having no interval of constancy on the right of the point t. Then D* = ⋂_{t∈R+} Dt* is the set of functions having no interval of constancy on the whole half-line. Evidently, D* = ⋂_{t∈R̃+} Dt*, where R̃+ is any everywhere dense subset of the half-line, for example the set of all rational positive numbers. From here
P(D*) = 1 ⟺ (∀t ∈ R̃+) P(Dt*) = 1.

44. Another sufficient condition of the Markov property

THEOREM 3.9. Let (Px) be a lambda-continuous SM family of measures. If (∀x ∈ X) Px(D*) = 1, then (Px) is a Markov family (Markov process).

Proof. Let t ∈ R̃+ and σ_{r,t} = (t +̇ σ0) ∘ Lr = σ_r^{k+1}, where σ_r^k ≤ t < σ_r^{k+1}. In this case σ_{r,t} ∈ RT, and for any ξ ∈ Dt*, σ_{r,t}(ξ) → t (r → 0).
For B ∈ F_t ⊂ F_{σ_{r,t}} and for f of the form
f = ∫_{R_+^n} ∏_{i=1}^n e^{−λi ti} (ϕi ∘ X_{si}) dti
(si = t1 + · · · + ti, ϕi ∈ C0, λi > 0) we have the equalities
Ex(f ∘ θt; B) = lim_{r→0} Ex(f ∘ θ_{σ_{r,t}}; B ∩ {σ_{r,t} < ∞})
 = lim_{r→0} Ex(E_{X_{σ_{r,t}}}(f); B ∩ {σ_{r,t} < ∞}) = Ex(E_{Xt}(f); B),
since (∀t1 ∈ R+) X_{σ_{r,t}+t1} → X_{t+t1}, Px(σ_{r,t} < ∞) → 1 and the function Ex(f) is continuous in x (this follows from lambda-continuity). Therefore t ∈ RT.

45. Another necessary condition of the Markov property

THEOREM 3.10. Let (Px) be a Markov family, and (∀x ∈ X) Px(D0*) = 1 (there is no interval of constancy at the beginning of a trajectory; see item 43). Then (∀x ∈ X) Px(D*) = 1.

Proof. Suppose (∃x ∈ X) (∃t ∈ R+) Px(Dt*) < 1. We have ξ ∈ Dt* ⇔ θt ξ ∈ D0*. From here Px(Dt*) = Px(θt^{−1} D0*) = Ex(P_{Xt}(D0*)) = 1. This is a contradiction.

46. Infinite interval of constancy

THEOREM 3.11. Let (Px) ∈ SM; Δ ⊂ X0 (Δ ∈ A); I ∈ V_λ^0(Δ); and (∀x ∈ Δ) a(0, λ | x) = 0. Then (∀x ∈ Δ) Px-a.s. an infinite interval of constancy is absent until the first exit time from Δ.

Proof. Let τ(ξ) = inf{t ≥ 0 : ξ is constant on [t, ∞)}. Evidently τ ∈ F/B(R̄+), but τ ∉ MT+ (unless τ ≡ ∞). From the definition of τ it follows that τ = lim_{r→0} τ ∘ Lr, where τ ∘ Lr ≤ τ, and therefore (∀λ > 0), as r → 0, we have Ex(e^{−λτ} ∘ Lr) → Ex(e^{−λτ}). On the other hand,
Ex(e^{−λτ} ∘ Lr) = Σ_{k=0}^∞ Ex(e^{−λσ_r^k}; σ_r^k < ∞, σ_r^{k+1} = ∞)
 = Σ_{k=0}^∞ Ex(e^{−λσ_r^k} P_{X_{σ_r^k}}(σr = ∞); σ_r^k < ∞)
 = R_λ^r( λ P·(σr = ∞) / (1 − f_{σr}(λ; I)) | x ) = λ R_λ^r( a_r(0, λ) | x )
(see theorem 3.2 and item 21). However, by the definition of a_r(0, λ) (λ > 0) (see item 26), the latter expression tends to zero (the passage to the limit can be justified in the same way as in proposition 3.9). Consequently, Ex(e^{−λτ}) = 0. Hence (∀x ∈ X) Px(τ = ∞) = 1.
47. Intervals of constancy before jumps

THEOREM 3.12. Let (Px) ∈ SM; Δ ⊂ X0 (Δ ∈ A); let the operator Aλ be pseudo-local on Δ. Then (∀x ∈ X), until the first exit time from Δ, intervals of constancy before any point of discontinuity are Px-a.s. absent in a trajectory of the process (that is, a point of discontinuity is not the right end of an interval of constancy).

Proof. Let τε(ξ) < ∞ be the first jump time of the trajectory ξ with a jump value of more than ε, i.e. if τε(ξ) = t, then ρ(ξ(t − 0), ξ(t)) > ε. Let
τ′ε(ξ) = inf{t < τε(ξ) : ξ is constant on the interval [t, τε(ξ))};
if such t does not exist, we set τ′ε(ξ) = τε(ξ). Without loss of generality it can be assumed that Px-a.s. there are no jumps with value equal to ε. Then Px-a.s. τε ∘ Lr → τε and τ′ε ∘ Lr → τ′ε. From here
Ex((e^{−λτ′ε} − e^{−λτε}) ∘ Lr) → Ex(e^{−λτ′ε} − e^{−λτε}).
On the other hand, for any sufficiently small r > 0,
Ex((e^{−λτ′ε} − e^{−λτε}) ∘ Lr) = Σ_{k=0}^∞ Ex(e^{−λσ_r^k}(1 − e^{−λσr} ∘ θ_{σ_r^k}); σ_r^k < τε, σ_r^{k+1} = τε < ∞).
Since t < τε implies τε = t +̇ τε (τε is a so-called terminal Markov time; see [BLU 68, p. 78]), the latter expression is equal to
Σ_{k=0}^∞ Ex(e^{−λσ_r^k} E_{X_{σ_r^k}}(1 − e^{−λσr}; σr = τε < ∞); σ_r^k < τε < ∞)
 ≤ Σ_{k=0}^∞ Ex(e^{−λσ_r^k} E_{X_{σ_r^k}}(1 − e^{−λσr}; σr = τε < ∞); σ_r^k < ∞)
 = R_λ^r( λ E·(1 − e^{−λσr}; σr = τε < ∞) / (1 − f_{σr}(λ; I)) | x ).
Evidently τε ≥ σ_{ε/2}, and hence for r < ε/2
Ex(1 − e^{−λσr}; σr = τε < ∞) ≤ Ex(1 − e^{−λσr}; σr = σ_{ε/2} < ∞).
Then the integral is not more than
λ R_λ^r( E·(1 − e^{−λσr}; σr = σ_{ε/2} < ∞) / (1 − f_{σr}(λ; I)) | x ).
According to the pseudo-locality of Aλ and the boundedness of the integrand for any r > 0, this value tends to zero as r → 0. Therefore Ex(e^{−λτ′ε} − e^{−λτε}) = 0. Hence (∀x ∈ X), for all ε > 0, Px(τ′ε = τε) = 1. Let τε^k be the k-th jump time with jump value of more than ε, and let [(τε^k)′, τε^k) be the maximal interval of constancy on the left of τε^k. Then for all ε (excluding at most a countable number of them)
Px((τε^k)′ < τε^k) = Px(τε^{k−1} +̇ τ′ε < τε^{k−1} +̇ τε)
 = Px(θ_{τε^{k−1}}^{−1}{τ′ε < τε}, τε^{k−1} < ∞)
 = Ex(P_{X(τε^{k−1})}(τ′ε < τε); τε^{k−1} < ∞) = 0
(evidently τε^k ∈ RT). Any jump time of ξ belongs to the set {(τ_{εn}^k), k, n ∈ N} (εn ↓ 0).
48. Another correlation with an infinitesimal operator

Let us show one more way to prove the linearity of the operator Aλ in λ. Let σ_{r,t} = (t +̇ σ0) ∘ Lr (see theorem 3.9). In this case σ_{r,t} = σ_r^{k+1} ⇔ σ_r^k ≤ t < σ_r^{k+1}. It is not difficult to show that theorem 3.2(1) is true for such a compound Markov time as τ ≡ σ_{c,t} (c > 0). Let us assume conditions ensuring the convergences
R_τ^r(λ; A_λ^r ϕ) → R_τ(λ; Aλϕ),   f_τ^r(λ; ϕ) → f_τ(λ; ϕ)   (r → 0).
In this case we would obtain the formula
R_τ(λ; Aλϕ) = f_τ(λ; ϕ) − ϕ.
If a trajectory of the original process does not contain any interval of constancy, then, evidently, τ ↓ t as c ↓ 0. Applying proposition 3.10(2), we would receive the formula
E·( ∫_0^t e^{−λs} (Aλϕ) ∘ X_s ds ) = e^{−λt} E·(ϕ ∘ X_t) − ϕ.
Dividing both sides of this equality by t and letting t tend to zero, we obtain in the limit the formula Aλϕ = Aϕ − λϕ, which was justified for a Markov process in item 40 and which implies Aλ I = −λ.
Chapter 4
Construction of Semi-Markov Processes using Semi-Markov Transition Functions
For a semi-Markov process an exposition of its distribution with the help of all finite-dimensional distributions P(X_{t1} ∈ S1, . . . , X_{tk} ∈ Sk) (k ∈ N, ti ∈ R+, Si ∈ B(X)) [KOL 36] is not natural. It would be more convenient to describe them in terms of semi-Markov transition functions Fτ(dt, dx1 | x), because for them an analog of the Kolmogorov-Chapman equation holds [GIH 73]. However, the exposition with the help of the functions Fτ makes it necessary to analyze a set of new finite-dimensional distributions: joint distributions of pairs of the first exit. The first problem is to construct such a distribution for an arbitrary family of open subsets of the phase space, and then to compose these distributions into a projective system of finite-dimensional measures. We solve this problem for an arbitrary process with trajectories from D. The second problem is to extend a projective limit of this system to a probability measure on (D, F). It can be shown that an a priori given projective system of such distributions cannot always be extended. Conditions for such extensions are given in this chapter. The presentation is adapted to semi-Markov processes [HAR 74]. Having the theory of extension of a projective system of measures up to a probability measure on (D, F), it is not difficult to give conditions under which an a priori given set of semi-Markov transition functions determines a semi-Markov process. The construction of finite-dimensional distributions in this case has some peculiarities in comparison with the Markov case. This is connected with the lack of a linear ordering in the set of first exit times. Besides, semi-Markov transition functions should have properties which are exhibited in an infinite sequence of iterations of such functions [RUS 75].
In contrast to methods of the theory of Markov processes [GIH 71], the conditions for trajectories of a semi-Markov process to be continuous look very simple in terms of semi-Markov transition functions. The necessary and sufficient condition for continuity is the concentration of the distributions of the first exit points on the boundaries of the corresponding sets. The consistency conditions for distributions of the first exit pairs, which are only sufficient in the general case, are necessary and sufficient for continuous processes. Note that it is possible to construct a theory of semi-Markov processes for which the set of regeneration times consists of the first exit times from closed subsets. The investigation shows that such a theory even has some advantages, connected with the continuity of these times with respect to any decreasing sequence of closed sets. However, in this case the construction of joint distributions of several pairs of the first exit requires more complicated conditions.

4.1. Realization of an infinite system of pairs of the first exit

For each function ξ ∈ D and sequence of open sets (Δ1, . . . , Δk) there exists “a pair” βτξ, where τ = σ_{Δ1} +̇ · · · +̇ σ_{Δk}. We will deal with finite sets of such pairs, where each pair is a point from the space Y = (R+ × X) ∪ {∞}, corresponding to sequences (Δ1, . . . , Δk) (k ≥ 1, Δi ∈ A). Before proving a theorem about the extension of a measure it is important to find conditions under which an a priori given infinite sequence of such finite sets of pairs can be realized, i.e. when there exists a function ξ ∈ D such that each pair from this sequence is a pair of the first exit of this function from the corresponding open sets.

1. Full collection, chains

Let K(A1) be the set of all finite sequences z = (Δ1, . . . , Δk) (k ∈ N0, Δi ∈ A1), where A1 ⊂ A is some system of open subsets of the set X; for k = 0 we set z = (Ø) (the empty sequence).
We say that z1 precedes z2, or z1 is less than z2, if z1 is an initial piece of the sequence z2; we designate z1 ≺ z2 (z1, z2 ∈ K(A1)). For example, z′ ≺ z, where z = (Δ1, . . . , Δk), z′ = (Δ1, . . . , Δk−1); and also (Ø) ≺ z for all non-empty z ∈ K(A1). Let us denote by Z(A1) the set of all non-empty finite subsets of the set K(A1). We call z ∈ Z(A1) a full collection if (Ø) ∈ z and for any non-empty z ∈ z: z1 ≺ z ⇒ z1 ∈ z. Let Z0(A1) be the set of all full collections z ∈ Z(A1). A full collection contains a unique minimal element (the empty sequence), but can contain more than one maximal element.
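The prefix-closure condition defining a full collection is easy to state in code. A small sketch (sequences of set labels such as "D1" stand in for sequences (Δ1, . . . , Δk); the labels are purely illustrative):

```python
def is_full_collection(z):
    """z: a set of tuples of labels; the empty tuple () plays the role of (Ø).
    z is a full collection iff () ∈ z and z is closed under taking prefixes."""
    if () not in z:
        return False
    return all(seq[:i] in z for seq in z for i in range(len(seq)))

def is_chain(z):
    """A chain is a linearly ordered full collection; for a prefix-closed
    family this is equivalent to having exactly one sequence per length."""
    return is_full_collection(z) and len({len(seq) for seq in z}) == len(z)

z1 = {(), ("D1",), ("D1", "D2")}
z2 = {(), ("D1", "D2")}                      # missing the prefix ("D1",)
print(is_full_collection(z1), is_chain(z1))  # True True
print(is_full_collection(z2))                # False
```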
A linearly ordered full collection is said to be a chain. A chain z contains a unique maximal element, max z. Let Z̄0(A1) be the set of all chains z ∈ Z0(A1). Each z ∈ Z0(A1) is representable as z = ⋃_{i=1}^k z^i, where z^i ∈ Z̄0(A1). Let us designate by c(z) the least number k in such a representation. It is the number of maximal chains in z; |z| is the number of non-empty z ∈ z. Let us designate by rank(z) the maximal diameter of an open set from which z is formed:
rank(z) = max(rank(z^i) : 1 ≤ i ≤ c(z)),
where rank(z^i) = rank(max(z^i)) = max(diam Δ_{ij} : 1 ≤ j ≤ m) if max(z^i) = (Δ_{i1}, . . . , Δ_{im}). Let (zn)₁^∞ (zn ∈ K(A1)) be a non-decreasing sequence; denote by lim zn either the maximal term of this sequence, or the unique infinite extension of all these finite sequences. In the case A1 = A (all open sets), we will write K, Z, Z0, Z̄0.

2. Correct map and realizing function

For y ∈ Y the following notations are introduced: k1(y) = t and k2(y) = x if y = (t, x) (t ∈ R+, x ∈ X); k1(y) = ∞ if y = ∞. A map ζ : z → Y (z ∈ Z) is said to be correct if the following conditions are fulfilled:
(1) (Ø) ∈ z ⇒ k1(ζ(Ø)) = 0;
(2) z1, z2 ∈ z, k1(ζ(z1)) = k1(ζ(z2)) < ∞ ⇒ k2(ζ(z1)) = k2(ζ(z2));
(3) z1, z2 ∈ z, z1 ≺ z2 ⇒ k1(ζ(z1)) ≤ k1(ζ(z2));
(4) z = (Δ1, . . . , Δn) ∈ z ⇒ ζ(z) = ∞ or k2(ζ(z)) ∈ X \ Δn, and if Δn = X, then ζ(z) = ∞;
(5) if z1, z2, z3 ∈ z, z1 = (Δ1, . . . , Δn), z2 = (Δ1, . . . , Δn+1), and k1(ζ(z1)) ≤ k1(ζ(z3)) < k1(ζ(z2)), then k2(ζ(z3)) ∈ Δn+1.
Let B(z) be the set of all such correct maps. For any z ∈ Z and ζ : z → Y for which conditions (1) and (2) of the previous definition are fulfilled and (Ø) ∈ z, the function Lζ ∈ D is defined, where (Lζ)(t) = k2(ζ(z0)) if (∃z0 ∈ z) such that k1(ζ(z0)) ≤ t and for any z ∈ z either k1(ζ(z)) ≤ k1(ζ(z0)) or k1(ζ(z)) > t.
3. Realization of correct map

PROPOSITION 4.1. If z ∈ Z0 and ζ ∈ B(z), then (∀z ∈ z) ζ(z) = β_{τz}(Lζ), where τz = 0 if z = (Ø), and τz = σ_{Δ1} +̇ · · · +̇ σ_{Δk} if z = (Δ1, . . . , Δk).

Proof. Let z = ⋃_{i=1}^k z^i, c(z) = k ≥ 1, and also
z^i = {(Ø), (Δ_{i1}), . . . , (Δ_{i1}, . . . , Δ_{i n_i})}
(n_i ∈ N0, Δ_{ij} ∈ A). Let us set z_{ij} = (Δ_{i1}, . . . , Δ_{ij}), y_{ij} = ζ(z_{ij}), y0 = ζ(Ø) and ξ = Lζ. Then ζ(Ø) = β_{τØ} ξ, and k1(y0) = 0, k2(y0) = ξ(0). Let β_{τ_{z_{ij}}} ξ = ζ(z_{ij}) for some i and j < n_i, with k1(ζ(z_{ij})) < ∞. Then τ_{z_{ij+1}} ξ = (τ_{z_{ij}} +̇ σ_{Δ_{ij+1}})ξ ≤ k1(ζ(z_{ij+1})), because either k2(ζ(z_{ij+1})) = ξ(k1(ζ(z_{ij+1}))) ∉ Δ_{ij+1}, or k1(ζ(z_{ij+1})) = ∞. On the other hand, for any z
k1(ζ(z_{ij})) ≤ k1(ζ(z)) < k1(ζ(z_{ij+1})) ⇒ k2(ζ(z)) ∈ Δ_{ij+1}.
Hence, for any t
k1(ζ(z_{ij})) ≤ t < k1(ζ(z_{ij+1})) ⇒ ξ(t) ∈ Δ_{ij+1},
i.e. τ_{z_{ij+1}} ξ ≥ k1(ζ(z_{ij+1})). From here β_{τ_{z_{ij+1}}} ξ = ζ(z_{ij+1}).

4. Admissible sequence

The sequence (zn)₁^∞ (zn ∈ Z0(A1)) is referred to as admissible if the following conditions are fulfilled:
(1) zn ⊂ z_{n+1} (n ∈ N);
(2) for any r > 0 there exists an increasing sequence of maximal chains (z_n^{i_n})_{n=1}^∞ (z_n^{i_n} ⊂ zn) such that lim max z_n^{i_n} ∈ DS(r) (a deducing sequence of rank r).
Let Σ0(A1) be the set of all admissible sequences constructed for a class of open sets A1, and Σ0 = Σ0(A).

5. Realization of a sequence

The sequence of maps (ζn)₁^∞ (ζn ∈ B(zn)) is said to be consistent on the sequence (zn)₁^∞ of elements from Z if (∀n1, n2 ∈ N) ζ_{n1} = ζ_{n2} on z_{n1} ∩ z_{n2}. A consistent sequence of maps can be extended uniquely to the union of all zn. This means that there exists a map ζ : ⋃_n zn → Y such that (∀n) ζn = ζ on zn (the projective limit). It is interesting to clarify conditions for a map ζ to be realized with the help of some function ξ ∈ D, just as is possible for a finite map (see proposition 4.1): (∀z ∈ ⋃_n zn) β_{τz} ξ = ζ(z). This function (if any) is said to be a realizing function.
In the following theorem we use the notation
z^{−ε} = (Δ1, . . . , Δ_{k−1}, Δ_k^{−ε}),
where z = (Δ1, . . . , Δ_{k−1}, Δk) and Δ^{−ε} = {x ∈ Δ : ρ(x, X \ Δ) > ε}. For a full collection z consisting of c(z) chains z^i, and for a correct map ζ on it, we denote
rt(z, ζ) = min{rank(z^i) : 1 ≤ i ≤ c(z), k1(ζ(max z^i)) > t}   (t > 0).

THEOREM 4.1. Let (ζn)₁^∞ (ζn ∈ B(zn)) be a consistent sequence of correct maps with a projective limit ζ, where (zn)₁^∞ ∈ Σ0, and let some of the following conditions be fulfilled:
(1) if δ^{(i_n)} ≡ lim max z_n^{i_n} ∈ DS, then k1(ζn(z_n^{i_n})) → ∞, where z_n^{i_n} = max z_n^{i_n};
(2) for any t > 0 and z = (Δ1, . . . , Δk) ∈ ⋃_{n=1}^∞ zn such that k1(ζn(z)) ≤ t, it holds that τ_{z^{−r_t(n)}} ξn → k1(ζn(z)) as n → ∞, where ξn is a realizing function for (zn, ζn) (constructed in proposition 4.1) and rt(n) = rt(zn, ζn);
(3) for any z = (Δ1, . . . , Δk) ∈ ⋃_{n=1}^∞ zn, if k2(ζ(z′)) ∈ Δk and k1(ζ(z)) < ∞, then k2(ζ(z)) ∈ ∂Δk, where z′ = (Δ1, . . . , Δ_{k−1}).
Then
(A) conditions (1) and (2) together are sufficient for a realizing function to exist;
(B) conditions (1) and (2) together are necessary for the existence of a realizing function which exits correctly from every set of the sequences z ∈ ⋃_{n=1}^∞ zn (see item 2.27 for the definition of correct exit);
(C) conditions (1), (2) and (3) together are necessary and sufficient for a continuous realizing function ξ to exist.

Proof. (A) Let us prove the existence of the limit (∀t ∈ R+) lim ξn(t) as n → ∞. In order to do this we use a deducing sequence of rank as small as desired, which is constructed according to the growth of the sequence (zn). We also use condition (1), which implies that the first coordinate of ζn on elements of this sequence exceeds t for sufficiently large n. From here, for any r > 0 and t > 0, (∃n1 ∈ N) (∀n ≥ n1) rt(zn, ζn) < r and k1(ζn(z_n^i)) > t for some chain z_n^i with rank not more than r. In this case (∀m ≥ n, s < t) ρ(ξn(s), ξm(s)) < r. Let us show that the latter assertion is true. Let max z_n^i = (Δ1, . . . , ΔN) and s ∈ [k1(ζn(z′)), k1(ζn(z))), where z′ = (Δ1, . . . , Δ_{k−1}) and z = (Δ1, . . . , Δk) (1 ≤ k ≤ N).

In this case the values ξn(s), ξm(s) of the functions ξn, ξm constructed in proposition 4.1 are determined by the nearest-from-the-left points k1(ζn(z(n))), k1(ζm(z(m))) for some z(n) ∈ zn, z(m) ∈ zm which belong to the interval [k1(ζn(z′)), s]. These values are k2(ζn(z(n))) and k2(ζm(z(m))) correspondingly. According to the definition of a correct exit both of them belong to the open set Δk, the diameter of which is not more than r. Therefore, for any t > 0 the sequence (ξn(t)) converges in itself, and due to the completeness of
the space X there exists a limit of this sequence, which we denote by ξ(t). Evidently, the functions ξn converge to ξ uniformly on each bounded interval. It follows that the limit function (like all pre-limit functions) belongs to the space D. In addition, it is obvious that for all z ∈ ⋃_{n=1}^∞ zn, ξ(k1(ζn(z))) = k2(ζn(z)).

Now we will prove that (∀z ∈ ⋃_{n=1}^∞ zn) β_{τz} ξ = ζn(z). Let us prove this property by induction over all chains. For z = (Ø) it is obviously true. Let us consider the chain z_k^i = {z_{i0}, z_{i1}, . . . , z_{ik}}, where z_{i0} = (Ø), z_{i1} = (Δ1), z_{ik} = (Δ1, . . . , Δk) (k ≥ 0). We assume it to belong to the union of all full collections zn. Let β_{τ(z_{ik})} ξ = ζn(z_{ik}). With respect to ζn(z_{ik+1}) there may be four possibilities.

(a) If ζn(z_{ik}) = ∞, then ζn(z_{ik+1}) = ∞; and if τ_{z_{ik}} ξ = ∞, then τ_{z_{ik+1}} ξ = ∞. Therefore β_{τ(z_{ik+1})} ξ = ζn(z_{ik+1}).

(b) Let ζn(z_{ik}) ≠ ∞ and k2(ζn(z_{ik})) ∉ Δ_{k+1}. Then β_{τ(z_{ik+1})} ξ = β_{τ(z_{ik})} ξ. In this case, for any big enough n and τ_{z_{ik}} ξ < t (when ρ(ξn(s), ξ(s)) < rt(n) for all s < t), τ_{(z_{ik+1})^{−rt(n)}} ξn = τ_{z_{ik+1}} ξ. Therefore, by condition (2), k1(ζn(z_{ik+1})) = τ_{z_{ik}} ξ and, consequently, β_{τ(z_{ik+1})} ξ = ζn(z_{ik+1}).

(c) Let ζn(z_{ik}) ≠ ∞, k2(ζn(z_{ik})) ∈ Δ_{k+1}, k1(ζn(z_{ik+1})) = t < ∞. Since k2(ζn(z_{ik+1})) ∉ Δ_{k+1}, we have τ_{z_{ik+1}} ξ ≤ t. On the other hand, for big enough n and s < t it is true that ρ(ξ(s), ξn(s)) ≤ rt(n). Therefore τ_{(z_{ik+1})^{−rt(n)}} ξn ≤ τ_{z_{ik+1}} ξ, and from condition (2) it follows that τ_{z_{ik+1}} ξ ≥ t. Hence β_{τ(z_{ik+1})} ξ = ζn(z_{ik+1}).

(d) Let ζn(z_{ik}) ≠ ∞, k2(ζn(z_{ik})) ∈ Δ_{k+1}, ζn(z_{ik+1}) = ∞. For any t ∈ R+ and big enough n it follows that ρ(ξ(s), ξn(s)) ≤ rt(n) for all s < t. From here τ_{z_{ik+1}} ξ ≥ τ_{(z_{ik+1})^{−rt(n)}} ξn > t. Hence τ_{z_{ik+1}} ξ = ∞ and ζn(z_{ik+1}) = β_{τ_{z_{ik+1}}} ξ.
(B) For the given sequence (zn) ∈ Σ0 let us consider a function ξ ∈ Π(⋃_{n=1}^∞ zn) (correct exit from every set of a countable system of open sets; see item 2.27). For any n let us determine a map ζn : zn → Y of the form ζn(z) = β_{τz} ξ. Evidently, this map is correct, and the family of maps is consistent. The first condition of the theorem is fulfilled for any function ξ ∈ D due to the definition of a deducing sequence. Let us check that the second condition is fulfilled. If k1(ζn(z_{ik})) = τ_{z_{ik}} ξ ≤ t < ∞ and n is big enough, then ρ(ξ(s), ξn(s)) < rt(n) (s < t), where ξn is the step-function constructed from the values of ξ at the points τz ξ (z ∈ zn) according to the method of proposition 4.1. Therefore
τ_{(z_{ik})^{−2rt(n)}} ξ ≤ τ_{(z_{ik})^{−rt(n)}} ξn ≤ τ_{z_{ik}} ξ.
According to the correct exit property τ_{(z_{ik})^{−2rt(n)}} ξ → τ_{z_{ik}} ξ. Hence τ_{(z_{ik})^{−rt(n)}} ξn → τ_{z_{ik}} ξ = k1(ζn(z_{ik})). If τ_{z_{ik}} = ∞ and t is any value as big as desired, then for n big enough ρ(ξ(s), ξn(s)) < rt(n) and τ_{(z_{ik})^{−2rt(n)}} ξ > t/2. From here τ_{(z_{ik})^{−rt(n)}} ξn > t/2 and τ_{(z_{ik})^{−rt(n)}} ξn → ∞ = k1(ζ(z_{ik})).
Construction of Semi-Markov Processes
121
(C) If ξ is a continuous function, it exits correctly from any system of open sets. As such, it satisfies conditions (1) and (2). Condition (3) follows immediately from the continuity of ξ. It remains to prove that under condition (3) the realizing function can be chosen continuous. Indeed, for any t > 0 and big enough n, on the interval [0, t] the function ξ_n has jumps with values not more than r_t(n). It means that on this interval the limit function ξ has jumps with values not more than 2r_t(n). Since r_t(n) ↓ 0 as n → ∞, ξ is continuous.

4.2. Extension of a measure

One sufficient condition for the existence of a projective limit of a family of probability measures is proved. Each element of the family is a probability measure on some partition of the given set. Such a partition of the space D is given, for example, by the map ξ → (β_{τ_1} ξ, ..., β_{τ_k} ξ), where τ_i ∈ T(σ_Δ, Δ ∈ A) (see item 2.14). This example will be used later.

6. Maps of partitioning and enlargement

Let Ω be a fixed set; U a directed set (set of indexes) with an order relation ≤; B(u) a partition of the set Ω (a class of non-intersecting subsets whose union is the whole set Ω) corresponding to an index u ∈ U. It is assumed that B is an isotone map: u_1 ≤ u_2 ⇒ B(u_1) ≺ B(u_2), i.e. the partition B(u_1) is coarser than B(u_2). This means that for any a_2 ∈ B(u_2) there exists a_1 ∈ B(u_1) such that a_2 ⊂ a_1. Let K_u : Ω → B(u) be a map of partition, where K_u ω = a ∈ B(u) if ω ∈ a ⊂ Ω; and let K^{u_2}_{u_1} : B(u_2) → B(u_1) (u_1 ≤ u_2) be a map of enlargement, where K^{u_2}_{u_1} a_2 = a_1 if a_2 ⊂ a_1, where a_1 ∈ B(u_1), a_2 ∈ B(u_2). In this case, if u_1 ≤ u_2 ≤ u_3, then K^{u_3}_{u_1} = K^{u_2}_{u_1} ∘ K^{u_3}_{u_2}, and K_{u_1} = K^{u_2}_{u_1} ∘ K_{u_2}. We call this property the consistency of projection operators (partition and enlargement).

7.
Determining sequence of functionals

Let Σ be a class of sequences (u_n)_1^∞ (u_n ∈ U) possessing the following properties:
(1) if (u_n)_1^∞ ∈ Σ, then u_n ≤ u_{n+1} (n ∈ N);
(2) for any sequence (u_n)_1^∞ (u_n ∈ U) there exists a sequence (u'_n)_1^∞ ∈ Σ such that (∀n ∈ N) u_n ≤ u'_n.
A sequence of non-negative functionals (h_n)_1^∞ is referred to as determining for a sequence of partitions (B(u_n))_1^∞ if from the conditions a_n ∈ B(u_n), a_n ⊃ a_{n+1} (n ∈ N) and lim inf h_n(a_n) = 0 it follows that ∩_{n=1}^∞ a_n ≠ Ø.
122
Continuous Semi-Markov Processes
8. Theorem about extension of a measure

A family of probability measures (P_u)_{u∈U} on B(B(u)) is said to be consistent if from the relation u_1 ≤ u_2 it follows that P_{u_1} = P_{u_2} ∘ (K^{u_2}_{u_1})^{-1}. A unique function P determined on the algebra of sets ∪_{u∈U} K_u^{-1} B(B(u)) by the equality P ∘ K_u^{-1} = P_u is an additive function of sets (projective system and limit; see [BLU 68]).

THEOREM 4.2. Let the following conditions be fulfilled:
(1) each partition B(u) is a Hausdorff topological space;
(2) (∀u_1, u_2 ∈ U, u_1 ≤ u_2) K^{u_2}_{u_1} ∈ B(B(u_2))/B(B(u_1)) (measurability of enlargement maps);
(3) for any finite measure μ on B(B(u_2)), any u_1 ≤ u_2, ε > 0 and B ∈ B(B(u_2)) with μ(B) > 0, there exists a compact B' ⊂ B on which K^{u_2}_{u_1} is continuous and μ(B \ B') < ε;
(4) there exists a class of majorizing sequences Σ, and for any sequence (u_n)_1^∞ ∈ Σ there is a sequence of B(B(u_n))-measurable determining functionals (h_n)_1^∞;
(5) a consistent family of probability measures (P_u)_{u∈U} on B(B(u)) (u ∈ U) is determined;
(6) for any ε > 0 and (u_n)_1^∞ ∈ Σ, P_{u_n}(h_n ≥ ε) → 0 as n → ∞.
Then there exists a probability measure P on σ(K_u^{-1}(B(B(u))), u ∈ U) such that P_u = P ∘ K_u^{-1}.

Proof. It is sufficient to prove the continuity of P. Let (M_n) be a sequence of sets such that (∀n ∈ N) M_n ⊃ M_{n+1}; M_n ∈ K_{u_n}^{-1} B(B(u_n)); M_n = K_{u_n}^{-1} S_n, S_n ∈ B(B(u_n)) and P(M_n) ≥ p > 0. By the property of the class Σ it is enough to assume that (u_n)_1^∞ ∈ Σ. The proof of the continuity is here divided into four stages.
(1) Let (h_n)_1^∞ be a determining sequence for (B(u_n))_1^∞ and A_{n_k} = {a ∈ B(u_{n_k}) : h_{n_k}(a) ≤ 1/k}, where n_k is chosen such that P_{u_{n_k}}(A_{n_k}) ≥ 1 − p · 2^{-k-1}. Let S'_{n_1} = A_{n_1} ∩ S_{n_1} and S'_{n_k} = (K^{u_{n_k}}_{u_{n_{k-1}}})^{-1} S'_{n_{k-1}} ∩ A_{n_k} ∩ S_{n_k} (k ≥ 2). Then S'_{n_k} ∈ B(B(u_{n_k})) and has the properties (∀k ∈ N):
(a) S'_{n_k} ⊃ K^{u_{n_{k+1}}}_{u_{n_k}} S'_{n_{k+1}};
(b) P_{u_{n_k}}(S'_{n_k}) ≥ p/2;
(c) a ∈ S'_{n_k} ⇒ h_{n_k}(a) ≤ 1/k.
(2) Let S''_{n_k} ⊂ S'_{n_k}, S''_{n_k} ∈ K (compact), K^{u_{n_k}}_{u_{n_{k-1}}} be continuous on S''_{n_k}, and P_{u_{n_k}}(S'_{n_k} \ S''_{n_k}) < 2^{-k-2} p. Let S°_{n_1} = S''_{n_1}, and S°_{n_k} = (K^{u_{n_k}}_{u_{n_{k-1}}})^{-1} S°_{n_{k-1}} ∩ S''_{n_k} (k ≥ 2). Then S°_{n_k} ∈ B(B(u_{n_k})) and has the properties:
(a) S°_{n_k} ⊃ K^{u_{n_{k+1}}}_{u_{n_k}} S°_{n_{k+1}};
(b) P_{u_{n_k}}(S°_{n_k}) ≥ p/4;
(c) S°_{n_k} ∈ K and K^{u_{n_k}}_{u_{n_{k-1}}} is continuous on it;
(d) a ∈ S°_{n_k} ⇒ h_{n_k}(a) ≤ 1/k.
(3) Let S*_{n_k} = ∩_{ℓ≥k} K^{u_{n_ℓ}}_{u_{n_k}} S°_{n_ℓ}. Then S*_{n_k} ∈ B(B(u_{n_k})) and has the properties:
(a) S*_{n_k} = K^{u_{n_{k+1}}}_{u_{n_k}} S*_{n_{k+1}};
(b) P_{u_{n_k}}(S*_{n_k}) ≥ p/4;
(c) a ∈ S*_{n_k} ⇒ h_{n_k}(a) ≤ 1/k.
In order to prove (a) it is necessary to establish that
(∀a ∈ S*_{n_k})  (K^{u_{n_{k+1}}}_{u_{n_k}})^{-1} a ∩ S*_{n_{k+1}} ≠ Ø.
We have
(K^{u_{n_{k+1}}}_{u_{n_k}})^{-1} a ∩ S*_{n_{k+1}} = (K^{u_{n_{k+1}}}_{u_{n_k}})^{-1} a ∩ ∩_{ℓ≥k+1} K^{u_{n_ℓ}}_{u_{n_{k+1}}} S°_{n_ℓ},
and
(K^{u_{n_{k+1}}}_{u_{n_k}})^{-1} a ∩ ∩_{ℓ=k+1}^{N} K^{u_{n_ℓ}}_{u_{n_{k+1}}} S°_{n_ℓ} ≠ Ø  (∀N ≥ k + 1),
since
a ∈ ∩_{ℓ=k+1}^{N} K^{u_{n_ℓ}}_{u_{n_k}} S°_{n_ℓ},  K^{u_{n_ℓ}}_{u_{n_k}} S°_{n_ℓ} = K^{u_{n_{k+1}}}_{u_{n_k}} K^{u_{n_ℓ}}_{u_{n_{k+1}}} S°_{n_ℓ}  (ℓ = k + 1, ..., N).
From here, since S°_{n_{k+1}} is compact and its intersection with (K^{u_{n_{k+1}}}_{u_{n_k}})^{-1} a and all K^{u_{n_ℓ}}_{u_{n_{k+1}}} S°_{n_ℓ} are closed, property (a) follows.
(4) For any a_{n_1} ∈ S*_{n_1} there exists a_{n_2} ∈ S*_{n_2} such that a_{n_1} = K^{u_{n_2}}_{u_{n_1}} a_{n_2}, and if {a_{n_1}, ..., a_{n_k}} is a sequence such that a_{n_i} ∈ S*_{n_i} and a_{n_i} = K^{u_{n_{i+1}}}_{u_{n_i}} a_{n_{i+1}} (i = 1, ..., k − 1), then it can be continued with the preservation of these properties.
By the Hausdorff theorem on a maximal inserted chain (see [BIR 84, p. 252], [KEL 68, p. 352], [KOL 72, p. 36], [SKO 70, p. 22]) there exists a sequence {a_{n_k}}_1^∞ such that a_{n_k} ∈ S*_{n_k} ⊂ B(u_{n_k}), a_{n_k} = K^{u_{n_{k+1}}}_{u_{n_k}} a_{n_{k+1}} (k = 1, 2, ...) and h_{n_k}(a_{n_k}) → 0. This sequence can be uniquely extended to all skipped numbers n: if n_{k-1} < n ≤ n_k, then a_n = K^{u_{n_k}}_{u_n} a_{n_k}, and since K^{u_3}_{u_1} = K^{u_2}_{u_1} K^{u_3}_{u_2} (u_1 ≤ u_2 ≤ u_3), we have a_n = K^{u_{n+1}}_{u_n} a_{n+1} (n ∈ N). Then according to the definition of a determining sequence ∩_{n=1}^∞ K_{u_n}^{-1} a_n = ∩_{n=1}^∞ a_n ≠ Ø. Hence, there is ω ∈ Ω such that (∀n ∈ N) ω ∈ M_n.

4.3. Construction of a measure with the given system of distributions of pairs of the first exit

The theorem on extension of a measure proved in the previous section is applied to the construction of a measure with a given family of distributions of pairs of the first exit.

9. Partitions of the set of functions

For any z ∈ Z_0 the set of correct maps, B(z), can be interpreted as a partition of the set D. The element of this partition corresponding to a map ζ ∈ B(z) is the set of all ξ ∈ D such that ζ(z) = β_{τ_z} ξ for all z ∈ z. Obviously, Z_0 is an ordered set with respect to the relation of inclusion, and B is an isotone map. Let K_z : D → B(z), (K_z ξ)(z) = β_{τ_z} ξ (z ∈ z), be an operation of projection, and let K^{z_2}_{z_1} : B(z_2) → B(z_1) (z_1 ⊂ z_2), (∀ζ ∈ B(z_2)) K^{z_2}_{z_1} ζ, be the narrowing (restriction) of ζ from z_2 to z_1. It is clear that the narrowing again leads to a correct map ζ, and the class of all maps of projection and narrowing is consistent in the sense of item 6. In this case z_1 ⊂ z_2 ⊂ z_3 ⇒ K^{z_3}_{z_1} = K^{z_2}_{z_1} ∘ K^{z_3}_{z_2} and K_{z_1} = K^{z_2}_{z_1} ∘ K_{z_2} (consistency of maps).

10. Theorem on construction of a measure

In the following theorem and up to the end of this chapter we assume that A_0 = ∪_{n=1}^∞ A(r_n), where A(r_n) is some covering of the set X by open sets of rank r_n, r_n ↓ 0, and A_1 ⊂ A is a pi-system of open sets containing A_0 (see item 3.9).

THEOREM 4.3.
Let (P_z) (z ∈ Z(A_1)) be a consistent family of probability measures on measurable topological spaces (B(z), B(B(z))) with a projective limit P such that:
(a) (∀(z_n) ∈ Σ_0(A_1)) (∀t > 0) P_{z_n}(k_1(ζ(z^n_{i_n})) ≤ t) → 0 (n → ∞), where z^n_{i_n} = max z^{i_n}_n, z^{i_n}_n being a maximal chain from z_n such that lim z^{i_n}_n ≡ δ^{i_n} ∈ DS;
(b) (∀(z_n) ∈ Σ_0(A_1)) (∀t > 0) (∀ε > 0) (∀z ∈ ∪_n z_n)
P_{z_n}(k_1(ζ_n(z)) ≤ t, |k_1(ζ_n(z)) − τ_{z−r_t(n)} ξ_n| ≥ ε) → 0  (n → ∞);
(c) for any z = (Δ_1, ..., Δ_k) ∈ z (k ≥ 1)
P_z(k_1(ζ(z')) < k_1(ζ(z)) < ∞, k_2(ζ(z)) ∈ ∂Δ_k) = 0,
where z' = (Δ_1, ..., Δ_{k-1}).
Then:
(1) condition (a) together with (b) is sufficient for a probability measure P on (D, F) to exist, where P_z = P ∘ K_z^{-1} (∀z ∈ Z(A_1));
(2) conditions (a) and (b) are necessary for a process with the measure P to exist, where either this process possesses the property of correct exit with respect to any z ∈ Z(A_1) (i.e. P(Π(z)) = 1), or it is quasi-continuous from the left (see [DYN 59, p. 150], [BLU 68, p. 45], and also item 3.7);
(3) condition (a) together with (b) and (c) is sufficient and necessary for a measure P to exist, where it is an extension of all measures P_z such that P(C) = 1 (C is the set of all continuous ξ ∈ D).

Proof. (1) It is sufficient to check for the given family conditions (1)–(4) and (6) of theorem 4.2.
The first condition (topology). Every ζ : z → Y is determined by a collection of |z| points {ζ(z_1), ..., ζ(z_{|z|})} (z_i ∈ z). Hence, any B(z) is a Hausdorff topological space and is a subset of the set Y^{|z|}.
The second condition (measurability and continuity). The map K^{z_2}_{z_1} : B(z_2) → B(z_1) (z_1 ⊂ z_2) is continuous and, consequently, measurable on the whole B(z_2), since convergence in Y^{|z_2|} is equivalent to coordinate-wise convergence.
The third condition (regularity). Every B(z) (z ∈ Z) is a locally compact Hausdorff topological space with a countable basis. For such a space, for any finite measure μ on B(B(z)), any S ∈ B(B(z)) (μ(S) > 0), and any ε > 0 there exists a compact S' ⊂ S for which μ(S \ S') < ε (see [BLU 68, p. 8]).
The fourth condition (majorizing and determining sequences). The class Σ_0(A_1) of sequences (z_n)_1^∞ (z_n ∈ Z_0(A_1)) is a Σ-class by item 7. In order to use theorem 4.2, let us construct for every (z_n)_1^∞ ∈ Σ_0(A_1) a sequence of determining functionals (h_n)_1^∞.
The sequence (ζ_n)_1^∞ can be interpreted as a decreasing sequence of subsets of the set D, where the n-th term of the sequence is an element of the partition B(z_n). By theorem 4.1, the convergence of the sequences
ρ(∞, k_1(ζ(z^n_{i_n}))) → 0  (lim max z^{i_n}_n ∈ DS),
ρ(k_1(ζ(z)), τ_{z−r_t(n)} ξ_n) → 0  (t ∈ N, z ∈ ∪_n z_n, k_1(ζ(z)) ≤ t)
as n → ∞ is sufficient for the existence of a function ξ ∈ D realizing all these ζ_n (in other words, for the decreasing sequence of subsets to have a non-empty intersection). Hence, we have to construct a sequence of functionals (h_n)_1^∞ such that h_n(ζ_n) → 0 if and only if all the previous sequences tend to zero. In order to obtain such a sequence it is sufficient to enumerate the original sequences by natural numbers and to compose a new sequence of weighted sums as follows:
h_n(ζ_n) = Σ_{i=1}^∞ 2^{-i} (1 ∧ ρ(∞, k_1(ζ_n(z^n_i)))) + Σ_{i=1}^∞ Σ_{j=1}^∞ 2^{-i-j} (1 ∧ ρ(k_1(ζ_n(z(i))), τ_{z(i)−r_j(n)} ξ_n)),
where z^n_i is the n-th term of the i-th deducing sequence; z(i) is the i-th element of the union ∪_n z_n; r_j(n) = r_j(z_n, ζ_n).
The sixth condition (convergence to zero). For the above defined determining sequence, the condition P_{z_n}(h_n ≥ ε) → 0 is, evidently, equivalent to conditions (a) and (b) combined.
(2) The necessity of conditions (a) and (b) for the extension of all P_z to the measure P of a quasi-left-continuous process follows from the property of a deducing sequence (condition (a)), and from the property of quasi-left-continuity (condition (b)): P(τ_{z−r_t(n)} → τ_z) = 1. Actually, let τ = lim_{n→∞} τ_{z−r_t(n)}. Then
P(τ_{z−r_t(n)} ↛ τ_z) = P(τ < τ_z) ≤ P(X_{τ_{z−r_t(n)}} ↛ X_τ, τ < ∞) = 0,
since lim X_{τ_{z−r_t(n)}} ∉ Δ_k, where z = (Δ_1, ..., Δ_k) and z−r_t(n) = (Δ_1, ..., Δ_{k-1}, Δ_k^{−r_t(n)}), while X_τ ∈ Δ_k if τ < τ_z.
(3) From theorem 4.1 (for continuous ξ ∈ D) it follows that the set of continuous functions C can be represented as follows:
C = {ξ ∈ D : (∀n, m ∈ N) X_{σ^m_{δ_n}} ξ ∉ Δ_{n,m+1} ∨ σ^{m+1}_{δ_n} ξ = ∞ ∨ X_{σ^{m+1}_{δ_n}} ∈ ∂Δ_{n,m+1}},
where δ_n = (Δ_{n1}, Δ_{n2}, ...) ∈ DS(r_n). From here it follows that the union of conditions (a), (b) and (c) is sufficient for P(C) = 1. The necessity of them is evident.
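The partition and enlargement machinery of items 6–8 can be made concrete on a toy example. The following sketch is an illustration only: the finite set Ω = {0,1}^3, the coordinate partitions B(u) and the Bernoulli(0.3) measure are assumptions of the example, not objects from the text. It checks that the image measures P_u = P ∘ K_u^{-1} satisfy the consistency condition P_{u_1} = P_{u_2} ∘ (K^{u_2}_{u_1})^{-1} of item 8.

```python
from itertools import product

p = 0.3
omega = list(product((0, 1), repeat=3))

def K(u, w):                  # K_u: a point of Omega -> its block in B(u)
    return w[:u]              # blocks are labelled by the first u coordinates

def enlarge(u1, u2, b2):      # K^{u2}_{u1}: block of B(u2) -> block of B(u1)
    return b2[:u1]

def P(w):                     # a probability measure on Omega
    prob = 1.0
    for c in w:
        prob *= p if c == 1 else 1.0 - p
    return prob

def P_u(u, block):            # the image measure P_u = P o K_u^{-1}
    return sum(P(w) for w in omega if K(u, w) == block)

# consistency: pushing P_{u2} forward through the enlargement map
# reproduces P_{u1}, for every u1 <= u2 and every block of B(u1)
for u1 in (1, 2, 3):
    for u2 in range(u1, 4):
        for b1 in product((0, 1), repeat=u1):
            pushed = sum(P_u(u2, b2)
                         for b2 in product((0, 1), repeat=u2)
                         if enlarge(u1, u2, b2) == b1)
            assert abs(P_u(u1, b1) - pushed) < 1e-12
print("the family (P_u) is a consistent projective family")
```

Here the directed set U = {1, 2, 3} is linearly ordered, so the projective limit is trivial; the point of theorem 4.2 is to handle the general directed case under the compactness-type conditions (1)–(4).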
4.4. Construction of a projective system of measures by semi-Markov transition functions

The construction of a joint distribution of pairs of the first exit on the basis of a given set of semi-Markov transition functions is somewhat more complicated than similar constructions with the help of Markov transition functions. This is connected with the lack of linear ordering in the set of all first exit times and their iterations. The exception is made for a chain z ∈ Z_0, where z = {(Ø), z_1, ..., z_n} and z_1 ≺ ··· ≺ z_n (n ≥ 1). In this case 0 ≤ τ_{z_1} ≤ τ_{z_2} ≤ ··· ≤ τ_{z_n}. In general the construction involves a passage from a full collection of general form to a chain. Thus, for each z ∈ Z_0, there exists a one-to-one correspondence between the set of all correct maps ζ : z → Y and all chains z' equipped with correct maps ζ' : z' → Y. This correspondence is formulated in terms of indexes of intersection connected with z.

11. Indexes of intersection

Let
z = {(Ø); z_{11}, ..., z_{1n_1}; ...; z_{k1}, ..., z_{kn_k}}  (k ≥ 0, n_i ≥ 1),
where z_{ij} = (Δ_{i1}, ..., Δ_{ij}), Δ_{ij} ∈ A_0 ⊂ A (Δ_{ij} ≠ X), i.e. z ∈ Z_0(A_0). Let A_1 be the pi-system generated by the class A_0. For k ≥ 1 we call the vector α = (j_1, ..., j_k), where (∀s ∈ {1, ..., k}) 1 ≤ j_s ≤ n_s + 1, an index of intersection connected with z. With the help of the index α we will determine intersections of sets ∩_{i=1}^k Δ_{i j_i}. In this denotation we assume Δ_{i,n_i+1} = X. Let I(z) be the collection of all indexes of intersection for the given z. For α_1, α_2 ∈ I(z), α_1 = (j_{11}, ..., j_{1k}), α_2 = (j_{21}, ..., j_{2k}) we denote: α_1 < α_2 ⇔ (∀s) j_{1s} ≤ j_{2s} and (∃s) j_{1s} < j_{2s}.

12. Indexes and generated correct maps

For any z ∈ Z_0(A_0) there is a finite increasing sequence of indexes of intersection (α_1, ..., α_N) (N ≥ 0, α_i < α_{i+1}), and also a chain z' ∈ Z_0(A_1) (z' = {(Ø), z_1, ..., z_N}) and its correct map ζ' ∈ B(z'), connected with ζ ∈ B(z). We denote k_1(ζ(z_{ij})) = t_{ij}, and if t_{ij} < ∞, then k_2(ζ(z_{ij})) = x_{ij}; k_1(ζ'(z_i)) = t_i, and if t_i < ∞, then k_2(ζ'(z_i)) = x_i. The chain z' and the map ζ' ∈ B(z') are determined by the following rules:
(1) The chain z' begins with the element (Ø). In this case ζ'(Ø) = ζ(Ø).
(2) Let (∀i : 1 ≤ i ≤ k) j_{i1} = min{j : 1 ≤ j ≤ n_i + 1, k_2(ζ(Ø)) ∈ Δ_{ij}}. Because Δ_{i,n_i+1} = X, the set in braces is not empty. If (∀i) j_{i1} = n_i + 1, then N = 0 and the construction is complete. Otherwise α_1 = (j_{11}, ..., j_{k1}), Δ_1 = ∩_{i=1}^k Δ_{i,j_{i1}}, z_1 = (Δ_1), t_1 = min_{1≤i≤k} t_{i j_{i1}}. If t_1 = ∞, then N = 1, and the construction is complete. If t_1 < ∞, then (∃i) t_1 = t_{i j_{i1}}, x_1 = x_{i j_{i1}} and N ≥ 1.
(3) Let the sequence of indexes of intersection (α_1, ..., α_m), the chain {(Ø), z_1, ..., z_m}, and the values of the correct map ζ' on this chain be constructed; let α_m = (j_{1m}, ..., j_{km}), z_m = (Δ_1, ..., Δ_m), t_m < ∞, x_m ∉ Δ_m, and also N ≥ m. Let us construct the index α_{m+1} = (j_{1,m+1}, ..., j_{k,m+1}), where (∀i : 1 ≤ i ≤ k)
j_{i,m+1} = j_{im}, if t_m < t_{i j_{im}};
j_{i,m+1} = min{j : j_{im} < j ≤ n_i + 1, x_m ∈ Δ_{ij}}, if t_m = t_{i j_{im}}.
If (∀i ≥ 1) j_{i,m+1} = n_i + 1, then N = m, and the construction is complete. Otherwise Δ_{m+1} = ∩_{i=1}^k Δ_{i j_{i,m+1}}, z_{m+1} = (Δ_1, ..., Δ_{m+1}), t_{m+1} = min_{1≤i≤k} t_{i j_{i,m+1}}. If t_{m+1} = ∞, then N = m + 1, and the construction is complete. If t_{m+1} < ∞, then (∃i ≥ 1) t_{m+1} = t_{i j_{i,m+1}}, x_{m+1} = x_{i j_{i,m+1}} and N ≥ m + 1.
The construction terminates after a finite number of steps: N ≤ n_1 + ··· + n_k. According to the definition of a correct map ζ (see item 2), the sequence of indexes is determined by the sequence of first exit points, because t_m < t_{s j_{sm}} ⇔ x_m ∈ Δ_{s j_{sm}}. For any z there exists a finite set J(z) of sequences of indexes of intersection A = (α_m)_1^N, which generates a finite partition (B(z, A))_{A∈J(z)} of the set B(z) of all correct maps ζ. Evidently, L_ζ = L_{ζ'} (see item 2), where ζ' is the correct map of the chain z' constructed above for ζ. From here it follows that for a fixed z the correspondence ζ ↔ (A, z', ζ') is one-to-one. Evidently, the partition B(z, A) is measurable: (∀z ∈ Z_0(A)) (∀A ∈ J(z)) B(z, A) ∈ B(B(z)).

13. Admissible family of semi-Markov kernels

Let A_1 ⊂ A. We will consider families of kernels (Y_Δ(B | x))_{Δ∈A_1} (x ∈ X, B ∈ B(R_+ × X)) which satisfy the following conditions:
(a) Y_Δ(· | x) (x ∈ X) is a sub-probability measure on B(R_+ × X), where
Y_Δ(B | x) = I_B((0, x)), if x ∉ Δ;
Y_Δ(B | x) = Y_Δ(B ∩ (R_+ × (X \ Δ)) | x), if x ∈ Δ,
with R_+ = (0, ∞);
(b) Y_Δ(B | ·) (B ∈ B(R_+ × X)) is a B(X)-measurable function;
(c) (∀Δ_1, Δ_2 ∈ A_1, Δ_1 ⊂ Δ_2)
Y_{Δ_2}([0, t) × S | x) = ∫_0^t ∫_X Y_{Δ_1}(dt_1 × dx_1 | x) Y_{Δ_2}([0, t − t_1) × S | x_1).
We call a family of kernels (Y_Δ) satisfying conditions (a), (b) and (c) an admissible family of semi-Markov transition functions. These conditions are necessary for Y_Δ = F_{σ_Δ} to be semi-Markov transition functions of some semi-Markov process (P_x) (see item 3.17).
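Condition (c) can be checked directly in simple discrete examples. The sketch below is only an illustration, with several assumptions not taken from the text: a symmetric random walk on {0, ..., 4} plays the role of the process, time is discrete in place of R_+, and the kernels Y_Δ (joint laws of the first exit time and first exit position) are computed by a forward recursion. It then verifies the convolution identity of condition (c) for the nested sets Δ_1 = {1, 2} ⊂ Δ_2 = {1, 2, 3}.

```python
# symmetric random walk; Y_Delta assigns to (t, y) the probability of
# first exit from Delta exactly at time t with exit position y
def step_probs(x):
    return {x - 1: 0.5, x + 1: 0.5}

def exit_dist(delta, x, tmax):
    if x not in delta:
        return {(0, x): 1.0}          # already outside: exit pair (0, x)
    alive = {x: 1.0}                  # sub-probability law while still inside
    out = {}
    for t in range(1, tmax + 1):
        nxt = {}
        for z, pz in alive.items():
            for y, q in step_probs(z).items():
                if y in delta:
                    nxt[y] = nxt.get(y, 0.0) + pz * q
                else:
                    out[(t, y)] = out.get((t, y), 0.0) + pz * q
        alive = nxt
    return out

d1, d2, T = {1, 2}, {1, 2, 3}, 40
Y1 = {x: exit_dist(d1, x, T) for x in range(5)}
Y2 = {x: exit_dist(d2, x, T) for x in range(5)}

def lhs(t, S, x):                     # Y_{d2}([0, t) x S | x)
    return sum(pr for (u, y), pr in Y2[x].items() if u < t and y in S)

def rhs(t, S, x):                     # convolution through the exit from d1
    return sum(pr * lhs(t - u, S, y)
               for (u, y), pr in Y1[x].items() if u < t)

for t in (5, 10, 20):
    for S in ({0}, {4}, {0, 4}):
        assert abs(lhs(t, S, 1) - rhs(t, S, 1)) < 1e-12
print("condition (c) holds for the random-walk kernels")
```

The identity holds here because the walk is strong Markov: the first exit from the larger set, started inside the smaller one, must pass through the first exit pair of the smaller set, which is exactly the content of condition (c).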
Consider also the family of Laplace transforms of semi-Markov transition functions:
y_Δ(λ, S | x) ≡ ∫_0^∞ e^{−λt} Y_Δ(dt × S | x),
where λ ≥ 0. Evidently, y_Δ(0, S | x) = Y_Δ(R_+ × S | x). The following properties correspond to properties (a), (b) and (c):
(d) (∀λ ≥ 0) (∀x ∈ X) y_Δ(λ, S | x) is a sub-probability measure on B(X), with
y_Δ(λ, S | x) = I_S(x), if x ∉ Δ;
y_Δ(λ, S | x) = y_Δ(λ, S \ Δ | x), if x ∈ Δ;
(e) (∀λ ≥ 0) (∀S ∈ B(X)) y_Δ(λ, S | ·) is a B(X)-measurable function;
(f) (∀λ ≥ 0) (∀Δ_1, Δ_2 ∈ A_1, Δ_1 ⊂ Δ_2)
y_{Δ_2}(λ, S | x) = ∫_X y_{Δ_1}(λ, dx_1 | x) y_{Δ_2}(λ, S | x_1).
Besides, each function y_Δ(λ, S | x) is a completely monotone function of λ. Recall that a function f(λ) (λ ≥ 0) is said to be completely monotone [FEL 67] if it is non-negative, infinitely differentiable and
(∀k ∈ N)  (−1)^k (∂^k/∂λ^k) f(λ) ≥ 0.
We call a family of kernels (y_Δ) satisfying conditions (d), (e) and (f) an admissible family of semi-Markov transition generating functions. Conditions (d), (e) and (f) are necessary for y_Δ = f_{σ_Δ} to be the semi-Markov transition generating functions of some semi-Markov process (P_x) (see item 3.17). The admissible families (Y_Δ) and (y_Δ) mutually determine each other. Furthermore, we will establish the duality of their properties providing existence of the corresponding semi-Markov process.

14. Projective system of measures

Let A_1 be a pi-system of open sets and (Y_Δ) (Δ ∈ A_1) be a family of kernels satisfying the conditions of item 13. Let z, ζ, A, z', ζ' be as in items 11 and 12. For any x ∈ X we determine the distribution P_{x,z} on B(B(z, A)) by its values on sets G ∈ B(B(z, A)) of the form
G = ∩_{i=1}^N {ζ : t_i − t_{i−1} ∈ T_i, x_i ∈ S_i ∩ Δ_{i+1}},
where T_i ∈ B(R_+), S_i ∈ B(X), t_0 = 0 and Δ_{N+1} = X:
P_{x,z}(G) = ∫ ∏_{i=1}^N Y_{Δ_i}(T_i × dx_i | x_{i−1}),
where x_0 = x and the integration is over all (x_1, ..., x_N) from the set (S_1 ∩ Δ_2) × ··· × (S_N ∩ Δ_{N+1}). The probability of the event with respect to ζ', where ζ'(z_N) = ∞, is equal to the difference of probabilities of the considered form. Without being too specific, we can accept an additional condition that all sets T_i represent intervals [0, τ_i) (τ_i > 0). The function of sets P_{x,z} constructed on the pi-system of sets G can be extended uniquely up to a measure on the whole sigma-algebra B(B(z, A)) (see, e.g., [GIH 73, DYN 63]).

THEOREM 4.4. For any x ∈ X the family of measures (P_{x,z})_{z∈Z_0(A_1)} constructed above is a projective system of measures, i.e.
(∀z_1, z_2 ∈ Z_0(A_1), z_1 ⊂ z_2)  P_{x,z_1} = P_{x,z_2} ∘ (K^{z_2}_{z_1})^{-1}.

Proof. Because of the consistency of projection operators (see items 6 and 9) it is sufficient to prove that P_{x,z_1} ∘ (K^{z_1}_z)^{-1}(G_1) = P_{x,z}(G_1) (G_1 ∈ B(B(z))), where z is obtained from z_1 by deleting the last element in some one of the chains constituting z_1. For the given z it is sufficient to consider two cases:
z_1 = {(Ø); z_{11}, ..., z_{1n_1}; ...; z_{k1}, ..., z_{kn_k}, z_{k,n_k+1}},  z_{k,n_k+1} = (z_{kn_k}, Δ)  (Δ ≠ X)
(one chain is enlarged), and
z_1 = {(Ø); z_{11}, ..., z_{1n_1}; ...; z_{k1}, ..., z_{kn_k}; z_{k+1,1}},  z_{k+1,1} = (Δ)
(the number of chains is enlarged). We consider only the first case, because the second is reduced to the first one. It is sufficient to prove the equality for sets G_1 = G of the form mentioned in the preamble of the theorem. For any ζ ∈ G the fixed sequence A ∈ J(z) determines the same order statistics of the first coordinates of the pairs ζ(z_{ij}) on the axis R_+. We consider variants of location of a new point on the time axis corresponding to the additional parameter, the sequence (z_{kn_k}, Δ) = (Δ_{k1}, ..., Δ_{kn_k}, Δ). All these variants are contained in the set (K^{z_1}_z)^{-1} B(z, A). For each of these variants there exists a corresponding specific condition on a sequence of indexes. Let s = min{m : 0 ≤ m ≤ N, j_{k,m+1} = n_k + 1}. This s gives the number of the index for which t_s = t_{kn_k}. At this instant the first exit of the function L_ζ from the sequence of sets z_{kn_k} happens. Hence, up to this number the sequences of indexes of z and z_1 coincide. Indexes with larger numbers can be determined considering the full collection z_1 = z' ∪ z'', composed of two chains: z' = {(Ø), z_1, ..., z_N} (old chain)
and z'' = {(Ø), z_1, ..., z_s, (z_s, Δ)} (new chain). The sequence of two-dimensional indexes A_m for z_1 is
((1,1), ..., (s,s), (s+1,s+1), ..., (s+m,s+1), (s+m,s+2), ..., (N,s+2)),
where m corresponds to including the value of k_1(ζ((z_s, Δ))) into the interval (t_{s+m−1}, t_{s+m}) (where 1 ≤ m ≤ N − s + 1, t_0 = 0, t_{N+1} = ∞). The appearance of a pair with second coordinate s + 2 in this sequence means the first exit from the additional set Δ in the chain z''. The sequence
A_{N−s+2} = ((1,1), ..., (s,s), (s+1,s+1), ..., (N+1,s+1)),
with k_1(ζ((z_s, Δ))) = ∞, corresponds to the case when the sequence (process) does not exit from the set Δ. According to the rule of construction of the measure on a set G by the family of kernels (Y_Δ), the measure P_{x,z_1} of the set (K^{z_1}_z)^{-1} G, determined for the N − s + 2 possible chains with corresponding sequences of indexes A_1, ..., A_{N−s+2}, is equal to
∫ ∏_{i=1}^s Y_{Δ_i}(T_i × dx_i | x_{i−1}) H_s(x_s),
where the region of integration is (S_1 ∩ Δ_2) × ··· × (S_s ∩ Δ_{s+1}). In this integral
H_s(x_s) = Σ_{m=1}^{N−s} p_m(x_s) + p(x_s) + q(x_s),
where
p_m(x_s) = ∫ ∏_{i=s+1}^{s+m−1} Y_{Δ_i∩Δ}(T_i × dx_i | x_{i−1}) g_m(x_{s+m−1}),
and the region of integration is
(S_{s+1} ∩ Δ_{s+2} \ Δ_{s+1}) × ··· × (S_{s+m−1} ∩ Δ_{s+m} \ Δ_{s+m−1}).
Besides,
g_m(x_{s+m−1}) = ∫_0^{τ_{s+m}} ∫_{Δ_{s+m}} Y_{Δ_{s+m}∩Δ}(dt × dy | x_{s+m−1}) ∫_{S_{s+m}∩Δ_{s+m+1}} Y_{Δ_{s+m}}([0, τ_{s+m} − t) × dx_{s+m} | y) h_m(x_{s+m}),
where
h_m(x_{s+m}) = ∫ ∏_{i=s+m+1}^N Y_{Δ_i}(T_i × dx_i | x_{i−1}),
and the region of integration is
(S_{s+m+1} ∩ Δ_{s+m+2}) × ··· × (S_N ∩ Δ_{N+1})  (Δ_{N+1} = X).
While interpreting these formulae it should be taken into account that the sum of an empty set of summands is equal to zero, and the product of an empty set of factors is equal to 1. In addition, we have
p(x_s) = ∫ ∏_{i=s+1}^N Y_{Δ_i∩Δ}(T_i × dx_i | x_{i−1}) Y_Δ(R_+ × X | x_N),
q(x_s) = ∫ ∏_{i=s+1}^N Y_{Δ_i∩Δ}(T_i × dx_i | x_{i−1}) (1 − Y_Δ(R_+ × X | x_N)),
where the region of integration is
(S_{s+1} ∩ Δ_{s+2} \ Δ_{s+1}) × ··· × (S_N ∩ Δ_{N+1} \ Δ_N)  (Δ_{N+1} = X).
On the other hand, according to the definition of an admissible family of kernels (item 13), for any f ∈ B and 1 ≤ s ≤ N
∫_S Y_{Δ_s}(T_s × dx_1 | x_0) f(x_1)
= ∫_0^{τ_s} ∫_{Δ_s} Y_{Δ_s∩Δ}(dt × dy | x_0) ∫_S Y_{Δ_s}([0, τ_s − t) × dx_1 | y) f(x_1)
+ ∫_{S\Δ_s} Y_{Δ_s∩Δ}(T_s × dx_1 | x_0) f(x_1).
Applying this decomposition repeatedly we obtain
H_s(x_s) = ∫ ∏_{i=s+1}^N Y_{Δ_i}(T_i × dx_i | x_{i−1}),
where the region of integration is (S_{s+1} ∩ Δ_{s+2}) × ··· × (S_N ∩ Δ_{N+1}) (Δ_{N+1} = X) and, consequently, P_{x,z}(G) = P_{x,z_1}((K^{z_1}_z)^{-1} G).
4.5. Semi-Markov processes with given transition functions

The theorem about the limit of a projective system of consistent distributions of pairs of the first exit, proved in section 4.3, is applied to the projective system constructed with the help of semi-Markov transition functions in section 4.4. The conditions are given for the projective limit to be a semi-Markov process.

15. Existence of an admissible set of measures

THEOREM 4.5. Let A_1 be a pi-system of open sets containing A_0 = ∪_{n=1}^∞ A_{r_n} (r_n ↓ 0, A_{r_n} is a covering of rank r_n); let (Y_Δ)_{Δ∈A_1} be an admissible family of kernels (see item 13), and besides let some of the following conditions be fulfilled:
(a) for any t ∈ R_+ and δ = (Δ_1, Δ_2, ...) ∈ DS(A_1), as n → ∞,
∫ ∏_{i=1}^n Y_{Δ_i}(dt_i × dx_i | x_{i−1}) → 0,
where the region of integration is X^n × {t_1 + ··· + t_n ≤ t};
(b) (∀Δ ∈ A_1) (∀ε > 0) (∀x ∈ Δ)
∫_X Y_Δ([ε, ∞) × X | x_1) Y_{Δ−r}(R_+ × dx_1 | x) → 0  (r → 0);
(c) (∀Δ ∈ A_1) (∀x ∈ Δ) (∀S ∈ B(X))
Y_Δ(R_+ × S | x) = Y_Δ(R_+ × (∂Δ ∩ S) | x).
Then:
(1) conditions (a) and (b) combined are sufficient for the finitely additive function of sets P_x corresponding to the family of probability measures (P_{x,z}) (z ∈ Z_0(A_1)) constructed in theorem 4.4 to be (∀x ∈ X) a probability measure on (D, F), and for (P_x)_{x∈X} to be an admissible family of probability measures (see item 2.42);
(2) conditions (a) and (b) are necessary for the process to exist which is determined by an admissible family of probability measures (P_x)_{x∈X} and which either possesses the property of the correct exit with respect to any z ∈ Z(A_1), or is quasi-continuous from the left (see item 3.7);
(3) conditions (a), (b) and (c) combined are sufficient and necessary for the process to exist which is determined by an admissible family of probability measures (P_x)_{x∈X} and is (∀x ∈ X) P_x-a.s. continuous.

Proof. (1) Condition (a) of theorem 4.3 follows immediately from condition (a) of this assertion, because the latter is a reformulation of the former in terms of
kernels Y_Δ. Let us show the fulfilment of condition (b) of theorem 4.3 for fixed x ∈ X. Let z = (Δ_1, ..., Δ_k) and x ≡ x_0 ∈ Δ_1. Evidently, (∀r > 0) for big enough n
P_{x,z_n}(r_t(n) ≥ r) ≤ P_{x,z_n}(k_1(ζ_n(z^n_{i_n})) ≤ t),
where lim z^{i_n}_n ∈ DS(r) and lim z^{i_n}_n ⊂ ∪_n z_n, according to the definition of a correct sequence (z_n). On the other hand,
P_{x,z_n}(k_1(ζ_n(z)) ≤ t, |k_1(ζ(z)) − τ_{z−r_t(n)} ξ_n| ≥ ε) ≤ P_{x,z_n}(r_t(n) ≥ r) + P_{x,z_n}(k_1(ζ_n(z)) < ∞, |k_1(ζ(z)) − τ_{z−r} ξ_n| ≥ ε).
We consider the case when t_1 < ··· < t_k (t_i is the first exit time from the i-th subsequence of z). The variants when at least one of the inequalities (but not all of them) turns into an equality can be considered similarly. In this case, according to the construction, the second summand is equal to
∫ ∏_{i=1}^{k−1} Y_{Δ_i}(R_+ × dx_i | x_{i−1}) ∫_{Δ_k} Y_{Δ_k−r}(R_+ × dy | x_{k−1}) Y_{Δ_k}([ε, ∞) × X | y),
where the external integral is determined on the set Δ_1 × ··· × Δ_{k−1}. The internal integral is bounded and tends to zero under condition (b) of the assertion. From here the fulfilment of condition (b) of theorem 4.3 follows. Thus, the family of probability measures (P_x) is constructed. The admissibility of this family follows from the condition P_{x,z}(k_2(ζ(Ø)) = x) = 1 and the measurability of all functions Y_Δ(B | ·).
(2) The necessity of condition (a) is evident. A proof of the necessity of condition (b) repeats the proof of assertion (2) of theorem 4.3 for z = (Δ).
(3) Evidently, condition (c) of theorem 4.3 follows from condition (c) of the present assertion. From here the third assertion follows.

16. Semi-Markovness of the constructed family

THEOREM 4.6. Let (P_x) be the admissible family of probability measures constructed in theorem 4.5. Then (∀Δ ∈ A_1) σ_Δ ∈ RT(P_x).

Proof. Let Δ ∈ A_1 and δ ≡ (Δ_1, Δ_2, ...) ∈ DS(A_1). The sequence δ ∩ Δ ≡ (Δ_1 ∩ Δ, Δ_2 ∩ Δ, ...) is then deduced from Δ (see items 2.17 and 2.20). Let
Q_1 = ∩_{i=1}^n β^{-1}_{σ^i_{δ∩Δ}}(T_i × S_i)  and  Q_2 = ∩_{j=1}^k β^{-1}_{σ^j_δ}(T'_j × S'_j),
where T_i, T'_j ∈ B(R_+), S_i, S'_j ∈ B(X), k, n ∈ N (for the denotation σ^n_δ see item 2.14). Then
P_x(θ^{-1}_{σ_Δ} Q_2 ∩ Q_1 ∩ {σ_Δ < ∞}) = Σ_{m=0}^∞ P_x(θ^{-1}_{σ_Δ} Q_2 ∩ Q_1 ∩ {σ_Δ = σ^m_{δ∩Δ}} ∩ {σ_Δ < ∞}).
In this case
θ^{-1}_{σ_Δ} Q_2 ∩ Q_1 ∩ {σ_Δ = σ^m_{δ∩Δ}} ∩ {σ_Δ < ∞} ∈ K_z^{-1} B(B(z)),
where z = {(Ø), z_1, ...} is a chain with the common term of the form z_m = (Δ_1 ∩ Δ, ..., Δ_m ∩ Δ) (see item 9). According to the construction of such a probability, the sequence of values ζ(z_i) determines a semi-Markov renewal process, i.e. a two-dimensional Markov chain of the special form considered in Chapter 1. The probability P_x(θ^{-1}_{σ_Δ} Q_2 ∩ Q_1 ∩ {σ_Δ = σ^m_{δ∩Δ}}) is determined by an iterative method on the basis of the family of kernels Y_{Δ_i∩Δ} and Y_{Δ_j} (i = 1, ..., m; j = 1, ..., n). In this case, according to the construction of the measures P_{x,z}, we have
P_x(θ^{-1}_{σ_Δ} Q_2 ∩ Q_1 ∩ {σ_Δ = σ^m_{δ∩Δ}} ∩ {σ_Δ < ∞}) = E_x(P_{π_{σ_Δ}}(Q_2); Q_1 ∩ {σ_Δ = σ^m_{δ∩Δ}} ∩ {σ_Δ < ∞})
(see items 1.10 and 1.17). Therefore
P_x(θ^{-1}_{σ_Δ} Q_2, Q_1 ∩ {σ_Δ < ∞}) = E_x(P_{π_{σ_Δ}}(Q_2); Q_1 ∩ {σ_Δ < ∞}).
Since the rank of the sequence δ can be as small as desired (see item 2.20), the sigma-algebras F_{σ_Δ} (F) are generated by the sets Q_1 (Q_2) (see item 2.22). From here the previous formula is true for all Q_1 ∈ F_{σ_Δ} and Q_2 ∈ F.

17. A system of transition generating functions

The conditions of theorem 4.5 can be reformulated in terms of semi-Markov transition generating functions.

PROPOSITION 4.2. Let (y_Δ) (Δ ∈ A_1) be an admissible family of semi-Markov transition generating functions corresponding to the family of semi-Markov transition functions (kernels) (Y_Δ) (Δ ∈ A_1) (see item 13). Then conditions (a), (b) and (c) of theorem 4.5 are equivalent to the corresponding conditions (d), (e) and (f):
(d) for any λ > 0 and δ = (Δ_1, Δ_2, ...) ∈ DS(A_1), as n → ∞,
∫_{X^n} ∏_{i=1}^n y_{Δ_i}(λ, dx_i | x_{i−1}) → 0;
(e) (∀λ > 0) (∀Δ ∈ A_1) (∀x ∈ Δ)
∫_X y_Δ(λ, X | x_1) y_{Δ−r}(0, dx_1 | x) → y_Δ(0, X | x)  (r → 0);
(f) (∀λ > 0) (∀Δ ∈ A_1) (∀x ∈ Δ) (∀S ∈ B(X)) y_Δ(λ, S | x) = y_Δ(λ, ∂Δ ∩ S | x).
Proof. (1) Let us designate
F_n(t) = ∫ ∏_{i=1}^n Y_{Δ_i}(dt_i × dx_i | x_{i−1}),
where the region of integration is X^n × {t_1 + ··· + t_n ≤ t}. Applying the Laplace transformation to this function we obtain
∫_0^∞ e^{−λt} F_n(t) dt = (1/λ) ∫_{X^n} ∏_{i=1}^n y_{Δ_i}(λ, dx_i | x_{i−1}).
Hence, properties (a) and (d) are equivalent.
(2) Let us designate
G_r(t) = ∫_X Y_Δ([t, ∞) × X | x_1) Y_{Δ−r}(R_+ × dx_1 | x).
Applying the Laplace transformation to this function and using the equality Y_Δ(R_+ × S | x) = y_Δ(0, S | x), we obtain
∫_0^∞ e^{−λt} G_r(t) dt = (1/λ) ∫_X [y_Δ(0, X | x_1) − y_Δ(λ, X | x_1)] y_{Δ−r}(0, dx_1 | x).
From here, by the boundedness of G_r(t), the implication (b) ⇒ (e) follows. For an arbitrary function G_r(t) the inverse implication is true for almost all t. However, for functions monotone in t (such as G_r(t)), convergence for almost all t implies convergence for all t. Hence the implication (e) ⇒ (b) is true.
(3) The equivalence of conditions (c) and (f) is evident.
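The Laplace-transform step of part (1) can be checked numerically on the simplest possible kernel. In the sketch below the exponential distribution with rate μ = 2 (an assumption of the example, not a kernel from the text) plays the role of the first exit time distribution, so that F(t) = 1 − e^{−μt} and y(λ) = μ/(λ + μ), and a midpoint quadrature of ∫_0^∞ e^{−λt} F(t) dt is compared with y(λ)/λ.

```python
import math

mu = 2.0

def F(t):            # distribution function of an exponential exit time
    return 1.0 - math.exp(-mu * t)

def y(lam):          # its Laplace transform (transition generating function)
    return mu / (lam + mu)

def laplace(f, lam, h=1e-3, T=60.0):
    # midpoint Riemann sum for the truncated integral int_0^T e^{-lam t} f(t) dt
    n = int(T / h)
    return sum(math.exp(-lam * (k + 0.5) * h) * f((k + 0.5) * h)
               for k in range(n)) * h

for lam in (0.5, 1.0, 3.0):
    assert abs(laplace(F, lam) - y(lam) / lam) < 1e-5
print("Laplace identity int e^{-lam t} F(t) dt = y(lam)/lam verified")
```

The factor 1/λ is exactly the difference between transforming the measure Y(dt) and transforming its distribution function F(t), which is the computation used twice in the proof above.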
Chapter 5
Semi-Markov Processes of Diffusion Type
A semi-Markov process of diffusion type is described by a second-order differential equation. As for a Markov process, it is possible to define a class of such processes on the basis of the local behavior of their transition functions. The family of Markov transition functions determines a semi-group of operators with an infinitesimal operator of differential type. The coefficients of this differential operator are Kolmogorov's coefficients, a local drift and a local variance, determined by the limit behavior of the process at a fixed time t as t → 0. For semi-Markov processes the family of Markov transition functions generally speaking does not form a semi-group, so it is useless to investigate them with the help of an infinitesimal operator. However, for these processes there is another aspect of local characterization, with the help of distributions of the first exit time and the corresponding first exit position (called the first exit pair) of the process leaving a small neighborhood of the initial point. It is shown that in the case of regular asymptotics of the distribution, when the diameter of the neighborhood tends to zero, the transition generating function of the process satisfies a second-order elliptic differential equation with coefficients determined by the first terms of this asymptotics. It is interesting to reverse this outcome. The inverse problem consists of (1) construction of a semi-Markov process with transition generating functions satisfying the a priori given differential equation, and (2) investigation of the asymptotic properties of these functions as solutions of a Dirichlet problem with various regions of determination of this equation.
5.1. One-dimensional semi-Markov processes of diffusion type

The continuous semi-Markov processes on the line represent the most natural class of processes, which are easy to study with the help of the first exit time. In this case X = R ≡ (−∞, ∞). The integral equation which the transition generating functions obey turns into a difference equation, and under some regularity conditions it turns into a differential equation. The lambda-characteristic operator of such a process has the form of a differential operator of the second order. Such a process can be obtained from a Markov process with the help of a special transformation which, as will be seen in the following chapter, is equivalent to a time change.
5.1.1. Differential equation for transition functions

A transition generating function fG(λ, dx1 | x) of a continuous semi-Markov process on the line, in the case when G is a finite interval and x ∈ G, splits into two functions gG(λ, x) and hG(λ, x) corresponding to the first exit from the interval through the left or right end [HAR 80b, HAR 83]. If these functions tend to non-degenerate limits when the ends of the interval tend to some internal point x, the first three coefficients of the asymptotics are continuous, and the functions gG and hG are twice differentiable in x, then these functions obey second-order differential equations. In the following section the condition on differentiability of the functions will be removed.
1. Functions g and h

Let X = R, (Px) ∈ SM, and (∀x ∈ X) Px(C) = 1. If G = (a, b) (a < b) and x ∈ [G], then

fG(λ, X | x) = fG(λ, {a} | x) + fG(λ, {b} | x).

Let us designate fG(λ, {a} | x) = gG(λ, x) and fG(λ, {b} | x) = hG(λ, x). In addition hG(λ, a) = gG(λ, b) = 0 and gG(λ, a) = hG(λ, b) = 1. The functions gG(λ, x) and hG(λ, x) (G ∈ A0, λ > 0, x ∈ X) completely determine a semi-Markov process, where A0 is the set of all finite open intervals on the line. If G1 ⊂ G2, then σG2 = σG1 +̇ σG2, and the formula of theorem 3.5(2) can be rewritten as

fG2(λ, S | x) = ∫_X fG1(λ, dx1 | x) fG2(λ, S | x1).
In the case of G2 = (a, b), G1 = (c, d) (a ≤ c < d ≤ b) and x ∈ [G1] this equation passes into a system of two equations:

gG2(λ, x) = gG1(λ, x) gG2(λ, c) + hG1(λ, x) gG2(λ, d),   [5.1]

hG2(λ, x) = gG1(λ, x) hG2(λ, c) + hG1(λ, x) hG2(λ, d).   [5.2]
Assuming d = b, we discover that gG2(λ, x) does not increase on (a, b). Similarly, hG2(λ, x) does not decrease.

2. Symmetric neighborhood of an initial point

If G1 = B(x, r) ≡ (x − r, x + r), we write hG1(λ, x) = hr(λ, x) and gG1(λ, x) = gr(λ, x) (r > 0). Thus, the previous system of equations passes into the following system of difference equations:

gG(λ, x) = gr(λ, x) gG(λ, x − r) + hr(λ, x) gG(λ, x + r),
hG(λ, x) = gr(λ, x) hG(λ, x − r) + hr(λ, x) hG(λ, x + r),

where B(x, r) ⊂ G. Let us assume that for any λ ≥ 0 and x ∈ G0 (G0 is an open interval) the following expansions hold:

gr(λ, x) = C1(λ, x) + A1(λ, x) r + B1(λ, x) r²/2 + o(r²),   [5.3]
hr(λ, x) = C2(λ, x) + A2(λ, x) r + B2(λ, x) r²/2 + o(r²)   [5.4]

uniformly in x in each finite interval. Let all coefficients of these expansions be continuous functions of x, and let both C1 and C2 be positive. Under these conditions we will prove that gG and hG are twice continuously differentiable and satisfy a differential equation of the second order, which will be given below.

3. Corollaries from conditions on C factors

LEMMA 5.1. Let (∀λ ≥ 0) (∀x ∈ X)

gr(λ, x) → C1(λ, x),   hr(λ, x) → C2(λ, x)   (r → 0)

uniformly in x in each finite interval G ⊂ G0, where C1 and C2 are continuous in x and positive. Then (∀λ ≥ 0):
(1) (∀x ∈ G) C1(λ, x) + C2(λ, x) = 1, and the Ci do not depend on λ;
(2) the function hG(λ, x) is strictly increasing, and gG(λ, x) is strictly decreasing, in x ∈ G;
(3) if (∃x ∈ G) hG(λ, x) + gG(λ, x) = 1, then (∀x ∈ G) hG(λ, x) + gG(λ, x) = 1.
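The consistency system [5.1]–[5.2] can be checked numerically in a concrete case. For standard Brownian motion (a continuous semi-Markov process) the exit generating functions of a finite interval are known in closed form, gG(λ, x) = sh((b − x)√(2λ))/sh((b − a)√(2λ)) and hG(λ, x) = sh((x − a)√(2λ))/sh((b − a)√(2λ)); these closed forms are standard facts about Brownian first exit times, assumed here rather than taken from the text. A minimal Python sketch:

```python
from math import sinh, sqrt

def g(lam, a, b, x):
    # E_x[ e^{-lam * sigma_G}; exit of G = (a, b) through a ], standard BM
    return sinh((b - x) * sqrt(2 * lam)) / sinh((b - a) * sqrt(2 * lam))

def h(lam, a, b, x):
    # E_x[ e^{-lam * sigma_G}; exit of G = (a, b) through b ]
    return sinh((x - a) * sqrt(2 * lam)) / sinh((b - a) * sqrt(2 * lam))

lam = 0.7
a, b = 0.0, 1.0          # G2 = (a, b)
c, d = 0.2, 0.9          # G1 = (c, d), G1 a subset of G2
x = 0.5                  # x in [G1]

# [5.1]: g_{G2}(x) = g_{G1}(x) g_{G2}(c) + h_{G1}(x) g_{G2}(d)
lhs = g(lam, a, b, x)
rhs = g(lam, c, d, x) * g(lam, a, b, c) + h(lam, c, d, x) * g(lam, a, b, d)
print(abs(lhs - rhs))    # close to machine zero: the decomposition holds
```

The identity is exact (it is the strong Markov property at the first exit from G1), so the printed difference is at rounding-error level.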
Proof. (1) Since gG(λ, x) = C1(λ, x) gG(λ, x − 0) + C2(λ, x) gG(λ, x + 0) and gG(λ, x) is continuous almost everywhere on G, from the continuity and positiveness of C1 and C2 it follows that C1(λ, x) + C2(λ, x) = 1. Moreover, as C1 and C2 are principal terms of the expansions of the functions gr and hr correspondingly, which are non-increasing in λ, they also do not depend on λ.

(2) Let G = (a, b) and 2r < b − a. We have

hG(λ, b − r) = hr(λ, b − r) + gr(λ, b − r) hG(λ, b − 2r)
 = C(b − r) + (1 − C(b − r)) hG(λ, b − 2r) + εr,

where εr → 0 (r → 0). From here

1 − hG(λ, b − 2r) = (hG(λ, b − r) − hG(λ, b − 2r) − εr) C(b − r)^{−1},

and consequently hG(λ, b − 0) = 1. In addition gG(λ, a + 0) = 1. Hence, hG(λ, a + 0) = gG(λ, b − 0) = 0. From here

hG(λ, x − r) = h(a,x)(λ, x − r) hG(λ, x) → hG(λ, x),
hG(λ, x + r) = h(x,b)(λ, x + r) + g(x,b)(λ, x + r) hG(λ, x) → hG(λ, x).

The same is true for gG(λ, x), i.e. hG(λ, x) and gG(λ, x) are continuous in x ∈ G. Furthermore, from the positiveness of C1, C2 it follows that (∃r0 > 0) (∃ε > 0) (∀x ∈ G) (∀r < r0) hr(λ, x) ≥ ε. Then hG(λ, x) ≥ ∏_{k=0}^{n} hr(λ, x + kr) > 0, where n = (b − x)/r is an integer and r ≤ x − a, i.e. hG(λ, x) (and also gG(λ, x)) is positive on G. From here gG(λ, x), hG(λ, x) are less than 1 for x ∈ G, and since hG(λ, x1) = h(a,x2)(λ, x1) hG(λ, x2), we have hG(λ, x1) < hG(λ, x2) (x1 < x2). Similarly, gG(λ, x1) > gG(λ, x2).

(3) Let x0 ∈ G and hG(λ, x0) + gG(λ, x0) = 1. Then for x ∈ G and x < x0 we have

hG(λ, x0) = h(x,b)(λ, x0) + g(x,b)(λ, x0) hG(λ, x),
gG(λ, x0) = g(x,b)(λ, x0) gG(λ, x),

hence

hG(λ, x) + gG(λ, x) = (1 − h(x,b)(λ, x0))/g(x,b)(λ, x0) ≥ 1.

The same holds for x > x0.
4. Corollaries from conditions on A and C factors

LEMMA 5.2. Let (∀λ ≥ 0) (∀x ∈ G0)

hr(λ, x) = C(x) + A2(λ, x) r + o(r),
gr(λ, x) = 1 − C(x) + A1(λ, x) r + o(r)

uniformly in x on each finite interval, where the coefficients of the expansions are continuous in x, and C(x) and 1 − C(x) are positive. Then (∀x ∈ G0):
(1) C(x) = 1/2;
(2) A2(λ, x) = −A1(λ, x), and A2(λ, x), A1(λ, x) do not depend on λ.

Proof. For x ∈ G, where [G] ⊂ G0, we have

gG(λ, x) = C(x) gG(λ, x + r) + (1 − C(x)) gG(λ, x − r)
 + A2(λ, x) r gG(λ, x + r) + A1(λ, x) r gG(λ, x − r) + o(r),

hence

(gG(λ, x) − gG(λ, x − r))(1 − C(x)) − (gG(λ, x + r) − gG(λ, x)) C(x)
 = A2(λ, x) r gG(λ, x + r) + A1(λ, x) r gG(λ, x − r) + o(r).

Similarly we have

(hG(λ, x) − hG(λ, x − r))(1 − C(x)) − (hG(λ, x + r) − hG(λ, x)) C(x)
 = A2(λ, x) r hG(λ, x + r) + A1(λ, x) r hG(λ, x − r) + o(r).

It is known that the derivative of a non-decreasing (non-increasing) function exists almost everywhere on the interval (see, e.g., [NAT 74, p. 199]). From here it follows that the derivative of the strictly increasing (strictly decreasing) function is positive (negative) almost everywhere. Let us divide both parts of the previous expressions by r and let r tend to zero. Let G′ ⊂ G be the set on which there exist a negative derivative g′G(λ, x) and a positive derivative h′G(λ, x). For x ∈ G′ we have

(1 − 2C(x)) g′G(λ, x) = (A2(λ, x) + A1(λ, x)) gG(λ, x),
(1 − 2C(x)) h′G(λ, x) = (A2(λ, x) + A1(λ, x)) hG(λ, x).

Since gG(λ, x) and hG(λ, x) are positive, the supposition that at least one of the coefficients 1 − 2C(x) or A2(λ, x) + A1(λ, x) differs from zero brings a contradiction. From the continuity of the functions C and Ai it follows that 1 − 2C(x) = 0 and A2(λ, x) + A1(λ, x) = 0 for all x ∈ G, and since both functions A2(λ, x) and A1(λ, x) do not increase in λ, they do not depend on λ.
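At λ = 0 the conclusions of lemma 5.2 can be illustrated by Brownian motion with drift μ (an outside example, assumed here): the exit probabilities of a symmetric neighborhood follow from the classical scale-function formula, and the ratios (hr − 1/2)/r and (gr − 1/2)/r approach ±μ/2, so that C = 1/2 and A2 = −A1, with A(x) ≡ μ in the designation below. A sketch:

```python
from math import exp

mu = 0.3   # drift of the Brownian motion (illustrative value), sigma = 1

def h_r(r, mu):
    # P_x( exit of (x - r, x + r) through the right end ), scale s(y) = e^{-2 mu y}
    return (exp(2 * mu * r) - 1) / (exp(2 * mu * r) - exp(-2 * mu * r))

for r in [1e-2, 1e-3, 1e-4]:
    h = h_r(r, mu)
    g = 1 - h                    # at lambda = 0 the exit is certain
    # expansions: h_r = 1/2 + (mu/2) r + o(r),  g_r = 1/2 - (mu/2) r + o(r)
    print(r, (h - 0.5) / r, (g - 0.5) / r)   # ratios approach +/- mu/2 = +/- 0.15
```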
Under the conditions of lemma 5.2 let us designate

(1/2) A(x) ≡ A2(λ, x) = −A1(λ, x).

5. Corollaries from conditions on B factors

LEMMA 5.3. Let (∀λ ≥ 0)

gr(λ, x) = 1/2 − A(x) r/2 + B1(λ, x) r²/2 + o(r²),
hr(λ, x) = 1/2 + A(x) r/2 + B2(λ, x) r²/2 + o(r²),   [5.5]

and (∃G1 ∈ A0) (∃x ∈ G1) gG1(λ, x) and hG1(λ, x) are twice differentiable at the point x. Then:
(1) (∀G ∈ A0, G1 ⊂ G) gG(λ, x), hG(λ, x) are twice differentiable at the point x and their derivatives satisfy the equations

g″G(λ, x) + 2A(x) g′G(λ, x) + (B1(λ, x) + B2(λ, x)) gG(λ, x) = 0,
h″G(λ, x) + 2A(x) h′G(λ, x) + (B1(λ, x) + B2(λ, x)) hG(λ, x) = 0;

(2) (∀λ > 0) B1(λ, x) + B2(λ, x) < 0; −∂Bi(λ, x)/∂λ is a completely monotone function of λ (i = 1, 2); if gG1(0, x) + hG1(0, x) = 1, then B1(0, x) + B2(0, x) = 0.

Proof. (1) We have

gG(λ, x) = (1/2) gG(λ, x − r) + (1/2) gG(λ, x + r) + (A(x) r/2)(gG(λ, x + r) − gG(λ, x − r))
 + (1/2) B1(λ, x) r² gG(λ, x − r) + (1/2) B2(λ, x) r² gG(λ, x + r) + o(r²),

from which the first equation for G = G1 follows. Similarly the second equation can be derived. From equations [5.1] and [5.2] it follows that these formulae are true for any interval including G1.

(2) We have (∀λ > 0)

gr(λ, x) + hr(λ, x) = 1 + (B1(λ, x) + B2(λ, x)) r²/2 + o(r²) < 1.

Hence, B1(λ, x) + B2(λ, x) < 0. Furthermore, according to the definition, gr(λ, x) and hr(λ, x) are completely monotone functions of λ; and since B1(λ, x) and B2(λ, x)
are limits of the sequences (2(gr(λ, x) − 1/2 + A(x)r/2)/r²) and (2(hr(λ, x) − 1/2 − A(x)r/2)/r²) (r > 0) as r → 0 correspondingly, the property of the derivative of a completely monotone function of λ > 0 is fulfilled (except perhaps the non-negativeness of the function itself), i.e. (∀λ > 0) (∀k ∈ N) (−1)^k ∂^k Bi(λ, x)/∂λ^k ≥ 0. If gG1(0, x) + hG1(0, x) = 1 then, evidently, for (x − r, x + r) ⊂ G we have gr(0, x) + hr(0, x) = 1, hence B1(0, x) + B2(0, x) = 0.
5.1.2. Construction of a semi-Markov process with given coefficients of the differential equations

We have shown that for some class of transition generating functions of a one-dimensional SM process, when the parameter of such a function is a neighborhood of the initial point and the diameter of this neighborhood (the length of an interval) tends to zero, the function satisfies a second-order differential equation depending on two coefficients. In the present section the inverse problem, construction of a semi-Markov process from a second-order differential equation with given coefficients, is solved [HAR 80b]. The properties of the transition generating functions of the constructed process make it possible to specify the coefficients of the asymptotic expansion and to omit the differentiability conditions in lemma 5.3.
6. Consistency of solutions

Let us consider the differential equation

f″ + 2A(x) f′ + 2B(λ, x) f = 0,   [5.6]

where A, B are measurable functions such that a twice differentiable solution of this equation exists on an interval G0 ∈ A0, and (∀λ > 0) (∀x ∈ G0) B(λ, x) < 0.

THEOREM 5.1. Let (∀G ∈ A0 ∩ G0) ḡG and h̄G be solutions of differential equation [5.6] on an interval G, where for G = (a, b) the solutions take the boundary values ḡG(λ, a) = h̄G(λ, b) = 1 and ḡG(λ, b) = h̄G(λ, a) = 0. Then the family of such solutions (ḡG, h̄G), G ∈ A0 ∩ G0, satisfies the system of difference equations: (∀λ > 0) (∀G1, G2 ∈ A0 ∩ G0, G1 ⊂ G2) (∀x ∈ G1)

ḡG2(λ, x) = ḡG1(λ, x) ḡG2(λ, c) + h̄G1(λ, x) ḡG2(λ, d),   [5.7]

h̄G2(λ, x) = ḡG1(λ, x) h̄G2(λ, c) + h̄G1(λ, x) h̄G2(λ, d),   [5.8]

where G1 = (c, d).
Proof. We designate by g̃G2(λ, x) and h̃G2(λ, x) the right parts of equations [5.7] and [5.8]. On the interval G1 = (c, d) ⊂ G2 the functions g̃G2, h̃G2 satisfy equation [5.6] as linear combinations of the solutions ḡG1 and h̄G1 of this equation on G1. On the other hand, ḡG2(λ, c) = g̃G2(λ, c), ḡG2(λ, d) = g̃G2(λ, d), and by the uniqueness theorem, which is ensured by the negativity of the coefficient B [SAN 54], we have ḡG2(λ, x) = g̃G2(λ, x) for all x ∈ G1. The same is true for the second pair of functions: h̄G2(λ, x) = h̃G2(λ, x) on G1.

7. Minimum principle

Let f be a solution of the equation f″ + 2A(x) f′ + 2B(λ, x) f = F(x) with non-negative boundary conditions, where (∀x ∈ G) F(x) ≤ 0 and B(λ, x) < 0, and let x0 ∈ G be a point of its negative minimum. Then at this point we have f′(x0) = 0 and

f″(x0) = −2B(λ, x0) f(x0) + F(x0) < 0.

This contradicts a necessary condition for a smooth function f to have a local minimum at the point x0, namely f″(x0) ≥ 0. Hence, f ≥ 0. This conclusion is an example of the so-called minimum principle (see, e.g., [GIL 89]). A dual maximum principle holds as well: in that case F(x) ≥ 0 and the boundary values of the solution are non-positive, and thus f ≤ 0.

From the minimum principle it follows that the solutions ḡG and h̄G of equation [5.6] are non-negative and do not exceed 1. On the other hand, from equations [5.1], [5.2] it follows that the first of them does not increase, and the second one does not decrease, on the interval G = (a, b). From the negativity of the coefficient B a rather stronger assertion follows: these solutions are positive inside the interval where they are determined and are strictly monotone (they decrease and increase correspondingly).

8. Asymptotics of solution with a small parameter

THEOREM 5.2. Let ḡG(λ, ·), h̄G(λ, ·) be solutions of equation [5.6] with the boundary conditions specified in theorem 5.1, where G = (a, b) and a < x0 < b.
Let A be differentiable, and let A′ and B(λ, ·) be continuous in x in some neighborhood of a point x0. Then as a ↑ x0, b ↓ x0 the following expansions hold:

ḡG(λ, x0) = (b − x0)/(b − a) − A(x0)(b − x0)(x0 − a)/(b − a)
 + (1/3) B(λ, x0)(b − x0)(x0 − a)(2b − a − x0)/(b − a)
 − (1/3)(A²(x0) + A′(x0))(b − x0)(x0 − a)(a + b − 2x0)/(b − a)
 + (b − x0)(x0 − a) · O(b − a),   [5.9]

h̄G(λ, x0) = (x0 − a)/(b − a) + A(x0)(b − x0)(x0 − a)/(b − a)
 + (1/3) B(λ, x0)(b − x0)(x0 − a)(b − 2a + x0)/(b − a)
 + (1/3)(A²(x0) + A′(x0))(b − x0)(x0 − a)(a + b − 2x0)/(b − a)
 + (b − x0)(x0 − a) · O(b − a).   [5.10]
Proof. Let G = (a, b). It is known (see Sobolev [SOB 54, p. 296]) that a solution of the equation y″ = g(x) on an interval (a, b) under the conditions y(a) = y1, y(b) = y2 has the form

y(x) = y1 (b − x)/(b − a) + y2 (x − a)/(b − a)
 − ((b − x)/(b − a)) ∫_a^x g(t)(t − a) dt − ((x − a)/(b − a)) ∫_x^b g(t)(b − t) dt.   [5.11]

Let us represent equation [5.6] as f″ = q(x), where q = −2Af′ − 2Bf. Substituting q into the previous expression, we obtain for equation [5.6] a formal solution, which reduces to an integral equation with respect to f:

f(x) = f(a)(b − x)/(b − a) + f(b)(x − a)/(b − a) + ∫_a^b KG(λ, x, t) f(t) dt,   [5.12]

where the kernel KG(λ, x, t) has the form

KG(λ, x, t) = ((b − x)/(b − a)) (−2A(t) + (2B(λ, t) − 2A′(t))(t − a)),   a < t < x,
KG(λ, x, t) = ((x − a)/(b − a)) (2A(t) + (2B(λ, t) − 2A′(t))(b − t)),   x < t < b.

From here

h̄G(λ, x) = (x − a)/(b − a) + ∫_a^b KG(λ, x, t) h̄G(λ, t) dt.

Furthermore,

KG^(2)(x, t) ≡ ∫_a^b KG(x, s) KG(s, t) ds
 = −((b − x)(2t − a − x)/(b − a)) 2A²(x) + (b − x)(x − a) Y1(x, t),   a < t < x,
 = ((x − a)(2t − b − x)/(b − a)) 2A²(x) + (b − x)(x − a) Y2(x, t),   x < t < b, where the
functions Y1 and Y2 are bounded in G0 × G0. We also have

KG^(3)(x, t) ≡ ∫_a^b KG^(2)(x, s) KG(s, t) ds = (b − x)(x − a) Z(x, t),

where the function Z is bounded in G0 × G0. The integral equation for h̄G(·) can be written as

h̄G(x) = (x − a)/(b − a) + Σ_{i=1}^{3} ∫_a^b KG^(i)(x, t) ((t − a)/(b − a)) dt + ∫_a^b KG^(4)(x, t) h̄G(t) dt.
From here we obtain

h̄G(x) = (x − a)/(b − a)
 + ∫_a^x (((b − x)/(b − a))(−2A(x) + (2B(λ, x) − 2A′(x))(t − a)) − ((b − x)(2t − a − x)/(b − a)) 2A²(x)) ((t − a)/(b − a)) dt
 + ∫_x^b (((x − a)/(b − a))(2A(x) + (2B(λ, x) − 2A′(x))(b − t)) + ((x − a)(2t − b − x)/(b − a)) 2A²(x)) ((t − a)/(b − a)) dt
 + (b − x)(x − a) O(b − a)

= (x − a)/(b − a) + A(x)(x − a)(b − x)/(b − a) + (1/3) B(λ, x)(x − a)(b − x)(b − 2a + x)/(b − a)
 + (1/3)(A²(x) + A′(x))(x − a)(b − x)(b + a − 2x)/(b − a) + (b − x)(x − a) O(b − a)

uniformly in x ∈ G. The asymptotic expansion for the function h̄G is proved. Similarly the expansion for ḡG can be obtained, but it is easier to deduce it from the proved expansion by the change of variable x̃ = −x. Let us designate f̃(x̃) = f(−x̃). If on G the initial function f satisfies the original equation f″ + 2A(x)f′ + 2B(λ, x)f = 0 with boundary conditions f(a) = 1, f(b) = 0 (i.e. f = ḡG in our notation), then f̃ satisfies the equation f̃″ − 2A(−x̃)f̃′ + 2B(λ, −x̃)f̃ = 0 with boundary conditions f̃(−a) = 1 and f̃(−b) = 0, i.e. it plays the role of h̄(−G) for the new equation. Hence, in order to derive the expansion of the function ḡG it is sufficient to replace {a, b, x, A(x)} by {−b, −a, −x, −A(−x)} in the obtained expansion for h̄G (we need not change the sign before A′(−x), since the sign of this factor varies twice, the second time after differentiation with respect to the new variable x̃). As an outcome we obtain the required expression.
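For standard Brownian motion, where A(x) ≡ 0, B(λ, x) ≡ −λ, and h̄G(λ, x) = sh((x − a)√(2λ))/sh((b − a)√(2λ)) (a standard closed form, assumed here rather than derived in the text), expansion [5.10] can be compared with the exact solution on a small asymmetric interval:

```python
from math import sinh, sqrt

lam = 1.0
a, b, x0 = -0.01, 0.015, 0.002     # small asymmetric interval around x0

exact = sinh((x0 - a) * sqrt(2 * lam)) / sinh((b - a) * sqrt(2 * lam))

# [5.10] with A = 0, A' = 0, B = -lam:
approx = ((x0 - a) / (b - a)
          - lam * (b - x0) * (x0 - a) * (b - 2 * a + x0) / (3 * (b - a)))

linear = (x0 - a) / (b - a)        # first term only, for comparison
print(exact, approx, abs(exact - approx))
```

The remainder is of the order (b − x0)(x0 − a) O(b − a), visibly smaller than the error of the first (linear) term alone.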
9. Note on the theorem

In the previous theorem the existence of limits of the ratios (b − x0)/(b − a) and (x0 − a)/(b − a) is not supposed. If these limits exist, for example, x0 − a = cr and b − x0 = dr (r → 0), the obtained formulae reduce to expansions in powers of b − a, or of r.

COROLLARY 5.1. Let c = d = 1 (sequence of symmetric neighborhoods). Then under the conditions of theorem 5.2 the expansions

ḡ(x0−r,x0+r)(λ, x0) = 1/2 − (1/2) A(x0) r + (1/2) B(λ, x0) r² + o(r²),   [5.13]

h̄(x0−r,x0+r)(λ, x0) = 1/2 + (1/2) A(x0) r + (1/2) B(λ, x0) r² + o(r²)   [5.14]

hold.
Proof. It immediately follows from the previous theorem.

10. Pseudo-maximum of a function

A point x from the domain (region of determination) of a function f is referred to as a point of pseudo-maximum of this function with a step r > 0 if two conditions are fulfilled: (1) f(x − r) = f(x + r); (2) f(x + r) − 2f(x) + f(x − r) ≤ 0, where the points x − r, x + r also belong to the domain.

LEMMA 5.4. Let a continuous function f have a point of maximum xM inside some interval from its domain. Then there are sequences (rn)_{n=1}^∞ (rn > 0, rn → 0) and (xn)_{n=1}^∞ (xn → xM) such that for any n ≥ 1 the function f has at the point xn a pseudo-maximum with step rn.

Proof. If the point xM belongs to the closure of some interval of constancy of the function f then, obviously, the lemma is proved. Let the point xM not belong to the closure of any interval of constancy of f. Then for big enough n the following points also belong to the same interval as xM:

an = sup{x < xM : f(x) ≤ f(xM) − 1/n},
bn = inf{x > xM : f(x) ≤ f(xM) − 1/n}.

Let us set rn = (bn − an)/2 and xn = (bn + an)/2. Then we have f(xn) > f(xM) − 1/n, and also f(xn − rn) = f(an) = f(xM) − 1/n and f(xn + rn) = f(bn) = f(xM) − 1/n. Due to the continuity of f the points an, bn tend to xM as n → ∞.
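The two defining conditions of a pseudo-maximum can be checked mechanically; the function and step below are illustrative choices, not examples from the text:

```python
def is_pseudo_maximum(f, x, r):
    # item 10: (1) f(x - r) = f(x + r); (2) f(x + r) - 2 f(x) + f(x - r) <= 0
    return f(x - r) == f(x + r) and f(x + r) - 2 * f(x) + f(x - r) <= 0

f = lambda x: -(x - 3) ** 2        # continuous function with maximum at x_M = 3
print(is_pseudo_maximum(f, 3, 1))  # True: a symmetric step around the maximum
print(is_pseudo_maximum(f, 2, 1))  # False: f(1) = -4 differs from f(3) = 0
```

Note that a pseudo-maximum need not be a maximum: only the two finite-difference conditions are required, which is exactly what the proof of lemma 5.4 exploits.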
11. Comparison with solutions of the equation

LEMMA 5.5. Let λ > 0 and let expansion [5.5] of lemma 5.3 be fulfilled uniformly in x ∈ G, where A, A′, B1(λ, ·), B2(λ, ·) are continuous in x and, in addition, the last two functions are negative. Then the transition generating functions gG and hG of the corresponding semi-Markov process are twice differentiable in x and coincide on the interval G with the solutions ḡG and h̄G of differential equation [5.6] with B1(λ, ·) = B2(λ, ·) = B(λ, ·).

Proof. We use the expansion of the solutions of differential equation [5.6] obtained in corollary 5.1 for central points of an interval. Consider the difference z(x) = hG(x) − h̄G(x). It is a continuous function taking zero values at the ends of the interval G. Moreover, the following representation holds (for brevity, we omit the arguments λ and x of the coefficients):

z(x) = z(x − r)(1/2 − Ar/2 + B1 r²/2) + z(x + r)(1/2 + Ar/2 + B2 r²/2)
 + h̄G(x − r)(B1 − B) r²/2 + h̄G(x + r)(B2 − B) r²/2 + o(r²),

or, taking into account the differentiability of h̄G,

−z(x) + (1/2)(z(x − r) + z(x + r)) + (Ar/2)(z(x + r) − z(x − r))
 + z(x − r) B1 r²/2 + z(x + r) B2 r²/2 + o(r²) = 0.

Let us assume that the function z has a positive maximum inside the interval G. Then according to lemma 5.4 there exists a sequence (rn) converging to zero and a sequence of points of pseudo-maxima (xn), converging to the point of maximum with the appropriate steps, such that z(xn − rn) = z(xn + rn) and −z(xn) + (1/2)(z(xn − rn) + z(xn + rn)) ≤ 0. Substituting these sequences into the previous expression, dividing by rn² and letting n tend to infinity, we obtain a contradiction, because the coefficients B1, B2 are negative. Applying the lemma about the pseudo-maximum to the function −z(x), we prove that the original function has no points of negative minimum inside the interval either. Hence, hG = h̄G on the given interval. It is obvious that the same holds for the second pair of functions: gG = ḡG.

From lemma 5.3, by uniqueness of the asymptotic expansions of a solution of the differential equation (corollary 5.1), we find that the condition B1 = B2 = B is necessary.

12. Derivation of differential equation from local properties

THEOREM 5.3. Let (gG, hG), G ∈ A0, be a family of semi-Markov transition generating functions of a continuous SM process. Let expansions [5.3], [5.4] from item 2 be
fulfilled uniformly in x in every finite interval G ⊂ G0, and moreover let (∀λ > 0) Ci(λ, ·), Ai(λ, ·), ∂Ai(λ, ·)/∂x, Bi(λ, ·) be continuous in x and Ci(λ, ·) > 0 (i = 1, 2). Then
(1) C1(λ, ·) = C2(λ, ·) = 1/2;
(2) −A1(λ, ·) = A2(λ, ·) ≡ (1/2)A(·), and they do not depend on λ;
(3) B1(λ, ·) = B2(λ, ·) ≡ B(λ, ·) ≤ 0;
(4) (∀λ > 0) B(λ, ·) < 0, and (∀x ∈ G) the function −(∂/∂λ)B(λ, x) is completely monotone in λ;
(5) gG and hG are twice differentiable and satisfy differential equation [5.6].

Proof. It follows from lemmas 5.1, 5.2, 5.3 and 5.5.

DEFINITION 5.1. A semi-Markov process with transition generating functions satisfying equation [5.6] with some coefficients A and B or, which amounts to the same thing, with transition generating functions admitting expansions [5.3] and [5.4] with some coefficients Ai, Bi, Ci, is said to be a semi-Markov process of diffusion type.

13. Probabilistic solution

Let us denote
nG(λ) = −sup{B(λ, x) : x ∈ G}.

THEOREM 5.4. Let ḡG(λ, x) and h̄G(λ, x) be solutions of equation [5.6] with the boundary conditions specified in theorem 5.1. Let the function A be bounded on G, (∀x ∈ G) let the function −∂B(λ, x)/∂λ be completely monotone in λ, (∀λ > 0) B(λ, x) < 0, and let nG(λ) → ∞ as λ → ∞. Then (∀x ∈ G) the functions ḡG(λ, x) and h̄G(λ, x) are Laplace transforms of sub-probability measures on [0, ∞) not containing an atom at zero.

Proof. From the minimum principle (see item 7) it follows that ḡG(λ, x) and h̄G(λ, x) are non-negative. Let f(x) = 1 − ḡG(λ, x) − h̄G(λ, x). Then f(a) = f(b) = 0 and

f″(x) + 2A(x) f′(x) + 2B(λ, x) f(x) = 2B(λ, x).

Hence, by the minimum principle, f(x) ≥ 0, i.e.

ḡG(λ, x) + h̄G(λ, x) ≤ 1.
Let us prove the complete monotonicity of the functions ḡG(λ, x) and h̄G(λ, x) in λ. In this case, according to the Bernstein theorem [FEL 67, p. 505], they are the Laplace images of some sub-probability measures on [0, ∞). A proof of complete monotonicity of a solution of equation [5.6] would be significantly simpler if we could assume in advance that it is infinitely differentiable in the parameter λ. We do not know this, and therefore we use the following criterion of complete monotonicity. Denote Δh f(s) = f(s + h) − f(s) and, for any n ≥ 1, Δh1,…,hn+1 f(s) = Δhn+1 Δh1,…,hn f(s).

LEMMA 5.6. A function f is completely monotone on an interval (0, b) if and only if (∀n ∈ N) (∀hi > 0) (∀s > 0, s < b − Σ_{i=1}^{n} hi)

(−1)^n Δh1,…,hn f(s) ≥ 0.

Proof. Representing a finite difference as an integral of a derivative, we obtain the necessity of the proposed condition. Representing a derivative as a limit of relative increments of the function, and noting that any jump implies non-monotonicity of the finite difference of the second order, we obtain the sufficiency of the proposed condition.
Δn+1 f
+ 2A(x) Δn+1 f + 2Δn+1 (Bf ) = 0.
Consider the n-th difference of the product of two functions: Δn (Bf ). Let θh be the shift operator on class of functions, determined on R+ (see item 2.16). In this case we can write Δh f = θh f − f . Denote Hn = (h1 , . . . , hn ) (n ∈ N, hi > 0) (elements of this sequence can be repeated); let Ck be a combination of k elements from the sequence Hn (i.e. sub-sequence); |Ck | be the sum of all elements composing the combination Ck ; let the set Hn \ Ck mean the residual sub-sequence (if it is not empty); ΔØ and θ0 mean operators of identity.
SM Processes of Diffusion Type
151
LEMMA 5.7. For the n-th difference of a product of two functions the following formula is true ΔHn (B f ) =
n
ΔCk B
θ|Ck | ΔHn \Ck f ,
k=0 Ck
where the intrinsic sum is on all combinations of Ck . Proof. It is passed for all variants by induction on the basis of induction for n = 1: Δh (Bf ) = BΔh f + Δh Bθh f . While proving we take into account that operator ΔH does not vary with permutation of members of sequence H. We also take into account permutability of operators Δ and θ and properties of these operators: ΔS1 ΔS2 = Δ(S1 ,S2 ) where (S1 , S2 ) is a sequence composed with two sub-sequences S1 and S2 , and θh1 θh2 = θh1 +h2 (hi > 0). Let us prove the formula in its simple variant when all hi are equal to h. In this case ΔCk = Δk , ΔHn \Ck = Δn−k , θ|Ck | = θkh and the formula we have to prove turns to a simple form Δn (B f ) =
n n Δk B θkh Δn−k f . k
k=0
Let us apply operator Δ1 to this formula and use the formula of the ſrst difference of a product: Δ1 (Bf ) = B Δ1 f + Δ1 B θh f . We obtain Δn+1 (Bf ) =
n n k=0
+
n n
k=0
=
Δk B θkh Δn+1−k f
k
k
Δk+1 B θ(k+1)h Δn−k f
n Δk B θkh Δn+1−k f k
n k=0
n n Δk B θkh Δn+1−k f + k−1 k=1
=
n+1 Δk B θkh Δn+1−k f . k
n+1 k=0
So, the formula is proved by induction.
152
Continuous Semi-Markov Processes
Let us continue to prove complete monotonicity of function f . Applying the formula of lemma 5.7 we can write the obtained equation as (Δn+1 f ) + 2A(x) Δn+1 f + 2B(λ, x)Δn+1 f =−
n
ΔCk B
θ|Ck | ΔHn \Ck f ,
k=0 Ck
Since function −∂B/∂λ is completely monotone, and under supposition (∀k ≤ n) (−1)k Δk f ≥ 0, for all even n the summands on the right are positive (because in each summand superscripts of two of its factors have different evenness). From here the right part is not negative. Moreover, evidently, function Δn+1 f (n ≥ 0) has zero meanings at the ends of the interval G. Hence according to the maximum principle Δn+1 f ≤ 0. For odd n the right part of the equation is not positive and, hence, Δn+1 f ≥ 0. From here it follows that (∀n ∈ N ) (−1)n Δn f ≥ 0, i.e., according to lemma 5.6, f is a completely monotone function on λ ∈ (0, ∞). Hence, gG (λ, x) and hG (λ, x) are completely monotone functions on λ ∈ (0, ∞), and there exist measures F G (dt × {a} | x) and F G (dt × {b} | x) on R+ (G = (a, b)) which determine the following representations ∞ g G (λ, x) = e−λt F G | dt × {a}|x| , 0
hG (λ, x) =
0
∞
e−λt F G | dt × {b}|x| .
Now let us prove that measure F G does not contain an atom at zero. In order to prove this property it is sufſcient to prove that g G (λ, x) and hG (λ, x) tend to zero as λ → ∞. To do this we compare these solutions with solutions of the similar equation, but with constant coefſcients. Let
B0 (λ) = sup B(λ, x) : x ∈ G . A3 = sup A(x) : x ∈ G , Then equation f + 2A3 f + 2B0 (λ)f = 0 with boundary conditions f (a) = 0, f (b) = 1 has a solution % 2 A3 (b−x) sh(x − a) A3 − 2B0 (λ) % . hG (λ, x) = e sh(b − a) A23 − 2B0 (λ) Let f (λ, x) = hG (λ, x) − hG (λ, x). Then f (λ, a) = f (λ, b) = 0 and besides f (λ, x) + 2A3 f (λ, x) + 2B0 (λ)f (λ, x) = 2(A(x) − A3 )hG (λ, x) + 2 B(λ, x) − B0 (λ) hG (λ, x).
However, from the consistency of the solutions of equation [5.6] and the non-negativity of h̄G it follows that h̄G(λ, x) does not decrease on G (see item 1). Hence the right part of the previous equation is non-positive, and consequently f(λ, x) ≥ 0. Yet (∀x ∈ G)

h̃G(λ, x) ∼ e^{A3(b−x)} e^{−(b−x)√(A3² − 2B0(λ))} → 0 as λ → ∞,

since B0(λ) → −∞. Hence, (∀x ∈ G) h̄G(λ, x) → 0 (λ → ∞). Similarly it can be proved that ḡG(λ, x) → 0 (λ → ∞); in this case ḡG(λ, x) is compared with a solution of the differential equation f″ + 4A4 f′ + 2B0(λ) f = 0 with boundary conditions f(a) = 1, f(b) = 0, where A4 = inf{A(x) : x ∈ G}.

14. Sub-probability kernel connected with solution

Let us determine a sub-probability kernel yG(λ, dx1 | x) connected with a solution of equation [5.6] on an interval G ∈ A0 as follows: (∀x ∈ X) (∀S ∈ B(X))

yG(λ, S | x) = ḡG(λ, x) IS(a) + h̄G(λ, x) IS(b),   x ∈ G,
yG(λ, S | x) = IS(x),   x ∉ G,

where G = (a, b). By theorem 5.1 the kernels (yG), G ∈ A0, are connected by the condition

(∀G1 ⊂ G2) yG2(λ, ϕ) = yG1(λ, yG2(λ, ϕ)),

where, as above,

yG(λ, ϕ | x) = ∫_X ϕ(x1) yG(λ, dx1 | x)   (ϕ ∈ B).

Let δn = (G1, …, Gn) (Gi ∈ A0), and let yδn(λ, ϕ) denote the iterated kernel determined by induction:

yδn(λ, ϕ) = yG1(λ, yδn²(λ, ϕ)),

where δn^i = (Gi, …, Gn) (i ≤ n), and let yδn(λ, dx1 | x) be the corresponding iterated kernel for the sequence δn. We assume yδn(λ, ϕ) = ϕ when n = 0 (δ0 = Ø).

15. Behavior of iterated kernels

In order to construct a semi-Markov process with transition generating functions yG(λ, S | x) it remains for us to check that the necessary conditions for a càdlàg process hold. Namely, the trajectory of the process has to leave any finite time interval
with the help of any deducing sequence of open sets (in this case, open intervals). In other words, the following condition must be fulfilled:

(∀λ > 0) (∀x ∈ X) (∀δ ∈ DS) yδn(λ, I | x) → 0   (n → ∞).

Using the sigma-compactness of the space X (in this case, the real line) it is convenient to divide the proof into two parts. Firstly, we find a sufficiently large compact such that a trajectory exits from it not before a fixed time. Furthermore, it is proved that a step-process constructed on the deducing sequence either exits from the compact or remains in it; in this case the sequence of jump times tends to infinity (see item 2.17). Let K ⊂ X and K′ = X \ K. We denote

yG^K(λ, ϕ | x) = ∫_K yG(λ, dx1 | x) ϕ(x1),
yδn^K(λ, ϕ) = yG1^K(λ, yδn²^K(λ, ϕ)),
MG = sup{|A(x)| : x ∈ G}.
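The required decay yδn(λ, I | x) → 0 can be watched numerically in the Brownian case A ≡ 0, B(λ, x) ≡ −λ, where ḡ and h̄ have the sh closed forms; the sweep of unit intervals centered at the integers below is an illustrative deducing sequence, not one taken from the text:

```python
from collections import defaultdict
from math import sinh, sqrt

lam = 1.0

def y_step(mass, c):
    # apply y_G for G = (c - 1, c + 1): a point mass strictly inside G is sent
    # to the two ends of G with weights g_G and h_G; other points are untouched
    out = defaultdict(float)
    for x, m in mass.items():
        if c - 1 < x < c + 1:
            w = sqrt(2 * lam)
            g = sinh((c + 1 - x) * w) / sinh(2 * w)
            h = sinh((x - (c - 1)) * w) / sinh(2 * w)
            out[c - 1] += m * g
            out[c + 1] += m * h
        else:
            out[x] += m
    return out

mass = {0.0: 1.0}                  # unit mass at the initial point x = 0
for sweep in range(3):             # three sweeps of the covering sequence
    for c in range(-20, 21):
        mass = y_step(mass, c)
total = sum(mass.values())         # the iterated kernel y_{delta_n}(lam, I | 0)
print(total)                       # decays towards 0 as sweeps accumulate
```

Each application inside the compact multiplies the processed mass by g + h < 1 (here about 0.46 per visit at λ = 1), which is the mechanism behind theorem 5.5 below.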
In order to prove the next theorem we use the existence of a finite epsilon-net for a pre-compact subset of a metric space [KOL 72, p. 100]. Recall that an epsilon-net of the set K with parameter ε > 0 is a set T ⊂ X such that (∀x ∈ K) (∃x1 ∈ T) ρ(x, x1) < ε; a set is called pre-compact if its closure is compact.

THEOREM 5.5. Let the coefficients A and B(λ, ·) in equation [5.6] be continuous in x ∈ X, and moreover (∀G ∈ A0) (∀λ > 0) nG(λ) > 0, r² n(−r,r)(λ) → ∞ and M(−r,r)/(r n(−r,r)(λ)) → 0 as r → ∞. Then (∀λ > 0) (∀x ∈ X) (∀δ ∈ DS) yδn(λ, I | x) → 0 (n → ∞), where δn = (G1, …, Gn) is an initial piece of the sequence δ = (G1, G2, …).

Proof. We divide the proof into four parts.

(1) From the Markov property of the family of kernels (yG), proved in theorem 5.1, we obtain
yδn(λ, I | x0) = ∫_{X^n} ∏_{i=1}^{n} yGi(λ, dxi | xi−1)
 = yδn^K(λ, I | x0) + Σ_{i=0}^{n−1} ∫_K yδi^K(λ, dx1 | x0) ∫_{K′} yGi+1(λ, dx2 | x1) yδn^(i+2)(λ, I | x2).
155
On the other hand,
yK λ, K | x0 ≥
n−1 i=0
K
yδKi λ, dx1 | x0 yGi+1 λ, K | x1 .
This inequality for SM processes is a consequence of evident inequality σK ≤ σK ◦ Lδn . For an arbitrary family of kernels (yG ) it follows from the representation yG1 λ, K | x0 =
X
yG1 ∩K λ, dx1 | x0 yG1 λ, K | x1
yG1 ∩K λ, dx1 | x0 yG1 λ, K | x1
= K
+ = K
K\G1
yG1 ∩K λ, dx1 | x0 yG1 λ, K | x1
yG1 ∩K λ, dx1 | x0 yG1 λ, K | x1
≤ yG1 ∩K λ, K | x0 ≤ yK λ, K | x0 yG1 ∩K λ, dx1 | x0 yK λ, K | x1 . = yG1 ∩K λ, K | x0 + K
Let the inequality be true for all sequences of length n. Then

Σ_{i=0}^{n} ∫_K yδi^K(λ, dx1 | x0) yGi+1(λ, K′ | x1)
 = yG1(λ, K′ | x0) + Σ_{i=1}^{n} ∫_K yG1(λ, dx1 | x0) ∫_K yδi²^K(λ, dx2 | x1) yGi+1(λ, K′ | x2)
 = ∫_{K′} yG1∩K(λ, dx1 | x0) Σ_{i=0}^{n} ∫_K yδi^K(λ, dx2 | x1) yGi+1(λ, K′ | x2)
 + ∫_{K\G1} yG1∩K(λ, dx1 | x0) Σ_{i=1}^{n} ∫_K yδi²^K(λ, dx2 | x1) yGi+1(λ, K′ | x2).

Obviously, in the first term the sum is not more than 1 for any n. In the second term, according to the assumption, the sum is not more than yK(λ, K′ | x1). From here the
latter expression is not more than

yG1∩K(λ, K′ | x0) + ∫_{K\G1} yG1∩K(λ, dx1 | x0) yK(λ, K′ | x1)
 ≤ yG1∩K(λ, K′ | x0) + ∫_K yG1∩K(λ, dx1 | x0) yK(λ, K′ | x1) = yK(λ, K′ | x0).
The inequality is proved. From here it follows that y_{δ_n}(λ, I | x) ≤ y^K_{δ_n}(λ, I | x) + y_K(λ, K | x).

(2) Estimate the function y_G(λ, I | x) from above. Let G = (a, b). Consider the function

y(x) = c_G ((a + b − 2x)/(b − a))² + 1 − c_G,

where

c_G = n_G(λ) (4/(b − a)² + 8M_G/(b − a) + n_G(λ))^{−1}.

Let u(x) = y(x) − y_G(λ, I | x). We have u(a) = u(b) = 0 and in addition

u″ + 4A(x)u′ + 2B(λ, x)u = F(x),

where

F(x) = 2B(λ, x)(c_G ((a + b − 2x)/(b − a))² + 1 − c_G) + 8c_G/(b − a)² + 16c_G(2x − a − b)A(x)/(b − a)²
  ≤ −2n_G(λ)(1 − c_G) + 8c_G/(b − a)² + 16M_G c_G/(b − a) = 0.
Hence, (∀x ∈ G) y(x) ≥ y_G(λ, I | x). For x situated in the middle of the interval we obtain y_r(λ, I | x) ≡ y_{(x−r,x+r)}(λ, I | x) ≤ 1 − c_{(x−r,x+r)}. From here, in particular, it follows that (∀x ∈ R) y_r(λ, I | x) → 0 (r → ∞), because in this case r² n_{(−r,r)}(λ) → ∞ and M_{(−r,r)}/(r n_{(−r,r)}(λ)) → 0, and consequently c_{(−r,r)} → 1.
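The cancellation used above can be verified numerically. The following sketch (illustrative, not taken from the book) assumes c_G = n_G(λ)(4/(b − a)² + 8M_G/(b − a) + n_G(λ))^{−1} and checks that the combination −2n_G(λ)(1 − c_G) + 8c_G/(b − a)² + 16M_G c_G/(b − a) vanishes, which is what makes F(x) ≤ 0 in part (2):

```python
# Check (illustrative): with c_G = n * (4/L^2 + 8*M/L + n)^(-1), L = b - a,
# the combination -2n(1 - c_G) + 8 c_G/L^2 + 16 M c_G/L is exactly zero.
def c_G(n, M, a, b):
    L = b - a
    return n / (4.0 / L**2 + 8.0 * M / L + n)

def residual(n, M, a, b):
    L = b - a
    c = c_G(n, M, a, b)
    return -2.0 * n * (1.0 - c) + 8.0 * c / L**2 + 16.0 * M * c / L

# sample values of n_G(λ), M_G and the interval (a, b)
for n, M, a, b in [(0.5, 1.0, -1.0, 1.0), (2.0, 3.0, 0.0, 0.25), (10.0, 0.1, -5.0, 5.0)]:
    assert abs(residual(n, M, a, b)) < 1e-12
```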
(3) Prove the formula

y_{δ_n}(λ, I | x) ≤ min{ y_{G_i}(λ, I | x) : 1 ≤ i ≤ n }.

Evidently, y_{G₁}(λ, y_{G₂}(λ, I) | x) ≤ y_{G₁}(λ, I | x). From here (∀i : 1 ≤ i ≤ n) y_{δ_n}(λ, I | x) ≤ y_{δ_i}(λ, I | x) ≤ y_{G₁}(λ, I | x). If x ∉ G₁, then y_{G₁}(λ, y_{G₂}(λ, I) | x) = y_{G₂}(λ, I | x). If x ∉ G₂, then y_{G₁}(λ, y_{G₂}(λ, I) | x) ≤ 1 = y_{G₂}(λ, I | x). If x ∈ G₁ ∩ G₂, then

y_{G₁}(λ, y_{G₂}(λ, I) | x) = ∫_{X∖(G₁∩G₂)} y_{G₁∩G₂}(λ, dx₁ | x) y_{G₁}(λ, y_{G₂}(λ, I) | x₁)
  ≤ ∫_{X∖(G₁∩G₂)} y_{G₁∩G₂}(λ, dx₁ | x) y_{G₂}(λ, I | x₁) = y_{G₂}(λ, I | x).
Therefore (∀i : 2 ≤ i ≤ n)

y_{δ_i}(λ, I | x) = y_{δ_{i−2}}(λ, y_{G_{i−1}}(λ, y_{G_i}(λ, I)) | x) ≤ y_{δ_{i−2}}(λ, y_{G_i}(λ, I) | x) ≤ y_{G_i}(λ, I | x).

(4) Let K = (x₀ − r, x₀ + r). Then (∀ε > 0) (∃r > 0) y_K(λ, K | x₀) < ε. In order to prove the theorem it is sufficient to know that y^K_{δ_n}(λ, I | x₀) tends to zero, where δ ∈ DS. According to theorem 2.2, (∀δ ∈ DS) (∃s > 0) (∀x ∈ K) the interval (x − s, x + s) is covered an infinite number of times by elements of the sequence δ = (G₁, G₂, . . .), i.e. (∀k ∈ N) (∃n ∈ N, n > k) (x − s, x + s) ⊂ G_n. Let T be a finite s/2-net of the set K. Then there exists a sequence of integers 0 = n(0) < n(1) < n(2) < · · · such that (∀i ∈ N) (∀x ∈ T) (∃m : n(i) < m ≤ n(i + 1)) (x − s, x + s) ⊂ G_m. In this case

y^K_{δ_{n(k+1)}}(λ, I | x) = y^K_{δ_{n(k)}}(λ, y^K_{δ_{n(k+1)}^{n(k)+1}}(λ, I) | x)
  ≤ y^K_{δ_{n(k)}}(λ, y_{δ_{n(k+1)}^{n(k)+1}}(λ, I) | x)
  ≤ y^K_{δ_{n(k)}}(λ, min{ y_{G_i}(λ, I) : n(k) < i ≤ n(k + 1) } | x).

Consider the function

z(x) = min{ y_{G_i}(λ, I | x) : n(k) < i ≤ n(k + 1) }
on the set K. For any point x₁ ∈ K there exists x₂ ∈ T such that |x₁ − x₂| < s/2. Let m₂ be the number of a member of the sequence δ which covers the s-neighborhood of the point x₂: (x₂ − s, x₂ + s) ⊂ G_{m₂} (n(k) < m₂ ≤ n(k + 1)). Then G(s) ≡ (x₁ − s/2, x₁ + s/2) ⊂ G_{m₂} and

z(x₁) ≤ y_{G_{m₂}}(λ, I | x₁) ≤ y_{G(s)}(λ, I | x₁) ≤ 1 − c,

where, according to the second part of the proof,

c ≥ n_K(λ) (4/s² + 8M_K/s + n_K(λ))^{−1} > 0.

Hence y^K_{δ_{n(k+1)}}(λ, I | x) ≤ (1 − c) y^K_{δ_{n(k)}}(λ, I | x) and y^K_{δ_n}(λ, I | x) → 0 as n → ∞.
16. Construction of a semi-Markov process

In order to construct a SM process on the basis of the coefficients of differential equation [5.6] we are almost ready, since: (1) a consistent system of kernels (y_G) is obtained (item 6); (2) the probabilistic character of these kernels is established (item 13); (3) the response of the family of these kernels to a deducing sequence is checked (item 19); (4) the condition of correct exit is checked (items 9 and 3); (5) the construction of the process is based on the condition of its continuity (item 8). Of these items only the fourth requires some explanation. It concerns fulfillment of condition (b) of theorem 4.5 or condition (e) of corollary 4.2. In this case, taking into account the notation of item 14, the last condition is formulated as follows: (∀λ > 0) (∀G ∈ A₀) (∀x ∈ G)

(g_G(λ, a + r) + h_G(λ, a + r)) g_{G−r}(0, x) + (g_G(λ, b − r) + h_G(λ, b − r)) h_{G−r}(0, x) → 1 (r → 0),

where G = (a, b). This is trivially fulfilled if

g_G(λ, a + r) → 1,  h_G(λ, b − r) → 1.

This means continuity of the solutions of the equation at the boundary. Actually it is an initial requirement on a solution of the equation with given boundary conditions. Formally this property can be derived from the asymptotic representation of item 8 and lemma 5.1.
THEOREM 5.6. Let the coefficients A and B of differential equation [5.6] satisfy the following conditions: (1) the function A(x) is continuously differentiable in x ∈ X; (2) (∀λ ≥ 0) the function B(λ, x) is continuous in x ∈ X; (3) (∀x ∈ X) B(0, x) ≤ 0, the function B(λ, x) is infinitely differentiable and −∂B(λ, x)/∂λ is a completely monotone function in λ ≥ 0; (4) (∀G ∈ A₀) n_G(λ) > 0 for λ > 0 and n_G(λ) → ∞ as λ → ∞; (5) (∀λ ≥ 0) r² n_{(−r,r)}(λ) → ∞ and M_{(−r,r)}/(r n_{(−r,r)}(λ)) → 0 as r → ∞. Then there exists a continuous semi-Markov process with transition generating functions (f_G) which are solutions of this equation, such that for any interval G = (a, b), x ∈ G and λ > 0

f_G(λ, {a} | x) = g_G(λ, x),  f_G(λ, {b} | x) = h_G(λ, x).

Proof. The proof follows from theorems 5.1, 5.2, 5.4 and 5.5.

Solutions of equation [5.6] for λ = 0 can be obtained as limits of solutions for positive values of λ. Let us note that in equation [5.6] negative values of the coefficient B(0, x) are admitted. In this case we obtain y_G(0, I | x) < 1 (x ∈ G). This solution is interpreted as an absence of exit of a trajectory of the semi-Markov process from the set G. It can be the case that the process does not stop at any particular point of the interval. On the contrary, for a process of diffusion type, as a rule, the time and place of stopping are random. It is possible for the process to tend to some limit which is not attainable in a finite time. We will return to this question in Chapters 6 and 9.

5.1.3. Some properties of the process

In this subsection some properties of one-dimensional SM processes of diffusion type are considered. In particular, we investigate the existence and form of lambda-characteristic operators of the first and second kind (strict definition), conditions of Markovness, and the representation of the semi-Markov process in the form of a transformed Markov process.

17. Operator of the second kind

We consider a semi-Markov process of diffusion type corresponding to equation [5.6] with coefficients satisfying the conditions of theorem 5.6. From theorem 5.2 it
follows that for any bounded function ϕ and x ∈ G = (a, b)

f_G(λ, ϕ | x) ≡ g_G(λ, x)ϕ(a) + h_G(λ, x)ϕ(b)
  = ϕ(x) + ((b − x)(x − a)/(b − a)) [ (ϕ(b) − ϕ(x))/(b − x) − (ϕ(x) − ϕ(a))/(x − a) ]
  + A(x) ((b − x)(x − a)/(b − a)) (ϕ(b) − ϕ(a)) + B(λ, x)(b − x)(x − a)ϕ(x)
  + (b − x)(x − a)O(b − a) + F,

where

F = (1/3) ((b − x)(x − a)/(b − a)) B(λ, x) [ (b − 2a + x)(ϕ(b) − ϕ(x)) + (2b − a − x)(ϕ(a) − ϕ(x)) ]
  + (1/3) ((b − x)(x − a)(a + b − 2x)/(b − a)) (A²(x) + A′(x)) (ϕ(b) − ϕ(a)).

Besides, this asymptotics with b ↓ x and a ↑ x is uniform in x on each bounded interval. In addition, it shows as well that for ϕ with a bounded derivative the term F is of the order (b − x)(x − a)O(b − a) as b − a → 0, uniformly on each bounded interval. For constant ϕ this term is equal to zero. In particular,

f_G(λ, I | x) = 1 + B(λ, x)(b − x)(x − a) + (b − x)(x − a)O(b − a).

Hence, for the given semi-Markov process the following lambda-characteristic operator of the second kind in the strict sense is determined on the class of all twice differentiable functions ϕ:

A_λ(ϕ | x) ≡ lim_{G↓x} λ (f_G(λ, ϕ | x) − ϕ(x)) / (1 − f_G(λ, I | x)) = −λ (ϕ″(x)/2 + A(x)ϕ′(x) + B(λ, x)ϕ(x)) / B(λ, x).
In order to find this limit it is important to note that the denominator has the order (b − x)(x − a). As will be shown below, the expectation of the first exit time from G is of the same order. This makes it possible to obtain a formula for the lambda-characteristic operator of the first kind in the strict sense.

18. Asymptotics of the expectation for a small parameter

THEOREM 5.7. Let (P_x) be a continuous SM process on the line with transition generating functions h_G and g_G satisfying differential equation [5.6], where G = (a, b),
a < x₀ < b. Let A be differentiable, and let A′ and B(λ, ·) be functions continuous in x, B(0, x) = 0, and also let the partial derivative from the right

−∂B(λ, x)/∂λ|_{λ=0} = γ(x)  (0 < γ(x) < ∞)

exist and be continuous in x in some neighborhood of the point x₀. Then there exists m_G(x₀) ≡ E_{x₀}(σ_G) (0 < m_G(x₀) < ∞) and

m_G(x₀) = γ(x₀)(b − x₀)(x₀ − a) + (b − x₀)(x₀ − a)O(b − a),

where a ↑ x₀ and b ↓ x₀.

Proof. Note that under the conditions of the theorem y_G(0, I | x) = 1. We have to analyze the function

m_G(x) = −(∂/∂λ) y_G(λ, I | x)|_{λ=0}.

Formally, the asymptotics of m_G(x) could be obtained by differentiating in λ every term of the expansion y_G(λ, I | x) = 1 + B(λ, x)(b − x)(x − a) + (b − x)(x − a)O(b − a). In order to justify such a differentiation we use the integral representation [5.12] of solutions of the Dirichlet problem from the proof of theorem 5.2. For any λ ≥ 0 the function y_G(λ, I | x) satisfies the equation

y_G(λ, I | x) = 1 + ∫_a^b K_G(λ, x, t) y_G(λ, I | t) dt.   [5.15]

In this case the differentiation in λ at λ = 0 is justified, for example, by the Arzelà theorem [KOL 72, p. 104]. In order to apply it, it is sufficient to have the ratios B(λ, x)/λ and (1 − y_G(λ, I | x))/λ tending to their limits monotonically. This follows from the non-negativeness of the second derivatives in λ of the functions B(λ, x) and y_G(λ, I | x). As the outcome of this operation we obtain the equation

m_G(x) = M_G(x) + ∫_a^b K_G(0, x, t) m_G(t) dt,   [5.16]

where

K_G(0, x, t) = 2 ((b − x)/(b − a)) (−A(t) − A′(t)(t − a))  (t < x),
K_G(0, x, t) = 2 ((x − a)/(b − a)) (A(t) − A′(t)(b − t))  (t > x),

M_G(x) = −∫_a^b (∂K_G(λ, x, t)/∂λ)|_{λ=0} dt = 2 ((b − x)/(b − a)) ∫_a^x γ(t)(t − a) dt + 2 ((x − a)/(b − a)) ∫_x^b γ(t)(b − t) dt.
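As a quick sanity check (illustrative, not from the book): for constant γ the expression M_G(x) = 2((b − x)/(b − a)) ∫_a^x γ(t − a) dt + 2((x − a)/(b − a)) ∫_x^b γ(b − t) dt reduces exactly to γ(b − x)(x − a), the leading term of the asymptotics that follows:

```python
# Check (illustrative): for constant gamma, M_G(x) = gamma * (b - x) * (x - a).
gamma = 0.9
a, b = 1.0, 3.0

def M(x):
    left = gamma * (x - a) ** 2 / 2.0   # ∫_a^x gamma * (t - a) dt
    right = gamma * (b - x) ** 2 / 2.0  # ∫_x^b gamma * (b - t) dt
    return 2.0 * (b - x) / (b - a) * left + 2.0 * (x - a) / (b - a) * right

for x in [1.2, 2.0, 2.9]:
    assert abs(M(x) - gamma * (b - x) * (x - a)) < 1e-12
```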
From continuity of the function γ in a neighborhood of the point x₀ we obtain M_G(x) = γ(x)(b − x)(x − a) + (b − x)(x − a)O(b − a) uniformly in x in some neighborhood of x₀. Since the kernel K_G(0, x, t) is bounded on the square G × G, for sufficiently small b − a the solution of equation [5.16] is representable in the form of the series

m_G(x) = M_G(x) + ∑_{k=1}^∞ ∫_a^b K_G^{(k)}(0, x, t) M_G(t) dt,

uniformly converging in some neighborhood of the point x₀. From here the boundedness of the function m_G(x) follows. It is enough to know only the first term of its asymptotics in a neighborhood of the point x₀. For this aim we consider three terms of the series and the remainder:

m_G(x) = M_G(x) + ∑_{k=1}^3 ∫_a^b K_G^{(k)}(0, x, t) M_G(t) dt + ∫_a^b K_G^{(4)}(0, x, t) m_G(t) dt.

Furthermore, using the positiveness of the kernel K_G(0, x, t) and the asymptotics of the integral

∫_a^b K_G(0, x, t)(b − t)(t − a) dt = (b − x)(x − a)O(b − a),

we discover that the three terms of the sum are of the same order. In the last term we have

K_G^{(3)}(0, x, t) = (b − x)(x − a) Z(x, t),

where the function Z is bounded on the square (see theorem 5.2), and therefore this term is of the order (b − x)(x − a)O(b − a).

From the proved theorem it follows that for the given process there exists a lambda-characteristic operator of the first kind in the strict sense on the set of twice differentiable functions. Moreover,

A_λ(ϕ | x₀) = (ϕ″(x₀)/2 + A(x₀)ϕ′(x₀) + B(λ, x₀)ϕ(x₀)) / γ(x₀).

Note that γ(x) = lim(−B(λ)/λ) as λ → 0. This means that the operators of the first and second kind for λ = 0 coincide. Both normings used for the definition of the lambda-characteristic operators are of the order (b − x)(x − a), depending on the length of the interval and on the position of the initial point inside the interval. It is reasonable to consider this expression as a standard norming in the one-dimensional case. In the case of a symmetric situation for
the interval of length 2r with respect to the initial point, such a standard norming is r². It is interesting to note that the standard norming in the symmetric case does not depend on the dimension of the state space. This fact will be proved in section 5.2, devoted to multi-dimensional processes of diffusion type. An advantage of operators of the second kind (as well as that of operators with the standard norming) appears when the limit lim(−B(λ)/λ) as λ → 0 is infinite. In this case the operator of the first kind does not exist. A construction of a SM process on the basis of an a priori given lambda-characteristic operator uses the identity (∀G ∈ A) (∀ϕ ∈ V_λ) A_λ f_G(λ, ϕ) = 0. The same is true for operators of the second kind, both in the strict and in the wide sense. For a diffusion process this identity passes into the differential equation considered here. However, the question of existence and uniqueness of a general semi-Markov process corresponding to the equation A_λ ϕ = 0 remains open.

19. Criterion of Markovness for SM processes

THEOREM 5.8. Let (P_x) be a continuous semi-Markov process on the line with transition generating functions satisfying differential equation [5.6] with boundary conditions specified in theorem 5.1. Let the coefficient A be differentiable, and the functions A′, B(λ, ·) and γ be continuous in x. This process is Markovian if and only if (∀λ ≥ 0) B(λ, x) = −λγ(x).

Proof. Necessity. Let us check the fulfillment of the conditions of theorem 3.6(2). The continuity of transition generating functions provides (λ, Δ)-continuity of the family of measures (P_x). The pseudo-locality of the operator A_λ follows from continuity of trajectories of the process. According to item 18, A_λ(I | x) = B(λ, x)/γ(x). Hence a necessary condition of Markovness is the condition B(λ, x)/γ(x) = −λ.

Sufficiency. Let us check the fulfillment of the conditions of theorems 3.7 and 3.8. From the continuity of trajectories of the process its quasi-left-continuity follows. In addition to the properties established above we use the linearity of the operator in λ, which implies the equality a(λ/μ) = λ/μ. This is sufficient for the given semi-Markov process to be Markovian.
20. Equation for transformed functions

THEOREM 5.9. Let (P_x) be a continuous semi-Markov process on the line with transition generating functions satisfying differential equation [5.6] with differentiable coefficient A and functions A′ and B(λ, ·) (λ > 0) continuous in x. Let β(x) be a
continuous non-negative function of x ∈ X. Then (∀G ∈ A₀) (G = (a, b)) the functions h_G(x) and g_G(x) (x ∈ G) of the form

h_G(x) = E_x[ exp(−∫₀^{σ_G} β∘X_t dt) ; X_{σ_G} = b, σ_G < ∞ ],
g_G(x) = E_x[ exp(−∫₀^{σ_G} β∘X_t dt) ; X_{σ_G} = a, σ_G < ∞ ]

satisfy the differential equation

f″ + 2A(x)f′ + 2B(β(x), x)f = 0

with boundary conditions

h_G(a) = g_G(b) = 0,  h_G(b) = g_G(a) = 1.
Proof. It can be shown that h_G and g_G satisfy functional equations [5.1] and [5.2]. Let G₁ = (c, d) and x ∈ [c, d] ⊂ [a, b]. We have σ_G = σ_{G₁} +̇ σ_G and

h_G(x) = E_x[ exp(−∫₀^{σ_{G₁}} β∘X_t dt) exp(−∫_{σ_{G₁}}^{σ_{G₁} +̇ σ_G} β∘X_t dt) ; π_G = b, σ_{G₁} +̇ σ_G < ∞ ]
  = E_x[ exp(−∫₀^{σ_{G₁}} β∘X_t dt) ( exp(−∫₀^{σ_G} β∘X_t dt) ∘ θ_{σ_{G₁}} ) ; π_G ∘ θ_{σ_{G₁}} = b, σ_{G₁} < ∞, σ_G ∘ θ_{σ_{G₁}} < ∞ ]
  = E_x[ exp(−∫₀^{σ_{G₁}} β∘X_t dt) E_{X_{σ_{G₁}}}[ exp(−∫₀^{σ_G} β∘X_t dt) ; X_{σ_G} = b, σ_G < ∞ ] ; σ_{G₁} < ∞ ]
  = h_{G₁}(x) h_G(d) + g_{G₁}(x) h_G(c).

Similarly, g_G(x) = h_{G₁}(x) g_G(d) + g_{G₁}(x) g_G(c). Evidently, h_G(a) = g_G(b) = 0 and h_G(b) = g_G(a) = 1. Let h_r(x) = h_{(x−r,x+r)}(x), g_r(x) = g_{(x−r,x+r)}(x). We have

h_r(β₂, x) ≤ h_r(x) ≤ h_r(β₁, x),  g_r(β₂, x) ≤ g_r(x) ≤ g_r(β₁, x),
where

β₁ = inf{ β(x₁) : |x₁ − x| < r },  β₂ = sup{ β(x₁) : |x₁ − x| < r }.

It is easy to show (see theorem 5.1) that the expansions for h_r and g_r are uniform in (λ, x) on each bounded rectangle of the region (0, ∞) × R. From here it follows that

g_r(x) = 1/2 − A(x)r/2 + B(β(x), x) r²/2 + o(r²),
h_r(x) = 1/2 + A(x)r/2 + B(β(x), x) r²/2 + o(r²)

uniformly in x on each finite interval. Now we can apply theorem 5.3, in whose proof only this semi-Markov functional equation and the boundary conditions were used.
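These expansions can be checked against the explicit constant-coefficient formulas of item 22. The sketch below (illustrative, not from the book) assumes h_r = e^{Ar} sh(rm)/sh(2rm) with m = √(A² − 2B), which is the item 22 formula evaluated at the middle of the interval (x − r, x + r), and confirms h_r = 1/2 + Ar/2 + Br²/2 + o(r²):

```python
import math

# Check (illustrative) of the small-r expansion h_r = 1/2 + A r/2 + B r^2/2 + o(r^2),
# assuming h_r = e^{A r} sh(r m)/sh(2 r m) with m = sqrt(A^2 - 2B).
A, B = 0.6, -0.9   # sample constants, B = B(λ) < 0

def h_r(r):
    m = math.sqrt(A * A - 2.0 * B)
    return math.exp(A * r) * math.sinh(r * m) / math.sinh(2.0 * r * m)

for r in [1e-2, 1e-3]:
    approx = 0.5 + A * r / 2.0 + B * r * r / 2.0
    assert abs(h_r(r) - approx) < 10.0 * r ** 3   # remainder is O(r^3)
```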
21. Connection with Markov processes

THEOREM 5.10. Every continuous SM process on the line with transition generating functions g_G and h_G satisfying differential equation [5.6] with functions A, A′ and B(λ, ·) (λ > 0) continuous with respect to x can be obtained from a Markov process (P̄_x), corresponding to the differential equation f″ + 2A(x)f′ − 2λf = 0, in such a way that

h_G(λ, x) = P̄_x( exp(∫₀^{σ_G} B(λ, X_t) dt) ; X_{σ_G} = b, σ_G < ∞ ),
g_G(λ, x) = P̄_x( exp(∫₀^{σ_G} B(λ, X_t) dt) ; X_{σ_G} = a, σ_G < ∞ ).

Proof. According to theorem 5.9, the right-hand sides of the previous equalities satisfy differential equation [5.6] with coefficients A(x), B(λ, x) and the corresponding boundary conditions. Hence they are transition generating functions of the process with the family of distributions (P_x).

In Chapter 6 it will be shown that this transformation corresponds to a random time change transformation of the process with the help of the system of additive functionals

a_t(λ) = −∫₀^t B(λ, X_{t₁}) dt₁.
22. Homogenous SM process

Let us consider a SM process on the line corresponding to differential equation [5.6] with coefficients independent of x. Let h_G(λ, x) and g_G(λ, x) be transition generating functions of the process. Then for x ∈ G = (a, b)

g_G(λ, x) = e^{−A(x−a)} sh((b − x)√(A² − 2B(λ))) / sh((b − a)√(A² − 2B(λ))),
h_G(λ, x) = e^{A(b−x)} sh((x − a)√(A² − 2B(λ))) / sh((b − a)√(A² − 2B(λ))).
For A = 0 and B(λ) = −λ we obtain the semi-Markov transition generating functions of the standard Wiener process [DAR 53]. If B(λ, x) = B(λ) is a non-linear function of λ, this process is not Markovian. It is connected with the Wiener process (with shift) by the integral relation considered in theorem 5.10. For example, take B(λ) = −ln(1 + λ). This is a negative function, and its first derivative in λ is a completely monotone function (with minus sign). Such a B(λ) corresponds to a non-Markov semi-Markov process of diffusion type. As will be shown in the next chapter, this process can be obtained from the Wiener process with a shift by a random time change transformation with the help of an independent Gamma-process [HAR 71b]. Besides, as will be shown in Chapter 7, a continuous SM process can be obtained as a limit of an a priori given sequence of semi-Markov walks.
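A finite-difference check (illustrative, not from the book) that the constant-coefficient g_G and h_G, as reconstructed above, satisfy f″ + 2Af′ + 2B(λ)f = 0 with the boundary values g_G(λ, a) = h_G(λ, b) = 1 and g_G(λ, b) = h_G(λ, a) = 0:

```python
import math

# Check (illustrative) that
#   g(x) = e^{-A(x-a)} sh((b-x)m)/sh((b-a)m),
#   h(x) = e^{ A(b-x)} sh((x-a)m)/sh((b-a)m),   m = sqrt(A^2 - 2B),
# solve f'' + 2A f' + 2B f = 0 with g(a) = h(b) = 1, g(b) = h(a) = 0.
A, B = 0.7, -1.3      # sample values of A and B(λ), B < 0
a, b = -1.0, 2.0
m = math.sqrt(A * A - 2.0 * B)

def g(x):
    return math.exp(-A * (x - a)) * math.sinh((b - x) * m) / math.sinh((b - a) * m)

def h(x):
    return math.exp(A * (b - x)) * math.sinh((x - a) * m) / math.sinh((b - a) * m)

def ode_residual(f, x, eps=1e-4):
    d1 = (f(x + eps) - f(x - eps)) / (2.0 * eps)
    d2 = (f(x + eps) - 2.0 * f(x) + f(x - eps)) / eps ** 2
    return d2 + 2.0 * A * d1 + 2.0 * B * f(x)

assert abs(g(a) - 1.0) < 1e-12 and abs(g(b)) < 1e-12
assert abs(h(b) - 1.0) < 1e-12 and abs(h(a)) < 1e-12
for x in [-0.5, 0.3, 1.4]:
    assert abs(ode_residual(g, x)) < 1e-5
    assert abs(ode_residual(h, x)) < 1e-5
```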
23. Comparison of the processes' parameters

It is interesting to compare the well-known Kolmogorov coefficients of a Markov diffusion process with the coefficients of differential equation [5.6]. For this aim we derive a formula for the resolvent kernel R_λ(S | x) (x ∈ X, S ∈ B(X)) (see item 3.22), which expresses this kernel in terms of transition generating functions of a SM process. In the case of B(λ) = −λ this formula gives the tabulated value of the Laplace-transformed transition function of the Markov process, which is convenient for comparison of parameters. Let (P_x) be a SM process on the line with transition generating function f_G. Let us consider the resolvent kernel R_λ, where

R_λ(S | x) ≡ R(λ; I_S | x) = ∫₀^∞ e^{−λt} P_x(X_t ∈ S) dt.

In item 3.21 a way has been obtained to evaluate the functional R(λ; ϕ | x) as a limit of certain sums, suitable for piece-wise continuous functions ϕ. Theorem 3.4
gives another way of evaluating the resolvent kernel. For n = 1 this theorem implies that A_λ(R(λ, ϕ) | x) = −ϕ(x). From here

A_λ(R(λ, ϕ) | x) = λ^{−1} ϕ(x) A_λ(I | x)

at points of continuity of the function ϕ. According to item 17 and theorem 5.7, we have A_λ(I | x) = B(λ, x)/γ(x). Let f(x) = R_λ((−∞, b) | x) ≡ R(λ; I_{(−∞,b)} | x). It is not difficult to show that f is bounded, continuous, differentiable everywhere and twice differentiable on the intervals (−∞, b) and (b, ∞). Hence it belongs to the domain of definition of the operator A_λ. Therefore from item 18 we obtain a differential equation for the function f:

f″/2 + A(x)f′ + B(λ, x)f = λ^{−1} I_{(−∞,b)}(x) B(λ, x).

A continuous solution of this equation tending to zero as |x| → ∞ is, in the case of coefficients A and B independent of x, represented by the following formulae: f(x) = f₁(x) (x ≥ b), f(x) = f₂(x) (x < b), where, writing m = √(A² − 2B(λ)),

f₁(x) = ((m − A)/(2λm)) exp(−(x − b)(m + A)),
f₂(x) = 1/λ − ((m + A)/(2λm)) exp(−(b − x)(m − A)).

Let us consider the density of the measure R_λ(S | x), which is equal to the derivative of f with respect to b:

p_λ(b | x) ≡ (∂/∂b) R_λ((−∞, b) | x) = ((−B(λ)/λ) / √(A² − 2B(λ))) exp(A(b − x) − |b − x| √(A² − 2B(λ))).

For a homogenous Markov diffusion process ξ(t) with initial point x, local variance D and shift parameter V this density can be obtained as the outcome of the Laplace transformation in t of its one-dimensional density

ϕ_t(b | x) = (1/√(2πDt)) exp(−(b − x − Vt)²/(2Dt)),

which has the form

p̄_λ(b | x) = (1/√(V² + 2Dλ)) exp(V(b − x)/D − |b − x| √(V² + 2Dλ)/D)

(see [DIT 65], etc.). A comparison of these densities reveals the sense of the parameters A and B(λ): the function p_λ(b | x) is equal to p̄_λ(b | x) for

A = V/D,  B(λ) = −λ/D.
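The Laplace transform identity behind this comparison can be verified numerically. The sketch below (illustrative, not from the book) integrates e^{−λt} ϕ_t(b | x) by Simpson's rule and compares the result with the closed form (1/√(V² + 2Dλ)) exp(V(b − x)/D − |b − x|√(V² + 2Dλ)/D):

```python
import math

# Numerical check (illustrative) of
#   ∫_0^∞ e^{-λt} φ_t(b|x) dt = (1/sqrt(V^2+2Dλ)) exp(V(b-x)/D - |b-x| sqrt(V^2+2Dλ)/D),
# where φ_t is the Gaussian transition density with shift V and local variance D.
V, D, lam = 0.4, 1.5, 0.8
u = 1.2   # u = b - x

def phi(t):
    return math.exp(-(u - V * t) ** 2 / (2.0 * D * t)) / math.sqrt(2.0 * math.pi * D * t)

def laplace_numeric(n=100000, T=60.0):
    # composite Simpson rule; the integrand vanishes at t = 0
    h = T / n
    s = 0.0
    for i in range(1, n + 1):
        w = 4 if i % 2 == 1 else 2
        if i == n:
            w = 1
        s += w * math.exp(-lam * i * h) * phi(i * h)
    return s * h / 3.0

root = math.sqrt(V * V + 2.0 * D * lam)
closed_form = math.exp(V * u / D - abs(u) * root / D) / root
assert abs(laplace_numeric() - closed_form) < 1e-6
```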
5.2. Multi-dimensional semi-Markov processes of diffusion type

In this section, semi-Markov processes of diffusion type in d-dimensional space (d ≥ 2) are considered. In addition to the common properties of SM processes considered in Chapter 3, where only the metric structure of the state space is used, these processes have many specific properties. After analyzing one-dimensional SM processes of diffusion type there is a natural desire to generalize the obtained results to the multi-dimensional case. Here a qualitative jump is obvious already in the passage to two-dimensional space. Instead of a two-dimensional parameter of an elementary neighborhood of an initial point we obtain at once an infinite-dimensional one: (1) even for the simplest form of the neighborhood an arbitrary distribution on its boundary is possible; (2) the form of this neighborhood can be arbitrary. Nevertheless there is much in common in the local behavior of one-dimensional and multi-dimensional processes. Principally, there is differentiability of their transition generating functions, and a second-order differential equation of elliptic type for their derivatives. Moreover, such a function has an asymptotics for a sequence of small neighborhoods of an initial point which can be deduced by methods of the theory of differential equations, in particular, methods for the Dirichlet problem. In this way the weak asymptotic expansion of the distribution density on the boundary of such a small neighborhood is obtained. This makes it possible to prove the existence of a characteristic Dynkin operator [DYN 63] as a limit of ratios of appropriate expectations when the form of a neighborhood is arbitrary. In the case of spherical neighborhoods, simple formulae for the first three coefficients of the expansion are derived, reflecting the probabilistic sense of the coefficients of the partial differential equation of elliptic type.
5.2.1. Differential equations of elliptic type

In this section we consider continuous semi-Markov processes of diffusion type in a region S ⊂ R^d (d ≥ 2). The transition generating function of such a process is a solution in this region of a second-order partial differential equation of elliptic type. We look for conditions of local characterization of the processes similar to Kolmogorov's conditions formulated for Markov processes of diffusion type. From such conditions in [KOL 38] differential equations of parabolic type, the direct and inverse Kolmogorov equations, are derived; transition densities of the process satisfy such an equation (see [DYN 63, p. 221]). For a semi-Markov process such transition densities are useless, but there is another way of local characterization, namely with the help of the distribution of the first exit pair from a small neighborhood of an initial point.
The basic content of the present subsection is a solution of the inverse problem for a multi-dimensional semi-Markov process of diffusion type: for a given differential equation of elliptic type, to construct the corresponding semi-Markov process and to find asymptotic local properties of this process.

24. Notation for coordinates and derivatives

In this section we prefer to designate a point of multi-dimensional space R^d as p = (p¹, . . . , p^d). Designate

D_i u = ∂u/∂p^i,  D_{ij} u = ∂²u/(∂p^i ∂p^j).

In some integrals we designate a variable of integration p₁, p₂, . . ., etc. In these cases D_i^{(k)}, D_{ij}^{(k)}, . . . mean that the differentiation is made with respect to the coordinates of the point p_k. To avoid a great many indexes, we will apply other designations for the first coordinates of a point, p = (x, y, z, . . .), and under the sign of an integral x_k, y_k, z_k, . . . mean the coordinates of a point p_k, a variable of integration. As a universal designation for coordinates we will use π_i(p_k) = p_k^i, the i-th coordinate of the point p_k.

25. Differential equation

Consider in a region S ∈ A a partial differential equation of elliptic type

Lu ≡ (1/2) ∑_{i,j=1}^{d} a^{ij} D_{ij} u + ∑_{i=1}^{d} b^i D_i u − cu = 0.   [5.17]

Here a^{ij}, b^i, c are continuous real functions determined on S; (∀p ∈ S) (a^{ij}(p)) is a positive definite matrix; c ≥ 0; u is an unknown differentiable function. We admit that some coefficients (and solutions as well) of this equation can depend on a parameter λ ≥ 0. Without loss of generality we assume that the following norming condition is fulfilled:

(∀p ∈ S) (∀λ ≥ 0)  ∑_{i=1}^{d} a^{ii}(p) = d.   [5.18]
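In the model case a^{ij} = δ^{ij}, b^i = 0, c = 0 (which the norming condition [5.18] allows), L reduces to half the Laplacian, so it annihilates harmonic functions. A minimal numerical illustration (not from the book), using u(x, y) = x² − y² in d = 2:

```python
# Minimal illustration: with a^ij = δ^ij, b^i = 0, c = 0 the operator L
# reduces to (1/2)Δ, which annihilates the harmonic function u(x,y) = x^2 - y^2;
# Δ is approximated by central differences.
def L_model(u, x, y, eps=1e-3):
    dxx = (u(x + eps, y) - 2.0 * u(x, y) + u(x - eps, y)) / eps ** 2
    dyy = (u(x, y + eps) - 2.0 * u(x, y) + u(x, y - eps)) / eps ** 2
    return 0.5 * (dxx + dyy)

u = lambda x, y: x * x - y * y
for x, y in [(0.3, -0.7), (1.5, 2.0)]:
    assert abs(L_model(u, x, y)) < 1e-6
```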
According to standard terminology [SOB 54], equation [5.17] is a partial differential equation of elliptic type. Let G ⊂ S and let u_G(ϕ | ·) be a solution of the Dirichlet problem for this equation in the bounded region G with a continuous function ϕ given on the boundary of the region. Under the suppositions made, this solution is continuous on G ∪ ∂G. If the function ϕ is determined and continuous on the whole set S ∪ ∂S, then in determining u_G(ϕ | ·) we take the restriction of ϕ to the boundary of the set
G. We determine the solution for any p ∈ S∖G as u_G(ϕ | p) = ϕ(p). The operator u_G defined in such a manner maps the set C(S) of continuous and bounded functions into itself. Furthermore, we will understand a solution of the Dirichlet problem in this extended sense.

26. Functional equation

Show that for the family of operators (u_G)_{G∈A∩S} the following equation holds:

u_{G₂}(ϕ | p) = u_{G₁}(u_{G₂}(ϕ | ·) | p),   [5.19]

where G₁, G₂ ∈ A, G₁ ⊂ G₂ and p ∈ G₁. It is the same equation that the transition generating operators f_G(λ, · | ·) of a semi-Markov process are subordinated to. Let a function ϕ be determined on the boundary of G₂. In this case u_{G₂}(ϕ | ·) and the function f ≡ u_{G₁}(u_{G₂}(ϕ | ·) | ·) satisfy equation [5.17] inside G₁. Moreover, they coincide on the boundary of this set. According to the uniqueness theorem for a solution of the Dirichlet problem in region G₁, which holds due to our suppositions about the coefficients of the equation, we obtain u_{G₂}(ϕ | ·) = f everywhere inside G₁. The equality of these functions outside G₁ follows from the rule of extension of a solution of the Dirichlet problem.

27. Process of diffusion type

DEFINITION 5.2. A continuous semi-Markov process is referred to as a semi-Markov process of diffusion type in region S if for any λ ≥ 0 there exists a differential operator of elliptic type L = L(λ) with appropriate coefficients (see equation [5.17]) such that the family of transition generating operators (f_G(λ, ϕ | ·)) of this process coincides with the set (u_G) of solutions of the Dirichlet problem for the equation Lu = 0.

Furthermore, it will be clear that the differential operator L(λ), defining transition generating functions of a diffusion semi-Markov process, depends on λ only through the factor c = c(λ, ·). It is known (see, e.g., [DYN 63]) that the operator of the Dirichlet problem can be represented in an integral form with some kernel H_G(p, B):

u_G(ϕ | p) = ∫_{∂G} ϕ(p₁) H_G(p, dp₁).
This kernel for a region G with piece-wise smooth boundary is determined by a density h_G(p, p₁) with respect to the Lebesgue measure on the boundary:

H_G(p, B) = ∫_{B∩∂G} h_G(p, p₁) dS(p₁)   (p ∈ G).
Equation [5.19] in this case passes into the integral equation

h_{G₂}(p, p₂) = ∫_{∂G₁} h_{G₁}(p, p₁) h_{G₂}(p₁, p₂) dS(p₁).   [5.20]
For a semi-Markov process with the initial point p ∈ G the kernel hG (p, ·) ≡ fG (λ, · | p) is interpreted as a density of a transition generating function of a semi-Markov process, i.e. distribution density of the ſrst exit pair from region G transformed by the Laplace transformation on argument t ∈ R+ .
Note that the Laplace transformation image of the distribution density on argument t with a parameter λ can be interpreted as a corresponding distribution density of the process stopped at a random instant which is independent of the process and is exponentially distributed with the parameter λ (see [BOR 96]).
5.2.2. Asymptotics of the distribution of the point of first exit from an arbitrary neighborhood

A weak asymptotic expansion for the distribution density of the point of first exit from a small neighborhood of the initial point of the process is derived. The diameter of the neighborhood is assumed to tend to zero. The form of such a neighborhood can be arbitrary. This makes it possible to use the obtained expansion as a proof of existence of the characteristic Dynkin operator in its common definition.
28. Density of the point of first exit

Consider equation [5.17] in region S. Let this region contain the origin of coordinates, the point 0. Without loss of generality we can assume that (∀i, j) a^{ij}(0) = δ^{ij}. Otherwise we could apply a non-degenerate linear transformation of the space which reduces the equation to this form, and afterwards transform the obtained factors of the expansion according to the inverse transformation of the space (see item 34). Let G be a neighborhood of zero, a bounded region G ⊂ S with piece-wise smooth boundary, on which a distribution density h_G(p₀, ·) is determined. It corresponds to a semi-Markov diffusion process with initial point p₀ ∈ G. Let kG = {p : (∃p₁ ∈ G) p = kp₁} (k > 0) be a scale transformation of the set G. Under the above supposition the density h_{kG}(p₀, ·) (p₀ ∈ kG) is determined as well. Due to rescaling, the Lebesgue measure of the boundary varies correspondingly: |∂(kG)| = k^{d−1}|∂G|. It is clear that for any reasonable asymptotics of the probability density on ∂(kG) this normalizing factor should be taken into account.
29. Class of regions

In [GIL 89, p. 95] the following class of regions is defined. A region G with its boundary is said to belong to class C^{k,α} (k = 0, 1, 2, . . ., 0 ≤ α ≤ 1) if for each point p₀ ∈ ∂G there is a ball B = B(p₀, r) and a one-to-one map f of this ball onto D ⊂ R^d such that: (1) f(B ∩ G) ⊂ R^d₊ ≡ R^{d−1} × R₊; (2) f(B ∩ ∂G) ⊂ ∂R^d₊; (3) f ∈ C₀^{k,α}(B), f^{−1} ∈ C₀^{k,α}(D). Here C₀^{k,α}(·) means the space of k-times differentiable functions whose k-th derivatives are Hölder-continuous with index α on the corresponding set.

Let R(G) ≡ inf{r > 0 : G ⊂ B(0, r)}, where B(p, r) is a ball of radius r with center p in d-dimensional space. Furthermore, we will consider normalized neighborhoods of zero of the form (1/R(G))G, touching the surface of the ball B(0, 1). For a positive number k ≥ 1 we designate by A° = A°(k) a family of normalized neighborhoods of zero such that (∀G ∈ A°(k)) there exists a diffeomorphism f of class C₀^{2,α} mapping G → B(0, 1) such that (∀p₁, p₂ ∈ G) k^{−1}|p₁ − p₂| ≤ |f(p₁) − f(p₂)| ≤ k|p₁ − p₂|. It is clear that A°(k) ⊂ C^{2,α}.

30. Asymptotic expansion of a general view

Let Q_G(p₁, p₂) (p₁, p₂ ∈ G) be the Green function for the Laplace operator Δ ≡ ∑_{i=1}^d D_{ii} in region G, and let K_G(p₁, p₂) = ∂Q_G(p₁, p₂)/∂n₂ be the derivative of the Green function in the second argument along the direction of the interior normal to the boundary of the region. In the following theorem the family of sets RG is considered, where R > 0 (R → 0) and G ∈ A° (a normalized neighborhood of zero).

THEOREM 5.11. Let the factors b^i be continuously differentiable and a^{ij} twice continuously differentiable in some neighborhood of zero. Also let (∀i, j) a^{ij}(0) = δ^{ij}, and let there be positive numbers Λ₁ and Λ₂ such that (∀p ∈ R^d)

∑_{ij} a^{ij}(p₀) p^i p^j ≥ Λ₁ |p|²,
|a^{ij}(p₀)|, |b^i(p₀)|, |c(p₀)|, |a^{ij}(p₀) − δ^{ij}|/|p₀| ≤ Λ₂

for all i, j and p₀ in some neighborhood of zero.
SM Processes of Diffusion Type
173
Then the following weak asymptotics holds as R → 0:

\[
R^{d-1} h_{RG}(Rp_0, Rp) = K_G(p_0, p)
+ \int_G Q_G(p_0, p_1)\Bigl[ R\sum_i b^i(0)\, D_i^{(1)} K_G(p_1, p) + R\, X_G(p_1, p)
- R^2 c(0)\, K_G(p_1, p) + R^2 Y_G(p_1, p)\Bigr] dV(p_1)
+ o(R^2)\int_G Q_G(p_0, p_1)\, dV(p_1) \qquad [5.21]
\]

(p_0, p ∈ G). For any k ≥ 1 the term o(R²) is uniform over all G ∈ A°(k) and p_0 ∈ G. It is understood in a weak sense: the integral of this term over ∂G, multiplied by any function continuous and bounded on this set, is a value of order o(R²). The function X_G(p_1, ·) is orthogonal to any constant and any linear function on the boundary; the function Y_G(p_1, ·) is orthogonal to any constant function on the boundary. Moreover,

\[
X_G(p_1, p) = \sum_{ij}\sum_m a^{ij}_m(0)\,\pi_m(p_1)\, D_{ij}^{(1)} K_G(p_1, p),
\qquad
Y_G(p_1, p) = \sum_{i=1}^{6} Y_i(p_1, p),
\]

where

\[
Y_1(p_1, p) = \frac{1}{2}\sum_{ij}\sum_{mn} a^{ij}_{mn}(0)\,\pi_m(p_1)\,\pi_n(p_1)\, D_{ij}^{(1)} K_G(p_1, p),
\]
\[
Y_2(p_1, p) = \sum_i\sum_m b^i_m(0)\,\pi_m(p_1)\, D_i^{(1)} K_G(p_1, p),
\]
\[
Y_3(p_1, p) = \sum_{ij}\sum_m a^{ij}_m(0)\,\pi_m(p_1)\, D_{ij}^{(1)}
\int_G Q_G(p_1, p_2)\sum_{kl}\sum_n a^{kl}_n(0)\,\pi_n(p_2)\, D_{kl}^{(2)} K_G(p_2, p)\, dV(p_2),
\]
\[
Y_4(p_1, p) = \sum_i b^i(0)\, D_i^{(1)}
\int_G Q_G(p_1, p_2)\sum_{kl}\sum_n a^{kl}_n(0)\,\pi_n(p_2)\, D_{kl}^{(2)} K_G(p_2, p)\, dV(p_2),
\]
\[
Y_5(p_1, p) = \sum_{ij}\sum_m a^{ij}_m(0)\sum_k b^k(0)\,\pi_m(p_1)\, D_{ij}^{(1)}
\int_G Q_G(p_1, p_2)\, D_k^{(2)} K_G(p_2, p)\, dV(p_2),
\]
\[
Y_6(p_1, p) = \sum_i b^i(0)\, D_i^{(1)}
\int_G Q_G(p_1, p_2)\sum_k b^k(0)\, D_k^{(2)} K_G(p_2, p)\, dV(p_2).
\]

Here a^{ij}_m, a^{ij}_{mn}, b^i_m, … denote partial derivatives of the functions a^{ij}, b^i, … with respect to the m-th and n-th coordinates accordingly.
Proof. In what follows we omit the subscripts in the designations of the kernels Q_G, K_G, X_G, Y_G when this does not lead to misunderstanding. It is sufficient to prove that for any twice continuously differentiable function ϕ (0 ≤ ϕ ≤ 1) determined on the unit ball, the expression

\[
\Bigl[\int_{\partial G}\Bigl( R^{d-1} h_{RG}(Rp_0, Rp) - K(p_0, p)
- R\int_G Q(p_0, p_1)\Bigl(\sum_i b^i(0)\, D_i^{(1)} K(p_1, p) + X(p_1, p)\Bigr) dV(p_1)
\]
\[
+ R^2\int_G Q(p_0, p_1)\bigl(c(0)\, K(p_1, p) - Y(p_1, p)\bigr)\, dV(p_1)\Bigr)\,\varphi(p)\, dS(p)\Bigr]
\times \Bigl(\int_G Q(p_0, p_1)\, dV(p_1)\Bigr)^{-1} \qquad [5.22]
\]

has order o(R²) as R → 0 uniformly over all G ∈ A°(K) and p_0 ∈ G. On the boundary of the region RG we consider the function ϕ_R with ϕ_R(Rp) = ϕ(p) (p ∈ ∂G). For the solution u_{RG} of the Dirichlet problem for equation [5.17] in the region RG with value ϕ_R on the boundary, the following representation is true:

\[
u_{RG}(p_0) \equiv \int_{\partial RG} \varphi_R(p)\, h_{RG}(p_0, p)\, dS_R(p)
= R^{d-1}\int_{\partial G} h_{RG}(p_0, Rp)\,\varphi(p)\, dS(p). \qquad [5.23]
\]

We rewrite the equation Lu = 0 as Δu = −(L − Δ)u. For brevity we omit the argument ϕ in the designation of solutions of the Dirichlet problem: u_G(p) instead of the full u_G(ϕ | p). Then, according to the known formula for the solution of the Dirichlet problem for the Poisson equation (see [GIL 89, SOB 54], etc.), we have

\[
u_{RG}(p_0) = \psi_R(p_0) + \int_{RG} Q_R(p_0, p_1)\, f(p_1)\, dV_R(p_1), \qquad [5.24]
\]

where

\[
\psi_R(p_0) = \int_{\partial RG} K_R(p_0, p_1)\,\varphi_R(p_1)\, dS_R(p_1),
\]

f = (L − Δ)u_{RG}, and dS_R(p), dV_R(p) are elements of surface and volume of the region RG. Thus we obtain an integral equation with respect to u_{RG}. Change variables: p_i' = Rp_i (p_i ∈ G, i = 0, 1). Using the representation

\[
Q_R(p_1, p_2) =
\begin{cases}
\dfrac{1}{(d-2)\,\omega_d\, r_{12}^{\,d-2}} + g(p_1, p_2), & d \ge 3,\\[2mm]
\dfrac{1}{2\pi}\,\ln\dfrac{1}{r_{12}} + g(p_1, p_2), & d = 2,
\end{cases}
\]

and its uniqueness (see [SOB 54, p. 302]), where r_{ij} = |p_i − p_j|, ω_d = 2π^{d/2}/Γ(d/2) is the surface measure of the unit sphere in d-dimensional space and g(p_1, p_2) is a regular harmonic function, we obtain the relations

\[
R^{d-2}\, Q_R(Rp_1, Rp_2) = Q_1(p_1, p_2) \equiv Q(p_1, p_2),
\qquad
R^{d-1}\, K_R(Rp_1, Rp_2) = K_1(p_1, p_2) \equiv K(p_1, p_2).
\]

Moreover, if F̄(Rp) = F(p) and D_i, D_{ij} are derivatives in the corresponding coordinates of the point p, then D_i F̄(Rp) = D_i F(p)/R and D_{ij} F̄(Rp) = D_{ij} F(p)/R². From here

\[
\psi_R(Rp) \equiv \int_{\partial RG} K_R(Rp, p_1)\,\varphi_R(p_1)\, dS_R(p_1)
= \int_{\partial G} K_R(Rp, Rp_1)\,\varphi_R(Rp_1)\, R^{d-1}\, dS(p_1)
= \int_{\partial G} K(p, p_1)\,\varphi(p_1)\, dS(p_1) \equiv \psi(p), \qquad [5.25]
\]

\[
\int_{RG} Q_R(Rp, p_1)\,(L - \Delta) u_{RG}(p_1)\, dV_R(p_1)
= \int_G Q_R(Rp, Rp_1)\,\bigl(L_R - \Delta_R\bigr) u_{RG}(Rp_1)\, R^d\, dV(p_1),
\]
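The scaling relation R^{d−2} Q_R(Rp_1, Rp_2) = Q(p_1, p_2) can be illustrated for d = 3 with the pole at the center of the ball, where the Green function has the familiar closed form (1/(4π))(1/r − 1/R); a short sketch in Python (the helper name is ours):

```python
import math

def green_ball_center(R, r, d=3):
    """Green function of the Laplacian on the ball of radius R in R^3,
    evaluated at a point at distance r from the center (the pole):
    Q_{RB}(0, p) = (1/(4*pi)) * (1/r - 1/R), valid for 0 < r < R."""
    assert d == 3 and 0 < r < R
    return (1.0 / (4.0 * math.pi)) * (1.0 / r - 1.0 / R)

# scaling relation R^{d-2} Q_{RB}(0, R p) = Q_B(0, p) with d - 2 = 1
for R in (0.1, 0.5, 2.0):
    for r in (0.2, 0.7):
        lhs = R * green_ball_center(R, R * r)   # R^{d-2} Q_{RB}(0, Rp)
        rhs = green_ball_center(1.0, r)         # Q_B(0, p) on the unit ball
        assert abs(lhs - rhs) < 1e-12
```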
where

\[
L_R u = \sum_{ij} a^{ij}(Rp)\, D_{ij} u / R^2 + \sum_i b^i(Rp)\, D_i u / R - c(Rp)\, u,
\qquad
\Delta_R u = \sum_i D_{ii} u / R^2 .
\]

By the conditions of the theorem, the function (a^{ij}(Rp) − δ^{ij})/R is bounded on the unit ball uniformly for all R small enough. Using this property, we consider the operator M_R = R·(L_R − Δ_R). With its help the integral equation [5.24] can be rewritten as

\[
u_{RG}(p_0) = \psi(p_0) + R \int_G Q(p_0, p_1)\, M_R u_{RG}(p_1)\, dV(p_1) \qquad [5.26]
\]

(p_0 ∈ G). Let us estimate the norm of the operator M_R. We have

\[
\|M_R u\| \le \sum_{ij} \bigl|(a^{ij} - \delta^{ij})/R\bigr|\cdot\bigl|D_{ij} u\bigr|
+ \sum_i |b^i|\cdot|D_i u| + R\,|c|\,|u| \le C_1\, |u|_{2;G},
\]

where

\[
|f|_{2;S} = \sum_{ij} \sup_{p\in S} \bigl|D_{ij} f(p)\bigr| + \sum_i \sup_{p\in S} \bigl|D_i f(p)\bigr| + \sup_{p\in S} |f(p)|
\]

(the notation is borrowed from [GIL 89, p. 59]) and C_1 = C_1(d, Λ_2). The norm |u|_{2;S} for u = u_S(ϕ | ·) (the solution on S of the Dirichlet problem for the equation Lu = f with u = ϕ on ∂S) is estimated in [GIL 89, p. 100]:

\[
|u|_{2;S} \le C_2 \bigl( |u|_{0;S} + |\varphi|_{2;S} + |f|_{0;S} \bigr) \qquad [5.27]
\]

with a constant C_2 = C_2(d, Λ_1, Λ_2, K), where S ∈ A°(K) (S ⊂ B). Here, as above, it is assumed that the function ϕ is determined and uniformly twice continuously differentiable on B. Let Lu = 0 on RG and u = ϕ_R on ∂(RG). Then the rescaled function satisfies L_R u = 0 and equals ϕ on ∂G. From here

\[
\|M_R u\| \le C_1 C_2 \bigl( |u|_{0;G} + |\varphi|_{2;G} \bigr) \le 2\, C_1 C_2\, |\varphi|_{2;G}.
\]

On the other hand, according to [5.25] the function ψ is a solution of the Dirichlet problem for the Laplace equation on the region G with boundary value ψ = ϕ on ∂G. From here

\[
\|M_R \psi\| \le C_1 C_3 \bigl( |\psi|_{0;G} + |\varphi|_{2;G} \bigr)
\]
(C_3 = C_3(d, K)). Furthermore, we apply the operator M_R to the function u = ∫_G Q(·, p) f(p) dV(p). This u is a solution of the Dirichlet problem for the Poisson equation with right-hand side −f and zero boundary condition. In this case

\[
\|M_R u\| \le C_1 C_3 \bigl( |u|_{0;G} + |f|_{0;G} \bigr)
\le C_1 C_3 \bigl( |S_G|_{0;G} + 1 \bigr)\, |f|_{0;G},
\]

where

\[
S_G(p_0) = \int_G Q(p_0, p)\, dV(p).
\]

The value |S_G|_{0;G} + 1, where S_G is the solution of the Poisson equation, is bounded by a constant C_4 = C_4(d, Λ_1, Λ_2) (see [GIL 89, p. 43]). Consider the operator

\[
T_R u(p) \equiv M_R \int_G Q(p, p_1)\, u(p_1)\, dV(p_1).
\]

We obtain |T_R u|_{0;G} ≤ C |u|_{0;G}, where C = C_1 C_3 C_4. From here, under the condition RC < 1, a solution of the integral equation [5.26] can be represented as the series

\[
u_{RG} = \psi + R \int_G Q(\cdot, p)\, M_R \psi(p)\, dV(p)
+ \sum_{k=1}^{\infty} R^{k+1} \int_G Q(\cdot, p)\, T_R^k M_R \psi(p)\, dV(p). \qquad [5.28]
\]
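Representation [5.28] is a Neumann series for the fixed-point equation u = ψ + R·(integral operator applied to u), convergent when RC < 1. A toy discretized sketch, with a 3×3 matrix standing in for the integral operator (all names and numbers are ours, purely for illustration):

```python
# Solve u = psi + R * T u by the Neumann series u = sum_k (R T)^k psi,
# valid when the norm of R*T is below 1.
def mat_vec(T, v):
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def neumann_solve(T, psi, R, n_terms=60):
    u = psi[:]
    term = psi[:]
    for _ in range(n_terms):
        term = [R * x for x in mat_vec(T, term)]   # next term (R T)^k psi
        u = [a + b for a, b in zip(u, term)]
    return u

T = [[0.2, 0.1, 0.0],
     [0.0, 0.3, 0.1],
     [0.1, 0.0, 0.2]]        # contraction after multiplying by R below
psi = [1.0, 2.0, 3.0]
R = 0.5
u = neumann_solve(T, psi, R)

# residual check: u - psi - R*T*u should vanish
res = [u[i] - psi[i] - R * mat_vec(T, u)[i] for i in range(3)]
assert all(abs(r) < 1e-9 for r in res)
```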
In order to evaluate terms of the ſrst and second order we use the Taylor formula for coefſcients of equation [5.17]. Let MR = M0 + R · N + WR , where M0 v(p) =
m
ij
N v(p) =
bi (0)Di v(p),
i
1 ij a (0)πm (p)πn (p)Dij v(p) 2 ij mn mn +
WR v(p) =
aij m (0)πm (p)D ij v(p) +
i
bim (0)πm (p)Di v(p) − c(0)v(p),
m
ij αR (p)Dij v(p)
ij
+
i
i βR (p)Di v(p) − R · c(Rp) − c(0) v(p),
178
Continuous Semi-Markov Processes
ij αR (p) = aij (Rp) − δ ij /R − aij m (0)πm (p) m
1 − R aij (0)πm (p)πn (p), 2 mn mn i βR (p) = bi (Rp) − bi (0) − R bim (0)πm (p). m ij (p), αR
R(c(Rp) − c(0)) obviously have an order o(R) with R → 0 Factors uniformly on all p ∈ B. Taking into account an estimate [5.27], we obtain WR ψ(p) = o(R) for any function ϕ from the given class uniformly on all G ∈ A◦ (K) and uniformly on all p ∈ G. From here in expansion of the function uRG (Rp0 ) on degrees of R the coefſcient of the ſrst degree is equal to Q p0 , p M0 ψ(p) dV (p), i βR (p),
G
and that of the second degree (1) Q p0 , p N ψ(p) dV (p) Q p0 , p1 M0 Q p1 , p M0 ψ(p) dV (p) dV p1 . G
G
G
The remaining terms of the sum give the function, having an order o(R) for any function ϕ from a class of twice continuously differentiable functions on B uniformly on all G ∈ A◦ (K) and uniformly on all p ∈ G. Further, we use integral representations of uRG of the form [5.23] and ψ(p) of the form [5.25]. Substituting them in coefſcients of expansion of the function uRG and changing an order of integrations, we obtain representation [5.22], equivalent to a statement of the theorem. An assertion about additional terms of ſrst and second order to be orthogonal to any constant follows from a property (∀p1 ∈ G) K p1 , p2 dS p2 = 1 ∂G
[SOB 54]. The orthogonality of X1 to any linear function on the boundary follows from a property (∀p1 ∈ G) K p1 , p2 πk p2 − πk p1 dS p2 = 0, ∂G
which is a direct corollary of the Green formula (see [GIL 89, SOB 54], etc.). 5.2.3. Neighborhood of spherical form 31. Exit from neighborhood of spherical form The coefſcients of expansion of a density derived in theorem 5.11 receive a concrete meaning with known analytical expression of the kernels Q(p1 , p2 ) and K(p1 , p2 ).
SM Processes of Diffusion Type
179
Let G = B ≡ {p : p < 1} be a unit ball. THEOREM 5.12. Under the same conditions on coefſcients of equation [5.17] as in theorem 5.11 the following weak asymptotic expansion of a density of the ſrst exit from a small spherical neighborhood holds: d 1 1 i d−1 1+R b (0)πi (p) + X(0, p) R hRB (0, Rp) = ωd 2 i=1 2 1 2 (R −→ 0), + R − c(0) + Y (0, p) + o R 2d [5.29] where the function X is orthogonal to any constant and any linear function on the boundary, the function Y is orthogonal to any constant function on the boundary, ωd = 2π (d/2) /Γ(d/2) is a measure of a surface of a unit ball in d-dimensional space. Moreover for a process that is homogenous in space X = 0 and d d 1 i 2 1 i j b , d ≥ 2. b b πi (p)πj (p) − Y (0, p) = 8ωd d i ij Proof. It is known (see [GIL 89, MIK 68, SOB 54], etc.) that for a ball B = {p ∈ Rd : p < 1} the Green function QB is equal to ⎧ ⎪ 1 1 1 ⎪ ⎪ − d−2 , d ≥ 3 ⎪ d−2 ⎨ (d − 2)ωd r12 r12 QB p1 , p2 ≡ Q p1 , p2 = ⎪ ⎪ 1 1 1 ⎪ ⎪ ln , d = 2, − ln ⎩ 2π r12 r12 2 where r212 = (1 − r12 )(1 − r22 ) + r12 . For G = B the kernel K, called the Poisson kernel, is equal to
1 − r12 , KB p1 , p2 ≡ K p1 , p2 = d ωd r12 where ri = pi , rij = pi − pj . From here the i-th factor in the ſrst term of the asymptotics is equal to (1) Q 0, p1 Di K p1 , p2 dV p1 B
⎧ ⎪ ⎪ ⎪ ⎨ =
1
− r12 dV p1 , d ≥ 3, d ωd r12
B
1 (d − 2)ωd
B
1 (1) 1 − r12 1 ln Di 2 dV p1 , 2π r1 2πr12
⎪ ⎪ ⎪ ⎩
r1d−2
(1) 1
− 1 Di
d = 2.
180
Continuous Semi-Markov Processes
Integrating by parts on the i-th component and taking into account that Q(·, p1 ) = 0 with p1 ∈ ∂B, we obtain the last expression equal to 1 − r12 1 1 (1) Di − 1 dV p1 , d ≥ 3, − d−2 d (d − 2)ω ω r r1 d d 12 B 1 1 − r12 1 (1) − Di ln dV p1 , d = 2. 2 r1 2π r12 B 2π With appropriate d ≥ 2 both these expressions are equal to the following expression: πi p1 1 − r12 1 [5.30] dV p1 . 2 d d ωd B r1 r12 The evaluation of integral [5.30] for d = 2 is not difſcult to perform. For arbitrary dimension d ≥ 3 we make a change of variables: p → p = Ap, where A is an orthogonal transformation, determined by a matrix (of the same label). Here the point p2 under this transformation passes in a point (1, 0, . . . , 0), where p2 = 1. Let (Φ1 , . . . , Φd−1 ) be the spatial polar (spherical) coordinates of a point p2 . Then a matrix of such a transformation can be as follows ⎞ ⎛ C1 S1 C2 S1 S2 C3 · · · S1 S2 · · · Sd−2 Cd−1 S1 S2 · · · Sd−1 ⎜−S1 C1 C2 C1 S2 C3 · · · C1 S2 · · · Sd−2 Cd−1 C1 S2 · · · Sd−1 ⎟ ⎟ ⎜ ⎜ 0 −S2 C2 C3 · · · C2 S3 · · · Sd−2 Cd−1 C2 S3 · · · Sd−1 ⎟ ⎟. ⎜ A=⎜ ⎟ · · ··· · · ⎟ ⎜ · ⎝ 0 Cd−2 Sd−1 ⎠ 0 0 ··· Cd−2 Cd−1 0 0 0 ··· −Sd−1 Cd−1 Here Ci = cos Φi , Si = sin Φi . This transformation can be received as an outcome of sequential d − 1 elementary turns: A = A1 · · · Ad−1 , where ⎛ ⎞ 1 0 ··· 0 ⎜ 0 1 ··· 0 ⎟ ⎜ ⎟ 0 0 ⎜ · · ··· · ⎟ ⎜ ⎟ ⎜ 0 0 ··· 1 ⎟ ⎜ ⎟ ⎜ ⎟ Ck Sk ⎜ ⎟. 0 Ak = ⎜ 0 ⎟ −Sk Ck ⎜ ⎟ ⎜ 1 0 ··· 0 ⎟ ⎜ ⎟ ⎜ 0 1 ··· 0 ⎟ ⎜ ⎟ 0 0 ⎝ · · ··· · ⎠ 0 0 ··· 1 Furthermore, we have πi p1 1 − r12 πi A−1 p1 1 − r12 1 1 dV p1 = 2 dV p1 , d d ωd2 B ωd B r1d r12 r1d r12
SM Processes of Diffusion Type
181
2 where in the second expression r12 = 1+r12 −2r1 c1 and c1 is cosine of angle between vectors p1 and (1, 0, . . . , 0). It is easiest to calculate this integral for ſrst coordinates. So π2 A−1 p = S1 C2 x + C1 C2 y − S2 z π1 A−1 p = C1 x − S1 y,
etc., the k-th coordinate being represented by the k + 1-th summand. Fortunately, in our formulae a common case can be reduced to the case of ſrst numbers with the help of variable change. Let T : Rd → Rd be an orthogonal transformation, which consists of the permutation of coordinates T p = (pi1 , . . . , pid ), where (i1 , . . . , id ) is a permutation of numbers (1, 2, . . . , d). Then πi1 p1 . dV p1 J i1 , p2 ≡ B F cos p1 , p2 πi T −1 p1 . dV p1 1= −1 p , p B F cos T 1 2 π1 p 1 . dV p1 ≡ J 1, T p2 , = B F cos p1 , T p2 where p1 , p2 is an angle between vectors p1 and p2 . Let us calculate an integral with i = 1. x1 1 − r12 1 dV p1 d d ωd2 B r1 r12 C1 x − S1 y 1 − r2 1 dV (p) = 2 ωd B r d ld 1 2π π π C1 rc1 1 − r2 d−1 d−2 1 = 2 ··· r s1 · · · s1d−2 dr dϕ1 · · · dϕd−1 ωd 0 0 r d ld 0 0 ωd−1 1 π C1 c1 1 − r2 d−2 = s1 dr dϕ1 , ωd2 0 0 ld where l2 = 1 + r2 − 2rc1 , (r, ϕ1 , . . . , ϕd−1 ) are polar coordinates of the point p and ci = cos ϕi , si = sin ϕi . This integral and the similar are possible to calculate on the basis of equality K p1 , p2 dS p2 1= B
=
2π
0
= ωd−1
π
0
0
π
···
0
π
1 − r2 d−2 s · · · s1d−2 dϕ1 · · · dϕd−1 ωd l d 1
1 − r2 d−2 s dϕ1 , ωd l d 1
182
Continuous Semi-Markov Processes
from here
π
ω c1 sd−2 1 d . dϕ1 = ld ωd−1 1 − r2
0
[5.31]
Integrating by parts the intrinsic integral and using this formula, we obtain the ſrst coefſcient ωd−1 1 π C1 c1 1 − r2 d−2 s1 dr dϕ1 ωd2 0 0 ld
C1 ωd−1 = ωd2 =
1
1−r
0
C1 ωd−1 ωd2
1
0
2
π
0
1 − r2 ·
c1 sd−2 1 dr dϕ1 ld
rωd C dr = 1 . 2ωd ωd−1 1 − r2
The last expression can also be rewritten as x2 /(2ωd ). So, in this case J(1, p2 ) = π1 (p2 )/(2ωd ). For arbitrary k ∈ {1, . . . , d} we choose a permutation operator T in such a manner that k = i1 . In this case J(k, p2 ) ≡ J(i1 , p2 ) = J(1, T p2 ) = π1 (T p2 )/ (2ωd ) = πk (p)/(2ωd ). From here coefſcient of a principal term of the ſrst order is equal to 1 i b (0)πi (p). 2ωd i Now calculate a coefſcient of a principal term of the second order. Q 0, p1 K p1 , p2 dV p1 B
=
1 (d − 2)ωd
1
−1
1 − r12 dV p1 d ωd r12
r1d−2 1 π d−2 1 ωd−1 s1 2 d−1 −1 1−r r dϕ1 dr = (d − 2)ωd2 0 rd−2 ld 0 B
ωd−1 = (d − 2)ωd2 =
1 (d − 2)ωd
1
1
1 − r2 1 − rd−2 r ·
0
0
1 − rd−2 r · dr =
ω d dr ωd−1 1 − r2
1 , 2dωd
where d ≥ 3. The same formula is fair for d = 2 despite the other method of evaluation which we omit in view of a lack of any complexities.
SM Processes of Diffusion Type
183
32. Evaluation of orthogonal term X Show on an example d ≥ 3 how the orthogonal term X can be calculated. We have X 0, p2 ϕ p2 dS p2 ∂B
=
ij
aij m (0)
m
B
(1) Q 0, p1 πm p1 Dij ψ p1 dV p1 .
Here the integral is equal to (1) 1 1 − 1 πm p1 Dij ψ p1 dV p1 d−2 B (d − 2)ωd r1 1 1 (1) (1) =− p Dj ψ p1 dV p1 . Di − 1 π m 1 d−2 r1 B (d − 2)ωd Again we accept m = 1 and x1 = π1 (p1 ). The further formulae differ for cases of possible equality of indexes i, j, 1. There are only 5 variants: (1) i = j, 1 = j, 1 = i; (2) i = j, 1 = i; (3) i = j, 1 = j; (4) i = j, 1 = i; (5) i = j = 1. In the ſrst case we have 1 1 (1) (1) Di − 1 x1 Dj ψ p1 dV p1 − d−2 r1 B (d − 2)ωd x1 πi p1 (1) Dj ψ p1 dV p1 . = ω r1d B d Here again we apply an integration by parts, but boundary values of the ſrst function here are not equal to zero. We obtain the previous expression equal to = Bj
/
0jb (j) x1 πi p1 (1) x1 πi p1 j · ψ p1 dV p1 , ψ p1 dV p1 − Dj d d ωd r1 ωd r1 B ja
where integration on a d-dimensional ball B we have replaced by iterated integration: (1) outside on the (d − 1)-dimensional ball B j (with the exception of the j-th coor(j) (j) dinate) with a variable of integration p1 and with the element of volume dV j (p1 ),
184
Continuous Semi-Markov Processes
1 (2) inside on the j-th coordinate on an interval (ja , jb ), where −ja = jb = (j) 1 − p1 2 . We see that on the boundary of B function ψ = ϕ, and also r1 = 1 and jb = j1+ ≡ πj (p+ 1 ) (the value of the j-th coordinate of p1 , when it is situated on the “upper” boundary), and ja = j1− ≡ πj (p− 1 ) (the value of the j-th coordinate of p1 , when it is situated on the “lower” boundary). Thus, 0jb / (j) x1 πi p1 ψ p1 dV j p1 d r1 B j ωd ja j (j) x1 πi p1 + ϕ p1 − ϕ p − dV p1 . = 1 d j ω r d 1 B To simplify this integral, we consider an integral on a surface of a ball + (j) 1 1 F p1 + F p − F p1 dS p1 = dV j p1 1 & & (j) 2 ∂B Bj 1 − &p1 & + + − j (j) F p1 /j1 − F p− p1 . = 1 /j1 dV Bj
We note that this integral is equal to preceding one when x 1 πi p 1 π j p 1 F p1 = ϕ p1 . ωd The second integral is equal to x1 πi p1 πj p1 d (−1) ψ p1 dV p1 , ωd r1d+2 B Remembering possibilities of a permutation operator, we consider this integral for the ſrst three coordinates of a point p1 (to simplify matters we assume that d ≥ 5): x1 y1 z1 d (−1) ψ p1 dV p1 ωd r1d+2 B x1 y1 z1 d 1 − r12 ϕ p2 (−1) dV p1 dS p2 . = d+2 ω r d ωd r1 d 12 ∂B B The interior integral is equal to d − 2 C1 x − S1 y S1 C2 x + C1 C2 y − S2 z ωd B 1 − r2 × S1 S2 C3 x + C1 S2 C3 y + C2 C3 z − S3 w d+2 d dV (p), r l
SM Processes of Diffusion Type
185
where p = (x, y, z, w, . . .), l2 = 1 + r2 − 2rc1 , c1 is the cosine of an angle between p and (1, 0, . . . , 0). The last expression is equal to ωd−3 d 1 π π π 1 − r2 C1 S12 C2 S2 C3 x3 + C1 C12 − 2S12 C2 S2 C3 xy 2 − 2 ωd 0 0 0 0 1 d−2 d−3 d−4 s s2 s3 dϕ1 dϕ2 dϕ3 dr. ld 1 Using the method applied above of evaluation of integrals (formula [5.31]), having expression ld as a denominator of the ratio, we discover that this integral is equal to dωd x2 y2 z2 /4 (we omit details). Together with the ſrst term this gives us πi (p2 )πj (p2 )x2 (d + 4)/4. So, in the ſrst case we obtain a common view of summand − C1 C2 S2 C3 xz 2
d+4 aij p2 ∈ ∂B , 1 πi p2 πj p2 x2 4 which are summarized to the term Xωd . The values of summands of the remaining four types, which compose the term X, are calculated similarly. 33. Evaluation of orthogonal term Y The same method of evaluation is applicable to a term Y . We calculate a component of this term, which exists in the case of homogenous in space process, when the coefſcients of the differential equation are constant. In this case (1) (2) i k b b Q 0, p1 Di Q p1 , p2 Dk K p2 , p dV p2 dV p1 . Y (0, p) = i
We have B
B
k
(1) Q 0, p1 Di =− = B
B
B
B
(2) Q p1 , p2 Dk K p2 , p dV p2 dV p1
(1) Di Q 0, p1
(2) Dk K
p2 , p
B
B
(2) Q p1 , p2 Dk K p2 , p dV p2 dV p1
πi p 1 Q p1 , p2 dV p1 dV p2 . d ωd r1
An interior integral for d ≥ 3 is equal to πi p 1 1 1 1 − d−2 dV p1 d d−2 r12 r12 B ωd r1 (d − 2)ωd and for d = 2 it is equal to πi p 1 1 1 1 ln dV p1 . − ln 2 2π 2πr r r 12 12 B 1
186
Continuous Semi-Markov Processes
Using permutation of coordinates we substitute πi by x; further, we make replacement variables with the help of orthogonal transformation and partial integration. In the case of d ≥ 3 we have the interior integral C1 ωd−1 ωd2 (d − 2)
1 π c1 sd−2 c1 sd−2 1 1 dϕ1 dr dϕ dr + 1 d−2 d−2 d−2 ld−2 r /r r/r2 l 2 0 r2 r2 0 r 0 1 π c1 sd−2 1 dϕ1 dr (d ≥ 3), − d−2 rr 2 0 0 l r2
π
where l(r) = 1 + r2 − 2rc1 . For d = 2 the interior integral is equal to 1 1 dϕ1 dr1 c1 ln − ln r12 r12 0 0 1 2π 1 C1 1 2 = s1 r1 r2 2 − 2 dϕ1 dr1 (2π)2 0 0 r12 r12 1 r2 C1 r1 2π s21 r2 2π s21 dϕ1 dr1 = dr + dϕ 1 1 (2π)2 r2 0 l r1 /r2 l r2 /r1 r2 r1 0 0 2π 1 s21 dϕ1 dr1 . − r1 r2 l r1 r2 0 0
C1 (2π)2
1
2π
After integration with the use of formula [5.31] we discover that the interior integral is equal to C1 r2 2(d − 2)2 ωd 1 C1 r2 ln , 2 · 2π r2
1 r2d−2
− 1 , d ≥ 3, d = 2,
where according to the denotation C1 r2 = x2 . We return to an evaluation of the outside integral. Using equality to zero of an interior integral for p2 ∈ ∂B, we obtain ! 2 ⎧ 1 1 − r22 1 (2) ⎪ ⎪ x D − 1 dV p2 , − 2 ⎪ k 2 d−2 d 2 ⎨ B 2(d − 2) ωd r23 r2 ⎪ ⎪ 1 1 − r22 1 (2) ⎪ ⎩− x D ln dV p2 , 2 2 2 k 2(2π) r r 2 B 23
d ≥ 3, d = 2.
SM Processes of Diffusion Type
187
When πk = x we obtain for d ≥ 3: π k p2 x2 1 − r22 − dV p2 − d d r2 ωd r23 B 2(d − 2)ωd x2 πk p2 1 − r22 1 = dV p2 . 2 d d 2(d − 2)ωd B r2 r23 Once again, to simplify matters we substitute an arbitrary coordinate πk (p2 ) with second y2 ; furthermore, we apply an orthogonal transformation and partial integration; again we use [5.31]. As an outcome we discover that for d ≥ 3 the previous integral is equal to x3 y3 ωd−2 2(d − 2)ωd2
0
1
r 1−r
2
π
0
π
0
c21 − s21 c22 sd−2 sd−3 x3 y3 1 2 dϕ1 dϕ2 dr = . ld 8(d − 2)ωd
For d = 2 the similar expression is equal to x3 y3 /(16π). When πk (p2 ) = x2 we obtain for d ≥ 3: x22 (d − 2) 1 − r22 −1− dV p2 − d−2 d d r2 r23 r2 B 1 π 2 d−2 ωd 1 c1 s1 2 2 − C 1 − r r ω dϕ1 dr =− 1 d−1 2(d − 2)2 ωd2 2d ld 0 0 1 π π 2 d d−3 c2 s1 s2 2 2 − S1 ωd−2 1−r r dϕ1 ϕ2 dr ld 0 0 0 1 1 . x23 − = 8(d − 2)ωd d
1 2(d − 2)2 ωd2
1
and similarly for d = 2 we obtain (x23 − 1/2)/(16π). Making inverse permutation of indexes ſnally we discover that in the case of the homogenous differential equation 1 Y6 (0, p) = 8 ωd
d iqj
d 1 i 2 b , b b πi (p)πj (p) − d i i j
d ≥ 2.
The evaluation Y in an inhomogenous case is rather awkward in connection with the necessity of taking into account the coincidence of indexes.
188
Continuous Semi-Markov Processes
5.2.4. Characteristic operator Consider the characteristic Dynkin operator [DYN 63, p. 201] Ep ϕ ◦ XσG , σG < ∞ − ϕ(p) . Aϕ(p) ≡ lim G↓p mG (p) In this deſnition a sequence of open sets G is supposed to tend to a point p ∈ G in an arbitrary manner, namely, the diameters of these sets tend to zero. We will not consider a problem of determination of characteristic operators in general. We will investigate a family of neighborhoods RG + p (R > 0, R → 0), converging to the point p, where G is some neighborhood of zero-point in Rd with a smooth enough boundary. The existence of the characteristic operator of a Markov process on the set of twice differentiable functions is proved in a case when on this set there exists an inſnitesimal operator. However, the Markov property is not necessary for the characteristic operator Aλ ϕ or its partial case A0 ϕ = Aϕ to exist. A proof of this fact based on theorem 5.11, which proved for a matrix of coefſcients of second derivatives of special view. 34. Arbitrary matrix of coefſcients Let us consider equation [5.17] with arbitrary non-degenerate positive deſnite continuous matrix (aij ) of coefſcients of the second derivatives, satisfying the norming condition [5.18]. For a ſxed initial point p = 0 let us deduce this equation to the form, when the matrix of coefſcients of the second derivatives is equal to (δ ij ) at point 0. T In order to do this let us represent the matrix a ≡ (aij (0)) of the form L L, where ij T L ≡ (L ) is a matrix of non-degenerate linear transformation, and L is the transposed matrix. It is well-known that the possibility of such a presentation is a necessary and sufſcient condition of positive deſnition of the original matrix. Let us transform all sets of points of the original space X ≡ Rd according to the map y = Lx, where L is the inverse matrix for L: LL = LL = E, where E = (δ ij ) (i.e. L = L−1 ) (we designate a linear map and its matrix with the same letter). 
The map L induces a map of the functional space D in itself, which we designate by the same letter. Thus, according to this deſnition, (Lξ)(t) = L(ξ(t)) (ξ ∈ D, ≥ !). In this case σLΔ (Lξ) = σΔ (ξ), XσLΔ (Lξ) = LXσΔ (ξ). For the given semi-Markov family of measures (Px ) on F , L (LS) = Px (S) (S ∈ F), and map L induces the family of measures (PxL ), where PLx L L , fΔ , the family of semi-Markov transition and transition generating functions: FΔ L L (dt × LS | Lx) = FΔ (dt × S | x), fLΔ (λ, LS | Lx) = fΔ (λ, S | x). where FLΔ Evidently, the linear map of the space in itself preserves the semi-Markov property L (λ, ϕ | x) of this process of a process and its diffusion type. The integral function fΔ satisſes the transformed differential equation 1 ij i a uij + b ui − cu = 0. 2 ij i=1 d
[5.32]
SM Processes of Diffusion Type
189
This is a solution of the Dirichlet problem for this equation on the set Δ with the meaning ϕ on the boundary of this set. Forms of coefſcients of this equation follow from a rule of differentiation. For y = Lx let us designate u(x) = u(L−1 y) = u(y) = u(Lx). In particular, c(y) = c(x). Furthermore, ui (x) =
d
uk (y)Lki ,
uij (x) =
k=1
ukl (y)Lki Llj .
kl
From here d
bi (x)ui (x) =
i=1
Lki bi (x)uk (y) =
ik
d
i
Lki b (y)uk (y).
ik
akl = Hence, bk = i=1 L b , where b (y) = bi (x). Similarly, ij ij a (y) = a (x). In this case ki i
a (0) = kl
i
ki
lj ij
L L a (0) =
ij
=
ij
d
ki
lj
L L
d
ij
mi
L
L
Lki Llj aij , where
mj
m=1
δ km δ lm = δ kl .
m=1
Hence, our investigation of asymptotics for small parameter meanings relates to the density of the ſrst exit point hL Δ (0, p) of the transformed process. We have to ſnd an asymptotics for the original process. Replacing variables in the integral we obtain L −1 −1 L y ϕ(x)hΔ (0, x) dS(x) = ϕ L−1 y hΔ 0, L−1 y JΔ dS(y), ∂Δ
L∂Δ
where is a coefſcient of variation of a surface element ∂Δ size under map x → y = Lx. This coefſcient depends on the orientation of this element in space. It is a Jacobian analogy (see item 38). In particular, for Lx = Rx (multiplying of vector x L (x) = Rd−1 (see formula [5.40]). The exact meaning of this by scalar R) we have JΔ coefſcient for L of a common view is not required here because in our deposition there L (L−1 y))−1 which represents the investigated density is the product hΔ (0, L−1 y) · (JΔ L hLΔ (0, y). In particular, for LΔ = RG (R > 0) (i.e. for Δ = L−1 RG = RL−1 G) we have ϕ(x)hRL−1 G (0, x) dS(x) L JΔ (x)
∂RL−1 G
= ∂RG
=
∂G
ϕ L−1 y hL RG (0, y) dS(y)
d−1 ϕ L−1 Ry hL dS(y). RG (0, Ry)R
[5.33]
190
Continuous Semi-Markov Processes
Let ϕ be twice differentiable in a neighborhood of point 0. A Taylor expansion in a neighborhood of point 0 (up to the second order) for function ϕ(L−1 Ry) as R → 0 d d ki ki lj has a view ϕ(0)+R i=1 k=1 ϕk (0)L y i +(R2 /2) ij kl ϕkl (0)L L y i y j + d−1 in integral [5.33] has o(R2 ). According to formula [5.21] the term hL RG (0, Ry)R expansion bi (0)D(1) KG p1 , y + RXG p1 , y QG 0, p1 R KG (0, y) + i G
i 2
2
− R c(0)KG p1 , y + R YG p1 , y + o R2
dV p1
QG 0, p1 dV p1 .
G
Furthermore, we use properties of orthogonality of kernels XG , YG , and also that of the kernel KG (p0 , p), which implies besides integrals of 1 and πi (p)−πi (p0 ) the following property: KG p0 , p πi (p) − πi p0 πj (p) − πj p0 dS(p) ∂G
= 2δ ij
QG p0 , p dV (p),
G
which also follows from the Green formula (see [GIL 89, MIK 68, SOB 54], etc.). Hence, integral [5.33] is equal to ki li ki 1 ϕ(0) + R2 ϕkl (0) L L + ϕk (0) L bi (0) 2 i i kl k QG 0, p1 dV p1 . − c(0)ϕ(0) + o(1) G
ki
li
Let us note that i L L = akl (0), and also ki ki j j k L bi (0) = L Lij b (0) = δ kj b = b (0) = bk (0). i
i
j
j
We also denote that G = LΔ. Hence the following expansion is true ϕ(x)hRΔ (0, x) dS(x) − ϕ(0) /R2 ∂RΔ
=
[5.34] 1 aij (0)ϕij (0) + bi (0)ϕi (0) − c(0)ϕ(0) + o(1) 2C(LΔ), 2 ij i
where C(LΔ) =
LΔ
QLΔ (0, p1 ) dV (p1 ).
SM Processes of Diffusion Type
191
35. Coefſcients of the equation Using formula [5.34] we will prove the following property of coefſcients: if a solution of the Dirichlet problem uRG (ϕ | 0) is an integral with respect to density hRG of a transition generating function of some semi-Markov process (in this case parameter λ is the parameter of the Laplace transformation), then coefſcients aij and bi do not depend on λ. Let ϕ be a continuous bounded positive function equal to 1 + mπi in some neighborhood of point 0, where m is some value differing from zero (we determine it below). Substituting this function in formula [5.34], we obtain that the right part is equal to 1 + (mbi (0) − c(0) + o(1))C(LΔ). Assuming that the product bi (0)C(LΔ) depends on λ, we get a contradiction: on the one hand, the left part of the equality does not increase on the whole half-line as a non-decreasing function of the Laplace transformation, while on the other hand, by choosing m appropriately it is possible to make the right part of the equality increase on some interval from the half-line. Let ϕ = 1 + mπi πj . In this case the right part is equal to 1 + (maij (0) − c(0) + o(1))C(LΔ). In the same manner we are convinced that the product aij (0)C(LΔ) does not depend on λ, but in this case a stronger assertion is true, because coefſcients aij (0) satisfy the norming condition [5.18]. Hence, as well as the integral C(LΔ), all the coefſcients bi (0) and aij (0) do not depend on parameter λ.
36. Dimensionless characteristic operator Formula [5.34] prompts us to use R2 as a norming factor while treating behavior of the process in a small neighborhood of its initial point. In this case it is convenient to choose some standard decreasing sequence of sets, namely, the sequence of concentric balls (RB) (R → 0). We deſne “dimensionless” characteristic operator as o lim Ao,R λ (ϕ | x) = Aλ (ϕ | x),
R→0
where Ao,R λ (ϕ | x) =
1 fR (λ, ϕ | x) − ϕ(x) . R2
Evidently this operator is a differential second-order operator of elliptic type with coefſcients different from those of equation [5.17] with additional factor C(LB). Let us show that under norming condition [5.18] the integral C(LB) does not depend on
192
Continuous Semi-Markov Processes
L. We have
1=
KLB (0, x) dS(x) = ∂LB
=
d i=1
=
2 πi (Lx) KLB (0, x) dS(x)
∂LB
ik
il
L L xk xl KLB (0, x) dS(x)
∂LB kl
d i=1
=
d i=1
=
i 2 L−1 x KLB (0, Lx)JLB (Lx) dS(x)
d i=1
=
∂B
∂B
−1
L KLB (0, Lx)JLB (Lx) dS(x)
d
ik 2 k 2
L
x
KLB (0, x) dS(x)
∂LB k=1
d d
ik 2
L
2 C(LB) =
i=1 k=1
d
aii (0)2 C(LB) = 2d C(LB).
i=1
Thus, C(LB) = 1/(2d). In the case of aij = δ ij this formula can be proved by immediate integrating. We have proved the following statement. THEOREM 5.13. Let the SM transition generating function satisfy equation [5.17] with an arbitrary non-degenerate positive determined continuous matrix (aij ) of coefſcients of the second derivatives satisfying norming condition [5.18]. Then the dimensionless lambda-characteristic operator, Aoλ , is determined on the corresponding set of functions and 1 1 ij o i a (0)ϕij (0) + b (0)ϕi (0) − c(0)ϕ(0) + o(1) . [5.35] Aλ (ϕ | x) = d 2 ij i
Proof. It immediately follows from the above formula and [5.34].
37. Strict lambda-characteristic operator Let us ſnd an asymptotics of an expectation of the ſrst exit time from a small neighborhood of zero-point: mG (p) ≡ Ep σG ; σG < ∞
(p ∈ G),
SM Processes of Diffusion Type
193
which can be represented as −
∂ ∂ Ep e−λσG ; σG < ∞ λ=0 = − yG (λ, p)|λ=0 , ∂λ ∂λ
where
yG (λ, p) ≡ fG (λ, 1 | p) ≡
fG λ, p1 | p dS p1
∂G
(see item 27). The function fG (λ, 1 | p) is a solution of the Dirichlet problem for equation Lu = 0 on G with a boundary condition u = 1 on ∂G. In this case we consider an arbitrary matrix of coefſcients of second derivatives with limitation introduced in item 34. Moreover, between all coefſcients only c(p) = c(λ, p) (dependent on λ); we assume that −∂c(λ, p)/∂λ is a completely monotone function of λ. It is a necessary condition for function yG (λ, p) to be completely monotone. As noted above (see L (λ, Lp), where the right part relates to the transformed proitem 34), yG (λ, p) = yLG cess with transition generating functions investigated in theorem 5.17. Let us rewrite equation [5.26] for the given boundary condition L L 2 λ, p1 dV p1 . [5.36] QLG p0 , p1 c λ, Rp1 yRLG yRLG λ, p0 = 1 − R LG
In order to ſnd an equation that mL RLG satisſes, it is sufſcient to differentiate both T parts of this equation by λ. Since a = L L does not depend on λ, it is evident that the matrix L can be chosen independently of λ. Differentiation on λ of integrands for λ = 0, like those in theorem 5.7, is justiſed by the ArcellTheorem theorem. In order to apply this theorem it is sufſcient to have monotone convergents of ratios (c(λ, p) − L L c(0, p))/λ and (yRG (λ, p) − yRG (0, p))/λ to their limits. This property follows from L (λ, p). non-negativeness of second derivatives on λ of functions −c(λ, p) and yRG From here L mL RLG Rp0 = MRLG Rp0 [5.37] − R2 QLG p0 , p1 c 0, Rp1 mL RLG p1 dV p1 , LG
where L MRLG
Rp0 = R
2
QLG p0 , p1 γ Rp1 dV p1 ,
LG
γ(p) = c (0, p) ≡
∂ c(λ, p)λ=0 . ∂λ
From equations [5.36] and [5.37] the existence and uniqueness of mL RLG (Rp) follows, for which the following asymptotics is fair Rp = [5.38] QLG p0 , p1 dV p1 R2 γ(0) + o R2 mL 0 RLG LG
194
Continuous Semi-Markov Processes
uniformly on all G ∈ A◦ (K) and p0 ∈ LG. In particular, 2 2 . mL RLG (0) = C(LG) R γ(0) + o R Evidently, mRG (0) = mL RLG (0). Hence under conditions of theorem 5.17 a strict lambda-characteristic operator is determined. According to formula [5.34] on the set of twice differentiable functions ϕ at point 0, it is expressed as 1 aij (0)ϕij (0) + bi (0)ϕi (0) − c(0)ϕ(0) . [5.39] Aλ (ϕ | 0) = γ(0) ij i 38. Coefſcient of transformation of surface element size Let us consider a non-degenerate linear map L : Rd → Rd and region G, for which at point p ∈ ∂G the outside normal to its surface is determined: n = n(p). Let W (p, n) be a tangent hyperplane to region G at point p, and S(p, n) be some (d − 1)dimensional region in this hyperplane. Under map L this “planar” region passes in the other planar region LS(p, n). A ratio of (d − 1)-dimensional Lebesgue measures |LS(p, n)|/|S(p, n)| evidently does not depend on the form and size of the planar region S(p, n), and depends only on the orientation of this region, i.e. on n. From here it follows that this ratio is the main term of asymptotics of ratios of Lebesgue measures of small “surface” neighborhood δS(p) ⊂ ∂G of the point p and its image LδS(p). This main term depends on the normal to the surface at point p. This limit ratio is said to be a coefſcient of transformation of surface element size and is denoted L (p). Let us express this coefſcient in terms of matrix L and normal n. by JG Evidently, the coefſcient of transformation of a planar ſgure size does not vary under parallel translation of this ſgure. It is convenient to assume that this planar ſgure is a parallelogram, having zero-point (origin of coordinates) as some of its angles. In addition, this planar ſgure is perpendicular to the normal n = (n1 , . . . , nd ). Let {x1 , . . . , xd−1 } be a system of linear independent vectors, perpendicular to vector n, and |S| be Lebesgue measures of parallelogram S formed by these vectors in the corresponding hyperplane. 
It is well known that the volume of a d-dimensional parallelepiped “spanned” on d linearly independent vectors is numerically equal to the absolute value of the determinant of the matrix composed of the coordinates of these vectors. On the other hand, the volume of the parallelepiped corresponding to the system of vectors {x1, . . . , xd−1, n} is equal to |S||n| = |S| (because |n| = 1 and n is perpendicular to S). Hence
\[ |S| = \Big|\sum_{k=1}^{d} n^k X_{dk}\Big|, \]
where $X_{dk}$ is the adjunct of the k-th element of the d-th row of the matrix composed of the vectors {x1, . . . , xd−1, n}. Let us consider the vector n1 of unit length with coordinates
\[ n_1^k = X_{dk}\Big/\sqrt{\sum_{k=1}^{d} X_{dk}^2}. \]
SM Processes of Diffusion Type
From the theory of determinants it is well known that $\sum_{k=1}^{d} x_i^k X_{dk} = 0$ $(1 \le i \le d-1)$. Hence the vector n1 is perpendicular to the hyperplane W(p, n) and therefore n1 = n. From here $X_{dk} = |S|\,n^k$ and
\[ |S| = \sum_{k=1}^{d} n_1^k X_{dk} = \sqrt{\sum_{k=1}^{d} X_{dk}^2}. \]
Let us consider the system of vectors {y1, . . . , yd−1}, where yk = Lxk. According to the previous consideration, the Lebesgue measure of the parallelogram LS formed by these vectors in the corresponding hyperplane is equal to
\[ |LS| = \sqrt{\sum_{k=1}^{d} Y_{dk}^2}, \]
where $Y_{dk}$ is the adjunct of the k-th element of the d-th row of the matrix composed of the vectors {y1, . . . , yd−1, y} (y being an arbitrary vector determining the d-th row of the matrix). Assuming $y_l^k = \sum_{i=1}^{d} L_{ki} x_l^i$, we have
\[ Y_{dk} = \begin{vmatrix} y_1^1 & \cdots & y_1^{k-1} & y_1^{k+1} & \cdots & y_1^d \\ \vdots & & \vdots & \vdots & & \vdots \\ y_{d-1}^1 & \cdots & y_{d-1}^{k-1} & y_{d-1}^{k+1} & \cdots & y_{d-1}^d \end{vmatrix}. \]
From here the determinant is equal to
\[ \sum L_{1a_1}\cdots L_{k-1,a_{k-1}}\,L_{k+1,a_{k+1}}\cdots L_{d a_d} \begin{vmatrix} x_1^{a_1} & \cdots & x_1^{a_{k-1}} & x_1^{a_{k+1}} & \cdots & x_1^{a_d} \\ \vdots & & \vdots & \vdots & & \vdots \\ x_{d-1}^{a_1} & \cdots & x_{d-1}^{a_{k-1}} & x_{d-1}^{a_{k+1}} & \cdots & x_{d-1}^{a_d} \end{vmatrix}, \]
where the summands are taken over all collections {a1, . . . , ak−1, ak+1, . . . , ad} of d − 1 elements, each element taking any value from 1 to d. Because a determinant with repeating columns is equal to zero, the previous sum can be restricted to collections with pairwise different elements. In this case it is possible to group the summands with respect to the absent element (one of the d possible). From here
\[ Y_{dk} = \sum_{j=1}^{d}\ \sum L_{1b_1}\cdots L_{k-1,b_{k-1}}\,L_{k+1,b_{k+1}}\cdots L_{d b_d} \begin{vmatrix} x_1^{b_1} & \cdots & x_1^{b_{k-1}} & x_1^{b_{k+1}} & \cdots & x_1^{b_d} \\ \vdots & & \vdots & \vdots & & \vdots \\ x_{d-1}^{b_1} & \cdots & x_{d-1}^{b_{k-1}} & x_{d-1}^{b_{k+1}} & \cdots & x_{d-1}^{b_d} \end{vmatrix}, \]
where the inner sum is taken over all permutations {b1, . . . , bk−1, bk+1, . . . , bd} of the numbers {1, . . . , j − 1, j + 1, . . . , d}. Let z be the index of such a permutation. Then, according to our notation and the rule of column permutation, the determinant in a common term of the previous sum is equal to $(-1)^z \bar X_{dj}$, where $X_{dj} = (-1)^{d+j}\bar X_{dj}$ and $\bar X_{dj}$ is the minor of the term with index dj. Further, according to the rule of calculation of a determinant, we have
\[ \sum_{b_1,\dots,b_{k-1},b_{k+1},\dots,b_d} (-1)^z L_{1b_1}\cdots L_{k-1,b_{k-1}}\,L_{k+1,b_{k+1}}\cdots L_{d b_d} = \bar L_{kj}, \]
where $\bar L_{kj}$ is the minor determinant of the element $L_{kj}$ in the matrix L. As a result we obtain
\[ Y_{dk} = \sum_{j=1}^{d} \bar X_{dj}\bar L_{kj} = (-1)^{d+k}\sum_{j=1}^{d} X_{dj}\bar L_{kj} = (-1)^{d+k}|S|\sum_{j=1}^{d} n^j \bar L_{kj}. \]
At last we have
\[ J^L_G(p) = |LS|/|S| = \sqrt{\sum_{k=1}^{d}\Big(\sum_{j=1}^{d} n^j \bar L_{kj}\Big)^2}. \qquad [5.40] \]
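For d = 3, formula [5.40] can be compared with a direct geometric computation: pick a unit normal n, span a parallelogram by two vectors orthogonal to n, map it by L, and compare the area ratio with the cofactor expression. The sketch below is an illustration, not part of the book; the matrix L and the normals are arbitrary choices, and it uses signed cofactors $(-1)^{k+j}\bar L_{kj}$, the convention under which the area identity holds for an arbitrary non-degenerate L.

```python
import math

def cross(u, v):
    # cross product in R^3
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

def matvec(L, x):
    return tuple(sum(L[i][j] * x[j] for j in range(3)) for i in range(3))

def cofactor(L, k, j):
    # signed cofactor of the element L[k][j]: (-1)^(k+j) times its minor
    rows = [r for r in range(3) if r != k]
    cols = [c for c in range(3) if c != j]
    minor = (L[rows[0]][cols[0]] * L[rows[1]][cols[1]]
             - L[rows[0]][cols[1]] * L[rows[1]][cols[0]])
    return (-1) ** (k + j) * minor

def surface_coefficient(L, n):
    # right-hand side of [5.40], with signed cofactors in place of minors
    return math.sqrt(sum(sum(n[j] * cofactor(L, k, j) for j in range(3)) ** 2
                         for k in range(3)))

def geometric_ratio(L, n, x1, x2):
    # |LS|/|S| for a parallelogram spanned by x1, x2 (both orthogonal to n)
    return norm(cross(matvec(L, x1), matvec(L, x2))) / norm(cross(x1, x2))

L = ((2.0, 1.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 4.0))

n = (1.0, 0.0, 0.0)  # tangent plane spanned by (0,1,0) and (0,0,1)
assert abs(surface_coefficient(L, n)
           - geometric_ratio(L, n, (0, 1, 0), (0, 0, 1))) < 1e-9

r = 1.0 / math.sqrt(2.0)
n2 = (r, r, 0.0)     # tilted normal; (1,-1,0) and (0,0,1) span its tangent plane
assert abs(surface_coefficient(L, n2)
           - geometric_ratio(L, n2, (1.0, -1.0, 0.0), (0.0, 0.0, 1.0))) < 1e-9
```

The signs introduced by the cofactors disappear inside the squares for the matrices above, so the comparison tests exactly the content of [5.40].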
Chapter 6
Time Change and Semi-Markov Processes
A time change is a transformation of the process which preserves the distributions of all points of the first exit (see [VOL 58, VOL 61, DEL 72, ITO 65, BLU 62, LAM 67, ROD 71, SER 71], etc.). Many other properties of the process are also preserved. Investigation of all such properties is equivalent to the study of random sequences of states, or traces (see [KOL 72]). It can be explained as follows. Let us consider a class of trajectories whose elements differ from each other only by a time change. There thus exists a correspondence between a trajectory and such a class of trajectories.1 This title, as well as the title random process, will refer to an admissible family of measures with parameters x ∈ X corresponding to initial points of random trajectories. The join of measures in a family may be justified if for this family there are non-trivial times of regeneration. For a random trace these can only be intrinsic Markov times [HAR 76a, CHA 79, CHA 81, JAN 81]. Among them are all the first exit times from open sets and their iterations. The possibility of constructing a chain of the first exit points for a fixed sequence of open sets, when the rank of this sequence is as small as desired, makes it possible to describe almost all properties of the random trace by properties of such random chains. With the help of chains of the first exit points the following theorem is proved:
1. For this object there is a name “map of ways” [DYN 63, ITO 65], although it seems to be less suitable than “trace”.
For distributions of two random traces to coincide, it is necessary and sufficient that there exist two continuous inverse random time changes transforming these processes into some third process. If the inverse time change is admitted to be discontinuous, then for any two random processes with identical distributions of the random trace there exists a random time change transforming one process into the other. In semi-Markov process theory the time change plays a special role. A time change keeping the semi-Markov property of the process is tightly connected with additive functionals [HAR 80a]. An example of an additive functional is the integral function determining the transformation of measures of a random process, like that of item 5.21, theorem 5.10. Analysis of a general semi-Markov process would be significantly simpler if this process were known to be obtained by a time change from a Markov process. Under rather weak suppositions a stepped SM process possesses this property [YAC 68], although there exist stepped SM processes with properties that are absent in an ordinary Markov process, and these properties are preserved under time change. The possibility of such a Markov representation of a semi-Markov process is discussed in Chapter 8. Studying a random trace of an SM process leads to the object “time run along the trace”, when the time taken to trace out the trajectory is considered as an additive functional on the trace. Such a decomposition of a process into trace- and time-components is natural for a step-process. It is also useful for general SM processes. In this way the Lévy expansion and a representation for a conditional distribution of the time run along the trace were obtained. They can suitably be expressed in terms of so-called curvilinear integrals. Parameters of the Lévy expansion are tightly connected with a lambda-characteristic operator of the process.
In both cases (continuous and discontinuous) the form of the dependence on the parameter λ serves as an indicator of Markovness of the SM process. In particular, when the measure of the Poisson component in the Lévy expansion is not equal to zero (hence the dependence on λ is not linear), the process contains intervals of constancy with non-predictable position in space, i.e. it is a typically non-Markov process. Curvilinear integrals appear to be very useful for constructing a theory of stochastic integrals determined on the trace of a semi-Markov process. This theory is constructed in order to solve some internal problems of semi-Markov processes, but it can also be useful in the general theory of random processes because it gives a new perspective on the theory of stochastic integration.

6.1. Time change and trajectories

The style of representation in this section is analogous to that of Chapter 2. When dealing with time change we do not use an abstract probability space but construct a
measure on a product of two spaces: the space of sample functions (trajectories of the process) and the space of time changes (functions of a special form). Such a construction makes it possible to see in detail the construction of the measure of a transformed process and its connection with the original one. We will use this method of enlargement of the original space later. In this section we consider direct and inverse time changes. The name “direct time change” is connected with the sense of the map ψ which transforms a point of “old” time into a point or an interval of points of “new” time. When ψ(t) is equal to an interval, it means that in the new time scale the transformed function preserves a constant value equal to the value the function had before transformation at the instant t. With the help of the direct time change it is convenient to express the values of those Markov times that do not lose their meaning after the change of time scale, for example, the first exit time from an open set. In this case, if ξ and ξ̃ are the original and the transformed functions under the direct time change ψ, then σΔ(ξ̃) = ψ(σΔ(ξ)). Values of the transformed function can be expressed in terms of the inverse time change: ξ̃ = ξ ◦ ϕ, where ϕ is inverse (in the natural sense) to ψ. The connection of Markov and semi-Markov processes and the presence in trajectories of a semi-Markov process of intervals of constancy (see theorem 3.9) make it necessary to study time changes which are not necessarily continuous and strictly monotone. Admitting such functions as time changes (direct and inverse are here equivalent), it is necessary to choose one of two modifications: continuous from the left (Ψ1) or continuous from the right (Ψ2). Continuity from the right (see [BLU 68], etc.) attracts us not only due to its similarity to D, but also due to an important property: (∀ξ ∈ D) (∀ψ ∈ Ψ2) ξ ◦ ψ ∈ D. However, in this case it is necessary to admit functions ψ ∈ Ψ2 for which ψ(0) > 0.
It is especially inconvenient that (∀t ≥ 0) (θ̂t ψ)(0) = ψ(t) − ψ(t) = 0 (see item 3), i.e. θ̂t ψ belongs to a narrower class than all of Ψ2. With continuity from the left, (∃ξ ∈ D) (∃ψ ∈ Ψ1) ξ ◦ ψ ∉ D, but always ψ(0) = 0. Since we will use only those time changes which do not change the sequence of states (see item 14), and for these time changes the property ξ ◦ ψ ∈ D is fulfilled, in view of the useful property ψ(0) = 0 we choose continuity from the left. The basic content of the section consists of proofs of theorems on the measurability of different sets and maps connected with the time change.
1. Direct and inverse time change Let Ψ1 be a set of all non-decreasing functions continuous from the left: ϕ : R+ → R+ , ϕ(0) = 0, ϕ(t) → ∞ (t → ∞); Φ a set of all continuous functions ϕ ∈ Ψ1 ; and Ψ a set of all strictly increasing functions ϕ ∈ Ψ1 . Let (·)∗ : Ψ1 → Ψ1 be a map of the form: (∀t ∈ R+ ) (∀ϕ ∈ Ψ1 ) ϕ∗ (t) = inf ϕ−1 [t, ∞). Obviously, (∀ϕ ∈ Φ) ϕ∗ ∈ Ψ and (∀ϕ ∈ Ψ) ϕ∗ ∈ Φ, and (∀ϕ ∈ Ψ1 ) (ϕ∗ )∗ = ϕ, and also (∀ϕ1 , ϕ2 ∈ Ψ1 ) ϕ1 ◦ ϕ2 ∈ Ψ1 . We will suppose (∀ψ ∈ Ψ1 ) ψ(∞) = ∞.
The set of functions Ψ1, together with the sigma-algebra of its subsets defined below, determines the set of time changes. The names “direct” and “inverse” time change for ϕ ∈ Ψ1 depend on the role ϕ plays with respect to a function ξ ∈ D: either it transforms a special Markov time by the scheme τ(ξ) → ϕ(τ(ξ)) (direct change), or it transforms values of a function by the scheme ξ(t) → ξ(ϕ(t)) (inverse change). As a rule, the role of a direct change is played by a strictly increasing function ϕ ∈ Ψ, and that of an inverse change by a continuous function ϕ ∈ Φ. Between these classes there is a one-to-one correspondence determined by the map (·)∗.
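The map (·)∗ is the generalized inverse ϕ∗(t) = inf ϕ−1[t, ∞). A minimal numerical sketch (the particular function ϕ below is an arbitrary illustration, not from the book): a continuous ϕ with a flat interval has a strictly increasing generalized inverse with a jump, and applying (·)∗ twice recovers ϕ.

```python
def gen_inverse(f, t, hi=4.0, step=1e-3):
    """Approximate f*(t) = inf{s >= 0 : f(s) >= t} by scanning a grid."""
    s = 0.0
    while s <= hi:
        if f(s) >= t:
            return s
        s += step
    return hi

def phi(t):
    # continuous non-decreasing time change, constant on [1, 2]
    if t <= 1.0:
        return t
    if t <= 2.0:
        return 1.0
    return t - 1.0

# phi* is strictly increasing and jumps over the flat interval:
# phi*(t) = t for t <= 1 and t + 1 for t > 1
assert abs(gen_inverse(phi, 0.5) - 0.5) < 5e-3
assert abs(gen_inverse(phi, 1.5) - 2.5) < 5e-3

# (phi*)* = phi: the double generalized inverse recovers phi
phi_star = lambda u: gen_inverse(phi, u, step=1e-2)
assert abs(gen_inverse(phi_star, 1.5, step=1e-2) - phi(1.5)) < 0.05
```

Here ϕ ∈ Φ (continuous) while ϕ∗ ∈ Ψ (strictly increasing), matching the correspondence between Φ and Ψ stated above.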
2. Properties of time changes

Let
\[ \Gamma_1(\varphi) = \big\{(t,s)\in\mathbb R_+^2 : s = \varphi(t)\big\}, \qquad \Gamma_2(\varphi) = \big\{(t,s)\in\mathbb R_+^2 : t = \varphi^*(s)\big\} \]
be the graphs of a function ϕ ∈ Ψ1 and of the corresponding inverse function. Let Γ0(ϕ) = Γ1(ϕ) ∪ Γ2(ϕ) be the union of the graphs.

PROPOSITION 6.1. The following properties hold:
(1) $(\forall \varphi_1,\varphi_2\in\Phi)\ (\varphi_1\circ\varphi_2^*)^* = \varphi_2\circ\varphi_1^*$;
(2) $(\forall \varphi\in\Psi_1)\ (\exists \varphi_1,\varphi_2\in\Phi)\ \varphi = \varphi_1\circ\varphi_2^*$.

Proof. (1) The following equalities are obvious:
\[ \big(\varphi_1\circ\varphi_2^*\big)^*(t) = \inf\big(\varphi_1\circ\varphi_2^*\big)^{-1}[t,\infty) = \inf\big(\varphi_2^*\big)^{-1}\varphi_1^{-1}[t,\infty) = \inf\big(\varphi_2^*\big)^{-1}\big[\inf\varphi_1^{-1}[t,\infty),\ \infty\big) = \varphi_2\big(\inf\varphi_1^{-1}[t,\infty)\big) = \varphi_2\big(\varphi_1^*(t)\big). \]
(2) Obviously, in the frame (t, s) the line {(t, s) : s + t = c} (c ≥ 0) intersects the set Γ0(ϕ) at exactly one point. Let (tc, sc) ∈ Γ0(ϕ), where sc + tc = c, and put ϕ1(c) = sc, ϕ2(c) = tc. Obviously, ϕ1, ϕ2 ∈ Φ. Let us prove that ϕ = ϕ1 ◦ ϕ2∗. We obtain (∀a ∈ R+) a = ϕ2(a + ϕ(a)), since (a, ϕ(a)) ∈ Γ0(ϕ). In this case, according to the left-continuity of ϕ, ϕ2∗(a) = a + ϕ(a). Furthermore, ϕ1(a + ϕ(a)) = ϕ(a), hence ϕ(a) = ϕ1(ϕ2∗(a)). Similar facts are established by Meyer [MEY 73].
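The construction in the proof of proposition 6.1(2) can be carried out numerically: ϕ2(c) is the abscissa of the intersection of the completed graph Γ0(ϕ) with the line t + s = c, and ϕ1(c) = c − ϕ2(c). The sketch below (an illustration under grid-scan approximations; the particular ϕ with one jump is an arbitrary choice, not from the book) checks the factorization ϕ = ϕ1 ◦ ϕ2∗ at a few points.

```python
def gen_inverse(f, t, hi=6.0, step=5e-2):
    # f*(t) = inf{s : f(s) >= t}, approximated on a grid
    s = 0.0
    while s <= hi:
        if f(s) >= t:
            return s
        s += step
    return hi

def phi(t):
    # left-continuous time change with a unit jump at t = 1
    return t if t <= 1.0 else t + 1.0

def phi2(c, step=5e-3):
    # abscissa t_c of the intersection of the completed graph of phi
    # with the line t + s = c (t + phi(t) is non-decreasing in t)
    best, t = 0.0, 0.0
    while t + phi(t) <= c:
        best = t
        t += step
    return best

def phi1(c):
    # ordinate s_c of the same intersection point
    return c - phi2(c)

# factorization phi = phi1 o phi2* at several continuity points
for a in (0.25, 0.5, 1.5, 2.0):
    c = gen_inverse(phi2, a)          # approximate phi2*(a)
    assert abs(phi1(c) - phi(a)) < 0.1
```

The jump of ϕ becomes a flat piece of ϕ2 (the graph's vertical segment), and both ϕ1 and ϕ2 come out continuous, as the proposition asserts.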
3. Basic maps

Let $\tilde T$ be the set of all maps $\tau : \Psi_1\times D\to\bar{\mathbb R}_+$. We will consider the following maps:
$\hat X_t : \Psi_1\to\mathbb R_+$ $(t\in\mathbb R_+)$, $\hat X_t\psi = \psi(t)$: value of the process at a fixed instant;
$\hat\theta_t : \Psi_1\to\Psi_1$ $(t\in\mathbb R_+)$, $\hat X_s\hat\theta_t\psi = \hat X_{s+t}\psi - \hat X_t\psi$ $(s\in\mathbb R_+)$: shift of the process;
$\hat\alpha_t : \Psi_1\to\Psi_1$ $(t\in\mathbb R_+)$, $\hat X_s\hat\alpha_t\psi = \hat X_{s\wedge t}\psi + (0\vee(s-t))$: stopping at a fixed instant with linear continuation;
$\tilde X_t : \Psi_1\times D\to\mathbb R_+\times X$ $(t\in\mathbb R_+)$, $\tilde X_t(\psi,\xi) = (\hat X_t\psi,\ X_t\xi)$: value of the two-dimensional process at a fixed instant;
$\tilde\theta_t : \Psi_1\times D\to\Psi_1\times D$ $(t\in\mathbb R_+)$, $\tilde\theta_t(\psi,\xi) = (\hat\theta_t\psi,\ \theta_t\xi)$: shift of the two-dimensional process by a fixed time;
$\tilde\alpha_t : \Psi_1\times D\to\Psi_1\times D$ $(t\in\mathbb R_+)$, $\tilde\alpha_t(\psi,\xi) = (\hat\alpha_t\psi,\ \alpha_t\xi)$: stopping of the two-dimensional process at a fixed instant;
$\tilde X_\tau : \{(\psi,\xi) : \tau(\psi,\xi)<\infty\}\to\mathbb R_+\times X$ $(\tau\in\tilde T)$, $\tilde X_\tau(\psi,\xi) = \tilde X_{\tau(\psi,\xi)}(\psi,\xi)$: value of the two-dimensional process at a non-fixed instant;
$\tilde\theta_\tau : \{(\psi,\xi) : \tau(\psi,\xi)<\infty\}\to\Psi_1\times D$ $(\tau\in\tilde T)$, $\tilde\theta_\tau(\psi,\xi) = \tilde\theta_{\tau(\psi,\xi)}(\psi,\xi)$: shift of the two-dimensional process by a non-fixed time;
$\tilde\alpha_\tau : \Psi_1\times D\to\Psi_1\times D$ $(\tau\in\tilde T)$, $\tilde\alpha_\tau(\psi,\xi) = \tilde\alpha_{\tau(\psi,\xi)}(\psi,\xi)$: stopping of the two-dimensional process at a non-fixed time;
$p : \Psi_1\times D\to\Psi_1$, $p(\psi,\xi) = \psi$: projection of the pair (time change, trajectory) onto its first component;
$q : \Psi_1\times D\to D$, $q(\psi,\xi) = \xi$: projection of the pair onto its second component.

4. Relations of maps

PROPOSITION 6.2. The basic maps are connected by the following relations:
(1) $\hat X_s\circ\hat\theta_t = \hat X_{s+t}-\hat X_t$;
(2) $\hat X_s\circ\hat\alpha_t = \hat X_{s\wedge t}+(0\vee(s-t))$;
(3) $\hat\alpha_s\circ\hat\alpha_t = \hat\alpha_{s\wedge t}$;
(4) $\hat\theta_s\circ\hat\theta_t = \hat\theta_{s+t}$;
(5) $\hat\alpha_s\circ\hat\theta_t = \hat\theta_t\circ\hat\alpha_{s+t}$;
(6) $\tilde X_s\circ\tilde\theta_t = (\hat X_s\circ\hat\theta_t,\ X_s\circ\theta_t)$;
(7) $\tilde X_s\circ\tilde\alpha_t = (\hat X_s\circ\hat\alpha_t,\ X_s\circ\alpha_t)$;
(8) $\tilde\alpha_s\circ\tilde\alpha_t = \tilde\alpha_{s\wedge t}$;
(9) $\tilde\theta_s\circ\tilde\theta_t = \tilde\theta_{s+t}$;
(10) $\tilde\alpha_s\circ\tilde\theta_t = \tilde\theta_t\circ\tilde\alpha_{s+t}$;
(11) $p\circ\tilde\alpha_t = \hat\alpha_t\circ p$;
(12) $p\circ\tilde\theta_t = \hat\theta_t\circ p$;
(13) $q\circ\tilde\alpha_t = \alpha_t\circ q$;
(14) $q\circ\tilde\theta_t = \theta_t\circ q$.

Proof. It follows immediately from the definitions (see theorem 2.1).

5. Time change in stopped and shifted functions

PROPOSITION 6.3. Let ϕ ∈ Ψ1, ξ ∈ D. Then $(\forall(t,s)\in\Gamma_1(\varphi)\cap\Gamma_2(\varphi))$ $\hat\alpha_t\varphi = (\hat\alpha_s\varphi^*)^*$, $\hat\theta_t\varphi = (\hat\theta_s\varphi^*)^*$ and, besides,
(1) $(\forall t\in\mathbb R_+)$ $\alpha_t(\xi\circ\varphi) = (\alpha_{\varphi(t)}\xi)\circ\varphi$;
(2) $(\forall(t,s)\in\Gamma_1(\varphi)\cap\Gamma_2(\varphi))$ $\theta_t(\xi\circ\varphi) = \theta_s\xi\circ\hat\theta_t\varphi$.

Proof. The statement about $\hat\alpha_t\varphi$ and $\hat\theta_t\varphi$ is obvious. Further, $(\forall t_1\in\mathbb R_+)$ the following equalities are evident:
\[ X_{t_1}\alpha_t(\xi\circ\varphi) = X_{t_1\wedge t}(\xi\circ\varphi) = \xi\big(\varphi(t_1\wedge t)\big) = \xi\big(\varphi(t_1)\wedge\varphi(t)\big) = X_{\varphi(t_1)}\alpha_{\varphi(t)}\xi = X_{t_1}\big((\alpha_{\varphi(t)}\xi)\circ\varphi\big), \]
\[ X_{t_1}\theta_t(\xi\circ\varphi) = X_{t_1+t}(\xi\circ\varphi) = \xi\big(\varphi(t_1+t)\big), \]
\[ X_{t_1}\big(\theta_s\xi\circ\hat\theta_t\varphi\big) = X_{(\hat\theta_t\varphi)(t_1)}\theta_s\xi = \xi\big(s+(\hat\theta_t\varphi)(t_1)\big) = \xi\big(s+\varphi(t+t_1)-\varphi(t)\big). \]
Since s = ϕ(t), we have $X_{t_1}\theta_t(\xi\circ\varphi) = X_{t_1}(\theta_s\xi\circ\hat\theta_t\varphi)$.

6. Streams of sigma-algebras

Let $\hat{\mathcal G} = \sigma(\hat X_t,\ t\in\mathbb R_+)$, $\mathcal B = \hat{\mathcal G}\otimes\mathcal F$, $\hat{\mathcal G}_t = \sigma(\hat X_s,\ s\le t)$ $(t\in\mathbb R_+)$, $\mathcal B_t = \hat{\mathcal G}_t\otimes\mathcal F_t$. It is obvious that $\Phi\in\hat{\mathcal G}$ and $\Psi\in\hat{\mathcal G}$, and that the following properties of the stream of sigma-algebras (filtration) $(\mathcal B_t)_0^\infty$ take place:
(1) $t_1<t_2 \Rightarrow \mathcal B_{t_1}\subset\mathcal B_{t_2}$;
(2) $\mathcal B = \mathcal B_\infty = \sigma(\mathcal B_t,\ t\in\mathbb R_+)$;
(3) $\tilde X_s\in\mathcal B_t/\mathcal B(\mathbb R_+\times X)$ $(s\le t)$.
7. About representation of sigma-algebras

PROPOSITION 6.4. The following properties hold:
(1) $(\forall t\in\mathbb R_+)$ $\hat\alpha_t\in\hat{\mathcal G}/\hat{\mathcal G}$, $\hat{\mathcal G}_t = \hat\alpha_t^{-1}\hat{\mathcal G}$;
(2) $(\forall t\in\mathbb R_+)$ $\tilde\alpha_t\in\mathcal B/\mathcal B$, $\mathcal B_t = \tilde\alpha_t^{-1}\mathcal B$.

Proof. Let $S\in\mathcal B(\mathbb R_+)$, $B_1\in\hat{\mathcal G}$, $B_2\in\mathcal F$.
(1) We have
\[ \hat\alpha_t^{-1}\hat X_s^{-1}S = \big\{\psi : \hat X_s\hat\alpha_t\psi\in S\big\} = \big\{\psi : \hat X_{s\wedge t}\psi+(0\vee(s-t))\in S\big\}\in\hat{\mathcal G}_t\subset\hat{\mathcal G}. \]
Hence $\hat\alpha_t$ is measurable. Besides,
\[ \hat\alpha_t^{-1}\hat{\mathcal G} = \hat\alpha_t^{-1}\sigma\big(\hat X_s,\ s\in\mathbb R_+\big) = \sigma\big(\hat X_s\circ\hat\alpha_t,\ s\in\mathbb R_+\big) = \sigma\big(\hat X_{s\wedge t}+(0\vee(s-t)),\ s\in\mathbb R_+\big) = \sigma\big(\hat X_s,\ s\le t\big) = \hat{\mathcal G}_t. \]
(2) We have
\[ \tilde\alpha_t^{-1}\big(B_1\times B_2\big) = \hat\alpha_t^{-1}B_1\times\alpha_t^{-1}B_2\in\hat{\mathcal G}_t\otimes\mathcal F_t\subset\mathcal B. \]
Hence $\tilde\alpha_t$ is measurable. Furthermore,
\[ \tilde\alpha_t^{-1}\mathcal B = \tilde\alpha_t^{-1}\sigma\big(\hat X_{s_1}\circ p,\ X_{s_2}\circ q;\ s_1,s_2\in\mathbb R_+\big) = \sigma\big(\hat X_{s_1}\circ p\circ\tilde\alpha_t,\ X_{s_2}\circ q\circ\tilde\alpha_t;\ s_1,s_2\in\mathbb R_+\big) = \sigma\big(\hat X_{s_1}\circ\hat\alpha_t\circ p,\ X_{s_2}\circ\alpha_t\circ q;\ s_1,s_2\in\mathbb R_+\big) = \hat{\mathcal G}_t\otimes\mathcal F_t. \]

8. Set of Markov times

Let $\widetilde{MT}$ be the set of Markov times with respect to the stream of sigma-algebras $(\mathcal B_t)_0^\infty$:
\[ \tau\in\widetilde{MT} \iff \tau\in\tilde T,\ \big(\forall t\in\mathbb R_+\big)\ \{\tau\le t\}\in\mathcal B_t. \]
Let $\mathcal B_\tau$ $(\tau\in\widetilde{MT})$ be the sigma-algebra of events preceding the moment τ:
\[ \mathcal B_\tau = \big\{B\in\mathcal B : \big(\forall t\in\mathbb R_+\big)\ B\cap\{\tau\le t\}\in\mathcal B_t\big\}. \]
9. Properties of the sigma-algebra

THEOREM 6.1. The following properties hold:
(1) $\tau\in MT \Rightarrow \tau\circ q\in\widetilde{MT}$;
(2) $\mathcal B_{\tau\circ q} = \tilde\alpha^{-1}_{\tau\circ q}\mathcal B$ $(\tau\in MT)$.

Proof. Let τ ∈ MT. Then $(\forall t\in\mathbb R_+)$ $\{\tau\circ q\le t\} = q^{-1}\{\tau\le t\} = \Psi_1\times\{\tau\le t\}\in\mathcal B_t$, and the first statement is proved. Let
\[ \mathcal B^1_{\tau\circ q} = \big\{B\in\mathcal B : B = \tilde\alpha^{-1}_{\tau\circ q}B\big\}, \qquad \mathcal B^2_{\tau\circ q} = \big\{B\in\mathcal B : \big(\forall t\in\mathbb R_+\big)\ B\cap\{\tau\circ q = t\}\in\mathcal B_t\big\}. \]
Then $B\in\mathcal B_{\tau\circ q}\iff(\forall t\in\mathbb R_+)\ B\cap\{\tau\circ q\le t\}\in\mathcal B_t$, and $\{\tau\circ q = t\} = q^{-1}\{\tau = t\} = \Psi_1\times\{\tau = t\}\in\mathcal B_t$. From here $(\forall B\in\mathcal B_{\tau\circ q})$ $(\forall t\in\mathbb R_+)$ $B\cap\{\tau\circ q = t\}\in\mathcal B_t$, so $B\in\mathcal B^2_{\tau\circ q}$ and $\mathcal B_{\tau\circ q}\subset\mathcal B^2_{\tau\circ q}$. Further, by proposition 6.4(2),
\[ B\in\mathcal B^2_{\tau\circ q} \Longrightarrow B = \bigcup_{t\in\mathbb R_+}\big(B\cap\{\tau\circ q = t\}\big) = \bigcup_{t\in\mathbb R_+}\tilde\alpha^{-1}_{\tau\circ q}\big(B\cap\{\tau\circ q = t\}\big) = \tilde\alpha^{-1}_{\tau\circ q}B. \]
From here $\mathcal B^2_{\tau\circ q}\subset\mathcal B^1_{\tau\circ q}$. If $B\in\mathcal B^1_{\tau\circ q}$, then $B\in\mathcal B$, $B = \tilde\alpha^{-1}_{\tau\circ q}B$, and therefore $B\in\tilde\alpha^{-1}_{\tau\circ q}\mathcal B$; hence $\mathcal B^1_{\tau\circ q}\subset\tilde\alpha^{-1}_{\tau\circ q}\mathcal B$. We have
\[ \tilde\alpha^{-1}_{\tau\circ q}\mathcal B = \sigma\big(\hat X_t\circ p\circ\tilde\alpha_{\tau\circ q},\ X_s\circ q\circ\tilde\alpha_{\tau\circ q};\ s,t\in\mathbb R_+\big), \qquad X_s\circ q\circ\tilde\alpha_{\tau\circ q} = X_s\circ\alpha_\tau\circ q. \]
From here
\[ \big(X_s\circ\alpha_\tau\circ q\big)^{-1}B = \Psi_1\times\alpha_\tau^{-1}X_s^{-1}B\in\hat{\mathcal G}_0\otimes\big(\mathcal F_\tau\cap\{\tau\ge0\}\big). \]
Let $\tau = \lim_{n\to\infty}\tau_n$, where $\tau_n(\xi) = (k-1)/n$ if $(k-1)/n\le\tau(\xi)<k/n$ $(k,n\in\mathbb N)$. In this case $\tau_n\le\tau$, and for any ψ ∈ Ψ1 and ξ ∈ D (by the left-continuity of ψ) $\hat X_{\tau_n(\xi)}\psi\to\hat X_{\tau(\xi)}\psi$, i.e.
\[ \hat X_t\circ p\circ\tilde\alpha_{\tau\circ q}(\psi,\xi) = \hat X_t\hat\alpha_{\tau(\xi)}\psi = \hat X_{t\wedge\tau(\xi)}\psi+\big(0\vee(t-\tau(\xi))\big) = \lim_{n\to\infty}\Big[\hat X_{t\wedge\tau_n(\xi)}\psi+\big(0\vee(t-\tau(\xi))\big)\Big]. \]
We have $0\vee(t-\tau(\xi))\in\mathcal F_\tau/\mathcal B(\mathbb R_+)$, $\mathcal F_\tau = \mathcal F_\tau\cap\{\tau\ge0\}$, and
\[ \big\{(\psi,\xi) : \hat X_{t\wedge\tau_n(\xi)}\psi\in S\big\} = \bigcup_{k=0}^{\infty}\Big\{(\psi,\xi) : \hat X_{t\wedge k/n}\psi\in S,\ \frac kn\le\tau(\xi)<\frac{k+1}n\Big\} = \bigcup_{k=0}^{\infty}\hat X_{t\wedge k/n}^{-1}S\times\Big\{\frac kn\le\tau<\frac{k+1}n\Big\}\in\sigma\Big(\hat{\mathcal G}_{k/n}\otimes\Big(\mathcal F_\tau\cap\Big\{\tau\ge\frac kn\Big\}\Big),\ k\in\mathbb N\Big)\subset\mathcal B^0_{\tau\circ q}, \]
where $\mathcal B^0_{\tau\circ q} = \sigma\big(\hat{\mathcal G}_t\otimes(\mathcal F_\tau\cap\{\tau\ge t\}),\ t\in\mathbb R_+\big)$. Hence $(\hat X_t\circ p\circ\tilde\alpha_{\tau\circ q})^{-1}S\in\mathcal B^0_{\tau\circ q}$. Consequently it follows that $\tilde\alpha^{-1}_{\tau\circ q}\mathcal B\subset\mathcal B^0_{\tau\circ q}$. Let $B_1\in\hat{\mathcal G}_t$, $B_2\in\mathcal F_\tau\cap\{\tau\ge t\}$. Then
\[ \big(B_1\times B_2\big)\cap\{\tau\circ q\le s\} = B_1\times\big(B_2\cap\{\tau\le s\}\big). \]
If $s<t$, then $B_2\cap\{\tau\le s\} = \varnothing$. If $s\ge t$, then $B_2\cap\{\tau\le s\}\in\mathcal F_s$ (since $B_2\in\mathcal F_\tau$) and $B_1\in\hat{\mathcal G}_s$, so $B_1\times(B_2\cap\{\tau\le s\})\in\mathcal B_s$. From here $\mathcal B^0_{\tau\circ q}\subset\mathcal B_{\tau\circ q}$.

10. Measurability of shift operators

PROPOSITION 6.5. The following properties hold:
(1) $\hat\theta_t\in\hat{\mathcal G}/\hat{\mathcal G}$ $(t\in\mathbb R_+)$;
(2) $\tilde\theta_t\in\mathcal B/\mathcal B$;
(3) $\tilde\theta_{\tau\circ q}\in\mathcal B\cap\{\tau\circ q<\infty\}/\mathcal B$ $(\tau\in MT)$.

Proof. Assertions (1) and (2) are obvious.
(3) Let $B_1\in\hat{\mathcal G}$, $B_2\in\mathcal F$, $S\in\mathcal B(\mathbb R_+)$. We have
\[ \tilde\theta^{-1}_{\tau\circ q}\big(B_1\times B_2\big) = \big\{(\psi,\xi) : \hat\theta_{\tau(\xi)}\psi\in B_1,\ \theta_\tau\xi\in B_2,\ \tau<\infty\big\} \]
and $\{\theta_\tau\xi\in B_2\}\in\mathcal F\cap\{\tau<\infty\}$. Let $\tau_n(\xi) = (k-1)/n$ on $\{(k-1)/n\le\tau<k/n\}$ $(k,n\in\mathbb N)$. If $B_1 = \hat X_s^{-1}S$, then
\[ \big\{(\psi,\xi) : \hat\theta_{\tau_n(\xi)}\psi\in\hat X_s^{-1}S\big\} = \big\{(\psi,\xi) : \hat X_s\hat\theta_{\tau_n(\xi)}\psi\in S\big\} = \big\{(\psi,\xi) : \hat X_{s+\tau_n(\xi)}\psi-\hat X_{\tau_n(\xi)}\psi\in S\big\} = \bigcup_{k=0}^{\infty}\Big\{(\psi,\xi) : \hat X_{s+k/n}\psi-\hat X_{k/n}\psi\in S,\ \frac kn\le\tau(\xi)<\frac{k+1}n\Big\}\in\mathcal B, \]
and since $(\forall(\psi,\xi)\in\Psi_1\times D)$ $(\forall s\in\mathbb R_+)$ $\hat X_{s+\tau_n(\xi)}\psi\to\hat X_{s+\tau(\xi)}\psi$, then
\[ \big(\forall B_1\in\hat{\mathcal G}\big)\quad \big\{\hat\theta_{\tau(\xi)}\psi\in B_1\big\}\in\mathcal B. \]
6.2. Intrinsic time and traces

The name “intrinsic time” is connected with properties of a trajectory that are not time-dependent. The basic property of these Markov times is borrowed from the time of the first exit σΔ (Δ ∈ A). The class of intrinsic times is closed with respect to the operation $\dot+$, combination, and passage to a limit. The equivalence relation is considered under which two trajectories belong to one class, or have the same sequence of states (trace), if they can be reduced to the same trajectory by a time change [KOL 72, CHA 79]. A sigma-algebra F∗ of subsets invariant with respect to time change is defined. It can be represented in terms of intrinsic times and traces.

11. Intrinsic time

Let
\[ T_c = \big\{\tau\in T : \big(\forall\varphi\in\Phi\big)\ \big(\forall\xi\in D\big)\ \tau(\xi\circ\varphi) = \varphi^*\big(\tau(\xi)\big)\big\}. \]
If $\tau(\xi\circ\varphi)\in\varphi^{-1}\{\tau(\xi)\}$, then, obviously, $X_\tau(\xi\circ\varphi) = X_\tau\xi$ on $\{\tau<\infty\}$. From here $(\forall\tau\in T_c)$ $X_\tau(\xi\circ\varphi) = X_\tau\xi$ $(\tau<\infty)$. Let us designate IMT = MT ∩ Tc and call it the class of “intrinsic Markov times”. This class is rather narrower than a similar class from [CHA 81].
Let F∗ be the set of all B ∈ F such that $(\forall\varphi\in\Phi)$ $(\forall\xi\in D)$ $\xi\in B\iff\xi\circ\varphi\in B$. Obviously, F∗ is a sigma-algebra.

12. Properties of the IMT class

THEOREM 6.2. The following properties hold:
(1) $\tau_1,\tau_2\in IMT \Rightarrow \tau_1\dot+\tau_2\in IMT$;
(2) if $\tau_n\in IMT$ $(\forall n\in\mathbb N)$ and $\tau_n = \tau$ on $B_n\in\mathcal F_{\tau_n}\cap F^*$, where $\bigcup_{n=1}^{\infty}B_n = D$, then $\tau\in IMT$;
(3) if $(\forall n\in\mathbb N)$ $\tau_n\in IMT$ and $\tau_n\uparrow\tau$, then $\tau\in IMT$;
(4) $T(\sigma_\Delta,\ \Delta\in A)\subset IMT$ (see item 2.14).

Proof. (1) Let ξ ∈ D, ϕ ∈ Φ, $\tau_1,\tau_2\in T_c$ and $\tau_1(\xi)<\infty$. Then
\[ \big(\tau_1\dot+\tau_2\big)(\xi\circ\varphi) = \tau_1(\xi\circ\varphi)+\tau_2\theta_{\tau_1}(\xi\circ\varphi). \]
According to proposition 6.3(2),
\[ \theta_{\tau_1(\xi\circ\varphi)}(\xi\circ\varphi) = \theta_{\varphi^*(\tau_1(\xi))}(\xi\circ\varphi) = \theta_{\tau_1(\xi)}\xi\circ\big(\hat\theta_{\tau_1(\xi)}\varphi^*\big)^*. \]
From here
\[ \tau_2\theta_{\tau_1}(\xi\circ\varphi) = \hat\theta_{\tau_1(\xi)}\varphi^*\big(\tau_2\theta_{\tau_1(\xi)}\xi\big) = \varphi^*\big(\tau_1(\xi)+\tau_2\theta_{\tau_1(\xi)}\xi\big)-\varphi^*\big(\tau_1(\xi)\big) = \varphi^*\big((\tau_1\dot+\tau_2)(\xi)\big)-\varphi^*\big(\tau_1(\xi)\big), \]
i.e. $(\tau_1\dot+\tau_2)(\xi\circ\varphi) = \varphi^*((\tau_1\dot+\tau_2)(\xi))$. If $\tau_1(\xi) = \infty$, then $\tau_1(\xi\circ\varphi) = \infty$. From here also $(\tau_1\dot+\tau_2)(\xi\circ\varphi) = \varphi^*((\tau_1\dot+\tau_2)(\xi))$ if we set $\varphi^*(\infty) = \infty$. Since $\tau_1,\tau_2\in MT\Rightarrow\tau_1\dot+\tau_2\in MT$ (see theorems 2.5(1), 2.6(4), 2.8(1), 2.9(1)), the first statement is proved.

(2) We have
\[ \{\tau\le t\} = \bigcup_{n=1}^{\infty}\big(\{\tau\le t\}\cap B_n\big) = \bigcup_{n=1}^{\infty}\big(\{\tau_n\le t\}\cap B_n\big)\in\mathcal F_t. \]
From here τ ∈ MT. Furthermore,
\[ \tau(\xi\circ\varphi) = \sum_{n=1}^{\infty}\tau(\xi\circ\varphi)J_n(\xi) = \sum_{n=1}^{\infty}\tau(\xi\circ\varphi)J_n(\xi\circ\varphi) = \sum_{n=1}^{\infty}\tau_n(\xi\circ\varphi)J_n(\xi\circ\varphi) = \sum_{n=1}^{\infty}\varphi^*\big(\tau_n(\xi)\big)J_n(\xi) = \sum_{n=1}^{\infty}\varphi^*\big(\tau(\xi)\big)J_n(\xi) = \varphi^*\big(\tau(\xi)\big), \]
where $\bar B_i = D\setminus B_i$, $J_n(\xi) = I(B_n\bar B_{n-1}\cdots\bar B_1\mid\xi)$, and $I(B\mid\xi)\equiv I_B(\xi)$ is the indicator of the set B. From here τ ∈ Tc.

(3) If $\tau_n\in IMT$ and $\tau_n\uparrow\tau$, then τ ∈ MT (see [BLU 68, p. 32]) and, in addition, $(\forall\xi\in D)$ $(\forall\varphi\in\Phi)$
\[ \tau(\xi\circ\varphi) = \lim_{n\to\infty}\tau_n(\xi\circ\varphi) = \lim_{n\to\infty}\varphi^*\big(\tau_n(\xi)\big) = \varphi^*\big(\tau(\xi)\big) \]
by the continuity of ϕ∗ from the left. Hence τ ∈ Tc.

(4) Let Δ ∈ A, ξ ∈ D, ϕ ∈ Φ. We have
\[ \sigma_\Delta(\xi\circ\varphi) = \inf\varphi^{-1}\xi^{-1}(X\setminus\Delta) = \inf\varphi^{-1}\big[\inf\xi^{-1}(X\setminus\Delta),\ \infty\big) = \varphi^*\big(\sigma_\Delta(\xi)\big). \]
Since σΔ ∈ MT and $T(\sigma_\Delta,\ \Delta\in A)$ is a semigroup with respect to $\dot+$ generated by the class $\{\sigma_\Delta,\ \Delta\in A\}$, according to assertion (1) we obtain the proof of the fourth assertion.

13. About non-constancy of the function on the left of intrinsic time

PROPOSITION 6.6. If τ ∈ IMT, then $(\forall\varepsilon>0)$ $(\forall\xi\in D)$ ξ is not constant on $[\tau(\xi)-\varepsilon,\ \tau(\xi)]$.

Proof. Let τ ∈ IMT, and suppose that $(\exists\varepsilon<1)$ $\tau(\xi) = t_0>0$ and ξ is constant on $[t_0-\varepsilon,\ t_0]$. Since IMT ⊂ ST ⊂ Ta (see theorem 2.8) and $\tau = \tau\circ\alpha_\tau$ (see theorem 2.2), then $\tau(\xi) = \tau(\alpha_{t_0}\xi)$. Let
\[ \varphi(t) = \begin{cases} t, & t\in\big[0,\ t_0-\varepsilon\big],\\ t_0-\varepsilon, & t\in\big[t_0-\varepsilon,\ t_0-\varepsilon+1\big],\\ t-1, & t\in\big[t_0-\varepsilon+1,\ \infty\big). \end{cases} \]
Then $\alpha_{t_0}\xi = \alpha_{t_0}(\xi\circ\varphi)$ (up to $t_0-\varepsilon$ we have $\varphi(t) = t$, while for $t\in[t_0-\varepsilon,\ t_0]$ $(\xi\circ\varphi)(t) = \xi(t_0-\varepsilon) = \xi(t_0) = \xi(t)$). From here $\tau(\xi\circ\varphi) = \tau(\alpha_{t_0}(\xi\circ\varphi)) = \tau(\alpha_{t_0}\xi) = t_0$. On the other hand, $\varphi^*(\tau(\xi)) = \varphi^*(t_0) = t_0+1>\tau(\xi\circ\varphi)$. It is a contradiction.

14. Sequence of states (trace)

Let M be the set of equivalence classes of the set D with respect to the equivalence relation
\[ \xi_1\sim\xi_2 \iff \big(\exists\varphi_1,\varphi_2\in\Phi\big)\ \xi_1\circ\varphi_1 = \xi_2\circ\varphi_2. \]
The map ℓ : D → M, assigning to each function its equivalence class, is said to be the sequence of states (or trace) of this function. Thus ℓ(ξ) is the trace of the trajectory ξ ∈ D. In [KOL 72, p. 109] this object received the name “curve in metric space”. This name seems to be unsuitable for such a trace as a sequence of isolated points (in the case of step-functions). In the case of continuous trajectories we will use this name in the aspect of “curvilinear integrals”.

15. Representation of the sigma-algebra

PROPOSITION 6.7. The following property holds (see item 11):
\[ B\in F^* \iff \big(\exists\tilde B\subset M\big)\ B = \ell^{-1}\tilde B,\ B\in\mathcal F. \]

Proof. Let B ∈ F∗. We have $B\subset\ell^{-1}\ell B$ for $\ell B\subset M$. Furthermore,
\[ \xi\in\ell^{-1}\ell B \Rightarrow \ell\xi\in\ell B \Rightarrow \big(\exists\xi'\in B\big)\ \ell\xi = \ell\xi' \Rightarrow \big(\exists\varphi,\varphi'\in\Phi\big)\big(\exists\xi'\in B\big)\ \xi\circ\varphi = \xi'\circ\varphi'\in B \Rightarrow \xi\in B. \]
Hence $B = \ell^{-1}\ell B$. Let $\tilde B\subset M$. We have
\[ \xi\in\ell^{-1}\tilde B \iff \ell\xi\in\tilde B \iff \big(\forall\varphi\in\Phi\big)\ \ell(\xi\circ\varphi) = \ell\xi\in\tilde B \iff \big(\forall\varphi\in\Phi\big)\ \xi\circ\varphi\in\ell^{-1}\tilde B. \]
If $\ell^{-1}\tilde B\in\mathcal F$, then $\ell^{-1}\tilde B\in F^*$.

16. Class IMT′ and its relation with IMT and MT

Let us consider the class IMT′ of measurable functions τ ∈ T such that
\[ \big(\forall t_1,t_2\in\mathbb R_+\big)\ \big(\forall\xi_1,\xi_2\in D\big)\quad \big(\ell\alpha_{t_1}\xi_1 = \ell\alpha_{t_2}\xi_2,\ \tau\xi_1\le t_1\big) \Longrightarrow \tau\xi_2\le t_2. \]
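The defining identity of intrinsic times, τ(ξ ◦ ϕ) = ϕ∗(τ(ξ)), is easy to observe numerically for the first exit time σΔ (theorem 6.2(4)). The sketch below is an illustration only: the sample trajectory ξ, the inverse time change ϕ, and the grid approximations are arbitrary choices, not taken from the book.

```python
import math

STEP = 1e-4

def first_exit(path, radius, start, horizon=10.0):
    """sigma_Delta: first grid time at which the path leaves the ball of
    the given radius around the starting value."""
    t = 0.0
    while t <= horizon:
        if abs(path(t) - start) >= radius:
            return t
        t += STEP
    return math.inf

def gen_inverse(f, t, horizon=10.0):
    # f*(t) = inf{s : f(s) >= t}, approximated on a grid
    s = 0.0
    while s <= horizon:
        if f(s) >= t:
            return s
        s += STEP
    return math.inf

xi = lambda t: math.sin(t)          # sample trajectory
phi = lambda t: t * t / 2.0         # continuous inverse time change
xi_phi = lambda t: xi(phi(t))       # transformed trajectory xi o phi

delta = 0.5
lhs = first_exit(xi_phi, delta, xi_phi(0.0))              # sigma(xi o phi)
rhs = gen_inverse(phi, first_exit(xi, delta, xi(0.0)))    # phi*(sigma(xi))
assert abs(lhs - rhs) < 1e-2
```

Both sides evaluate to roughly the same instant (the exact exit of |sin| from the 0.5-ball happens at π/6, and ϕ∗ maps it to √(π/3)), which is exactly the intrinsic-time property.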
PROPOSITION 6.8. The property IMT′ ⊂ MT holds.

Proof. According to theorem 2.9, it is enough to prove that IMT′ ⊂ Ta. Let τ ∈ IMT′ and τ(ξ) ≤ t. Then $\alpha_t\xi = \alpha_t\alpha_t\xi$ and, hence, $\tau(\alpha_t\xi)\le t$. Let $\tau(\alpha_t\xi)\le t$. Then $\tau(\xi)\le t$, i.e.
\[ \{\tau\le t\} = \{\tau\circ\alpha_t\le t\}. \]

17. Necessary condition of the class IMT

Let us consider the class of times (see item 2.2)
\[ T_b = \big\{\tau\in T : \big(\forall t\in\mathbb R_+\big)\ \{\tau\le t\} = \alpha_t^{-1}\{\tau<\infty\}\big\}. \]

PROPOSITION 6.9. The property IMT ⊂ Tb holds.

Proof. According to item 11, IMT ⊂ MT. From here $(\forall\tau\in IMT)$ $(\forall t\in\mathbb R_+)$ $\tau(\xi)\le t\Rightarrow\tau(\alpha_t\xi)<\infty$. The function $\alpha_t\xi$ is constant on $[t,\infty)$. By proposition 6.6, $\tau(\alpha_t\xi)$ cannot be greater than t. Therefore, from the condition $\tau(\alpha_t\xi)<\infty$ it follows that $\tau(\alpha_t\xi)\le t$. Hence $\{\tau\circ\alpha_t\le t\} = \{\tau\le t\} = \{\tau\circ\alpha_t<\infty\}$.

18. The main property of the class IMT

THEOREM 6.3. The property IMT = IMT′ holds.

Proof. We divide the proof into two parts.

(1) Let τ ∈ IMT. Then, by proposition 6.9, we have $\tau(\alpha_t\xi)<\infty\Rightarrow\tau(\xi)\le t$. Hence, if $\tau(\alpha_{t_1}\xi_1) = \tau(\alpha_{t_2}\xi_2)<\infty$, then $\tau(\xi_1) = \tau(\xi_2)\le t_1\wedge t_2$. Let $\ell\alpha_{t_1}\xi_1 = \ell\alpha_{t_2}\xi_2$ and $\tau(\xi_1)\le t_1<\infty$. Then
\[ \big(\exists\varphi_1,\varphi_2\in\Phi\big)\quad \alpha_{t_1}\xi_1\circ\varphi_1 = \alpha_{t_2}\xi_2\circ\varphi_2 \]
and, by proposition 6.3(1), $\alpha_{\varphi_1^*(t_1)}(\xi_1\circ\varphi_1) = \alpha_{\varphi_2^*(t_2)}(\xi_2\circ\varphi_2)$. Hence
\[ \tau\big(\alpha_{t_1}\xi_1\circ\varphi_1\big) = \varphi_1^*\big(\tau(\alpha_{t_1}\xi_1)\big) = \tau\big(\alpha_{\varphi_1^*(t_1)}(\xi_1\circ\varphi_1)\big), \]
and since $\tau(\alpha_{t_1}\xi_1) = \tau(\xi_1)<\infty$,
\[ \tau\big(\alpha_{\varphi_1^*(t_1)}(\xi_1\circ\varphi_1)\big) = \tau\big(\alpha_{\varphi_2^*(t_2)}(\xi_2\circ\varphi_2)\big) = \varphi_1^*\big(\tau(\alpha_{t_1}\xi_1)\big) = \varphi_1^*\big(\tau(\xi_1)\big)<\infty. \]
From here $\tau(\xi_1\circ\varphi_1) = \tau(\xi_2\circ\varphi_2)\le\varphi_1^*(t_1)\wedge\varphi_2^*(t_2)$, i.e.
\[ \tau\big(\xi_2\circ\varphi_2\big)\le\varphi_2^*\big(t_2\big) \Longrightarrow \varphi_2^*\big(\tau(\xi_2)\big)\le\varphi_2^*\big(t_2\big) \Longrightarrow \tau\big(\xi_2\big)\le t_2. \]
Hence τ ∈ IMT′.

(2) Let τ ∈ IMT′. Then by proposition 6.3(1) $(\forall\xi\in D)$ $(\forall\varphi\in\Phi)$ $(\forall t\in\mathbb R_+)$
\[ \ell\alpha_t(\xi\circ\varphi) = \ell\big((\alpha_{\varphi(t)}\xi)\circ\varphi\big) = \ell\alpha_{\varphi(t)}\xi, \qquad \tau(\xi\circ\varphi)\le t \iff \tau(\xi)\le\varphi(t). \]
We have
\[ \tau(\xi)\le\varphi(t) \Longrightarrow \varphi^*\big(\tau(\xi)\big)\le\varphi^*\big(\varphi(t)\big) = \inf\varphi^{-1}\big[\varphi(t),\ \infty\big)\le t. \]
On the other hand, by the continuity of ϕ,
\[ \tau(\xi)>\varphi(t) \Longrightarrow \varphi^*\big(\tau(\xi)\big)>\sup\varphi^{-1}\big\{\varphi(t)\big\}\ge t. \]
From here $(\tau(\xi)\le\varphi(t))\iff(\varphi^*(\tau(\xi))\le t)$, i.e. $\tau(\xi\circ\varphi)\le t\iff\varphi^*(\tau(\xi))\le t$. Hence $\tau(\xi\circ\varphi) = \varphi^*(\tau(\xi))$ and τ ∈ IMT.

6.3. Canonical time change for trajectories with identical traces

According to the definition, two functions have one trace if with the help of two time changes they can be transformed into the same function. The property of coincidence of the traces of two functions can be checked by a comparison of values of these functions on a countable set of times of the first exit and their iterations. In the general case, the coincidence of all these values is not yet enough for the equality of the sequences of states. It is necessary to check the simultaneous presence or absence of intervals of constancy before times of discontinuity and at infinity. These intervals cannot be discovered from a finite set of first exit times. Therefore, it is desirable to have effectively
checked conditions of their absence (see theorems 3.11 and 3.12). The construction of canonical time changes for two functions with identical sequences of states serves as a preparation for the further constructions connected with random processes and time changes.

19. Trace values in a point

Let τ ∈ T, δ = (Δ1, Δ2, . . .) (Δi ⊂ X), and $\bar X = X\cup\{\infty\}$, where ∞ ∉ X. We will use the notation $\gamma_\tau : D\to\bar X$ for the map
\[ \gamma_\tau\xi = \begin{cases}\infty, & \tau(\xi) = \infty,\\ X_\tau\xi, & \tau(\xi)<\infty,\end{cases} \]
and $\ell_\delta : D\to\bar X^\infty$, where $\ell_\delta\xi = \big(\gamma_0\xi,\ \gamma_{\sigma_\delta^1}\xi,\ \gamma_{\sigma_\delta^2}\xi,\ \dots\big)$, $\gamma_0\xi = X_0\xi$.
20. Preservation of an order

We use DS(Arn), the class of deducing sequences composed of elements of an open covering Arn of rank rn.

PROPOSITION 6.10. Let $\xi_1,\xi_2\in D$ and $(\exists(\delta_n)_1^\infty,\ \delta_n\in DS(A_{r_n}),\ r_n\to0)$ $(\forall n\in\mathbb N)$ $\ell_{\delta_n}\xi_1 = \ell_{\delta_n}\xi_2$. Then for any sequence $\delta = (\Delta_1,\Delta_2,\dots)$ ($\Delta_i\in\bar A$, i.e. each $\Delta_i$ a closed set) $\ell_\delta\xi_1 = \ell_\delta\xi_2$, and for any $\tau_1,\tau_2\in T(\sigma_\Delta,\ \Delta\in\bar A)$:
(1) $\tau_1\xi_1<\tau_2\xi_1 \Rightarrow \tau_1\xi_2\le\tau_2\xi_2$;
(2) $\tau_1\xi_2<\tau_2\xi_2 \Rightarrow \tau_1\xi_1\le\tau_2\xi_1$.

Proof. (1) We have $(\forall n\in\mathbb N)$ $(\forall\tau\in T(\sigma_\Delta,\ \Delta\in\bar A))$ either $\tau L_{\delta_n}\xi_1 = \tau L_{\delta_n}\xi_2 = \infty$, or $(\exists k\in\mathbb N)$ $\tau L_{\delta_n}\xi_1 = \sigma^{k-1}_{\delta_n}\xi_1<\infty$, $\tau L_{\delta_n}\xi_2 = \sigma^{k-1}_{\delta_n}\xi_2<\infty$ and $X_{\tau L_{\delta_n}}\xi_1 = X_{\tau L_{\delta_n}}\xi_2$. From the definition of $\tau\in T(\sigma_\Delta,\ \Delta\in\bar A)$ it follows that $(\forall\xi\in D)$ $\tau L_{\delta_n}\xi\downarrow\tau(\xi)$ $(n\to\infty)$ and $X_{\tau L_{\delta_n}}\xi\to X_\tau\xi$ for $\tau\xi<\infty$. From here it follows that either both times $\tau\xi_1,\tau\xi_2$ are infinite, or both are finite. In the latter case $X_\tau\xi_1 = X_\tau\xi_2$. This means $\gamma_\tau\xi_1 = \gamma_\tau\xi_2$ $(\forall\tau\in T(\sigma_\Delta,\ \Delta\in\bar A))$ and, for any $\delta = (\Delta_1,\Delta_2,\dots)$ ($\Delta_i\in\bar A$), $\ell_\delta\xi_1 = \ell_\delta\xi_2$.

(2) If $\tau_1,\tau_2\in T(\sigma_\Delta,\ \Delta\in\bar A)$, then $\tau_1L_{\delta_n}\xi_1\downarrow\tau_1\xi_1$, $\tau_2L_{\delta_n}\xi_1\downarrow\tau_2\xi_1$; $\tau_1L_{\delta_n}\xi_2\downarrow\tau_1\xi_2$, $\tau_2L_{\delta_n}\xi_2\downarrow\tau_2\xi_2$ $(n\to\infty)$, but to $\tau_1L_{\delta_n}\xi_1$ and $\tau_1L_{\delta_n}\xi_2$ there corresponds the same number $k_n$ in the sequence $\ell_{\delta_n}\xi_1 = \ell_{\delta_n}\xi_2$. Also, to $\tau_2L_{\delta_n}\xi_1$ and $\tau_2L_{\delta_n}\xi_2$ there corresponds a common number $s_n$. Let $\tau_1\xi_1<\tau_2\xi_1$. Then $k_n<s_n$ for sufficiently large n and, hence, $\tau_1\xi_2\le\tau_2\xi_2$. The same holds for $\tau_1\xi_2<\tau_2\xi_2$.
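For one-dimensional continuous paths the chain of first-exit points underlying proposition 6.10 is easy to compute: starting from ξ(0), record the successive instants at which the path leaves the ball of radius r around the last recorded value, together with the values there. Two paths differing by a continuous time change then produce the same chain of values. The sketch below is an illustration under grid approximations; the path f and the time change are arbitrary choices, not taken from the book.

```python
import math

def exit_chain(path, r, horizon, step=1e-3):
    """Values of the path at the successive first-exit times from balls of
    radius r centred at the previously recorded value."""
    values = [path(0.0)]
    t = 0.0
    while t <= horizon:
        if abs(path(t) - values[-1]) >= r:
            values.append(path(t))
        t += step
    return values

f = lambda t: math.sin(3.0 * t)        # base trajectory, observed on [0, 2]
phi = lambda t: t * t / (1.0 + t)      # continuous increasing time change
g = lambda t: f(phi(t))                # same trace, run at a different speed

# phi(1 + sqrt(3)) = 2, so both horizons cover the same piece of the trace
chain_f = exit_chain(f, 0.4, horizon=2.0)
chain_g = exit_chain(g, 0.4, horizon=1.0 + math.sqrt(3.0))
assert len(chain_f) == len(chain_g)
assert all(abs(a - b) < 0.05 for a, b in zip(chain_f, chain_g))
```

The chains agree value by value up to discretization error, even though the exit instants themselves are completely different for the two paths — exactly the information that the trace retains and the time scale discards.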
21. Interval of constancy on the left of t

Let us say that ξ has no intervals of constancy on the left of t − 0 (0 < t ≤ ∞) if (∀t1 < t) ξ is not constant on [t1, t). Let Dτ be the set of all functions ξ ∈ D not having an interval of constancy on the left of τ(ξ) − 0 (τ ∈ MT+). Let D∞ be the set of all functions not having an infinite interval of constancy (see item 3.46). Studying intervals of constancy before random instants makes sense for processes having a non-trivial trace, i.e. a trace not reducing to a sequence of isolated points. Evidently, a step-process does not belong to Dτ for any jump time τ (there is an interval of constancy just before any jump time). According to proposition 6.6, for any τ ∈ IMT such that τ(ξ) is not a point of discontinuity of the function ξ, we have ξ ∈ Dτ.

22. Measurability

Let us denote
\[ T_n = T\big(\sigma_\Delta,\ \Delta\in A_{r_n}\big), \qquad T_0 = \bigcup_{n=1}^{\infty}T_n. \]
∞
ξ : τ (ξ) − 1/n < τ1 (ξ) < τ (ξ) < ∞ ,
n=1 τ1 ∈T0
∞
where T0 = n=1 Tn , Tn = T (σΔ , Δ ∈ Arn ), rn ↓ 0. From here {τ < ∞} ∩ Dτ ∈ Fτ (see, e.g., [BLU 68, p. 33]). Furthermore, {τ = ∞} ∩ Dτ = {τ = ∞} ∩ D∞ ∈ Fτ ∩ {τ = ∞} ⊂ Fτ . From here Dτ ∈ Fτ . Let ϕ ∈ Φ. Because τ, τ1 ∈ Tc , we have τ (ξ) −
1 < τ1 (ξ) < τ (ξ) ⇐⇒ ϕ∗ τ (ξ) − 1/n < ϕ∗ τ1 (ξ) n < ϕ∗ τ (ξ) ⇐⇒ τ (ξ ◦ ϕ) − εn < τ1 (ξ ◦ ϕ) < τ (ξ ◦ ϕ),
where εn = ϕ∗ (τ (ξ)) − ϕ∗ (τ (ξ) − 1/n), εn → 0 (n → ∞). From here ξ ∈ Dτ ∩ {τ < ∞} ⇐⇒ ξ ◦ ϕ ∈ Dτ ∩ {τ < ∞}.
214
Continuous Semi-Markov Processes
Similarly, it can be proved that ξ ∈ Dτ ∩ {τ = ∞} ⇐⇒ ξ ◦ ϕ ∈ Dτ ∩ {τ = ∞}. Hence Dτ ∈ F ∗ . 23. Criterion of equivalence of two trajectories Let (∀n ∈ N) δn = (Δn1 , Δn2 , . . .) ∈ DS(Arn ) and δn ξ1 = δn ξ2 (rn ↓ 0). k = τnk . According to proposition 6.6, We denote [δn ] = ([Δn1 ], [Δn2 ], . . .) and σ[δ n] (∀k, n ∈ N) γτnk ξ1 = γτnk ξ2 , and the set of pairs (τnk ξ1 , τnk ξ2 ) is linear ordered by the relation (a, b) < (c, d) ⇔ (a < c, b ≤ d) ∨ (a ≤ c, b < d). We identify equal pairs. For any m ∈ N we consider pairs (τnk ξ1 , τnk ξ2 ) (n ≤ m, k ∈ N). We unite points (τnk ξ1 , τnk ξ2 ) of quadrant R+ × R+ with broken line Γm with vertice in these points and beginning at the point (0, 0). Let it pass through these points according to the given order from small to large. If there is only a ſnite number of such points in quadrant R+ × R+ , then the last inſnite link is a ray drawing from the last point with m inclination coefſcient 1. Consider the point (tm c , sc ) of intersection of the broken m m tm m line Γm with line t + s = c. Let (tc , sc ) and let ( c ,s c ) be the greatest vertex not m m m exceeding (tc , sc ), and smallest (if it exists) not less than (tm c , sc ) correspondingly. m m Since sums tc +sc do not decrease and are bounded from above by value c, and sums tm m c +s c do not increase and are bounded from below by value c, there exist limits m c = limm→∞ (tc + sm c = limm→∞ ( tm m c +s c ). Then according to ordering any c ) and mn m sequence tc tends to the same limit tc , hence tc ↑ tc and correspondingly sm c ↑ sc , m m m tm ↓ t , s ↓ s . Therefore, the limits t = lim t , s = lim s c c c m→∞ c m→∞ c c c c exist. Let Γ = Γ(ξ1 , ξ2 ) be the curve determined by family of points {(tc , sc )} (c ≥ 0). We will call it the time run comparison curve for functions ξ1 and ξ2 . According to construction c1 < c2 ⇒ (tc1 , sc1 ) < (tc2 , sc2 ), and since tc + sc = c, then tc and sc are continuous ((tc2 − tc1 ) + (sc2 − sc1 ) = c2 − c1 ). 
If I(D∞ | ξ1 ) = I(D∞ | ξ2 ), then either simultaneously sup{τnk (ξ1 ) : τnk (ξ1 ) < ∞} = sup{τnk (ξ2 ) : τnk (ξ2 ) < ∞} = ∞, or both these limits are less than inſnity. In this case if a m point of intersection (tm c , sc ) for all m belongs to a last inſnite link, then existence of limit (tc , sc ) follows from the existence of the limit (tc , sc ). Let us designate v1 (c | ξ1 , ξ2 ) = tc , v2 (c | ξ1 , ξ2 ) = sc . Under our suppositions tc , sc → ∞ as c → ∞ and hence v1 (· | ξ1 , ξ2 ), v2 (· | ξ1 , ξ2 ) ∈ Φ. We refer to these functions as canonical inverse time changes corresponding to two functions with identical traces. THEOREM 6.4. Let ξ1 , ξ2 ∈ D. The following criterion for equivalence of functions ξ1 = ξ2 is fair if and only if the following conditions are fulſlled: (1) (∀τ ∈ T0 ) γτ ξ1 = γτ ξ2 ; (2) (∀τ ∈ T0 ) I(Dτ | ξ1 ) = I(Dτ | ξ2 ); (3) I(D∞ | ξ1 ) = I(D∞ | ξ2 ).
Time Change
215
Proof. Sufſciency. Let us consider the time run comparison curve Γ(ξ1 , ξ2 ). We will prove that ξ1 ◦ v1 = ξ2 ◦ v2 . Let tc , sc be points of continuity of ξ1 and ξ2 correspondingly. Then (∃k ∈ N) snc ∈ τnk ξ2 , τnk+1 ξ2 , tnc ∈ τnk ξ1 , τnk+1 ξ1 , ρ ξ1 tnc , ξ1 τnk ξ1 < rn , ρ ξ2 snc , ξ2 τnk ξ2 ≤ rn and since ξ1 (τnk (ξ1 )) = ξ2 (τnk (ξ2 )), then ρ(ξ1 (tnc ), ξ2 (snc )) ≤ 2rn and ρ(ξ1 (tc ), ξ2 (sc )) = 0, i.e. in this case (ξ1 ◦ v1 )(c) = (ξ2 ◦ v2 )(c). Let tc be a point of discontinuity of ξ1 . Then (∃n, k ∈ N) tc = τnk−1 ξ1 . However, because of equality (∀n ∈ N) δn ξ1 = δn ξ2 we ſnd that τnk−1 ξ2 is also a point of discontinuity of function ξ2 . Consider a set of points of the curve Γ with the ſrst coordinate tc . If this set consists of one point, then ξ2 sc = ξ2 τnk−1 ξ2 = ξ1 τnk−1 ξ1 = ξ1 tc . If this set consists of an interval with edges (tc , s1 ), (tc , s2 ), (s1 < s2 ), then from k−1 k−1 i equality I(Dτn | ξ1 ) = I(Dτn | ξ2 ) it follows that s1 = τnk−1 ξ2 . Besides, if τm ξ1 > k−1 i k−1 τm ξ1 , then τm ξ2 > τm ξ2 . Hence on the interval (s1 , s2 ) the function ξ2 does i ξ2 (∀i, m ∈ N), i.e. it is constant and, consequently, it is not contain points τm k−1 equal to ξ2 (τn ξ2 ). Assuming ξ2 (s2 ) = ξ2 (τnk−1 ξ2 ), we obtain (∃m, l ∈ N) s2 = l−1 l−1 ξ2 , and hence tc = τm ξ1 . The latter contradicts the supposition, from which it τm follows that l−1 l−1 ξ2 = ξ1 τm ξ1 . ξ1 tc = ξ1 τnk−1 ξ1 = ξ2 τnk−1 ξ2 = ξ2 τm Hence ξ2 (sc ) = ξ1 (tc ) and in this case (ξ1 ◦ v1 )(c)(ξ2 ◦ v2 )(c). Necessity. Let ξ1 = ξ2 and ξ1 ◦ ϕ1 = ξ2 ◦ ϕ2 (ϕ1 , ϕ2 ∈ Φ). We have (∀τ ∈ MT) τ (ξ ◦ ϕ) = ∞ ⇐⇒ τ (ξ) = ∞,
τ (ξ) < ∞ =⇒ Xτ (ξ ◦ ϕ) = Xτ (ξ)
(see item 11). From here γτ ξ1 = γτ ξ2, and since Dτ ∈ F∗, I(Dτ | ξ1) = I(Dτ | ξ2). Because T0 ⊂ IMT, necessity is proved. A similar result can be found in [CHA 81].

24. Notes to criterion of equivalence
(1) According to theorem 6.2, if τ ∈ IMT and ξ ∉ Dτ, then either τ(ξ) is a point of discontinuity of ξ, or τ(ξ) = ∞ and ξ has an infinite interval of constancy. Therefore,
condition (∀τ ∈ T0 ) I(Dτ | ξ1 ) = I(Dτ | ξ2 ) is meaningful only for points of discontinuity of functions ξ1 , ξ2 , and also for point ∞. The statement of theorem 6.4 is also fair in a case when T0 is replaced by T0 , the similar set of the ſrst exit time from closed sets; however, condition (∀τ ∈ T0 ) I(Dτ | ξ1 ) = I(Dτ | ξ2 ) should be supplemented by the requirement: either τ (ξ1 ), τ (ξ2 ) are points of discontinuity of the appropriate functions or τ (ξ1 ) = τ (ξ2 ) = ∞. (2) Similar statements are fair, when Markov times σrkn (rn ↓ 0) or appropriate time of the ſrst exit from the closed balls are considered. (3) For two functions with identical sequences of states (ξ1 = ξ2 ) we have constructed a time run comparison curve Γ ⊂ R+ ×R+ . This curve contains segments parallel to axes of ordinates with edges (t, s1 ), (t, s2 ) if and only if ξ2 is constant on an interval [s1 , s2 ) and point t does not belong to an interval of constancy of function ξ1 . If ξ2 has no intervals of constancy on the whole set R+ , then, obviously, ξ2 ◦Γ(·) = ξ1 , where Γ(·) ∈ Φ is a single-valued function, deſned by curve Γ. 25. First exit time from a closed set PROPOSITION 6.12. Let ξ1 = ξ2 and Γ = Γ(ξ1 , ξ2 ). Then (∀τ ∈ T (σΔ , Δ ∈ A)) either (τ (ξ1 ), τ (ξ2 )) ∈ Γ, or τ (ξ1 )= τ (ξ2 ) = ∞. Proof. The curve Γ is constructed for a sequence (δn )∞ 1 (δn ∈ DS(Arn )), and for any k k k then either m, k ∈ N (τm ξ1 , τm ξ2 ) ∈ Γ, where τnk = σ[δ . If τ ∈ T (σΔ , Δ ∈ A), n] τ L[δn ] ξ1 = τ L[δn ] ξ2 = ∞ or (τ L[δn ] ξ1 , τ L[δn ] ξ2 ) ∈ Γ. Since (∀ξ ∈ D) τ L[δn ] ξ ↓ τ (ξ) and Γ is a continuous curve, then (τ (ξ1 ), τ (ξ2 )) ∈ Γ. 26. Measurability of canonical direct time changes Let d = {(ξ1 , ξ2 ) ∈ D × D : ξ1 = ξ2 }. From sigma-compactness of space X, it follows that d ∈ F ⊗ F. We call d a diagonal of traces. PROPOSITION 6.13. The following property is fair: v1∗ , v2∗ ∈ (F ⊗ F) ∩ d/G, where v1 , v2 are canonical inverse time changes determined in item 24 (map (·)∗ (see item 1)). Proof. 
We have v1∗ , v2∗ ∈ Ψ according to construction. Map (·)∗ : Ψ1 → Ψ1 is measurable with respect to G/G. Therefore it is enough to prove that vi ∈ (F ⊗ F) ∩ d/G,
but v1(c | ξ1, ξ2) = tc = limm→∞ tcm and v2(c | ξ1, ξ2) = sc = limm→∞ scm (in terms of theorem 6.4). The measurability of tcm, scm follows from that of the vertices of the broken line Γm as functions of (ξ1, ξ2).
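The map (·)∗, which turns an inverse time change into a direct one via ψ∗(t) = inf ψ−1{t}, can be approximated on a grid. The sketch below is illustrative only and assumes ψ is nondecreasing and continuous, so that inf ψ−1{t} = inf{s : ψ(s) ≥ t} whenever t is attained:

```python
# Discretized sketch of the map psi -> psi*, psi*(t) = inf psi^{-1}{t};
# for nondecreasing continuous psi this equals the first grid point s
# with psi(s) >= t.
def psi_star(psi, grid):
    def star(t):
        for s in grid:                # grid is sorted increasingly
            if psi(s) >= t:
                return s
        return float("inf")
    return star
```

On the grid one can also observe the inequality ψ∗(ψ(t)) ≤ t used later in this section.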
27. Equivalence under shift and stopping PROPOSITION 6.14. Let τ ∈ IMT and ξ1 = ξ2 . Then (1) τ (ξ1 ) < ∞ ⇒ θτ ξ1 = θτ ξ2 ; (2) ατ ξ1 = ατ ξ2 . Proof. (1) Let τ ∈ IMT and ξ1 = ξ2 . Then τ (ξ1 ) < ∞ ⇔ τ (ξ2 ) < ∞. Let τ (ξ1 ) < ∞, ξ1 ◦ ϕ1 = ξ2 ◦ ϕ2 (ϕ1 , ϕ2 ∈ Φ), τ1 ∈ IMT. Then, under theorem 6.2(1), ˙ τ1 ξ1 < ∞. ˙ τ1 ξ1 < ∞ ⇐⇒ ϕ∗1 τ + τ1 θτ ξ1 < ∞ ⇐⇒ τ + ˙ τ1 )(ξ1 )) = ϕ∗2 ((τ + ˙ τ1 )(ξ2 )), then Since ϕ∗1 ((τ + τ1 θτ ξ1 < ∞ ⇐⇒ τ1 θτ ξ2 < ∞. Besides, from condition τ1 θτ ξ1 , τ1 θτ ξ2 < ∞ it follows that Xτ1 θτ ξ1 = Xτ +˙ τ1 ξ1 = Xτ +˙ τ1 ξ1 ◦ ϕ1 and because of Xτ +˙ τ1 (ξ1 ◦ ϕ1 ) = Xτ +˙ τ1 (ξ2 ◦ ϕ2 ), Xτ1 θτ ξ1 = Xτ1 θτ ξ2 . From here for any τ1 ∈ T0 (see item 22) γτ1 θτ ξ1 = γτ1 θτ ξ2 . Furthermore, for τ1 < ∞ we have 0 < τ1 θτ ξ1 − τ2 θτ ξ1 < ε θτ ξ1 ∈ Dτ1 ⇐⇒ (∀ε > 0) ∃τ2 ∈ T0 ˙ τ1 ξ1 − τ + ˙ τ2 ξ1 < ε 0< τ+ ⇐⇒ (∀ε > 0) ∃τ2 ∈ T0 ˙ τ1 ξ1 − ϕ∗1 τ + ˙ τ2 ξ1 < ε, 0 < ϕ∗1 τ + ⇐⇒ (∀ε > 0) ∃τ2 ∈ T0 ˙ τ1 )ξ1 ) = ϕ∗2 ((τ + ˙ τ1 )ξ2 ), and because of ϕ∗1 ((τ + θτ ξ1 ∈ Dτ1 ⇐⇒ θτ ξ2 ∈ Dτ1 . Hence, (∀τ1 ∈ T0 ) (τ1 < ∞) I Dτ1 | θτ ξ1 = I Dτ1 | θτ ξ2 .
Similarly, it can be proved that I(D∞ | θτ ξ1 ) = I(D∞ | θτ ξ2 ). From here under theorem 6.4, θτ ξ1 = θτ ξ2 . (2) We have ατ (ξ1 ◦ ϕ1 ) = ατ (ξ2 ◦ ϕ2 ) and according to proposition 6.3(1), ατ (ξ1 ◦ϕ1 ) ξ1 ◦ ϕ1 = αϕ∗1 (τ (ξ1 )) ξ1 ◦ ϕ1 = ατ ξ1 ◦ ϕ1 . From here (ατ ξ1 ) ◦ ϕ1 = (ατ ξ2 ) ◦ ϕ2 and consequently ατ ξ1 = ατ ξ2 . 28. Preservation of order under time change Then PROPOSITION 6.15. Let ξ ∈ D, ϕ ∈ Φ and τ1 ∈ T (σΔ , Δ ∈ A). −1 −1 (1) τ1 (ξ ◦ ϕ) ∈ {inf ϕ {τ1 (ξ)}, sup ϕ {τ1 (ξ)}}; (2) if τ ∈ IMT, then the following two equivalences are fair: τ1 (ξ) < τ (ξ) ⇐⇒ τ1 (ξ ◦ ϕ) < τ (ξ ◦ ϕ), τ1 (ξ) > τ (ξ) ⇐⇒ τ1 (ξ ◦ ϕ) > τ (ξ ◦ ϕ). Proof. (1) We have σØ (ξ ◦ ϕ) ≡ 0 ∈ ϕ−1 {σØ ξ} = ϕ−1 {0}. Let for some τ ∈ T (σΔ , τ (ξ ◦ ϕ) ∈ ϕ−1 {τ (ξ)}. Then for Δ ∈ A we have Δ ∈ A)
(τ +̇ σΔ)(ξ) = inf [τ(ξ), ∞) ∩ ξ−1(X \ Δ),

(τ +̇ σΔ)(ξ ◦ ϕ) = inf [τ(ξ ◦ ϕ), ∞) ∩ ϕ−1 ξ−1(X \ Δ)
≥ inf ϕ−1([τ(ξ), ∞) ∩ ξ−1(X \ Δ))
≥ inf ϕ−1{inf [τ(ξ), ∞) ∩ ξ−1(X \ Δ)}
= inf ϕ−1{(τ +̇ σΔ)(ξ)}.

On the other hand, (∀tn ∈ (τ(ξ), ∞) ∩ ξ−1(X \ Δ)) it is fair that (τ +̇ σΔ)(ξ ◦ ϕ) ≤ inf ϕ−1{tn}, and if tn ↓ inf [τ(ξ), ∞) ∩ ξ−1(X \ Δ), then

inf ϕ−1{tn} ↓ sup ϕ−1{inf [τ(ξ), ∞) ∩ ξ−1(X \ Δ)} = sup ϕ−1{(τ +̇ σΔ)(ξ)},

and hence

(τ +̇ σΔ)(ξ ◦ ϕ) ≤ sup ϕ−1{(τ +̇ σΔ)(ξ)}.

From here (τ +̇ σΔ)(ξ ◦ ϕ) ∈ ϕ−1{(τ +̇ σΔ)(ξ)}. Hence, (∀τ ∈ T(σΔ, Δ ∈ A))
τ (ξ ◦ ϕ) ∈ ϕ−1 τ (ξ) . On interval (inf ϕ−1 {τ (ξ)}, sup ϕ−1 {τ (ξ)}) the function ξ ◦ ϕ is evidently constant point τ (ξ) cannot belong an interval and equal to ξ(τ (ξ)), but (∀τ ∈ T (σΔ , Δ ∈ A)) of constancy of the function ξ. It implies the ſrst assertion.
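The first assertion can be illustrated on a step path (a toy sketch; the helper first_exit and the chosen ϕ are ours, not the book's): under the time change ϕ(s) = s/2, every first exit time of ξ ◦ ϕ is the ϕ-preimage of the corresponding first exit time of ξ.

```python
# Illustration of proposition 6.15(1) for a right-continuous step path:
# after the time change phi(s) = s / 2, first exit times double.
def first_exit(jump_times, values, delta):
    """First jump time at which the path leaves the set delta."""
    for t, v in zip(jump_times, values):
        if v not in delta:
            return t
    return float("inf")

phi_inv = lambda t: 2.0 * t                     # phi(s) = s / 2
xi_times, xi_values = [0.0, 1.0, 3.0], [0, 1, 2]
changed_times = [phi_inv(t) for t in xi_times]  # jump times of xi o phi

sigma = first_exit(xi_times, xi_values, {0, 1})
sigma_changed = first_exit(changed_times, xi_values, {0, 1})
```

Here sigma_changed equals phi_inv(sigma), the unique point of ϕ−1{σΔ(ξ)} for this strictly increasing ϕ.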
(2) Let τ1 ∈ T(σΔ, Δ ∈ A), τ ∈ IMT. Then τ(ξ ◦ ϕ) = ϕ∗(τ(ξ)). However, for τ1(ξ) < τ(ξ)

sup ϕ−1{τ1(ξ)} < inf ϕ−1{τ(ξ)} = ϕ∗(τ(ξ)),

i.e. τ1(ξ ◦ ϕ) < τ(ξ ◦ ϕ). If τ1(ξ) > τ(ξ), then
τ1 (ξ ◦ ϕ) = inf ϕ−1 τ1 (ξ) > inf ϕ−1 τ (ξ) = τ (ξ ◦ ϕ), and if τ1 (ξ) = τ (ξ), then τ (ξ ◦ ϕ) ≤ τ1 (ξ ◦ ϕ). 29. Intrinsic Markov time and curve Γ PROPOSITION 6.16. If ξ1 = ξ2 , τ ∈ IT and τ (ξ1 ) < ∞, then τ ξ1 , τ ξ2 ∈ Γ (deſnition of Γ; see item 23). Proof. Let ξ1 ◦ ϕ1 = ξ2 ◦ ϕ2 (ϕ1 , ϕ2 ∈ Φ) and τ ∈ IMT. In this case there can be two possibilities. If τ (ξ1 ) is a point of discontinuity of ξ1 , then τ (ξ1 ◦ ϕ1 ) = τ (ξ2 ◦ ϕ2 ) is a point of discontinuity of ξ1 ◦ ϕ1 = ξ2 ◦ ϕ2 , i.e. τ (ξ2 ) is a point of discontinuity of ξ2 . From here (∃τ1 , τ2 ∈ T (σΔ , Δ ∈ A)) τ ξ1 = τ1 ξ1 , τ ξ2 = τ2 ξ2 . Let (τ1 (ξ1 ), τ1 (ξ2 )) = (τ2 (ξ1 ), τ2 (ξ2 )). Both these points by virtue of theorem 6.12 belong to Γ. Consider the case (τ1 (ξ1 ), τ1 (ξ2 )) < (τ2 (ξ1 ), τ2 (ξ2 )). Because τ2 (ξ1 ) is a point of discontinuity of function ξ1 , by virtue of a condition I(Dτ2 | ξ1 ) = I(Dτ2 | ξ2 ) would be simultaneously τ1 (ξ1 ) < τ2 (ξ1 ) and τ1 (ξ2 ) < τ2 (ξ2 ). Then, by proposition 6.15(2), τ1 ξ2 < τ2 ξ2 = τ ξ2 ⇐⇒ τ1 ξ2 ◦ ϕ2 < τ ξ2 ◦ ϕ2 ⇐⇒ τ1 ξ1 ◦ ϕ1 < τ ξ1 ◦ ϕ1 ⇐⇒ τ1 ξ1 < τ ξ1 . We obtain a contradiction. Let τ (ξ1 ), τ (ξ2 ) be points of continuity of ξ1 and ξ2 . Then on the left of points τ (ξ1 ), τ (ξ2 ) for functions ξ1 , ξ2 accordingly there are no intervals of a constancy (theorem 6.4) and τ (ξ1 ), τ (ξ2 ) are limits from the left for the set of points τn (ξ1 ), τn (ξ2 )
Let τn (ξ1 ) ↑ τ (ξ1 ). Then τn (ξ2 ) ↑ a ≤ accordingly, where τn ∈ T (σΔ , Δ ∈ A). τ (ξ2 ). If a < τ (ξ2 ), we obtain an inconsistency: according to proposition 6.15(2) (∃τ0 (ξ2 ) ∈ (a, τ (ξ2 )), τ0 ∈ T (σΔ , Δ ∈ A)) τ0 ξ2 < τ ξ2 =⇒ τ0 ξ1 < τ ξ1 , τ0 ξ2 > a =⇒ τ0 ξ1 ≥ τn ξ1 =⇒ τ0 ξ1 ≥ τ ξ1 . From here (τ (ξ1 ), τ (ξ2 )) = limn→∞ (τn (ξ1 ), τn (ξ2 )) ∈ Γ. 30. Direct values and inverse time changes PROPOSITION 6.17. Let τ ∈ IMT, ξ1 = ξ2 , i = 1, 2. Then (1) vi (τ (ξ1 ) + τ (ξ2 ) | ξ1 , ξ2 ) = τ (ξi ); (2) vi∗ (τ (ξi ) | ξ1 , ξ2 ) = τ (ξ1 ) + τ (ξ2 ) (v1 , v2 ; see item 23). Proof. Assertion (1) follows from the deſnition of v1 , v2 and proposition 6.16. Let us prove assertion (2). According to the deſnition, (∀t ∈ R+ ) vi∗ (t | ξ1 , ξ2 ) = inf vi−1 ({t} | ξ1 , ξ2 ). There can be two possibilities. Let τ (ξ1 ), τ (ξ2 ) be points of discontinuity. According to condition I(Dτ | ξ1 ) = I(Dτ | ξ2 ), for any point (t, s) ∈ Γ it is fair that (t, s) < (τ (ξ1 ), τ (ξ2 )) ⇒ t < τ (ξ1 ), s < τ (ξ2 ). From here τ (ξ1 ) + τ (ξ2 ) = inf v1−1 ({τ (ξ1 )} | ξ1 , ξ2 ) = inf v2−1 ({τ (ξ2 )} | ξ1 , ξ2 ). Let ϕ ∈ Φ and t > 0. Let for any ε > 0 ϕ be not constant on [t − ε, t], i.e. (∀t < t) ϕ(t ) < ϕ(t). In general case ϕ∗ (ϕ(t)) ≤ t, but if ϕ∗ (ϕ(t)) < t, we obtain a contradiction: ϕ(ϕ∗ (ϕ(t))) = ϕ(t) < ϕ(t). Let τ (ξ1 ), τ (ξ2 ) be points of continuity of corresponding functions and, hence, both ξ1 , and ξ2 have no intervals of constancy on the left of corresponding points. Hence, vi∗ τ ξi | ξ1 , ξ2 = vi∗ vi τ ξ1 + τ ξ2 | ξ1 , ξ2 | ξ1 , ξ2 = τ ξ1 + τ ξ2 . 31. Permutability of time change with shift and stopping operators THEOREM 6.5. Let τ ∈ IMT, ξ1 = ξ2 . Then (1) if τ (ξ1 ) < ∞, then v1∗ · | θτ ξ1 , θτ ξ2 = θτ (ξ1 ) v1∗ · | ξ1 , ξ2 ; (2) if τ (ξ1 ) < ∞, then v1 · | θτ ξ1 , θτ ξ2 = θτ (ξ1 )+τ (ξ2 ) v1 · | ξ1 , ξ2 ;
(3) (∀s ∈ R+ ) v1 s | ατ ξ1 , ατ ξ2 = v1 s ∧ τ ξ1 + τ ξ2 | ξ1 , ξ2 + 0 ∨ s − τ ξ1 − τ ξ2 /2; (4) (∀s ∈ R+ ) v1∗ s | ατ ξ1 , ατ ξ2 = v1∗ s ∧ τ ξ1 | ξ1 , ξ2 + 2 0 ∨ s − τ ξ1 ; (5) v2 v1∗ · | θτ ξ1 , θτ ξ2 | θτ ξ1 , θτ ξ2 = θτ (ξ1 ) v2 v1∗ · | ξ1 , ξ2 | ξ1 , ξ2 ; (6) τ (ξ1 ) v2 v1∗ · | ξ1 , ξ2 | ξ1 , ξ2 . v2 v1∗ · | ατ ξ1 , ατ ξ2 | ατ ξ1 , ατ ξ2 = α The same is fair with replacing indexes (1, 2) → (2, 1) (see item 23). Proof. (1) We have θτ ξ1 = θτ ξ2 (proposition 6.14). Since any point of non-constancy of v1 on the left is a limiting point for the set of points τn (ξ1 )+τn (ξ2 ) (τn ∈ IMT), it is sufſcient to check equality at points τn (θτ ξ1 ). We have v1∗ τn θτ ξ1 | θτ ξ1 , θτ ξ2 = τn θτ ξ1 + τn θτ ξ2 ˙ τn ξ2 − τ ξ2 ˙ τn ξ1 − τ ξ1 + τ + = τ+ τ (θ ξ ) θτ (ξ ) v1∗ · | ξ1 , ξ2 =X n τ 1 1 ˙ τn ξ1 | ξ1 , ξ2 − v1∗ τ ξ1 | ξ1 , ξ2 = v1∗ τ + (see item 3, proposition 6.2(1), theorem 6.2, proposition 6.17). (2) We have v1 τn θτ ξ1 + τn θτ ξ2 | θτ ξ1 , θτ ξ2 ˙ τn ξ1 − τ ξ1 . = τn θτ ξ1 = τ + On the other hand, τ (θ ξ )+τ (θ ξ ) θτ (ξ )+τ (ξ ) v1 · | ξ1 , ξ2 X n τ 1 n τ 2 1 2 ˙ =X ˙ τn )(ξ2 ) v1 · | ξ1 , ξ2 (τ + τn )(ξ1 )+(τ + − Xτ (ξ1 )+τ (ξ2 ) v1 · | ξ1 , ξ2 ˙ τn ξ1 − τ ξ1 . = τ+
From here it follows that θτ (ξ1 )+τ (ξ2 ) v1 · | ξ1 , ξ2 = v1 · | θτ ξ1 , θτ ξ2 . (3) (∀s ≤ τ (ξ1 ) + τ (ξ2 )) v1 (s | ατ ξ1 , ατ ξ2 ) = v1 (s | ξ1 , ξ2 ) according to deſnition of v1 (see item 23). Since a coefſcient of inclination of the curve Γ(ατ ξ1 , ατ ξ2 ) after the point (τ (ξ1 ), τ (ξ2 )) is equal to 1, (∀a ≥ 0) v1 τ ξ1 + τ ξ2 + 2a | ατ ξ1 , ατ ξ2 = v1 τ ξ1 + τ ξ2 | ξ1 , ξ2 + a. From here v1 s | ατ ξ1 , ατ ξ2 = v1 s ∧ τ ξ1 + τ ξ2 | ξ1 , ξ2 + 0 ∨ s − τ ξ1 − τ ξ2 /2. (4) Since v1 (τ (ξ1 ) + τ (ξ2 ) | ατ ξ1 , ατ ξ2 ) = τ (ξ1 ) and up to that point (τ (ξ1 ), τ (ξ2 )) curves Γ(ξ1 , ξ2 ) and Γ(ατ ξ1 , ατ ξ2 ) coincide, (∀s ≤ τ (ξ1 )) v1∗ s | ατ ξ1 , ατ ξ2 = v1∗ s | ξ1 , ξ2 . Since coefſcients of inclination of the rectilinear part after the point τ (ξ1 ) + τ (ξ2 ) of the function v1 (· | ατ ξ1 , ατ ξ2 ) and that of the function v1∗ (· | ατ ξ1 , ατ ξ2 ) after point τ (ξ1 ) are mutually reciprocal, v1∗ s | ατ ξ1 , ατ ξ2 = v1∗ s ∧ τ ξ1 | ξ1 , ξ2 + 2 0 ∨ s − τ ξ1 . (5) According to item (1) of the proof, we have v2 v1∗ s | θτ ξ1 , θτ ξ2 | θτ ξ1 , θτ ξ2 = v2 θτ (ξ1 ) v1∗ s | ξ1 , ξ2 | θτ ξ1 , θτ ξ2 = θτ (ξ1 )+τ (ξ2 ) v2 v1∗ τ ξ1 + s | ξ1 , ξ2 − τ ξ1 − τ ξ2 | ξ1 , ξ2 = v2 v1∗ τ ξ1 + s | ξ1 , ξ2 | ξ1 , ξ2 − v2 τ ξ1 + τ ξ2 | ξ1 , ξ2 = v2 v1∗ τ ξ1 + s | ξ1 , ξ2 | ξ1 , ξ2 − τ ξ2 . On the other hand, s θτ (ξ ) v2 v1∗ · | ξ1 , ξ2 | ξ1 , ξ2 X 1 = v2 v1∗ τ ξ1 + s | ξ1 , ξ2 | ξ1 , ξ2 − v2 v1∗ τ ξ1 | ξ1 , ξ2 | ξ1 , ξ2 = v2 v1∗ τ ξ1 + s | ξ1 , ξ2 | ξ1 , ξ2 − τ ξ2 .
(6) According to item (3) of the proof, we have v2 v1∗ s | ατ ξ1 , ατ ξ2 | ατ ξ1 , ατ ξ2 = v2 v1∗ s | ατ ξ1 , ατ ξ2 ∧ τ ξ1 + τ ξ2 | ξ1 , ξ2 + 0 ∨ v1∗ s | ατ ξ1 , ατ ξ2 − τ ξ1 − τ ξ2 /2 = v2 v1∗ s | ξ1 , ξ2 ∧ τ ξ1 + τ ξ2 | ξ1 , ξ2 + 0 ∨ 2 s − τ ξ1 /2 = v2 v1∗ s ∧ τ ξ1 | ξ1 , ξ2 | ξ1 , ξ2 + 0 ∨ s − τ ξ1 s α τ (ξ1 ) v2 v1∗ · | ξ1 , ξ2 | ξ1 , ξ2 . =X 6.4. Coordination of function and time change The direct time change which is not changing a sequence of states need not be a strictly growing function. In this case intervals of constancy of a direct time change should be chosen in a special manner. Such intervals cut out some segments from a time scale of a transformed function. On the other hand, they must not change their sequence of states. This can only be the case when deleted segments are closed intervals of constancy of the transformed function or its periods, if any. To avoid anomalies connected with periodicity, the deſnition of coordination of a direct time change and a function concerns to the stopped functions. The measurability of a set of coordinated pairs: “direct time change – function” follows from outcomes of section 6.3 (see proposition 6.13). 32. Coordinated functions We shall state that a time change ψ ∈ Ψ1 is coordinated with function ξ ∈ D, when (∀t ∈ R+ ) αt ξ ◦ ψ ∗ ∈ D and (αt ξ ◦ ψ ∗ ) = αt ξ. Let H be the set of the coordinated pairs (ψ, ξ) ∈ Ψ1 × D. Obviously, Ψ × D ⊂ H = Ψ1 × D and (∀(ψ, ξ) ∈ H) (ξ ◦ ψ) = ξ. An interval open or closed at some of its ends at least is said to be an interval of constancy of the function f : R+ → A if f is constant on this interval. Let us designate by IC(f ) the set of intervals of constancy of the function f . 33. Criterion of coordination As mentioned earlier we denote [Δ] a closure of an interval Δ. PROPOSITION 6.18. The following equivalence is fair (ψ, ξ) ∈ H ⇐⇒ ∀Δ ∈ IC(ψ) [Δ] ∈ IC(ξ).
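For piecewise-constant data on a common grid the criterion of proposition 6.18 can be tested directly. The following sketch (illustrative names, not from the book) extracts the maximal intervals of constancy and checks that each interval of constancy of ψ lies, together with its closure, inside an interval of constancy of ξ:

```python
# Toy check of the coordination criterion for piecewise-constant data:
# psi is coordinated with xi iff every (closed) interval of constancy of
# psi is contained in an interval of constancy of xi.
def intervals_of_constancy(times, values):
    """Maximal intervals [times[i], times[j]] on which values stay constant."""
    out, start = [], 0
    for i in range(1, len(values)):
        if values[i] != values[start]:
            if i - 1 > start:
                out.append((times[start], times[i - 1]))
            start = i
    if len(values) - 1 > start:
        out.append((times[start], times[-1]))
    return out

def coordinated(times, psi_vals, xi_vals):
    xi_const = intervals_of_constancy(times, xi_vals)
    return all(any(a <= p and q <= b for a, b in xi_const)
               for p, q in intervals_of_constancy(times, psi_vals))
```

For example, with times [0, 1, 2, 3, 4], ψ values [0, 1, 1, 2, 3] and ξ values [5, 7, 7, 8, 9] the check succeeds, while ξ values [5, 7, 8, 8, 9] break it.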
Proof. Necessity. Let (t1, t2] ∈ IC(ψ), and let this interval not be contained in any interval of constancy of the form (s, t]. We have ψ∗(ψ(t2)) = t1 and ψ∗(ψ(t2) + c) → t2 (c ↓ 0). Let (∃t ∈ (t1, t2]) ξ(t1) ≠ ξ(t). Then ξ(t1) = (αt ξ)(t1) ≠ (αt ξ)(t2). Hence,

(αt ξ ◦ ψ∗)(ψ(t2)) ≠ lim c↓0 (αt ξ ◦ ψ∗)(ψ(t2) + c),   αt ξ ◦ ψ∗ ∉ D.
Sufſciency. Let ξ = αt ξ, τ ∈ IMT, τ (ξ) < ∞. According to proposition 6.6, (∀ε > 0) [τ (ξ) − ε, τ (ξ)] ∈ / IC(ξ). Then, according to the condition, (∀ε > 0) (τ (ξ) − ε, τ (ξ)] ∈ / IC(ψ). Hence, ψ ∗ ψ(τ (ξ)) = τ (ξ). From here (∀Δ ∈ A, σΔ ξ < ∞) /Δ ξ ◦ ψ ∗ ψ σΔ (ξ) = ξ σΔ (ξ) ∈ and (∀t < ψ(σΔ (ξ))) (∃t1 < σΔ (ξ)) ξ ◦ ψ ∗ (t) = ξ t1 ∈ Δ. Hence, ψ(σΔ (ξ)) = σΔ (ξ ◦ ψ ∗ ) and XσΔ (ξ ◦ ψ ∗ ) = XσΔ ξ. Let (∃τ ∈ IMT, τ (ξ) < ∞) ψ τ (ξ) = τ ξ ◦ ψ ∗ ,
Xτ ξ ◦ ψ ∗ = Xτ ξ.
Then (∀Δ ∈ A)

(τ +̇ σΔ)(ξ ◦ ψ∗) = τ(ξ ◦ ψ∗) + σΔ(θτ(ξ ◦ ψ∗))
= ψ(τ(ξ)) + σΔ(θψ(τ(ξ))(ξ ◦ ψ∗))
= ψ(τ(ξ)) + σΔ(θτ(ξ)ξ ◦ θψ(τ(ξ))ψ∗)
= ψ(τ(ξ)) + (θτ(ξ)ψ)(σΔ(θτ(ξ)ξ))
= ψ(τ(ξ)) + ψ(τ(ξ) + σΔ(θτ(ξ)ξ)) − ψ(τ(ξ))
= ψ((τ +̇ σΔ)(ξ)),
since θτ(ξ)ψ = (θψ(τ(ξ))ψ∗)∗ with respect to θτ(ξ)ξ satisfies the same condition as ψ with respect to ξ. Consequently, X(τ +̇ σΔ)(ξ ◦ ψ∗) = X(τ +̇ σΔ)(ξ). From here it follows that δ(ξ) = δ(ξ ◦ ψ∗) for any sequence δ = (Δ1, Δ2, . . .) (Δi ∈ A). Furthermore, since (∀τ ∈ IMT) (∀ε > 0) (τ(ξ) − ε, τ(ξ)] ∉ IC(ψ),
I(Dτ | ξ) = I(Dτ | ξ ◦ ψ∗). For example, if I(Dτ | ξ) = 0, then (∃ε > 0) (∀δ ∈ DS) (∀n ∈ N)

σδn(ξ) ≤ τ(ξ) − ε  or  σδn(ξ) ≥ τ(ξ).

Correspondingly, (∃ε1 > 0)

ψ(σδn(ξ)) ≤ ψ(τ(ξ)) − ε1  or  ψ(σδn(ξ)) ≥ ψ(τ(ξ)),
therefore I(Dτ | ξ ◦ ψ ∗ ) = 0. 34. Measurability of a set of coordinated pairs PROPOSITION 6.19. The following property is fair H ∈ B. Proof. In proposition 6.18 it is proved that (∀(ψ, ξ) ∈ H) (∀τ ∈ IMT) ψ ∗ ψ τ (ξ) = τ (ξ), i.e. (∀ε > 0) (τ (ξ) − ε, τ (ξ)) ∈ / IC(ψ). Let rn ↓ 0 and (∀δn ∈ DS(rn )) (∀k, n, m ∈ N, 1/m ≤ σδkn ξ < ∞)
(σδkn ξ − 1/m, σδkn ξ] ∉ IC(ψ).
Then [a, b] ∉ IC(ξ) ⇔ (∃k, n ∈ N) σδkn ξ ∈ (a, b]; therefore (∃m ∈ N, σδkn ξ − 1/m ∈ (a, b])

(σδkn ξ − 1/m, σδkn ξ] ∉ IC(ψ) =⇒ (a, b] ∉ IC(ψ) =⇒ (ψ, ξ) ∈ H.

Hence,

H = ∩_{k,m,n∈N} ({(ψ, ξ) : ψ(σδkn ξ − 1/m) < ψ(σδkn ξ)} ∪ {(ψ, ξ) : σδkn(ξ) < 1/m} ∪ {(ψ, ξ) : σδkn(ξ) = ∞}).
Evidently, two last sets belong to B. Besides,
{(ψ, ξ) : ψ(τ(ξ) − 1/m) < ψ(τ(ξ))}
= ∪_{t1,t2 ∈ R̃+, t1 < t2} {(ψ, ξ) : ψ(τ(ξ) − 1/m) ≤ t1, t2 < ψ(τ(ξ))},
where R̃+ is the set of all rational t ∈ R+, and

{(ψ, ξ) : ψ(τ(ξ) + a) > t} = ∪_{k,n∈N} {(ψ, ξ) : ψ(k/n + a) > t, τ(ξ) ∈ [k/n, (k+1)/n)}
= ∪_{k,n∈N} (p−1 X_{k/n+a}−1(t, ∞) ∩ q−1{τ ∈ [k/n, (k+1)/n)}) ∈ B.
35. Transformation of trajectory under time change Let u : H → D be the map u(ψ, ξ) = ξ ◦ ψ ∗ transformation of a trajectory generated by a time change. PROPOSITION 6.20. The following property is fair u ∈ B ∩ H/F. Proof. Let τ ∈ T (σΔ , Δ ∈ A), t ∈ R+ , S ∈ B(X). Then
u−1 βτ−1{∞} = {(ψ, ξ) ∈ H : βτ(ξ ◦ ψ∗) = ∞}
= {(ψ, ξ) ∈ H : τ(ξ ◦ ψ∗) = ∞}
= {(ψ, ξ) ∈ H : τ(ξ) = ∞} ∈ B.

Furthermore,
u−1 βτ−1([0, t] × S) = {(ψ, ξ) ∈ H : τ(ξ ◦ ψ∗) ≤ t, Xτ(ξ ◦ ψ∗) ∈ S}
= {(ψ, ξ) ∈ H : ψ(τ(ξ)) ≤ t, Xτ(ξ) ∈ S}.

From the proof of proposition 6.19 it follows that
(ψ, ξ) ∈ H : ψ τ (ξ) ≤ t ∈ B. Measurability of the other component is obvious. 36. Correspondence of time change and trajectory PROPOSITION 6.21. Let ϕ1 , ϕ2 ∈ Φ, ξ1 , ξ2 ∈ D and ξ1 ◦ ϕ1 = ξ2 ◦ ϕ2 . Then ϕ2 ◦ ϕ∗1 , ξ1 ∈ H.
Proof. We have (∀s ∈ R+ ) Xs∧ϕ∗1 (t) ξ1 ◦ ϕ1 = Xs∧ϕ∗1 (t) ξ2 ◦ ϕ2 =⇒ Xϕ1 (s∧ϕ∗1 (t)) ξ1 = Xϕ2 (s∧ϕ∗1 (t)) ξ2 =⇒ Xϕ1 (s)∧t ξ1 = Xϕ2 (s)∧ϕ2 ϕ∗1 (t) ξ2 =⇒ Xϕ1 (s) αt ξ1 = Xϕ2 (s) αϕ2 ϕ∗1 (t) ξ2 =⇒ Xs αt ξ1 ◦ ϕ1 = Xs αϕ2 ϕ∗1 (t) ξ2 ◦ ϕ2 . Hence, αt ξ1 ◦ ϕ1 = (αϕ2 ϕ∗1 (t) ξ2 ) ◦ ϕ2 . From here (∀t ∈ R+ ) αt ξ1 ◦ ϕ1 ◦ ϕ∗2 ∈ D. According to propositions 6.18 and 6.19, it is sufſcient to show that (∀τ ∈ IMT) ψ ∗ ψ(τ (ξ1 )) = τ (ξ1 ), where ψ = ϕ2 ◦ ϕ∗1 , ξ1 = αt ξ1 , ξ2 = αψ(t) ξ2 . According to proposition 6.1 we have ∗ ψ ◦ ψ τ ξ1 = ϕ1 ◦ ϕ∗2 ◦ ϕ2 ◦ ϕ∗1 τ ξ1 = ϕ1 ◦ ϕ∗2 ◦ ϕ2 τ ξ1 ◦ ϕ1 = ϕ1 ◦ ϕ∗2 ◦ ϕ2 τ ξ2 ◦ ϕ2 = ϕ1 ◦ ϕ∗2 ◦ ϕ2 ◦ ϕ∗2 τ ξ2 = ϕ1 ◦ ϕ∗2 τ ξ2 = ϕ1 τ ξ2 ◦ ϕ2 = ϕ1 τ ξ1 ◦ ϕ1 = ϕ1 ◦ ϕ∗1 τ ξ1 = τ ξ1 . 37. Property IMT with respect to coordinated pairs PROPOSITION 6.22. (∀τ ∈ IMT) (∀(ψ, ξ) ∈ H) τ ξ ◦ ψ ∗ = ψ τ (ξ) . Proof. Since (∀k ∈ N) (∀δ ∈ DS) ψ σδk (ξ) = σδk ξ ◦ ψ ∗ , from theorem 6.4 and proposition 6.18 it follows that Γ ξ, ξ ◦ ψ ∗ = Γ1 (ψ) ∪ Γ2 (ψ) (see items 2 and 23). From proposition 6.16 it follows that (∀τ ∈ IMT) τ (ξ), τ ξ ◦ ψ ∗ ∈ Γ ξ, ξ ◦ ψ ∗ .
From proposition 6.6 it follows that
τ ξ ◦ ψ ∗ = inf s : τ (ξ), s ∈ Γ ξ, ξ ◦ ψ ∗ = ψ τ (ξ) . 6.5. Random time changes Constructed in item 24, canonical time changes are used to prove the following theorem: for two random functions with identical distributions of points of the ſrst exit and indicators of sets Dτ there exist two random time changes, transforming these functions into one. The reversion of this theorem is trivial. We consider general random time changes, when the joint distribution Q of direct time changes and trajectories is given. This joint distribution with the help of a map (ψ, ξ) → u(ψ, ξ) ∈ D induces a measure of a transformed random function. Transformation of distribution of an original random function Q ◦ q −1 (marginal distribution corresponding to map q(ψ, ξ) = ξ) in distribution of a random function Q ◦ u−1 we call as a time change transformation for distributions of random functions or random processes. 38. Basic maps Let us introduce the following denotations: p1 : D × D → D, p1 (ξ1 , ξ2 ) = ξ1 the ſrst coordinate of a two-dimensional vector; p2 : D × D → D, p2 (ξ1 , ξ2 ) = ξ2 the second coordinate of a two-dimensional vector; w1 : d → Ψ × D, w1 (ξ1 , ξ2 ) = (v1∗ (· | ξ1 , ξ2 ), ξ1 ) the ſrst pair with a direct time change (d see item 26); w2 : d → Ψ × D, w2 (ξ1 , ξ2 ) = (v2∗ (· | ξ1 , ξ2 ), ξ2 ) the second pair with a direct time change; z1 : d → H, z1 (ξ1 , ξ2 ) = (v2 (· | ξ1 , ξ2 ) ◦ v1∗ (· | ξ1 , ξ2 ), ξ1 ) the ſrst coordinated pair; z2 : d → H, z2 (ξ1 , ξ2 ) = (v1 (· | ξ1 , ξ2 ) ◦ v2∗ (· | ξ1 , ξ2 ), ξ2 ) the second coordinated pair. 39. Existence of canonical time change PROPOSITION 6.23. Let P be a probability measure on F ⊗ F, P(d) = 1, P1 = −1 P ◦ p−1 1 , P2 = P ◦ p2 . Then (1) there exist probability measures Q1 and Q2 on B such that Qi (Ψ × D) = 1, Pi = Qi ◦ q −1 (i = 1, 2) and Q1 ◦ u−1 = Q2 ◦ u−1 ; (2) there exist probability measures Q1 and Q2 on B such that Qi (H) = 1, Pi = Qi ◦ q −1 (i = 1, 2), P2 = Q1 ◦ u−1 , P1 = Q2 ◦ u−1 .
Proof. (1) From proposition 6.13 it follows that wi ∈ (F ⊗ F) ∩ d/B. Let Qi = P ◦ wi−1 (i = 1, 2). We have pi = q ◦ wi (see item 3). From here Pi = P ◦ wi−1 ◦ q −1 = Qi ◦ q −1 . According to construction of v1 and v2 , for given ξ1 , ξ2 (see item 23) it is fair u ◦ w1 ξ1 , ξ2 = ξ1 ◦ v1 · | ξ1 , ξ2 = ξ2 ◦ v2 · | ξ1 , ξ2 = u ◦ w2 ξ1 , ξ2 . From here Q1 ◦ u−1 = P ◦ w1−1 ◦ u−1 = P ◦ w2−1 ◦ u−1 = Q2 ◦ u−1 . (2) We have pi = q ◦ zi and zi ∈ (F ⊗ F) ∩ d/B. Let Qi = P ◦ zi−1 . From here Pi = P ◦ zi−1 ◦ q −1 = Qi ◦ q −1 . In this case u ◦ z1 ξ1 , ξ2 = u v2 v1∗ · | ξ1 , ξ2 | ξ1 , ξ2 , ξ1 = ξ1 ◦ v1 · | ξ1 , ξ2 ◦ v2∗ · | ξ1 , ξ2 = ξ2 = p2 ξ1 , ξ2 . From here Q1 ◦ u−1 = P ◦ z1−1 ◦ u−1 = P ◦ p−1 2 = P2 . The same with replacement (1, 2)→(2, 1). 40. Existence of measure on diagonal of traces Let us designate
E = A ⊂ M : −1 A ∈ F . Evidently, E is a sigma-algebra, ∈ F/E. From proposition 6.7 it immediately follows that F ∗ = −1 E. This means that any set B ∈ F ∗ can be represented as B = −1 B , where B ⊂ E, but it follows that B is representable as B = −1 B. Actually, −1 B = −1 −1 B , and since −1 means identity map on M, we obtain required relation. In other words, F ∗ is a set of all B ∈ F representable as B = −1 B. In the following theorem and below we will use a sigma-algebra of subsets of the set D, which all functions and subsets determining the trace are measurable with respect to. Let us designate F ◦ = σ γτ , Dτ , τ ∈ T0 .
Evidently F◦ ⊂ F∗. From the Blackwell theorem [CHA 81] the inverse inclusion follows as well, hence F◦ = F∗. We do not show a proof of this theorem. In all our constructions touching traces and time changes, it is sufficient to use the sigma-algebra F◦.

PROPOSITION 6.24. Let P1, P2 be probability measures on F such that (∀k ∈ N) (∀τi, τ̃i ∈ T0) (∀Si ∈ B(X))

P1(∩_{i=1}^{k} γτi−1(Si) ∩ ∩_{i=1}^{k−1} Dτ̃i) = P2(∩_{i=1}^{k} γτi−1(Si) ∩ ∩_{i=1}^{k−1} Dτ̃i)
(T0; see item 22). Then there exists a probability measure P on F ⊗ F such that P(d) = 1, P1 = P ◦ p1−1, P2 = P ◦ p2−1.

Proof. According to the given condition and the theorem on extension of measure [SHI 80, p. 167], P1 = P2 on F◦. We define a probability measure P on F ⊗ F as an extension of the set function determined on sets A1 × A2 (A1, A2 ∈ F) by the formula

P(A1 × A2) = ∫D P1(A1 | F◦) P2(A2 | F◦) P1(dξ).

Let A1, A2 ∈ F◦ and A1 ∩ A2 = Ø. Then P1-a.s. Pi(Ai | F◦) = I(Ai | ·) and

P(A1 × A2) = ∫D I(A1 | ξ) I(A2 | ξ) P1(dξ) = 0.
According to theorem 6.4 and due to the sigma-compactness of X, the set (D × D) \ d is exhausted by a countable set of rectangles A1 × A2 (A1, A2 ∈ F◦, A1 ∩ A2 = Ø). From here P(d) = 1. In this case (∀A ∈ F)

P ◦ p1−1(A) = ∫D P1(A | F◦) P1(dξ) = P(A × D) = P1(A),

P ◦ p2−1(A) = ∫D P2(A | F◦) P1(dξ) = ∫D P2(A | F◦) P2(dξ) = P2(A).
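The gluing used in this proof has a transparent finite analogue (a sketch under the stated assumption that the two laws agree on a "trace" statistic; all names are ours, not the book's): P(x1, x2) = P1(x1) P2(x2 | trace = trace(x1)) is carried by the diagonal and has marginals P1 and P2.

```python
# Discrete analogue of the measure P built in proposition 6.24, coupling
# two laws that agree on the sigma-algebra generated by `trace`.
from collections import defaultdict

def couple_on_diagonal(P1, P2, trace):
    groups = defaultdict(dict)
    for x, p in P2.items():
        groups[trace(x)][x] = p
    # conditional distribution of P2 given each trace value
    cond2 = {t: {x: p / sum(g.values()) for x, p in g.items()}
             for t, g in groups.items()}
    P = defaultdict(float)
    for x1, p in P1.items():
        for x2, q in cond2[trace(x1)].items():
            P[(x1, x2)] += p * q
    return dict(P)
```

Every pair in the support of the result has equal traces, mirroring P(d) = 1.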
41. Preservation of random trace under time change PROPOSITION 6.25. For any probability measure Q on B ∩ H P ◦ −1 = P ◦ −1 , where P = Q ◦ q −1 , P = Q ◦ u−1 . Proof. Measurability of u is proved in proposition 6.20; q is measurable as a projection. We have q(ψ, ξ) = ξ = (ξ ◦ψ ∗ ) = u(ψ, ξ) and, hence, ◦q = ◦u. Therefore P ◦ −1 = Q ◦ q −1 ◦ −1 = Q ◦ u−1 ◦ −1 = P ◦ −1 .
42. Note on construction of measure on the diagonal According to proposition 6.11 and the evident F ∗ -measurability of the function γτ (τ ∈ IMT), we have F ◦ ⊂ F ∗ . Therefore, from condition P1 ◦ −1 = P2 ◦ −1 , fulſllment of proposition 6.24 follows (see item 40). On the other hand, if the condition of proposition 6.24 is fulſlled, then from propositions 6.23 – 6.25 it follows that P1 ◦ −1 = P2 ◦ −1 and P1 = P2 on F ∗ . Let us consider a probability measure P∗ on F ⊗ F such that (∀A1 , A2 ∈ F) P∗ A1 × A2 = P1 P1 A1 | F ∗ P2 A2 | F ∗ . Then since F ◦ ⊂ F ∗ and P1 = P2 on F ∗ , P A1 × A2 = P1 P1 A1 | F ◦ P2 A2 | F ◦ = P1 P1 A1 | F ∗ P2 A2 | F ∗ = P∗ A1 × A2 , i.e. P = P∗ . 43. Criterion of identity of traces Let (Qx ) be a family of probability measures on B. This family we call admissible if (∀x ∈ X) Qx (H) = 1, (∀B ∈ B) Qx (B) is a B(X)-measurable function of x and the marginal distributions (Px ) (Px = Qx ◦ q −1 ) composes an admissible family of probability measures on F (see item 2.42). THEOREM 6.6. Let (Px1 ) and (Px2 ) be two admissible family of probability measures on F. Then condition (∀x ∈ X) Px1 ◦ −1 = Px2 ◦ −1 is equivalent to either of the two conditions: (1) there exist two admissible families of probability measures (Q1x ) and (Q2x ) on B such that (∀x ∈ X) Qix (Ψ × D) = 1, Pxi = Qix ◦ q −1 (i = 1, 2) Q1x ◦ u−1 = Q2x ◦ u−1 ≡ Px ; 1
(2) there exist two admissible families of probability measures (Q̄1x) and (Q̄2x) on B such that (∀x ∈ X) Pxi = Q̄ix ◦ q−1 (i = 1, 2), Px2 = Q̄1x ◦ u−1, Px1 = Q̄2x ◦ u−1.
232
Continuous Semi-Markov Processes
44. Note on criterion If either of two processes with identical traces, for example (Px2 ), does not contain = any intervals of constancy, then, according to item 24 we have Px1 = Px ◦ p−1 1 2x ◦u−1 , where w Px ◦ w 2−1 ◦u−1 = Q 2 (ξ1 , ξ2 ) = ((Γ(· | ξ1 , ξ2 ))∗ , ξ2 ), Γ(· | ξ1 , ξ2 ) is a function corresponding to time run comparison curve Γ from item 23, and Γ(· | ξ1 , ξ2 ) 2x ◦ q −1 = Px2 , i.e. Q 2x is an analogy of Q2x . ∈ Φ. Here also Q 45. Maps on set of traces Let τ ∈ IMT. We deſne maps on the set M: X τ : {τ < ∞} → X, Xτ = X τ ◦ ; θτ : {τ < ∞} → M, ◦ θτ = θτ ◦ ; ατ : M → M, ◦ ατ = ατ ◦ . Maps X τ , θτ and ατ are determined to be single-valued, because if ξ1 = ξ2 , then ατ ξ1 = ατ ξ2 and θτ ξ1 = θτ ξ2 (τ < ∞) (see proposition 6.14), therefore X0 θτ ξ1 = Xτ ξ1 = Xτ ξ2 = X0 θτ ξ2 . From this deſnition it easily follows that X τ ∈ E ∩ {τ < ∞}/B(X),
θτ ∈ E ∩ {τ < ∞}/E,
ατ ∈ E/E.
46. Regeneration times of random traces Let (∀B ∈ E) Rx (B) be a B(X)-measurable function of x and Rx (X 0 = x) = 1 (admissible family). We will state that admissible family (Rx )x∈X of probability measures on E has a Markov time τ ∈ IMT as its regenerative time (τ ∈ RT(Rx )), if (∀x ∈ X) Rx -a.s. on the set {τ < ∞} ¯ τ−1 E (μ) = RX τ μ (B) B ∈ E, μ ∈ M . Rx θ¯τ−1 B | α THEOREM 6.7. For any admissible family of measures (Px ) on F it is fair IMT ∩ RT Px ⊂ RT Rx , where Rx = Px ◦ −1 . Proof. For τ ∈ IMT ∩ RT(Px ) and A, B ∈ E we have ¯ τ−1 A = Px −1 θ¯τ−1 B ∩ −1 α ¯ τ−1 A Rx θ¯τ−1 B ∩ α = Px θτ−1 −1 B ∩ ατ−1 −1 A = Px PXτ −1 B ; {τ < ∞} ∩ ατ−1 −1 A ,
Time Change
233
according to item 45 and because of −1 B, −1 A ∈ F. The last expression is equal to Px PX τ ◦ −1 B ; −1 {τ < ∞} ∩ −1 α ¯ τ−1 A ¯ τ−1 A , = Rx RX τ (B); {τ < ∞} ∩ α according to deſnition of X τ and because of {τ < ∞} = −1 {τ < ∞}. While investigating time change in families of measures, a question can arise: does time change preserve regeneration time? It is interesting because a family of distributions of traces for both a transformed and original process can possess a set of regeneration time reach enough. Simple examples show that even in the case of stepped SM processes, some time change can cause a loss of the semi-Markov property. For this aim it is enough to determine a conditional distribution of a sequence of sojourn times as that of a sequence of dependent random values. Preservation of regeneration time under time change requires a special class of time changes. 6.6. Additive functionals and time change preserving the semi-Markov property A time change preserving the semi-Markov property of the process can be constructed with the help of the Laplace family of additive functionals which is enough to be determined on the set of non-constancy of the process, i.e. the random set which does not contain intervals of constancy of trajectories. On the other hand, any time change transforming one semi-Markov process into another determines a Laplace family of additive functionals [HAR 80a]. As an example the time change with the help of independent strictly increasing homogenous process with independent increments is considered [GIH 73, ITO 60, HAR 76a, SER 71]. 47. Random set In the given context the word “random” has the sense “depending on trajectory”. Let M (·): D → B(R+ ) be a function of the following view: (1) (∀ξ ∈ D) M (ξ) is a measurable set of point on the half-line R+ (M (ξ) ∈ B(R+ )); in addition
T ≡ (t, ξ) ∈ R+ × D, t ∈ M (ξ) ∈ B R+ ⊗ F; (2) (∀ξ ∈ D) 0 ∈ M (ξ); (3) (∀ξ ∈ D) t1 , t1 + t2 ∈ M (ξ), t2 ≥ 0 ⇒ t2 ∈ M (θt1 ξ); (4) (∀ξ ∈ D) (∀t ∈ R+ ) M (ξ) ∩ [0, t] = M (αt ξ) ∩ [0, t].
234
Continuous Semi-Markov Processes
We call such a function a random set. Let L be a class of such functions. The trivial example of random set is M (ξ) ≡ R+ . A simple non-trivial example is
[6.1] M0 (ξ) = τ (ξ) : τ ∈ T0 , where T0 is deſned in item 22. Measurability of M0 follows from countability of set of parameters and B(R+ ) ⊗ F-measurability of the set {(τ (ξ), ξ), ξ ∈ D}. Conditions (2) and (4) are evident. Condition (3) follows from closure of T0 with respect to ˙ (see item 2.14). operation + 48. Additive functional on random set Let M ∈ L. Consider functional at (ξ) ≡ a(t, ξ) on T of the following view: (1) a ∈ (B(R+ ) ⊗ F) ∩ T/B(R), measurability; (2) (∀ξ ∈ D) a0 (ξ) = 0, zero initial value; (3) (∀ξ ∈ D) t1 , t1 + t2 ∈ M (ξ), t2 ≥ 0 ⇒ at1 +t2 (ξ) = at1 (ξ) + at2 (θt1 ξ), additivity; (4) (∀ξ ∈ D) (∀t ∈ R+ ) at (ξ) = at (αt ξ), independence of future. We call (a(·, ·), M (·)) as additive functional on a random set. If M (ξ) ≡ R+ , then the words “random set” can be omitted. 49. Family of additive functionals Consider a family of additive functionals on a random set (a(λ | ·, ·), M (·))λ≥0 . We will also designate functional a(λ | t, ξ) as at (λ | ξ), and even a(λ) if two arguments are absent. The family a(λ) is said to be Laplacian if (∀ξ ∈ D) (∀t ∈ M (ξ)) at (λ | ξ) ≥ 0 and exp(−at (λ | ξ)) is a completely monotone function of λ ≥ 0. Hence in this case there exists a probability measure Gt (· | ξ) on [0, ∞) such that ∞ exp − at (λ | ξ) = e−λs Gt (ds | ξ). 0
From the evident Ft -measurability of the function at (λ | ·), determined on a measurable set {ξ ∈ D : t ∈ M (ξ)} ∈ Ft , and from the formula of inversion of the Laplace transformation [FEL 67] it follows that (∀s ∈ R+ ) the function Gt ([0, s) | ·) is Ft -measurable. 50. Coordinated extension of function We denote by [A]↑ a closure of the set A ∈ R+ with respect to convergence from the left. For the random set S ∈ L we deſne class Ψ1 (S) of functions ψ : S → R+ by following properties:
(1) ψ(0) = 0;
(2) ψ does not decrease on S;
(3) ψ(t) → ∞ (t → ∞, t ∈ S);
(4) ψ is continuous from the left on S;
(5) if t1 ∈ [S]↑, t2 ∈ M0(ξ) and t1 < t2, then ψ(t1) < ψ(t2).

PROPOSITION 6.26. Let ξ ∈ D and S ∈ L be a random set such that M0 ⊂ S (see formula [6.1]). A function ψ : S → R+ can be extended to a function ψ1 ∈ Ψ1 coordinated with ξ (i.e. (ψ1, ξ) ∈ H, see item 32) if and only if ψ ∈ Ψ1(S).

Proof. Necessity of conditions (1)–(4) is evident. If t1 < t2 and t2 ∈ M0(ξ), then [t1, t2] ∉ IC(ξ) and therefore (∀ψ ∈ Ψ1, (ψ, ξ) ∈ H) ψ(t1) < ψ(t2), which is condition (5).

Sufficiency. Define ψ1 = ψ on S, ψ1(t) = ψ(t − 0) for t ∈ [S]↑, and

ψ1(t) = ψ(t′ − 0) + ((t − t′)/(t″ − t′)) (ψ(t″ + 0) − ψ(t′ − 0))

for t ∈ (t′, t″], where t′ = sup{t1 ∈ S : t1 < t}, t″ = inf{t1 ∈ S : t1 ≥ t} and t″ < ∞; if t″ = ∞, set ψ1(t) = ψ(t′ − 0) + (t − t′). Then ψ1(0) = 0, ψ1 does not decrease on R+, ψ1(t) → ∞ (t → ∞), and ψ1 is continuous from the left. Let t1 < t2 and ψ1(t1) = ψ1(t2). From the construction it follows that M0(ξ) ∩ (t1, t2] = Ø, i.e. [t1, t2] ∈ IC(ξ). From here, by 6.18, (ψ1, ξ) ∈ H.

51. Time change determined by functional

Consider a Laplace family of additive functionals on a random set (a(λ), M1(·))_{λ≥0}, where M1 ∈ L, (∀ξ ∈ D) M0(ξ) ⊂ M1(ξ). Let P be a probability measure on F. Let us construct on B(R^n_+) ⊗ F a projective family of measures (Q_{τ1,…,τn}) (n ∈ N, τi ∈ T0) of the form

Q_{τ1,…,τn}( ⋂_{i=1}^n {k1(X̃_{τ̇ni∘q} ∘ θ_{τn,i−1∘q}) ∈ Si} ∩ q^{−1}B ∩ {τnn∘q < ∞} ) = P( ∏_{i=1}^n G_{τ̇ni}(Si) ∘ θ_{τn,i−1} ; B ∩ {τnn < ∞} ),
where k1(t, x) = t (either the first coordinate of the pair, or ∞; see item 4.2), Si ∈ B(R+), B ∈ F, τni is the i-th order statistic of the collection (τ1, …, τn), τni =
τn,i−1 +̇ τ̇ni (e.g., τ̇ni = σ_{Δni}); (∀ξ ∈ D, τni(ξ) < ∞) τ̇ni(θ_{τn,i−1}ξ) ∈ M1(θ_{τn,i−1}ξ), and (∀λ > 0)

exp(−a_{τ̇ni}(λ | ξ)) = ∫_0^∞ e^{−λt} G_{τ̇ni}(dt | ξ),
Gτ(dt | ξ) ≡ G_{τ(ξ)}(dt | ξ) is a probability measure on (0, ∞); see item 49.

THEOREM 6.8. Let (a(λ), M1(·))_{λ≥0} be a Laplace family of additive functionals on a random set, where M1 ∈ L, (∀ξ ∈ D) M0(ξ) ⊂ M1(ξ), and (∀λ > 0) (∀ξ ∈ D) the functional a_t(λ | ξ) as a function of t ∈ M1 belongs to the class Ψ1(M1). Then the projective family of measures (Q_{τ1,…,τn}) constructed above can be extended to a probability measure Q on B concentrated on H, and, in addition, P = Q ∘ q^{−1}.

Proof. Since
k1(X̃_{τ̇∘q} ∘ θ_{τ∘q})(ψ, ξ) = ψ((τ +̇ τ̇)(ξ)) − ψ(τ(ξ)),
and the map ξ ↦ G_{τ(ξ)}(· | ξ) is measurable, it is sufficient to show that (∀ξ ∈ D) the projective family (G_{τ1,…,τn}(· | ξ)) (n ∈ N, τi ∈ M1(ξ)) of the form

G_{τ1,…,τn}( ⋂_{i=1}^n {ψ(τni(ξ)) − ψ(τn,i−1(ξ)) ∈ Si} | ξ ) = ∏_{i=1}^n G_{τ̇ni}(Si | θ_{τn,i−1}ξ)
can be extended to a probability measure G on B(Ψ1(ξ)), where Ψ1(ξ) is a cross-section of the set H at the coordinate ξ. Note that since the functionals a_τ(λ | ξ) ≡ a(λ | τ(ξ), ξ) are additive on M1(ξ), the distribution G_{τ̇2}(· | θ_{τ1}ξ) is that of the increments on the interval [τ1(ξ), (τ1 +̇ τ̇2)(ξ)] of a process with independent increments determined on the set M1(ξ). The properties of trajectories of a process with independent increments are well known to coincide, in general, with the properties of the logarithm of the generating function of the process (see [SKO 61]). Let us show that, under the conditions on a_τ(λ | ξ), the trajectories of the constructed process with independent increments, extended onto R+, are a.s. coordinated with ξ. Let U(· | ξ) be the distribution of this process. Obviously, U(ψ(0) = 0 | ξ) = 1. Let Nn ∈ σ(X̃t, t ∈ M1(ξ)), Nn ⊃ Nn+1 and U(Nn | ξ) ≥ ε1 > 0. Without loss of generality, suppose Nn = ⋂_{i=1}^n {ψ(ti) ∈ S_{in}}, where M0(ξ) ⊂ (t1, t2, …) ⊂ M1(ξ), all S_{in} are compact, and, in addition, (∀i ∈ N) S_{in} ⊃ S_{i,n+1}. Since (∀(tn) ⊂ M1(ξ)) tn ↑ ∞ ⇒ (∀l ∈ N) G_{tn}([l, ∞) | ξ) → 1 (this follows from property (3) of item 50), for any sequence (ε(l))_{l=1}^∞ (ε(l) > 0) there exists a sequence (a(l))_{l=1}^∞ (a(l) ↑ ∞) such that (∀n ∈ N)

tn ≥ a(l) ⟹ G_{tn}([l, ∞)) ≥ 1 − ε(l).
Let ε(l) = ε · 2^{−l−1}, let (t_{n1}, …, t_{nn}) be the order statistics of the collection {t1, …, tn} ⊂ M1(ξ), and let ℓ(t_{ni}) = l ⇔ a(l) ≤ t_{ni} < a(l + 1) (l ∈ N0, a(0) = 0);

Kn = ⋂_{i=1}^n {ψ(t_{ni}) ≥ ℓ(t_{ni})} = ⋂_{l=0}^∞ ⋂_{i: ℓ(t_{ni})=l} {ψ(t_{ni}) ≥ l} = ⋂_{l=0}^∞ {ψ(t_{ni(l)}) ≥ l},

where t_{ni(l)} = min(t_{ni} : t_{ni} ≥ a(l)). In this case

U(Kn | ξ) ≥ 1 − Σ_{l=0}^∞ U(ψ(t_{ni(l)}) < l | ξ) = 1 − Σ_{l=0}^∞ G_{t_{ni(l)}}((0, l) | ξ) ≥ 1 − ε,

and Kn ⊃ Kn+1. If ψ ∈ ⋂_{n=1}^∞ Kn, tn ↑ ∞ and (tn) ⊂ M1(ξ), then, evidently, ψ(tn) ↑ ∞.
Since (∀(tn) ⊂ M1(ξ))

tn ↑ t, t ∈ M1(ξ) ⟹ (∀l ∈ N) G_{tn,t}((0, l^{−1})) → 1

(this property follows from (4) of item 50), for any (ε(t, l))_{l=1}^∞ (ε(t, l) > 0) there exists a sequence (b(t, l))_{l=1}^∞ (b(t, l) ↓ 0) such that

t − b(t, l) ≤ tn < t ⟹ G_{tn,t}((0, l^{−1})) ≥ 1 − ε(t, l).

Here and in what follows G_{t1,t2}(S | ξ) = G_{ṫ2}(S | θ_{t1}ξ), where t2 = t1 +̇ ṫ2 (t1, t2 ∈ M1(ξ), ṫ2 ∈ M1(θ_{t1}ξ)). Let ε(tn, l) = ε·2^{−n−l−1}, let ℓ(t_{ni}, t_{nj}) = l ⇔ t_{ni} − b(t_{ni}, l) ≤ t_{nj} < t_{ni} − b(t_{ni}, l + 1) (l ∈ N0, b(t_{ni}, 0) = ∞), and let t_{nj(l)} = min(t_{nj} : t_{nj} ≥ t_{ni} − b(t_{ni}, l));

Ln = ⋂_{i=1}^n ⋂_{j=0}^{i−1} {ψ(t_{ni}) − ψ(t_{nj}) ≤ ℓ(t_{ni}, t_{nj})^{−1}} = ⋂_{i=1}^n ⋂_{l=0}^∞ ⋂_{j: ℓ(t_{ni},t_{nj})=l} {ψ(t_{ni}) − ψ(t_{nj}) ≤ l^{−1}} = ⋂_{i=1}^n ⋂_{l=0}^∞ {ψ(t_{ni}) − ψ(t_{nj(l)}) ≤ l^{−1}}.

Then

U(Ln | ξ) ≥ 1 − Σ_{i=1}^n Σ_{l=0}^∞ G_{t_{nj(l)},t_{ni}}((l^{−1}, ∞)) ≥ 1 − ε,
(∀n ∈ N) Ln ⊃ Ln+1, and any function ψ ∈ ⋂_{n=1}^∞ Ln is continuous from the left on (t1, t2, …) ⊂ M1(ξ).

Since (∀(tn) ⊂ M1(ξ))

tn ↑ t′ < t, t ∈ M0(ξ) ⟹ G_{t′−0,t}((0, ∞)) = 1

(this follows from property (5) of item 50), for any sequence (ε(t, l))_{l=1}^∞ (ε(t, l) > 0) there exists a sequence of numbers (c(t, l))_{l=1}^∞ (c(t, l) > 0) such that

t − l^{−1} ≤ tn < t − 1/(l + 1) ⟹ G_{tn,t}((c(t, l), ∞)) ≥ 1 − ε(t, l).

Let ε(t′n, l) = ε·2^{−n−l−1}, where t′n is the n-th element of the set M0(ξ) in the sequence (tn). Let the condition ℓ(t_{ni}, t_{nj}) = l be equivalent to the condition t_{ni} − 1/l ≤ t_{nj} < t_{ni} − 1/(l + 1) (l ∈ N0), and

Mn = ⋂_{i≤n, t_{ni}∈M0(ξ)} ⋂_{j=0}^{i−1} {ψ(t_{ni}) − ψ(t_{nj}) ≥ c(t_{ni}, ℓ(t_{ni}, t_{nj}))} = ⋂_i ⋂_{l=0}^∞ ⋂_{j: ℓ(t_{ni},t_{nj})=l} {ψ(t_{ni}) − ψ(t_{nj}) ≥ c(t_{ni}, l)} = ⋂_i ⋂_{l=0}^∞ {ψ(t_{ni}) − ψ(t_{nj(l)}) ≥ c(t_{ni}, l)},

where t_{nj(l)} = max{t_{nj} : t_{ni} − l^{−1} ≤ t_{nj} < t_{ni} − 1/(l + 1)}. Then

U(Mn | ξ) ≥ 1 − Σ_i Σ_{l=0}^∞ G_{t_{nj(l)},t_{ni}}((0, c(t_{ni}, l))) ≥ 1 − ε,

(∀n ∈ N) Mn ⊃ Mn+1, and for any function ψ ∈ ⋂_{n=1}^∞ Mn property (5) of item 50 is fulfilled on (tn).
Let Rn = Kn ∩ Ln ∩ Mn ∩ Nn. Then (∀n ∈ N) Rn ⊃ Rn+1, Rn ∈ σ(X̃t, t ∈ M1(ξ)), and U(Rn) ≥ ε1 − 3ε ≥ ε1/2 if ε = ε1/6. Since U is a probability measure on B(R_+^{M1(ξ)}), ⋂_{n=1}^∞ Rn ≠ Ø and, in addition, for any ψ from this intersection the conditions of proposition 6.26 are fulfilled. Hence the probability measure U and the canonical extension of the function ψ to the function ψ1 on R+, constructed in proposition 6.26, induce a probability measure U′(· | ξ) on B(Ψ1(ξ)). Here (∀n ∈ N) (∀ti ∈ R+) (∀Si ∈ B(R+)) the function U′(⋂_{i=1}^n {φ(ti) ∈ Si} | ξ) is measurable with respect to F. The distribution Q on B is determined by the relation

Q( ⋂_{i=1}^n {ψ(ti) ∈ Si} ∩ q^{−1}B ) = P( U′( ⋂_{i=1}^n {ψ(ti) ∈ Si} ); B )
(a canonical extension of the distributions (Q_{τ1,…,τn})) and is concentrated on H = ⋃_{ξ∈D} {(ψ, ξ) : ψ ∈ Ψ1(ξ)}. Evidently, P = Q ∘ q^{−1}.
52. Note on uniqueness

Note that uniqueness of the extension of Q is guaranteed only in the case when (∀ξ ∈ D) M0(ξ) is dense in R+. However, the transformed measure P̃ = Q ∘ u^{−1} is always unique in spite of possible non-uniqueness of Q. This measure is determined by the expression (∀n ∈ N) (∀λi > 0) (∀φi ∈ C) (∀τi ∈ T0)

P̃( ∏_{i=1}^n e^{−λi τi} φi ∘ X_{τi} ) = P( ∏_{i=1}^n e^{−a_{τi}(λi)} φi ∘ X_{τi} ).
Actually, (∀B ∈ F*) u^{−1}B ∩ H = q^{−1}B ∩ H (by 6.22 and item 11). Let τi ∈ T0, τ1 < ··· < τn, τi = τ_{i−1} +̇ τ̇i (τ0 = 0). We have

(τ̇i ∘ θ_{τi−1} ∘ u)(ψ, ξ) = τ̇i(θ_{ψ(τi−1(ξ))}(ξ ∘ ψ*)) = τ̇i(θ_{τi−1}ξ ∘ θ_{ψ(τi−1(ξ))}ψ*) = (θ_{ψ(τi−1(ξ))}ψ*)*(τ̇i(θ_{τi−1}ξ)) = (θ_{τi−1(ξ)}ψ)(τ̇i(θ_{τi−1}ξ)) = ψ(τi−1(ξ) + τ̇i(θ_{τi−1}ξ)) − ψ(τi−1(ξ)) = k1(X̃_{τ̇i∘q} ∘ θ_{τi−1∘q})(ψ, ξ).

Then

P̃( ∏_{i=1}^n e^{−λi τ̇i∘θ_{τi−1}} ; B ) = Q( ∏_{i=1}^n e^{−λi τ̇i∘θ_{τi−1}∘u} ; u^{−1}B ∩ H )
 = Q( ∏_{i=1}^n e^{−λi k1(X̃_{τ̇i∘q}∘θ_{τi−1∘q})} ; q^{−1}B ) = P( ∏_{i=1}^n e^{−a_{τ̇i}(λi)∘θ_{τi−1}} ; B ).
The measure P is determined uniquely with collection of these integrals.
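The determining relation of item 52 — the Laplace transform of a transformed time under the new measure equals the a-functional transform under the original measure — can be checked numerically in the simplest linear case a_τ(λ) = τ·α(λ) (anticipating item 54). In the following sketch all distributional choices are hypothetical illustrations, not taken from the text: an Exp(1) variable stands in for a Markov time τ, and an independent gamma subordinator, whose Laplace exponent α(λ) = c·ln(1 + λ/θ) is standard, realizes the time change, so that E e^{−λτ′} = E e^{−τα(λ)} = 1/(1 + α(λ)).

```python
import math
import random

def alpha(lam, c=2.0, theta=1.0):
    # Laplace exponent of a gamma subordinator: E[exp(-λ T_t)] = exp(-t α(λ)),
    # i.e. a_t(λ) = t α(λ) is linear in t (the "independent increments" case).
    return c * math.log(1.0 + lam / theta)

def mc_transformed_laplace(rng, lam=0.7, c=2.0, theta=1.0, n=200_000):
    # τ ~ Exp(1) plays the role of a Markov time of the original process;
    # the transformed time is τ' = T_τ, the subordinator run up to τ
    # (conditionally Gamma(shape = c·τ, scale = 1/θ)).
    acc = 0.0
    for _ in range(n):
        tau = rng.expovariate(1.0)
        tau_prime = rng.gammavariate(c * tau, 1.0 / theta) if tau > 0 else 0.0
        acc += math.exp(-lam * tau_prime)
    return acc / n

rng = random.Random(12345)
lam = 0.7
mc = mc_transformed_laplace(rng, lam=lam)
exact = 1.0 / (1.0 + alpha(lam))   # E[e^{-τ α(λ)}] for τ ~ Exp(1)
print(mc, exact)
```

The closed form on the right is available only because both the time and the subordinator were chosen to be analytically tractable; the identity itself is the point.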
53. Preservation of regeneration times

THEOREM 6.9. Let (a(λ), M(·))_{λ≥0} be a Laplace family of additive functionals satisfying the conditions of proposition 6.26; let (Px) be an admissible family of probability measures on F and (P̃x) = (Qx ∘ u^{−1}) be the family of transformed probability measures on F determined by the corresponding family of probability measures (Qx) constructed in theorem 6.8. Then IMT ∩ RT(Px) ⊂ RT(P̃x).

Proof. Let τ ∈ IMT. Then Xτ ∈ F*/B(X). Furthermore, let τn = σ^k_{δn} on {σ^k_{δn} ≤ τ < σ^{k+1}_{δn}}, where δn ∈ DS(An), An being a covering of X of rank rn (rn ↓ 0). Since M0(ξ) is dense in M(ξ), τn ↑ τ and a_{τn}(λ) ↑ a_τ(λ), where a_τ(λ | ξ) is the extension by continuity of the function a_t(λ | ξ) from M0(ξ) onto M(ξ) = {t = τ(ξ), τ ∈ IMT}. In addition, {σ^k_{δn} ≤ τ < σ^{k+1}_{δn}} ∈ F*. We have

P̃x( e^{−λτ} φ ∘ Xτ ) = lim_{n→∞} P̃x( e^{−λτn} φ ∘ Xτ ) = lim_{n→∞} Σ_{k=0}^∞ Px( e^{−a_{σ^k_{δn}}(λ)} φ ∘ Xτ ; σ^k_{δn} ≤ τ < σ^{k+1}_{δn} ) = lim_{n→∞} Px( e^{−a_{τn}(λ)} φ ∘ Xτ ) = Px( e^{−a_τ(λ)} φ ∘ Xτ ).

Hence P̃x(e^{−λτ} φ ∘ Xτ) = Px(e^{−a_τ(λ)} φ ∘ Xτ). Similarly we prove that (∀n ∈ N) (∀τi ∈ IMT) (∀λi > 0) (∀φi ∈ C)

P̃x( ∏_{i=1}^n e^{−λi τi} φi ∘ X_{τi} ) = Px( ∏_{i=1}^n e^{−a_{τi}(λi)} φi ∘ X_{τi} ).
Furthermore, (∀n, m ∈ N) (∀τ ∈ IMT) (∀λi, λ′i > 0) (∀φi, φ′i ∈ C) (∀τi, τ′i ∈ T0)

P̃x( ∏_{i=1}^n e^{−λi τi} φi ∘ X_{τi} ∘ θτ · ∏_{i=1}^m e^{−λ′i τ′i} φ′i ∘ X_{τ′i} ∘ ατ ; τ < ∞ )
 = Px( ∏_{i=1}^n e^{−a_{τi}(λi)∘θτ} φi ∘ X_{τi} ∘ θτ × ∏_{i=1}^m e^{−a_{τ′i}(λ′i)∘ατ} φ′i ∘ X_{τ′i} ∘ ατ ; τ < ∞ ).

We note that the second factor is Fτ-measurable and {τ < ∞} ∈ F*. Then, if τ ∈ MP(Px), the last expression is equal to

Px( P_{Xτ}( ∏_{i=1}^n e^{−a_{τi}(λi)} φi ∘ X_{τi} ) ∏_{i=1}^m e^{−a_{τ′i}(λ′i)∘ατ} φ′i ∘ X_{τ′i} ∘ ατ ; τ < ∞ )
 = P̃x( P̃_{Xτ}( ∏_{i=1}^n e^{−λi τi} φi ∘ X_{τi} ) ∏_{i=1}^m e^{−λ′i τ′i} φ′i ∘ X_{τ′i} ∘ ατ ; τ < ∞ ).
From here it follows that τ ∈ MP(P̃x).

54. Process with independent increments

A probability measure U on G is said to be a measure of a homogenous process with independent increments if U-a.s. (∀t ∈ R+) (∀B ∈ G) U(θt^{−1}B | Gt) = U(B). Let us determine a Laplace family of additive functionals (a(λ)), independent of ξ: (∀t ∈ R+) (∀λ ≥ 0)

e^{−a_t(λ)} = U( e^{−λX_t} ).

From homogeneity of the process it follows that a_t(λ) = tα(λ), where α(λ) is a positive function with a completely monotone derivative in λ. Thus, the constructed Laplace family of additive functionals satisfies the conditions of theorem 6.8 and determines a time change transforming the SM process (Px) into the SM process (P̃x) by the formula

P̃x( e^{−λτ} φ ∘ Xτ ; τ < ∞ ) = Px( e^{−a_τ(λ)} φ ∘ Xτ ; τ < ∞ ),

where τ ∈ IMT. The time change by means of an independent process with independent increments is also investigated in [SER 71].

55. Strong semi-Markov process

A process (Px) is said to be a strong semi-Markov process if IMT ⊂ RT(Px). In theorem 6.9 we considered a Laplace family of additive functionals which determines a time change preserving the strong semi-Markov property. Thus, a method of construction of strong semi-Markov processes is given. As an original process can be
considered a strong Markov process which contains all intrinsic Markov times in its class of regeneration times. An example of a Laplace family of additive functionals preserving the strong semi-Markov property, but not preserving Markovness, is the family considered above, connected with a homogenous process with independent positive increments, if it is not linear in λ.

56. On linearity of the Laplace family

By analogy with the criterion of Markovness for semi-Markov processes (see theorems 3.6, 3.8) it can be predicted that linearity in λ of the Laplace family of additive functionals is necessary and sufficient for the corresponding time change to preserve the Markov property of the process. As follows from the results of [BLU 62] (see also [BLU 68, p. 234]), this is true under the additional supposition that the map t ↦ a_t(λ | ·) is continuous. Let (Px)_{x∈R+} be a family of degenerate distributions of a Markov process for which (∀t ∈ R+) Px(Xt = x + t) = 1 (movement to the right with unit velocity). Let ξx(t) = x + t (t, x ∈ R+) and

a_t(λ | ξx) = λt if 1 ∉ [x, x + t),  a_t(λ | ξx) = λt − ln f(λ) if 1 ∈ [x, x + t),

where f(λ) = ∫_0^∞ e^{−λt} F(dt), F is some probability distribution on B(R+) and F({0}) = 0. Then (∀x ∈ R+) Px-a.s. a(λ) is a Laplace family of additive functionals:

a_{t1+t2}(λ | ξx) = a_{t1}(λ | ξx) + a_{t2}(λ | θ_{t1}ξx) = a_{t1}(λ | ξx) + a_{t2}(λ | ξ_{x+t1}) = λ(t1 + t2) if 1 ∉ [x, x + t1 + t2), and = λ(t1 + t2) − ln f(λ) otherwise.

This family of additive functionals, in general, is not linear in λ. However, for f(λ) = a/(a + λ) (a > 0) the corresponding time change preserves the Markovness of the family: in the transformed family the trajectories have, from the first hitting time of the unit level, an interval of constancy distributed exponentially. For any other f(λ), including degenerate distributions, when f(λ) = e^{−aλ}, Markovness of the family is not preserved. Note that in the latter case (∀x ∈ R+) Px-a.s. the family (a(λ)) is linear in λ. We will return to the question of linearity of a family of additive functionals when analyzing the time run along the trace.

6.7. Distribution of a time run along the trace

For any random process there exists a special class of additive functionals which is determined by the process itself. It is called an additive functional of a time run
along the trace. Consider a function ξ ∈ D and its trace. From other functions with the same trace the original function differs by its individual distribution of time along the trace. If we consider functions τ ∈ IMT as marks on a trace, this distribution represents an additive functional (Tτ(ξ))_{τ∈IMT}, where Tτ(ξ) ≡ τ(ξ). Its main properties follow from the properties of the functions ξ themselves and of the intrinsic times. Thus, the time run is a "twice random" non-negative additive functional on a trace. The first "randomness" depends on the trace. The second "randomness" distinguishes an individual trajectory among trajectories with the same trace. It follows that the distribution of a random process can be represented by means of two distributions: a distribution on the set of traces and, depending on the trace, a conditional distribution on the set of time runs. The second distribution is, in fact, the distribution of some random process. For a semi-Markov process this secondary process possesses the simplest properties: it is a process with independent positive increments on some linearly ordered set. The distribution of this process can be determined with the help of a so-called Laplace family of additive functionals of a time run. As in a usual monotone process with independent increments determined on a segment of the line, the Lévy formula holds for these functionals. The measure of the Poisson component in this formula determines the jumps of the time run process. It is natural to interpret these jumps as intervals of constancy of a trajectory.

57. Representation of sigma-algebra

In item 40 we defined the sigma-algebra F◦ = σ(γτ, Dτ, τ ∈ T0). We assume that T0 contains τ ≡ ∞ (e.g., τ = σX). Thus, by the definition, D∞ ∈ F◦. This sigma-algebra also contains other useful sets such as {τ1 < τ2}, {τ < ∞}, {sup(τn) < τ}, {sup(τn) < ∞}, where τ, τi ∈ T0 (see below).

PROPOSITION 6.27.
For any τ ∈ T0 the following representation holds: F◦ = σ(ατ^{−1}F◦, θτ^{−1}F◦).

Proof. Let τ1 ∈ T0: τ1 = σ_{(Δ′1,…,Δ′k)}, τ = σ_{(Δ1,…,Δn)} (Δi, Δ′i ∈ P(A0)). With the times τ and τ1 we consider all preceding times, i.e. chains of times:
(1) 0 ≤ σ_{Δ′1} ≤ σ_{(Δ′1,Δ′2)} ≤ ··· ≤ σ_{(Δ′1,…,Δ′k)};
(2) 0 ≤ σ_{Δ1} ≤ ··· ≤ σ_{(Δ1,…,Δn)}.
Let

z = ((Ø), (Δ′1), …, (Δ′1, …, Δ′k), (Δ1), …, (Δ1, …, Δn))

be the corresponding full collection (see item 4.1), I(z) the set of all indexes of intersection for the given z (item 4.11) and J(z) the set of all sequences of indexes of intersection (item 4.12). One and only one variational series, composed of the times σ_{Δ′1}, …, σ_{(Δ′1,…,Δ′k)}, σ_{Δ1}, …, σ_{(Δ1,…,Δn)}, corresponds to any sequence (α) ∈ J(z). This sequence (α) = (α1, …, α_{N+1}) (N ≥ 0) depends only on the corresponding values of X0, X_{τα1}, …, X_{ταN} and, correspondingly, is F◦-measurable. In this case the set {τ1 < τ} is a union of a finite number of sets B(α), where B(α) is the set of all ξ ∈ D for which the variational series of the collection σ_{Δ′1}ξ, …, σ_{(Δ′1,…,Δ′k)}ξ, σ_{Δ1}ξ, …, σ_{(Δ1,…,Δn)}ξ has the sequence of indexes of intersection (α); hence {τ1 < τ} ∈ F◦. It is sufficient to check F◦-measurability of the maps ατ and θτ on sets of the form {τ1 = ∞}, {τ1 < ∞, X_{τ1} ∈ S} (S ∈ B(X)), D_{τ1}, D∞ ∈ F◦. We have
ατ^{−1}{τ1 = ∞} = {τ1 ∘ ατ = ∞} = {τ1 > τ} ∪ {τ1 = τ = ∞} ∈ F◦,
ατ^{−1}{τ1 < ∞, X_{τ1} ∈ S} = {τ1 ∘ ατ < ∞, X_{τ1∧τ} ∈ S} = {τ1 ≤ τ, τ1 < ∞, X_{τ1} ∈ S} ∈ F◦,
ατ^{−1}D_{τ1} = {τ ≥ τ1} ∩ D_{τ1} ∈ F◦,
ατ^{−1}D∞ = {ξ : ατξ ∈ D∞} = {τ = ∞} ∩ D∞ ∈ F◦,

and also

θτ^{−1}{τ1 = ∞} = {τ < ∞} ∩ {τ +̇ τ1 = ∞} ∈ F◦,
θτ^{−1}{τ1 < ∞, X_{τ1} ∈ S} = {τ < ∞} ∩ {τ +̇ τ1 < ∞, X_{τ +̇ τ1} ∈ S} ∈ F◦,
θτ^{−1}D_{τ1} = {τ < ∞} ∩ D_{τ +̇ τ1} ∈ F◦,  θτ^{−1}D∞ = {τ < ∞} ∩ D∞ ∈ F◦.

Hence σ(ατ^{−1}F◦, θτ^{−1}F◦) ⊂ F◦. On the other hand,
{τ1 = ∞} = ατ^{−1}({τ1 = τ = ∞}) ∪ ⋃_{i=1}^k [ ατ^{−1}({σ_{(Δ′1,…,Δ′i−1)} ≤ τ < σ_{(Δ′1,…,Δ′i)}}) ∩ θτ^{−1}({σ_{(Δ′i,…,Δ′k)} = ∞}) ] ∈ σ(ατ^{−1}F◦, θτ^{−1}F◦),

{τ1 < ∞, X_{τ1} ∈ S} = ατ^{−1}({τ1 ≤ τ < ∞, X_{τ1} ∈ S}) ∪ ⋃_{i=1}^k [ ατ^{−1}({σ_{(Δ′1,…,Δ′i−1)} ≤ τ < σ_{(Δ′1,…,Δ′i)}}) ∩ θτ^{−1}({σ_{(Δ′i,…,Δ′k)} < ∞, X_{σ_{(Δ′i,…,Δ′k)}} ∈ S}) ] ∈ σ(ατ^{−1}F◦, θτ^{−1}F◦),

D_{τ1} = ατ^{−1}({τ1 < τ} ∩ D_{τ1}) ∪ ⋃_{i=1}^k [ ατ^{−1}({σ_{(Δ′1,…,Δ′i−1)} ≤ τ < σ_{(Δ′1,…,Δ′i)}}) ∩ θτ^{−1}D_{σ_{(Δ′i,…,Δ′k)}} ] ∈ σ(ατ^{−1}F◦, θτ^{−1}F◦),

D∞ = ατ^{−1}({τ = ∞} ∩ D∞) ∪ ( ατ^{−1}{τ < ∞} ∩ θτ^{−1}D∞ ) ∈ σ(ατ^{−1}F◦, θτ^{−1}F◦).
Hence F◦ ⊂ σ(ατ^{−1}F◦, θτ^{−1}F◦).

58. Joint measurability of conditional probabilities

Furthermore, we will consider conditional probabilities, connected with different original measures, as functions of ξ and of a parameter of the family of measures (in the given case x ∈ X). A question arises about joint measurability. Let F′ be a sigma-algebra which is a subset of F. Consider a conditional distribution Px(· | F′) (x ∈ X). It is well known (see [BLU 68, p. 22] and also [HAL 53]) that for a countably-generated sigma-algebra F′ and B(X)-measurable probabilities Px(B) (B ∈ F) (as functions of x) there exists a B(X) ⊗ F′-measurable version of the conditional probability depending on a parameter (i.e. (∀B ∈ F) Px(B | F′)(ξ) is a B(X) ⊗ F′-measurable function). A proof of this fact follows from the property that F′ can be represented as σ(⋃_{n=1}^∞ F′n), where F′n is a finite algebra of subsets of the set D, and the family of conditional probabilities (Px(B | F′n))_{n=1}^∞ composes a martingale [SHI 80]. If the function X0 is F′-measurable, a superposition of this function with the conditional probabilities P_{X0}(B | F′) (or with expectations E_{X0}(f | F′)) is an F′-measurable function for any set B ∈ F (or F-measurable function f).

59. Representation of conditional probability

PROPOSITION 6.28. Let (Px) be an admissible family of measures and τ ∈ T0 ∩ RT(Px). Then (∀x ∈ X) Px-a.s. on the set {τ < ∞}, for any F-measurable functions f and g,

Ex( f ∘ ατ · g ∘ θτ | F◦ ) = Ex( f ∘ ατ | ατ^{−1}F◦ ) Ex( g ∘ θτ | θτ^{−1}F◦ ).

Proof. Let A, B ∈ F◦. We have

Ex( f ∘ ατ · g ∘ θτ ; ατ^{−1}A ∩ θτ^{−1}B ) = Ex( Ex( f ∘ ατ · g ∘ θτ | F◦ ) ; ατ^{−1}A ∩ θτ^{−1}B ).

On the other hand, according to the property of regeneration, the first expression is equal to

Ex( E_{Xτ}(g; B) f ∘ ατ ; ατ^{−1}A ∩ {τ < ∞} ).
Using the definition of conditional probability and taking into account the evident ατ^{−1}F◦-measurability of the function Xτ and of the set {τ < ∞}, we obtain

Ex( Ex( f ∘ ατ | ατ^{−1}F◦ ) E_{Xτ}(g; B) ; ατ^{−1}A ∩ {τ < ∞} ).   [6.2]

Note that Px-a.s.
E_{Xτ}(g; B) = Ex( g ∘ θτ ; θτ^{−1}B | Xτ ),
where the conditional probability, as usual, is with respect to the sigma-algebra Xτ^{−1}B(X) generated by the map Xτ on the set {τ < ∞}. Since Xτ can be represented as X0 ∘ θτ, Xτ^{−1}B(X) ⊂ θτ^{−1}F◦. Using a well-known property of conditional distributions (see, e.g., [SHI 80, p. 230]), we obtain

Ex( g ∘ θτ ; θτ^{−1}B | Xτ ) = Ex( Ex( g ∘ θτ | θτ^{−1}F◦ ) ; θτ^{−1}B | Xτ ).

Substituting the last expression into formula [6.2] and again using the property of regeneration, we obtain

Ex( Ex( f ∘ ατ | ατ^{−1}F◦ ) Ex( g ∘ θτ | θτ^{−1}F◦ ) ; θτ^{−1}B ∩ ατ^{−1}A ).

From here, using the theorem on extension of measure [GIH 71, p. 53], Px-a.s. on {τ < ∞}

Ex( f ∘ ατ · g ∘ θτ | F◦ ) = Ex( f ∘ ατ | ατ^{−1}F◦ ) Ex( g ∘ θτ | θτ^{−1}F◦ ).

60. Additive functional of time run

Let (Px) be an admissible family of measures. Consider the F◦-measurable function Ex(e^{−λτ} | ατ^{−1}F◦) (see item 40). According to the definition, T0 is countable. The sigma-algebra B(X) is countably generated too: it is generated by the system of open balls {S(x, r)} (x ∈ X′, r ∈ R′+), where X′ is a countable set dense everywhere in X and R′+ is the set of all positive rational numbers. From here F◦ and ατ^{−1}F◦ are countably-generated sigma-algebras. Since X0 is an F◦-measurable function, there exists an F◦-measurable modification of the function E_{X0}(e^{−λτ} | ατ^{−1}F◦). In the following theorem we consider the F◦-measurable functional

bτ(λ) = − log E_{X0}( e^{−λτ} | ατ^{−1}F◦ )

on the random set M0(ξ) (see item 47, formula [6.1]), where bτ(λ)(ξ) ≡ b(λ | τ(ξ), ξ). Note that, by proposition 6.27, (∀x ∈ X) Px-a.s.

E_{X0}( e^{−λτ} | ατ^{−1}F◦ ) = E_{X0}( e^{−λτ} | F◦ ).

We will call the functional bτ(λ) a functional of time run.
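Since exp(−bτ(λ)) is a conditional Laplace transform of a distribution on [0, ∞), it is completely monotone in λ; by Bernstein's theorem this is equivalent to all alternating finite differences being non-negative. The following sketch checks this numerically for a hypothetical sojourn-time distribution (an atom at a constant mixed with an exponential law — the choice is illustrative only, not taken from the text):

```python
import math

def phi(lam, p=0.3, d=0.5, rate=2.0):
    # φ(λ) = E[e^{-λσ}] for a hypothetical time σ equal to the constant d
    # with probability p and Exp(rate) with probability 1-p.
    return p * math.exp(-lam * d) + (1.0 - p) * rate / (rate + lam)

def forward_difference(f, lam, h, k):
    # k-th forward difference Δ_h^k f(λ) = Σ_j (-1)^(k-j) C(k,j) f(λ + j h)
    return sum((-1) ** (k - j) * math.comb(k, j) * f(lam + j * h)
               for j in range(k + 1))

# Complete monotonicity: (-1)^k Δ_h^k φ(λ) ≥ 0 for all k, h > 0, λ > 0 —
# the property that makes exp(-b_τ(λ)) the Laplace transform of G_τ.
checks = [(-1) ** k * forward_difference(phi, lam, 0.25, k)
          for lam in (0.1, 1.0, 3.0) for k in range(6)]
print(min(checks))
```

A function failing this test for some k cannot be of the form ∫ e^{−λs} G(ds) for a probability measure G, so the check is a cheap necessary-condition filter.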
THEOREM 6.10. Let (Px) ∈ SM. Then (∀x ∈ X) Px-a.s. the family of functionals (b(λ)) (λ ≥ 0) is a Laplace family of additive functionals on the random set M0(ξ), satisfying the conditions of theorem 6.8.

Proof. Let us prove additivity of the functional b(λ). It is sufficient to establish that (∀A1, A2 ∈ F) (∀τ1, τ2 ∈ T0) (∀λ > 0) (∀x ∈ X)

Ex( exp(−b_{τ1 +̇ τ2}(λ)) ; A ) = Ex( exp(−(b_{τ1}(λ) + b_{τ2}(λ) ∘ θ_{τ1})) ; A ),

where A = α_{τ1}^{−1}A1 ∩ θ_{τ1}^{−1}A2. According to the definition of bτ(λ), we have

Ex( exp(−b_{τ1 +̇ τ2}(λ)) ; A ) = Ex( E_{X0}( exp(−λ(τ1 +̇ τ2)) | F◦ ) ; A )
 = Ex( Ex( exp(−λ(τ1 +̇ τ2)) | F◦ ) Px( A | F◦ ) )
 = Ex( e^{−λτ1} (e^{−λτ2} ∘ θ_{τ1}) Px( α_{τ1}^{−1}A1 | α_{τ1}^{−1}F◦ ) Px( θ_{τ1}^{−1}A2 | θ_{τ1}^{−1}F◦ ) ).

According to a well-known property of conditional probabilities (see, e.g., Neveu [NEV 69]), the conditional probability Px(θ_{τ1}^{−1}A2 | θ_{τ1}^{−1}F◦) can be represented as f ∘ θ_{τ1}, where f is an F◦-measurable function. Using the regeneration property of the family (Px), the preceding expression can be written as

Ex( e^{−λτ1} Px( α_{τ1}^{−1}A1 | α_{τ1}^{−1}F◦ ) E_{X_{τ1}}( e^{−λτ2} f ) ; τ1 < ∞ ).

On the other hand, for any x ∈ X

Ex( e^{−λτ2} f ) = Ex( Ex( e^{−λτ2} | F◦ ) f ) = Ex( exp(−b_{τ2}(λ)) f ).

From here E_{X_{τ1}}( e^{−λτ2} f ) = E_{X_{τ1}}( exp(−b_{τ2}(λ)) f ), and, again by the regeneration condition, we obtain

Ex( exp(−λτ1) exp(−b_{τ2}(λ)) ∘ θ_{τ1} Px( A | F◦ ) ) = Ex( E_{X0}( e^{−λτ1} | F◦ ) exp(−b_{τ2}(λ)) ∘ θ_{τ1} ; A ) = Ex( exp(−(b_{τ1}(λ) + b_{τ2}(λ) ∘ θ_{τ1})) ; A ).

Since F = σ(α_{τ1}^{−1}F, θ_{τ1}^{−1}F) (see item 2.24, theorem 2.14), the equality

Ex( exp(−b_{τ1 +̇ τ2}(λ)) ; A ) = Ex( exp(−(b_{τ1}(λ) + b_{τ2}(λ) ∘ θ_{τ1})) ; A )
is fair for all A ∈ F. Evidently, b0(λ) = 0. By proposition 6.28 we have

exp(−bτ(λ)) = E_{X0}( e^{−λτ} | ατ^{−1}F◦ ),

i.e. bτ(λ) does not depend on the future. Obviously,

e^{−bτ(λ)} = ∫_0^∞ e^{−λs} Px( τ ∈ ds | F◦ ),
i.e. Px-a.s. (b(λ), M0(·)) is a Laplace family of additive functionals on a random set. For this family to determine a time change it is sufficient that the conditions of theorem 6.8 are fulfilled. Of these it remains to check continuity from the left and the properties at infinity, but these properties follow immediately from the definition of the functional. For example, if Px-a.s. τn ↑ τ ∈ T0, then P_{X0}(exp(−λτn) | F◦) → P_{X0}(exp(−λτ) | F◦).

From the proven theorem it follows that for two SM processes with coinciding distributions of traces there exists an additive functional determining a time change which transforms one process into the other: namely, if (P¹x), (P²x) are two families of measures with the same distribution of the trace, then b2(λ, τ) = −log P²_{X0}(exp(−λτ) | F◦) is a Laplace family of additive functionals which, by theorem 6.10, determines the time change transforming the family (P¹x) into (P²x).

61. Composition of sequences

In order to conduct further investigation, deducing sequences of a special form will be useful for us. Let (An)_{n=1}^∞ be a sequence of countable open coverings of the space X with decreasing ranks: (∀Δ ∈ An) diam Δ ≤ 1/n, and let δn ∈ DS(An). Define the composition of k deducing sequences δ1 × ··· × δk, where δi = (Δ^i_1, Δ^i_2, …). This is a sequence of order-type ω^k with elements Δ¹_{i1} ∩ ··· ∩ Δ^k_{ik} ((i1, …, ik) ∈ N^k, an index of dimension k). The rank of this sequence is equal to the smallest of the component ranks. It is convenient to use the sequence of compositions (δ1 × ··· × δk)_{k=1}^∞ because (∀ξ ∈ D) the sequence of first exit times (τ_{i1,…,ik}(ξ)) determined by the k-th composition is a subsequence of the sequence (τ_{i1,…,i_{k+1}}(ξ)) determined by the (k+1)-th composition. The definition of the times τ(i1, …, ik) is based on the following property: for any Δ ∈ A and δ ∈ DS

σ_{Δ∩Δ1} +̇ ··· +̇ σ_{Δ∩Δk} → σΔ

as k → ∞, where δ = (Δ1, Δ2, …), and if σΔξ < ∞, then (∃n ∈ N) σΔξ = (σ_{Δ∩Δ1} +̇ ··· +̇ σ_{Δ∩Δn})(ξ). It is not difficult to note that we again turn to the situation of Chapter 4, where construction of a projective family of measures with the help
of an admissible family of semi-Markov transition functions has been considered. Every system of initial segments of k deducing sequences determines a full collection z ∈ Z0(A0) (see item 4.1). This full collection determines a finite set I(z) of indexes of intersection, on which a lexicographical ordering in the alphabet N0 is introduced (see item 4.11). Here we will not show in detail the rule of index change under increase of the dimension, i.e. under addition of new deducing sequences to a given finite system. It is important to note that to each index α = (i1, …, ik) ≠ 0 there corresponds, firstly, the set (it can be empty) Δα = ⋂_{j=1}^k Δ^j_{ij} and, secondly, the immediately preceding index α′. In this case τα = τα′ +̇ σΔα. Every index of dimension k is at the same time an index of dimension m (m > k), in which m − k coordinates are equal to zero. Thus, if α < β, then τα ≤ τβ (α, β ∈ Ak). The set of indexes Ak of dimension k is countable. The rank of a composition decreases as 1/k with growth of dimension.
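In one dimension the composition of two coverings, its lexicographic indexing and the statement about its rank can be sketched as follows (the overlapping interval coverings are hypothetical stand-ins for the abstract coverings An; nothing here is taken from the text beyond the definitions of composition and rank):

```python
from itertools import product

def covering(n, upto=3.0):
    # overlapping open intervals of diameter 1/n covering [0, upto):
    # a toy analogue of a countable covering A_n of rank 1/n
    step = 1.0 / n
    return [(k * step / 2, k * step / 2 + step) for k in range(int(2 * upto * n))]

def compose(delta1, delta2):
    # elements Δ1_{i1} ∩ Δ2_{i2} of the composition, keyed by the index (i1, i2);
    # sorted(keys) enumerates the cells in lexicographic order
    cells = {}
    for (i1, a), (i2, b) in product(enumerate(delta1), enumerate(delta2)):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        if lo < hi:
            cells[(i1, i2)] = (lo, hi)
    return cells

d1, d2 = covering(2), covering(3)
comp = compose(d1, d2)
rank = max(hi - lo for lo, hi in comp.values())
print(sorted(comp)[:3], rank)
```

The computed rank of the composition equals min(1/2, 1/3) = 1/3: intersecting with a finer covering can only shrink the cells, which is exactly why the rank of δ1 × ··· × δk decreases as 1/k.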
62. Lévy formula

In the following theorem a decomposition of b(λ), analogous to the Lévy formula [ITO 65, p. 49] for non-decreasing processes with independent increments, is proved.

THEOREM 6.11. Let (Px) ∈ SM and let (b(λ)) (λ ≥ 0) be the Laplace family of additive functionals of time run defined in item 60. Then (∀x ∈ X) (∀τ ∈ T0) Px-a.s. the following expansion is fair:

bτ(λ) = λaτ + ∫_{0+}^∞ (1 − e^{−λu}) nτ(du) − Σ_{τi<τ} log E_{X0}( e^{−λσ0} | F◦ ) ∘ θ_{τi},   [6.3]

where aτ and (∀B ∈ B(0, ∞)) nτ(B) (the Lévy measure) are F◦-measurable additive functionals on the random set M0; (τi +̇ σ0) is a finite or countably infinite collection of F◦-measurable Markov times of the class MT+; and σ0 is the first exit time from the initial point of a trajectory.
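Before the proof, the first two terms of expansion [6.3] can be illustrated in the classical, trace-free special case: for a drift-plus-compound-Poisson subordinator the exponent is b(λ) = λa + ∫(1 − e^{−λu}) n(du). The sketch below verifies this by Monte Carlo; the drift, the jump rate and the exponential jump law are hypothetical choices for which the integral has a closed form:

```python
import math
import random

def levy_exponent(lam, a=0.5, rate=2.0, jump_mean=1.5):
    # b(λ) = λa + ∫(1 - e^{-λu}) n(du) with n(du) = rate · Exp(mean jump_mean);
    # for exponential jumps the integral equals rate · λ/(λ + 1/jump_mean)
    mu = 1.0 / jump_mean
    return lam * a + rate * lam / (lam + mu)

def subordinator(rng, t=1.0, a=0.5, rate=2.0, jump_mean=1.5):
    # T_t = a·t + sum of Poisson(rate·t) many Exp-distributed jumps,
    # jumps arriving at exponential interarrival times
    total, s = a * t, rng.expovariate(rate)
    while s < t:
        total += rng.expovariate(1.0 / jump_mean)
        s += rng.expovariate(rate)
    return total

rng = random.Random(7)
lam, t, n = 0.8, 1.0, 200_000
mc = sum(math.exp(-lam * subordinator(rng, t)) for _ in range(n)) / n
exact = math.exp(-t * levy_exponent(lam))
print(mc, exact)
```

The third, purely discontinuous term of [6.3] has no analogue here precisely because a process with independent increments on the line has no "conditionally fixed" intervals of constancy.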
Proof. Let τ = τβ for some m ∈ N and β ∈ Am. Consider the partitioning of the set T0 by all Markov times of the form τα (α ∈ An) for some n ≥ m. Then, by theorem 6.10, and due to the representation of τ in the form of a finite iteration corresponding to the index β, (∀λ > 0) (∀x ∈ X) Px-a.s. the following representation of bτ(λ) ≡ b(λ, τ) in the form of a finite sum is fair:

b(λ, τ) = Σ^{(n)} b(λ, σΔα) ∘ θα,

where the sum is taken over all α ∈ An such that α < β. We will sum up separately the small and the large terms of this sum. Let ε > 0,

b^n_ε(λ, τ) = Σ^{(n)}_1 b(λ, σΔα) ∘ θα,

where all terms not exceeding ε are summed up, and

B^n_ε(λ, τ) = Σ^{(n)}_2 b(λ, σΔα) ∘ θα,

where all summands exceeding ε are summed up. As n → ∞ the first sum, evidently, does not decrease and the second does not increase. On the other hand, as ε → 0 the first sum does not increase and the second does not decrease. Hence, Px-a.s. b(λ, τ) = b0(λ, τ) + B(λ, τ), where

b0(λ, τ) = lim_{ε→0} lim_{n→∞} b^n_ε(λ, τ),  B(λ, τ) = lim_{ε→0} lim_{n→∞} B^n_ε(λ, τ).
The first part we call the continuous, and the second the purely discontinuous component of the additive functional with respect to some natural F◦-measurable metric on the trace. Both functions exp(−b0(λ, τ)) and exp(−B(λ, τ)) are completely monotone in λ as limits of completely monotone functions. Consider the continuous component. We have

(∂/∂λ) Ex( e^{−λτ} | F◦ ) = (∂/∂λ) e^{−b(λ,τ)} = −b′(λ, τ) e^{−b(λ,τ)} = −Ex( τ e^{−λτ} | F◦ ),

from here

b′(λ, τ) = E_{X0}( τ e^{−λτ} | F◦ ) / E_{X0}( e^{−λτ} | F◦ ).

Hence,

(∂/∂λ) b0(λ, τ) = lim_{ε→0} lim_{n→∞} Σ^{(n)}_1 b′(λ, σΔα) ∘ θα = lim_{ε→0} lim_{n→∞} [ Σ^{(n)}_1 E_{X0}( σΔα e^{−λσΔα} | F◦ ) ∘ θα + Σ^{(n)}_1 b′(λ, σΔα) ∘ θα ( 1 − E_{X0}( e^{−λσΔα} | F◦ ) ∘ θα ) ].

Evidently, b′(λ, τ) is an additive functional too. Therefore

Σ^{(n)}_1 b′(λ, σΔα) ∘ θα ( 1 − e^{−b(λ,σΔα)∘θα} ) ≤ b′(λ, τ) ε.
From here it follows that the derivative of b0(λ, τ) in λ is a completely monotone function, as a limit of a sequence of completely monotone functions. Hence, by Bernstein's theorem, there exists a measure m(du, τ) on R+ such that

(∂/∂λ) b0(λ, τ) = ∫_0^∞ e^{−λu} m(du, τ) = m({0}, τ) + ∫_{0+}^∞ e^{−λu} m(du, τ).

Therefore

b0(λ, τ) = λ m({0}, τ) + ∫_{0+}^∞ (1 − e^{−λu}) (1/u) m(du, τ).
Denoting aτ = m({0}, τ), nτ(du) = (1/u) m(du, τ), we obtain the required representation of the continuous component. Evidently, aτ and nτ(du) are additive on the random set M0 and are F◦-measurable.

Consider the purely discontinuous component. According to the chosen way of construction of the sequences of partitions of the set T0, for any ε > 0 the limit lim_{n→∞} B^n_ε(λ, τ) is a sum of a finite number of terms. As ε → 0, only new terms are added to the sum. To every term there corresponds a sequence of nested intervals having non-empty intersection. Let [τ_{αn}, τ′_{αn}] be such an interval, where τ′_{αn} = τ_{αn} +̇ σ_{Δαn}; τ1 = lim_{n→∞} τ_{αn} and τ2 = lim_{n→∞} τ′_{αn}. In this case τ1 ≤ τ2. Hence τ′_{αn} can be represented as τ1 +̇ σ_{Δαn}, where P_{X(τ1)}(σ0 > 0) > 0, and, since diam Δαn → 0, τ2 = lim_{n→∞}(τ1 +̇ σ_{Δαn}) = τ1 +̇ σ0. Note that for a semi-Markov process the 0-or-1 law is not fulfilled, and hence the values 0 < Px(σ0 > 0) < 1 are admissible. Thus the situation τ1 = τ2 is possible. By continuity of b(λ, τ) from the left we obtain b(λ, τ_{αn}) → b(λ, τ1). Since trajectories of the process are continuous from the right and have limits from the left, a priori we can only assert that X(τ_{αn}) → X(τ1 − 0). If τ_{αn} > τ1 for all n, the point τ1 is not a point of discontinuity; hence X(τ_{αn}) → X(τ1). If, beginning with some n, all τ′_{αn} are equal to τ2, this τ2 is a point of discontinuity. In this case the situation τ1 = τ2 is interpreted as the event σ0 = 0, which in the given case has a positive probability, equal to the probability of the event ξ ∈ D_{τ2}. We denote the limit of the sequence of values b(λ, σ_{Δαn}) ∘ θ_{τ_{αn}} as b(λ, σ0) ∘ θ_{τ1}, in spite of the fact that τ1, in general, is not a Markov time. The time τ2 = τ1 +̇ σ0 is a Markov time from the set MT+ as a lower boundary of the Markov times τ′_{αn} ∈ MT. This limit can be written as −log E_{X0}(exp(−λσ0) | F◦) ∘ θ_{τ1}. So the second component can be written in the form

B(λ, τ) = − Σ_{τi<τ} log E_{X0}( exp(−λσ0) | F◦ ) ∘ θ_{τi}

for the corresponding random times τi, where τi +̇ σ0 ∈ MT+. Evidently, the indexes of these times can be chosen depending only on γτ (τ ∈ T0). We call this formula a Lévy expansion for semi-Markov processes and nτ(du) a Lévy measure. From this expansion of the function b(λ, τ) there follows a representation of τ:

τ = aτ + ∫_{0+}^∞ u Pτ(du) + Σ_{τi<τ} σ0 ∘ θ_{τi},   [6.4]
where Pτ (du) is F ◦ -measurable conditional Poisson random measure on R+ × [0, τ ), where [0, τ ) is a segment of a trace, and in addition, where Ex (Pτ (du) | F ◦ ) = nτ (du). A proof of this representation, similar to the proof of corresponding to Lévy representation in the theory of processes with independent increments, consists of construction of a process by the given conditional distribution Px (· | F ◦ ), determined by four independent components: a non-random function aτ , a Poisson random measure Pτ on [0, τ ) × (0, ∞), a (non-random) countable collection of elements (τi ) and a sequence of independent non-negative random values (σ0 θτi ). Here τ ∈ IMT is considered as a mark on the trace (see below). Atoms (ti , si ) of measure P are interpreted as lengths (ti ) and initial points (si ) as intervals of constancy of a trajectory with a given trace. They are so called conditional Poisson intervals of constancy. Measure nτ (du) is a measure of intensity of random measure Pτ (du), its expectation with respect to measure Px (· | F ◦ ). A time τi in Lévy expansion is the initial point of an interval of constancy of another kind. We call them conditionally ſxed intervals of constancy with random lengths σ0 ◦ θτi , which with positive probability are more than ˙ σΔ relates to a set of the ſrst exit zero: Px (σ0 ◦ θτi > 0 | F ◦ ) > 0. The time τi + times of function PX(τi ) exp − λσΔ | F ◦ = PX(τi ) exp − λσΔ | ασ−1 F◦ Δ beyond a level ε (ε > 0) and therefore it is a Markov time with respect to the stream of sigma-algebras (Ft ) (t ≥ 0), where Ft is a completion of the sigma algebra Ft with respect to measure Px for all x ∈ X. In what follows we will deal with processes for which Px -a.s. PX(τ ) (exp(−λσ0 ) | F ◦ ) = PX(τ ) (exp(−λσ0 )). The ſrst exit time of the value of such a function beyond a given level is already a Markov time in common sense. 
Moreover, it is an intrinsic Markov time, and under some regularity conditions on the semi-Markov process it is a regeneration time of the process.

6.8. Random curvilinear integrals

Up to now we have considered a trace as an equivalence class on the Skorokhod space which is invariant with respect to a time change. In this section we interpret the trace as a linearly ordered set of points taken from the metric space of states. From this point of view it is natural to consider an additive functional along the trace and an integral with respect to this additive functional. It is a curvilinear integral, a well-known object of mathematical analysis [KOL 72]. We investigate a random curvilinear integral, which turns out to be a useful tool for studying properties of semi-Markov processes. As an example of application of the random curvilinear integral we consider the problem of a semi-Markov process having an interval of constancy at a fixed time t > 0.
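The Lévy expansion above says that, conditionally on the trace, the time run is a process with independent increments: a drift term plus Poisson intervals of constancy. A minimal numerical sketch of this picture (the drift a, the jump rate rho and the Exp(mu) interval lengths are illustrative choices, not taken from the text) simulates such a conditional time run as a subordinator and checks its Laplace transform against exp(−b(λ)), with b(λ) = λa + ∫(1 − e^{−λu}) n(du):

```python
import math
import random

def poisson(lam, rng):
    # inverse-transform sampling of a Poisson variate
    l, k = math.exp(-lam), 0
    p, s = rng.random(), math.exp(-lam)
    while p > s:
        k += 1
        l *= lam / k
        s += l
    return k

def simulate_time_run(a, rho, mu, trace_len, rng):
    """Time to traverse a trace segment of 'length' trace_len:
    drift a * trace_len plus Poisson(rho * trace_len) intervals of
    constancy with Exp(mu) lengths, i.e. Levy measure
    n(du) = rho * mu * exp(-mu * u) du per unit of trace length."""
    n_jumps = poisson(rho * trace_len, rng)
    return a * trace_len + sum(rng.expovariate(mu) for _ in range(n_jumps))

def b_exponent(lam, a, rho, mu, trace_len):
    # b(lam) = trace_len * (lam*a + integral of (1 - e^{-lam u}) n(du))
    #        = trace_len * (lam*a + rho*lam/(lam + mu))  for Exp(mu) jumps
    return trace_len * (lam * a + rho * lam / (lam + mu))

rng = random.Random(1)
a, rho, mu, L, lam = 0.5, 2.0, 3.0, 1.0, 1.2
samples = [simulate_time_run(a, rho, mu, L, rng) for _ in range(200_000)]
mc = sum(math.exp(-lam * t) for t in samples) / len(samples)
exact = math.exp(-b_exponent(lam, a, rho, mu, L))
print(round(mc, 3), round(exact, 3))  # the two Laplace transforms should agree
```

With n(du) = ρμe^{−μu} du the integral ∫(1 − e^{−λu}) n(du) evaluates to ρλ/(λ + μ), which is what b_exponent uses.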
Time Change
253
63. Trace as a linearly ordered set

Obviously, IMT is a partly ordered set. Besides, we have proved (see proposition 6.27) that (∀τ, τ1 ∈ IMT) the set {τ < τ1} belongs to F◦. According to the definition, the sets {Xτ ∈ S, τ < ∞} (S ∈ B(X)) and Dτ also belong to F◦. Theorem 6.4 (criterion of equality of traces of two functions) prompts the thought of characterizing the trace by properties of some maps: (1) ζ : IMT → X̄ ≡ X ∪ {∞}, corresponding to the value of the trajectory at the time τ; (2) χ : IMT → {0, 1}, corresponding to the indicator of the set Dτ. The trace must also correspond to a relation of linear ordering determined on IMT (which, in turn, must be coordinated with ζ and χ).

64. Realization of linear ordering

Let a structure of linear ordering, and maps ζ and χ, be determined on the set IMT. We say that a function ξ ∈ D realizes this structure and these maps if (∀τ1, τ2 ∈ IMT) the following relations are fulfilled:

    τ1 < τ2 ⟺ τ1(ξ) < τ2(ξ),    τ1 = τ2 ⟺ τ1(ξ) = τ2(ξ),
    ζ(τ) = γτ ξ,    χ(τ) = Jτ ξ,

where Jτ = I_{Dτ} (see item 23).

PROPOSITION 6.29. For a structure of linear ordering given on IMT and maps ζ : IMT → X̄ and χ : IMT → {0, 1} there exists a function ξ ∈ D realizing this structure and maps if and only if the following conditions are fulfilled:

(1) τ +̇ σΔ < ∞ ⇒ ζ(τ +̇ σΔ) ∉ Δ;
(2) τ1 ≤ τ < τ1 +̇ σΔ ⇒ ζ(τ) ∈ Δ;
(3) τ < ∞ ⇒ (∃n ∈ N) σ_δ^n ≥ τ;
(4) χ(∞) = 0 ⇒ (∃n ∈ N) σ_δ^n = ∞;
(5) χ(τ) = 0, τ < ∞ ⇒ (∃ε > 0) (∀δ ∈ DS, rank δ ≤ ε) (∃n ∈ N) τ = σ_δ^n;
(6) (∀k, l, m, n ∈ N : σ_{δn}^{k−1} ≤ σ_{δl}^m < σ_{δn}^k) (∃ε > 0) (∀r : 0 < r ≤ ε)

    {(s, t) ∈ N² : σ_{δn}^{k−1} < σ_{δs}^t < σ_{δl}^m, ζ(σ_{δs}^t) ∈ Δ_{nk} \ Δ_{nk}^{−r}} = Ø,

where δn = (Δ_{n1}, Δ_{n2}, . . .);
(7) χ(σ_{δn}^k) = 0 ⇒ ζ(τ'_{nk}) ∈ Δ_{nk}, where τ'_{nk} = sup{σ_{δl}^m : σ_{δl}^m < σ_{δn}^k};
(8) χ(∞) = 1 ⇒ the set {σ_{δn}^k : σ_{δn}^k < ∞} has no maximal term.
The logical quantifiers (∀τ, τ1 ∈ IMT), (∀Δ ∈ A), (∀δ ∈ DS) precede conditions (1)–(5). The logical quantifier (∃(δn)_1^∞, δn ∈ DS, rank δn → 0) precedes conditions (6)–(8).

Proof. Necessity. Let ξ ∈ D, τ ∈ IMT, ζ(τ) = γτ ξ, χ(τ) = Jτ ξ, and let a linear order on IMT be determined by the values of τ(ξ). Then properties (1), (2), (3), (4) and (8) are obvious. Property (5) follows since an interval of constancy (IC) on the left of τ (at τ − 0) is possible only in the case where τ(ξ) is a point of discontinuity; its right part is an evidently necessary and sufficient condition for discontinuity at the point τ(ξ). Property (6) follows since for any function ξ ∈ D there is always a rather rich collection of sets Δ ∈ A from which ξ exits "correctly". In particular, from any system of concentric balls no more than a countable subsystem of balls can be chosen for which the property σ_{Δ^{−r}}(ξ) → σ_Δ(ξ) (r → 0) is not fulfilled. Property (7) follows from τ'_{nk}(ξ) ∈ [σ_{δn}^{k−1}(ξ), σ_{δn}^k(ξ)).

Sufficiency. We will construct a function ξ with the required properties (from theorem 6.4 it follows that any other such function has the same trace). Consider the sequence (δn)_1^∞ from conditions (6)–(8). We will choose, for all σ_{δn}^k < ∞, points κ(σ_{δn}^k) on the axis R+ according to the linear order on IMT and the values of the function χ. Without loss of generality, we can assume that ζ(σ_{δn}^{k−1}) ∈ Δ_{nk} for any n and k. From here, by condition (1), it follows that σ_{δn}^{k−1} < σ_{δn}^k. Let χ(∞) = 1. Choose points for δ1: κ(σ_{δ1}^n) = n + m, where σ_{δ1}^n < ∞ and m is the number of those k ≤ n for which χ(σ_{δ1}^k) = 0. A double enlargement of the distance between points corresponds to an interval of constancy of unit length situated before the appropriate time point. The choice of the remaining points is made on parts of the axis without these IC. Let the points for δ1, . . . , δn already be located. If κ(σ_δ^k) < κ(σ_{δm}^c) are two neighboring points of this construction and σ_{δ_{n+1}}^{s+1} < ⋯ < σ_{δ_{n+1}}^{s+L} are the times of the next sequence located between σ_δ^k and σ_{δm}^c, then we set

    κ(σ_{δ_{n+1}}^{s+i}) = κ(σ_δ^k) + ((i + j)/(L + M + 1)) t,

where t = κ(σ_{δm}^c) − κ(σ_δ^k) if χ(σ_{δm}^c) = 1, and t = (κ(σ_{δm}^c) − κ(σ_δ^k))/2 if χ(σ_{δm}^c) = 0; M is the number of those i ≤ L for which χ(σ_{δ_{n+1}}^{s+i}) = 0; j is the number of those d ≤ i for which χ(σ_{δ_{n+1}}^{s+d}) = 0. A double reduction of an interval between
neighboring points corresponds to putting an IC before the point κ(σ_{δm}^c). If there exists a last point after the n-th step of the construction, κ(σ_δ^k) < ∞, and σ_{δ_{n+1}}^{s+1} < σ_{δ_{n+1}}^{s+2} are the times next to σ_δ^k, then their location on [κ(σ_δ^k), ∞) with gaps 1 or 2 is made according to the first construction rule. Thus, the rule of allocation of all points κ(σ_{δn}^k) (k, n ∈ N) is determined. In addition, sup κ(σ_{δn}^k) = ∞ (by condition (8)). If χ(∞) = 0, we locate the points (κ(σ_{δ1}^k)) (which are finite in number, by condition (4)) uniformly on the interval [0, 1], taking into account the corresponding IC, as in the case χ(∞) = 1 on the (n + 1)-th step. The construction of κ(σ_{δn}^k) is finished. Furthermore, in accordance with ζ we construct a sequence of functions. After the n-th stage we determine a function ξn:

    ξn(t) = ζ(σ_δ^k),    κ(σ_δ^k) ≤ t < κ(σ_{δm}^c),

where the extreme terms of the inequality are neighboring points (we assume σ_δ^0 = 0 and κ(0) = 0, and also κ(∞) = ∞ for the case when there are no points of the n-th construction to the right of t). From conditions (2), (3) and (4) it follows that the sequence (ξn) converges uniformly, since rank δn → 0. According to the construction, (∀n ∈ N) ξn ∈ D. From here ξ = lim ξn ∈ D. Again according to the construction, ξ(κ(σ_{δn}^k)) = ζ(σ_{δn}^k), from here σ_{δn}^k(ξ) ≤ κ(σ_{δn}^k). However, from conditions (6) and (7) it follows that σ_{δn}^k(ξ) = κ(σ_{δn}^k). Actually, it is true for k = 0. Let σ_{δn}^{k−1}(ξ) = κ(σ_{δn}^{k−1}). If there exists t ∈ [σ_{δn}^{k−1}(ξ), κ(σ_{δn}^k)) for which ξ(t) ∉ Δ (from uniform convergence it follows that in this case ξ(t) ∈ ∂Δ), then either t is some point of discontinuity (in this case t is included in (κ(σ_δ^m)) ∩ [κ(σ_{δn}^{k−1}), κ(σ_{δn}^k)), and for any point from the latter sub-sequence, by condition (2), ζ(σ_δ^m) ∈ Δ, i.e. a contradiction), or t is a point of continuity. If on the interval (t, κ(σ_{δn}^k)) there is at least one point κ(σ_δ^m), then, by condition (6), (∃r > 0) ξ(t) ∈ Δ^{−r}, i.e. a contradiction. If on the interval (t, κ(σ_{δn}^k)) there are no points of the form κ(σ_δ^m), it means that ξ is constant there and ξ(t) = ζ(t') = lim ζ(τn), where τn < τ, τn → τ, and, by condition (7), we obtain a contradiction. According to the construction, ξ has jumps at the same points where the right part of condition (5) is fulfilled, and it has IC only when χ(τ) = 0. Hence, the constructed function realizes the structure of ordering and the maps ζ and χ on the subset T21 = (σ_{δn}^k)_{k,n∈N} of the set IMT. On the other hand, any finite τ ∈ IMT can be determined with the help of a sequence of intervals containing it: σ_{δn}^{kn} ≤ τ < σ_{δn}^{kn+1}. In this case τ = sup σ_{δn}^{kn}. Let us define κ(τ) = lim κ(σ_{δn}^{kn}). Note that ζ(τ) = lim ζ(σ_{δn}^{kn}) = ξ(κ(τ)) and κ(σ_{δn}^{kn}) ≤ κ(τ) < κ(σ_{δn}^{kn+1}). By the uniqueness of the correspondence between a value τ ∈ IMT and a sequence of intervals containing it, determined by the sequence (δn), it follows that κ(τ) = τ(ξ), i.e. ζ(τ) = γτ ξ. According to the construction, σ_{δn}^k < ∞ ⇔ κ(σ_{δn}^k) < ∞. Besides: (a) if χ(∞) = 1, then, by condition (8),

    sup{κ(σ_{δn}^k) : σ_{δn}^k < ∞} = ∞ ⟺ sup{τ ∈ IMT : τ < ∞} = ∞,

and in the left part σ_{δn}^k can be replaced by τ ∈ IMT;
(b) if χ(∞) = 0, then, by construction,

    sup{κ(σ_{δn}^k) : σ_{δn}^k < ∞} ≤ 1 ⟺ sup{τ ∈ IMT : τ < ∞} < ∞,

where in the left part σ_{δn}^k can be replaced by τ ∈ IMT. From here it follows that (∀τ ∈ IMT) τ < ∞ ⇔ τ(ξ) < ∞ and, moreover, ζ(τ) = γτ ξ. Due to the absence of IC in ξ before any τ ∈ IMT which is a point of continuity of ξ, we have (∀τ ∈ IMT) χ(τ) = Jτ ξ.

65. Curvilinear integral

We have defined the trace (item 14), we have proved a criterion of equality of the traces of two functions ξ1, ξ2 ∈ D (theorem 6.4), and we have also shown that the trace can be determined in terms of a linear ordering on IMT and of the map ζ. The function χ only makes the linear ordering more precise at points of discontinuity and at infinity (proposition 6.29). Now we define one kind of integral along the trace, called a curvilinear integral along the trace. While constructing it we need not consider an a priori given linearly ordered structure and functions ζ and χ on IMT, for which we would check the conditions of proposition 6.29. Furthermore, we will suppose that in constructions relating to a trace and the class IMT there already exists a realizing function ξ ∈ D. Let IMT(ξ) be the set of all contractions of τ ∈ IMT to ξ, in which an order is determined according to τ(ξ) (and corrected at points of discontinuity of ξ and at infinity). Thus, if ξ1 and ξ2 have the same trace as ξ, then

    τ1 ξ1 < τ2 ξ1 ⟹ τ1 ξ2 < τ2 ξ2,    τ1 ξ1 = τ2 ξ1 ⟹ τ1 ξ2 = τ2 ξ2.

This linearly ordered set will serve as a set of parameters while defining a curvilinear integral. We will name it a set of indexes of states of the corresponding trace of ξ. The set T0(ξ) of all contractions of τ ∈ T0 to ξ plays the role of a countable everywhere dense set in IMT(ξ). The last aspect can be made precise with the help of a metric on IMT(ξ). For any τ1, τ2 ∈ T0 and ξ ∈ D we define a function ρ̃ as follows:

    ρ̃(τ1, τ2 | ξ) = inf{diam Δ : Δ ∈ A0, τ1 ξ ≤ τ2 ξ < τ1 +̇ σΔ(ξ)}

if τ1 ξ ≤ τ2 ξ, and

    ρ̃(τ1, τ2 | ξ) = inf{diam Δ : Δ ∈ A0, τ2 ξ ≤ τ1 ξ < τ2 +̇ σΔ(ξ)}

if τ2 ξ ≤ τ1 ξ. Denote F̃ = σ(γτ, τ ∈ T0).
PROPOSITION 6.30. For any ξ ∈ D the function ρ̃(·, · | ξ) is a metric on T0, and for any τ1, τ2 ∈ T0 it is F̃-measurable.

Proof. F̃-measurability follows immediately from the definition. Fulfilment of the zero axiom and the symmetry axiom is evident. Let us check the triangle axiom. Let τ1(ξ) < τ2(ξ). Investigate three possible cases of location of the third point (excluding coincidence), denoting ρ̃_{ij} = ρ̃(τi, τj | ξ) and omitting the argument ξ. If τ3 < τ1, then from the condition τ3 +̇ σΔ > τ2 it follows that τ1 +̇ σΔ = τ3 +̇ σΔ > τ2; from here ρ̃12 ≤ ρ̃32 ≤ ρ̃13 + ρ̃32. If τ3 > τ2, then from the condition τ1 +̇ σΔ > τ3 it follows that τ1 +̇ σΔ > τ2; from here ρ̃12 ≤ ρ̃13 ≤ ρ̃13 + ρ̃32. If τ1 < τ3 < τ2, then from the conditions τ1 +̇ σ_{Δ1} > τ3 and τ3 +̇ σ_{Δ2} > τ2 it follows that τ1 +̇ σ_{Δ1∪Δ2} ≥ τ1 +̇ σ_{Δ1} +̇ σ_{Δ2} > τ2. From here ρ̃12 ≤ diam(Δ1 ∪ Δ2) ≤ diam(Δ1) + diam(Δ2). Passing on the right part to the lower bound over all Δ1 and Δ2, we obtain ρ̃12 ≤ ρ̃13 + ρ̃32.

In this metric the set IMT(ξ) is in general neither closed nor connected. Each τ ∈ IMT(ξ) which is a point of discontinuity is situated at a positive ρ̃-distance from the set [0, τ) ∩ IMT(ξ) of preceding points, which does not contain a maximal element if Jτ(ξ) = 1. An additive functional on IMT(ξ) is a function of two arguments A(τ1, τ2) such that if τ1 ≥ τ2, then A(τ1, τ2) = 0, and if τ1 ≤ τ2 ≤ τ3, then A(τ1, τ3) = A(τ1, τ2) + A(τ2, τ3). Furthermore, we will be interested in non-negative, left-continuous additive functionals, i.e. such that if τn < τ, τn, τ ∈ IMT, and ρ̃(τn, τ) → 0 (n → ∞), then A(τn, τ) → 0. Let f be a real function on IMT(ξ), δn ∈ DS and τ ∈ IMT(ξ), τ < ∞. Consider the sum

    Σ_{k=1}^∞ f(σ_{δn}^{k−1}) A(σ_{δn}^{k−1}, σ_{δn}^k ∧ τ).

If this sum tends to a limit independent of the sequence (δn) as rank δn → 0, then this limit is called a curvilinear integral of the function f on the segment of the trace [0, τ) ∩ IMT(ξ) with respect to the additive functional A. We will designate this integral as

    ∫_0^τ f(τ1) A(dτ1).
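The defining partition sums can be illustrated numerically. In the sketch below (the helper names and the discretization are illustrative, not from the text), the additive functional is the time increment A(τ1, τ2) = τ2(ξ) − τ1(ξ) along a strictly increasing continuous path, and the first-exit times from balls of radius r play the role of a deducing sequence of rank r; as r decreases, the sum approaches the ordinary integral of f1(ξ(t)) dt:

```python
def exit_times(path, r, t_max, dt=1e-4):
    """First-exit times of |path(t) - path(t0)| >= r: a stand-in for the
    exit times of a deducing sequence of rank r (crude discretization)."""
    times, t, t0 = [0.0], 0.0, 0.0
    while t + dt <= t_max:
        t += dt
        if abs(path(t) - path(t0)) >= r:
            times.append(t)
            t0 = t
    times.append(t_max)
    return times

def curvilinear_sum(path, f, r, t_max):
    # partition sum  sum_k f(path(t_k)) * A(t_k, t_{k+1}),  A = time increment
    ts = exit_times(path, r, t_max)
    return sum(f(path(a)) * (b - a) for a, b in zip(ts, ts[1:]))

path = lambda t: t      # strictly increasing path, so the trace is [0, 1]
f = lambda x: x * x     # integrand f1 on the state space
approx = curvilinear_sum(path, f, r=0.01, t_max=1.0)
exact = 1.0 / 3.0       # integral of t^2 over [0, 1]
print(abs(approx - exact) < 0.01)  # True
```

Here the partition sum is a left-endpoint sum over the exit-time partition, so its error is of the order of the rank r.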
66. Existence of a curvilinear integral

We call the variation of a function f on the interval [0, τ) ∩ IMT(ξ) the functional

    ⋁_0^τ (f) = sup_{δ∈DS} Σ_{k=1}^∞ sup{ |f(σ_δ^{k−1}) − f(τ1)| : τ1 ∈ [σ_δ^{k−1}, σ_δ^k ∧ τ) ∩ IMT(ξ) },

which evidently coincides with the variation of the function g_ξ(t), the natural parametrization of f: (∀t ∈ R+)

    g_ξ(t) = f( sup{ τ ∈ IMT(ξ) : τ(ξ) ≤ t } )

on the interval [0, τ(ξ)). The function g_ξ can be determined in terms of the values g_ξ(σ_{δn}^k(ξ)) = f(σ_{δn}^k), because g_ξ(t) is constant on all intervals of constancy of the function ξ.

PROPOSITION 6.31. Let f be a function of bounded variation. For any τ ∈ IMT(ξ) (τ < ∞), if f is continuous from the left on [0, τ) ∩ IMT(ξ), then there exists a curvilinear integral on this interval (continuity is considered in the sense of the metric ρ̃).

Proof. We use the sequence (δ̄n)_1^∞, where δ̄n is a deducing sequence of order-type ω^n. In this case all points of partition of the n-th order are included in the set of points of partition of the (n + 1)-th order. We have

    S1 ≤ Σ_{k∈N_0^n} f(τk) A(σ_{δ̄n}^{k'}, σ_{δ̄n}^k ∧ τ) ≤ S2,

where k' is the point preceding k under the lexicographical ordering in the alphabet N_0^n, and also

    mk = inf f(τk),  Mk = sup f(τk),  τk ∈ [σ_{δ̄n}^{k'}, σ_{δ̄n}^k ∧ τ) ∩ IMT(ξ),
    S1 = Σ_{k∈N_0^n} mk A(σ_{δ̄n}^{k'}, σ_{δ̄n}^k ∧ τ),  S2 = Σ_{k∈N_0^n} Mk A(σ_{δ̄n}^{k'}, σ_{δ̄n}^k ∧ τ).

As n → ∞ the left term of this inequality does not decrease, and the right one does not increase. Consider a sequence of embedded intervals

    [σ_{δ̄1}^{k1'}, σ_{δ̄1}^{k1} ∧ τ) ⊃ [σ_{δ̄2}^{k2'}, σ_{δ̄2}^{k2} ∧ τ) ⊃ ⋯,

for which M_{kn} − m_{kn} ≥ ε (ε > 0). From the boundedness of the variation of f the finiteness of the number of such sequences follows. For them, because of continuity from the left, there exists a limit lim_{n→∞} f(σ_{δ̄n}^{kn'}) equal to f(τ'), where τ' = lim_{n→∞} σ_{δ̄n}^{kn'} < ∞.
In this case

    f(σ_{δ̄n}^{kn'}) A(σ_{δ̄n}^{kn'}, σ_{δ̄n}^{kn} ∧ τ) → f(τ') A(τ', τ' + 0)    (n → ∞),

where A(τ', τ' + 0) = lim_{n→∞} A(τ', σ_{δ̄n}^{kn} ∧ τ), if the set ⋂_{n=1}^∞ [σ_{δ̄n}^{kn'}, σ_{δ̄n}^{kn} ∧ τ) is not empty. Otherwise the limit of this sequence of terms is equal to zero. Since ε > 0 is arbitrary, the limit of these sums exists. This limit evidently does not depend on the choice of a sequence of deducing sequences (δn) with ranks converging to zero. This means the curvilinear integral exists.

67. Evaluation of curvilinear integrals

It can be made with the help of a natural parametrization. Let τ ∈ IMT(ξ). Determine an additive functional a_ξ: a_ξ(t1, t2) = a_ξ(0, t2) − a_ξ(0, t1), where t1 ≤ t2 and

    a_ξ(0, t) = A( 0, sup{ τ ∈ IMT(ξ) : τ(ξ) ≤ t } ) = sup{ A(0, τ) : τ ∈ IMT(ξ), τ(ξ) ≤ t }.

In this case a_ξ(t1, t2) = 0 if ξ is constant on [t1, t2]. Then on IMT(ξ)

    ∫_0^τ f(τ1) A(dτ1) = ∫_0^{τ(ξ)} g_ξ(t) a_ξ(dt),

where g_ξ(t) is defined in item 66. If f(τ) = f1(ζ(τ)), then

    g_ξ(t) = f1( ζ( sup{ τ ∈ IMT(ξ) : τ(ξ) ≤ t } ) ) = f1( ξ(t) ).

This case corresponds to the usual definition of a curvilinear integral, when a curve is determined with the help of a numerical parameter. The proof consists of checking the usual rule of change of variables in the integral.

68. Random curvilinear integral

Let (Px) be an admissible family of probability measures on F, determining a random process with trajectories from D. For any x ∈ X and τ ∈ IMT the distribution Px(τ < t | F◦)(ξ) (t ∈ R+, ξ ∈ D) can be interpreted as a conditional distribution of τ under fixed ξ, which is defined for almost all ξ. The expectation Ex(τ | F◦)(ξ), considered as a function of τ ∈ IMT(ξ), is an example of a non-decreasing, continuous from the left function, which determines an additive functional μ1(τ1, τ2) = Ex(τ2 − τ1 | F◦) on IMT(ξ). Continuity from the left follows from the fact that if τn ∈ IMT(ξ), τn < τ and ρ̃(τn, τ) → 0, then τ is not a point of discontinuity, and hence, for any ξ1 with the same trace as ξ, τn(ξ1) → τ(ξ1). A simple example of a curvilinear
integral of a continuous function f1 is the expression

    ∫_0^∞ Px( f1(Xt); t < τ | F◦ ) dt = ∫_0^τ f1(X_{τ1}) μ1(dτ1).
For a semi-Markov process the additive functional μ1 can be written in a homogenous form: μ1(τ1, τ2) = μ(τ2) − μ(τ1), where μ(τ) = P_{X0}(τ | F◦), and therefore

    μ(τ1 +̇ τ2) = μ(τ1) + μ(τ2) ∘ θ_{τ1}

(homogenous additivity). In this case the linearly ordered family IMT(ξ) is a process with independent increments with respect to the measure Px(· | F◦): (∀τ1, τ2 ∈ IMT(ξ), τ1 < τ2) τ1 and τ2 − τ1 are independent. A vectorial additive (multiplicative) functional is a finite system of one-dimensional additive (multiplicative) functionals (F◦-measurable as well). Furthermore, we use d-dimensional functionals (d ≥ 2). An example of a vector additive functional is the integral S_t^g ≡ ∫_0^t g(Xs) ds, where g(x) is a vector-valued function, or a corresponding curvilinear integral.

69. Probability of hitting in an interval of constancy

We know that trajectories of a typical SM process contain intervals of constancy. A quantitative description of the localization and lengths of IC can be derived from the Lévy formula [6.3] according to the Lévy representation [6.4] (the most important outcome of the Lévy formula). There are two kinds of possible IC: Poisson and conditionally fixed. We are interested in the probability that a point t (t ∈ R+) belongs to some IC of the function ξ ∈ D. In addition, we will distinguish the events {t ∈ ICR} and {t ∈ ICF} of t hitting a random (Poisson) interval of constancy (ICR) and a fixed one (ICF). Besides, we will consider a function

    Rt(ξ) = t − sup{ τ(ξ) : τ ∈ IMT, τ(ξ) ≤ t },

which is called an (inverse) sojourn time in IC (see item 1.18). Let us evaluate the functional

    ∫_0^∞ e^{−λt} Ex( f(Xt) e^{−λ1 Rt}; A ∩ {t < τ} | F◦ ) dt    (λ, λ1 > 0),

where f is a continuous function on X, and for A there are three possibilities: {t ∉ IC}, {t ∈ ICF} or {t ∈ ICR} (in the first event, obviously, Rt = 0). Let τ = σΔ, where Δ ∈ A and the set Δ ∪ ∂Δ is compact.
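The quantity Rt can be computed directly for a concrete piecewise-constant trajectory: it is the time elapsed since the last intrinsic time not exceeding t, i.e. since the start of the current interval of constancy. A small sketch (the jump times are an arbitrary illustrative choice):

```python
import bisect

def inverse_sojourn_time(t, jump_times):
    """R_t = t - sup{ tau(xi) <= t }: time elapsed since the last jump
    (intrinsic time) of a step trajectory with the given sorted jump
    times; before the first jump the supremum is taken to be 0."""
    i = bisect.bisect_right(jump_times, t)
    last = jump_times[i - 1] if i > 0 else 0.0
    return t - last

jumps = [0.7, 1.5, 4.0]   # a step path constant on [0, 0.7), [0.7, 1.5), ...
print(inverse_sojourn_time(0.5, jumps))   # 0.5: still in the first interval
print(inverse_sojourn_time(2.0, jumps))   # 0.5: equals 2.0 - 1.5
```

The functional evaluated in the text weights each point t of the time axis by e^{−λ1 Rt}, so small values of Rt (points shortly after a jump) contribute most.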
PROPOSITION 6.32. Let (Px) ∈ SM. Then (∀x ∈ X) Px-a.s.

    ∫_0^∞ e^{−λt} Ex( f(Xt) e^{−λ1 Rt}; t < σΔ, t ∈ ICR | F◦ ) dt = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} B_{λ+λ1}(dτ),

    ∫_0^∞ e^{−λt} Ex( f(Xt) e^{−λ1 Rt}; t < σΔ, t ∈ ICF | F◦ ) dt = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} C_{λ+λ1}(dτ),

    ∫_0^∞ e^{−λt} Ex( f(Xt); t < σΔ, t ∉ IC | F◦ ) dt = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} A(dτ),

where

    A(τ1, τ2) = a_{τ2} − a_{τ1},
    B_λ(τ1, τ2) = (1/λ) ∫_0^∞ ( 1 − e^{−λu} ) ( n_{τ2}(du) − n_{τ1}(du) ),
    C_λ(τ1, τ2) = (1/λ) Σ_{τ1 ≤ τi < τ2} ( 1 − Px( exp(−λ σ0 ∘ θ_{τi}) | F◦ ) ).
Proof. Note that Rt = lim (t − σ_δ^k) (rank δ → 0), where σ_δ^k ≤ t < σ_δ^{k+1}. We have

    ∫_0^∞ e^{−λt} Ex( f(Xt) e^{−λ1 Rt}; t < σΔ | F◦ ) dt = Ex( ∫_0^{σΔ} e^{−λt} e^{−λ1 Rt} f(Xt) dt | F◦ )

    = lim_{rank δ→0} Σ_{σ_δ^k < σΔ} f(X_{σ_δ^k}) Ex( ∫_{σ_δ^k}^{σ_δ^{k+1}} e^{−λt} e^{−λ1 (t − σ_δ^k)} dt | F◦ )

    = lim_{rank δ→0} Σ_k f(X_{σ_δ^k}) exp( −b(λ, σ_δ^k) ) (1/(λ+λ1)) ( 1 − exp( −b(λ+λ1, σ_δ^{k+1}) + b(λ+λ1, σ_δ^k) ) ).    [6.5]
An increment of b(λ, τ) at a point of continuity (τ ≠ τi) is equal to an increment of the function λ a_τ + ∫_{0+}^∞ (1 − e^{−λu}) n_τ(du), and at a point of discontinuity (τ = τi) it is equal to −log Ex( exp(−λ σ0 ∘ θ_{τi}) | F◦ ). From here we discover that expression [6.5] is equal to

    ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} ( A(dτ) + B_{λ+λ1}(dτ) + C_{λ+λ1}(dτ) ).

We have

    ∫_0^∞ e^{−λt} Ex( f(Xt) e^{−λ1 Rt}; t ∈ ICF, t < σΔ | F◦ ) dt
    = Ex( ∫_0^{σΔ} f(Xt) e^{−λt} e^{−λ1 Rt} I_{ICF}(t) dt | F◦ )
    = Σ_{τi < σΔ} Ex( f(X_{τi}) ∫_{τi}^{τi +̇ σ0} e^{−λt} e^{−λ1 (t − τi)} dt | F◦ )
    = (1/(λ+λ1)) Σ_{τi < σΔ} f(X_{τi}) Ex( e^{−λτi} ( 1 − e^{−(λ+λ1) σ0 ∘ θ_{τi}} ) | F◦ ).

Note that σ0 ∘ θ_{τi} = (τi +̇ σ0) − τi is independent of τi with respect to the measure Px(· | F◦). This property does not immediately follow from properties of SM processes, because τi is not a regeneration time, but it can be obtained as a limit: for σ_δ^k ↑ τi and σ_δ^{k+1} ↓ τi +̇ σ0 we have σ_δ^{k+1} − σ_δ^k → σ0 ∘ θ_{τi}. Therefore

    Σ_{τi < σΔ} f(X_{τi}) e^{−b(λ,τi)} (1/(λ+λ1)) ( 1 − Ex( e^{−(λ+λ1) σ0 ∘ θ_{τi}} | F◦ ) ) = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} C_{λ+λ1}(dτ).

Evidently, {t ∈ ICR} = ⋃_{ε>0} {t ∈ ICRε}, where {t ∈ ICRε} means the hitting of t into a random interval of length more than ε. We have

    {t ∉ ICRε} = ⋃_{δ∈DS} ⋃_k ( {t ∈ [σ_δ^k, σ_δ^{k+1})} ∩ {P̃([ε, ∞) × [σ_δ^k, σ_δ^{k+1})) = 0} ),

where P̃(A × [τ1, τ2]) = P_{τ2}(A) − P_{τ1}(A) and P̃ is a Poisson random measure on
(0, ∞) × IMT(ξ). From here

    ∫_0^∞ e^{−λt} Ex( f(Xt) e^{−λ1 Rt}; t ∉ ICR, t < σΔ | F◦ ) dt

    = lim_{ε→0} lim_{rank δ→0} ∫_0^∞ e^{−λt} Σ_k Ex( f(Xt) e^{−λ1 Rt}; t ∈ [σ_δ^k, σ_δ^{k+1} ∧ σΔ), P̃([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ)) = 0 | F◦ ) dt

    = lim_{ε→0} lim_{rank δ→0} Σ_k f(X_{σ_δ^k}) Ex( ∫_{σ_δ^k}^{σ_δ^{k+1} ∧ σΔ} e^{−λt} e^{−λ1 (t − σ_δ^k)} dt; P̃([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ)) = 0 | F◦ )

    = lim_{ε→0} lim_{rank δ→0} (1/(λ+λ1)) Σ_k f(X_{σ_δ^k}) Ex( exp(λ1 σ_δ^k) [ exp(−(λ+λ1) σ_δ^k) − exp(−(λ+λ1)(σ_δ^{k+1} ∧ σΔ)) ]; P̃([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ)) = 0 | F◦ ).    [6.6]

Using the Lévy representation for τ ∈ IMT, we discover that the conditional average in [6.6] is equal to

    exp(−b(λ, σ_δ^k)) Ex( 1 − exp(−(λ+λ1)(σ_δ^{k+1} ∧ σΔ − σ_δ^k)); P̃([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ)) = 0 | F◦ )

    = exp(−b(λ, σ_δ^k)) Ex( 1 − exp( −(λ+λ1) [ a(σ_δ^{k+1} ∧ σΔ) − a(σ_δ^k) + ∫_{0+}^{ε} u ( P(du, σ_δ^{k+1} ∧ σΔ) − P(du, σ_δ^k) ) + Σ_{σ_δ^k ≤ τi < σ_δ^{k+1} ∧ σΔ} σ0 ∘ θ_{τi} ] ); P̃([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ)) = 0 | F◦ )

    = exp(−b(λ, σ_δ^k)) exp( −ñ([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ)) ) × ( 1 − exp( −(λ+λ1)( a(σ_δ^{k+1} ∧ σΔ) − a(σ_δ^k) ) − ∫_{0+}^{ε} ( 1 − e^{−(λ+λ1)u} ) ( n(du, σ_δ^{k+1} ∧ σΔ) − n(du, σ_δ^k) ) ) × Π_{σ_δ^k ≤ τi < σ_δ^{k+1} ∧ σΔ} Ex( exp(−(λ+λ1) σ0 ∘ θ_{τi}) | F◦ ) ),
where ñ is the intensity measure of the Poisson random measure P̃. This expression, with the restriction {t ∉ ICRε}, differs from the corresponding factor without this restriction by the term exp(−ñ([ε, ∞) × [σ_δ^k, σ_δ^{k+1} ∧ σΔ))) (which tends to 1 as rank δ → 0) and by the upper limit of integration with respect to the measure n(du, ·). Let

    B_λ^ε(τ1, τ2) = (1/λ) ∫_{0+}^{ε} ( 1 − e^{−λu} ) ( n(du, τ2) − n(du, τ1) ).

From the evident continuity of the integral ∫_{0+}^{ε} (1 − e^{−λu}) n(du, τ) in τ (with respect to ρ̃) as δ → 0, the convergence of the sum to the curvilinear integral

    ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} ( A(dτ) + B^ε_{λ+λ1}(dτ) + C_{λ+λ1}(dτ) )

follows. On the other hand, from the convergence ∫_{0+}^{ε} (1 − e^{−u}) n(du, τ) → 0 as ε → 0, the convergence

    ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} B^ε_{λ+λ1}(dτ) → 0

follows, and we discover that

    ∫_0^∞ e^{−λt} Px( f(Xt) e^{−λ1 Rt}; t ∉ ICR, t < σΔ | F◦ ) dt = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} ( A(dτ) + C_{λ+λ1}(dτ) ),

hence

    ∫_0^∞ e^{−λt} Px( f(Xt); t ∉ IC, t < σΔ | F◦ ) dt = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} A(dτ),

and also

    ∫_0^∞ e^{−λt} Px( f(Xt) e^{−λ1 Rt}; t ∈ ICR, t < σΔ | F◦ ) dt = ∫_0^{σΔ} f(Xτ) e^{−b(λ,τ)} B_{λ+λ1}(dτ).

6.9. Characteristic operator and curvilinear integral

In this section a connection between two directions of investigation of semi-Markov processes is considered:
(1) an analytical direction, in which the principal objects of investigation are the transition generating function and the lambda-characteristic operator (see Chapters 3 and 5);
(2) a probability direction, dealing with conditional distributions of the time run along the trace.

A supposition about the existence of a rather simple connection between these directions arises due to Markov processes, which are described in terms of both directions with some extreme but similar properties.

70. Representation of operator

Consider a SM process. Assume that at a point x ∈ X0 there exists a limit

    Aλ I(x) = lim_{Δ↓x} ( Ex( e^{−λσΔ} ) − 1 ) / Ex( σΔ ).

Evidently, the function Aλ I has all derivatives in λ for λ > 0 and its first derivative can be represented in the form

    −(∂/∂λ) Aλ I(x) = lim_{Δ↓x} Ex( σΔ e^{−λσΔ} ) / Ex( σΔ ).

From here it follows that this derivative is completely monotone. Possessing such a property, this function, by Bernstein's theorem, can be represented as an outcome of the Laplace transformation of some distribution function. From here, in turn, it follows that the function −Aλ I(x) itself can be represented in Lévy form (see theorem 6.11):

    −Aλ I(x) = λ α(x) + ∫_{0+}^∞ ( 1 − e^{−λu} ) ν(du | x),

where α(x) is some non-negative function and ν(du | x) is some measure on (0, ∞) depending on x. On the other hand, the following limit holds:

    −Aλ I(x) = lim_{Δ↓x} ( 1 − Ex( e^{−b(λ,σΔ)} ) ) / Ex( σΔ ),

where b(λ, σΔ) → 0 for almost all ξ. We suppose that

    Ex( b(λ, σΔ)² ) / Ex( σΔ ) → 0,    [6.7]

Px-a.s. (we will justify this assumption below). Under this condition

    −Aλ I(x) = lim_{Δ↓x} Ex( b(λ, σΔ) ) / Ex( σΔ ) = lim_{Δ↓x} Ex( b0(λ, σΔ) ) / Ex( σΔ ),

and, hence, in the expansion of Aλ I(x) the coefficients have the sense

    α(x) = lim_{Δ↓x} Ex( a(σΔ) ) / Ex( σΔ ),    ν(B | x) = lim_{Δ↓x} Ex( n(B, σΔ) ) / Ex( σΔ ).    [6.8]
These formulae show the connection between parameters of the Lévy expansion for the process of time run along the trace and the lambda-characteristic operator. This connection can be made more precise with the help of the additive functionals a(τ) and n(B, τ) represented by curvilinear integrals.

71. Continuous and discrete parts of functional

Consider a functional μ(τ) = E_{X0}(τ | F◦) (τ ∈ IMT), for which the following representation holds: μ(τ1 +̇ τ) = μ(τ1) + μ(τ) ∘ θ_{τ1}. Let Ex(τ) < ∞. Then Px-a.s. μ(τ) < ∞. Let (δn)_{n=1}^∞ be a sequence of deducing sequences with decreasing ranks (rank δn → 0), and let δ̄n = δ1 × ⋯ × δn be the composition of the first n sequences, i.e. a sequence of order-type ω^n (see item 61). Consider a representation of μ(τ) in the form of a finite sum, where τ = τβ < ∞ (β ∈ A^m):

    μ(τ) = Σ_{α<β} μ(σ_{Δα}) ∘ θα

(for the denotation of multi-dimensional indexes, see item 61). We represent this sum in the form of two summands:

    Σ_{α<β} μ(σ_{Δα}) ∘ θα I( μ(σ_{Δα}) ∘ θα ≤ ε ),    Σ_{α<β} μ(σ_{Δα}) ∘ θα I( μ(σ_{Δα}) ∘ θα > ε ).

As n → ∞ and ε → 0, each of these sums tends to a limit. The first limit is the continuous component of the additive functional, μ0(τ), which is obviously equal to

    μ0(τ) = a_τ + ∫_{0+}^∞ u n_τ(du).

The second limit is the discrete component, which can be represented in the form

    M(τ) = Σ_{τi<τ} E_{X0}( σ0 | F◦ ) ∘ θ_{τi}.
72. Representation of parameters in the form of integrals

According to item 65, the set IMT(ξ) can be considered as a metric space with the metric ρ̃. Due to the continuity from the left of the functionals b(λ, τ) and μ(τ), they coincide with the distribution functions of some measures b̃ and μ̃ on Borel subsets of the set IMT(ξ). Obviously, the measure b̃ and its components, including those of the Lévy expansion, are absolutely continuous with respect to the measure μ̃. Let

    β(λ, τ) = λ α'(τ) + ∫_{0+}^∞ ( 1 − e^{−λu} ) ν'(du | τ) + Σ_{τi<τ} p(τi)

be the density of the measure b̃ with respect to μ̃, i.e.

    b(λ, τ) = ∫_0^τ β(λ, τ1) μ(dτ1),

    b0(λ, τ) = ∫_0^τ λ α'(τ1) μ(dτ1) + ∫_0^τ ∫_{0+}^∞ ( 1 − e^{−λu} ) ν'(du | τ1) μ(dτ1),

where α'(τ) and ν'(du | τ) correspond to a_τ and n(du, τ), and

    p(τi) = ( −log E_{X0}( e^{−λσ0} | F◦ ) / P_{X0}( σ0 | F◦ ) ) ∘ θ_{τi}.

Let α'(τ) and ∫_{0+}^∞ (1 − e^{−λu}) ν'(du | τ) be continuous in some neighborhood of the point 0. Then, by formulae [6.8],

    α(X0) = lim_{Δ↓x} E_{X0}( a(σΔ) ) / E_{X0}( σΔ ) = lim_{Δ↓x} E_{X0}( ∫_0^{σΔ} α'(τ) μ(dτ) ) / E_{X0}( ∫_0^{σΔ} μ(dτ) ) = α'(0),

    ν(B | X0) = lim_{Δ↓x} Ex( n(B, σΔ) ) / Ex( σΔ ) = lim_{Δ↓x} Ex( ∫_0^{σΔ} ν'(B | τ) μ(dτ) ) / E_{X0}( ∫_0^{σΔ} μ(dτ) ) = ν'(B | 0).

In this case

    b0(λ, τ) = ∫_0^τ ( −Aλ I ∘ X_{τ1} ) μ(dτ1)    [6.9]

is the required representation of a parameter of the Lévy expansion in terms of the lambda-characteristic operator.

73. Conditional degeneration of Markov processes

Let (Px) be a family of measures of a Markov process with a pseudo-local lambda-characteristic operator in some region Δ ∈ A. Then in this region the process does not have any fixed intervals of constancy and, besides, condition [6.7] is fulfilled. Hence,
formula [6.9] holds. Since for a Markov process Aλ I = −λ (see theorem 3.6(2)), we discover that for this process b(λ, τ) = λ μ(τ) for all τ < σΔ. It means that the time run along a fixed trace represents a determinate (non-random) process. In other words, the conditional distribution of the time run along the trace of a Markov process is degenerate [HAR 89].
6.10. Stochastic integral with respect to semi-Markov process

In this section we assume X = R^d (d ≥ 1) and (Px) to be a family of measures of a semi-Markov process of diffusion type. We consider a construction of stochastic integrals appropriate to a semi-Markov process as an integrating function. Stochastic integrals for processes of such a class are required in the problem of absolute continuity (see [HAR 02]). Since a semi-Markov process is not Markovian (in general), the method of construction of stochastic integrals based on the classic theory of martingales [LIP 86] is not applicable. However, as we know, semi-Markov processes possess a partly Markov structure, which can be used in order to construct some kind of stochastic integral with the help of a modernized method. Of course, this method can be applied to proper Markov processes of diffusion type too. For them it brings the same results as the traditional method of stochastic integration. In this respect the former can be considered as a generalization of the latter.
6.10.1. Semi-martingale and martingale along the trace

74. Asymptotic formulae

From items 5.34 and 5.36 it follows that neighborhoods of spherical form play a special role due to their simple asymptotical properties. Let B = B(0, 1) and Br = Br(x) = x + rB be an open ball with radius r and center x. In item 5.36 [5.35] the following asymptotical representation is proved:

    f_{σBr}(λ, ϕ | x) − ϕ(x) = r² A_λ^{(0)}(ϕ | x) + o(r²)    (r → 0),    [6.10]

where

    A_λ^{(0)}(ϕ | x) = (1/d) ( (1/2) Σ_{i,j=1}^d a^{ij}(x) D_{ij} ϕ(x) + Σ_{i=1}^d b^i(x) D_i ϕ(x) − c(x, λ) ϕ(x) ),

λ ≥ 0, and ϕ is a twice continuously differentiable function. The second asymptotical formula relates to the expectation of the first exit time from a small spherical neighborhood of the starting point of the process. In equation
(5.17) we assume c(0, x) ≡ 0 and 0 < γ(x) < ∞, where γ(x) = ∂c(λ, x)/∂λ |_{λ=0}. From the results of item 5.37 the asymptotics follows:

    Ex( σ_{Br} ) = ( r² γ(x) + o(r²) ) / d    (r → 0).    [6.11]

Formula [6.11] can be generalized as follows. Let us consider the integral S_t^g = ∫_0^t g(Xs) ds, where g is a continuous positive function on X. Continuity from the right of trajectories implies the following asymptotics of the expectation:

    Ex( S^g_{σBr} ) = ( r² γ(x) g(x) + o(r²) ) / d    (r → 0).    [6.12]

Let us assume the function γ to be continuous on G. Formulae [6.10] and [6.12] imply the equivalence: as r → 0,

    ( Ex( exp(−λ σ_{Br}) ϕ(X_{σBr}) ) − ϕ(x) ) / Ex( S^g_{σBr} ) = ( A_λ^{(0)}(ϕ | x) / ( g(x) γ(x) ) ) ( 1 + o(1) ).    [6.13]

This equivalence has a key meaning in the construction of a stochastic integral with respect to a semi-Markov process. As will be shown below, different functions g generate different additive functionals determining curvilinear integrals.

75. Sequence of inscribed balls

Let G ∈ A and x ≡ ξ(0) ∈ G. We denote r(x, G) = sup{ r > 0 : Br(x) ⊂ G }. For a given value u > 0 let σ_{G,u}(ξ) = σ_{r(X0(ξ),G) ∧ u}(ξ). Then σ_{G,u} ∈ T (a Markov time) and σ_G = σ_{G,u} +̇ σ_G. Hence for any n ≥ 1 it is true that σ_G = σ^n_{G,u} +̇ σ_G, where σ^n_{G,u} = σ_{G,u} +̇ ⋯ +̇ σ_{G,u} (on the right of the equality the identical summand is repeated n times). Let us set σ^0_{G,u} = 0, and in the case ξ(0) ∉ G let σ_{G,u}(ξ) = 0. Hence for any ξ there exists the limit

    lim_{n→∞} σ^n_{G,u} ≡ σ^∞_{G,u} ≤ σ_G.

Evidently, lim x_n ∈ G, where x_n = X_{σ^n_{G,u}}, and consequently, if ξ ∈ C (continuous), then σ^∞_{G,u} = σ_G.
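The asymptotics [6.11] can be checked by simulation in the simplest case γ ≡ 1, i.e. for standard Brownian motion, for which Ex σ_{Br} = r²/d holds exactly. The Euler scheme below is only an illustrative sketch (the step size, sample count and seed are arbitrary choices), and the discretization slightly overestimates the exit time:

```python
import random

def exit_time_ball(d, r, dt, rng):
    """Euler simulation of standard d-dimensional Brownian motion started
    at the center of the ball of radius r; returns the first exit time."""
    x = [0.0] * d
    t = 0.0
    sd = dt ** 0.5
    while sum(c * c for c in x) < r * r:
        x = [c + rng.gauss(0.0, sd) for c in x]
        t += dt
    return t

rng = random.Random(7)
d, r = 2, 1.0
est = sum(exit_time_ball(d, r, 1e-3, rng) for _ in range(2000)) / 2000
print(round(est, 2))  # close to r*r/d = 0.5
```

The estimate carries both a Monte Carlo error and a positive discretization bias of order sqrt(dt), so it sits slightly above r²/d.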
The method of inscribed balls can easily be generalized to the case of inscribed neighborhoods of rather arbitrary form, if a fixed family of neighborhoods corresponds to every point of the space of states.

76. Semi-martingale and martingale

Consider a semi-Markov process of diffusion type, controlled by equation [5.17]. Let G ∈ A be a bounded set and x ∈ G. Consider a deducing sequence from G of
rank r. Let N^δ_G be the number of jumps that the stepped function L_δ makes before its first exit from G; let t_k be the time of the k-th jump and x_k the value of the process at the time of the k-th jump. In this case there exists Δ_{k+1} ∈ A_o such that t_{k+1} = t_k ∔ σ_{Δ_{k+1}}. Consider a representation of the integral of the function ϕ(X_{σ_G}). We have
\[
E_x\bigl[\varphi\bigl(X_{\sigma_G}\bigr)-\varphi(x)\bigr]
=E_x\sum_{k=1}^{N^{\delta}_G}\bigl(\varphi(x_k)-\varphi(x_{k-1})\bigr)
=\sum_{n=1}^{\infty}E_x\Bigl[\sum_{k=1}^{n}\bigl(\varphi(x_k)-\varphi(x_{k-1})\bigr);\,N^{\delta}_G=n\Bigr]
\]
\[
=\sum_{k=1}^{\infty}E_x\bigl[\varphi(x_k)-\varphi(x_{k-1});\,N^{\delta}_G\ge k\bigr]
=\sum_{k=1}^{\infty}E_x\bigl[E_{x_{k-1}}\bigl(\varphi\bigl(X_{\sigma_{\Delta_k}}\bigr)-\varphi\bigl(X_0\bigr)\bigr);\,N^{\delta}_G\ge k\bigr].
\]
Using equivalence [6.13] with λ = 0 and g(x) = A^{(0)}_0(ϕ | x)/γ(x), we obtain
\[
=\sum_{k=1}^{\infty}E_x\bigl[E_{x_{k-1}}\bigl(S^{g}_{\sigma_{\Delta_k}}\bigr)\bigl(1+\varepsilon(k,r)\bigr);\,N^{\delta}_G\ge k\bigr],
\]
where ε(k, r) → 0 (r → 0) uniformly in k. From here the latter expression can be rewritten as follows:
\[
=\sum_{k=1}^{\infty}E_x\bigl[E_{x_{k-1}}S^{g}_{\sigma_{\Delta_k}};\,N^{\delta}_G\ge k\bigr]\bigl(1+\varepsilon(r)\bigr)
=\sum_{k=1}^{\infty}E_x\bigl[S^{g}_{t_k}-S^{g}_{t_{k-1}};\,N^{\delta}_G\ge k\bigr]\bigl(1+\varepsilon(r)\bigr)\longrightarrow E_x S^{g}_{\sigma_G}.
\]
Denoting A_0(ϕ | x) = A^{(0)}_0(ϕ | x)/γ(x), we obtain the formula
\[
E_x\bigl[\varphi\bigl(X_{\sigma_G}\bigr)\bigr]-\varphi(x)=E_x\int_0^{\sigma_G}A_0\bigl(\varphi\mid X_t\bigr)\,dt.\qquad [6.14]
\]
This formula is well known in the theory of Markov processes [DYN 63], where A_0(ϕ | x) is the characteristic Dynkin operator. From the additivity of the functional S^g_τ it follows that this formula remains true after replacing σ_G by an arbitrary time τ ∈ T^o. Let ϕ(x) = x^i − y^i. Then A^{(0)}_0(ϕ | y) = b^i(y), where b(y) = (b^1(y), ..., b^d(y)) is the vector of coefficients of the first derivatives in equation [5.17]. Extending the notation S^g_t to vector functions g, we can write
\[
E_x\bigl[X_\tau-X_0\bigr]=E_x S^{b/\gamma}_{\tau}=E_x\int_0^{\tau}\frac{b\bigl(X_t\bigr)}{\gamma\bigl(X_t\bigr)}\,dt.
\]
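Formula [6.14] can be illustrated numerically. The sketch below is not from the book: it assumes, for simplicity, that the process is a standard Brownian motion on G = (0, 1) (so γ ≡ 1 and A_0(ϕ | x) = ϕ''(x)/2), takes ϕ(x) = x², for which A_0(ϕ | x) = 1, and compares both sides of the Dynkin-type identity by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

trials, dt = 4000, 1e-3
x = np.full(trials, 0.5)          # all paths start at x0 = 0.5
itime = np.zeros(trials)          # accumulates ∫_0^sigma A0(phi|X_t) dt, A0 phi = 1
alive = np.ones(trials, dtype=bool)
for _ in range(20_000):           # 20 time units, ample for exit from (0, 1)
    if not alive.any():
        break
    x[alive] += rng.normal(0.0, np.sqrt(dt), alive.sum())
    itime[alive] += dt
    alive &= (0.0 < x) & (x < 1.0)

phi_exit = np.clip(x, 0.0, 1.0) ** 2   # phi(X_sigma), overshoot clamped to boundary
lhs = phi_exit.mean() - 0.25           # E_x[phi(X_sigma)] - phi(x0)
rhs = itime.mean()                     # E_x ∫_0^sigma A0(phi|X_t) dt = E_x[sigma_G]
assert abs(lhs - rhs) < 0.05
```

Both sides estimate the exact value x(1 − x) = 0.25, since E_x[ϕ(X_σ)] = x and E_x[σ_G] = x(1 − x) for Brownian motion on (0, 1).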
Taking into account this formula, we consider a vector additive functional (process)
\[
M^{(1)}_t=X_t-X_0-S^{b/\gamma}_t.
\]
Additivity of this functional follows from additivity of the difference:
\[
X_{\tau_1\dotplus\tau_2}-X_0=X_{\tau_1}-X_0+\bigl(X_{\tau_2}-X_0\bigr)\circ\theta_{\tau_1}.
\]
In order to prove the martingale property of this process we note that for any τ_1 and τ_2 from T^0 there exists τ_3 ∈ T^0 such that on the set {τ_1 ≤ τ_2} we have τ_2 = τ_1 ∔ τ_3 (the proof follows from the relation {t < σ_Δ} ⇒ {σ_Δ = t ∔ σ_Δ}; see item 2.16). Hence it is sufficient to consider the pair of times τ_1 and τ_1 ∔ τ_2. Using the semi-Markov property, we have
\[
E_x\bigl(M^{(1)}_{\tau_1\dotplus\tau_2}\mid F_{\tau_1}\bigr)
=E_x\bigl(M^{(1)}_{\tau_1}+M^{(1)}_{\tau_2}\circ\theta_{\tau_1}\mid F_{\tau_1}\bigr)
=M^{(1)}_{\tau_1}+E_{X_{\tau_1}}\bigl(M^{(1)}_{\tau_2}\bigr)=M^{(1)}_{\tau_1}.
\]
So the martingale property is fulfilled on the partly ordered set of Markov times T^0, but in general it is not fulfilled on the whole semi-axis, or on any other interval. By analogy with semi-Markov processes such a process would naturally be called a semi-martingale (unfortunately, this term is frequently used for the union of the sub- and super-martingale classes). The definition of a stochastic integral can be made in terms of semi-martingales. However, the properties of semi-martingales are not sufficient to derive the Itô formula or a close analog of it. For this aim we use another modification of the martingale, namely an F^0-measurable martingale along the trace. Let us note that formula [6.14] can be rewritten as follows:
\[
E_x\bigl[\varphi\bigl(X_{\sigma_G}\bigr)\bigr]-\varphi(x)=E_x\int_0^{\sigma_G}\frac{A_0\bigl(\varphi\mid X_\tau\bigr)}{\gamma\bigl(X_\tau\bigr)}\,d\mu_\tau.\qquad [6.15]
\]
On the basis of this formula we obtain the second variant of a vector additive functional, which can be called a martingale on the partly ordered set T^0, or a martingale along the trace:
\[
M^{(2)}_\tau=X_\tau-X_0-\mu^{f}_\tau,\qquad f=b/\gamma.
\]
The martingale property follows from the relation
\[
E_x\bigl(M^{(2)}_{\tau_1\dotplus\tau_2}\mid F_{\tau_1}\bigr)
=M^{(2)}_{\tau_1}+E_{X_{\tau_1}}\bigl(M^{(2)}_{\tau_2}\bigr)=M^{(2)}_{\tau_1}.
\]
The martingale (M^{(2)}_τ) cannot be extended to a "semi-martingale". Its realizations are determined on every trace of a trajectory. The latter is considered as an ordered set of points from the metric space X. In the given example the F^0-measurable additive functional corresponds to the first exit times from open sets. In contrast to the semi-martingale, the martingale along the trace can also be determined in the case when the expectation of this time is infinite (see examples of additive functionals).
77. Quadratic characteristic

Let us show that the martingale M ≡ M^{(2)} is quadratically integrable, and find its quadratic characteristic. Note that the martingale M (like the original process) has dimension d; hence its quadratic characteristic is a matrix with elements
\[
K^{ij}(M,\tau)=E_x\bigl(M^i_\tau M^j_\tau\bigr),\qquad\tau\in T^0.
\]
We see that M_0 = 0 P_x-a.s. Consider the expression
\[
K^{ij}_m\bigl(M,\sigma_G\bigr)=E_x\Bigl[\Bigl(\sum_{k=1}^{N}\bigl(M^i_{t_k}-M^i_{t_{k-1}}\bigr)\Bigr)\Bigl(\sum_{k=1}^{N}\bigl(M^j_{t_k}-M^j_{t_{k-1}}\bigr)\Bigr)\Bigr],\qquad [6.16]
\]
where t_k = σ^k_{δ_m}, δ_m is a deducing sequence from G ∈ A of rank r_m (r_m → 0 as m → ∞), and N = N^{δ_m}_G(ξ) is the number of jumps of the quantified process L_{δ_m}(ξ) by the time σ_G (here σ_G(ξ) = σ^N_{δ_m}(ξ)). Denote ΔM^s_k = M^s_{t_k} − M^s_{t_{k−1}}. Then
\[
K^{ij}_m\bigl(M,\sigma_G\bigr)=\sum_{n=1}^{\infty}E_x\bigl[M^i_{\sigma_G}M^j_{\sigma_G};\,N=n\bigr]
=\sum_{n=1}^{\infty}E_x\Bigl[\sum_{k=1}^{n}\Delta M^i_k\,\Delta M^j_k
+\sum_{k=1}^{n-1}\sum_{l=k+1}^{n}\Delta M^i_k\,\Delta M^j_l
+\sum_{l=1}^{n-1}\sum_{k=l+1}^{n}\Delta M^i_k\,\Delta M^j_l;\;N=n\Bigr].
\]
By varying the order of summation,
\[
\sum_{n=1}^{\infty}\sum_{k=1}^{n-1}\sum_{l=k+1}^{n}\longrightarrow\sum_{k=1}^{\infty}\sum_{l=k+1}^{\infty}\sum_{n=l}^{\infty},
\]
taking into account that {N ≥ l} ∈ F_{τ_{l−1}}, and using the martingale property, we obtain
\[
\sum_{n=1}^{\infty}E_x\Bigl[\sum_{k=1}^{n-1}\sum_{l=k+1}^{n}\Delta M^i_k\,\Delta M^j_l+\sum_{l=1}^{n-1}\sum_{k=l+1}^{n}\Delta M^i_k\,\Delta M^j_l;\;N=n\Bigr]=0.
\]
Consequently,
\[
K^{ij}_m\bigl(M,\sigma_G\bigr)=\sum_{k=1}^{\infty}E_x\bigl[\Delta M^i_k\,\Delta M^j_k;\,N\ge k\bigr]
=\sum_{k=1}^{\infty}E_x\Bigl[\bigl(x^i_k-x^i_{k-1}-\mu^{b^i/\gamma}_{t_k}+\mu^{b^i/\gamma}_{t_{k-1}}\bigr)\bigl(x^j_k-x^j_{k-1}-\mu^{b^j/\gamma}_{t_k}+\mu^{b^j/\gamma}_{t_{k-1}}\bigr);\,N\ge k\Bigr]
\]
\[
=\sum_{k=1}^{\infty}E_x\Bigl[\bigl(X^i_{\sigma_{\Delta_k}}-X^i_0-\mu^{b^i/\gamma}_{\sigma_{\Delta_k}}\bigr)\bigl(X^j_{\sigma_{\Delta_k}}-X^j_0-\mu^{b^j/\gamma}_{\sigma_{\Delta_k}}\bigr)\circ\theta_{t_{k-1}};\,N\ge k\Bigr]
=\sum_{k=1}^{\infty}E_x\Bigl[E_{x_{k-1}}\Bigl(\bigl(X^i_{\sigma_{\Delta_k}}-X^i_0-\mu^{b^i/\gamma}_{\sigma_{\Delta_k}}\bigr)\bigl(X^j_{\sigma_{\Delta_k}}-X^j_0-\mu^{b^j/\gamma}_{\sigma_{\Delta_k}}\bigr)\Bigr);\,N\ge k\Bigr]
\]
\[
=\sum_{k=1}^{\infty}E_x\Bigl[E_{x_{k-1}}\Bigl(\bigl(X^i_{\sigma_{\Delta_k}}-X^i_0\bigr)\bigl(X^j_{\sigma_{\Delta_k}}-X^j_0\bigr)
-\bigl(X^i_{\sigma_{\Delta_k}}-X^i_0\bigr)\mu^{b^j/\gamma}_{\sigma_{\Delta_k}}
-\bigl(X^j_{\sigma_{\Delta_k}}-X^j_0\bigr)\mu^{b^i/\gamma}_{\sigma_{\Delta_k}}
+\mu^{b^i/\gamma}_{\sigma_{\Delta_k}}\mu^{b^j/\gamma}_{\sigma_{\Delta_k}}\Bigr);\,N\ge k\Bigr].
\]
Let us estimate the order of the members. According to formula [6.13] we have
\[
\Bigl|E_x\Bigl[\bigl(X^i_{\sigma_{\Delta_k}}-X^i_0\bigr)\mu^{b^j/\gamma}_{\sigma_{\Delta_k}}\Bigr]\Bigr|\le r_m\,E_x\,\mu^{|b^j|/\gamma}_{\sigma_{\Delta_k}}.
\]
From here it follows that the sum of these members is negligible as m → ∞. Furthermore, we estimate the members E_x(μ^{b^i/γ}_{σ_{Δ_k}} μ^{b^j/γ}_{σ_{Δ_k}}). We have
\[
\frac{E_x\bigl(\mu^{b^i/\gamma}_{\sigma_{\Delta_k}}\mu^{b^j/\gamma}_{\sigma_{\Delta_k}}\bigr)}{E_x\,\mu^{g}_{\sigma_{\Delta_k}}}
\sim\frac{b^i(x)\,b^j(x)/\gamma^{2}(x)\;E_x\bigl(\mu_{\sigma_{\Delta_k}}\bigr)^{2}}{g(x)\,E_x\,\mu_{\sigma_{\Delta_k}}}.
\]
We know that the ratio E_x((μ_{σ_{Δ_k}})²)/E_x(μ_{σ_{Δ_k}}) tends to zero as m → ∞. So it follows that the sum of these members is also negligible. Let us estimate the members of the form E_x((X^i_{σ_Δ} − X^i_0)(X^j_{σ_Δ} − X^j_0)). Using formula [6.13] with the function ϕ(y) = (y^i − x^i)(y^j − x^j), when A^{(0)}_0(ϕ | x) = a^{ij}(x), we obtain
\[
E_x\Bigl[\bigl(X^i_{\sigma_{\Delta_k}}-X^i_0\bigr)\bigl(X^j_{\sigma_{\Delta_k}}-X^j_0\bigr)\Bigr]=E_x S^{f}_{\sigma_{\Delta_k}}\bigl(1+\varepsilon_1(m,k)\bigr)
\]
with f(x) = a^{ij}(x)/γ(x), where ε_1(m, k) → 0 as m → ∞ uniformly in k. From here
\[
K^{ij}_m\bigl(M,\sigma_G\bigr)=\sum_{k=1}^{\infty}E_x\bigl[S^{f}_{\sigma_{\Delta_k}}\circ\theta_{t_{k-1}};\,N\ge k\bigr]\bigl(1+\varepsilon(m)\bigr)
=E_x\Bigl[\sum_{k=1}^{N}S^{f}_{\sigma_{\Delta_k}}\circ\theta_{t_{k-1}}\Bigr]\bigl(1+\varepsilon(m)\bigr)
=E_x S^{f}_{\sigma_G}\bigl(1+\varepsilon(m)\bigr),
\]
where ε(m) → 0. Therefore
\[
K^{ij}\bigl(M,\sigma_G\bigr)=E_x S^{f}_{\sigma_G}=E_x\int_0^{\sigma_G}\frac{a^{ij}\bigl(X_t\bigr)}{\gamma\bigl(X_t\bigr)}\,dt.
\]
From the method of construction it follows that this formula remains true after replacement of σ_G by an arbitrary finite time τ ∈ T^0:
\[
K^{ij}(M,\tau)\equiv E_x\bigl(M^i_\tau M^j_\tau\bigr)=E_x\int_0^{\tau}\frac{a^{ij}\bigl(X_t\bigr)}{\gamma\bigl(X_t\bigr)}\,dt.\qquad [6.17]
\]
Positive definiteness of the matrix K^{ij}(M, τ) follows from positive definiteness of the matrix (a^{ij}). We now prove that (m^{ij}_τ) is a martingale along the trace, where
\[
m^{ij}_\tau=M^i_\tau M^j_\tau-\mu^{f}_\tau.
\]
We obtain
\[
E_x\bigl(m^{ij}_{\tau_1\dotplus\tau_2}\mid F_{\tau_1}\bigr)
=m^{ij}_{\tau_1}+E_x\Bigl(M^i_{\tau_1}\bigl(M^j_{\tau_2}\circ\theta_{\tau_1}\bigr)+\bigl(M^i_{\tau_2}\circ\theta_{\tau_1}\bigr)M^j_{\tau_1}
+\bigl(M^i_{\tau_2}M^j_{\tau_2}-\mu^{f}_{\tau_2}\bigr)\circ\theta_{\tau_1}\Bigm|F_{\tau_1}\Bigr).
\]
The second member of this sum is equal to zero due to the semi-Markov property of the process, and because E_x(M^i_τ) = 0 and E_x(M^i_τ M^j_τ − μ^f_τ) = 0 for any x ∈ G. Hence the matrix additive functional with the components
\[
\mu^{a^{ij}/\gamma}_{\tau}\equiv\int_0^{\tau}\frac{a^{ij}\bigl(X_{t_1}\bigr)}{\gamma\bigl(X_{t_1}\bigr)}\,dt_1\qquad [6.18]
\]
is an analog of the quadratic characteristic of a one-dimensional quadratically integrable martingale.
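Formula [6.17] admits a quick numerical sanity check in the simplest setting. The sketch below (my own illustration, not from the book) assumes a one-dimensional diffusion with constant drift b and constant diffusion coefficient a = s², with γ ≡ 1 and a deterministic horizon t in place of σ_G; the compensated process M_t = X_t − X_0 − bt should then satisfy E[M_t²] = E∫a dt = s²t.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler scheme for dX = b dt + s dW; M_t = X_t - X_0 - b*t equals s*W_t here,
# so its second moment should match the quadratic characteristic s^2 * t.
b, s, t_end, dt, trials = 0.3, 0.7, 1.0, 1e-3, 4000
n = int(t_end / dt)
dW = rng.normal(0.0, np.sqrt(dt), size=(trials, n))
M = (b * dt + s * dW).sum(axis=1) - b * t_end   # compensated increments
emp = np.mean(M ** 2)                           # empirical E[M_t^2]
assert abs(emp - s ** 2 * t_end) < 0.05         # vs. ∫_0^t a(X)/gamma(X) ds = s^2 t
```

With 4000 trials the Monte-Carlo error of E[M_t²] is of order 0.01, well within the tolerance.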
6.10.2. Stochastic integral with respect to a martingale

Construction of the integral

We consider a vector martingale with the standard notation of a vector as a column. Then a* (row) is the transposed vector a (column), and a*b is the scalar product of two vectors of the same dimension d (in coordinate notation: a*b = Σ_{i=1}^d a^i b^i). Furthermore, we will consider integrals of the scalar product of a vector-valued integrand function by increments of a vector-valued integrating function. In order to define a stochastic integral with respect to a general semi-martingale we first consider a stepped function such that there exists a sequence of Markov times (τ_k)_1^N, where N is an integer-valued function with {N = k} ∈ F_{τ_k}, τ_k ∈ T^0, τ_0 = 0 ≤ τ_1 ≤ ···, τ_N = σ_G. In particular these times can be taken from the family (σ^k_{δ_n}). We assume that for any t ∈ [τ_{k−1}, τ_k) f(t) = f(τ_{k−1}), and for any k f(τ_{k−1}) is an F_{τ_{k−1}}-measurable random element with values in R^d. Define the stochastic integral on the interval [0, σ_G) of the function f by the semi-martingale (M_t) as the sum
\[
\int_0^{\sigma_G}f(\tau)^{*}\,dM_\tau=\sum_{k=1}^{N}f\bigl(\tau_{k-1}\bigr)^{*}\bigl(M_{\tau_k}-M_{\tau_{k-1}}\bigr).
\]
Denote by L_0 the class of such functions f. Note that for any two functions of this class there exists a sequence of Markov times with the properties enumerated above, such that both functions have a constant value on any interval between neighboring Markov times from this sequence. From here it follows that L_0 is a linear space. In order to extend the definition of the stochastic integral to a wider class of functions we consider (according to the standard method of extension of stochastic integrals [GIH 75, p. 72]) the Hilbert space L_2, determined by the scalar product
\[
(f,g)\equiv E_x\int_0^{\sigma_G}\sum_{ij}f^i(t)\,g^j(t)\,\frac{a^{ij}\bigl(X_t\bigr)}{\gamma\bigl(X_t\bigr)}\,dt,\qquad [6.19]
\]
and by the completion of the space L_0 with respect to the metric corresponding to the norm ||f|| = (f, f)^{1/2}. In this case for f ∈ L_0 it is true that
\[
\|f\|^{2}=E_x\Bigl(\int_0^{\sigma_G}f(\tau)^{*}\,dM_\tau\Bigr)^{2}.\qquad [6.20]
\]
Actually,
\[
E_x\Bigl(\int_0^{\sigma_G}f(\tau)^{*}\,dM_\tau\Bigr)^{2}
=\sum_{ij}E_x\Bigl(\int_0^{\sigma_G}f^i(\tau)\,dM^i_\tau\int_0^{\sigma_G}f^j(\tau)\,dM^j_\tau\Bigr)
=\sum_{ij}\sum_{n=1}^{\infty}E_x\Bigl[\sum_{k=1}^{n}f^i\bigl(\tau_{k-1}\bigr)\Delta M^i_k\sum_{l=1}^{n}f^j\bigl(\tau_{l-1}\bigr)\Delta M^j_l;\,N=n\Bigr]
\]
\[
=\sum_{ij}\sum_{n=1}^{\infty}E_x\Bigl[\sum_{k=1}^{n}f^i\bigl(\tau_{k-1}\bigr)f^j\bigl(\tau_{k-1}\bigr)\Delta M^i_k\,\Delta M^j_k;\,N=n\Bigr]
=\sum_{ij}\sum_{k=1}^{\infty}E_x\Bigl[f^i\bigl(\tau_{k-1}\bigr)f^j\bigl(\tau_{k-1}\bigr)\bigl(M^i_{\tau'_k}M^j_{\tau'_k}\bigr)\circ\theta_{\tau_{k-1}};\,N\ge k\Bigr]
\]
(the cross terms vanish by the martingale property), where τ'_k ∈ T^0 is an increment of the Markov time such that τ_k = τ_{k−1} ∔ τ'_k. According to the semi-Markov property the latter expression is equal to
\[
=\sum_{ij}\sum_{k=1}^{\infty}E_x\Bigl[f^i\bigl(\tau_{k-1}\bigr)f^j\bigl(\tau_{k-1}\bigr)\,\mu^{a^{ij}/\gamma}_{\tau'_k}\circ\theta_{\tau_{k-1}};\,N\ge k\Bigr]
=\sum_{ij}E_x\Bigl[\sum_{n=1}^{\infty}\sum_{k=1}^{n}f^i\bigl(\tau_{k-1}\bigr)f^j\bigl(\tau_{k-1}\bigr)\,S^{a^{ij}/\gamma}_{\tau'_k}\circ\theta_{\tau_{k-1}};\,N=n\Bigr]
\]
\[
=\sum_{ij}\sum_{n=1}^{\infty}E_x\Bigl[\int_0^{\sigma_G}f^i(t)\,f^j(t)\,\frac{a^{ij}\bigl(X_t\bigr)}{\gamma\bigl(X_t\bigr)}\,dt;\,N=n\Bigr]
=E_x\int_0^{\sigma_G}\sum_{ij}f^i(t)\,f^j(t)\,\frac{a^{ij}\bigl(X_t\bigr)}{\gamma\bigl(X_t\bigr)}\,dt.
\]
Consider the extension of the stochastic integral to the space L_2 consisting of limits of all sequences with elements from the class L_0 mutually converging (see [LOE 62]) with respect to scalar product [6.19]. This extension is said to be a stochastic integral (in the sense of mean-square convergence) with respect to the martingale (M_τ). In this case formula [6.20] is true for any function f ∈ L_2.

78. Stochastic integral with respect to a semi-Markov process

A stochastic integral with respect to a semi-Markov process of diffusion type, which can be called a stochastic integral along the trace of this process, can be naturally defined as the sum of the stochastic integral with respect to the martingale plus the curvilinear integral with respect to the additive functional of bounded variation:
\[
\int_0^{\sigma_G}f(\tau)^{*}\,dX_\tau=\int_0^{\sigma_G}f(\tau)^{*}\,dM_\tau
+\int_0^{\sigma_G}f(\tau)^{*}\,b\bigl(X_\tau\bigr)\,\frac{1}{\gamma\bigl(X_\tau\bigr)}\,d\mu_\tau.\qquad [6.21]
\]
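A minimal numerical illustration of the defining sum for f ∈ L_0 (a sketch of my own, not the book's construction): take a discretized Brownian motion as (M_t) and successive ball-exit times as the Markov times τ_k. For f ≡ 1 the sum telescopes to M_{τ_N} − M_0, which is the property checked below.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, n, r = 1e-4, 50_000, 0.1
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

# Markov times tau_k: successive exits from balls of radius r about W(tau_{k-1}).
taus, i = [0], 0
while True:
    hit = np.nonzero(np.abs(W[i:] - W[i]) >= r)[0]
    if hit.size == 0:
        break
    i += hit[0]
    taus.append(i)

f = np.ones(len(taus) - 1)                 # step integrand f ≡ 1 (class L0)
integral = np.sum(f * np.diff(W[taus]))    # sum f(tau_{k-1}) (M_{tau_k} - M_{tau_{k-1}})
assert np.isclose(integral, W[taus[-1]] - W[0])   # telescoping identity
```

For non-constant step integrands the same sum applies, and the isometry [6.20] identifies its second moment with the time integral of f² against the quadratic characteristic.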
6.10.3. Ito-Dynkin's formula

79. Method of inscribed balls

In order to derive Itô's formula we will use the method of inscribed balls. Its application is connected with asymptotic formula [6.11]. For any point x ∈ G we consider a sequence of inscribed balls B_u(x) = x + (r(x, G) ∧ u)B (u > 0). Let t_k = σ^k_{G,u} and x_k = X_{σ^k_{G,u}}. We denote
\[
J_u\equiv\sum_{k=1}^{\infty}\varphi\bigl(x_{k-1}\bigr)\,\Delta x^i_k\,\Delta x^j_k,
\]
where Δx^i_k = x^i_k − x^i_{k−1}, and ϕ is a continuous function. We have
\[
E_x J_u=E_x\sum_{k=1}^{\infty}\varphi\bigl(x_{k-1}\bigr)\,E_x\bigl(\Delta X^i_k\,\Delta X^j_k\mid F_{t_{k-1}}\bigr)
=E_x\sum_{k=1}^{\infty}\varphi\bigl(x_{k-1}\bigr)\,E_{x_{k-1}}\Bigl[\bigl(X^i_{\sigma_{G,u}}-X^i_0\bigr)\bigl(X^j_{\sigma_{G,u}}-X^j_0\bigr)\Bigr]
\]
\[
=E_x\sum_{k=1}^{\infty}E_{x_{k-1}}\bigl[S^{\varphi a^{ij}/\gamma}_{\sigma_{G,u}}\bigr]\bigl(1+\varepsilon(k,u)\bigr)
=E_x\sum_{k=1}^{\infty}E_{x_{k-1}}\bigl[S^{\varphi a^{ij}/\gamma}_{\sigma_{G,u}}\bigr]\bigl(1+\varepsilon(u)\bigr)
=E_x S^{\varphi a^{ij}/\gamma}_{\sigma_G}\bigl(1+\varepsilon(u)\bigr)
=E_x\,\mu^{\varphi a^{ij}/\gamma}_{\sigma_G}\bigl(1+\varepsilon(u)\bigr).
\]
From here
\[
E_x J_u\longrightarrow E_x\,\mu^{\varphi a^{ij}/\gamma}_{\sigma_G}\qquad(u\longrightarrow 0).
\]

LEMMA 6.1. Under the assumptions made above, J_u tends in probability to the curvilinear integral
\[
J\equiv\int_0^{\sigma_G}\varphi\bigl(X_\tau\bigr)\,\frac{a^{ij}\bigl(X_\tau\bigr)}{\gamma\bigl(X_\tau\bigr)}\,d\mu_\tau.
\]

Proof. Denote Δμ^{ϕa^{ij}/γ}_{t_k} = μ^{ϕa^{ij}/γ}_{t_k} − μ^{ϕa^{ij}/γ}_{t_{k−1}}. We have
\[
E_x\bigl(J_u-J\bigr)^{2}
=E_x\Bigl(\sum_{k=1}^{\infty}\bigl[\varphi\bigl(x_{k-1}\bigr)\Delta x^i_k\Delta x^j_k-\Delta\mu^{\varphi a^{ij}/\gamma}_{t_k}\bigr]\Bigr)^{2}
=E_x\sum_{k=1}^{\infty}\bigl[\varphi\bigl(x_{k-1}\bigr)\Delta x^i_k\Delta x^j_k-\Delta\mu^{\varphi a^{ij}/\gamma}_{t_k}\bigr]^{2}
+2E_x\sum_{k=1}^{\infty}\sum_{l=k+1}^{\infty}\bigl[\,\cdot\,\bigr]_k\bigl[\,\cdot\,\bigr]_l,
\]
where [·]_k = ϕ(x_{k−1})Δx^i_kΔx^j_k − Δμ^{ϕa^{ij}/γ}_{t_k}. Using the semi-Markov property we obtain for the second member
\[
2E_x\sum_{k=1}^{\infty}\sum_{l=k+1}^{\infty}\bigl[\,\cdot\,\bigr]_k\,
E_x\bigl(\bigl[\,\cdot\,\bigr]_l\bigm|F_{t_{l-1}}\bigr)
=2E_x\sum_{k=1}^{\infty}\sum_{l=k+1}^{\infty}\bigl[\,\cdot\,\bigr]_k\,
E_{x_{l-1}}\Bigl[\varphi\bigl(X_0\bigr)\bigl(X^i_{\sigma_{G,u}}-X^i_0\bigr)\bigl(X^j_{\sigma_{G,u}}-X^j_0\bigr)-\mu^{\varphi a^{ij}/\gamma}_{\sigma_{G,u}}\Bigr]
\]
\[
=2E_x\sum_{k=1}^{\infty}\sum_{l=k+1}^{\infty}\bigl[\,\cdot\,\bigr]_k\,
E_{x_{l-1}}\bigl[\varepsilon(u,l)\,\mu^{\varphi a^{ij}/\gamma}_{\sigma_{G,u}}\bigr]
\le 2\varepsilon(u)\sup_{x\in G}E_x S^{\varphi a^{ij}/\gamma}_{\sigma_G}\;
E_x\sum_{k=1}^{\infty}\Bigl(\bigl|\varphi\bigl(x_{k-1}\bigr)\bigr|\,\bigl|\Delta x^i_k\bigr|\,\bigl|\Delta x^j_k\bigr|+\Delta\mu^{|\varphi|\,|a^{ij}|/\gamma}_{t_k}\Bigr).
\]
We have
\[
E_x\sum_{k=1}^{\infty}\Delta\mu^{|\varphi|\,|a^{ij}|/\gamma}_{t_k}=E_x\,\mu^{|\varphi|\,|a^{ij}|/\gamma}_{\sigma_G}=E_x S^{|\varphi|\,|a^{ij}|/\gamma}_{\sigma_G},
\]
bounded uniformly. On the other hand,
\[
E_x\sum_{k=1}^{\infty}\bigl|\varphi\bigl(x_{k-1}\bigr)\bigr|\,\bigl|\Delta x^i_k\bigr|\,\bigl|\Delta x^j_k\bigr|
\le C\sup_{x\in G}\bigl|\varphi(x)\bigr|\,E_x\sum_{k=1}^{\infty}\bigl(r\bigl(x_{k-1},G\bigr)\wedge u\bigr)^{2},
\]
where C is the upper bound of the ratio of the maximal axis of the ellipsoid to the minimal axis on the set G. The sum in the latter expression is equal to
\[
E_x\sum_{k=1}^{\infty}\bigl(r\bigl(x_{k-1},G\bigr)\wedge u\bigr)^{2}
=d\cdot E_x\sum_{k=1}^{\infty}E_{x_{k-1}}S^{1/\gamma}_{\sigma_{G,u}}\bigl(1+\varepsilon(u,k)\bigr)
=d\cdot E_x S^{1/\gamma}_{\sigma_G}\bigl(1+\varepsilon(u)\bigr)<\infty
\]
uniformly in x ∈ G. So the second member of the sum representing E_x(J_u − J)² tends to zero as u → 0. Consider the first member. We have
\[
E_x\sum_{k=1}^{\infty}\bigl[\varphi\bigl(x_{k-1}\bigr)\Delta x^i_k\Delta x^j_k-\Delta\mu^{\varphi a^{ij}/\gamma}_{t_k}\bigr]^{2}
=E_x\sum_{k=1}^{\infty}\Bigl(\varphi^{2}\bigl(x_{k-1}\bigr)\bigl(\Delta x^i_k\bigr)^{2}\bigl(\Delta x^j_k\bigr)^{2}
-2\,\Delta\mu^{\varphi a^{ij}/\gamma}_{t_k}\,\varphi\bigl(x_{k-1}\bigr)\Delta x^i_k\Delta x^j_k
+\bigl(\Delta\mu^{\varphi a^{ij}/\gamma}_{t_k}\bigr)^{2}\Bigr).
\]
The sum of the members containing (Δx^i_k)²(Δx^j_k)² tends to zero, because the ratio E_x((Δx^i_k)²(Δx^j_k)²)/E_x(σ_{G,u}) tends to zero due to the property A_0(g | x) = 0, where g(y) = (y^i − x^i)²(y^j − x^j)². The sum of the members containing 2Δμ^{ϕa^{ij}/γ}_{t_k} ϕ(x_{k−1})Δx^i_kΔx^j_k tends to zero due to the estimate |Δx^i_kΔx^j_k| ≤ u². The sum of the members containing (Δμ^{ϕa^{ij}/γ}_{t_k})² tends to zero due to the property of a semi-Markov process E_x((μ_{σ_{G_r}})²)/E_x(μ_{σ_{G_r}}) → 0 (G_r ↓ x). The lemma is proved.

80. Notes

A variant of Itô's formula for the stochastic integral with respect to a semi-Markov process can be derived in a standard way from the lemma proved above. Namely, for a twice differentiable function ϕ, the following representation is true:
\[
\varphi\bigl(X_{\sigma_G}\bigr)=\varphi\bigl(X_0\bigr)+\int_0^{\sigma_G}\nabla\varphi\bigl(X_\tau\bigr)^{*}\,dX_\tau
+\frac12\sum_{ij}\int_0^{\sigma_G}D_{ij}\varphi\bigl(X_\tau\bigr)\,\frac{a^{ij}\bigl(X_\tau\bigr)}{\gamma\bigl(X_\tau\bigr)}\,d\mu_\tau.\qquad [6.22]
\]
In this representation the first integral is a stochastic integral with respect to the semi-Markov process, and the second one is a curvilinear integral with respect to the additive functional. In this book the case of a finite F^0-measurable additive functional μ_τ is considered. A rather similar method is applicable to the case of infiniteness of this functional.
Chapter 7
Limit Theorems for Semi-Markov Processes
A distribution of any semi-Markov process induces a probability measure of a stepped semi-Markov process of the first exits, corresponding to a sequence of iterated times of the first exit from balls of a radius r > 0 (see item 2.18). If r → 0, this stepped semi-Markov process converges weakly to the initial semi-Markov process. It is natural to consider conditions under which an a priori given sequence of stepped semi-Markov processes converges weakly to a general semi-Markov process. These conditions are conveniently expressed in terms of times of the first exit. The language of times of the first exit has turned out to be well adapted to the study of weak compactness and weak convergence of probability measures in space D, and not only for semi-Markov processes (see [ALD 78a, ALD 78b, GUT 75]). For semi-Markov processes, a simple sufficient condition of weak convergence of distributions is derived in terms of semi-Markov transition functions. The task of analyzing the weak convergence of distributions of semi-Markov walks can be divided into two parts: first to consider convergence of distributions of the embedded Markov walks, and then to make a time change transformation of the obtained limit corresponding to a sequence of distributions of intervals between jumps. In this way we come to a result which was proved by somewhat different methods by Silvestrov [SIL 74]. As has been shown in [HAR 79], the limit theorems for semi-Markov processes can be applied to the construction of a Markov process, not broken at a finite instant, with a given distribution of points of the first exit from open sets (see [BLU 62, KNI 64, SHI 71a, SHI 71b]).

7.1. Weak compactness and weak convergence of probability measures

In terms of times of the first exit from open sets, conditions of weak compactness and weak convergence of probability measures in space D are derived [BIL 77, GIH 71, GIH 75, SKO 56, SKO 58, HAR 76b, HAR 77, ALD 78a, ALD 78b, GUT 75].
1. Definitions and denotations

Let N_t(ξ) be the number of points of discontinuity of a function ξ ∈ D on an interval (0, t] (t > 0); N_t(ξ) ∈ N_0 ∪ {∞}. Define the functionals
\[
h^t_r(\xi)=\min\bigl\{\sigma^{k+1}_r(\xi)-\sigma^{k}_r(\xi):\,k\in\{0,1,\dots,N_t(L_r\xi)\}\bigr\},
\]
where ξ ∈ D, r, t > 0, and for δ ∈ DS
\[
h^t_\delta(\xi)=\min\bigl\{\sigma^{k+1}_\delta(\xi)-\sigma^{k}_\delta(\xi):\,k\in\{0,1,\dots,N_t(L_\delta\xi)\},\;\sigma^{k+1}_\delta-\sigma^{k}_\delta>0\bigr\}
\]
(the minimum is sought among positive differences). We will use the functionals h^t_r and h^t_δ to state two variants of the limit theorems in terms of times of the first exit. For any c, t > 0 and ξ ∈ D, let
\[
\Delta^t_c(\xi)=\sup\Bigl\{\bigl[\rho\bigl(\xi(t_1),\xi(t_2)\bigr)\wedge\rho\bigl(\xi(t_2),\xi(t_3)\bigr)\bigr]\vee\rho\bigl(\xi(0),\xi(t_4)\bigr):
\;0<t_2\le t,\;0\vee(t_2-c)\le t_1\le t_2\le t_3\le t_2+c,\;0\le t_4\le c\Bigr\}.
\]
Let (X_t)_{t≥0} be a non-decreasing, continuous from the right, family of compact sets X_t ⊂ X, and let λ^t_c be a function continuous and non-decreasing in the arguments c and t (c, t > 0) for which (∀t > 0) λ^t_{0+} = 0. Define the set
\[
K\bigl(X_t,\lambda^t_c\bigr)=\bigl\{\xi\in D:\,(\forall c,t>0)\;\xi(t)\in X_t,\;\Delta^t_c(\xi)\le\lambda^t_c\bigr\}.
\]

2. Compact sets in D

PROPOSITION 7.1. The following properties hold:
(1) the set K(X_t, λ^t_c) is a compact set in (D, ρ_D);
(2) for any compact set K ⊂ D there exist a family (X_t)_{t≥0} and a function λ^t_c (c, t > 0) with the properties listed above such that K ⊂ K(X_t, λ^t_c).

Proof. The proof of this result is similar to the proof of theorem 1 in [GIH 71, p. 502] about the form of a compact set in space D[0, 1], with modifications connected with the replacement of the Skorokhod metric by the Stone-Skorokhod metric.
3. Existence of the function lambda

PROPOSITION 7.2. Let (P^{(n)}) be a family of probability measures on (D, F); let also sequences (a_k)_1^∞ (a_k ↓ 0) and (t_ℓ)_1^∞ (t_ℓ ↑ ∞) exist such that for any k, ℓ ∈ N
\[
\lim_{c\to0}\sup_n P^{(n)}\bigl(\Delta^{t_\ell}_c>a_k\bigr)=0.
\]
Then for any ε > 0 there is a function λ^t_c with the properties enumerated in item 2 for which
\[
\sup_n P^{(n)}\Bigl(\bigcup_{c,t>0}\bigl\{\Delta^t_c>\lambda^t_c\bigr\}\Bigr)<\varepsilon.
\]

Proof. Let c_{k,ℓ} > 0 (k, ℓ ∈ N), c_{k,ℓ} ↓ 0 as k → ∞ and as ℓ → ∞, and
\[
\sup_n P^{(n)}\bigl(\Delta^{t_{\ell+1}}_{c_{k,\ell}}>a_k\bigr)<\varepsilon\,2^{-k-\ell}.
\]
Such a sequence (c_{k,ℓ}) obviously exists. Assume λ^{t_ℓ}_{c_{k,ℓ}} = a_{k−1}; we then continue this function in a continuous non-decreasing manner up to a function λ^t_c (c, t > 0). Such a function obviously exists, and λ^t_{0+} = 0 for any t > 0. Now for any n:
\[
P^{(n)}\Bigl(\bigcup_{c,t>0}\bigl\{\Delta^t_c>\lambda^t_c\bigr\}\Bigr)
=P^{(n)}\Bigl(\bigcup_{k,\ell}\;\bigcup_{c\in[c_{k+1,\ell},\,c_{k,\ell})}\;\bigcup_{t\in[t_\ell,\,t_{\ell+1})}\bigl\{\Delta^t_c>\lambda^t_c\bigr\}\Bigr)
\le P^{(n)}\Bigl(\bigcup_{k,\ell}\bigl\{\Delta^{t_{\ell+1}}_{c_{k,\ell}}>\lambda^{t_\ell}_{c_{k+1,\ell}}\bigr\}\Bigr)
\le\sum_{k,\ell}P^{(n)}\bigl(\Delta^{t_{\ell+1}}_{c_{k,\ell}}>a_k\bigr)<\varepsilon.
\]

4. Comparison of functionals

PROPOSITION 7.3. For any c, t, r > 0, δ ∈ DS(r) and ξ ∈ D the following hold:
(a) h^t_r(ξ) ≥ 2c ⇒ Δ^t_c(ξ) ≤ 2r;
(b) h^t_δ(ξ) ≥ 2c ⇒ Δ^t_c(ξ) ≤ r.

Proof. Let h^t_r(ξ) ≥ 2c and t_2 ∈ (0, t]. Then there exists k ≤ N_t(L_r ξ) such that t_2 ∈ [σ^k_r(ξ), σ^{k+1}_r(ξ)), where σ^{k+1}_r(ξ) − σ^k_r(ξ) ≥ 2c. From here either [t_2, t_2 + c] ⊂ [σ^k_r(ξ), σ^{k+1}_r(ξ)) or [t_2 − c, t_2] ⊂ [σ^k_r(ξ), σ^{k+1}_r(ξ)). Hence ρ(ξ(t_2), ξ(t_1)) ∧ ρ(ξ(t_2), ξ(t_3)) < 2r, where 0 ∨ (t_2 − c) ≤ t_1 ≤ t_2 ≤ t_3 ≤ t_2 + c. Obviously ρ(ξ(t_4), ξ(0)) < r, where 0 ≤ t_4 ≤ c. From here Δ^t_c(ξ) ≤ 2r. Property (b) is proved similarly.
5. Weak compactness

Denote
\[
\nu^{(n)}_{r,k}(S)=P^{(n)}\bigl(X_{\sigma^k_r}\in S,\;\sigma^k_r<\infty\bigr)\qquad\bigl(S\in\mathcal B(X)\bigr).
\]

PROPOSITION 7.4. Let (P^{(n)}) be a set of probability measures on (D, F), and let, for some sequences (r_m), r_m ↓ 0, and (t_ℓ), t_ℓ ↑ ∞, the following conditions be fulfilled:
(1) (∀ℓ, m ∈ N) lim_{c→0} sup_n P^{(n)}(h^{t_ℓ}_{r_m} < c) = 0;
(2) (∀m ∈ N)(∀k ∈ N_0) lim_{K↑X} sup_n ν^{(n)}_{r_m,k}(X \ K) = 0, where K is a compact set, i.e. the set of subprobability measures (ν^{(n)}_{r_m,k}) is weakly compact.
Then the set of measures (P^{(n)}) is weakly compact.

Proof. From propositions 7.2 and 7.3 and condition (1) it follows that (∀ε > 0) there is a function λ^t_c with the properties enumerated in item 2 for which sup_n P^{(n)}(∪_{c,t>0}{Δ^t_c > λ^t_c}) < ε. Let r_m ↓ 0. From condition (1) it follows that (∀ε > 0)(∀ℓ, m ∈ N)(∃N_{ℓm} ∈ N_0)
\[
\sup_n P^{(n)}\bigl(N_{t_\ell}L_{r_m}\ge N_{\ell m}\bigr)<\varepsilon\,2^{-m-1}
\]
(where N_0 = N ∪ {0} = {0, 1, 2, ...}), since {h^t_r ≥ c} ⊂ {N_t L_r ≤ t/c}. From condition (2) it follows that (∀ℓ, m)(∃K_{ℓm} — compact set)(∀k: 0 ≤ k ≤ N_{ℓm})
\[
\sup_n P^{(n)}\bigl(X_{\sigma^k_{r_m}}\notin K_{\ell m},\;\sigma^k_{r_m}<\infty\bigr)<\frac{\varepsilon\,2^{-m-1}}{N_{\ell m}+1}.
\]
Let Δ_{ℓm} = {x : ρ(x, K_{ℓm}) ≤ r_m}. Then (∀n)
\[
P^{(n)}\bigl(\sigma_{\Delta_{\ell m}}>t_\ell\bigr)
\ge P^{(n)}\Bigl(N_{t_\ell}L_{r_m}<N_{\ell m},\;\bigcap_{k=0}^{N_{\ell m}}\Bigl(\bigl\{X_{\sigma^k_{r_m}}\in K_{\ell m},\,\sigma^k_{r_m}<\infty\bigr\}\cup\bigl\{\sigma^k_{r_m}=\infty\bigr\}\Bigr)\Bigr)
>1-\varepsilon\,2^{-m}.
\]
Let Δ_ℓ = ∩_{m=1}^∞ Δ_{ℓm}. It is not difficult to show that each sequence (x_m)_1^∞ belonging to Δ_ℓ contains a mutually converging subsequence. Since, in addition, X is complete and Δ_ℓ is closed, Δ_ℓ is a compact set (see also [GIH 71, p. 484]). On the other hand, (∀n ≥ 1)
\[
P^{(n)}\bigl(\sigma_{\Delta_\ell}\le t_\ell\bigr)=P^{(n)}\Bigl(\inf_m\sigma_{\Delta_{\ell m}}\le t_\ell\Bigr)\le\sum_{m=1}^{\infty}P^{(n)}\bigl(\sigma_{\Delta_{\ell m}}\le t_\ell\bigr)<\varepsilon.
\]
Hence, (∀ε > 0)(∀ℓ ≥ 1)(∃K_ℓ — compact set)
\[
\sup_n P^{(n)}\bigl(\sigma_{K_\ell}\le\ell\bigr)<\varepsilon\,2^{-\ell},\qquad K_\ell\subset K_{\ell+1}.
\]
We assume that X_t = K_ℓ for ℓ − 1 ≤ t < ℓ. Then (X_t)_{t≥0} is a non-decreasing, continuous from the right, family of compact sets. In this case
\[
P^{(n)}\Bigl(\bigcup_{t\ge0}\bigl\{X_t\notin\mathsf X_t\bigr\}\Bigr)
\le\sum_{\ell=1}^{\infty}P^{(n)}\Bigl(\bigcup_{\ell-1\le t<\ell}\bigl\{X_t\notin K_\ell\bigr\}\Bigr)
\le\sum_{\ell=1}^{\infty}P^{(n)}\bigl(\sigma_{K_\ell}<\ell\bigr)<\varepsilon.
\]
Hence, by proposition 7.1 and the sufficient condition of weak compactness (see [GIH 71, p. 429]), the set of measures (P^{(n)}) is weakly compact.

6. Weak compactness (the second variant)

Denote
\[
\nu^{(n)}_{\delta,k}(S)=P^{(n)}\bigl(X_{\sigma^k_\delta}\in S,\;\sigma^k_\delta<\infty\bigr)\qquad\bigl(S\in\mathcal B(X)\bigr).
\]

PROPOSITION 7.5. Let (P^{(n)}) be a family of probability measures on (D, F), and let, for some (t_ℓ), t_ℓ ↑ ∞, and (δ_m), δ_m ∈ DS(r_m), r_m ↓ 0, the following conditions be fulfilled:
(1) (∀ℓ, m ∈ N) lim_{c→0} sup_n P^{(n)}(h^{t_ℓ}_{δ_m} < c) = 0;
(2) at least one of two conditions is fulfilled:
(a) (∀k, m ∈ N) the family of subprobability measures (ν^{(n)}_{δ_m,k}) is weakly compact;
(b) (∃δ ∈ DS, δ consisting of precompact sets)(∀ℓ ∈ N) lim_{k→∞} sup_n P^{(n)}(σ^k_δ ≤ t_ℓ) = 0.
Then the family of measures (P^{(n)}) is weakly compact.

Proof. (2)(a) We will show that from conditions (1) and (2)(a) it follows that (∀ℓ, m ≥ 1)
\[
\lim_{k\to\infty}\sup_n P^{(n)}\bigl(\sigma^k_{\delta_m}\le t_\ell\bigr)=0.\qquad [7.1]
\]
For this we use two obvious properties of deducing sequences:
(C1) (∀δ ∈ DS)(∀K — compact set)(∃m ≥ 1)(∀x ∈ K) x ∈ ∪_{i=1}^m Δ_i, δ = (Δ_1, Δ_2, ...);
(C2) (∀δ ∈ DS, ∪Δ_i = X)(∀m ≥ 1) δ^{(m)} ∈ DS, where δ = (Δ_1, Δ_2, ...) and δ^{(m)} = (Δ_m, Δ_{m+1}, ...).

Let K_0 be a compact set. By property C1 (∃k_1 ≥ 1)
\[
\bigl\{h^{t_\ell}_{\delta_m}\ge c,\;X_0\in K_0\bigr\}=\bigl\{X_0\in K_0,\;h^{t_\ell}_{\delta_m}\ge c,\;\sigma^{k_1}_{\delta_m}\ge c\bigr\}.
\]
By properties C1 and C2, if K_1 is a compact set, then (∃k_2 ≥ k_1)
\[
\bigl\{h^{t_\ell}_{\delta_m}\ge c,\;X_0\in K_0,\;X_{\sigma^{k_1}_{\delta_m}}\in K_1,\;\sigma^{k_1}_{\delta_m}\le t_\ell\bigr\}
=\bigl\{h^{t_\ell}_{\delta_m}\ge c,\;X_0\in K_0,\;X_{\sigma^{k_1}_{\delta_m}}\in K_1,\;\sigma^{k_1}_{\delta_m}\le t_\ell,\;\sigma^{k_2}_{\delta_m}\ge 2c\bigr\},
\]
etc.: for any sequence of compact sets K_0, K_1, ..., K_s there is a sequence of numbers k_1, ..., k_{s+1} (k_i < k_{i+1}) for which
\[
\bigl\{h^{t_\ell}_{\delta_m}\ge c,\;X_0\in K_0\bigr\}\cap\bigcap_{i=1}^{s}\bigl\{X_{\sigma^{k_i}_{\delta_m}}\in K_i,\;\sigma^{k_i}_{\delta_m}\le t_\ell\bigr\}
=\bigl\{h^{t_\ell}_{\delta_m}\ge c,\;X_0\in K_0\bigr\}\cap\bigcap_{i=1}^{s}\bigl\{X_{\sigma^{k_i}_{\delta_m}}\in K_i,\;\sigma^{k_i}_{\delta_m}\le t_\ell\bigr\}\cap\bigl\{\sigma^{k_{s+1}}_{\delta_m}\ge(s+1)c\bigr\}.
\]
Let (∀n)
\[
P^{(n)}\bigl(h^{t_\ell}_{\delta_m}<c\bigr)<\frac{\varepsilon}{s+2},
\]
where s = [t_ℓ/c] (the integer part of the ratio), P^{(n)}(X_0 ∉ K_0) < ε/(s + 2), and
\[
P^{(n)}\bigl(X_{\sigma^{k_i}_{\delta_m}}\notin K_i,\;\sigma^{k_i}_{\delta_m}<\infty\bigr)<\frac{\varepsilon}{s+2}\qquad(i=1,\dots,s),
\]
where each k_i depends on the chosen K_{i−1}. Then
\[
P^{(n)}\bigl(\sigma^{k_{s+1}}_{\delta_m}>t_\ell\bigr)
\ge P^{(n)}\Bigl(h^{t_\ell}_{\delta_m}\ge c,\;X_0\in K_0,\;\bigcap_{i=1}^{s}\Bigl(\bigl\{X_{\sigma^{k_i}_{\delta_m}}\in K_i,\,\sigma^{k_i}_{\delta_m}<\infty\bigr\}\cup\bigl\{\sigma^{k_i}_{\delta_m}=\infty\bigr\}\Bigr)\Bigr)
\]
\[
\ge 1-P^{(n)}\bigl(h^{t_\ell}_{\delta_m}<c\bigr)-P^{(n)}\bigl(X_0\notin K_0\bigr)
-\sum_{i=1}^{s}P^{(n)}\bigl(X_{\sigma^{k_i}_{\delta_m}}\notin K_i,\;\sigma^{k_i}_{\delta_m}<\infty\bigr)>1-\varepsilon.
\]
Property [7.1] is proved. The further proof of weak compactness of the family of measures (P^{(n)}) is conducted following the pattern of proposition 7.4.

(2)(b) If all [Δ_i] are compact, then (∀s ≥ 1) K_s = ∪_{i=1}^s [Δ_i] is a compact set and σ_{K_s} ≥ σ_{∪_{i=1}^s Δ_i} ≥ σ^s_δ, where δ = (Δ_1, Δ_2, ...). From condition (2)(b) it follows that (∀t > 0)
\[
\lim_{s\to\infty}\sup_n P^{(n)}\bigl(\sigma_{K_s}\le t\bigr)=0.
\]
From here, in view of condition (1), as in proposition 7.4, weak compactness of the set of measures (P^{(n)}) follows.

7. Sufficient condition of weak convergence
Denote by \(\overset{w}{\to}\) the weak convergence of (sub)probability measures in the corresponding metric space.

THEOREM 7.1. Let (P^{(n)})_1^∞ and P be probability measures on (D, F), and let at least one of the two following conditions be fulfilled:
(1) (∃(r_m)_1^∞, r_m ↓ 0)(∀k ∈ N_0)(∀m ∈ N)
\[
P^{(n)}\circ\bigl(\beta_{\sigma^0_{r_m}},\dots,\beta_{\sigma^k_{r_m}}\bigr)^{-1}\overset{w}{\longrightarrow}P\circ\bigl(\beta_{\sigma^0_{r_m}},\dots,\beta_{\sigma^k_{r_m}}\bigr)^{-1};
\]
(2) (∃(δ_m)_1^∞, δ_m ∈ DS(r_m), r_m ↓ 0)(∀k ∈ N_0)(∀m ∈ N)
\[
P^{(n)}\circ\bigl(\beta_{\sigma^0_{\delta_m}},\dots,\beta_{\sigma^k_{\delta_m}}\bigr)^{-1}\overset{w}{\longrightarrow}P\circ\bigl(\beta_{\sigma^0_{\delta_m}},\dots,\beta_{\sigma^k_{\delta_m}}\bigr)^{-1}.
\]
Then P^{(n)} converges weakly to P.

Proof. Since
\[
\sigma\bigl(\beta_{\sigma^{k-1}_{r_m}};\,k,m\in\mathbb N\bigr)=\sigma\bigl(\beta_{\sigma^{k-1}_{\delta_m}};\,k,m\in\mathbb N\bigr)=\sigma\bigl(X_t,\;t\in\mathbb R_+\bigr)=\mathcal B(D)
\]
(see proposition 2.13), by theorem 4 [GIH 71, p. 437] it is sufficient to prove weak compactness of the family of measures (P^{(n)}). From condition (1) it follows that for almost all c > 0 (∀ℓ, m ∈ N)
\[
P^{(n)}\bigl(h^{t_\ell}_{r_m}<c\bigr)\longrightarrow P\bigl(h^{t_\ell}_{r_m}<c\bigr)\qquad(n\to\infty)
\]
for some sequence (t_ℓ)_1^∞ with t_ℓ ↑ ∞. From here weak compactness of the family of distributions (P^{(n)} ∘ (h^{t_ℓ}_{r_m})^{−1})_{n=1}^∞ follows and, hence,
\[
\lim_{c\to0}\sup_n P^{(n)}\bigl(h^{t_\ell}_{r_m}<c\bigr)=0,
\]
i.e. condition (1) of proposition 7.4 is fulfilled. Further, from condition (1) it follows that (∀m ∈ N)(∀k ∈ N_0) ν^{(n)}_{r_m,k} converges weakly to ν_{r_m,k}, where
\[
\nu_{r,k}(S)=P\bigl(X_{\sigma^k_r}\in S,\;\sigma^k_r<\infty\bigr)\qquad\bigl(S\in\mathcal B(X)\bigr)
\]
(ν^{(n)}_{r_m,k} is determined similarly). From here weak compactness of the family of measures (ν^{(n)}_{r_m,k})_{n=1}^∞ follows, i.e. condition (2) of proposition 7.4 is fulfilled. From proposition 7.4 weak compactness of the family of measures (P^{(n)})_1^∞ follows. Similarly, in view of proposition 7.5, it is proved that condition (2) implies weak compactness of the family of measures (P^{(n)})_1^∞.

8. Necessary condition of weak convergence

PROPOSITION 7.6. Let (P^{(n)})_1^∞ and P be probability measures on (D, F) and let P^{(n)} converge weakly to P. Then there exist (r_m) (r_m ↓ 0) and (δ_m) (δ_m ∈ DS(r_m), r_m ↓ 0) such that (∀k, m ∈ N), as n → ∞,
(1) P^{(n)} ∘ (β_{σ^0_{r_m}}, ..., β_{σ^k_{r_m}})^{−1} converges weakly to P ∘ (β_{σ^0_{r_m}}, ..., β_{σ^k_{r_m}})^{−1};
(2) P^{(n)} ∘ (β_{σ^0_{δ_m}}, ..., β_{σ^k_{δ_m}})^{−1} converges weakly to P ∘ (β_{σ^0_{δ_m}}, ..., β_{σ^k_{δ_m}})^{−1}.

Proof. By corollary 2.2 (∃(δ_m)_1^∞, δ_m ∈ DS(r_m), r_m ↓ 0) P(∩_{m=1}^∞ Π(δ_m)) = 1. Let ϕ ∈ C(Y^k) (a continuous and bounded function on Y^k; see item 2.9). Then by theorem 2.6 the function ϕ(β_{σ^0_{δ_m}}, ..., β_{σ^k_{δ_m}}) is continuous on Π(δ_m). From here, using the lemma from [GIH 71, p. 437],
\[
P^{(n)}\bigl(\varphi\bigl(\beta_{\sigma^0_{\delta_m}},\dots,\beta_{\sigma^k_{\delta_m}}\bigr)\bigr)\longrightarrow P\bigl(\varphi\bigl(\beta_{\sigma^0_{\delta_m}},\dots,\beta_{\sigma^k_{\delta_m}}\bigr)\bigr).
\]
The first statement is proved similarly (see the notes on items 2.33 and 2.40).

9. Weak convergence criterion

Furthermore, we assume that e^{−∞}(ϕ ∘ X_∞) = 0.

THEOREM 7.2. Let (P^{(n)})_1^∞ and P be probability measures on (D, F). Then the following conditions are equivalent:
(1) P^{(n)} converges weakly to P;
(2) (∃(δ_m), δ_m ∈ DS(r_m), r_m ↓ 0)(∀k, m ∈ N)
\[
P^{(n)}\circ\bigl(\beta_{\sigma^0_{\delta_m}},\dots,\beta_{\sigma^k_{\delta_m}}\bigr)^{-1}\overset{w}{\longrightarrow}P\circ\bigl(\beta_{\sigma^0_{\delta_m}},\dots,\beta_{\sigma^k_{\delta_m}}\bigr)^{-1};
\]
(3) (∃(r_m)_1^∞, r_m ↓ 0)(∀k, m ∈ N)
\[
P^{(n)}\circ\bigl(\beta_{\sigma^0_{r_m}},\dots,\beta_{\sigma^k_{r_m}}\bigr)^{-1}\overset{w}{\longrightarrow}P\circ\bigl(\beta_{\sigma^0_{r_m}},\dots,\beta_{\sigma^k_{r_m}}\bigr)^{-1};
\]
(4) (∃(δ_m), δ_m ∈ DS(r_m), r_m ↓ 0)(∀k, m ∈ N)(∀λ_i > 0)(∀ϕ_i ∈ C_0)
\[
P^{(n)}\Bigl(\prod_{i=0}^{k}e^{-\lambda_i\sigma^i_{\delta_m}}\,\varphi_i\circ X_{\sigma^i_{\delta_m}}\Bigr)\longrightarrow P\Bigl(\prod_{i=0}^{k}e^{-\lambda_i\sigma^i_{\delta_m}}\,\varphi_i\circ X_{\sigma^i_{\delta_m}}\Bigr);
\]
(5) (∃(r_m), r_m ↓ 0)(∀k, m ∈ N)(∀λ_i > 0)(∀ϕ_i ∈ C_0)
\[
P^{(n)}\Bigl(\prod_{i=0}^{k}e^{-\lambda_i\sigma^i_{r_m}}\,\varphi_i\circ X_{\sigma^i_{r_m}}\Bigr)\longrightarrow P\Bigl(\prod_{i=0}^{k}e^{-\lambda_i\sigma^i_{r_m}}\,\varphi_i\circ X_{\sigma^i_{r_m}}\Bigr).
\]

Proof. The equivalence of conditions (1), (2) and (3) follows from proposition 7.6. The implication (4)⇒(2) follows from a property of the Laplace transformation (see [FEL 67, p. 496]). The implication (1)⇒(4) follows from continuity on Π(δ_m) of the function
\[
f=\prod_{i=1}^{k}e^{-\lambda_i\sigma^i_{\delta_m}}\bigl(\varphi_i\circ X_{\sigma^i_{\delta_m}}\bigr)\,I_{\{\sigma^k_{\delta_m}<\infty\}}.
\]
The equivalence (3)⇔(5) is proved similarly.

7.2. Weak convergence of semi-Markov processes

The general theorems of weak convergence of probability measures in space D are applied to derive sufficient conditions of weak convergence of semi-Markov processes; in particular, conditions are given for a sequence of semi-Markov walks to converge weakly to a continuous semi-Markov process (see [SIL 70, SIL 74, HAR 76b, HAR 77]). Alongside a time change in a Markov process, this passage to the limit in a class of stepped semi-Markov processes is a constructive method to obtain continuous semi-Markov processes which are not Markovian.

10. Weak convergence of families of probability measures

Let (P^{(n)}_x)_{x∈X} (n ∈ N) and (P_x)_{x∈X} be admissible families of probability measures on (D, F). We say that a sequence of families {(P^{(n)}_x)}_{n=1}^∞ converges weakly to a family (P_x) if (∀x ∈ X) P^{(n)}_x converges weakly to P_x. It is in this sense that we understand the expression "a sequence of semi-Markov processes converges weakly to a semi-Markov process". A family (P_x) is said to be weakly continuous if (∀x ∈ X)(∀(x_n), x_n ∈ X, x_n → x) P_{x_n} converges weakly to P_x.
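The kind of convergence defined here can be illustrated with the simplest possible example (a sketch of my own, not taken from the book): the symmetric random walk with step h_n = 1/√n, regarded as a stepped process, already reproduces the limiting Brownian first-exit probability from an interval, which is what weak convergence of the exit-point distributions predicts.

```python
import numpy as np

rng = np.random.default_rng(6)

# Symmetric walk with spatial step h = 1/sqrt(n); for n = 400, h = 1/20,
# so the walk lives on the lattice {0, 1/20, ..., 1}.  Starting from
# x0 = 0.5 (lattice site 10 of 20), it exits (0, 1) through the right end
# with probability approaching the Brownian value P_x(exit right) = x = 1/2.
K, start, trials = 20, 10, 4000
right = 0
for _ in range(trials):
    k = start
    while 0 < k < K:
        k += 1 if rng.random() < 0.5 else -1
    right += (k == K)

p = right / trials
assert abs(p - 0.5) < 0.03
```

Here the gambler's-ruin probability is exactly 1/2 by symmetry for every n, so the Monte-Carlo estimate tests only sampling error; for asymmetric kernels the agreement with the diffusion limit appears only as n → ∞.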
11. Weak convergence of semi-Markov processes

THEOREM 7.3. For a sequence of SM processes {(P^{(n)}_x)}_{n=1}^∞ to converge weakly to a weakly continuous SM process (P_x) it is sufficient that the following two conditions are fulfilled:
(1) (∀Δ ∈ A_0)(∀ϕ ∈ C_0)(∀λ > 0) f_{σ_Δ}(λ, ϕ | ·) ∈ C_0;
(2) (∀Δ ∈ A_0)(∀ϕ ∈ C_0)(∀λ > 0)(∀K ⊂ X, K — compact set)
\[
f^{(n)}_{\sigma_\Delta}(\lambda,\varphi\mid x)\longrightarrow f_{\sigma_\Delta}(\lambda,\varphi\mid x)\qquad(n\to\infty)
\]
uniformly on x ∈ K.

Proof. By theorem 7.2 it is enough to prove that (∀k ∈ N)(∀λ_i > 0)(∀ϕ_i ∈ C_+)(∀Δ_i ∈ A_0)(∀x_0 ∈ X)
\[
P^{(n)}_{x_0}\Bigl(\prod_{i=1}^{k}e^{-\lambda_i(\sigma_{\Delta_i}\circ\theta_{\tau_{i-1}})}\,\varphi_i\circ X_{\tau_i}\Bigr)
\longrightarrow P_{x_0}\Bigl(\prod_{i=1}^{k}e^{-\lambda_i(\sigma_{\Delta_i}\circ\theta_{\tau_{i-1}})}\,\varphi_i\circ X_{\tau_i}\Bigr),
\]
where C_+ is the set of all non-negative functions ϕ ∈ C, τ_1 = σ_{Δ_1}, τ_i = τ_{i−1} ∔ σ_{Δ_i} (i ≥ 2). For a semi-Markov process with n ∈ N_0
\[
P^{(n)}_{x_0}\Bigl(\prod_{i=1}^{k}e^{-\lambda_i(\sigma_{\Delta_i}\circ\theta_{\tau_{i-1}})}\,\varphi_i\circ X_{\tau_i}\Bigr)
=\int_{X^k}\prod_{i=1}^{k}f^{(n)}_{\sigma_{\Delta_i}}\bigl(\lambda_i,dx_i\mid x_{i-1}\bigr)\,\varphi_i\bigl(x_i\bigr),
\]
where P^{(0)}_x = P_x and f^{(0)}_{σ_Δ} = f_{σ_Δ}. In order to prove convergence of the previous integral to the integral
\[
\int_{X^k}\prod_{i=1}^{k}f_{\sigma_{\Delta_i}}\bigl(\lambda_i,dx_i\mid x_{i-1}\bigr)\,\varphi_i\bigl(x_i\bigr)
\]
we use the corollary of the Prokhorov theorem (see [GIH 71, p. 429]): if (μ_n) is a sequence of finite measures weakly converging to a measure μ_0 on B(X), then for any ε > 0 there is a compact set K ⊂ X such that (∀n ∈ N) μ_n(X \ K) < ε. We prove the theorem by induction. From condition (2) it follows that (∀ϕ_1 ∈ C_+)(∀x_0 ∈ X)
\[
f^{(n)}_{\sigma_{\Delta_1}}\bigl(\lambda_1,\varphi_1\mid x_0\bigr)\longrightarrow f_{\sigma_{\Delta_1}}\bigl(\lambda_1,\varphi_1\mid x_0\bigr).
\]
Let μ_n(ϕ_{k−1}) → μ_0(ϕ_{k−1}) for any ϕ_{k−1} ∈ C_+, where
\[
\mu_n\bigl(dx_{k-1}\bigr)=\int_{X^{k-2}}\prod_{i=1}^{k-2}f^{(n)}_{\sigma_{\Delta_i}}\bigl(\lambda_i,dx_i\mid x_{i-1}\bigr)\,\varphi_i\bigl(x_i\bigr)\,
f^{(n)}_{\sigma_{\Delta_{k-1}}}\bigl(\lambda_{k-1},dx_{k-1}\mid x_{k-2}\bigr).
\]
Then
\[
\Bigl|\int_X\mu_n\bigl(dx_{k-1}\bigr)\varphi_{k-1}\bigl(x_{k-1}\bigr)f^{(n)}_{\sigma_{\Delta_k}}\bigl(\lambda_k,\varphi_k\mid x_{k-1}\bigr)
-\int_X\mu_0\bigl(dx_{k-1}\bigr)\varphi_{k-1}\bigl(x_{k-1}\bigr)f_{\sigma_{\Delta_k}}\bigl(\lambda_k,\varphi_k\mid x_{k-1}\bigr)\Bigr|
\]
\[
\le\int_X\mu_n\bigl(dx_{k-1}\bigr)\varphi_{k-1}\bigl(x_{k-1}\bigr)\bigl|f^{(n)}_{\sigma_{\Delta_k}}\bigl(\lambda_k,\varphi_k\mid x_{k-1}\bigr)-f_{\sigma_{\Delta_k}}\bigl(\lambda_k,\varphi_k\mid x_{k-1}\bigr)\bigr|
+\bigl|\mu_n\bigl(\varphi'_{k-1}\bigr)-\mu_0\bigl(\varphi'_{k-1}\bigr)\bigr|,
\]
where ϕ'_{k−1}(x_{k−1}) = ϕ_{k−1}(x_{k−1}) f_{σ_{Δ_k}}(λ_k, ϕ_k | x_{k−1}) and, by condition (1), ϕ'_{k−1} ∈ C_+. Let K ⊂ X be a compact set with (∀n ∈ N) μ_n(X \ K) < ε. Then the first summand is no more than
\[
2\varepsilon M^{2}+M^{k-1}\sup_{x\in K}\bigl|f^{(n)}_{\sigma_{\Delta_k}}\bigl(\lambda_k,\varphi_k\mid x\bigr)-f_{\sigma_{\Delta_k}}\bigl(\lambda_k,\varphi_k\mid x\bigr)\bigr|,
\]
where ϕ_i ≤ M (i = 1, ..., k). The second summand, by virtue of the inductive supposition, tends to zero. In view of the arbitrariness of ε > 0 and due to uniform convergence on compact sets, the weak convergence of the family of measures is proved. Weak continuity of the limit family of measures is obvious (see theorem 7.2).

12. Markov walk

A Markov walk [KEM 70, SPI 69] with probability kernel Q(dx_1 | x) and step h > 0 is a Markov chain (see item 1.5) on the countable parametric set {0, h, 2h, 3h, ...}, controlled by the transition function Q(dx_1 | x). It can also be interpreted as a semi-Markov walk, i.e. a stepped semi-Markov process (see items 1.17 and 3.5) (P_x) for which
\[
P_x\bigl(\tau=nh,\;X_\tau\in S\bigr)=Q\bigl(\{x\}\mid x\bigr)^{n-1}\,Q\bigl(S\setminus\{x\}\mid x\bigr)\qquad(n\in\mathbb N),
\]
where τ(ξ) = inf(t ≥ 0 : ξ(0) ≠ ξ(t)). If Q({x} | x) = 0, it is obvious that P_x(τ = h, X_τ ∈ S) = Q(S | x).

13. Convergence of semi-Markov walks

Let (P_x) be a family of measures of semi-Markov walks with a corresponding family of transition functions (kernels) Q(S | x) (x ∈ X, S ∈ B(X)) of the embedded Markov chain. Then for any x ∈ X a conditional distribution of the time of the first jump, P_x(τ ∈ dt | X_τ), on the set {τ < ∞} is determined. In the following proposition we will suppose that for any considered SM family of measures (P_x) (∀x ∈ X)
Px (τ < ∞) = 1 and there exists a family of B(X2 )-measurable distribution functions F (t | x, x1 ) ((x, x1 ) ∈ X2 ), where F t | x, x1 = Px τ ≤ t | Xτ = x Q(dx1 | x)-a.s. (t ≥ 0). We call such a function a two-dimensional conditional distribution function of sojourn time at state x with passing to state x1 . Let ∞ e−λt dF t | x, x1 . f λ | x, x1 = 0
PROPOSITION 7.7. Let {(P^{(n)}_x)_{x∈X}}_{n=1}^∞ be a sequence of families of measures of semi-Markov walks with the corresponding sequence of Markov kernels Q^{(n)}(S | x) (Q^{(n)}({x} | x) = 0) and of two-dimensional conditional distribution functions {F^{(n)}(t | x0, x1)}_{n=1}^∞, for which F^{(n)}(∞ | x0, x1) = 1. Suppose there exists a sequence of steps {hn}_1^∞ (hn ↓ 0) such that:

(1) (∀x ∈ X) P̂^{(n)}_x →w P̂x, where (P̂^{(n)}_x)_{x∈X} is the family of measures of the Markov walk corresponding to the kernel Q^{(n)}(dx1 | x) and step hn, and (P̂x) is some admissible family of probability measures on (D, F);

(2) for any λ ≥ 0, uniformly on x0 and x1 in each compact set,

h_n^{−1} log f^{(n)}(λ | x0, x1) → −g(λ, x0)  (n → ∞),

where (∀λ ≥ 0) g(λ, ·) ∈ C and g(λ, x) → ∞ as λ → ∞ uniformly on all x ∈ X.

Then for any x ∈ X, P^{(n)}_x →w Px, where (Px) is an admissible family of probability measures obtained from the family (P̂x) under a random time change corresponding to the Laplace family of additive functionals (at(λ), t ≥ 0)_{λ≥0}, where at(λ) = ∫_0^t g(λ, Xs) ds (see item 6.49 and theorem 6.8). If (P̂x) ∈ SM, then (Px) ∈ SM.
Proof. First of all we prove that (at(λ), t ≥ 0) is a family of additive functionals which determines a time change preserving the semi-Markov property. Obviously (∀ξ ∈ D)(∀λ ≥ 0) a·(λ | ξ) is a non-decreasing function of t, continuous from the left, for which a0(λ | ξ) = 0, and (∀t ≥ 0) at(0 | ξ) = 0 (since F^{(n)}(∞ | x0, x1) = 1). Furthermore, (∀λ ≥ 0) at(λ) is an Ft-measurable function, since g(λ, ·) is a B(X)-measurable function. The "Laplace property" of the functional at(λ) (see item 6.61) is equivalent to complete monotonicity of exp(−g(λ, x)) with respect to λ (λ ≥ 0). However, this follows from the representation

exp(−g(λ, x0)) = lim_{n→∞} (f^{(n)}(λ | x0, x1))^{1/hn}  (hn → 0),

where (∀n ∈ N) f^{(n)}(λ | x0, x1) is a completely monotone function of λ. Actually, let us say that a function f(λ) is monotone of order n if for all k ≤ n and all λ > 0 the derivative ∂^k f(λ)/∂λ^k exists and (−1)^k (∂^k/∂λ^k) f(λ) ≥ 0. It is not difficult to show that complete monotonicity of a function f(λ) implies monotonicity of order n of the functions (f(λ))^α for all α ≥ n. Hence exp(−g(λ, x0)) is monotone of order n for every n ∈ N, i.e. completely monotone.

Furthermore, (∀c > 0)(∀Δ ∈ A) we have

a_{σΔ}(λ | ξ) ≥ a_c(λ | ξ) = ∫_0^c g(λ, Xs(ξ)) ds → ∞  (λ → ∞)

uniformly on all ξ ∈ {σΔ ≥ c} (since condition (2) is satisfied). At last, from boundedness of g(λ, ·) it follows that (∀λ ≥ 0) at(λ | ξ) → 0 (t → 0) uniformly on all ξ ∈ D. Hence the conditions of theorem 6.8 are fulfilled, i.e. the additive functionals (at(λ), t ≥ 0)_{λ≥0} determine a time change transformation under which the family of measures (P̂x) passes to a family of measures (Px), where (∀δ ∈ DS)(∀k ∈ N)(∀λi > 0)(∀ϕi ∈ C)

E_x(∏_{i=1}^k e^{−λi σ_{δ^i}} (ϕi ∘ X_{σ_{δ^i}})) = Êx(∏_{i=1}^k exp(−∫_0^{σ_{δ^i}} g(λi, Xt) dt)(ϕi ∘ X_{σ_{δ^i}}), σ_{δ^k} < ∞).
Let δm ∈ DS(rm), rm ↓ 0 and Px(⋂_{m=1}^∞ Π(δm)) = 1 (such a sequence of deducing sequences exists; see corollary 2.2). Under theorem 7.2, it is enough to prove that (∀k, m ∈ N)(∀λi > 0)(∀ϕi ∈ C)

E^{(n)}_x(u^k_m) → E_x(u^k_m),

where

u^k_m = ∏_{i=1}^k exp(−λi(σ_{δ_m^i} − σ_{δ_m^{i−1}}))(ϕi ∘ X_{σ_{δ_m^i}}).

We will prove it for k = 1; in the general case the proof is similar. For any n ∈ N we have

E^{(n)}_x(e^{−λσΔ}(ϕ ∘ X_{σΔ})) = Σ_{k=0}^∞ E^{(n)}_x(e^{−λσΔ}(ϕ ∘ X_{σΔ}), σΔ = τ^k),
where τ^k is the time of the k-th jump. From the regenerative properties of (P^{(n)}_x)_{x∈X} it follows that

E^{(n)}_x(e^{−λτ^k}(ϕ ∘ X_{τ^k}), σΔ = τ^k) = Ê^{(n)}_x(∏_{i=1}^k f^{(n)}(λ | X_{τ^{i−1}}, X_{τ^i})(ϕ ∘ X_{τ^k}), σΔ = τ^k),

where Ê^{(n)}_x(e^{−λτ}(ϕ ∘ Xτ)) = Ê^{(n)}_x(f^{(n)}(λ | X0, Xτ)(ϕ ∘ Xτ)). Since Ê^{(n)}_x and E^{(n)}_x coincide on the sigma-algebra σ(γ_{τ^k}, k ∈ N0) = σ(X_{τ^k}, k ∈ N0),

Ê^{(n)}_x(∏_{i=1}^k f^{(n)}(λ | X_{τ^{i−1}}, X_{τ^i})(ϕ ∘ X_{τ^k}), σΔ = τ^k)
= Ê^{(n)}_x(exp(Σ_{i=1}^k log f^{(n)}(λ | X_{τ^{i−1}}, X_{τ^i}))(ϕ ∘ X_{τ^k}), σΔ = τ^k)
= Ê^{(n)}_x(exp((1/hn) ∫_0^{τ^k} log f^{(n)}(λ | Xt, X_{t+hn}) dt)(ϕ ∘ X_{τ^k}), σΔ = τ^k).

Hence,

E^{(n)}_x(e^{−λσΔ}(ϕ ∘ X_{σΔ})) = Ê^{(n)}_x(exp((1/hn) ∫_0^{σΔ} log f^{(n)}(λ | Xt, X_{t+hn}) dt)(ϕ ∘ X_{σΔ}), σΔ < ∞).
Since P̂^{(n)}_x →w P̂x, the family of measures (P̂^{(n)}_x)_{n=1}^∞ is weakly compact, and (∀ε > 0)(∀t ∈ R+) there exists a compact set Kt ⊂ X such that

sup_n P̂^{(n)}_x(⋃_{s≤t} {Xs ∉ Kt}) < ε.

Besides, if P̂x(Π(Δ)) = 1, then P̂^{(n)}_x ∘ β^{−1}_{σΔ} →w P̂x ∘ β^{−1}_{σΔ}, and (∀ε > 0) there exists t1 ∈ R+ such that

sup_n P̂^{(n)}_x(t1 < σΔ < ∞) < ε.
Furthermore, we find

|Ê^{(n)}_x(exp((1/hn) ∫_0^{σΔ} log f^{(n)}(λ | Xt, X_{t+hn}) dt)(ϕ ∘ X_{σΔ})) − Êx(exp(−∫_0^{σΔ} g(λ, Xt) dt)(ϕ ∘ X_{σΔ}))|

≤ |Ê^{(n)}_x(exp((1/hn) ∫_0^{σΔ} log f^{(n)}(λ | Xt, X_{t+hn}) dt)(ϕ ∘ X_{σΔ})) − Ê^{(n)}_x(exp(−∫_0^{σΔ} g(λ, Xt) dt)(ϕ ∘ X_{σΔ}))|

+ |Ê^{(n)}_x(exp(−∫_0^{σΔ} g(λ, Xt) dt)(ϕ ∘ X_{σΔ})) − Êx(exp(−∫_0^{σΔ} g(λ, Xt) dt)(ϕ ∘ X_{σΔ}))|.
The first summand of this sum is no more than the sum

P̂^{(n)}_x(t1 − hn < σΔ < ∞) + P̂^{(n)}_x(⋃_{s≤t1} {Xs ∉ K_{t1}})
+ Ê^{(n)}_x(|exp((1/hn) ∫_0^{σΔ} log f^{(n)}(λ | Xt, X_{t+hn}) dt + ∫_0^{σΔ} g(λ, Xt) dt) − 1|, {σΔ ≤ t1 − hn} ∩ ⋂_{s≤t1} {Xs ∈ K_{t1}}),

which can be made as small as desired by a choice of t1 (first term), of K_{t1} (second term) and, using the uniform convergence on compact sets

(1/hn) log f^{(n)}(λ | Xt, X_{t+hn}) → −g(λ, Xt),

in the third term. In order to prove the theorem it is enough to show that the function exp(−∫_0^{σΔ} g(λ, Xt) dt) is continuous as a function of ξ ∈ Π(Δ). Let ξn →^{ρD} ξ. Then σΔ(ξn) → σΔ(ξ). Besides, Xt(ξn) → Xt(ξ) at all points of continuity of ξ (see [BIL 77, p. 157]) and therefore almost everywhere with respect to the Lebesgue measure. Hence, for such t, we obtain g(λ, Xt(ξn)) → g(λ, Xt(ξ)). From boundedness of g(λ, ·), by the Lebesgue dominated convergence theorem (see [NAT 74, p. 120]) we obtain

∫_0^{σΔ(ξn)} g(λ, Xt(ξn)) dt → ∫_0^{σΔ(ξ)} g(λ, Xt(ξ)) dt;
from here the continuity on Π(Δ) of the function exp(−∫_0^{σΔ} g(λ, Xt) dt) follows.
14. Example of convergence to a non-Markov SM process

Let X = R¹, and let {(P^{(n)}_x)}_{n=1}^∞ be a sequence of SM walks. The SM walk (P^{(n)}_x) has the transition function of the embedded Markov chain

Q^{(n)}((−∞, x) | x0) = G(√n (x − x0)),

where G(x) is a cumulative distribution function on R¹ with

∫_{−∞}^∞ x dG(x) = 0,  ∫_{−∞}^∞ x² dG(x) = 1.

Consider the conditional cumulative distribution function of the length of an interval between jumps of this SM walk of the form

F^{(n)}(t | x0, x1) = F(t n^{1/α}),

where 0 < α < 1 and (∀λ ≥ 0) (f(λ n^{−1/α}))^n → e^{−λ^α}, where f(λ) = ∫_0^∞ e^{−λt} dF(t), i.e. F belongs to the region of attraction of the stable law with parameter α (see [FEL 67, p. 214, 515]). Then from proposition 7.7 it follows that (∀x ∈ X) P^{(n)}_x →w P^w_x, where (P^w_x) is the family of measures of the Wiener process transformed by a random time change with the help of an independent homogeneous process with independent increments (see item 6.54) with a measure U on (Ψ, G), where

U(e^{−λX̃t}) = e^{−λ^α t}  (λ ≥ 0).

In order to apply proposition 7.7 we choose hn = 1/n. Then the sequence of Markov walks {(P̂^{(n)}_x)}_{n=1}^∞, corresponding to the sequence of kernels (Q^{(n)}(S | x)) and steps (hn), converges weakly to the Wiener process (P̂^w_x). The proof of weak convergence in the space (D, ρD) follows from theorems 3 and 4 of [GIH 71, p. 486, 488]. We note that

n log f(λ n^{−1/α}) → −λ^α,

i.e. condition (2) of proposition 7.7 is satisfied. From here (∀x ∈ X) P^{(n)}_x →w P^w_x, where (P^w_x) ∈ SM, and (∀Δ ∈ A)(∀λ > 0)(∀ϕ ∈ C)

P^w_x(e^{−λσΔ}(ϕ ∘ X_{σΔ})) = P̂^w_x(e^{−λ^α σΔ}(ϕ ∘ X_{σΔ})),
i.e. (P^w_x) is a Wiener process transformed by a random time change. It represents a continuous semi-Markov process without a deterministic component in the time run along the trace. Evidently it is non-Markovian. Its conditional distribution of the time run along the trace is expressed as

e^{−b(λ,τ)} = P^w_x(e^{−λτ} | F°) = P̂^w_x(e^{−λ^α τ} | F°) = e^{−λ^α τ}.

In order to find its Lévy measure n_τ (see theorem 6.11), we consider the equation

λ^α τ = ∫_{0+}^∞ (1 − e^{−λu}) n_τ(du).

Using the representation of these additive functionals in the form of curvilinear integrals with respect to the additive functional μτ, we obtain

λ^α μτ = ∫_0^τ ∫_{0+}^∞ (1 − e^{−λu}) ν(du | X_{τ1}) μ(dτ1),

from which we obtain an equation with respect to ν:

λ^α = ∫_{0+}^∞ (1 − e^{−λu}) ν(du | X0).

Differentiating with respect to λ we find the Laplace transformation of the measure u ν(du | X0):

∫_{0+}^∞ e^{−λu} u ν(du | X0) = α λ^{α−1}.

For α = 1/2 we obtain a tabulated value for this transformation [DIT 65, p. 216, formula (22.1)]; therefore

ν(du | X0) = du/(2√(π u³)).

A trajectory of this process resembles a continuous Cantor function: the set of non-constancy of the process, i.e. the whole half-line without the intervals of constancy of the process, has zero Lebesgue measure.
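As a numerical illustration (not from the book): for α = 1/2 the subordinator with Laplace transform e^{−t√λ} can be sampled exactly, since its increment over intrinsic time dt has the same law as dt²/(2Z²) with Z standard normal (a Lévy distribution with scale 1/2). The sketch below checks the Laplace transform by Monte Carlo and builds a time-changed Wiener path, whose observed process is constant on every jump interval of the subordinator; all variable names are illustrative assumptions.

```python
import math, random

rng = random.Random(42)

def stable_half_increment(dt, rng):
    """Increment of the one-sided 1/2-stable subordinator over intrinsic
    time dt, sampled as dt^2 / (2 Z^2) with Z standard normal, so that
    E exp(-lam * S_t) = exp(-t * sqrt(lam))."""
    z = rng.gauss(0.0, 1.0)
    return dt * dt / (2.0 * z * z)

# Check the Laplace transform E exp(-lam * S_1) ~ exp(-sqrt(lam)).
lam, n = 2.0, 100_000
mean = sum(math.exp(-lam * stable_half_increment(1.0, rng)) for _ in range(n)) / n
assert abs(mean - math.exp(-math.sqrt(lam))) < 0.01

# Time-changed Wiener path: intrinsic grid u_i, physical times T(u_i).
du, steps = 1e-3, 2000
w, T, path = 0.0, 0.0, []
for _ in range(steps):
    w += math.sqrt(du) * rng.gauss(0.0, 1.0)   # Wiener increment
    T += stable_half_increment(du, rng)        # physical time elapsed
    path.append((T, w))
# The observed process t -> W(T^{-1}(t)) is constant on each jump of T,
# so its intervals of constancy fill almost all of [0, T].
```

Since the subordinator here is pure-jump (no drift), almost all physical time is spent inside intervals of constancy, in agreement with the zero-measure set of non-constancy described above.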
Chapter 8
Representation of a Semi-Markov Process as a Transformed Markov Process
It is natural to assume that each semi-Markov process is either itself Markovian or arises from a Markov process by a time change transformation. With reference to stepped semi-Markov processes this supposition constitutes the Lévy hypothesis, with which Yackel [YAC 68] was engaged. In general the Lévy hypothesis is not justified. Thus, a trajectory of a homogeneous stepped Markov process with two states has either no more than one jump or an infinite number of jumps, while a trajectory of a homogeneous stepped semi-Markov process with two states can have any finite number of jumps; at the same time the number of jumps does not vary under a time change. There are also other specific properties of non-Markov semi-Markov processes which are invariant with respect to time changes preserving homogeneity of the process in time. The problem consists of searching for necessary and sufficient conditions for a semi-Markov process to be a Markov process transformed by a random time change. It is desirable to find the original Markov process and the corresponding time change.

In the present chapter two methods of construction of a Markov process preserving the distribution of the random trace of a given semi-Markov process are investigated. Firstly, for a regular semi-Markov process with a lambda-continuous family of probability measures (Px), sufficient conditions for such a construction are formulated in terms of lambda-characteristic operators of this process. Under some additional conditions providing uniqueness of this representation we obtain an infinitesimal operator of the corresponding Markov process and a Laplace family of additive functionals determining the time change. Secondly, for a semi-Markov process with a regular additive functional of the time run along the trace we formulate sufficient conditions for the corresponding Markov
process to be constructed in terms of the parameters of the Lévy expansion for a conditional process with independent increments (the time run process). In order to prove Markovness of the constructed semi-Markov family of measures we use the theorem on regeneration times, applied to a sequence of regeneration times of the semi-Markov process with trajectories without intervals of constancy, converging from above to a non-random time instant. The Markov process constructed by the second method is determined by its trace distribution and by its conditional distribution of the time run along the trace; we use it to derive formulae for stationary distributions of the original semi-Markov process.

8.1. Construction of a Markov process by the operator of a semi-Markov process

With the help of the Hille-Iosida theorem, a Markov process is constructed whose infinitesimal operator is determined with the help of the lambda-characteristic operator of the given semi-Markov process [VEN 75, GIH 73, DYN 63, ITO 63].

1. Definitions

Let
CK = {f ∈ C0(X) : (∀ε > 0) {|f| ≥ ε} ∈ K},

where K is the set of all compact subsets of the space X, and

CR = {f : (∃f1 ∈ CK)(∃c ∈ R) f = f1 + c}.

A Markov process is said to be:
(a) stochastically continuous if (∀f ∈ C0)(∀x ∈ X) Tt(f | x) → f(x) (t ↓ 0), where Tt(f | x) = Ex(f ∘ Xt);
(b) a Feller process if (∀t ≥ 0)(∀f ∈ C0) Tt f ∈ C0;
(c) regular if (∀t ≥ 0)(∀f ∈ CK) Tt f ∈ CK
(see [GIH 73, p. 156, 160, 170]). We call a semi-Markov process (Px) lambda-regular if (∀λ > 0)(∀f ∈ CK) Rλ f ∈ CK (see item 3.22). Let C(x) be the set of all measurable functions f : X → R continuous at the point x (x ∈ X).

2. The Hille-Iosida theorem

THEOREM 8.1. For a linear operator A to be an infinitesimal operator of a stochastically continuous, Feller-type, regular Markov process, it is sufficient to fulfill the following conditions:
(1) A is determined on a set of functions V ⊂ CR everywhere dense in the topology of uniform convergence in the space CR;
(2) (∀f ∈ V) if f(x0) ≥ 0 and (∀x ∈ X) f(x) ≤ f(x0), then A(f | x0) ≤ 0 (maximum principle);
(3) (∀λ > 0)(∀ϕ ∈ CR)(∃ψ ∈ V) (λE − A)ψ = ϕ, where E is the operator of the identical transformation.

Proof. See [ITO 63, p. 27]; [GIH 73, p. 172, 180].

3. Construction of a Markov process

We call the domain (region of determination) of an infinitesimal operator A the set of all ϕ ∈ B for which

At(ϕ | x) ≡ (1/t)(Ex(ϕ ∘ Xt) − ϕ(x)) → A(ϕ | x)  (t ↓ 0)

uniformly on x ∈ X. We will designate all values, functions and sets derived from (P̄x) by a letter with an overline. For example,

R̄λ(ϕ | x) = Ēx(∫_0^∞ e^{−λt} (ϕ ∘ Xt) dt) = ∫_D ∫_0^∞ e^{−λt} (ϕ ∘ Xt)(ξ) dt dP̄x(ξ).
THEOREM 8.2. Let (Px) ∈ SM be a λ-regular and λ-continuous family, and let the following conditions be fulfilled:
(1) (∀ϕ ∈ CK) λRλϕ → ϕ as λ → ∞ uniformly on X;
(2) (∀λ > 0) I ∈ Vλ(X); (∀x ∈ X) Aλ(I | x) < 0 and Aλ(I | x) → 0 as λ → 0;
(3) (∀x ∈ X)(∀λ > 0) Aλ is a pseudo-local operator at the point x (see item 3.27).
Then (∀λ0 > 0) there exists a non-breaking Markov process (P̄x) with infinitesimal operator A = −λ0 A0/Aλ0I, which is defined on the domain V = V0(X), and (∀ϕ ∈ C0) R̄λ0ϕ = Rλ0ϕ.

Proof. By theorems 3.3 and 3.4, (∀ϕ ∈ C0)(∀λ > 0) Rλϕ ∈ Vλ(X), and

Aλ Rλϕ = (A0 + (AλI)E) Rλϕ = ϕ AλI/λ.

Since I ∈ Vλ(X) and Aλ0I < 0, Rλϕ ∈ V0(X) = V. In this case (∀ϕ ∈ CK)(∀c ∈ R) λRλ(ϕ + c) = λRλϕ + c → ϕ + c uniformly on x ∈ X. From here and by the lambda-regularity of the family (Px) it follows that the set V is dense in CR in the topology of uniform convergence. Furthermore, (∀ϕ ∈ V(X)), if ϕ(x0) ≥ 0 and (∀x ∈ X) ϕ(x0) ≥ ϕ(x), then

A(ϕ | x0) = −λ0 A0(ϕ | x0)/Aλ0(I | x0) = (−λ0/Aλ0(I | x0)) lim_{r→0} (Ex0(ϕ ∘ Xσr, σr < ∞) − ϕ(x0))/mr(x0) ≤ 0.

For any ϕ ∈ CR, the equation (λ0E − A)ψ = ϕ has an evident solution: ψ = Rλ0ϕ. Hence, by theorem 8.1, there exists a unique Markov process (P̄x) (which is a stochastically continuous regular Feller process) such that A is its infinitesimal operator and (∀ϕ ∈ C) R̄λ0ϕ = Rλ0ϕ. The process (P̄x) is non-breaking, since R̄λ0I = 1/λ0.

8.2. Comparison of an original process with a transformed Markov process

A Laplace family of additive functionals is constructed. The Markov family constructed in the previous section is transformed by a random time change. The identity of distributions of the original and the constructed semi-Markov processes is proved.

4. Additive functional

Consider the function

at(λ) = λ0 ∫_0^t (AλI/Aλ0I)(Xs) ds  (λ, λ0 > 0).
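Proposition 8.1 below reduces the Laplace property of at(λ) to complete monotonicity, verified through the finite-difference criterion of lemma 5.6. Numerically, complete monotonicity of a candidate function, say exp(−√λ) (a hypothetical example, not from the book), can be checked by the alternating signs of its forward differences:

```python
import math

def forward_diff(f, lam, h, k):
    """k-th forward difference of f at lam with step h."""
    return sum((-1) ** (k - j) * math.comb(k, j) * f(lam + j * h)
               for j in range(k + 1))

# exp(-sqrt(lam)) is completely monotone, hence (-1)^k * diff_k >= 0
# for every order k (the finite-difference criterion).
f = lambda lam: math.exp(-math.sqrt(lam))
for k in range(9):
    d = forward_diff(f, 1.0, 0.1, k)
    assert (-1) ** k * d >= 0.0
```

The same alternating-sign check, applied to exp(−at(λ)) as a function of λ, is what the finite-difference criterion of lemma 5.6 formalizes.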
PROPOSITION 8.1. Let the conditions of theorem 8.2 be fulfilled, and let (∀x ∈ X) Aλ(I | x) → −∞ as λ → ∞. Then (at(λ)) (λ > 0) is a Laplace family of additive functionals (see items 6.48 and 6.49).

Proof. Of all the properties of a Laplace family of additive functionals only complete monotonicity is non-obvious. In order to prove the latter we can use the criterion of complete monotonicity from lemma 5.6, item 5.13, which expresses the condition of complete monotonicity in terms of finite differences. Furthermore, the function e^{−λt} is completely monotone, linear operations of integration do not change the signs of the differences, and the limit of a sequence of values of constant sign is a value of the same sign.

5. Family of transformed measures

By theorem 6.8, the family (a·(λ | ·))_{λ>0} determines a random time change, i.e. a family of measures (Qx) on B, where Qx(H) = 1 and (∀x ∈ X) P̄x = Qx ∘ q^{−1}. Since (P̄x) is a strictly Markov family, (P̄x) ∈ SM; hence, by theorem 6.9, (P̃x) ∈ SM, where P̃x = Qx ∘ u^{−1}. It remains to prove that (Px) = (P̃x). In order to prove this, it is sufficient to establish that (∀Δ ∈ A, [Δ] ∈ K)(∀n ∈ N)(∀λi > 0)(∀ϕi ∈ C)

R̃σΔ(λ1, . . . , λn; ϕ1, . . . , ϕn) = RσΔ(λ1, . . . , λn; ϕ1, . . . , ϕn),

where
R̃σΔ(λ1, . . . , λn; ϕ1, . . . , ϕn | x) ≡ Ẽx(∫_{sn < σΔ} ∏_{i=1}^n e^{−λi ti}(ϕi ∘ X_{si}) dti),

where sk = Σ_{i=1}^k ti (see item 3.21). In what follows all values, functions and sets derived from (P̃x) are designated with a tilde ("wave") above the letter.

6. Domain of identity of operators

Consider the lambda-characteristic operator of the second kind

Aλ(ϕ | x) = lim_{r→0} A^r_λ(ϕ | x)

(see item 3.26), where (∀x ∈ X)(∀λ > 0)(∀ϕ ∈ B)

A^r_λ(ϕ | x) = λ (Ex(e^{−λσr}(ϕ ∘ Xσr)) − ϕ(x)) / Ex(1 − e^{−λσr}),

and Vλ(x) is the set of all ϕ ∈ B for which the preceding limit exists; Ãλ, Ã^r_λ and Ṽλ(x) denote the analogous objects constructed from the family (P̃x).
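For a Markov process this operator can be evaluated in closed form. Assuming the standard facts for one-dimensional Brownian motion (Ex e^{−λσr} = 1/cosh(r√(2λ)) for the exit time σr from (x − r, x + r), with exit side ±r equiprobable and independent of σr), the limit is Aϕ − λϕ with A = (1/2) d²/dx²; the sketch below is a numerical illustration under these assumptions, not a construction from the book.

```python
import math

def A_r_lambda(phi, x, lam, r):
    """Second-kind lambda-characteristic operator A^r_lambda for 1-D
    Brownian motion: sigma_r is the exit time from (x - r, x + r),
    E_x exp(-lam*sigma_r) = 1/cosh(r*sqrt(2*lam)), and the exit point
    is x +/- r with probability 1/2, independently of sigma_r."""
    laplace = 1.0 / math.cosh(r * math.sqrt(2.0 * lam))
    num = laplace * 0.5 * (phi(x + r) + phi(x - r)) - phi(x)
    den = 1.0 - laplace
    return lam * num / den

phi = math.sin                                   # A phi = phi''/2 = -sin/2
x, lam = 0.3, 1.0
limit = -0.5 * math.sin(x) - lam * math.sin(x)   # A phi - lam * phi
approx = A_r_lambda(phi, x, lam, 1e-3)
assert abs(approx - limit) < 1e-4
```

With r = 10⁻³ the finite-r operator already agrees with the generator-based limit to about six digits, illustrating why, in the Markov case, the second-kind operator reduces to Aϕ − λϕ.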
PROPOSITION 8.2. Let ϕ ∈ V ∩ C, Aϕ ∈ C; let the conditions of theorem 8.2 be fulfilled; and let (∀λ > 0) AλI ∈ C. Then:
(1) ϕ ∈ Vλ ∩ Ṽλ and Ãλ(ϕ) = Aλ(ϕ);
(2) the following conditions are fulfilled:
(a) (∀K ∈ K)(∃M > 0)(∃r0 > 0)(∀x ∈ K)(∀r, 0 < r < r0) |Ã^r_λ(ϕ | x)| ≤ M;
(b) (∀x ∈ X) Ã^r_λ(ϕ | x) → Ãλ(ϕ | x) (r → 0);
(c) Ãλ(ϕ) ∈ C.
Proof. (1) We have (∀x ∈ X)(∀ϕ ∈ V ∩ C) ϕ ∈ V0 ∩ C = ⋂_{λ>0}(Vλ ∩ C), and

Aλ(ϕ | x) = −λ A0(ϕ | x)/Aλ(I | x) − λϕ(x) = λ Aλ0(I | x) A(ϕ | x)/(λ0 Aλ(I | x)) − λϕ(x).

(2) On the other hand, according to the definition of a time change with the help of a Laplace family of additive functionals (item 6.52), we have (∀x ∈ X)(∀ϕ ∈ V ∩ C, Aϕ ∈ C)

Ã^r_λ(ϕ | x) = λ (Ẽx(e^{−λσr}(ϕ ∘ Xσr)) − ϕ(x)) / Ẽx(1 − e^{−λσr})
= λ Ēx(e^{−aσr(λ)}(ϕ ∘ Xσr) − ϕ ∘ X0) / Ēx(1 − e^{−aσr(λ)})
= λ Ēx(e^{−gλ(x)σr}(ϕ ∘ Xσr) − ϕ ∘ X0) / Ēx(1 − e^{−aσr(λ)}) + λ Ēx((e^{−aσr(λ)} − e^{−gλ(x)σr})(ϕ ∘ Xσr)) / Ēx(1 − e^{−aσr(λ)})
= (λ/gλ(x)) Ā^r_{gλ(x)}(ϕ | x) αr(x) + εr(x),

where gλ = λ0 AλI/Aλ0I, αr(x) = Ēx(1 − e^{−gλ(x)σr})/Ēx(1 − e^{−aσr(λ)}) and εr(x) is the remainder. By Dynkin's lemma [ITO 63, p. 47],

Ā^r_{gλ(x)}(ϕ | x) ≡ gλ(x) Ēx(e^{−gλ(x)σr}(ϕ ∘ Xσr) − ϕ ∘ X0) / Ēx(1 − e^{−gλ(x)σr})
= (gλ(x)/Ēx(1 − e^{−gλ(x)σr})) Ēx(∫_0^{σr} e^{−gλ(x)t}((Aϕ − gλ(x)ϕ) ∘ Xt) dt)
= A(ϕ | x) − gλ(x)ϕ(x) + ε′r;

ε′r = (gλ(x)/Ēx(1 − e^{−gλ(x)σr})) Ēx(∫_0^{σr} e^{−gλ(x)t}((Aϕ − gλ(x)ϕ) ∘ Xt − (Aϕ − gλ(x)ϕ) ∘ X0) dt),

|ε′r| ≤ sup_{x1 ∈ B(x,r)} (|A(ϕ | x1) − A(ϕ | x)| + gλ(x)|ϕ(x1) − ϕ(x)|) → 0

uniformly on any compact set. It is also evident that

g_{λ,r}(x)/ḡ_{λ,r}(x) ≤ αr(x) ≤ ḡ_{λ,r}(x)/g_{λ,r}(x),

where g_{λ,r}(x) = inf_{x1 ∈ B(x,r)} gλ(x1) and ḡ_{λ,r}(x) = sup_{x1 ∈ B(x,r)} gλ(x1). Since gλ ∈ C and min{λ, λ0} ≤ gλ ≤ max{λ, λ0}, αr(x) → 1 as r → 0 uniformly on x. Furthermore,

|εr| ≤ sup |ϕ| λ Ēx(e^{−g_{λ,r}(x)σr} − e^{−ḡ_{λ,r}(x)σr}) / Ēx(1 − e^{−ḡ_{λ,r}(x)σr}) ≤ sup |ϕ| λ (ḡ_{λ,r}(x)/g_{λ,r}(x) − 1)

and, hence, εr → 0 (r → 0) uniformly on x. From here

Ã^r_λ(ϕ | x) → (λ/gλ(x))(A(ϕ | x) − gλ(x)ϕ(x))

uniformly on x on each compact set, and

Ãλ(ϕ | x) = λ Aλ0(I | x) A(ϕ | x)/(λ0 Aλ(I | x)) − λϕ(x).
Continuous Semi-Markov Processes
7. Original and transformed processes THEOREM 8.3. Let (Px ) ∈ SM; (∃(Δm )∞ Δn ) m=1 , Δm ∈ A, [Δm ] ∈ K, X = (∀m ∈ N) (Px ) be a lambda-continuous on interval [0, σΔm ) and lambda-regular family; and besides, let the following conditions be fulſlled: (1) (∀ϕ ∈ CK ) λRλ ϕ → ϕ as λ → ∞ uniformly on X; (2) (∀λ > 0) I ∈ Vλ (X), (∀x ∈ X) Aλ (I | x) < 0, Aλ (I | x) → 0 as λ → 0, Aλ (I | x) → −∞ as λ → ∞; (3) (∀λ > 0) Aλ I ∈ C; (4) (∀x ∈ X) (∀λ > 0) Aλ is a pseudo-local operator at the point x. Then for any (∀λ0 > 0) there exists and only one Markov family (P x ), from which (Px ) is being obtained as a result of a random time change, and such that (∀ϕ ∈ C0 ) Rλ0 ϕ = Rλ0 ϕ. An inſnitesimal operator of this process is A = −λ0 A0 /Aλ0 I, and time change corresponds to Laplace family of additive functionals (at (λ), t ≥ 0) (λ > 0), where (∀ξ ∈ D) t Aλ I | Xs ξ ds. at (λ | ξ) = λ0 0 Aλ0 I | Xs ξ Proof. Let ψ ∈ C0 . Check that for ϕ = RσΔ (λ; ψ) the conditions of proposition 8.2 are fulſlled. Obviously, ϕ ∈ V ∩ C0 , and since Aλ ϕ = −ψ, Aϕ = −λ0 A0 ϕ/Aλ0 I = −λ0 Aλ ϕ − ϕAλ I /Aλ0 I = λ0 Aλ I(−ψ + ϕλ)/ λAλ0 I . Therefore, Aϕ ∈ C0 . Furthermore, from pseudo-locality of operators Aλ and Aλ it follows σ (λ; ψ) = −ψ, Aλ RσΔ (λ; ψ) = Aλ R Δ and also, by proposition 8.2, Aλ RσΔ (λ; ψ) = Aλ RσΔ (λ; ψ), from here σ (λ; ψ). Aλ RσΔ (λ; ψ) = Aλ R Δ σ (λ; ·). The left part of Substitute these expressions as arguments in operator R Δ the obtained equality by propositions 3.9 and 8.2 is equal to −RσΔ (λ; ψ). The right
Transformed Markov Process
307
σ (λ; ψ). Furthermore, we prove by induction. We will use denopart is equal to −R Δ tations of item 3.24: R(1, n) = RσΔ λ1 , . . . , λn ; ϕ1 , . . . , ϕn k). Then, by theorem and so on. Let for all k ≤ n − 1 it be proved that R(1, k) = R(1, 3.4, we have 1,n . n) = −Φ 1,n , Φ1,n = Φ Aλ R(1, n) = −Φ1,n , Aλ R(1, 1
1
Besides, as well as in for n = 1 we convince ourselves that function ϕ = R(1, n) satisſes the conditions of proposition 8.2; from here we obtain Aλ1 R(1, n) = Aλ1 R(1, n) n). Again substituting these values into and consequently Aλ1 R(1, n) = Aλ1 R(1, operator RσΔ (λ, ·), we obtain −R(1, n) on the left side of the equality. Further, the n) and convergence to a limit follow from the proof of theboundedness of Arλ1 R(1, n) on the right side of the orem 3.4. From here, by proposition 3.9, we obtain −R(1, equality. 8. Note about strict semi-Markov property From theorem 6.9 it follows that conditions of theorem 8.3 are sufſcient for family (Px ) to be strictly semi-Markovian (item 6.55). Really, (Px ) turns out from a strictly Markov family with the help of time change keeping all regeneration times which belong to IMT. Since (∀τ ∈ IMT) τ ∈ RT(P x ), we obtain τ ∈ RT(Px ), i.e. (Px ) is a strictly semi-Markov process. 8.3. Construction of a Markov process by parameters of the Lévy formula An analytical form of a conditional transition generating function of a distribution of a time run along the trace from theorem 6.11 (Lévy formula) gives us another method of representation of a semi-Markov process in the form of a transformed Markov process. An idea consists of looking for a transformation providing absence of intervals of constancy in trajectories of the process, but preserving the distribution of a trace. Such a transformed process appears to be Markovian. In order to prove Markovness of the constructed family of measures the theorem on regeneration times should be used, which is applied to a sequence of Markov times converging to a determinate instant of time. In this case a time change transforming the constructed Markov process into the original semi-Markov process is evident. 9. Process without intervals of constancy Our next task consists of constructing a Markov process with the same random trace as in the original semi-Markov process. 
In other words, measures of the original semi-Markov process and the constructed Markov process have to coincide on
308
Continuous Semi-Markov Processes
the sigma-algebra F ◦ . For this aim it is sufſcient in the original process to change the distribution of a time run without varying the trace. Consider a family of kernels depending on a parameter λ > 0 of the form f G (λ, S | x) ≡ Ex exp − b(λ, τ ) ; Xτ ∈ S (G ∈ A, S ∈ B(X), x ∈ X), where τ = σG , b(λ, τ ) is equal to λAτ , and Aτ is some non-negative F ◦ -measurable additive functional. A semi-Markov process with such a parameter in a Lévy expansion (if it exists) does not contain any interval of constancy (its Lévy measure is equal to zero). An example of such an additive functional is ∞ unτ (du). μτ = aτ + 0+
We will obtain a semi-Markov process with this additive functional as a limit of a sequence of semi-Markov processes with additive functionals b(λk , τ )/λk (λk > 0, λk → 0). Thus, we denote [8.1] b(k) (λ, τ ) = λb λk , τ /λk . (k)
PROPOSITION 8.3. A family of kernels fG (λ, S | x) with parameter [8.1] is the family of transition generating functions of a semi-Markov process. (k)
Proof. Obviously, kernel fG (λ, S | x) with parameter [8.1] is an image of Laplace (k) transformation of some semi-Markov kernel FG (dt × S | x). In order to prove (k) the proposition it is enough to check that the family of kernels (FG ) satisſes the (k) conditions of proposition 4.2. Show the system of kernels (fG ) satisſes the Markov equation for transition generating functions. Actually, for G ⊂ G1 and τ1 = σG1 we have (k)
fG1 (λ, ϕ | x) ˙ 1 /λk ϕ ◦ Xτ +τ = Ex exp − λb λk , τ +τ ˙ 1 = Ex exp − λ b λk , τ + b λk , τ1 ◦ θτ /λk ϕ ◦ Xτ +τ ˙ 1 = Ex EXτ exp − λb λk , τ1 /λk ϕ ◦ Xτ1 exp − λb λk , τ /λk (k) (k) fG1 λ, ϕ | x1 fG λ, dx1 | x . = X
From here admissibility of the family of transition function follows (see item 4.13). Hence, for any x ∈ X, a projective family of measures is determined on sub-sigmaalgebras of the sigma-algebra F (see Chapter 4). The condition of proposition 4.2(e) (k) means convergence fτn (λ, X | x) → 0 as n → ∞, where τn = σδn , δ is a deducing sequence of open sets, λ > 0, x ∈ X. From the property of a deducing sequence it
follows that (∀x ∈ X)(∀λ > 0) Px-a.s. exp(−b(λ, τn)) → 0 (n → ∞). Thus, at least one of the three components a_{τn}, ∫_{0+}^1 u n_{τn}(du), ∫_1^∞ n_{τn}(du) tends to infinity Px-a.s. From here it follows that the function b^{(k)}(λ, τn) Px-a.s. tends to infinity, since

b^{(k)}(λ, τn) ≥ λ a_{τn} + (λ(1 − e^{−λk})/λk)(∫_{0+}^1 u n_{τn}(du) + ∫_1^∞ n_{τn}(du)),
i.e. condition (e) is fulfilled. The condition of proposition 4.2(d) means the convergence (∀G ∈ A1)(∀λ > 0)(∀x ∈ G)

∫_X f^{(k)}_G(λ, X | x1) f_{G−r}(0, dx1 | x) → 1  (r → 0),

where A1 ⊂ A is a rich enough family of open sets. We have

1 − ∫_X f^{(k)}_G(λ, X | x1) f_{G−r}(0, dx1 | x) = ∫_X (1 − f^{(k)}_G(λ, X | x1)) f_{G−r}(0, dx1 | x)
= ∫_X Ex1(1 − exp(−b^{(k)}(λ, σG))) f_{G−r}(0, dx1 | x).

For λ < λk, using the inequality (1 − exp(−λu))/λ > (1 − exp(−λk u))/λk, we obtain b^{(k)}(λ, σG) ≤ b(λ, σG), from which the required property follows. For λ > λk we use the inequality 1 − exp(−λu/λk) ≤ λ(1 − exp(−u))/λk and again obtain the required property, since for the original process it is fulfilled for a rich enough class of open sets. Hence, for any x ∈ X the constructed projective family of measures can be extended to a probability measure P^{(k)}_x on (D, F) and, by theorem 4.6, the family of these measures is semi-Markovian.

Consider a family of semi-Markov transition generating functions (f̄G) (G ∈ A), where

f̄G(λ, S | x) ≡ Ex(exp(−λμτ); Xτ ∈ S).

PROPOSITION 8.4. The family of kernels f̄G(λ, S | x) is a family of transition generating functions of some semi-Markov process.
Proof. This family of generating functions for any x ∈ X determines a projective family of probability measures

P̄x ∘ (β_{σ_r^0}, . . . , β_{σ_r^n})^{−1}  (r > 0, n ≥ 1),

where βτ(ξ) = ∞ if τ(ξ) = ∞, and βτ(ξ) = (τ(ξ), X_{τ(ξ)}(ξ)) if τ(ξ) < ∞. Since Px-a.s. b^{(k)}(λ, τ) → λμτ as k → ∞, the distribution f^{(k)}_G(λ, · | x) converges weakly to f̄G(λ, · | x) and hence, by theorem 7.3, the sequence of distributions (P^{(k)}_x ∘ (β_{σ_r^0}, . . . , β_{σ_r^n})^{−1}) converges weakly to (P̄x ∘ (β_{σ_r^0}, . . . , β_{σ_r^n})^{−1}). Under theorem 7.2, from this convergence, firstly, weak compactness of the family of distributions (P^{(k)}_x) follows and, secondly, weak convergence of this sequence to the limit P̄x follows (it is a projective limit of the family of distributions (P̄x ∘ (β_{σ_r^0}, . . . , β_{σ_r^n})^{−1})), which is a probability measure on D; and the family of these measures is consistent as a family of distributions of a semi-Markov process.

10. Markovness of the transformed family of measures

By our construction, the obtained semi-Markov family of measures determines a process without intervals of constancy. Our hypothesis is that, without any additional conditions, such a process is Markovian. At the present time we do not know how to prove this hypothesis. In order to prove Markovness of the constructed process we use an assumption on regularity of the original process.

THEOREM 8.4. Let for any λ > 0, G ∈ A and continuous bounded function ϕ on X the transition generating function fG(λ, ϕ | ·) be continuous. Then: 1) the semi-Markov process with the transition generating function f^{(k)}_G(λ, S | x) is a Markov process (see proposition 8.3); 2) the semi-Markov process with the transition generating function f̄G(λ, S | x) is a Markov process (see proposition 8.4).
Proof. On the basis of theorem 7.2 we conclude that (P^{(k)}_x) is a weakly continuous family. Let us prove that the semi-Markov family of measures (P̄x) (and also (P^{(k)}_x) for any k) is also weakly continuous. By the Lévy representation for a first exit time of the transformed process from a set G, P̄x-a.s. this time is equal to μ_{σG}. On the other hand, by the Lévy formula,

σG = a_{σG} + ∫_{0+}^∞ u P(du, σG).

In this expression the summands on the right-hand side depend monotonically on G. In particular, on replacing G by G−r (G+r) they both increase (decrease) as r ↓ 0. The same conclusion holds with respect to the intensity of the Poisson measure, i.e. the Lévy measure n(du, σG), and also with respect to the additive functional b(λ, σG). Hence P̄x(Π(G)) = 1 for those G for which Px(Π(G)) = 1, where Π(G) is the set of trajectories ξ exiting correctly from G. The same relates to any finite sequence of open sets and the set of functions exiting correctly from these sets. We obtain that each function of the form

∏_{i=1}^n exp(−λ μ_{τi}) ϕi(X_{τi})  (τi = σ_{G1,...,Gi}, ϕi ∈ C0(X))

is continuous on a set of full measure and, consequently, the integral of this function with respect to the measure Px is continuous in x. This implies lambda-continuity of the family (P̄x) (and also its weak continuity). Furthermore, for a given deducing sequence δ and t > 0 we define a Markov time τδt:

τδt(ξ) = σ_{δ^k}(ξ) ⟺ σ_{δ^{k−1}}(ξ) < t ≤ σ_{δ^k}(ξ),

which is a regeneration time for the semi-Markov family (P̄x). Besides, the sequence of such Markov times constructed for δ with decreasing ranks converges P̄x-a.s. to t as rank δ → 0. From here it follows that the fixed time t ∈ R+ is a regeneration time of the family (P̄x), i.e. this family is Markov.

The possibility of constructing such an associated Markov process, for which any intrinsic Markov time τ coincides with μτ, the conditional expectation of τ with respect to the trace in the original semi-Markov process, implies interesting consequences for the distribution of the value μτ. It is well known that for a Markov process of diffusion type the ratio Ex((σG)²)/Ex(σG) tends to zero as G ↓ x. For a semi-Markov process of diffusion type which is not Markovian this fact generally is not true; only boundedness of this ratio can be proved. However, for a semi-Markov process, convergence to zero of the ratio Px((μ_{σG})²)/Px(μ_{σG}) does hold, since for its associated Markov process μ_{σG} = σG. From here, in particular, the validity of representation [8.5] (see below) follows, as well as that of a similar assumption used while deriving the Itô formula.

8.4. Stationary distribution of a semi-Markov process

Representation of a semi-Markov process as a Markov process transformed by a time change makes it possible to solve the problem of existence of a stationary distribution for an SM process and to express it in terms of the associated Markov process. In the present section a stationary distribution for a continuous SM process is investigated.
A proof of the existence of such a distribution is based on well-known theorems on stationary distributions for stepped SM processes (see Koroljuk and Turbin [KOR 76], Shurenkov [SHU 89]). For this aim we use uniform convergence of a sequence of semi-Markov walks to a continuous semi-Markov process.

11. Three-dimensional distribution

In what follows we will be interested in intervals of constancy of a trajectory ξ. Recall that Rt−(ξ) and Rt+(ξ) are the lengths of the left and the right parts of an
interval of constancy of trajectory ξ, covering a point t; if such an interval is absent, we assume Rt− (ξ)= Rt+ (ξ) = 0 (see 1.18). Consider distribution Pxt (B) = Px (θt−1 B) (B ∈ F). A semi-Markov process is said to be ergodic if for any x ∈ X and ξ ∈ D Px -a.s. there exists a limit t limt→∞ (1/t) 0 f (θt ξ) dt, which is not dependent on a trajectory where f is a bounded measurable function on D. We will consider a rather simpler variant of ergodicity when distribution Pxt tends weakly to some distribution P ∞ which is the same for different x. For a Markov process we have Pxt (B) = Ex (PXt (B)), and thus convergence of a measure to a limit reduces to convergence to a limit of its one-dimensional distribution Ht (S | x) ≡ Px (Xt ∈ S) (S ∈ B(X)). Evidently, for a weakly continuous Markov w family of measures (Px ) from weak convergence Ht (· | x) → H∞ it follows that Py (f )H∞ (dy), Pxt (f ) −→ X
where $f$ is a continuous bounded functional on $D$. For a semi-Markov process a fixed instant of time is not (in general) its regeneration time, and the preceding arguments are not applicable. However, for any $t\in\mathbb R_+$ a sequence of regeneration times of this process can be constructed which converges from above to the function $t+R_t^+$. It follows that if the family of measures $(P_x)$ of a semi-Markov process is weakly continuous, the time $t+R_t^+$ is a regeneration time for this process. On the other hand, for some classes of semi-Markov processes the pair $(X_t,R_t^-)$ is a Markov process (see, e.g., Gihman and Skorokhod [GIH 73]). Taking into account the special role of the processes $R_t^-$ and $R_t^+$, we will investigate a three-dimensional distribution
$$G_t(S\times A_1\times A_2\mid x)=P_x(X_t\in S,\;R_t^-\in A_1,\;R_t^+\in A_2)\qquad(A_1,A_2\in\mathcal B(\mathbb R_+)).$$
Evidently, the measure $P_x^t$ for a semi-Markov process with a weakly continuous family of measures $(P_x)$ tends weakly to a limit if for any $x\in X$, as $t\to\infty$, the three-dimensional distribution $G_t(\cdot\mid x)$ tends weakly to some distribution $G_\infty$ which is the same for any $x$.

12. Shurenkov theorem

Consider a transition function $F(dt\times dx_1\mid x)$ of a stepped semi-Markov process. Let $(\forall x\in X)$ $F(\mathbb R_+\times X\mid x)=1$, and let $F^n$ be the $n$-th iteration of the kernel $F\equiv F^1$. The marginal kernel $H(dy\mid x)=F(\mathbb R_+\times dy\mid x)$ is a transition function of the embedded Markov chain. A Markov chain is said to be ergodic if for any $x\in X$
$$\frac1n\sum_{k=1}^n H^k(\cdot\mid x)\xrightarrow{w}H^\infty,$$
where $H^k$ is the $k$-th iteration of the kernel $H\equiv H^1$, and $H^\infty$ is a probability measure on $X$ for which the stationarity condition is fulfilled:
$$H^\infty(A)=\int_X H(A\mid x)\,H^\infty(dx)\qquad(A\in\mathcal B(X)).$$
Let
$$U(A\times S\mid x)=\sum_{n=0}^\infty F^n(A\times S\mid x),$$
where $F^0(A\times S\mid x)=I(A\times S\mid x)$, an indicator function. Let $Q^n(A\mid x)=F^n(A\times X\mid x)$ $(n\ge0,\ A\in\mathcal B(\mathbb R_+))$ and $Q\equiv Q^1$. Let $q_n(t\mid x)$ $(t\ge0)$ be a density of an absolutely continuous component of measure $Q^n(\cdot\mid x)$. The family of distributions $(Q^n(\cdot\mid x))$ $(x\in X)$ is said to be non-singular with respect to a measure $m$ on $X$ if
$$\int_X\int_0^\infty q_n(t\mid x)\,dt\,m(dx)>0.$$
Denote
$$M=\int_X\int_0^\infty t\,Q(dt\mid x)\,H^\infty(dx).$$
For a stepped semi-Markov process the following theorem is fair.

THEOREM 8.5. Let the embedded Markov chain of a given semi-Markov process be ergodic with a stationary probability distribution $H^\infty$. Let $0<M<\infty$ and, besides, let there exist $n\ge1$ such that the family of distributions $(Q^n(\cdot\mid x))$ is non-singular with respect to the measure $H^\infty$. Let for a measurable real function $\psi$, determined on $\mathbb R_+\times X$, the following conditions be fulfilled:
(1) $H^\infty_*\{x\in X:\ \sup_{t\ge0}\int_X\int_0^t\psi(t-s,y)\,U(ds\times dy\mid x)<\infty\}>0$, where $H^\infty_*$ is the internal measure of the measure $H^\infty$;
(2) $\lim_{t\to\infty}\psi(t,x)=0$ for $H^\infty$-almost all $x$;
(3) $\int_X\int_0^\infty\psi(t,x)\,dt\,H^\infty(dx)<\infty$;
(4) the family of $\mathcal B(X)$-measurable functions $(\psi(t,\cdot))$ $(t\ge0)$ has an $H^\infty$-integrable majorant.
Then it is fair
$$\lim_{t\to\infty}\int_X\int_0^t\psi(t-s,y)\,U(ds\times dy\mid x)=\frac1M\int_X\int_0^\infty\psi(t,y)\,dt\,H^\infty(dy).\qquad[8.2]$$

Proof. See Shurenkov [SHU 89, p. 127].
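The limit [8.2] is a key-renewal-type statement. In the simplest degenerate case — one state with Exp($\mu$) holding time, so that the renewal measure is $U(ds)=\delta_0(ds)+\mu\,ds$ and $M=1/\mu$ — both sides can be evaluated in closed form. The following sketch is our own illustration (not from the book) and checks them numerically for $\psi(t)=e^{-t}$:

```python
import math

mu = 2.0                               # rate of the Exp holding time; M = 1/mu

def psi(t):
    return math.exp(-t)

def lhs(t):
    # int_0^t psi(t-s) U(ds) with U(ds) = delta_0(ds) + mu*ds:
    # the atom at 0 contributes psi(t); the density part gives mu * int_0^t psi(s) ds.
    return psi(t) + mu * (1.0 - math.exp(-t))

M = 1.0 / mu
rhs = (1.0 / M) * 1.0                  # (1/M) * int_0^inf e^{-t} dt = mu
```

For large $t$ the left-hand side indeed approaches $\mu$, in agreement with [8.2].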
13. Ergodicity and stationary distribution

Now we return to general semi-Markov processes. According to the results of Shurenkov [SHU 89], in order to prove ergodicity of a process it is sufficient to find embedded in it an appropriate Markov renewal process (a so-called Markov interference of chance). Let $\tau\equiv\tau_1$ be a regeneration time of a semi-Markov family of probability measures, and let $(\tau_n)$ be the sequence of iterated Markov times. In order to apply the Shurenkov theorem we have to assume that for any $x$, $P_x$-a.s., $\tau_n<\infty$ and $\tau_n\to\infty$. This would be the case, for example, if $\tau=\sigma_A\dotplus\sigma_B$, where $A$ and $B$ are two open subsets of $X$ such that $X\setminus A\subset B$. Note that an alternating sequence of sets $(A,B,A,B,\ldots)$ is a partial case of a deducing sequence. Thus, if all $\tau_n$ are finite, the sequence of pairs $(\tau_n,X_{\tau_n})$ $(n\ge0,\ \tau_0=0)$ composes a Markov renewal process. Suppose that the semi-Markov transition function $F_\tau(A\times S\mid x)\equiv P_x(\tau\in A,\ X_\tau\in S)$ and its derived kernels $F_\tau^n$, $H_\tau^n$, $Q_\tau^n$ satisfy the conditions of theorem 8.5 with a limit distribution $H_\tau^\infty$ and an average length of the interval between neighboring regeneration times
$$M_\tau=\int_X\int_0^\infty t\,Q_\tau(dt\mid x)\,H_\tau^\infty(dx).$$

Consider the three-dimensional distribution. We have
$$G_t(S\times A_1\times A_2\mid x)=\sum_{k=0}^\infty P_x(X_t\in S,\;R_t^-\in A_1,\;R_t^+\in A_2,\;\tau_k\le t<\tau_{k+1})$$
$$=\sum_{k=0}^\infty\int_0^t P_x(\tau_k\in ds,\;X_t\in S,\;R_t^-\in A_1,\;R_t^+\in A_2,\;t<s+\tau\circ\theta_{\tau_k})$$
$$=\sum_{k=0}^\infty\int_0^t P_x(\tau_k\in ds,\;X_{t-s}\circ\theta_{\tau_k}\in S,\;R_{t-s}^-\circ\theta_{\tau_k}\in A_1,\;R_{t-s}^+\circ\theta_{\tau_k}\in A_2,\;t-s<\tau\circ\theta_{\tau_k})$$
$$=\sum_{k=0}^\infty\int_0^t E_x\bigl(P_{X(\tau_k)}(X_{t-s}\in S,\;R_{t-s}^-\in A_1,\;R_{t-s}^+\in A_2,\;t-s<\tau);\;\tau_k\in ds\bigr)$$
$$=\int_X\int_0^t P_y(X_{t-s}\in S,\;R_{t-s}^-\in A_1,\;R_{t-s}^+\in A_2,\;t-s<\tau)\,U(ds\times dy\mid x).$$
Let $\psi_\tau(t,y)=P_y(X_t\in S,\;R_t^-\in A_1,\;R_t^+\in A_2,\;t<\tau)$. Check for this function the fulfillment of conditions of theorem 8.5. It is a bounded function converging to zero as $t\to\infty$. Since $\psi_\tau(t,y)\le P_y(t<\tau)$, the integral included in condition (1) is not more than 1; the integral of $\psi_\tau$ in $t$ is not more than $m_\tau(y)\equiv P_y(\tau)$ and therefore it
is $H_\tau^\infty$-a.s. finite. Hence, there exists a limit of the probability $G_t(S\times A_1\times A_2\mid x)$ as $t\to\infty$, and this limit is equal to
$$G_\infty(S\times A_1\times A_2)=\frac1{M_\tau}\int_X\int_0^\infty\psi_\tau(t,x)\,dt\,H_\tau^\infty(dx).$$

14. Representation in terms of Lévy expansion

Our next aim is to express the integral $\int_0^\infty\psi_\tau(t,x)\,dt$ in terms of the Lévy expansion for a transition generating function of a semi-Markov process. For $\lambda>0$ and a bounded function $\varphi$ we consider the Lévy expansion for the transition generating function $f_\tau$:
$$f_\tau(\lambda,\varphi\mid x)\equiv E_x\bigl(\exp(-\lambda\tau)\,\varphi(X_\tau)\bigr)=E_x\bigl(\exp(-b(\lambda,\tau))\,\varphi(X_\tau)\bigr),$$
where $b(\lambda,\tau)$ is an $\mathcal F^\circ$-measurable additive functional
$$b(\lambda,\tau)=\lambda a_\tau+\int_{0+}^\infty(1-e^{-\lambda u})\,n_\tau(du)+B(\lambda,\tau)\qquad[8.3]$$
(see theorem 6.11), where $a_\tau$ is an $\mathcal F^\circ$-measurable additive functional, and $n_\tau(S)$ $(S\in\mathcal B(\mathbb R_+))$ is an $\mathcal F^\circ$-measurable additive functional which is a measure in the argument $S$ (Lévy measure). Measure $n_\tau$ is an intensity of a Poisson field $\mathcal P_\tau$ on a sequence of states (trace) determining position and length of "Poisson" intervals of constancy. The additive functional $B(\lambda,\tau)$ is determined by "conditionally fixed" intervals of constancy, and for a given measure $P_x$ it is of the form
$$B(\lambda,\tau)=-\sum_{\tau_i<\tau}\log P_x\bigl(e^{-\lambda\sigma_0}\circ\theta_{\tau_i}\mid\mathcal F^\circ\bigr),$$
where $(\tau_i\dotplus\sigma_0)$ is either a finite or an infinite countable collection of $\mathcal F^\circ$-measurable Markov times with respect to a widened stream of sigma-algebras, and $\sigma_0$ is the first exit time from an initial point. In the simple case which we will consider below, the times $\tau_i$ themselves are Markov times, namely first hitting times of some points of the space of states. Sojourn times at these points are independent random values with distributions depending on $i\in\mathbb N$. From the previous expansion of the generating function the Lévy representation for a Markov time $\tau$ follows (see item 6.62, formula [6.4]):
$$\tau=a_\tau+\int_{0+}^\infty u\,\mathcal P_\tau(du)+\sum_{\tau_i<\tau}\sigma_0\circ\theta_{\tau_i}.\qquad[8.4]$$
Furthermore, everywhere except a concluding example, we will suppose that conditionally fixed intervals of constancy are absent.
Consider an integral
$$G_\infty(\varphi;A_1\times A_2)\equiv\int_X\varphi(x)\,G_\infty(dx\times A_1\times A_2),$$
where $\varphi$ is a continuous bounded real function on $X$. For this integral the following representation is fair:
$$G_\infty(\varphi;A_1\times A_2)=\frac1{M_\tau}\int_X\int_0^\infty\psi_\tau(t,x)\,dt\,H_\tau^\infty(dx),$$
where $\psi_\tau(t,x)=P_x(\varphi(X_t);\;R_t^-\in A_1,\;R_t^+\in A_2,\;t<\tau)$. We have
$$\int_0^\infty\psi_\tau(t,x)\,dt=E_x\int_0^\tau\varphi(X_t)\,I_{A_1}(R_t^-)\,I_{A_2}(R_t^+)\,dt.$$
We split the interval $[0,\tau)$, depending on a trajectory, into a finite number $N_d$ of parts by Markov times $\bar\tau_k$ in such a manner that on each of these intervals the trajectory does not exit from some open set of a small diameter $d$. Then the function $\varphi(X_t)$ has a small variation on such an interval. Thus, the previous value can be found as a limit of sums
$$\sum_{k=1}^{N_d}E_x\Bigl(\varphi(X_{\bar\tau_{k-1}})\int_{\bar\tau_{k-1}}^{\bar\tau_k}I_{A_1}(R_t^-)\,I_{A_2}(R_t^+)\,dt\Bigr).$$
Using Lévy representation [8.4] for the times $\bar\tau_k$, we can write the integrals in another view, at least for two partial cases. If $A_1=A_2=\{0\}$,
$$\int_{\bar\tau_k}^{\bar\tau_{k+1}}I_{A_1}(R_t^-)\,I_{A_2}(R_t^+)\,dt=a_{\bar\tau_{k+1}}-a_{\bar\tau_k}.$$
If $A_1=(r,\infty)$ and $A_2=(s,\infty)$,
$$\int_{\bar\tau_k}^{\bar\tau_{k+1}}I_{A_1}(R_t^-)\,I_{A_2}(R_t^+)\,dt=\int_{s+r}^\infty(u-s-r)\,\mathcal P_{[\bar\tau_k,\bar\tau_{k+1})}(du),$$
where $\mathcal P_{[\bar\tau_k,\bar\tau_{k+1})}(du)=\mathcal P_{\bar\tau_{k+1}}(du)-\mathcal P_{\bar\tau_k}(du)$. Using semi-Markov properties of the process, the Poisson measure can be replaced by the measure of its intensity. As a result we can write the sum
$$E_x\Bigl(\sum_{k=1}^{N_d}\varphi(X_{\bar\tau_{k-1}})\,c_{[\bar\tau_k,\bar\tau_{k+1})}\Bigr),$$
where $c_{[\bar\tau_k,\bar\tau_{k+1})}=c_{\bar\tau_{k+1}}-c_{\bar\tau_k}$ and
$$c_\tau=\begin{cases}a_\tau,&A_1=A_2=\{0\},\\[2pt]\displaystyle\int_{r+s}^\infty(u-r-s)\,n_\tau(du),&A_1=(r,\infty),\ A_2=(s,\infty).\end{cases}$$
Passing to a limit as $d\to0$, we obtain a representation of the integrand in the form of a curvilinear integral with respect to the $\mathcal F^\circ$-measurable additive functional $c_\tau$ (see item 6.65):
$$\int_0^\infty\psi_\tau(t,x)\,dt=E_x\int_0^\tau\varphi(X_{\tau_1})\,dc_{\tau_1}.$$
This representation accepts a more simple form if the additive functionals $a_\tau$ and $n_\tau(B)$ themselves are representable in the form of curvilinear integrals with respect to the same additive functional. It is not difficult to prove that the function
$$A_\lambda I(x)\equiv\lim_{\Delta\downarrow x}P_x\bigl(\exp(-\lambda\sigma_\Delta)-1\bigr)/P_x(\sigma_\Delta)$$
can be represented in Lévy form
$$-A_\lambda I(x)=\lambda\alpha(x)+\int_{0+}^\infty(1-e^{-\lambda u})\,\nu(du\mid x),$$
where $\alpha(x)$ is some non-negative function, and $\nu(du\mid x)$ is some family of measures depending on $x\in X$. It is not difficult to give sufficient conditions for the function $A_\lambda I$, continuous in some neighborhood of the point $x$, to have the following representation:
$$b(\lambda,\tau)=-\int_0^\tau A_\lambda I\circ X_{\tau_1}\,d\mu_{\tau_1},\qquad[8.5]$$
where $\mu$ is the additive functional $\mu_\tau=a_\tau+\int_0^\infty u\,n_\tau(du)$ (see item 10, note on theorem 8.4). In what follows we suppose that this representation is fair, and thus
$$b(\lambda,\tau)=\int_0^\tau\Bigl(\lambda\alpha(X_{\tau_1})+\int_{0+}^\infty(1-e^{-\lambda u})\,\nu(du\mid X_{\tau_1})\Bigr)d\mu_{\tau_1}.\qquad[8.6]$$
In this case
$$G_\infty(\varphi;A_1\times A_2)=\frac1{M_\tau}\int_X E_x\Bigl(\int_0^\tau\varphi(X_{\tau_1})\,\gamma(X_{\tau_1})\,d\mu_{\tau_1}\Bigr)H_\tau^\infty(dx),\qquad[8.7]$$
where
$$\gamma(x)=\begin{cases}\alpha(x),&A_1=A_2=\{0\},\\[2pt]\displaystyle\int_{r+s}^\infty(u-r-s)\,\nu(du\mid x),&A_1=(r,\infty),\ A_2=(s,\infty).\end{cases}$$
Thus, the three-dimensional stationary distribution of a semi-Markov process is expressed through a one-dimensional process:
$$G_\infty(\varphi;A_1\times A_2)=G_\infty(\varphi\gamma;\mathbb R_+\times\mathbb R_+).$$
We have obtained a stationary distribution characterizing (together with the family of probability measures $(P_x)$) the stationary measure $P^\infty$ of a semi-Markov process. In order to give this distribution a more closed form and get rid of the formal dependence on the choice of $\tau$, we use an associated Markov process and express the three-dimensional stationary distribution of a semi-Markov process through a stationary distribution of a Markov process.

15. Markov representation of stationary distribution

Consider the original semi-Markov family of measures $(P_x)$ and the Markov families of measures $(P_x^{(k)})$ and $(\overline P_x)$ constructed in the previous section. An expectation of a Markov time can be found by differentiating the corresponding Laplace transformation image with respect to the parameter $\lambda$ at $\lambda=0$. From here
$$m_\tau(x)=E_x\Bigl(a_\tau+\int_{0+}^\infty u\,n_\tau(du)\Bigr),$$
$$m_\tau^{(k)}(x)=E_x\Bigl(a_\tau+\frac1{\lambda_k}\int_{0+}^\infty(1-e^{-\lambda_k u})\,n_\tau(du)\Bigr),\qquad \overline m_\tau(x)=\overline E_x(\mu_\tau)=m_\tau(x).$$
Let $\mu_\tau^{(k)}=E_x^{(k)}(\tau\mid\mathcal F^\circ)$ and $\overline\mu_\tau=\overline E_x(\tau\mid\mathcal F^\circ)$. From here
$$\mu_\tau^{(k)}=a_\tau+\frac1{\lambda_k}\int_{0+}^\infty(1-e^{-\lambda_k u})\,n_\tau(du),\qquad \overline\mu_\tau=\mu_\tau.$$
According to the assumption, the additive functional $\mu_\tau^{(k)}$ can be written in the form of a curvilinear integral:
$$\mu_\tau^{(k)}=\int_0^\tau\Bigl(\alpha(X_{\tau_1})+\frac1{\lambda_k}\int_{0+}^\infty(1-e^{-\lambda_k u})\,\nu(du\mid X_{\tau_1})\Bigr)d\mu_{\tau_1}.$$
From the arbitrariness of $\tau$ we conclude that the following rule of replacing an additive functional is fair:
$$d\mu_{\tau_1}^{(k)}=\Bigl(\alpha(X_{\tau_1})+\frac1{\lambda_k}\int_{0+}^\infty(1-e^{-\lambda_k u})\,\nu(du\mid X_{\tau_1})\Bigr)d\mu_{\tau_1},$$
$$d\mu_{\tau_1}=\Bigl(\alpha(X_{\tau_1})+\frac1{\lambda_k}\int_{0+}^\infty(1-e^{-\lambda_k u})\,\nu(du\mid X_{\tau_1})\Bigr)^{-1}d\mu_{\tau_1}^{(k)}.$$
Hence the stationary three-dimensional distribution of a semi-Markov process [8.7] obtained above can be represented in two forms. Firstly, it is
$$G_\infty(\varphi;A_1\times A_2)=\frac1{M_\tau}\int_X E_x^{(k)}\Bigl(\int_0^\tau\varphi(X_{\tau_1})\,\gamma(X_{\tau_1})\,\beta^{(k)}(X_{\tau_1})\,d\mu_{\tau_1}^{(k)}\Bigr)H_\tau^\infty(dx),$$
where
$$\beta^{(k)}(x)=\Bigl(\alpha(x)+\frac1{\lambda_k}\int_{0+}^\infty(1-e^{-\lambda_k u})\,\nu(du\mid x)\Bigr)^{-1}.$$
Secondly,
$$G_\infty(\varphi;A_1\times A_2)=\frac1{M_\tau}\int_X\overline E_x\Bigl(\int_0^\tau\varphi(X_{\tau_1})\,\gamma(X_{\tau_1})\,d\overline\mu_{\tau_1}\Bigr)H_\tau^\infty(dx).$$
Since the curvilinear integral is $\mathcal F^\circ$-measurable and $P_x$, $P_x^{(k)}$ and $\overline P_x$ coincide on $\mathcal F^\circ$, then, passing from a curvilinear integral to a common one, the internal integrals in these representations can be represented as
$$E_x^{(k)}\Bigl(\int_0^\tau\varphi(X_t)\,\gamma(X_t)\,\beta^{(k)}(X_t)\,dt\Bigr),\qquad \overline E_x\Bigl(\int_0^\tau\varphi(X_t)\,\gamma(X_t)\,dt\Bigr),$$
correspondingly. Assume that the families of measures $(P_x^{(k)})$ and $(\overline P_x)$, and the functions $\psi^{(k)}(t,x)\equiv E_x^{(k)}(\varphi(X_t)\gamma(X_t)\beta^{(k)}(X_t))$ and $\overline\psi(t,x)\equiv\overline E_x(\varphi(X_t)\gamma(X_t))$, satisfy the conditions of theorem 8.5 for the same Markov time $\tau$. Obviously, these assumptions touch upon only the time run along the trace, while the trace itself remains fixed. In particular, these Markov processes correspond to the same embedded Markov chain with stationary distribution $H_\tau^\infty$. In this case there exist one-dimensional stationary distributions $G_\infty^{(k)}$ and $\overline G_\infty$ of the constructed Markov processes, and the integrals with respect to these distributions are
$$G_\infty^{(k)}(\varphi\gamma\beta^{(k)})=\frac1{M_\tau^{(k)}}\int_X E_x^{(k)}\Bigl(\int_0^\tau\varphi(X_t)\,\gamma(X_t)\,\beta^{(k)}(X_t)\,dt\Bigr)H_\tau^\infty(dx),$$
$$\overline G_\infty(\varphi\gamma)=\frac1{\overline M_\tau}\int_X\overline E_x\Bigl(\int_0^\tau\varphi(X_t)\,\gamma(X_t)\,dt\Bigr)H_\tau^\infty(dx),$$
correspondingly, where
$$M_\tau^{(k)}=\int_X m_\tau^{(k)}(x)\,H_\tau^\infty(dx),\qquad \overline M_\tau=\int_X\overline m_\tau(x)\,H_\tau^\infty(dx)=M_\tau.$$
From here the interesting Markov representations of the stationary distribution of a semi-Markov process follow:
$$G_\infty(\varphi;A_1\times A_2)=\frac{M_\tau^{(k)}}{M_\tau}\,G_\infty^{(k)}(\varphi\gamma\beta^{(k)}),\qquad G_\infty(\varphi;A_1\times A_2)=\overline G_\infty(\varphi\gamma).$$
In this case we need not make too strong suppositions on the Markov families of measures as in theorem 8.5. We will prove that the distributions $G_\infty^{(k)}$ and $\overline G_\infty$ are already stationary distributions of the constructed Markov processes.

LEMMA 8.1. For any semi-Markov family $(P_x)$ with a stationary distribution $H_\tau^\infty$ of an embedded Markov chain, corresponding to a regeneration time $\tau$, for any $n\ge1$ and a measurable function $\varphi$ it is fair
$$\int_X\int_0^\infty E_x(\varphi(X_t);\,t<\tau_n)\,dt\,H_\tau^\infty(dx)=n\int_X\int_0^\infty E_x(\varphi(X_t);\,t<\tau)\,dt\,H_\tau^\infty(dx),$$
in particular
$$M_{\tau_n}\equiv\int_X E_x(\tau_n)\,H_\tau^\infty(dx)=nM_\tau.$$
Proof. We have
$$\int_0^\infty E_x(\varphi(X_t);\,t<\tau_{n+1})\,dt=\int_0^\infty E_x(\varphi(X_t);\,t<\tau_n)\,dt+\int_0^\infty E_x(\varphi(X_t);\,\tau_n\le t<\tau_{n+1})\,dt.$$
The second integral is equal to
$$\int_0^\infty\int_0^t E_x(\varphi(X_t);\,\tau_n\in dt_1,\;t<t_1+\tau\circ\theta_{\tau_n})\,dt.$$
Substituting into the integral $X_t=X_{t-t_1}\circ\theta_{\tau_n}$ and using the regeneration condition for the Markov time $\tau_n$, we obtain
$$\int_0^\infty\int_0^t E_x\bigl(E_{X(\tau_n)}(\varphi(X_{t-t_1});\,t-t_1<\tau);\,\tau_n\in dt_1\bigr)\,dt$$
$$=\int_0^\infty\int_0^\infty E_x\bigl(E_{X(\tau_n)}(\varphi(X_t);\,t<\tau);\,\tau_n\in dt_1\bigr)\,dt=\int_0^\infty E_x\bigl(E_{X(\tau_n)}(\varphi(X_t);\,t<\tau)\bigr)\,dt.$$
Integrating both integrals in $x$ and using the stationarity property of the measure $H_\tau^\infty$, we obtain
$$\int_X\int_0^\infty E_x(\varphi(X_t);\,t<\tau_{n+1})\,dt\,H_\tau^\infty(dx)$$
$$=\int_X\int_0^\infty E_x(\varphi(X_t);\,t<\tau_n)\,dt\,H_\tau^\infty(dx)+\int_X\int_0^\infty E_x(\varphi(X_t);\,t<\tau)\,dt\,H_\tau^\infty(dx).$$
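The corollary $M_{\tau_n}=nM_\tau$ of lemma 8.1 can be sanity-checked by simulation in the simplest renewal setting — a one-state process whose regeneration intervals are i.i.d. Exp($\mu$). This setup and its parameters are our own illustration, not the book's:

```python
import random

random.seed(3)

mu = 2.0       # rate of the Exp regeneration interval, so M_tau = 1/mu
n = 5          # order of the iterated regeneration time tau_n
N = 100_000    # Monte Carlo sample size

# tau_n is a sum of n i.i.d. Exp(mu) intervals; estimate its expectation
est_M_tau_n = sum(sum(random.expovariate(mu) for _ in range(n))
                  for _ in range(N)) / N
M_tau = 1.0 / mu   # here 0.5, so n * M_tau = 2.5
```

The estimate clusters around $n/\mu$, in agreement with the lemma.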
The following proposition, formulated for $(\overline P_x)$, is fair for any Markov process obtained from a given semi-Markov process with the help of a time change, in particular for $(P_x^{(k)})$.

PROPOSITION 8.5. The one-dimensional distribution $\overline G_\infty$ is a stationary distribution for the constructed Markov process.

Proof. For any measurable bounded function $\varphi$ and $h>0$ it is fair
$$\int_X\overline E_x(\varphi(X_h))\,\overline G_\infty(dx)=\frac1{M_\tau}\int_X\int_0^\infty\overline E_x\bigl(\overline E_{X_t}(\varphi(X_h));\,t<\tau\bigr)\,dt\,H_\tau^\infty(dx).$$
According to lemma 8.1, for any $n\ge1$ the previous expression can be written as
$$\frac1{nM_\tau}\int_X\int_0^\infty\overline E_x\bigl(\overline E_{X_t}(\varphi(X_h));\,t<\tau_n\bigr)\,dt\,H_\tau^\infty(dx).$$
From the Markov property it follows that the internal integral is equal to
$$\int_0^\infty\overline E_x(\varphi(X_{t+h});\,t<\tau_n)\,dt=\int_0^\infty\overline E_x(\varphi(X_{t+h});\,t+h<\tau_n)\,dt+\varepsilon_1$$
$$=\int_h^\infty\overline E_x(\varphi(X_t);\,t<\tau_n)\,dt+\varepsilon_1=\int_0^\infty\overline E_x(\varphi(X_t);\,t<\tau_n)\,dt+\varepsilon_1-\varepsilon_2,$$
where
$$\varepsilon_1=\int_0^\infty\overline E_x(\varphi(X_{t+h});\,t<\tau_n\le t+h)\,dt,\qquad \varepsilon_2=\int_0^h\overline E_x(\varphi(X_t);\,t<\tau_n)\,dt.$$
Obviously, $\varepsilon_2\le h\sup|\varphi|$. Furthermore,
$$\varepsilon_1\le\sup|\varphi|\int_0^\infty\overline P_x(t<\tau_n\le t+h)\,dt=\sup|\varphi|\Bigl(\int_0^\infty\overline P_x(t<\tau_n)\,dt-\int_h^\infty\overline P_x(t<\tau_n)\,dt\Bigr)\le h\sup|\varphi|.$$
From here
$$\int_X\overline E_x(\varphi(X_h))\,\overline G_\infty(dx)=\frac1{nM_\tau}\Bigl(\int_X\int_0^\infty\overline E_x(\varphi(X_t);\,t<\tau_n)\,dt\,H_\tau^\infty(dx)+\varepsilon_1-\varepsilon_2\Bigr)=\overline G_\infty(\varphi)+(\varepsilon_1-\varepsilon_2)/(nM_\tau).$$
Since $n$ can be as large as desired, the following equality is proved:
$$\int_X\overline P_x(\varphi(X_h))\,\overline G_\infty(dx)=\overline G_\infty(\varphi).$$
Hence, by arbitrariness of $h$ and $\varphi$, the stationarity of the measure $\overline G_\infty$ is proved.

EXAMPLE 8.1. Consider a monotone continuous semi-Markov process for $X=\mathbb R_+$. Let $\zeta_x$ be the first hitting time of level $x$. The process $\xi(t)$ $(t\in\mathbb R_+)$ is said to be an inverse process with independent positive increments if $\zeta_x(\xi)$ $(x\in\mathbb R_+)$ is a process with independent positive increments. The process is determined on the positive half-line by a family of measures $(P_x)$ determined on a space of non-decreasing positive Cantor functions. For this process it is interesting to determine a stationary distribution of the pair $(R_t^-,R_t^+)$. We will also consider the fractional part of the value of the process at the instant $t$. In fact, we pass from a monotone process to a non-monotone ergodic process $\widetilde\xi(t)\equiv\xi(t)\bmod 1$ with space of states $[0,1)$ and a family of measures $(\widetilde P_x)$. A natural embedded Markov renewal process for such a process is generated by a sequence of jumps from unit to zero: $(\tau_1,\tau_2,\ldots)$, where $\tau\equiv\tau_1=\sigma_0\dotplus\sigma_{(0,1)}$ is the first hitting of zero, except the initial position, i.e. the first return to zero if the initial position is zero (here $\sigma_0$ is the first exit time from the initial position). The embedded Markov chain consists of repeating the same value, zero: $H_\tau^\infty(\{0\})=1$. Using
the Lévy formula for a homogenous process with independent positive increments, we obtain for $0<y<x<1$
$$E_y\exp(-\lambda\sigma_{[0,x)})=\exp\Bigl(-(x-y)\Bigl(\lambda\alpha+\int_{0+}^\infty(1-e^{-\lambda u})\,\nu(du)\Bigr)\Bigr),$$
where $\alpha\ge0$ and $\nu(du)$ is a measure on $(0,\infty)$ (Lévy measure). Let
$$G_t(S\times A_1\times A_2\mid y)=P_y(X_t\in S,\;R_t^-\in A_1,\;R_t^+\in A_2).$$
According to formula [8.7] we obtain
$$G_\infty((0,x)\times A_1\times A_2)=x\gamma/M,$$
where
$$M\equiv P_0(\tau_1)=\alpha+\int_{0+}^\infty u\,\nu(du),$$
$$\gamma=\begin{cases}\alpha,&A_1=A_2=\{0\},\\[2pt]\displaystyle\int_{r+s}^\infty(u-r-s)\,\nu(du),&A_1=(r,\infty),\ A_2=(s,\infty).\end{cases}$$
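The formula $G_\infty((0,x)\times A_1\times A_2)=x\gamma/M$ can be checked by simulation for a concrete inverse process with independent positive increments. In the sketch below (all parameters are our own illustrative choices, not the book's) the drift coefficient is $\alpha$ and the Lévy measure has finite intensity, $\nu=c\cdot\mathrm{Exp}(1)$, so intervals of constancy occur at Poisson rate $c$ along the $x$-axis and have Exp(1) lengths; long-run time fractions then estimate the stationary probabilities:

```python
import math
import random

random.seed(7)

alpha, c = 1.0, 2.0      # drift coefficient and jump intensity; interval lengths ~ Exp(1)
L = 20_000.0             # stretch of the x-axis simulated

def n_points(rate_len):
    # number of points of a unit-rate Poisson process on (0, rate_len]
    n, s = 0, random.expovariate(1.0)
    while s <= rate_len:
        n += 1
        s += random.expovariate(1.0)
    return n

# lengths of the intervals of constancy: one Exp(1) delay per Poisson point
lengths = [random.expovariate(1.0) for _ in range(n_points(c * L))]
total_time = alpha * L + sum(lengths)

M = alpha + c * 1.0      # alpha + int u nu(du), since E[Exp(1)] = 1

# A1 = A2 = {0}: fraction of time strictly moving, predicted gamma/M = alpha/M
frac_moving = alpha * L / total_time

# A1 = (r,inf), A2 = (s,inf): within an interval of length u, the set of t with
# R_t^- > r and R_t^+ > s has measure (u - r - s)+
r = s = 0.5
frac_deep = sum(max(u - r - s, 0.0) for u in lengths) / total_time
pred_deep = c * math.exp(-(r + s)) / M   # int_{r+s} (u-r-s) nu(du) = c e^{-(r+s)}
```

The empirical fractions agree with the predicted stationary probabilities up to Monte Carlo error.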
EXAMPLE 8.2. Consider a monotone continuous semi-Markov process representing an inverse process with independent positive increments $\xi(t)$ $(t\in\mathbb R_+)$, in which all intervals of constancy are conditionally fixed. For such a process
$$\zeta_x=a_x+\sum_{x_i<x}t_i,$$
where $a_x$ is a non-decreasing function of $x\in\mathbb R_+$, $\mathcal X\equiv(x_i)$ is a sequence of points from the set $\mathbb R_+$, and $(t_i)$ is a sequence of positive random values. The distribution of the value $t_i$ depends only on $x_i$: $P_0(t_i<t)=F_{x_i}(t)$ $(t>0)$. Suppose that $a_x=\alpha x$ $(\alpha>0)$; the set $\mathcal X$ is periodic with period 1 (for any $t\ge0$ we assume $\mathcal X\cap[t+1,\infty)=(\mathcal X\cap[t,\infty))+1$); and also the family of distribution functions is periodic with period 1 ($F_{x+1}=F_x$, $x\ge0$). We will consider the fractional part of a value of the process at instant $t$, i.e. the ergodic process $\widetilde\xi(t)\equiv\xi(t)\bmod 1$ with the space of states $[0,1)$. For $0<x<1$ a stationary probability $G_\infty((0,x)\times A_1\times A_2)$ of the process is equal to $\omega(x)/M$, where
$$M=\alpha+\sum_{x_i<1}\int_0^\infty t\,dF_{x_i}(t),$$
$$\omega(x)=\begin{cases}\alpha x,&A_1=A_2=\{0\},\\[2pt]\displaystyle\sum_{x_i<x}\int_{r+s}^\infty(t-r-s)\,dF_{x_i}(t),&A_1=(r,\infty),\ A_2=(s,\infty).\end{cases}$$
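For exponential sojourn laws the quantities $M$ and $\omega(x)$ of Example 8.2 are available in closed form, since $\int_{r+s}^\infty(t-r-s)\,dF(t)=m\,e^{-(r+s)/m}$ for an Exp law with mean $m$. The following sketch uses illustrative parameters of our own (the book does not fix any):

```python
import math

# Illustrative choice: alpha = 1, fixed points x_i in {0.25, 0.75} per unit period,
# sojourn at x_i distributed Exp with mean m_i.
alpha = 1.0
points = {0.25: 0.5, 0.75: 1.0}        # x_i -> mean of F_{x_i}

M = alpha + sum(points.values())       # alpha + sum_i int t dF_{x_i}(t)

def omega(x, r=None, s=None):
    if r is None:                       # case A1 = A2 = {0}
        return alpha * x
    # for an Exp law with mean m: int_{r+s}^inf (t - r - s) dF(t) = m * exp(-(r+s)/m)
    return sum(m * math.exp(-(r + s) / m) for xi, m in points.items() if xi < x)

def stationary_prob(x, r=None, s=None):
    # G_inf((0,x) x A1 x A2) = omega(x) / M
    return omega(x, r, s) / M
```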
Chapter 9
Semi-Markov Model of Chromatography
In this chapter we consider applications of the theory of continuous semi-Markov processes. These processes happen to be suitable mathematical models for various natural phenomena. We consider application of this model to the study of transfer of matter through a porous medium. This transfer, and the separation of mixtures of substances connected with it, is an important constituent of such natural phenomena as metasomatism [KOR 82], filtration of oil and gas [LEI 47], percolation of liquid and gas through constructional materials, etc. In its purest aspect this process is exhibited in chromatography [GID 65, GUT 81, HOF 72]. The theoretical treatments of transfer processes are conducted within the framework of the appropriate applied disciplines and frequently lead to overlapping outcomes. For example, there is an explicit parallelism between outcomes of the theory of metasomatic zonality [KOR 82] and chromatography [HOF 72]. There are common features in different variants of the process of transfer which can be separated from concrete engineering applications. The main peculiarity of such a process is the semi-Markov character of movement of the particles composing the substance which is transferred by a solvent. We will explain this character in this chapter and will provide some formulae to evaluate parameters of such a movement. The stochastic model we deal with begins with the supposition that the observable measure of distribution of substance is an expectation of some random measure, which is a random point field in a given region of space. Each point of this field can be interpreted as a molecule or particle keeping its wholeness when being transferred by a solvent. The exact interpretation is not important for the model, since only the averaged magnitudes are observed. The point field and its average, generally speaking, depend on time. This dependence is related to the movement of particles.
The stochastic model assumes such a movement under the rule of some random process.
The essential supposition of the model is the assumption of independence of the random processes corresponding to various particles. With these suppositions two laws determine the measure of the matter in which we are interested: (1) expectations of initial and boundary point fields; (2) individual rules of driving of a separate particle. In the present chapter the basic attention is given to a substantiation and application of the semi-Markov law of driving of separate particles and corollaries from this supposition. The continuous semi-Markov process is an appropriate model for a process of transfer of matter with a solvent through a porous medium due to the practical absence of inertia of the driven particles, making it possible to apply the premise of the theory of semi-Markov processes, and also due to the character of trajectories of particles of transferable substance. The experimental fact is that during the transfer of matter through a porous medium at each instant particles can be found both in a solution and in the substance of a filter in an absorbed condition. Only by driving with time stoppings on the immovable phase can such a distribution of particles between phases be explained. In other words, the continuous trajectories of driving have to contain intervals of constancy. As we know, this is one of the peculiarities of trajectories of continuous (non-Markov) semi-Markov processes.
9.1. Chromatography

A brief overview of the basic concepts of chromatography is given: its method, instrument realization and registered signal.

1. Method

Chromatography is a physico-chemical method of separation based on distribution of divided components between two phases: motionless (filter) and mobile (eluent). The mobile phase flows continuously through the motionless phase. The purpose of separation is the preparative selection of substances as pure components of a mixture. Another aim is physico-chemical measurements. Separation of complicated mixtures when moving along the surface of an absorbent occurs due to distinctions of intermolecular interactions (various absorbableness) and the consequent determination of divided components at the exit of a device with the help of special detectors. Chromatography was developed by the Russian botanist M.S. Tswett in 1901–1903 during a study of the structure of chlorophyll and the mechanism of photosynthesis (see [GID 65, YAS 76, GOL 74], etc.).
2. Device

A central part of a traditional chromatograph is a column, i.e. a narrow cylindrical vessel filled with an absorbent. It solves the main problem: separation of components of a mixture. All remaining devices in the chromatograph are intended either for registration of divided components at the exit from the column, or for creation of stable conditions for the column. In analytical chromatography, in the majority of cases an exhibiting variant of chromatography is used, in which an inert liquid or gas-carrier continuously passes through the column with a constant (liquid) or increasing (gas) velocity. A device which injects an analyzable test into the column is situated at the beginning of the column. The test injected into the eluent stream begins to move to the exit from the column. The components of a mixture which are weakly sorbable pass through the column with a greater velocity and exit the column earlier. As a rule, for chromatographic separation convertible physical adsorption is used, where adsorbing substances can be desorbed by a stream of the liquid or gas-carrier. Column chromatography can be classified as one-dimensional. There also exists planar chromatography, in which the role of a column is played by a sheet of paper or a layer of another porous material on which the liquid moves in two-dimensional space, and the registration is carried out on tracks of components on the surface of the material.

3. Chromatogram

The stream of eluent including desorbing components passes through a feeler of the detector, the signal of which is registered. The signal curve of the detector depending on time is called a chromatogram. In Figure 9.1, a chromatogram x(t) of a mixture of substances borrowed from [GOL 74, p. 85] is represented.
Figure 9.1. Chromatogram of a mixture of substances with different parameters of delay
A typical chromatogram of one absorbing substance is represented by a curve of bell-shaped asymmetric form whose maximum is shifted with respect to the origin of coordinates (which corresponds to the instant the test is injected). With small asymmetry, which is usually connected with a large displacement, this curve is well approximated by a Gaussian curve. Thus, the basic characteristics of this curve are the shift, the height of the curve peak, the width of the bell-shaped part and the asymmetry. The area limited by this curve is directly proportional to the quantity of the analyzable substance. Therefore, it is natural to consider parameters of a normalized curve, i.e. a curve equal to the registered one divided by the size of this area. The normalized curve can be interpreted as a density function of some random variable. In statistical terms the shift is the first initial moment of this distribution, the width is the mean-square deviation (square root of the variance), and the asymmetry is the third central moment. The height of the peak in this case is a value derived from the previous parameters and the actual form of the curve. An example of such a derived value is the height equivalent to a theoretical plate (HETP), connecting the variance and the shift. Below we will give the exact definition of HETP in terms of a mathematical model and a formula for its approximate value used in evaluation. A large difference in shifts with small variances of the distributions corresponding to two various adsorbing substances is desirable for separation of these substances. Therefore, the efforts of investigators were directed at studying the dependence of the parameters of a chromatogram on the physico-chemical properties of the filter and eluent, the chromatograph design and techniques of operation and, in particular, the velocity of the eluent running through the filter. One such dependence, the van Deemter formula, is derived below as a corollary from the offered semi-Markov model of chromatography.

9.2. Model of liquid column chromatography

Continuous monotone semi-Markov processes with trajectories without terminal absorption in a filter are considered. Such a process is an inverse process with independent positive increments. The Lévy formula for these processes and its corollaries are analyzed. One of the corollaries is the presence of intervals of constancy, which are interpreted as a delay of a particle with temporal adsorption. The probability performances of this delay are given. The "height equivalent to a theoretical plate" and its approximate value, used for evaluation of the quality of chromatographic separation, are determined.

4. Monotone process

To begin with, we consider the most simple but practically important case of the process $\xi(t)$, when the adsorbing particle moves in one-dimensional space $X=\mathbb R^1$ with
monotone movement without terminal absorption in a filter. This case is realized, for example, in long and narrow chromatography columns with a liquid mobile phase without chemical interaction between the analyzable substance and the substance of the filter. In this case the semi-Markov property turns into the requirement for a particle to have independent sojourn times in mutually disjoint intervals of the column: if $\tau_x(\xi)$ is the random time to reach section $x$ of the column and $0<a<b<c$, then $\tau_b-\tau_a$, $\tau_c-\tau_b$ are independent random values. From here it follows that the random process $(\tau(x))_{x\ge0}$ is a process with independent increments (here and further $\tau(x)\equiv\tau_x$), and the process $\xi(t)$ is a so-called inverse process with independent positive increments.

5. Lévy formula

Let $(P_x)$ be a family of distributions of this process. For any $\lambda>0$ the Lévy formula is fair (see [ITO 65] and also Chapter 5, item 73):
$$E_0\exp(-\lambda\tau(x))=\exp(-b(\lambda,x)),$$
where
$$b(\lambda,x)=\lambda a([0,x))+\int_{0+}^\infty(1-e^{-\lambda u})\,n(du\times[0,x))-\sum_{0\le y_i<x}\log P_{y_i}(e^{-\lambda\sigma_0}),$$
where $a(dx)$ is a locally finite absolutely continuous measure on the axis $X$, $n(du\times dx)$ is a measure on the half-plane $(0,\infty)\times(-\infty,\infty)$, called a Lévy measure, $(y_i)$ is a countable set of points on the axis $X$, and $\sigma_0$ is the first exit time from an initial point, where $E_{y_i}(\sigma_0>0)>0$.

6. The Lévy representation and its interpretation

The most interesting corollary from the Lévy formula is the Lévy representation: $P_x$-a.s.
$$\tau(x)=a([0,x))+\int_{0+}^\infty u\,\mathcal P(du\times[0,x))+\sum_{y_i<x}c_i,$$
where $\mathcal P(du\times dx)$ is a Poisson measure (a random point field of a special view) on the half-plane $(0,\infty)\times(-\infty,\infty)$ with intensity measure $n(du\times dx)$, $c_i\equiv\sigma_0\circ\theta_{\tau(y_i)}$ is a random variable, and the random variables of the set $(c_i)$ are mutually independent and independent of the Poisson field $\mathcal P$.
The interpretation of the Lévy representation is interesting and is the most attractive feature of the semi-Markov model of chromatography. The determinate part
$$a([x_1,x_2))=a([0,x_2))-a([0,x_1))$$
can be interpreted as a result of driving of particles with a liquid and can be connected to a velocity $v(x)$ of moving of the liquid-carrier:
$$a([x_1,x_2))=\int_{x_1}^{x_2}\frac{dx}{v(x)},$$
where $1/v(x)$ is a density of the measure $a(dx)$ with respect to the Lebesgue measure on the axis $X$. In a spatially homogenous case $v(x)\equiv v$ and $a([0,x))=x/v$. The random part consists of intervals of constancy of two sorts. At first, it is a sum of random delays at random points, the duration and position of which on a segment $[x_1,x_2)$ are determined by the Poisson measure $\mathcal P$. Secondly, it is a sum of independent random delays at fixed points from a countable set $(y_i)$. For a process $\xi(t)$ homogenous in space, the delay of the second sort of intervals of constancy is absent. Furthermore, we will be limited to processes without delays of this sort. As we see, the emergence of a random delay follows from the sole supposition that sojourn times in mutually disjoint intervals are independent random values. We note that the magnitudes of intervals of constancy can be infinitely small; however, the summarized length of these small intervals can be comparable with the summary length of large intervals. This will be the case when for any $\varepsilon>0$, $n((0,\varepsilon]\times[x_1,x_2))=\infty$. Nowadays experimental data about magnitudes of intervals of constancy are not known and consequently there are no reasons to neglect the case of infinite intensity.
7. Density of Lévy measure

The Lévy measure can be represented as an integral of some conditional one-dimensional measure $\nu(du\mid x)$:
$$n(B\times[x_1,x_2))=\int_{x_1}^{x_2}\nu(B\mid x)\,dx,$$
which in a homogenous case turns into the equality $n(B\times[0,x))=\nu(B)\,x$, where $\nu(B)=\nu(B\mid0)$. From here in a homogenous case we have
$$b(\lambda,0,x)=x\Bigl(\frac\lambda v+\int_{0+}^\infty(1-e^{-\lambda u})\,\nu(du)\Bigr).$$
8. Moments of distribution of delay A residual τ (x) − a([0, x)) in a monotone case is referred to as a delay, where a([0, x)) is a passage time of eluent through chromatographic column. Thus, it is a random part of Lévy representation (in this case, Poisson). ∞ uP du × [0, x) . τ (x) − a [0, x) = 0+
Moments of distribution of a random variable τ (x) can be obtained from the Lévy formula by differentiation on λ with λ = 0: ∞ ∂ E0 τ (x) ≡ tP0 τ (x) ∈ dt = − E0 e−λτ (x) ∂λ λ=0 0 ∞ = a [0, x) + un du × [0, x) , 0+
2
2 σ τ (x) ≡ E0 τ (x) − E0 τ (x) =
∞
0+
u2 n du × [0, x) .
We also obtain the 3rd and 4th central moments:

E0 (τ(x) − E0 τ(x))³ = ∫_{0+}^∞ u³ n(du × [0, x)),

E0 (τ(x) − E0 τ(x))⁴ = ∫_{0+}^∞ u⁴ n(du × [0, x)) + 3σ⁴(τ(x)).
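These moment formulas are easy to verify numerically for a concrete conditional Lévy measure. The following sketch is our own illustration (the exponential density ν(du) = α e^{−βu} du and all numeric values are assumptions, not data from the text): it approximates ∫ u^k ν(du) by a midpoint rule and assembles the mean and central moments of τ(x) in the homogenous case n(du × [0, x)) = x ν(du).

```python
import math

def levy_moment(nu_density, k, upper=40.0, n=100_000):
    """Midpoint-rule approximation of the integral of u**k * nu(u) over (0, upper]."""
    h = upper / n
    return sum(((i + 0.5) * h) ** k * nu_density((i + 0.5) * h) * h for i in range(n))

# Hypothetical exponential conditional Levy density nu(u) = alpha * exp(-beta * u).
alpha, beta, v, x = 2.0, 3.0, 1.5, 10.0
nu = lambda u: alpha * math.exp(-beta * u)

mean_tau = x / v + x * levy_moment(nu, 1)   # E0 tau(x) = a([0,x)) + integral of u n(du x [0,x))
var_tau = x * levy_moment(nu, 2)            # second central moment
third_central = x * levy_moment(nu, 3)      # third central moment
```

For this density ∫ u^k ν(du) = k! α/β^{k+1}, which the quadrature reproduces, e.g. E0 τ(x) = x/v + xα/β².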
It is not difficult to compute other moments of the distribution of the random variable τ(x). From the fourth order on, all central moments depend on x in a non-linear way. Note that in the considered case of monotone movement the important characteristic of a filter, the magnitude Hτ ≡ σ²(τ(x))/x, which in a homogenous case is equal to ∫_{0+}^∞ u² ν(du), depends on the velocity v of the carrying liquid only when the conditional Lévy measure ν(du) depends on v. The form of this dependence does not follow from the semi-Markov model itself and serves as an additional supposition, which can be justified only with the help of experiments. It seems reasonable to suppose the measure ν to be deformed in inverse proportion to the velocity of the eluent:

ν(B) = ν1(B)/v.   [9.1]

In this case the relationship between the determinate and random parts of a delay does not depend on the velocity of the eluent: the longer a particle moves, the longer it remains motionless on the sorbent. The velocity of the eluent plays the role of a scale parameter for the process.
9. Height equivalent to a theoretical plate

Alongside the variance σ²(τ(x)), the variance of the random variable ξ(t) may be of interest. Its exact value in terms of a Lévy measure is expressed by an awkward formula following from an inversion of a Laplace transformation. To estimate this variance the approximate equality σ(ξ(tx)) ≅ σ(τ(x)) vc can be used, where tx is obtained from the condition E0 ξ(tx) = x and vc = x/E0 τ(x) [GID 65]. In a homogenous case we have

vc = (1/v + ∫_{0+}^∞ u ν(du))^{−1} = vR,

where R is the factor of diminution of the velocity of the transferable substance with respect to the velocity of the carrier. In particular, if condition [9.1] is satisfied, R does not depend on the velocity. The magnitude σ²(ξ(tx))/x in [GID 65] is called a height equivalent to a theoretical plate (HETP). This title is borrowed from the practice of distillation, in which the separation of substances happens as a result of multiple transfusions of solutions from each plate into the following one, where the plates compose an infinite sequence. As an outcome of these transfusions the test located in the first plate of a series is spread over all plates and the maximum of concentration moves along the series. We will not describe this procedure explicitly, and will not justify the definition of HETP for chromatography. For practical purposes, in particular for an evaluation of the quality of a chromatograph, an approximation of HETP is used, namely

H ≡ σ²(τ(x)) vc²/x.   [9.2]

The higher the chromatograph quality, the smaller H will be. In a one-dimensional monotone case we have H = Hτ R² v² and, if condition [9.1] is satisfied,

H = R² v ∫_{0+}^∞ u² ν1(du),

i.e. it is directly proportional to the eluent velocity. Note that formula [9.2] is deduced without the supposition of monotonicity of the process ξ(t). We use it below when analyzing continuous semi-Markov processes of diffusion type.

9.3. Some monotone Semi-Markov processes

The family of measures (ν(du | x))_{x∈X} of monotone processes determines a so-called field of delay. This field in a common case is infinite-dimensional because the measure ν(· | x) is characterized by an infinite set of parameters, for example, moments. For practical applications it is preferable to use families of measures with a finite number of parameters.
10. A Gut and Ahlberg model

An example of a field with finite intensity is considered in [GUT 81]. In this work, devoted to a problem of sums of series with a random number of summands, a stochastic model of chromatography was offered. In this model a particle moves with stops in such a way that intervals of uniform movement with a velocity v alternate with intervals of constancy. The supposition of an exponential distribution of the length of an interval of uniform movement (with a parameter αv), which is accepted in this work, refers these processes to the class of homogenous monotone continuous semi-Markov processes. The considered process has a finite intensity of the random point field of jumps of the function τ(x) and an exponential distribution of the magnitudes of jumps, which are intervals of constancy of the function ξ(t) (with a parameter β). The parameter of the exponential function in the Lévy expansion for this process is of the form

b(λ, 0, x) = xλ/v + xα ∫_0^∞ (1 − e^{−λu}) βe^{−βu} du.
Parameters v, α and β of this process are functions of the first three moments of the distribution of the random variable τ(x):

E0 τ(x) = x (1/v + α/β),   σ²(τ(x)) = 2αx/β²,   E0 (τ(x) − E0 τ(x))³ = 6αx/β³,

which can be estimated using a chromatogram.
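The inversion of these three moment equations is elementary; the sketch below (our own code, with arbitrary illustrative values) performs the round trip from parameters to moments and back, as one would when estimating v, α and β from a chromatogram.

```python
def gut_ahlberg_params(x, m1, m2c, m3c):
    """Invert the moment equations of the Gut-Ahlberg model:
    m1 = x(1/v + alpha/beta), m2c = 2 alpha x / beta**2, m3c = 6 alpha x / beta**3."""
    beta = 3.0 * m2c / m3c
    alpha = beta**2 * m2c / (2.0 * x)
    v = x / (m1 - alpha * x / beta)
    return v, alpha, beta

# Round trip: build the three moments from known parameters, then recover them.
v0, alpha0, beta0, x = 2.0, 1.5, 4.0, 10.0
m1 = x * (1.0 / v0 + alpha0 / beta0)
m2c = 2.0 * alpha0 * x / beta0**2
m3c = 6.0 * alpha0 * x / beta0**3
v_est, alpha_est, beta_est = gut_ahlberg_params(x, m1, m2c, m3c)
```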
11. Inverse gamma-process

An inverse gamma-process (without drift) is a monotone semi-Markov process with independent positive increments of the random function τ(x), distributed according to a gamma-distribution with density

(1/Γ(xα)) h^{xα} t^{xα−1} e^{−ht}   (t > 0),

where Γ(·) is the gamma-function, α > 0 is a form parameter and h > 0 is a scale parameter. The parameter of the exponential Lévy function of this process has the form

xα ln((h + λ)/h) = xα ∫_0^∞ (1 − e^{−λu}) (e^{−hu}/u) du
(see [PRU 81, formula 2.3.19.28]). For practical purposes it is reasonable to supplement this parameter with a term xλ/v, where v is the velocity of the eluent. This supplemented parameter

b(λ, 0, x) = xλ/v + xα ∫_0^∞ (1 − e^{−λu}) (e^{−hu}/u) du

corresponds to an inverse gamma-process with drift, for which

E0 τ(x) = x/v + xα/h,   σ²(τ(x)) = xα/h²,   E0 (τ(x) − E0 τ(x))³ = 2xα/h³.

The principal advantage of an inverse gamma-process with drift in comparison with other monotone semi-Markov models is its simplicity, sufficient flexibility due to its three parameters, and also its well-known properties. Tables of its values can be found, for example, in [PAG 63].
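The logarithmic form of the exponent can be checked against its integral (Frullani-type) representation; the following sketch is our own numerical illustration with arbitrary parameter values.

```python
import math

def frullani(lam, h, upper=60.0, n=300_000):
    """Midpoint-rule value of the integral of (1 - exp(-lam*u)) * exp(-h*u) / u over (0, upper]."""
    du = upper / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += (1.0 - math.exp(-lam * u)) * math.exp(-h * u) / u * du
    return total

lam, h, alpha, x, v = 0.7, 1.3, 2.0, 5.0, 1.0
integral_form = x * alpha * frullani(lam, h)
log_form = x * alpha * math.log((h + lam) / h)   # x * alpha * ln((h + lam)/h)
b_with_drift = lam * x / v + log_form            # exponent of the process with drift
```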
12. Process of maximum values

A large subclass of inverse processes with independent positive increments is made up of processes of record values connected with semi-Markov non-monotone processes. For a continuous one-dimensional process ξ(t) with an initial point x = ξ(0) the process η(t) = max(ξ(s) : 0 ≤ s ≤ t) is called a process of maximum values. In addition, if ξ(t) is a semi-Markov process, η(t) is semi-Markovian too. Besides, it is a monotone process and therefore an inverse process with independent positive increments. Consider, for example, a continuous semi-Markov process such as a homogenous Brownian motion with drift. The density of its one-dimensional distribution has the form

pt(x) = (1/√(2πdt)) exp(−(x − μt)²/(2dt)).

The semi-Markov transition generating function of the process hG(λ, x) (where G = (a, b) and a < x < b) is expressed by the formula

hG(λ, x) = e^{A(b−x)} sinh((x − a)√(A² − 2B(λ))) / sinh((b − a)√(A² − 2B(λ))),

where A = μ/d and B(λ) = −λ/d (see items 5.22 and 5.23). The generating function of the first exit time of the corresponding process of maximum values from an interval (−∞, b) can be obtained as a limit of the above function as a tends to −∞. Taking into account the values of the factors, we obtain

E0 e^{−λτ(x)} = exp(−x (√(μ² + 2λd) − μ)/d).   [9.3]
Consider the case μ ≥ 0. In this case the process η(t) does not have an infinite interval of constancy: P0(τ(x) = ∞) = 0. We need to find the parameters of the Lévy formula, i.e. to find the unknown measures a(·) and ν in the equation

λa([0, x)) + x ∫_{0+}^∞ (1 − e^{−λu}) ν(du) = x (√(μ² + 2λd) − μ)/d.   [9.4]

By dividing both parts of the equation by λ and letting λ tend to infinity, we obtain a([0, x)) = 0. It is easy to check that the function

(1/√(2πd u³)) exp(−μ²u/(2d))   [9.5]

is a density of a Lévy measure in equation [9.4]. The P0-density of the distribution of the first hitting time of level x is of the form

pτ(x)(t) = (x/√(2πt³d)) exp(−(x − μt)²/(2td))

(see [BOR 96, formula 1.1.4]). The three first moments of the distribution of τ(x) for this process look like

E0 τ(x) = x/μ,   σ²(τ(x)) = xd/μ³,   E0 (τ(x) − E0 τ(x))³ = 3xd²/μ⁵.   [9.6]
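Formula [9.5] can be checked numerically: integrating (1 − e^{−λu}) against this density must reproduce the exponent of [9.3] per unit x. The sketch below is our own illustration with arbitrary parameter values; the substitution u = s² removes the u^{−1/2} singularity of the integrand at zero.

```python
import math

def ig_exponent(lam, mu, d, upper=40.0, n=200_000):
    """Integral of (1 - exp(-lam*u)) * exp(-mu**2 u/(2d)) / sqrt(2 pi d u**3) du over (0, inf),
    computed with the substitution u = s**2 (midpoint rule in s)."""
    ds = upper / n
    c = 2.0 / math.sqrt(2.0 * math.pi * d)
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        u = s * s
        total += c * (1.0 - math.exp(-lam * u)) * math.exp(-mu * mu * u / (2.0 * d)) / u * ds
    return total

lam, mu, d = 0.9, 0.6, 1.2
lhs = ig_exponent(lam, mu, d)
rhs = (math.sqrt(mu * mu + 2.0 * lam * d) - mu) / d   # exponent per unit x in [9.3]/[9.4]
```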
With μ = 0, i.e. for a process without drift, we obtain a monotone, slowly growing process: the average first passage time of any level is equal to infinity, though the probability of its being reached is equal to 1.

13. Process of maximum values with negative drift

From the semi-Markov property it follows that for 0 < x1 < x

E0 e^{−λτ(x)} = E0 e^{−λτ(x1)} Ex1 e^{−λτ(x)},

from here P0(τ(x) < ∞) = P0(τ(x1) < ∞) Px1(τ(x) < ∞) and hence, firstly,

E0 (e^{−λτ(x)} | τ(x) < ∞) = E0 (e^{−λτ(x1)} | τ(x1) < ∞) Ex1 (e^{−λτ(x)} | τ(x) < ∞) = exp(−b(λ, 0, x))

(see item 5) and, secondly,

P0(τ(x) < ∞) = exp(−c([0, x))),
where c(dx) is a measure on the axis X. We suppose that this measure (as well as the measure a(dx)) is absolutely continuous with respect to the Lebesgue measure. Hence, in this case, the Lévy formula is completed by one term (not depending on λ):

Ex1 e^{−λτ(x2)} = exp(−c([x1, x2)) − λa([x1, x2)) − ∫_{0+}^∞ (1 − e^{−λu}) n(du × [x1, x2)));   [9.7]

yet the distribution of τ(x) is a subprobability, and the moments obtained in item 8 are moments of the measure P0(· | τ(x) < ∞), i.e. conditional moments. However, the distribution of ξ(t) itself is a probability for any t ≥ 0:

P0(ξ(t) < x) = P0(τ(x) = ∞) + P0(τ(x) < ∞) P0(ξ(t) < x | τ(x) < ∞).

In the limit as t → ∞ this yields a distribution of absorbed particles, which in the spatially homogenous case is exponential. This limit distribution is appropriate for the movement of a substance along a one-dimensional path with absorption, when the substance from a point-wise source is carried by an eluent with a constant velocity.

Consider expression [9.3] under the condition μ < 0. This process finally stops at a random instant and on a random level. Since E0(e^{−λτ(x)}) = E0(e^{−λτ(x)}; τ(x) < ∞), from formula [9.3] with λ = 0 we obtain the probability of exit to level x:

P0(τ(x) < ∞) = exp(−2x|μ|/d) < 1.

According to item 12, formula [9.7] for this process passes into the expression

E0 exp(−λτ(x)) = exp(−2x|μ|/d − x ∫_{0+}^∞ (1 − e^{−λu}) (1/√(2πd u³)) exp(−μ²u/(2d)) du).

From the equality

E0(exp(−λτ(x − ε)); τ(x) = ∞) = E0(exp(−λτ(x − ε))) P_{x−ε}(τ(ε) = ∞)

(ε > 0) we obtain a conditional generating function of the time when the infinite interval of constancy begins:

E0(exp(−λτ(x − 0)) | τ(x) = ∞) = exp(−x (√(μ² + 2λd) − |μ|)/d),

although this time is not a Markov time (see item 2.2).
9.4. Transfer with diffusion

We have to take diffusion into account when investigating gaseous chromatography. In this case one uses a column chromatograph connected to a vessel with a compressed gas-eluent. Let b > 0 be the length of the column. Near the open end of the column a detector is placed in order to record the first appearance of the substance of the test. This makes it possible to consider the recorded function as the distribution density of the first hitting time of level b for a process of diffusion type. It is natural to consider a Markov diffusion process as a possible model for the movement of a gas particle. In this case the movement is controlled by the backward differential Kolmogorov equation:

∂f/∂t = (1/2) d(x) ∂²f/∂x² + μ(x) ∂f/∂x,

where f = f(t, x) = pt(S | x) is the transition function of the process (S ∈ B(X)). However, as follows from formulae [9.6], under the conditions of the Markov model it is impossible to explain the separation of different substances in the gaseous chromatograph. Actually, in a gas mixture all the particles have the same drift μ. Thus, they may differ only in their local variance d. However, the first moment does not depend on d, i.e. different kinds of particles cannot be separated at the end point of the column. To be exact, we have to note that formulae [9.6] correspond to the spatially homogenous case, but the same outcome holds in the common case as well.

14. Semi-Markov model

The semi-Markov model seems to be a well adapted model with which to study the gaseous chromatography phenomenon. We know that the transition generating function of a semi-Markov process of diffusion type is controlled by equation [5.6] (Chapter 5). Consider the coefficient B(λ, x) of this equation. By theorem 5.6 its derivative −∂B(λ, x)/∂λ is a completely monotone function of λ ≥ 0. Consider the limit

γ(x) = lim_{λ→∞} (−∂B(λ, x)/∂λ).

An important element of our model of gaseous chromatography is that γ(x) > 0 for any x ≤ b. This means that the coefficient B(λ, x) contains a non-degenerate linear part such that c(λ, x) ≡ −B(λ, x) − γ(x)λ is a completely monotone function of λ ≥ 0. A Markov diffusion process with the transition generating function controlled by the equation

(1/2) f″ + A(x) f′ − γ(x)λf = 0   [9.8]
is said to be the support Markov process for the given semi-Markov process. Both processes have the same distribution of their random trace. Trajectories of the original process differ from those of the support Markov process due to the presence of Poisson intervals of constancy, corresponding to the Lévy measure ν(du | x), where

c(λ, x) = ∫_{0+}^∞ (1 − e^{−λu}) ν(du | x).

The Markov process corresponding to equation [9.8] has Kolmogorov coefficients as follows:

μ(x) = A(x)/γ(x),   d(x) = 1/γ(x).

We know that in the one-dimensional case the coefficient μ(x) determines the group velocity V(x) of the system of independent homogenous particles controlled by this equation. Our assumption is that for different kinds of particles the coefficients μ of the support Markov processes are identical, and coincide with those of the gas-eluent. Thus, we suppose that μ(x) = V(x). The parameter V(x) is the macroscopic velocity of the eluent flow through a porous adsorbent medium along the column. This parameter depends essentially on x, because a decrease of pressure from the entry to the exit of the column is a necessary condition for gas to flow through the column. Evaluation of the gas velocity in a cylindrical vessel (column) is based on the Boyle-Mariotte law

pv = Cm,   [9.9]

where p is the pressure of the gas, v is the volume of the gas, C is a coefficient depending on the chemical composition and temperature of the gas, and m is the mass of the gas. Representing the volume as the product, V St, of the gas velocity, V, in a given cross-section of the vessel, by the area, S, of the cross-section, and by a time, t, we obtain the equality

pV = Cm0/S,   [9.10]

where m0 = m/t is the normed expenditure of gas mass, which is constant at every cross-section of the vessel under the usual stationary condition of work. In order to derive the dependence of the velocity on the distance, x, of a cross-section from the entry to the column we use the Hagen-Poiseuille equation [GOL 74, p. 71]:

V = −kp′,   [9.11]

where k is a coefficient depending on the porous medium inside the column and the chemical composition of the gas. Differentiating the constant product pV with respect to x and substituting p′ from [9.11], we obtain the equation −V²/k + pV′ = 0. Taking into account [9.10] we derive the equation V′/V³ = c1, where c1 = S/(kCm0). A solution to this equation is V(x) = (C1 − 2c1x)^{−1/2}. In terms of pressure, by [9.10], this means
that p(x) = (Cm0/S)(C1 − 2c1x)^{1/2}. Hence C1 = p²(b)/(Cm0/S)² + 2c1b, where under usual conditions of work p(b) is equal to atmospheric pressure. Thus,

V(x) = (Cm0/S) / √(p²(b) + 2(b − x)Cm0/(kS)).   [9.12]

The denominator of this expression represents the pressure at cross-section x of the column. In particular, p²(0) = p²(b) + 2bCm0/(kS). Under the condition m0 > 0 the velocity is positive. It follows that Px(τb < ∞) = 1 for any x < b. Equation [5.6] (Chapter 5) can now be rewritten as

(1/2) f″ + V(x)γf′ − (γλ + c(λ)) f = 0,   [9.13]

where V(x) is given by formula [9.12]. We will suppose that γ = γ(0), c(λ) = c(λ, 0) (not dependent on x). This supposition is natural for a thermostated column – the usual condition of work.

15. About boundary conditions

The random value we are interested in is the first exit time of a particle from the interval determined by the length, b, of the column. How should this time be interpreted? In the case of a semi-Markov process of diffusion type, the particle, starting at point x = 0, theoretically returns to the area of negative values many times in spite of the positive group velocity of the gas-eluent at point x = 0. This velocity is produced by a device creating pressure at the start of the column. We take into consideration that the particle movement in the column does not vary if the column is virtually prolonged into the area of negative values in such a way that the pressure and group velocity of the gas at point x = 0 are invariable. Mathematically this means considering equation [5.6] on the interval (−∞, b), replacing the velocity V(x) determined by [9.12] with its analytical prolongation to the set of negative values, i.e. considering formula [9.12] for negative x. In this case the time we are interested in gains a precise sense. It is τb = σ(−∞,b) with respect to the probability measure P0. For any positive γ and x < b, Px(τb < ∞, Xτb = b) = 1. This makes it possible to reduce the problem to analysis of the function h(−∞,b)(λ; x) (see item 5.1). Unfortunately, we do not know an analytical form of a solution to equation [9.13]. However, in order to find moments of the random value τb we do not need this analytical form, since from equation [9.13] a differential equation can be derived for any Mk,a(x) (k ≥ 1), where Mk,a(x) = Ex((σ(a,b))^k). This follows from the representation

Mk,a(x) = (−1)^k (∂^k/(∂λ)^k) Ex exp(−λσ(a,b)) |_{λ=0}

and from differential equation [9.13] differentiated with respect to the parameter λ.
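The pressure-velocity profile [9.12] satisfies three invariants convenient for a numerical sanity check: pV is constant along the column ([9.10]), V = −kp′ ([9.11]), and p²(0) = p²(b) + 2bCm0/(kS). The sketch below is our own illustration; the parameter values are arbitrary and non-physical.

```python
import math

C, m0, S, k, b = 2.0, 0.5, 1.0, 0.8, 3.0   # illustrative, non-physical values
p_b = 1.0                                   # outlet pressure p(b)

def p(x):
    """Pressure at cross-section x: the denominator of V(x) in [9.12]."""
    return math.sqrt(p_b**2 + 2.0 * (b - x) * C * m0 / (k * S))

def V(x):
    """Velocity profile [9.12]."""
    return (C * m0 / S) / p(x)

def poiseuille_residual(x, h=1e-6):
    """Residual of [9.11], V = -k p', with p' from a central difference."""
    dp = (p(x + h) - p(x - h)) / (2.0 * h)
    return V(x) + k * dp
```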
16. Approximate solutions

Before analyzing equation [9.13] with variable velocity V = V(x) we consider its rough approximation, which is a solution of a similar equation but with a constant coefficient V. According to the theory of linear differential equations with constant coefficients, a general solution to the equation

(1/2) f″ + Vγf′ − (γλ + c(λ)) f = 0

is f = C1 exp(α1x) + C2 exp(α2x), where

α1 = −Vγ − √((Vγ)² + 2(γλ + c(λ))),
α2 = −Vγ + √((Vγ)² + 2(γλ + c(λ))).

The constants C1, C2 can be obtained from boundary conditions for the desired partial solution. We are interested in the partial solution which is the generating function f(λ | x) ≡ Ex exp(−λτb). From the boundedness of this solution as x → −∞ it follows that C1 = 0. The second constant can be found from the condition 1 = f(λ | b) = C2 exp(α2b). Hence

f(λ | x) = exp((b − x)(Vγ − √((Vγ)² + 2(γλ + c(λ))))).

With the help of the inverse Laplace transformation the distribution density of τb can be found, which is represented by a unimodal function with positive asymmetry. Moments of this distribution can be found by differentiating the function f(λ | x) with respect to λ at the point λ = 0. We obtain

M1(x) ≡ Ex τb = −(∂/∂λ) f(λ | x) |_{λ=0} = (b − x)(γ + δ1)/(Vγ),

where δ1 ≡ (∂c(λ)/∂λ)|_{λ=0} = ∫_0^∞ u ν(du),

M2(x) ≡ Ex (τb)² = (∂²/(∂λ)²) f(λ | x) |_{λ=0}
      = ((b − x)(γ + δ1)/(Vγ))² + (b − x)δ2/(Vγ) + (b − x)(γ + δ1)²/(Vγ)³,

where δ2 = −(∂²c(λ)/(∂λ)²)|_{λ=0} = ∫_0^∞ u² ν(du). From here

M̃2(x) ≡ Ex (τb − M1(x))² = M2(x) − (M1(x))² = (b − x)δ2/(Vγ) + (b − x)(γ + δ1)²/(Vγ)³.
We are interested in the moments of the distribution of τb:

m1(b) = M1(0) = b (γ + δ1)/(Vγ),
μ2(b) = M̃2(0) = b δ2/(Vγ) + b (γ + δ1)²/(Vγ)³.

Note that formula [9.2] for the liquid chromatograph HETP can be rewritten as

H ≡ H(0, b) = b μ2(b)/m1²(b).   [9.14]

Thus, in this case,

H(0, b) = δ2Vγ/(γ + δ1)² + 1/(Vγ),
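The constant-velocity HETP can be checked numerically (our own two-line sketch; the parameter values are arbitrary): the definition [9.14], applied to the moments m1(b) and μ2(b), reduces exactly to a term proportional to V plus a term proportional to 1/V.

```python
b, V, gamma, delta1, delta2 = 5.0, 2.0, 1.3, 0.4, 0.9   # illustrative values

m1 = b * (gamma + delta1) / (V * gamma)
mu2 = b * delta2 / (V * gamma) + b * (gamma + delta1) ** 2 / (V * gamma) ** 3

H_definition = b * mu2 / m1**2                                          # formula [9.14]
H_van_deemter = delta2 * V * gamma / (gamma + delta1) ** 2 + 1.0 / (V * gamma)
```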
This precisely corresponds to the well-known van Deemter formula for gaseous chromatography [GOL 74].

17. Precise solutions

Continue the investigation of equation [9.13] with variable coefficient V(x).

First moment

Note that the function

f_{σ(a,b)}(λ, {a, b} | x) = g(a,b)(λ; x) + h(a,b)(λ; x) ≡ Ex exp(−λσ(a,b))

satisfies equation [9.13]. Substituting this function in the equation and differentiating the members of the equation with respect to λ at the point λ = 0, we obtain the differential equation

(1/2) M1,a″ + V(x)γM1,a′ + γ + δ1 = 0,   [9.15]

since c(0) = 0 and f_{σ(a,b)}(0, {a, b} | x) = 1. In addition M1,a(a) = M1,a(b) = 0. Let us denote ϕ(x) = M1,a′(x) and find a partial solution of the equation

(1/2) ϕ′ + V(x)γϕ + γ + δ1 = 0,

where according to formula [9.12] V(x) = A/√(B − x), with A = √(Cm0k/(2S)) and B = p²(0)kS/(2Cm0). Look for a partial solution of the form ϕ(x) = A1 + A2√(B − x).
Substituting this expression into the equation and equating the coefficients of each power of the root to zero, we obtain two equations with respect to A1 and A2, from which A1 = −(γ + δ1)/(4A²γ²), A2 = −(γ + δ1)/(Aγ). A general solution of the homogenous equation is of the form

ϕ0(x) = C1 exp(−2γ ∫_a^x V(s) ds) = C1 exp(−4Aγ(√(B − a) − √(B − x))),

where C1 is an arbitrary constant. Consequently

M1,a(x) = C2 + ∫_a^x (ϕ(s) + ϕ0(s)) ds = C2 + ∫_a^x (ϕ(s) + C1 exp(−4Aγ(√(B − a) − √(B − s)))) ds.

The arbitrary constant C2 of the general solution is determined from the condition 0 = M1,a(a) = C2. The second boundary condition determines C1:

0 = M1,a(b) = ∫_a^b (ϕ(s) + C1 exp(−4Aγ(√(B − a) − √(B − s)))) ds,

from which

C1 = −∫_a^b ϕ(s) ds (∫_a^b exp(−4Aγ(√(B − a) − √(B − s))) ds)^{−1}.

We obtain

M1,a(x) = ∫_a^x ϕ(s) ds − ∫_a^b ϕ(s) ds (∫_a^x exp(4Aγ√(B − s)) ds) / (∫_a^b exp(4Aγ√(B − s)) ds)
        = −(∫_x^b ϕ(s) ds ∫_a^x exp(4Aγ√(B − s)) ds − ∫_a^x ϕ(s) ds ∫_x^b exp(4Aγ√(B − s)) ds) / ∫_a^b exp(4Aγ√(B − s)) ds.

In this expression

∫_a^b ϕ(s) ds = A1(b − a) + (2/3) A2 ((B − a)^{3/2} − (B − b)^{3/2}),

∫_a^b exp(4Aγ√(B − s)) ds = (1/(2Aγ)) (√(B − a) − 1/(4Aγ)) exp(4Aγ√(B − a))
                           − (1/(2Aγ)) (√(B − b) − 1/(4Aγ)) exp(4Aγ√(B − b)).
The ratio of the two latter expressions tends to zero as a → −∞. Thus, the expectation in which we are interested is equal to

M1(x) = lim_{a→−∞} M1,a(x) = −A1(b − x) − (2/3) A2 ((B − x)^{3/2} − (B − b)^{3/2})
      = (b − x)(γ + δ1)/(4A²γ²) + (2/3)((B − x)^{3/2} − (B − b)^{3/2})(γ + δ1)/(Aγ)
      = ((γ + δ1)/(Aγ)) ((b − x)/(4Aγ) + (2/3)((B − x)^{3/2} − (B − b)^{3/2})).   [9.16]
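Formula [9.16] can be verified by substituting it back into the limit form of equation [9.15]; the sketch below does so with finite differences (our own code, arbitrary illustrative parameters).

```python
import math

A, B, b, gamma, delta1 = 1.1, 10.0, 4.0, 0.7, 0.5   # illustrative values
G = gamma + delta1

def M1(x):
    """First moment of tau_b, formula [9.16]."""
    return (G / (A * gamma)) * ((b - x) / (4.0 * A * gamma)
                                + (2.0 / 3.0) * ((B - x) ** 1.5 - (B - b) ** 1.5))

def ode_residual(x, h=1e-4):
    """Residual of (1/2) M'' + V(x) gamma M' + gamma + delta1 with V(x) = A/sqrt(B - x)."""
    d1st = (M1(x + h) - M1(x - h)) / (2.0 * h)
    d2nd = (M1(x + h) - 2.0 * M1(x) + M1(x - h)) / (h * h)
    return 0.5 * d2nd + (A / math.sqrt(B - x)) * gamma * d1st + G
```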
Second moment

Double differentiation of equation [9.13] with respect to λ at the point λ = 0 brings us to the equation

(1/2) M2,a″ + V(x)γM2,a′ + δ2 + 2(γ + δ1) M1,a(x) = 0.

Denoting ϕa = M2,a′, we obtain the equation

(1/2) ϕa′ + V(x)γϕa + δ2 + 2(γ + δ1) M1,a(x) = 0,

which can be solved in a standard way. Let us assume that we know a partial solution ϕa of this equation which increases with a power rate as a → −∞. Then, according to the previous evaluation, we obtain M2(x) = lim_{a→−∞} (−∫_x^b ϕa(s) ds). In particular, if the integrand is uniformly bounded on the region of integration, then M2(x) = −∫_x^b ϕ(s) ds, where ϕ(x) = lim_{a→−∞} ϕa(x). On the other hand, ϕ is a partial solution of the equation

(1/2) ϕ′ + V(x)γϕ + δ2 + 2(γ + δ1) M1(x) = 0,

which is easier to construct in a standard way. In this case the free term has the form A1 + A2(B − x) + A3(B − x)^{3/2}, where

A1 = δ2 − (γ + δ1)²(B − b)/(2A²γ²) − (4/3)(γ + δ1)²(B − b)^{3/2}/(Aγ),
A2 = (γ + δ1)²/(2A²γ²),   A3 = (4/3)(γ + δ1)²/(Aγ).
Let us look for ϕ of the form Σ_{k=0}^4 Fk (B − x)^{k/2}. The standard procedure of undetermined coefficients gives the values

F0 = −(1/(4A²γ²)) (A1 + (3/(8A²γ²)) (A2 + A3/(Aγ))),
F1 = −(1/(Aγ)) (A1 + (3/(8A²γ²)) (A2 + A3/(Aγ))),
F2 = −(3/(4A²γ²)) (A2 + A3/(Aγ)),
F3 = −(1/(Aγ)) (A2 + A3/(Aγ)),
F4 = −A3/(Aγ).

Integrating the partial solution we obtain

M2(x) = −F0 (b − x) − Σ_{k=1}^4 (Fk/(k/2 + 1)) ((B − x)^{k/2+1} − (B − b)^{k/2+1}).

This expression gives the variance of τb as a function of the initial point x ∈ [0, b): M̃2(x) = M2(x) − M1²(x). After elementary but awkward transformations we obtain

M̃2(x) = (δ2/(Aγ)) ((b − x)/(4Aγ) + (2/3)((B − x)^{3/2} − (B − b)^{3/2}))
       + ((γ + δ1)²/(A²γ²)) ((11/64)(b − x)/(A⁴γ⁴)
       + (11/24)((B − x)^{3/2} − (B − b)^{3/2})/(A³γ³)
       + (5/8)((B − x)² − (B − b)²)/(A²γ²)
       + (2/5)((B − x)^{5/2} − (B − b)^{5/2})/(Aγ)).   [9.17]
Letting x = 0 in formulae [9.16] and [9.17] and using the previous denotations, we obtain expressions for the expectation and variance of the first exit time of a particle from a column of length b:

m1(b) = M1(0) = E0 τb;   μ2(b) = M̃2(0) = E0 (τb − m1(b))².
Thus,

m1(b) = ((γ + δ1)/(Aγ)) (b/(4Aγ) + (2/3)(B^{3/2} − (B − b)^{3/2})),

μ2(b) = (δ2/(Aγ)) (b/(4Aγ) + (2/3)(B^{3/2} − (B − b)^{3/2}))
      + ((γ + δ1)²/(A²γ²)) ((11/64) b/(A⁴γ⁴) + (11/24)(B^{3/2} − (B − b)^{3/2})/(A³γ³)
      + (5/8)(B² − (B − b)²)/(A²γ²) + (2/5)(B^{5/2} − (B − b)^{5/2})/(Aγ)).

Analogously, initial and central moments of the distribution of τb of any order k > 2 can be evaluated.

Derived characteristics

Using the inequality

bB^{k−1} < B^k − (B − b)^k < bkB^{k−1}   (0 < b < B, k > 1),

we see that m1(b) and μ2(b) increase asymptotically linearly as functions of b. It implies that, as in the case of liquid chromatography, the resolution ρ(b) ≡ m1(b)/√(μ2(b)) of the gaseous chromatograph has order √b as b → ∞.
In order to analyze how HETP depends on velocity we define a local HETP as an integral characteristic of a chromatograph column of small length h, beginning at point x. According to formula [9.14] its HETP is

H(x, x + h) = h (Ex (τx+h)² − (Ex τx+h)²)/(Ex τx+h)².

As h tends to zero, we obtain the desired local characteristic

H(x) = ψ2(x)/ψ1²(x),

where

ψ1(x) = −(∂/∂x) M1(x),   ψ2(x) = −(∂/∂x) M̃2(x).
For a model with a constant velocity the local HETP coincides with the integral HETP [9.14]. In the case of variable velocity we have

ψ1(x) = ((γ + δ1)/(Aγ)) (1/(4Aγ) + (B − x)^{1/2}),

ψ2(x) = (δ2/(Aγ)) (1/(4Aγ) + (B − x)^{1/2})
      + ((γ + δ1)²/(A²γ²)) (11/(64A⁴γ⁴) + (11/16)(B − x)^{1/2}/(A³γ³)
      + (5/4)(B − x)/(A²γ²) + (B − x)^{3/2}/(Aγ)).

According to our denotation we have (B − x)^{1/2} = A/V(x). Expressing powers of this difference in terms of the velocity we obtain

H(x) = (δ2Aγ/(γ + δ1)²) (1/(4Aγ) + A/V)^{−1}
     + (1/(4Aγ) + A/V)^{−2} (11/(64A⁴γ⁴) + 11/(16A²γ³V) + 5/(4γ²V²) + A²/(γV³)).
The first term of this sum increases as a function of V(x). The second term decreases. Hence, for any x, there exists a minimum of this expression. This formula can be considered as a refined variant of the van Deemter formula for gaseous chromatography, taking into account the different velocities in different cross-sections of the column. In this case the functional NTP (number of theoretical plates) can be defined as

Z(b) = ∫_0^b dx/H(x).
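As a joint consistency check of [9.16], [9.17] and the closed form of H(x), the sketch below differentiates M1 and M̃2 numerically and compares ψ2/ψ1² with the displayed expression (our own code, arbitrary illustrative parameters).

```python
import math

A, B, b, gamma, d1, d2 = 1.1, 10.0, 4.0, 0.7, 0.5, 0.9   # illustrative values
G, Ag = gamma + d1, A * gamma

def D(p, x):
    return (B - x) ** p - (B - b) ** p

def M1(x):                                    # formula [9.16]
    return (G / Ag) * ((b - x) / (4 * Ag) + (2.0 / 3.0) * D(1.5, x))

def Mt2(x):                                   # formula [9.17]
    return (d2 / Ag) * ((b - x) / (4 * Ag) + (2.0 / 3.0) * D(1.5, x)) \
        + (G**2 / Ag**2) * ((11.0 / 64.0) * (b - x) / Ag**4
                            + (11.0 / 24.0) * D(1.5, x) / Ag**3
                            + (5.0 / 8.0) * D(2.0, x) / Ag**2
                            + (2.0 / 5.0) * D(2.5, x) / Ag)

def H_closed(x):                              # local HETP expressed through V(x)
    V = A / math.sqrt(B - x)
    t = 1.0 / (4 * Ag) + A / V
    return d2 * Ag / (G**2 * t) + (11.0 / (64 * Ag**4) + 11.0 / (16 * A**2 * gamma**3 * V)
                                   + 5.0 / (4 * gamma**2 * V**2) + A**2 / (gamma * V**3)) / t**2

def H_numeric(x, h=1e-5):
    psi1 = -(M1(x + h) - M1(x - h)) / (2 * h)
    psi2 = -(Mt2(x + h) - Mt2(x - h)) / (2 * h)
    return psi2 / psi1**2
```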
9.5. Transfer with final absorption

We will consider a continuous monotone semi-Markov process whose trajectory has an infinite interval of constancy, i.e. at some instant it stops forever (we distinguish the cases of a final stopping and of breakage of a trajectory). The final (terminal) stopping can be interpreted as sedimentation of a particle in a filter, for example, as an outcome of a chemical transformation (chemisorption). Such processes are interesting from the point of view of geological applications (see [GOL 81, KOR 82]).

18. Semi-Markov process with final stopping

We are interested in continuous processes stopping forever at some random time ζ = ζ(ξ). We also state that at time ζ an infinite interval of constancy with the value
ξ(t) = ξ(ζ) begins for all t ≥ ζ(ξ). This time, in general, depends on the future: the event {ζ ≤ t} is not an element of Ft. In other words, this time is not a Markov time. However, for a semi-Markov process, the distribution of this time and of the value of the trajectory on the infinite interval of constancy can be expressed in terms of semi-Markov transition functions as a limit of an outcome of stepped approximation. Obviously, ζ(Lrξ) ≤ ζ(ξ) and ζ(Lrξ) → ζ(ξ) as r → 0. Besides, Xζ(Lrξ) → Xζξ. The latter is trivial if ζ is a point of continuity of the function ξ. However, it is true even if ζ is a point of discontinuity of the function ξ, because in this case there exists r0 such that for all r < r0, ζ(ξ) = ζ(Lrξ). Let Fζ(A × S | x) = Px(ζ ∈ A, Xζ ∈ S), Hζ(S | x) = Fζ(R+ × S | x). Thus, the measure Hζ(S | x) is a weak limit of the sequence of measures Hζr(S | x) ≡ Px(ζ ∘ Lr < ∞, Xζ ∘ Lr ∈ S) as r → 0. Find the distribution of the pair (ζ ∘ Lr, Xζ ∘ Lr):

Px(ζ ∘ Lr < t, Xζ ∘ Lr ∈ S) = Σ_{n=0}^∞ Px(σ_r^n < t, σ_r ∘ θ_{σ_r^n} = ∞, X_{σ_r^n} ∈ S)
  = Σ_{n=0}^∞ ∫_S F_{σ_r^n}([0, t) × dx1 | x)(1 − Hr(X | x1))
  = ∫_S Ur([0, t) × dx1 | x)(1 − Hr(X | x1)),

where

Hr(S | x) = F_{σ_r}(R+ × S | x),   Ur([0, t) × S | x) = Σ_{n=0}^∞ F_{σ_r^n}([0, t) × S | x).

The latter measure on R+ × X represents the intensity measure of a random locally-finite integer-valued measure N([0, t) × S), counting the number of hittings of two-dimensional points of the marked point process (σ_r^n, X_{σ_r^n})_{n=0}^∞ into the set [0, t) × S. In particular,

Hζr(S | x) = ∫_S Zr(dx1 | x)(1 − Hr(X | x1)),   [9.18]

where Zr(S | x) = Ur(R+ × S | x). In general, the latter measure tends to infinity, and the integrand in [9.18] tends to zero, as r → 0. In some cases it is possible to find the asymptotics of these functions. Note that the right part of formula [9.18] determines the distribution of the limit point of the trajectory for ζ < ∞ as well as for ζ = ∞; in the latter case, if a limit lim ξ(t) as t → ∞ exists Px-a.s. In the following example we assume that Px(ζ < ∞) = 1.
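A toy discrete analog of formula [9.18] (entirely our own construction, not from the text): for a skeleton process that advances one level with probability q and stops forever otherwise, the occupation measure Zr is geometric, and multiplying it by the one-step stopping probability yields the distribution of the stopping level, exactly as in [9.18].

```python
q, N = 0.8, 60   # advance probability and number of levels (illustrative)

visits = [0.0] * (N + 1)        # accumulates the occupation measure Z({j} | 0)
prob = [0.0] * (N + 1)
prob[0] = 1.0
for _ in range(N + 1):
    for j in range(N + 1):
        visits[j] += prob[j]    # add current occupation probabilities
    nxt = [0.0] * (N + 1)
    for j in range(N):
        nxt[j + 1] = prob[j] * q   # advance one level with probability q
    prob = nxt

# Analog of [9.18]: stopping distribution = occupation measure times stopping probability
# (at the top level N the particle stops with probability 1 in this toy model).
stop_dist = [visits[j] * (1.0 - q) for j in range(N)] + [visits[N]]
```

Here visits[j] = q^j plays the role of Zr and (1 − q) that of (1 − Hr(X | x1)); the resulting stop_dist is a probability distribution.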
In this case the limit point is a stopping point, and for any x the measure Hζ(S | x) is a probability measure. It is not difficult to give and justify sufficient conditions for this property to be fulfilled.

19. Diffusion and non-diffusion types of movement

We will consider two principal subclasses of the class of SM processes: diffusion and non-diffusion types. A distinction between these subclasses is exposed when analyzing the distribution of the first exit from a small neighborhood of the initial point. A one-dimensional continuous process has the simplest description (see Chapter 5). In this section we consider the functions g and h defined in item 5.1 with λ = 0, changing the denotation of the coefficients of expansion a little. Let a < x < b and

g(a,b)(x) = Px(σ(a,b) < ∞, X_{σ(a,b)} = a),   h(a,b)(x) = Px(σ(a,b) < ∞, X_{σ(a,b)} = b).

For a diffusion process the following expansion holds (see item 5.12):

g(x−r,x+r)(x) = 1/2 − (1/2) b(x)r − (1/2) c(x)r² + o(r²),
h(x−r,x+r)(x) = 1/2 + (1/2) b(x)r − (1/2) c(x)r² + o(r²),

where b(x) is a shift parameter determining the tendency of the movement to the right (if b(x) > 0) or to the left (if b(x) < 0), and c(x) > 0 is a parameter of stopping of the movement. In this case

Px(ζ ≥ σ(x−r,x+r)) ≤ Px(σ(x−r,x+r) < ∞) = g(x−r,x+r)(x) + h(x−r,x+r)(x) = 1 − c(x)r² + o(r²).

For a non-diffusion ("smooth") process

g(x−r,x+r)(x) = 1 − c(x)r + o(r),   h(x−r,x+r)(x) = o(r),

or

h(x−r,x+r)(x) = 1 − c(x)r + o(r),   g(x−r,x+r)(x) = o(r).
The parameter c(x) determines the stopping of the movement here as well, although in another manner than in the case of diffusion. For a d-dimensional space, d ≥ 2, the distinction between a diffusion and non-diffusion character of movement is in principle the same. For a diffusion process, for a given point x ∈ Rd there exists a non-degenerate linear map of the space Rd onto itself keeping the point x immovable and such that in the new space the distribution of the point of the first exit from a small ball neighborhood with center in x is uniform on the surface of the ball (in the first approximation); in the second approximation some terms relating to a shift or stopping inside
the ball are added (see items 5.27 and 5.34). For a non-diffusion process, at almost all points of the space, the distribution of the point of the first exit from a small ball neighborhood is in first approximation concentrated on the intersection of the surface of the ball with a space of smaller dimension. If the dimension of this space is equal to 1, the distribution is in first approximation concentrated at one point, determining a unique sequence of states (trace) passing through this point; in second approximation only the term determining stopping inside the ball is added. We call the latter case the smooth character of the process.

20. Accumulation equations for diffusion type process

Consider a continuous semi-Markov process of diffusion type and its "kernel of accumulation" Hζ(S | x), i.e. a probability measure depending on the initial point of the process. We will derive for this kernel the forward and backward differential equations with respect to the output (S) and input (x) arguments correspondingly. We begin by deriving the backward equation. According to item 5.36, for any twice differentiable function ϕ there exists the limit
$$A_0^o(\varphi \mid x) \equiv \lim_{r \to 0} \frac{1}{r^2}\bigl(H_r(\varphi \mid x) - \varphi(x)\bigr) = \frac{1}{2}\sum_{i,j=1}^{d} a^{ij}(x)\varphi_{ij}(x) + \sum_{i=1}^{d} b^i(x)\varphi_i(x) - c(x)\varphi(x),$$
where d ≥ 1 is the dimension of the Euclidean space X; (a^{ij}(x)) is a symmetric positive definite matrix of coefficients which can depend on x; in addition, the sum of the diagonal terms of this matrix is equal to 1; (b^i(x)) is a vector field of coefficients; c(x) is a positive function; ϕ_i, ϕ_{ij} are the partial derivatives of ϕ with respect to the coordinates with numbers i and j correspondingly. For an arbitrary function ξ ∈ D and R > 0 we consider a function L_R ξ such that L_R ξ(t) = ξ(0) if t < σ_R(ξ), and L_R ξ(t) = ξ(t) if t ≥ σ_R(ξ). Let H_ζ^R(S | x) = P_x(X_ζ ∘ L_R ∈ S; ζ ∘ L_R < ∞). Then
$$H_\zeta^R(\varphi \mid x) = \varphi(x)\bigl(1 - H_R(X \mid x)\bigr) + \int_X H_R(dy \mid x)\, H_\zeta(\varphi \mid y). \qquad [9.19]$$
Subtract Hζ(ϕ | x) from both parts and divide by R². Then, assuming the function Hζ(ϕ | x) to be continuous and twice differentiable (this assumption can be justified as well), we obtain for R → 0 on the right part of the equality the limit
$$\frac{1}{2d}\,\varphi(x)c(x) + \frac{1}{2d}\, A_0^o\bigl(H_\zeta(\varphi) \mid x\bigr),$$
350
Continuous Semi-Markov Processes
and on the left part a limit of the expression
$$\bigl(H_\zeta^R(\varphi \mid x) - H_\zeta(\varphi \mid x)\bigr)/R^2 = \frac{1}{R^2}\, P_x\bigl((\varphi(x) - \varphi(X_\zeta));\ \zeta < \sigma_R\bigr)$$
$$\le \max\bigl\{\,|\varphi(x) - \varphi(x_1)| : |x - x_1| \le R\,\bigr\}\;\frac{1}{R^2}\bigl(1 - H_R(X \mid x)\bigr),$$
which is equal to zero. From here we discover that the function Hζ(ϕ) ≡ Hζ(ϕ | ·) satisfies the equation
$$\frac{1}{2}\sum_{i,j=1}^{d} a^{ij}\bigl(H_\zeta(\varphi)\bigr)_{ij} + \sum_{i=1}^{d} b^i\bigl(H_\zeta(\varphi)\bigr)_i - c\, H_\zeta(\varphi) + c\varphi = 0. \qquad [9.20]$$
This equation is called the backward accumulation equation. A solution of interest tends to zero at infinity. From the maximum principle it follows that such a solution is unique.

We now derive the forward accumulation equation. In this case, instead of the kernel Hζ(S | x), we investigate its average with respect to a particular measure. Let μ(x) be the density of a probability distribution on X and
$$H_\zeta(S \mid \mu) = \int_X H_\zeta(S \mid x)\,\mu(x)\,dx.$$
The averaged kernels H_ζ^r(S | μ) and Z_r(S | μ) and the measure P_μ have a similar sense. According to equation [9.18] we have
$$H_\zeta^r(\varphi \mid \mu) = \int_X Z_r(dx \mid \mu)\,\varphi(x)\bigl(1 - H_r(X \mid x)\bigr).$$
For any continuous function ϕ the left part of this equality tends to the limit Hζ(ϕ | μ). On the right part we have (1 − H_r(X | x))/r² → c(x). Assume that the function c(x) is continuous. Then as r → 0 there exists a weak limit W(S | μ) of the measures r²Z_r(S | μ), and, consequently,
$$H_\zeta(\varphi \mid \mu) = \int_X W(dx \mid \mu)\,\varphi(x)\,c(x).$$
Assume that there exists a density hζ(x | μ) of the measure Hζ(S | μ). Then a density w(x | μ) of the measure W(S | μ) exists, and in addition hζ(x | μ) = w(x | μ)c(x).
According to the definition of the function Z_r (see [9.18]) we have
$$Z_r(\varphi \mid \mu) = \sum_{k=0}^{\infty} E_\mu\bigl(\varphi(X_{\sigma_r^k});\ \sigma_r^k < \infty\bigr)$$
$$= \varphi(\mu) + \sum_{k=1}^{\infty} E_\mu\bigl(\varphi(X_{\sigma_r}\circ\theta_{\sigma_r^{k-1}});\ \sigma_r^{k-1} < \infty,\ \sigma_r\circ\theta_{\sigma_r^{k-1}} < \infty\bigr)$$
$$= \varphi(\mu) + \sum_{k=0}^{\infty} E_\mu\bigl(E_{X(\sigma_r^k)}\bigl(\varphi(X_{\sigma_r});\ \sigma_r < \infty\bigr);\ \sigma_r^k < \infty\bigr)$$
$$= \varphi(\mu) + \int_X Z_r(dx \mid \mu)\, E_x\bigl(\varphi(X_{\sigma_r});\ \sigma_r < \infty\bigr),$$
where ϕ(μ) = ∫_X μ(x)ϕ(x) dx. From here we obtain the equation
$$\varphi(\mu) + \int_X Z_r(dx \mid \mu)\bigl(E_x\bigl(\varphi(X_{\sigma_r});\ \sigma_r < \infty\bigr) - \varphi(x)\bigr) = 0. \qquad [9.21]$$
Assume the function A_0^o(ϕ | x) is continuous on X. Then, multiplying the measure Z_r by r², dividing the integrated difference by r², and passing to the limit as r → 0, we obtain the equation
$$\varphi(\mu) + \int_X W(dx \mid \mu)\, A_0^o(\varphi \mid x) = 0.$$
The integral in this expression can be represented in the form
$$\int_X w(x \mid \mu)\Bigl(\frac{1}{2}\sum_{i,j=1}^{d} a^{ij}(x)\varphi_{ij}(x) + \sum_{i=1}^{d} b^i(x)\varphi_i(x) - c(x)\varphi(x)\Bigr)dx.$$
Assume the function ϕ(x) and all its partial derivatives of the first and second order tend to zero at infinity. Then, applying the rule of integration by parts, we obtain the other representation of the integral:
$$\int_X \varphi(x)\Bigl(\frac{1}{2}\sum_{i,j=1}^{d}\bigl(a^{ij}(x)\,w(x \mid \mu)\bigr)_{ij} - \sum_{i=1}^{d}\bigl(b^i(x)\,w(x \mid \mu)\bigr)_i - c(x)\,w(x \mid \mu)\Bigr)dx.$$
Using the possibility of an arbitrary choice of the function ϕ, we conclude that
$$\frac{1}{2}\sum_{i,j=1}^{d}\bigl(a^{ij}\,w(\mu)\bigr)_{ij} - \sum_{i=1}^{d}\bigl(b^i\, w(\mu)\bigr)_i - c\,w(\mu) + \mu = 0,$$
where w(μ) = w(· | μ) and w(x | μ) = ∫_X w(x | x₁)μ(x₁) dx₁. We will obtain from this equation the forward equation of accumulation if we substitute into it
w(x | μ) = hζ(x | μ)/c(x). As a result we obtain
$$\frac{1}{2}\sum_{i,j=1}^{d}\bigl(h_\zeta(\mu)\,a^{ij}/c\bigr)_{ij} - \sum_{i=1}^{d}\bigl(h_\zeta(\mu)\,b^i/c\bigr)_i - h_\zeta(\mu) + \mu = 0. \qquad [9.22]$$
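Written for u = h/c, the differential operator on the left of [9.22] is the formal adjoint of the one in [9.20]. This can be spot-checked numerically; the sketch below (Python with numpy/sympy; the coefficient fields b, c and the test functions are illustrative choices, not taken from the book) compares the integrals of (Lϕ)u and ϕ(L*u) in d = 1:

```python
# Numerical spot-check (d = 1) that the operator of the forward equation [9.22],
# acting on u = h/c, is the formal adjoint of the operator of the backward
# equation [9.20]: the integral of (L phi)*u equals the integral of phi*(L* u)
# for rapidly decaying functions.  All concrete choices below are illustrative.
import numpy as np
import sympy as sp

x = sp.symbols('x')
a = sp.Integer(1)          # in d = 1 the trace condition forces a = 1
b = sp.sin(x)              # illustrative drift field
c = 1 + x**2               # illustrative positive absorption field

phi = sp.exp(-x**2)        # test function vanishing at infinity
u = sp.exp(-(x - 1)**2)    # plays the role of h/c

L_phi = a/2*sp.diff(phi, x, 2) + b*sp.diff(phi, x) - c*phi   # operator of [9.20]
Ls_u = sp.diff(a*u, x, 2)/2 - sp.diff(b*u, x) - c*u          # operator of [9.22] on u

f = sp.lambdify(x, L_phi*u - phi*Ls_u, 'numpy')
g = np.linspace(-12.0, 12.0, 400001)
print(abs(np.sum(f(g))*(g[1] - g[0])) < 1e-6)                # -> True
```

The difference of the two integrands is an exact derivative of a decaying function, so its integral vanishes; the Riemann sum reproduces this up to rounding.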
A limitation for a solution of this equation is that for any initial probability density μ a solution hζ(μ) is also the density of a probability distribution, i.e. it is non-negative and integrable, and its integral over X is equal to 1. Such a solution is unique. In differential equation theory, equation [9.22] is said to be conjugate to equation [9.20]. Note that in the theory of Markov processes with breakage, when the time of break is identified with the hitting time of the point at infinity, the kernel Hζ(S | x) is interpreted as the distribution of the point of the process immediately before the break (depending on the initial point of the process). Therefore equations [9.20] and [9.22] can also be derived by methods of the theory of Markov processes. Thus we have demonstrated a "semi-Markov method" of proof on the basis of the semi-Markov interpretation.

21. Accumulation equations for smooth type processes

Semi-Markov processes of smooth type can be defined by the following property of the distribution of the first exit point from a small ball neighborhood of the initial point of the process:
$$H_r(\varphi \mid x) = \varphi(x + rb)\bigl(1 - c(x)r + o(r)\bigr) \qquad (r \to 0),$$
where ϕ is a continuous function and b = b(x) = (b^i(x)) is a point on the surface of the unit sphere. The family (b(x)) (x ∈ X) determines a field of directions in the space X. We suppose the vector function b(x) to be continuous and the function c(x) to be continuous and positive. Under such assumptions, for all ϕ there exists the limit
$$\alpha_0(\varphi \mid x) \equiv \lim_{r \to 0}\frac{1}{r}\bigl(H_r(\varphi \mid x) - \varphi(x)\bigr) = \sum_{i=1}^{d} b^i(x)\varphi_i(x) - c(x)\varphi(x).$$
To derive the backward accumulation equation we use equation [9.19] with another normalizing factor. From this follows the identity α₀(Hζ(ϕ) | x) + c(x)ϕ(x) = 0, which implies the equation
$$\sum_{i=1}^{d} b^i\bigl(H_\zeta(\varphi)\bigr)_i - c\, H_\zeta(\varphi) + c\varphi = 0. \qquad [9.23]$$
This is the backward accumulation equation for processes of smooth type. We are interested in bounded non-negative solutions tending to zero at infinity whenever the function ϕ does so. In this connection max Hζ(ϕ) ≤ max ϕ.
To derive the forward accumulation equation we use equation [9.21]. In this case the measure rZ_r(S | μ) tends weakly to some finite measure Y(S | μ). Its density y(x | μ) (if it exists) is connected with the density of the measure Hζ(S | μ) by the equality hζ(x | μ) = y(x | μ)c(x). From identity [9.21] we obtain the equation
$$\varphi(\mu) + \int_X y(x \mid \mu)\,\alpha_0(\varphi \mid x)\,dx = 0.$$
Substituting the value of the operator α₀ into the left part and integrating by parts we obtain the identity
$$\int_X \varphi(x)\Bigl(-\sum_{i=1}^{d}\bigl(y(x \mid \mu)\,b^i(x)\bigr)_i - c(x)\,y(x \mid \mu) + \mu(x)\Bigr)dx = 0,$$
from which we obtain the equation
$$-\sum_{i=1}^{d}\bigl(y(\mu)\,b^i\bigr)_i - c\,y(\mu) + \mu = 0.$$
Replacing the function y(μ) with hζ(μ)/c we get the forward accumulation equation for processes of smooth type:
$$\sum_{i=1}^{d}\bigl(h_\zeta(\mu)\,b^i/c\bigr)_i + h_\zeta(\mu) - \mu = 0. \qquad [9.24]$$
The required solution is a probability density whenever the function μ is one.

22. Equations with regard to central symmetry

Backward accumulation equations relate to the problem of reconstructing the initial distribution of matter on the basis of a given result of its transportation. In this book we are not going to solve this problem. In what follows we investigate forward accumulation equations with rather simple fields of coefficients. They answer the question of how the matter around a source is distributed. Assume the set of coefficients of the differential equations satisfies the principle of central symmetry with respect to the origin of coordinates. We consider a particular case of such a set of coefficients, namely
$$a^{ij} = \frac{1}{d}\,\delta^{ij}, \qquad b^i(x) = b(r)\,\frac{x^i}{r}, \qquad c = c(r),$$
where x^i is the i-th coordinate of the vector x; r = (Σ_{i=1}^d (x^i)²)^{1/2} is the length of x; δ^{ij} is the Kronecker symbol (δ^{ij} = 0 if i ≠ j, and δ^{ii} = 1); b(r) and c(r)
are continuous positive functions of r ≥ 0 (for smooth type processes b(r) = 1). Hence the vector field at any point but the origin is directed along the beam going from the origin through this point (a radial vector field). Such a field represents an idealized picture of velocity directions inside a laminar liquid stream flowing from a point source in d-dimensional space (d ≥ 1).

Consider the forward accumulation equation for a process of diffusion type with a function μ of degenerate form. Let μ be the unit loading at the origin of coordinates, which we denote as 0. Hence ∫_X ϕ(x)μ(x) dx = ϕ(0), where ϕ is any continuous bounded function. Evidently in this case the density h = hζ(· | μ) is a centrally symmetric function h = h(r). We have
$$\bigl(h a^{ii}/c\bigr)_i = \frac{1}{d}(h/c)_i = \frac{1}{d}(h/c)'\,\frac{\partial r}{\partial x^i} = \frac{1}{d}(h/c)'\,\frac{x^i}{r},$$
$$\bigl(h a^{ii}/c\bigr)_{ii} = \frac{1}{d}(h/c)''\,\frac{(x^i)^2}{r^2} + \frac{1}{d}(h/c)'\Bigl(\frac{1}{r} - \frac{(x^i)^2}{r^3}\Bigr),$$
$$\bigl(h x^i b/(rc)\bigr)_i = (hb/c)'\,\frac{(x^i)^2}{r^2} + (hb/c)\Bigl(\frac{1}{r} - \frac{(x^i)^2}{r^3}\Bigr).$$
Here and in what follows the notations f', f'' (without fixing the argument with respect to which the functions are differentiated) denote derivatives of f with respect to r. Therefore we obtain
$$\sum_{i,j=1}^{d}\bigl(h a^{ij}/c\bigr)_{ij} = \sum_{i=1}^{d}\bigl(h a^{ii}/c\bigr)_{ii} = \frac{1}{d}(h/c)'' + \frac{d-1}{d}\,\frac{(h/c)'}{r},$$
$$\sum_{i=1}^{d}\bigl(h x^i b/(rc)\bigr)_i = (hb/c)' + (d-1)\,\frac{hb/c}{r}.$$
Equation [9.22] is transformed into the equation
$$\frac{1}{2d}(h/c)'' + \frac{d-1}{2d}\,\frac{(h/c)'}{r} - (hb/c)' - (d-1)\,\frac{hb/c}{r} - h = 0 \qquad (r > 0). \qquad [9.25]$$
Correspondingly, equation [9.24] for a smooth process is transformed into the equation
$$(h/c)' + (d-1)\,\frac{h/c}{r} + h = 0 \qquad (r > 0). \qquad [9.26]$$
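The radial reductions above can be spot-checked symbolically. The following sketch (sympy; smooth case, d = 3) applies the operator of [9.24] with b^i = x^i/r, away from the origin where μ = 0, to centrally symmetric h(r), c(r) and compares it with the left-hand side of [9.26]; h and c are symbolic placeholders:

```python
# Symbolic spot-check of the radial reduction for the smooth case: apply the
# operator of [9.24] (b^i = x^i/r, mu = 0 away from the origin) to centrally
# symmetric h(r), c(r) in d = 3 and compare with the left side of [9.26].
import sympy as sp

x, y, z, R = sp.symbols('x y z R', positive=True)
h, c = sp.Function('h'), sp.Function('c')
r = sp.sqrt(x**2 + y**2 + z**2)

u = h(r)/c(r)
lhs = sum(sp.diff(u*xi/r, xi) for xi in (x, y, z)) + h(r)

# evaluate along the ray y = z = 0, x = R
lhs_ray = lhs.subs({y: 0, z: 0, x: R}).doit()
rhs = sp.diff(h(R)/c(R), R) + 2*(h(R)/c(R))/R + h(R)   # [9.26] with d = 3
print(sp.simplify(lhs_ray - rhs))                      # -> 0
```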
The unit loading at the origin of coordinates affects the properties of solutions of equations [9.25] and [9.26]. Under our suppositions equation [9.22] can be represented in the form
$$\nabla^2(ha/c) - \operatorname{div}(hb/c) - h + \mu = 0,$$
where ∇²u is the Laplacian applied to a function u; div v is the divergence of a vector field v; b is the vector field with coordinates b^i. Integrate all the members of this equation over a small ball neighborhood (of radius R) of the origin of coordinates. Under our supposition the integral of the last term is equal to 1 for any R > 0. The integral of the third term tends to zero (if h is bounded or tends to infinity not too quickly as its argument tends to 0). The first and second terms remain. From the theory of differential equations it follows that the integral of the divergence of the given vector field over a ball is equal to the integral of the function hb/c over the surface of the ball, i.e. it is equal to R^{d−1}ω_d h(R)b(R)/c(R), where ω_d is the area of the unit ball surface in d-dimensional space. The integral of ∇²(h/c) over the ball is equal to the integral of the function (h/c)' over the surface of the ball, i.e. to R^{d−1}ω_d (h/c)'(R). Therefore, the following condition must hold:
$$-R^{d-1}\omega_d\,(h/c)'(R) + R^{d-1}\omega_d\, h(R)\,b(R)/c(R) \longrightarrow 1. \qquad [9.27]$$
This is true if the function b is bounded in a neighborhood of the origin and the function h/c has a pole at the point 0 of the corresponding order, namely
$$h(r)/c(r) \sim \begin{cases} (-\log r)/(2\pi), & d = 2, \\ 1/\bigl((d-2)\,\omega_d\, r^{d-2}\bigr), & d \ge 3. \end{cases} \qquad [9.28]$$
Note that near such a pole the function does not tend to infinity very quickly. In this case the integral of the divergence tends to zero as R → 0. Thus, the desired centrally symmetric solution of the forward accumulation equation has to be of the order [9.28], and consequently it depends on the rate of c(r) at the zero point. If the function b is not bounded in a neighborhood of zero, it can be the case that the second term of the equation determines the order of the solution. This term is the determining one for equation [9.26], where second derivatives are absent. To assign the order of solutions at the origin means in fact to assign initial conditions for equations [9.25] and [9.26]. However, it is not convenient to use these conditions for drawing graphs of solutions because of their instability. We search for solutions positive and integrable on the positive semi-axis, ∫₀^∞ h(r) r^{d−1} dr < ∞, and therefore h(r) → 0 (r → ∞). Besides, under "time inversion" solutions of the equations become stable. Hence solutions of equations [9.25] and [9.26] can be found with the help of a computer on any finite interval (ε, T) (0 < ε < T) by the replacement r → T − r. Below we give some examples of how to choose the parameters of equations [9.25] and [9.26] which have a physical interpretation, and also their solutions in analytical (if possible) or graphical form (the result of a computer's work).
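The pole orders [9.28] can be checked directly: with these asymptotics the flux term −R^{d−1}ω_d(h/c)'(R) tends to 1 as R → 0. A short sympy check for d = 2 and d = 3 (ω₂ = 2π, ω₃ = 4π):

```python
# Direct check of the pole orders [9.28]: with h/c ~ (-log r)/(2 pi) for d = 2
# and h/c ~ 1/((d-2) omega_d r^(d-2)) for d >= 3, the flux term
# -R^(d-1) omega_d (h/c)'(R) equals 1 identically (here: d = 2 and d = 3).
import sympy as sp

R = sp.symbols('R', positive=True)

u2 = -sp.log(R)/(2*sp.pi)                   # d = 2, omega_2 = 2 pi
flux2 = -R*(2*sp.pi)*sp.diff(u2, R)
print(sp.simplify(flux2))                   # -> 1

u3 = 1/((3 - 2)*(4*sp.pi)*R)                # d = 3, omega_3 = 4 pi
flux3 = -R**2*(4*sp.pi)*sp.diff(u3, R)
print(sp.simplify(flux3))                   # -> 1
```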
23. Choice of coefficients

Although solutions of the accumulation equations depend only on the ratios a^{ij}/c and b^i/c, the interpretation of these ratios is more natural if one considers the fields (a^{ij}), (b^i) and c separately, because each of them has a specific physical sense. It is possible that some of them depend on the others.

Generally, liquids and gases play the role of a carrier for a particle of matter. As a rule liquids determine the smooth type of process, and gases the diffusion type. We make the simplest supposition with respect to diffusion: we consider it to be constant on all the space. The reason for such a choice follows from the interpretation of diffusion as a result of chaotic movement of molecules in a stable thermal field. It is possible to give other interpretations of diffusion, for example, turbulence. However, taking turbulence into account would change our model and the form of the related differential equations. In this book we consider only laminar streams of liquids and gases.

In the diffusion case the velocity v of the carrier matter determines a tendency of movement of a particle but not its actual shift. In this case we take the field of coefficients b to be equal to the field of velocities. In the smooth case the velocity field plays the main role in the transportation of a particle. In this case we suppose b = v/|v|. This means that these fields coincide in directions but differ in the values of the vectors. In both types of processes the values of the velocities can affect the coefficient of absorption c.

In the centrally symmetric case we assume the velocity field to be v(x) = v(r)x/r, where v(r) depends on the dimension of the space and on the loss of carrier matter during transportation of the particle. The choice is based on the principle of balance of matter under stationary activity of the point source. For an incompressible liquid and without loss of liquid from the system (e.g., by transformation to another aggregate state) the equation of balance has the form
$$\operatorname{div} v = 0, \qquad [9.29]$$
hence v' = −(d − 1)v/r and, consequently, v(r) = v(1)/r^{d−1}. If the incompressible liquid flows over the plane and leaves the system as vapor at a rate of v₁ per unit square, the velocity on the surface decreases faster. The balance of carrier matter has the form v₀ = 2πr v(r) + v₁πr², where v₀ is the activity of the source at the point 0; hence v(r) = (v₀ − v₁πr²)/(2πr). In this case (in contrast to the previous one, when the liquid covers the plane wholly) the circle determined by the condition v(r) > 0 is the domain of the carrier liquid. Hence r_max = (v₀/(πv₁))^{1/2} is the radius of this circle. In three-dimensional space the velocity of the particle transported by incompressible liquid varies inversely proportionally to the squared distance from the source:
v(r) = v(1)/r². In this case it is difficult to give a reasonable interpretation to a loss of the carrier matter, and we do not consider it.

For a gaseous carrier the velocity of the particle depends on the pressure p = p(r), which depends on the resistance of the medium and decreases with the distance from the source. When the gas passes through a homogenous porous medium, the balance equation for the matter has the form
$$\operatorname{div}(pv) = 0, \qquad [9.30]$$
which follows from the Boyle-Mariotte law. Hence, in the centrally symmetric case, we obtain the equation (pv)' + (d−1)pv/r = 0 and, consequently, pv = p(1)v(1)/r^{d−1}. On the other hand, by the Hagen-Poiseuille law for laminar flow of gas the velocity of the stream is proportional to the gradient of pressure (see [GOL 74]):
$$v = -k\nabla p, \qquad [9.31]$$
where k > 0 is a coefficient depending on the properties of the gas and the porous medium, and ∇p is the vector with coordinates p_i. Using the value of the product pv, we obtain the differential equation with respect to p: pp' = −k₁/r^{d−1}, where k₁ = p(1)v(1)/k. Hence p² = p²_∞ + 2k₁/((d − 2)r^{d−2}) (d ≥ 3), where p_∞ is the pressure of the gas at an infinitely distant point, for example, atmospheric pressure. Therefore, for space (d = 3), we obtain
$$v = \frac{m}{r^{3/2}\sqrt{r+n}},$$
where m = p(1)v(1)/p_∞ and n = 2p(1)v(1)/(p²_∞ k).

We consider the field of absorption c(r) to be of two forms. Firstly, a constant field c(r) ≡ c₁ = const. We call such a field strong. In this case the probability of absorption depends only on the distance passed (possibly on the number of collisions of the transported particle with molecules of the immovable phase). Secondly, a field varying inversely proportionally to the velocity of the carrier: c(r) = c₁/v(r). We call such a field weak. In a weak field the probability of absorption in some interval of the immovable phase depends not only on the length of this interval but on the time during which the particle interacts with the immovable phase. The second dependence seems to be more plausible. It is verified indirectly in the theory of chromatography (see item 15). Any intermediate laws of interaction are possible, but these will not be considered here.

24. Transportation with liquid carrier

The liquid carrier can flow out of the source and flood over the horizontal surface. In this case we deal with a two-dimensional accumulation problem [HAR 78].
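The d = 3 velocity profile of the gaseous carrier can be re-derived symbolically. The sympy sketch below eliminates p from pp' = −k₁/r² (with p → p_∞ at infinity) and v = −kp', and compares the result with m/(r^{3/2}√(r+n)); here n is written as 2p(1)v(1)/(k p²_∞), which is what the elimination yields:

```python
# Re-derivation of the d = 3 velocity profile of the gaseous carrier:
# from p p' = -k1/r^2 (with p -> p_inf at infinity) and v = -k p',
# eliminate p and compare with v = m/(r^(3/2) sqrt(r + n)),
# where m = p(1)v(1)/p_inf and n = 2 p(1)v(1)/(k p_inf^2).
import sympy as sp

r, k, p1, v1, p_inf = sp.symbols('r k p_1 v_1 p_inf', positive=True)

k1 = p1*v1/k
p = sp.sqrt(p_inf**2 + 2*k1/r)        # solves p p' = -k1/r^2, p -> p_inf
v = -k*sp.diff(p, r)                  # Hagen-Poiseuille law [9.31]

m = p1*v1/p_inf
n = 2*p1*v1/(k*p_inf**2)
v_claimed = m/(r**sp.Rational(3, 2)*sp.sqrt(r + n))

# compare squares to avoid radical-simplification issues; both sides positive
print(sp.simplify(v**2 - v_claimed**2))   # -> 0
```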
If the carrier is not lost as a result of evaporation, its radial velocity decreases only due to the geometry of the plane. If the carrier is being lost by evaporation, its radial velocity decreases faster and reaches zero at a finite distance from the source. The liquid carrier can also penetrate through a three-dimensional region filled with porous matter. In this case the carrier is not lost; the radial velocity decreases only due to the geometry of space. Besides, we consider two forms of absorption fields for each dimension: strong and weak. We do not take diffusion into account for the liquid carrier. Therefore in this case we deal with first order differential equations.

For dimension d = 2 three types of coefficients are investigated:

(1) c(r) = c1 – flood over without evaporation and strong absorption field. In this case the solution of equation [9.26] has the form
$$h = C\,\frac{1}{r}\exp(-c_1 r).$$
The distribution density of the accumulated matter has an acute maximum at the point of the source.

(2) c = c1 r – flood over without evaporation and weak absorption field. Here
$$h = C\exp\Bigl(-\frac{c_1 r^2}{2}\Bigr).$$
This is the unique case when the accumulated matter has the normal distribution as a result of transporting matter by a liquid carrier.

(3) c = 2c1 πr/(v0 − v1 πr²) (0 < r < (v0/(πv1))^{1/2}) – flood over with evaporation and weak absorption field. In this case the solution of equation [9.26] has the form
$$h = C\bigl(v_0 - v_1\pi r^2\bigr)^{c_1/v_1 - 1}.$$
There is a clearly expressed dependence of the form of the distribution on the ratio of the absorption and evaporation coefficients. If c1/v1 > 1 the matter accumulates in the form of a cupola above the source. If c1/v1 < 1 the matter is concentrated near the boundary of the circular domain, forming a ring with a sharp border.

For dimension d = 3 two types of coefficients are investigated:

(1) c = c1 – penetration of liquid into a three-dimensional volume and strong absorption field. The solution of equation [9.26] has the form
$$h = C\,\frac{1}{r^2}\exp(-c_1 r).$$
The distribution density of such a form has an acute maximum above the source.
(2) c = c1 r² – penetration of liquid into a three-dimensional volume and weak absorption field. The solution of equation [9.26] has the form
$$h = C\exp\Bigl(-\frac{c_1 r^3}{3}\Bigr).$$
This distribution is similar to the normal distribution but it has a more clearly expressed boundary between large and small values of the density (see item 15).
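Each of the closed-form solutions listed in this item can be verified by direct substitution into [9.26]. A sympy sketch (C, c1, v0, v1 are free constants):

```python
# Verification by direct substitution of the closed-form solutions of [9.26]:
# compute (h/c)' + (d-1)(h/c)/r + h for each pair (c, h) and simplify.
import sympy as sp

r, C, c1, v0, v1 = sp.symbols('r C c_1 v_0 v_1', positive=True)

def lhs_926(h, c, d):
    u = h/c
    return sp.simplify(sp.diff(u, r) + (d - 1)*u/r + h)

print(lhs_926(C*sp.exp(-c1*r)/r, c1, 2))           # d = 2, strong field -> 0
print(lhs_926(C*sp.exp(-c1*r**2/2), c1*r, 2))      # d = 2, weak field   -> 0
c3 = 2*sp.pi*c1*r/(v0 - sp.pi*v1*r**2)             # d = 2, evaporation
h3 = C*(v0 - sp.pi*v1*r**2)**(c1/v1 - 1)
print(lhs_926(h3, c3, 2))                          # -> 0
print(lhs_926(C*sp.exp(-c1*r)/r**2, c1, 3))        # d = 3, strong field -> 0
print(lhs_926(C*sp.exp(-c1*r**3/3), c1*r**2, 3))   # d = 3, weak field   -> 0
```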
25. Transportation with gaseous carrier

We consider an intrusion of a gaseous carrier into a porous medium, where the gas moves with a deceleration less than that of a liquid. Accumulation of the matter corresponds to two types of fields of coefficients:

(1) b = m/(r^{3/2}√(r+n)), c = c1 – strong absorption field. In this case equation [9.25] takes the form
$$h'' + h'\Bigl(\frac{2}{r} - 6b\Bigr) - h\Bigl(\frac{3nb}{r(r+n)} + 6c_1\Bigr) = 0.$$
Its analytical solution is not known. For graphs of solutions, see Figures 9.2 and 9.3. There is an acute maximum at the place of the source; the rate of decrease of the density depends on the value of c1.

(2) b = m/(r^{3/2}√(r+n)), c = c1/b – weak absorption field. In this case equation [9.25] takes the form h'' − Fh' + Gh = 0, where
$$F = \frac{6r + 4n}{r(r+n)}, \qquad G = -\frac{5r^2 + 6rn + (9/4)n^2}{r^2(r+n)^2} + \frac{b(36r + 24n)}{r(r+n)} - \frac{6c_1}{b}.$$
Its analytical solution is not known. For graphs of solutions, see Figures 9.4 and 9.5. The distribution has a "crater" at the place of the source. The radius of the ring of maximum values depends on c1 (the left pictures correspond to smaller values of c1).

To obtain the graphs we use a standard algorithm for the approximate solution of systems of first order differential equations solved with respect to the derivatives. This algorithm is realized on a computer (the "Stend" program by Prof. V.A. Proursin). In all the pictures, profiles of the distribution densities are shown as functions of the distance from the source, which is at zero (on the left).
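Such a computation can be reproduced with standard tools. The following sketch uses scipy's solve_ivp together with the "time inversion" r → T − r described above, integrating from large r toward the source so that the decaying solution is numerically stable; all parameter values here are arbitrary illustrative choices, not those used for the book's figures:

```python
# Numerical sketch for case (1): integrate
#   h'' + h'(2/r - 6b) - h(3nb/(r(r+n)) + 6 c1) = 0,  b = m/(r^(3/2) sqrt(r+n)),
# as a first order system in s = T - r ("time inversion"), from large r
# toward the source.  All parameter values are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

m, n, c1 = 1.0, 1.0, 1.0
eps, T = 0.05, 10.0

def rhs(s, y):                       # s = T - r, y = (h, dh/dr)
    r = T - s
    b = m/(r**1.5*np.sqrt(r + n))
    h, hp = y
    hpp = -hp*(2.0/r - 6.0*b) + h*(3.0*n*b/(r*(r + n)) + 6.0*c1)
    return [-hp, -hpp]               # d/ds = -d/dr

sol = solve_ivp(rhs, (0.0, T - eps), [1e-6, -1e-6], rtol=1e-8, atol=1e-12)
r_grid, h_grid = T - sol.t, sol.y[0]
print(sol.success and h_grid[-1] > h_grid[0] > 0.0)   # True: h grows toward the source
```

The computed profile shows the acute maximum at the source described in the text; the free constant factor would be fixed by the probability normalization of h over the space.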
Figures 9.2–9.5: profiles of the distribution density h as functions of the distance r from the source.
26. Conclusion

Using methods of the theory of semi-Markov processes we obtained the forward and backward accumulation equations. The density of the measure of accumulated matter is proportional to the distribution density of the process at the time of stopping. When analyzing the process of accumulation two problems arise: forward and backward. The backward problem is to reconstruct a source on the basis of an observable distribution. For this aim the backward equation can be used. The main content of this section is to derive forward accumulation equations for diffusion and smooth types of processes and to solve them in the case of circular symmetry. The solutions answer the question of how the matter around a source is distributed. We consider a point source in a two- and three-dimensional homogenous and isotropic medium. It is shown that under some combinations of the model parameters the zones of accumulated matter can have either a maximum or a local minimum of concentration at the place of the source. In the latter case zones of increased concentration form concentric rings or spheres around the source. The radii of these rings and spheres depend on the rate and character of absorption of the moving particles by the substance of the filter. Thus, this property can imply separation of components of a mixture which differ in their absorption rates. Such a conclusion may have a geological sense [GOL 81].
Bibliography
[ALD 78a] ALDOUS D., Weak convergence of stochastic processes for processes viewed in the Strasbourg manner, Cambridge, England, 1978.
[ALD 78b] ALDOUS D., "Stopping times and tightness", Ann. Probab., vol. 6, no. 2, p. 1–6, 1978.
[ARK 79] ARKIN V.I., EVSTIGNEEV I.V., Probability Models of Control and Economy Dynamics, Nauka, Moscow, 1979 (in Russian).
[BIL 77] BILLINGSLEY P., Convergence of Probability Measures, Nauka, Moscow, 1977 (in Russian).
[BIR 84] BIRKHOFF G., Theory of Lattices, Nauka, Moscow, 1984 (in Russian).
[BLU 62] BLUMENTHAL R.M., GETOOR R.K., MCKEAN H.P., JR., "Markov processes with identical hitting distributions", Illinois J. Math., vol. 6, no. 3, p. 402–421, 1962.
[BLU 68] BLUMENTHAL R.M., GETOOR R.K., Markov Processes and Potential Theory, Academic Press, New York, 1968.
[BOR 96] BORODIN A.N., SALMINEN P., Handbook of Brownian Motion – Facts and Formulae, Birkhäuser Verlag, Basel, Boston, Berlin, 1996.
[CHA 79] CHACON R.V., JAMISON B., "Processes with state-dependent hitting probabilities and their equivalence under time changes", Adv. Math., vol. 32, no. 1, p. 1–36, 1979.
[CHA 81] CHACON R.V., LE JAN Y., WALSH J.B., "Spatial trajectories", Lect. Not. Math., vol. 850, 1981.
[CHE 73] CHEONG C.K., DE SMIT J.H.A., TEUGELS J.L., Bibliography on Semi-Markov Processes, Core discussion papers, no. 7310, Louvain, Brussels, 1973.
[CIN 79] ÇINLAR E., "On increasing continuous processes", Stoch. Proc. Appl., no. 9, p. 147–154, 1979.
[COX 67] COX D., SMITH V., Renewal Theory, Sov. radio, Moscow, 1967 (in Russian).
[DAR 53] DARLING D.A., SIEGERT A.J.F., "The first passage problem for a continuous Markov process", Ann. Math. Statist., vol. 24, p. 624–639, 1953.
[DEL 72] DELLACHERIE C., Capacités et Processus Stochastiques, Berlin, Heidelberg, New York, 1972.
[DIT 65] DITKIN V.A., PRUDNIKOV A.P., Handbook on Operation Calculus, High school, Moscow, 1965 (in Russian).
[DIT 66] DITKIN V.A., PRUDNIKOV A.P., Operation Calculus, High school, Moscow, 1966 (in Russian).
[DYN 59] DYNKIN E.B., Foundation of Theory of Markov Processes, Fizmatgiz, Moscow, 1959 (in Russian).
[DYN 63] DYNKIN E.B., Markov Processes, Fizmatgiz, Moscow, 1963 (in Russian).
[DYN 67] DYNKIN E.B., YUSHKEVICH A.A., Theorems and Tasks on Markov Processes, Nauka, Moscow, 1967 (in Russian).
[DYN 75] DYNKIN E.B., YUSHKEVICH A.A., Controllable Markov Processes and Their Applications, Nauka, Moscow, 1975 (in Russian).
[FEL 67] FELLER W., Introduction to Probability Theory and Its Applications, Mir, Moscow, 1967 (in Russian).
[GID 65] GIDDINGS J.C., Dynamics of Chromatography, vol. 1, Marcel Dekker, Inc., New York, 1965.
[GIH 71] GIHMAN I.I., SKOROKHOD A.V., Theory of Random Processes, vol. 1, Nauka, Moscow, 1971 (in Russian).
[GIH 73] GIHMAN I.I., SKOROKHOD A.V., Theory of Random Processes, vol. 2, Nauka, Moscow, 1973 (in Russian).
[GIH 75] GIHMAN I.I., SKOROKHOD A.V., Theory of Random Processes, vol. 3, Nauka, Moscow, 1975 (in Russian).
[GIL 89] GILBARG D., TRUDINGER N., Elliptic Differential Equations with Partial Derivatives of Second Order, Nauka, Moscow, 1989 (in Russian).
[GOL 74] GOLBERT C.A., VIGDERGAUZ M.S., Course of Gas Chromatography, Chemistry, Moscow, 1974 (in Russian).
[GOL 81] GOLUBEV V.S., Dynamics of Geo-Chemical Processes, Nedra, Moscow, 1981 (in Russian).
[GUB 72] GUBENKO L.G., STATLANG E.S., "On controllable semi-Markov processes", Cybernetics, no. 2, p. 26–29, 1972 (in Russian).
[GUT 75] GUT A., "Weak convergence and first passage times", J. Appl. Probab., vol. 12, no. 2, p. 324–334, 1975.
[GUT 81] GUT A., AHLBERG P., "On the theory of chromatography based upon renewal theory and a central limit theorem for randomly indexed partial sums of random variables", Chemica Scripta, vol. 18, no. 5, p. 248–255, 1981.
[HAL 53] HALMOS P., Theory of Measure, Mir, Moscow, 1953 (in Russian).
[HAN 62] HUNT G.A., Markov Processes and Potentials, Inostr. Literatury, Moscow, 1962 (in Russian).
[HAR 69] HARLAMOV B.P., "Characterization of random functions by random pre-images", Zapiski nauch. semin. LOMI, vol. 12, p. 165–196, 1969 (in Russian).
[HAR 71a] HARLAMOV B.P., "On the first time of exit of a random walk on the line from an interval", Math. notes, vol. 9, no. 6, p. 713–722, 1971 (in Russian).
[HAR 71b] HARLAMOV B.P., "Determining of random processes by streams of first entries", Reports to AS USSR, vol. 196, no. 2, p. 312–315, 1971 (in Russian).
[HAR 72] HARLAMOV B.P., "Random time change and continuous semi-Markov processes", Zapiski nauch. semin. LOMI, vol. 29, p. 30–37, 1972 (in Russian).
[HAR 74] HARLAMOV B.P., "Random processes with semi-Markov streams of first entries", Zapiski nauch. semin. LOMI, vol. 41, p. 139–164, 1974 (in Russian).
[HAR 76a] HARLAMOV B.P., "On connection between random curves, time changes, and regeneration times of random processes", Zapiski nauch. semin. LOMI, vol. 55, p. 128–194, 1976 (in Russian).
[HAR 76b] HARLAMOV B.P., "On convergence of semi-Markov walks to a continuous semi-Markov process", Probability theory and its applications, vol. 21, no. 3, p. 497–511, 1976 (in Russian).
[HAR 77] HARLAMOV B.P., "Property of correct exit and a limit theorem for semi-Markov processes", Zapiski nauch. semin. LOMI, vol. 72, p. 186–201, 1977 (in Russian).
[HAR 78] HARLAMOV B.P., "On a mathematical model of accumulation of accessory minerals in sedimental deposits", in Investigation on Mathematical Geology, p. 80–89, Nauka, Leningrad, 1978 (in Russian).
[HAR 79] HARLAMOV B.P., YANIMYAGI V.E., "Construction of homogeneous Markov process with given distributions of the first exit points", Zapiski nauch. semin. LOMI, vol. 85, p. 207–224, 1979 (in Russian).
[HAR 80a] HARLAMOV B.P., "Additive functionals and time change preserving semi-Markov property of process", Zapiski nauch. semin. LOMI, vol. 97, p. 203–216, 1980 (in Russian).
[HAR 80b] HARLAMOV B.P., "Deducing sequences and continuous semi-Markov processes on the line", Zapiski nauch. semin. LOMI, vol. 119, p. 230–236, 1980 (in Russian).
[HAR 83] HARLAMOV B.P., "Transition functions of continuous semi-Markov process", Zapiski nauch. semin. LOMI, vol. 130, p. 190–205, 1983 (in Russian).
[HAR 89] HARLAMOV B.P., "Characteristic operator and curvilinear integral for semi-Markov process", Zapiski nauch. semin. LOMI, vol. 177, p. 170–180, 1989 (in Russian).
[HAR 90] HARLAMOV B.P., "On statistics of continuous Markov processes: semi-Markov approach", in Probability Theory and Mathematical Statistics, Proc. 5th Vilnius Conf., p. 504–511, 1990.
[HAR 99] HARLAMOV B.P., "Optimal prophylaxis policy for systems with partly observable parameters", in Statistical and Probabilistic Models in Reliability, D.C. Ionescu and N. Limnios (eds.), Birkhauser, Boston, p. 265–278, 1999.
[HAR 02] HARLAMOV B.P., "Absolute continuity of measures in the class of semi-Markov processes of diffusion type", Zapiski nauch. semin. POMI, vol. 294, p. 216–244, 2002 (in Russian).
[HOF 72] HOFMANN A., "Chromatographic theory of infiltration metasomatism and its application to feldspars", Amer. J. Sci., vol. 272, p. 69–90, 1972.
[ITO 60] ITÔ K., Probability Processes, vol. 1, Inostr. Literatury, Moscow, 1960.
[ITO 63] ITÔ K., Probability Processes, vol. 2, Inostr. Literatury, Moscow, 1963.
[ITO 65] ITÔ K., MCKEAN H.P., Diffusion Processes and Their Sample Paths, Berlin, Heidelberg, New York, 1965.
[KEI 79] KEILSON J., Markov Chain Models – Rarity and Exponentiality, Springer Verlag, New York, Heidelberg, Berlin, 1979.
[KEL 68] KELLEY J., General Topology, Nauka, Moscow, 1968 (in Russian).
[KEM 70] KEMENY G., SNELL G., Finite Markov Chains, Nauka, Moscow, 1970 (in Russian).
[KNI 64] KNIGHT F., OREY S., "Construction of a Markov process from hitting probabilities", J. Math. Mech., vol. 13, no. 15, p. 857–874, 1964.
[KOL 36] KOLMOGOROV A.N., Foundation of Probability Theory, ONTI, Moscow, 1936 (in Russian).
[KOL 38] KOLMOGOROV A.N., "On analytical methods in probability theory", Progress of math. sci., no. 5, p. 5–41, 1938 (in Russian).
[KOL 72] KOLMOGOROV A.N., FOMIN S.V., Elements of Theory of Functions and Functional Analysis, Nauka, Moscow, 1972 (in Russian).
[KOR 74] KOROLYUK V.S., BRODI S.M., TURBIN A.F., "Semi-Markov processes and their application", in Probability Theory. Mathematical Statistics. Theoretical Cybernetics, VINITI, Moscow, vol. 2, p. 47–98, 1974 (in Russian).
[KOR 76] KOROLYUK V.S., TURBIN A.F., Semi-Markov Processes and Their Application, Naukova dumka, Kiev, 1976 (in Russian).
[KOR 82] KORZHINSKI D.S., Theory of Metasomatic Zonality, Nauka, Moscow, 1982 (in Russian).
[KOR 91] KORLAT A.N., KUZNETSOV V.N., NOVIKOV M.M., TURBIN A.F., Semi-Markov Models of Restorable Systems and Queueing Systems, Steenca, Kisheneu, 1991.
[KOV 65] KOVALENKO I.N., "Queueing theory", in Results of Science. Probability Theory, VINITI, Moscow, p. 73–125, 1965 (in Russian).
[KRY 77] KRYLOV N.V., Controllable Processes of Diffusion Type, Nauka, Moscow, 1977 (in Russian).
[KUZ 80] K UZNETSOV S.E. “Any Markov process in Borel space has transition function”, Probab. theory and its application, vol. 25, no. 2, p. 389–393, 1980. [LAM 67] L AMPERTI J., “On random time substitutions and Feller property”, in Markov Processes and Potential Theory, John Wiley & Sons, New York, London, Sydney, p. 87–103, 1967. [JAN 81] L E JAN Y., “Arc length associated with a Markov process”, Adv. Math., vol. 42, no. 2, p. 136–142, 1981. [LEI 47] L EIBENZON L.S., Motion of Natural Liquids and Gases in Porous Medium, Nedra, Moscow, 1947. [LEV 54] L ÉVY P., “Processus semi-Markoviens”, in Proc. Int. Congr. Math. (Amsterdam, 1954), vol. 3, p. 416–426, 1956. [LIP 74] L IPTSER R.S H ., S HIRYAEV A.N., Statistics of Random Processes, Nauka, Moscow, 1974 (in Russian). [LIP 86] L IPTSER R.S H ., S HIRYAEV A.N. Theory of Martingales, Nauka, Moscow, 1986 (in Russian). [LOE 62] L OEV M., Probability Theory, Inostr. Literatury, Moscow, 1962. [MAI 71] M AINSONNEUVE B., “Ensembles régénératifs, temps locaux et subordinateurs”, Lect. Not. Math., vol. 191, p. 147–170, 1971. [MAI 74] M AINSONNEUVE B., “Systémes régénératifs”, Astérisque, vol. 15, 1974. [MAI 77] M AIN K H ., O SAKI S., Markov Decision Processes, Nauka, Moscow, 1977 (in Russian). [MEY 67] M EYER P.-A., “Processus de Markov”, Lect. Not. Math., vol. 26, 1967. [MEY 73] M EYER P.-A., Probability and Potentials, Mir, Moscow, 1973 (in Russian). [MIK 68] M IKHLIN S.G., Course of Mathematical Physics, Nauka, Moscow, 1968 (in Russian). [MUR 73] M ÜRMANN M.G., “A semi-Markovian model for the Brownian motion”, Lect. Not. Math., vol. 321, p. 248–272, 1973. [NAT 74] NATANSON I.P., Theory of Functions of Real Variables, Nauka, Moscow, 1974 (in Russian). [NEV 69] N EVEU G., Mathematical Foundation of Probability Theory, Mir, Moscow, 1969 (in Russian). [NOL 80] N OLLAU V., Semi-Markovsche Prozesse, Akad. Verlag, Berlin, 1980. [PAG 63] PAGUROVA V.I., Tables of Non-Complete Gamma-Function, Nauka, Moscow, 1963 (in Russian). 
[PRO 77] PROTTER PH., "Stability of the classification of stopping times", Z. Wahrsch. verw. Geb., vol. 37, p. 201–209, 1977.
Continuous Semi-Markov Processes
[PRU 81] PRUDNIKOV A.P., BRYCHKOV YU.A., MARICHEV O.I., Integrals and Series, Nauka, Moscow, 1981 (in Russian).
[PYK 61] PYKE R., "Markov renewal processes: definitions and preliminary properties", Ann. Math. Statist., vol. 32, no. 4, p. 1231–1242, 1961.
[ROB 77] ROBBINS G., SIGMUND D., CHOU I., Theory of Optimal Stopping Rules, Nauka, Moscow, 1977 (in Russian).
[ROD 71] RODRIGUEZ D.M., "Processes obtainable from Brownian motion by means of random time change", Ann. Math. Statist., vol. 42, no. 1, p. 176–189, 1971.
[RUS 75] RUSAS G., Contiguity of Probability Measures, Mir, Moscow, 1975 (in Russian).
[SAN 54] SANSONE G., Ordinary Differential Equations, vol. 2, Inostr. Literatury, Moscow, 1954 (in Russian).
[SER 71] SERFOZO R.F., "Random time transformations of semi-Markov processes", Ann. Math. Statist., vol. 42, no. 1, p. 176–188, 1971.
[SHI 71a] SHIH C.T., "Construction of Markov processes from hitting distributions", Z. Wahrsch. verw. Geb., vol. 18, p. 47–72, 1971.
[SHI 71b] SHIH C.T., "Construction of Markov processes from hitting distributions. II", Ann. Math. Statist., vol. 42, no. 1, p. 97–114, 1971.
[SHI 76] SHIRYAEV A.N., Statistical Sequential Analysis, Nauka, Moscow, 1976 (in Russian).
[SHI 80] SHIRYAEV A.N., Probability, Nauka, Moscow, 1980 (in Russian).
[SHU 77] SHURENKOV V.M., "Transformations of random processes preserving the Markov property", Probability Theory and its Applications, vol. 22, no. 1, p. 122–126, 1977 (in Russian).
[SHU 89] SHURENKOV V.M., Ergodic Markov Processes, Nauka, Moscow, 1989 (in Russian).
[SIL 70] SIL'VESTROV D.S., "Limit theorems for semi-Markov processes and their applications", Probability Theory and Math. Statistics, vol. 3, p. 155–194, 1970.
[SIL 74] SIL'VESTROV D.S., Limit Theorems for Complex Random Values, High School, Kiev, 1974.
[SKO 56] SKOROKHOD A.V., "Limit theorems for random processes", Probability Theory and its Applications, vol. 1, no. 3, p. 289–319, 1956 (in Russian).
[SKO 58] SKOROKHOD A.V., "Limit theorems for Markov processes", Probability Theory and its Applications, vol. 3, no. 3, p. 217–264, 1958 (in Russian).
[SKO 61] SKOROKHOD A.V., Investigations in the Theory of Random Processes, Kiev. Univ., Kiev, 1961 (in Russian).
[SKO 70] SKORNYAKOV L.A., Elements of Lattice Theory, Nauka, Moscow, 1970 (in Russian).
[SMI 55] SMITH W.L., "Regenerative stochastic processes", Proc. Roy. Soc. Ser. A, vol. 232, p. 6–31, 1955.
Bibliography
[SMI 66] SMITH W.L., "Some peculiar semi-Markov processes", in Proc. 5th Berkeley Sympos. Math. Statist. and Probab., vol. 2, part 2, p. 255–263, 1965–1966.
[SOB 54] SOBOLEV S.L., Equations of Mathematical Physics, Gostekhizdat, Moscow, 1954 (in Russian).
[SPI 69] SPITZER F., Principles of Random Walk, Mir, Moscow, 1969 (in Russian).
[STO 63] STONE C., "Weak convergence of stochastic processes defined on semi-infinite time intervals", Proc. Amer. Math. Soc., vol. 14, no. 5, p. 694–696, 1963.
[TAK 80] TAKSAR M.I., "Regenerative sets on real line", Lect. Not. Math., vol. 784, p. 437–474, 1980.
[TEU 76] TEUGELS J.L., "A bibliography on semi-Markov processes", J. Comput. Appl. Math., no. 2, p. 125–144, 1976.
[VEN 75] VENTSEL A.D., Course of Random Process Theory, Nauka, Moscow, 1975 (in Russian).
[VIN 90] VINOGRADOV V.N., SOROKIN G.M., KOLOKOL'NIKOV M.G., Abrasive Wear, Mashinostroenie, Moscow, 1990 (in Russian).
[VOL 58] VOLKONSKI V.A., "Random time change in strictly Markov processes", Probability Theory and its Applications, vol. 3, no. 3, p. 332–350, 1958 (in Russian).
[VOL 61] VOLKONSKI V.A., "Construction of non-homogeneous Markov process with the help of random time change", Probability Theory and its Applications, vol. 6, no. 1, p. 47–56, 1961 (in Russian).
[YAC 68] YACKEL J., "A random time change relating semi-Markov and Markov processes", Ann. Math. Statist., vol. 39, no. 2, p. 358–364, 1968.
[YAS 76] YASHIN YA.I., Physico-Chemical Foundations of Chromatographic Separation, Chemistry, Moscow, 1976 (in Russian).
Index
Symbols Xτ (ξ), 30 Xt , 29, 38 Xτ , 38 (P (n) ), 283 (Pu ), 122 (Px ), 64 (Px )x∈X , 64 (Pxw ), 296 (n) (Px )x∈X , 24 (Qx )x∈X , 26 (TtΔ ), 107 (W, B, Q), 26 (YΔ ), 128 (Ω, A), 18 (·)∗ , 199 (P x ), 302, 303 (Px ), 292, 303 (n) (Px ), 292 (at (λ)), 292 A, 301 A(τ1 , τ2 ), 257 A(x), 142 AΔ , 107 Ao0 (ϕ | x), 349 Aoλ (ϕ | x), 191 At (ϕ | x), 301 Ao,R λ (ϕ | x), 191 A0 , 94 Aλ , 103, 302 Aλ (ϕ | x), 92 Aλ (ϕ | x0 ), 162
Aλ ϕ, 84 Arλ (ϕ | x), 92 B(x, r), 46 B(z), 121 B(λ, x), 143 B(y), 43 Ci , 180 D, 167 (k) Di , 169 (k) Dij , 169 Di u, 169 Dij u, 169 E(f ; A), 22 E(s), 18 F ([0, t] × A | x), 31 F (dt, dx1 | x), 103 F (t | x, x1 ), 292 F ∗n ([0, t] × S | x), 27 FΔ , 81 Fτ (B | x), 79 Fζ (A × S | x), 347 Fn (A | x), 23 Fn ([0, t] × A | x), 26 Fn (A | x0 , . . . , xn ), 20 F1,··· ,n (A1 × · · · × An | x), 23 FσΔ (m, A | x), 25 Fσr (dt, dx1 | x), 103 G∞ (S × A1 × A2 ), 315 Gt (S × A1 × A2 | x), 314 Gt (ds | ξ), 234 H([0, t]), 34 H(dx1 | x), 103
HG (p, B), 170 Hζr (S | x), 347 I(B | ξ), 208 IA (z), 21 J(z), 128 Jξ, 75 L (p), 196 JG K(Xt , λtc ), 282 KG (λ, x, t), 161 KR , 175 Lδ , 49 Lr , 49 L∗δm , 52 M (ξ), 233 M0 (ξ), 234 N (B × A), 33 Nt (ξ), 32, 282 NΔ , 85 P (C | X0 = x), 19 P (C | X0 ), 19 P (X1 ∈ A1 | X0 ), 19 P , 17 Px,z , 129 Q(S | x), 291 Q1 , 228 Q2 , 228 QR , 175 Qx (Bn | X 0 , . . .), 28 R(k, n), 85 Rλ (S | x), 166 Rτ (λ1 , . . . ; ϕ), 83 Rt+ (ξ), 32, 312 Rt− (ξ), 32, 312 Rx , 231 Si , 180 T (T1 ), 47 Tt (f | x), 300 U (x, [0, t] × A), 33 Ur ([0, t) × S | x), 347 V , 167 V0 (x), 95 Vλ (x), 92 W , 29 W , 25 W0 , 29 Wλ (Δ), 98 Xn (z), 18
Z, 18 Zr (S | x), 347 [Δ], 46 DS(A), 49 DS(A1 ), 49 DS(r, A), 49 DS, 49 DS(A1 , A), 49 DS(r), 49 Δu, 175 Δ+r , 55 Δ−r , 55 Δh1 ,...,hn , 150 Δtc (ξ), 282 Γ, 175 Γ(ξ1 , ξ2 ), 214 Γ1 (ϕ), 200 Γ2 (ϕ), 200 IMT, 206 IMT(ξ), 256 Λ[0, t], 58 Λ[0, 1], 58 Ω, 17 Φ, 199 Φi , 180 Φk,n , 85 Π(Δ), 55 Π(Δ1 , Δ2 , . . .), 55 Π(r), 55 Π(r1 , r2 , . . .), 55 Πi (k, n), 85 Πm (r), 55 Ψ, 199 RT(Px ), 64 RT, 64 SM(rm ), 75 SM(A0 ), 75 SMGS, 76 B(S), 92 B0 , 82 N, 18, 63 N0 , 18 P, 228 R, 18 R+ , 25, 38 R+ , 54 X, 18, 38
Y, 43 ατ , 39 αt , 30, 38 βτ , 43 βσΔ , 55 A, 17 C, 58 D , 72 D, 37 F , 18, 29 M, 209 T , 38 U , 241 χ(τ ), 253 δ1 × · · · × δk , 248
δ , 212 γ(x), 161 γτ , 212 A, 46 A◦ (k), 172 A0 , 74, 138 Am , 74 B, 26, 202 G, 202 K, 50 L, 234 T, 233 46 A, λtc , 282 MT, 43 MT+ , 43, 46 μ(τ ), 266 μ ◦ (f1 , . . . , fk )−1 , 64 νr,k (S), 288 ω k , 248 ωd , 175 ˙ 39 +, Q1 , 228 Q2 , 228 X τ , 232 X n (w), 25 g G , 143 hG , 143 Rλ0 ϕ, 302 R+ , 25, 38 X, 25 C, 58
D, 58 1, 43 ατ , 232 ∞, 25, 43 θτ , 232 X , 29 ∂Δ, 46 ϕ, 58 ϕ(t), 199 ϕi , 181 π, 167 πi (pk ), 169 ψkn , 85 ρ, 38 ρ (τ1 , τ2 | ξ), 256 ρY , 44 ρC , 58 ρD , 58 ρtD , 59 ρD , 58 ρY , 43 σ(Xt , t ∈ R+ ), 42 σ(Xn , n ∈ N0 ), 18 σ0n , 30 σΔ , 46 σδn , 47 σr , 46 σrn , 47 σ[r] , 47 σr,t , 110 τn , 30 τNt (ξ), 32 θn , 21 θt , 30, 38 θτn , 31 θτ , 39 |f |0;S , 176 |f |2;S , 176 τ , 201 X t , 201 X T , 201 α τ , 201 α t , 201 θτ , 201 θn , 25, 31 θt , 201 7 203 MT,
t , 75 X Pxw , 296 n (w), 25 X t , 201 X Vλ (x), 101 T0 , 216 Ta , 47 Tb , 47 α t , 201 θt , 75, 201 Ta , 45 λ (x), 101 V ξ, 38 ξ1 ∼ξ2 , 209 ζ(w), 25 ζ(τ ), 253 ζ(ξ), 346 {t ∈ ICF}, 260 {t ∈ ICR}, 260 a(λ1 , λ2 ), 93 a(x), 103 aτ , 249 ar (λ1 , λ2 ), 93 at (λ | ξ), 234 at (ξ), 234 bτ (λ), 246 ci , 181 d, 216 f (λ, A | x), 27 f ◦ g, 19 f ∗n (λ, A | x), 28 fΔ , 81 fτ (λ, S | x), 79 gG (λ, x), 138 gr (λ, x), 139 h(ξ, t), 56 hG (p, p1 ), 170 htδ (ξ), 282 htr (ξ), 282 hG (λ, x), 138 hr (λ, x), 139 k1 (y), 117 k2 (y), 117 l(r), 186 l2 , 181 m(f ; A), 22 mr (x), 92
mG (x), 161 nτ (B), 249 p, 201 p1 (ξ1 , ξ2 ), 228 p2 (ξ1 , ξ2 ), 228 q, 201 s ∧ t, 30 si , 181 u(ψ, ξ), 226 uG (ϕ | p), 170 v1 (· | ξ1 , ξ2 ), 214 v1∗ , 216 v2 (· | ξ1 , ξ2 ), 214 v2∗ , 216 vn , 25 w1 (ξ1 , ξ2 ), 228 w2 (ξ1 , ξ2 ), 228 wn , 29 yG (λ, S | x), 153 z1 (ξ1 , ξ2 ), 228 z2 (ξ1 , ξ2 ), 228 zn , 29 zn , 25 B(z), 117 I(z), 127 Lζ, 117 Ω, 121 X0 , 92 Xr , 92 Y, 44 Y , 43 α, 127 A/E, 43 A ∩ B, 43 A, 180 Aλ ϕ, 84 Ak , 249 B(E), 43 B(R), 18 B(X), 18 B(Y), 44 C(A), 64 C(Xk )), 73 C(D), 81 C(x), 301 C+ , 290 CK , 300
CR , 300 D∗ , 110 D, 38 Dτ , 213 D0 , 29, 72 Dt∗ , 110 E, 229 F, 42 F ∗ , 207 F , 30 F ◦ , 229 Fτ , 43, 53 Fn , 22 Ft , 30, 42 FσΔ , 32 Fτ + , 43 Fτn , 31 Ft+ , 42 H, 223 K, 117 L(λ), 170 Lu, 169 MR , 176 Pτ (du), 252 R(G), 172 T0 , 213 Ta , 39 Tb , 39 Tc , 206 Vλ (x), 93 Vλλ21 (x), 93 Wλ (Δ), 98 Aϕ(p), 188 A1 , 117 Bτ , 203 Bn , 26 Bt , 202 Bτ ◦q , 204 B2τ ◦q , 204 Gt , 202 z, 116 z−ε , 119 y 1 , 117 y 2 , 117 |z|, 117 ζ, 117 c(z), 117
Σ0 (A1 ), 118 Σ0 , 118 K(A1 ), 116 Z(A1 ), 116 Z0 , 117 Z0 (A1 ), 116 Z0 (A1 ), 117 Z0 , 117 Z, 117, 121 *, 43, 193, 249 A Additive functional of time run, 246 on a random set, 234 on a trace, 257 Admissible family of measures, 64 of semi-Markov kernels, 128 sequence, 118 Approximate solutions, 340 Asymptotics of solutions, 144 B Backward equation of accumulation, 350 Borel sigma-algebra, 18 C Canonical inverse time change, 214 Chain, 117 Characteristic operator in a strict sense, 101 of Dynkin, 188 Chromatogram, 327 Chromatography, 326 Coefficient of transformation of surface element size, 194 Column chromatography, 327 Compact in Skorokhod space, 282 Completely monotone function, 129 Compositions of deducing sequences, 248 Conditional probability, 19 degeneration of a Markov process, 267 Consistent sequence of maps, 118 Coordinated functions, 223 Correct first exit, 55, 119 map, 117 Curvilinear integral, 256 D Deducing sequence, 49
Delay, 330 Determining sequence of functionals, 121 Diagonal of traces, 216 Difference operator, 84 Differential equation, 143 of elliptical type, 170 Dimensionless operator, 191 Direct time change, 200 Directed set, 121 Dirichlet problem, 170 Distribution, 18 E Eluent, 326 Embedded Markov chain, 35, 296 Enlargement, 121 Epsilon-net, 154 Ergodic Markov chain, 34 process, 312 F Family of sigma-algebras, 43 Feller process, 300 Filter, 326 First exit time, 46 from an initial point, 73 Formula Dynkin, 100 Ito-Dynkin, 277 Lévy, 249 van Deemter, 341 Forward equation of accumulation, 351 Full collection, 116 Functional equation, 170 G Graph of a time change, 200 Green function for the Laplace operator, 172 Gut and Ahlberg model, 333 H Hausdorff topological space, 122 Height equivalent to a theoretical plate, 332 I Independent random variables, 19 Index of intersection, 127
Infinite interval of constancy, 111, 213 Infinitesimal operator, 107, 300 Integral operator, 82 Internal measure, 313 Interval of constancy, 110 conditionally fixed, 260 on the left, 213 Poisson, 260 Intrinsic Markov time, 206 Inverse gamma-process, 333 Inverse time change, 200 Isotone map, 121 Iterated kernel, 153 J Jump-function, 72 L Lévy measure, 249 Lambda-characteristic operator, 92 of first kind, 92 of second kind, 92 Lambda-continuous family, 65 Lambda-potential operator, 65, 82 Lambda-regular process, 299 Laplace family of additive functionals, 234, 249 Law of Boyle-Mariotte, 357 of Hagen-Poiseuille, 357 Local drift, 137 Local operator, 93 Local variance, 167 M Map of partitioning, 121 Markov chain, 21 property for operators, 84 renewal process, 26 Markov process, 311 Markov time, 43 Martingale, 245, 271 Maximal chain, 117 Measurable function, 17 Method of inscribed balls, 269 Metric Skorokhod, 58 Stone, 43 Minimum principle, 144
Moments of distribution of delay, 331 Monotone process, 332 Multi-dimensional lambda potential operator, 83 N Non-commutative sum, 39 O One-coordinate projection, 38 P Partitioning map, 121 Pi-system of sets, 65 Probability measure, 18 Process of maximum values, 334 of transfer with absorption, 346 with independent increments, 241 Process without intervals of constancy, 307 Projective system of measures, 130 Pseudo-local operator, 94 Pseudo-locality in a strong sense, 101 Pseudo-maximum of a function, 147 Q Quadratic characteristics of martingales, 272 Quasi-continuity from the left, 73 R Random curvilinear integral, 259 sequence, 18 set, 234 variable, 17 Realization of a correct map, 118 Realizing function, 119 Regeneration condition, 24 Regeneration time, 64 Regular process, 299 Relatively semi-Markov, 72 Renewal process, 33 S Scalar product, 275 Semi-Markov consistency condition, 31 process, 72 by Gihman and Skorokhod, 76
of diffusion type, 149 transitional function, 31 generating function, 31 walk, 73 Semi-martingale, 271 Semigroup property, 46 Set of elementary events, 17 Shift, 38 Sigma-algebra, 17 Sojourn time in current position, 75 Start time of infinite interval, 336 Stepped function, 72 Stepped semi-Markov process, 29 Stochastic element, 17 Stochastic integral by semi-martingale, 275 by semi-Markov process, 276 Stochastically continuous process, 300 Stopping, 38 Strictly Markov process, 72 Strong semi-Markov process, 241 Superposition of two functions, 19 System of difference equations, 139 T Theorem Arzelà, 193 Hille-Yosida, 300 Shurenkov, 312 Three-dimensional distribution, 312 Time run along the trace, 14 comparison curve, 214 Trace, 209 as an ordered set, 253 of a trajectory, 209 Track of a sigma-algebra, 43 Transformation under time change, 226 Two-dimensional conditional distribution function, 292 W Weak asymptotics, 173 compactness of a family of measures, 284 convergence of a family of measures, 289