Jaures Cecconi (Ed.)
Stochastic Differential Equations Lectures given at a Summer School of the Centro Internazionale Matematico Estivo (C.I.M.E.), held in Cortona (Arezzo), Italy, May 29-June 10, 1978
C.I.M.E. Foundation c/o Dipartimento di Matematica “U. Dini” Viale Morgagni n. 67/a 50134 Firenze Italy
[email protected]
ISBN 978-3-642-11077-1 e-ISBN: 978-3-642-11079-5 DOI:10.1007/978-3-642-11079-5 Springer Heidelberg Dordrecht London New York
©Springer-Verlag Berlin Heidelberg 2010 Reprint of the 1st ed. C.I.M.E., Ed. Liguori, Napoli & Birkhäuser 1981 With kind permission of C.I.M.E.
Printed on acid-free paper
Springer.com
CONTENTS

C. DOLEANS-DADE : Stochastic Processes and Stochastic Differential Equations  pag.   5
A. FRIEDMAN : Stochastic Differential Equations and Applications   "   75
D. STROOCK / S.R.S. VARADHAN : Theory of Diffusion Processes   "  149
G. C. PAPANICOLAOU : Wave Propagation and Heat Conduction in a Random Medium   "  193
C. DEWITT-MORETTE : A Stochastic Problem in Physics   "  217
G. S. GOODMAN : The Embedding Problem for Stochastic Matrices   "  231
STOCHASTIC PROCESSES AND STOCHASTIC DIFFERENTIAL EQUATIONS

C. Doléans-Dade
University of Illinois, Urbana
Introduction. Since Itô defined the stochastic integral with respect to Brownian motion, mathematicians have tried to generalize it. The first step consisted of replacing the Brownian motion by a square integrable martingale. Later H. Kunita and S. Watanabe in [10] introduced the concept of continuous local martingale and the stochastic integral with respect to continuous local martingales, which P. A. Meyer generalized to the non continuous case. But in many cases one observes a certain process X and there are at least two laws P and Q on (Ω,F). For the law Q, X is not a local martingale but the sum of a local martingale and a process with finite variation. We would like to talk about the stochastic integrals ∫ Φ_s dX_s in the two probability spaces (Ω,F,P) and (Ω,F,Q). And of course we would like those two stochastic integrals to be the same. This is why one should try to integrate with respect to semimartingales (sums of a local martingale and a process with finite variation), and this is what people have been doing for a while (see chapters 5 and 6). Now the latest result in the theory is "one cannot integrate with respect to anything more general than semimartingales" (see chapter 3). So as it stands now the theory looks complete.

To end this introduction I wish to thank Professor J. P. Cecconi and the C.I.M.E. for their kind invitation to this session on stochastic differential equations in Cortona; the two weeks of which I, and my family, found most enjoyable.
CHAPTER 1: STOPPING TIMES AND STOCHASTIC PROCESSES
We shall list in this chapter some definitions and properties of stopping times and stochastic processes. The proofs can be found in [1] or [2].

In all that follows (Ω,F,P) is a given complete probability space and (F_t)_{t≥0} a family of sub-σ-fields of F verifying the following "usual" properties:
a) the family (F_t)_{t≥0} is non decreasing and continuous on the right;
b) for each t, F_t contains all the P-null sets of F (a P-null set is a set of P-measure zero).
The σ-fields F_t should be thought of as the σ-fields of the events which occurred up to time t.

We will sometimes consider other probabilities Q on the measurable space (Ω,F). But we shall always assume that the probabilities P and Q are equivalent (i.e. they have the same null sets); the family (F_t) will then still satisfy the "usual" conditions relatively to the probability Q.
STOPPING TIMES

Suppose a gambler decides to stop playing when a certain phenomenon has occurred in the game. Let T be the time at which he will stop playing. The event {T ≤ t} will depend only on the observations of the gambler up to time t. This remark leads to the following natural definition.

1.1. Definition. A non negative random variable T is a stopping time if for every t ≥ 0 the event {T ≤ t} is in F_t. (We allow the random variable T to take the value +∞.)

1.2. Properties of stopping times:
1) if S and T are two stopping times so are S ∨ T, S ∧ T and S + T;
2) if S_n is a monotone sequence of stopping times, the limit T = lim_{n→+∞} S_n is also a stopping time.

1.3. The σ-field F_T. If T is a stopping time, F_T is the family of all the events A ∈ F such that for every t ≥ 0 the event A ∩ {T ≤ t} ∈ F_t. It is easy to check that F_T is a σ-field; it is intuitively the σ-field of all the events that occurred up to time T. In particular, if T is the constant stopping time t, F_T = F_t; if S and T are two stopping times, and if S ≤ T a.e., then F_S ⊂ F_T.

If T is a stopping time, and if A ∈ F_T, the r.v. T_A defined by T_A = T on A, T_A = +∞ on A^c, is also a stopping time (A^c denotes the complement of the set A).

Any stopping time can be approached strictly on the right by the sequence of stopping times T_n = T + 1/n (knowing everything up to the near future you know the present); the similar property on the left is false (knowing the strict past is not enough to know the present); the stopping times which can be thus announced are called predictable times.
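The defining property in 1.1 — that {T ≤ t} is determined by what is observable up to time t — can be checked mechanically in a discrete-time sketch. The code below is illustrative and not part of the lectures; the ±1 walk, the horizon and the hitting level are arbitrary choices. A hitting time passes the check on every path prefix, while the last visit to 0, which needs the future, fails it.

```python
from itertools import product

def hitting_time(path, level=2):
    """First time the partial-sum walk reaches `level` (a stopping time)."""
    s = 0
    for t, step in enumerate(path, start=1):
        s += step
        if s >= level:
            return t
    return float("inf")  # T is allowed to take the value +infinity

def last_zero_time(path):
    """Last visit to 0 -- NOT a stopping time (it depends on the future)."""
    s, last = 0, 0
    for t, step in enumerate(path, start=1):
        s += step
        if s == 0:
            last = t
    return last

def determined_by_past(T, horizon=4):
    """Check that {T <= t} depends only on the first t steps, for every t."""
    paths = list(product([-1, 1], repeat=horizon))
    for t in range(1, horizon + 1):
        seen = {}
        for p in paths:
            key, val = p[:t], T(p) <= t
            if key in seen and seen[key] != val:
                return False  # two paths with the same past disagree
            seen[key] = val
    return True

print(determined_by_past(hitting_time))   # True: a stopping time
print(determined_by_past(last_zero_time)) # False: not a stopping time
```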
1.4. Predictable times. A predictable time T is a stopping time for which there exists a non decreasing sequence (T_n)_{n≥0} of stopping times such that lim_{n→+∞} T_n = T a.e., and for every n, T_n < T on {T > 0}. We shall say that such a sequence (T_n) announces the stopping time T.

1.5. The σ-field F_{T-}. Let T be a predictable time and (T_n) a sequence announcing T. The σ-field F_{T-} = ∨_n F_{T_n} is independent of the choice of the announcing sequence. It is the σ-field of the events occurring strictly before the time T. If S is a stopping time and S < T a.e., then F_S ⊂ F_{T-}. If A ∈ F_{T-}, the stopping time T_A is also a predictable time.

1.6. Graph of a stopping time. If T is a stopping time, its graph [T] is the subset {(t,ω); T(ω) = t} of R_+ × Ω.

Accessible time. An accessible time is a stopping time T such that its graph [T] is contained in a countable union of graphs of predictable times. So there exists a partition (A_n) of {T < +∞} such that each time T_{A_n} can be announced by a sequence (S_{n,m})_{m≥0}; but the sequence depends on the set A_n. The time T is predictable if one can make the sequence (S_{n,m}) independent of n.

1.7. Totally inaccessible time. A totally inaccessible time T is a stopping time such that for every predictable time S, we have P(T = S < +∞) = 0. In other words, one just cannot announce a totally inaccessible time except on sets of measure zero.

1.8. Decomposition of stopping times. Let T be a stopping time; there exists a set A ∈ F_T, A ⊂ {T < +∞} (unique in the sense that the difference of two such sets is of measure zero), such that T_A is an accessible time, and T_{A^c} is a totally inaccessible time.
STOCHASTIC PROCESSES
A stochastic process X is a real valued function (t,ω) → X_t(ω) defined on R_+ × Ω.

1.9. A stochastic process Y is a version of a process X if for every t ≥ 0, P(Y_t ≠ X_t) = 0. If one looks at the values of two such processes X and Y at a countable number of times (which is the best one can do in reality) one can't tell them apart.

1.10. Two processes X and Y are indistinguishable if P(ω; ∃ t such that X_t(ω) ≠ Y_t(ω)) = 0. This is a much stronger property than the preceding one. In the following chapters we shall state theorems of the kind: "there exists a unique process such that ...". It will mean: two processes having this property are indistinguishable.

1.11. A process X is measurable if the application (t,ω) → X_t(ω) is B(R_+) × F measurable (B(R_+) is the borelian σ-field on R_+).

1.12. A process X is adapted if for every t ≥ 0 the application ω → X_t(ω) is F_t-measurable.

1.13. A process X is progressively measurable if for each t ≥ 0 the restriction of the application (s,ω) → X_s(ω) to the set [0,t] × Ω is B([0,t]) × F_t measurable. Such a process is an adapted process.

Why is the notion of progressive measurability of any interest?
a) If X is a stochastic process and T is a stopping time, the r.v. denoted by X_T is X_T(ω) = X_{T(ω)}(ω); this r.v. is defined only on {T < +∞} (unless X_∞ is defined, in which case we take X_T = X_∞ on {T = +∞}). Assume that X is an adapted process; is then X_T I_{{T<+∞}} an F_T-measurable function? No, in general; but if X is progressively measurable, the r.v. X_T I_{{T<+∞}} is F_T-measurable.
b) Let A be a progressively measurable set (i.e. I_A is a progressively measurable process); then the r.v. D_A(ω) = inf(t; (t,ω) ∈ A) is a stopping time (here we adopt the convention inf ∅ = +∞). This last result is far from being trivial.
1.14. Càdlàg processes. A process X is càdlàg if each of its trajectories t → X_t(ω) is a right continuous function with finite left limits. For such a process we will denote by X_{t-} the left limit at time t, and by ΔX_t = X_t − X_{t-} the jump at time t. The jump size will be |ΔX_t|.

Any càdlàg adapted process is progressively measurable, and two càdlàg versions of the same process are indistinguishable.

Take a càdlàg process X, and define the r.v. T_{k,n}, k ≥ 1, n ≥ 1: T_{k,n} is the time of the nth jump of size |ΔX_t| ≥ 1/k. The processes X_t and X_{t-} are progressively measurable, therefore the T_{k,n} are stopping times. Each of the trajectories t → X_t(ω) is a right continuous function with left limits; in a compact interval it has only a finite number of jumps of size bigger than a given ε > 0, and the set U = {(t,ω); ΔX_t(ω) ≠ 0} is exactly the countable union of the graphs [T_{k,n}].

Each stopping time can be split into its totally inaccessible part and its accessible part. Each graph of an accessible time can be covered by a countable union of graphs of predictable times. And in the end we can find a countable number of totally inaccessible times T_n, and a countable number of predictable times S_n, whose graphs cover U. Moreover we can always assume that P(T_n = T_m < +∞) = 0 and P(S_n = S_m < +∞) = 0 for n ≠ m. So we can cover the jump times of a càdlàg adapted process by a countable number of totally inaccessible, or predictable, times. Note that at the totally inaccessible times T_n we have ΔX_{T_n} ≠ 0 on {T_n < +∞}, but at the predictable times S_n, ΔX_{S_n} can be zero on part of {S_n < +∞}. This is what comes from using predictable times instead of accessible times.

1.15. Predictable σ-field. The predictable σ-field is the σ-field on R_+ × Ω generated by the left continuous adapted processes. This σ-field will be essential in stochastic integration (see chapters 3 and 5). A subset of R_+ × Ω is predictable if it belongs to the predictable σ-field. A process X is predictable if the function (t,ω) → X_t(ω) is measurable with respect to the predictable σ-field. Any predictable process is progressively measurable.
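The classification of jump times in 1.14 is easy to visualize on a pure-jump path. The sketch below (illustrative code with arbitrary example data, not from the text) lists the jumps of a piecewise-constant càdlàg path and groups them as T_{k,n}, the time of the nth jump of size at least 1/k; for each k the list is finite on a compact interval, and the union over k recovers the whole jump set U.

```python
# Illustrative sketch: jumps of a piecewise-constant cadlag path,
# grouped as T_{k,n} = time of the n-th jump of size >= 1/k.
# The path data below is an arbitrary example.
jumps = [(0.5, 0.3), (1.2, -1.0), (2.0, 0.05), (3.1, 0.6)]  # (time, jump size)

def jump_times(jumps, k):
    """Times of the jumps of size >= 1/k, in order: T_{k,1}, T_{k,2}, ..."""
    return [t for t, dx in jumps if abs(dx) >= 1.0 / k]

print(jump_times(jumps, 1))    # [1.2]        only the jump of size >= 1
print(jump_times(jumps, 2))    # [1.2, 3.1]   jumps of size >= 1/2
print(jump_times(jumps, 100))  # all four jump times

# The union over k of these (finitely many per level) times is the jump set U.
all_times = sorted({t for k in (1, 2, 10, 100) for t in jump_times(jumps, k)})
assert all_times == [0.5, 1.2, 2.0, 3.1]
```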
It is handy to have some other systems of generators for the predictable σ-field. Here are two:
a) it is generated by the processes of the form
φ(ω) I_{{0}}(t) + Σ_{i=0}^{n-1} φ_i(ω) I_{]t_i, t_{i+1}]}(t),
where 0 ≤ t_0 < t_1 < ... < t_n < +∞, φ is a bounded F_0-measurable r.v., and the r.v. φ_i are bounded and F_{t_i}-measurable;
b) it is also generated by the processes of the form
φ(ω) I_{{0}}(t) + Σ_{i=0}^{n-1} φ_i(ω) I_{]]T_i, T_{i+1}]]}(t,ω),
where the (T_i) form a nondecreasing sequence of stopping times, φ is a bounded F_0-measurable r.v., the φ_i are bounded F_{T_i}-measurable r.v., and ]]T_i, T_{i+1}]] is the stochastic interval {(t,ω); T_i(ω) < t ≤ T_{i+1}(ω)}.

If X is a predictable process and T is a predictable time, the r.v. X_T I_{{T<+∞}} is F_{T-}-measurable (it is obvious for left continuous processes and extends easily to predictable processes).
1.16. Predictable times and predictable σ-fields. A r.v. T is a predictable time if and only if its graph [T] is a predictable set (this is another non trivial result).

If A is a predictable set, the r.v. D_A(ω) = inf(t; (t,ω) ∈ A) is a stopping time (1.13 and 1.15). If the graph [D_A] is included in the set A, [D_A] is a predictable set and D_A is a predictable time.
1.17. Càdlàg predictable processes. In particular, if (X_t) is a càdlàg predictable process, the time T_{k,n} of the nth jump of size |ΔX_t| ∈ [1/k, +∞[ is a predictable time and U = {(t,ω); ΔX_t ≠ 0} = ∪_{n,k≥1} [T_{n,k}]. Furthermore the r.v. ΔX_{T_{n,k}} are F_{T_{n,k}-}-measurable (1.15).
1.18. Increasing processes and processes with finite variation. A process A is an increasing process if
a) A is adapted and càdlàg
b) A_0 = 0
c) A_s ≤ A_t for s ≤ t.
A process B is a process with finite variation if
a) B is adapted and càdlàg
b) B_0 = 0
c) for each ω, the trajectory t → B_t(ω) has finite variation on compact intervals.
One can show that a process is a process with finite variation if and only if it is the difference of two increasing processes.

If B is a process with finite variation, the Stieltjes integrals ∫_0^t f(s) dB_s(ω) exist for any bounded (or non negative) borelian function f(s). The symbol ∫ |f(s)| |dB_s| will denote the integral of |f(s)| with respect to the variation of B_s; in particular ∫_0^t |dB_s(ω)| is the variation of B_s(ω) on [0,t]. An increasing process A is integrable if E[A_∞] < +∞; and the process B has integrable variation if E[∫_0^∞ |dB_s|] < +∞.

If B is a process with finite variation, the sums Σ_{s≤t} |ΔB_s| ≤ ∫_0^t |dB_s| are finite. Using 1.14 we can write B in the form
B_t = B^c_t + Σ_n ΔB_{T_n} I_{{t ≥ T_n}},
where B^c is a continuous process with finite variation (if B is an increasing process, so is B^c), and (T_n) is a sequence of stopping times.

1.19. Predictable processes with finite variation. Suppose now that B is a predictable process with finite variation; the stopping times T_n can be taken predictable, and the ΔB_{T_n} are F_{T_n -}-measurable (see 1.17). Any predictable process with finite variation is therefore of the form
B_t = B^c_t + Σ_n φ_n I_{{t ≥ T_n}},
where B^c is a continuous process with finite variation, the T_n are predictable times, the r.v. φ_n are F_{T_n -}-measurable, and Σ_n φ_n I_{{t ≥ T_n}} exists for any t. The reader can check that conversely any process of this form is a predictable process with finite variation.
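For a pure-jump path the two facts of 1.18 — a finite-variation process is the difference of two increasing processes, and Stieltjes integration against it is a sum over the jumps — can be made completely concrete. The sketch below is illustrative (the path data and helper names are not from the text).

```python
# Illustrative sketch: a pure-jump finite-variation path B, its Jordan
# decomposition B = B_plus - B_minus into increasing processes, and a
# Stieltjes integral against dB.  The data is an arbitrary example.
jumps = [(0.5, 2.0), (1.0, -1.5), (2.5, 0.5)]  # (time, jump of B)

def B(t):
    return sum(dx for s, dx in jumps if s <= t)

def variation(t):
    """The variation of B on [0, t]: integral_0^t |dB_s|."""
    return sum(abs(dx) for s, dx in jumps if s <= t)

def stieltjes(f, t):
    """integral_0^t f(s) dB_s for the pure-jump path B."""
    return sum(f(s) * dx for s, dx in jumps if s <= t)

B_plus = lambda t: sum(dx for s, dx in jumps if s <= t and dx > 0)
B_minus = lambda t: sum(-dx for s, dx in jumps if s <= t and dx < 0)

# B = B_plus - B_minus with both increasing, and variation = B_plus + B_minus.
for t in (0.0, 0.7, 1.5, 3.0):
    assert B(t) == B_plus(t) - B_minus(t)
    assert variation(t) == B_plus(t) + B_minus(t)

print(stieltjes(lambda s: s, 3.0))  # 0.5*2.0 + 1.0*(-1.5) + 2.5*0.5 = 0.75
```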
CHAPTER 2:
MARTINGALES, LOCAL MARTINGALES AND SEMIMARTINGALES
We shall just give here the results necessary for Theorem 3.1 of chapter 3 which shows why semimartingales are important. The machinery on martingales and local martingales needed to construct the stochastic integrals will be seen in chapter 4.
MARTINGALE, SUBMARTINGALE AND SUPERMARTINGALE

This section is just a summary of the classical results in martingale theory. The reader who is not familiar with the subject should consult [6] or [12].
2.1. Martingales. A martingale is an adapted process M such that
a) E[|M_t|] < +∞ for every t ≥ 0
b) E[M_t | F_s] = M_s a.e. for every t ≥ s.

2.2. Sub and supermartingales. A super (resp. sub) martingale is an adapted process M such that
a) E[|M_t|] < +∞ for every t ≥ 0
b) E[M_t | F_s] ≤ M_s (resp. ≥ M_s) a.e. for every t ≥ s.
If M_t is the capital of a gambler at time t, the notion of martingale (resp. sub, resp. super) corresponds to the notion of fair (resp. favorable, resp. unfavorable) game.

2.3. Càdlàg versions of martingales. Any martingale M has a càdlàg version; therefore the term "martingale" will from now on mean "càdlàg martingale".
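Definition 2.1 can be verified exactly in a discrete sketch (illustrative code; the ±1 walk and the horizon are arbitrary choices): conditioning on F_s is just averaging over all equally likely continuations of each length-s prefix, and with exact rational arithmetic the martingale identity holds on every atom.

```python
from itertools import product
from fractions import Fraction

# Exact check of 2.1 for the +/-1 random walk M_t: E[M_t | F_s] = M_s.
# All 2^horizon paths are equally likely; conditioning on F_s means
# averaging over the continuations of a fixed length-s prefix.
horizon = 4
paths = list(product([-1, 1], repeat=horizon))

def M(path, t):
    return sum(path[:t])

for s in range(horizon):
    for prefix in product([-1, 1], repeat=s):
        conts = [p for p in paths if p[:s] == prefix]
        cond_mean = Fraction(sum(M(p, horizon) for p in conts), len(conts))
        assert cond_mean == M(prefix, s)  # martingale property, exactly

print("E[M_t | F_s] = M_s on every atom of F_s, for all s < t =", horizon)
```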
2.4. If X is a supermartingale (not necessarily càdlàg), for almost all ω, the two limits
X_{t+} = lim_{s↓t, s>t, s∈D} X_s and X_{t-} = lim_{s↑t, s<t, s∈D} X_s
exist for each t ∈ R_+ (the limits are taken over the set D of the rational numbers). The process (X_{t+}) is then indistinguishable from a càdlàg supermartingale. The supermartingale (X_t) has a right continuous version if and only if the function t → E[X_t] is right continuous. We shall always, except when otherwise specified, consider càdlàg supermartingales, and call them supermartingales for short.
2.5. A martingale M is said to be uniformly integrable if the family of r.v. (M_t)_{t≥0} is uniformly integrable. For any uniformly integrable martingale M, the limit M_∞ = lim_{t→+∞} M_t exists a.e., and M_t = E[M_∞ | F_t]. This result extends to a stopping time T: we then have M_T = E[M_∞ | F_T]. Apply it to a sequence S_n announcing a predictable time S. We get M_{S_n} = E[M_S | F_{S_n}] for any n; and by taking limits on both sides, E[M_S | F_{S-}] = M_{S-}. So if M is a uniformly integrable martingale and S a predictable time, the jump at time S verifies E[ΔM_S | F_{S-}] = 0.
2.6. Let X be a non negative supermartingale, and take X_∞ = 0; then (X_t)_{0≤t≤+∞} is a supermartingale, and for any two stopping times S and T such that S ≤ T we have E[X_T | F_S] ≤ X_S, and X_S, X_T ∈ L^1.

2.7. Let M be a non negative martingale, and let T = inf(t; M_t or M_{t-} = 0); then M = 0 a.e. on [T, +∞[. In particular if M_∞ = lim_{t→+∞} M_t exists and if M_∞ > 0 a.e., we have T = +∞ a.e., that is P({ω; ∃ t such that M_t(ω) or M_{t-}(ω) = 0}) = 0.

2.8. Jensen's inequality. If M is a martingale and f(x) a convex function, the process f(M) is a submartingale provided E[|f(M_t)|] < +∞ for every t ≥ 0.

2.9. Doob's decomposition theorem. If M is a uniformly integrable martingale, and A is an integrable increasing process, the process X = M − A is a supermartingale satisfying the following strong integrability condition: the family of r.v. {X_T I_{{T<+∞}}; T stopping time} is a uniformly integrable family. We shall call those supermartingales, supermartingales of class (D). Doob's decomposition theorem is just the converse statement: any supermartingale X of class (D) is of the form X = M − A, where M is a uniformly integrable martingale, and A is a predictable, integrable, increasing process. And this decomposition is unique. See [12], [4] and [14] for three different proofs of this theorem.
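Doob's decomposition has a transparent discrete-time analogue: the increasing part picks up the predictable drift, A_n − A_{n−1} = −E[X_n − X_{n−1} | F_{n−1}], and M = X + A is then a martingale. The sketch below (illustrative code; the choice X = −S² for a ±1 walk S is arbitrary) computes both exactly by path enumeration.

```python
from itertools import product
from fractions import Fraction

# Discrete Doob decomposition sketch for the supermartingale X_n = -S_n^2.
N = 3
paths = list(product([-1, 1], repeat=N))

def S(p, n):
    return sum(p[:n])

def X(p, n):
    return -S(p, n) ** 2

def cond_exp(f, n, prefix):
    """Exact E[f(., n) | F_{len(prefix)}] on the atom given by `prefix`."""
    conts = [p for p in paths if p[:len(prefix)] == prefix]
    return Fraction(sum(f(p, n) for p in conts), len(conts))

def A(p, n):
    """Predictable increasing part: increments -E[X_k - X_{k-1} | F_{k-1}]."""
    return sum(X(p, k - 1) - cond_exp(X, k, p[:k - 1]) for k in range(1, n + 1))

def M(p, n):
    return X(p, n) + A(p, n)  # X = M - A

# Here the drift of -S_n^2 is identically 1, so A_n = n ...
assert all(A(p, n) == n for p in paths for n in range(N + 1))
# ... and M = X + A is a martingale: E[M_n | F_{n-1}] = M_{n-1}, exactly.
for n in range(1, N + 1):
    for prefix in product([-1, 1], repeat=n - 1):
        assert cond_exp(M, n, prefix) == M(prefix, n - 1)
print("X = M - A with A predictable increasing and M a martingale")
```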
2.10. Corollaries. 1) Let X be a supermartingale of class (D), and B be the predictable increasing process in Doob's decomposition. The process B jumps only at predictable times; at such a predictable time T, ΔB_T is F_{T-}-measurable (see 1.19) and we have, by (2.5), E[ΔM_T | F_{T-}] = 0 if M is the uniformly integrable martingale M = X + B. And the jumps of B are easy to compute.

2) If A is an integrable increasing process the process −A is a supermartingale of class (D); therefore there exists a unique integrable predictable increasing process B such that B − A is a uniformly integrable martingale. The process B is called the compensator of A.

This generalizes to processes with integrable variation. If A is such a process, there exists a unique predictable process B with integrable variation such that B − A is a uniformly integrable martingale. The process B is again called the compensator of A. From part 1 of this corollary we get:
a) if T is a totally inaccessible time, and φ an F_T-measurable function in L^1, the compensator of A_t = φ I_{{t ≥ T}} is a continuous process with integrable variation;
b) if T is a predictable time, and φ is an F_T-measurable function in L^1, the compensator of A_t = φ I_{{t ≥ T}} is the process B_t = E[φ | F_{T-}] I_{{t ≥ T}}.
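The compensator of corollary 2.10.2 has a classical concrete instance: for a Poisson process N with rate λ, the compensator of N is the deterministic process λt, so N_t − λt is a martingale and in particular has mean zero. A Monte Carlo sketch (illustrative code; the rate, horizon and sample size are arbitrary choices):

```python
import random

# Monte Carlo sketch: the compensator of a Poisson process N (rate lam)
# is lam*t, so the compensated process N_t - lam*t is centered at 0.
random.seed(42)
lam, t, n_sims = 2.0, 3.0, 20000

def poisson_count(lam, t):
    """Number of Exp(lam) inter-arrival times fitting in [0, t]."""
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return n
        n += 1

mean = sum(poisson_count(lam, t) - lam * t for _ in range(n_sims)) / n_sims
print(round(mean, 3))   # close to 0: the compensated process is centered
assert abs(mean) < 0.2  # well within Monte Carlo error of the true mean 0
```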
LOCAL MARTINGALES AND PROCESSES WITH LOCALLY INTEGRABLE VARIATION

Let X be a stochastic process, and T a stopping time. The symbol X^T will denote the process X stopped at time T: X^T_t(ω) = X_{t∧T}(ω).
A process M is a martingale if and only if, for any constant time n, the process M^n is a uniformly integrable martingale. And it is natural to let the constant times n be stopping times T_n:

2.11. Definition. A localizing sequence is a non decreasing sequence (T_n) of stopping times such that lim_{n→+∞} T_n = +∞ a.e.

2.12. Definition. A process M is a local martingale if
a) M_0 = 0
b) there exists a localizing sequence (T_n) such that each process M^{T_n} is a uniformly integrable martingale.
Such a sequence (T_n) will be called a fundamental sequence for the local martingale M.

Remarks. 1) Local martingales are necessarily càdlàg processes, as we decided that here "martingale" means "càdlàg martingale".
2) The processes defined above should really be called "local martingales vanishing at time 0"; we shall not use here the general concept of local martingales. The interested reader can consult [3].

2.13. Definition. A stopping time T reduces a local martingale M if M^T is a uniformly integrable martingale.
2.14. Theorem. Let M be a local martingale, then
1) a stopping time S reduces M if and only if the process M^S is of class (D) (i.e. the family of r.v. {M^S_T I_{{T<+∞}}; T stopping time} is uniformly integrable);
2) if T is a stopping time reducing M, if S is a stopping time and if S ≤ T, then S reduces M;
3) if S and T are two stopping times reducing M, then S ∨ T reduces M.
Proof. Parts 1 and 2 are trivial. Part 3 comes from the fact that M_{S∨T} = M_S + M_T − M_{S∧T}.
2.15. Theorem. If a process M is locally a local martingale, then it is a local martingale. So there is no way one can get more general processes by localizing once more.
Proof. There exists a localizing sequence (T_n) such that each process M^{T_n} is a local martingale. Let H = {T; T stopping time, M^T is a uniformly integrable martingale}, and take R = ess sup_{T∈H} T. There exists a sequence (S_n) of elements of H which converges a.e. to R; using part 3 of 2.14 we can make this sequence non decreasing. The r.v. R has to be bigger than or equal to any of the T_n (a.e.), so R = +∞, and (S_n) is a fundamental sequence for the process M.
2.16. Definition. A process B has locally integrable variation if there exists a localizing sequence (T_n) such that each process B^{T_n} has integrable variation.

2.17. Theorem. Let B be a process with locally integrable variation; there exists a unique predictable process A with locally integrable variation such that B − A is a local martingale. A is called the compensator of B.
Proof. Easy consequence of the existence and uniqueness of the compensator of a process with integrable variation.
2.18. Remark. It is important to remark the following fact. If B is a predictable process with finite variation, then the variation of B is locally bounded: define the stopping times T_n = inf(t; ∫_0^t |dB_s| ≥ n) ∧ n. The variation of B on [0,T_n[ is bounded by n, but we know nothing on the jump of B at time T_n. Now each time T_n is predictable and can be announced by a sequence (S_{n,m})_m; on [0,S_{n,m}] the variation of B is bounded by n. Take the stopping times R_k = sup_{n,m≤k} S_{n,m}. The sequence (R_k) is a localizing sequence, and the variation of B is bounded on each interval [0,R_k].

Let A be a process with finite variation. If there exists a predictable process B with finite variation such that A − B is a local martingale, then B has locally bounded variation, and A itself has locally integrable variation. We can rewrite theorem 2.17 in a stronger form:

2.19. Theorem. A process A with finite variation has a compensator if and only if its variation is locally integrable.

Here is now an easy way to verify that the variation is locally integrable.
2.20. Lemma. Let A be a process with finite variation; we assume that there exists a localizing sequence (T_n) such that for each n, sup_{t≤T_n} |A_t| = Y_n ∈ L^1. Then the variation of A is locally integrable.
Proof. Take S_n = T_n ∧ inf(t; ∫_0^t |dA_s| ≥ n). The sequence (S_n) is localizing, the variation of A on [0,S_n[ is bounded by n, and ∫_0^{S_n} |dA_s| ≤ n + |ΔA_{S_n}| ≤ n + 2Y_n ∈ L^1.
We shall now state the fundamental lemma for local martingales.

2.21. Fundamental lemma. Let M be a local martingale, then
1) the increasing process M*_t = sup_{s≤t} |M_s| is locally integrable;
2) the local martingale M can be written in the form M = U + V, where U is a local martingale whose jumps are bounded by 1 in size, and V is both a local martingale and a process with finite variation. (The bound 1 for the jump size could have been replaced by another strictly positive constant.) In particular there exists a localizing sequence (T_n) such that each U^{T_n} is a bounded process.

Proof. 1) Let R_n be a fundamental sequence for M. We can always assume that the R_n are finite (otherwise use the fundamental sequence R_n ∧ n); we consider the stopping times S_n = R_n ∧ inf(t; |M_t| ≥ n). The martingales M^{R_n} are uniformly integrable, and S_n ≤ R_n, therefore the r.v. M_{S_n} is integrable (2.5). Furthermore on [0,S_n[ we have |M_t| ≤ n. As the sequence (S_n) is a localizing sequence, the increasing process M* is locally integrable.

2) Let A_t = Σ_{s≤t} ΔM_s I_{{|ΔM_s| ≥ 1/2}}. This sum is, for each ω, a finite sum. Take the sequence (S_n) constructed above and consider the stopping times T_n = S_n ∧ inf(t; ∫_0^t |dA_s| ≥ n). The sequence (T_n) is a localizing sequence, and ∫_0^{T_n} |dA_s| ≤ n + |ΔA_{T_n}| ≤ n + 2 M*_{S_n} ∈ L^1. There exists therefore a compensator B for A. Take V = A − B; V is both a local martingale and a process with finite variation.

The jumps of B occur only at predictable times. Stop all the processes at the time S_n, and remember that M^{S_n} is a uniformly integrable martingale. If T is a predictable time, we get E[ΔM_T | F_{T-}] = 0, hence ΔB_T = E[ΔA_T | F_{T-}] = E[ΔA_T − ΔM_T | F_{T-}], and |ΔB_T| ≤ 1/2. And the jumps of U = M − V verify |ΔU_t| = |ΔM_t − ΔA_t + ΔB_t| ≤ 1. It is now easy to see that the sequence (R_n), R_n = inf(t; |U_t| ≥ n), is a localizing sequence and that U^{R_n} is bounded by n + 1.
SEMIMARTINGALES
2.22. Definition. A process X is a semimartingale if it is of the form X = X_0 + M + A, where X_0 is an F_0-measurable r.v., M is a local martingale and A is a process with finite variation (remember that by definition both processes M and A vanish at t = 0). This definition contains no local integrability condition. It should not, if we want the semimartingales to remain semimartingales when the probability P is replaced by an equivalent probability Q.

Here again one cannot get more general processes by localizing the notion of semimartingale:
2.23. Theorem. Any process which is locally a semimartingale is a semimartingale.
Proof. We shall need the following useful lemma.

2.24. Lemma. Let X be a semimartingale; assume that the size of the jumps of X is bounded by a (a > 0). Then one can write X in a unique way as X = X_0 + M + A, where X_0 is an F_0-measurable r.v., M is a local martingale, and A is a predictable process with finite variation (in fact, locally bounded variation by 2.18).
Proof. The semimartingale is of the form X = X_0 + N + B where N is a local martingale and B a process with finite variation. The r.v. X_0 is uniquely determined. The jumps of the process B verify |ΔB_t| ≤ |ΔX_t| + |ΔN_t| ≤ a + |ΔN_t|. The increasing process Y_t = sup_{s≤t} |ΔB_s| is locally integrable and, by 2.20, the variation of B is locally integrable. Let A be the compensator of B, and let M = N + B − A; M is a local martingale, A is a predictable process with finite variation, and X = X_0 + M + A.

If X = X_0 + M + A = X_0 + M' + A' are two such decompositions of the semimartingale X, then M − M' = A' − A is a local martingale, so A' is the compensator of A. But A, being predictable, is its own compensator, and A' = A.

Proof of 2.23. The process X is locally a semimartingale. That is, there exists a localizing sequence of stopping times (T_n) such that each X^{T_n} is a semimartingale. As X is càdlàg the process Y_t = Σ_{s≤t} ΔX_s I_{{|ΔX_s| > 1}} has finite variation; for each n the process X^{T_n} − Y^{T_n} is a semimartingale with jump size smaller than 1. It can therefore be written in a unique way as X^{T_n} − Y^{T_n} = X_0 + M^n + A^n, where M^n is a local martingale and A^n is a predictable process with finite variation. The uniqueness of the A^n gives A^n = A^{n+1} and M^n = M^{n+1} on [0,T_n]. One can patch the A^n together, and the M^n together, to get the processes A = Σ_n A^n I_{]]T_{n-1},T_n]]} and M = Σ_n M^n I_{]]T_{n-1},T_n]]}. The process A is predictable and has finite variation, the process M is locally a local martingale, therefore it is a local martingale (2.15), and X = X_0 + M + A + Y is a semimartingale.
The following lemma is important.

2.25. Lemma. Let X be a càdlàg adapted process; assume there exists a sequence of stopping times T_n and a sequence of semimartingales Y^n such that
1) lim_{n→+∞} T_n = +∞ a.e.
2) X = Y^n on [0,T_n[.
Then X is a semimartingale.
Proof. For each n, X^{T_n} = (Y^n)^{T_n} − Y^n_{T_n} I_{{t ≥ T_n}} + X_{T_n} I_{{t ≥ T_n}}, so the process X^{T_n} is a semimartingale. If the sequence T_n is non decreasing, 2.23 says that X is a semimartingale. Otherwise, make the sequence non decreasing by remarking that if S and T are two stopping times, and if X^S and X^T are both semimartingales, then X^{S∧T} and X^{S∨T} = X^S + X^T − X^{S∧T} are both semimartingales.
2.26. Examples of semimartingales.
1) Any supermartingale (therefore any submartingale) is a semimartingale: let X be a supermartingale; by 2.23 we just have to show that each X^n is a semimartingale. But X^n_t = E[X_n | F_t] + (X^n_t − E[X_n | F_t]), and we just have to work with the non negative supermartingale Z_t = X^n_t − E[X_n | F_t]. For any stopping time S we have E[|Z_S|] < +∞ (2.6); consider the process Z*_t = sup_{s≤t} |Z_s| and the stopping times T_k = inf(t; |Z_t| ≥ k): each stopped process Z^{T_k} is of class (D), and Doob's decomposition theorem 2.9, applied locally, gives that Z is of the form Z = M − A, where M is a local martingale and A is a predictable process with finite variation. Therefore Z is a semimartingale, and so is X.

2) Let X be a càdlàg process with independent increments. As X is càdlàg the process Z_t = Σ_{s≤t} ΔX_s I_{{|ΔX_s| ≥ 1}} has finite variation, and the process Y = X − Z is a process with independent increments whose jumps are all bounded in size by 1. Therefore E[Y_t] exists, and Y_t − E[Y_t] is a martingale. We see then that the process X is a semimartingale if and only if the function t → E[Y_t] has finite variation.

We get now to the interesting result on semimartingales.
2.27. Theorem. Let P and Q be two equivalent probabilities on (Ω,F). Then the semimartingales for P are the semimartingales for Q.

We know that this result is false if we replace semimartingales by local martingales.

Proof. It is enough to show that any P-local martingale is a Q-semimartingale. Consider the Radon-Nikodym derivative M_∞ = dQ/dP on (Ω,F), and let M be a càdlàg version of the P-martingale E_P[M_∞ | F_t]. As P(M_∞ = 0) = 0, the martingale is P (and Q) a.e. strictly positive. It is easy to check that X is a P-local martingale if and only if (1/M)X is a Q-local martingale. Write X as X = M · (1/M)X. The process (1/M)X is a Q-local martingale, therefore a Q-semimartingale. The process 1/M is a Q-martingale (as 1 is a P-martingale). All we should know now is:
1) if N is a strictly positive Q-local martingale, then 1/N is a Q-semimartingale (lemma 2.28);
2) the product of two Q-semimartingales is a Q-semimartingale (lemma 2.29).
2.28. Lemma. If N is a strictly positive Q-local martingale, then 1/N is a Q-semimartingale.
Proof. If N ≥ a > 0, everything is trivial: 1/N_t ≤ 1/a, so E[1/N_t] < +∞ for every t; as the function 1/x is convex, Jensen's inequality (2.8) gives that 1/N is a Q-submartingale, therefore a Q-semimartingale. Otherwise, consider the functions f_n(x) defined as follows: f_n(x) is the function 1/x on [1/n, +∞[; on [0,1/n] the graph of f_n is the tangent to the curve y = 1/x at the point 1/n. The functions f_n are convex, and f_n(N_t) ∈ L^1. So the processes Y^n = f_n(N) are Q-submartingales. Consider the stopping times T_n = inf(t; N_t ≤ 1/n). The sequence (T_n) is a localizing sequence; the processes Y^n and 1/N coincide on [0,T_n[, so 1/N itself is a Q-semimartingale (lemma 2.25).
Lemma. se1-
is a scrr.imag&zle.
Proof. Et is enough to show that the square of a semimartingale is martingale (use (a
+ b12 =
a2
+
can be written in the form X = M martingale, and V
+ b 2). Let + v where
2ab
3 senti-
X
be a scninnrtinga!e,
is
3
X
locally bounded
is a process with finite variation (1cm2 2-31). i
Let
(Tn)
be a localizing sequence such that for each n, I is a bounded Tn 1s bounded on J c , T ~ ~ .k.'r just have martingale, and the variation of V t o show that each process (M + VTn) 2 is a semimartingsleThe first term2)n:( (2.8).
The term (V
O ( t 0 < tl...
< tn
)
= t
2
is a submartingale by Jenscn*. inequality
is a process with finite variation
is a subdivision of
[O,tl, we haxve
3 s . if
.T T, That le~vesthe term M ?J
.
Tn The process M
of the two bounded non negksive martingales
YT"
process
is the difference
+
Izt]
and E[M~ ; the n n is the difference of two increasing processes. So we just
have to look at the case where M
E[%
is a non negative bounded martingale,
YTn
Tn is bounded. The process is an increasing process such that VT nT n Tn is a bounded increasing process, and M is a Wt = Vt AV I T, I t > ~ ~ l T. submartingale. As for each n = M "Wn on IO,T f , the product MI n and
-
nWn
znT"VTn
is a semimartingale (lemma 2.25). Now one more definition and one more theorem and we will be finished with this long chapter. 2.30.
Definitions. - 1) A subdivision of [0,+∞] is any finite sequence τ = (t_0, t_1, ..., t_n) such that 0 ≤ t_0 < t_1 < ... < t_n ≤ +∞.

2) Let X be a process defined on [0,+∞]; we shall denote by var(X) the quantity

var(X) = sup_τ Σ_i E[ |E[X_{t_{i+1}} − X_{t_i} | F_{t_i}]| ],

the sup being taken over all the subdivisions τ of [0,+∞].

3) Let X be a right continuous, adapted process defined on [0,+∞]. The process X is a quasimartingale on [0,+∞] if E[|X_t|] < +∞ for all t ∈ [0,+∞] and if var(X) < +∞.
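A discrete sketch of this definition (the drifting random walk and the function name below are our own illustration, not the text's): along the subdivision 0 < 1 < ... < n, the quantity inside the sup can be computed exactly for X_k = S_k + d·k, S a simple ±1 random walk, by enumerating the atoms of each F_i. A martingale contributes 0, and a drift of d per step contributes |d| per step.

```python
from itertools import product

def mean_variation(n_steps, drift):
    """Exact value of sum_i E[ |E[X_{i+1} - X_i | F_i]| ] for
    X_k = S_k + drift*k, S a simple +/-1 random walk, obtained by
    enumerating the atoms (coin prefixes) of each F_i."""
    total = 0.0
    for i in range(n_steps):
        for prefix in product([-1, 1], repeat=i):
            # conditional expectation of the increment on this atom:
            # average of (coin + drift) over the next coin toss
            cond = sum(c + drift for c in (-1, 1)) / 2.0
            total += abs(cond) * 0.5 ** i   # P(atom) = 2^(-i)
    return total

assert mean_variation(5, 0.0) == 0.0               # a martingale: var(X) = 0
assert abs(mean_variation(5, 0.3) - 1.5) < 1e-12   # |drift| per step
```

Refining the subdivision changes nothing here, since the drift is the only non-martingale part.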
2.31. Theorem. - Let (X_t)_{t≥0} be an adapted, right continuous process. We extend X by taking X_∞ = 0. The process X is a quasimartingale on [0,+∞] if and only if X is of the form X = Y − Z, where Y and Z are two càdlàg non negative supermartingales.

Proof. The only non trivial part is the necessary condition. For s ≥ 0 let I(s) be the set of all the subdivisions of [s,+∞]. For any τ = (t_0, t_1, ..., t_n) ∈ I(s) consider

Y^τ_s = Σ_i E[ (E[X_{t_i} − X_{t_{i+1}} | F_{t_i}])^+ | F_s ],   Z^τ_s = Σ_i E[ (E[X_{t_i} − X_{t_{i+1}} | F_{t_i}])^- | F_s ].

It is easy to see that Y^σ_s ≤ Y^τ_s if the subdivision τ is finer than σ. The quantities E[Y^τ_s] are bounded by var(X), so Y^0_s = lim_τ Y^τ_s exists (the limit is taken over the directed set I(s) with the partial ordering σ < τ if τ is finer than σ). The r.v. Y^0_s can be computed by using only the subdivisions τ such that t_0 = s and t_n = +∞; for such a subdivision we have Y^τ_s − Z^τ_s = X_s, so Y^0_s − Z^0_s = X_s. It is easy to see that for s < t we have Y^0_s ≥ E[Y^0_t | F_s] and Z^0_s ≥ E[Z^0_t | F_s] a.e.: the processes Y^0 and Z^0 are supermartingales, which might not be càdlàg. But (see 2.4) for almost all ω the limits

Y_t = lim_{s↓t, s>t} Y^0_s   and   Z_t = lim_{s↓t, s>t} Z^0_s

exist for all t; the processes Y and Z are indistinguishable from càdlàg, non negative supermartingales; and, as X is right continuous, the equality X_s = Y^0_s − Z^0_s becomes X_t = Y_t − Z_t in the limit.

2.32. Corollary. - Let X be a right continuous, adapted process defined on [0,+∞]. If X is a quasimartingale on [0,+∞], then X is of the form X = M + Y − Z, where M is a martingale and Y and Z are two non negative supermartingales on [0,+∞].

Proof. - Let M_t be a càdlàg version of the martingale E[X_∞ | F_t]. The processes M and X − M are quasimartingales on [0,+∞]. As X_∞ = M_∞, the process X − M vanishes at infinity, and we finish using theorem 2.31.
Let X be a right continuous process. It is natural to define the integral ∫_{[0,t]} dX_s as X_t. For a process φ of the form

φ = φ_0 I_{{0}} + φ'_0 I_{]0,t_1]} + φ_1 I_{]t_1,t_2]} + ... + φ_{k-1} I_{]t_{k-1},t_k]},

the integral ∫ φ_s dX_s should be

φ_0 X_0 + φ'_0 (X_{t_1} − X_0) + φ_1 (X_{t_2} − X_{t_1}) + ... + φ_{k-1} (X_{t_k} − X_{t_{k-1}}).

Nowadays the most recent trend is to extend all the processes to R by taking X_t = 0 for t < 0; the process X has then a jump at time t = 0, and ∫ I_{[0,∞[}(s) φ_s dX_s will mean φ_0 X_0 + ∫_{]0,∞[} φ_s dX_s. We will not bother about this possible jump at t = 0; ∫_0^t will always mean ∫_{]0,t]}, and similarly ∫_s^t will mean ∫_{]s,t]}.

We have a filtration (F_t) on (Ω,F), so we will also assume that the process X is adapted; in that case the interesting processes φ are those for which φ_0 is F_0-measurable and the φ_i are F_{t_i}-measurable, so that ∫_0^t φ_s dX_s is itself F_t-measurable.

Let B_t be the set of all processes φ of the form

φ = φ_0 I_{{0}} + φ'_0 I_{]0,t_1]} + ... + φ_{k-1} I_{]t_{k-1},t_k]},   t_k ≤ t,

where φ_0 is an F_0-measurable bounded r.v. and the φ_i are F_{t_i}-measurable, bounded r.v.; the processes in B_t vanish on ]t,+∞[. We put on B_t the topology of the uniform convergence. We denote by L^0 the vector space of the classes of r.v. with the topology of the convergence in probability. The topological vector space L^0 is metrisable and complete; if ||f||_0 = E[|f| ∧ 1], we get a fundamental system of neighborhoods of zero by taking the sets {f; ||f||_0 ≤ ε}.

If J(φ) = ∫ φ_s dX_s is going to have the properties of an integral, we should have at least: if the processes φ^n in B_t converge uniformly to a process φ in B_t, then J(φ^n) converges in probability to J(φ). Now, as we are going to see, this implies that X is a semimartingale. This result is quite recent and has been proved by Dellacherie with the collaboration of Mokobodzki. Later, Letta gave a variant of the proof, which uses less analysis; this is the one we shall give here. To simplify things a little, we shall assume that the process X is càdlàg, despite the fact that Meyer remarked that right continuity of X is enough.

3.1. Theorem. Let X be a càdlàg adapted process. If for any t ≥ 0 the application J : B_t → L^0 is continuous, then X is a semimartingale.

Proof. It is enough to show that, on each [0,t], the process X is a semimartingale. So we are going to transform [0,t] into [0,+∞]: the càdlàg, adapted process X is defined on [0,+∞], and B is the set of the processes of the form φ = φ_0 I_{{0}} + φ'_0 I_{]0,t_1]} + ... + φ_{k-1} I_{]t_{k-1},t_k]}, where 0 < t_1 < ... < t_k ≤ +∞, φ_0 is an F_0-measurable bounded r.v., and the φ_i are F_{t_i}-measurable bounded r.v. On B we consider the topology of the uniform convergence, and our assumption is "J is a continuous function from B into L^0". To show that X is a semimartingale on [0,+∞], we shall show that it is a quasimartingale on [0,+∞] for an equivalent probability Q, and then use 2.32, 2.26 and 2.27.

As X is a càdlàg process on [0,+∞], the r.v. X* = sup_s |X_s| is finite. We can find a probability P' equivalent to the probability P such that ∫ |X*|^2 dP' < +∞. Let B_1 be the unit ball of B for the norm of the uniform convergence, and let A = J(B_1). A is a convex and bounded set of L^0 (this as J is linear and continuous). Furthermore A is in L^1(P'). We shall show (see lemma 3.4) that there exists a probability Q equivalent to P' such that L^1(P') ⊂ L^1(Q) and such that sup_{f∈A} ∫ f dQ < +∞. For the probability Q we then have

var_Q(X) = sup_{f∈A} E_Q[f] < +∞.

The process X is then a Q quasimartingale, and therefore a P semimartingale.
Let us show the existence of the probability Q.
3.2. Lemma. Let A be a non empty convex set of L^1 and K be a non empty convex set of L^∞; we assume that K is compact for the weak topology σ(L^∞,L^1) and that, for a certain number c ∈ R and for any x ∈ A, the sets H_x = {y; y ∈ K, <x,y> ≤ c} are all non empty. Then ∩_{x∈A} H_x ≠ ∅.

Proof. H_x is weakly closed in K, so it is a weakly compact set, and we just have to show that any finite intersection of the sets H_x is non empty. If x_1, x_2, ..., x_n are in A, and if ∩_{m=1}^n H_{x_m} = ∅, the two sets of R^n

L = {(<x_i,y>)_{i=1,...,n}; y ∈ K}   and   M = ]−∞,c]^n

are disjoint. L is a convex, compact set of R^n, and M is a convex closed set of R^n, so there exists a linear form on R^n, l(t_1,t_2,...,t_n) = a_1 t_1 + ... + a_n t_n, such that sup_{u∈M} l(u) < inf_{u∈L} l(u). The coefficients a_i have to be non negative, and they can't be all equal to zero, so we can normalize and assume that Σ_{i=1}^n a_i = 1. Take the point x = Σ_{i=1}^n a_i x_i; the point x is in A, and for any y ∈ K we have <x,y> > c. For this x we have H_x = ∅ and this can't be.
3.3. Lemma. Let A be a convex bounded set in L^0, such that A ⊂ L^1. Then

1) for any ε > 0 there exists c ∈ R such that P(f ≤ c) > 1 − ε for all f ∈ A;

2) for any ε > 0 there exists g_0 ∈ L^∞ such that 0 ≤ g_0 ≤ 1, E[g_0] ≥ 1 − ε and E[f g_0] ≤ c for all f ∈ A.

Proof. 1) Let ε > 0 and V_ε = {f; ||f||_0 ≤ ε}. As the set A is bounded in L^0, there exists λ > 0 such that λA ⊂ V_ε. For such a pair (ε,λ) we have P(λ|f| ≥ 1) ≤ ε for all f ∈ A; and finally, taking c = 1/λ, we get P(f ≤ c) ≥ P(|f| ≤ c) ≥ 1 − ε for all f ∈ A.

2) Let ε > 0 and let c be chosen as above. Take

K = {g; g ∈ L^∞, 0 ≤ g ≤ 1, E[g] ≥ 1 − ε}.

The set K is weakly closed in the unit ball of L^∞; therefore K is weakly compact. The set K is obviously convex, and for any f ∈ A the set H_f = {g; g ∈ K, E[fg] ≤ c} contains at least the function g = I_{{f ≤ c}}. Lemma 3.2 implies that there exists g_0 ∈ K such that sup_{f∈A} E[f g_0] ≤ c.

3.4. Lemma. If A ⊂ L^1(P') is a convex, bounded subset of L^0, there exists a probability measure Q equivalent to P' such that sup_{f∈A} ∫ f dQ < +∞.

Proof. For any integer n, there exist c_n ∈ R and g_n ∈ L^∞, 0 ≤ g_n ≤ 1, such that ∫ g_n dP' ≥ 1 − 1/n and sup_{f∈A} ∫ f g_n dP' ≤ c_n. Choose a sequence of strictly positive real numbers a_n such that Σ a_n and Σ a_n |c_n| both converge. Take h = Σ a_n g_n, and let Q be the measure Q = h·P'. The set {h = 0} is the intersection ∩_n {g_n = 0}, and P'(g_n = 0) ≤ 1/n; so P'(h = 0) = 0, and the measures P' and Q are equivalent. Furthermore

sup_{f∈A} ∫ f h dP' = sup_{f∈A} ∫ f dQ ≤ Σ a_n |c_n| < +∞.

The only thing is that the finite measure Q might not be a probability, but this can be easily taken care of.

We see now that there is no hope to go beyond semimartingales in stochastic integration.
Can one actually integrate with respect to semimartingales? Yes, and we have the following theorem.

3.5. Theorem. Let X be a semimartingale and let b(P_t) be the set of all bounded predictable processes vanishing on ]t,+∞[. Then there exists one and only one extension J* of the function J(φ) = ∫ φ_s dX_s to the set b(P_t) such that

1) J* is linear;

2) if (Y^n) is a bounded sequence of elements of b(P_t) such that, for all (s,ω), Y(s,ω) = lim_n Y^n(s,ω) exists, then J*(Y^n) → J*(Y) in L^0. (Note that the limit process Y is automatically in b(P_t).)

Partial proof. The existence of J* will be proven in chapter 5. The unicity is trivial, using the fact that the processes of B_t generate the σ-field of the predictable sets vanishing on ]t,+∞[.

Remark. Note that we are working in L^0, so the extension J* is the same if we replace the probability P by an equivalent probability Q.
CHAPTER IV: MORE ON LOCAL MARTINGALES AND SEMIMARTINGALES

SQUARE INTEGRABLE MARTINGALES

Ito's stochastic integral theory is based on the remark that, if B_t is the Brownian motion starting from zero, the process B_t^2 − t is a martingale. We are going to see what takes the place of the process A_t = t when instead of the Brownian motion we have a martingale.

4.1. Definition. A martingale M is a square integrable martingale if sup_t E[M_t^2] < +∞.

4.2. Properties. If M is a square integrable martingale, M has the following properties:

1) M_t^2 is a submartingale;

2) M_∞ = lim_{t→+∞} M_t exists a.e. and in L^2;

3) E[sup_t M_t^2] ≤ 4 E[M_∞^2] (Doob's inequality);

4) E[Σ_s (ΔM_s)^2] ≤ lim inf_τ E[Σ_i (M_{t_{i+1}} − M_{t_i})^2] = E[M_∞^2] − E[M_0^2] < +∞ (the lim inf being taken over the directed set of the partitions τ of [0,∞] with the partial ordering τ < σ if σ is finer than τ; the above inequality comes from Σ_s (ΔM_s)^2 ≤ lim inf_τ Σ_i (M_{t_{i+1}} − M_{t_i})^2 and Fatou's lemma).

4.3. The process <M,M>. One can find an increasing process A such that M^2 − A is a martingale as follows: the process −M^2 is a supermartingale of class (D) if M is square integrable (use Doob's inequality in 4.2); there exists therefore a unique predictable, integrable, increasing process A such that M^2 − A is a uniformly integrable martingale. This increasing process is usually denoted by A = <M,M>. In the case where M is the Brownian motion, which is locally a square integrable martingale, we get <M,M>_t = t.

Unfortunately the process <M,M> is not the good one for local martingales. For some local martingales M, there is no predictable increasing process A such that M^2 − A is a local martingale (see 2.19 for an intuitive explanation). This is why we have to look more closely at square integrable martingales.
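As a hedged discrete illustration (ours, not the lectures'): for the simple ±1 random walk S_n, the analogue of B_t^2 − t being a martingale is that S_n^2 − n is a martingale, i.e. <S,S>_n = n plays the role of A_t = t. This can be checked by exact enumeration of the coin paths:

```python
from itertools import product

def compensated_square_is_martingale(n_steps):
    """Check E[S_{n+1}^2 - (n+1) | F_n] = S_n^2 - n on every atom of F_n,
    for the simple +/-1 random walk S."""
    for n in range(n_steps):
        for prefix in product([-1, 1], repeat=n):
            s = sum(prefix)
            # average of S_{n+1}^2 - (n+1) over the next coin toss
            cond = sum((s + c) ** 2 - (n + 1) for c in (-1, 1)) / 2.0
            if abs(cond - (s ** 2 - n)) > 1e-12:
                return False
    return True

assert compensated_square_is_martingale(6)
```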
4.4. Decomposition of square integrable martingales. Let M be a square integrable martingale. It is càdlàg, therefore we can cover the jump times of M by a sequence of stopping times, some of which are predictable (call them S_n), the others totally inaccessible (call them R_n).

Look at the processes C^n_t = ΔM_{S_n} I_{{t ≥ S_n}}. The jump ΔM_{S_n} is in L^2, so C^n has a compensator, which is E[ΔM_{S_n} | F_{S_n−}] I_{{t ≥ S_n}} (2.10); but E[ΔM_{S_n} | F_{S_n−}] = 0 (2.5), and C^n_t is itself a square integrable martingale.

For the processes D^n_t = ΔM_{R_n} I_{{t ≥ R_n}}, things are slightly more complicated. D^n_t has a compensator D̃^n_t which is a continuous process with integrable variation, but which is not the zero process. Nevertheless δ^n = D^n − D̃^n is a uniformly integrable martingale which jumps only at time R_n, and Δδ^n_{R_n} = ΔM_{R_n}.

Can we sum the C^n and the δ^n and get the martingale M as the sum of a continuous martingale and compensated jumps? The sum might not converge pointwise, but we know that E[Σ_s (ΔM_s)^2] < +∞, and we are going to work in L^2. First we need a few results.

4.5.
Lemma. - If A and B are two processes with integrable variation, if A − B is a martingale and if φ is a predictable, bounded process (or a predictable non negative process), then

E[∫_0^∞ φ_s d(A − B)_s] = 0.

Proof. The result is true for any process φ of the form φ = φ_0 I_{{0}} + Σ_i φ_i I_{]t_i,t_{i+1}]}, where φ_0 is a bounded, F_0-measurable r.v. and the φ_i are bounded, F_{t_i}-measurable r.v. Therefore it is true for any predictable bounded (or non negative) process.

4.6. Lemma. - Let a(t) be a right continuous, increasing function from R_+ into [0,∞[, such that a(0) = 0, and let c(s) = inf{t; a(t) > s}. Then c(s) is a non decreasing, right continuous function, and for any borelian function f, bounded or non negative,

∫_{]0,+∞[} f(t) da(t) = ∫_0^{a(+∞)} f(c(s)) ds.

Proof. - Just check the equality for the functions of the form f = I_{]0,u]}. (Remember that da(s) and ds have no mass at s = 0.)
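A small numerical sketch of lemma 4.6 (the step function below is our own example, not the text's): for a pure-jump increasing a, the generalized inverse c(s) = inf{t; a(t) > s} is a right continuous step function, and the Stieltjes integral of f against da equals the Lebesgue integral of f(c(s)) over [0, a(+∞)[.

```python
# increasing step function a with jump jump_sizes[i] at time jump_times[i]
jump_times = [1.0, 2.5, 4.0]
jump_sizes = [2.0, 1.0, 3.0]

def c(s):
    """c(s) = inf{t; a(t) > s}: first jump time at which the cumulative
    mass exceeds s."""
    total = 0.0
    for u, j in zip(jump_times, jump_sizes):
        total += j
        if total > s:
            return u
    return float("inf")

f = lambda t: t * t

# left side: Stieltjes integral of f against the pure-jump measure da
lhs = sum(f(u) * j for u, j in zip(jump_times, jump_sizes))

# right side: midpoint Riemann approximation of the Lebesgue integral of
# f(c(s)) over [0, a(+infinity))
mass = sum(jump_sizes)
n = 60000
h = mass / n
rhs = sum(f(c((k + 0.5) * h)) * h for k in range(n))

assert abs(lhs - rhs) < 1e-2
```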
4.7. Lemma. - If A is a process with integrable variation and if M is a bounded martingale, then

E[∫_0^∞ M_s dA_s] = E[M_∞ A_∞].

Proof. By taking differences we can assume that A is increasing. Apply lemma 4.6 to A_t and C_t = inf{s; A_s > t}. The r.v. C_t are stopping times, and

E[∫_0^∞ M_s dA_s] = ∫_0^∞ E[M_{C_t} I_{{C_t < +∞}}] dt = ∫_0^∞ E[M_∞ I_{{C_t < +∞}}] dt = E[M_∞ A_∞].

4.8. Lemma. - If L_t is a uniformly integrable process on [0,+∞], if L_0 = 0 and if E[L_S] = 0 for any stopping time S, then L is a martingale.

Proof. - Let s < t and A ∈ F_s; the r.v. S = s I_A + t I_{A^c} is a stopping time. We have

E[L_s I_A] + E[L_t I_{A^c}] = E[L_S] = 0 = E[L_t] = E[L_t I_A] + E[L_t I_{A^c}],

so E[L_s I_A] = E[L_t I_A] for all A ∈ F_s, and L is a martingale.

4.9. Lemma. Let M be a square integrable martingale, and S be a predictable time. Then C_t = ΔM_S I_{{t ≥ S}} is a square integrable martingale; and for any square integrable martingale N, the process L_t = C_t N_t − ΔC_S ΔN_S I_{{t ≥ S}} is a uniformly integrable martingale.
Proof. - We have already seen that C_t is a martingale. As ΔC_S ∈ L^2, the martingale C is square integrable (4.2). The process L is uniformly integrable, as sup_t |L_t| ≤ sup_t |C_t| · sup_t |N_t| + |ΔC_S ΔN_S|. Let T be a stopping time; applying 4.7 to C and the martingale N^T (N stopped at the stopping time T), we get E[L_T] = 0 (we used the fact that E[ΔC_S ΔN_S] = E[ΔN_S E[ΔC_S | F_{S−}]] = 0). As L_0 = 0, L is a martingale (4.8).

4.10. Lemma. Let M be a square integrable martingale, and let R be a totally inaccessible time. We consider D_t = ΔM_R I_{{t ≥ R}}, its compensator D̃_t, and δ = D − D̃. Then

1) δ is a square integrable martingale;

2) for any square integrable martingale N, the process L_t = δ_t N_t − ΔD_R ΔN_R I_{{t ≥ R}} is a uniformly integrable martingale.

Proof. 1) By considering ΔM_R^+ and ΔM_R^−, it is enough to study the case D_t = ψ I_{{t ≥ R}}, where ψ is a non negative, F_R-measurable r.v. in L^2. As R is totally inaccessible, the compensator D̃ is continuous. If the function ψ is bounded, we get

E[D̃_∞^2] = 2E[∫_0^∞ D̃_s dD̃_s] = 2E[∫_0^∞ D̃_s dD_s] = 2E[D̃_R ψ] ≤ 2E[D̃_∞ ψ]

(we used the formula f(t)^2 = 2∫_0^t f(s) df(s), true for continuous non decreasing functions vanishing at 0, and lemma 4.5). So if ψ is non negative bounded, D̃_∞ ∈ L^2 and, by Schwarz's inequality, E[D̃_∞^2] ≤ 4E[ψ^2].

If the non negative function ψ is not bounded, consider the functions ψ_n = ψ ∧ n, and the processes D^n and D̃^n corresponding to ψ_n. The process D̃^{n+1} − D̃^n is the compensator of the increasing process D^{n+1} − D^n, so we can construct the increasing, continuous processes D̃^n in such a way that for each n, D̃^{n+1} − D̃^n is itself an increasing process. For each n we have E[(D̃^n_∞)^2] ≤ 4E[ψ^2]; the limit B_t of the non decreasing sequence D̃^n_t is therefore such that B_∞ ∈ L^2. For each ω the functions t → D̃^n_t converge uniformly to the function t → B_t (this as 0 ≤ B_t − D̃^n_t ≤ B_∞ − D̃^n_∞), so the process B is continuous. By taking limits in L^1 in the equality E[D^n_t − D̃^n_t | F_s] = D^n_s − D̃^n_s (s ≤ t), we get that D − B is a uniformly integrable martingale. Therefore B = D̃, E[D̃_∞^2] ≤ 4E[ψ^2] < +∞, and the martingale δ = D − D̃ is square integrable.

2) As in 4.9 we see that L is a uniformly integrable process. Let T be a stopping time; apply 4.7 to D̃ and the martingale N^T (we use the fact that N_t and N_{t−} have the same Stieltjes integral with respect to the continuous process D̃); we get E[L_T] = 0. Again use 4.8 to show that L is a martingale.

4.11.
Definition. A square integrable martingale M is purely discontinuous if M_0 = 0 and if, for any square integrable continuous martingale N, the process MN is a martingale.

4.12. Theorem. Let M be a square integrable martingale; then M can be decomposed in a unique way into the sum of a square integrable continuous martingale M^c and a square integrable purely discontinuous martingale M^d. The martingale M^c is called the continuous part of M, M^d the compensated sum of the jumps of M.

Proof. Let us come back to the C^n and δ^n defined in 4.4. According to 4.9 and 4.10, the products C^n C^m, C^n δ^m and δ^n δ^m (n ≠ m) are all martingales. Consider the processes Y^N = Σ_{n≤N} C^n + Σ_{n≤N} δ^n; using 4.9 and 4.10 again, we have for N' ≤ N

E[(Y^N_∞ − Y^{N'}_∞)^2] = Σ_{N' < n ≤ N} ( E[(C^n_∞)^2] + E[(δ^n_∞)^2] ).

This and 4.2 show that the r.v. Y^N_∞ form a Cauchy sequence in L^2 and, by Doob's inequality, that E[sup_t |Y^N_t − Y^{N'}_t|^2] → 0.

We can assume, by taking a subsequence (which we shall still denote (Y^N)), that Σ_N E[sup_t |Y^N_t − Y^{N+1}_t|^2] < +∞. This implies, by Borel-Cantelli's lemma, that for almost all ω the functions t → Y^N_t(ω) converge uniformly in t to a function Y_t(ω), and that for each t the r.v. Y^N_t converges to Y_t in L^2. The process Y is therefore càdlàg, it has the same jumps as M, and it is a square integrable martingale.

Let X be a continuous square integrable martingale. Each XY^N is a martingale (4.9 and 4.10). The X_t Y^N_t converge to X_t Y_t in L^1, and XY is a martingale. So M^d = Y is a purely discontinuous martingale, and M^c = M − M^d is a continuous martingale.

We still have to show the uniqueness of such a decomposition. If M^c + M^d = N^c + N^d with M^c and N^c square integrable continuous martingales, and M^d and N^d purely discontinuous square integrable martingales, then M^c − N^c = N^d − M^d is continuous and purely discontinuous. So (M^d − N^d)^2 is a martingale and

E[(M^d_t − N^d_t)^2] = E[(M^d_0 − N^d_0)^2] = 0 for all t ≥ 0.

4.13. Remark. For each t, the r.v. Y^N_t converges in L^2 to M^d_t: the compensated sums of the N first jumps of M converge to the compensated sum of the jumps of M.
4.14. The increasing process [M,M]. Let M be a square integrable martingale such that M_0 = 0, and let M^c and M^d be its continuous and purely discontinuous parts. We define

[M,M]_t = <M^c,M^c>_t + Σ_{s≤t} (ΔM_s)^2.

The processes (M^c_t)^2 − <M^c,M^c>_t, (M^d_t)^2 − Σ_{s≤t} (ΔM_s)^2 and M^c_t M^d_t are all uniformly integrable martingales (apply 4.9 and 4.10), and so is M^2 − [M,M]. So the increasing process <M,M> defined in 4.3 is the compensator of [M,M].

If M does not vanish at t = 0, the tendency is to think of M_0 as the jump of M at time 0, and to define

[M,M]_t = M_0^2 + <M^c,M^c>_t + Σ_{0<s≤t} (ΔM_s)^2.

But we shall always assume M_0 = 0.

4.15. If M and N are two square integrable martingales vanishing at 0, we define:

<M,N> = (1/4)(<M+N,M+N> − <M−N,M−N>)   and   [M,N] = (1/4)([M+N,M+N] − [M−N,M−N]).

We have the following obvious properties:

1) <M,N> is the unique predictable process B with integrable variation such that MN − B is a uniformly integrable martingale;

2) because of the uniqueness of <M,N>, we have for any stopping time T: <M^T,N> = <M^T,N^T> = <M,N>^T;

3) [M,N]_t = <M^c,N^c>_t + Σ_{s≤t} ΔM_s ΔN_s;

4) MN − [M,N] is a uniformly integrable martingale;

5) for any stopping time T: [M^T,N] = [M^T,N^T] = [M,N]^T.
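For pure-jump discrete paths (an illustration of ours, not the text's), [X,Y] reduces to the sum of the products of the jumps, and the polarization formula of 4.15 can be checked directly:

```python
dX = [0.5, -1.0, 2.0, 0.0, 0.3]    # jumps of X
dY = [1.0, 0.25, -0.5, 2.0, -1.0]  # jumps of Y

def bracket(a, b):
    """[A,B] for pure-jump paths: sum of the products of the jumps."""
    return sum(x * y for x, y in zip(a, b))

sum_xy = [x + y for x, y in zip(dX, dY)]
diff_xy = [x - y for x, y in zip(dX, dY)]

# polarization: [X,Y] = ([X+Y,X+Y] - [X-Y,X-Y]) / 4
assert abs(bracket(dX, dY)
           - (bracket(sum_xy, sum_xy) - bracket(diff_xy, diff_xy)) / 4.0) < 1e-12
```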
LOCAL MARTINGALES

4.16. Definition. - A local martingale M is purely discontinuous if M_0 = 0 and if, for any continuous local martingale N, the product MN is a local martingale. (As any continuous local martingale is locally a bounded martingale, it is enough to check the property for any bounded continuous martingale.)

4.17. Lemma. - Let M be a local martingale. Suppose that M is also a process with finite variation (M_0 = 0 by definition 2.12). Then

1) V_t = Σ_{s≤t} ΔM_s is a process with locally integrable variation, and M = V − Ṽ, where Ṽ is the compensator of V;

2) for any bounded continuous martingale N, the product MN is a local martingale.

Proof. 1) The process M*_t = sup_{s≤t} |M_s| is locally integrable (2.21), so by 2.20 the variation of V is locally integrable and V has a compensator Ṽ. The local martingale M − (V − Ṽ) is continuous (localize so that it is a uniformly integrable martingale and use 2.10). The variation of M − (V − Ṽ) is locally integrable by 2.21 and 2.20. So M − (V − Ṽ) has a compensator, which has to be M − (V − Ṽ) itself as M − (V − Ṽ) is continuous, but which has to be 0 as M − (V − Ṽ) is a local martingale. So M = V − Ṽ.

2) Let N be a bounded continuous martingale; by working locally we can assume that the variations of V and Ṽ are integrable. For any stopping time T we get, using successively 4.7 and 4.5,

E[M_T N_T] = E[∫_0^∞ N^T_s d(V − Ṽ)^T_s] = 0.

So the uniformly integrable process J = MN is a martingale (lemma 4.8).
4.18. Theorem. Let M be a local martingale. M can be written in a unique way as M = M^c + M^d, where M^c is a continuous local martingale and M^d is a purely discontinuous local martingale. Moreover we have (M^c)^T = (M^T)^c for any stopping time T such that M^T is a square integrable martingale.

Proof. - Use the fundamental lemma 2.21: M = N + U, where N is a locally bounded local martingale and U is a local martingale with finite variation. Let (T_n) be a localizing sequence of stopping times such that each N^{T_n} is a bounded (therefore square integrable) martingale. N^{T_n} can be decomposed into N^{T_n} = (N^{T_n})^c + (N^{T_n})^d. Because of the uniqueness in 4.12, we have (N^{T_{n+1}})^c = (N^{T_n})^c on [0,T_n]. Define N^c as the process equal to (N^{T_n})^c on each [0,T_n]. The process N^c is a continuous local martingale, and N^d = N − N^c is a purely discontinuous local martingale (this as each (N^d)^{T_n} = (N^{T_n})^d is a purely discontinuous square integrable martingale). So M = N^c + (N^d + U), where N^c is a continuous local martingale and N^d + U is a purely discontinuous local martingale (4.17).

Is this decomposition unique? If M = M^c + M^d = X^c + X^d are two such decompositions, the process M^c − X^c = X^d − M^d is both a continuous and a purely discontinuous local martingale. Being continuous it is locally bounded, and we see, using 4.12, that locally M^c − X^c = X^d − M^d = 0.

The fact that (M^c)^T = (M^T)^c for any stopping time T such that M^T is a square integrable martingale is again due to the uniqueness in 4.12.
4.19. Lemma. - Let M be a continuous local martingale; there exists a unique increasing predictable process [M,M] such that [M,M]^T = [M^T,M^T] for any stopping time T such that M^T is square integrable.

Proof. The uniqueness and existence are both due to the existence and uniqueness of <M,M> in 4.3, and to the fact that the continuous local martingale M is locally square integrable: [M,M] coincides with <M^T,M^T> on each [0,T].

4.20. Lemma. - Let M be a local martingale; then for almost all ω the sums Σ_{s≤t} (ΔM_s)^2 converge for all t.

Proof. By the fundamental lemma 2.21, M is the sum of a locally bounded martingale U and a process V with finite variation. As Σ_{s≤t} |ΔV_s| converges, so does Σ_{s≤t} (ΔV_s)^2. There exists a localizing sequence (T_n) such that each U^{T_n} is square integrable; we then have E[Σ_{s≤T_n} (ΔU_s)^2] < +∞, and the sums Σ_{s≤t} (ΔU_s)^2 converge a.e. for all t.

4.21. Definition. Let M be a local martingale; we define

[M,M]_t = <M^c,M^c>_t + Σ_{s≤t} (ΔM_s)^2.

The process [M,M] is an increasing process; we will see in chapter 6 that M^2 − [M,M] is a local martingale.

SEMIMARTINGALES

4.22. Lemma. - Let X be a semimartingale; suppose X = X_0 + M + A = X_0 + N + B, where M and N are local martingales and A and B are two processes with finite variation. Then M^c = N^c. We shall denote X^c = M^c; X^c is called the continuous local martingale part of the semimartingale X.

Proof. M − N is a local martingale with finite variation, so, by lemma 4.17, M − N is purely discontinuous and (M − N)^c = 0.

4.23. Lemma. - Let X be a semimartingale; for almost all ω the sums Σ_{s≤t} (ΔX_s)^2 converge for all t.

4.24. Definition. Let X be a semimartingale; the increasing process [X,X] is the process

[X,X]_t = <X^c,X^c>_t + Σ_{s≤t} (ΔX_s)^2.

As in 4.15 we shall define the processes [X,Y] by polarization.

4.25. Remark. One can show (see [12]) that [X,X]_t is the limit in probability of the sums Σ_{i=1}^n (X_{t_{i+1}} − X_{t_i})^2, where the limit is taken on the directed set of the partitions of [0,t]. So [X,X]_t is the quadratic variation of X on [0,t], and [X,X]_t does not change if P is replaced by an equivalent probability. We shall never need the result here, but it is an important one, as we have already decided that we should deal only with concepts invariant by a change of probability.
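A numerical sketch of remark 4.25 (the seed and step count are our choices, not the text's): along a fine partition of [0,1], the sums of squared increments of a simulated Brownian path approximate its quadratic variation [B,B]_1 = 1.

```python
import random

random.seed(0)
n = 100000
dt = 1.0 / n
# increments of a Brownian path on [0,1]
increments = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]
quadratic_variation = sum(db * db for db in increments)

# the sums converge in probability to t = 1 as the partition is refined
assert abs(quadratic_variation - 1.0) < 0.05
```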
CHAPTER V: STOCHASTIC INTEGRALS

In this chapter we will show the existence part of theorem 3.5: if X is a semimartingale and φ a bounded predictable process vanishing after a certain time t, we can define J*(φ) = ∫ φ_s dX_s, and J* verifies the continuity condition of 3.5. Actually we shall look at the stochastic integrals slightly differently, and construct, for any locally bounded predictable process H, the semimartingale (H∘X)_t = ∫_0^t H_s dX_s. In the case where X is a square integrable martingale, this will be similar to the construction of Ito's integral.

STOCHASTIC INTEGRATION WITH RESPECT TO SQUARE INTEGRABLE MARTINGALES

5.1. Let M be the set of all the square integrable martingales. For each M ∈ M, the r.v. M_∞ = lim_{t→+∞} M_t is in L^2(Ω,F,P). And conversely, to each r.v. Y ∈ L^2(Ω,F,P) corresponds a square integrable martingale M, which is the càdlàg version of the martingale E[Y | F_t]. L^2(Ω,F,P) is a Hilbert space for the scalar product (X,Y) → E[XY], so M is a Hilbert space for the scalar product (M,N) → E[M_∞ N_∞]. The corresponding norm on M is ||M|| = E[M_∞^2]^{1/2}.

5.2. Lemma. Let (M^n) be a sequence of square integrable martingales converging in M to a square integrable martingale M. By Doob's inequality (4.2) we have E[sup_t |M^n_t − M_t|^2] ≤ 4E[|M^n_∞ − M_∞|^2]. Therefore, by Borel-Cantelli's lemma, there exists a subsequence (M^{n_k}) such that for almost every ω the trajectories t → M^{n_k}_t(ω) converge uniformly to the trajectory t → M_t(ω). In particular, if each M^{n_k} is continuous, so is M.

5.3. Definition. If M is a square integrable martingale such that M_0 = 0, we define

L̇^2(M) = {H; H predictable process such that E[∫_0^∞ H_s^2 d[M,M]_s] < +∞}.

(The dot on the L is to distinguish it from a space L^2(M) not used here.) On L̇^2(M) we take the norm ||H||_M = (E[∫_0^∞ H_s^2 d[M,M]_s])^{1/2}.

5.4. Let B be the set of all the predictable processes φ of the form φ = φ_0 I_{{0}} + Σ_{i=1}^{n-1} φ_i I_{]t_i,t_{i+1}]}, where 0 = t_0 < t_1 < ... < t_n < +∞, φ_0 is a bounded, F_0-measurable r.v. and the φ_i are bounded, F_{t_i}-measurable r.v. For such a process φ we consider, as in chapter 3, the process

(φ∘M)_t = ∫ I_{[0,t]}(s) φ_s dM_s = Σ_i φ_i (M_{t∧t_{i+1}} − M_{t∧t_i}).

(As M is a square integrable martingale vanishing at 0, we could use I_{[0,t]} instead of I_{]0,t]}; but it is better to adopt the notation ∫_0^t = ∫_{]0,t]}.)
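The elementary integral of 5.4 can be sketched as a plain sum (the function names are ours); against a deterministic integrator it reduces to the Riemann-Stieltjes integral of the step process:

```python
def elementary_integral(t, knots, phi, M):
    """(phi o M)_t = sum_i phi[i] * (M(t ^ t_{i+1}) - M(t ^ t_i)),
    where phi[i] is the value of the step process on ]knots[i], knots[i+1]]
    and M is the integrator path, given as a function of time."""
    total = 0.0
    for i, phi_i in enumerate(phi):
        lo, hi = knots[i], knots[i + 1]
        total += phi_i * (M(min(t, hi)) - M(min(t, lo)))
    return total

# against M(t) = t, this is just the integral of the step function up to t
M = lambda t: t
value = elementary_integral(3.0, [0.0, 1.0, 2.0, 4.0], [1.0, 2.0, 0.5], M)
assert abs(value - 3.5) < 1e-12   # 1*1 + 2*1 + 0.5*(3-2)
```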
5.5. Lemma. - If M is a square integrable martingale with M_0 = 0, and φ ∈ B, then

1) the process φ∘M is a square integrable martingale, and ||φ∘M|| = ||φ||_M;

2) if M is a continuous square integrable martingale, so is φ∘M;

3) the processes Δ(φ∘M)_t and φ_t ΔM_t are indistinguishable.

Proof. - The proof of part 1 is trivial if one remembers that M^2 − [M,M] is a uniformly integrable martingale. Parts 2 and 3 are obvious on the explicit form of φ∘M.

5.6. Theorem. Let M ∈ M be such that M_0 = 0. Then

1) B is dense in L̇^2(M);

2) we can extend the application φ → φ∘M defined on B into an isometry H → H∘M from L̇^2(M) into M;

3) if M is continuous, so is H∘M;

4) the processes Δ(H∘M)_t and H_t ΔM_t are indistinguishable.

Proof. - The processes of B generate the predictable σ-field, so B is dense in L̇^2(M); and the isometry φ → φ∘M from B into M can be extended into an isometry from L̇^2(M) into M. Convergence in M implies almost everywhere uniform convergence on the trajectories for a subsequence (5.2), so parts 3 and 4 follow from lemma 5.5.
5.7. Theorem. Let M ∈ M be such that M_0 = 0, and H ∈ L̇^2(M). The process L = H∘M is the unique element of M such that

1) L_0 = 0;

2) [L,N] = H∘[M,N] for all N ∈ M

(the symbol (H∘[M,N])_t denotes the Stieltjes integral ∫_{]0,t]} H_s d[M,N]_s).

The proof will use Kunita and Watanabe's inequality, which is just a form of Schwarz's inequality.

5.8. Lemma. - Kunita and Watanabe's inequality: let M and N be two elements of M vanishing at time zero, and H and K be two bounded measurable processes; then a.e.

∫_0^∞ |H_s| |K_s| |d[M,N]_s| ≤ (∫_0^∞ H_s^2 d[M,M]_s)^{1/2} (∫_0^∞ K_s^2 d[N,N]_s)^{1/2}.

Proof. Use the fact that [M + λN, M + λN]_t − [M + λN, M + λN]_s is non negative for any λ ∈ R and s ≤ t, to get

|[M,N]_t − [M,N]_s| ≤ ([M,M]_t − [M,M]_s)^{1/2} ([N,N]_t − [N,N]_s)^{1/2}.

This implies easily: if H and K are of the form H = Σ_i H_i I_{]t_i,t_{i+1}]} and K = Σ_j K_j I_{]s_j,s_{j+1}]}, where the H_i and K_j are bounded r.v., then

(*) |∫_0^∞ H_s K_s d[M,N]_s| ≤ (∫_0^∞ H_s^2 d[M,M]_s)^{1/2} (∫_0^∞ K_s^2 d[N,N]_s)^{1/2},

and this extends to the case where H and K are bounded measurable processes. As the borelian σ-field on R_+ is separable, there exists a measurable version J_s of the density d[M,N]_s / |d[M,N]|_s, and we can assume that |J_s| is bounded by 1. Apply the inequality (*) to H_s and J_s K_s to get the inequality of the lemma for any bounded measurable H and K; then the inequality with |H| and |K| is trivial.
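In discrete time (our illustration, not the text's), where the brackets are sums of products of increments, Kunita and Watanabe's inequality is exactly the Cauchy-Schwarz inequality applied to the weighted increments H·ΔM and K·ΔN:

```python
import math
import random

random.seed(1)
n = 1000
dM = [random.gauss(0.0, 0.1) for _ in range(n)]
dN = [random.gauss(0.0, 0.1) for _ in range(n)]
H = [random.uniform(-1.0, 1.0) for _ in range(n)]
K = [random.uniform(-1.0, 1.0) for _ in range(n)]

# left side: sum |H K| |d[M,N]|, with d[M,N] = dM * dN
lhs = sum(abs(h * k * dm * dn) for h, k, dm, dn in zip(H, K, dM, dN))
# right side: (sum H^2 d[M,M])^(1/2) * (sum K^2 d[N,N])^(1/2)
rhs = math.sqrt(sum(h * h * dm * dm for h, dm in zip(H, dM))) * \
      math.sqrt(sum(k * k * dn * dn for k, dn in zip(K, dN)))

assert lhs <= rhs + 1e-12
```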
Proof of theorem 5.7. By lemma 5.8 and theorem 5.6, the application

H → E[(H∘M)_∞ N_∞ − ∫_0^∞ H_s d[M,N]_s]

is continuous on L̇^2(M). Its value, being zero on B, is zero on all of L̇^2(M).

Let M and N be two elements of M vanishing at t = 0, and let H ∈ L̇^2(M). The process J_t = (H∘M)_t N_t − ∫_0^t H_s d[M,N]_s is uniformly integrable (use lemmas 5.8 and 5.6 and Doob's inequality 4.2). Take a stopping time T and N^T the martingale N stopped at time T; we get E[J_T] = 0. So J is a uniformly integrable martingale (lemma 4.8).

Let M^c be the continuous part of M, and M^d its purely discontinuous part. For any continuous martingale N^c, the process ∫_0^t H_s d[M^c,N^c]_s is continuous and has finite variation, and (H∘M^c)_t N^c_t − ∫_0^t H_s d[M^c,N^c]_s is a uniformly integrable martingale. For any continuous martingale N^c, the process (H∘M^d)_t N^c_t − ∫_0^t H_s d[M^d,N^c]_s is a uniformly integrable martingale; but [M^d,N^c] = 0 (4.15), so H∘M^d is purely discontinuous. And as H∘M = H∘M^c + H∘M^d, with H∘M^c continuous, we have (H∘M)^c = H∘M^c and (H∘M)^d = H∘M^d.
We now have, for any square integrable martingale $N$,

$$[H\circ M,N]_t = [H\circ M^c,N^c]_t + \sum_{s\le t} H_s\,\Delta M_s\,\Delta N_s = (H\circ[M,N])_t.$$

Suppose two square integrable martingales $L$ and $L'$, with $L_0 = L'_0 = 0$, verify $[L,N] = H\circ[M,N] = [L',N]$ for all $N \in \underline{M}^2$. We then have $[L-L',N] = 0$ for all $N$; take $N = L - L'$, so $[L-L',L-L'] = 0$. The process $(L-L')^2$ is then a martingale, and $L = L'$.

5.9. Corollary. Let $M$ and $N$ be two elements of $\underline{M}^2$ vanishing at $0$, and let $H \in L^2(M)$. Then $H\circ M$ is the unique square integrable martingale $L$ vanishing at $0$ such that $[L,N] = H\circ[M,N]$ for all $N \in \underline{M}^2$.

5.10. Remark. Note that in the proof of 5.7 we showed that

$$(H\circ M)^c = H\circ M^c \quad\text{and}\quad (H\circ M)^d = H\circ M^d.$$

Using the characterization 5.7 of the stochastic integral we get, for any bounded predictable process $K$ with $K_0 = 0$,

$$K\circ(H\circ M) = (KH)\circ M.$$

In particular if $T$ is a stopping time and $K$ is the predictable process $1_{]0,T]}$, we have

$$(H\circ M)^T = (H\,1_{]0,T]})\circ M.$$
5.11. Remark. Let us come back to theorem 3.5. The set $b(\mathcal{P})$ was the set of all bounded predictable processes vanishing on $\{0\}$. For a process $Y \in b(\mathcal{P})$ we shall define $J^*(Y) = (Y\circ M)_\infty$. $J^*$ is linear on $b(\mathcal{P})$; let $(Y^n)$ be a bounded sequence of elements of $b(\mathcal{P})$ converging at each $(s,\omega)$ to a process $Y \in b(\mathcal{P})$. The process $Y$ is then the limit in $L^2(M)$ of the $Y^n$, and $J^*(Y^n)$ converges in $L^2$ to $J^*(Y)$.

The stochastic integral we have constructed so far is therefore the one we were looking for.
STOCHASTIC INTEGRATION WITH RESPECT TO LOCAL MARTINGALES

Remember that a local martingale always vanishes at $t = 0$.

5.12. Definition. A process $H$ is locally bounded if there exists a localizing sequence $(T_n)$ of stopping times such that each process $H^{T_n} 1_{\{T_n > 0\}}$ is a bounded process.

Since we have decided that we would like to be able to replace $P$ by an equivalent probability $Q$ without changing the set of processes $H$ for which $\int H_s\,dX_s$ is defined, we shall restrict ourselves to the processes $H$ which are predictable and locally bounded.

5.13. Let $M$ be a local martingale, and $H$ a predictable, locally bounded process. By the fundamental lemma (2.21), we have $M = U + V$ where $U$ is a locally square integrable martingale, and $V$ is a local martingale with finite variation. Let $(T_n)$ be a localizing sequence such that for each $n$, $U^{T_n}$ is a square integrable martingale, $V^{T_n}$ is a martingale with integrable variation (we can do this by 4.17), and $H^{T_n} 1_{\{T_n > 0\}}$ is bounded. We want to define the stochastic integral $(H\circ M)^{T_n}$ by

$$(H\circ M)^{T_n}_t = (H\circ U^{T_n})_t + \int_0^t H_s\,dV^{T_n}_s.$$

Lemma 5.14 will tell us that the stochastic integral $H\circ M$ thus defined does not depend on the decomposition $M = U + V$, and that the process $H\circ M$ is a local martingale.
5.14. Lemma. Let $V$ be a process with integrable variation.

1) If $V$ is a martingale, and if $H$ is a predictable process such that $E[\int_0^\infty |H_s|\,|dV_s|] < +\infty$, then the Stieltjes integral $\int_0^t H_s\,dV_s$ is a càdlàg uniformly integrable martingale.

2) If $V$ is a square integrable martingale, and if $H$ is a bounded predictable process, the stochastic integral $(H\circ V)_t$ and the Stieltjes integral $\int_0^t H_s\,dV_s$ are two indistinguishable processes.

Proof. Part 1 is true if $H \in b(\mathcal{P})$. Define $L^1(V)$ as the set of all predictable processes $H$ such that $E[\int_0^\infty |H_s|\,|dV_s|] < +\infty$, and take on $L^1(V)$ the norm $\|H\|_{L^1(V)} = E[\int_0^\infty |H_s|\,|dV_s|]$. If $H^n \to H$ in $L^1(V)$, we have

$$E\Big[\sup_t \Big|\int_0^t H^n_s\,dV_s - \int_0^t H_s\,dV_s\Big|\Big] \le \|H^n - H\|_{L^1(V)} \to 0.$$

We finish as in 5.6, except that the convergences are in $L^1(V)$ instead of $L^2(M)$.

For part 2, the stochastic integral $((H1_{]0,t]})\circ V)_\infty$ and the Stieltjes integral $\int_0^t H_s\,dV_s$ coincide on $b(\mathcal{P})$; both are linear in $H$ and verify the continuity property of theorem 3.5, so $(H\circ V)_t = \int_0^t H_s\,dV_s$ a.s. As the two processes are càdlàg they are indistinguishable.
5.15. Theorem. Let $M$ be a local martingale, and let $H$ be a predictable, locally bounded process. Then $L = H\circ M$ is the unique local martingale $L$ such that

$$[L,N] = H\circ[M,N] \qquad \forall N \text{ local martingale}.$$

Proof. Let $M = U + V$ where $U$ is a locally square integrable martingale, and $V$ is a local martingale with finite variation. The process $H\circ V$ is a local martingale with finite variation, so $(H\circ V)^c = 0$ (lemma 4.17). As $U$ is locally square integrable we have (using 4.18 and 5.9)

$$(H\circ U)^c = H\circ U^c \quad\text{and}\quad (H\circ U)^d = H\circ U^d.$$

The continuous martingales $U^c$, $N^c$ and $H\circ U^c$ are locally square integrable, so by theorem 5.7 we get $H\circ[M,N] = [H\circ M,N]$ for every local martingale $N$.

Let $L$ and $L'$ be two local martingales such that $[L,N] = [L',N] = H\circ[M,N]$ for every local martingale $N$. We have $[L-L',N] = 0$ for all $N$. Let $N_\infty$ be any bounded $\underline{F}_\infty$-measurable r.v., and let $N_t = E[N_\infty|\underline{F}_t]$ (càdlàg version). By localizing we can suppose that $L - L' = U + V$ where $U$ is a square integrable martingale and $V$ is a martingale with integrable variation (2.21 and 4.17). We then have, as $[U,N]$ and $[V,N]$ are processes with integrable variation,

$$E[(L_\infty - L'_\infty)N_\infty] = E\big[[L-L',N]_\infty\big] = 0$$

for any bounded $\underline{F}_\infty$-measurable r.v. $N_\infty$. So $L_\infty - L'_\infty = 0$, and $L = L'$.
5.16. Remark. From 5.15 we get: if $M$ is a local martingale and $H$ a predictable, locally bounded process:

1) $(H\circ M)^c = H\circ M^c$;

2) $(H\circ M)^d = H\circ M^d$;

3) the processes $\Delta(H\circ M)_s$ and $H_s\Delta M_s$ are indistinguishable (this fact is trivial by 5.6 and the definition of the Stieltjes integral);

4) $(H\circ M)_T = \int_{]0,T]} H_s\,dM_s$ for any finite stopping time $T$.
STOCHASTIC INTEGRATION WITH RESPECT TO SEMIMARTINGALES

5.17. Let $X = X_0 + M + A$ be a semimartingale, and $H$ be a predictable, locally bounded process. We shall define

$$(H\circ X)_t = (H\circ M)_t + \int_0^t H_s\,dA_s.$$

The integral $H\circ M$ is the stochastic integral with respect to the local martingale $M$; $\int_0^t H_s\,dA_s$ is the Stieltjes integral with respect to the process with finite variation $A$. According to lemma 5.14 the process $H\circ X$ does not depend on the decomposition $X = X_0 + M + A$. And we have:

1) $H\circ X$ is a semimartingale;

2) $(H\circ X)^c = H\circ X^c$;

3) the processes $\Delta(H\circ X)_s$ and $H_s\Delta X_s$ are indistinguishable;

4) $(H\circ X)_T = \int_{]0,T]} H_s\,dX_s$ for any finite stopping time $T$.
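The decomposition-independence in 5.17 can be illustrated numerically: at the level of left-endpoint Riemann sums, splitting the integrator into its martingale part and its finite-variation part changes nothing, which is the discrete shadow of the statement above. The sketch below is not from the text; the Brownian martingale part, the drift $\sin t$, the integrand and the step count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100_000, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)

dM = rng.normal(0.0, np.sqrt(dt), n)             # increments of the martingale part M
A = np.sin(t)                                    # a smooth finite-variation part A
M = np.concatenate([[0.0], np.cumsum(dM)])
X = M + A                                        # semimartingale X = X_0 + M + A, X_0 = 0

H = np.cos(M[:-1])                               # left-endpoint values H_{s-} (adapted)

direct = np.sum(H * np.diff(X))                  # sums taken against X directly
split = np.sum(H * dM) + np.sum(H * np.diff(A))  # (H o M) plus the Stieltjes part

print(abs(direct - split))                       # agree up to floating-point error
```

At the discrete level the two expressions agree by mere linearity; the content of 5.14 and 5.17 is that this consistency survives the passage to the limit.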
5.18. Remark. Let $H$ be a process which is adapted, left continuous, and has right limits everywhere; then $H$ is predictable and locally bounded. (Take the localizing sequence $T_n = \inf(t;\ |H_t| > n)$.) Those are the only processes $H$ we will really use.
5.19. Remark. Let $B_t$ be the Brownian motion, and $\underline{F}_t$ be the $\sigma$-field $\sigma(B_s,\ s \le t)$ completed with all the null sets in $\underline{F}_\infty$. The family $\underline{F}_t$ is then right continuous. In his lectures Friedman showed that one could integrate any bounded progressively measurable process $K$ with respect to $B$. In fact for such a $K$ there exists a bounded predictable process $H$ such that

$$P(\omega;\ K_t = H_t \text{ except for at most a countable number of } t) = 1.$$

In that case $E[\int_0^t (K_s - H_s)^2\,ds] = 0$ for all $t$, and it is natural to take $\int_0^t K_s\,dB_s = (H\circ B)_t$; this is the same as the integral defined in Friedman's lectures (the fact that the process $H$ is not unique is trivially no problem at all).
CHAPTER VI: ITO'S FORMULA

If $F$ is a continuously differentiable function from $\mathbb{R}$ into $\mathbb{R}$, we have $F(t) = F(0) + \int_0^t F'(s)\,ds$. This formula is also valid if $V_t$ is a continuous process with finite variation: $F(V_t) = F(V_0) + \int_0^t F'(V_s)\,dV_s$. How does the proof go? You write

$$F(V_t) - F(V_0) = \sum_{i=0}^{n-1} \big(F(V_{t_{i+1}}) - F(V_{t_i})\big)$$

where $0 = t_0 < t_1 < \dots < t_n = t$ is a subdivision of $[0,t]$; then you use Taylor's formula and the fact that the quadratic variation of $V_t$ is zero to get the result. If $X$ is a continuous semimartingale, the quadratic variation of $X$ is $[X,X]$, and the sums $\sum_{i=0}^{n-1}(X_{t_{i+1}} - X_{t_i})^2$ converge to $[X,X]_t$ in probability when the subdivision $\tau = (t_0,\dots,t_n)$ gets finer and finer. So if $F$ is a function with continuous 2nd order derivative we should get

$$F(X_t) = F(X_0) + \int_0^t F'(X_s)\,dX_s + \tfrac{1}{2}\int_0^t F''(X_s)\,d[X,X]_s.$$

If $X$ is a semimartingale, not necessarily continuous, just look at what the jumps of $F(X_t)$ are to guess the formula in Theorem 6.1.

6.1. Theorem (Ito's formula). Let $X^1, X^2, \dots, X^n$ be $n$ semimartingales; we denote by $X_t$ the $\mathbb{R}^n$-valued semimartingale $(X^1_t,\dots,X^n_t)$. Let $F$ be a real valued function on $\mathbb{R}^n$ which is twice continuously differentiable. Then

$$F(X_t) = F(X_0) + \sum_i \int_0^t D_iF(X_{s-})\,dX^i_s + \tfrac{1}{2}\sum_{i,j}\int_0^t D_iD_jF(X_{s-})\,d[X^{i,c},X^{j,c}]_s$$
$$\qquad\qquad + \sum_{s\le t}\Big(F(X_s) - F(X_{s-}) - \sum_i D_iF(X_{s-})\,\Delta X^i_s\Big).$$
Comments. $D_iF$ and $D_iD_jF$ are the derivatives of $F$. The term $X^{i,c}$ is the continuous martingale part of the semimartingale $X^i$. All the processes $D_iF(X_{s-})$ are predictable and locally bounded, so the stochastic integrals exist.

Let $(T_n)$ be the localizing sequence $T_n = \inf(t;\ |X_t| > n)$. On $[0,T_n[$, $|X_t|$ and $|X_{t-}|$ are bounded by $n$; let $K = \sup_{|x|\le n}\sum_{i,j}|D_iD_jF(x)|$; using Taylor's 2nd order formula we have

$$\Big|F(X_s) - F(X_{s-}) - \sum_i D_iF(X_{s-})\,\Delta X^i_s\Big| \le K\,|\Delta X_s|^2 \qquad (s \le T_n).$$

So for almost all $\omega$ the sum $\sum_{s\le t}\big|F(X_s) - F(X_{s-}) - \sum_i D_iF(X_{s-})\,\Delta X^i_s\big|$ converges for $t \le T_n$, since $\sum_{s\le t}|\Delta X_s|^2$ converges. Therefore for almost all $\omega$ the sums converge for all $t$.

Note also that there is no condition on $F$, except the continuity of the 2nd derivatives.
Proof. The proof will consist of simplifying the problem step by step. First note that the jump at time $s$ of the term on the right in Ito's formula is (using 5.17) $F(X_s) - F(X_{s-})$; this is also the jump at time $s$ of the term on the left. We will write down the proof for $n = 1$.

1) It is enough to prove Ito's formula when a) $F$, $F'$ and $F''$ are bounded, and b) $X = X_0 + M + A$ where $X_0$ is a bounded random variable, $M$ is a bounded martingale and $A$ a process with bounded variation.

By lemma 2.21, we can write $X = X_0 + N + B$, where $N$ is a martingale with bounded jump size and $B$ a process with finite variation. Let $R_n = +\infty$ if $|X_0| \le n$, $R_n = 0$ if $|X_0| > n$, and let

$$T_n = \inf\Big(t;\ |N_t| \ge n \ \text{or}\ \int_0^t |dB_s| \ge n\Big) \wedge R_n.$$

The sequence $(T_n)$ is a localizing sequence. Take

$$Y^n_0 = X_0 1_{\{|X_0|\le n\}}, \qquad M^n_t = N^{T_n}_t - \Delta N_{T_n} 1_{\{t\ge T_n\}}, \qquad A^n_t = B^{T_n}_t - \Delta B_{T_n} 1_{\{t\ge T_n\}}.$$

The r.v. $Y^n_0$ is a bounded, $\underline{F}_0$-measurable r.v., $M^n$ is a bounded martingale, and $A^n$ has bounded variation. The two processes $Y^n_t = Y^n_0 + M^n_t + A^n_t$ and $X_t$ coincide on $[0,T_n[$, and $[Y^{n,c},Y^{n,c}] = [X^c,X^c]$ on $[0,T_n[$; if Ito's formula is valid for $Y^n$, it will be valid for $X_t$, $\forall t < T_n$. As the two terms in Ito's formula have the same jump at $T_n$, Ito's formula is valid for $X_t$, $\forall t \le T_n$.

Now $Y^n_t$ and $Y^n_{t-}$ are two bounded processes, taking their values in a compact set $D$ of $\mathbb{R}$. Let $G$ be a twice continuously differentiable function with compact support such that $G = F$ on $D$. All we have to do is prove Ito's formula for $G$ and $Y^n$. So we can assume that $F$, $F'$ and $F''$ are bounded.

2) Furthermore we can assume that $M$ and $A$ have at most $N$ jumps.

The martingale $M$ is bounded, so there exists (proof of 4.12) a sequence of martingales $M^n$, each of the form $M^n = M^c + $ compensated sum of a finite number of jumps of $M$, such that $\sum_n \|M^n_\infty - M_\infty\|_{L^2} < +\infty$. This, by Doob's inequality 4.2, implies that for almost all $\omega$ the functions $t \to M^n_t$ converge uniformly to the function $t \to M_t$; and so $M^n_{t-} \to M_{t-}$ a.e.
For the process $A$, things are simpler: by 1.18 we can take $A^n = A^c + $ the sum of $n$ of the jumps of $A$. The process $A^c$ is the continuous "process with finite variation" part of $A$, and not the continuous local martingale part of $A$, which is $0$. As the variation of $A$ is bounded, there exists a subsequence $(n_k)$ such that $\sum_k E[\int |d(A^{n_k} - A)_s|] < +\infty$; let us call this sequence $A^n$ again. For almost all $\omega$ the trajectories $t \to A^n_t(\omega)$ converge uniformly to the trajectory $t \to A_t(\omega)$, and $A^n_{t-} \to A_{t-}$ a.e.

Suppose that Ito's formula is valid for each $X^n = X_0 + M^n + A^n$. The first term $F(X^n_0)$ is the term $F(X_0)$. The second term $F(X^n_t)$ converges a.e. to $F(X_t)$. For the third term we have, as $M$ and $M^n$ are square integrable,

$$\int_0^t F'(X^n_{s-})\,dX^n_s = \int_0^t F'(X^n_{s-})\,dM^n_s + \int_0^t F'(X^n_{s-})\,dA^n_s.$$

Now the process $[M-M^n,M-M^n]$ is an increasing process (by the definition of the $M^n$), and $(F'(X^n_{s-}) - F'(X_{s-}))^2$ converges to zero and remains bounded (remember that $F$, $F'$, $F''$ are bounded from now on). So

$$E\Big[\int_0^t \big(F'(X^n_{s-}) - F'(X_{s-})\big)^2\,d[M^n,M^n]_s\Big] \to 0.$$

For the other term we find that $\int_0^t F'(X^n_{s-})\,dM^n_s$ converges in $L^2$ to $\int_0^t F'(X_{s-})\,dM_s$. Quite similarly (in fact it is easier, as we work with Stieltjes integrals) we have that $\int_0^t F'(X^n_{s-})\,dA^n_s$ converges in $L^1$ to $\int_0^t F'(X_{s-})\,dA_s$.

As $[X^{n,c},X^{n,c}] = [X^c,X^c]$, and as $F''(X^n_{s-})$ converges to $F''(X_{s-})$ and remains bounded, the term $\tfrac{1}{2}\int_0^t F''(X^n_{s-})\,d[X^{n,c},X^{n,c}]_s$ converges in $L^1$ to $\tfrac{1}{2}\int_0^t F''(X_{s-})\,d[X^c,X^c]_s$ (remember that $X^c = M^c$ is square integrable, so that $E[[X^c,X^c]_t] < +\infty$).

That leaves $\sum_{s\le t}\big(F(X^n_s) - F(X^n_{s-}) - F'(X^n_{s-})\,\Delta X^n_s\big)$. Using Taylor's formula, the fact that $F''$ is bounded, and the fact that $\sum_s |\Delta M_s|^2$ and $\sum_s |\Delta A_s|$ converge for almost all $\omega$, we see that

$$\lim_{n\to+\infty}\ \sum_{s\le t}\big(F(X^n_s) - F(X^n_{s-}) - F'(X^n_{s-})\,\Delta X^n_s\big) = \sum_{s\le t}\big(F(X_s) - F(X_{s-}) - F'(X_{s-})\,\Delta X_s\big) \quad \text{a.s.}$$

Each term of Ito's formula for $X^n$ converges in probability to the corresponding term for $X$, so Ito's formula is valid for $X$.
3) It is enough to prove Ito's formula when $X_0$ is bounded, $F$, $F'$, $F''$ are bounded, and $M$ and $A$ are continuous.

We are already down to the case where $M$ and $A$ have at most $N$ jumps and $M$ is square integrable. Take $B_t = \sum_{s\le t}\Delta M_s$; this is a process with integrable variation (remember that in fact $B_t = \sum_{n=1}^{N}\Delta M_{S_n} 1_{\{t\ge S_n\}}$, and that each $\Delta M_{S_n}$ is in $L^2$). Let $\tilde B$ be its compensator; we get $M = M^c + B - \tilde B$. Let $C = A + B - \tilde B$; $C$ is a process with finite variation which has at most $2N$ jumps, and $X = X_0 + M^c + C$. Let $R_1, R_2, \dots, R_{2N}$ be the possible jump times of $C$; if Ito's formula is true for continuous $M$ and $A$, we can apply it between the successive jump times, add the jumps at times $R_k$ on both sides, and get Ito's formula for $X$.

4) Proof in the continuous case. So we can assume that $M$ and $A$ are continuous, and that $X_0$, $F$, $F'$ and $F''$ are bounded. And by localizing we can also assume that $M$, $A$ (and $[M,M]$) are bounded.
Taylor's formula gives

$$F(y) - F(x) = F'(x)(y-x) + \tfrac{1}{2}F''(x)(y-x)^2 + r(x,y)(y-x)^2,$$

and there exists a nondecreasing function $\varepsilon(t)$ with $\lim_{t\downarrow 0}\varepsilon(t) = 0$ such that $|r(x,y)| \le \varepsilon(|y-x|)$.

Choose a constant $a > 0$, and consider the stopping times $T_0 = 0$,

$$T_{i+1} = (T_i + a) \wedge \inf\Big(s > T_i;\ |M_s - M_{T_i}| > a,\ \text{or}\ [M,M]_s - [M,M]_{T_i} > a,\ \text{or}\ \int_{T_i}^s |dA_u| > a\Big) \wedge t.$$

For each $\omega$, we have $T_i(\omega) = t$ except for a finite number of $i$. Write $F(X_t) - F(X_0) = \sum_i \big(F(X_{T_{i+1}}) - F(X_{T_i})\big)$ and apply Taylor's formula to each term.

We have for the first sum

$$\sum_i F'(X_{T_i})\big(M_{T_{i+1}} - M_{T_i}\big) \to \int_0^t F'(X_s)\,dM_s \quad \text{in } L^2,$$

and the last quantity goes to zero when $a \to 0$, since $\sup_i \sup_{T_i\le s\le T_{i+1}}|F'(X_s) - F'(X_{T_i})|$ remains bounded and goes to zero. Similarly we get that $\sum_i F'(X_{T_i})(A_{T_{i+1}} - A_{T_i})$ converges in $L^1$ to $\int_0^t F'(X_s)\,dA_s$.

The term $\tfrac{1}{2}\sum_i F''(X_{T_i})(X_{T_{i+1}} - X_{T_i})^2$ splits into three terms. We have (where $C$ is an upper bound for $|F'|$ and $|F''|$)

$$\sum_i |F''(X_{T_i})|\,(A_{T_{i+1}} - A_{T_i})^2 \le C\,a\int_0^t |dA_s|,$$

so this term goes to zero with $a$. The double product is as easy to deal with. Now for the term with the squared martingale increments, we have, using the fact that $M^2 - [M,M]$ is a martingale,

$$E\Big[\Big(\sum_i F''(X_{T_i})\big((M_{T_{i+1}} - M_{T_i})^2 - ([M,M]_{T_{i+1}} - [M,M]_{T_i})\big)\Big)^2\Big]$$
$$\le 2C^2\,E\Big[\sum_i (M_{T_{i+1}} - M_{T_i})^4\Big] + 2C^2\,E\Big[\sum_i \big([M,M]_{T_{i+1}} - [M,M]_{T_i}\big)^2\Big]$$
$$\le 2C^2 a^2\,E[[M,M]_t] + 2C^2 a\,E[[M,M]_t] \to 0 \quad \text{when } a \to 0,$$

so $\sum_i F''(X_{T_i})(M_{T_{i+1}} - M_{T_i})^2$ has the same limit as $\sum_i F''(X_{T_i})([M,M]_{T_{i+1}} - [M,M]_{T_i})$, which converges to $\int_0^t F''(X_s)\,d[M,M]_s$.

That leaves the remainder term:

$$E\Big[\sum_i |r(X_{T_i},X_{T_{i+1}})|\,(X_{T_{i+1}} - X_{T_i})^2\Big] \le 2\varepsilon(2a)\,E\Big[[M,M]_t + a\int_0^t |dA_s|\Big] \to 0 \quad \text{when } a \to 0.$$

And we have proven Ito's formula.
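The estimate at the heart of step 4 — that sums of squared martingale increments converge to $[M,M]_t$ — can be watched directly on a Brownian path, for which $[M,M]_t = t$. A hedged sketch (the discretization levels are arbitrary, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.0
n_fine = 2 ** 20
dB = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)
B = np.concatenate([[0.0], np.cumsum(dB)])

for k in (2 ** 4, 2 ** 10, 2 ** 16):           # coarser subdivisions of one path
    step = n_fine // k
    qv = np.sum(np.diff(B[::step]) ** 2)       # sum of squared increments
    print(k, qv)                               # approaches [B,B]_T = T = 1
```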
6.2. Corollary. If $M$ is a local martingale, the process $M^2 - [M,M]$ is the local martingale $2\int_0^t M_{s-}\,dM_s$.

Proof. Ito's formula with $F(x) = x^2$.

6.3. Corollary. If $M$ is a semimartingale and $M_0 = 0$, then the process

$$Z_t = \exp\big(M_t - \tfrac{1}{2}[M^c,M^c]_t\big)\prod_{s\le t}(1+\Delta M_s)e^{-\Delta M_s}$$

is the only semimartingale verifying

$$Z_t = 1 + \int_0^t Z_{s-}\,dM_s.$$

Proof. Take $N_t = M_t - \tfrac{1}{2}[M^c,M^c]_t$, $K_t = \prod_{s\le t}(1+\Delta M_s)e^{-\Delta M_s}$ and apply Ito's formula to $Z_t = K_t\exp(N_t)$ to check that $Z$ is a solution; by the theorem of the next chapter it is the unique solution.
CHAPTER VII: STOCHASTIC DIFFERENTIAL EQUATIONS

Now that we have developed a nice theory of stochastic integration, the next step is to look at stochastic differential equations. We are going to state and prove the theorem with only one semimartingale, but one can similarly study systems of stochastic differential equations.

7.1. Theorem. Let $M$ be a semimartingale such that $M_0 = 0$, and let $H$ be a càdlàg adapted process. We consider a function $f(\omega,s,x)$ defined on $\Omega\times\mathbb{R}_+\times\mathbb{R}$ such that

(L1) for $\omega, s$ fixed, $f(\omega,s,\cdot)$ is a Lipschitz function with Lipschitz constant $K$;

(L2) for $s, x$ fixed, $f(\cdot,s,x)$ is $\underline{F}_s$-measurable;

(L3) for $x, \omega$ fixed, $f(\omega,\cdot,x)$ is a left continuous function with right limits.

Then the stochastic integral equation

$$X_t = H_t + \int_0^t f(\cdot,s,X_{s-})\,dM_s$$

has one and only one solution $(X_t)$ which is a càdlàg adapted process.

Before proving the theorem let us show that $\int_0^t f(\omega,s,X_{s-}(\omega))\,dM_s(\omega)$ exists.

7.1'. Lemma. If $X$ is an adapted, càdlàg process, the process $(s,\omega)\to f(\omega,s,X_{s-}(\omega))$ is adapted, left continuous and has right limits (so it is a predictable locally bounded process).

Proof. For $t$ fixed, the function $(\omega,x)\to f(\omega,t,x)$ is $\underline{F}_t\times\mathcal{B}(\mathbb{R})$-measurable (by (L2) and the continuity in $x$, (L1)). So $f(\omega,t,X_{t-}(\omega))$ is $\underline{F}_t$-measurable. The left continuity and the existence of right limits are easy to show.
Proof of Theorem 7.1. Let us try the classical proof for non-stochastic differential equations on the stochastic integral equation, in the following easy case:

(A1) $M$ is of the form $M = N + A$, where $N$ is a local martingale, $A$ is a process with finite variation, and $[N,N]$ and $B = \int_0^\infty |dA_s|$ are both bounded by a constant $b$;

(A2) $|f(\omega,s,0)| \le c$ for all $(\omega,s)$.

Let $\underline{H} = \{$càdlàg adapted processes $X$ such that $X^* = \sup_t |X_t|$ is in $L^2\}$. On $\underline{H}$ we take the norm $\|X\| = \|X^*\|_{L^2}$. We consider for each $X \in \underline{H}$ the process

$$(WX)_t = \int_0^t f(\cdot,s,X_{s-})\,dM_s.$$

7.2. Lemma. The process $WX$ is in $\underline{H}$, and if $X$ and $Y$ are in $\underline{H}$,

$$\|WX - WY\| \le h\,\|X - Y\|, \qquad \text{where } h = K(2\sqrt{b} + b).$$

Proof. $(W0)_t = \int_0^t f(\cdot,s,0)\,dN_s + \int_0^t f(\cdot,s,0)\,dA_s$. Let $L_t = \int_0^t f(\cdot,s,0)\,dN_s$; $L$ is a local martingale and $E[[L,L]_\infty] \le c^2 b$, so by Doob's inequality 4.2, $E[(L^*)^2] \le 4c^2 b$. Let $V_t = \int_0^t f(\cdot,s,0)\,dA_s$; we have $E[(V^*)^2] \le c^2 b^2$. So the process $W0$ is in $\underline{H}$.

Let $X$ and $Y$ be in $\underline{H}$, and $Z = X - Y$. Take again $L'_t = \int_0^t [f(\cdot,s,X_{s-}) - f(\cdot,s,Y_{s-})]\,dN_s$ and $V'_t = \int_0^t [f(\cdot,s,X_{s-}) - f(\cdot,s,Y_{s-})]\,dA_s$; we get $\|L'^*\| \le 2K\sqrt{b}\,\|Z^*\|$ and $V'^* \le Kb\,Z^*$. So

$$\|WX - WY\| \le K\,\|X - Y\|\,(b + 2\sqrt{b}).$$

As $W0 \in \underline{H}$, this implies that for any $X \in \underline{H}$, $WX \in \underline{H}$.

7.3. Lemma. If $M$ and $f$ satisfy conditions (A1) and (A2), and if $h = K(b + 2\sqrt{b}) < 1$, there exists one and only one adapted càdlàg process $X_t$ which is solution of

$$X_t = \int_0^t f(\cdot,s,X_{s-})\,dM_s.$$

Proof. There is one and only one solution $X$ in $\underline{H}$, as $h < 1$ and $W$ is a contraction. If $Z$ is a càdlàg adapted process and $Z_t = \int_0^t f(\cdot,s,Z_{s-})\,dM_s$, consider the times $T_n = \inf(t;\ |Z_t| > n)$. The jump of $Z$ at time $T_n$ is $\Delta Z_{T_n} = f(\cdot,T_n,Z_{T_n-})\,\Delta M_{T_n}$, which, by (L1), (L2) and the fact that $|\Delta M_{T_n}| \le \sqrt{b} + 2b$, gives

$$|Z^{T_n}| \le n + (c + nK)(\sqrt{b} + 2b).$$

The process $Z$ is therefore locally bounded, hence locally in $\underline{H}$, and by the uniqueness of the solution in $\underline{H}$ we have $Z = X$.

7.4. Lemma. If $M$ satisfies (A1) and if $h = K(b + 2\sqrt{b}) < 1$, then there exists one and only one adapted càdlàg process $X_t$ which is solution of $X_t = \int_0^t f(\cdot,s,X_{s-})\,dM_s$.

Proof. Let $T_n(\omega) = \inf(t;\ |f(\omega,t,0)| > n)$, and let $f_n(\omega,t,x) = f(\omega,t,x)\,1_{\{0\le t\le T_n\}}$. The functions $f_n$ satisfy (A2) with $c = n$. And each stochastic integral equation $Z^n_t = \int_0^t f_n(\cdot,s,Z^n_{s-})\,dM_s$ has a unique solution $Y^n$. As $f_{n+1}(\omega,t,x) = f_n(\omega,t,x)$ on $[0,T_n]$, we have $Y^{n+1} = Y^n$ on $[0,T_n]$. An adapted càdlàg process $X$ is solution of $X_t = \int_0^t f(\cdot,s,X_{s-})\,dM_s$ if and only if for each $T_n$ we have $X^{T_n}_t = \int_0^t f_n(\cdot,s,X^{T_n}_{s-})\,dM^{T_n}_s$. It is now easy to see that the process $X$ defined as $X = Y^n$ on $[0,T_n]$ is the unique solution of $X_t = \int_0^t f(\cdot,s,X_{s-})\,dM_s$.
7.5. Lemma. If $M$ satisfies (A1), if $h = K(b + 2\sqrt{b}) < 1$, and if $H_t$ is a càdlàg adapted process, then the stochastic integral equation

$$(*)\qquad X_t = H_t + \int_0^t f(\cdot,s,X_{s-})\,dM_s$$

has a unique càdlàg adapted solution.

Proof. $X$ is solution of (*) if and only if $Y = X - H$ is solution of $Y_t = \int_0^t f(\cdot,s,Y_{s-} + H_{s-})\,dM_s$. Take the function $g(\omega,t,x) = f(\omega,t,x + H_{t-}(\omega))$; it satisfies properties (L1), (L2) and (L3) with the same Lipschitz constant $K$. So just apply lemma 7.4.
7.6. Lemma. If $H$ is a càdlàg adapted process, if $M$ is a semimartingale whose jumps verify $|\Delta M_t| \le a$, if $b < 1$ and if $h = K(b + 2\sqrt{b}) < 1$, then the stochastic integral equation

$$X_t = H_t + \int_0^t f(\cdot,s,X_{s-})\,dM_s$$

has one and only one càdlàg adapted solution.

Proof. By 2.24 we can write $M$ as $M = N + A$, where $N$ is a local martingale and $A$ is a predictable process with locally bounded variation. The process $A$ jumps only at predictable times: for a predictable time $T$ we have $\Delta A_T = E[\Delta M_T\,|\,\underline{F}_{T-}]$, so $|\Delta A_T| \le a$, and therefore $|\Delta N_t| \le |\Delta M_t| + |\Delta A_t| \le 2a$; the jumps $\Delta[N,N]_t = (\Delta N_t)^2$ are bounded (one here should really localize to be sure that $N$ is a uniformly integrable martingale and that the variation of $A$ is integrable).

Let $D_t = [N,N]_t + \int_0^t |dA_s|$ and define the stopping times $T_0 = 0$,

$$T_{n+1} = \inf(t > T_n;\ D_t - D_{T_n} \ge b).$$

We then have $D_{T_{n+1}-} - D_{T_n} \le b$ for any $n$, so on each interval the hypotheses of lemma 7.5 are satisfied. By lemma 7.5 we have one and only one solution on $[0,T_1]$ for the stochastic integral equation $X_t = H_t + \int_0^t f(\cdot,s,X_{s-})\,dM_s$; call it $X^1$. Now look on $]T_1,T_2]$ at the equation, and use lemma 7.5 again to get a unique solution $X^2$; define similarly the $X^n$ on $]T_{n-1},T_n]$. Patch them together; it is then easy to see that the process $X$ thus defined is the unique càdlàg adapted solution.
Proof of theorem 7.1. Let $M$ be a semimartingale, let $a$ and $b$ be as in lemma 7.6, and let $T_1 < T_2 < \dots$ be the successive times at which $M$ has jumps of size $|\Delta M_t| > a$. We consider the semimartingale

$$M^1_t = M_{t\wedge T_1} - \Delta M_{T_1}\,1_{\{t\ge T_1\}}.$$

A càdlàg adapted process $X_t$ is solution on $[0,T_1[$ of $X_t = H_t + \int_0^t f(\cdot,s,X_{s-})\,dM_s$ if and only if it is solution on $[0,T_1[$ of $X_t = H_t + \int_0^t f(\cdot,s,X_{s-})\,dM^1_s$; the value at $T_1$ is then forced by adding the jump $f(\cdot,T_1,X_{T_1-})\,\Delta M_{T_1}$. But the semimartingale $M^1$ satisfies the conditions of lemma 7.6, so this last stochastic integral equation has one and only one càdlàg adapted solution on $[0,T_1]$; do the same on each $]T_n,T_{n+1}]$ and you get theorem 7.1.

Recently Protter and Emery have studied the stability of solutions of stochastic differential equations. The interested reader will find the results in [7], [8], [9] and [13].
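The contraction argument behind lemmas 7.2-7.3 can be watched at work numerically: iterating the map $W$ on a discretized path makes successive iterates approach each other roughly geometrically in the sup norm. The sketch below uses a Brownian driver, a small Lipschitz constant so that the analogue of $h < 1$ holds, and the illustrative choices $H \equiv 1$ and $f(x) = K\tanh x$; the helper names `picard_step` and `f` are mine, not from the text.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, K = 50_000, 1.0, 0.2
dM = rng.normal(0.0, np.sqrt(T / n), n)          # Brownian driver M
H = np.ones(n + 1)                               # illustrative cadlag driver H_t = 1

def f(x):
    return K * np.tanh(x)                        # Lipschitz with constant K

def picard_step(X):
    """(WX)_t = H_t + sum over s < t of f(X_s) dM_s (left endpoints)."""
    out = np.empty(n + 1)
    out[0] = H[0]
    out[1:] = H[1:] + np.cumsum(f(X[:-1]) * dM)
    return out

X = H.copy()
gaps = []
for _ in range(8):
    X_next = picard_step(X)
    gaps.append(np.max(np.abs(X_next - X)))      # sup-norm distance of iterates
    X = X_next
print(gaps)                                      # shrinks roughly geometrically
```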
Bibliography

[1] C. Dellacherie. Capacités et processus stochastiques. Springer, 1972.

[2] C. Dellacherie et P. A. Meyer. Probabilités et potentiel, 1er volume. Hermann, 1975.

[3] C. Dellacherie et P. A. Meyer. Probabilités et potentiel, 2ème volume (to appear).

[4] C. Doléans. Existence du processus croissant naturel associé à un potentiel de la classe (D). Z. für Wahrscheinlichkeitstheorie 9, 1967.

[5] C. Doléans-Dade et P. A. Meyer. Équations différentielles stochastiques. Séminaire de Probabilités XI, Lecture Notes 581. Springer, 1977.

[6] J. L. Doob. Stochastic processes. J. Wiley and Sons, 1953.

[7] M. Emery. Stabilité des solutions des équations différentielles stochastiques; application aux intégrales multiplicatives stochastiques. Z. für Wahrscheinlichkeitstheorie 41, 1978.

[8] M. Emery. Une topologie sur l'espace des semimartingales (to appear).

[9] M. Emery. Équations différentielles lipschitziennes: la stabilité (to appear).

[10] H. Kunita and S. Watanabe. On square integrable martingales. Nagoya Math. J. 30, 1967.

[11] P. A. Meyer. Un cours sur les intégrales stochastiques. Séminaire de Probabilités X, Lecture Notes 511. Springer, 1976.

[12] P. A. Meyer. Probabilités et potentiel. Hermann, 1966.

[13] P. E. Protter. Stability of solutions of stochastic differential equations (to appear).

[14] M. Rao. On decomposition theorems of Meyer. Math. Scand. 24, 1969.
CENTRO INTERNAZIONALE MATEMATICO ESTIVO (C.I.M.E.)

STOCHASTIC DIFFERENTIAL EQUATIONS AND APPLICATIONS

Avner Friedman
Northwestern University, Evanston, Illinois

§1. Brownian motion
§2. The stochastic integral
§3. Ito's formula
§4. Stochastic differential equations
§5. Probabilistic interpretation of boundary value problems
§6. Stopping time problems and variational inequalities
§7. Stochastic switching and nonlinear elliptic equations
§8. Probabilistic methods in singular perturbations
§1. Brownian motion

A stochastic process $x(t)$, $t \in I$, is a family of random variables $x(t)$ defined on a measure space $(\Omega,\mathcal{F})$ or on a probability space $(\Omega,\mathcal{F},P)$; here $x(t)$ is either real valued or $n$-vector valued, and $I$ is an interval, usually $[0,\infty)$. Notice that $x(t)$ is a function $x(t,\omega)$, $\omega\in\Omega$. The function $t\to x(t,\omega)$ is called a sample path. If for a.e. $\omega$ the sample path is continuous (right continuous), we say that the process $x(t)$ is continuous (right continuous).

A process $y(t)$ is said to be a version of $x(t)$ if

$$P(x(t) \ne y(t)) = 0 \qquad \forall t.$$

Theorem 1 (Kolmogorov). If

$$E|x(t) - x(s)|^{\beta} \le C\,|t-s|^{1+\alpha}$$

for some positive constants $C$, $\beta$, $\alpha$, then there is a continuous version of $x(t)$.

A process $x(t)$, $t\in I$, is called separable if there exist a sequence $\{t_j\}$ dense in $I$ and a subset $N\subset\Omega$ of probability $0$ such that, if $\omega\notin N$, then for any open set $J\subset I$ and any closed set $F\subset\mathbb{R}^1$,

$$x(t_j,\omega)\in F \ \ \forall t_j\in J \quad\Longrightarrow\quad x(t,\omega)\in F \ \ \forall t\in J.$$

It is known (Doob) that any stochastic process $x(t)$ has a separable version.
It is not difficult to show that if $x(t)$ is separable then $\sup_{t\in J} x(t)$, $\liminf_{t\to t_0} x(t)$, etc., are measurable.

Definition. $x(t)$ is a martingale (submartingale) if $E|x(t)| < \infty$ for all $t$, and

$$E\big(x(t)\,\big|\,x(\tau),\ \tau\le s\big) = x(s) \quad (\ge x(s)) \qquad \text{if } t > s.$$

The martingale inequality: if $x(t)$ is a separable submartingale, then

$$P\Big\{\sup_{s\le t} x(s) > \lambda\Big\} \le \frac{1}{\lambda}\,E[x(t)^+].$$

If $x(t)$ is a separable martingale and if $E|x(t)|^{\alpha} < \infty$ ($\alpha \ge 1$), then $|x(t)|^{\alpha}$ is a submartingale; consequently

$$P\Big\{\sup_{s\le t}|x(s)| > \lambda\Big\} \le \frac{1}{\lambda^{\alpha}}\,E|x(t)|^{\alpha}.$$

Let $\mathcal{F}_t$ be an increasing family of $\sigma$-fields, $t \ge 0$. A random variable $\tau$ with range in $[0,\infty]$ is called a stopping time with respect to $\mathcal{F}_t$ if $\{\tau \le t\} \in \mathcal{F}_t$ for all $t$. If $\mathcal{F}_t = \sigma(x(s),\ s \le t)$, we say that $\tau$ is a stopping time with respect to $x(t)$.

Theorem 2. If $x(t)$ is a right continuous martingale and $\tau$ a stopping time with respect to $x(t)$, then $y(t) = x(t\wedge\tau)$ is also a right continuous martingale.
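The martingale inequality with $\alpha = 2$ can be checked by Monte Carlo for a Brownian motion (which is a martingale): $P\{\sup_{s\le t}|x(s)| > \lambda\} \le E|x(t)|^2/\lambda^2 = t/\lambda^2$. An illustrative sketch, not from the lectures; the grid and sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
paths, n, t, lam = 20_000, 500, 1.0, 2.0
dB = rng.normal(0.0, np.sqrt(t / n), (paths, n))
B = np.cumsum(dB, axis=1)                        # x(s) sampled on a grid

lhs = np.mean(np.max(np.abs(B), axis=1) > lam)   # P(sup_{s<=t} |x(s)| > lambda)
rhs = t / lam ** 2                               # E|x(t)|^2 / lambda^2
print(lhs, rhs)                                  # lhs should not exceed rhs
```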
Let us fix the following model of $(\Omega,\mathcal{F})$: $\omega$ varies over the space $\Omega = M[0,\infty)$ of measurable functions from $[0,\infty)$ into $\mathbb{R}^1$; $\mathcal{F}^s_t$ is the $\sigma$-field generated by $\omega(u)$, $s \le u \le t$; $\mathcal{F}^s = \mathcal{F}^s_\infty$; $\mathcal{F}^s_{t+} = \bigcap_{\varepsilon>0}\mathcal{F}^s_{t+\varepsilon}$.

Let $p(s,x,t,A)$ be a nonnegative function defined for $0 \le s < t < \infty$, $x\in\mathbb{R}^1$, $A$ any Borel set in $\mathbb{R}^1$, satisfying:

(i) $x \to p(s,x,t,A)$ is Borel measurable;

(ii) $A \to p(s,x,t,A)$ is a probability measure;

(iii) $p(s,x,t,A) = \int_{\mathbb{R}^1} p(s,x,\lambda,dy)\,p(\lambda,y,t,A)$ $\quad(s < \lambda < t)$

(the Chapman-Kolmogorov equation).

Theorem 3. Under the foregoing assumptions (i)-(iii), there exists a family of probabilities $P_{x,s}$ such that the process $x(t)$, with $x(t,\omega) = \omega(t)$, satisfies

(1) $P_{x,s}(x(s) = x) = 1$,

(2) $P_{x,s}\big(x(t+h)\in A\,\big|\,\mathcal{F}^s_t\big) = p(t,x(t),t+h,A)$ a.s.

The collection $\{\Omega,\mathcal{F}^s,\mathcal{F}^s_t,x(t),P_{x,s}\}$ is called a Markov process, and $p$ is called the transition probability function.
To prove the theorem, one introduces the multi-dimensional distribution functions

$$F_{t_0 t_1\cdots t_m}(A_0\times A_1\times\cdots\times A_m) = \int_{A_0}\pi(dx_0)\int_{A_1}p(t_0,x_0,t_1,dx_1)\cdots\int_{A_m}p(t_{m-1},x_{m-1},t_m,dx_m),$$

where $\pi(dx)$ is a distribution function on $\mathbb{R}^1$. Property (iii) ensures that the family $F_{t_0\cdots t_m}$ is consistent, in the sense that if $A_i = \mathbb{R}^1$ then the variable $t_i$ can be deleted. By the Kolmogorov construction there exists, then, a probability $P_{\pi,s}$ such that the finite dimensional distribution functions of $x(t)$ coincide with $F_{t_0 t_1\cdots t_m}$, $t_0 = s$. The $P_{x,s}$ are obtained by choosing $\pi$ to be the Dirac measure supported at $x$. The verification of (1) and (2) will be omitted.

Taking the expectation in (2), we obtain (with $t = s$ and $t+h$ replaced by $t$)

$$E_{x,s}\,f(x(t)) = \int p(s,x,t,dy)\,f(y)$$

for $f = I_A$, and hence for any bounded Borel measurable $f$. The property (2) is called the Markov property; it can be recast in the form

$$E_{x,s}\big[f(x(t+h))\,\big|\,\mathcal{F}^s_t\big] = E_{x(t),t}\,f(x(t+h)), \qquad \forall f \text{ bounded, Borel measurable}.$$

The more general property

$$E_{x,s}\big[f(x(t+\tau))\,\big|\,\mathcal{F}^s_\tau\big] = \int p(\tau,x(\tau),t+\tau,dy)\,f(y) = E_{x(\tau),\tau}\,f(x(t+\tau)), \qquad \forall f \text{ bounded, Borel measurable},$$

is called the strong Markov property; here $\tau$ is any stopping time with respect to $\mathcal{F}^s_t$, $t \ge s$, and $\mathcal{F}^s_\tau$ is the $\sigma$-field of all sets $A \in \mathcal{F}^s$ such that

$$A\cap\{\tau\le t\}\in\mathcal{F}^s_t \qquad \forall t \ge s.$$

The strong Markov property is a very useful tool, and we shall therefore give a general condition under which it is satisfied.

Definition. If for every $\lambda > 0$ and $f$ bounded continuous, the function

$$(t,z)\to\int_{\mathbb{R}} p(t,z,t+\lambda,dy)\,f(y)$$

is continuous, then we say that $p$ satisfies the Feller property.

Theorem 4. A right continuous Markov process with $p$ satisfying the Feller property satisfies the strong Markov property.

If $p(s,x,t,A) = p(0,x,t-s,A) \equiv p(t-s,x,A)$, then we speak of a time-homogeneous Markov process and write $P_x = P_{x,0}$. The strong Markov property becomes:

$$E_x\big[f(x(t+\tau))\,\big|\,\mathcal{F}_\tau\big] = E_{x(\tau)}\,f(x(t)).$$
Problems.

1. Let $x(t)$ be an $n$-dimensional continuous process. Let $\tau$ be the first time $x(t)$ hits a given closed set $A\subset\mathbb{R}^n$. Prove that $\tau$ is a stopping time.

2. If $\tau$, $\sigma$ are stopping times then $\tau\wedge\sigma$ is a stopping time.

3. Let

$$p(s,x,t,A) = \int_A \frac{1}{\sqrt{2\pi(t-s)}}\,\exp\Big(-\frac{(y-x)^2}{2(t-s)}\Big)\,dy. \tag{7}$$

Prove that this function satisfies the Chapman-Kolmogorov equation.

The corresponding Markov process is called a Brownian motion (or a Wiener process). One can verify directly that, in this case, if $\pi(dx)$ is the Dirac measure at $x_0$, then the joint distribution of $(x(t_1),\dots,x(t_m))$ under $P_{x_0}$ is normal, with covariances $r_{ij} = \min(t_i,t_j)$ and with $(\Gamma_{ij})$ the inverse of $(r_{ij})$ appearing in the density.

Random variables $X_1,\dots,X_m$ are said to have joint normal distribution $N(0,\Gamma)$ if

$$E\,\exp\Big(i\sum_j\lambda_j X_j\Big) = \exp\Big(-\tfrac{1}{2}\sum_{j,k}\Gamma_{jk}\lambda_j\lambda_k\Big).$$

It then follows that $\Gamma_{jk} = E\,X_jX_k$. If $\det\Gamma\ne 0$ then the joint density function $p(y)$ of $X_1,\dots,X_m$ is given by

$$p(y) = (2\pi)^{-m/2}(\det\Gamma)^{-1/2}\exp\Big(-\tfrac{1}{2}\sum_{j,k}(\Gamma^{-1})_{jk}\,y_jy_k\Big).$$

Using these facts we discover that for a Brownian motion:

(8) the increments $x(t_1)-x(t_0),\ \dots,\ x(t_m)-x(t_{m-1})$ are independent if $t_0 < t_1 < \dots < t_m$;

(9) $x(t)-x(s)$ is normally distributed with mean $0$ and variance $t-s$.

Thus, in particular, $E(x(t)-x(s)) = 0$ and $E(x(t)-x(s))^2 = t-s$. One easily computes

$$E|x(t)-x(s)|^4 = 3\,|t-s|^2.$$

Hence, by Theorem 1, $x(t)$ has a continuous version; from now on we always work with this version.
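Properties (8), (9) and the fourth-moment computation just used with Theorem 1 are easy to check by simulation: disjoint increments of a sampled path are uncorrelated, $E(x(t)-x(s))^2 \approx t-s$, and $E|x(t)-x(s)|^4 \approx 3|t-s|^2$. A hedged sketch (grid and sample sizes are illustrative, not from the lectures):

```python
import numpy as np

rng = np.random.default_rng(6)
paths, n, T = 50_000, 100, 1.0
dB = rng.normal(0.0, np.sqrt(T / n), (paths, n))
B = np.cumsum(dB, axis=1)

i_s, i_t = 30, 80                                # s = 0.3, t = 0.8 on the grid
inc1 = B[:, i_s - 1]                             # x(s) - x(0)
inc2 = B[:, i_t - 1] - B[:, i_s - 1]             # x(t) - x(s)
d = (i_t - i_s) * T / n                          # t - s = 0.5

print(np.mean(inc2))                             # ~ 0
print(np.mean(inc2 ** 2), d)                     # ~ t - s
print(np.mean(inc2 ** 4), 3 * d ** 2)            # ~ 3 (t - s)^2
print(np.corrcoef(inc1, inc2)[0, 1])             # ~ 0 (independent increments)
```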
The fact that $x(t,\omega)$ is continuous in $t$, for a.a. $\omega$, means that the set of points $\omega$ for which $x(t,\omega)$ is not continuous for all $t \ge 0$ is of $P_x$ measure $0$. Thus, we may remove this set from our space without affecting anything. From now on we work with this slightly revised space; we continue to denote it by $\Omega$, but now the points $\omega$ of $\Omega$ are elements of $C[0,\infty)$, and the measurable sets are defined in the obvious way.

Theorem 5. Almost all sample paths of a Brownian motion are Hölder continuous with any exponent $\alpha < 1/2$, and nowhere Hölder continuous with any exponent $\alpha \ge 1/2$.

We also mention the iterated logarithm laws:

$$\limsup_{t\downarrow 0}\frac{x(t)}{\sqrt{2t\log\log(1/t)}} = 1 \quad \text{a.s.}, \qquad \limsup_{t\uparrow\infty}\frac{x(t)}{\sqrt{2t\log\log t}} = 1 \quad \text{a.s.}$$
Problems.

4. If $\tau$ is a stopping time for a Brownian motion $x(t)$, then $y(t) = x(t+\tau)-x(\tau)$ is a Brownian motion independent of $\mathcal{F}_\tau$.

5. If $x(t)$ is a process satisfying (8), (9), then its continuous version is a Brownian motion, and its transition probability function is given by (7).

6. If $x(t)$ is a Brownian motion then

$$E\big[(x(t)-x(s))^2\,\big|\,\mathcal{F}_s\big] = t-s \quad \text{a.s.}$$

(The converse is also true, namely, if $x(t)$ is a continuous martingale and if $x^2(t)-t$ is a martingale, then $x(t)$ satisfies (8), (9) and is thus a Brownian motion.)

An $n$-dimensional Brownian motion $x(t) = (x_1(t),\dots,x_n(t))$ is defined analogously to a 1-dimensional Brownian motion. Thus, in (7) we take $x\in\mathbb{R}^n$ and $A$ to be any Borel set in $\mathbb{R}^n$. In terms of the components of the process $x(t)$, each $x_i(t)$ corresponds to a 1-dimensional Brownian motion, and the processes $x_i(t)$, $t \ge 0$, are mutually independent.
§2. The stochastic integral

We take a 1-dimensional Brownian motion and denote it by $w(t)$; the probability and expectation corresponding to $w(0) = 0$ will be denoted by $P$ and $E$.

Let $\mathcal{F}_t$ be an increasing family of $\sigma$-fields ($t \ge 0$) such that $\sigma(w(\lambda+t)-w(t),\ \lambda > 0)$ is independent of $\mathcal{F}_t$. For example, if $\mathcal{F}$ is a $\sigma$-field independent of the Brownian motion, we can take $\mathcal{F}_t$ to be the $\sigma$-field generated by $\sigma(w(s),\ s \le t)$ and $\mathcal{F}$.

Denote by $\mathcal{B}[0,T]$ the Borel $\sigma$-field of the interval $[0,T]$.

Definition. A stochastic process $f(t)$, $0 \le t \le T$, is nonanticipative with respect to (or adapted to) $\mathcal{F}_t$ if

(i) $\forall t\in[0,T]$, $f(t)$ is separable and $\mathcal{F}_t$ measurable;

(ii) $\forall T_0\in(0,T]$ the function $(t,\omega)\to f(t,\omega)$ from $[0,T_0]\times\Omega$ is $\mathcal{B}[0,T_0]\times\mathcal{F}_{T_0}$ measurable.

If, in addition,

$$\int_0^T f^2(t)\,dt < \infty \ \text{a.s.} \qquad \Big(\text{resp. } E\int_0^T f^2(t)\,dt < \infty\Big),$$

then we say that $f$ belongs to $L^2_w[0,T]$ (resp. $M^2_w[0,T]$).

A step function is a stochastic process $f(t)$ for which there exists a partition $\{t_i\}$ of the $t$-interval such that $f(t) = f(t_i)$ if $t_i \le t < t_{i+1}$.
Lemma 1.  Let f ∈ L²_w[0,T].  Then there exist sequences of functions
g_n ∈ L²_w[0,T], h_n ∈ L²_w[0,T] such that

(i)  g_n is continuous and  lim_{n→∞} ∫_0^T |f(t) − g_n(t)|² dt = 0  a.s.;

(ii) h_n is a step function and  lim_{n→∞} ∫_0^T |f(t) − h_n(t)|² dt = 0  a.s.

If in addition f ∈ M²_w[0,T], then the above assertions are valid
with L²_w replaced by M²_w and with the a.s. convergence of the
integrals replaced by convergence of their expectations.

Definition.  If f is a step function in L²_w[0,T], with f(t) = f_i
when t_i ≤ t < t_{i+1}, the sum

(1)    Σ_i f_i ( w(t_{i+1}) − w(t_i) )

is denoted by ∫_0^T f(t) dw(t) and is called the stochastic integral
of f with respect to the Brownian motion w(t).
Recall that almost every sample path t → w(t,ω) is not of bounded
variation.  Therefore we cannot define the integral pathwise, for any
(say) bounded measurable f.  The definition which we shall soon
introduce is based on approximation of f by step functions and on the
particular definition (1).  (If, for instance, we change (1) just
"slightly" by taking Σ_k f(t_{k+1})(w(t_{k+1}) − w(t_k)), then the
resulting definition for (2) will be quite different.)
Lemma 2.  If f is a step function in M²_w[0,T], then

    E ∫_0^T f(t) dw(t) = 0,
    E ( ∫_0^T f(t) dw(t) )² = E ∫_0^T f²(t) dt.

The proof is direct.

Lemma 3.  If f is a step function in L²_w[0,T], then for any ε > 0,
N > 0,

    P[ | ∫_0^T f(t) dw(t) | > ε ]  ≤  N/ε²  +  P[ ∫_0^T f²(t) dt > N ].

Proof.  Define f_N(t) = f(t) if t_k < t ≤ t_{k+1} and
Σ_{j=0}^k f²(t_j)(t_{j+1} − t_j) ≤ N, and f_N(t) = 0 otherwise.  Then
f_N is a step function in M²_w[0,T] with E ∫_0^T f_N²(t) dt ≤ N, and

    P[ ∫_0^T f dw ≠ ∫_0^T f_N dw ]  ≤  P[ ∫_0^T f²(t) dt > N ],

since f(t) = f_N(t) for all 0 ≤ t < s if ∫_0^s f²(t) dt ≤ N.
Estimate now the first term on the right by Chebyshev's inequality.

Let f ∈ L²_w[0,T] and choose f_n step functions in L²_w[0,T] such
that

    ∫_0^T |f(t) − f_n(t)|² dt → 0   in probability.

By Lemma 3, the sequence ∫_0^T f_n(t) dw(t) is convergent in
probability.  We denote the limit by ∫_0^T f(t) dw(t) and call it the
stochastic integral (or the Itô integral) of f(t) with respect to the
Brownian motion w(t).  The above definition is clearly independent of
the approximation f_n.

Theorem 4.  The assertions of Lemmas 2 and 3 are valid for any f in
M²_w[0,T] and L²_w[0,T], respectively.
This follows by approximation.

Problems.

1.  One can define ∫_α^β f(t) dw(t) in the obvious way, for
f ∈ L²_w[α,β] or M²_w[α,β].  Prove that if f ∈ M²_w[α,β], then

    E ∫_α^β f(t) dw(t) = 0,
    E ( ∫_α^β f(t) dw(t) )² = E ∫_α^β f²(t) dt.

2.  If f, g belong to L²_w[α,β] and if f(t,ω) = g(t,ω) for all
α ≤ t ≤ β and all ω in a set Ω₀, then

    ∫_α^β f(t) dw(t) = ∫_α^β g(t) dw(t)   a.s. on Ω₀.

3.  If f ∈ L²_w[α,β] and f is continuous, then for any sequence of
partitions (t_{n,1},...,t_{n,m_n}) of [α,β] with mesh → 0,

    Σ_k f(t_{n,k}) ( w(t_{n,k+1}) − w(t_{n,k}) )
        → ∫_α^β f(t) dw(t)   in probability.
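The convergence of left-endpoint sums, and the sensitivity to the
endpoint choice noted after (1), can both be seen numerically.  The
sketch below (discretization and seed are our own choices) evaluates
the left- and right-endpoint sums of w dw on a sampled path; the Itô
(left-endpoint) sums approach (w(T)² − T)/2, while the right-endpoint
sums exceed them by the quadratic variation, which tends to T.

```python
import math
import random

rng = random.Random(1)
T, n = 1.0, 100_000
dt = T / n

# Sample a Brownian path w(0), w(dt), ..., w(T).
w = [0.0]
for _ in range(n):
    w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))

# Left-endpoint sums, matching definition (1): the Ito integral of
# w dw, whose closed form (derived in the next section) is (w(T)^2 - T)/2.
ito = sum(w[k] * (w[k + 1] - w[k]) for k in range(n))

# Right-endpoint sums: they exceed the Ito sums by the sum of squared
# increments, which tends to T, so the limit is genuinely different.
right = sum(w[k + 1] * (w[k + 1] - w[k]) for k in range(n))

print(ito, right)
```

With the seed above, ito is close to (w(T)² − 1)/2 and right − ito is
close to 1.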
) of [ a , f! ] with mesh
he or em 5. Let £E<[o,TI.
-
0
Then the process
is a martingale and, further, it has a continuous version. In the future we work only with this version and call
jt
the indefinite integral. Proof.
First take ~ E M2 ~ C O , Tand I approximate it by step
functions fn as in Lemma 1. Let
By Problem 1, I (t) is a martingale. n inequality,
Hence, by the martingale
Taking
k
= 112
it follows that for some nk sufficiently large,
We can choose the nk in such a way that nk t if k
-
Since Ck 2 <
a,
t.
Hence,
the Borel-~antellilemma implies that (t) 1 >
p[ sup 1.1 (t)-In
OrtrT nk
L
1-0-3= 0 -
2
k+l
i.e., for a.a. w
1I
(t)] 5 -T; for all 0 5 t 5 T, if k > k0(w). ( - 1 k k+l 2 1
But then, with probability one, $ 1 (t)? is uniformly connk vergent in t€[O,Tl. The limit J(t) is therefore a continuous function in ~ECO,TI for a.a, w.
it follows that
Since
Thus the indefinite integral has a continuous version when
f ∈ M²_w[0,T].  Consider now the general case where f ∈ L²_w[0,T].
For any N > 0, let

    τ_N = inf { t : ∫_0^t f²(s) ds ≥ N }   (τ_N = T if no such t),

and introduce the function f_N(t) = f(t) if t ≤ τ_N, f_N(t) = 0
otherwise.  It is easily checked that f_N belongs to M²_w[0,T].
Hence, by what was already proved, a version J_N(t) of
∫_0^t f_N(s) dw(s) is a continuous process.  Let

    Ω_N = { ω : ∫_0^T f²(s) ds < N }.

If ω ∈ Ω_N, then f_M(t) = f(t) for 0 ≤ t ≤ T and all M ≥ N.  By
Problem 2 it follows that, for a.a. ω ∈ Ω_N,

    J(t) = lim_{M→∞} J_M(t)

exists and is continuous in t ∈ [0,T].  Since P(Ω_N) ↑ 1 if N ↑ ∞,
J(t) (0 ≤ t ≤ T) is a continuous process.  But since, for each
t ∈ (0,T], J_M(t) → I(t) in probability as M → ∞, we have
J(t) = I(t) a.s.  Consequently, I(t) has the continuous version J(t).

Problems.
4.  If x(t) is a separable martingale then, for any λ > 0,

    P[ sup_{0≤t≤T} |x(t)| ≥ λ ] ≤ E|x(T)| / λ.

Use this fact to prove that, for f ∈ M²_w[0,T],

    P[ sup_{0≤t≤T} | ∫_0^t f(s) dw(s) | ≥ λ ]
        ≤ λ⁻² E ∫_0^T f²(s) ds.

5.  If f ∈ M²_w[0,T] and τ is a stopping time with respect to F_t,
0 ≤ τ ≤ T, then

    E ∫_0^{τ∧t} f(s) dw(s) = 0.
6.  Define

    ∫_{τ₁}^{τ₂} f(t) dw(t) = ∫_0^{τ₂} f(t) dw(t) − ∫_0^{τ₁} f(t) dw(t),

where τ₁, τ₂ are nonnegative random variables with τ₁ ≤ τ₂ ≤ T, if
f ∈ L²_w[0,T], provided τ₁, τ₂ are F_t stopping times.  Then

    E ∫_{τ₁}^{τ₂} f(t) dw(t) = 0   if f ∈ M²_w[0,T].

§3.  Itô's formula
Suppose ξ(t) is a stochastic process satisfying, for any
0 ≤ t₁ < t₂ ≤ T,

    ξ(t₂) − ξ(t₁) = ∫_{t₁}^{t₂} a(t) dt + ∫_{t₁}^{t₂} b(t) dw(t),

where b ∈ L²_w[0,T] and a ∈ L¹_w[0,T] (L¹_w[0,T] is defined similarly
to L²_w[0,T], with P[∫_0^T |f(t)| dt < ∞] = 1).  Then we say that
ξ(t) has the stochastic differential dξ given by

    dξ(t) = a(t) dt + b(t) dw(t).

Lemma 1.  Let {t_{n,j}, j = 1,2,...,m_n} be a sequence of partitions
of an interval a ≤ t ≤ b with mesh δ_n → 0.  Then

    Σ_j ( w(t_{n,j+1}) − w(t_{n,j}) )²

converges in L² to the constant b − a.
Proof.  Write t_j = t_{n,j}, m = m_n.  Then

    Σ_j (w(t_{j+1}) − w(t_j))² − (b − a)
        = Σ_j [ (w(t_{j+1}) − w(t_j))² − (t_{j+1} − t_j) ].

Since the summands are independent and of mean 0,

    E [ Σ_j (w(t_{j+1}) − w(t_j))² − (b − a) ]²
        = Σ_j (t_{j+1} − t_j)² E( Y_j² − 1 )²,

where Y_j = (w(t_{j+1}) − w(t_j)) / (t_{j+1} − t_j)^{1/2}.  The Y_j
are identically normally distributed; hence the last sum is bounded
by C δ_n (b − a) → 0.

We shall use Lemma 1 in order to prove that

(1)    w²(t₂) − w²(t₁) = 2 ∫_{t₁}^{t₂} w(t) dw(t) + (t₂ − t₁).

Indeed, for a partition of [t₁,t₂],

    w²(t₂) − w²(t₁) = 2 Σ_j w(t_j)( w(t_{j+1}) − w(t_j) )
                      + Σ_j ( w(t_{j+1}) − w(t_j) )²,

and the last limit, in probability, is t₂ − t₁.  We can rewrite (1)
in the form

(2)    dw²(t) = 2w(t) dw(t) + dt.

As another example we compute d(t w(t)).  Clearly

    Σ_j t_j ( w(t_{j+1}) − w(t_j) )
        + Σ_j w(t_{j+1}) ( t_{j+1} − t_j )
        = t₂ w(t₂) − t₁ w(t₁),

and adding both sums in the limit,

    t₂ w(t₂) − t₁ w(t₁) = ∫_{t₁}^{t₂} t dw(t) + ∫_{t₁}^{t₂} w(t) dt.

Thus

(3)    d(t w(t)) = t dw(t) + w(t) dt.
Theorem 2.  Let f(x,t) be a continuous function for
(x,t) ∈ R¹ × [0,∞) with continuous derivatives f_x, f_t, f_{xx}.
Let dξ(t) = a(t) dt + b(t) dw(t).  Then f(ξ(t),t) has the stochastic
differential

(4)    df(ξ(t),t) = [ f_t(ξ(t),t) + f_x(ξ(t),t) a(t)
                      + ½ f_{xx}(ξ(t),t) b²(t) ] dt
                    + f_x(ξ(t),t) b(t) dw(t).

This formula is called Itô's formula.

Proof.  If dξ = a dt + b dw, define f dξ = f a dt + f b dw.  If
dξ_i = a_i dt + b_i dw (i = 1,2), then

(5)    d(ξ₁ ξ₂) = ξ₁ dξ₂ + ξ₂ dξ₁ + b₁ b₂ dt.

Indeed, for a_i, b_i constants, (5) is a consequence of the rules
(2), (3).  When the a_i, b_i are step functions, (5) in its
integrated form follows from its integrated form on each step of the
a_i, b_i.  For general a_i, b_i it then follows by approximation.

Applying (5) one can establish by induction that

    d(ξ^m) = m ξ^{m−1} dξ + ½ m(m−1) ξ^{m−2} b² dt.

This can be used to show that (4) holds for a polynomial Q(x).  Next
one uses this relation and (5) in order to establish Itô's formula
for a function Q(x) g(t), where Q is a polynomial and g(t) is in C¹.
By linearity and approximation one then establishes (4) in case
ξ(t) = w(t).  Similarly one can establish (4) when
dξ(t) = a dt + b dw and a, b are step functions and, by
approximation, also for general a, b.

Problems.

1.  Let f ∈ M^{2m}_w[0,T], m a positive integer.  Use Itô's formula
with f(x) = x^{2m}, ξ(t) = ∫_0^t f(s) dw(s) to show that

    E ( ∫_0^t f(s) dw(s) )^{2m}
        ≤ [ m(2m−1) ]^m t^{m−1} E ∫_0^t f^{2m}(s) ds.
2.  Let f ∈ L²_w[0,T] and let α, β be positive numbers.  Prove the
exponential martingale inequality

(7)    P[ max_{0≤t≤T} ( ∫_0^t f(λ) dw(λ)
           − (α/2) ∫_0^t f²(λ) dλ ) > β ]  ≤  e^{−αβ}.

[Hint:  For f a bounded step function, let

    ξ(t) = α ∫_0^t f(λ) dw(λ) − (α²/2) ∫_0^t f²(λ) dλ

and apply Itô's formula with e^x to deduce that ζ(t) = e^{ξ(t)}
satisfies

    ζ(t) = ζ(0) + α ∫_0^t ζ(s) f(s) dw(s).

Apply the martingale inequality.  Finally, approximate general f by
step functions.]

For an n-dimensional Brownian motion w(t) = (w₁(t),...,wₙ(t)) one
defines

    ∫_α^β b(t) dw(t) = Σ_{j=1}^n ∫_α^β b_j(t) dw_j(t),

where b = (b₁,...,bₙ), b_j ∈ L²_w[α,β].  If b = (b_{ij}) is a matrix,
∫_α^β b(t) dw(t) is the vector with components Σ_j ∫_α^β b_{ij} dw_j,
and

    E | ∫_α^β b(t) dw(t) |²  =  E ∫_α^β Σ_{i,j} ( b_{ij}(t) )² dt.
The concept of a stochastic differential

    dξ(t) = a(t) dt + b(t) dw(t),

where a = (a₁,...,a_m) and b = (b_{ij}) (1 ≤ i ≤ m, 1 ≤ j ≤ n), is
defined in the obvious way, and the corresponding Itô formula is

(8)    df(ξ(t),t) = [ f_t + Σ_i f_{x_i} a_i
                      + ½ Σ_{i,j} f_{x_i x_j} Σ_k b_{ik} b_{jk} ] dt
                    + Σ_{i,j} f_{x_i} b_{ij} dw_j;

here f_t, f_{x_i}, f_{x_i x_j} are assumed to be continuous.  The
proof uses the special case n = 1 and the relation

(9)    d(w_i w_j) = w_i dw_j + w_j dw_i   if i ≠ j.

To prove (9), simply notice that w(t) = (w_i(t) + w_j(t))/√2 is a
Brownian motion, so that dw² = dt + 2w dw.

Itô's formula (8) can be written in the form

    df(ξ(t),t) = f_t dt + Σ_i f_{x_i} dξ_i
                 + ½ Σ_{i,j} f_{x_i x_j} dξ_i dξ_j,

provided we define the multiplication table:

    dw_i · dt = 0,   dt · dt = 0,
    dw_i · dw_j = 0 if i ≠ j,   dw_i · dw_i = dt.

Introducing the matrix (a_{ij}), where a_{ij} = Σ_{k=1}^n b_{ik} b_{jk},
and the operator

    L = ½ Σ_{i,j} a_{ij} ∂²/∂x_i∂x_j + Σ_i a_i ∂/∂x_i,

we can also write (8) in the form

    df(ξ(t),t) = ( f_t + Lf ) dt + Σ_{i,j} f_{x_i} b_{ij} dw_j.
It follows that if (u_t + Lu)(ξ(t),t) and u_x(ξ(t),t)·b(t) are
integrable in the appropriate sense (in M¹_w[0,T] and M²_w[0,T],
respectively), then, for any stopping time τ, 0 ≤ τ ≤ T,

    E u(ξ(τ),τ) − u(ξ(0),0) = E ∫_0^τ ( u_s + Lu )(ξ(s),s) ds.

Problems.

3.  Let ξ(t) = ∫_0^t b(t) dw(t), where b is an n×n matrix with
elements b_{ij} in M²_w[0,T].  Suppose

    dξ_i dξ_j = 0 if i ≠ j,   dξ_i dξ_i = dt.

Prove that ξ(t) is an n-dimensional Brownian motion.

[Hint:  Suppose the b_{ij} are step functions.  Let

    ζ(t) = exp{ i y·ξ(t) + |y|² t/2 }.

By Itô's formula dζ = i ζ y·dξ, so that ζ(t) is a martingale.  Deduce
that

    E [ e^{i y·ξ(t)} | F_s ] = e^{ i y·ξ(s) − |y|²(t−s)/2 },

and use the following fact:  if ξ(t₀),...,ξ(t_n) have a joint normal
distribution N(0,Γ) with Γ = (Γ_{jk}), Γ_{jk} = min(t_j,t_k), then
ξ(t) is a Brownian motion.]
§4.  Stochastic differential equations

Let b(x,t) = (b₁(x,t),...,bₙ(x,t)), σ(x,t) = (σ_{ij}(x,t)), where
1 ≤ i,j ≤ n.  If ξ(t) is a stochastic process satisfying

(1)    dξ(t) = b(ξ(t),t) dt + σ(ξ(t),t) dw(t),   0 < t ≤ T,

(2)    ξ(0) = ξ₀,

then we say that ξ(t) satisfies the system of stochastic differential
equations (1) and the initial condition (2).

We shall assume that

(3)    b(x,t), σ(x,t) are measurable in Rⁿ × [0,T],

       |b(x,t) − b(x̄,t)| ≤ K|x − x̄|,   |σ(x,t) − σ(x̄,t)| ≤ K|x − x̄|,

       |b(x,t)| ≤ K(1 + |x|),   |σ(x,t)| ≤ K(1 + |x|),

and that

(4)    ξ₀ is independent of the Brownian motion w(t), E|ξ₀|² < ∞.

Thus, in particular, ξ₀ may be any constant.  We take F_t to be the
σ-field generated by w(s), s ≤ t, and the σ-field of ξ₀.

Theorem 1.  Let (3), (4) hold.  Then there exists a unique solution
ξ(t) of (1), (2) in M²_w[0,T].  Uniqueness means that if ξ̄(t) is
another solution in M²_w[0,T], then

    P[ ξ(t) = ξ̄(t) for all 0 ≤ t ≤ T ] = 1.
Proof.  Uniqueness.  If ξ₁, ξ₂ are two solutions, then, by (3) and
the estimates of §2,

    E |ξ₁(t) − ξ₂(t)|² ≤ C ∫_0^t E |ξ₁(s) − ξ₂(s)|² ds.

Hence, by iteration, E|ξ₁(t) − ξ₂(t)|² = 0.

Existence.  Define the successive approximations

(5)    ξ₀(t) = ξ₀,
       ξ_{k+1}(t) = ξ₀ + ∫_0^t b(ξ_k(s),s) ds
                    + ∫_0^t σ(ξ_k(s),s) dw(s),

and make the inductive assumption that ξ_k ∈ M²_w[0,T] and

(6)    E |ξ_{k+1}(t) − ξ_k(t)|² ≤ (Mt)^{k+1} / (k+1)!
       for 0 ≤ k ≤ m − 1,

where M is a positive constant depending on K, T.  It can easily be
shown that (6) holds for k = m and that ξ_{m+1} ∈ M²_w[0,T].

Next, using Problem 4, §2, we find that

    P[ sup_{0≤t≤T} |ξ_{m+1}(t) − ξ_m(t)| > 2^{−m} ]
        ≤ 2^{2m} (MT)^{m+1} / (m+1)!,

and the right-hand sides form a convergent series.  The
Borel–Cantelli lemma now implies

    P[ sup_{0≤t≤T} |ξ_{m+1}(t) − ξ_m(t)| > 2^{−m}
       for infinitely many m ] = 0.

Thus, for almost any ω there is an m₀ = m₀(ω) such that

    sup_{0≤t≤T} |ξ_{m+1}(t) − ξ_m(t)| ≤ 2^{−m}   if m ≥ m₀(ω).

It follows that the series

    ξ₀(t) + Σ_m [ ξ_{m+1}(t) − ξ_m(t) ]

is uniformly convergent in t ∈ [0,T].  Denoting the limit by ξ(t) we
then have ξ_k(t) → ξ(t) uniformly in t ∈ [0,T], for almost any ω.
Taking m → ∞ in (5) we find that ξ(t) is a solution of (1), (2).

Finally, since by (5),

    E |ξ_{k+1}(t)|² ≤ C₀ + C₀ ∫_0^t E |ξ_k(s)|² ds,

we get, by induction, E|ξ_k(t)|² ≤ C₁ uniformly in k.  The same
inequality then holds for ξ(t), i.e., ξ(t) ∈ M²_w[0,T].
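The successive approximations above also suggest practical schemes:
discretizing (1) directly in time gives the Euler–Maruyama method.
A minimal sketch follows; the Ornstein–Uhlenbeck coefficients
b(x) = −x, σ(x) = 1 and all numerical parameters are our own
illustration, not from the text.  For this choice with ξ(0) = 0 the
exact variance is Var ξ(T) = (1 − e^{−2T})/2.

```python
import math
import random

def euler_maruyama(b, sigma, x0, T, n, rng):
    """One Euler-Maruyama path for d(xi) = b(xi) dt + sigma(xi) dw."""
    dt = T / n
    x = x0
    for _ in range(n):
        x += b(x) * dt + sigma(x) * rng.gauss(0.0, math.sqrt(dt))
    return x

# Ornstein-Uhlenbeck test case: b(x) = -x, sigma(x) = 1, xi(0) = 0,
# for which the exact variance at time T is (1 - exp(-2T)) / 2.
rng = random.Random(2)
T, n, paths = 1.0, 200, 2000
samples = [euler_maruyama(lambda x: -x, lambda x: 1.0, 0.0, T, n, rng)
           for _ in range(paths)]
var_est = sum(x * x for x in samples) / paths
print(var_est)
```

With the seed above, the estimate is close to (1 − e⁻²)/2 ≈ 0.432, up
to discretization and sampling error.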
Theorem 1 can be extended in two directions:

Uniqueness.  If ξ_i (i = 1,2) is a solution of a stochastic
differential system with b^i, σ^i such that b¹(x,t) = b²(x,t),
σ¹(x,t) = σ²(x,t) if (x,t) is in a domain U, and if
ξ₁(0) = ξ₂(0) ∈ U, then ξ₁(t) = ξ₂(t) a.e. for all t < τ, where τ is
the exit time of ξ₁(t) (or ξ₂(t)) from U.

Existence.  The existence part remains true if the uniform Lipschitz
conditions on σ, b are replaced by a Lipschitz condition in every
compact set.

Theorem 2.  The solution ξ(t) of (1), (2) satisfies

(7)    P[ ξ(t) ∈ A | ξ(λ), λ ≤ s ] = P[ ξ(t) ∈ A | ξ(s) ]
           = p(s,ξ(s),t,A)

for all t > s and any Borel set A; further, p(s,x,t,A) is a
transition probability function.
Proof.  Let y = ξ(s) and define, for t ≥ s, successive approximations
ξ_k(t) with ξ₀(t) = y; then ξ_k(t) → ξ(t) (by the proof of
Theorem 1).  Denote by ξ_{x,s}(t) the solution of (1) with ξ(s) = x.

By induction one can show that each ξ_k(t) is measurable with respect
to the σ-field generated by y and σ(w(λ+s) − w(s), 0 ≤ λ ≤ t − s);
more specifically, there exists a sequence of Borel measurable
functions F_m(t,x₀,x₁,...) such that

    ξ_k(t) = lim_{m→∞} F_m( t, y, w(μ_{m,1}+s) − w(s), ...,
                             w(μ_{m,p_m}+s) − w(s) )

for a.a. ω, uniformly in t, for some 0 ≤ μ_{m,i} ≤ t − s.  Therefore
the same holds for ξ(t).  Thus (with another sequence F_m)

(8)    ξ(t) = lim_{m→∞} F_m( t, ξ(s), w(μ_{m,1}+s) − w(s), ...,
                              w(μ_{m,p_m}+s) − w(s) )   a.s.,

(9)    ξ_{x,s}(t) = lim_{m→∞} F_m( t, x, w(μ_{m,1}+s) − w(s), ...,
                                    w(μ_{m,p_m}+s) − w(s) )   a.s.

Taking f a bounded continuous function and using (8), (9), we get

    E[ f(ξ(t)) | ξ(λ), λ ≤ s ] = E[ f(ξ(t)) | ξ(s) ] = Φ(ξ(s)),

where Φ(x) = E f(ξ_{x,s}(t)); this is seen by taking first
conditional expectations of products of functions of ξ(s) and of the
increments.  By approximation, the relation holds for any bounded
Borel function f.

It remains to prove that p is a transition probability function.  The
Chapman–Kolmogorov equation is the only condition that is not
obvious.  Notice that

    ξ_{x,s}(T) = ξ_{y,t}(T)  with  y = ξ_{x,s}(t),   s < t < T.

Hence, for any bounded Borel function ψ and t < T, we get, taking
ψ(y) = p(t,y,T,A),

    p(s,x,T,A) = ∫ p(t,y,T,A) p(s,x,t,dy),

where (7) was used in the last equality.  Since the last expression
is equal to the right side of the Chapman–Kolmogorov equation, the
proof is complete.

We can now identify the solution of the stochastic differential
system with a Markov process.
Indeed, take the space Ω̂ of all continuous n-vector functions x(·),
M_t^s the σ-field generated by x(u), s ≤ u ≤ t, and

(11)   P_{x,s}[ x(·) ∈ B ] = P[ ω : ξ_{x,s}(·,ω) ∈ B ].

We have to verify the Markov property

(12)   P_{x,s}[ x(t+h) ∈ A | M_t^s ] = p(t,x(t),t+h,A).

Proof of (12).  Since

    P[ ξ_{x,s}(t+h) ∈ A | ξ_{x,s}(λ), λ ≤ t ]
        = p(t,ξ_{x,s}(t),t+h,A),

we get, for any s ≤ t₁ < t₂ < ··· < t_m ≤ t and Borel sets
A₁,...,A_m,

    P[ ξ_{x,s}(t+h) ∈ A, ξ_{x,s}(t_i) ∈ A_i (1 ≤ i ≤ m) ]
        = E[ χ_{{ξ_{x,s}(t_i) ∈ A_i}} p(t,ξ_{x,s}(t),t+h,A) ].

Hence, by (11), the same identity holds for P_{x,s}, and this implies
(12).

By the solution of the stochastic system (1) one means the Markov
process just constructed, namely,

    ( Ω̂, M_t^s, P_{x,s}, x(t) ).

Notice that E f(ξ_{x,s}(t)) = E_{x,s} f(x(t)) for any bounded Borel
function f.

When b = b(x), σ = σ(x), the Markov process is time-homogeneous.

Theorem 3.  The solution of (1) satisfies the Feller property (and
therefore also the strong Markov property).

Proof.  It is not difficult to see that

    E |ξ_{y,s}(t) − ξ_{x,s}(t)|² ≤ C |x − y|²

if |x| ≤ R, |y| ≤ R, 0 ≤ s ≤ t ≤ T, where C depends on R, T.  By the
Lebesgue bounded convergence theorem,

    E f(ξ_{y,s}(t+r)) − E f(ξ_{x,s}(t+r)) → 0   if y → x,

provided f is a bounded continuous function.  Also
E f(ξ_{x,σ}(t+r)) → E f(ξ_{x,s}(t+r)) if σ → s.  Thus

    (x,s) → E f(ξ_{x,s}(t+r))

is continuous.

Problems.
1.  Show that the solution of (1), (2) satisfies

    E |ξ(t)|^{2m} ≤ C ( 1 + E|ξ₀|^{2m} ) e^{Ct}.

2.  Assume that b(x,t), σ(x,t) are continuous and set
a_{ij} = Σ_{k=1}^n σ_{ik} σ_{jk}.  Prove that

    lim_{h↓0} (1/h) E_{x,s} [ x_i(s+h) − x_i ] = b_i(x,s),

    lim_{h↓0} (1/h) E_{x,s} [ (x_i(s+h) − x_i)(x_j(s+h) − x_j) ]
        = a_{ij}(x,s).

3.  If b(x,t), σ(x,t) are continuous then, for any s ≥ 0, x ∈ Rⁿ,
ε > 0,

    lim_{h↓0} (1/h) P_{x,s} [ |x(s+h) − x| > ε ] = 0.

A Markov process with p satisfying these properties is sometimes
called a diffusion process.

Suppose now that D_x^α b(x,t), D_x^α σ(x,t) are continuous

(14)   and bounded by C(1 + |x|^β)   (0 ≤ |α| ≤ 2, β > 0)

for some C > 0.  One can show that for any f such that

(15)   D_x^α f(x) is continuous and bounded by C(1 + |x|^β)
       (0 ≤ |α| ≤ 2, β > 0),

the function φ(x) = E f(ξ_{x,s}(t)) has two continuous derivatives,
and these are bounded by C'(1 + |x|^{β'}).  The proof is omitted.

In this connection we mention also the fact that
u(x,s) = E f(ξ_{x,s}(t)) satisfies the Kolmogorov equation (or the
backward parabolic equation)

    ∂u/∂s + ½ Σ_{i,j} a_{ij}(x,s) ∂²u/∂x_i∂x_j
          + Σ_i b_i(x,s) ∂u/∂x_i = 0   (s < t).
Problems.

4.  Consider the linear stochastic differential equation

(16)   dξ = ( α(t) + β(t) ξ ) dt + ( γ(t) + δ(t) ξ ) dw,

where α, β, γ, δ are bounded and measurable.  Prove:

(a)  If α ≡ 0, γ ≡ 0 then the solution ξ₀ = ξ₀(t) is given by

    ξ₀(t) = ξ₀(0) exp{ ∫_0^t [ β(s) − ½ δ²(s) ] ds
                       + ∫_0^t δ(s) dw(s) }.

(b)  Setting ξ(t) = ξ₀(t) ζ(t), show that ξ(t) solves equation (16)
if and only if

    dζ = [ α(t) − γ(t) δ(t) ] ξ₀(t)^{-1} dt + γ(t) ξ₀(t)^{-1} dw.

Thus the solution of (16) is ξ₀(t) ζ(t) with ζ(0) = ξ(0)/ξ₀(0).
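Part (a) can be spot-checked by sampling: for constant β, δ and
ξ₀(0) = 1 the formula gives ξ₀(T) = exp((β − δ²/2)T + δ w(T)), whose
mean is e^{βT}.  A sketch (the values β = 0.1, δ = 0.2 and all other
parameters are arbitrary illustrations, not from the text):

```python
import math
import random

rng = random.Random(3)
beta, delta, T, paths = 0.1, 0.2, 1.0, 20_000

# Sample xi_0(T) = exp((beta - delta^2/2) T + delta w(T)) directly,
# using w(T) ~ N(0, T); the sample mean should approach exp(beta T).
mean = sum(
    math.exp((beta - delta ** 2 / 2) * T
             + delta * rng.gauss(0.0, math.sqrt(T)))
    for _ in range(paths)
) / paths
print(mean)
```

With the seed above, the printed mean is close to e^{0.1} ≈ 1.105.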
§5.  Probabilistic interpretation of boundary value problems

Let

    L = ½ Σ_{i,j} a_{ij}(x) ∂²/∂x_i∂x_j + Σ_i b_i(x) ∂/∂x_i + c(x)

with coefficients defined in the closure D̄ of a bounded domain
D ⊂ Rⁿ, and assume that

(1)    a_{ij}, b_i are uniformly Lipschitz continuous in D̄,
       c ≤ 0, c uniformly Hölder continuous in D̄.

Let f, φ be functions defined on D̄ and ∂D respectively, satisfying:

(2)    f is uniformly Hölder continuous in D̄,
       φ is continuous on ∂D.

Assume finally that ∂D is in C².  Consider the Dirichlet problem

(3)    Lu = f in D,   u = φ on ∂D.

It is well known that there exists a unique classical solution u of
this problem; u is continuous in D̄ and its second derivatives are
continuous (in fact, Hölder continuous) in D.

Since the matrix a(x) = (a_{ij}(x)) is positive definite and
uniformly Lipschitz continuous in D̄, there exists a square matrix
σ(x) = (σ_{ij}(x)) which is symmetric, positive definite and
uniformly Lipschitz continuous in D̄ such that a(x) = σ²(x).  We
extend σ(x) into Rⁿ so that it remains uniformly Lipschitz
continuous; b(x) = (b₁(x),...,bₙ(x)) is extended similarly, into Rⁿ.
Consider the system of stochastic differential equations

(4)    dξ(t) = b(ξ(t)) dt + σ(ξ(t)) dw(t).
Theorem 1.  Denote by τ the exit time of ξ(t) from D.  Then
E_x τ < ∞ for all x ∈ D, and the solution of (3) can be represented
in the form

(5)    u(x) = E_x [ φ(ξ(τ)) exp( ∫_0^τ c(ξ(s)) ds ) ]
              − E_x [ ∫_0^τ f(ξ(t)) exp( ∫_0^t c(ξ(s)) ds ) dt ].

Proof.  Consider the function h(x) = A − e^{λx₁}.  We can choose λ
large and then A large so that 0 ≤ h ≤ K and (L − c)h ≤ −1 in D̄.
By Itô's formula, for any T < ∞,

    E_x h(ξ(τ ∧ T)) − h(x)
        = E_x ∫_0^{τ∧T} ((L − c)h)(ξ(s)) ds ≤ −E_x(τ ∧ T).

Since |h(x)| ≤ K in D̄,

(6)    E_x(τ ∧ T) ≤ 2K.

Taking T ↑ ∞ we obtain E_x τ ≤ 2K < ∞ for x ∈ D.

To prove (5), denote by V_ε (ε > 0) the closed ε-neighborhood of ∂D
and let D_ε = D \ V_ε.  Let v be a function in C²(Rⁿ) which
coincides with u in D_{ε/2}.  By Itô's formula,

    E_x [ v(ξ(T ∧ τ_ε)) exp( ∫_0^{T∧τ_ε} c(ξ(s)) ds ) ] − v(x)
        = E_x ∫_0^{T∧τ_ε} (Lv)(ξ(s)) exp( ∫_0^s c(ξ(λ)) dλ ) ds

for any x ∈ D_ε, where τ_ε is the hitting time of V_ε and T < ∞.
Noting that v(ξ(t)) = u(ξ(t)) if 0 ≤ t ≤ τ_ε, and taking ε → 0, we
get

    u(x) = E_x [ u(ξ(τ ∧ T)) exp( ∫_0^{τ∧T} c(ξ(s)) ds ) ]
           − E_x [ ∫_0^{τ∧T} f(ξ(s)) exp( ∫_0^s c(ξ(λ)) dλ ) ds ].

Taking T ↑ ∞ and using (6), the assertion (5) follows.
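In the simplest one-dimensional case the representation (5) can be
sampled directly: for L = ½ d²/dx² on D = (0,1) with c ≡ 0, f ≡ 0 and
boundary values φ(0) = 0, φ(1) = 1, the harmonic solution is
u(x) = x, and E_x φ(ξ(τ)) is just the probability that Brownian
motion exits at 1.  A Monte Carlo sketch (step size, seed, and
tolerance are our own choices, not from the text):

```python
import math
import random

def exit_payoff(x, rng, dt=4e-4):
    """Run discretized Brownian motion from x until it leaves (0, 1);
    return phi at the exit point, with phi(0) = 0 and phi(1) = 1."""
    step = math.sqrt(dt)
    while 0.0 < x < 1.0:
        x += rng.gauss(0.0, step)
    return 0.0 if x <= 0.0 else 1.0

rng = random.Random(4)
x0, paths = 0.3, 2000
u_est = sum(exit_payoff(x0, rng) for _ in range(paths)) / paths
print(u_est)  # should be near u(x0) = x0 = 0.3
```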
Problems.

1.  Consider the case of one stochastic differential equation

    dξ = b(ξ) dt + σ(ξ) dw,

where σ(x), b(x) are uniformly Lipschitz and σ(x) > 0 for all x.  The
function

    v(x) = ∫_0^x exp( − ∫_0^y ( 2b(z)/σ²(z) ) dz ) dy

satisfies ½ σ² v'' + b v' = 0.  Prove that if v(−∞) = −∞ then

    P_x [ sup_{t>0} ξ(t) = ∞ ] = 1;

similarly, if v(∞) = ∞ then P_x[ inf_{t>0} ξ(t) = −∞ ] = 1.

2.  In the preceding problem, assume that v(∞) = ∞ and
v(−∞) > −∞.  Show that ξ(t) → −∞ a.s. as t → ∞.

[Hint.  To prove the last part, denote by τ_y the first time
ξ(t) = y, and let y < x, x₂ > x.  Then P_x(τ_y < ∞) = 1 and, by the
strong Markov property,

    P_x [ sup_{t>0} ξ(t + τ_y) ≥ x₂ ]
        = P_y [ sup_{t>0} ξ(t) ≥ x₂ ]
        = ( v(y) − v(−∞) ) / ( v(x₂) − v(−∞) ).

Also the left hand side is ≥ P_x[ lim sup_{t→∞} ξ(t) ≥ x₂ ].  Take
y → −∞.]

Consider now the operator

    L = ½ Σ_{i,j} a_{ij}(x,t) ∂²/∂x_i∂x_j
        + Σ_i b_i(x,t) ∂/∂x_i + c(x,t)
and assume that for some cylinder Q_T = D × (0,T):

(7)    a_{ij}, b_i are uniformly Lipschitz continuous in (x,t) ∈ Q̄_T,
       c is uniformly Hölder continuous in (x,t) ∈ Q̄_T,
       f is uniformly Hölder continuous in (x,t) ∈ Q̄_T,
       φ is continuous on (D̄ × {T}) ∪ (∂D × [0,T]),
       ∂D is in C².

Consider the initial-boundary value problem

(8)    ∂u/∂t + Lu = f in Q_T,
       u = φ on (D̄ × {T}) ∪ (∂D × [0,T]).

It is well known that this problem has a unique classical solution u.
As in the elliptic case we introduce the square root σ(x,t) of the
matrix a(x,t) = (a_{ij}(x,t)) and extend both σ and b as uniformly
Lipschitz functions in Rⁿ × [0,T].  Introducing the system of
stochastic differential equations

(9)    dξ(t) = b(ξ(t),t) dt + σ(ξ(t),t) dw(t),

we can now state:

Theorem 2.  The solution u of (8) can be represented in the form

    u(x,t) = E_{x,t} [ φ(ξ(τ)) exp( ∫_t^τ c(ξ(s),s) ds ) ]
             − E_{x,t} [ ∫_t^τ f(ξ(s),s)
                         exp( ∫_t^s c(ξ(λ),λ) dλ ) ds ],

where τ is the first time λ ∈ [t,T) such that ξ(λ) leaves D; if no
such λ exists, set τ = T.

The proof is similar to the proof of Theorem 1; we apply here Itô's
formula to functions of (ξ(t),t).
Consider now the Cauchy problem

    ∂u/∂t + Lu = f(x,t) in Rⁿ × [0,T),   u(x,T) = φ(x).

We assume that

    a_{ij}, b_i are bounded and uniformly Lipschitz continuous in
    Rⁿ × [0,T],

    c is bounded and uniformly Hölder continuous in Rⁿ × [0,T],

    f(x,t) is uniformly Hölder continuous in compact subsets of
    Rⁿ × [0,T] and |f(x,t)| ≤ C(1 + |x|^a),

    φ(x) is continuous in Rⁿ and |φ(x)| ≤ C(1 + |x|^a),

where C > 0, a > 0.  Under these conditions there exists a unique
solution u(x,t) of the Cauchy problem satisfying

(10)   |u(x,t)| ≤ C'(1 + |x|^{a'}).

The first derivative u_x is also bounded by the right hand side of
(10) (with different constants) in every set Rⁿ × [0,T−ε].

We can now represent u(x,t) in terms of the solution ξ(t) of (9):

Theorem 3.  The solution u(x,t) is given by

(11)   u(x,t) = E_{x,t} [ φ(ξ(T)) exp( ∫_t^T c(ξ(s),s) ds ) ]
               − E_{x,t} [ ∫_t^T f(ξ(s),s)
                           exp( ∫_t^s c(ξ(λ),λ) dλ ) ds ].

The proof is left as an exercise.

Consider now the special case f ≡ 0, c ≡ 0 and the Cauchy problem

    ∂u/∂t + Lu = 0,   u(x,T) = φ(x).

The solution can be represented in terms of the fundamental solution
Γ(x,t; y,τ) of the backward parabolic equation L + ∂/∂t:

    u(x,t) = ∫ Γ(x,t; y,T) φ(y) dy.

We recall that, as a function of (x,t), Γ satisfies
(L + ∂/∂t) Γ(x,t; y,τ) = 0 for t < τ.  Also

    0 < Γ(x,t; y,τ) ≤ C (τ − t)^{−n/2} exp( −c|x−y|²/(τ−t) )

for some C > 0, c > 0.  From (11) we get

    u(x,t) = E_{x,t} φ(ξ(T)) = ∫ φ(y) P_{x,t}( ξ(T) ∈ dy ).

Since φ is arbitrary, it follows that

    P_{x,t}( ξ(T) ∈ dy ) = Γ(x,t; y,T) dy,

that is, the transition probability function, considered as a measure
A → p(t,x,T,A), has a density which is the fundamental solution
Γ(x,t; y,T) of L + ∂/∂t.
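The representation u(x,t) = E_{x,t} φ(ξ(T)) is easy to sample in the
special case L = ½ d²/dx², where ξ is a Brownian motion and Γ is the
Gaussian kernel.  With the hypothetical choice φ(y) = y², the
solution of u_t + ½ u_xx = 0, u(x,T) = φ(x), is u(x,t) = x² + (T − t);
the sketch below (parameters and seed are ours) estimates it by
direct sampling.

```python
import math
import random

rng = random.Random(5)
x, t, T, paths = 0.5, 0.0, 1.0, 30_000

# xi(T) = x + w(T - t) with w(s) ~ N(0, s); estimate E phi(xi(T))
# for phi(y) = y^2, whose exact value is x^2 + (T - t).
s = math.sqrt(T - t)
u_est = sum((x + rng.gauss(0.0, s)) ** 2 for _ in range(paths)) / paths
print(u_est)
```

With the seed above, the estimate is close to 0.5² + 1 = 1.25.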
We shall use later on the Lᵖ elliptic estimate:

(16)   |u|_{W^{2,p}(G)} ≤ C ( |Lu|_{Lᵖ(G)} + |u|_{Lᵖ(G)} );

here G is a bounded domain with C² boundary, the coefficients of L
are continuous in Ḡ, L is elliptic, u is any function in
W^{2,p}(G) ∩ W₀^{1,p}(G), and 1 < p < ∞.  Recall that W^{m,p}(G) is
the class of functions whose derivatives up to order m belong to Lᵖ,
and W₀^{m,p}(G) is the completion in W^{m,p}(G) of the set of C^∞
functions with support in G.  If c(x) ≤ 0 then the term |u|_{Lᵖ} on
the right hand side of (16) may be dropped.

Let u satisfy

    Lu = f in G,   u = 0 on ∂G.

If c(x) ≡ 0 then

    u(x) = − E_x ∫_0^τ f(ξ(t)) dt,   τ = exit time from G,

so that, by the Lᵖ estimate, E_x ∫_0^τ f(ξ(t)) dt can be bounded in
terms of the Lᵖ norm of f.
Krylov [20] has considered the much more general process

    dξ(t) = a(t) dt + b(t) dw(t)

with nonanticipative a(t), b(t), and proved the following estimate.
Assume that a(t) and b(t) are bounded by M and that b(t) is
uniformly nondegenerate (in the sense made precise in [20]), and let
G be any open bounded domain with diameter ≤ D.  Then, for any
x ∈ G, f ∈ Lⁿ(G),

(18)   E_x ∫_0^{τ_G} |f(ξ(t))| dt  ≤  N |f|_{Lⁿ(G)},

where τ_G = exit time of ξ(t) + x from G and N is a constant
depending only on M, D.
§6.  Stopping time problems and variational inequalities

Consider a stochastic differential system in Rⁿ

(1)    dξ(t) = b(ξ(t)) dt + σ(ξ(t)) dw(t)

with the usual Lipschitz condition on b(x), σ(x), and let G be a
bounded domain with C² boundary.  Denote by t_G the exit time from
G, and introduce the cost functional

(2)    J_x(τ) = E_x [ ∫_0^{τ∧t_G} e^{−αt} f(ξ(t)) dt
                      + e^{−ατ} φ(ξ(τ)) χ_{τ<t_G}
                      + e^{−αt_G} h(ξ(t_G)) χ_{t_G≤τ} ]

for any stopping time τ with respect to the standard σ-fields F_t
associated with the Brownian motion.  Here f(x), φ(x), h(x) are
given functions and α is a given positive number (the discount
factor).  We consider the problem of finding

    V(x) = inf_τ J_x(τ),

where τ varies over the set of all stopping times, and finding a
stopping time τ* such that

    V(x) = J_x(τ*).

We refer to this problem as a stopping time problem; τ* will be
called an optimal stopping time.

Let a = σσᵀ, i.e. a_{ij} = Σ_k σ_{ik} σ_{jk}, and set

    Lu = ½ Σ_{i,j=1}^n a_{ij}(x) ∂²u/∂x_i∂x_j
         + Σ_{i=1}^n b_i(x) ∂u/∂x_i − α u.

Consider the problem:  find a function u satisfying:
(3)    Lu + f ≥ 0 a.e. in G,
       u ≤ φ in G,
       (Lu + f)(u − φ) = 0 a.e. in G,
       u = h on ∂G.

This problem is called a variational inequality.  If L is formally
selfadjoint, then u is the function which minimizes the associated
quadratic functional over the functions v which vary in the convex
set: v ≤ φ, v = h on ∂G.

We shall now assume:

    h is in C²(∂G) and h ≤ φ on ∂G.

Theorem 1.  Under the foregoing assumptions, there exists a unique
solution u of the variational inequality (3) such that

(7)    u ∈ W^{2,p}(G)   for any 1 < p < ∞.

Proof.  Let β_ε(t) be a C^∞ function in t, for any ε > 0, such that

    β_ε(t) = 0 if t ≤ 0,   β_ε(t) ↑ ∞ (as ε → 0) if t > 0,
    β_ε' ≥ 0,   β_ε'' ≥ 0,

and consider the Dirichlet problem

(8)    Lu_ε + f = β_ε(u_ε − φ) in G,   u_ε = h on ∂G.
By the standard theory, a unique solution u_ε exists.  We now
estimate the maximum of β_ε(u_ε − φ) in Ḡ by noting that if the
maximum is attained at a point x⁰ ∈ ∂G then β_ε(u_ε − φ) = 0,
whereas if it is attained at a point x⁰ ∈ G then u_ε − φ also
attains its maximum at x⁰, so that −L(u_ε − φ) ≥ 0 at x⁰.  We thus
find that

    0 ≤ β_ε(u_ε − φ) ≤ C   in Ḡ,

with C independent of ε.  We can now use the Lᵖ estimates to deduce
that

    |u_ε|_{W^{2,p}(G)} ≤ C_p.

Taking a subsequence of u_ε which is weakly convergent to some u in
W^{2,p}(G) and strongly convergent in W^{1,p}(G), we easily find
that u solves the variational inequality.

The uniqueness follows from the following theorem, which connects
the variational inequality problem to the stopping time problem.

Theorem 2.  Any solution u of (3) which satisfies (7) is given by
u(x) = V(x).  Further,

(10)   V(x) = J_x(τ*),

where τ* = τ_S ∧ t_G and τ_S is the hitting time of the set

    S = { x ∈ Ḡ ; u(x) = φ(x) },

called the stopping set; the set

    C = { x ∈ G ; u(x) < φ(x) }

is called the continuation set.  The relation (10) means that the
optimal stopping procedure is to continue while ξ(t) is in C and to
stop as soon as ξ(t) hits S.

Proof.
By Itô's formula (cf. the proof of Theorem 1, §5), for any stopping
time τ,

(11)   E_x [ e^{−α(τ∧τ_ε)} u(ξ(τ∧τ_ε)) ] − u(x)
           = E_x ∫_0^{τ∧τ_ε} e^{−αt} (Lu)(ξ(t)) dt,

where τ_ε = exit time from G_ε = G \ V_ε, and V_ε is an
ε-neighborhood of ∂G.  Actually, for Itô's formula (11) to hold one
usually requires that u ∈ C²(G_ε).  However, this formula holds also
if u is just assumed to belong to W^{2,p}(G_ε) with p > 1 + n/2; see
[7], [15].

Using the inequalities Lu ≥ −f, u ≤ φ and then taking ε → 0, we
obtain u(x) ≤ J_x(τ).  Taking in the preceding proof τ = τ* and
noting that Lu = −f on the continuation set while
u(ξ(τ*)) = φ(ξ(τ*)) if τ* < t_G, we obtain (10).
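The stop/continue dichotomy of Theorem 2 has a transparent discrete
analogue: for a symmetric random walk with no running cost and no
discount, value iteration V ← min(φ, mean of neighbours) converges
to V(x) = inf_τ E_x φ(walk(τ)), and the continuation set is
{V < φ}.  A self-contained sketch (the grid and payoff below are our
own illustration, not from the text):

```python
# Payoff phi on a 5-point grid; the walk stops automatically at the ends.
phi = [1.0, 3.0, 0.0, 3.0, 1.0]

V = phi[:]
for _ in range(200):
    # Dynamic programming update: either stop (pay phi) or continue
    # (average the neighbouring values); endpoints are absorbing.
    V = ([V[0]]
         + [min(phi[i], 0.5 * (V[i - 1] + V[i + 1]))
            for i in range(1, len(V) - 1)]
         + [V[-1]])

continuation = [i for i in range(len(V)) if V[i] < phi[i]]
print(V, continuation)
```

Here V converges to [1.0, 0.5, 0.0, 0.5, 1.0]: the walk continues at
the interior points 1 and 3 (where V < φ) and stops at 0, 2, 4.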
It is actually not surprising that the optimal stopping problem
leads to the variational inequality.  Indeed, arguing formally, we
have two choices at each initial position x:

(i)  either stop, which implies that V(x) ≤ φ(x);

(ii) or continue for a time δ and then proceed optimally, which
implies

    V(x) ≤ E_x [ ∫_0^δ e^{−αt} f(ξ(t)) dt + e^{−αδ} V(ξ(δ)) ];

the second summand on the right is obtained after applying the
strong Markov property.  Using Itô's formula and then dividing by δ
and taking δ ↓ 0, we obtain LV + f ≥ 0.  Finally, since either (i)
or (ii) is optimal, we must have (V − φ)(LV + f) = 0.

The above procedure of deriving formal differential inequalities for
the optimum can be applied to a large variety of Markov optimization
problems.

The system (8) is called the penalized problem.  Consider the case
where β_ε(t) = t⁺/ε, so that the penalized problem becomes

    Lu_ε + f = (u_ε − φ)⁺/ε in G,   u_ε = h on ∂G.

Even though this β_ε(t) is only Lipschitz in t, the previous proof
still applies, so that u_ε → u if ε → 0.  The solution u_ε can be
given a probabilistic interpretation, namely:  denote by V_ε the
class of all nonanticipative functions v(t) with 0 ≤ v(t) ≤ 1.  For
any v ∈ V_ε, define an associated cost functional J_x(v); then

(14)   u_ε(x) = inf_{v ∈ V_ε} J_x(v).

Problems.

1.  Prove (14), by applying Itô's formula.

2.  Prove also that u_ε(x) = J_x(v̄), where v̄(t) = 1 if
u_ε(ξ(t)) ≥ φ(ξ(t)) and v̄(t) = 0 otherwise.
Consider now a functional which depends on two stopping times:

(15)   J_x(σ,τ) = E_x [ ∫_0^{σ∧τ∧t_G} e^{−αt} f(ξ(t)) dt
                        + e^{−ασ} φ₁(ξ(σ)) χ_{σ<τ, σ<t_G}
                        + e^{−ατ} φ₂(ξ(τ)) χ_{τ≤σ, τ<t_G}
                        + e^{−αt_G} h(ξ(t_G)) χ_{t_G≤σ∧τ} ].

We call J_x(σ,τ) a payoff, and we consider two players: the first
one controls σ and tries to minimize the payoff, and the second one
controls τ and tries to maximize the payoff.  This model is called a
zero-sum stochastic differential game.

A pair (σ*,τ*) of stopping times is called a saddle point if

    J_x(σ*,τ) ≤ J_x(σ*,τ*) ≤ J_x(σ,τ*)   for all σ, τ.

The number V(x) = J_x(σ*,τ*) is called the value of the game.

The definition (15) is not symmetric in σ, τ, since when
σ = τ < t_G the function φ₂ (and not φ₁) is relevant; this however
will not affect the results below (which will be symmetric in σ, τ)
provided φ₂ ≤ φ₁.  (Notice that V(x) ≥ φ₂(x), and if the results
should be symmetric then V(x) ≤ φ₁(x), thus leading to the necessary
condition φ₂ ≤ φ₁.)
We introduce the variational inequality with two constraints:

(16)   Lu + f ≥ 0 a.e. where u > φ₂,
       Lu + f ≤ 0 a.e. where u < φ₁,
       φ₂ ≤ u ≤ φ₁ in G,   u = h on ∂G.

We assume the same regularity conditions on L, f, h as before and,
in addition, φ₁, φ₂ belong to C²(Ḡ).

Theorem 3.  There exists a solution u of (16) which belongs to
W^{2,p}(G) for any 1 < p < ∞.  The solution is unique and coincides
with V(x).  Further, the pair (σ*,τ*), where

    σ* = hitting time of the set { u = φ₁ },
    τ* = hitting time of the set { u = φ₂ },

is a saddle point.

Problems.
3.  Prove Theorem 3 by the method of proof of Theorems 1 and 2,
introducing the penalized problem

    Lu_ε + f = β_ε(u_ε − φ₁) + γ_ε(u_ε − φ₂) in G,   u_ε = h on ∂G,

where γ_ε(t) = 0 if t ≥ 0, −γ_ε(t) ↑ ∞ (as ε → 0) if t < 0, and
γ_ε'(t) > 0 if t < 0.

Theorems 1–3 can be generalized to unbounded domains G and to
time-dependent coefficients and data (the differential inequalities
form a parabolic variational inequality).  Also, instead of just
controlling the stopping time, one may introduce nonanticipative
controls into the stochastic differential equations [15].  There is
also some work on non-zero-sum stochastic differential games (see
[2], [15]).

If in a variational inequality the constraint depends on the unknown
solution, then we call this problem a quasi-variational inequality.
Such problems arise in non-zero-sum stochastic differential games.
Another model which gives rise to such a problem is when the control
variable is a sequence of stopping times τ = (τ₁,τ₂,...).  We refer
to [5], [6] for a model of this kind, where τ₁,τ₂,... are the times
for ordering stock from the warehouse.  Another model arising in
quality control is studied in [1].
$7;
S t o c h a s t i c switching and n o n l i n e a r e l l i p t i c e q u a t i o n s n For any p > 0, we denote by w ~ ' ~ " ( R) t h e c l a s s of func-
tions
u
such t h a t
Let
be e l l i p t i c o p e r a t o r s s a t i s f y i n g :
and i n t r o d u c e t h e corresponding systems of s t o c h a s t i c
d i f f e r e n t i a l equations
where o
k
i s t h e p o s i t i v e s q u a r e r o o t of t h e m a t r i x ( a i j ) .
Let v(t) be any step function with values in {1, 2, ...}. We call v a control function and denote by V the set of all controls. To each v ∈ V we associate the trajectory ξ^v(t), defined by the stochastic differential equation with coefficients σ^{v(t)}, b^{v(t)} and initial condition ξ^v(0) = x. Thus ξ^v(t) coincides with ξ^k(t) "as long as" v(t) = k. The construction of a continuous process ξ^v(t) and its uniqueness can be proved by the successive approximation method of §4.

We now introduce a cost functional which depends on a sequence of given functions f^k(x), for which suitable growth conditions hold, and on a discount factor α > 0:

    J_x(v) = E ∫_0^∞ e^{-αt} f^{v(t)}(ξ^v(t)) dt.

Consider the problem of finding

(6)    V(x) = inf_{v ∈ V} J_x(v).

This is the problem of optimizing the running cost f^{v(t)} when one is allowed to switch freely from one stochastic system to another. Krylov [21] studied this problem.
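The cost J_x(v) in (6) lends itself to direct simulation. The sketch below is illustrative only: the two one-dimensional systems, their running costs, the threshold switching rule, and all numerical parameters are made-up assumptions, not taken from the lectures. It discretizes the controlled trajectory ξ^v(t) by the Euler scheme and estimates the discounted cost for two particular controls; the infimum V(x) would be approached by searching over such rules.

```python
import numpy as np

# Two hypothetical one-dimensional systems d xi^k = b_k dt + sigma_k dw with
# running costs f^k; none of these choices come from the text.
b     = [lambda x: -x,       lambda x: -2.0 * x]
sigma = [lambda x: 1.0,      lambda x: 0.3]
f     = [lambda x: x * x,    lambda x: x * x + 0.5]
alpha = 1.0            # discount factor
dt, T = 2e-3, 5.0      # Euler step and truncation of the infinite horizon
rng = np.random.default_rng(0)

def cost(x0, control, n_paths=500):
    """Monte Carlo estimate of J_x(v) = E int_0^inf e^{-alpha t} f^{v(t)}(xi^v(t)) dt."""
    x = np.full(n_paths, float(x0))
    J = np.zeros(n_paths)
    for i in range(int(T / dt)):
        t = i * dt
        k = control(x)                                   # regime for each path
        drift = np.where(k == 0, b[0](x), b[1](x))
        vol   = np.where(k == 0, sigma[0](x), sigma[1](x))
        run   = np.where(k == 0, f[0](x), f[1](x))
        J += np.exp(-alpha * t) * run * dt               # accumulate discounted cost
        x += drift * dt + vol * np.sqrt(dt) * rng.standard_normal(n_paths)
    return J.mean()

J_always0 = cost(1.0, lambda x: np.zeros(len(x), dtype=int))     # never switch
J_switch  = cost(1.0, lambda x: (np.abs(x) > 0.7).astype(int))   # threshold rule
print(J_always0, J_switch)
```

Comparing a few such rules already shows how switching between systems trades running cost against the dynamics of each regime.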
Krylov's main result is the following.

Theorem 1. If α is sufficiently large, then V ∈ W^{2,p}_{loc}(R^n) for all p < ∞, V satisfies (7) and (8), and V is uniquely determined by (7), (8).

Equation (8) is called the Bellman equation. Krylov's proof is probabilistic and does not extend to the corresponding problem in a domain G, G ≠ R^n (which will be defined in detail below); his proof uses, among other things, the inequality (18), §5.
Now let G be a bounded domain with C² boundary ∂G, and define a cost functional

(9)    J_x(v) = E ∫_0^τ e^{-αt} f^{v(t)}(ξ^v(t)) dt,

where τ is the exit time from G; let V(x) be again defined by (6). Consider the problem of characterizing V(x) as the solution of the Dirichlet problem for the Bellman equation:

(10)    inf_k ( L^k u(x) - αu(x) + f^k(x) ) = 0 a.e. in G,    u = 0 on ∂G.
The following result is due to Evans and Friedman [12].

Theorem 2. Assume that the coefficients a^k_{ij} are constants. Then, for any α > 0, there exists a unique solution u of (10) in a suitable Sobolev class, and u = V.

Before outlining the proof we introduce, as a motivation, another stochastic control problem corresponding to a finite number m of the elliptic operators, say L^1, ..., L^m.
The previous control variable v(t) is now restricted to a countable number of switchings, and, furthermore, the switchings are cyclic, i.e., from state i to state i + 1 (where state m + 1 is identified with state 1). Equivalently, we take the control variable to be a sequence of stopping times θ = (θ_1, θ_2, ...) with θ_1 ≤ θ_2 ≤ ⋯. To write down the cost J_x(θ), we fix positive numbers k_1, ..., k_m and then define the cost functional accordingly.
Thus the switching from ξ^i to ξ^{i+1} incurs cost k_i. Set

    V^1(x) = inf_θ J_x(θ).

Similarly we can define a cost J^i_x(θ) starting with ξ^i instead of ξ^1; i.e., set V^i(x) = inf_θ J^i_x(θ). Proceeding formally, we arrive at the following system of variational inequalities for u^i = V^i(x):
This system, (11), was studied in [3], [4] in case m = 2. It is clear from the above model that if k_i → 0 (1 ≤ i ≤ m), then each u^i(x) should converge to the same function V(x), where V(x) is defined in (6). Thus one is motivated to first solve (11) and then take k_i → 0.
In order to solve (11), we introduce a penalty term β_ε(u^i - u^{i+1} - k_i), where β_ε is defined as in §6, and then take ε → 0. In this way one can show (even when the a^k_{ij} are not constants) that there is a unique solution of (11) such that u^i ∈ W^{2,p}_{loc} for any p < ∞. (One can also prove this result by more probabilistic methods, based upon approximating the costs J^i_x(θ) by cost functionals in which only a finite number of times θ_1 < θ_2 < ⋯ < θ_N is used, and then letting N → ∞; see [12].)
Since we are mainly interested in solving (10), it is technically simpler to work directly with the penalized problem (13) and take ε → 0, hoping to get the solution of (12) as lim_{ε→0} u^i_ε(x). The existence of a unique classical solution of (13) is rather standard. The next step, which is crucial, is to derive a priori estimates in which the constant C is independent of ε. (Details are omitted.) Using these estimates one proceeds to show that, as ε → 0, the u^i_ε converge and the limit V_m(x) is a solution of (12). Next we take m → ∞ and show that V_m(x) → V(x), where V(x) is the asserted solution of (10). Uniqueness follows by the method of Krylov [21] (see also [7]).
As an immediate application of Theorem 2, consider the Dirichlet problem for a highly nonlinear elliptic equation

(16)    F(u_{x_i x_j}) = 0 in G,    u = 0 on ∂G.

Assume: F: R^{n²} → R is convex and C², and

(17)    Σ_{i,j} F_{ξ_{ij}}(ξ) ζ_i ζ_j ≥ γ|ζ|²,    γ > 0 (ellipticity).

Then we can write (16) as a Bellman equation, taking the infimum over linear constant-coefficient operators with matrices (ξ_{ij}) with rational coordinates. Thus there exists a unique solution of (16), (17) in W^{2,∞}(G).
§8. Probabilistic methods in singular perturbations

Consider the uniformly elliptic operator

    L_ε u = ε Σ_{i,j} a_{ij}(x) ∂²u/∂x_i∂x_j + Σ_i b_i(x) ∂u/∂x_i

with Lipschitz continuous coefficients in a bounded domain D with C² boundary, and set Lu = L_1 u. We shall be interested in the following problems.

Problem 1. Denote by u_ε the solution of the Dirichlet problem

(1)    L_ε u_ε = 0 in D,    u_ε = φ on ∂D.

Find the behavior of u_ε(x) as ε → 0.

Problem 2. Denote by λ_ε the principal eigenvalue and φ_ε the principal eigenfunction of

(2)    L_ε φ_ε + λ_ε φ_ε = 0 in D,    φ_ε = 0 on ∂D.

Find the behavior of λ_ε, φ_ε as ε → 0; here φ_ε is normalized by ∫_D φ_ε² dx = 1.

The solution of (1) can be written in the form

(3)    u_ε(x) = E_x φ(ξ_ε(τ_ε)),

where ξ_ε(t) solves dξ_ε = √(2ε) σ(ξ_ε) dw + b(ξ_ε) dt, σ is the positive square root of (a_{ij}), and τ_ε is the exit time of ξ_ε(t) from D. The behavior of τ_ε as ε → 0 depends in a crucial way on the behavior of the solutions of the ordinary differential equations

(4)    dx/dt = b(x(t)).

Suppose all solutions of (4) leave D in finite time τ^x (depending on the initial point x(0) = x). It is easily shown that, for any T < ∞ and δ > 0,

    P( sup_{0≤t≤T} |ξ^x_ε(t) - x_x(t)| > δ ) → 0 as ε → 0,

where ξ^x_ε(t) is the solution of the stochastic equation with ξ^x_ε(0) = x. It follows that u_ε(x) → u_0(x) = φ(x_x(τ^x)) as ε → 0.
Consider now the extreme case where

(7)    b · ν < 0 on ∂D,

where ν is the outward normal. This condition implies that the solutions of (4) cannot reach ∂D in any time t > 0. Thus we shall assume:

(A) There is a point x⁰ ∈ D such that every solution of (4) enters a given neighborhood of x⁰ in finite time. Further, x⁰ is a stable equilibrium point for (4) in the sense that b(x⁰) = 0 and the Jacobian matrix of a^{-1}b at x⁰ has all negative eigenvalues.

(B) There exists a function ψ(x) in D̄ such that

    b_j(x) = Σ_{k=1}^n a_{jk}(x) ∂ψ/∂x_k(x) in D̄.
We shall consider the Dirichlet problem

(8)    ε Σ_j ∂/∂x_j ( Σ_{k=1}^n a_{jk}(x) ∂u_ε/∂x_k ) + Σ_j b_j(x) ∂u_ε/∂x_j = 0 in D,    u_ε = φ on ∂D.

Notice that if a_{jk} = const., then (8) reduces to (1). Set

    C = lim_{ε→0} [ ∫_{∂D} e^{ψ/ε} (b·ν) φ dS ] / [ ∫_{∂D} e^{ψ/ε} (b·ν) dS ],

if this limit exists.

Theorem 1. Let (7), (A), (B) hold. If C exists, then the solution u_ε of (8) satisfies: u_ε(x) → C uniformly in bounded subsets of D.

For the Dirichlet problem (1) we can establish a similar result (if (A), (B) hold), with C given by the same quotient.
These formulas were discovered heuristically by Matkowsky and Schuss [23] and proved under various restrictions by Kamin [19]. The proof in the general case is due to Devinatz and Friedman [9] and exploits both probabilistic considerations and elliptic estimates.

Theorem 1 requires self-adjointness of the elliptic operator (with respect to the measure e^{ψ/ε} dx). Consider now the problem (1), without the condition (B), but assuming (7) and (A).
Introduce the functional

    I_T(ξ) = (1/2) ∫_0^T ||ξ̇(t) - b(ξ(t))||²_{ξ(t)} dt

if ξ(t) is absolutely continuous (I_T(ξ) = ∞ if ξ is not absolutely continuous), where

    ||x||_y = ( Σ_{i,j} a^{-1}_{ij}(y) x_i x_j )^{1/2}

and (a^{-1}_{ij}(x)) is the inverse of (a_{ij}(x)). Let ξ(t) (0 ≤ t ≤ T) vary over all continuous curves such that ξ(0) = x⁰, ξ(T) = y, and ξ(t) ∈ D if 0 < t < T, where y is a fixed point on ∂D. Let

    V(y) = inf I_T(ξ)

over all such ξ and all T < ∞. V(y) is called a quasipotential. It measures in some sense the amount of work required to move a particle from x⁰ to y against the dynamical system (4). It is easy to verify that V(y) is Lipschitz continuous. It can also be shown that, under the conditions of Theorem 1, V(y) = 2(ψ(x⁰) - ψ(y)) at the points where ψ takes its maximum. Denote this set of points by Σ.

Theorem 2. If φ = const. = C on Σ, then (under the assumptions (7), (A)) u_ε(x) → C uniformly on compact subsets of D.
This result is due to Ventcel and Freidlin [25]. Their proof is based on the following asymptotic estimates: for any open set G and any closed set H in the path space,

(12)    liminf_{ε→0} 2ε log P^ε_x(G) ≥ - inf_{ω∈G} I_T(ω),

(13)    limsup_{ε→0} 2ε log P^ε_x(H) ≤ - inf_{ω∈H} I_T(ω),

where P^ε_x is the measure induced by ξ_ε(t) with ξ_ε(0) = x. For proofs see also [15].

It remains an open problem to determine whether lim u_ε exists when the only assumptions are (7) and (A).
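The exponential smallness expressed by (12), (13) is easy to see in a simulation. The example below is a one-dimensional illustration chosen by the editor (a ≡ 1, b(x) = -x, D = (-1,1), x⁰ = 0, so that (7) and (A) hold); it is not taken from the text. The mean exit time of dξ = -ξ dt + √(2ε) dw from D grows rapidly as ε decreases, reflecting the exp(-V/2ε) scale of exit probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 5e-3

def mean_exit_time(eps, n_paths=400, t_max=80.0):
    """Monte Carlo mean exit time of d xi = -xi dt + sqrt(2 eps) dw from (-1, 1)."""
    x = np.zeros(n_paths)                      # start at the stable point x0 = 0
    alive = np.ones(n_paths, dtype=bool)
    exit_t = np.full(n_paths, t_max)           # paths that never exit are censored
    for i in range(int(t_max / dt)):
        if not alive.any():
            break
        noise = rng.standard_normal(alive.sum())
        x[alive] += -x[alive] * dt + np.sqrt(2.0 * eps * dt) * noise
        just_out = alive & (np.abs(x) >= 1.0)
        exit_t[just_out] = i * dt
        alive &= ~just_out
    return exit_t.mean()

taus = [mean_exit_time(eps) for eps in (0.5, 0.3, 0.2)]
print(taus)    # mean exit times, increasing as eps decreases
```

Plotting 2ε log of the exit time against 1/ε would exhibit the linear growth predicted by the quasipotential.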
We now consider Problem 2.

Lemma 3. Define Λ = sup{ λ ≥ 0: sup_{x∈D} E_x e^{λτ} < ∞ }, where τ = exit time from D. Then Λ = λ₀, where λ₀ is the principal eigenvalue of L. For proof, see [15].

Theorem 4. Let (7) and (A) hold. Then the principal eigenvalue λ_ε satisfies

(14)    -2ε log λ_ε → V*    (V* = min_{y∈∂D} V(y)).
Theorem 5. If, in addition to (7), (A), we have a_{ij} = δ_{ij} and b = ∇ψ, then the principal eigenfunction φ_ε satisfies

(15)    φ_ε(x) → const. = c

uniformly in compact subsets of D and boundedly in D (φ_ε normalized by ∫_D φ_ε² dx = 1).

The proof of Theorem 4 (which was originally asserted by Ventcel [24]) is given in Friedman [15]. The proof of Theorem 5 is due to Devinatz and Friedman [8]. The proofs use the Ventcel-Freidlin estimates (12), (13) and (in the case of Theorem 5) some elliptic estimates. Theorem 5 holds for general a_{ij} provided a^{-1}b = ∇ψ in a neighborhood of x⁰. It remains an open question to prove the theorem without this restriction.

Other results are known on the behavior of λ_ε, φ_ε under different types of conditions on b(x). For instance, if b(x) has a zero of order ν at x⁰ and all solutions of (4) with x(0) ≠ x⁰ leave D in finite time, then

    φ_ε² → Dirac delta function supported at x⁰.

For proof see [8] and the references given there. We finally mention that the Ventcel-Freidlin estimates have been used to obtain the precise asymptotic behavior of other quantities; for instance, the Green function q_ε(t,x,y) of L_ε - ∂/∂t; see [15].
Problems.

1. Let D ⊂ D* and denote by λ₀ and λ₀* the principal eigenvalues corresponding to L in D and D*, respectively. Prove that λ₀ ≥ λ₀*.

2. If τ^x = ∞ for some x ∈ D, then λ₀ = 0.

3. Use the Ventcel-Freidlin estimates to show that, for any δ > 0, λ_ε ≥ c₁ e^{-(V*+δ)/2ε} for all ε small, where c₁ is a positive constant.

4. Let u_ε satisfy the corresponding parabolic problem. Use the Ventcel-Freidlin estimates to prove that

    lim_{ε→0} 2ε log u_ε(x,t) = - I(t,x,∂D),

where I(t,x,∂D) = inf I_t(φ), φ varying over all functions satisfying φ(0) = x and min_{0≤s≤t} dist(φ(s), ∂D) = 0.
References

[1] R. F. Anderson and A. Friedman, Multi-dimensional quality control problem, Trans. Amer. Math. Soc., to appear.
[2] A. Bensoussan and A. Friedman, Non-zero sum stochastic differential games with stopping times and free boundary problems, Trans. Amer. Math. Soc., 231 (1977), 275-327.
[3] A. Bensoussan and A. Friedman, On the support of the solution of a system of quasi variational inequalities, J. Math. Anal. and Appl., to appear.
[4] A. Bensoussan and J. L. Lions, Contrôle impulsionnel et systèmes d'inéquations quasi variationnelles, C. R. Acad. Sc. Paris, 278 (1974), 747-751.
[5] A. Bensoussan and J. L. Lions, Nouvelles méthodes en contrôle impulsionnel, Appl. Math. and Optimization, 1 (1975), 289-312.
[6] A. Bensoussan and J. L. Lions, Temps d'arrêt et contrôle impulsionnel: inéquations variationnelles et quasi variationnelles d'évolution, Univ. Paris IX, Cahier de Math. de la Décision, 1975, no. 7523.
[7] A. Bensoussan and J. L. Lions, Temps d'arrêt optimal, Dunod, Paris, 1978.
[8] A. Devinatz and A. Friedman, Asymptotic behavior of the principal eigenfunction for a singularly perturbed Dirichlet problem, Indiana Univ. Math. J., 27 (1978), 143-157.
[9] A. Devinatz and A. Friedman, The asymptotic behavior of the solution of a singularly perturbed Dirichlet problem, Indiana Univ. Math. J., to appear.
[10] J. L. Doob, Stochastic Processes, Wiley, New York, 1967.
[11] E. B. Dynkin, Markov Processes, Vols. 1, 2, Springer-Verlag, Berlin, 1965.
[12] L. C. Evans and A. Friedman, Optimal stochastic switching and the Dirichlet problem for the Bellman equation, to appear.
[13] A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[14] A. Friedman, Stochastic Differential Equations and Applications, Vol. 1, Academic Press, New York, 1975.
[15] A. Friedman, Stochastic Differential Equations and Applications, Vol. 2, Academic Press, New York, 1976.
[16] I. I. Gikhman and A. V. Skorohod, The Theory of Stochastic Processes I, Springer-Verlag, Berlin, 1974.
[17] I. I. Gikhman and A. V. Skorohod, The Theory of Stochastic Processes II, Springer-Verlag, Berlin, 1975.
[18] I. I. Gikhman and A. V. Skorohod, The Theory of Stochastic Processes III, Springer-Verlag, Berlin, to appear.
[19] S. Kamin, On elliptic singular perturbation problems with turning points, SIAM J. Appl. Math., to appear.
[20] N. V. Krylov, An inequality in the theory of stochastic integrals, Th. Prob. Appl., 16 (1971), 438-448.
[21] N. V. Krylov, Control of a solution of a stochastic integral equation, Th. Prob. Appl., 17 (1972), 114-130.
[22] H. P. McKean, Stochastic Integrals, Academic Press, New York, 1971.
[23] B. J. Matkowsky and Z. Schuss, On the exit problem for randomly perturbed dynamical systems, SIAM J. Appl. Math., 33 (1977), 365-382.
[24] A. D. Ventcel, On the asymptotic behavior of the greatest eigenvalue of a second order elliptic differential operator with a small parameter in the highest derivatives, Soviet Math. Dokl., 13 (1972), 13-17.
[25] A. D. Ventcel and M. I. Freidlin, On small random perturbations of dynamical systems, Russian Math. Surveys, 25 (1970), 1-56 [Uspehi Mat. Nauk, 25 (1970), 3-55].
CENTRO INTERNAZIONALE MATEMATICO ESTIVO (C.I.M.E.)

THEORY OF DIFFUSION PROCESSES

D. Stroock and S.R.S. Varadhan
University of Colorado and C.I.M.S., N.Y.U.
Section I

Let x(t) be a Markov process and assume that

(1.1)    E[ φ(x(t+h)) - φ(x(t)) | x(s) for s ≤ t ] = h L_t φ(x(t)) + o(h)

for φ ∈ C_0^∞(R^d). It is not difficult to check that L_t must be a linear operator which is quasi-local (i.e., for each x ∈ R^d and ε > 0 there is a constant C_ε < ∞ such that |L_t φ(x)| ≤ C_ε ||φ|| for all φ ∈ C_0^∞(R^d \ B(x,ε)); here and throughout, ||·|| denotes the uniform norm). Moreover, L_t must satisfy the weak maximum principle, in that if φ(x) = max_{y∈R^d} φ(y) then certainly L_t φ(x) ≤ 0. From these observations one can conclude that L_t ought to be of the form

    L_t φ(x) = (1/2) Σ_{i,j=1}^d a^{ij}(t,x) ∂²φ/∂x_i∂x_j + Σ_{i=1}^d b^i(t,x) ∂φ/∂x_i
               + ∫_{R^d\{0}} [ φ(x+y) - φ(x) - (∇φ(x), y)/(1+|y|²) ] M(t,x;dy),

where ((a^{ij}(t,x))) is an element of the class S_d of symmetric non-negative definite matrices and M(t,x;·) is a σ-finite non-negative measure on R^d\{0} such that ∫ |y|²/(1+|y|²) M(t,x;dy) < ∞. The assertion that L_t must have this form is the analytic statement of the renowned Lévy-Khinchine decomposition theorem. In particular, it says that the process x(t+h) - x(t), for small h, behaves like the independent increment process whose Gaussian part has covariance a(t,x(t)), whose Poisson jump part has Lévy measure M(t,x(t);·), and whose drift is b(t,x(t)). Since our attention in these lectures will be devoted to processes which are continuous with respect to t, we can and will assume from now on that the jump part is absent, so that

(1.2)    L_t = (1/2) Σ_{i,j=1}^d a^{ij}(t,x) ∂²/∂x_i∂x_j + Σ_{i=1}^d b^i(t,x) ∂/∂x_i.
The central theme of these lectures will be the investigation of what can be said when one tries to pursue the preceding line of reasoning in the opposite direction. That is, suppose that an L_t of the form in (1.2) is given. Then there are two key questions which we wish to answer: i) is there a continuous process x(·) for which (1.1) obtains, and ii) is there at most one such process if one also specifies the initial data? Before these questions can be studied, it is essential to give a precise formulation of what we mean by a stochastic process satisfying (1.1).

Since we are going to be restricting our attention to continuous processes, our basic sample space will be Ω = C([0,∞), R^d), endowed with the topology of uniform convergence on compact intervals. As such, Ω is a complete separable metric space, and we will denote by 𝔐 the Borel field over Ω. Given ω ∈ Ω, we use x(t,ω) to denote the position of ω at time t ≥ 0. In this way, x(t) becomes an R^d-valued random variable on Ω for each t ≥ 0. Next define 𝔐_t = σ( x(s): 0 ≤ s ≤ t ) for each t ≥ 0. Clearly 𝔐_t is a sub-σ-algebra of 𝔐 for each t ≥ 0, and one can easily check that 𝔐 = σ( ∪_{t≥0} 𝔐_t ).
From now on, a stochastic process satisfying (1.1) will be for us a probability measure P on (Ω, 𝔐) such that

(1.1')    lim_{h↓0} (1/h) E^P[ φ(x(t+h)) - φ(x(t)) | 𝔐_t ] = L_t φ(x(t))    (a.s., P)

for all t ≥ 0 and φ ∈ C_0^∞(R^d). Observe that in this formulation the paramount role is played by the measure P, whereas the paths x(·) are relegated to the position of artifacts.
We next want to manipulate (1.1') into a more convenient form. Let 0 ≤ t_1 ≤ t_2 be given. Then, integrating (1.1'),

    E^P[ φ(x(t_2)) - φ(x(t_1)) | 𝔐_{t_1} ] = E^P[ ∫_{t_1}^{t_2} L_u φ(x(u)) du | 𝔐_{t_1} ],

or, equivalently: the quantity

(1.5)    X_φ(t) = φ(x(t)) - ∫_0^t L_u φ(x(u)) du

is conditionally constant under P.
This version of (1.1') is pleasing on both intuitive as well as technical grounds. Indeed, if the second order part of L_t is absent and

    L_t = Σ_{i=1}^d b^i(t,x) ∂/∂x_i,

then one would expect the process associated with it to be concentrated on an integral curve of ẋ = b(t,x), in which case X_φ(t) would be actually (not just conditionally) constant. Processes which are conditionally constant play such an important role in probability theory that they have been given a special name: they are called martingales. With this terminology we can now formulate our problem in its final form. Given L_t as in (1.2), we will say that the probability measure P on (Ω, 𝔐) solves the martingale problem for L_t starting from (s,x) if:

a) P( x(t) = x, 0 ≤ t ≤ s ) = 1;

b) X_φ(t ∨ s) is a P-martingale for all φ ∈ C_0^∞(R^d), where X_φ is defined by (1.5).

We propose to study the following questions:

Existence: for each (s,x), is there a solution P to the martingale problem for L_t starting from (s,x)?

Uniqueness: for each (s,x), is there at most one such P?

In addition, we will be interested in finding out what conclusions can be drawn from affirmative answers to i) and ii).
In order to carry out this program, we are going to require various preliminaries of a more or less standard nature. These fall quite naturally into two categories: the general theory of probability measures on (Ω, 𝔐), and the theory of martingales. The rest of this lecture will be devoted to the first of these topics.

Let M(Ω) stand for the set of all probability measures on (Ω, 𝔐). The topology on M(Ω) with which we will be concerned is the so-called weak topology: the smallest topology with respect to which P ↦ E^P[F] is continuous for all F ∈ C_b(Ω). It is possible to find a metric on M(Ω) so that M(Ω) with the weak topology becomes a complete separable metric space. More important for our purposes is that we can characterize the compact subsets of M(Ω). In fact, by Prokhorov's theorem, Γ ⊆ M(Ω) is pre-compact if and only if for each ε > 0 there is a compact K_ε ⊆ Ω such that inf_{P∈Γ} P(K_ε) ≥ 1 - ε. Since the compact subsets of Ω are characterized by the Arzelà-Ascoli theorem, we can now say that Γ ⊆ M(Ω) is pre-compact if and only if

(1.7)    lim_{A→∞} inf_{P∈Γ} P( |x(0)| ≤ A ) = 1    and

         lim_{δ↓0} inf_{P∈Γ} P( sup_{0≤s<t≤T, t-s<δ} |x(t) - x(s)| ≤ ρ ) = 1    for all T ≥ 0 and ρ > 0.
Since the second part of the condition (1.7) is in practice difficult to check, it is well to have more easily verified sufficient conditions. One such criterion is that of Kolmogorov: let Γ ⊆ M(Ω) satisfy the first part of (1.7), and suppose that for each T > 0 there is a C_T < ∞ such that

(1.9)    sup_{P∈Γ} E^P[ |x(t_2) - x(t_1)|^4 ] ≤ C_T (t_2 - t_1)²,    0 ≤ t_1 < t_2 ≤ T.

Then Γ satisfies (1.7) and is therefore pre-compact.
As an example of how Kolmogorov's criterion can be applied, let X_1, ..., X_n, ... be independent R^d-valued standard normal random variables (i.e., P(X_n ∈ A) = (2π)^{-d/2} ∫_A e^{-|x|²/2} dx), and define

    S_n(t) = n^{-1/2} ( X_1 + ⋯ + X_{[nt]} + (nt - [nt]) X_{[nt]+1} ),    t ≥ 0,

where [x] is the largest integer in x (clearly t ↦ S_n(t) is continuous). Now let P_n on (Ω, 𝔐) denote the distribution of S_n(·). Check that P_n(x(0) = 0) = 1 and that

    sup_{n≥1} E^{P_n}[ |x(t_2) - x(t_1)|^4 ] ≤ C (t_2 - t_1)²,    0 ≤ t_1 < t_2,

where C < ∞ is independent of n ≥ 1. Hence {P_n: n ≥ 1} is pre-compact. Next let P denote any limit point of {P_n: n ≥ 1}, and check that for N ≥ 1, 0 < t_1 < ⋯ < t_N, and φ_1, ..., φ_N ∈ C_b(R^d):

(1.10)    E^P[ Π_{j=1}^N φ_j(x(t_j)) ] = ∫ ⋯ ∫ Π_{j=1}^N g(t_j - t_{j-1}, y_j - y_{j-1}) φ_j(y_j) dy_1 ⋯ dy_N,

where t_0 = 0, y_0 = 0, and

    g(t,y) = (2πt)^{-d/2} e^{-|y|²/2t}.

This shows that all limit points of {P_n: n ≥ 1} coincide, and therefore that lim_{n→∞} P_n exists. It also proves the existence of a P on (Ω, 𝔐) satisfying (1.10). The measure satisfying (1.10) is the famous one constructed by Wiener and will be denoted by 𝒲. We will later see that 𝒲 is the one and only solution to the martingale problem for (1/2)Δ starting from (0,0).
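The uniform fourth-moment bound that drives this argument can be checked empirically for the interpolated process S_n(t). In the sketch below (a one-dimensional illustration; the sample sizes and the pair t_1, t_2 are arbitrary choices), the empirical value of E|S_n(t_2) - S_n(t_1)|⁴ stays near 3(t_2 - t_1)², the Gaussian value, uniformly in n.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_S_n(n, tgrid, n_paths):
    """Sample the interpolated partial-sum process S_n at the times in tgrid."""
    m = int(np.ceil(tgrid[-1] * n)) + 1
    X = rng.standard_normal((n_paths, m))              # X_1, X_2, ... (d = 1)
    csum = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(X, axis=1)], axis=1)
    out = np.empty((n_paths, len(tgrid)))
    for j, t in enumerate(tgrid):
        k = int(np.floor(n * t))                       # [nt]
        out[:, j] = (csum[:, k] + (n * t - k) * X[:, k]) / np.sqrt(n)
    return out

t1, t2 = 0.3, 0.8
fourth_moments = {}
for n in (10, 100, 1000):
    S = sample_S_n(n, [t1, t2], 5000)
    incr = S[:, 1] - S[:, 0]
    fourth_moments[n] = np.mean(incr**4)
print(fourth_moments, 3 * (t2 - t1)**2)   # compare with the Gaussian value 0.75
```

Since the bound does not deteriorate as n grows, the family {P_n} passes Kolmogorov's criterion, exactly as claimed.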
Another important fact about probability measures on Ω is the following. Let 𝔐' be a countably generated sub-σ-algebra of 𝔐 and let P ∈ M(Ω). Then conditional expectations with respect to P given 𝔐' can be computed by integration. To be precise, there is a family {P_ω: ω ∈ Ω} ⊆ M(Ω) such that ω ↦ P_ω(A) is 𝔐'-measurable for all A ∈ 𝔐, and

    E^P[ P_·(A), B ] = P(A ∩ B)    for all A ∈ 𝔐 and B ∈ 𝔐'.

In particular,

    E^P[ F | 𝔐' ] = E^{P_·}[F]    (a.s., P)

for all bounded 𝔐-measurable F: Ω → C. In the future we will call such a family {P_ω} a regular conditional probability distribution of P given 𝔐', and we will abbreviate this by saying that {P_ω} is a r.c.p.d. of P | 𝔐'.
Of particular importance to us will be the case when 𝔐' = 𝔐_τ for a stopping time τ. Namely, let τ: Ω → [0,∞] be a stopping time (i.e., {τ ≤ t} ∈ 𝔐_t for all t ≥ 0) and set

    𝔐_τ = { A ∈ 𝔐: A ∩ {τ ≤ t} ∈ 𝔐_t for all t ≥ 0 }.

One can show that 𝔐_τ = σ( x(t ∧ τ): t ≥ 0 ), and therefore that 𝔐_τ is countably generated and that the atom in 𝔐_τ containing ω_0 is

    { ω ∈ Ω: x(t,ω) = x(t,ω_0) for 0 ≤ t ≤ τ(ω_0) }.

Hence, if {P_ω} is a r.c.p.d. of P | 𝔐_τ, then P_ω( x(t) = x(t,ω), 0 ≤ t ≤ τ(ω) ) = 1.

One final property of probability measures on Ω is that they can be "glued together" at stopping times. That is, let P be a probability measure on (Ω, 𝔐), let τ be a stopping time, and let {Q_ω: ω ∈ Ω} ⊆ M(Ω) be a family such that, for all A ∈ 𝔐, ω ↦ Q_ω(A) is 𝔐_τ-measurable and Q_ω( x(τ(ω)) = x(τ(ω),ω) ) = 1. Let δ_ω ⊗_{τ(ω)} Q_ω denote the measure which follows the path ω up to time τ(ω) and is thereafter governed by Q_ω, and define P ⊗_τ Q_· on (Ω, 𝔐) by

    (P ⊗_τ Q_·)(A) = E^P[ (δ_· ⊗_{τ(·)} Q_·)(A) ]    for A ∈ 𝔐.

Then P ⊗_τ Q_· is the unique probability measure R on (Ω, 𝔐) such that R coincides with P on 𝔐_τ and {δ_ω ⊗_{τ(ω)} Q_ω} is a r.c.p.d. of R | 𝔐_τ.
Section II

Let a: [0,∞) × R^d → S_d and b: [0,∞) × R^d → R^d be bounded measurable functions, and define L_t accordingly by (1.2). In this lecture we are going to develop various equivalent formulations of the martingale problem for L_t. Each of these formulations has its own special virtues, and the ability to go from one to another will facilitate our understanding of the questions raised in Lecture I. The basic tool used in proving their equivalence is the following elementary lemma.

Lemma (2.1): Let X: [0,∞) × Ω → C be a continuous P-martingale, and let Y: [0,∞) × Ω → C be a continuous function which is adapted to {𝔐_t: t ≥ 0} (i.e., Y(t) is 𝔐_t-measurable) and has the additional property that, for all T > 0 and ω ∈ Ω, the total variation of Y(·,ω) on [0,T] is bounded by some constant C_T < ∞ which doesn't depend on ω. Then

    X(t)Y(t) - ∫_0^t X(s) Y(ds)

is again a P-martingale.

Proof: Let t_1 < t_2 and A ∈ 𝔐_{t_1} be given, and let t_1 = s_0 < s_1 < ⋯ < s_n = t_2 be partition points of the interval [t_1, t_2]. Writing the increment of X(t)Y(t) - ∫_0^t X(s) Y(ds) over [t_1, t_2] as a sum over the partition and using the martingale property of X on each subinterval, one sees that the expectation over A of the increment tends to 0 as max_{0≤k≤n-1} |s_{k+1} - s_k| → 0.
Theorem (2.2): Let P ∈ M(Ω) satisfy P( x(t) = x, 0 ≤ t ≤ s ) = 1. Then the following statements are equivalent:

a) P solves the martingale problem for L_t starting from (s,x);

b) for all θ ∈ R^d, X_θ(t ∨ s) is a P-martingale, where

    X_θ(t) = exp[ (θ, x(t) - x(s) - ∫_s^t b(u,x(u)) du) - (1/2) ∫_s^t (θ, a(u,x(u))θ) du ];

c) for all θ ∈ R^d, X̃_θ(t ∨ s) is a P-martingale, where

    X̃_θ(t) = exp[ i(θ, x(t) - x(s) - ∫_s^t b(u,x(u)) du) + (1/2) ∫_s^t (θ, a(u,x(u))θ) du ].

Moreover, if P satisfies a), then for all T > 0 and R > 0:

(2.4)    P( sup_{s≤t≤T} | x(t) - x(s) - ∫_s^t b(u,x(u)) du | ≥ R ) ≤ 2d e^{-R²/2dAT},

where A = sup_{(s,x)} sup_{|θ|=1} (θ, a(s,x)θ). Finally, if f ∈ C^{1,2}([0,∞) × R^d) and, for each T > 0, |f(t,x)| + |(∂/∂t + L_t)f(t,x)| is dominated by C_T e^{λ_T|x|} for some C_T < ∞ and λ_T > 0, then

    X_f(t ∨ s) = f(t ∨ s, x(t ∨ s)) - f(s,x) - ∫_s^{t∨s} (∂/∂u + L_u) f(u, x(u)) du

is a P-martingale.

Proof: Assume that P satisfies a), and let φ ∈ C_b²(R^d) be uniformly positive. Take X(t) in Lemma (2.1) equal to X_φ(t) and Y(t) equal to

    exp[ - ∫_s^t (L_u φ)(x(u)) / φ(x(u)) du ].

Then, by Lemma (2.1),

    φ(x(t)) exp[ - ∫_s^t (L_u φ)(x(u)) / φ(x(u)) du ]

is a P-martingale. In particular, if θ ∈ R^d, R > 0, and τ_R = inf{ t ≥ 0: |x(t)| ≥ R }, then, by Doob's stopping time theorem, X_θ(t ∧ τ_R) is a P-martingale. Note that X_θ(t ∧ τ_R) → X_θ(t) as R → ∞ and that this family is uniformly P-integrable; by a limiting procedure, we can conclude that X_θ(t) is a P-martingale for all θ. This shows that P satisfies b).

Next assume that P satisfies b). Then, by an easy analytic continuation argument, P must satisfy c). Since every φ ∈ C_0^∞(R^d) can be expressed as

    φ(x) = ∫ e^{i(θ,x)} φ̂(θ) dθ,

it is easy to see that c) implies a).

The proof of (2.4) runs as follows. Let θ ∈ R^d with |θ| = 1 be fixed. For λ > 0 we have, by b) and Doob's inequality:

    P( sup_{s≤t≤T} (θ, x(t) - x(s) - ∫_s^t b(u,x(u)) du) ≥ R ) ≤ e^{-λR + λ²AT/2}.

Setting λ = R/AT, we arrive at

    P( sup_{s≤t≤T} (θ, x(t) - x(s) - ∫_s^t b(u,x(u)) du) ≥ R ) ≤ e^{-R²/2AT},

and (2.4) follows quickly from this.

The final part of the theorem can now be proved in two easy steps. First, one shows that X_f(t) is a P-martingale for all f ∈ C_0^{1,2}([0,∞) × R^d). This is easily done by initially assuming that f ∈ C_0^∞([0,∞) × R^d), applying a) to f as a function of x and the fundamental theorem of calculus to f as a function of t; after this has been done, it is easy to pass to all f ∈ C_0^{1,2}([0,∞) × R^d). The second step is to use an approximation procedure and apply (2.4) to justify the passage to the limit. Q.E.D.
As a preliminary application of Theorem (2.2), we point out that
uniqueness of solutions to the martingale problem for L_t can be easily proved whenever one has a strong enough existence theorem for the P.D.E.:

(2.6)    ∂u/∂t + L_t u = -φ for t < T,    lim_{t↑T} u(t,·) = 0,

for φ ∈ C_0^∞(R^d). Indeed, suppose that (2.6) admits a smooth solution u. Then, for any solution P to the martingale problem for L_t starting from (s,x):

(2.7)    u(t ∨ s, x(t ∨ s)) + ∫_s^{t∨s} φ(x(u)) du is a P-martingale for s ≤ t ≤ T,

and therefore

    E^P[ ∫_s^T φ(x(u)) du ] = u(s,x)

for every T > s and φ ∈ C_0^∞(R^d). This means that the one dimensional time marginals of P are uniquely determined. To complete the proof that P itself is unique, we require the next theorem.

Theorem (2.8): Let P solve the martingale problem for L_t starting from (s,x), and let τ ≥ s be a stopping time. If {P_ω} is a r.c.p.d. of P | 𝔐_τ, then there is a P-null set N ∈ 𝔐_τ such that, whenever ω ∉ N, P_ω solves the martingale problem for L_t starting from (τ(ω), x(τ(ω),ω)).
The proof of Theorem (2.8) is not difficult, but it is somewhat tedious. The idea is to show that for each φ ∈ C_0^∞(R^d), X_φ(t∨τ(w)) is a P_w-martingale for P-almost all w. Since C_0^∞(R^d) is separable, one can isolate one P-null set N such that for all w ∉ N and all φ ∈ C_0^∞(R^d), X_φ(t∨τ(w)) is a P_w-martingale.
Given Theorem (2.8), we can now easily complete the argument begun above. Indeed, by Theorem (2.8) plus (2.7), we have that

(2.9)  E^{P_w}[ ∫_t^T φ(x(u)) du ] = u(t, x(t,w))  (a.s., P)

for s ≤ t < T, where (P_w) is the r.c.p.d. of P given M_t. But from (2.9) it is immediate that all finite dimensional time marginals of P are uniquely determined, and therefore P itself is unique.
The preceding line of reasoning applies to many choices of L_t. For instance, if ((a^{ij}(t,x))) and (b^i(t,x)) are bounded and Hölder continuous and if ((a^{ij}(t,x))) is uniformly positive definite, or if ((a^{ij}(t,x))) and (b^i(t,x)) are sufficiently smooth, then (2.6) admits good solutions. In particular, if L_t = (1/2)Δ, then the corresponding martingale problem has at most one solution for each (s,x) ∈ [0,∞) × R^d.

Conversely, if P on (Ω, M) satisfies P(x(t) = x, 0 ≤ t ≤ s) = 1 and

(2.10)  exp[ i(θ, x(t) − x) + (|θ|²/2)((t∨s) − s) ]  is a P-martingale for all θ ∈ R^d,

then P solves the martingale problem for (1/2)Δ starting from (s,x). To see this, note that from (2.10) one gets, for all s ≤ t_1 < t_2 and θ ∈ R^d:

E^P[ exp( i(θ, x(t_2) − x(t_1)) ) | M_{t_1} ] = exp( −(|θ|²/2)(t_2 − t_1) )  (a.s., P).

By Theorem (2.2), this means that P solves the martingale problem for (1/2)Δ, and in fact that these conditions uniquely determine P.
We are now in a position to identify Wiener measure W with the solution to the martingale problem for (1/2)Δ. Starting from (1.10), it is an easy matter to see that W(x(0) = 0) = 1 and that (2.10) is satisfied with P replaced by W. Thus W is the unique solution to the martingale problem for (1/2)Δ starting from (0,0). To get the solution starting from a general (s,x), we take advantage of the translation invariance of W. That is, define Φ_{s,x}: Ω → Ω by

x(t, Φ_{s,x}(w)) = x + x((t − s) ∨ 0, w),

and let W_{s,x} = W ∘ Φ_{s,x}^{−1}. An elementary computation identifies W_{s,x} as the unique solution to the martingale problem for (1/2)Δ starting from (s,x). Clearly (s,x,w) ↦ Φ_{s,x}(w) is jointly continuous, and so W_{s,x} depends continuously on (s,x).
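The map Φ_{s,x} acts on discretized paths by interpolation; the sketch below (our code) shifts a path w started at 0 into t ↦ x + w((t−s) ∨ 0), which sits at x until time s:

```python
import numpy as np

def phi_shift(t, w, s, x):
    """Discrete version of Phi_{s,x}: map a path w (w[0] = 0, sampled at the
    times t) to the path  t |-> x + w((t - s) ∨ 0), via linear interpolation."""
    return x + np.interp(np.maximum(t - s, 0.0), t, w)

t = np.linspace(0.0, 2.0, 201)
rng = np.random.default_rng(0)
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(t[1]), 200))])

y = phi_shift(t, w, s=0.5, x=3.0)
print(y[0], float(y[t <= 0.5].min()), float(y[t <= 0.5].max()))
```

The shifted path is constantly 3.0 on [0, 0.5] and thereafter moves with the increments of w, exactly as the definition of Φ_{s,x} prescribes.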
Section III

We begin in this lecture to prepare the machinery for our attack on the question of uniqueness. Crucial to this enterprise is the relationship between the martingale problem and Itô stochastic integral equations. Throughout this lecture we will be assuming that ((a^{ij})) is uniformly positive definite. Under this assumption we will show that P solves the martingale problem for L_t starting from (s,x) if and only if

(3.1)  x(t) = x + ∫_s^t a^{1/2}(u,x(u)) dβ(u) + ∫_s^t b(u,x(u)) du,  t ≥ s,

where β(·) is a P-Brownian motion after time s relative to the family {M_t: t ≥ s} (i.e., for P-almost all w, β(·,w) is continuous on [s,∞) and β(s) = 0, and β(t) is M_t-measurable for all t ≥ s). The first integral on the right hand side of (3.1) is to be interpreted in the sense of Itô and is well-defined since a^{1/2}(·,x(·)) is bounded and M_t-adapted. It must be emphasized that just because (3.1) holds, there is no implication that x(·) is β(·)-measurable. That is, σ(β(u): s ≤ u ≤ t) may be strictly smaller than σ(x(u): s ≤ u ≤ t).
To see that (3.1) implies that P solves the martingale problem for L_t starting from (s,x), we need only invoke Itô's formula and thereby conclude that if (3.1) holds, then for φ ∈ C_0^∞(R^d):

φ(x(t)) − ∫_s^t (L_u φ)(x(u)) du = φ(x) + ∫_s^t (∇φ(x(u)), a^{1/2}(u,x(u)) dβ(u)).

Since the right hand side of the preceding is a P-martingale, there can be no doubt that P solves the martingale problem for L_t starting from (s,x). The converse statement is not quite so easily proved.
To facilitate the presentation, we will assume that s = 0, x = 0, and that b ≡ 0. Given a solution P to the martingale problem for L_t starting from (0,0), we proceed to develop the theory of Itô type stochastic integrals for the process x(·). This is done, word for word, in the same way as in the case of Brownian motion. That is, we start with bounded measurable functions θ: [0,∞) × Ω → R^d such that θ(t) is M_t-measurable for all t ≥ 0 and θ(t) = θ([nt]/n) for some n ≥ 1. The definition of the integral is then given by the Riemann sum:

∫_0^t (θ(u), dx(u)) = Σ_{k≥0} ( θ(k/n), x(((k+1)/n) ∧ t) − x((k/n) ∧ t) ).

We then observe that for such θ's

(3.2)  X_θ(t) = exp[ ∫_0^t (θ(u), dx(u)) − (1/2) ∫_0^t (θ(u), a(u,x(u))θ(u)) du ]

is a P-martingale.
To see this, assume that 0 ≤ t_1 ≤ t_2 ≤ ([nt_1] + 1)/n and let P_w be a r.c.p.d. of P given M_{t_1}. Then, by b) of Theorem (2.2), E^{P_w}[X_θ(t_2)] = X_θ(t_1, w) for P-almost all w. Clearly this proves that X_θ(t) is a P-martingale. Now replace θ by λθ. Differentiating once and then twice with respect to λ and setting λ = 0, we see that

∫_0^t (θ(u), dx(u))   and   ( ∫_0^t (θ(u), dx(u)) )² − ∫_0^t (θ(u), a(u,x(u))θ(u)) du

are P-martingales. From here it is an easy task to complete the definition of ∫_0^t (θ(u), dx(u)) for general bounded measurable θ: [0,∞) × Ω → R^d which are adapted to {M_t: t ≥ 0}. The procedure is identical to the one used in the Brownian case. In this way, we arrive at a definition of ∫_0^t (θ(u), dx(u)) for such θ's; and the integral enjoys the following properties:

a) ∫_0^t (θ(u), dx(u)) is an a.s. continuous P-martingale;

b) ( ∫_0^t (θ(u), dx(u)) )² − ∫_0^t (θ(u), a(u,x(u))θ(u)) du is a P-martingale;

c) X_θ(t) is a P-martingale, where X_θ is given by (3.2).
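In the special case when x(·) is itself a Brownian path (a ≡ 1, d = 1), the Riemann-sum definition and property b) can be checked by simulation: with θ(u) = x([nu]/n), property b) at t = 1 gives E[(∫_0^1 (θ(u),dx(u)))²] = E[∫_0^1 x(u)² du] ≈ 1/2. A Monte Carlo sketch (our parameters):

```python
import numpy as np

# Riemann-sum stochastic integral sum_k theta(k/n) * (x((k+1)/n) - x(k/n))
# with theta(u) = x([nu]/n) (path frozen at left endpoints), x a Brownian path.
rng = np.random.default_rng(1)
n, n_paths = 1000, 20000
dt = 1.0 / n

dx = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
# value of the path at the left endpoint of each interval (x(0) = 0)
x_left = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dx, axis=1)[:, :-1]], axis=1)

integral = np.sum(x_left * dx, axis=1)       # ∫_0^1 theta dx, path by path
second_moment = float(np.mean(integral**2))  # property b): should be close to 1/2
print(second_moment)
```

The left-endpoint (Itô) evaluation is essential here; a right-endpoint sum would converge to a different limit.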
Next suppose that σ: [0,∞) × Ω → R^d ⊗ R^d is a bounded measurable function which is adapted to {M_t: t ≥ 0}. We then define ∫_0^t σ(u) dx(u) so that for all θ ∈ R^d:

( θ, ∫_0^t σ(u) dx(u) ) = ∫_0^t (σ*(u)θ, dx(u)).

In particular, if σ(u) = a^{−1/2}(u,x(u)) and β(t) = ∫_0^t σ(u) dx(u), then for each θ ∈ R^d:

exp[ (θ, β(t)) − |θ|² t/2 ]

is a P-martingale. But this means that β(·) is a P-Brownian motion. Moreover, it is not hard to show that for any bounded measurable θ: [0,∞) × Ω → R^d which is adapted to {M_t: t ≥ 0}:

∫_0^t (θ(u), dβ(u)) = ∫_0^t (a^{−1/2}(u,x(u))θ(u), dx(u)).

In particular, for θ ∈ R^d:

x(t) = ∫_0^t a^{1/2}(u,x(u)) dβ(u).

We have therefore demonstrated how to go from a solution to the martingale problem to the equation (3.1).
The equivalence between solutions to the martingale problem and stochastic integral equations opens up the possibility of studying the martingale problem with Itô's methods. First consider the question of existence. If there is somewhere a probability space (E, F, Q) on which there is a Brownian motion β(·) relative to an increasing family {F_t: t ≥ 0} of sub σ-algebras, and if there is a measurable function ξ: [0,∞) × E → R^d, adapted to {F_t: t ≥ 0}, such that

ξ(t) = x + ∫_s^t a^{1/2}(u,ξ(u)) dβ(u) + ∫_s^t b(u,ξ(u)) du,  t ≥ s,

then the distribution on (Ω, M) of ξ(·) under Q solves the martingale problem for L_t starting from (s,x). Indeed, this assertion is proved in exactly the same way as we proved that (3.1) is a sufficient condition for P to solve the martingale problem. Thus we now see that existence for the martingale problem can be proved whenever on some space there is existence for the stochastic integral equation associated with the coefficients of L_t.
Next we investigate whether uniqueness for the martingale problem can be deduced from uniqueness for the associated stochastic differential equation. To be specific, let σ = a^{1/2} and assume that

(3.5)  sup_{0≤t≤T} ( ||σ(t,x) − σ(t,y)|| + |b(t,x) − b(t,y)| ) ≤ C_T |x − y|

for all T > 0. If P solves the martingale problem for L_t starting from (s,x), and β(·) is a P-Brownian motion for which (3.1) obtains, define ξ_0(·) ≡ x and, inductively,

ξ_{n+1}(t) = x + ∫_s^t σ(u,ξ_n(u)) dβ(u) + ∫_s^t b(u,ξ_n(u)) du.

Clearly each ξ_n(·) is a functional of β(·) and therefore its distribution under P is the same as that of the analogous quantity under any other solution to the same martingale problem. Furthermore, ξ_n(·) → ξ(·), and the distribution of ξ(·) again is the same for all solutions to the same martingale problem. Finally, by pathwise uniqueness, ξ(·) = x(·) (a.s., P). Thus the condition in (3.5) is enough to guarantee uniqueness for the martingale problem via uniqueness for the corresponding stochastic differential equation. Actually, a more refined technique shows that after the notion of uniqueness for stochastic differential equations has been properly formulated, uniqueness for the martingale problem is always a consequence of uniqueness for the corresponding stochastic differential equation. This more refined technique is intimately connected with the determination of the circumstances under which x(·) is a functional of β(·) in (3.1). We will take this subject up again in Section V.
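When the Lipschitz condition (3.5) holds, the discrete picture makes the functional dependence of x(·) on β(·) concrete: the Euler-type recursion determined by the coefficients is a deterministic map from the driving increments to the solution path. A sketch with a hypothetical drift b(x) = −x and σ ≡ 1 (our example, not from the lectures):

```python
import numpy as np

def euler_path(dB, dt, x0, b=lambda x: -x, sigma=lambda x: 1.0):
    """Euler scheme x_{k+1} = x_k + b(x_k) dt + sigma(x_k) dB_k: a deterministic
    functional of the driving increments dB -- the discrete analogue of x(.)
    being a functional of the Brownian path beta(.)."""
    x = [x0]
    for db in dB:
        x.append(x[-1] + b(x[-1]) * dt + sigma(x[-1]) * db)
    return np.array(x)

rng = np.random.default_rng(2)
dt = 1.0 / 500
dB = rng.normal(0.0, np.sqrt(dt), 500)

path1 = euler_path(dB, dt, x0=1.0)
path2 = euler_path(dB, dt, x0=1.0)   # same driver, same starting point
print(np.array_equal(path1, path2))  # True: one driver, one solution
```

Feeding the same increments in twice reproduces the same path exactly, which is the discrete shadow of pathwise uniqueness.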
Section IV

We open this section with a quite general existence theorem for solutions to the martingale problem. Given A ∈ S_d, B ∈ R^d, and (s,x) ∈ [0,∞) × R^d, define Φ^{(A,B)}_{s,x}: Ω → Ω by

x(t, Φ^{(A,B)}_{s,x}(w)) = x + A^{1/2} x((t − s) ∨ 0, w) + ((t − s) ∨ 0) B,

and let P^{(A,B)}_{s,x} = W ∘ (Φ^{(A,B)}_{s,x})^{−1}. It is clear that ((A,B),(s,x)) ↦ P^{(A,B)}_{s,x} is continuous, and a simple computation suffices in order to check that P^{(A,B)}_{s,x} is the unique solution to the martingale problem for

(1/2) Σ_{i,j=1}^d A^{ij} ∂²/∂x_i∂x_j + Σ_{i=1}^d B^i ∂/∂x_i

starting from (s,x).
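Sampling from P^{(A,B)}_{s,x} is immediate: it is the law of x + A^{1/2} w((t−s) ∨ 0) + ((t−s) ∨ 0)B for a Brownian path w. A one-dimensional sketch (our parameters A = 4, B = 1.5) checking the mean and variance of x(t):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, s, x0 = 4.0, 1.5, 0.5, 2.0     # hypothetical constant coefficients
t, n_paths = 2.0, 50000

w = rng.normal(0.0, np.sqrt(t - s), n_paths)   # w((t - s) ∨ 0) for t > s
xt = x0 + np.sqrt(A) * w + (t - s) * B         # x(t) under P^{(A,B)}_{s,x}

print(float(xt.mean()))   # ~ x0 + (t - s) * B = 4.25
print(float(xt.var()))    # ~ A * (t - s)      = 6.0
```

The marginal at time t is Gaussian with mean x + (t−s)B and covariance (t−s)A, which is what the constant-coefficient operator predicts.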
Now suppose that a: [0,∞) × R^d → S_d and b: [0,∞) × R^d → R^d are bounded continuous functions. For n ≥ 1, define measures P^{(n)}_{k/n} by freezing the coefficients on each interval [k/n, (k+1)/n) at their values at the left endpoint and piecing together the corresponding measures P^{(A,B)}. Since it is clear that P^{(n)}_{(k+1)/n} restricted to M_{k/n} coincides with P^{(n)}_{k/n}, standard extension theorems tell us that there is a unique P_n on (Ω, M) such that P_n coincides with P^{(n)}_{k/n} on M_{k/n} for every k. Furthermore, it is not hard to check by induction that for any φ ∈ C²(R^d) which, together with its first and second order derivatives, grows no faster than an exponential,

φ(x(t)) − ∫_0^t (L^n_u φ)(x(u)) du

is a P_n-martingale, where L^n_t is the operator built from the frozen coefficients. In particular, one can see from this that for each T > 0 there is a constant C_T, independent of n ≥ 1, such that

E^{P_n}[ |x(t_2) − x(t_1)|⁴ ] ≤ C_T |t_2 − t_1|²,  0 ≤ t_1 < t_2 ≤ T.

Since P_n(x(0) = 0) = 1 for all n ≥ 1, it follows that {P_n: n ≥ 1} is pre-compact. Let {P_{n'}} be a convergent subsequence with limit P. Then P(x(0) = 0) = 1, and for each φ ∈ C_0^∞(R^d), L^n_t φ(x(t)) → L_t φ(x(t)) uniformly as (t,w) ranges over compact subsets of [0,∞) × Ω. Moreover, if F is a bounded M_{t_1}-measurable function which is continuous on Ω, then

E^{P_{n'}}[ ( φ(x(t_2)) − φ(x(t_1)) − ∫_{t_1}^{t_2} (L^{n'}_u φ)(x(u)) du ) F ] → E^P[ ( φ(x(t_2)) − φ(x(t_1)) − ∫_{t_1}^{t_2} (L_u φ)(x(u)) du ) F ],

and the left hand side vanishes for every n'. From here it is an easy step to conclude that P solves the martingale problem for L_t starting from (0,0). By a trivial change in notation, we could have carried out the same line of reasoning to produce a solution starting from any
(s,x).
Thus we have proved the next theorem.

Theorem (4.1): Let a: [0,∞) × R^d → S_d and b: [0,∞) × R^d → R^d be bounded continuous functions and define L_t accordingly. Then for each (s,x) there is a solution to the martingale problem for L_t starting from (s,x).

The rest of this section is devoted to the development of the Cameron-Martin-Girsanov formula. This formula will enable us to reduce both the question of existence as well as uniqueness when b ≠ 0 to the case in which b ≡ 0. Let a be uniformly positive definite. Given a solution P to the martingale problem for

L_t^0 = (1/2) Σ_{i,j=1}^d a^{ij}(t,x) ∂²/∂x_i∂x_j

starting from (s,x), define

(4.2)  R(t) = exp[ ∫_s^t (a^{−1}b(u,x(u)), dx(u)) − (1/2) ∫_s^t (b, a^{−1}b)(u,x(u)) du ].

Then R(t) is a P-martingale, and so there is a unique Q on (Ω, M) such that Q(A) = E^P[R(t), A] for all t ≥ s and A ∈ M_t. We claim that Q solves the martingale problem for

L_t = L_t^0 + Σ_{i=1}^d b^i(t,x) ∂/∂x_i

starting from (s,x). To see this, let θ_0 ∈ R^d be given and define

X_θ(t) = exp[ (θ_0, x(t∨s) − x) − ∫_s^{t∨s} (θ_0, b(u,x(u))) du − (1/2) ∫_s^{t∨s} (θ_0, a(u,x(u))θ_0) du ].

Then for s ≤ t_1 < t_2 and A ∈ M_{t_1}:

E^Q[X_θ(t_2), A] = E^P[R(t_2) X_θ(t_2), A] = E^P[R(t_1) X_θ(t_1), A] = E^Q[X_θ(t_1), A],

since R(t)X_θ(t) is itself an exponential P-martingale of the sort considered in Section 3. This justifies our claim.

Conversely, suppose that Q is a solution for L_t starting from (s,x) and define S(t) = 1/R(t). Then, by extending the considerations of Section 3 to cover b ≢ 0, one can show that S(t) is a Q-martingale and therefore that there is a P on (Ω, M) such that P(A) = E^Q[S(t), A] for all t ≥ 0 and A ∈ M_t. Reasoning as we did above, one can now check that P solves for L_t^0 starting from (s,x). With these remarks, we have the following important theorem.

Theorem (4.3):
Let a: [0,∞) × R^d → S_d and b: [0,∞) × R^d → R^d be bounded measurable functions and assume that a is uniformly positive definite. Then Q solves the martingale problem for

L_t = (1/2) Σ_{i,j=1}^d a^{ij}(t,x) ∂²/∂x_i∂x_j + Σ_{i=1}^d b^i(t,x) ∂/∂x_i

starting from (s,x) if and only if there is a solution P to the martingale problem for

L_t^0 = (1/2) Σ_{i,j=1}^d a^{ij}(t,x) ∂²/∂x_i∂x_j

such that for all t ≥ 0 and A ∈ M_t: Q(A) = E^P[R(t), A], where R(t) is defined in (4.2). In particular, existence (uniqueness) for the martingale problem corresponding to L_t follows from existence (uniqueness) for the martingale problem associated with L_t^0.
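The two basic facts used above — that R(t) is a P-martingale and that the reweighted measure acquires the drift b — can be illustrated numerically in the simplest case a ≡ 1, b constant, d = 1, where R(T) = exp[b x(T) − b²T/2] under Wiener measure. A Monte Carlo sketch (our parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
b, T, n_paths = 1.0, 1.0, 100000

BT = rng.normal(0.0, np.sqrt(T), n_paths)   # x(T) under the driftless measure P
R = np.exp(b * BT - 0.5 * b**2 * T)         # Girsanov density R(T)

print(float(R.mean()))          # ~ 1:     R(t) is a P-martingale
print(float((R * BT).mean()))   # ~ b*T:   under Q = R dP the path has drift b
```

E^P[R(T)] ≈ 1 and E^P[R(T) x(T)] ≈ bT, i.e. under Q the mean of x(T) is bT, exactly as the reduction from L_t to L_t^0 requires.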
Section 5

We saw in the preceding section that the problem of proving uniqueness for solutions to the martingale problem for the case of general coefficients {a(·,·), b(·,·)} can be reduced to the case b ≡ 0, when a(·,·) is uniformly elliptic. There are other procedures which will be useful in proving the uniqueness of solutions to the martingale problem.

Localization.
Suppose {G_α} is an open covering of [0,∞) × R^d, and for each α we have coefficients {a_α(·,·), b_α(·,·)} such that

(i) {a_α(·,·), b_α(·,·)} ≡ {a(·,·), b(·,·)} on G_α;
(ii) for each α we have a unique measurable family {P^α_{s,x}: (s,x) ∈ [0,∞) × R^d} of solutions to the martingale problem corresponding to {a_α(·,·), b_α(·,·)}.

Then the solution to the martingale problem corresponding to {a(·,·), b(·,·)}, if it exists, is unique for every starting point (s,x) ∈ [0,∞) × R^d.
Outline of the proof: Let P_1 and P_2 be two solutions of the martingale problem corresponding to {a(·,·), b(·,·)} starting from the same point (s_0, x_0). Let B_R be the open ball of radius R in [0,∞) × R^d around (s_0, x_0), and let

τ_R = inf{ t: (t, x(t)) ∉ B_R }

be the exit time from the ball. Clearly τ_R is a stopping time and τ_R(w) → ∞ as R → ∞ for each w ∈ Ω. It is therefore sufficient to show that P_1 and P_2 agree on M_{τ_R} for each R < ∞. Let us fix a value of R < ∞. It follows from standard compactness arguments that we can find a number δ = δ_R > 0 such that for any (s,x) in the closed ball B̄_R around (s_0, x_0) we can find a value of α such that S_δ(s,x) ⊂ G_α, where S_δ(s,x) is the sphere around (s,x) of radius δ. We now define stopping times τ^0 ≤ τ^1 ≤ ... by τ^0 = s_0 and

τ^n = inf{ t: |(t, x(t)) − (τ^{n−1}, x(τ^{n−1}))| ≥ δ }  for n ≥ 1.

By the continuity of paths τ^n → ∞ as n → ∞. If we define σ^n = τ^n ∧ τ_R, it is clearly sufficient to prove that P_1 and P_2 agree on M_{σ^n} for every n. We will prove it by induction on n.

Let us take the case n = 1. Since (s_0, x_0) ∈ B_R we can find α_0 such that {a_{α_0}(·,·), b_{α_0}(·,·)} ≡ {a(·,·), b(·,·)} on S_δ(s_0, x_0), and corresponding to α_0 we have the unique solutions {P^{α_0}_{s,x}}. Let us construct solutions P̃_1 and P̃_2 by the relations

P̃_i = P_i on M_{τ^1}  for i = 1, 2,

with the r.c.p.d. of P̃_i given M_{τ^1} equal to P^{α_0}_{τ^1(w), x(τ^1(w),w)}. Then by the properties of martingales P̃_1 and P̃_2 are solutions to the martingale problem corresponding to {a_{α_0}(·,·), b_{α_0}(·,·)}, and by uniqueness P̃_1 ≡ P̃_2. Therefore P_1 ≡ P_2 on M_{τ^1} and a fortiori on M_{σ^1}.

Assume that we have the result for n. Then P_1 ≡ P_2 on M_{σ^n}, and the r.c.p.d. P^w_1 of P_1 and P^w_2 of P_2 given M_{σ^n} are again solutions to the martingale problem corresponding to {a(·,·), b(·,·)} starting from (σ^n, x(σ^n)), at least for almost all w with respect to P_1 or P_2. Now the argument for n = 1 applies to the conditional distributions P^w_1 and P^w_2: the role of (s_0, x_0) is played by (σ^n(w), x(σ^n(w))) and the new σ^1 is the same as σ^{n+1}(w). One therefore obtains that P^w_1 and P^w_2 agree on M_{σ^{n+1}} for almost all w. Consequently P_1 ≡ P_2 on M_{σ^{n+1}} and the induction is complete.

The impact of the above argument is that uniqueness is a local property of the coefficients.
One Dimensional Marginals. Suppose we have a measurable family p(s,x,t,·) of probability measures indexed by s < t and x ∈ R^d such that for any solution P to the martingale problem corresponding to {a(·,·), b(·,·)} starting from any point (s_0, x_0):

P[x(t) ∈ A] = p(s_0, x_0, t, A)  for t > s_0 and A ∈ B(R^d).

Then the solution to the martingale problem corresponding to {a(·,·), b(·,·)} is unique for any starting point (s_0, x_0), and is the Markov process with transition probabilities p(s,x,t,·). In particular p(s,x,t,·) satisfies the Chapman-Kolmogorov equations.

Proof: Let us consider the r.c.p.d. Q_w of any solution P starting from (s_0, x_0) given the σ-field M_{t_0} for some t_0 > s_0. The conditional distribution Q_w is again a solution to the martingale problem starting from (t_0, x(t_0)), and so by our assumption

Q_w[x(t) ∈ A] = p(t_0, x(t_0), t, A)  for t > t_0 and A ∈ B(R^d).

P is therefore the Markov process with transition probabilities p(s,x,t,·) starting from (s_0, x_0). P is therefore unique and moreover p(·,·,·,·) must satisfy the Chapman-Kolmogorov equations.

Remark. It now follows by conditioning with respect to any M_τ, where τ is a stopping time, that the r.c.p.d. of P given M_τ is the solution starting from (τ, x(τ)) for almost all w. In other words the family of unique solutions {P_{s,x}} has the strong Markov property.
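For Brownian motion the family p(s,x,t,·) is the Gaussian N(x, (t−s)I), and the Chapman-Kolmogorov property ∫ p(0,x_0,t_0,dy) p(t_0,y,t,A) = p(0,x_0,t,A) can be verified by direct numerical integration (a one-dimensional sketch, our discretization):

```python
import numpy as np

def gauss(z, mean, var):
    """Density of N(mean, var) at z."""
    return np.exp(-(z - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

y = np.linspace(-10.0, 10.0, 4001)   # intermediate state at time t0 = 1
dy = y[1] - y[0]
z = 0.7                              # terminal state at time t = 3

# two-step density: integrate p(0,0,1,y) * p(1,y,3,z) over y
two_step = float(np.sum(gauss(y, 0.0, 1.0) * gauss(z, y, 2.0)) * dy)
one_step = float(gauss(z, 0.0, 3.0))
print(two_step, one_step)
```

The two numbers agree to high precision: composing an N(0,1) step with an N(·,2) step reproduces the single N(0,3) transition density.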
Section 6
We will continue our discussion of various circumstances under which either a reduction or a complete solution of the problem of uniqueness is possible.

Random Time Change. Let Φ(x) be a measurable function of x in R^d satisfying the bounds 0 < c_1 ≤ Φ(x) ≤ C_1 < ∞ for all x ∈ R^d. We introduce a map T_Φ of Ω → Ω as follows:

(T_Φ w)(t) = x(τ_Φ(t), w),

where τ_Φ(t) is a solution of

∫_0^{τ_Φ(t)} Φ(x(s,w)) ds = t.

Since Φ is bounded above and below, we have a unique solution τ_Φ(t) of the above equation, which is a stopping time (as a function of w) for each t, is nondecreasing in t, and tends to ∞ as t → ∞ for each w. In fact

dτ_Φ(t)/dt = 1 / Φ(x(τ_Φ(t), w))

for all w and t. If Φ and ψ are two functions of the above type then the maps T_Φ and T_ψ have the property

T_ψ ∘ T_Φ = T_{Φψ}.

The above property is easily verified by computing the derivative of the composed time change. In particular T_Φ and T_{1/Φ} are inverses of each other. One can also verify that the σ-field generated by (T_Φ w)(s), 0 ≤ s ≤ t, is contained in M_{τ_Φ(t)}.

Suppose now that we have coefficients which are independent of time and we denote by L the operator

L = (1/2) Σ_{i,j} a^{ij}(x) ∂²/∂x_i∂x_j + Σ_i b^i(x) ∂/∂x_i,

and P is a solution corresponding to L, starting from the point x_0 at time 0. Then

f(x(t)) − ∫_0^t (Lf)(x(s)) ds

is a (Ω, M_t, P) martingale for all f ∈ C_0^∞(R^d). By Doob's stopping theorem,

f(x(τ_Φ(t))) − ∫_0^{τ_Φ(t)} (Lf)(x(s)) ds

is a martingale relative to (Ω, M_{τ_Φ(t)}, P). We can rewrite this by saying that

f(y(t)) − ∫_0^t (1/Φ)(y(s)) (Lf)(y(s)) ds

is a martingale, where y(t) = x(τ_Φ(t)) = (T_Φ w)(t). Since the field generated by y(s) for 0 ≤ s ≤ t is contained in M_{τ_Φ(t)}, we can say that

f(x(t)) − ∫_0^t ((1/Φ) Lf)(x(s)) ds

is a martingale relative to (Ω, M_t, Q), where Q = P T_Φ^{−1}. In other words, the transformation T_Φ maps solutions of L into solutions of (1/Φ)L. Since the mapping T_Φ has the inverse T_{1/Φ}, we conclude that the solutions corresponding to L and the solutions corresponding to (1/Φ)L for the same starting point (note that (T_Φ w)(0) = w(0)) are in one to one correspondence. In particular existence or uniqueness for L ensures the existence or uniqueness for (1/Φ)L, provided Φ is bounded above and below.

Remark. Let us consider the case of a diffusion in R¹ corresponding to

L = (a(x)/2) d²/dx² + b(x) d/dx,

where a(x) and b(x) are bounded and measurable and a(x) in addition has the lower bound a(x) ≥ c > 0. Then by the Cameron-Martin-Girsanov formula, the existence and uniqueness for L is the same as that for L_0 = (a(x)/2) d²/dx², and by the random time change discussed above it is the same as that for Δ_0 = (1/2) d²/dx². Since the only solution for Δ_0 is the Brownian motion, we conclude that we have existence and uniqueness for any starting point for the given operator L.
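On a discretized path the time change is computed by accumulating the clock ∫_0^τ Φ(x(s)) ds and inverting it; for constant Φ ≡ c this reduces to τ_Φ(t) = t/c, which makes a convenient check. A sketch of the discretization (ours):

```python
import numpy as np

def tau_phi(t_grid, x_path, phi):
    """Approximate tau_phi on t_grid: tau_phi(t) solves
    int_0^{tau} phi(x(s)) ds = t  (left-endpoint rule, linear inversion)."""
    dt = np.diff(t_grid)
    clock = np.concatenate([[0.0], np.cumsum(phi(x_path[:-1]) * dt)])
    return np.interp(t_grid, clock, t_grid)   # invert the increasing clock

t = np.linspace(0.0, 1.0, 1001)
x = np.sin(3 * t)                             # any path; phi only reads x
tau = tau_phi(t, x, lambda x: 2.0 + 0 * x)    # constant phi = 2

print(float(np.max(np.abs(tau - t / 2))))     # tau(t) = t/2 when phi = 2
```

Because Φ is bounded above and below, the clock is strictly increasing and the interpolation-based inversion is well defined, mirroring the argument in the text.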
Connection with Itô's Theory
Let σ(t,x) be such that σ(t,x)σ*(t,x) = a(t,x) for all t, x. Suppose we try to solve Itô's equation (in the more general sense), i.e. we look for a solution x(t), on some (E, F_t, P) on which there is also a Brownian motion β(·), which is given, of the equation

dx(t) = σ(t,x(t)) dβ(t) + b(t,x(t)) dt.

More precisely, we are looking for a measure P̂ on C([0,∞); R^{2d}), starting from (x,0) at time 0, which solves the martingale problem corresponding to

â = ( σσ*  σ ; σ*  I )   and   b̂ = ( b ; 0 ).

This means that the first component is a solution to the martingale problem corresponding to {a(t,x), b(t,x)}, the second component is Brownian motion, and the two are related by Itô's equations. One knows that any solution to the martingale problem corresponding to {a(t,x), b(t,x)} can be exhibited as the first component of a solution P̂ corresponding to {â, b̂} with any choice of σ such that σσ* = a.

Pathwise uniqueness can be phrased in terms of a solution to the martingale problem for an even bigger system. Consider for instance

ã = ( σσ*  σσ*  σ ; σσ*  σσ*  σ ; σ*  σ*  I )   and   b̃ = ( b ; b ; 0 ).

A solution P̃ to the martingale problem corresponding to {ã, b̃} starting from (x,x,0) on C([0,∞); R^{3d}) is just the distribution of (x(t), x'(t), β(t)), where β(·) is a Brownian motion and x(·), x'(·) are two solutions of Itô's equation in terms of the Brownian motion starting from the same point x. Pathwise uniqueness is therefore the same as every such P̃ living on the diagonal x(t) ≡ x'(t) for all t.

To see that pathwise uniqueness implies that the solution to the martingale problem is unique, we need a construction which starts with two solutions P_1, P_2 to the martingale problem starting from the same point x and ends up with a solution P̃ starting from (x,x,0) corresponding to {ã, b̃} which has P_1, P_2 for the marginals of the first and second components respectively. Since P̃ lives on the diagonal we will conclude that P_1 = P_2.

This construction, carried out by Yamada and Watanabe, is as follows. We can start from P_1 and P_2 and construct two solutions P̂_1 and P̂_2 starting from (x,0) corresponding to {â, b̂}. The second component is Brownian motion under both P̂_1 and P̂_2. Let us denote by R_1 and R_2 the r.c.p.d. of the first component given the second component, and let us denote by W the Wiener measure. We denote points in C([0,∞); R^{3d}) by three components w_1, w_2, w_3 and write

P̃(dw_1 dw_2 dw_3) = W(dw_3) R_1(w_3; dw_1) R_2(w_3; dw_2).

In other words, we make the first two components independent under P̃ given the third component, which is Brownian motion. This clearly works. We also deduce from this that R_1(w_3, dw_1) and R_2(w_3, dw_2) must be degenerate distributions for almost all w_3. In other words the solution x(·) to Itô's equation is really a measurable functional of the Brownian path, even though we did not know it to begin with.

Section 7

We saw in a preceding section that if the equation
(*)  ∂u/∂s + L_s u + f = 0,  u(T,x) = 0 for every x,

with T < ∞, can be solved for 0 ≤ t ≤ T for sufficiently many functions f, then we can conclude uniqueness. Let us pursue the point a little further. Suppose P_1 and P_2 are two solutions starting from (s_0, x_0). Let us define

Λ_i(f) = E^{P_i}[ ∫_{s_0}^T f(s, x(s)) ds ]  for i = 1, 2.

Our aim is to conclude that Λ_1 ≡ Λ_2. For each f for which we can solve the equation (*) with a smooth solution we can get Λ_1(f) = Λ_2(f). Sometimes one can prove that the set of functions f for which (*) is solvable is a dense class in some L_p space. Then one has to go back and show that Λ_1 and Λ_2 belong to the appropriate dual space, so that one can still conclude that Λ_1 ≡ Λ_2.

We are now going to prove the uniqueness for coefficients {a, b} such that a is uniformly elliptic, a is bounded and continuous in t and x, and b is bounded and measurable. Because of the Cameron-Martin-Girsanov formula we can assume that b is identically zero. By the localization procedure we can assume that a is uniformly close to a constant matrix a_0, which after a linear transformation of R^d can be assumed to be I. The problem of uniqueness therefore can be reduced to the following two propositions.
Proposition 1. Let us consider a^{ij}(t,x) = δ_{ij} + ε^{ij}(t,x) where |ε^{ij}(t,x)| ≤ ε_0 for some ε_0 > 0. For a suitable choice of ε_0 and a suitable choice of p_0, for every T, the equation (*) has a smooth solution for a class of functions f which is dense in L_{p_0}([0,T] × R^d).

Proposition 2. For any solution P to the martingale problem corresponding to {a^{ij}(t,x)} starting from any point (s_0, x_0) with 0 < s_0 ≤ T, where T < ∞ is arbitrary but given,

Λ(f) = E^P[ ∫_{s_0}^T f(s, x(s)) ds ]

is a bounded linear functional on L_{p_0}([0,T] × R^d). [Here p_0 is the same as in Proposition 1 and the coefficients also satisfy the same bounds |a^{ij}(t,x) − δ_{ij}| ≤ ε_0 as in Proposition 1.]

Outline of Proof:
Let us fix an interval [0,T] and consider the function

p(s,x,t,y) = (2π(t − s))^{−d/2} exp( −|y − x|² / 2(t − s) ),  0 ≤ s < t ≤ T.

We define an operator G acting on functions f defined on [0,T] × R^d by

(Gf)(s,x) = ∫_s^T ∫_{R^d} p(s,x,t,y) f(t,y) dy dt.

Since G is a convolution and ∫_s^T ∫_{R^d} p(s,x,t,y) dt dy < ∞, we conclude that G is a bounded operator from L_p([0,T] × R^d) into itself. Moreover ∫_0^T ∫_{R^d} |p(s,x,t,y)|^α dy dt is finite for α < (d+2)/d. This in turn implies that if p_0 > d/2 + 1 and Gf = u for some nice f, then

(i) u is smooth and ∂u/∂s + (1/2)Δu = −f;
(ii) u(s,x) → 0 as s → T, for all x.

We have the following important bound from the theory of singular integrals. For any α in the range 1 < α < ∞ there is a constant M_α such that

|| ∂²(Gf)/∂x_i∂x_j ||_α ≤ M_α || f ||_α.

We shall choose and fix a value of α larger than d/2 + 1; let us call it α_0. The corresponding bound M_{α_0} will be denoted by M_0.

Now consider the coefficients a^{ij}(t,x) = δ_{ij} + ε^{ij}(t,x), where |ε^{ij}(t,x)| ≤ ε_0 and ε_0 is to be chosen presently. We want to solve

∂u/∂s + (1/2) Σ_{i,j} a^{ij}(s,x) ∂²u/∂x_i∂x_j = −f,  u(T,·) = 0.

If we try for a u of the form u = Gg then we have to solve (I − E)g = f, where

(Eg)(s,y) = (1/2) Σ_{i,j} ε^{ij}(s,y) ∂²(Gg)/∂x_i∂x_j.

The problem formally is to invert (I − E). However, if ε_0 is chosen so that d² M_0 ε_0 ≤ 1, we shall have

|| Eg ||_{α_0} ≤ (1/2) || g ||_{α_0}.

This in turn implies that (I − E)^{−1} exists as a bounded operator in L_{α_0}([0,T] × R^d), with norm at most 2. Formally

u = G (I − E)^{−1} f.

The set of f's for which u is nice are those for which (I − E)^{−1} f is nice. We do not know what they are except that they are dense in L_{α_0}. This proves Proposition 1.
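The inversion of (I − E) under the bound ||E|| ≤ 1/2 is the Neumann series (I − E)^{−1} = Σ_{k≥0} E^k, with ||(I − E)^{−1}|| ≤ 2. The same mechanism in a finite-dimensional sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(5)
E = rng.standard_normal((6, 6))
E *= 0.5 / np.linalg.norm(E, 2)     # rescale so the operator norm is exactly 1/2

# Partial sums of the Neumann series sum_k E^k; ||E^k|| <= 2^{-k} so the
# series converges geometrically.
neumann = np.zeros_like(E)
term = np.eye(6)
for _ in range(80):
    neumann += term
    term = term @ E

direct = np.linalg.inv(np.eye(6) - E)
print(float(np.max(np.abs(neumann - direct))))
```

The truncated series matches the direct inverse to machine precision, exactly as the contraction estimate ||Eg|| ≤ (1/2)||g|| guarantees.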
We now turn to Proposition 2. Let P be some solution to the martingale problem corresponding to {a(·,·), 0} of the sort discussed above. We can find a Brownian motion on our space such that

x(t) = x(0) + ∫_0^t a^{1/2}(s,x(s)) dβ(s).

We can assume without loss of generality that the starting time is 0 and that the starting point is some x_0 ∈ R^d. Let us define

σ_n(s) = a^{1/2}(j/n, x(j/n))  if j/n ≤ s < (j+1)/n,  j ≥ 0,

and

x_n(t) = x_0 + ∫_0^t σ_n(s,w) dβ(s).

We define the linear functionals

Λ_n(f) = E^P[ ∫_0^T f(s, x_n(s)) ds ].

Since, by the properties of stochastic integrals, x_n(t) → x(t) uniformly in probability as n → ∞, we conclude that for each smooth f, Λ_n(f) → Λ(f). Moreover each x_n(·) as a stochastic process is piecewise Brownian motion. Therefore there is clearly a constant C_n such that |Λ_n(f)| ≤ C_n ||f||_{α_0}. Our problem in completing the proof of Proposition 2 is to obtain a uniform bound on C_n independent of n.

Let us pick a function u = Gg for some g. Then

u(t, x_n(t)) − ∫_0^t [ ∂u/∂s + (1/2) Σ_{i,j} a_n^{ij}(s,w) ∂²u/∂x_i∂x_j ](s, x_n(s)) ds

is a (Ω, M_t, P) martingale, where a_n^{ij}(s,w) = δ_{ij} + ε_n^{ij}(s,w) with |ε_n^{ij}(·,·)| ≤ ε_0 and ε_n^{ij} → ε^{ij} uniformly. Let us denote ∂²(Gg)/∂x_i∂x_j by H_{ij}(s,x). Equating expectations at t = 0 and t = T, and using u(T,·) = 0 together with ∂u/∂s + (1/2)Δu = −g, we obtain

Λ_n(g) = u(0, x_0) + (1/2) Σ_{i,j} E^P[ ∫_0^T ε_n^{ij}(s,w) H_{ij}(s, x_n(s)) ds ].

Taking the supremum over all g with ||g||_{α_0} ≤ 1, and using the bounds |ε_n^{ij}| ≤ ε_0 and ||H_{ij}||_{α_0} ≤ M_{α_0} ||g||_{α_0}, we obtain for the norm of Λ_n in the dual space of L_{α_0} (using ε_0 d² M_{α_0} ≤ 1):

||Λ_n|| ≤ C + (1/2) ||Λ_n||.

Since ||Λ_n|| ≤ C_n is finite, we have ||Λ_n|| ≤ 2C uniformly in n.
T h i s c o m p l e t e s t h e proof of P r o p o s i t i o n 2 . Section 8 Here we a r e i n t e r e s t e d i n p r o v i n g t h e convergence o f t h e d i f f u sion process
corresponding t o
ing t o (a,b}.
to t h e o n e c o r r e s p o n d -
W e s h a l l i l l u s t r a t e t h e methods i n t h e s i m p l e s t
L e t u s suppose t h a t { an] and
situation.
u n i f o r m l y i n s , x and
bn) a r e bounded
W e s h a l l f u r t h e r suppose t h a t an and bn
1:.
a r e converging t o a , b and t h a t
{an,bnl
d uniformly on COcPact s u b s e t s of [O,=)xi?
a n , bn, a , b
a r e a l l c o n t i n u o u s f u n c t i o n s of s and x.
F i n a l l y e a c h an and a i s assw.ed t o be u n i f o r m l y e l l i p t i c . (sn,xn) be s t a r t i n g p o i n t s converging t o ( s , x ) . t h e s o l u t i o n f o r {an,bn)
W e d e n o t e b y pn
s t a r t i n g fro2 f s n r x n ) and b y P t h e
s o l u t i o n f o r { a r b ) s t a r t i n g fro= (sex) Proposition:
Pn converges weakly to P a s n
+
Let
-.
Outline of Proof:
From the fact that the coefficients {anlbn]
are uniformly bounded and that
(sn1xn) varies over a bounded set
we conclude that P is weakly conditionally compact. Let Q be n any limit point. Without loss of generality we can assume that Pn * Q
as n
+ rn.
If we prove that Q is a solution to the
martingale problem for {arb) starting from ( s , x ) then by the uniqueness
Q must equal P and we have proved the proposition. We note that for f E c ~ ( dR)
n ' ss a (Q,MtlPn) martingale.
Clearly +n(srx)
-+
Here
$(s,x) uniformly on conpact s e t s where
Just as in the proof of existence we condlue t h s r
is a martingale relative to (flI1.ft,Q). This c::Y:ie:cs
tka proof.
We note of course that
because
for esch n .
One can prove by these techniques that under very general conditions certain Markov chains converge to diffusions. Let us for simplicity treat the time homogeneous case. Suppose for each h > 0 we have a transition probability π_h(x,dy) on R^d satisfying, for each f ∈ C_0^∞(R^d),

    (1/h) [ ∫ f(y) π_h(x,dy) − f(x) ] → (Lf)(x)    as h → 0,

uniformly on compact subsets of R^d, where

    (Lf)(x) = (1/2) Σ_{i,j} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + Σ_j b_j(x) ∂f(x)/∂x_j .

We assume here that the coefficients are continuous and bounded on R^d and that {a(x)} is uniformly elliptic. Let us fix a starting point x_0 at time 0 and construct a Markov chain which at times jh, j = 1,2,..., jumps according to π_h(x,dy). We can interpolate linearly in between, so that we have a measure P_h on the space Ω for each h > 0. We want to show that

    lim_{h→0} P_h = P ,

where P solves the martingale problem for L starting from x_0 at time 0.

Sketch of proof:
Let us pick a function φ(x) which is smooth and satisfies φ(x) = 1 for |x| ≤ 1, φ(x) = 0 for |x| ≥ 2, and 0 ≤ φ(x) ≤ 1. We define φ_R(x) = φ(x/R) and consider the operator

    (L_R f)(x) = φ_R(x) (Lf)(x) .

We define a modified chain π_h^R accordingly, and P_h^R just as P_h was defined relative to π_h. Clearly

    lim_{h→0} (Π_h^R f − f)/h = L_R f    uniformly,

for each f ∈ C_0^∞(R^d), where Π_h^R f(x) = ∫ f(y) π_h^R(x,dy). One can now show that {P_h^R} is relatively compact as h → 0 on Ω, and any limit point is a solution corresponding to L_R. The compactness is established by the techniques that we have already seen.
To identify the limit we note that

    f(x(nh)) − Σ_{j=0}^{n−1} [Π_h^R f − f](x(jh))

is a martingale relative to (Ω, M_{nh}, P_h^R). If we let h → 0 along a subsequence so that P_h^R converges weakly to Q, we conclude that

    f(x(t)) − ∫_0^t (L_R f)(x(u)) du

is a martingale relative to (Ω, M_t, Q). Q must therefore necessarily agree with P on M_{τ_R}, where τ_R is the exit time from the sphere of radius R. In particular

    lim_{ℓ→∞} lim sup_{R→∞} lim sup_{h→0} P_h^R [ sup_{0≤s≤T} |x(s)| ≥ ℓ ] = 0 .

The last inequality implies that for large R the difference between P_h^R and P_h is uniformly small as h → 0 on any finite time interval. Consequently one can interchange the limits as R → ∞ and h → 0, and conclude that P_h itself converges to P as h → 0.
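As a concrete illustration of the convergence criterion above (a toy example, not from the lectures): the chain on R that jumps from x to x ± √h with probability 1/2 each satisfies the criterion with L = (1/2) d²/dx², as a direct Taylor expansion shows. A minimal numerical check:

```python
import math

def pi_h_expectation(f, x, h):
    # One-step expectation under the chain that jumps to x + sqrt(h) or
    # x - sqrt(h), each with probability 1/2.
    s = math.sqrt(h)
    return 0.5 * (f(x + s) + f(x - s))

def generator_quotient(f, x, h):
    # (1/h)[ E_{pi_h(x,.)} f - f(x) ]; for this chain it approaches
    # (Lf)(x) = (1/2) f''(x) as h -> 0.
    return (pi_h_expectation(f, x, h) - f(x)) / h

# with f = sin we have (Lf)(x) = -sin(x)/2; the quotient converges at rate O(h)
quotients = [generator_quotient(math.sin, 0.7, h) for h in (1e-1, 1e-2, 1e-3)]
```

The same computation works for any smooth f, which is how one verifies the generator condition for a proposed chain before invoking the convergence theorem.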
Section 9. We will be concerned here with the situation in which we cannot assert uniqueness. To fix ideas, let us suppose that {a,b} is bounded and continuous. We have existence but no uniqueness. For each x let C_x be the set of solutions starting from x at time 0. C_x perhaps consists of more than one element. We want to pick a P_x from each C_x such that {P_x} is a strong Markov family. The crucial property is that, for any stopping time T, the r.c.p.d. of P given M_T is in the class C_{x(T)}, provided we reset the time T as 0. The idea is to pick a bounded continuous function f_1(x) on R^d and a λ_1 > 0, and consider for each x the set C_x^1 defined by

    C_x^1 = { P : P ∈ C_x ,  E^P [ ∫_0^∞ e^{−λ_1 t} f_1(x(t)) dt ] = sup_{Q ∈ C_x} E^Q [ ∫_0^∞ e^{−λ_1 t} f_1(x(t)) dt ] } .
By ideas very similar to those of dynamic programming one can show that C_x^1 inherits from C_x the property of being closed under conditioning. We now pick λ_2 and f_2 and define

    C_x^2 = { P : P ∈ C_x^1 ,  E^P [ ∫_0^∞ e^{−λ_2 t} f_2(x(t)) dt ] = sup_{Q ∈ C_x^1} E^Q [ ∫_0^∞ e^{−λ_2 t} f_2(x(t)) dt ] } ,

and so on. Such C_x^n have the property of being closed under conditioning. If we go through a sequence (λ_j, f_j) which is dense among all pairs (λ, f), then denoting by D_x the intersection ∩_n C_x^n, we see that D_x is closed under conditioning, and furthermore any P_1, P_2 ∈ D_x satisfy

    E^{P_1} [ ∫_0^∞ e^{−λt} f(x(t)) dt ] = E^{P_2} [ ∫_0^∞ e^{−λt} f(x(t)) dt ]

for all λ and f. This means that

    P_1[x(t) ∈ A] = P_2[x(t) ∈ A]

for all t ≥ 0 and A ∈ B(R^d). By the method through which we proved uniqueness, this in turn implies that each D_x consists only of a single element P_x, and they of course automatically form a strong Markov family. There is also a natural converse, in the sense that starting from all strong Markov families {P_x} and mixing them up, one can recover the collection C_x. In other words, any nonuniqueness of solutions to the martingale problem arises from nonuniqueness of the Markov semigroups whose infinitesimal generators are extensions of L from smooth functions.
CENTRO INTERNAZIONALE MATEMATICO ESTIVO (C.I.M.E.)

WAVE PROPAGATION AND HEAT CONDUCTION IN A RANDOM MEDIUM

G. C. PAPANICOLAOU

Wave Propagation and Heat Conduction in a Random Medium
G. C. Papanicolaou
Courant Institute, New York University
INTRODUCTION

We shall give a fairly self-contained account of some results on waves in random media and related problems that we have considered in the past few years [1]-[6]. These results rely upon properties of solutions of differential equations with random coefficients, i.e., stochastic equations. We restrict attention to one-dimensional problems, so that we are dealing with stochastic ordinary differential equations. There are a few results at present dealing with multidimensional problems (cf. [2]), but we shall not discuss these here.
We consider a one-dimensional medium occupying the interval [0,L], with a wave of unit amplitude incident from x < 0. Let u(x) exp(−iωt) denote the complex-valued wave field at location x; the time factor will be omitted, as is customary. The field u(x) satisfies the one-dimensional reduced wave equation

(1.1)    u″(x) + k² n²(x) u(x) = 0 ,    0 < x < L .
Here n(x) is the index of refraction, k = ω/c is the wave number, and c is the free space propagation speed. n(x) is a random process with known properties to be described below. A wave of unit amplitude is incident from the left, which is free space. Therefore,

(1.2)    u(x) = e^{ikx} + R e^{−ikx} ,    x < 0 ,

where R = R(L,k) is the reflection coefficient. It is a complex-valued random variable with |R| ≤ 1.
The region to the right of [0,L] is also free space, so that the transmitted wave is

(1.3)    u(x) = T e^{ikx} ,    x > L ,

where T = T(L,k) is the transmission coefficient. Equation (1.1) for u(x) in 0 < x < L is supplemented by requiring that u(x) and du(x)/dx be continuous at x = 0 and x = L. This yields the two-point boundary conditions

(1.4)    u′(0) + ik u(0) = 2ik ,

(1.5)    u′(L) − ik u(L) = 0 .

Equations (1.1), (1.4) and (1.5) determine u(x) completely. Then R and T are given by

(1.6)    R = u(0) − 1 ,

(1.7)    T = u(L) e^{−ikL} ,

where u(x) = u(x;L,k), but we suppress dependence on L and k. Note that we have the conservation relation

(1.8)    |T|² + |R|² = 1 ,

which says that the wave energy per unit time transmitted through [0,L] plus the wave energy per unit time reflected equals the incident energy per unit time, which is normalized to one. We shall now describe the class of random indices of refraction n(x) which we will consider.
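Before doing so, we note that the scattering problem (1.1)-(1.5) and the conservation relation (1.8) are easy to check numerically for a piecewise-constant medium. The following sketch (an illustration, not from the text; the layer data are arbitrary) propagates the matching conditions through the layers and recovers R and T:

```python
import cmath, random

def reflection_transmission(k, layers):
    """(R, T) for u'' + k^2 n^2(x) u = 0 on a piecewise-constant medium.
    `layers` is a list of (n_j, d_j): index of refraction and thickness of
    each layer; free space (n = 1) on both sides, unit wave incident from
    the left. Works backward from the purely outgoing wave on the right."""
    A, B = 1.0 + 0j, 0.0 + 0j      # outgoing wave only, to the right of [0, L]
    k_right = k                    # wavenumber of the region just to the right
    for n_j, d_j in reversed(layers):
        k_j = k * n_j
        # match u and u' at the right edge of layer j
        r = k_right / k_j
        A, B = ((1 + r) * A + (1 - r) * B) / 2, ((1 - r) * A + (1 + r) * B) / 2
        # move the reference point to the left edge of the layer
        A *= cmath.exp(-1j * k_j * d_j)
        B *= cmath.exp(+1j * k_j * d_j)
        k_right = k_j
    # final interface with free space on the left
    r = k_right / k
    A, B = ((1 + r) * A + (1 - r) * B) / 2, ((1 - r) * A + (1 + r) * B) / 2
    # normalize so the incident amplitude is 1; T carries a phase set by the
    # choice of reference points, which does not affect |T|
    return B / A, 1 / A            # R, T

random.seed(0)
layers = [(1 + 0.4 * (random.random() - 0.5), 0.3) for _ in range(50)]
R, T = reflection_transmission(2.0, layers)
# conservation relation (1.8): |R|^2 + |T|^2 = 1 up to rounding
```

Because the interface matching is exact for piecewise-constant media, (1.8) holds here to machine precision, which makes the routine a useful sanity check for any simulation of the random media considered next.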
We assume that

(1.9)    n²(x) = 1 + g(y(x)) ,

where y(x), x ≥ 0, is a Markov process on a state space S which is a compact metric space, and g(y) is a continuous function from S to [−1/2, 1/2], i.e. |g(y(x))| ≤ 1/2 for all x ≥ 0. The Markov process y(x) is assumed to be ergodic (more technical assumptions are introduced later), with the initial value y(0) distributed according to the invariant distribution, so that {y(x), x ≥ 0} is also a stationary process. We shall denote by E{·} expectation relative to this stationary Markov process, and we shall assume that

(1.10)    E{g(y(x))} = 0

and, in (1.11), that the integrated correlation of g is normalized so as to define the dimensionless constant α. The dimensionless variable α is a measure of the size of the fluctuations in the refractive index, while the parameter ℓ, which has the dimensions of length, is a measure of the amount of correlation in the fluctuations at different points. The problem at hand has three natural dimensionless parameters: α, kL and kℓ. The quantity of principal interest to us, the transmission coefficient T, is thus a functional of the process y(·) and a function of the parameters L, k, ℓ and α, i.e.

(1.12)    T = T(L,k,ℓ,α) ,

with α, kL and kℓ dimensionless parameters. One can study the statistical properties of the transmission coefficient T in at least four interesting asymptotic limits, which we now enumerate.
(i) Rapid fluctuation limit (law of large numbers). We replace ℓ by ε²ℓ with ε small while all other parameters are fixed. This means that ℓ is small. It is easily seen that T(L,k,ε²ℓ,α) → 1 with probability one as ε → 0, and this is not difficult to prove, as will be explained in the next section. If the index of refraction has the form n²(x) = ñ²(x)(1 + g(y(x))), with ñ(x) a deterministic refractive index, instead of (1.9), then T(L,k,ε²ℓ,α) tends to the deterministic transmission coefficient corresponding to ñ(x). Returning to the case T → 1, one may now analyze the limiting distribution of ε^{−1}[T(L,k,ε²ℓ,α) − 1] as ε → 0 and show that it is a complex-valued Gaussian distribution. This is also easy to show, and it is discussed further in the next section.

(ii) White noise limit. This means that we replace α by α/ε and ℓ by ε²ℓ and let ε → 0. We shall see later that T(L,k,ε²ℓ,α/ε) converges weakly as ε → 0 to a random variable whose distribution can be obtained explicitly, in principle. However, it is very difficult to extract physically useful information from the limit, since the necessary calculations are prohibitively complex. Physically, the white noise limit is just what the words mean: if g_ε(x) denotes the scaled random process entering the index of refraction, then its correlation E{g_ε(x + x′) g_ε(x′)} tends to 2α²ℓ δ(x) as ε → 0, where δ(x) is the Dirac delta function.

(iii) Weak fluctuations — large scattering region. Here we replace α by εα and L by L/ε². By rescaling, T(L/ε²,k,ℓ,εα) can be rewritten as the transmission coefficient of a problem with weak fluctuations and short correlation length, and hence the limit could have been called the weak fluctuations — short correlation limit. The important thing to notice here is that the dimensionless parameter kℓ is independent of ε. This means that the wavelength of the incident wave λ = 2π/k is comparable to the correlation length, and both are small compared to the size of the scattering region (and the fluctuations are also small). This is the limit of basic physical interest for waves in random media. It turns out, fortunately, that practically everything one wants about T can be computed explicitly in this asymptotic limit [1]-[2].

(iv) Large scattering region — large wavelengths. Here we replace L by L/ε² and k by εk; by rescaling, this limit could also have been called a small correlation length limit, with the correlation length much smaller than the wavelength. Note that we do not assume weak fluctuations here (just as we did not assume weak fluctuations in case (i)). The present scaling is of interest in the heat conduction problem [4], [7], which is introduced below.

Having enumerated several interesting asymptotic limits, we now pose the following problems.

Problem I: Show that T(L/ε²,k,ℓ,εα) has a limiting distribution as ε → 0, and compute this distribution explicitly as a function of L, k, ℓ and α.

Problem II: Same as Problem I, for T(L/ε²,εk,ℓ,α).

It turns out, not so surprisingly, that the limiting distributions in I and II are in fact the same, but their parametric dependence on k and ℓ is different. We return to Problems I and II in Sections 3 and 4 respectively. We shall introduce the heat conduction problem next. One can also give a less phenomenological derivation than the one below for the quantity of interest [4].

Suppose that the incident wave from the left is not monochromatic but a general time dependent pulse, so that for x < 0

(1.13)    u(t,x) = ∫_{−∞}^{∞} e^{−iωt} [ e^{ikx} + R e^{−ikx} ] G(dω) ,    k = ω/c .

Here G(dω) is a complex-valued measure on R such that G*(−dω) = G(dω), the star denoting complex conjugation. We assume that G is statistically independent of the fluctuation process y(x), that ⟨G(dω)⟩ = 0, and that

(1.14)    ⟨G(dω) G*(dω′)⟩ = δ(ω − ω′) B(ω) dω dω′ .

We use angular brackets ⟨ ⟩ to denote expectation involving G, which is distinct from E{·}. Thus we assume that the wave incident from the left is a stationary random function of time, statistically independent of the scattering medium, and with power spectral density B(ω). By (1.3) and linearity, the transmitted wave is
(1.15)    u(t,x) = ∫_{−∞}^{∞} e^{−iωt} e^{ikx} T(L,k) G(dω) ,    x ≥ L .

Since k = ω/c, this is the same as

(1.16)    u(t,x) = ∫_{−∞}^{∞} e^{−iω(t − x/c)} T(L,ω/c) G(dω) ,

and this is a real-valued process. From (1.14), (1.16) and the identity T*(L,k) = T(L,−k), it follows that

(1.17)    E⟨u(t+s,x) u(t,x)⟩ = ∫_{−∞}^{∞} e^{−iωs} E{|T(L,ω/c)|²} B(ω) dω .

The quantity on the left is the time correlation function of the transmitted wave with time lag s. We see that it is independent of t ≥ 0 and x ≥ L. Of interest is the variance of the transmitted wave, defined by

(1.18)    J(L) = ∫_{−∞}^{∞} E{|T(L,ω/c)|²} B(ω) dω .

Since both |T(L,ω/c)|² and B(ω) are even functions of ω, we have

(1.19)    J(L) = 2c ∫_0^∞ E{|T(L,k)|²} B(kc) dk .
To simulate a heat bath at temperature equal to θ on the left (x < 0), we shall take B(kc) equal to θ for 0 ≤ kc ≤ 1 and zero for kc > 1. Since no waves impinge from the right, the medium on the right is at temperature zero. Only the behavior of B near kc = 0 matters, as we shall see. Thus we define the average rate of heat conduction by

(1.20)    J(L) = 2cθ ∫_0^{1/c} E{|T(L,k)|²} dk .

Problem III: Determine the asymptotic behavior of J(L) as L → ∞.

We return to this problem in Section 4 and show [4] that J(L) behaves like L^{−1/2} as L → ∞.
There are many other interesting problems one can pose about the behavior of T(L,k,ℓ,α), or even the wave amplitude u(x;L,k,ℓ,α) at interior points 0 < x < L. It is known [10] that |T(L,k,ℓ,α)|² → 0 as L → ∞ with probability one. In fact, E{|T(L,k,ℓ,α)|²} decays exponentially to zero as L → ∞, as is shown in [8]. The exact constant of exponential decay is not known, however. We return to this question in Section 6.

2. LIMIT THEOREMS FOR STOCHASTIC EQUATIONS
The approach to Problems I, II, III of the previous section that we shall follow is this. Consider the wave field u(x;L,k) satisfying (1.1) subject to (1.4) and (1.5). Define the forward and backward traveling wave amplitudes A and B by (cf. [1])

(2.1)    A(x) = (1/2) [ u(x) + u′(x)/ik ] e^{−ikx} ,    B(x) = (1/2) [ u(x) − u′(x)/ik ] e^{ikx} .

Then (1.1), (1.4) and (1.5) yield the following equations for A and B:

(2.2)    dA/dx = (ik/2) g(y(x)) [ A + e^{−2ikx} B ] ,    dB/dx = −(ik/2) g(y(x)) [ e^{2ikx} A + B ] ,

with

(2.3)    A(0) = 1 ,    B(L) = 0 .

Furthermore, (1.6) and (1.7) become

(2.4)    R = B(0) ,    T = A(L) .

Now let

(2.5)    m(x;k) = (ik/2) g(y(x)) ( 1    e^{−2ikx} ; −e^{2ikx}    −1 )

be the coefficient matrix of the system (2.2), and let Y(x;k) be the 2×2 fundamental solution matrix of (2.2), i.e.

(2.6)    dY(x;k)/dx = m(x;k) Y(x;k) ,    Y(0;k) = I .

From the structure of m in (2.5) we see that Y has the form

(2.7)    Y = ( a    b̄ ; b    ā ) ,    |a|² − |b|² = 1 ,

i.e., Y(x;k), x ≥ 0, is a random process with values in the group SU(1,1); k is a parameter. It also follows from (2.3), (2.4) and (2.7) that T(L;k) (= T(L,k,ℓ,α)) is a function of a(L;k), b(L;k), and in particular

(2.8)    |T(L;k)|² = 1/|a(L;k)|² .

Indeed, Y(L;k) maps (A(0), B(0)) into (A(L), B(L)), so (2.3) gives 0 = B(L) = b + ā B(0), i.e. R = B(0) = −b/ā, and then T = A(L) = a + b̄ B(0) = (|a|² − |b|²)/ā = 1/ā; note also that |R|² + |T|² = (|b|² + 1)/|a|² = 1, in agreement with (1.8). Therefore the problems of the previous section reduce to the determination of the asymptotic distribution of Y(L;k) (= Y(L;k,ℓ,α)) under various scalings. But Y is the solution of the stochastic initial value problem (2.6), so we need some theorems regarding the asymptotic behavior of stochastic ODEs depending on a small parameter. Following [5] we shall describe next some results concerning asymptotics for stochastic ODEs. In Section 6 we shall prove in some detail one particular
I n Section 6 we s h a l l prove i n some d e t a i l one p a r t i c u l a r
2 03
r e s u l t ; t h e o t h e r s f o l l m i n much t h e same way. F i r s t we consider i n i t i a l value problems s c a l e d as i n Case ( i ) i n Section 1. We s h a l l denote by t
E
0 t h e independent v a r i a b l e , by y ( t ) t h e s c a l e d E
f l u c t u a t i n g c o e f f i c i e n t s and by x ( t ) t h e s c a l e d random process which i s t h e solution
Here
of
F ( x , y , ~ ) i s a given function from R~
.n R -valued;
X
S
(0,1] t o SZn s o t h a t x E ( t ) i s
X
E
Recall t h a t t h e f l u c t u a t i o n process y (t.) t a k e s values i n some The discussion h e r e w i l l
compact m e t r i c space S ( t h e space o f c o e f f i c i e n t s ) .
be informal t o avoid d e t a i l e d l i s t i n g of r e g u l a r i t y and o t h e r hypotheses necessaqy f o r a complete treatment.
01
S u p p o s e . t h a t { y ( t ), t d e t a i l s in: Section 6) and'with x fixed
i s a given e r g o d i c Markov process on S (more
E 2 and t h a t y (t)= y(t/E )
.
Suppose a l s o t h a t F = F (x,y)
put
Here and i n t h e sequel E(
0 )
r e f e r s t o e x p e c t a t i o n r e l a t i v e t o the p r o b a b i l i t y
measure of t h e ergodie Markov process { y ( t ) , t
2 0)
regarded a s a s t a t i o n a r y
process, i . e . with y (0) d i s t r i b u t e d according t o t h e i n v a r i a n t measure o f t h e process. L e t x ( t ) be t h e s o l u t i o n
of
I t i s reasonable t o expect t h a t &s E + 0, x E ( t )
+
G(t) i n probability.
In fact, it is not hard to show [2,13,15] that for each δ > 0 and T < ∞,

(2.12)    P[ sup_{0≤t≤T} |x_ε(t) − x̄(t)| > δ ] → 0    as ε → 0 .

The reader can verify easily that this result covers Case (i) of the previous section.

One can now consider the behavior of the fluctuation process ε^{−1}(x_ε(t) − x̄(t)). Again one can show [2,13,15] that this process converges weakly as ε → 0 to a Gaussian Markov process whose infinitesimal parameters are easily identified. Since we shall not use this result, we shall not discuss it further (cf. also [17]).

Next we consider the scaling corresponding to Case (ii). Now y_ε(t) = y(t/ε²) as above, but

(2.13)    F(x,y,ε) = (1/ε) F(x,y)

in (2.9), and

(2.14)    ∫_S F(x,y) μ̄(dy) = 0 .

The asymptotic behavior of x_ε(t) in this case was treated first by Khasminskii [16]. It is also treated in [5] and in several other places referred to in [1] and [2]. The result is that x_ε(t) converges weakly to a diffusion Markov process x(t) as ε → 0, and the infinitesimal generator of this process is given by

(2.15)    L̄ f(x) = ∫_0^∞ E{ F(x,y(0)) · ∂/∂x ( F(x,y(t)) · ∂f(x)/∂x ) } dt .

Case (iii) corresponds to y_ε(t) = y(t/ε²) as above, but now F(x,y,ε) in (2.9) depends also explicitly on t/ε, i.e.

(2.16)    F(x,y,ε) = (1/ε) F(x, y, t/ε) ,

and again

(2.17)    ∫_S F(x,y,τ) μ̄(dy) = 0    for all τ .

The asymptotic behavior of x_ε(t) in this case was also obtained in [16], and the result is that x_ε(t) converges weakly as ε → 0 to the diffusion process x(t) whose infinitesimal generator is given by

(2.18)    L̄ f(x) = lim_{T→∞} (1/T) ∫_0^T ∫_0^∞ E{ F(x,y(0),τ) · ∂/∂x ( F(x,y(t),τ) · ∂f(x)/∂x ) } dt dτ .

This result is also proven in [5], and formula (2.18) for the limit generator is the basic tool for the computations carried out in [1]-[3]. Case (iv) differs from Case (iii) only insofar as, instead of (2.16), the scaled equation carries a different explicit time dependence; the generator of the limiting diffusion process is then given by the analogous formula (2.19). This is the formula used in the heat conduction problem [4]. In Section 5 we shall give a brief proof of this result following [5].

In addition to the above limit theorems, in which ε → 0 while t stays fixed in some finite interval 0 ≤ t ≤ T < ∞, one may also consider what happens to x_ε(t) as t → ∞ while ε > 0 is fixed but small. Some results regarding this question are given in [6].
Another type of question that can be asked is this: in case (i), where (2.12) holds, suppose U is a set of trajectories that does not contain the trajectory x̄(t), 0 ≤ t ≤ T. What is the probability that x_ε(·) ∈ U, for ε small? This probability is exponentially small, and the exponential constant can be computed explicitly [18].

3. APPLICATION TO THE TRANSMISSION COEFFICIENT
Formula (2.18) can be applied to the matrix-valued process Y_ε(x;k), which is the scaled version (Case (iii)) of (2.6):

(3.1)    the system (2.6) with g(y(x)) replaced by (1/ε) g(y(x/ε²)) .

We find that Y_ε(x;k) converges weakly as ε → 0 to a diffusion Markov process with values in SU(1,1), and it remains to find its generator explicitly [1]. The calculations for this are straightforward, and in addition (cf. [1]) one can compute the limiting distribution function of the transmission coefficient explicitly. One also finds that

(3.2)    lim_{ε→0} E{|T(L/ε²,k,ℓ,εα)|²} = 2π ∫_0^∞ e^{−γL(t²+1/4)} ( t sinh πt / cosh² πt ) dt ,

where γ = γ(k) is given by (3.3) and should be thought of as the proper form of ℓ here, since ℓ itself does not appear in the result (3.2). In [1] and [2] we also give references to other papers where formula (3.2) is derived.
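As a consistency check on (3.2) (a numerical sketch; the truncation and step size are arbitrary choices of this example): when γL → 0 the right side of (3.2) tends to 1, i.e. full transmission, and it decreases as γL grows, consistent with the exponential decay discussed in Section 6.

```python
import math

def mean_T2(gamma_L, tmax=12.0, n=20000):
    """Trapezoidal evaluation of the right side of (3.2):
       2*pi * Int_0^inf exp(-gamma*L*(t^2 + 1/4)) * t*sinh(pi t)/cosh(pi t)^2 dt.
    The integrand decays like t*exp(-pi t), so truncating at tmax is harmless."""
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        s += w * math.exp(-gamma_L * (t * t + 0.25)) \
               * t * math.sinh(math.pi * t) / math.cosh(math.pi * t) ** 2
    return 2 * math.pi * h * s

# gamma*L -> 0 recovers E{|T|^2} -> 1 (full transmission)
values = [mean_T2(g) for g in (1e-8, 0.1, 1.0)]
```

The limit value 1 at γL = 0 follows from the identity ∫_0^∞ t sinh πt / cosh² πt dt = 1/2π, so the normalization of (3.2) as reconstructed here is self-consistent.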
Formula (2.19) can be applied to the matrix-valued process Y_ε(x;k), which is now taken to be the scaled version of (2.6) under Case (iv). Now the generator of the limiting process has the same form as that of (3.1) (cf. (4.11) in [1], Part I), but the coefficients change (as explained in [4] also). Formula (3.2) is valid again, except that now γ = γ(k) is given by (3.5), which is in fact k²ℓ/2.

4. APPLICATION TO THE HEAT CONDUCTION PROBLEM
We rewrite the average rate of heat conduction (1.20) in the form

(4.1)    J(L) = 2cθ L^{−1/2} ∫_0^{L^{1/2}/c} E{|T(L, L^{−1/2} k)|²} dk .

If we set L = ε^{−2} and use the results of the previous section, we see that

(4.2)    lim_{ε→0} E{|T(ε^{−2}, εk, ℓ, α)|²} = 2π ∫_0^∞ e^{−γ(t²+1/4)} ( t sinh πt / cosh² πt ) dt ,

where γ(k) is given by (3.5), i.e. γ = k²ℓ/2. Now we take the limit L → ∞ in (4.1) and use (4.2), assuming that we can take the limit inside the integral. We obtain

(4.3)    lim_{L→∞} L^{1/2} J(L) = 2cθ ∫_0^∞ 2π ∫_0^∞ e^{−k²ℓ(t²+1/4)/2} ( t sinh πt / cosh² πt ) dt dk
                               = 2cθ π (2π/ℓ)^{1/2} ∫_0^∞ ( t² + 1/4 )^{−1/2} ( t sinh πt / cosh² πt ) dt ,

the inner k-integral being Gaussian. This settles Problem III of Section 1. It remains to verify that one can take the limit L → ∞ inside the integral sign in (4.1). This requires special considerations that are due largely to Pastur and Feldman [8], and are considered in Section 6.

5. PROOF OF A LIMIT THEOREM
We shall now give a brief argument that shows how the result (2.19) is obtained. The process x_ε(t) under consideration is assumed to satisfy

(5.1)    dx_ε(t)/dt = (1/ε) F(x_ε(t), y_ε(t), t/ε) ,    x_ε(0) = x ,

where F(x,y,τ) is a function on Rⁿ × S × [0,∞) with values in Rⁿ, and y_ε(t) = y(t/ε²), where {y(t), t ≥ 0} is an ergodic Markov process with values in S, a compact metric space. We shall assume that F is a differentiable function of x and τ and continuous in y. Regarding y(t), t ≥ 0, we assume that it is a right continuous Markov process whose infinitesimal generator

(5.2)    Q f(y) = lim_{t↓0} (1/t) [ E_y{f(y(t))} − f(y) ]

is a bounded operator on C(S). Here E_y{·} is expectation with respect to the probability P_y on trajectories in S that start from the point y. We have assumed that {y(t), t ≥ 0} is ergodic. Actually we need that this be so with an exponential approach rate, so that the operator −Q^{−1} is well defined and bounded on functions which average to zero with respect to the invariant measure. In other words, we assume that the null space of Q is spanned by the constant functions, and that the Fredholm alternative is valid.
The process (x_ε(t), y_ε(t)) is a Markov process with values in Rⁿ × S. Let L_t^ε denote the infinitesimal generator of this process. It is easily seen that

(5.3)    L_t^ε = (1/ε²) Q + (1/ε) F(x, y, t/ε) · ∂/∂x .

Note that the infinitesimal generator depends on t, so that the process is not time homogeneous. The probability measures P_{x,y}^ε on trajectories in Rⁿ × S starting from (x,y) can be characterized by the martingale approach of Stroock and Varadhan: P_{x,y}^ε is that probability measure on D([0,∞), Rⁿ × S) (Skorohod space) for which

(5.4)    f(x(t), y(t), t) − ∫_0^t ( ∂/∂s + L_s^ε ) f(x(s), y(s), s) ds

is a martingale for each f(x,y,t) which is bounded and has bounded x and t derivatives. For the asymptotic analysis we require that

(5.5)    ∫_S F(x,y,τ) μ̄(dy) = 0 ,

where μ̄(dy) is the invariant measure of {y(t), t ≥ 0}. This is hypothesis (2.17). Let ψ(y,dz) be the kernel of −Q^{−1}, so that

(5.6)    −Q^{−1} h(y) = ∫_S ψ(y,dz) h(z) ,

where h is bounded and

(5.7)    ∫_S h(y) μ̄(dy) = 0 .
Let f(x) be a fixed function with bounded and continuous second derivatives. Define the operator

(5.8)    L̂_τ f(x) = ∫_S ∫_S μ̄(dy) ψ(y,dz) F(x,y,τ) · ∂/∂x ( F(x,z,τ) · ∂f(x)/∂x ) .

We shall assume that for each fixed f

(5.9)    L̄ f(x) = lim_{T→∞} (1/T) ∫_0^T L̂_τ f(x) dτ

exists uniformly in x and τ, and is independent of τ, as the notation indicates. It can be verified easily that (5.9) and (2.19) are identical.

Now for the limit theorem we proceed as follows. Fix f(x) smooth with compact support (say C_0^∞) and let, for ε > 0 and λ > 0,

(5.10)    f^ε(x,y,τ) = f(x) + ε f^{(1,λ)}(x,y,τ) + ε² f^{(2,λ)}(x,y,τ) ,

where
ψ_λ(y,dz) denotes the kernel of the resolvent (λ − Q)^{−1} (cf. (5.11)), and the correctors are

(5.12)    f^{(1,λ)}(x,y,τ) = ∫_S ψ_λ(y,dz) F(x,z,τ) · ∂f(x)/∂x ,

(5.13)    f^{(2,λ)}(x,y,τ) = ∫_S ψ_λ(y,dz) [ F(x,z,τ) · ∂f^{(1,λ)}(x,z,τ)/∂x − L̂_τ f(x) ] .

Then,

(5.14)    ( ∂/∂t + L_t^ε ) f^ε(x, y, t/ε) = L̄ f(x) + error terms which are small with ε and λ .

In fact, the point of the above constructions is precisely to obtain (5.14), which is a formalized perturbation theory. We return to (5.4) and put for f the function f^ε(x, y, t/ε). We see then from (5.10) and (5.14) that
(5.15)    f(x_ε(t)) − ∫_0^t L̄ f(x_ε(s)) ds + (error terms)

is a martingale. Assuming that we have shown weak compactness for the process x_ε(·) (which is not difficult to show [5]), we can pass to the limit in (5.15) along a convergent subsequence. Because of (5.9), the error terms vanish uniformly in x and τ as ε → 0 and λ → 0, and hence we conclude that for any limit of the process x_ε(·) the expression

    f(x(t)) − ∫_0^t L̄ f(x(s)) ds

is a martingale. Since L̄ is a diffusion operator with smooth coefficients, this martingale problem has a unique solution. Then x_ε(·) converges weakly to the diffusion process generated by L̄.
6. THE INSTABILITY OF THE HARMONIC OSCILLATOR

For the result (4.3) to hold we need the following estimate [8], which allows interchange of limit and integration. There is a constant C independent of k, and a positive function z(k) for k > 0, such that

(6.1)    E{|T(L,k)|²} ≤ C e^{−z(k)L} ,    L ≥ L_0 .

Moreover, z(k) → 0 as k → 0, but there is a constant z_0 > 0 such that

(6.2)    lim_{k↓0} z(k)/k² = z_0 .

As is easily seen from [8] and elsewhere, the estimate (6.1) quickly reduces to the following problem. Consider the initial value problem for (1.1) and introduce polar coordinates

(6.3)    u(x) = e^{r(x)} cos θ(x) ,    u′(x) = −k e^{r(x)} sin θ(x) .

Then (r(x), θ(x)) are solutions of the system

(6.4)    dr/dx = (k/2) g(y(x)) sin 2θ(x) ,
         dθ/dx = k ( 1 + g(y(x)) cos² θ(x) ) ,

and we shall take r(0) = 0 in the sequel.
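The system (6.4) is straightforward to integrate numerically. The following sketch (the Euler discretization and step size are illustration choices, not from the text) verifies the free-space case g ≡ 0, where r stays 0 and θ(L) = kL; supplying a random g lets one observe the linear growth of r(L) that produces the exponential decay of |T(L,k)|² discussed below:

```python
import math

def integrate_polar(L, k, g, h=1e-3):
    """Euler scheme for the system (6.4):
         dr/dx     = (k/2) g(x) sin(2 theta),
         dtheta/dx = k (1 + g(x) cos(theta)^2),
       with r(0) = 0, theta(0) = 0; g is the fluctuation sampled at x."""
    r, theta, x = 0.0, 0.0, 0.0
    for _ in range(round(L / h)):
        gx = g(x)
        r_new = r + h * 0.5 * k * gx * math.sin(2 * theta)
        theta = theta + h * k * (1 + gx * math.cos(theta) ** 2)
        r, x = r_new, x + h
    return r, theta

# free space: g = 0 gives r(L) = 0 exactly and theta(L) = k*L
r0, th0 = integrate_polar(10.0, 2.0, lambda x: 0.0)
```

With a nonzero stationary g the same routine exhibits r(L) growing roughly linearly in L, which is the numerical face of the positive Lyapunov exponent established in Lemma 2 below.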
Here of course r(L) = r(L;k). It is easily seen that

(6.5)    |T(L,k)|² ≤ C e^{−2r(L)} ,

so that (6.1) is implied by

(6.6)    E{e^{−2r(L)}} ≤ C e^{−z(k)L} ,    L ≥ L_0 ,

which is what we shall prove. For the proof let us first strengthen our hypotheses on {y(t), t ≥ 0} as follows.
Recall that we have assumed it is a right continuous Markov process on S with bounded infinitesimal generator Q which satisfies the Fredholm alternative. We shall now assume that

(6.7)    Q f(y) = q(y) ∫_S [ f(z) − f(y) ] π(y,dz) ,

where q is continuous on S and strictly positive, and the probability measures π(y,A) have a continuous density relative to a reference measure μ̂ on S, this density being strictly positive. This hypothesis implies, as is well known [19], the Fredholm alternative for Q. We also assume that

(6.8)    B = ∫_S ∫_S μ̄(dy) ψ(y,dy′) g(y′) g(y)    (cf. Section 5)

is a positive number (it is always nonnegative, since it is equal to 1/2 the power spectral density of the stationary process g(y(s)) at zero frequency).

The proof of (6.6) is in two parts. One deals with the case k > 0 fixed, and the other with the case k → 0 (see [8]).

Part I. k > 0. The process (y(x), θ(x)) (cf. (6.4)) is a Markov process on S × T (T = the unit circle) with infinitesimal generator

(6.9)    L = Q + k ( 1 + g(y) cos² θ ) ∂/∂θ .

Let V(y,θ;k) be defined by

(6.10)    V = (k/2) g(y) sin 2θ ,

and note that r(L) = ∫_0^L V(y(x), θ(x); k) dx.
LEMMA 1. For each real β the operator L + βV generates a positivity preserving semigroup U_β(t) on the bounded measurable functions on S × T. This semigroup has an isolated maximal eigenvalue λ = λ(β,k) and corresponding strictly positive right and left eigenvectors v and ṽ. Moreover λ, v and ṽ are differentiable functions of β.

This lemma is proved by noting that by the Feynman-Kac formula we have

(6.11)    (U_β(t) f)(y,θ) = E_{y,θ}{ exp( β ∫_0^t V(y(s), θ(s)) ds ) f(y(t), θ(t)) } ,

where E_{y,θ} is expectation relative to the measure of the process {(y(t), θ(t)), t ≥ 0}. The positivity preserving property is seen from (6.11). For the existence of an isolated maximal eigenvalue with positive right and left eigenvectors, it suffices to show [11] that there is a t_0 < ∞ and a constant γ > 0 such that, uniformly over starting points in S × T, the time-t_0 transition kernel of the process dominates γ times a fixed reference measure on S × T. Since V is bounded and continuous, |g| ≤ 1 and Q has the form (6.7), this is easily obtained. Finally, the differentiability in β is a consequence of the isolated nature of λ and is not hard to show.

LEMMA 2. We have that λ(0,k) = 0, λ(β,k) is a convex function of β, and λ_β(0,k) > 0.

The fact that λ_β(0,k), which is the exponential growth rate of r, is positive is a continuous-time analog of Furstenberg's result [10], and it is proved in [8] and [9]. It is a nontrivial fact which we shall not, however, consider in detail. The convexity follows from the formula

(6.16)    λ(β,k) = lim_{t→∞} (1/t) log ‖U_β(t)‖ ,

the Feynman-Kac formula and Hölder's inequality.
Following [8] we now have

(6.17)    E{e^{−2r(L)}} ≤ P[ r(L) ≤ δL ] + e^{−2δL} ,

where δ > 0 is chosen below. To estimate the probability on the right, we note that for β > 0

(6.18)    P[ r(L) ≤ δL ] = P[ e^{−βr(L)} ≥ e^{−βδL} ] ≤ e^{βδL} E{e^{−βr(L)}} .

But from (6.11), (6.16) and the subadditive nature of log ‖U_β(t)‖, we conclude that there is a constant C independent of k and L such that

(6.19)    E{e^{−βr(L)}} ≤ C e^{λ(−β,k)L} .

By Lemma 2 we may choose β = β*(k) > 0 so that λ(−β*,k) < 0, and then δ > 0 small enough so that the exponent β*δ + λ(−β*,k) on the right in (6.18)-(6.19) is still negative. From (6.17)-(6.19) we then obtain (6.6) for fixed k > 0.
P a r t 11.
k
LEMMA 3.
+ 0. F o r k > 0 , s m a l l we have
where 8 i s given by (6.8) L e t AO(B) =
Proof: A
C
.
! 2 1 2 4 (T L3 + 8).
We s h a l l show t h a t t h e r e a r e c o n s t a n t s
A
1
and C such t h a t 2
which a l o n g w i t h (6.16) give:
t h e r e s u l t (6.22).
The r e s u l t (6.21) is
o b t a i n e d by a s i m i l a r argument. L e t $ ( y , d z ) b e t h e k e r n e l o f -Q-'
( c f . S e c t i o n 5) and d e f i n e
8
(6.25)
hl(8)
= -
(6.26)
f
=
+
14
I
sin28
$(y,dyt)
+ 88
2 coo 8 c o s 28
-
[$
g ( Y ' ) s i n 28 f l ( y 9 , O )
S afl(~',o)
af1(y1,e)
+ g(yl)
,
(+ 82+ B)]
+ y0
g(y')hl(e)
COS-8+ g ( y t )
ahl (0)
Let
.,
fl = fl
+
h
3
and f 3 = O(k )
uniformly i n
sin28
a e cos2e] .
By d i r e c t c a l c u l a t i o n a s i n S e c t i o n 5 we f i n d t h a t
where
dB
(y,0) E S x T.
For k sufficiently small there exist constants C₁ and C₂ such that the following holds. If 1 denotes the function identically equal to one, then the Feynman-Kac formula gives a representation of (R^ν(t)1)(y,θ) in terms of the expectation E_{y,θ}{e^{βτ(t)} f^{(k)}(y(t),θ(t))}. Now choose δ > 0 small so that the relevant exponent is nonnegative for all k small. From Dynkin's identity (the integrated semigroup identity) we obtain (6.32); from (6.31) and (6.32) we obtain the inequalities which, combined with (6.30), yield (6.23), and the lemma is proved.
Now to find z̄ in (6.2) we repeat the argument (6.17)-(6.19) and use Lemma 3. By picking δ > 0 appropriately we actually obtain a positive value of z₀. This completes the proof of (6.1) and (6.2).
[1] W. Kohler and G. C. Papanicolaou, Power statistics for waves in one dimension and comparison with radiative transport theory, J. Math. Phys. 14 (1973), pp. 1733-1745 and 15 (1974), pp. 2186-2197.
[2] W. Kohler and G. C. Papanicolaou, Wave propagation in a randomly inhomogeneous ocean, Springer Lecture Notes in Physics #70 (1977), edited by J. B. Keller and J. Papadakis.
[3] W. Kohler and G. C. Papanicolaou, Fluctuation phenomena in underwater sound propagation, I and II, Proceedings of the Conference on Stochastic Equations and Applications, Academic Press (1977), edited by J. D. Mason.
[4] J. B. Keller, G. C. Papanicolaou and J. Weilenmann, Heat conduction in a one-dimensional random medium, Comm. Pure Appl. Math. 32 (1978), to appear.
[5] G. C. Papanicolaou, D. Stroock and S. R. S. Varadhan, Martingale approach to some limit theorems, in Statistical Mechanics, Dynamical Systems and the Duke Turbulence Conference, Duke Univ. Math. Series, vol. III, edited by D. Ruelle, Durham, North Carolina, 1977.
[6] G. Blankenship and G. C. Papanicolaou, Stability and control of stochastic systems with wide-band noise disturbances, I, SIAM J. Appl. Math. 34 (1978), pp. 437-476.
[7] A. J. O'Connor and G. C. Papanicolaou, A central limit theorem for the disordered harmonic chain, Comm. Math. Phys. 45 (1975), pp. 67-77.
[8] L. A. Pastur and E. P. Feldman, Wave transmittance for a thick layer of a randomly inhomogeneous medium, Soviet Phys. JETP 40 (1975), pp. 241-243.
[9] L. A. Pastur, Spectra of random Jacobi matrices and Schrödinger equations with random potentials on the whole axis, Preprint, FTINT Akad. Nauk Ukr. SSR, Khar'kov, 1974.
[10] H. Furstenberg, Noncommuting random products, Trans. Am. Math. Soc. 108 (1963), pp. 377-428.
[11] T. Harris, Branching Processes, Springer, Berlin, 1963.
[12] V. I. Klyatskin, Statistical properties of dynamical systems with randomly fluctuating parameters, Nauka, Moscow, 1975.
[13] I. I. Gihman and A. V. Skorohod, Stochastic Differential Equations, Springer, Berlin, 1972.
[14] A. Casher and J. L. Lebowitz, Heat flow in regular and disordered harmonic chains, J. Math. Phys. 12 (1971), pp. 1701-1711.
[15] R. Z. Khasminskii, On stochastic processes defined by differential equations with a small parameter, Theor. Prob. Appl. 11 (1966), pp. 211-228.
[16] R. Z. Khasminskii, A limit theorem for solutions of differential equations with a random right-hand side, Theor. Prob. Appl. 11 (1966), pp. 390-406.
[17] B. White and J. Franklin, A limit theorem for stochastic two-point boundary value problems of ordinary differential equations, Comm. Pure Appl. Math., to appear.
[18] M. I. Freidlin, Fluctuations in dynamical systems with averaging, Dokl. Akad. Nauk SSSR 17 (1976), pp. 104-108.
[19] J. L. Doob, Stochastic Processes, J. Wiley, New York, 1953.
CENTRO INTERNAZIONALE MATEMATICO ESTIVO
(C.I.M.E.)

A STOCHASTIC PROBLEM IN PHYSICS

Cecile DeWitt-Morette
Department of Astronomy and Center for Relativity
University of Texas, Austin, TX 78712
Introduction:

The world is global and stochastic and physical laws are local and deterministic. Thus the problems discussed at this Summer School are the very fabric of physics. But physics asks some questions which go beyond the territory which has been explored here. I shall present one of them, show how far physicists have gone toward its solution and mention an important problem of current interest.
Probability theory begins with a probability space (Ω, F, P). The careful definition of the σ-field F of subsets of Ω and of the probability measure P has given us many powerful theorems. It is also possible, and often preferable in physics, to define P as a promeasure,¹ namely as a projective family of bounded measures defined on the system of finite dimensional spaces known as the projective system of Ω. This is excellent for statistical mechanics. Unfortunately, in quantum mechanics, we have to deal with families of unbounded measures on the projective system of Ω. And this is the key issue in the study of Feynman path integrals.

¹Also called cylindrical measure. See for instance [Bourbaki].
When Feynman was a graduate student in the early forties, he felt uncomfortable with quantum mechanics and, rather than let familiarity become a substitute for understanding, he analyzed¹ a basic quantum phenomenon in his own terms: he considered a source of electrons S, a plane detector D and, in between, a screen with 2 slits which could be closed or open. The detector measures the number of electrons arriving on the plane D as a function of position. It is turned on for an amount of time which can be considered as infinite. If one does these experiments:

1) slit 1 open, slit 2 closed,
2) slit 1 closed, slit 2 open,
3) both slits open,

one finds that the probability P₃ measured by the detector in the third experiment is not the sum P₁ + P₂ of the probabilities measured in the first two experiments. But one finds that there is an additive quantity, called probability amplitude, whose absolute value squared is the probability P(b,tb;a,ta) that the electron known to be at a at time ta be found at b at time tb.
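The additivity of amplitudes rather than probabilities can be illustrated numerically. The following sketch is not from the lecture; the quadratic phase functions are made-up toy models. It shows that |φ₁ + φ₂|² differs from |φ₁|² + |φ₂|² by an interference term:

```python
import numpy as np

# Toy model (not from the lecture): amplitudes for reaching detector
# position x through slit 1 or slit 2; the quadratic phases are made up.
x = np.linspace(-5.0, 5.0, 7)
phi1 = np.exp(1j * (x - 1.0) ** 2)    # amplitude, slit 1 open
phi2 = np.exp(1j * (x + 1.0) ** 2)    # amplitude, slit 2 open

p1 = np.abs(phi1) ** 2                # probability, slit 1 open (= 1 here)
p2 = np.abs(phi2) ** 2                # probability, slit 2 open (= 1 here)
p3 = np.abs(phi1 + phi2) ** 2         # probability, both slits open

# Amplitudes add, probabilities do not: the difference is the
# interference term 2 Re(conj(phi1) * phi2) = 2 cos(4x).
interference = 2.0 * np.real(np.conj(phi1) * phi2)
assert np.allclose(p3, p1 + p2 + interference)
assert not np.allclose(p3, p1 + p2)
```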
This result can be generalized to an infinite number of slits, and the probability amplitude for a transition from (a,ta) to (b,tb) is the sum over all possible paths x: T → R such that x(ta) = a and x(tb) = b.
The requirement that, in the limit ℏ → 0, quantum physics goes over to classical physics implies that each path x contributes a phase exp(iS(x)/ℏ), where S is the action defined by the Lagrangian L.

¹Read the first chapter of [Feynman and Hibbs] for a beautiful account of this analysis.
The system need not be a particle in Rᵈ. For instance consider a system whose configuration space is M. One can write the probability amplitude for a transition from (a ∈ M, ta) to (b ∈ M, tb) by a similar path integral. The space of paths Ω is then the space of continuous paths x: T → M such that x(ta) = a and x(tb) = b.
I have now set up all the necessary physical concepts to show why (1) physicists need "P" to be more general than a probability measure, and (2) they need "Ω" to be endowed with a variety of structures. Indeed:

1. The fact that we have to work with unbounded measures comes from the fact that we sum probability amplitudes rather than probabilities. Where a probabilist has

(1,a)    dγ(u) = (2π)^{-d/2} (Det Γ⁻¹)^{1/2} exp(-Γ⁻¹_{kj} u^k u^j / 2) du¹ ... duⁿ,

we have

(2,a)    dγ(u) = (2πi)^{-d/2} (Det Γ⁻¹)^{1/2} exp(i Γ⁻¹_{kj} u^k u^j / 2) du¹ ... duⁿ.

It is clear that we cannot use the probabilists' estimates and we have been forced to investigate different approaches.
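The difference between (1,a) and (2,a) can be made concrete in one dimension with Γ = 1 (a made-up choice for illustration): the probabilist's density has total mass 1, while the modulus of the Feynman "density" is constant, so its total variation over [-L, L] grows without bound, which is why no bounded measure can represent it:

```python
import numpy as np

# d = 1, Gamma = 1 (made-up choice): compare the total mass of (1,a)
# with the total variation of (2,a) over a growing interval.
u = np.linspace(-200.0, 200.0, 400001)
du = u[1] - u[0]

prob_density = (2 * np.pi) ** -0.5 * np.exp(-u ** 2 / 2)   # case (1,a)
# |(2*pi*i)^(-1/2)| = (2*pi)^(-1/2) and |exp(i u^2 / 2)| = 1, so the
# modulus of the Feynman "density" (2,a) is constant:
feyn_modulus = np.full_like(u, (2 * np.pi) ** -0.5)

mass = prob_density.sum() * du        # finite: a bounded measure
variation = feyn_modulus.sum() * du   # ~ 400 / sqrt(2 pi): grows with L

assert abs(mass - 1.0) < 1e-4
assert variation > 100.0
```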
i. Feynman did not know the Wiener integral and invented his own calculus. He replaced a path x by n of its values x(t₁), ..., x(tₙ) and computed the limit n = ∞ of the discretized problem. He discovered "experimentally" that what is now known as the Stratonovitch integral gives the "right" result if the problem is simple enough--for instance, if the configuration space is M = Rᵈ. With his admittedly crude tool, Feynman was able to construct a fantastically good computational procedure known as the Feynman diagrams. The Feynman diagrams are used widely in nearly all branches of physics. The diagram rules can be applied and even refined without knowing anything about path integration. They have been justified by several methods and have often eclipsed path integration.

ii. Another approach, pioneered in particular by Montroll and Nelson, is based on analytic continuation; either the time or the mass is complexified. The main activity in this domain is euclidean field theory.

iii. I shall speak today of a method which proceeds neither by discretization nor by analytical continuation. It has given versatile tools which can a fortiori be used in probability theory. It defines an object on Ω called a prodistribution. Because prodistributions are defined directly on Ω, one can investigate what happens when Ω is endowed with a variety of other structures.

2. The space Ω, in physics, is often the space of paths mapping the time T ⊂ R into the configuration space M, or into the phase space T*M, of a system. Too often the global properties of the configuration space of a system are ignored, and one thinks of the configuration space of a system with d degrees of freedom as Rᵈ. But even the simplest systems, a pendulum, a system of indistinguishable particles, a rigid body rotator, etc., have configuration spaces which are multiply connected riemannian spaces. A path integral formulation of quantum physics is an integral over Ω. It reflects the global properties of Ω and the various structures put on Ω.

I shall now introduce prodistributions and explain briefly¹ how they can be used to compute path integrals explicitly. Let us go back to P considered as a promeasure. We could have defined a promeasure by its Fourier transform, i.e. by a family of functions on the dual of the projective system. For instance, instead of defining a gaussian promeasure by a projective family of gaussians on finite dimensional spaces of the type (1,a), we can define it by their Fourier transforms on the dual spaces.

At this point we can remove the condition that the measures γ be bounded. Indeed, whereas γ is a set function, γ(U) = ∫_U dγ(u), its Fourier transform Fγ is defined pointwise. Whereas "i" plays havoc in equation (2,a), it is quite manageable in its Fourier transform.

¹A detailed account will appear in [DeWitt-Morette, Maheshwari, B. Nelson, 1979].
In other words, instead of considering a projective family of bounded measures, we can consider a projective family of tempered distributions. Dieudonné has proposed to call a projective family of tempered distributions a "prodistribution."

Since time is limited I shall work with an example. The Feynman-Kac formula suggests itself since you are working with the Kac formula and I work with the Feynman formula. Given the Schrödinger equation, write down the path integral representation of the solution and compute it. The problem is sufficiently complicated to display the power of prodistributions. The answer is the path integral representation (3); in particular the propagator K(tb,b;ta,a) (eq. 4) is obtained by choosing the initial wave function to be concentrated at a.
The following notation has been used.

i. μ = (ℏ/m)^{1/2}.

ii. Dev_b is the development mapping from the space of paths¹ on T_bM (the tangent space to M at b) to the space of paths on M. If M = Rᵈ then Dev_b is the identity mapping. In general, Dev_b(μx, ·) is a path X on M such that Ẋ(t) is equal to the parallel transport of μẋ(t) from b to X(t) along X. Thus equation (3) is defined for paths X on M, but the variable of integration x is a path on T_bM. Elworthy has shown that the development mapping defines a measurable mapping from the space of continuous paths on T_bM into the space of continuous paths on M.

iii. Ω₊ is the space of continuous paths on T_bM such that x(tb) = 0.

iv. w₊ is the prodistribution on Ω₊ defined by its Fourier transform Fw₊ on the dual Ω'₊ of Ω₊, where for μ ∈ Ω'₊,

    <μ,x> = ∫ dμ_α(t) x^α(t),    α = 1, ..., d.

¹Paths in L^{2,1}, the space of square integrable functions whose first weak derivatives are square integrable.
A gaussian prodistribution w on a space of continuous paths defined on T is a prodistribution whose Fourier transform is of the form

    Fw(μ) = exp(-W(μ,μ)/2).

The Wiener prodistribution w_W on Ω₊ is the gaussian prodistribution whose covariance is G(t,s) = inf(tb-t, tb-s).

Equation (3) in the flat case is the Feynman-Kac formula. Note that we integrate over Ω₊ (paths vanishing at tb) and not over Ω₋ (paths vanishing at ta), where the covariance is G(t,s) ≡ inf(t-ta, s-ta). This is conceptually simpler (sum over all paths ending at b) and computationally easier. This is the form one obtains readily by working with product integrals. Equation (3) has been derived by Elworthy for the probabilistic case (solution of the heat diffusion equation); the theory of prodistributions makes it possible to use Elworthy's construction for the Schrödinger equation.

Computation of equation (3). Consider a linear continuous mapping P from Ω₊ either into itself or into another space, say U:

    P: Ω₊ → U by x ↦ u;

let P̃ be the transposed mapping between the respective duals,

    P̃: U' → Ω'₊ by u' ↦ P̃u'.

If F: Ω₊ → R is such that F = f ∘ P, then

    ∫ F(x) dw(x) = ∫ f(u) dw_P(u),    where Fw_P = Fw ∘ P̃.

This simple relation is the clue for many explicit calculations and we carry out one calculation in the appendix. The explicit calculation of (3) proceeds via several linear mappings.
I shall mention only a couple of them:

1. Map y ↦ x such that b + μy(t) = q(t) + μx(t), where q is the path whose development is a solution of the Euler-Lagrange equation of the problem such that q(tb) = b. The boundary value q(ta) is related to the initial wave function. For instance, if one computes the propagator (eq. 4) one chooses q(ta) = a.

2. Set Dev(q + μx, t) = Y(t,x,μ) and expand the integrand in equation (3) in powers of μ. Set

    Dev'(q)x = (∂/∂μ) Dev(q + μx, ·)|_{μ=0};

Dev'(q) is a linear mapping from the set of vector fields along q into the set of vector fields along Dev(q). It is easy to construct the linear mapping which "absorbs" the quadratic terms of the expansion. The image under this mapping of the Wiener gaussian w_W is a gaussian whose covariance is an elementary kernel of the Jacobi equation of the system (alias the small disturbance equation, alias the variational equation of the action S). The problem of solving a partial differential equation (Schrödinger equation) is then reduced to solving an ordinary differential equation (Jacobi equation), and much is known about this second order linear homogeneous ordinary equation.¹

¹cf. Jacobi, Poincaré, Sturm-Liouville, etc., etc.
Finally one obtains an expansion of the propagator in powers of μ, where S is the action along the classical path from (a,ta) to (b,tb), and where the terms A_k are given by integrals over finite dimensional spaces. A_k is a "moment integral", very easy to compute in the flat case, in principle for any k. It is very difficult to compute in the Riemannian case.
Prodistributions have been used in a variety of problems: scattering states, bound states, quantum properties of systems whose classical solutions have caustics, etc. A versatile technology has been developed to obtain explicit answers.

Problems on curved spaces have been solved. The next problem we plan to investigate is path integration on curved spacetimes. This is not a simple generalization of path integration on curved spaces: if one replaces the Laplacian by a d'Alembertian, one loses ellipticity. On the other hand we do not want to touch field theory until we understand what happens on curved spacetimes.
Appendix

Example. Let w be the Wiener measure on the space Ω of continuous paths x: T → R such that x(ta) = 0, i.e.

    Fw(μ) = exp(-W(μ,μ)/2),    W(μ,μ) = ∫∫ G(t,s) dμ(t) dμ(s),

with covariance G(t,s) = inf(t-ta, s-ta), for μ ∈ Ω'. Compute

    I = ∫_Ω F(x) dw(x),    where F = f ∘ P and P: x ↦ u = {u¹, ..., uⁿ} ∈ Rⁿ with uⁱ = x(tᵢ).

It follows that

    I = ∫_{Rⁿ} f(u) dw_P(u),    where Fw_P = Fw ∘ P̃.

The transpose P̃: Rⁿ → Ω' of P is defined by

    <P̃ξ, x>_Ω = <ξ, Px>_{Rⁿ},

where <μ,x>_Ω = ∫_T dμ(t) x(t) is the duality in Ω and < , >_{Rⁿ} is the duality in Rⁿ. One can read off immediately that P̃ξ = Σᵢ ξᵢ δ_{tᵢ}. Hence

    Fw_P(ξ) = exp(-W(P̃ξ, P̃ξ)/2).

A quick calculation gives

    W(P̃ξ, P̃ξ) = Σ_{i,j} inf(tᵢ-ta, tⱼ-ta) ξᵢ ξⱼ.

It follows that w_P is the gaussian on Rⁿ whose covariance matrix is G_ij = inf(tᵢ-ta, tⱼ-ta).

This example, possibly the best known result of probability theory, was chosen to display, on familiar grounds, methods used in computing explicitly the WKB approximation of the wave function on curved spaces (eq. 3).
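The appendix result can be checked by simulation. The sketch below (not from the lecture; ta = 0 and the sampling times are made up) verifies that under the Wiener measure the image of a path x by P: x ↦ (x(t₁), ..., x(tₙ)) is gaussian with covariance matrix G_ij = inf(tᵢ, tⱼ):

```python
import numpy as np

# Monte Carlo sketch of the appendix result, with ta = 0 and made-up
# sampling times: under the Wiener measure, the image of a path x by
# P: x -> (x(t1), ..., x(tn)) is gaussian with covariance inf(t_i, t_j).
rng = np.random.default_rng(0)
t = np.array([0.25, 0.5, 1.0])
n_paths, n_steps = 50_000, 100
dt = t[-1] / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)         # Brownian paths, x(0) = 0
idx = np.rint(t / dt).astype(int) - 1         # grid indices of t1, t2, t3
samples = paths[:, idx]                       # u = P x

empirical = np.cov(samples, rowvar=False)     # sample covariance of u
exact = np.minimum.outer(t, t)                # G_ij = inf(t_i, t_j)
assert np.abs(empirical - exact).max() < 0.05
```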
Bourbaki, N. (1969) Éléments de mathématique, Chapter IX, Volume VI, also referred to as Fascicule 35 or No. 1343 of the Actualités Scientifiques et Industrielles, Hermann, Paris.

DeWitt-Morette, C., A. Maheshwari and B. Nelson (1979) Path Integration in Non-Relativistic Quantum Mechanics, Physics Reports.

Elworthy, K. D. (1978) "Stochastic dynamical systems and their flows," to appear in Proceedings of the Conference on Stochastic Analysis, Northwestern University.

Feynman, R. P. and A. R. Hibbs (1965) Quantum Mechanics and Path Integrals, McGraw-Hill, New York.
CENTRO INTERNAZIONALE MATEMATICO ESTIVO
(C.I.M.E.)

THE EMBEDDING PROBLEM FOR STOCHASTIC MATRICES

G. S. Goodman

§1. Statement of the embedding problem.

An n x n matrix P = [p_ij], with non-negative entries, is said to be stochastic if the entries along any row sum to one. In 1938, G. Elfving [2] formulated the embedding problem for stochastic matrices, essentially as follows. For what matrices P can there be found a value t₀ > 0 and a continuous family of stochastic matrices P(s,t) on 0 ≤ s ≤ t ≤ t₀ that satisfies the functional equation

(1.1)    P(s,t) = P(s,u)P(u,t)    whenever s ≤ u ≤ t,

the initial condition

(1.2)    P(s,s) = I    for all 0 ≤ s ≤ t₀,

and is such that

(1.3)    P(0,t₀) = P?
Here I denotes, as usual, the identity matrix. In probability theory, the entries p_ij(s,t), i,j = 1,...,n, of P(s,t) are regarded as transition probabilities of an n-state, non-homogeneous Markov process in continuous time, i.e., p_ij(s,t) is the conditional probability that the process will be found in state j at time t, given that it was in state i at time s, and equation (1.1) is known as the Chapman-Kolmogorov equation. The continuity of P(s,t) reflects certain hypotheses concerning the nature of the sample paths. Thus the embedding problem is concerned with determining exactly which stochastic matrices P can serve as transition matrices of an n-state Markov process.

For 2 x 2 matrices, the problem had been solved in 1932 by Fréchet [3], and the solution was rediscovered by Elfving. The necessary and sufficient condition for embeddability in this case is that

    tr P - 1 = det P > 0.
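The Fréchet-Elfving criterion is easy to check numerically. Note that tr P - 1 = det P is an identity for every 2 x 2 stochastic matrix, so the real content of the criterion is the positivity. A sketch (the matrices below are made up):

```python
import numpy as np

# For any 2x2 stochastic matrix P = [[1-a, a], [b, 1-b]] one has
# tr P - 1 = det P = 1 - a - b, so the criterion reads 1 - a - b > 0.
rng = np.random.default_rng(1)
for _ in range(1000):
    a, b = rng.uniform(0, 1, size=2)
    P = np.array([[1 - a, a], [b, 1 - b]])
    assert np.isclose(np.trace(P) - 1, np.linalg.det(P))

# A made-up embeddable example (a + b < 1) and a non-embeddable one:
embeddable = np.array([[0.9, 0.1], [0.2, 0.8]])      # det = 0.7 > 0
not_embeddable = np.array([[0.1, 0.9], [0.8, 0.2]])  # det = -0.7 < 0
assert np.linalg.det(embeddable) > 0 > np.linalg.det(not_embeddable)
```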
When n > 2, the problem is still open. It is known that det P > 0 is necessary, but it is not sufficient, as a result cited below abundantly shows.
§2. The Kolmogorov equations.

In [5] I showed that if such an embedding family P(s,t) exists, then the simple change of time scale, replacing s and t by

(2.1)    -log det P(0,s)  and  -log det P(0,t),

converts the family into one which is lipschitzian in each variable and can be identified with the general solution of the Kolmogorov forward and backward differential equations

(K1)    ∂P/∂t = P Q(t)    a.e.,

(K2)    ∂P/∂s = -Q(s) P    a.e.  (0 ≤ s ≤ t ≤ t₀),

resp. (understood in the Carathéodory sense). Here, for fixed values of s and t, the Q's denote intensity matrices: Q(t) is given by

(2.2)    Q(t) = lim (P(u,v) - I)/(v - u)    (u ≤ t ≤ v, v - u → 0),

and a similar formula holds for Q(s). The main work in [5] was to show that these limits exist a.e. when the above time scale is used. It follows from (2.2) that the intensity matrices Q = [q_ij], i,j = 1,...,n, satisfy the conditions

(2.3)    -1 ≤ q_ii ≤ 0 for all i,    0 ≤ q_ij ≤ 1 whenever i ≠ j,

while (2.1) implies that Σᵢ q_ii = -1 a.e. They thus form an (n(n-1)-1)-dimensional simplex. Moreover, the conditions (2.3) characterise the intensity matrices, for, given any one-parameter family of such matrices with measurable entries, we can integrate the Kolmogorov differential equations, subject to the initial values (1.2), and generate a unique family P(s,t) of stochastic matrices that satisfies (1.1) and yields (2.2). It is only the proof that the elements of P(s,t) are non-negative that is not routine: a simple way out is to use product integration, cf. [8].

It follows that either one of the two Kolmogorov equations, (K1) or (K2), together with the constraints (2.3), can be regarded as a control system that generates stochastic matrices, with the intensity matrices, varying measurably, playing the rôle of controls.
§3. Control-theoretic formulation of the embedding problem.

In [5], I pointed out that by replacing the functional equation (1.1) by the control equation (K1), subject to the constraints (2.3), the embedding problem is converted into an equivalent reachability problem, viz.,

    What matrices P can be reached at t₀ from the identity matrix I at t = 0 by solutions P(t) = P(0,t) of (K1)?

Of course, if we want, we can use (K2) and ask

    What matrices P at s = 0 can be steered to the identity matrix I at s = t₀ along solutions P(s) = P(s,t₀) of (K2)?

In both cases, t₀ plays the role of a parameter. The two problems are equivalent, and the second can be put into the same form as the first by replacing s by t₀ - s, thereby changing the sign in (K2) to +.

The investigation of the embedding problem by control-theoretic means became one of the main tasks of a research project, sponsored by the Scientific Affairs Division of NATO, in which the principal investigators were Søren Johansen from Copenhagen and myself.
§4. Some properties of the reachable set.

From general considerations concerning semigroups of positive matrices [1], it follows that the reachable set is contractible to certain of its boundary points (which correspond to values t₀ = ∞). In [9], Johansen proved from the differential equations that the contractions can be done along rays, so that the reachable set is actually starlike with respect to these points.

The basic existence theorem of Filippov, in the form given by Lee and Markus [12], Roxin [15] and myself [4], implies that the set of matrices that can be reached in time t₀ is compact, and the set of all reachable matrices is compact relative to GL(n).

The fact that the sections t₀ = const. of the reachable set are arcwise connected is almost immediate. For if P₁ and P₂ are two embeddable matrices, associated with the embedding families P₁(s,t) and P₂(s,t), resp., each reachable in time t₀, then for each fixed u the matrix P(u), whose control law is that of P₁ from 0 to u and that of P₂ from u to t₀, is reachable in time t₀, and u ↦ P(u) represents an absolutely continuous curve which joins P₁ and P₂. The same argument shows that the sections t₀ = const. of the set reachable by bang-bang controls are also arcwise connected.

The reachable set has certain symmetries, which go back to the fact that the order in which the states are labeled in a Markov process is irrelevant. Thus, the reachable set is carried onto itself by the orthogonal transformations induced by permutation matrices. One could try to normalize the reachable matrices by requiring that the elements along the main diagonal be arranged in an increasing, or decreasing, order, but this is not always convenient.
§5. The bang-bang conjecture.

The theory of sliding regimes, or chattering controls [13], asserts that any embeddable matrix can be approximated (along its whole trajectory) by finite products of elementary matrices, i.e., matrices that are generated when the controls are fixed at the extreme points of the control region. These elementary matrices turn out to be precisely those stochastic matrices which differ from the identity by the presence of precisely one non-zero, off-diagonal element. Johansen [9] has observed that their trajectories are rectilinear.

Some years ago, I conjectured that the bang-bang principle holds and that every embeddable matrix is a finite product of elementary matrices. The conjecture was suggested by results of Loewner [14] on totally positive and on doubly-stochastic matrices. It is easy to see that it holds, trivially, when n = 2, for then any stochastic matrix P can be written as the product of two elementary matrices, so long as it satisfies the embedding condition tr P - 1 > 0. In general, one might expect that the number of terms would depend on det P as well as upon n, but I suspect that it depends upon n alone and equals n(n-1).

When n ≥ 2, Johansen proved [9] that every matrix in the interior of the reachable set can be reached by bang-bang controls with a finite number of switches. Owing to the work of Krener [11], this is now seen to be a general property of certain control systems. Since there is no bound on the number of switches, it is not possible to conclude that the bang-bang principle holds for matrices on the boundary of the reachable set.

A considerable amount of effort has gone into the study of the bang-bang conjecture. Recent results in the case n = 3 are reported below in §11.
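The n = 2 case of the conjecture can be made completely explicit: writing the two factors down shows exactly how the embedding condition tr P - 1 > 0 enters. A sketch with a made-up matrix P:

```python
import numpy as np

# n = 2 bang-bang factorization: write a stochastic matrix P with
# tr P - 1 > 0 as a product of two elementary matrices (identity plus
# one non-zero off-diagonal element). The matrix P below is made up.
P = np.array([[0.7, 0.3], [0.2, 0.8]])
assert np.trace(P) - 1 > 0                 # embedding condition

b = P[1, 0]                                # second row fixes E2
a = P[0, 1] / (1 - b)                      # then first row fixes E1
E1 = np.array([[1 - a, a], [0.0, 1.0]])    # elementary matrix
E2 = np.array([[1.0, 0.0], [b, 1 - b]])    # elementary matrix

assert 0 <= a <= 1 and 0 <= b <= 1         # both factors are stochastic
assert np.allclose(E1 @ E2, P)
```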
§6. A determinantal inequality.

While the results of section 4 give a certain amount of qualitative information about the set of all embeddable matrices, they fail to yield any criterion for deciding whether a given stochastic matrix P is embeddable or not. To remedy this, we may appeal to a result proved in [5]. There it was noted that the differential equation (K1), together with the constraints (2.3), yields a differential inequality for the product of the diagonal elements in P(t), just by omitting the terms in the equation which are non-negative. Integrating this inequality and using the Jacobi-Liouville formula for the determinant (or (2.1) directly) then gives the following inequality, which must be satisfied by the elements of any embeddable matrix P:

(6.1)    0 < det P ≤ p₁₁ p₂₂ ··· p_nn.

The same inequality occurs in the theories of positive-definite and of totally positive matrices.

The inequality (6.1) is a strong necessary condition for embeddability, and it can be used to show that there are stochastic matrices arbitrarily close to the identity which are not embeddable (cf. §7 below). The set of stochastic matrices which satisfy (6.1) is a semigroup, and in [5] I conjectured that it is precisely the semigroup of embeddable matrices, i.e., that the condition (6.1) is not only necessary, but also sufficient, for embeddability. Shortly thereafter, David Williams pointed out to me that equality can hold in (6.1) for an embeddable matrix P only if some off-diagonal element vanishes. His proof was based upon the functional equation (1.1), but it is equally apparent when one checks the differential inequality described above.

Williams' remark shows, for example, that the 3 x 3 matrix whose entries on the main diagonal are each 1/4, while the remaining elements are each 3/8, is not embeddable, even though (6.1) is satisfied. Below we shall see how his remark can be used to establish that the set of embeddable matrices is not convex when n > 2. (In fact, its convex hull is not known.)
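Williams' example is easy to verify numerically: the matrix is stochastic, equality holds in (6.1), and yet every off-diagonal entry is positive, so by his remark the matrix cannot be embeddable:

```python
import numpy as np

# Williams' example: diagonal entries 1/4, off-diagonal entries 3/8.
P = np.full((3, 3), 3 / 8)
np.fill_diagonal(P, 1 / 4)

assert np.allclose(P.sum(axis=1), 1.0)     # P is stochastic
det = np.linalg.det(P)                     # eigenvalues 1, -1/8, -1/8
diag_prod = np.prod(np.diag(P))

# (6.1) holds with equality: det P = (1/4)^3 = 1/64 = p11 p22 p33 ...
assert np.isclose(det, 1 / 64) and np.isclose(diag_prod, 1 / 64)
# ... yet every off-diagonal entry equals 3/8 != 0, so by Williams'
# remark P is not embeddable.
assert (P[~np.eye(3, dtype=bool)] > 0).all()
```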
§7. Geometrical representation of stochastic matrices.

One of the most captivating features of the embedding problem is that it is completely equivalent to a problem of geometry, or, at least, of kinematics. In this and the next few sections, I shall explain how this comes about. A more complete account will appear in [7]. For simplicity, let us restrict ourselves to 3 x 3 matrices. Considering the rows of a stochastic matrix P as vectors in three-space, relative to a fixed coordinate system, we see that they specify the vertices of an oriented triangle <P> lying in the plane through the three unit vectors. The identity matrix I corresponds to the unit triangle <I> and fixes the orientation. All the other stochastic matrices describe subtriangles of <I>, and the inclusion is strict except for permutation matrices.
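The row-to-vertex correspondence is easy to see in code. A minimal sketch (Python; the matrix P is a hypothetical example): each row of a stochastic matrix sums to 1, so it lies in the plane through the three unit vectors, and non-negativity places it inside the unit triangle <I>.

```python
# Rows of a 3x3 stochastic matrix as vertices of a triangle in the
# plane x + y + z = 1; the identity matrix gives the unit triangle <I>.
P = [[0.5, 0.25, 0.25],
     [0.1, 0.8, 0.1],
     [0.3, 0.3, 0.4]]

for row in P:
    assert abs(sum(row) - 1.0) < 1e-12   # lies in the plane through the unit vectors
    assert all(x >= 0 for x in row)      # lies inside the unit triangle <I>

# The vertices of <P>, taken in order, are the points P[0], P[1], P[2].
print([tuple(row) for row in P])
```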
It is easy then to conclude from (1.1)-(1.3) that every embeddable matrix P belongs to this semigroup. (The same can be said for (6.1), which, of course, implies (7.1), but (7.1) has been derived without use of the differential equations.)
§8. A pre-order for stochastic matrices.

Now let us return to our main theme. Having associated to each 3 x 3 stochastic matrix P a triangle <P>, we can introduce a pre-order in the class of stochastic matrices by defining

(8.1)  P ≼ R if and only if co<P> ⊆ co<R>.

Thus, the inclusion refers to the points of the simplex spanned by the vertices of <P>. The pre-order fails to be a partial order because it is not anti-symmetric: the ordering of the vertices has got lost in the set inclusion. Indeed, P ≼ R and R ≼ P mean that <P> and <R> are congruent, so that P and R are congruent under the action of a permutation matrix (cf. §4).
The pre-order just introduced can be put into an analytical form that shows that it agrees with the pre-order natural to any transformation semigroup (cf. [14], p. 14), viz.,

(8.2)  P ≼ R if and only if SR = P for some stochastic matrix S.

A proof will be given in [7]. Since P, considered as an operator on contravariant vectors, represents the unique affine transformation that carries <I> onto <P>, whereupon co<I> goes onto co<P>, one can read off the equivalence in (8.2). Specialization to elementary matrices (§4) reveals the analog of a well-known property of projection operators, and further analogies still. It should be noted that (8.2) continues to hold if P, R and S are restricted to lie in the subsemigroup of stochastic matrices with positive determinant.
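The criterion (8.2) lends itself to a direct computational check. The following sketch (Python, exact rational arithmetic; the particular matrices R and S are hypothetical choices) builds P = SR with S stochastic and then recovers S = P R⁻¹, certifying P ≼ R; this simple recovery works whenever R happens to be invertible.

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def is_stochastic(M):
    return all(sum(row) == 1 and all(x >= 0 for x in row) for row in M)

# A hypothetical pair: R inside <I>, and P = S R for a stochastic S,
# so that co<P> is contained in co<R> and P ≼ R by (8.2).
R = [[F(3, 4), F(1, 8), F(1, 8)],
     [F(1, 8), F(3, 4), F(1, 8)],
     [F(1, 8), F(1, 8), F(3, 4)]]
S = [[F(1, 2), F(1, 2), F(0)],
     [F(0), F(1, 2), F(1, 2)],
     [F(1, 3), F(1, 3), F(1, 3)]]
P = matmul(S, R)

# Recover S = P R^(-1) and confirm it is stochastic, certifying P ≼ R.
S_rec = matmul(P, inv3(R))
print(is_stochastic(S_rec))   # True
```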
Proposition (8.2) allows us to characterize the 3 x 3 stochastic matrices which are indecomposable, i.e., those that cannot be factored, except trivially, in the semigroup of stochastic matrices. Indeed, if <P> is such that each of its vertices lies on a different side of <I>, then P ≼ R implies that R = I or R = P, unless the vertices of <R> coincide with the vertices of <I>, in which case R is a permutation matrix and the factorization might again be considered trivial. Conversely, if <P> does not have the configuration stated, it is easy to see that a non-trivial R can be found such that co<P> ⊆ co<R>, whence P has R as a factor. Note that, in the first case, it is possible for (7.1) to be satisfied, but not (6.1), and such P's can lie arbitrarily close to the identity I, without being embeddable. They are, in fact, convex combinations of I and permutation matrices; consequently, there are rays emanating from I in the semigroup of stochastic matrices that do not meet the set of embeddable matrices except at I. The embeddable matrices are far from being indecomposable, for they are precisely the stochastic matrices which are infinitely factorizable, in a sense that can be made precise [6].
This characterization of the embeddable matrices dates from my early work in the field, in 1969, and is linked to cognate ideas in the theory of schlicht functions. It embraces the intuitive idea that embeddable matrices can be decomposed into infinitesimal matrices and then reassembled from them, which underlies the results of §2. The same ideas were taken up later by Johansen, who improved them and gave an elegant derivation of Kolmogorov's differential equations based upon them, in [8].
§9. A geometric formulation of the embedding problem.

Looking back at the preceding development, we see that the embedding problem of §1 can now be put into a simple geometrical form. For, suppose we fix t0 and set

P(s) = P(s, t0),  0 ≤ s ≤ t0.

Then (1.1) becomes

(9.1)  P(s) = P(s, u) P(u),  0 ≤ s ≤ u ≤ t0,

and (1.2) and (1.3) become

(9.2)  P(t0) = I,

(9.3)  P(s) is continuous in s,

resp. By (8.2), (9.1) tells us that P(s) ≼ P(u) whenever s ≤ u (0 ≤ s, u ≤ t0), i.e., P(s) is a monotone increasing function of s. Consequently, by (8.1),

co<P(s)> ⊆ co<P(u)>  whenever s ≤ u  (0 ≤ s, u ≤ t0),

so that, by (9.2) and (9.3), the (ordered!) triangle <P(s)> expands continuously from <P> until it reaches <I> and the corresponding vertices coincide. This, then, is the geometry that was lurking in the formulas of §1. The Chapman-Kolmogorov equation (1.1), which represents the Markov property in the probabilistic interpretation, turns out to be just a kinematic constraint. Moreover, the passage from the analytical formulation of the embedding problem to its geometrical interpretation can be reversed. We can start with the geometrical problem, defining continuity in the obvious way, and use the results of §7 and §8 to arrive at the analytical one.
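The monotone expansion just described can be illustrated numerically. The sketch below (Python) takes a hypothetical stationary family P(s, t) = exp((t − s)Q) for an intensity matrix Q, which satisfies the Chapman-Kolmogorov relation, sets P(s) = P(s, t0), and confirms the factorization P(s) = P(s, u) P(u) behind the relation P(s) ≼ P(u).

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(Q, t, terms=40):
    """Matrix exponential exp(tQ) by truncated Taylor series (adequate for small t)."""
    n = len(Q)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][m]*Q[m][j]*t/k for m in range(n)) for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# A hypothetical intensity matrix (rows sum to zero, off-diagonal >= 0).
Q = [[-1.0, 0.5, 0.5],
     [0.25, -0.5, 0.25],
     [0.5, 0.5, -1.0]]

t0, s, u = 1.0, 0.2, 0.6
P_s = expm(Q, t0 - s)    # P(s) = P(s, t0)
P_u = expm(Q, t0 - u)    # P(u) = P(u, t0)
P_su = expm(Q, u - s)    # P(s, u), a stochastic matrix

# (9.1): P(s) = P(s, u) P(u), hence P(s) ≼ P(u) by (8.2).
prod = matmul(P_su, P_u)
err = max(abs(P_s[i][j] - prod[i][j]) for i in range(3) for j in range(3))
print(err < 1e-9)   # True
```

Since the mixing matrix P(s, u) is stochastic, each row of P(s) is a convex combination of the rows of P(u): the triangle <P(s)> sits inside <P(u)>.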
The embedding problem for 3 x 3 stochastic matrices is thus completely equivalent to the following

Locus Problem. What is the locus of the ordered triples (z1, z2, z3) such that the ordered triangle <P> with these points as its vertices can be expanded continuously until it coincides with a given ordered triangle <I>?

Obviously, the problem can be reformulated to ask for the ultimate position of contracting triangles that start off as <I>. If we trace through the developments of §2 and §3, we see that the locus problem is equivalent to the reachability problem for the control system (E), when the "natural parameter" is used as a time scale. Remarkably, this time scale also has a probabilistic interpretation in terms of the expectation of certain ancillary random variables connected with the Markov process associated with the embedding family P(s, t). The geometrical formulation of the embedding problem, and its connection with probability and control, I presented in a seminar in the Dept. of Computing and Control at Imperial College, London, in April, 1972. The formulation is so simple that Søren Johansen was later able to present it on Danish television in a program intended to illustrate what type of problems come up in pure mathematics.
§10. Bang-bang controls in the restricted embedding problem.

The formulation of the embedding problem just given by-passes the differential equations and attaches itself directly to the formulation in terms of the functional equation (1.1) given in §1. Nevertheless, it is interesting to see what the notion of bang-bang control means in the geometrical problem. As we have already observed, bang-bang controls give rise to responses which are finite products of elementary matrices, i.e., stochastic matrices which have only a single non-zero off-diagonal element. The effect of an elementary matrix on a triangle <P> is easily seen to be that it moves just one vertex of <P> along a side joining it to one or the other of the remaining vertices. A bang-bang control therefore consists of a succession of such simple moves, one at a time.

We now formulate a restricted embedding problem.

Restricted Problem. Given two points p1 and p2 in <I>, what is the locus of points p3 such that the ordered triangle <P> with these points as its vertices can be expanded continuously until it coincides with <I>?

It is immediate from the foregoing that the locus of admissible points p3 is starlike about p1 and p2, and it may be empty. Clearly, if we knew the solution of the restricted embedding problem for all different positions of p1 and p2, we would know the solution of the original embedding problem.
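The vertex-moving action of an elementary matrix is easy to verify. In the sketch below (Python; the matrix P and the parameters are hypothetical), left-multiplying P by the elementary matrix E with entry a in position (i, j) replaces vertex i of <P> by the point (1 − a)·P[i] + a·P[j] on the side joining it to vertex j, leaving the other vertices fixed.

```python
def elementary(i, j, a, n=3):
    """Elementary stochastic matrix: identity except row i, which has
    1 - a on the diagonal and a in column j."""
    E = [[float(r == c) for c in range(n)] for r in range(n)]
    E[i][i] = 1.0 - a
    E[i][j] = a
    return E

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k]*B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

P = [[0.6, 0.2, 0.2],
     [0.1, 0.7, 0.2],
     [0.25, 0.25, 0.5]]

# Left-multiplying by E moves vertex 0 of <P> toward vertex 2:
# row 0 of EP is (1-a)*P[0] + a*P[2], a point on the side joining them.
E = elementary(0, 2, 0.3)
EP = matmul(E, P)
expected = [0.7*x + 0.3*y for x, y in zip(P[0], P[2])]
print(all(abs(p - q) < 1e-12 for p, q in zip(EP[0], expected)))   # True
print(EP[1] == P[1] and EP[2] == P[2])                            # True
```

A bang-bang control is then a finite sequence of such one-vertex moves applied in succession.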
The use of bang-bang controls in the restricted problem leads to an interesting construction. Suppose that p1, p2 and 1, 2, 3 have the configuration given in the diagram. The line through p1 and p2 meets the side 12 at a point labeled 4. Let O be any point on the segment 24 and draw a ray from O through p2; it will meet the side 13 at a point p0. Now draw a ray from p0 through p1; it will intersect the line through 3 and O at a point p3.

I claim that p3 is an admissible value and that the resulting triangle <P> can be expanded to <I> by a bang-bang control in five moves. The proof is simple. Start with the triangle <I> and move 2 to O, 1 to p0, 3 to p3, O to p2, and p0 to p1, in that order, always along line segments. In this way, the triangle <I> ends up coincident with <P>. Reversing the steps then expands <P> to <I>, as required.

As O varies from 4 to 2, p3 sweeps out a continuous curve that reaches from 4 to a certain point p4 on 23. The curve is well-known: it is an arc of a hyperbola, and the construction is due to Braikenridge and Maclaurin, around 1733. The construction places the pencil of lines through 3 in projective relationship with the pencil through p1; the points of intersection of corresponding lines realize Steiner's definition of a conic, 1832.

It can be verified that the segment 4p4 is precisely the locus of points in which p11 p22 p33 = det P, in an obvious notation. (Since p1 and p2, the first two rows of P, are fixed, this is a linear equation for p3.) The discussion of the determinantal inequality in §6 tells us that equality can hold for an embeddable matrix only at the endpoints. If p3 = 4, the triangle <P> is degenerate, but there are points p3 arbitrarily near to 4 which are admissible. It follows that the set of embeddable matrices is not convex, for the section in which p1 and p2 are constant is not convex (if it were, its closure would be convex).
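The parenthetical remark that p11 p22 p33 = det P is linear in the third row can be checked with exact arithmetic. In this sketch (Python; the sample points p1, p2, x, y are hypothetical), the function f(p3) = p11 p22 p33 − det P, with the first two rows of P held fixed, respects affine combinations of p3, so its zero set is a line, as claimed.

```python
from fractions import Fraction as F

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def f(p3, p1, p2):
    """p11*p22*p33 - det P, with the first two rows of P held fixed."""
    P = [p1, p2, p3]
    return P[0][0]*P[1][1]*P[2][2] - det3(P)

# Hypothetical fixed first two rows (points p1, p2 in <I>).
p1 = [F(1, 2), F(1, 4), F(1, 4)]
p2 = [F(1, 6), F(2, 3), F(1, 6)]

# f is linear in the third row: f(lam*x + (1-lam)*y) = lam*f(x) + (1-lam)*f(y).
x = [F(1, 3), F(1, 3), F(1, 3)]
y = [F(1, 10), F(2, 10), F(7, 10)]
lam = F(2, 5)
z = [lam*a + (1 - lam)*b for a, b in zip(x, y)]
print(f(z, p1, p2) == lam*f(x, p1, p2) + (1 - lam)*f(y, p1, p2))   # True
```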
Every point p3 in <I> that lies inside the connected domain bounded by the line through p1 and p2 and the arc through 4 and p4 gives rise to a triangle <P> that can be expanded to <I> in six moves. To see this, just draw the point p3 away from p1 on a rectilinear path until it reaches the boundary: the resulting triangle can then be expanded to <I> in five moves, as the reader can check. Actually, the domain indicated is starlike with respect to p1. The crux of the proof is to realize that no line through p1 can meet the arc from 4 to p4 in more than one point. But that is clear, since p1 already belongs to the other branch of the conic. It can likewise be shown that bang-bang controls will work when p3 is located on the line segment joining p2 to p4 or on the segment that joins 4 to the intersection of the line through p1 and p2 with 23.

The results of this section date from the spring of 1972. Apart from the discussion of the determinantal equality and the identification of the conic, they were rediscovered by Johansen and Ramsey, who employed them in an attack on the bang-bang conjecture of §4.
§11. A characterization of the reachable set.

In §4 we pointed out that the set of all reachable matrices is bounded and closed relative to GL(3), and in §5 we remarked that finite products of elementary matrices are dense in the reachable set. These properties can be used to characterize the reachable set, as follows. Suppose that we can find a set R with these properties:

1) I belongs to R;
2) KR ⊆ R for any non-singular elementary matrix K;
3) R is closed relative to GL(3);
4) every matrix in R is reachable from I;

then R is the reachable set from I. To see this, it is enough to observe that 1) and 2) imply that finite products of elementary matrices belong to R, hence R is dense in the reachable set, while by 3), R is closed, so that it contains the reachable set. But the latter set also contains R, because of 4); hence they coincide. The advantage of this scheme is that it allows us to test whether an explicitly given set R is the reachable set or not.

§12. Recent work on the bang-bang conjecture.
Recently, Johansen has exploited a variant of the above procedure to characterize the set of 3 x 3 matrices reachable for t0 in the interval [0, logₑ 2]. For R he takes a closed set which he can describe explicitly and which has the property that every matrix in it can be expressed as the product of at most six elementary matrices. Then, using results on the restricted embedding problem, he establishes that the remaining properties 1) and 2) cited above are valid, provided that det K is sufficiently large. This allows him to conclude that the described set coincides with the set reachable in t0 ≤ logₑ 2, and, at the same time, it establishes the bang-bang conjecture, first for matrices corresponding to t0 in this interval, but then for the whole reachable set just by iteration, where now the number of factors is proportional to t0.

It is of some interest to observe that, in adopting this approach, Johansen did not have to establish a priori that his set R contained matrices reachable in t0 ≤ logₑ 2 that are the product of six elementary matrices. That is a consequence of his final result. It may very well be that a modification of this procedure would lead to a proof of the "strong bang-bang conjecture" that any reachable matrix at all can be expressed as the product of at most six elementary matrices. Then an algebraic decomposition theorem will have been established by geometric means.

Note added August 10, 1978: I believe that I can now establish this conjecture geometrically by going back to the criterion of §11.
References

[1] Brown, D.R., On clans of non-negative matrices, Proc. Amer. Math. Soc. 15 (1964), 671-674.
[2] Elfving, G., Über die Interpolation von Markoffschen Ketten, Soc. Sci. Fennica Comment. Phys.-Math. 10, No. 3 (1938), 1-8.
[3] Fréchet, M., Solution continue la plus générale d'une équation fonctionnelle de la théorie des probabilités "en chaîne", Bull. Soc. Math. France 60 (1932), 242-280.
[4] Goodman, G.S., On a theorem of Scorza-Dragoni and its application to optimal control, in "Mathematical Theory of Control", Balakrishnan & Neustadt, eds., Academic Press, New York (1967), 165-180.
[5] Goodman, G.S., An intrinsic time for non-stationary finite Markov chains, Zeit. f. Wahrsch. 16 (1970), 164-170.
[6] Goodman, G.S., Control theory in transformation semigroups, in "Geometric Methods in System Theory", Mayne & Brockett, eds., Reidel, Dordrecht (1973), 215-226.
[7] Goodman, G.S., A geometrical formulation of the embedding problem for stochastic matrices, to appear.
[8] Johansen, S., A central limit theorem for finite semigroups and its application to the embedding problem for finite state Markov chains, Zeit. f. Wahrsch. 26 (1973), 171-190.
[9] Johansen, S., The bang-bang problem for stochastic matrices, Zeit. f. Wahrsch. 26 (1973), 191-195.
[10] Johansen, S. & Ramsey, F., A representation theorem for imbeddable 3 x 3 stochastic matrices, Preprint No. 5, Aug. 1973, Inst. of Math. Stat., Univ. of Copenhagen.
[11] Krener, A., A generalization of Chow's theorem and the bang-bang theorem to nonlinear control problems, SIAM J. Control 12 (1974), 43-52.
[12] Lee, E.B. & Markus, L., Optimal control for nonlinear processes, Arch. Rational Mech. Anal. 8 (1961), 36-58.
[13] Lee, E.B. & Markus, L., "Foundations of Optimal Control Theory", Wiley, New York (1967).
[14] Loewner, C., On semigroups in analysis and geometry, Bull. Amer. Math. Soc. 70 (1964), 1-15.
[15] Roxin, E., The existence of optimal controls, Mich. Math. J. 9 (1962), 109-119.