$\langle\mu,\mu\rangle(t)$ is a continuous process; it is the increasing process associated with the martingale $\mu(t)$. It can be proved that

$$\sum_i \big(\mu(t_{i+1}) - \mu(t_i)\big)^2 \to \langle\mu,\mu\rangle(t)$$

as the mesh of the subdivision of $[0,t]$ tends to $0$, in the sense of convergence in $L^1$. One easily checks the following property:

(1.6)   $P\Big[\sup_{t \in [0,T]} |\mu(t)| > \varepsilon\Big] \le \dfrac{N}{\varepsilon^2} + P\big[\langle\mu,\mu\rangle(T) \ge N\big] .$
The concept of square integrable martingale being too restrictive, one introduces the concept of locally square integrable martingale. We say that $\mu$ is a locally square integrable martingale if there exists an increasing sequence of stopping times $\tau_n \uparrow +\infty$ a.s. such that $\mu(t \wedge \tau_n) \in M^2$, $\forall n$. We denote by $M^2_{loc}$ the space of locally square integrable martingales.

The decomposition property (1.4) extends to locally square integrable martingales. In other words, if $\mu \in M^2_{loc}$, there exists one and only one increasing continuous adapted process $\langle\mu,\mu\rangle(t)$ such that

$$\mu^2(t \wedge \tau) - \langle\mu,\mu\rangle(t \wedge \tau) \quad \text{is a } \mathcal{F}^{t \wedge \tau} \text{ martingale}$$

for every stopping time $\tau$ such that $\mu(t \wedge \tau) \in M^2$.
Example. Consider the stochastic integral

(1.8)   $\mu(t) = \int_0^t \varphi(s) \cdot dw(s)$

where $w$ is an $n$ dimensional standard Wiener process. Assume that

$$E \int_0^T |\varphi(t)|^2\,dt < \infty , \quad \forall T .$$

Then $\mu \in M^2_{loc}$. Indeed set $\tau_n = n$ and consider $\mu_n(t) = \mu(t \wedge n)$; then

$$E\,|\mu_n(t)|^2 = E \int_0^{t \wedge n} |\varphi(s)|^2\,ds \le E \int_0^n |\varphi(s)|^2\,ds .$$

We define

(1.9)   $\langle\mu,\mu\rangle(t) = \int_0^t |\varphi(s)|^2\,ds .$

By Ito's formula

$$\mu^2(t) = 2\int_0^t \mu(s)\varphi(s) \cdot dw(s) + \int_0^t |\varphi(s)|^2\,ds ,$$

hence (1.9) defines the increasing process associated with the martingale $\mu(t)$.
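The identity (1.9) can be illustrated numerically. The sketch below is not from the book: the integrand $\varphi(s) = (\cos s, \sin s)$ is an arbitrary deterministic choice, picked so that $|\varphi(s)|^2 = 1$ and hence $\langle\mu,\mu\rangle(T) = T$. We approximate $\mu(T)$ by an Ito (left-point) Riemann sum over many simulated Wiener paths and check that $E\,\mu(T)^2$ matches $\int_0^T |\varphi(s)|^2\,ds$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 5000, 500, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)[:-1]      # left endpoints of each step
phi = np.stack([np.cos(t), np.sin(t)])         # phi(s) in R^2, |phi(s)|^2 = 1

# increments of a 2-dimensional Wiener process, one row of steps per path
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 2, n_steps))

# mu(T) = int_0^T phi(s) . dw(s), approximated by the Ito left-point sum
mu_T = (phi[None, :, :] * dw).sum(axis=(1, 2))

# E mu(T) should be ~ 0 and E mu(T)^2 should be ~ <mu,mu>(T) = T = 1
print(mu_T.mean(), (mu_T**2).mean())
```

The second printed value estimates $E\,\mu(T)^2$, which by the decomposition equals $E\langle\mu,\mu\rangle(T) = T$.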
Remark 1.1. One can justify the notation $\langle\mu_1,\mu_2\rangle(t)$ as follows. Let $\mu_1, \mu_2 \in M^2$; then, writing

$$\mu_1\mu_2 = \tfrac{1}{4}\big[(\mu_1+\mu_2)^2 - (\mu_1-\mu_2)^2\big] ,$$

we see that

(1.10)   $\mu_1(t)\mu_2(t) - \langle\mu_1,\mu_2\rangle(t)$ is a martingale,

where

$$\langle\mu_1,\mu_2\rangle(t) = \tfrac{1}{4}\,\langle\mu_1+\mu_2,\mu_1+\mu_2\rangle(t) - \tfrac{1}{4}\,\langle\mu_1-\mu_2,\mu_1-\mu_2\rangle(t) .$$

Note that $\langle\mu_1,\mu_2\rangle$ is a difference of two increasing processes. The martingale property and decomposition (1.10) uniquely defines $\langle\mu_1,\mu_2\rangle(t)$. We also note that $\mu_1,\mu_2 \to \langle\mu_1,\mu_2\rangle$ is a bilinear form on $M^2$. We say that $\mu_1$ and $\mu_2$ are orthogonal if $\mu_1(t)\mu_2(t)$ is a martingale, i.e., if $\langle\mu_1,\mu_2\rangle = 0$. This implies that they are also orthogonal in the sense of the Hilbert space $M^2$. □
Let now $\mu(t) = (\mu_1(t), \ldots, \mu_m(t))$ be a vector continuous $\mathcal{F}^t$ martingale. Assume $\mu_k(t) \in M^2_{loc}$, and

(1.13)   $\langle\mu_j,\mu_k\rangle(t) = \int_0^t a_{jk}(s)\,ds$

where $a_{jk}(s)$ is an adapted process and $|a_{jk}(s)| \le C$.
We can define easily stochastic integrals with respect to $\mu$. Let $\varphi$ be a step function. We define

(1.14)   $I(\varphi) = \sum_i \varphi(t_i) \cdot \big(\mu(t_{i+1}) - \mu(t_i)\big) ,$

and, as for the case $a = I$, we can extend $I(\varphi)$ to elements of $L^2_a$ and denote it $\int_0^t \varphi(s) \cdot d\mu(s)$.

Let $b(s)$ be a matrix in $\mathcal{L}(R^n;R^m)$, whose components are adapted processes and

(1.16)   $E \int_0^T \operatorname{tr}\, b(s)b^*(s)\,ds < \infty .$

Then $\int_0^t b(s)\,d\mu(s) \in M^2$ is a martingale and, for $\theta \in R^m$,

(1.18)   $\Big\langle \theta \cdot \int_0^t b(s)\,d\mu(s),\ \theta \cdot \int_0^t b(s)\,d\mu(s) \Big\rangle = \int_0^t \theta \cdot b(s)a(s)b^*(s)\theta\,ds .$

Assume now

(1.17)   $a(s)$ is invertible and $a^{-1}(s)$ is bounded, $\forall t$, a.s.

Then (Theorem 1.1) there exists a standard Wiener process $w$ such that

$$\mu(t) = \int_0^t a^{1/2}(s)\,dw(s) .$$
Proof. Define

(1.19)   $w(t) = \int_0^t a^{-1/2}(s)\,d\mu(s) ,$

which is an $n$ dimensional $M^2_{loc}$ martingale. Let $\theta \in R^n$; then

$$E\big[|\theta \cdot (w(t_2) - w(t_1))|^2 \mid \mathcal{F}^{t_1}\big] = E\Big[\Big|\int_{t_1}^{t_2} a^{-1/2}(s)\theta \cdot d\mu(s)\Big|^2 \,\Big|\, \mathcal{F}^{t_1}\Big] = (t_2 - t_1)\,|\theta|^2 .$$

From Levy's theorem it follows that $w(t)$ is a Wiener process.
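The construction $w(t) = \int_0^t a^{-1/2}(s)\,d\mu(s)$ can be checked by simulation. The scalar sketch below is illustrative only: the volatility $\sigma(s) = 1 + 0.5\sin 2\pi s$ is an arbitrary deterministic, nondegenerate choice, so that $a(s) = \sigma(s)^2$. Applying $a^{-1/2}$ to the increments of $\mu$ should recover a process whose variance at time $1$ is exactly $1$, while $\mu(1)$ itself has variance $\int_0^1 \sigma^2(s)\,ds$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 5000, 1000
dt = 1.0 / n_steps
s = np.linspace(0.0, 1.0, n_steps + 1)[:-1]
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * s)   # deterministic, bounded away from 0

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
dmu = sigma * dW                             # d mu = sigma dW, so a(s) = sigma(s)^2

w1 = (dmu / sigma).sum(axis=1)               # w(1) = int_0^1 a^{-1/2}(s) d mu(s)
mu1 = dmu.sum(axis=1)

# Var w(1) should be 1 (Wiener), Var mu(1) should be int_0^1 sigma^2 ds
print(w1.var(), mu1.var(), (sigma**2).sum() * dt)
```

The first value estimates the variance of the reconstructed Wiener process at time $1$; the last two values should agree with each other.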
to simplify)
then
L(T)
=
lim bk(t)
T
1
bk(s)dw(s) =
bn
in
in L2
,
[nk,(n+l)k)
N- 1 =
lim
1 bn (w((n+l)k)-w(nk)) n= 1
where
in L2
Moreover
107
MARTINGALE PROBLEM
Since
bk
and
a
-112
+
all2
in
is bounded, it follows that
i;(~) = U(T)
-
Since
L3(0,T) 2
p(t)
Remark 1.2. When assumption (1.17) is not satisfied, there exists an $n$ dimensional Wiener process with respect to a bigger family $\mathcal{F}^{*t}$ (i.e. $\mathcal{F}^t \subset \mathcal{F}^{*t}$ $\forall t$) and a matrix process $\tilde\sigma(t)$, adapted to $\mathcal{F}^{*t}$, such that $\mu(t) = \int_0^t \tilde\sigma(s)\,dw(s)$. □
2. DEFINITION OF THE MARTINGALE PROBLEM
Let $\Omega^0 = C([0,\infty);R^n)$, which is provided with the topology of uniform convergence on compact sets. Then $\Omega^0$ is a metric space. An element $\omega \in \Omega^0$ is thus a function $\omega(t)$. The canonical process is defined by

$$x(t;\omega) = \omega(t) .$$

Let $0 \le s \le t$; we define

$$\mathcal{M}_s^t = \sigma\big[x(u),\ s \le u \le t\big] .$$

It can be proved that $\mathcal{M}_0^\infty$ coincides with the Borel $\sigma$-algebra on $\Omega^0$.

Let $a_{ij}(x,t)$, $b_i(x,t)$ be functions on $R^n \times [0,\infty)$ such that

(2.1)   $a_{ij} = a_{ji}$, $a = (a_{ij})$ is a non negative matrix,

(2.2)   $a_{ij}, b_i$ Borel and bounded.
We write

(2.3)   $A(t) = -\sum_{i,j} a_{ij}(x,t)\,\dfrac{\partial^2}{\partial x_i \partial x_j} - \sum_i b_i(x,t)\,\dfrac{\partial}{\partial x_i} .$

A probability $P^{xt}$ on $\Omega^0, \mathcal{M}_t^\infty$ is a solution of the martingale problem with respect to the operator $A(t)$, with initial conditions $(x,t)$, if one has

(2.4)   $P^{xt}\big(x(t) = x\big) = 1 ,$

(2.5)   $\varphi(x(s)) + \int_t^s A(\lambda)\varphi(x(\lambda))\,d\lambda$ is a $P^{xt}, \mathcal{M}_t^s$ martingale, $\forall \varphi \in \mathcal{D}(R^n)$.

Then $P^{xt}$ also satisfies

(2.6)   $\varphi(x(s),s) + \int_t^s \Big(-\dfrac{\partial\varphi}{\partial\lambda} + A(\lambda)\varphi\Big)(x(\lambda),\lambda)\,d\lambda$ is a $P^{xt}, \mathcal{M}_t^s$ martingale, $\forall \varphi \in \mathcal{D}(R^n \times (t,\infty))$,

(2.7)   $\varphi(x(s),s)\,\exp\Big[\int_t^s \dfrac{1}{\varphi}\Big(-\dfrac{\partial\varphi}{\partial\lambda} + A(\lambda)\varphi\Big)(x(\lambda),\lambda)\,d\lambda\Big]$ is a $P^{xt}, \mathcal{M}_t^s$ martingale, if $\varphi$ is uniformly positive.

To obtain (2.6) it is clear that we may assume $\varphi \in \mathcal{D}(R^n \times (t,\infty))$. For fixed $\lambda$, applying (2.5) to $z \to \varphi(z,\lambda)$, we get that

(2.10)   $\dfrac{\partial\varphi}{\partial\lambda}(x(s),\lambda) + \int_t^s A(\mu)\dfrac{\partial\varphi}{\partial\lambda}(x(\mu),\lambda)\,d\mu$ is a $P^{xt}$ martingale, for any $\lambda \in [t,\infty)$.

Therefore also, integrating (2.10) with respect to $\lambda$ between $t$ and $s$, we get (2.6).

Property (2.7) follows from the following fact, whose proof is left to the reader (cf. Stroock - Varadhan [1], or Bensoussan - Lions [1]). Let $\beta(s)$, $v(s)$, $\gamma(s)$ be scalar processes which are adapted to a family $\mathcal{G}^s$ and bounded. Assume that

$$\beta(s) + \int_t^s v(\lambda)\,d\lambda \quad \text{is a } \mathcal{G}^s \text{ martingale, with } v(s) = \gamma(s)\beta(s) .$$

Then $\beta(s)\,\exp\int_t^s \gamma(\lambda)\,d\lambda$ is also a $\mathcal{G}^s$ martingale. We apply this result with

$$\beta(s) = \varphi(x(s),s) , \quad v(s) = \Big(-\dfrac{\partial\varphi}{\partial s} + A(s)\varphi\Big)(x(s),s) , \quad \gamma(s) = \dfrac{v(s)}{\beta(s)} ,$$

and we obtain (2.7).
Lemma 2.2. Let $\theta \in R^n$; then

(2.11)   $\exp\Big[i\theta \cdot (x(s)-x) - \int_t^s i\theta \cdot b(x(\lambda),\lambda)\,d\lambda + \int_t^s \theta \cdot a(x(\lambda),\lambda)\theta\,d\lambda\Big]$

is a $P^{xt}, \mathcal{M}_t^s$ martingale.

Proof. We apply (2.7) with $\varphi(z) = \exp i\theta \cdot (z-x)$ and get that

$$\exp i\theta \cdot (x(s)-x)\ \exp\Big[-\int_t^s i\theta \cdot b(x(\lambda),\lambda)\,d\lambda + \int_t^s \theta \cdot a(x(\lambda),\lambda)\theta\,d\lambda\Big]$$

is a $P^{xt}$ martingale. Hence (2.11). □

Replace $\theta$ by $\beta\theta$ in (2.11), with $\beta$ scalar. Differentiate with respect to $\beta$ and let $\beta = 0$: we get the first part of (2.12), namely that $\theta \cdot (x(s)-x) - \int_t^s \theta \cdot b(x(\lambda),\lambda)\,d\lambda$ is a martingale. Differentiate twice with respect to $\beta$ and let $\beta = 0$: we obtain the increasing process $2\int_t^s \theta \cdot a(x(\lambda),\lambda)\theta\,d\lambda$. □

(1) (2.6) and (2.9) extend to complex valued functions provided $|\varphi| \ge \beta > 0$.
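The exponential martingale (2.11) has expectation $1$ for every $s$, which can be checked by Monte Carlo. The sketch below is not from the book; it takes constant coefficients $b$, $\sigma$ (so that $a = \frac{1}{2}\sigma\sigma^*$, consistent with the sign convention of $A(t)$) and arbitrary illustrative values of $\theta$ and $s$.

```python
import numpy as np

rng = np.random.default_rng(2)
b, sigma, theta, s = 0.3, 0.7, 1.5, 1.0
a = 0.5 * sigma**2          # a = (1/2) sigma sigma*, matching A(t)

# x(s) - x for constant coefficients: b s + sigma W_s
incr = b * s + sigma * rng.normal(0.0, np.sqrt(s), size=200000)

# M = exp[i theta (x(s)-x) - i theta b s + a theta^2 s], E M should be 1
M = np.exp(1j * theta * incr - 1j * theta * b * s + a * theta**2 * s)
print(M.mean())
```

The printed complex mean should be close to $1 + 0i$, confirming that the compensators $-i\theta b s$ and $+ a\theta^2 s$ exactly cancel the characteristic function of the Gaussian increment.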
Lemma 2.4. Assume (2.2) and that $a^{-1}(x,t)$ exists and is bounded; then there exists an $n$ dimensional Wiener process $w(s)$ (with $w(t) = 0$) such that

(2.14)   $x(s) = x + \int_t^s b(x(\lambda),\lambda)\,d\lambda + \int_t^s \sigma(x(\lambda),\lambda)\,dw(\lambda)$, $\forall s$, a.s.,

where $\sigma = (2a)^{1/2}$.

Proof. This is an immediate consequence of Lemma 2.3 and the representation Theorem 1.1. □

Remark 2.1. The matrix $\sigma$ can be computed by the following formula (cf. Kato [1])

(2.15)   $\sigma(x,t) = \dfrac{2}{\pi} \int_0^\infty \lambda^{-1/2}\,(2a(x,t)+\lambda)^{-1}\,a(x,t)\,d\lambda .$

This shows that $\sigma$ has the same regularity as $a$. □

Remark 2.2. Under the assumptions of Lemma 2.4, given $P^{xt}$, then $(\Omega^0, \mathcal{M}, P^{xt}, x(s), w(s))$ is a weak solution of the S.D.E. with initial conditions $(x,t)$. This is clear from the definition of the concept of weak solution (cf. (5.18), (5.19), (5.20), (5.21) Chapter I). The inverse is also true, since by Ito's formula

$$\varphi(x(s)) = \varphi(x) + \int_t^s D\varphi(x(\lambda)) \cdot \sigma(x(\lambda),\lambda)\,dw(\lambda) - \int_t^s A(\lambda)\varphi(x(\lambda))\,d\lambda .$$

Therefore if we have a weak solution of the S.D.E. on $(\Omega^0, \mathcal{M}_t^s)$, we have a solution of the martingale problem. □

Let us define $\tilde P^{xt}$ by

(2.16)   $\dfrac{d\tilde P^{xt}}{dP^{xt}}\Big|_{\mathcal{M}_t^s} = \exp\Big[-\int_t^s \sigma^{-1}b \cdot dw - \dfrac{1}{2}\int_t^s |\sigma^{-1}b|^2\,d\lambda\Big] .$
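The change of probability (2.16) relies on the exponential having expectation $1$, and on the fact that under the new measure the process acquires (or loses) the drift $b$. A minimal numerical sketch, assuming constant scalar $b$ and $\sigma$ (so $\sigma^{-1}b$ is the constant $\theta = b/\sigma$; the values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
b, sigma, T, n = 0.8, 1.2, 1.0, 400000
theta = b / sigma                          # sigma^{-1} b, constant in this sketch

W = rng.normal(0.0, np.sqrt(T), size=n)    # w(T) under the original measure
density = np.exp(theta * W - 0.5 * theta**2 * T)

print(density.mean())                      # should be ~ 1: a true density
print((density * sigma * W).mean())        # should be ~ b T: drift under the new measure
```

The first line checks normalization of the Girsanov density; the second checks that reweighting $\sigma w(T)$ by the density produces the mean $bT$, i.e. the drift appears under the changed measure.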
Then we have

Lemma 2.5. Consider $\tilde P^{xt}$ defined by (2.16); then

(2.17)   $\tilde w(s) = w(s) + \int_t^s \sigma^{-1} b(x(\lambda),\lambda)\,d\lambda$

is a $\tilde P^{xt}, \mathcal{M}_t^s$ Wiener process, and the process $x(s)$ satisfies

(2.18)   $x(s) = x + \int_t^s \sigma(x(\lambda),\lambda)\,d\tilde w(\lambda) .$

Proof. It is an immediate consequence of Theorem 5.1 and Corollary 5.1. □

We now assume

(2.19)   $\sigma^{-1}(x,t)$ bounded,

(2.20)   $a_{ij} \in C^{\alpha,\alpha/2}(R^n \times [0,T])$, $0 < \alpha < 1$.

Let $\mathcal{O}$ be a bounded smooth domain of $R^n$. If $x(s)$ is the canonical process, we define

(2.21)   $\tau_t$ = the first exit time of $x(s)$ from $\bar{\mathcal{O}}$.
Theorem 2.1. We assume (2.2), (2.19), (2.20). Let $P^{xt}$ be a solution of the martingale problem and let $f \in L^p$, $p > n+2$. Then we have the estimate

(2.23)   $\Big|E^{xt} \int_t^{T \wedge \tau_t} f(x(s),s)\,ds\Big| \le C\,|f|_{L^p}$

where $C$ does not depend on $f$.

Proof. Clearly it is enough to prove (2.23) with $f$ smooth. Assume $f$ smooth. We consider the parabolic problem

(2.24)   $-\dfrac{\partial u}{\partial t} - \sum_{i,j} a_{ij}\,\dfrac{\partial^2 u}{\partial x_i \partial x_j} = f$, $u|_\Sigma = 0$, $u(x,T) = 0$.
According to § 3.3.3 Chapter I, by virtue of (2.20), we can assert that there exists one and only one solution of (2.24) such that

(2.25)   $u \in C^{2+\alpha,1+\alpha/2}(\bar Q) .$

Consider next a function $v$ which is $C^{2,1}(R^n \times [0,T])$ and such that $v = u$ on $\bar Q$. Apply Ito's formula to $v(x(\lambda),\lambda)$ and the process (2.18). Take $s = T \wedge \tau_t$, and note that for $\lambda \le T \wedge \tau_t$, $v(x(\lambda),\lambda) = u(x(\lambda),\lambda)$, as well as its derivatives; hence, using (2.24), we get

(2.26)   $u(x,t) = \tilde E^{xt} \int_t^{T \wedge \tau_t} f(x(s),s)\,ds .$
Now consider problem (2.24) with $f \in L^r(Q)$, $2 \le r < \infty$. Then from (3.44) Chapter II we can assert that

$$u \in W^{2,1,r}(Q) .$$

Now for $r > \dfrac{n}{2} + 1$,

(2.27)   $W^{2,1,r}(Q) \subset C^0(\bar Q)$ with continuous injection.

This implies that

(2.29)   $\Big|\tilde E^{xt} \int_t^{T \wedge \tau_t} f(x(s),s)\,ds\Big| \le C\,|f|_{L^r(Q)} .$

But when $f$ is smooth, (2.26) applies, hence (2.29) holds for $f$ smooth. Relation (2.29) can then be extended to all functions of $L^r$.

Now we can express $P^{xt}$ as

(2.30)   $\dfrac{dP^{xt}}{d\tilde P^{xt}}\Big|_{\mathcal{M}_t^s} = \exp\Big[\int_t^s \sigma^{-1}b \cdot d\tilde w - \dfrac{1}{2}\int_t^s |\sigma^{-1}b|^2\,d\lambda\Big] .$

Indeed let $\xi_s$ be $\mathcal{M}_t^s$ measurable and bounded. To prove (2.30) we have to prove the formula

(2.31)   $E^{xt}\,\xi_s = \tilde E^{xt}\,\xi_s\,\exp\Big[\int_t^s \sigma^{-1}b \cdot d\tilde w - \dfrac{1}{2}\int_t^s |\sigma^{-1}b|^2\,d\lambda\Big] .$
By (2.16) the right hand side of (2.31) is equal to

(2.32)   $E^{xt}\,\xi_s\,\exp\Big[\int_t^s \sigma^{-1}b \cdot d\tilde w - \dfrac{1}{2}\int_t^s |\sigma^{-1}b|^2\,d\lambda\Big]\,\exp\Big[-\int_t^s \sigma^{-1}b \cdot dw - \dfrac{1}{2}\int_t^s |\sigma^{-1}b|^2\,d\lambda\Big] .$

But from (2.17) we have

(2.33)   $\int_t^s \sigma^{-1}b \cdot d\tilde w = \int_t^s \sigma^{-1}b \cdot dw + \int_t^s |\sigma^{-1}b|^2\,d\lambda$

where the left integral is the stochastic integral with respect to $\tilde w$ when $\Omega^0, \mathcal{M}$ is provided with $\tilde P^{xt}$, and the first right integral is the stochastic integral when $\Omega^0, \mathcal{M}$ is provided with $P^{xt}$. Indeed proceed by approximation and use the fact that $P^{xt}$, $\tilde P^{xt}$ are absolutely continuous with respect to each other. Using (2.33) in (2.32) we see that expression (2.32) coincides with $E^{xt}\,\xi_s$, hence (2.30). □

Next we have, assuming $f$ smooth and bounded,

(2.34)   $E^{xt} \int_t^{T \wedge \tau_t} f(x(s),s)\,ds = \tilde E^{xt}\Big[\Big(\int_t^{T \wedge \tau_t} f(x(s),s)\,ds\Big)\chi_T\Big]$

where

$$\chi_T = \exp\Big[\int_t^T \sigma^{-1}b \cdot d\tilde w - \dfrac{1}{2}\int_t^T |\sigma^{-1}b|^2\,ds\Big] ,$$

hence

$$\Big|E^{xt} \int_t^{T \wedge \tau_t} f(x(s),s)\,ds\Big| \le \Big[\tilde E^{xt}\Big(\int_t^{T \wedge \tau_t} f\,ds\Big)^2\Big]^{1/2}\,\big(\tilde E\,\chi_T^2\big)^{1/2} .$$
Next

$$\Big[\tilde E^{xt}\Big(\int_t^{T \wedge \tau_t} f\,ds\Big)^2\Big]^{1/2} \le \Big[T\,\tilde E^{xt}\int_t^{T \wedge \tau_t} f^2\,ds\Big]^{1/2} .$$

Gathering results we see that

$$\Big|E^{xt}\int_t^{T \wedge \tau_t} f\,ds\Big| \le C\,\Big(\tilde E^{xt}\int_t^{T \wedge \tau_t} f^2\,ds\Big)^{1/2}$$

and from (2.29)

$$\le C\,|f^2|_{L^{p/2}}^{1/2} = C\,|f|_{L^p} , \quad \text{since } p > n+2 \text{ means } \dfrac{p}{2} > \dfrac{n}{2} + 1 .$$

This completes the proof of (2.23). □
Corollary 2.1. Let $\Phi$ be continuous and bounded on $R^n \times [0,T]$, such that

$$-\dfrac{\partial\Phi}{\partial t} + A(t)\Phi \in L^p(0,T;L^p_{loc}(R^n)) , \quad \text{with } p > n+2 .$$

Then if $P^{xt}$ is a solution of the martingale problem, we have

(2.35)   $E^{xt}\,\Phi\big(x(T \wedge \tau_t), T \wedge \tau_t\big) = \Phi(x,t) + E^{xt}\int_t^{T \wedge \tau_t} \Big(\dfrac{\partial\Phi}{\partial s} - A(s)\Phi\Big)(x(s),s)\,ds$

where $\tau_t$ is defined by (2.21).

Proof. We can find a sequence $\Phi_n \in C^\infty(\bar{\mathcal{O}} \times [0,T])$ such that

$$\Phi_n \to \Phi \ \text{in } C^0(\bar Q) , \quad -\dfrac{\partial\Phi_n}{\partial t} + A(t)\Phi_n \to -\dfrac{\partial\Phi}{\partial t} + A(t)\Phi \ \text{in } L^p(Q) , \quad Q = \mathcal{O} \times [0,T] .$$

We can write Ito's formula for $\Phi_n$ and the process defined by (2.14). Using Theorem 2.1, we obtain (2.35). □
3. EXISTENCE AND UNIQUENESS OF THE SOLUTION OF THE MARTINGALE PROBLEM
We assume here

(3.1)   $a_{ij} = a_{ji}$, $\sum_{i,j} a_{ij}\xi_i\xi_j \ge \beta|\xi|^2$, $\forall \xi \in R^n$, $\beta > 0$,

(3.2)   $a_{ij}, b_i$ Borel bounded.

Our objective is to prove the following

Theorem 3.1. Under the assumptions (3.1), (3.2) the solution of the martingale problem is unique. □

We will use several lemmas.

Lemma 3.1. It is sufficient to prove uniqueness when $b_i = 0$.

Proof. Considering the change of probability defined by (2.16), $\tilde P^{xt}$ is a solution of the martingale problem with $b = 0$. Since $P^{xt}$ is defined unambiguously from $\tilde P^{xt}$ by formula (2.30), the uniqueness of $\tilde P^{xt}$ implies the uniqueness of $P^{xt}$. □

We assume from now on that $b = 0$.
(3.3)   $A(t) = -\sum_{i,j} a_{ij}(x,t)\,\dfrac{\partial^2}{\partial x_i \partial x_j} .$

Let $\varphi \in C^{2+\alpha}(R^n)$; consider the Cauchy problem

(3.4)   $-\dfrac{\partial u}{\partial t} + A(t)u = 0$, $u(x,T) = \varphi(x)$,

which has one and only one solution by Theorem 3.8, Chapter II.

Lemma 3.2. Let $P^{xt}$ be a solution of the martingale problem corresponding to the operator $A(t)$ given by (3.3); then we have

(3.5)   $u(x,t) = E^{xt}\,\varphi(x(T)) .$

Proof. Property (3.5) is an immediate consequence of Ito's formula applied to the function $u$ and to the process $x(s)$, which is possible by virtue of the regularity of $u$. □

Lemma 3.3. Let

(3.6)   $P(x,t;T,B) = P^{xt}\big(x(T) \in B\big)$, $B$ Borel subset of $R^n$, $t \le T$;

then the function $P(x,t;T,B)$ does not depend on the particular solution of the martingale problem.
Proof. On a metric space provided with its Borel $\sigma$-algebra, a probability is uniquely defined by the values of $E\varphi$ for $\varphi$ continuous bounded (see for instance J. Neveu [1], p. 60). It is enough to check that $E^{xt}\,\varphi(x(T))$ does not depend on the particular solution $P^{xt}$, $\forall \varphi \in C^0(R^n)$. From Lemma 3.2, the result is proved for $\varphi \in C^{2+\alpha}(R^n)$. Since $\forall \varphi \in C^0(R^n)$ there exists a family $\varphi_k$ of smooth functions such that $\|\varphi_k\|_{C^0} \le \|\varphi\|_{C^0}$ and $\varphi_k \to \varphi$ uniformly on compact sets, we clearly have

(3.7)   $E^{xt}\,\varphi_k(x(T)) \to E^{xt}\,\varphi(x(T)) .$

If now $\tilde P^{xt}$ is another solution of the martingale problem, from the equality $E^{xt}\,\varphi_k(x(T)) = \tilde E^{xt}\,\varphi_k(x(T))$ and from (3.7) it follows that

$$E^{xt}\,\varphi(x(T)) = \tilde E^{xt}\,\varphi(x(T)) ,$$

hence the desired result. □

Lemma 3.4. The function $P(x,t;T,B)$ is a Markov transition function, i.e. it satisfies

(3.8)   $x \to P(x,t;T,B)$ is Borel,

(3.9)   $B \to P(x,t;T,B)$ is a probability on $R^n$,

(3.10)   $P(x,T;T,B) = \chi_B(x)$,

(3.11)   $P(x,t;T,B) = \int_{R^n} P(x,t;s,dy)\,P(y,s;T,B)$, $t \le s \le T$.

Proof. (3.9) and (3.10) are obvious from the representation formula (3.6). Now if $\varphi \in C^0(R^n)$, then $x \to E^{xt}\,\varphi(x(T))$ is Borel for $t < T$, since we have the approximation result (3.7) and since $E^{xt}\,\varphi_k(x(T)) = u_k(x,t)$ is a $C^{2+\alpha,1+\alpha/2}$ function. Therefore (3.8) is verified if $B$ is a bounded ball, hence also for $B$ compact and, from Neveu [1], p. 61, for $B$ Borel.

It remains to prove (3.11), or, which is sufficient,

(3.12)   $E^{xt}\,\varphi(x(T)) = \int_{R^n} P(x,t;s,dy)\,E^{ys}\,\varphi(x(T))$, for any $\varphi \in C^0(R^n)$.

As for Lemma 3.3, it is sufficient to check (3.12) for functions $\varphi \in C^{2+\alpha}(R^n)$. But then, considering the function $u(x,t)$ solution of (3.4) and the function $v(x,t;s)$, $t \le s$, defined by

$$-\dfrac{\partial v}{\partial t} - \sum_{i,j} a_{ij}(x,t)\,\dfrac{\partial^2 v}{\partial x_i \partial x_j} = 0 , \quad t \le s , \quad v(x,s;s) = u(x,s) ,$$

then we have

$$v(x,t;s) = u(x,t) ,$$

which is obvious by the uniqueness of the solution of (3.4). □
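The Chapman–Kolmogorov relation (3.11) can be verified numerically in a case where the transition density is explicit. The sketch below is illustrative: it takes the (hypothetical) one dimensional diffusion $dx = b\,dt + \sigma\,dw$ with constant coefficients, whose transition kernel is Gaussian, and checks on a grid that composing the kernels over $[t,s]$ and $[s,T]$ reproduces the kernel over $[t,T]$.

```python
import numpy as np

def p(x, y, dt, b=0.2, sig=1.0):
    # Gaussian transition density of dx = b dt + sig dw over a time step dt
    return np.exp(-(y - x - b * dt)**2 / (2 * sig**2 * dt)) \
        / np.sqrt(2 * np.pi * sig**2 * dt)

y = np.linspace(-10.0, 10.0, 2001)
dy = y[1] - y[0]
t, s, T, x = 0.0, 0.4, 1.0, 0.3

direct = p(x, y, T - t)                    # p(x, t; z, T) on the grid z = y
K1 = p(x, y, s - t)                        # p(x, t; y, s)
K2 = p(y[:, None], y[None, :], T - s)      # K2[i, j] = p(y_i, s; y_j, T)
two_step = (K1[:, None] * K2).sum(axis=0) * dy

print(np.abs(two_step - direct).max())     # should be ~ 0: (3.11) holds
```

Because the integrand is smooth and decays rapidly, the rectangle rule on this grid resolves the integral over the intermediate state essentially to machine precision.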
Lemma 3.5. If $P^{xt}$ is any solution of the martingale problem, then

(3.13)   $P^{xt}\big[x(T) \in B \mid \mathcal{M}_t^s\big] = P(x(s),s;T,B)$ a.s., $\forall t \le s < T .$

Proof. It is sufficient to verify that for $\varphi \in C^0(R^n)$ and $\xi_s$ $\mathcal{M}_t^s$ measurable and bounded, we have

(3.14)   $E^{xt}\,\varphi(x(T))\,\xi_s = E^{xt}\,\xi_s \int P(x(s),s;T,dy)\,\varphi(y) .$

It is also enough to prove (3.14) with $\varphi \in C^{2+\alpha}$. In that case (3.14) reads

(3.15)   $E^{xt}\,\varphi(x(T))\,\xi_s = E^{xt}\,\xi_s\,u(x(s),s) .$

But from Ito's formula and (3.4), $u(x(s),s)$ is a $P^{xt}, \mathcal{M}_t^s$ martingale, which easily implies (3.15). □
Proof of Theorem 3.1. From formula (3.13), recalling that the function $P(x,t;T,B)$ does not depend on the particular solution of the martingale problem, we deduce easily that

(3.16)   $P^{xt}\big[x(t_1) \in B_1, \ldots, x(t_n) \in B_n\big] = \tilde P^{xt}\big[x(t_1) \in B_1, \ldots, x(t_n) \in B_n\big]$

$\forall t_1, \ldots, t_n \ge t$, $B_1, \ldots, B_n$ Borel sets, where $P^{xt}, \tilde P^{xt}$ are two solutions. From Theorem 1.2, Chapter I, it follows that $P^{xt} = \tilde P^{xt}$. □
Remark 3.1. We have proved in addition that $x(s), \mathcal{M}_t^s, P^{xt}$ is a Markov process. □

We assume here that

(3.17)   $a_{ij}$ measurable bounded, $a_{ij} = a_{ji}$; $a^{-1}$ bounded,

(3.18)   $b_i$ measurable bounded.

We recall that

$$A(t) = -\sum_{i,j} a_{ij}(x,t)\,\dfrac{\partial^2}{\partial x_i \partial x_j} - \sum_i b_i\,\dfrac{\partial}{\partial x_i} .$$

Theorem 3.2. Under assumptions (3.17), (3.18) there exists a solution of the martingale problem.

Proof. Assume first that

(3.19)   $a = \dfrac{1}{2}\,\sigma\sigma^*$ with $\sigma$ Lipschitz.

Then according to Theorem 5.2, Chapter I, there exists a weak solution of the S.D.E.

$$dy = b(y,t)\,dt + \sigma(y,t)\,dw , \quad y(t) = x .$$

Hence on some adequate $\Omega, \mathcal{A}, \mathcal{F}^t$ there exist $P$, an $n$ dimensional Wiener process $w(t)$, and $y(t)$ such that the S.D.E. holds.
Consider the map

(3.22)   $F : \Omega \to \Omega^0 , \quad \omega \to y(\cdot\,;\omega)$

and let $P^{xt}$ be the measure on $\Omega^0, \mathcal{M}_t^\infty$, image of $P$ by $F$. Then $P^{xt}$ is a solution of the martingale problem. Indeed let $\varphi \in \mathcal{D}(R^n)$ and let $\xi_s$ be a measurable function of $x(s_1), \ldots, x(s_k)$; write

(3.23)   $\xi_s = \xi_s\big(y(s_1), \ldots, y(s_k)\big) , \quad t \le s_1 \le \ldots \le s_k \le s .$

From Ito's formula we have

$$E\Big[\varphi(y(s_2)) - \varphi(y(s_1)) + \int_{s_1}^{s_2} A(\lambda)\varphi(y(\lambda))\,d\lambda\Big]\,\xi_{s_1} = 0 ,$$

hence, taking the image measure, and since the $\xi_s$ generate $\mathcal{M}_t^s$, it follows that $P^{xt}$ is a solution of the martingale problem.

Assume now that we have (3.17); then consider a sequence $a^n$ such that $a^n$ is Lipschitz in $x$, uniformly with respect to $t$, satisfies (3.17), and

(3.26)   $\sup_{x,t}\,|a^n(x,t) - a(x,t)| \to 0$ as $n \to \infty .$

Such a sequence exists. Therefore there exists a solution $P^n$ of the martingale problem corresponding to the pair $(0,a^n)$.
This implies that the sequence $P^n$ remains in a relatively compact subset of the set of probability measures on $\Omega^0, \mathcal{M}$, provided with the weak topology (see Parthasarathy [1]). Let us extract a subsequence, still denoted by $P^n$, such that

(3.28)   $P^n \to P$ weakly.

Consider a continuous bounded function $\xi(x_1, \ldots, x_k)$; writing $t \le s_1 \le \ldots \le s_k \le s$, then we have, for $\xi_s = \xi(x(s_1), \ldots, x(s_k))$ and $\varphi \in \mathcal{D}(R^n)$,

(3.29)   $E^n\Big[\varphi(x(s_2)) - \varphi(x(s_1)) + \int_{s_1}^{s_2} A^n(\lambda)\varphi(x(\lambda))\,d\lambda\Big]\,\xi_{s_1} = 0 .$

Using (3.26) and (3.28), we can let $n$ tend to $+\infty$ in (3.29), and obtain that $P$ is a solution of the martingale problem relative to the pair $(0,a)$. Use the Girsanov transformation to obtain a solution of the martingale problem relative to the pair $(b,a)$. □
Remark 3.2. We can get rid of the assumption that $a^{-1}$ exists and is bounded, and still obtain an existence result, provided that we replace (3.18) by the stronger assumption

(3.30)   $\sup_{|x-y| \le \delta}\,|b_i(x,t) - b_i(y,t)| \le \rho(\delta) \to 0$ as $\delta \to 0 .$

Indeed replace $a$ by $a + \varepsilon I$; then according to Theorem 3.2, there exists a solution $P^\varepsilon$ relative to the pair $b, a+\varepsilon I$. We have again that $P^\varepsilon$ remains in a relatively weakly compact subset of the set of measures. Now we have, analogously to (3.29),

(3.31)   $E^\varepsilon\Big[\varphi(x(s_2)) - \varphi(x(s_1)) + \int_{s_1}^{s_2} A^\varepsilon(\lambda)\varphi(x(\lambda))\,d\lambda\Big]\,\xi_{s_1} = 0 .$

By virtue of (3.30) we can let $\varepsilon \to 0$ in (3.31) and obtain the desired result. □
4. INTERPRETATION OF THE SOLUTION OF P.D.E.
Consider here $a_{ij}(x)$, $b_i(x)$ such that

(4.1)   $a_{ij} = a_{ji}$, $\sum a_{ij}\xi_i\xi_j \ge \beta|\xi|^2$, $\beta > 0$, $a_{ij}$ Borel bounded on $R^n$,

(4.2)   $b_i$ Borel bounded.

Define

(4.3)   $A = -\sum_{i,j} a_{ij}\,\dfrac{\partial^2}{\partial x_i \partial x_j} - \sum_i b_i\,\dfrac{\partial}{\partial x_i} .$

Let also

(4.4)   $a_0$ Borel bounded, $a_0 \ge 0$.

According to Theorems 3.1 and 3.2, there exists one and only one solution of the martingale problem corresponding to the initial condition $x$ at time $0$. More precisely, on $\Omega^0, \mathcal{M}^0, \mathcal{M}_0^t$ there exists one and only one probability measure $P^x$ on $(\Omega^0, \mathcal{M}^0)$ such that

(4.5)   $P^x\big[x(0) = x\big] = 1 ,$

(4.6)   $\varphi(x(t)) - \varphi(x) + \int_0^t A\varphi(x(s))\,ds$ is a $(P^x, \mathcal{M}_0^t)$ martingale, $\forall \varphi \in \mathcal{D}(R^n)$.

Let $\mathcal{O}$ be a smooth bounded domain of $R^n$, let

(4.7)   $f \in L^p(\mathcal{O})$, $p > \dfrac{n}{2}$,

and consider the solution of

(4.8)   $Au + a_0 u = f$, $u|_\Gamma = 0$, $u \in W^{2,p}(\mathcal{O})$.

By Sobolev's inclusion theorems, we have

(4.9)   $u \in C^0(\bar{\mathcal{O}}) .$
Let

(4.10)   $\tau = \inf\{t \ge 0 \mid x(t) \notin \mathcal{O}\}$

be the exit time of the process $x(t)$ from $\mathcal{O}$. We have

Theorem 4.1. Assume (4.1), (4.2), (4.7). Then we have

(4.11)   $u(x) = E^x \int_0^\tau f(x(t))\Big(\exp - \int_0^t a_0(x(s))\,ds\Big)\,dt .$
Proof. Assume first $f \in B(\bar{\mathcal{O}})$ and $f \ge 0$; then $u \in W^{2,p}(\mathcal{O})$ for any $p$, $2 \le p < \infty$. We can easily extend formula (2.35) to the function $u(x)\,\exp - \int_0^t a_0(x(s))\,ds$ and the process $x(t)$.
We obtain

(4.12)   $u(x) = E^x\,u\big(x(T \wedge \tau)\big)\exp\Big(-\int_0^{T \wedge \tau} a_0(x(s))\,ds\Big) + E^x \int_0^{T \wedge \tau} f(x(s))\Big(\exp - \int_0^s a_0(x(\lambda))\,d\lambda\Big)\,ds .$

We let $T \to +\infty$ in (4.12). For $T > \tau$ we have $u(x(T \wedge \tau)) = u(x(\tau)) = 0$, since $x(\tau) \in \Gamma$; hence, since $u$ and $a_0$ are bounded,

$$E^x\,u\big(x(T \wedge \tau)\big)\exp\Big(-\int_0^{T \wedge \tau} a_0(x(s))\,ds\Big) \to 0 .$$

Formula (4.12) then shows that, by the monotone convergence theorem, (4.11) holds. Moreover

$$|u(x)| \le C\,|f|_{L^p} .$$

Thus formula (4.11) extends to $f \in L^p(\mathcal{O})$, $p > \dfrac{n}{2}$. □
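Formula (4.11) can be tested in the simplest possible case. The sketch below is not from the book: it takes (hypothetically) $n = 1$, $A = -\frac{1}{2}\frac{d^2}{dx^2}$, $b = 0$, $a_0 = 0$, $f = 1$, $\mathcal{O} = (0,1)$, so that $u(x) = E^x\,\tau$ and the explicit solution of (4.8) is $u(x) = x(1-x)$. The exit time of a simulated Brownian motion is compared to this value; the small positive bias of order $\sqrt{dt}$ comes from monitoring the path only at the grid times.

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, dt, x0 = 4000, 2.5e-4, 0.3
sqdt = np.sqrt(dt)

x = np.full(n_paths, x0)
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for _ in range(int(5.0 / dt)):         # horizon cap; exit happens a.s. much earlier
    idx = np.flatnonzero(alive)
    if idx.size == 0:
        break
    x[idx] += sqdt * rng.normal(size=idx.size)   # Brownian step
    tau[idx] += dt
    alive[idx] = (x[idx] > 0.0) & (x[idx] < 1.0)

print(tau.mean())   # close to u(x0) = x0 (1 - x0) = 0.21
```

This is the probabilistic reading of the Dirichlet problem: the PDE solution evaluated at the starting point equals the expected discounted running cost up to the exit time.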
Let us now give the interpretation of parabolic equations. Let us assume

(4.13)   $a_{ij} \in C^{\alpha,\alpha/2}(R^n \times [0,T])$, $a_{ij} = a_{ji}$, $\sum a_{ij}\xi_i\xi_j \ge \beta|\xi|^2$, $\forall \xi \in R^n$, $\beta > 0$,

(4.14)   $b_i \in L^\infty(R^n \times (0,T))$,

(4.15)   $a_0 \in L^\infty(R^n \times (0,T))$, $a_0 \ge 0$.
Let

(4.16)   $f \in L^p(\mathcal{O} \times (0,T))$, $p > \dfrac{n}{2} + 1 .$

From Theorem 3.4, Chapter II, there exists one and only one solution of

(4.17)   $-\dfrac{\partial u}{\partial t} + A(t)u + a_0 u = f$, $u|_\Sigma = 0$, $u(x,T) = \bar u(x)$,

where

(4.19)   $A(t) = -\sum_{i,j} a_{ij}\,\dfrac{\partial^2}{\partial x_i \partial x_j} - \sum_i b_i\,\dfrac{\partial}{\partial x_i} .$

According to Theorems 3.1 and 3.2, there exists one and only one solution $P^{xt}$ of the martingale problem relatively to the operator $A(t)$. Let

(4.20)   $\tau_t = \inf\{s > t \mid x(s) \notin \mathcal{O}\}$

and note that, since $p > \dfrac{n}{2} + 1$, $u \in C^0(\bar Q)$, $Q = \mathcal{O} \times ]0,T[$.

Theorem 4.2. Assume (4.13), (4.14), (4.15), (4.16). Then we have

(4.21)   $u(x,t) = E^{xt}\Big[\int_t^{T \wedge \tau_t} f(x(s),s)\Big(\exp - \int_t^s a_0(x(\lambda),\lambda)\,d\lambda\Big)\,ds + \bar u(x(T))\,\chi_{T \le \tau_t}\exp\Big(-\int_t^T a_0\,d\lambda\Big)\Big] .$

Proof. Assume first $f \in B(\bar Q)$, $\bar u \in B(\bar{\mathcal{O}})$. Then $u \in W^{2,1,p}(Q)$ for any $p$, $2 \le p < \infty$. Applying formula (2.35), extended as explained in Theorem 4.1, we get

$$u(x,t) = E^{xt}\,u\big(x(T \wedge \tau_t), T \wedge \tau_t\big)\exp\Big(-\int_t^{T \wedge \tau_t} a_0\,ds\Big) + E^{xt}\int_t^{T \wedge \tau_t} f(x(s),s)\Big(\exp - \int_t^s a_0\,d\lambda\Big)\,ds ,$$

from which we easily deduce (4.21).
Then, taking first $\bar u = 0$, we extend (4.21) to functions $f \in L^p(Q)$, $p > \dfrac{n}{2} + 1$: consider a sequence $f_n \in B(\bar Q)$, $f_n \to f$ in $L^p(Q)$; applying (4.21) with $f_n$ and letting $n \to +\infty$, we obtain the desired result. If $\bar u \in W^{2,p} \cap W_0^{1,p}$ (hence in $C^0(\bar{\mathcal{O}})$), we consider a sequence $\bar u_n \in B(\bar{\mathcal{O}})$, $\bar u_n \to \bar u$, and conclude similarly. □

Remark 4.1. We see that we have the estimate

(4.22)   $\Big|E^{xt} \int_t^{T \wedge \tau_t} f(x(s),s)\,ds\Big| \le C\,|f|_{L^p(Q)}$, $p > \dfrac{n}{2} + 1$,

which improves the result of Theorem 2.1. □

Remark 4.2. We also have the corresponding estimate for the terminal term. □

Assume (4.1), (4.2) and consider the forward Cauchy problem

(5.1)   $\dfrac{\partial u}{\partial t} + Au = 0$, $u(x,0) = \varphi(x)$
with

(5.1 bis)   $\varphi$ Borel bounded.

Setting $v(x,t) = u(x,T-t)$, then $v$ satisfies

$$-\dfrac{\partial v}{\partial t} + Av = 0 , \quad v(x,T) = \varphi(x) ,$$

and we can assert that there exists one and only one solution of (5.1) such that (5.2) holds. Since

$$v(x,t) = E^{xt}\,\varphi(x(T))$$

and, noting

(5.3)   $P^{x,0} = P^x ,$

we can assert that

$$u(x,t) = E^x\,\varphi(x(t)) , \quad \forall t \ge 0 .$$

We write

(5.4)   $\Phi(t)\varphi(x) = E^x\,\varphi(x(t)) ,$

which defines a family $\Phi(t)$ of operators on $B$, the space of Borel bounded functions on $R^n$, provided with the sup norm; it is easy to check that $\Phi(t)$ satisfies the following properties:
(5.5)   $\Phi(t+s) = \Phi(t)\Phi(s)$, $\Phi(0) = I$, $\|\Phi(t)\varphi\| \le \|\varphi\|$, $\Phi(t)\varphi \ge 0$ if $\varphi \ge 0$.

Hence $\Phi(t)$ is a semi group of contractions on $B$, which preserves positivity.

Denote by $C$ the space of uniformly continuous bounded functions on $R^n$, which is a subspace of $B$. Then from 5.2 we know that $u$ is continuous in $x,t$ for $t > 0$ if $\varphi \in B$. Let us check that

(5.6)   $\Phi(t) : C \to C$,

if we assume

(5.7)   $a = \dfrac{1}{2}\sigma\sigma^*$, with $b, \sigma$ Lipschitz.

In that case, we can consider the S.D.E.

(5.8)   $dy = b(y)\,dt + \sigma(y)\,dw$, $y(0) = x$,

on a system $(\Omega, \mathcal{A}, P, \mathcal{F}^t, w(t))$. Since $b, \sigma$ are Lipschitz, according to Theorem 4.1 of Chapter I there is one and only one solution of (5.8). Denote the solution $y_x(t)$. Then clearly we have

$$\Phi(t)\varphi(x) = E\,\varphi(y_x(t)) .$$

Let $\rho(\delta)$ be the modulus of continuity of $\varphi$; it tends to $0$ as $\delta \to 0$ and is an increasing function of $\delta$. We have

$$|E\,\varphi(y_x(t)) - E\,\varphi(y_{x'}(t))| \le \rho(\delta) + 2\|\varphi\|\,P\big(|y_x(t) - y_{x'}(t)| \ge \delta\big) \le \rho(\delta) + \dfrac{C(t)}{\delta^2}\,|x-x'|^2$$

for any $\delta > 0$. From this one easily deduces that $\Phi(t)\varphi(x)$ is uniformly continuous in $x$, for $t$ fixed, $t \ge 0$. Let us also check that

(5.10)   $\Phi(t)\varphi \to \varphi$ in $C$ as $t \to 0$, $\forall \varphi \in C$.

Indeed $|\Phi(t)\varphi(x) - \varphi(x)| \le E\,\rho\big(|y_x(t) - x|\big)$, and $E\,|y_x(t) - x|^2 \to 0$ as $t \to 0$, from which we obtain (5.10).

The infinitesimal generator of $\Phi(t)$ is defined by

(5.11)   $\mathcal{A}\varphi = \lim_{t \downarrow 0} \dfrac{\Phi(t)\varphi - \varphi}{t} .$
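The semigroup properties (5.5) can be seen concretely on a finite dimensional model. The sketch below is illustrative: it discretizes the (hypothetical) operator $A = -\frac{1}{2}\frac{d^2}{dx^2}$ on $(0,1)$ with Dirichlet boundary conditions and forms $\Phi(t) = e^{-tA}$ by symmetric eigendecomposition, then checks the semigroup identity, positivity preservation, and the sup-norm contraction.

```python
import numpy as np

# Discretize A = -(1/2) d^2/dx^2 on (0,1), Dirichlet boundary conditions
n = 80
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 1.0 / h**2))
     + np.diag(np.full(n - 1, -0.5 / h**2), 1)
     + np.diag(np.full(n - 1, -0.5 / h**2), -1))

lam, Q = np.linalg.eigh(A)                  # A symmetric: A = Q diag(lam) Q^T

def Phi(t):
    # Phi(t) = exp(-t A), assembled from the spectral decomposition
    return (Q * np.exp(-t * lam)) @ Q.T

P1, P2, P3 = Phi(0.01), Phi(0.02), Phi(0.03)
phi = np.random.default_rng(7).uniform(0.0, 1.0, size=n)

print(np.abs(P1 @ P2 - P3).max())           # semigroup: Phi(t)Phi(s) = Phi(t+s)
print((P3 @ phi).min())                     # >= 0 (up to rounding): positivity
print((P3 @ phi).max(), phi.max())          # contraction in the sup norm
```

The matrix $e^{-tA}$ is the discrete analogue of the transition kernel $P(x,t,B)$, killed at the boundary, which is why its row sums do not exceed $1$.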
The domain of $\mathcal{A}$ is the set of functions $\varphi \in B$ such that the limit (5.11) exists in the sense of the convergence in $B$. Assume $\varphi \in C_b^{2+\alpha}$; then from Ito's formula

$$E^x\,\varphi(x(t)) = \varphi(x) - E^x \int_0^t A\varphi(x(s))\,ds .$$

But when (4.1), (5.7) are satisfied and $\varphi \in C_b^{2+\alpha}$, then $A\varphi \in C_b^\alpha$, hence by the above reasoning we have

$$\dfrac{1}{t}\,E^x \int_0^t A\varphi(x(s))\,ds \to A\varphi(x) \quad \text{in } B .$$

Therefore

$$\dfrac{\Phi(t)\varphi - \varphi}{t} \to -A\varphi \quad \text{in } B .$$

Thus we have proved that

(5.13)   $C_b^{2+\alpha} \subset D(\mathcal{A})$, and $\mathcal{A}\varphi = -A\varphi$.

Let $\mathcal{O}$ be a smooth bounded domain of $R^n$. For $\varphi \in B(\bar{\mathcal{O}})$ we define

(5.14)   $\bar\Phi(t)\varphi(x) = E^x\,\varphi(x(t \wedge \tau))$

where $\tau$ is the exit time of $x(t)$ from $\mathcal{O}$. Assuming $\varphi$ smooth, we may consider the non homogeneous Dirichlet problem

(5.15)   $\dfrac{\partial u}{\partial t} + Au = 0$, $u|_\Sigma = \varphi$, $u(x,0) = \varphi(x)$.
Since $u - \varphi = v$ satisfies

$$\dfrac{\partial v}{\partial t} + Av = -A\varphi , \quad v|_\Sigma = 0 , \quad v(x,0) = 0 ,$$

we see that there is one and only one solution of (5.15) such that (5.16) holds. Moreover we have

(5.17)   $u(x,t) = E^x\,\varphi(x(t \wedge \tau))$

hence, by definition of $\bar\Phi(t)$,

(5.18)   $u(x,t) = \bar\Phi(t)\varphi(x) .$

If $\varphi \in H^1(\mathcal{O})$, we can give a meaning to (5.15) as follows:

(5.19)   $\dfrac{d}{dt}(u(t),v) + a(u(t),v) = 0$, $\forall v \in H_0^1$, $u - \varphi \in L^2(0,T;H_0^1)$, $u(0) = \varphi$.

Problem (5.19) has one and only one solution such that

(5.20)   $u \in L^2(0,T;H^1)$, $\dfrac{du}{dt} \in L^2(0,T;H^{-1}) .$

Uniqueness is easy. To prove existence we can consider an approximation $\varphi_n \to \varphi$ in $H^1(\mathcal{O})$, $\varphi_n \in \mathcal{D}(\bar{\mathcal{O}})$. Then we can define the solution $u_n$ of (5.15) with $\varphi$ changed into $\varphi_n$. It is easy to check that we have
$$\dfrac{d}{dt}(u_n(t),v) + a(u_n(t),v) = 0 , \quad \forall v \in H_0^1 .$$

Hence

$$|u_n(t)-u_m(t)|^2 - (u_n(t)-u_m(t),\,\varphi_n-\varphi_m) + \int_0^t a\big(u_n(s)-u_m(s),\,u_n(s)-u_m(s)\big)\,ds - \int_0^t a\big(u_n(s)-u_m(s),\,\varphi_n-\varphi_m\big)\,ds = 0 .$$

Using the fact that $\varphi_n - \varphi_m \to 0$ in $H^1$ as $n,m \to +\infty$, we get that $u_n$ is a Cauchy sequence in $L^2(0,T;H^1)$ and in $C(0,T;L^2)$. The limit $u$ satisfies (5.19), (5.20).

If $\varphi \in H^1(\mathcal{O}) \cap C^0(\bar{\mathcal{O}})$, then we can choose $\varphi_n \to \varphi$ in $C^0(\bar{\mathcal{O}})$ as well. Since $\|u_n - u_m\|_{C^0} \le \|\varphi_n - \varphi_m\|_{C^0}$, we obtain

(5.21)   $u \in C^0(\bar Q) .$

Formula (5.18) extends to $\varphi \in C^0(\bar{\mathcal{O}}) \cap H^1(\mathcal{O})$, $u$ being the solution of (5.19), (5.20), (5.21).

If $\varphi \in C^0(\bar{\mathcal{O}})$, we define $u$ as the limit in $C^0(\bar Q)$ of the Cauchy sequence $u_n$, where $\varphi_n \to \varphi$ in $C^0(\bar{\mathcal{O}})$, $\varphi_n \in \mathcal{D}(\bar{\mathcal{O}})$. It is then possible to show that $u$ is regular inside $Q$ and satisfies the differential equation (5.15) in the sense of distributions in $Q$. This can be done by considering $u\delta$, where $\delta \in \mathcal{D}(\mathcal{O})$, and showing that it satisfies an
homogeneous Dirichlet problem. □

At any rate (5.17) extends to $\varphi \in C^0(\bar{\mathcal{O}})$. Since (5.17) expresses the equality of the values taken by measures on $C^0(\bar{\mathcal{O}})$, formula (5.17) extends to $B(\bar{\mathcal{O}})$, provided $u(x,t)$ is then defined by the integral of $\varphi$ with respect to a measure. Clearly the first relation (5.5) holds when we take test functions in $\mathcal{D}(\mathcal{O})$, and thus also with test functions in $C^0(\bar{\mathcal{O}})$. This is enough to assert that it holds with test functions in $B(\bar{\mathcal{O}})$. The remainder of (5.5) is clear. Here $C$ coincides with the set of continuous functions on $\bar{\mathcal{O}}$. When $\varphi \in C$ we have (5.21), which implies that (5.6) still holds. Note that (5.21) also implies (5.10).

We can also check that $\mathcal{D}(\mathcal{O})$ belongs to the domain of the infinitesimal generator of $\bar\Phi(t)$. Moreover

(5.22)   $\bar{\mathcal{A}}\varphi = -A\varphi , \quad \forall \varphi \in \mathcal{D}(\mathcal{O}) .$

Indeed by Ito's formula we have

$$E^x\,\varphi(x(t \wedge \tau)) = \varphi(x) - E^x \int_0^{t \wedge \tau} A\varphi(x(s))\,ds ,$$

hence

(5.23)   $\dfrac{\bar\Phi(t)\varphi(x) - \varphi(x)}{t} = -\dfrac{1}{t}\,E^x \int_0^{t \wedge \tau} A\varphi(x(s))\,ds .$

Since $\varphi$ is with compact support in $\mathcal{O}$, we have $A\varphi(x) = 0$ outside $\mathcal{O}_\delta = \{x \in \mathcal{O} \mid d(x,\Gamma) > \delta\}$ for $\delta$ small; and if $x \in \mathcal{O}_\delta$, then $E^x\,\chi_{\tau \le t} = P^x(\tau \le t)$ is small for $t$ small. Therefore the right hand side of (5.23) converges to $-A\varphi(x)$ in $B$, which completes the proof of (5.22). □
COMMENTS ON CHAPTER III

1. The representation Theorem 1.1 is due to Kunita - Watanabe [1]. See also P. Priouret [1].

2. The change of probability measures (2.16) defines a probability on $\Omega^0$, since $\Omega^0, P^{xt}$ satisfies the conditions mentioned in Chapter I, § 5.2 (see Comments Chapter I, 12).

3. In (2.20) the coefficients may of course be more regular. Note that $C^1 \subset C^{0,1}$.

4. $\tau_t$ defined in (2.21) is a $\mathcal{M}_t^s$ stopping time (cf. Chapter I, Comments 2), since $x(t)$ is a continuous process.

5. The proof of uniqueness that we give in Theorem 3.1 uses the Markov property. Stroock - Varadhan prove first that $P^{xt}$ is unique, from which the Markov property is easily deduced.

6. To prove the Krylov $L^p$ estimate in Theorem 2.1, we have relied on P.D.E. techniques. Krylov [2] has proved $L^p$ estimates for general Ito processes (not necessarily diffusion processes, and therefore P.D.E. methods cannot be applied). From this and the parabolic problem (3.44) Chapter II, it is possible to prove the uniqueness of the martingale problem for $a_{ij} \in C^0(R^n \times [0,T])$, which is the case considered by Stroock - Varadhan. This idea has been indicated to us by P.L. Lions.

7. Any Markov process generates a semi group by the formula $\Phi(t)\varphi(x) = E^x\,\varphi(x(t))$. Conversely, if $\Phi(t)$ is a semi group on $B$ satisfying (5.5), then

$$P(x,t,B) = \Phi(t)\chi_B(x)$$

defines a Markov transition function. We refer to E. Dynkin [1], [2] for the theory of Markov processes.
CHAPTER IV
STOCHASTIC CONTROL WITH COMPLETE INFORMATION
INTRODUCTION

Complete information means that we can observe the evolution of the state of the system as time evolves. Otherwise we say that we have partial information. In this chapter we develop the Dynamic Programming approach. We consider mostly the problem of controlled diffusions, but we also study control problems for general semi groups. In fact, the same methodology carries over to many other types of Markov processes, such as reflected diffusions, diffusions with jumps, and semi Markov processes (see A. Bensoussan - J.L. Lions [2], M. Robin [1]). The control of general non Markov processes cannot be treated in that way. One has then to use the "purely probabilistic" approach mentioned in the general introduction. We refer to the corresponding literature.

1. SETTING OF THE PROBLEM
Let $\mathcal{V}$ be a subset of a metric space, called the space of controls, and

(1.1)   $f(x,v) : R^n \times \mathcal{V} \to R$, Borel bounded.

Let next $a_{ij}$, $b_i$, $g$, $c$ satisfy the assumptions stated above. Setting

(1.8)   $A = -\sum_{i,j} a_{ij}\,\dfrac{\partial^2}{\partial x_i \partial x_j} - \sum_i b_i\,\dfrac{\partial}{\partial x_i} ,$

we note $P^x$ the solution of the martingale problem relative to $A$ with initial condition $x$ at time $0$; then

(1.9)   $P^x\big(x(0) = x\big) = 1 .$

Then also there exists a $\mathcal{M}^t$ standard $n$ dimensional Wiener process $w(t)$ such that

(1.11)   $x(t) = x + \int_0^t \sigma(x(s))\,dw(s) + \int_0^t b(x(s))\,ds$, a.s. $P^x$.

Let $v(s)$ be an adapted (to $\mathcal{M}^t$) process with values in $\mathcal{V}$. We call it a control process, or a control.
We set

(1.14)   $g_v(s) = g(x(s),v(s))$

and define the probability $P_v^x$ by

(1.13)   $\dfrac{dP_v^x}{dP^x}\Big|_{\mathcal{M}^t} = \exp\Big[\int_0^t \sigma^{-1}g_v(s) \cdot dw(s) - \dfrac{1}{2}\int_0^t |\sigma^{-1}g_v(s)|^2\,ds\Big] .$

For the probability $P_v^x$,

(1.15)   $w_v(t) = w(t) - \int_0^t \sigma^{-1}g_v(s)\,ds$ is a Wiener process, and

$$x(t) = x + \int_0^t \big(b + g_v(s)\big)\,ds + \int_0^t \sigma(x(s))\,dw_v(s) , \quad \text{a.s. } P_v^x .$$

From Ito's formula it follows that

(1.16)   $\varphi(x(t)) - \varphi(x) + \int_0^t A\varphi(x(s))\,ds - \int_0^t D\varphi(x(s)) \cdot g_v(s)\,ds$ is a $P_v^x, \mathcal{M}^t$ martingale, $\forall \varphi \in \mathcal{D}(R^n)$.

As for (2.30), formula (1.13) can be inverted. Therefore $P_v^x$ is the unique probability on $\Omega^0, \mathcal{M}^0$ such that (1.16) holds for any $\varphi \in \mathcal{D}(R^n)$. We then set the functional
(1.18)   $J_x(v(\cdot)) = E_v^x \int_0^\tau f(x(t),v(t))\Big(\exp - \int_0^t c(x(s),v(s))\,ds\Big)\,dt$

where

(1.19)   $\tau$ = first exit time of the process $x(t)$ from $\mathcal{O}$, open bounded domain of $R^n$.

We are interested in characterizing the function

(1.20)   $u(x) = \inf_{v(\cdot)} J_x(v(\cdot))$

and in finding optimal controls.
2. THE EQUATION OF DYNAMIC PROGRAMMING

When there is no control, i.e., $f(x,v) = h(x)$, $g(x,v) = 0$, $c(x,v) = a_0(x)$, then the function $u(x)$ defined by (1.20) is the unique solution of

(2.1)   $Au + a_0 u = h$, $u|_\Gamma = 0$,

where

$$A = -\sum_{i,j} a_{ij}\,\dfrac{\partial^2}{\partial x_i \partial x_j} - \sum_i b_i\,\dfrac{\partial}{\partial x_i} ,$$

(1.6) and (1.7) hold, and $u \in W^{2,p}(\mathcal{O})$. We will show in this section that, in the control case, $u(x)$ defined by (1.20) is also the solution of a P.D.E. of elliptic type, but non linear.
We next define a function $H(x,\lambda,p) : R^n \times R \times R^n \to R$ as follows:

(2.3)   $H(x,\lambda,p) = \inf_{v \in \mathcal{V}}\,\big[f(x,v) + \lambda a_1(x,v) + p \cdot g(x,v)\big] .$

We call $H$ the Hamiltonian function. Let us give some properties of $H$:

(2.4)   $H$ is measurable in $x,\lambda,p$ (1).

Let

(2.7)   $h \in L^p(\mathcal{O})$, $p \ge 2$, $p < \infty$.

Our first objective is to study the equation

(2.8)   $Au + a_0 u - H(x,u,Du) = h$, $u|_\Gamma = 0$, $u \in W^{2,p}(\mathcal{O})$,

which we call the Hamilton-Jacobi-Bellman equation (H.J.B.). Our objective is to prove

Theorem 2.1. We make the assumptions (1.1), (1.2), (1.3), (1.4), (1.6), (1.7). Then there exists one and only one solution of (2.8).

(1) With respect to the Lebesgue measure in $x$. It is not a priori a Borel function.
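A discrete analogue of the H.J.B. equation (2.8) can be solved by Howard's policy iteration, alternating a linear solve for a frozen control with a pointwise minimization. The sketch below is entirely illustrative (it is not the book's method of proof): it takes $n = 1$, $A = -\frac{1}{2}\frac{d^2}{dx^2}$, $a_0 = 1$, $a_1 = 0$, $h = 0$, $g(x,v) = v$, $f(x,v) = 1 + \frac{1}{2}v^2$, and the finite control set $\mathcal{V} = \{-1,0,1\}$, all hypothetical model choices.

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
controls = np.array([-1.0, 0.0, 1.0])

# Second and first difference operators on the interior grid (Dirichlet BC)
D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
I = np.eye(n)

v = np.zeros(n)
for _ in range(20):
    # policy evaluation: -1/2 u'' + u - v u' = 1 + v^2/2 for the frozen v(x)
    u = np.linalg.solve(-0.5 * D2 + I - np.diag(v) @ D1, 1.0 + 0.5 * v**2)
    # policy improvement: pointwise minimization over the control set
    Du = D1 @ u
    vals = 0.5 * controls[None, :]**2 + Du[:, None] * controls[None, :]
    v_new = controls[vals.argmin(axis=1)]
    if np.array_equal(v_new, v):
        break
    v = v_new

# residual of the discrete H.J.B. equation at the converged pair (u, v)
r = -0.5 * (D2 @ u) + u - (1.0 + 0.5 * v**2 + v * (D1 @ u))
print(np.abs(r).max(), u.max())
```

Since each frozen-control matrix is an M-matrix, policy iteration converges monotonically in finitely many steps for a finite control set, after which the H.J.B. residual vanishes and $0 < u < 1$ by comparison with the uncontrolled problem.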
Proof. We proceed in a way similar to Theorem 2.3 of Chapter II. Considering the change of functions used there, where $w$ is defined in (2.17) of Chapter II, we are led to the same problem with the coefficients modified accordingly. Hence we may assume without any loss of generality

(2.9)   $a_1 - a_0 \ge \gamma > 0 .$

Let us next consider the problem
(2.10)   $Au^0 + (a_0+\lambda)u^0 - H(x,u^0,Du^0) = h$, $u^0 \in W^{2,p} \cap W_0^{1,p}$,

with $\lambda$ large enough. Since

$$|H(x,\xi_0,\xi) - a_0\xi_0| \le C\,(1 + |\xi_0| + |\xi|) ,$$

we can find $\lambda$ large enough so that the corresponding bilinear form is coercive, with coercivity constant $\delta > 0$.

Let us show uniqueness. Let $u, \bar u$ be two solutions of (2.10); then

(2.12)   $A(u-\bar u) + (a_0+\lambda)(u-\bar u) - H(x,u,Du) + H(x,\bar u,D\bar u) = 0 .$

Multiplying (2.12) by $u - \bar u$ and integrating over $\mathcal{O}$, by Green's theorem we obtain

(2.13)   $a(u-\bar u,\,u-\bar u) + \lambda\,|u-\bar u|^2 - \big(H(\cdot,u,Du) - H(\cdot,\bar u,D\bar u),\,u-\bar u\big) = 0 ,$

and the left hand side of (2.13) is $\ge \delta\,\|u-\bar u\|^2$, hence $u = \bar u$.

Existence is proved by a fixed point argument. Let $z \in W^{2,p}(Q)$; then $H(x,z,Dz) \in L^p(\mathcal{O})$, hence there exists $\zeta \in W^{2,p} \cap W_0^{1,p}$ solution of

$$A\zeta + (a_0+\lambda)\zeta = h + H(x,z,Dz)$$
and

$$\|\zeta\| \le \dfrac{C_1}{\lambda}\,\big(1 + \|z\|\big) \quad (\lambda > \lambda_0) ,$$

where $C_1$ does not depend on $\lambda$. Therefore, if $\lambda$ is large enough, the map $S : z \to \zeta$ is a contraction. This completes the proof of the existence of $u^0$, solution of (2.10).

The next step is to define the following sequence:
(2.14)   $Au^{n+1} + (a_0+\lambda)u^{n+1} - H(x,u^{n+1},Du^{n+1}) = h + \lambda u^n .$

Then by difference

$$A(u^{n+1}-u^n) + (a_0+\lambda)(u^{n+1}-u^n) - H(x,u^{n+1},Du^{n+1}) + H(x,u^n,Du^n) = \lambda\,(u^n - u^{n-1}) .$$

Again there is an improvement of regularity at each step. Indeed $u^1 - u^0 \in W^{2,p}(\mathcal{O})$, and $u^2 - u^1 \in W^{2,p}(\mathcal{O})$; it follows that $D(u^2-u^1) \in W^{1,p}$, hence also $u^2 - u^1 \in W^{2,q_1}(\mathcal{O})$ with $q_1 > p$. After a finite number of steps, we obtain in particular $u^{n+1} - u^n \in L^\infty$. Therefore, using Lemma 2.1 below, we have that, at least for $n \ge n_0$, $u^n$ is a Cauchy sequence in $L^\infty$. Hence $u^n \to u$ in $L^\infty$. From (2.14) we also have $u^n$ bounded in $W^{2,p}$, hence $u^n \to u$ in $W^{2,p}$ weakly.
Therefore also, since the injection of un
u
+-
W2”
into Wl”
is compact
in W’,P strongly.
This implies +
~ ( x , u , ~ u in )
L‘
strongly
We can pass to the limit in (2.14) which proves that
u
I
is a solution
of (2.8). Let us prove uniqueness.
Let u, ū be two solutions. Then u − ū ∈ L∞ and

(2.16)   A(u−ū) + a0(u−ū) − H(x,u,Du) + H(x,ū,Dū) = φ ,

here with φ = 0. From Lemma 2.1 below, applied with φ ∈ L∞ and λ large enough, it follows that u − ū = 0.

Lemma 2.1. Let u, ū ∈ W^{2,p} ∩ W0^{1,p} satisfy (2.16) with φ ∈ L∞. Then

(2.17)   |u − ū| ≤ K ,

where K is a constant proportional to ‖φ‖_{L∞}.
Proof. Let K be the constant on the right hand side of (2.17). We multiply (2.16) by (u−ū−K)+ and integrate. We get

(2.18)   a((u−ū−K)+, (u−ū−K)+) + λ|(u−ū−K)+|² + ∫ K(a0+λ)(u−ū−K)+ dx =
         = ∫ (H(x,u,Du) − H(x,ū,Dū) + φ)(u−ū−K)+ dx
         = ∫ [inf_v (f + g·Du + u a1) − inf_v (f + g·Dū + ū a1) + φ](u−ū−K)+ dx .

On the set where u − ū > K we may write Du = Dū + D(u−ū−K)+ and u = ū + K + (u−ū−K), hence the right hand side equals

         ∫ [inf_v (f + g·Dū + g·D(u−ū−K) + ū a1 + a1(u−ū−K) + K a1 + φ) − inf_v (f + g·Dū + ū a1)](u−ū−K)+ dx .

But recalling that K(a1 − a0 − λ) ≤ 0, it follows from the choice of K that

         a((u−ū−K)+, (u−ū−K)+) + λ|(u−ū−K)+|² ≤ C ∫ (|D(u−ū−K)+| + |(u−ū−K)+|)(u−ū−K)+ dx ,

and for λ large enough this inequality implies (u−ū−K)+ = 0, hence u − ū ≤ K. A similar reasoning shows that u − ū ≥ −K, which completes the proof of (2.17).
3. SOLUTION OF THE STOCHASTIC CONTROL PROBLEM

Our objective is to prove the

Theorem 3.1. We make the assumptions of Theorem 2.1 and

(3.1)   f(x,v), g(x,v), a1(x,v) are l.s.c. in x,v ,

(3.3)   a0 − a1 ≥ γ > 0 ,   h = 0 .

Then the solution u of (2.8) satisfies

(3.4)   u(x) = inf_{v(·)} J_x(v(·)) .

Moreover there exists an optimal control.

We start with a Lemma.
Lemma 3.1. Let P_x^v be the probability defined by (1.13). Then we have

(3.5)   |E_x^v ∫_0^{T∧τ} φ(x(s)) ds| ≤ C_T ‖φ‖_{Lp} ,   φ ∈ B(ℝⁿ) ,   p > n+2 .

Proof. It is enough to prove (3.5) assuming φ ≥ 0. We have

         E_x^v ∫_0^{T∧τ} φ(x(s)) ds = E_x [X_v(T) ∫_0^{T∧τ} φ(x(s)) ds] ,

where

         X_v(T) = exp[∫_0^T σ^{-1} g_v(s)·dw(s) − ½ ∫_0^T |σ^{-1} g_v(s)|² ds] .

We easily check that E_x X_v(T)² ≤ C_T, independent of v. Next, by Cauchy–Schwarz,

         E_x^v ∫_0^{T∧τ} φ(x(s)) ds ≤ (E_x X_v(T)²)^{1/2} [E_x (∫_0^{T∧τ} φ(x(s)) ds)²]^{1/2}
         ≤ C'_T (E_x ∫_0^{T∧τ} φ²(x(s)) ds)^{1/2} ≤ C''_T ‖φ‖_{Lp} ,

using (4.22) of Chapter II, since p > n+2. This proves (3.5).
Proof of Theorem 3.1. Since h = 0, the solution of (2.8) belongs to W^{2,p}(𝒪) for every p, 2 ≤ p < ∞. From Lemma 3.1 it follows that we can apply Ito's formula, in integrated form, to u and the process x(t) verifying (1.15). Hence

(3.7)   u(x) = E_x^v [u(x(T∧τ)) exp(−∫_0^{T∧τ} c(x(s),v(s)) ds)]
         + E_x^v ∫_0^{T∧τ} [A u − Du·g + c u](x(s),v(s)) exp(−∫_0^s c(x(λ),v(λ)) dλ) ds ,

where c(x,v) = a0(x) + a1(x,v). From equation (2.8) we deduce

(3.8)   H(x,u,Du) ≤ f(x,v) + u(x) a1(x,v) + Du(x)·g(x,v) ,   ∀v .

Taking x = x(s), v = v(s), we deduce from (3.7) and (3.8) that

(3.9)   u(x) ≤ E_x^v [u(x(T∧τ)) exp(−∫_0^{T∧τ} c(x(s),v(s)) ds)]
         + E_x^v ∫_0^{T∧τ} f(x(s),v(s)) exp(−∫_0^s c(x(λ),v(λ)) dλ) ds .

Since (3.3) is satisfied we can let T → +∞ in (3.9). We obtain

(3.10)  u(x) ≤ J_x(v(·)) .

Moreover, by virtue of (3.1), (3.2) the function

         L(x,v,λ,p) = f(x,v) + λ a1(x,v) + p·g(x,v)

is l.s.c. in all variables and bounded below if |λ| ≤ M, |p| ≤ M. Since 𝒱 is compact, there exists a Borel function v̂(x,λ,p) with values in 𝒱, defined on ℝⁿ × {|λ| ≤ M} × {|p| ≤ M}, such that

(3.11)  L(x, v̂(x,λ,p), λ, p) = inf_{v∈𝒱} L(x,v,λ,p) = H(x,λ,p) .

Now

(3.12)  u ∈ C¹(𝒪̄) ,   |u|, |Du| ≤ M .

Define

         v̂(x) = v̂(x, u(x), Du(x)) ,
which is a Borel function on 𝒪̄, with values in 𝒱. Define

(3.13)  v̂(s) = v̂(x(s)) ;

then v̂(s) is an admissible control. Moreover, by the choice of v̂, we have equality in (3.8), which with (3.7), taking v = v̂, yields equality in (3.9). Letting next T → +∞, we obtain

         u(x) = J_x(v̂(·)) ,

which, with (3.10), completes the proof of the desired result.

Remark 3.1. The optimal control is defined by a feedback v̂(x). This means that, to know the control which should be exerted at any time t, the information on the states previous to t is irrelevant. We also remark that x(·) is a solution of the martingale problem relative to b(x) + g(x,v̂(x)) and a(x), hence is a Markov process.

Remark 3.2. We can take h ≠ 0, h ∈ Lp(𝒪), p > n+2. The function u solution of (2.8) is then given by (3.4) with f(x,v) changed into f(x,v) + h(x). Details are left to the reader.
Let us now indicate another approach to the study of equation (2.8), called the method of policy iteration. Let u0, ..., un, ... be a sequence of functions belonging to W^{2,p} ∩ W0^{1,p}, p > n, defined as follows. We take u0 arbitrary. Knowing un, define vn(x) to be a Borel function such that

         f(x,vn(x)) + un(x) a1(x,vn(x)) + Dun(x)·g(x,vn(x)) = H(x,un,Dun) ,   ∀x .

Define next u^{n+1} as the solution of the linear equation

(3.15)  A u^{n+1} + a0 u^{n+1} = f(x,vn(x)) + u^{n+1} a1(x,vn(x)) + D u^{n+1}·g(x,vn(x)) + h .

Theorem 3.2. We make the assumptions of Theorem 3.1 and a1 ≤ 0, h ∈ Lp, p > n. Then un ↓ u pointwise and un → u in W^{2,p} weakly, where u is the solution of (2.8).

Proof. Clearly the sequence un is well defined. Moreover, since a1(x,vn(x)), g(x,vn(x)) are bounded,

(3.16)  A un + a0 un = f(x,v^{n−1}) + un a1(x,v^{n−1}) + Dun·g(x,v^{n−1}) + h
         ≥ f(x,vn) + un a1(x,vn) + Dun·g(x,vn) + h .

Hence by difference

         A(u^{n+1} − un) + a0(u^{n+1} − un) − (u^{n+1} − un) a1(x,vn) − D(u^{n+1} − un)·g(x,vn) ≤ 0 ,

which implies u^{n+1} ≤ un (recall (3.3)). Hence un ↓ u pointwise and in W^{2,p} weakly.
Therefore also, by compactness,

(3.17)  un → u   in W^{1,p} strongly .

Let us identify the limit. We have, for v ∈ 𝒱 arbitrary,

         A un + a0 un − f(x,v) − un a1(x,v) − Dun·g(x,v) − h
         ≤ A un + a0 un − f(x,vn) − un a1(x,vn) − Dun·g(x,vn) − h
         = A(un − u^{n+1}) + a0(un − u^{n+1}) + (u^{n+1} − un) a1(x,vn) + (D u^{n+1} − D un)·g(x,vn)
         → 0   in Lp weakly .

Therefore

         A u + a0 u − f(x,v) − u a1(x,v) − Du·g(x,v) − h ≤ 0 ,   ∀v ∈ 𝒱 ,

which means

(3.18)  A u + a0 u − H(x,u,Du) − h ≤ 0   a.e.

Also we have

         A u + a0 u − H(x,u,Du) − h ≥ A u + a0 u − f(x,vn) − u a1(x,vn) − Du·g(x,vn) − h
         = A(u − un) + a0(u − un) + (D un − D u)·g(x,vn) + (un − u) a1(x,vn) ,

and as n → +∞ the right hand side tends to 0 in Lp weakly, hence

         A u + a0 u − H(x,u,Du) − h ≥ 0   a.e. ,

which with (3.18) concludes to the desired result.
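As an illustration (not from the text), the policy iteration scheme (3.15) can be sketched on a crude one-dimensional finite-difference discretization. The grid, the running cost, the drift and the two-point control set below are all invented for the example; only the alternation "solve the linear equation for the current policy, then improve the policy pointwise" reflects the method described above.

```python
import numpy as np

# Hypothetical 1-D discretization of policy iteration (3.15):
# A = -d2/dx2 (Dirichlet), a0 = 1, a1 = 0, two controls.
N, dx = 50, 1.0 / 51
x = np.linspace(dx, 1 - dx, N)
controls = [-1.0, 1.0]                               # finite control set
f = lambda v: 1.0 + 0.5 * v * np.sin(np.pi * x)      # running cost f(x,v)
g = lambda v: v * np.ones(N)                         # drift g(x,v)
a0 = 1.0

A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2
D = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * dx)    # centered gradient

def solve_policy(v):
    # linear equation of type (3.15): A u + a0 u - Du.g(x,v) = f(x,v)
    M = A + a0 * np.eye(N) - np.diag(g(v)) @ D
    return np.linalg.solve(M, f(v))

def improve(u):
    # pointwise minimization defining the next policy v^n(x)
    Du = D @ u
    vals = np.stack([f(v) + Du * g(v) for v in controls])
    return np.array(controls)[np.argmin(vals, axis=0)]

v = np.full(N, controls[0])                          # arbitrary initial policy
for _ in range(30):
    v = improve(solve_policy(v))
u = solve_policy(v)
```

As in Theorem 3.2, each improvement step does not increase the value, so the iterates decrease monotonically toward the discrete value function.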
Let us give the analogue of the situation studied in sections 2 and 3.

4. EVOLUTION PROBLEMS

4.1. Parabolic equations

We consider here functions

(4.1)   f(x,v,t) : ℝⁿ × 𝒱 × [0,T] → ℝ ,
        g(x,v,t) : ℝⁿ × 𝒱 × [0,T] → ℝⁿ ,
        c(x,v,t) : ℝⁿ × 𝒱 × [0,T] → ℝ ,

Borel and bounded, and set

(4.2)   H(x,t,λ,p) = inf_{v∈𝒱} [f(x,v,t) − λ c(x,v,t) + p·g(x,v,t)] .

Next we assume

(4.3)   a_ij ∈ C^{0,α,α/2} ,   ∂a_ij/∂x_k ∈ L∞ ,

(4.4)   h ∈ Lp(Q) ,   Q = 𝒪 × (0,T) ,

where 𝒪 is a smooth bounded domain of ℝⁿ. Let

(4.5)   ū ∈ W^{2,p} ∩ W0^{1,p} .

Then we have

Theorem 4.1. We assume (4.1), (4.3), (4.4), (4.5). Then there exists one and only one solution u ∈ W^{2,1,p}(Q) of

(4.6)   −∂u/∂t + A(t)u − H(x,t,u,Du) = h ,
        u|_Σ = 0 ,   u(x,T) = ū(x) .

Proof. Similar to Theorem 3.4 of Chapter II and Theorem 2.1.

We can next give the interpretation of the function u. We assume

(4.7)   f(x,v,t), g(x,v,t), c(x,v,t) are continuous in v, ∀v, and measurable with respect to x,t, a.e. x,t .
(4.8)   𝒱 is a compact subset of ℝ^d .

Let u belong to W^{2,1,p}(Q), and define

         L_u(x,v,t) = f(x,v,t) − u(x,t) c(x,v,t) + Du(x,t)·g(x,v,t) ,

which is Lebesgue measurable in x,t for any v and continuous in v a.e. x,t. It is a Caratheodory function. Hence there exists a function v̂_u(x,t), Lebesgue measurable, such that

         L_u(x, v̂_u(x,t), t) = inf_{v∈𝒱} L_u(x,v,t)   a.e. x,t .

We can take v̂_u(x,t) to be a Borel representative. In the sequel, the results will not depend on the choice of the Borel representative.

Let P_xt be the solution of the martingale problem relative to the operator A(t), with initial conditions (x,t). Then if x(s) is the canonical process as usual, a control is an adapted process v(s) (with respect to the family ℱ_t^s), with values in 𝒱. We define the measure P_xt^v by a Girsanov transformation (4.11), and P_xt^v is the unique solution of the problem of controlled martingales (4.12). And the cost is

(4.13)  J_xt(v(·)) = E_xt^v ∫_t^{T∧τ} f(x(s),v(s),s) exp(−∫_t^s c(x(λ),v(λ),λ) dλ) ds .

We can state the

Theorem 4.2. We make the assumptions of Theorem 4.1 and (4.7), (4.8), h = 0, p > n+2. Then the solution of (4.6) is given explicitly by

(4.15)  u(x,t) = inf_{v(·)} J_xt(v(·)) .

Moreover, there exists an optimal control v̂(s) defined by

(4.16)  v̂(s) = v̂_u(x(s),s) .

Proof. Similar to that of Theorem 3.1.

Theorem 4.3. We make the assumptions of Theorem 4.2. Then for any control v(·), the process

         u(x(s∧τ), s∧τ) exp(−∫_t^{s∧τ} c(x(λ),v(λ),λ) dλ) + ∫_t^{s∧τ} f(x(λ),v(λ),λ) exp(−∫_t^λ c dμ) dλ

is a (P_xt^v, ℱ_t^s) submartingale for t ≤ s ≤ T. For v = v̂, it is a martingale.

Proof. Let χ be ℱ_t^s measurable and bounded. We have, for t ≤ s ≤ θ ≤ T, using equation (4.6), the submartingale inequality. This proves the first part of the theorem. Taking v = v̂, we have equality, hence the second part of the theorem.

Remark 4.1. It is easy to check that if u is Borel bounded, verifies u|_Σ = 0, u(x,T) = 0 and the properties of Theorem 4.3, then (4.15) holds and v̂(x) is optimal.
5. SEMI-GROUP FORMULATION

5.1. A property of the equation

Let us go back to equation (2.8), and we will assume here

(5.1)   a1 = 0 ,   a0 = α > 0 a constant .

For v ∈ 𝒱, a parameter, we consider the operator

(5.3)   A_v = A − g(x,v)·D .

We note that u satisfies

(5.4)   A_v u + α u ≤ f_v   a.e. in 𝒪 ,   ∀v .

Moreover, let w satisfy (5.4); then we have

         A w + α w ≤ f(x,v) + Dw·g(x,v) ,

and

         A u + α u = inf_v [f(x,v) + Du·g(x,v)] ,

hence

         A(w−u) + α(w−u) ≤ f(x,v) + Du·g(x,v) + (Dw − Du)·g(x,v) − inf_v [f(x,v) + Du·g(x,v)]
         ≤ f(x,v) + Du·g(x,v) − inf_v [f(x,v) + Du·g(x,v)] + C |D(w−u)| ;

hence, taking the inf in v, we obtain

(5.5)   A(w−u) + α(w−u) ≤ C |D(w−u)| .

Condition (5.5) implies

(5.6)   w − u ≤ 0 .

Relation (5.6) is clear when α is large enough. Otherwise we have to prove the following result. Let h ∈ Lp be given and z the solution of the H.J.B. equation

(5.7)   A z + α z − H(x,Dz) = h ,   z ∈ W^{2,p} ∩ W0^{1,p} ;

then

(5.8)   h ≤ 0 implies z ≤ 0 .

Indeed z can be obtained as the limit of the iteration (5.9), starting with z0 = 0, with z^n → z in W^{2,p} weakly; one checks inductively that z^n ≤ 0, hence z ≤ 0. This proves (5.8), hence (5.6). We thus have proved the following

Theorem 5.1. We make the assumptions of Theorem 2.1, and (5.1), (5.2). Then the solution u of (2.8) is the maximum element of the set of functions satisfying (5.4).

Remark 5.1. Assumption (5.1) can be weakened, but it will be sufficient for the remainder of the chapter.

We note now P_x^v the solution of the martingale problem corresponding to the operator A_v, starting in x at time 0. It corresponds to a controlled martingale problem, with a control v(s) = v, independent of s, ω. Let u be a function satisfying (5.4); then from Ito's formula we have

         u(x) ≤ E_x^v [∫_0^{t∧τ} f_v(x(s)) e^{−αs} ds] + E_x^v [u(x(t∧τ)) e^{−α(t∧τ)}] .

Recalling that u|_Γ = 0, we also have

         u(x) ≤ E_x^v [∫_0^t f_v(x(s∧τ)) χ_{s<τ} e^{−αs} ds] + E_x^v [u(x(t∧τ)) e^{−αt}] .

Using the semi-group Φ^v(t) : B0 → B0, which we have considered in §5.2 of Chapter III, where B0 is the set of Borel bounded functions on 𝒪̄ which vanish on Γ, we see that u satisfies the relation

(5.13)  u ≤ ∫_0^t Φ^v(s) f_v e^{−αs} ds + e^{−αt} Φ^v(t) u .

This motivates the problem which will be studied in the next paragraph.
5.2. A problem of maximum solution

We make here the following assumptions. Let E be a polish space (1), provided with the Borel σ-algebra ℰ. We note B the space of Borel bounded functions on E, C the space of uniformly continuous bounded functions on E. We assume given a family Φ^v(t), v ∈ 𝒱, where

(5.14)  𝒱 is a finite set ,

(5.15)  Φ^v(t) : B → B ,   Φ^v(t)Φ^v(s) = Φ^v(t+s) ,   Φ^v(0) = I ,   Φ^v(t)φ ≥ 0 if φ ≥ 0 .

We will also assume that

(5.16)  Φ^v(t) : C → C ,

(5.17)  t → Φ^v(t)φ(x) is continuous from (0,∞) → ℝ , x fixed, ∀φ ∈ C ,

(5.18)  L_v(x) = L(x,v) ∈ C ,   ∀v .

We consider the following problem: to find u, maximum solution of

(5.19)  u ∈ B ,   u ≤ ∫_0^t Φ^v(s) L_v e^{−αs} ds + e^{−αt} Φ^v(t) u ,   ∀t ≥ 0 , ∀v ,   α > 0 .

(1) This will be needed for the probabilistic interpretation.

Example. Consider the semi-group of §5.1 and take

(5.20)  L_v(x) = f(x,v) χ_𝒪̄(x) .

For t > 0, z(x,t) = Φ^v(t)φ(x) is a regular function of x,t, hence (5.18) is satisfied.

We will study (5.19) by a discretization procedure. For h > 0, define u_h by
(5.22)  u_h = Min_v [∫_0^h e^{−αs} Φ^v(s) L_v ds + e^{−αh} Φ^v(h) u_h] .

Lemma 5.1. There exists one and only one solution u_h ∈ C of (5.22).

Proof. Define, for z ∈ B,

         T_h z = Min_v [∫_0^h e^{−αs} Φ^v(s) L_v ds + e^{−αh} Φ^v(h) z] .

Note that T_h z ∈ C when z ∈ C, since 𝒱 is a finite set. Moreover T_h is a contraction (with ratio e^{−αh}), hence has one and only one fixed point, u_h.

Lemma 5.2. Let z ∈ B be such that z ≤ T_h z. Then z ≤ u_h.

Proof. Since T_h is increasing we have

         z ≤ T_h z ≤ T_h² z ,

and by induction z ≤ T_h^n z. Moreover T_h^n z → u_h as n → +∞, which proves that z ≤ u_h.

Lemma 5.3. We have

(5.23)  u_h ≤ u_{2h} .

Proof. We have, for any v,

         u_h ≤ ∫_0^h e^{−αs} Φ^v(s) L_v ds + e^{−αh} Φ^v(h) u_h ,

hence, applying e^{−αh} Φ^v(h) to this inequality and adding,

         u_h ≤ ∫_0^{2h} e^{−αs} Φ^v(s) L_v ds + e^{−2αh} Φ^v(2h) u_h ,

which implies u_h ≤ T_{2h} u_h, which with Lemma 5.2 implies (5.23).
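As a numerical aside (not from the text), the fixed-point construction of Lemma 5.1 can be sketched on a finite state space. Here Φ^v(h) is modeled by an invented stochastic matrix per control, and the integral ∫_0^h e^{−αs} Φ^v(s) L_v ds is crudely approximated by h·L_v; the point is only the contraction T_h and its fixed point u_h.

```python
import numpy as np

# Toy version of (5.22) on E = {0,...,4} with two controls.
# P[v] plays the role of Φ^v(h); all data are invented for the example.
rng = np.random.default_rng(0)
alpha, h, m = 1.0, 0.1, 5
P = [rng.dirichlet(np.ones(m), size=m) for _ in range(2)]  # stochastic matrices
L = [np.cos(np.arange(m)), np.ones(m)]                     # running costs L_v

def T_h(z):
    # discretized operator: T_h z = Min_v [h L_v + e^{-alpha h} P_v z]
    return np.min([h * L[v] + np.exp(-alpha * h) * P[v] @ z for v in range(2)],
                  axis=0)

u_h = np.zeros(m)
for _ in range(200):      # contraction ratio e^{-alpha h}: geometric convergence
    u_h = T_h(u_h)
```

Since each candidate in the min is an affine map with Lipschitz constant e^{−αh} < 1, T_h is a sup-norm contraction, exactly as used in the proof of Lemma 5.1.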
We can then state the

Theorem 5.2. We assume (5.14), (5.15), (5.16), (5.17), (5.18). Then u_{1/2^q} ↓ u, maximum solution of (5.19), as q → +∞.

Proof. Let us first check that

(5.24)  u_h ≥ −K ,   with K independent of h .

Assume z ≥ −K; then

         T_h z ≥ −e^{−αh} K + Min_v ∫_0^h e^{−αs} Φ^v(s) L_v ds
         ≥ −e^{−αh} K − Max_v ‖L_v‖ ∫_0^h e^{−αs} ds ≥ −K

for K large enough, hence T_h z ≥ −K, which implies (5.24). Let us set u_q = u_{1/2^q}; then, as q ↑ +∞, u_q ↓ u. Note that u is u.s.c. Furthermore we see that

         u_h ≤ ∫_0^{mh} e^{−αs} Φ^v(s) L_v ds + e^{−αmh} Φ^v(mh) u_h ,   ∀m integer .

Take h = 1/2^q and m = j 2^{q−ℓ}, with ℓ ≤ q, j integer. We get

(5.25)  u_q ≤ ∫_0^{j/2^ℓ} e^{−αs} Φ^v(s) L_v ds + e^{−αj/2^ℓ} Φ^v(j/2^ℓ) u_q ,   ℓ ≤ q , j integer .

According to Lemma 5.4 below,

(5.26)  u ≤ ∫_0^{j/2^ℓ} e^{−αs} Φ^v(s) L_v ds + e^{−αj/2^ℓ} Φ^v(j/2^ℓ) u_q ,   ∀ℓ, j, q .

Take next j = [t 2^ℓ] + 1 and let ℓ tend to +∞; we deduce from (5.26), using assumption (5.17),

         u ≤ ∫_0^t e^{−αs} Φ^v(s) L_v ds + e^{−αt} Φ^v(t) u_q ,

in which we may again let q tend to +∞. This proves that u satisfies (5.19). It is the maximum solution, since assuming ũ to satisfy (5.19), then clearly ũ ≤ T_h ũ, which implies ũ ≤ u_h, hence the desired result.
Let us now state the result which has been used in the proof of Theorem 5.2.

Lemma 5.4. If z_q ↓ z in B, then Φ^v(t) z_q ↓ Φ^v(t) z.

We refer to Dynkin [2]. Let us briefly mention the main elements of the proof. Let 𝔐 be the space of bounded measures on (E, ℰ), which is a Banach space. One defines an operator U(t) on 𝔐; one checks that U(t) is a contraction semi-group on 𝔐, and that a duality relation with Φ^v(t) holds, from which the desired result follows.

Remark 5.2. In example (5.20), we have u_h ∈ C0 = subspace of C of functions which vanish on Γ, and of course u ∈ B0.
5.3. Probabilistic interpretation

Let us define Ω0, where x(t;ω) is the canonical process, and the σ-algebras ℳ_t, ℳ0. Let us assume for simplicity that

         𝒱 = {1, 2, ..., m} .

To i ∈ 𝒱 we associate a probability P_xt^i such that

(5.30)  E_xt^i φ(x(s)) = Φ^i(s−t) φ(x) ,   ∀s ≥ t .

We will denote by W the class of step processes adapted to (ℳ_t), with values in 𝒱. More precisely, if v ∈ W, then there exists a sequence 0 = T0 ≤ T1 ≤ ... ≤ Tn ≤ ..., which is deterministic, increasing and convergent to +∞, and

(5.31)  v(t;ω) = v_n(ω) ,   t ∈ [T_n, T_{n+1}) ,

where v_n is ℳ_{T_n} measurable.

We next define a family P^{ω,t} (ω ∈ Ω0, t ≥ 0) of probabilities on (Ω0, ℳ0), indexed by the pair ω, t. Let us next define a sequence P_x^n of probabilities on (Ω0, ℳ0) as follows:

(5.33)  P_x^0 = P_x^{v_0, 0} ,   v_0 = v_0(x) ,

v_0 being a Borel function from E → 𝒱, and by induction for the following indices. We then note that P_x^{n+1} = P_x^n on ℳ_{T_{n+1}}. Indeed, for Γ ∈ ℳ_{T_{n+1}} we can take Γ = Γ1 ∩ Γ2, where Γ1 ∈ ℳ_{T_n}; then for such Γ, P_x^{n+1}(Γ) = P_x^n(Γ). The family (Ω0, ℳ_{T_{n+1}}, P_x^n) thus forms a system of compatible probabilities, and from Theorem 1.2 of Chapter I there exists a unique probability P_x^v on (Ω0, ℳ0) such that

(5.37)  P_x^v = P_x^n   on ℳ_{T_{n+1}} .
Lemma 5.5. Let φ(x) ∈ B and T_n ≤ t < T_{n+1}. Then we have

(5.38)  E^v [φ(x(t)) | ℳ_{T_n}] = Φ^{v_n}(t−T_n) φ(x(T_n)) .

Proof. Let χ be ℳ_{T_n} measurable and bounded. Since t < T_{n+1}, computing E^v [χ φ(x(t))] by conditioning on ℳ_{T_n} and using (5.37) and (5.30), we deduce (5.38).
We next define for V ∈ W

(5.39)  J_x(V) = E_x^v ∫_0^∞ e^{−αt} L(x(t), v(t)) dt .

We have

         J_x(V) = Σ_{n=0}^∞ E_x^v ∫_{T_n}^{T_{n+1}} e^{−αt} L(x(t), v_n) dt .

But from Lemma 5.5 we deduce

(5.40)  E_x^v [∫_{T_n}^{T_{n+1}} e^{−α(t−T_n)} L(x(t), v_n) dt | ℳ_{T_n}] = ∫_0^{T_{n+1}−T_n} e^{−αt} Φ^{v_n}(t) L_{v_n}(x(T_n)) dt .

Define

(5.41)  L_h(x,v) = ∫_0^h e^{−αt} Φ^v(t) L_v(x) dt ;

then from (5.40), and defining x_n(ω) = x(T_n; ω), we get the formula

(5.42)  J_x(V) = E_x^v Σ_{n=0}^∞ e^{−αT_n} L_{T_{n+1}−T_n}(x_n(ω), v_n(ω)) .

Let

(5.43)  W_h = {V ∈ W | T_n = nh} ,

(5.44)  u_h(x) = inf_{V ∈ W_h} J_x(V) .

For V ∈ W_h we have

(5.45)  J_x(V) = E_x^v Σ_{n=0}^∞ e^{−αnh} L_h(x_n, v_n) ,

and v_n is ℳ_{nh} measurable. A control V can be considered as a sequence v_0 = v_0(x), ..., v_n, ..., where v_n is ℳ_{nh} measurable. Note that v_0 is not random.

Now from (5.22) we have

(5.46)  u_h(x) ≤ ∫_0^h e^{−αs} Φ^v(s) L_v(x) ds + e^{−αh} Φ^v(h) u_h(x) ,   ∀v, ∀x .

Take V = (v_0(x), ..., v_n(ω), ...) to be an arbitrary control in W_h. Replace in (5.46) x by x_n(ω), and v by v_n(ω). We get

(5.47)  u_h(x_n) ≤ ∫_0^h e^{−αs} Φ^{v_n}(s) L_{v_n}(x_n) ds + e^{−αh} Φ^{v_n}(h) u_h(x_n) .

But from Lemma 5.5 we can compute the conditional expectations of both terms on the right; hence we deduce from (5.47), (5.48), (5.49), adding up for n = 0, ..., N−1, letting N → +∞ and recalling that u_h is bounded,

(5.50)  u_h(x) ≤ J_x(V) .

Define now v̂(x) such that

         u_h(x) = ∫_0^h e^{−αs} Φ^{v̂(x)}(s) L_{v̂(x)}(x) ds + e^{−αh} Φ^{v̂(x)}(h) u_h(x) ,   ∀x .

We can find v̂(x) Borel. To v̂ we associate V̂ ∈ W_h as follows:

         V̂ = (v̂_0(x), ..., v̂_n(ω), ...) ,   v̂_0(x) = v̂(x) ,   v̂_n(ω) = v̂(x(nh; ω)) .

A calculation similar to the one above shows that u_h(x) = J_x(V̂), which completes the proof of (5.44).

We are now in a position to interpret the maximum solution of (5.19). Let us define
(5.52)  W^q = W_{1/2^q} .

Then

(5.53)  W^{q+1} ⊃ W^q ,

hence

(5.54)  u_{q+1} ≤ u_q ,

and we recover the fact that u_q is decreasing.

Theorem 5.4. We make the assumptions of Theorem 5.2. Then the maximum element of (5.19) can be interpreted as

(5.55)  u(x) = inf_q inf_{V ∈ W^q} J_x(V) .

Proof. Let ū(x) be the right hand side of (5.55). Since u ≤ u_q ≤ J_x(V), ∀V ∈ W^q and ∀q, we have u ≤ ū. But on the other hand we know from (5.44) that inf_{V ∈ W^q} J_x(V) = u_q, hence ū = inf_q u_q, and since u_q ↓ u we get ū ≤ u. Hence the result.

Remark 5.3. The function u defined by (5.55) is a priori larger than the solution of (2.8).
5.4. Regularity

In this paragraph we give some regularity results on the maximum solution of (5.19). We will assume

(5.56)  ‖Φ^v(t)φ‖_{C^{0,δ}} ≤ e^{λt} ‖φ‖_{C^{0,δ}} ,

(5.57)  |L_v(x) − L_v(x0)| ≤ K |x − x0|^δ ;

then

Theorem 5.5. We make the assumptions of Theorem 5.2 and (5.56), (5.57). Then, if α > λ, u ∈ C^{0,δ}(E).

Proof. Let z ∈ C^{0,δ}(E). Let us fix x0 in E; then there exists v0 (depending on x0) such that

         T_h z(x0) = ∫_0^h e^{−αs} Φ^{v0}(s) L_{v0}(x0) ds + e^{−αh} Φ^{v0}(h) z(x0) .

Let x be arbitrary; we have

         T_h z(x) ≤ ∫_0^h e^{−αs} Φ^{v0}(s) L_{v0}(x) ds + e^{−αh} Φ^{v0}(h) z(x) ,

hence by difference, and from the assumptions (5.56), (5.57), it follows that

         T_h z(x) − T_h z(x0) ≤ K |x−x0|^δ ∫_0^h e^{−(α−λ)s} ds + e^{−(α−λ)h} ‖z‖_{C^{0,δ}} |x−x0|^δ ,

and since x0, x are arbitrary this implies a bound on the Hölder seminorm of T_h z; iterating from 0 and letting the number of iterations tend to +∞, it follows that

         ‖u_h‖_{C^{0,δ}} ≤ K/(α−λ) .

Taking now h = 1/2^q and letting q → ∞, we deduce

(5.59)  ‖u_q‖_{C^{0,δ}} ≤ K/(α−λ) ,

which implies the desired result.

Let us now give another regularity result. We assume
(5.60)  ‖Φ^v(t)φ‖ ≤ ‖φ‖ ,   ∀φ ∈ B .

Lemma 5.6. The maximum solution of (5.19) is also the maximum solution of

(5.61)  u ∈ B ,   u ≤ ∫_0^t e^{−βs} Φ^v(s)(L_v + (β−α)u) ds + e^{−βt} Φ^v(t) u ,   ∀v, ∀t ≥ 0 ,   β ≥ α .

Proof. We first show that (5.61) has a maximum element, which will be denoted by ũ. Indeed, define for z ∈ B

         Θ_h z = Min_v [∫_0^h e^{−βs} Φ^v(s)(L_v + (β−α)z) ds + e^{−βh} Φ^v(h) z] .

It is well defined, by virtue of (5.60). This is a contraction, since

         ‖Θ_h z1 − Θ_h z2‖ ≤ [1 − (α/β)(1 − e^{−βh})] ‖z1 − z2‖ .

Moreover Θ_h : C → C, and Θ_h z ≥ 0 when z ≥ 0. Let ũ_h ∈ C be the fixed point. One checks as for Theorem 5.2 that ũ_h ≤ ũ_{2h}. Setting ũ_q = ũ_{1/2^q}, we get ũ_q ↓ ũ, and

         ũ_q ≤ ∫_0^{j/2^ℓ} e^{−βs} Φ^v(s)(L_v + (β−α)ũ_q) ds + e^{−βj/2^ℓ} Φ^v(j/2^ℓ) ũ_q ;

as for Theorem 5.2, one checks that ũ is a solution of (5.61), and that it is the maximum solution, since any other solution v satisfies v ≤ Θ_h v, hence v ≤ ũ_h.

Let us show that u = ũ, where u is the maximum element of (5.19). We will use Lemma 5.7 below to assert that

         u ≤ ∫_0^t e^{−βs} Φ^v(s)(L_v + (β−α)u) ds + e^{−βt} Φ^v(t) u ,

hence u ≤ ũ. However, still using Lemma 5.7, we have

         ũ ≤ ∫_0^t e^{−αs} Φ^v(s) L_v ds + e^{−αt} Φ^v(t) ũ ,

hence ũ ≤ u, and the desired result is proved.
Lemma 5.7. Let Φ(t) be a semi-group on B satisfying properties (5.15), with Φ(t) ≥ 0. Let w ∈ B, g ∈ B be such that

(5.62)  w ≤ ∫_0^t e^{−αs} Φ(s) g ds + e^{−αt} Φ(t) w ,   ∀t ≥ 0 .

Then for any β > 0 one has

(5.63)  w ≤ ∫_0^t e^{−βs} Φ(s)(g + (β−α)w) ds + e^{−βt} Φ(t) w ,   ∀t ≥ 0 .

Proof. We set

         H(t) = w − ∫_0^t e^{−αs} Φ(s) g ds − e^{−αt} Φ(t) w ;

we have H(0) = 0, H(t) ≤ 0 ∀t. In fact, we have the additional property

(5.64)  H(t) ≤ H(s)   for t ≥ s .

Indeed, (5.64) amounts to proving that

(5.65)  e^{−αs} Φ(s) w ≤ e^{−αt} Φ(t) w + ∫_s^t e^{−αλ} Φ(λ) g dλ ,   s ≤ t .

But from (5.62)

         w ≤ ∫_0^{t−s} e^{−αλ} Φ(λ) g dλ + e^{−α(t−s)} Φ(t−s) w ,

and applying the positive operator e^{−αs} Φ(s) to both sides we obtain (5.65).

Multiplying the identity defining H(t) by (β−α) e^{−(β−α)t} and integrating between 0 and T, we deduce

         [1 − e^{−(β−α)T}] w = ∫_0^T (β−α) e^{−βt} Φ(t) w dt
         + ∫_0^T (β−α) e^{−(β−α)t} (∫_0^t e^{−αs} Φ(s) g ds) dt + ∫_0^T (β−α) e^{−(β−α)t} H(t) dt
         = ∫_0^T (β−α) e^{−βt} Φ(t) w dt − e^{−(β−α)T} ∫_0^T e^{−αt} Φ(t) g dt
         + ∫_0^T e^{−βt} Φ(t) g dt + ∫_0^T (β−α) e^{−(β−α)t} H(t) dt ,

hence

(5.66)  w = ∫_0^T e^{−βt} Φ(t)(g + (β−α)w) dt + e^{−βT} Φ(T) w
         + e^{−(β−α)T} H(T) + ∫_0^T (β−α) e^{−(β−α)t} H(t) dt .

If β ≥ α, since H(t) ≤ 0 we clearly have (5.63) with t = T. If β < α, then using (5.64) we have

         e^{−(β−α)T} H(T) + ∫_0^T (β−α) e^{−(β−α)t} H(t) dt
         ≤ e^{−(β−α)T} H(T) + (β−α) H(T) ∫_0^T e^{−(β−α)t} dt = H(T) ≤ 0 ,

therefore (5.63) holds in all cases for t = T. Since T is arbitrary, the desired result is proved.
Theorem 5.6. We make the assumptions of Theorem 5.2, (5.56), (5.57) and (5.60). Then the maximum solution of (5.19) belongs to C.

Proof. Let ζ = S(z) be the maximum element of

         ζ ∈ B ,   ζ ≤ ∫_0^t e^{−βs} Φ^v(s)(L_v + (β−α)z) ds + e^{−βt} Φ^v(t) ζ ,   ∀v, ∀t .

This set has indeed a maximum element according to Theorem 5.2. This defines a map S : C → B. According to Theorem 5.5, provided β > λ, we can assert that S : C^{0,δ} → C^{0,δ}. We also know that

         S(z) = lim ↓ S_{1/2^q}(z)   as q ↑ ∞ ,   ∀z ∈ C ,

where ζ_h = S_h(z) is defined by

(5.67)  ζ_h = Min_v [∫_0^h e^{−βs} Φ^v(s)(L_v + (β−α)z) ds + e^{−βh} Φ^v(h) ζ_h] .

Note that S_h : B → B and C → C. One easily checks the estimate

(5.68)  ‖S_h(z1) − S_h(z2)‖ ≤ ((β−α)/β) ‖z1 − z2‖ ,

from which one deduces

(5.69)  ‖S(z1) − S(z2)‖ ≤ ((β−α)/β) ‖z1 − z2‖ ,   z1, z2 ∈ C .

We also note the relation, which follows from Lemma 5.7,

(5.70)  u ≤ S_h(u) .

Define now

         u^n = S^n(0) ,   u_h^n = S_h^n(0) .

Since S maps C^{0,δ} into itself, u^n ∈ C^{0,δ}. From (5.69) we have

         ‖u^{n+1} − u^n‖ ≤ ((β−α)/β)^n ‖u^1‖ ,

and thus u^n → w in C. We will show that

(5.71)  u = w ,

which will prove the desired result. We first remark that, from (5.68), S_h has a fixed point in C, denoted by w_h, and

(5.72)  S_h^n(0) → w_h .

From (5.69), (5.68) we have the corresponding contraction estimates (5.73), (5.74). From (5.70) we can assert that

(5.75)  u ≤ w_h .

By induction on n we check that

(5.76)  u_q^n ↓ u^n as q ↑ ∞ ,   and u_h^n ≤ u_{2h}^n ,   ∀n .

From (5.73), (5.74), (5.76) it follows that

         w_q(x) ↓ w(x) ,   ∀x ,

which with (5.75) shows that

(5.77)  u ≤ w .

But also

         w_h ≤ ∫_0^{ph} e^{−βs} Φ^v(s)(L_v + (β−α)w_h) ds + e^{−βph} Φ^v(ph) w_h ,

hence also, for q ≥ ℓ, the analogous inequality on dyadic times. Using a reasoning as in Theorem 5.2, we obtain easily that

         w ≤ ∫_0^t e^{−βs} Φ^v(s)(L_v + (β−α)w) ds + e^{−βt} Φ^v(t) w ,

hence also, using Lemma 5.7,

         w ≤ ∫_0^t e^{−αs} Φ^v(s) L_v ds + e^{−αt} Φ^v(t) w ,

which implies w ≤ u, and from (5.77) we see that (5.71) holds. This completes the proof.
Let us give an example where (5.56) is satisfied. Consider the S.D.E.

         dy = g(y) dt + σ(y) dw ,   y(0) = x ,

with

(5.78)  g, σ Lipschitz ;

a standard estimate on the flow then yields ‖Φ(t)φ‖_{C^{0,δ}} ≤ e^{λt} ‖φ‖_{C^{0,δ}} for a suitable λ, which proves (5.56).

Remark 5.4. For other details cf. M. Nisio [1], Bensoussan–Robin [1], Bensoussan–Lions [2].

COMMENTS ON CHAPTER IV
1. The method of improvement of regularity used in Theorem 2.1 is due to P.L. Lions.

2. Assumption (3.1) can be replaced by: Lebesgue measurable in x, and continuous in v, as mentioned in the evolution case, §4.1. In fact we need a selection theorem. There are two types of such theorems that we may use. Consider F(x,v), x ∈ ℝⁿ, v ∈ 𝒱 (compact subset of a metric space). Assume F l.s.c. in x,v, F bounded below. Then there exists a Borel function v̂(x) : ℝⁿ → 𝒱 such that

         F(x, v̂(x)) = inf_v F(x,v) ,   ∀x

(see for instance D. Bertsekas, S.E. Shreve [1]). The other theorem uses more explicitly the Lebesgue measure on ℝⁿ. We assume that F is a Caratheodory function, i.e. F is Lebesgue measurable in x ∀v, continuous in v a.e. x. Then there exists a Lebesgue measurable function v̂(x) : ℝⁿ → 𝒱 such that

         F(x, v̂(x)) = inf_v F(x,v)   a.e.

We can take a Borel representation of v̂(x), which is not unique. We write ess inf_v F(x,v) for the Lebesgue measurable function G(x) such that, if

         G̃(x) ≤ F(x,v) a.e. ,   ∀v ,

then G̃(x) ≤ G(x) a.e. Note that inf_v F(x,v) is in general not a Borel function, even when F(x,v) is Borel for any v (cf. I. Ekeland – R. Temam [1]).

3. The method of policy iteration was introduced by R. Bellman [1], in the general context of Dynamic Programming.

4. For the study of degenerate Dynamic Programming equations (i.e., the matrix a^{-1} does not necessarily exist) we refer to P.L. Lions – J.L. Menaldi [1].

5. J.P. Quadrat has formulated a generalized martingale control problem, which includes degeneracy (cf. J.P. Quadrat [1], [2]).

6. For numerical techniques to solve the H.J.B. equation see J.P. Quadrat [1], P.L. Lions – B. Mercier [1].

7. As we have said in the general introduction, the most complete treatment of the general Bellman equation is due to P.L. Lions [1], [2].

8. The problem of semi-group envelope was introduced by M. Nisio [2].

9. Nisio has also introduced a problem of non linear semi-group connected to stochastic control (cf. M. Nisio [1]).

10. In the context of Remark 5.3: under what conditions can we assert that the solution u of (5.55) coincides with that of (2.8)?
CHAPTER V

FILTERING AND PREDICTION FOR LINEAR S.D.E.

INTRODUCTION

We present here the classical theory of linear filtering, due to R.E. Kalman [1], R.E. Kalman – R.S. Bucy [1]. We have chosen a presentation which can be easily carried over to infinite dimensional systems, for which we refer to A. Bensoussan [1], R. Curtain – P.L. Falb [1], R. Curtain – A.J. Pritchard [1]. For filtering of jump processes, cf. P. Bremaud [1]. For non linear filtering, cf. R. Bucy – P. Joseph [1], and the recent developments in E. Pardoux [1], T. Allinger – S.K. Mitter [1].
1. SETTING OF THE PROBLEM

We consider a usual system (Ω, 𝒜, P, ℱ^t, w(t)), and y(t) to be the solution of the linear S.D.E.

(1.1)   dy = (F(t)y + f(t)) dt + G(t) dw ,   y(0) = ξ ,

where

(1.2)   F ∈ L∞(0,∞; ℒ(ℝⁿ;ℝⁿ)) ,   G ∈ L∞(0,∞; ℒ(ℝⁿ;ℝⁿ)) ,   f(·) ∈ L²(0,∞;ℝⁿ) .

Clearly the standard theory applies, since

         g(x,t) = F(t)x + f(t) ,   σ(x,t) = G(t) .
(1.3)   ξ is Gaussian with mean x̄ and covariance matrix P0 .

To the O.D.E.

(1.4)   dx/dt = F(t)x + g(t)

corresponds a fundamental matrix Φ(t,τ), such that the solution of (1.4) can be expressed as

(1.5)   x(t) = Φ(t,0) x(0) + ∫_0^t Φ(t,τ) g(τ) dτ ,

where g ∈ L²(0,∞;ℝⁿ). The family Φ(t,τ) has the group property

(1.6)   Φ(t,s) Φ(s,τ) = Φ(t,τ) ,   ∀t, s, τ ,

(1.7)   Φ(t,t) = I .

It is easy to check that the solution of (1.1) can be expressed by

(1.8)   y(t) = Φ(t,0) ξ + ∫_0^t Φ(t,τ) f(τ) dτ + ∫_0^t Φ(t,τ) G(τ) dw(τ) ,

where the last integral is a stochastic integral. Formula (1.8) is a representation formula for the process y(t).
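As a computational aside (not part of the text), the fundamental matrix Φ(t,τ) of (1.4) can be produced numerically by integrating the matrix ODE dΦ/dt = F(t)Φ, Φ(τ,τ) = I, and the group property (1.6) then serves as a consistency check. The matrix F below is invented, and a fixed-step Runge–Kutta scheme is used for the sketch.

```python
import numpy as np

# Φ(t,τ) solves dM/ds = F M on [τ, t] with M(τ) = I (here F is constant).
F = np.array([[0.0, 1.0], [-2.0, -0.3]])

def Phi(t, tau, steps=2000):
    M, hstep = np.eye(2), (t - tau) / steps
    for _ in range(steps):                  # classical RK4 step for M' = F M
        k1 = F @ M
        k2 = F @ (M + 0.5 * hstep * k1)
        k3 = F @ (M + 0.5 * hstep * k2)
        k4 = F @ (M + hstep * k3)
        M = M + hstep / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return M

# group property (1.6): Φ(t,s) Φ(s,τ) = Φ(t,τ)
err = np.max(np.abs(Phi(2.0, 1.0) @ Phi(1.0, 0.0) - Phi(2.0, 0.0)))
```

For constant F the exact Φ(t,τ) is the matrix exponential exp((t−τ)F), so the group-property error `err` only measures the integration accuracy.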
It is also useful to notice the following. Let h ∈ ℝⁿ and

(1.9)   −dφ/dt = F*(t)φ ,   φ(T) = h ;

then we have

(1.10)  y(T)·h = φ(0)·ξ + ∫_0^T φ(t)·f(t) dt + ∫_0^T φ(t)·G(t) dw(t) .

Since

(1.11)  φ(t) = Φ*(T,t) h ,

it is easy to deduce (1.8) from (1.10) and (1.11).

It is clear from (1.8) or (1.10) that y(T) is a Gaussian variable with expectation ȳ(T) such that

(1.12)  ȳ(T) = Φ(T,0) x̄ + ∫_0^T Φ(T,t) f(t) dt ,

i.e.

(1.13)  dȳ/dt = F(t)ȳ + f(t) ,   ȳ(0) = x̄ .

Let

         ỹ(t) = y(t) − ȳ(t) ;

then from (1.10)

(1.14)  ỹ(T)·h = φ(0)·ξ̃ + ∫_0^T φ(t)·G(t) dw(t) ,

where

(1.15)  ξ̃ = ξ − x̄ .

Define next

(1.16)  −dψ/dt = F*(t)ψ ,   ψ(T) = k ;

then

(1.17)  ỹ(T)·k = ψ(0)·ξ̃ + ∫_0^T ψ(t)·G(t) dw(t) ,

hence from (1.14), (1.17) we deduce

(1.18)  E[ỹ(T)·h ỹ(T)·k] = P0 φ(0)·ψ(0) + ∫_0^T G*(t)φ(t)·G*(t)ψ(t) dt = Π(T)h·k ,

where Π(T) denotes the covariance operator of y(T) (or ỹ(T)).
Hence we have the formula

(1.19)  Π(T)h·k = P0 φ(0)·ψ(0) + ∫_0^T G(t)G*(t)φ(t)·ψ(t) dt ,

(1.20)  Π(T) = Φ(T,0) P0 Φ*(T,0) + ∫_0^T Φ(T,t) G(t)G*(t) Φ*(T,t) dt .

We will set for simplicity

(1.21)  G(t)G*(t) = Q(t) .

We can deduce from (1.20) that Π is the solution of a differential equation. We have

(1.22)  Π(T)h·k = P0 Φ*(T,0)h·Φ*(T,0)k + ∫_0^T Q(t) Φ*(T,t)h·Φ*(T,t)k dt .

The function s → Φ(s,t)h belongs to H¹(t,T;ℝⁿ), as does Φ*(s,t)h, and

(1.23)  d/ds Φ(s,t)h = F(s) Φ(s,t)h .
(1.24)  d/ds Φ*(s,t)h = Φ*(s,t) F*(s)h .

We can differentiate (1.22) with respect to T, using (1.24). We obtain

         dΠ/dT h·k = P0 Φ*(T,0)F*(T)h·Φ*(T,0)k + P0 Φ*(T,0)h·Φ*(T,0)F*(T)k + Q(T)h·k
         + ∫_0^T (Q(t)Φ*(T,t)F*(T)h·Φ*(T,t)k + Q(t)Φ*(T,t)h·Φ*(T,t)F*(T)k) dt ,

and from (1.20) we get

(1.25)  dΠ/dT = F(T)Π + Π F*(T) + Q(T) ,   Π(0) = P0 .

We thus have proved

Lemma 1.1. The process y, solution of (1.1), is a Gaussian process whose mathematical expectation ȳ(t) is solution of (1.13) and whose covariance matrix Π(t) is solution of the equation (1.25).
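As an illustration (not from the text), the covariance equation (1.25) is an ordinary matrix differential equation and can be integrated directly; the F, Q, P0 below are invented data, and a plain explicit Euler scheme is used for the sketch.

```python
import numpy as np

# Covariance equation (1.25): dΠ/dT = F Π + Π F* + Q, Π(0) = P0,
# integrated by explicit Euler (constant F and Q for simplicity).
F = np.array([[0.0, 1.0], [-1.0, -0.5]])
Q = np.array([[0.1, 0.0], [0.0, 0.2]])     # Q = G G*
P0 = np.eye(2)

def Pi(T, steps=20000):
    P, dt = P0.copy(), T / steps
    for _ in range(steps):
        P = P + dt * (F @ P + P @ F.T + Q)
    return P

P1 = Pi(1.0)
```

Note that the Euler update preserves symmetry exactly, so the computed Π(T) stays a symmetric (and here positive definite) matrix, as the covariance interpretation of Lemma 1.1 requires.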
We next define a process z(t) by setting

(1.26)  z(t) = ∫_0^t H(s) y(s) ds + η(t) ,

where

(1.27)  H ∈ L∞(0,∞; ℒ(ℝⁿ;ℝᵖ)) ,

(1.28)  η(t) is an ℱ^t continuous martingale with values in ℝᵖ, the increasing process of η(t)·θ (θ ∈ ℝᵖ) being

(1.29)  ∫_0^t R(s)θ·θ ds ,

where R is symmetric invertible, and R, R^{-1} ∈ L∞(0,∞; ℒ(ℝᵖ;ℝᵖ)). From the representation theorem of continuous martingales, we have

         η(t) = ∫_0^t R^{1/2}(s) db(s) ,

where b is a standard ℱ^t Wiener process. We also assume

(1.30)  η(t) is independent from ξ and w(·) .

It is clear that z(t) is a Gaussian process, whose expectation z̄(t) is given by

(1.31)  z̄(t) = ∫_0^t H(s) ȳ(s) ds .

Set

         z̃(t) = z(t) − z̄(t) ;

then

(1.32)  z̃(t) = ∫_0^t H(s) ỹ(s) ds + η(t) .

We also have

(1.33)  E ỹ(s1) ỹ*(s2) = Φ(s1,s2) Π(s2)   if s1 ≥ s2 .

Indeed

(1.34)  ỹ(s1) = Φ(s1,s2) ỹ(s2) + ∫_{s2}^{s1} Φ(s1,s) G(s) dw(s) ,

hence, for h, k ∈ ℝⁿ, we have from (1.34)

         E[ỹ(s1)·h ỹ(s2)·k] = E[ỹ(s2)·Φ*(s1,s2)h ỹ(s2)·k] = Π(s2)Φ*(s1,s2)h·k ,

from which we deduce (1.33). From (1.34) and (1.32) it is easy, although tedious, to deduce the covariance matrix of z(t) and the correlation function.

We consider that the process y(t) cannot be observed, whereas z(t) can be observed. The filtering problem consists in estimating the value of y(t), knowing the past observations. More precisely, we are interested in

(1.35)  ŷ(t) = E[y(t) | 𝒵^t] ,   𝒵^t = σ(z(s), 0 ≤ s ≤ t) .

We note the following
Lemma 1.2. We have

(1.36)  σ(z(s), 0 ≤ s ≤ T) = σ(∫_0^T φ(t)·dz(t) , ∀φ ∈ L²(0,T;ℝᵖ)) .

Proof. Denote by ℬ1 the σ-algebra on the left of (1.36) and by ℬ2 the σ-algebra on the right. Since by definition

         ∫_0^T φ(t)·dz(t) = ∫_0^T φ(t)·H(t)y(t) dt + ∫_0^T φ(t)·dη(t) ,

the assertion that ∫_0^T φ(t)·dz(t) is ℬ1 measurable is not completely obvious. However, when φ is a step function, this fact is clear. By convergence in L², we see that this fact is true for general φ, hence ℬ2 ⊂ ℬ1. On the other hand, taking step functions φ we recover the variables z(s), hence ℬ1 ⊂ ℬ2, and (1.36) follows.

Lemma 1.3. We have, for φ, ψ ∈ L²(0,T;ℝᵖ),

(1.37)  E[∫_0^T φ(t)·dz̃(t) ∫_0^T ψ(t)·dz̃(t)] = ∫_0^T φ(t)·R(t)ψ(t) dt
         + ∫_0^T ∫_0^T φ(s)·H(s) Λ(s,t) H*(t)ψ(t) ds dt ,

where Λ(s,t) = E ỹ(s)ỹ*(t).

Proof. Immediate from the definition of Λ(s,t) and formula (1.32).
FILTERING AND PREDICTION
2.
CHARACTERIZATION OF THE BEST ESTIMATE
2.1. kljkl-rg;uA;
We w i l l c h a r a c t e r i z e t h e b e s t e s t i m a t e by t h e f o l l o w i n g
Theorem 2 . 1 .
We assume ( 1 . 2 1 ,
Then t h e r e e x i s t s
11.31,
(1.27),
(1.28), 1 1 . 2 3 / , ( 1 . 3 0 ) .
map
CI
I
1
E
Z(L~(O,T;R~);R")
T
such t h a t V h (2.1)
proos Let u s w r i t e
(2.2)
We remark t h a t
E
R"
9 ( T ) . h = JT
i;
h.dy + y(T).h
.
199
200
CHAPTER V
We note that (2.6)
Let
(Q12h,@(.))
C
=
E y(T1.h
@(t).dz(t)
be any element of Z ( L 2 (0,T;R’);R”)
(2.7)
We have since
-
ECy(T1.h Q,
JTC* h.dzI2
=
.
and consider
dh(C)
.
i s invertible
&,(I)
=
Q 1 1 h.h + CQ 22 C* h.h - 2 C Q 12h.h
=
[Qll
+
(Z-Q;2Q;:)Q22(C*-Q;:Q12)
- Q;2Q;:Q,21h.h Clearly we have
zLh(C)
2
gh(iT)
where
* (2.8)
a’’
‘T
=
-1
Q l 2‘22
.
=
-
20 1
FILTERING AND PREDICTION
and
(2.11)
E
Eh (w)
l Let
ek
T
1 C*h2.dg
=
o
be a unit vector i n
Rp.
Let
C Ql2h1.h2 - Z Q,, M
E
to be chosen in order that
M h 2 = ek which is always possible (take for instance
Mkl=l Next define for
I$
E
Then by construction
and
Mkj=O 2
L (O,T;Rp)
j + l .
g(Rn;Rp) and
i$hl.h2 h,
E
Rn
=
0
202
CHAPTER V
Therefore (2.1 I) shows that
V k
,V
s.
is non correlated with
Since both are
Hence for any fixed -independant. ti s , OSsST. Since
h
Rn, E~
E
Y
z(s)
for any
sl,
...,sk
o ( z ( s ) , OSsST)
gk(s)
Gaussian variables, they are
, we see% l
is independant from
z(sl),
...,Z(sk)
that
ch is independant of
form a gaussian vector
= dT '
But Eh
y(T).h
=
-
(JT f;h.dy
+ y(T).h)
and
iT i;h.dy
+ y(T).h
E
2 L ( Q, B T;p)
and
E
~~c
=
V 5
0
2 L (Q,BT;P)
E
.
-
T This means that { ZTh.dz + y(T).h is the projection of y(T).h on 2 L (Q,dT;P), which is by definition the conditional expectation F(T).h.
0
Hence the result. Let u s set (2.12) then (2.13)
63h.h
= a (1 )
=
h T It is natural to call
E~
inf gh(C)
c
.
the estimation e r r o r .
covariance of the error. Introduce the following problem.
Then 6 is the
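In finite dimensions the construction in the proof reduces to the usual normal-equations computation. A minimal numerical sketch (Python; the jointly Gaussian pair below is built arbitrarily for illustration and is not data from the text) checks that the analogue of (2.8), $\Sigma = Q_{12}Q_{22}^{-1}$ in matrix orientation, makes the error uncorrelated with the observations, and that the residual $Q_{11} - Q_{12}Q_{22}^{-1}Q_{12}^*$ is the (nonnegative) error covariance:

```python
import numpy as np

# Finite-dimensional analogue of Theorem 2.1: for a jointly Gaussian,
# zero-mean pair (Y, Z), the best linear estimate of Y from Z uses
# Sigma = Q12 Q22^{-1}, and the error Y - Sigma Z is uncorrelated with Z.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))   # Y = A xi, with xi standard Gaussian
B = rng.normal(size=(3, 5))   # Z = B xi

Q11 = A @ A.T                 # cov(Y, Y)
Q12 = A @ B.T                 # cov(Y, Z)
Q22 = B @ B.T                 # cov(Z, Z), invertible here

Sigma = Q12 @ np.linalg.inv(Q22)

# Orthogonality: cov(Y - Sigma Z, Z) = Q12 - Sigma Q22 = 0
orth = Q12 - Sigma @ Q22
assert np.allclose(orth, 0.0)

# Error covariance P = Q11 - Q12 Q22^{-1} Q12^T: the residual term in the
# decomposition of D_h(Sigma) in the proof; it is symmetric nonnegative.
P = Q11 - Sigma @ Q12.T
assert np.all(np.linalg.eigvalsh(P) >= -1e-10)
```

The identity `Q12 - Sigma @ Q22 = 0` is the finite-dimensional version of the orthogonality relation (2.11).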
where $q(\cdot) \in L^2(0,T;R^p)$ and $h$ is a parameter. Clearly $J_h(q(\cdot))$ reaches its minimum at the point
(2.15)  $\hat q_T = \hat\Sigma_T^* h$ .
Our next step will be to give more convenient computational procedures to obtain $\hat q_T$ or $\hat\Sigma_T^*$.
We first make explicit the functional $J_h(q(\cdot))$. We first remark some useful formulas. Define $\beta$ (depending on $q$) as the solution of
(2.16)  $-\dfrac{d\beta}{dt} = F^*(t)\beta - H^*(t)q(t)$ , $\beta(T) = h$ ,
and consider also $\nu$ defined by
(2.19)  $\dfrac{d\nu}{dt} = F(t)\nu - Q(t)\beta(t)$ , $\nu(0) = -P_0\beta(0)$ .
Computing $Q_{11}h\cdot h - 2(Q_{12}h, q)$ and $(Q_{22}q, q)$ by means of $\beta$ and $\nu$, and integrating by parts, one obtains, with $q = \Sigma^* h$,
(2.23)  $J_h(q(\cdot)) = \mathcal{D}_h(\Sigma) = \int_0^T R(t)q(t)\cdot q(t)\,dt + \int_0^T Q(t)\beta(t)\cdot\beta(t)\,dt + P_0\beta(0)\cdot\beta(0)$ ,
where $\beta$ is the solution of the state equation (2.22) = (2.16). Therefore $\hat\Sigma_T^* h$ appears as the solution of an optimal control problem, namely (2.22), (2.23).
Moreover, from (2.13) we can assert
(2.24)  $\mathcal{P}h\cdot h = P(T)h\cdot h = \inf_q J_h(q(\cdot))$ .
The notation $P(T)$ means the covariance of the error.

We can now solve the control problem (2.22), (2.23). Let us consider the system in $\alpha_T, \beta_T$
(2.25)  $\dfrac{d\alpha_T}{dt} = F(t)\alpha_T - Q(t)\beta_T$ , $\alpha_T(0) = -P_0\beta_T(0)$ ,
        $-\dfrac{d\beta_T}{dt} = F^*(t)\beta_T + H^*(t)R^{-1}(t)H(t)\alpha_T$ , $\beta_T(T) = h$ ;
then we have

Lemma 2.1. The system (in $\alpha_T, \beta_T$) (2.25) has one and only one solution. Moreover we have
(2.26)  $\hat q_T(t) = \hat\Sigma_T^* h(t) = -R^{-1}(t)H(t)\alpha_T(t)$ ,
(2.27)  $P(T)h = -\alpha_T(T)$ .

Proof. We write the Euler equations for the control problem (2.22), (2.23). Let $\hat q_T$, $\beta_T$ be the optimal control and state; then we have
(2.28)  $\int_0^T R(t)\hat q_T(t)\cdot\bar q(t)\,dt + \int_0^T Q(t)\beta_T(t)\cdot\bar\beta(t)\,dt + P_0\beta_T(0)\cdot\bar\beta(0) = 0 \quad \forall \bar q \in L^2(0,T;R^p)$ ,
where
$$-\dfrac{d\bar\beta}{dt} = F^*(t)\bar\beta - H^*(t)\bar q(t)$ , $\bar\beta(T) = 0 .$$
Define $\alpha_T$ by (2.25), where $\beta_T$ has the meaning of the optimal state; we deduce from (2.28), after replacing $Q\beta_T$ by $F\alpha_T - \dfrac{d\alpha_T}{dt}$ and integrating by parts,
$$\int_0^T \big(R(t)\hat q_T(t) + H(t)\alpha_T(t)\big)\cdot\bar q(t)\,dt = 0 ,$$
and since $\bar q$ is arbitrary we see that (2.26) holds. Hence (2.25) has a solution and we have (2.26). Conversely, consider a solution of (2.25); then $\beta_T$ is necessarily the optimal state of the control problem (2.22), (2.23). Therefore it is uniquely defined, hence also $\alpha_T$. Let us compute $P(T)h\cdot h = J_h(\hat q_T)$; from (2.25) and (2.26) we obtain
(2.29)  $P(T)h\cdot h = -\alpha_T(T)\cdot h$ .
Write $\alpha_T(T) = \alpha_T(T;h)$ and remark that the function $h \to \alpha_T(T;h)$ is linear. Then, using the symmetry of $P(T)$, we have
$$P(T)(h+k)\cdot(h+k) = -\alpha_T(T;h+k)\cdot(h+k) ,\quad
P(T)h\cdot h = -\alpha_T(T;h)\cdot h ,\quad
P(T)k\cdot k = -\alpha_T(T;k)\cdot k ,$$
hence
$$P(T)h\cdot k = -\tfrac{1}{2}\big(\alpha_T(T;h)\cdot k + \alpha_T(T;k)\cdot h\big) .$$
It is easy to check from (2.25) that $\alpha_T(T;h)\cdot k = \alpha_T(T;k)\cdot h$, hence
$$P(T)h\cdot k = -\alpha_T(T;h)\cdot k ,$$
from which follows (2.27). $\square$
3. RECURSIVITY. KALMAN FILTER

3.1.

Consider now, by analogy with (2.25), the system
(3.1)  $\dfrac{dy_T}{dt} = F(t)y_T - Q(t)p_T + f(t)$ , $y_T(0) = x - P_0 p_T(0)$ ,
       $-\dfrac{dp_T}{dt} = F^*(t)p_T + H^*(t)R^{-1}(t)\big(H(t)y_T - z_d(t)\big)$ , $p_T(T) = h$ .
We have the following

Theorem 3.1. We make the assumptions of Theorem 2.1, with $z_d \in L^2(0,T;R^p)$, $f \in L^2(0,T;R^n)$. Then system (3.1) has one and only one solution $y_T, p_T$, and
(3.2)  $y_T(T) = \bar y(T) + \Sigma_T\big(z_d - H(\cdot)\bar y(\cdot)\big) - P(T)h$ .

Proof. By analogy with Lemma 2.1, we see that system (3.1) consists in the Euler conditions of an optimal control problem. This problem is the following: the state equation is
(3.3)  $-\dfrac{d\lambda}{dt} = F^*(t)\lambda - H^*(t)\big(q(t) + R^{-1}(t)z_d(t)\big)$ , $\lambda(T) = h$ ,
with the payoff
(3.4)  $J(q(\cdot)) = \int_0^T R\,q\cdot q\,dt + \int_0^T Q\,\lambda\cdot\lambda\,dt + P_0\lambda(0)\cdot\lambda(0) - 2x\cdot\lambda(0)$ .
The optimal control $\hat q$ is given by
(3.5)  $\hat q(t) = -R^{-1}(t)H(t)y_T(t)$ .
Define next $\tilde y_T = y_T - \bar y$, where $\bar y$ is the solution of $d\bar y/dt = F\bar y + f$, $\bar y(0) = x$; hence the pair $\tilde y_T, p_T$ is solution of
(3.7)  $\dfrac{d\tilde y_T}{dt} = F\tilde y_T - Q p_T$ , $\tilde y_T(0) = -P_0 p_T(0)$ ,
       $-\dfrac{dp_T}{dt} = F^* p_T - H^*(t)R^{-1}(t)\big(z_d - H(t)y_T\big)$ , $p_T(T) = h$ .
Now from (3.7) and (2.25) we deduce, using (2.26),
(3.8)  $\tilde y_T(T)\cdot h = \big(z_d - H(\cdot)\bar y(\cdot),\ \hat\Sigma_T^* h\big) - P(T)h\cdot h$ ,
therefore
(3.9)  $\tilde y_T(T) = \Sigma_T\big(z_d - H(\cdot)\bar y(\cdot)\big) - P(T)h$ ,
which with (2.27) completes the proof of (3.2). $\square$

We write
(3.10)  $r(T) = r(T;z_d) = \Sigma_T z_d + \bar y(T) - \Sigma_T H(\cdot)\bar y(\cdot)$ ;
then from (3.2) we see that
(3.11)  $y_T(T) = r(T) - P(T)h$ .
Our next task will be to obtain evolution formulas for the pair $r(\cdot)$, $P(\cdot)$.
3.2. Recursive formulas

We will describe a decoupling argument on system (3.1).

Lemma 3.1. The functions $y_T(t)$, $p_T(t)$ are related by the relation
(3.12)  $y_T(t) = r(t) - P(t)p_T(t) \quad \forall t \in [0,T]$ .

Proof. It will be convenient to denote by $y_{T,h}(t)$, $p_{T,h}(t)$ the solution of (3.1), to emphasize the dependence in $h$. Consider next system (3.1) on the interval $(0,t)$ instead of $(0,T)$, with final condition $p_T(t)$ instead of $h$; more precisely, consider the functions $y_{t,p_T(t)}(s)$, $p_{t,p_T(t)}(s)$, $s \le t$. Since $y_{T,h}(s)$, $p_{T,h}(s)$ satisfy this system, and since the solution is unique, we can assert that
(3.13)  $y_{t,p_T(t)}(s) = y_T(s)$ , $p_{t,p_T(t)}(s) = p_T(s)$ for $s \le t$ .
But by definition of $r, P$ we have
$$y_{t,p_T(t)}(t) = r(t) - P(t)p_T(t) ,$$
which with (3.13) proves (3.12). $\square$
Lemma 3.2. The function $P(t) \in H^1\big(0,T;\mathcal{L}(R^n;R^n)\big)$ and is the unique solution of the equation of Riccati type
(3.14)  $\dfrac{dP}{dt} = F(t)P + PF^*(t) - P\,D(t)\,P + Q(t)$ , $P(0) = P_0$ ,
where we have set
(3.16)  $D(t) = H^*(t)R^{-1}(t)H(t)$ .

Proof. By Lemma 3.1 applied to system (2.25) we may assert that
(3.15)  $\alpha_T(t) = -P(t)\beta_T(t)$ .
Now from (2.24) we have
$$P(T)h\cdot h = \int_0^T D\,\alpha_T\cdot\alpha_T\,dt + P_0\beta_T(0)\cdot\beta_T(0) + \int_0^T Q\,\beta_T\cdot\beta_T\,dt .$$
From (3.15) we obtain (dropping the index $T$ on $\alpha_T, \beta_T$)
(3.17)  $P(T)h\cdot h = \int_0^T (PDP + Q)\beta\cdot\beta\,dt + P_0\beta(0)\cdot\beta(0)$ .
Next, from the second equation (2.25), we get
(3.18)  $-\dfrac{d\beta}{dt} = (F^* - DP)\beta$ , $\beta(T) = h$ .
Define
(3.19)  $\Phi_P(t,s) =$ fundamental matrix associated to $F(t) - P(t)D(t)$ .
Then the solution of (3.18) can be expressed as $\beta(t) = \Phi_P^*(T,t)h$, which, going back to (3.17), yields
(3.20)  $P(T) = \int_0^T \Phi_P(T,t)\big(P(t)D(t)P(t) + Q(t)\big)\Phi_P^*(T,t)\,dt + \Phi_P(T,0)P_0\Phi_P^*(T,0)$ .
By analogy with (1.20), (1.25), formula (3.20) proves that $P$ is differentiable in $t$ and that (3.14) holds. Uniqueness follows from the fact that if $P_1$ is a solution of (3.14), then defining $\beta$ by (3.18) and $\alpha$ by (3.15), the pair $\alpha, \beta$ is a solution of (2.25) and is thus defined in a unique way. Applying (3.15) at time $T$ shows that $P_1(T)h = P_2(T)h$ if $P_1, P_2$ are two solutions. Since $h$ is arbitrary, as well as $T$, the uniqueness follows. $\square$
Lemma 3.3. The function $r \in H^1(0,T;R^n)$ and is the unique solution of
(3.21)  $\dfrac{dr}{dt} = \big(F(t) - P(t)D(t)\big)r(t) + f(t) + P(t)H^*(t)R^{-1}(t)z_d(t)$ , $r(0) = x$ .

Proof. By (3.12) it is clear that $r = y_T + P\,p_T \in H^1(0,T;R^n)$. Also
$$\dfrac{dr}{dt} = Fy_T - Qp_T + f + (FP + PF^* - PDP + Q)p_T + P\big(-F^*p_T - Dy_T + H^*R^{-1}z_d\big)$$
$$= (F - PD)y_T + (F - PD)P\,p_T + f + PH^*R^{-1}z_d ,$$
i.e. (3.21). Moreover
$$r(0) = y_T(0) + P(0)p_T(0) = x ,$$
which completes the proof of the desired result. $\square$
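The Riccati equation of Lemma 3.2 is easily integrated forward numerically. A minimal sketch (Python; the constant scalar coefficients below are illustrative assumptions, not data from the text) uses explicit Euler stepping and checks convergence to the stationary solution of the corresponding algebraic Riccati equation:

```python
import numpy as np

# Forward Euler integration of the scalar Riccati equation
#   dP/dt = 2 F P - P H^2 P / R + Q,   P(0) = P0,
# with illustrative constant coefficients (assumptions for the example).
F, H, Q, R, P0 = -1.0, 1.0, 0.5, 0.2, 1.0
dt, T = 1e-4, 5.0
P = P0
for _ in range(int(T / dt)):
    P += dt * (2 * F * P - P * H / R * H * P + Q)

# For constant coefficients, P(t) converges to the stationary solution of
# the algebraic Riccati equation 2 F P - H^2 P^2 / R + Q = 0.
P_inf = (F + np.sqrt(F**2 + H**2 * Q / R)) * R / H**2
assert abs(P - P_inf) < 1e-3
```

The same loop, applied to the equation of Lemma 3.3 with the computed $P(t)$, propagates $r(t)$.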
3.3. Kalman filter

Let us consider the observation process (1.26):
$$z(t) = \int_0^t H(s)y(s)\,ds + \int_0^t R^{1/2}(s)\,db(s) ,$$
where $b$ is a $\mathcal{Z}^t$ standard Wiener process with values in $R^p$. We consider the S.D.E.
(3.22)  $d\hat y = \big((F(t) - P(t)D(t))\hat y(t) + P(t)D(t)y(t) + f(t)\big)dt + P(t)H^*(t)R^{-1/2}(t)\,db(t)$ , $\hat y(0) = x$ .
Its solution can be written as
$$\hat y(T) = \Phi_P(T,0)x + \int_0^T \Phi_P(T,t)\big(P(t)D(t)y(t) + f(t)\big)dt + \int_0^T \Phi_P(T,t)P(t)H^*(t)R^{-1/2}(t)\,db(t) .$$
Since $dz = Hy\,dt + R^{1/2}db$, we can rewrite this as
$$\hat y(T) = \Phi_P(T,0)x + \int_0^T \Phi_P(T,t)f(t)\,dt + \int_0^T \Phi_P(T,t)P(t)H^*(t)R^{-1}(t)\,dz(t) .$$
Now from (3.21) we also can assert
$$r(T) = \Phi_P(T,0)x + \int_0^T \Phi_P(T,t)\big(f(t) + P(t)H^*(t)R^{-1}(t)z_d(t)\big)dt .$$
Take $x = 0$, $f = 0$; then $\bar y = 0$, and by the definition (3.10) of $r$,
$$r(T)\cdot h = \big(\hat\Sigma_T^* h, z_d\big) ,$$
while the representation above gives
$$r(T)\cdot h = \int_0^T R^{-1}(t)H(t)P(t)\Phi_P^*(T,t)h\cdot z_d(t)\,dt ;$$
comparing the two expressions yields
$$\hat\Sigma_T^* h\,(t) = R^{-1}(t)H(t)P(t)\Phi_P^*(T,t)h ,$$
and consequently $\Sigma_T \varphi = \int_0^T \Phi_P(T,t)P(t)H^*(t)R^{-1}(t)\varphi(t)\,dt$.
Let us show that
$$\Phi_P(T,0)x + \int_0^T \Phi_P(T,t)f(t)\,dt = \bar y(T) - \int_0^T \Phi_P(T,t)P(t)D(t)\bar y(t)\,dt .$$
Indeed, let $\Theta(T)$ be the right hand side; differentiating in $T$, we get
$$\dfrac{d\Theta}{dT} = (F - PD)\Theta + f ,\quad \Theta(0) = x ,$$
therefore $\Theta(T) = \Phi_P(T,0)x + \int_0^T \Phi_P(T,t)f(t)\,dt$, which proves the identity. Therefore, going back to the representation of $\hat y(T)$, we see that
$$\hat y(T) = \bar y(T) - \Sigma_T H(\cdot)\bar y(\cdot) + \int_0^T \Phi_P(T,t)P(t)H^*(t)R^{-1}(t)\,dz(t) .$$
This formula compared to (2.1) shows that $\hat y(T)$ is the best estimate of $y(T)$.
We thus have proved the

Theorem 3.2. We make the assumptions of Theorem 2.1. Then the conditional expectation $\hat y(T)$ of $y(T)$ given $z(s)$, $0 \le s \le T$, is the solution of the S.D.E.
(3.23)  $d\hat y = \big(F(t) - P(t)H^*(t)R^{-1}(t)H(t)\big)\hat y(t)\,dt + f(t)\,dt + P(t)H^*(t)R^{-1}(t)\,dz(t)$ , $\hat y(0) = x$ .

Let $\varepsilon = y - \hat y$; then
(3.24)  $d\varepsilon = (F - PD)\varepsilon\,dt + G\,dw - PH^*R^{-1/2}\,db$ , $\varepsilon(0) = \xi - x$ ,
and define
$$\nu(t) = \int_0^t R^{-1/2}(s)\big(dz(s) - H(s)\hat y(s)\,ds\big) ,\quad \nu(0) = 0 .$$
Lemma 3.4. The process $\nu(t)$ is a $\mathcal{Z}^t$ standard Wiener process with values in $R^p$, where $\mathcal{Z}^t = \sigma(z(s), s \le t)$, and $\varepsilon(s)$ is independent of $\mathcal{Z}^s$, $\forall s$.

Proof. Since $w(t) - w(s)$ and $b(t) - b(s)$ are independent of $\mathcal{F}^s$ for $t \ge s$, it is easy to check that the process $(\varepsilon(t), \nu(t))$ is clearly Gaussian. We consider the pair as the solution of a linear system
$$d\begin{pmatrix}\varepsilon\\ \nu\end{pmatrix} = \Phi\begin{pmatrix}\varepsilon\\ \nu\end{pmatrix}dt + \Psi\,d\begin{pmatrix}w\\ b\end{pmatrix} ,\quad \text{where } \Phi = \begin{pmatrix} F - PH^*R^{-1}H & 0 \\ R^{-1/2}H & 0 \end{pmatrix} .$$
The covariance matrix $\Pi$ of the vector $(\varepsilon, \nu)$ is the solution of a linear differential equation. We can write
$$\Pi = \begin{pmatrix} \Pi_1 & \Pi_{12} \\ \Pi_{12}^* & \Pi_2 \end{pmatrix} ;$$
in fact $\Pi_{12} = 0$. Indeed
$$d\nu = R^{-1/2}H(y - \hat y)\,dt + db = R^{-1/2}\big(dz - H\hat y\,dt\big) ,$$
hence $\nu(t)$ is adapted to $\mathcal{Z}^t$, since $\hat y(t)$ is. Since $\varepsilon(t)$ is independent of $\mathcal{Z}^t$, $\varepsilon(t)$ and $\nu(t)$ are independent, hence $\Pi_{12} = 0$.
Developing then the covariance equation, we obtain
$$\dot\Pi_1 = \big(F - PH^*R^{-1}H\big)\Pi_1 + \Pi_1\big(F^* - H^*R^{-1}HP\big) + GG^* + PH^*R^{-1}HP ,\quad \Pi_1(0) = P_0 ,$$
hence $\Pi_1 = P$, which we already knew, and
$$\dot\Pi_2 = I ,\quad \Pi_2(0) = 0 ,\quad \text{hence } \Pi_2(t) = tI .$$
This completes the proof of the desired result. $\square$
We call innovation process the process
(3.30)  $I(t) = z(t) - \int_0^t H(s)\hat y(s)\,ds$ ,
hence $I(t) = \int_0^t R^{1/2}(s)\,d\nu(s)$, and (3.23) can be written as the S.D.E.
(3.31)  $d\hat y = F(t)\hat y(t)\,dt + f(t)\,dt + P(t)H^*(t)R^{-1/2}(t)\,d\nu(t)$ , $\hat y(0) = x$ .

Remark 3.1. The process $\hat y(t)$ is called the Kalman filter. The matrix $K(t) = P(t)H^*(t)R^{-1/2}(t)$ is called the gain of the filter. $\square$
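A discrete-time counterpart of the filter is straightforward to simulate. The sketch below (Python; the scalar model and noise levels are illustrative assumptions, not taken from the text) alternates a prediction step with a correction by the innovation weighted by the gain:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative scalar discrete-time model (assumed for the example):
#   y_{k+1} = a y_k + w_k,   z_k = h y_k + v_k
a, h, q, r = 0.95, 1.0, 0.1, 0.5   # q = Var(w), r = Var(v)

y, y_hat, P = 0.0, 0.0, 1.0
errors = []
for _ in range(2000):
    # true state and observation
    y = a * y + rng.normal(scale=np.sqrt(q))
    z = h * y + rng.normal(scale=np.sqrt(r))
    # prediction
    y_hat = a * y_hat
    P = a * P * a + q
    # correction by the innovation z - h y_hat, weighted by the gain K
    K = P * h / (h * P * h + r)
    y_hat = y_hat + K * (z - h * y_hat)
    P = (1 - K * h) * P
    errors.append(y - y_hat)

# The empirical error variance should be close to the computed P.
assert abs(np.var(errors) - P) < 0.1
```

Note that $P$ evolves deterministically (a discrete Riccati recursion), independently of the observed data, exactly as in Lemma 3.2.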
Remark 3.2. The system (3.1) is the system of Euler conditions of the control problem (3.3), (3.4). We can associate to it another control problem as follows. Consider the system (3.32), where the control is the pair $(\omega, v(\cdot))$, with the payoff (3.33), which contains the additional term $2h\cdot y(T)$. Then it is easy to check that $y_T$ is the optimal state, $p_T$ the adjoint variable, and that the optimal control is given by
$$\hat v(t) = -G^*(t)p_T(t) ,\quad \hat\omega = -P_0\,p_T(0) .$$
For $h = 0$ the interpretation of problem (3.32), (3.33) is clear: it is the least squares functional. This gives another justification to the role of quadratic optimization problems in filtering. Note however that (3.33) makes sense only when $P_0$ is invertible, which is not requested for (3.3), (3.4). $\square$
4. PREDICTION

We have computed so far $\hat y(T) = E[y(T) \mid \mathcal{Z}^T]$. We are now interested in $E[y(T+\theta) \mid \mathcal{Z}^T]$, $\theta > 0$, which can be easily deduced from the knowledge of $\hat y(T)$. Indeed, from (1.1) we have
$$y(T+\theta) = \Phi(T+\theta,T)y(T) + \int_T^{T+\theta} \Phi(T+\theta,t)f(t)\,dt + \int_T^{T+\theta} \Phi(T+\theta,t)G(t)\,dw(t) .$$
Since $\mathcal{Z}^T \subset \mathcal{F}^T$ and $w$ is an $\mathcal{F}^t$ Wiener process, we have
$$E\Big[\int_T^{T+\theta} \Phi(T+\theta,t)G(t)\,dw(t) \,\Big|\, \mathcal{Z}^T\Big] = 0 ,$$
hence $\forall \theta > 0$
(4.1)  $E[y(T+\theta) \mid \mathcal{Z}^T] = \Phi(T+\theta,T)\hat y(T) + \int_T^{T+\theta} \Phi(T+\theta,t)f(t)\,dt$ .
Hence the prediction of the future state can be easily deduced from the Kalman filter.

COMMENTS ON CHAPTER V

1. For the study of general Riccati equations in Hilbert spaces, cf. J.L. Lions [2], S.K. Mitter [1], R. Temam [1], L. Tartar [1], A. Bensoussan - M. Delfour, R. Curtain - A.J. Pritchard [1], M. Sorine [1].
2. There are efficient methods to compute the solution of a Riccati equation, using Chandrasekhar's ideas, cf. (among others) J. Casti [1].
CHAPTER VI
VARIATIONAL METHODS IN STOCHASTIC CONTROL

INTRODUCTION

In this chapter we consider the point of view of optimality necessary conditions, which corresponds to Pontryagin's maximum principle in the case of deterministic systems. We study here the simplest situations. One of the main difficulties in treating the theory in full generality is that there may be several forms of the stochastic maximum principle. The most satisfactory treatment of a general situation is probably that of J.M. Bismut [1]. Here we restrict ourselves to controls entering the drift term only. We also consider some problems of control with partial information and present the separation principle.
1. MODEL WITH ADDITIVE NOISE

Let $g : R^n \times R^m \times [0,T] \to R^n$ be a function such that
(1.1)  $g(x,v,t)$ is measurable, and Lipschitz with respect to $x$, uniformly in $v, t$ .
Let $(\Omega,\mathcal{A},P)$ be a probability space. We consider
(1.2)  $b(t)$ a stochastic process with continuous trajectories, with values in $R^n$, $\sup_{0 \le t \le T} E|b(t)|^2 < \infty$ ;
(1.3)  $\xi$ a R.V. with values in $R^n$ ; we denote $\mathcal{F}^t = \sigma(\xi, b(s), s \le t)$ .
Let $U_{ad}$ be a subset of $R^m$. We consider stochastic processes $v(t)$ with values in $U_{ad}$, which are adapted to $\mathcal{F}^t$ and square integrable. More precisely
(1.3)bis  $v(\cdot) \in L^2(0,T)$ , $v(t) \in U_{ad}$ , $U_{ad}$ closed convex in $R^m$ .
For any given $v(\cdot)$ we solve the equation
(1.4)  $x(t) = \xi + \int_0^t g(x(s),v(s),s)\,ds + b(t) - b(0)$ .

Lemma 1.1. There is one and only one solution of (1.4) which belongs to $L^2\big(\Omega,\mathcal{A},P;C(0,T;R^n)\big)$, and $x(t)$ is an adapted process.
Proof. Let $y(t) = x(t) - b(t)$; then (1.4) becomes
(1.5)  $y(t) = \xi - b(0) + \int_0^t g\big(y(s)+b(s), v(s), s\big)\,ds$ .
(1.5) has a unique solution in $L^2\big(\Omega,\mathcal{A},P;C(0,T;R^n)\big)$. Indeed, define for $z \in B = L^2\big(\Omega,\mathcal{A},P;C(0,T;R^n)\big)$
$$I(t;z) = \xi - b(0) + \int_0^t g\big(z(s)+b(s),v(s),s\big)\,ds ,\qquad \|z\|_t^2 = E \sup_{0 \le s \le t} |z(s)|^2 ;$$
then
$$|I(t;z_1) - I(t;z_2)| \le K \int_0^t |z_1(s) - z_2(s)|\,ds \quad \text{if } z_1, z_2 \in B .$$
Consider the iterates $y^{n+1} = I(y^n)$; then it is easy to check that $y^n \to y$, a fixed point of $I$, the fixed point being unique. Furthermore, for any $t$, $x(t) = y(t) + b(t)$ is adapted. $\square$
Define next
(1.6)  $\mathcal{X}_v^t = \sigma$-algebra generated by $x(s)$, $0 \le s \le t$ .
It is important to notice that $\mathcal{X}_v^t$ depends on the control; therefore these $\sigma$-algebras are not given a priori. Let us set
(1.7)  $\mathcal{U}^* = \{v \in L^2(0,T) \mid v(t) \in U_{ad}$ a.s., a.e. $t\}$ .
Once we fix a control $v$ in $\mathcal{U}^*$, this defines the $\sigma$-algebra $\mathcal{X}_v^t$; define
(1.8)  $\mathcal{U} = \{v \in \mathcal{U}^* \mid v(t)$ is $\mathcal{X}_v^t$ measurable, a.e. $t\}$ .
The set $\mathcal{U}$ is not empty; for instance, deterministic controls belong to $\mathcal{U}$. We pose the following control problem:
(1.9)  minimize $J(v(\cdot)) = E\,h(x(T))$ over $v(\cdot) \in \mathcal{U}$ (1)
where
(1.10)  $h : R^n \to R$ , $h$ Lipschitz, bounded.
Our objective is to study problem (1.9).

(1) We can add an integral payoff.

1.2. Admissibility result

The set $\mathcal{U}$ being indirectly defined, problem (1.9) seems a priori quite intricate. However, we have the following very important property.

Theorem 1.1. Assume (1.1), (1.2), (1.3), (1.10). Then we have
(1.11)  $\inf_{v \in \mathcal{U}} E\,h(x(T)) = \inf_{v \in \mathcal{U}^*} E\,h(x(T))$ .
Proof. We shall need several Lemmas.

Lemma 1.2. Let $v \in \mathcal{U}$; then
(1.12)  $\mathcal{X}_v^t = \mathcal{F}^t \quad \forall t$ .

Proof. Indeed $x(t)$ is $\mathcal{F}^t$ measurable, hence $\mathcal{X}_v^t \subset \mathcal{F}^t$. Let us prove the reverse inclusion. $\xi = x(0)$ is $\mathcal{X}_v^0$ measurable. Next
$$b(t) - b(0) = x(t) - \xi - \int_0^t g(x(s),v(s),s)\,ds$$
and the right hand side is $\mathcal{X}_v^t$ measurable, therefore $\mathcal{F}^t \subset \mathcal{X}_v^t$, hence (1.12). $\square$

Lemma 1.3. Let $v \in \mathcal{U}^*$ and let $v_0 \in U_{ad}$ be fixed. For $k > 0$ define
$$v_k(t) = v_0 \ \text{ for } 0 \le t < k ,\qquad v_k(t) = \dfrac{1}{k}\int_{(n-1)k}^{nk} v(s)\,ds \ \text{ for } nk \le t < (n+1)k ,\ 1 \le n \le N-1 .$$
Then $v_k \in \mathcal{U}$.

Proof. Clearly $v_k(t) \in U_{ad}$, by the convexity of $U_{ad}$. Let $x_k$ be the trajectory corresponding to $v_k$:
$$x_k(t) = \xi + \int_0^t g(x_k(s),v_k(s),s)\,ds + b(t) - b(0) .$$
For $0 \le t < k$, $v_k(t) = v_0$ is deterministic; hence $x_k(t)$ is $\mathcal{F}^t$ measurable for $0 \le t \le k$, and, as in Lemma 1.2,
(1.13)  $\mathcal{X}_{v_k}^t = \mathcal{F}^t$ for $0 \le t \le k$ .
Then, for $k \le t < 2k$, $v_k(t) = \dfrac{1}{k}\int_0^k v(s)\,ds$, and since $v \in \mathcal{U}^*$ is adapted, $v_k(t)$ is $\mathcal{F}^k$ measurable for $k \le t < 2k$. However, from (1.13) it follows that $\mathcal{F}^k = \mathcal{X}_{v_k}^k$. Therefore $v_k(t)$ is $\mathcal{X}_{v_k}^k$ measurable for $k \le t < 2k$. By induction
(1.14)  $v_k(t)$ is $\mathcal{X}_{v_k}^t$ measurable $\forall t$ ,
hence $v_k \in \mathcal{U}$. $\square$
Lemma 1.4. $x_k \to x$ in $L^2\big(\Omega,\mathcal{A},P;C(0,T;R^n)\big)$.

Proof. We know that $v_k \to v$ in $L^2(0,T)$. From the relation between $x_k$ and $x$, and since
$$E \int_0^T |v_k(s) - v(s)|^2\,ds \to 0 ,$$
we conclude the proof of the Lemma by Gronwall's inequality. $\square$

We can then proceed with the proof of Theorem 1.1. Since $\mathcal{U} \subset \mathcal{U}^*$ we have
$$\inf_{\mathcal{U}} E\,h(x(T)) \ge \inf_{\mathcal{U}^*} E\,h(x(T)) .$$
But for any $v \in \mathcal{U}^*$, there exists $v_k \in \mathcal{U}$ such that, denoting by $x_k$ the corresponding trajectory,
$$x_k \to x \ \text{ in } L^2\big(\Omega,\mathcal{A},P;C(0,T;R^n)\big) .$$
But then
$$\inf_{\mathcal{U}} E\,h(x(T)) \le E\,h(x_k(T)) \to E\,h(x(T)) \quad \forall v \in \mathcal{U}^* ,$$
hence
$$\inf_{\mathcal{U}} E\,h(x(T)) \le \inf_{\mathcal{U}^*} E\,h(x(T)) ,$$
which concludes the proof of Theorem 1.1. $\square$

From Theorem 1.1 it follows that if $u(\cdot)$ is an optimal control for problem (1.9), then it is also optimal for the problem on the right hand side of (1.11).
We first start with a differentiability Lemma.

Lemma 1.5. Let $u, v \in L^2(0,T)$ and $\theta \in [0,1]$. Let $y$ be the solution of (1.4) corresponding to the control $u$, and $x_\theta(t)$ the solution corresponding to the control $u + \theta v$. We assume
(1.15)  $g$ continuously differentiable with respect to $x,v$, and $g'_x, g'_v$ uniformly continuous in $x,v$ and bounded.
Then we have
(1.16)  $x_\theta(\cdot) \to y(\cdot)$ in $C(0,T;R^n)$ a.s., as $\theta \to 0$ ,
(1.17)  $\dfrac{x_\theta(\cdot) - y(\cdot)}{\theta} \to z(\cdot)$ in $C(0,T;R^n)$ a.s. ,
where $z$ is the solution of the linear equation
(1.18)  $z(t) = \int_0^t \big[g'_x(y(s),u(s),s)z(s) + g'_v(y(s),u(s),s)v(s)\big]\,ds$ .

Proof. We have
$$x_\theta(t) - y(t) = \int_0^t \big[g\big(x_\theta(s),u(s)+\theta v(s),s\big) - g\big(y(s),u(s),s\big)\big]\,ds ,$$
hence, $g'_x, g'_v$ being bounded, Gronwall's inequality yields
$$\sup_{0 \le t \le T} |x_\theta(t) - y(t)| \le C\,\theta\,\Big(\int_0^T |v(s)|^2\,ds\Big)^{1/2} \to 0 \ \text{ a.s. as } \theta \to 0 ,$$
which proves (1.16). We next notice that, from (1.15),
$$\dfrac{x_\theta(t) - y(t)}{\theta} = \int_0^t g'_x\,\dfrac{x_\theta(s) - y(s)}{\theta}\,ds + \int_0^t g'_v(y(s),u(s),s)\cdot v(s)\,ds + \int_0^t \rho(\theta;s)\,ds ,$$
where $\rho(\theta;s)$ collects the remainder terms. Setting $\tilde x_\theta = \dfrac{x_\theta - y}{\theta} - z$, it follows from this and (1.18) that $\tilde x_\theta$ satisfies a similar integral inequality with right hand side $\int_0^t |\rho(\theta;s)|\,ds$. Since $g'_x, g'_v$ are bounded, $\rho(\theta)$ is bounded. Moreover, by the uniform continuity of $g'_x, g'_v$ and (1.16),
$$\sup_{0 \le s \le T} |x_\theta(s) - y(s)| \to 0 \ \text{ a.s.},\quad \text{hence } \int_0^T |\rho(\theta;s)|\,ds \to 0 \ \text{ a.s.},$$
which with Gronwall's inequality implies
$$\sup_{0 \le t \le T} |\tilde x_\theta(t)| \to 0 \ \text{ a.s. as } \theta \to 0 ,$$
which proves (1.17). $\square$

Set
(1.27)  $J(v(\cdot)) = E\,h\big(x(T;v(\cdot))\big)$ .
We assume
(1.28)  $h$ is continuously differentiable and $h'_x$ is uniformly continuous and bounded.
We have

Corollary 1.1. Assume (1.28). The functional $J(v(\cdot))$ is Gateaux-differentiable on $L^2(0,T)$ and
(1.29)  $\big(J'(u(\cdot)), v(\cdot)\big) = E\,h'_x(y(T))\cdot z(T)$ ,
where $z$ is the solution of (1.18).

Proof. We have
$$\dfrac{J(u+\theta v) - J(u)}{\theta} = E\,\dfrac{h(x_\theta(T)) - h(y(T))}{\theta} ;$$
from Lemma 1.5 and assumption (1.28), we conclude. $\square$

If we introduce the adjoint system
(1.31)  $-\dfrac{dp}{dt} = g'^*_x(y(t),u(t),t)\,p$ , $p(T) = h'_x(y(T))$ ,
then it follows from (1.29) and (1.18) that
$$\big(J'(u(\cdot)), v(\cdot)\big) = E \int_0^T p(t)\cdot g'_v(y(t),u(t),t)v(t)\,dt .$$
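In finite dimensions the adjoint representation of the gradient can be checked directly. A sketch (Python; an Euler-discretized deterministic linear dynamics with quadratic terminal cost, all choices illustrative assumptions) compares the adjoint formula with a finite-difference quotient:

```python
import numpy as np

# Gradient of J(u) = 0.5 |x_N|^2 for the Euler-discretized linear system
#   x_{k+1} = x_k + dt (A x_k + B u_k)
# via the discrete adjoint p_k = (I + dt A^T) p_{k+1}, p_N = x_N;
# then dJ/du_k = dt B^T p_{k+1}. A, B, u are illustrative assumptions.
rng = np.random.default_rng(3)
n, m, N, dt = 3, 2, 50, 0.02
A = rng.normal(size=(n, n)) * 0.5
B = rng.normal(size=(n, m))
u = rng.normal(size=(N, m))

def forward(u):
    x = np.ones(n)
    xs = [x]
    for k in range(N):
        x = x + dt * (A @ x + B @ u[k])
        xs.append(x)
    return xs

def cost(u):
    return 0.5 * np.sum(forward(u)[-1] ** 2)

xs = forward(u)
p = xs[-1].copy()           # p_N = h'(x_N)
grad = np.zeros_like(u)
for k in reversed(range(N)):
    grad[k] = dt * B.T @ p  # dJ/du_k uses p_{k+1}
    p = p + dt * A.T @ p    # backward adjoint step

# Finite-difference check of one component
eps = 1e-6
du = np.zeros_like(u); du[10, 0] = eps
fd = (cost(u + du) - cost(u)) / eps
assert abs(fd - grad[10, 0]) < 1e-4
```

One backward sweep of the adjoint yields all components of the gradient, against one forward solve per component for finite differences.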
We can then state the following

Theorem 1.2. We assume (1.1), (1.2), (1.3), (1.15), (1.28) and
(1.32)  $U_{ad}$ is convex closed.
Let $u(\cdot)$ be an optimal control:
(1.33)  $u(\cdot) \in \mathcal{U}$ and $J(u(\cdot)) = \inf_{v(\cdot) \in \mathcal{U}} J(v(\cdot))$ .
Then the following necessary condition holds:
(1.34)  $p(t)\cdot g'_v(y(t),u(t),t)\big(v - u(t)\big) \ge 0 \quad \forall v \in U_{ad}$ , a.e. $t$ , a.s. $\omega$ .

Proof. According to Theorem 1.1, we also have $J(u(\cdot)) = \inf_{v(\cdot) \in \mathcal{U}^*} J(v(\cdot))$. Since $U_{ad}$ is convex, $\mathcal{U}^*$ is convex. Therefore, from the differentiability property (1.29), we can assert that
(1.35)  $\big(J'(u(\cdot)),\ v(\cdot) - u(\cdot)\big) \ge 0 \quad \forall v(\cdot) \in \mathcal{U}^*$ ,
i.e.
(1.36)  $E \int_0^T p(t)\cdot g'_v(y(t),u(t),t)\big(v(t) - u(t)\big)\,dt \ge 0 \quad \forall v(\cdot) \in \mathcal{U}^*$ .
Let $v \in U_{ad}$ and set
$$\lambda(t;\omega) = p(t)\cdot g'_v(y(t),u(t),t)\big(v - u(t)\big) ;$$
then we claim
(1.37)  $\lambda(t;\omega) \ge 0$ a.e., a.s.
Indeed, let $A = \{(t,\omega) \mid \lambda(t;\omega) < 0\}$ and set
$$v(t;\omega) = v \ \text{ in } A ,\qquad = u(t;\omega) \ \text{ outside } A .$$
This choice is possible: since $\lambda(t)$ is $\mathcal{F}^t$ measurable $\forall t$, the set $A^t = \{\omega \mid \lambda(t;\omega) < 0\}$ belongs to $\mathcal{F}^t$, and $v(t) = v$ in $A^t$, $= u(t)$ outside $A^t$, which shows that $v(\cdot) \in \mathcal{U}^*$. Taking this choice of $v(\cdot)$ we get from (1.36)
$$0 \le \iint_A \lambda(t;\omega)\,dt\,dP < 0 ,$$
which contradicts the definition of $A$, unless $A$ has measure $0$. We thus have (1.37). Since $u(\cdot) \in \mathcal{U}$, we have $\mathcal{X}_u^t = \mathcal{F}^t$ $\forall t$, hence property (1.34). $\square$

Remark 1.1. The computation of $p(t)$ is not simple in practice, which renders the necessary condition (1.34) difficult to apply. $\square$
2. THE CASE WITH INCOMPLETE OBSERVATION

2.1. Statement of the problem

We consider the following problem. The state of the system is governed by the equation
(2.1)  $dx = \big(F(t)x(t) + B(t;v(t))\big)dt + G(t)\,dw(t)$ , $x(0) = \xi$ ,
with the assumptions
(2.2)  $F$ matrix $n \times n$, measurable and bounded in $t$ ;
(2.3)  $G$ matrix $n \times m$, measurable and bounded in $t$; $w$ an $\mathcal{F}^t$ Wiener process, standard and $m$ dimensional ;
(2.4)  $\xi$ R.V. with values in $R^n$, $\mathcal{F}^0$ measurable, Gaussian, with mean $\bar x$ and covariance matrix $P_0$ ;
(2.5)  $B(t;v) : (0,T) \times U_{ad} \to R^n$, where $U_{ad}$ is a closed convex subset of $R^k$; $B$ measurable bounded.
The process $v(t)$ is the control process. Let $\mathcal{F}^t$ be given as in (1.3) of Chapter V. Then we can solve (2.1) and obtain a process which is continuous and adapted to $\mathcal{F}^t$. We next define the observation process by
(2.7)  $dz = H(t)x\,dt + d\eta$ , $z(0) = 0$ ,
(2.8)  $H$ matrix $p \times n$, measurable and bounded in $t$ ,
(2.9)  $d\eta = R^{1/2}(t)\,db(t)$ , where $b$ is an $\mathcal{F}^t$ Wiener process, standard, $p$ dimensional and independent of $\xi$, $w(t)$; $R$ symmetric invertible, $R$, $R^{-1}$ bounded.
Consider now the processes $\beta(t)$, $y(t)$ defined by
(2.10)  $d\beta = F(t)\beta\,dt + G(t)\,dw(t)$ , $\beta(0) = \xi$ ,
(2.11)  $dy = H(t)\beta\,dt + d\eta$ , $y(0) = 0$ .
From (2.1) and (2.10) we see that $x_1 = x - \beta$ is the solution of
(2.12)  $\dfrac{dx_1}{dt} = F(t)x_1 + B(t;v(t))$ , $x_1(0) = 0$ .
Similarly, setting $z_1 = z - y$, we have
(2.13)  $\dfrac{dz_1}{dt} = H x_1$ , $z_1(0) = 0$ .
The processes $x_1, z_1$ contain the control effect, whereas $\beta, y$ do not depend on the control; moreover
(2.14)  $x = x_1 + \beta$ ,  (2.15)  $z = z_1 + y$ .
We assume in the sequel that
(2.16)  $\mathcal{Y}^t = \sigma(y(s), s \le t)$ .
For any control $v$ in $\mathcal{U}^*$ we consider the process $z$ defined by (2.7), and denote by $\mathcal{Z}_v^t$ the $\sigma$-algebra
(2.17)  $\mathcal{Z}_v^t = \sigma(z(s), s \le t)$ .
We define, in a way similar to (1.8),
(2.18)  $\mathcal{U} = \{v \in \mathcal{U}^* \mid v(t)$ is $\mathcal{Z}_v^t$ measurable a.e. $t\}$ .
The set $\mathcal{U}$ is not empty (take deterministic controls). We define the control problem as follows:
(2.19)  minimize $J(v(\cdot)) = E\Big[\int_0^T \ell(x(t),v(t),t)\,dt + h(x(T))\Big]$ over $v(\cdot) \in \mathcal{U}$ ,
where
(2.20)  $x,v \to \ell(x,v,t)$ is continuously differentiable, $\ell(0,0,t) \in L^2(0,T)$ ,
(2.21)  $h$ is continuously differentiable ,
(2.22)  $B(t;v)$ is continuously differentiable with respect to $v$ .

Remark 2.1. We need to introduce explicitly an integral criterion in (2.19), since we cannot reduce it to the final criterion by the usual trick. This restriction is due to the fact that we request a special structure of the state equation (2.1) (linearity with respect to $x$). $\square$
VARIATIONAL METHODS
(2.20)
x,v
+
R(x,v,t)
e(o,o,t) (2.21)
h
(2.22)
B(t;v)
Remark 2 . i .
E
is continuously differentiable
L~(o,T)
is continuously differentiable
is continuously differentiable with respect to v
We need to introduce explicitely an integral criterion in
(2.19), since we cannot reduce it to the final criterion, by the usual trick
This restriction is due to the fact that we request a special structure of the state equation (2.1) (linearity with respect to
Lemma 2.1. For any
Since
v(t)
xl(t)
is
z,(t)
is
is
2
vt
v
t
E
Q, 2,
=
st
ti t
x ).
0
.
Zt and Zt measurable a.e. then from (2.12) follows that
andV
zt
and
Zt measurable. Therefore from
measurable Y t.
From (2.13) it follows that ( 2 . 1 5 ) it follows that
238
CHAPTER VI
t
is 2, measurable and
Y(t) and
z(t)
is
st
t
t
Hence Zv c 5
measurable.
Zt c z:.
c
This implies the desired result.
Lemma 2.2. (2.24)  $\mathcal{U}$ is dense in $\mathcal{U}^*$.

Proof. Let $v(\cdot) \in \mathcal{U}^*$ and consider (as in Lemma 1.3)
(2.23)  $v_k(t) = v_0$ for $0 \le t < k$ , $v_k(t) = \dfrac{1}{k}\int_{(n-1)k}^{nk} v(s)\,ds$ for $nk \le t < (n+1)k$ .
Then $v_k(t) \in U_{ad}$ a.e., a.s. For $0 \le t < k$, $v_k(t) = v_0$ is a deterministic function; hence, from (2.14), $\mathcal{Z}_{v_k}^t = \mathcal{Y}^t$ for $0 \le t \le k$. For $k \le t < 2k$, $v_k(t)$ is $\mathcal{Y}^k$ measurable, hence $\mathcal{Z}_{v_k}^k$ measurable, and from (2.15) this implies again that $\mathcal{Z}_{v_k}^t = \mathcal{Y}^t$ ($t \le 2k$). By induction we see that $\mathcal{Z}_{v_k}^t = \mathcal{Y}^t$, $\forall t$. Therefore $v_k$ belongs to $\mathcal{U}$. A similar argument to that of Lemma 1.4 shows that $v_k \to v$, which completes the proof of (2.24). $\square$

We can then state the

Theorem 2.1. We assume (2.2), (2.3), (2.4), (2.5), (2.8), (2.9), (2.16), (2.20), (2.21), (2.22). Then we have
(2.27)  $\inf_{v(\cdot) \in \mathcal{U}} J(v(\cdot)) = \inf_{v(\cdot) \in \mathcal{U}^*} J(v(\cdot))$ .

Proof. As the proof of Theorem 1.1, using Lemmas 2.1, 2.2. $\square$
We start with differentiability Lemmas as in §1.3. We denote by $y$ the state corresponding to $u$, defined by (2.1), and (to save notation) still by $z$ the corresponding observation, defined by (2.7). We write
(2.29)  $\dfrac{dy_1}{dt} = F y_1 + B(t;u(t))$ , $y_1(0) = 0$ ,
(2.30)  $\dfrac{dz_1}{dt} = H y_1$ , $z_1(0) = 0$ .
Let $v \in L^2(0,T)$; we will denote by $x_{1\theta}(t)$ the state defined by (2.12), with $u$ replaced by the control $u + \theta v$. We define
(2.31)  $\dfrac{d\tilde y}{dt} = F(t)\tilde y + B'_v(t;u(t))v(t)$ , $\tilde y(0) = 0$ .

Lemma 2.4. The functional $J(v(\cdot))$ is Gateaux differentiable on $L^2(0,T)$ and we have the formula
(2.32)  $\big(J'(u(\cdot)), v(\cdot)\big) = E\Big[\int_0^T \big(\ell'_x(y(t),u(t),t)\cdot\tilde y(t) + \ell'_v(y(t),u(t),t)\cdot v(t)\big)dt + h'(y(T))\cdot\tilde y(T)\Big]$ .

Proof. We have
(2.33)  $\dfrac{J(u+\theta v) - J(u)}{\theta} = E\Big[\int_0^T \dfrac{\ell(x_\theta,u+\theta v,t) - \ell(y,u,t)}{\theta}\,dt + \dfrac{h(x_\theta(T)) - h(y(T))}{\theta}\Big]$ .
From assumption (2.22) it is easy to check, like in Lemma 1.5, that $\dfrac{x_\theta - y}{\theta} \to \tilde y$; also
$$\Big|\dfrac{x_\theta(t) - y(t)}{\theta}\Big| \le C_T \Big(\int_0^T |v(s)|^2\,ds\Big)^{1/2} .$$
We can use Lebesgue's theorem to pass to the limit in (2.33) and to obtain (2.32). $\square$

We define the adjoint process
(2.34)  $-\dfrac{dp}{dt} = F^*(t)p + \ell'_x(y(t),u(t),t)$ , $p(T) = h'(y(T))$ ;
then we deduce from (2.32) the formula
(2.35)  $\big(J'(u(\cdot)), v(\cdot)\big) = E\int_0^T \big(\ell'_v(y(t),u(t),t) + B'^*_v(t;u(t))\,p(t)\big)\cdot v(t)\,dt$ .
U
We define (2.34)
then we deduce from ( 2 . 3 2 ) the formula
We can then state the
Theorem 2 . 2 . We make the asssumptiomof Theorem 2 . 1 . be c solution o f (2.36)
J(u(.))
=
inf
J(v(.))
Let
u(.)
Q
E
.
V(.)4
Then t he f ol l owing necessary c o n d i t i o n h o l d s t
(2.37)
a m
(E?
Y v
~(y(t),u(t),t)+B:(t,u(t))OZ
E
2
Uad , a.e. t , a.s.
By Theorem 2 . 1 , we know that over %*.
t p(t)).(v-u(t))
u(.)
realizes the infimum of
Hence from ( 2 . 3 5 ) we can assert that
J(v(.))
0
(2.38)
Hence also ))dt t 0
Y v(.) € V * . Reasoning as in the proof of Theorem 1 . 2 , we deduce from ( 2 . 3 9 ) that
V V € U ad
' a'e*
€ 3 ,st = Z t
and since u(.)
a*s*
hence relation ( 2 . 3 7 ) .
2.4.

We assume here that
(2.40)  $\ell(x,v,t) = \tfrac{1}{2}Q(t)x\cdot x + \tfrac{1}{2}N(t)v\cdot v + g(t)\cdot x$ , $h(x) = \tfrac{1}{2}Mx\cdot x$ ,
where
(2.41)  $Q(t)$ symmetric bounded; $N(t)$ symmetric bounded, invertible, $N^{-1}(t)$ bounded; $M$ symmetric; $g$ bounded ;
(2.42)  $B(t;v) = B(t)v + f(t)$ ,
where
(2.43)  $B$ bounded, $f$ bounded.
The state equation is then
(2.44)  $dy = \big(F(t)y + B(t)u(t) + f(t)\big)dt + G(t)\,dw(t)$ , $y(0) = \xi$ .
Assuming
(2.46)  $U_{ad} = R^k$ ,
(2.37) yields
(2.47)  $N(t)u(t) + B^*(t)\,E^{\mathcal{Z}^t}p(t) = 0$ ,
and the observation process is given by
(2.48)  $dz = H(t)y(t)\,dt + R^{1/2}(t)\,db(t)$ , $z(0) = 0$ .
Replacing (2.47) in (2.44) yields
(2.49)  $dy = \big(F(t)y - B(t)N^{-1}(t)B^*(t)E^{\mathcal{Z}^t}p(t) + f(t)\big)dt + G(t)\,dw(t)$ , $y(0) = \xi$ .
The adjoint (2.34) becomes, in integrated form,
(2.50)  $p(t) = \Phi^*(T,t)My(T) + \int_t^T \Phi^*(s,t)\big(Q(s)y(s) + g(s)\big)ds$ ,
where $\Phi$ is the fundamental matrix of $F$. Let us set
(2.51)  $\hat y_t(s) = E^{\mathcal{Z}^t} y(s)$ , $\hat p_t(s) = E^{\mathcal{Z}^t} p(s)$ , $t \le s$ .
Apply (2.50) at time $s$ and take the expectation with respect to $\mathcal{Z}^t$; we obtain
(2.52)  $\hat p_t(s) = \Phi^*(T,s)M\hat y_t(T) + \int_s^T \Phi^*(\sigma,s)\big(Q(\sigma)\hat y_t(\sigma) + g(\sigma)\big)d\sigma$ .
Now from (2.49) we have, for $t \le s$,
$$y(s) = \Phi(s,t)y(t) + \int_t^s \Phi(s,\sigma)\big(-BN^{-1}B^*\,E^{\mathcal{Z}^\sigma}p(\sigma) + f(\sigma)\big)d\sigma + \int_t^s \Phi(s,\sigma)G(\sigma)\,dw(\sigma) .$$
Taking the expectation with respect to $\mathcal{Z}^t$, and recalling that $\mathcal{Z}^t \subset \mathcal{F}^t$ and $E^{\mathcal{Z}^t}E^{\mathcal{Z}^\sigma} = E^{\mathcal{Z}^t}$ for $t \le \sigma$, we obtain
(2.53)  $\hat y_t(s) = \Phi(s,t)\hat y(t) + \int_t^s \Phi(s,\sigma)\big(-BN^{-1}B^*\,\hat p_t(\sigma) + f(\sigma)\big)d\sigma$ .
From Lemma 2.5 below we get, considering $\hat y(t)$ as given and $s \in [t,T]$: there exists a unique pair $\big(\hat y_t(s), \hat p_t(s)\big)$ solution of (2.52), (2.53), related by the relation
$$\hat p_t(s) = \Pi(s)\hat y_t(s) + \rho(s) ,$$
where $\Pi(s)$, $\rho(s)$ are deterministic functions. In particular, taking $s = t$, we obtain
$$E^{\mathcal{Z}^t}p(t) = \Pi(t)\hat y(t) + \rho(t) .$$
Therefore, going back to (2.47), we get
(2.55)  $u(t) = -N^{-1}(t)B^*(t)\big(\Pi(t)\hat y(t) + \rho(t)\big)$ .
Let us now derive an equation for $\hat y(t)$. With (2.55), the state equation (2.49) reads
(2.56)  $dy = \big(F(t)y - B(t)N^{-1}(t)B^*(t)(\Pi(t)\hat y(t) + \rho(t)) + f(t)\big)dt + G(t)\,dw(t)$ , $y(0) = \xi$ ,
where $\hat y(t) = E^{\mathcal{Z}^t}y(t)$ is $\mathcal{Z}^t$ measurable. Let $\hat\beta(t)$ be the Kalman filter of the free system (2.10), (2.11); it evolves according to the S.D.E.
$$d\hat\beta = F(t)\hat\beta\,dt + P(t)H^*(t)R^{-1}(t)\big(dy(t) - H\hat\beta(t)\,dt\big) ,\quad \hat\beta(0) = \bar x ,$$
with
(2.59)  $d\nu = R^{-1/2}(t)\big(dy(t) - H\hat\beta(t)\,dt\big) = R^{-1/2}(t)\,dI(t)$ ,
where $I$ is the innovation process. From (2.56) and the filter equations we obtain
(2.61)  $d\hat y = F(t)\hat y\,dt - B(t)N^{-1}(t)B^*(t)\big(\Pi(t)\hat y(t) + \rho(t)\big)dt + f(t)\,dt + P(t)H^*(t)R^{-1/2}(t)\,d\nu(t)$ , $\hat y(0) = \bar x$ .
We can also rewrite (2.61) as follows
(2.62)  $d\hat y = F(t)\hat y\,dt - B(t)N^{-1}(t)B^*(t)\big(\Pi(t)\hat y(t) + \rho(t)\big)dt + f(t)\,dt + P(t)H^*(t)R^{-1}(t)\big(dz(t) - H(t)\hat y(t)\,dt\big)$ , $\hat y(0) = \bar x$ ,
by virtue of the fact that
(2.63)  $dI = dy - H\hat\beta\,dt = dz - H\hat y\,dt$ .
What we have proved is that if $u(\cdot) \in \mathcal{U}$ is an optimal control, then necessarily it is given by formula (2.55), where $\hat y(t)$ is the solution of (2.61).
Conversely, we may solve the equation (2.61), which is an equation in $\hat y(t)$ with external input $\nu(t)$, where $\nu(t)$ and $\hat\beta(t)$ are adapted to $\mathcal{Y}^t$. Next, setting
(2.64)  $y_1(t) = \hat y(t) - \hat\beta(t)$ ,
it follows from (2.61) and the filter equation that $y_1$ is the solution of
(2.65)  $\dfrac{dy_1}{dt} = F(t)y_1 + B(t)u(t) + f(t)$ , $y_1(0) = 0$ ,
with the control $u$ defined by (2.55). Then $u$ is adapted to $\mathcal{Y}^t$. The corresponding state is given by (2.56); using (2.64) we see that
(2.66)  $y = y_1 + \beta$ ,
therefore from (2.64) we deduce
(2.67)  $\hat y(t) = E[y(t) \mid \mathcal{Z}^t]$ .
Next, considering the observation $z$ defined by
(2.68)  $dz = Hy\,dt + d\eta$ , $z(0) = 0$ ,
we still have by (2.66)
$$dz = Hy_1\,dt + H\beta\,dt + d\eta = Hy_1\,dt + dy = H\hat y\,dt - H\hat\beta\,dt + dy .$$
Replacing $d\nu$ by this value in (2.61), we obtain that $\hat y$ is a solution of (2.62), which has a unique solution, and the solution is a process adapted to $\mathcal{Z}^t$. From that and the form of the control we obtain $\mathcal{Z}_u^t = \mathcal{Y}^t$, which with (2.67) shows that $u(\cdot) \in \mathcal{U}$. The control $u(\cdot)$ satisfies the necessary (and sufficient by convexity) optimality conditions. Therefore we have obtained

Theorem 2.3. We make the assumptions of Theorem 2.2 and (2.40), (2.41), (2.42), (2.43), (2.46). Then there exists one and only one solution of the control problem (2.36), given by (2.55), where $\hat y(t)$ is the unique solution of (2.61), (2.62) and corresponds to the best estimate (2.67).
In the proof of Theorem 2.3 we have used the following result.

Lemma 2.5. Consider the two point boundary value problem
(2.70)  $\dfrac{dy}{dt} = F(t)y - B(t)N^{-1}(t)B^*(t)p + f(t)$ , $y(0) = y_0$ ,
        $-\dfrac{dp}{dt} = F^*(t)p + Q(t)y + g(t)$ , $p(T) = My(T)$ .
Then it has a unique solution $(y,p)$. Moreover, the following relation holds
(2.71)  $p(t) = \Pi(t)y(t) + \rho(t)$ ,
where $\Pi$ is the unique solution of the Riccati equation
(2.72)  $\dfrac{d\Pi}{dt} + \Pi F + F^*\Pi - \Pi B N^{-1} B^* \Pi + Q = 0$ , $\Pi(T) = M$ ,
and $\rho(t)$ is the solution of the linear equation
(2.73)  $-\dfrac{d\rho}{dt} = \big(F^*(t) - \Pi(t)B(t)N^{-1}(t)B^*(t)\big)\rho + g(t) + \Pi(t)f(t)$ , $\rho(T) = 0$ .

Proof. It is a consequence of the standard quadratic control theory (cf. also Chapter V, §3.2). The state equation is governed by
(2.74)  $\dfrac{dx}{dt} = F(t)x + B(t)v(t) + f(t)$ , $x(0) = y_0$ .
The payoff function is defined by
(2.75)  $J(v(\cdot)) = \int_0^T Q(t)x(t)\cdot x(t)\,dt + \int_0^T N(t)v(t)\cdot v(t)\,dt + 2\int_0^T g(t)\cdot x(t)\,dt + Mx(T)\cdot x(T)$ .
There exists an optimal control, denoted by $u(\cdot)$. Let $y(\cdot)$ be the corresponding optimal trajectory. The Euler condition of optimality is given by
(2.76)  $\int_0^T Q(t)y(t)\cdot\tilde x(t;v)\,dt + \int_0^T g(t)\cdot\tilde x(t;v)\,dt + \int_0^T N(t)u(t)\cdot v(t)\,dt + My(T)\cdot\tilde x(T;v) = 0$ ,
where $\tilde x(t;v)$ is the solution of $d\tilde x/dt = F\tilde x + Bv$, $\tilde x(0) = 0$. Defining $p$ by the second differential equation (2.70), where $y$ stands for the optimal trajectory, we get from integration by parts in (2.76)
$$\int_0^T (Nu + B^*p)\cdot v\,dt = 0 ,$$
and since $v$ is arbitrary,
(2.77)  $u(t) = -N^{-1}(t)B^*(t)p(t)$ ,
from which we obtain that $(y,p)$ is a solution of (2.70). The solution is unique since the control defined by (2.77) is optimal, hence unique; therefore $y$, then $p$, are unique.
Considering next a system similar to (2.70) on the interval $(s,T)$ with arbitrary initial condition, and reasoning like in Lemma 3.1 of Chapter V, we define deterministic functions $\Pi(t)$, $\rho(t)$ and we have the relation
(2.78)  $p(t) = \Pi(t)y(t) + \rho(t)$ .
Taking in particular the homogeneous system ($f = 0$, $g = 0$)
(2.79)  $\dfrac{d\alpha}{dt} = F\alpha - BN^{-1}B^*\pi$ , $\alpha(s) = h$ , $-\dfrac{d\pi}{dt} = F^*\pi + Q\alpha$ , $\pi(T) = M\alpha(T)$ ,
we have
(2.80)  $\pi(t) = \Pi(t)\alpha(t)$ ,
hence $d\alpha/dt = (F - BN^{-1}B^*\Pi)\alpha$, and to $F - BN^{-1}B^*\Pi$ we can associate a fundamental matrix $\Phi_\Pi(t,s)$ such that
(2.81)  $\alpha(t) = \Phi_\Pi(t,s)h$ .
Next, from (2.79), (2.80), (2.81), we deduce
(2.83)  $\Pi(s) = \Phi_\Pi^*(T,s)M\Phi_\Pi(T,s) + \int_s^T \Phi_\Pi^*(t,s)\big(Q + \Pi BN^{-1}B^*\Pi\big)(t)\Phi_\Pi(t,s)\,dt$ .
Now, from (1.11) of Chapter V, the function $\Phi_\Pi^*(T,t)h$ belongs to $H^1(0,T;R^n)$ and
(2.84)  $\dfrac{d}{dt}\Phi_\Pi^*(T,t)h = -\big(F^*(t) - \Pi(t)B(t)N^{-1}(t)B^*(t)\big)\Phi_\Pi^*(T,t)h$ ,
from which we deduce that $\Phi_\Pi(T,t)h \in H^1(0,T;R^n)$, with a similar formula (2.85). Differentiating (2.83) with respect to $s$ and using (2.84), (2.85), we obtain (2.72). Knowing the equation of $\Pi$ and using (2.78) and (2.70), we get the equation (2.73) for $\rho$. Uniqueness of the solution of (2.72) follows from an argument similar to that of Lemma 3.2, Chapter V.
Hence the result. $\square$

Remark 2.2. Control problem (2.74), (2.75) corresponds to problem (2.1), (2.19) when there are no noises and when assumptions (2.40)–(2.43) are satisfied. Combining (2.77), (2.78), we see that the optimal control for the deterministic problem is given by the formula
(2.86)  $u(t) = -N^{-1}(t)B^*(t)\big(\Pi(t)y(t) + \rho(t)\big)$ .
Therefore the optimal control can be computed via a feedback rule. Formula (2.55) shows that for the corresponding stochastic control problem, the optimal control is given by the same feedback, where the state $y(t)$ must be replaced by its best estimate $\hat y(t)$, given the observations up to time $t$. This is a very interesting property, which is known as the separation principle. First of all, the optimal control at time $t$ can be computed through a deterministic function of the best estimate of the state of the system at time $t$. More than that, this function is the same as in the deterministic case. $\square$
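The separation principle can be illustrated numerically. A scalar sketch (Python; every coefficient below is an illustrative assumption, not data from the text) integrates the control Riccati equation (2.72) backward, runs a Kalman filter forward on simulated observations, and applies the deterministic feedback (2.55) to the filter estimate:

```python
import numpy as np

rng = np.random.default_rng(4)
# Scalar LQG example (all coefficients are illustrative assumptions):
# state  dy = (F y + B u) dt + G dw,  observation dz = H y dt + sqrt(R) db
F_, B_, G_, H_, R_ = -0.5, 1.0, 0.5, 1.0, 0.1
Q_, N_, M_ = 1.0, 0.1, 1.0          # running / terminal cost weights
dt, T = 1e-3, 2.0
K = int(T / dt)

# Control Riccati, backward: Pi' + 2 F Pi - Pi^2 B^2 / N + Q = 0, Pi(T) = M
Pi = np.empty(K + 1); Pi[K] = M_
for k in range(K, 0, -1):
    Pi[k - 1] = Pi[k] + dt * (2 * F_ * Pi[k] - Pi[k] ** 2 * B_**2 / N_ + Q_)

# Filter Riccati forward, with the feedback applied to the estimate
y, y_hat, P = 1.0, 1.0, 0.0          # known initial state: P(0) = 0
for k in range(K):
    u = -B_ / N_ * Pi[k] * y_hat      # separation: feedback on the estimate
    dw = rng.normal(scale=np.sqrt(dt))
    db = rng.normal(scale=np.sqrt(dt))
    dz = H_ * y * dt + np.sqrt(R_) * db
    y += (F_ * y + B_ * u) * dt + G_ * dw
    y_hat += (F_ * y_hat + B_ * u) * dt + P * H_ / R_ * (dz - H_ * y_hat * dt)
    P += dt * (2 * F_ * P - P**2 * H_**2 / R_ + G_**2)

# Both Riccati solutions approach their stationary values on this horizon.
assert abs(P - (np.sqrt(11) - 1) / 20) < 1e-2
assert abs(Pi[0] - (np.sqrt(41) - 1) / 20) < 1e-2
```

The two Riccati equations are entirely decoupled: the control gain $\Pi$ never sees the data, and the filter gain $P$ never sees the cost, which is the computational content of the separation principle.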
3. SEPARATION PRINCIPLE

Orientation

We have seen that, for the linear quadratic stochastic control problem with partial observation considered in §2.4, the optimal control at time $t$ is a deterministic function of the best estimate. We shall see in this section that this property holds under more general assumptions. However, the fact that the deterministic function coincides with that of the corresponding deterministic problem does not extend beyond the case treated in §2.4. M. Wonham [1] was the first to obtain such a property.
3.1.

We consider the situation of Theorem 2.1. Let $v \in \mathcal{U}$. We consider $\hat y(t)$ to be defined by
(3.1)  $d\hat y = F(t)\hat y(t)\,dt + B(t;v(t))\,dt + P(t)H^*(t)R^{-1}(t)\big(dz - H(t)\hat y(t)\,dt\big)$ , $\hat y(0) = \bar x$ ,
which defines $\hat y$ uniquely, assuming $v$ given, together with (2.7), (2.1). Equation (3.1) shows that
(3.2)  $\hat y(t)$ is $\mathcal{Z}_v^t$ measurable.
Set
(3.3)  $\varepsilon(t) = x(t) - \hat y(t)$ ;
then $\varepsilon(t) = \beta(t) - \hat\beta(t)$, hence
(3.4)  $E\big[\varepsilon(t) \mid \mathcal{Z}_v^t\big] = 0$ ,
and since $\mathcal{Z}_v^t = \mathcal{Y}^t$ for $v \in \mathcal{U}$, we have $\forall t$
(3.5)  $\hat y(t) = E\big[x(t) \mid \mathcal{Z}_v^t\big]$ ,
which means that the solution of (3.1) is the Kalman filter.
3.2. A dynamic programming equation

We will study here the equation of Dynamic Programming which is connected to our control problem. It will be introduced a priori, and justified in the next paragraph. We need some notation.
We consider the second order differential operator

(3.6)    A(t)φ = - Σ_{i,j} a_{ij}(t) ∂²φ/∂x_i∂x_j - Σ_{i,j} F_{ij}(t)x_j ∂φ/∂x_i ,

where the matrix a(t) is defined by

(3.7)    a(t) = ½ P(t)H*(t)R⁻¹(t)H(t)P(t)

and F_{ij}(t) denotes the matrix F(t). In order to avoid degeneracy we will assume

(3.8)    a(t) ≥ αI ,    α > 0 .

Let us remark that, even when we assume (3.8), the operator A(t) is not completely standard, since the drift term (the first order term Σ_{i,j} F_{ij}(t)x_j ∂/∂x_i) does not have bounded coefficients.

We also set

(3.9)    ℓ̃(x,v,t) = E ℓ(x + ε(t),v,t) ,

where ε(t) is the solution of (3.3). Note that ε(t) is a Gaussian variable with mean 0 and covariance matrix P(t). Therefore, when P(t) is invertible, we have the explicit formula

(3.10)    ℓ̃(x,v,t) = ∫ ℓ(x+ξ,v,t)(2π)^{-n/2}(det P(t))^{-1/2} exp(-½ P⁻¹(t)ξ·ξ)dξ .
Similarly we define

(3.11)    h̃(x) = E h(x + ε(T)) .

We introduce next the Hamiltonian

(3.12)    H̃(x,p,t) = Inf_{v ∈ U_ad} [ℓ̃(x,v,t) + p·B(t,v)] .

We now consider the non linear Cauchy problem

(3.13)    -∂u/∂t + A(t)u - H̃(x,Du,t) = 0 ,
          u(x,T) = h̃(x) .

We will assume (to simplify some technical points)

(3.14)    U_ad compact .

From (2.22) and (3.14) it follows that the infimum in (3.12) is attained and H̃ is well defined.
We start with an existence and uniqueness result. Since we are dealing with a parabolic problem in the whole space (Cauchy problem), one expects to use the Sobolev spaces with weights introduced in Chapter II, § 3.1. However, since the coefficients of the operator A(t) are unbounded because of the terms Σ_{i,j} F_{ij}(t)x_j ∂u/∂x_i (see (3.6)), the exponential weights of Chapter II are not suitable. This is due to the fact that for such weights the ratio |Dπ|/π is bounded but does not tend to 0 as |x| → ∞, whereas we will need weights for which such a ratio tends to 0 as |x| → ∞. A way to achieve that is to use polynomial weights such as π(x) = (1+|x|²)^{-s/2}, s > 0; then

(3.19)    Dπ/π = -s x/(1+|x|²) ,

hence

(3.20)    x·Dπ/π = -s|x|²/(1+|x|²) remains bounded, and |Dπ|/π → 0 as |x| → ∞ .

257
VARIATIONAL METHODS
We define the weighted Sobolev spaces L²_π = {φ | φπ ∈ L²(Rⁿ)} and H¹_π = {φ | φπ ∈ L²(Rⁿ), Dφ π ∈ L²(Rⁿ)}. We first notice that if A(t) is an operator of the form

    A(t) = - Σ_{i,j} ∂/∂x_j (a_{ij}(x,t) ∂/∂x_i) + Σ_i a_i ∂/∂x_i ,

with the usual assumptions on the coefficients (in particular the non degeneracy assumption on the a_{ij}), then we can solve the equation (in a unique way)

(3.21)    -∂u/∂t + A(t)u = f ,    u(x,T) = ū ,

with data f ∈ L²(0,T;L²_π), ū ∈ L²_π, and u ∈ L²(0,T;H¹_π).

We begin with
Lemma 3.1. Let f ∈ L²(0,T;L²_π), ū ∈ L²_π. Let a_{ij}(t) be a matrix such that

(3.23)    a_{ij} is measurable and bounded, and Σ_{i,j} a_{ij}(t)ξ_iξ_j ≥ α|ξ|² , α > 0 ,

and let g be measurable and bounded. Then there exists one and only one solution of

(3.24)    -∂u/∂t - Σ_{i,j} a_{ij}(t) ∂²u/∂x_i∂x_j - F(t)x·Du - g·Du = f ,
          u(x,T) = ū(x) ,    u ∈ L²(0,T;H¹_π) .
Proof. Since F(t)x is not a bounded function, we cannot apply the variational theory as for (3.21). We first assume in addition that

(3.25)    f, ū are bounded.

Let us define

(3.26)    P_k(x) = x  if |x| ≤ k ,    P_k(x) = kx/|x|  if |x| ≥ k .
We consider the truncated equation

(3.27)    -∂u_k/∂t - Σ_{i,j} a_{ij}(t) ∂²u_k/∂x_i∂x_j - F(t)P_k(x)·Du_k - g·Du_k = f ,    u_k(x,T) = ū .

This is an application of the variational theory, since all coefficients are bounded. We also have the estimates (cf. Chapter II, § 3.4)

(3.28)    |u_k|_{L^∞(Rⁿ×(0,T))} ≤ C ,    C independent of k.

Let us then obtain an additional estimate in L²(0,T;H¹_π); we multiply (3.27) by u_kπ² and integrate over Rⁿ. We obtain

(3.29)    -½ d/dt |u_k(t)|²_{L²_π} + Σ_{i,j} ∫ a_{ij}(t) ∂u_k/∂x_i (∂u_k/∂x_j π² + 2u_kπ ∂π/∂x_j)dx
          - ∫ F(t)P_k(x)·Du_k u_kπ²dx - ∫ g·Du_k u_kπ²dx = ∫ f u_kπ²dx .

But from (3.28) and the properties (3.19), (3.20) of the weights, the terms involving Dπ, F(t)P_k(x) and g are bounded uniformly in k. Therefore we deduce from (3.29)

(3.30)    |u_k|_{L²(0,T;H¹_π)} ≤ C .

We can then consider a subsequence, still denoted by u_k, such that

    u_k → u in L²(0,T;H¹_π) weakly and in L^∞(Rⁿ×(0,T)) weak star.

Since F(t)P_k(x) → F(t)x in L²(0,T;L²_π) strongly, we can pass to the limit in (3.27) as k → ∞ and obtain a solution of (3.24).
Let us now obtain an a priori estimate for such a solution. Multiplying (3.24) by uπ² and integrating over Rⁿ yields

(3.31)    -½ d/dt |u(t)|²_{L²_π} + Σ_{i,j} ∫ a_{ij}(t) ∂u/∂x_i (∂u/∂x_j π² + 2uπ ∂π/∂x_j)dx
          - ∫ F(t)x·Du uπ²dx - ∫ g·Du uπ²dx = ∫ f uπ²dx .

But

    ∫ F(t)x·Du uπ²dx = ½ ∫ F(t)x·D(u²)π²dx = -½ ∫ tr F(t) u²π²dx + s ∫ (F(t)x·x)/(1+|x|²) u²π²dx .

Here we have been using the special property (3.20) of the weights. From this and (3.31) we deduce a Gronwall type inequality, where the constant depends only on the L^∞ bounds of g, F, a_{ij}. Therefore we obtain

(3.32)    |u|_{L²(0,T;H¹_π)} ≤ C (|f|_{L²(0,T;L²_π)} + |ū|_{L²_π}) ,

where C depends only on the L^∞ bounds of g, F, a_{ij}, on α and on the choice of s.

From this estimate and the linearity of (3.24) we deduce uniqueness, and also the fact that the existence and uniqueness result extends to data as in the statement of the Lemma. □

Remark 3.1. We do not have the analogue of Theorem 3.6 of Chapter II. Indeed, if we multiply (3.24) by (∂u/∂t)π², then we get terms of the form

    Σ_{i,j} ∫ F_{ij}(t)x_j ∂u/∂x_i ∂u/∂t π²dx ,

which cannot be estimated as in the case of bounded coefficients. □

Lemma 3.2. We assume

(3.33)    a_{ij} ∈ C⁰([0,T]) .

We take data f ∈ L^p(0,T;L^p_loc(Rⁿ)), ū ∈ W^{2,p}_loc(Rⁿ), 2 ≤ p < ∞. Then the solution of (3.24) satisfies

(3.34)    u ∈ W^{2,1,p}(Q_O) for any bounded regular domain O, Q_O = O × (0,T) .
Proof. We proceed as in the proof of Theorem 3.7 of Chapter II. Let φ ∈ D(Rⁿ). Set z = φu; then from (3.24) it follows that z satisfies

(3.35)    -∂z/∂t - Σ_{i,j} a_{ij} ∂²z/∂x_i∂x_j = φ(f + F(t)x·Du + g·Du) - u Σ_{i,j} a_{ij} ∂²φ/∂x_i∂x_j - 2 Σ_{i,j} a_{ij} ∂u/∂x_i ∂φ/∂x_j ,
          z(x,T) = φū .

Let O_φ be a regular bounded domain containing the support of φ. Note that, because of the term F(t)x·Du and since from Lemma 3.1 u ∈ L²(0,T;H¹_π), the right hand side of (3.35) belongs to L²(0,T;L²(O_φ)); hence z ∈ W^{2,1}(Q_φ), Q_φ = O_φ × (0,T), by Corollary 3.1 of Chapter II. Since z vanishes on ∂O_φ × (0,T), there is no difference between the weighted and the unweighted spaces. We can then argue as in the proof of Theorem 3.7 of Chapter II to complete the proof of (3.34). □

We turn now to the study of (3.13).
We have the following

Theorem 3.1. We assume (2.20), (2.21), (2.22), (3.8), (3.14) and

(3.36)    a_{ij}(t) ∈ C⁰([0,T]) ,

(3.37)    h ∈ W^{2,p}_loc(Rⁿ) .

Then there exists one and only one solution of (3.13) such that

(3.38)    u ∈ L²(0,T;H¹_π) ,    u ∈ W^{2,1,p}(Q_O) for any bounded regular domain O and any p < ∞ .

Proof. We will use the policy iteration technique. Let us first note that ℓ̃(x,v,t) is continuous(¹) in t for x,v fixed, and measurable (Lebesgue) in x,v,t. Let φ ∈ L²(0,T;H¹_π); we will write

(¹) in fact differentiable in t, for x,v fixed.
(3.39)    L_φ(x,v,t) = ℓ̃(x,v,t) + Dφ·B(t,v) .

Then the function L_φ is continuous with respect to v, and (Lebesgue) measurable in x,t. It is thus a Carathéodory function. Since U_ad is compact, there exists a (Lebesgue) measurable function v_φ(x,t) such that

(3.40)    H̃(x,Dφ(x,t),t) = L_φ(x,v_φ(x,t),t)    a.e. x,t .
The proof of this property can be found in Ekeland-Temam [1], Chapter VIII.

Let us construct an iterative sequence as follows: u⁰ is an arbitrary function satisfying the properties stated in (3.38). For uⁿ known, define vⁿ(x,t) = v_{uⁿ}(x,t), and let u^{n+1} be defined by the linear equation

(3.41)    -∂u^{n+1}/∂t - Σ_{i,j} a_{ij}(t) ∂²u^{n+1}/∂x_i∂x_j - F(t)x·Du^{n+1} - Du^{n+1}·B(t,vⁿ(x,t)) = ℓ̃(x,vⁿ(x,t),t) ,
          u^{n+1}(x,T) = h̃(x) .

For the existence and uniqueness of u^{n+1} we can apply Lemmas 3.1 and 3.2. Indeed h̃ ∈ L²_π for s large enough, and h̃ ∈ W^{2,p}_loc(Rⁿ) for any p. Moreover, setting g = B(t,vⁿ(x,t)), it follows from (2.22) that g is bounded (note also that the L^∞ bound does not depend on n). Next, setting

    fⁿ(x,t) = ℓ̃(x,vⁿ(x,t),t) ,

it follows from (2.20) that fⁿ ∈ L²(0,T;L²_π) for s large enough, and

(3.42)    |fⁿ(x,t)| ≤ C(|x|² + 1) ,

the constant C being independent of n; hence fⁿ ∈ L^p(0,T;L^p_loc(Rⁿ)) for any p.
p. is well defined and
Therefore we can assert that the sequence un satisfies (3.43)
and
Q
x
=
(O,T), where 8 is an arbitrary bounded
regular domain of
R"
.
Let next proceed as in the proof of Theorem 3 . 2 of Chpater IV.
-
n+l
F(t)x.D(u
n -u ) s D(un+'-un).B(t,vn)
(u"+'-u")(x,T) We multiply by
n+l n + 2 (u -u ) TI
I (un'l-un)+(t) hence
n+ 1 (u -un)+
=
=
0
.
and integrate over
;1
< :1
R".
1 (un+'-un)+(s)
0. Therefore the sequence u"
the second estimate ( 3 . 4 3 ) we see that
un
We obtain lids
is decreasing. By Q V 8 , hence
is bounded in
we may assert that (3.44)
un
C
u
2
1
point wise, in L (0,T;H )
d"yp(Q)
We have
II
weakly for any
By compactness we may also assert that
p
2
2
,
weakly and p<m.
By compactness we may also assert that

(3.45)    Duⁿ → Du in L^p(Q_O) strongly, ∀O, ∀p < ∞ .

Let v ∈ U_ad be arbitrary; we have

    -∂uⁿ/∂t - Σ_{i,j} a_{ij}(t) ∂²uⁿ/∂x_i∂x_j - F(t)x·Duⁿ - ℓ̃(x,v,t) - Duⁿ·B(t,v)
    ≤ -∂uⁿ/∂t - Σ_{i,j} a_{ij}(t) ∂²uⁿ/∂x_i∂x_j - F(t)x·Duⁿ - ℓ̃(x,vⁿ,t) - Duⁿ·B(t,vⁿ) ,

and from (3.41) the right hand side equals

    -∂(uⁿ-u^{n+1})/∂t - Σ_{i,j} a_{ij}(t) ∂²(uⁿ-u^{n+1})/∂x_i∂x_j - F(t)x·D(uⁿ-u^{n+1}) - D(uⁿ-u^{n+1})·B(t,vⁿ) → 0 in L^p(Q_O) weakly, ∀O, ∀p ,

by virtue of (3.44), (3.45). Therefore, letting n → ∞, we obtain

    -∂u/∂t - Σ_{i,j} a_{ij}(t) ∂²u/∂x_i∂x_j - F(t)x·Du - ℓ̃(x,v,t) - Du·B(t,v) ≤ 0 ,

and since v is arbitrary,

    -∂u/∂t + A(t)u - H̃(x,Du,t) ≤ 0 .
We obtain the reverse inequality by arguing as in the proof of Theorem 3.2, Chapter IV.

Let us now prove uniqueness. Let u¹, u² be two solutions and v¹, v² controls such that vⁱ = v_{uⁱ}, i = 1,2. We have

    -∂u¹/∂t - Σ_{i,j} a_{ij} ∂²u¹/∂x_i∂x_j - F(t)x·Du¹ = ℓ̃(x,v¹,t) + Du¹·B(t,v¹)
    ≤ ℓ̃(x,v²,t) + Du¹·B(t,v²) = H̃(x,Du²,t) + D(u¹-u²)·B(t,v²) ;

hence, setting w = u² - u¹ and subtracting the equation for u²,

    -∂w/∂t - Σ_{i,j} a_{ij} ∂²w/∂x_i∂x_j - F(t)x·Dw - Dw·B(t,v²) ≥ 0 ,    w(x,T) = 0 .

From this we deduce u² - u¹ ≥ 0. A reverse argument shows that u¹ - u² ≥ 0. Hence u¹ = u². □

Remark 3.2. It follows from (3.38) that Du is a Hölder function of x,t, with an arbitrary Hölder exponent < 1, on Q_O. □

Let us assume that

(3.46)    |ℓ(x,v,t₁) - ℓ(x,v,t₂)| ≤ C(1+|x|²)|t₁-t₂|^δ    ∀v ∈ U_ad , t₁,t₂ ∈ [0,T] ,
and

(3.47)    |B(t₁;v) - B(t₂;v)| ≤ C|t₁-t₂|^δ    ∀v ∈ U_ad , t₁,t₂ ∈ [0,T] .

From (3.9) and the assumption (2.20) it follows that

(3.48)    ℓ̃(x,v,t) is Hölder in t with exponent δ₀, uniformly for x in compact subsets of Rⁿ and v ∈ U_ad .

From the definition (3.12) we then deduce

(3.49)    H̃(x,p,t) is Hölder in t with exponent δ₀, uniformly for x,p in compact sets,

where δ₀ = min(δ,1/2) and the Hölder constant is independent of the arguments. From (3.49) and Remark 3.2 it follows that

(3.50)    H̃(x,Du(x,t),t) is Hölder in x with an arbitrary exponent, and Hölder in t with exponent δ₀, uniformly on compact subsets of Rⁿ × [0,T] .

We can then state
Theorem 3.2. We make the assumptions of Theorem 3.1, as well as (3.46), (3.47) and

(3.51)    h ∈ C^{2+γ}_loc(Rⁿ) ,    γ > 0 ,

(3.52)    F_{ij}(t) ∈ C^δ([0,T]) ,    a_{ij}(t) ∈ C^δ([0,T]) .

Then the solution u of (3.13) satisfies

(3.53)    u ∈ C^{2,1}(Q̄_O) for any regular bounded domain O of Rⁿ, with second derivatives in x and first derivative in t Hölder continuous .

Proof. Set z = uφ, where φ ∈ D(Rⁿ). Then z satisfies the equation

(3.54)    -∂z/∂t - Σ_{i,j} a_{ij} ∂²z/∂x_i∂x_j = φ(H̃(x,Du,t) + F(t)x·Du) - u Σ_{i,j} a_{ij} ∂²φ/∂x_i∂x_j - 2 Σ_{i,j} a_{ij} ∂u/∂x_i ∂φ/∂x_j ,
          z(x,T) = h̃φ .

From Remark 3.2, (3.50) and (3.51), (3.52) it follows that the right hand side of (3.54) is Hölder in x,t with adequate exponents, and h̃φ ∈ C^{2+γ} on compact subsets of Rⁿ. Considering z - h̃φ, we obtain an initial condition which is 0. We can then assert that z is twice continuously differentiable in x with second derivatives Hölder, on Q_φ, and once continuously differentiable in t, with Hölder derivative on Q_φ. This follows from the result of Ladyzenskaya et al. mentioned in Chapter II, § 3.3.3. Since φ is arbitrary, we have in particular the property (3.53). □
With the assumptions of Theorem 3.2, the function L(x,v,p,t) = ℓ̃(x,v,t) + p·B(t,v) is continuously differentiable in x,v,p and Hölder in t. There exists a Borel function v̂(x,p,t) such that

(3.55)    H̃(x,p,t) = L(x,v̂(x,p,t),p,t) .

It is hard to give more properties of v̂, because the infimum in v in (3.12) is not necessarily unique. A reasonable candidate for an optimal feedback is the following

(3.56)    û(x,t) = v̂(x,Du(x,t),t) ,

but this is just a Borel function with values in U_ad. However we would like to solve the following equation

(3.57)    dŷ = F(t)ŷ dt + B(t;û(ŷ,t))dt + K(t)(dz - H(t)ŷ dt) ,

where z is a continuous process having an Ito differential, assumed to be given. We would like also to obtain a unique solution, in order to assert that the solution ŷ is an adapted process with respect to the family Z^t generated by the process z(t). This unfortunately cannot be achieved with such weak properties of the function û. This difficulty motivates the following very restrictive assumption

(3.58)    v̂(x,p,t) is continuously differentiable with respect to x,p, with partial derivatives bounded on sets |x| ≤ M , |p| ≤ M , t ∈ [0,T] .
When (3.58) is satisfied as well as the assumptions of Theorem 3.2, then we have

(3.59)    û(x,t) is continuously differentiable in x, with derivative bounded on sets of the form |x| ≤ M , t ∈ [0,T] .

To simplify a little bit we assume that

(3.60)    K(t) = P(t)H*(t)R⁻¹(t) is differentiable with bounded derivatives .

Then (3.57) can be rewritten as an integral equation and solved for each sample. Namely

(3.61)    ŷ(t) = x + ∫₀^t (F-KH)(s)ŷ(s)ds + ∫₀^t B(s,û(ŷ(s),s))ds + K(t)z(t) - ∫₀^t (dK/ds)(s)z(s)ds .
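Equation (3.61) is, for each sample path, a deterministic integral equation, and it can be solved by Picard (successive approximation) iterations, which converge since the right hand side is Lipschitz in ŷ thanks to (3.58)–(3.59). A scalar sketch is given below; the coefficient a(s) standing in for (F−KH)(s), the forcing g(t) standing in for K(t)z(t) − ∫(dK/ds)z ds, and tanh as a bounded Lipschitz feedback are all illustrative assumptions.

```python
import numpy as np

# Pathwise Picard iteration for a scalar analogue of (3.61) (illustrative data):
#   yh(t) = x0 + int_0^t [a(s) yh(s) + uhat(yh(s))] ds + g(t),
# with uhat Lipschitz, as required by the assumptions (3.58)-(3.59).
n = 1000
t = np.linspace(0.0, 1.0, n + 1)
dt = t[1] - t[0]
x0 = 1.0
a = -1.0 + 0.0 * t                 # the kernel (F - KH)(s), frozen for simplicity
g = 0.1 * np.sin(5 * t)            # a given sample-path forcing; g(0) = 0
uhat = np.tanh                     # a bounded, Lipschitz stand-in feedback

def picard_step(y):
    integrand = a * y + uhat(y)
    # cumulative trapezoid rule for int_0^t integrand ds
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    return x0 + integral + g

y = np.full(n + 1, x0)
for _ in range(60):
    y_new = picard_step(y)
    delta = np.max(np.abs(y_new - y))
    y = y_new

print(delta, y[-1])
```

The successive approximations converge factorially fast on a bounded time interval, so after a few dozen sweeps the discrete fixed point is reached to machine precision.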
The existence and uniqueness of the solution of (3.61) follows from the assumptions and the theory of integral equations. Clearly the process ŷ(t) is adapted to Z^t.

We now consider a system which consists in adding to (3.61) an equation for z. Namely we consider(¹)

(3.62)    ŷ(t) = x + ∫₀^t (F-KH)(s)ŷ(s)ds + ∫₀^t B(s,û(ŷ(s),s))ds + K(t)ẑ(t) - ∫₀^t (dK/ds)(s)ẑ(s)ds ,

          ẑ(t) = ζ(t) + ∫₀^t H(s)ŷ(s)ds .

This system has a unique solution (ŷ,ẑ), which are continuous processes adapted to Z̃^t, the σ-algebra generated by ζ(t). Of course we also have

(¹) To have a more standard formulation, one could make the change of functions ỹ = ŷ - Kẑ.
that ŷ is adapted to Z̃^t. We now define the feedback control

(3.65)    û(t) = û(ŷ(t),t)

and consider the corresponding state equation

(3.64)    dx = F x dt + B(t,û(t))dt + G dw ,    x(0) = ξ .

Take for ζ the process

(3.63)    ζ(t) = ∫₀^t H(s)ε(s)ds + η(t) ,

where ε is the solution of (3.3); ζ does not depend on the control. From (3.64) it follows that

(3.66)    z(t) = ∫₀^t H(s)x(s)ds + η(t) = ∫₀^t H(s)ŷ(s)ds + ∫₀^t H(s)ε(s)ds + η(t) = ∫₀^t H(s)ŷ(s)ds + ζ(t) ,

which with the second equation (3.62) implies

(3.69)    ẑ(t) = z(t) .

Hence the process ẑ coincides with the observation process. From that and (3.64) it follows that

(3.70)    û ∈ U . □
We are now in a position to show that û is optimal with respect to the payoff J(v(·)) given by (2.19).

Theorem 3.3. We make the assumptions of Theorem 3.2, and (3.58), (3.60). Then the admissible control û defined by (3.65) is an optimal control for the payoff J(v(·)), defined by (2.19). Moreover we have

(3.71)    u(x,0) = inf J(v(·)) .

Proof. Let v be an arbitrary control, v ∈ U. Let O be a bounded regular domain of Rⁿ and τ the exit time of ŷ(t) from O. We recall that the Kalman filter (3.1) satisfies the Ito equation

(3.72)    dŷ = F ŷ dt + B(t,v(t))dt + P H* R^{-1/2} dν ,    ŷ(0) = x ,

where ν is a Z^t Wiener process. We apply Ito's formula to the function u, solution of (3.13), and to the process ŷ. This is possible since u ∈ C^{2,1}. We apply this formula with t = T ∧ τ and take the expectation; we obtain
(3.73)    u(x,0) = E u(ŷ(T∧τ),T∧τ) + E ∫₀^{T∧τ} [H̃(ŷ(s),Du(ŷ(s),s),s) - B(s,v(s))·Du(ŷ(s),s)]ds ,

and from the definition of H̃ (cf. (3.12)) it follows that

(3.74)    u(x,0) ≤ E u(ŷ(T∧τ),T∧τ) + E ∫₀^{T∧τ} ℓ̃(ŷ(s),v(s),s)ds .

Assume for a while that h and ℓ are bounded functions; then u is bounded. Note that if O_R is the ball of radius R, and τ_R the corresponding exit time, then as R → ∞

    u(ŷ(T∧τ_R),T∧τ_R) → u(ŷ(T),T) = h̃(ŷ(T))    a.s.

    ∫₀^{T∧τ_R} ℓ̃(ŷ(s),v(s),s)ds → ∫₀^T ℓ̃(ŷ(s),v(s),s)ds    a.s. ;

hence, by Lebesgue's Theorem,

(3.75)    u(x,0) ≤ E h̃(ŷ(T)) + E ∫₀^T ℓ̃(ŷ(s),v(s),s)ds .

But recall that ŷ(s) = x(s) - ε(s), and

    E h(x(T)) = E h(ŷ(T)+ε(T)) = E[E(h(ŷ(T)+ε(T)) | Z^T)] = E h̃(ŷ(T)) ,

since ε(T) is independent of Z^T and ŷ(T) is Z^T measurable. Similarly, since v(s) is Z^s measurable,

    E ℓ̃(ŷ(s),v(s),s) = E ℓ(x(s),v(s),s) .

274
CHAPTER VI
Therefore we obtain from (3.75)

(3.76)    u(x,0) ≤ E h(x(T)) + E ∫₀^T ℓ(x(s),v(s),s)ds = J(v(·)) ,

at least provided we assume that h, ℓ are bounded functions. Consider now the general case. Remember that x(s) = x₁(s) + β(s), where (cf. (2.12))

    dx₁/dt = F x₁ + B(t;v(t)) ,    x₁(0) = 0 ,

so that |x₁(s)| ≤ C, a deterministic constant (since B is bounded), and

    β(s) = Φ(s,0)ξ + ∫₀^s Φ(s,σ)G(σ)dw(σ) .

Using assumption (2.20),

(3.77)    |ℓ(x(s),v(s),s)| ≤ C₁(1 + |β(s)|²) ,

hence

(3.78)    E ∫₀^T |ℓ(x(s),v(s),s)|ds ≤ C₁ ,

where C₁ depends on C₀ and some fixed constants.

Now consider a sequence ℓ_k of bounded functions such that

    ℓ_k(x,v,s) → ℓ(x,v,s) pointwise ,    |ℓ_k| ≤ |ℓ| .

Such a sequence exists. As k → ∞, it follows from the estimate (3.78) that

(3.79)    E ∫₀^T ℓ_k(x(s),v(s),s)ds → E ∫₀^T ℓ(x(s),v(s),s)ds .

Similarly we may consider a sequence of bounded functions h_k such that h_k(x) → h(x) pointwise. Now let u_k be the solution of (3.13) for the data ℓ_k, h_k. From the estimates of Theorem 3.1, using in particular the second estimate (3.43), it is easy to check that

    u_k → u in L²(0,T;H¹_π) weakly and in W^{2,1,p}(Q_O) weakly, ∀O, ∀p < ∞ .

In particular u_k(x,0) → u(x,0). But from (3.76)

    u_k(x,0) ≤ E h_k(x(T)) + E ∫₀^T ℓ_k(x(s),v(s),s)ds ,
and passing to the limit, we see that (3.76) holds in general.

Consider now the control û defined in (3.65). Applying Ito's formula to u and to the process ŷ yields equality in (3.74). There is a difficulty in passing to the limit in a term like E u(ŷ(T∧τ_R),T∧τ_R) as R → ∞. To overcome this, consider the equation

    -∂u_k/∂t - Σ_{i,j} a_{ij}(t) ∂²u_k/∂x_i∂x_j - F(t)x·Du_k - Du_k·B(t,û(x,t)) = ℓ̃_k(x,û(x,t),t) ,
    u_k(x,T) = h̃_k(x) .

Note that in the above equation û depends on u, and that ℓ_k, h_k are the approximations of ℓ, h introduced before. It is easy to see that, for instance, u_k(x,0) → u(x,0). But from the boundedness of h_k, ℓ_k, one may establish

    u_k(x,0) = E h_k(x(T)) + E ∫₀^T ℓ_k(x(t),û(t),t)dt ,

and the right hand side tends to J(û(·)) as k → ∞. Finally we obtain u(x,0) = J(û(·)), which with (3.76) shows that û(·) is an optimal control, and completes the proof of the desired result. □

Remark 3.3. There are ways to relax the assumptions (3.58), but at the price of enlarging the class of admissible controls (see A. Bensoussan - J.L. Lions [1]). □
COMMENTS ON CHAPTER VI

1. The method of density developed in § 1.2 is due to M. Viot [5]. See also A. Bensoussan - M. Viot [1].

2. The adjoint process p(t) is not an adapted process. Of course Π(t) = E[p(t)|Z^t] is adapted. An interesting question is the following: does Π(t) satisfy a S.D.E.? One can show that Π(t) satisfies an equation with an additional term which is a martingale (see A. Bensoussan - G. Hurst - B. Näslund [1]). This leads naturally to another question. Can one obtain a stochastic maximum principle with an adjoint system whose solution is an adapted process? This problem was solved by J.M. Bismut [1]. Unfortunately the equation for Π(t) involves additional martingale terms. So it is not certain that this is preferable to a form like (1.34).

3. The proof of Theorem 3.1 given here simplifies that of A. Bensoussan - J.L. Lions [1]. The reader will check that we can take a_{ij} depending on x, with appropriate assumptions. But the equation that we want to solve has coefficients which do not depend on x.

4. For the separation principle in cases of degeneracy, cf. J.L. Menaldi [1].
CHAPTER VII

PROBLEMS OF OPTIMAL STOPPING

INTRODUCTION

Optimal stopping problems were first introduced by H. Chernoff [1] for applications in Statistics, and soon related to the Stefan problem (the free boundary problem of ice melting in water); see P. Van Moerbeke [1]. The method of variational inequalities was introduced by A. Bensoussan - J.L. Lions [3]. For a full treatment of the applications of V.I. in stochastic control, cf. A. Bensoussan - J.L. Lions [1]. For a "more" probabilistic approach and connections with the problem of excessive elements, see A. Shiryaev [1] and earlier work of E. Dynkin [3]. See also N. Krylov [1]. We have restricted ourselves to stationary diffusion processes and general semi-groups. For other processes and models, cf. A. Bensoussan - J.L. Lions [1], M. Robin [1], R. Anderson - A. Friedman [1], [2], J.L. Menaldi [2]. For numerical techniques to solve V.I., see R. Glowinski - J.L. Lions - R. Trémolières [1].
1. SETTING OF THE PROBLEM

Let a_{ij}(x), b_i(x) be functions on Rⁿ such that

(1.1)    a_{ij} ∈ W^{1,∞}(Rⁿ) ,    Σ_{i,j} a_{ij}(x)ξ_iξ_j ≥ α|ξ|² ,    α > 0 ,

(1.2)    b_i ∈ L^∞(Rⁿ) .

Let a₀(x) be such that a₀ ∈ L^∞(Rⁿ). We denote by P_x the solution of the martingale problem relative to A, with initial condition x at time 0, where

(1.4)    Aφ = - Σ_{i,j} a_{ij}(x) ∂²φ/∂x_i∂x_j - Σ_i b_i(x) ∂φ/∂x_i .

Then, as already seen,

(1.5)    P_x(x(0) = x) = 1 ,

(1.6)    φ(x(t)) - φ(x) + ∫₀^t Aφ(x(s))ds is a M^{0,t} martingale.

There exists a standard n dimensional Wiener process w(t) such that

(1.7)    x(t) = x + ∫₀^t b(x(s))ds + ∫₀^t σ(x(s))dw(s) ,    a.s. P_x .

Let θ be a stopping time; we define the functional

(1.8)    J_x(θ) = E_x[∫₀^{θ∧τ} f(x(t)) exp(-∫₀^t a₀(x(s))ds)dt + ψ(x(θ)) χ_{θ<τ} exp(-∫₀^θ a₀(x(s))ds)] ,

where

(1.9)    τ = first exit time of the process x(t) from O, an open bounded regular domain of Rⁿ ,

(1.10)    a₀ ≥ 0 ,    a₀ bounded ,

(1.11)    f ∈ L^p(O) ,    p > n/2 ,

(1.12)    ψ ∈ C⁰(Ō) ,    ψ|_Γ ≥ 0 ,    Γ = ∂O .

We are interested in characterizing the function

(1.13)    u(x) = inf_θ J_x(θ)

and in finding optimal stopping times.
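The functional (1.8) can be estimated by Monte Carlo once a stopping rule is fixed. The sketch below uses a hypothetical one-dimensional example — dx = dw on O = (−1,1), f = 1, a₀ = 0, ψ ≡ 0.2, none of which comes from the text — and the threshold rule "stop as soon as |x| ≤ x*". For this particular rule the payoff J_x(θ) is also known in closed form, which makes the experiment checkable.

```python
import math, random

# Monte Carlo evaluation of a stopped payoff of type (1.8) for a hypothetical
# 1-d example: dx = dw on O = (-1, 1), f = 1, a0 = 0, psi = 0.2, and the
# (assumed) threshold rule "stop as soon as |x| <= xstar".
random.seed(1)
xstar = 1.0 - math.sqrt(0.2)        # smooth-fit threshold of the associated V.I.
dt, sdt = 1e-4, math.sqrt(1e-4)

def payoff_one_path(x):
    t = 0.0
    while t < 5.0:                   # tau is a.s. finite; cap for safety
        if abs(x) <= xstar:
            return t + 0.2           # stopped before exiting O: pay psi
        if abs(x) >= 1.0:
            return t                 # exited O first: no terminal payment
        x += random.gauss(0.0, sdt)
        t += dt
    return t

x0 = 0.8
est = sum(payoff_one_path(x0) for _ in range(4000)) / 4000
exact = 0.2 - (x0 - xstar)**2        # closed-form value of this rule at x0
print(est, exact)
```

The agreement (up to Monte Carlo noise and the O(√dt) barrier-crossing bias of discrete monitoring) illustrates that u(x) in (1.13) is an infimum over such rules, attained here at the free boundary of the variational inequality studied below.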
We will formulate an "equation of Dynamic Programming" for the function u(x). As will be seen, it will not be an equation but a variational inequality. We will also, according to the plan followed in Chapter IV, consider a semi-group formulation of the stopping time problem.

2. UNILATERAL PROBLEMS

We write here the operator A in variational form

(2.1)    a(u,v) = Σ_{i,j} ∫_O a_{ij} ∂u/∂x_i ∂v/∂x_j dx + Σ_i ∫_O a_i ∂u/∂x_i v dx + ∫_O a₀uv dx ,

with

(2.2)    a_i = -b_i + Σ_j ∂a_{ij}/∂x_j .

We consider the Hamilton Jacobi equation

(2.3)    Au^ε + a₀u^ε + (1/ε)(u^ε - ψ)⁺ = f ,    u^ε|_Γ = 0 .

This is indeed a Hamilton Jacobi equation, as we have seen in Chapter IV, (2.8). Indeed we define

(2.4)    H(x,λ) = -(1/ε)(λ - ψ(x))⁺ .

It clearly satisfies all the assumptions of the Hamiltonian H in Chapter IV, (2.4), (2.5). It may not satisfy the property H(·,0,0) ∈ L^∞ required there. We will assume here

(2.5)    ψ ∈ W^{2,p}(O) ,    ψ|_Γ ≥ 0 .

The problem we shall study is the unilateral problem

(2.6)    Au + a₀u ≤ f ,    u ≤ ψ ,
         (u-ψ)(Au + a₀u - f) = 0 ,
         u|_Γ = 0 ,    u ∈ W^{2,p}(O) .

Theorem 2.2. We make the assumptions of Theorem 2.1, as well as (2.5). Then there exists one and only one solution of (2.6). □
Lemma 2.1. Without loss of generality we may assume a₀ ≥ γ > 0.

Proof. Setting u = wz, where w is defined in (2.17) of Chapter II, we are led to the same problem with the following changes:

    b_i → b_i + (2/w) Σ_j a_{ij} ∂w/∂x_j ,    a₀ → a₀ + Aw/w ,    f → f/w ,    ψ → ψ/w ,

and w may be chosen so that a₀ + Aw/w ≥ γ > 0; hence the desired result. □
Lemma 2.2. Let λ be large enough. Then there exists one and only one solution of

(2.8)    Au + (a₀+λ)u ≤ f ,    u ≤ ψ ,
         (u-ψ)(Au + (a₀+λ)u - f) = 0 ,
         u|_Γ = 0 ,    u ∈ W^{2,p}(O) .

Proof. We recall that a(u,v) is the bilinear form associated with A + a₀. We consider

(2.9)    λ large enough so that a(v,v) + λ|v|² ≥ δ||v||²    ∀v ∈ H₀¹ .

Let us prove uniqueness. Let u, ũ be two solutions; then we have

    ∫_O (Au + (a₀+λ)u - f)(u-ũ)dx = ∫_O (Au + (a₀+λ)u - f)(ψ-ũ)dx ≤ 0 ,

and similarly for ũ. By addition,

    a(u-ũ, u-ũ) + λ|u-ũ|² ≤ 0 ,

hence u = ũ.

For the existence, we consider the penalized problem

(2.10)    Au^ε + (a₀+λ)u^ε + (1/ε)(u^ε-ψ)⁺ = f ,    u^ε|_Γ = 0 ,    u^ε ∈ W^{2,p}(O) .
We multiply (2.10) by ((u^ε-ψ)⁺)^{p-1} and note that, for λ large enough, the terms coming from A + a₀ + λ have a favourable sign. Therefore from (2.10) we deduce

(2.11)    |(1/ε)(u^ε-ψ)⁺|_{L^p} ≤ C ,

and then, from equation (2.10) and Theorem 2.1 of Chapter II,

(2.12)    |u^ε|_{W^{2,p}} ≤ C .

Let then u^ε be a subsequence such that

    u^ε → u in W^{2,p} weakly ,

hence also in W^{1,p} strongly. From (2.11) it follows that

(2.13)    u ≤ ψ .

From equation (2.10) we can deduce, letting ε tend to 0,

(2.14)    Au + (a₀+λ)u ≤ f .

We multiply (2.10) by (u^ε-ψ)⁻; therefore

(2.15)    ∫_O (Au^ε + (a₀+λ)u^ε - f)(u^ε-ψ)⁻ = 0 .

But (u^ε-ψ)⁻ → (u-ψ)⁻ = ψ - u in L^p strongly. Therefore from (2.15) we deduce that

    ∫_O (Au + (a₀+λ)u - f)(ψ-u) = 0 ,

which together with (2.14) and (2.13) implies

    (Au + (a₀+λ)u - f)(u-ψ) = 0    a.e.

Hence u is a solution of (2.8). □

Lemma 2.3. Let us consider the solution ũ of (2.8) with f changed into f + φ, where φ ∈ L^∞. Then we have the estimate

(2.16)    ||u - ũ||_{L^∞} ≤ (1/(λ+γ))||φ||_{L^∞} .

Proof. Consider the penalized problems associated to u, ũ, and denote by u^ε, ũ^ε the corresponding solutions. By difference we have

    A(u^ε-ũ^ε) + (a₀+λ)(u^ε-ũ^ε) + (1/ε)[(u^ε-ψ)⁺ - (ũ^ε-ψ)⁺] = -φ ,

and by Lemma 2.1 of Chapter IV it follows that

    ||u^ε - ũ^ε||_{L^∞} ≤ (1/(λ+γ))||φ||_{L^∞} ,

from which and the convergence result of Lemma 2.2 we deduce (2.16). □
Proof of Theorem 2.2. Consider, for z ∈ L^∞, the solution ζ of

(2.17)    Aζ + (a₀+λ)ζ ≤ f + λz ,    ζ ≤ ψ ,
          (ζ-ψ)(Aζ + (a₀+λ)ζ - f - λz) = 0 ,
          ζ|_Γ = 0 ,    ζ ∈ W^{2,p}(O) .

Such a solution exists and is uniquely defined by Lemma 2.2. Here we use explicitly the assumption that p > n/2, to imply that ζ ∈ C⁰(Ō), hence ζ ∈ L^∞(O). Thus by (2.17) we have defined a map S_λ(z) = ζ from L^∞ into itself. Let us show that it is a contraction. Indeed, take z₁, z₂ ∈ L^∞ and ζ₁ = S_λ(z₁), ζ₂ = S_λ(z₂). Then from Lemma 2.3 we deduce that

    ||ζ₁ - ζ₂||_{L^∞} ≤ (λ/(λ+γ))||z₁ - z₂||_{L^∞} ,

which proves that S_λ is a contraction. Since the solutions of (2.6) coincide with the fixed points of S_λ, the proof is completed. □

Remark 2.2. We say that problem (2.6) is a unilateral problem, since it is defined by one-sided conditions. The terminology comes from Mechanics, where such problems are met frequently. Our motivation is completely different. □
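The penalized problem used above is easy to experiment with in one dimension. The sketch below — data f = 1, ψ = 0.1, operator −d²/dx² on (0,1), and a primal-dual active-set solver are all illustrative choices, not taken from the text — exhibits the two facts used in the proof: the penalized solution exceeds the obstacle only by O(ε), and off the contact set the equation Au = f holds.

```python
import numpy as np

# A minimal sketch of a penalized unilateral problem of the type (2.10):
#   -u'' + (1/eps)(u - psi)^+ = f on (0,1),  u(0) = u(1) = 0,
# solved by finite differences and an active-set loop (illustrative data).
n, eps = 200, 1e-3
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones(n + 1)
psi = 0.1 * np.ones(n + 1)

u = np.zeros(n + 1)
for _ in range(50):                       # active-set iterations
    active = u > psi                      # where the penalty term switches on
    A = np.zeros((n + 1, n + 1))
    rhs = f.copy()
    A[0, 0] = A[n, n] = 1.0
    rhs[0] = rhs[n] = 0.0
    for i in range(1, n):
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2 + (active[i] / eps)
        rhs[i] += active[i] * psi[i] / eps
    u_new = np.linalg.solve(A, rhs)
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

print(np.max(u - psi), u[n // 4])
```

The overshoot max(u − ψ) is of order ε, mirroring the estimate (2.11) that bounds (1/ε)(u^ε−ψ)⁺, and away from the contact set the computed u matches the smooth-fit solution of the limiting unilateral problem.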
3. VARIATIONAL INEQUALITIES

Assumptions (2.5) are too restrictive for the applications that we have in mind. We will relax them, but it will then not be possible to formulate the problem as in (2.6), since clearly some regularity of the solution u is required in such a formulation.
Theorem 3.1. We make the assumptions of Theorem 2.1, and the coercivity assumption

(3.1)    a(v,v) ≥ δ||v||² ,    δ > 0 ,    ∀v ∈ H₀¹ .

Then there exists one and only one solution of the variational inequality

(3.2)    a(u,v-u) ≥ (f,v-u)    ∀v ∈ H₀¹ , v ≤ ψ ;    u ∈ H₀¹ , u ≤ ψ ,

provided that the set

(3.3)    K = {v ∈ H₀¹ | v ≤ ψ}

is not empty.

Proof. Let v⁰ ∈ K. We multiply (2.3) by u^ε - v⁰; then we get

(3.4)    a(u^ε,u^ε-v⁰) + (1/ε)((u^ε-ψ)⁺, u^ε-v⁰) = (f,u^ε-v⁰) .

But

    ((u^ε-ψ)⁺, u^ε-v⁰) = |(u^ε-ψ)⁺|² + ((u^ε-ψ)⁺, ψ-v⁰) ≥ 0 ,

hence

    a(u^ε,u^ε) ≤ a(u^ε,v⁰) + (f,u^ε-v⁰) ,

from which and the coercivity assumption (3.1) we deduce

(3.5)    ||u^ε|| ≤ C .

Let u^ε be a subsequence such that u^ε → u in H₀¹ weakly, hence also in L² strongly. We deduce from (3.4) and (3.5)

    ((u^ε-ψ)⁺, u^ε-v⁰) → 0 ,

hence

    ((u-ψ)⁺, u-v⁰) = 0 ,  or  ((u-ψ)⁺, u-ψ) ≤ 0 ,

which implies (u-ψ)⁺ = 0, or u ≤ ψ.

Let now v ≤ ψ, v ∈ H₀¹. We multiply (2.3) by v - u^ε. It follows

    a(u^ε,v-u^ε) + (1/ε)((u^ε-ψ)⁺, v-u^ε) = (f,v-u^ε) .

But

    ((u^ε-ψ)⁺, v-u^ε) = ((u^ε-ψ)⁺, v-ψ) - ((u^ε-ψ)⁺, u^ε-ψ) ≤ 0 ,

hence

    a(u^ε,v-u^ε) ≥ (f,v-u^ε) .

From the weak lower semi-continuity of v → a(v,v), we deduce in the limit

    a(u,v) ≥ a(u,u) + (f,v-u) ,

hence u is a solution of (3.2). This solution is unique. Indeed, let u, ũ be two solutions; then we have

    a(u,ũ-u) ≥ (f,ũ-u) ,    a(ũ,u-ũ) ≥ (f,u-ũ) ,

and by addition

    a(u-ũ, u-ũ) ≤ 0 ,

from which and the coercivity we deduce u = ũ. □
Problem (3.2) is called a variational inequality (V.I.). In the following, we will weaken assumption (3.1), which is too strong for our applications. Clearly such an assumption requires that a₀ ≥ γ > 0 for some sufficiently large constant γ. However, since we do not want to rely on such an assumption, we will prove a result similar to Lemma 2.1.
Lemma 3.1. Without loss of generality, we may assume

(3.6)    a₀ ≥ γ > 0 .

Proof. Make the change of unknown function u = wz. Then we get the same problem, with a_{ij} replaced by a_{ij}w², modified first and zero order coefficients, and the data f replaced by wf, ψ by ψ/w. We recall that 1 ≤ w ≤ 2, and that w may be chosen so that the new zero order coefficient satisfies (3.6). The result follows. □
Let us now consider the problem

(3.6)'    a(u,v-u) + λ(u,v-u) ≥ (f,v-u)    ∀v ∈ H₀¹ , v ≤ ψ ;    u ∈ H₀¹ , u ≤ ψ .

For λ large enough, the assumptions of Theorem 3.1 are satisfied (namely (3.1)). Therefore there exists one and only one solution of (3.6)'. We have the analogue of Lemma 2.3.

Lemma 3.2. Let u be the solution of (3.6)', and ũ be the solution of the corresponding problem with f changed into f + φ, φ ∈ L^∞. Then we have

(3.7)    ||u - ũ||_{L^∞} ≤ (1/(λ+γ))||φ||_{L^∞} .

Proof. Since the penalized problems are the same as in section 2, we still have the estimate

    ||u^ε - ũ^ε||_{L^∞} ≤ (1/(λ+γ))||φ||_{L^∞} .

Hence u^ε - ũ^ε remains bounded in L^∞. However u^ε - ũ^ε → u - ũ in L², hence also in L^∞ weak star, from which it follows that (3.7) holds. □

Lemma 3.3. Assume f, ψ bounded. Then the solution of (3.6)' satisfies

(3.9)    ||u||_{L^∞} ≤ (1/(λ+γ))||f||_{L^∞} + ||ψ||_{L^∞} .

Proof. Let ū be the solution of (3.6)' corresponding to f = 0. Note that ū ≤ ψ. Let

    K = ||ψ||_{L^∞} .

We shall prove that

(3.10)    ū + K ≥ 0 .

Take v = ū + (ū+K)⁻ as a test function in (3.6)', which is possible by the choice of K. We obtain

    a(ū,(ū+K)⁻) + λ(ū,(ū+K)⁻) ≥ 0 ,

or

    -a((ū+K)⁻,(ū+K)⁻) - λ|(ū+K)⁻|² ≥ K((a₀+λ),(ū+K)⁻) ≥ 0 ,

hence (ū+K)⁻ = 0, which proves (3.10). Therefore ||ū||_{L^∞} ≤ ||ψ||_{L^∞}. Using next Lemma 3.2, we obtain (3.9). □
We can then obtain the

Theorem 3.2. We assume (1.1), (1.2), (1.10), (2.2), and f ∈ L^p(O), p > n/2, ψ ∈ C⁰(Ō). Then there exists one and only one solution of the V.I.

(3.11)    a(u,v-u) ≥ (f,v-u)    ∀v ∈ H₀¹ ∩ L^∞ , v ≤ ψ ;    u ∈ H₀¹ ∩ L^∞ , u ≤ ψ ,

provided that the set K defined in (3.3) is not empty.

Proof. Consider first the equation

(3.12)    a(u⁰,v) = (f,v)    ∀v ∈ H₀¹ ,    u⁰ ∈ H₀¹ ,

or

    Au⁰ + a₀u⁰ = f ,    u⁰|_Γ = 0 .
We know that u⁰ ∈ W^{2,p}(O); hence, since p > n/2, u⁰ ∈ C⁰(Ō). Now set ū = u - u⁰, where u is a solution of (3.11). Then ū is a solution of the following problem

    a(ū,v-ū) ≥ 0    ∀v ∈ H₀¹ , v ≤ ψ - u⁰ ;    ū ∈ H₀¹ ∩ L^∞ , ū ≤ ψ - u⁰ ,

which is the same problem for the data f = 0 and the obstacle ψ̄ = ψ - u⁰ ∈ L^∞. Moreover, if we set

    K⁰ = {v ∈ H₀¹ | v ≤ ψ - u⁰} ,

then K⁰ is not empty, since it contains v⁰ - u⁰, where v⁰ ∈ K. Therefore, without loss of generality, we may assume f = 0, and also a₀ ≥ γ > 0 (Lemma 3.1). Let next z ∈ L^∞; define ζ = S_λ(z) as the solution of

(3.13)    a(ζ,v-ζ) + λ(ζ,v-ζ) ≥ λ(z,v-ζ)    ∀v ∈ H₀¹ , v ≤ ψ ;    ζ ∈ H₀¹ , ζ ≤ ψ .

For λ large enough, we may apply Theorem 3.1 to ensure the existence and uniqueness of ζ. Moreover, from Lemma 3.3, we see that ζ ∈ L^∞. Hence we have defined a map S_λ from L^∞ into itself. This map is a contraction. Indeed, if z₁, z₂ ∈ L^∞ and ζ₁, ζ₂ are the corresponding solutions of (3.13), it follows from Lemma 3.2 that

    ||ζ₁ - ζ₂||_{L^∞} ≤ (λ/(λ+γ))||z₁ - z₂||_{L^∞} .

But clearly the fixed points of S_λ coincide with the solutions of (3.11) when f = 0. Hence the existence and uniqueness. □

Remark 3.1.
When we make the assumptions of Theorem 2.2, we have one and only one solution of (2.6), and also of (3.11). These solutions coincide. Indeed, let us check that the solution u of (2.6) is a solution of (3.11). Let v ∈ H₀¹, v ≤ ψ; we have

    ∫_O (Au + a₀u - f)(v-u)dx = ∫_O (Au + a₀u - f)(v-ψ)dx ≥ 0 ,

and by Green's formula we see that u satisfies (3.11). This justifies the introduction of the V.I. as a weaker formulation of (2.6), when ψ is not regular. □

Lemma 3.4. Let ψ, ψ̃ ∈ L^∞, and let u, ũ be the solutions of (3.11) corresponding to them. Then one has

(3.14)    ||u - ũ||_{L^∞} ≤ ||ψ - ψ̃||_{L^∞} .

Proof. Let us consider u^ε and ũ^ε to be the solutions of

(3.15)    Au^ε + a₀u^ε + λu^ε + (1/ε)(u^ε-ψ)⁺ = f ,    u^ε|_Γ = 0 ,

and of the analogous problem (3.15)~ with ψ̃ in place of ψ, with λ large. Then we have

(3.16)    |u^ε - ũ^ε| ≤ K ,    K = ||ψ - ψ̃||_{L^∞} .

Set w = u^ε - ũ^ε - K. We multiply (3.15) by w⁺ and (3.15)~ by -w⁺, and add up:

(3.17)    ∫_O [A(u^ε-ũ^ε) + a₀(u^ε-ũ^ε) + λ(u^ε-ũ^ε)]w⁺ dx + (1/ε) ∫_O χ w⁺ dx = 0 ,

where

    χ = (u^ε-ψ)⁺ - (ũ^ε-ψ̃)⁺ .

Indeed, on the set where w⁺ > 0 we have u^ε - ψ > ũ^ε - ψ̃, hence (u^ε-ψ)⁺ ≥ (ũ^ε-ψ̃)⁺, which proves that χw⁺ ≥ 0. Now from (3.17) we deduce

    a(w,w⁺) + λ(w,w⁺) + ∫_O [(a₀+λ)K + (1/ε)χ] w⁺ dx ≤ 0 ,

hence w⁺ = 0. Therefore u^ε - ũ^ε ≤ K. By a reverse argument we conclude that (3.16) holds.
Therefore, considering the solutions of the penalized problems

(3.18)    Au^ε + a₀u^ε + (1/ε)(u^ε-ψ)⁺ = f ,    u^ε|_Γ = 0 ,

we can assert that

(3.19)    ||u^ε - ũ^ε||_{L^∞} ≤ ||ψ - ψ̃||_{L^∞} .

Consider next the iterative process

(3.20)    a(u^{n+1},v-u^{n+1}) + λ(u^{n+1},v-u^{n+1}) ≥ (f,v-u^{n+1}) + λ(uⁿ,v-u^{n+1})    ∀v ≤ ψ .

When a₀ ≥ γ > 0, the contraction argument mentioned in Theorem 3.2 guarantees that uⁿ → u in L^∞. Defining similarly ũⁿ, it follows from estimate (3.19) that

(3.21)    ||uⁿ - ũⁿ||_{L^∞} ≤ ||ψ - ψ̃||_{L^∞} .

Letting n → ∞, we deduce that (3.14) holds true, at least provided that a₀ ≥ γ > 0. At this stage, it is not useful to make the change of unknown function u = wz, which changes ψ into ψ/w. Indeed we would only obtain estimate (3.14) with twice the right hand side, which is not the estimate we want. One proceeds as follows. Consider the V.I.
    a(u_δ,v-u_δ) + δ(u_δ,v-u_δ) ≥ (f,v-u_δ)    ∀v ≤ ψ ,    u_δ ≤ ψ ,

where δ > 0 will tend to 0. Let also ũ_δ be the solution of the same problem with ψ changed into ψ̃. Since δ > 0, we have

    ||u_δ - ũ_δ||_{L^∞} ≤ ||ψ - ψ̃||_{L^∞} ,

and it is enough to show that u_δ → u as δ → 0, in some sense. For such a result, we may consider the change of functions u = wz, and therefore it is sufficient to assume a₀ ≥ γ > 0. Consider next the iterative process

    a(u_δ^{n+1},v-u_δ^{n+1}) + δ(u_δ^{n+1},v-u_δ^{n+1}) ≥ (f,v-u_δ^{n+1}) + δ(u_δ^n,v-u_δ^{n+1}) ;

then we have bounds uniform in δ, with contraction factor k = δ/(δ+γ), from which it follows that u_δ remains bounded in L^∞ as δ → 0. From this and the V.I., one deduces that u_δ is bounded in H₀¹. It is then enough to obtain u_δ → u in H₀¹ weakly and in L^∞ weak star, which completes the proof of the desired result. □

We can then state the following regularity result.
Theorem 3.3. We make the assumptions of Theorem 3.2 and

(3.22)    there exist ψn, satisfying the assumptions of Theorem 2.2, such that ψn → ψ in C⁰(Ō) .

Then the solution u of (3.11) belongs to C⁰(Ō).

Proof. Define εn = ||ψn - ψ||_{L^∞}; then εn → 0. Clearly the functions ψn satisfy the assumptions of Theorem 2.2. Let un be the solution of the V.I. corresponding to ψn. It is also the solution of the unilateral problem (2.6), hence in particular un ∈ C⁰(Ō). But from (3.14) we deduce that

    ||un - u||_{L^∞} ≤ ||ψn - ψ||_{L^∞} → 0 .

Hence u ∈ C⁰(Ō). □
299
PROBLEMS OF OPTIMAL STOPPING
Let us prove, to end this section, the useful result that the solution u_ε of the penalized problem converges in C⁰(Ō) to the solution u of the V.I. This result will be generalized in section 5 for general semi groups, with some slight changes in the assumptions. We will however need it in Chapter VII, section 3.

Theorem 3.4. Under the assumptions of Theorem 3.3, the solution u_ε of (2.3) converges towards the solution u of the V.I. (3.11) in C⁰(Ō).

Proof. Let us first remark that it is sufficient to prove this result when ψ is regular. Indeed let ψₙ → ψ in C⁰ as in Theorem 2.3, and let uₙ be the solution of the V.I. corresponding to ψₙ. From Lemma 3.4 we have

(3.23)  ‖uₙ − u‖_L∞ ≤ ‖ψₙ − ψ‖_L∞.

But the proof of Lemma 3.4, in particular estimate (3.16), together with an iterative procedure for the penalized problem like in (3.20), shows that the same estimate is valid for the penalized problem, namely

(3.24)  ‖u_εⁿ − u_ε‖_L∞ ≤ ‖ψₙ − ψ‖_L∞

where u_εⁿ denotes the solution of the penalized problem corresponding to ψₙ. From (3.23), (3.24) it is clear that if we have ‖u_εⁿ − uₙ‖ → 0 as ε → 0 for n fixed, then the desired result will follow. We may thus assume ψ regular, and of course a₀ ≥ γ > 0 without loss of generality (cf. Lemma 3.1) (1). Now for λ large, replacing a₀ by a₀ + λ (cf. (2.10)), we know from Lemma 2.2 that u_ε remains bounded in W^{2,p}. Now consider the iterative sequence written for the penalized problem:

(1) We have however to consider a penalized problem in which the penalty term is modified accordingly.
hence

‖u_εⁿ⁺¹ − u_εⁿ‖_L∞ ≤ k ‖u_εⁿ − u_εⁿ⁻¹‖_L∞,  where k = λ/(λ+γ) < 1,

the uⁿ_ε being the iterates. Now ‖u_εⁿ‖ ≤ C from Lemma 2.2; similarly ‖u_ε‖ ≤ C. Hence we have

(3.25)  ‖u_ε − u_εⁿ‖_L∞ ≤ C kⁿ.

We also have

(3.26)  ‖u − uⁿ‖_L∞ ≤ C kⁿ

which follows from (3.25) and continuity and convexity of the norm; it also follows directly from (2.16) and an iterative scheme. Now for any fixed n, since u_εⁿ remains in a bounded set of W^{2,p} (a priori depending on n), we have by Lemma 2.2

u_εⁿ → uⁿ in C⁰(Ō) as ε → 0.

From this and (3.25), (3.26) it follows that ‖u_ε − u‖_L∞ → 0. □
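The geometric estimates (3.25), (3.26) translate in practice into a convergence of order ε for the penalization. The sketch below is a finite-difference illustration with data of our own choosing, not the book's construction: it discretizes A = −d²/dx² + I on (0,1) with zero Dirichlet data, solves the discrete V.I. by projected Gauss-Seidel, solves the penalized equation Au_ε + (1/ε)(u_ε − ψ)⁺ = f by an active-set iteration, and measures ‖u_ε − u‖_∞ for decreasing ε.

```python
import numpy as np

def obstacle_data(n=49, f_val=10.0, psi_val=1.0):
    # Discretization of A = -d^2/dx^2 + I on (0,1), zero Dirichlet data.
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2 + np.eye(n)
    return A, np.full(n, f_val), np.full(n, psi_val)

def solve_vi(A, f, psi, sweeps=4000):
    # Projected Gauss-Seidel for: Au <= f, u <= psi, (Au - f)(u - psi) = 0.
    n = len(f); u = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            r = f[i] - A[i] @ u + A[i, i] * u[i]
            u[i] = min(r / A[i, i], psi[i])
    return u

def solve_penalized(A, f, psi, eps, iters=60):
    # Active-set (semismooth Newton) iteration for A u + (1/eps)(u-psi)^+ = f.
    u = np.linalg.solve(A, f)
    for _ in range(iters):
        act = (u > psi).astype(float)
        u_new = np.linalg.solve(A + np.diag(act / eps), f + act * psi / eps)
        if np.allclose(u_new, u, atol=1e-12):
            break
        u = u_new
    return u

A, f, psi = obstacle_data()
u = solve_vi(A, f, psi)
errs = [np.max(np.abs(solve_penalized(A, f, psi, e) - u))
        for e in (1e-1, 1e-2, 1e-3)]
# errs decays roughly like O(eps), as (3.25)-(3.26) suggest
```
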
4. SOLUTION OF THE OPTIMAL STOPPING TIME PROBLEM

4.1. The regular case

We are going to show the following

Theorem 4.1. We assume (1.1), (1.2), (1.10), (1.11), (2.2), (2.5). Then the solution u of (2.6) is given explicitly by

(4.1)  u(x) = Inf_θ J_x(θ).

Moreover there exists an optimal stopping time, characterized as follows: define the set C by (4.2) and the stopping time θ̂ by (4.3); then θ̂ is an optimal stopping time.

Proof. If h ∈ L^p(Ō), we know from Chapter II, Theorem 4.1, that u ∈ W^{2,p}. From this estimate it follows that we can apply the following Ito's formula to the function u:

(4.4)  u(x) = Eˣ u(x(θ∧τ)) exp(−∫₀^{θ∧τ} a₀(x(s))ds) + Eˣ ∫₀^{θ∧τ} (Au + a₀u)(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds

where θ is any stopping time.
Now using the relations (2.6), it is easy to deduce from (4.4) that

(4.5)  u(x) ≤ J_x(θ)  ∀ θ.

On the other hand we may assert that

χ_C (Au + a₀u − f) = 0  a.e.

hence

Eˣ ∫₀^{θ̂∧τ} χ_C (Au + a₀u − f)(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds = 0.

But for s < θ̂, x(s) belongs to the set C, hence χ_C(x(s)) = 1, and therefore

Eˣ ∫₀^{θ̂∧τ} (Au + a₀u)(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds = Eˣ ∫₀^{θ̂∧τ} f(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds.

Applying (4.4) with θ = θ̂, we obtain

u(x) = Eˣ u(x(θ̂∧τ)) exp(−∫₀^{θ̂∧τ} a₀(x(s))ds) + Eˣ ∫₀^{θ̂∧τ} f(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds.

But if θ̂ is finite, u(x(θ̂)) = ψ(x(θ̂)), whereas if τ < θ̂, x(τ) belongs to the boundary of Ō and u(x(τ)) = 0. Therefore

(4.6)  u(x) = J_x(θ̂)

and this completes the proof of the desired result. □
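The content of Theorem 4.1 can be checked on a crude finite-state analogue, entirely our own toy model (a reflected random walk with discount β in place of the diffusion): the dynamic programming solution u of u = min(ψ, f + βPu) lies below the value of every "stop on a given set" rule, the latter being computed exactly by policy evaluation.

```python
import numpy as np

n, beta = 11, 0.9
# Reflected symmetric random walk on {0, ..., n-1}.
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5
x = np.arange(n)
f = 0.05 * np.ones(n)                 # running cost per step
psi = 1.0 + 0.5 * np.cos(0.6 * x)     # stopping cost (obstacle)

# Value iteration for u = min(psi, f + beta P u); starting from 0 the
# iterates increase, so the computed u is below the exact value.
u = np.zeros(n)
for _ in range(500):
    u = np.minimum(psi, f + beta * P @ u)

def rule_value(stop):
    # Policy evaluation of the rule "stop on the set `stop`":
    # v = psi on the stopping set, v = f + beta P v elsewhere.
    v = np.where(stop, psi, 0.0)
    cont = ~stop
    if cont.any():
        M = np.eye(cont.sum()) - beta * P[np.ix_(cont, cont)]
        rhs = f[cont] + beta * P[np.ix_(cont, stop)] @ psi[stop]
        v[cont] = np.linalg.solve(M, rhs)
    return v

# u is a lower bound for the value of every threshold rule.
for k in range(n + 1):
    assert np.all(u <= rule_value(x < k) + 1e-8)
```
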
Theorem 4.2. We assume (1.1), (1.2), (1.10), (1.11), (2.2), (3.22) and that the set C is not empty. Then the solution u of (3.11), which is a continuous function on Ō, can still be interpreted by (4.1). Moreover θ̂ defined by (4.3) is still an optimal stopping time.

Proof. Let ψₙ be as in the proof of Theorem 3.3, and let uₙ be the corresponding solution of the V.I. From Theorem 4.1, we can assert that uₙ(x) = Inf_θ J_xⁿ(θ), where

J_xⁿ(θ) = Eˣ [∫₀^{θ∧τ} f(x(t)) exp(−∫₀ᵗ a₀(x(s))ds) dt + ψₙ(x(θ)) χ_{θ<τ} exp(−∫₀^θ a₀(x(s))ds)].

But |J_xⁿ(θ) − J_x(θ)| ≤ ‖ψₙ − ψ‖_L∞, hence |uₙ(x) − Inf_θ J_x(θ)| ≤ ‖ψₙ − ψ‖_L∞, from which it follows that
since uₙ → u in L∞,

(4.7)  u(x) = Inf_θ J_x(θ).

Let us prove that θ̂ is an optimal stopping time. If u(x) = ψ(x), then since Pˣ(x(0) = x) = 1 we have θ̂ = 0 a.s., hence J_x(θ̂) = ψ(x) = u(x), and θ̂ is optimal. If x ∈ Γ, then τ = 0, and J_x(θ̂) = 0 = u(x). Therefore we may assume that u(x) < ψ(x). Let δ > 0 be such that u(x) < ψ(x) − δ, and let

θ̂_δ = inf {t ≥ 0 | u(x(t)) ≥ ψ(x(t)) − δ}.

Let N_δ be such that n ≥ N_δ implies

‖ψₙ − ψ‖_L∞ ≤ δ/8,  ‖uₙ − u‖_L∞ ≤ δ/8.

Therefore for s < θ̂_δ ∧ τ we have uₙ(x(s)) < ψₙ(x(s)); hence, as at the end of the proof of Theorem 4.1, we may assert that
Eˣ ∫₀^{θ̂_δ∧τ} (Auₙ + a₀uₙ)(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds = Eˣ ∫₀^{θ̂_δ∧τ} f(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds.

Therefore from Ito's formula

(4.8)  uₙ(x) = Eˣ uₙ(x(θ̂_δ∧τ)) exp(−∫₀^{θ̂_δ∧τ} a₀(x(s))ds) + Eˣ ∫₀^{θ̂_δ∧τ} f(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds.

Now since uₙ → u in L∞, we deduce from (4.8)

(4.9)  u(x) = Eˣ u(x(θ̂_δ∧τ)) exp(−∫₀^{θ̂_δ∧τ} a₀(x(s))ds) + Eˣ ∫₀^{θ̂_δ∧τ} f(x(s)) exp(−∫₀ˢ a₀(x(λ))dλ) ds.

Let now δ ↓ 0. We have θ̂_δ ↑ λ̂ with λ̂ ≤ θ̂. Since the process x is continuous, u(x(θ̂_δ)) ≥ ψ(x(θ̂_δ)) − δ on {θ̂_δ < τ}; hence, going to the limit, u(x(λ̂)) ≥ ψ(x(λ̂)), which implies that λ̂ ≥ θ̂. Hence θ̂_δ ↑ θ̂. By passing to the limit in (4.9) we conclude the proof of the desired result. □
As we have said in § 2.1, the penalized problem (2.3) is a Hamilton Jacobi equation, with Hamiltonian (2.4). Therefore its solution can be interpreted as the infimum of a payoff corresponding to a stochastic control problem. The control does not modify the martingale problem. Therefore, from formula (1.18) of Chapter IV and the definition (2.4) of the Hamiltonian, we obtain the corresponding representation (4.10), where v(s) is an adapted process such that 0 ≤ v(s) ≤ 1. We can assert that (4.11) holds and that there exists an optimal control. One can make great use of the interpretation of the penalized problem in connection with the study of the optimal stopping time problem (we refer to A. Bensoussan - J.L. Lions [1] for details). We will however use it to a greater extent in the semi group approach developed in the next section.

Remark 4.1. We have studied so far a stationary optimal stopping time problem. One can naturally consider evolution problems, as in the stochastic control case (cf. Chapter IV, section 4). The case when the function ψ is smooth carries over by methods similar to the stationary case and leads to an evolution version of (2.6). However, when the function ψ is not regular (especially with respect to time), we get a much more complicated situation. A treatment of this case would go beyond the scope of the present work. We refer to A. Bensoussan - J.L. Lions [1] for a presentation of the results existing in this situation.
Remark 4.2. In the context of mechanical applications the function ψ is called the obstacle. To understand the terminology, take the particular case (in dimension 1)

−u″ ≤ 0,  u − ψ ≤ 0,  u″(u − ψ) = 0,  u(0) = u(1) = 0

whose solution is drawn in the following picture.
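The picture can be rendered numerically. In discrete form the three conditions become u_i ≤ ψ_i, u_i ≤ (u_{i−1}+u_{i+1})/2 (discrete version of −u″ ≤ 0) and complementarity between the two; the sweep u_i ← min(ψ_i, (u_{i−1}+u_{i+1})/2), started from a supersolution, converges to the taut string. The obstacle below is our own choice; for this piecewise linear ψ the exact solution is the polygon through (0,0), (0.4,−0.2), (1,0).

```python
import numpy as np

n = 81
x = np.linspace(0.0, 1.0, n)
psi = np.abs(x - 0.4) - 0.2        # obstacle: dips below the chord u = 0
u = np.minimum(0.0, psi)           # start from a discrete supersolution
# Sweep u_i <- min(psi_i, (u_{i-1}+u_{i+1})/2): at the fixed point this
# enforces u <= psi, discrete convexity, and their complementarity.
for _ in range(20000):
    u[1:-1] = np.minimum(psi[1:-1], 0.5 * (u[:-2] + u[2:]))

gap = psi[1:-1] - u[1:-1]                    # distance to the obstacle
slack = 0.5 * (u[:-2] + u[2:]) - u[1:-1]     # discrete -u'' residual
```

At every interior node one of `gap`, `slack` vanishes: the string is straight except where it presses against the obstacle.
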
This is the problem of stretching an elastic string which passes through the points 0 and 1, below a given obstacle function ψ. The V.I. is also called the obstacle problem. For the reader interested in the applications of V.I. in Mechanics and Physics, we refer to G. Duvaut - J.L. Lions [1].

5. SEMI GROUP APPROACH
in the applications of V.I. in Mechanics and Physics, we refer to G. Duvaut - J.L. Lions C11. 5 . SEMI GROUP APPROACH
Let u s consider the function z
satisfying
308
CHAPTER V I I
A function
verifying ( 5 . 1 ) will be called a lower s o l u t i o n of t h e V . 1 .
z
We make the asswrrpticns of Theorem 3 . 2 .
Theorem 5 . 1 .
The s o l u t i o n
u
Gf
t h e V . I . ( 3 . 1 1 1 i s a lower s o l u t i o n and t h e m a x i m element among lower solutions.
The fact that
u
is a lower solution is easy. Take
function in ( 3 . 1 1 ) .
Then u
for any
z.
Let
be the solution of
uo
AUO +
(5.3)
Then uo
a uo 0
=
satisfies ( 5 . 1 ) .
, uo
f
w2$P
E
v
,
u
0
lr
=
o
.
2 z
Let indeed
wo
the solution of
AW
0
,
+ (ao+X)wo = f + ~z
wo
E
1
' H
Ho
then
a(wo,$) hence from ( 5 . 1 )
+ x(W
0
-,$I
=
(f,c$)
it follows that 0
0
a(w - z , $ ) + X(w - z , $ ) Take
u
=
0 $ = (w - 2 ) -
0
2
,
V $
0
, then we obtain
-
0
0
a((w - z ) - , (w - z ) - )
-
x
-
c$ as a test
Now we want to prove that
j (w0-.z)-i
z
o
.
309
PROBLEMS OF OPTIMAL STOPPING
hence, since
X is large enough, (w0-2)-
=
.
0
Next consider the iterative sequence

(5.4)  Awⁿ⁺¹ + (a₀+λ)wⁿ⁺¹ = f + λwⁿ,  wⁿ⁺¹ ∈ H² ∩ H¹₀.

By the regularity reasoning made in Chapter II, Theorem 2.1, we know that wⁿ − wⁿ⁻¹ ∈ L∞ for n ≥ n₀, and then

(5.5)  ‖wⁿ⁺¹ − wⁿ‖_L∞ ≤ k ‖wⁿ − wⁿ⁻¹‖_L∞,  k < 1,

provided that a₀ ≥ γ, an assumption which can be made without loss of generality to prove (5.2). Hence wⁿ − wⁿ⁰ converges in L∞, from which and the equation (5.4) it can easily be deduced that wⁿ → u⁰, the solution of (5.3), in H¹₀ weakly. But let us check by induction that wⁿ ≥ z. Indeed, assuming the property at step n, we get from (5.4) and (5.1)

a(wⁿ⁺¹ − z, φ) + λ(wⁿ⁺¹ − wⁿ, φ) ≥ 0  ∀ φ ≥ 0

or, since wⁿ ≥ z,

a(wⁿ⁺¹ − z, φ) + λ(wⁿ⁺¹ − z, φ) ≥ 0  ∀ φ ≥ 0

from which we deduce, as for w⁰, that wⁿ⁺¹ ≥ z. Hence the result is proved for u⁰.
Consider now the iterative sequence

(5.6)  a(uⁿ⁺¹, v − uⁿ⁺¹) + λ(uⁿ⁺¹, v − uⁿ⁺¹) ≥ (f, v − uⁿ⁺¹) + λ(uⁿ, v − uⁿ⁺¹),  ∀ v ∈ H¹₀, v ≤ ψ;  uⁿ⁺¹ ∈ H¹₀, uⁿ⁺¹ ≤ ψ.

We know from the contraction property of the map S_λ considered in Theorem 3.2 (recall that the sequence uⁿ belongs to L∞) that uⁿ → u in L∞. We check by induction that

(5.7)  uⁿ ≥ z.

This is true at step 0. Assume it is true at step n; then take

v = uⁿ⁺¹ + (uⁿ⁺¹ − z)⁻ = Max(uⁿ⁺¹, z) ≤ ψ

as a test function in (5.6). We obtain

a(uⁿ⁺¹, (uⁿ⁺¹ − z)⁻) + λ(uⁿ⁺¹, (uⁿ⁺¹ − z)⁻) ≥ (f, (uⁿ⁺¹ − z)⁻) + λ(uⁿ, (uⁿ⁺¹ − z)⁻)

and, from (5.1) and uⁿ ≥ z,

a(uⁿ⁺¹ − z, (uⁿ⁺¹ − z)⁻) + λ(uⁿ⁺¹ − z, (uⁿ⁺¹ − z)⁻) ≥ 0

hence (uⁿ⁺¹ − z)⁻ = 0, and thus (5.7) holds at step n+1. Letting n tend to +∞, we deduce (5.2). □
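Theorem 5.1 has an exact finite-dimensional analogue when A is replaced by an M-matrix; the discretization and data below are our own. Any z with Az ≤ f, z ≤ ψ and zero boundary values is a discrete lower solution, and the V.I. solution dominates it.

```python
import numpy as np

n = 49; h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2 + np.eye(n)   # -d2/dx2 + I
f = np.full(n, 5.0)
psi = np.full(n, 0.3)

# V.I. solution by projected Gauss-Seidel:
# A u <= f, u <= psi, componentwise complementarity.
u = np.zeros(n)
for _ in range(4000):
    for i in range(n):
        r = f[i] - A[i] @ u + A[i, i] * u[i]
        u[i] = min(r / A[i, i], psi[i])

# A family of lower solutions: A z = f - c with c >= 0 chosen so that z <= psi.
for c in (4.0, 4.5, 4.9):
    z = np.linalg.solve(A, f - c)
    assert np.all(z <= psi + 1e-12)    # z is admissible for this data
    assert np.all(z <= u + 1e-8)       # the V.I. solution dominates z
```
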
Let us consider the semi group

(5.8)  Φ(t)φ(x) = Eˣ [φ(x(t)) χ_{t<τ} e^{−αt}];

then we can state the following

Theorem 5.2. We make the assumptions of Theorem 3.3, and

(5.10)  f ∈ B  (Borel bounded on Ō).

Then the solution u of the V.I. (3.11) satisfies

(5.11)  u ∈ C⁰(Ō),  u ≤ ψ

(5.12)  u ≤ ∫₀ᵗ Φ(s)f ds + Φ(t)u,  ∀ t ≥ 0.

Moreover u is the maximum element of the set of functions satisfying (5.11), (5.12).

Proof. Property (5.11) is included in the statement of Theorem 3.3. Consider the sequence uₙ which approximates u in C⁰(Ō). From Ito's formula we have

uₙ(x) = Eˣ uₙ(x(t∧τ)) e^{−α(t∧τ)} + Eˣ ∫₀^{t∧τ} (Auₙ + αuₙ)(x(s)) e^{−αs} ds

hence

uₙ(x) ≤ Eˣ uₙ(x(t∧τ)) e^{−α(t∧τ)} + Eˣ ∫₀^{t∧τ} f(x(s)) e^{−αs} ds.

Since uₙ → u in C⁰(Ō), we deduce the same relation for u. Using the fact that u|_Γ = 0, we can write the above relation as

u(x) ≤ Eˣ [u(x(t)) χ_{t<τ} e^{−αt}] + Eˣ ∫₀^{t∧τ} f(x(s)) e^{−αs} ds

and, by the definition (5.8) of the semi group Φ(t), this is nothing other than relation (5.12).
Next let w ∈ C⁰(Ō), w ≤ ψ, w|_Γ = 0, satisfy

(5.13)  w ≤ ∫₀ᵗ Φ(s)f ds + Φ(t)w,  ∀ t ≥ 0.

We have to show that w ≤ u. We now use the fact that Ω⁰, 𝔪ᵗ, Pˣ, x(t) is a stationary Markov process. Then (cf. Dynkin [1]) consider the shift operator θ_t : Ω⁰ → Ω⁰, such that x(s)(θ_t ω) = x(s+t)(ω). Note that

(5.14)  τ(θ_t ω) = inf {s > t | x(s) ∉ Ō} = τ_t(ω).

Now if ξ is a random variable, we write

(5.15)  ξ∘θ_t = ξ(θ_t ω).

Since x(t) is a stationary Markov process, we have the property

(5.16)  Eˣ[ξ∘θ_t | 𝔪ᵗ] = E^{x(t)} ξ.

We are going to apply that formula to the R.V.

ξ = w(x((s₂−s₁)∧τ)) e^{−α(s₂−s₁)} + ∫₀^{(s₂−s₁)∧τ} f(x(s)) e^{−αs} ds,  s₂ ≥ s₁.

Then

ξ∘θ_{s₁} = w(x(s₂∧τ_{s₁})) e^{−α(s₂−s₁)} + ∫_{s₁}^{s₂∧τ_{s₁}} f(x(s)) e^{−α(s−s₁)} ds

and property (5.16) reads

(5.17)  Eˣ[w(x(s₂∧τ_{s₁})) e^{−α(s₂−s₁)} + ∫_{s₁}^{s₂∧τ_{s₁}} f(x(s)) e^{−α(s−s₁)} ds | 𝔪^{s₁}] = E^{x(s₁)} ξ.

But from (5.13) we have E^{x(s₁)} ξ ≥ w(x(s₁)); hence we have proved that

(5.18)  Eˣ[w(x(s₂∧τ_{s₁})) e^{−αs₂} + ∫_{s₁}^{s₂∧τ_{s₁}} f(x(s)) e^{−αs} ds | 𝔪^{s₁}] ≥ w(x(s₁)) e^{−αs₁}.

Indeed, let X be the left hand side of (5.18): on {τ > s₁} we have τ_{s₁} = τ, and (5.17) applies; on {τ ≤ s₁} both sides of (5.18) coincide with the stopped quantities. But then the process

w(x(t∧τ)) e^{−αt} + ∫₀^{t∧τ} f(x(s)) e^{−αs} ds

is a sub martingale. By Doob's theorem, we deduce

Eˣ[w(x(θ∧τ)) e^{−α(θ∧τ)} + ∫₀^{θ∧τ} f(x(λ)) e^{−αλ} dλ] ≥ w(x)

for any stopping time θ. Since w|_Γ = 0, this is identical to

w(x) ≤ Eˣ[w(x(θ)) χ_{θ<τ} e^{−αθ} + ∫₀^{θ∧τ} f(x(s)) e^{−αs} ds]

and since w ≤ ψ, we obtain w(x) ≤ J_x(θ) for every θ; therefore w(x) ≤ u(x) by (4.7). This completes the proof of the desired result. □
As we have done in Chapter IV for the problem of the semi group envelope, we can now give a general formulation of (5.11), (5.12). Namely, as in Chapter IV § 5.2, consider a topological space (E, 𝓔) and spaces B and C. We consider a semi group satisfying

(5.19)  Φ(t) : B → B,  Φ(0) = I

together with the properties (5.20), (5.21), (5.22) of Chapter IV. We also assume that

(5.23)  Φ(t) : C → C

(5.24)  Φ(t)f → f in C as t ↓ 0,  ∀ f ∈ C.

Let now

(5.25)  ψ ∈ C

(5.26)  L ∈ B,  t → Φ(t)L measurable from [0,∞) into C.

Then we consider the set of functions u satisfying

(5.27)  u ∈ C,  u ≤ ψ,  u ≤ ∫₀ᵗ e^{−αs} Φ(s)L ds + e^{−αt} Φ(t)u,  ∀ t ≥ 0.

Our objective is to prove the following

Theorem 5.3. We assume (5.19), ..., (5.26). Then the set of functions satisfying (5.27) is not empty and has a maximum element.

The proof will consist of several Lemmas.
Let g ∈ C. We will say that g is a regular function if there exists G ∈ C such that

(5.28)  g = ∫₀^∞ e^{−αt} Φ(t)G dt.

Note that this means that g belongs to the domain of the infinitesimal generator of Φ(t) in C. Indeed, we have

g = ∫₀ᵗ e^{−αs} Φ(s)G ds + e^{−αt} Φ(t)g

hence, if we denote by 𝒜 the opposite of the infinitesimal generator, then

(5.29)  𝒜g + αg = G,  g ∈ D(𝒜).

Lemma 5.1. Let g ∈ C. Then there exists a sequence gₙ of regular functions converging to g in C (hence D(𝒜) is dense in C).

Proof. Set

gₙ = n ∫₀^∞ e^{−nt} Φ(t)g dt

and

‖gₙ − g‖ ≤ n ∫₀^∞ e^{−nt} ‖Φ(t)g − g‖ dt = ∫₀^∞ e^{−s} ‖Φ(s/n)g − g‖ ds.

But from (5.24), ‖Φ(s/n)g − g‖ → 0 for each s, and it is bounded by 2‖g‖. From Lebesgue's Theorem it follows that ‖gₙ − g‖ → 0. But we can also write (cf. Lemma 5.7 of Chapter IV)

gₙ = ∫₀^∞ e^{−αt} Φ(t)(ng + (α−n)gₙ) dt

and, setting Gₙ = ng + (α−n)gₙ, we see that gₙ is a regular function. □
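For a concrete finite-state illustration (a continuous-time walk on a ring, our own choice of semigroup), gₙ = n ∫₀^∞ e^{−nt} Φ(t)g dt is exactly the resolvent expression gₙ = n(nI + 𝒜)⁻¹g with 𝒜 = −Q, Q the generator; ‖gₙ − g‖ then decays like ‖𝒜g‖/n.

```python
import numpy as np

m = 32
# Generator Q of a continuous-time symmetric walk on a ring of m states;
# the semigroup is Phi(t) = exp(tQ), and A = -Q is minus the generator.
Q = np.zeros((m, m))
for i in range(m):
    Q[i, i] = -2.0
    Q[i, (i - 1) % m] = 1.0
    Q[i, (i + 1) % m] = 1.0
Aop = -Q

g = np.sin(2 * np.pi * np.arange(m) / m)

def g_n(n):
    # g_n = n * integral_0^inf e^{-nt} Phi(t) g dt = n (nI + A)^{-1} g
    return n * np.linalg.solve(n * np.eye(m) + Aop, g)

errs = [np.max(np.abs(g_n(n) - g)) for n in (1, 10, 100, 1000)]
# errs decreases like ||A g|| / n: the g_n are regular and converge to g
```
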
We now introduce the penalized problem connected to (5.27), namely

(5.30)  u_ε = ∫₀^∞ e^{−αt} Φ(t) (L − (1/ε)(u_ε − ψ)⁺) dt,  u_ε ∈ C.

By the assumptions, the right hand side of (5.30) has a meaning.

Lemma 5.2. There exists one and only one solution of (5.30).

Proof. From Lemma 5.7 of Chapter IV, (5.30) is equivalent to

(5.31)  u_ε = ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L + (1/ε) u_ε∧ψ) dt.

Define then a map

(5.32)  T_ε z = ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L + (1/ε) z∧ψ) dt.

The fixed points of T_ε coincide with the solutions of (5.30). Let us check that T_ε is a contraction. Indeed

‖T_ε z₁ − T_ε z₂‖ ≤ (1/ε) ∫₀^∞ e^{−(α+1/ε)t} ‖z₁∧ψ − z₂∧ψ‖ dt ≤ (1/(1+εα)) ‖z₁ − z₂‖

which proves the desired result
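The contraction factor 1/(1+εα) of Lemma 5.2 can be observed exactly in a finite-state model, writing T_ε through the resolvent: T_ε z = ((α+1/ε)I + 𝒜)⁻¹(L + (1/ε) z∧ψ). Generator and data below are our own choices.

```python
import numpy as np

m, alpha, eps = 32, 1.0, 0.5
# A = -Q, Q the generator of a continuous-time walk on a ring of m states.
Q = np.zeros((m, m))
for i in range(m):
    Q[i, i] = -2.0
    Q[i, (i - 1) % m] = 1.0
    Q[i, (i + 1) % m] = 1.0
k = np.arange(m)
L = np.cos(2 * np.pi * k / m)
psi = 0.4 + 0.3 * np.sin(4 * np.pi * k / m)
M = (alpha + 1.0 / eps) * np.eye(m) - Q        # (alpha + 1/eps) I + A

def T(z):
    # T_eps z = ((alpha + 1/eps) I + A)^{-1} (L + (1/eps) min(z, psi))
    return np.linalg.solve(M, L + np.minimum(z, psi) / eps)

z1, z2 = np.sin(k / 3.0), np.cos(k / 5.0)
ratio = np.max(np.abs(T(z1) - T(z2))) / np.max(np.abs(z1 - z2))

u = z1.copy()
for _ in range(200):
    u = T(u)          # iterates converge to the fixed point u_eps of (5.30)
```
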
Lemma 5.3. One has

(5.33)  u_ε ≤ u_{ε'}  if ε ≤ ε'

and

(5.34)  ‖u_ε‖ ≤ C,  independently of ε.

Proof. Writing (5.31) for u_{ε'} with the exponent α + 1/ε (Lemma 5.7 of Chapter IV again), one obtains

u_{ε'} = ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L + (1/ε') u_{ε'}∧ψ + (1/ε − 1/ε') u_{ε'}) dt

and, since u_{ε'}∧ψ ≤ u_{ε'}, it follows that T_ε u_{ε'} ≤ u_{ε'}. Since T_ε is non decreasing, Tⁿ_ε u_{ε'} ≤ u_{ε'}, and when n → ∞ we obtain (5.33). Now

u_ε ≤ ∫₀^∞ e^{−αt} Φ(t)L dt ≤ ‖L‖/α.

Let us prove a reverse estimate. We have

T_ε 0 = ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L + (1/ε) 0∧ψ) dt ≥ −∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L⁻ + (1/ε) ψ⁻) dt ≥ −(ε ‖L⁻‖ + ‖ψ⁻‖)/(1+εα).
Let us now assume that ψ ≥ −k. One checks that z ≥ −k' implies T_ε z ≥ −k', for a suitable constant k' independent of ε; hence Tⁿ_ε 0 ≥ −k', and in the limit u_ε ≥ −k'. This proves (5.34). □

Lemma 5.4. Let u be a solution of (5.27). Then one has

(5.35)  u ≤ u_ε.

Proof. From Lemma 5.7, Chapter IV, we have

u ≤ ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L + (1/ε) u) dt.

But from (5.32) and since u ≤ ψ (so that u∧ψ = u), we see that T_ε u ≥ u. By induction Tⁿ_ε u ≥ u, and letting n tend to +∞, we get (5.35). □

Lemma 5.5. Let ψ̃ satisfy the same assumptions as ψ, and let ũ_ε be the solution of (5.30) corresponding to ψ̃. Then one has

(5.36)  ‖u_ε − ũ_ε‖ ≤ ‖ψ − ψ̃‖.
Proof. Let z, z̃ ∈ C verify

(5.37)  ‖z − z̃‖ ≤ ‖ψ − ψ̃‖.

Let T̃_ε be the mapping (5.32) with ψ̃ replacing ψ. Then

(5.38)  ‖T_ε z − T̃_ε z̃‖ ≤ ‖ψ − ψ̃‖.

Indeed

‖T_ε z − T̃_ε z̃‖ ≤ (1/(1+εα)) ‖z∧ψ − z̃∧ψ̃‖.

But from (5.37) it follows that

‖z∧ψ − z̃∧ψ̃‖ ≤ Max(‖z − z̃‖, ‖ψ − ψ̃‖) ≤ ‖ψ − ψ̃‖

hence (5.38). But ‖T_ε 0 − T̃_ε 0‖ ≤ ‖ψ − ψ̃‖; applying (5.38) with z = T_ε 0, z̃ = T̃_ε 0, and so on, we obtain generally

‖Tⁿ_ε 0 − T̃ⁿ_ε 0‖ ≤ ‖ψ − ψ̃‖.

Letting n → ∞, we deduce (5.36). □
Lemma 5.6. We have

(5.39)  u_ε ↓ u  and  ‖u_ε − u‖ → 0.

Proof. From Lemma 5.3, it follows that u_ε ↓ u pointwise as ε ↓ 0, with u ∈ B. The difficult part is to prove the convergence in C. We first check that we may, without loss of generality, restrict ourselves to ψ ∈ D(𝒜). Indeed let ψₙ ∈ C be regular with ‖ψₙ − ψ‖ → 0, and let u_εⁿ be the solution of the penalized problem corresponding to ψₙ. We have u_εⁿ ↓ uⁿ, uⁿ ∈ B. But from Lemma 5.5

‖u_εⁿ − u_ε‖ ≤ ‖ψₙ − ψ‖

and thus, as ε tends to 0,

(5.40)  ‖uⁿ − u‖ ≤ ‖ψₙ − ψ‖.

Since ψₙ is a regular function, ‖u_εⁿ − uⁿ‖ → 0 as ε tends to 0, for fixed n. From this and (5.40) follows the fact that ‖u_ε − u‖ → 0. We thus assume ψ ∈ D(𝒜).
Since ψ ∈ D(𝒜), we may write

ψ = ∫₀^∞ e^{−αt} Φ(t)(L + F) dt,  where we have set F = 𝒜ψ + αψ − L;

hence ψ is a solution of the equation

(5.41)  ψ = ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (L + F + (1/ε) ψ) dt

from which it follows that

(5.42)  T_ε ψ − ψ = −∫₀^∞ e^{−(α+1/ε)t} Φ(t) F dt ≥ −ε ‖F⁺‖.

But we have seen in Lemma 5.3 that u_{ε'} is monotone in ε', and thus from (5.41), (5.42) one deduces

(5.43)  T_ε u_{ε'} − u_{ε'} ≥ −ε' ‖F⁺‖,  for ε ≤ ε'.

One has next

(5.44)  T²_ε u_{ε'} − T_ε u_{ε'} = (1/ε) ∫₀^∞ e^{−(α+1/ε)t} Φ(t) (T_ε u_{ε'}∧ψ − u_{ε'}∧ψ) dt.

But from (5.43), u_{ε'} − T_ε u_{ε'} ≤ ε' ‖F⁺‖, hence

(5.45)  T²_ε u_{ε'} − u_{ε'} ≥ −C ε' ‖F⁺‖.

We may iterate to obtain

Tᵏ_ε u_{ε'} − u_{ε'} ≥ −C ε' ‖F⁺‖

and, letting k tend to ∞, it follows that

(5.46)  u_ε − u_{ε'} ≥ −C ε' ‖F⁺‖.

One then lets ε tend to 0, fixing ε'. One obtains u − u_{ε'} ≥ −C ε' ‖F⁺‖. Therefore we have proved that if ψ ∈ D(𝒜), then the following estimate holds

(5.47)  ‖u_ε − u‖ ≤ C ε ‖F⁺‖.
This completes the proof of Lemma 5.6. □

Proof of Theorem 5.3. From the uniform convergence, we have u ∈ C. Multiply (5.30) by ε and let ε tend to 0. We obtain

∫₀^∞ e^{−αt} Φ(t) (u−ψ)⁺ dt = 0

hence

∫₀ᵗ e^{−αs} Φ(s) (u−ψ)⁺ ds = 0,  ∀ t ≥ 0

and, dividing by t and letting t tend to 0, from the continuity of s → Φ(s)(u−ψ)⁺ we deduce (u−ψ)⁺ = 0. On the other hand, from (5.30) it follows that

u_ε = ∫₀ᵗ e^{−αs} Φ(s) (L − (1/ε)(u_ε−ψ)⁺) ds + e^{−αt} Φ(t) u_ε ≤ ∫₀ᵗ e^{−αs} Φ(s)L ds + e^{−αt} Φ(t) u_ε

and, letting ε tend to 0, we obtain

u ≤ ∫₀ᵗ e^{−αs} Φ(s)L ds + e^{−αt} Φ(t)u.

Therefore u belongs to the set (5.27). From Lemma 5.4, it is the maximum element. □
As we have done in Chapter IV for the control problem, we are going to define a discretization procedure. To simplify we will assume that

(5.48)  L ∈ C.

Otherwise, one should replace L in the sequel by an appropriate approximation. So let h be a parameter, which will tend to 0. The discretized V.I. will be the following

(5.49)  uₕ = Min [ψ, ∫₀ʰ e^{−αt} Φ(t)L dt + e^{−αh} Φ(h) uₕ].

Equation (5.49) has one and only one solution, since the map

z → Min [ψ, ∫₀ʰ e^{−αt} Φ(t)L dt + e^{−αh} Φ(h) z]

is a contraction in C.
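A sketch of (5.49) on a finite state space. Two simplifications are ours: Φ(h) is taken to be an explicit transition matrix, and the integral term is replaced by hL. The map is then an e^{−αh}-contraction in the sup norm, so iteration from any two starting points reaches the same fixed point uₕ.

```python
import numpy as np

n, alpha, h = 15, 1.0, 0.1
beta = np.exp(-alpha * h)
# h-step transition matrix Phi(h): reflected symmetric walk.
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5
x = np.arange(n)
L = 0.5 + 0.1 * x
psi = 2.0 - 0.1 * (x - 7.0) ** 2

def F(z):
    # z -> Min[psi, h L + e^{-alpha h} Phi(h) z]
    # (h L stands in for the integral term of (5.49))
    return np.minimum(psi, h * L + beta * P @ z)

ua, ub = np.zeros(n), np.full(n, 50.0)
for _ in range(400):
    ua, ub = F(ua), F(ub)
# both sequences converge geometrically to the unique fixed point u_h
```
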
We are going to prove the following

Theorem 5.4. We assume (5.19), ..., (5.25), and (5.48). Then the solution uₕ of (5.49) tends in C to u, the maximum solution of (5.27).

The proof will require several Lemmas. We will need a penalized version of the discretized problem, as follows

(5.50)  u_ε^h = hL − (h/ε)(u_ε^h − ψ)⁺ + e^{−αh} Φ(h) u_ε^h,  u_ε^h ∈ C.

We first study problem (5.50) in a way which is analogous to what has been done in the continuous time case. We define a map
(5.51)  T_ε z = Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) [hL + (h/ε) ψ∧z]

on C. One checks that it is a contraction, hence has a unique fixed point, which is the solution of (5.50). Indeed

‖T_ε z₁ − T_ε z₂‖ ≤ (h/ε) ‖z₁ − z₂‖ Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} = (h/(h + ε(1 − e^{−αh}))) ‖z₁ − z₂‖

and the factor is < 1. Let u_ε^h be the fixed point; then it is a solution of

(5.52)  u_ε^h = (ε/(h+ε)) [hL + (h/ε) ψ∧u_ε^h] + (ε/(h+ε)) e^{−αh} Φ(h) u_ε^h.

Lemma 5.7. One has

(5.53)  u_ε^h ≤ u_{ε'}^h  if ε ≤ ε'

and

(5.54)  ‖u_ε^h‖ ≤ C,  independently of ε and h.

Proof. From (5.50), written for u_{ε'}^h, we deduce for ε ≤ ε'

u_{ε'}^h ≥ (ε/(h+ε)) [hL + (h/ε) ψ∧u_{ε'}^h] + (ε/(h+ε)) e^{−αh} Φ(h) u_{ε'}^h

and thus, expanding as in (5.51),

(5.56)  u_{ε'}^h = Σ_{n≥0} (ε'/(h+ε'))^{n+1} e^{−nαh} Φ(nh) [hL + (h/ε') ψ∧u_{ε'}^h]

one obtains T_ε u_{ε'}^h ≤ u_{ε'}^h when ε ≤ ε'. Iterating, we deduce u_ε^h ≤ u_{ε'}^h.

Let now uₕ⁰ be the solution of

uₕ⁰ = hL + e^{−αh} Φ(h) uₕ⁰,  uₕ⁰ ∈ C.

Then using (5.50) we deduce

uₕ⁰ − u_ε^h ≥ e^{−αh} Φ(h)(uₕ⁰ − u_ε^h)

and, iterating,

uₕ⁰ − u_ε^h ≥ e^{−nαh} Φ(nh)(uₕ⁰ − u_ε^h) → 0.

Therefore

(5.57)  u_ε^h ≤ uₕ⁰.

But

(5.58)  uₕ⁰ = Σ_{n≥0} e^{−nαh} Φ(nh) hL

hence ‖uₕ⁰‖ ≤ C. On the other hand one has

T_ε 0 = Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) (hL − (h/ε) ψ⁻).

Assume z ≥ −k; then T_ε z ≥ −k' for a suitable k' independent of ε and h. Therefore we can assert that

(5.60)  u_ε^h ≥ −k'.

Therefore (5.54) is satisfied. □
Lemma 5.8. We have

(5.61)  u_ε^h ≥ uₕ.

Proof. We have

(5.62)  T_ε uₕ = Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) [hL + (h/ε) ψ∧uₕ]

and, from (5.49),

uₕ ≤ hL + e^{−αh} Φ(h) uₕ

or

uₕ ≤ (ε/(h+ε)) hL + (h/(h+ε)) uₕ + (ε/(h+ε)) e^{−αh} Φ(h) uₕ

hence

(5.63)  uₕ ≤ Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) [hL + (h/ε) uₕ].

Noting that uₕ ≤ ψ, hence ψ∧uₕ = uₕ, it follows from (5.62) and (5.63) that T_ε uₕ ≥ uₕ, hence (5.61). □
Lemma 5.9. Let ψ̃ ∈ C, and let ũ_ε^h be the solution of (5.50) with ψ̃ replacing ψ. Then one has

(5.64)  ‖u_ε^h − ũ_ε^h‖ ≤ ‖ψ − ψ̃‖.

Proof. Let z, z̃ be in C such that ‖z − z̃‖ ≤ ‖ψ − ψ̃‖; then from (5.51) it follows that ‖T_ε z − T̃_ε z̃‖ ≤ ‖ψ − ψ̃‖, hence (5.64), exactly as in Lemma 5.5. □

We have the following
Lemma 5.10. Assume that ψ is regular for the discrete scheme, i.e.

(5.65)  ψ = hΛₕ + e^{−αh} Φ(h) ψ,  Λₕ ∈ C.

Then one has

(5.66)  ‖u_ε^h − uₕ‖ ≤ C ε (‖L‖ + ‖Λₕ‖).

Proof. Let us set

L̄ₕ = L − Λₕ,  ū_ε = u_ε^h − ψ.

From (5.50) and (5.65) it follows that

(5.67)  ū_ε = h L̄ₕ − (h/ε) ū_ε⁺ + e^{−αh} Φ(h) ū_ε

hence, as already seen,

ū_ε = Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) [h L̄ₕ + (h/ε) ū_ε∧0] ≤ Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) h L̄ₕ

hence

(5.68)  ū_ε⁺ = (u_ε^h − ψ)⁺ ≤ ε ‖L̄ₕ‖.

Now (cf. Lemma 5.7)

(5.69)  T_ε u_{ε'}^h − u_{ε'}^h = −Σ_{n≥0} (ε/(h+ε))^{n+1} e^{−nαh} Φ(nh) h (1/ε − 1/ε') (u_{ε'}^h − ψ)⁺

and from (5.68), for ε ≤ ε',

T_ε u_{ε'}^h − u_{ε'}^h ≥ −ε (1/ε − 1/ε') ε' ‖L̄ₕ‖ ≥ −ε' ‖L̄ₕ‖.

Iterating as in the continuous time case, using (5.70), which expresses ψ through (5.65), and the expression (5.56) of u_{ε'}^h, one obtains

Tᵏ_ε u_{ε'}^h − u_{ε'}^h ≥ −C ε' (‖L‖ + ‖Λₕ‖)

and, in the limit,

u_ε^h ≥ u_{ε'}^h − C ε' (‖L‖ + ‖Λₕ‖).

Let now uₕ' be the decreasing limit of the sequence u_ε^h as ε ↓ 0; from the above we get uₕ' ≥ u_{ε'}^h − C ε' (‖L‖ + ‖Λₕ‖), and from (5.50) one easily checks that uₕ' = uₕ. Hence finally u_ε^h → uₕ in C, with

‖u_ε^h − uₕ‖ ≤ C ε (‖L‖ + ‖Λₕ‖)

i.e. (5.66). □

We have next

(5.74)  u_ε^h → u_ε in C as h → 0.
Proof of (5.74). We have

u_ε − u_ε^h = Σ_{n≥0} [∫_{nh}^{(n+1)h} e^{−(α+1/ε)s} Φ(s)(L + (1/ε) ψ∧u_ε) ds − (ε/(ε+h))^{n+1} e^{−nαh} Φ(nh) h (L + (1/ε) ψ∧u_ε)] + Σ_{n≥0} (ε/(ε+h))^{n+1} e^{−nαh} Φ(nh) (h/ε)(ψ∧u_ε − ψ∧u_ε^h) = I + II.

But ‖I‖ = O_ε(h) → 0 as h → 0, for fixed ε, while ‖II‖ ≤ k(h) ‖u_ε − u_ε^h‖ with k(h) = h/(h + ε(1 − e^{−αh})) bounded away from 1 for h small. We then get ‖u_ε − u_ε^h‖ ≤ ‖I‖/(1 − k(h)) → 0, which implies (5.74).

We can then give the proof of Theorem 5.4.
We approximate ψ by ψ_N ∈ D(𝒜). We add the index N to the quantities u, uₕ, u_ε, u_ε^h when they are related to ψ_N instead of ψ. Then we write

‖u − uₕ‖ ≤ ‖u − u_N‖ + ‖u_N − u_{ε,N}‖ + ‖u_{ε,N} − u_{ε,N}^h‖ + ‖u_{ε,N}^h − u_{h,N}‖ + ‖u_{h,N} − uₕ‖

where ‖u_{h,N} − uₕ‖ ≤ ‖ψ_N − ψ‖, this last inequality being deduced from (5.64) by letting ε go to 0. We may then let h go to 0, then ε to 0, and N tend to +∞. We obtain

‖u − uₕ‖ → 0 as h → 0. □
We have been considering problem (5.27) with the restrictive assumptions (5.23), (5.24) on the semi group Φ(t), as well as a continuous obstacle. We can relax these assumptions, provided that we no longer search for u in a set of continuous functions as in (5.27). Here we consider a semi group Φ(t) which satisfies (5.19), ..., (5.22) and

(5.75)  x → Φ(t)f(x) is measurable, t fixed, ∀ f ∈ B

(5.76)  t → Φ(t)f(x) is continuous from (0,∞) into R, x fixed, ∀ f ∈ B.

Let now

(5.77)  L ∈ B,  ψ ∈ B.

We consider the problem

(5.78)  u ∈ B,  u ≤ ψ

(5.79)  u ≤ ∫₀ᵗ e^{−αs} Φ(s)L ds + e^{−αt} Φ(t)u,  ∀ t ≥ 0.

We shall see that assumption (5.76) can be weakened to

(5.80)  t → Φ(t)f(x) is continuous from (0,∞) into R, x fixed, ∀ f ∈ C.
Theorem 5.5. We assume (5.19), ..., (5.22), (5.75), (5.76), (5.77). Then the set (5.78), (5.79) has a maximum element; under the assumptions of Theorem 5.3, it coincides with the maximum element of (5.27).

Proof. We consider a discretized problem

(5.81)  uₕ = Tₕ uₕ

where we set, for z ∈ B,

Tₕ z = Min [ψ, ∫₀ʰ e^{−αt} Φ(t)L dt + e^{−αh} Φ(h) z].

Tₕ is a contraction in B, hence (5.81) has one and only one solution. As in Lemmas 5.2 and 5.3 of Chapter IV, we check that

(5.82)  uₕ ≤ u_{2h}

(5.83)  z ∈ B and z ≤ Tₕ z implies z ≤ uₕ.

Let K be a constant such that −K ≤ Tₕ(−K) for every h (such a K exists, depending only on ‖L‖ and ‖ψ⁻‖); then we have

(5.84)  uₕ ≥ −K.

Indeed −K ≤ Tₕ(−K) implies −K ≤ uₕ, from (5.83). We now set u_q = u_{1/2^q}. From (5.82) and (5.84) we deduce that

(5.85)  u_q ↓ u,  u ∈ B,  u ≥ −K.

Then from Lemma 5.4, Chapter IV, we obtain

(5.86)  u ≤ ∫₀^{m/2^q} e^{−αs} Φ(s)L ds + e^{−α m/2^q} Φ(m/2^q) u,  ∀ m, q.

Let t > 0 and take m = [t 2^q] + 1, so that m/2^q ↓ t as q → +∞. By virtue of assumption (5.76) we can assert that

u ≤ ∫₀ᵗ e^{−αs} Φ(s)L ds + e^{−αt} Φ(t)u.

Therefore u is an element of (5.78). It is the maximum element: indeed let ū be another element; then ū ≤ Tₕ ū, hence from (5.83) ū ≤ uₕ, and we let h tend to 0, hence ū ≤ u.
Now if we do not assume (5.76), but (5.79) and (5.80), then we cannot let q tend to +∞ in (5.86) (with m = [t2^q] + 1) directly. However, since u_q ≥ u, we deduce from (5.86)

u ≤ ∫₀^{m/2^q} e^{−αs} Φ(s)L ds + e^{−α m/2^q} Φ(m/2^q) u_q.

But from (5.79) it follows that

u ≤ ∫₀ᵗ e^{−αs} Φ(s)L ds + e^{−αt} Φ(t) u_q.

We then let q tend to +∞, with u_q ∈ C; using (5.80), we proceed as above and obtain the desired result. The final statement of the theorem follows from the fact that both maximum elements of (5.78) and (5.27) can be approximated by the same sequence uₕ. □
6. INTERPRETATION AS A PROBLEM OF OPTIMAL STOPPING

We assume

(6.1)  E is a semi compact (1)

and the semi group Φ(t), defined on B, satisfies properties (5.19), ..., (5.21), (5.23), (5.24). We replace (5.22) by

(6.2)  Φ(t) 1 = 1.

This assumption and (5.21) imply (5.22). Now, in the case when E is not compact, we will need an additional assumption. Let

(6.3)  Ĉ = {f continuous | ∀ ε > 0, ∃ K_ε compact such that |f(x)| < ε for x ∉ K_ε}.

The space Ĉ is a closed subspace of C. Then we will assume that

(6.4)  Φ(t) : Ĉ → Ĉ.

We next define, by (6.5), a transition probability for any Borel subset of E. We consider the canonical space

Ω⁰ = D([0,∞); E) = {ω(·) continuous to the right with left limits},  𝔪⁰ = σ(x(t), t ≥ 0).

According to the general theorem on Markov processes (cf. Dynkin [1],[2]), there exists a unique probability Pˣ on Ω⁰, 𝔪⁰ such that, considering the completed σ-fields Ω̄⁰, 𝔪̄ᵗ, 𝔪̄, the collection Ω̄⁰, 𝔪̄ᵗ, Pˣ, x(t) is a right continuous, quasi continuous from the left (1), strong Markov process, and Pˣ(x(0) = x) = 1.

(1) Locally compact Hausdorff space, with denumerable base; for example Rⁿ.

(1) Quasi continuous from the left means that for any increasing sequence of stopping times τₙ ↑ τ, τₙ(ω) < τ(ω), we have x(τₙ) → x(τ) a.s. on {τ < ∞} (Dynkin [1], p. 103).
We then define the functional

(6.6)  J_x(θ) = Eˣ [∫₀^θ e^{−αs} L(x(s)) ds + e^{−αθ} ψ(x(θ))]

where θ is a stopping time (1), and L, ψ are as in (5.25), (5.26). Our objective is to prove the following

Theorem 6.1. We assume (5.19), (5.20), (5.21), (5.23), (5.24), (5.25), (5.26), (6.1), (6.2), (6.4). Then the maximum solution u of the set (5.27) is given explicitly by

(6.7)  u(x) = Inf_θ J_x(θ).

Moreover there exists an optimal stopping time θ̂, defined by

θ̂ = inf {t ≥ 0 | u(x(t)) = ψ(x(t))}.

(1) The value θ = +∞ is possible.

Proof. Consider the shift operator θ_t : Ω⁰ → Ω⁰, and let ξ be an integrable random variable. By the Markov property we have

Eˣ[ξ∘θ_t | 𝔪ᵗ] = E^{x(t)} ξ.

Applying that relation to

ξ = u(x(s₂−s₁)) e^{−α(s₂−s₁)} + ∫₀^{s₂−s₁} L(x(s)) e^{−αs} ds

and using (5.27), we see that the process

u(x(t)) e^{−αt} + ∫₀ᵗ L(x(s)) e^{−αs} ds

is a sub martingale. Using Doob's Theorem we deduce

u(x) ≤ Eˣ [u(x(θ)) e^{−αθ} + ∫₀^θ L(x(s)) e^{−αs} ds]

and, since u ≤ ψ, we obtain u(x) ≤ J_x(θ).
Consider now the penalized problem (5.30). Reasoning as above, we see that the process

u_ε(x(t)) e^{−αt} + ∫₀ᵗ (L − (1/ε)(u_ε−ψ)⁺)(x(s)) e^{−αs} ds

is a 𝔪ˢ, Pˣ martingale, and thus, using again Doob's Theorem, we can assert that

(6.8)  u_ε(x) = Eˣ [u_ε(x(θ)) e^{−αθ} + ∫₀^θ (L − (1/ε)(u_ε−ψ)⁺)(x(s)) e^{−αs} ds]

for any stopping time θ. Let us define

θ̂_ε = inf {t ≥ 0 | u_ε(x(t)) ≥ ψ(x(t))}.

The set {x | u_ε(x) ≥ ψ(x)} being closed, and the process t → x(t) being a standard process (cf. Dynkin [1], p. 104), θ̂_ε is a 𝔪ᵗ stopping time. By definition of θ̂_ε

(6.9)  (u_ε − ψ)⁺(x(s)) = 0  for s < θ̂_ε.

For the same reason, θ̂ is a 𝔪ᵗ stopping time. We want to prove that

(6.10)  u(x) = J_x(θ̂)

which, with (6.7), will complete the proof of the theorem. When x is such that u(x) = ψ(x), clearly θ̂ = 0, Pˣ a.s., hence J_x(θ̂) = ψ(x) = u(x), which proves (6.10). We may thus restrict ourselves to the case u(x) < ψ(x).
Let δ₀ > 0 be such that u(x) < ψ(x) − δ₀, and take δ < δ₀, so that u(x) < ψ(x) − δ. Let

θ̂_δ = inf {t ≥ 0 | u(x(t)) ≥ ψ(x(t)) − δ}

which is also a 𝔪ᵗ stopping time. Since u_ε → u in C, there exists ε_δ such that for ε ≤ ε_δ, ‖u_ε − u‖ ≤ δ/2. Then for t < θ̂_δ and ε ≤ ε_δ we have u_ε(x(t)) < ψ(x(t)), and therefore θ̂_ε ≥ θ̂_δ. Applying (6.8) with θ = θ̂_δ, and taking (6.9) into account, we deduce from (6.8)

u_ε(x) = Eˣ [u_ε(x(θ̂_δ)) e^{−αθ̂_δ} + ∫₀^{θ̂_δ} L(x(s)) e^{−αs} ds]

and, since u_ε → u in C,

(6.12)  u(x) = Eˣ [u(x(θ̂_δ)) e^{−αθ̂_δ} + ∫₀^{θ̂_δ} L(x(s)) e^{−αs} ds].

Let now δ ↓ 0; then θ̂_δ ↑ λ̂, with λ̂ ≤ θ̂. Assume λ̂ finite. By the quasi left continuity, x(θ̂_δ) → x(λ̂) a.s. Since u, ψ are continuous and u(x(θ̂_δ)) ≥ ψ(x(θ̂_δ)) − δ, we obtain in the limit

u(x(λ̂)) ≥ ψ(x(λ̂))

hence λ̂ ≥ θ̂, and therefore λ̂ = θ̂. It remains to consider the event

(6.13)  A = {λ̂ < +∞, u(x(t)) < ψ(x(t)) ∀ t}  (that is, λ̂ finite and θ̂ = +∞).

For ω ∈ A we have, by the right continuity, u(x(t)) < ψ(x(t)) for all t; but by the quasi left continuity x(θ̂_δ) → x(λ̂) a.s., hence u(x(λ̂)) ≥ ψ(x(λ̂)), which contradicts the definition of A. Therefore Pˣ(A) = 0, and in all cases θ̂_δ ↑ θ̂. We note that, using again the quasi left continuity property, x(θ̂_δ) → x(θ̂) a.s. on {θ̂ < ∞}. By Lebesgue's Theorem, since u is bounded, we can pass to the limit in (6.12), which yields

(6.14)  u(x) = Eˣ [u(x(θ̂)) e^{−αθ̂} + ∫₀^{θ̂} L(x(s)) e^{−αs} ds].

But when θ̂ < ∞, u(x(θ̂)) = ψ(x(θ̂)), by the right continuity of the process; hence u(x) = J_x(θ̂). This completes the proof of the desired result. □

We now assume that the semi group Φ(t) satisfies (5.19), (5.20), (5.21) and (6.2). We also assume (5.75), (5.76), (5.77). We consider the function uₕ, solution of (5.81), and we are going to give its probabilistic interpretation. We define a transition probability by (6.5).
We consider the canonical space

Ω⁰ = E^{[0,∞)},  x(t;ω) = ω(t).

We will assume that

(6.15)  E is a metrizable σ-compact topological space (1).

(1) σ-compact means that it is the union of a denumerable number of compact spaces.

According to the general theory of Markov processes (cf. Dynkin [1],[2]), there exists a unique probability Pˣ on Ω⁰, 𝔪⁰ such that Ω⁰, 𝔪⁰, Pˣ, 𝔪ᵗ, x(t) is a Markov process, and Pˣ(x(0) = x) = 1. Naturally we cannot assert that x(t) has the properties stated in § 6.1. By the Markov property we have

(6.16)  Eˣ[ξ∘θ_t | 𝔪ᵗ] = E^{x(t)} ξ.

Let θ be a 𝔪ᵗ stopping time; we define

(6.17)  J_x(θ) = Eˣ [∫₀^θ e^{−αt} L(x(t)) dt + e^{−αθ} ψ(x(θ))].

We are going to consider stopping times of the form θ = νh, where ν is a random integer satisfying

(6.18)  {ν = n} ∈ 𝔪^{nh},  ∀ n.

Note that θ is indeed a 𝔪ᵗ stopping time, since {ν = n} ∈ 𝔪^{nh}. Our objective is to prove the following
Theorem 6.2. We assume (5.19), (5.20), (5.21), (6.2), (5.75), (5.77), (6.15). Then the solution of the discretized problem (5.81) is given explicitly by

(6.19)  uₕ(x) = Inf_{θ=νh} J_x(θ).

Moreover there exists an optimal stopping time θ̂ₕ = ν̂ₕ h, where

(6.20)  ν̂ₕ = Min {n ≥ 0, n integer | uₕ(x(nh)) = ψ(x(nh))}.

Proof. From (5.81) it follows that

(6.21)  uₕ ≤ ∫₀ʰ e^{−αt} Φ(t)L dt + e^{−αh} Φ(h) uₕ.

Hence, from the Markov property (6.16), for m ≥ n

Eˣ[uₕ(x(mh)) e^{−αmh} + ∫_{nh}^{mh} e^{−αs} L(x(s)) ds | 𝔪^{nh}] ≥ e^{−αnh} uₕ(x(nh)),

i.e. the process e^{−αnh} uₕ(x(nh)) + ∫₀^{nh} e^{−αs} L(x(s)) ds is a discrete sub martingale. By Doob's Theorem we can replace m, n by random integers ν₁ ≤ ν₂ satisfying (6.18); taking ν₁ = 0, ν₂ = ν, it follows that

(6.22)  uₕ(x) ≤ J_x(νh).
Now from (5.81) we have

uₕ = Min [ψ, ∫₀ʰ e^{−αt} Φ(t)L dt + e^{−αh} Φ(h) uₕ].

From property (6.16) we have

∫₀ʰ e^{−αt} Φ(t)L(x(nh)) dt = Eˣ [e^{αnh} ∫_{nh}^{(n+1)h} e^{−αs} L(x(s)) ds | 𝔪^{nh}]

hence (5.81) reads

(6.23)  e^{−αnh} uₕ(x(nh)) = Min {e^{−αnh} ψ(x(nh)), Eˣ [∫_{nh}^{(n+1)h} e^{−αs} L(x(s)) ds + e^{−α(n+1)h} uₕ(x((n+1)h)) | 𝔪^{nh}]}.

Multiply both sides by χ_{n<ν̂ₕ}. Since for n < ν̂ₕ

uₕ(x(nh)) < ψ(x(nh)),

and since χ_{n<ν̂ₕ} is 𝔪^{nh} measurable, we can enter χ_{n<ν̂ₕ} into the conditional expectation. Taking then expectation on both sides, we deduce

(6.24)  Eˣ [e^{−αnh} uₕ(x(nh)) χ_{n<ν̂ₕ}] = Eˣ [χ_{n<ν̂ₕ} (∫_{nh}^{(n+1)h} e^{−αs} L(x(s)) ds + e^{−α(n+1)h} uₕ(x((n+1)h)))].

Since this relation holds for any n, we may sum up in n. We obtain

(6.25)  uₕ(x) = Eˣ [∫₀^{ν̂ₕ h} e^{−αs} L(x(s)) ds + e^{−α ν̂ₕ h} uₕ(x(ν̂ₕ h))].

In fact we have implicitly admitted, in writing (6.25), that ν̂ₕ < ∞ a.s.; this holds when uₕ(x) < ψ(x), since the remainder terms in the summation tend to 0. Since uₕ(x(ν̂ₕ h)) = ψ(x(ν̂ₕ h)), it clearly follows from (6.25) that

(6.26)  uₕ(x) = J_x(ν̂ₕ h)

provided uₕ(x) < ψ(x). When uₕ(x) = ψ(x), we have a.s. from (6.20) that ν̂ₕ = 0, Pˣ a.s., and (6.26) still holds. This completes the proof of the desired result. □
From this we recover that the sequence u_h decreases. We can then state:

Theorem 6.3. We make the assumptions of Theorem 6.2 and (5.76), or (5.79), (5.80). Then the maximum element of the set (5.78) is explicitly given by

(6.28)  u(x) = Inf_θ J_x(θ).

Proof. Similar to the proof of Theorem 5.4 of Chapter IV. ∎

Diffusions (strong formulation). Consider, as in Chapter III, section 5, the semi-group

Φ(t)f(x) = E f(y_x(t)),

assuming Lipschitz conditions on b, σ as in Chapter I, section 4. We know that the semi-group satisfies assumptions (5.19), (5.20), (5.21), (5.23), (5.24), (6.2), and (6.1) is satisfied. Let us check (6.4). It is enough to check that Φ(t)f ∈ C when f ∈ C. Indeed we have shown that

E|y_x(t) − x|² ≤ C(t² + t).

Let |x| > 2N; then

P(|y_x(t)| ≤ N) ≤ P(|y_x(t) − x| ≥ N) ≤ C(t² + t)/N²,

hence

sup_{|x|>2N} |Φ(t)f(x)| ≤ sup_{|y|>N} |f(y)| + |f| C(t² + t)/N²,

which tends to 0 as N tends to +∞, since f ∈ C. Therefore, if we assume (5.25), (5.26) on the data, Theorem 6.1 will apply.

Stopped diffusions. We take Φ(t)ψ(x) = E ψ(y_x(t∧τ)). The problem has already been treated in Theorem 5.2.

Remark 6.1. One can give examples of processes which are not continuous (cf. A. Bensoussan - J.L. Lions [1], M. Robin [1]). ∎
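The stopped-diffusion semi-group Φ(t)ψ(x) = E ψ(y_x(t∧τ)) can be approximated by Monte Carlo. The following sketch uses our own illustrative drift, diffusion and domain (none of them from the text), with an Euler scheme and paths frozen at the approximate exit time τ.

```python
import numpy as np

# Monte Carlo sketch of Phi(t) psi(x) = E psi(y_x(t ^ tau)) in d = 1,
# with dy = b(y) dt + sigma(y) dw, tau the first exit from O = (-1, 1).
rng = np.random.default_rng(0)
b = lambda y: -y                  # illustrative Lipschitz drift
sigma = lambda y: 0.5 + 0.0 * y   # illustrative diffusion coefficient
psi = lambda y: y**2

def phi(t, x, n_paths=2000, dt=1e-3):
    y = np.full(n_paths, float(x))
    alive = np.ones(n_paths, dtype=bool)   # paths not yet stopped at tau
    for _ in range(int(t / dt)):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        y[alive] += b(y[alive]) * dt + sigma(y[alive]) * dw[alive]
        alive &= np.abs(y) < 1.0           # freeze a path once it exits O
    return float(np.mean(psi(y)))
```

For t = 0 the estimator returns ψ(x) exactly; for t > 0 it averages ψ over the stopped paths.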
COMMENTS ON CHAPTER VII

1. For estimates on the penalization error ||u_ε − u||, cf. A. Bensoussan - J.L. Lions [1].

2. If the form a(u,v) is symmetric, then (3.2) corresponds to the Euler condition of optimality for the quadratic optimization problem

Min_{v ∈ K} a(v,v) − 2(f,v).

In that context, the penalized problem is very much related to the method of constraint penalization, which is well known in optimization theory.

3. The method of increase of regularity used for equations does not carry over easily to V.I. A natural question in Theorem 3.2 is the following: can we weaken the assumption p ≥ 2?

4. For degenerate diffusions, cf. J.L. Menaldi [5].

5. The non-stationary stopping time problem leads to parabolic variational inequalities, which are considerably harder than elliptic V.I. when the obstacle ψ is not a regular function of time. The first rigorous treatment is due to F. Mignot - J.P. Puel [1]; cf. also A. Bensoussan - J.L. Lions [1].

6. We can mix stopping time and continuous stochastic control problems. This leads to V.I. where the operator A is replaced by A − H, with H a Hamiltonian (cf. A. Bensoussan - J.L. Lions [1], A. Friedman [1]). For the case when the Hamiltonian has quadratic growth, many technical difficulties arise, cf. J. Frehse - U. Mosco [1], A. Bensoussan - J. Frehse - U. Mosco [1].

7. The probabilistic interpretation of the solution of V.I. permits one to prove many properties of that solution (cf. A. Bensoussan - J.L. Lions [1]).

8. For games, cf. A. Friedman [1], A. Bensoussan - J.L. Lions [1], O. Nakoulima [1].

9. For non-continuous obstacles and the relation with capacity concepts, cf. P. Charrier [1].

10. The main advantage of V.I. for solving free boundary value problems is that the free boundary, which is of course the unknown, does not appear explicitly in the formulation of the problem; one recovers it as a by-product. Of course, not all free boundary problems lead to V.I.

11. For the separation principle for the stopping time problem, cf. J.L. Menaldi [1].
CHAPTER VIII

IMPULSIVE CONTROL

INTRODUCTION

Impulsive control is to some extent a generalization of optimal stopping. It arises naturally in many economic problems, the most pedagogical one being Inventory Theory. The theory has been initiated by A. Bensoussan - J.L. Lions [4] and has motivated numerous researchers, among them B. Hanouzet - J.L. Joly [1], M. Robin [1], P. Charrier [1], O. Nakoulima [1], J.L. Menaldi [1], A. Friedman - L. Caffarelli [1], F. Mignot - J.P. Puel [1], L. Barthelemy [1], U. Mosco [1], G. Troianiello [1], C. Baiocchi - A. Capelo [1], L. Tartar [2], ...

We restrict ourselves here to some aspects of the theory, namely the stationary impulse control problem arising in Inventory Theory, and some considerations for general semi-groups (without probabilistic interpretation). For numerical techniques we refer to M. Goursat - G. Maarek [1], J.C. Miellou [1], H. Kushner [1].

The "probabilistic" approach to impulse control has been developed recently by J.P. Lepeltier - B. Marchal [1].
1. SETTING OF THE PROBLEM

1.1. Notation and assumptions

Let

(1.1)  a_ij = a_ji ∈ W^{1,∞}(R^n),  Σ_{i,j} a_ij(x) ξ_i ξ_j ≥ β|ξ|²  ∀ξ ∈ R^n, β > 0,

and let b_i be given bounded functions; we set

a_i = −b_i + Σ_j ∂a_ij/∂x_j,

and let σ be such that:

(1.5)  ½ σσ* = a.

Let also:

(1.6)  f ≥ 0, f bounded;  a₀ ≥ 0, a₀ bounded,

where O is an open bounded regular domain of R^n. Let finally:

k > 0;
c₀(ξ): R^n_+ → R_+ continuous, c₀(0) = 0, non-decreasing,
c₀(ξ₁ + ξ₂) ≤ c₀(ξ₁) + c₀(ξ₂).
1.2. The model

An impulsive control is described by a set W as follows:

(1.10)  W = (θ¹, ξ¹; θ², ξ²; ...; θⁿ, ξⁿ; ...),

where θⁿ is an increasing sequence of F^t stopping times, and ξⁿ is a sequence of (R^n_+)-valued random variables such that ξⁿ is F^{θⁿ} measurable, ∀n.

We consider a probability P on Ω₀, F⁰ and a given Wiener process w₀(t), standard n-dimensional with values in R^n. We may solve in the strong sense the equation

(1.11)  dy = σ(y) dw₀,  y(0) = x,

since σ is Lipschitz. We may also solve in the strong sense the controlled equation

(1.12)  dx = σ(x) dw₀ + Σ_n ξⁿ δ(t − θⁿ) dt,  x(0) = x,

provided we assume that

(1.13)  θⁿ ↑ +∞  a.s.  (θⁿ = +∞ is possible)(1).

Equation (1.12) has to be interpreted as follows. We define a sequence of diffusions with random initial conditions:

(1.14)  dxⁿ = σ(xⁿ) dw₀,  xⁿ(θⁿ) = x^{n−1}(θⁿ) + ξⁿ,  n ≥ 1,

(1.15)  dx⁰ = σ(x⁰) dw₀,  x⁰(0) = x.

Then we write:

x(t) = xⁿ(t)  for θⁿ ≤ t < θ^{n+1}.

This defines a process which is right continuous with left limits. It is easy to check that

(1.16)  x(t) = x + ∫_0^t σ(x(s)) dw₀(s) + (ξ¹ + ... + ξ^{ν_t}),

where

ν_t = max{n | θⁿ ≤ t},

and this is the integral formulation of (1.12). The uniqueness is clear, since the impulses do not depend on x.

In order to add a drift term to equation (1.16), we use Girsanov's transformation. By virtue of the non-degeneracy assumption (1.1), σ is invertible. We define P^W_x by:

(1.17)  dP^W_x/dP |_{F^t} = exp[∫_0^t (σ^{-1}b)(x(s))·dw₀(s) − ½ ∫_0^t |(σ^{-1}b)(x(s))|² ds].

We also consider:

w(t) = w₀(t) − ∫_0^t (σ^{-1}b)(x(s)) ds.

For the system Ω₀, F⁰, F^t, P^W_x, w, the process x is solution of the equation

(1.18)  x(t) = x + ∫_0^t b(x(s)) ds + ∫_0^t σ(x(s)) dw(s) + (ξ¹ + ... + ξ^{ν_t}),

where ν_t = max{n | θⁿ ≤ t}.

(1) Condition θⁿ < θ^{n+1} is meaningful only when θⁿ < ∞.
We will define a payoff as follows:

(1.19)  J_x(W) = E^W_x[∫_0^τ f(x(t)) exp(−∫_0^t a₀(x(s)) ds) dt + Σ_n χ_{θⁿ<τ} (k + c₀(ξⁿ)) exp(−∫_0^{θⁿ} a₀(x(s)) ds)],

where x(t) is the process defined by (1.16) and τ is the first exit time of x(t) from Ō. We should write τ^W_x. Now, since the process x is right continuous and O is open and bounded, τ is a stopping time for the family F^t (cf. Dynkin [1], p. 188).

Our objective is to study the function

(1.20)  u(x) = Inf_W J_x(W),

and to characterize an optimal impulse control.

1.3. Some remarks

We do not claim uniqueness for P^W_x. Neither did we try to define a controlled martingale problem, as we have done in Chapter IV. This can be done (cf. A. Bensoussan - J.L. Lions [2], M. Robin [1]), but requires techniques of patching together probabilities. We have preferred the above presentation, which preserves the intuition. Nothing essential is lost when dealing with diffusion processes. However, when dealing with general Markov processes, this is unavoidable.

Let us also mention that we have restricted ourselves to one type of impulse control problem, that inspired by Inventory Theory. Many other problems can be treated by similar methods. We refer to A. Bensoussan - J.L. Lions [2] for a discussion of different situations. The structure of the payoff function (1.19) is clear enough: there is an integral cost corresponding to the evolution of the process, and a cost per impulse. The cost per impulse is larger than a fixed constant k > 0, called the fixed cost; c₀(ξ) is called the variable cost.
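The construction (1.14)–(1.16) and the payoff (1.19) can be illustrated by direct simulation. The following sketch is entirely our own (one dimension, constant discount a₀ = α, a fixed non-optimal impulse schedule, Euler time stepping); all parameter values are illustrative.

```python
import numpy as np

# Simulate the impulse-controlled process and evaluate the payoff (1.19)
# for a fixed, non-optimal control W = ((theta^1, xi^1), (theta^2, xi^2)).
rng = np.random.default_rng(0)
dt, sigma = 1e-3, 0.3
k, c, alpha = 1.0, 0.5, 1.0            # fixed cost, slope of c0, discount a0
f = lambda x: x * (1 - x)              # running cost on O = (0, 1)
impulses = [(0.2, 0.3), (0.5, 0.2)]    # (theta^n, xi^n): deterministic schedule

def run_path(x0=0.1, t_max=5.0):
    x, t, J, disc = x0, 0.0, 0.0, 1.0
    pending = list(impulses)
    while t < t_max and 0.0 < x < 1.0:  # tau = first exit from O
        if pending and t >= pending[0][0]:
            _, xi = pending.pop(0)
            x += xi                     # impulse: x(theta^n) = x(theta^n -) + xi^n
            J += disc * (k + c * xi)    # cost per impulse: k + c0(xi^n)
            if not (0.0 < x < 1.0):
                break
        J += disc * f(x) * dt           # integral cost
        x += sigma * rng.normal() * np.sqrt(dt)
        disc *= np.exp(-alpha * dt)     # exp(- integral of a0)
        t += dt
    return J

payoffs = [run_path() for _ in range(200)]
J_est = float(np.mean(payoffs))
```

Replacing the fixed schedule by a feedback rule would give a candidate for the infimum in (1.20); here the point is only the mechanics of (1.16) and (1.19).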
2. QUASI-VARIATIONAL INEQUALITIES

2.1. Orientation

Dynamic Programming leads to an analytic problem which resembles the type of problems discussed in Chapter VII. This is not surprising, since a stopping time problem is a very particular case of impulse control, namely when we impose θ² = θ³ = ... = +∞. This will be reflected in the analytic as well as in the probabilistic treatment. We will encounter quasi-variational inequalities (Q.V.I.) instead of V.I.

We introduce a non-linear operator as follows:

(2.1)  Mφ(x) = k + inf_{ξ ≥ 0, x+ξ ∈ Ō} [φ(x+ξ) + c₀(ξ)],

which makes sense for functions φ which are bounded below. We call Q.V.I. the following problem:

(2.2)  a(u, v−u) ≥ (f, v−u)  ∀v ∈ H¹₀(O), v ≤ Mu;  u ∈ H¹₀(O), u ≤ Mu, u ≥ 0,

where as usual:

(2.3)  a(u,v) = Σ_{i,j} ∫ a_ij ∂u/∂x_j ∂v/∂x_i dx + Σ_i ∫ a_i ∂u/∂x_i v dx + ∫ a₀ u v dx.

Remark 2.1. The condition u ≥ 0 implies that Mu is well defined. ∎

Remark 2.2. Problem (2.2) is an implicit V.I., since the obstacle depends on the solution. This is why it is called a Q.V.I. It should not be confused with the V.I.

a(u, v−u) ≥ (f, v−u)  ∀v ≤ Mv,  u ≤ Mu,

which is a totally different problem. ∎

Remark 2.3. The operator M does not preserve regularity of derivatives; it does not map H¹ into itself(1). However it has a very important property which will play a fundamental role in the sequel: it is monotone increasing in the following sense:

φ₁(x) ≤ φ₂(x) a.e.  implies  Mφ₁(x) ≤ Mφ₂(x) a.e. ∎

Remark 2.4. The fact that the obstacle is not a priori regular also explains why we do not present a regular version of the problem, as a unilateral problem, as we did in section 2 of Chapter VII. ∎

(1) except in dimension 1.
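The operator M of (2.1) is straightforward to evaluate on a grid. The sketch below is our own one-dimensional illustration (made-up k, c₀ and φ); the usage note checks the monotonicity property of Remark 2.3.

```python
import numpy as np

# Grid evaluation of M phi(x) = k + inf over xi >= 0, x+xi in [0,1],
# of phi(x+xi) + c0(xi), in dimension 1 on O-bar = [0, 1].
k = 1.0
c0 = lambda xi: 0.5 * xi            # subadditive, c0(0) = 0, non-decreasing
x = np.linspace(0.0, 1.0, 101)
phi = np.sin(np.pi * x)             # any function bounded below

def M(phi):
    Mphi = np.empty_like(phi)
    for i in range(len(x)):
        xi = x[i:] - x[i]           # admissible jumps: xi >= 0, x + xi stays in [0,1]
        Mphi[i] = k + np.min(phi[i:] + c0(xi))
    return Mphi

Mphi = M(phi)
```

Taking ξ = 0 shows Mφ ≤ k + φ, and shifting φ by a constant shifts Mφ by at most that constant; both facts follow directly from the infimum and are checked numerically below.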
2.2. Solution of the Q.V.I.

Our objective is to prove the following:

Theorem 2.1. Under the assumptions of § 1.1, the Q.V.I. (2.2) has one and only one bounded solution u; moreover u ∈ C⁰(Ō).

We will use several lemmas. Let u⁰ be defined by:

(2.4)  a(u⁰, v) = (f, v)  ∀v ∈ H¹₀(O),  u⁰ ∈ H¹₀(O).

Lemma 2.1. Any solution u of (2.2) satisfies:

(2.5)  0 ≤ u ≤ u⁰.

Proof. Setting ψ = Mu, we can consider (2.2) as a V.I. We are in the conditions of Theorem 3.2 of Chapter VII. Then (2.5) is a consequence of the proof of Theorem 5.1, Chapter VII. Note that since u ≥ 0, 0 belongs to the set K defined in (3.3), Chapter VII. ∎
Lemma 2.2. We consider an iterative sequence as follows:

(2.6)  a(u^{n+1}, v − u^{n+1}) ≥ (f, v − u^{n+1})  ∀v ∈ H¹₀, v ≤ Mu^n;  u^{n+1} ∈ H¹₀, u^{n+1} ≤ Mu^n,

where u⁰ is defined by (2.4). Then u^n is a decreasing sequence which converges pointwise to a solution of (2.2), which is the maximum among all possible solutions.

Proof. Consider first u¹. Note that u⁰ ≥ 0, since f ≥ 0 (Chapter II, Lemma 2.3). Hence Mu⁰ ≥ 0. The assumptions of Theorem 3.2 are satisfied; hence u¹ is defined in a unique way. Since f ≥ 0, 0 is clearly a lower solution of the V.I. (cf. (5.1), Chapter VII). Therefore from Theorem 5.1, Chapter VII, we have u¹ ≥ 0, and also u¹ ≤ u⁰. Assuming at step n that 0 ≤ u^n ≤ u⁰, the same argument tells us that 0 ≤ u^{n+1} ≤ u⁰. Assume now u^n ≤ u^{n−1} (which is true at step n = 1). Let us show that:

(2.7)  u^{n+1} ≤ u^n.

Indeed u^{n+1} satisfies (2.6), and u^{n+1} ≤ Mu^n ≤ Mu^{n−1}, since M is monotone increasing. But then u^{n+1} is a lower solution for the V.I. defining u^n. By Theorem 5.1, Chapter VII, we deduce (2.7).

We can take v = 0 as a test function in (2.6). We deduce:

a(u^{n+1}, u^{n+1}) + λ|u^{n+1}|² ≤ (f, u^{n+1}) + λ|u^{n+1}|² ≤ C,

since u^{n+1} is a bounded function. Since λ can be taken large enough, it follows that:

(2.8)  ||u^{n+1}||_{H¹₀} ≤ C.

Therefore we can assert that:

(2.9)  u^n ↓ u  pointwise and in H¹₀ weakly.

Let us check that:

(2.10)  u ≤ Mu.

We have u^{n+1} ≤ Mu^n, hence:

u^{n+1}(x) ≤ k + c₀(ξ) + u^n(x+ξ)  ∀ξ ≥ 0, x + ξ ∈ Ō.

Therefore, going to the limit in n:

u(x) ≤ k + c₀(ξ) + u(x+ξ),

hence (2.10), by taking the infimum in ξ.

Let now v ∈ H¹₀ be such that v ≤ Mu. We have v ≤ Mu^n, hence from (2.6):

a(u^{n+1}, v) − (f, v − u^{n+1}) ≥ a(u^{n+1}, u^{n+1}).

By weak l.s.c. we deduce:

a(u, v − u) ≥ (f, v − u),

hence u is a solution of the Q.V.I. (2.2). Let us prove that it is the maximum solution. Let ũ be a solution; by Lemma 2.1, ũ ≤ u⁰. Assume ũ ≤ u^n; then we have ũ ≤ Mũ ≤ Mu^n, which implies that ũ is a lower solution for the V.I. defining u^{n+1}, therefore ũ ≤ u^{n+1}. Letting n tend to +∞, we obtain ũ ≤ u. ∎

Let z ∈ L^∞, z ≥ 0; we define Tz = ζ as the solution of:

(2.11)  a(ζ, v − ζ) ≥ (f, v − ζ)  ∀v ∈ H¹₀, v ≤ Mz;  ζ ∈ H¹₀, ζ ≤ Mz.

From Theorem 3.2, Chapter VII, we have ζ ∈ L^∞. So we have defined a map T from the positive cone of L^∞ into itself.
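The iterative scheme (2.6) — solve a V.I. with the frozen obstacle Mu^n at each step — can be sketched numerically. Everything below (one-dimensional operator, discretization, projected Gauss-Seidel inner solver, parameter values) is our own illustrative choice, not the book's.

```python
import numpy as np

# Toy Q.V.I. iteration: A = -d^2/dx^2 + I on O = (0,1), f = 10,
# M u(x) = k + inf_{xi>=0, x+xi<=1} [u(x+xi) + c*xi], k = 0.05, c = 0.5.
n = 49
hg = 1.0 / (n + 1)
fval = 10.0
k, c = 0.05, 0.5
xg = np.linspace(hg, 1.0, n + 1)          # interior points plus right boundary

def Mop(u):
    u_ext = np.append(u, 0.0)             # u = 0 on the boundary
    out = np.empty(n)
    for i in range(n):
        out[i] = k + np.min(u_ext[i:] + c * (xg[i:] - xg[i]))
    return out

def solve_vi(psi, sweeps=1200):
    """Projected Gauss-Seidel for -u'' + u = f, u <= psi, u(0) = u(1) = 0."""
    u = np.zeros(n)
    diag = 2.0 / hg**2 + 1.0
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = min(psi[i], (fval + (left + right) / hg**2) / diag)
    return u

u0 = solve_vi(np.full(n, 1e9))            # u^0: obstacle inactive
u = u0
for _ in range(30):                        # outer iteration: u^{n+1} = T u^n
    u_new = solve_vi(Mop(u))
    if np.max(np.abs(u_new - u)) < 1e-9:
        u = u_new
        break
    u = u_new
```

As in Lemma 2.2, the outer iterates start from the unconstrained solution u⁰ and decrease toward a fixed point satisfying u ≤ Mu, 0 ≤ u ≤ u⁰.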
Lemma 2.3. The map T is increasing and concave.

Proof. Let z ≤ z̃ and ζ = Tz, ζ̃ = Tz̃. We have Mz ≤ Mz̃, hence ζ ≤ Mz̃, so ζ is a lower solution for the V.I. defining ζ̃; therefore ζ ≤ ζ̃, and T is increasing. Let θ ∈ (0,1). We have:

θMz + (1−θ)Mz̃ ≤ M(θz + (1−θ)z̃),

as easily checked. Hence θζ + (1−θ)ζ̃ ≤ M(θz + (1−θ)z̃), and θζ + (1−θ)ζ̃ is a lower solution for the V.I. defining T(θz + (1−θ)z̃). Therefore:

θTz + (1−θ)Tz̃ ≤ T(θz + (1−θ)z̃),

which proves that T is concave. ∎
Let μ ∈ (0,1) be such that:

(2.12)  μ u⁰ ≤ k.

Lemma 2.4. Let z, z̃ be two positive bounded functions such that:

(2.13)  z(x) − z̃(x) ≤ γ z(x)  a.e.,  γ ∈ [0,1].

Then one has:

(2.14)  Tz(x) − Tz̃(x) ≤ γ(1−μ) Tz(x).

Proof. From (2.13) we have:

(1−γ)z ≤ z̃.

Since T is concave and increasing:

(2.15)  Tz̃ ≥ T((1−γ)z) ≥ (1−γ)Tz + γ T0.

Let us check that:

(2.16)  T0 ≥ μ u⁰.

Indeed:

a(μu⁰, φ) = μ a(u⁰, φ) = (μf, φ) ≤ (f, φ)  ∀φ ≥ 0,

and:

μu⁰ ≤ k = M(0),

hence μu⁰ is a lower solution of the V.I. defining T0, and (2.16) follows. Using also the fact that Tz ≤ u⁰, we have from (2.16):

T0 ≥ μ Tz,

and from (2.15):

Tz̃ ≥ (1−γ)Tz + γμ Tz = (1 − γ(1−μ)) Tz,

hence (2.14). ∎
Corollary. The bounded solution of (2.2) is unique.

Proof. Let u, ũ be two solutions. From the positivity, we have u − ũ ≤ u, hence (2.13) is verified with γ = 1. Applying (2.14), and using the fact that u, ũ are fixed points of T, we obtain:

u − ũ ≤ (1−μ) u,

which means that (2.13) is verified with γ = (1−μ). Iterating, we check that:

u − ũ ≤ (1−μ)^k u,

and, letting k tend to +∞, u ≤ ũ. By symmetry, u = ũ. ∎
Lemma 2.5. If φ ∈ C⁰₀(Ō), extended by 0 outside Ō, then:

(2.17)  Mφ(x) = k + inf_{ξ ≥ 0} (c₀(ξ) + φ(x+ξ)).

Proof. For any ξ ≥ 0 such that x + ξ ∉ Ō, there exists θ̄ ∈ [0,1] such that x + θ̄ξ ∈ Γ; since c₀ is non-decreasing and φ vanishes on Γ and outside Ō:

k + c₀(ξ) + φ(x+ξ) = k + c₀(ξ) ≥ k + c₀(θ̄ξ) + φ(x + θ̄ξ) ≥ Mφ(x).

On the other hand, the infimum over the larger set {ξ ≥ 0} is clearly ≤ Mφ(x). Hence (2.17). ∎
Lemma 2.6. The sequence u^n → u in C⁰(Ō).

Proof. First, the sequence u^n belongs to C⁰(Ō). It is true for u⁰. Assume it is true for u^n. Since u^n in fact belongs to C⁰₀(Ō), it follows from Lemma 2.5 that (2.17) holds for Mu^n; but then Mu^n clearly belongs to C⁰(Ō). From Theorem 3.3 of Chapter VII, we deduce that u^{n+1} ∈ C⁰₀(Ō).

Now we note that:

u⁰ − u¹ ≤ u⁰,

hence from Lemma 2.4:

u¹ − u² ≤ (1−μ) u¹,

and, iterating, we obtain:

u^n − u^{n+1} ≤ (1−μ)^n u^n ≤ (1−μ)^n u⁰.

From this estimate it easily follows that u^n is Cauchy in L^∞. Hence the desired result. ∎

The proof of Theorem 2.1 is then complete.
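The geometric rate u^n − u^{n+1} ≤ (1−μ)^n u⁰ behind Lemmas 2.4 and 2.6 can be observed numerically. The sketch below transplants the argument to a finite-state analogue of our own making (a discounted transition matrix βP in place of the elliptic operator, a forward-jump operator M); the same monotonicity and concavity structure holds there, so the bound can be checked with μ chosen so that μu⁰ ≤ k.

```python
import numpy as np

# Finite-state analogue: u solves u = min(f + beta*P u, M z) for frozen z,
# with M u(i) = k + min_{j>=i} (u(j) + c*(j-i)); T z is that solution.
m, beta, k, c = 30, 0.7, 0.4, 0.05
rng = np.random.default_rng(1)
P = rng.random((m, m)); P /= P.sum(axis=1, keepdims=True)
f = rng.random(m) + 0.5

def Mop(u):
    j = np.arange(m)
    return k + np.array([np.min(u[i:] + c * (j[i:] - i)) for i in range(m)])

def T(z, iters=2000):
    """Value iteration with the obstacle M z frozen (contraction, factor beta)."""
    psi = Mop(z)
    u = np.zeros(m)
    for _ in range(iters):
        u = np.minimum(f + beta * P @ u, psi)
    return u

u0 = np.linalg.solve(np.eye(m) - beta * P, f)   # unconstrained problem
mu = min(1.0, k / u0.max())                      # mu * u0 <= k, mu in (0, 1]
us = [u0]
for _ in range(12):
    us.append(T(us[-1]))                         # decreasing outer sequence
```

Because T here is increasing and concave with T0 ≥ μu⁰, the derivation of Lemmas 2.4 and 2.6 carries over verbatim, and the computed differences satisfy the announced bound.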
Remark 2.5. If the solution u of (2.2) belongs to W^{2,p}, then one can write problem (2.2) as follows:

(2.18)  Au + a₀u ≤ f,  u ≤ Mu,  (Au + a₀u − f)(u − Mu) = 0  a.e. in O,  u|_Γ = 0.

It turns out that under additional regularity assumptions on f, the coefficients of A, a₀ and c₀, one can prove that u ∈ W^{2,p} for any p, 2 ≤ p < ∞. However this theory is too elaborate for the scope of the present work. The main results have been obtained by J.L. Joly - U. Mosco - G. Troianiello [1] and A. Friedman - L. Caffarelli [1]. We refer to A. Bensoussan - J.L. Lions [2] for a detailed presentation of the available results. ∎
3. SOLUTION OF THE IMPULSIVE CONTROL PROBLEM

3.1. The main result

Our objective is to prove the following:

Theorem 3.1. We make the assumptions of Theorem 2.1. Then the solution u of (2.2) is given explicitly by:

u(x) = Inf_W J_x(W),

where the function J_x(W) has been defined in (1.19). Moreover there exists an optimal impulse control. ∎
We start with a complement on stopping time problems, which is interesting in itself. Let us consider a "usual" system Ω, 𝒜, P, F^t, w(t). Let θ, θ' be two stopping times such that θ ≤ θ' a.s., and let η be F^θ measurable with values in R^n. Let y(t) be a continuous process such that:

dy = β(t) dt + σ(y) dw(t)  for t ≥ θ;  y(t) = η for t ≤ θ,

where β is an adapted process which is bounded by a deterministic constant.

Let τ_θ be the first exit time of the process y(t) from Ō after time θ. Then τ_θ is a stopping time. Indeed, let x₀ ∈ Γ, and set:

(3.2)  z(t) = x₀ for t < θ,  z(t) = y(t) for t ≥ θ;

then z(t) is adapted, right continuous with left limits, and its unique point of discontinuity is θ. Let σ_s be the first exit time of the process z(t) from Ō after time s. Note that:

(3.3)  σ_θ = τ_θ,

since z(t) = y(t) for t ≥ θ. Let also:

(3.4)  σ̄_s = inf{t ≥ s | z(t−) or z(t) ∉ Ō}.

Because of the special form of z, it is easy to check that σ̄_s = σ_s. But since O is a bounded open set and z is right continuous, from Dynkin [1], p. 188, σ̄_s is a stopping time:

{σ̄_s > t} ∈ σ(z(λ), s ≤ λ ≤ t) ⊂ F^t.

But since x₀ ∈ Γ, σ̄_θ = σ̄₀. Finally τ_θ = σ̄₀, which is a stopping time. We make the assumption:

(3.5)  β(t) = b(y(t))  for t ∈ [θ, θ'∧τ_θ],  a.s.

We next consider, under the assumptions of Theorem 4.2, Chapter VII, the V.I.:

(3.6)  a(w, v−w) ≥ (f, v−w)  ∀v ∈ H¹₀, v ≤ ψ;  w ∈ H¹₀, w ≤ ψ.

We know that w is continuous. Consider:

(3.7)  θ̂ = inf{t ≥ θ | w(y(t)) = ψ(y(t))}.

The set C = {x | w(x) < ψ(x)} is open, and θ̂ is the first exit time from C after time θ. We may assume that ψ is extended outside Ō by a function which is strictly positive far away, and w is extended by 0; therefore the complement of C is compact. Now in the definition of z (3.2) we may assume that x₀ ∈ ∂C. We are in a situation similar to that of τ_θ, except that C is not bounded, but its complement is compact. By Dynkin [1], p. 188, this suffices to assert the analogue of (3.3)-(3.4). We thus again obtain that θ̂ is a stopping time.
We then have the:

Lemma 3.1. The following relations hold(1):

(3.8)  w(η) ≤ E[∫_θ^{θ'∧τ_θ} f(y(s)) exp(−∫_θ^s a₀(y(λ)) dλ) ds + w(y(θ'∧τ_θ)) exp(−∫_θ^{θ'∧τ_θ} a₀(y(λ)) dλ) | F^θ]  a.s.

If θ' = θ̂ (hence we assume that (3.5) holds with θ' = θ̂), then equality holds:

(3.9)  w(η) = E[∫_θ^{θ̂∧τ_θ} f(y(s)) exp(−∫_θ^s a₀(y(λ)) dλ) ds + w(y(θ̂∧τ_θ)) exp(−∫_θ^{θ̂∧τ_θ} a₀(y(λ)) dλ) | F^θ].

Proof. Consider first the case of a regular obstacle. Then we know that w ∈ W^{2,p}(O) (cf. Chapter IV, Theorem 2.2). Let w_n be a family of smooth functions such that:

(3.10)  w_n → w in W^{2,p}(O).

From Ito's formula applied to w_n(y(t)) exp(−∫_θ^t a₀ dλ), taking the conditional expectation with respect to F^θ, the stochastic integral contribution vanishes(2). We obtain:

(3.11)  w_n(η) = E[∫_θ^{θ'∧τ_θ} (Aw_n + a₀w_n)(y(s)) exp(−∫_θ^s a₀(y(λ)) dλ) ds + w_n(y(θ'∧τ_θ)) exp(−∫_θ^{θ'∧τ_θ} a₀(y(λ)) dλ) | F^θ].

Now from (3.10) it follows that:

Aw_n + a₀w_n → Aw + a₀w  in L^p(O).

From Lemma 3.2 below we deduce that the corresponding integral terms converge in L¹. But the conditional expectation is a contraction in L¹; therefore there exists a subsequence such that the first term on the right of (3.11) converges a.s. to the corresponding term with w. Since w is 0 outside Ō, we can without loss of generality assume that w_n → w uniformly in the whole space. We can thus pass to the limit a.s. in (3.11) (for a subsequence) and obtain:

(3.12)  w(η) = E[∫_θ^{θ'∧τ_θ} (Aw + a₀w)(y(s)) exp(−∫_θ^s a₀ dλ) ds + w(y(θ'∧τ_θ)) exp(−∫_θ^{θ'∧τ_θ} a₀ dλ) | F^θ].

Using the fact that Aw + a₀w ≤ f a.e., we obtain assertion (3.8).

We apply next (3.12) with θ' = θ̂. We use the fact that:

(Aw + a₀w − f) χ_C = 0  a.e.

(cf. proof of Theorem 4.1, Chapter IV). Hence, in particular, for θ ≤ s < θ̂∧τ_θ we have y(s) ∈ C a.s.; in this expression we may therefore replace χ_C by 1, by definition of θ̂. From this we deduce (3.9). Therefore the lemma is proved for regular obstacles.

In the general case, we approximate ψ by ψ_n as in the proofs of Theorems 3.3 and 4.2, Chapter VII. We have w_n → w in C⁰(Ō), and, since they both vanish on Γ, we can without loss of generality assume that w_n → w uniformly in the whole space. We can then pass to the limit a.s. in expression (3.8). For (3.9) we have to be more careful, since the stopping time θ̂ depends on n. We have to make a random version of the argument used in the proof of Theorem 4.2, Chapter VII. It is technically more convenient to use the penalized problem as an approximation. Consider:

(3.13)  Aw_ε + a₀w_ε + (1/ε)(w_ε − ψ)⁺ = f,  w_ε|_Γ = 0.

Define:

(3.14)  θ̂_ε = inf{t ≥ θ | w_ε(y(t)) ≥ ψ(y(t))},

which is, for similar reasons as θ̂, a stopping time. As we did to prove (3.9) for regular w, we deduce from (3.13):

(3.15)  w_ε(η) = E[∫_θ^{θ̂_ε∧τ_θ} f(y(s)) exp(−∫_θ^s a₀ dλ) ds + w_ε(y(θ̂_ε∧τ_θ)) exp(−∫_θ^{θ̂_ε∧τ_θ} a₀ dλ) | F^θ].

(1) From Lemma 3.2 below it will follow that τ_θ < ∞ a.s. on {θ < ∞}.
(2) Cf. Remark 3.2.
Since w is a lower solution of the penalized problem, we have:

(3.16)  w_ε ≥ w.

Moreover:

(3.17)  w_ε → w  in C⁰(Ō).

This result has been proved in Chapter IV, Theorem 3.4. Let us then show that:

(3.18)  θ̂_ε → θ̂  a.s.

Indeed, first of all we can assert that:

(3.19)  θ̂_ε ≤ θ̂  (cf. (3.7)).

This inequality is clear when θ̂ = +∞. When θ̂ < ∞, we have by definition of θ̂:

w(y(θ̂)) = ψ(y(θ̂)),

and thus from (3.16):

w_ε(y(θ̂)) ≥ ψ(y(θ̂)),

which clearly implies (3.19).

At a point ω such that θ̂ = θ, we clearly have from (3.19) θ̂_ε = θ̂, i.e. (3.18) is satisfied. Consider now values of ω such that θ < θ̂. Let δ₀ be such that θ̂ > θ + δ₀. For θ ≤ s ≤ θ̂ − δ₀ we have:

w(y(s)) ≤ ψ(y(s)) − β(ω, δ₀),  β(ω, δ₀) > 0.

By (3.17) we can find ε₀(ω) such that, for ε ≤ ε₀(ω),

|w_ε − w|_{C⁰} ≤ β(ω, δ₀)/2.

Therefore for ε ≤ ε₀ and θ ≤ s ≤ θ̂ − δ₀:

w_ε(y(s)) < ψ(y(s)),

which implies:

θ̂_ε ≥ θ̂ − δ₀.

For any δ < δ₀, we can show in a similar way the existence of ε₀(δ, ω) for which ε ≤ ε₀ implies θ̂_ε ≥ θ̂ − δ. From this we deduce (3.18).

Now from (3.18) and (3.17) it is easy to pass to the limit as ε → 0 in (3.15). We obtain (3.9). ∎

Remark 3.1. If we want to use a proof like that of Theorem 4.2, Chapter VII, we need to introduce random times θ + δ, with δ random. We then face the difficulty of proving that they are stopping times. ∎
The following estimate holds:

Lemma 3.2.

(3.20)  E[∫_θ^{θ'∧τ_θ} |f(y(t))| dt | F^θ] ≤ C |f|_{L^p(O)},  p > n + 2,

where C does not depend on f, nor on θ, θ', but depends on Ō and the bounds on b, σ, σ^{-1}.

Proof. We first make an inverse Girsanov transformation. We define:

(3.22)  dP̃/dP |_{F^T} = exp[−∫_0^T (σ^{-1}b)(y(t))·dw(t) − ½ ∫_0^T |(σ^{-1}b)(y(t))|² dt].

Considering Ω, 𝒜, P̃, F^t, the process w̃(t) = w(t) + ∫_0^t (σ^{-1}b)(y(s)) ds, for t ≤ T, is a standard n-dimensional Wiener process, and the process y(t) satisfies:

(3.23)  dy = σ(y) dw̃.

We also have the inversion formula (3.24) expressing E in terms of Ẽ; since the exponential martingale has moments of all orders (σ^{-1}b being bounded), the Cauchy-Schwarz inequality reduces (3.20), with θ' replaced by θ'∧T, to an estimate of the same type for the drift-free process with f replaced by h = f² (3.25). We then consider the P.D.E.:

(3.26)  −∂u/∂t + Ãu = h,  u|_Σ = 0,  u(x,T) = 0,

where Ã is the second-order operator associated with the a_ij. We know, since a_ij ∈ W^{1,∞}, h smooth and O regular, that there is one and only one solution of (3.26) in the functional space:

(3.27)  u ∈ C^{2+δ, 1+δ/2}(Q̄),  Q = O × (0,T).

By considering a function which is C^{2,1}(R^n × [0,T]) and coincides with u in Q̄, we obtain from Ito's formula:

(3.28)  Ẽ[∫_{θ∧T}^{θ'∧τ_θ∧T} h(y(t)) dt | F^θ] ≤ 2 |u|_{C(Q̄)}.

But we also know that for the solution of (3.26):

(3.29)  |u|_{C(Q̄)} ≤ C |h|_{L^p(Q)},  p > (n+2)/2.

Therefore, for p > n + 2, combining (3.28), (3.29) with (3.25) for h = f², we deduce:

(3.31)  E[∫_{θ∧T}^{θ'∧τ_θ∧T} f(y(t)) dt | F^θ] ≤ C_T |f|_{L^p},  p > n + 2.

Consider now the elliptic P.D.E.:

(3.32)  Au = f,  u|_Γ = 0.

Assuming p as in (3.31), we can deduce from (3.31), (3.32) and (3.5) that:

E u(y(θ∧T)) = E u(y(θ'∧τ_θ∧T)) + E ∫_{θ∧T}^{θ'∧τ_θ∧T} f(y(t)) dt.

But, noting the estimate:

|u|_{L^∞} ≤ C |f|_{L^p},

where the constant does not depend on T, we deduce a better estimate than (3.31), namely one in which the constant C does not depend on f, nor on θ, T, θ'. Letting T tend to +∞, we deduce (3.20) by Fatou's lemma. ∎

Remark 3.2. It follows from Lemma 3.2 that:

(3.33)  τ_θ < ∞  a.s. on {θ < ∞}.

Indeed, take f = 1 on O; from (3.20), E[τ_θ − θ | F^θ] ≤ C < ∞, which implies (3.33). We also have (which has been used in the proof of Lemma 3.1):

(3.34)  E[∫_θ^{τ_θ} g(y(t))·dw(t) | F^θ] = 0  a.s.,

for, say, g bounded. Indeed, we first note that, taking T fixed:

(3.35)  E[∫_θ^{τ_θ∧T} g(y(t))·dw(t) | F^θ] = 0  a.s.

Also:

E[|∫_θ^{τ_θ∧T} g(y(t))·dw(t)|²] ≤ |g|²_∞ E[τ_θ∧T − θ] ≤ C,

therefore, by Fatou's lemma, the limit is square integrable. By this estimate and Lebesgue's theorem, we can let T → +∞ in (3.35), which yields (3.34). ∎

Proof of Theorem 3.1. We may consider the Q.V.I. (2.2) as a V.I. with obstacle ψ = Mu. It is easy to check that the assumptions of Theorem 4.2, Chapter VII are satisfied. Let W be an impulsive control. Consider Ω₀, F⁰, F^t, P^W_x; then the process xⁿ defined in (1.14) satisfies the equation:

(3.36)  dxⁿ = b(xⁿ) dt + σ(xⁿ) dw,  xⁿ(θⁿ) = x^{n−1}(θⁿ) + ξⁿ.

We take, with the notation of the beginning of § 3.1, θ = θⁿ, θ' = θ^{n+1}, y = xⁿ, and we denote by τⁿ the first exit time of xⁿ from Ō after θⁿ. From (1.17), (1.18) it follows that assumption (3.5) is satisfied. Therefore from Lemma 3.1, property (3.8) applied with w = u, we obtain:

(3.37)  u(xⁿ(θⁿ)) ≤ E[∫_{θⁿ}^{θ^{n+1}∧τⁿ} f(xⁿ(s)) exp(−∫_{θⁿ}^s a₀(xⁿ(λ)) dλ) ds + u(xⁿ(θ^{n+1}∧τⁿ)) exp(−∫_{θⁿ}^{θ^{n+1}∧τⁿ} a₀(xⁿ(λ)) dλ) | F^{θⁿ}]  on {θⁿ < ∞}.

Let us next remark that:

(3.38)  on {θⁿ < τ}:  τⁿ ∧ θ^{n+1} = τ ∧ θ^{n+1}.

Indeed, if θ^{n+1} ≤ τⁿ, then xⁿ(s) ∈ Ō for s ∈ [θⁿ, θ^{n+1}), and x(s) = xⁿ(s) on this interval, hence τ ≥ θ^{n+1}, so both sides equal θ^{n+1}. If θ^{n+1} > τⁿ, then x(τⁿ) = x(τⁿ−) = xⁿ(τⁿ) ∉ O, thus τ = τⁿ < θ^{n+1}. This completes the proof of (3.38).
We multiply both sides of (3.37) by χ_{θⁿ<τ}, which is F^{θⁿ} measurable. We obtain from (3.38):

(3.39)  χ_{θⁿ<τ} u(xⁿ(θⁿ)) ≤ E[χ_{θⁿ<τ}(∫_{θⁿ}^{θ^{n+1}∧τ} f(x(s)) exp(−∫_{θⁿ}^s a₀ dλ) ds + u(xⁿ(θ^{n+1}∧τ)) exp(−∫_{θⁿ}^{θ^{n+1}∧τ} a₀ dλ)) | F^{θⁿ}].

Now we use the fact that u ≤ Mu, hence in particular, from Lemma 2.5 (see (2.17)):

(3.40)  u(x) ≤ k + c₀(ξ) + u(x+ξ)  ∀ξ ≥ 0.

We apply this inequality with x = xⁿ(θ^{n+1}), when θ^{n+1} < ∞, and ξ = ξ^{n+1}; we obtain:

(3.41)  u(xⁿ(θ^{n+1})) ≤ k + c₀(ξ^{n+1}) + u(x^{n+1}(θ^{n+1})),

from the definition (1.14) of x^{n+1}(θ^{n+1}). Now we have, if θⁿ < ∞:

(3.42)  u(xⁿ(θ^{n+1}∧τ)) = χ_{θ^{n+1}<τ} u(xⁿ(θ^{n+1})),

since u(xⁿ(τⁿ)) = 0 when τ ≤ θ^{n+1} (u vanishes on Γ and outside O; see Remark 3.2). From (3.41) and (3.42) it follows that, if θⁿ < ∞:

(3.43)  u(xⁿ(θ^{n+1}∧τ)) ≤ χ_{θ^{n+1}<τ} [k + c₀(ξ^{n+1}) + u(x^{n+1}(θ^{n+1}))].

But:

(3.44)  χ_{θ^{n+1}<τ} u(x^{n+1}(θ^{n+1})) = χ_{θ^{n+1}<τ} u(x(θ^{n+1})).

Indeed, when θ^{n+1} < τ, we have xⁿ(s) = x(s) for s ∈ [θⁿ, θ^{n+1}), hence x(θ^{n+1}−) = xⁿ(θ^{n+1}), and from (1.14), x^{n+1}(θ^{n+1}) = xⁿ(θ^{n+1}) + ξ^{n+1} = x(θ^{n+1}); when θ^{n+1} ≥ τ, both sides vanish. Using (3.44) in (3.43) and going back to (3.39), we obtain:

(3.45)  E[χ_{θⁿ<τ} u(x(θⁿ)) exp(−∫_0^{θⁿ} a₀(x(s)) ds)]
  ≤ E[∫_{θⁿ∧τ}^{θ^{n+1}∧τ} f(x(s)) exp(−∫_0^s a₀ dλ) ds]
  + E[χ_{θ^{n+1}<τ} (k + c₀(ξ^{n+1})) exp(−∫_0^{θ^{n+1}} a₀ ds)]
  + E[χ_{θ^{n+1}<τ} u(x(θ^{n+1})) exp(−∫_0^{θ^{n+1}} a₀ ds)].

We add up these relations when n runs from 0 to N−1, remarking that by convention θ⁰ = 0. Assuming x ∈ O, we obtain:

(3.46)  u(x) ≤ E[∫_0^{θ^N∧τ} f(x(s)) exp(−∫_0^s a₀ dλ) ds + Σ_{n=1}^N χ_{θⁿ<τ} (k + c₀(ξⁿ)) exp(−∫_0^{θⁿ} a₀ ds)] + E[χ_{θ^N<τ} u(x(θ^N)) exp(−∫_0^{θ^N} a₀ ds)].
Now the sequence χ_{θⁿ<τ} exp(−∫_0^{θⁿ} a₀(x(s)) ds) is decreasing; hence it converges a.s. to a limit Π. If Π = 0 a.s., then clearly from (3.46) and θ^N ↑ +∞ a.s. (cf. (1.13)), letting N tend to +∞:

u(x) ≤ J_x(W).

Therefore we have proved that u(x) ≤ J_x(W) for all W such that Π = 0 a.s. Now if W is such that Π > 0 on a set of positive probability, then it is easy to check that J_x(W) = +∞, since:

(3.48)  J_x(W) ≥ k E[Σ_n χ_{θⁿ<τ} exp(−∫_0^{θⁿ} a₀(x(s)) ds)] = +∞,

each term of the series being ≥ Π. Therefore we have proved that:

(3.49)  u(x) ≤ J_x(W),

provided x ∈ O. If x ∉ O, then τ = 0, u(x) = 0, and (3.49) is also satisfied (as an equality).
Let us now prove that we can find Ŵ such that:

(3.50)  u(x) = J_x(Ŵ).

We first find a Borel function ξ̂(x) ≥ 0, such that x + ξ̂(x) ∈ Ō, ∀x ∈ Ō, and:

(3.51)  Mu(x) = k + c₀(ξ̂(x)) + u(x + ξ̂(x)).

Let us consider next:

dx̂⁰ = σ(x̂⁰) dw₀,  x̂⁰(0) = x,

and:

(3.52)  θ̂¹ = T̂⁰ if T̂⁰ < ŝ⁰, = +∞ otherwise;  ξ̂¹ = ξ̂(x̂⁰(θ̂¹)) if θ̂¹ < ∞, arbitrary if θ̂¹ = +∞,

where ŝ⁰ is the first exit time of x̂⁰ from Ō and

T̂⁰ = inf{t ≥ 0 | u(x̂⁰(t)) = Mu(x̂⁰(t))}.

Since Mu|_Γ ≥ k > 0 = u|_Γ, the point x̂⁰(θ̂¹) belongs to O when θ̂¹ < ∞. Having defined the process x̂^{n−1} and θ̂ⁿ, ξ̂ⁿ, we define x̂ⁿ, θ̂^{n+1}, ξ̂^{n+1} as follows:

(3.53)  dx̂ⁿ = σ(x̂ⁿ) dw₀,  x̂ⁿ(θ̂ⁿ) = x̂^{n−1}(θ̂ⁿ) + ξ̂ⁿ,
        ŝⁿ = first exit time of x̂ⁿ from Ō after θ̂ⁿ,
        T̂ⁿ = inf{t ≥ θ̂ⁿ | u(x̂ⁿ(t)) = Mu(x̂ⁿ(t))},

(3.54)  θ̂^{n+1} = T̂ⁿ if T̂ⁿ < ŝⁿ, = +∞ otherwise;  ξ̂^{n+1} = ξ̂(x̂ⁿ(θ̂^{n+1})) if θ̂^{n+1} < ∞, arbitrary if θ̂^{n+1} = +∞.

Let us check that the control

Ŵ = (θ̂¹, ξ̂¹; ...; θ̂ⁿ, ξ̂ⁿ; ...)

is admissible. The sequence θ̂ⁿ is a sequence of stopping times. Also, assume θ̂ⁿ < ∞; then we have, from (3.51):

u(x̂ⁿ(θ̂ⁿ)) = u(x̂^{n−1}(θ̂ⁿ) + ξ̂ⁿ) = Mu(x̂^{n−1}(θ̂ⁿ)) − k − c₀(ξ̂ⁿ),

and since, by the sublinearity of c₀,

Mu(x + ξ) ≥ Mu(x) − c₀(ξ)  for any ξ ≥ 0,

we deduce:

(3.55)  u(x̂ⁿ(θ̂ⁿ)) ≤ Mu(x̂ⁿ(θ̂ⁿ)) − k < Mu(x̂ⁿ(θ̂ⁿ)).

Therefore θ̂^{n+1} > θ̂ⁿ a.s. if θ̂ⁿ < ∞.
:
Let us next show that:

(3.56)    θ̂^n → +∞ a.s.

Indeed we first remark that, defining:

ρ(δ) = sup { |u(x) − u(y)| : x, y ∈ 𝒪̄, |x − y| ≤ δ },

it follows from the continuity of u on 𝒪̄ that ρ is an increasing function which tends to 0 as δ → 0. But from (3.51) we have:

hence:

which implies:

(3.57)
Now, if θ̂^N < ∞, we have from (3.53):

(3.58)

If θ̂^N < ∞, we know that x^N(θ̂^N) ∈ 𝒪̄. But from (3.57) and the positivity of the components of ξ̂, it follows that:

(3.59)    |ξ̂^1 + ... + ξ̂^N| → +∞.

Let us consider the set Ω₀ ⊂ Ω such that:

(3.60)    Ω₀ = { lim_N θ̂^N = Λ < ∞ }.
We define the process:

then (3.58) becomes:

(3.61)

Now on Ω₀ we have:

If Φ is bounded adapted and θ is a stopping time, then:

∫₀^θ χ_{θ<∞} Φ(t) dw(t) = lim_N ∫₀^{θ∧N} Φ dw(t)   a.s.

Set Ω_N = (θ ≤ N); then Ω_N ⊂ Ω_{N+1} and Ω = (∪_N Ω_N) ∪ (θ = +∞). For ω ∈ Ω_{N₀} and N ≥ N₀, χ_{θ≤N} = χ_{θ≤N₀}; hence there is convergence on ∪_N Ω_N a.s.; on (θ = +∞), χ_{θ<∞} = 0, hence a.s. convergence towards a R.V. which is by definition ∫₀^θ χ_{θ<∞} Φ(t) dw(t).
But this contradicts (3.61), since it would follow that ξ̂^1 + ... + ξ̂^N remains bounded for ω fixed in Ω₀, which is impossible by (3.59). Hence P(Ω₀) = 0. This shows (3.56).

We are going to prove that (3.50) holds. Consider x^n, y^n as in (3.36), (3.37), with θ̂^n instead of θ^n. From lemma 3.1, property (3.9), it follows from the definition of θ̂^{n+1} that we get (3.37) with an equality. Next we have, if θ̂^n < ∞:
u(x^n(θ̂^{n+1} ∧ τ^n)) = (k + c₀(ξ̂^{n+1}) + u(x^{n+1}(θ̂^{n+1}))) χ_{θ̂^{n+1} < τ^n}.

But when θ̂^{n+1} < ∞, from (3.54), θ̂^{n+1} < S^n ≤ τ^n, and thus:

hence we get (3.43) with an equality instead of an inequality. Therefore we obtain (3.45) with an equality; hence, adding up, and from the positivity of u, letting N tend to +∞:

u(x) ≥ J_x(Ŵ),

which implies (3.50) since Ŵ is admissible. The proof of theorem 3.1 is then complete. ∎
In the study of Q.V.I. (2.2), we have introduced an iterative scheme u^n defined by (2.6). We have shown that u^n decreases and converges in C⁰(𝒪̄) towards the solution u of the Q.V.I. (cf. lemmas 2.2 and 2.6). One can give an interesting interpretation of the sequence u^n, which explains in a different way the convergence of u^n towards u.

We consider the general set up of §1.1, and the restricted class of impulse controls satisfying:

(3.63)    θ^p = +∞,  for p ≥ n.

We denote by:

(3.64)    𝒲^n = class of impulse controls satisfying (3.63).

Then we have:
Theorem 3.2. We make the assumptions of theorem 2.1. Then the sequence u^n defined by (2.6) satisfies:

(3.65)    u^{n−1}(x) = inf_{W ∈ 𝒲^n} J_x(W),  n ≥ 1.

Moreover, there exists an optimal control.
Proof. For n = 1, θ^1 = +∞; then x(t) = x^0(t), J_x(W) does not depend on W for W ∈ 𝒲^1, and (3.65) is nothing other than the interpretation of the solution u^0 of equation (2.4). Let us take n > 1. We apply the same method as in theorem 3.1, using the fact that u^{n+1} is a solution of a V.I. with obstacle Mu^n. Let W ∈ 𝒲^{n+1} and take 0 ≤ j ≤ n−1. We first have, as for (3.39):
(3.66)

Operating as for (3.45), we obtain:

(3.68)

We take the mathematical expectation and add up these relations, as j runs from 0 to n−1. We obtain:

Using the fact that θ^{n+1} = +∞ a.s., we obtain:

(3.69)    u^n(x) ≤ J_x(W),  for W ∈ 𝒲^{n+1}. ∎
Let us prove the existence of an optimal control. Define ξ̂^n(x), a Borel function ≥ 0, such that:

(3.70)    Mu^{n−1}(x) = k + c₀(ξ̂^n(x)) + u^{n−1}(x + ξ̂^n(x)),  ∀x ∈ 𝒪̄, n ≥ 1.
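A Borel selection ξ̂^n as in (3.70) is straightforward to tabulate in a discrete setting. The following sketch is purely illustrative and not from the text: it assumes a uniform one-dimensional grid, the linear (hence subadditive) cost c₀(ξ) = c·ξ, and invented sample values; it computes both Mu and a minimizing impulse at every node:

```python
def M_and_selector(u, k, c, dx):
    """Discrete analogue of  Mu(x) = k + inf_{xi >= 0} [ c0(xi) + u(x + xi) ]
    on a uniform grid, with the linear cost c0(xi) = c * xi.  Impulses keep
    x + xi inside the grid (mimicking x + xi in the closed domain).
    Returns the lists Mu and xi_hat (a minimizing impulse for each node)."""
    n = len(u)
    Mu, xi_hat = [], []
    for i in range(n):
        # candidate impulses move node i to any node j >= i
        costs = [k + c * (j - i) * dx + u[j] for j in range(i, n)]
        best = min(range(len(costs)), key=costs.__getitem__)
        Mu.append(costs[best])
        xi_hat.append(best * dx)
    return Mu, xi_hat

Mu, xi = M_and_selector([3.0, 1.0, 0.5, 2.0], k=1.0, c=0.1, dx=1.0)
```

By construction the returned pair satisfies the identity (3.70), Mu(x) = k + c₀(ξ̂(x)) + u(x + ξ̂(x)), at every grid node.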
Consider x^0 as:

dx^0 = σ(x^0) dw^0,   x^0(0) = x,

T_n^0 = inf { t : u^n(x^0(t)) = Mu^{n−1}(x^0(t)) },

then:

(3.71)    θ̂_n^1 = T_n^0 if T_n^0 < S^0,  +∞ otherwise;
          ξ̂_n^1 = ξ̂^n(x^0(θ̂_n^1)) if θ̂_n^1 < ∞,  arbitrary otherwise,

since Mu^{n−1}|_Γ = k. We have again x^0(θ̂_n^1) ∈ 𝒪̄ when θ̂_n^1 < ∞. Having defined the process x^{j−1} and θ̂_n^j, ξ̂_n^j for j = 1, ..., n−2, we define x^j, θ̂_n^{j+1}, ξ̂_n^{j+1} as follows:

(3.72)

Then:

(3.73)
CHAPTER VIII
390
and of course we take (3.74), with arbitrary values otherwise.
Let us prove that:

(3.75)

But:

and by the sublinearity of c₀:

u^{n−j}(x^j(θ̂_n^j)) ≤ u^{n−j}(x^j(θ̂_n^j) + ξ) + c₀(ξ) + k − k,

i.e.:

u^{n−j}(x^j(θ̂_n^j)) ≤ Mu^{n−j}(x^j(θ̂_n^j)) − k,

and since u^{n−j} ≤ u^{n−j−1}, we have:

(3.76)    u^{n−j}(x^j(θ̂_n^j)) ≤ Mu^{n−j−1}(x^j(θ̂_n^j)) − k,

which, by the definition of θ̂_n^{j+1} (cf. (3.73)), proves (3.75). We have thus constructed an admissible control W^n ∈ 𝒲^{n+1}. We check easily that:

which completes the proof of the desired result. ∎
Remark 3.4. The sets 𝒲^n satisfy 𝒲^n ⊂ 𝒲^{n+1}, hence clearly u^n ≥ u^{n+1} ≥ ... ≥ u. ∎
We consider lower solutions of the Q.V.I. as follows:

(4.1)    z ∈ H₀^1 ∩ L^∞,  z ≤ Mz.

Theorem 4.1. We make the assumptions of theorem 2.3. The solution u of the Q.V.I. (2.2) is a lower solution. It is the maximum element among lower solutions.
Proof. The fact that u is a lower solution follows from (2.2), by taking v = u as a test function. To prove that it is the maximum lower solution, we prove by induction that if z is a lower solution then:

(4.2)    z ≤ u^n,

where the sequence u^n has been defined in (2.6). We first check that z ≤ u^0. This has been proved in theorem 5.1 of chapter VII. Assume (4.2) is true at step n; then Mz ≤ Mu^n, and thus z is a lower solution for the problem defining u^{n+1}. From theorem 5.1, chapter VII, it follows that z ≤ u^{n+1}. Hence, since u^n ↓ u, the desired result follows. ∎
We consider then, as in theorem 5.2, chapter VII, the semi-group Φ(t) defined by:

(4.3)    Φ(t)φ(x) = E_x φ(x(t ∧ τ)),

then we can state the following.
Theorem 4.2. We make the assumptions of theorem 2.1, and:

(4.4)    a₀ = α > 0,

(4.5)    f ∈ B (Borel bounded on 𝒪̄).

Then the solution u of Q.V.I. (2.2) satisfies:

(4.6)    u|_Γ = 0,  u ≤ Mu,  u ≤ ∫₀^t e^{−αs} Φ(s)(f χ_𝒪) ds + e^{−αt} Φ(t) u,  ∀t ≥ 0,

and u is the maximum element of the set of functions satisfying (4.6).
Proof. u is the solution of the V.I. with obstacle ψ = Mu. Therefore, by theorem 5.2, chapter VII (relations (5.11), (5.12)), u satisfies (4.6).

Let us prove that it is the maximum element among the functions satisfying (4.6). This is proved using the decreasing scheme. Let ũ be an element of (4.6). We have ũ ≤ u^0, since:

Assuming that ũ ≤ u^n, then Mũ ≤ Mu^n; therefore ũ is an element of the set (5.11), (5.12), chapter VII, with ψ = Mu^n. But theorem 5.2, chapter VII, then implies ũ ≤ u^{n+1}. Therefore ũ ≤ u. ∎
We consider now the analogue of problem (5.27), chapter VII, with an implicit obstacle. Namely, let Φ(t) be a semi-group satisfying (5.19), ..., (5.24), chapter VII. We also consider:

(4.7)    L ∈ B,  t → Φ(t)L measurable from [0, ∞[ into C,  L ≥ 0.
Let now M be an operator such that:

(4.8)    M : C → C is Lipschitz, concave, and monotone increasing (i.e. Mφ₁ ≤ Mφ₂ if φ₁ ≤ φ₂);

(4.9)    M(0) ≥ k > 0,  α > 0.
We consider the set of functions:

(4.10)    u ∈ C,  u ≤ Mu,  u ≤ ∫₀^t e^{−αs} Φ(s) L ds + e^{−αt} Φ(t) u.

Theorem 4.3. We assume (5.19), ..., (5.24), chapter VII, and (4.7), (4.8), (4.9). Then the set of solutions of (4.10) is not empty and has a maximum element, which is a positive function.
Proof. Let z ∈ C and consider ζ to be the maximum solution of:

(4.11)

Since Mz ∈ C, ζ = Tz is well defined, according to theorem 5.3, chapter VII. It will be convenient to write:

(4.12)    T = σ ∘ M,

where σ(ψ) : C → C is the maximum element of the set (5.27), chapter VII. One easily checks that σ is monotone increasing and concave. From the assumption (4.8) on M it follows that:

(4.13)    T is increasing and concave.
Let next:

(4.14)    u^0 = ∫₀^∞ e^{−αt} Φ(t) L dt,

so that u^0 ∈ C, u^0 ≥ 0. Let 1 > μ > 0 be such that:

(4.15)    μ ‖u^0‖ ≤ k.

Then one has:

(4.16)    T(0) ≥ μ u^0.

Indeed, by (4.15) and assumption (4.9),

μ u^0 ≤ M(0),

and from (4.14):

μ u^0 = ∫₀^t e^{−αs} Φ(s) μL ds + e^{−αt} Φ(t) μu^0 ≤ ∫₀^t e^{−αs} Φ(s) L ds + e^{−αt} Φ(t) μu^0,

since L ≥ 0; hence (4.16). We have now a result identical to lemma 2.4:
(4.17)    Let z, z̃ ∈ C and γ ∈ [0,1] be such that z(x) − z̃(x) ≤ γ z(x), ∀x; then Tz(x) − Tz̃(x) ≤ γ(1−μ) Tz(x), ∀x.

Recalling that Tz ≤ u^0, ∀z, one checks, as in lemma 2.4, that the map T has at most one fixed point. As in lemma 2.6, one checks that the iterative scheme:
u^{n+1} = T u^n,  u^0 given by (4.14),

is decreasing and converges in C towards the maximum element of (4.10). Moreover T^n u^0 ≥ 0, hence u ≥ 0. This completes the proof of the desired result. ∎
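The scheme u^{n+1} = T u^n can be run verbatim in a finite-state caricature of the problem. Everything below is invented for illustration (a two-state chain P standing in for Φ(h), a switching operator M with fixed cost k, and arbitrary data); it is a sketch of the decreasing iteration, not the text's infinite-dimensional setting:

```python
import math

# invented two-state data
h, alpha, k = 0.1, 1.0, 0.5
beta = math.exp(-alpha * h)           # discount over one step of length h
P = [[0.9, 0.1], [0.2, 0.8]]          # transition matrix standing in for Phi(h)
L = [5.0, 0.0]                        # running cost, L >= 0

def apply_P(z):
    return [sum(P[i][j] * z[j] for j in range(2)) for i in range(2)]

def sigma(psi, iters=800):
    """sigma(psi): maximum element of { z : z <= psi, z <= hL + beta*P z },
    computed by iterating the contraction z -> min(psi, hL + beta*P z)."""
    z = list(psi)
    for _ in range(iters):
        Pz = apply_P(z)
        z = [min(psi[i], h * L[i] + beta * Pz[i]) for i in range(2)]
    return z

def M(u):
    """Obstacle operator: switch to the other state at fixed cost k.
    Lipschitz, increasing, concave (affine), with M(0) = k > 0 -- cf. (4.8), (4.9)."""
    return [k + u[1], k + u[0]]

# u0: discrete analogue of (4.14), the fixed point of z = hL + beta*P z
u = [0.0, 0.0]
for _ in range(5000):
    Pu = apply_P(u)
    u = [h * L[i] + beta * Pu[i] for i in range(2)]

seq = [u]
for _ in range(200):                  # u^{n+1} = T u^n = sigma(M u^n)
    u = sigma(M(u))
    seq.append(u)
# seq decreases componentwise towards the maximum element of (4.10)
```

In this particular example the obstacle binds at state 0 in the limit (u = Mu there), so the impulse region is non-empty.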
4.3. Discretization

The discretized version of (4.10) is the following:

(4.18)

We will assume, instead of (4.7):

(4.19)    L ∈ C,  L ≥ 0 (1).

We can state the following:
Theorem 4.4. Assume (5.19), ..., (5.24), chapter VII, and (4.8), (4.9), (4.19). Then problem (4.18) has one and only one solution u_h. Moreover u_h → u in C as h → 0, where u is the maximum element of (4.10).

Proof. Let us set σ_h(ψ) to be the solution of (5.49), chapter VII (discretized V.I.). Then σ_h : C → C, and:
ah i s i n c r e a s i n g and concave.
T h i s w i l l f o l l o w from t h e f o l l o w i n g f a c c :
(4.21)
if i
5
S
E
C v e r i f i e s 5 2 $ and
hL + e
-Uh
$(h)<
,
then
i 5 Oh($)
(1) As in §5.3, chapter VII, this is rather a matter of convenience.
Indeed, let:

S_h(z) = Min(ψ, hL + e^{−αh} Φ(h)z),

which is a contraction on C, and ζ_h = σ_h(ψ) is the fixed point of S_h. But the assumption (4.21) on ζ implies ζ ≤ S_h ζ; hence, iterating (since S_h is increasing), ζ ≤ ζ_h, which proves the desired property (4.21). Then (4.20) follows easily.
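The two facts used in this step — that S_h is a sup-norm contraction with ratio e^{−αh}, and that S_h is increasing, so any ζ with ζ ≤ ψ and ζ ≤ hL + e^{−αh}Φ(h)ζ stays below the fixed point ζ_h — can be checked numerically on a finite-state stand-in for Φ(h). The matrix, costs and obstacle below are invented for illustration only:

```python
import math

h, alpha = 0.1, 1.0
beta = math.exp(-alpha * h)        # the contraction ratio e^{-alpha h}
P = [[0.9, 0.1], [0.2, 0.8]]       # toy stochastic matrix standing in for Phi(h)
L = [1.0, 2.0]                     # running cost, L >= 0
psi = [0.8, 1.1]                   # obstacle

def S_h(z):
    """S_h z = Min(psi, hL + e^{-alpha h} Phi(h) z)."""
    Pz = [sum(P[i][j] * z[j] for j in range(len(z))) for i in range(len(z))]
    return [min(psi[i], h * L[i] + beta * Pz[i]) for i in range(len(z))]

def sigma_h(iters=500):
    """zeta_h = sigma_h(psi): the fixed point of S_h, by Picard iteration."""
    z = [0.0] * len(psi)
    for _ in range(iters):
        z = S_h(z)
    return z

zeta_h = sigma_h()
# e.g. the constant function zeta = 0.5 satisfies zeta <= psi and
# zeta <= hL + beta * P zeta here, so property (4.21) forces zeta <= zeta_h
```

The contraction estimate follows from |min(a, x) − min(a, y)| ≤ |x − y| together with the fact that Φ(h) here is a stochastic matrix.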
Defining next:

(4.22)    T_h = σ_h ∘ M,

the solutions of (4.18) are the fixed points of T_h. Now we have seen in (5.59), chapter VII, that:

hence there exists μ ∈ (0,1) such that μ ‖u_h^0‖ ≤ k. Then:

From this and the fact that T_h is increasing and concave, one deduces that T_h has one and only one fixed point, and if:

then:
(4.24)

Now, by theorem 5.4, chapter VII, we have:

‖u_h^n − u^n‖ → 0 as h → 0, for fixed n.

Since also ‖u^n − u‖ → 0, using the uniform estimate (4.24) one obtains u_h → u in C. ∎
COMMENTS ON CHAPTER VIII

1) The property given in lemma 2.4, which is very useful to study Q.V.I., is due to B. Hanouzet - J.L. Joly [1]. Unfortunately it seems to be limited to bounded functions. For unbounded functions, cf. A. Bensoussan - J.L. Lions [2].

2) In the proof of lemma 3.2, we do not need to know that the family P_T are the restrictions of a probability P.

3) We have introduced a decreasing scheme to approximate the solution of the Q.V.I. It is possible to define an increasing scheme starting from 0. The method of Hanouzet-Joly will show that it converges. It has also a probabilistic interpretation.

4) For degenerate Q.V.I., cf. J.L. Menaldi [1].

5) For the probabilistic interpretation of the Q.V.I. corresponding to general Markov processes, cf. M. Robin [1], A. Bensoussan - J.L. Lions [2].

6) A "probabilistic approach" to impulse control has been obtained by J.P. Lepeltier - B. Marchal [1].

7) The non stationary problems lead to parabolic Q.V.I., which are much more complicated (cf. F. Mignot - J.P. Puel [1], A. Bensoussan - J.L. Lions [2]).

8) For applications of Q.V.I. techniques in Mechanics, cf. A. Bensoussan - J.L. Lions [2], C. Baiocchi - A. Capelo [1].

9) One can introduce a penalized problem to the Q.V.I. (cf. A. Bensoussan - J.L. Lions [2]).
REFERENCES

S. AGMON - A. DOUGLIS - L. NIRENBERG
[1] Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions, I, Comm. Pure Appl. Math. 12 (1959), p. 623-727.

T. ALLINGER - S.K. MITTER
[1] New results on the innovation problem for non linear filtering, Stochastics, to be published.

R.F. ANDERSON - A. FRIEDMAN
[1] A quality control problem and quasi variational inequalities, J. Rat. Mech. Anal. 63 (1977), p. 205-252.
[2] Multi dimensional quality control problems and quasi-variational inequalities, Trans. Am. Math. Soc. 246 (1978), p. 31-76.

C. BAIOCCHI - A. CAPELO
[1] Disequazioni variazionali e quasi variazionali. Applicazioni a problemi di frontiera libera, Vol. I-II, Quaderni dell'Unione Matematica Italiana, Bologna, (1978).

L. BARTHELEMY
[1] Thèse de 3ème cycle, Besançon, (1980).

R. BELLMAN
[1] Dynamic Programming, Princeton University Press, Princeton, (1957).

A.V. BALAKRISHNAN
[1] Applied Functional Analysis, Springer Verlag, (1976).

A. BENSOUSSAN - J.L. LIONS
[1] Applications des inéquations variationnelles en contrôle stochastique, Dunod, Paris, (1978).
[2] Contrôle impulsionnel et inéquations quasi variationnelles, Dunod, Paris, (1982).
[3] Problèmes de temps d'arrêt optimal et inéquations variationnelles paraboliques, Applicable Analysis, 3 (1973), p. 267-294.
[4] Nouvelle formulation de problèmes de contrôle impulsionnel et applications, CRAS, Paris, (1973), p. 1189-1192.

A. BENSOUSSAN - J. FREHSE - U. MOSCO
[1] A stochastic control problem with quadratic growth Hamiltonian and the corresponding quasi variational inequality, to be published.

A. BENSOUSSAN - J.L. MENALDI
[1] Optimal stochastic control of diffusion process with jumps stopped at the exit of a domain, to appear in "Stochastic Differential Equations", ed. M.A. Pinsky, Series "Advances in Probability".

A. BENSOUSSAN - M. VIOT
[1] Optimal stochastic control for linear distributed parameter systems, Siam J. Control, (1975).

A. BENSOUSSAN - G. HURST - B. NASLUND
[1] Management Applications of Modern Control Theory, North Holland, (1974).

A. BENSOUSSAN - A. FRIEDMAN
[1] Non linear variational inequalities and differential games with stopping times, J. Funct. Analysis, 16 (1974), p. 305-352.

A. BENSOUSSAN
[1] Filtrage Optimal des Systèmes Linéaires, Dunod, Paris, (1971).
[2] Control of stochastic partial differential equations, Chap. IV of Distributed Parameter Systems, edit. by W.H. Ray and D.G. Lainiotis, Marcel Dekker, New York, (1978).

A. BENSOUSSAN - M. DELFOUR - S.K. MITTER
[1] Control of Infinite Dimensional Systems, MIT Press, to be published.

A. BENSOUSSAN - M. ROBIN
[1] On the convergence of the discrete time dynamic programming equation for general semi-groups, submitted to Siam Journal.

A. BENSOUSSAN - P.L. LIONS
[1] Control of random evolutions, Stochastics, to be published.

D.P. BERTSEKAS
[1] Dynamic Programming and Stochastic Control, Academic Press, (1976).

D.P. BERTSEKAS - S.E. SHREVE
[1] Stochastic Optimal Control: the Discrete Time Case, Academic Press, (1978).

J.M. BISMUT
[1] Théorie probabiliste du contrôle des diffusions, Memoirs of the American Mathematical Society, Vol. 4, no. 167, Jan. (1976).

P. BREMAUD
[1] Book on Point Processes, to be published.

H. BREZIS - L.C. EVANS
[1] A variational approach to the Bellman-Dirichlet equation for two elliptic operators, Arch. Rat. Mech. Anal., to be published.

R.S. BUCY - P.D. JOSEPH
[1] Filtering for stochastic processes with applications to guidance, Interscience Publishers, Wiley, New York, (1968).

J. CASTI
[1] Matrix Riccati equations, dimensionality reduction and generalized X and Y functions, Utilitas Mathematica, 6, p. 95-110, (1974).

P. CHARRIER - G. TROIANIELLO
[1] Un résultat d'existence et de régularité pour les solutions fortes d'un problème unilatéral d'évolution avec obstacle dépendant du temps, J. Math. Anal. Appl. 64 (1978).

P. CHARRIER
[1] Thèse, Univ. Bordeaux I, (1978).

H. CHERNOFF
[1] Optimal Stochastic Control, Sankhya, A 30 (1968).

R. CURTAIN - A.J. PRITCHARD
[1] Infinite Dimensional Linear Systems Theory, Lecture Notes in Control and Information Sciences, Springer Verlag, Berlin, (1978).

R. CURTAIN - P.L. FALB
[1] Stochastic differential equations in Hilbert space, J. Diff. Equations, 10 (1971), p. 412-430.

M. DAVIS
[1] On the existence of optimal policies in stochastic control, Siam J. of Control, no. 11, p. 587-594, (1973).

M. DAVIS - P. VARAIYA
[1] Dynamic programming conditions for partially observable stochastic systems, Siam J. Control, no. 11, p. 226-261, (1973).

C. DELLACHERIE - P.A. MEYER
[1] Probabilités et Potentiel, Hermann, (1977-1980).

J.L. DOOB
[1] Stochastic Processes, Wiley, New York, (1953).

G. DUVAUT - J.L. LIONS
[1] Les inéquations en mécanique et en physique, Dunod, Paris, (1972).

E.B. DYNKIN
[1] Theory of Markov Processes, Pergamon Press, New York, (1960).
[2] Markov Processes, Vol. I-II, Springer Verlag, Berlin, (1965).
[3] The optimum choice of the instant for stopping a Markov process, Dokl. Acad. Nauk USSR 150, p. 238-240, (1963).

E.B. DYNKIN - A.A. YUSHKEVICH
[1] Controlled Markov Processes, Springer Verlag, Berlin, (1979).

I. EKELAND - R. TEMAM
[1] Analyse Convexe et Problèmes Variationnels, Dunod, Paris, (1974).

N. EL KAROUI
[1] Méthodes Probabilistes en Contrôle Stochastique, to be published.

L.C. EVANS - A. FRIEDMAN
[1] Optimal stochastic switching and the Dirichlet problem for the Bellman equation, Trans. Am. Math. Soc., to be published.

W. FLEMING - R. RISHEL
[1] Optimal Deterministic and Stochastic Control, Springer Verlag, Berlin, (1975).

W. FLEMING - E. PARDOUX
[1] Optimal control for partially observed diffusions, Siam J. Control, to be published.

W. FLEMING - M. VIOT
[1] Some measure valued Markov processes in population genetics theory, Indiana J. Math. 28, p. 817-843, (1979).

W. FLEMING
[1] Non linear semi group for controlled partially observed diffusions, to be published.

J. FREHSE - U. MOSCO
[1] Variational inequalities with one sided irregular obstacles, to be published.

A. FRIEDMAN
[1] Stochastic Differential Equations and Applications, Vol. I-II, Academic Press, New York, (1975).
[2] Partial Differential Equations of Parabolic Type, Prentice Hall, Englewood Cliffs, New Jersey, (1964).

A. FRIEDMAN - L. CAFFARELLI
[1] Regularity of the solution of the quasi variational inequality for the impulse control problem, Comm. on P.D.E., 3 (8), (1978).

I.I. GIKHMAN - A.V. SKOROKHOD
[1] Controlled Stochastic Processes, Springer Verlag, Berlin, (1979).
[2] Stochastic Differential Equations, I, II, III, Springer Verlag, Berlin, (1972).

R. GLOWINSKI - J.L. LIONS - R. TREMOLIERES
[1] Analyse Numérique des Inéquations Variationnelles, Dunod, Paris, (1976).

M. GOURSAT - G. MAAREK
[1] Nouvelle approche des problèmes de gestion de stocks. Comparaison avec les méthodes classiques, Rapport no. 148, Mars 1976.

B. HANOUZET - J.L. JOLY
[1] Convergence uniforme des itérés définissant la solution d'une inéquation quasi variationnelle, CRAS, série A, 286, (1978).

R. JENSEN - P.L. LIONS
[1] Some asymptotic problems in fully non linear elliptic equations and stochastic control, to be published.

J.L. JOLY - U. MOSCO - G.M. TROIANIELLO
[1] On the regular solution of a quasi variational inequality connected to a problem of stochastic impulse control, J. Math. Anal. and Appl. 61, p. 357-369, (1977).

R.E. KALMAN
[1] A new approach to linear filtering and prediction problems, Trans. ASME Ser. D, J. Basic Engineering, 82, p. 35-45, (1960).

R.E. KALMAN - R.S. BUCY
[1] New results in linear filtering and prediction theory, Journal of Basic Engineering, p. 95-107, (1961).

T. KATO
[1] Perturbation Theory for Linear Operators, Springer Verlag, Berlin, (1966).

N. KRYLOV
[1] Controlled Diffusion Processes, Springer Verlag, Berlin, (1980).
[2] Control of a solution of a stochastic integral equation, Theory of Probability and its Applications, Vol. 17, no. 1, (1972), p. 114-131.

H. KUNITA - S. WATANABE
[1] On square integrable martingales, Nagoya Math. Journal, 30, p. 209-245, (1967).

H.J. KUSHNER
[1] Probability Methods in Stochastic Control and for Elliptic Equations, Acad. Press, (1977).

O.A. LADYZHENSKAYA - V.A. SOLONNIKOV - N.N. URAL'TSEVA
[1] Linear and Quasi Linear Equations of Parabolic Type, American Mathematical Society, (1968).

O.A. LADYZHENSKAYA - N.N. URAL'TSEVA
[1] Elliptic Linear and Quasi Linear Equations, Academic Press, (1968).

J.P. LEPELTIER - B. MARCHAL
[1] Thèse, Paris, (1980).

J.L. LIONS - E. MAGENES
[1] Problèmes aux limites non homogènes et applications, Vol. 1 et 2, Dunod, Paris, (1968), Vol. 3, (1970).

J.L. LIONS
[1] Equations Différentielles Opérationnelles, Springer Verlag, Berlin, (1961).
[2] Contrôle optimal de systèmes gouvernés par des équations aux dérivées partielles, Dunod, Paris, (1968). English translation by S.K. Mitter, Springer Verlag.

P.L. LIONS
[1] Control of diffusion processes in R^N, Comm. in Pure and Applied Maths., to be published.
[2] Résolution des problèmes généraux de Bellman-Dirichlet, C.R. Ac. Sc. Paris, série A, 287, (1978), p. 747-750. Detailed paper to be published in Acta Mathematica (June 1981).
[3] Equations de Hamilton-Jacobi-Bellman dégénérées, C.R. Ac. Sc. Paris, série A, 289, (1979), p. 329-332. Detailed paper to be published.
[4] Equations de Hamilton-Jacobi-Bellman, Séminaire Goulaouic-Schwartz, 1979-1980, Ecole Polytechnique.

P.L. LIONS - J.L. MENALDI
[1] Problème de Bellman avec le contrôle dans les coefficients de plus haut degré, C.R.A.S., série A, tome 287, (1978), p. 409-502; voir aussi Cahiers Math. Décision no. 8023 and 8024, Univ. Paris Dauphine, (1980).

P.L. LIONS - B. MERCIER
[1] Approximation numérique des équations de Hamilton-Jacobi-Bellman, to be published in RAIRO.

H.P. McKEAN
[1] Stochastic Integrals, Academic Press, New York, (1969).

J.L. MENALDI
[1] Le principe de séparation pour le problème de temps d'arrêt optimal, Stochastics, Vol. 3, no. 1, (1979), p. 47-59.
[2] Sur les problèmes de temps d'arrêt, contrôle impulsionnel et continu correspondant à des opérateurs dégénérés, Thèse, Paris, (1980).

J.C. MIELLOU
[1] A mixed relaxation algorithm applied to Q.V.I., Lecture Notes Comp. Sc. 41, Springer, (1976), p. 192-199.

F. MIGNOT - J.P. PUEL
[1] Solution maximum de certaines inéquations d'évolution paraboliques et inéquations quasi variationnelles paraboliques, CRAS 280, série A, (1975), p. 259.

C. MIRANDA
[1] Equazioni alle derivate parziali di tipo ellittico, Springer Verlag, Berlin, (1955).

C.B. MORREY
[1] Second order elliptic systems of differential equations, Contributions to the theory of partial differential equations, Ann. of Math. Studies, no. 33, Princeton University Press, (1954), p. 101-159.

U. MOSCO
[1] Implicit variational problems and quasi variational inequalities, in "Non Linear Operators and Calculus of Variations", Lecture Notes in Math. 543.

O. NAKOULIMA
[1] Sur une notion de solution faible pour les inéquations variationnelles d'évolution à deux obstacles, CRAS, série A, 284, (1977), p. 1037-1040.

J. NEVEU
[1] Calcul des probabilités, Masson, (1964).

M. NISIO
[1] On a non linear semi group attached to optimal stochastic control, Publ. RIMS, Kyoto Univ., 13, (1976), p. 513-537.
[2] On stochastic optimal controls and envelope of Markovian semi-groups, Proc. of Int. Symp. Kyoto, p. 297-325, (1976).

E. PARDOUX
[1] Stochastic partial differential equations and filtering of diffusion processes, Stochastics, 3, p. 127-167, (1979).
[2] Equations aux dérivées partielles non linéaires monotones, Thèse, Paris, (1975).

K.R. PARTHASARATHY
[1] Probability Measures on Metric Spaces, Academic Press, New York, (1967).

P. PRIOURET
[1] Problèmes de Martingales, Lecture Notes in Math., no. 390, Springer Verlag, (1974).

J.P. QUADRAT
[1] Sur l'identification et le contrôle de systèmes dynamiques stochastiques, Thèse, Paris, (1981).
[2] Existence de solution et algorithme de résolution numérique de problème de contrôle optimal de diffusion stochastique dégénérée ou non, Siam J. Control and Opt., Vol. 18, no. 2, March (1980).

R.W. RISHEL
[1] Necessary and sufficient dynamic programming conditions for continuous time stochastic optimal control, Siam J. Control, (1970), p. 559-571.

M. ROBIN
[1] Contrôle impulsionnel des processus de Markov, Thèse, Paris, (1977).

A.N. SHIRYAEV
[1] Optimal Stopping Rules, Springer Verlag, New York, (1978).

M. SORINE
[1] Thèse, Paris, to be published.

G. STAMPACCHIA
[1] Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus, Ann. Inst. Fourier, Grenoble, 15, 1 (1965), p. 189-258.

C. STRIEBEL
[1] Martingale conditions for optimal control of continuous time stochastic systems, Int. Workshop on Stochastic Filtering and Control, Los Angeles, (1974).
[2] Optimal Control of Discrete Time Stochastic Systems, Springer Lecture Notes in Economics and Mathematical Systems, Vol. 110, Springer Verlag, Berlin, (1975).

D. STROOCK - S.R.S. VARADHAN
[1] Diffusion Processes, Springer Verlag, Berlin, (1979).

H. SUSSMANN
[1] On the gap between deterministic and stochastic ordinary differential equations, Ann. of Prob., 6, p. 19-41, (1978).

L. TARTAR
[1] Sur l'étude directe d'équations non linéaires intervenant en théorie du contrôle optimal, J. Funct. Analysis, 17, (1974), p. 1-47.
[2] I.Q.V. abstraites, CRAS, série A, 278, (1974), p. 1193-1196.

R. TEMAM
[1] Sur l'équation de Riccati associée à des opérateurs non bornés en dimension infinie, J. Functional Analysis, 7, p. 85-115, (1971).

F. TREVES
[1] Topological Vector Spaces, Distributions and Kernels, Academic Press, (1967).

S.G. TZAFESTAS
[1] Distributed parameter state estimation, chap. 3 of Distributed Parameter Systems, ed. by W.H. Ray and D.G. Lainiotis, Marcel Dekker, New York, (1978).

P. VAN MOERBEKE
[1] On optimal stopping and free boundary problems, Arch. Rat. Mech., (1976).

M. VIOT
[1] Solutions faibles d'équations aux dérivées partielles stochastiques, Thèse, Paris, (1976).
[2] Théorème d'optimalité pour des systèmes stochastiques où la commande est adaptée à l'état, Revue d'Informatique et de Recherche Opérationnelle, 2, no. 10, p. 15-28, (1968).

W.M. WONHAM
[1] On the separation theorem of stochastic control, Siam J. Control, Vol. 6, no. 2, (1968).

T. YAMADA - S. WATANABE
[1] On the uniqueness of solutions of stochastic differential equations, J. Math. Kyoto Univ. 11-1, (1971), p. 155-167.

K. YOSIDA
[1] Functional Analysis, Springer Verlag, Berlin, (1965).