Answers for Chapter 1

3.11. Let Y_s(\omega) = 1 if s < t and x < \omega(t-s) < y, and 0 otherwise, and let \bar Y_s(\omega) = 1 if s < t and 2a - y < \omega(t-s) < 2a - x, and 0 otherwise. Symmetry of the normal distribution implies E_a Y_s = E_a \bar Y_s, so if we let S = \inf\{s < t : B_s = a\} and apply the strong Markov property, then on \{S < \infty\} the conditional expectations given \mathcal F_S of Y and \bar Y applied to the shifted path agree. Taking expected values now gives the desired result.

4.1. We begin by noting that symmetry and (2.5) imply

P_0(R \ge 1+t) = 2 \int_0^\infty p_1(0,y) \int_t^\infty P_y(T_0 = s)\, ds\, dy = 2 \int_t^\infty \int_0^\infty p_1(0,y)\, P_y(T_0 = s)\, dy\, ds

by Fubini's theorem, so the inner integral gives the density of R. Since P_y(T_0 = t) = P_0(T_y = t), (4.1) gives

P_0(R = 1+t) = 2 \int_0^\infty \frac{1}{\sqrt{2\pi}} e^{-y^2/2} \cdot \frac{y}{\sqrt{2\pi t^3}} e^{-y^2/2t}\, dy = \frac{1}{\pi t^{3/2}} \int_0^\infty y\, e^{-y^2(1+t)/2t}\, dy = \frac{1}{\pi t^{1/2}(1+t)}

4.2. Translation invariance implies that if r < s then

E_{(x,r)} f(B_s - B_r) = E_{(0,0)} f(B_{s-r})

Applying the strong Markov property to Y(\omega) = f(\omega(s) - \omega(0)) at time T_r and using the last identity gives the desired result.

4.3. Let u_\lambda(a) = E_0(\exp(-\lambda T_a)). (4.4) and (4.5) imply

u_\lambda(a)\, u_\lambda(b) = u_\lambda(a+b) \qquad u_\lambda(a) = u_{\lambda b^2}(a/b)

As before the first equation implies u_\lambda(a) = \exp(c_\lambda a), while the second implies c_{\lambda b^2} = b\, c_\lambda, so c_\lambda = c_1 \sqrt{\lambda}, and this implies c_\lambda = -\sqrt{2\lambda}.

4.5. Let M_1 = \max_{0 \le s \le 1} B_s. Exercise 3.10 implies that with probability 1, a \to T_a is discontinuous at a = M_1. Since M_1 < \infty a.s., this shows P_0( a \to T_a \text{ is discontinuous in } [0,n] ) \to 1 as n \to \infty, and the result follows from the hint.

4.6. Let M_1 = \max_{0 \le s \le 1} B^1_s and T = \inf\{t > 1 : B^1_t > M_1\}. Since B^1_1 < M_1, the strong Markov property and (4.3) imply that with probability one B^2_T \ne B^2_{T_{M_1}}, so s \to C_s is discontinuous at s = M_1. This shows P_0( s \to C_s \text{ is discontinuous in } [0,n] ) \to 1 as n \to \infty, and the result follows from the hint.
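The identity E_0 \exp(-\lambda T_a) = \exp(-a\sqrt{2\lambda}) obtained above can be cross-checked numerically against the hitting-time density a(2\pi t^3)^{-1/2} e^{-a^2/2t} quoted in the solution of 4.1. A small sketch (the function names are mine, not the text's; pure standard library):

```python
import math

def hitting_density(a, t):
    # P_0(T_a = t) = a * (2*pi*t^3)^(-1/2) * exp(-a^2/(2t)), as in (4.1)
    return a / math.sqrt(2 * math.pi * t**3) * math.exp(-a * a / (2 * t))

def laplace(a, lam, dt=1e-4, t_max=60.0):
    # Midpoint Riemann sum for E_0 exp(-lam * T_a)
    total, t = 0.0, dt / 2
    while t < t_max:
        total += math.exp(-lam * t) * hitting_density(a, t) * dt
        t += dt
    return total

approx = laplace(1.0, 0.7)
exact = math.exp(-math.sqrt(2 * 0.7))
```

With a fine grid the two quantities agree to several decimal places, which is a useful sanity check on the sign and the factor of 2 in the exponent.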
4.7. If we let \bar f(y) = f(\bar y), where \bar y is the reflection of y through the hyperplane \{y_d = 0\}, then it follows from the strong Markov property and symmetry of Brownian motion that

E_x( f(B_t); \tau \le t ) = E_x[ E_{B_\tau} f(B_{t-\tau}); \tau \le t ] = E_x[ E_{B_\tau} \bar f(B_{t-\tau}); \tau \le t ] = E_x[ \bar f(B_t); \tau \le t ] = E_x \bar f(B_t)

The last equality holds since \bar f(y) = 0 when y_d \ge 0, and the desired formula follows as in (4.8).

Chapter 2

1.1. If A \in \mathcal F_a and a < b then H_n(s,\omega) = 1_{[a+1/n,\, b+1/n)}(s)\, 1_A(\omega) converges to 1_{(a,b]}(s)\, 1_A(\omega) as n \to \infty.

2.1. The martingale property of X^S_t w.r.t. a filtration \mathcal G_t, where S is optional, is that

E(X^S_t; A) = E(X^S_s; A) \quad \text{for all } A \in \mathcal G_s

(3.5) in Chapter 1 implies that \mathcal F_{S \wedge s} \subset \mathcal F_s, so if X^S_t is a martingale with respect to \mathcal F_s it is a martingale with respect to \mathcal F_{S \wedge s}. To argue the other direction let A \in \mathcal F_s. We claim that A \cap \{S > s\} \in \mathcal F_{S \wedge s}. To prove this we have to check that for all r

(A \cap \{S > s\}) \cap \{S \wedge s \le r\} \in \mathcal F_r

but this is \emptyset for r < s and true for r \ge s. If X^S_t is a martingale with respect to \mathcal F_{S \wedge s} it follows that if A \in \mathcal F_s then

E(X^S_t; A \cap \{S > s\}) = E(X^S_s; A \cap \{S > s\})

On the other hand, on \{S \le s\} we have X^S_t = X^S_s, so

E(X^S_t; A \cap \{S \le s\}) = E(X^S_s; A \cap \{S \le s\})

Adding the last two equations gives the desired conclusion.

2.2. To check integrability we note that

E_x \big| \log |B_t| \big| = \int (2\pi t)^{-d/2} e^{-|y-x|^2/2t}\, \big|\log|y|\big| \, dy < \infty

since |\log|y|| is integrable near 0 and e^{-|y-x|^2/2t} takes care of the behavior for large y. To show that E_x X_t \to \infty we observe that for any R < \infty

E_x X_t \ge (2\pi t)^{-d/2} \int_{|y| \le R} \log|y| \, dy + (\log R)\, P_x(|B_t| > R)

The integral is convergent and, as we showed in Example 2.1, P_x(|B_t| > R) \to 1 as t \to \infty, so the desired result follows from the fact that R is arbitrary.

2.3. Let T_n = \inf\{t : |X_t| > n\}. By (2.3) this sequence will reduce X_t. Note that |X^{T_n}_t| \le n. Jensen's inequality for conditional expectation (see e.g., (1.1.d) in Chapter 4 of Durrett (1995)) implies

E( \varphi(X^{T_n}_t) \mid \mathcal F_{T_n \wedge s} ) \ge \varphi( E(X^{T_n}_t \mid \mathcal F_{T_n \wedge s}) ) = \varphi(X^{T_n}_s)

2.4. Let T_n \uparrow \infty be a sequence which reduces X, and S_n = (T_n - R)^+. If s < t then the definitions involved (note S_n = T_n - R on \{S_n > 0\} = \{T_n > R\}), the optional stopping theorem, and the fact that 1_{\{T_n > R\}} \in \mathcal F_R \subset \mathcal F_{R+s} imply

E( Y_{t \wedge S_n} 1_{\{S_n > 0\}} \mid \mathcal G_s ) = E( X_{(R+t) \wedge T_n} 1_{\{T_n > R\}} \mid \mathcal F_{R+s} ) = 1_{\{T_n > R\}} E( X_{(R+t) \wedge T_n} \mid \mathcal F_{R+s} ) = X_{(R+s) \wedge T_n} 1_{\{T_n > R\}} = Y_{s \wedge S_n} 1_{\{S_n > 0\}}

2.5. Let T_n be a sequence that reduces X. The martingale property shows that if A \in \mathcal F_0 then

E( X_{t \wedge T_n} 1_{\{T_n > 0\}}; A ) = E( X_0 1_{\{T_n > 0\}}; A )

Letting n \to \infty and using Fatou's lemma on the left and the dominated convergence theorem on the right gives

E(X_t; A) \le \liminf_{n \to \infty} E( X_{t \wedge T_n} 1_{\{T_n > 0\}}; A ) = \lim_{n \to \infty} E( X_0 1_{\{T_n > 0\}}; A ) = E(X_0; A)

Taking A = \Omega we see that E X_t \le E X_0 < \infty. Since this holds for all A \in \mathcal F_0 we get E(X_t \mid \mathcal F_0) \le X_0. To replace 0 by s apply the last conclusion to Y_t = X_{s+t}, which by Exercise 2.4 is a local martingale.
3.1. Since t \to V_t is increasing, we only have to rule out jump discontinuities. Suppose there is a t and an \varepsilon > 0 so that V(u) - V(s) > 3\varepsilon whenever s < t < u. It is easy to see that V(u) - V(s) is the variation of X over [s,u]. Let s_0 < t < u_0. We will now derive a contradiction by showing that the variation over [s_0, u_0] is infinite. Pick \delta > 0 so that if |r - t| < \delta then |X_t - X_r| < \varepsilon. Assuming s_n and u_n have been defined, pick a partition of [s_n, u_n] not containing t with mesh < \delta and variation > 3\varepsilon, let s_{n+1} be the largest point in the partition < t and u_{n+1} the smallest point in the partition > t. Since |X(u_{n+1}) - X(s_{n+1})| < 2\varepsilon, our construction shows that the variation of X over [s_n, u_n] - [s_{n+1}, u_{n+1}] is always > \varepsilon, so the total variation over [s_0, u_0] is infinite, a contradiction which implies t \to V_t is continuous.

3.2. (X + Y)_t Z_t - \langle X, Z\rangle_t - \langle Y, Z\rangle_t is a local martingale.

3.3. -X_0 Z_t is a local martingale, so if Y_t = -X_0 then \langle Y, Z\rangle_t = 0 and the desired result follows from Exercise 3.2.

3.4. (aX_t)(bY_t) - ab\langle X, Y\rangle_t = ab( X_t Y_t - \langle X, Y\rangle_t ) is a local martingale.

3.5. If T is such that X^T is a bounded martingale it follows from (3.7) and the definition of \langle X\rangle that \langle X^T\rangle = \langle X\rangle^T. For a general stopping time T, let T_n = \inf\{t : |X_t| > n\}, note that the last result implies \langle X^{T \wedge T_n}\rangle = \langle X\rangle^{T \wedge T_n}, and then let n \to \infty. The result for the covariance process follows immediately from the definition and the result for the variance process.

3.6. Since X_t^2 - \langle X\rangle_t is a martingale and |X_t| \le M for all t,

E\langle X\rangle_t = E X_t^2 - E X_0^2 \le M^2

so letting t \to \infty we have E\langle X\rangle_\infty \le M^2. This shows that X_t^2 - \langle X\rangle_t is dominated by an integrable random variable and hence is uniformly integrable.

3.7. The optional stopping theorem implies E(Y_t \mid \mathcal F_a) = Y_a when a \le t \le b. There are four cases:
1. s < t \le a. Y_t = Y_0 \in \mathcal F_0, so E(Y_t \mid \mathcal F_s) = E(Y_0 \mid \mathcal F_s) = Y_0 = Y_s.
2. s < a \le t \le b. E(Y_t \mid \mathcal F_s) = E( E(Y_t \mid \mathcal F_a) \mid \mathcal F_s ) = E(Y_a \mid \mathcal F_s) = E(Y_0 \mid \mathcal F_s) = Y_0 = Y_s.
3. s < b, t > b. Y_t = Y_b, so this reduces to case 2 if s < a, or to our assumption if a \le s < b.
4. b \le s < t. E(Y_t \mid \mathcal F_s) = E(Y_b \mid \mathcal F_s) = Y_b = Y_s.

3.8. By stopping at T_n = \inf\{t : |X_t| > n\} and using Exercise 3.5 it suffices to prove the result when X is a bounded martingale. Using Exercise 3.7, we can suppose without loss of generality that S = 0. Using the L^2 maximal inequality on the martingale X_{t \wedge T} and the optional stopping theorem on the martingale X_t^2 - \langle X\rangle_t it follows that

E\Big( \sup_{t \le n} X_{t \wedge T}^2 \Big) \le 4 E( X_{n \wedge T}^2 ) = 4 E\langle X\rangle_{n \wedge T} = 0

where the last equality follows from the optional stopping theorem and Exercise 3.6. Letting n \to \infty now gives the desired conclusion.

3.9. This is an immediate consequence of (3.8).

4.1. It is simply a matter of patiently checking all the cases. If we fix \omega then t \to \langle X\rangle_t defines a measure, and the triangle inequality for L^2 of that measure implies the desired result.

4.2. If we fix s, then the optional stopping theorem implies E(X_{s+t} \mid \mathcal F_{s+r}) = X_{s+r}, and X_s \in \mathcal F_s \subset \mathcal F_{s+r}, so E(Y_t \mid \mathcal F_{s+r}) = Y_r, i.e., Y is a martingale. To prepare for the proof of the second result we note that imitating the proof of (2.4)

E( (X_{s+t} - X_s)^2 \mid \mathcal F_{s+r} ) = E( (X_{s+t} - X_{s+r})^2 \mid \mathcal F_{s+r} ) + (X_{s+r} - X_s)^2 = E( X_{s+t}^2 - X_{s+r}^2 \mid \mathcal F_{s+r} ) + (X_{s+r} - X_s)^2

Let Z_t = (X_{s+t} - X_s)^2 - ( \langle X\rangle_{s+t} - \langle X\rangle_s ). Using the last equality we get

E(Z_t \mid \mathcal F_{s+r}) = E( X_{s+t}^2 - X_{s+r}^2 - (\langle X\rangle_{s+t} - \langle X\rangle_s) \mid \mathcal F_{s+r} ) + (X_{s+r} - X_s)^2
= E( X_{s+t}^2 - \langle X\rangle_{s+t} \mid \mathcal F_{s+r} ) - X_{s+r}^2 + \langle X\rangle_s + (X_{s+r} - X_s)^2
= -\langle X\rangle_{s+r} + \langle X\rangle_s + (X_{s+r} - X_s)^2 = Z_r

since X_t^2 - \langle X\rangle_t is a martingale. This shows that Z_t is a martingale, so \langle Y\rangle_t = \langle X\rangle_{s+t} - \langle X\rangle_s.
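The role of \langle X\rangle as the limit of sums of squared increments — the fact from (3.8) used again in 6.4 below — is easy to see in a quick simulation: for Brownian motion \langle B\rangle_t = t. A sketch (my own illustration, not the text's):

```python
import math
import random

def quadratic_variation(t=1.0, n=1 << 14, seed=7):
    # Simulate one Brownian path on [0, t] with n steps and return the sum of
    # squared increments, which should be close to <B>_t = t.
    rng = random.Random(seed)
    dt = t / n
    return sum(rng.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(n))

qv = quadratic_variation()
```

The sum has mean t and variance 2t^2/n, so with n = 2^{14} steps a single path already lands within a couple of percent of t.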
4.3. X \in \mathcal M^2 implies E \sup_t X_t^2 < \infty and hence E X_0^2 < \infty. To check the other condition let T_n be a sequence of stopping times \uparrow \infty so that X^{T_n} is bounded and \langle X\rangle_{T_n} \le n. The optional stopping theorem implies

E X_{t \wedge T_n}^2 - E\langle X\rangle_{t \wedge T_n} = E X_0^2

Letting n \to \infty and then t \to \infty it follows that E X_\infty^2 - E\langle X\rangle_\infty = E X_0^2. To prove the converse rearrange the displayed equation to conclude

E X_{t \wedge T_n}^2 = E X_0^2 + E\langle X\rangle_{t \wedge T_n} \le E X_0^2 + E\langle X\rangle_\infty

so X is L^2 bounded.

4.4. The triangle inequality implies \|x\| \le \|x_n\| + \|x - x_n\|, so

\|x\| \le \liminf_n \|x_n\|

For the other inequality note that \|x_n\| \le \|x\| + \|x_n - x\|, so \|x\| \ge \limsup_n \|x_n\|.

4.5. Let H_n \in b\Pi_1 with \|H_n - H\|_X \to 0. Since \|(H_n \cdot X) - (H \cdot X)\|_2 \to 0, using Exercise 4.4, the isometry property for b\Pi_1, and Exercise 4.4 again,

\|H \cdot X\|_2 = \lim_n \|H_n \cdot X\|_2 = \lim_n \|H_n\|_X = \|H\|_X

5.1. If L, L' \in \mathcal M^2 have the desired properties then taking N = L - L' we have \langle L, L - L'\rangle = \langle L', L - L'\rangle, so \langle L - L'\rangle = 0. Using Exercise 3.8 now it follows that L - L' is constant and hence must be \equiv 0.

5.2. If we let H_s = 1 and K_s = 1_{(s \le T)} then X = H \cdot X and Y^T = K \cdot Y. (It is for this reason we have assumed X_0 = Y_0 = 0.) Using (5.4) now it follows that

\langle X, Y^T\rangle_t = \int_0^t K_s \, d\langle X, Y\rangle_s = \langle X, Y\rangle_{t \wedge T}

5.3. If we let H_s = 1_{(s \le T)} and K_s = 1_{(s > T)} then X^T = H \cdot X and Y - Y^T = K \cdot Y. Using (5.4) now it follows that \langle X^T, Y - Y^T\rangle_t \equiv 0, so the desired result follows from (3.11).

6.1. Write H_s = H^1_s + H^2_s and K_s = K^1_s + K^2_s, where H^1_s = H_s 1_{(S < s \le T)} and K^1_s = K_s 1_{(S < s \le T)}. Clearly (H^1 \cdot X)_t = (K^1 \cdot Y)_t for all t. Since H^2_s = K^2_s = 0 for S \le s \le T, (5.4) implies that \langle H^2 \cdot X\rangle and \langle K^2 \cdot Y\rangle are constant on [S, T], and it follows from Exercise 3.8 that (H^2 \cdot X)_t and (K^2 \cdot Y)_t are constant on [S, T]. Combining this with the first result and using (4.3.b) gives the desired conclusion.

6.2. Stop X at T_n = \inf\{t : |X_t| > n \text{ or } \int_0^t H_s^2\, d\langle X\rangle_s > n\} to get a bounded martingale and an integrand in \Pi_2(X). Exercise 4.5 implies

E (H \cdot X)_{T_n}^2 = E \int_0^{T_n} H_t^2 \, d\langle X\rangle_t

Using the L^2 maximal inequality it follows that

E \sup_{t \le T_n} (H \cdot X)_t^2 \le 4 E \int_0^{T_n} H_t^2 \, d\langle X\rangle_t

Letting n \to \infty now we can conclude E \sup_t (H \cdot X)_t^2 < \infty. With this in hand we can start with

E (H \cdot X)_{T_n}^2 = E \int_0^{T_n} H_t^2 \, d\langle X\rangle_t

and let n \to \infty to get the desired result.

6.3. By stopping we can reduce to the case in which X is a bounded continuous martingale and \langle X\rangle_t \le N for all t, which implies H \in \Pi_2(X). If we replace S and T by S_n and T_n which stop at the next dyadic rational and let H^n_s = C for S_n < s \le T_n, then H^n \in \Pi_1 and it follows easily from the definition of the integral in Step 2 of Section 2.4 that

(H^n \cdot X)_t = C( X_{t \wedge T_n} - X_{t \wedge S_n} )

To complete the proof now we observe that if |C(\omega)| \le K then

\|H^n - H\|_X^2 \le K^2 \{ E( \langle X\rangle_{T_n} - \langle X\rangle_T ) + E( \langle X\rangle_{S_n} - \langle X\rangle_S ) \} \to 0
as n \to \infty by the bounded convergence theorem. Using Exercise 4.4 and (4.3.b) it follows that \|(H^n \cdot X) - (H \cdot X)\|_2 \to 0, so \int H^n_s \, dX_s \to \int H_s \, dX_s, while C( X_{t \wedge T_n} - X_{t \wedge S_n} ) \to C( X_{t \wedge T} - X_{t \wedge S} ), and the desired result follows.

6.4. Writing the telescoping sum and expanding,

X_t^2 - X_0^2 = \sum_i X^2(t^n_{i+1}) - X^2(t^n_i) = \sum_i \{ X(t^n_{i+1}) - X(t^n_i) \}^2 + \sum_i 2 X(t^n_i) \{ X(t^n_{i+1}) - X(t^n_i) \}

(3.8) implies that the first term converges in probability to \langle X\rangle_t. The previous exercise shows the second term converges in probability to \int_0^t 2 X_s \, dX_s, and the desired result follows.

6.5. The difference between evaluating at the right and the left end points is

2 \sum_i \{ X(t^n_{i+1}) - X(t^n_i) \}^2 \to 2\langle X\rangle_t

6.6. Let C_{k,n} = B( (k + 1/2) 2^{-n} t ) - B( k 2^{-n} t ) and D_{k,n} = B( (k+1) 2^{-n} t ) - B( (k + 1/2) 2^{-n} t ). In view of (6.7) it suffices to show that as n \to \infty

S_n = \sum_k C_{k,n} ( C_{k,n} + D_{k,n} ) \to t/2 \quad \text{in probability}

E C_{k,n}^2 = 2^{-n} t / 2 and E C_{k,n} D_{k,n} = 0, so E S_n = t/2. To complete the proof now we note that the terms in the sum are independent, so

E( S_n - t/2 )^2 = \sum_k E( C_{k,n}^2 + C_{k,n} D_{k,n} - 2^{-n} t/2 )^2 \le 2^n \cdot c\, t^2\, 2^{-2n} \to 0

and then use Chebyshev's inequality.
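Both limits — the right-minus-left endpoint difference 2\sum(\Delta B)^2 \to 2t from 6.5 and the midpoint correction S_n \to t/2 from 6.6 — show up clearly on a single simulated path. A sketch (my own illustration; the grid size and seed are arbitrary):

```python
import math
import random

def endpoint_sums(t=1.0, n_exp=12, seed=11):
    # One Brownian path on a dyadic grid: C[k] and D[k] are the two
    # half-increments of the k-th full step, as in the solution of 6.6.
    rng = random.Random(seed)
    m = 1 << n_exp
    dt_half = t / (2 * m)
    C = [rng.gauss(0.0, math.sqrt(dt_half)) for _ in range(m)]
    D = [rng.gauss(0.0, math.sqrt(dt_half)) for _ in range(m)]
    # Midpoint-evaluation correction S_n = sum C_k (C_k + D_k) -> t/2
    s_mid = sum(c * (c + d) for c, d in zip(C, D))
    # Right-minus-left endpoint difference 2 * sum (Delta B)^2 -> 2t
    s_rl = 2 * sum((c + d) ** 2 for c, d in zip(C, D))
    return s_mid, s_rl

s_mid, s_rl = endpoint_sums()
```

The variance bounds in the solutions explain why a single fine path suffices: both sums concentrate at rate 2^{-n}.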
6.7. (6.7) implies that if t^n_i is a sequence of partitions of [0, t] with mesh \to 0 we have

\sum_i h(t^n_i) \{ B(t^n_{i+1}) - B(t^n_i) \} \to \int_0^t h_s \, dB_s

in probability. The left-hand side has a normal distribution with mean 0 and variance

\sum_i h(t^n_i)^2 ( t^n_{i+1} - t^n_i ) \to \int_0^t h_s^2 \, ds

as n \to \infty by the convergence of the Riemann approximating sums for the integral. Convergence of the means and variances for a sequence of normal random variables is sufficient for them to converge in distribution (consider the characteristic functions), and the desired result follows.

7.2. By stopping it suffices to prove the result when |A_t| \le M. If we let

G^n_s = f'( c( A_{t^n_i}, A_{t^n_{i+1}} ) ) \quad \text{for } s \in (t^n_i, t^n_{i+1}]

then as in (7.4) we get

f(A_t) - f(A_0) = \int_0^t G^n_s \, dA_s

As the mesh of the partition \to 0, G^n_s \to f'(A_s) uniformly on [0, t], so the desired conclusion follows from Exercise 7.1.

8.1. Clearly M_t + M'_t is a continuous local martingale, and A_t + A'_t is continuous, adapted, locally of bounded variation, and has A_0 + A'_0 = 0.
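The variance statement in 6.7 rests on convergence of the Riemann sums \sum_i h(t^n_i)^2 (t^n_{i+1} - t^n_i) to \int_0^t h_s^2\, ds. For a concrete h this is easy to confirm numerically; with h(s) = s on [0,1] the limit is 1/3 (a sketch, not from the text):

```python
def riemann_sum_sq(h, t=1.0, n=10000):
    # Left-endpoint Riemann sum of h(s)^2 over [0, t] with n equal steps,
    # i.e., the variance of the normal approximating sum in 6.7.
    dt = t / n
    return sum(h(i * dt) ** 2 * dt for i in range(n))

approx = riemann_sum_sq(lambda s: s)  # should approach 1/3
```

For smooth h the error of the left-endpoint sum is O(1/n), which is already invisible at this resolution.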
Chapter 3

1.1. The optional stopping theorem implies that

x = E_x B_T = a P_x(B_T = a) + b (1 - P_x(B_T = a))

Solving gives P_x(B_T = a) = (b - x)/(b - a).

1.2. From (1.6) it follows that for any s and M, P( \sup_{t \ge s} B_t \ge M ) = 1. This implies P( \sup_{t \ge s} B_t = \infty ) = 1 for all s and hence \limsup_{s \to \infty} B_s = \infty a.s.

1.3. If S_n(\omega) \uparrow t(\omega), which is < \infty with positive probability, then B_{t(\omega)}(\omega) = 0 and we have a contradiction.

3.1. Using the optional stopping theorem at time T \wedge t we have E_0 B^2_{T \wedge t} = E_0(T \wedge t). Letting t \to \infty and using the bounded and monotone convergence theorems with (1.4) we have

E_0 T = E_0 B_T^2 = a^2 \frac{b}{b+a} + b^2 \frac{a}{b+a} = ab

3.2. Let f(x,t) = x^6 - a x^4 t + b x^2 t^2 - c t^3. Differentiating gives

D_t f = -a x^4 + 2 b x^2 t - 3 c t^2 \qquad \tfrac12 D_{xx} f = 15 x^4 - 6 a x^2 t + b t^2

Setting a = 15, 2b = 6a, and b = 3c, that is, a = 15, b = 45, c = 15, we have D_t f + \tfrac12 D_{xx} f = 0, so f(B_t, t) is a local martingale. Using the optional stopping theorem at time T_a \wedge t we have

E_0\{ B^6_{T_a \wedge t} - 15 B^4_{T_a \wedge t} (T_a \wedge t) + 45 B^2_{T_a \wedge t} (T_a \wedge t)^2 \} = 15 E_0 (T_a \wedge t)^3

Letting t \to \infty and using the bounded and monotone convergence theorems with the results in (3.3) we have

15 E_0 T_a^3 = a^6 - 15 a^4 E T_a + 45 a^2 E T_a^2 = a^6 ( 1 - 15 + 75 )

so E_0 T_a^3 = 61 a^6 / 15.
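Both the coefficient matching in 3.2 and the moment arithmetic (using E T_a = a^2 and E T_a^2 = 5a^4/3 from (3.3)) are easy to machine-check, and the exit formulas of 1.1 and 3.1 can be estimated with a simple-random-walk simulation, for which P_x(S_T = a) = (b-x)/(b-a) and E_0 T = ab hold exactly. A sketch (an illustration of mine, not the text's method; barriers -3 and 5 are arbitrary):

```python
import random
from fractions import Fraction as F

# 1) f(x,t) = x^6 - 15 x^4 t + 45 x^2 t^2 - 15 t^3 should satisfy
#    D_t f + (1/2) D_xx f = 0 identically.
a, b, c = 15, 45, 15
def heat_operator(x, t):
    d_t = -a * x**4 + 2 * b * x**2 * t - 3 * c * t**2
    half_d_xx = 15 * x**4 - 6 * a * x**2 * t + b * t**2
    return d_t + half_d_xx

checks = [heat_operator(F(p), F(q)) for p in range(-3, 4) for q in range(5)]

# 2) Moment recursion: 15 E T^3 = a^6 - 15 a^4 E T + 45 a^2 E T^2
#    with E T = a^2 and E T^2 = 5 a^4 / 3 gives E T^3 = 61 a^6 / 15.
aa = F(1)
ET, ET2 = aa**2, F(5, 3) * aa**4
ET3 = (aa**6 - 15 * aa**4 * ET + 45 * aa**2 * ET2) / 15

# 3) Simple random walk from 0, absorbed at -3 and 5: exit probability 5/8,
#    mean exit time 3 * 5 = 15 (gambler's ruin, as in 1.1 and 3.1).
def walk(rng, lo=-3, hi=5):
    pos, steps = 0, 0
    while lo < pos < hi:
        pos += rng.choice((-1, 1))
        steps += 1
    return pos == lo, steps

rng = random.Random(42)
trials = [walk(rng) for _ in range(20000)]
p_lo = sum(hit for hit, _ in trials) / len(trials)  # expect 5/8
mean_T = sum(s for _, s in trials) / len(trials)    # expect 15
```

The exact-fraction checks pin down the constants 15, 45, 15 and 61/15 with no floating-point slack; the Monte Carlo part is only a consistency check.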
3.3. Using the optional stopping theorem at time T_{-a} \wedge n we have

1 = E_0 \exp( -(2\mu/\sigma^2) Z_{T_{-a} \wedge n} )

Letting n \to \infty and noting that on \{T_{-a} = \infty\} we have Z_t/t = \sigma B_t/t + \mu \to \mu almost surely, and hence Z_t \to \infty a.s., the desired result follows.

4.1. Let \theta : [0,\infty) \to \mathbf R^d be piecewise constant and have |\theta_s| = 1 for all s. Let Y_t = \sum_i \int_0^t \theta^i_s \, dX^i_s. The formula for the covariance of stochastic integrals shows Y_t is a local martingale with \langle Y\rangle_t = t, so (4.1) implies that Y_t is a Brownian motion. Letting 0 = t_0 < t_1 < \dots < t_n and taking \theta_s = v_j for s \in (t_{j-1}, t_j], 1 \le j \le n, shows that X_{t_1} - X_{t_0}, \dots, X_{t_n} - X_{t_{n-1}} are independent multivariate normals with mean 0 and covariance matrices (t_1 - t_0) I, \dots, (t_n - t_{n-1}) I, and it follows that X_t is a d-dimensional Brownian motion.

4.2. X_s is a local martingale with \langle X\rangle_s = \int_0^s h_r^2 \, dr. By modifying h_s after time t we can suppose without loss of generality that \langle X\rangle_\infty = \infty. Using (4.4) now we see that if \gamma(u) = \inf\{ s : \langle X\rangle_s > u \} then X_{\gamma(u)} is a Brownian motion. Since \langle X\rangle_s, and hence the time change \gamma(u), are deterministic, the desired result follows.

Chapter 4

4.1. Let \tau = \inf\{t > 0 : B_t \notin G\}, let y \in \partial G, and let T_y = \inf\{t > 0 : B_t = y\}. (2.9) in Chapter 1 implies P_y(T_y = 0) = 1. Since \tau \le T_y it follows that y is regular.

4.2. P_y(\tau = 0) < 1 and Blumenthal's 0-1 law imply P_y(\tau = 0) = 0. Since we are in d \ge 2, it follows that P_y(B_\tau = y) = 0 and hence \epsilon = 1 - E_y f(B_\tau) > 0. Let \sigma_n = \inf\{t > 0 : B_t \notin D(y, 1/n)\}. \sigma_n \downarrow 0 as n \uparrow \infty, so P_y(\sigma_n < \tau) \to 1. Now

1 - \epsilon = E_y f(B_\tau) = E_y( f(B_\tau); \tau \le \sigma_n ) + E_y( v(B_{\sigma_n}); \tau > \sigma_n )

As n \to \infty the first term on the right \to 0, and it follows that for large n, \inf_{x \in \partial D(y, 1/n)} v(x) \le 1 - \epsilon.

4.3. Let y \in \partial G and consider U = V( y, \nabla g(y), 1 ). Calculus gives us

g(z) - g(y) = \int_0^1 \nabla g( y + \theta(z - y) ) \cdot (z - y) \, d\theta

Continuity of \nabla g implies that if |z - y| < r and z \in U then \nabla g( y + \theta(z - y) ) \cdot (z - y) \ge 0, so g(z) \ge 0. This shows that for small r the truncated cone U \cap D(y, r) \subset G^c, so the regularity of y follows from (4.5c).

4.4. Let \tau = \inf\{t > 0 : B_t \in V(y, v, a)\}. By translation and rotation we can suppose y = 0, v = (1, 0, \dots, 0), and that the d-1 dimensional hyperplane is \{z_d = 0\}. Let T_0 = \inf\{t > 0 : B^d_t = 0\}. (2.9) in Chapter 1 implies that P_0(T_0 = 0) = 1. If a > 0 then for some k < \infty the hyperplane can be covered by k rotations of V, so P_0(\tau = 0) \ge 1/k, and it follows from the 0-1 law that P_0(\tau = 0) = 1.

7.1. (a) Ignoring C_d and differentiating gives

D_{x_i} h_\theta = -d \cdot \frac{ (x_i - \theta_i)\, y }{ ( |x - \theta|^2 + y^2 )^{(d+2)/2} }

D_{x_i x_i} h_\theta = -d \cdot \frac{ y }{ ( |x - \theta|^2 + y^2 )^{(d+2)/2} } + \frac{ d(d+2) (x_i - \theta_i)^2 y }{ ( |x - \theta|^2 + y^2 )^{(d+4)/2} }

D_y h_\theta = \frac{ 1 }{ ( |x - \theta|^2 + y^2 )^{d/2} } - d \cdot \frac{ y^2 }{ ( |x - \theta|^2 + y^2 )^{(d+2)/2} }

D_{yy} h_\theta = -d \cdot \frac{ 3y }{ ( |x - \theta|^2 + y^2 )^{(d+2)/2} } + \frac{ d(d+2)\, y^3 }{ ( |x - \theta|^2 + y^2 )^{(d+4)/2} }

Adding up we see that

\sum_{i=1}^{d-1} D_{x_i x_i} h_\theta + D_{yy} h_\theta = \frac{ \{ (-d)(d-1) + (-d) \cdot 3 \}\, y }{ ( |x - \theta|^2 + y^2 )^{(d+2)/2} } + \frac{ d(d+2)\, y\, ( |x - \theta|^2 + y^2 ) }{ ( |x - \theta|^2 + y^2 )^{(d+4)/2} } = 0

The fact that \Delta u(x) = 0 follows from (1.7) as in the proof of (7.1).
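The cancellation in 7.1(a) — that h_\theta(x, y) = y / ( |x - \theta|^2 + y^2 )^{d/2} is harmonic on the upper half space — can be spot-checked with finite differences. A sketch for d = 3 (the evaluation points, \theta, and step size are arbitrary choices of mine):

```python
def h(x1, x2, y, th=(0.1, 0.4), d=3):
    # Half-space kernel with the constant C_d ignored, as in the solution:
    # h_theta(x, y) = y / (|x - theta|^2 + y^2)^(d/2)
    r2 = (x1 - th[0]) ** 2 + (x2 - th[1]) ** 2 + y * y
    return y / r2 ** (d / 2)

def laplacian(f, p, eps=1e-3):
    # Central second differences in each of the three coordinates.
    total = 0.0
    for i in range(3):
        up, dn = list(p), list(p)
        up[i] += eps
        dn[i] -= eps
        total += (f(*up) - 2 * f(*p) + f(*dn)) / eps**2
    return total

lap = laplacian(h, (0.3, -0.2, 0.7))
```

The truncation error of the central difference is O(eps^2), so the computed Laplacian should be numerically indistinguishable from 0.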
(b) Clearly \int d\theta \, h_\theta(x, y) is independent of x. To show that it is independent of y, let x = 0 and change variables \theta_i = y \varphi_i for 1 \le i \le d-1 to get

\int d\theta \, h_\theta(0, y) = \int y^{d-1} \, d\varphi \, \frac{ y }{ ( y^2 |\varphi|^2 + y^2 )^{d/2} } = \int \frac{ d\varphi }{ ( |\varphi|^2 + 1 )^{d/2} }

(c) Changing variables \theta_i = x_i + r_i y and using dominated convergence,

\int_{D(x,\epsilon)^c} d\theta \, h_\theta(x, y) = \int_{D(0,\epsilon/y)^c} dr \, h_r(0, 1) \to 0

as y \to 0.

(d) Since P_x(\tau < \infty) = 1 for all x \in H, this follows from (4.3).

9.1. c(x) = -\beta < 0, so w(x) = E_x e^{-\beta\tau} \le 1, and it follows from (6.3) that w is the unique solution of

\tfrac12 u'' - \beta u = 0 \qquad u(-a) = u(a) = 1

Guessing u(x) = B \cosh(bx) with b > 0, we find

\tfrac12 u'' - \beta u = \left( \tfrac12 B b^2 - \beta B \right) \cosh(bx) = 0

if b = \sqrt{2\beta}. Then we take B = 1/\cosh( a \sqrt{2\beta} ) to satisfy the boundary condition.
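The solution u(x) = \cosh( x\sqrt{2\beta} ) / \cosh( a\sqrt{2\beta} ) found in 9.1 is easy to verify numerically, both the equation \tfrac12 u'' = \beta u and the boundary values u(\pm a) = 1. A sketch with arbitrary parameter choices of mine:

```python
import math

def u(x, beta=0.8, a=2.0):
    # Candidate solution from 9.1: cosh(x*sqrt(2*beta)) / cosh(a*sqrt(2*beta))
    b = math.sqrt(2 * beta)
    return math.cosh(b * x) / math.cosh(b * a)

def residual(x, beta=0.8, a=2.0, eps=1e-4):
    # Finite-difference value of (1/2) u'' - beta * u at the point x;
    # it should vanish up to discretization error.
    upp = (u(x + eps, beta, a) - 2 * u(x, beta, a) + u(x - eps, beta, a)) / eps**2
    return 0.5 * upp - beta * u(x, beta, a)

bvals = (u(-2.0), u(2.0))
```

Since u'' = 2\beta u exactly for this u, the residual is pure discretization noise.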
Chapter 5

3.1. We first prove that h is Lipschitz continuous with constant C_2 = 2 C_1 + R^{-1} |h(0)| on D_2 = \{ R \le |x| \le 2R \}. Let f(x) = (2R - |x|)/R and g(x) = h( Rx/|x| ), so that h = fg on D_2. Since h is Lipschitz continuous on D_1 = \{ |x| \le R \} and Rx/|x| is the projection of x onto D_1, we have

|g(x) - g(y)| = | h(Rx/|x|) - h(Ry/|y|) | \le C_1 \left| \frac{Rx}{|x|} - \frac{Ry}{|y|} \right| \le C_1 |x - y|

and

|f(x) - f(y)| = \frac{ \big| |y| - |x| \big| }{R} \le \frac{ |x - y| }{R}

Combining the last two results, introducing \|k\|_{\infty,i} = \sup\{ |k(x)| : x \in D_i \}, and using the triangle inequality:

|h(x) - h(y)| \le \|f\|_{\infty,2}\, |g(x) - g(y)| + |f(x) - f(y)|\, \|g\|_{\infty,2} \le 1 \cdot C_1 |x - y| + R^{-1} |x - y|\, \|h\|_{\infty,1} \le ( 2 C_1 + |h(0)|/R ) |x - y| = C_2 |x - y|

since for x \in D_1, |h(x)| \le |h(0)| + C_1 R.

To extend the last result to Lipschitz continuity on \mathbf R^d we begin by observing that if x \in D_1, y \in D_2, and z is the point of \partial D_1 on the line segment between x and y, then

|h(x) - h(y)| \le |h(x) - h(z)| + |h(z) - h(y)| \le C_1 |x - z| + C_2 |z - y| \le C_2 |x - y|

since C_1 \le C_2 and |x - z| + |z - y| = |x - y|. This shows that h is Lipschitz continuous with constant C_2 on \{ |z| \le 2R \}. Repeating the last argument taking |x| \le 2R and |y| > 2R completes the proof.

3.2. (a) Differentiating we have

\kappa(x) = -C x \log x \qquad \kappa'(x) = -C \log x - C > 0 \ \text{ if } 0 < x < e^{-1} \qquad \kappa''(x) = -C/x < 0 \ \text{ if } x > 0

so \kappa is strictly increasing and concave on [0, e^{-2}]. Since \kappa(e^{-2}) = 2 C e^{-2} and \kappa'(e^{-2}) = C, \kappa is strictly increasing and concave on [0, \infty). When \epsilon \le e^{-2},

\int_0^\epsilon \frac{ dx }{ C x \log(1/x) } = \infty

(b) If g(t) = \exp( -1/t^p ) with p > 0 then

g'(t) = \frac{p}{t^{p+1}} \exp( -1/t^p ) = p\, g(t) \{ \log( 1/g(t) ) \}^{(p+1)/p}
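The derivative identity in 3.2(b) is a one-line computation, but the exponent (p+1)/p is easy to get wrong; a quick numerical confirmation (a sketch of mine, with arbitrary test values):

```python
import math

def g(t, p=1.5):
    return math.exp(-1.0 / t**p)

def g_prime_exact(t, p=1.5):
    # Claimed identity: g'(t) = p * g(t) * (log(1/g(t)))**((p+1)/p)
    return p * g(t, p) * math.log(1.0 / g(t, p)) ** ((p + 1) / p)

def g_prime_numeric(t, p=1.5, eps=1e-6):
    # Central-difference derivative for comparison.
    return (g(t + eps, p) - g(t - eps, p)) / (2 * eps)

err = max(abs(g_prime_exact(t) - g_prime_numeric(t)) for t in (0.5, 1.3, 2.0))
```

Since log(1/g(t)) = t^{-p}, the bracket raised to the power (p+1)/p is exactly t^{-(p+1)}, which is what the chain rule produces directly.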
5.1. Under Q the coordinate maps X_t(\omega) satisfy MP( \bar b + b, a ), so

\hat X_t = X_t - \int_0^t ( \bar b + b )(X_s) \, ds

is a local martingale. Since we are interested in adding drift -b, we let c = a^{-1} b as before, but let

Y_t = -\int_0^t c(X_s) \cdot dX_s = -\hat Y_t + \int_0^t c \cdot b\,(X_s) \, ds = -\hat Y_t + \int_0^t b a^{-1} b\,(X_s) \, ds

Since Y_t and \hat Y_t differ by a process of bounded variation, they have the same quadratic variation, and the desired result follows as before.

5.4. Here \varphi'(y) = \exp\left( -\int_1^y 2b(z)/a(z) \, dz \right). If I > 0 then \varphi(\infty) = \infty, and if I < 0 then \varphi(\infty) < \infty. If I = 0 then

J = \int_1^\infty z^{1 - 2\delta} \, dz \ \begin{cases} < \infty & \text{if } \delta > 1 \\ = \infty & \text{if } \delta \le 1 \end{cases}

Combining the results for I and J and consulting the table we have the indicated result.

5.5. When \alpha = 0, \varphi'(y) = C y^{-2\beta}, so M(z) - M(0) \sim C z^{2\beta} as z \to 0 and J < \infty if \beta > 0. Comparing the possibilities for I and J gives the desired result.
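The convergence dichotomy for J = \int_1^\infty z^{1-2\delta}\, dz used above (finite iff \delta > 1) is easy to confirm numerically: for \delta > 1 the truncated integrals approach 1/(2\delta - 2), while for \delta = 1 they grow like \log of the upper limit. A sketch (my own illustration):

```python
def j_truncated(delta, upper, n=200000):
    # Midpoint rule for the integral of z^(1 - 2*delta) over [1, upper].
    dz = (upper - 1.0) / n
    z = 1.0 + dz / 2
    total = 0.0
    for _ in range(n):
        total += z ** (1.0 - 2.0 * delta) * dz
        z += dz
    return total

j_finite = j_truncated(1.5, 1000.0)  # integrand z^-2: exact value 1 - 1/1000
j_log = j_truncated(1.0, 1000.0)     # integrand z^-1: exact value log(1000)
```

The two truncations behave exactly as the dichotomy predicts: one stabilizes near its limit, the other keeps growing with the cutoff.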