ORDINARY DIFFERENTIAL EQUATIONS Sze-Bi Hsu
Contents

1 INTRODUCTION  1
  1.1 Introduction: Where ODE comes from  1

2 FUNDAMENTAL THEORY  9
  2.1 Introduction and Preliminary  9
  2.2 Local Existence and Uniqueness of Solutions of I.V.P.  11
  2.3 Continuation of Solutions  19
  2.4 Continuous Dependence on Parameters and Initial Conditions  21
  2.5 Differentiability of Initial Conditions and Parameters  24
  2.6 Differential Inequality  26

3 LINEAR SYSTEMS  35
  3.1 Introduction  35
  3.2 Fundamental Matrices  36
  3.3 Linear Systems with Constant Coefficients  39
  3.4 Two Dimensional Linear Autonomous Systems  46
  3.5 Linear Systems with Periodic Coefficients  50
  3.6 Adjoint Systems  56

4 STABILITY OF NONLINEAR SYSTEMS  65
  4.1 Definitions  65
  4.2 Linearization  66
  4.3 Saddle Point Property: Stable and Unstable Manifolds  73
  4.4 Orbital Stability  82
  4.5 Travelling Wave Solutions  88

5 METHOD OF LYAPUNOV  99
  5.1 An Introduction to Dynamical Systems  99
  5.2 Lyapunov Functions  105

6 TWO DIMENSIONAL SYSTEMS  117
  6.1 Poincaré-Bendixson Theorem  117
  6.2 Levinson-Smith Theorem  125
  6.3 Hopf Bifurcation  132

7 SECOND ORDER LINEAR EQUATIONS  145
  7.1 Sturm's Comparison Theorem and Sturm-Liouville Boundary Value Problem  145
  7.2 Distributions  151
  7.3 Green's Function  153
  7.4 Fredholm Alternative for 2nd Order Linear Equations  158

8 THE INDEX THEORY AND BROUWER DEGREE  163
  8.1 Index Theory in the Plane  163
  8.2 Brief Introduction to Brouwer Degree in Rn  170

9 INTRODUCTION TO REGULAR AND SINGULAR PERTURBATION METHODS  177
  9.1 Regular Perturbation Methods  177
  9.2 Singular Perturbations: Boundary Value Problems  182
  9.3 Singular Perturbation: Initial Value Problem  187
Chapter 1
INTRODUCTION

1.1 Introduction: Where ODE comes from
The theory of ordinary differential equations deals with the large-time behavior of the solution x(t, x_0) of the initial value problem (I.V.P.) for a first order system of differential equations:
\[
\frac{dx_i}{dt} = f_i(t, x_1, x_2, \dots, x_n), \quad i = 1, \dots, n, \qquad x_i(0) = x_{i0},
\]
or, in vector notation,
\[
\frac{dx}{dt} = f(t, x), \quad x(0) = x_0, \qquad f : D \subseteq \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n, \; D \text{ open}, \tag{1.1}
\]
where x = (x_1, \dots, x_n) and f = (f_1, \dots, f_n). If the right-hand side of (1.1) is independent of time t, i.e.
\[
\frac{dx}{dt} = f(x), \tag{1.2}
\]
then we say (1.1) is an autonomous system. In this case we call f a vector field on its domain. If the right-hand side depends on time t, we say (1.1) is a nonautonomous system. The most important nonautonomous systems are the periodic ones, i.e., those for which f(t, x) satisfies f(t + \omega, x) = f(t, x) for some \omega > 0 (\omega is the period). If f(t, x) = A(t)x where A(t) \in \mathbb{R}^{n \times n}, then we say
\[
\frac{dx}{dt} = A(t)x \tag{1.3}
\]
is a linear system of differential equations. It is easy to verify that if \varphi(t), \psi(t) are solutions of (1.3), then \alpha\varphi(t) + \beta\psi(t) is also a solution of the linear system (1.3) for any \alpha, \beta \in \mathbb{R}. The system
\[
\frac{dx}{dt} = A(t)x + g(t) \tag{1.4}
\]
is called a linear system with nonhomogeneous part g(t). If A(t) \equiv A, then
\[
\frac{dx}{dt} = Ax \tag{1.5}
\]
is a linear system with constant coefficients. A system (1.1) which is not linear is called a nonlinear system. Nonlinear systems are usually much harder to analyze than linear ones. The essential difference is that linear systems can be broken into parts: through the superposition principle, the Laplace transform, and Fourier analysis, a linear system is precisely the sum of its parts. Nonlinear systems, whose phenomena pervade everyday life, obey no superposition principle. In the following we present some important examples of differential equations from physics, chemistry, and biology.

Example 1.1.1 m\ddot{x} + c\dot{x} + kx = 0 describes the motion of a spring with damping and a restoring force. Applying Newton's law F = ma, we have
\[
ma = m\ddot{x} = F = -c\dot{x} - kx = \text{friction} + \text{restoring force}.
\]
Let y = \dot{x}. Then
\[
\dot{x} = y, \qquad \dot{y} = -\frac{c}{m}y - \frac{k}{m}x.
\]
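As a numerical illustration of Example 1.1.1 (not from the text), the sketch below integrates the first order system x' = y, y' = -(c/m)y - (k/m)x with the classical fourth order Runge-Kutta method; the values of m, c, k, the initial data, and the step size are illustrative choices. With c > 0 the motion decays toward the rest state (0, 0).

```python
# Numerical sketch of Example 1.1.1: damped spring m*x'' + c*x' + k*x = 0,
# rewritten as x' = y, y' = -(c/m)*y - (k/m)*x. Parameters are illustrative.

def damped_spring(m=1.0, c=0.5, k=4.0, x0=1.0, y0=0.0, dt=0.001, t_end=20.0):
    """Integrate the system with the classical 4th order Runge-Kutta method."""
    def f(x, y):
        return y, -(c / m) * y - (k / m) * x

    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    return x, y

x_end, y_end = damped_spring()
# With damping c > 0 both coordinates decay toward (0, 0).
print(x_end, y_end)
```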
Example 1.1.2 m\ddot{x} + c\dot{x} + kx = F\cos\omega t. Then we have
\[
\dot{x} = y, \qquad \dot{y} = -\frac{c}{m}y - \frac{k}{m}x + \frac{F}{m}\cos\omega t.
\]
If c = 0 and \omega = \sqrt{k/m}, then we have "resonance".

Example 1.1.3 (Electrical networks) Let Q(t) be the charge on the capacitor at time t. Use Kirchhoff's second law: in a closed circuit, the impressed voltage equals the sum of the voltage drops in the rest of the circuit.
Fig.1.1
1. The voltage drop across a resistance of R ohms equals RI (Ohm's law).
2. The voltage drop across an inductance of L henrys equals L\,dI/dt.
3. The voltage drop across a capacitance of C farads equals Q/C.

Hence
\[
E(t) = L\frac{dI}{dt} + RI + \frac{Q}{C}.
\]
Since I(t) = dQ/dt, it follows that
\[
L\frac{d^2Q}{dt^2} + R\frac{dQ}{dt} + \frac{1}{C}Q = E(t).
\]
Example 1.1.4 (Van der Pol oscillator) [K, p.481], [HK, p.172]
\[
u'' + \epsilon u'(u^2 - 1) + u = 0, \qquad 0 < \epsilon \ll 1.
\]
Let E(t) = \frac{(u')^2}{2} + \frac{u^2}{2} be the energy. Then
\[
E'(t) = u'u'' + uu' = u'\big({-\epsilon} u'(u^2 - 1) - u\big) + uu'
= -\epsilon (u')^2 (u^2 - 1)
\begin{cases} < 0, & |u| > 1, \\ > 0, & |u| < 1. \end{cases}
\]
Hence the oscillator is "self-excited".

Example 1.1.5 (Van der Pol oscillator with periodic forcing)
\[
u'' + \epsilon u'(u^2 - 1) + u = A\cos\omega t.
\]
This is the equation Cartwright, Littlewood, and Levinson studied in 1940-1950, and from which Smale constructed his horseshoe in 1960. It is one of the model equations of chaotic dynamics.
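The self-excitation in Example 1.1.4 can be sketched numerically (parameters, initial data, and step size below are illustrative choices, not from the text): a tiny initial disturbance grows, and the solution settles on a limit cycle whose amplitude is close to 2.

```python
# Numerical sketch of Example 1.1.4: starting from a small disturbance, the
# Van der Pol oscillator u'' + eps*u'(u^2 - 1) + u = 0 grows until it settles
# on a limit cycle of amplitude near 2. All parameter values are illustrative.

def van_der_pol_amplitude(eps=0.5, u0=0.01, v0=0.0, dt=0.001, t_end=100.0):
    """RK4 integration; return the largest |u| over the final quarter of the run."""
    def f(u, v):
        return v, -eps * v * (u * u - 1.0) - u

    u, v = u0, v0
    n = int(t_end / dt)
    amp = 0.0
    for i in range(n):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
        k3u, k3v = f(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
        k4u, k4v = f(u + dt * k3u, v + dt * k3v)
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if i > 3 * n // 4:
            amp = max(amp, abs(u))
    return amp

amp = van_der_pol_amplitude()
print(amp)
```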
Example 1.1.6 (Second order conservative system)
\[
\ddot{x} + g(x) = 0,
\]
or equivalently
\[
\dot{x}_1 = x_2, \qquad \dot{x}_2 = -g(x_1).
\]
The energy E(x_1, x_2) = \frac{1}{2}x_2^2 + V(x_1), where V(x_1) = \int_0^{x_1} g(s)\,ds, satisfies
\[
\frac{d}{dt}E = 0.
\]

Example 1.1.7 (Duffing's equation)
\[
\ddot{x} + (x^3 - x) = 0.
\]
The potential V(x) = -\frac{1}{2}x^2 + \frac{1}{4}x^4 is a double-well potential.
Fig.1.2

Example 1.1.8 (Duffing's equation with damping and periodic forcing)
\[
\ddot{x} + \beta\dot{x} + (x^3 - x) = A\cos\omega t.
\]
This is also a typical model equation in chaotic dynamics.

Example 1.1.9 (Simple pendulum equation)
\[
\frac{d^2\theta}{dt^2} + \frac{g}{\ell}\sin\theta = 0.
\]
Fig.1.3
\[
F = -mg\sin\theta = ma = m\ell\,\frac{d^2\theta}{dt^2}.
\]
Example 1.1.10 (Lorenz equations) [S, p.301], [V]
\[
\dot{x} = \sigma(y - x), \qquad
\dot{y} = rx - y - xz, \qquad
\dot{z} = xy - bz, \qquad \sigma, r, b > 0.
\]
When \sigma = 10, b = 8/3, r = 28, and (x(0), y(0), z(0)) \approx (0, 0, 0), we observe the "butterfly" phenomenon.
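A numerical sketch of the chaotic regime of Example 1.1.10 (step size, horizon, and initial points are illustrative choices): with \sigma = 10, b = 8/3, r = 28, two trajectories started 10^{-8} apart separate by many orders of magnitude while both remain on the bounded attractor.

```python
# Numerical sketch for Example 1.1.10: sensitive dependence on initial
# conditions in the Lorenz system with sigma=10, b=8/3, r=28.

def lorenz_step(p, dt, sigma=10.0, b=8.0 / 3.0, r=28.0):
    def f(s):
        x, y, z = s
        return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = f(p)
    k2 = f(add(p, k1, dt / 2))
    k3 = f(add(p, k2, dt / 2))
    k4 = f(add(p, k3, dt))
    return tuple(p[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
                 for i in range(3))

dt, steps = 0.001, 25000           # integrate to t = 25
traj_a = (1.0, 1.0, 1.0)
traj_b = (1.0 + 1e-8, 1.0, 1.0)    # tiny perturbation of the initial condition
for _ in range(steps):
    traj_a, traj_b = lorenz_step(traj_a, dt), lorenz_step(traj_b, dt)
sep = sum((ai - bi) ** 2 for ai, bi in zip(traj_a, traj_b)) ** 0.5
print(sep)
```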
Example 1.1.11 (Michaelis-Menten enzyme kinetics) ([K, p.511], [LS, p.302])
Consider the conversion of a chemical substrate S to a product P by enzyme catalysis. The reaction scheme
\[
E + S \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} ES \overset{k_2}{\longrightarrow} E + P
\]
was proposed by Michaelis and Menten in 1913. By the law of mass action we have the equations
\[
\begin{aligned}
\frac{d}{dt}[E] &= -k_1[E][S] + k_{-1}[ES] + k_2[ES],\\
\frac{d}{dt}[S] &= -k_1[E][S] + k_{-1}[ES],\\
\frac{d}{dt}[ES] &= k_1[E][S] - k_{-1}[ES] - k_2[ES],\\
\frac{d}{dt}[P] &= k_2[ES],
\end{aligned}
\]
subject to the initial concentrations [E](0) = E_0, [S](0) = S_0, [ES](0) = [P](0) = 0.
Since
\[
\frac{d}{dt}[ES] + \frac{d}{dt}[E] = 0,
\]
we have [ES] + [E] \equiv E_0. Let
\[
u = [ES]/C_0, \qquad v = [S]/S_0, \qquad \tau = k_1 E_0 t,
\]
\[
\kappa = \frac{k_{-1} + k_2}{k_1 S_0}, \qquad \epsilon = C_0/S_0, \qquad \lambda = \frac{k_{-1}}{k_{-1} + k_2}, \qquad C_0 = \frac{E_0}{1 + \kappa}.
\]
Then we obtain the singularly perturbed equations
\[
\epsilon\frac{du}{d\tau} = v - \frac{(v + \kappa)u}{1 + \kappa}, \qquad 0 < \epsilon \ll 1,
\]
\[
\frac{dv}{d\tau} = -v + \frac{(v + \kappa\lambda)u}{1 + \kappa},
\]
\[
u(0) = 0, \qquad v(0) = 1.
\]
We shall study this example in Chapter 9.

Example 1.1.12 (Belousov-Zhabotinskii reaction) [M]
\[
\epsilon\frac{dx}{dt} = qy - xy + x(1 - x), \qquad
\delta\frac{dy}{dt} = -qy - xy + 2fz, \qquad
\frac{dz}{dt} = x - z,
\]
where \epsilon, \delta, q are small and f \approx 0.5. This is an important oscillator in chemistry.

Example 1.1.13 (Logistic equation) Let x(t) be the population of a species. Then
\[
\frac{dx}{dt} = rx - bx^2 = rx\left(1 - \frac{x}{K}\right),
\]
where K = r/b is the carrying capacity.

Example 1.1.14 (The Lotka-Volterra model for predator-prey interaction) Let x(t), y(t) be the populations of prey and predator at time t, respectively ([M], p.124 and p.62). Then we have
\[
\frac{dx}{dt} = ax - bxy, \qquad \frac{dy}{dt} = cxy - dy, \qquad a, b, c, d > 0,
\]
or
\[
\frac{dx}{dt} = rx\left(1 - \frac{x}{K}\right) - bxy, \qquad \frac{dy}{dt} = cxy - dy.
\]
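The classical predator-prey system of Example 1.1.14 has the conserved quantity H(x, y) = cx - d\ln x + by - a\ln y, so its orbits are closed curves. The sketch below checks this numerically; the parameter values and initial data are illustrative choices, not from the text.

```python
import math

# Numerical sketch for Example 1.1.14: the Lotka-Volterra system
# x' = a*x - b*x*y, y' = c*x*y - d*y conserves
# H(x, y) = c*x - d*ln(x) + b*y - a*ln(y).

a, b, c, d = 1.0, 0.5, 0.2, 0.8    # illustrative parameter values

def H(x, y):
    return c * x - d * math.log(x) + b * y - a * math.log(y)

def rk4(x, y, dt):
    def f(u, v):
        return a * u - b * u * v, c * u * v - d * v
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = f(x + dt * k3x, y + dt * k3y)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

x, y = 6.0, 3.0
h0 = H(x, y)
dt = 0.001
for _ in range(int(20.0 / dt)):
    x, y = rk4(x, y, dt)
drift = abs(H(x, y) - h0)
print(drift)
```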
Example 1.1.15 (The Lotka-Volterra two-species competition model) ([M]) Let x_i(t), i = 1, 2, be the population of the i-th competing species.
\[
\frac{dx_1}{dt} = r_1 x_1\left(1 - \frac{x_1}{K_1}\right) - \alpha x_1 x_2, \qquad
\frac{dx_2}{dt} = r_2 x_2\left(1 - \frac{x_2}{K_2}\right) - \beta x_1 x_2,
\]
\[
x_1(0) > 0, \quad x_2(0) > 0, \qquad \alpha, \beta, r_1, K_1, r_2, K_2 > 0.
\]
References

[HK] J. Hale and H. Kocak, Dynamics and Bifurcations.
[K] J. Keener, Principles of Applied Mathematics.
[M] J. Murray, Mathematical Biology.
[S] S. Strogatz, Nonlinear Dynamics and Chaos.
[LS] C. C. Lin and L. A. Segel, Mathematics.
[V] M. Viana, What's new on Lorenz strange attractors?, The Mathematical Intelligencer, Vol. 22, No. 3, 2000, pp. 6-19.
Chapter 2
FUNDAMENTAL THEORY

2.1 Introduction and Preliminary
In this chapter we shall study the fundamental properties of the initial value problem (I.V.P.)
\[
\frac{dx}{dt} = f(t, x), \quad x(t_0) = x_0, \qquad f : D \subseteq \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n, \tag{2.1}
\]
where D is an open set of \mathbb{R} \times \mathbb{R}^n containing (t_0, x_0). We shall answer the following questions:

(Q1) What is the weakest condition on f(t, x) that ensures local existence of a solution of I.V.P. (2.1)?
(Q2) When is the solution of (2.1) unique?
(Q3) When does the solution of (2.1) exist globally, i.e., when is the maximal interval of existence the whole real line \mathbb{R}?
(Q4) Is the initial value problem well-posed? That is, does the solution of (2.1) depend continuously on the initial conditions and parameters?

Before we pursue these questions, we need the following preliminaries. Let x \in \mathbb{R}^n. A function |\cdot| : \mathbb{R}^n \to \mathbb{R}_+ is a norm if
(i) |x| \ge 0 and |x| = 0 iff x = 0;
(ii) |\alpha x| = |\alpha||x| for \alpha \in \mathbb{R}, x \in \mathbb{R}^n;
(iii) |x + y| \le |x| + |y|.

The three most commonly used norms of a vector x = (x_1, \dots, x_n) are [IK]
\[
|x|_\infty = \sup_{1 \le i \le n} |x_i|, \qquad
|x|_1 = \sum_{i=1}^n |x_i|, \qquad
|x|_2 = \left(\sum_{i=1}^n |x_i|^2\right)^{1/2}.
\]
It is known that all norms on a finite dimensional vector space are equivalent, i.e., if |\cdot|_1 and |\cdot|_2 are any two norms, then there exist \alpha, \beta > 0 such that
\[
\alpha|x|_1 \le |x|_2 \le \beta|x|_1 \quad \text{for all } x.
\]
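The three norms above, and the standard equivalence inequalities |x|_\infty \le |x|_2 \le |x|_1 \le n|x|_\infty, can be checked directly; the sample vector below is an illustrative choice.

```python
import math

# A small check of the three common norms on R^n and the equivalence
# inequalities |x|_inf <= |x|_2 <= |x|_1 <= n*|x|_inf.

def norm_inf(x):
    return max(abs(xi) for xi in x)

def norm_1(x):
    return sum(abs(xi) for xi in x)

def norm_2(x):
    return math.sqrt(sum(xi * xi for xi in x))

x = [3.0, -4.0, 1.0, 0.5]   # illustrative sample vector
n = len(x)
print(norm_inf(x), norm_2(x), norm_1(x))
```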
From (iii) we obtain by induction
\[
|x_1 + x_2 + \cdots + x_m| \le |x_1| + |x_2| + \cdots + |x_m|.
\]
From this inequality and (ii) we can deduce the inequality
\[
\left|\int_a^b x(t)\,dt\right| \le \int_a^b |x(t)|\,dt
\]
for any vector function x(t) which is continuous on [a, b]. In fact, let \delta = \frac{b - a}{m} and t_k = a + k\delta, k = 1, \dots, m. Then
\[
\left|\int_a^b x(t)\,dt\right| = \lim_{m \to \infty}\left|\sum_{k=1}^m x(t_k)\delta\right|
\le \lim_{m \to \infty}\sum_{k=1}^m |x(t_k)|\delta = \int_a^b |x(t)|\,dt.
\]
An abstract space X, not necessarily a finite dimensional vector space, is called a normed space if a real-valued function |x| is defined for all x \in X having properties (i)-(iii). A Banach space is a complete normed space, i.e., any Cauchy sequence \{x_n\} is a convergent sequence. Let I = [a, b] be a bounded, closed interval and
\[
C(I) = \{f \mid f : I \to \mathbb{R}^n \text{ is continuous}\}
\]
with the norm
\[
\|f\|_\infty = \sup_{a \le t \le b} |f(t)|.
\]
Then \|\cdot\|_\infty is a norm on C(I). We note that f_m \to f in C(I) means \|f_m - f\|_\infty \to 0, i.e., f_m \to f uniformly on I.

Theorem 2.1.1 ([A], p.222) C(I) is a Banach space.

In the following we need the notion of equicontinuity. A set of functions F = \{f\} defined on a real interval I is said to be equicontinuous on I if, given \varepsilon > 0, there exists \delta = \delta(\varepsilon) > 0, independent of f \in F and t, \bar{t} \in I, such that
\[
|f(t) - f(\bar{t})| < \varepsilon \quad \text{whenever} \quad |t - \bar{t}| < \delta, \; f \in F.
\]
F is uniformly bounded if there exists M > 0 such that \|f\|_\infty < M for all f \in F. The following Ascoli-Arzela theorem is a generalization of the Bolzano-Weierstrass theorem in \mathbb{R}^n; it concerns compactness in the Banach space C(I).
Theorem 2.1.2 [C] (Ascoli-Arzela) Let f_m \in C(I). If \{f_m\}_{m=1}^\infty is equicontinuous and uniformly bounded, then there exists a convergent subsequence \{f_{m_k}\}.

Proof. Let \{\gamma_k\} be the set of rational numbers in I. Since \{f_m(\gamma_1)\} is a bounded sequence, we extract a subsequence \{f_{1m}\}_{m=1}^\infty of \{f_m\} such that f_{1m}(\gamma_1) converges to a point denoted f(\gamma_1). Similarly, \{f_{1m}(\gamma_2)\}_{m=1}^\infty is a bounded sequence, so we can extract a subsequence \{f_{2m}\} of \{f_{1m}\} such that f_{2m}(\gamma_2) converges to f(\gamma_2). Continuing this process inductively, we extract a subsequence \{f_{km}\} of \{f_{k-1,m}\} such that f_{km}(\gamma_k) converges to a point denoted f(\gamma_k). Consider the diagonal process:
\[
\begin{array}{lllll}
f_{11}, & f_{12}, & f_{13}, & f_{14}, & \cdots \\
f_{21}, & f_{22}, & f_{23}, & f_{24}, & \cdots \\
\vdots \\
f_{k1}, & f_{k2}, & \cdots, & f_{kk}, & \cdots
\end{array}
\]
Let g_m \overset{\text{def}}{=} f_{mm}. Then g_m(\gamma_k) \to f(\gamma_k) as m \to \infty for all k = 1, 2, \dots. We claim \{g_m\} is a Cauchy sequence, and hence the desired convergent subsequence. Since \{g_m(\gamma_j)\} converges for each j, given \varepsilon > 0 there exists M_j(\varepsilon) > 0 such that |g_m(\gamma_j) - g_i(\gamma_j)| < \varepsilon for m, i \ge M_j(\varepsilon). By the equicontinuity of \{g_m\}, there exists \delta > 0 such that for each i, |g_i(x) - g_i(y)| < \varepsilon whenever |x - y| < \delta. By the Heine-Borel theorem, the open covering \{B(\gamma_j, \delta)\}_{j=1}^\infty of [a, b] has a finite subcovering \{B(\gamma_{j_k}, \delta)\}_{k=1}^L. Let M(\varepsilon) = \max\{M_{j_1}(\varepsilon), \dots, M_{j_L}(\varepsilon)\}. Then x \in [a, b] implies x \in B(\gamma_{j_\ell}, \delta) for some 1 \le \ell \le L. For m, i \ge M(\varepsilon),
\[
|g_m(x) - g_i(x)| \le |g_m(x) - g_m(\gamma_{j_\ell})| + |g_m(\gamma_{j_\ell}) - g_i(\gamma_{j_\ell})| + |g_i(\gamma_{j_\ell}) - g_i(x)| < \varepsilon + \varepsilon + \varepsilon = 3\varepsilon.
\]
Thus \{g_m\} is a Cauchy sequence, and we complete the proof of the theorem.
2.2 Local Existence and Uniqueness of Solutions of I.V.P.
Before we prove the theorems for the local existence of solutions of I.V.P. (2.1), we present the following examples.

Example 2.2.1 The system
\[
\frac{dx}{dt} = 1 + x^2, \qquad x(0) = 0
\]
has a unique solution x(t) = \tan t defined on (-\pi/2, \pi/2).

Example 2.2.2 Consider the system
\[
\frac{dx}{dt} = x^2, \qquad x(0) = x_0.
\]
Since dx/dt = x^2 has general solution x(t) = \frac{-1}{t + c}, if x_0 > 0 then x(t) is defined on (-\infty, -c) with c = -x_0^{-1}; x(t) is defined on (-c, \infty) if x_0 < 0; and x(t) \equiv 0 if x_0 = 0.
Example 2.2.3 Consider
\[
\frac{dx}{dt} = f(t, x) = \begin{cases} \sqrt{x}, & x \ge 0, \\ 0, & x < 0, \end{cases} \qquad x(0) = 0.
\]
Then there are infinitely many solutions:
\[
x(t) = \begin{cases} \dfrac{(t - c)^2}{4}, & t \ge c \ge 0, \\ 0, & t \le c. \end{cases}
\]
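The non-uniqueness in Example 2.2.3 can be verified numerically: both the zero solution and the delayed parabola satisfy the integral form x(t) = \int_0^t \sqrt{x(s)}\,ds. The grid size and the sample values of c below are illustrative choices.

```python
import math

# Numerical sketch for Example 2.2.3: for any c >= 0, the function that is 0
# for t <= c and (t-c)^2/4 for t >= c solves x' = sqrt(x), x(0) = 0. We check
# the integral equation x(t) = int_0^t sqrt(x(s)) ds on a grid.

def residual(c, t_end=4.0, n=4000):
    """Max deviation between x(t) and the trapezoid-rule integral of sqrt(x)."""
    def x(t):
        return 0.0 if t <= c else (t - c) ** 2 / 4.0

    h = t_end / n
    integral = 0.0
    worst = 0.0
    prev = math.sqrt(x(0.0))
    for i in range(1, n + 1):
        t = i * h
        cur = math.sqrt(x(t))
        integral += 0.5 * h * (prev + cur)
        prev = cur
        worst = max(worst, abs(x(t) - integral))
    return worst

print(residual(0.0), residual(1.0))
```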
x(t) \equiv 0 is also a solution. We note that f(t, x) is not Lipschitz at x = 0 (prove it!).

Remark 2.2.1 From the applications viewpoint, a good I.V.P. from the physical world should have existence and uniqueness of its solution, and the solution should be defined globally, i.e., on (-\infty, \infty). A nice feature of ODEs is that time can be "reversed": x(t) is also defined for t < t_0.

Lemma 2.2.1 If the function f(t, x) is continuous, then the initial value problem (2.1) is equivalent to the integral equation
\[
x(t) = x_0 + \int_{t_0}^{t} f(s, x(s))\,ds. \tag{2.2}
\]
Proof. Obviously, a solution x(t) of (2.1) satisfies (2.2). Conversely, let x(t) be a solution of (2.2). Putting t = t_0 in (2.2), we obtain x(t_0) = x_0. Moreover, x(t) is continuous, since an integral is a continuous function of its upper limit. Therefore f(t, x(t)) is continuous. It now follows from (2.2) that x(t) is differentiable and x'(t) = f(t, x(t)).

The local existence proof [CL] proceeds in two stages. First it is shown by construction that there exists an "approximate" solution to I.V.P. (2.1). Then one proves that there exists a sequence of these approximate solutions which tends to a solution of (2.1).

Definition 2.2.1 We say a piecewise C^1 function \varphi(t) on an interval I is an \varepsilon-approximate solution of I.V.P. (2.1) if
\[
|\varphi'(t) - f(t, \varphi(t))| < \varepsilon \quad \text{for all } t \in I
\]
at which \varphi'(t) exists.

Since (t_0, x_0) \in D and D is open in \mathbb{R} \times \mathbb{R}^n, there exist a, b > 0 such that S = \{(t, x) : |t - t_0| \le a, |x - x_0| \le b\} \subseteq D. From the assumption f \in C(D), we obtain |f(t, x)| \le M on the compact set S. Let c = \min\{a, b/M\}.

Theorem 2.2.1 For any \varepsilon > 0, there exists an \varepsilon-approximate solution of I.V.P. (2.1) on the interval I = \{t : |t - t_0| \le c\}.
Proof. Since f(t, x) is continuous on the compact set S, f(t, x) is uniformly continuous on S. Then given \varepsilon > 0, there exists \delta = \delta(\varepsilon) > 0 such that
\[
|f(t, x) - f(s, y)| < \varepsilon \quad \text{whenever} \quad |(t, x) - (s, y)| < \delta. \tag{2.3}
\]
Partition the interval [t_0, t_0 + c] into m subintervals, t_0 < t_1 < \cdots < t_m = t_0 + c, with t_{j+1} - t_j < \min\{\frac{\delta}{2}, \frac{\delta}{2M}\} for j = 0, \dots, m - 1. Construct an \varepsilon-approximate solution by Euler's method:
\[
\begin{aligned}
&\varphi(t_0) = x_0,\\
&\text{on } [t_0, t_1],\ \varphi(t) = \varphi(t_0) + f(t_0, \varphi(t_0))(t - t_0),\\
&\text{on } [t_j, t_{j+1}],\ \varphi(t) = \varphi(t_j) + f(t_j, \varphi(t_j))(t - t_j), \quad 1 \le j \le m - 1.
\end{aligned} \tag{2.4}
\]
First we check that (t, \varphi(t)) \in S for all t_0 \le t \le t_0 + c. Obviously |t - t_0| \le c \le a. For t_0 \le t \le t_1, |\varphi(t) - x_0| = |f(t_0, x_0)|(t - t_0) \le Mc \le M \cdot b/M = b. By induction we assume (t, \varphi(t)) \in S for t_0 \le t \le t_j. Then for t_j \le t \le t_{j+1},
\[
|\varphi(t) - x_0| \le |\varphi(t) - \varphi(t_j)| + |\varphi(t_j) - \varphi(t_{j-1})| + \cdots + |\varphi(t_1) - \varphi(t_0)|
\le M\big((t - t_j) + (t_j - t_{j-1}) + \cdots + (t_1 - t_0)\big) = M(t - t_0) \le Mc \le b. \tag{2.5}
\]
Now we verify that \varphi(t) is an \varepsilon-approximate solution. Let t \in (t_j, t_{j+1}). Then |t - t_j| \le |t_{j+1} - t_j| < \delta/2 and, from (2.4), |\varphi(t) - \varphi(t_j)| = |f(t_j, \varphi(t_j))(t - t_j)| \le M|t - t_j| \le M \cdot \delta/(2M) = \delta/2. Hence |(t, \varphi(t)) - (t_j, \varphi(t_j))| < \delta, and from (2.3) we have
\[
|\varphi'(t) - f(t, \varphi(t))| = |f(t_j, \varphi(t_j)) - f(t, \varphi(t))| < \varepsilon.
\]
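The Euler construction (2.4) is easy to try on a concrete equation. The sketch below (an illustration, not from the text) applies it to x' = x, x(0) = 1 on [0, 1], where the exact solution is e^t; refining the partition shrinks the error, as the theorem's \varepsilon-approximation suggests.

```python
import math

# Numerical sketch of the Euler polygon (2.4) for x' = x, x(0) = 1 on [0, 1].
# As the number of steps m grows, the endpoint approaches e.

def euler_endpoint(m):
    """Euler's method for x' = x, x(0) = 1, on [0, 1] with m equal steps."""
    h = 1.0 / m
    x = 1.0
    for _ in range(m):
        x += h * x          # x_{j+1} = x_j + f(t_j, x_j) * h with f(t, x) = x
    return x

errors = [abs(euler_endpoint(m) - math.e) for m in (10, 100, 1000)]
print(errors)
```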
Fig.2.1

Theorem 2.2.2 Let f \in C(D), (t_0, x_0) \in D. Then the I.V.P. (2.1) has a solution on an interval I = [t_0 - c, t_0 + c].
Proof. Let \varepsilon_m = \frac{1}{m} \downarrow 0 as m \to +\infty and let \varphi_m(t) be an \varepsilon_m-approximate solution on I.

Claim: \{\varphi_m\} is uniformly bounded and equicontinuous.

Uniform boundedness: from (2.5),
\[
|\varphi_m(t)| \le |\varphi_m(t_0)| + |\varphi_m(t) - \varphi_m(t_0)| \le |x_0| + M(t - t_0) \le |x_0| + Mc \quad \text{for all } t \text{ and } m.
\]
Equicontinuity of \{\varphi_m\}: for any s < t with s \le t_j < \cdots < t_k \le t for some j and k,
\[
|\varphi_m(t) - \varphi_m(s)| \le |\varphi_m(t) - \varphi_m(t_k)| + |\varphi_m(t_k) - \varphi_m(t_{k-1})| + \cdots + |\varphi_m(t_j) - \varphi_m(s)|
\le M(t - t_k) + M(t_k - t_{k-1}) + \cdots + M(t_j - s) = M|t - s|.
\]
Hence for any \varepsilon > 0 we choose \delta = \varepsilon/M; then |\varphi_m(t) - \varphi_m(s)| < \varepsilon whenever |t - s| < \delta. From the Ascoli-Arzela theorem, we can extract a convergent subsequence \{\varphi_{m_k}\}. Let \varphi_{m_k} \to \varphi \in C(I); then f(t, \varphi_{m_k}(t)) \to f(t, \varphi(t)) uniformly on I. Let E_m(t) = \varphi'_m(t) - f(t, \varphi_m(t)); then E_m is piecewise continuous on I and |E_m(t)| \le \varepsilon_m on I. It follows that
\[
\varphi_m(t) = x_0 + \int_{t_0}^t [f(s, \varphi_m(s)) + E_m(s)]\,ds,
\]
and letting m_k \to \infty,
\[
\varphi(t) = x_0 + \int_{t_0}^t f(s, \varphi(s))\,ds.
\]
To prove the uniqueness of solutions of the I.V.P., we need the following Gronwall inequality.

Theorem 2.2.3 Let \lambda(t) be a real continuous function and \mu(t) a nonnegative continuous function on [a, b]. If a continuous function y(t) has the property that
\[
y(t) \le \lambda(t) + \int_a^t \mu(s)y(s)\,ds, \qquad a \le t \le b, \tag{2.6}
\]
then on [a, b] we have
\[
y(t) \le \lambda(t) + \int_a^t \lambda(s)\mu(s)\exp\left(\int_s^t \mu(\tau)\,d\tau\right) ds.
\]
In particular, if \lambda(t) \equiv \lambda is a constant,
\[
y(t) \le \lambda \exp\left(\int_a^t \mu(s)\,ds\right).
\]
Proof. Put
\[
z(t) = \int_a^t \mu(s)y(s)\,ds.
\]
Then z(t) is differentiable, and from (2.6) we have
\[
z'(t) - \mu(t)z(t) \le \lambda(t)\mu(t). \tag{2.7}
\]
Let
\[
w(t) = z(t)\exp\left(-\int_a^t \mu(\tau)\,d\tau\right).
\]
Then from (2.7) it follows that
\[
w'(t) \le \lambda(t)\mu(t)\exp\left(-\int_a^t \mu(\tau)\,d\tau\right).
\]
Since w(a) = 0, we get by integration
\[
w(t) \le \int_a^t \lambda(s)\mu(s)\exp\left(-\int_a^s \mu(\tau)\,d\tau\right) ds,
\]
i.e., by the definition of w(t),
\[
z(t) \le \int_a^t \lambda(s)\mu(s)\exp\left(\int_s^t \mu(\tau)\,d\tau\right) ds.
\]
Then from (2.6) we complete the proof.

Remark 2.2.2 The Gronwall inequality is used frequently to estimate bounds on solutions of ordinary differential equations.

The next theorem concerns the uniqueness of solutions of the I.V.P.

Theorem 2.2.4 Let x_1(t), x_2(t) be differentiable functions such that |x_1(a) - x_2(a)| \le \delta and |x_i'(t) - f(t, x_i(t))| \le \epsilon_i, i = 1, 2, for a \le t \le b. If the function f(t, x) satisfies the Lipschitz condition in x,
\[
|f(t, x_1) - f(t, x_2)| \le L|x_1 - x_2|,
\]
then
\[
|x_1(t) - x_2(t)| \le \delta e^{L(t - a)} + (\epsilon_1 + \epsilon_2)\left[e^{L(t - a)} - 1\right]/L.
\]
Proof. Put \epsilon = \epsilon_1 + \epsilon_2 and \gamma(t) = x_1(t) - x_2(t). Then
\[
|\gamma'(t)| \le |f(t, x_1(t)) - f(t, x_2(t))| + \epsilon \le L|\gamma(t)| + \epsilon.
\]
Since
\[
\int_a^t \gamma'(s)\,ds = \gamma(t) - \gamma(a) \qquad \text{for } a \le t \le b,
\]
it follows that
\[
|\gamma(t)| \le |\gamma(a)| + \left|\int_a^t \gamma'(s)\,ds\right|
\le \delta + \int_a^t |\gamma'(s)|\,ds
\le \delta + \int_a^t (L|\gamma(s)| + \epsilon)\,ds
= \delta + \epsilon(t - a) + L\int_a^t |\gamma(s)|\,ds.
\]
Therefore, by Gronwall's inequality,
\[
|\gamma(t)| \le \delta + \epsilon(t - a) + \int_a^t L\{\delta + \epsilon(s - a)\} e^{L(t - s)}\,ds,
\]
or, after integrating the right-hand side by parts,
\[
|\gamma(t)| \le \delta e^{L(t - a)} + \epsilon\left[e^{L(t - a)} - 1\right]/L.
\]
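As a numerical illustration of Theorem 2.2.4 with \epsilon_1 = \epsilon_2 = 0 (an illustrative example, not from the text): for x' = \sin x, which is Lipschitz with constant L = 1, two exact solutions with |x_1(0) - x_2(0)| = \delta should stay within \delta e^{Lt} of each other.

```python
import math

# Numerical sketch of the estimate |x1(t) - x2(t)| <= delta * e^(L*t) from
# Theorem 2.2.4 (eps1 = eps2 = 0) for x' = sin(x), Lipschitz with L = 1.

def rk4_step(x, dt):
    f = math.sin
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x1, x2 = 0.5, 0.6                      # illustrative initial data
delta, L, dt = abs(x1 - x2), 1.0, 0.001
bound_holds = True
for i in range(1, int(5.0 / dt) + 1):
    x1, x2 = rk4_step(x1, dt), rk4_step(x2, dt)
    if abs(x1 - x2) > delta * math.exp(L * i * dt):
        bound_holds = False
print(bound_holds)
```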
Corollary 2.2.1 Let \epsilon_1 = \epsilon_2 = \delta = 0 in Theorem 2.2.4; then we obtain the uniqueness of solutions of the I.V.P. provided f(t, x) satisfies a Lipschitz condition in x.

In the following Corollary 2.2.2, the meaning of continuous dependence of solutions on initial conditions is stated as follows: for any compact interval [t_0, t_0 + T] and any \varepsilon > 0, there exists \delta = \delta(\varepsilon, T) > 0 such that
\[
|x_1(t_0) - x_2(t_0)| < \delta \quad \text{implies} \quad |x_1(t) - x_2(t)| < \varepsilon
\]
for all t_0 \le t \le t_0 + T.

Corollary 2.2.2 The continuous dependence of the solutions on the initial conditions holds when f is globally Lipschitz.

Proof. Let x_1(t), x_2(t) be two solutions of I.V.P. (2.1) with |x_1(t_0) - x_2(t_0)| \le \delta. Applying Theorem 2.2.4 with \epsilon_1 = \epsilon_2 = 0 yields
\[
|x_1(t) - x_2(t)| \le \delta e^{L(t - t_0)} \qquad \text{for } t \ge t_0. \tag{2.8}
\]
Choose \delta > 0 satisfying \delta < \varepsilon e^{-LT}; then from (2.8) we obtain the continuous dependence on initial conditions.

Corollary 2.2.3 If f \in C^1(D) then we have local existence and uniqueness of the solutions of the I.V.P.

Proof. It suffices to show that if f \in C^1(D) then f is locally Lipschitz. Since f \in C^1(D) and (t_0, x_0) \in D, the derivative D_x f(t, x) is continuous on R = \{(t, x) : |x - x_0| \le \delta_1, |t - t_0| \le \delta_2\} for some \delta_1, \delta_2 > 0. We claim f(t, x) satisfies a Lipschitz condition in
x on the rectangle R:
\[
f(t, x_1) - f(t, x_2) = \int_0^1 \frac{d}{ds} f(t, sx_1 + (1 - s)x_2)\,ds
= \int_0^1 D_x f(t, sx_1 + (1 - s)x_2) \cdot (x_1 - x_2)\,ds.
\]
Then we have |f(t, x_1) - f(t, x_2)| \le M|x_1 - x_2|, where
\[
M = \sup_{0 \le s \le 1,\ |t - t_0| \le \delta_2} \|D_x f(t, sx_1 + (1 - s)x_2)\|.
\]
Let X be a Banach space and F \subseteq X a closed subset. We say T : F \to F is a contraction if |Tx_1 - Tx_2| \le \theta|x_1 - x_2| for some 0 < \theta < 1 and all x_1, x_2 \in F. As we now show, a contraction map has a unique fixed point.

Theorem 2.2.5 (Contraction mapping principle) There exists a unique fixed point for a contraction map T : F \to F.

Proof. Given any x_0 \in F, define a sequence \{x_n\}_{n=0}^\infty by x_{n+1} = Tx_n. Then
\[
|x_{n+1} - x_n| \le \theta|x_n - x_{n-1}| \le \cdots \le \theta^n|x_1 - x_0|.
\]
Claim: \{x_n\} is a Cauchy sequence. Let m > n. Then
\[
|x_m - x_n| \le |x_m - x_{m-1}| + |x_{m-1} - x_{m-2}| + \cdots + |x_{n+1} - x_n|
\le \theta^n\big(1 + \theta + \cdots + \theta^{m-n-1}\big)|x_1 - x_0|
\le \frac{\theta^n}{1 - \theta}|x_1 - x_0| \to 0 \text{ as } n \to \infty.
\]
Since F is a closed set, x_n \to x \in F as n \to \infty. Claim: x is the desired fixed point of T. Since
\[
|Tx - x| \le |Tx - x_{n+1}| + |x_{n+1} - x| = |Tx - Tx_n| + |x_{n+1} - x| \le \theta|x - x_n| + |x - x_{n+1}| \to 0,
\]
it follows that Tx = x. Uniqueness follows since Tx = x and Ty = y give |x - y| = |Tx - Ty| \le \theta|x - y|, hence x = y.

Now we shall apply the contraction principle to show the existence and uniqueness of solutions of I.V.P. (2.1).

Theorem 2.2.6 Let f(t, x) be continuous on S = \{(t, x) : |t - t_0| \le a, |x - x_0| \le b\} and satisfy a Lipschitz condition in x with Lipschitz constant L. Let M = \max\{|f(t, x)| : (t, x) \in S\}. Then there exists a unique solution of the I.V.P.
\[
\frac{dx}{dt} = f(t, x), \qquad x(t_0) = x_0
\]
on I = \{t : |t - t_0| \le \alpha\}, where \alpha < \min\{a, b/M, 1/L\}.

Proof. Let B be the closed subset of C(I),
\[
B = \{\varphi \in C(I) : |\varphi(t) - x_0| \le b,\ t \in I\}.
\]
Define a mapping on B by
\[
(T\varphi)(t) = x_0 + \int_{t_0}^{t} f(s, \varphi(s))\,ds. \tag{2.9}
\]
First we show that T maps B into B. Since f(t, \varphi(t)) is continuous, T certainly maps B into C(I). If \varphi \in B, then
\[
|(T\varphi)(t) - x_0| = \left|\int_{t_0}^{t} f(s, \varphi(s))\,ds\right| \le \left|\int_{t_0}^{t} |f(s, \varphi(s))|\,ds\right| \le M|t - t_0| \le M\alpha < b.
\]
Hence T\varphi \in B. Next we prove T is a contraction mapping:
\[
|Tx(t) - Ty(t)| = \left|\int_{t_0}^{t} \big(f(s, x(s)) - f(s, y(s))\big)\,ds\right|
\le \left|\int_{t_0}^{t} |f(s, x(s)) - f(s, y(s))|\,ds\right|
\le L\left|\int_{t_0}^{t} |x(s) - y(s)|\,ds\right|
\le L\,\|x - y\|\,|t - t_0| \le L\alpha\,\|x - y\| = \theta\,\|x - y\|,
\]
where \theta = L\alpha < 1. Hence \|Tx - Ty\| \le \theta\|x - y\|, and we complete the proof by the contraction mapping principle.

Remark 2.2.3 From (2.9) we obtain the successive approximations
\[
x_{n+1}(t) = x_0 + \int_{t_0}^{t} f(s, x_n(s))\,ds, \qquad x_0(t) \equiv x_0, \quad |t - t_0| \le \alpha.
\]
\{x_n(t)\} are called the Picard iterations of I.V.P. (2.1); they are used for theoretical purposes, not for numerical computation.
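Even so, the Picard iterations are easy to carry out on a concrete example. The sketch below (an illustration, not from the text) iterates x_{n+1}(t) = 1 + \int_0^t x_n(s)\,ds for x' = x, x(0) = 1 on a grid; the iterates are the Taylor partial sums of e^t and converge rapidly on a short interval. Grid and iteration counts are illustrative choices.

```python
import math

# Numerical sketch of the Picard iterations in Remark 2.2.3 for x' = x,
# x(0) = 1: x_{n+1}(t) = 1 + int_0^t x_n(s) ds, evaluated with the trapezoid
# rule on a grid, converges to e^t on [0, 0.5].

def picard(n_iter=15, t_end=0.5, n_grid=500):
    h = t_end / n_grid
    ts = [i * h for i in range(n_grid + 1)]
    x = [1.0] * (n_grid + 1)           # x_0(t) = x0 = 1
    for _ in range(n_iter):
        new = [1.0]
        integral = 0.0
        for i in range(1, n_grid + 1):
            integral += 0.5 * h * (x[i - 1] + x[i])   # trapezoid rule
            new.append(1.0 + integral)
        x = new
    return max(abs(x[i] - math.exp(ts[i])) for i in range(n_grid + 1))

err = picard()
print(err)
```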
2.3 Continuation of Solutions
In this section we discuss the properties of the solution on the maximal interval of existence and the global existence of the solution of I.V.P. (2.1).

Theorem 2.3.1 Let f \in C(D) and |f| \le M on D. Suppose \varphi is a solution of (2.1) on the interval J = (a, b). Then

(i) the two limits \lim_{t \to a^+} \varphi(t) = \varphi(a^+) and \lim_{t \to b^-} \varphi(t) = \varphi(b^-) exist;

(ii) if (a, \varphi(a^+)) (respectively, (b, \varphi(b^-))) is in D, then the solution \varphi can be continued to the left past the point t = a (respectively, to the right past the point t = b).

Proof. Consider the right endpoint b; the proof for the left endpoint a is similar. Fix \tau \in J and set \xi = \varphi(\tau). Then for \tau < t < u < b, we have
\[
|\varphi(u) - \varphi(t)| = \left|\int_t^u f(s, \varphi(s))\,ds\right| \le M(u - t). \tag{2.10}
\]
Given any sequence \{t_m\} \uparrow b, from (2.10) we see that \{\varphi(t_m)\} is a Cauchy sequence. Thus the limit \varphi(b^-) exists. If (b, \varphi(b^-)) is in D, then by the local existence theorem we can extend the solution \varphi to b + \delta for some \delta > 0 as follows. Let \tilde\varphi(t) be the solution of the I.V.P.
\[
\frac{dx}{dt} = f(t, x), \qquad x(b) = \varphi(b^-)
\]
for b \le t \le b + \delta. Then we verify that the function
\[
\varphi^*(t) = \begin{cases} \varphi(t), & a \le t \le b, \\ \tilde\varphi(t), & b \le t \le b + \delta, \end{cases}
\]
is a solution of I.V.P. (2.1).

Remark 2.3.1 The assumption |f| \le M on D is stronger than the theorem needs; in fact, from (2.10) we only need |f(t, \varphi(t))| \le M for t close to b.

Definition 2.3.1 We say (t, \varphi(t)) \to \partial D as t \to b^- if for any compact set K \subseteq D there exists t^* \in (a, b) such that (t, \varphi(t)) \notin K for all t \in (t^*, b).

Example 2.3.1 dx/dt = 1 + x^2, x(0) = 0. The solution is x(t) = \tan t, -\pi/2 < t < \pi/2. In this case D = \mathbb{R} \times \mathbb{R} and \partial D is empty.
Corollary 2.3.1 If f ∈ C(D) and ϕ is a solution of (2.1) on an interval J, then ϕ can be continued to a maximal interval J ∗ ⊃ J is such a way (t, ϕ(t)) → ∂D as t → ∂J ∗ . The extended solution ϕ∗ on J ∗ is noncontinuable. Proof. By Zorn’s lemma, there exists a noncontinuable solution ϕ∗ on a maximal interval J ∗ . By Theorem 2.3.1, J ∗ = (a, b) must be open. If b = ∞, then obviously from Definition 2.3.1 (t, ϕ∗ (t)) → ∂D as t → ∞. Assume b< ∞. Let c be any
20
CHAPTER 2. FUNDAMENTAL THEORY
number in (a, b). Claim: For any compact set K ⊆ D, {(t, ϕ∗ (t)) : t ∈ [c, b)}K If not , then f ((t, ϕ∗ (t)) is bounded on [c, b) and (b, ϕ∗ (b−)) ∈ K ⊆ D. By Theorem 2.3.1, ϕ∗ can be continued to pass b. This leads to a contradiction. Now we prove (t, ϕ∗ (t)) → ∂D as t → b− . Let K be any compact set in D, we claim that there exists δ > 0 such that (t, ϕ∗ (t)) ∈ / K for all t ∈ (b − δ, b). If not ,then there exists a sequence {tm } ↑ b such that (tm , ϕ∗ (tm )) → (b, ξ) ∈ K ⊆ D. Choose r > 0 such that B((b, ξ), r) ⊆ D and set ε = 3r , B ≡ B((b, ξ), 2²) ⊆ D. Let M = sup{|f (t, x)| : (t, x) ∈ B} < ∞. Since B is a compact set , from above claim, {(t, ϕ∗ (t)) : t ∈ [tm , b)}B. Hence there exists τm ∈ (tm , b) such that (τm , ϕ∗ (τm )) ∈ / B. Without loss of generality, we assume tm < τm < tm+1 . Let m∗ be sufficiently large such that for all m ≥ m∗ we have |tm − b| + |ϕ∗ (tm ) − ξ| < ε. ∗ Obviously ¯ ¯ ¯|τm − b| + |ϕ ¯ (τm ) − ξ| > 2ε. Then there exists tm ∈ (tm , τm ) such that ¯tm − b¯ + ¯ϕ∗ (tm ) − ξ ¯ = 2ε. Then ε
¯ ¯ ¯ ¯ < ¯tm − b¯ + ¯ϕ∗ (tm ) − ξ ¯ − |tm − b| − |ϕ∗ (tm ) − ξ| ¯ ¯ ≤ (b − tm ) − (b − tm ) + ¯ϕ∗ (tm ) − ϕ∗ (tm )¯ ¯Z ¯ ¯ tm ¯ ¯ ¯ ¯ ¯ < ¯ f (s, ϕ∗ (s))ds¯ ≤ M ¯tm − tm ¯ ¯ tm ¯ ≤ M |τm − tm | → 0 as m → ∞
This leads to a contradiction. Thus we complete the proof. Corollary 2.3.2 If the solution x(t) of (2.1) has a apriori bound M , i.e, |x(t)| ≤ M whenever x(t) exists. Then x(t) exists for all t ∈ R. Proof. Let T > t0 be arbitary. Define the rectangle D1 = {(t, x) : t0 ≤ t ≤ T, |x| ≤ M }. Then f (t, x) is bounded on D1 . Then the solution x(t) can be continued to the boundary of D1 . Hence x(t) exists for t0 ≤ t ≤ T and T is arbitary. Similarly x(t) exists for T1 ≤ t ≤ t0 , T1 arbitary. Thus we have global existence with J ∗ = (−∞, ∞). Remark 2.3.2 To get an apriori bound M for the solution x(t) of (2.1), we apply differential inequalities (see §2.6) or Lyapunov method (see Chapter 5). Example 2.3.6 There exists a unique solution ϕ(t) of the linear system dx = A(t)x + h(t) = f (t, x) dt x(t0 ) = x0 for all t ∈ R where A(t) ∈ Rn×n and h(t) ∈ Rn are continuous functions on R Proof. Let ϕ(t) be the solution for t0 ≤ t < t0 + c Then Rt ϕ(t) − x0 = t0 f (s, ϕ(s))ds Rt Rt = t0 [f (s, ϕ(s)) − f (s, x0 )] ds + t0 f (s, x0 )ds
and it follows that
\[
|\varphi(t) - x_0| \le \int_{t_0}^t \|A(s)\|\,|\varphi(s) - x_0|\,ds + \int_{t_0}^t |A(s)x_0 + h(s)|\,ds
\le L\int_{t_0}^t |\varphi(s) - x_0|\,ds + \delta,
\]
where
\[
L = \sup_{t_0 \le t \le t_0 + c} \|A(t)\| \quad \text{and} \quad
\delta = c \cdot \max\{|A(s)x_0 + h(s)| : t_0 \le s \le t_0 + c\}.
\]
From Gronwall's inequality we have |\varphi(t) - x_0| \le \delta\exp(Lc). Hence \varphi(t) is bounded on [t_0, t_0 + c), and by Theorem 2.3.1 we can extend \varphi past t_0 + c.
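The contrast between Example 2.3.6 and Example 2.2.2 can be seen numerically (an illustrative sketch, not from the text): the linear equation x' = x grows at most exponentially, consistent with the Gronwall estimate, while the nonlinear x' = x^2 with x(0) = 1 blows up near t = 1. The step size and blow-up threshold below are illustrative choices.

```python
# Numerical sketch: x' = x exists for all time (at most exponential growth),
# while x' = x^2, x(0) = 1 escapes to infinity near its blow-up time t = 1.

dt = 1e-4
x_lin, x_quad = 1.0, 1.0
blowup_time = None
for i in range(int(2.0 / dt)):
    x_lin += dt * x_lin                    # Euler step for x' = x
    if blowup_time is None:
        x_quad += dt * x_quad * x_quad     # Euler step for x' = x^2
        if x_quad > 1e6:                   # treat this as numerical blow-up
            blowup_time = (i + 1) * dt
print(x_lin, blowup_time)
```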
2.4 Continuous Dependence on Parameters and Initial Conditions
First we consider the discrete version of the continuous dependence property.

Theorem 2.4.1 Let \{f_n(t, x)\} be a sequence of continuous functions on D with \lim_{n \to \infty} f_n = f uniformly on any compact set in D. Let (\tau_n, \xi_n) \in D, (\tau_n, \xi_n) \to (\tau, \xi), and let \varphi_n(t) be any noncontinuable solution of x' = f_n(t, x), x(\tau_n) = \xi_n. If \varphi(t) is the unique solution of x' = f(t, x), x(\tau) = \xi, defined on [a, b], then \varphi_n(t) is defined on [a, b] for n large and \varphi_n \to \varphi uniformly on [a, b].

Proof. It suffices to show that \varphi_n \to \varphi uniformly on [\tau, b]. There are two steps in the proof.

Step 1: Show that there exists t_1 > \tau such that \varphi_n \to \varphi uniformly on [\tau, t_1]. Since f \in C(D) and (\tau, \xi) \in D, let E be a compact subset of D such that \operatorname{Int} E \supseteq \operatorname{graph}\varphi = \{(t, \varphi(t)) : a \le t \le b\}. Let |f| < M in E. Then |f_n| < M for n \ge n_0 for some large n_0. Choose \delta > 0 sufficiently small such that
\[
R = \{(t, x) : |t - \tau| \le \delta, |x - \xi| \le 3M\delta\} \subseteq E.
\]
Fig.2.2
Choose n_1 \ge n_0 large such that |\tau_n - \tau| < \delta and |\xi_n - \xi| < M\delta for n \ge n_1. Let n \ge n_1; then (\tau_n, \xi_n) \in R. For |t - \tau| \le \delta, as long as (t, \varphi_n(t)) stays in E, we have
\[
|\varphi_n(t) - \xi_n| = \left|\int_{\tau_n}^t f_n(s, \varphi_n(s))\,ds\right| \le M|t - \tau_n|
\]
and
\[
|\varphi_n(t) - \xi| \le |\xi - \xi_n| + |\varphi_n(t) - \xi_n|
\le M\delta + M|t - \tau_n|
\le M\delta + M(|t - \tau| + |\tau - \tau_n|)
\le M\delta + M\delta + M\delta = 3M\delta.
\]
Then \varphi_n(t) is defined and \{(t, \varphi_n(t)) : |t - \tau| \le \delta\} \subseteq R. Obviously \{\varphi_n\} is uniformly bounded on I_\delta = \{t : |t - \tau| \le \delta\}, and from
\[
|\varphi_n(t) - \varphi_n(s)| = \left|\int_s^t f_n(u, \varphi_n(u))\,du\right| \le M|t - s|, \qquad s, t \in I_\delta,
\]
it follows that \{\varphi_n\} is equicontinuous on I_\delta. By the Ascoli-Arzela theorem, we can extract a convergent subsequence \{\varphi_{n_k}\} such that \varphi_{n_k} \to \bar\varphi uniformly on I_\delta. Write
\[
\varphi_n(t) = \xi_n + \int_{\tau_n}^t f_n(s, \varphi_n(s))\,ds.
\]
Letting n_k \to \infty, we have
\[
\bar\varphi(t) = \xi + \int_\tau^t f(s, \bar\varphi(s))\,ds.
\]
By uniqueness of \varphi(t), we have \bar\varphi(t) \equiv \varphi(t) on I_\delta; indeed, by uniqueness every convergent subsequence converges to \varphi. Now we claim \varphi_n \to \varphi uniformly on I_\delta. If not, then there exist \varepsilon > 0 and a subsequence \{\varphi_{n_\ell}\} such that \|\varphi_{n_\ell} - \varphi\| > \varepsilon for each \ell = 1, 2, \dots. Applying the Ascoli-Arzela theorem to \{\varphi_{n_\ell}\}, we obtain a subsequence \varphi_{n_{\ell_k}} \to \varphi, which is the desired contradiction. Hence \varphi_n \to \varphi uniformly on [\tau, t_1], t_1 = \tau + \delta.

Step 2: Show \varphi_n \to \varphi uniformly on [\tau, b]. Let
\[
t^* = \sup\{t_1 \le b : \varphi_n \to \varphi \text{ uniformly on } [\tau, t_1]\}.
\]
We claim that t^* = b. If not, t^* < b. Choose \delta^1 > 0 small such that
\[
R_1 = \{(t, x) : |t - t^*| \le 2\delta^1, |x - \varphi(t^*)| \le 4M\delta^1\} \subseteq E.
\]
Choose t^1 with t^* - \delta^1 < t^1 < t^*. Then
\[
|\varphi(t^*) - \varphi(t^1)| \le \int_{t^1}^{t^*} |f(s, \varphi(s))|\,ds \le M\delta^1.
\]
Fig.2.3
Hence
R′′ = {(t, x) : |t − t1| ≤ δ1, |x − ϕ(t1)| ≤ 3Mδ1} ⊆ R1.
Let ξn1 = ϕn(t1), ξ1 = ϕ(t1). Then (t1, ξn1) → (t1, ξ1). Applying Step 1 with δ replaced by δ1, we obtain that ϕn(t) → ϕ(t) uniformly on [t1, t1 + δ1]. But t1 + δ1 > t∗. This contradicts the definition of t∗.
Now we state the continuous version of the theorem.
Theorem 2.4.2 Let f(t, x, λ) be a continuous function of (t, x, λ) for all (t, x) in an open set D and for all λ near λ0, and let ϕ(t; τ, ξ, λ) be any noncontinuable solution of
x′ = f(t, x, λ), x(τ) = ξ.
If ϕ(t; t0, ξ0, λ0) is defined on [a, b] and is unique, then ϕ(t; τ, ξ, λ) is defined on [a, b] for all (τ, ξ, λ) near (t0, ξ0, λ0) and is a continuous function of (t, τ, ξ, λ).
Proof. By Theorem 2.4.1 we have
|ϕ(t; τ, ξ, λ) − ϕ(t1; t0, ξ0, λ0)| ≤ |ϕ(t; τ, ξ, λ) − ϕ(t; t0, ξ0, λ0)| + |ϕ(t; t0, ξ0, λ0) − ϕ(t1; t0, ξ0, λ0)| ≤ ε + |ϕ(t; t0, ξ0, λ0) − ϕ(t1; t0, ξ0, λ0)| < 2ε
for a ≤ t ≤ b if |(τ, ξ, λ) − (t0, ξ0, λ0)| < δ and |t − t1| < δ1.
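Theorem 2.4.2 can be illustrated numerically. The sketch below is ours, not the text's: it uses a hand-written RK4 integrator and the test equation x′ = λx on [0, 1], perturbing the parameter and the initial condition together, and checks that the sup-norm gap to the reference solution shrinks with the perturbation.

```python
import numpy as np

def rk4(f, x0, t0, t1, n=1000):
    """Classical 4th-order Runge-Kutta; returns the sampled solution path."""
    t, x, h = t0, float(x0), (t1 - t0) / n
    path = [x]
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2)
        k4 = f(t + h, x + h*k3)
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
        path.append(x)
    return np.array(path)

# reference problem: x' = lam0 * x, x(0) = xi0, on [0, 1]
lam0, xi0 = -1.0, 1.0
ref = rk4(lambda t, x: lam0 * x, xi0, 0.0, 1.0)

# perturb the parameter and the initial condition together by eps
gaps = []
for eps in (1e-1, 1e-2, 1e-3):
    pert = rk4(lambda t, x, e=eps: (lam0 + e) * x, xi0 + eps, 0.0, 1.0)
    gaps.append(np.max(np.abs(pert - ref)))

print(gaps)  # sup-norm gaps shrink roughly in proportion to eps
```

The monotone decrease of the gaps is exactly the continuous dependence on (ξ, λ) asserted by the theorem.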
2.5
Differentiability of Initial Conditions and Parameters
For fixed τ, ξ, if f(t, x, λ) is C¹ in x and λ, then we shall show that the unique solution ϕ(t, λ) of the I.V.P.
dx/dt = f(t, x, λ), x(τ, λ) = ξ
is differentiable with respect to λ. Furthermore, from
d/dt (d/dλ ϕ(t, λ)) = d/dλ (f(t, ϕ(t, λ), λ)),
or
d/dt (d/dλ ϕ(t, λ)) = fx(t, ϕ(t, λ), λ) (d/dλ ϕ(t, λ)) + (∂f/∂λ)(t, ϕ(t, λ), λ),
it follows that (d/dλ)ϕ(t, λ) ≡ ψ(t, λ) satisfies the variational equation
dy/dt = fx(t, ϕ(t, λ), λ)y + fλ(t, ϕ(t, λ), λ), y(τ) = 0.
Similarly, for fixed τ, if f(t, x) is C¹ in x, then we shall show that the unique solution ϕ(t, ξ) of the I.V.P.
dx/dt = f(t, x), x(τ) = ξ
is differentiable with respect to ξ. Furthermore, from
d/dt (d/dξ ϕ(t, ξ)) = d/dξ (f(t, ϕ(t, ξ)))
we obtain
d/dt (d/dξ ϕ(t, ξ)) = fx(t, ϕ(t, ξ)) (d/dξ ϕ(t, ξ)), (d/dξ)ϕ(τ, ξ) = I.
Then the n × n matrix (d/dξ)ϕ(t, ξ) ≡ ψ(t, ξ) satisfies the linear variational equation
dY/dt = fx(t, ϕ(t, ξ))Y, Y(τ) = I.
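The variational equation can be checked against a problem with a known solution. The sketch below (our own choice of example, not the text's) uses x′ = x², x(0) = ξ, whose solution is ϕ(t, ξ) = ξ/(1 − ξt) with ∂ϕ/∂ξ = 1/(1 − ξt)²; integrating the variational equation y′ = fx(t, ϕ)y = 2ϕy, y(0) = 1, alongside the solution recovers this sensitivity.

```python
import numpy as np

def rk4_system(f, u0, t0, t1, n=2000):
    """RK4 for vector-valued u' = f(t, u); returns u(t1)."""
    u, t, h = np.array(u0, dtype=float), t0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, u)
        k2 = f(t + h/2, u + h/2*k1)
        k3 = f(t + h/2, u + h/2*k2)
        k4 = f(t + h, u + h*k3)
        u = u + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return u

xi = 0.5

def aug(t, u):
    x, y = u                        # x = phi(t, xi), y = d phi / d xi
    return np.array([x*x, 2*x*y])   # f = x^2, so f_x = 2x

x1, y1 = rk4_system(aug, [xi, 1.0], 0.0, 1.0)
# closed forms at t = 1: phi = 0.5/(1 - 0.5) = 1, d phi/d xi = 1/(1 - 0.5)^2 = 4
print(x1, y1)
```

The augmented system integrates the solution and its ξ-derivative simultaneously, which is how sensitivities are computed in practice.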
Theorem 2.5.1 [C] Let ϕ(t, λ0) be the unique solution of
dx/dt = f(t, x, λ0), x(t0) = ξ0
on the compact interval J = [a, b]. Assume f is C¹ in x and λ at all points (t, ϕ(t, λ0), λ0), t ∈ J. Then for λ sufficiently near λ0, the system (Eλ): dx/dt = f(t, x, λ), x(t0) = ξ0 has a unique solution ϕ(t, λ) defined on J. Moreover ϕλ(t, λ0) exists and satisfies
y′ = fx(t, ϕ(t, λ0), λ0)y + fλ(t, ϕ(t, λ0), λ0), y(t0) = 0.
Proof. Since fx, fλ are continuous in (t, x, λ) and ϕ(t, λ0) is continuous in t, given ε > 0, for each s ∈ J there exists δ = δ(s, ε) > 0 such that
|fx(t, x, λ) − fx(t, ϕ(t, λ0), λ0)| ≤ ε, |fλ(t, x, λ) − fλ(t, ϕ(t, λ0), λ0)| ≤ ε (2.11)
if |t − s| ≤ δ, |x − ϕ(t, λ0)| ≤ δ, |λ − λ0| ≤ δ.
Since ∪_{s∈[a,b]} B(s, δ(s, ε)) ⊇ [a, b], the Heine–Borel theorem gives ∪_{i=1}^{N} B(si, δ(si, ε)) ⊇ [a, b]. Let δ1 = δ1(ε) = min(δ(s1, ε), · · · , δ(sN, ε)). For any t ∈ J, if
|x − ϕ(t, λ0)| ≤ δ1, |λ − λ0| ≤ δ1, (2.12)
then |t − si| ≤ δ(si, ε) for some i, and |x − ϕ(t, λ0)| ≤ δ1 ≤ δ(si, ε), |λ − λ0| ≤ δ1 ≤ δ(si, ε). So (2.11) holds for any t ∈ J satisfying (2.12). Let
R = {(t, x, λ) : |x − ϕ(t, λ0)| ≤ δ1, |λ − λ0| ≤ δ1, t ∈ J}.
Then fx, fλ are bounded on R, say |fx| ≤ A, |fλ| ≤ B. Hence f satisfies a Lipschitz condition in x and, if |λ − λ0| ≤ δ1, the I.V.P. (Eλ) has a unique solution passing through any point in the region Ω: |x − ϕ(t, λ0)| ≤ δ1, t ∈ J. Moreover the solution ϕ(t, λ) passing through the point (t0, ξ0) satisfies
|ϕ′(t, λ) − f(t, ϕ(t, λ), λ0)| ≤ B|λ − λ0|
as long as (t, ϕ(t, λ)) stays in the region Ω. Therefore, by Theorem 2.4.2,
|ϕ(t, λ) − ϕ(t, λ0)| ≤ C|λ − λ0|, (2.13)
where C = B[e^{Ah} − 1]/A, h = b − a; it follows that ϕ(t, λ) is defined on J for all λ sufficiently near λ0. Furthermore, by (2.11), (2.13) and the mean value theorem, if λ is sufficiently near λ0, then for all t ∈ J,
|f(t, ϕ(t, λ), λ) − f(t, ϕ(t, λ0), λ0) − fx(t, ϕ(t, λ0), λ0)[ϕ(t, λ) − ϕ(t, λ0)] − fλ(t, ϕ(t, λ0), λ0)(λ − λ0)| ≤ ε|λ − λ0|. (2.14)
Put
ψ(t, λ) = ϕ(t, λ) − ϕ(t, λ0) − y(t)(λ − λ0).
Then ψ(t0, λ) = 0 and (2.14) can be written
|ψ′(t, λ) − fx(t, ϕ(t, λ0), λ0)ψ(t, λ)| ≤ ε|λ − λ0|.
Since the differential equation
y′ = fx(t, ϕ(t, λ0), λ0)y, y(t0) = 0
has the solution y ≡ 0, it follows from Theorem 2.2.5 that
|ψ(t, λ)| ≤ ε|λ − λ0| (e^{Ah} − 1)/A.
Therefore ϕλ(t, λ0) exists and equals y(t).
Theorem 2.5.2 (Peano) [C] Let ϕ(t, t0, ξ0) be the unique solution of
dx/dt = f(t, x), x(t0) = ξ0
on J = [a, b], and assume fx exists and is continuous at all points (t, ϕ(t, t0, ξ0)), t ∈ J. Then the system (Eτ,ξ): dx/dt = f(t, x), x(τ) = ξ has a unique solution ϕ(t, τ, ξ) for (τ, ξ) near (t0, ξ0). Moreover ϕξ(t, t0, ξ0) exists for t ∈ J and satisfies the linear homogeneous equation
dY/dt = fx(t, ϕ(t, t0, ξ0))Y, Y(t0) = I.
Proof. Put x = u + ξ. Then the I.V.P.
dx/dt = f(t, x), x(t0) = ξ
is transformed into the I.V.P.
du/dt = f(t, u + ξ), u(t0) = 0,
with ξ as parameter. The first part of the theorem then follows from Theorem 2.5.1. From Theorem 2.5.1 we have
d/dt [du/dξ (t, ξ0)] = fx(t, u(t, ξ0) + ξ0) [du/dξ (t, ξ0) + I],
or
d/dt [dx/dξ (t, ξ0)] = fx(t, x(t, ξ0)) [dx/dξ (t, ξ0)].

2.6
Differential Inequality
Differential inequalities are a tool for estimating bounds on solutions; they are usually in scalar form. In the following, we shall apply the property of continuous dependence on initial conditions and parameters to obtain differential inequalities.
Theorem 2.6.1 Let x(t) be a scalar, differentiable function satisfying
x′(t) ≥ f(t, x(t)), a ≤ t ≤ b, x(t0) ≥ ξ0, t0 ∈ [a, b], (2.15)
or
x′(t) ≤ f(t, x(t)), a ≤ t ≤ b, x(t0) ≤ ξ0, t0 ∈ [a, b]. (2.16)
Let ϕ(t) be the solution of x′ = f(t, x), x(t0) = ξ0. Then ϕ(t) ≤ x(t) (in case (2.15)) or ϕ(t) ≥ x(t) (in case (2.16)) on [a, b].
Proof. Consider the following I.V.P.:
x′n = f(t, xn) + 1/n, xn(t0) = ξ0 + 1/n.
By continuous dependence on initial conditions and parameters, xn(t) → ϕ(t) uniformly on [a, b]. We shall only consider case (2.16); the other case, (2.15), follows by a similar argument. It suffices to show that x(t) ≤ xn(t) on [a, b] for n sufficiently large. If not, there exist a large n and t1, t2 with t0 < t1 < t2 < b satisfying
x(t) > xn(t) on (t1, t2), x(t1) = xn(t1).
Fig.2.4
Then for t > t1, t near t1,
(x(t) − x(t1))/(t − t1) > (xn(t) − xn(t1))/(t − t1). (2.17)
Letting t → t1, we have
x′(t1) ≥ x′n(t1) = f(t1, xn(t1)) + 1/n = f(t1, x(t1)) + 1/n > f(t1, x(t1)).
This contradicts assumption (2.16).
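A concrete check of Theorem 2.6.1, case (2.16), can be run numerically. The sketch below (the example x′ = x − 1 and the RK4 integrator are our choices) compares a function satisfying x′ ≤ x, x(0) = 1, with ϕ(t) = e^t, the solution of ϕ′ = ϕ, ϕ(0) = 1, and confirms x(t) ≤ ϕ(t) on the whole grid.

```python
import numpy as np

def rk4(f, x0, t0, t1, n=1000):
    """RK4 path of a scalar ODE on [t0, t1]."""
    t, x, h = t0, float(x0), (t1 - t0) / n
    path = [x]
    for _ in range(n):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
        path.append(x)
    return np.array(path)

# x'(t) = x - 1 <= x = f(t, x), x(0) = 1, so Theorem 2.6.1 (case (2.16))
# forces x(t) <= phi(t) = e^t on [0, 2].
xs = rk4(lambda t, x: x - 1.0, 1.0, 0.0, 2.0)     # here x(t) stays at 1
phis = rk4(lambda t, x: x, 1.0, 0.0, 2.0)          # phi grows like e^t
print(bool(np.all(xs <= phis)))
```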
Corollary 2.6.1 Let Dr x(t) = lim_{h→0+} (x(t + h) − x(t))/h and Dℓ x(t) = lim_{h→0−} (x(t + h) − x(t))/h. Then the conclusions of Theorem 2.6.1 hold if we replace x′(t) in (2.16) by Dr x(t) and x′(t) in (2.15) by Dℓ x(t).
Example 2.6.1 Consider the Lotka–Volterra two-species competition model
dx1/dt = r1 x1 (1 − x1/K1) − α x1 x2,
dx2/dt = r2 x2 (1 − x2/K2) − β x1 x2,
x1(0) > 0, x2(0) > 0.
We first verify that x1(t) > 0, x2(t) > 0 for all t > 0. Then we have
dxi/dt ≤ ri xi (1 − xi/Ki), xi(0) > 0.
Then xi(t) ≤ Ki + ε for t ≥ T, for some T sufficiently large.
For systems of differential equations, we have the following comparison theorem.
Lemma 2.6.1 Let x(t) = (x1(t), · · · , xn(t)) ∈ Rn be differentiable. Then Dr|x(t)| ≤ |x′(t)|.
Proof. For h > 0, from the triangle inequality, we have
(|x(t + h)| − |x(t)|)/h ≤ |(x(t + h) − x(t))/h|.
Letting h → 0+ completes the proof.
Theorem 2.6.2 (Comparison theorem) Let F(t, v) be continuous and |f(t, x)| ≤ F(t, |x|). Let ϕ(t) be a solution of the system dx/dt = f(t, x) and v(t) be the solution of the scalar equation dv/dt = F(t, v), v(t0) = η, with |ϕ(t0)| ≤ η. Then |ϕ(t)| ≤ v(t) for all t.
Proof. Let u(t) = |ϕ(t)|. Then
Dr u(t) = Dr|ϕ(t)| ≤ |ϕ′(t)| = |f(t, ϕ(t))| ≤ F(t, |ϕ(t)|) = F(t, u(t)), u(t0) = |ϕ(t0)| ≤ η.
Then |ϕ(t)| ≤ v(t) for all t.
Corollary 2.6.2 Let f(t, x) = A(t)x + h(t), where A(t) ∈ Rn×n and h(t) ∈ Rn are continuous on R. Then we have global existence for the solution of the I.V.P. dx/dt = f(t, x), x(t0) = x0.
Proof. Since
|f(t, x)| ≤ ‖A(t)‖ |x| + |h(t)| ≤ max_{t0≤τ≤t} {‖A(τ)‖, |h(τ)|} (|x| + 1) = g(t)ψ(|x|), ψ(u) = u + 1.
Claim: the solution of
du/dt = g(t)ψ(u), u(t0) = |x0| = u0
exists globally on R. If not, assume the maximal interval of existence is [t0, b). Then
∫_{u0}^{+∞} du/ψ(u) = ∫_{t0}^{b} g(t) dt.
The left-hand side is infinite while the right-hand side is finite; thus we obtain a contradiction. Hence u(t) exists on R. From Theorem 2.6.2 we have |x(t)| ≤ u(t) on R, and x(t) exists for all t by Theorem 2.3.1.
Now we consider differential inequalities for a certain special type of system. Let R+^n = {x = (x1, · · · , xn) ∈ Rn : xi ≥ 0, i = 1, · · · , n} be the nonnegative cone. Define the following partial orders:
x ≤ y if y − x ∈ R+^n, i.e. xi ≤ yi for all i;
x < y if x ≤ y and x ≠ y;
x ≪ y if xi < yi for all i.
Definition 2.6.1 We say f = (f1, · · · , fn) : D ⊆ Rn → Rn is of type K on D if for each i, fi(a) ≤ fi(b) for any a, b ∈ D satisfying a ≤ b and ai = bi.
Theorem 2.6.3 (Kamke) [C] Let f(t, x) be of type K for each t and let x(t) be a solution of dx/dt = f(t, x) on [a, b]. If y(t) is continuous on [a, b] and satisfies Dr y(t) ≥ f(t, y) and y(a) ≥ x(a), then y(t) ≥ x(t) for a ≤ t ≤ b. If z(t) is continuous on [a, b] and satisfies Dℓ z(t) ≤ f(t, z) and z(a) ≤ x(a), then z(t) ≤ x(t) for a ≤ t ≤ b.
Proof. We only prove the case for z(t). First we prove that if Dℓ z(t) ≪ f(t, z(t)) and z(a) ≪ x(a), then z(t) ≪ x(t) for a ≤ t ≤ b. If not, let c be the least number in [a, b] such that z(t) ≪ x(t) for a ≤ t < c, while z(c) ≤ x(c) and zi(c) = xi(c) for some 1 ≤ i ≤ n. Then
Dℓ zi(c) < fi(c, z(c)) ≤ fi(c, x(c)) = x′i(c).
Since zi(c) = xi(c), it follows that zi(t) > xi(t) for certain values of t less than and near c. This contradicts the definition of c.
Now we are in a position to complete the proof of Theorem 2.6.3. Assume Dℓ z(t) ≤ f(t, z(t)), z(a) ≤ x(a). We want to show z(t) ≤ x(t) for a ≤ t ≤ b. Let c be the greatest value of t such that z(s) ≤ x(s) for a ≤ s ≤ t, and suppose, contrary to the theorem, that c < b. Choose a vector ε ≫ 0 and let ϕn(t) be the solution of the I.V.P.
dw/dt = f(t, w) + ε/n, w(c) = x(c) + ε/n, c ≤ t ≤ c + δ.
Then ϕn(t) → x(t) on [c, c + δ] as n → ∞ and z(t) < ϕn(t) on [c, c + δ]. Letting n → ∞, z(t) ≤ x(t) on [c, c + δ]. This contradicts the definition of c. Hence c = b and z(t) ≤ x(t) on [a, b].
Remark 2.6.1 Kamke's theorem is an essential tool for studying the behavior of cooperative and competitive systems [H] and monotone flows [S].
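Kamke's theorem can be illustrated on a concrete cooperative system. The sketch below is ours (the linear example, integrator, and initial data are not from the text): for x′ = Ax with nonnegative off-diagonal entries the vector field is of type K, so componentwise-ordered initial conditions stay ordered under the flow.

```python
import numpy as np

# Cooperative linear system: off-diagonal entries of A are nonnegative,
# so f(x) = A x is of type K and the flow preserves the order x <= y.
A = np.array([[-1.0, 1.0],
              [1.0, -1.0]])

def rk4_vec(x0, t1, n=1000):
    """RK4 for x' = A x; returns x(t1)."""
    x, h = np.array(x0, dtype=float), t1 / n
    for _ in range(n):
        k1 = A @ x
        k2 = A @ (x + h/2*k1)
        k3 = A @ (x + h/2*k2)
        k4 = A @ (x + h*k3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return x

x_t = rk4_vec([0.0, 1.0], 2.0)
y_t = rk4_vec([0.5, 1.5], 2.0)   # y(0) >= x(0) componentwise
print(bool(np.all(y_t >= x_t)))  # the ordering survives at t = 2
```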
Exercises
Exercise 2.1 Find a bounded sequence {fm}, m = 1, 2, · · ·, in C(I), I = [a, b], with no convergent subsequence.
Exercise 2.2 Let J = [a, b], a, b < ∞, and let F be a subset of C(J). Show that if each sequence in F contains a uniformly convergent subsequence, then F is both equicontinuous and uniformly bounded.
Exercise 2.3 Show that if g ∈ C¹(R) and f ∈ C(R), then the solution of the I.V.P.
y′′ + f(y)y′ + g(y) = 0, y(t0) = A, y′(t0) = B
exists locally, is unique, and can be continued so long as y and y′ remain bounded. Hint: use a transformation.
Exercise 2.4 If f : W ⊆ Rn → Rn is locally Lipschitz and A ⊆ W is a compact set, then f restricted to A is Lipschitz.
Exercise 2.5 Prove that any solution of the system x′ = f(t, x) can be extended indefinitely if |f(t, x)| ≤ k|x| for all t and all |x| ≥ r, where r, k > 0.
Exercise 2.6 Consider the pendulum equation with constant torque, θ̈ = a − sin θ, θ(0, a) = 0, θ̇(0, a) = 0. Compute (d/da)θ(t, a)|_{a=0}.
Exercise 2.7 Let f : R → R be C^k with f(0) = 0. Then f(x) = xg(x), g ∈ C^{k−1}.
Exercise 2.8 Consider ẋ = f(x), where f : D ⊆ Rn → Rn is of type K on D and x0, y0 ∈ D with x0 ≤ y0. Show that ϕ(t, x0) ≤ ϕ(t, y0) wherever the solutions ϕ(t, x0), ϕ(t, y0) are both defined.
With x(0) > 0, y(0) > 0, show that the solutions x(t), y(t) are defined for all t > 0 and are positive and bounded for all t > 0.
Exercise 2.11 Consider the equation ẋ = f(t, x), |f(t, x)| ≤ φ(t)|x| for all t, x, with ∫^∞ φ(t) dt < ∞.
(a) Prove that every solution approaches a constant as t → ∞.
(b) If, in addition, |f(t, x) − f(t, y)| ≤ φ(t)|x − y| for all x, y, prove there is a one-to-one correspondence between the initial values and the limit values of the solutions. (Hint: first take the initial time sufficiently large to obtain the desired correspondence.)
(c) Does the above result imply anything for the equation
ẋ = −x + a(t)x, ∫^∞ |a(t)| dt < ∞?
(Hint: consider the transformation x = e^{−t} y.)
(d) Does this imply anything about the system
ẋ1 = x2, ẋ2 = −x1 + a(t)x1, ∫^∞ |a(t)| dt < ∞,
where x1, x2 are scalars?
Exercise 2.12 Consider the initial value problem z̈ + α(z, ż)ż + β(z) = u(t), z(0) = ξ, ż(0) = η, with α(z, w), β(z) continuous together with their first partial derivatives for all z, w, u(t) continuous and bounded on (−∞, ∞), α ≥ 0, zβ(z) ≥ 0. Show there is one and only one solution to this problem and that the solution can be defined on [0, ∞). (Hint: write the equations as a system by letting z = x, ż = y, define V(x, y) = y²/2 + ∫_0^x β(s) ds, and study the rate of change of V(x(t), y(t)) along the solutions of the two-dimensional system.)
Exercise 2.13 Let g : R → R be Lipschitz and f : R → R continuous. Show that the system
x′ = g(x), y′ = f(x)y
has at most one solution on any interval, for a given initial value. (Hint: use Gronwall's inequality.)
Exercise 2.14 Consider the differential equation x′ = x^{2/3}.
(a) Show there are infinitely many solutions satisfying x(0) = 0 on every interval [0, β].
(b) For what values of α are there infinitely many solutions on [0, α] satisfying x(0) = −1?
Exercise 2.15 Let f ∈ C¹ on the (t, x, y) set given by 0 ≤ t ≤ 1 and all x, y. Let ϕ be a solution of the second-order equation x′′ = f(t, x, x′) on [0, 1], with ϕ(0) = a, ϕ(1) = b. Suppose ∂f/∂x > 0 for t ∈ [0, 1] and for all x, y. Prove
that if β is near b, then there exists a solution ψ of x′′ = f(t, x, x′) such that ψ(0) = a, ψ(1) = β.
Hint: Consider the solution θ (as a function of (t, α)) with initial values θ(0, α) = a, θ′(0, α) = α. Let ϕ′(0) = α0. Then for |α − α0| small, θ exists for t ∈ [0, 1]. Let
u(t) = (∂θ/∂α)(t, α0).
Then
u′′ − (∂f/∂y)(t, ϕ(t), ϕ′(t))u′ − (∂f/∂x)(t, ϕ(t), ϕ′(t))u = 0,
where u(0) = 0, u′(0) = 1. Because ∂f/∂x > 0, u is monotone nondecreasing and thus u(1) = (∂θ/∂α)(1, α0) > 0. Thus the equation θ(1, α) − β = 0 can be solved for α as a function of β for (α, β) in a neighborhood of (α0, b).
References
[C] W.A. Coppel, Stability and Asymptotic Behavior of Differential Equations.
[H] M. Hirsch, Systems of differential equations which are competitive or cooperative I: limit sets, SIAM J. Math. Analysis (1982), pp. 167–179.
[Har] Hartman, Ordinary Differential Equations.
[S] H. Smith, Monotone Dynamical Systems, AMS Monographs vol. 41, 1995.
[IK] E. Isaacson and H. Keller, Analysis of Numerical Methods.
[CL] Coddington and N. Levinson, Ordinary Differential Equations.
[M] R.K. Miller, Ordinary Differential Equations.
[Ha] J. Hale, Ordinary Differential Equations.
Chapter 3
LINEAR SYSTEMS

3.1
Introduction
In this chapter we first study the general properties of the linear homogeneous system [H]
dx/dt = A(t)x (LH)
and the linear nonhomogeneous system
dx/dt = A(t)x + g(t), (LN)
where A(t) = (aij(t)) ∈ Rn×n and g(t) ∈ Rn are continuous on R. There are two important cases, namely:
(i) A(t) ≡ A is an n × n constant matrix;
(ii) A(t + ω) = A(t), i.e. A(t) is a periodic matrix.
Example 3.1.1 From elementary ordinary differential equations we know that the solutions of the scalar equations
x′ = a(t)x, x(0) = x0
and
x′ = ax, x(0) = x0, a ∈ R,
are x(t) = x0 exp(∫_0^t a(s) ds) and x(t) = x0 e^{at} respectively. For the systems of linear equations
dx/dt = Ax, x(0) = x0 and dx/dt = A(t)x, x(0) = x0,
is it true that x(t) = e^{At} x0 and x(t) = exp(∫_0^t A(s) ds) x0 (where e^{At} will be defined in Section 3.3)?
Remark 3.1.1 The linear system has many applications in linear control theory. It is very important to understand linear systems for the study of nonlinear systems. For instance, if we linearize about an equilibrium x∗ of a nonlinear system dx/dt = f(x), then the linear system x′ = Ax, A = Df(x∗), provides the local behavior of the solutions of the nonlinear system in a neighborhood of x∗. To understand the local behavior of the solutions near a periodic orbit {x∗(t)}, 0 ≤ t ≤ T,
the linearization yields the linear periodic system dx/dt = A(t)x, A(t) = Df(x∗(t)). This is the orbital stability we shall study later. Also there are some famous equations, like Mathieu's equation and Hill's equation, which are second-order linear periodic equations.
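The question raised in Example 3.1.1 has a negative answer in general: for a genuinely time-dependent A(t), the guess x(t) = exp(∫_0^t A(s) ds) x0 fails because A(t) need not commute with its own integral. A numerical sketch (our own example A(t) = [[0, 1], [t, 0]], RK4 integrator, and Taylor-series matrix exponential; none of these appear in the text):

```python
import numpy as np

def A(t):
    # a time-dependent matrix that does not commute with its integral
    return np.array([[0.0, 1.0], [t, 0.0]])

def expm(M, terms=40):
    """Plain Taylor series for e^M; adequate for small, well-scaled M."""
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

# integrate the matrix ODE X' = A(t) X, X(0) = I, by RK4 up to t = 2
X, t, h, n = np.eye(2), 0.0, 0.001, 2000
for _ in range(n):
    k1 = A(t) @ X
    k2 = A(t + h/2) @ (X + h/2*k1)
    k3 = A(t + h/2) @ (X + h/2*k2)
    k4 = A(t + h) @ (X + h*k3)
    X = X + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

# the guessed formula: exp of the integral of A over [0, 2]
guess = expm(np.array([[0.0, t], [t*t/2, 0.0]]))
print(np.max(np.abs(X - guess)))  # clearly nonzero: the guess is wrong
```

For constant A the two expressions coincide, which is exactly case (i) above.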
3.2
Fundamental Matrices
In this section we shall study the structure of the solutions of (LH) and (LN).
Theorem 3.2.1 The set V of all solutions of (LH) on J = (−∞, ∞) forms an n-dimensional vector space.
Proof. It is easy to verify that V is a vector space over C. We show dim V = n. Let ϕi(t), i = 1, · · · , n, be the unique solution of dx/dt = A(t)x, x(0) = ei = (0, · · · , 1, · · · , 0)^T. Then {ϕi}, i = 1, · · · , n, are linearly independent: if Σ_i αiϕi = 0, then Σ_i αiϕi(t) = 0 for all t, and setting t = 0 yields Σ_i αiei = 0, i.e., αi = 0 for all i = 1, 2, · · · , n. Next we show that any solution ϕ of (LH) can be generated by ϕ1, · · · , ϕn. Let ϕ(0) = ξ = (ξ1, · · · , ξn)^T. Then y(t) = Σ_i ξiϕi(t) is a solution of (LH) with y(0) = ξ. From uniqueness of solutions, y(t) ≡ ϕ(t) for all t, i.e., ϕ = Σ_i ξiϕi.
Next we introduce fundamental matrices.
Definition 3.2.1 Let ϕ1, · · · , ϕn be n linearly independent solutions of (LH) on R. We call the matrix Φ = [ϕ1, · · · , ϕn] ∈ Rn×n a fundamental matrix of (LH). There are infinitely many fundamental matrices.
Consider the matrix equation
X′ = A(t)X, (3.1)
where X = (xij(t)) and X′ = (x′ij(t)).
Theorem 3.2.2 A fundamental matrix Φ(t) of (LH) satisfies (3.1).
Proof. Φ′(t) = [ϕ′1(t), · · · , ϕ′n(t)] = [A(t)ϕ1, · · · , A(t)ϕn] = A(t)[ϕ1, · · · , ϕn] = A(t)Φ(t).
Obviously, for any ξ ∈ Rn, Φ(t)ξ is a solution of (LH), since (Φ(t)ξ)′ = Φ′(t)ξ = A(t)(Φ(t)ξ). In the following we prove Abel's formula, which generalizes the Wronskian identity for n-th order linear equations.
Theorem 3.2.3 (Liouville's formula, or Abel's formula) Let Φ(t) be a solution of (3.1) on J = (−∞, ∞). Then
det Φ(t) = det Φ(τ) exp(∫_τ^t tr A(s) ds). (3.2)
Proof. Let Φ(t) = (ϕij(t)), A(t) = (aij(t)). From Φ′ = A(t)Φ we have ϕ′ij = Σ_{k=1}^{n} aik ϕkj. By induction (exercise), differentiating det Φ(t) one row at a time gives
(d/dt) det Φ(t) = Σ_{i=1}^{n} det Φi(t),
where Φi(t) is Φ(t) with its i-th row replaced by (ϕ′i1, · · · , ϕ′in). Substituting ϕ′ij = Σ_k aik ϕkj and subtracting, by row operations, the multiples of the other rows, the i-th determinant reduces to aii(t) det Φ(t). Hence
(d/dt) det Φ(t) = a11(t) det Φ(t) + · · · + ann(t) det Φ(t) = tr(A(t)) det Φ(t),
and therefore
det Φ(t) = det Φ(τ) exp(∫_τ^t tr(A(s)) ds).
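Liouville's formula (3.2) is easy to verify numerically. The sketch below (our own test matrix A(t) = [[−1, t], [sin t, 2]], with tr A = 1, and a hand-written RK4 integrator) integrates X′ = A(t)X, X(0) = I, and compares det X(1) with exp(∫_0^1 tr A(s) ds) = e.

```python
import numpy as np

def A(t):
    return np.array([[-1.0, t], [np.sin(t), 2.0]])  # trace is identically 1

# integrate the matrix equation X' = A(t) X, X(0) = I, up to t = 1 by RK4
X, t, h = np.eye(2), 0.0, 0.0005
for _ in range(2000):
    k1 = A(t) @ X
    k2 = A(t + h/2) @ (X + h/2*k1)
    k3 = A(t + h/2) @ (X + h/2*k2)
    k4 = A(t + h) @ (X + h*k3)
    X = X + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

abel = np.exp(t)  # det X(0) * exp(int_0^1 tr A(s) ds) = e^1
print(np.linalg.det(X), abel)  # agree to integration accuracy
```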
Remark 3.2.1 Abel's formula can be interpreted geometrically as follows. From (3.1) and Theorem 3.2.2 we have Φ′ = A(t)Φ, so (3.1) describes the evolution of the vectors ϕ1(τ), · · · , ϕn(τ). Liouville's formula then describes the evolution of the volume of the parallelepiped generated by ϕ1(τ), · · · , ϕn(τ).
Theorem 3.2.4 A solution Φ(t) of (3.1) is a fundamental matrix of (LH) iff det Φ(t) ≠ 0 for all t.
Proof. From (3.2), det Φ(t) ≠ 0 for all t iff det Φ(t) ≠ 0 for some t. If Φ(t) is a fundamental matrix, then ϕ1, · · · , ϕn are linearly independent and hence det Φ(t) ≠ 0 for all t. Conversely, if Φ(t) satisfies (3.1) and det Φ(t) ≠ 0 for all t, then ϕ1, · · · , ϕn are linearly independent and Φ(t) is a fundamental matrix of (LH).
Example 3.2.1 Consider the n-th order linear equation
x^{(n)}(t) + a1(t)x^{(n−1)}(t) + · · · + an(t)x(t) = 0. (3.3)
It can be reduced to a first-order linear system. Let x1 = x, x2 = x′, · · · , xn = x^{(n−1)}. Then we have

    (x1, x2, · · · , xn)′ =
    [   0        1        0      · · ·    0
        0        0        1      · · ·    0
        ·                                 ·
        0        0        0      · · ·    1
     −an(t)  −an−1(t)     · · ·        −a1(t) ] (x1, x2, · · · , xn)^T.   (3.4)
If ϕ1(t), ϕ2(t), · · · , ϕn(t) are n linearly independent solutions of (3.3), then

    (ϕ1, ϕ′1, · · · , ϕ1^{(n−1)})^T, · · · , (ϕn, ϕ′n, · · · , ϕn^{(n−1)})^T

are n linearly independent solutions of (3.4). Let

    Φ(t) =
    [ ϕ1            · · ·   ϕn
      ϕ′1           · · ·   ϕ′n
      ·                      ·
      ϕ1^{(n−1)}    · · ·   ϕn^{(n−1)} ],

and W(ϕ1, · · · , ϕn) = det Φ(t) is the Wronskian of (3.3). From (3.2) we have

    W(ϕ1, · · · , ϕn)(t) = W(ϕ1, · · · , ϕn)(t0) · exp(−∫_{t0}^{t} a1(s) ds).
Remark 3.2.2 It is very difficult to compute the fundamental matrix Φ(t) for a general A(t). In the case of constant coefficients, A(t) ≡ A, we shall show in the next section that e^{At} is a fundamental matrix.
Theorem 3.2.5 Let Φ(t) be a fundamental matrix of (LH) and let C ∈ Rn×n be nonsingular. Then Φ(t)C is also a fundamental matrix. Conversely, if Ψ(t) is also a fundamental matrix, then there exists a nonsingular matrix P such that Ψ(t) = Φ(t)P for all t.
Proof. Since
(Φ(t)C)′ = Φ′(t)C = A(t)(Φ(t)C) and det(Φ(t)C) = det Φ(t) · det C ≠ 0, Φ(t)C is a fundamental matrix. Now consider Φ^{−1}(t)Ψ(t); we want to show (d/dt)(Φ^{−1}(t)Ψ(t)) ≡ 0, and then we complete the proof by setting P ≡ Φ^{−1}(t)Ψ(t). We have
(d/dt)(Φ^{−1}Ψ) = (Φ^{−1})′Ψ + Φ^{−1}Ψ′.
Since ΦΦ^{−1} = I, we have Φ′Φ^{−1} + Φ(Φ^{−1})′ = 0, so (Φ^{−1})′ = −Φ^{−1}Φ′Φ^{−1}. Then
(d/dt)(Φ^{−1}Ψ) = −Φ^{−1}Φ′Φ^{−1}Ψ + Φ^{−1}Ψ′ = −Φ^{−1}AΦΦ^{−1}Ψ + Φ^{−1}AΨ = 0.
Example 3.2.2 The solution ϕ(t) of the scalar equation
x′(t) = ax(t) + g(t), x(τ) = ξ
can be found by integrating:
x(t) = e^{a(t−τ)}ξ + ∫_τ^t e^{a(t−η)}g(η) dη.
In the following we shall discuss the "variation of constants formula," which is very important in linearized stability theory.
Theorem 3.2.6 (Variation of constants formula) Let Φ(t) be a fundamental matrix of (LH) and let ϕ(t, τ, ξ) be the solution of (LN):
dx/dt = A(t)x + g(t), x(τ) = ξ. (LN)
Then
ϕ(t, τ, ξ) = Φ(t)Φ^{−1}(τ)ξ + ∫_τ^t Φ(t)Φ^{−1}(η)g(η) dη. (3.5)
Proof. Let ψ(t) be the right-hand side of (3.5). Then
ψ′(t) = Φ′(t)Φ^{−1}(τ)ξ + g(t) + ∫_τ^t Φ′(t)Φ^{−1}(η)g(η) dη = A(t)[Φ(t)Φ^{−1}(τ)ξ + ∫_τ^t Φ(t)Φ^{−1}(η)g(η) dη] + g(t) = A(t)ψ(t) + g(t).
Since ψ(τ) = ξ, we have ψ(t) ≡ ϕ(t, τ, ξ) for all t, by the uniqueness of solutions. This completes the proof.
Remark 3.2.3 Setting ξ = 0, ϕp(t) = ∫_τ^t Φ(t)Φ^{−1}(η)g(η) dη is a particular solution of (LN); obviously Φ(t)Φ^{−1}(τ)ξ is a general solution of (LH). Thus we have: general solution of (LN) = general solution of (LH) + particular solution of (LN).
Remark 3.2.4 If A(t) ≡ A, then Φ(t) = e^{At} and ϕ(t, τ, ξ) = e^{A(t−τ)}ξ + ∫_τ^t e^{A(t−η)}g(η) dη, which is often used in the theory of linearization.
3.3
Linear Systems with Constant Coefficients
In this section we shall study the linear system with constant coefficients,
x′ = Ax, A = (aij) ∈ Rn×n, x(0) = x0. (LC)
It is easy to guess that the solution should be x(t) = e^{At}x0. We first need to define the exponential matrix e^A.
Definition 3.3.1 For A ∈ Rn×n,
e^A = I + A + A²/2! + · · · + A^n/n! + · · · = Σ_{n=0}^{∞} A^n/n!.
In order to show that this series of matrices is well defined, we need to define the norm of a matrix A.
Definition 3.3.2 Let A ∈ Rm×n, A = (aij). Define
‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖.
Remark 3.3.1 Obviously we have
‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} = sup{‖Ax‖ : ‖x‖ ≤ 1}.
It is well known from linear algebra ([IK], p. 9) that, for x = (x1, · · · , xn) and A = (aij): if ‖x‖∞ = max_i |xi|, then ‖A‖∞ = max_i Σ_k |aik|; if ‖x‖1 = Σ_i |xi|, then ‖A‖1 = max_k Σ_i |aik|; if ‖x‖2 = (Σ_i |xi|²)^{1/2}, then ‖A‖2 = (ρ(A^T A))^{1/2}, where ρ(A^T A) = spectral radius of A^T A = max{|λ| : λ ∈ σ(A^T A)} and σ(A^T A) = spectrum of A^T A = set of eigenvalues of A^T A.
From the definition of ‖A‖ it follows that
‖Ax‖ ≤ ‖A‖ ‖x‖, A ∈ Rm×n, x ∈ Rn;
‖A + B‖ ≤ ‖A‖ + ‖B‖, A, B ∈ Rm×n;
‖cA‖ = |c| ‖A‖, c ∈ R;
‖AB‖ ≤ ‖A‖ ‖B‖, A ∈ Rm×n, B ∈ Rn×p;
‖A^n‖ ≤ ‖A‖^n, A ∈ Rm×m.
Theorem 3.3.1 e^A is well defined and ‖e^A‖ ≤ e^{‖A‖}.
Proof. Since e^a = Σ_{n=0}^{∞} a^n/n! converges for any a ∈ R, from
‖Σ_{n=m}^{m+p} A^n/n!‖ ≤ Σ_{n=m}^{m+p} ‖A‖^n/n!
the partial sums form a Cauchy sequence, and we complete the proof.
Next we state and prove the basic properties of e^A.
Theorem 3.3.2 (i) e^O = I. (ii) If AB = BA, then e^{A+B} = e^A e^B. (iii) e^A is invertible and (e^A)^{−1} = e^{−A}. (iv) If B = PAP^{−1}, then e^B = P e^A P^{−1}.
Proof. (i) follows directly from the definition of e^A.
(ii) Since AB = BA, we have (A + B)^n = Σ_{m=0}^{n} C(n, m) A^m B^{n−m}, and
e^{A+B} = Σ_{n=0}^{∞} (A + B)^n/n! = Σ_{n=0}^{∞} (1/n!) Σ_{m=0}^{n} [n!/(m!(n−m)!)] A^m B^{n−m} = Σ_{n=0}^{∞} Σ_{j+k=n} (A^j/j!)(B^k/k!) = (Σ_{j=0}^{∞} A^j/j!)(Σ_{k=0}^{∞} B^k/k!) = e^A e^B.
(iii) Since A(−A) = (−A)A, (ii) gives I = e^O = e^{A+(−A)} = e^A e^{−A}, and (iii) follows.
(iv) Since B = PAP^{−1}, we have B^k = PA^kP^{−1} and Σ_{k=0}^{n} B^k/k! = P (Σ_{k=0}^{n} A^k/k!) P^{−1}. Letting n → ∞ completes the proof of (iv).
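The commutativity hypothesis in property (ii) is essential, and this is easy to see numerically. A sketch (our own Taylor-series matrix exponential and test matrices, not from the text): for a noncommuting pair e^{A+B} ≠ e^A e^B, while for a commuting pair the identity holds.

```python
import numpy as np

def expm(M, terms=40):
    """Taylor series for the matrix exponential; fine for small norms."""
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

# AB != BA, and indeed exp(A+B) != exp(A) exp(B)
lhs, rhs = expm(A + B), expm(A) @ expm(B)
print(np.max(np.abs(lhs - rhs)))   # a gap of order 0.5

# but A and 2A commute, so property (ii) holds exactly:
gap = np.max(np.abs(expm(3*A) - expm(A) @ expm(2*A)))
print(gap)
```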
Theorem 3.3.3 e^{At} is a fundamental matrix of x′ = Ax.
Proof. Let Y(t) = e^{At}. Then
Y(t + h) − Y(t) = e^{A(t+h)} − e^{At} = (e^{Ah} − I)e^{At} = (hA + O(h²))e^{At}.
Thus we have
Y′(t) = lim_{h→0} (Y(t + h) − Y(t))/h = Ae^{At} = AY(t).
Since also det Y(0) = det I ≠ 0, Theorem 3.2.4 shows that Y(t) = e^{At} is a fundamental matrix.
Consider the linear nonhomogeneous system
x′ = Ax + g(t), x(τ) = ξ. (3.6)
From (3.5) and Theorem 3.3.3 we have the following important variation of constants formula: the solution of (3.6) is
ϕ(t, τ, ξ) = e^{A(t−τ)}ξ + ∫_τ^t e^{A(t−η)}g(η) dη. (3.7)
Formula (3.7) is very important in showing that linearized stability implies nonlinear stability for an equilibrium solution of the nonlinear system x′ = f(x). Since the solution of the I.V.P. of (LC) is x(t) = e^{At}x0, it is important to estimate |x(t)| by |x(t)| ≤ ‖e^{At}‖ |x0|. Thus we need to estimate ‖e^{At}‖, and hence to compute e^{At}. In the following we compute e^{At}.
Example 3.3.1 If A = D = diag(λ1, · · · , λn), then by the definition of e^{At} we have
e^{At} = I + At + A²t²/2! + · · · = diag(e^{λ1 t}, · · · , e^{λn t}).
CHAPTER 3. LINEAR SYSTEMS
Example 3.3.2 If A is diagonalizable, then we have D = P −1 AP = diag (λ1 , · · · , λn ) for some nonsingular P or AP = P D. Obviously P = [x1 , · · · , xn ] satisfies Axk = λk xk , k = 1, · · · , n. Then from Theorem 3.3.2 (iv) eDt = P −1 eAt P or eAt = P eDt P −1 . Example 3.3.3 Let J be the Jordan block of A, J =
J = P −1 AP,
J0
J1 ..
. Js
where J0 = diag (λ1 , · · · , λk ) and
Ji =
λi
1 .. .
O
O
..
.
..
.
. 1 λi
and eJt = P −1 eAt P . In order to evaluate k eJt k we need to review the Jordan forms. Review of Jordan forms: Let A ∈ Rn×n and the characteristic polynomial of A, is f (λ) = det(λI − A) = (λ − λ1 )n1 · · · (λ − λs )ns where λ1 , · · · , λs are distinct eigenvalues of A. We call ni the algebraic multiplicity of λi . Let Vi = {v ∈ C n : Av = λi v}. We call Vi is the eigenspace of λi and mi = dim Vi , the geometric multiplicity of λi . It is a well-known fact that mi ≤ ni . Obviously Vi = Null (A − λi I) = N (A − λi I) and N (A − λi I) ⊆ N (A − λi I)2 ⊆ · · ·. Theorems of linear algebra shows [HS] N (A − λi I)ni = N (A − λi I)ni +1 ⊆ C n . Let M (λi , A) = N (A − λi I)ni . We call M (λi , A) the generalized eigenspace of λi . We say an eigenvalue λ has simple elementary divisor if M (λ, A) = N (A − λI), i.e., algebraic multiplicity of λ is equal to the geometric multiplicity. From theorems of linear algebra, we have Theorem 3.3.4 ([HS]) Let C n = M (λ1 , A) ⊕ · · · ⊕ M (λs , A). To understand the structure of Jordan blocks we, for simplicity, we assume that A is an n×n matrix has only one eigenvalue λ. Let v1 , · · · , vk be a basis of eigenspace (1) (j) V . Let vi be solution of (A − λI)v = vi and vi be a solution of (j−1)
(A − λI)v = vi
,
j = 2, · · · , `i .
We call {vi^{(j)}}, j = 1, · · · , ℓi, the generalized eigenvectors of the eigenvector vi. It is easy to show that {vi, vi^{(1)}, · · · , vi^{(ℓi)}}, i = 1, · · · , k, is a basis of C^n, with Σ_{i=1}^{k} (ℓi + 1) = n. Let
P = [v1, v1^{(1)}, · · · , v1^{(ℓ1)}, v2, v2^{(1)}, · · · , v2^{(ℓ2)}, · · · , vk, vk^{(1)}, · · · , vk^{(ℓk)}],
P^{−1} = [w1, w1^{(1)}, · · · , w1^{(ℓ1)}, · · · , wk, wk^{(1)}, · · · , wk^{(ℓk)}]^T.
Then it follows that
AP = [Av1, Av1^{(1)}, · · · , Av1^{(ℓ1)}, · · · , Avk, Avk^{(1)}, · · · , Avk^{(ℓk)}]
   = [λv1, λv1^{(1)} + v1, λv1^{(2)} + v1^{(1)}, · · · , λv1^{(ℓ1)} + v1^{(ℓ1−1)}, · · · , λvk, λvk^{(1)} + vk, · · · , λvk^{(ℓk)} + vk^{(ℓk−1)}]
   = λ[v1, v1^{(1)}, · · · , v1^{(ℓ1)}, · · · , vk, vk^{(1)}, · · · , vk^{(ℓk)}] + [0, v1, v1^{(1)}, · · · , v1^{(ℓ1−1)}, 0, v2, · · · , v2^{(ℓ2−1)}, · · · , 0, vk, · · · , vk^{(ℓk−1)}].
Hence
P^{−1}AP = λI + N,
where N is block diagonal with k nilpotent blocks of the form

    [ 0  1
         0  1
            ·  ·
               1
               0 ],

one (ℓi + 1) × (ℓi + 1) block for each i; that is, P^{−1}AP = diag(J1, · · · , Jk), where each Ji has λ on the diagonal and 1 on the superdiagonal.
Thus for the general case we have
P^{−1}AP = diag(J0, J1, · · · , Js), (3.8)
where
J0 = diag(λ1, · · · , λI), with I the number of eigenvalues λ for which nλ = mλ, and
Ji = diag(J̃i1, · · · , J̃iℓ), each J̃ij a Jordan block

    [ λi  1
          λi  ·
              ·  1
                 λi ].   (3.9)

Lemma 3.3.1 Let

    J = λI + N =
    [ λ  1
         λ  ·
            ·  1
               λ ]

be an np × np matrix. Then

    e^{Jt} = e^{λt} ·
    [ 1  t  t²/2!  · · ·  t^{np−1}/(np−1)!
      0  1  t      · · ·  t^{np−2}/(np−2)!
      ·                    ·
      0  0  · · ·          1 ].

Proof. e^{Jt} = e^{(λI+N)t} = e^{λIt} e^{Nt} = e^{λt} e^{Nt}. Since N^{np} = 0, we have e^{Nt} = Σ_{k=0}^{np−1} t^k N^k/k!, and the lemma follows directly
from routine computation. Let Reλ(A) = max{Reλ : λ is an eigenvalue of A}. ° ° Theorem 3.3.5 If Re λ(A) < 0 then there exists α > 0 such that °eAt ° ≤ Ke−αt , t ≥ 0. If Reλ(A) ≤ 0 and those with zero real parts have simple elementary divisors, then |eAt | ≤ M for t ≥ 0 and some M > 0. Proof. Let Reλ(A) < 0. From (3.8), (3.9) and Theorem 3.3.4 it follows that eAt = P eJt P −1 and
° At ° ° °° ° °e ° ≤ kP k °eJt ° °P −1 ° ≤ Ke−αt
where 0 < α < −Re λ(A). When Reλ(A) ≤ 0 and those with zero real parts have simple elementary divisors, then ° At ° ° °° ° °e ° ≤ kP k °eJt ° °P −1 ° ≤ M. Theorem 3.3.6 Let λ1 , · · · , λs be distinct eigenvalues of A and M (λ1 , A), · · · , M (λs , A) be the corresponding generalized eigenspace. Then the solution of (LC) is At
x(t) = e x0 =
"nj −1 s X X j=1
Proof. Let x0 = Then
Ps j=1
k=1
# tk (A − λj I) x0,j eλj t k! k
x0,j , x0,j ∈ M (λj , A),
³ ´ eAt x0 = e(A−λI)t eλt x0 = e(A−λI)t x0 eλt "∞ # k X kt (A − λI) = x0 eλt k! k=0
If x0 ∈ M (λ, A) then e
(A−λI)t 0
x =
nX λ −1 ·
kt
(A − λI)
k=0
k
k!
¸ x0 ,
Hence for any x0 ∈ C n x(t) =
"nj −1 s X X j=1
k=1
# tk (A − λj I) x0,j eλj t k! k
Remark 3.3.2 Given A ∈ R^{n×n}, how do we verify analytically that A is a stable matrix, i.e. that Re λ(A) < 0? Suppose the characteristic polynomial of A is f(z) = det(zI − A) = a_0 z^n + a_1 z^{n−1} + ⋯ + a_n, a_0 = 1. The Routh–Hurwitz criterion provides a necessary and sufficient condition for a real polynomial g(z) = a_0 z^n + ⋯ + a_n, a_0 > 0, to have all roots with negative real parts (see [C] p.158). We list the conditions for n = 2, 3, 4, which are frequently used in applications.
n = 2: f(z) = a_0 z² + a_1 z + a_2; R–H criterion: a_1 > 0, a_2 > 0.
n = 3: f(z) = a_0 z³ + a_1 z² + a_2 z + a_3; R–H criterion: a_1 > 0, a_3 > 0, a_1 a_2 > a_0 a_3.
n = 4: f(z) = a_0 z⁴ + a_1 z³ + a_2 z² + a_3 z + a_4; R–H criterion: a_1 > 0, a_2 > 0, a_4 > 0, a_3(a_1 a_2 − a_0 a_3) > a_1² a_4.
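The n = 3 row is easy to sanity-check against cubics whose roots are known by construction (the helper name below is ours):

```python
def routh_hurwitz_3(a0, a1, a2, a3):
    """Routh-Hurwitz test for a0*z^3 + a1*z^2 + a2*z + a3 with a0 > 0:
    all roots have negative real parts iff a1 > 0, a3 > 0 and a1*a2 > a0*a3."""
    assert a0 > 0
    return a1 > 0 and a3 > 0 and a1 * a2 > a0 * a3

# (z+1)(z+2)(z+3) = z^3 + 6z^2 + 11z + 6 : roots -1, -2, -3 (stable)
# (z-1)(z+2)(z+3) = z^3 + 4z^2 +  z - 6 : has the root +1 (unstable)
```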
3.4 Two Dimensional Linear Autonomous Systems
In this section we shall apply Theorem 3.3.6 to classify the behavior of the solutions of the two-dimensional linear system [H]

ẋ = Ax,   A = [[a, b],[c, d]],   det A ≠ 0,   (3.10)

where a, b, c, d are real constants. Then (0,0) is the unique rest point of (3.10). If λ_1, λ_2 are the eigenvalues of A, there are several cases to consider.

Case 1: λ_1, λ_2 are real and λ_2 < λ_1. Let v^1, v^2 be unit eigenvectors of A associated with λ_1, λ_2 respectively. Then from (3.4), a general real solution of (3.10) is

x(t) = c_1 e^{λ_1 t} v^1 + c_2 e^{λ_2 t} v^2.

Case 1a (Stable node): λ_2 < λ_1 < 0. Let L_1, L_2 be the lines generated by v^1, v^2 respectively. Since λ_2 < λ_1 < 0, x(t) ≈ c_1 e^{λ_1 t} v^1 as t → ∞ and the trajectories are tangent to L_1.
Fig.3.1

Case 1b (Unstable node): 0 < λ_2 < λ_1. Then x(t) ≈ c_1 e^{λ_1 t} v^1 as t → ∞.
Fig.3.2

Case 1c (Saddle point): λ_2 < 0 < λ_1. In this case the origin 0 is called a saddle point, and L_1, L_2 are the unstable and stable manifolds, respectively.
Fig.3.3

Case 2: λ_1, λ_2 are complex. Let λ_1 = α + iβ, λ_2 = α − iβ, and let v^1 = u + iv and v^2 = u − iv be the corresponding complex eigenvectors. Then

x(t) = c e^{(α+iβ)t} v^1 + c̄ e^{(α−iβ)t} v̄^1 = 2 Re(c e^{(α+iβ)t} v^1).

Let c = a e^{iδ}. Then

x(t) = 2a e^{αt} (u cos(βt + δ) − v sin(βt + δ)).
Let U and V be the lines generated by u and v respectively.
Case 2a (Center): α = 0, β ≠ 0.
Fig.3.4
Case 2b (Stable focus, spiral): α < 0, β ≠ 0.
Fig.3.5
Case 2c (Unstable focus, spiral): α > 0, β ≠ 0.
Fig.3.6

Case 3 (Improper nodes): λ_1 = λ_2 = λ.

Case 3a: There are two linearly independent eigenvectors v^1 and v^2. Then

x(t) = (c_1 v^1 + c_2 v^2) e^{λt}.

If λ > 0 (λ < 0), then we have an unstable (stable) improper node.

Fig.3.7

Case 3b: There is only one eigenvector v^1. Then from (3.9),

x(t) = (c_1 + c_2 t) e^{λt} v^1 + c_2 e^{λt} v^2,

where v^2 is any vector independent of v^1.
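Cases 1–3 can be decided directly from tr A and det A, since λ_{1,2} = (tr A ± √((tr A)² − 4 det A))/2. The following sketch of the classification is ours (det A = 0 is excluded, as in (3.10)):

```python
def classify_2d(a, b, c, d, tol=1e-12):
    """Classify the origin for x' = [[a,b],[c,d]] x with det A != 0."""
    tr, det = a + d, a * d - b * c
    assert abs(det) > tol, "det A = 0 is excluded in (3.10)"
    disc = tr * tr - 4.0 * det          # discriminant of the eigenvalues
    if det < 0:
        return "saddle"                                        # Case 1c
    if abs(disc) <= tol:
        return "improper node"                                 # Case 3
    if disc > 0:
        return "stable node" if tr < 0 else "unstable node"    # Cases 1a, 1b
    if abs(tr) <= tol:
        return "center"                                        # Case 2a
    return "stable focus" if tr < 0 else "unstable focus"      # Cases 2b, 2c
```

For instance, A = [[0,1],[1,0]] (eigenvalues ±1) is a saddle, while A = [[0,1],[−1,0]] (eigenvalues ±i) is a center.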
Fig.3.8
3.5 Linear Systems with Periodic Coefficients

In this section, we shall study the linear periodic system

x' = A(t)x,   A(t) = (a_{ij}(t)) ∈ R^{n×n},   (LP)
where A(t) is continuous on R and A(t) = A(t + T) with period T. We shall analyze the structure of the solutions x(t) of (LP). Before we prove the main results, we need the following theorem concerning the logarithm of a nonsingular matrix.

Theorem 3.5.1 Let B be nonsingular. Then there exists A ∈ C^{n×n}, called a logarithm of B, satisfying e^A = B.

Proof. Let B = PJP^{-1}, where J is a Jordan form of B, J = diag(J_0, J_1, …, J_s), with J_0 = diag(λ_1, …, λ_k) and

J_i = [[λ_i, 1, , O],[ , ⋱, ⋱, ],[ , , ⋱, 1],[O, , , λ_i]] ∈ C^{n_i×n_i},   i = 1, …, s.

Since B is nonsingular, λ_i ≠ 0 for all i. If J = e^{Ã} for some Ã ∈ C^{n×n}, then it follows that B = P e^{Ã} P^{-1} = e^{PÃP^{-1}} =: e^A. Hence it suffices to prove the theorem for the Jordan blocks J_i, i = 1, …, s. Write

J_i = λ_i I + N_i,

where N_i is the n_i × n_i nilpotent matrix with 1's on the superdiagonal.
Then N_i^{n_i} = O. From the expansions

log(1 + x) = Σ_{p=1}^{∞} ((−1)^{p+1}/p) x^p,   |x| < 1,   (3.11)

e^{log(1+x)} = 1 + x,   (3.12)

formally we write

log J_i = (log λ_i)I + log(I + N_i/λ_i) = (log λ_i)I + Σ_{p=1}^{∞} ((−1)^{p+1}/p)(N_i/λ_i)^p.

Since N_i^{n_i} = O, the series terminates and we define

A_i = (log λ_i)I + Σ_{p=1}^{n_i−1} ((−1)^{p+1}/p)(N_i/λ_i)^p.

Then, by (3.11) and (3.12),

e^{A_i} = exp((log λ_i)I) exp(Σ_{p=1}^{n_i−1} ((−1)^{p+1}/p)(N_i/λ_i)^p) = λ_i (I + N_i/λ_i) = J_i.
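The finite series defining A_i can be checked on a concrete 2 × 2 Jordan block (our choice of λ = 2): since N² = O, the series reduces to the single term N/λ, and exponentiating recovers J.

```python
import math

def log_jordan_2x2(lam):
    """A = (log lam) I + N/lam for the 2x2 Jordan block J = lam*I + N;
    the logarithm series (3.11) truncates because N^2 = O."""
    return [[math.log(lam), 1.0 / lam], [0.0, math.log(lam)]]

def exp_2x2(A, terms=30):
    """e^A by truncated power series, adequate for small 2x2 matrices."""
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        # P becomes A^k / k!
        P = [[sum(P[i][m] * A[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S
```

With λ = 2, A = [[log 2, 1/2],[0, log 2]] and e^A = 2(I + N/2) = [[2, 1],[0, 2]].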
Now we state our main result about the structure of the solutions of (LP).

Theorem 3.5.2 (Floquet Theorem) If Φ(t) is a fundamental matrix of (LP), then so is Φ(t + T). Moreover, there exist a nonsingular P(t) ∈ C^{n×n} satisfying P(t) = P(t + T) and an R ∈ C^{n×n} such that

Φ(t) = P(t)e^{tR}.

Proof. Let Ψ(t) = Φ(t + T). Then

Ψ'(t) = Φ'(t + T) = A(t + T)Φ(t + T) = A(t)Ψ(t),

so Ψ(t) is also a fundamental matrix, and there exists a nonsingular C such that Φ(t + T) = Φ(t)C. By Theorem 3.5.1, C = e^{TR} for some R ∈ C^{n×n}, and hence Φ(t + T) = Φ(t)e^{TR}. Define

P(t) = Φ(t)e^{−tR}.

Then Φ(t) = P(t)e^{tR} and

P(t + T) = Φ(t + T)e^{−(t+T)R} = Φ(t)e^{TR}e^{−(t+T)R} = Φ(t)e^{−tR} = P(t).

This completes the proof of the theorem.
Definition 3.5.1 The eigenvalues of C = e^{TR} are called the characteristic multipliers (Floquet multipliers) of A(t). The eigenvalues of R are called characteristic exponents (Floquet exponents) of A(t).

Next we establish the relationship between Floquet multipliers and Floquet exponents. We also show that the Floquet multipliers are uniquely determined by the system (LP).

Theorem 3.5.3 Let ρ_1, …, ρ_n be the characteristic exponents of A(t). Then the characteristic multipliers are e^{Tρ_1}, …, e^{Tρ_n}. If ρ_i appears in a Jordan block of R, then e^{Tρ_i} appears in a Jordan block of C = e^{TR} of the same size.

Proof. Since ρ_1, …, ρ_n are the eigenvalues of R, Tρ_1, …, Tρ_n are the eigenvalues of TR. Let P^{-1}RP = J = diag(J_0, J_1, …, J_s). Then C = e^{TR} = P e^{TJ} P^{-1}, where e^{TJ} = diag(e^{TJ_0}, e^{TJ_1}, …, e^{TJ_s}).

Theorem 3.5.4 The characteristic multipliers λ_1, …, λ_n are uniquely determined by A(t), and all characteristic multipliers are nonzero.

Proof. Let Φ_1(t) be another fundamental matrix of x' = A(t)x. Then there exists a nonsingular matrix C_1 such that Φ(t) = Φ_1(t)C_1, and it follows that

Φ_1(t + T)C_1 = Φ(t + T) = Φ(t)e^{TR} = Φ_1(t)C_1 e^{TR},

or

Φ_1(t + T) = Φ_1(t) C_1 e^{TR} C_1^{-1} = Φ_1(t) e^{TR'}.

Then e^{TR'} = C_1 e^{TR} C_1^{-1}, so e^{TR'} and e^{TR} have the same eigenvalues. Hence the characteristic multipliers λ_1, …, λ_n are uniquely determined by the system (LP). That each λ_i is nonzero follows directly from λ_1 ⋯ λ_n = det e^{TR} ≠ 0.

Now we are in a position to study the asymptotic behavior of the solutions x(t) of (LP).

Theorem 3.5.5 (i) Each solution x(t) of (LP) approaches zero as t → ∞ iff |λ_i| < 1 for each characteristic multiplier λ_i, or equivalently Re ρ_i < 0 for each characteristic exponent ρ_i.
(ii) Each solution of (LP) is bounded iff (1) |λ_i| ≤ 1 and (2) whenever |λ_i| = 1, the corresponding Jordan block e^{TJ_i} is [λ_i].

Proof. Consider the change of variable x = P(t)y. Then x' = P'(t)y + P(t)y' = A(t)x = A(t)P(t)y, or

P'(t)y + P(t)y' = A(t)P(t)y.   (3.13)

To compute P'(t), we differentiate both sides of Φ(t) = P(t)e^{tR}:

A(t)Φ(t) = P'(t)e^{tR} + P(t)Re^{tR},

or

P'(t) = A(t)P(t) − P(t)R.   (3.14)

Then from (3.13) and (3.14) we have y' = Ry. Since x(t) = P(t)y(t) and P(t) is periodic and continuous, the theorem follows directly from Theorem 3.3.6.

Remark 3.5.1 If the fundamental matrix Φ(t) satisfies Φ(0) = I, then the characteristic multipliers are the eigenvalues of Φ(T). Hence we may compute the Floquet multipliers by solving numerically the initial value problems

x' = A(t)x,   x(0) = e_i,

denoting the solutions y_i(t); then Φ(T) = [y_1(T), …, y_n(T)].

Example 3.5.1 Hill's and Mathieu's equations [[JS], p.237]. Consider a pendulum of length a with a bob of mass m suspended from a support which is constrained to move vertically with displacement ξ(t).
Fig.3.9

The kinetic energy T and potential energy V are given by

T = (1/2) m [(ξ̇ − a sin θ θ̇)² + a² cos²θ θ̇²],
V = −mg(ξ + a cos θ).

Lagrange's equation for L = T − V,

(d/dt)(∂L/∂θ̇) = ∂L/∂θ,

gives

a θ̈ + (g − ξ̈) sin θ = 0.

For small oscillations sin θ ≈ θ, so we obtain

a θ̈ + (g − ξ̈) θ = 0.
As a standard form for this equation we may write

x'' + (α + p(t))x = 0.

When p(t) is periodic it is known as Hill's equation. If p(t) = β cos t,

ẍ + (α + β cos t)x = 0

is called Mathieu's equation. Now consider Mathieu's equation and reduce it to a linear periodic system:

(x, y)' = [[0, 1],[−α − β cos t, 0]] (x, y)^T = A(t)(x, y)^T.

Let Φ(t) be the fundamental matrix with Φ(0) = I. From trace(A(t)) = 0 and Abel's formula, the characteristic multipliers μ_1, μ_2 satisfy

μ_1 μ_2 = det Φ(T) = det Φ(0) exp(∫_0^T tr A(t) dt) = 1,

and μ_1, μ_2 are the solutions of

μ² − μ trace(Φ(T)) + det Φ(T) = 0.

Let φ = φ(α, β) = trace(Φ(T)). Then

μ_{1,2} = (1/2)[φ ± √(φ² − 4)].

There are several cases.

(i) If φ > 2, then μ_1, μ_2 > 0 with μ_1 > 1 and μ_2 = 1/μ_1 < 1. The characteristic exponents satisfy ρ_1 > 0 and ρ_2 < 0, and

x(t) = c_1 e^{ρ_1 t} p_1(t) + c_2 e^{ρ_2 t} p_2(t),   p_i(t) of period 2π.

Hence {(α, β) : φ(α, β) > 2} is an unstable region.

(ii) φ = 2: μ_1 = μ_2 = 1, ρ_1 = ρ_2 = 0. Then one solution is of period 2π and the other is unbounded.

(iii) −2 < φ < 2: the characteristic multipliers are complex. In fact |μ_1| = |μ_2| = 1, ρ_1 = iν, ρ_2 = −iν, and

x(t) = c_1 e^{iνt} p_1(t) + c_2 e^{−iνt} p_2(t).

All solutions are bounded and {(α, β) : −2 < φ(α, β) < 2} is a stable region. All solutions are oscillatory but not periodic since ν ≠ 2π in general.
(iv) φ = −2: μ_1 = μ_2 = −1. Set T = 2π and let x_0 be an eigenvector of Φ(T) with eigenvalue μ_1 = −1. Consider x(t) = Φ(t)x_0. Then

x(t + T) = Φ(t + T)x_0 = Φ(t)Φ(T)x_0 = Φ(t)(−x_0) = −Φ(t)x_0,

hence

x(t + 2T) = −Φ(t + T)x_0 = −Φ(t)Φ(T)x_0 = Φ(t)x_0 = x(t).

Hence there is one solution with period 4π.

(v) φ < −2: μ_1, μ_2 are real and negative with μ_1 μ_2 = 1. The general solution is of the form

x(t) = c_1 e^{(σ + i/2)t} p_1(t) + c_2 e^{(−σ + i/2)t} p_2(t),

where p_1, p_2 have period 2π, or

x(t) = c_1 e^{σt} q_1(t) + c_2 e^{−σt} q_2(t),

where q_1(t), q_2(t) are periodic with period 4π.

The following theorem, given by Liapunov, provides a sufficient condition to estimate the stable region.

Theorem 3.5.6 (Liapunov) ([H] p.130): Let p(t) be continuous on R with p(t + π) = p(t) ≢ 0, and suppose 0 ≤ ∫_0^π p(t) dt and ∫_0^π |p(t)| dt ≤ 4/π. Then all solutions of the equation u'' + p(t)u = 0 are bounded.

Proof. It suffices to show that no characteristic multiplier is real. Then μ_1 μ_2 = 1 implies μ_1 = e^{iν}, μ_2 = e^{−iν}, and u(t) = c_1 e^{iνt} p_1(t) + c_2 e^{−iνt} p_2(t) is bounded. If μ_1 is real, then there exists a real solution u(t) with u(t + π) = ρ u(t) for all t, for some ρ ≠ 0. Then either u(t) ≠ 0 for all t, or u(t) has infinitely many zeros with two consecutive zeros a and b, 0 ≤ b − a ≤ π.

Case I: u(t) ≠ 0 for all t.
Then u(π) = ρu(0), u'(π) = ρu'(0), and u'(π)/u(π) = u'(0)/u(0).
Since u''(t)/u(t) + p(t) = 0, it follows that

∫_0^π (u''(t)/u(t)) dt + ∫_0^π p(t) dt = 0.

Write

∫_0^π (u''(t)/u(t)) dt = (u'/u)|_0^π + ∫_0^π (u'(t))²/u²(t) dt = ∫_0^π (u'(t)/u(t))² dt > 0,

while ∫_0^π p(t) dt ≥ 0 by hypothesis. Thus we obtain a contradiction.
Case II: Assume u(t) > 0 for a < t < b, u(a) = u(b) = 0, and let u(c) = max_{a≤t≤b} u(t). Then

4/π ≥ ∫_0^π |p(t)| dt ≥ ∫_a^b |u''(t)/u(t)| dt > (1/u(c)) ∫_a^b |u''(t)| dt ≥ (1/u(c)) |u'(α) − u'(β)|

for any α < β in (a, b), where we note that

|u'(α) − u'(β)| = |∫_α^β u''(t) dt| ≤ ∫_α^β |u''(t)| dt ≤ ∫_a^b |u''(t)| dt.

By the Mean Value Theorem there are α ∈ (a, c) and β ∈ (c, b) with

u'(α) = (u(c) − u(a))/(c − a) = u(c)/(c − a),   −u'(β) = (u(c) − u(b))/(b − c) = u(c)/(b − c).

Hence it follows that

4/π > (1/u(c)) |u'(α) − u'(β)| = (1/u(c)) (u(c)/(c − a) + u(c)/(b − c)) = 1/(c − a) + 1/(b − c) = (b − a)/((c − a)(b − c)) ≥ 4/(b − a) ≥ 4/π,

where we applied the inequality 4xy ≤ (x + y)² with x = c − a, y = b − c. Thus we obtain a contradiction.
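Returning to Example 3.5.1, the recipe of Remark 3.5.1 is straightforward to carry out for Mathieu's equation: integrate x' = A(t)x over one period T = 2π starting from e_1 and e_2, assemble Φ(2π), and check that μ_1 μ_2 = det Φ(2π) = 1, as Abel's formula predicts. A sketch (helper names, step count, and the values of α, β are our own choices):

```python
import math

def mathieu_rhs(t, x, alpha, beta):
    # x' = [[0, 1], [-alpha - beta*cos t, 0]] x
    return [x[1], -(alpha + beta * math.cos(t)) * x[0]]

def rk4_solve(x0, alpha, beta, T=2 * math.pi, steps=4000):
    """Classical RK4 integration of the Mathieu system over [0, T]."""
    h = T / steps
    t, x = 0.0, list(x0)
    for _ in range(steps):
        k1 = mathieu_rhs(t, x, alpha, beta)
        k2 = mathieu_rhs(t + h/2, [x[i] + h/2 * k1[i] for i in range(2)], alpha, beta)
        k3 = mathieu_rhs(t + h/2, [x[i] + h/2 * k2[i] for i in range(2)], alpha, beta)
        k4 = mathieu_rhs(t + h, [x[i] + h * k3[i] for i in range(2)], alpha, beta)
        x = [x[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return x

def monodromy(alpha, beta):
    """Phi(T) with Phi(0) = I: columns are the solutions from e_1 and e_2."""
    c1 = rk4_solve([1.0, 0.0], alpha, beta)
    c2 = rk4_solve([0.0, 1.0], alpha, beta)
    return [[c1[0], c2[0]], [c1[1], c2[1]]]
```

The trace φ = trace Φ(2π) then places (α, β) in one of the cases (i)–(v) above.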
3.6 Adjoint System

Let Φ(t) be a fundamental matrix of x' = A(t)x. Then we have

(Φ^{-1})' = −Φ^{-1} Φ' Φ^{-1} = −Φ^{-1} A(t).

Taking conjugate transposes yields

((Φ*)^{-1})' = −A*(t)(Φ*)^{-1}.

Hence (Φ*)^{-1} is a fundamental matrix of

y' = −A*(t)y,

and we call Y' = −A*(t)Y the adjoint of Y' = A(t)Y.

Theorem 3.6.1 Let Φ(t) be a fundamental matrix of x' = A(t)x. Then Ψ(t) is a fundamental matrix of Y' = −A*(t)Y iff Ψ*Φ = C for some nonsingular constant matrix C.

Proof. If Ψ(t) is a fundamental matrix of Y' = −A*(t)Y, then Ψ = (Φ*)^{-1}P for some nonsingular constant matrix P, and Ψ*Φ = P*Φ^{-1}Φ = P* =: C. On the other hand, if Ψ*Φ = C, then Ψ = (Φ*)^{-1}C*, and hence Ψ is a fundamental matrix of the adjoint system.

Example 3.6.1 Consider the n-th order scalar linear equation

D^n y + a_1(t)D^{n−1}y + ⋯ + a_n(t)y = 0.

We rewrite it as a first order system with x_1 = y, x_2 = y', …, x_n = y^{(n−1)}:

(x_1, …, x_n)' = [[0, 1, 0, ⋯, 0],[⋮, , ⋱, ⋱, ⋮],[0, ⋯, ⋯, 0, 1],[−a_n, −a_{n−1}, ⋯, ⋯, −a_1]] (x_1, …, x_n)^T,
or x' = A(t)x. Then the adjoint equation is

y' = −A^T(t) y,

i.e.

y_1' = a_n y_n,
y_2' = −y_1 + a_{n−1} y_n,
y_3' = −y_2 + a_{n−2} y_n,
⋮
y_n' = −y_{n−1} + a_1 y_n.

Let z = y_n; then we obtain the adjoint equation

D^n z − D^{n−1}(a_1 z) + D^{n−2}(a_2 z) + ⋯ + (−1)^n a_n z = 0.

Fredholm Alternatives: We recall from linear algebra the solvability of the linear system Ax = b, A ∈ R^{m×n}, b ∈ R^m. The results are stated as follows:

(Uniqueness) The solution of Ax = b is unique iff Ax = 0 has only the trivial solution x = 0.

(Existence) The equation Ax = b has a solution iff ⟨b, v⟩ = 0 for every vector v satisfying A*v = 0, where A* is the conjugate transpose of A, i.e. the adjoint of A.

Theorem 3.6.2 ([H], p.145): Let A(t) ∈ R^{n×n} be a continuous periodic matrix with period T and f ∈ P_T, where P_T = {f : R → R^n : f is continuous with period T}. Then

x' = A(t)x + f(t)   (3.15)

has a solution in P_T iff

⟨y, f⟩ := ∫_0^T y^T(t) f(t) dt = 0 for all y ∈ P_T
satisfying y' = −A*(t)y.

Proof. Let x(t) be a periodic solution of (3.15) in P_T. Then x(0) = x(T). Let Φ(t) be the fundamental matrix of x' = A(t)x with Φ(0) = I. From the variation of constants formula (3.5),

x(0) = x(T) = Φ(T)x(0) + ∫_0^T Φ(T)Φ^{-1}(s)f(s) ds.
Then it follows that

Φ^{-1}(T)x(0) = x(0) + ∫_0^T Φ^{-1}(s)f(s) ds,

or

[Φ^{-1}(T) − I] x(0) = ∫_0^T Φ^{-1}(s)f(s) ds,

or Bx(0) = b. From the Fredholm alternative of linear algebra, solvability for x(0) means

⟨v, b⟩ = 0 for all v satisfying B*v = 0.

We note that

B*v = 0 iff (Φ^{-1}(T))* v = v.

Since y(t) = (Φ^{-1}(t))* v is a solution of the adjoint equation y' = −A*y, v is the initial value of a T-periodic solution of y' = −A*y, and

⟨v, b⟩ = 0 iff ⟨v, ∫_0^T Φ^{-1}(s)f(s) ds⟩ = 0.
For y ∈ P_T with y(t) = (Φ^{-1}(t))* v,

⟨y, f⟩ = 0 iff ∫_0^T f^T(t)(Φ^{-1}(t))* v dt = 0 iff ⟨v, ∫_0^T Φ^{-1}(s)f(s) ds⟩ = 0.
Example 3.6.2 u'' + u = cos wt. Rewrite the equation in the form

(x_1, x_2)' = [[0, 1],[−1, 0]] (x_1, x_2)^T + (0, cos wt)^T.   (3.16)

The adjoint equation is

(y_1, y_2)' = [[0, 1],[−1, 0]] (y_1, y_2)^T,   (3.17)

or y_1' = y_2, y_2' = −y_1. Then

y_1(t) = a cos t + b sin t,
y_2(t) = −a sin t + b cos t.   (3.18)
If w ≠ 1, the adjoint system (3.17) has no nontrivial periodic solution of period 2π/w, so

∫_0^T (y_1(t), y_2(t)) · (0, cos wt)^T dt = 0

for all solutions (y_1, y_2) of (3.17) in P_{2π/w}. Hence (3.16) has a unique periodic solution of period 2π/w.

If w = 1, then every solution of the adjoint system is of period 2π. Checking against (3.18),

∫_0^T (y_1(t), y_2(t)) · (0, cos t)^T dt = ∫_0^{2π} (−a sin t + b cos t) cos t dt = πb ≠ 0

unless b = 0. Hence (3.16) has no solution of period 2π. In fact, from elementary differential equations, u(t) = a sin t + b cos t + (t sin t)/2, and we have the "resonance" phenomenon.
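The resonance can also be seen numerically: with u(0) = u'(0) = 0 the solution of u'' + u = cos t is exactly u(t) = (t sin t)/2, whose amplitude grows linearly. A sketch using a standard RK4 integrator (helper name and step count are our own choices):

```python
import math

def rk4_forced(t_end, steps=8000):
    """Integrate u'' + u = cos t with u(0) = u'(0) = 0; return u(t_end)."""
    def f(t, y):                      # y = (u, u')
        return [y[1], -y[0] + math.cos(t)]
    h = t_end / steps
    t, y = 0.0, [0.0, 0.0]
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, [y[i] + h/2 * k1[i] for i in range(2)])
        k3 = f(t + h/2, [y[i] + h/2 * k2[i] for i in range(2)])
        k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return y[0]
```

The computed u(t) matches (t sin t)/2 closely, confirming the secular (linearly growing) term.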
Exercises

Exercise 3.1 (i) Show that det e^A = e^{tr A}. (ii) If A is skew symmetric, i.e. A^T = −A, then e^A is orthogonal.

Exercise 3.2 Find the general solution of ẋ = Ax with

A = [[0, 1, 0],[4, 3, −4],[1, 2, −1]].

Exercise 3.3 Prove that Be^{At} = e^{At}B for all t if and only if BA = AB.

Exercise 3.4 Let A be a continuous n × n matrix such that the system ẋ = A(t)x has a uniformly bounded fundamental matrix Φ(t) over 0 ≤ t < ∞.
(a) Show that all fundamental matrices are bounded on [0, ∞).
(b) Show that if

liminf_{t→∞} Re ∫_0^t tr A(s) ds > −∞,

then Φ^{-1}(t) is also uniformly bounded on [0, ∞).

Exercise 3.5 Show that if a(t) is a bounded function with a ∈ C[0, ∞), and φ(t) is a nontrivial solution of

y'' + a(t)y = 0   (∗)

satisfying φ(t) → 0 as t → ∞, then (∗) has a solution which is not bounded over [0, ∞).

Exercise 3.6 Let g be a bounded continuous function on (−∞, ∞) and let B and −C be square matrices of dimensions k and n − k, all of whose eigenvalues have negative real parts. Let

A = [[B, 0],[0, C]] and g(t) = (g_1(t), g_2(t)),

so that the system

x' = Ax + g(t)   (∗∗)

is equivalent to

x_1' = Bx_1 + g_1(t),   x_2' = Cx_2 + g_2(t).

Show that the functions

φ_1(t) = ∫_{−∞}^t e^{B(t−s)} g_1(s) ds,   φ_2(t) = −∫_t^∞ e^{C(t−s)} g_2(s) ds

are defined for all t ∈ R and determine a solution of (∗∗).

Exercise 3.7 Show that if g is a bounded continuous function on R¹ and if A has no eigenvalue with zero real part, then (∗∗) has at least one bounded solution. (Hint: Use Problem 5.)

Exercise 3.8 Let A = λI + N consist of a single Jordan block. Show that for any α > 0, A is similar to the matrix B = λI + αN. (Hint: Let P = [α^{i−1} δ_{ij}] and compute P^{-1}AP.)

Exercise 3.9 Let A be a real n × n matrix. Show that there exists a real nonsingular matrix P such that P^{-1}AP = B has the real Jordan canonical form J = diag(J_0, J_1, …, J_s), where J_k is as before for real eigenvalues λ_j, while for a complex eigenvalue pair λ = α ± iβ the corresponding J_k has the form

J_k = [[Λ, I_2, ⋯, 0_2],[0_2, Λ, I_2, ⋯],[⋮, , ⋱, ⋱],[0_2, 0_2, ⋯, Λ]].

Here 0_2 is the 2 × 2 zero matrix, I_2 the 2 × 2 identity matrix, and

Λ = [[α, −β],[β, α]].
Exercise 3.10 Use the Jordan form to prove that all eigenvalues of A² have the form λ², where λ is an eigenvalue of A.

Exercise 3.11 If A = C² as in Problem 7, where C is a real nonsingular n × n matrix, show that there is a real matrix L such that e^L = A. (Hint: Use Problems 8 and 9 and the fact that if λ = α + iβ = re^{iθ}, then

Λ = exp [[log r, −θ],[θ, log r]].)

Exercise 3.12 Let A(t) be a 2 × 2 continuous periodic matrix of period T and let Y(t) be the fundamental matrix of y' = A(t)y satisfying Y(0) = I. Let B(t) be a 3 × 3 periodic matrix of the form

B(t) = [[A(t), b(t)],[0 0, b_3(t)]],   b(t) = col(b_1(t), b_2(t)).

Then the fundamental matrix Φ(t) of z' = B(t)z is given by

Φ(t) = [[Y(t), z(t)],[0 0, z_3(t)]],

where

z_3(t) = exp(∫_0^t b_3(s) ds)

and z = (z_1, z_2) is given by

z(t) = ∫_0^t Y(t)Y^{-1}(s) b(s) ds.

In particular, if ρ_1, ρ_2 are the Floquet multipliers associated with the 2 × 2 system (the eigenvalues of Y(T)), then the Floquet multipliers of the 3 × 3 system are ρ_1, ρ_2 and ρ_3 = z_3(T).

Exercise 3.13 Let a_0(t) and a_1(t) be continuous T-periodic functions and let φ_1 and φ_2 be solutions of y'' + a_1(t)y' + a_0(t)y = 0 such that

Φ(0) = [[φ_1(0), φ_2(0)],[φ_1'(0), φ_2'(0)]] = [[1, 0],[0, 1]] = E_2.

Show that the Floquet multipliers λ satisfy λ² + αλ + β = 0, where

α = −[φ_1(T) + φ_2'(T)],   β = exp(−∫_0^T a_1(t) dt).

Exercise 3.14 In Problem 12 let a_1 ≡ 0. Show that if −2 < α < 2, then all solutions y(t) are bounded over −∞ < t < ∞. If α > 2 or α < −2, then y(t)² + y'(t)² must be unbounded over R. If α = −2, show there is at least one solution y(t) of period T, while for α = 2 there is at least one periodic solution of period 2T.

Exercise 3.15 (Markus and Yamabe [MY]) If

A(t) = [[−1 + (3/2)cos²t, 1 − (3/2)cos t sin t],[−1 − (3/2)sin t cos t, −1 + (3/2)sin²t]],

then the eigenvalues λ_1(t), λ_2(t) of A(t) are λ_1(t) = (−1 + i√7)/4, λ_2(t) = λ̄_1(t); in particular, the eigenvalues have negative real parts. On the other hand, one can verify directly that the vector (−cos t, sin t)e^{t/2} is a solution of x' = A(t)x, and this solution is unbounded as t → ∞. Show that one of the characteristic multipliers is e^π and the other multiplier is e^{−2π}.

Exercise 3.16 (i) Consider the linear inhomogeneous system

x' = Ax + f(t),   (∗∗)

where f(t) is a continuous 2π-periodic function. If there is a 2π-periodic solution y(t) of the adjoint equation y' = −A^T y such that

∫_0^{2π} y^T(t) f(t) dt ≠ 0,

show that every solution x(t) of (∗∗) is unbounded.
(Hint: Compute (d/dt)(y^T(t)x(t)) and integrate from 0 to ∞.)
(ii) Show that resonance occurs for the second order linear equation x'' + w_0²x = F cos wt when w = w_0.
CHAPTER 3. LINEAR SYSTEMS
References [C ] W.A.Coppel : Stabliity and Asymptotic Behavior of Differential Equations [H ] J.Hale : Ordinary Differential Equations [HS ] Hirsch and Smale : Differential Equations, Dynamical Systems and Linear Algebra [IK ] E. Isaacson and H. Keller : Analysis of Numerical Method [JS ] D.W.Jordan and P.Smith : Nonlinear Ordinary Differential Equations [MY ] Markus L. and H.Yamabe : Global stability criteria for differential systems, Osaka Math. J.12(1960),305-317
Chapter 4

STABILITY OF NONLINEAR SYSTEMS

4.1 Definitions
Let x(t) be a solution of the system

dx/dt = f(t, x).   (4.1)

In this section, we introduce three types of stability of the solution x(t) in the sense of Lyapunov, namely stability, instability and asymptotic stability.

Definition 4.1.1 We say a given solution x(t) of (4.1) is stable (more precisely, stable over the interval [t_0, ∞)) if
(i) for each ε > 0 there exists δ = δ(ε) > 0 such that for any solution x̄(t) of (4.1) with |x̄(t_0) − x(t_0)| < δ, the inequality |x̄(t) − x(t)| < ε holds for all t ≥ t_0.
x(t) is said to be asymptotically stable if in addition
(ii) |x̄(t) − x(t)| → 0 as t → ∞ whenever |x̄(t_0) − x(t_0)| is sufficiently small.
A solution x(t) is said to be unstable if it is not stable, i.e., there exist ε > 0 and sequences {t_m} and {ξ_m}, ξ_m → ξ_0 = x(t_0) as m → ∞, such that |ϕ(t_0 + t_m, t_0, ξ_m) − x(t_0 + t_m)| ≥ ε.

Example 4.1.1 The constant solution x(t) ≡ ξ of the equation x' = 0 is stable, but it is not asymptotically stable.

Example 4.1.2 The solutions of the equation x' = a(t)x are

x(t) = ξ_1 exp(∫_0^t a(s) ds).
The solution x(t) ≡ 0 is asymptotically stable if and only if

∫_0^t a(s) ds → −∞ as t → ∞.

Example 4.1.3

(x_1, x_2)' = [[0, 1],[1, 0]] (x_1, x_2)^T.

Then (0,0) is a saddle point. Hence (x_1(t), x_2(t)) ≡ (0,0) is unstable, even though there exist points, namely the points (ξ_1, ξ_2) on the stable manifold, such that ϕ(t, ξ_1, ξ_2) → 0 as t → ∞.

Example 4.1.4 Consider the equation v''(t) + t sin v(t) = 0. Then v(t) ≡ π is unstable and v(t) ≡ 0 is asymptotically stable. (Exercise 4.1)

Example 4.1.5 For the equation x'' + x = 0, the null solution x(t) ≡ 0 is stable, but not asymptotically stable.

Remark 4.1.1 In applications we mostly have an autonomous system x' = f(x) or a periodic system x' = f(t, x) with f periodic in t. For an autonomous system x' = f(x), f: Ω ⊆ R^n → R^n, we consider the stability of the following specific solutions:
(i) an equilibrium solution x(t) ≡ x*, f(x*) = 0;
(ii) a periodic solution, whose existence usually follows from the Poincaré–Bendixson Theorem for n = 2 or the Brouwer fixed point theorem for n ≥ 3 — we shall discuss the "orbital stability" of a periodic orbit in Section 4.4;
(iii) in some cases, a homoclinic solution x(t) (i.e., a solution satisfying x(t) → x* as t → ±∞) or a "chaotic" solution.
4.2 Linearization

Stability of linear systems: The stability of a solution x(t) of the linear system x' = A(t)x is equivalent to the stability of the null solution x(t) ≡ 0. Consider the linear system x' = Ax, where A is an n × n constant matrix. From Chapter 3 we have the following:
(i) If all eigenvalues of A have negative real parts, then x(t) ≡ 0 is asymptotically stable. Moreover, there are positive constants K and α such that ‖e^{At}x_0‖ ≤ Ke^{−αt}‖x_0‖ for all t ≥ 0, x_0 ∈ R^n.
(ii) x(t) ≡ 0 is stable iff all eigenvalues of A have nonpositive real parts and those with zero real parts have simple elementary divisors.
(iii) x(t) ≡ 0 is unstable iff there exists an eigenvalue with positive real part, or an eigenvalue with zero real part without simple elementary divisors.
To justify the stability of equilibrium solutions of a nonlinear autonomous system x' = f(x), we have two methods, namely the linearization method and the Lyapunov method. In this chapter we discuss the method of linearization; we shall discuss the Lyapunov method in the next chapter.

Given a nonlinear autonomous system x' = f(x) with initial condition x_0, it is very difficult to study the global asymptotic behavior of the solution ϕ(t, x_0). As a first step, we find all equilibria of the system, i.e. we solve the nonlinear system f(x) = 0, and then check the stability property of each equilibrium x*. Then we may predict the behavior of the solution ϕ(t, x_0).

Let x* be an equilibrium of the autonomous system

x' = f(x).   (4.2)

The Taylor series expansion of f(x) around x = x* has the form

f(x) = f(x*) + A(x − x*) + g(|x − x*|) = A(x − x*) + g(|x − x*|),

where

A = D_x f(x*) = [[∂f_1/∂x_1, ⋯, ∂f_1/∂x_n],[⋮, , ⋮],[∂f_n/∂x_1, ⋯, ∂f_n/∂x_n]]|_{x=x*}

and g(y) = o(|y|) as y → 0, i.e.

lim_{y→0} |g(y)|/|y| = 0.
The Jacobian D_x f(x*) is called the variational matrix of (4.2) at x = x*. With y = x − x*, the linear system

y' = D_x f(x*) y   (4.3)

is said to be the linearized equation of (4.2) about the equilibrium x*. The stability property of the linear system (4.3) is called the linearized stability of the equilibrium x*; the stability property of the equilibrium x* of (4.2) is called the nonlinear stability of x*. We then ask whether linearized stability of x* implies the nonlinear stability of x*. By the Hartman–Grobman Theorem the answer is yes if the equilibrium x* is hyperbolic, i.e. D_x f(x*) has no eigenvalue with zero real part. We shall state the Hartman–Grobman Theorem at the end of this section. The following two examples show that "hyperbolicity" is important for nonlinear stability.

Example 4.2.1 Consider the system

dx_1/dt = x_2 − x_1(x_1² + x_2²),
dx_2/dt = −x_1 − x_2(x_1² + x_2²).

(0,0) is an equilibrium. The variational matrix A at (0,0) is

[[−(x_1² + x_2²) − 2x_1², 1 − 2x_1x_2],[−1 − 2x_1x_2, −(x_1² + x_2²) − 2x_2²]]|_{(0,0)} = [[0, 1],[−1, 0]],
whose eigenvalues are ±i. Hence (0,0) is a center for the linearized system x' = Ax. However,

x_1 (dx_1/dt) + x_2 (dx_2/dt) = −(x_1² + x_2²)²,

or

(1/2)(d/dt) r²(t) = −(r²(t))²,   r²(t) = x_1²(t) + x_2²(t).

Then

r²(t) = c/(1 + 2ct),   c = x_1²(0) + x_2²(0),

and r(t) → 0 as t → ∞. It follows that (0,0) is asymptotically stable.

Example 4.2.2 For the system

dx_1/dt = x_2 + (x_1² + x_2²)x_1,
dx_2/dt = −x_1 + (x_1² + x_2²)x_2,

and the equilibrium (0,0), the linearized system is

x' = [[0, 1],[−1, 0]] x,

with (0,0) being a center. However,

(d/dt) r²(t) = 2(r²(t))²,

or

r²(t) = r²(0)/(1 − 2r²(0)t).
The equilibrium (0,0) is unstable and the solution blows up in finite time.

Theorem 4.2.1 Let x* be an equilibrium of the autonomous system (4.2). If the variational matrix D_x f(x*) has all eigenvalues with negative real parts, then the equilibrium x* is asymptotically stable.

Proof. Let A = D_x f(x*) and y = x − x*. Then we have

y' = f(x) = f(y + x*) = Ay + g(y),   where g(y) = o(|y|) as y → 0, g(0) = 0.   (4.4)
Since Re λ(A) < 0, there exist M > 0 and σ > 0 such that

|e^{At}| ≤ Me^{−σt},   t ≥ 0.   (4.5)

From (4.5) and the variation of constants formula

y(t) = e^{A(t−t_0)} y(t_0) + ∫_{t_0}^t e^{A(t−s)} g(y(s)) ds,

it follows that

|y(t)| ≤ M|y(t_0)| e^{−σ(t−t_0)} + M ∫_{t_0}^t e^{−σ(t−s)} |g(y(s))| ds.   (4.6)

From (4.4), given ε with 0 < ε < σ, there exists δ > 0 such that

|g(y)| < (ε/M)|y| for |y| < δ.   (4.7)

As long as |y(t)| < δ, we have

|y(t)| ≤ M|y(t_0)| e^{−σ(t−t_0)} + ε ∫_{t_0}^t e^{−σ(t−s)} |y(s)| ds.
Applying Gronwall's inequality to e^{σt}|y(t)| yields

|y(t)| ≤ M|y(t_0)| e^{−(σ−ε)(t−t_0)}

as long as |y(t)| < δ. Choose γ < δ/M such that |y(t_0)| ≤ γ. Then

|y(t)| ≤ M|y(t_0)| e^{−(σ−ε)(t−t_0)} ≤ M|y(t_0)| < δ.

Hence the equilibrium y = 0 is asymptotically stable; in fact, it is exponentially asymptotically stable.

Example 4.2.3 Predator–prey system with Holling's type II functional response ([HSU]), where x(t), y(t) are the population densities of prey and predator respectively:

dx/dt = γx(1 − x/K) − (mx/(a + x)) y,
dy/dt = ((mx/(a + x)) − d) y,
γ, K, m, a, d > 0,   x(0) > 0, y(0) > 0.

Let m > d. Then we have three equilibria: (0,0), (K,0) and (x*, y*), where

x* = a/((m/d) − 1) > 0,   y* > 0.

The existence of (x*, y*) requires the condition x* < K. Assume 0 < x* < K. The variational matrix is

A(x, y) = [[γ(1 − 2x/K) − (ma/(a+x)²) y, −mx/(a+x)],[(ma/(a+x)²) y, mx/(a+x) − d]].

At E_0 = (0,0),

A(0,0) = [[γ, 0],[0, −d]];

E_0 is a saddle point since γ > 0, −d < 0. At E_1 = (K,0),

A(K,0) = [[−γ, −mK/(a+K)],[0, mK/(a+K) − d]]

is a saddle point since x* < K. At E* = (x*, y*),

A(x*, y*) = [[γ(1 − 2x*/K) − (ma/(a+x*)²) y*, −mx*/(a+x*)],[(ma/(a+x*)²) y*, 0]].
The characteristic equation of A(x*, y*) is

λ² − λ[γ(1 − 2x*/K) − (ma/(a+x*)²) y*] + (mx*/(a+x*)) · (ma/(a+x*)²) y* = 0.

E* is asymptotically stable iff

γ(1 − 2x*/K) − (ma/(a+x*)²) y* < 0,

or

γ(1 − 2x*/K) − (a/(a+x*)) γ(1 − x*/K) < 0,

or

(K − a)/2 < x*.

If (K − a)/2 > x*, then E* is an unstable spiral. If (K − a)/2 = x*, the eigenvalues of A(x*, y*) are ±ωi; this is a "Hopf bifurcation", which will be discussed in Section 6.3.
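The criterion (K − a)/2 < x* can be checked directly from the entries of A(x*, y*): since the determinant is always positive, the sign of the trace decides stability. A small sketch (all names and parameter values below are ours):

```python
def interior_trace(gamma, K, m, a, d):
    """Trace of A(x*, y*) for the Holling type II predator-prey model;
    E* is asymptotically stable iff this trace is negative."""
    assert m > d
    xs = a * d / (m - d)                       # x* solves m x/(a+x) = d
    assert 0 < xs < K, "interior equilibrium requires 0 < x* < K"
    ys = gamma * (1 - xs / K) * (a + xs) / m   # from the prey nullcline
    return gamma * (1 - 2 * xs / K) - m * a * ys / (a + xs) ** 2
```

For instance, with γ = m = 1, d = 0.5, a = 1 (so x* = 1): K = 2.5 gives (K − a)/2 = 0.75 < x* and trace ≈ −0.1 < 0 (stable), while K = 5 gives (K − a)/2 = 2 > x* and trace ≈ 0.2 > 0 (unstable spiral), consistent with the criterion.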
Fig.4.1

Example 4.2.4 Lotka–Volterra two-species competition model [W]:

dx_1/dt = γ_1 x_1 (1 − x_1/K_1) − α_1 x_1 x_2,
dx_2/dt = γ_2 x_2 (1 − x_2/K_2) − α_2 x_1 x_2,
γ_1, γ_2, K_1, K_2, α_1, α_2 > 0,   x_1(0) > 0, x_2(0) > 0.

There are the equilibria E_0 = (0,0), E_1 = (K_1, 0) and E_2 = (0, K_2). The interior equilibrium E* = (x_1*, x_2*) exists in cases (iii) and (iv) below. The variational matrix at E = (x_1, x_2) is

A(x_1, x_2) = [[γ_1(1 − 2x_1/K_1) − α_1 x_2, −α_1 x_1],[−α_2 x_2, γ_2(1 − 2x_2/K_2) − α_2 x_1]].
At E_0,

A(0,0) = [[γ_1, 0],[0, γ_2]];

E_0 is a source (a repeller) since γ_1 > 0, γ_2 > 0. At E_1 = (K_1, 0),

A(K_1, 0) = [[−γ_1, −α_1 K_1],[0, γ_2 − α_2 K_1]].

At E_2 = (0, K_2),

A(0, K_2) = [[γ_1 − α_1 K_2, 0],[−α_2 K_2, −γ_2]].

There are four cases according to the positions of the isoclines L_1: γ_1(1 − x_1/K_1) − α_1 x_2 = 0 and L_2: γ_2(1 − x_2/K_2) − α_2 x_1 = 0.

(i) γ_2/α_2 > K_1, γ_1/α_1 < K_2.

Fig.4.2

In this case E_2 = (0, K_2) is a stable node, E_1 = (K_1, 0) is a saddle point and E_0 = (0,0) is an unstable node. We may predict that (x_1(t), x_2(t)) → (0, K_2) as t → ∞.

(ii) γ_2/α_2 < K_1, γ_1/α_1 > K_2.
Fig.4.3

In this case E_1 = (K_1, 0) is a stable node, E_2 = (0, K_2) is a saddle point and E_0 = (0,0) is an unstable node. We may predict that (x_1(t), x_2(t)) → (K_1, 0) as t → ∞.

(iii) γ_1/α_1 > K_2, γ_2/α_2 > K_1.
Fig.4.4

In this case E_1 = (K_1, 0) and E_2 = (0, K_2) are saddle points and E_0 = (0,0) is an unstable node. The variational matrix at E* is

A(x_1*, x_2*) = [[−(γ_1/K_1)x_1*, −α_1 x_1*],[−α_2 x_2*, −(γ_2/K_2)x_2*]].
Fig.4.5 In this case E1 = (K1 , 0) and E2 = (0, K2 ) are stable nodes. E0 = (0, 0) is an unstable node and from K1 > αγ22 , K2 > αγ11 , it follows that E ∗ = (x∗ , y ∗ ) is a saddle point. This is a well-known ”bistability” example. Hartman-Grobman Theorem: [R] 0 Let x∗ be a hyperbolic equilibrium of the system x =f (x) . Then the flow ϕt of f is conjugate in a neighborhood of x∗ to the affine flow x∗ + eAt (y − x∗ ) ∗ where A = Dfx (x∗ ). More precisely, there are a neighborhood t of ¡ ∗ ¢ x and a t At ∗ homeomorphism h : t → U such that ϕ (h(x)) = h x + e (y − x ) as long as x∗ + eAt (y − x∗ ).
4.3
Saddle Point Property: Stable and unstable manifolds
In this section, we discuss the existence of the stable and unstable manifolds for a hyperbolic equilibrium of an autonomous system and prove they are submanifolds with the same smoothness properties as the vector field. Consider (2.1) with x∗ as a hyperbolic equilibrium and A = Dx f (x∗ ). Let z = x − x∗ then we have z 0 = Az + F (z) where F (0) = 0 DF (0) = 0
F (z) = o(|z|) as z → 0.
74
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS Consider the differential equation x0 = Ax + f (x)
(4.8)
where the equilibrium 0 is hyperbolic and f : Rn → Rn is a Lipschitz continuous function satisfying f (0) = 0 |f (x) − f (y)| ≤ ρ(δ)|x − y| if |x|, |y| ≤ δ
(4.9)
where ρ : [0, ∞] → [0, ∞) is continuous with ρ(0) = 0. We note that (4.9) implies f (x) = O(|x|) as x → 0. For any x ∈ Rn , let ϕt (x) be the solution of (4.8) through x. The unstable set W u (0) and the stable set W s (0) of 0 are defined as W u (0) = {x ∈ Rn : ϕt (x) is defined for t ≤ 0 and ϕt (x) → 0 as t → −∞}, W s (0) = {x ∈ Rn : ϕt (x) is defined for t ≥ 0 and ϕt (x) → 0 as t → ∞} u s The local unstable set Wloc (0) and the local stable set Wloc (0) of 0 corresponding to a neighborhood U of 0 are defined by u Wloc (0) ≡ W u (0, U ) = {x ∈ W u (0) : ϕt (x) ∈ U, t ≤ 0}, s Wloc (0) ≡ W s (0, U ) = {x ∈ W s (0) : ϕt (x) ∈ U, t ≥ 0}.
Example 4.3.1 Lotka-Volterra two species competition model of case(iv) in Example 4.2.4. For the system µ ¶ x1 x01 = r1 x1 1 − − α1 x1 x2 K1 µ ¶ x2 x02 = r2 x2 1 − − α2 x1 x2 K2 x∗ = (x∗1 , x∗2 ) is saddle point with one-dimensional stable set W s (x∗ ) and onedimensional unstable set W u (x∗ ).
Fig.4.6 Example 4.3.2 Consider the following nonlinear equations ½ 0 x1 = −x1 x02 = x2 + x21
4.3. SADDLE POINT PROPERTY: STABLE AND UNSTABLE MANIFOLDS75 The equilibrium (0, 0) is a saddle since the linearized equation is ½ 0 x1 = −x1 x02 = x2
Fig.4.7 If we integrate the nonlinear equations, we obtain x1 (t) x2 (t)
= e−t x01 µ ¶ ¡ ¢2 1 1 = et x02 + (x01 )2 − e−2t x01 3 3
From these formulas, we see that © 0 ¡ 0 0¢ ª W u (0) = x = x1 , x2 : x01 = 0 ½ ¾ ¡ ¢ 1 ¡ ¢2 W s (0) = x0 = x01 , x02 : x02 = − x01 3
Consider the linear system x0 = Ax
(4.10)
where A ∈ Rn×n has k eigenvalues with negative real parts and n − k eigenvalues with positive real parts. From the real Jordan form, there exists a nonsingular matrix U = [u1 · · · uk , uk+1 · · · un ] ∈ Rn×n such that ¸ · A− 0 , (4.11) U −1 AU = 0 A+ where A− ∈ Rk×k with Reλ(A− ) < 0 and A+ ∈ R(n−k)×(n−k) with Reλ(−A+ ) < 0.
76
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Then there are constants K1 > 0, α > 0 such that |eA+ t | ≤ K1 eαt , t ≤ 0 |eA− t | ≤ K1 e−αt , t ≥ 0.
(4.12)
From (4.11), it follows that · A[u1 · · · uk , uk+1 · · · un ] = U
A− 0
0 A+
¸ .
Then for the linear system (4.10), 0 is a saddle point with W s (0) = spanhu1 · · · uk i, W u (0) = spanhuk+1 · · · un i, where u1 · · · uk (or uk+1 · · · un ) are eigenvectors or generalized eigenvectors associated with eigenvalues with negative (or positive) real parts. The stable set W s (0) and the unstable set W u (0) are invariant under A. Define the projection P : Rn → W u (0) and
Q : Rn → W s (0)
with P
à n X Ã
Q
! αi ui
i=1 n X
αi ui ,
i=k+1
! αi ui
n X
=
=
i=1
k X
αi ui .
i=1
Then P Rn = W u (0), QRn = W s (0) and à n ! à n ! X X αi ui = P αi Aui P Ax = P A i=1
=
n X
αi Aui = A
i=k+1
Ã
i=1 n X
αi ui
! = AP x
k+1
i.e. P A = AP . L n n Similarly we haveµQA = QRn . For any x ∈ Rn , P x = ¶ AQ. obviously R = P R 0 U y for some y = , v ∈ Rn−k and from (4.11) v · A t ¸ µ ¶ e − 0 0 −1 eAt P x = eAt U y = U U U y = U . eA+ t v 0 eA+ t Then from (4.12) |eAt P x| ≤ k U k |eA+ t ||v| = k U k |eA+ t ||U −1 P x| ≤ Hence it follows that
k U k K1 eαt k U −1 k |P x|, t ≤ 0 |eAt P | ≤ Keαt ,
t≤0
(4.13)
4.3. SADDLE POINT PROPERTY: STABLE AND UNSTABLE MANIFOLDS77 and similarly we have
|eAt Q| ≤ Ke−αt ,
t ≥ 0.
(4.14)
A basic lemma is the following. Lemma 4.3.1 If x(t), t ≤ 0 is a bounded solution of (4.8), then x(t) satisfies the integral equation Rt Rt (4.15) y(t) = eAt P y(0) + 0 eA(t−s) P f (y(s))ds + −∞ eA(t−s) Qf (y(s))ds If x(t), t ≥ 0 is a bounded solution of (4.8) then x(t) satisfies the integral equation Z t Z ∞ eA(t−s) Qf (y(s))ds − eA(t−s) P f (y(s))ds. y(t) = eAt Qy(0) + (4.16) 0
t
Conversely, if y(t), t ≤ 0 (or t ≥ 0) is a bounded solution of (4.15) (or (4.16)), then y(t) satisfies (4.8). Proof. Let y(t) = x(t), t ≤ 0 be a bounded solution of (4.8). Since P A = AP, QA = AQ, from variational of constant formula, for any τ ∈ (−∞, 0] , we have Z t
Qy(t) = eA(t−τ ) Qy(τ ) +
eA(t−s) Qf (y(s))ds
τ
From (4.14) and the assumption y(s) is bounded for s ≤ 0, and let τ → −∞, we obtain Z t Qy(t) = eA(t−s) Qf (y(s))ds −∞
Since
Z
t
At
P y(t) = e P y(0) +
eA(t−s) P f (y(s))ds
0
then from y(t) = P y(t) + Qy(t), we obtain (4.15). The proof for the case where x(t) ≥ 0, t ≥ 0 is bounded is similar. The converse statement is proved by direct computation. (Exercise 4.2) We say W u (0, U ) is a Lipschitz graph over P Rn if there is a neighborhood V ⊆ P Rn of 0 and a Lipschitz continuous function g : V → QRn such that W u (0, U ) = {(ξ, η) ∈ Rn : η = g(ξ), ξ ∈ V } . The set W u (0, U ) is said to be tangent to P Rn at 0 if |g(ξ)| → 0 as ξ → 0 in |ξ| W u (0, U ). We say W u (0, U ) is a C k (or analytic) graph over P Rn if the above function g is C k (or analytic). Similar definitions hold for W s (0, U ). Theorem 4.3.1 (Stable Manifold Theorem) If f satisfies (4.9) and Re(σ(A)) 6= 0, then there is a neighborhood U of 0 in Rn such that W u (0, U ) (or W s (0, U )) is a Lipschitz graph over P Rn (or QRn ) which is tangent to P Rn (or QRn ) at 0. Also, there are positive constants K1 , α1 such that if x ∈ W u (0, U ) (resp. W s (0, U )), then the solution ϕt (x) of (4.8) satisfies |ϕt (x)| ≤ K1 e−α1 |t| |x|,
t ≤ 0 (resp. t ≥ 0)
Furthermore, if f is a C k function (or an analytic function) in a neighborhood U of 0, then so are W u (0, U ) and W s (0, U ).
78
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Proof. We shall apply the contraction mapping principle. With the function ρ as in (4.9) and K, α in (4.14), choose δ > 0 so that 4Kρ(δ) < α,
8K 2 ρ(δ) < α.
(4.17)
Let S(δ) be the set of continuous function x : (−∞, 0] → Rn such that |x| = sup−∞≤t≤0 |x(t)| ≤ δ. Then the set S(δ) is a complete metric space with the metric induced by the uniform topology. For any y(·) ∈ S(δ) and any ξ ∈ P Rn with |ξ| ≤ δ/2K, define, for t ≤ 0 Z t Z t At A(t−s) (T (y, ξ)) (t) = e ξ + e eA(t−s) Qf (y(s))ds P f (y(s))ds + (4.18) 0
−∞
Next we want to show that T : S(δ) × {ξ ∈ P Rn : |ξ| ≤ δ/2K} → S(δ) is Lypschitz continuous and T (·, ξ) is a contraction mapping with contraction constant 1/2 for all ξ ∈ P Rn . From (4.9), (4.14), (4.18) for t ≤ 0, we have R0 Rt |T (y, ξ)(t)| ≤ Keαt |ξ| + t Keα(t−s) ρ(δ)|y(s)|ds + −∞ Ke−α(t−s) ρ(δ)|y(s)|ds ≤ K|ξ| + ρ(δ) · δ · K α1 + ρ(δ)δ K α < δ/2 + δ/2 = δ Thus T (·, ξ) : S(δ) → S(δ). Furthermore from (4.9),(4.13),(4.14), (4.17), and (4.18), we have
≤
|T (y1 , ξ)(t) − T (y2 , ξ)(t)| Z 0 Keα(t−s) |f (y1 (s)) − f (y2 (s))|ds t Z t + Keα(t−s) |f (y1 (s)) − f (y2 (s))|ds −∞
Z
0
Keα(t−s) ρ(δ)|y1 (s) − y2 (s)|ds
≤ t
Z
t
+
Keα(t−s) ρ(δ)|y1 (s) − y2 (s)|ds
−∞
≤
1 |y1 − y2 | 2
Therefore T (·, ξ) has a unique fixed point x∗ (·, ξ) in S(δ). The fixed point satisfies (4.15) and thus is a solution of (4.8) by Lemma 4.3.1. The function x∗ (·, ξ) is continuous in ξ. Also |x∗ (t, ξ)|
R0 ≤ Keαt |ξ| + Kρ(δ) t eα(t−s) |x∗ (s, ξ)|ds Rt +Kρ(δ) −∞ e−α(t−s) |x∗ (s, ξ)|ds
(4.19)
From inequality (4.19) and the next following Lemma 4.3.2, one can prove that |x∗ (t, ξ)| ≤ 2Keαt/2 |ξ|,
t≤0
(4.20)
4.3. SADDLE POINT PROPERTY: STABLE AND UNSTABLE MANIFOLDS79 This estimate shows x∗ (·, 0) = 0 and x∗ (0, ξ) ∈ W u (0). The similar estimate as above shows that ˜ ≤ 2Keαt/2 |ξ − ξ|, ˜ |x∗ (t, ξ) − x∗ (t, ξ)|
t≤0
(4.21)
In particular, x∗ (·, ξ) is Lypschitz in ξ. Since x∗ (t, ξ) satisfies Rt x∗ (t, ξ) = eAt ξ + 0 eA(t−s) P f (x∗ (s, ξ))ds Rt + −∞ eA(t−s) Qf (x∗ (s, ξ))ds,
(4.22)
set t = 0 in (4.22), we have Z x∗ (0, ξ) = ξ +
0
eA(−s) Qf (x∗ (s, ξ))ds
(4.23)
−∞
From (4.23) and ξ ∈ P Rn we have P x∗ (0, ξ) = ξ,
Fig.4.8 Thus we define Z ∗
0
g(ξ) = Qx (0, ξ) =
eA(−s) Qf (x∗ (s, ξ))ds.
(4.24)
−∞
Now we prove that W u (0, U ) is tangent to P IRn at 0. From (4.9) and x∗ (t, 0) ≡ 0, let δ = |x∗ (s, ξ)|, it follows that |f (x∗ (s, ξ)) − f (x∗ (s, 0))| ≤ ρ(|x∗ (s, ξ)|) · |x∗ (s, ξ) − x∗ (s, 0)| or |f (x∗ (s, ξ))| ≤ ρ(|x∗ (s, ξ)|) · |x∗ (s, ξ)| From (4.20) and (4.24) we have ¯Z 0 ¯ ¯ ¯ −As ∗ ¯ |g(ξ)| = ¯ e Qf (x (s, ξ))ds¯¯ −∞
80
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS Z
0
≤ −∞ 0
Z ≤
−∞ 0
Z ≤
° −As ° °e Q° · |f (x∗ (s, ξ))| ds ¡ ¢ αs Keαs ρ (|x∗ (s, ξ)|) |x∗ (s, ξ)|ds |x∗ (s, ξ)| ≤ 2Ke 2 |ξ|t ≤ 0 ¡ ¢ αs αs Keαs ρ 2Ke 2 |ξ| · 2Ke 2 |ξ|ds
−∞
Z
0
≤
2K 2 ρ(2K|ξ|)(
=
4K 2 ρ(2K|ξ|)|ξ| α
3
e 2 αs ds)|ξ|
−∞
Therefore from (4.17) 4K 2 |g(ξ)| = ρ(2K|ξ|) → 0 as |ξ| → 0. |ξ| α Lemma 4.3.2 Suppose that α > 0, γ > 0, K, L, M are nonnegative constants and u is a nonnegative bounded continuous function satisfying one of the inequalities Z t Z ∞ −αt −α(t−s) u(t) ≤ Ke +L e u(s)ds + M e−γs u(t + s)ds, t ≥ 0 (4.25) 0
Z u(t) ≤ Keαt + L
0 0
Z eα(t−s) u(s)ds + M
t
If β ≡ L/α +
M γ
0
eγs u(t + s)ds, t ≤ 0
(4.26)
−∞
< 1, then −1
u(t) ≤ (1 − β)−1 Ke−[α−(1−β)
L]|t|
(4.27)
Proof. It suffice to prove (4.25). We obtain (4.26) directly from (4.25) by changing variables t → −t, s → −s. Let δ =lim sup u(t). Since u is nonnegative, bounded, t→∞
δ < ∞. We claim that δ = 0, i.e. lim u(t) = 0. If not, δ > 0. Choose θ, β < θ < 1, t→∞
then there exists t1 > 0 such that u(t) ≤ θ−1 δ for t ≥ t1 . For t ≥ t1 Z t1 Z t u(t) ≤ Ke−αt + L e−α(t−s) u(s)ds + L e−α(t−s) u(s)ds 0 t1 Z ∞ e−γs ds +M θ−1 δ 0 µ ¶ Z t1 L M αs −αt −αt e u(s)ds + ≤ Ke + Le + θ−1 δ α γ 0 Let t → ∞. Then we have a contradiction: ¶ µ L M + θ−1 δ = βθ−1 δ < δ. lim sup u(t) ≤ α γ t→∞ Let v(t) =sup u(s). Since u(t) → 0 as t → ∞, for any t ∈ [0, ∞), there exists t1 ≥ t s≥t
such that v(t) = v(s) = u(t1 ),
t ≤ s ≤ t1
v(s) < v(t1 ) for s > t1
4.3. SADDLE POINT PROPERTY: STABLE AND UNSTABLE MANIFOLDS81
Fig.4.9 Then Z v(t) = u(t1 ) ≤ Ke−αt1 + L
t1
e−α(t1 −s) u(s)ds
Z 0∞
e−γs u(t1 + s)ds ¶ t1 −αt1 −α(t1 −s) ≤ Ke e u(s)ds 0 t Z ∞ +M e−γs v(t + s)ds 0 Z t Z t1 ≤ Ke−αt + L e−α(t−s) v(s)ds + Lv(t) e−α(t1 −s) ds 0 t Z ∞ +v(t)M e−γs ds 0 Z t −αt −αt ≤ Ke + Le eαs v(s)ds + βv(t) +M µZ t ¶ µZ −α(t1 −s) +L e u(s)ds + L
0
0
Let z(t) = eαt v(t). Then Z
t
z(t) ≤ K + L
z(s)ds + βz(t) 0
or
Z z(t) ≤ (1 − β)−1 K + (1 − β)−1 L
t
z(s)ds 0
Then from Gronwall’s mequality, it follows that ¡ ¢ z(t) ≤ (1 − β)−1 K exp (1 − β)−1 Lt Thus the estimate (4.27) follows. For the regularity of W u (0, U ), we observe from (4.23), (4.14) and (4.9) ¯ ¯ Z ¯ ∗ ¯ ¯ ¯ ≥ |ξ − ξ| ¯− ¯x (0, ξ) − x∗ (0, ξ) ¯ ¯
¯ ¯ ¯ ∗ ¯ ∗ ¯ ¯ Kη(δ)e · ¯x (s, ξ) − x (s, ξ)¯¯ds −∞ ¸ · 4K 2 η(θ) ¯ ¯ ≥ 1/2|ξ − ξ| ≥ |ξ − ξ| 1 − 3α 0
αs
82
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Thus the mapping ξ 7−→ x∗ (0, ξ) is 1 − 1 with continuous inverse. Hence W u (0, U ) is a Lipschitz manifold. To show W u (0, U ) is C k -manifold if f ∈ C k is not trivial. The reader may consult Henry’s book [1983]. The stable and unstable manifold are defined as follows s W s (0) = ∪ ϕt (Wloc (0, U )) t≤0
u
u W (0) = ∪ ϕt (Wloc (0, U )) t≥0
4.4
Orbital Stability
Let p(t) be a periodic solution of period T of the autonomous system x0 = f (x),
f : Ω ⊆ Rn → Rn .
(4.28)
It is inappropriate to consider the notion of asymptotic stability for the periodic solution p(t). For example, consider ϕ(t) = p(t + τ ), with τ > 0 small, then ϕ(t) is also a solution of (4.28), but it is impossible that |ϕ(t) − p(t)| → 0 as t → ∞ no matter how small is τ . Hence we instead consider the periodic orbit γ = {p(t) ∈ Rn : 0 ≤ t < T } and the concept of “orbital stability”. Definition 4.4.1 We say that a periodic orbit γ is orbital stable if given ε > 0 there exits δ > 0 such that ¡ ¢ dist ϕt (x0 ), γ < ε for t ≥ 0 provided dist(x0 , γ) < δ. If in addition, dist(ϕt (x0 ), γ) → 0 as t → ∞ for dist(x0 , γ) sufficiently small, then we say that γ is orbitally asymptotically stable. γ is unstable if γ is not stable. Now the next question is that how do we verify the orbital stability of γ besides the definition. Linearization about periodic orbit: Let y = x − p(t) Then y 0 (t)
= x0 (t) − p0 (t) = f (x(t)) − f (p(t)) = f (y + p(t)) − f (p(t))
(4.29)
= Dx f (p(t))y + h(t, y) where h(t, y) = 0(|y|) as y → 0 for all t. The linearized equation of (4.28) about p(t) is ½ 0 y = Dx f (p(t))y = A(t)y (4.30) A(t + T ) = A(t) We shall relate the characteristic multipliers of (4.30) to the orbital stability of γ. Let Φ(t) be the fundamental matrix of (4.30) with Φ(0) = I. Since p(t) is a solution of (4.28), then we have p0 (t) = f (p(t)) and
d 0 d p (t) = f (p(t)) = Dx f (p(t))p0 (t) = A(t)p0 (t). dt dt
4.4. ORBITAL STABILITY
83
Hence p0 (t) is a solution of (4.30) and it follows that p0 (t) = Φ(t)p0 (0) Setting t = T yields
p0 (0) = p0 (T ) = Φ(T )p0 (0) ,
thus 1 is an eigenvalue of Φ(T ) or 1 is a characteristic multiplier of (4.30). Assume µ1 , µ2 , · · · µn are characteristic multipliers of (4.30) with µ1 = 1. Then from Liouville’s formula µ1 · · · µn
=
µ2 · · · µn = det Φ(T ) ! ÃZ T = det Φ(0) exp tr (A(s))ds ÃZ = exp
0
!
T
tr (A(s))ds 0
We note that
µ A(t) = Dx f (p(t)) =
∂fi ∂xj
¶¯ ¯ ¯ ¯
. x=p(t)
Hence ÃZ µ2 · · · µu
=
T
exp ÃZ
=
0 T
exp
! ∂fn ∂f1 (p(t)) + · · · + (p(t))dt ∂x1 ∂xn ! div(f (p(t)))dt
0
Definition 4.4.2 We say γ is orbitally stable with asymptotic phase if for any nearby solution ϕ(t) we have |ϕ(t) − p(t + t0 )| → 0
as
t→∞
for some 0 ≤ t0 < T . Now we state the main theorem of this section. Theorem 4.4.1 If the (n − 1) characteristic multipliers µ2 · · · µn satisfy |µi | < 1, i = 2, · · · , n. Then γ is orbitally asymptotically stable and nearby solution ϕ(t) of (4.28) possesses an asymptotic phase with |ϕ(t) − p(t + t0 )| ≤ Le−αt , L, α > 0 for some t0 , 0 ≤ t0 < T . Corollary 4.4.1 For n = 2. If the periodic solution p(t) satisfies Z 0
T
∂f2 ∂f1 (p(t)) + (p(t))dt < 0 ∂x1 ∂x2
then γ is orbitally stable with asymptotic phase.
(4.31)
84
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Remark 4.4.1 Even for the case of n = 2, it is difficult to verify (4.31) since the integral depends on how much we know about the periodic orbit. Before we prove the main Theorem 4.4.1 where Poincar´e map will be introduced, we study several properties of nonlinear map f : RN → RN and consider the difference equation xn+1 = f (xn )
(4.32)
Definition 4.4.3 We say x ¯ is an equilibrium or fixed point of (4.32) if x ¯ = f (¯ x). Definition 4.4.4 We say that x ¯ is stable if for any ε > 0 there exists δ > 0 such that for any x0 with |x0 − x ¯| < δ we have |xn − x ¯| < ε for all n ≥ 0. If in addition, xn → x ¯ as n → ∞ for x0 sufficiently close to x ¯ then we say x ¯ is asymptotically stable. x ¯ is unstable if x ¯ is not stable. Linearization of the map: xn+1 − x ¯ where g(y) satisfies
|g(y)| |y|
= =
f (xn ) − f (¯ x) Dx f (¯ x)(xn − x ¯) + g(xn − x ¯)
(4.33)
→
0 as y → 0
(4.34)
Let zn = xn − x ¯. The linearized equation of (4.32) about x is zn+1 = Azn ,
A = Dx f (¯ x)
(4.35)
From (4.35) we have zn = An z0 If k A k< 1 or equivalently |λ| < 1 for all λ ∈ σ(A) then zn → 0 as n → ∞. Lemma 4.4.1 If all eigenvalues λ of Dx f (¯ x) satisfy |λ| < 1, then the fixed point x ¯ is asymptotically stable. Furthermore if k x0 − x ¯ k is sufficiently small, then k xn − x ¯ k≤ e−αn k x0 − x ¯k
for some α > 0,
for all n ≥ 0
Proof. (Exercises) Now let’s consider (4.28) and its periodic solution p(t) with period T and periodic P n orbit γ. Take a local cross section ⊆ R of dimension n − 1. The hypersurface P need not be planar, but it must be chosen in P P the way that the flow of (4.8) is tranversal to i.e. f (x) · n(x) = 6 0 for all x ∈ where n(x) is the P P normal vector to at x or equivalently the vector field f (x) is not trangentPto at x. Without loss of generality, wePassume the periodic orbit γ intersect at point p∗ where p∗ = p(0). Let U ⊆ be a small neighborhood of p∗ . Let’s define the first return P map (also called Poincare’ map) P : U → X P (ξ) = ϕτ (ξ) ∈ P where τ = τ (ξ) is the first time the trajectory {ϕt (ξ)}t≥0 returns to . Obviously, p∗ is a fixed point of P . The stability of the fixed point p∗ will reflect the
4.4. ORBITAL STABILITY
85
orbital stability of the periodic orbit γ. We shall show that Dx P (p∗) has eigenvalues λ1 · · · λn−1 which are exactly the characteristic multipliers µ2 , · · · µn of (4.30). Without loss of generality, we assume p∗ = 0. Then p(0) = 0 and p0 (0) = f (0). Let Π be the hyperplane {x ∈ Rn : x · f (0) = 0} and ϕ(t, ξ) be the solution of (4.28) with ϕ(0, ξ) = ξ. Lemma 4.4.2 For ξ ∈ Rn , |ξ| sufficiently small, there exists a unique real-value function t = τ (ξ), the first return time, with τ (0) = T, ϕ(τ (ξ), ξ) ∈ Π. Proof. Let F (t, ξ) = ϕ(t, ξ) · f (0) and consider the equation F (t, ξ) = 0 Since F (T, 0) = ϕ(T, 0) · f (0) = 0 ¯ ∂F ¯¯ = f (ϕ(T, 0)) · f (0) = |f (0)|2 > 0, ∂t ¯t=T, ξ=0 from Implicit Function Theorem, there exists a neighborhood of ξ = 0, t can be expressed as a function of ξ with t = τ (ξ), T = τ (0) and F (τ (ξ), ξ) = 0. Then ϕ(τ (ξ), ξ) · f (0) = 0 and it follows that ϕ(τ (ξ), ξ) ∈ Π. Lemma 4.4.3 The eigenvalues λ1 · · · λn−1 of DP (p∗ ) are exactly the characteristic multipliers µ2 · · · µn of (4.30). Proof. The first return map P : U ⊆ Π → Π satisfies P (ξ) = ϕ(τ (ξ), ξ), P (0) = ϕ(T, 0) = p(T ) = 0.
(4.36)
Without loss of generality we may assume f (0) = (0 · · · 0, 1)T (why?) and hence Π = {x = (x1 · · · xn ) : xn = 0}. Since d dt ϕ(t, ξ)
= f (ϕ(t, ξ)) ϕ(0, ξ) = ξ
(4.37)
Differentiating (4.37) with respect to ξ yields Φ0 (t, ξ) = Dx f (ϕ(t, ξ))Φ(t, ξ) Φ(0, ξ) = I
(4.38)
where Φ(t, ξ) = ϕξ (t, ξ). Setting ξ = 0 in (4.38) yields Φ0 (t, 0) = Dx f (p(t))Φ(t, 0) Φ(0, 0) = I Hence Φ(t, 0) is the fundamental matrix of (4.30) with Φ(0, 0) = I and the characteristic multipliers. µ1 = 1, µ2 , · · · µn of (4.30) are eigenvalues of Φ(T, 0). Since p0 (0) = f (0) and p0 (0) = Φ(T, 0)p0 (0), it follows that f (0) = Φ(T, 0)f (0). From f (0) = (0 · · · 0, 1)T , it follows that the last column of Φ(T, 0) is (0 · · · 0, 1)T . Now we consider the Poincar´e map P in (4.36). Compute DP (0). Differentiating (4.36) with respect to ξ, we obtain DP (ξ) = ϕ0 (τ (ξ), ξ)
d τ (ξ) + ϕξ (τ (ξ), ξ) dξ
(4.39)
86
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Setting ξ = 0 in (4.39), we have ¯ dτ (ξ) ¯¯ DP (0) = ϕ (T, 0) + Φ(T, 0) dξ ¯ξ=0 ¯ dτ (ξ) ¯¯ = f (0) · + Φ(T, 0) dξ ¯ξ=0 µ ¶ ∂τ ∂τ (0), · · ·, (0) + Φ(T, 0) = (0, · · · , 0, 1)T ∂ξ1 ∂ξn 0 0 ··· · ···· 0 0 .. + 0 · · · · · · · · 0 = . 0 ∂τ ∂τ x x x x 1 ∂ξ1 (0), · · · ∂ξn (0) 0 .. . .. = . 0 x x x x 0
Hence the eigenvalues of DP (0) are the characteristic multipliers of (4.30). Proof of Theorem 4.4.1: Let p(0) = 0 and P : Π ∩ B(0, δ) → Π be the Poincar´e-map. From the assumption |µi | < 1, i = 2, · · · , n and Lemma 4.4.3, it follows that 0 is an asymptotically stable fixed point for the first return map P on U ⊆ Π. Thus from Lemma 4.4.1, if k ξ0 k is sufficiently small, then k ξn k≤ e−nα k ξ0 k→ 0 as n → ∞ for some α > 0 where ξn = P n ξ0 . From the continuous dependence on initial data, given ε > 0 there exists δ = δ(ε) > 0 such that if dist(x0 , γ) < δ then there exists τ 0 = τ 0 (x0 ), ϕ(t, x0 ) exists for 0 ≤ t ≤ τ 0 , ϕ(τ 0 , x0 ) ∈ Π and |ϕ(τ 0 , x0 )| < ε.
Fig.4.10 Let ξ0 = ϕ(τ 0 , x0 ) ∈ Π and τ1
= τ 0 + τ (ξ0 ) such that .. .
τn
= τ n−1 + τ (ξn−1 )
ξ1 = ϕ (τ (ξ0 ), ξ0 ) ∈ Π
such that
ξn = ϕ (τ (ξn−1 ), ξn−1 ) ∈ Π
4.4. ORBITAL STABILITY
87
i.e., τ k is the travelling time from ξ0 to ξk . We claim that n
τ limn→∞ nT = 1 and there exists t0 ∈ R n |τ and − (nT + t0 )| ≤ L1 e−αn k ξ0 k .
such that t0 = limn→∞ (τ n − nT ) (4.40)
We first show that {τ n − nT } is a Cauchy sequence. Consider |(τ n − nT ) − (τ n−1 − (n − 1)T )|
= |τ n − τ n−1 − T | = |τ (ξn−1 ) − T | = |τ (ξn−1 ) − τ (0)| ¯ ¯ ¯ dτ ¯ ¯ ≤ sup ¯ (θ)¯¯ k ξn−1 k= L0 k ξn−1 k dξ θ∈B(0,δ)
≤ L0 e−α(n−1) k ξ0 k Hence for m > n we have |(τ m − mT ) − (τ n − nT )| ≤
|(τ m − mT ) − (τ m−1 − (m − 1)T | + · · · + |(τ n+1 − (n + 1)T ) − (τ n − nT )| ³ ´ ≤ L0 k ξ0 k e−α(m−1) + · · · + e−αn
≤
L0 k ξ 0 k
e−αn < ε if n ≥ N 1 − e−α
Then lim (τ n − nT ) exists and equals to a number, say, to and n→∞
|τ n − (nT + t0 )|
= ≤
≤ ≤
|t0 − (τ n − nT )| |t0 − (τ m − mT )| + |(τ m − mT ) − (τ m−1 − (m − 1)T )| + · · · + |(τ n+1 − (n + 1)T ) − (τ n − nT )| ∞ X |(τ (k+1) − (k + 1)T ) − (τ k − kT )| k=n ∞ X
L0 e−α(k−1) kξ0 k ≤
k=n
L0 e−αn k ξ0 k . 1 − e−α
Thus we prove the claim (4.40). Next we want to show that |ϕ(t + t0 , x0 ) − p(t)| ≤ Le
−α T t
|ξ0 |.
Let 0 ≤ t ≤ T , |ϕ(t + τ n , x0 ) − p(t)|
= ≤
|ϕ(t, ξn ) − ϕ(t, 0)| sup
k ϕξ (t, θ) k |ξn | ≤ L2 |ξn |
0≤t≤T, θ∈B(0,δ)
≤
L2 e−nα |ξ0 |
and
≤
|ϕ(t + τ n , x0 ) − ϕ(t + nT + t0 , x0 )| |ϕ0 (t˜, x0 )| |(t + τ n ) − (t + nT + t0 )|
≤
L3 |τ n − (nT + t0 )| ≤ L3 e−αn |ξ0 |
88
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Hence for 0 ≤ t ≤ T , we have |ϕ(t + nT + t0 , x0 ) − p(t)| ≤ |ϕ(t + nT + t0 , x0 ) − ϕ(t + τ n , x0 )| +|ϕ(t + τ n , x0 ) − p(t)| ≤ (L2 + L3 )e−αn k ξ0 k
(4.41)
For any t, we assume nT ≤ t ≤ (n + 1)T for some n. Then 0 ≤ t − nT ≤ T and t/T − 1 ≤ n ≤ t/T . Substituting the t in (4.41) by t − nT , we have =
|ϕ(t + t0 , x0 ) − p(t − nT )| |ϕ(t + t0 , x0 ) − p(t)|
≤
(L2 + L3 )eαn k ξ0 k ≤ (L2 + L3 )e−α( T −1) k ξ0 k
t
Hence |ϕ(t + t0 , x0 ) − p(t)| ≤ Le
4.5
−α T t
k ξ0 k for some L > 0.
Travelling Wave Solutions
In this section we shall apply stable manifold theorem to obtain the existence of travelling wave solutions for the scalar diffusion-reaction partial differential equation ut = Kuxx + f (u)
(4.42)
A travelling wave solution u(x, t) of PDE (4.42) is a special solution of the form u(x, t) = U (x + ct), c ∈ R,
(4.43)
where c is called the travelling speed. Let ξ = x + ct and substitute (4.43) into (4.42), we obtain d2 U dU K 2 −c + f (U ) = 0. (4.44) dξ dξ We shall study two well-known nonlinear diffustion equations, one is the Fisher’s equation, the other is the ”bistable” equation. Fisher’s equations (or KPP equation): In 1936 famous geneticist and statistician R. A. Fisher proposed a model to describe the spatial distribution of an advantageous gene. Consider a one locus, two allele genetic trait in a large randomly mating population. Let u(x, t) denote the frequency of one gene, say, A, in the gene pool at position x. If A is dorminant and different sites are connected by random migration of the organism, then the gene pool can be described by a diffusion model (the details of the derivation of the model see [AW]): ut = Kuxx + su(1 − u).
(F isher0 sequation)
We shall consider the general model ut = Kuxx + f (u),
(4.45) 0
f (u) > 0, 0 < u < 1, f (0) > 0, f (0) = f (1) = 0.
4.5. TRAVELLING WAVE SOLUTIONS
89
In 1936, Kolmogoroff, Petrovsky and Piscounov analyzed Fisher’s equation by looking for steady progressing waves (4.43) with boundary conditions u(−∞, t) = 0, u(+∞, t) = 1. Then we look for a solution U (ξ) of (4.44) with boundary conditions U (−∞) = 0, U (+∞) = 1. Let W = dU dξ , then (4.44) can be rewritten as the following first order system: dU =W dξ dW c f (U ) = W− dξ K K (U (−∞), W (−∞)) = (0, 0), (U (+∞), W (+∞)) = (1, 0).
(4.46)
In (4.46), the system has two equilibria (0, 0) and (1, 0). The variational matrix of (4.46) is · ¸ 0 1 0 M (U, W ) = . (U ) −f K , Kc For equilibrium (0, 0),
· M (0, 0) =
0
, ,
0
− f K(0)
1
¸
c K
The eigenvalue λ of M (0, 0) satisfies f 0 (0) c λ+ =0 K K
λ2 − and hence there are two eigenvalues
λ± = Multiplying
dU dξ
c K
±
q¡ ¢ c 2 K
0
− 4 f K(0)
2
.
on both sides of (4.44) and integrating from −∞ to +∞ yields R1 c = R +∞0 −∞
f (u)du 2
(U 0 (ξ)) dξ
>0.
p p Hence if c ≥ 4Kf 0 (0) then (0, 0) is a unstable node; if 0 < c < 4Kf 0 (0) then (0, 0) is a unstable spiral. For the equilibrium (1, 0), µ M (1, 0) =
0
1
0
c K
− f K(1)
¶ .
The eigenvalues λ satisfy λ2 −
c f 0 (1) λ+ = 0, f 0 (1) < 0 K K
Hence (1, 0) is a saddle point. p Let c0 = 4Kf 0 (0). If 0 < c < c0 then we claim there are no hetroclinic orbits connecting (0, 0) and (1, 0). Since (0, 0) is a unstable spiral and the solution U (ξ) of (4.46) is positive (biological restriction). We shall show for each c ≥ c0 , there is
90
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
a hetroclinic orbit connecting unstable node (0, 0) and saddle point (1, 0). First we reverse the time ξ → −ξ, then (4.46) becomes dU = −W dξ dW c f (U ) =− W + dξ K K (U (+∞), W (+∞)) = (0, 0), (U (−∞), W (−∞)) = (1, 0).
(4.47)
Choose m, p > 0 such that m>
f (U ) c
and ρ> Let
for all
0 ≤ U ≤ 1,
K 1 f (U ) + sup . c c 0
½ Ω=
(4.48)
W (U, W ) : 0 ≤ W ≤ m, ≤U ≤1 ρ
¾
Fig.4.11 We claim that with the choices of m and ρ in (4.48), the region Ω is positively − → − invariant under the system (4.47). Let F be the vector field of (4.47) and → n be the outward normal vector of ∂Ω. Let check → (i) For the line W = ρU , 0 ≤ W ≤ m, − n = (−1, ρ), from (4.48) ¶ µ f (U ) c − → − F ·→ n = W +ρ − W + K K µ ¶ c f (U ) = ρU 1 − ρ + <0 K KU → (ii) For U = 1, 0 < W < m, − n = (1, 0), − → → F ·− n = −W < 0.
4.5. TRAVELLING WAVE SOLUTIONS (iii) For W = m,
m ρ
91
→ ≤ U ≤ 1, − n = (0, 1), from (4.48) we have c f (U ) − → → F ·− n =− W+ <0 K K
→ (iv) For W = 0, 0 ≤ U ≤ 1, − n = (0, −1), f (U ) → − − F ·→ n =− <0 K Hence Ω is positively invariant under (4.47). Now we verify the tangent vector to the unstable manifold of (4.47) at saddle point (1, 0) points to the interior of Ω. Let λ1 > 0, λ2 < 0 be the eigenvalues of the variational matrix of (4.47) at (1, 0). Then the tangent vector (x1 , x2 )T to the unstable manifold at (1, 0) satisfies µ ¶µ ¶ µ ¶ 0 −1 x1 x1 0 = λ1 . f (1) −c x2 x2 K K Then −x2 = λ1 x1 . Thus the unstable manifold points to the interior of Ω. Since Ω is positively invariant and thus dU dξ = −W < 0 for all ξ and hence the unstable manifold will converges to (0, 0) for all c ≥ c0 . This establishes the existence of travelling wave solutions. We note that the wave is monotonic due to the fact dU dξ = W > 0.
Fig.4.12 Bistable equations: Consider the following scalar diffusion-reaction PDE ut = uxx + f (u)
(4.49)
where f (u) satisfies f (0) = f (1) = 0, f 0 (0) < 1, f 0 (1) < 0 and there exists α, 0 < α < 1 such that f (x) < 0 for 0 < x < α and f (x) > 0, α < x < 1.
(4.50)
92
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Fig.4.13 The equation (4.49) with (4.50) can be derived from nerve conduction (See [Keener], [AH], [Fife Mcloud]). The typical examples of f (u) are f (u) = au(u − 1)(α − u), 0 < α < 1 and f (u) = −u + H(V − α), 0 < α < 1 where H(v) is a Heavside function, ½ H(v) =
1 0
, ,
v≥0 v<0
We note that u = 0 and u = 1 are stable steady states of the ordinary equation du = f (u). dt We look for the travelling wave solution u(x, t) = U (x + ct). Let ξ = x + ct. Then from (4.49) we have d2 U dU −c + f (U ) = 0. dξ 2 dξ Set W =
dU dξ ,
(4.51)
then (4.51) can be rewritten as dU =W dξ
(4.52)
dW = cW − f (U ) dξ We want to find travelling front solutions connecting (0, 0) to (1, 0), i.e., limξ→−∞ (U (ξ), W (ξ)) = (0, 0) and (4.53) limξ→+∞ (U (ξ), W (ξ)) = (1, 0).
4.5. TRAVELLING WAVE SOLUTIONS
93
From the variational matrix M (U, W ) of (4.52), µ ¶ 0 1 M (U, W ) = −f 0 (U ) c
(4.54)
, it is easy to verify that (0, 0) and (1, 0) are both saddle points of (4.52) due to the fact f 0 (0) < 0, f 0 (1) < 0. Multiplying W on both sides of (4.51) and integrating from −∞ to ∞, we obtain from (4.53) Z
Z
∞
2
1
W (ξ)dξ =
c −∞
From now on we assume
f (u)du
(4.55)
0
Z
1
f (u)du > 0
(4.56)
0
then c > 0. Consider the unstable manifold Γc of (0, 0). Obviously when c = 0, Γc cannot reach U = 1 otherwise from (4.51) we obtain Z
1
f (u)du + W 2 (−∞) = 0
0
which is a contradiction to the assumption (4.56). Since the slope of the unstable manifold Γc at (0, 0) is the positive eigenvalue λ+ of M (0, 0). p c + c2 − 4f 0 (0) λ+ = > c. 2 Let K = sup0 0 be a fixed number. Consider the line W = σU in the phase plane. (see Fig. )
Fig.4.14 On the line W = σU , large, then
dW dU
(U ) (U ) = c − fW = c − fσW ≥ c− K σ . Choose c > 0 sufficiently
K dW ≥c− > σ. dU σ
94
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Then for large c the unstable manifold Γc is above the line W = σU . By continuous dependence on the parameter c, there exists c0 such that the unstable manifold Γc0 of (0, 0) reach the saddle point (1, 0). Next we show the uniqueness of travelling wave. Suppose we have two travelling waves connecting (0, 0) and (1, 0) with two different speed c1 and c0 . Assume c1 > c0 . From the (4.52) we have f (U ) dW =c− dU W
(4.57)
It follows that W (U, c1 ) ≥ W (U, c0 )
Fig.4.15 From (4.54), the slope of the stable manifold of (1, 0) is p c − c2 − 4f 0 (0) λ− = < 0. 2 Hence p c2 − 4f 0 (1) − c |λ1 | = and 2 " # 1 c d|λ1 | p = −1 <0 dc 2 c2 − 4f 0 (1) However from Fig. , |λ− (c1 )| > |λ− (c0 )|, we obtain a contradiction.
(4.58)
4.5. TRAVELLING WAVE SOLUTIONS
95
Exercises 00
Exercise 4.1 For the equation v (t) + t sin v(t) = 0, show that v(t) ≡ π is a unstable solution and v(t) ≡ 0 is asymptotically stable. Exercise 4.2 Prove the converse statement in Lemma 4.3.1 (a) Let B ∈ R2×2 . Show that for the linear difference equation xn+1 = Bxn , fixed point 0 is asymptotically stable if and only if | det B| < 1 and |TraceB| < 1 + det B. (b) Do the stability analysis of the map xn+1
=
yn+1
=
ayn 1 + xn 2 bxn 1 + yn 2
,
a, b ∈ R
Exercise 4.3 Do the stability analysis of the Brusselator equations dx dt dy dt
= a − (b + 1)x + x2 y
,
a, b > 0
= bx − x2 y
Exercise 4.4 Suppose a, b, c are nonnegative continuous functions on [0, ∞), u is a nonnegative bounded continuous solution of the inequality Z t Z ∞ u(t) ≤ a(t) + b(t − s)u(s)ds + c(s)u(t + s)ds, t ≥ 0, 0
0
R∞
R∞ and a(t) → 0, b(t) → 0 as t → ∞, 0 b(s)ds < ∞, 0 c(s)ds < ∞. Prove that u(t) → 0 as t → ∞ if Z ∞ Z ∞ b(s)ds + c(s)ds < 1. 0
0
0
n
Exercise 4.5 Let x = f (x), x ∈ R with flow ϕt (x) defined for all t ∈ R, x ∈ Rn . Show that if Trace(Df (x)) = 0 for all x ∈ Rn , then the flow ϕt (x) preserves volume, i.e., vol(Ω) = vol(ϕt (Ω)) for all t > 0. Exercise 4.6 Consider x01
=
−x2 + x1 (1 − x21 − x22 )
x02
=
x1 + x2 (1 − x21 − x22 )
Show that (cos t, sin t) is a periodic solution. Compute the characteristic multipliers of the linearized system and discuss the orbital stability. Exercise 4.7 Prove Lemma 4.4.1. Exercise 4.8 Consider Fibonacci sequence {an } satisfying a0 = 0, a1 = 1, an = an−1 + an−2 , n ≥ 2. Find a formula for an . Show that an increases like a geometric
96
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
progression and find lim
n→∞
ln an n .
Exercise 4.9 Do the stability analysis for the following Henon map xn+1 yn+1
= A − Byn − x2n , A, B ∈ R = xn
4.5. TRAVELLING WAVE SOLUTIONS
97
References [HA ] Hartman : Ordinary Differential Equations [HSU ] S. B. Hsu, P.Waltman and S.Hubbel, Competing Predators, SIAM J.Applied Math. 1978 [W ] P. Waltman Competition Models in Population Biology CBMS vol 45, SIAM 1983 [H ] J. Hale : Ordinary Differential Equations [H1 ] J. Hale : Asymptotic Behavior of Dissipative System [He ] D. Henry : Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, Vol. 840, Springer-Verlag. New York, New York, Heidelberg, Berlin (1981). [R ] Clark Robinson : Dynamical systems, Stability, Symbolic Dynamics, and Chaos. (1994)
98
CHAPTER 4. STABILITY OF NONLINEAR SYSTEMS
Chapter 5
METHOD OF LYAPUNOV 5.1
An Introduction to Dynamical System
Example 5.1.1 Consider the autonomous system x0 = f (x), f : D ⊆ Rn → Rn
(5.1)
Let ϕ(t, x0 ) be the solution of (5.1) with initial x(0) = x0 . Then it follows that (i) ϕ(0, x0 ) = x0 (ii) ϕ(t, x0 ) is a continuous function of t and x0 . (iii) ϕ(t + s, x0 ) = ϕ(s, ϕ(t, x0 )) The property (i) is obvious; the property (ii) follows directly from the property of continuous dependence on initial conditions. Property (iii) which is called group property, follows from uniqueness of ODE and the fact that (5.1) is an autonomous system. We call ϕ : R × Rn → Rn is the flow induced by (5.1). Remark 5.1.1 For nonautonomous system x0 = f (t, x) then (iii) does not hold. For generality we shall consider flow defined on metric space M . In Example 5.1, M = Rn . In fact, in many applications of dynamical system to partial differential equations, functional differential equations, the metric space M is a Banach space and the flow ϕ is a semiflow, i.e., ϕ : R+ × M → M satisfying (i), (ii), (iii). Now let (M, ρ) be a matrix space. Definition 5.1.1 We call a map π : M × R → M a continuous dynamical system if (i) π(x, 0) = x (ii) π(x, t) is continuous in x and t. (iii) π(x, t + s) = π(π(x, t), s), x ∈ M, t, x ∈ R. We many interpret π(x, t) as the position of the particle at time t when the initial (i.e. time = 0) position is x. Remark 5.1.2 The discrete dynamical system is defined as a continuous map π : M × Z + → M satisfying (i), (ii), (iii) where Z + = {n : n ≥ 0, n is an integer }. 99
Typical examples of discrete dynamical systems are the iteration xn+1 = f(xn), xn ∈ Rd, and the Poincaré map x0 ↦ ϕ(ω, x0), where ϕ(t, x0) is the solution of the periodic system x' = f(t, x), x(0) = x0, with f(t, x) = f(t + ω, x). The next lemma says that property (ii) implies continuous dependence on initial data.

Lemma 5.1.1 Given T > 0 and p ∈ M. For any ε > 0 there exists δ > 0 such that ρ(π(p, t), π(q, t)) < ε for 0 ≤ t ≤ T whenever ρ(p, q) < δ.

Proof. If not, then there exist {qn}, qn → p, and {tn}, |tn| ≤ T, and α > 0 with ρ(π(p, tn), π(qn, tn)) ≥ α. Without loss of generality we may assume tn → t0. Then

0 < α ≤ ρ(π(p, tn), π(qn, tn)) ≤ ρ(π(p, tn), π(p, t0)) + ρ(π(p, t0), π(qn, tn)).    (5.2)

From (ii), the right-hand side of (5.2) approaches zero as n → ∞. This is a contradiction.

Notations:
γ+(x) = {π(x, t) : t ≥ 0} is the positive orbit through x.
γ−(x) = {π(x, t) : t ≤ 0} is the negative orbit through x.
γ(x) = {π(x, t) : −∞ < t < ∞} is the orbit through x.

Definition 5.1.2 We say a set S ⊆ M is positively (negatively) invariant under the flow π if for any x ∈ S, π(x, t) ∈ S for all t ≥ 0 (t ≤ 0), i.e., π(S, t) ⊆ S for all t ≥ 0 (for all t ≤ 0). S is invariant if S is both positively and negatively invariant, i.e., π(S, t) = S for −∞ < t < ∞.

Lemma 5.1.2 The closure of an invariant set is invariant.

Proof. Let S be an invariant set and S̄ its closure. If p ∈ S, then π(p, t) ∈ S ⊆ S̄ for all t. If p ∈ S̄ \ S, then there exists {pn} ⊂ S such that pn → p. For each t ∈ R we have lim_{n→∞} π(pn, t) = π(p, t). Since π(pn, t) ∈ S, it follows that π(p, t) ∈ S̄. Hence π(S̄, t) ⊆ S̄.

Definition 5.1.3 We say a point p ∈ M is an equilibrium or a rest point of the flow π if π(p, t) ≡ p for all t. If π(p, T) = p for some T > 0 and π(p, t) ≠ p for all 0 < t < T, then we say {π(p, t) : 0 ≤ t ≤ T} is a periodic orbit.

Example 5.1.2 Rest points and periodic orbits are (fully) invariant. We note that positive invariance does not necessarily imply negative invariance. For instance, if an autonomous system x' = f(x) satisfies f(x) · n(x) < 0 for all x ∈ ∂Ω, where n(x) is the outward normal and Ω is a bounded domain in Rn, then Ω is positively invariant, but not necessarily negatively invariant.

Lemma 5.1.3
(i) The set of rest points is closed.
(ii) No trajectory enters a rest point in finite time.

Proof. (i) Trivial.
(ii) Suppose π(p, T) = p* where p* is a rest point and p ≠ p*. Then p = π(p*, −T) = p*. This is a contradiction.

Lemma 5.1.4 (i) If for any δ > 0 there exists p ∈ B(q, δ) such that γ+(p) ⊂ B(q, δ), then q is a rest point.
(ii) If lim_{t→∞} π(p, t) = q, then q is a rest point.

Proof. We shall prove (i); part (ii) follows directly from (i). Suppose q is not a rest point; then π(q, t0) ≠ q for some t0 > 0. Let ρ(q, π(q, t0)) = d > 0. From continuity of the flow π, there exists δ, 0 < δ < d/2, such that if ρ(p, q) < δ then ρ(π(q, t), π(p, t)) < d/2 for all |t| ≤ t0. By the hypothesis of (i) there exists p ∈ B(q, δ) such that γ+(p) ⊂ B(q, δ). Then

d = ρ(q, π(q, t0)) ≤ ρ(q, π(p, t0)) + ρ(π(p, t0), π(q, t0)) < δ + d/2 < d/2 + d/2 = d.

This is a contradiction.

Next we introduce the notion of limit sets.

Definition 5.1.4 We say p is an omega limit point of x if there exists a sequence {tn}, tn → +∞, such that π(x, tn) → p. The set ω(x) = {p : p is an omega limit point of x} is called the ω-limit set of x. Similarly, we say p is an alpha limit point of x if there exists a sequence {tn}, tn → −∞, such that π(x, tn) → p. The set α(x) = {p : p is an alpha limit point of x} is called the α-limit set of x.

Remark 5.1.3 ω(x) represents where the positive orbit γ+(x) ends up, while α(x) represents where the negative orbit γ−(x) came from. We note that α and ω are the first and last letters of the Greek alphabet.

Remark 5.1.4
ω(x) = ⋂_{t≥0} cl( ⋃_{s≥t} π(x, s) ),
α(x) = ⋂_{t≤0} cl( ⋃_{s≤t} π(x, s) ).
Theorem 5.1.1 ω(x) and α(x) are closed, invariant sets.
Proof. We shall prove the case of ω(x) only. First we show that ω(x) is invariant. Let q ∈ ω(x) and τ ∈ R. We want to show π(q, τ) ∈ ω(x). Let π(x, tn) → q as tn → +∞. By continuity of π, it follows that π(x, tn + τ) = π(π(x, tn), τ) → π(q, τ) as tn → +∞. Thus π(q, τ) ∈ ω(x), and ω(x) is invariant.

Next we show ω(x) is closed. Let qn ∈ ω(x) and qn → q as n → ∞. We want to show q ∈ ω(x). Since qn ∈ ω(x), there exists τn ≥ n such that ρ(π(x, τn), qn) < 1/n. From qn → q it follows that for any ε > 0 there exists N = N(ε) such that ρ(qn, q) < ε/2 for n ≥ N(ε). Thus for n ≥ max(N(ε), 2/ε),

ρ(π(x, τn), q) ≤ ρ(π(x, τn), qn) + ρ(qn, q) < ε/2 + ε/2 = ε.

Thus lim_{n→∞} π(x, τn) = q and q ∈ ω(x).
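For the discrete dynamical system xn+1 = f(xn) introduced earlier, ω-limit sets can be observed directly. A sketch using the logistic map, with the parameter value 3.2 chosen (an illustrative assumption) so that ω(x) is an attracting period-2 orbit:

```python
f = lambda x: 3.2 * x * (1.0 - x)   # logistic map; r = 3.2 gives a 2-cycle

x = 0.3
for _ in range(2000):               # discard the transient
    x = f(x)
orbit = set()
for _ in range(100):                # record pi(x, n) after the transient
    x = f(x)
    orbit.add(round(x, 9))
assert len(orbit) == 2              # omega(x) is a periodic orbit of period 2
p, q = sorted(orbit)
assert abs(f(p) - q) < 1e-8 and abs(f(q) - p) < 1e-8   # omega(x) is invariant
```

The two recorded points form exactly the closed invariant set promised by Theorem 5.1.1.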
Theorem 5.1.2 If the closure of γ+(p) is compact, then ω(p) is nonempty, compact and connected. Furthermore, lim_{t→∞} ρ(π(p, t), ω(p)) = 0.

Proof. Let pk = π(p, k). Since the closure Cl(γ+(p)) is compact, there exists a convergent subsequence {pkj}, pkj → q ∈ Cl(γ+(p)). Then q ∈ ω(p), and hence ω(p) is nonempty. The compactness of ω(p) follows directly from the facts that Cl(γ+(p)) = γ+(p) ∪ ω(p), that ω(p) is closed, and that Cl(γ+(p)) is compact.

We prove that ω(p) is connected by contradiction. Suppose on the contrary that ω(p) is disconnected. Then ω(p) = A ∪ B, a disjoint union of two closed subsets A, B of ω(p). Since ω(p) is compact, A and B are compact. Then d = ρ(A, B) > 0, where ρ(A, B) = inf_{x∈A, y∈B} ρ(x, y). Consider the neighborhoods N(A, d/3) and N(B, d/3) of A and B respectively. There exist tn → +∞ and τn → +∞, τn > tn, such that π(p, tn) ∈ N(A, d/3) and π(p, τn) ∈ N(B, d/3). Since ρ(π(p, t), A) is a continuous function of t, from the inequalities

ρ(π(p, τn), A) ≥ ρ(A, B) − ρ(π(p, τn), B) > d − d/3 = 2d/3    and    ρ(π(p, tn), A) < d/3,

we have ρ(π(p, tn*), A) = d/2 for some tn* ∈ (tn, τn). {π(p, tn*)} contains a convergent subsequence, π(p, tn_k*) → q ∈ ω(p). However, ρ(q, A) = d/2 implies q ∉ A and q ∉ B, and we obtain the desired contradiction.

Next we prove that a trajectory is asymptotic to its omega limit set. If not, then there exist a sequence {tn}, tn → +∞, and α > 0 such that ρ(π(p, tn), ω(p)) ≥ α. Then there exists a convergent subsequence of {π(p, tn)}, π(p, tn_k) → q ∈ ω(p), and we obtain a contradiction:

0 = ρ(q, ω(p)) = lim_{k→∞} ρ(π(p, tn_k), ω(p)) ≥ α.
Example 5.1.3 Consider

x' = x + y − x(x² + y²)
y' = −x + y − y(x² + y²)

Employing polar coordinates (r, θ), we have

r' = r(1 − r²)
θ' = −1

If r(0) = 0, then ω((0, 0)) = {(0, 0)}.
If 0 < r(0) < 1, then ω((x0, y0)) = unit circle and α((x0, y0)) = {(0, 0)}.
If r(0) = 1, then ω((x0, y0)) = α((x0, y0)) = unit circle.
If r(0) > 1, then ω((x0, y0)) = unit circle and α((x0, y0)) = ∅.
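The classification above can be observed numerically from the radial equation alone (the initial values, horizon and step size below are illustrative assumptions):

```python
def simulate_r(r0, t_end=30.0, n=30000):
    # forward Euler for the radial equation r' = r(1 - r^2)
    h = t_end / n
    r = r0
    for _ in range(n):
        r += h * r * (1.0 - r * r)
    return r

assert simulate_r(0.0) == 0.0              # r = 0 is a rest point
assert abs(simulate_r(0.2) - 1.0) < 1e-6   # 0 < r(0) < 1: orbit tends to r = 1
assert abs(simulate_r(3.0) - 1.0) < 1e-6   # r(0) > 1:      orbit tends to r = 1
```

Since θ' = −1 never vanishes, the circle r = 1 that these orbits approach is a periodic orbit, i.e. the ω-limit set in the plane.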
Example 5.1.4

x' = (x − y) / (1 + (x² + y²)^{1/2})
y' = (x + y) / (1 + (x² + y²)^{1/2})    (5.3)

Then in terms of polar coordinates,

dr/dt = r / (1 + r),    dθ/dt = 1 / (1 + r).

Fig.5.1

Then ω(x) = ∅ and α(x) = {(0, 0)}. The following examples show that an unbounded trajectory may have a disconnected or noncompact ω-limit set.

Example 5.1.5 Let X = x/(1 − x²), Y = y, where (X(t), Y(t)) satisfies (5.3). Then

x' = [x(1 − x²) − y(1 − x²)²] / [(1 + x²)(1 + ((x/(1 − x²))² + y²)^{1/2})] = f(x, y)

y' = [y + x/(1 − x²)] / (1 + ((x/(1 − x²))² + y²)^{1/2}) = g(x, y)
Then lim_{x→±1} f(x, y) = 0, and ω((x0, y0)) = {x = 1} ∪ {x = −1} is not connected.
Fig.5.2

Example 5.1.6 Let X = log(1 + x), Y = y, −1 < x < +∞, y ∈ R, where (X(t), Y(t)) satisfies (5.3). Then the equations are

x' = f(x, y) = (1 + x) · [log(1 + x) − y] / (1 + {(log(1 + x))² + y²}^{1/2})

y' = g(x, y) = [y + log(1 + x)] / (1 + {(log(1 + x))² + y²}^{1/2})

Here f and g satisfy lim_{x→−1} f(x, y) = 0 and lim_{x→−1} g(x, y) = −1. The ω-limit set ω(x) = {(x, y) : x = −1} is not compact.

Fig.5.3
5.2 Lyapunov Functions
Let V : Ω ⊆ Rn → R, 0 ∈ Ω, Ω an open set in Rn.

Definition 5.2.1 V : Ω ⊆ Rn → R is said to be positive definite (negative definite) on Ω if V(0) = 0 and V(x) > 0 for all x ≠ 0 (V(x) < 0 for all x ≠ 0). V is said to be positive semidefinite (negative semidefinite) on Ω if V(x) ≥ 0 (V(x) ≤ 0) for all x ∈ Ω.

Remark 5.2.1 In applications we consider the autonomous system x' = f(x) with an equilibrium x*. Let y = x − x*; then y' = g(y) = f(y + x*) and 0 is an equilibrium of the new system. In general, V satisfies V(x*) = 0 and V(x) > 0 for all x ∈ Ω, x ≠ x*.

For the initial-value problem

x' = f(x),    x(0) = x0,    (5.4)

we assume the solution ϕ(t, x0) exists for all t ≥ 0. We introduce a "Lyapunov function" V(x) to locate the ω-limit set ω(x0). The function V(x) satisfies the following: V(0) = 0, V(x) > 0 for all x ∈ Ω \ {0}, and V(x) → +∞ as |x| → ∞. Each level set {x ∈ Ω : V(x) = c} is a closed surface, and the surface {x ∈ Ω : V(x) = c1} encloses {x ∈ Ω : V(x) = c2} if c1 > c2. To prove lim_{t→∞} ϕ(t, x0) = 0, we do not need to know the exact location of the solution ϕ(t, x0); we only need to construct a suitable Lyapunov function V(x) such that (d/dt)V(ϕ(t, x0)) < 0, as the following figure shows.
Fig.5.4

Example 5.2.1 The equation mx'' + kx = 0, x(0) = x0, x'(0) = x0', describes the motion of a spring without friction according to Hooke's law. Consider the total energy

V(t) = kinetic energy + potential energy
     = (1/2) m (x'(t))² + ∫₀^{x(t)} ks ds
     = (1/2) m (x'(t))² + (k/2) x²(t).
Then

(d/dt) V(t) = m x'(t) x''(t) + k x(t) x'(t) ≡ 0,

and hence the energy is conserved. In this case the Lyapunov function is V(x, x') = (m/2)(x')² + (k/2)x². It is easy to verify that V(0, 0) = 0, V(x, x') > 0 for all (x, x') ≠ (0, 0), and V(x, x') → +∞ as |(x, x')| → +∞.

Example 5.2.2 Consider mx'' + g(x) = 0, where xg(x) > 0 for x ≠ 0 and ∫₀^x g(s)ds → +∞ as |x| → ∞. This describes the motion of a spring whose restoring force is a nonlinear term g(x). The energy

V(t) = (m/2)(x'(t))² + ∫₀^{x(t)} g(s) ds    (5.5)

satisfies dV/dt ≡ 0, i.e., the energy is conserved. Hence the solution (x(t), x'(t)) is periodic.
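Energy conservation for the frictionless spring can be verified numerically. A minimal sketch (the values m = 1, k = 4, the initial data and the step size are assumptions for illustration):

```python
m, k = 1.0, 4.0

def deriv(x, v):
    # x' = v,  m v' = -k x  (Hooke's law, no friction)
    return v, -(k / m) * x

def rk4_step(x, v, h):
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = deriv(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = deriv(x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def energy(x, v):
    # the Lyapunov function V(x, x') = (m/2)(x')^2 + (k/2)x^2
    return 0.5 * m * v * v + 0.5 * k * x * x

x, v = 1.0, 0.0
E0 = energy(x, v)
h = 0.001
for _ in range(10000):        # integrate to t = 10
    x, v = rk4_step(x, v, h)
assert abs(energy(x, v) - E0) < 1e-8   # V is conserved along the trajectory
```

A fixed-step Runge-Kutta method conserves this energy only approximately, which is why a small tolerance is used.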
Fig.5.5

Example 5.2.3 Consider mx'' + k(x)x' + g(x) = 0 with k(x) ≥ δ > 0 for all x. Using the energy function in (5.5), it follows that

dV/dt = x'(t)[−k(x(t))x'(t) − g(x(t))] + x'(t)g(x(t))
      = −k(x(t))(x'(t))² ≤ −δ(x'(t))².

We expect lim_{t→∞} x(t) = 0 and lim_{t→∞} x'(t) = 0.
Let x(t) be the solution of (5.4) and let V(x) be positive definite on Ω with V(x) → ∞ as |x| → ∞. Compute the derivative of V along the trajectory x(t):

(d/dt) V(x(t)) = grad V(x(t)) · x'(t) = Σ_{i=1}^n (∂V/∂xi)(x(t)) fi(x(t)) = V̇(x(t)).
Definition 5.2.2 We say a function V : Ω → R, V ∈ C¹, is a Lyapunov function for (5.4) if V̇(x) = grad V(x) · f(x) ≤ 0 for all x ∈ Ω.

Remark 5.2.2 If V is merely continuous on Ω, we replace (d/dt)V(x(t)) by

lim_{h→0} (1/h)[V(x(t + h)) − V(x(t))].

Let ξ = x(t); then x(t + h) = ϕ(h, ξ), and we define

V̇(ξ) = lim_{h→0} (1/h)[V(ϕ(h, ξ)) − V(ξ)].

Remark 5.2.3 In applications to physical systems, V(x) is usually the total energy. For a general mathematical problem, however, we may take V to be a quadratic form, namely V(x) = x^T Bx with B a suitable positive definite matrix.

Example 5.2.4 Consider the Lotka-Volterra predator-prey system
dx/dt = ax − bxy,    dy/dt = cxy − dy,    a, b, c, d > 0,    (5.6)
x(0) = x0 > 0,    y(0) = y0 > 0,

where x = x(t), y = y(t) are the densities of prey and predator respectively. Let x* = d/c, y* = a/b. Then (5.6) can be rewritten as

dx/dt = −bx(y − y*),    dy/dt = cy(x − x*).

Considering the trajectory in the phase plane, we have

dy/dx = cy(x − x*) / (−bx(y − y*)).    (5.7)

Using separation of variables, from (5.7) it follows that

[(y − y*)/y] dy + (c/b)[(x − x*)/x] dx = 0

and

∫_{y(0)}^{y(t)} (η − y*)/η dη + (c/b) ∫_{x(0)}^{x(t)} (η − x*)/η dη ≡ const.

Hence we define a Lyapunov function

V(x, y) = ∫_{y*}^{y} (η − y*)/η dη + (c/b) ∫_{x*}^{x} (η − x*)/η dη
        = [y − y* − y* ln(y/y*)] + (c/b)[x − x* − x* ln(x/x*)].    (5.8)

Then it is straightforward to verify that V̇(x, y) ≡ 0, and hence (5.6) is a conservative system. Each solution (x(t, x0, y0), y(t, x0, y0)) is a periodic solution.
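That V in (5.8) is constant along trajectories can be checked numerically; a sketch with hypothetical parameter values a = b = c = 1, d = 2 and an illustrative initial point:

```python
import math

a, b, c, d = 1.0, 1.0, 1.0, 2.0   # illustrative parameter choice
xs, ys = d / c, a / b             # interior equilibrium (x*, y*) = (2, 1)

def V(x, y):
    # Lyapunov function (5.8)
    return (y - ys - ys * math.log(y / ys)
            + (c / b) * (x - xs - xs * math.log(x / xs)))

def deriv(x, y):
    return a * x - b * x * y, c * x * y - d * y

def rk4_step(x, y, h):
    k1 = deriv(x, y)
    k2 = deriv(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = deriv(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = deriv(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 3.0, 2.0
v0 = V(x, y)
for _ in range(20000):            # integrate to t = 10
    x, y = rk4_step(x, y, 5e-4)
assert abs(V(x, y) - v0) < 1e-7   # V is constant: the system is conservative
assert x > 0 and y > 0            # the orbit stays in the open first quadrant
```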
Fig.5.6

Consider the system (5.4) and, for simplicity, assume 0 is an equilibrium. To determine the stability of the equilibrium 0, the previous chapter obtains the information from the eigenvalues of the variational matrix Dx f(0). In the following we present another method, using Lyapunov functions.

Theorem 5.2.1 (Stability, Asymptotic Stability) If there exists a positive definite function V(x) on a neighborhood Ω of 0 such that V̇ ≤ 0 on Ω, then the equilibrium 0 of (5.4) is stable. If, in addition, V̇ < 0 for all x ∈ Ω \ {0}, then 0 is asymptotically stable.

Proof. Let r > 0 be such that B(0, r) ⊆ Ω. Given ε, 0 < ε < r, let k = min_{|x|=ε} V(x).
Then k > 0. From continuity of V at 0, choose δ, 0 < δ ≤ ε, such that V(x) < k whenever |x| < δ. Since V̇ ≤ 0 on Ω, the solution satisfies V(ϕ(t, x0)) ≤ V(x0) < k for all t ≥ 0. Hence for x0 ∈ B(0, δ), ϕ(t, x0) stays in B(0, ε), and it follows that 0 is stable.

Now assume V̇ < 0 for all x ∈ Ω \ {0}. To establish the asymptotic stability of the equilibrium 0, we need to show ϕ(t, x0) → 0 as t → ∞ provided x0 is sufficiently close to 0. Since V(ϕ(t, x0)) is strictly decreasing in t for |x0| ≤ H, for some H > 0, it suffices to show that lim_{t→∞} V(ϕ(t, x0)) = 0. If not, let lim_{t→∞} V(ϕ(t, x0)) = η > 0. Then V(ϕ(t, x0)) ≥ η > 0 for all t ≥ 0. By continuity of V at 0, there exists δ > 0 such that 0 < V(x) < η for 0 < |x| < δ. Hence |ϕ(t, x0)| ≥ δ for all t ≥ 0. Set S = {x : δ ≤ |x| ≤ H} and γ = min_{x∈S}(−V̇(x)). Then γ > 0 and −(d/dt)V(ϕ(t, x0)) ≥ γ. Integrating both sides from 0 to t yields −[V(ϕ(t, x0)) − V(x0)] ≥ γt, i.e., 0 < V(ϕ(t, x0)) ≤ V(x0) − γt. Letting t → +∞, we obtain a contradiction.
Theorem 5.2.2 If there exist a neighborhood U of 0 and V : Ω ⊆ Rn → R, 0 ∈ Ω, such that V and V̇ are positive definite on (U ∩ Ω) \ {0}, then the equilibrium 0 is completely unstable, i.e., 0 is a repeller. More specifically, if Ω0 ⊆ Ω is any neighborhood of 0, then any solution ϕ(t, x0) of (5.4) with x0 ∈ (U ∩ Ω0) \ {0} leaves Ω0 in finite time.
Proof. If not, there exist a neighborhood Ω0 and x0 ∈ (U ∩ Ω0) \ {0} such that ϕ(t, x0) stays in (U ∩ Ω0) \ {0} for all t ≥ 0. Then V(ϕ(t, x0)) ≥ V(x0) > 0 for all t ≥ 0. Let α = inf{V̇(x) : x ∈ U ∩ Ω0, V(x) ≥ V(x0)} > 0. Then

V(ϕ(t, x0)) = V(x0) + ∫₀^t V̇(ϕ(s, x0)) ds ≥ V(x0) + αt.

Since ϕ(t, x0) remains in (U ∩ Ω0) \ {0} for all t ≥ 0, V(ϕ(t, x0)) is bounded for t ≥ 0. Thus for t sufficiently large we obtain a contradiction from the above inequality.

Example 5.2.5
x' = −x³ + 2y³
y' = −2xy²    (5.9)

(0, 0) is an equilibrium. The variational matrix of (5.9) at (0, 0) is

( 0 0 )
( 0 0 )

whose eigenvalues λ1 = λ2 = 0, so (0, 0) is not hyperbolic. Nevertheless, (0, 0) is indeed asymptotically stable. Employ the Lyapunov function V(x, y) = (1/2)(x² + y²). Then we have

V̇ = x(−x³ + 2y³) + y(−2xy²) = −x⁴ ≤ 0.

From Theorem 5.2.1, (0, 0) is stable. Since V̇ is only semidefinite, asymptotic stability follows from the invariance principle (Theorem 5.2.4 below): the largest invariant set in {V̇ = 0} = {x = 0} is {(0, 0)}, because on the y-axis x' = 2y³ ≠ 0 for y ≠ 0.
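The convergence to (0, 0) can be observed numerically even though the linearization gives no information; a sketch (initial point, horizon and step size are illustrative assumptions):

```python
def step(x, y, h):
    # forward Euler for x' = -x^3 + 2y^3, y' = -2 x y^2
    return x + h * (-x**3 + 2 * y**3), y + h * (-2 * x * y**2)

x, y = 1.0, 1.0
V0 = 0.5 * (x * x + y * y)
for _ in range(200000):        # integrate to t = 100 with h = 5e-4
    x, y = step(x, y, 5e-4)
V1 = 0.5 * (x * x + y * y)
assert V1 < V0                 # V decreases along the trajectory
assert x * x + y * y < 0.5     # the orbit is approaching the origin
```

The approach is only algebraic (not exponential) in t, which is consistent with the degenerate linear part.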
Linear Stability by the Lyapunov Method. Consider the linear system

x' = Ax,    A ∈ R^{n×n}.    (5.10)

From Chapter 3, the equilibrium x = 0 is asymptotically stable if Re λ(A) < 0. Recall that to verify Re λ(A) < 0 analytically, we first calculate the characteristic polynomial f(λ) = det(λI − A) and then apply the Routh-Hurwitz criterion. In the following we present another criterion for Re λ(A) < 0.

Theorem 5.2.3 Let A ∈ R^{n×n}. The matrix equation

A^T B + BA = −C    (5.11)

has a positive definite solution B for every positive definite matrix C if and only if Re λ(A) < 0.

Proof. Consider the linear system (5.10) and let V(x) = x^T Bx, where B is a positive definite matrix to be determined. Then

V̇_{(5.10)}(x) = ẋ^T Bx + x^T Bẋ = (Ax)^T Bx + x^T BAx = x^T (A^T B + BA)x.

If for every positive definite matrix C there exists a positive definite matrix B satisfying (5.11), then V̇_{(5.10)}(x) = −x^T Cx < 0 for x ≠ 0. The asymptotic stability of x = 0, i.e., Re λ(A) < 0, follows directly from Theorem 5.2.1.

Conversely, assume Re λ(A) < 0. Let C be any positive definite matrix and define

B = ∫₀^∞ e^{A^T t} C e^{At} dt.
First we claim that B is well-defined. From Re λ(A) < 0, there exist α, K > 0 such that ‖e^{At}‖ ≤ K e^{−αt} for t ≥ 0. For any s > 0,

∫₀^s ‖e^{A^T t} C e^{At}‖ dt ≤ ∫₀^s ‖e^{A^T t}‖ ‖C‖ ‖e^{At}‖ dt ≤ K² ‖C‖ ∫₀^s e^{−2αt} dt < ∞,

so the integral converges as s → ∞. Compute

A^T B + BA = ∫₀^∞ [A^T e^{A^T t} C e^{At} + e^{A^T t} C e^{At} A] dt = ∫₀^∞ (d/dt)(e^{A^T t} C e^{At}) dt = 0 − C = −C.
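For a concrete matrix the Lyapunov equation (5.11) can be solved and checked directly; a sketch for a hypothetical 2×2 upper-triangular stable A with C = I (the entries are illustrative choices):

```python
A = [[-1.0, 2.0],
     [0.0, -3.0]]        # upper triangular, eigenvalues -1, -3 < 0
C = [[1.0, 0.0],
     [0.0, 1.0]]         # C = I, positive definite

# With B = [[p, q], [q, r]], A^T B + B A = -C gives three linear equations:
# (1,1): 2*a11*p + 2*a21*q = -1
# (1,2): a12*p + (a11 + a22)*q + a21*r = 0
# (2,2): 2*a12*q + 2*a22*r = -1
a11, a12, a21, a22 = A[0][0], A[0][1], A[1][0], A[1][1]
p = -C[0][0] / (2 * a11)                      # uses a21 = 0 in this example
q = -a12 * p / (a11 + a22)
r = (-C[1][1] - 2 * a12 * q) / (2 * a22)
B = [[p, q], [q, r]]

# Check A^T B + B A = -C entrywise.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AT = [[A[j][i] for j in range(2)] for i in range(2)]
M, N = matmul(AT, B), matmul(B, A)
for i in range(2):
    for j in range(2):
        assert abs(M[i][j] + N[i][j] + C[i][j]) < 1e-12

# B is positive definite: leading minor p > 0 and det(B) > 0.
assert p > 0 and p * r - q * q > 0
```

This is exactly the B produced by the integral formula above, obtained here by solving the linear system for the entries of the symmetric matrix instead of integrating.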
Linearization via the Lyapunov Method. Consider the nonlinear system

x' = Ax + f(x),    f(0) = 0,    f(x) = o(|x|) as x → 0.    (5.12)

We now apply Theorem 5.2.3 to obtain the stability of the equilibrium x = 0 of (5.12).

Claim: If Re λ(A) < 0, then x = 0 is asymptotically stable.

From Theorem 5.2.3, there exists a positive definite matrix B satisfying A^T B + BA = −I. Let V(x) = x^T Bx. Then

V̇ = (Ax + f(x))^T Bx + x^T B(Ax + f(x)) = x^T (A^T B + BA)x + 2x^T Bf(x) = −x^T x + 2x^T Bf(x).

Since f(x) = o(|x|) as x → 0, for any ε > 0 there exists δ > 0 such that |f(x)| < ε|x| for |x| < δ. Then for 0 < |x| < δ it follows that

V̇ ≤ −x^T x + 2ε‖B‖|x|² = (−1 + 2ε‖B‖)|x|² < 0

provided ε is sufficiently small. Hence V̇ is negative definite near 0, and x = 0 is asymptotically stable.

Next suppose no eigenvalue of A has zero real part and some eigenvalue has positive real part; then x = 0 is an unstable equilibrium of (5.12). Without loss of generality assume A = diag(A−, A+), where Re λ(A−) < 0 and Re λ(A+) > 0. Let B1 be the positive definite solution of A−^T B1 + B1 A− = −I, and let B2 be the positive definite solution of (−A+)^T B2 + B2(−A+) = −I. Write x = (u, v), where u, v have the same dimensions as B1, B2 respectively, and rewrite (5.12) as

u' = A− u + f1(u, v)
v' = A+ v + f2(u, v)
Introduce

V(x) = −u^T B1 u + v^T B2 v.
Then

V̇ = −u^T (A−^T B1 + B1 A−) u + v^T (A+^T B2 + B2 A+) v + o(|x|²)
   = u^T u + v^T v + o(|x|²) = |x|² + o(|x|²) ≥ c|x|² > 0

for some c > 0 and 0 < |x| < δ, δ > 0. On the region where V is positive, the conditions of Theorem 5.2.2 are satisfied; hence x = 0 is unstable.
Invariance Principle. The following invariance principle provides a method to locate the ω-limit set ω(x0) of the solution of the I.V.P. for the autonomous system x' = f(x), x(0) = x0. It is also a tool to estimate the domain of attraction of an asymptotically stable equilibrium x*.

Definition 5.2.3 We say a scalar function V is a Lyapunov function on an open set G ⊆ Rn if V is continuous on G and V̇(x) = grad V(x) · f(x) ≤ 0 for all x ∈ G. We note that V need not be positive definite. Let S = {x ∈ G : V̇(x) = 0} and let M be the largest invariant set in S with respect to the flow of ẋ = f(x).

Theorem 5.2.4 (LaSalle's invariance principle) If V is a Lyapunov function on G and γ+(x0) is a bounded orbit of ẋ = f(x) with γ+(x0) ⊆ G, then ω(x0) ⊆ M, i.e., lim_{t→∞} dist(ϕ(t, x0), M) = 0.

Proof. Since γ+(x0) ⊆ G is bounded and V is continuous on G, V(ϕ(t, x0)) is bounded for all t ≥ 0. From V̇ ≤ 0 on G it follows that V(ϕ(t, x0)) is nonincreasing in t, and hence lim_{t→+∞} V(ϕ(t, x0)) = c. Let y ∈ ω(x0); then there exists a sequence {tn} ↑ +∞ such that ϕ(tn, x0) → y as n → ∞, so V(y) = c. From the invariance of ω(x0), we have V(ϕ(t, y)) = c for all t ∈ R. Differentiating with respect to t yields V̇(ϕ(t, y)) = 0. Hence ϕ(t, y) ∈ S for all t, so ω(x0) ⊆ S. Since ω(x0) is invariant, it follows from the definition of M that ω(x0) ⊆ M. This completes the proof.

The next corollary provides a way to estimate the domain of attraction.

Corollary 5.2.1 If G = {x ∈ Rn : V(x) < ρ} is bounded and γ+(x0) ⊆ G, then ϕ(t, x0) → M as t → ∞.

Corollary 5.2.2 If V(x) → ∞ as |x| → ∞ and V̇ ≤ 0 on Rn, then every solution of (5.4) is bounded and approaches M. In particular, if M = {0}, then 0 is globally asymptotically stable.

Example 5.2.5 Consider the nonlinear spring motion with friction, x'' + f(x)x' + g(x) = 0, where xg(x) > 0 for x ≠ 0, f(x) > 0 for x ≠ 0, and G(x) = ∫₀^x g(s)ds → ∞ as |x| → ∞. The equation is equivalent to the system

x' = y
y' = −g(x) − f(x)y    (5.13)

Let V(x, y) be the total energy of the system, that is, V(x, y) = y²/2 + G(x). Then V̇(x, y) = −f(x)y² ≤ 0. For any ρ, the function V is a Lyapunov function on the
bounded set Ω = {(x, y) : V(x, y) < ρ}. Also, the set S where V̇ = 0 is contained in the union of the x-axis and the y-axis. From (5.13) it is easy to check that M = {(0, 0)}, and Corollary 5.2.2 implies that the origin is globally asymptotically stable.

Example 5.2.6 Consider the Lotka-Volterra predator-prey system with logistic prey growth,

dx/dt = γx(1 − x/K) − αxy
dy/dt = βxy − dy
x(0) = x0 > 0,    y(0) = y0 > 0.

Fig.5.7

Case 1: x* = d/β < K. Let y* be the predator component of the interior equilibrium, γ(1 − x*/K) = αy*, and let

V(x, y) = ∫_{x*}^x (η − x*)/η dη + c ∫_{y*}^y (η − y*)/η dη.

With the choice c = α/β, we have

V̇ = −(γ/K)(x − x*)².

Then S = {(x, y) : x = x*}, and it is easy to verify M = {(x*, y*)}. Hence lim_{t→∞} x(t) = x*, lim_{t→∞} y(t) = y*.
Case 2: x* > K. With the Lyapunov function

V(x, y) = ∫_K^x (η − K)/η dη + cy,    c = α/β,

it follows that

V̇ = −(γ/K)(x − K)² + cβy(K − x*) ≤ 0.

Then S = {(K, 0)} = M, and hence lim_{t→∞} x(t) = K, lim_{t→∞} y(t) = 0.

We note that in this example the Lyapunov function V does not satisfy the hypotheses of Theorem 5.2.4. It is an exercise to show that the invariance principle still holds (see Exercise 5.9).
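Case 1 can be illustrated numerically; a sketch with the hypothetical parameter values γ = α = β = d = 1 and K = 2 (so x* = 1 < K, and the initial point is an arbitrary choice):

```python
g_, K, al, be, d_ = 1.0, 2.0, 1.0, 1.0, 1.0   # illustrative parameters
xs = d_ / be                     # x* = 1 < K = 2: Case 1
ys = g_ * (1 - xs / K) / al      # y* = 0.5 solves gamma(1 - x*/K) = alpha*y*

def deriv(x, y):
    return g_ * x * (1 - x / K) - al * x * y, be * x * y - d_ * y

x, y = 1.5, 0.8
h = 1e-3
for _ in range(100000):          # forward Euler up to t = 100
    dx, dy = deriv(x, y)
    x, y = x + h * dx, y + h * dy
assert abs(x - xs) < 1e-3 and abs(y - ys) < 1e-3   # (x, y) -> (x*, y*)
```

Unlike the conservative system (5.6), the logistic term makes the interior equilibrium attracting, exactly as the Lyapunov argument predicts.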
Example 5.2.7 Consider the Van der Pol equation x'' + ε(x² − 1)x' + x = 0 and its equivalent Lienard form

x' = y − ε(x³/3 − x)
y' = −x    (5.14)

In the next chapter we shall show that (0, 0) is an unstable focus and the equation has a unique asymptotically stable limit cycle for every ε > 0. The exact location of this cycle in the (x, y)-plane is extremely difficult to obtain, but the above theory allows one to determine a region near (0, 0) in which the limit cycle cannot lie. Such a region can be found by determining the domain of attraction of (0, 0) with t replaced by −t, which has the same effect as replacing ε by −ε. Suppose ε < 0 and let V(x, y) = (x² + y²)/2. Then

V̇(x, y) = −εx²(x²/3 − 1)

and V̇(x, y) ≤ 0 if x² < 3. Consider the region G = {(x, y) : V(x, y) < 3/2}. Then G is bounded and V is a Lyapunov function on G. Furthermore, S = {(x, y) ∈ G : V̇ = 0} = {(0, y) : y² < 3}. From (5.14), M = {(0, 0)}. Then every solution starting in the disk x² + y² < 3 approaches the origin as t → ∞. Consequently, the limit cycle of (5.14) with ε > 0 must lie outside this disk.
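The attraction of the disk x² + y² < 3 to the origin for ε < 0 can be checked numerically; a sketch with the illustrative choice ε = −0.5 and an arbitrary starting point inside the disk:

```python
eps = -0.5                    # epsilon < 0: origin attracts the disk x^2+y^2 < 3

def deriv(x, y):
    # Lienard form (5.14): x' = y - eps*(x^3/3 - x), y' = -x
    return y - eps * (x**3 / 3 - x), -x

x, y = 1.0, 1.0               # starts inside x^2 + y^2 < 3
h = 1e-3
for _ in range(100000):       # forward Euler up to t = 100
    dx, dy = deriv(x, y)
    x, y = x + h * dx, y + h * dy
assert x * x + y * y < 1e-6   # the trajectory has approached (0, 0)
```

Running the same loop with eps > 0 instead spirals outward toward the limit cycle, which is why the cycle cannot intersect this disk.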
Exercises

Exercise 5.1 Consider the second order system ẋ = y − xf(x, y), ẏ = −x − yf(x, y). Discuss the stability properties of this system when f has a fixed sign.

Exercise 5.2 Consider the equation

ẍ + aẋ + 2bx + 3x² = 0,    a > 0, b > 0.

Determine the maximal region of asymptotic stability of the zero solution which can be obtained by using the total energy of the system as a Lyapunov function.

Exercise 5.3 Consider the system ẋ = y, ẏ = z − ay, ż = −cx − F(y), where F(0) = 0, a > 0, c > 0, aF(y)/y > c for y ≠ 0, and ∫₀^y [F(ξ) − cξ/a]dξ → ∞ as |y| → ∞. If F(y) = ky where k > c/a, verify that the characteristic roots of the linear system have negative real parts. Show that the origin is asymptotically stable even when F is nonlinear.
Hint: Choose V as a quadratic form plus the term ∫₀^y F(s)ds.

Exercise 5.4 Suppose there is a positive definite matrix Q such that J^T(x)Q + QJ(x) is negative definite for all x ≠ 0, where J(x) is the Jacobian matrix of f(x). Prove that the solution x = 0 of ẋ = f(x), f(0) = 0, is globally asymptotically stable.
Hint: Prove and make use of the fact that f(x) = ∫₀¹ J(sx)x ds.

Exercise 5.5 Suppose h(x, y) is a positive definite function such that for any constant c, 0 < c < ∞, the curve defined by h(x, y) = c is a Jordan curve, and h(x, y) → ∞ as x² + y² → ∞. Discuss the behavior in the phase plane of the solutions of

ẋ = εx + y − xh(x, y),
ẏ = εy − x − yh(x, y),

for all values of ε in (−∞, ∞).

Exercise 5.6 Consider the system

x' = x³ + yx²
y' = −y + x²

Show that (0, 0) is an unstable equilibrium by using the Lyapunov function V(x, y) = x²/2 − y²/2. We note that the variational matrix at (0, 0) is

( 0  0 )
( 0 −1 )

whose eigenvalues are λ1 = 0, λ2 = −1, so (0, 0) is not hyperbolic.

Exercise 5.7 Consider the n-dimensional system ẋ = f(x) + g(t), where x^T f(x) ≤ −k|x|², k > 0, for all x and |g(t)| ≤ M for all t. Find a sphere of sufficiently large
radius so that all trajectories enter this sphere. Show that this equation has a T-periodic solution if g is T-periodic. If, in addition, (x − y)^T [f(x) − f(y)] < 0 for all x ≠ y, show there is a unique T-periodic solution.
Hint: Use Brouwer's fixed point theorem.

Exercise 5.8 We say an autonomous system x' = f(x), f : D ⊆ Rn → Rn, D open, is dissipative if there exists a bounded domain Ω ⊆ Rn such that any trajectory ϕ(t, x0), x0 ∈ Rn, of x' = f(x) will enter Ω and stay there, i.e., there exists T = T(x0) such that ϕ(t, x0) ∈ Ω for all t ≥ T. Show that the Lorenz system

x' = σ(y − x)
y' = ρx − y − xz
z' = xy − βz
σ, ρ, β > 0

is dissipative.
Hint: Consider the function

V(x, y, z) = (1/2)(x² + σy² + σz²) − σρz.
Prove that the ellipsoid V(x, y, z) = c, for c > 0 sufficiently large, bounds the desired domain Ω.

Exercise 5.9 Consider the system of differential equations

dx/dt = f(x),    f : Ω ⊆ Rn → Rn continuous.    (5.15)

We call V a Lyapunov function on G ⊆ Ω for (5.15) if
(a) V is continuous on G;
(b) if V is not continuous at x̄ ∈ Ḡ (the closure of G), then lim_{x→x̄, x∈G} V(x) = +∞;
(c) V̇ = grad V · f ≤ 0 on G.

Prove the following invariance principle: Assume that V is a Lyapunov function for (5.15) on G. Define S = {x ∈ Ḡ ∩ Ω : V̇(x) = 0} and let M be the largest invariant set in S. Then every bounded trajectory (for t ≥ 0) of (5.15) that remains in G for t ≥ 0 approaches the set M as t → +∞.

Remark: In the Lotka-Volterra predator-prey example (Example 5.2.6), we did apply the above modified invariance principle.
Chapter 6

TWO DIMENSIONAL SYSTEMS

6.1 Poincaré-Bendixson Theorem
Let f : Ω ⊆ R² → R², Ω open in R². Consider the two dimensional autonomous system

x' = f(x).    (6.1)

Let ϕ(t) be a solution of (6.1) for t ≥ 0 and let ω(ϕ) be the ω-limit set of ϕ(t). We recall that if ϕ(t) is bounded for t ≥ 0, then ω(ϕ) is nonempty, compact, connected and invariant. The following Poincaré-Bendixson Theorem characterizes the ω-limit set ω(ϕ) of a solution ϕ(t) of the two dimensional autonomous system (6.1).

Theorem 6.1.1 (Poincaré-Bendixson) If the solution ϕ(t) of (6.1) is bounded for all t ≥ 0, then either
(i) ω(ϕ) contains an equilibrium, or
(ii) (a) ϕ(t) is periodic, or (b) ω(ϕ) is a periodic orbit.

Remark 6.1.1 In case (ii) we assume ω(ϕ) contains no equilibria, and in case (b) we call ω(ϕ) a limit cycle. There is a difference between a periodic orbit and a limit cycle: a limit cycle must be a periodic orbit, but not vice versa.

Remark 6.1.2 The Poincaré-Bendixson Theorem is one of the central results of nonlinear dynamical systems. It says that the dynamical possibilities in the phase plane are very limited: if a trajectory is confined to a closed, bounded region that contains no fixed points, then the trajectory must eventually approach a closed orbit. Nothing more complicated is possible. This result depends crucially on the Jordan Curve Theorem in the plane. In higher dimensional systems (n ≥ 3), for example the Lorenz system, the Poincaré-Bendixson Theorem is no longer true and we may have "strange attractors". However, for three dimensional competitive systems [S] and n-dimensional feedback systems [MP-Sm] the Poincaré-Bendixson Theorem holds.

Remark 6.1.3 To show the existence of a limit cycle by the Poincaré-Bendixson Theorem, we usually apply the standard trick of constructing a trapping region R which
contains no equilibrium. Then there exists a closed orbit in R.
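The sign conditions that make a region trapping are easy to test numerically. A minimal sketch, assuming the hypothetical radial equation r' = r(1 − r²) and the annulus 1/2 ≤ r ≤ 3/2 (both chosen purely for illustration):

```python
def rdot(r):
    # radial equation r' = r(1 - r^2); theta' plays no role in trapping
    return r * (1.0 - r * r)

r_min, r_max = 0.5, 1.5
assert rdot(r_min) > 0     # flow points outward across the inner circle
assert rdot(r_max) < 0     # flow points inward across the outer circle
# The annulus contains no equilibrium of the planar system: the only zero of
# rdot inside it is r = 1, and on that circle theta' is nonzero, so r = 1 is
# a periodic orbit, not a rest point.
```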
Fig.6.1 Example 6.1.1 Consider the system r0 θ
= =
0
r(1 − r2 ) + µr cos θ 1
Show that a closed orbit exists for µ > 0 sufficiently small. Solve: To construct an annulus 0 < rmin ≤ r ≤ rmax to be a desired trapping region. To find rmin , we required r0 = r(1 − r2 ) + µr cos θ > 0 for all θ. √ Since cos θ ≥ −1 then r0 ≥ r(1 − r2 ) − µr = r[(1 − r2 ) − µ]. Hence any rmin < 1 − µ will work as long √ as µ < 1. By similar argument, the flow is inward on the outer circle if rmax > 1 + µ. Therefore a closed orbit√exists for all µ < 1 and it lies somewhere √ in the annulus 0.999 1 − µ < r < 1.001 1 + µ. Example 6.1.2 Consider the following system of glycolytic oscillation arsing from biochemistry. x0 y0
= −x + ay + x2 y = b − ay − x2 y
Solve:
Fig.6.2
´ 6.1. POINCARE-BENDIXSON THEOREM
119
→
on S1 , n= (−1, 0) →
→
n · f (x) = −(−x + ay + x2 y)|x=0,
0≤y≤b/a
= −ay ≤ 0
→
on S2 , n= (0, 1) →
→
n · f (x) = (b − ay − x2 y)|y=b/a,0≤x≤b = −x2 · b/a ≤ 0
→
on S3 , n= (1, 1) →
→
n · f (x) = b−x|b≤x≤˜x ≤ 0 →
on S4 , n= (1, 0) →
→
n · f (x) = −x+ay+x2 y|x=˜x,
0≤y≤˜ y
where x ˜ + y˜ = b + b/a,
x ˜ is close to b + b/a, i.e. y˜ is close to 0.
Then →
→
n · f (x) ≤ y˜(˜ x2 + a) − x ˜≤0 →
on S5 , n= (0, −1) →
→
n · f (x) = −(b − ay − x2 y)|0≤x≤˜x=−b<0,
y=0
= −b < 0
Thus the region is a trapping region. Next we verify that the equilibrium (x∗ , y ∗ ) b4 +(2a−1)b2 +(a+a2 ) b where x∗ = b, y ∗ = a+b > 0. Thus 2 is an unstable focus if τ = − a+b2 by Poincar´e-Bendixson Theorem, there exists a limit cycle for τ > 0. Example 6.1.3 Consider the Lotka-Volterra Predator-Prey system x0 y0
= =
ax − bxy cxy − dy
Each periodic orbit is neutrally stable. There is no limit cycle. We note that the system is a conservative system. Example 6.1.4 Consider the predator-prey system with Holling type II functional response dx = dt dy = dt x(0) >
³ x´ mx γx 1 − − y K a+x µ ¶ mx −d y a+x 0, y(0) > 0
Assume K−a < λ where λ = x∗ , (x∗ , y ∗ ) is the equilibrium. We note that in 2 Chapter 4, Example 4.8, we prove that (x∗ , y ∗ ) is a unstable focus. First we show
120
CHAPTER 6. TWO DIMENSIONAL SYSTEMS
that the solution (x(t), y(t)) is positive and bounded. It is easy to verify x(t) > 0, y(t) > 0 for all t ≥ 0. Then from differential inequality ³ x´ dx ≤ γx 1 − dt K it follows that for small ² > 0, x(t) ≤ K + ε for t ≥ T (ε). Consider dx dy + ≤ γx − dy ≤ (γ + d)K − d(x + y), dt dt then x(t) + y(t) ≤ γ+d d K for t large. Hence x(t), y(t) are bounded. From Poincar´e-Bendixson Theorem, there exists a limit cycle in the first quadrant of x − y plane. (See Remark 6.4 after Theorem 6.4) To establish the global asymptotic stability of the equilibrium (x∗ , y ∗ ) by using Poincar´e-Bendixson’s Theorem, it suffices to eliminate the possibility of the existence of periodic solutions. Theorem 6.1.2 (Bendixson’s Negative criterion) Consider the system dx dt dy dt
= f (x, y) = g(x, y).
(6.2)
∂g 2 If ∂f ∂x + ∂y is of one sign in a simple-connected domain D in R , then there are no periodic orbits in D.
Proof. Suppose on the contrary, there is a periodic orbit C = {(x(t), y(t))}0≤t≤T . Then by Green’s Theorem ¶ µ I ∂g ∂f + = (−g(x, y)) dx + f (x, y)dy D ∂x ∂y C Z T = [−g(x(t), y(t))x0 (t) + f (x(t), y(t))y 0 (t)] dt 0
= 0 ∂g But ∂f ∂x + ∂y is of one sign, then This is a contradiction.
³ D
∂f ∂x
+
∂g ∂y
´ dxdy is either positive or negative.
Example 6.1.5 Show that x00 +f (x)x0 +g(x) = 0 cannot have any periodic solution whose path in a region where f (x) is of one sign (i.e. such region has only “positive damping” or “negative damping”). Write the equation as µ 0 ¶ µ ¶ µ ¶ x y F (x, y) = = y0 −f (x)y − g(x) G(x, y) Then
∂F ∂G + = −f (x) is of one sign ∂x ∂y
It is an exercises to generalize the Negative Bendixson’s criterion to the following Dulac’s criterion which is much more powerful than Negative Bendixson Criterion. Theorem 6.1.3 (Dulac’s criterion) Let h(x, y) ∈ C 1 in a simply-connected region h) ∂(gh) D. For the system (6.2), if ∂(f is of one sign in D, then (6.2) has no ∂x + ∂y
´ 6.1. POINCARE-BENDIXSON THEOREM
121
periodic orbit in D.

Example 6.1.6 Consider the Lotka-Volterra two-species competition model
x' = γ1 x(1 − x/K1) − αxy = f(x, y)
y' = γ2 y(1 − y/K2) − βxy = g(x, y).
Show that there is no periodic orbit in the first quadrant. If we try Bendixson's Negative Criterion, then
∂f/∂x + ∂g/∂y = (γ1 − 2(γ1/K1)x − αy) + (γ2 − 2(γ2/K2)y − βx),
which is not of one sign. Choose h(x, y) = x^ξ y^η, where ξ, η ∈ R are to be determined. Then a routine computation gives
Δ = ∂(fh)/∂x + ∂(gh)/∂y
= x^ξ y^η { γ1(1 + ξ) + γ2(1 + η) + [−(γ1/K1)(2 + ξ) − β(1 + η)] x + [−(γ2/K2)(2 + η) − α(1 + ξ)] y }.
Choose ξ = η = −1; then Δ = −x⁻¹y⁻¹[(γ1/K1)x + (γ2/K2)y] < 0 for x, y > 0. Thus from Dulac's criterion we complete the proof.

Now we return to the proof of the Poincaré-Bendixson Theorem [MM] for the system
x' = f(x),
f : Ω ⊆ R2 → R2
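The Dulac computation in Example 6.1.6 can be spot-checked numerically. The sketch below is not part of the text; the parameter values are arbitrary choices, and it samples Δ = ∂(fh)/∂x + ∂(gh)/∂y with h = 1/(xy) over a grid in the open first quadrant using central differences:

```python
# parameters chosen arbitrarily for the check
g1, g2, K1, K2, al, be = 1.3, 0.7, 2.0, 3.0, 0.5, 0.8

def f(x, y): return g1 * x * (1 - x / K1) - al * x * y
def g(x, y): return g2 * y * (1 - y / K2) - be * x * y
def h(x, y): return 1.0 / (x * y)      # Dulac function x^(-1) y^(-1)

def dulac(x, y, d=1e-5):
    # central differences for d(fh)/dx + d(gh)/dy
    dfh = (f(x + d, y) * h(x + d, y) - f(x - d, y) * h(x - d, y)) / (2 * d)
    dgh = (g(x, y + d) * h(x, y + d) - g(x, y - d) * h(x, y - d)) / (2 * d)
    return dfh + dgh

# sample the open first quadrant: the expression stays negative everywhere,
# consistent with Delta = -(g1/K1)/y - (g2/K2)/x < 0
vals = [dulac(0.1 * i, 0.1 * j) for i in range(1, 60) for j in range(1, 60)]
print(max(vals) < 0)   # True
```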
The proof needs the following Jordan Curve Theorem.
Jordan Curve Theorem: A simple closed curve in R² separates the plane into two components, Ωi and Ωe. The interior component Ωi is bounded, while the exterior component Ωe is unbounded.
Given a closed line segment L = ξ1ξ2, let b⃗ = ξ2 − ξ1 and let a⃗ be a normal vector to L. (See Fig. 6.1)
Fig.6.3
We say a continuous curve ϕ : (α, β) → R² crosses L at time t0 if ϕ(t0) ∈ L and there exists δ > 0 such that (ϕ(t) − ξ1, a⃗) > 0 for t0 − δ < t < t0 and (ϕ(t) − ξ1, a⃗) < 0 for t0 < t < t0 + δ, or vice versa.

Definition 6.1.1 We say a closed segment L in R² is a transversal with respect to a continuous vector field f : R² → R² if (i) each ξ ∈ L is a regular point of the vector field f, i.e. f(ξ) ≠ 0; (ii) the vector f(ξ) is not parallel to the direction of L.

Assume L is a transversal with respect to the vector field f(x).

Lemma 6.1.1 Let ξ0 be an interior point of a transversal L. Then for any ε > 0 there exists δ > 0 such that for any x0 ∈ B(ξ0, δ), ϕ(t, x0) crosses L at some time t ∈ (−ε, ε).

Proof. Let the equation of L be g(x) = a1 x1 + a2 x2 − c = 0. Consider G(t, ξ) = g(ϕ(t, ξ)), where ϕ(t, ξ) is the solution of the I.V.P. x' = f(x), x(0) = ξ. Then
G(0, ξ0) = g(ξ0) = 0,
∂G/∂t (t, ξ) = g'(ϕ(t, ξ)) f(ϕ(t, ξ)) = a⃗ · f(ϕ(t, ξ)),
∂G/∂t (0, ξ0) = a⃗ · f(ξ0) ≠ 0.
By the Implicit Function Theorem we may express t as a function of ξ, i.e. there exist δ > 0 and a map t : B(ξ0, δ) → R such that G(t(ξ), ξ) = 0, t(ξ0) = 0. Then g(ϕ(t(ξ), ξ)) = 0 and we complete the proof of Lemma 6.1.1.

Lemma 6.1.2 Let ϕ(t) be an orbit of (6.1) and S = {ϕ(t) : α ≤ t ≤ β}. If S intersects L, then L ∩ S consists of finitely many points whose order is monotone with respect to t. If ϕ(t) is periodic, then L ∩ S is a singleton.

Proof. First we show that L ∩ S is finite. If not, there exists {tm} ⊂ [α, β] such that ϕ(tm) ∈ L. We may assume tm → t0 ∈ [α, β]. Then ϕ(tm) → ϕ(t0) and
lim_{m→∞} (ϕ(tm) − ϕ(t0))/(tm − t0) = ϕ'(t0) = f(ϕ(t0)).
Thus the vector f(ϕ(t0)) is parallel to L, and we obtain a contradiction to the transversality of L. Next we prove the monotonicity. There are two possibilities, as Fig. 6.2 shows:
Fig.6.4
We consider case (a) only. From the uniqueness of solutions of the O.D.E., it is impossible for the trajectory to cross the arc PQ. Let Σ be the simple closed curve made up of the arc PQ and the segment T between y and y1, and let D be the closed bounded region bounded by Σ. Since T is transverse to the flow, at each point of T the trajectory either leaves or enters D. We assert that at every point of T the trajectory leaves D. Let T− be the set of points of T whose trajectory leaves D and T+ the set of points whose trajectory enters D. From the continuity of the flow, T− and T+ are open sets in T. Since T is the disjoint union of T+ and T− and T is connected, T+ is empty and T = T−. The monotonicity of L ∩ S follows, and from the monotonicity it is obvious that L ∩ S is a singleton if ϕ(t) is periodic.

Lemma 6.1.3 A transversal L cannot intersect ω(ϕ), the ω-limit set of a bounded trajectory, in more than one point.

Proof. Let ω(ϕ) intersect L at ξ'. Then there exists {t'm} ↑ +∞, t'm+1 > t'm + 2, with ϕ(t'm) → ξ'. By Lemma 6.1.1, there exists M ≥ 1 sufficiently large such that ϕ crosses L at some time tm with |tm − t'm| < 1 for m ≥ M. From Lemma 6.1.2, ϕ(tm) ↓ ξ ∈ L ∩ ω(ϕ). From Lemma 6.1.1 we may assume |tm − t'm| → 0 as m → ∞. Then from the Mean Value Theorem,
ϕ(tm) = ϕ(t'm) + (ϕ(tm) − ϕ(t'm)) = ϕ(t'm) + (tm − t'm) f(ϕ(ηm)).
Since ϕ(t) is bounded, lim_{m→∞} ϕ(tm) = ξ', and ξ = ξ'. Hence there exists
{tm} ↑ ∞ such that ϕ(tm) ↓ ξ. If η is a second point in L ∩ ω(ϕ), then there exists {sm} ↑ ∞ such that ϕ(sm) ∈ L and ϕ(sm) ↓ η or ϕ(sm) ↑ η. We may assume the sequences {sm}, {tm} interlace, i.e. t1 < s1 < t2 < s2 < ···; then {ϕ(t1), ϕ(s1), ϕ(t2), ···} is a monotone sequence. Thus ξ = η.

Lemma 6.1.4 Let ϕ(t) be a solution of (6.1) and ω(ϕ) be the ω-limit set of γ⁺(ϕ).
(i) If ω(ϕ) ∩ γ⁺(ϕ) ≠ ∅, then ϕ(t) is a periodic solution.
(ii) If ω(ϕ) contains a nonconstant periodic orbit Γ, then ω(ϕ) = Γ.

Proof. (i) Let η ∈ ω(ϕ) ∩ γ⁺(ϕ); then η is a regular point and there exists a transversal L through η. Let η = ϕ(τ). From the invariance of ω(ϕ), γ(η) ⊂ ω(ϕ).
Fig.6.5
Since η ∈ ω(ϕ), there exists {t'm} ↑ +∞ such that ϕ(t'm) → η (See Fig. 6.3). By Lemma 6.1.1, there exists tm near t'm with ϕ(tm) ∈ L, |tm − t'm| → 0 and ϕ(tm) → η as m → ∞, as in the proof of Lemma 6.1.3. Since η ∈ ω(ϕ) and ω(ϕ) is invariant, ϕ(tm) ∈ ω(ϕ) for m sufficiently large. Since ϕ(tm) ∈ L ∩ ω(ϕ), Lemma 6.1.3 gives ϕ(tm) = η for all m sufficiently large. Hence ϕ(t) is a periodic solution.
(ii) Assume a periodic orbit Γ is contained in ω(ϕ). We claim that Γ = ω(ϕ). If not, then since ω(ϕ) is connected, there exist {ξm} ⊆ ω(ϕ) \ Γ and ξ0 ∈ Γ such that ξm → ξ0. (See Fig. 6.4)
Fig.6.6
Let L be a transversal through ξ0. By Lemma 6.1.1, for m sufficiently large the orbit through ξm must intersect L. Let ϕ(τm, ξm) = ξ'm ∈ L ∩ ω(ϕ). From Lemma 6.1.3, {ξ'm} is a constant sequence, and hence ξ'm = ξ0. Then ξm = ϕ(−τm, ξ'm) = ϕ(−τm, ξ0) ∈ Γ. This is a contradiction.

Proof of Poincaré-Bendixson Theorem: Assume ω(ϕ) contains no equilibria and that ϕ is not a periodic solution. Let y0 ∈ ω(ϕ) and C⁺ = γ⁺(y0); then C⁺ ⊆ ω(ϕ). Let ω(y0) be the ω-limit set of C⁺. Obviously ω(y0) ⊆ ω(ϕ), and hence ω(y0) contains no equilibria. Let η ∈ ω(y0) and introduce a transversal L through η.
Fig.6.7
Since η ∈ ω(y0), there exists a sequence {tn} ↑ ∞ such that ϕ(tn, y0) → η (See Fig. 6.5). By Lemma 6.1.1, there exists {t'n}, |tn − t'n| → 0, with ϕ(t'n, y0) ∈ L ∩ ω(y0). Since C⁺ ⊆ ω(ϕ), C⁺ meets L only once (Lemma 6.1.3); then η ∈ C⁺. From Lemma 6.1.4 (i), C⁺ is a periodic orbit. Since C⁺ ⊆ ω(ϕ), from Lemma 6.1.4 (ii), ω(ϕ) = C⁺. Thus we complete the proof.

The following result is related to the Poincaré-Bendixson Theorem: a closed orbit in the plane must enclose an equilibrium. We shall prove it in Chapter 8 using index theory. However, in the following we present a result in Rⁿ using Brouwer's fixed point theorem.

Theorem 6.1.2 If K is a positively invariant set of the system x' = f(x), f : Ω ⊆ Rⁿ → Rⁿ, and K is homeomorphic to the closed unit ball in Rⁿ, then there is at least one equilibrium in K.

Proof. For any fixed τ1 > 0, consider the map ϕ(τ1, ·) : K → K. From Brouwer's fixed point theorem, there is a p1 ∈ K such that ϕ(τ1, p1) = p1; thus p1 lies on a periodic orbit of period τ1. Choose τm → 0, τm > 0, as m → ∞ and corresponding points pm such that ϕ(τm, pm) = pm. Without loss of generality, we assume pm → p* ∈ K. For any t and any integer m there is an integer km(t) such that km(t)τm ≤ t < km(t)τm + τm, and ϕ(km(t)τm, pm) = pm for all t. Furthermore
|ϕ(t, p*) − p*| ≤ |ϕ(t, p*) − ϕ(t, pm)| + |ϕ(t, pm) − pm| + |pm − p*|
= |ϕ(t, p*) − ϕ(t, pm)| + |ϕ(t − km(t)τm, pm) − pm| + |pm − p*| → 0
as m → ∞. Therefore p* is an equilibrium.

Remark 6.1.4 In the book of J. Hale [H] or J. Yorke [ASY], p.341, we have further results on Case (i) of the Poincaré-Bendixson Theorem. If ω(ϕ) contains only a finite number of equilibria of (6.1), then either (a) ω(ϕ) is a single equilibrium of (6.1), or (b) ω(ϕ) consists of a finite number of equilibria q1, ..., qn together with a finite set of orbits γ1, ..., γn such that α(γj) and ω(γj) are equilibria, i.e. ω(ϕ) = {q1, ..., qn} ∪ γ1 ∪ ··· ∪ γn, which is a closed contour.
6.2
Levinson-Smith Theorem
Consider the following Liénard equation
x'' + f(x)x' + g(x) = 0
(6.3)
where f(x), g(x) satisfy:
(H1) f : R → R is continuous and f(x) = f(−x), i.e. f is even.
(H2) g : R → R is continuous, xg(x) > 0 for all x ≠ 0, and g(x) = −g(−x), i.e. g is odd.
(H3) F(x) = ∫₀ˣ f(s) ds satisfies F(x) < 0 for 0 < x < a, and F(x) > 0, f(x) > 0 for x > a.
(H4) G(x) = ∫₀ˣ g(s) ds → ∞ as |x| → ∞, and F(x) → +∞ as x → +∞.

Example 6.2.1 One of the most important examples of a Liénard equation is the Van der Pol equation
x'' + ε(x² − 1)x' + x = 0.
The derivation of the model can be consulted in [Keener, p.481]. In this example f(x) = ε(x² − 1), F(x) = ε(x³/3 − x), g(x) = x, G(x) = x²/2, with a = √3 in (H3).

Theorem 6.2.1 (Levinson-Smith) There exists a unique limit cycle of (6.3), which is globally orbitally stable.

Proof. Rewrite (6.3) in the following form:
x' = y − F(x)
y' = −g(x)
(6.4)
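Theorem 6.2.1 can be illustrated numerically for the Van der Pol case of Example 6.2.1. The following sketch is not part of the proof; ε = 1 and the integrator settings are assumed choices. It integrates the Liénard system (6.4) with classical RK4 from two different starting points and checks that both settle onto the same cycle of amplitude close to 2:

```python
eps = 1.0  # van der Pol parameter (assumed value for the experiment)

def F(x):
    return eps * (x ** 3 / 3.0 - x)

def rhs(x, y):
    # Lienard form (6.4): x' = y - F(x), y' = -g(x) with g(x) = x
    return y - F(x), -x

def rk4(x, y, dt, steps):
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
        k3 = rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
        k4 = rhs(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

def amplitude(x0, y0):
    x, y = rk4(x0, y0, 0.01, 6000)   # discard the transient (t = 60)
    amp = 0.0
    for _ in range(2000):            # track a few more periods (t = 20)
        x, y = rk4(x, y, 0.01, 1)
        amp = max(amp, abs(x))
    return amp

print(amplitude(0.1, 0.0), amplitude(4.0, 0.0))  # both ≈ 2: the unique limit cycle
```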
Consider the energy function
v(x, y) = y²/2 + G(x).
(6.5)
Then
dv/dt = y y' + g(x) x' = −g(x)F(x).
(6.6)
We note that F is odd, i.e. F(−x) = −F(x), and the trajectory (x(t), y(t)) is symmetric with respect to the origin, because (x(t), y(t)) is a solution of (6.3) if and only if (−x(t), −y(t)) is a solution of (6.3). Let α > 0, and let A = (0, α) and D = (0, y(t(α))) be the points of the trajectory (x(t), y(t)) on the y-axis with (x(0), y(0)) = A and (x(t(α)), y(t(α))) = D (See Fig. 6.6). From (6.5) we have
α² − y²(t(α)) = 2(v(A) − v(D)).
(6.7)

Fig.6.8
Due to the symmetry of (x(t), y(t)) with respect to origin, we prove the theorem in three steps. Step 1: To show v(A) − v(D) is a monotonically increasing function of α (See Fig. 6.7).
Fig.6.9 From (6.4), (6.6), we have dv dv/dt −g(x)F (x) = = , dx dx/dt y − F (x) dv = dy
dv dt dy dt
= F (x).
(6.9)
From (6.8), (6.9), we have the following inequalities, Z aµ ¶ Z a dv −g(x)F (x) v(B) − v(A) = dx = dx y(x) − F (x) 0 0 Z a −g(x)F (x) < dx = v(B 0 ) − v(A0 ); y ˜ (x) − F (x) 0 Z
y2
v(E) − v(B) = − y1
µ
dv dy
¶
(6.8)
Z
(6.10)
y2
dy = −
F (x(y))dy < 0;
(6.11)
y1
R y dv Ry v(G) − v(E) = − y31 dy dy = − y31 F (x(y))dy R y1 < − y3 F (˜ x(y))dy = v(C 0 ) − v(B 0 ), Here we use the fact f (x) > 0 for x > a to deduce F (x(y)) > F (˜ x(y)); Z y3 Z y4 dv dv dy = − dy v(C) − v(G) = dy dy y4 y3 Z y3 =− F (x(y))dy < 0 y4
(6.12)
(6.13)
v(D) − v(C) = −∫₀ᵃ (dv/dx) dx = ∫₀ᵃ [−g(x)F(x)/(F(x) − yL(x))] dx < ∫₀ᵃ [−g(x)F(x)/(F(x) − ỹL(x))] dx = v(D') − v(C').
(6.14)
Combining (6.10)-(6.14), we have v(D) − v(A) < v(D') − v(A'). Thus we complete the proof of Step 1.
Step 2: For α > 0 sufficiently small, v(A) − v(D) < 0. Since dv/dt = −g(x)F(x) > 0 for α small, Step 2 follows.
Step 3: For α sufficiently large, v(D) − v(A) > 0 (See Fig. 6.8).
Fig.6.10
v(D) − v(A) = ∫₀ᵃ (dv/dx) dx + ∫_{y2}^{y1} (dv/dy) dy + ∫ₐ⁰ (dv/dx) dx
= ∫₀ᵃ [−g(x)F(x)/(yu(x) − F(x))] dx + ∫_{y2}^{y1} F(x) dy + ∫₀ᵃ [g(x)F(x)/(yL(x) − F(x))] dx.
(6.15)
Case 1: x(α) is bounded for all α. Since the map (x(α), F(x(α))) → (0, y(t(α))) is continuous in α, y(t(α)) is bounded. Hence v(D) − v(A) = α² − y²(t(α)) > 0 for α sufficiently large.
Case 2: x(α) → +∞ and y(t(α)) → +∞ as α → +∞. From (6.4), we have
dy/dx = −g(x)/(y − F(x)).
(6.16)
For 0 < x < a, considering (6.16) with initial values y(0) = α and y(0) = y(t(α)), it follows that yu(x) → +∞ and yL(x) → −∞ as α → ∞. Hence the first and third integrals in (6.15) go to zero as α → ∞. Consider the middle integral in (6.15); from
Green’s theorem we have Z y1 Z Z F (x)dy = F (x)dy + F (x)dy c y2 CB BC I ∂ = F (x)dy =R (F (x))dxdy ∂x =R f (x)dxdy → +∞ as α → +∞ Hence v(D) − v(A) > 0 for α sufficiently large. Combining Step 1 – Step 3 and the symmetry of the trajectory with respect to origin, there exists a unique solution α∗ of v(D) − v(A) (See Fig. 6.9)
Fig.6.11
Obviously the corresponding unique limit cycle attracts every point (x, y) ≠ (0, 0).

Remark 6.2.1 For a two-dimensional autonomous, dissipative system, we may obtain the existence of a limit cycle from the Poincaré-Bendixson Theorem. To prove the uniqueness of limit cycles is a very difficult job. If we are able to show that every limit cycle is orbitally asymptotically stable, then uniqueness of limit cycles follows. Hence if we are able to prove that
∮_Γ (∂f/∂x + ∂g/∂y) dt < 0
(6.17)
for system (6.2), then from the Corollary of Theorem 4.1 in Chapter 4, the periodic orbit Γ is orbitally asymptotically stable. Since the exact position of Γ is unknown, it is difficult to evaluate the integral (6.17). However, for some systems, for example the predator-prey system, Cheng [C] proved the uniqueness of the limit cycle by the method of reflection.

Relaxation of the van der Pol oscillator: Consider the van der Pol equation with k ≫ 1:
u'' − k(1 − u²)u' + u = 0.
Let ε = 1/k. Then 0 < ε ≪ 1 and
εu' = v − G(u)
v' = −εu
(6.18)
where G(u) = −u + u³/3. Consider the Jordan curve J as follows (See Fig. 6.10).
Fig.6.12
By Theorem 6.2.1, there is a unique limit cycle Γ(ε) of (6.18). If the orbit is away from the isocline v = G(u), then from (6.18), u' = ε⁻¹(v − G(u)), v' = −εu, it follows that |u'| ≫ 1 and |v'| ≈ 0, and the orbit tends to jump horizontally except when it is close to the curve v = G(u).

Theorem 6.2.2 As ε ↓ 0, the limit cycle Γ(ε) approaches the Jordan curve J. [H]

Proof. We shall construct a closed region U containing J such that the distance dist(∂U, J) is less than any preassigned constant and, for ε > 0 sufficiently small, the vector field of (6.18) is directed into U at the boundary ∂U, i.e. U is a trapped region. By the Poincaré-Bendixson Theorem, U contains the limit cycle Γ(ε). Consider the following figure; due to the symmetry, we only need to check the flow on the following curves.
Fig.6.13
(1) On the segment 1-2, the flow is inward since n⃗ · (u̇, v̇) = (0, 1) · (u̇, v̇) = −εu < 0.
(2) On the segment 2-3: n⃗ · (u̇, v̇) = (1, 0) · (u̇, v̇) = (1/ε)(v − G(u)) < 0.
(3) On the segments 13-14: n⃗ · (u̇, v̇) = (0, 1) · (u̇, v̇) = −εu < 0.
(4) On the segments 11-12: n⃗ · (u̇, v̇) = (−1, 0) · (u̇, v̇) = −(1/ε)(v − G(u)) < 0.
(5) On the arc 12-13: n⃗ · (u̇, v̇) = (−G'(u), 1) · (0, v̇) = −εu < 0.
(6) On the arc 3-4:
n⃗ · (u̇, v̇) = (G'(u), −1) · (u̇, v̇) = (u² − 1, −1) · (−h/ε, −εu) = (u² − 1)(−h/ε) + εu < 0 if 0 < ε ≪ 1.
(7) On the arc 4-5, with n1 > 0, n2 < 0:
n⃗ · (u̇, v̇) = (n1, n2) · ((1/ε)(v − G(u)), −εu) = n1 (1/ε)(v − G(u)) − εu n2 < 0 if 0 < ε ≪ 1.
(8) On the arc 10-11, with n1 < 0, n2 < 0:
n⃗ · (u̇, v̇) = n1 (1/ε)(v − G(u)) − εu n2 < 0 if 0 < ε ≪ 1.
Remark 6.2.3 For ε ≪ 1, we may estimate the period of the periodic solution of the relaxation oscillation.
Fig.6.14
Since the periodic trajectory spends most of its time travelling along the isocline (See Fig. 6.12), we need only consider the equation
dv/dt = −εu,  or  dt = −dv/(εu).
Then
T(ε) ≈ 2 ∫_{u=2}^{u=1} dt = 2 ∫ dv/(−εu) = 2 ∫₁² (u² − 1)/(εu) du = (3 − 2 log 2)/ε.
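The value of this integral is easy to confirm numerically. The following sketch (Simpson's rule; the step count is an arbitrary choice) checks that 2∫₁² (u² − 1)/u du = 3 − 2 log 2:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = 2 * simpson(lambda u: (u * u - 1) / u, 1.0, 2.0)
print(val, 3 - 2 * math.log(2))   # both ≈ 1.61371
```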
6.3 Hopf Bifurcation
Consider a one-parameter family of differential equations
x' = f(x, µ) = fµ(x)
(Hµ )
with x ∈ R², µ ∈ R. Let f(x(µ), µ) = 0, i.e. x(µ) is an equilibrium. As µ varies, we assume the stability of the equilibrium x(µ) changes from asymptotic stability to complete instability at µ = µ0, i.e. the variational matrix Dx f(x(µ), µ) has eigenvalues α(µ) ± iβ(µ) satisfying α(µ) < 0 for µ < µ0, α(µ0) = 0, β(µ0) ≠ 0, and α(µ) > 0 for µ > µ0, or vice versa. The Hopf Bifurcation Theorem says that the equations (Hµ) have a one-parameter family of nontrivial periodic solutions x(t, ε) with period T(ε) = 2π/β(µ0) + O(ε), µ =
µ(ε) and µ0 = µ(0). Let x̃ = x − x(µ), µ̃ = µ − µ0; then
x̃' = f(x̃ + x(µ̃ + µ0), µ̃ + µ0) = F(x̃, µ̃).
Hence we may assume (Hµ) satisfies
(H1) f(0, µ) = 0 for all µ;
(H2) the eigenvalues of Dfµ(0) are α(µ) ± iβ(µ) with α(0) = 0, β(0) = β0 ≠ 0 and α'(0) ≠ 0, so the eigenvalues cross the imaginary axis.

Example 6.3.1 Consider the predator-prey system
x' = γx(1 − x/K) − (mx/(a + x))y
y' = ((mx/(a + x)) − d)y.
A Hopf bifurcation occurs at λ0 = (K − a)/2, where λ = a/((m/d) − 1) is the parameter.

In order to prove the Hopf Bifurcation Theorem we need to introduce the concept of "normal form".
Example 6.3.2 Saddle-Node Bifurcation
The equations x' = r − x² and x' = r + x² are the prototypes of all saddle-node bifurcations. For example, consider x' = r − x². Fig. 6.13 is the bifurcation diagram.
Fig.6.15
(6.19)
Consider the general one-dimensional equation x0 = f (x, r)
(6.20)
with a saddle-node bifurcation at r = rc. Then the graph of f(x, r) looks like Fig. 6.14.
Fig.6.16
Examine the behavior of x' = f(x, r) near the bifurcation at x = x* and r = rc. Taylor's expansion yields
x' = f(x, r) = f(x*, rc) + (x − x*) ∂f/∂x |_(x*, rc) + (r − rc) ∂f/∂r |_(x*, rc) + (1/2)(x − x*)² ∂²f/∂x² |_(x*, rc) + ···.
Since f(x*, rc) = 0 and ∂f/∂x |_(x*, rc) = 0, we have
x' = a(r − rc) + b(x − x*)² + ···,
(6.21)
where a = ∂f/∂r |_(x*, rc) and b = (1/2) ∂²f/∂x² |_(x*, rc). Thus, with a change of variables and scalings, (6.21) looks like (6.19). We say (6.19) is a "normal form" of (6.20).

Example 6.3.3 Transcritical Bifurcation
The normal form of a transcritical bifurcation is
x' = rx − x²
Fig.6.17
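The stability assignments in these one-dimensional normal forms can be read off from the sign of f' at each equilibrium. The following sketch (with r = 1 chosen arbitrarily; not part of the text) checks the saddle-node and transcritical cases via a central difference:

```python
def stability(f, x_star, h=1e-6):
    # a hyperbolic equilibrium of x' = f(x) is stable iff f'(x*) < 0
    slope = (f(x_star + h) - f(x_star - h)) / (2 * h)
    return "stable" if slope < 0 else "unstable"

r = 1.0
saddle = lambda x: r - x * x      # saddle-node normal form: equilibria at +-sqrt(r)
trans = lambda x: r * x - x * x   # transcritical normal form: equilibria at 0 and r

print(stability(saddle, 1.0), stability(saddle, -1.0))   # stable unstable
print(stability(trans, 0.0), stability(trans, 1.0))      # unstable stable
```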
Example 6.3.4 Show that ẋ = x(1 − x²) − a(1 − e^{−bx}) undergoes a transcritical bifurcation at x = 0 when the parameters a, b satisfy a certain equation. Since
1 − e^{−bx} = 1 − [1 − bx + (1/2)b²x² + O(x³)] = bx − (1/2)b²x² + O(x³),
we have
x' = x − a(bx − (1/2)b²x²) + O(x³) = (1 − ab)x + (1/2)ab²x² + O(x³).
Hence a transcritical bifurcation occurs when ab = 1, and x* ≈ 2(ab − 1)/(ab²).

Example 6.3.5 Pitchfork Bifurcation
Supercritical Pitchfork Bifurcation: The normal form of the supercritical pitchfork bifurcation is
x' = rx − x³.
The bifurcation diagram is the following Fig. 6.15.
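The computation in Example 6.3.4 can be checked numerically: the slope at x = 0 is exactly 1 − ab, and just past ab = 1 a small positive equilibrium appears near the predicted x*. The sketch below uses arbitrary parameter values, a central difference, and bisection:

```python
import math

def f(x, a, b):
    return x * (1 - x * x) - a * (1 - math.exp(-b * x))

def fprime0(a, b, h=1e-6):
    # slope at the equilibrium x = 0; analytically equal to 1 - a*b
    return (f(h, a, b) - f(-h, a, b)) / (2 * h)

print(fprime0(0.5, 1.0), fprime0(2.0, 1.0))   # ≈ +0.5 and ≈ -1.0

# just past the bifurcation (a*b = 1.02) a small positive equilibrium appears,
# close to the prediction x* ≈ 2(ab - 1)/(a b^2)
a, b = 1.02 / 1.1, 1.1
lo, hi = 1e-3, 0.2            # f(lo) < 0 < f(hi)
for _ in range(60):           # bisection for the nontrivial root
    mid = (lo + hi) / 2
    if f(mid, a, b) < 0:
        lo = mid
    else:
        hi = mid
pred = 2 * (a * b - 1) / (a * b * b)
print(lo, pred)
```

The agreement is only approximate because the prediction drops cubic terms.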
Fig.6.18
Subcritical Pitchfork Bifurcation: The normal form is x' = rx + x³. The bifurcation diagram is the following Fig. 6.17.
Fig.6.19
Normal form for Hopf Bifurcation: Consider (Hµ) under the assumptions (H1), (H2), written in the form
x' = α(µ)x − β(µ)y + f¹(x, y, µ)
y' = β(µ)x + α(µ)y + f²(x, y, µ).
With the scaling τ = t/β(µ) and µ̃ = α(µ)/β(µ), we may assume α(µ) = µ, β(µ) = 1. Then we have
x' = µx − y + Σ_{m+n≥2} amn xᵐyⁿ
y' = x + µy + Σ_{m+n≥2} bmn xᵐyⁿ.
(6.22)
Since µ ± i are the eigenvalues of the matrix with rows (µ, −1) and (1, µ), we set
u = x + iy,  v = x − iy.
Then we have
u' = (µ + i)u + Σ Amn uᵐvⁿ
v' = (µ − i)v + Σ Bmn uᵐvⁿ.
(6.23)
Claim: Bmn = Ānm. Since v = ū, we have
v' = ū' = (µ − i)v + Σ Āmn ūᵐv̄ⁿ = (µ − i)v + Σ Āmn vᵐuⁿ.
Hence Bmn = Ānm. Now we look for a change of variables of the form
u = ξ + Σ αmn ξᵐηⁿ,  v = η + Σ βmn ξᵐηⁿ.
(6.24)
Write
ξ' = (µ + i)ξ + Σ A'mn ξᵐηⁿ,  η' = (µ − i)η + Σ B'mn ξᵐηⁿ.
(6.25)
Now we substitute (6.25) into (6.24):
u' = ξ' + Σ αmn (m ξ^{m−1} ηⁿ ξ' + n ξᵐ η^{n−1} η')
= (1 + Σ αmn m ξ^{m−1} ηⁿ)((µ + i)ξ + Σ A'mn ξᵐηⁿ) + (Σ αmn n ξᵐ η^{n−1})((µ − i)η + Σ B'mn ξᵐηⁿ),
(6.26)
and on the other hand
u' = (µ + i)(ξ + Σ αmn ξᵐηⁿ) + Σ Aℓk (ξ + Σ αmn ξᵐηⁿ)^ℓ (η + Σ βmn ξᵐηⁿ)^k.
(6.27)
Comparing the coefficients of ξᵐηⁿ, m + n = 1, 2, in (6.26) and (6.27): for m + n = 1,
(µ + i)ξ = (µ + i)ξ;
for m + n ≥ 2,
(µ + i)αmn ξᵐηⁿ + Amn ξᵐηⁿ = αmn m(µ + i)ξᵐηⁿ + A'mn ξᵐηⁿ + αmn n(µ − i)ξᵐηⁿ.
(6.28)
We want A'mn = 0 for as many terms as possible. From (6.28),
A'mn = Amn − αmn ((µ + i)(m − 1) + (µ − i)n).
(6.29)
If A'mn = 0, then
αmn = Amn / ((µ + i)(m − 1) + (µ − i)n).
(6.30)
At µ = 0 the denominator equals i(m − n − 1). For m + n = 2 we obviously have m − n − 1 ≠ 0, so for µ = 0 or µ near 0 we choose αmn as in (6.30) and are able to kill all quadratic terms in (6.25). Now for m + n = 3 we check whether m − n − 1 ≠ 0; we find that except for m = 2, n = 1 we can kill all other cubic terms. For m = 2, n = 1 the denominator vanishes at µ = 0 (the resonant term), so this term cannot be removed. Then (6.25) becomes
ξ' = (µ + i)ξ + A21 ξ²η + 4th-order terms
η' = (µ − i)η + Ā21 ξη² + 4th-order terms.
The last change of variables is ξ = ρe^{iϕ}, η = ρe^{−iϕ}. Then ξ' = (ρ' + iρϕ')e^{iϕ}, and substituting into
ξ' = (µ + i)ρe^{iϕ} + A21 ρ³e^{iϕ} + ···
and separating real and imaginary parts gives
ρ' = µρ + (Re A21)ρ³ + ···,  ϕ' = 1 + (Im A21)ρ² + ···.
Hence we have
ρ̇ = µρ + Gρ³ + O(ρ⁴)
ϕ̇ = 1 + Kρ² + O(ρ³)
(6.31)
with G = Re A21 and K = Im A21.
Neglecting the higher order terms in (6.31), we have
ρ̇ = µρ + Gρ³,  ϕ̇ = 1 + O(ρ²).
(6.32)
Let G ≠ 0. If µ/G < 0, then ρ̇ = ρG(ρ + √(−µ/G))(ρ − √(−µ/G)). If G < 0, µ > 0, then (0, 0) is an unstable spiral and the circle ρ = √(−µ/G) is asymptotically stable. (See Fig. 6.18)
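This convergence can be seen by integrating the truncated radial equation directly. In the sketch below (µ = 0.1 and G = −1 are assumed values; forward Euler is an arbitrary choice that suffices in one dimension), radii starting below and above √(−µ/G) both converge to the stable limit-cycle radius:

```python
def settle(rho0, mu=0.1, G=-1.0, dt=0.01, steps=20000):
    # forward Euler on the truncated radial equation rho' = mu*rho + G*rho^3
    rho = rho0
    for _ in range(steps):
        rho += dt * (mu * rho + G * rho ** 3)
    return rho

# stable radius sqrt(-mu/G) = sqrt(0.1) ≈ 0.3162
print(settle(0.01), settle(1.0))
```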
Fig.6.20
This is the case of supercritical Hopf bifurcation. If G > 0, µ < 0, then (0, 0) is asymptotically stable and the limit cycle ρ = √(−µ/G) is unstable. (See Fig. 6.19)
Fig.6.21
This is the case of subcritical Hopf bifurcation. When G = 0, we say the Hopf bifurcation is degenerate. Now we return to (6.32). Consider the annulus A = {(ρ, ϕ) : ρ1 ≤ ρ ≤ ρ2}, where ρ1 < √(−µ/G) < ρ2 with ρ1, ρ2 sufficiently close to √(−µ/G). Then for G < 0 the annulus is a trapped region for µ sufficiently small. For G > 0, the annulus A is a trapped region in reverse time. Applying the Poincaré-Bendixson Theorem, we obtain a closed orbit.

Remark 6.3.1 Returning to the original parameters,
ρ̇ = α(µ)ρ + Gρ³ + O(ρ⁴),  θ̇ = β(µ) + O(ρ²).
Hence for µ sufficiently small, the period of the closed orbit is 2π/β0 + O(µ).

Remark 6.3.2 To check whether the Hopf bifurcation is supercritical or subcritical, we need to evaluate G; this is a complicated matter. According to [Wan, Hassard, Kazarinoff], if at µ = 0 we have
ẋ = −β0 y + f(x, y, 0),  ẏ = β0 x + g(x, y, 0),
then the formula is
G = (1/16)(fxxx + fxyy + gxxy + gyyy) + (1/(16β0))(fxy(fxx + fyy) − gxy(gxx + gyy) − fxx gxx + fyy gyy),
where all partial derivatives are evaluated at the bifurcation point (0, 0, 0).

Example 6.3.5 x'' + µx' + sin x = 0 has a degenerate Hopf bifurcation at µ = 0.

Remark 6.3.3 We may generalize the Hopf Bifurcation Theorem to
x' = f(x, µ) = fµ(x), satisfying
x ∈ Rⁿ,  µ ∈ R,
(H1) f(0, µ) = 0;
(H2) the eigenvalues of Dfµ(0) are λ1,2(µ) = α(µ) ± iβ(µ), λ3(µ), ..., λn(µ) with α(0) = 0, β(0) = β0 ≠ 0, α'(0) ≠ 0, and Re λi(µ) < 0 for all i = 3, ..., n.
The n-dimensional Hopf Bifurcation Theorem is primarily needed as a tool to show the existence of periodic solutions for n ≥ 3.
Exercises

Exercise 6.1 Write the Van der Pol equation x'' + α(x² − 1)x' + x = 0 in the form
x' = y
y' = −x − α(x² − 1)y
(1)
Compare (1) with the auxiliary system
x' = y
y' = −x − α sgn(|x| − √2) y
(2)
Show that system (2) has a unique limit cycle C and that the vector field of (1) is directed inside C.

Exercise 6.2 Is it possible to have a two-dimensional system such that each orbit in an annulus is a periodic orbit and yet the boundaries of the annulus are limit cycles?

Exercise 6.3 Sketch the phase portrait for the unforced, undamped Duffing oscillator
x'' + x³ − x = 0
by considering the energy E = y²/2 + V(x), where V(x) = x⁴/4 − x²/2 is a double-well potential. Show that there is a homoclinic orbit. Also consider the same problem for the unforced, damped oscillator x'' + δx' + x³ − x = 0, δ > 0.

Exercise 6.4 Show that the system of equations
x' = x − xy² + y³
y' = 3y − yx² + x³
has no nontrivial periodic orbit in the region x² + y² ≤ 4.

Exercise 6.5 Consider x'' + g(x) = 0, or the equivalent system
x' = y
y' = −g(x).
Verify that this is a conservative system with the Hamiltonian function H(x, y) = y²/2 + G(x), where G(x) = ∫₀ˣ g(s) ds. Show that any periodic orbit of this system must intersect the x-axis at two points (a, 0) and (b, 0), a < b. Using the symmetry of the periodic orbit with respect to the x-axis, show that the minimal period T of the periodic orbit passing through two such points is given by
T = 2 ∫ₐᵇ du / √(2[G(b) − G(u)]).
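For an odd restoring force g, the orbit through (b, 0) also passes through (−b, 0), so the period formula of Exercise 6.5 can be tested with a = −b. The sketch below (midpoint rule, which tolerates the integrable endpoint singularity; g(u) = u chosen so that every orbit has period 2π) is one way to check it:

```python
import math

def period(G, b, n=400000):
    # T = 2 * integral_{-b}^{b} du / sqrt(2 (G(b) - G(u))), midpoint rule
    h = 2.0 * b / n
    total = 0.0
    for i in range(n):
        u = -b + (i + 0.5) * h
        total += h / math.sqrt(2.0 * (G(b) - G(u)))
    return 2.0 * total

# harmonic oscillator: g(u) = u, G(u) = u^2/2, every orbit has period 2*pi
print(period(lambda u: u * u / 2.0, 1.0))   # ≈ 6.28
```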
Exercise 6.6 The second-order equation y'' + (y')³ − 2λy' + y = 0, where λ is a small scalar parameter, arises in the theory of sound and is known as Rayleigh's equation. Convert it into a first-order system and investigate the Hopf bifurcation.
Exercise 6.7 Investigate the Hopf bifurcation for the following predator-prey system:
x' = rx(1 − x/K) − (mx/(a + x))y
y' = sy(1 − y/(νx)),  r, K, m, a, s, ν > 0.

Exercise 6.8 Consider the two-dimensional system x' = Ax − r²x, where A is a 2 × 2 constant real matrix with complex eigenvalues α ± iω, x ∈ R², r = ‖x‖. Prove that there exists at least one limit cycle for α > 0 and none for α < 0.

Exercise 6.9 Show that the planar system
x' = x − y − x³
y' = x + y − y³
has a periodic orbit in some annulus region centered at the origin.

Exercise 6.10 Sketch the phase portrait of x'' − x + x² = 0 and of
x'' − (1 − x)e^{−x} = 0.
References
[ASY] K. Alligood, T. Sauer and J. Yorke, Chaos: An Introduction to Dynamical Systems, Springer-Verlag, 1996.
[C] K. S. Cheng, Uniqueness of a limit cycle for a predator-prey system, SIAM J. Math. Anal. (1981).
[H] J. Hale, Ordinary Differential Equations.
[MM] R. K. Miller and A. Michel, Ordinary Differential Equations.
[MP-Sm] J. Mallet-Paret and H. Smith, The Poincaré-Bendixson Theorem for monotone cyclic feedback systems, J. Dyn. Diff. Eq. 2 (1990), 367-421.
[R] C. Robinson, Dynamical Systems.
Chapter 7
SECOND ORDER LINEAR EQUATIONS

7.1 Sturm's Comparison Theorem and the Sturm-Liouville boundary value problem
Consider the general second order linear equation
u'' + g(t)u' + f(t)u = h(t),  a ≤ t ≤ b.
(7.1)
Multiplying both sides of (7.1) by exp(∫₀ᵗ g(s) ds) ≡ p(t) yields
(p(t)u')' + q(t)u = H(t),
(7.2)
with p(t) > 0 and q(t) continuous on [a, b].
The advantage of the form (7.2) over (7.1) is that the linear operator Lu = (p(t)u')' + q(t)u is self-adjoint (see Lemma 7.1.2).
Prüfer Transformation: Let u(t) ≢ 0 be a real-valued solution of
(p(t)u')' + q(t)u = 0.
(7.3)
Since u(t) and u'(t) cannot vanish simultaneously at any point t0 in J = [a, b], we introduce the following "polar" coordinates:
ρ = [u² + (pu')²]^{1/2},  ϕ = tan⁻¹(u/(pu')).
(7.4)

Fig.7.1
Then we have
pu' = ρ cos ϕ,  u = ρ sin ϕ.
(7.5)
From (7.4), (7.5), it follows that
ϕ' = [pu'·u' − u(pu')'] / [(pu')² (1 + (u/(pu'))²)] = [(pu')u' + q(t)u²]/ρ² = (1/p(t))((pu')/ρ)² + q(t)(u/ρ)²,
i.e.
ϕ' = (1/p(t)) cos² ϕ + q(t) sin² ϕ.
(7.6)
We note that in (7.6) the RHS is independent of ρ. A straightforward computation shows
ρ' = −[q(t) − 1/p(t)] ρ sin ϕ cos ϕ.
(7.7)
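For p ≡ 1, q ≡ 1 (i.e. u'' + u = 0), equation (7.6) reduces to ϕ' ≡ 1, so the Prüfer angle advances by exactly π between consecutive zeros of u = ρ sin ϕ. The following sketch (RK4 with an arbitrary step count; not part of the text) integrates (7.6) and checks this:

```python
import math

def phi_prime(t, phi, p=lambda t: 1.0, q=lambda t: 1.0):
    # equation (7.6): phi' = cos^2(phi)/p(t) + q(t) sin^2(phi); here p = q = 1
    return math.cos(phi) ** 2 / p(t) + q(t) * math.sin(phi) ** 2

def advance(phi0, t0, t1, n=10000):
    # classical RK4 for the scalar Prufer equation
    phi, t = phi0, t0
    h = (t1 - t0) / n
    for _ in range(n):
        k1 = phi_prime(t, phi)
        k2 = phi_prime(t + h / 2, phi + h / 2 * k1)
        k3 = phi_prime(t + h / 2, phi + h / 2 * k2)
        k4 = phi_prime(t + h, phi + h * k3)
        phi += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return phi

print(advance(0.0, 0.0, math.pi))   # ≈ pi: u = sin t vanishes again at t = pi
```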
From (7.5), we have ϕ(t0) = 0 (mod π) if and only if u(t0) = 0.

Lemma 7.1.1 Let u(t) ≢ 0 be a real-valued solution of (7.3) with p(t) > 0 and q(t) continuous on J = [a, b]. Assume u(t) has exactly n zeros, n ≥ 1, t1 < t2 < ··· < tn in [a, b]. Let ϕ(t) be a continuous solution of (7.6), (7.7) with 0 ≤ ϕ(a) < π. Then ϕ(tk) = kπ and ϕ(t) > kπ for tk < t ≤ b.

Proof. Since u(t0) = 0 iff ϕ(t0) = 0 (mod π), it follows that ϕ'(tk) = 1/p(tk) > 0. Hence Lemma 7.1.1 follows.

Definition 7.1.1 Consider
t ∈ J = [a, b]
(I)
(p2 (t)u0 )0 + q2 (t)u = 0,
t ∈ J = [a, b]
(II)
we say equation (II) is a Sturm majorant of (I) on J if p1 (t) ≥ p2 (t), q1 (t) ≤ q2 (t), t ∈ J. In addition, if q1 (t) < q2 (t) or p1 (t) > p2 (t) > 0 at some point t ∈ J, we say (II) is a strict Sturm majorant of (I) on J. Theorem 7.1.1 (Sturm’s 1st Comparison Theorem) Let pi (t), qi (t) be continuous on J = [a, b], i = 1, 2 and (II) is a Strum majorant of (I). Assume u1 (t) 6 ≡0 is a solution of (I) and u1 (t) has exactly n zeros t1 < t2 < · · · < tn on J and u2 (t) 6 ≡0 is a solution of (II) satisfying p2 (t)u02 (t) p1 (t)u01 (t) ≥ at t = a u1 (t) u2 (t)
7.1. STURM’S COMPARISON THEOREM AND STURM-LIOUVILLE BOUNDARY VALUE PROBLEM147 (if ui (a) = 0, set
pi (a)u0i (a) = +∞) ui (a)
Then u2 (t) has at least n zeros on (a, tn ]. Furthermore, if either the inequality holds in (7.8) or (II) is a strict Sturm majorant of (I), then u2 (t) has at least n zeros in (a, tn ). Proof. Let ϕi (t) = tan−1
ui (t) pi (t)u0i (t) 2
i = 1, 2, · · ·. Then ϕi (t) satisfies (7.6), i.e.
ϕ0i (t) = pi1(t) cos2 ϕi (t) + qi (t) sin ϕi (t) = fi (t, ϕi ). From (7.8) it follows that 0 ≤ ϕ1 (a) ≤ ϕ2 (a) < π. Since (II) is a Sturm majorant of (I), it follows that f1 (t, ϕ) ≤ f2 (t, ϕ) on J. From differential inequality, we have ϕ1 (t) ≤ ϕ2 (t) on J. Since ϕ1 (tn ) = nπ ≤ ϕ2 (tn ), u2 (t) has at least n zeros on (a, tn ]. If ϕ1 (a) < ϕ2 (a) or f1 (t, ϕ) < f2 (t, ϕ) on J, then we have ϕ1 (t) < ϕ2 (t) on J. Hence u2 (t) has at least n zeros on (a, tn ). Corollary 7.1.1 (Sturm’s Separation Theorem) Let (II) be a Sturm majorant of (I) on J and u1 (t), u2 (t) are nonzero solutions of (I) and (II) respectively. Let u1 (t) vanish at t1 , t2 ∈ J, t1 < t2 . Then u2 (t) has at least one zero on [t1 , t2 ]. In particular if p1 (t) ≡ p2 (t) ≡ p(t), q1 (t) ≡ q2 (t) ≡ q(t) and u1 (t), u2 (t) are two linearly independent solutions of (7.3), then the zeros of u1 separate and are separated by those zeros of u2 . Example 7.1.1 sin t and cos t are two linearly independent solution of u00 + u = 0. Their zeros are separated each other. Example 7.1.2 Consider Airy’s equation u00 (t) + tu(t) = 0. Comparing with u00 + u = 0, it is easy to show that Airy’s equation is oscillatory. Remark 7.1.1 Strum’s Comparison Theorem is used to show that a second order linear equation (sometimes nonlinear) is oscillatory. For example in [Hsu et] v 00 (x) + x sin v = 0 v 0 (0) = 0, v(K) = 0
(7.8)
v''(x) + x sin v = 0,  v'(0) = 0,  v(0) = a.
(7.9)
where K is a parameter and v(x, a) denotes the solution of the initial value problem (7.9).
From the uniqueness of solutions of ordinary differential equations, we have
v(x, 2π + a) = 2π + v(x, a),  v(x, a) = −v(x, −a),  v(x, 0) ≡ 0,  v(x, π) ≡ π.
Hence we need only consider the solution v(x, a) with 0 < a < π. We next show that v(x, a) is oscillatory over [0, ∞). Let
V(x) = (1 − cos v(x)) + (1/2)(v'(x))²/x.
Then
V'(x) = −(1/2)(v'(x)/x)² ≤ 0
and 1 − cos v(x) ≤ V(x) ≤ V(0) = 1 − cos a. Since |v(x)| ≤ π, it follows that |v(x)| ≤ a for all x ≥ 0. Rewrite (7.10) as
v''(x) + x (sin v(x)/v(x)) v(x) = 0.
Let 0 < δ < min_{0≤v≤a} (sin v / v). Applying Sturm's Comparison Theorem with
v'' + δv = 0, which is oscillatory over [0, ∞), the solution v(x, a) of (7.10) is oscillatory over [0, ∞) with zeros y1(a) < y2(a) < ··· (See Fig. 7.1).
Fig.7.2
To see how the number of solutions of (7.9) varies with the parameter K, we construct the following Fig. 7.2.
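The comparison in Example 7.1.2 and the oscillation argument above can be checked numerically by counting sign changes of solutions of u'' + q(t)u = 0. The sketch below (RK4 with an arbitrary step size on the arbitrary interval [1, 20]; not part of the text) compares q(t) = t with q(t) = 1:

```python
def count_zeros(q, t0, t1, dt=1e-3):
    # count sign changes of u solving u'' + q(t) u = 0, u(t0) = 0, u'(t0) = 1
    u, v, t, zeros = 0.0, 1.0, t0, 0
    def rhs(t, u, v):
        return v, -q(t) * u
    steps = int(round((t1 - t0) / dt))
    for _ in range(steps):
        k1 = rhs(t, u, v)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = rhs(t + dt / 2, u + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = rhs(t + dt, u + dt * k3[0], v + dt * k3[1])
        u_new = u + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if u != 0.0 and u_new * u < 0.0:
            zeros += 1
        u, t = u_new, t + dt
    return zeros

airy = count_zeros(lambda t: t, 1.0, 20.0)      # Airy equation u'' + t u = 0
trig = count_zeros(lambda t: 1.0, 1.0, 20.0)    # u'' + u = 0: 6 zeros on (1, 20]
print(airy, trig)   # the Airy solution oscillates faster, as the comparison predicts
```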
Fig.7.3 Sturm-Liouville boundary value problem: Let Lu = (p(t)u0 )0 + q(t)u, p(t) > 0, q(t) continuous on [a, b]. Consider the following eigenvalue problem: Lu + λu = 0
u(a) cos α − p(a)u0 (a) sin α = 0
(Pλ )
u(b) cos β − p(b)u'(b) sin β = 0.
Obviously the boundary conditions in (Pλ) are ϕ(a) = α, ϕ(b) = β, where ϕ is the angle in the "polar" coordinates (7.4)-(7.5). Let y, z : [a, b] → C and define the inner product
⟨y, z⟩ = ∫ₐᵇ y(t) z̄(t) dt.
Then we have the following:
(i) ⟨y + z, w⟩ = ⟨y, w⟩ + ⟨z, w⟩;
(ii) ⟨αy, z⟩ = α⟨y, z⟩, α ∈ C;
(iii) ⟨z, y⟩ is the complex conjugate of ⟨y, z⟩;
(iv) ⟨y, y⟩ > 0 when y ≠ 0.

Lemma 7.1.2 The linear operator L is self-adjoint, i.e., ⟨Ly, z⟩ = ⟨y, Lz⟩ for all y, z satisfying the boundary conditions in (P_λ).

Proof.

⟨Ly, z⟩ − ⟨y, Lz⟩
= ∫_a^b [(p(t)y′)′ + q(t)y] z̄(t) dt − ∫_a^b y(t) [(p(t)z̄′(t))′ + q(t)z̄(t)] dt
= p(t)y′(t)z̄(t)|_a^b − ∫_a^b z̄′(t)p(t)y′(t) dt − p(t)z̄′(t)y(t)|_a^b + ∫_a^b y′(t)p(t)z̄′(t) dt
= p(b)y′(b)z̄(b) − p(a)y′(a)z̄(a) − p(b)z̄′(b)y(b) + p(a)z̄′(a)y(a)
= y(b) cot β z̄(b) − y(a) cot α z̄(a) − z̄(b) cot β y(b) + z̄(a) cot α y(a)
= 0.

Lemma 7.1.3 (i) All eigenvalues of (P_λ) are real. (ii) The eigenfunctions φ_m and φ_n corresponding to distinct eigenvalues λ_m, λ_n are orthogonal, i.e., ⟨φ_n, φ_m⟩ = 0. (iii) All eigenvalues are simple.

Proof. Let Lϕ_m = −λ_m ϕ_m and Lϕ_n = −λ_n ϕ_n. Then we have

λ_m⟨ϕ_m, ϕ_n⟩ = ⟨λ_m ϕ_m, ϕ_n⟩ = −⟨Lϕ_m, ϕ_n⟩ = −⟨ϕ_m, Lϕ_n⟩ = ⟨ϕ_m, λ_n ϕ_n⟩ = λ̄_n⟨ϕ_m, ϕ_n⟩.

If m = n then λ_m = λ̄_m, and (i) follows. If m ≠ n then (λ_m − λ_n)⟨ϕ_m, ϕ_n⟩ = 0, so ⟨ϕ_m, ϕ_n⟩ = 0, which completes the proof of (ii). To show that the eigenvalues are simple, we note that any eigenfunction φ(t) satisfies the boundary
condition φ(a) cos α − p(a)φ′(a) sin α = 0. If cos α = 0 then φ′(a) = 0. If cos α ≠ 0, then p(a)φ′(a) = φ(a)/tan α, i.e., φ′(a) = φ(a)/(p(a) tan α). In either case φ′(a) is determined uniquely by φ(a). Hence the eigenvalues are simple.

Now we return to the Sturm–Liouville boundary value problem.

Example 7.1.3 The problem u″ + λu = 0, u(0) = 0, u(π) = 0 has eigenvalues λ_n = (n + 1)², n = 0, 1, 2, ···, with eigenfunctions u_n(x) = sin((n + 1)x).

Theorem 7.1.2 There are infinitely many eigenvalues λ₀, λ₁, ···, forming a monotone increasing sequence with λ_n → ∞ as n → ∞. Moreover, the eigenfunction corresponding to λ_n has exactly n zeros in (a, b).

Proof. Let u(t, λ) be the solution of the initial value problem

(p(t)u′)′ + q(t)u + λu = 0,
u(a) = sin α, u′(a) = cos α / p(a).  (7.10)

We note that (P_λ) has a nontrivial solution if and only if u(t, λ) satisfies the second boundary condition. For fixed λ, we define

ϕ(t, λ) = tan⁻¹ [u(t, λ) / (p(t)u′(t, λ))].

Then ϕ(a, λ) = α, and ϕ(t, λ) satisfies

ϕ′(t, λ) = (1/p(t)) cos² ϕ(t, λ) + (q(t) + λ) sin² ϕ(t, λ),
ϕ(a, λ) = α.
The Sturm comparison theorem (or a differential inequality) shows that ϕ(b, λ) is strictly increasing in λ. Without loss of generality we may assume 0 ≤ α < π. We shall show that

lim_{λ→∞} ϕ(b, λ) = +∞,  (7.11)

and

lim_{λ→−∞} ϕ(b, λ) = 0.  (7.12)

We note that the second boundary condition in (P_λ) is equivalent to ϕ(b, λ) = β + nπ for some n ≥ 0. If (7.11) and (7.12) hold, then there exist eigenvalues λ₀, λ₁, ···, λ_n, ···, with λ_n → +∞, such that ϕ(b, λ_n) = β + nπ, n ≥ 0. From Lemma 7.1.1, the corresponding eigenfunction u_n(x) has exactly n zeros.

Now we prove (7.11). Introduce the scaling s = ∫_a^t dτ/p(τ) and set U(s) = u(t). Then

U̇(s) = p(t) du/dt,
Ü(s) = p(t) (p(t) du/dt)′,

and (7.10) becomes

Ü(s) + p(t)(q(t) + λ)U = 0.  (7.13)
For an arbitrary fixed n > 0 and any fixed M > 0, choose λ sufficiently large so that

p(t)(q(t) + λ) ≥ M², a ≤ t ≤ b.

Compare (7.13) with

ü + M²u = 0.  (7.14)

The solution u(s) of (7.14) has at least n zeros on 0 ≤ s ≤ ∫_a^b dt/p(t) provided M is sufficiently large. By the Sturm comparison theorem, U(s) then has at least n zeros on 0 ≤ s ≤ ∫_a^b dt/p(t), or, equivalently, u(t) has at least n zeros on [a, b]. Then ϕ(b, λ) ≥ nπ if λ is sufficiently large. This completes the proof of (7.11).

Next we show that (7.12) holds. Obviously ϕ(b, λ) ≥ 0 for all λ ∈ R. Choose λ < 0 with |λ| sufficiently large so that p(t)(q(t) + λ) ≤ −M² < 0. Compare (7.13) with

ü − M²u = 0, u(0) = sin α, u̇(0) = cos α,

whose solution is

u(s) = sin α cosh(Ms) + (1/M) cos α sinh(Ms).

Let ψ(s, M) = tan⁻¹(u(s)/u̇(s)), so that ψ(0, M) = α. Since it can be verified that u(s)/u̇(s) → 0 as M → ∞, it follows that ψ(b₀, M) → 0 as M → ∞, where b₀ = ∫_a^b dt/p(t). By the Sturm comparison theorem we have 0 < ϕ(b, λ) ≤ ψ(b₀, M) → 0. Thus (7.12) holds.
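The eigenvalue picture of Theorem 7.1.2 can be checked numerically. The sketch below (an illustrative experiment, not part of the text; grid size chosen arbitrarily) discretizes the problem of Example 7.1.3, u″ + λu = 0 with u(0) = u(π) = 0, by finite differences; the smallest eigenvalues of the resulting tridiagonal matrix should approach λ_n = (n + 1)² = 1, 4, 9, ···.

```python
import numpy as np

# Finite-difference approximation of -u'' = lambda*u, u(0) = u(pi) = 0,
# whose exact eigenvalues are (n+1)^2, n = 0, 1, 2, ...
N = 500                              # interior grid points
h = np.pi / (N + 1)
main = 2.0 / h**2 * np.ones(N)       # tridiagonal matrix for -d^2/dx^2
off = -1.0 / h**2 * np.ones(N - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lam = np.sort(np.linalg.eigvalsh(A))
print(lam[:3])  # approximately [1, 4, 9]
```

The monotone ordering λ₀ < λ₁ < ··· and the growth λ_n → ∞ are both visible in the computed spectrum.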
7.2
Distributions
In 1930 the famous physicist Paul Dirac introduced the concept of the δ-function δ_ξ(x), which satisfies

δ_ξ(x) = 0 for x ≠ ξ,
∫_{−∞}^{∞} δ_ξ(x) dx = 1,
∫_{−∞}^{∞} δ_ξ(x)ϕ(x) dx = ϕ(ξ), ϕ ∈ C^∞.

Physically δ_ξ(x) represents a unit force applied at the position x = ξ. In 1950 L. Schwartz introduced the definition of a distribution to give the δ-function a rigorous mathematical meaning.

Definition 7.2.1 A test function is a function ϕ ∈ C₀^∞(R) with compact support, i.e., there exists a finite interval [a, b] such that ϕ(x) = 0 outside [a, b].

Example 7.2.1 A typical test function is

ϕ(x) = exp(1/(x² − 1)) for |x| < 1, ϕ(x) = 0 for |x| ≥ 1.

Definition 7.2.2 We say t : C₀^∞(R) → R is a linear functional if t(αϕ₁ + βϕ₂) = αt(ϕ₁) + βt(ϕ₂) for α, β ∈ R, ϕ₁, ϕ₂ ∈ C₀^∞(R). We denote t(ϕ) = ⟨t, ϕ⟩.

Definition 7.2.3 Let {ϕ_n} ⊆ C₀^∞(R). We say {ϕ_n} is a zero sequence if
(i) ∪_n supp ϕ_n is bounded;
(ii) lim_{n→∞} max_x |d^k ϕ_n(x)/dx^k| = 0 for k = 0, 1, 2, ···.
Definition 7.2.4 A linear functional t : C₀^∞(R) → R is continuous if ⟨t, ϕ_n⟩ → 0 as n → ∞ for every zero sequence {ϕ_n}. A distribution is a continuous linear functional.

Example 7.2.2 Let f be locally integrable, i.e., ∫_I |f(x)| dx exists and is bounded for every finite interval I. Then we define the distribution t_f associated with f by

⟨t_f, ϕ⟩ = ∫_{−∞}^{∞} f(x)ϕ(x) dx.

It is an exercise to verify that t_f is a distribution.

Example 7.2.3 (i) Heaviside distribution: H(x) = 1 for x ≥ 0 and H(x) = 0 for x < 0. Then

⟨t_H, ϕ⟩ = ∫_{−∞}^{∞} H(x)ϕ(x) dx = ∫_0^∞ ϕ(x) dx.

(ii) Delta distribution:

⟨δ_ξ, ϕ⟩ := ϕ(ξ).

Symbolically we write ∫ δ_ξ(x)ϕ(x) dx = ϕ(ξ).

(iii) Dipole distribution:

⟨∆, ϕ⟩ := ϕ′(0).

Definition 7.2.5 We say a distribution t is regular if t = t_f for some locally integrable function f. Otherwise t is singular.

Remark 7.2.1 δ_ξ is a singular distribution.

Proof. If not, δ_ξ = t_f for some locally integrable f. Take ξ = 0 for simplicity and consider

ϕ_a(x) = exp(a²/(x² − a²)) for |x| < a, ϕ_a(x) = 0 for |x| ≥ a,

so that ϕ_a(0) = 1/e = max_x |ϕ_a(x)|. Then ⟨t_f, ϕ_a⟩ → 0 as a → 0, for

|⟨t_f, ϕ_a⟩| = |∫_{−∞}^{∞} f(x)ϕ_a(x) dx| ≤ (1/e) ∫_{−a}^{a} |f(x)| dx → 0.

However, if δ₀ = t_f, then ⟨t_f, ϕ_a⟩ = ϕ_a(0) = 1/e. This is a contradiction.
Next we introduce some algebraic operations on distributions. Let f ∈ C^∞(R) and let t be a distribution. Then we define f t as the distribution

⟨f t, ϕ⟩ = ⟨t, f ϕ⟩, ϕ ∈ C₀^∞(R).
We note that if t is a regular distribution, then

⟨f t, ϕ⟩ = ∫_{−∞}^{∞} f(x)t(x)ϕ(x) dx = ∫_{−∞}^{∞} t(x)f(x)ϕ(x) dx.

To motivate the derivative of a distribution t, let t be a regular distribution with t differentiable. Then

⟨t′, ϕ⟩ = ∫_{−∞}^{∞} t′(x)ϕ(x) dx = ϕ(x)t(x)|_{−∞}^{∞} − ∫ ϕ′(x)t(x) dx = −⟨t, ϕ′⟩.

Hence we have the following definition.

Definition 7.2.6 ⟨t′, ϕ⟩ = −⟨t, ϕ′⟩.

It is an easy exercise to verify that t′ is a distribution.

Example 7.2.4 H′ = δ₀ = δ in the distribution sense:

⟨H′, ϕ⟩ = −⟨H, ϕ′⟩ = −∫_{−∞}^{∞} H(x)ϕ′(x) dx = −∫_0^∞ ϕ′(x) dx = ϕ(0) = ⟨δ, ϕ⟩, ϕ ∈ C₀^∞(R).

Example 7.2.5 δ′ = −∆ in the distribution sense:

⟨δ′, ϕ⟩ = −⟨δ, ϕ′⟩ = −ϕ′(0) = ⟨−∆, ϕ⟩.
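Example 7.2.4 can be tested numerically against the bump function of Example 7.2.1: the identity ⟨H′, ϕ⟩ = −∫_0^1 ϕ′(x) dx = ϕ(0) should give e^{−1}. A small quadrature sketch (resolution chosen arbitrarily):

```python
import numpy as np

# Check <H', phi> = -<H, phi'> = phi(0) = 1/e for the bump test function
# phi(x) = exp(1/(x^2 - 1)) on |x| < 1 (Example 7.2.1).
n = 200000
x = (np.arange(n) + 0.5) / n                      # midpoint grid on (0, 1)
phi = np.exp(1.0 / (x * x - 1.0))                 # the bump function
dphi = phi * (-2.0 * x / (x * x - 1.0) ** 2)      # its analytic derivative
val = -np.sum(dphi) / n                           # midpoint rule for -∫ phi'
print(val, np.exp(-1.0))  # both approximately 0.36788
```

The agreement is just the fundamental theorem of calculus, −(ϕ(1) − ϕ(0)) = ϕ(0), which is exactly what the distributional derivative encodes.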
7.3

Green's function

Let

Lu = a_n(x) dⁿu/dxⁿ + a_{n−1}(x) d^{n−1}u/dx^{n−1} + ··· + a₁(x) du/dx + a₀(x)u

be a linear differential operator. Consider the boundary value problem (Lu)(x) = f(x), 0 ≤ x ≤ 1, with boundary conditions at x = 0 and x = 1. We say u is a distribution solution of Lu = f if ⟨Lu, ϕ⟩ = ⟨f, ϕ⟩ for all test functions ϕ. To motivate the concept of Green's function, we consider the following example.

Example 7.3.1 Consider the two-point boundary value problem

u″(x) = f(x), u(0) = 0, u(1) = 0.
From direct integration and the boundary conditions we obtain

u(x) = −∫_0^1 x(1 − ξ)f(ξ) dξ + ∫_0^x (x − ξ)f(ξ) dξ  (7.15)
     = ∫_0^1 g(x, ξ)f(ξ) dξ,

where

g(x, ξ) = x(ξ − 1), 0 ≤ x ≤ ξ ≤ 1,
g(x, ξ) = ξ(x − 1), 0 ≤ ξ ≤ x ≤ 1.

Write u(x) = (L⁻¹f)(x) = ∫_0^1 g(x, ξ)f(ξ) dξ. Applying L to both sides of this identity, we have

f(x) = L(∫_0^1 g(x, ξ)f(ξ) dξ) = ∫_0^1 Lg(x, ξ)f(ξ) dξ.

Then Lg(x, ξ) = δ_x(ξ) = δ(x − ξ). Here g(x, ξ) is called the Green's function of the differential operator L subject to the homogeneous boundary conditions u(0) = 0, u(1) = 0.
Physical Interpretation of Green's function:
Suppose a string is stretched between x = 0 and x = 1 in an equilibrium state. Let u(x) be the vertical displacement from the zero position with u(0) = u(1) = 0, i.e., the two ends of the string are fixed. Since the string is in equilibrium, the horizontal and vertical forces must balance:

Fig. 7.4

T₀ = T(x) cos θ(x) = T(x + dx) cos θ(x + dx),
T(x + dx) sin θ(x + dx) − T(x) sin θ(x) = ρ(x)g ds,

where ρ(x) is the density per unit length of the string at x and ds is the arclength between x and x + dx. Then

T₀ (tan θ(x + dx) − tan θ(x))/dx = ρ(x)g ds/dx.

Since du/dx = tan θ(x), letting dx → 0 gives

T₀ d²u/dx² = ρ(x)g (1 + (du/dx)²)^{1/2}.

Assuming |du/dx| is small compared to 1, it follows that

d²u/dx² = f(x), f(x) = ρ(x)g/T₀,
u(0) = u(1) = 0.

From Example 7.3.1, u(x) = ∫_0^1 g(x, ξ)f(ξ) dξ, where g(x, ξ) satisfies

d²g(x, ξ)/dx² = δ(x − ξ), g(0, ξ) = 0, g(1, ξ) = 0.

Hence g(x, ξ) is the vertical displacement of a string fixed at x = 0 and x = 1 subject to a unit force at ξ. If we partition [0, 1] at points {ξ_k}_{k=1}^n and apply the force f(ξ_k) at each point ξ_k, k = 1, ···, n, then by the superposition principle

u(x) = Σ_{k=1}^n f(ξ_k) g(x, ξ_k) Δξ_k.
Letting n → ∞, we obtain (7.15). Now we consider the following two-point boundary value problem:

Lu = f on a < x < b,
Lu = a_n(x) dⁿu/dxⁿ + a_{n−1}(x) d^{n−1}u/dx^{n−1} + ··· + a₁(x) du/dx + a₀(x)u,  (7.16)

with homogeneous boundary conditions B₁u(a) = 0, B₂u(b) = 0. To solve (7.16) we introduce the Green's function g(x, ξ), which satisfies

Lg(x, ξ) = δ_ξ(x) for all x ∈ (a, b),
B₁g(a, ξ) = 0, B₂g(b, ξ) = 0.  (7.17)

Then we expect the solution u(x) of (7.16) to be written as u(x) = ∫_a^b g(x, ξ)f(ξ) dξ.

We find the Green's function g(x, ξ) from the following conditions:

(i) Lg(x, ξ) = 0 for x ≠ ξ, with B₁g(a, ξ) = 0, B₂g(b, ξ) = 0;

(ii) d^k g(x, ξ)/dx^k is continuous at x = ξ for k = 0, 1, ···, n − 2;

(iii) Jump condition: d^{n−1}g/dx^{n−1}(ξ⁺, ξ) − d^{n−1}g/dx^{n−1}(ξ⁻, ξ) = 1/a_n(ξ).

To see why the jump condition (iii) holds, we integrate (7.17) from ξ⁻ to ξ⁺:

∫_{ξ⁻}^{ξ⁺} [a_n(x) dⁿg/dxⁿ(x, ξ) + a_{n−1}(x) d^{n−1}g/dx^{n−1}(x, ξ) + ··· + a₁(x) dg/dx(x, ξ) + a₀(x)g(x, ξ)] dx = 1.

From condition (ii), the above identity becomes a_n(ξ) d^{n−1}g/dx^{n−1}(x, ξ)|_{x=ξ⁻}^{x=ξ⁺} = 1. Hence (iii) holds.

Example 7.3.2 Solve

u″(x) = f(x), 0 < x < 1,
u′(0) = a, u(1) = b.
Let u₁(x), u₂(x) be the solutions of

u₁″(x) = f(x), 0 < x < 1, u₁′(0) = 0, u₁(1) = 0,  (7.18)

and

u₂″(x) = 0, 0 < x < 1, u₂′(0) = a, u₂(1) = b,

respectively. Then u(x) = u₁(x) + u₂(x), where u₁(x) = ∫_0^1 g(x, ξ)f(ξ) dξ and u₂(x) = ax + (b − a).

We find the Green's function g(x, ξ) of (7.18), which satisfies:

(i) d²g/dx²(x, ξ) = 0 for x ≠ ξ, with dg/dx(0, ξ) = 0, g(1, ξ) = 0;

(ii) g(x, ξ) is continuous at x = ξ;

(iii) dg/dx(x, ξ)|_{x=ξ⁻}^{x=ξ⁺} = 1.
From (i),

g(x, ξ) = Ax + B, 0 ≤ x < ξ ≤ 1,
g(x, ξ) = Cx + D, 0 < ξ < x ≤ 1.

Here dg/dx(0, ξ) = 0 implies A = 0, and g(1, ξ) = 0 implies D = −C. From (ii) we have Aξ + B = Cξ + D, hence

g(x, ξ) = C(ξ − 1), 0 ≤ x ≤ ξ ≤ 1,
g(x, ξ) = C(x − 1), 0 < ξ ≤ x ≤ 1.

From the jump condition (iii), dg/dx(ξ⁺, ξ) − dg/dx(ξ⁻, ξ) = 1, we have C = 1 and

g(x, ξ) = ξ − 1, 0 ≤ x < ξ ≤ 1,
g(x, ξ) = x − 1, 0 ≤ ξ < x ≤ 1.
Next we show that u₁(x) = ∫_0^1 g(x, ξ)f(ξ) dξ satisfies (7.18). Splitting the integral at ξ = x and differentiating,

u₁′(x) = d/dx [∫_0^{x⁻} g(x, ξ)f(ξ) dξ + ∫_{x⁺}^1 g(x, ξ)f(ξ) dξ]
       = g(x, x⁻)f(x⁻) + ∫_0^{x⁻} (dg/dx)(x, ξ)f(ξ) dξ − g(x, x⁺)f(x⁺) + ∫_{x⁺}^1 (dg/dx)(x, ξ)f(ξ) dξ
       = ∫_0^1 (dg/dx)(x, ξ)f(ξ) dξ,

since g(x, ξ) is continuous at ξ = x. In particular,

u₁′(0) = ∫_0^1 (dg/dx)(0, ξ)f(ξ) dξ = 0, u₁(1) = ∫_0^1 g(1, ξ)f(ξ) dξ = 0.

Differentiating once more,

u₁″(x) = (dg/dx)(x, x⁻)f(x⁻) + ∫_0^{x⁻} (d²g/dx²)(x, ξ)f(ξ) dξ − (dg/dx)(x, x⁺)f(x⁺) + ∫_{x⁺}^1 (d²g/dx²)(x, ξ)f(ξ) dξ
       = f(x) [(dg/dx)(x, x⁻) − (dg/dx)(x, x⁺)]
       = f(x) [(dg/dx)(ξ⁺, ξ) − (dg/dx)(ξ⁻, ξ)]|_{ξ=x} = f(x),

using d²g/dx² = 0 for x ≠ ξ and the jump condition (iii).
Example 7.3.3 Consider

Lu = a₂(x)u″(x) + a₁(x)u′(x) + a₀(x)u(x) = f(x), a < x < b,
α₁u(a) + β₁u′(a) = 0,
α₂u(b) + β₂u′(b) = 0.

Let u₁(x) be a solution of Lu = 0, a < x < b, with α₁u(a) + β₁u′(a) = 0, and u₂(x) a solution of Lu = 0, a < x < b, with α₂u(b) + β₂u′(b) = 0. Let

g(x, ξ) = Au₁(x)u₂(ξ), a ≤ x < ξ ≤ b,
g(x, ξ) = Au₂(x)u₁(ξ), a ≤ ξ < x ≤ b.

Obviously g(x, ξ) is continuous at x = ξ. From the jump condition dg/dx|_{x=ξ⁻}^{x=ξ⁺} = 1/a₂(ξ) we obtain

A u₂′(ξ)u₁(ξ) − A u₁′(ξ)u₂(ξ) = 1/a₂(ξ), i.e., A = 1/(a₂(ξ)W(u₁, u₂)(ξ)),

where W(u₁, u₂)(ξ) is the Wronskian of u₁, u₂. It is an exercise to verify that u(x) = ∫_a^b g(x, ξ)f(ξ) dξ satisfies Lu = f.
In the following we employ the Green's function to convert the nonlinear two-point boundary value problem

y″ = f(x, y), a < x < b, y(a) = 0, y(b) = 0,  (7.19)

into the integral equation

y(x) = ∫_a^b g(x, ξ)f(ξ, y(ξ)) dξ,  (7.20)

where

g(x, ξ) = (x − a)(ξ − b)/(b − a), x < ξ,
g(x, ξ) = (x − b)(ξ − a)/(b − a), ξ < x.  (7.21)

To show existence and uniqueness of the solution of (7.20), we apply the contraction mapping principle. Let C[a, b] = {ϕ : [a, b] → R continuous} with the supremum norm ‖ϕ‖_∞ = sup_{a≤x≤b} |ϕ(x)|. Define T : C[a, b] → C[a, b] by

(Tϕ)(x) = ∫_a^b g(x, ξ)f(ξ, ϕ(ξ)) dξ.
Theorem 7.3.1 Assume f(x, y) satisfies a Lipschitz condition in y with Lipschitz constant K, i.e., |f(x, y₁) − f(x, y₂)| ≤ K|y₁ − y₂| for each x. If K(b − a)²/4 < 1, then there exists a unique solution of (7.19).

Proof.

|Tϕ₁(x) − Tϕ₂(x)| = |∫_a^b g(x, ξ)(f(ξ, ϕ₁(ξ)) − f(ξ, ϕ₂(ξ))) dξ|
≤ K (∫_a^b |g(x, ξ)| dξ) ‖ϕ₁ − ϕ₂‖_∞
≤ K [max_{a≤x≤b} ∫_a^b |g(x, ξ)| dξ] ‖ϕ₁ − ϕ₂‖_∞.

It is easy to verify directly from (7.21) that ∫_a^b |g(x, ξ)| dξ = (x − a)(b − x)/2 ≤ (b − a)²/4. Then

‖Tϕ₁ − Tϕ₂‖_∞ ≤ K (b − a)²/4 ‖ϕ₁ − ϕ₂‖_∞.

Since K(b − a)²/4 < 1, the contraction mapping principle gives a unique fixed point ϕ of T. Thus there is a unique solution of (7.19).
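The contraction of Theorem 7.3.1 can be watched converge numerically. The sketch below (problem and grid chosen for illustration only) iterates T for y″ = sin y + x on (0, 1) with y(0) = y(1) = 0; here K = 1, so K(b − a)²/4 = 1/4 < 1, and successive iterates should be Cauchy at a geometric rate.

```python
import numpy as np

# Picard iteration of (Tφ)(x) = ∫_0^1 g(x,ξ) f(ξ, φ(ξ)) dξ for the
# sample problem y'' = sin(y) + x, y(0) = y(1) = 0 (Lipschitz K = 1).
m = 400
x = (np.arange(m) + 0.5) / m                   # midpoint grid on (0, 1)
G = np.where(x[:, None] <= x[None, :],         # kernel (7.21) with a=0, b=1
             x[:, None] * (x[None, :] - 1.0),
             x[None, :] * (x[:, None] - 1.0))

def T(phi):
    return G @ (np.sin(phi) + x) / m

phi = np.zeros(m)
diffs = []
for _ in range(40):
    new = T(phi)
    diffs.append(np.max(np.abs(new - phi)))    # sup-norm of successive change
    phi = new
print(diffs[0], diffs[-1])  # differences decay geometrically to 0
```

The observed contraction factor is even better than 1/4, because max_x ∫|g| dξ is (b − a)²/8 for this kernel.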
7.4
Fredholm Alternative for 2nd order linear equations
Let us recall the Fredholm alternative in linear algebra. Let A be an m × n real matrix and b ∈ R^m, and consider the solutions of the linear system Ax = b. We have two alternatives:

(i) If Ax = 0 has only the trivial solution, i.e., the null space N(A) = {0}, then the solution of Ax = b is unique.

(ii) If Ax = 0 has nontrivial solutions, then Aᵀv = 0 has nontrivial solutions, and Ax = b is solvable if and only if ⟨b, v⟩ = 0 for all v ∈ N(Aᵀ).

Let Lu = a₂(x)u″ + a₁(x)u′ + a₀(x)u and consider

Lu = f(x), a < x < b,
B₁(u) = 0, B₂(u) = 0.  (7.22)

Let D = domain of L = {u ∈ C² : B₁(u) = 0, B₂(u) = 0}. We shall find L*v, the adjoint of L, together with D*, the domain of L*, and the adjoint boundary conditions B₁*(v), B₂*(v).
Compute

⟨Lu, v⟩ = ∫_a^b (Lu)(x)v(x) dx
        = ∫_a^b [a₂(x)u″v + a₁(x)u′v + a₀(x)uv] dx
        = (a₂v)u′|_a^b − u(a₂v)′|_a^b + u(a₁v)|_a^b + ∫_a^b [(a₂v)″ − (a₁v)′ + a₀v] u dx
        = J(u, v)|_a^b + ⟨u, L*v⟩,

where

L*v = (a₂v)″ − (a₁v)′ + a₀v,
J(u, v)|_a^b = (a₂v)u′|_a^b − u(a₂v)′|_a^b + u(a₁v)|_a^b.

Definition 7.4.1 We say L is self-adjoint if L = L* and D = D*. We define

D* = domain of L* = {v ∈ C² : J(u, v)|_a^b = 0 for all u ∈ D}
   = {v ∈ C² : B₁*(v) = 0, B₂*(v) = 0}.

Then for u ∈ D, v ∈ D* we have
(7.24)
7.4. FREDHOLM ALTERNATIVE FOR 2ND ORDER LINEAR EQUATIONS159 Example 7.4.1 If Lu = a2 (x)u00 + a1 (x)u0 + a0 (x)u and B1 (u) = u(a) = 0 , B2 (u) = u0 (b) = 0, then J(u, v) |ba
= =
a2 (vu0 − uv 0 ) + (a1 − a02 )uv |ba a2 (b)(−u(b)v 0 (b)) + (a1 (b) − a02 (b))u(b)v(b) − a2 (a)v(a)u0 (a) = 0
for all u ∈Domain L Then B1∗ (v) = v(a) = 0, B2∗ (v) = 0 if and only if v(b)(a1 (b) − a02 (b)) = a2 (b)v 0 (b) n
Remark 7.4.1 If Lu = a_n(x) dⁿu/dxⁿ + ··· + a₀(x)u, then L*v = Σ_{k=0}^n (−1)^k d^k/dx^k (a_k(x)v).

Now we are in a position to state the Fredholm alternative for second order linear equations.

Theorem 7.4.1 (Fredholm Alternative) Consider
(1) Lu = f(x), B₁(u) = 0, B₂(u) = 0, a < x < b;
(2) Lu = 0, B₁(u) = 0, B₂(u) = 0;
(3) L*v = 0, B₁*(v) = 0, B₂*(v) = 0.
Then there are two alternatives:
(i) If (2) has only the trivial solution, then (1) has a unique solution.
(ii) If (2) has nontrivial solutions, then (3) has nontrivial solutions. Furthermore, (1) is solvable if and only if ⟨f, v⟩ = 0 for every solution v of (3).

Proof. (i) is trivial. For part (ii) we only show the necessity. Let Lu = f for some u. From (7.24) we have ⟨f, v⟩ = ⟨Lu, v⟩ = ⟨u, L*v⟩ = 0 for all v satisfying L*v = 0. For the sufficiency the reader may consult [St], Chapter 3, p. 254.

Remark 7.4.2 For a second order linear operator L we may assume L is self-adjoint, i.e., Lu = (p(x)u′)′ + q(x)u. From Theorem 7.1.2 there is a sequence of eigenvalues {λ_n} with orthonormal eigenfunctions {u_n}, Lu_n = λ_n u_n. Suppose λ₀ = 0, so that Lu₀ = 0 and solvability requires ⟨f, u₀⟩ = 0. For n ≠ 0, if Lu = f then ⟨f, u_n⟩ = ⟨Lu, u_n⟩ = ⟨u, Lu_n⟩ = λ_n⟨u, u_n⟩. Hence, to solve Lu = f we define

u = u₀ + Σ_{n≠0} (1/λ_n)⟨f, u_n⟩ u_n.

Then

Lu = Lu₀ + Σ_{n≠0} (1/λ_n)⟨f, u_n⟩ Lu_n = 0 + Σ_{n≠0} ⟨f, u_n⟩ u_n = f,

since ⟨f, u₀⟩ = 0.
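The linear-algebra alternative recalled at the start of this section is easy to verify computationally. In the sketch below (a hypothetical 2 × 2 example), A is singular, v spans N(Aᵀ), and Ax = b is solvable exactly when ⟨b, v⟩ = 0.

```python
import numpy as np

# Fredholm alternative for Ax = b: solvable iff b is orthogonal to N(A^T).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                  # rank 1, so N(A^T) is nontrivial
v = np.array([2.0, -1.0])                   # spans N(A^T): A^T v = 0

b_good = np.array([1.0, 2.0])               # <b_good, v> = 0, so solvable
b_bad = np.array([1.0, 0.0])                # <b_bad, v> = 2, so not solvable

x_good, *_ = np.linalg.lstsq(A, b_good, rcond=None)
x_bad, *_ = np.linalg.lstsq(A, b_bad, rcond=None)
print(np.allclose(A @ x_good, b_good))      # True: exact solution exists
print(np.allclose(A @ x_bad, b_bad))        # False: only a least-squares fit
```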
Exercise

Exercise 7.1 (a) In the differential equation

(∗) ü + q(t)u = 0,

let q(t) be real-valued, continuous, and satisfy 0 < m ≤ q(t) ≤ M. If u = u(t) ≢ 0 is a solution with a pair of successive zeros t = t₁, t₂ (> t₁), then π/M^{1/2} ≤ t₂ − t₁ ≤ π/m^{1/2}.

(b) Let q(t) be continuous for t ≥ 0 with q(t) → 1 as t → ∞. Show that if u = u(t) ≢ 0 is a real-valued solution of (∗), then the zeros of u(t) form a sequence 0 ≤ t₁ < t₂ < ··· such that t_n − t_{n−1} → π as n → ∞.

(c) Consider the Bessel equation

(∗∗) v″ + v′/t + (1 − µ²/t²)v = 0,

where µ is a real parameter. The change of variables u = t^{1/2}v transforms (∗∗) into

u″ + (1 − α/t²)u = 0, where α = µ² − 1/4.

Show that the zeros of a real-valued solution v(t) of (∗∗) on t > 0 form a sequence t₁ < t₂ < ··· such that t_n − t_{n−1} → π as n → ∞.
References
[HA] P. Hartman, Ordinary Differential Equations.
[HH] S.-B. Hsu and S.-F. Hwang, Analysis of large deformation of a heavy cantilever, SIAM J. Math. Analysis, Vol. 19, No. 4, 1988, pp. 854-866.
[K] J. Keener, Principles of Applied Mathematics.
Chapter 8
THE INDEX THEORY AND BROUWER DEGREE

8.1

Index Theory In The Plane
Given a simple closed curve C in the x–y plane and a vector field f(x, y) = (P(x, y), Q(x, y)) such that f does not vanish at any point of C, let C be parameterized as x = x(s), y = y(s), a ≤ s ≤ b, with x(a) = x(b), y(a) = y(b). Let

θ(s) = tan⁻¹ [Q(x(s), y(s)) / P(x(s), y(s))]

be chosen as a continuous function on [a, b]. Then the index

I_f(C) := (1/2π)(θ(b) − θ(a)) = (1/2π) ∫_a^b (dθ/ds) ds

is an integer (see Fig. 8.1).

Fig. 8.1

I_f(C) = winding number = number of times the "circle" C′ winds in the counterclockwise direction.
Remark 8.1.1 (Exercise) I_f(C) is independent of the choice of parameterization.

Example 8.1.1 Let C be the unit circle and

f(x, y) = (P(x, y), Q(x, y)) = (2x² − 1, 2xy).

On C we have (P, Q) = (cos 2θ, sin 2θ), 0 ≤ θ ≤ 2π (see Fig. 8.2), so

I_f(C) = (1/2π) · 4π = 2.

Fig. 8.2

Since tan θ(s) = Q(s)/P(s), differentiating with respect to s yields

sec² θ(s) dθ/ds = (1 + tan² θ(s)) dθ/ds = [P(s) dQ/ds − Q(s) dP/ds] / P(s)²,

and we have

dθ/ds = (P dQ/ds − Q dP/ds)/(P² + Q²).

Thus we have the formula

I_f(C) = (1/2π) ∮ [−Q/(P² + Q²)] dP + [P/(P² + Q²)] dQ.
Lemma 8.1.1 (Homotopy invariance property) Let f_t, 0 ≤ t ≤ 1, be a continuous family of vector fields such that f_t does not vanish on C for all t. Then I_{f_t}(C) ≡ constant.

Proof. Obviously I_{f_t}(C) is continuous in t and integer-valued. Hence I_{f_t}(C) is a constant integer.

Theorem 8.1.1 Let C be a smooth Jordan curve such that there is no equilibrium of f inside C. Then I_f(C) = 0.
Proof. Let Ω be the region enclosed by C. Then, applying Green's theorem in the (P, Q)-plane (possible since (P, Q) ≠ (0, 0) on Ω̄),

I_f(C) = (1/2π) ∮ [−Q/(P² + Q²)] dP + [P/(P² + Q²)] dQ
       = (1/2π) ∬_D [∂/∂P (P/(P² + Q²)) + ∂/∂Q (Q/(P² + Q²))] dP dQ = 0.
Theorem 8.1.2 Let C₁, C₂ be two Jordan curves with C₂ enclosed by C₁, and suppose the vector field f does not vanish on the region between C₁ and C₂. Then I_f(C₁) = I_f(C₂).

Proof. Consider the Jordan curve Γ : ABAA′B′A′A (see Fig. 8.3).

Fig. 8.3

Then

0 = I_f(Γ) = (1/2π) ∮_Γ dθ = (1/2π) [∫_{C₁} dθ + ∫_{AA′} dθ − ∫_{C₂} dθ + ∫_{A′A} dθ].

Since the integrals over AA′ and A′A cancel, (1/2π) ∫_{C₁} dθ = (1/2π) ∫_{C₂} dθ.
Theorem 8.1.3 Let C be a continuously differentiable simple closed curve with nonzero tangent vector field f (i.e., C is a periodic orbit of x′ = f(x)). Then I_f(C) = 1.

Proof. Without loss of generality we assume that C(a) = C(b) = Q, where Q is the point of C with minimum y-coordinate. Define a continuous vector field V on the triangular region {a ≤ s ≤ t ≤ b} of the (s, t)-plane (see Fig. 8.4):

V(s, t) = (C(t) − C(s))/(t − s), s < t,
V(t, t) = f(C(t)).

Fig. 8.4

Obviously V(s, t) is a continuous function of (s, t), and V(s, t) ≠ (0, 0) on the triangular region. Then by Theorem 8.1.1, I_V(Γ) = 0, where Γ is the boundary of the triangular region. Hence, computing I_V(Γ) in terms of angles,

0 = ∮_Γ dθ = 2π · I_f(C) + (−π) + (−π).
Here we note that from (b, b) to (a, b) (s runs from b to a; see Fig. 8.5),

V(s, b) = (C(b) − C(s))/(b − s) = −(C(s) − C(b))/(b − s),

Fig. 8.5

and from (a, b) to (a, a) (t runs from b to a; see Fig. 8.6),

V(a, t) = (C(t) − C(a))/(t − a).

Fig. 8.6
Hence I_f(C) = 1.

Corollary 8.1.4 Suppose the autonomous system x′ = f(x), x ∈ R², has a periodic orbit C. Then there exists an equilibrium of f inside C.

Proof. If there were no equilibrium of f inside C, then I_f(C) = 0 by Theorem 8.1.1, contradicting I_f(C) = 1 from Theorem 8.1.3.

Definition 8.1.1 Let z₀ be an isolated zero of the vector field f. Assume f ≠ 0 on a simple closed curve C enclosing z₀ with dist(z₀, C) sufficiently small. We define the index of f at z₀ to be

I_f(z₀) = I_f(C).  (8.1)

Remark 8.1.2 From Theorem 8.1.2, the definition (8.1) is well-defined.

In the following we compute the index of a saddle point, center, spiral, or node of a nonlinear vector field. As a first step we compute these indices for linear vector fields; then we treat nonlinear vector fields through linearization and homotopy invariance. Consider the linear vector field

(x, y)′ = (ax + by, cx + dy) = (P(x, y), Q(x, y)).  (8.2)

Assume (0, 0) is a nondegenerate critical point of (8.2), i.e., the determinant ∆ = ad − bc ≠ 0. The eigenvalues λ of the coefficient matrix satisfy

λ² − (a + d)λ + (ad − bc) = 0.

If ∆ > 0 then (0, 0) is a source, a sink, or a center. On the other hand, if ∆ < 0 then (0, 0) is a saddle point. Let C = {(x, y) : x = cos s, y = sin s, 0 ≤ s ≤ 2π} be the unit circle. Then the index for the linear vector field f = (P, Q) is

I_f(C) = (1/2π) ∫_0^{2π} (dθ/ds) ds, where θ(s) = tan⁻¹ [(cx + dy)/(ax + by)].

Since dx/ds = −y and dy/ds = x, we have

dθ/ds = [(ax + by)(c dx/ds + d dy/ds) − (a dx/ds + b dy/ds)(cx + dy)] / [(ax + by)² + (cx + dy)²]
      = (ad − bc)/[(ax + by)² + (cx + dy)²].
Hence we have

I_f(C) = [(ad − bc)/2π] ∫_0^{2π} dt / [(a cos t + b sin t)² + (c cos t + d sin t)²].  (8.3)

We note that the right-hand side of (8.3) is an integer and is continuous in a, b, c, d for ad − bc ≠ 0. If ad − bc > 0, we have two cases:

(i) ad > 0. Let b, c → 0 and d → a; then I_f(C) = (1/2π) ∫_0^{2π} dθ = 1 by the homotopy invariance property.
0 ≤ s ≤ 1.
Then h0 = f and h1 = g. We claim that hs 6= 0 at every point of C for all 0 < s < 1. If not, then for some 0 < s < 1, then exists a point p on C such that hs (p) = 0 i.e., s f (p) = − 1−s g(p) which contradicts to the assumption. Hence the claim holds and by homotopy invariance property IC (f ) = IC (ht ) = IC (g). Theorem 8.1.6 Let ad − bc 6= 0 and µ ¶ µ ¶µ ¶ µ ¶ f1 (x, y) a b x g1 (x, y) f (x, y) = = + y g2 (x, y) µ f2 (x, y) ¶µ ¶ c d a b x = + g(x, y) c d y
(8.4)
p p x2 + y 2 ), g2 (x, y) = 0( x2 + y 2 )as (x,µy) →¶ (0,µ 0) then ¶ Ifµ (0) =¶ x a b x Iv (0)where v is the linear vector field v(x, y) = Dx f (0, 0) = . y c d y g1 (x, y) = 0(
Proof. Let Cr = {(x, y) : x = r cos θ, y = r sin θ} be a circle with center (0, 0) and radius r which is sufficiently small. We want to show the vector fields f and v never point in opposite direction on Cr . Then from Theorem 5, it follows that If (Cr ) = Iv (Cr ) and hence If (0) = Iv (0). Suppose on the contrary, the vector fields f and vpoint in opposite direction at some point of Cr . Then f (x0 , y0 ) + sv(x0 , y0 ) = 0 for some s > 0 and for some (x0 , y0 ) ∈ Cr . From (8.4), we have (1 + s)v(x0 , y0 ) = −g(x0 , y0 ) and hence (1 + s)2 k v(x0 , y0 ) k2 =k g(x0 , y0 ) k2 . (8.5)
Since ad − bc 6= 0, v vanishes only at (0, 0). Let m =limk v k. Then m > 0. Since r=1
à v then
1 r
p
x x2
+
y2
!
,p
y x2
+
y2
1
=p
x2
+ y2
v(x, y)
k v(x, y) k≥ m for (x, y) ∈ Cr . From (8.5) we have m2 (1 + s)2 r2 ≤k g(x0 , y0 ) k2
or
k g(x0 , y0 ) k2 → 0 as r → 0 r2 This is a contradiction and we complete the proof. 0 < m2 < m2 (1 + s)2 ≤
Theorem 8.1.7
(i) The index of a sink, source, or center is +1.
(ii) The index of a saddle is −1.
(iii) The index of a closed orbit is +1.
(iv) The index of a closed curve not containing any fixed point is 0.
(v) The index of a closed curve equals the sum of the indices of the fixed points within it.

Corollary 8.1.10 Inside any closed orbit Γ there exists at least one fixed point. If there is only one, it must be a sink, a source, or a center. If all fixed points within Γ are hyperbolic, there must be an odd number 2n + 1 of them, of which n are saddles and n + 1 are sinks or sources.

Now we shall prove two classical theorems using index theory. Let z = x + iy and f(z) = zⁿ, n a positive integer. Then it is easy to show that I_f(0) = n.

Theorem 8.1.11 (Fundamental Theorem of Algebra) Let p(z) = a_n zⁿ + ··· + a₀, a_n ≠ 0, n ≥ 1, be a polynomial in the complex variable z = x + iy with a_i ∈ C, i = 0, ···, n. Then p(z) = 0 has at least one zero.

Proof. We may assume a_n = 1. Define a homotopy of vector fields

f_t(z) = zⁿ + t(a_{n−1}z^{n−1} + ··· + a₀).

Since

|a_{n−1}z^{n−1} + ··· + a₀| / |zⁿ| → 0 as |z| → ∞,

there exists ρ > 0 such that f_t(z) ≠ 0 for |z| ≥ ρ and 0 ≤ t ≤ 1. Let C = {z : |z| = ρ}, f₀(z) = zⁿ, f₁(z) = p(z). Then

n = I_{f₀}(C) = I_{f₁}(C) = I_p(C) ≠ 0.

If p(z) had no zeros inside C, then I_p(C) = 0. This is a contradiction. Hence p(z) has at least one zero inside C.
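The homotopy argument can be illustrated numerically: on a circle of large radius ρ the winding number of p(z) already equals n. A sketch with an arbitrary sample cubic (the coefficients are hypothetical, chosen only for illustration):

```python
import numpy as np

# Winding number of p(z) = z^3 - 2z + 7 on |z| = 10 equals deg p = 3,
# since the leading term z^3 dominates on a large circle.
rho = 10.0
s = np.linspace(0.0, 2.0 * np.pi, 200001)
z = rho * np.exp(1j * s)
p = z**3 - 2.0 * z + 7.0
theta = np.unwrap(np.angle(p))          # continuous argument of p along the circle
winding = (theta[-1] - theta[0]) / (2.0 * np.pi)
print(round(winding))  # 3
```

Shrinking ρ, the winding number can only change when the circle crosses a zero of p, which is exactly how the index proof locates one.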
Theorem 8.1.12 (Brouwer Fixed Point Theorem for R²) Let D = {z ∈ R² : |z| ≤ 1} and let g : D → D be continuous. Then g has a fixed point.

Proof. Let C = ∂D = {z : |z| = 1} be the unit circle in R². Suppose g(z) ≠ z for all z ∈ D, i.e., g has no fixed point in D. Define the continuous family of vector fields

f_t(z) = tg(z) − z.

We verify that f_t(z) ≠ 0 for all z ∈ C and 0 ≤ t ≤ 1. If not, there are some 0 < t < 1 and some z ∈ C such that f_t(z) = 0, i.e., tg(z) = z; then t = |z|/|g(z)| ≥ 1 since |z| = 1 and |g(z)| ≤ 1, a contradiction. (For t = 1, f₁(z) = g(z) − z ≠ 0 by assumption; for t = 0, f₀(z) = −z ≠ 0 on C.) Hence

I_{f₁}(C) = I_{f₀}(C).

Claim: I_{f₀}(C) = 1. Indeed f₀(z) = −z corresponds to the linear vector field with matrix diag(−1, −1), which has (0, 0) as a sink; thus I_{f₀}(C) = 1, and hence I_{f₁}(C) = 1. But f₁(z) = g(z) − z ≠ 0 on D, which implies I_{f₁}(C) = 0. This is a contradiction.
8.2
Brief introduction to Brouwer degree in Rn
Brouwer degree d(f, p, D) is a useful tool to study the existence of solutions of the nonlinear equation f(x) = p, x ∈ D ⊆ Rⁿ, where D is a bounded domain with boundary ∂D = C and f(x) ≠ p for all x ∈ C. To motivate the definition of d(f, p, D), we consider the special case n = 2, p = 0. From Theorem 8.1.7 (v), (i), (ii) we have

I_f(C) = Σ_{i=1}^N I_f(z_i) = Σ_{i=1}^N I_{v_i}(0) = Σ_{i=1}^N sgn(det J_f(z_i)),

where J_f(x) = Df(x) is the Jacobian matrix and {z_i} are the zeros of f inside C.

We now define the Brouwer degree d(f, p, D) as follows. Let f : D ⊆ Rⁿ → Rⁿ, p ∈ Rⁿ, and f(x) ≠ p for x ∈ ∂D.

Step 1: Let f ∈ C¹(D̄) and assume det J_f(x) ≠ 0 for all x ∈ f⁻¹(p). We define

d(f, p, D) = Σ_{x ∈ f⁻¹(p)} sgn(det J_f(x)).

From the assumption det J_f(x) ≠ 0 for all x ∈ f⁻¹(p) and the Inverse Function Theorem, it follows that f⁻¹(p) is a discrete subset of the compact set D̄; thus f⁻¹(p) is a finite set.

Step 2: Let f ∈ C¹(D̄) and assume there exists x ∈ f⁻¹(p) with det J_f(x) = 0. From Sard's Theorem (see Theorem 8.2.1) there exists a sequence {p_m}, p_m → p as m → ∞, such that det J_f(x) ≠ 0 for all x ∈ f⁻¹(p_m). We then define d(f, p, D) = lim_{m→∞} d(f, p_m, D). We note that it can be shown that this definition is independent of the choice of {p_m}.
Step 3: Let f ∈ C(D̄). Since C¹(D̄) is dense in C(D̄), there exists {f_m} ⊆ C¹(D̄) such that f_m → f uniformly on D̄. We then define d(f, p, D) = lim_{m→∞} d(f_m, p, D). We note that it can be shown that this definition is independent of the choice of {f_m}.

In the following we state Sard's Theorem without proof and explain its use; the proof can be found in [BB].

Theorem 8.2.1 (Special case of Sard's Theorem) Let f : D ⊆ Rⁿ → Rⁿ be C¹ and B = {x ∈ D : det J_f(x) = 0}. Then f(B) has empty interior.

Remark 8.2.1 If p ∈ f(B), then p = f(x) for some x ∈ B; hence x ∈ f⁻¹(p) and det J_f(x) = 0. Since p is not an interior point of f(B), there exists {p_m}, p_m → p as m → ∞, such that p_m ∉ f(B). Then for all x ∈ f⁻¹(p_m) we have det J_f(x) ≠ 0.

Before we state Sard's Theorem, we introduce the following definitions.

Definition 8.2.2 Let f : D ⊆ Rⁿ → R^m be C¹, m ≤ n. We say x ∈ Rⁿ is a regular point if Rank J_f(x) = m (i.e., the m × n matrix J_f(x) is of full rank), and a critical point otherwise. We say p ∈ R^m is a regular value if every x ∈ f⁻¹(p) is a regular point, and a critical value otherwise.

Theorem 8.2.3 (Sard's Theorem) Let f : D ⊆ Rⁿ → R^m be C¹ and C = {x ∈ Rⁿ : Rank J_f(x) < m}, the set of critical points. Then f(C) is of measure zero.

Remark 8.2.2 Sard's Theorem implies that almost every p ∈ R^m is a regular value. Let E = f(C); then E is of measure zero. For p ∈ R^m \ E we have f⁻¹(p) ∩ C = ∅, i.e., each x ∈ f⁻¹(p) is a regular point; hence p is a regular value.

Remark 8.2.3 If m = n then C = {x ∈ Rⁿ : det J_f(x) = 0}. By Sard's Theorem, f(C) is of measure zero and thus f(C) has empty interior. This completes the proof of Theorem 8.2.1.

Properties of d(f, p, D):

(i) Homotopy invariance. Let H : D̄ × [0, 1] → Rⁿ, D ⊆ Rⁿ, be continuous in x and t with H(x, t) ≠ p for all x ∈ ∂D, 0 ≤ t ≤ 1. Then deg(H(·, t), p, D) = constant.

Proof. Let h(t) = deg(H(·, t), p, D). Then h(t) is continuous in t and integer-valued. Hence h(t) is constant for all t.
Remark 8.2.4 Consider the linear homotopy H(x, t) = tf(x) + (1 − t)g(x). Then H(x, 0) = g(x), H(x, 1) = f(x), and deg(f, p, D) = deg(g, p, D) provided tf(x) + (1 − t)g(x) ≠ p for all x ∈ ∂D and 0 ≤ t ≤ 1.

(ii) d(f, p, D) is uniquely determined by f|_∂D.

Proof. Let f̃(x) satisfy f̃(x) = f(x) for x ∈ ∂D. Define H(x, t) = tf̃(x) + (1 − t)f(x), x ∈ D̄, 0 ≤ t ≤ 1. Then for x ∈ ∂D, H(x, t) = f(x) ≠ p. By (i) we have
deg(f, p, D) = deg(f̃, p, D).

(iii) d(f, p, D) is a continuous function of f and p.

Proof. (iii) follows directly from the definition.

(iv) (Poincaré–Bohl) Let f(x) ≠ p and g(x) ≠ p for all x ∈ ∂D. If f(x) − p and g(x) − p never point in opposite directions at any x ∈ ∂D, then deg(f, p, D) = deg(g, p, D).

Proof. Define the homotopy

H(x, t) = t(f(x) − p) + (1 − t)(g(x) − p).

We claim that H(x, t) ≠ 0 for all x ∈ ∂D, 0 ≤ t ≤ 1. If not, there exist t, 0 < t < 1, and x ∈ ∂D such that H(x, t) = 0, and it follows that f(x) − p = −((1 − t)/t)(g(x) − p). This contradicts the assumption that f(x) − p and g(x) − p never point in opposite directions on ∂D. By (i), deg(f, p, D) = deg(g, p, D).

(v) If d(f, p, D) ≠ 0 then there exists x ∈ D such that f(x) = p.

Corollary 8.2.4 If f(x)·x < 0 for all x ∈ ∂D, or f(x)·x > 0 for all x ∈ ∂D, then there exists x₀ ∈ D such that f(x₀) = 0, i.e., there exists an equilibrium of x′ = f(x).

Proof. Let g(x) = x and p = 0. Then deg(g, 0, D) = Σ_{x∈g⁻¹(0)} sgn(det J_g(x)) = 1. The assumption f(x)·x > 0 for all x ∈ ∂D says that f(x) and g(x) never point in opposite directions on ∂D. From (iv), deg(f, 0, D) = deg(g, 0, D) = 1 ≠ 0, and by (v) there exists x₀ ∈ D such that f(x₀) = 0. If f(x)·x < 0 for x ∈ ∂D, we replace f(x) by −f(x).

Next we prove the Brouwer Fixed Point Theorem: let Dⁿ = {x ∈ Rⁿ : |x| ≤ 1} and let f : Dⁿ → Dⁿ be continuous. Then f has a fixed point, i.e., there exists x ∈ Dⁿ such that f(x) = x.

Proof. We shall show that f(x) − x = 0 has a solution in Dⁿ. Consider the homotopy

H(x, t) = x − tf(x), x ∈ Dⁿ, 0 ≤ t ≤ 1.

Then H(x, 0) = x = g(x) and H(x, 1) = x − f(x). We claim that H(x, t) ≠ 0 for all x ∈ ∂Dⁿ, 0 ≤ t ≤ 1. Obviously H(x, 0) = g(x) ≠ 0 on ∂Dⁿ. If H(x, t) = 0 for some 0 < t < 1 and some x ∈ ∂Dⁿ, then x = tf(x); since 1 = |x| = t|f(x)| ≤ t, we obtain a contradiction. Hence

deg(x − f(x), 0, Dⁿ) = deg(g, 0, Dⁿ) = 1 ≠ 0,

and it follows that there exists x ∈ Dⁿ such that x = f(x).
(vi) (Domain decomposition) If {D_i}_{i=1}^N is a finite collection of disjoint open sets in D and f(x) ≠ p for all x ∈ ∂(D − ∪_{i=1}^N D_i), then d(f, p, D) = Σ_{i=1}^N d(f, p, D_i).

Proof. Assume f ∈ C¹(D) and det Jf(x) ≠ 0 for all x ∈ f^{-1}(p). Then

d(f, p, D) = Σ_{x∈f^{-1}(p)} sgn(det Jf(x)) = Σ_{i=1}^N Σ_{x∈D_i ∩ f^{-1}(p)} sgn(det Jf(x)) = Σ_{i=1}^N d(f, p, D_i).
Next we introduce the notion of index.

Definition 8.2.5 Assume f(x) = p has an isolated zero x0. Then the index of f at x0 is defined as

i(x0, f(x) − p) = d(f(x) − p, 0, Σ_ε) = d(f, p, Σ_ε),

where Σ_ε is the ball B(x0, ε), 0 < ε ≪ 1, Σ_ε ⊂ D. As in Theorem 8.1.7, from (vi) we have the following result.

Theorem 8.2.4 d(f, p, D) = Σ_{x_j ∈ f^{-1}(p)} i(x_j, f(x) − p).
Remark 8.2.5 In infinite dimensional spaces we have the notion of the Leray–Schauder degree and the Schauder fixed point theorem, which generalizes the Brouwer fixed point theorem. The interested reader may consult [BB].
Exercise

Exercise 8.1 Prove that the system of differential equations

dx/dt = x + P(x, y)
dy/dt = y + Q(x, y),

where P(x, y) and Q(x, y) are bounded functions on the entire plane, has at least one equilibrium.

Exercise 8.2 Let f(x) = ax + b be a linear mapping of R^n into R^n, where a is an n × n matrix. Prove, by definition (ii) of degree, that if f(x0) = p then

d(f, p, Σ_ε) = 1 if det(a) > 0, and d(f, p, Σ_ε) = −1 if det(a) < 0,

where Σ_ε is a ball of any radius ε centered at x0 in R^n. What happens if det(a) = 0?

Exercise 8.3 Let F(x1, ···, xn) = Σ a_{ij} x_i x_j be a real nonsingular quadratic form with a_{ij} = a_{ji}. Prove that i(grad F, 0) = (−1)^λ where λ is the number of negative eigenvalues of the self-adjoint matrix (a_{ij}).

Exercise 8.4 Let f be a continuous mapping of a bounded domain D of R^n into R^n with the property that for all x ∈ ∂D, f(x) never points in the direction q (≠ 0), that is, f(x) ≠ kq for all real nonzero k. Then d(f, 0, D) = 0.

Exercise 8.5 Let f(z) = u(x, y) + iv(x, y) be a complex analytic function defined on a bounded domain D and its closure D̄ in R², where z = x + iy and i² = −1. Suppose f(z) ≠ p for z ∈ ∂D. Prove d(f, p, D) ≥ 0, where f denotes the mapping (x, y) → (u, v).
References

[CL] E. Coddington and N. Levinson: Theory of Ordinary Differential Equations
[BB] M. Berger and M. Berger: Perspectives in Nonlinearity. An Introduction to Nonlinear Analysis
Chapter 9
INTRODUCTION TO REGULAR AND SINGULAR PERTURBATION METHODS

9.1 Regular Perturbation Methods
Suppose we have a problem P(ε) with small parameter 0 < ε ≪ 1 and solution x(ε), ε ≥ 0. If lim_{ε→0} x(ε) = x(0), then we say the problem P(ε) is regular. In the following we give several examples to show how to solve the regular perturbation problem P(ε) by perturbation methods. Then we explain how to verify that a problem is regular by the Implicit Function Theorem in Banach spaces.

Example 9.1.1 Solve the quadratic equation

P(ε) : x² − x + ε = 0, 0 < ε ≪ 1.   (9.1)
When ε = 0, the problem P(0) has two solutions, x = 0 and x = 1. Let

x(ε) = Σ_{n=0}^∞ a_n ε^n = a0 + a1 ε + a2 ε² + · · ·   (9.2)

Substituting (9.2) into (9.1) gives

x²(ε) − x(ε) + ε = 0   (9.3)

or

(a0 + a1 ε + a2 ε² + · · ·)² − (a0 + a1 ε + a2 ε² + · · ·) + ε = 0.   (9.4)

The next step is to find the coefficients a1, a2, a3, ···, a_n, ··· by comparing the coefficients of ε^n in (9.4).
O(1): Setting ε = 0 in (9.3) or (9.4), we obtain x²(0) − x(0) = 0, or

a0² − a0 = 0, a0 = 0 or 1.

O(ε): Differentiating (9.3) with respect to ε yields

2x(ε) dx/dε (ε) − dx/dε (ε) + 1 = 0.   (9.5)

From (9.2), x(0) = a0 and dx/dε(0) = a1; if we set ε = 0 in (9.5), then

2a0 a1 − a1 + 1 = 0, or a1 = −1/(2a0 − 1).

Thus a1 = 1 if a0 = 0, and a1 = −1 if a0 = 1.

O(ε²): Differentiating (9.5) with respect to ε yields

2x(ε) d²x/dε² (ε) + 2(dx/dε)² − d²x/dε² = 0.   (9.6)

From (9.2), d²x/dε²(0) = 2a2; if we set ε = 0 in (9.6), then

4a0 a2 + 2a1² − 2a2 = 0, or a2 = −a1²/(2a0 − 1).

Thus a2 = 1 if a0 = 0, a1 = 1, and a2 = −1 if a0 = 1, a1 = −1.

For O(ε^n), n ≥ 3, we continue this process to find the coefficients a_n. We obtain two solutions of P(ε), namely

x1(ε) = ε + ε² + 2ε³ + · · ·
x2(ε) = 1 − ε − ε² − 2ε³ + · · ·

Remark 9.1.1 From the above we observe that

(2a0 − 1)a_k = f_k(a0, a1, ···, a_{k−1}), k ≥ 1.

To determine a_k we need L = d/dx (x² − x)|_{x=a0} = 2a0 − 1 to be an invertible linear operator (see Theorem 9.1.5).

Example 9.1.2 Consider the two point boundary value problem

P(ε) : u″ + εu² = 0, 0 < x < 1; u(0) = 1, u(1) = 1, 0 < ε ≪ 1.

Let

u(ε, x) = Σ_{n=0}^∞ ε^n u_n(x) = u0(x) + εu1(x) + ε²u2(x) + · · ·   (9.7)
be a solution of P(ε). Substituting (9.7) into P(ε), we have

u″(ε, x) + ε(u(ε, x))² = 0, u(ε, 0) = 1, u(ε, 1) = 1,   (9.8)

or

u0″(x) + εu1″(x) + ε²u2″(x) + ε(u0(x) + εu1(x) + · · ·)² = 0,
u(ε, 0) = 1 = u0(0) + εu1(0) + ε²u2(0) + · · ·
u(ε, 1) = 1 = u0(1) + εu1(1) + ε²u2(1) + · · ·   (9.9)

In the following, we obtain u0(x), u1(x), ··· by comparing the coefficients of ε^n.

O(1): Setting ε = 0 in (9.8) we obtain

u0″(x) = 0, 0 < x < 1, u0(0) = u0(1) = 1.   (9.10)

Hence u0(x) = 1 for all x.

O(ε): Differentiating (9.8) with respect to ε yields

d/dε u″(x, ε) + (u(x, ε))² + ε(2u(x, ε) du/dε (x, ε)) = 0.   (9.11)

Setting ε = 0 in (9.11), we have

u1″(x) + (u0(x))² = 0, u1(0) = 0, u1(1) = 0.   (9.12)

From (9.10), we solve (9.12) and obtain

u1(x) = −x²/2 + x/2.   (9.13)

O(ε²): Differentiating (9.11) with respect to ε and setting ε = 0 yield

u2″(x) + 2u0(x)u1(x) = 0, u2(0) = 0, u2(1) = 0.   (9.14)

From (9.10), (9.13) we solve (9.14) and obtain u2(x) = x⁴/12 − x³/6 + x/12.

Continuing this process in step O(ε^n), n ≥ 3, we obtain

u(ε, x) = 1 + ε(−x²/2 + x/2) + ε²(x⁴/12 − x³/6 + x/12) + O(ε³).
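As a numerical sanity check (our own, not part of the text), the boundary value problem of Example 9.1.2 can be solved by a finite-difference fixed-point iteration and compared with the two-term expansion; in the sketch below, the grid size and iteration count are arbitrary choices, and the computed solution at x = 1/2 should agree with 1 + εu1 + ε²u2 to O(ε³).

```python
def solve_bvp(eps, n=200, iters=60):
    """Finite-difference fixed-point solve of u'' + eps*u^2 = 0, u(0)=u(1)=1."""
    h = 1.0 / n
    u = [1.0] * (n + 1)                   # initial guess: the O(1) solution
    for _ in range(iters):
        # Solve the linear problem v'' = -eps*u^2 with v(0)=v(1)=1 (Thomas algorithm).
        d = [-eps * u[i] ** 2 * h * h for i in range(1, n)]
        d[0] -= 1.0                        # boundary value v(0)=1 moved to RHS
        d[-1] -= 1.0                       # boundary value v(1)=1 moved to RHS
        b = [-2.0] * (n - 1)
        for i in range(1, n - 1):          # forward elimination (off-diagonals are 1)
            m = 1.0 / b[i - 1]
            b[i] -= m
            d[i] -= m * d[i - 1]
        v = [0.0] * (n - 1)
        v[-1] = d[-1] / b[-1]
        for i in range(n - 3, -1, -1):     # back substitution
            v[i] = (d[i] - v[i + 1]) / b[i]
        u = [1.0] + v + [1.0]
    return u

eps = 0.1
u = solve_bvp(eps)
x = 0.5
series = 1 + eps * (-x**2/2 + x/2) + eps**2 * (x**4/12 - x**3/6 + x/12)
print(abs(u[100] - series))   # agreement to O(eps^3)
```

The fixed-point map contracts for small ε because the source term εu² enters with a factor of order ε.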
The regular perturbation method works in Examples 9.1.1 and 9.1.2 because of the following Implicit Function Theorem. Before we state the IFT, we introduce the following definition.

Definition 9.1.3 Let X, Y be Banach spaces and B(X, Y) = {L : X → Y bounded linear operator}. We say f : X → Y is Fréchet differentiable at x0 ∈ X if there exists a bounded linear operator L ∈ B(X, Y) such that

lim_{‖u‖→0} ‖f(x0 + u) − f(x0) − Lu‖ / ‖u‖ = 0.

Remark 9.1.2 It is easy to show L is unique, and we denote L = f_x(x0).

Definition 9.1.4 We say f ∈ C¹ if f_x : X → B(X, Y) is continuous.

Theorem 9.1.5 (Implicit Function Theorem) Let X, Y, Z be Banach spaces, U open in X × Y, and f : U ⊆ X × Y → Z be continuous. Assume f is Fréchet differentiable with respect to x and f_x(x, y) is continuous in U. Suppose (x0, y0) ∈ U and f(x0, y0) = 0. Assume A = f_x(x0, y0) : X → Z is an isomorphism of X onto Z, i.e., f_x(x0, y0) is onto and has a bounded inverse. Then
(i) There exist a ball B_r(y0) ⊆ Y and a unique map u : B_r(y0) → X satisfying u(y0) = x0, f(u(y), y) = 0.
(ii) If f ∈ C¹ then u(y) ∈ C¹ and u_y(y) = −[f_x(u(y), y)]^{-1} ∘ f_y(u(y), y).

Proof. See [Keener] or [Nirenberg].

Now we want to explain why the perturbation method works in Example 9.1.2. First we decompose the problem P(ε) into

−u″ = εu², u(0) = 0, u(1) = 0,

and

−u″ = 0, u(0) = 1, u(1) = 1.

Then

u(x) = 1 + ε ∫₀¹ g(x, ξ)u²(ξ) dξ,

where g(x, ξ) is the Green's function of −u″ with zero Dirichlet boundary conditions. Define F : C[0, 1] × R → C[0, 1] by

F(u, ε) = u − 1 − ε ∫₀¹ g(x, ξ)u²(ξ) dξ.   (9.15)

Obviously F(u0, 0) = 0, where u0(x) ≡ 1. We want to solve F(u, ε) = 0 and express the solution u = u(ε) satisfying F(u(ε), ε) = 0. In order to apply the Implicit Function Theorem, we need to verify that F_u(u0, 0) : C[0, 1] → C[0, 1] is an isomorphism. We note that for f : X → Y, X, Y Banach spaces, to compute f′(x0) we may use the Gâteaux derivative (directional derivative):

(d/dt f(x0 + tv))|_{t=0} = f′(x0 + tv) · v |_{t=0} = f′(x0)v.

Now we compute F_u(u0, 0)v, v ∈ C[0, 1], from

d/dt F(u0 + tv, ε) = d/dt [u0 + tv − 1 − ε ∫₀¹ g(x, ξ)(u0(ξ) + tv(ξ))² dξ].

Setting t = 0, we have, for ε = 0, F_u(u0, 0)v = v. Obviously F_u(u0, 0) is an isomorphism. Differentiating F(u(ε), ε) = 0 with respect to ε, we have

F_u(u(ε), ε) du/dε + ∂F/∂ε (u(ε), ε) = 0.   (9.16)

Letting ε = 0 in (9.16), we get F_u(u0, 0)u1 − ∫₀¹ g(x, ξ)(u0(ξ))² dξ = 0. Thus

u1(x) = ∫₀¹ g(x, ξ)(u0(ξ))² dξ,

i.e.,

−u1″ = (u0(x))², u1(0) = 0, u1(1) = 0.
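For a concrete check (ours, not in the text): the Green's function of −u″ on (0, 1) with zero Dirichlet data is the standard g(x, ξ) = x(1 − ξ) for x ≤ ξ and ξ(1 − x) for ξ ≤ x. Numerical quadrature of u1(x) = ∫₀¹ g(x, ξ)(u0(ξ))² dξ with u0 ≡ 1 should then reproduce u1(x) = x(1 − x)/2, in agreement with (9.13).

```python
def g(x, xi):
    # standard Green's function of -u'' on (0,1) with u(0) = u(1) = 0
    return x * (1 - xi) if x <= xi else xi * (1 - x)

def u1(x, n=2000):
    # midpoint-rule quadrature of u1(x) = int_0^1 g(x, xi) * u0(xi)^2 dxi with u0 = 1
    h = 1.0 / n
    return sum(g(x, (k + 0.5) * h) for k in range(n)) * h

x = 0.3
print(u1(x), x * (1 - x) / 2)   # both approximately 0.105
```

The closed-form integral is x(1 − x)/2, which is exactly the −x²/2 + x/2 of (9.13).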
Example 9.1.7 Find a periodic solution of the forced logistic equation

u′ = au(1 + ε cos t − u).

Let the periodic solution be u(t, ε) = u0(t) + εu1(t) + ε²u2(t) + · · ·. Then

u0′(t) + εu1′(t) + ε²u2′(t) + · · · = a(u0(t) + εu1(t) + ε²u2(t) + · · ·)(1 + ε cos t − u0(t) − εu1(t) − ε²u2(t) − · · ·).   (9.17)

O(1): Setting ε = 0 in (9.17), we obtain u0′(t) = au0(t)(1 − u0(t)); we choose the carrying capacity 1 as the periodic solution, i.e., u0(t) ≡ 1.

O(ε): Comparing coefficients of ε, we obtain

u1′(t) = a[u0(t)(cos t − u1(t)) + u1(t)(1 − u0(t))],

or

u1′(t) + au1(t) = a cos t.   (9.18)

Solving (9.18) by an integrating factor, we get (e^{at}u1)′ = ae^{at} cos t and the periodic solution

u1(t) = (a/√(a² + 1)) cos(t − ϕ), tan ϕ = 1/a.

Continuing this process, we have the periodic solution

u(t) = 1 + ε (a/√(a² + 1)) cos(t − ϕ) + O(ε²).
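The O(ε) correction can be verified numerically (a check of ours): solving the linear equation u1′ + a u1 = a cos t directly gives the periodic solution with amplitude a/√(a² + 1) and phase tan ϕ = 1/a, and the residual of the differential equation should vanish at every sample point.

```python
import math

a = 2.0                             # an arbitrary choice of parameter
phi = math.atan(1.0 / a)            # phase: tan(phi) = 1/a
amp = a / math.sqrt(a * a + 1.0)    # amplitude of the O(eps) periodic correction

def u1(t):
    return amp * math.cos(t - phi)

def du1(t):
    return -amp * math.sin(t - phi)

# residual of u1' + a*u1 = a*cos(t) over one period
res = max(abs(du1(t) + a * u1(t) - a * math.cos(t))
          for t in [0.1 * k for k in range(63)])
print(res)  # essentially zero (roundoff)
```

Since u1 is built from cos t and sin t only, it is automatically 2π-periodic, matching the requirement v(0) = v(2π) used below.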
To explain why the perturbation method works, we define F : C¹ × R → C,

F(u, ε) = u′ − au(1 + ε cos t − u),

where C¹ and C are the Banach spaces of 2π-periodic continuously differentiable and continuous functions, respectively. Obviously F(u0, 0) = 0, where u0(t) ≡ 1. In order to apply the Implicit Function Theorem we verify that F_u(u0, 0) : C¹ → C is an isomorphism. It is easy to show that

Lv = F_u(u0, 0)v = v′ + av.

To show L has a bounded inverse, we solve Lv = f ∈ C. Then

v(t) = v(0)e^{−at} + e^{−at} ∫₀^t e^{aξ} f(ξ) dξ.

In order that v(t) be 2π-periodic, we need v(0) = v(2π), so we choose

v(0) = e^{−2πa} ∫₀^{2π} e^{aξ} f(ξ) dξ / (1 − e^{−2πa}).

It is easy to verify ‖v‖_{C¹} = ‖L^{-1}f‖_{C¹} ≤ M‖f‖_C, where ‖v‖_{C¹} = ‖v‖_C + ‖v′‖_C.
Example 9.1.8 (Perturbation of eigenvalues) Consider a perturbed eigenvalue problem. Let Ax = λx, A ∈ R^{n×n}, have eigenpair (λ0, x0). Consider the perturbed eigenproblem

(A + εB)x = λx.

Let

λ(ε) = λ = λ0 + ελ1 + ε²λ2 + · · ·
x(ε) = x = x0 + εx1 + ε²x2 + · · ·

Then we have

(A + εB)x(ε) = λ(ε)x(ε).   (9.19)

O(1): Setting ε = 0, we have Ax0 = λ0 x0.

O(ε): Differentiate (9.19) with respect to ε and then set ε = 0; we obtain Ax1 + Bx0 = λ1 x0 + λ0 x1, or

(A − λ0 I)x1 = λ1 x0 − Bx0.   (9.20)

In order to find λ1, x1: by the Fredholm Alternative theorem, (9.20) is solvable iff

λ1 x0 − Bx0 ⊥ N(A* − λ0 I).

If we assume N(A* − λ0 I) = ⟨y0⟩ is one-dimensional, then we are able to find λ1 by

⟨λ1 x0 − Bx0, y0⟩ = 0, or λ1 = ⟨Bx0, y0⟩ / ⟨x0, y0⟩, ⟨x0, y0⟩ ≠ 0.

Then we solve x1 from (A − λ0 I)x1 = λ1 x0 − Bx0.

Remark 9.1.3 Regular perturbation is very useful in the theory of bifurcation. The interested reader should consult [K]. Also, the method of two timing is important in applied mathematics; the reader may consult [C], [M].
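A small numerical check of the first-order formula λ1 = ⟨Bx0, y0⟩/⟨x0, y0⟩ (the matrices below are our own toy data): for a symmetric A we may take y0 = x0, and for a 2 × 2 perturbed matrix the exact eigenvalue is available from the characteristic polynomial, so the error of λ0 + ελ1 should be O(ε²).

```python
import math

# Symmetric 2x2 example, so A* = A and y0 = x0; values are illustrative only.
A = [[2.0, 0.0], [0.0, 1.0]]
B = [[1.0, 1.0], [1.0, 1.0]]
lam0, x0 = 2.0, [1.0, 0.0]          # eigenpair of A
y0 = x0                              # null vector of A* - lam0*I

Bx0 = [B[0][0]*x0[0] + B[0][1]*x0[1], B[1][0]*x0[0] + B[1][1]*x0[1]]
lam1 = (Bx0[0]*y0[0] + Bx0[1]*y0[1]) / (x0[0]*y0[0] + x0[1]*y0[1])

eps = 1e-3
# exact larger eigenvalue of A + eps*B via the 2x2 characteristic polynomial
a11, a12, a22 = 2.0 + eps, eps, 1.0 + eps
tr, det = a11 + a22, a11 * a22 - a12 * a12
lam_exact = (tr + math.sqrt(tr * tr - 4 * det)) / 2.0

print(lam1, abs(lam_exact - (lam0 + eps * lam1)))  # lam1 = 1.0, error ~ O(eps^2)
```

Halving ε should roughly quarter the reported error, confirming the O(ε²) remainder.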
9.2 Singular Perturbations: Boundary Value Problems
We say a problem P(ε) is singular if the solution x(ε) does not converge to x(0) as ε → 0. In the following example, we explain the notions of outer solution, inner solution, boundary layer and matching.

Example 9.2.1 Consider the boundary value problem

εu″ + u′ = 0, 0 < x < 1,
u(0) = u0, u(1) = u1, 0 < ε ≪ 1 a small parameter.   (9.20)

The exact solution of (9.20) is

u(x, ε) = (u0[e^{−x/ε} − e^{−1/ε}] + u1[1 − e^{−x/ε}]) / (1 − e^{−1/ε}).   (9.21)

Then for ε > 0 sufficiently small,

u(x, ε) ≈ u0 e^{−x/ε} + u1(1 − e^{−x/ε}).

The graph of u(x, ε) is shown in Fig. 9.1.
Fig. 9.1

In the region near x = 0 with thickness O(ε), the solution u(x, ε) changes rapidly, i.e., εu″(x) is not small in this region. The region is called a boundary layer since it is near the boundary x = 0. If such a region is in the interior, we call it an interior layer. Given a singular perturbation problem, it is nontrivial to find where the boundary layer or interior layer is; it requires experience and trial and error. Now, for the problem (9.20), we pretend we don't know anything about the solution u(x, ε). In the following steps, we shall use the perturbation method to obtain an approximation to the true solution u(x, ε).

Step 1: Blindly use regular perturbation. Let

u(x, ε) = u0(x) + εu1(x) + ε²u2(x) + · · ·   (9.22)

and substitute it into (9.20); we obtain

ε(u0″ + εu1″ + ε²u2″ + · · ·) + (u0′ + εu1′ + ε²u2′ + · · ·) = 0,
u(0) = u0 = u0(0) + εu1(0) + ε²u2(0) + · · ·
u(1) = u1 = u0(1) + εu1(1) + ε²u2(1) + · · ·

Then we compare coefficients of ε^n, n ≥ 0:

O(1): u0′ = 0, u0(0) = u0, u0(1) = u1, so u0(x) = constant.
O(ε): u0″ + u1′ = 0, u1(0) = 0, u1(1) = 0, so u1(x) = constant.

Continuing this process, we obtain u_n(x) = const. for n ≥ 0. Thus for u0 ≠ u1 we recognize this is a singular problem; the regular perturbation method doesn't work, for the expansion (9.22) is not valid uniformly on 0 ≤ x ≤ 1.

Step 2: We guess the location of the boundary layer or interior layer. Here we assume the boundary layer is at x = 0. (If we assume the boundary layer is at x = 1, we shall find that the outer solution and inner solution cannot be matched in the matching expansion.)
(1) Outer expansion. Since we assume the boundary layer is at x = 0, we obtain the outer solution U(x, ε) by the regular perturbation method under the condition U(1, ε) = u1. Let

U(x, ε) = u0(x) + εu1(x) + ε²u2(x) + · · ·

Then the regular perturbation method yields u0(x) ≡ u1, u1(x) ≡ 0, u2(x) ≡ 0, · · ·

(2) Inner expansion. Assume the inner solution in the boundary layer is

W(τ, ε) = w0(τ) + εw1(τ) + ε²w2(τ) + · · ·

where τ = x/ε^α, with α > 0 to be determined; ε^α is the thickness of the boundary layer. The introduction of the new variable τ means that we magnify the boundary layer.

(3) Matching. Let the solution be

u(x, ε) = U(x, ε) + W(τ, ε).   (9.23)

Substituting (9.23) into equation (9.20), we obtain

ε[U″(x, ε) + ε^{−2α} d²W/dτ² (τ, ε)] + [U′(x, ε) + ε^{−α} dW/dτ (τ, ε)] = 0,

or

ε(u0″ + εu1″ + ε²u2″ + · · ·) + ε^{1−2α}(d²w0/dτ² + ε d²w1/dτ² + · · ·) + (u0′ + εu1′ + ε²u2′ + · · ·) + ε^{−α}(dw0/dτ + ε dw1/dτ + · · ·) = 0.   (9.24)

For the boundary conditions in (9.20), if x = 0 then τ = 0. Then

u(0) = u0 = U(0, ε) + W(0, ε) = (u0(0) + εu1(0) + · · ·) + (w0(0) + εw1(0) + · · ·).

Since u0(0) = u1, we have w0(0) = u0 − u1. When x is in the boundary layer region, we only consider W(τ, ε). Here we need ε^{1−2α} = ε^{−α}, or α = 1. Multiplying both sides of (9.24) by ε yields

ε²(u0″ + εu1″ + ε²u2″ + · · ·) + (d²w0/dτ² + ε d²w1/dτ² + · · ·) + ε(u0′ + εu1′ + ε²u2′ + · · ·) + (dw0/dτ + ε dw1/dτ + · · ·) = 0.   (9.25)

Comparing the coefficients of ε^n, n ≥ 0, we get

O(1): d²w0/dτ² + dw0/dτ = 0, w0(0) = u0 − u1, lim_{τ→∞} w0(τ) = 0, lim_{τ→∞} dw0/dτ (τ) = 0.   (9.26)

Integrating the equation in (9.26) from τ to ∞ yields dw0/dτ + w0(τ) = 0, w0(0) = u0 − u1, and hence w0(τ) = (u0 − u1)e^{−τ}.

O(ε): d²w1/dτ² + dw1/dτ = 0, w1(0) = 0, w1(∞) = 0, dw1/dτ (∞) = 0. Then w1(τ) ≡ 0.

If we compute the solution u(x, ε) up to O(ε), then

u(x, ε) = u1 + (u0 − u1)e^{−x/ε} + O(ε²).
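The matched approximation can be compared with the exact solution (9.21) numerically; a minimal sketch (the parameter values are our own choices) shows the composite formula is uniformly accurate on [0, 1]:

```python
import math

def exact(x, eps, u0, u1):
    # exact solution (9.21) of eps*u'' + u' = 0, u(0)=u0, u(1)=u1
    e = math.exp(-1.0 / eps)
    return (u0 * (math.exp(-x / eps) - e)
            + u1 * (1.0 - math.exp(-x / eps))) / (1.0 - e)

def composite(x, eps, u0, u1):
    # leading-order matched approximation: outer u1 plus boundary-layer correction
    return u1 + (u0 - u1) * math.exp(-x / eps)

eps, u0, u1 = 0.01, 2.0, 1.0
err = max(abs(exact(x, eps, u0, u1) - composite(x, eps, u0, u1))
          for x in [k / 100.0 for k in range(101)])
print(err)  # uniformly tiny: the two differ only by exponentially small terms
```

For this linear constant-coefficient problem the discrepancy is O(e^{−1/ε}) rather than O(ε²), which is why the agreement is so close.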
Remark 9.2.1 If we assume the boundary layer is near x = 1, then we assume

u(x, ε) = U(x, ε) + W(η, ε), η = (1 − x)/ε,
U(x, ε) = u0(x) + εu1(x) + · · ·   (9.27)
W(η, ε) = w0(η) + εw1(η) + · · ·   (9.28)

Substituting into (9.20), we have

ε²U″(x, ε) + d²W/dη² + εU′(x, ε) − dW/dη = 0.   (9.29)

Then from (9.27) and (9.28), it follows that

ε²(u0″ + εu1″ + ε²u2″ + · · ·) + (d²w0/dη² + ε d²w1/dη² + · · ·) + ε(u0′ + εu1′ + ε²u2′ + · · ·) − (dw0/dη + ε dw1/dη + · · ·) = 0.   (9.30)

O(1): d²w0/dη² − dw0/dη = 0, w0(0) = u1 − u0, w0(∞) = 0, dw0/dη (∞) = 0.

Integrating the O(1) equation from η to ∞ yields

dw0/dη = w0(η), w0(0) = u1 − u0.

Then w0(η) = (u1 − u0)e^η ↛ 0 as η → ∞. Thus the assumption that the boundary layer is at x = 1 is not valid.

Nonlinear boundary value problem:

εy″ + f(x, y)y′ + g(x, y) = 0, y(0) = y0, y(1) = y1.   (9.31)

Assume the boundary layer is at x = 0. Let

y(x, ε) = U(x, ε) + W(τ, ε), τ = x/ε.   (9.32)

(1) Find the outer solution. Let

U(x, ε) = u0(x) + εu1(x) + · · ·, U(1, ε) = y1,

and from (9.31),

F(x, ε) = εU″(x, ε) + f(x, U(x, ε))U′(x, ε) + g(x, U(x, ε)) = 0.   (9.33)

O(1): Setting ε = 0 in (9.33), we have

f(x, u0(x))u0′(x) + g(x, u0(x)) = 0, u0(1) = y1.   (9.34)
Thus from (9.34) we obtain the solution u0(x).

O(ε): Differentiate (9.33) with respect to ε and then set ε = 0; we obtain

u0″(x) + f(x, u0(x))u1′(x) + ∂f/∂y (x, u0(x)) u1(x) u0′(x) + ∂g/∂y (x, u0(x)) u1(x) = 0, u1(1) = 0.   (9.35)

Then from (9.35) we obtain the solution u1(x).

(2) Find the inner solution. Let

W(τ, ε) = w0(τ) + εw1(τ) + ε²w2(τ) + · · ·

Substituting (9.32) into (9.31), we have

ε[U″(x, ε) + ε^{−2} d²W/dτ² (τ, ε)] + f(x, U(x, ε) + W(τ, ε))(U′(x, ε) + ε^{−1} dW/dτ (τ, ε)) + g(x, U(x, ε) + W(τ, ε)) = 0.   (9.36)

Multiplying (9.36) by ε, we get

G(x, τ, ε) = εU″(x, ε) + d²W/dτ² (τ, ε) + f(x, U(x, ε) + W(τ, ε)) · (εU′(x, ε) + dW/dτ (τ, ε)) + εg(x, U(x, ε) + W(τ, ε)) = 0,   (9.37)

with y(0, ε) = y0 = U(0, ε) + W(0, ε).

Comparing the coefficients of ε^n, n ≥ 0, we have

O(1): Setting ε = 0 in (9.37), we obtain

d²w0/dτ² + f(0, u0(0) + w0(τ)) dw0/dτ = 0,
w0(0) = y0 − u0(0), w0(∞) = 0, dw0/dτ (∞) = 0.

Case 1: If y0 < u0(0), then w0(0) < 0. Integrating the O(1) equation from τ to ∞, we have

−dw0/dτ (τ) + ∫_τ^∞ f(0, u0(0) + w0(τ)) (dw0/dτ) dτ = 0.

Then

dw0/dτ (τ) = ∫_{w0(τ)+u0(0)}^{u0(0)} f(0, t) dt.

We need

∫_{y+u0(0)}^{u0(0)} f(0, t) dt > 0 for y0 − u0(0) < y < 0.   (9.38)
Fig. 9.2

Case 2: If y0 > u0(0), then we need

∫_{y+u0(0)}^{u0(0)} f(0, t) dt < 0.
Exercise 9.2.1 Assume

εy″ + y′ + y = 0, y(0) = α0, y(1) = β0

has a boundary layer at x = 0, and find the solution y(x, ε) by singular perturbation.
9.3 Singular Perturbation: Initial Value Problem
Example 9.3.1 Consider the enzymatic reactions proposed by Michaelis and Menten (1913), involving a substrate (molecule) S reacting with an enzyme E to form a complex SE, which in turn is converted into a product P. Schematically we have

S + E ⇌ SE (forward rate k1, backward rate k−1), SE → P + E (rate k2).

Let s = [S], e = [E], c = [SE], p = [P], where [ ] denotes concentration. By the Law of Mass Action, we have the system of nonlinear equations

ds/dt = −k1 es + k−1 c,
de/dt = −k1 es + (k−1 + k2) c,
dc/dt = k1 es − (k−1 + k2) c,   (9.39)
dp/dt = k2 c,
s(0) = s0, e(0) = e0, c(0) = c0, p(0) = p0.

From (9.39), we have

de/dt + dc/dt = 0, or e(t) + c(t) ≡ e0.   (9.40)
By (9.40) we have

ds/dt = −k1 e0 s + (k1 s + k−1) c,
dc/dt = k1 e0 s − (k1 s + k−1 + k2) c,   (9.41)
s(0) = s0, c(0) = 0.

With the nondimensionalization

τ = k1 e0 t, u(τ) = s(t)/s0, v(τ) = c(t)/e0,
λ = k2/(k1 s0), K = (k−1 + k2)/(k1 s0), ε = e0/s0,   (9.42)

the system (9.41) becomes

du/dτ = −u + (u + K − λ) v,
ε dv/dτ = u − (u + K) v,   (9.43)
u(0) = 1, v(0) = 0,

where 0 < ε ≪ 1 and, from (9.42), K > λ.
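The fast relaxation of v in (9.43) can be seen numerically; a minimal sketch (parameter values are our own choices), using a time step small enough to resolve the O(ε) layer, shows v settling onto the quasi-steady state v = u/(u + K):

```python
# Explicit Euler on the dimensionless system (9.43); K, lam, eps are illustrative.
K, lam, eps = 2.0, 1.0, 0.01
dt = eps / 50.0                      # resolve the fast O(eps) transient
u, v, t = 1.0, 0.0, 0.0
while t < 0.5:                       # integrate well past the initial layer
    du = -u + (u + K - lam) * v
    dv = (u - (u + K) * v) / eps
    u, v = u + dt * du, v + dt * dv
    t += dt
qss = u / (u + K)                    # pseudo-steady state value of v
print(abs(v - qss))                  # small: v tracks the slow manifold
```

The step restriction dt ≪ ε is what makes the explicit scheme stable here; it also illustrates why stiff systems of this form are expensive to integrate naively.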
Fig. 9.3

Here v(τ) changes rapidly in dimensionless time τ = O(ε). After that, v(τ) is essentially in a steady state, i.e., ε dv/dτ ≈ 0; the v-reaction is so fast that it is more or less in equilibrium at all times. This is Michaelis and Menten's pseudo-steady state hypothesis. In the following we introduce the method of singular perturbation for system (9.43).

Singular Perturbation: Initial Value Problem. Consider the following system:

dx/dt = f(x, y),
ε dy/dt = g(x, y), 0 < |ε| ≪ 1,   (9.44)
x(0, ε) = x0, y(0, ε) = y0.
If we set ε = 0 in (9.44), then

dx/dt = f(x, y), x(0) = x0,   (9.45)
0 = g(x, y).

Assume g(x, y) = 0 can be solved as

y = ϕ(x).   (9.46)

Substituting (9.46) into (9.45), we have

dx/dt = f(x, ϕ(x)), x(0) = x0.   (9.47)

Let X0(t), 0 ≤ t ≤ 1, be the unique solution of (9.47) and Y0(t) = ϕ(X0(t)). In general Y0(0) ≠ y0. Assume the following hypothesis:

(H) There exists K > 0 such that for 0 ≤ t ≤ 1

∂g/∂y |_{x=X0(t), y=Y0(t)} ≤ −K

and

∂g/∂y |_{x=X0(t), y=λ} ≤ −K

for all λ lying between Y0(0) and y0. We shall prove that

lim_{ε↓0} x(t, ε) = X0(t), lim_{ε↓0} y(t, ε) = Y0(t)

uniformly on 0 < t ≤ 1. Since Y0(0) ≠ y0, we expect Y0(t) to be non-uniformly valid at t = 0. Introduce a new variable, the stretched variable ξ = t/ε, and write

x(t, ε) = X(t, ε) + u(ξ, ε),
y(t, ε) = Y(t, ε) + v(ξ, ε),   (9.48)

where X(t, ε), Y(t, ε) are called "outer solutions" and u(ξ, ε), v(ξ, ε) are called "inner solutions". There is a matching condition between inner and outer solutions:

lim_{ξ↑∞} u(ξ, ε) = 0, lim_{ξ↑∞} v(ξ, ε) = 0.   (9.49)
Fig. 9.4

Step 1: Finding the outer solutions X(t, ε) and Y(t, ε). Let

X(t, ε) = Σ_{n=0}^∞ ε^n X_n(t), Y(t, ε) = Σ_{n=0}^∞ ε^n Y_n(t).   (9.50)

Do the regular perturbation for system (9.44), i.e., substitute (9.50) into (9.44) and compare ε^n-terms for n = 0, 1, 2, .... Compute

f(X(t, ε), Y(t, ε)) = f(Σ ε^n X_n, Σ ε^n Y_n) = f(X0, Y0) + ε[(∂f/∂x)|_{X0,Y0} X1 + (∂f/∂y)|_{X0,Y0} Y1] + O(ε²),
g(X(t, ε), Y(t, ε)) = g(Σ ε^n X_n, Σ ε^n Y_n) = g(X0, Y0) + ε[(∂g/∂x)|_{X0,Y0} X1 + (∂g/∂y)|_{X0,Y0} Y1] + O(ε²).   (9.51)

The comparison of ε^n terms after substituting (9.50) into (9.44) yields

O(1):
dX0/dt = f(X0, Y0),
0 = g(X0, Y0).   (9.52)

O(ε):
dX1/dt = (∂f/∂x)|_{X0,Y0} X1 + (∂f/∂y)|_{X0,Y0} Y1,
0 = (∂g/∂x)|_{X0,Y0} X1 + (∂g/∂y)|_{X0,Y0} Y1 − dY0/dt.   (9.53)

In (9.52), X0(t), Y0(t) satisfy

Y0(t) = ϕ(X0(t)), dX0/dt = f(X0, ϕ(X0)), X0(0) = x0.   (9.54)
From (9.53), we obtain

Y1(t) = [dY0/dt − (∂g/∂x)|_{X0,Y0} X1] / (∂g/∂y)|_{X0,Y0},   (9.55)

and X1(t) satisfies

dX1/dt = ψ1(t)X1 + µ1(t), X1(0) = 0,   (9.56)

where

ψ1(t) = (∂f/∂x)|_{X0,Y0} − (∂f/∂y)|_{X0,Y0} (∂g/∂x)|_{X0,Y0} / (∂g/∂y)|_{X0,Y0},
µ1(t) = (∂f/∂y)|_{X0,Y0} (dY0/dt) / (∂g/∂y)|_{X0,Y0}.

Inductively we shall have, for i = 2, 3, ...,

Y_i(t) = α_i(t) + β_i(t)X_i(t),
dX_i/dt = ψ_i(t)X_i + µ_i(t), X_i(0) = 0.   (9.57)

For x(0, ε) = X(0, ε) = x0 = Σ_{i=0}^∞ X_i(0)ε^i, it follows that X0(0) = x0 and X_i(0) = 0 for i = 1, 2, ....

Step 2: Inner expansion at the singular layer near t = 0. From (9.44) and (9.48), ξ = t/ε, we have

du/dξ = d/dξ (x(εξ, ε) − X(εξ, ε)) = εf(X(εξ, ε) + u(ξ, ε), Y(εξ, ε) + v(ξ, ε)) − εf(X(εξ, ε), Y(εξ, ε)),
dv/dξ = g(X(εξ, ε) + u(ξ, ε), Y(εξ, ε) + v(ξ, ε)) − g(X(εξ, ε), Y(εξ, ε)),   (9.58)
u(0, ε) = x(0, ε) − X(0, ε) = 0,
v(0, ε) = y0 − Y(0, ε) ≠ 0.

Let

u(ξ, ε) = Σ_{n=0}^∞ u_n(ξ)ε^n, v(ξ, ε) = Σ_{n=0}^∞ v_n(ξ)ε^n.   (9.59)
Expanding (9.58) in power series in ε by (9.59) and comparing the coefficients on both sides of (9.58), we obtain the following.

O(1): Setting ε = 0,

du0/dξ = 0, u0(0) = 0 ⇒ u0(ξ) ≡ 0,   (9.60)

and

dV0/dξ = g(X0(0), Y0(0) + V0(ξ)) − g(X0(0), Y0(0)) ≡ V0(ξ)G(V0(ξ)) (by the Mean Value Theorem),
V0(0) = y0 − Y0(0) (boundary layer jump).   (9.61)

From hypothesis (H), G(V0(ξ)) ≤ −K < 0, so |V0(ξ)| initially decreases and |V0(ξ)| ≤ |V0(0)|e^{−Kξ} for ξ > 0 small.

O(ε):

du1/dξ = f(X0(0), Y0(0) + V0(ξ)) − f(X0(0), Y0(0)) ≡ V0(ξ)F(V0(ξ)), u1(0) = 0.   (9.62)

Once V0(ξ) is solved from (9.61), we solve (9.62) and obtain

u1(ξ) = ∫_∞^ξ v0(s)F(v0(s)) ds,

with u1(∞) = 0 by the matching condition (9.49). Hence
Now we go back to the Michaelis-Menten Kinetics dx dt
= f (x, y) = −x + (x + K − λ) y,
² dy dt
K > 0, λ > 0 (9.63)
= g(x, y) = x − (x + K) y
Let x(t, ²)
= X(t, ²) + u(ξ, ²) =
∞ X
²n Xn (t) +
n=0
y(t, ²)
= Y (t, ²) + v(ξ, ²) =
∞ X
∞ X
²n un (t)
n=0
²n Yn (t) +
n=0
∞ X
²n vn (t)
n=0
Then from (9.54) Y0 (t) = ϕ (X0 (t)) =
X0 (t) X0 (t) + K
(9.64)
where X0(t) satisfies

dx/dt = −x + (x + K − λ) x/(x + K) = −λx/(x + K), x(0) = x0 = 1.

Then X0(t) satisfies

X0(t) + K ln X0(t) = 1 − λt.

From (9.61) we obtain

dV0/dξ = [x0 − (x0 + K)(Y0(0) + v0(ξ))] − [x0 − (x0 + K)Y0(0)] = −(x0 + K)v0(ξ),
v0(0) = y0 − Y0(0), x0 = 1, y0 = 0,

and

v0(ξ) = (y0 − x0/(x0 + K)) e^{−(x0+K)ξ} = (−1/(1 + K)) e^{−(1+K)ξ}.

Hence

y(t, ε) ∼ X0(t)/(X0(t) + K) + (−1/(1 + K)) e^{−(1+K)t/ε}.

From (9.62),

du1/dξ = f(X0(0), Y0(0) + V0(ξ)) − f(X0(0), Y0(0)) = (1 + K − λ)v0(ξ) = [(λ − (1 + K))/(1 + K)] e^{−(1+K)ξ},
u1(∞) = 0, u1(ξ) = [((1 + K) − λ)/(1 + K)²] e^{−(1+K)ξ}.   (9.65)
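As a check on the outer solution (ours, with arbitrary parameter values): along the reduced equation dx/dt = −λx/(x + K), the quantity x + K ln x decreases at the constant rate λ, so a simple Euler integration should preserve the implicit relation X0 + K ln X0 = 1 − λt up to the O(Δt) discretization error.

```python
import math

# Euler integration of the reduced (outer) equation dx/dt = -lam*x/(x+K), x(0) = 1;
# K and lam are illustrative choices.
K, lam = 2.0, 1.0
dt, x, t = 1e-4, 1.0, 0.0
while t < 1.0:
    x += dt * (-lam * x / (x + K))
    t += dt
# implicit outer solution: X0 + K*ln(X0) = 1 - lam*t
print(abs(x + K * math.log(x) - (1.0 - lam * t)))  # ~ O(dt)
```

The identity holds because d/dt (x + K ln x) = (1 + K/x)(−λx/(x + K)) = −λ exactly along trajectories.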
References

[K] J.P. Keener: Principles of Applied Mathematics
[O] R. O'Malley: Introduction to Singular Perturbations
[M] J.A. Murdock: Perturbations: Theory and Methods
[C] J.D. Cole: Perturbation Methods in Applied Mathematics