This content was uploaded by our users and we assume good faith they have the permission to share this book. If you own the copyright to this book and it is wrongfully on our website, we offer a simple DMCA procedure to remove your content from our site. Start by pressing the button below!
0.
Thus, the asymptotic stability of Malkin's system is guaranteed for 0 < µ < µ, where
0.67 <,u* < 0.68.
§6. GROWTH ESTIMATES OF SOLUTIONS
115
V. The method of freezing. Let us turn to system (4.6.1). There naturally arises the question: is the behavior of solutions of this system connected with the character of the eigenvalues of the matrix A(t)? For example, if
y =sup max{y1(t),...,yn(t)}, t,o
(4.6.14)
then can we, say, make a conclusion on the asymptotic stability from the fact that y < 0? Alas, in the general case, we cannot. EXAMPLE 4.6.5. Consider the system z = A(t)x, where
A(t) _
(1 +2cos4t) 2(1 +sin4t) 2(sin4t - 1) -1 +2cos4t
Here yl = Y2 = -1, and the system has the solution
x(t) = et(sin2t,cos2t)T,
x[x] = I.
Thus, the eigenvalues of A(t) are not directly connected with the character of the behavior of solutions of the system in the autonomous case. However, if the coefficients of the system are functions of small variation, then such a connection can be obtained. The idea of the method of freezing is that, fixing t1 E R+, we reduce our system (4.6.1) to an almost constant one,
z = [A(tl) + (A(t) - A(t1))]x.
(4.6.15)
The result given below is due to Alekseev [1, 2). We write a solution x(t) of system (4.6.15) as follows:
x(t) = eA(n)tx(0) +
eA(tI)(I-T)[A(T)
- A(tl)]x(r) dz.
100
Therefore,
IIx(t)II <
IleA(,,),II
_ IIx(0)II
+J
IleA(tj)(t-T)II
' IIA(T) - A(t1)II . IIx(r)IIdT.
This inequality holds for all t, including t = tl. Set t = t1 and then omit the subscript, i.e., denote t1 by t: IIx(t)II <
IIeA(t)'II
' IIx(0)II +Jo
IIeA(t)(t-T)II
' IIA(T) -A(t)II ' IIx(T)IIdr.
We introduce a restriction for the rate of change of A(t). Let (4.6.16)
6 = sup IIA(t)II
IIA(r) - A(t)II < 6(t - T).
To estimate eA(t)t , we use the inequality (1.3.11), whence, (4.6.17)
IIeA(t)'II
< D(1 + t)n-1eYt.
IV. STABILITY AND SMALL PERTURBATIONS OF COEFFICIENTS
116
Note that b1/(n+1)6n/(n+l)(1
d(l + t - r)"-I (t - z) <
+ t - T)n
=
(4.6.18)
G
Thus, by virtue of the conditions (4.6.16) and (4.6.17), we write
IIx(t)II < D(1 + t)"-le"1IIx(0)II +
D(1 + t - z)i-1 eY('-r)8(t - z)IIx(z)II dT,
J
and, taking into account (4.6.18), we have
IIx(t)II < D(1 + t)"-1e1IIx(0)II +
J0
dr.
De"(
Divide the last equality by t)n-'e ("+n,S11D+U)
cp(t) = (1 +
(4.6.19)
> 0.
Then IIx(t)II G cp(t)
De-,16110,+1),11x(0)11
+ fo
1
(ill(n
t) enn1/a+1
)rllx(T)Il
1
dT;
therefore, IIX(z)II
< DIIx(0)11 +fI De"`
611(n+l)
dr.
By the Gronwall-Bellman formula, we have 11x(1)11
cp(t)
G DIIx(0)11ef°
+Udr
DI =De
or, taking into account (4.6.19), we obtain (4.6.20)
11x(t)II < DIIx(0)II(1 +
Thus, we have proven the following statement. THEOREM 4.6.4. Let system (4.6.1) he such that
IIA(tl)-A(t2)II
x[x] < y +'i61 /(n+1)
where y is defined by formula (4.6.14) and d = n + De ""'I("+". Here n is the order of the system and D is the constant from the estimate (1.3.11).
16. GROWTH ESTIMATES OF SOLUTIONS
117
REMARK 4.6.3. The attainability of the estimate (4.6.21) was proved for systems of any order in [23]. This means that there exist systems whose greatest characteristic exponent is not less than y + 00611(n+1)
where co is a constant independent of 6. VI. Yakubovich's estimate for the characteristic exponents of systems with periodic coefficients [39]. Let the matrix A (t) in system (4.6.1) be such that
A(t) = A(t + co),
t E R.
We introduce the set of matrices G (t) satisfying the following conditions for t E R: 1. G*(t) = G(t), 2. G(t +(o) = G(t),
3. G E C I (R),
(G(t)a,a) > 0, where a E C" is arbitrary, hail 0, t E R. For example, for any co-periodic and continuously differentiable matrix F (t), the matrix 4.
G(t) = F*(t)F(t) satisfies the conditions given above. Let p be a multiplier of system (4.6.1). By Theorem 1.4.1, the solution x = e"cp (t), where A = 1 Lnp,
corresponds to it. Consider the form
fi(t) = (G(t)x, x).
(4.6.22)
We take our normal solution as x (t); then
fi(t) =
e(1.+
e2rRe:.(G
/
whence
2t ReA+ln(GV,(p).
(4.6.23)
Note that ln(G(t)
is a periodic and, consequently, a bounded function; therefore, dividing (4.6.23) by 2t, in the limit we have 1
1
ReA = 2 rli m I In c(t).
(4.6.24)
Starting with this, we obtain estimates for Re A. Differentiate the form (4.6.22) along the trajectories of system (4.6.1): = (d x, x) + (GA x, x) + (Gx, Ax) = (Qx, x), where (4.6.25)
Q = G + GA + A'G
(Q* = Q).
Consider the equation (4.6.26)
Det(Q - qG) = 0.
IV. STABILITY AND SMALL PERTURBATIONS OF COEFFICIENTS
118
Let qI (t) and q2 (t) be its smallest and greatest roots; note that they are real and coperiodic. Indeed, introduce the matrix G 1/2 The matrix G is Hermitian; therefore, there exists a unitary matrix U such that G = U*DU, where D is real diagonal [16, 27]. We write
G = U*DI/2UU*DI/2U - G1/2G1/2. Note that G1/2 is Hermitian; moreover, Det[(G-I/2)*[Q
- qG]G-1/2] =
Det[(G-1/2)*QG1/2
- qE],
whence we obtain the equivalence of (4.6.26) and (4.6.27)
Det[(G-1/2)*QG-1/2
The matrix
- qE] = 0.
H = (G-1/2):'QG-1/2
is Hermitian; hence [16, 27], the inequality
hmin(y,y) < (Hy,y) <_ hmax(y,y)
holds for any vector y = C", where h,,,i
hmax =max{hl(t),...,h"(t)},
and h1 (t), ... , h" (t) are the eigenvalues of H(t). By virtue of this, (4.6.27) implies (QG-1/2y G-1/2y)
gi(t)(y,y) <,
< g2(t)(y,y),
and, setting y = G1/2x, we obtain ql (t)(G1/2x, GI/2x) , (Qx, x) < g2(t)(GII2x, G112x), or
gl(t)(Gx,x) < (Qx,x) < g2(t)(Gx,x), or
fi(t) <, q2(t) (t)
(4.6.28)
The inequalities (4.6.28) give a two-sided estimate of the function fi(t): (4.6.29)
jqi(r)drInIn(0)jq2(r)dr. l
For any w-periodic function q(t) we have
f
q(r) dr =
I Jf q(r) dr + r(t), /
where r(t) is co-periodic. Thus,
lim r J 'q(r) dr = ! J a' q(r) dT. 1
s
1
0
0
Dividing inequality (4.6.29) by 2t, we pass to the limit, and, by virtue of (4.6.24), we
§6. GROWTH ESTIMATES OF SOLUTIONS
119
obtain (4.6.30)
f
2w
w
qI (r) dr < Re A <
21 f
0
w
q2 (r) dr.
0
Our reasoning shows that the result essentially depends on the choice of G (t). By an appropriate choice of G (t), the estimate (4.6.30) can be made as precise as desired. THEOREM 4.6.5 (Yakubovich [39]). Let
aI
(4631)
c
2w
f
w
w
1
gl (r) dr < al <
0
'
f q2 (r) dr - E < an < 0
2w
J gl (r) dr + e, 0
w
1
2w fo
q2(r) dr.
If the matrix A (t) is real, then the matrix G (t) can also be chosen real.
We omit the proof (the reader can find it in the monograph [39]). Let us consider the case of a second order real system in more detail. Here the matrix G (t) has the form G(t) = a(t) b(t) b(t) c(t) ' and the conditions for it are reduced to the requirements that the elements of the matrix be continuously differentiable, w-periodic, and
a(t)>0, c(t) > 0, a(t)c(t)-b'-(t)>0, 0
Det(QG-1 - qE) = 0, or (4.6.32)
q2 SpQG-1
=
- Sp(QG-)q + Det(QG-1) = 0,
GAG-=
Sp(QG-1 +GAG-1
SpGG-1
+2SpA.
The matrix G (t) satisfies some matrix equation G = B(t)G; thus,
B(t) = GG'1. Therefore, by the Ostrogradskii-Liouville formula, we have Det G (t) = Det G (0)efo
SP e(i) dT
,
or SpGG-1
= dt lnDetG(t).
The solution of equation (4.6.32) has the form
ql,2=
2(SpGG-1+2SpA)± ([spOG_t + 2SpA]2 -
\1/2 DetQG-I
I
IV. STABILITY AND SMALL PERTURBATIONS OF COEFFICIENTS
120
Hence (since fo Sp G G - I dr = 0), by Theorem 4.6.5, we obtain (4.6.33) fw
al,2 = 2ca
J0
m
SpA(r)dr±infJ
G[SpdG-1+2SpA]2
- DetQG-1 I
1 /2
dz
;
0
this implies the estimates (4.6.31). Now we apply these arguments to Malkin's example considered above. EXAMPLE 4.6.6 (Malkin). Consider the system
z1 = (-1 + 2sint)xl +ux2, X2 =,UXI - X2.
For G we take E; therefore,
Q = A+A* = 2 (/2(-1 +2sint)
-p 1
'
DetQ = -4[,u(-1 +2sint) +u2]. Our goal is to find out for which p Malkin's system is asymptotically stable, i.e., when a2 < 0. From formulas (4.6.33) we have f2R
a2<2 -1-IL+2J [(l+p(-1+2sint))2+4,u2]1/2dt 0
Let us compare this estimate with Example 4.6.4. By means of Yakubovich's method we have reduced the problem to the same inequality as by means of Lozinskii's method. In that case it was solved numerically, and here the integral can be simplified by means of the Cauchy-Bunyakovskii inequality:
J
V(t) dt <
(
co
a2<2(-1-'c+ 1-2u+7u2) Thus,
0
CHAPTER V
On the Variation of Characteristic Exponents Under Small Perturbations of Coefficients In Chapter V we study the problem of the influence of small perturbations of the coefficients on the stability of linear systems, as well as on the change of the spectrum of a system under such perturbations. These problems are closely connected, but the latter requires more subtle methods, since it raises the question of change of all the elements of the spectrum. Consider a system
z = A(t)x,
(5.0.1)
(5.0.2)
A E C(R+),
sup JIA(t)II < M,
X E C",
tEIR+
with spectrum (5.0.3)
-00
<00,
and the perturbed system (5.0.4)
(5.0.5)
Y = [A(t) + Q(t)ly,
Q E C(R+),
sup IIQ(t)II
with spectrum (5.0.6)
-oo
Under the influence of the perturbations Q(t), the characteristic exponents of system (5.0.1) vary, generally speaking, discontinuously; sometimes a finite shift of the characteristic exponents of the initial system corresponds to an arbitrarily small 6. In this chapter we shall introduce the notion of central exponents of linear systems and
shall see that they determine jumps of the exponents under small perturbations. We shall study the properties of the spectrum (5.0.3) itself, its relations with the central exponents, and the influence of these objects on how the exponents vary when passing from system (5.0.1) to system (5.0.4). 121
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
122
§1. Central exponents
In the investigation of stability in linear approximation, of primary interest is the increase of the greatest and the decrease of the smallest exponent under the influence of perturbations. The key method to solve such problems is provided by upper and lower functions and central exponents of system (5.0.1). These notions have been introduced and studied by Vinograd. In the presentation of the results we follow the monograph [9]. DEFINITION 5.1.1. The greatest exponent A,, of system (5.0.1) is said to be rigid
upwards if for any e > 0 there exists a 6 > 0 such that 2 1< A,, + e, where A' is the greatest exponent of system (5.0.4). In the opposite case, A is said to be mobile upwards, which means that a positive jump of A,, upwards can correspond to an arbitrarily small
6. Analogous motions of rigidity and mobility downwards can be introduced for the smallest exponent A,. The bounds of mobility of the exponents are, naturally, of interest. DEFINITION 5.1.2. Bounded measurable on R+ functions r(t) and R(t) are said to be lower and upper functions for system (5.0.1), respectively, if for any solution x (t) of this system the following estimates hold for any e > 0: (5.1.1)
d,.E exp
(f'(r(r) - e) dr)
<
DR,E exp l I (R (T)
+ e) dr)
,
where t > s > 0 and the quantities d,.,E, DR, do not depend on s. It is clear from the definition that these functions bound the growth of the solutions from below and from above, respectively. Sometimes it is convenient to have the inequalities (5.1.1) in terms of a fundamental matrix, namely, the Cauchy matrix
X(t, S) = X(t)X-I (s). LEMMA 5.1.1. For any fixed t and s the following relations are valid for the Cauchy matrix X (t, s) of a linear system: (5.1.2)
IIX(t,s)II =max
Ilx(s)II' I
_
1
(5.1.3)
IIX-' (t,s)II
IIx(t)II
-min 11x(s)11
PROOF.
1.
IIX(t,s)cII IIX(t,s)ll = max IIX(t,s)bll = max c llcll
IIhII=I
= max 2.
min lIx(t)ll 11x(s)11
-
liX(t)X-'(s)cll
11X(s)X' (s)cII 1
_-max liX(t)aIi
IIX(s)all =
1
= max uIIa(S)IIIIlIX(s,t)II (t =
llx(t)Il max Ilx(s)II
1
IIX-' (t, s)11
O
§1. CENTRAL EXPONENTS
123
The conditions (5.1.1) and (5.1.2) imply that the upper functions R(t) realize the estimate (5.1.4)
IIX(t,s)II
DEFINITION 5.1.3. The number fl defined as r
(5.1.5)
f1= inf { lim 1 R
111:-'oo t
f,
J
R(T) dT }
0
JJJ
is called the upper central exponent of system (5.0.1). Here the infimum is taken over the set of all the upper functions R of system (5.0.1). DEFINITION 5.1.4. The number co defined as (5.1.6)
r i
t,
co = sup 1 lim
1
OO t
r (T) dT } fo
JJJ
s called the lower central exponent of system (5.0.1).
In the formula (5.1.6), the
supremum is taken over the set of all the lower functions r of system (5.0.1).
REMARK 5.1.1. If in the sets of upper and lower functions we distinguish the subsets of constants, and if we take infimum and supremum over these subsets in (5.1.5) and (5.1.6), then we obtain the definitions of the upper singular exponent no and the lower singular exponent coo. The inequalities (5.1.7)
-M<(00
are obvious. The first and the last inequalities in (5.1.7) follow from Theorem 2.3.1.
REMARK 5.1.2. Lower functions and the lower central exponent do not require special consideration since this problem can be reduced to the investigation of upper functions and the upper central exponent for the adjoint system (5.1.8)
i = -A*(t)z.
Indeed,
Z(t, s) = [X' (t, s)]
.
The relations (5.1.3) and (5.1.1) imply the estimate (5.1.9)
IIX-1(t s)ll
and the latter implies our statement. Thus, the set {-r(t)} is the set of upper functions and the set {-R(t)} is the set of lower functions for the adjoint system (5.1.8). The following theorem is valid. THEOREM 5.1.1. The lower central exponent co of system (5.0.1) is equal to the upper central exponent of the adjoint system, taken with the opposite sign, and vice versa.
LEMMA 5.1.2. Central exponents are invariant under Lyapunov transformations.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
124
PROOF. Let x = L(t)w be a Lyapunov transformation of system (5.0.1) to the system
w = (L-AL - L-IL)w. The Cauchy matrices of these systems are connected by
X(t,s) = X(t)X-I (s) = L(t)W(t)W-1(s)L-1(s), and, by virtue of IIL-I(t)II
IIL(t)II < K,
< K,
we have
IIX(t,s)II < jjW(t,s)IIK2 For the inverse transformation w = L-' (t)x, in a similar way we obtain
IIW(t,s)II
By means of simple examples, we illustrate one of the methods of constructing upper functions and of determining the upper central exponents. EXAMPLE 5.1.1. Consider a real diagonal system
z = diag[a1(t),...,a,(t)]x,
a; E C(R+);
its Cauchy matrix has the form
X (t, s) = diag [exp I
J
/
aI (T)
dr) , ... , exp I J t an (z) dr) ]
\
.
/J Obviously, any measurable and bounded function R(t) on R+ is an upper function if \\\
fai(z)dt < DR,£ +J r(R(z)+e)dr,
(5.1.10)
i = 1,...,n,
where t > s >, 0, s > 0. Here DR,£ is written instead of In DR,£; this makes no essential difference. Let us take T > 0 and divide the half-axis R+ by the points 0, T, 2T, ... into the intervals Jig
[(k - 1)T,kT]
of length T. On each of these intervals we define a function RT (t), coinciding with that of the functions aI (t), . . . , a, (t) whose integral over this interval is the greatest (or with one ofJRT(r)dr them if there is more than one such function), i.e.,
=maxJ a;(r)dz,
(5.1.11)
i = 1,...,n, k = 1,2,....
Thus, we define a bounded piecewise continuous function RT (t) on R+. At the endpoints of the intervals we may set
,n = 0,1, ... .
RT (in T) = 0,
Let us show that RT (t) is an upper function. Let the interval [s, t] consist of the integer number n - in + 1 of intervals Jk; then
f
n
(T) dT = k=ni
n
jj'.
max / aj (r) dT =
a1 (T) dT I'=m
fff
J,;
J,
RT (T) dr.
§1. CENTRAL EXPONENTS
125
If the interval [s, t] is arbitrary, then at its ends there appear intervals of length less than T. Denote them by [s, mT] and [nT, t], assuming that [m T, nT] C [s, t], and
(n+l)T>t.
(m-1)T<s, Then we have
ft
Js
mT
a;(z)dz=J
a,dr+J
mT
RT dr +
J
c
I"
- Js
fnT
JmT
RT dr +
ft
JnT
RT dr
t
fmT
Since
t
a;dz+a,dr
mT
is
<
nT
- a,) dr - I (RT - a,) dt.
(RT
T
IRT(t) - a,(t)I < 2M,
and the lengths of the last two intervals of integration do not exceed T, we finally obtain
fai(T)d4MT+jRT(r)d,
i = 1, ... , n.
By virtue of (5.1.10), this implies that RT (t) is an upper function; hence, rt lim 1 (5.1.12) RT (T) dT > S2. t-.oo t J0 Let us show that the limit in the inequality (5.1.12) is realized along the sequence
n = 1, 2, ... .
to = nT, Indeed, for
nTs t<(n+1)T we have t
f
t
<
1,T
- n7,
RT dz
RT dr
ft
I
t
RT dz fo
n7
J0
t-nT I t f RT dT
RT dr + nT
+I
0
ft
J0
nT
RT dT
n7
J0
t RT dr nT f nT
t-nTM+t-nTM<-
0,
nT n n-,oo since t - n T < T. Therefore, from the inequality (5.1.11) we obtain
nT
(5.1.13)
92T = lim
11-00,
1nf T
o
nT
R
or
f2=inf92T>S2. T>0
RT dT
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
126
Let us prove the reverse inequality, which would imply that n = 52. Take an e > 0 and an upper function R (t), defined in some other way such that
rim 1 J'R(z)dz
max
j
t--o0 t
0
a, (z) Jr < DR,e +
J
(R (r) + e) Jr.
On the interval [0, nT] we have pnT
nT
(R (r) + e) dz,
RT (z) dz < nDR,E + J0
or
f"nT
1
fn T
RT (z) dz < DT 'e + e + n7,
nT
R(z) dz.
Letting n -> eo, we obtain 52T < DR,e /T + 52 + 2E.
Therefore, Cl
T>o52T G 52,
thus, we have fl = 52, i.e., the upper function RT (t) constructed above realizes the upper central exponent. EXAMPLE 5.1.2. Consider the case of an autonomous system
X E C", A= const .
X= Ax,
Let a1, a2i ... , a" be the eigenvalues of the matrix A and
A = max{Rea;},
A = min {Re Ail.
From the estimate (1.3.10) we have II X(t, s)II = II expA(t - s)II
De exp(A + e)(t - s),
t > s > 0.
It follows that R(t) = A; hence, A = 52 = 520. Estimating the Cauchy matrix from below, or using the adjoint system, we obtain A = coo = co, since r(t) = A. EXAMPLE 5.1.3. Consider the system 11 = x1,
Here
X (t, s) =
diag[et-s
x2 = (2 + cos t)x2.
e2(t-s)+sin t-sins] _ [XI (t, s), x2(t, s)],
and X[x2] = 2.
X[x1] = 1,
For all t E R we have
al (t) = 1 G a2 = 2+cost; therefore, we can take R(t) = 2 + cost. Hence,
n= film t-x t1J R(z)dr=2, I'
o
i.e., 52 = A2. We can take r(t) = 1 as a lower function; hence, co = Al = 1.
§1. CENTRAL EXPONENTS
127
EXAMPLE 5.1.4. Consider the system x2 = 7rsin7C\/[x2.
X1 = 0,
Then
X(t,s) = (x1,X2) = ( 1 where
P
) - P ( s ))
), I
it
p(t)=J
7c
0
In our case, 1
22=rlim lp(t)=0.
21=0, As an upper function we take
R(t) = (7rsin7r'/[ + I7rsin n,/t'I)/2. Note that
R(t) > 0
for
t E (4k2, (2k + 1)2)
and
R(t) - 0
k = 0,1,....
t E [(2k + 1)2,4(k + 1)2],
for
Let us show that
lim 1
R(T) dT
t-'0o t J0 can be realized along the sequence tk = k2. Let
(2k)2 S t S (2k + 1)2. By analogy with Example 5.1.1, we have 4/'
fR(r)dr_ 41c2 f R(t) dt k-.oo -4 0. 0
By straightforward calculations, we obtain
J
(2k+1)'
R(T)dc = -(sin7r'/[ -7rvcosirv/-t) 7r
(2k)2
(2k+1)2
= 2(2k + 1 + 2k) = 2(41c + 1), (2k)2
or
j
(2n+1)2
R(t)dtJ
(2k+1)2
(2k)'
k=0
R(t)dt2(4k+1)=2(2n+1)(n+1). k=0
Finally, we have (2n+1)' 1
S2 = n-.o
(2n +
R(t) dt = nli
1)2
jo
2(2n + 1)(n + 1)
(2n + 1)2
= 1.
Taking
r(t) = (7r sin 7r'- In sin7r,`I)/2, we obtain co = -1. Let us turn again to systems (5.0.1) and (5.0.4), and let 11Q(t)II - 0, t - oo. Let us find out what happens to upper functions under this condition.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
128
THEOREM 5.1.2. Perturbations of a linear system that tend to zero as t -* oo preserve the sets of upper and lower functions and do not change the central exponents. PROOF. Thus, we have
IIQ(t)II 5 8(t) -a 0.
(5.1.14)
Let us write the solution y(t) of system (5.0.4) using the method of variation of parameters,
y(t) = X(t,0)y(0) +10, X(t,s)Q(s)y(s)ds, or
11Y(011 5 IIX(t,0)11IIY(0)II +
f
I
IIX(t, s)IIIIQ(s)IIIIY(s)Il ds.
0
Take an e > 0 and one of the upper functions R(t) of system (5.0.1). By virtue of the estimate (5.1.4), we have IIY(0)IIDR,eeA'(R+e/2)dt ' + f DR,eeJ (R+e/2)dta(S)IIY(s)Il ds,
11Y(011 5
0
or IIY(t)IIe-fo(R+e/2)dt
R,Ea(s)e-Jo(R+e/2)dtIIY(s)11
, IIY(0)IIDR,e + fo, D
ds.
By the Gronwall-Bellman lemma, we have (5.1.15)
IIY(t)IIe- fo(R+e/2) dt ,
D,.ad(s) ds. IIY(0)IIDR,eef
By the condition (5.1.14), there exists a T > 0 such that
8(t) < e/(2DR,e)
for
t > T.
We continue the estimate (5.1.15): IIY(t)II 5 DR,EIIY(0)II exp
f
f
It
(R + e/2) dT +1. DR,e((s) ds +
0
0
DR,Ea(s) ds T
I
DR,EIIY(0)IIe°R-Thexp f (R(s)+e)ds. 0
Here b = max(o,T) 6 (t) . It follows from the inequality obtained that R (t) is an upper
function for system (5.0.4) as well. Changing the roles of systems (5.0.1) and (5.0.4) in the above arguments, we come to the conclusion that an upper function for system (5.0.4) is also an upper function for system (5.0.1). Thus, the sets of upper functions of these systems coincide; hence, the upper central exponents also coincide. For lower functions and lower central exponents, the arguments are carried out by passing to the O adjoint systems. Let us return to the question of the mobility of the bounds of the extreme exponents AI and A,, of system (5.0.1) under small perturbations, i.e., under the passage to system (5.0.4).
§1. CENTRAL EXPONENTS
129
DEFINITION 5.1.5. A number fl' is called an upper bound of mobility of A,,, if for any e > 0 there exists a o > 0 such that A;, < S2' + e
(A', belongs to the spectrum (5.0.6)).
Obviously, ),n < 1Y (consider zero perturbations); therefore, An < inf ff. We give a theorem that makes the value of S2' more precise. THEOREM 5.1.3 (Vinograd). For any e > 0 there exists a o > 0 such that the greatest exponent of system (5.0.4) does not exceed the quantity S2 + e.
PROOF. Repeat the proof of Theorem 5.1.2, omitting (5.1.14) up to the inequality (5.1.15) inclusive, just taking into account that o = const. Then we write (5.1.16)
IIy(t)II < IIy(0)IIDR,eexp l fo
[R(z)+ei/2+DR,fo]dal .
Here el = e/2, the function R(t) is assumed to be such that ( 5.1.17)
jlim
t
f R(z) dz < a + 2,
and we choose o equal to el /(2DR,,) From the inequality (5.1.16), taking into account (5.1.17), we have
X[y] 0 we have defined R(t) so that the inequality (5.1.17) is satisfied, then we have determined DR,,, and, finally, o. 0 COROLLARY 5.1.1. The upper central exponent S2 bounds from above the mobility of the greatest exponent of the system.
Analogously, we can introduce a lower bound of mobility of the smallest exponent X11 of system (5.0.1), and by passing to adjoint systems we can show that this bound is realized by the lower central exponent co. Now let us return to the greatest exponent. Its mobility upwards is bounded by the number fl, but this is an estimate from above. Is it exceedingly crude? Can arbitrarily small perturbations lead to a "jump" of the greatest exponent A,, into a neighborhood of S2?
DEFINITION 5.1.6. A number in is said to be the attainable upper bound of mobility of the greatest exponent A,, if it is an upper bound, and for any e > 0 there exist systems (5.0.4) with an arbitrarily small o and the greatest exponent A', > S2 - e.
Millionshchikov proved that central exponents are attainable [29]. We omit the proof of the following theorem. THEOREM 5.1.4 (Millionshchikov). The central exponents of linear systems are attainable, i.e., for any e > 0 there exist piecewise continuous matrices BE (t) and CE (t), JIBE(t)II < e,
IIc (t)II < e
for
such that the greatest exponent of the system y = [A (t) + BB(t)]y
t E R+,
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
130
is no less than fl - E, and the smallest exponent of the system
y = [A (t) + CE(t)]y is no greater than co + E. COROLLARY 5.1.2. The greatest exponent is rigid upwards if and only if it coincides with the upper central exponent.
The corollary follows from Definition 5.1.1 and Theorems 5.1.3 and 5.1.4. Following Vinograd [9], we show that the upper central exponent for a diagonal system is attainable. THEOREM 5.1.5 (Vinograd). Take a real system
X = diag[a1(t),a2(t),...,a,, (t)]x,
(5.1.18)
a;
E C(R),
and let S2 be its upper central exponent. Then the perturbed system
xl = al(t)x,+ 8x,+ a2(t)x2,
X2 =
(5.1.19)
+axn, 8x2 + a3(t)x3,
X3 _
Xn =
8xn-1 + an(t)Xn,
where 8 is arbitrarily small, has the characteristic exponent A, >1 fl.
PROOF. To simplify calculations, we write system (5.1.19) in the form
xa = a.(t)x, +8xc-,,
(5.1.20)
where the index a is assumed to be cyclic mod n. By means of the Picard method [5], we construct the solution of this system with the initial condition
a = 1,...,n.
xa(0) = 1,
We pass from system (5.1.20) to the integral equations xc.
=
exp (f a,, (T) dT) +0 '
f xa-I (r) (I ac(c)d)] [exp
r
dT,
and establish the form of the Picard approximations, (5. 1.21)
x(t) = exp (f' a(T) dr) +8
/
Jo
x(T) [exp
I
J
r
a) d)]/
dT.
Hence, we have
x(t) = X(t) +8 I,
(exp
[f' a_1 d +
0
f ad
]
drl)
,
(here we have set T = ti ),
x(2)(t) = XaM (t) +82 f t f t'
(exp [f
t
a--2 do +
f2a,,-, dd + f l a« dc]) dt1 dt2
§1. CENTRAL EXPONENTS
131
(here z = t2). Further, we proceed by induction. Let x(k-1,(t)
(5.1.22)
+6k J,(k)(t),
where
JI Jfk J«k)(t)
=
...
t>
(exp [
0\
rte
J0
/pt'
as-k d +
(5.1.23)
Jr,
as-lc+l dS
f
a,,
We change k to k + I in (5.1.21) and, using (5.1.22), we write x
(exp It a. dal
' (t) = x(k)(t) +8
J
0
+ak+l
= x
(exp[ft'
(expf t
J ak d+
T
l'
a. dal f f ...f
f a-k+l d+ ... + f t,
rA
a«_ l
d)f
Let us insert exp f a,, dc5 under the sign of the k-tuple integral; then, setting r = tk+1 , we obtain xk+l)(t) = x(k)(t) +ak+l
this proves that the equality (5.11./22) holds and that it can be written in the form x(k) = exp
f
k
t
as d +
J0
6'.J,,'')(t). r=1
Since the Picard approximations converge for all t E R, we have the solution required, t
(5.1.24)
x,, (t)=exp I
a=1,...,n.
00
k=1
Now we pass to the proof of the statement that the solution obtained has the characteristic exponent X[x] > S2. The terms of the series (5.1.24) are positive; therefore, 1/2
(5.1.25)
IIx(t)II _
Ixa(t)I2
> xn(t) > akJnk)(t);
(k"=l
the choice of the index is not important but it is convenient to specify it. Let us show that we can define a sequence tn, - oo so that the corresponding sequence J,(1c("'))(t) (t,,,) growing sufficiently rapidly with has maxima at the points t,,,, the values of m. We shall compare IIx(t)II with the function J,(,1c(m))(t) at the points t, ,,; this allows us to estimate from below the exponent of IIx(t)II and verify that X[x] > 92. We pass to explicit calculations. Take an arbitrarily large T > n, where n is the order of the system, and consider the function J,(,k) (t) for
k=nm-1,
t=mT,
mEN.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
132
Write out the sum of integrals entering the exponent of the exponential in the integrand
and, taking into account that for the indicated value of k the index has run over m cycles, we combine these integrals in m groups:
r, =J a1d +
r,
+J
a2d
ands
r,
0
Ist group
+J
a, d + J
a, dd +
+
(5.1.26)
J
an ds
2nd group
a,dd+...+J
ands.
nfth group
The interval [0, mT ] is divided by the points T, 2T,..., (m - I) T into the intervals A1, A2, ... , A,,, of length T. Let us turn to the method of constructing the upper function (described in Example 5.1.1) such that RT (t) can be assumed to coincide with one of the functions a;,. (t) on the interval Ak, i.e., t E Ak.
RT (t) = a;A.(t),
This implies (5.1.27)
I
JO
»,T
2T
T
RT(r)dr =
a;,dr+ JO
JT
mT
a;,,,(r)dr. (,n - I) T
We show that the quantity (5.1.27) is a particular value of z for some specific values of the variables of integration. Let us associate to each interval Ak the kth group from the collection (5.1.26) and study this problem for k = 1 (the arguments for k > 1 are similar). The first summand in (5.1.27) is obtained if, in the first group, the d is extended to the whole interval AI, i.e., the points preceding a;, integral t;, - I are contracted to zero and the succeeding ones, to t, = T ; this gives 0=t1=...=ti,-I,
(5.1.28)
= to = T. ti, = ti, +I = Further, determine the subsequent deviations of the variables on the interval Al from their values in (5.1.28). On AI = [0, Tj, we place n intervals of unit length grouping them as follows: it - 1 ones are adjacent to zero, and the others, to T. Now let each t; run over only the corresponding interval indicated in Figure 2. Note that the deviation of t; from its value (5.1.28) does not exceed n. Note also that the variation of each variable t1, t2i ... , tn_ I affects two integrals (the lower limit of one of them and the upper limit of the other), and t,, affects only the last integral; moreover, <1 A Therefore, the sum of the integrals of the first group under the deviation described does not exceed 2M(n - 1), i.e., (5.1.29)
1st group
T
>f o
RT (r) dr - 2Mn(n - 1).
§1. CENTRAL EXPONENTS
F=I1- -i ti
t.
ti-1
t2
133
...
to-I
to
T
0
FIGURE 2
Repeating similar arguments for A2i ... , A,n, from formula (5.1.27) and the inequality (5.1.29) we obtain fntT
fmT
(5.1.30)
z >J
RT(T) dT - 2Mn(mn - 1) >
0
J0
RT (T) dr
- 2Mn2m.
We return to the integral J,(,1')(0 for k = nm - 1, t = mT. The integrand is positive; therefore, when the domain diminishes, the integral decreases. According to the definition (5.1.23), its domain of integration is the interval 0 < t1 < t2 < ... < tmn-1 < I = Inm)
and the variables t; run over an (mn - 1)-dimensional cube with edge of unit length, i.e., a subdomain of unit volume. Therefore, mT
Jm1(mT)
exp
f
RT (T) dT
- 2Mn2m
We return to the estimate (5.1.25). It is of interest to us for sufficiently large t. Let
mT
anm-1 Jnnm-1)(mT) mT
RT dT - 2Mn2m
3 dnin-1 exp 0
fmT
= exp J
RT dT
- 2Mn2m + (nm - 1) In b
0
By inequality (5.1.13), for any e > 0 there exists an N such that for any m > N we have inT
RT(T)dT 3 (S2-e)mT.
(5.1.31)
Hence,
x[xl = flini
t In 11x(t)II (5.1.31)
In Ilx(mT)II
m-oo mT 2Mn2 - n ln8 7,
lim
1>
=
m-ac 52
(nm - 1) ln6 - 2Mn2m + (i2 - e)mT mT
-E-81.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
134
Note that -I > 0, since 6 is small, and e I --> 0 as T -* oo. Thus, we have shown that 0 system (5.1.19) has a characteristic exponent which is not less than f2. COROLLARY 5.1.3. In diagonal systems the upper central exponent 1 is always the least upper bound of mobility of the greatest exponent, and the latter is rigid if and only if it coincides with 92.
Let us illustrate the theorem proved on the example of the system considered in Example 5.1.4. EXAMPLE 5.1.5. The system
xl = 0,
(5.1.32)
X2 = 7r Sin 7Cy `x2,
as follows from Example 5.1.4, has X11 = A, = 0,!Q = 1. We show that the perturbed system
xl = axe, x2 =CXI +7rsin 7rvrtx,
(5.1.33)
has a characteristic exponent A' such that A' > Cl = 1. We repeat the arguments of Theorem 5.1.5, and consider J.") (1) for r even and a = 2,
Since
R(t) = (7rsin 7r'+ I7rsin7r,t I)/2, considered in Example 5.1.4, is a sharp upper function, we can modify the partition of R+, and, instead of the intervals Ok of length T, we can take the intervals Ak = [T2k, T_k+l ],
where
Ti=12,
k=0,1,....
Let us estimate J22n') (T2,,,+I ). The function
Z(tl,t2i..t,.,t)=J a2dz+J t,
0
for t, = T,, t =
a2dz r,
coincides with m
(2k+1)2
R(r)dz= k_0
(2k)'
and when t, changes in the bounds
It,-Ti4<' 1, z (t I, t2i ... , t,., t) changes by at most 4m7r. Indeed, let us turn to the inequality (5.1.30).
In our case M = 7r, n = 2, mn - I = r = 2m, the result of the deviation should be
§1. CENTRAL EXPONENTS
135
divided by 2 because a, - 0, and there are only two diagonal coefficients. Repeating the subsequent arguments of the theorem, we obtain
'82niexp Ilx(T2n,+1)II >
R(t)dt-4nm 0
R(t)dt-4nm+2min6
=exp f0
.
Hence, x[x] =11lim
I In Ilx(t)II
00 t
lim
I
m- oo T2m+1
In Ilx(T2m+,)II
2(2m+ 1)(m+ 1) -4nm+2min8
lim M-00
(2m + 1)2
= 1.
EXAMPLE 5.1.6. Consider a variant of perturbation of system (5.1.32). Instead of system (5.1.33), we consider the system
X, = (alf)x2>
(5.1.35)
z2 = (d/ f )x, + is sin is f x2i
where, as above, 6 is small, but the perturbations in this system tend to zero, i.e., the central exponents of systems (5.1.32) and (5.1.35) coincide (Theorem 5.1.2). We indicate the differences from the arguments of Example 5.1.5. The function JZ')(t) under the sign of the integral contains the extra factor 1/( t, t2... t,.) in comparison with (5.1.34). For I = T2,,+, this factor is estimated from below by the quantity 1/(T2m+1)m = 1/(2m + 1)2m
Then Ilx(T2m+1)II >
62m
rT +i exp
J
R(t) dt - 47cm
1
(2m + 1)22
T,,;,+i
> exp
(10
R( t) dt - 4xm + 2m ln8 - 2m ln(2m + 1)
Finally, we have AX1 > n lim
2(2m+1)(m+1)-4nm+2mIn 6-2min(2m+1) (2m + 1)2
= 1.
In the preceding example we stopped at this inequality. Here we can state the exact
equality x[x] = 1. Why? The upper central exponent of system (5.1.35) is equal to unity; therefore, the inequality x[x] > 1 is impossible.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
136
§2. On the stability of characteristic exponents Let us turn again to systems (5.0.1) and (5.0.4). From the examples of the previous section it is clear that small perturbations of the coefficients of a system can lead to finite shifts of the characteristic exponents. Thus, in Example 5.1.6 the perturbation
Q(t) =
0
5//i
6/V'-t
0
shifts the greatest exponent A,, of system (5.1.32) by one to the right and, at the same time, for system (5.1.35) the perturbation
-al
0
Q(t) =
-b/,/
0
shifts the greatest exponent by one to the left, since it returns us back to system (5.1.32). A similar situation is possible for the interior points of the spectrum (5.0.3) of system (5.0.1). Nevertheless, a sufficiently ample class of systems (5.0.1) possesses spectra that
are rigid under small perturbations. In what follows we shall find which properties of a system determine this behavior, prove necessary and sufficient conditions for the rigidity of the spectrum, and show, e.g., that autonomous, periodic systems, as well as systems that are reducible and almost reducible to autonomous systems, possess such rigidity.
DEFINITION 5.2.1. The characteristic exponents of system (5.0.1) are said to be stable if for any e > 0 there exists a 6 > 0 such that the inequality sup,ER+ IIQ(t)II < o implies the inequality (5.2.1)
12
-.1;I
EXAMPLE 5.2.1. The characteristic exponent of a linear scalar equation is always stable. The equation z = a (t) x has the solution .
x(t) = x(0) exp
J0
a(r) dz.
Therefore,
2 = lim 1
a (T) dT.
toc t
fo
The perturbed system
y = [a(t) + q(t)]y,
jq(t)I < 6
where
for
t E R+,
has the solution r
y(t) = y(0)exp f [a (-c) +q(r)I dT, 0
or
exp(
fIa(r)dz-btl Y(0)<exp(
f,a(T)dT+dt
and, according to the properties of the exponents, we have
.1-8 «'
IA - A'I < 6.
JJJJJJ
§2. ON THE STABILITY OF CHARACTERISTIC EXPONENTS
Here
x[y] = A'= Eli m
I
137
f [a(z) + q(z)] dz.
J
Note that, actually, in the study of the variation of the characteristic exponents the inequality 11Q(t)II < a is essential not on the whole half-axis Ri., but starting with some sufficiently large moment T. Indeed, the exponents are determined by the behavior of the solutions as t - oo, and the variation of the coefficients on a finite interval does not affect this behavior. Moreover, stable exponents do not change under perturbations such that 11 Q (t) II - 0 as t - oo. Let us prove these statements. THEOREM 5.2.1. Let the characteristic exponents of system (5.0.1) be stable and IIQ(t)II -> 0
as
t
oo;
then the spectra of systems (5.0.1) and (5.0.4) coincide. PROOF. Let
X(t) = {x1(t),...,xn(t)} be a normal basis of system (5.0.1) and X[xi] = Ai,
i = 1,...,n.
Consider the system
z = [A(t) + GT(t)]x,
(5.2.2)
where the matrix GT(t) - 0 for t >, T, i.e., the perturbation is restricted to the finite interval [0, T]. Take the basis X(t) of system (5.2.2) satisfying the initial condition X(T) = X(T). By virtue of the condition for GT(t), these bases coincide for t > T, i.e.,
(t)-X(t),
(5.2.3)
t>' T.
The basis X(t) is normal since it is incompressible (because X(t) is incompressible), consequently, it realizes the complete spectrum of system (5.2.2), and, by the condition (5.2.3), this spectrum coincides with that of system (5.0.1). Thus, a perturbation of a system on a finite interval of time does not change the spectrum. We turn to the proof of the statement of the theorem. Let, to the contrary, the spectra of systems (5.0.3) and (5.0.4) be different for IIQ(t)II -+ 0,
t->OO,
i.e., let for some j
IAj-.1'I=a>0.
(5.2.4)
Set e = a/2 and, by the stability of characteristic exponents of system (5.0.1), define 6 > 0 such that for sup,ER,. IIR(t)II < a the spectrum a1 -<, 012 <... -an
of the system
y = [A(t) + R(t)]y satisfy the condition (5.2.1), (5.2.5)
Iai - A, < a/2,
i = 1, ... , n.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
138
For the same 6 > 0 we define T > 0 such that
IIQ(t)II <6
t > T.
for
Let us construct the matrix R(t) in the following way:
R(t) - Q(t)
t > T,
for
and R (t) is arbitrary with
IIR(t)II <6
for
t E [0, T].
According to the arguments at the beginning of the proof,
since the matrices R(t) and Q(t) differ on a finite interval of time; then the inequality 0 (5.2.5) contradicts the inequality (5.2.4).
REMARK 5.2.1. Without the assumption on the stability of characteristic exponents, Theorem 5.2.1 does not hold (see Perron's Example 4.4.1 and Example 5.1.6). LEMMA 5.2.1. The stability of characteristic exponents is invariant under Lyapunov transformations.
PROOF. These transformations preserve both the spectrum of a system and the 0 smallness of perturbations. Now we present a sufficient condition for the stability of characteristic exponents obtained by Malkin [26]. THEOREM 5.2.2 (Malkin). Let the Cauchy matrix
X(t,r) _ {xI(t,T),...,xn(t,T)1 of system (5.0.1) be such that
1. x[xi(t,T)] =,ui, i = 1,...,n, 2. for any y > 0 the following inequalities hold: (5.2.6)
Ilxi(t,r)II
Cexp(,ui +y)(t -T)
(5.2.7)
IIxi(t,r)II
Cexp(,ui-y)(t-T)
for for
T
0,
T?t
0,
I
where C is a constant depending only on y and independent of r and t. Then the characteristic exponents of system (5.0.1) are stable. REMARK 5.2.2. The notation ui for the exponents is introduced because we cannot
guarantee that the Cauchy matrix is normal, and it may not realize the complete spectrum. Naturally, ui coincides with one of the A, , i, j = 1, . . . , n. PROOF. The proof of the theorem consists of three parts. Let us prove successively the following: 1. the shift of the characteristic exponents to the right is small, 2. system (5.0.1) is regular, 3. the shift of the characteristic exponents to the left is small.
§2. ON THE STABILITY OF CHARACTERISTIC EXPONENTS
139
We note that Malkin proved items 1 and 3 under the assumption that the system is regular. Bogdanov showed [7] that the conditions of the theorem guarantee that the system is regular; this result is presented in item 2. 1. Let to E R+. Any solution y(t) of system (5.0.4), by virtue of the method of variation of constants, satisfies the integral equation
y(t) = X(t, to)y(to) + f t X(t, r)Q(r)y(T) dr. to
Note that the equality to = oo is possible if the integral converges. We distinquish n solutions yl (t), ... , yn (t) of system (5.0.4), whose initial data at t = to coincide with the initial data of the solutions x1(t), ... , xn (t) of system (5.0.1), constituting a normal basis of system (5.0.1). We assume that k = 1,...,n - 1.
X[Xk] _ 2k <, 2k+1 = X[xk+1],
This gives n integral equations of the form (5.2.8)
Yk(t) = xk(t) + f t X(t,r)Q(r)Yk(r)dr,
k = 1,...,n.
to
Let us estimate the characteristic exponents of the solutions yk (t). Take an e > 0 such that
0 < E < (An - 2k)/2,
(5.2.9)
An 0 Ak
For such an e > 0 there exists a constant A such that t > 0.
Ilxk(t)II < Ae(4+E) ,
(5.2.10)
Our goal is to show that (5.2.11)
IIYk(t)II <
2Ae("+0t,
t > 0.
Consider the Picard approximations for equation (5.2.8): yko)(t) (5.2.12)
= Xk(t),
/
t
//
X(t,T)Q(T)Ykm-I)(T)dr,
Ykm)lt) = Xk(t) + f
m = 1,2,...
to
For yko), the inequality (5.2.11) is satisfied by (5.2.10). Assume that for the (m - 1)th approximation we have IIYk"'-1)(t)II
(5.2.13)
< 2Aet > 0,
and show that this estimate is valid for the mth approximation. From the formula (5.2.12) it follows that
f
IIYkn)(t)II <
t
II X (t,T)II ' IIQ(T)II '
tp
Let us estimate the integral, setting
J to
0, 00,
if )n = Ak, if An > Ak.
IIYk,n-1)(T)Il
dr
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
140
We assume that y = 6/2 in the inequalities (5.2.6) and (5.2.7). Consider two cases: a) to = 0, i.e., t > z; using (5.2.6), we have
f r IIX(t,r)il - IIQ(r)II -
IIy;,,,-')(r)
II dr , 2Acb f
0
r
+E)r dr
0
eEZ/2
=
dr <
4Acb
0r
e(%, +E)t
6
b) to = oo, i.e., r > t; using (5.2.7), we obtain
f r IIX(t,r)II
- 110-011 -
IN,-')(I)II
d-r <, 2A c,5 f
00
e E/2)(,-=)e(;1k+E)=dz r
00
=
2Ac8ee(:.k.-
+3E/2)z
dr
r
2Acbe (' -E/2)r e(ak. -;,,+3E/2)r
An - i1,k - 36/2
<
4Acbe(,.A+E)1
6
The last two inequalities follow from the condition (5.2.9).
In both cases we have obtained identical estimates for the integral. The estimate (5.2.13) also holds for the mth step if the integral does not exceed the quantity A exp(Ak + e)t. For this reason we choose 6 < 6/(4C). Recall that C depends on
y = e/2. We know [5] that the Picard process converges to the solution of the integral equation (5.2.8) for all t E R+; therefore, from (5.2.13), as m -> 00, we obtain that the estimate is valid. Thus, we have n solutions yI (t), ... , y,, (t) of system (5.0.4). Let us show that they form a basis of this system. Indeed, as can be seen from the estimates for the integrals, f o r t = 0 and sufficiently small6 the vectors y1(0), y2(0), ... , y, (0) differ little from the vectors x1(0), x2 (0), ... , x, (0), respectively . But the latter are linearly independent; therefore, Det{x1(0), ... , xn(0)} 54 0; this is sufficient f o r the solutions yI (t ), ... , y, (t) to be linearly independent. Moreover, from the estimates (5.2.11) we have X[Yk] <, 2k + E-
lf the basis y1(t),..., y,, (t) is not normal, then, by passing to a normal basis, the exponents can only diminish; therefore, we have (5.2.14)
),. , A.k +6,
k = 1,2,...,n.
2. Let us show that under our assumptions system (5.0.1) satisfies the conditions of Lemma 3.5.1 on regularity; i.e., 1) there exists lun,y0 1, f o Re Sp A(r) dr = S, 2) LI=1 ,1f; = S. According to the Ostrogradskii-Liouville formula, we have
DetX(t,0) = expfor SpA(r) dr.
§2. ON THE STABILITY OF CHARACTERISTIC EXPONENTS
141
Note that
[DetX(t,0)]-1 = DetX-1(t,0) = DetX(0,t). This and the condition (5.2.7) imply that n
=DetX(0,t)
e-fos1I(T)
;=1
= C"n!exp
(_11)t
exp(nyt).
The right-hand side of the inequality has a sharp characteristic exponent; therefore,
.1
/
fI
\
0
n
RE 1 I -J ReSpA(z)dzl 00 t
/
-E,u; +ny, ;=I
or n
f
(5.2.15)
lim 1 t
1x J
ReSpA(z)dz > 'Re
0
,u; -ny. =1
By the Lyapunov inequality (2.5.1) and by virtue of (5.2.15), we have "
1
r
Re Sp A(z) dz
li m
,u;
1
;=1
lim 1 ReSpA(z)dz r-,00 t J0
(5.2.16)
n
>, Eu; - ny. ;=1
Since y is arbitrarily small, the inequalities (5.2.16) become a chain of equalities and both statements formulated at the beginning of the proof of this item are valid. 3. Now let us show that the shift of the exponents to the left is small. The regularity of system (5.0.1) implies that
[exPJSPATdT].
.1; = X
By the Lyapunov inequality, we have (5.2.17)
I
>' X
[expft(spA(T) + Sp Q(z)) dl J
1
for system (5.0.4). Under the condition (5.0.5), IT X exp I Sp Q(z) dz <, no. 0
The function exp
[f'spA(t)dT]
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
142
has a sharp charactertistic exponent; therefore, (5.2.17) implies
I
exp.I Sp Q(T) dTl
A' > X fexp f r Sp A(T) dTJ + X i=1
L
J
J
o
LLLI
(see Theorem 2.1.1), and, finally, n
(5.2.18)
n
ai>Ai -nb.
According to e > 0 we choose a b > 0 such that (5.2.14) is satisfied and introduce y; > 0 so that
A =d;+e- y;,
(5.2.19)
i=1,...,n.
Substitute the expression obtained for 2 in (5.2.18): n
n
n
A;+ne-y; >
.1; -nb,
yi
or
From the last condition and (5.2.19), we have (5.2.20)
A >A;+e-n(e+b).
Combining (5.2.14) and (5.2.20), we obtain our claim.
O
COROLLARY 5.2.1. The Cauchy matrix X(t,T) satisfying the conditions (5.2.6) and (5.2.7) is a normal fundamental matrix.
PROOF. For this matrix the inequalities (5.2.16) imply that the Lyapunov inequal0 ity (2.5.1) is satisfied; consequently, the matrix is normal. THEOREM 5.2.3. Linear systems with constant coefficients have stable characteristic exponents.
PROOF. Let system (5.0.1) have the matrix of coefficients A - const (such systems were considered in Chapter I in more detail). The Cauchy matrix for such a system has the form
X(t,T) = QA = e' '-T), T
Let us introduce S, i.e., the matrix that transforms A to the Jordan form, B = S-1 AS = diag[J. (A1), ... , J,,k. (2k)J, where A1, ... , A,, are the eigenvalues of the matrix A (not necessarily distinct); J, (A) is
the Jordan block of order v, E; 1 p; = n. Hence, e`1('-T) = SeB('-T)S-1.
Moreover, (5.2.21)
eB('-T) = diag
[ej,,,ej,A.(%A)(r-T)1
§2. ON THE STABILITY OF CHARACTERISTIC EXPONENTS
143
where 0
1
t - z
ei(t-r)
(t - z)''-1
t-z
1
(v - 1)! Each element of the matrix expB(t - z) satisfies the estimates (5.2.6) and (5.2.7). Indeed, it has the following general form: (5.2.22)
k
(t - z)ke`k E {0, 1, ... , n - 1}.
The inequalities (5.2.6) and (5.2.7) for this function have the form k1(t (5223)
-
Ce(Re..+v)(t-r)
z)ke2(r-r)I
Ii (t lc!
t '>1 z > 0,
Ce(Rei.-)')(t-r)
Z)ke"('-r)
Z>t
0.
Dividing by the exponential, we obtain Ce}'(`-r),
(t - Z)k < I
k!
1 k1 (t -
Cey(r-')
T )k
,
t>' z>' 0,
z>t>0.
Both inequalities are reduced to a single one, and, obviously, there exists a constant C depending on y and independent of z that realizes the inequality k, Ok <, C exp yO, 0 > 0, namely
C = max
1
eER+ ]c!
Ok exp(-yO).
We return to the matrix (5.2.21):
eA(l-r) - V(t)S-1 = The columns of the matrix V (t) are solutions of system (5.0.1). They are divided into k groups. The first one is the result of multiplication of the matrix S by the first p, columns of the matrix exp B (t - z), etc., and the last one is the result of multiplication of S by the last Pk columns of the matrix exp B (t - z). The solutions of the mth group have the characteristic exponent Re A, ,
m = 1,2,...,k.
Multiplying the matrix V (t) by S-I from the right, we have e
A(t-r)
/ / / = {x1(t,t),x2(t,T),...,xn(t,Z)},
where each vector x1 (t, z) is a linear combination of the solutions VI (t), ... , vn (t ). Therefore, the components of any solution x; (t, z) represent linear combinations of functions of the form (5.2.22) with coefficients depending on the constant matrices S and S. For each of these functions, the estimates (5.2.23) are satisfied and the inequalities can only be strengthened if in the right-hand side Re A is replaced with the maximal exponent of the linear combination, i.e., with the exponent of the solution
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
144
xi (t, r). The constant C changes its value because of the multiplication by the constant matrices S and S-1. Estimating the vectors x, (t, r), i = 1, ... , n, componentwise, we verify that the inequalities (5.2.6) and (5.2.7) hold. 0 THEOREM 5.2.4. The characteristic exponents of systems that are reducible to autonomous ones are stable. PROOF. According to Erugin's Theorem 3.2.1, a reducible system has a fundamental matrix of the form
X(t) = L(t)expAt, where L(t) is Lyapunov, and A is constant. Thus,
X(t,r) = L(t)e'(t_t)L-1(r). It remains to repeat the proof of the previous theorem, changing S to L(t)S, and S-I to S-IL-1(t). Although these matrices are not constant, they are Lyapunov, i.e., bounded on R+; this affects the value of the constant C, but not the form of the
0
estimates (5.2.6) and (5.2.7).
THEOREM 5.2.5. The characteristic exponents of systems that are almost reducible to autonomous ones are stable.
PROOF. Let system (5.0.1) be almost reducible to the system
y = By,
(5.2.24)
B = const,
with the characteristic exponents
,ul <,u2
...
/Zn
This means that for any a > 0 there exists a Lyapunov matrix L,, (t) such that the system (5.2.25)
i = (La 1AL,, - L- 1 L,,)z = [B + $(t )]z
has the spectrum and
sup III (t)II < a.
IO
By the stability of the exponents of system (5.2.24), for any e > 0 there exists a 6 > 0 such that the characteristic exponents v1 < v2 < . < v of the system (5.2.26)
V = [B + Q(t)JV,
sup IIQ(t)II < A,
IO
satisfy the inequalities
Iv1 -,uil <e/2,
i = 1,...,n.
Take a = 0/2; applying the transformation y = L,, (t)w to system (5.0.4), we obtain the system (5.2.27)
with the spectrum
th_(B+(lo(t)+La1QLc,)w
A/ <),;<...
§2. ON THE STABILITY OF CHARACTERISTIC EXPONENTS
145
Let t E R+; for IIL.(t)II <, K, IIL« 1(t)II <, K such a K exists, since Li(t) is Lyapunov; we choose6 = A/(2K22) in (5.0.5). Then for
system (5.2.27) we have
II1(t)+La1Q(t)L,II sIIF(t)II+IILa QLaII<
2+K25= 2+2=A.
Both systems (5.2.27) and (5.2.25) are particular cases of system (5.2.26); thus,
IA, -,u;I <s/2,
i = 1,...,n;
IA; -,u;I <s/2,
consequently,
Ia,; -.1;I <E.
Hence, for e > 0 we have found a corresponding A > 0, and for it we have defined a = A/2 and the matrix L,, (t), generating the constant K, and, finally, 6 = A/(2K22).0 THEOREM 5.2.6. Let AI(t) and A2(t) be bounded and continuous matrices on 1[8+ such that 1.
(5.2.28)
IIAI(t) - A2(t)II
x
0,
2. one of the systems
X = A1(t)x, X = A2(t)x is autonomous, or reducible to an autonomous one, or is almost reducible to an autonomous one.
Then the characteristic exponents of these systems are stable and coincide.
PROOF. Let the system z = A, (t)x possess one of the properties indicated in item 2. We write the second system in the form
X = [AI(t) + (A2(t) - A1(t))]x. The final statement of the theorem follows from the stability of the exponents of the first system (Theorems 5.2.3, 5.2.4, 5.2.5) and Theorem 5.2.1, whose conditions are satisfied by virtue of (5.2.28). 0 EXAMPLE 5.2.2. Consider the system X, _ (1 + cost)x, +
(t
+
1x,,
X2 = e- 21 x, + x, sin t.
The characteristic exponents of this system are stable and equal to 0, 1. Indeed, compare our system with the system
X, = (1 +cost)x,, X2 = x2 sin t,
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
146
which is 2n-periodic and, consequently, reducible to a system with constant coefficients. Its exponents
A = lim
1
I t-.oo t
A2 = lim 1
/*oot
Jo
(1 +cosr)dr = 1,
f sin r dr = 0
J
o
are stable. Here we have used Theorems 5.2.4 and 5.2.6. §3. Integral separateness
Let us touch upon the history of the problem [23]. Perron [32] obtained a result which forms the basis of the theory of stability of exponents. Let us describe it in our terms. Let system (5.0.1) be diagonal, i.e., let
z = diag[a1(t),
. . . ,
where
k=1,...,n-1, t>0;
Re(ak+1(t)-ak(t))>a>0,
(5.3.1)
moreover, let 11Q(t)II -' 0 as t (5.0.1) and (5.0.4) coincide, i.e., A
' = A, = lim 1 r-,oo t
oo. Then the characteristic exponents of systems
f
Rea1(r)dr,
i
The inequality (5.3.1) is called the condition for separateness of the diagonal of system (5.0.1), or the condition for separateness of the functions a1 (t), ... , a (t). Note that the Perron conditions allow one to calculate the characteristic exponents via the coefficients.
EXAMPLE 5.3.1. Let us find the characteristic exponents of the system
z1 = (cosln t + sin lnt +
sinInt
X2 = (t - 1)2x1 + 2x2.
t+
1)x1 + te'x2,
/
The matrix of coefficients of the system is the sum of the matrices 2
diag[cos In t + sin In t, 2]
and
Q(t) =
+1
to-'
sin In t
(t - 1)2 here II Q(t) II
0 as t
oo. Note that
2- (sin lnt+cosInt) > 2-'> 0, i.e., the condition for separateness is satisfied; therefore,
Iim 1 act
(sin In r + cos In r) dr = 1, fo
A2 = 2.
0
§3. INTEGRAL SEPARATENESS
l47
Let us see whether it is possible to obtain this result by the methods considered in the previous section. If it turns out that the exponents of the system
x = diag[(sin In t + cos In t), 2]x are stable, then it is possible. It is clear that the diagonal system is neither autonomous, nor reducible, nor almost reducible to an autonomous system. Let us verify that the condition (5.2.6) of Theorem 5.2.2 is satisfied for xI(t); we have
Ixi (t, r) I = exp(t sin In t - r sin In r).
We must show that for any y > 0 there exists a constant C(y) such that for t > r > 0 the inequality etsinlnt-rsinlnr \
holds, or
Ce(I+Y)(t-r)
tsinInt-rsinlnr<(1+y)(t-r)+lnC.
Take y = e-n(1 - e-")-I and consider the sequences t o = exp (2mn +
rm = exp (2mn
2) ,
- 2)
Hence, e2mn+i
+
e2mn_i
(1 +
y)(e2mn+7
-
e2'nn-s)
+ In C,
or
1+e-"< 1+
(1-e-n)+e-2mnIn C;
1 - e-n
therefore, 1+e-n
SI
+e-2mnIn C.
Obviously, there is no constant C such that the inequality holds for arbitrarily large m, i.e., the sufficient condition for stability is not satisfied. In what follows, we shall see that, under the condition (5.3.1), the exponents of a diagonal system are actually stable, and, at the same time, the condition for separateness of the diagonal can be substituted by the condition for integral separateness. DEFINITION 5.3.1. The bounded continuous functions a I (t), a2 (t ), ... , a,, (t) are said to be integrally separated on R+ if there exist constants a > 0 and d > 0 such that (5.3.2)
f
[a1+I(u) - ai(u)]du 3 a(t - s) - d
s
for all t>s>0,i=1,...,n-1. Obviously, the condition for separateness implies integral separateness, but not vice versa.
EXAMPLE 5.3.2. The functions cos t, l are not separated, but are integrally sepa-
rated with the constants aI = 1, d = 2. It should be noted that Perron's result is not applicable to the diagonal system
x = diag[cost, 1]x, while Theorem 5.2.6 is, in contrast, applicable. Indeed, if the coefficients of this system
are perturbed by small stunmands tending to zero as t -> oo, then, according to Theorem 5.2.6, the characteristic exponents of both systems are stable and coincide,
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
148
i.e., are equal to 0 and 1. This statement follows from the fact that the unperturbed system is reducible to an autonomous one. The notion of integral separateness was introduced for systems of a general form by Bylov [11]. It, naturally, appeals to properties of fundamental matrices and not to the coefficients. DEFINITION 5.3.2. A linear system is said to be a system with integral separateness if it has solutions xl (t), ... , xn (t) such that the inequality IIxi+1(t)II
(5.3.3)
.
IIxi+1(s)II
Ilxi(t)II > de' IIxi (s)II
with some constants a > 0, d > 1, is valid for all t > s. Integral separateness is rather a strict condition and, therefore, has a lot of implications concerning different properties of a system, in particular, as will be shown below, the stability of characteristic exponents. Note some obvious ones. PROPERTY 5.3.1. Integrally separated systems have different characteristic exponents.
PROOF. We set s = 0 in the inequality (5.3.3) and take logarithms: In IIxi+I (t) 11 - In IIxi (t) 11 > at + Ind - In IIxi (0) 11 + In IIxi+i (0) 11,
whence lira 1(In IIxi+i(t)II -In Ilxi(t)II) > a.
toot At the same time,
1 lim 1 In IIxi+1(t)II < lim 1 lnllxi+l(t)II+ lim 1 In ,-,00 t '-oo t llxi(t)II Iixi(t)II
-0 t
= x[xi+I ] -
x[xil.
Finally, we have
x[xi+Il - x[Xi] >, a > 0. Note that the fact that characteristic exponents are different does not necessarily imply integral separateness of solutions. EXAMPLE 5.3.3. The system
XI = 0, X2 = (sin In t + cos In t)x2
has the fundamental matrix
X(t) = {XI(t),X2(t)lI = C 0
0 et sin In t
Therefore, X[x1] = 0, x[X2] = 1. Let us verify whether the condition (5.3.3) is satisfied: IIx2(t)II IIx2(s)II
.
IIxI(t)II
Ilxl(s)II
e'sinIni -s sin Ins
§3. INTEGRAL SEPARATENESS
149
Take
s = exp(2m - 1)7r, 2m > I > 1. For these values of t and s the right-hand side is always equal to 1; therefore, (5.3.3) is not satisfied. t = exp 2m7r,
PROPERTY 5.3.2. Integral separateness is invariant under Lyapunov transformations.
Let an integrally separated system (5.0.1) be reduced to the system jy = B (t) y by
a Lyapunov transformation y = L(t)x. We show that this system is also integrally separated. Consider its basis Y(t) = {L(t)x1 (t), L(t)x2(t), ... , L(t)xn(t)}, where x, (t), ... , xn (t) are solutions of (5.0.1) satisfying (5.3.3). Let a constant K > I be such that IIL-1(t)II <_ K
IIL(t)II <, K,
t E R.
for
Consider the following inequalities:
IIL-1Lxll < IIL-'IIIILxII <, KIILxII: therefore, IILXII >
"x",
IILxII <,.IILIIIIxII.
Now let us verify directly the inequality (5.3.3): IIL(t)xi+1(t)II - IIL(s)xi(s)II
1
IIxj+,(t)II . IIx;(s)II
d
a(,-.r)
IIL(s)x;+t(s)II . IIL(t)x.(t)II > K4 Ilxi+,(s)II . Ilxi(t)II ' K4e PROPERTY 5.3.3. A basis of system (5.0.1) .satisfying the inequality (5.3.3) is normal.
Indeed, all the characteristic exponents of the solutions x, (t), ... , xn(t) are different, i.e., a linear combination whose exponent is less than those of the solutions is impossible. PROPERTY 5.3.4. The diagonal system x = diag[a 1 (t), .
.
. , a, (t) ]x
is integrally separated if its diagonal coefficients are integrally separated.
To prove this statement, it is sufficient to consider the basis
X (t) = diag
[expft a, dr, ... exp
J0
a,, dr]
and verify whether the inequalities (5.3.3) are satisfied for it if the inequalities (5.3.2) hold.
Note that the converse for the latter statement is also valid; this immediately follows from the subsequent results of this section. The following important property was proved by Bylov [11]. THEOREM 5.3.1 (Bylov). An integrally separated system is reducible to a diagonal one.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
150
PROOF. Let the inequality (5.3.3) be satisfied for the solutions xl (t), ... , xn(t) of system (5.0.1). For the sake of convenience, we introduce exponential characteristics of solutions in the following way: we associate the function
p, (t) = dt In 11x(t)II to the vector-function x (t); this defines functions p ( ' ) , . .. , pn (t), and (5.3.4)
llxi(t)II = llxi(s)Ilexp f
pi(r)dr,
i = 1,...,n.
0
From the conditions (5.3.3) we have (pi+I(Z) - pi (r)) dz > a(t - s) + In d,
(5.3.5)
i.e., we come to the integral separateness of the set of functions pl (t), ... , p" (t). Note that p, (t) is the same for x(t) and Cx(t), C a constant. Now let us turn to Theorem 3.3.3 on the reduction of a system to block-triangular form. According to Corollary 3.3.2, a linear system is reducible to a diagonal one by means of a Lyapunov transformation if and only if it has a basis
X(t) = {xl(t),...,xn(t)} satisfying the condition G (X) IIx2(t)Il2...Ilxn(t)II2
IIXI(t)II--'
> p >0
for
t E R+,
where G (X) is the Gram determinant of the solutions xl (t), ... , xn (t). The same condition in Remark 3.3.4 is reformulated in terms of angles between subspaces, (5.3.6)
IIx1 112
. IIG(IX)
.. II xn 112
= sine /31 ... sin 2 fin_ I > p > 0,
where
/3k=<(Lk,xk+I),
k=1,...,n-1,
and Lk is the k-dimensional linear subspace spanned by the solution vectors
X1(t),x2(t),...,xk(t). Thus, our goal is to prove that all the angles /3k are bounded away from zero. Moreover, we show that for all x E Lk the inequality (5.3.7)
llx(t)ll \ B,,ef," 'l-l"t Ilx(s)Il
t > s > 0,
is valid. From what follows it is clear that this inequality provides the property required for the angles. We introduce the assumption (5.3.8)
inf pi(t) =A> 0,
i,IER+
which will be dropped at the end of the proof.
§3. INTEGRAL SEPARATENESS
151
We use induction. For k = 1, the statement of the theorem and the estimate (5.3.7) are obvious. Now assume that f o r all k = 1, 2, ... , m the inequality IIxG
(5.3.9)
p>0
=sin2fl,...sin2fl/
III2,''Ihxkll
has been established, and for x E Lk the estimate IIx(t)II
(5.3.10)
11X(S)11
S Bkexp f
Pk(z)dr
holds.
We show that both statements are preserved for k = m + 1. Let, contrariwise, oo and a the first condition be violated. Then there exists a sequence ti -> oo as i sequence of solutions x; (ti) E Lm such that
/im(ti) = 4{x;(ti),xm+1(ti)} ->000.
(5.3.11)
,
Without loss of generality, we assume that
Ilxi(ti)II =
IIxm+I(ti)II = 1.
Therefore,
i oo, as IIxm+l (ti) - x(ti)II -> 0 and, according to the theorem on integral continuity [5], for any fixed T > 0 we have IIxm+1(ti + T) - x; (ti + T)II -> 0.
(5.3.12)
r-,oc
At the same time, by virtue of the assumption (5.3.10) and the identity (5.3.4), we have
IIxm+1 (ti + T) -x'(ti +T)II IIxm+l(ti + T)II - Ilxi(ti + T)II exp
f f
r+T
- Bm exp
Pm+t (t) dz
r
= eXp
r+T
p,, (r) dz
r
r+T
(
e--T] - Bm d
t+T ii +T
[1_ Bm exp I f
Pm+l (z) dz
\r
r
>e U [1
f
(z) - Pm+t (z)) dz
> 1.
The last inequality can always be ensured by the choice of a sufficiently large T > 0. The inequality obtained contradicts the condition (5.3.12), which is due to the assumption (5.3.11). Thus, the inequality (5.3.9) holds for k = m + 1. We pass to the proof of the inequality (5.3.10) for k = m + 1. Let
xEL,,,+1,
114 0 0.
Represent this solution in the form (5.3.13)
X (t) IIx (S)II
= C1xr(t) + C2xn,+1(t),
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
152
where x' E L,,, and IIx'(s)lI _ IIxn,+1(s)II = 1, CI and C2 do not vanish simultaneously. By virtue of the above arguments,
0 < p < <(x'(s), xn,+I (s)) < 7z/2.
(5.3.14)
By the representation (5.3.13) for t = s, we have that a linear combination of two unit vectors forming an angle satisfying the condition (5.2.14) is a unit vector. Obviously, I C1 I < C, I C21 < C and the constant C does not depend either on x E Ln,+I or on s > 0. Therefore,
11x(s)11
< CB,,,exp
f pn,(T)dT +CexpJ, pn,+I(T)dr r
r
exp
f pn,+I (T) dT IC + CBn, exp f (p, .
r
L
- p» ,+I) d-I
s'
)
C I 1+ Bm d e-"((-S ] exp f pn,+1 dT rr
LL
[1
n,
+
l
exp
pn,+1 (T) dr.
JJ
Thus, we have completed the proof under the assumption (5.3.8). The case inf;,,ER+ p,(t) < 0 is reduced to the strict inequality (5.3.8) by the Atransformation z = x exp At, where A is sufficiently large and satisfies
A + inf p1 > 0. i.r
This transformation changes the norm, increases the characteristic exponents by A, but does not affect in any way the angles; therefore, the condition (5.3.6) remains
0
valid.
COROLLARY 5.3.1. If a system is integrally separated, then any solution x E Lk, where Lk is the linear supspaee spanned by the solutions XI (t), ... , xk (t) from (5.3.3), satisfies the estimate (5.3.15)
IIx(t)II < II x(s)II
Bke.f'' Pk(1)(T
where Bk is independent of s. COROLLARY 5.3.2. An integrally separated system is reducible to the diagonal system
(5.3.16)
i=
P(t)z,
where
pr=
dtinjjx;ll,
and the diagonal is integrally separated.
i=1,...,n,
§4. STABILITY OF CHARACTERISTIC EXPONENTS
153
PROOF. Take an integrally separated basis
X(t) = {xl (t), ... , x. (01 of system (5.0.1) and, according to it (similarly to Theorem 3.3.3), construct a Lyapunov matrix L(t) realizing the transformation to a diagonal system,
L(t) _
x
x2
X1
I
IlkIll' 11x211" *
IIxnh1
Then
X(t) = L(t)diag[IIxI(t)11,
(5.3.17)
,
II
Consider the basis Z(t) of system (5.3.16) such that
X(t) = L(t)Z(t); this, according to (5.3.17), implies
Z(t) = diag[Ilxl(t)II,..., llxn(t)ll] At the same time,
P(t) - ZZ-I = diag Idt In IIxI II, ..
, dr In I1 x,,
11]
The separateness of the diagonal follows from (5.3.5).
0
Without giving the proof, we present Millionshchikov's criterion [28] for a small change of directions of solutions of a linear system under small perturbations of the coefficients. THEOREM 5.3.2 (Millionshchikov). The following two statements are equivalent: 1) system (5.0.1) satisfies the condition (5.3.3).for integral separateness,
2).for any e > 0 there exists a 6 > 0 such that if sup,ER+ IIQ(t)11 < 6, then for any solution y(t) of system (5.0.4) there exists a solution x(t) of system (5.0.1) such that
sup <(x(t), y(t)) < e, lER+
where the angle between the vectors x and y is taken in absolute value.
§4. Necessary and sufficient conditions for stability of characteristic exponents
At the beginning of the section we deal with a special Lyapunov transformation that is known as H-transformation in the theory of differential equations [9]. Let a scalar function p(t) be continuous and bounded on IlR+. DEFINITION 5.4.1. The function 1
(5.4.1)
pH(t) = H
/ r+H
J
P(z)dr
is called the Steklov function for p(t) with step H (H > 0) or the Steklov average.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
154
LEMMA 5.4.1. Functions pi (t) and p2(t) are integrally separated if and only if for sufficiently large H their Steklov functions are separated in the standard sense:
a>0,
P2 (t)-pH(t)
(5.4.2)
tE][8+.
PROOF. First let us show that for a function p(t) such that
tER+,
IIP(t)II S M, the following equality holds: (5.4.3)
J
t
pH (x) dx =
Jt p(x) dx + I (t) - I (s),
I1(t)-I(s)I <.MH,
(5.4.4)
where
t-> s-> 0,
t+H
1
t
P(Y)dy I
1(t)=Hf
-
dx. H
Indeed, from (5.4.1) we have
J
t pH(x)dx sft
dx
dy Jxpx+H P(V)
f P(Y) dy JI t
1
=H
ly
t
c
v
1
H fp(y)
dy I
+H
dx + I
Jy-H
t
f
r
s+H
t
1
dx +
t+H
f dx + J'+H p(Y) dy Iy-H dx t
P(Y) dy
H - .f s
y-H
+H
= f t p(y) dy + s
s dx 1 I- f t P(Y) dy f Y-H HL fs+H dy f y-H dx + + p(y)
f
r
-H
dx
t+H
t 11
+ f+H
f
t
1
P(Y) dY +
H
f
jt
t+H
P(Y) dy
-H
y-
c
f
s+H
dx -
dx]
p(y) dy f
H
J c
P(Y) dy f -H
A
By a straightforward calculation, we obtain
H
f t
T+H
t
p(y) dy f - dx y H
this implies the estimate (5.4.4).
M ft+H
H
t
P(Y) dy I dx dx + 1 Jy-H Jt + fr+H p(Y) l dy f t dxl
P(Y) dy
(t-y+H)dy5 MH 2 ,
§4. STABILITY OF CHARACTERISTIC EXPONENTS
155
Now let the condition (5.4.2) hold. Using (5.4.3) and (5.4.4), we have
f'(p2(r) - PI (T))dr =
f(p(t) - PH(T))dr - I2(t) +I2(s) - I1(s) +I1(t) a(t - s) - 2MH;
this implies that the functions p1(t) and p2(t) are integrally separated. Let, conversely, the functions p1(t) and p2(t) be integrally separated, i.e.,
f(P2(u)_Pi(u))du
t>s>0, a>0, d>0.
a(t -s)-d,
Consider the difference of Steklov's averages and estimate it from below,
+H(P2(T)-P1(T))dT>a-H P2(t)-pH(t)=
0
i.e., (5.4.2) holds.
DEFINITION 5.4.2 (H-transformation). Let the diagonal of the matrix A(t) of system (5.0.1) be real; denote it by Ad (t ). Choose H > 0 and form the matrix AdH (t)
1 ft+H =
H
Ad(z)dz.
The transformation (5.4.5)
x =exp f (A,, ! (-c)
Ag(T))dT y
0
is said to be an H-transformation.
This transformation is Lyapunov, since the matrix of the transformation is bounded together with the inverse and its derivative. Indeed, the matrices Ad (t) and ASH (t) are bounded by the assumption (5.0.2); the boundedness of the matrix fo (Ad - AH) dT follows from (5.4.3). THEOREM 5.4.1. A diagonal real system with an integrally separated diagonal is reducible to a diagonal system with a separated diagonal. PROOF. Let the diagonal coefficients of the system
X = diag[a1(t),...,a,,(t)]x = A(t)x be integrally separated, i.e., let the inequalities (5.3.2) be valid. The transformation (5.4.5) reduces this system to the diagonal system
Y(L-1AL - L-1L)y = (A(t) - (A(t) - AH(t)))y = AH(t)y. By Lemma 5.4.1, the diagonal of this system is separated in the standard sense.
0
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
156
EXAMPLE 5.4.1. The system xl = (Cost)x1,
X7 = X7
has an integrally separated diagonal which is not separated in the standard sense (see Example 5.3.2). Using the H-transformation, we obtain the system
yl = y (sin(t + H) - sin t)yl, y2 = y2
with the diagonal separated in the standard sense for sufficiently large H.
We introduce the notion of growth of a vector-function to make the exposition more compact and explicit. DEFINITION 5.4.3. Let a scalar function r(t) be integrable on each finite interval of R+ and a vector-function x (t) be such that
x(t)=o (expf'r(r)dr) or
IIx(t)II S D exp
J0
r(r) dz,
t
where D is a constant. In this case we say that the growth of x (t) is no higher than r and denote this by x - r. THEOREM 5.4.2 [9]. Let the functions p1(t) and p2(t) be separated,
p2(t)-pl(t)>a>0,
(5.4.6)
tER+;
then for any e > 0 (0 < e < a/2) there exists a 6 > 0 such that the system (5.4.7)
z = diag[p1(t), p2(t)]x + Q(t)x = [P(t) + Q(t)]x
for sup,ER+ 11Q(t)II < 6 has the following property: there are no solutions of growth intermediate between pl + e and p2 - e, i.e., any solution x(t) of growth -< p2 - e is, in fact, of growth -< pl + e and, moreover, these solutions admit the estimate f
IIx(t)JI S IIx(0)IIDe expJ (p, (-r) +e) do
(5.4.8)
0
uniform in all such solutions and t > 0.
PROOF. By the method of variation of parameters, any solution x(t) of system (5.4.7) satisfies the integral equation
x(t) = exp
J
r
P(r) dz {x(0) +
P(z) dr 1 Q(s)x(s) ds] J exp (- J r
.
§4. STABILITY OF CHARACTERISTIC EXPONENTS
157
Note that the fact that the unperturbed system is diagonal allows us to write the fundamental matrix in the form exp f1 P(z) dz. The components of the integral equation have the form fr
x1(t) = expJ p1dr (549)
[xi(o)+fexp(_f'Pi dr I [Q(s)x(s)JI ds] ///
o
rr
x2(t) = expJ p2 dT
[x2(o)+f'ex(_f p2 dr) [Q(s)x(s)J2 ds] /
o
Fix e > 0, 0 < e < a/2, and choose a function r(t) such that (5.4.10)
t E R+.
pi(t) + e <, r(t) < p2(t) - e,
This is possible by the condition (5.4.6). Now let a solution x (t) of growth - r satisfy system (5.4.9). Then we have s
Iexp (- f
8D exp
P2 dz ) [Q(S)X(S)12
0
(- J0,V P2 dZ/) exp f t r dz (oDe-f'. o
The integral f0
e-
fo
P, dt
[Q(s)x(s)12 ds
converges; therefore, we must set -x,(0) equal to the value of this integral. Why? If the expression in the square brackets in the second equation (5.4.9) had a nonzero limit as t -f oo, then the function x2(1) would be of growth - p2, which contradicts our assumption. Therefore, 112 d[Q(s)x(S)]2 ds.
x2(t) _ -efo P2 (IT f e
Taking into account the last correction, we again write system (5.4.9) in the form of one equation. To this end we introduce the matrix
Z(t, s) _ (v (t,, s) w( 0 0
s)
where
v(t,s) w (t, s)
_
exp f` pi (z) dr, 0,
_
exp
p2 (r) dr,
fors < t, fors > t, fors > t,
fors
We set x(0) = (xi (0), 0) T; then system (5.4.9) is written in the form (5.4.11)
x(t) =efo I" `kx(0) + f Z(t, s)Q(s)x(s) ds. 0
Let us study this system. Let B'-(r) be the set of all continuous vector-functions x (t) on R with values in R2 satisfying the estimate
IIx(t)JI ( D exp f r r(u) du 0
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
158
with some constant D, i.e., functions whose growth is -< r. Obviously, this set represents a linear space, and for r1 (t) < r2(t), t E R+, we have
B2(r1) C B2(r2).
(5.4.12)
We introduce the norm IIxII, in B2(r) in the following way: (5.4.13)
Ilxll,. = sup
Ilx(t)Ile-
f, rdT
tER+
The space B2 (r) is isometric to the space of continuous bounded vector-functions
y(t) = x(t) exp
(-f' r(T) dr I
with the standard norm sup,ERIY(t) The convergence in this space means uniform convergence on R+. Hence, B2 (r) is a complete linear normed space; from the convergence of the sequence (t) on B2(r) there follows uniform convergence on any finite interval of R+. We write the equation (5.4.11) in the form
x(t) = (t) + Jx(t).
(5.4.14)
By the definition of the matrix Z(t, s) and the inequalities for r(t) (see (5.4.10)), for any t, s R+ the estimate
exp(f r(u)du-elt-sI)
IIZ(t,s)Il
t
is valid. Indeed, for t
s we have
IIZ(t, s)II = exp for t s
IIZ(t,s)II = exp
f
exp f
Further,
IIJx(t) - JY(t)Il
(f
exp (f r r(u) du - e(t -
p1
p2dT = exp f
-p2(T)dT
f(-r(u)-e)du=exp( f'r(u)du-elt-sI).
f
IIZ(t,s)ll . IIQ(t,s)II . IIx(s) -Y(S)II ds
ef'f e,llx -Yllads 0
0
efdrIlx
-YII,.,
or, for 6 < e/2, (5.4.15)
IIJ-JlI,
ellx-YII,,
0<1.
Thus, the operator J (see (5.4.14)) is a contraction and sends the space B2(r) into itself. Note that fi(t) = x(0)exp f t pl(T)dr; 0
§4. STABILITY OF CHARACTERISTIC EXPONENTS
therefore,
159
E B2(r) and
(5.4.16)
IIII, = I1x(0)II = II(0)II.
Indeed,
(t) exp
(-
f
r(u) du) II <
11x(0)11,
and the equality is attained at t = 0. By the contraction mapping principle, the equation
x=i`+Jx
(5.4.17)
has a unique solution x in B2(r) and it can be obtained by the method of successive approximations, xx
+ JXk_l.
Xk
X0 = S,
Hence, 00
x = kx lim xk = + J(xi+1 - xi ). i=0
By the linearity, we have
Majorizing this series by using the estimates (5.4.15) for the solution of equation (5.4.17), we obtain IIXIIY <
1
I ell ll,1
or, taking into account (5.4.16), (5.4.18)
11x(t)II <,
IIX(0)Ilefo,.(u)du
1-0
These arguments were carried out for an arbitrary function r(t) satisfying the inequalities pI(t) + e
r(t) <, p2(t) - E,
t E R+.
With the decrease of r (t) in these bounds, the space B2 (r) only diminishes (see (5.4.12)), the fixed point is unique, and, therefore, is independent of the specific choice of r(t). This implies that the solutions of system (5.4.2) of growth -< P2 - e actually satisfy the estimate (5.4.18) for r(t) = p1(t) + e, i.e., the estimate (5.4.8) holds. 0 We pass to necessary and sufficient conditions for the stability of the characteristic exponents of system (5.0.1). This result belongs to three authors: Bylov and Izobov (joint papers [12, 13]) and Millionshchikov [31]. These papers were published simultaneously in the same issue of the journal Differential Equations in 1969. We give the proof of this result for two-dimensional systems, considering separately the cases of different and coinciding exponents. Then we formulate and discuss the criterion for the stability of exponents for systems of arbitrary order. Consider a system (5.4.19)
X = A(t)x,
where x E R2 and A(t) is a 2 x 2 real matrix continuous and bounded on R+.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
160
THEOREM 5.4.3. If system (5.4.19) has different characteristic exponents Al < 22, then their stability follows from the existence of a Lyapunov transformation x = L(t )z of this system to a diagonal system (5.4.20)
i = diag[pi (t), p2(t)]z = P(t)z,
where the functions pI (t) and P2(t) are integrally separated, i.e.,
f(p2(r)_Pi(r))dra(t_s)_dts0, a>O, dO. PROOF. Together with system (5.4.19), we consider the perturbed system
y = [A(t) + Q(t)]y, having the characteristic exponents Ai < A. We must show that for any e > 0, there exists a 6 > 0 such that i = 1,2, I2,-Ail <e, for sup,ER+ IIQ(t)II < 8. We assume from the beginning that the perturbed system is successively subjected to two Lyapunov transformations. The first is taken from the condition of the theorem (it reduces the matrix A(t) to the diagonal one P(t)), the second is an H-transformation which changes the integral separateness of the functions pI (t) and p2(t) to the separateness in the standard sense. Recall that the Lyapunov transformations neither change the characteristic exponents themselves, nor affect their stability. The matrix Q(t) under these transformations acquires four factors, i.e., two direct and two inverse matrices of these transformations, but the latter are bounded and the smallness of the perturbation is preserved. Now we assume that the perturbed system has the form (5.4.21)
y = diag[pi(t), p2(t)]y + Q(t)y = [P(t) + Q(t)]y,
where IIQ(t)II < b fort > 0, and (5.4.22)
P2 (t) - PI (t) > a > 0,
t E R+.
The fundamental matrix of the unperturbed system (5.4.20) has the form
X (t) = diag fexp L
Jo
1
pl dr, exp fo , p2
dr, J
and the characteristic exponent of any solution
x(t) = (xl(t),x2(t))T such that x2(t) 0 0 is determined by the coordinate X2(t) (this can be seen from the inequality (5.4.22)); therefore, x2 (t) determines the behavior of IIx(t)11 as t -+ oo. This coordinate is said to be leading. From what follows we shall see that this behavior is preserved for the perturbed system as well. Consider the solution
z(t) = (ZI(t),Z2(t))T of system (5.4.21) such that z,(0) = rll, z2(0)172 and Irl2I > I'II Multiplying the first eqation (5.4.21) by zI (t) and the second by z2(t), we obtain 2
2 L
// zi ti = pi(t)zi +E g1k(t)zizk, k=I
i = 1,2,
§4. STABILITY OF CHARACTERISTIC EXPONENTS
161
or
1dz ' I
2 dt
2
p,(t)z '
k=I
Hence, the following estimate holds: (5.4.23)
2
1 dz 2
2
-aEIzlzkl < 2 dt -p;(t)z,. <' a1: Izizkl k=I k=1
Using the last inequality, we show that (5.4.24)
for
Iz2(0I > Izl(t)I
t >, 0.
This inequality, by virtue of the choice of initial data, is valid for t = 0 and, by continuity, in a neighborhood to the right of the origin. Let t) be the upper bound of t such that (5.4.24) holds. Then (5.4.25)
z2(t2) = zi (t2),
z2'(t2) <, zi(t2).
The last inequality for the derivatives and the estimate (5.4.23) imply that
-2az2 + p2(t2)z2 <
2
dtI < 2622 + PI (t2)z2
We have z2(12) 54 0 since in the opposite case it would follow from (5.4.25) that
z2(t2) = z1(12) = 0,
i.e., z (t) - 0 for t > 0. Dividing the last inequality by Z2 2, for its extreme terms we write -26 + P2 (t2) <, 26 + PI (t2), or P2(t2) - PI(t2) <, 4a;
this contradicts (5.4.22), e.g., for a <, a/6. Hence, (5.4.24) is proved. The inequality (5.4.24) implies that X[z2] % X[zt];
therefore,
X[z] = X[z2] We show that this exponent lies in a small neighborhood of A2- Since z; (t) 0 for t E R+ (this follows from (5.4.24)), we divide the inequality (5.4.23) for i = 2 by z2 (t). Then p2(t) - 26 <
d dtz2 <, p2(t) + 26.
2
Let us integrate this inequality, (5.4.26)
P2 (t) dT - 2at < In Iz2(t)I - In Iz2(0)I 5 J / P2 (r) dT + tat. 0
Dividing the last inequality by t and letting t
oo, we obtain '2-2a<X[z2]A2+2a.
We have shown that there exists a characteristic exponent X12 of system (5.4.21) such that IA2 - '4I < e for 6 < e/2.
162
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
We pass to the discussion of the second exponent of the perturbed system. We know its solution z(t) _ (z1 (t), z2(t))T ; this allows us to lower the order of the system by the change Y1 = z1 (t)
f r u(r) dr + v,
(5.4.27)
for
Y2 = z2(t)J u(r)dr, G
where a = 0 if X[u] > 0, and a = oo if X[u] < 0, i.e., the integral is Lyapunov. Substituting (5.4.27) in system (5.4.21), we obtain (5.4.28)
z2(t)u = g21(t)v.
v = pi(t)v + g1I(t)v - z1(t)u,
For the function v(t), we have
v=
F1
(t)v +
g11
g21(t)z1(t)v
(t)v -
z2(t)
'
or, by virtue of (5.4.28), we have e,fp Pi(r)dt-2atv(O) \ v(t) \ v(O)efo pi(r)dr+?ar or dl - 26 <, X[v] <, Al + 26.
(5.4.29)
The characteristic exponent of the solution y(t) defined by the formula (5.4.27) coincides with the greatest of the exponents of its components. The inequality (5.4.24) implies fz2 XI L
r
f
u
drl
X IZI
f
r
u drl J
G
JJ
i.e., to obtain the final result it is necessary to compare the growth of v(t) and y2(t). We estimate the exponent of y2(t), using (5.4.28) and the estimate (5.4.26): IY2(t)I = I z2(t)
f
I
u dr
I-2(t)I
If'
1121 efu
drl Ig2IV I
G
6exp(fo pIdr+26s))
p, dr+2St
CI I112I exp (fo p2 dr - 26s)
ds
t
(pi -P2) dr+4d.` ds
efo P dr+2Ar5
I I.r, efo \efo p, dr+26t6
f
I
e-GS+46s
ds
U
Hence, X[Y2]
A2 + 68 - a.
It follows from the last inequality that the exponent of the solution y(t) is shifted to the right with respect to 22 by a finite distance, and the solution y(t) has the growth
§4. STABILITY OF CHARACTERISTIC EXPONENTS
163
-< P2 - a + 46. By Theorem 5.4.2, there are no solutions of growth between p, + e and P2 - e; therefore, the growth of y(t) -< pI + e, i.e.,
Ai =X[Y]<' AI+e.
It remains to show that Ai > A, - e. The solution y(t) is defined by (5.4.27) and X[Y] > X[v]. Indeed, the decrease of the exponent of the first component is possible only under the condition
X[vl = x
IZI
f J. ,
r
u drJ
but then Y2 (t) is the leading coordinate since r x[Y2l>X[ZIJudr]; J.
therefore,
x[Y] = x[Y2] > x[v]
On the other hand, if t
X ZI J. u dT < AV], then X[y] AM = AV]. Now the estimate for 2 from below follows from (5.4.29) for 6 < e/2.
0
COROLLARY 5.4.1. If the linear diagonal system (5.4.20) has different characteristic
exponents 2I < 22 and its diagonal is separated, then the characteristic exponents are stable.
COROLLARY 5.4.2. If the linear diagonal system (5.4.20) has an integrally separated diagonal, then its characteristic exponents are stable and different.
We give an example of a diagonal system of the second order with different but unstable characteristic exponents. We verify that its diagonal is not integrally separated. EXAMPLE 5.4.2 (Perron).
zI = -axl, X2 = (sin In t + cos In t - 2a) x2,
t , 0, 1 > a > 1/2.
This system was considered in Example 4.4.1. Here
21=-a<2,=1-2a<0. For (5.4.30)
1
1 < 2a < 1 + 2 exp(-n),
the perturbation
Q(t) _ (e`rr Ol 0
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
164
generates a solution, unbounded as t -* oo. This implies the instability of the exponents of the Perron system, since stable exponents do not change under the perturbations t -* oo IIQ(t)II --* 0 as (Theorem 5.2.1). We show that the diagonal of the Perron system is not integrally separated if (5.4.30) holds. Indeed, in the opposite case, the inequality
r
(+a + sin In r + cos In r - 2a) dr > a (t - s) - d,
S
i.e., the inequality (5.4.31)
-a(t-s)+(t sin In t-ssin In s)>a(t-s)-d
holds for all t > s > 0 with some constants a > 0, d > 0. Take the sequences s,,, = exp(2m - 1)x;
t,,, = exp(2m + 1)n,
then from (5.4.3 1) for any m E N we have -ae2mn(e" - e->t) > ae2m,t(e>t - e-,t) - d.
Therefore, taking into account (5.4.30), we obtain oz (e' - e-") < . a(e't
- e-")
< - 2 (e' - e-") < 0,
+de-2""t
+ de-2",>r
m -> 00;
0
this contradicts (5.4.31).
Actually, integral separateness of the diagonal is a necessary condition for the stability of characteristic exponents. THEOREM 5.4.4 [12]. If the exponents Al < 22 of a two-dimensional system
(5.4.32)
dx diag[p1(t),p2(t)]x = P(t)x, dt =
x1 = pi (t)x1, X2 = p2(t)x2,
are stable, then the.functions p, (t) and P2 (t) are integrally separated. Here 1
21 =(--00t lim - f p, (-r) dr, 0
r
.12
lim 1 I P2 (r) dr. = t-00 t 0
PROOF. The idea of the proof is as follows. Assume that the diagonal is not integrally separated. Fixing /3 E (2,, 22), we show that under this assumption for any 8 > 0 the perturbation IIQ(t)II
§4. STABILITY OF CHARACTERISTIC EXPONENTS
165
Assume that the functions pI(t) and p2(t) are not integrally separated. Then for any 8 > 0 we can indicate an infinite sequence of intervals {[Tk, Ok]} such that
dk = Ok - Tk -> 00,
TI, -+ oo
monotonically as
k -a oo,
and 8
fey.
- PI) dT < 4dk.
(5.4.33) TA
Fixing /3 E (21, A2), we show that (5.4.33) allows us to indicate a matrix Q(t), II Q(t) II < 6, for any 6 > 0, so that the perturbed system
dt = [P(t) + Q(t)]y
(5.4.34)
has a solution y(t),
x[yl = $. We choose 6 > 0 so that the inequalities
A + 36/2 <
(5.4.35)
+ 8 < 22
are satisfied. First we use 8/2-smallness of the perturbation in order to change the inequality (5.4.33), namely we consider the system
XI = (PI +8/2)x1,
(5.4.36)
X = P'(t)x,
X2 = P2X2,
with the exponents AI + 6/2, 22, which is intermediate between (5.4.32) and (5.4.34). In what follows, we shall deal with this system; its first coefficient is again denoted by p1(t), and the inequality (5.4.33) for this pl (t) is rewritten in the form A.
8
. (P2 - PI) dT < -4dk.
(5.4.37)
TA
System (5.4.36) will be perturbed on some nonintersecting intervals [to, to + 1 ] by means
of the method of rotations, carrying out in parallel the following two constructions: 1) of a perturbed system (5.4.38)
y = [P'(t) + Q(t)]y
with an already admissible perturbation II Q(t) II < 6/2,
2) of its solution y(t) so that x[y] is equal to $. The idea of the method of rotations consists in the following. In system (5.4.36), x2 is the leading coordinate. Solutions with zero second coordinate have the growth -< pl + 6/2. If this solution is slightly rotated at a certain moment from the axis x1 (i.e., the system is perturbed by a rotation), then it enters the sphere of influence of the coordinate x2 and begins to accumulate the growth up to the level required. Then, by means of rotations, we bring the solution back to the axis x1 and again rotate away from the axis at the next, determined in a special way, moment of time and keep it in the sphere of influence of x2 until it accumulates the growth required, etc. Finally, we shall have constructed a solution IIy(t)II < sincoexp$t
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
166
and proved that there exists a sequence tk -+ 00 such that
IIY(tk)ll = sinciexp ftk. This solves our problem. Now what is co and how is a rotation constructed? The solution y(t) is "patched"
from the solutions of system (5.4.36). The passage from one solution to another takes place on intervals [to, to + 1 ] of unit length and is carried out by an orthogonal transformation
y = U(t,(0)x(t) which results in a perturbation of system (5.4.36). Here
U(t w) - cos±w(t - to) - sin fw(t - to) - ( sin fw(t - to) cos ±co(t - to))
0 < w < '`,
'
2
x (t) is the solution of system (5.4.36) such that y (to) = xo; the sign before co determines
the direction of the rotation. For all t E [to, to + 1] we have
llx(t)II = lly(t)II, and, beginning with the moment to + 1, we continue y(t) along the solution x(t) of system (5.4.36) such that
x(to + 1) = y(to + 1) until the next rotation. The orthogonal transformation indicated leads to the system
y = (UP'U-I + UU-I)y = [P'(t) + Q(t)]y. Thus,
IIQII = II UP'U-I + UU-1- P'll = II UP'U-1-
P'U-I + P'U-I
- P'U-I U + UU-111
ll(U-E)P'U-1+P'U-'(E- U)+UU-'Il Note that II Ull < co; then, by the Lagrange formula, we have 11U - Ell = II U(t,co) - U(to,(o)II < co, since t - to < 1. We have I I U -1 I I = 1; hence, (5.4.39)
IIQ(t)ll < (2IIP'II + 1)w < (2M + 1)w,
if IP2(t)l < M IP1(t)I < M, Choosing a number co such that IIQ(t)ll < a/2
for
for
t E R.
t E R+,
co < 8/[2(2M + 1)], we say that the transformation is admissible (or of admissible smallness) if it is defined
by a matrix U(t,wt), where 0 < co, < co. In the passage from system (5.4.32) to system (5.4.38) the norm of the total perturbation is less than 6, since it results from the perturbation of the coefficient pt (t) (of system (5.4.36)) and the perturbation by
§4. STABILITY OF CHARACTERISTIC EXPONENTS
167
means of a rotation with the estimate (5.4.39). In what follows, when performing a concrete rotation, we do not write out Q(t) explicitly, and only check that the angle of rotation is sufficiently small.
The construction of the solution y(t) of system (5.4.38) will be carried out in cycles. Let us describe the first cycle in detail. Set 00 = 0 and choose To > 1 so that the following inequality holds: (5.4.40)
exp f (p1 + 6) dr < sin w exp /3t,
t > To;
0
this is possible by virtue of (5.4.35). On the interval [00, To - 1] our y(t) coincides with the solution x (t) of system (5.4.36) defined by the initial conditions x2(0) = 0,
x1(0) = 1,
i.e., lies on the axis x1 and is given by the formula t
y(t) = exp10 pi(r)dT(l,0)T, 0
and on the interval [To - 1, To] it is defined by the transformation
y(t) = U(t,wl)expJ p1 dr(1,0)T 7
0
Here 0 < co, < co, the rotation is carried out from the axis x1 to the axis X2, w1 plays the key role in the first cycle and the choice of the required co, completes this cycle. Thus, To
y(TO) =exp f pi(r)dr(coswl,sinw1)T 19o
From the sequence of intervals {[Tk, 9kJ}, which realize the inequality (5.4.37), we choose an interval (in what follows we assume that this is the first interval of the sequence and, thus, it has the form [r1, 01 J) so that rI > To + 1 and, moreover,
ft
sup
(5.4.41)
1
To
J
To
p,dT+J p2dT >/t+8;
Ho
To
this is possible by virtue of the inequalities (5.4.35) and (5.4.40). On the interval [To, rI - 1], we continue y(t) along the solution of system (5.4.36). On the interval [T1 - 1, TI J this system is perturbed by means of a rotation, which we construct having previously defined the solution up to the point rI inclusive. Thus, y (t) on the interval [T0, T I ] is defined by the vector
y(t) = diag
[expf'i dt,expJ p2dT] Y(To) To
o
(5.4.42)
fo
To
= exp
r
p1 dr (cos wI exp JT pI dT, sin wI exp L
Two cases are possible: 1) the vector y(T1) forms an angle ,< co with the axis x1, 2) the vector y(T1) forms an angle > co with the axis x1.
f
T
r
To
p2 dr
168
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
In the first case, by means of a rotation on the interval [TI - 1, TI], we make the vector y(t) at the moment T1 lie on the axis x1 and keep it there (as a solution of (5.4.36)) until the moment 01, i.e., on the interval [Ti, 01] we have (5.4.43)
Y(t) = IIY(TI)Il exp f P1 dT(1,0)T. T,
In the second case we cannot act in this way, since the angle of the rotation would have been greater than co and we should have exceeded the admissible smallness of perturbations. In this case we carry out the transformation by means of rotation twice (on the intervals [T1 - 1, T1] and [01 - 1, 01]), and at the moment 01 we make the vector y(t) lie on the axis x1. On the interval [Ti - 1, T1] we rotate the vector y(t) at the moment TI by the angle w towards the axis x1. The vector y(T1) defined by the formula (5.4.42) lies strictly inside the first quadrant since Y2(TI)
YI (T)
_ tan wl exp
f
(P2 - PI) dT < oo;
To
therefore, after the rotation towards the axis x1 on the interval [Ti - 1, T1], the vector remains inside the first quadrant and forms an angle greater than co with the axis x2; hence, Y2(T1) (5.4.44)
< cot Co.
Y1(ri)
On the interval [T1i01], we continue y(t) as a solution of system (5.4.36) and show that at the moment 01 the angle between the vector y(01) and the axis x1 is less than co. This enables us to carry out the transformation by means of rotation of admissible smallness on the interval [01 - 1, 01] and, at the moment 01, to make y(01) lie on the axis x1. On the interval [TI, 01], the vector y(t) satisfies system (5.4.36); consequently, r
I
r
Y(t) = YI(TI)expf
P1
T,
TI
LL
11T
dT,Y2(TI)expf p2dTJ
By means of inequalities (5.4.44) and (5.4.37), we estimate the angle a between the vector y(01) and the axis x1:
f e, tan a = Y2 (T I) exp YI(TI)
',
(P2 - p1) dT < cot coexp (- 16d1
\4
The interval [Ti, 01 ] is chosen sufficiently far away to the right of To so as to have the inequality (5.4.41); now we make this choice more precise assuming that d1 is so large
that cot co exp(-bdt /4) < tan co. 00, Tk - oo monotonically ask - oo. Under this choice we have tan a < tan co, both angles are in the first quadrant; therefore, 0 < a < Co. On the interval [01 - 1, 01], we carry out the transformation by means of rotation, which at the moment 01 makes the vector y (01) lie on the axis x1, i.e.,
This is possible because dk
Y(01) =
IIY(01)II[1,0jT
((5.4.43) gives the same result). By construction, the vector-function y(t) depends continuously on the parameter wl, and, thus, the value of In IIY (01) on the closed interval 0 < co, < co is finite.
§4. STABILITY OF CHARACTERISTIC EXPONENTS
169
Now we choose the number Ti so that (5.4.45)
TI
(M+ 2) 0, +o supC0lnlly(OI)II <6.
We shall not perturb system (5.4.36) on the interval [01,TI], y(t) is continued as a solution of this system. Therefore, for 01 < t < T1 we obtain y(t) = IIy(O,)II exp foe, pl dr(1, 0) T.
Now, to complete the first cycle we have to discuss an important detail: the smallness of col. Consider the limiting cases. 1) co, = 0. In this case there were no perturbations by means of rotations at all, and on [Oo, T1 ] we have
y(t) = exp f p, dr(1,0)T; 1
0
by virtue of (5.4.40), for To < t < Ti the following inequality holds: (5.4.46)
IIy(t)II <sin wexp/3t.
2) wI = co. On the interval [To, Ti], by virtue of (5.4.42), we have
IIY(t)II > Y2(t) = sinwexp
[LT0PidT+f;P2dT].
The inequality (5.4.41) ensures the existence of a t' E [To, ri] C [To, TI]
such that (5.4.47)
IIy(t')II > sincoexp ft'.
Since the dependence of II Y (t) II on w1 is continuous, by comparing the inequalities (5.4.46) and (5.4.47), we obtain the existence of an w1 E (0,co) such that for the y(t) constructed using this w1 there exists a tI E [To, T1] for which the following relations hold: sincoexp ft, t E [To, T,], IIy(t)II (5.4.48) sincoexpft1, t1 E [To,T1]. IIy(tl)II
The first cycle of the construction of the vector-function y(t), i.e., a solution of the perturbed system, is completed; we pass to the second cycle. We assume that the interval [r2, 02] is located so far away from the point Ti that Ti
(5.4.49)
sup
1
Tu<_i<_r, t
lnlly(OI)II+ f p,dv+ f p2dz >i+6.
This inequality is analogous to the inequality (5.4.41) of the first cycle. Further, we carry out the constructions according to the scheme of the first cycle replacing w1 by w2, and obtain the vector-function y (t) on the interval [02, T2] defined by
y(t) = y(02) eXp
f
e,
pl dr(1, 0)T.
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
170
The number T2 is chosen so that sup lnIIY(02)II
1
T2
2
0-<(02 <-(O
This is an analog of the inequality (5.4.45) of the first cycle. Consider again the limiting cases for (02.
1) w2 = 0, i.e., no rotation on the interval [Ti - 1, Ti]; then the equality r
r01
Y(t) = IIY(O1)II exp -
pt dz exp
J0
pl dz(1, 0)T,
t E [TI, T2],
fo
holds; thus, according to the choice of To (see (5.4.40)) and TI (see (5.4.45)), we have the estimate IIY(t)II < exp f
(pI r
+6) dz < sinco exp ft,
t E [Ti, T2].
0
2) w2 = w. In this case the perturbation by means of rotation has been carried out on the interval [TI - 1, TI]; therefore, Y(TI) = IIY(O1)II exp
f
Ti
p1 dz[cosw2i sinw2]T,
and fort E [T1, z,] C [TI, T2] we have Ti
r
Ti
hence,
f p2dr r
1T
,,The
Y(t)= IIY(O1)IIexp fol p,dz Icosw2 f p1dr,sinw2
Ti
T1
11Y(t)II > Y2(t) = sinwlly(O1)II exp [jo
p1 dr + fTr
P2 dz
inequality (5.4.49) guarantees the existence of a t" E [Ti, z2] such that IIY(trr)II > sinwexp/3trr.
Further, as in the first cycle, the continuous dependence of y(t) on w2 implies that there exist w2 E (0, co) and t2 E [Ti, T2] such that the following relations (analogs of (5.4.48) of the first cycle) hold: IIy(t)II
sinwexp/3t
IIY(t2)II = sinwexp/3t2
for all
for some
t E [T1, T2], t2 E [T1, T2].
By induction, we can extend the construction of the vector-function y (t) to the whole half-axis R. As a result, we have a solution y (t) of the perturbed system (5.4.38) with an admissible smallness of perturbation, and this solution has the following two properties: for all t E R+ the estimate
IIy(t)II <, sinwexpft holds, and there exists a sequence tar -4 co, k - oo, such that IIY(tk)II = sincoexpfltk.
By definition, X[y] = /3, and the theorem is proved.
0
§4. STABILITY OF CHARACTERISTIC EXPONENTS
171
Combining Corollary 5.4.1 and Theorem 5.4.4, we obtain the following statement. THEOREM 5.4.5. If a linear two-dimensional diagonal system has different characteristic exponents, then they are stable if and only if the diagonal coefficients are integrally separated.
Now, let us give one more example of a regular system having different and unstable characteristic exponents. EXAMPLE 5.4.3.
X2 =
1+
?r
2
sin ny_` l x2.
Here AI = 0, A2 =
1J'(1+2sinnv)dT
Ilim
lim 1
'-'°° t
t+
I
sin 7r V17 - vt cos n T
\\\
I = 1. 111
The system is regular by Lyapunov's Theorem 3.8.1. At the same time, the diagonal is not integrally separated. Indeed, let n E N, then (n2n-)''1)(1+2
2 sinzv) dT=O; (2
'-
therefore, we cannot indicate a > 0, d >, 0 such that
J`(1+2sinir ') dT>,a(t-s)-d would hold for all t >, s > 0. We pass to the case of a two-dimensional diagonal system with equal characteristic
exponents. First, let us introduce a notation. Let {r(t)} be the set of lower functions for system (5.0.1) (see Definition 5.1.2); then
C =sup{Ilim 1
f
J0
r(u)du
The upper central exponent is denoted, as before, by 92. THEOREM 5.4.6. If a linear two-dimensional system has equal characteristic exponents Al = A2 = A, then they are stable if and only if
w=A=S2. PROOF. Necessity. Let the exponents of the system be stable but
max{S2-A,A-&} = y> 0. For the sake of definiteness, we assume that fl - A = y > 0. According to Vinograd's Theorem 5.1.5, there exists a perturbation Q(t) of the initial system such that
IIQ(t)II <6,
1 ,0,
V. ON THE VARIATION OF CHARACTERISTIC EXPONENTS
172
where 6 is arbitrarily small, and the perturbed system has a characteristic exponent A' > 92. This contradicts the stability of the exponents. Similar arguments can be carried out for the case A - Co = y > 0. The fact that Co is attainable was established by Millionshchikov [29]. Sufficiency. By Corollary 5.1.2, the condition A = Cl guarantees that the exponent
is rigid upwards. By analogous arguments [9], it can be shown that the inequality
0
A = to implies that the exponent is rigid downwards.
The following three theorems are given without proof. Actually, all the arguments used in their proofs are similar to those in the proofs of Theorems 5.4.3, 5.4.4, 5.4.6 with the exception of the following one: the stability of exponents implies that there exists
a Lyapunov transformation reducing the system to diagonal (different characteristic exponents) or block-triangular (the general case) form. The validity of this statement is again established by Millionshchikov's method of rotations. Recall that the blocktriangular form of a system, as well as necessary and sufficient conditions for a system to be reducible to this form, were considered in Chapter III. THEOREM 5.4.7 [13]. If system (5.0.1) has different characteristic exponents AI < < A,, then they are stable if and only if there exists a Lyapunov transformation x = L(t)z of system (5.0.1) to the diagonal form A2 <
i = diag[p1(t),...,pn(t)]z,
where the functions
pi (t),
pi+I(t),
i = 1,...,n - 1,
are integrally separated, i.e., for all t > s > 0 t
f(Pi+I_pi)dt>a(t_s)_d
a> 0, d
0.
THEOREM 5.4.8 [13]. If system (5.0.1) has different characteristic exponents
then they are stable if and only if there exists a fundamental system of solutions
X(t) = such that IIxi+1(t)II IIxi+I(s)II
.
11xi(t)II > dea('-S),
Ilxi(s)II
d > 1,
a > 0.
THEOREM 5.4.9 [13, 31]. The characteristic exponents of system (5.0.1) are stable if and only if there exists a Lyapunov transformation x = L(t )z reducing system (5.0.1) to a block-triangular form dz
dt = diag[PI (t), P2(1),..., Pq (t )]z,
where each of the matrices Pit (t) is upper-triangular of order nk and for the block systems dz(k) dt
= pk(t)z(k)
§4. STABILITY OF CHARACTERISTIC EXPONENTS
173
the following conditions hold: 1) all solutions of the block have the same characteristic exponent Ak and
wk=Ak=S2k,
k=1,...,q,
2) for any pi (t) from the diagonal of the block Pk (t) and pj (t) from the diagonal of the block Pk+l (t) the condition for integral separateness is satisfied, i. e., there exist constants a > 0, d > 0 such that for all t > s > 0
f(i - p,)dr
a(t - s) - d,
k=1,2,...,q-1.
Let us discuss briefly the conditions of the theorem. When the characteristic exponents are stable, the linear space L" of solutions of system (5.0.1) can be represented as the direct sum of q lineals Lk of dimensions nk, respectively, Ei nk = n. All the solutions belonging to the lineal Lk have the same exponent Ak. The angles
ak(t) = Q(Mk,Lk+l}, where
Mk =LI
®L2®...ED Lk,
are such that
inf ak(t)>E>0.
tER+
The upper central exponent S1k and the number wk coincide with Ak; this guarantees that the exponent of the subspace is rigid upwards and downwards. Recall that Lyapunov transformations preserve central characteristic exponents. If the Cauchy matrix of the block Z(k) = Pk(t)Z(k) is denoted by U1, (t, r), then condition 2) of the theorem is equivalent to the following:
IIUk+I(t,r)II-' > de°('-r)ll Uk(t,r)II,
t > r, k =1,...,q -1.
In this case the blocks are said to be integrally separated. Integral separateness divides the spheres of influence of small perturbations of the system on the exponents of the subspaces and prevents the appearance of solutions of intermediate growth.
CHAPTER VI
A Linear Homogeneous Equation of the Second Order Consider the equation
i +a(t)X + b(t)x = 0,
(6.0.1)
where a (t) and b (t) are real functions of the real argument t, continuous and bounded on some interval I c R. Equation (6.0.1) is the simplest equation that is not integrable explicitly. Unfortunately, there is no sufficiently complete correlation between the properties of solutions
and the character of the coefficients of the equations. In this chapter we consider precisely this question and present some conditions on the coefficients that allow us to say whether the solutions of equation (6.0.1) are oscillating, bounded, and/or stable. We start with recalling some well-known facts [5]. 1. The solution of the Cauchy problem for equation (6.0.1) with the initial data
(to, xo, xo) c I x R2 exists, is unique, and is defined for all t from I. In the general case, the interval I is usually fixed by the choice of to. For example, for the equation t x+t+lx+t 1x=0 l
one of the three intervals (-001 -1)1
(-1, 1),
(1, oo)
can be considered as I, depending on the position of to. Note that for some equations it may happen that one or both solutions constituting a fundamental system of solutions can be continued smoothly from one interval to another. For example, the equation
i + x - x/t = 0 has the solution x = t defined on the whole real axis. 2. If a nontrivial solution x,(t) of equation (6.0.1) is known, then a linearly independent solution x2(t) is given by the formula
r exp (- f,o a(u) du) (6.0.2)
x2(t) = x1(t)
10
-
dT,
to,t E I.
X2(T)
This result is obtained by reducing equation (6.0.1) by means of the substitution
x = x, (t) f o z dT to a linear equation of the first order for the function z and by integrating the latter. Note that if for some 0 E I we have x1 (0) = 0, then exp (f,o a(u) du) lim x2 (t) X1(0)
(this follows from (6.0.2) by the generalized L'Hospital rule [34]). 175
VI. A LINEAR HOMOGENEOUS EQUATION OF THE SECOND ORDER
176
3. If the coefficient a (t) is a continuously differentiable function on I, then the substitution (6.0.3)
x=zexp(-I 2
to,tEI,
a(z)dzi,
J
reduces equation (6.0.1) to the form (6.0.4)
z" + p(t)z = 0,
(6.0.5)
p(t) = -a22(t)/4 - a'(t)/2 + b(t).
4. If two independent solutions of equation (6.0.1) are known, then the solution of the linear nonhomogeneous equation
i + a(t)i + b(t)x = f (t),
f E C(I),
can be always obtained by the method of variation of parameters (the Lagrange method). §1. On the oscillation of solutions of a linear homogeneous equation of the second order
In this section we consider the question of zeros of nontrivial solutions of the equation
i + a(t)x + b(t)x = 0
(6.1.1)
with continuous and bounded real coefficients on an interval I C R. DEFINITION 6.1.1. A solution x (t) of equation (6.1.1) is said to be oscillating in the interval (a, b) c I if it has at least two zeros in this interval. In the opposite case, x(t) is a nonoscillating solution.
Thus, for example, all the solutions of the equation (6.1.2)
i-m2x=0,
mER, m540,
are nonoscillating on R; this obviously follows from the form of the general solution
x = cl export + c2 exp(-mt). Let us turn to the equation (6.1.3)
i+m2x=0,
m E R,
m
0,
whose general solution x = CI COS mt + C2 sin mt
can be transformed to the form (6.1.4)
x = Asin(mt +6),
where A = (c, + c; )1/2 is the amplitude and
/
arcsin 1 cl/(c, +c)11'2)
is the initial phase. It is clear from (6.1.4) that all the solutions of equation (6.1.3) have the period 2n/m, the distance between two consecutive zeros of any solution is
§ I. ON THE OSCILLATION OF SOLUTIONS
177
equal to 7r/m, and all the solutions are oscillating on any interval of length greater than n/m. Note that, with the increase of m, the distance between the zeros of the solutions decreases.
These two simple equations provide a rather accurate model of the oscillating character of solutions of the equation
i + p(t)x = 0;
(6.1.5)
the passage to which from the equation (6.1.1) is carried out by the substitution (6.0.3), preserving the zeros of the solutions. In what follows, we establish that for p(t) 5 0 all the solutions of this equation are nonoscillating, while for p(t) > 0 they are oscillating and the frequency of oscillations grows with the growth of p(t). Before proving these results, we show that the question about the distance between consecutive zeros makes sense because of the following theorem.
LEMMA 6.1.1. The zeros of any nontrivial solution x (t) of equation (6.1.1) lying inside the interval I are simple and isolated.
PROOF. The first statement follows from the fact that only the trivial solution corresponds to the initial conditions x(to) = 0 and z(to) = 0. Conversely, let there exist a limit point to for the set {tk } of zeros of a nontrivial solution x (t). By continuity,
x(to) = 0. Let us show that then z(to) = 0. Indeed, X(to) =
limo
x(to + h) - x(to) h
0
By choosing h = tk - to, we obtain the required result.
COROLLARY 6.1.1. A nontrivial solution x (t) of equation (6.1.1) on any compact from I has a finite number of zeros. THEOREM 6.1.1 (Sturm). Consider two equations
y" + p(t)y = 0
and
z" + q(t)z = 0,
where p, q E C (I) and
q(t) > p(t),
(6.1.6)
t E I.
Then between each two consecutive zeros of any nontrivial solution of the first equation there is at least one zero of any solution of the second equation under the condition that between these zeros there are points such that q (t) > p (t).
PROOF. Let yl (t) be the solution of the first equation such that
yI(tI) = Y1 (t2) = 0
and
y1 (t) > 0
for
t E (t1,t,).
Assume that there exists a solution of the second equation z1 (t) such that
z1(t) > 0
for
t E (t1, t,)
(it is important for us to fix the sign of the solutions; this does not involve any loss of generality since -yI (t) and -z1 (t) are also solutions with the same zeros). We substitute these solutions in the corresponding equations, multiply the first one by z1(t), the second by yI (t), and, subtracting the second identity from the first, we obtain
yi zI - zI yI = (q(t) - p(t))zI
y1.
V1. A LINEAR HOMOGENEOUS EQUATION OF THE SECOND ORDER
178
Let us integrate this identity from t, to t2, taking into account that the left-hand side is the derivative of the difference y',z, - y1 z,' and that y, (t,) = Y1(t2) = 0. Thus, (6.1.7)
Yi (t2)zl (t2) - 4(ti)z1(ti)
f(q(r) - p(z))Yl zl dr.
By the condition (6.1.6) of the theorem and our assumptions, the right-hand side of the identity (6.1.7) is strictly positive, and the left-hand side is nonpositive since Yi(t2) > 0,
z1(t2) > 0,
Yi(t1) > 0,
z, (ti) > 0.
This contradiction is due to the assumption that z, (t) > 0
for
t E (t1, t2).
O
COROLLARY 6.1.2. The zeros of two linearly independent solutions of equation (6.1.5) mutually separate each other.
PROOF. Let us denote by y, (t) and z, (t) linearly independent solutions and carry
out the proof of Sturm's theorem up to the identity (6.1.7) inclusive. This time the right-hand side is equal to zero since q(t) - p(t), and the left-hand side is strictly negative since z1(t2) > 0,
zl(tl) > 0
(solutions constituting the fundamental system do not vanish simultaneously). Thus, z, (t) has only one zero inside (t1, t2), since in the opposite case, changing the roles of the equations, we obtain one more zero of the solution y, (t) between t,, t2. O COROLLARY 6.1.3 (on the estimate of the distance between consecutive zeros of any
nontrivial solution of equation (6.1.5)). Let the inequality (6.1.8)
0 < l2 < p(t) < L2
be valid on a closed interval [a, l3] C I. Then the estimate (6.1.9)
ir/L <, d < it/l
holds for the distance d between consecutive zeros of any nontrivial solution of equation (6.1.5).
PROOF. We apply successively Sturm's Theorem 6.1.1 to the pair of equations (6.1.5) and (6.1.3): a) for m = L; this gives the left inequality in (6.1.9), b) for m = l; this gives the right inequality in (6.1.9). O EXAMPLE 6.1.1. Consider the equation
z+2tz+(t2+t+1)x =0; a) let us estimate the distance between consecutive zeros of any nontrivial solution on the interval [16], b) let us show that the distance between the zeros tends to zero as t - 00. By the substitution (6.0.3) x = Ze
2 . f o 2r dr =
2/,
§1. ON THE OSCILLATION OF SOLUTIONS
179
which does not move the zeros of solutions, we reduce the initial equation to the equation
+tz=0 (see (6.0.5)).
a) By the inequality 16 < t < 121, according to Corollary 6.1.3 we have the estimate
n/11 < d < n/4. b) Let t > 122. Formula (6.1.9) implies that
0
as
COROLLARY 6.1.4. If the inequality
p(t)>12>0
(6.1.10)
holds for t E I, then on any interval (a, l3) c I such that l3 - a > n/l, all the solutions of equation (6.1.5) are oscillating.
Let I = [a, oo). In this case the condition (6.1.10) can be relaxed. THEOREM 6.1.2 [4]. If (6.1.11)
fp()d=oo
p(t)>0,
,
then all the solutions of the equation
x + p(t)x = 0 have infinitely many zeros on the interval [a, oo).
PROOF. Let, to the contrary, there be a solution x (t) preserving the sign beginning with some moment to > a; we assume that
x(t) > 0
for
t E [to, 00).
From (6.1.5) we have
X = -p(t)x(t) < 0, t > to; therefore, x'(t) decreases monotonically. Two cases are possible:
1) x'(t)>0fortto, 2) x'(t)<0forttj>to.
In the first case, the solution x (t) increases monotonically and x (t) > c for t > to. Hence,
x = -p(t)x(t) < -cp(t), and, after integrating, we have
x'(t) - x(to) < -c
Jo
p(r) dr.
With the growth oft, the right-hand side of the last inequality tends to -oo by virtue of (6.1.11); this contradicts the condition
x'(t) > 0
for
t>to.
VI. A LINEAR HOMOGENEOUS EQUATION OF THE SECOND ORDER
ISO
We pass to the second case. Let x'(t) be negative and decrease monotonically for t > tl; consequently, x'(t) < -c for t '> ti. Integrating this inequality we obtain
x(t) - x(t1) <' -c(t - ti), which contradicts the condition that x (t) is positive for all t > to.
0
EXAMPLE 6.1.2. x + cost tx = 0.
The conditions of Theorem 6.1.2 are satisfied for this equation; therefore, all its solutions have infinitely many zeros on R. Note that this result cannot be obtained by means of Sturm's theorem. Consider one more result which allows one to obtain a lower bound for the distance between zeros. It is essential here that we use the form (6.1.1) for the equation. THEOREM 6.1.3 (De la Vallee-Poussin [35]). Let the coefficients of the equation
z+a(t)x+b(t)x =0 be such that (6.1.12)
Ia(t)I <, MI,
t E I.
Ib(t)I < M2,
Then the estimates (6.1.13)
(6.1.14)
d>,
J4M1 + 8M2 - 2M1
M2>0,
M2
d > 2/MI,
M2 = 0,
are valid for the distance d between any two consecutive zeros of any nontrivial solution of equation (6.1.1).
PROOF. Let x(t) be a nontrivial solution of equation (6.1.1) and let to, to + d be its consecutive zeros. Without loss of generality we assume that to = 0. Consider the identity (6.1.15)
x'(t)d = Jo Tx"(T) dT -
d
(d - z)x"(T) dT
t
which can easily be verified by integrating by parts and taking into account that x(0) = x(d) = 0. Let us change x"(t) in (6.1.15) to its value from (6.1.1):
x'(t)d = - fo Ta(T)x'(r) dT + 1
J
, (d
= T)a(T)x(T) dT
/'t
(6.1.16)
f
t
Tb(T)x(T)dT+ J i(d-T)b(T)x(T)dr
and let us estimate the right-hand side of (6.1.16). Let Ix'(t)I '< p for t E [O,d];
§1. ON THE OSCILLATION OF SOLUTIONS
181
,u > 0 since x (t) is a nontrivial solution. From the identities
x(t) = fo` x '(z) dz
and
x(t) = f x(z) ' dz
we obtain the estimates
Ix(t)I < ,ut,
(6.1.17)
Ix(t)I < u(d - t).
The first estimate will be used for 0 < t < d/2, and the second for d/2 < t < d. Thus, (6.1.16) implies that
Ix'(t)Id <M1,u
[f' rdr+ J (d-
The first summand on the right-hand side of the last inequality is estimated by the number M1,ud2/2; this is a straightforward result of the integration. To estimate the second summand, we divide the intervals of integration by the point d/2 and use the estimates (6.1.17), respectively. Finally, we have I x'(t )I d < MIud 2/2 + MZ,ud 3/8;
this holds for all t E [0, d] and, in particular, at the point where Ix'(t)I = u. Hence, we obtain the inequality for d:
1 < MId/2 + M2d2/8.
(6.1.18)
The solution of this inequality implies the validity of the estimates (6.1.13) and (6.1.14). Recall that the number d, satisfying (6.1.18), lies to the right of the positive root of the equation
M2d2+4M1d-8=0. 0 REMARK 6.1.1 (on the comparison of Theorems 6.1.1 and 6.1.3). When estimating
the value of d (the distance between consecutive zeros of solutions of the equation z + a(t)x + b(t)x = 0) from below there appear two possibilities: 1) to use Theorem 6.3.1 straightforwardly, 2) to pass to the equation z + p(t)z by means of the substitution (6.0.3), and, further, to carry out the estimate according to Corollary 6.1.3. What is preferable? In the general case it is practically impossible to give a recommendation, but it should be noted that for sufficiently large Ia'(t)I, t E I, the second variant should be followed with caution. This results from the fact that
p(t) = -a2(t)/4 - a'(t)/2 + b(t) (see (6.0.4)), and an increase of p(t) makes the left-hand side of the estimate (6.1.9) less accurate.
Consider some examples.
V1. A LINEAR HOMOGENEOUS EQUATION OF THE SECOND ORDER
182
EXAMPLE 6.1.3. Let us estimate from below the distance between two neighboring zeros of nontrivial solutions of the following equations. 1) Consider Example 6.1.1:
i+2tz+(t2+t+1)x=0,
1
a) By the Sturm theorem, we have
l
p(t)=t
d>7r// ^~0.4.
b) By the de la Vallee-Poussin theorem (see (6.1.12) and (6.1.13)), we have
Ia(t)I < 10 = MI, Ib(t)I
d>
31 = M2
(4.10+8.31)1/2-2.10 31
0.2.
Thus, Sturm's theorem gives a better result 2) X- (arctan kt)X+ 7r 2 x = 0, t E R. a) By the Sturm theorem, we have
k
1
p(t) _ -4(arctankt)2 + d
2
2(1 + k2t2) +
<
k
2
2+
7rf > (k + 27x2)1/2
b) By the de la Vallee-Poussin theorem, we have
Ia(t)I < 2 = M1,
d>
( 2+82)h/2
b(t) = 7r2
_
7r2
2 7r
Under the condition 2k = 7x4 - 47r2, the estimates coincide; for 2k > 7x4 - 712 the second estimate is sharper.
We return to the equation (6.1.5) and consider the case when the coefficient p(t) is nonpositive (see the model equation (6.1.2)). THEOREM 6.1.4. Let
(6.1.19)
p(t) < 0,
tEL
Then all the nontrivial solutions of the equation
X+p(t)x =0 are nonoscillating on I.
PROOF. Let, to the contrary, there exist a solution x (t) $ 0 having two zeros, i.e.,
x(t1) = x(t2) = 0. Let us appeal to Theorem 6.1.1. It implies that any solution of the equation must intersect the axis t in the interval (t1, t2), which, obviously, is not true.
=0
0
The following theorem gives a necessary condition for equation (6.1.5) to have a solution with two zeros on I.
§ 1. ON THE OSCILLATION OF SOLUTIONS
183
THEOREM 6.1.5 (Lyapunov [37]). Let a solution x $ 0 of the equation
x + p(t)x = 0 have two zeros on the interval [a, b] c I. Then fh (6.1.20) p+(t) dt >
b 4a
where p+ (t) = max(p(t), 0). PROOF. Consider the equation
u" + p+(t)u = 0,
(6.1.21)
which is majorant for (6.1.5). By Theorem 6.1.1, this equation has a solution u(t) such that
u(a)=u(/3)=0,
a
and
u(t) > 0
t E (a, f3).
for
Consider the identity
$$-(\beta - \alpha)\,u(t) = (\beta - t)\int_\alpha^t (s - \alpha)u''(s)\,ds + (t - \alpha)\int_t^\beta (\beta - s)u''(s)\,ds. \tag{6.1.22}$$
Let us change the function $u''(s)$ in (6.1.22) to its expression from (6.1.21):
$$(\beta - \alpha)\,u(t) = (\beta - t)\int_\alpha^t (s - \alpha)p_+(s)u(s)\,ds + (t - \alpha)\int_t^\beta (\beta - s)p_+(s)u(s)\,ds. \tag{6.1.23}$$
Let
$$u(t_0) = \max u(t) \quad\text{for}\quad \alpha \le t \le \beta.$$
Set $t = t_0$ in (6.1.23), change $u(s)$ to $u(t_0)$ in the integrands (this leads to a strict inequality in (6.1.23)), and divide the result by $u(t_0)$. Hence,
$$\beta - \alpha < \int_\alpha^{t_0} (\beta - s)(s - \alpha)p_+(s)\,ds + \int_{t_0}^\beta (\beta - s)(s - \alpha)p_+(s)\,ds = \int_\alpha^\beta (\beta - s)(s - \alpha)p_+(s)\,ds,$$
or
$$1 < \int_\alpha^\beta \frac{(\beta - s)(s - \alpha)}{\beta - \alpha}\,p_+(s)\,ds. \tag{6.1.24}$$
Note that the fraction in the integrand increases together with $\beta$ for $t > \alpha$ and decreases together with $\alpha$ for $t < \beta$; this is ensured by the signs of its derivatives with respect to $\beta$ and $\alpha$, correspondingly. By virtue of this fact, we have
$$1 < \int_a^b \frac{(b - s)(s - a)}{b - a}\,p_+(s)\,ds \le \int_a^b \frac{b - a}{4}\,p_+(s)\,ds;$$
this implies (6.1.20). In the last inequality we have used the fact that
$$4xy \le (x + y)^2, \qquad x, y \in \mathbb{R}. \qquad \square$$
COROLLARY 6.1.5. If
$$\int_a^b p_+(\tau)\,d\tau \le \frac{4}{b - a} \tag{6.1.25}$$
holds for $[a, b] \subset I$, then all the solutions of the equation $\ddot{x} + p(t)x = 0$ are nonoscillating in $[a, b]$.
Proof by contradiction.
EXAMPLE 6.1.4. Let us find the relation between the parameter $a > 0$ and the natural number $n$ such that all the solutions of the equation $\ddot{x} + e^{-at}(\sin t)\,x = 0$ have no more than one zero on the interval $[0, 2n\pi]$. We turn to the condition (6.1.25):
$$\int_0^{2n\pi} [e^{-at}\sin t]_+\,dt = \sum_{l=0}^{n-1}\int_{2l\pi}^{(2l+1)\pi} e^{-at}\sin t\,dt
= \frac{1}{a^2+1}\sum_{l=0}^{n-1}\bigl(e^{-(2l+1)a\pi} + e^{-2la\pi}\bigr)
= \frac{1}{a^2+1}\sum_{l=0}^{2n-1} e^{-al\pi}
= \frac{1 - e^{-2na\pi}}{(a^2+1)(1 - e^{-a\pi})}.$$
Hence, the relation required has the form
$$\frac{1 - e^{-2na\pi}}{(1 + a^2)(1 - e^{-a\pi})} \le \frac{4}{2n\pi}.$$
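The sufficient condition just obtained is easy to tabulate. The sketch below (an aside; plain Python, no external libraries; the sample values of $a$ are arbitrary) evaluates the inequality and reports, for each $a$, the largest $n$ for which it still guarantees at most one zero on $[0, 2n\pi]$.

    # Evaluation of the sufficient condition of Example 6.1.4 (illustration only).
    import math

    def condition_holds(a, n):
        lhs = (1.0 - math.exp(-2.0 * n * a * math.pi)) / ((1.0 + a * a) * (1.0 - math.exp(-a * math.pi)))
        return lhs <= 2.0 / (n * math.pi)          # the right-hand side is 4/(2*n*pi)

    for a in (0.5, 1.0, 2.0, 4.0):
        if not condition_holds(a, 1):
            print(f"a = {a}: the condition already fails for n = 1")
            continue
        n = 1
        while condition_holds(a, n + 1):
            n += 1
        print(f"a = {a}: the condition holds for all n <= {n}")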
COROLLARY 6.1.6 (on the number of zeros of a solution [37]). Let $N \ge 2$ be the number of zeros of a nontrivial solution $x(t)$ of equation (6.1.5) on the interval $0 \le t \le T$, $p \in C[0, T]$. Then
$$N < \frac{1}{2}\Bigl(T\int_0^T p_+(s)\,ds\Bigr)^{1/2} + 1. \tag{6.1.26}$$
Note that the choice of the interval is made for the sake of convenience and does not lessen generality. Let $x(t_k) = 0$ and $0 \le t_1 < t_2 < \dots < t_N \le T$. By Theorem 6.1.5, we have
$$\int_{t_k}^{t_{k+1}} p_+(s)\,ds > \frac{4}{t_{k+1} - t_k}, \qquad k = 1, \dots, N - 1.$$
We sum these inequalities for $k = 1, \dots, N - 1$:
$$\int_{t_1}^{t_N} p_+(s)\,ds > 4\sum_{k=1}^{N-1}\frac{1}{t_{k+1} - t_k}. \tag{6.1.27}$$
According to the inequality connecting the harmonic mean and the arithmetic mean of $N - 1$ positive numbers, we have
$$\frac{1}{N-1}\sum_{k=1}^{N-1}\frac{1}{t_{k+1} - t_k} \ge \frac{N-1}{\sum_{k=1}^{N-1}(t_{k+1} - t_k)} = \frac{N-1}{t_N - t_1},$$
which, applied to (6.1.27), gives
$$\int_{t_1}^{t_N} p_+(s)\,ds > \frac{4(N-1)^2}{t_N - t_1}, \quad\text{or}\quad \int_0^T p_+(s)\,ds > \frac{4(N-1)^2}{T}.$$
The last inequality implies (6.1.26). $\square$
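A quick numerical illustration of Corollary 6.1.6 (an aside; NumPy and SciPy are assumed, and the test coefficient $p(t) = 2 + \sin t$ and the interval are arbitrary choices):

    # Comparing the actual number of zeros with the bound (6.1.26).
    import numpy as np
    from scipy.integrate import solve_ivp

    T = 30.0
    p = lambda t: 2.0 + np.sin(t)
    sol = solve_ivp(lambda t, y: [y[1], -p(t) * y[0]], (0.0, T), [1.0, 0.0],
                    max_step=1e-3, rtol=1e-9)
    x = sol.y[0]
    N = int(np.sum(np.sign(x[:-1]) * np.sign(x[1:]) < 0))     # zeros counted via sign changes
    integral = 2.0 * T + 1.0 - np.cos(T)                      # int_0^T (2 + sin t) dt; here p_+ = p
    bound = 0.5 * np.sqrt(T * integral) + 1.0
    print(f"counted N = {N} zeros, bound (6.1.26) = {bound:.2f}")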
To conclude this section we note that some interesting criteria of nonoscillation for equation (6.0.5) are given in [37], but in terms of solutions only.

§2. On boundedness and stability of solutions of a linear equation of second order
The linear equation
$$\ddot{x} + a(t)\dot{x} + b(t)x = 0, \qquad a, b \in C(I \subset \mathbb{R}), \tag{6.2.1}$$
reduces to the linear system
$$\frac{dx}{dt} = \dot{x}, \qquad \frac{d\dot{x}}{dt} = -b(t)x - a(t)\dot{x};$$
in this interpretation all the results of the preceding chapters hold for this equation. Thus, equation (6.2.1) is Lyapunov stable, or, what is the same, all its solutions are stable, if and only if all the solutions of the equation together with their derivatives are bounded as $t \to \infty$. Frequently, only the boundedness of solutions is of interest. In this section we consider some coefficient criteria for boundedness and stability of solutions of equation (6.2.1) which were proved, in particular, in [4] and [35]. Note that some interesting results of this sort were formulated in [38]. Without loss of generality, we write equation (6.2.1) in the form
$$\ddot{x} + p(t)x = 0, \qquad p \in C(I \subset \mathbb{R}). \tag{6.2.2}$$
COMPARISON THEOREM 6.2.1 [35]. Let there be given two equations
$$y'' + p(t)y = 0, \qquad z'' + q(t)z = 0, \qquad p, q \in C(I \subset \mathbb{R}),$$
such that
1) $q(t) \ge p(t)$, $q(t) \not\equiv p(t)$, $t \in I$;  (6.2.3)
2) $y(t)$ and $z(t)$ are solutions of these equations such that
$$y(t_0) = z(t_0) = y_0, \qquad y'(t_0) = z'(t_0) = y_0', \qquad t_0 \in I. \tag{6.2.4}$$
Then, in any neighborhood to the right of $t_0$ in which $z(t)$ has no zeros and $p(t) \not\equiv q(t)$, the following inequality holds:
$$|y(t)| > |z(t)|; \tag{6.2.5}$$
moreover, the ratio $y(t)/z(t)$ increases monotonically.
PROOF. Denote by $(t_0, a)$ the interval where $z(t) \ne 0$ and $p(t) \not\equiv q(t)$. For $z(t_0) \ne 0$ such an interval exists by the continuity of $z(t)$, and in the case $z(t_0) = 0$ it exists by virtue of $z'(t_0) \ne 0$, since we deal with a nontrivial solution. Substitute the solutions $y(t)$ and $z(t)$ in the corresponding equations; multiplying the first identity by $z(t)$, the second by $y(t)$, and subtracting the second from the first, we obtain
$$z(t)y''(t) - z''(t)y(t) = (q(t) - p(t))y(t)z(t), \qquad t \in (t_0, a).$$
Integrating the result from $t_0$ to $t \in (t_0, a)$ and taking into account the conditions (6.2.4), we obtain
$$z(t)y'(t) - z'(t)y(t) = \int_{t_0}^t (q(\tau) - p(\tau))y(\tau)z(\tau)\,d\tau. \tag{6.2.6}$$
The integrand is positive for $t \in (t_0, a)$ by virtue of (6.2.3) and the fact that $y(t)$ and $z(t)$ have the same sign. That this fact is true in a small neighborhood of $t_0$ follows from the conditions (6.2.4), and for other $t \in (t_0, a)$ from the fact that $y(t)$ cannot vanish before $z(t)$. Indeed, let, to the contrary, there exist $t^* \in (t_0, a)$ such that $y(t^*) = 0$. For $t = t^*$, the right-hand side of (6.2.6) is strictly positive and the left-hand side is strictly negative, since the values $z(t^*)$ and $y'(t^*)$ necessarily have different signs. Thus, from (6.2.6) for $t \in (t_0, a)$ we have
$$\frac{d}{dt}\Bigl(\frac{y(t)}{z(t)}\Bigr) = \frac{z(t)y'(t) - z'(t)y(t)}{z^2(t)} > 0,$$
which proves the inequality (6.2.5). Note that $y(t_0)/z(t_0) = 1$; this is obvious for $y(t_0) \ne 0$, and for $y(t_0) = 0$ we have $y'(t_0) \ne 0$; then
$$\lim_{t \to t_0}\frac{y(t)}{z(t)} = \lim_{t \to t_0}\frac{y'(t)}{z'(t)} = 1. \qquad \square$$
EXAMPLE 6.2.1. Let us estimate the solution of the equation
$$\ddot{x} - \Bigl(1 - \frac{\cos^2 t}{2}\Bigr)x = 0, \qquad x(0) = 1, \quad \dot{x}(0) = 1,$$
on the interval $[0, \infty)$ from above and from below.

For the coefficient $p(t)$ we have the estimate $-1 \le p(t) \le -1/2$. Now we use Theorem 6.2.1. Consider $\ddot{x} - x = 0$ and $\ddot{x} - x/2 = 0$ as auxiliary equations with the initial conditions $x(0) = \dot{x}(0) = 1$. Hence,
$$\frac{1 - \sqrt{2}}{2}\,e^{-t/\sqrt{2}} + \frac{1 + \sqrt{2}}{2}\,e^{t/\sqrt{2}} \le x(t) \le e^t.$$
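The two-sided estimate of Example 6.2.1 can be verified numerically; the following sketch (an aside, assuming NumPy and SciPy; the integration interval and tolerances are arbitrary) integrates the equation and checks both comparison bounds.

    # Numerical check of the bounds in Example 6.2.1 (illustration only).
    import numpy as np
    from scipy.integrate import solve_ivp

    rhs = lambda t, y: [y[1], (1.0 - 0.5 * np.cos(t)**2) * y[0]]
    t_eval = np.linspace(0.0, 10.0, 400)
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
    x = sol.y[0]
    s = np.sqrt(2.0)
    lower = (1.0 - s) / 2.0 * np.exp(-t_eval / s) + (1.0 + s) / 2.0 * np.exp(t_eval / s)
    upper = np.exp(t_eval)
    print("lower bound holds:", bool(np.all(x >= lower - 1e-6)))
    print("upper bound holds:", bool(np.all(x <= upper + 1e-6)))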
The following theorem establishes the connection between the growth of the solutions of equation (6.2.2) and the monotonicity of the coefficient $p(t)$.

THEOREM 6.2.2 (Sonine-Polya [35]). Let a function $p(t)$ be such that
1) $p \in C^1(I)$, $p(t) \ne 0$, $t \in I$;
2) $p'(t) \ge 0$ ($p'(t) \le 0$), $t \in I$.
Then for any solution $x(t)$ of equation (6.2.2) the values of $|x(t)|$ at the points of extremum form a nonincreasing (nondecreasing) sequence.
PROOF. Take a nontrivial solution $x(t)$ of equation (6.2.2) and form the function
$$\varphi(t) = x^2(t) + \frac{1}{p(t)}\,(x'(t))^2.$$
By virtue of equation (6.2.2),
$$\varphi'(t) = -\frac{p'(t)}{p^2(t)}\,(x'(t))^2. \tag{6.2.7}$$
Note that at the points of extremum of the solution ($x'(t) = 0$) we have $\varphi(t) = x^2(t)$. The function $\varphi(t)$ is nonincreasing with respect to $t$ for $p'(t) \ge 0$, since, by virtue of (6.2.7), $\varphi'(t) \le 0$, and nondecreasing for $p'(t) \le 0$; this implies our claim. $\square$

COROLLARY 6.2.1. If $p(t) > 0$ and $p'(t) \ge 0$, $t \in I$, then for any solution $x(t)$ of equation (6.2.2) with initial conditions $t_0$, $x_0$, $x_0' = 0$ we have
$$|x(t)| \le |x_0| \quad\text{for}\quad t \ge t_0.$$
PROOF. $|x(t_0)| = |x_0|$ is the first term of the sequence of values of $|x(t)|$ at its points of extremum. To finish the proof, one proceeds according to Theorem 6.2.2. $\square$
EXAMPLE 6.2.2. The solution $x(t)$ of the equation $\ddot{x} + (1 + t^2)x = 0$ with the initial conditions $t_0 = 0$, $x_0 = 1$, $x_0' = 0$ satisfies the inequality $|x(t)| \le 1$ for $t \ge 0$.

COROLLARY 6.2.2. Let
$$p \in C^1(I), \qquad p(t) < 0, \qquad p'(t) \ge 0 \quad\text{for}\quad t \in I.$$
Then any solution $x(t)$ of equation (6.2.2) with initial conditions $t_0$, $x_0$, $x_0'$ such that
$$x_0^2 + \frac{1}{p(t_0)}\,(x_0')^2 < 0 \tag{6.2.8}$$
is monotonic for $t \ge t_0$.
PROOF. For the trivial solution the statement is obvious. Let $x(t)$ be a nontrivial solution; then (6.2.8) implies that $x_0' \ne 0$. By the definition of the function $\varphi(t)$, we have $\varphi(t_0) < 0$ and, since $p'(t) \ge 0$, $\varphi'(t) \le 0$; hence,
$$\varphi(t) < 0 \quad\text{for}\quad t \ge t_0.$$
Therefore,
$$x^2(t) + \frac{1}{p(t)}\,(x'(t))^2 < 0 \quad\text{for}\quad t \ge t_0.$$
The last inequality implies that $x'(t) \ne 0$, $t \ge t_0$. $\square$
EXAMPLE 6.2.3. All the solutions $x(t)$ of the equation $\ddot{x} - (1 + 1/t)x = 0$ with initial conditions $t_0 \ge 1$, $x_0$, $x_0'$ such that $x_0^2(t_0 + 1) - t_0(x_0')^2 < 0$ are monotonic.
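Both the Sonine-Polya theorem and Corollary 6.2.1 are easy to observe numerically. The sketch below (an aside; NumPy and SciPy assumed, the integration range is arbitrary) treats the equation of Example 6.2.2 and checks that the extremal values of $|x|$ do not increase and that $|x(t)| \le 1$.

    # Illustration of Theorem 6.2.2 and Corollary 6.2.1 on Example 6.2.2.
    import numpy as np
    from scipy.integrate import solve_ivp

    sol = solve_ivp(lambda t, y: [y[1], -(1.0 + t * t) * y[0]], (0.0, 20.0), [1.0, 0.0],
                    max_step=1e-3, rtol=1e-10, atol=1e-12)
    x, v = sol.y
    ext = np.where(np.sign(v[:-1]) * np.sign(v[1:]) < 0)[0]   # sign changes of x' locate extrema
    peaks = np.abs(x[ext])
    print("max |x(t)|            =", np.abs(x).max())          # should not exceed 1
    print("extrema nonincreasing:", bool(np.all(np.diff(peaks) <= 1e-6)))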
The following theorem is valid for arbitrary functions $u(t)$ such that $u \in C^2(\mathbb{R}_+)$ and can be used in our problems.

THEOREM 6.2.3. Let $u \in C^2(\mathbb{R}_+)$; then the boundedness of $|u(t)|$ and $|u''(t)|$ implies that $|u'(t)|$ is bounded for $t \in \mathbb{R}_+$.

PROOF. Let
$$\sup_{t \in \mathbb{R}_+}\{|u(t)|, |u''(t)|\} = M.$$
Let us set
$$u'' - u = f(t); \tag{6.2.9}$$
considering $u(t)$ as a solution of equation (6.2.9), by the Lagrange method we obtain
$$u(t) = c_1 e^t + c_2 e^{-t} + \frac{e^t}{2}\int_0^t f(\tau)e^{-\tau}\,d\tau - \frac{e^{-t}}{2}\int_0^t f(\tau)e^{\tau}\,d\tau.$$
Since $|f(t)| \le 2M$ for $t \in \mathbb{R}_+$, the last summand has a finite limit as $t \to \infty$ (L'Hospital's rule); therefore, it is bounded. The boundedness of $|u(t)|$ necessarily implies that
$$c_1 = -\frac{1}{2}\int_0^\infty f(\tau)e^{-\tau}\,d\tau.$$
Hence, we find that
$$u(t) = -\frac{e^t}{2}\int_t^\infty f(\tau)e^{-\tau}\,d\tau + c_2 e^{-t} - \frac{e^{-t}}{2}\int_0^t f(\tau)e^{\tau}\,d\tau;$$
therefore,
$$u'(t) = -\frac{e^t}{2}\int_t^\infty f(\tau)e^{-\tau}\,d\tau - c_2 e^{-t} + \frac{e^{-t}}{2}\int_0^t f(\tau)e^{\tau}\,d\tau. \tag{6.2.10}$$
All three summands on the right are bounded for $t \in \mathbb{R}_+$; thus, $\sup_{t \ge 0}|u'(t)| < \infty$. $\square$
COROLLARY 6.2.3. If $\sup_{t \ge 0}|p(t)| < \infty$ in equation (6.2.2), then the boundedness of all its solutions implies their stability.

COROLLARY 6.2.4. Let $|p(t)| \le M$, $t \in \mathbb{R}_+$; then, if all the solutions of equation (6.2.2) tend to zero, they are asymptotically stable.

PROOF. Equation (6.2.2) is asymptotically stable if and only if all its solutions together with their derivatives tend to zero. Let $x(t) \to 0$ as $t \to \infty$; from equation (6.2.2) we also have $x''(t) \to 0$ as $t \to \infty$, since $|p(t)| \le M$. Set $f(t) = x''(t) - x(t)$; then equality (6.2.10), where $f(t) \to 0$, is valid for $x'(t)$; therefore, $x'(t) \to 0$ as $t \to \infty$. $\square$
Let us turn to the equation $\ddot{x} + m^2 x = 0$. It is stable, since all its solutions are bounded together with their derivatives (see (6.1.4)). Consider the perturbed equation
$$\ddot{x} + [m^2 + \psi(t)]x = 0, \qquad m \ne 0, \tag{6.2.11}$$
and find out for which $\psi(t)$ the stability is preserved. As was shown in Theorem 4.3.1, the condition
$$\int^\infty |\psi(\tau)|\,d\tau < \infty$$
is sufficient, and it was noted in the same theorem that the stability of linear systems with constant coefficients may not be preserved under a perturbation which merely tends to zero as $t \to \infty$.
To confirm this for equation (6.2.11), we consider an example.

EXAMPLE 6.2.4 [4]. The equation
$$\ddot{x} + \Bigl[1 + \frac{4\cos t\sin t}{t} + \frac{\cos^2 t\,\sin^2 t}{t^2}\Bigr]x = 0, \qquad t \ge 1,$$
has the solution
$$x(t) = \cos t\,\exp\Bigl(\int_1^t \frac{\cos^2\tau}{\tau}\,d\tau\Bigr),$$
which is unbounded because the integral is unbounded as $t \to \infty$.
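The unboundedness in Example 6.2.4 can be confirmed numerically. The sketch below (an aside; NumPy and SciPy assumed, the final time and tolerances are arbitrary) integrates the equation starting from the initial data of the explicit solution and prints the running maximum of $|x|$, which keeps growing (roughly like $t^{1/2}$).

    # Numerical confirmation of Example 6.2.4 (illustration only).
    import numpy as np
    from scipy.integrate import solve_ivp

    def coeff(t):
        return 1.0 + 4.0 * np.sin(t) * np.cos(t) / t + (np.sin(t) * np.cos(t) / t) ** 2

    x0 = np.cos(1.0)
    v0 = -np.sin(1.0) + np.cos(1.0) ** 3                      # x'(1) of the explicit solution
    sol = solve_ivp(lambda t, y: [y[1], -coeff(t) * y[0]], (1.0, 2000.0), [x0, v0],
                    max_step=0.01, rtol=1e-9, atol=1e-12)
    x, t = np.abs(sol.y[0]), sol.t
    for T in (10, 100, 1000, 2000):
        print(f"max |x| on [1, {T}] = {x[t <= T].max():.2f}")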
THEOREM 6.2.4 [4]. Let
1) $\psi(t) \to 0$ as $t \to \infty$;
2) $\int^\infty |\psi'(\tau)|\,d\tau < \infty$.
Then all the solutions of equation (6.2.11) are stable.
PROOF. Let us show that the solutions of equation (6.2.11) are bounded as $t \to \infty$ (here we set $m = 1$). Take a nontrivial solution $x(t)$, substitute it in the equation, multiply the left-hand side by $x'(t)$, and integrate the result from $t_0$ to $t$:
$$\frac{1}{2}x'^2(t) + \frac{1}{2}x^2(t) + \int_{t_0}^t \psi(\tau)x(\tau)x'(\tau)\,d\tau = c_1.$$
Further, integrating by parts, we find that
$$\frac{x'^2(t)}{2} + \frac{1 + \psi(t)}{2}\,x^2(t) - \frac{1}{2}\int_{t_0}^t \psi'(\tau)x^2(\tau)\,d\tau = c_2. \tag{6.2.12}$$
Since $\psi(t) \to 0$ as $t \to \infty$, there exists a sufficiently large $t_0$ such that the inequality $1 + \psi(t) > 1/2$ holds for $t \ge t_0$. From the identity (6.2.12) we have
$$x^2(t) \le 4|c_2| + 2\int_{t_0}^t |\psi'(\tau)|\,x^2(\tau)\,d\tau.$$
Applying the Gronwall-Bellman lemma (see Appendix) to the last inequality, we obtain
$$x^2(t) \le 4|c_2|\exp\Bigl(2\int_{t_0}^t |\psi'(\tau)|\,d\tau\Bigr);$$
this ensures the boundedness of $x(t)$ as $t \to \infty$. Thus, all the solutions of equation (6.2.11) are bounded; this, according to Corollary 6.2.3, implies their stability. $\square$
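For contrast with Example 6.2.4, here is a perturbation satisfying both conditions of Theorem 6.2.4; the sketch (an aside; NumPy and SciPy assumed, the particular $\psi$ and the time window are arbitrary) shows that the windowed maxima of $|x|$ do not grow.

    # Illustration of Theorem 6.2.4 with psi(t) = 1/(1+t) (illustration only).
    import numpy as np
    from scipy.integrate import solve_ivp

    psi = lambda t: 1.0 / (1.0 + t)                            # psi -> 0 and int |psi'| < infinity
    sol = solve_ivp(lambda t, y: [y[1], -(1.0 + psi(t)) * y[0]], (0.0, 2000.0), [1.0, 0.0],
                    max_step=0.01, rtol=1e-9, atol=1e-12)
    x, t = np.abs(sol.y[0]), sol.t
    for a, b in ((0, 50), (500, 550), (1500, 1550), (1950, 2000)):
        win = (t >= a) & (t <= b)
        print(f"max |x| on [{a}, {b}] = {x[win].max():.4f}")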
REMARK 6.2.1. It is clear from Example 6.2.4 that the second condition of the theorem cannot be changed to $\psi'(t) \to 0$ as $t \to \infty$.

§3. Linear equations with periodic coefficients
In this section we consider coefficient criteria, due to Lyapunov, for stability and instability of the equation
$$\ddot{x} + p(t)x = 0, \qquad p(t + \omega) = p(t), \quad p \in C(\mathbb{R}), \tag{6.3.1}$$
and give a generalization of the stability criterion. Equation (6.3.1) is equivalent to the system
$$\frac{dx}{dt} = \dot{x}, \qquad \frac{d\dot{x}}{dt} = -p(t)x, \tag{6.3.2}$$
whose stability is determined by the location of the multipliers with respect to the unit circle (Theorem 4.2.3). Let $f(t)$ and $\varphi(t)$ be solutions of equation (6.3.1) such that
$$f(0) = 1, \quad f'(0) = 0, \tag{6.3.3}$$
$$\varphi(0) = 0, \quad \varphi'(0) = 1. \tag{6.3.4}$$
The matriciant of system (6.3.2) has the form
$$X(t, 0) = \begin{pmatrix} f(t) & \varphi(t) \\ f'(t) & \varphi'(t) \end{pmatrix},$$
and the multipliers of the system are the eigenvalues of the matrix $X(\omega, 0)$. Note that
$$\operatorname{Det} X(\omega, 0) = \operatorname{Det} X(0, 0)\exp\int_0^\omega \operatorname{Sp} A(\tau)\,d\tau = 1,$$
since the matrix of the coefficients of system (6.3.2) has trace zero and $X(0, 0) = E$. We write the equation determining the multipliers in the form
$$\operatorname{Det}(X(\omega, 0) - \rho E) = \rho^2 - A\rho + 1 = 0, \tag{6.3.5}$$
$$A = \operatorname{Sp} X(\omega, 0) = f(\omega) + \varphi'(\omega). \tag{6.3.6}$$
The number $A$ is called the Lyapunov constant. From (6.3.5) we have
$$\rho_{1,2} = \frac{A \pm \sqrt{A^2 - 4}}{2};$$
hence, by Theorem 4.2.3, we have the following statements.

PROPOSITION 6.3.1. If $|A| > 2$, then both multipliers $\rho_1$ and $\rho_2$ are real, different, and $|\rho_1| > 1$, $|\rho_2| < 1$; therefore, system (6.3.2) and, consequently, also equation (6.3.1) are unstable.

PROPOSITION 6.3.2. If $|A| < 2$, then $\rho_1 = \bar{\rho}_2$, $|\rho_1| = |\rho_2| = 1$; therefore, system (6.3.2) and, consequently, also equation (6.3.1) are stable.

PROPOSITION 6.3.3. If $|A| = 2$, then $\rho_1 = \rho_2 = \rho$, $|\rho| = 1$ and, consequently, system (6.3.2) has a periodic ($\rho = 1$) or antiperiodic ($\rho = -1$) solution, and the problem of stability reduces to the study of the canonical structure of the matrix $X(\omega, 0)$.

PROPOSITION 6.3.4. System (6.3.2) and, consequently, equation (6.3.1) cannot be asymptotically stable, since $\rho_1\rho_2 = 1$ and the multipliers cannot simultaneously lie strictly inside the unit circle.
Thus, to determine the character of stability of equation (6.3.1), we have to estimate the Lyapunov constant $A$ defined by (6.3.6). To this end, we use the method of a parameter. We write equation (6.3.1) in the form
$$\ddot{x} = \varepsilon p(t)x. \tag{6.3.7}$$
For $\varepsilon = -1$ it coincides with equation (6.3.1). Let $f(t, \varepsilon)$ and $\varphi(t, \varepsilon)$ be the solutions of equation (6.3.7) whose initial data are consistent with (6.3.3) and (6.3.4), i.e.,
$$f(0, \varepsilon) = 1, \quad f'(0, \varepsilon) = 0, \tag{6.3.8}$$
$$\varphi(0, \varepsilon) = 0, \quad \varphi'(0, \varepsilon) = 1. \tag{6.3.9}$$
By the analyticity of the matriciant with respect to the parameter (Chapter I, §2, property 5), both solutions can be represented in the domain $t \in \mathbb{R}$, $|\varepsilon| < \infty$ by the series
$$f(t, \varepsilon) = \sum_{k=0}^\infty f_k(t)\varepsilon^k, \tag{6.3.10}$$
$$\varphi(t, \varepsilon) = \sum_{k=0}^\infty \varphi_k(t)\varepsilon^k, \tag{6.3.11}$$
whose coefficients are determined recurrently from equation (6.3.7) under the corresponding choice of initial conditions. An analogous statement also holds for the derivatives $f'(t, \varepsilon)$ and $\varphi'(t, \varepsilon)$.
We substitute the series (6.3.10) in equation (6.3.7), equate the coefficients of like powers of $\varepsilon$ on the left- and right-hand sides, and, taking into account (6.3.8), we obtain
$$\ddot{f}_0 = 0, \qquad f_0(0) = 1, \quad f_0'(0) = 0,$$
$$\ddot{f}_k = p(t)f_{k-1}(t), \qquad f_k(0) = 0, \quad f_k'(0) = 0, \qquad k = 1, 2, \dots,$$
or
$$f_0(t) = 1,$$
$$f_k(t) = \int_0^t dt_1\int_0^{t_1} p(t_2)f_{k-1}(t_2)\,dt_2 = \int_0^t (t - t_2)p(t_2)f_{k-1}(t_2)\,dt_2, \qquad k = 1, 2, \dots.$$
Hence,
$$\begin{aligned}
f(t) = f(t, -1) = {}& 1 - \int_0^t (t - t_1)p(t_1)\,dt_1 + \int_0^t (t - t_1)p(t_1)\,dt_1\int_0^{t_1} (t_1 - t_2)p(t_2)\,dt_2 + \cdots \\
&+ (-1)^k\int_0^t (t - t_1)p(t_1)\,dt_1\cdots\int_0^{t_{k-1}} (t_{k-1} - t_k)p(t_k)\,dt_k + \cdots.
\end{aligned} \tag{6.3.12}$$
Similarly, from (6.3.7), by virtue of (6.3.11) and the initial conditions (6.3.9), we have
$$\ddot{\varphi}_0 = 0, \qquad \varphi_0(0) = 0, \quad \varphi_0'(0) = 1,$$
$$\ddot{\varphi}_k = p(t)\varphi_{k-1}(t), \qquad \varphi_k(0) = 0, \quad \varphi_k'(0) = 0, \qquad k = 1, 2, \dots,$$
or
$$\varphi_0(t) = t,$$
$$\varphi_k(t) = \int_0^t (t - t_1)p(t_1)\varphi_{k-1}(t_1)\,dt_1, \qquad k = 1, 2, \dots;$$
hence,
$$\begin{aligned}
\varphi(t) = \varphi(t, -1) = {}& t - \int_0^t (t - t_1)p(t_1)t_1\,dt_1 + \int_0^t (t - t_1)p(t_1)\,dt_1\int_0^{t_1} (t_1 - t_2)t_2 p(t_2)\,dt_2 + \cdots \\
&+ (-1)^k\int_0^t (t - t_1)p(t_1)\,dt_1\cdots\int_0^{t_{k-1}} (t_{k-1} - t_k)t_k p(t_k)\,dt_k + \cdots.
\end{aligned}$$
To determine the Lyapunov constant, we need an expression for $\varphi'(t)$; we write it as follows:
$$\begin{aligned}
\varphi'(t) = {}& 1 - \int_0^t p(t_1)t_1\,dt_1 + \int_0^t p(t_1)\,dt_1\int_0^{t_1} (t_1 - t_2)t_2 p(t_2)\,dt_2 + \cdots \\
&+ (-1)^k\int_0^t p(t_1)\,dt_1\cdots\int_0^{t_{k-1}} (t_{k-1} - t_k)t_k p(t_k)\,dt_k + \cdots.
\end{aligned} \tag{6.3.13}$$
From (6.3.6), (6.3.12), and (6.3.13) we obtain
$$\begin{aligned}
A = f(\omega) + \varphi'(\omega) = {}& 2 - \omega\int_0^\omega p(t_1)\,dt_1 + \int_0^\omega dt_1\int_0^{t_1} (\omega - t_1 + t_2)(t_1 - t_2)p(t_1)p(t_2)\,dt_2 + \cdots \\
&+ (-1)^k\int_0^\omega dt_1\int_0^{t_1} dt_2\cdots\int_0^{t_{k-1}} (\omega - t_1 + t_k)(t_1 - t_2)(t_2 - t_3)\cdots(t_{k-1} - t_k) \\
&\qquad\times p(t_1)\cdots p(t_k)\,dt_k + \cdots.
\end{aligned} \tag{6.3.14}$$
Now we turn to concrete results.

THEOREM 6.3.1 (Lyapunov [19]). If
$$p(t) \le 0, \qquad p \in C(\mathbb{R}), \qquad p(t) \not\equiv 0,$$
then equation (6.3.1) is unstable; moreover, both multipliers are positive, one of them being greater than one, and the other less than one.
PROOF. The equality (6.3.14) for $p(t) \le 0$ implies that $A > 2$; therefore, our statement holds. The positivity of both multipliers for $A > 2$ follows from (6.3.5). $\square$

THEOREM 6.3.2 (Lyapunov [19]). If $p(t) \ge 0$, $p \in C(\mathbb{R})$, and
$$0 < \omega\int_0^\omega p(t)\,dt \le 4, \tag{6.3.15}$$
then equation (6.3.1) is stable; the multipliers are complex conjugate and lie on the unit circle.
PROOF. Let us write (6.3.14) in the form
$$2 - A = I_1 - I_2 + I_3 - \cdots \tag{6.3.16}$$
and show that the series on the right of the formula (6.3.16) is a Leibniz series. Indeed, this series is alternating, the absolute values of its terms decrease monotonically, and $\lim_{k\to\infty} I_k = 0$. The first property follows from (6.3.14) for $p(t) \ge 0$, and the third, from the convergence of the series (6.3.14).
Let us show the second property. The general form of $I_k$ is given in (6.3.14). Thus,
$$\begin{aligned}
I_{k+1} = \int_0^\omega dt_1\cdots\int_0^{t_{k-1}} dt_k\,& (t_1 - t_2)(t_2 - t_3)\cdots(t_{k-1} - t_k)p(t_1)\cdots p(t_k) \\
&\times \int_0^{t_k} (\omega - t_1 + t_{k+1})(t_k - t_{k+1})p(t_{k+1})\,dt_{k+1}.
\end{aligned}$$
Using the evident inequality $xy \le (x + y)^2/4$, we find
$$(\omega - t_1 + t_{k+1})(t_k - t_{k+1}) \le \frac{1}{4}(\omega - t_1 + t_k)^2 \le \frac{\omega}{4}(\omega - t_1 + t_k),$$
and, finally,
$$I_{k+1} \le \frac{\omega}{4}\int_0^\omega p(t)\,dt\cdot I_k \le I_k.$$
From the last inequality it follows that $I_k$ decreases monotonically with the growth of $k$. From the estimate $0 < S \le I_1$ for the sum of a Leibniz series, in the case (6.3.16) we obtain
$$0 < 2 - A \le \omega\int_0^\omega p(t)\,dt \le 4;$$
hence, $|A| < 2$; therefore, Theorem 6.3.2 is valid. $\square$
COROLLARY 6.3.1. If equation (6.3.1) has an unbounded solution for $p(t) \ge 0$, then
$$\omega\int_0^\omega p(u)\,du > 4.$$
EXAMPLE 6.3.1. For which values of the real parameters $a$ and $b > 0$ is the equation $\ddot{x} + (a + \sin bt)x = 0$ stable and for which is it unstable?
1) Let us check for which $a$ and $b > 0$ the condition (6.3.15) holds. Here $\omega = 2\pi/b$, and
a) $p(t) \ge 0 \iff a \ge 1$;
b) $0 < \dfrac{2\pi}{b}\displaystyle\int_0^{2\pi/b}(a + \sin bt)\,dt = \dfrac{2\pi}{b}\cdot\dfrac{2\pi a}{b} = \dfrac{4\pi^2 a}{b^2} \le 4 \iff \pi^2 a \le b^2$.
Finally, $b^2 \ge \pi^2 a \ge \pi^2$.
2) The condition $a + \sin bt \le 0$ gives the answer to the second question, i.e., $a \le -1$, $b \in \mathbb{R}_+$.
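The sufficient conditions of this example can be compared with the exact multiplier test. The sketch below (an aside; NumPy and SciPy assumed, the sample parameter pairs are arbitrary) computes $f$ and $\varphi$ of (6.3.3)-(6.3.4) numerically over one period and evaluates the Lyapunov constant $A = f(\omega) + \varphi'(\omega)$; by Propositions 6.3.1-6.3.2, $|A| < 2$ means stability and $|A| > 2$ instability.

    # Numerical Lyapunov constant for x'' + (a + sin(b t)) x = 0 (illustration only).
    import numpy as np
    from scipy.integrate import solve_ivp

    def lyapunov_constant(a, b):
        rhs = lambda t, y: [y[1], -(a + np.sin(b * t)) * y[0]]
        omega = 2.0 * np.pi / b
        f = solve_ivp(rhs, (0.0, omega), [1.0, 0.0], rtol=1e-11, atol=1e-13)
        phi = solve_ivp(rhs, (0.0, omega), [0.0, 1.0], rtol=1e-11, atol=1e-13)
        return f.y[0, -1] + phi.y[1, -1]                       # f(omega) + phi'(omega)

    for a, b in ((1.0, 4.0), (1.0, 2.0), (-1.5, 3.0), (0.3, 5.0)):
        A = lyapunov_constant(a, b)
        verdict = "stable" if abs(A) < 2 else "unstable or critical"
        print(f"a = {a:5.2f}, b = {b:4.1f}:  A = {A:8.4f}  ->  {verdict}")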
The following result weakens the conditions of Theorem 6.3.2, taking into account, e.g., the possibility of the coefficient $p(t)$ having alternating sign. First we prove a lemma.

LEMMA 6.3.1. Let
$$\int_0^\omega p(t)\,dt \ge 0. \tag{6.3.17}$$
Then the normal solution of equation (6.3.1) corresponding to a real multiplier has a zero on any closed interval of length $\omega$.
PROOF. Let $\rho$ be a real multiplier of equation (6.3.1). According to Theorem 1.4.1, a nontrivial real solution $x(t)$ such that
$$x(t + \omega) = \rho x(t) \tag{6.3.18}$$
corresponds to it. Assume that $x(t) > 0$ for $t \in [0, \omega]$, contrary to the claim. It follows then from (6.3.18) that $x(t) > 0$ for $t \in \mathbb{R}$. Substitute this solution in equation (6.3.1), divide it by $x(t)$, and integrate the identity obtained from $0$ to $\omega$:
$$\int_0^\omega \frac{\ddot{x}(t)}{x(t)}\,dt + \int_0^\omega p(t)\,dt = 0,$$
or
$$\frac{\dot{x}(t)}{x(t)}\Big|_0^\omega + \int_0^\omega \Bigl(\frac{\dot{x}(t)}{x(t)}\Bigr)^2 dt + \int_0^\omega p(\tau)\,d\tau = 0.$$
By virtue of (6.3.18), the first summand in the last identity is equal to zero, the second is positive, and we come to a contradiction with the condition (6.3.17). $\square$
THEOREM 6.3.3. Let
1) $\int_0^\omega p(t)\,dt > 0$;
2) $\omega\int_0^\omega p_+(t)\,dt \le 4$, where $p_+(t) = \max(p(t), 0)$.
Then equation (6.3.1) is stable.
PROOF. Let us show that under the conditions of the theorem all the solutions of equation (6.3.1) are bounded; hence, by Corollary 6.2.3, they are stable. Assume the opposite: let equation (6.3.1) have an unbounded solution. Propositions 6.3.1-6.3.3 imply that this is possible only if there exists a real multiplier. To each real multiplier $\rho$ there corresponds a normal solution $x(t)$, which, under the first condition of the theorem, by virtue of Lemma 6.3.1, necessarily has a zero on the closed interval $[0, \omega]$. The identity (6.3.18) implies that the distance between consecutive zeros of $x(t)$ does not exceed $\omega$. Let $x(t_0) = x(t_1) = 0$ be consecutive zeros, $t_0 \in [0, \omega]$, $t_1 - t_0 \le \omega$. By Theorem 6.1.5 and the $\omega$-periodicity of $p(t)$, we have
$$\omega\int_0^\omega p_+(\tau)\,d\tau \ge (t_1 - t_0)\int_{t_0}^{t_1} p_+(\tau)\,d\tau > 4;$$
this contradicts the second condition of the theorem. $\square$

EXAMPLE 6.3.2. For which values of the real parameters $a$ and $b > 0$ is the equation
$\ddot{x} + (a + \sin bt)x = 0$ stable? Let us check the conditions of the theorem:
1) $\displaystyle\int_0^{2\pi/b}(a + \sin bt)\,dt = \frac{2\pi a}{b} > 0 \iff a > 0$.
If $a \ge 1$, then $p(t) \ge 0$ and the conditions of Theorem 6.3.2 are satisfied (the answer is given in Example 6.3.1). Now let $0 < a < 1$. Then
$$\int_0^{2\pi/b}(a + \sin bt)_+\,dt = \int_0^{(\pi + \arcsin a)/b}(a + \sin bt)\,dt + \int_{(2\pi - \arcsin a)/b}^{2\pi/b}(a + \sin bt)\,dt = \frac{a\pi + 2a\arcsin a + 2\sqrt{1 - a^2}}{b}.$$
In this case we obtain
$$0 < a < 1, \qquad \frac{2\pi\bigl(a\pi + 2a\arcsin a + 2\sqrt{1 - a^2}\bigr)}{b^2} \le 4.$$
Appendix

1. The Gronwall-Bellman lemma and its generalization.

THE GRONWALL-BELLMAN LEMMA [19]. Let functions $u(t) \ge 0$, $v(t) \ge 0$ be defined and continuous for $t \ge t_0$ and
$$u(t) \le A + \int_{t_0}^t u(\tau)v(\tau)\,d\tau, \tag{A.1}$$
where $A \in \mathbb{R}_+$. Then for $t \ge t_0$ we have
$$u(t) \le A\,e^{\int_{t_0}^t v(\tau)\,d\tau}. \tag{A.2}$$

PROOF. Denote the right-hand side of the inequality (A.1) by $g(t)$; hence,
$$g'(t) = u(t)v(t) \le g(t)v(t),$$
or $g'(t) - v(t)g(t) \le 0$. Multiplying the last inequality by $\exp\bigl(-\int_{t_0}^t v(\tau)\,d\tau\bigr)$, we write
$$\frac{d}{dt}\Bigl[g(t)\exp\Bigl(-\int_{t_0}^t v(\tau)\,d\tau\Bigr)\Bigr] \le 0.$$
Integrating the result from $t_0$ to $t$, we obtain
$$g(t)\exp\Bigl(-\int_{t_0}^t v(\tau)\,d\tau\Bigr) - g(t_0) \le 0,$$
or $g(t) \le A\exp\int_{t_0}^t v(\tau)\,d\tau$. Taking into account that $u(t) \le g(t)$, we obtain (A.2). $\square$

COROLLARY. If in the conditions of the lemma $A = 0$, then $u(t) \equiv 0$ for $t \ge t_0$.
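A small numerical illustration of the lemma (an aside; NumPy assumed, the choice $v(t) = 1/(1+t)$ and $A = 2$ is arbitrary): in the borderline case, where (A.1) holds with equality, the solution is $u(t) = A(1 + t)$ and the bound (A.2) is attained exactly, so the estimate cannot be improved.

    # The borderline (equality) case of the Gronwall-Bellman lemma (illustration only).
    import numpy as np

    A, t = 2.0, np.linspace(0.0, 10.0, 1001)
    v = 1.0 / (1.0 + t)
    u = A * (1.0 + t)                                           # solves u = A + int_0^t u v ds for this v

    # check the integral equality by trapezoidal quadrature
    increments = 0.5 * (u[1:] * v[1:] + u[:-1] * v[:-1]) * np.diff(t)
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    print("defect of the integral equation:", np.max(np.abs(u - (A + integral))))
    print("bound A*exp(int v) coincides with u:", bool(np.allclose(u, A * np.exp(np.log1p(t)))))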
A GENERALIZATION OF THE GRONWALL-BELLMAN LEMMA. Let the function $u(t)$ be positive and continuous for $t \in (a, b)$ and let it satisfy the integral inequality
$$u(t) \le u(\tau) + \Bigl|\int_\tau^t u(z)v(z)\,dz\Bigr| \tag{A.3}$$
for any $t, \tau \in (a, b)$, where $v \in C(a, b)$ and $v(t) \ge 0$, $t \in (a, b)$. Then the two-sided estimate
$$u(t_0)\,e^{-\int_{t_0}^t v(z)\,dz} \le u(t) \le u(t_0)\,e^{\int_{t_0}^t v(z)\,dz} \tag{A.4}$$
is valid for $a < t_0 \le t < b$.
PROOF. 1) $t \ge \tau$. In this case the inequality (A.3) has the form
$$u(t) \le u(\tau) + \int_\tau^t u(z)v(z)\,dz,$$
and the right-hand side of the estimate (A.4) follows from the last inequality by the Gronwall-Bellman lemma for $\tau = t_0$.
2) $t \le \tau$. In this case the inequality (A.3) can be rewritten in the following way:
$$u(t) \le u(\tau) + \int_t^\tau u(z)v(z)\,dz. \tag{A.5}$$
Denoting the right-hand side of (A.5) by $g(t)$, we have
$$g'(t) = -u(t)v(t) \ge -v(t)g(t),$$
or $g'(t) + v(t)g(t) \ge 0$. Multiplying the last inequality by $\exp\bigl(\int_\tau^t v(z)\,dz\bigr)$, we obtain
$$\frac{d}{dt}\Bigl(g(t)\exp\int_\tau^t v(z)\,dz\Bigr) \ge 0. \tag{A.6}$$
Let us integrate (A.6) from $t$ to $\tau$; then
$$g(\tau) - g(t)\exp\int_\tau^t v(z)\,dz \ge 0, \quad\text{or}\quad g(\tau) \ge g(t)\exp\int_\tau^t v(z)\,dz.$$
Taking into account that $g(\tau) = u(\tau)$ and $g(t) \ge u(t)$, we have
$$u(\tau) \ge u(t)\exp\int_\tau^t v(z)\,dz = u(t)\exp\Bigl(-\int_t^\tau v(z)\,dz\Bigr). \tag{A.7}$$
Recall that $\tau \ge t$. Changing $t$ to $t_0$ and $\tau$ to $t$ in (A.7), we obtain the left-hand side of the estimate (A.4). $\square$

2. Regularity of a linear system almost reducible to an autonomous one.

THEOREM 3.5.2. A linear system almost reducible to a system with constant coefficients is regular.

PROOF. Let a system
$$\dot{x} = A(t)x, \qquad A \in C(\mathbb{R}_+), \tag{A.8}$$
be almost reducible to a system $\dot{y} = By$, where $B$ is a constant matrix. According to Theorem 3.4.3, the system (A.8) is also almost reducible to a diagonal system whose diagonal consists of the real parts of the eigenvalues of the matrix $B$. Denote them by $\lambda_1, \lambda_2, \dots, \lambda_n$. Thus, for any $\delta > 0$ there exists a Lyapunov transformation $x = L_\delta(t)z$ reducing the system (A.8) to the system
$$\dot{z} = \bigl(\operatorname{diag}[\lambda_1, \lambda_2, \dots, \lambda_n] + \Phi(t)\bigr)z, \tag{A.9}$$
where $\sup_t\|\Phi(t)\| \le \delta$. By the stability of characteristic exponents of systems with constant coefficients, the spectrum of the system (A.9) is $\delta$-close to $\{\lambda_1, \lambda_2, \dots, \lambda_n\}$, and, by the properties of Lyapunov transformations, it coincides with the spectrum of the system (A.8). Let $X(t)$ and $Z(t)$ be normal fundamental matrices of the systems (A.8) and (A.9) such that $X(t) = L_\delta(t)Z(t)$; therefore, $\operatorname{Det} X(t) = \operatorname{Det} L_\delta(t)\operatorname{Det} Z(t)$. Hence, by the Ostrogradskii-Liouville formula, we have
$$\operatorname{Det} X(t_0)\exp\int_{t_0}^t \operatorname{Sp} A(\tau)\,d\tau = \operatorname{Det} L_\delta(t)\operatorname{Det} Z(t_0)\exp\Bigl(\int_{t_0}^t \sum_{i=1}^n \lambda_i\,d\tau + \int_{t_0}^t \operatorname{Sp}\Phi(\tau)\,d\tau\Bigr).$$
Passing to absolute values in both sides of the last equality, taking logarithms, and dividing by $t$, we obtain
$$\frac{1}{t}\int_{t_0}^t \operatorname{Re}\operatorname{Sp} A(\tau)\,d\tau = \frac{1}{t}\ln\bigl|\operatorname{Det}\bigl(L_\delta(t)Z(t_0)X^{-1}(t_0)\bigr)\bigr| + \frac{t - t_0}{t}\sum_{i=1}^n \lambda_i + \frac{1}{t}\int_{t_0}^t \operatorname{Re}\operatorname{Sp}\Phi(\tau)\,d\tau.$$
Note that $|\operatorname{Sp}\Phi(t)| \le n\delta$. Thus,
$$\frac{t - t_0}{t}\sum_{i=1}^n \lambda_i - \frac{t - t_0}{t}\,n\delta + \frac{1}{t}\ln\bigl|\operatorname{Det}\bigl(L_\delta(t)Z(t_0)X^{-1}(t_0)\bigr)\bigr| \le \frac{1}{t}\int_{t_0}^t \operatorname{Re}\operatorname{Sp} A(\tau)\,d\tau \le \frac{t - t_0}{t}\sum_{i=1}^n \lambda_i + \frac{t - t_0}{t}\,n\delta + \frac{1}{t}\ln\bigl|\operatorname{Det}\bigl(L_\delta(t)Z(t_0)X^{-1}(t_0)\bigr)\bigr|.$$
In the last inequality we pass to the limit as $t \to \infty$, taking into account that $|\operatorname{Det} L_\delta(t)|$ is bounded and bounded away from zero on $\mathbb{R}_+$ ($L_\delta$ is a Lyapunov transformation). We obtain
$$\sum_{i=1}^n \lambda_i - n\delta \le \varliminf_{t \to \infty}\frac{1}{t}\int_{t_0}^t \operatorname{Re}\operatorname{Sp} A(\tau)\,d\tau \le \varlimsup_{t \to \infty}\frac{1}{t}\int_{t_0}^t \operatorname{Re}\operatorname{Sp} A(\tau)\,d\tau \le \sum_{i=1}^n \lambda_i + n\delta.$$
Since $\delta > 0$ is arbitrary, the limit exists and
$$\lim_{t \to \infty}\frac{1}{t}\int_{t_0}^t \operatorname{Re}\operatorname{Sp} A(\tau)\,d\tau = \sum_{i=1}^n \lambda_i;$$
this, by Lemma 3.5.1, shows that system (A.8) is regular. $\square$
References

1. V. M. Alekseev, On the asymptotic behavior of solutions of weakly nonlinear systems of ordinary differential equations, Dokl. Akad. Nauk SSSR 134 (1960), no. 2, 247-250; English transl. in Soviet Math. Dokl. 1 (1960).
2. V. M. Alekseev and R. E. Vinograd, To the method of "freezing", Vestnik Moskov. Univ. Ser. I Mat. Mekh. (1966), no. 5, 30-35. (Russian)
3. V. P. Basov, On the structure of a solution of a regular system, Vestnik Leningrad. Univ. (1952), no. 12, 3-8. (Russian)
4. R. Bellman, Stability theory of differential equations, McGraw-Hill, New York, 1953.
5. Yu. N. Bibikov, A general course of ordinary differential equations, Leningrad Univ., Leningrad, 1981. (Russian)
6. Yu. S. Bogdanov, To the theory of systems of linear differential equations, Dokl. Akad. Nauk SSSR 104 (1955), no. 6, 813-814. (Russian)
7. _______, A note on §8 of I. G. Malkin's monograph "The theory of stability of motion", Prikl. Mat. Mekh. 20 (1956), no. 3, 448. (Russian)
8. _______, Characteristic numbers of systems of linear differential equations, Mat. Sb. 41 (1957), no. 4, 481-498. (Russian)
9. B. F. Bylov, R. E. Vinograd, D. M. Grobman, and V. V. Nemyckii, The theory of Lyapunov exponents and its applications to problems of stability, "Nauka", Moscow, 1966. (Russian)
10. B. F. Bylov, Almost reducible systems of differential equations, Sibirsk. Mat. Zh. 3 (1962), no. 3, 333-359. (Russian)
11. _______, On the reduction of systems of linear equations to the diagonal form, Mat. Sb. 67 (1965), no. 3, 338-344; English transl. in Amer. Math. Soc. Transl. Ser. 2 89 (1970), 51-59.
12. B. F. Bylov and N. A. Izobov, Necessary and sufficient conditions for stability of characteristic exponents of a diagonal system, Differentsial'nye Uravneniya 5 (1969), no. 10, 1785-1793; English transl. in Differential Equations 5 (1969).
13. _______, Necessary and sufficient conditions for stability of characteristic exponents of a linear system, Differentsial'nye Uravneniya 5 (1969), no. 10, 1794-1903; English transl. in Differential Equations 5 (1969).
14. B. F. Bylov, On the reduction of a linear system to the block-triangular form, Differentsial'nye Uravneniya 23 (1987), no. 12, 2027-2031; English transl. in Differential Equations 23 (1987).
15. R. E. Vinograd, A new proof of the Perron theorem and some properties of regular systems, Uspekhi Mat. Nauk 9 (1954), no. 2, 129-136. (Russian)
16. F. R. Gantmakher, The theory of matrices, "Nauka", Moscow, 1967; English transl., Chelsea, New York, 1959.
17. D. M. Grobman, Characteristic exponents of systems close to linear ones, Mat. Sb. 30 (1952), 121-166. (Russian)
18. _______, Systems of differential equations analogous to linear ones, Dokl. Akad. Nauk SSSR 86 (1952), no. 1, 19-22. (Russian)
19. B. P. Demidovich, Lectures on the mathematical theory of stability, "Nauka", Moscow, 1967. (Russian)
20. N. P. Erugin, Reducible systems, Trudy Mat. Inst. Steklov. 13 (1946), 1-96. (Russian)
21. _______, Linear systems of ordinary differential equations with periodic and quasi-periodic coefficients, Izdat. Akad. Nauk BSSR, Minsk, 1963; English transl., Math. Sci. Engrg., vol. 28, Academic Press, New York and London, 1966.
22. L. D. Ivanov, Variations of sets and functions, "Nauka", Moscow, 1975. (Russian)
23. N. A. Izobov, Linear systems of ordinary differential equations, Itogi Nauki i Tekhniki: Mat. Anal., vol. 12, VINITI, Moscow, 1974, pp. 71-146; English transl. in J. Soviet Math. 5 (1976).
24. S. M. Lozinskii, Estimates for errors of numerical integration of ordinary differential equations. I, Izv. Vyssh. Uchebn. Zaved. Mat. 1958, no. 5, 52-90. (Russian)
25. A. M. Lyapunov, General problem of the stability of motion, Collected works, vol. 2, Izdat. Akad. Nauk SSSR, Moscow, 1956. (Russian)
26. I. G. Malkin, Theory of stability of motion, "Nauka", Moscow, 1966. (Russian)
27. M. Marcus and H. Minc, A survey of matrix theory and matrix inequalities, Allyn & Bacon, Boston, 1964.
28. V. M. Millionshchikov, A criterion for a small change of directions of solutions of a linear system of differential equations under small perturbations of the coefficients of the system, Mat. Zametki 4 (1968), no. 2, 173-180; English transl. in Math. Notes 4 (1968).
29. _______, A proof of attainability of central exponents of linear systems, Sibirsk. Mat. Zh. 10 (1969), no. 10, 99-104; English transl. in Siberian Math. J. 10 (1969).
30. _______, On the instability of singular exponents and on the asymmetry of the relation of almost reducibility for linear systems of differential equations, Differentsial'nye Uravneniya 5 (1969), no. 4, 749-750; English transl. in Differential Equations 5 (1969).
31. _______, Structurally stable properties of linear systems of differential equations, Differentsial'nye Uravneniya 5 (1969), no. 10, 1775-1784; English transl. in Differential Equations 5 (1969).
32. V. V. Nemytskii and V. V. Stepanov, Qualitative theory of differential equations, GITTL, Moscow and Leningrad, 1949; English transl., Princeton Univ. Press, Princeton, NJ, 1966.
33. I. G. Petrovskii, Lectures on the theory of ordinary differential equations, "Nauka", Moscow, 1965; English transl., Ordinary differential equations, Prentice-Hall, Englewood Cliffs, NJ, 1966.
34. W. Rudin, Principles of mathematical analysis, McGraw-Hill, New York, 1976.
35. F. Tricomi, Equazioni differenziali, Einaudi, Torino, 1965.
36. D. K. Faddeev and V. N. Faddeeva, Computational methods of linear algebra, Fizmatgiz, Moscow, 1963; English transl., Freeman, San Francisco, CA, 1963.
37. P. Hartman, Ordinary differential equations, Wiley, New York, 1964.
38. L. Cesari, Asymptotic behavior and stability problems in ordinary differential equations, Springer-Verlag, Berlin, 1959.
39. V. A. Yakubovich and V. M. Starzhinskii, Linear differential equations with periodic coefficients and their applications, "Nauka", Moscow, 1972; English transl., vols. 1, 2, Israel Program for Scientific Translations, Jerusalem, and Wiley, New York, 1975.
40. _______, Parametric resonance in linear systems, "Nauka", Moscow, 1987. (Russian)