Operator Theory: Advances and Applications Vol. 177 Editor: I. Gohberg Editorial Office: School of Mathematical Sciences Tel Aviv University Ramat Aviv, Israel Editorial Board: D. Alpay (Beer-Sheva) J. Arazy (Haifa) A. Atzmon (Tel Aviv) J. A. Ball (Blacksburg) A. Ben-Artzi (Tel Aviv) H. Bercovici (Bloomington) A. Böttcher (Chemnitz) K. Clancey (Athens, USA) L. A. Coburn (Buffalo) R. E. Curto (Iowa City) K. R. Davidson (Waterloo, Ontario) R. G. Douglas (College Station) A. Dijksma (Groningen) H. Dym (Rehovot) P. A. Fuhrmann (Beer Sheva) B. Gramsch (Mainz) J. A. Helton (La Jolla) M. A. Kaashoek (Amsterdam)
H. G. Kaper (Argonne) S. T. Kuroda (Tokyo) P. Lancaster (Calgary) L. E. Lerer (Haifa) B. Mityagin (Columbus) V. Olshevsky (Storrs) M. Putinar (Santa Barbara) L. Rodman (Williamsburg) J. Rovnyak (Charlottesville) D. E. Sarason (Berkeley) I. M. Spitkovsky (Williamsburg) S. Treil (Providence) H. Upmeier (Marburg) S. M. Verduyn Lunel (Leiden) D. Voiculescu (Berkeley) D. Xia (Nashville) D. Yafaev (Rennes) Honorary and Advisory Editorial Board: C. Foias (Bloomington) T. Kailath (Stanford) H. Langer (Vienna) P. D. Lax (New York) H. Widom (Santa Cruz)
Subseries: Advances in Partial Differential Equations Subseries editors: Bert-Wolfgang Schulze, Universität Potsdam, Germany; Sergio Albeverio, Universität Bonn, Germany
J. López-Gómez C. Mora-Corral
Algebraic Multiplicity of Eigenvalues of Linear Operators
Birkhäuser Basel . Boston . Berlin
Authors: J. López-Gómez Department of Applied Mathematics Universidad Complutense de Madrid Ciudad Universitaria de Moncloa 28040 Madrid Spain e-mail:
[email protected]
C. Mora-Corral Mathematical Institute University of Oxford St Giles' Oxford UK e-mail: [email protected]
—Calla te digo otra vez, Sancho —dijo Don Quijote—; porque te hago saber que no sólo me trae por estas partes el deseo de hallar al loco, cuanto el que tengo de hacer en ellas una hazaña, con que he de ganar perpetuo nombre y fama en todo lo descubierto de la Tierra; y será tal, que he de echar con ella el sello a todo aquello que puede hacer perfecto y famoso a un andante caballero. Don Quijote de la Mancha (Capítulo 25) Miguel de Cervantes Saavedra
‘Peace, I say, Sancho, once again,’ said Don Quixote: ‘for know, that it is not barely the desire of finding the madman that brings me to these parts, but the intention I have to perform an exploit in them, whereby I shall acquire a perpetual name and renown over the face of the whole earth: and it shall be such an one as shall set the seal to all that can render a knight-errant complete and famous.’ Translated by Charles Jarvis. (Oxford University Press. Hong Kong, 1999.)
Contents

Preface

I Finite-dimensional Classic Spectral Theory

Part I: Summary

1 The Jordan Theorem
1.1 Basic concepts
1.2 The Jordan theorem
1.3 The Jordan canonical form
1.4 The real canonical form
1.5 An example
1.5.1 The complex Jordan form of A
1.5.2 The real Jordan form of A
1.6 Exercises
1.7 Comments on Chapter 1

2 Operator Calculus
2.1 Norm of a linear operator
2.2 Introduction to operator calculus
2.3 Resolvent operator. Dunford's integral formula
2.4 The spectral mapping theorem
2.5 The exponential matrix
2.6 An example
2.7 Exercises
2.8 Comments on Chapter 2

3 Spectral Projections
3.1 Estimating the inverse of a matrix
3.2 Vector-valued Laurent series
3.3 The eigenvalues are poles of the resolvent
3.4 Spectral projections
3.5 Exercises
3.6 Comments on Chapter 3

II Algebraic Multiplicities

Part II: Summary

4 Algebraic Multiplicity Through Transversalization
4.1 Motivating the concept of transversality
4.2 The concept of transversal eigenvalue
4.3 Algebraic eigenvalues and transversalization
4.4 Perturbation from simple eigenvalues
4.5 Exercises
4.6 Comments on Chapter 4

5 Algebraic Multiplicity Through Polynomial Factorization
5.1 Derived families and factorization
5.2 Connections between χ and µ
5.3 Coincidence of the multiplicities χ and µ
5.4 A formula for the partial µ-multiplicities
5.5 Removable singularities
5.6 The product formula
5.7 Perturbation from simple eigenvalues revisited
5.8 Exercises
5.9 Comments on Chapter 5

6 Uniqueness of the Algebraic Multiplicity
6.1 Similarity of rank-one projections
6.2 Proof of Theorem 6.0.1
6.3 Relaxing the regularity requirements
6.4 A general uniqueness theorem
6.5 Applications. Classical multiplicity formulae
6.6 Exercises
6.7 Comments on Chapter 6

7 Algebraic Multiplicity Through Jordan Chains. Smith Form
7.1 The concept of Jordan chain
7.2 Canonical sets and κ-multiplicity
7.3 Invariance by continuous families of isomorphisms
7.4 Local Smith form for C^∞ matrix families
7.5 Canonical sets at transversal eigenvalues
7.6 Coincidence of the multiplicities κ and χ
7.7 Labeling the vectors of the canonical sets
7.8 Characterizing the existence of the Smith form
7.9 Two illustrative examples
7.9.1 Example 1
7.9.2 Example 2
7.10 Local equivalence of operator families
7.11 Exercises
7.12 Comments on Chapter 7

8 Analytic and Classical Families. Stability
8.1 Isolated eigenvalues
8.2 The structure of the spectrum
8.3 Classic algebraic multiplicity
8.4 Stability of the complex algebraic multiplicity
8.4.1 Local stability
8.4.2 Global stability
8.4.3 Homotopy invariance
8.5 Exercises
8.6 Comments on Chapter 8

9 Algebraic Multiplicity Through Logarithmic Residues
9.1 Finite Laurent developments of L^{-1}
9.2 The trace operator
9.2.1 Concept of trace and basic properties
9.2.2 Traces of the coefficients of the product of two families
9.3 The multiplicity through a logarithmic residue
9.4 Holomorphic and classical families
9.5 Spectral projection
9.6 Exercises
9.7 Comments on Chapter 9

10 The Spectral Theorem for Matrix Polynomials
10.1 Linearization of a matrix polynomial
10.2 Generalized Jordan theorem
10.3 Remarks on scalar polynomials
10.4 Constructing a basis in the phase space
10.5 Exercises
10.6 Comments on Chapter 10

11 Further Developments of the Algebraic Multiplicity
11.1 General Fredholm operator families
11.2 Meromorphic families
11.3 Unbounded operators
11.4 Non-Fredholm operators

III Nonlinear Spectral Theory

Part III: Summary

12 Nonlinear Eigenvalues
12.1 Bifurcation values. Nonlinear eigenvalues
12.2 A short introduction to the topological degree
12.2.1 Existence and uniqueness
12.2.2 Computing the degree. Analytic construction
12.2.3 The fixed point theorem of Brouwer, Schauder and Tychonoff
12.2.4 Further properties
12.3 The algebraic multiplicity as an indicator of the change of index
12.4 Characterization of nonlinear eigenvalues
12.5 Comments on Chapter 12

Bibliography
Notation
Index
Preface This book analyzes the existence and uniqueness of a generalized algebraic multiplicity for a general one-parameter family L of bounded linear operators with Fredholm index zero at a value of the parameter λ0 where L(λ0 ) is non-invertible. Precisely, given K ∈ {R, C}, two Banach spaces U and V over K, an open subset Ω ⊂ K, and a point λ0 ∈ Ω, our admissible operator families are the maps L ∈ C r (Ω, L(U, V )) for some r ∈ N, such that
L(λ0) ∈ Fred0(U, V);   (1)
here L(U, V) stands for the space of linear continuous operators from U to V, and Fred0(U, V) is its subset consisting of all Fredholm operators of index zero. From the point of view of its novelty, the main achievements of this book are reached in the case K = R, since in the case K = C and r = 1 most of its contents are classic, except for the axiomatization theorem of the multiplicity. The book consists of three parts and, although the necessary background differs from part to part, the entire book should be easily accessible to a multidisciplinary audience, since all of its results are established in a finite-dimensional setting, which is not a serious restriction. Part I covers the classic finite-dimensional spectral theory and assumes hardly any previous knowledge on the part of the reader, except for some basic facts of complex analysis. In fact, its contents are intended to be taught at an undergraduate level, as we have considerably tidied up and shortened most of the material available in the literature. Part II introduces a generalized concept of algebraic multiplicity, which substantially extends all previously available concepts, to cover the case of families L as in (1). This objective is accomplished by adopting several different perspectives, each of great interest in its own right. For a comfortable reading of this part, some basic background in Functional Analysis is appropriate; essentially, the Hahn–Banach theorem and the most important properties of Fredholm operators. Finally, Part III, which consists of a single chapter, introduces some pivotal concepts in nonlinear analysis (such as the concept of nonlinear eigenvalue) and characterizes the nonlinear eigenvalues of L through the value of (−1)^{m[L;λ0]}, where m[L; λ0] denotes the generalized algebraic multiplicity of L at λ0. Although it would be desirable
to have a good knowledge of the topological degree for a comfortable reading of this part, this is far from necessary, because a self-contained introduction to the topological degree is provided therein. Going beyond would take us outside the general scope of this book. A great proportion of Part II consists of versions and new findings of recent results attributable to the authors of this book and coworkers, except for the classic case when K = C and r = 1, where most of the results are attributable to the Russian school. Among the most important results of Part II is the following axiomatization theorem of the multiplicity. We denote by S^∞_{λ0}(U) the set of functions of class C^∞ defined in a neighborhood of λ0 with values in Fred0(U, U).
Theorem 0.0.1. There exists a unique map m[·; λ0] : S^∞_{λ0}(U) → [0, ∞] satisfying the following axioms:
A1. For all L, M ∈ S^∞_{λ0}(U), m[LM; λ0] = m[L; λ0] + m[M; λ0].
A2. There exists a rank one projection P0 ∈ L(U) such that m[L; λ0] = 1, where
L(λ) = (λ − λ0)P0 + I_U − P0,   λ ∈ K.
The value m[L; λ0] is called the algebraic multiplicity of L at λ0. Axiom A1 is known as the product formula, while Axiom A2 is nothing but a normalization condition, as will be discussed in Chapter 6. Before describing the properties of the multiplicity, we should recall the familiar concept of the order of a zero of a function at a point. Let X be a K-Banach space, Ω ⊂ K an open set, r ≥ 0 an integer or infinity, λ0 ∈ Ω, and f ∈ C^r(Ω, X). If there exists an integer 0 ≤ k ≤ r such that f^{j)}(λ0) = 0 for all 0 ≤ j < k and f^{k)}(λ0) ≠ 0, then we define ord_{λ0} f = k. If r = ∞ and f^{j)}(λ0) = 0 for all integers j ≥ 0, then we define ord_{λ0} f = ∞. If f^{j)}(λ0) = 0 for all 0 ≤ j ≤ r, and f is not of class C^{r+1} in any neighborhood of λ0, then we leave ord_{λ0} f undefined. Now we describe some properties of the multiplicity. For every L ∈ S^∞_{λ0}(U), the map m[·; λ0] inherits, from Axioms A1 and A2, the following properties:
i) m[L; λ0] ∈ N ∪ {∞}.
ii) m[L; λ0] = 0 if and only if L(λ0) is an isomorphism.
iii) m[L; λ0] < ∞ if and only if there exists an integer k ≥ 0 such that (λ − λ0)^k L(λ)^{-1} exists and is bounded for λ in a perforated neighborhood of λ0.
iv) If dim U < ∞, then m[L; λ0] = ord_{λ0} det L.
v) If K = C and U = V = C^N, then
m[L; λ0] = (1/(2πi)) ∫_γ tr [L′(λ) L(λ)^{-1}] dλ
for every Cauchy contour γ with inner domain D containing λ0 such that L^{-1} is holomorphic in D \ {λ0} and continuous in D̄ \ {λ0}.
vi) For every K-Banach space V and M ∈ S^∞_{λ0}(V), m[L ⊕ M; λ0] = m[L; λ0] + m[M; λ0].
vii) If L is analytic, Ω is connected, and L(Ω) contains an isomorphism, then m[L; λ] < ∞ for all λ ∈ Ω.
viii) If A ∈ L(U) and L is the family defined by L(λ) := λI − A, λ ∈ K, then
m[L; λ0] = sup_{k∈N} dim N[(λ0 I − A)^k].
ix) m[L; λ0] < ∞ if and only if there exist an open neighborhood Ω̃ ⊂ Ω of λ0, a topological decomposition U = U0 ⊕ U1 with n := dim U0 < ∞, and two maps E, F ∈ S^∞_{λ0}(U) such that E(λ0), F(λ0) are isomorphisms and
L(λ) = E(λ) [S(λ) ⊕ I_{U1}] F(λ),   λ ∈ Ω̃,
where I_{U1} stands for the identity of U1, and
S(λ) = diag[(λ − λ0)^{κ1}, . . . , (λ − λ0)^{κn}],   λ ∈ K,
for some integers κ1 ≥ · · · ≥ κn ≥ 1. Moreover, in such a case, m[L; λ0] = κ1 + · · · + κn.
Owing to Theorem 0.0.1, each of the listed properties, and actually any further property of the multiplicity, must be a consequence of Axioms A1 and A2. The concept of multiplicity for maps L of the form (1), rather than for operators (as is the case in the context of the classic spectral theory), goes back to the Russian school; it began to be developed in the 1940s for holomorphic families, that is, for the special case when K = C and r = 1 (see Keldyš [66, 67] for probably the first definition, and Markus & Sigal [95], Eni [28], Gohberg [42], Gohberg & Sigal [54], Gohberg, Kaashoek & Lay [44], as well as the references therein, for
later generalizations and expositions). Most of the available underlying theories are based on generalized Jordan chains, as discussed in Chapter 7 of this book. The concept of multiplicity in the case when K = R came later, and was introduced independently through four rather different devices: by using generalized Jordan chains (see Sarreither [116] and later Rabier [110]); through a finite-dimensional reduction of the family L, so that the multiplicity can be defined as the multiplicity of the determinant of the reduced family (see Ize [61]); by means of a polynomial factorization of L (see Gohberg [42], Gohberg & Sigal [54], and Magnus [92]), as discussed in Chapter 5 of this book; or through a change of variable giving rise to a transversal family (see Esquinas & López-Gómez [30], Esquinas [29], and López-Gómez [82]), as discussed in Chapter 4. According to Theorem 0.0.1, all these concepts of multiplicity must coincide as long as they satisfy Axioms A1 and A2, though the reader should be aware that establishing the product formula for a given construction may involve greater technical difficulties than checking directly, from the definitions, that the constructions coincide. This book gives a complete description of all these (equivalent) constructions of the algebraic multiplicity. The reasons for bringing them together in this book include: a) all of them involve interesting constructions in their own right; b) although equivalent, the various multiplicity concepts depart from rather separate starting points and, consequently, their mutual connections are not obvious; c) some properties of the multiplicity are easier to prove through one particular definition than through another. Moreover, the existence of a number of constructions of the multiplicity from different perspectives facilitates the derivation of its properties, both from the theoretical point of view and from the point of view of applications. Although there are some previous and partial comparisons between several different theories of algebraic multiplicity (see, for example, Esquinas [29], Fitzpatrick & Pejsachowicz [33], Rabier [110] and López-Gómez [82]), this book should provide the most complete monograph encompassing all those different constructions from the point of view of the axiomatization of Theorem 0.0.1. These algebraic multiplicities have many applications in a number of separate areas of mathematics. Among them, the constructions of the algebraic multiplicity are used to accomplish the following tasks:
• Finding optimal criteria for the change of the topological degree of L(λ) as the parameter λ crosses an eigenvalue of L, within the context of local and global bifurcation theory (see Sarreither [116], Ize [61], Magnus [92], Esquinas & López-Gómez [31], Esquinas [29], López-Gómez [82], Rabier [110], Fitzpatrick & Pejsachowicz [34], López-Gómez & Mora-Corral [91]).
• Establishing the local classification of operator families L through the existence and uniqueness of the Smith canonical form (see Gohberg, Kaashoek & Lay [44]), as well as global spectral results for general operator families (see Gohberg, Kaashoek & Lay [44], Leiterer [78, 79], Gohberg, Lancaster & Rodman [50], López-Gómez [82]).
• Finding completeness criteria for eigenvectors and generalized eigenvectors of operators and operator families (see Keldyš [66, 67], Friedman & Shinbrot [35], Gohberg & Kreĭn [45, 46]).
• Analyzing the solution manifold of general classes of ordinary linear differential equations with constant coefficients, and boundary-value problems (see Gohberg, Lancaster & Rodman [50, 51], Rodman [114], Wloka, Rowley & Lawruk [127]), as well as difference equations with constant coefficients (see Gohberg, Lancaster & Rodman [50]).
• Studying interpolation of matrix families (see Rodman [114], Ball, Gohberg & Rodman [8]).
• Studying linear systems in control engineering (see Gohberg, Lancaster & Rodman [51], Rodman [114], Wyman, Sain, Conte & Perdon [129]).
In the sequel, we outline the contents of each chapter of this book, with particular emphasis on the new findings originating from the authors' research in the field. Chapter 1 is devoted to a proof of the Jordan theorem and a construction of the underlying canonical form. Chapter 2 provides a short self-contained introduction to operator calculus, albeit only in finite dimensions. Then, the Dunford integral formula is introduced and the spectral mapping theorem is studied. Chapter 3 describes the construction of the Jordan spectral projections. The proofs are based upon the fact that the ascent of each eigenvalue equals its order as a pole of the resolvent operator. This methodology differs substantially from the standard one adopted in most classic textbooks on elementary finite-dimensional spectral theory, reduces the inherent difficulties of the general infinite-dimensional framework, and facilitates the teaching of these materials at an undergraduate level. Actually, this is the pivotal idea upon which the development of the subsequent abstract general theory of this book relies. Part II and the general theory of multiplicities begin in Chapter 4, after introducing all the spaces and the most common notation used in the rest of the book. Throughout Part II, r ≥ 1 is an integer or infinity, K is either the real field or the complex field, Ω ⊂ K is an open set containing λ0, and L ∈ C^r(Ω, L(U, V)) satisfies L(λ0) ∈ Fred0(U, V). Besides, the following notation for the derivatives of L at λ0,
Lj := (1/j!) L^{j)}(λ0),   0 ≤ j < r + 1,
is used everywhere, so that Lj equals the jth coefficient of the Taylor development of L at λ0. The parameter value λ0 is said to be an eigenvalue of L when L(λ0) is
not invertible. As L(λ0) is Fredholm of index zero, this amounts to
N[L(λ0)] ≠ {0}.   (2)
Let Iso(U, V) denote the set of linear topological isomorphisms from U to V. When L(λ0) ∈ Iso(U, V), all results become trivial; thus, we will subsequently focus our attention on the case when (2) occurs. The theory of algebraic multiplicity presented in Chapter 4 is based on the concept of transversality, which is a pivotal concept throughout this book; it goes back to Esquinas & López-Gómez [31].
Definition 0.0.2. Given an integer 1 ≤ k ≤ r, it is said that λ0 is a k-transversal eigenvalue of L if
⊕_{j=1}^{k} Lj(N[L0] ∩ · · · ∩ N[L_{j−1}]) ⊕ R[L0] = V   and   Lk(N[L0] ∩ · · · ∩ N[L_{k−1}]) ≠ {0}.
In such a case, the algebraic multiplicity of L at λ0, denoted by χ[L; λ0], is defined through
χ[L; λ0] := Σ_{j=1}^{k} j · dim Lj(N[L0] ∩ · · · ∩ N[L_{j−1}]).
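For a concrete feel for Definition 0.0.2, the following small numerical sketch (a made-up example, not one taken from the book) checks the transversality conditions for the family L(λ) = diag(λ, λ²) at λ0 = 0, for which χ[L; 0] = 3 = ord_0 det L:

```python
import numpy as np

# Toy family L(lambda) = diag(lambda, lambda^2) at lambda0 = 0 (illustration only):
# L0 = 0, L1 = diag(1, 0), L2 = diag(0, 1).
L0 = np.zeros((2, 2))
L1 = np.diag([1.0, 0.0])
L2 = np.diag([0.0, 1.0])

# N[L0] = C^2, so L1(N[L0]) = R[L1]; N[L0] ∩ N[L1] = N[L1] = span{e2},
# and L2 maps that line onto span{e2}.
e2 = np.array([[0.0], [1.0]])
S1 = L1            # columns span L1(N[L0])
S2 = L2 @ e2       # spans L2(N[L0] ∩ N[L1])

# The pieces L1(N[L0]), L2(N[L0] ∩ N[L1]) and R[L0] together fill out C^2,
# and their dimensions (1 + 1 + 0) add up to dim V = 2, so the sum is direct.
stacked = np.hstack([S1, S2, L0])
print(np.linalg.matrix_rank(stacked))        # 2: lambda0 = 0 is 2-transversal

# chi[L; 0] = 1*dim L1(N[L0]) + 2*dim L2(N[L0] ∩ N[L1]) = 1 + 2 = 3
chi = 1 * np.linalg.matrix_rank(S1) + 2 * np.linalg.matrix_rank(S2)
print(chi)                                   # 3 = ord_0 det L(lambda) = ord_0 lambda^3
```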
Although, at first glance, Definition 0.0.2 might look surprising, it turns out that transversal eigenvalues enjoy a special structure that makes most of the operator calculations involved straightforward from an algorithmic point of view. The concept of multiplicity was originally introduced to capture the order of det L at λ0 in the finite-dimensional setting, but this is far from being an elementary feature that can be briefly explained. Another key concept of Part II, going back to López-Gómez [82], is the one introduced by the following definition.
Definition 0.0.3. Given an integer k ≥ 1, it is said that λ0 is a k-algebraic eigenvalue of L when there exists an open neighborhood Ω̃ ⊂ Ω of λ0 such that L(λ) ∈ Iso(U, V) for all λ ∈ Ω̃ \ {λ0}, the values of the family
λ ↦ (λ − λ0)^k L(λ)^{-1},   λ ∈ Ω̃ \ {λ0},
are bounded, but (λ − λ0)^{k−1} L(λ)^{-1} is unbounded for λ in every perforated neighborhood of λ0. In such a case, we will write λ0 ∈ Alg_k(L). When L is analytic, it turns out that λ0 ∈ Alg_k(L) if and only if λ0 is a pole of L^{-1} of order k. Consequently, this concept is well known in the analytic case. But, since there is no available concept of pole for non-meromorphic families, the concept of algebraic eigenvalue is new in the general C^r setting adopted in this book.
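A minimal numerical sketch of Definition 0.0.3 (again with a made-up family, not one from the book): for L(λ) = diag(λ², 1) one can observe that (λ − λ0)² L(λ)^{-1} remains bounded near λ0 = 0 while (λ − λ0) L(λ)^{-1} does not, so that 0 ∈ Alg_2(L).

```python
import numpy as np

# Toy family L(lambda) = diag(lambda^2, 1) with lambda0 = 0 (illustration only).
def L(lam):
    return np.diag([lam**2, 1.0])

for k in (1, 2):
    norms = [np.linalg.norm(lam**k * np.linalg.inv(L(lam)))
             for lam in np.linspace(1e-6, 1e-1, 50)]
    print(k, max(norms))   # k = 1: blows up near 0;  k = 2: bounded (about 1)
```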
Chapter 9 shows that the concept of algebraic eigenvalue introduced by Definition 0.0.3 indeed provides us with a concept of pole for L^{-1}, thus considerably extending the classic concept of pole. Quite strikingly, it turns out that if λ0 ∈ Alg_k(L) and r ≥ 2k, then the inverse map L^{-1} possesses a Laurent asymptotic expansion at λ0,
L(λ)^{-1} = Σ_{n=−k}^{−1} (L^{-1})_n (λ − λ0)^n + o((λ − λ0)^{−1})   as λ → λ0,
as well as L′L^{-1},
L′(λ)L(λ)^{-1} = Σ_{n=−k}^{−1} M_n (λ − λ0)^n + o((λ − λ0)^{−1})   as λ → λ0,
and, in addition, according to the main theorem of López-Gómez & Mora-Corral [90],
tr M_n = 0 if −k ≤ n ≤ −2,   and   tr M_{−1} = χ[L; λ0],
which is a deep generalization, to the case when K = R and L is of class C^r for some r ∈ N, of the following well-known classic result attributable to Gohberg [42], Markus & Sigal [95] and Gohberg & Sigal [54] (see also Gohberg, Goldberg & Kaashoek [43]). That generalization constitutes the bulk of Chapter 9.
Theorem 0.0.4. Let Ω ⊂ C be open and connected, and L ∈ H(Ω, L(U, V)) such that L(Ω) ⊂ Fred0(U, V) and L(Ω) ∩ Iso(U, V) ≠ ∅. Then, the set
Σ(L) := {λ ∈ Ω : N[L(λ)] ≠ {0}}   (3)
is discrete in Ω, and L^{-1} is holomorphic in Ω \ Σ(L). Moreover, for each λ0 ∈ Σ(L), the meromorphic function L^{-1} admits a Laurent development
L(λ)^{-1} = Σ_{n=−k}^{∞} (L^{-1})_n (λ − λ0)^n
for some integer k ≥ 1, where (L^{-1})_0 ∈ Fred0(V, U) and (L^{-1})_n has finite rank for all −k ≤ n ≤ −1. Furthermore, the coefficients of the Laurent development of L′L^{-1} at λ0,
L′(λ)L(λ)^{-1} = Σ_{n=−k}^{∞} M_n (λ − λ0)^n,
have finite rank for every n ∈ {−k, . . . , −1}, and their traces are given through
tr M_n = 0 if −k ≤ n ≤ −2,   and   tr M_{−1} = m[L; λ0].
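The trace formula of Theorem 0.0.4 can be checked numerically on a toy family. The sketch below (our own illustration; the matrix and contour are assumptions, not data from the book) approximates the logarithmic residue (1/2πi) ∮ tr[L′(λ)L(λ)^{-1}] dλ for L(λ) = λI − A, with A a 2×2 Jordan block, and recovers the algebraic multiplicity 2:

```python
import numpy as np

# A has the single eigenvalue 2 with algebraic multiplicity 2 (Jordan block).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
I2 = np.eye(2)

lam0, radius, n = 2.0, 0.5, 2000
theta = 2 * np.pi * np.arange(n) / n
lams = lam0 + radius * np.exp(1j * theta)                    # contour points
dlam = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)    # d(lambda) increments

integral = 0.0 + 0.0j
for lam, d in zip(lams, dlam):
    Linv = np.linalg.inv(lam * I2 - A)        # L(lambda)^{-1}
    integral += np.trace(I2 @ Linv) * d       # L'(lambda) = I for this family

m = integral / (2j * np.pi)
print(m.real)   # approximately 2.0, the algebraic multiplicity at lambda0 = 2
```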
In order to obtain our real non-analytic counterparts of Theorem 0.0.4, we need the previous analysis carried out in Chapters 4, 5 and 7. The transversalization theorem of Chapter 4 establishes that if λ0 ∈ Algk (L) for some 0 ≤ k < r + 1, then there exist a polynomial Φ : K → L(U ) with Φ(λ0 ) = IU , and an integer ν = ν(Φ) ∈ {1, . . . , k}, such that λ0 is a ν-transversal eigenvalue of the product family LΦ := LΦ. Moreover, ν(Φ) and χ[LΦ ; λ0 ] are independent of Φ; therefore, one can extend the concept of multiplicity introduced in Definition 0.0.2 by setting χ[L; λ0 ] := χ[LΦ ; λ0 ] if λ0 ∈ Algk (L), and
χ[L; λ0 ] := ∞
if r = ∞ and λ0 is not an algebraic eigenvalue of L. Further, from the analysis of Chapter 5, it will become apparent that, in fact, if λ0 ∈ Algk (L), then ν(Φ) = k for all transversalizing families of isomorphisms Φ, whether polynomial or not. When L(λ0 ) is an isomorphism, the multiplicity χ[L; λ0 ] is defined as zero. Consequently, the multiplicity χ is finite if and only if λ0 is an algebraic value of L, in the sense that either L(λ0 ) ∈ Iso(U, V ), or λ0 ∈ Algk (L) for some integer 1 ≤ k < r + 1. The main goal of Chapter 5 is to prove that this concept of multiplicity actually satisfies Axioms A1 and A2 of Theorem 0.0.1, thus providing us with the existence of the multiplicity m. The analysis of this chapter is based upon another independent concept of multiplicity, denoted by µ[L; λ0 ], going back to Magnus [92], and defined through a certain polynomial factorization of the family L. Once some hidden connections between the algebraic invariants invoked in the definitions of µ and χ are established, it will follow that χ = µ and that the following result is satisfied. Theorem 0.0.5. For every integer 1 ≤ k ≤ r there exist k finite-rank projections P1 , . . . , Pk ∈ L(U ) and Mk ∈ C r−k (Ω, L(U, V )) such that L(λ) = Mk (λ) [(λ − λ0 )P1 + IU − P1 ] · · · [(λ − λ0 )Pk + IU − Pk ]
(4)
for all λ ∈ Ω. Moreover, one of the following mutually exclusive options occurs:
O1. L(λ0) ∈ Iso(U, V), and, in such a case, χ[L; λ0] = 0 and P1 = · · · = Pk = 0.
O2. λ0 ∈ Alg_ν(L) for some 1 ≤ ν < r + 1, and, in such a case, M_ν(λ0) ∈ Iso(U, V), the projections P1, . . . , P_ν are non-zero, and
χ[L; λ0] = Σ_{i=1}^{ν} dim R[P_i].
O3. There does not exist any integer 1 ≤ ν ≤ r for which λ0 ∈ Alg_ν(L), and, in such a case, χ[L; λ0] = ∞ if r = ∞, whereas χ[L; λ0] remains undefined if r ∈ N. Moreover, the projections P1, . . . , Pk are non-zero.
Based on the Magnus factorization (4) guaranteed by Theorem 0.0.5, Chapter 6 proves Theorem 0.0.1 and some generalizations. Chapter 6 concludes by inferring Properties (iv), (v) and (viii) listed above from Theorem 0.0.1 and a number of variants of it. Chapter 7 studies the classic concepts of Jordan chains, partial multiplicities and the local Smith form. These are very old concepts going back to Smith [121] and Frobenius [36, 37], and they characterize the local behavior of L at λ0. The starting point of Chapter 7 is the following classic result going back to Gohberg [41] (see Gohberg, Goldberg & Kaashoek [43], Gohberg & Sigal [54], Gohberg, Kaashoek & Lay [44] for later generalizations). Hereafter, H means holomorphic.
Theorem 0.0.6. Suppose K = C, let Ω be an open and connected subset of C, and let L ∈ H(Ω, L(U, V)) satisfy L(λ0) ∈ Fred0(U, V) and L(Ω) ∩ Iso(U, V) ≠ ∅. Then, for every λ0 ∈ Ω, there exist an open neighborhood Ω̃ ⊂ Ω of λ0, a topological decomposition U = U0 ⊕ U1 with n := dim U0 = dim N[L(λ0)] < ∞, and three maps
E ∈ H(Ω̃, L(U, V)),   F ∈ H(Ω̃, L(U)),   S ∈ H(Ω̃, L(U0))
such that E(λ0) and F(λ0) are isomorphisms and
L(λ) = E(λ) [S(λ) ⊕ I_{U1}] F(λ),   λ ∈ Ω̃,   (5)
where I_{U1} stands for the identity of U1 and
S(λ) = diag[(λ − λ0)^{κ1}, . . . , (λ − λ0)^{κn}],   λ ∈ K,   (6)
for some integers κ1 ≥ · · · ≥ κn ≥ 1.
The factorization (5), with S given through (6), is called the local Smith form of L at λ0. The integers κ1, . . . , κn of (6) are uniquely determined, and are referred to as the partial multiplicities of L at λ0. One of the main results of Chapter 7 uses the transversalization theorem of Chapter 4 to show that
m[L; λ0] = χ[L; λ0] = Σ_{j=1}^{n} κj.
Essentially, the proof of Theorem 0.0.6 is based on a preliminary finite-dimensional reduction relying on the property that L(λ0 ) ∈ Fred0 (U, V ), and on a further diagonalization algorithm based on the Gaussian elimination method. At each step of the underlying scheme one must make sure that the invariants of the transformed families remain unchanged, and this can be easily accomplished through the generalized Jordan chains of L at λ0 .
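For matrix families, the partial multiplicities can also be read off from the orders at λ0 of the greatest common divisors of the k×k minors, a classical device closely related to the construction of the Smith form. The following sympy sketch (a hypothetical 2×2 family chosen only for illustration, not an example from the book) computes them for a concrete polynomial family at λ0 = 0:

```python
import sympy as sp
from functools import reduce

lam = sp.symbols('lam')

def ord0(p):
    """Order of the polynomial p at lam = 0 (multiplicity of the root 0)."""
    return min(m[0] for m in sp.Poly(sp.expand(p), lam).monoms())

# Hypothetical family; its local Smith form at 0 turns out to be diag(lam, lam^2).
L = sp.Matrix([[lam**2, lam],
               [0,      lam]])

d1 = reduce(sp.gcd, [e for e in L if e != 0])   # gcd of the 1x1 minors (entries)
d2 = L.det()                                    # the single 2x2 minor

# Partial multiplicities (decreasing, as in Theorem 0.0.6) and their sum.
kappas = sorted([ord0(d1), ord0(d2) - ord0(d1)], reverse=True)
print(kappas, sum(kappas))   # [2, 1] 3; their sum equals ord_0 det L = m[L; 0]
```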
In this context, the analysis of Chapter 7 reveals that the most appropriate device for characterizing the existence of a local Smith form is the concept of algebraic eigenvalue introduced by Definition 0.0.3, since it is a manageable and versatile invariant, intrinsic to the family L, which does not involve any constructive scheme, as occurs when using Jordan chains. Actually, the following sharp version of Theorem 0.0.6, valid for K ∈ {R, C}, is satisfied; it constitutes the bulk of Chapter 7.
Theorem 0.0.7. Suppose r ∈ N and L ∈ C^r(Ω, L(U, V)) with L(λ0) ∈ Fred0(U, V). Then, the following assertions are equivalent:
i) L admits a local Smith form at λ0, in the sense that there exist an open neighborhood Ω̃ ⊂ Ω of λ0, two families E ∈ C(Ω̃, L(U, V)), F ∈ C(Ω̃, L(U)) such that E(λ0) and F(λ0) are isomorphisms, and a topological decomposition U = U0 ⊕ U1 with dim U0 = dim N[L(λ0)] = n, such that (5) and (6) hold true for some integers κ1 ≥ · · · ≥ κn with κ1 ≤ r and κn ≥ 1.
ii) λ0 ∈ Alg_ν(L) for some integer ν ≤ r.
iii) The length of every Jordan chain of L at λ0 is bounded above by ν ≤ r.
iv) The multiplicity χ[L; λ0] is well defined and finite.
Although some of the implications between the conditions stated in Theorem 0.0.7 were proved by Rabier [110], this characterization theorem is attributable to the authors. Chapter 7 concludes with a short introduction to the theory of equivalence of families, as discussed, for example, by Gohberg, Kaashoek & Lay [44]. The pivotal feature of the underlying theory is the fact that the local Smith form of a family L is the simplest representation of L obtainable through pre- and post-multiplication by families of isomorphisms. As a by-product of Theorem 0.0.7, it becomes apparent that L and M are equivalent if and only if they possess the same local Smith form at λ0. Chapter 8 focuses attention on analytic maps L ∈ C^ω(Ω, L(U, V)), and on families of the form
L(λ) := λI_U − A,   λ ∈ C,   (7)
for a given A ∈ L(U), in order to prove the following result (an early version of its first part can be found in Trofimov [124]).
Theorem 0.0.8. Let Ω be an open and connected subset of K, and L ∈ C^ω(Ω, L(U, V)) such that L(Ω) ⊂ Fred0(U, V) and L(Ω) ∩ Iso(U, V) ≠ ∅. Then, the spectrum (3) of L in Ω is discrete and consists of algebraic eigenvalues of L, as introduced in Definition 0.0.3. Moreover, the map L^{-1} is analytic in Ω \ Σ(L), and exhibits a pole of order ν ∈ N at every λ0 ∈ Alg_ν(L). If, in addition, L has the form (7) and λ0 ∈ Alg_ν(L), then ν equals the algebraic ascent of λ0 as an eigenvalue of A, and
m[L; λ0] = χ[L; λ0] = ma(λ0) = dim N[(λ0 I_U − A)^ν].
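The last statement of Theorem 0.0.8 can be illustrated with a small rank computation (the matrix below is an assumed example, not one from the book): for a single 3×3 Jordan block the ascent is 3 and the multiplicity equals dim N[(λ0 I − A)³] = 3.

```python
import numpy as np

# A is a single 3x3 Jordan block with eigenvalue 1, so nu = 3 and m[L; 1] = 3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
lam0, n = 1.0, A.shape[0]
B = lam0 * np.eye(n) - A

dims = []
Bk = np.eye(n)
for k in range(1, n + 2):                         # one power past n to see stabilization
    Bk = Bk @ B
    dims.append(n - np.linalg.matrix_rank(Bk))    # dim N[(lam0*I - A)^k]

print(dims)                                       # [1, 2, 3, 3]: the chain stabilizes at 3
nu = next(k for k in range(1, len(dims)) if dims[k] == dims[k - 1])
print(nu, dims[nu - 1])                           # 3 3: the ascent and the multiplicity
```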
Chapter 8 concludes by establishing the stability of the algebraic multiplicity for holomorphic families, as well as its homotopy invariance, within the spirit of the classic theorem of Rouché. Chapter 10 shows that the Jordan theorem and the fundamental theorem of algebra are particular cases of the spectral theorem for monic matrix polynomials. By a matrix polynomial is meant a family L ∈ H(C, L(C^N)) of the form
L(λ) = Σ_{n=0}^{ℓ} A_n λ^n,   λ ∈ C,
where A_0, . . . , A_ℓ ∈ L(C^N). It is said to be monic when A_ℓ = I_{C^N}. The results of this chapter give some new insights into the results available in the literature. Actually, they are intended as a first step towards generalizing the theory developed in this book to cover more abstract settings within the context of spectral theory. Chapter 11 has an expository and bibliographical character. It briefly introduces some further developments of the multiplicity, not previously treated in this book, that are of interest in their own right. Finally, Chapter 12 combines the linear theory developed in this book with the topological degree to show that the algebraic multiplicity studied in this book provides an optimal algebraic invariant for characterizing the nonlinear eigenvalues of a given family L, thus facilitating the transfer of information from linear to nonlinear functional analysis. It should be emphasized that eliminating the standard assumption that L is holomorphic in Ω (through the concept of algebraic eigenvalue) gives rise to some deep versions of important classic results; for instance, the following striking version of the celebrated theorem of Riemann on removable singularities. Indeed, besides providing a particular case of the theorem of Riemann when K = C, it also covers the case K = R, where no previous result seems to be available. Its range of applications might be very wide.
Theorem 0.0.9. Let r ∈ N ∪ {∞}, λ0 ∈ Ω, and L ∈ C^r(Ω, L(U, V)) with L(λ0) ∈ Fred0(U, V). Suppose that, for some integer 0 ≤ ν ≤ r, the mapping
λ ↦ (λ − λ0)^ν L(λ)^{-1}
(8)
exists and is bounded in Ω \ {λ0}. Then there exists an open subset Ω′ ⊂ Ω with λ0 ∈ Ω′ for which the mapping
λ ↦ (λ − λ0)^ν L(λ)^{-1},   λ ∈ Ω′ \ {λ0},
admits an extension of class C^{r−ν} to Ω′.
Consequently, to remove an isolated singularity for mappings of the form (8), it suffices to impose L(λ0) ∈ Fred0(U, V) and λ0 ∈ Alg_ν(L) for some integer ν ≤ r; holomorphy is not necessary. This result illustrates the strength of the general theory developed in this book.
Except for Chapters 11 and 12, all chapters of this book end by proposing a list of exercises and some bibliographical comments. Some of the exercises are computational, but most of them ask the reader to provide alternative proofs of the results of the corresponding chapter, or to adapt the proofs already given in the chapter to obtain some related results. Although we are indeed scientifically indebted to many people, on this occasion we are extremely honored to express our deepest gratitude to Professor I. Gohberg, the Editor of this series. It is not only that his pioneering works on meromorphic maps have influenced many chapters of this book, but also that his strong and lucid mathematical criticism during the final steps of its preparation has been tremendously encouraging for us. Finally, we should thank the Ministry of Education and Science of Spain for research funding under grant CGL2006–00524/BOS (National Plan of Global Change and Biodiversity).
Madrid and Oxford, February 2007 The authors
Part I
Finite-dimensional Classic Spectral Theory
Part I: Summary Part I of this book is devoted to the finite-dimensional classic spectral theory. The main topics covered in this part are the Jordan theorem and operator calculus. We assume that the readers are familiar with the basic concepts of linear algebra. Among them, finite-dimensional vector spaces, linear dependence and independence, bases, matrices and their elementary operations, range and kernel of a matrix and of a linear operator, systems of linear equations, determinants, and the Cramer rule. Some basic properties of polynomials are also assumed to be known. In particular, elementary properties of the sum, product, factorization and division of polynomials, and the fundamental theorem of algebra. Finally, we also assume that the reader has an elementary knowledge of the theory of holomorphic functions of one complex variable, including the Cauchy integral formula, the Cauchy estimates, the Riemann theorem on removable singularities, and the Taylor and Laurent developments. Throughout this book, C and R stand for the fields of complex and real numbers, and, for any integer number N ≥ 1 and K ∈ {R, C}, the set MN (K) denotes the space of square matrices of order N over the field K. Part I is divided into three chapters. In Chapter 1 we prove the Jordan theorem. This theorem establishes that for every A ∈ MN (C), the space CN decomposes into the direct sum of the ascent generalized eigenspaces associated with each of the eigenvalues of the matrix A. Then, we use the Jordan theorem to construct the complex and the real Jordan canonical forms of the matrix A. Chapter 2 is an introduction to operator calculus. Given a holomorphic function f and a square matrix A, one of the main aims of operator calculus is to give an appropriate definition of f (A) and to study its properties. As a paradigmatic application of operator calculus, Chapter 2 presents a detailed calculation of the exponential of a matrix. Finally, in Chapter 3 we use the results of the previous two chapters to construct the spectral projections associated with a square matrix A; the spectral projections are the projections of CN onto each of the ascent generalized eigenspaces of the matrix A.
Chapter 1
The Jordan Theorem In this chapter we prove the Jordan theorem, a pivotal result in mathematics, which establishes that, for every A ∈ MN (C), the space CN decomposes as the direct sum of the ascent generalized eigenspaces associated with each of the eigenvalues of A. Then, by choosing an appropriate basis in each of the ascent generalized eigenspaces, the Jordan canonical form of A is constructed. These bases are chosen in order to attain a similar matrix to A with a maximum number of zeros. This chapter is organized as follows. In Section 1.1 we review some basic concepts in the theory of matrices and finite-dimensional operators; among them, the relationship between operators and matrices, and the concepts of eigenvalue, eigenvector, eigenspace, characteristic polynomial and multiplicity. Section 1.2 is devoted to the proof of the Jordan theorem. Some important intermediate results will also be proved, such as the existence of interpolating polynomials and the Cayley–Hamilton theorem. In Section 1.3 we construct the Jordan canonical form, while in Section 1.4 we construct the real Jordan canonical form. Finally, Section 1.5 shows an example of the calculation of the Jordan canonical form.
1.1 Basic concepts From now on, we consider the space MN(C) of the square matrices of order N with coefficients in C, and the space L(C^N) of the linear operators from C^N to C^N. Both have dimension N^2 and, hence, are isomorphic. Once a basis B := {e1, . . . , eN} in C^N is fixed, every matrix A = (aij)_{1≤i,j≤N} ∈ MN(C)
defines a linear operator TA ∈ L(C^N) through
TA u = Σ_{i=1}^{N} Σ_{j=1}^{N} aij uj ei   for each u = Σ_{j=1}^{N} uj ej ∈ C^N.
As the coordinates of the N-tuple TA u with respect to the basis B are obtained by multiplying the matrix A by the N-tuple of the coordinates of u with respect to B, it is usual to identify TA with A. The resulting map
MN(C) → L(C^N),   A ↦ TA,
is an isomorphism of vector spaces and of algebras. Consequently, there is no ambiguity in making that identification. In practice, A is referred to as the matrix associated with the operator TA with respect to the basis B. Pick T ∈ L(CN ), let B := {e1 , . . . , eN },
B̂ := {ê1, . . . , êN}, be two bases of C^N, and let A_B and A_B̂ denote their associated matrices with respect to the bases B and B̂, respectively. Then,
A_B̂ = P^{-1} A_B P,   (1.1)
where P ∈ MN(C) is the matrix whose jth column consists of the coordinates of êj with respect to the basis B, that is, P = (pij)_{1≤i,j≤N}, where
êj = Σ_{i=1}^{N} pij ei,   1 ≤ j ≤ N.
The matrix P is known as the matrix of the change of basis. Identity (1.1) follows easily by taking into account the fact that
u = Σ_{j=1}^{N} uj ej = Σ_{j=1}^{N} ûj êj
implies
(u1, . . . , uN)^T = P (û1, . . . , ûN)^T.
1.1. Basic concepts
5
One of the main objectives in finite-dimensional spectral theory is the construction of bases so that a given linear operator possesses an associated matrix with a maximum number of zeros; the closer to a diagonal matrix, the better. To that aim, one has to find the eigenvectors of the operator (or matrix) A, which are the vectors u ∈ CN \ {0} for which there exists some λ ∈ C such that Au = λu. The direction of an eigenvector is therefore invariant under the action of the linear operator A. The corresponding λ is called an eigenvalue of A. The matrix associated with A with respect to any basis containing an eigenvector exhibits a column with the associated eigenvalue in the entry of the principal diagonal, while all remaining entries of that column equal zero. Hereafter, given A ∈ L(CN ), the sets N [A] and R[A] will stand for the kernel (null-space) and the range (image) of A, respectively. Also, I denotes the identity matrix in MN (C), or the identity operator in L(CN ). The following definition makes the concepts of eigenvalue and eigenvector precise. Definition 1.1.1. It is said that λ ∈ C is an eigenvalue of the matrix A ∈ MN (C) if there exists u ∈ CN \ {0} such that Au = λu. Every u ∈ N [A − λI] is called an eigenvector of A associated with the eigenvalue λ. The linear subspace N [A − λI] is called the eigenspace of A associated with λ. Thanks to the Cramer rule, the homogeneous linear system Au = λu admits a non-zero solution u if and only if det(A − λI) = 0. Consequently, the set of eigenvalues of A coincides with the set of roots of the polynomial PA defined by PA (z) := det(A − zI),
z ∈ C,
which will be referred to as the characteristic polynomial of A. A polynomial is a mapping P : C → C of the form P (z) =
n
aj z j ,
z ∈ C,
j=0
for some integer n ≥ 0 and some coefficients a0 , . . . , an ∈ C, of course.
6
Chapter 1. The Jordan Theorem
The characteristic polynomial is invariant under changes of basis, and, hence, so are the eigenvalues. This feature reveals the consistency of referring to the characteristic polynomial of a linear operator, and, hence, to the eigenvalues of a linear operator. Indeed, if A, Q ∈ MN (C) and det Q = 0, then, for all z ∈ C, PQ−1 AQ (z) = det(Q−1 (A − zI)Q) = det(A − zI) = PA (z); however, the eigenvectors are not invariant under changes of basis. The equation PA (z) = 0 is called the characteristic equation of A, and its roots are the eigenvalues of A. It is easy to check that the characteristic polynomial has degree N , and that its leading coefficient is (−1)N . Therefore, thanks to the fundamental theorem of algebra, it possesses exactly N roots, counting according to their multiplicities. Consequently, we can express PA (z) = (−1)N
p
(z − λj )ma (λj ) ,
z ∈ C,
(1.2)
j=1
for some complex numbers λ1 , . . . , λp and integers ma (λ1 ), . . . , ma (λp ) ≥ 1 with the convention that λi = λj
if
1 ≤ i < j ≤ p,
which will be maintained throughout this book. Then, the eigenvalues of the matrix A are λ1 , . . . , λp . Moreover, ma (λ1 ) + · · · + ma (λp ) = N. Definition 1.1.2. Let A ∈ MN (C). Then, the set of eigenvalues of A, hereafter denoted by σ(A), is called the spectrum of A. Moreover, if the characteristic polynomial PA of A is expressed by (1.2), then ma (λj ) is called the algebraic multiplicity of λj as an eigenvalue of A. For each λ ∈ σ(A), the geometric multiplicity of the eigenvalue λ, subsequently denoted by mg (λ), is defined through mg (λ) := dim N [A − λI]. In general, mg (λ) ≤ ma (λ)
for each λ ∈ σ(A),
but the algebraic and geometric multiplicities do not always coincide. For example, for each α ∈ C, the characteristic polynomial PA of the matrix α 0 A= 1 α
1.1. Basic concepts
7
is given by PA (z) = (z − α)2 ,
z ∈ C,
and, hence, α is the unique eigenvalue of A. Its algebraic multiplicity equals 2, whereas its geometric multiplicity equals 1, since 0 N [A − αI] = span . 1 The eigenvalues λ ∈ σ(A) for which mg (λ) = ma (λ) are called semisimple. An eigenvalue is semisimple if and only if its algebraic multiplicity equals the maximum number of linearly independent associated eigenvectors. We will see that a matrix is diagonalizable (that is, it is similar to a diagonal matrix) if and only if all its eigenvalues are semisimple. An eigenvalue λ ∈ σ(A) is said to be simple when ma (λ) = 1. If λ is simple, necessarily mg (λ) = 1, and, hence, λ is semisimple. Besides the eigenspace N [A − λI] itself, associated with each eigenvalue λ ∈ σ(A) there is the following increasing chain of generalized eigenspaces N [A − λI] ⊂ N [(A − λI)2 ] ⊂ N [(A − λI)3 ] ⊂ · · · ,
(1.3)
which is indeed increasing because N [C] ⊂ N [BC] for every B, C ∈ MN (C). Moreover, as CN is finite dimensional, the chain (1.3) must stabilize, and, hence, there exists some integer j ≥ 1 such that N [(A − λI)k ] = N [(A − λI)j ],
k ≥ j.
Actually, for every j ≥ 1 such that N [(A − λI)j ] = N [(A − λI)j+1 ] the identities (1.4) hold. Indeed, for each u ∈ N [(A − λI)j+2 ], we have that (A − λI)u ∈ N [(A − λI)j+1 ] = N [(A − λI)j ], and, hence, u ∈ N [(A − λI)j+1 ]. Consequently, N [(A − λI)j+2 ] ⊂ N [(A − λI)j+1 ] and, therefore, (1.4) holds, because N [(A − λI)j+1 ] ⊂ N [(A − λI)j+2 ]. Now, we are ready to introduce the following concept.
(1.4)
8
Chapter 1. The Jordan Theorem
Definition 1.1.3. For each λ ∈ σ(A), the ascent of λ, throughout denoted by ν(λ), is the minimum integer number k ≥ 1 for which N [(A − λI)k ] = N [(A − λI)k+1 ]. For any integer j ≥ 1, the linear subspace N [(A − λI)j ] is said to be a generalized eigenspace of A associated with the eigenvalue λ, and any u ∈ N [(A − λI)j ] is said to be a generalized eigenvector of A (associated with λ). The linear manifold N [(A − λI)ν(λ) ] is called the ascent generalized eigenspace of A associated with the eigenvalue λ. According to Definition 1.1.3 and the previous discussion, for every λ ∈ σ(A) and 0 ≤ i < j ≤ ν(λ) ≤ k, one has that N [(A − λI)i ] N [(A − λI)j ] ⊂ N [(A − λI)ν(λ) ] = N [(A − λI)k ].
(1.5)
1.2 The Jordan theorem Hereafter, for any A ∈ MN (C), when we write σ(A) = {λ1 , . . . , λp }, we will suppose that λi = λj
if 1 ≤ i < j ≤ p,
even though it is not explicitly stated. The following result, which collects the most important properties of the ascent generalized eigenspaces, is the central result of the finite-dimensional classic spectral theory. Theorem 1.2.1 (Jordan). Let A ∈ MN (C) be with σ(A) = {λ1 , . . . , λp }. Then
A N [(A − λj I)ν(λj ) ] ⊂ N [(A − λj I)ν(λj ) ],
1 ≤ j ≤ p.
Actually, if for any 1 ≤ j ≤ p we denote by Aj the restriction of A from N [(A − λj I)ν(λj ) ] to itself, then σ(Aj ) = {λj }, Moreover, CN =
p
1 ≤ j ≤ p.
N [(A − λj I)ν(λj ) ]
j=1
and ma (λj ) = dim N [(A − λj I)ν(λj ) ],
1 ≤ j ≤ p.
(1.6)
1.2. The Jordan theorem
9
The proof of Theorem 1.2.1 will be decomposed in a number of independent results of great intrinsic interest in their own right. Hereafter, given a matrix A ∈ MN (C) and the polynomial P defined through P (z) =
n
z ∈ C,
aj z j ,
j=0
we will denote by P (A) the matrix P (A) :=
n
aj Aj .
j=0
Note that if P is expressed as P (z) = an
q
(z − µi )mi ,
z ∈ C,
i=1
for some q, m1 , . . . , mq ∈ N, and µ1 , . . . , µq ∈ C, then P (A) = an
q
(A − µi I)mi ;
i=1
the product being independent of the order of the factors, because A commutes with itself and with the identity operator I. The next result shows that P (A) = 0 if σ(A) = {λ1 , . . . , λp } and P is divided by the polynomial p (z − λj )ν(λj ) , z ∈ C. z → j=1
Every polynomial P satisfying P (A) = 0 will be called an annihilator polynomial of the matrix A. Lemma 1.2.2. Let A ∈ MN (C) be with σ(A) = {λ1 , . . . , λp }. Then p
(A − λj I)ν(λj ) = 0.
j=1
Proof. Let B = {e1 , . . . , eN } be a basis of CN . Then, for each 1 ≤ i ≤ N , the N + 1 vectors 0 ≤ j ≤ N, Aj ei , must be linearly dependent, and, hence, there exist (αi0 , . . . , αiN ) ∈ CN +1 \ {0}
10
Chapter 1. The Jordan Theorem
such that
N
αij Aj ei = 0.
j=0
Now, consider the polynomials Q1 , . . . , QN defined by Qi (z) :=
N
z ∈ C,
αij z j ,
1 ≤ i ≤ N.
j=0
Each of them is non-zero and satisfies Qi (A)ei = 0. Thus, the product polynomial Q := Q1 · · · QN satisfies Q = 0 and Q(A)ei = 0,
1 ≤ i ≤ N,
since Q1 (A), . . . , QN (A) commute with one another. As the matrix Q(A) vanishes over the basis B, necessarily Q(A) = 0. We divide Q by a constant, so that its leading coefficient can be taken as the unity, and, hence, Q can be expressed in the form Q(z) =
q
(z − µj )mj ,
z ∈ C,
j=1
for some natural numbers q, m1 , . . . , mq ≥ 1, and certain µ1 , . . . , µq ∈ C mutually different. Now, we consider the polynomial R defined through q
R(z) :=
(z − µj )mj ,
z ∈ C,
j=1 µj ∈ σ(A)
if σ(A) ∩ {µ1 , . . . , µq } = ∅, and by R := 1 if σ(A) ∩ {µ1 , . . . , µq } = ∅. For these choices, Q admits the factorization Q(z) = R(z)
q j=1 µj ∈σ(A) /
(z − µj )mj ,
z ∈ C.
1.2. The Jordan theorem
11
On the other hand, we obtain from the definition of eigenvalue that q
det
(A − µj I)
mj
q
=
j=1 / µj ∈σ(A)
[det(A − µj I)]mj = 0,
j=1 / µj ∈σ(A)
and, hence, the matrix q
(A − µj I)mj
j=1 / µj ∈σ(A)
is invertible. Consequently, R(A) = 0 since Q(A) = 0; therefore, there exists i ∈ {1, . . . , q} such that µi ∈ σ(A). Let us check that the polynomial P defined by P (z) :=
q
(z − µj )min{ν(µj ),mj } ,
z ∈ C,
(1.7)
j=1 µj ∈σ(A)
also satisfies P (A) = 0. Indeed, if mj ≤ ν(µj ) for each µj ∈ σ(A), then P = R and we already know that R(A) = 0. So suppose that there exists µj ∈ σ(A) such that ν(µj ) < mj . Without loss of generality, we can assume that ν(µ1 ) < m1 . Then, 0 = R(A) = (A − µ1 I)m1
q
(A − µj I)mj
j=2 µj ∈σ(A)
and, therefore, R
q
(A − µj I)mj ⊂ N [(A − µ1 I)m1 ] = N [(A − µ1 I)ν(µ1 ) ],
j=2 µj ∈σ(A)
whence (A − µ1 I)ν(µ1 )
q
(A − µj I)mj = 0.
j=2 µj ∈σ(A)
By repeating this process for each µj ∈ σ(A) with ν(µj ) < mj , it is apparent that the polynomial P defined by (1.7) satisfies P (A) = 0. This concludes the proof. The next result characterizes the annihilator polynomials of a matrix.
12
Chapter 1. The Jordan Theorem
Theorem 1.2.3 (Cayley–Hamilton). Let A ∈ MN (C) be with σ(A) = {λ1 , . . . , λp }. Then, a polynomial P satisfies P (A) = 0 if and only if P (z) = Q(z)
p
(z − λj )ν(λj ) ,
z ∈ C,
(1.8)
j=1
for some polynomial Q. Proof. Suppose that P admits the factorization (1.8). Then, thanks to Lemma 1.2.2, p P (A) = Q(A) (A − λj I)ν(λj ) = 0. j=1
To show the converse suppose that a polynomial P satisfies P (A) = 0. Then, for each 1 ≤ j ≤ p and x ∈ N [A − λj I] \ {0}, one has that 0 = P (A)x = P (λj )x and, hence, P (λ) = 0 for every λ ∈ σ(A). Thus, there exist a polynomial Q and p natural numbers m1 , . . . , mp ≥ 1, such that P (z) = Q(z)
p
(z − λj )mj ,
z ∈ C.
j=1
Moreover, by choosing appropriate exponents m1 , . . . , mp ≥ 1, we can suppose that no eigenvalue of A is a root of the polynomial Q. To complete the proof of the theorem it suffices to check that mj ≥ ν(λj )
for each 1 ≤ j ≤ p.
(1.9)
The proof of (1.9) proceeds by contradiction. Suppose mj < ν(λj ) for some 1 ≤ j ≤ p. Changing the order of the eigenvalues, if necessary, we can assume that m1 < ν(λ1 ). Then, by (1.5), N [(A − λ1 I)m1 ] is a proper subspace of N [(A − λ1 I)m1 +1 ]. Let x ∈ N [(A − λ1 I)m1 +1 ] \ N [(A − λ1 I)m1 ] and consider y := (A − λ1 I)m1 x.
1.2. The Jordan theorem
13
By construction, we have that y = 0
and Ay = λ1 y.
Therefore, P (A)x = Q(A)
p
(A − λj I)mj y = Q(λ1 )
j=2
p
(λ1 − λj )mj y = 0,
j=2
because no eigenvalue of A is a root of Q, the numbers λ1 , . . . , λp are distinct, and y = 0. This contradicts the assumption P (A) = 0, which shows (1.9) and concludes the proof. The next result shows the existence of a class of interpolating polynomials. Lemma 1.2.4. Suppose h ≥ 1 is an integer, and r1 , . . . , rh ∈ C satisfy ri = rj if 1 ≤ i < j ≤ h. For each j ∈ {1, . . . , h} consider any integer number mj ≥ 0 and any complex numbers αj,0 , . . . , αj,mj . Then there exists a polynomial P such that P k) (rj ) = αj,k ,
1 ≤ j ≤ h,
0 ≤ k ≤ mj .
Proof. We will argue by induction on h. When h = 1, the polynomial P defined by
\[ P(z) := \sum_{k=0}^{m_1} \frac{\alpha_{1,k}}{k!}\,(z-r_1)^k, \qquad z \in \mathbb{C}, \]
satisfies the required conditions. Suppose the result is true for h = n, and let Pn be a polynomial such that
\[ P_n^{k)}(r_j) = \alpha_{j,k}, \qquad 1 \le j \le n, \quad 0 \le k \le m_j. \]
To prove the result for h = n + 1 it suffices to check that there exists a polynomial Q for which the polynomial Pn+1 defined by
\[ P_{n+1}(z) := P_n(z) + Q(z) \prod_{j=1}^{n} (z-r_j)^{m_j+1}, \qquad z \in \mathbb{C}, \tag{1.10} \]
satisfies the required conditions. The auxiliary polynomial Πn defined by
\[ \Pi_n(z) := \prod_{j=1}^{n} (z-r_j)^{m_j+1}, \qquad z \in \mathbb{C}, \]
satisfies
\[ \Pi_n^{k)}(r_j) = 0, \qquad 1 \le j \le n, \quad 0 \le k \le m_j. \tag{1.11} \]
Moreover, for every polynomial Q and every integer k ≥ 0, differentiating (1.10) k times by means of the Leibniz rule gives
\[ P_{n+1}^{k)} = P_n^{k)} + \sum_{\ell=0}^{k} \binom{k}{\ell}\, Q^{k-\ell)}\, \Pi_n^{\ell)}. \tag{1.12} \]
Suppose 1 ≤ j ≤ n and 0 ≤ k ≤ mj . Then, from (1.11), (1.12) and the induction hypothesis we obtain that
\[ P_{n+1}^{k)}(r_j) = P_n^{k)}(r_j) = \alpha_{j,k}. \]
Thus, to conclude the proof it suffices to construct Q satisfying
\[ \alpha_{n+1,k} = P_n^{k)}(r_{n+1}) + \sum_{\ell=0}^{k} \binom{k}{\ell}\, Q^{k-\ell)}(r_{n+1})\, \Pi_n^{\ell)}(r_{n+1}) \tag{1.13} \]
for each 0 ≤ k ≤ mn+1 . For k = 0, (1.13) becomes
\[ \alpha_{n+1,0} = P_n(r_{n+1}) + Q(r_{n+1})\, \Pi_n(r_{n+1}) \]
and, since Πn (rn+1 ) ≠ 0, the polynomial Q must satisfy Q(rn+1 ) = β0 , where
\[ \beta_0 := \frac{\alpha_{n+1,0} - P_n(r_{n+1})}{\Pi_n(r_{n+1})}. \tag{1.14} \]
In general, according to (1.13), for each 1 ≤ k ≤ mn+1 the polynomial Q must satisfy Q^{k)}(rn+1 ) = βk , where
\[ \beta_k := \frac{1}{\Pi_n(r_{n+1})} \Bigl( \alpha_{n+1,k} - P_n^{k)}(r_{n+1}) - \sum_{\ell=1}^{k} \binom{k}{\ell}\, \beta_{k-\ell}\, \Pi_n^{\ell)}(r_{n+1}) \Bigr). \]
As this is a recurrence formula whose initialization is given by (1.14), substituting the polynomial Q defined through
\[ Q(z) := \sum_{k=0}^{m_{n+1}} \frac{\beta_k}{k!}\,(z-r_{n+1})^k, \qquad z \in \mathbb{C}, \]
into (1.10) provides us with a P satisfying all required conditions.
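In computational terms, the interpolation problem solved by Lemma 1.2.4 amounts to a square linear system for the coefficients of P, since the number of prescribed conditions equals the number of free coefficients of a polynomial of the corresponding degree. The following minimal SymPy sketch reproduces the conclusion of the lemma for a small example; the helper name hermite_poly and the sample data are illustrative choices of ours, and the sketch solves the conditions directly instead of using the recursion of the proof.

    # Build a polynomial P with prescribed derivatives P^{k)}(r_j) = alpha[j][k] (Lemma 1.2.4)
    # by solving the resulting linear system for the coefficients of P.
    import sympy as sp

    def hermite_poly(nodes, alpha):
        z = sp.symbols('z')
        degree = sum(len(a) for a in alpha) - 1          # one coefficient per condition
        coeffs = sp.symbols(f'c0:{degree + 1}')
        P = sum(c * z**i for i, c in enumerate(coeffs))
        eqs = [sp.Eq(sp.diff(P, z, k).subs(z, r), val)
               for r, values in zip(nodes, alpha)
               for k, val in enumerate(values)]
        sol = sp.solve(eqs, coeffs, dict=True)[0]
        return sp.expand(P.subs(sol))

    # Example data: P(0) = 1, P'(0) = 0, P(1) = 0.
    P = hermite_poly([0, 1], [[1, 0], [0]])
    z = sp.symbols('z')
    print(P)                           # 1 - z**2
    print(sp.diff(P, z).subs(z, 0))    # 0, as prescribed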
Now, given A ∈ MN (C), we use Lemma 1.2.4 to construct an auxiliary family of polynomials associated with A that will enable us to prove the Jordan theorem. From now on, δij stands for the Kronecker delta, that is to say, δij = 1 if i = j, while δij = 0 if i ≠ j.
Proposition 1.2.5. Let A ∈ MN (C) be a matrix with exactly p ≥ 1 eigenvalues. Then there exist p polynomials P1 , . . . , Pp satisfying the following properties:
• Pj (A) ≠ 0 for each 1 ≤ j ≤ p.
• Pi (A)Pj (A) = δij Pi (A) for each 1 ≤ i, j ≤ p.
• I = Σ_{j=1}^{p} Pj (A).
Proof. Suppose σ(A) = {λ1 , . . . , λp }. By Lemma 1.2.4, for each 1 ≤ j ≤ p there exists a polynomial Pj such that
\[ P_j(\lambda_j) = 1, \qquad P_j^{k)}(\lambda_j) = 0, \quad 1 \le k \le \nu(\lambda_j)-1, \]
and
\[ P_j^{k)}(\lambda_i) = 0, \qquad 0 \le k \le \nu(\lambda_i)-1, \quad i \in \{1, \dots, p\} \setminus \{j\}. \]
These polynomials P1 , . . . , Pp satisfy the conditions of the statement. Indeed, fix 1 ≤ j ≤ p. By construction, the polynomial Pj − 1 has a zero of order greater than or equal to ν(λj ) at λj and, consequently, there exists a polynomial Qj such that
\[ P_j(z) - 1 = (z-\lambda_j)^{\nu(\lambda_j)}\, Q_j(z), \qquad z \in \mathbb{C}. \tag{1.15} \]
Similarly, Pj has a zero of order greater than or equal to ν(λi ) at each λi ∈ σ(A) \ {λj } and, hence, there exists a polynomial Rj such that
\[ P_j(z) = R_j(z) \prod_{\substack{i=1 \\ i \ne j}}^{p} (z-\lambda_i)^{\nu(\lambda_i)}, \qquad z \in \mathbb{C}. \tag{1.16} \]
As Pj (λj ) = 1 ≠ 0, the polynomial z ↦ (z − λj )^{ν(λj)} cannot divide Pj and, hence, by Theorem 1.2.3, Pj (A) ≠ 0. Moreover, by (1.15) and (1.16), we find that
\[ P_j(z)\,[P_j(z) - 1] = Q_j(z)\, R_j(z) \prod_{i=1}^{p} (z-\lambda_i)^{\nu(\lambda_i)}, \qquad z \in \mathbb{C}, \]
and, hence, according to Theorem 1.2.3, Pj (A)[Pj (A) − I] = 0. Similarly, for any 1 ≤ i ≤ p with i ≠ j, it is easy to see that Pi (A)Pj (A) = 0. Again by (1.15) and (1.16), the polynomial z ↦ (z − λj )^{ν(λj)} divides Pj − 1 and each Pi with i ≠ j. Consequently, it also divides their sum
\[ \sum_{i=1}^{p} P_i - 1. \]
Actually, as this property is valid for each 1 ≤ j ≤ p, the product polynomial
\[ z \mapsto \prod_{j=1}^{p} (z-\lambda_j)^{\nu(\lambda_j)} \]
also divides Σ_{i=1}^{p} Pi − 1, since its factors are mutually prime. Consequently, thanks again to Theorem 1.2.3,
\[ I = \sum_{j=1}^{p} P_j(A), \]
which completes the proof.
Now, we have all necessary background to prove the Jordan theorem.
Proof of Theorem 1.2.1. Consider the polynomials P1 , . . . , Pp , whose existence was guaranteed by Proposition 1.2.5, and the vector subspaces
\[ N_j := R[P_j(A)], \qquad 1 \le j \le p. \]
We claim that
\[ A(N_j) \subset N_j, \qquad 1 \le j \le p, \tag{1.17} \]
and
\[ \mathbb{C}^N = N_1 + \cdots + N_p. \tag{1.18} \]
Indeed, (1.17) follows readily from the fact that
\[ A\,P_j(A) = P_j(A)\,A, \qquad 1 \le j \le p, \]
and Property (1.18) follows straight away from the identity
\[ I = \sum_{j=1}^{p} P_j(A). \]
Now, we will prove that
\[ N_j = N[(A-\lambda_j I)^{\nu(\lambda_j)}], \qquad 1 \le j \le p. \tag{1.19} \]
Thanks to (1.16), for each 1 ≤ i ≤ p, the polynomial
\[ z \mapsto \prod_{j=1}^{p} (z-\lambda_j)^{\nu(\lambda_j)} \]
divides the polynomial z ↦ (z − λi )^{ν(λi)} Pi (z)
and, hence, by Theorem 1.2.3, (A − λi I)^{ν(λi)} Pi (A) = 0. Consequently,
\[ N_i = R[P_i(A)] \subset N[(A-\lambda_i I)^{\nu(\lambda_i)}]. \tag{1.20} \]
Combining these inclusions with (1.18) shows that
\[ \mathbb{C}^N = N[(A-\lambda_1 I)^{\nu(\lambda_1)}] + \cdots + N[(A-\lambda_p I)^{\nu(\lambda_p)}]. \]
Therefore, in order to prove (1.6) it suffices to show that the facts
\[ u_j \in N[(A-\lambda_j I)^{\nu(\lambda_j)}], \qquad 1 \le j \le p, \tag{1.21} \]
and
\[ \sum_{j=1}^{p} u_j = 0 \tag{1.22} \]
imply uj = 0 for each 1 ≤ j ≤ p. This can be obtained by contradiction. Suppose that uj ≠ 0 for some j ∈ {1, . . . , p}. By rearranging the eigenvalues, if necessary, we can assume that u1 ≠ 0. Then, by construction,
\[ (A-\lambda_1 I)^{\nu(\lambda_1)} u_1 = 0. \]
Let m denote the largest k ∈ N for which (A − λ1 I)^k u1 ≠ 0; this m exists because u1 ≠ 0 and (A − λ1 I)^0 = I. Moreover, 0 ≤ m < ν(λ1 ). Set
\[ v_1 := (A-\lambda_1 I)^{m} u_1. \]
Then,
\[ v_1 \ne 0, \qquad (A-\lambda_1 I)v_1 = (A-\lambda_1 I)^{m+1} u_1 = 0 \]
and, hence,
\[ A v_1 = \lambda_1 v_1. \tag{1.23} \]
Now, consider the auxiliary polynomial P defined by
\[ P(z) := (z-\lambda_1)^{m} \prod_{j=2}^{p} (z-\lambda_j)^{\nu(\lambda_j)}, \qquad z \in \mathbb{C}. \]
By the definition of v1 , (1.23) implies that
\[ P(A)u_1 = \prod_{j=2}^{p} (A-\lambda_j I)^{\nu(\lambda_j)} v_1 = \prod_{j=2}^{p} (\lambda_1-\lambda_j)^{\nu(\lambda_j)} v_1 \ne 0. \]
On the other hand, using identity (1.22) gives
\[ P(A)u_1 = -\sum_{k=2}^{p} P(A)u_k = -\sum_{k=2}^{p} (A-\lambda_1 I)^{m} \prod_{\substack{j=2 \\ j \ne k}}^{p} (A-\lambda_j I)^{\nu(\lambda_j)}\,(A-\lambda_k I)^{\nu(\lambda_k)} u_k \]
and, hence, by (1.21), we find that P (A)u1 = 0. This contradiction concludes the proof of (1.6). Identity (1.19) follows straight away from (1.6) and (1.20).
Thanks to (1.17) and (1.19), A leaves N [(A − λj I)^{ν(λj)}] invariant for each 1 ≤ j ≤ p. Thus, we can define the operators
\[ A_j \in L\bigl(N[(A-\lambda_j I)^{\nu(\lambda_j)}]\bigr), \qquad 1 \le j \le p, \]
as the restriction of A from N [(A − λj I)^{ν(λj)}] to itself. Consider 1 ≤ j ≤ p, a complex number µ, and u ∈ N [(A − λj I)^{ν(λj)}] \ {0} such that Aj u = µu. Then
\[ 0 = (A-\lambda_j I)^{\nu(\lambda_j)} u = (\mu-\lambda_j)^{\nu(\lambda_j)} u \]
and, therefore, µ = λj . Consequently, σ(Aj ) = {λj } for each 1 ≤ j ≤ p. Now we set
\[ d_j := \dim N[(A-\lambda_j I)^{\nu(\lambda_j)}], \qquad 1 \le j \le p. \]
For each 1 ≤ j ≤ p, let Bj be a basis of N [(A − λj I)^{ν(λj)}]. By (1.6), the (ordered) union of these bases
\[ B := B_1 \cup \cdots \cup B_p \]
provides us with a basis of C^N. Moreover, thanks to the invariance property (1.17), with respect to B the linear operator A has the associated matrix
\[ A_B := \begin{pmatrix} A_{1,B_1} & 0 & \cdots & 0 \\ 0 & A_{2,B_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{p,B_p} \end{pmatrix}, \tag{1.24} \]
where, for each 1 ≤ j ≤ p, Aj,Bj denotes the matrix associated with Aj with respect to the basis Bj . Note that Aj,Bj ∈ Mdj (C), for each 1 ≤ j ≤ p, and that
\[ N = \sum_{j=1}^{p} d_j. \]
Moreover, if PB denotes the matrix whose columns consist of the coordinates of the vectors of B with respect to the original basis, then
\[ A = P_B A_B P_B^{-1}. \tag{1.25} \]
From (1.24) and (1.25), we find that
\[ \det(A-zI) = \det(A_B-zI) = \prod_{j=1}^{p} \det(A_{j,B_j}-zI) \]
for all z ∈ C. Moreover, as σ(Aj,Bj ) = {λj } for all 1 ≤ j ≤ p, it follows that
\[ \det(A_{j,B_j}-zI) = (-1)^{d_j} (z-\lambda_j)^{d_j}, \qquad z \in \mathbb{C}, \quad 1 \le j \le p, \]
and, therefore,
\[ \det(A-zI) = (-1)^{N} \prod_{j=1}^{p} (z-\lambda_j)^{d_j}, \qquad z \in \mathbb{C}. \]
On the other hand, by definition,
\[ \det(A-zI) = (-1)^{N} \prod_{j=1}^{p} (z-\lambda_j)^{m_a(\lambda_j)}, \qquad z \in \mathbb{C}, \]
and, consequently, dj = ma (λj ) for each 1 ≤ j ≤ p. This concludes the proof.
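The decomposition just established is easy to probe numerically. The following SymPy sketch, with an example matrix chosen arbitrarily by us, checks that the dimensions of the ascent generalized eigenspaces N [(A − λj I)^{ν(λj)}] add up to N, and then asks SymPy for a Jordan canonical form directly. Note that SymPy places the off-diagonal 1's of each Jordan block on the superdiagonal, whereas the convention adopted below in (1.26) puts them on the subdiagonal; the two descriptions are equivalent, since they correspond to reversing the order of each chain of basis vectors.

    # Check that dim N[(A - lam I)^{m_a(lam)}] = m_a(lam) for every eigenvalue, and that the
    # dimensions add up to N, for a concrete matrix.
    import sympy as sp

    A = sp.Matrix([[2, 1, 0],
                   [0, 2, 0],
                   [0, 0, 3]])
    N = A.shape[0]
    total = 0
    for lam, alg_mult in A.eigenvals().items():
        M = (A - lam * sp.eye(N))**alg_mult      # nu(lam) <= m_a(lam), so this power suffices
        dim = N - M.rank()                       # dim N[(A - lam I)^{m_a(lam)}]
        print(lam, alg_mult, dim)                # dim equals the algebraic multiplicity
        total += dim
    print(total == N)                            # True: the ascent spaces fill up C^N

    P, J = A.jordan_form()                       # a change of basis P and the Jordan form J
    print(J)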
1.3 The Jordan canonical form

Pick a natural number n ≥ 1 and, for each integer 1 ≤ j ≤ n, let kj ≥ 1 be an integer number and Bj ∈ Mkj (C). Then, one can define
\[ \mathrm{diag}\{B_1, \dots, B_n\} := \begin{pmatrix} B_1 & 0 & \cdots & 0 \\ 0 & B_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_n \end{pmatrix} \in M_N(\mathbb{C}), \]
where N := k1 + · · · + kn .
We consider a matrix A ∈ MN (C) with spectrum σ(A) = {λ1 , . . . , λp }. The main goal of this section consists in constructing, for each 1 ≤ j ≤ p, a basis
\[ B_j := \{e_1^j, \dots, e_{m_a(\lambda_j)}^j\} \]
of the ascent generalized eigenspace N [(A − λj I)^{ν(λj)}] so that the matrix Aj,Bj of the operator from N [(A − λj I)^{ν(λj)}] to itself defined by restriction of A is block-diagonal,
\[ A_{j,B_j} = \mathrm{diag}\{J_{j,i_1}, J_{j,i_2}, \dots, J_{j,i_{n_j}}\}, \]
for some integer number nj ≥ 1, where ih ∈ {1, . . . , ν(λj )} for 1 ≤ h ≤ nj , and where, for each natural n ≥ 1, the matrix Jj,n ∈ Mn (C) stands for the Jordan block of order n associated with the eigenvalue λj , which is defined by
\[ J_{j,1} := \lambda_j, \qquad J_{j,2} := \begin{pmatrix} \lambda_j & 0 \\ 1 & \lambda_j \end{pmatrix}, \qquad J_{j,3} := \begin{pmatrix} \lambda_j & 0 & 0 \\ 1 & \lambda_j & 0 \\ 0 & 1 & \lambda_j \end{pmatrix}, \;\dots,\; J_{j,n} := \begin{pmatrix} \lambda_j & 0 & 0 & \cdots & 0 & 0 \\ 1 & \lambda_j & 0 & \cdots & 0 & 0 \\ 0 & 1 & \lambda_j & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_j & 0 \\ 0 & 0 & 0 & \cdots & 1 & \lambda_j \end{pmatrix}. \tag{1.26} \]
Fix j ∈ {1, . . . , p}. When ν(λj ) = 1, the basis Bj consists of eigenvectors and, hence,
\[ A_j e_k^j = \lambda_j e_k^j, \qquad 1 \le k \le m_a(\lambda_j) = m_g(\lambda_j). \]
Consequently, in such a case the matrix Aj,Bj is diagonal and consists of ma (λj ) Jordan blocks Jj,1 of order 1.
Suppose ν(λj ) ≥ 2 and, for each 1 ≤ k ≤ ν(λj ) − 1, let C_k^j be a linear supplement of N [(A − λj I)^k] in N [(A − λj I)^{k+1}], that is,
\[ N[(A-\lambda_j I)^{k+1}] = N[(A-\lambda_j I)^{k}] \oplus C_k^j, \qquad 1 \le k \le \nu(\lambda_j)-1. \tag{1.27} \]
Then, setting C_0^j := N [A − λj I], we have that
\[ N[(A-\lambda_j I)^{\nu(\lambda_j)}] = \bigoplus_{k=0}^{\nu(\lambda_j)-1} C_k^j \]
and, hence, in order to construct a basis of N [(A − λj I)^{ν(λj)}] it suffices to construct a basis of C_k^j for each 0 ≤ k ≤ ν(λj ) − 1. Now, for every 0 ≤ k ≤ ν(λj ) − 1, we set
\[ h_k := \dim C_k^j. \]
Then,
\[ h_k = \dim N[(A-\lambda_j I)^{k+1}] - \dim N[(A-\lambda_j I)^{k}] \ge 1. \]
Let {e_1^j , . . . , e_{h_{ν(λj)−1}}^j} be a basis of C_{ν(λj)−1}^j, and consider the images of the elements of this basis under the action of the operator A − λj I,
\[ \{(A-\lambda_j I)e_i^j : 1 \le i \le h_{\nu(\lambda_j)-1}\}. \tag{1.28} \]
These images form a linearly independent system of C_{ν(λj)−2}^j. Indeed, suppose there exist ci ∈ C, for 1 ≤ i ≤ h_{ν(λj)−1}, for which
\[ \sum_{i=1}^{h_{\nu(\lambda_j)-1}} c_i (A-\lambda_j I)e_i^j = 0. \]
Then, since ν(λj ) − 1 ≥ 1, we have that
\[ \sum_{i=1}^{h_{\nu(\lambda_j)-1}} c_i e_i^j \in N[A-\lambda_j I] \subset N[(A-\lambda_j I)^{\nu(\lambda_j)-1}] \]
and, hence, since e_1^j , . . . , e_{h_{ν(λj)−1}}^j ∈ C_{ν(λj)−1}^j,
\[ \sum_{i=1}^{h_{\nu(\lambda_j)-1}} c_i e_i^j \in N[(A-\lambda_j I)^{\nu(\lambda_j)-1}] \cap C_{\nu(\lambda_j)-1}^j. \]
Thus, owing to (1.27), this gives
\[ \sum_{i=1}^{h_{\nu(\lambda_j)-1}} c_i e_i^j = 0. \]
As the set {eji : 1 ≤ i ≤ hν(λj )−1 } is linearly independent, necessarily ci = 0 for each 1 ≤ i ≤ hν(λj )−1 and, therefore, the system (1.28) is linearly independent j and contained in Cν(λ , as claimed above. Consequently, if hν(λj )−2 = hν(λj )−1 , j )−2 then j j j = span (A − λ I)e , . . . , (A − λ I)e Cν(λ j j 1 hν(λ )−1 , j )−2 j
whereas in the case hν(λj )−2 > hν(λj )−1 one can extend the system (1.28) to j complete a basis of Cν(λ , say j )−2 {(A − λj I)ej1 , . . . , (A − λj I)ejhν(λ
j )−1
, ejhν(λ
j )−1
Suppose ν(λj ) = 2. Then j = N [A − λj I] Cν(λ j )−2
j +1 , . . . , ehν(λj )−2 }.
22
Chapter 1. The Jordan Theorem
and the set Bj := {ej1 , (A − λj I)ej1 , ej2 , (A − λj I)ej2 , . . . , ejhν(λ
j )−1
, (A − λj I)ejhν(λ
j )−1
, ejhν(λ
j )−1
j +1 , . . . , ehν(λj )−2 }
provides us with a basis of N [(A − λj I)2 ]. Moreover, since Aeji = λj eji + (A − λj I)eji ,
1 ≤ i ≤ hν(λj )−1 ,
A(A − λj I)eji = λj (A − λj I)eji ,
1 ≤ i ≤ hν(λj )−1 ,
Aeji
=
λj eji ,
hν(λj )−1 + 1 ≤ i ≤ hν(λj )−2 ,
the matrix of Aj with respect to the basis Bj is the following: Aj,Bj = diag{Jj,2 , Jj,2 , . . . , Jj,2 , Jj,1 , Jj,1 , . . . , Jj,1 }. hν(λj )−2 −hν(λj )−1
hν(λj )−1
When ν(λj ) ≥ 3, as in the previous step, we consider the system formed by all j , that is, images under the action of A − λj I of the vectors of the basis of Cν(λ j )−2 " ! (A − λj I)2 eji : 1 ≤ i ≤ hν(λj )−1 " ! ∪ (A − λj I)eji : hν(λj )−1 + 1 ≤ i ≤ hν(λj )−2 .
(1.29)
j This system is linearly independent in Cν(λ . Indeed, suppose ci ∈ C, for 1 ≤ j )−3 i ≤ hν(λj )−2 , satisfy hν(λj )−1
hν(λj )−2
ci (A −
λj I)2 eji
+
i=1
ci (A − λj I)eji = 0.
i=hν(λj )−1 +1
Then, arguing as above, it follows from ν(λj ) − 2 ≥ 1 that hν(λj )−1
i=1
hν(λj )−2
ci (A −
λj I)eji
+
j ci eji ∈ N [(A − λj I)ν(λj )−2 ] ∩ Cν(λ j )−2
i=hν(λj )−1 +1
and, therefore, due to (1.27), ci = 0 for each 1 ≤ i ≤ hν(λj )−2 . Similarly, (1.29) j provides us with a basis of Cν(λ if hν(λj )−3 = hν(λj )−2 , whereas it can be j )−3 j extended to complete a basis of Cν(λ , by adding the appropriate system j )−3
{eji : hν(λj )−2 + 1 ≤ i ≤ hν(λj )−3 },
1.4. The real canonical form
23
if hν(λj )−3 > hν(λj )−2 . When ν(λj ) = 3, we are done. In the general case, after ν(λj ) steps we will have constructed an ordered basis of the ascent generalized eigenspace of the form #
ν(λj )
Bj :=
i=1
hν(λ
ν(λj )−i !
j )−i #
=hν(λ
j )−i+1
#
+1
" (A − λj I)k ej .
k=0
The matrix of Aj with respect to Bj has exactly hν(λj )−1 Jordan blocks of order ν(λj ), exactly hν(λj )−2 − hν(λj )−1 Jordan blocks of order ν(λj ) − 1 and, in general, hν(λj )−k − hν(λj )−k+1 Jordan blocks of order ν(λj ) − k + 1, for each 1 ≤ k ≤ ν(λj ). The ordered set of vectors of Bj is often called a chain of generalized eigenvectors. Note that the operator Aj can be expressed as Aj = λj I + (Aj − λj I) and, consequently, Aj is a sum of the diagonal operator λj I plus the nilpotent operator Aj − λj I, since (A − λj I)ν(λj ) (N [(A − λj )ν(λj ) ]) = {0}. Moreover, the basis Bj that we have just constructed provides us with a matrix of Aj − λj I with the maximum possible number of zero entries. Let P be the matrix whose columns consist of the coordinates of the vectors of the bases B1 , . . . , Bp that we have just constructed. The matrix J := P AP −1 is called the Jordan canonical form of the matrix A. In this section we have proved that J = diag{J1 , J2 , . . . , Jp }, where, for each 1 ≤ j ≤ p, the matrix Jj consists of a finite diagonal union of Jordan blocks of the type Jj,k , for 1 ≤ k ≤ ν(λj ).
1.4 The real canonical form Throughout this section we suppose that A ∈ MN (R). In such a case, the characteristic polynomial of A has real coefficients, and, hence, ¯ : λ ∈ σ(A)} σ(A) = {λ
24
Chapter 1. The Jordan Theorem
where the complex conjugate of any z ∈ C is denoted by z¯. Consequently, the corresponding ascent generalized eigenspaces in the decomposition (1.6) also appear in pairs. Suppose σ(A) = {λ1 , . . . , λp } and pick 1 ≤ j ≤ p. It is easy to see that the vectors of the basis Bj constructed in Section 1.3 all have real components if λj ∈ R, whereas if λj = αj + iβj ,
with
αj , βj ∈ R,
βj = 0,
then, their components are complex. Suppose this is the case, and reorder the eigenvalues, if necessary, so that ¯j . λj+1 = αj − iβj = λ Then, for each k ≥ 0 and e ∈ CN , (A − λj I)k e = 0 if and only if (A − λj+1 I)k e¯ = 0,
(1.30)
where e¯ is the complex conjugate vector of e, that is, the vector whose components are the complex conjugate components of e. Therefore, for each k ≥ 0, dim N [(A − λj I)k ] = dim N [(A − λj+1 I)k ] and, in particular, ν(λj ) = ν(λj+1 ) and ma (λj ) = ma (λj+1 ). Let Bj := {ej1 , . . . , ejma (λj ) } be the basis of N [(A − λj I)ν(λj ) ] constructed in Section 1.3. Thanks to (1.30), it is easy to see that ej1 , . . . , e¯jma (λj ) } Bj+1 := {¯ provides us with the Jordan basis of N [(A − λj+1 I)ν(λj+1 ) ] constructed in Section 1.3. In other words, we are choosing := e¯jk , ej+1 k
1 ≤ k ≤ ma (λj ) = ma (λj+1 ).
Actually, by construction, if a system {ejk+i : 0 ≤ i ≤ h − 1}, of h ≥ 1 vectors of the basis Bj , where 1 ≤ k ≤ ma (λj ), provides a Jordan block Jj,h of order h, then, their conjugate vectors ¯jk+i , ej+1 k+i = e
0 ≤ i ≤ h − 1,
also provide a Jordan block of type Jj+1,h .
1.4. The real canonical form
25
From now on, for any integer n ≥ 1 and z ∈ Cn , the vectors Re z ∈ Rn and Im z ∈ Rn will stand for the real and the imaginary parts of z, respectively. According to this notation, when h = 1, we have that Aejk = λj ejk and, hence, A Re ejk = αj Re ejk − βj Im ejk , A Im ejk = αj Im ejk + βj Re ejk . Consequently, A leaves the real linear manifold generated by Re ejk and Im ejk invariant, and the matrix of its restriction to this manifold with respect to the basis {Re ejk , Im ejk } is R αj βj . (1.31) Jj,2 := −βj αj Note that the system {Re ejk , Im ejk } is linearly independent, since the determinant R of Jj,2 is non-zero. Similarly, when h ≥ 2, the identities Aejk+i = λj ejk+i + ejk+i+1 , Aejk+h−1
=
0 ≤ i ≤ h − 2,
λj ejk+h−1 ,
hold and, hence, identifying real and imaginary parts in these identities shows that, for each 0 ≤ i ≤ h − 2, A Re ejk+i = Re λj Re ejk+i − Im λj Im ejk+i + Re ejk+i+1 , A Im ejk+i = Re λj Im ejk+i + Im λj Re ejk+i + Im ejk+i+1 , and A Re ejk+h−1 = Re λj Re ejk+h−1 − Im λj Im ejk+h−1 , A Im ejk+h−1 = Re λj Im ejk+h−1 + Im λj Re ejk+h−1 . Therefore, A leaves the real linear manifold span Re ejk , Im ejk , . . . , Re ejk+h−1 , Im ejk+h−1 invariant, and the matrix of its restriction to this linear manifold is ⎞ ⎛ R Jj,2 0 ··· 0 0 ⎟ ⎜ R ⎜ I2 Jj,2 · · · 0 0 ⎟ ⎟ ⎜ R ⎜ . .. .. ⎟ .. .. Jj,2h = ⎜ .. ⎟, . . . . ⎟ ⎜ R ⎜ 0 0 · · · Jj,2 0 ⎟ ⎠ ⎝ R 0 0 · · · I2 Jj,2
(1.32)
26
Chapter 1. The Jordan Theorem R
where Jj,2 is the Jordan block (1.31), and I2 is the identity matrix of order 2. Note R
that, since the determinant of Jj,2h is non-zero, the manifold (1.32) has dimension 2h. These features show that choosing the basis R
Bj := {Re ej1 , Im ej1 , . . . , Re ejma (λj ) , Im ejma (λj ) }, instead of Bj ∪Bj+1 , the endomorphism A leaves the real linear manifold generated by BjR invariant, and that its associated matrix with respect to this basis is blockdiagonal, each of its blocks being of type R
1 ≤ k ≤ ν(λj ).
Jj,2k ,
Iterating the same process for each pair of complex conjugate eigenvalues, one is naturally driven to the construction of the so-called real Jordan canonical form R of the matrix A. The block Jj,2k is often called the real Jordan block of order 2k associated with the complex eigenvalue λj .
1.5 An example Throughout this section we make ⎛ 2 ⎜−1 ⎜ A := ⎜ ⎝1 0
the particular choice ⎞ 2 −1 −1 3 −1 1 ⎟ ⎟ ⎟ ∈ M4 (R). 1 1 0⎠ 1
−1
(1.33)
2
After some straightforward, but tedious, calculations, we arrive at the identity $ $ $2 − z 2 −1 −1 $$ $ $ −1 3 − z −1 1 $$ $ det(A − zI) = $ z ∈ C. $ = [(z − 2)2 + 1]2 , $ $ 1 1 1 − z 0 $ $ $ 0 1 −1 2 − z $ Thus, σ(A) = {2 − i, 2 + i} with algebraic multiplicities ma (2 − i) = ma (2 + i) = 2. We have that
⎛
i ⎜−1 ⎜ A − (2 − i)I = ⎜ ⎝1 0
⎞ 2 −1 −1 1+i −1 1⎟ ⎟ ⎟ 1 −1 + i 0 ⎠ 1 −1 i
1.5. An example
27
and the minor of order 3 formed by the last three entries of the first, third and fourth columns, does not vanish; indeed, $ $ $ $ $ $ $−1 −1 1$$ $−1 + i 0$ $−1 1$ $ $ $ $ $ $ $ $ = 2i = 0. $−$ $ 1 −1 + i 0$ = − $ $ $−1 i $ $ −1 $ $ i $ $0 −1 i Hence, dim R[A − (2 − i)I] = 3,
dim N [A − (2 − i)I] = 4 − 3 = 1,
and, therefore, since A is a real matrix, Theorem 1.2.1 implies that 2
dim N [(A − (2 − i)I) ] = 2,
(1.34)
⎧ ⎨ N [A − (2 + i)I] = N [A − (2 − i)I],
(1.35)
⎩ N [(A − (2 + i)I)2 ] = N [(A − (2 − i)I)2 ], and ν(2 − i) = ν(2 + i) = 2, ¯ the conjugate of the set L ⊂ R4 , i.e., where we have denoted by L ¯ := {¯ L u : u ∈ L}. Clearly, (x, y, z, t) ∈ N [A − (2 − i)I] if and only if ⎛
−1 −1 ⎜ ⎝ 1 −1 + i 0 −1
⎞⎛ ⎞ ⎛ ⎞ 1 x 1+i ⎟⎜ ⎟ ⎜ ⎟ 0⎠ ⎝z ⎠ = − ⎝ 1 ⎠ y, i t 1
y ∈ C.
By solving this system for the choice y = 1, we are led to (x, z, t) = (1, 1 + i, 1). Therefore, ⎡⎛
⎞⎤ 1 ⎢⎜ 1 ⎟⎥ ⎢⎜ ⎟⎥ N [A − (2 − i)I] = span ⎢⎜ ⎟⎥ , ⎣⎝1 + i⎠⎦ 1
⎡⎛
⎞⎤ 1 ⎢⎜ 1 ⎟⎥ ⎢⎜ ⎟⎥ N [A − (2 + i)I] = span ⎢⎜ ⎟⎥ . ⎣⎝1 − i⎠⎦ 1
28
Chapter 1. The Jordan Theorem 2
Now, we will determine N [(A − (2 − i)I) ]. Multiplying A − (2 − i)I by itself we find that ⎛
−4 ⎜−2 − 2i ⎜ 2 (A − (2 − i)I) = ⎜ ⎝−2 + 2i −2
4i −2 + 2i 2 + 2i 2i
⎞ 2 − 2i 2 + 2i⎟ ⎟ ⎟. 0 ⎠
−2i −2i −2 − 2i −2i
0
According to (1.34), 2
dim R[(A − (2 − i)I) ] = 2. 2
Moreover, a non-vanishing minor of order 2 of (A − (2 − i)I) is the one formed by the last two entries of the first and fourth rows; indeed, $ $ $−2i 2 − 2i$ $ $ $ = 2i(2 − 2i) = 0. $ $−2i 0 $ Thus, the second and third rows must depend linearly on the first and fourth ones; consequently, 2
(x, y, z, t) ∈ N [(A − (2 − 2i)I) ] if and only if
−i 1 − i −i 0
z 2 2i = x− y. t 1 i
(1.36)
For every (x, y) ∈ C2 , the system (1.36) has a unique solution, and making the choices (x, y) ∈ {(2, 0), (0, 2)}, we are led to ⎡⎛
⎞ ⎛ ⎞⎤ 2 0 ⎢⎜ 0 ⎟ ⎜ 2 ⎟⎥ ⎢⎜ ⎟ ⎜ ⎟⎥ N [(A − (2 − i)I)2 ] = span ⎢⎜ ⎟,⎜ ⎟⎥ . ⎣⎝ 2i ⎠ ⎝ 2 ⎠⎦ 1+i 1−i Thus, owing to (1.35), we also obtain that ⎡⎛
⎞ ⎛ ⎞⎤ 2 0 ⎢⎜ 0 ⎟ ⎜ 2 ⎟⎥ ⎢⎜ ⎟ ⎜ ⎟⎥ 2 N [(A − (2 + i)I) ] = span ⎢⎜ ⎟,⎜ ⎟⎥ . ⎣⎝ −2i ⎠ ⎝ 2 ⎠⎦ 1−i 1+i
1.5. An example
29
1.5.1 The complex Jordan form of A As the first vector of the Jordan basis of A we consider ⎛ ⎞ 2 ⎜ 0 ⎟ C ⎜ ⎟ e1 := ⎜ ⎟. ⎝ 2i ⎠ 1+i It is easy to see that C
2
e1 ∈ N [(A − (2 − i)I) ] \ N [A − (2 − i)I] and, hence, C
C
e2 := (A − (2 − i)I) e1 ∈ N [A − (2 − i)I] \ {0}. Clearly,
⎛ ⎞ ⎞ 1 −1 − i ⎜ 1 ⎟ ⎜−1 − i⎟ C ⎜ ⎟ ⎜ ⎟ e2 = ⎜ ⎟ ⎟ = −(1 + i) ⎜ ⎝1 + i⎠ ⎝ −2i ⎠ ⎛
−1 − i
1
and, hence, since A is real, we also find that ⎛
⎞ 2 ⎜ 0 ⎟ C C ⎜ ⎟ 2 e3 := e¯1 = ⎜ ⎟ ∈ N [(A − (2 + i)I) ] \ N [A − (2 + i)I] ⎝ −2i ⎠ 1−i and
⎛
⎞ −1 + i ⎜−1 + i⎟ C C ⎜ ⎟ e4 := e¯2 = ⎜ ⎟ ∈ N [A − (2 + i)I] \ {0}. ⎝ 2i ⎠ −1 + i
By construction, C
C
C
C
C
C
Ae1 = (2 − i)e1 + e2 ,
C
C
C
C
Ae2 = (2 − i)e2 ,
Ae3 = (2 + i)e3 + e4 ,
Ae4 = (2 + i)e4 .
Therefore, with respect to the basis C
C
C
C
BC := {e1 , e2 , e3 , e4 },
30
Chapter 1. The Jordan Theorem
the associated matrix to A equals its complex Jordan form ⎞ ⎛ 2−i 0 0 0 ⎜ 1 2−i 0 0 ⎟ ⎟ ⎜ JC := ⎜ ⎟ = PC−1 APC ⎝ 0 0 2+i 0 ⎠ 0 0 1 2+i where PC is the matrix of change of basis ⎛ 2 −1 − i ⎜ 0 −1 − i ⎜ PC := ⎜ ⎝ 2i −2i 1 + i −1 − i
⎞ 2 −1 + i 0 −1 + i⎟ ⎟ ⎟. −2i 2i ⎠ 1 − i −1 + i
(1.37)
(1.38)
1.5.2 The real Jordan form of A The real Jordan form is obtained by considering the new basis R
R
R
R
BR := {e1 , e2 , e3 , e4 }, where
⎛ ⎞ 2 ⎜ ⎟ R C ⎜0⎟ e1 := Re e1 = ⎜ ⎟ , ⎝0⎠ 1 ⎛ ⎞ −1 ⎜ ⎟ R C ⎜−1⎟ e3 := Re e2 = ⎜ ⎟ , ⎝0⎠ −1
Since
R
Ae1 R Ae2 R Ae3 R Ae4
⎛ ⎞ 0 ⎜ ⎟ R C ⎜0⎟ e2 := Im e1 = ⎜ ⎟ , ⎝2⎠ 1 ⎛ ⎞ −1 ⎜ ⎟ R C ⎜−1⎟ e4 := Im e2 = ⎜ ⎟ . ⎝−2⎠ −1 R
R
R
= 2e1 + e2 + e3 , R R R = −e1 + 2e2 + e4 , R R = 2e3 + e4 , R R = −e3 + 2e4 ,
the real Jordan form of A is given through ⎛ ⎞ 2 −1 0 0 ⎜1 2 0 0 ⎟ ⎜ ⎟ JR = ⎜ ⎟ = PR−1 APR , ⎝1 0 2 −1⎠ 0
1
1
2
(1.39)
1.6. Exercises
31
where
⎛
2 ⎜0 ⎜ PR = ⎜ ⎝0 1
0 0 2 1
−1 −1 0 −1
⎞ −1 −1⎟ ⎟ ⎟. −2⎠
(1.40)
−1
1.6 Exercises 1. Determine the Jordan canonical forms (both complex and real) of each of the following real matrices: 1 2 −1 −1 1 −1 −1 2 3 4 , , , , , −5 −1 2 −1 5 −3 −7 8 −1 7 ⎛
10 ⎜ ⎝ 5 −9
4 3 −4
⎞ ⎛ 13 1 ⎟ ⎜ 7 ⎠ , ⎝1 −12 2 ⎛
0 ⎜ ⎝1 1 2.
1 0 1
1 2 1
⎞ ⎛ 2 1 ⎟ ⎜ 1⎠ , ⎝ 2 1 −3
⎞ ⎛ 1 3 ⎟ ⎜ 1⎠ , ⎝ 2 0 4
2 0 2
⎞ ⎛ 1 1 5 −3 ⎟ ⎜ 1 −1⎠ , ⎝ 8 −5 2 4 −4 3 ⎞ ⎛ ⎞ 1 1 1 1 4 ⎜ 2 2 2 2⎟ ⎟ ⎟ ⎜ 2⎠ , ⎜ ⎟. ⎝ 3 3 3 3⎠ 3 4 4 4 4
⎞ −2 ⎟ −4⎠ , 3
Let A be a square matrix such that σ(A) = {α, β, γ} and dim N [(A − αI)2 ] = 3,
dim N [A − βI] = 3,
dim N [(A − βI) ] = 4,
dim N [(A − βI)3 ] = 5,
dim N [(A − βI)4 ] = 5,
dim N [A − γI] = 2,
dim N [(A − γI)2 ] = 3,
dim N [(A − γI)3 ] = 3.
dim N [A − αI] = 3, 2
Determine its order and its Jordan canonical form. 3. Given A ∈ MN (C), the adjoint of A, denoted by A∗ , is defined as the conjugate matrix of the transposed matrix of A, that is, ¯T . A∗ := A The matrix A ∈ MN (C) is said to be Hermitian if A = A∗ . Let ·, · be the skew-linear product in CN , N uj v¯j , u, v = j=1
for each u, v ∈ C with u = (u1 , . . . , uN ) and v = (v1 , . . . , vN ). Prove the following: • The matrix A is Hermitian if and only if N
Au, v = u, Av
for each
u, v ∈ CN .
• The spectrum of every Hermitian matrix consists of real semisimple eigenvalues.
32
Chapter 1. The Jordan Theorem • Every Hermitian matrix is diagonalizable. • The eigenvectors associated with different eigenvalues of a Hermitian matrix must be orthogonal. By orthogonal vectors it is meant that their skew-linear product equals zero. • The matrix P of the change of basis from any Hermitian matrix to its Jordan canonical form can be chosen to satisfy P ∗ P = I. Such matrices P are said to be orthogonal.
4. Suppose Ω ⊂ R is an open interval containing zero, k ≥ 1 is an integer number, and aij ∈ C k (Ω, C) for each i, j ∈ {1, . . . , N }. Consider, for each s ∈ Ω, the matrix A(s) = (aij (s))1≤i,j≤N ∈ MN (C). Suppose λ0 ∈ C is a simple eigenvalue of A0 := A(0) and ϕ0 ∈ N [A0 − λ0 I] \ {0}. We ask the following: • Check that
CN = N [A0 − λ0 I] ⊕ R[A0 − λ0 I].
• Given a linear subspace Y ⊂ CN such that CN = N [A0 − λ0 I] ⊕ Y, use the implicit function theorem to prove that there exist a real number δ > 0 and two unique functions Λ ∈ C k ((−δ, δ), C) and Φ ∈ C k ((−δ, δ), Y ) such that Λ(0) = λ0 ,
Φ(0) = ϕ0 ,
and for all s ∈ (−δ, δ), Φ(s) − ϕ0 ∈ Y,
A(s)Φ(s) = Λ(s)Φ(s).
• Calculate the jth derivative of Λ at 0 for each 1 ≤ j ≤ k, and prove that Λ (0) = 0 if and only if A (0)ϕ0 ∈ R[A0 − λ0 I]. 5. Suppose Ω ⊂ R is an open interval containing zero, aij ∈ C(Ω, C) for each i, j ∈ {1, . . . , N }, and consider the matrices A(s) = (aij (s))1≤i,j≤N ∈ MN (C),
s ∈ Ω.
Suppose σ(A(0)) = {λ1 , . . . , λp }, and let L :=
min
1≤i<j≤N
{|λi − λj |} > 0.
Use a celebrated theorem on complex variable theory to prove that for each R ∈ (0, L) there exists δ > 0 with (−δ, δ) ⊂ Ω such that for every s ∈ (−δ, δ) and 1 ≤ j ≤ p the matrix A(s) has exactly ma (λj ) eigenvalues in the open disk of radius R centered at λj , counting according to their algebraic multiplicities.
1.6. Exercises 6.
33
Let A = (aij )1≤i,j≤N ∈ MN (C).
Prove that there exists a sequence {An }n∈N of matrices An = (an ij )1≤i,j≤N ∈ MN (C),
n ≥ 1,
such that, for each n ∈ N, all the eigenvalues of An are simple, and lim an ij = aij
n→∞
for all
i, j ∈ {1, . . . , N }.
7. Let α ∈ C \ {0} and A ∈ MN (C) be such that σ(A) = {λ1 , . . . , λp }. Construct a basis in CN so that the associated matrix of A is block-diagonal with blocks of order n of the form ⎛ ⎞ λj 0 0 ··· 0 0 ⎜α λ 0 ··· 0 0⎟ j ⎜ ⎟ ⎜ ⎟ ⎜0 ⎟ · · · 0 0 α λ j ⎜ ⎟ ⎜. ⎟, . . . . . ⎜ .. .. .. .. .. .. ⎟ ⎜ ⎟ ⎜ ⎟ ⎝0 0 0 · · · λj 0 ⎠ 0
0
0
···
α
λj
where 1 ≤ n ≤ ν(λj ) and 1 ≤ j ≤ p. 8. Prove that for each invertible matrix A ∈ MN (C) there exists a polynomial P such that P (A)2 = A. Prove also that for all B ∈ M2 (C) and every polynomial Q, 0 0 Q(B) = . 1 0 9. 10.
Let A ∈ MN (C) be with ν(0) ≥ 2. Prove that B 2 = A if B ∈ MN (C). For any matrix A = (aij )1≤i,j≤N ∈ MN (C),
the trace of A, hereafter denoted by tr A, is defined through tr A :=
N
ajj .
j=1
Prove the validity of the following assertions: • For each B ∈ MN (C), tr(AB) = tr(BA). • Two similar matrices have the same trace. • Let λ1 , . . . , λN be the eigenvalues of A, where each eigenvalue is repeated according to its multiplicity. Then, tr A =
N j=1
λj
and
det A =
N j=1
λj .
34 11.
Chapter 1. The Jordan Theorem Let A ∈ MN (R) be a diagonal matrix such that (A + I)2 = 0.
Determine its Jordan canonical form. 12.
Determine all matrices A ∈ M3 (C) satisfying the equation A3 − 2A2 + A = 0.
13.
Let A ∈ MN (C) be such that σ(A) = {α, β},
ν(α) = 2,
ν(β) = 1.
Construct a polynomial P such that P (A) = 0. 14. Let A ∈ MN (C) be a Hermitian matrix (see Exercise 3). The Rayleigh quotient of A is the mapping R : CN \ {0} → C defined by R(u) :=
Au, u , u, u
u ∈ CN \ {0},
where ·, · is the skew-linear product of CN . Let λ1 ≤ λ2 ≤ · · · ≤ λN be the eigenvalues of A, where each eigenvalue is repeated according to its algebraic multiplicity. Thanks to the result of Exercise 3, we already know that there exist N eigenvectors 1 ≤ j ≤ N, pj ∈ N [A − λj I], such that the basis {p1 , . . . , pN } is orthonormal in (CN , ·, ·), that is, pi , pj = δij
1 ≤ i, j ≤ N.
for all
For each 1 ≤ k ≤ N set Vk := span[p1 , . . . , pk ], and let Gk denote the set of vector subspaces of CN of dimension k. Finally, define V0 := {0} and G0 := {V0 }. Prove the following properties: • For each 1 ≤ k ≤ N , λk = R(pk ) = max R(u). u∈Vk \{0}
• For each 1 ≤ k ≤ N , λk =
min
⊥ \{0} u∈Vk−1
R(u),
where, given a linear subspace V , we define by V ⊥ the orthogonal subspace of V with respect to the skew-linear product ·, ·, that is to say, V ⊥ := {u ∈ CN : u, v = 0 for each v ∈ V }.
1.7. Comments on Chapter 1
35
• For each 1 ≤ k ≤ N , λk = min
max R(u) = max
V ∈Gk u∈V \{0}
• 15.
min
V ∈Gk−1 u∈V ⊥ \{0}
R(u).
R(u) : u ∈ CN \ {0} = [λ1 , λN ].
A Hermitian matrix A ∈ MN (C) is said to be positive if Au, u ≥ 0
for each
u ∈ CN .
Prove that A is positive if and only if λ ≥ 0 for all λ ∈ σ(A). 16.
A Hermitian matrix A ∈ MN (C) is said to be positive definite if Au, u > 0
for each
u ∈ CN \ {0}.
Prove that A is positive definite if and only if λ > 0 for all λ ∈ σ(A). 17. Let A ∈ MN (C) be invertible, and consider its characteristic polynomial PA . Prove that the characteristic polynomial PA−1 of A−1 satisfies PA−1 (z) =
1 (−z)n PA (z −1 ), det A
z ∈ C \ {0}.
1.7 Comments on Chapter 1 The material of this chapter is standard and can be found in most advanced textbooks on linear algebra. Classic books covering most of the material are Jacobson [63], Burton [17], Hirsch & Smale [59], Blyth & Robertson [13], Lang [76], and Lax [77]. In addition, Gantmacher [39] is an outstanding book in matrix theory covering both theory and applications; it counts among the most paradigmatic books in matrix theory of the 20th Century. There are a number of different approaches to the proof of the Jordan theorem, of course. In this chapter we have followed the one based on polynomial interpolation, which is based on the text-book L´ opez-G´omez [83], and was inspired by Guzm´ an [55]. An interesting approach to the Jordan theorem with emphasis on invariant subspaces can be found in Gohberg, Lancaster & Rodman [51]. Exercise 4 establishes a classic very celebrated result on perturbation from simple eigenvalues, and Exercise 5 establishes the continuous dependence of the spectrum. Both are pivotal results in the theory of perturbation of linear operators. The review paper by Gohberg & Kre˘ın [45] and the comprehensive book by Kato [65] are two of the most paradigmatic references on these topics; for a different perspective, see the book by Gohberg, Lancaster & Rodman [51]. Exercise 14 establishes the classic Courant–Fischer theorem. It plays a crucial role in the spectral theory of self-adjoint (or Hermitian) operators, and, in particular, in classic potential theory.
Chapter 2
Operator Calculus In Section 1.2 we defined P (A) when P is a polynomial and A ∈ MN (K) with K ∈ {R, C}. More generally, one of the main goals of operator calculus consists in giving an appropriate definition of f (A) when f : K → K is an arbitrary function, as well as in studying the most important analytical properties of f (A). This chapter covers these issues for the special, but important, case when f is a certain holomorphic function and A ∈ MN (C). This chapter begins by introducing the concept of norm of a linear operator, which will be very useful to get norm estimates for linear operators. Then, it will describe the construction of f (A) for the special case when f is holomorphic at zero and the square matrix A has sufficiently small norm. As in the case when f is a polynomial, f (A) can be defined as the matrix obtained by substituting the variable z by A in the Taylor series of f (z) at z = 0. As a paradigmatic example, for each A ∈ MN (C) the exponential of the matrix A can be defined through eA :=
∞ An . n! n=0
Then, we will obtain the Dunford integral formula, which is an extension of the Cauchy integral formula to the class of matrix-valued holomorphic functions. To obtain the Dunford integral formula we will have to study some fundamental properties of the resolvent operator R(z; A) := (zI − A)−1 ,
z ∈ C \ σ(A),
of a matrix A ∈ MN (C). Finally, we will determine the exponential of A through its Jordan canonical form. More precisely, Section 2.1 introduces the concept of norm equivalence, and proves that all norms in a finite-dimensional space are equivalent. It also presents the subordinated operator norm and establishes some of its main properties. Section 2.2 defines f (A) for holomorphic functions f at the origin, and matrices A
38
Chapter 2. Operator Calculus
with sufficiently small norm. This section provides the basic elements of operator calculus. Section 2.3 studies the properties of the resolvent operator and introduces the Dunford integral formula. Section 2.4 shows that, under the previous assumptions, σ (f (A)) = f (σ(A)) . Finally, Section 2.5 provides the algorithms to determine eA from the Jordan canonical form of A.
2.1 Norm of a linear operator Suppose K ∈ {R, C}. Then, the space MN (K) of the square matrices of order N 2 with coefficients in K is isomorphic to KN and, hence, it can be equipped with the 2 structure of Banach space from any norm of KN . For example, for each p ∈ [1, ∞) we can consider the norm · p defined by ⎞1/p ⎛ N N p |aij | ⎠ , A p := ⎝
A ∈ MN (K),
i=1 j=1
or the norm · ∞ defined by A ∞ := max {|aij |} , 1≤i,j≤N
A ∈ MN (K).
But none of these norms can be comfortably handled when the matrices are considered as endomorphisms of KN and, consequently, we ought to introduce a specific norm for linear operators. Since every norm defines a topology, we need to ascertain whether or not two norms define the same topology. Definition 2.1.1. Two norms of a vector space U are said to be equivalent if they induce in U the same topology. The next result provides a necessary and sufficient condition for two norms to be equivalent. Lemma 2.1.2. Let · and · be two norms of the vector space U . Then, they are equivalent if and only if there exist constants α, β > 0 such that α u ≤ u ≤ β u
for each
u ∈ U.
Proof. Let u0 ∈ U and r > 0. If there exist α, β > 0 satisfying (2.1), then {u ∈ U : u − u0 < r/β} ⊂ {u ∈ U : u − u0 < r} ⊂ {u ∈ U : u − u0 < r/α}, and, hence, the norms · and · induce the same topology in U .
(2.1)
2.1. Norm of a linear operator
39
Reciprocally, if the two induced topologies coincide, then the unit ball {u ∈ U : u < 1} is open in the normed vector space (U, · ), and, therefore, there exists m > 0 such that {u ∈ U : u < m} ⊂ {u ∈ U : u < 1}. By symmetry, there exists M > 0 such that {u ∈ U : u < M } ⊂ {u ∈ U : u < 1}.
From these inclusions, (2.1) can be easily obtained.
From the topological point of view, it is inessential to consider one norm or another if U is finite dimensional, since the next result establishes that, for each integer d ≥ 1, all norms in Kd are equivalent; in other words, Kd possesses a unique topology of normed vector space. The key property for this result to hold is the compactness of the unit ball in any finite-dimensional space. It is well known that every closed and bounded set in K is compact. Also, the space Kd is equipped with the product topology, which coincides with the topology induced by · ∞ . Moreover, the product of compact sets is compact. Consequently, every closed and bounded set of Kd equipped with the · ∞ -topology is compact. Theorem 2.1.3. For every d ≥ 1, all norms of Kd are equivalent. Proof. It suffices to prove that the norm · ∞ defined by u ∞ := max |uj |,
u = (u1 , . . . , ud ) ∈ Kd ,
1≤j≤d
is equivalent to every norm of Kd . So let · be a norm in Kd , and let e1 , . . . , ed denote the vectors of the canonical basis of Kd . Then, u =
d
u j ej ≤
j=1
d
uj ej =
j=1
d
|uj | · ej ≤ u ∞
j=1
d
ej .
j=1
In particular, the function · : (Kd , · ∞ ) → [0, ∞)
(2.2)
is continuous. The boundary ∂B1 := {u ∈ Kd : u ∞ = 1} of the unit ball, being closed and bounded, is compact in (Kd , · ∞ ). As the function (2.2) is continuous and ∂B1 is compact, there exists u∗ ∈ ∂B1 such that u ≥ u∗
for each u ∈ ∂B1 .
40
Chapter 2. Operator Calculus
Therefore, for each u ∈ Kd , u ≥ u∗ · u ∞ . By Lemma 2.1.2, the norms · and · ∞ are equivalent.
According to Theorem 2.1.3, there is a unique topological structure of finitedimensional Banach space. Now, fixing an integer N ≥ 1 and an arbitrary norm · KN in KN , we define, for each A ∈ MN (K), A L(KN ) := sup Au KN : u ∈ KN , u KN = 1 . (2.3) As every linear operator in KN is continuous, and the boundary of the unit ball is a compact subset of KN , the supremum of (2.3) is attained, and the magnitude A L(KN ) is finite. The number A L(KN ) is called the norm of the operator A subordinated to the norm · KN . The function MN (K) A
→ [0, ∞) → A L(KN )
is called the operator norm subordinated to · KN . The following result establishes its main properties. Lemma 2.1.4. Let · KN be a norm in KN . Then, the map · L(KN ) : MN (K) → [0, ∞) defined by (2.3) is a norm. Moreover, for each u ∈ KN and A ∈ MN (K), Au KN ≤ A L(KN ) u KN
(2.4)
and, hence, for each A, B ∈ MN (K), AB L(KN ) ≤ A L(KN ) B L(KN ) .
(2.5)
Therefore, for each A ∈ MN (K) and n ∈ N, the following estimate holds, An L(KN ) ≤ A nL(KN ) .
(2.6)
Proof. Pick A ∈ MN (K). By definition, A L(KN ) ≥ 0. Moreover, if A L(KN ) = 0, then Au KN = 0 for each u ∈ KN with u KN = 1 and, hence, A is zero over any basis, whence A = 0. For each λ ∈ K and u ∈ KN with u KN = 1, one has that λAu KN = |λ| · Au KN
2.2. Introduction to operator calculus
41
and, therefore, λA L(KN ) = |λ| · A L(KN ) . Finally, for each B ∈ MN (KN ) and u ∈ KN with u KN = 1, the following estimates hold, (A + B)u KN ≤ Au KN + Bu KN ≤ A L(KN ) + B L(KN ) , and, hence, A + B L(KN ) ≤ A L(KN ) + B L(KN ) . This shows that (2.3) indeed is a norm in MN (K). Now we check (2.4). Obviously, (2.4) is satisfied at u = 0. Suppose u = 0. Then, Au KN u = A N ≤ A L(KN ) , u KN u KN K whence (2.4). Finally, we will verify (2.5). By (2.4), for each u ∈ KN with u KN = 1, we have that ABu KN ≤ A L(KN ) Bu KN ≤ A L(KN ) B L(KN ) . This implies (2.5); inequality (2.6) is an immediate consequence of (2.5).
Lemma 2.1.4 and Theorem 2.1.3 prove that (2.3) equips MN (K) with its unique structure of normed space; actually, by Exercise 1, of Banach space. Property (2.6) is the main reason for having introduced the norm (2.3).
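When the underlying vector norm on C^N is the Euclidean norm, the subordinated operator norm (2.3) can be evaluated numerically: it coincides with the largest singular value of the matrix, and the estimates (2.4) and (2.5) are easy to test on random data. A brief NumPy sketch of ours:

    # The operator norm subordinated to the Euclidean vector norm equals the largest singular
    # value (np.linalg.norm(., 2)); check (2.4) on random unit vectors and (2.5) on a product.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    op_norm = lambda M: np.linalg.norm(M, 2)

    u = rng.standard_normal((4, 1000))
    u = u / np.linalg.norm(u, axis=0)                                    # columns are unit vectors
    print(np.max(np.linalg.norm(A @ u, axis=0)) <= op_norm(A) + 1e-12)   # (2.4) holds
    print(op_norm(A @ B) <= op_norm(A) * op_norm(B) + 1e-12)             # (2.5) holds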
2.2 Introduction to operator calculus The next result explains how to generate operators from holomorphic functions at the origin. From now on, we will work in the complex field, so we make the choice K = C. The set DR stands for the open disk in C of radius R and center 0. Also, for any open subset Ω ⊂ C, the set H(Ω) denotes the space of holomorphic functions from Ω to C. Throughout the remaining of this chapter, we consider fixed a norm · CN of CN , and its associated subordinated operator norm · L(CN ) . When no ambiguity can arise, we will simply write · to denote either norm. Theorem 2.2.1. Let R > 0. Consider a function f ∈ H(DR ) and a matrix A ∈ MN (C) satisfying A L(CN ) < R. Then the series
∞ f j) (0) j=0
j!
Aj
(2.7)
42
Chapter 2. Operator Calculus
converges absolutely in L(CN ) to a matrix that will be denoted by f (A). Moreover, for any invertible matrix P ∈ MN (C), one has that f (P −1 AP ) = P −1 f (A)P.
(2.8)
Proof. Since A < R and f ∈ H(DR ), by (2.6) and the Cauchy estimates, it is apparent that ∞ ∞ |f j) (0)| j f j) (0) j A ≤ R < ∞. j! j! j=0 j=0 From the convergence of these series, it is easy to see that the sequence of partial sums n f j) (0) j Sn := A ∈ MN (C), n ≥ 1, j! j=0 is a Cauchy sequence in MN (C). By completeness, the limit f (A) := lim Sn n→∞
is well defined and, in fact, the series (2.7) converges absolutely to f (A). Now, suppose B = P −1 AP. Then, a straightforward induction argument shows that B j = P −1 Aj P,
j ≥ 0.
Consequently, for each n ≥ 1, n f j) (0) j=0
j!
B j = P −1
n f j) (0) j=0
j!
Aj P
and, taking limits as n → ∞, we conclude (2.8).
Property (2.8) is crucial from the point of view of the applications, since it establishes that in order to determine f (A) it suffices to determine f (J), where J is the Jordan canonical form of A. In general, calculating f (J) tends to be substantially easier than calculating f (A). In the next definition, we recall the concept of vector-valued holomorphic function. Definition 2.2.2. Consider an open set Ω ⊂ C, an integer d ≥ 1 and a function F : Ω → Cd . It is said that F is holomorphic in Ω when lim
w→z
F (w) − F (z) w−z
2.2. Introduction to operator calculus
43
exists in Cd for each z ∈ Ω. In such a case, this limit is denoted by F (z), and called the derivative of F at z. The set of holomorphic functions from Ω to Cd will be hereafter denoted by H(Ω, Cd ). By expressing F through its components F := (F1 , . . . , Fd ), it is apparent F ∈ H(Ω, Cd ) if and only if Fj ∈ H(Ω) for each j ∈ {1, . . . , d}. Moreover, in such a case, F = (F1 , . . . , Fd ). From this feature, it is easy to see that all algebraic properties of C-valued holomorphic functions remain true for vector-valued holomorphic functions. For instance, the derivative of a sum equals the sum of the derivatives, and the product formula for the derivative of a product of a scalar-valued function by a vector-valued function holds. More importantly, defining the path integral of a continuous vectorvalued function as the vector formed by the integrals of each of the components of the function, makes the Cauchy integral formula remain valid for vector-valued functions. The proof of the following elementary lemma is proposed in Exercise 4. Lemma 2.2.3. Suppose α, β are two distinct complex numbers, and n ≥ 2 is an integer. Then αn − β n − nβ n−1 = (α − β) kβ k−1 αn−k−1 . α−β n−1
k=1
Lemma 2.2.3 will provide us with a proof of a special, but important, case of the chain rule. Although there are more general versions of the chain rule, the next one suffices for the purposes of this book. Theorem 2.2.4. Suppose R > 0, consider f ∈ H(DR ) and A ∈ MN (C), and set ρ := R/ A L(CN ) . Then, the matrix-valued function F : Dρ → MN (C) defined by F (z) := f (zA),
z ∈ Dρ ,
is holomorphic in Dρ and F (z) = Af (zA),
z ∈ Dρ .
Proof. Throughout this proof, we denote an :=
f n) (0) , n!
n ≥ 0,
44
Chapter 2. Operator Calculus
and fix z ∈ Dρ . Let ε > 0 be such that (1 + ε)|z| < ρ. By Theorem 2.2.1 and our choice of ρ, both f (zA) and f (zA) are well defined. By definition, for each h ∼ 0 with h = 0, we have that . / ∞ (z + h)n − z n F (z + h) − F (z) − Af (zA) = − nz n−1 An . an h h n=2 Moreover, due to Lemma 2.2.3, (z + h)n − z n − nz n−1 = h kz k−1 (z + h)n−k−1 h n−1
k=1
for each n ≥ 2 and, hence, ∞ n−1 F (z + h) − F (z) − Af (zA) = h an kz k−1 (z + h)n−k−1 An . h n=2
(2.9)
k=1
¯ ε|z| the following inequalities hold, On the other hand, for all h ∈ D |
n−1
kz k−1 (z + h)n−k−1 | ≤
k=1
n−1
k|z|k−1 (|z| + |h|)n−k−1
k=1
≤
n−1
k(|z| + |h|)n−2 ≤ n2 [(1 + ε)|z|]n−2
k=1
and, hence, for each n ≥ 2, an
n−1
kz k−1 (z + h)n−k−1 An L(CN ) ≤ |an |n2 [(1 + ε)|z|]n−2 A nL(CN ) .
(2.10)
k=1
Moreover, by the Cauchy estimates, we get 1/n (1 + ε)|z| lim sup |an |n2 [(1 + ε)|z|]n−2 A nL(CN ) A L(CN ) ≤ R n→∞ ρ < A L(CN ) = 1. R Therefore, by (2.10) and the root test for the convergence of series, it is apparent that the matrix series of the right-hand side of (2.9) is convergent and that, actu¯ ε|z| . Finally, taking limits as h → 0 in ally, its sum is uniformly bounded for h ∈ D (2.9) completes the proof.
2.3. Resolvent operator. Dunford’s integral formula
45
The following property provides us with the series of the product of the matrix functions generated through Theorem 2.2.1 by means of the Cauchy product of the corresponding matrix series. Lemma 2.2.5. Suppose R > 0, take f, g ∈ H(DR ) and let A, B ∈ MN (C) satisfy B L(CN ) < R.
A L(CN ) < R, Then f (A)g(B) =
∞ n f n−j) (0) g j) (0) n=0 j=0
(n − j)!
j!
An−j B j .
Proof. Clearly, for each z, w ∈ DR one has that f (z)g(w) =
∞ n f n−j) (0) g j) (0) n=0 j=0
(n − j)!
j!
wj .
Consequently, arguing as is the proof of Theorem 2.2.1, one can easily complete all technical details of the proof.
2.3 Resolvent operator. Dunford’s integral formula In this section we will obtain a generalized version of the Cauchy integral formula valid for matrix-valued functions; namely, the Dunford integral formula. We begin by introducing the concept of resolvent and by studying some of its main properties. Definition 2.3.1. Let A ∈ MN (C). The resolvent set of A, denoted by (A), is defined as the complement of the spectrum of A in C, (A) := C \ σ(A). The resolvent operator of A is the matrix function R(·; A) defined by R(z; A) := (zI − A)−1 ,
z ∈ (A).
It turns out that, for each z ∈ (A) and v ∈ CN , the vector u := R(z; A)v provides us with the unique solution of the linear system (zI − A)u = v, hence the name of resolvent operator. The following classic result provides a sufficient condition for an operator to be invertible. For each R > 0 and z0 ∈ C, the subset DR (z0 ) ⊂ C denotes the open disk of radius R centered at z0 . We have already defined DR as DR (0).
46
Chapter 2. Operator Calculus
Lemma 2.3.2. Let T ∈ L(CN ) be such that T L(CN ) < 1.
(2.11)
Then I − T is invertible and its inverse is given through the series ∞
(I − T )−1 =
T n.
(2.12)
n=0
Moreover, 1 . 1 − T L(CN )
(I − T )−1 L(CN ) ≤
(2.13)
Proof. The function f : D1 → C defined by f (z) :=
∞ 1 = zn, 1 − z n=0
z ∈ D1 ,
is holomorphic, and, hence, according to Theorem 2.2.1, the matrix f (T ) =
∞
Tj
j=0
is well defined. To prove (2.12) it suffices to show that f (T ) is the inverse of I − T . Indeed, for each n ≥ 1 one has that n
T j (I − T ) = (I − T )
j=0
n
T j = I − T n+1 .
(2.14)
j=0
On the other hand, by (2.6) and (2.11), lim T n+1 = 0.
n→∞
Consequently, passing to the limit as n → ∞ in (2.14) gives f (T )(I − T ) = (I − T )f (T ) = I. Finally, the estimate (2.13) is easily obtained from the next one,
n
T j L(CN ) ≤
j=0
n
T jL(CN ) ,
n ≥ 1,
j=0
by taking limits as n → ∞. The next result, although straightforward, is extremely useful.
2.3. Resolvent operator. Dunford’s integral formula
47
Lemma 2.3.3. For each A ∈ MN (C) and z, w ∈ (A), the following identity holds R(z; A) − R(w; A) = (w − z) R(z; A)R(w; A).
(2.15)
Proof. By definition, R(z; A) = R(z; A)(wI − A)R(w; A) = R(z; A)[(w − z)I + (zI − A)]R(w; A) = (w − z)R(z; A)R(w; A) + R(w; A),
whence (2.15) follows.
Identity (2.15) is usually referred to as the resolvent formula. The next result establishes that the resolvent operator is holomorphic. Theorem 2.3.4. For each A ∈ MN (C) the operator-valued function R(·; A) : (A) → L(CN ) is holomorphic, and its derivative at each z ∈ (A) equals −[R(z; A)]2 . Moreover, R(·; A) admits the following Laurent development at zero: R(z; A) =
∞
1 An , n+1 z n=0
¯ A z ∈ Ω := C \ D , L(CN )
(2.16)
uniformly in compact subsets of Ω. Proof. First, we show the continuity of the resolvent in (A). By (2.15), for each z, w ∈ (A), R(z; A) = [I + (w − z)R(z; A)]R(w; A). ¯ ε (z), Fix z ∈ (A). Then, there exists ε > 0 such that for each w ∈ D (w − z)R(z; A) L(CN ) < 1 and, hence, by Lemma 2.3.2, the operator I+(w−z)R(z; A) is invertible. Therefore, ¯ ε (z) the following identity holds for each w ∈ D R(w; A) = [I + (w − z)R(z; A)]−1 R(z; A). Applying again Lemma 2.3.2 yields [I + (w − z)R(z; A)]−1 =
∞
(z − w)n [R(z; A)]n ,
n=0
(2.17)
48
Chapter 2. Operator Calculus
whence
lim [I + (w − z)R(z; A)]−1 = I.
w→z
Consequently, taking limits in (2.17) gives lim R(w; A) = R(z; A),
w→z
which proves the continuity of the function R(·; A) in (A). Now, its holomorphy is an immediate consequence of (2.15). Indeed, for all z, w ∈ (A) with w = z, we have that R(w; A) − R(z; A) = −R(w; A)R(z; A) w−z and, hence, R(w; A) − R(z; A) lim = −[R(z; A)]2 . w→z w−z Finally, let K be a compact subset of the open set Ω. Then, there exists ε > 0 such that (2.18) |z| > A L(CN ) + ε for each z ∈ K. According to Lemma 2.3.2 applied to z −1 A, it is apparent that, for each z ∈ K, R(z; A) =
∞ A 1 1 (I − )−1 = An . n+1 z z z n=0
Moreover, this series is uniformly convergent in K, because, due to (2.18), we have that, for all n ≥ 1 and z ∈ K, R(z; A) −
n j=0
1
z
∞
1 Aj L(CN ) j+1 z j=n+1 j ∞ A L(CN ) 1 ≤ A L(CN ) + ε j=n+1 A L(CN ) + ε n+1 A L(CN ) 1 = . ε A L(CN ) + ε
Aj L(CN ) = j+1
This concludes the proof.
Note that Theorem 2.3.4 actually proves the convergence of the series (2.16) ¯ ρ as long as to the resolvent operator uniformly in the open unbounded set C \ D ρ > A L(CN ) . Now we introduce some preliminaries for the Dunford integral formula. That formula, besides being a substantial generalization of the Cauchy integral formula, plays a pivotal role in Functional Analysis; very especially, in the spectral theory
2.3. Resolvent operator. Dunford’s integral formula
49
of linear operators. Prior to this, we need to recall some basic concepts about closed path (or contour) integrals. First, we should make precise the class of contours that will be used in the integrals. By definition, a Cauchy contour is the oriented boundary of a Cauchy domain in C. By a Cauchy domain is meant any disjoint union in C of finitely many non-empty open, bounded and connected sets ∆1 , . . . , ∆r such that ∆i ∩ ∆j = ∅,
1 ≤ i < j ≤ r,
and, for each 1 ≤ j ≤ r, the boundary of ∆j consists of finitely many nonintersecting closed rectifiable Jordan curves, each of them C is oriented clockwise or counterclockwise according to whether the points of ∆j near a point of C are outside or inside of C. In such a case, the inner domain of the Cauchy contour γ is defined through r # ∆i . Dγ := i=1
For every non-empty compact subset σ of an open set Ω ⊂ C, there exists a Cauchy contour γ in Ω for which σ ⊂ Dγ . Indeed, construct in the complex plane a grid of congruent hexagons of diameter less than one third of the distance between σ and C \ Ω, and let ∆ be the interior of the union of all closed hexagons of the grid that have non-empty intersection with σ. Clearly, the boundary of ∆ provides us with a Cauchy contour in Ω whose inner domain contains σ. Let d ≥ 1 be an integer number, γ a Cauchy contour, and f : γ → Cd a continuous function component-wise expressed by f = (f1 , . . . , fd ). Then, the integral of f along γ is defined through f (z) dz := f1 (z) dz, . . . , fd (z) dz . γ
γ
γ
Finally, we make precise the version of the Cauchy integral formula that will be subsequently used. Theorem 2.3.5 (Cauchy’s integral formula). Suppose Ω ⊂ C is open, f ∈ H(Ω), ¯ γ ⊂ Ω. Then, and γ is a Cauchy contour whose inner domain Dγ satisfies D f (z) 1 dz for each z0 ∈ Dγ . (2.19) f (z0 ) = 2πi γ z − z0 The following result provides us with a substantial generalization of the Cauchy integral formula (2.19). It establishes that the operator f (A) defined by Theorem 2.2.1 can be represented through a vector-valued contour integral within the spirit of (2.19). The formula, being attributable to Dunford, is usually referred to as the Dunford integral formula. Throughout the remaining of this book, CR stands for the positively oriented boundary of DR , for each R > 0.
50
Chapter 2. Operator Calculus
Theorem 2.3.6. Suppose R > 0, the matrix A ∈ MN (C) satisfies A L(CN ) < R, and f ∈ H(DR ). Let γ be a Cauchy contour in DR whose inner domain contains the spectrum of A. Then 1 f (A) = f (z)R(z; A) dz. (2.20) 2πi γ Proof. For each λ ∈ σ(A) there exists u ∈ CN such that u CN = 1
and Au = λu.
Thus, |λ| = Au CN ≤ A L(CN ) < R and, therefore, ¯ A ⊂ DR . σ(A) ⊂ D L(CN )
(2.21)
Now we consider r satisfying A L(CN ) < r < R. By (2.21), the circle Cr is a Cauchy contour satisfying the hypothesis of the theorem. Moreover, according to Theorem 2.3.4, R(z; A) =
∞ n=0
1 z n+1
An
uniformly in z ∈ Cr .
Therefore, 1 2πi
∞ ∞ 1 f (z) f n) (0) n n A = f (A). f (z)R(z; A) dz = dz A = 2πi Cr z n+1 n! Cr n=0 n=0
This proves (2.20) for the special case when γ = Cr . To complete the proof in the general case, it suffices to note that, according to Theorem 2.3.5, by the definition of holomorphic vector-valued function, f (z)R(z; A) dz = f (z)R(z; A) dz. γ
This concludes the proof.
Cr
Actually, Theorem 2.3.6 shows the consistence of the following concept, which provides us with a substantial and sharp generalization of the corresponding one introduced by Theorem 2.2.1.
2.4. The spectral mapping theorem
51
Definition 2.3.7. Let A ∈ MN (C), and denote by F (A) the family of complex functions of a complex variable that are holomorphic in an open neighborhood (not necessarily connected) of σ(A). Let f ∈ F(A), and Ω ⊂ C be an open set such that σ(A) ⊂ Ω and f ∈ H(Ω). Then, f (A) ∈ MN (C) is the linear operator defined through 1 f (z)R(z; A) dz, f (A) := 2πi ∂∆
(2.22)
where ∆ is a Cauchy domain in Ω such that ¯ ⊂ Ω. σ(A) ⊂ ∆ ⊂ ∆ Remark 2.3.8. By applying Theorem 2.3.5 to every coefficient of the matrix f (z)R(z; A),
z ∈ ∂∆,
it is apparent that the Dunford integral of (2.22) does not depend on the choice of Ω, but exclusively on the function f and on the operator A. Moreover, under the assumptions of Theorem 2.3.6, it turns out that (2.22) equals the operator f (A) as defined through its Taylor series centered at zero, by Theorems 2.2.1 and 2.3.6. It should be remembered that the existence of Cauchy domains ∆ fulfilling the requirements of Definition 2.3.7 has already been established.
2.4 The spectral mapping theorem We begin by establishing a very natural and useful product formula. The notation introduced in Section 2.3 and, in particular, in Definition 2.3.7, will be maintained through this section. Proposition 2.4.1. If A ∈ MN (C) and f, g ∈ F(A), then f g ∈ F(A) and (f g)(A) = f (A)g(A). Proof. Since f g is defined in the intersection of the domains of f and g, and f g is holomorphic therein, necessarily f g ∈ F(A). Let ∆1 and ∆2 be two Cauchy domains such that ¯ 1 ⊂ ∆2 σ(A) ⊂ ∆1 ⊂ ∆ ¯ 2 . Then, by Remark for which f and g are holomorphic in a neighborhood of ∆ 2.3.8, we have that 2 1 f (A)g(A) = f (z)R(z; A) dz g(w)R(w; A) dw 2πi ∂∆1 ∂∆2 2 1 = f (z)g(w)R(z; A)R(w; A) dw dz 2πi ∂∆1 ∂∆2
52
Chapter 2. Operator Calculus
and, hence, using Lemma 2.3.3 gives 2 1 R(z; A) − R(w; A) dw dz f (z)g(w) f (A)g(A) = 2πi w−z ∂∆1 ∂∆2 / . 1 1 g(w) = dw dz f (z)R(z; A) 2πi ∂∆1 2πi ∂∆2 w − z / . 1 1 f (z) − dz dw. g(w)R(w; A) 2πi ∂∆2 2πi ∂∆1 w − z ¯ 1 ⊂ ∆2 , according to Theorem 2.3.5, for each z ∈ ∂∆1 On the other hand, since ∆ and w ∈ ∂∆2 , we have that 1 g(w) 1 f (z) dw = g(z) and dz = 0. 2πi ∂∆2 w − z 2πi ∂∆1 w − z Therefore, inserting these identities in the previous one gives 1 f (A)g(A) = f (z)g(z)R(z; A) dz = (f g)(A), 2πi ∂∆1
as claimed in the statement.
The main result of this section is the following. It describes how the spectrum of a matrix is transformed by holomorphic functions. Theorem 2.4.2 (Spectral mapping theorem). For every A ∈ MN (C) and f ∈ F (A), the following identity of sets is satisfied, σ(f (A)) = f (σ(A)). Proof. Pick λ ∈ σ(A), and let D ⊂ C be an open set containing σ(A) such that f is holomorphic in D. Now, let g denote the auxiliary function defined by if z ∈ D \ {λ}, (f (z) − f (λ))(z − λ)−1 g(z) := f (λ) if z = λ. By the celebrated theorem of Riemann on removable singularities, we obtain that g ∈ H(D). Moreover, by construction, g(z)(z − λ) = f (z) − f (λ),
z ∈ D,
and, hence, owing to Proposition 2.4.1, g(A)(A − λI) = f (A) − f (λ)I. Consequently, since λ ∈ σ(A), det[f (A) − f (λ)I] = det g(A) · det(A − λI) = 0,
2.5. The exponential matrix
53
whence f (λ) ∈ σ(f (A)) and, therefore, f (σ(A)) ⊂ σ(f (A)). Now, suppose λ ∈ / f (σ(A)). Then, there exists an open subset Ω ⊂ C such that σ(A) ⊂ Ω and f (z) = λ for each z ∈ Ω. Consequently, the auxiliary function h defined by h(z) :=
1 , f (z) − λ
z ∈ Ω,
is holomorphic in Ω. According to Proposition 2.4.1, we have that h(A)(f (A) − λI) = I and, in particular, 1 = det h(A) · det[f (A) − λI]. Therefore λ ∈ / σ(f (A)). This ends the proof.
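The spectral mapping theorem is easy to observe numerically: the set of eigenvalues of f (A) equals the set of values of f at the eigenvalues of A. A brief NumPy/SciPy sketch of ours, with f = exp and an arbitrary sample matrix:

    # sigma(exp(A)) = exp(sigma(A)): compare the eigenvalues of expm(A) with exp of those of A.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, -1.0]])
    print(np.sort(np.exp(np.linalg.eigvals(A))))   # f(sigma(A))
    print(np.sort(np.linalg.eigvals(expm(A))))     # sigma(f(A)): the same values, up to rounding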
2.5 The exponential matrix Owing to Theorem 2.2.1, for each A ∈ MN (C) we can define the exponential eA of the matrix A through the identity ∞ An . e := n! n=0 A
(2.23)
The following result collects its main properties. Proposition 2.5.1. Let A, B ∈ MN (C). Then, (a) eA L(CN ) ≤ eAL(CN ) . (b) eA+B = eA eB = eB eA if AB = BA. (c) The matrix eA is invertible, e0 = I, and (eA )−1 = e−A . (d) eB = P −1 eA P if B = P −1 AP for some invertible P ∈ MN (C). Proof. For each n ≥ 1 the following is satisfied,
n Aj j=0
j!
L(CN ) ≤
n A j L(CN ) j=0
Taking limits as n → ∞ shows Part (a).
j!
≤ eAL(CN ) .
54
Chapter 2. Operator Calculus
Suppose AB = BA. Then, an easy induction argument proves that (A + B)k =
k k j=0
j
Aj B k−j
for every integer k ≥ 0. Consequently, e
A+B
=
∞ (A + B)k k=0
k!
k ∞ Aj B k−j = . j! (k − j)! j=0 k=0
As the second series is the Cauchy product of the series defining eA and eB , by Lemma 2.2.5, Part (b) follows. The identity e0 = I follows from the definition of the exponential. Moreover, as A commutes with −A, Part (b) yields I = e0 = eA−A = eA e−A = e−A eA and, therefore, e−A is the inverse matrix of eA . This proves Part (c). Finally, Property (d) is a consequence of identity (2.8). In practice, calculating the matrix eA from (2.23) may be extraordinarily involved, unless A is nilpotent or an appropriate basis of CN has been chosen to get an easy algorithm to identify An in terms of A, for each n ≥ 2. Nevertheless, according to Proposition 2.5.1(d), the problem of calculating eA can be reduced to the problem of determining eJ , where J is the Jordan form of A. Consequently, the problem simplifies to the calculation of the exponential of the Jordan blocks, which can be accomplished very easily. Indeed, for A ∈ MN (C), let P be the matrix of the change of basis to its Jordan canonical form J. Then, A = P JP −1 and, hence, by Proposition 2.5.1(d), ezA = P ezJ P −1
for each
z ∈ C.
Remember that the columns of P consist of the generalized eigenvectors constructed in Section 1.3. Moreover, the canonical form J has the form J = diag{J1 , J2 , . . . , J−1 , J } for a certain family of Jordan blocks J1 , . . . , J , each of them having the form Jλ,h (see (1.26)), for some λ ∈ σ(A) and some integer 1 ≤ h ≤ ν(λ). In other
words, for each 1 ≤ k ≤ ℓ, the block J_k is a square matrix of order h whose main diagonal consists of λ's, whose subdiagonal consists of 1's, whereas all remaining coefficients vanish. This has already been discussed in Section 1.3. Multiplying by blocks shows that, for each integer n ≥ 0,

J^n = diag{J_1^n, J_2^n, . . . , J_{ℓ−1}^n, J_ℓ^n},

and, consequently, for each z ∈ C,

e^{zJ} = diag{e^{zJ_1}, e^{zJ_2}, . . . , e^{zJ_{ℓ−1}}, e^{zJ_ℓ}}.

Fix k ∈ {1, . . . , ℓ} and suppose J_k = J_{λ,h} (see (1.26)), for some λ ∈ σ(A) and 1 ≤ h ≤ ν(λ). Then J_k = J_{0,h} + λ I_h, where I_h stands for the identity matrix of order h. As the identity commutes with all matrices, by Proposition 2.5.1(b), it follows that

e^{zJ_k} = e^{λz} e^{zJ_{0,h}},   z ∈ C.

From now on, given two natural numbers n, k ≥ 1 and B_1, . . . , B_n ∈ M_k(C), we will denote

trng{B_1, B_2, . . . , B_n} := \begin{pmatrix} B_1 & 0 & \cdots & 0 \\ B_2 & B_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ B_n & B_{n-1} & \cdots & B_1 \end{pmatrix} ∈ M_{nk}(C).   (2.24)

Using this notation (with n = h and k = 1), we find from J_{0,h}^h = 0 that

e^{zJ_{0,h}} = Σ_{n=0}^{h−1} (z^n / n!) J_{0,h}^n = trng{1, z, z²/2!, . . . , z^{h−2}/(h−2)!, z^{h−1}/(h−1)!}

and, therefore,

e^{zJ_k} = trng{e^{λz}, z e^{λz}, (z²/2!) e^{λz}, . . . , (z^{h−2}/(h−2)!) e^{λz}, (z^{h−1}/(h−1)!) e^{λz}}.

Consequently, the exponential e^{zA} can be determined through a finite algorithm from the Jordan canonical form of A.

When A has real coefficients, then so does e^{tA} for each t ∈ R, and, in a number of circumstances, it is imperative to determine e^{tA} from the real Jordan canonical form J_R of A. As we discussed earlier, that problem can be reduced to
calculate e^{tJ_k} for each 1 ≤ k ≤ ℓ. When J_k is a Jordan block associated with a real eigenvalue of A we can proceed as described above. If α, β ∈ R and

J_k = J^R_{α+iβ,2} := \begin{pmatrix} α & β \\ -β & α \end{pmatrix},

then the calculation of e^{tJ_k} should be carried out as follows. First, decompose

\begin{pmatrix} α & β \\ -β & α \end{pmatrix} = α I_2 + \begin{pmatrix} 0 & β \\ -β & 0 \end{pmatrix}.

Then, according to Proposition 2.5.1(b) and using the definition of the exponential, one obtains

e^{tJ_k} = e^{αt} \begin{pmatrix} \cos(βt) & \sin(βt) \\ -\sin(βt) & \cos(βt) \end{pmatrix},

because αI_2 commutes with all matrices. In general, when

J_k = J^R_{α+iβ,2h} = trng{J^R_{α+iβ,2}, I_2, 0, . . . , 0}   for some h ≥ 2,

one first decomposes J^R_{α+iβ,2h} into the sum of two commuting matrices

J^R_{α+iβ,2h} = trng{0, I_2, 0, . . . , 0} + trng{J^R_{α+iβ,2}, 0, 0, . . . , 0},

in order to determine e^{tJ^R_{α+iβ,2h}} by means of Proposition 2.5.1(b). This shows that

e^{tJ^R_{α+iβ,2h}} = trng{ e^{tJ^R_{α+iβ,2}}, t e^{tJ^R_{α+iβ,2}}, . . . , (t^{h−1}/(h−1)!) e^{tJ^R_{α+iβ,2}} },

where

e^{tJ^R_{α+iβ,2}} = e^{αt} \begin{pmatrix} \cos(βt) & \sin(βt) \\ -\sin(βt) & \cos(βt) \end{pmatrix}.
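The closed formula for the real 2×2 blocks is easy to verify numerically. The following short Python sketch (arbitrary illustrative values of α, β and t) simply compares it with a general-purpose matrix exponential.

```python
# Sketch: exp(t[[a, b], [-b, a]]) equals e^{a t} [[cos(bt), sin(bt)], [-sin(bt), cos(bt)]].
import numpy as np
from scipy.linalg import expm

a, b, t = 0.5, 2.0, 1.3                      # arbitrary illustrative values
Jk = np.array([[a, b], [-b, a]])
closed_form = np.exp(a * t) * np.array([[np.cos(b * t), np.sin(b * t)],
                                        [-np.sin(b * t), np.cos(b * t)]])
print(np.allclose(expm(t * Jk), closed_form))   # expect True
```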
2.6 An example In this section we shall determine ezA , for each z ∈ C, where A is the matrix introduced in (1.33). According to (1.37) and Proposition 2.5.1(d) we have that, for every z ∈ C, ezA = PC ezJC PC−1 ,
where J_C and P_C are the matrices defined in (1.37) and (1.38), respectively. Moreover, owing to the results of Section 2.5,

e^{zJ_C} = diag{ exp(z \begin{pmatrix} 2−i & 0 \\ 1 & 2−i \end{pmatrix}), exp(z \begin{pmatrix} 2+i & 0 \\ 1 & 2+i \end{pmatrix}) }
= \begin{pmatrix} e^{(2−i)z} & 0 & 0 & 0 \\ z e^{(2−i)z} & e^{(2−i)z} & 0 & 0 \\ 0 & 0 & e^{(2+i)z} & 0 \\ 0 & 0 & z e^{(2+i)z} & e^{(2+i)z} \end{pmatrix}

and, therefore,

e^{zA} = P_C \begin{pmatrix} e^{(2−i)z} & 0 & 0 & 0 \\ z e^{(2−i)z} & e^{(2−i)z} & 0 & 0 \\ 0 & 0 & e^{(2+i)z} & 0 \\ 0 & 0 & z e^{(2+i)z} & e^{(2+i)z} \end{pmatrix} P_C^{-1}
= e^{2z} \begin{pmatrix} \cos z − z\sin z & z\cos z + \sin z & −\sin z & −z(\cos z − \sin z) \\ −(z+1)\sin z & (z+1)\cos z & −\sin z & −z\cos z + (z+2)\sin z \\ z(\cos z − \sin z) & z(\cos z + \sin z) & \cos z − \sin z & −2(z\cos z − \sin z) \\ −z\sin z & z\cos z & −\sin z & (1−z)\cos z + (z+1)\sin z \end{pmatrix}

for every z ∈ C. Alternatively, it follows from (1.39) that, for every z ∈ C,

e^{zA} = P_R e^{zJ_R} P_R^{-1}
= P_R \begin{pmatrix} e^{2z}\cos z & −e^{2z}\sin z & 0 & 0 \\ e^{2z}\sin z & e^{2z}\cos z & 0 & 0 \\ z e^{2z}\cos z & −z e^{2z}\sin z & e^{2z}\cos z & −e^{2z}\sin z \\ z e^{2z}\sin z & z e^{2z}\cos z & e^{2z}\sin z & e^{2z}\cos z \end{pmatrix} P_R^{-1}
= e^{2z} \begin{pmatrix} \cos z − z\sin z & z\cos z + \sin z & −\sin z & −z(\cos z − \sin z) \\ −(z+1)\sin z & (z+1)\cos z & −\sin z & −z\cos z + (z+2)\sin z \\ z(\cos z − \sin z) & z(\cos z + \sin z) & \cos z − \sin z & −2(z\cos z − \sin z) \\ −z\sin z & z\cos z & −\sin z & (1−z)\cos z + (z+1)\sin z \end{pmatrix},

where P_R is the matrix defined in (1.40).
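Computations of this kind are easy to verify with a computer. The following Python sketch reproduces, for a single Jordan block with 1's on the subdiagonal, the closed formula e^{zJ_{λ,h}} = trng{e^{λz}, z e^{λz}, . . . , (z^{h−1}/(h−1)!) e^{λz}} from Section 2.5 and compares it with scipy's matrix exponential; the particular values of λ, h and z below are arbitrary.

```python
# Sketch: closed formula for exp(z * Jordan block) versus scipy.linalg.expm.
import numpy as np
from math import factorial
from scipy.linalg import expm

def jordan_block(lam, h):
    J = lam * np.eye(h, dtype=complex)
    J += np.diag(np.ones(h - 1), -1)        # 1's on the subdiagonal, as in (1.26)
    return J

def exp_jordan_block(lam, h, z):
    # trng{b_1, ..., b_h}: the n-th subdiagonal carries z^n/n! * e^{lam z}
    E = np.zeros((h, h), dtype=complex)
    for n in range(h):
        E += (z**n / factorial(n)) * np.exp(lam * z) * np.diag(np.ones(h - n), -n)
    return E

lam, h, z = 2.0 - 1.0j, 4, 0.7               # arbitrary illustrative data
J = jordan_block(lam, h)
print(np.allclose(exp_jordan_block(lam, h, z), expm(z * J)))   # expect True
```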
2.7 Exercises 1. Prove that every norm of Kd equips it with the structure of Banach space (complete normed space). 2.
Prove that the norm (2.3) satisfies

‖A‖_{L(K^N)} = max{‖Au‖_{K^N} : ‖u‖_{K^N} ≤ 1} = max{‖Au‖_{K^N}/‖u‖_{K^N} : u ∈ K^N \ {0}} = min{L ≥ 0 : ‖Au‖_{K^N} ≤ L‖u‖_{K^N} for all u ∈ K^N}

for every A ∈ M_N(C).
3. Let ‖·‖ and ‖·‖′ be two norms in a given vector space U. Consider their subordinated norms ‖·‖_{L(U)} and ‖·‖′_{L(U)} in L(U), respectively. Show that ‖·‖ and ‖·‖′ are equivalent if and only if so are ‖·‖_{L(U)} and ‖·‖′_{L(U)}.
4.
Use induction on n to prove Lemma 2.2.3.
5. Use Lemma 2.3.2 to prove that the set of invertible matrices in MN (K) is open, where K ∈ {R, C}. 6. Calculate etA , for every t ∈ R, where A is each one of the matrices of Exercise 1 of Chapter 1. 7.
Calculate sin A and cos A, where

A = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.

8. Let

A = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.

Calculate cos A, sin A and e^A from the corresponding Taylor series. This is one of the few circumstances where the Jordan form does not simplify the necessary calculations.
9. Let A ∈ M_N(C). Prove that for each ε > 0 there exist an invertible matrix P, a diagonal matrix D and a nilpotent matrix R such that

P^{-1}AP = D + R and ‖R‖_{L(C^N)} ≤ ε.

10.
Let A ∈ M_N(C) and α, β ∈ R be such that

α < Re λ < β   for each λ ∈ σ(A).

Prove that there exists a basis {e_1, . . . , e_N} of C^N such that

α‖u‖² ≤ Re⟨Au, u⟩ ≤ β‖u‖²   for each u ∈ C^N,
where, for each u_j, v_j ∈ C and 1 ≤ j ≤ N, we have denoted

⟨ Σ_{j=1}^N u_j e_j, Σ_{j=1}^N v_j e_j ⟩ := Σ_{j=1}^N u_j \bar{v}_j   and   ‖ Σ_{j=1}^N u_j e_j ‖ := ( Σ_{j=1}^N |u_j|² )^{1/2}.
11. For each A ∈ MN (C), the spectral radius of A, usually denoted by spr A, is the real number defined through spr A := max {|λ| : λ ∈ σ(A)} . Prove the following properties: • spr A ≤ A L(CN ) for every norm · of CN . • For each ε > 0 there exists a norm · of CN such that
A L(CN ) ≤ spr A + ε. • Let S denote the set of norms · L(CN ) of L(CN ) that are subordinated to some norm of CN . Then, spr A = inf A L(CN ) : · L(CN ) ∈ S . 12.
Prove that, for all A ∈ MN (C) and z ∈ C, det ezA = ez tr A .
The concept of trace was given in Exercise 10 of Chapter 1. 13.
Pick R > 0 and f ∈ H(DR ); suppose A ∈ MN (C) satisfies spr A < R.
Use the Jordan canonical form to prove that σ(f (A)) = f (σ(A)) and that, actually, f (λ) is an eigenvalue of f (A) of multiplicity m if λ is an eigenvalue of A of multiplicity m. This result provides us with a sharp version of Theorem 2.4.2, though the proposed proof cannot be adapted to the infinite-dimensional setting. 14.
Let A ∈ MN (C) be an invertible matrix. Prove that A = eL
for some L ∈ MN (C). 15.
Suppose that A ∈ MN (C) and f ∈ F(A) satisfy f −1 (0) ∩ σ(A) = ∅.
Prove that f (A) is invertible.
16. Let A ∈ M_N(C) and let Ω ⊂ C be an open set such that σ(A) ⊂ Ω. Prove that there exists ε > 0 such that if B ∈ M_N(C) and ‖A − B‖ ≤ ε, then σ(B) ⊂ Ω. 17.
Let A ∈ M_N(C) and x ∈ C^N. Prove the following:
• The function f : C → M_N(C) defined by f(z) = e^{zA} for z ∈ C is holomorphic, and f′(z) = Af(z) for each z ∈ C.
• For each x ∈ C^N, the function ϕ_x : C → C^N defined by ϕ_x(z) = e^{zA}x for z ∈ C provides us with the unique solution in H(C, C^N) of the initial-value problem

u′(z) = Au(z), z ∈ C,   u(0) = x.
18. For each 1 ≤ p ≤ ∞ and A ∈ M_N(C), let ‖A‖_p denote the norm of A subordinated to the norm ‖·‖_p of C^N. Set A = (a_{ij})_{i,j=1}^N ∈ M_N(C). The following properties are requested to be proved:
• ‖A‖_1 = max_{1≤j≤N} Σ_{i=1}^N |a_{ij}|.
• ‖A‖_2 = [spr(AA*)]^{1/2}, where A* is the adjoint matrix of A, and spr denotes the spectral radius (see Exercise 3 of Chapter 1 and Exercise 11 above for the definitions, if necessary).
• ‖A‖_2 = spr(A) if A ∈ M_N(C) is Hermitian.
• ‖A‖_∞ = max_{1≤i≤N} Σ_{j=1}^N |a_{ij}|.
• ‖A‖_2 ≤ ( Σ_{i=1}^N Σ_{j=1}^N |a_{ij}|² )^{1/2} = √(tr(AA*)) ≤ N ‖A‖_2.
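A quick numerical check of the first, second and fourth formulas of Exercise 18 can be carried out as in the following sketch; the matrix is an arbitrary example, and numpy's norm routines are used only as an independent reference.

```python
# Sketch: subordinated 1-, 2- and infinity-norms of a matrix.
import numpy as np

A = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.0, 1.0],
              [-1.0, 4.0, 2.0]])

col_sums = np.abs(A).sum(axis=0).max()                       # max column sum
row_sums = np.abs(A).sum(axis=1).max()                       # max row sum
two_norm = np.sqrt(max(abs(np.linalg.eigvals(A @ A.conj().T))))  # spr(AA*)^{1/2}

print(np.isclose(np.linalg.norm(A, 1), col_sums))       # ||A||_1
print(np.isclose(np.linalg.norm(A, np.inf), row_sums))  # ||A||_inf
print(np.isclose(np.linalg.norm(A, 2), two_norm))       # ||A||_2
```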
19. Let A ∈ M_N(C). Prove the equivalence of the following four conditions:
• lim_{n→∞} A^n = 0.
• lim_{n→∞} A^n u = 0 for each u ∈ C^N.
• spr A < 1 (see Exercise 11 above, if necessary).
• There exists a norm ‖·‖ in C^N for which ‖A‖_{L(C^N)} < 1, where ‖·‖_{L(C^N)} is the norm subordinated to ‖·‖.
Conclude that, for every norm ‖·‖ of C^N,

spr A = lim_{n→∞} ‖A^n‖_{L(C^N)}^{1/n}.
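The limit formula of Exercise 19 can be observed numerically; in the sketch below (arbitrary matrix, spectral norm) the quantities ‖A^n‖^{1/n} approach spr A as n grows.

```python
# Sketch: Gelfand-type formula spr A = lim_n ||A^n||^{1/n}.
import numpy as np

A = np.array([[0.5, 2.0], [0.0, 0.6]])      # arbitrary example; spr A = 0.6
spr = max(abs(np.linalg.eigvals(A)))
for n in (1, 5, 20, 80):
    approx = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
    print(n, approx)
print("spr A =", spr)                        # the printed values approach this number
```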
20. Let b ∈ C^N and A ∈ M_N(C) be such that I − A is invertible, and, for every u_0 ∈ C^N, let {u_n}_{n≥0} denote the sequence of iterates defined by u_{n+1} := Au_n + b,
n ≥ 0.
Prove the equivalence of the following conditions:
• The sequence {u_n}_{n≥0} converges for all u_0 ∈ C^N.
• For every u_0 ∈ C^N,

lim_{n→∞} u_n = (I − A)^{-1} b.
• spr A < 1. 21.
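The iterative scheme of Exercise 20 is straightforward to simulate; the following sketch uses an arbitrary matrix with spr A < 1 and an arbitrary starting vector, and compares the limit of the iterates with (I − A)^{-1}b.

```python
# Sketch: u_{n+1} = A u_n + b converges to (I - A)^{-1} b when spr A < 1.
import numpy as np

A = np.array([[0.2, 0.5], [-0.1, 0.3]])      # arbitrary; spr A < 1 here
b = np.array([1.0, -2.0])
assert max(abs(np.linalg.eigvals(A))) < 1

u = np.array([10.0, 10.0])                   # arbitrary starting vector u_0
for _ in range(200):
    u = A @ u + b
print(u)
print(np.linalg.solve(np.eye(2) - A, b))     # (I - A)^{-1} b, should agree
```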
Let A, B ∈ MN (C) be two Hermitian matrices, and let α1 ≤ · · · ≤ αN
and
β1 ≤ · · · ≤ βN
be their respective systems of eigenvalues, counting according to their multiplicity. Prove that

max_{1≤n≤N} |α_n − β_n| ≤ ‖A − B‖_2,
where · 2 denotes the operator norm subordinated to the norm · 2 in CN . 22.
Let A ∈ MN (C), and consider ω := max{Re λ : λ ∈ σ(A)}.
This exercise suggests two different proofs of the fact that, for every ε > 0, there exists M (ε) > 0 such that
‖e^{tA}‖_{L(C^N)} ≤ M(ε) e^{(ω+ε)t}   for all t > 0.
• Choose an appropriate Cauchy contour whose inner domain contains σ(A), and use the Dunford integral formula to prove the result.
• Use, instead, the Jordan canonical form.
23. Let A ∈ M_N(C) and λ ∈ σ(A). Use Exercise 5 of Chapter 1 to prove the existence of δ > 0 and R > 0 with the property that for every B ∈ M_N(C) satisfying ‖A − B‖ ≤ δ, the matrix B has exactly m_a(λ) eigenvalues in D_R(λ), counting according to their algebraic multiplicities.
24. Show that the assertion "The set of matrices whose spectrum consists of simple eigenvalues is dense in M_N(C)" is nothing but a re-statement of the main result established by Exercise 6 of Chapter 1.
2.8 Comments on Chapter 2 This chapter contains standard material, though not really elementary, which can be found in a number of classic textbooks such as Taylor [122], Gantmacher [39], Dunford & Schwartz [27], Yosida [130], or Gohberg, Goldberg & Kaashoek [43]. Usually, operator calculus is introduced from the very beginning in the infinite-dimensional setting, which makes it difficult to teach it at an undergraduate level. In such a setting, f is a holomorphic function, A is a linear endomorphism of a Banach space, and the main goal is to analyze the properties of f (A). Nevertheless, these general approaches do not differ substantially from the one adopted in this chapter, which goes back to the textbooks of L´ opez-G´omez [83, 84] and can be comfortably taught to undergraduate students. Other finite-dimensional approaches to operator calculus can be found in the books of Gohberg, Lancaster & Rodman [51], and Gantmacher [39]. In the latter, a very detailed and complete description is given. Certainly, operator calculus is a very active field of research. So far, one of the major advances has been carried out for holomorphic functions f . Also well known is the operator calculus for continuous functions f : R → R and for selfadjoint operators A (see, for example, Gohberg, Goldberg & Kaashoek [43]). The analysis of the next chapters will give some answers to these and other related questions in the general case when f ∈ C ∞ (R, R), not necessarily real analytic, and A is a Fredholm operator of index zero, not necessarily selfadjoint. Lemma 2.3.2 is sometimes known as the Banach lemma. The series of (2.12) is called the Neumann series. Exercise 17 is the starting point for the theory of linear differential equations. Classic books on this vast subject are Coddington & Levinson [21], Hartman [57], Hale [56], Arrowsmith & Place [7], and Amann [2], though this list is not exhaustive. Applications of the operator calculus to the theory of linear differential equations, from a viewpoint of matrix theory, can be found in Gantmacher [39]. Exercise 20 presents a paradigmatic iterative scheme for solving linear equations (see Ciarlet [20]). A number of the proposed exercises were borrowed from Guzm´an [55] and L´opez-G´omez [83, 84].
Chapter 3
Spectral Projections

This chapter considers A ∈ M_N(C) and uses the Dunford integral formula to construct the spectral projections of C^N onto N[(A − λI)^{ν(λ)}] for every λ ∈ σ(A). It also shows that, for each λ ∈ σ(A), the algebraic ascent ν(λ) equals the order of λ as a pole of the associated resolvent operator

R(z; A) := (zI − A)^{-1},   z ∈ ρ(A) := C \ σ(A).   (3.1)
Precisely, this chapter is structured as follows. Section 3.1 gives a universal estimate for the norm of the inverse of a matrix in terms of its determinant and its norm. From this estimate it will become apparent that the eigenvalues of A are poles of the resolvent operator (3.1). The necessary analysis to show this feature will be carried out in Section 3.3. Section 3.2 gives a result on Laurent series valid for vector-valued holomorphic functions. Finally, Section 3.4 constructs the spectral projections associated with the direct sum decomposition (1.6), whose validity was established by the Jordan Theorem 1.2.1.
3.1 Estimating the inverse of a matrix From now on, for any integer numbers d, k ≥ 1 and a1 , . . . , ak ∈ Cd , we shall denote by [a1 , . . . , ak ] the matrix with k columns and d rows, whose k columns consist of the ordered column vectors a1 , . . . , ak . The next inequality estimates the norm of the inverse of a matrix in terms of the norm of the matrix and of its determinant.
Proposition 3.1.1. Let ‖·‖ be a norm in M_N(C). Then there exists a constant γ > 0 such that, for every invertible matrix A ∈ M_N(C),

‖A^{-1}‖ ≤ γ ‖A‖^{N−1} / |det A|.
Proof. Pick v ∈ C^N, set u := Av, and let a_1, . . . , a_N ∈ C^N denote the columns of A. Then A = [a_1, . . . , a_N]. Now we consider, for each 1 ≤ j ≤ N, the value

b_j := det[a_1, . . . , a_{j−1}, u, a_{j+1}, . . . , a_N].

By the Cramer rule, for each 1 ≤ j ≤ N, the jth component v_j of the vector v satisfies

v_j = b_j / det A.

Let a_{j1}, . . . , a_{jN} ∈ C denote the components of a_j for 1 ≤ j ≤ N. By definition of determinant, for each 1 ≤ j ≤ N, we have that

b_j = Σ_{σ∈P_N} (−1)^{sign σ} u_{σ(j)} Π_{i=1, i≠j}^N a_{i,σ(i)},

where P_N stands for the set of permutations of {1, . . . , N}, and the sign or parity of any σ ∈ P_N is denoted by sign σ. Then, for each 1 ≤ j ≤ N,

|b_j| ≤ Σ_{σ∈P_N} |u_{σ(j)}| Π_{i=1, i≠j}^N |a_{i,σ(i)}| ≤ N! ‖u‖_∞ ‖A‖_∞^{N−1}.

Therefore,

‖v‖_∞ ≤ N! (1/|det A|) ‖u‖_∞ ‖A‖_∞^{N−1}.   (3.2)

Take ‖·‖_{C^N} any norm in C^N, and let ‖·‖_{L(C^N)} be its subordinated operator norm. By the equivalence of norms established by Theorem 2.1.3, it follows from (3.2) that there exists a constant γ > 0 such that

‖A^{-1}u‖_{C^N} ≤ γ (1/|det A|) ‖u‖_{C^N} ‖A‖_{L(C^N)}^{N−1},

whence

‖A^{-1}‖_{L(C^N)} ≤ γ (1/|det A|) ‖A‖_{L(C^N)}^{N−1}.
A further application of Theorem 2.1.3 concludes the proof.
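Proposition 3.1.1 can also be illustrated numerically. For the spectral norm one may in fact take γ = 1, since |det A| is the product of the singular values of A; the sketch below merely checks this particular instance of the estimate on a few random matrices.

```python
# Sketch: ||A^{-1}||_2 <= ||A||_2^{N-1} / |det A| (spectral norm, gamma = 1).
import numpy as np

rng = np.random.default_rng(0)
N = 5
for _ in range(3):
    A = rng.standard_normal((N, N))          # random invertible matrix (a.s.)
    lhs = np.linalg.norm(np.linalg.inv(A), 2)
    rhs = np.linalg.norm(A, 2) ** (N - 1) / abs(np.linalg.det(A))
    print(lhs <= rhs, lhs, rhs)
```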
3.2 Vector-valued Laurent series

The theory of Laurent series is valid not only for complex-valued holomorphic functions, but also for complex vector-valued holomorphic functions. Although the general theory works in more general contexts, this chapter uses exclusively the following special case of it, valid for matrix-valued holomorphic functions on a perforated disk.

Theorem 3.2.1. Suppose N ≥ 1 is an integer, and f ∈ H(D_R(z_0) \ {z_0}, M_N(C)) for some R > 0 and z_0 ∈ C. Then there exists a unique sequence of matrices {L_n}_{n∈Z} in M_N(C) such that

f(z) = Σ_{n=−∞}^∞ (z − z_0)^n L_n   for each z ∈ D_R(z_0) \ {z_0},   (3.3)

uniformly for z in compact subsets of D_R(z_0) \ {z_0}. Moreover,

L_n = (1/2πi) ∫_{C_r(z_0)} f(z)/(z − z_0)^{n+1} dz

for each r ∈ (0, R) and n ∈ Z.

The series (3.3) is called the Laurent series of f at z_0, and its coefficient operators L_n, for n ∈ Z, are the Laurent coefficients of f at z_0.

Proof. Define Ω := D_R(z_0) \ {z_0}. For each 1 ≤ i, j ≤ N and z ∈ Ω, let f_{ij}(z) denote the (i, j)th entry of the matrix f(z); fix such an (i, j). Then f_{ij} ∈ H(Ω), and, hence, by the classic Laurent theorem, there exists a unique sequence of complex numbers {L^{ij}_n}_{n∈Z} such that

f_{ij}(z) = Σ_{n=−∞}^∞ L^{ij}_n (z − z_0)^n   for each z ∈ Ω,

uniformly on compact subsets of Ω. Moreover, for each r ∈ (0, R) and n ∈ Z,

L^{ij}_n = (1/2πi) ∫_{C_r(z_0)} f_{ij}(z)/(z − z_0)^{n+1} dz.

For each n ∈ Z, let L_n ∈ M_N(C) be the matrix whose (i, j)th entry equals L^{ij}_n. It is easy to see that the sequence {L_n}_{n∈Z} satisfies all the requirements of the theorem. The uniqueness is an immediate consequence of the uniqueness of the L^{ij}_n's, once the entries in (3.3) are identified.

Now, we can make the following concept precise.
Definition 3.2.2 (Pole). Suppose N ≥ 1 is an integer, and f ∈ H(D_R(z_0) \ {z_0}, M_N(C)) for some R > 0 and z_0 ∈ C. Then, it is said that z_0 is a pole of order m ≥ 1 of f when the coefficients L_n, for n ∈ Z, of the Laurent series (3.3) satisfy

L_{−m} ≠ 0   and   L_{−(m+k)} = 0 for each k ≥ 1.   (3.4)
3.3 The eigenvalues are poles of the resolvent

The next result shows that the eigenvalues of any matrix A ∈ M_N(C) are poles of its resolvent operator (3.1), and it establishes some important relationships between the underlying Laurent coefficients. It should be remembered that, for any λ ∈ σ(A), we denote by m_a(λ) the algebraic multiplicity of λ.

Proposition 3.3.1. Consider A ∈ M_N(C). Let λ ∈ σ(A) and R > 0 be such that D_R(λ) \ {λ} ⊂ ρ(A) = C \ σ(A). Let

L_n := (1/2πi) ∫_{C_r(λ)} R(z; A)/(z − λ)^{n+1} dz,   n ∈ Z, 0 < r < R,

be the Laurent coefficients of R(·; A) at λ, whose existence is guaranteed by Theorems 2.3.4 and 3.2.1. Then

L_{−(n+1)} = (A − λI)^n L_{−1},   n ≥ 0,   (3.5)

and there exists an integer m = m(λ) with 1 ≤ m(λ) ≤ m_a(λ) for which (3.4) and R[L_{−1}] ⊂ N[(A − λI)^{m(λ)}] are satisfied. In particular, λ is a pole of order m(λ) of the resolvent operator R(·; A).

Proof. By Proposition 3.1.1, there exists γ > 0 such that

‖R(z; A)‖ ≤ γ ‖zI − A‖^{N−1} / |det(zI − A)|   for each z ∈ ρ(A).   (3.6)

As the function z ↦ ‖zI − A‖ is continuous and the set D̄_R(λ) is compact, there exists a constant C_1 > 0 such that

γ ‖zI − A‖^{N−1} ≤ C_1   for each z ∈ D̄_R(λ).
Moreover, since λ is a root of order m_a(λ) of the characteristic polynomial z ↦ det(zI − A), there exists a constant C_2 > 0 such that

|det(zI − A)| ≥ C_2 |z − λ|^{m_a(λ)}   for each z ∈ D̄_{R/2}(λ).

Thus, inserting these inequalities in (3.6) shows that there exists a constant C_3 > 0 for which

‖R(z; A)‖ ≤ C_3 / |z − λ|^{m_a(λ)}   for each z ∈ D̄_{R/2}(λ) \ {λ}.   (3.7)
Therefore, for each r ∈ (0, R/2) and k ≥ 1, we find from (3.7) that

‖L_{−[m_a(λ)+k]}‖ ≤ (1/2π) ∫_{C_r(λ)} ‖R(z; A)‖ / |z − λ|^{−[m_a(λ)+k]+1} |dz| ≤ (1/2π) ∫_{C_r(λ)} C_3 / |z − λ|^{−k+1} |dz| = C_3 r^k

and, consequently, passing to the limit as r ↓ 0 shows that

L_{−[m_a(λ)+k]} = 0 for all k ≥ 1.   (3.8)

Now, we will check that

L_{−n} ≠ 0   for some n ∈ {1, . . . , m_a(λ)}.   (3.9)
The proof proceeds by contradiction. Suppose (3.9) does not hold. Then, using (3.8) shows that λ is a removable singularity of R(·; A) and

lim_{z→λ} R(z; A) = L_0.

Then

1 = lim_{z→λ} det[R(z; A)(zI − A)] = det L_0 · det(λI − A),

which shows that λ is not an eigenvalue of A. This contradiction proves (3.9).

Obviously, combining (3.8) and (3.9) shows that λ is a pole of order m of the function R(·; A) for some 1 ≤ m ≤ m_a(λ). Now, by the definition of R(z; A), Theorem 3.2.1 implies that, for every z ∈ D_R(λ) \ {λ},

I = (zI − A)R(z; A) = [(z − λ)I + (λI − A)] Σ_{n=−∞}^∞ L_n (z − λ)^n
and, hence,

I = Σ_{n=−∞}^∞ [L_{n−1} + (λI − A)L_n] (z − λ)^n.

Therefore, by the uniqueness of the Laurent expansion as a consequence of Theorem 3.2.1, it is apparent that

0 = L_{n−1} + (λI − A)L_n   for all n ≤ −1.

By induction, (3.5) is easily obtained. In particular,

(A − λI)^m L_{−1} = L_{−(m+1)} = 0,

because λ is a pole of order m of R(·; A). Equivalently,

R[L_{−1}] ⊂ N[(A − λI)^m],
which concludes the proof.
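The Laurent coefficients of the resolvent, and the identity (3.5), can be approximated by discretizing the contour integrals. The following sketch is purely illustrative: an arbitrary matrix whose eigenvalue 2 has ascent 2 is chosen, and the trapezoidal rule is applied on a small circle around it.

```python
# Sketch: Laurent coefficients L_n of R(.;A) at lambda via contour quadrature,
# and a check of (3.5) with n = 1, i.e. L_{-2} = (A - lambda I) L_{-1}.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])      # eigenvalue 2 has ascent 2
lam, r, M = 2.0, 0.5, 400
I = np.eye(3)

def laurent_coeff(n):
    total = np.zeros((3, 3), dtype=complex)
    for th in 2 * np.pi * np.arange(M) / M:
        z = lam + r * np.exp(1j * th)
        dz = 1j * r * np.exp(1j * th)              # dz/dtheta
        R = np.linalg.inv(z * I - A)               # resolvent R(z; A)
        total += R / (z - lam) ** (n + 1) * dz
    return total * (2 * np.pi / M) / (2j * np.pi)

L_m1, L_m2, L_m3 = laurent_coeff(-1), laurent_coeff(-2), laurent_coeff(-3)
print(np.allclose(L_m2, (A - lam * I) @ L_m1, atol=1e-8))   # identity (3.5)
print(np.allclose(L_m3, 0, atol=1e-8))                      # pole of order 2, cf. (3.8)
```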
3.4 Spectral projections

This section constructs the spectral projections associated with each of the factors of the direct sum decomposition of C^N given by (1.6) in the Jordan Theorem 1.2.1. These projections are constructed through the Dunford integrals of the resolvent operator R(z; A) over a series of Cauchy contours surrounding each of the eigenvalues of the matrix, and no other eigenvalue. It should be remembered that δ_{ij} stands for the Kronecker delta, and that, given a Cauchy contour γ, we are denoting by D_γ its inner domain.

Theorem 3.4.1. Let A ∈ M_N(C) be with σ(A) = {λ_1, . . . , λ_p}, and set

L := min_{1≤i<j≤p} |λ_i − λ_j| > 0.

Then, the following properties are satisfied:
(a) For every Cauchy contour γ such that σ(A) ⊂ D_γ,

I = (1/2πi) ∫_γ R(z; A) dz.

(b) For each 1 ≤ j ≤ p, let P_j be the operator defined by

P_j := (1/2πi) ∫_{C_r(λ_j)} R(z; A) dz,
where r ∈ (0, L) is arbitrary. Then,

Σ_{k=1}^p P_k = I   and   P_i P_j = δ_{ij} P_j, i, j ∈ {1, . . . , p}.

(c) C^N = ⊕_{k=1}^p R[P_k].

(d) For each 1 ≤ j ≤ p, the ascent ν(λ_j) of the eigenvalue λ_j equals the order of λ_j as a pole of R(·; A). Moreover, R[P_j] = N[(A − λ_jI)^{ν(λ_j)}].

Remark 3.4.2. The operators P_1, . . . , P_p are usually referred to as the spectral projections of the matrix A. According to Theorems 2.3.4 and 3.2.1, for each 1 ≤ j ≤ p, the spectral projection P_j of C^N onto N[(A − λ_jI)^{ν(λ_j)}] equals the operator coefficient L_{−1} of the Laurent series of R(·; A) at λ_j. As L_{−1} is usually referred to as the residue of the resolvent operator at λ_j, one might re-state the previous theorem by simply saying that the spectral projections of a matrix are the residua of its resolvent operator at each of the eigenvalues of the matrix.

Proof. Part (a) is a direct consequence of Theorem 2.3.6 applied to the constant polynomial 1. Now we consider the operators P_1, . . . , P_p defined in Part (b). According to Part (a) and Theorem 2.3.6, it is apparent that

Σ_{k=1}^p P_k = (1/2πi) ∫_γ R(z; A) dz = I.
Now, fix 0 < ρ ≤ L/2 and, for each 1 ≤ j ≤ p, consider the function

f_j : ∪_{k=1}^p D_ρ(λ_k) → C

defined by

f_j(z) := 1 if z ∈ D_ρ(λ_j),   f_j(z) := 0 if z ∈ ∪_{k=1}^p D_ρ(λ_k) \ D_ρ(λ_j).

Clearly, f_j ∈ F(A) and

P_j = (1/2πi) ∫_{C_ρ(λ_j)} f_j(z) R(z; A) dz = f_j(A),   1 ≤ j ≤ p.
In addition,

f_i f_j = δ_{ij} f_i,   1 ≤ i, j ≤ p.

Thus, we find from Proposition 2.4.1 that f_i f_j ∈ F(A) and

P_i P_j = (1/2πi) ∫_γ f_i(z) f_j(z) R(z; A) dz = δ_{ij} P_i,   1 ≤ i, j ≤ p,

which concludes the proof of Part (b). So far, we have shown that the operators P_1, . . . , P_p are disjoint projections whose sum equals the identity. In particular, this implies Part (c).

Finally, we will prove Part (d). For each 1 ≤ j ≤ p, let m_j denote the order of λ_j as a pole of the resolvent R(·; A). By Proposition 3.3.1 and Remark 3.4.2, we have that 1 ≤ m_j ≤ m_a(λ_j) and

R[P_j] ⊂ N[(λ_jI − A)^{m_j}],   1 ≤ j ≤ p,

and, hence,

R[P_j] ⊂ N[(A − λ_jI)^{m_j}] ⊂ N[(A − λ_jI)^{ν(λ_j)}],   1 ≤ j ≤ p.   (3.10)

On the other hand, by Part (c), the Jordan Theorem 1.2.1 shows that

C^N = ⊕_{j=1}^p R[P_j] = ⊕_{j=1}^p N[(A − λ_jI)^{ν(λ_j)}].

Consequently, the inclusions (3.10) must be equalities, and, therefore, according to the definition of algebraic ascent, it is apparent that

ν(λ_j) = m_j   for every 1 ≤ j ≤ p,
which concludes the proof.
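In the spirit of Remark 3.4.2, the spectral projections can be computed numerically as residues of the resolvent. The sketch below (arbitrary illustrative matrix, trapezoidal quadrature on small circles) checks that the resulting operators are disjoint projections summing to the identity and that rank P_j equals the algebraic multiplicity m_a(λ_j).

```python
# Sketch: spectral projections P_j = (1/2*pi*i) \oint (zI - A)^{-1} dz as residues.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
I = np.eye(3)

def spectral_projection(lam, r=0.5, M=400):
    P = np.zeros((3, 3), dtype=complex)
    for th in 2 * np.pi * np.arange(M) / M:
        z = lam + r * np.exp(1j * th)
        P += np.linalg.inv(z * I - A) * 1j * r * np.exp(1j * th)
    return P * (2 * np.pi / M) / (2j * np.pi)

P1, P2 = spectral_projection(2.0), spectral_projection(5.0)
print(np.allclose(P1 @ P1, P1), np.allclose(P1 @ P2, 0), np.allclose(P1 + P2, I))
print(np.linalg.matrix_rank(P1))    # = m_a(2) = 2, the algebraic multiplicity
```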
3.5 Exercises

1. Prove that, necessarily, α = N − 1 if there exist γ, α ∈ R satisfying

‖A^{-1}‖ ≤ γ ‖A‖^α / |det A|

for all invertible matrices A ∈ M_N(C).

2. The purpose of this exercise is to give an alternative proof of Theorem 3.4.1, based on the Jordan canonical form. Let A ∈ M_N(C) be with σ(A) = {λ_1, . . . , λ_p},
and let J denote its Jordan canonical form constructed in Section 1.3. It is already known that A = P −1 JP for some invertible matrix P ∈ MN (C), and that J can be expressed in the form J = diag{J1 , . . . , Jp }, where, for each 1 ≤ j ≤ p, Jj = diag{Jj,i1 , . . . , Jj,inj }, with ν(λj ) = max{ih : 1 ≤ h ≤ nj }, and for each natural number n ≥ 1, the matrix Jj,n is a Jordan block of the form (1.26). Prove successively the following: • For each z ∈ (A), R(z; A) = P R(z; J)P −1 ,
R(z; J) = diag{R(z; J_1), . . . , R(z; J_p)}, and R(z; J_j) = diag{R(z; J_{j,i_1}), . . . , R(z; J_{j,i_{n_j}})} for all 1 ≤ j ≤ p.
• For each 1 ≤ k ≤ p and every Cauchy contour γ_k satisfying

γ_k ∩ σ(A) = ∅   and   σ(A) ∩ D_{γ_k} = {λ_k},

the following identities are satisfied:

(1/2πi) ∫_{γ_k} R(z; A) dz = P [ (1/2πi) ∫_{γ_k} R(z; J) dz ] P^{-1},

(1/2πi) ∫_{γ_k} R(z; J) dz = diag{ (1/2πi) ∫_{γ_k} R(z; J_1) dz, . . . , (1/2πi) ∫_{γ_k} R(z; J_p) dz },

and, for each 1 ≤ j ≤ p,

(1/2πi) ∫_{γ_k} R(z; J_j) dz = diag{ (1/2πi) ∫_{γ_k} R(z; J_{j,i_1}) dz, . . . , (1/2πi) ∫_{γ_k} R(z; J_{j,i_{n_j}}) dz }.

• For each 1 ≤ j ≤ p and 1 ≤ h ≤ n_j,

R(z; J_{j,h}) = trng{(z − λ_j)^{-1}, . . . , (z − λ_j)^{-h}}.

• For each 1 ≤ j, k ≤ p, the matrix

(1/2πi) ∫_{γ_k} R(z; J_j) dz

is the identity matrix if j = k, and is the zero matrix if j ≠ k. Moreover,

Σ_{k=1}^p (1/2πi) ∫_{γ_k} R(z; J) dz = I.
• For each 1 ≤ k ≤ p define

P̂_k = (1/2πi) ∫_{γ_k} R(z; J) dz.

Then, P̂_1, . . . , P̂_p are mutually disjoint projections whose sum is the identity. Moreover, for each 1 ≤ k ≤ p the ascent ν(λ_k) of the eigenvalue λ_k equals its order as a pole of R(·; J), and R[P̂_k] = N[(J − λ_kI)^{ν(λ_k)}].
• For each 1 ≤ k ≤ p define

P_k = (1/2πi) ∫_{γ_k} R(z; A) dz.
Then, P1 , . . . , Pp are mutually disjoint projections whose sum is the identity. Moreover, for each 1 ≤ k ≤ p the ascent ν(λk ) of the eigenvalue λk equals its order as a pole of R(·; A), and R[Pk ] = N [(A − λk I)ν(λk ) ].
3.6 Comments on Chapter 3 Although this chapter contains standard material, its approach differs substantially from the one adopted in classic textbooks, such as Gohberg & Kre˘ın [45], Taylor [122], Dunford & Schwartz [27], Yosida [130], Kato [65], and Gohberg, Goldberg & Kaashoek [43], where the fact that the eigenvalues are poles of the resolvent is proved once the spectral projections are constructed. Actually, in most of the available texts the Jordan theorem is derived from the properties of the spectral projections, and at a later step it is shown that any eigenvalue must be a pole of the resolvent. This chapter has adopted the converse methodology. The use of the elementary Proposition 3.1.1, attributable to Kato [65], to prove from the very beginning that any eigenvalue is a pole of the resolvent, simplifies tremendously the mathematical apparatus required to cover these topics. As a by-product, it facilitates the teaching of these topics at an undergraduate level. Nevertheless, this is not the unique advantage of the approach chosen in this chapter. Indeed, the fact that all spectral properties can be easily obtained from the fact that the eigenvalues are poles of the resolvent suggests that for general one-parameter families of operators L, not necessarily of the special form L(z) = zI − A,
z ∈ K,
most of their local spectral properties at any singular value z0 ∈ K should follow, by some method, by simply imposing the growth condition

‖L(z)^{-1}‖ ≤ C / |z − z0|^ν,   z ∼ z0, z ≠ z0,
even if L is not real analytic. This crucial observation is on the foundations of the general abstract theory that is going to be developed in the next chapters. It actually triggered it from the very beginning. In most textbooks it is usual to introduce the spectral projections in the infinite-dimensional setting for the special case when A is compact and, possibly, selfadjoint. Although this introductory chapter has dealt exclusively with the finite-dimensional setting, in Chapter 9 we will generalize these results to cover the general case when L is a one-parameter family of Fredholm operators of index zero. As in Chapters 1 and 2, the approach to this chapter has been based on the books by L´ opez-G´omez [83, 84]. Exercise 2 is based on the finite-dimensional approach to operator calculus given in the book by Gohberg, Lancaster & Rodman [50]. The theory of Laurent series also holds for vector-valued holomorphic functions. In the book by Gohberg, Goldberg & Kaashoek [43] a general way of reasoning to prove results on vector-valued functions from the corresponding ones for scalar-valued functions is explained. Applications of the Laurent series to linear ordinary differential equations, from the point of view of matrix theory, can be found in Gantmacher [39].
Part II
Algebraic Multiplicities
Part II: Summary Banach spaces and linear bounded operators Throughout Part II, we consider the field K ∈ {R, C}, and U, V, W, X, Y . . . will denote non-zero K-Banach spaces. Now we introduce the main classes of linear bounded operators used all through the book. Given two K-Banach spaces U and V , the set L(U, V ) denotes the K-Banach space of the linear bounded operators from U to V , and L(U ) := L(U, U ). As U and V are Banach spaces, they are equipped with norms. Both norms are always denoted by · , as well as the operator norm subordinated to those norms in L(U, V ). For any T ∈ L(U, V ), the sets N [T ] and R[T ] stand for the null space (kernel) and the range (image) of T , respectively, and the rank of T is defined by rank T := dim R[T ]. An operator T ∈ L(U, V ) is said to be a finite rank operator if rank T < ∞. An operator P ∈ L(U ) is said to be a projection if P 2 = P . In such a case, P is said to project U onto R[P ]. An operator T ∈ L(U, V ) is said to be a Fredholm operator when dim N [T ] < ∞ and
codim R[T ] < ∞.
In such a case, R[T ] must be closed. Throughout Part II, we denote by Fred(U, V ) the set of all Fredholm operators in L(U, V ), and Fred(U ) := Fred(U, U ). Moreover, for every T ∈ Fred(U, V ), the index of T is defined through ind T := dim N [T ] − codim R[T ]. The set of Fredholm operators of index zero of L(U, V ) is denoted by Fred0 (U, V ). Consequently, T ∈ Fred0 (U, V ) if and only if T ∈ L(U, V ) and
dim N [T ] = codim R[T ] < ∞.
We also denote Fred0 (U ) := Fred0 (U, U ). The identity operator of U , always denoted by IU , lies in Fred0 (U ), of course. But 0 ∈ / Fred(U, V ) if U or V are infinite dimensional. In most of Part II we will assume that Fred0 (U, V ) = ∅; then U and V must be isomorphic. Therefore, assuming U = V is far from being a serious restriction. In such a case, the most paradigmatic subset of Fred0 (U ) consists of the set of compact perturbations of the identity IU . An operator T ∈ L(U, V ) is said to be compact if the closure T (B) is a compact subset of V for all bounded sets B ⊂ U . We denote by K(U, V ) the subset of L(U, V ) formed by all compact operators. It is well known that IU ∈ K(U ) if and only if dim U < ∞. Another important subset of Fred0 (U, V ) is the set Iso(U, V ) consisting of all the isomorphisms from U to V . As usual, K(U ) := K(U, U ) and Iso(U ) := Iso(U, U ).
Continuous and smooth maps

Throughout Part II, given K ∈ {R, C}, an open subset Ω ⊂ K, and a K-Banach space W, we denote by C(Ω, W) the set of continuous maps from Ω to W, and, more generally, for each integer r ≥ 1, the set C^r(Ω, W) stands for the subset of C(Ω, W) of those maps having a continuous rth differential. We also consider

C^0(Ω, W) := C(Ω, W),   C^∞(Ω, W) := ∩_{r∈N} C^r(Ω, W),
and denote by C^ω(Ω, W) the set of analytic maps from Ω to W. It should be noted that if K = C, then C^1(Ω, W) = C^ω(Ω, W) = H(Ω, W), all these sets being the space of holomorphic maps from Ω to W. Finally, C^r(Ω) will denote C^r(Ω, K), for each r ∈ N ∪ {∞, ω}.

Given λ0 ∈ Ω, a map f : Ω \ {λ0} → W, and k ∈ Z, it is said that

f(λ) = o((λ − λ0)^k)   as λ → λ0

if

lim_{λ→λ0} f(λ)/(λ − λ0)^k = 0.

When λ0 is clear from the context, it is simply said that f(λ) = o((λ − λ0)^k).

Given λ0 ∈ Ω, a map f ∈ C^r(Ω, W) for some r ∈ N ∪ {∞}, and an integer 0 ≤ k ≤ r, we will write ord_{λ0} f := k when f^{j)}(λ0) = 0 for all 0 ≤ j < k and f^{k)}(λ0) ≠ 0, while ord_{λ0} f := ∞ if r = ∞ and f^{j)}(λ0) = 0 for all j ≥ 0. The expression ord_{λ0} f will be read: the order of f at λ0. We leave ord_{λ0} f undefined when, for some integer k ≥ 0, the function f is not of class C^{k+1} in any neighborhood of λ0, and f^{j)}(λ0) exists and equals zero for all 0 ≤ j ≤ k.

For the sake of notation in Taylor developments, throughout this part, from its beginning to its end, given λ0 ∈ Ω, a natural number k and f ∈ C^k(Ω, W), we use the notation

f_k := (1/k!) f^{k)}(λ0).

It should not cause any confusion, because λ0 will always be fixed beforehand. In particular, we will subsequently set

f_0 := f(λ0),

if there is no ambiguity. It is important to keep this notation in mind from now on, as it will be used throughout the remainder of the book.
Operator families and generalized spectrum

Throughout Part II, given two K-Banach spaces U, V, and r ∈ N ∪ {∞, ω}, any map L ∈ C^r(Ω, L(U, V)) will be referred to as an operator family of class C^r in Ω from U to V. They will be denoted hereafter by L, M, L̂, . . . Among them, polynomial operator families will play a pivotal role. A map Φ ∈ C^ω(K, L(U, V)) is said to be a polynomial operator family if there exist d ∈ N and F_0, . . . , F_d ∈ L(U, V) such that

Φ(λ) = Σ_{j=0}^d λ^j F_j,   λ ∈ K.
The set of all polynomial operator families from U to V will be denoted by P[L(U, V)], and P[L(U)] := P[L(U, U)]. Usually, polynomial operator families will be named by Greek capital letters Φ, Ψ, Υ . . .

The product of two operator families is defined in the natural way by means of their composition. Precisely, if Ω and Ω′ are two open subsets of K and

L : Ω → L(U, V),   M : Ω′ → L(W, U)

are two arbitrary maps, then the product map LM : Ω ∩ Ω′ → L(W, V) is defined by

LM(λ) := L(λ) ◦ M(λ),   λ ∈ Ω ∩ Ω′,
though, in practice, the composition symbol ◦ will not be written. Given an operator family L ∈ C(Ω, L(U, V )), the point λ0 ∈ Ω is said to be a singular value of L if L0 := L(λ0 ) ∈ / Iso(U, V ), while it is said to be an eigenvalue of L if dim N [L0 ] ≥ 1. The spectrum of L, denoted by Σ(L), is defined as the set of singular values of L, while Eig(L) stands for the set of eigenvalues of L. Obviously, Eig(L) ⊂ Σ(L). Thanks to the open mapping theorem, in the special case when L0 ∈ Fred0 (U, V ), one has that λ0 ∈ Σ(L) if and only if λ0 ∈ Eig(L). Consequently, if L(Ω) ⊂ Fred0 (U, V ), then Σ(L) = Eig(L). Such concepts should not be confused with
the classic concept of spectrum of the operator L(λ), denoted by σ(L(λ)), for a particular value λ ∈ Ω. By definition, for every T ∈ L(U ), σ(T ) = Σ(LT ), where LT (λ) := λIU − T,
λ ∈ K.
Operations and order in N ∪ {∞, ω} We will sometimes use the sets N ∪ {∞} and N ∪ {∞, ω}; for example, to consider maps of class C r (Ω, W ) for some r belonging to N ∪ {∞} or to N ∪ {∞, ω}. Consequently, it is appropriate to fix the following conventions. The sum, difference and total order in N ∪ {∞} are the usual ones. The sum, difference and partial order in N ∪ {∞, ω} are defined so that N ∪ {∞} preserves them, and, in addition, for all k ∈ N, k + ω = ω, k < ω, ω − k = ω. Also, ω becomes a synonym for ∞ when it is used for labeling sequences, or it indicates cardinality, so that the following expressions have the same meaning: {xn }n∈N ,
{x_n}_{n=0}^∞,   {x_n}_{n=0}^ω.
Main general goal Essentially, classic spectral theory studies holomorphic operator families L ∈ H(C, MN (C)) of the special type L(λ) = λICN − A,
λ ∈ C,
for a given A ∈ MN (C). These are the families already studied in Part I. More generally, the Riesz–Fredholm theory deals with holomorphic families L ∈ H(C, L(U )) of the form L(λ) = λIU − T, λ ∈ C, for a given T ∈ L(U ), where U is a complex Banach space. In particular, it analyzes the concepts of algebraic multiplicity, ascent. . . at any eigenvalue λ0 ∈ σ(T ), and studies their main properties. Part II introduces a generalized concept of algebraic multiplicity at every λ0 ∈ Eig(L) for a general operator family L ∈ C r (Ω, L(U, V )), for some r ∈ N such that L0 ∈ Fred0 (U, V ), and studies its most relevant properties; among them, the product formula (see Chapter 5). Actually, the product formula and a certain normalization condition determine the algebraic multiplicity uniquely (see Chapter 6). As a by-product, we will get an extremely simple axiomatic for all existing
concepts of algebraic multiplicity available in the specialized literature. According to that axiomatic, it will become apparent that the generalized concept of algebraic multiplicity introduced in Part II indeed extends all classic concepts of algebraic multiplicity. The most significant consequences of the theory developed in this part will be discussed throughout each of the chapters forming it.
Prerequisites

The reading of Part II requires more mathematical background than the one necessary for Part I. But not that much, as only the most basic concepts and results of Functional Analysis, such as the Hahn–Banach theorem, are needed. In addition, the knowledge of the most basic features of the theory of (bounded) Fredholm operators is required. The following list collects the main results that will be invoked throughout Part II:
• If L ∈ Fred_0(U, V) and T ∈ K(U, V), then L + T ∈ Fred_0(U, V).
• If L ∈ Fred_0(U, V) and M ∈ Fred_0(W, U), then LM ∈ Fred_0(W, V).
• Fred_0(U, V) is an open subset of L(U, V).
• For each L ∈ Fred_0(U, V) there exists a finite-rank operator F ∈ L(U, V) such that L + F ∈ Iso(U, V).
• The Fredholm index defines a continuous map T ↦ ind T from Fred(U, V) to Z.
Two good sources for the theory of Fredholm operators are the review paper by Gohberg & Kreĭn [45] and the comprehensive treatise of Kato [65]. The book of Gohberg, Goldberg & Kaashoek [43] also provides an elementary introduction to that theory. Although the Riesz–Fredholm theory of compact operators and their algebraic multiplicities is useful to understand why the generalized concept of algebraic multiplicity introduced in Part II is indeed a generalization of the classic concept of multiplicity for a compact operator, some of its most significant results will follow in a series of corollaries of the abstract theory developed herein. Consequently, no previous knowledge of it is required. Except in Section 11.3, where certain features about non-bounded operators will be discussed, throughout Part II, by an operator is always meant a linear and bounded operator.
Structure Part II is divided into eight chapters. Chapters 4 and 5 construct two different but equivalent algebraic multiplicities. Chapter 6 gives an axiomatic for the algebraic multiplicity. Chapter 7 characterizes the existence of the local Smith form
of a family at a singular value. Chapter 8 focuses attention on holomorphic and classical operator families, obtaining the most important results of the classic theory of Riesz–Fredholm. Chapter 9 constructs finite Laurent developments for the inverses of wide classes of non-analytic families, and ascertains some important properties concerning the logarithmic residues. Chapter 10 gives a proof of the spectral theorem for matrix polynomials. Finally, in Chapter 11 we describe some further developments of the algebraic multiplicity to enhance further research in some related areas of great interest.
Chapter 4
Algebraic Multiplicity Through Transversalization

Throughout this chapter we will consider K ∈ {R, C}, two K-Banach spaces U and V, an open subset Ω ⊂ K, a point λ0 ∈ Ω, and a family L ∈ C^r(Ω, L(U, V)), for some r ∈ N ∪ {∞}, such that L_0 := L(λ0) ∈ Fred_0(U, V). When λ0 ∈ Eig(L), the point λ0 is said to be an algebraic eigenvalue of L if there exist δ, C > 0 and m ≥ 1 such that, for each 0 < |λ − λ0| < δ, the operator L(λ) is an isomorphism and

‖L(λ)^{-1}‖ ≤ C / |λ − λ0|^m.

The main goal of this chapter is to introduce the concept of algebraic multiplicity of L at any algebraic eigenvalue λ0. This algebraic multiplicity will be denoted by χ[L; λ0], and will be defined through the auxiliary concept of transversal eigenvalue. Such a concept will be motivated in Section 4.1 and will be formally defined in Section 4.2. Essentially, λ0 is a transversal eigenvalue of L when it is an algebraic eigenvalue for which the perturbed eigenvalues a(λ) ∈ σ(L(λ)) from 0 ∈ σ(L_0), as λ moves from λ0, can be determined through standard perturbation techniques; these perturbed eigenvalues a(λ) are those satisfying a(λ0) = 0. This feature will be clarified in Sections 4.1 and 4.4, where we study the behavior of the eigenvalue a(λ) and its associated eigenvector in the special case when 0 is a simple eigenvalue of L_0. In such a case, the multiplicity of L at λ0 equals the order of the function a at λ0. Section 4.3 shows that every operator family L satisfying the general requirements of this chapter can be transversalized through a polynomial
family Φ ∈ P[L(U, V )] with Φ0 ∈ Iso(U, V ), provided λ0 is an algebraic eigenvalue of L; that means that the product family LΦ is transversal at λ0 . The algebraic multiplicity of L at λ0 is then defined through χ[LΦ; λ0 ]. The consistency of this concept of multiplicity relies upon the fact that it does not depend on the local transversalizing family Φ.
4.1 Motivating the concept of transversality

Originally, the interest of the concept of transversality goes back to the fact that transversality makes the resolution of a special class of triangular systems trivial. Although the precise meaning of this claim will not become apparent until Chapters 7 and 9, we consider now a simple example to illustrate it. The deepest significance of the concept of transversal eigenvalue will be revealed later; through, e.g., Proposition 7.1.3 and the results of Section 7.2, where it will be made apparent that, at transversal eigenvalues, the structure of the canonical sets of Jordan chains are themselves canonical. According to Theorem 4.3.2, which establishes the existence of transversalizing isomorphisms at any eigenvalue λ0, it seems rather superfluous to give non-trivial examples for transversality (see Theorem 5.3.1 as well). Indeed, by Theorem 4.3.2, through the appropriate local change of coordinates at λ0, any operator family can be transversalized.

Suppose we are given a natural number k ≥ 0 and k + 1 operators L_0, . . . , L_k ∈ L(U, V), and consider the homogeneous linear system

Σ_{i=0}^j L_i u_{j−i} = 0,   0 ≤ j ≤ k,   (4.1)
i=0
in the unknowns u0 , . . . , uk ∈ U . The next lemma reveals that under a special condition, closely related to the concept of transversality, system (4.1) becomes the uncoupled system Li uj−i = 0,
0 ≤ i ≤ j ≤ k.
Lemma 4.1.1. Let k ∈ N and L0 , . . . , Lk ∈ L(U, V ) be such that the spaces k−1 6
R[L0 ], L1 (N [L0 ]), . . . , Lk (
N [Lj ])
(4.2)
j=0
are in direct sum. Then, (4.1) is satisfied if and only if uj ∈ N [L0 ] ∩ · · · ∩ N [Lk−j ],
0 ≤ j ≤ k.
(4.3)
4.1. Motivating the concept of transversality
85
Proof. Obviously, (4.3) implies (4.1). To prove the converse implication, we will proceed by induction on k. Suppose (4.1). Then, L0 u0 = 0 and, hence, (4.3) holds true for k = 0. Assume that the facts (y0 , . . . , yk−1 ) ∈ U k
and
j
Li yj−i = 0,
0 ≤ j ≤ k − 1,
i=0
imply yj ∈
k−1−j 6
0 ≤ j ≤ k − 1.
N [Li ],
i=0
Then, applying this induction hypothesis to (u0 , . . . , uk−1 ) shows that uj ∈
k−1−j 6
0 ≤ j ≤ k − 1.
N [Li ],
(4.4)
i=0
On the other hand, particularizing (4.1) at j = k gives L0 uk +
k
Li uk−i = 0.
(4.5)
i=1
Moreover, by (4.4), we have that L0 uk ∈ R[L0 ],
Li uk−i ∈ Li (N [L0 ] ∩ · · · ∩ N [Li−1 ]) ,
1 ≤ i ≤ k.
Thus, since the spaces (4.2) are in direct sum, (4.5) implies L0 uk = 0 and Li uk−i = 0
for each 1 ≤ i ≤ k.
Equivalently, Lk−j uj = 0 for each 0 ≤ j ≤ k. Combining these relations with (4.4) gives uj ∈
k−j 6
N [Li ] for each 0 ≤ j ≤ k,
i=0
and, therefore, (4.1) is satisfied.
There are many situations where systems of the form (4.1) appear; for instance, in setting the concept of generalized Jordan chain. But those applications are postponed until Chapters 7 and 9. Now we will analyze another important example closely related to the problem of ascertaining the perturbed eigenvalues from 0 for the family L(λ) as λ moves from λ0 . As already announced, this problem will be dealt with in Section 4.4. We use the notation introduced in the general presentation of Part II. Lemma 4.1.2. Suppose L ∈ C r (Ω, L(U, V )) and ϕ ∈ C r (Ω, U ) for some r ∈ N∪{∞}. Consider λ0 ∈ Ω and assume that the spaces (4.2) are in direct sum for some
integer 1 ≤ k ≤ r. Then the following properties are equivalent:
• L(λ)ϕ(λ) = o((λ − λ0)^k).
• Σ_{i=0}^j L_i ϕ_{j−i} = 0 for all 0 ≤ j ≤ k.
• L_i ϕ_{j−i} = 0 for all 0 ≤ i ≤ j ≤ k.

Proof. As Lϕ ∈ C^r(Ω, V), the condition L(λ)ϕ(λ) = o((λ − λ0)^k) is equivalent to

(Lϕ)^{j)}(λ0) = 0,   0 ≤ j ≤ k.

By the Leibniz rule, this is equivalent to

Σ_{i=0}^j L_i ϕ_{j−i} = 0,   0 ≤ j ≤ k.   (4.6)

Finally, according to Lemma 4.1.1, (4.6) is satisfied if and only if

L_i ϕ_{j−i} = 0,   0 ≤ i ≤ j ≤ k,

which concludes the proof.
Based on Lemma 4.1.2, we can easily obtain the following perturbation result. It clearly reveals how the transversality of the spaces (4.2) tremendously facilitates the calculations in ascertaining the perturbations of eigenvalues and associated eigenvectors for linear operators. Its full significance will become apparent in Section 4.4. Lemma 4.1.3. Suppose that for some r ∈ N ∪ {∞} the functions L ∈ C r (Ω, L(U )),
ϕ ∈ C r (Ω, U ),
a ∈ C r (Ω)
satisfy L(λ)ϕ(λ) = a(λ)ϕ(λ),
λ ∈ Ω,
and, for some point λ0 ∈ Ω and some integer 1 ≤ k ≤ r, the spaces (4.2) are in direct sum, and

a_i = 0,   0 ≤ i ≤ k.

Then

L_i ϕ_{j−i} = 0,   0 ≤ i ≤ j ≤ k.
λ ∈ Ω.
Then, Mϕ = 0 in Ω, and Mi = Li ,
0 ≤ i ≤ k.
Consequently, Lemma 4.1.2 concludes the proof.
4.2. The concept of transversal eigenvalue
87
4.2 The concept of transversal eigenvalue The next definition introduces the concept of transversal eigenvalue. It should not be forgotten that 0 ≤ j < r + 1. j!Lj = Lj) (λ0 ), Definition 4.2.1. Let L ∈ C r (Ω, L(U, V )) satisfy L0 ∈ Fred0 (U, V ) for some r ≥ 1; consider λ0 ∈ Eig(L), and k ∈ N ∩ [1, r]. Then, λ0 is said to be a k-transversal eigenvalue of L if the following topological direct sum decomposition is satisfied: k
Lj (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) ⊕ R[L0 ] = V,
(4.7)
j=1
with Lk (N [L0 ] ∩ · · · ∩ N [Lk−1 ]) = {0}.
(4.8)
In such a case, the integer k is called the order of transversality of L at λ0 . More generally, given k ∈ N ∩ [0, r], the point λ0 is said to be a k-transversal value of L when either k = 0 and L0 ∈ Iso(U, V ), or k ≥ 1 and λ0 is a k-transversal eigenvalue of L. Condition (4.8) makes the integer k uniquely determined. The next result provides us with some useful relationships between the dimensions of the vector spaces arising in (4.7), as well as with a characterization of transversal eigenvalues. Proposition 4.2.2. Let r ∈ N∗ ∪ {∞} and L ∈ C r (Ω, L(U, V )) be such that L0 ∈ Fred0 (U, V ), with λ0 ∈ Eig(L), and the spaces (4.2) are in direct sum for some integer 1 ≤ k ≤ r. Set ni := dim
i 6
0 ≤ i ≤ k,
N [Lj ],
j=0
i := dim Li
6 i−1
N [Lj ] ,
1 ≤ i ≤ k.
j=0
Then ni−1 = ni + i ,
1 ≤ i ≤ k,
(4.9)
and, consequently, ni = nk +
k
j ,
0 ≤ i ≤ k − 1.
(4.10)
j=i+1
Moreover, λ0 is a k-transversal eigenvalue of L if and only if nk = 0,
k ≥ 1,
(4.11)
88
Chapter 4. Algebraic Multiplicity Through Transversalization
and, in such a case, ni =
k
j ,
0 ≤ i ≤ k − 1.
(4.12)
j=i+1
Proof. Since i 6 j=0
N [Lj ] ⊂
i−1 6
N [Lj ] ⊂ N [L0 ],
1 ≤ i ≤ k,
j=0
and dim N [L0 ] < ∞, for each 1 ≤ i ≤ k there exists a finite-dimensional subspace Ui ⊂ U such that N [L0 ] ∩ · · · ∩ N [Li−1 ] = Ui ⊕ (N [L0 ] ∩ · · · ∩ N [Li ])
(4.13)
N [L0 ] = U1 ⊕ · · · ⊕ Uk ⊕ (N [L0 ] ∩ · · · ∩ N [Lk ]) .
(4.14)
and, hence, By construction, Li (N [L0 ] ∩ · · · ∩ N [Li−1 ]) = Li (Ui ),
1 ≤ i ≤ k,
(4.15)
and, in addition, dim Ui = dim Li (Ui ),
1 ≤ i ≤ k,
(4.16)
because the restriction of Li from Ui to Li (Ui ) is an isomorphism. Thanks to (4.13), (4.15) and (4.16), we find that ni−1 = dim Ui + ni = dim Li (Ui ) + ni = i + ni ,
1 ≤ i ≤ k,
which provides us with (4.9). Identity (4.10) is a direct consequence of the identities of (4.9). Now, assume that λ0 is a k-transversal eigenvalue of L. Then, L1 (U1 ) ⊕ · · · ⊕ Lk (Uk ) ⊕ R[L0 ] = V
(4.17)
and, hence,

dim N[L_0] = codim R[L_0] = Σ_{i=1}^k dim L_i(U_i) = Σ_{i=1}^k dim U_i,
because L0 ∈ Fred0 (U, V ). Thus, we find from (4.14) that N [L0 ] = U1 ⊕ · · · ⊕ Uk
and N [L0 ] ∩ · · · ∩ N [Lk ] = {0}.
Therefore, n_k = 0. Note that ℓ_k ≥ 1 holds from (4.8), and that (4.12) is just (4.10) with n_k = 0.
To show the converse implication, suppose (4.11). Then, (4.10) implies (4.12). In particular,

dim N[L_0] = n_0 = Σ_{j=1}^k ℓ_j

and, hence,

n_0 = codim R[L_0] ≥ Σ_{i=1}^k dim L_i(U_i) = n_0.

Consequently,

codim R[L_0] = Σ_{i=1}^k dim L_i(U_i)

and (4.17) holds. As ℓ_k ≥ 1, the point λ0 must be a k-transversal eigenvalue of L. This concludes the proof.
4.3 Algebraic eigenvalues and transversalization

The following definition fixes the pivotal concept of algebraic eigenvalue.

Definition 4.3.1. Suppose r ∈ N ∪ {∞}, and L ∈ C^r(Ω, L(U, V)) satisfy L_0 ∈ Fred_0(U, V), with λ0 ∈ Eig(L). Then, λ0 is said to be an algebraic eigenvalue of L if there are an integer ν ≥ 1 and two positive constants δ > 0 and C > 0 such that, for each 0 < |λ − λ0| < δ, the operator L(λ) is an isomorphism and

‖L(λ)^{-1}‖ ≤ C / |λ − λ0|^ν.   (4.18)

The least integer ν ≥ 1 satisfying these requirements will be called the order of λ0 as an algebraic eigenvalue of L. More generally, for a given ν ∈ N, the point λ0 is said to be a ν-algebraic value of L when either ν = 0 and L_0 ∈ Iso(U, V), or ν ≥ 1 and λ0 is an algebraic eigenvalue of order ν of L. For every ν ∈ N, we denote by Alg_ν(L) the set of all ν-algebraic values of L in Ω, and

Alg(L) := ∪_{ν∈N} Alg_ν(L).
It must be emphasized that 0-algebraic values of L are not eigenvalues of L; however, they will be useful to extend the concept of algebraic multiplicity to cover all values of λ ∈ Ω for which L(λ) ∈ Iso(U, V ). We also note Alg0 (L) = Ω \ Σ(L)
and
Alg(L) \ Alg0 (L) ⊂ Eig(L).
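For the classical family L(λ) = λI_{C^N} − A, the growth condition (4.18) is easy to observe numerically; in the sketch below (arbitrary matrix whose eigenvalue 2 has ascent 2) the product ‖L(λ)^{-1}‖ · |λ − λ0|² remains bounded as λ → λ0, so λ0 belongs to Alg_2(L).

```python
# Sketch: growth of ||(lambda*I - A)^{-1}|| near an eigenvalue of ascent nu = 2.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])     # lambda_0 = 2 has ascent 2
lam0, nu = 2.0, 2
for eps in (1e-1, 1e-3, 1e-5):
    lam = lam0 + eps
    bound = np.linalg.norm(np.linalg.inv(lam * np.eye(3) - A), 2) * eps**nu
    print(eps, bound)               # roughly constant as eps -> 0
```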
The order of λ0 as an algebraic eigenvalue of L equals the order of λ0 as a pole of L−1 if L is analytic. Rather strikingly, although the concept of pole is not available for non-analytic families, the theory developed in Chapter 9 will reveal that the concept of order of λ0 as an algebraic eigenvalue of L in the non-analytic case shares most of the local properties with the concept of order of λ0 as a pole of L−1 in the analytic case. Among them, the existence of Laurent expansions of L−1 , necessarily finite if L is not analytic. The main result of this section establishes that if λ0 ∈ Ω is a ν-algebraic eigenvalue of L ∈ C r (Ω, L(U, V )), for some r ≥ 1 and some integer 1 ≤ ν ≤ r, then there exist a natural number 1 ≤ k ≤ ν and a polynomial family Φ ∈ P[L(U, V )] with Φ0 ∈ Iso(U, V ) such that λ0 is a k-transversal eigenvalue of the transformed family LΦ defined as LΦ (λ) := L(λ)Φ(λ),
λ ∈ Ω.
(4.19)
Moreover, the dimensions

dim L^Φ_j(N[L^Φ_0] ∩ · · · ∩ N[L^Φ_{j−1}]),
0 ≤ j < r + 1,
do not depend on the family of isomorphisms Φ and, consequently, they may be regarded as algebraic invariants of L. This result can be stated briefly by simply saying that L can be transversalized at λ0 if λ0 is an algebraic eigenvalue of order 1 ≤ ν ≤ r. In such a case, any local family of isomorphisms Φ for which λ0 is a transversal eigenvalue of (4.19) will be referred to as a transversalizing family for L at λ0 . It turns out that if r = ∞, then λ0 is an algebraic eigenvalue of L if and only if L can be transversalized at λ0 , but the complete proof of this characterization is postponed until Chapter 5, as it uses another concept of multiplicity to be introduced there. The main result of this chapter can be stated as follows. Theorem 4.3.2 (of transversalization). Suppose r ∈ N∪{∞}. Let L ∈ C r (Ω, L(U, V )) satisfy L0 ∈ Fred0 (U, V ) and λ0 ∈ Eig(L). Then, for each integer 1 ≤ k ≤ r there k exists Φk ∈ P[L(U )] with Φk0 = IU such that the product family LΦ defined by k
L^{Φ^k}(λ) := L(λ)Φ^k(λ),   λ ∈ Ω,

satisfies

V_k := ⊕_{j=1}^k L^{Φ^k}_j(N[L^{Φ^k}_0] ∩ · · · ∩ N[L^{Φ^k}_{j−1}]) ⊕ R[L^{Φ^k}_0] ⊂ V.

If, in addition, λ0 ∈ Alg_ν(L) for some integer 1 ≤ ν ≤ r, then there exists a least 1 ≤ k ≤ ν for which V_k = V and, therefore, λ0 is a k-transversal eigenvalue of L^{Φ^k}.
Moreover, if Φ, Ψ ∈ C^r(Ω, L(U)) are two families for which Φ_0, Ψ_0 ∈ Iso(U) and, for some 1 ≤ k, ℓ ≤ ν, the transformed families

L^Φ := LΦ,   L^Ψ := LΨ,

satisfy

⊕_{j=1}^k L^Φ_j(∩_{i=0}^{j−1} N[L^Φ_i]) ⊕ R[L^Φ_0] = V,   L^Φ_k(∩_{i=0}^{k−1} N[L^Φ_i]) ≠ {0},

⊕_{j=1}^ℓ L^Ψ_j(∩_{i=0}^{j−1} N[L^Ψ_i]) ⊕ R[L^Ψ_0] = V,   L^Ψ_ℓ(∩_{i=0}^{ℓ−1} N[L^Ψ_i]) ≠ {0},

then k = ℓ and, for each 1 ≤ j ≤ k,

dim L^Φ_j(∩_{i=0}^{j−1} N[L^Φ_i]) = dim L^Ψ_j(∩_{i=0}^{j−1} N[L^Ψ_i]).
j−1 6
j · dim LΦ j (
N [LΦ i ]), if λ0 ∈ Algν (L) for some 1 ≤ ν < r+1,
i=0
where Φ ∈ C r (Ω, L(U )) is any family for which λ0 is a k-transversal eigenvalue of LΦ := LΦ, for a certain 1 ≤ k ≤ ν. • χ[L; λ0 ] := 0, if λ0 ∈ Alg0 (L); equivalently, if L0 ∈ Iso(U, V ). • χ[L; λ0 ] := ∞, if r = ∞ and λ0 ∈ / Alg(L). The multiplicity χ[L; λ0 ] is left undefined in all remaining cases, that is, when # r ∈ N and λ0 ∈ / Algν (L). 0≤ν≤r
As commented earlier, if r = ∞ and λ0 ∈ Eig(L), then L can be transversalized at λ0 if and only if λ0 ∈ Alg(L). Actually, in such a case, as a consequence of the theory developed in Chapter 5, for any local transversalizing isomorphism Φ ∈ C ∞ (Ω, L(U )), the order of λ0 as an algebraic eigenvalue of L equals the order of transversality of λ0 as a transversal eigenvalue of the transversalized family LΦ := LΦ.
92
Chapter 4. Algebraic Multiplicity Through Transversalization
Consequently, the hypothesis λ0 ∈ Algν (L), for some 1 ≤ ν ≤ r, in the second assertion of Theorem 4.3.2 is not only sufficient for its validity, but also necessary. This is an extremely sharp result, because it entails that Vk must be a proper subspace of V for all k ≥ 1 if λ0 ∈ / Alg(L) (see the statement of Theorem 4.3.2). Simple examples show that the assumption that λ0 is an algebraic eigenvalue of L cannot be removed from the statement of Theorem 4.3.2 without seriously affecting its validity. Indeed, the family L ∈ C ∞ (R) defined by −2 if λ ∈ R \ {0}, e−λ L(λ) := (4.20) 0 if λ = 0 is real analytic in R \ {0}, but 0 ∈ / Alg(L). Moreover, Lj = 0 for all j ≥ 0, and, therefore, 0 cannot be a k-transversal eigenvalue of LΦ for any integer k ≥ 0 and Φ ∈ C ∞ (R). The proof of Theorem 4.3.2 is based on the following lemma. Lemma 4.3.4. Let m ∈ N∗ and L ∈ C m+1 (Ω, L(U, V )) be such that L0 ∈ Fred0 (U, V ) and m Vm := Lj (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) ⊕ R[L0 ] ⊂ V. (4.21) j=1
Then there exists Φ ∈ P[L(U )] with Φ0 = IU for which the new family M := LΦ satisfies Vm+1 :=
m+1
Mj (N [M0 ] ∩ · · · ∩ N [Mj−1 ]) ⊕ R[M0 ] ⊂ V
(4.22)
j=1
and Mh = Lh ,
0 ≤ h ≤ m.
(4.23)
Proof. Let Y be a closed subspace of U such that U = N [L0 ] ⊕ Y. Arguing as in the proof of Proposition 4.2.2, for each 1 ≤ j ≤ m there exists a subspace Uj of N [L0 ] such that N [L0 ] ∩ · · · ∩ N [Lj−1 ] = Uj ⊕ (N [L0 ] ∩ · · · ∩ N [Lj ]). Moreover, these subspaces satisfy Lj (Uj ) = Lj (N [L0 ] ∩ · · · ∩ N [Lj−1 ]),
1 ≤ j ≤ m,
and dim Uj = dim Lj (Uj ),
1 ≤ j ≤ m.
(4.24)
4.3. Algebraic eigenvalues and transversalization
93
Thus, (4.21) can be expressed as Vm =
m
Lj (Uj ) ⊕ R[L0 ].
j=1
From now on, we set
m
Xm :=
Uj ⊕ Y.
j=1
Clearly, codim Xm = codim Vm , since L0 ∈ Fred0 (U, V ). Actually, the operators L0 |Y : Y → R[L0 ] and Lj |Uj : Uj → Lj (Uj ),
1 ≤ j ≤ m,
are isomorphisms, and Uj ⊂ N [Lh ] whenever 0 ≤ h < j ≤ m. Now, consider the projections P0 , . . . , Pm ∈ L(U )
and Q0 , . . . , Qm ∈ L(V )
such that R[P0 ] = Y,
N [P0 ] = N [L0 ],
R[Ph ] = Uh ,
N [Ph ] =
m
Uj ⊕ Y ,
1 ≤ h ≤ m,
j=1 j=h
R[Q0 ] = R[L0 ],
N [Q0 ] =
m
Lj (Uj ),
j=1
R[Qh ] = Lh (Uh ),
N [Qh ] =
m
Lj (Uj ) ⊕ R[L0 ], 1 ≤ h ≤ m.
j=1 j=h
Then, by definition of direct sum, in order to obtain (4.22) it suffices to construct Φ ∈ P[L(U )] such that M := LΦ satisfies (4.23) and Qh Mm+1 = 0,
0 ≤ h ≤ m.
(4.25)
0 ≤ h ≤ m + 1.
(4.26)
From the definition of M it is apparent that Mh =
h j=0
Lh−j Φj ,
94
Chapter 4. Algebraic Multiplicity Through Transversalization
Thus, to obtain (4.23) it is sufficient to choose Φ such that Φ0 = IU and

L_{h−j} Φj = 0,  1 ≤ j ≤ h ≤ m.

These relations can be equivalently written in the form

R[Φj] ⊂ N[L0] ∩ ··· ∩ N[L_{m−j}],  1 ≤ j ≤ m.    (4.27)

By construction, for each 1 ≤ j ≤ m,

N[L0] ∩ ··· ∩ N[L_{m−j}] = U_{m−j+1} ⊕ ··· ⊕ Um ⊕ (N[L0] ∩ ··· ∩ N[Lm])

and, hence, (4.27) is accomplished for every family Φ satisfying

P_{h−j} Φj = 0,  1 ≤ j ≤ h ≤ m.    (4.28)
Indeed, (4.28) implies

P0 Φj = 0,  1 ≤ j ≤ m,

and, hence,

R[Φj] ⊂ N[L0] = U1 ⊕ ··· ⊕ Um ⊕ (N[L0] ∩ ··· ∩ N[Lm]).

Moreover, P1 Φj = ··· = P_{m−j} Φj = 0. Therefore, (4.28) implies (4.27).

Assume Φ0 = IU and (4.27). Using (4.26), we find that (4.25) is satisfied if and only if

Σ_{j=0}^{m+1} Qh L_{m−j+1} Φj = 0,  0 ≤ h ≤ m.    (4.29)

Moreover, it follows from (4.27) that, for each 1 ≤ j ≤ m,

R[L_{m−j+1} Φj] ⊂ L_{m−j+1}(U_{m−j+1}),

since

R[Φj] ⊂ N[L0] ∩ ··· ∩ N[L_{m−j}] = U_{m−j+1} ⊕ (N[L0] ∩ ··· ∩ N[L_{m−j+1}]).

Thus, for each 1 ≤ j ≤ m and 0 ≤ h ≤ m with h ≠ m − j + 1, Qh L_{m−j+1} Φj = 0 and, hence, (4.29) can be written in the form

Qh L_{m+1} = −Qh Lh Φ_{m+1−h},  0 ≤ h ≤ m.    (4.30)
Further, since the operators

Q0 L0|_Y : Y → R[L0],  Qh Lh|_{Uh} : Uh → Lh(Uh),  1 ≤ h ≤ m,

are isomorphisms, (4.30) is satisfied provided

Φ_{m+1} = −P0 (Q0 L0|_Y)^{−1} Q0 L_{m+1}    (4.31)

and

Φ_{m+1−h} = −Ph (Qh Lh|_{Uh})^{−1} Qh L_{m+1},  1 ≤ h ≤ m.    (4.32)

As (4.28) does not entail any restriction on Φ_{m+1}, it is clear that (4.31) is compatible with (4.28). Moreover, choosing (4.32) yields

R[Φ_{m+1−h}] ⊂ Uh ⊂ N[L0] ∩ ··· ∩ N[L_{h−1}],  1 ≤ h ≤ m.

Equivalently,

R[Φj] ⊂ N[L0] ∩ ··· ∩ N[L_{m−j}],  1 ≤ j ≤ m.

Therefore, (4.27) holds true. This concludes the proof.
Throughout the rest of this book, for any integer n ≥ 1, vector spaces X1, . . . , Xn and vectors xj ∈ Xj, for 1 ≤ j ≤ n, we will denote

col[x1, . . . , xn] := (x1, . . . , xn)^T ∈ X1 × ··· × Xn.    (4.33)

Also, we use the notation trng, which has been partially introduced in (2.24). Given an integer n ≥ 1, two Banach spaces U, V, and n operators B1, . . . , Bn ∈ L(U, V), the operator

trng{B1, B2, . . . , Bn} :=
[ B1        0        ···   0  ]
[ B2        B1       ···   0  ]
[ ⋮         ⋮        ⋱     ⋮  ]
[ Bn        B_{n−1}  ···   B1 ]    (4.34)

will denote the element of L(U^n, V^n) defined through

trng{B1, B2, . . . , Bn} col[u1, . . . , un] := col[B1 u1, B2 u1 + B1 u2, . . . , Bn u1 + B_{n−1} u2 + ··· + B1 un]

for each u1, . . . , un ∈ U. The following result, of technical nature, will be useful in the proof of Theorem 4.3.2.
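In the finite-dimensional case U = V = K^d, where each Bj is identified with a d × d matrix, trng{B1, . . . , Bn} is nothing but the block lower-triangular Toeplitz matrix displayed in (4.34). The following minimal Python sketch is an illustration of ours only; the helper names trng and col are ad hoc, and it simply checks the matrix against the defining formula.

```python
import numpy as np

def trng(blocks):
    """trng{B1, ..., Bn} as the block lower-triangular Toeplitz matrix (4.34)."""
    n, d = len(blocks), blocks[0].shape[0]
    T = np.zeros((n * d, n * d))
    for i in range(n):           # block row i holds B_{i+1}, B_i, ..., B_1
        for j in range(i + 1):
            T[i*d:(i+1)*d, j*d:(j+1)*d] = blocks[i - j]
    return T

def col(vectors):
    """col[u1, ..., un] as one stacked vector."""
    return np.concatenate(vectors)

B = [np.random.rand(3, 3) for _ in range(4)]   # B1, ..., B4
u = [np.random.rand(3) for _ in range(4)]      # u1, ..., u4
lhs = trng(B) @ col(u)
rhs = col([sum(B[i - j] @ u[j] for j in range(i + 1)) for i in range(4)])
assert np.allclose(lhs, rhs)   # matches col[B1 u1, B2 u1 + B1 u2, ...]
```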
Lemma 4.3.5. Let r ∈ N and L, M ∈ C^r(Ω, L(U, V)) be such that, for some integer 1 ≤ k ≤ r, the spaces

R[L0], L1(N[L0]), . . . , Lk(⋂_{j=0}^{k−1} N[Lj])

are in direct sum, and so are

R[M0], M1(N[M0]), . . . , Mk(⋂_{j=0}^{k−1} N[Mj]).

Suppose, in addition, that there exists Υ ∈ C^r(Ω, L(U)) such that

Υ0 ∈ Iso(U)  and  L = MΥ.    (4.35)

Then, for every 0 ≤ j ≤ k,

Υ0(N[L0] ∩ ··· ∩ N[Lj]) = N[M0] ∩ ··· ∩ N[Mj]

and, consequently,

dim N[L0] ∩ ··· ∩ N[Lj] = dim N[M0] ∩ ··· ∩ N[Mj].

Proof. Applying the Leibniz rule to (4.35) yields

trng{L0, . . . , Ls} = trng{M0, . . . , Ms} · trng{Υ0, . . . , Υs}    (4.36)
for all integers 0 ≤ s ≤ r. Now pick 0 ≤ j ≤ k and x ∈ N[L0] ∩ ··· ∩ N[Lj]. Then

col[x, . . . , x] ∈ N[trng{L0, . . . , Lj}],

and, thanks to (4.36), we find that

trng{Υ0, . . . , Υj} col[x, . . . , x] ∈ N[trng{M0, . . . , Mj}].

Consequently, Lemma 4.1.1 implies that Υ0 x ∈ N[M0] ∩ ··· ∩ N[Mj] and, therefore, for each 0 ≤ j ≤ k,

Υ0(N[L0] ∩ ··· ∩ N[Lj]) ⊂ N[M0] ∩ ··· ∩ N[Mj].

As Υ0 is an isomorphism, there exists an open neighborhood Ω′ of λ0 such that the function Υ^{−1} defined by

Υ^{−1}(λ) := Υ(λ)^{−1}  for λ ∈ Ω′

belongs to C^r(Ω′, L(U)). In Ω′ we have M = LΥ^{−1}, and, therefore, according to the result that we have just proved, for each 0 ≤ j ≤ k,

Υ0^{−1}(N[M0] ∩ ··· ∩ N[Mj]) ⊂ N[L0] ∩ ··· ∩ N[Lj]

and, consequently,

N[M0] ∩ ··· ∩ N[Mj] ⊂ Υ0(N[L0] ∩ ··· ∩ N[Lj]),
which concludes the proof. Now, we proceed to prove Theorem 4.3.2.
Proof of Theorem 4.3.2. Let Y be a closed subspace of U, and Z a finite-dimensional subspace of V such that

U = N[L0] ⊕ Y  and  V = Z ⊕ R[L0].

Then, since L0 ∈ Fred0(U, V), we have dim Z = dim N[L0]. Consider continuous projections P0 ∈ L(U) and Q0 ∈ L(V) such that

R[P0] = Y,  N[P0] = N[L0],  R[Q0] = R[L0],  N[Q0] = Z.

The construction of the transversalizing local isomorphisms of U will be accomplished iteratively. We begin by constructing a polynomial Φ^1 ∈ P[L(U)] of the form

Φ^1(λ) := IU + (λ − λ0)Φ^1_1,  λ ∈ K,    (4.37)

so that the family L^1 := LΦ^1 satisfies

V1 := L^1_1(N[L^1_0]) ⊕ R[L^1_0] ⊂ V.

Necessarily, by (4.37),

L^1_0 = L0  and  L^1_1 = L1 + L0 Φ^1_1.

Clearly, it suffices to construct Φ^1_1 ∈ L(U) such that Q0 L1 + Q0 L0 Φ^1_1 = 0, which can be accomplished by making the choice

Φ^1_1 := −P0 (Q0 L0|_Y)^{−1} Q0 L1.
Now consider an integer 1 ≤ k ≤ r. Then, based on the previous result and applying Lemma 4.3.4 recursively, there exist Ψ^1, . . . , Ψ^k ∈ P[L(U)] such that

Ψ^m_0 := Ψ^m(λ0) = IU,  1 ≤ m ≤ k,

for which the family L^m defined by

L^m(λ) := L(λ)Ψ^1(λ) ··· Ψ^m(λ),  λ ∈ Ω,

satisfies

Vm := ⊕_{j=1}^{m} L^m_j(N[L^m_0] ∩ ··· ∩ N[L^m_{j−1}]) ⊕ R[L^m_0] ⊂ V.
This ends the proof of the first part of the theorem.

Now, we shall show that Vν = V if λ0 ∈ Algν(L) for some 1 ≤ ν ≤ r. As in the proof of Lemma 4.3.4, for each 1 ≤ j ≤ ν let Uj be a subspace of N[L0] satisfying (4.24), and set

Xm := ⊕_{j=1}^{m} Uj ⊕ Y,  1 ≤ m ≤ ν.

Arguing as in the proof of Lemma 4.3.4 shows that

codim Xm = codim Vm,  1 ≤ m ≤ ν.    (4.38)

Recall that L0 ∈ Fred0(U, V) and

L^m_0 = L0,  1 ≤ m ≤ ν.

Looking for a contradiction, assume that λ0 ∈ Algν(L) with 1 ≤ ν ≤ r and Vν ⊊ V. Then, it follows from (4.38) that Xm is a proper subspace of U for each 1 ≤ m ≤ ν. Moreover, by construction, we already know that

N[L0] = N[L^ν_0] = U1 ⊕ ··· ⊕ Uν ⊕ (N[L^ν_0] ∩ ··· ∩ N[L^ν_ν]).

Thus, since

U = N[L0] ⊕ Y  and  Xν ⊊ U,

we find that N[L^ν_0] ∩ ··· ∩ N[L^ν_ν] ≠ {0} and, therefore, we can pick a vector

u0 ∈ N[L^ν_0] ∩ ··· ∩ N[L^ν_ν] \ {0}.

Necessarily,

L^ν(λ)u0 = o((λ − λ0)^ν)  as λ → λ0.    (4.39)
Consequently, since λ0 ∈ Algν(L) and, by construction,

Ψ^1(λ) ··· Ψ^ν(λ)u0 = L(λ)^{−1} L^ν(λ)u0,

we find from (4.39) that

0 ≠ u0 = lim_{λ→λ0} Ψ^1(λ) ··· Ψ^ν(λ)u0 = lim_{λ→λ0} L(λ)^{−1} L^ν(λ)u0 = 0,

which is impossible. This contradiction shows that Vν = V and concludes the proof of the second assertion of the theorem.

Finally, suppose that λ0 ∈ Algν(L), with 1 ≤ ν ≤ r, and let Φ, Ψ ∈ C^r(Ω, L(U)) be such that Φ0, Ψ0 ∈ Iso(U) and, for some 1 ≤ k, ℓ ≤ ν, the point λ0 is a k-transversal eigenvalue of L^Φ := LΦ, while λ0 is an ℓ-transversal eigenvalue of L^Ψ := LΨ. Then, L^Φ = L^Ψ Ψ^{−1} Φ and, according to Lemma 4.3.5, it is apparent that

dim N[L^Φ_0] ∩ ··· ∩ N[L^Φ_j] = dim N[L^Ψ_0] ∩ ··· ∩ N[L^Ψ_j]    (4.40)

for all 0 ≤ j ≤ min{k, ℓ}. On the other hand, according to Proposition 4.2.2, we have that, for each 1 ≤ j ≤ min{k, ℓ},

dim ⋂_{i=0}^{j−1} N[L^Φ_i] = dim L^Φ_j(⋂_{i=0}^{j−1} N[L^Φ_i]) + dim ⋂_{i=0}^{j} N[L^Φ_i],

dim ⋂_{i=0}^{j−1} N[L^Ψ_i] = dim L^Ψ_j(⋂_{i=0}^{j−1} N[L^Ψ_i]) + dim ⋂_{i=0}^{j} N[L^Ψ_i].

Consequently, putting together the above equalities with (4.40) shows

dim L^Φ_j(N[L^Φ_0] ∩ ··· ∩ N[L^Φ_{j−1}]) = dim L^Ψ_j(N[L^Ψ_0] ∩ ··· ∩ N[L^Ψ_{j−1}])

for all 0 ≤ j ≤ min{k, ℓ}. As, by Definition 4.2.1, we have that

L^Φ_k(⋂_{i=0}^{k−1} N[L^Φ_i]) ≠ {0}  and  L^Ψ_ℓ(⋂_{i=0}^{ℓ−1} N[L^Ψ_i]) ≠ {0},

necessarily k = ℓ. This concludes the proof.
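To see the previous construction at work in the simplest possible setting, take U = V = K², λ0 = 0 and the polynomial family L(λ) with entries (1, λ; λ, 0), so that det L(λ) = −λ². Here L1(N[L0]) = R[L0], so the spaces of Definition 4.2.1 are not in direct sum and a genuine transversalizing polynomial Φ^1 = IU + λΦ^1_1 is needed. The following Python sketch is our own numerical illustration only; the use of the Moore-Penrose pseudo-inverse to realize (Q0 L0|_Y)^{−1}Q0 is a shortcut that is valid here solely because Y, N[L0], R[L0] and Z are coordinate axes.

```python
import numpy as np

# L(lam) = [[1, lam], [lam, 0]] near lam0 = 0; det L(lam) = -lam**2.
L0 = np.array([[1., 0.], [0., 0.]])   # L(0):   N[L0] = span{e2}, R[L0] = span{e1}
L1 = np.array([[0., 1.], [1., 0.]])   # L'(0):  L1(N[L0]) = span{e1} = R[L0], so 0 is not 1-transversal

P0 = np.diag([1., 0.])                # projection onto Y = span{e1} along N[L0]
Q0 = np.diag([1., 0.])                # projection onto R[L0] along Z = span{e2}

# First step of the proof: Phi11 := -P0 (Q0 L0|_Y)^{-1} Q0 L1.
Phi11 = -P0 @ np.linalg.pinv(Q0 @ L0) @ Q0 @ L1

# Taylor coefficients of L^Phi(lam) := L(lam) (I + lam * Phi11):
M0 = L0
M1 = L1 + L0 @ Phi11
M2 = L1 @ Phi11

assert np.allclose(Q0 @ M1, 0)        # M1 now maps U into Z, so R[M0] + M1(N[M0]) is a direct sum
# Moreover N[M0] ∩ N[M1] = span{e2} and M2 e2 = (0, -1), hence
# R[M0] ⊕ M1(N[M0]) ⊕ M2(N[M0] ∩ N[M1]) = K^2: 0 is a 2-transversal eigenvalue of L^Phi,
# and chi[L; 0] = 1*0 + 2*1 = 2, the order of the zero of det L at 0.
```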
4.4 Perturbation from simple eigenvalues

Throughout this section we assume that U is a subspace of V with continuous inclusion, that L ∈ C^r(Ω, L(U, V)) for some r ≥ 1, and that 0 is a simple eigenvalue of L0, which means that

dim N[L0] = 1  and  N[L0] ⊕ R[L0] = V.    (4.41)
Exercises 2 and 3 will provide a list of necessary and sufficient conditions for the transversality condition of (4.41) to hold. In particular, it will be shown that L0 ∈ Fred0(U, V) if dim N[L0] < ∞ and N[L0] ⊕ R[L0] = V. Consequently, in this case, it is not necessary to impose the Fredholm condition L0 ∈ Fred0(U, V), as we have done in this chapter.

Under these general assumptions, the zero eigenvalue of L0 perturbs into a unique eigenvalue a(λ) of L(λ). Precisely, the following perturbation result is satisfied.

Lemma 4.4.1. Suppose U is a subspace of V with continuous inclusion, r ∈ N* ∪ {∞, ω}, the family L ∈ C^r(Ω, L(U, V)) satisfies (4.41), and λ0 ∈ Ω. Let Y be a closed subspace of U and ϕ0 ∈ N[L0] \ {0} such that

N[L0] = span[ϕ0]  and  N[L0] ⊕ Y = U.

Then there exist an open neighborhood Ω′ of λ0 and two unique functions a ∈ C^r(Ω′, K) and ϕ ∈ C^r(Ω′, U) such that, for each λ ∈ Ω′,

L(λ)ϕ(λ) = a(λ)ϕ(λ)    (4.42)

and

a(λ0) = 0,  ϕ(λ0) = ϕ0,  ϕ(λ) − ϕ0 ∈ Y.
Proof. Let 𝒰 be a neighborhood of (λ0, 0, 0) in K × K × Y such that the operator F : 𝒰 → V defined by

F(λ, a, z) := L(λ)(ϕ0 + z) − a(ϕ0 + z),  (λ, a, z) ∈ 𝒰,

is well defined. Then, F ∈ C^r(𝒰, V) and F(λ0, 0, 0) = 0. Moreover, the derivative of F(λ0, ·, ·) at (0, 0) equals the operator T ∈ L(K × Y, V) defined through

T(a, z) := L0 z − aϕ0,  (a, z) ∈ K × Y.

Condition (4.41) entails T being an isomorphism. The implicit function theorem easily completes the proof.

One of the main goals of perturbation theory is to ascertain the first non-vanishing term of the Taylor expansion of the perturbed eigenvalue function a, as the knowledge of those terms is imperative in most applications of the underlying theory. The next result shows that a1 := a′(λ0) ≠ 0 if and only if λ0 is a 1-transversal eigenvalue of L.

Proposition 4.4.2. Under the same hypotheses and notation of Lemma 4.4.1, a1 ≠ 0 if and only if λ0 is a 1-transversal eigenvalue of L.
Proof. Differentiating (4.42) and particularizing at λ0 gives

L1 ϕ0 + L0 ϕ1 = a1 ϕ0.    (4.43)

Thus, L1 ϕ0 = −L0 ϕ1 ∈ R[L0] if a1 = 0. Conversely, L1 ϕ0 ∈ R[L0] implies that L1 ϕ0 + L0 ϕ1 ∈ R[L0] and, hence, due to (4.43), a1 ϕ0 ∈ R[L0]. Therefore, a1 = 0, because we are assuming that ϕ0 ∉ R[L0], by (4.41).

The following result shows that the algebraic multiplicity χ[L; λ0] of L at λ0 equals ord_{λ0} a, where a is the perturbed eigenvalue function from the zero eigenvalue of L0, whose existence was established by Lemma 4.4.1.

Theorem 4.4.3. Under the same hypotheses and notation of Lemma 4.4.1, suppose, in addition, that λ0 ∈ Algν(L) for some integer 1 ≤ ν ≤ r. Then, for every choice of Y,

χ[L; λ0] = ord_{λ0} a.

Moreover, χ[L; λ0] equals the order of transversality of all transversalized families from L at λ0.

Proof. According to Theorem 4.3.2, there exist k ∈ {1, . . . , ν} and Φ ∈ P[L(U)] with Φ0 = IU for which λ0 is a k-transversal eigenvalue of L^Φ := LΦ. According to Definitions 4.2.1 and 4.3.3, we have that χ[L; λ0] = k, because codim R[L0] = 1, by (4.41). Consequently, to prove the theorem it suffices to show the identity ord_{λ0} a = k. Note that, by Definition 4.2.1,

L^Φ_k(N[L^Φ_0] ∩ ··· ∩ N[L^Φ_{k−1}]) ≠ {0}

and, hence, N[L^Φ_0] ∩ ··· ∩ N[L^Φ_{k−1}] is a non-zero space contained in N[L^Φ_0] = span[ϕ0], so

⋂_{j=0}^{k−1} N[L^Φ_j] = span[ϕ0].
Set

ψ(λ) := Φ(λ)^{−1} ϕ(λ)  for λ ∼ λ0.

Then, identity (4.42) can be expressed in the form

L^Φ(λ)ψ(λ) = a(λ)ϕ(λ)  for λ ∼ λ0.    (4.44)
We now show that am ≠ 0 for some 1 ≤ m ≤ k. Suppose the opposite to be true, so that aj = 0 for each 0 ≤ j ≤ k. Then, by (4.44), L^Φ(λ)ψ(λ) = o((λ − λ0)^k) for λ ∼ λ0 and, hence, according to Lemma 4.1.2 applied to L^Φ,

ψ0 ∈ ⋂_{j=0}^{k} N[L^Φ_j].

On the other hand, by Proposition 4.2.2 applied to L^Φ, we have

⋂_{j=0}^{k} N[L^Φ_j] = {0}.

Therefore, ψ0 = 0, which is a contradiction, since ψ0 = Φ0^{−1} ϕ0 = ϕ0 ≠ 0. Consequently, 1 ≤ m := ord_{λ0} a ≤ k and, hence, it follows from (4.44) that L^Φ(λ)ψ(λ) = o((λ − λ0)^{m−1}) for λ ∼ λ0. Thus, Lemma 4.1.2 applied to L^Φ shows that

ψ0, . . . , ψ_{m−1} ∈ N[L^Φ_0] = ⋂_{j=0}^{m−1} N[L^Φ_j] = span[ϕ0].

Combining this with (4.44) and using the transversality of L^Φ yields

L^Φ_0 ψm + L^Φ_m ψ0 = am ϕ0 ≠ 0

and, hence, L^Φ_m ϕ0 ≠ 0, because of L^Φ_0 = L0 and (4.41). Therefore,

dim L^Φ_m(N[L^Φ_0] ∩ ··· ∩ N[L^Φ_{m−1}]) = 1.

This implies k = m and ends the proof.
Theorem 5.7.1 will provide us with an improvement of Theorem 4.4.3 by simply assuming that χ[L; λ0] is well defined, even if λ0 ∉ Alg(L) (see also Exercise 5).
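As a quick numerical companion to Theorem 4.4.3 (again in the finite-dimensional situation U = V = K², where the function a of Lemma 4.4.1 is a classical eigenvalue of L(λ)), consider once more the family with entries (1, λ; λ, 0) and λ0 = 0. Zero is a simple eigenvalue of L0, the perturbed eigenvalue is a(λ) = (1 − √(1 + 4λ²))/2, and the sketch below, which is illustrative only, confirms that ord₀ a = 2 = χ[L; 0].

```python
import numpy as np

def L(lam):
    return np.array([[1.0, lam], [lam, 0.0]])

def a(lam):
    """Perturbed eigenvalue of Lemma 4.4.1: the classical eigenvalue of L(lam) emanating from 0."""
    return min(np.linalg.eigvalsh(L(lam)), key=abs)

for lam in [1e-1, 1e-2, 1e-3]:
    print(lam, a(lam) / lam**2)    # the ratios approach -1, so ord_0 a = 2 = chi[L; 0]
```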
4.5 Exercises

1. Identify L(K²) with M2(K), and let L, L* ∈ C^ω(K, L(K²)) be the operator families defined by

L(λ) := [ λ   λ² ]
        [ 1   0  ],    L*(λ) := L(λ)*,    λ ∈ K.

Prove that 0 is a 2-transversal eigenvalue of L, but it is not a transversal eigenvalue of L*.

2. Suppose U is a subspace of V with continuous inclusion and let B ∈ L(U, V) satisfy

dim N[B] < ∞  and  N[B] ⊕ R[B] = V.

Prove that B ∈ Fred0(U, V).

3. Prove that the following conditions are equivalent for every A ∈ Fred0(U, V) and α ∈ N:
• N[A^α] ⊕ R[A^α] = V.
• N[A^α] ∩ R[A^α] = {0}.
• N[A^α] + R[A^α] = V.
• N[A^α] ⊕ (U ∩ R[A^α]) = U.
• The restriction of A^α from U ∩ R[A^α] to V is injective.
• The restriction of A^α from U ∩ R[A^α] to R[A^α] is an isomorphism.
• N[A^α] = N[A^{α+1}].
• N[A^α] = N[A^β] for all β ≥ α.
• R[A^α] = R[A^{α+1}].
• R[A^α] = R[A^β] for all β ≥ α.

4.
Given T ∈ L(U ), consider the family L ∈ C ω (K, L(U )) defined by L(λ) := λI − T,
λ ∈ K,
and let λ0 ∈ Eig(L) be such that L0 ∈ Fred0 (U ). Prove the equivalence of the following assertions: • λ0 is a transversal eigenvalue of L. • λ0 is a 1-transversal eigenvalue of L. • N [L0 ] ⊕ R[L0 ] = U . In such a case, χ[L; λ0 ] = dim N [L0 ]. 5. Identify L(R) with R, and suppose L ∈ C r (Ω, L(R)) for some r ∈ N ∪ {∞}. Prove that ordλ0 L is defined if and only if so is χ[L; λ0 ], and that, in such a case, χ[L; λ0 ] = ordλ0 L. 6. Suppose L ∈ C r (Ω, L(U, V )) for some r ∈ N ∪ {∞}, and for a certain λ0 ∈ Ω one has L0 ∈ Fred0 (U, V ), and χ[L; λ0 ] is defined. Prove that χ[L; λ0 ] ≥ dim N [L0 ].
7. Let L ∈ C^r(Ω, L(U, V)) be a family for which χ[L; λ0] is defined, and consider the new family M defined by

M(λ) := L(λ + λ0),  for λ + λ0 ∈ Ω.

Prove that χ[M; 0] = χ[L; λ0].

8. Let U, V, W be Banach spaces, r ∈ N ∪ {∞}, and

L ∈ C^r(Ω, L(U, V)),  Φ ∈ C^r(Ω, L(V, W))

such that L0 ∈ Fred0(U, V) and Φ0 ∈ Iso(V, W) for some λ0 ∈ Ω. Given an integer 0 ≤ k ≤ r, prove the following:
• λ0 is a k-transversal eigenvalue of L if and only if it is a k-transversal eigenvalue of ΦL.
• χ[L; λ0] is defined if and only if χ[ΦL; λ0] is defined, and, in such a case, χ[L; λ0] = χ[ΦL; λ0].

9. Let U1, U2, V1, V2 be Banach spaces and λ0 ∈ Ω ⊂ K. Consider

L ∈ C^r(Ω, L(U1, V1))  and  M ∈ C^r(Ω, L(U2, V2))

such that L0 ∈ Fred0(U1, V1) and M0 ∈ Fred0(U2, V2). Define the product family L × M as

(L × M)(λ) = L(λ) × M(λ) ∈ L(U1 × U2, V1 × V2),  λ ∈ Ω.

Prove, step by step, the following:
• For all integers 0 ≤ s ≤ r,

(L × M)_s(⋂_{j=0}^{s−1} N[(L × M)_j]) = L_s(⋂_{j=0}^{s−1} N[Lj]) × M_s(⋂_{j=0}^{s−1} N[Mj]).

• If λ0 is a k-transversal eigenvalue of L and λ0 is an ℓ-transversal eigenvalue of M, then λ0 is a max{k, ℓ}-transversal eigenvalue of L × M.
• If λ0 is a k-algebraic eigenvalue of L and λ0 is an ℓ-algebraic eigenvalue of M, then λ0 is a max{k, ℓ}-algebraic eigenvalue of L × M.
• Suppose r = ∞. If λ0 ∉ Alg(L) or λ0 ∉ Alg(M), then λ0 ∉ Alg(L × M).
• χ[L × M; λ0] is defined if and only if both χ[L; λ0] and χ[M; λ0] are defined, and, in such a case, χ[L × M; λ0] = χ[L; λ0] + χ[M; λ0].
4.6 Comments on Chapter 4

The construction of the algebraic multiplicity presented in this chapter goes back to Esquinas & López-Gómez [29, 30, 31, 82]. Originally, it was introduced in the context of bifurcation theory to characterize the nonlinear eigenvalues of L, which further enhanced a huge industry in topological global analysis. A value λ0 ∈ Ω is said to be a nonlinear eigenvalue of L ∈ C^r(Ω, L(U, V)), for some r ≥ 2, if for every map N ∈ C^2(Ω × U, V) such that

N(λ, 0) = 0,  lim_{u→0} N(λ, u)/‖u‖ = 0,  λ ∼ λ0,

there exists a sequence {(ξn, un)}_{n≥1} in Ω × (U \ {0}) such that

lim_{n→∞} (ξn, un) = (λ0, 0)  and  L(ξn)un + N(ξn, un) = 0,  n ≥ 1.
The theory of Esquinas & López-Gómez established that, when χ[L; λ0] is defined, a λ0 ∈ Eig(L) is a nonlinear eigenvalue of L if and only if χ[L; λ0] ∈ 2N + 1. These topics will be treated in Chapter 12, where we shall give a proof of the theorem of characterization of nonlinear eigenvalues through the local Smith form studied in Chapter 7 and the topological degree, which will be introduced there. So, we stop this discussion here.

The concept of algebraic eigenvalue goes back to López-Gómez [82]. It was introduced to overcome a serious gap in the proof of the main theorem of Esquinas [29], which is true as stated therein if L ∈ C^ω(Ω, L(U, V)), because in such a case all isolated eigenvalues must be algebraic, as will become apparent in Chapter 8.

Since at least Poincaré, the study of eigenvalue perturbations has been a milestone in the generation of new problems and ideas in spectral theory. Possibly, the paradigm of the perturbation theory of linear operators is the celebrated book of Kato [65]. A briefer text on the perturbation theory of Fredholm operators is the review paper by Gohberg & Kreĭn [45]. In fact, Lemma 4.4.1 is attributable to Kato [65], where, in addition, the problem of constructing generalized algebraic multiplicities for families of the form

L(λ) = L0 + (λ − λ0)L1,  λ ∈ K,

in the context of perturbation theory, was originally addressed.

Theorem 4.4.3 gives a rather satisfactory answer to the problem of analyzing spectral perturbations from simple eigenvalues; it goes back to López-Gómez [82], where, actually, a completely general answer to the perturbation problem was proposed by means of the topological degree. Essentially, when K = R, the algebraic multiplicity χ[L; λ0] measures the parity of the Poincaré crossing number of L(λ) as λ crosses λ0. By
crossing number is meant the total number of classical eigenvalues of L(λ) that cross the imaginary axis as λ crosses λ0. This feature is crucial in the context of the theory of dynamical systems. According to Theorem 4.4.3, a(λ) changes sign as λ crosses λ0 if and only if χ[L; λ0] is odd, and, consequently, χ[L; λ0] provides us with an algebraic invariant characterizing whether or not a(λ) crosses the imaginary axis along the real axis. Such characterizations are imperative to study the physical plausibility of (λ, 0), as a mathematical solution, in a number of nonlinear equations describing real-world models of the form

L(λ)u + N(λ, u) = 0,

where N(λ, u) = o(‖u‖) is some nonlinear operator. Further details on these and some other closely related nonlinear topics, directed to an interdisciplinary audience, can be found in López-Gómez [85, 82].

Proposition 4.4.2 can be stated by saying that if zero is a simple eigenvalue of L0, then a1 := a′(λ0) ≠ 0 (a transversality condition of dynamical nature) if and only if

L1(N[L0]) ⊕ R[L0] = V    (4.45)

(a transversality condition of algebraic nature). This crucial feature goes back to Crandall & Rabinowitz [22, 23], and lies at the roots of their extremely celebrated bifurcation theorem and exchange of stability principle. Actually, (4.45) is referred to in the specialized literature on nonlinear analysis as the transversality condition of Crandall and Rabinowitz. From the very beginning, (4.45) triggered the abstract theory developed in this chapter. By its huge number of applications in the context of nonlinear partial differential equations, there is no doubt that the results of Crandall & Rabinowitz [22, 23] are among the most important mathematical works of the last decades. Although the general exposition of this chapter has been based on [82], the notation and materials included here have been considerably polished and refined.
Chapter 5
Algebraic Multiplicity Through Polynomial Factorization

This chapter describes an equivalent approach to the concept of multiplicity χ[L; λ0] introduced in Chapter 4; on this occasion by means of an appropriate polynomial factorization of L at λ0. However, at first glance these approaches are seemingly completely different. The approach adopted in this chapter provides two extremely relevant properties of χ[L; λ0], not discussed in Chapter 4, and most of the uniqueness results of Chapter 6. Precisely, it might be viewed as a technical device to prove the following:

• Given λ0 ∈ Ω and L ∈ C^∞(Ω, L(U, V)), the family L can be transversalized at λ0 ∈ Ω if and only if λ0 ∈ Alg(L), equivalently, if and only if χ[L; λ0] < ∞.

• The algebraic multiplicity of a product equals the sum of the algebraic multiplicities (product formula).

Chapter 5 is organized as follows. Section 5.1 constructs the new concept of multiplicity, denoted by µ[L; λ0], and proves its most elementary properties. Section 5.2 establishes some hidden connections between the multiplicities µ and χ. Section 5.3 shows that µ = χ and uses this equivalence to prove that a C^∞ family L can be transversalized at λ0 if and only if λ0 ∈ Alg(L), which is an improvement of the second assertion of Theorem 4.3.2. Section 5.4 gives a formula for the partial multiplicities introduced in Section 5.1. Section 5.5 consists of the proof of the following real non-analytic counterpart of the celebrated theorem of Riemann on removable singularities.

Theorem 5.0.1 (theorem of removable singularities). Let r ∈ N ∪ {∞}, consider λ0 ∈ Ω and L ∈ C^r(Ω, L(U, V)) with L(λ0) ∈ Fred0(U, V). Suppose (λ − λ0)^ν L(λ)^{−1} exists and is bounded for λ in Ω \ {λ0}, for some ν ∈ [0, r] ∩ Z. Then
there exists an open subset Ω′ ⊂ Ω with λ0 ∈ Ω′ for which the function

λ ↦ (λ − λ0)^ν L(λ)^{−1},  λ ∈ Ω′ \ {λ0},

admits an extension of class C^{r−ν} to Ω′.

Theorem 5.0.1 is a deep result whose proof is based on the main theorem of this chapter (Theorem 5.3.1). The condition L0 ∈ Fred0(U, V) cannot be removed from its statement (see Exercise 9). The assumption

(λ − λ0)^ν L(λ)^{−1} exists and is bounded for λ in Ω \ {λ0}, for some ν ∈ [0, r] ∩ Z,

holds if and only if λ0 ∈ Algk(L) for some k ∈ [0, r] ∩ Z. In such a case, the theorem applies for all ν ∈ [k, r] ∩ Z. Chapter 8 shows that this is actually the case when L is analytic and λ0 is an isolated singularity of L^{−1}. Section 5.6 is devoted to the proof of the product formula. Finally, Section 5.7 gives the sharp version of Theorem 4.4.3 announced at the end of Section 4.4.

As in Chapter 4, throughout this chapter we consider K ∈ {R, C}, two K-Banach spaces U and V, an open subset Ω ⊂ K, a point λ0 ∈ Ω, and a family L ∈ C^r(Ω, L(U, V)) for some r ∈ N ∪ {∞}, such that L0 := L(λ0) ∈ Fred0(U, V), even if not mentioned. As Fred0(U, V) is an open set of L(U, V), the previous conditions actually imply

L(λ) ∈ Fred0(U, V)  for λ ∼ λ0.
5.1 Derived families and factorization

Given the family L we consider the r + 1 auxiliary families

M{j} : Ω → L(U, V),  0 ≤ j < r + 1,

defined inductively as follows:

1. M{0} := L.

2. Let 0 ≤ i < r and assume that a family M{i} ∈ C^{r−i}(Ω, L(U, V)) has been defined. Consider a projection Πi ∈ L(U) with range N[M{i}(λ0)]. Then, we define

M{i+1}(λ) := M{i}(λ)[(λ − λ0)^{−1} Πi + IU − Πi]  if λ ≠ λ0,
M{i+1}(λ0) := M{i}′(λ0)Πi + M{i}(λ0)  if λ = λ0.

Note that M{i}(λ0)Πi = 0 and M{i+1} ∈ C^{r−i−1}(Ω, L(U, V)).
Definition 5.1.1. The (finite or infinite) sequence {M{j} }rj=0 is called a derived sequence from L at λ0 . For each 0 ≤ j < r + 1, the family M{j} is said to be a jth derived family from L at λ0 . The sequence {Πj }rj=0 is said to be the sequence of projections corresponding to {M{j} }rj=0 . From now on, given a finite-rank projection P ∈ L(U ), we will consider the polynomial family RP ∈ P[L(U )] defined by RP (t) := P + t(IU − P ),
t ∈ K.
(5.1)
Note that RP (t) is invertible for all t = 0 and that RP (t)−1 = RP (t−1 ),
t ∈ K \ {0}.
(5.2)
Using this notation, one may express M{i+1} (λ) = (λ − λ0 )−1 M{i} (λ)RΠi (λ − λ0 ),
λ ∈ Ω \ {λ0 }.
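For a polynomial matrix family this last formula makes the derived sequence, and hence the quantity Σ_j dim N[M{j}(λ0)] that will define the µ-multiplicity below, mechanically computable. The following SymPy sketch is only an illustration of ours for the finite-dimensional case: it takes the orthogonal projection as each Πj (any projection with the prescribed range would do), and the function name is ad hoc.

```python
import sympy as sp

lam = sp.symbols('lambda')

def mu_via_derived_sequence(Lmat, lam0=0, max_steps=12):
    """Sum of dim N[M{j}(lam0)] over the derived families of a polynomial matrix family."""
    M, total = Lmat, 0
    for _ in range(max_steps):
        kernel = M.subs(lam, lam0).nullspace()
        if not kernel:                    # M{j}(lam0) is invertible: stop
            return total
        total += len(kernel)
        K = sp.Matrix.hstack(*kernel)
        Pi = K * (K.T * K).inv() * K.T    # orthogonal projection onto N[M{j}(lam0)]
        R = Pi + (lam - lam0) * (sp.eye(M.shape[1]) - Pi)   # R_{Pi_j}(lam - lam0)
        M = sp.simplify((M * R) / (lam - lam0))             # next derived family
    raise RuntimeError("no invertible derived family reached")

Lmat = sp.Matrix([[lam, 0], [0, lam**2]])
print(mu_via_derived_sequence(Lmat))   # 3 = order of the zero of det L(lam) = lam**3 at lam0 = 0
```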
In the construction of M{j+1} from M{j} , the projection Πj can be chosen in many different ways. Consequently, there are, in fact, infinitely many derived sequences from L at λ0 . The following result collects their main properties. In particular, it justifies the way they have been constructed. Note that, in Step 2 above, the projection Πj exists if dim N [M{j} (λ0 )] < ∞, since every finitedimensional subspace of U possesses a topological supplement. Lemma 5.1.2. Let {M{j} }rj=0 be a derived sequence from L at λ0 with corresponding sequence of projections {Πj }rj=0 . Take j ∈ N ∩ [0, r]. Then M{j} (λ) ∈ Fred0 (U, V ) for all λ ∈ Ω satisfying L(λ) ∈ Fred0 (U, V ). Moreover, M{j} (λ) = (λ − λ0 )−j L(λ)RΠ0 (λ − λ0 ) · · · RΠj−1 (λ − λ0 )
(5.3)
for all λ ∈ Ω \ {λ0 }. Furthermore, for every j ∈ N ∩ [0, r − 1], N [Πj ] ∩ N [M{j+1} (λ0 )] = {0}
(5.4)
dim N [M{j+1} (λ0 )] ≤ dim N [M{j} (λ0 )].
(5.5)
and Proof. First, we show that M{j} (λ) ∈ Fred0 (U, V ) if L(λ) ∈ Fred0 (U, V ). The proof is a simple induction on j. As M{0} = L, there is nothing to prove when j = 0. Suppose M{j} (λ) ∈ Φ0 (U, V ) for some 0 ≤ j < r and all λ ∈ Ω satisfying L(λ) ∈ Φ0 (U, V ). Then, N [M{j} (λ0 )] is finite dimensional and, hence, there exists a projection Πj ∈ L(U ) with R[Πj ] = N [M{j} (λ0 )].
Thus, by definition, M{j+1} (λ0 ) = M{i} (λ0 )Πi + M{i} (λ0 ) is a sum of a finite-rank operator plus an element of Fred0 (U, V ), and, hence, it belongs to Fred0 (U, V ). If λ = λ0 and L(λ) ∈ Fred0 (U, V ), then, by definition, M{j+1} (λ) := (λ − λ0 )−1 M{j} (λ)RΠj (λ − λ0 ) and, hence, due to the induction hypothesis, M{j+1} (λ) is given by a product of a Fredholm operator of index zero with an invertible operator. Therefore, M{j+1} (λ) ∈ Fred0 (U, V ) (see the properties of Fredholm operators listed in the presentation of Part II). We now prove (5.3). By definition, for each λ ∈ Ω \ {λ0 }, M{1} (λ) = (λ − λ0 )−1 L(λ)RΠ0 (λ − λ0 ). Suppose (5.3) is satisfied by a particular j. Then, M{j+1} (λ) =(λ − λ0 )−1 M{j} (λ)RΠj (λ − λ0 ) =(λ − λ0 )−(j+1) L(λ)RΠ0 (λ − λ0 ) · · · RΠj (λ − λ0 ), which proves (5.3). To prove (5.4), let x ∈ N [Πj ] ∩ N [M{j+1} (λ0 )]. Then,
0 = M{j+1} (λ0 )x = M{i} (λ0 )Πi x + M{i} (λ0 )x = M{j} (λ0 )x
and, hence, x ∈ N [M{j} (λ0 )]. Therefore, 0 = Πj x = x. This proves (5.4), and, hence, Πj establishes a linear injection from N [M{j+1} (λ0 )] to N [M{j} (λ0 )] and, consequently, (5.5) holds. Suppose M{j} (λ0 ) is invertible for some j ≥ 0. Then, it follows from (5.5) that for each j ≤ k < r + 1, N [M{k} (λ0 )] = N [M{j} (λ0 )] = {0} and, hence, M{k} = M{j} , by definition. Bearing this in mind, we can introduce the following concepts of ascent and multiplicity. Definition 5.1.3. Let {M{j} }rj=0 be a derived sequence from L at λ0 . Then, the ascent of L at λ0 , denoted by α[L; λ0 ], is defined as follows:
• α[L; λ0 ] equals the least j ∈ N ∩ [0, r] for which M{j} (λ0 ) ∈ Iso(U, V ), if such a j exists. • α[L; λ0 ] = ∞, if r = ∞ and dim N [M{j} (λ0 )] ≥ 1 for all j ∈ N. For every integer 0 ≤ j ≤ r, the jth partial µ-multiplicity of L at λ0 is defined through j µj [L; λ0 ] := dim N [M{i} (λ0 )], i=0
and the µ-multiplicity of the operator family L at λ0 , denoted by µ[L; λ0 ], is defined by: • µ[L; λ0 ] := µα[L;λ0 ] [L; λ0 ], if α[L; λ0 ] < ∞. • µ[L; λ0 ] := ∞, if α[L; λ0 ] = ∞. The ascent α[L; λ0 ] and the µ-multiplicity µ[L; λ0 ] are left undefined when r ∈ N, the family L is of class C r , but L is not of class C r+1 in any neighborhood of λ0 , and M{r} (λ0 ) is not invertible. Moreover, we have actually defined α[L; λ0 ] = µ[L; λ0 ] = 0 if L0 ∈ Iso(U, V ). Conversely, L0 ∈ Iso(U, V ) if α[L; λ0 ] = 0 or µ[L; λ0 ] = 0. The concepts introduced by Definition 5.1.3 are consistent, in the sense that they do not vary when the sequence of derived families changes. Indeed, ˆ {j} }r let {M{j} }rj=0 and {M j=0 be two arbitrary derived sequences from L at λ0 with corresponding projections {Πj }rj=0 and {Πˆj }rj=0 , respectively. We claim that for each 0 ≤ j < r + 1 there exists a polynomial Θj ∈ P[L(U )] such that Θj (λ0 ) ∈ Iso(U ) and ˆ {j} (λ) = M{j} (λ)Θj (λ), M
λ ∈ Ω.
(5.6)
This actually shows the consistency of Definition 5.1.3, because it implies ˆ {j} (λ0 )] = dim N [M{j} (λ0 )], dim N [M
j ∈ N ∩ [0, r].
To prove the previous claim we proceed by induction on j. Assume j = 0. Then, (5.6) is satisfied with λ ∈ K, Θ0 (λ) = IU , since ˆ {0} = M{0} = L. M Now, assume that (5.6) is satisfied for some particular j ≥ 0 and that Θj and Θ−1 j are polynomials. Then, ˆ {j} (λ0 )]) = N [M{j} (λ0 )] Θj (λ0 )(N [M
112
Chapter 5. Algebraic Multiplicity Through Polynomial Factorization
and, hence, ˆ j = Θj (λ0 )Π ˆj, Πj Θj (λ0 )Π
ˆ j Θj (λ0 )−1 Πj = Θj (λ0 )−1 Πj . Π
(5.7)
Thus, using the induction hypothesis and (5.2), one obtains that, for each λ ∈ Ω \ {λ0 }, ˆ j (λ)R ˆ (λ − λ0 ) ˆ {j+1} (λ) = (λ − λ0 )−1 M M Πj = (λ − λ0 )−1 M{j} (λ)Θj (λ)RΠˆ j (λ − λ0 ) = (λ − λ0 )−1 M{j} (λ)RΠj (λ − λ0 )Θj+1 (λ)
(5.8)
= M{j+1} (λ)Θj+1 (λ), where Θj+1 is defined through Θj+1 (λ) := RΠj ((λ − λ0 )−1 )Θj (λ)RΠˆ j (λ − λ0 ),
λ ∈ K \ {λ0 },
whose inverse is given by Θj+1 (λ)−1 = RΠˆ j ((λ − λ0 )−1 )Θj (λ)−1 RΠj (λ − λ0 ),
λ ∈ K \ {λ0 }.
It remains to show that the families Θj+1 and Θ−1 j+1 are indeed polynomials when extended to K by passing to the limit as λ → λ0 . The unique doubtful terms in their developments are ˆj (λ − λ0 )−1 (IU − Πj )Θj (λ)Π
ˆ j )Θj (λ)−1 Πj , and (λ − λ0 )−1 (IU − Π
respectively, but, actually, owing to (5.7), we have that ˆ j = 0 and (IU − Π ˆ j )Θj (λ0 )−1 Πj = 0. (IU − Πj )Θj (λ0 )Π As, according to the induction hypothesis Θj and Θ−1 j are polynomials, necessarily so are Θj+1 and Θ−1 . Finally, passing to the limit as λ → λ0 in (5.8) shows also j+1 that ˆ {j+1} (λ0 ) = M{j+1} (λ0 )Θj+1 (λ0 ). M This concludes the induction and proves the consistency of Definition 5.1.3. The next result shows that the ascent α[L; λ0 ] and the multiplicity µ[L; λ0 ] are invariant by pre- and post-multiplication of local isomorphism families. ˆ ∈ C r (Ω, L(U, V )) with L0 ∈ Fred0 (U, V ). Proposition 5.1.4. Suppose r ≥ 1 and L, L r ˆ = LΦ, then If there exist Φ ∈ C (Ω, L(U )) such that Φ0 ∈ Iso(U ) and L ˆ λ0 ] = µj [L; λ0 ], µj [L;
0 ≤ j < r + 1.
(5.9)
ˆ λ0 ] = µ[L; λ0 ] µ[L;
(5.10)
Consequently, ˆ λ0 ] = α[L; λ0 ] α[L;
and
if µ[L; λ0 ] is defined. Similarly, (5.9) and (5.10) are satisfied if there exists Ψ ∈ C r (Ω, L(V )) with ˆ = ΨL. Ψ0 ∈ Iso(V ) such that L
Proof. Note that ˆ 0 = L0 Φ0 ∈ Fred0 (U, V ). L By making Ω smaller, if necessary, we can suppose, without loss of generality, that Φ(λ) ∈ Iso(U ) for all λ ∈ Ω. ˆ {j} }r ˆ Let {M{j} }rj=0 and {M j=0 be two derived sequences from L and L, r r respectively, at λ0 with corresponding projections {Πj }j=0 and {Πˆj }j=0 . As in the proof of the consistency of Definition 5.1.3, it suffices to show that, for each 0 ≤ j < r + 1, there exists Θj ∈ C r−j (Ω, L(U )) such that Θj (λ0 ) ∈ Iso(U ) and ˆ {j} = M{j} Θj . M
(5.11)
Similarly, the proof of (5.11) involves an induction argument. Clearly, when j = 0, equality (5.11) holds true with Θ0 := Φ ∈ C r (Ω, L(U )). Suppose (5.11) holds for a particular 0 ≤ j < r and some Θj ∈ C r−j (Ω, L(U )) with Θ(λ0 ) ∈ Iso(U ). Then, for each λ ∈ Ω \ {λ0 }, it follows from (5.11) and the definition of derived family that ˆ {j+1} (λ) = M{j+1} (λ)Θj+1 (λ), M where
(5.12)
Θj+1 (λ) := RΠj ((λ − λ0 )−1 )Θj (λ)RΠˆ j (λ − λ0 ).
Note that, for each λ ∈ Ω \ {λ0 }, Θj+1 (λ)−1 = RΠˆ j ((λ − λ0 )−1 )Θj (λ)−1 RΠj (λ − λ0 ), and that Θ−1 ∈ C r−j (Ω, L(U )) as well. As in the proof of the consistency, it is j apparent from the induction hypothesis that ˆ j = Θj (λ0 )Π ˆj, Πj Θj (λ0 )Π
ˆ j Θj (λ0 )−1 Πj = Θj (λ0 )−1 Πj , Π
and, therefore, ˆ j = (IU − Πj )Θj (λ0 )Π ˆj, lim (λ − λ0 )−1 (IU − Πj )Θj (λ)Π
λ→λ0
ˆ j )Θj (λ)−1 Πj = (IU − Π ˆ j )[Θ−1 ] (λ0 )Πj . lim (λ − λ0 )−1 (IU − Π j
λ→λ0
In particular, the operators Θj+1 (λ0 ) := lim Θj+1 (λ) λ→λ0
and
lim Θj+1 (λ)−1
λ→λ0
are well defined and mutually inverse. Consequently, the families Θj+1 and Θ−1 j+1 are of class C r−j−1 in Ω. Passing to the limit as λ → λ0 in (5.12) completes the proof of the first claim. The proof of the last assertion of the statement is proposed in Exercise 1.
Chapter 7 will provide a general version of Proposition 5.1.4, where the regularity assumptions imposed on the isomorphisms families are shown to be superfluous.
5.2 Connections between χ and µ This section reveals the rather hidden connections between the multiplicity µ[L; λ0 ] introduced by Definition 5.1.3 and the multiplicity χ[L; λ0 ] introduced by Definition 4.3.3. As a by-product, it will become apparent that µ[L; λ0 ] < ∞ if λ0 ∈ Algν (L) for some integer 0 ≤ ν < r + 1. In such a case, according to Definition 4.3.3, the multiplicity χ[L; λ0 ] is well defined and finite. It turns out that # µ[L; λ0 ] = χ[L; λ0 ] if λ0 ∈ Algν (L). (5.13) 0≤ν
The proof of (5.13) is based on the next result. Theorem 5.2.1. Suppose r ≥ 0, the family L ∈ C r (Ω, L(U, V )) satisfies L0 ∈ Fred0 (U, V ), and, for some k ∈ N ∩ [0, r], the linear spaces k−1 6
R[L0 ], L1 (N [L0 ]), . . . , Lk (
N [Lj ])
(5.14)
j=0
are in direct sum. Then, for every sequence {M{j} }rj=0 of derived families from L at λ0 , (5.15) N [M{i} (λ0 )] = N [L0 ] ∩ · · · ∩ N [Li ], for all i ∈ {0, . . . , k}, and µk−1 [L; λ0 ] =
k j=1
j−1 6
j · dim Lj (
N [Li ])+k · dim N [M{k} (λ0 )].
(5.16)
i=0
Proof. Let {M{j} }rj=0 be a sequence of derived families from L at λ0 with corresponding sequence of projections {Πj }rj=0 . Proving (5.15) requires induction. By construction, N [M{0} (λ0 )] = N [L0 ]. Now, take 1 ≤ j ≤ k and assume that (5.15) is satisfied for each 0 ≤ i ≤ j − 1. Then, for each 0 ≤ ≤ i ≤ j − 1, N [M{i} (λ0 )] ⊂ N [M{} (λ0 )]
and, hence, (IU − Π )Πi = 0, (IU − Π )(IU − Πi ) = IU − Π .
Π Πi = Πi , Π (IU − Πi ) = Π − Πi ,
(5.17)
Now, it follows from (5.1) and (5.17) that, for every λ ∈ Ω, RΠ0 (λ − λ0 )RΠ1 (λ − λ0 ) · · · RΠj−1 (λ − λ0 ) = Πj−1 +
j−1 (λ − λ0 )i Πj−i−1 (IU − Πj−i ) + (λ − λ0 )j (IU − Π0 ).
(5.18)
i=1
On the other hand, according to (5.3), M{j} (λ) = (λ − λ0 )−j
j
(λ − λ0 )i Li + o((λ − λ0 )j )
i=0
(5.19)
· RΠ0 (λ − λ0 )RΠ1 (λ − λ0 ) · · · RΠj−1 (λ − λ0 ). Therefore, substituting (5.18) into (5.19) and using the induction hypothesis demonstrate that M{j} (λ0 ) = L0 (IU − Π0 ) +
j−1
Li Πi−1 (IU − Πi ) + Lj Πj−1 .
(5.20)
Li (N [L0 ] ∩ · · · ∩ N [Li−1 ]) ⊕ R[L0 ].
(5.21)
i=1
In particular, by (5.20), R[M{j} (λ0 )] ⊂
j i=1
Now we will prove that N [M{j} (λ0 )] ⊂ N [L0 ] ∩ · · · ∩ N [Lj ].
(5.22)
Take u0 ∈ N [M{j} (λ0 )]. Then, using (5.20) and (5.21), we observe that, for each 1 ≤ i ≤ j − 1, 0 = L0 (I − Π0 )u0 = Li Πi−1 (I − Πi )u0 = Lj Πj−1 u0 .
(5.23)
On the other hand, it follows from the definition of Π0 that Π0 u0 ∈ N [M{0} (λ0 )] = N [L0 ] and so L0 Π0 u0 = 0. Thus, the first equality of (5.23) implies L0 u0 = 0. In particular, Π0 u0 = u0 .
Now, fix 0 ≤ ≤ j − 2 and assume u0 ∈ N [L0 ] ∩ · · · ∩ N [L ].
(5.24)
By (5.17), equation (5.23) implies that 0 = L+1 Π (IU − Π+1 )u0 = L+1 (Π u0 − Π+1 u0 ) = L+1 Π u0 , since Π+1 u0 ∈ N [M{+1} (λ0 )] = N [L0 ] ∩ · · · ∩ N [L+1 ] ⊂ N [L+1 ], because of the induction hypothesis. Thus, Π u0 ∈ N [L+1 ]. On the other hand, using the induction hypothesis, we find from (5.24) that u0 ∈ N [M{} (λ0 )]. Hence, Π u0 = u0 and, therefore, u0 ∈ N [L0 ] ∩ · · · ∩ N [L+1 ]. We have just proved that u0 ∈ N [L0 ] ∩ · · · ∩ N [Lj−1 ].
(5.25)
Moreover, it follows from the last identity of (5.23) that Πj−1 u0 ∈ N [Lj ], and, using the induction assumption, (5.25) implies u0 ∈ N [M{j−1} (λ0 )]. Therefore, Πj−1 u0 = u0 and u0 ∈ N [L0 ] ∩ · · · ∩ N [Lj ],
(5.26)
which concludes the proof of (5.22). Now, we prove the converse inclusion of (5.22). Suppose (5.26). By the induction hypothesis, u0 ∈ N [M{0} (λ0 )] ∩ · · · ∩ N [M{j−1} (λ0 )], and, therefore, Π0 u0 = · · · = Πj−1 u0 = u0 .
By (5.20), these equalities imply M{j} (λ0 )u0 = Lj u0 = 0, which concludes the proof of (5.15). Finally, by (5.15), Proposition 4.2.2 shows that, for each 0 ≤ i ≤ k − 1, dim N [M{i} (λ0 )] = dim N [M{i+1} (λ0 )] + dim Li+1 (N [M{i} (λ0 )]). Thus, for each 0 ≤ ≤ k − 1, k−1
dim N [M{i} (λ0 )] =
i=
k−1
dim Li+1 (N [M{i} (λ0 )]) + dim N [M{i+1} (λ0 )]
i=
=
k
dim Li (N [M{i−1} (λ0 )])
i=+1
+
k−1
dim N [M{i} (λ0 )] + dim N [M{k} (λ0 )].
i=+1
Therefore, iterating the previous identity gives k−1
dim N [M{i} (λ0 )] =
i=0
k
j · dim Lj (N [M{j−1} (λ0 )])+k · dim N [M{k} (λ0 )],
j=1
since dim N [M{k−1} (λ0 )] = dim Lk (N [M{k−1} (λ0 )]) + dim N [M{k} (λ0 )].
This proves (5.16) and concludes the proof.
Based on Theorem 5.2.1, Proposition 5.1.4 and Theorem 4.3.2, we can obtain the main result of this section. Theorem 5.2.2. Suppose r ∈ N∪{∞}, the family L ∈ C r (Ω, L(U, V )) satisfies L0 ∈ Fred0 (U, V ), and λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r. Let Φ ∈ C r (Ω, L(U )) satisfy Φ0 ∈ Iso(U ) and λ0 is a k-transversal eigenvalue of LΦ := LΦ for some k ∈ {1, . . . , ν}. Then α[L; λ0 ] = k and µj [L; λ0 ] = µ[L; λ0 ] = χ[L; λ0 ], {M{j} }rj=0
Proof. Let nition 4.3.3,
k − 1 ≤ j < r + 1,
be a sequence of derived families from LΦ at λ0 . By Defi-
χ[L; λ0 ] =
k h=1
Φ Φ h dim LΦ h (N [L0 ] ∩ · · · ∩ N [Lh−1 ])
and, according to Proposition 5.1.4, α[L; λ0 ] = α[LΦ ; λ0 ] and 0 ≤ j < r + 1.
µj [L; λ0 ] = µj [LΦ ; λ0 ]
Thus, to complete the proof, it suffices to show α[LΦ ; λ0 ] = k and Φ
µj [L ; λ0 ] =
k
Φ Φ h dim LΦ h (N [L0 ] ∩ · · · ∩ N [Lh−1 ])
(5.27)
h=1
for each integer k − 1 ≤ j ≤ r. Indeed, as λ0 is a k-transversal eigenvalue of LΦ , by Proposition 4.2.2, Φ N [LΦ 0 ] ∩ · · · ∩ N [Lk ] = {0} and, hence, due to Theorem 5.2.1, N [M{j} (λ0 )] = {0},
k ≤ j < r + 1.
(5.28)
On the other hand, since λ0 is a k-transversal eigenvalue of LΦ , Φ N [LΦ 0 ] ∩ · · · ∩ N [Lk−1 ] = {0}
and, hence, thanks again to Theorem 5.2.1, we find that N [M{k−1} (λ0 )] = {0}. Therefore, k is the least natural number j ≥ 0 for which M{j} (λ0 ) is invertible. Consequently, α[LΦ ; λ0 ] = k, by Definition 5.1.3. Finally, (5.27) follows straight from (5.16) and (5.28).
5.3 Coincidence of the multiplicities χ and µ This section shows that both multiplicities χ and µ are defined if and only if either r = ∞, or r ∈ N and λ0 ∈ Algν (L) for some integer ν ∈ {0, . . . , r}; in any of these cases, µ = χ. Moreover, it reveals that the multiplicity of L at λ0 exists and is finite if and only if λ0 ∈ Algν (L) for some 0 ≤ ν < r + 1. In particular, in the second assertion of Theorem 4.3.2, the assumption that λ0 is an algebraic eigenvalue of L is not only sufficient for its validity, but also necessary. Precisely, the following result is satisfied. Theorem 5.3.1 (of coincidence and finiteness of χ and µ). Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ). Then the following assertions are equivalent: i) λ0 ∈ Algν (L) for some integer 0 ≤ ν ≤ r. ii) µ[L; λ0 ] is defined and finite.
iii) There exist an integer 0 ≤ k ≤ r and a family Φ ∈ C r (Ω, L(U )) with Φ0 ∈ Iso(U ) for which λ0 is a k-transversal value of LΦ := LΦ. iv) There exist an integer 0 ≤ k ≤ r and a polynomial Φ ∈ P[L(U )] with Φ0 = IU for which λ0 is a k-transversal value of LΦ := LΦ. Moreover, in such a case, α[L; λ0 ] = k = ν
and
µ[L; λ0 ] = χ[L; λ0 ].
(5.29)
Furthermore, µ[L; λ0 ] = ∞ if and only if χ[L; λ0 ] = ∞; and µ[L; λ0 ] is undefined if and only if so is χ[L; λ0 ]. In any of these cases, for every integer 1 ≤ k ≤ r, the subspace Vk constructed in Theorem 4.3.2 is a proper subspace of V . Proof. The conclusions follow straight away from the definitions if L0 is an isomorphism. So, throughout the proof, we will assume that λ0 ∈ Eig(L). Suppose µ[L; λ0 ] < ∞ and set := α[L; λ0 ] ≥ 1. Let {M{j} }rj=0 be any sequence of derived families from L at λ0 , with corresponding sequence of projections {Πj }rj=0 . By Definition 5.1.3, the operator M{} (λ0 ) is an isomorphism. Moreover, according to (5.3), for each λ ∈ Ω \ {λ0 }, M{} (λ) = (λ − λ0 )− L(λ)RΠ0 (λ − λ0 ) · · · RΠ−1 (λ − λ0 ). Since M{} (λ0 ) is invertible, then M{} (λ) ∈ Iso(U, V ) for each λ ∼ λ0 , and, therefore, L(λ) ∈ Iso(U, V ) if λ ∼ λ0 with λ = λ0 . Thus, for each λ ∼ λ0 with λ = λ0 , L(λ)−1 = (λ − λ0 )− RΠ0 (λ − λ0 ) · · · RΠ−1 (λ − λ0 )M{} (λ)−1 .
(5.30)
Consequently, by (5.30), there exists a constant C > 0 such that L(λ)−1 ≤ C|λ − λ0 |− ,
λ ∼ λ0 , λ = λ0 .
This shows that λ0 ∈ Algν (L) for some ν ∈ {1, . . . , }. Moreover, according to Theorem 4.3.2, there exist k ∈ {1, . . . , ν} and a polynomial Φ ∈ P[L(U )] with Φ0 = IU for which λ0 is a k-transversal eigenvalue of LΦ. In particular, k ≤ ν ≤ = α[L; λ0 ]. According to Theorem 5.2.2, we actually have k = α[L; λ0 ] and µ[L; λ0 ] = µk−1 [L; λ0 ] = χ[L; λ0 ]. Therefore, (5.29) holds.
Reciprocally, when λ0 ∈ Algν (L) for some integer 1 ≤ ν < r+1, by Theorems 4.3.2 and 5.2.2, L admits a transversalizing polynomial Φ and k = α[L; λ0 ] ≤ ν
and µ[L; λ0 ] = χ[L; λ0 ] < ∞.
This shows (5.29). Finally, assume that either α[L; λ0 ] equals infinity or it is not defined. Then N [M{j} (λ0 )] = {0}
for each 0 ≤ j < r + 1.
(5.31)
For each integer 1 ≤ k ≤ r, let Vk be any of the subspaces of V constructed in Theorem 4.3.2. If Vk = V for some 1 ≤ k < r + 1, then, owing to Propositions 4.2.2, 5.1.4 and Theorem 5.2.1, the operator M{k} (λ0 ) must be an isomorphism, which contradicts (5.31). Therefore, Vk V
for each integer 1 ≤ k ≤ r.
Consequently, in the case r = ∞, α[L; λ0 ] = ∞ and µ[L; λ0 ] = χ[L; λ0 ] = ∞, by definition, whereas neither α, µ nor χ are defined if r ∈ N.
According to Theorem 5.3.1, throughout the rest of this book we may use the notation χ[L; λ0 ] for the µ-multiplicity introduced by Definition 5.1.3. We will sometimes set χj [L; λ0 ] := µj [L; λ0 ], 0 ≤ j < r + 1. (5.32) For each integer 0 ≤ j ≤ r, the number χj [L; λ0 ] will be referred to as the jth partial algebraic µ-multiplicity of L at λ0 , or as the jth partial algebraic χ-multiplicity of L at λ0 . From now on, given a finite-rank projection P ∈ L(U ), we will consider the polynomial family CP ∈ P[L(U )] defined by CP (t) := tP + IU − P,
t ∈ K.
(5.33)
According to (5.1) and (5.2), RP (t)−1 = RP (t−1 ) = t−1 CP (t),
t ∈ K \ {0}.
(5.34)
Consequently, CP (t) is invertible for all t ∈ K \ {0}, and (5.3) can be equivalently written in the form L(λ) = M{j} (λ)CΠj−1 (λ − λ0 ) · · · CΠ0 (λ − λ0 ). As a by-product of Theorem 5.3.1, the next corollary is found.
(5.35)
Corollary 5.3.2. Let r ∈ N∪{∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ). Then, the following characterizations, describing mutually exclusive options, hold true: (a) L0 ∈ Iso(U, V ) if and only if χ[L; λ0 ] = 0. (b) λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r if and only if there exist M ∈ C r−ν (Ω, L(U, V )) with M0 ∈ Iso(U, V ), and ν finite-rank projections P0 , . . . , Pν−1 ∈ L(U ) \ {0} such that L(λ) = M(λ)CPν−1 (λ − λ0 ) · · · CP0 (λ − λ0 ),
λ ∈ Ω.
(5.36)
Moreover, in such a case, χ[L; λ0 ] =
ν−1
rank Pi .
i=0
5 / 0≤ν
5.4 A formula for the partial µ-multiplicities

The following result provides us with a finite algorithm to calculate the jth partial χ-multiplicities of a family L of class C^r at any λ0 ∈ Algν(L) with ν < r + 1.

Theorem 5.4.1. Let r ∈ N ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be with L0 ∈ Fred0(U, V). Suppose λ0 ∈ Algν(L) for some integer 1 ≤ ν ≤ r. Let Φ ∈ C^r(Ω, L(U)) be such that Φ0 ∈ Iso(U), for which λ0 is a ν-transversal eigenvalue of L^Φ := LΦ, and set

ℓ_j := dim L^Φ_j(⋂_{i=0}^{j−1} N[L^Φ_i]),  1 ≤ j ≤ ν,

A := (a_{ij})_{1≤i,j≤ν} ∈ M_ν(R),  a_{ij} := min{i, j},  1 ≤ i, j ≤ ν.

Then, the matrix A is invertible and

col[χ_0[L; λ0], . . . , χ_{ν−1}[L; λ0]] = A col[ℓ_1, . . . , ℓ_ν].

In particular,

χ[L; λ0] = χ_{ν−1}[L; λ0] = Σ_{j=1}^{ν} j ℓ_j.
Proof. The existence of the transversalizing family Φ is guaranteed by Theorem 5.3.1. Moreover, by (5.32) and Proposition 5.1.4, χj [L; λ0 ] = χj [LΦ ; λ0 ],
0 ≤ j ≤ ν − 1.
Let {M{j} }rj=0 be a sequence of derived families from LΦ at λ0 . Then, thanks to Theorem 5.2.1, Φ N [M{i} (λ0 )] = N [LΦ 0 ] ∩ · · · ∩ N [Li ],
0 ≤ i ≤ ν,
and, hence, χj [L; λ0 ] =
j
Φ dim N [LΦ 0 ] ∩ · · · ∩ N [Li ],
0 ≤ j ≤ ν − 1.
i=0
Consequently, thanks to Proposition 4.2.2, we find that χj [L; λ0 ] =
j ν
h =
i=0 h=i+1
ν
min{h, j + 1} h
h=1
for each 0 ≤ j ≤ ν − 1. The invertibility of the matrix A is evident.
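As a small worked instance of the theorem, take ν = 3 with the hypothetical values ℓ1 = 2, ℓ2 = 0, ℓ3 = 1 (chosen only to fix ideas): the partial multiplicities are obtained from one matrix-vector product with A = (min{i, j}). The Python sketch below is purely illustrative, with ad hoc names.

```python
import numpy as np

def partial_multiplicities(ell):
    """col[chi_0, ..., chi_{nu-1}] = A col[l_1, ..., l_nu] with a_ij = min{i, j} (Theorem 5.4.1)."""
    nu = len(ell)
    A = np.fromfunction(lambda i, j: np.minimum(i, j) + 1, (nu, nu))
    return A @ np.asarray(ell)

ell = [2, 0, 1]                       # hypothetical l_j = dim L_j^Phi(N[L_0^Phi] ∩ ... ∩ N[L_{j-1}^Phi])
print(partial_multiplicities(ell))    # [3. 4. 5.]: chi_0 = 3, chi_1 = 4, chi[L; lam0] = sum_j j*l_j = 5
```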
5.5 Removable singularities This section consists of the proof of Theorem 5.0.1. Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )), with L0 ∈ Fred0 (U, V ), be such that (λ−λ0 )ν L(λ)−1 exists and is bounded for λ in Ω \ {λ0 }, for some integer 0 ≤ ν ≤ r. Then, λ0 ∈ Algk (L) for some k ∈ {0, . . . , ν}. Suppose k = 0. Then, L0 ∈ Iso(U, V ) and, hence, the family L−1 is of class r C in a neighborhood of λ0 . So we are done in this case. Suppose k ≥ 1. Then, by Corollary 5.3.2(b), there exist k finite-rank projections P0 , . . . , Pk−1 ∈ L(U ) \ {0} and M ∈ C r−k (Ω, L(U, V )) for which M0 ∈ Iso(U, V ) and (5.36) is satisfied, where the number ν in (5.36) is to be replaced with k. As M0 is an isomorphism, there exists Ω ⊂ Ω, with λ0 ∈ Ω , such that the family M−1 is of class C r−k in Ω . By (5.36) and (5.34), it is apparent that, for each λ ∈ Ω \ {λ0 }, L(λ)−1 = CP0 (λ − λ0 )−1 · · · CPk−1 (λ − λ0 )−1 M(λ)−1 = (λ − λ0 )−k RP0 (λ − λ0 ) · · · RPk−1 (λ − λ0 )M(λ)−1
and, consequently, (λ − λ0 )k L(λ)−1 = RP0 (λ − λ0 ) · · · RPk−1 (λ − λ0 )M(λ)−1 .
(5.37)
As the right-hand side of (5.37) is of class C r−k in Ω , it indeed provides us with an extension of class C r−k of the function λ → (λ − λ0 )k L(λ)−1 to Ω , which concludes the proof of Theorem 5.0.1, because ν ≥ k.
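A quick finite-dimensional illustration of Theorem 5.0.1 (our own example): for the 2 × 2 family with entries (1, λ; λ, 0), whose inverse blows up like λ^{−2} at λ0 = 0, the function λ² L(λ)^{−1} extends analytically across λ = 0, in agreement with the theorem applied with ν = 2. The sketch is illustrative only.

```python
import numpy as np

L = lambda lam: np.array([[1.0, lam], [lam, 0.0]])   # lam0 = 0 is a 2-algebraic eigenvalue

for lam in [1e-1, 1e-3, 1e-6]:
    print(lam**2 * np.linalg.inv(L(lam)))
# The matrices converge to [[0, 0], [0, -1]]: lam**2 * L(lam)^{-1} has a smooth (here analytic)
# extension through lam = 0, as Theorem 5.0.1 guarantees with nu = 2.
```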
5.6 The product formula This section shows that the multiplicity of the product of two families equals the sum of their multiplicities. This property will be referred to as the product formula. It is a pivotal property, because it is the basis of all uniqueness results of Chapter 6. Theorem 5.6.1 (product formula). Let U, V, W be three K-Banach spaces, r, s ∈ N ∪ {∞}, and L ∈ C r (Ω, L(U, V )), M ∈ C s (Ω, L(W, U )), with L0 ∈ Fred0 (U, V ) and M0 ∈ Fred0 (W, U ), such that the algebraic multiplicities χ[L; λ0 ] and χ[M; λ0 ] are well defined, and min{r, s} ≥ α[L; λ0 ] + α[M; λ0 ].
(5.38)
Then, χ[LM; λ0 ] is well defined, max {α[L; λ0 ], α[M; λ0 ]} ≤ α[LM; λ0 ] ≤ α[L; λ0 ] + α[M; λ0 ],
(5.39)
and χ[LM; λ0 ] = χ[L; λ0 ] + χ[M; λ0 ].
(5.40)
Remark 5.6.2. According to Theorem 5.3.1, given q ∈ N ∪ {∞} and a family N ∈ C q (Ω, L(U, V )) with N0 ∈ Fred0 (U, V ), the algebraic multiplicity χ[N; λ0 ] is well defined if and only if either q = ∞, or q ∈ N and λ0 ∈ Algk (L) for some k ∈ {0, . . . , q}. Identity (5.40) will be referred to as the product formula. By Corollary 5.3.2, the proof of Theorem 5.6.1 may be based on the special case emphasized by the next result. Proposition 5.6.3. Suppose L ∈ C r (Ω, L(U, V )) for some r ∈ N∪{∞}, the operator L0 ∈ Fred0 (U, V ), and λ0 ∈ Algk (L) for some integer 0 ≤ k ≤ r. Then, for every finite-rank projection Q ∈ L(U ) \ {0}, χ[CQ (· − λ0 ); λ0 ] = rank Q,
(5.41)
where CQ (t) = tQ + IU − Q,
t ∈ K.
If, in addition, the function λ → L(λ)CQ (λ − λ0 )
belongs to
C k+1 (Ω, L(U, V )),
(5.42)
then χ[LCQ (· − λ0 ); λ0 ] = χ[L; λ0 ] + rank Q.
(5.43)
Remark 5.6.4. The assumption (5.42) is crucial for the validity of this result. In some sense, this condition imposes that L is a first-order derived family from LCQ (· − λ0 ) at λ0 with corresponding projection Q. As λ0 may be a (k + 1)algebraic eigenvalue of LCQ (· − λ0 ), the multiplicity of LCQ (· − λ0 ) at λ0 might not be defined if (5.42) fails. This explains why we are imposing the restriction (5.38) on the regularity of the families in the statement of Theorem 5.6.1. It is satisfied if r = s = ∞, of course. Proof. To simplify the notation, we simply set C(λ) := CQ (λ − λ0 ) = (λ − λ0 )Q + IU − Q,
λ ∈ K.
Then, C0 = IU − Q and C1 = Q; therefore, N [C0 ] = R[Q],
R[C0 ] = N [Q],
N [Q] ⊕ R[Q] = U,
because Q2 = Q. Moreover, C1 (N [C0 ]) = Q(R[Q]) = R[Q], and, therefore, C1 (N [C0 ]) ⊕ R[C0 ] = R[Q] ⊕ N [Q] = U. Consequently, λ0 is a 1-transversal eigenvalue of C and, according to Definition 4.3.3, χ[C; λ0 ] = dim C1 (N [C0 ]) = dim R[Q] = rank Q. This concludes the proof of (5.41). The proof of (5.43) will proceed by induction on k ≥ 0. Suppose k = 0. Then, ˜ defined by by definition, L0 ∈ Iso(U, V ). By (5.42), the map L ˜ L(λ) := L(λ)CQ (λ − λ0 ),
λ ∈ Ω,
˜ because of (5.34). Thus, acbelongs to C 1 (Ω, L(U, V )). Moreover, λ0 ∈ Alg1 (L), ˜ cording to Theorem 5.3.1, χ[L; λ0 ] is well defined and finite. Necessarily, ˜ 0 = L0 (IU − Q), L
˜ 1 = L0 Q + A(IU − Q), L
5.6. The product formula
125
for a certain operator A, and, since L0 ∈ Iso(U, V ), we have that ˜ 0 ] = R[Q], N [L
˜ 0 ] = L0 (N [Q]), R[L
˜ 1 (N [L ˜ 0 ]) = L0 (R[Q]). L
Consequently, since U = N [Q] ⊕ R[Q] and, hence, V = L0 (N [Q]) ⊕ L0 (R[Q]), we find that ˜ 1 (N [L ˜ 0 ]) ⊕ R[L ˜ 0 ] = V. L Therefore, ˜ 1 (N [L ˜ 0 ]) = dim L0 (R[Q]) = dim R[Q], ˜ λ0 ] = dim L χ[L; which concludes the proof of (5.43) in the case k = 0. Suppose k ≥ 1, fix 0 ≤ s < k, and assume that χ[MCQ (· − λ0 ); λ0 ] = χ[M; λ0 ] + rank Q for all families M of class C s such that λ0 ∈ Algj (M) for some j ∈ {0, . . . , s} and all finite-rank projections Q ∈ L(U ) \ {0} for which MCQ (· − λ0 ) ∈ C s+1 (Ω, L(U, V )). Let L ∈ C s+1 (Ω, L(U, V )) be such that λ0 ∈ Algs+1 (L), and Q ∈ L(U ) \ {0} a finite-rank projection such that LCQ (· − λ0 ) ∈ C s+2 (Ω, L(U, V )). Since the spaces N [Q] ∩ N [L0 ] and R[Q] are finite dimensional and their intersection equals {0}, there exists a projection P ∈ L(U ) such that R[P ] = N [Q] ∩ N [L0 ],
R[Q] ⊂ N [P ].
Then, P Q = QP = 0.
(5.44)
Consequently, (P + Q)2 = P 2 + Q2 = P + Q and, therefore, P + Q is a finite-rank projection for which R[P + Q] = R[P ] ⊕ R[Q],
rank(P + Q) = rank P + rank Q.
Indeed, due to (5.44), for all x, y ∈ U we have that P x + Qy = P 2 x + Q2 y = (P + Q)(P x + Qy) ∈ R[P + Q]
(5.45)
and, hence, R[P ] + R[Q] ⊂ R[P + Q]. This shows that R[P + Q] = R[P ] + R[Q]. Now, let x ∈ R[P ] ∩ R[Q]. Then, x = P x = Qx and, hence, P x = P Qx = 0
and (IU − P )x = (IU − P )P x = 0,
which implies x = 0. Therefore, (5.45) holds. It turns out that, in addition, N [L(λ0 )CQ (0)] = R[P + Q].
(5.46)
Indeed, since R[P ] ⊂ N [L0 ], (5.44) implies L(λ0 )CQ (0)(P + Q) = L0 (IU − Q)(P + Q) = L0 P = 0 and, hence, R[P + Q] ⊂ N [L(λ0 )CQ (0)]. On the other hand, for any u ∈ N [L(λ0 )CQ (0)] = N [L0 (IU − Q)] we have that (IU − Q)u ∈ N [L0 ] ∩ N [Q] = R[P ] and, hence, using (5.45) gives u ∈ R[P ] + R[Q] = R[P + Q], which concludes the proof of (5.46). Thanks to (5.46), P +Q is a projection onto N [L(λ0 )CQ (0)] and, hence, there exists a sequence of derived families {M{j} }s+2 j=0 from the family Ω λ → L(λ)CQ (λ − λ0 ) at λ0 with corresponding projections {Pj }s+2 j=0 such that P0 = P + Q. The family M{1} is of class C s+1 and, according to (5.3), for each λ ∈ Ω \ {λ0 }, M{1} (λ) = (λ − λ0 )−1 L(λ)CQ (λ − λ0 )RP +Q (λ − λ0 ).
(5.47)
Moreover, by Definition 5.1.3, Theorem 5.3.1, and (5.45), we can infer from (5.47) that (5.48) χ[LCQ (· − λ0 ); λ0 ] = χ[M{1} ; λ0 ] + rank P + rank Q. On the other hand, due to (5.44), CQ (t)RP +Q (t) = (tQ + IU − Q)[P + Q + t(IU − P − Q)] = P + t(IU − P ) = RP (t) for each t ∈ K, and, consequently, we can express (5.47) in the form M{1} (λ) = (λ − λ0 )−1 L(λ)RP (λ − λ0 ),
λ ∈ Ω \ {λ0 }.
(5.49)
Since N [L0 ] is a finite-dimensional space containing R[P ], and N [P ] is a finite codimensional subspace, because R[P ] is finite dimensional and U = N [P ] ⊕ R[P ], then, there exists a projection Π ∈ L(U ) such that R[Π] = N [L0 ],
N [Π] ⊂ N [P ].
Let {N{j} }s+1 j=0 be a sequence of derived families from L at λ0 with corresponding s projections {Πj }s+1 j=0 for which Π0 = Π. Then, N{1} is of class C and, according to (5.3), N{1} (λ) = (λ − λ0 )−1 L(λ)RΠ (λ − λ0 ),
λ ∈ Ω \ {λ0 }.
(5.50)
Moreover, by Definition 5.1.3 and Theorem 5.3.1, χ[L; λ0 ] = χ[N{1} ; λ0 ] + rank Π,
(5.51)
and λ0 is an algebraic value of N{1} of order α[N{1} ; λ0 ] = s. In addition, substituting (5.50) into (5.49) and using (5.2) show that M{1} (λ) = N{1} (λ)RΠ (λ − λ0 )−1 RP (λ − λ0 )
(5.52)
(5.53)
for each λ ∈ Ω \ {λ0 }. To simplify the right-hand side of (5.53), we still need some additional information. As R[P ] ⊂ R[Π], we have that ΠP = P . Also, P Π = P , since N [Π] ⊂ N [P ]. Thus, P Π = ΠP = P (5.54) and, hence, (Π − P )2 = Π − P. Consequently, R[Π] = R[Π − P ] ⊕ R[P ]
and, therefore rank(Π − P ) = rank Π − rank P.
(5.55)
Also, again by (5.54), we have that, for each t ∈ K \ {0}, RΠ (t−1 ) RP (t) = [Π + t−1 (IU − Π)][P + t(IU − P )] = t(Π − P ) + IU − Π + P = CΠ−P (t) and, hence, (5.53) becomes M{1} (λ) = N{1} (λ)CΠ−P (λ − λ0 ),
λ ∈ Ω.
Note that N{1} is of class C s , and that M{1} is of class C s+1 . By (5.52) and (5.55), the induction hypothesis implies χ[M{1} ; λ0 ] = χ[N{1} ; λ0 ] + rank Π − rank P.
(5.56)
Therefore, applying (5.48), (5.56) and (5.51) gives χ[LCQ (· − λ0 ); λ0 ] = χ[M{1} ; λ0 ] + rank P + rank Q = χ[N{1} ; λ0 ] + rank Π + rank Q = χ[L; λ0 ] + rank Q,
which concludes the proof. Proof of Theorem 5.6.1. We set k := α[L; λ0 ],
:= α[M; λ0 ].
According to Theorem 5.3.1, k ∈ N (respectively, ∈ N) if and only if λ0 ∈ Algk (L) (respectively, λ0 ∈ Alg (M)). Suppose k, ∈ N. Then, there exists a constant C > 0 such that for every λ in some perforated neighborhood of λ0 , (LM)−1 (λ) ≤ M(λ)−1 · L(λ)−1 ≤
C . |λ − λ0 |k+
Thus, λ0 ∈ Algν (LM) for some integer 0 ≤ ν ≤ k + , and, since LM ∈ C k+ (Ω, L(W, V )), it follows from Theorem 5.3.1 that χ[LM; λ0 ] is well defined and that α[LM; λ0 ] = ν ∈ N. There also exists a constant B > 0 such that for every λ in some perforated neighborhood of λ0 , L(λ)−1 ≤ M(λ) · (LM)−1 (λ) ≤
B . |λ − λ0 |ν
(5.57)
Hence, k ≤ ν. Similarly, one can conclude that ≤ ν and, therefore, max{k, } ≤ ν ≤ k + .
(5.58)
Therefore, (5.39) holds true if k, ∈ N. Now, suppose k = ∞. Then, by (5.38), L and M are of class C ∞ and, hence, LM is also of class C ∞ . In particular, χ[LM; λ0 ] is well defined. Moreover, by (5.57), it follows from Theorem 5.3.1 that α[LM; λ0 ] = ∞, because k = ∞. Therefore, when k = ∞, max{k, } = α[LM; λ0 ] = k + = ∞. Also, in such a case, by Definition 5.1.3 and Theorem 5.3.1, χ[LM; λ0 ] = χ[L; λ0 ] + χ[M; λ0 ] = ∞. By symmetry, the same features hold if = ∞. The previous arguments show that χ[LM; λ0 ] is well defined in all possible cases and that (5.39) is always satisfied. Moreover, (5.40) is satisfied if k = ∞ or = ∞. Consequently, to complete the proof it suffices to show (5.40) when k, ∈ N. In such a case, we already know that ν := α[LM; λ0 ] ∈ N. By definition, χ[L; λ0 ] = 0 if k = 0, and χ[M; λ0 ] = 0 if = 0. Therefore, (5.40) follows straight away from Proposition 5.1.4 and Theorem 5.3.1 if k = 0 or = 0. It remains to complete the proof in the case 1 ≤ k ∈ N,
1 ≤ ∈ N.
(5.59)
Suppose (5.59). Let {M{j} }j=0 be a derived sequence from M at λ0 with corresponding sequence of projections {Πj }j=0 . Then, M{} (λ0 ) is an isomorphism, M{0} = M, and, for every j ∈ {0, . . . , }, M{j} (λ0 ) ∈ Fred0 (W, U ),
M{j} ∈ C s−j (Ω, L(W, U )),
(5.60)
and M{j} (λ) = M{j+1} (λ)CΠj (λ − λ0 ),
λ ∈ Ω.
Moreover, χ[M; λ0 ] =
−1 j=0
rank Πj .
(5.61)
In particular, for each j ∈ {0, . . . , − 1}, L(λ)M{j} (λ) = L(λ)M{j+1} (λ)CΠj (λ − λ0 ),
λ ∈ Ω.
Fix a j ∈ {0, . . . , − 1}. Then, by (5.38) and (5.60), LM{j} is of class C k+−j and LM{j+1} is of class C k+−j−1 . Moreover, λ0 is an algebraic eigenvalue of LM{j+1} with order less than or equal to k + − j − 1. Thus, according to Proposition 5.6.3, we have that χ[LM{j} ; λ0 ] = χ[LM{j+1} ; λ0 ] + rank Πj . Consequently, using (5.61), shows that χ[LM; λ0 ] = χ[LM{} ; λ0 ] +
−1
rank Πj = χ[LM{} ; λ0 ] + χ[M; λ0 ].
j=0
As M{} is of class C k and M{} (λ0 ) is an isomorphism, due to Proposition 5.1.4, χ[LM{} ; λ0 ] = χ[L; λ0 ] and, therefore, χ[LM; λ0 ] = χ[L; λ0 ] + χ[M; λ0 ],
which concludes the proof.
5.7 Perturbation from simple eigenvalues revisited According to Theorem 5.3.1, the following substantial extension of Theorem 4.4.3 holds. It should be noted that λ0 might not be a ν-algebraic eigenvalue of L for any 1 ≤ ν ≤ r, and that, in such a case, Theorem 4.4.3 cannot be applied. Theorem 5.7.1. Suppose U, V are K-Banach spaces such that U is a subspace of V with continuous inclusion, Ω ⊂ K is open, r ∈ N∗ ∪ {∞, ω}, the family L ∈ C r (Ω, L(U, V )) satisfies λ0 ∈ Ω and dim N [L0 ] = 1
and
N [L0 ] ⊕ R[L0 ] = V.
(5.62)
Let Y be a closed subspace of U and ϕ0 ∈ N [L0 ] \ {0} such that N [L0 ] = span[ϕ0 ]
and
N [L0 ] ⊕ Y = U,
and denote by a and ϕ the perturbed functions from 0 ∈ K and ϕ0 , whose existence was guaranteed by Lemma 4.4.1. Then, ordλ0 a is defined and ordλ0 a := χ[L; λ0 ] = α[L; λ0 ]
(5.63)
when χ[L; λ0 ] is defined, while ordλ0 a is undefined or ordλ0 a > r if χ[L; λ0 ] is undefined.
5.7. Perturbation from simple eigenvalues revisited
131
Proof. By Theorem 5.3.1, we already know that χ[L; λ0 ] is defined if and only if λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r, or r = ∞. Actually, χ[L; λ0 ] < ∞ if and only if λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r and, in such a case, both α[L; λ0 ] and ν equal the order of transversality of every transversalized family from L at λ0 . Consequently, (5.63) follows straight ahead from Theorem 4.4.3 if the algebraic multiplicity is finite. / Alg(L). By Lemma 4.4.1, the Suppose χ[L; λ0 ] = ∞. Then r = ∞ and λ0 ∈ functions a and ϕ are of class C ∞ and, hence, ordλ0 a ≥ 1 is well defined. We claim that ordλ0 a = ∞. Suppose, looking for a contradiction, that k := ordλ0 a ∈ N. By Theorems 4.3.2 and 5.3.1, there exists a polynomial Φ ∈ P[L(U )] such that Φ(λ0 ) = IU and the product family LΦ := LΦ satisfies Vk :=
k
Φ Φ Φ LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) ⊕ R[L0 ] V.
(5.64)
Φ Φ LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) = {0},
(5.65)
j=1
Actually, (5.64) implies k j=1
because LΦ = L0 and codim R[L0 ] = 1. Now we set ψ(λ) := Φ(λ)−1 ϕ(λ),
λ ∼ λ0 .
Since Lϕ = aϕ, we have that LΦ (λ)ψ(λ) = a(λ)ϕ(λ),
λ ∼ λ0 .
Thus, LΦ (λ)ψ(λ) = o((λ − λ0 )k−1 ), and
k
as λ → λ0 ,
LΦ i ψk−i = ak ψ0 = 0.
i=0
Thanks to Lemma 4.1.2, (5.66) implies ψ0 ∈
k−1 6 j=0
N [LΦ j ],
ψ0 , . . . , ψk−1 ∈ N [L0 ],
(5.66)
(5.67)
132
Chapter 5. Algebraic Multiplicity Through Polynomial Factorization
and, consequently, N [L0 ] =
k−1 6
N [LΦ j ]
and
ψ0 , . . . , ψk−1 ∈
j=0
k−1 6
N [LΦ j ],
j=0
because dim N [L0 ] = 1. Thus, (5.67) becomes Φ LΦ 0 ψk + Lk ψ0 = ak ψ0 = 0.
By (5.62), we find from this identity that LΦ k ψ0 = 0 and, therefore, LΦ 0
6 k−1
N [LΦ j ] = {0},
j=0
which contradicts (5.65). This contradiction shows that ordλ0 a = ∞, so concluding the proof when χ[L; λ0 ] is defined. Finally, suppose χ[L; λ0 ] is undefined. Then, there exists an integer k ≥ r such that L is of class C k in a neighborhood of λ0 , but in no neighborhood of λ0 is it of class C k+1 . Consequently, according to Theorems 4.3.2 and 5.3.1, there exists a polynomial Φ ∈ P[L(U )] such that Φ(λ0 ) = IU and LΦ := LΦ satisfies (5.64). Arguing as in the previous case when the multiplicity was infinity shows that a0 = · · · = ak = 0.
(5.68)
As, in general, the perturbed eigenvalue a is of C k , but it is not necessarily of class C k+1 , it is apparent that either ordλ0 a is undefined, or, due to (5.68) ordλ0 a > k ≥ r.
This concludes the proof.
5.8 Exercises 1. Suppose r ≥ 1 and the families L ∈ C r (Ω, L(U, V )) and Ψ ∈ C r (Ω, L(V )) satisfy L0 ∈ Fred0 (U, V ) and Ψ0 ∈ Iso(V ). Let {L{j} }∞ j=0 be a derived sequence of L at λ0 with ∞ corresponding projections {Πj }∞ j=0 . Prove that {L{j} }j=0 is a derived sequence of ΨL ∞ at λ0 with corresponding projections {Πj }j=0 . Use this fact to give a proof of the last assertion of Proposition 5.1.4. 2. Prove the result of Exercise 5 of Chapter 4 by using the definition of the µmultiplicity. 3. Let r ∈ N ∪ {∞}, consider λ0 ∈ Ω and L ∈ C r (Ω, L(U, V )) with L0 ∈ Fred0 (U, V ) such that χ[L; λ0 ] is defined. Prove that χ[L; λ0 ] = dim N [L0 ] if and only if λ0 is a k-algebraic value of L for some k ∈ {0, 1} ∩ [0, r]. This complements the result proposed in Exercise 6 of Chapter 4.
5.8. Exercises
133
4. Let r ∈ N ∪ {∞}, consider λ0 ∈ Ω and L ∈ C r (Ω, L(U, V )) such that L0 ∈ Fred0 (U, V ) and λ0 ∈ Algk (L) for some k ∈ N ∩ [0, r]. Let Φ ∈ C ∞ (Ω, L(U )) be such that Φ(λ0 ) is an isomorphism, and λ0 is a k-transversal value of LΦ. Prove that, for ˆ ∈ C r (Ω, L(U, V )) satisfying every L ˆ j = Lj , L
0 ≤ j ≤ k,
ˆ and a k-transversal value of LΦ. ˆ Moreover, λ0 is a k-algebraic value of L ˆ λ0 ]. χ[L; λ0 ] = χ[L; 5. This exercise provides a dual construction of the construction carried out in Section 5.1. Suppose Ω ⊂ K is open, U and V are K-Banach spaces, λ0 ∈ Ω, and L ∈ C r (Ω, L(U, V )) satisfies L0 ∈ Fred0 (U, V ) for some r ∈ N ∪ {∞}. Consider the r + 1 families M{j} ∈ C r−i (Ω, L(U, V )), for 0 ≤ j < r + 1, defined inductively as follows: • M{0} := L. • Given 0 ≤ i < r and M{i} ∈ C r−i (Ω, L(U, V )), let σi ∈ L(V ) be a projection with range R[M{i} (λ0 )] and define [σi + (λ − λ0 )−1 (IV − σi )]M{i} (λ) if λ ∈ Ω \ {λ0 }, M{i+1} (λ) := σi M{i} (λ0 ) + (IV − σi )M{i} (λ0 ) if λ = λ0 . The (finite or infinite) sequence {M{j} }rj=0 is called a sequence of dual derived families from L at λ0 . The aim of this exercise is to show that the dual derived families enjoy dual properties of the derived families constructed in Section 5.1. Precisely, it is requested to prove the following: • M{j} (λ) ∈ Fred0 (U, V ) for all 0 ≤ j < r + 1 and each λ ∈ Ω satisfying L(λ) ∈ Fred0 (U, V ). Moreover, for all λ ∈ Ω \ {λ0 } and 0 ≤ j < r + 1, M{j} (λ) = [σj−1 +(λ−λ0 )−1 (IV −σj−1 )] · · · [σ0 +(λ−λ0 )−1 (IV −σ0 )]L(λ). In addition, N [σj ] + R[σj+1 ] = V and, so, dim R[σj ] ≤ dim R[σj+1 ] for each 0 ≤ j < r. • Define, for each 0 ≤ j < r + 1, the dual partial µ-multiplicity of L at λ0 by µ∗j [L; λ0 ] :=
j
codim R[σi ].
i=0
If M{j} (λ0 ) is invertible for some integer 0 ≤ j ≤ r, then the dual ascent of L at λ0 , denoted by α∗ [L; λ0 ], is defined as the least integer j for which M{j} (λ0 ) is invertible. In such a case, the dual multiplicity of L at λ0 , denoted by µ∗ [L; λ0 ], is defined through µ∗ [L; λ0 ] := µ∗α∗ [L;λ0 ] [L; λ0 ].
134
Chapter 5. Algebraic Multiplicity Through Polynomial Factorization If r = ∞ and there does not exist j ∈ N for which M{j} (λ0 ) is invertible, then the dual ascent and dual multiplicity are defined as α∗ [L; λ0 ] = µ∗ [L; λ0 ] = ∞. Prove the consistency of these concepts. • Prove the dual counterpart of Proposition 5.1.4. • The reader may try to find the relationships between this dual construction and the construction of Section 5.1; in particular, that the functions µ and µ∗ are identical, and so are the functions α and α∗ . But an easier proof will be sketched in Exercise 17 of Chapter 7.
6. Let U = V = K2 , identify L(U, V ) with M2 (K), and consider the families L, M ∈ P[M2 (K)] defined by M(λ) := diag{λs , λ },
L(λ) := diag{λk , 1},
λ ∈ K,
for some integers k, s, ≥ 0. Prove that α[L; 0] = µ[L; 0] = k,
α[M; 0] = max{s, },
α[LM; 0] = max{k + s, },
µ[M; 0] = s + ,
µ[LM; 0] = k + s + ,
to conclude that the inequalities (5.39) are optimal. 7. This exercise proposes an alternative proof of the main result of Exercise 9 of Chapter 4 by means of the characterization of χ provided by µ through Theorem 5.3.1. Precisely, under the same assumptions as in that exercise, let {L{j} }rj=0 and {M{j} }rj=0 be two sequences of derived families from L and M at λ0 with corresponding projections {Πj }rj=0 and {Pj }rj=0 , respectively. Prove the following: • {L{j} × M{j} }rj=0 is a sequence of derived families from L × M at λ0 with corresponding projections {Πj × Pj }rj=0 . • µj [L × M; λ0 ] = µj [L; λ0 ] + µj [M; λ0 ] for all 0 ≤ j < r + 1. • µ[L × M; λ0 ] is defined if and only if both µ[L; λ0 ] and µ[M; λ0 ] are defined, and, in such a case, µ[L × M; λ0 ] = µ[L; λ0 ] + µ[M; λ0 ] and α[L × M; λ0 ] = max {α[L; λ0 ], α[M; λ0 ]} . 8. Let Ω , Ω be two open neighborhoods of λ0 ∈ K, suppose r, s ∈ N ∪ {∞}, consider L ∈ C r (Ω, L(U, V )) with L0 ∈ Fred0 (U, V ), and let f ∈ C s (Ω ) satisfy f (λ0 ) = λ0
and
f (Ω ) ⊂ Ω.
Let g ∈ C s (Ω ) be the function defined by g(λ) := f (λ) − λ0 ,
λ0 ∈ Ω .
5.8. Exercises
135
We ask the following: • Assume r = s = ∞ and, using Theorem 5.6.1, (5.3) and Exercise 2, prove the following identities: α[L ◦ f ; λ0 ] = µ[g; λ0 ] α[L; λ0 ],
µ[L ◦ f ; λ0 ] = µ[g; λ0 ] µ[L; λ0 ].
(5.69)
• Assume that the three multiplicities µ[L; λ0 ], µ[g; λ0 ] and µ[L ◦ f ; λ0 ] are defined, impose conditions on r, s, and use Theorem 5.6.1, (5.3) and Exercise 2 to obtain (5.69). 9. This exercise shows that Theorem 5.0.1 may fail when the assumption L0 ∈ Fred0 (U, V ) is removed. Let K = R, the spaces U = V = C([0, 1], C), and the operator T ∈ L(U ) defined by t f (s) ds, t ∈ [0, 1], f ∈ U. T f (t) = 0
Let L ∈ P[L(U )] be the real polynomial defined by L(λ) = iIU λ − T,
λ ∈ R.
Prove, step by step, the following: • T is injective but not surjective. Therefore, T is not Fredholm of index zero. • L(λ) is invertible for each λ = 0 and, actually, for each f ∈ U , t 1 1 f (s)e(t−s)/(iλ) ds, t ∈ [0, 1]. f (t) − 2 L(λ)−1 f (t) = iλ λ 0 • T is not a Fredholm operator. • L(λ)−1 ≤ 2|λ|−2 for all λ ∈ R \ {0}. • There does not exist any integer n for which the limit lim λn L(λ)−1
λ→0
does exist.
136
Chapter 5. Algebraic Multiplicity Through Polynomial Factorization
5.9 Comments on Chapter 5 The most pioneering materials of this chapter were developed by Gohberg & Sigal [54] and Gohberg [42]. The concept of multiplicity introduced by Definition 5.1.3 goes back to Magnus [92]. It was introduced in the context of nonlinear analysis to generalize some celebrated results by Krasnosel’ski˘ı [69] and Rabinowitz [111, 112]. It is extremely significant that Magnus [92] himself asserted “It looks as if the calculation of multiplicity might be fraught with difficulties, as our definition is far from transparent.” This sincere declaration might have been provoked by the fact that, except when U = V = Rd , or U = V and L(λ) = λIU − T for some T ∈ L(U ) and every λ ∈ K, Magnus [92] could not ascertain whether or not µ[L; λ0 ] < ∞. Theorem 5.3.1 has actually shown that µ[L; λ0 ] < ∞ if and only if
λ0 ∈ Alg(L)
through the identification χ = µ. Chapter 7 will show that χ[L; λ0 ] < ∞ if λ0 is an isolated eigenvalue of an analytic family L. Essentially, Theorem 5.3.1 is attributable to L´ opez-G´omez [82]. Besides Definition 5.1.3 and Theorem 4.3.2, the crucial step in the proof of Theorem 5.3.1 relies on Theorem 5.2.1, which goes back to Esquinas & L´opez-G´omez [31]. The key step in proving Theorem 5.2.1 was to show that, for every 1 ≤ j ≤ k, M{j} (λ0 ) = L0 (IU − Π0 ) +
j−1
Li Πi−1 (IU − Πi ) + Lj Πj−1
i=1
if λ0 ∈ Algk (L), which expresses M{j} (λ0 ) in terms of the classical derivatives of L at λ0 . When λ0 is not a transversal eigenvalue of L, in order to construct the derived sequence {M{j} }kj=0 , one should first search for a local transversalizing family of isomorphisms Φ, and then apply the previous formula to LΦ := LΦ. Going back to the proof of Theorem 4.3.2, it is easily realized that such a task might be extremely tedious and hard. Consequently, establishing the identification χ = µ involves plenty of serious technical difficulties. Nevertheless, the approach adopted in this chapter provides an extremely versatile polynomial factorization of L through L(λ) = M{k} (λ)CΠk−1 (λ − λ0 ) · · · CΠ0 (λ − λ0 ),
λ ∼ λ0 ,
which is the main technical tool used in the proof of the product formula χ[LM; λ0 ] = χ[L; λ0 ] + χ[M; λ0 ],
5.9. Comments on Chapter 5
137
which has been established by Theorem 5.6.1, attributable to Magnus [92]. Originally, Magnus established it for the very special case when r = s = ∞, where most of the technical difficulties of the general case covered in this chapter are easily overcome; however, this should not underrate the well-deserved merit of his result. Actually, from Definition 5.1.3 itself, it becomes apparent that µ was introduced having in mind the product formula. Seemingly, the idea of factorization of the family goes back to Gohberg & Sigal [54] and Gohberg [42]. In these papers, with a similar idea of factorization, but overall different approach, they obtain fundamental properties of the multiplicity, like the stability property (see Chapter 8) and the logarithmic residue theorem. Rather strikingly, Chapter 6 shows that the product formula together with the property that the multiplicity of some rank-one projection equals one do characterize the algebraic multiplicity χ = µ, in the sense that all remaining properties must follow from these two. The importance of all these features greatly escapes from linear to nonlinear analysis, as, in the case K = R, those polynomial factorizations of L considerably facilitate the analysis of the problem of ascertaining whether or not the Poincar´e topological index of L(λ) at u = 0, to be introduced in Section 12.2, changes as λ crosses λ0 . It turns out that it changes if and only if χ[L; λ0 ] ∈ 2N + 1, if and only if λ0 is a nonlinear eigenvalue of L, as discussed in Chapter 12. The result of Exercise 8 goes back, essentially, to Magnus [92] (the product formula for the multiplicity to be introduced in Chapter 7 was already proved by Sigal [119]). The counter-example of Exercise 9 was constructed by Magnus & Mora-Corral [94]. We think that Theorem 5.0.1 is a striking property of Fredholm operators that deserves further attention.
Chapter 6
Uniqueness of the Algebraic Multiplicity Throughout this chapter, given K ∈ {R, C}, two non-zero K-Banach spaces U, V , and λ0 ∈ K, we denote by Sλ∞0 (U, V ) the set of all families L of class C ∞ in a neighborhood of λ0 with values in L(U, V ) such that L0 := L(λ0 ) ∈ Fred0 (U, V ); the neighborhood may depend on L. We also set Sλ∞0 (U ) := Sλ∞0 (U, U ). Clearly, LM ∈ Sλ∞0 (U, V ) if L ∈ Sλ∞0 (W, V ) and M ∈ Sλ∞0 (U, W ), where W is another K-Banach space. Moreover, L−1 ∈ Sλ∞0 (V, U ) whenever L ∈ Sλ∞0 (U, V ) and L0 ∈ Iso(U, V ). So far, in Chapters 4 and 5 we have constructed a function χ[ · ; λ0 ] : Sλ∞0 (U, V ) → N ∪ {∞} that satisfies the following properties: M1. χ[L; λ0 ] = 0 if and only if L0 ∈ Iso(U, V ); in this case, α[L; λ0 ] = 0. M2. 1 ≤ χ[L; λ0 ] < ∞ if and only if λ0 ∈ Algν (L) for some integer ν ≥ 1; in this case, α[L; λ0 ] = ν. / Alg(L); in this case, α[L; λ0 ] = ∞. M3. χ[L; λ0 ] = ∞ if and only if λ0 ∈ M4. For every L ∈ Sλ∞0 (W, V ) and M ∈ Sλ∞0 (U, W ), χ[LM; λ0 ] = χ[L; λ0 ] + χ[M; λ0 ].
140
Chapter 6. Uniqueness of the Algebraic Multiplicity
M5. χ[CP (· − λ0 ); λ0 ] = rank P for every finite-rank projection P ∈ L(U ), where CP (t) = t P + IU − P,
t ∈ K.
Indeed, these properties are consequences of Theorem 5.3.1, Proposition 5.6.3 and Theorem 5.6.1. The main goal of this chapter is to show that all properties of the algebraic multiplicity χ (equivalently, µ) can be obtained from Properties M4 and M5. For example, in the case U = V , the following uniqueness theorem is satisfied. Theorem 6.0.1 (of uniqueness). Let U be a non-zero K-Banach space and λ0 ∈ K. Then, there exists a unique function m[ · ; λ0 ] : Sλ∞0 (U ) → [0, ∞] satisfying the following properties: A1. For all L, M ∈ Sλ∞0 (U ), m[LM; λ0 ] = m[L; λ0 ] + m[M; λ0 ]. A2. There exists a rank-one projection P0 ∈ L(U ) such that m[CP0 (· − λ0 ); λ0 ] = 1. Moreover, m[L; λ0 ] = χ[L; λ0 ]
for all L ∈ Sλ∞0 (U )
and, in particular, m ranges over N ∪ {∞}. Consequently, all definitions, constructions, properties and relationships established in Chapters 4 and 5 do actually follow from Axioms A1 and A2 if U = V . Axiom 2 can be regarded as a normalization property of the algebraic multiplicity, in the sense that the algebraic multiplicity is uniquely determined by the product formula (Axiom A1) up to a multiplicative constant, which is determined by fixing the value of m[CP0 (· − λ0 ); λ0 ] for some rank-one projection P0 . This chapter is organized as follows. Section 6.1 shows the similarity of two arbitrary rank-one projections, which is needed in the proof of the uniqueness theorems. Section 6.2 gives the proof of Theorem 6.0.1 and establishes another version of it. Section 6.3 relaxes the regularity requirements of Theorem 6.0.1. Section 6.4 extends Theorem 6.0.1 to cover the general case when the spaces U, V are different but isomorphic. Finally, Section 6.5 applies these uniqueness theorems to get some explicit formulae for χ.
6.1. Similarity of rank-one projections
141
6.1 Similarity of rank-one projections The following result will be useful in the proof of Theorem 6.0.1. Lemma 6.1.1. Let P, Q ∈ L(U ) be two rank-one projections. Then, there exists an isomorphism T ∈ L(U ) such that P = T QT −1.
(6.1)
Proof. As P, Q ∈ L(U ) are projections, U = N [P ] ⊕ R[P ] = N [Q] ⊕ R[Q]. To construct the isomorphism T we distinguish two cases. First, suppose N [P ] ⊂ N [Q] or N [Q] ⊂ N [P ].
(6.2)
Since codim N [P ] = codim N [Q] = 1, necessarily N [P ] = N [Q]. Let p ∈ R[P ] \ {0} and q ∈ R[Q] \ {0}. Then, the unique operator T ∈ L(U ) satisfying T |N [Q] = I|N [Q] = I|N [P ] ,
T q = p,
where I stands for the identity, provides us with an isomorphism for which P T = T Q. Now, suppose that none of the inclusions (6.2) holds. Then, N [P ] ∩ N [Q] = N [P ] and N [P ] ∩ N [Q] = N [Q]. Take y ∈ N [P ],
x0 ∈ N [P ] \ N [Q].
Then, Qy ∈ R[Q] and Qx0 ∈ R[Q] \ {0}. Since rank Q = 1, we obtain that there exists λ ∈ K such that Qy = λQx0 . Thus, y − λx0 ∈ N [P ] ∩ N [Q]. This proves that N [P ] = (N [P ] ∩ N [Q]) + span[x0 ] and, hence, the codimension of N [P ] ∩ N [Q] in N [P ] is 1. Therefore, N [P ] ∩ N [Q] is a closed hyperplane of N [P ] and admits a one-dimensional topological supplement U0 N [P ] = (N [P ] ∩ N [Q]) ⊕ U0 .
142
Chapter 6. Uniqueness of the Algebraic Multiplicity
By symmetry, N [P ] ∩ N [Q] is a closed hyperplane of N [Q] and there exists a one-dimensional subspace U1 of U such that N [Q] = (N [P ] ∩ N [Q]) ⊕ U1 . Let u0 ∈ U0 \ {0},
u1 ∈ U1 \ {0},
p ∈ R[P ] \ {0},
q ∈ R[Q] \ {0}.
Then, it is easy to see that the unique operator T ∈ L(U ) satisfying T |N [P ]∩N [Q] = I|N [P ]∩N [Q] ,
T u1 = u0 ,
T q = p,
is an isomorphism that satisfies (6.1). This concludes the proof.
6.2 Proof of Theorem 6.0.1 This section is devoted to the proof of Theorem 6.0.1. Let U be a non-zero K-Banach space and λ0 ∈ K. Then, thanks to the Hahn– Banach theorem, U admits rank-1 projections. Moreover, by Theorem 5.6.1 and Proposition 5.6.3, we already know that the function χ[·; λ0 ] satisfies Axioms A1 and A2. It remains to prove the uniqueness. Let m[·; λ0 ] : Sλ∞0 (U ) → [0, ∞] satisfy Axioms A1 and A2. Then, the following properties hold: i) m[L; λ0 ] = 0 if L ∈ Sλ∞0 (U ) and L0 ∈ Iso(U ). ii) m[CP (· − λ0 ); λ0 ] = 1 for every rank-1 projection P ∈ L(U ). iii) m[CP (· − λ0 ); λ0 ] = rank P for every non-zero finite-rank projection P ∈ L(U ). Indeed, let L ∈ Sλ∞0 (U ) be such that L(λ0 ) is invertible. Then, by Axioms A1 and A2, we find 1 = m[CP0 (· − λ0 ); λ0 ] = m[CP0 (· − λ0 )LL−1 ; λ0 ] = m[CP0 (· − λ0 ); λ0 ] + m[L; λ0 ] + m[L−1 ; λ0 ] = 1 + m[L; λ0 ] + m[L−1 ; λ0 ]. Consequently, m[L; λ0 ] = 0, because m ≥ 0. This concludes the proof of Property i). To prove Property ii), let P ∈ L(U ) be a rank-1 projection. By Lemma 6.1.1, there exists an invertible operator T ∈ L(U ) such that P = T P0 T −1 , where P0 is the projection of Axiom A2. Thus, CP = T CP0 T −1
6.2. Proof of Theorem 6.0.1
143
and, therefore, combining Axioms A1 and A2 with Property i) shows that m[CP (· − λ0 ); λ0 ] = m[T ; λ0 ] + m[CP0 (· − λ0 ); λ0 ] + m[T −1 ; λ0 ] = m[CP0 (· − λ0 ); λ0 ] = 1, which concludes the proof of Property ii). To prove Property iii), let P ∈ L(U ) be a projection with rank n ≥ 1. Then, there exist n rank-1 projections P1 , . . . , Pn ∈ L(U ) such that P =
n
Pi
i=1
and 1 ≤ i, j ≤ n,
Pi Pj = 0,
i = j.
These projections satisfy CP = CP1 · · · CPn and, therefore, by Axiom A1 and Property ii), we find that m[CP (· − λ0 ); λ0 ] =
n
m[CPi (· − λ0 ); λ0 ] = n = rank P.
i=1
This concludes the proof of Property iii). We have all the necessary ingredients to conclude the proof of Theorem 6.0.1. Let L ∈ Sλ∞0 (U ). Then, some of the following mutually exclusive options occur: either L(λ0 ) is an isomorphism, or λ0 is a k-algebraic eigenvalue of L(λ0 ) for some integer k ≥ 1, or λ0 is not an algebraic eigenvalue of L. Suppose L(λ0 ) is an isomorphism. Then, according to Property i) we have that m[L; λ0 ] = 0. Suppose λ0 is a k-algebraic eigenvalue of L(λ0 ) for some integer k ≥ 1. Then, by Corollary 5.3.2(b), there exist k finite-rank projections Π0 , . . . , Πk−1 ∈ L(U ) \ {0} and a family M ∈ Sλ∞0 (U ) with M(λ0 ) invertible, such that L(λ) = M(λ)CΠk−1 (λ − λ0 ) · · · CΠ0 (λ − λ0 ),
λ ∼ λ0 .
Consequently, by Axiom A1 and Properties iii) and i), we have that m[L; λ0 ] = m[M; λ0 ] +
k−1
m[CΠi (· − λ0 ); λ0 ] =
i=0
k−1
rank Πi .
i=0
Finally, suppose that λ0 is not an algebraic eigenvalue of L. Then, according to Corollary 5.3.2(c), for every integer ν ≥ 1 there are ν finite-rank projections Π0 , . . . , Πν−1 ∈ L(U ) \ {0}
144
Chapter 6. Uniqueness of the Algebraic Multiplicity
and an operator Mν ∈ Sλ∞0 (U ) such that L(λ) = Mν (λ)CΠν−1 (λ − λ0 ) · · · CΠ0 (λ − λ0 ),
λ ∼ λ0 .
Therefore, again by Axiom A1 and Property iii), we find that m[L; λ0 ] = m[Mν ; λ0 ] +
ν−1
rank Πi ≥ ν,
(6.3)
i=0
because m ≥ 0. As ν ≥ 1 is arbitrary, this implies that m[L; λ0 ] = ∞ and completes the proof of Theorem 6.0.1. In the previous proof the hypothesis that m takes values in [0, ∞] was used exclusively to show that m[L; λ0 ] = 0 if L ∈ Sλ∞0 (U ) is invertible, and in (6.3). Therefore, the following uniqueness result holds true (see Exercise 4). Theorem 6.2.1 (of uniqueness). Let U be a non-zero K-Banach space, λ0 ∈ K, and denote ∞ A∞ λ0 (U ) := L ∈ Sλ0 (U ) : λ0 ∈ Alg(L) . Then, there exists a unique function m[ · ; λ0 ] : A∞ λ0 (U ) → C satisfying the following properties: A1. For all L, M ∈ A∞ λ0 (U ), m[LM; λ0 ] = m[L; λ0 ] + m[M; λ0 ]. A2. There exists a rank-1 projection P0 ∈ L(U ) such that m[CP0 (· − λ0 ); λ0 ] = 1. A3. m[L; λ0 ] = 0 for all L ∈ A∞ λ0 (U ) such that L(λ0 ) ∈ Iso(U ). Moreover, m[L; λ0 ] = χ[L; λ0 ]
for all L ∈ A∞ λ0 (U )
and, in particular, m ranges over N.
6.3 Relaxing the regularity requirements The main difficulty in generalizing Theorem 6.0.1 to a C r class regularity setting relies on the troubling fact that χ[LM; λ0 ] may be undefined even if χ[L; λ0 ] and χ[M; λ0 ] are defined. In other words, we should overcome the difficulty that the class of families of class C r in a neighborhood of λ0 with values in L(U ) for which χ[L; λ0 ] is defined is not closed under the composition, which makes it impossible
6.3. Relaxing the regularity requirements
145
to impose the product formula as an axiom. To overcome such a problem we must reduce the class of admissible families in order to guarantee that χ[LM; λ0 ] is defined whenever χ[L; λ0 ] and χ[M; λ0 ] are so. We have seen that algebraic families at λ0 provide a reasonable class of families for which a uniqueness theorem from the product formula can be given. Precisely, given a point λ0 ∈ K and two non-zero K-Banach spaces U and V , let Mλ0 (U, V ) denote the set of functions L ∈ C r (Ω, L(U, V )), for some open neighborhood Ω ⊂ K of λ0 , such that L(λ0 ) ∈ Fred0 (U, V ) and either r = ∞, or r ∈ N and λ0 ∈ Algk (L) for some integer 0 ≤ k ≤ r. Note that these are the functions for which the multiplicity χ[L; λ0 ] is well defined. Set also Mλ0 (U ) := Mλ0 (U, U ). As Sλ∞0 (U ) ⊂ Mλ0 (U ), the next uniqueness theorem provides an extension of Theorem 6.0.1. Theorem 6.3.1. Let U be a non-zero K-Banach space and λ0 ∈ K. Then, there exists a unique function m[·; λ0 ] : Mλ0 (U ) → [0, ∞] satisfying the following properties: A1. For every L, M ∈ Mλ0 (U ) such that L, M are of class C α[L;λ0 ]+α[M;λ0 ] in a neighborhood of λ0 , m[LM; λ0 ] = m[L; λ0 ] + m[M; λ0 ]. A2. There exists a rank-1 projection P0 ∈ L(U ) such that m[CP0 (· − λ0 ); λ0 ] = 1. Moreover, m[L; λ0 ] = χ[L; λ0 ]
for all L ∈ Mλ0 (U )
and, in particular, m ranges over N ∪ {∞}. Proof. Let U be a non-zero K-Banach space and λ0 ∈ K. Then, thanks to the Hahn–Banach theorem, U admits rank-1 projections. Moreover, thanks to Theorem 5.6.1 and Proposition 5.6.3, we know that the function χ[·; λ0 ] satisfies Axioms A1 and A2. It remains to prove the uniqueness. Let m[·; λ0 ] : Mλ0 (U ) → [0, ∞] satisfy Axioms A1 and A2.
146
Chapter 6. Uniqueness of the Algebraic Multiplicity
Then, the following properties hold: i) m[L; λ0 ] = 0 if L ∈ Mλ0 (U ) and L(λ0 ) is invertible. ii) m[CP (· − λ0 ); λ0 ] = 1 for every rank-1 projection P ∈ L(U ). iii) m[CP (· − λ0 ); λ0 ] = rank P for every non-zero finite-rank projection P ∈ L(U ). Indeed, let L ∈ Mλ0 (U ) be such that L(λ0 ) is invertible. Then, CP0 , L, L−1 , CP0 LL−1 , LL−1 ∈ Mλ0 (U ) and, hence, according to Axioms A1 and A2, 1 = m[CP0 (· − λ0 ); λ0 ] = 1 + m[L; λ0 ] + m[L−1 ; λ0 ]. Therefore,
m[L; λ0 ] = m[L−1 ; λ0 ] = 0,
because m ≥ 0, which ends the proof of Property i). Properties ii) and iii) have already been shown in the proof of Theorem 6.0.1. Let L ∈ Mλ0 (U ). Then, L is of class C r for some r ∈ N ∪ {∞}, and some of the following mutually exclusive alternatives occur: either L0 ∈ Iso(U ); or λ0 ∈ Algk (L) for some integer 1 ≤ k ≤ r; or r = ∞ but λ0 ∈ / Alg(L). If L(λ0 ) is an isomorphism, then m[L; λ0 ] = 0, by Property i). Suppose λ0 is a k-algebraic eigenvalue of L(λ0 ) for some integer number 1 ≤ k ≤ r. By Theorem 5.3.1, k = α[L; λ0 ]. Let {M{j} }rj=0 be a derived family from L at λ0 with associated projections {Πj }rj=0 . Then, M{0} = L, the family M{k} is continuous and take invertible values, and, for every j ∈ {0, . . . , k − 1}, the family M{j} is of class C k−j , λ ∈ Ω, M{j} (λ) = M{j+1} (λ)CΠj (λ − λ0 ), and α[M{j+1} ; λ0 ] = α[L; λ0 ] − j − 1,
α[CΠj (· − λ0 ); λ0 ] = 1.
Thus, for each 0 ≤ j ≤ k − 1, M{j} is of class C α[M{j+1} ;λ0 ]+α[CΠj (·−λ0 );λ0 ] in a neighborhood of λ0 , and, according to Axiom A1 and Properties iii) and i), we have that m[L; λ0 ] = m[M{k} ; λ0 ] +
k−1
m[CΠi (· − λ0 ); λ0 ] =
i=0
k−1
rank Πi .
i=0
Finally, suppose r = ∞ and λ0 is not an algebraic eigenvalue of L. Let {M{j} }∞ j=0 be a derived family from L at λ0 with associated projections {Πj }∞ j=0 . Then, for every integer ν ≥ 1, the family Mν ∈ Sλ∞0 (U ) satisfies L(λ) = Mν (λ)CΠν−1 (λ − λ0 ) · · · CΠ0 (λ − λ0 ),
λ ∼ λ0 .
6.4. A general uniqueness theorem
147
Therefore, again by Axiom A1 and Property iii), we find that m[L; λ0 ] = m[Mν ; λ0 ] +
ν−1
rank Πi ≥ ν,
i=0
because m ≥ 0. As ν ≥ 1 is arbitrary, this implies that m[L; λ0 ] = ∞ and concludes the proof.
6.4 A general uniqueness theorem The assumption U = V in the statement of Theorems 6.0.1, 6.2.1 and 6.3.1 is not really restrictive, since U and V must be isomorphic if Fred0 (U, V ) is non-empty. In fact, the following improvement of Theorem 6.3.1 can be obtained. Theorem 6.4.1. Let X be a non-zero K-Banach space and λ0 ∈ K. Let J(X) be a set of K-Banach spaces isomorphic to X such that X ∈ J(X). Then, there exists a unique function # m[ · ; λ0 ] : Mλ0 (U, V ) → [0, ∞] U,V ∈J(X)
satisfying the following properties: A1. If U, V, W ∈ J(X) with L ∈ Mλ0 (U, V ),
M ∈ Mλ0 (W, U ),
and L, M are of class C α[L;λ0 ]+α[M;λ0 ] in a neighborhood of λ0 , then m[LM; λ0 ] = m[L; λ0 ] + m[M; λ0 ]. A2. There exists a rank-1 projection P0 ∈ L(X) such that m[CP0 (· − λ0 ); λ0 ] = 1. Actually, for every U, V ∈ J(X) and L ∈ Mλ0 (U, V ), m[L; λ0 ] = χ[L; λ0 ]. In particular, m ranges over N ∪ {∞}. Proof. Thanks to Theorem 5.6.1, we know that the algebraic multiplicity χ constructed in Chapter 4, or Chapter 5, satisfies Axioms A1 and A2. It remains to prove its uniqueness. Let # Mλ0 (U, V ) → [0, ∞] m[ · ; λ0 ] : U,V ∈J(X)
148
Chapter 6. Uniqueness of the Algebraic Multiplicity
be a function satisfying Axioms A1 and A2 of this theorem. Then, the auxiliary function mX [ · ; λ0 ] : Mλ0 (X) → [0, ∞] defined through mX [L; λ0 ] := m[L; λ0 ],
L ∈ Mλ0 (X),
satisfies Axioms A1 and A2 of Theorem 6.3.1. Consequently, mX [L; λ0 ] = χ[L; λ0 ]
for all L ∈ Mλ0 (X).
(6.4)
Let U ∈ J(X) and consider any isomorphism J ∈ L(X, U ). Then, according to Axioms A2 and A1, we have that 1 = m[CP0 (· − λ0 ); λ0 ] = m[CP0 (· − λ0 )J −1 J; λ0 ] = m[CP0 (· − λ0 ); λ0 ] + m[J −1 ; λ0 ] + m[J; λ0 ] = 1 + m[J −1 ; λ0 ] + m[J; λ0 ] and, therefore,
m[J −1 ; λ0 ] = m[J; λ0 ] = 0,
(6.5)
because m ≥ 0. Let U, V ∈ J(X) and L ∈ Mλ0 (U, V ). Then, there exist isomorphisms M ∈ L(V, X) and J ∈ L(X, U ). Thanks to (6.5), m[J; λ0 ] = m[M ; λ0 ] = 0 and, hence, by Axiom A1, m[L; λ0 ] = m[M LJ; λ0 ]. As M LJ ∈ Mλ0 (X), (6.4) and the product formula for χ show that m[L; λ0 ] = mX [M LJ; λ0 ] = χ[M LJ; λ0 ] = χ[L; λ0 ].
This concludes the proof.
6.5 Applications. Classical multiplicity formulae This section gives two applications of Theorems 6.0.1 and 6.2.1 to characterize χ[L; λ0 ] in the special case when U = V = KN . Corollary 6.5.1. Let N ≥ 1 be an integer and choose a basis in KN to identify L(KN ) with MN (K). Then, for every λ0 ∈ K and L ∈ Sλ∞0 (KN ), χ[L; λ0 ] = ordλ0 det L.
6.5. Applications. Classical multiplicity formulae
149
Proof. Fix λ0 ∈ K. Then, the function m[·; λ0 ] : Sλ∞0 (KN ) → [0, ∞] defined by m[L; λ0 ] := ordλ0 det L,
L ∈ Sλ∞0 (KN ),
satisfies Axioms A1 and A2 of Theorem 6.0.1. Indeed, m is well defined because L is of class C ∞ in a neighborhood of λ0 . Moreover, for every L, M ∈ Sλ∞0 (KN ), we have that m[LM; λ0 ] = ordλ0 det(LM) = ordλ0 [det L · det M] = ordλ0 det L + ordλ0 det M = m[L; λ0 ] + m[M; λ0 ]. Consequently, m satisfies Axiom A1. Let P0 be the rank-one projection P0 := diag{1, 0, . . . , 0}. Then, for each t ∈ K, CP0 (t) = t P0 + IN − P0 = diag{t, 1, . . . , 1} and, hence, m[CP0 (· − λ0 ); λ0 ] = ordλ0 det CP0 (· − λ0 ) = ordλ0 (· − λ0 ) = 1. Consequently, m also satisfies Axiom A2. Therefore, by Theorem 6.0.1, m[L; λ0 ] = χ[L; λ0 ] for all L ∈ Sλ∞0 (KN ).
This concludes the proof. Corollary 6.5.2. Let N ≥ 1 be an integer and choose a basis in C N L(CN ) with MN (C). Then, for all λ0 ∈ C and L ∈ A∞ λ0 (C ), 1 L (λ)L(λ)−1 dλ, χ[L; λ0 ] = tr 2πi γ
N
to identify
(6.6)
where γ is any Cauchy contour whose inner domain Dγ contains λ0 , the family ¯ γ , and D ¯ γ \ {λ0 } contains no eigenvalue L is holomorphic in a neighborhood of D of L. N Proof. Fix λ0 ∈ C, and note that L is holomorphic at λ0 if L ∈ A∞ λ0 (C ). By Theorem 6.2.1, the proof will be completed if we show that the function m[·; λ0 ] : N A∞ λ0 (C ) → C defined through 1 N L (λ)L(λ)−1 dλ, L ∈ A∞ m[L; λ0 ] := tr λ0 (C ), 2πi γ
satisfies Axioms A1–A3 of Theorem 6.2.1, where γ is any Cauchy contour satisfying the requirements of the statement.
150
Chapter 6. Uniqueness of the Algebraic Multiplicity
We point out that this is a typical example where checking that m is nonnegative may be rather involved, though Axiom A3 of Theorem 6.2.1 is a direct consequence of the fact that L L−1 is holomorphic in an open neighborhood of λ0 if L(λ0 ) is an isomorphism and, therefore, in such a case, the Cauchy theorem guarantees that m[L; λ0 ] = 0. N Let L, M ∈ A∞ λ0 (C ) and consider a Cauchy contour γ with inner domain ¯ γ , and D ¯ γ \ {λ0 } Dγ containing λ0 such that both families are holomorphic in D contains no eigenvalue of L or of M. Then, (LM) (LM)−1 = L L−1 + LM M−1 L−1 and, hence, 1 m[LM; λ0 ] = tr (LM) (λ)(LM)(λ)−1 dλ 2πi γ 1 1 −1 = tr L (λ)L(λ) dλ + tr L(λ)M (λ)M(λ)−1 L(λ)−1 dλ 2πi γ 2πi γ 1 M (λ)M(λ)−1 dλ = m[L; λ0 ] + m[M; λ0 ], = m[L; λ0 ] + tr 2πi γ because the trace is invariant by similarity. Therefore, m satisfies Axiom A1 of Theorem 6.2.1. Finally, let P ∈ M(C) be a rank-1 projection. Then, CP (t)CP (t)−1 = P t−1 P + ICN − P = t−1 P, t ∈ C \ {0}, and, hence, m[CP (· − λ0 ); λ0 ] =
1 2πi
(λ − λ0 )−1 dλ tr P = 1. γ
Consequently, Axiom A2 of Theorem 6.2.1 is satisfied as well. Therefore, the result is a direct consequence of Theorem 6.2.1. Remark 6.5.3. Actually, as an easy consequence of Proposition 3.1.1, it readily follows that every isolated eigenvalue λ0 of a holomorphic family must be an algeN ∞ N braic eigenvalue (see Exercise 3). Therefore, A∞ λ0 (C ) consists of all L ∈ Sλ0 (C ) for which either L(λ0 ) is an isomorphism, or λ0 is an isolated eigenvalue of L. The following result is an immediate consequence of Corollary 6.5.2. Corollary 6.5.4. Let N ≥ 1 be an integer and choose a basis in CN to identify L(CN ) with MN (C). Then, for all A ∈ MN (C) and λ0 ∈ σ(A), χ[·ICN − A; λ0 ] = ma (λ0 ), where ma (λ0 ) denotes the algebraic multiplicity of λ0 as an eigenvalue of the matrix A.
6.6. Exercises
151
Proof. According to (6.6), we have that, for sufficiently small ε > 0, χ[·ICN − A; λ0 ] = tr
1 2πi
Cε (λ0 )
(λICN − A)−1 dλ = tr
1 2πi
R(λ; A) dλ. Cε (λ0 )
On the other hand, by Theorem 3.4.1, the operator Pλ0 :=
1 2πi
R(λ; A) dλ Cε (λ0 )
projects CN onto N [(λ0 ICN − A)ν(λ0 ) ], where ν(λ0 ) is the algebraic ascent of λ0 as an eigenvalue of A. Therefore, we conclude from Theorem 1.2.1 that χ[·ICN − A; λ0 ] = tr Pλ0 = rank Pλ0 = dim N [(λ0 ICN − A)ν(λ0 ) ] = ma (λ0 ). The fact that the trace of Pλ0 equals its range follows very easily from the fact that Pλ0 |N [(λ0 ICN −A)ν(λ0 ) ] = IN [(λ0 ICN −A)ν(λ0 ) ] ; see Proposition 9.2.2 for a general proof of this feature. This concludes the proof. In Chapters 7 and 9, all the results of this section will be generalized to the infinite-dimensional setting.
6.6 Exercises 1.
Prove successively the following: • If T ∈ L(U ) is an operator of finite rank, then there exist a finite-dimensional subspace X ⊂ U and a closed subspace Y ⊂ U such that U = X ⊕ Y,
T (X) ⊂ X
and
Y ⊂ N [T ].
• Use this feature to obtain the following generalization of Lemma 6.1.1: If P, Q ∈ L(U ) are two projections of the same finite rank then there exists an isomorphism T ∈ L(U ) such that P = T QT −1 . 2. Use Lemma 6.1.1 and induction to give an alternative proof of the second result of Exercise 1: If P, Q ∈ L(U ) are two projections of the same finite rank then there exists an isomorphism T ∈ L(U ) such that P = T QT −1 . 3. Use Proposition 3.1.1 to prove that every isolated eigenvalue λ0 of a holomorphic family with values in MN (C) must be algebraic.
152
Chapter 6. Uniqueness of the Algebraic Multiplicity
4.
The following is requested to be done: ∞ • Prove that LM ∈ A∞ λ0 (U ) if L, M ∈ Aλ0 (U ). • Given a semigroup (G, +) containing (N, +) as a sub-semigroup, prove that there exists a unique function m[ · ; λ0 ] : A∞ λ0 (U ) → G satisfying the following properties: A1. m[LM; λ0 ] = m[L; λ0 ] + m[M; λ0 ] for all L, M ∈ A∞ λ0 (U ). A2. m[CP0 (· − λ0 ); λ0 ] = 1 for some rank-1 projection P0 ∈ L(U ). A3. m[L; λ0 ] = 0 whenever L ∈ A∞ λ0 (U ) and L(λ0 ) is an isomorphism.
5.
Prove that there are two different functions m[ · ; λ0 ] : Sλ∞0 (U ) → Z ∪ {∞}
satisfying Axioms A1 and A2 of the statement of Theorem 6.0.1. 6. Prove that Axioms A1–A3 of Exercise 4 are independent. In other words, none of them is inferable from the remaining two. 7. State and prove the version of the result of Exercise 4 for the general case when U = V . 8. Let U , V be non-zero spaces with the same finite dimension, and choose bases so that the determinant can be defined in L(U, V ). Prove that χ[L; λ0 ] = ordλ0 det L
for all
L ∈ Sλ∞0 (U, V ).
9. Generalize the result of Corollary 6.5.2 to cover the general case when U and V are isomorphic to CN , for some N ≥ 1. 10. Adapt the proof of Theorem 6.3.1 to prove the following result: Let U be a Banach space, and λ0 ∈ K. Then there exists at most one function m[·; λ0 ] : Mλ0 (U ) → [0, ∞] satisfying the following properties: • For all L, M ∈ Mλ0 (U ) such that LM ∈ Mλ0 (U ), m[LM; λ0 ] = m[L; λ0 ] + m[M; λ0 ]. • A projection P0 ∈ L(U ) of rank-one exists such that m[CP0 (· − λ0 ); λ0 ] = 1.
6.7 Comments on Chapter 6 This chapter is based on Mora-Corral [99]. Actually, all the uniqueness theorems studied in this chapter go back to Mora-Corral [100, 99]. More general uniqueness theorems can be found in [99] and in the list of exercises of Chapter 7. The results of Section 6.5 reveal that the concept of multiplicity introduced in Chapter 4 equals the classic concepts of algebraic multiplicity in Euclidean spaces, which might have been expected from the very beginning of Chapter 4. All those results will be generalized in Chapters 7 and 9 through the local Smith form. So deeper discussions are postponed.
Chapter 7
Algebraic Multiplicity Through Jordan Chains. Smith Form This chapter studies the algebraic multiplicity according to the theory of Jordan chains. The concept of Jordan chain extends the classic concept of chain of generalized eigenvectors, already studied in Section 1.3. It will provide us with a further approach to the algebraic multiplicities χ and µ introduced and analyzed in Chapters 4 and 5, respectively, whose axiomatization has already been accomplished through the uniqueness theorems included in Chapter 6. More precisely, the new concept of multiplicity to be introduced in this chapter is based upon the number and lengths of all Jordan chains of the family L at the value λ0 , through the concept of canonical set of Jordan chains. Not surprisingly, it equals the multiplicities χ[L; λ0 ] and µ[L; λ0 ], because it satisfies the product formula and, consequently, it inherits the axiomatic of Chapter 6. Actually, establishing the hidden inter-relations connecting the associated invariants of all those different approaches to the algebraic multiplicity will extraordinarily facilitate the derivation of a series of extremely sharp properties of the canonical sets of Jordan chains. For example, it will reveal that, for a given family L ∈ C r (Ω, L(U, V )) with r ∈ N ∪ {∞} (not necessarily analytic) with L0 ∈ Fred0 (U, V ), the lengths of all Jordan chains of L at λ0 ∈ Eig(L) are uniformly bounded above if and only if λ0 is an algebraic eigenvalue of L. As a by-product, this pivotal property will allow us to characterize whether or not the family L admits a local Smith form at λ0 . Basically, the local Smith form of L at λ0 is the simplest canonical form that can be obtained through pre- and postmultiplication of L by two families of isomorphisms. Among the main results of this chapter, it will be shown that L possesses a local Smith form of at λ0 if and only if λ0 ∈ Alg(L). In other words, if and only if χ[L; λ0 ] = µ[L; λ0 ] < ∞.
154
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
The local Smith form tremendously facilitates the proof of some of the most useful properties of the algebraic multiplicity that has been axiomatized in Chapter 6, such as the stability theorem for holomorphic families (see Section 8.4), and the product formula in its more general version (see next Exercise 19). In addition to that, another important interest relies on the fact that it provides the most versatile way to characterize whether two given families are locally equivalent. Two families are said to be locally equivalent when one of them can be obtained from the other through pre- and post-multiplication by two isomorphism families. It turns out that two families are locally equivalent at λ0 if and only if they possess the same local Smith form at λ0 . The general plan of this chapter is as follows. Section 7.1 introduces the concept of Jordan chain and uses the theory of Chapter 4 for characterizing the Jordan chains at any transversal eigenvalue. Section 7.2 fixes the concepts of canonical set of Jordan chains and of partial and total κ-multiplicities, denoted by κj [L; λ0 ] and κ[L; λ0 ], through the number of Jordan chains of a given length in any canonical set. Section 7.3 shows the invariance of the partial κ-multiplicities under pre- and post-multiplication by continuous isomorphism families. Section 7.4 introduces the concept of local Smith form for L ∈ C r (Ω, L(U, V )) at any λ0 ∈ Eig(L) and establishes the existence of a local Smith form when r = ∞ and no Jordan chain of L can be continued indefinitely in the special case when U and V are isomorphic to Km , for some m ≥ 1. Section 7.5 combines the transversalization theorem of Chapter 4 with the invariance theorem of Section 7.3 to obtain a finite scheme for ascertaining the partial κ-multiplicities and, consequently, constructing canonical sets of Jordan chains systematically. The remaining sections contain some of the main consequences of the basic analysis carried out in the previous ones. Namely, in Section 7.6 it is proved that the multiplicity κ[L; λ0 ] coincides with the multiplicities χ[L; λ0 ] and µ[L; λ0 ], and that κ[L; λ0 ] < ∞ if and only if the length of every Jordan chain of L at λ0 is bounded above by a ν < r + 1. According to the results of Chapters 4 and 5, it turns out that κ[L; λ0 ] < ∞ if and only if λ0 ∈ Algν (L) for some 0 ≤ ν < r + 1. Incidentally, much like in the context of classic spectral theory, ν can be characterized as the lowest natural number k for which dim N [trng {L0 , . . . , Lk−1 }] = dim N [trng {L0 , . . . , Lk }] , though this point will not be completely clarified until Chapter 8. Section 7.7 labels the vectors of a canonical set of Jordan chains in an appropriate way useful in Chapter 10. All previous information will be brought together in Section 7.8 to characterize the existence of the local Smith form. Section 7.9 illustrates that characterization by analyzing two simple examples. Finally, Section 7.10 introduces the notion of local equivalence of families and uses the theorem of characterization of the local Smith form to determine whether two given families are locally equivalent at a point.
7.1. The concept of Jordan chain
155
As in Chapters 4 and 5, throughout this chapter we consider K ∈ {R, C}, two K-Banach spaces U and V , an open subset Ω ⊂ K, a point λ0 ∈ Ω, and a family L ∈ C r (Ω, L(U, V )), for some r ∈ N ∪ {∞, ω}, with L0 ∈ Fred0 (U, V ), even if not mentioned. As Fred0 (U, V ) is an open set of L(U, V ), we actually have that L(λ) ∈ Fred0 (U, V ) for λ sufficiently close to λ0 . Throughout the rest of this book, for any family L ∈ C r (Ω, L(U, V )) and any integer 0 ≤ s ≤ r, we will denote ⎛ ⎞ L0 0 ··· 0 ⎜ ⎟ L0 ··· 0 ⎟ ⎜L1 (7.1) L[s] := trng{L0 , . . . , Ls } = ⎜ . ⎟ .. .. ⎜ .. ⎟, . .. ⎠ ⎝ . . Ls Ls−1 · · · L0 where, as usual in this book, j!Lj = Lj) (λ0 ),
0 ≤ j < r + 1.
These conventions will substantially simplify most of the subsequent notation and statements.
7.1 The concept of Jordan chain The next definition introduces the concept of Jordan chain. Definition 7.1.1. Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )). Then, for a given integer 0 ≤ s ≤ r, a vector (u0 , . . . , us ) ∈ U s+1 with u0 = 0 is said to be a Jordan chain of length s + 1 of L at λ0 starting at u0 if j
Li uj−i = 0,
0 ≤ j ≤ s.
(7.2)
i=0
Remark 7.1.2. Using (7.1), the identities (7.2) can be equivalently expressed as L[s] col[u0 , . . . , us ] = 0 and, consequently, the set of Jordan chains of length s + 1 of L at λ0 is given through the column vectors col[u0 , . . . , us ] ∈ N [L[s] ] with u0 = 0. Therefore, the following properties are satisfied: • If (u0 , . . . , us ) is a Jordan chain of L at λ0 , then, so is (u0 , . . . , up ) for each integer 0 ≤ p ≤ s. • If (u0 , . . . , us ) and (v0 , . . . , vs ) are two Jordan chains of length s+1 of L at λ0 , then, so is (λu0 + µv0 , . . . , λus + µvs ) for every λ, µ ∈ K with λu0 + µv0 = 0.
156
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
As (7.2) imposes L0 u0 = 0, the family L admits a Jordan chain at λ0 if and only if λ0 ∈ Eig(L). In such a case, any vector u0 ∈ N [L0 ] \ {0} is said to be an eigenvector of L at λ0 ; the vector (u0 ) itself provides a Jordan chain of length 1 of L at λ0 starting at u0 , of course. Suppose U ⊂ V with continuous embedding J : U → V , and consider the associated operator family L(λ) = λJ − T,
λ ∈ K,
for a given T ∈ L(U, V ). Then, L ∈ P[L(U, V )] and for any λ0 ∈ K, L0 = λ0 J − T,
L1 = J,
Lh = 0
for all h ≥ 2.
Thus, in this special case, system (7.2) becomes (T − λ0 J)u0 = 0,
(T − λ0 J)u1 = u0 ,
...,
(T − λ0 J)us = us−1 ,
and, hence, the vector (u0 , . . . , us ) is a Jordan chain of length s + 1 of L at λ0 starting at u0 if and only if it equals (T − λ0 J)s us , (T − λ0 J)s−1 us , . . . , (T − λ0 J)2 us , (T − λ0 J)us , us with us ∈ N [(T − λ0 J)s+1 ] \ N [(T − λ0 J)s ]. Consequently, in the classical setting, the Jordan chains provide all possible chains of generalized eigenvectors forming the Jordan basis, as discussed in Section 1.3. So the concept of Jordan chain introduced by Definition 7.1.1 is a natural generalization of the concept of generalized eigenvector chain associated with an eigenvalue λ0 of a square matrix. Systems of the type (7.2) were already introduced in Section 4.1 to motivate the concept of transversal eigenvalue. Actually, the concept of transversal eigenvalue was introduced to get system (7.2) uncoupled and, consequently, at a transversal eigenvalue the structure of the Jordan chains should be extremely simple. In fact, the following result holds. Proposition 7.1.3. Suppose r ∈ N∗ ∪ {∞}, the family L ∈ C r (Ω, L(U, V )) satisfies L0 ∈ Fred0 (U, V ), and, for some integer 1 ≤ ν ≤ r, the subspaces ν−1 6
R[L0 ], L1 (N [L0 ]), . . . , Lν (
N [Lj ])
j=0
are in direct sum. Then, for each integer 0 ≤ s ≤ ν, the vector (u0 , . . . , us ) ∈ U s+1 is a Jordan chain of L at λ0 if and only if u0 = 0 and uj ∈
s−j 6 i=0
N [Li ],
0 ≤ j ≤ s.
(7.3)
7.2. Canonical sets and κ-multiplicity
157
In other words, ⎛ N [L[s] ] = ⎝
s 6
⎞
⎛
N [Lj ]⎠ × ⎝
j=0
s−1 6
⎞ N [Lj ]⎠ × · · · × N [L0 ].
j=0
Consequently, s + 1 ≤ ν when λ0 is a ν-transversal eigenvalue of L and there are Jordan chains of L at λ0 of length s + 1. Proof. Let s be an integer such that 0 ≤ s ≤ ν. According to Lemma 4.1.1, it is obvious that (u0 , . . . , us ) ∈ U s+1 is a Jordan chain of L at λ0 if and only if u0 = 0 and (7.3) is satisfied. Suppose λ0 is a ν-transversal eigenvalue of L for some integer 1 ≤ ν ≤ r. Then, by Proposition 4.2.2, ν 6 N [Lj ] = {0} j=0
and, therefore, s ≤ ν − 1 for all Jordan chains of length s + 1, which concludes the proof.
7.2 Canonical sets and κ-multiplicity The next definition introduces the concept of rank map associated with an operator family at a given λ0 ∈ Eig(L). Definition 7.2.1. Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) with λ0 ∈ Eig(L) and L0 ∈ Fred0 (U, V ). Let Lr+1 denote either the empty set, if r = ∞, or the set of eigenvectors u0 ∈ N [L0 ] \ {0} for which L admits a Jordan chain of length r + 1 at λ0 starting at u0 , if r ∈ N. Then, the rank map associated with L at λ0 is the map rank : N [L0 ] \ (Lr+1 ∪ {0}) → N ∪ {∞} defined, for every u0 ∈ N [L0 ] \ (Lr+1 ∪ {0}) , as follows: • rank(u0 ) = k ∈ N if k < r + 1 and L admits a Jordan chain of length k at λ0 starting at u0 , but not a Jordan chain of length k + 1. • rank(u0 ) = ∞ when, for every integer k ≥ 1, the family L possesses a Jordan chain of length k at λ0 starting at u0 . In the case r = ∞, the rank map has been defined over all N [L0 ] \ {0}. Another closely related concept of infinity rank might be given through the following definition.
158
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
Definition 7.2.2. Suppose r = ∞ and (u0 , . . . , us ) is a Jordan chain of L at λ0 . It is said that the Jordan chain (u0 , . . . , us ) can be continued indefinitely if there exists a sequence {un }n≥s+1 in U such that (u0 , . . . , un ) is a Jordan chain of L at λ0 for all n ≥ s + 1. According to Definition 7.2.1, rank u0 = ∞ if the Jordan chain (u0 ) can be continued indefinitely; however, in general there might exist some u0 ∈ N [L0 ] \ {0} with rank u0 = ∞ for which (u0 ) cannot be continued indefinitely. Actually, / Fred0 (U, V ). Nevertheless, Exercise 15 shows that this can indeed occur if L0 ∈ in the case L0 ∈ Fred0 (U, V ), it will become apparent from Theorem 7.8.3 that rank u0 = ∞ if and only if the chain (u0 ) can be continued indefinitely. Simple examples show the existence of eigenvectors with infinity rank. Indeed, suppose U = V = K = R, the point λ0 is 0, and L is the family defined by (4.20). Then, Lj = 0 for all j ≥ 0, and, hence, rank u0 = ∞
for all u0 ∈ R \ {0},
because, for any n ≥ 1, the vector (u0 , . . . , u0 ) ∈ Rn is a Jordan chain of length n. Now we describe the concept of canonical set of Jordan chains. To this aim, we consider r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) with L0 = L(λ0 ) ∈ Fred0 (U, V ) \ Iso(U, V ),
(7.4)
for which the length of all Jordan chains of L at λ0 is uniformly bounded above by some integer k ≤ r. Then, the rank map is defined over the whole N [L0 ] \ {0}. Let κ1 ≤ k satisfy that L possesses a Jordan chain of length κ1 , but it does not admit a Jordan chain of length κ1 + 1. Then, there exists u0,1 ∈ N [L0 ] \ {0} with
rank u0,1 = κ1 ,
and the length of all Jordan chains of L at λ0 is bounded above by κ1 . Define n := dim N [L0 ], and proceeding inductively, suppose that for some 2 ≤ i ≤ n we have chosen i − 1 linearly independent vectors u0,1 , . . . , u0,i−1 ∈ N [L0 ], and take u0,i ∈ Ci := N [L0 ] \ span[u0,1 , . . . , u0,i−1 ] such that rank u0,i = κi := max{rank u : u ∈ Ci }. This process can be repeated until n linearly independent vectors u0,1 , . . . , u0,n ∈ N [L0 ] have been selected. They form a basis of N [L0 ], since n = dim N [L0 ]. Finally, for each i ∈ {1, . . . , n}, let (u0,i , u1,i , . . . , uκi −1,i ) be any Jordan chain of length κi of L at λ0 starting at u0,i .
7.2. Canonical sets and κ-multiplicity
159
Definition 7.2.3. Any ordered family u0,1 , . . . , uκ1 −1,1 ; · · · ; u0,i , . . . , uκi −1,i ; · · · ; u0,n , . . . , uκn −1,n
(7.5)
constructed through the previous scheme is said to be a canonical set of Jordan chains of L at λ0 . By construction, κ1 ≥ · · · ≥ κn ≥ 1 and the following result is obtained. Proposition 7.2.4. Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) satisfy (7.4) for which the length of all Jordan chains of L at λ0 is uniformly bounded above by some integer k ≤ r. Then, the integers κ1 , . . . , κn of the construction of any canonical set of Jordan chains are the same for all canonical sets of Jordan chains. Proof. Clearly, κ1 is the same for every canonical set of Jordan chains. Set K1 := {u ∈ N [L0 ] \ {0} : rank u = κ1 }. By Remark 7.1.2, E1 := K1 ∪ {0} is the linear subspace of N [L0 ] generated by K1 . Set := dim E1 and let {u0,1 , . . . , u0,n } be a basis of N [L0 ] selected from a canonical set of Jordan chains. Then, E1 = span[u0,1 , . . . , u0, ], because these vectors are linearly independent, by construction. Thus, κ1 = · · · = κ are independent of the canonical set. This concludes the proof if = n. Suppose < n. Clearly, κ+1 is independent of the canonical set of Jordan chains. Then, reasoning as before, we consider K2 := {u ∈ N [L0 ] \ {0} : rank u ≥ κ+1 }. By Remark 7.1.2, E2 := K2 ∪ {0} is the subspace of N [L0 ] generated by K2 . Moreover, since κ1 ≥ κ+1 , we have that E1 ⊂ E2 = span[u0,1 , . . . , u0,dim E2 ]. Consequently, κ+1 = · · · = κdim E2 are independent of the canonical set of Jordan chains. Repeating this scheme as many times as necessary concludes the proof. Proposition 7.2.4 actually shows the consistency of the following concept of algebraic κ-multiplicity. Definition 7.2.5. Let r ∈ N∪{∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ). Then, the algebraic κ-multiplicity of L at λ0 , denoted by κ[L; λ0 ], is defined through:
160
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
• κ[L; λ0 ] = 0, if L0 ∈ Iso(U, V ). n • κ[L; λ0 ] := j=1 κj , if n := dim N [L0 ] ≥ 1 and the length of all Jordan chains of L at λ0 is uniformly bounded above by some integer k ≤ r. In such a case, for each 1 ≤ j ≤ n, the integer κj [L; λ0 ] := κj is said to be the jth partial κ-multiplicity of L at λ0 . • κ[L; λ0 ] = ∞, if r = ∞ and L possesses Jordan chains at λ0 with arbitrarily large length. The multiplicity κ[L; λ0 ], as well as the partial κ-multiplicities, is left undefined when r ∈ N, the family L is not of class C r+1 in any neighborhood of λ0 , and L possesses a Jordan chain of length r + 1 at λ0 . The extension of the concept of partial κ-multiplicities to the case when r = ∞ and the lengths of the Jordan chains of L are unbounded has been postponed until Exercise 5. Theorem 7.6.3 establishes that κ coincides with the multiplicity χ introduced in Chapter 4. Consequently, according to Theorem 5.3.1, it must also coincide with the multiplicity µ introduced in Chapter 5. Example. Suppose U = V = Km for some integer m ≥ 1, and L(λ) = diag{(λ − λ0 )p1 , . . . , (λ − λ0 )pm },
λ ∈ K,
for some p1 , . . . , pm ∈ N. Then, n := dim N [L0 ] = Card{i ∈ {1, . . . , m} : pi ≥ 1} and, according to Definition 7.2.5, κ[L; λ0 ] =
m
pj .
j=1
Moreover, in the case n ≥ 1, κ1 := κ1 [L; λ0 ] = max{p1 , . . . , pm } and, in general, when n ≥ 2, κj [L; λ0 ] = max ({p1 , . . . , pm } \ {κ1 , . . . , κj−1 }) , In particular, if
2 ≤ j ≤ n.
L(λ) = diag (λ − λ0 )p1 , . . . , (λ − λ0 )pn , 1, . . . , 1 m−n
with p1 ≥ · · · ≥ pn ≥ 1, then, n = dim N [L0 ], κ[L; λ0 ] =
n j=1
pj ,
and κj [L; λ0 ] = pj ,
1 ≤ j ≤ n.
7.3. Invariance by continuous families of isomorphisms
161
7.3 Invariance by continuous families of isomorphisms The next result shows the invariance of the partial multiplicities under multiplication by continuous families of isomorphisms. Theorem 7.3.1 (of invariance). Let r ∈ N ∪ {∞}, and let there be four K-Banach ˆ , Vˆ , and four operator families spaces U , V , U ˆ , Vˆ )), M ∈ C r (Ω, L(U ˆ F ∈ C(Ω, L(U, U)),
L ∈ C r (Ω, L(U, V )), E ∈ C(Ω, L(Vˆ , V )), such that E0 := E(λ0 ) ∈ Iso(Vˆ , V ),
ˆ F0 := F(λ0 ) ∈ Iso(U, U),
(7.6)
and L = EMF.
(7.7)
ˆ , Vˆ ), and, in such a case, Then, L0 ∈ Fred0 (U, V ) if and only if M0 ∈ Fred0 (U for every integer 0 ≤ s ≤ r there is a linear bijection between the sets of Jordan chains of length s + 1 of L and M at λ0 . In particular, the length of all Jordan chains of L at λ0 is uniformly bounded above by some integer k ≤ r if and only if so is the length of all Jordan chains of M. Moreover, in such a circumstance, κ[L; λ0 ] = κ[M; λ0 ], and, actually, when L0 ∈ / Iso(U, V ), κj [L; λ0 ] = κj [M; λ0 ],
1 ≤ j ≤ dim N [L0 ] = dim N [M0 ].
Proof. By (7.6) and (7.7), it is clear that ˆ , Vˆ ). L0 ∈ Fred0 (U, V ) if and only if M0 ∈ Fred0 (U Throughout the rest of the proof we will work under this assumption. ˆ , Vˆ ). Again by (7.6) and (7.7), L0 ∈ Iso(U, V ) if and only if M0 ∈ Iso(U Moreover, in such a case, according to Definition 7.2.5, we have that κ[L; λ0 ] = κ[M; λ0 ] = 0, which concludes the proof. So, we will subsequently assume that n = dim N [L0 ] = dim N [M0 ] ≥ 1. First, we will prove the theorem in the special case when E ∈ C r (Ω, L(Vˆ , V ))
ˆ )). and F ∈ C r (Ω, L(U, U
Fix an integer 0 ≤ s ≤ r. Then, applying the Leibniz rule to (7.7) gives L[s] = E[s] M[s] F[s] .
162
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
By Remark 7.1.2, the spaces N [M[s] ] and N [L[s] ] provide all Jordan chains of length s + 1 of M and L at λ0 , respectively. Moreover, by (7.6), E[s] ∈ Iso(Vˆ s+1 , V s+1 ),
ˆ s+1 ). F[s] ∈ Iso(U s+1 , U
Therefore, the map Is : N [L[s] ] → N [M[s] ] col[u0 , . . . , us ] → F[s] col[u0 , . . . , us ] establishes an isomorphism. Moreover, setting col[ˆ u0 , . . . , uˆs ] := F[s] col[u0 , . . . , us ] shows that u ˆ0 = 0 if and only if u0 = 0, since u ˆ0 = F0 u0 . Consequently, the map Is establishes a linear bijection between the Jordan chains of length s + 1 of M and those of L. The remaining assertions of the theorem are straightforward consequences of this fact. The same idea can be adapted to cover the general case when E and F are continuous, although carrying out all technical details in the general case is more involved, as identity (7.7) cannot be differentiated. Fix an integer 0 ≤ s ≤ r. By Theorem 4.3.2, there exists Φ ∈ P[L(U )] with Φ0 = IU for which the transformed family LΦ := LΦ satisfies Vs :=
s
(7.8)
Φ Φ Φ LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) ⊕ R[L0 ] ⊂ V.
j=1
Then, setting FΦ := FΦ, we have FΦ 0 = F0
and LΦ = EMFΦ .
(7.9)
By (7.8), according to the result already established in the regular case, it is apparent that the map N [LΦ [s] ] → N [L[s] ] col[u0 , . . . , us ] → Φ[s] col[u0 , . . . , us ] establishes a linear bijection between the Jordan chains of length s + 1 of LΦ and those of L. Consequently, to complete the proof it suffices to establish a linear bijection between the Jordan chains of length s + 1 of LΦ and those of M. According to (7.6) and (7.9), we have that M0 F0 u0 = 0 if and only if
u0 ∈ N [LΦ 0]
(7.10)
7.3. Invariance by continuous families of isomorphisms
163
and, hence, in the case s = 0, the map u0 ↦ F0 u0 indeed establishes a linear bijection between the Jordan chains of length 1 of L^Φ and those of M.
Suppose 1 ≤ s < r + 1, and let (u0, . . . , us) be a Jordan chain of L^Φ. Then, by Proposition 7.1.3, u0 ≠ 0 and
uj ∈ N[L^Φ_0] ∩ · · · ∩ N[L^Φ_{s−j}],   0 ≤ j ≤ s.
In particular,
dim (N[L^Φ_0] ∩ · · · ∩ N[L^Φ_s]) ≥ 1
and there exist s + 1 natural numbers ns ≤ ns−1 ≤ · · · ≤ n1 ≤ n0, and n0 linearly independent vectors u^1, . . . , u^{n0} in U, such that
N[L^Φ_0] ∩ · · · ∩ N[L^Φ_j] = span[u^1, . . . , u^{nj}],   0 ≤ j ≤ s.
Clearly,
N[L^Φ[s]] = span{ col[0, . . . , 0, u^i, 0, . . . , 0] : 1 ≤ i ≤ n_{s−j}, 0 ≤ j ≤ s },
where u^i occupies the (j + 1)th position, preceded by j zeros and followed by s − j zeros.
On the other hand, for each integer 0 ≤ ℓ ≤ s,
0 = Σ_{i=0}^{ℓ} L^Φ_i u_{ℓ−i} = L^Φ_0 u_ℓ + Σ_{i=1}^{ℓ} lim_{λ→λ0} [ (L^Φ(λ) − Σ_{j=0}^{i−1} (λ − λ0)^j L^Φ_j) / (λ − λ0)^i ] u_{ℓ−i}.
Moreover, owing to Remark 7.1.2, we have that, for any λ ≠ λ0,
Σ_{i=1}^{ℓ} Σ_{j=0}^{i−1} (λ − λ0)^{j−i} L^Φ_j u_{ℓ−i} = Σ_{k=−ℓ}^{−1} (λ − λ0)^k Σ_{j=0}^{ℓ+k} L^Φ_j u_{ℓ+k−j} = 0.
Thus,
0 = lim_{λ→λ0} Σ_{i=0}^{ℓ} (λ − λ0)^{−i} L^Φ(λ) u_{ℓ−i} = lim_{λ→λ0} Σ_{i=0}^{ℓ} (λ − λ0)^{−i} E(λ) M(λ) F^Φ(λ) u_{ℓ−i}.
As E0 ∈ Iso(V̂, V), the inverse operator E^{−1} is continuous in a neighborhood of λ0 and, hence, for each integer 0 ≤ ℓ ≤ s,
lim_{λ→λ0} Σ_{i=0}^{ℓ} (λ − λ0)^{−i} M(λ) F^Φ(λ) u_{ℓ−i} = 0.
For each λ ≠ λ0 and 0 ≤ ℓ ≤ s, we also have
M(λ) = Σ_{j=0}^{ℓ} (λ − λ0)^j Mj + o((λ − λ0)^ℓ)
and, consequently,
lim_{λ→λ0} Σ_{i=0}^{ℓ} Σ_{j=0}^{i} Mj (λ − λ0)^{j−i} F^Φ(λ) u_{ℓ−i} = 0,   0 ≤ ℓ ≤ s.
Therefore, for every 0 ≤ ℓ ≤ s,
lim_{λ→λ0} M[ℓ] F^Φ[[ℓ]](λ) col[u0, . . . , uℓ] = 0,   (7.11)
where
F^Φ[[ℓ]] : Ω \ {λ0} → L(U^{ℓ+1}, Û^{ℓ+1})
stands for the family defined through
F^Φ[[ℓ]](λ) := trng{F^Φ(λ), (λ − λ0)^{−1} F^Φ(λ), . . . , (λ − λ0)^{−ℓ} F^Φ(λ)}
for all λ ∈ Ω \ {λ0}.
Take 1 ≤ ℓ ≤ s and col[u0, . . . , uℓ] ∈ N[L^Φ[ℓ]]. Then, taking into account that F^Φ_0 = F0, we find from (7.11) that
0 = lim_{λ→λ0} M[ℓ] F^Φ[[ℓ]](λ) col[u0, . . . , uℓ]
  = M[ℓ] diag{F0, . . . , F0} col[u0, . . . , uℓ] + lim_{λ→λ0} M[ℓ] trng{0, (λ − λ0)^{−1} F^Φ(λ), . . . , (λ − λ0)^{−ℓ} F^Φ(λ)} col[u0, . . . , uℓ]
  = col[M0 F0 u0, . . . , Mℓ F0 u0] + col[0, M[ℓ−1] col[F0 u1, . . . , F0 uℓ]]
    + lim_{λ→λ0} M[ℓ] col[0, trng{(λ − λ0)^{−1} F^Φ(λ), . . . , (λ − λ0)^{−ℓ} F^Φ(λ)} col[u0, . . . , u_{ℓ−1}]]
  = col[M0 F0 u0, . . . , Mℓ F0 u0] + col[0, M[ℓ−1] col[F0 u1, . . . , F0 uℓ]]
    + lim_{λ→λ0} col[0, (λ − λ0)^{−1} M[ℓ−1] F^Φ[[ℓ−1]](λ) col[u0, . . . , u_{ℓ−1}]].
Therefore, identifying the last rows shows that
lim_{λ→λ0} (λ − λ0)^{−1} M[ℓ−1] F^Φ[[ℓ−1]](λ) col[u0, . . . , u_{ℓ−1}]
  = − col[M1 F0 u0, . . . , Mℓ F0 u0] − M[ℓ−1] col[F0 u1, . . . , F0 uℓ].
By the result of Exercise 1, M[ℓ−1] ∈ Fred0(Û^ℓ, V̂^ℓ), so the space R[M[ℓ−1]] is closed, and, hence, we find from this identity that there exists
col[û0, . . . , û_{ℓ−1}] ∈ Û^ℓ
such that
M[ℓ−1] col[û0, . . . , û_{ℓ−1}] = − col[M1 F0 u0, . . . , Mℓ F0 u0] − M[ℓ−1] col[F0 u1, . . . , F0 uℓ].
(7.12)
To summarize, for each 1 ≤ ℓ ≤ s and col[u0, . . . , uℓ] ∈ N[L^Φ[ℓ]] there exists
col[û0, . . . , û_{ℓ−1}] ∈ Û^ℓ
satisfying (7.12).
For every 1 ≤ i ≤ ns, we have that col[u^i, 0, . . . , 0] ∈ N[L^Φ[s]] (with s trailing zeros) and, hence, due to (7.12), there exists
col[û^i_0, . . . , û^i_{s−1}] ∈ Û^s
such that
M[s−1] col[û^i_0, . . . , û^i_{s−1}] = − col[M1 F0 u^i, . . . , Ms F0 u^i].
Now, let 1 ≤ j ≤ s and nj < i ≤ n_{j−1}. Then,
u^i ∈ span[u^1, . . . , u^{n_{j−1}}] = N[L^Φ_0] ∩ · · · ∩ N[L^Φ_{j−1}]
and, hence,
col[u^i, 0, . . . , 0] ∈ N[L^Φ[j−1]]   (with j − 1 trailing zeros).
(7.13)
Thus, by (7.12), there exists
col[û^i_0, . . . , û^i_{j−1}] ∈ Û^j
such that
M[j−1] col[û^i_0, . . . , û^i_{j−1}] = − col[M1 F0 u^i, . . . , Mj F0 u^i].   (7.14)
Moreover, if 1 ≤ i ≤ n0, then u^i ∈ N[L^Φ_0], and (7.10) implies
M0 F0 u^i = 0.   (7.15)
As the vectors u^1, . . . , u^{n0} are linearly independent, there exist operators F1, . . . , Fs ∈ L(U, Û) such that
Fj u^i = û^i_{j−1},   1 ≤ j ≤ s,   1 ≤ i ≤ nj.
Since F0 ∈ Iso(U, Û), we have that
F{s} := trng{F0, F1, . . . , Fs} ∈ Iso(U^{s+1}, Û^{s+1}).
Moreover,
F{s} N[L^Φ[s]] ⊂ N[M[s]].
Indeed, by (7.13), (7.14), and (7.15), we have that, for every 0 ≤ j ≤ s − 1 and 1 ≤ i ≤ n_{s−j},
M[s] F{s} col[0, . . . , 0, u^i, 0, . . . , 0]   (j leading and s − j trailing zeros)
  = M[s] col[0, . . . , 0, F0 u^i, F1 u^i, . . . , F_{s−j} u^i]   (j leading zeros)
  = M[s] col[0, . . . , 0, F0 u^i, û^i_0, . . . , û^i_{s−1−j}]
  = col[0, . . . , 0, M0 F0 u^i, . . . , M_{s−j} F0 u^i] + col[0, . . . , 0, M[s−1−j] col[û^i_0, . . . , û^i_{s−1−j}]]   (j + 1 leading zeros in the second term)
  = col[0, . . . , 0, M1 F0 u^i, . . . , M_{s−j} F0 u^i] + col[0, . . . , 0, M[s−1−j] col[û^i_0, . . . , û^i_{s−1−j}]]   (j + 1 leading zeros)
  = 0.
Moreover, for every 1 ≤ i ≤ n0,
M[s] F{s} col[0, . . . , 0, u^i] = col[0, . . . , 0, M0 F0 u^i] = 0   (s leading zeros),
because of (7.15). Therefore, the map F{s} : N[L^Φ[s]] → N[M[s]] is well defined, linear, continuous and injective. Moreover, it sends Jordan chains of L^Φ into Jordan chains of M, because if u0 ∈ U \ {0}, then F^Φ_0 u0 = F0 u0 ≠ 0. Therefore, F{s} establishes a linear injection from the set of Jordan chains of length s + 1 of L^Φ at λ0 into the set of Jordan chains of length s + 1 of M. As we have already constructed a linear bijection between the Jordan chains of length s + 1 of L^Φ and those of L, we have actually established a linear injection from the set of Jordan chains of length s + 1 of L into the set of Jordan chains of length s + 1 of M. By the symmetric nature of the problem, transversalizing on this occasion M instead of L, it is easily realized that there is another linear injection from the set of Jordan chains of length s + 1 of M into the set of Jordan chains of length s + 1 of L. Consequently, we have actually established a linear bijection between both sets of Jordan chains, which concludes the proof.
7.4 Local Smith form for C ∞ matrix families As usual, for any integer m ≥ 1, we identify the space of m × m square matrices Mm (K) with L(Km ). The following result establishes the importance of the family considered in the Example of Section 7.2. Theorem 7.4.1. Let m ≥ 1 be an integer, r ∈ {∞, ω} and λ0 ∈ Ω. Consider an operator family L ∈ C r (Ω, Mm (K)) such that n := dim N [L0 ] ≥ 1 and no Jordan chain of L at λ0 can be continued indefinitely. Then, there exist an open subset Ω ⊂ Ω with λ0 ∈ Ω , and two families E, F ∈ C r (Ω , Mm (K)) such that E0 , F0 ∈ Iso(Km ) and L(λ) = E(λ) D(λ) F(λ),
λ ∈ Ω ,
(7.16)
where D ∈ P[Mm(K)] is the family defined, for each λ ∈ K, by
D(λ) := diag{(λ − λ0)^{κ1[L;λ0]}, . . . , (λ − λ0)^{κn[L;λ0]}, 1, . . . , 1}   (m − n trailing 1's).
(7.17)
Remark 7.4.2. If n = m, then no entry 1 appears in the expression of D in (7.17). In general, D exhibits as many 1’s as indicated by the difference m − n. Obviously, the result is also satisfied if n = 0. In such a case, D = Im and one can make the choice E = Im and F = L. Proof of Theorem 7.4.1. It is based on the Gaussian elimination method, and proceeds by induction on m. Suppose m = 1. Then, L is a K-valued function and L0 = 0. Moreover, since no Jordan chain of L at λ0 can be continued indefinitely, 1 ≤ κ := κ[L; λ0 ] = κ1 [L; λ0 ] = ordλ0 L < ∞ and, hence, there exists F ∈ C r (Ω) such that F0 = 0 and L(λ) = (λ − λ0 )κ F(λ),
λ ∈ Ω.
Therefore, (7.16) holds true by making the choice E(λ) = 1 and D(λ) = (λ − λ0 )κ for each λ ∈ Ω. Suppose m ≥ 2 and the result to be true for every family in C r (Ω, Mm−1 (K)) satisfying the requirements of the theorem. Let L ∈ C r (Ω, Mm (K)) satisfy the requirements of the theorem. If ordλ0 L = ∞, then Lj = 0 for all j ≥ 0 and, hence, every Jordan chain of L at λ0 can be continued indefinitely. Thus, (7.18) ordλ0 L < ∞. Now, for each 1 ≤ i, j ≤ m, we denote by (i,j) the (i, j)th entry of L. Let (i0 , j0 ) be such that 0 ≤ h := ordλ0 (i0 ,j0 ) = min ordλ0 (i,j) . 1≤i,j≤m
By (7.18), h < ∞. Let P, Q ∈ Mm (K) be two permutation matrices (constant and ˜ defined by invertible) for which the family L ˜ L(λ) := P L(λ)Q,
λ ∈ Ω,
(7.19)
satisfies
h = ord_{λ0} ℓ̃_{(m,m)} = min_{1≤i,j≤m} ord_{λ0} ℓ̃_{(i,j)},
where we have denoted L̃ = (ℓ̃_{(i,j)})_{1≤i,j≤m}.
By the choice of h, for each (i, j) ∈ {1, . . . , m}2 there exists b(i,j) ∈ C r (Ω) such that λ ∈ Ω. ˜(i,j) (λ) = (λ − λ0 )h b(i,j) (λ), ˜ ⊂ Ω of λ0 Moreover, since b(m,m) (λ0 ) = 0, there exists an open neighborhood Ω such that ˜ b−1 (m,m) (0) ∩ Ω = ∅.
Now we consider the matrix families Ẽ, F̃ ∈ C^r(Ω̃, Mm(K)) defined as follows: Ẽ coincides with the identity matrix except for its last column, whose ith entry equals −b_{(i,m)}/b_{(m,m)} for 1 ≤ i ≤ m − 1, while F̃ coincides with the identity matrix except for its last row, whose jth entry equals −b_{(m,j)}/b_{(m,m)} for 1 ≤ j ≤ m − 1; in both cases the (m, m) entry equals 1. Clearly, Ẽ and F̃ take invertible values. Moreover, for each λ ∈ Ω̃, we have that
Ẽ(λ) L̃(λ) F̃(λ) = diag{M̃(λ), ℓ̃_{(m,m)}(λ)} = diag{M̃(λ), (λ − λ0)^h} diag{I_{m−1}, b_{(m,m)}(λ)},
(7.20)
where M̃ ∈ C^r(Ω̃, M_{m−1}(K)) stands for the matrix family whose (i, j)th entry is given by
m̃_{(i,j)} := ℓ̃_{(i,j)} − ℓ̃_{(m,j)} b_{(i,m)}/b_{(m,m)},   1 ≤ i, j ≤ m − 1.
Since diag{I_{m−1}, b_{(m,m)}(λ0)} ∈ Iso(K^m), by (7.19) and (7.20), we find from Theorem 7.3.1 that no Jordan chain of diag{M̃, ℓ̃_{(m,m)}} can be continued indefinitely, as it was assumed to occur for L. On the other hand, a vector
(col[u0, 0], . . . , col[us, 0]) ∈ (K^m)^{s+1}
is a Jordan chain of diag{M̃, ℓ̃_{(m,m)}} at λ0 whenever the vector (u0, . . . , us) ∈ (K^{m−1})^{s+1} is a Jordan chain of M̃ at λ0. Therefore, no Jordan chain of M̃ can be continued indefinitely. Consequently, by the induction hypothesis, there exist an open neighborhood Ω̂ ⊂ Ω̃ of λ0 and two families
Ê, F̂ ∈ C^r(Ω̂, M_{m−1}(K))
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
ˆ 0, F ˆ 0 ∈ Iso(Km−1 ) and such that E ˜ ˆ D(λ) ˜ ˆ M(λ) = E(λ) F(λ),
ˆ λ ∈ Ω,
(7.21)
˜ is given through where D ˜ ˜ ˜ D(λ) := diag (λ − λ0 )κ1 [M;λ0 ] , . . . , (λ − λ0 )κn˜ [M;λ0 ] , 1, . . . , 1 m−1−˜ n
˜ 0 ]. By (7.21), we have that, for each λ ∈ Ω, ˆ and n ˜ := dim N [M ˜ ˆ ˜ ˆ diag{M(λ), (λ − λ0 )h } = diag{E(λ), 1} · diag{D(λ), (λ − λ0 )h } · diag{F(λ), 1}. Therefore, substituting this identity into (7.20), we find from (7.19) that there ˆ of λ0 such that exists an open neighborhood Ω ⊂ Ω L(λ) = E (λ)D (λ)F (λ),
λ ∈ Ω ,
where, for each λ ∈ Ω , ˜ −1 diag{E(λ), ˆ E (λ) := P −1 E(λ) 1}, ˜ D (λ) := diag{D(λ), (λ − λ0 )h }, ˆ ˜ −1 Q−1 . 1} diag{Im−1 , b(m,m) (λ)}F(λ) F (λ) := diag{F(λ), According to Theorem 7.3.1, n = dim N [L0 ] = dim N [D0 ] and κj [L; λ0 ] = κj [D ; λ0 ],
1 ≤ j ≤ n.
Note that n ˜ = n if h = 0, while n ˜ = n − 1 if h ≥ 1. Moreover, according to the Example of Section 7.2, there exist two permutation matrices P and Q such that D (λ) := P diag (λ − λ0 )κ1 [L;λ0 ] , . . . , (λ − λ0 )κn [L;λ0 ] , 1, . . . , 1 Q . m−n
Consequently, making the choice E(λ) := E (λ)P , concludes the proof.
F(λ) := Q F (λ),
λ ∈ Ω ,
Based on the previous result, it becomes rather natural to introduce the following concept of local Smith form.
7.5. Canonical sets at transversal eigenvalues
171
Definition 7.4.3. Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ) and n := dim N [L0 ] ≥ 1. The operator family L is said to admit a local Smith form at λ0 when there exist an open neighborhood Ω ⊂ Ω of λ0 , two families E ∈ C(Ω , L(U, V ))
and F ∈ C(Ω , L(U ))
with E0 ∈ Iso(U, V ) and F0 ∈ Iso(U ), a direct sum topological decomposition U = U0 ⊕ U1 , with n = dim U0 , and n integer positive numbers k1 ≥ · · · ≥ kn such that L(λ) = E(λ) [S(λ) ⊕ IU1 ] F(λ), where
λ ∈ Ω ,
S(λ) = diag (λ − λ0 )k1 , . . . , (λ − λ0 )kn ,
λ ∈ K.
In such a case, S is said to be a local Smith form of L at λ0 . According to Theorem 7.3.1, if the lengths of all Jordan chains of L at λ0 are uniformly bounded above by some integer k ≤ r, necessarily kj = κj [L; λ0 ],
1 ≤ j ≤ n,
and, consequently, the local Smith form should be unique, if it exists, but the rigorous proof of this feature is postponed until Proposition 7.10.2. Theorem 7.4.1 established that every C ∞ family of square matrices for which no Jordan chain can be continued indefinitely indeed possesses a local Smith form. Theorem 7.8.1 will show that this condition on the Jordan chains of L is not only sufficient for the existence of the local Smith form but also necessary. Necessary and sufficient conditions for the existence of the local Smith form for general operator-valued families of class C r will be established by Theorem 7.8.3. The Smith form of a family is the simplest canonical form that can be obtained from it through preand post-multiplication by isomorphism families, of course.
7.5 Canonical sets at transversal eigenvalues In this section we combine the transversalization Theorem 4.3.2 with the invariance Theorem 7.3.1 to provide a scheme to calculate the partial κ-multiplicities and, thus, building up, in a rather systematic way, canonical sets of Jordan chains. Proposition 7.5.1. Let r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) satisfy that L0 ∈ Fred0 (U, V ) and λ0 is a ν-transversal eigenvalue for some integer 1 ≤ ν ≤ r. Set j := dim Lj (N [L0 ] ∩ · · · ∩ N [Lj−1 ]),
1 ≤ j ≤ ν.
(7.22)
172
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
Then, every canonical set of Jordan chains of L at λ0 consists of exactly j Jordan chains of length j for all j ∈ {1, . . . , ν}. Consequently, the partial κ-multiplicities of L at λ0 are given through ·· = 1 ν = · · · = ν > ν − 1 = · · · = ν − 1 > · · · > 1 = · ν
(7.23)
1
ν−1
and, therefore, κ1 [L; λ0 ] = ν and κ[L; λ0 ] =
n
κj [L; λ0 ] =
j=1
ν
i i = χ[L; λ0 ],
(7.24)
i=1
where χ is the algebraic multiplicity introduced by Definition 4.3.3. Proof. Thanks to Proposition 7.1.3, L cannot admit a Jordan chain of length greater than ν. Thus, the length of all Jordan chains of L is bounded above by ν. Set n := dim N [L0 ]. According to Proposition 4.2.2, we have ν 6
N [Li ] = {0}
i=0
and dim
j 6
N [Li ] =
i=0
ν
0 ≤ j ≤ ν − 1.
i ,
i=j+1
Moreover, j 6
N [Li ] ⊂
i=0
j−1 6
N [Li ] ⊂ N [L0 ],
1 ≤ j ≤ ν.
i=0
Therefore, there exists a basis {u1 , . . . , un0 } of N [L0 ] such that j 6
N [Li ] = span u1 , . . . , u νi=j+1 i ,
0 ≤ j ≤ ν − 1.
i=0
Thanks to Proposition 7.1.3, it is easy to check that the following ordered set provides us with a canonical set of Jordan chains of L at λ0 : u , 0, . . . , 0; u2 , 0, . . . , 0; · · · ; uν , 0, . . . , 0; 1 ν
ν
ν
u +1 , 0, . . . , 0; uν +2 , 0, . . . , 0; · · · ; uν +ν−1 , 0, . . . , 0; · · · ; ν ν−1
ν−1
ν−1
u +···+ +1 , 0; uν +···+3 +2 , 0; · · · ; uν +···+3 +2 , 0; ν 3 2 2 2 uν +···+3 +2 +1 ; uν +···+3 +2 +2 ; · · · ; uν +···+2 +1 . 1
1
1
7.5. Canonical sets at transversal eigenvalues
173
It consists of j Jordan chains of length j for all 1 ≤ j ≤ ν. As (7.23) and (7.24) follow straight away from Definitions 7.2.5 and 4.3.3, this concludes the proof. Naturally, Proposition 7.5.1 characterizes the local Smith form given by Theorem 7.4.1. Corollary 7.5.2. Consider m ∈ N∗ and r ∈ {∞, ω}. Let L ∈ C r (Ω, Mm (K)) be a family for which λ0 ∈ Ω is a ν-transversal eigenvalue for some integer ν ≥ 1. Then, the matrix D(λ) introduced in (7.17) equals diag (λ − λ0 )ν , . . . , (λ − λ0 )ν , . . . , λ − λ0 , . . . , λ − λ0 , 1, . . . , 1 , m−n
1
ν
where n = dim N [L0 ] and the j ’s are given through (7.22). In particular, det D(λ) = (λ − λ0 )κ[L;λ0 ] = (λ − λ0 )χ[L;λ0 ] ,
λ ∈ K.
Actually, according to Theorems 4.3.2 and 7.3.1, the multiplicities χ and κ must coincide at any algebraic eigenvalue of the family, which provides a local Smith form in the finite-dimensional case for C ∞ families. Precisely, the following result is satisfied. Theorem 7.5.3. Let r ∈ N∗ ∪{∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ), such that λ0 ∈ Algν (L) for some ν ∈ N ∩ [0, r]. Let Φ ∈ C r (Ω, L(U )) be such that Φ0 = IU for which λ0 is a ν-transversal value of LΦ := LΦ, and set Φ Φ j := dim LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]),
1 ≤ j ≤ ν.
Then, every canonical set of Jordan chains of L at λ0 possesses exactly j chains of length j for all 1 ≤ j ≤ ν. Consequently, κ[L; λ0 ] =
ν
j j = χ[L; λ0 ].
j=1
If, in addition, dim U = dim V = m for some integer m ≥ 1 and r ∈ {∞, ω}, then the family G ∈ P[Mm (K)] defined for each λ ∈ K as S(λ) := diag (λ − λ0 )ν , . . . , (λ − λ0 )ν , . . . , λ − λ0 , . . . , λ − λ0 1
ν
provides us with a local Smith form of L at λ0 . Proof. The existence of the transversalizing family Φ is guaranteed by Theorem 5.3.1. Thanks to Proposition 7.5.1, every canonical set of Jordan chains of LΦ at λ0 consists of j Jordan chains of length j for all 1 ≤ j ≤ ν and, consequently, by Definition 4.3.3, χ[L; λ0 ] = χ[LΦ ; λ0 ] = κ[L; λ0 ] =
ν j=1
j j .
174
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
The last assertion of the theorem follows straight from Corollary 7.5.2 and Definition 7.4.3. It should be noted that D(λ) = S(λ) ⊕ Im−dim N [L0 ] ,
λ ∈ K,
where D(λ) is the matrix introduced in (7.17). This concludes the proof.
7.6 Coincidence of the multiplicities κ and χ The next result, of an auxiliary nature, provides us with the dimensions of the kernels of the operators L[s] = trng{L0 , . . . , Ls } for every integer 1 ≤ s ≤ r and L ∈ C r (Ω, L(U, V )). Proposition 7.6.1. Let r ∈ N∗ ∪ {∞} and L ∈ C r (Ω, L(U, V )) be such that L0 ∈ Fred0 (U, V ) and λ0 ∈ Eig(L). For a given integer 1 ≤ s ≤ r, let Φ ∈ C r (Ω, L(U )) be with Φ0 ∈ Iso(U ) for which the family LΦ := LΦ satisfies Vs :=
s
Φ Φ Φ LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) ⊕ R[L0 ] ⊂ V
j=1
and set nj
Φ := dim N [LΦ 0 ] ∩ · · · ∩ N [Lj ],
0 ≤ j ≤ s,
j
Φ Φ := dim LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]),
1 ≤ j ≤ s.
Then (see (7.1)), dim N [L[s] ] =
s
h h + (s + 1)ns .
h=1
Proof. Fix an integer 1 ≤ s ≤ r. The existence of Φ is guaranteed by Theorem 4.3.2. According to Lemma 4.1.1, ⎛ ⎞ ⎛ ⎞ s s−1 6 6 ⎝ ⎠×⎝ ⎠ × · · · × N [LΦ N [LΦ N [LΦ N [LΦ j ] j ] 0 ], [s] ] = j=0
j=0
and, hence, dim N [LΦ [s] ] =
s
nj .
j=0
Owing to Proposition 4.2.2, we have that nj = ns +
s h=j+1
h ,
0 ≤ j ≤ s − 1.
7.6. Coincidence of the multiplicities κ and χ
175
Thus, dim N [LΦ [s] ] = ns +
s−1
⎛ ⎝ ns +
j=0
s
⎞ h ⎠ =
h=j+1
s
h h + (s + 1)ns .
h=1
Finally, according to the Leibniz rule, LΦ [s] = L[s] Φ[s] and, therefore, dim N [LΦ [s] ] = dim N [L[s] ], because Φ[s] ∈ Iso(U s+1 ). This concludes the proof.
Now, for each integer 1 ≤ s ≤ r we consider the map Js : N [L[s−1] ] → N [L[s] ] y → col[0, y], which is a linear injection and, hence, dim N [L[s−1] ] ≤ dim N [L[s] ],
1 ≤ s < r + 1.
Obviously, the map Js is an isomorphism if and only if dim N [L[s−1] ] = dim N [L[s] ]. The following result shows that λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r if and only if ν is the lowest integer k for which Jk N [L[k−1] ] = N [L[k] ], much like in the classical case when U = V = Km for some integer m ≥ 1 and L(λ) = λIU − T,
λ ∈ K,
for a given T ∈ Mm (K), where we already know that any eigenvalue λ0 ∈ Eig(L) = σ(T ) must be ν-algebraic for some integer ν ≥ 1 and that ν is the lowest integer k for which N [(λ0 IU − T )k ] = N [(λ0 IU − T )k+1 ]. Such similarities will be completely explained in Chapter 8, where the attention will be focused on these classical families. Theorem 7.6.2. Let r ∈ N∗ ∪{∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ). Then, λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r if and only if dim N [L[j−1] ] < dim N [L[j] ], dim N [L[ν−1] ] = dim N [L[ν] ].
for all 1 ≤ j ≤ ν − 1,
(7.25)
176
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
Moreover, in such a case, dim N [L[ν−1] ] = χ[L; λ0 ] = κ[L; λ0 ]
(7.26)
and dim N [L[ν−1] ] = dim N [L[] ],
ν ≤ < r + 1.
(7.27)
Proof. Assume that λ0 ∈ Algν (L) for some 1 ≤ ν < r + 1. Then, according to Theorem 5.3.1, there exists Φ ∈ P[L(U )] with Φ0 = IU such that λ0 is a ν-transversal eigenvalue of LΦ := LΦ. Thus, ν
Φ Φ Φ LΦ j (N [L0 ] ∩ · · · ∩ N [Lj−1 ]) ⊕ R[L0 ] = V,
j=1
and, setting nj j
Φ := dim N [LΦ 0 ] ∩ · · · ∩ N [Lj ], Φ dim LΦ j (N [L0 ]
:=
∩ ···∩
0 ≤ j ≤ ν, 1 ≤ j ≤ ν,
N [LΦ j−1 ]),
(7.28)
we have from Proposition 4.2.2 that n0 ≥ n1 ≥ · · · ≥ nν−1 ≥ 1,
nν = 0.
Thus, according to Propositions 7.6.1, 4.2.2 and 7.5.1, dim N [L[ν−1] ] =
ν−1
h h + νnν−1 =
h=1
ν
h h = dim N [L[ν] ] = χ[L; λ0 ] = κ[L; λ0 ],
h=1
whereas, for every integer 1 ≤ j ≤ ν − 1 (if there is any), dim N [L[j−1] ] =
j−1
h h +jnj−1 =
h=1
j h=1
h h +jnj <
j
h h +(j+1)nj = dim N [L[j] ],
h=1
because nj ≥ 1. This concludes the proof of (7.25) and (7.26). To show (7.27) we will proceed by contradiction. Suppose it fails. Then, there exists ν ≤ < r such that dim N [L[ν] ] = dim N [L[ν+j] ], 0 ≤ j ≤ − ν, but dim N [L[] ] < dim N [L[+1] ]. Consequently, the map J+1 : N [L[] ] → N [L[+1] ] y → col[0, y]
(7.29)
7.6. Coincidence of the multiplicities κ and χ
177
is not surjective and, hence, there exists (u0 , . . . , u+1 ) ∈ N [L[+1] ] with u0 = 0. In other words, L admits a Jordan chain of length + 2 at λ0 . Thus, by Remark 7.1.2, it possesses a Jordan chain of length + 1 and, consequently, the map J : N [L[−1] ] → N [L[] ] y → col[0, y] cannot be surjective. Therefore, dim N [L[−1] ] < dim N [L[] ], which is a contradiction, and concludes the proof of (7.27). To show the converse implication, suppose (7.25) occurs for some integer 1 ≤ ν ≤ r. By Theorem 4.3.2, there exists Φ ∈ P[L(U )] with Φ0 = IU such that the subspaces ν−1 6 Φ Φ Φ R[LΦ ], L (N [L ]), . . . , L ( N [LΦ 0 1 0 ν j ]) j=0
are in direct sum, where LΦ := LΦ. Setting (7.28), we find from Propositions 7.6.1 and 4.2.2 that dim N [L[ν−1] ] =
ν−1
h h + νnν−1 =
h=1
ν
h h + νnν
h=1
and dim N [L[ν] ] =
ν
h h + (ν + 1)nν .
h=1
Therefore, by (7.25), nν = 0. Similarly, ν−1 h=1
h h + (ν − 1)nν−1 =
ν−2
h h + (ν − 1)nν−2 = dim N [L[ν−2] ]
h=1
< dim N [L[ν−1] ] =
ν−1
h h + νnν−1
h=1
and, hence, 1 ≤ nν−1 = nν + ν = ν . Therefore, according to Proposition 4.2.2, λ0 is a ν-transversal eigenvalue of LΦ . Consequently, by Theorem 5.3.1, we conclude that λ0 ∈ Algν (L). This ends the proof.
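The criterion (7.25)-(7.26) is also convenient computationally, since the operators L[s] = trng{L0, . . . , Ls} are finite matrices whenever U and V are finite dimensional. The sketch below is a hypothetical illustration only (Python with SymPy, applied to a stand-in polynomial family rather than to any family of this chapter): it builds the block lower-triangular matrices L[s] from the Taylor coefficients at λ0, detects ν as the first integer at which two consecutive kernel dimensions coincide, and reads off κ[L; λ0] = dim N[L[ν−1]].

    import sympy as sp

    lam = sp.symbols('lambda')
    lam0 = 0
    L = sp.Matrix([[lam**2, lam],
                   [0,      lam]])          # stand-in family, lam0 = 0
    m = L.shape[0]

    def taylor_coeff(M, k):
        # k-th Taylor coefficient L_k of the family at lam0
        return M.applyfunc(lambda e: sp.diff(e, lam, k).subs(lam, lam0) / sp.factorial(k))

    def trng(blocks):
        """Block lower-triangular Toeplitz matrix trng{L0, ..., Ls}."""
        s = len(blocks) - 1
        rows = [[blocks[i - j] if i >= j else sp.zeros(m, m) for j in range(s + 1)]
                for i in range(s + 1)]
        return sp.Matrix(sp.BlockMatrix(rows))

    coeffs = [taylor_coeff(L, k) for k in range(5)]
    dims = [len(trng(coeffs[:s + 1]).nullspace()) for s in range(5)]
    print(dims)
    # For this stand-in family: [2, 3, 3, 3, 3].  The first repetition occurs
    # between s = 1 and s = 2, so nu = 2, and dim N[L_[nu-1]] = 3 equals
    # kappa[L; lam0] = chi[L; lam0], consistently with (7.25)-(7.26).

For the stand-in family used in the sketch the printed dimensions are [2, 3, 3, 3, 3], so ν = 2 and κ[L; λ0] = 3, in agreement with its partial multiplicities 2 and 1.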
More generally, the following result, establishing the coincidence of χ, µ and κ, is satisfied. Theorem 7.6.3 (of coincidence of χ and κ). Let r ∈ N∪{∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ). Then, κ[L; λ0 ] is defined if and only if so is χ[L; λ0 ]; in such a case, κ[L; λ0 ] = χ[L; λ0 ]. Moreover, the following conditions are equivalent: i) χ[L; λ0 ] < ∞. ii) λ0 ∈ Algν (L) for some integer 0 ≤ ν ≤ r. iii) The length of any Jordan chain of L is bounded above by ν < r + 1. Proof. Some of the following alternatives occur. Either L0 ∈ Iso(U, V ), or λ0 ∈ Algν (L) or / λ0 ∈
for some integer 1 ≤ ν ≤ r, #
Algk (L).
0≤k
Suppose L0 ∈ Iso(U, V ). Then, by definition, both multiplicities are defined and κ[L; λ0 ] = χ[L; λ0 ] = 0. Moreover, in such a case, (ii) holds with ν = 0, and L does not admit a Jordan chain. Therefore, the equivalence of (i)–(iii) trivially occurs when ν = 0. Suppose λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r. Then, according to Theorem 7.6.2, conditions (7.25) and (7.26) are satisfied and, hence, κ[L; λ0 ] = χ[L; λ0 ] < ∞, by Theorem 5.3.1. In particular, this shows that (ii) implies (i). Actually, the equivalence between (i) and (ii) has already been established by Theorem 5.3.1. In addition, thanks to (7.25) of Theorem 7.6.2, the linear injection Jν : N [L[ν−1] ] → N [L[ν] ] y → col[0, y] is an isomorphism and, therefore, L cannot admit any Jordan chain at λ0 of length ν + 1. By Remark 7.1.2, the length of any Jordan chain of L must be bounded above by ν and, consequently, (ii) also implies (iii). Now, we will show the validity of the converse implications. Suppose (iii) and let 1 ≤ ν ≤ r be an integer such that L possesses a Jordan chain at λ0 of length ν, but it does not admit a Jordan chain of length ν + 1. Then, the fact (u0 , . . . , uν ) ∈ N [L[ν] ]
7.6. Coincidence of the multiplicities κ and χ
179
implies u0 = 0 and, hence, Jν is an isomorphism. Consequently, dim N [L[ν−1] ] = dim N [L[ν] ].
(7.30)
Arguing as in the proof of Theorem 7.6.2, let Φ ∈ P[L(U )] be such that Φ0 = IU and the subspaces ν−1 6
Φ Φ Φ R[LΦ 0 ], L1 (N [L0 ]), . . . , Lν (
N [LΦ j ])
j=0
are in direct sum, where LΦ := LΦ. Setting (7.28), we find from Propositions 7.6.1 and 4.2.2 that dim N [L[ν−1] ] =
ν−1
h h + νnν−1 =
h=1
and dim N [L[ν] ] =
ν
h h + νnν
h=1 ν
h h + (ν + 1)nν .
h=1
Therefore, by (7.30), we find that Φ nν = dim N [LΦ 0 ] ∩ · · · ∩ N [Lν ] = 0.
Let denote the lowest integer 0 ≤ k ≤ ν for which nk = 0. Due to Proposition 4.2.2, λ0 is an -transversal eigenvalue of LΦ and, hence, by Theorem 5.3.1, λ0 ∈ Alg (L). Moreover, as we have already seen that (ii) implies (iii), the length of any Jordan chain of L must be bounded above by and, so, ν ≤ . Therefore, ν = and λ0 ∈ Algν (L), which shows that (iii) indeed implies (ii). Consequently, we have established the equivalence between (i), (ii), and (iii). Throughout the rest of the proof we suppose # / Algk (L). λ0 ∈ 0≤k
Either r = ∞ or r ∈ N. Suppose r = ∞. Then, according to Theorem 5.3.1, we have that χ[L; λ0 ] = ∞. Moreover, by Theorem 7.6.2, for every k ≥ 1, 1 ≤ dim N [L[k−1] ] < dim N [L[k] ]
(7.31)
and, therefore, the map Jk cannot be surjective. Consequently, L possesses a Jordan chain of length k + 1 for all k ≥ 1 and, by Definition 7.2.5, κ[L; λ0 ] = ∞ = χ[L; λ0 ].
180
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
In the case r ∈ N, (7.31) is still valid for each 1 ≤ k ≤ r and, therefore, L admits a Jordan chain of length r + 1. Consequently, κ[L; λ0 ] is not defined, neither is χ[L; λ0 ], which concludes the proof.
7.7 Labeling the vectors of the canonical sets For later purposes, it is sometimes appropriate to change the notation introduced for the canonical sets of Jordan chains. Suppose λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r and let (7.5) be a canonical set of Jordan chains of L at λ0 , as described in Section 7.2. Then, the partial multiplicities of L at λ0 are κ1 ≥ · · · ≥ κn , with κn ≥ 1 and n := dim N [L0 ]. By Theorem 7.6.2, (7.25) is satisfied and, therefore, ν equals the maximum length of the Jordan chains of L at λ0 , in other words, κ1 = ν. Let 1 , . . . , ν ≥ 0 be the integers introduced through Theorem 7.5.3 so that the partial multiplicities of L at λ0 are (7.23). Now, we choose the vectors xhi,j ∈ U,
1 ≤ j ≤ ν,
1 ≤ h ≤ j ,
0 ≤ i ≤ j − 1,
in such a way that 1 ν ν x0,ν , . . . , x1ν−1,ν ; · · · ; x0,ν , . . . , xν−1,ν ;
ν−1 ν−1 , . . . , xν−2,ν−1 ;··· ; x10,ν−1 , . . . , x1ν−2,ν−1 ; · · · ; x0,ν−1
j j 1 , . . . , xj−1,j ; · · · ; x10,1 ; · · · ; x0,1 x10,j , . . . , x1j−1,j ; · · · ; x0,j
(7.32)
is a canonical set of Jordan chains of L at λ0 . The next result constructs a basis of N [L[ν−1] ] from this canonical set of Jordan chains. Proposition 7.7.1. Let r ∈ N∗ ∪ {∞} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ) such that λ0 ∈ Algν (L) for some integer 0 ≤ ν ≤ r. Let (7.32) be a canonical set of Jordan chains of L at λ0 . Then,
B :=
⎧ ⎪ ⎨ ⎪ ⎩
col[0, . . . , 0, xh0,j , . . . , xhi,j ] ∈ U ν :
is a basis of N [L[ν−1] ].
ν−i−1
1 ≤ h ≤ j 0≤i≤j−1 1≤j≤ν
⎫ ⎪ ⎬ ⎪ ⎭
7.8. Characterizing the existence of the Smith form
181
ν Proof. By construction, the set B is contained in N [L[ν−1] ] and has j=1 j j vectors. Thus, by (7.26), it suffices to show the linear independence of the vectors of B. Let 1 ≤ h ≤ j , 0 ≤ i ≤ j − 1, 1 ≤ j ≤ ν, αhi,j ∈ K, be such that j j−1 ν j=1 i=0 h=1
αhi,j col[0, . . . , 0, xh0,j , . . . , xhi,j ] = 0.
(7.33)
ν−i−1
Focusing our attention on the first component of (7.33), we find that ν ν x0,ν = 0. α1ν−1,ν x10,ν + · · · + αν−1,ν
As (7.32) is a canonical set of Jordan chains, this implies αhν−1,ν = 0,
1 ≤ h ≤ ν .
Now, using induction on r ≥ 1, we will show that αhν−r,j = 0,
1 ≤ h ≤ j ,
ν − r + 1 ≤ j ≤ ν,
1 ≤ r ≤ ν.
(7.34)
We have just shown the result for r = 1. Let 2 ≤ ρ ≤ ν and suppose (7.34) is satisfied for all r ∈ {1, . . . , ρ − 1}. Then, the ρth component of (7.33) provides 0=
ν
j j−1
αhi,j xhi−(ν−ρ),j
j=ν−ρ+1 i=ν−ρ h=1
=
ν
j
αhν−ρ,j xh0,j ,
j=ν−ρ+1 h=1
because, due to (7.34), αhi,j = 0 if ν − ρ + 1 ≤ i < j and 1 ≤ h ≤ j . As (7.32) is a canonical set of Jordan chains, we obtain that αhν−ρ,j = 0,
1 ≤ h ≤ j ,
ν − ρ + 1 ≤ j ≤ ν,
which completes the proof.
7.8 Characterizing the existence of the Smith form This section uses the algebraic multiplicities χ and κ to characterize the existence of the local Smith form of L at λ0 through their associated algebraic invariants. The main finite-dimensional result reads as follows. Theorem 7.8.1 (of characterization of the existence of the Smith form). Let m ≥ 1 be an integer, r ∈ N∗ ∪ {∞, ω} and λ0 ∈ Ω. Consider a matrix operator family L ∈ C r (Ω, L(Km )) with n := dim N [L0 ] ≥ 1. Then, the following conditions are equivalent:
182
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
i) L possesses a local Smith form at λ0 , in the sense of Definition 7.4.3, and the number k1 appearing in that definition satisfies k1 ≤ r. ii) λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r. If, in addition, r ∈ {∞, ω}, then these conditions are also equivalent to each of the next two: iii) ordλ0 det L < ∞. iv) No Jordan chain of L at λ0 can be continued indefinitely. Moreover, in such a case, there exist an open neighborhood Ω ⊂ Ω of λ0 , families E ∈ C r−ν (Ω , Mm (K)), and F ∈ C ω (Ω , Mm (K)), with E0 , F0 ∈ Iso(Km ), such that (7.35) L(λ) = E(λ) D(λ) F(λ), λ ∈ Ω , where D(λ) = diag (λ − λ0 )κ1 [L;λ0 ] , . . . , (λ − λ0 )κn [L;λ0 ] , 1, . . . , 1 ,
λ ∈ K.
m−n
Furthermore, the previous characterizations can be formulated through conditions (i) and (iii) of Theorem 7.6.3, as well as through (iii) and (iv) of Theorem 5.3.1. Proof. Suppose (i). Then, there exist an open neighborhood Ω ⊂ Ω of λ0 , families E ∈ C(Ω , L(Km )) and F ∈ C(Ω , L(Km )), with E0 ∈ Iso(Km ) and F0 ∈ Iso(Km ), such that L(λ) = E(λ) D(λ) F(λ), λ ∈ Ω , where
D(λ) = diag (λ − λ0 )k1 , . . . , (λ − λ0 )kn , 1, . . . , 1 ,
λ ∈ K,
m−n
for some integers k1 ≥ · · · ≥ kn such that k1 ≤ r and kn ≥ 1. In particular, ordλ0 det L =
n
kj
j=1
and, hence, (i) implies (iii). Moreover, L(λ)−1 = F(λ)−1 D(λ)−1 E(λ)−1 ,
λ ∼ λ0 , λ = λ0 ,
and, since D(λ)−1 = diag (λ − λ0 )−k1 , . . . , (λ − λ0 )−kn , 1, . . . , 1 , there exists a constant C > 0 such that L(λ)−1 ≤
C , |λ − λ0 |k1
λ ∼ λ0 , λ = λ0 .
λ ∈ K \ {λ0 },
7.8. Characterizing the existence of the Smith form
183
Therefore, λ0 ∈ Algν (L) for some integer ν with 1 ≤ ν ≤ k1 ≤ r, and, consequently, (i) implies (ii). ˜ and F defined through Suppose (ii) and consider the auxiliary families L ˜ L(λ) :=
ν
(λ − λ0 )i Li ,
−1 ˜ F(λ) := L(λ)L(λ) ,
λ ∼ λ0 .
i=0
˜ − L ∈ C r (Ω, L(Km )) and all its derivatives up to order ν vanish at λ0 . Clearly, L Thus, the auxiliary family R, defined by ˜ − L(λ) , λ ∼ λ0 , λ = λ0 , R(λ) := (λ − λ0 )−ν L(λ) and R(λ0 ) := 0, is of class C r−ν . Moreover, by construction, we have that F(λ) = [L(λ) + (λ − λ0 )ν R(λ)] L(λ)−1 = IV + (λ − λ0 )ν R(λ)L(λ)−1 ,
λ ∼ λ0 ,
λ = λ0 .
As λ0 ∈ Algν (L), due to Theorem 5.0.1, the map λ → (λ − λ0 )ν L(λ)−1 is of class C r−ν in a neighborhood of λ0 , as is R. Therefore, so is F. Moreover, F(λ0 ) = IV , because R(λ0 ) = 0. Thus, there exists a constant C > 0 such that, for each λ sufficiently close to λ0 with λ = λ0 , ˜ −1 = L(λ)−1 F(λ)−1 = L(λ)
(λ − λ0 )ν L(λ)−1 F(λ)−1 C ≤ . |λ − λ0 |ν |λ − λ0 |ν
˜ for some 1 ≤ k ≤ ν. As L ˜ ∈ P[L(Km )], we have Consequently, λ0 ∈ Algk (L) ˜ that L is analytic. Moreover, according to Theorem 7.6.3, the length of all Jordan ˜ is bounded above by k ≤ ν < r + 1. In particular, no Jordan chain chains of L ˜ of L can be continued indefinitely. Thus, thanks to Theorem 7.4.1, there exist an ˜ F ˜ ∈ C ω (Ω, ˜ and two families E, ˜ Mm (K)) such that ˜ ⊂ Ω with λ0 ∈ Ω, open subset Ω m ˜ ˜ E0 , F0 ∈ Iso(K ) and ˜ ˜ ˜ L(λ) = E(λ) D(λ) F(λ),
˜ λ ∈ Ω,
where, for every λ ∈ K, ˜ ˜ D(λ) := diag (λ − λ0 )κ1 [L;λ0 ] , . . . , (λ − λ0 )κn [L;λ0 ] , 1, . . . , 1 . m−n
˜ 0 . Moreover, ˜ 0 ], since L0 = L Note that n = dim N [L ˜ ˜ ˜ = F(λ)−1 E(λ) D(λ) F(λ), L(λ) = F(λ)−1 L(λ)
λ ∼ λ0 .
184
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
˜ and F := F, ˜ we have that E is of class C r−ν and F is of Thus, setting E := F−1 E ω class C in a neighborhood of λ0 and that (7.35) holds with the required regularity properties. In particular, it becomes apparent that (ii) implies (i). Note that, due to Theorem 7.3.1 and the Example of Section 7.2, ˜ λ0 ], κj [L; λ0 ] = κj [D; λ0 ] = κj [L;
1 ≤ j ≤ n.
Moreover, ˜ λ0 ] = ν. κ1 [L; λ0 ] = κ1 [D; λ0 ] = κ1 [L; The previous analysis concludes the proof when r ∈ N. From now on, we restrict ourselves to considering the case r ∈ {∞, ω}. We have already seen that (i) implies (iii). Suppose (iii). Then, thanks to Proposition 3.1.1, it is easy to see that λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ ordλ0 det L and, consequently, (iii) implies (ii). Therefore, (i), (ii) and (iii) are indeed equivalent. Suppose (ii). Then, according to Theorem 7.6.3, the length of any Jordan chain of L is bounded above by ν and, hence, (iv) holds. Suppose (iv). Then, (i) follows straight away from Theorem 7.4.1. This concludes the proof. The following result provides us with the basic technical tool for obtaining the infinite-dimensional counterpart of Theorem 7.8.1. Lemma 7.8.2. Let r ∈ N∪{∞, ω} and L ∈ C r (Ω, L(U, V )) be with L0 ∈ Fred0 (U, V ). ˜ ⊂ Ω of λ0 , a topological direct sum deThen, there exist an open neighborhood Ω composition U = U0 ⊕ U1 , with dim U0 = dim N [L0 ] < ∞, and three operator families ˜ L(U, V )), E ∈ C r (Ω,
˜ L(U )), F ∈ C r (Ω,
˜ L(U0 )) M ∈ C r (Ω,
(7.36)
˜ λ ∈ Ω.
(7.37)
with E0 ∈ Iso(U, V ) and F0 ∈ Iso(U ), such that L(λ) = E(λ) [M(λ) ⊕ IU1 ] F(λ),
Proof. Suppose L0 ∈ Iso(U, V ). Then, the result holds by making the choices U0 = {0},
U1 = U,
F = IU ,
M = IU0 ,
E = L.
So, from now on, we will assume that dim N [L0 ] ≥ 1. Since L0 ∈ Fred0 (U, V ), there is an operator F ∈ L(U, V ) with dim R[F ] = codim R[L0 ] = dim N [L0 ],
(7.38)
7.8. Characterizing the existence of the Smith form
185
such that L0 + F ∈ Iso(U, V ). ˜ ⊂ Ω of λ0 for which the family E Thus, there exists an open neighborhood Ω defined by ˜ E(λ) := L(λ) + F, λ ∈ Ω, ˜ L(V, U )). ˜ and E−1 ∈ C r (Ω, satisfies E(λ) ∈ Iso(U, V ) for each λ ∈ Ω, As dim R[F ] < ∞, the space N [F ] admits a topological finite-dimensional complement U0 in U , so U = U0 ⊕ U1 ,
U1 := N [F ].
Let P ∈ L(U ) be the projection satisfying R[P ] = U0
and N [P ] = N [F ].
Then, P has finite rank and F P = F, ˜ because IU − P is a projection onto N [F ]. Thus, for each λ ∈ Ω, IU − E(λ)−1 F = IU − P E(λ)−1 F P IU − (IU − P ) E(λ)−1 F P , and, setting
F(λ) := IU − (IU − P )E(λ)−1 F P,
˜ λ ∈ Ω,
we have that IU − E(λ)−1 F = IU − P E(λ)−1 F P F(λ),
˜ λ ∈ Ω.
˜ Consequently, for each λ ∈ Ω, L(λ) = E(λ) − F = E(λ)[IU − E(λ)−1 F ] = E(λ) IU − P E(λ)−1 F P F(λ). ˜ L(U )) and, for each λ ∈ Ω, ˜ the operator F(λ) is invertible. In Clearly, F ∈ C r (Ω, fact, ˜ λ ∈ Ω. F(λ)−1 = IU + (IU − P )E(λ)−1 F P, Moreover, setting D(λ) := IU − P E(λ)−1 F P,
˜ λ ∈ Ω,
we have that L(λ) = E(λ) D(λ) F(λ),
˜ λ ∈ Ω.
Furthermore, with respect to the decomposition U = U0 ⊕ U1 = R[P ] ⊕ N [P ],
186
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
the family D can be expressed as ˜ λ ∈ Ω,
D(λ) = diag{D(λ)|U0 , IN [F ] } = D(λ)|U0 ⊕ IN [F ] , since D(λ)(U0 ) ⊂ U0
˜ λ ∈ Ω.
and D(λ)|N [F ] = I|N [F ] ,
Therefore, (7.37) holds true with ˜ λ ∈ Ω.
M(λ) := D(λ)|U0 , Note that, (7.38) indeed implies
dim U0 = dim N [L0 ],
which concludes the proof. The following substantial extension of Theorem 7.8.1 is satisfied.
Theorem 7.8.3 (of characterization of the existence of the Smith form). Let r ∈ N∗ ∪ {∞, ω}. Consider λ0 ∈ Ω and an operator family L ∈ C r (Ω, L(U, V )) with L0 ∈ Fred0 (U, V ) and n := dim N [L0 ] ≥ 1. Then, the following conditions are equivalent: i) L possesses a local Smith form at λ0 , in the sense of Definition 7.4.3, and the number k1 appearing in that definition satisfies k1 ≤ r. ii) λ0 ∈ Algν (L) for some integer 1 ≤ ν ≤ r. If, in addition, r ∈ {∞, ω}, then these conditions are also equivalent to each of the next two: ˆ and Vˆ , any topoiii) For every neighborhood Ω of λ0 , any K-Banach spaces U logical decompositions ˆ =U ˆ1 ⊕ U ˆ2 U
and
Vˆ = Vˆ1 ⊕ Vˆ2
with ˆ1 = dim Vˆ1 < ∞, dim U and any families E ∈ C ∞ (Ω , L(Vˆ , V )), ˆ2 , Vˆ2 )), N ∈ C ∞ (Ω , L(U
ˆ )), F ∈ C ∞ (Ω , L(U, U ∞ ˆ1 , Vˆ1 )), M ∈ C (Ω , L(U
such that E0 , F0 , N0 are isomorphisms, and L(λ) = E(λ) [M(λ) ⊕ N(λ)] F(λ),
λ ∈ Ω ,
one has ordλ0 det M < ∞. iv) No Jordan chain of L at λ0 can be continued indefinitely.
7.8. Characterizing the existence of the Smith form
187
Moreover, in such a case, there actually exist an open neighborhood Ω ⊂ Ω of λ0 , two operator families E ∈ C r−ν (Ω , L(U, V )) and F ∈ C r (Ω , L(U )) with E0 ∈ Iso(U, V ) and F0 ∈ Iso(U ), and a topological direct sum decomposition U = U0 ⊕ U1 , with n = dim U0 , such that L(λ) = E(λ) [S(λ) ⊕ IU1 ] F(λ),
λ ∈ Ω ,
(7.39)
where, for each λ ∈ K,
S(λ) = diag (λ − λ0 )κ1 [L;λ0 ] , . . . , (λ − λ0 )κn [L;λ0 ] .
Furthermore, the previous characterizations can be formulated through conditions (i) and (iii) of Theorem 7.6.3, as well as through (iii) and (iv) of Theorem 5.3.1. Proof. Suppose (i). Then, there exists an open neighborhood Ω ⊂ Ω of λ0 , families E ∈ C(Ω , L(U, V )) and F ∈ C(Ω , L(U )), with E0 ∈ Iso(U, V ) and F0 ∈ Iso(U ), a direct sum topological decomposition U = U0 ⊕ U1 , with n = dim U0 , and n integer numbers k1 ≥ · · · ≥ kn with k1 ≤ r and kn ≥ 1 such that L(λ) = E(λ) [S(λ) ⊕ IU1 ] F(λ), where
λ ∈ Ω ,
S(λ) = diag (λ − λ0 )k1 , . . . , (λ − λ0 )kn ,
λ ∈ K.
˜ ⊂ Ω of λ0 such that Thus, there exists a neighborhood Ω L(λ)−1 = F(λ)−1 [S(λ)−1 ⊕ IU1 ]E(λ)−1 ,
˜ \ {λ0 }, λ∈Ω
and, hence, there exists a constant C > 0 such that L(λ)−1 ≤
C , |λ − λ0 |k1
˜ \ {λ0 }. λ∈Ω
Therefore, λ0 ∈ Algν (L) for some integer ν with 1 ≤ ν ≤ k1 ≤ r, and, consequently, (i) implies (ii). Suppose (ii). Then, by Theorem 7.6.3, the length of all Jordan chains of L is bounded above by ν < r + 1. On the other hand, owing to Lemma 7.8.2, there ˜ ⊂ Ω of λ0 , a topological direct sum decomposition exist an open neighborhood Ω U = U0 ⊕ U1 , with n = dim U0 , and three operator families ˜ ∈ C r (Ω, ˜ L(U, V )), E
˜ ∈ C r (Ω, ˜ L(U )), F
˜ L(U0 )), M ∈ C r (Ω,
˜ 0 ∈ Iso(U ), such that ˜ 0 ∈ Iso(U, V ) and F with E ˜ ˜ L(λ) = E(λ) [M(λ) ⊕ IU1 ] F(λ),
˜ λ ∈ Ω.
(7.40)
Thanks to Theorem 7.3.1, the length of any Jordan chain of M ⊕ IU1 is bounded above by ν < r + 1. Moreover, any Jordan chain of M ⊕ IU1 at λ0 must be of the form u0 us ,..., 0 0
188
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
for some Jordan chain (u0 , . . . , us ) of M at λ0 . Consequently, the lengths of the Jordan chains of M at λ0 are bounded above by ν < r + 1. Actually, there is a linear bijection between them and those of L and, therefore, κj [L; λ0 ] = κj [M; λ0 ],
1 ≤ j ≤ n.
By Theorem 7.6.3, λ0 ∈ Algk (M) for some integer 1 ≤ k ≤ ν and, therefore, ˜ of λ0 , according to Theorem 7.8.1, there exist an open neighborhood Ω ⊂ Ω r−k ω families E ∈ C (Ω , L(U0 )) and F ∈ C (Ω , L(U0 )) with E0 , F0 ∈ Iso(U0 ), such that λ ∈ Ω , (7.41) M(λ) = E (λ) S(λ) F (λ), where S(λ) = diag (λ − λ0 )κ1 [L;λ0 ] , . . . , (λ − λ0 )κn [L;λ0 ] ,
λ ∈ K.
According to (7.40) and (7.41), we find that, in Ω , ˜ (E ⊕ IU1 ) (S ⊕ IU1 ) (F ⊕ IU1 ) F. ˜ L=E Therefore, making the choices ˜ (E ⊕ IU ) , E := E 1
˜ F := (F ⊕ IU1 ) F,
concludes the proof of (i), and shows the equivalence of (i) and (ii). Note that, since k ≤ ν, E ∈ C r−ν (Ω , L(U, V )) and F ∈ C r (Ω , L(U )). Consequently, we have also proved (7.39) subject to the regularity requirements of the theorem. This concludes the proof when r ∈ N. From now on, we assume that r ∈ {∞, ω}. Suppose (ii). Then, according to Theorem 7.6.3, the lengths of the Jordan chains of L are bounded above by ν < r + 1 and, therefore, (ii) implies (iv). Moreover, under the circumstances detailed in the statement of (iii), for each λ = λ0 sufficiently close to λ0 we have that M(λ)−1 ⊕ N(λ)−1 = F(λ)L(λ)−1 E(λ). Thus,
M(λ)−1 ≤ C L(λ)−1
for some constant C > 0, and, hence, λ0 ∈ Algk (M) for some integer 1 ≤ k ≤ ν. Therefore, by Theorem 7.8.1, det M has a zero of finite order at λ0 . Consequently, (ii) also implies (iii). Suppose (iii). Thanks to Lemma 7.8.2, there exist an open neighborhood ˜ ⊂ Ω of λ0 , a topological direct sum decomposition U = U0 ⊕U1 with n = dim U0 , Ω and three operator families (7.36) such that L(λ) = E(λ) [M(λ) ⊕ IU1 ] F(λ),
˜ λ ∈ Ω.
(7.42)
7.9. Two illustrative examples
189
As we are assuming that (iii) occurs, det M has a zero of finite order at λ0 . By Theorem 7.8.1, λ0 is an algebraic eigenvalue of M and, therefore, we find from (7.42) that λ0 ∈ Algν (L) for some integer ν ≥ 1. Consequently, (iii) implies (ii). It remains to prove that (iv) implies (i). Suppose (iv) and consider the decomposition (7.42). By Theorem 7.4.1, M admits a local Smith form at λ0 . Arguing as in the final part of the proof of (i) from (ii), the Smith form of M provides us with a Smith form for L. Therefore, (i) is satisfied, which ends the proof.
7.9 Two illustrative examples This section works out two simple examples to illustrate the practical use of Theorem 7.8.1.
7.9.1 Example 1 Consider K = R, the spaces U = V = R2 and the map L : R → L(R2 ) defined by ⎞ 1 1 −2λ2 1 + λ sin λ 1 + λ3 cos λ λ ⎠, L(λ) := ⎝ 2 −3λ2 ⎛
and
L(0) := lim L(λ) = λ→0
0 0
λ ∈ R \ {0},
0 . 2
It is easy to see that L is of class C 2 and that λ0 := 0 is an algebraic eigenvalue of L of order ν = 2 = r. Consequently, according to Theorem 7.8.1, L possesses a local Smith form at 0, in the sense of Definition 7.4.3, and k1 ≤ 2. Note that L has a non-vanishing constant entry, and its determinant satisfies det L(λ) = −4λ2 (1 + o(1))
as λ → 0.
To calculate the local Smith form of L at 0, we use the algorithm described in the proof of Theorem 7.4.1. For each i, j ∈ {1, 2}, let ij denote the (i, j)th entry of L. Since ord0 22 = 0, we can use 22 as a pivot element. To make a zero in the entry (2, 1)th, we replace, for each λ ∈ R, the column
11 (λ) 21 (λ)
with
11 (λ) 21 (λ)
3 + λ2 2
12 (λ) . 22 (λ)
190
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
This amounts to performing the matrix multiplication ⎛ ⎞ 3 6 3 3 1 1 1 2 3 4 λ + λ cos ⎠ ⎝−2λ + 2 λ − 2λ sin λ + 2 λ cos λ λ 0 2 ⎛ ⎞ 1 0 ⎠ =: L1 (λ), = L(λ) ⎝ 3 2 λ 1 2 for every λ ∈ R \ {0}, and 0 L (0) := lim L (λ) = λ→0 0 1
1
0 . 2
For each i, j ∈ {1, 2}, let 1ij denote the (i, j)th entry of L1 . To make a zero in the (1, 2)th entry of L1 , we replace, for each λ ∈ R \ {0}, the row 111 (λ) 112 (λ) with
111 (λ)
112 (λ)
1 − 2
1 1 4 λ + λ cos 21 (λ) λ
122 (λ) .
This amounts to performing the matrix multiplication ⎛ ⎞ 3 3 1 3 6 1 2 3 0⎠ ⎝−2λ + 2 λ − 2λ sin λ + 2 λ cos λ 0 2 ⎞ ⎛ 1 1 1 − λ + λ4 cos 2 λ ⎠ L1 (λ) =: L2 (λ), =⎝ 0 1 for every λ ∈ R \ {0}, and 0 L (0) := lim L (λ) = λ→0 0 2
2
0 . 2
Finally, factorizing 1 0 3 1 3 1 diag −2λ2 + λ3 − 2λ3 sin + λ6 cos , 2 2 λ 2 λ 1 0 2 1 3 3 1 = diag λ , 1 · diag −2 + λ − 2λ sin + λ4 cos , 2 2 λ 2 λ
7.9. Two illustrative examples
191
and tracing all the previous identities, we conclude that diag{λ2 , 1} is the local Smith form of L at 0, and that ⎛ 1 L(λ) = ⎝ 0
1 2
⎞ 1 4 λ + λ cos λ2 λ ⎠ 0 1 ⎛ −2 + ×⎝
0 1 1 3 1 3 λ − 2λ sin + λ4 cos 2 λ 2 λ −3λ2
⎞ 0⎠ 2
is the factorization of Definition 7.4.3. The value of the matrices above at λ = 0 is defined by continuity. Finally, note that, according to Theorem 7.6.3, χ[L; 0] = κ[L; 0] = 2 = ord0 det L.
7.9.2 Example 2 Consider the map L : R → L(R2 ) defined by L(λ) :=
4λ(1 + λ) − λ3 sign λ , λ[−4 + (−1 + λ)λ2 sign λ]
−λ2 sign λ (−1 + λ)λ2 sign λ
λ ∈ R,
where sign : R → {−1, 0, 1} is the map defined through sign x =
x/|x|, 0,
if if
x = 0, x = 0.
As the map : R → R defined by (λ) := λ2 sign λ,
λ ∈ R,
satisfies d (λ) = 2|λ|, dλ
λ ∈ R,
it is apparent that ∈ C 1 (R) \ C 2 (R). Thus, L ∈ C 1 (R, L(R2 )), but L is not of class C 2 , and, hence, the regularity r = 1 is optimal. Moreover, for λ ∼ 0 with λ = 0 we have that ⎛ ⎞ 4 + λ2 sign λ − λ3 sign λ 4 + 4λ − λ2 sign λ ⎜ 4λ2 (λ2 − 2) sign λ 4λ2 (λ2 − 2) sign λ ⎟ ⎜ ⎟ L(λ)−1 = ⎜ ⎟ ⎝ ⎠ 1 −1 + λ 2 2 4λ(λ − 2) 4λ(λ − 2)
192
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
and, therefore, the family ⎛ λ → λ2 L(λ)−1
4 + λ2 sign λ − λ3 sign λ ⎜ 4(λ2 − 2) sign λ ⎜ =⎜ ⎝ λ(−1 + λ) 4(λ2 − 2)
⎞ 4 + 4λ − λ2 sign λ 4(λ2 − 2) sign λ ⎟ ⎟ ⎟ ⎠ λ 2 4(λ − 2)
is bounded in a neighborhood of 0, but it is not continuous at 0. Consequently, λ0 := 0 is an algebraic eigenvalue of order ν = 2 of L, but, since L is not of class C 2 , the multiplicity of L at λ0 is not defined. Incidentally, note that Theorem 5.0.1 cannot be applied because r = 1 < ν = 2, and, actually, its thesis fails to be true in the present example. Therefore, its regularity requirements seem to be optimal. By Theorem 7.8.1, L cannot admit a local Smith form at 0. If we try to calculate the local Smith form, then we will arrive at a step of the construction from which one cannot go on. To illustrate this, we are going to try to perform the construction described in Section 7.4. For each i, j ∈ {1, 2}, let ij denote the (i, j)th entry of L. Note that ord0 12 = ord0 22 = 1, whereas the order of 11 and of 21 at zero is not defined. Naturally, the fact that the latter orders are not defined gives us a hint of the fact that L does not admit a local Smith form at 0. In the first step, we use 22 as a pivot element. To make a zero in the (2, 1)th entry, we replace, for each λ ∼ 0, the column 12 (λ) 11 (λ) 11 (λ) (−1 + λ)λ sign λ with − , −4 + (−1 + λ)λ2 sign λ 22 (λ) 21 (λ) 21 (λ) which amounts to performing the matrix multiplication ⎞ ⎛ 4λ2 (2 − λ2 ) 3 4λ(1 + λ) − λ sign λ ⎠ ⎝ λ2 (λ − 1) − 4 sign λ 2 0 λ[−4 + (−1 + λ)λ sign λ] ⎞ ⎛ 1 0 ⎠ =: L1 (λ). = L(λ) ⎝ (−1 + λ)λ sign λ − 1 −4 + (−1 + λ)λ2 sign λ Now, for each i, j ∈ {1, 2}, let 1ij denote the (i, j)th entry of L1 . To make a zero in the (1, 2)th entry of L1 , we replace, for each λ ∈ R \ {0}, the row 111 (λ) 112 (λ) with
111 (λ)
112 (λ) −
4(1 + λ) − λ2 sign λ 1 (λ) −4 + (−1 + λ)λ2 sign λ 21
122 (λ) .
7.10. Local equivalence of operator families
193
This amounts to performing the matrix multiplication ⎛ ⎞ 4λ2 (2 − λ2 ) 0 ⎝ λ2 (λ − 1) − 4 sign λ ⎠ 2 0 λ[−4 + (−1 + λ)λ sign λ] ⎛ ⎞ 4(1 + λ) − λ2 sign λ 1 − −4 + (−1 + λ)λ2 sign λ ⎠ L1 (λ). =⎝ 0 1 Finally, we notice that if we factorize 0 1 4λ2 (2 − λ2 ) 2 diag , λ[−4 + (−1 + λ)λ sign λ] λ2 (λ − 1) − 4 sign λ 0 1 4(2 − λ2 ) 2 , −4 + (−1 + λ)λ = diag λ2 , λ × diag sign λ , λ2 (λ − 1) − 4 sign λ then the function λ →
4(2 − λ2 ) λ2 (λ − 1) − 4 sign λ
is not continuous at 0. If, instead of that, we make the factorization 0 1 4λ2 (2 − λ2 ) 2 diag , λ[−4 + (−1 + λ)λ sign λ] λ2 (λ − 1) − 4 sign λ 0 1 4λ(2 − λ2 ) 2 , −4 + (−1 + λ)λ sign λ , = diag {λ, λ} × diag λ2 (λ − 1) − 4 sign λ then, the matrix 0 diag
1 4λ(2 − λ2 ) 2 , −4 + (−1 + λ)λ sign λ λ2 (λ − 1) − 4 sign λ
is not invertible at λ = 0. Consequently, no Smith form is available for L.
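The obstruction in this example is purely one of regularity: ℓ(λ) = λ² sign λ has derivative 2|λ|, which is continuous but not differentiable at 0. The short numerical check below (plain Python, given only as an illustration) exhibits this: the difference quotients of ℓ′ at 0 tend to +2 from the right and to −2 from the left, so ℓ ∈ C¹(R) \ C²(R), and the regularity r = 1 cannot be improved.

    def ell(x):
        return x * abs(x)        # equals x**2 * sign(x)

    def dell(x):
        return 2 * abs(x)        # derivative of ell

    for h in [1e-2, 1e-4, 1e-6]:
        right = (dell(h) - dell(0.0)) / h        # tends to +2
        left = (dell(-h) - dell(0.0)) / (-h)     # tends to -2
        print(h, right, left)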
7.10 Local equivalence of operator families This section uses Theorem 7.8.3 to classify the set of C r families, r ∈ N ∪ {∞, ω}, through the concept of equivalence of families. Throughout the rest of this book, given K ∈ {R, C}, a point λ0 ∈ K, two non-zero K-Banach spaces U and V , and r ∈ N ∪ {∞, ω}, we denote by Sλr0 (U, V ) the set of all families L of class C r on a neighborhood of λ0 with values in L(U, V ) such that L(λ0 ) ∈ Fred0 (U, V ); the neighborhood may depend on L. We also set Sλr0 (U ) := Sλr0 (U, U ),
194
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
and denote by Sλr0 the class of families L for which there exist two Banach spaces U, V such that L ∈ Sλr0 (U, V ). Clearly, LM ∈ Sλr0 (U, V ) if W is a Banach space, L ∈ Sλr0 (W, V ) and M ∈ r Sλ0 (U, W ). Moreover, L−1 ∈ Sλr0 (V, U ) if L ∈ Sλr0 (U, V ) and L(λ0 ) ∈ Iso(U, V ). The following definition introduces the concept of equivalence. Definition 7.10.1 (Equivalence). Let U1 , U2 , V1 , V2 be four Banach spaces, λ0 ∈ K and r, s ∈ N ∪ {∞, ω}. The families L ∈ Sλr0 (U1 , V1 ) and M ∈ Sλr0 (U2 , V2 ) are said to be C s -equivalent at λ0 when there exist an open neighborhood Ω ⊂ K of λ0 and two families E ∈ Sλs0 (V2 , V1 ) and F ∈ Sλs0 (U1 , U2 ) such that L(λ) = E(λ) M(λ) F(λ),
λ ∈ Ω.
(7.43)
s
In such a case, we will write that L ≡ M. Let J be a set of isomorphic Banach spaces. It is easy to verify that the C s -equivalence indeed establishes a relation of equivalence in # Sλr0 (U, V ). U,V ∈J
The following result shows the uniqueness of the Smith form at λ0 when it exists, that is to say, when L is in Sλr0 and λ0 ∈ Algν (L) for some 0 ≤ ν < r + 1. Proposition 7.10.2. Let U, V be two Banach spaces such that U = X ⊕ Y,
V = W ⊕ Z,
(7.44)
for some closed subspaces X, Y ⊂ U , and W, Z ⊂ V , with m = dim X = dim W < ∞. For a given r ∈ N ∪ {∞} and k1 , . . . , km ∈ N ∩ [0, r],
J ∈ Sλr0 (Y, Z)
such that J0 ∈ Iso(Y, Z), let M ∈ Sλr0 (U, V ) denote the operator family defined, for each λ ∼ λ0 , by M(λ) := diag (λ − λ0 )k1 , . . . , (λ − λ0 )km ⊕ J(λ) with respect to certain bases of X and W , and the decompositions (7.44). 0
If a family L in Sλr0 satisfies L ≡ M, then λ0 ∈ Algν (L) for some 0 ≤ ν < r + 1, the number n := dim N [L0 ] satisfies n ≤ m, and there exists a permutation τ of {1, . . . , m} such that kτ (n+1) = · · · = kτ (m) = 0,
kτ (j) = κj [L; λ0 ],
1 ≤ j ≤ n.
(7.45)
7.10. Local equivalence of operator families
195
Proof. Obviously, there exist an open neighborhood Ω of λ0 and a constant C > 0 such that C , λ ∈ Ω \ {λ0 }. M(λ)−1 ≤ |λ − λ0 |max{k1 ,...,kn } ˜ ⊂ Ω of λ0 and Thus, we find from (7.43) that there exist an open neighborhood Ω ˜ a constant C such that C˜ ˜ \ {λ0 }. L(λ)−1 ≤ , λ∈Ω |λ − λ0 |max{k1 ,...,kn } Moreover, max{k1 , . . . , kn } < r + 1. Thus, λ0 ∈ Algν (L) for some integer 0 ≤ ν ≤ r. According to Theorem 7.6.3, κ[L; λ0 ] is well defined and finite. Moreover, thanks to Theorem 7.3.1, we find from (7.43) that κ[L; λ0 ] = κ[M; λ0 ] and, actually, when L0 is not an isomorphism, κj [L; λ0 ] = κj [M; λ0 ], Now we set
1 ≤ j ≤ n.
D(λ) := diag (λ − λ0 )k1 , . . . , (λ − λ0 )km ,
λ ∈ K,
regarded as the restriction of M(λ) from X to W , for λ ∼ λ0 . As J0 is an isomorphism, for any s ≥ 0, every Jordan chain of M at λ0 of length s + 1 must be of the form u0 us ,..., , 0 0 for some Jordan chain (u0 , . . . , us ) of D at λ0 . Consequently, we actually have κ[L; λ0 ] = κ[D; λ0 ] and, in addition, κj [L; λ0 ] = κj [D; λ0 ],
1 ≤ j ≤ n,
when n ≥ 1. Suppose n = 0. Then, κ[D; λ0 ] = κ[L; λ0 ] = 0 and, necessarily kj = 0 for all 1 ≤ j ≤ m, which concludes the proof. Suppose n ≥ 1, and let τ be a permutation of {1, . . . , m} such that kτ (1) ≥ · · · ≥ kτ (m) . 0
As L ≡ M at λ0 , and J0 is an isomorphism, then n = dim N [D0 ]. Therefore, owing to the result already established in the Example of Section 7.2, we conclude n ≤ m and (7.45), which ends the proof.
196
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
The following proposition will be used in the proof of the main result of this section. It is a generalization of a result already established in the proof of Theorem 7.8.1. Proposition 7.10.3. Let r ∈ N ∪ {∞, ω} and L, M ∈ Sλr0 (U, V ) be such that λ0 ∈ Algν (L), for some integer 0 ≤ ν ≤ r, and Mn = Ln for all 0 ≤ n ≤ ν. Then there (V ) such that E0 = IV and L = EM locally at λ0 . In particular, exists E ∈ Sλr−ν 0 r−ν
L ≡ M at λ0 , and λ0 ∈ Algν (M). Proof. Let Ω be an open neighborhood of λ0 contained in the domains of definition of L and M, and such that L(λ) ∈ Iso(U, V ) for all λ ∈ Ω\{λ0 }. Let G : Ω → L(V ) be the family defined by G(λ) := M(λ)L(λ)−1 , and
G(λ0 ) := lim G(λ) = lim [L(λ) + o((λ − λ0 )ν )] L(λ)−1 = IV . λ→λ0
In fact,
λ ∈ Ω \ {λ0 },
λ→λ0
G(λ) = IV + [M(λ) − L(λ)] L(λ)−1 ,
λ ∈ Ω \ {λ0 }.
As Mn = Ln for every 0 ≤ n ≤ ν, the function λ → (λ − λ0 )−ν [M(λ) − L(λ)] is of class C r−ν in an open neighborhood of λ0 . As λ0 ∈ Algν (L), by Theorem 5.0.1, the map λ → (λ − λ0 )ν L(λ)−1 is also of class C r−ν . Consequently, G is of class C r−ν as well. Finally, set E(λ) := G(λ)−1 ,
λ ∼ λ0 .
Then, E ∈ Sλr−ν (V ) and L = EM, which concludes the proof. 0
The main result of this section provides a list of necessary and sufficient conditions for the local equivalence of two given families. It reads as follows. Theorem 7.10.4. Let r ∈ N∗ ∪ {∞, ω}, an integer 1 ≤ ν ≤ r, four K-Banach spaces U1 , U2 , V1 , V2 , families L ∈ Sλr0 (U1 , V1 ) and M ∈ Sλr0 (U2 , V2 ) be such that λ0 ∈ Algν1 (L)∩Alg ν2 (M) for some ν1 , ν2 ∈ N∩(0, ν]. Then the following conditions are equivalent: i) L is C r−ν -equivalent to M at λ0 . ii) L is C 0 -equivalent to M at λ0 . iii) κj [L; λ0 ] = κj [M; λ0 ] for all 1 ≤ j ≤ n := dim N [L0 ] = dim N [M0 ]. iv) dim N [L[s] ] = dim N [M[s] ] for all s ∈ N ∩ [0, r].
7.10. Local equivalence of operator families
197
v) dim N [L[s] ] = dim N [M[s] ] for all 0 ≤ s ≤ ν − 1. vi) For each 0 ≤ j ≤ ν − 1 there exist Ej ∈ L(U1 , U2 ) and Fj ∈ L(V2 , V1 ) such that E0 and F0 are isomorphisms, and L[ν−1] = trng{E0 , . . . , Eν−1 } · M[ν−1] · trng{F0 , . . . , Fν−1 }. vii) There exist G0 , . . . , Gν−1 ∈ L(V1 , V2 ) such that G0 is an isomorphism and N [L[ν−1] ] = trng{G0 , . . . , Gν−1 }N [M[ν−1] ]. viii) χj [L; λ0 ] = χj [M; λ0 ] for all integers 0 ≤ j ≤ ν − 1, where the χj ’s stand for the partial χ-multiplicities introduced in (5.32). ix) χj [L; λ0 ] = χj [M; λ0 ] for all integers 0 ≤ j ≤ r. x) There exist polynomials Φ ∈ P[L(U1 )] and Ψ ∈ P[L(U2 )] such that Φ0 = IU1 and Ψ0 = IU2 , for which λ0 is a transversal eigenvalue of the families LΦ := LΦ and MΨ := MΨ, and, for each 0 ≤ j ≤ ν − 1, dim LΦ j
6 j−1
6 j−1 Ψ N [LΦ N [MΨ i ] = dim Mj i ] .
i=0
(7.46)
i=0
Therefore, ν1 = ν2 . xi) For all Φ, Ψ in Sλν0 such that Φ0 and Ψ0 are isomorphisms and λ0 is a transversal eigenvalue of the families LΦ := LΦ and MΨ := MΨ, equality (7.46) occurs for every j ∈ N ∩ [0, ν − 1]. Therefore, ν1 = ν2 . Proof. Obviously, (i) implies (ii), (iv) implies (v), (vi) implies (vii), (viii) implies (ix), and (xi) implies (x). By Theorem 7.3.1, (ii) implies (iii). By Theorem 7.6.2, (v) implies (iv). The equivalence between (v) and (x) follows from Theorem 4.3.2 and Propositions 7.6.1 and 4.2.2. The equivalence between (viii) and (x) follows from Theorems 4.3.2 and 5.4.1. By Theorem 7.5.3, (x) implies (iii). Due to (5.32), by Theorem 5.2.2, (ix) implies (viii). The equivalence between (x) and (xi) follows from Proposition 4.2.2 and Theorem 4.3.2. It remains to prove, for example, that (iii) implies (i), that (vii) implies (v), and that (ii) implies (vi). Suppose (iii). By Theorem 7.8.3, there is a topological direct sum decomposition U1 = X0 ⊕ X1 , with n = dim X0 , such that r−ν
L ≡ SL ⊕ IX1 , where SL (λ) = diag (λ − λ0 )κ1 [L;λ0 ] , . . . , (λ − λ0 )κn [L;λ0 ] ,
λ ∈ K.
198
Chapter 7. Algebraic Multiplicity Through Jordan Chains. Smith Form
Similarly, there is a topological direct sum decomposition U2 = Y0 ⊕ Y1 , with n = dim Y0 , such that r−ν M ≡ SM ⊕ IY1 , where SM (λ) = diag (λ − λ0 )κ1 [M;λ0 ] , . . . , (λ − λ0 )κn [M;λ0 ] ,
λ ∈ K.
By (iii), SL = SM . Moreover, X1 and Y1 are isomorphic, because they have the same finite codimension within two isomorphic spaces. Therefore, r−ν
L ≡ M, which ends the proof of (iii) implies (i). Suppose (vii). Let 0 ≤ s ≤ ν − 1 and x ∈ N [L[s] ]. Then, col[0, . . . , 0, x] ∈ N [L[ν−1] ] ν−s−1
and, according to (vii), there exists z ∈ N [M[ν−1] ] such that col[0, . . . , 0, x] = trng{G0 , . . . , Gν−1 }z. ν−s−1
Since G0 is an isomorphism, necessarily z = col[0, y] for some y ∈ N [M[s] ], and, therefore, x = trng{G0 , . . . , Gs }y. As G0 is an isomorphism, the previous reasoning can be reversed and, consequently, N [L[s] ] = trng{G0 , . . . , Gs }N [M[s] ], which implies (v), and shows that (vii) implies (v). Finally, we will show that (ii) implies (vi). Suppose (ii) and consider the ˆ ∈ P[L(U2 , V2 )] defined by ˆ ∈ P[L(U1 , V1 )] and M Taylor polynomials L ˆ L(λ) :=
ν j=0
Lj (λ − λ0 )j ,
ˆ M(λ) :=
ν
Mj (λ − λ0 )j ,
λ ∈ K.
j=0
ˆ are C 0 -equivalent at λ0 , as well as M and M. ˆ As By Proposition 7.10.3, L and L ˆ and we are assuming that L and M are C 0 -equivalent at λ0 , the analytic families L ˆ are C 0 -equivalent at λ0 . As we already know that (ii) and (i) are equivalent, L ˆ M ω ω ˆ and M are actually C -equivalent at λ0 . Therefore, there exist families E, F in Sλ0 ˆ = EMF. ˆ such that E(λ0 ) and F(λ0 ) are isomorphisms and L By the Leibniz rule, L[ν−1] = E[ν−1] · M[ν−1] · F[ν−1] , which implies (vi). This concludes the proof.
7.11 Exercises 1. Let s ∈ N, two K-Banach spaces U, V , and, for each j ∈ {0, . . . , s}, an operator Tj ∈ L(U, V ) be with T0 ∈ Fred0 (U, V ). It is requested to accomplish the following: • Prove that, for each ε ∈ K \ {0}, trng {T0 , . . . , Ts } = D(ε−1 , V ; s) trng {T0 , εT1 , . . . , εs Ts } D(ε, U ; s), where, given a Banach space W and ε ∈ K, the operator D(ε, W ; s) stands for D(ε, W ; s) := diag {IW , εIW , . . . , εs IW } . • Show that trng{T0 , εT1 , . . . , εs Ts } = diag{T0 , . . . , T0 } + trng{0, εT1 , . . . , εs Ts }. • Conclude that trng{T0 , . . . , Ts } ∈ Fred0 (U s+1 , V s+1 ). 2.
Let D ∈ P[L(Km )] be the family defined by D(λ) = diag (λ − λ0 )k1 , . . . , (λ − λ0 )kn , 1, . . . , 1 ,
λ ∈ K,
m−n
for some integers m ≥ n ≥ 1 and k1 ≥ · · · ≥ kn ≥ 1. Prove, using the definition, that λ0 is a k1 -transversal eigenvalue of D. 3.
This exercise asks the following: • Give an example of a family L ∈ C ω (K, L(K2 )) such that the matrix L(λ) is triangular for all λ ∈ K, and λ0 ∈ K is an algebraic eigenvalue of L, but not a transversal eigenvalue of L. • Give an example of a family L ∈ C ω (K, L(K2 )) such that λ0 ∈ K is a transversal eigenvalue of L, but the matrix L(λ0 ) is not triangular.
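In the finite-dimensional case, the block operators trng{T0, . . . , Ts} appearing in Exercises 1–3 are just lower block-triangular Toeplitz matrices built from the Taylor coefficients of the family, and dim N[trng{L0, . . . , Ls}] can be computed directly. The following sketch is added here only as an illustration; it is not taken from the text, and the helper names trng and dim_null are ad hoc.

```python
import numpy as np

def trng(blocks):
    """Lower block-triangular Toeplitz matrix trng{T0, ..., Ts}:
    block (j, i) equals T_{j-i} for i <= j."""
    s1 = len(blocks)
    m, n = blocks[0].shape
    out = np.zeros((s1 * m, s1 * n))
    for j in range(s1):
        for i in range(j + 1):
            out[j*m:(j+1)*m, i*n:(i+1)*n] = blocks[j - i]
    return out

def dim_null(mat, tol=1e-10):
    """dim N[mat], computed from the numerical rank."""
    return mat.shape[1] - np.linalg.matrix_rank(mat, tol)

# Example: L(lambda) = diag((lambda - 2)^2, 1) at lambda0 = 2,
# with Taylor coefficients L0 = L(2), L1 = L'(2), L2 = L''(2)/2.
L0 = np.diag([0.0, 1.0])
L1 = np.diag([0.0, 0.0])
L2 = np.diag([1.0, 0.0])

for s in range(3):
    T = trng([L0, L1, L2][: s + 1])
    print(f"s = {s}:  dim N[trng(L0..L{s})] = {dim_null(T)}")
# Printed values 1, 2, 2: the supremum 2 equals ord_{2} det L(lambda),
# i.e. the multiplicity of this particular family at lambda0 = 2.
```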
4. Given r ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )) with L0 ∈ Fred0 (U, V ), it is requested to accomplish the following: • Prove the equivalence of the following assertions without invoking the multiplicity χ constructed in Chapter 4: i) κ[L; λ0 ] < ∞. ii) λ0 ∈ Algν (L) for some integer 0 ≤ ν ≤ r. iii) The length of any Jordan chain of L is bounded above by ν < r + 1. • Give an alternative proof of Theorem 7.8.1 without invoking the multiplicity χ. 5. This exercise proposes the construction of a local canonical form at any nonalgebraic eigenvalue in the finite-dimensional setting (for simplicity). More precisely, given an integer m ≥ 1, the space U = Km , the point λ0 ∈ Ω, and a family L ∈ C ∞ (Ω, L(U )) with λ0 ∈ Eig(L) \ Alg(L), it is requested to adapt the proof of Theorem 7.4.1 in order to show that there exist a decomposition (7.47) U = U0 ⊕ U1
with 1 ≤ p := dim U0 ≤ m, and four operator families E, F ∈ C ∞ (Ω, L(U )),
A ∈ C ∞ (Ω, L(U0 )),
B ∈ C ∞ (Ω, L(U1 )),
such that E(λ0 ) and F(λ0 ) are isomorphisms, ordλ0 A = ∞, B(λ) = diag{(λ − λ0 )kp+1 , . . . , (λ − λ0 )km },
λ ∼ λ0 ,
for some integers k_{p+1} ≥ · · · ≥ k_m ≥ 0, if p < m, while B = 0 if p = m, and, with respect to (7.47), L admits the factorization L = E(A ⊕ B)F.
6. Let Sub(U) denote the set of non-zero linear subspaces of the Banach space U having finite dimension or finite codimension. Directly from the following definition of m[·; λ0], prove that the function

m[·; λ0] : ⋃_{V ∈ Sub(U)} S^∞_{λ0}(V) → [0, ∞]

defined by

m[L; λ0] := sup_{s≥0} dim N[trng{L0, . . . , Ls}],   L ∈ ⋃_{V ∈ Sub(U)} S^∞_{λ0}(V),

satisfies the following properties:
M1. For every V ∈ Sub(U) and E, L, F ∈ S^∞_{λ0}(V), m[ELF; λ0] ≥ m[L; λ0].
M2. If V ∈ Sub(U) has dimension 1, then m[(· − λ0)^n I_V; λ0] = n, n ∈ N*.
M3. There exists L ∈ S^∞_{λ0}(U) such that m[L; λ0] = 0.
M4. If V ∈ Sub(U) is decomposed as V = V1 ⊕ V2 for some V1, V2 ∈ Sub(V), and for each i ∈ {1, 2} we have Li ∈ S^∞_{λ0}(Vi), then m[L1 ⊕ L2; λ0] = m[L1; λ0] + m[L2; λ0].
7.
Let U be a Banach space and suppose that a mapping # Sλ∞0 (V ) → [0, ∞] m[·; λ0 ] : V ∈Sub(U )
exists satisfying properties M1–M4 of Exercise 6. Use the result of Exercise 5 to prove that for every V ∈ Sub(U ), and every L ∈ Sλ∞0 (V ), m[L; λ0 ] = χ[L; λ0 ]. This is another uniqueness result, in the spirit of those of Chapter 6.
8. Use Exercise 7 to prove that, for each Banach space U, there exists a unique function

m_U[·; λ0] : S^∞_{λ0}(U) → [0, ∞]
satisfying the following properties: M1.
For each L, M ∈ Sλ∞0 (U ), mU [LM; λ0 ] = mU [L; λ0 ] + mU [M; λ0 ].
M2.
mU [(· − λ0 )IU ; λ0 ] = 1 if dim U = 1.
M3.
There exists L ∈ Sλ∞0 (U ) with mU [L; λ0 ] < ∞ if dim U = ∞.
M4.
If U = U1 ⊕ U2 with Ui ∈ Sub(U ) and Li ∈ Sλ∞0 (Ui ) for each i ∈ {1, 2}, then mU [L1 ⊕ L2 ; λ0 ] = mU1 [L1 ; λ0 ] + mU2 [L2 ; λ0 ].
9. Show that Theorem 6.0.1 can be obtained as an application of the uniqueness result of Exercise 8. 10. The main goal of this exercise is to show that none of the properties M1–M4 of Exercise 8 can be deduced from the remaining ones. More precisely, consider a non-zero Banach space U and the four maps miU [ · ; λ0 ] : Sλ∞0 (U ) → N ∪ {∞},
i ∈ {1, 2, 3, 4},
defined, for each L ∈ S^∞_{λ0}(U), by

m^1_U[L; λ0] := dim N[L(λ0)],      m^2_U[L; λ0] := 2χ[L; λ0],
m^3_U[L; λ0] := χ[L; λ0] if dim U < ∞,  and  m^3_U[L; λ0] := ∞ if dim U = ∞,
m^4_U[L; λ0] := χ[L; λ0] if dim U = 1,  and  m^4_U[L; λ0] := 2χ[L; λ0] if dim U > 1,
and prove that, for each i ∈ {1, 2, 3, 4}, the map miU satisfies property Mj of Exercise 8 for all j ∈ {1, 2, 3, 4} \ {i}. 11. Let J ∈ Mn (K) be the matrix Jj,n of (1.26). Calculate the local Smith form at λj of the family LJ defined by LJ (λ) := λIn − J,
λ ∈ K.
Use this calculation and the Jordan canonical form to calculate the local Smith form of the family LA defined by LA (λ) := λIn − A,
λ ∈ K,
at every λj ∈ σ(A), for each of the matrices A of Exercise 1 of Chapter 1.
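For Exercise 11 it may help to recall that, for a matrix, the exponents of the local Smith form of λI_n − A at λ0 (the partial multiplicities) are the Jordan block sizes, and that these can be read off from d_k := dim N[(λ0 I − A)^k]. The sketch below is purely illustrative: it does not reproduce the matrices J_{j,n} of (1.26), it just uses a plain Jordan block, and the helper partial_multiplicities is ad hoc.

```python
import numpy as np

def partial_multiplicities(A, lam0, tol=1e-8):
    """Jordan block sizes of A at lam0, i.e. the exponents in the local
    Smith form of lambda*I - A at lam0, from d_k = dim N[(lam0*I - A)^k]."""
    n = A.shape[0]
    B = lam0 * np.eye(n) - A
    d, P = [0], np.eye(n)
    for _ in range(n):
        P = P @ B
        d.append(n - np.linalg.matrix_rank(P, tol))
    blocks_ge = [d[k] - d[k - 1] for k in range(1, n + 1)]  # blocks of size >= k
    sizes = []
    for k in range(n, 0, -1):
        count_k = blocks_ge[k - 1] - (blocks_ge[k] if k < n else 0)
        sizes += [k] * count_k
    return sizes

# A single 4x4 Jordan block with eigenvalue 3: local Smith form
# diag(1, 1, 1, (lambda - 3)^4), i.e. one partial multiplicity equal to 4.
J = 3 * np.eye(4) + np.diag(np.ones(3), 1)
print(partial_multiplicities(J, 3.0))   # [4]

# Two blocks (sizes 2 and 1) at the eigenvalue 5.
A = np.diag([5.0, 5.0, 5.0, 7.0]); A[0, 1] = 1.0
print(partial_multiplicities(A, 5.0))   # [2, 1]
```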
12. Let U and V be two Banach spaces. Use the appropriate results of Section 7.6 to prove, directly from the definition, that the function

S^∞_{λ0}(U, V) → N ∪ {∞},   L ↦ sup_{s≥0} dim N[trng{L0, . . . , Ls}],
satisfies the product formula of Theorem 5.6.1. 13. Prove Theorem 7.3.1 without invoking Theorem 4.3.2. Indication: Adapt the proof of Theorem 7.3.1, and use Proposition 7.7.1 to choose a basis of N [L[s] ] that simplifies the calculations. 14. This exercise proposes a dual construction for the Jordan chains described in this chapter; namely, the left Jordan chains. For the sake of simplicity, we restrict ourselves to considering the finite-dimensional setting. Fix an integer m ≥ 1, and a family L ∈ C r (Ω, L(Km )) for some r ∈ N ∪ {∞, ω}. Let 0 ≤ s ≤ r be an integer. Every ordered set of row vectors (u0 , . . . , us ) ∈ (Km )s+1 with u0 = 0 and
Σ_{i=0}^{j} u_{j−i} Li = 0,   0 ≤ j ≤ s,
is said to be a left Jordan chain of L at λ0 starting at u0 . You are asked to accomplish the following tasks: • Define left rank in a natural way. • Introduce the concept of canonical set of left Jordan chains, and give some algorithm to construct it. • Introduce the associated concepts of left κ-multiplicity and left partial κ- multiplicities. • Given a family L ∈ C r (Ω, L(Km )) with the length of all its left Jordan chains at λ0 bounded above by some integer k ≤ r, and E, F ∈ C r (Ω, L(Km )) such that E(λ0 ), F(λ0 ) are isomorphisms, prove that the left partial κ-multiplicities of L at λ0 equal those of ELF if dim N [L0 ] ≥ 1. • State and prove a left counterpart of Theorem 7.3.1. Indication: Adapt the proof of Theorem 7.3.1 suggested in Exercise 13. • State and prove a left counterpart of Theorem 7.4.1. • Prove that the left partial κ-multiplicities actually equal the partial κ-multiplicities. 15. This exercise shows that Theorem 7.8.3 might fail when the condition L0 ∈ Fred0 (U, V ) is removed from its statement. Consider p ∈ [1, ∞], and the Banach space p of all sequences x := (x(0), x(1), . . .) ∈ KN such that ∞ 1/p p
x p := |x(n)| < ∞, n=0
if p < ∞, and
x ∞ := sup{|x(n)|} < ∞, n≥0
when p = ∞. For each j ≥ 0, let ej ∈ p denote the vector defined by ej (i) = δij for all i ≥ 0; here δij denotes the Kronecker delta. For each j ≥ 0, let Pj ∈ L(p ) be the projection defined by Pj x = x(j)ej . Clearly, Pj is not a Fredholm operator. Prove, step by step, the following: • Pj = 1 for each j ≥ 0. • Set Ω := {λ ∈ K : |λ| < 1}. Then the family L : Ω → L(p ) defined by L(λ) :=
∞
λn Pn ,
λ ∈ Ω,
n=0
is analytic. • The operator L(λ) is injective for all λ ∈ Ω \ {0}; however, L(λ) ∈ / Iso(p ) for all λ ∈ Ω. • Extending Definitions 7.1.1, 7.2.1 and 7.2.2 to cover the case of non-Fredholm families, prove that no Jordan chain of L at 0 can be continued indefinitely. • rank x = ∞ for all x ∈ p \ {0}. 16. Let L ∈ C r (Ω, L(U, V )) be such that λ0 ∈ Algk (L) for some k ∈ N ∩ [0, r]. Express the partial µ-multiplicities introduced by Definition 5.1.3 in terms of the partial κ-multiplicities of L at λ0 . 17. Use Exercise 16 and the local Smith form of Theorem 7.8.3 to prove that the dual construction of Exercise 5 of Chapter 5 is actually equivalent to the construction of Section 5.1. 18. Prove that in Theorem 7.8.3 the family F can be chosen to be analytic. Indication: Adapt the proof of Theorem 7.8.1, and apply Theorem 7.8.3 to the Taylor polynomial of L. 19. Adapt the proof of Theorem 7.8.3 and use Exercise 18 to show that the families E and F of (7.39) can be chosen so that E is analytic and F is of class C r−ν . Combine this feature with Theorems 5.6.1, 7.3.1 and 7.8.3, and Exercise 18 to get the following general version of the product formula: For every r, s ∈ N ∪ {∞} and L ∈ C r (Ω, L(U, V )),
M ∈ C^s(Ω, L(W, U))

such that L0 ∈ Fred0(U, V), M0 ∈ Fred0(W, U), and the multiplicities

κ[L; λ0],   κ[M; λ0],   κ[LM; λ0]

are well defined, the following identity holds:

κ[LM; λ0] = κ[L; λ0] + κ[M; λ0].
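Since, in the finite-dimensional setting, the multiplicity coincides with the order of the determinant at λ0 (cf. Chapter 6), the product formula above can be checked directly on matrix polynomial families. The following sympy sketch is illustrative only; the example matrices are arbitrary.

```python
import sympy as sp

lam = sp.symbols('lambda')

def mult_at(Lmat, lam0):
    """ord_{lam0} det L(lambda): in finite dimensions this equals the
    algebraic multiplicity kappa[L; lam0] (cf. Chapter 6)."""
    d = sp.expand(Lmat.det())
    order = 0
    while d.subs(lam, lam0) == 0:
        d = sp.expand(sp.diff(d, lam))
        order += 1
    return order

lam0 = 1
L = sp.Matrix([[lam - 1, 1], [0, (lam - 1)**2]])
M = sp.Matrix([[(lam - 1)**3, 0], [lam, 1]])

kL, kM, kLM = mult_at(L, lam0), mult_at(M, lam0), mult_at(L * M, lam0)
print(kL, kM, kLM)        # 3 3 6
assert kLM == kL + kM     # kappa[LM; lam0] = kappa[L; lam0] + kappa[M; lam0]
```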
7.12 Comments on Chapter 7 The concept of Jordan chain is, of course, very well known for linear families of matrices (see, e.g., Gantmacher [39]). Jordan chains of families of operators (as discussed in this chapter) were introduced in the decade of 1940 to develop a spectral theory for polynomial, holomorphic and meromorphic families. One of the first uses of Jordan chains (in that case, in the meromorphic setting) can be found in Keldyˇs [66, 67]. The reader is referred to Gohberg [42] for an early exposition of these concepts and for a list of references to the original papers. Different expositions about Jordan chains, under closely related assumptions, can also be found in Markus & Sigal [95], Gohberg & Sigal [54], Gohberg & Rodman [53], and Gohberg, Lancaster & Rodman [50]. Jordan chains have been shown to be a powerful technical tool in a number of areas, as already discussed in the Preface of this book. Later definitions and uses of Jordan chains for non-analytic families, in connection to bifurcation theory, are in Sarreither [116], Rabier [110], and L´ opezG´ omez & Mora-Corral [89]. The exposition of this chapter has followed the construction described in [89] and anticipated in [88]; however, in this chapter we have considerably polished and expanded the original material of [89]. The concept of local Smith form goes back to Smith [121], where its existence is proved in the ring of integers. A proof of the existence of the local Smith form in an abstract ring can be found in Jacobson [63]. For an exposition in the context of matrix polynomials see, for example, Gantmacher [39], or Gohberg, Lancaster & Rodman [50]. One of the first studies of equivalence of families can be found in Gohberg [42]. Lemma 7.8.2 has been borrowed from Gohberg, Goldberg & Kaashoek [43] (see also Gohberg & Sigal [54]), and our main source on the theory of equivalence of families has been Gohberg, Kaashoek & Lay [44]. The relationship between local and global equivalence is studied in Leiterer [78, 79], where the global Smith form for operator-valued analytic functions was introduced (see also Gohberg & Rodman [52]). Many of the implications of Theorem 7.10.4 were already known in the context of holomorphic and meromorphic families; see Gohberg and Sigal [54] and Gohberg [42] for probably the first results, and Sigal [119] and Bart, Kaashoek and Lay [9] for extension to Fredholm families of an arbitrary index. Criteria for global equivalence of matrix polynomials are very well known (see, e.g., Gel’fand [40]). Although in the context of polynomial operator families, the existence of a local Smith form is a very particular case of the available global Smith form described in Gohberg, Lancaster & Rodman [50], it should be noted that, not only the local results of this chapter are optimal for general operator families of class C r regularity, for r ∈ N, but also the Magnus factorizations introduced in Chapter 5 can actually be used to construct a global Smith form in the general
context of this book. Indeed, let Ω ⊂ K be a bounded open connected set, and suppose the family L ∈ C ∞ (Ω, L(U, V )) satisfies L(Ω) ⊂ Fred0 (U, V ) and the set Σ(L) consists of a finite number of algebraic eigenvalues, say, Σ(L) = Eig(L) = Alg(L) = {λ1 , . . . , λp }, with λi = λj ,
1 ≤ i < j ≤ p,
for some integer p ≥ 1. The theory developed in Chapter 8 shows that this occurs when L ∈ C ω (Ω, L(U, V )) and the sets L(Ω) ∩ Iso(U, V ) and Σ(L) are non-empty. As we are assuming Σ(L) = Alg(L), for every j ∈ {1, . . . , p}, there exists νj ∈ N∗ such that λj ∈ Algνj (L). According to Corollary 5.3.2, there are ν1 finite-rank projections Πh,1 ∈ L(U ) \ {0}, for 1 ≤ h ≤ ν1 , and M ∈ C ∞ (Ω, L(U, V )) such that M(λ1 ) ∈ Iso(U, V )
(7.48)
and L(λ) = M(λ)P1 (λ − λ1 ),
λ ∈ Ω,
(7.49)
where we have denoted P1 (t) := [tΠν1 ,1 + IU − Πν1 ,1 ] · · · [tΠ1,1 + IU − Π1,1 ],
t ∈ K.
If p = 1, then we are done. So, suppose p ≥ 2. Then, as P1 (t) is invertible for all t ∈ K \ {0}, we find from (7.48) and (7.49) that Σ(M) = Alg(M) = {λ2 , . . . , λp }. Moreover, it is easily seen that λj ∈ Algνj (M),
2 ≤ j ≤ p.
Thus, according to Corollary 5.3.2, there are ν2 finite-rank projections Πh,2 ∈ L(U ) \ {0}, for 1 ≤ h ≤ ν2 , and N ∈ C ∞ (Ω, L(U, V )) such that N(λ2 ) ∈ Iso(U, V ) and M(λ) = N(λ)P2 (λ − λ2 ),
λ ∈ Ω,
(7.50)
where we have denoted

P2(t) := [tΠ_{ν2,2} + I_U − Π_{ν2,2}] · · · [tΠ_{1,2} + I_U − Π_{1,2}],   t ∈ K.

Consequently, substituting (7.50) into (7.49) shows that

L(λ) = N(λ)P2(λ − λ2)P1(λ − λ1),   λ ∈ Ω.
By construction, N(Ω) ⊂ Iso(U, V ) if p = 2, and in the general case when p ≥ 3, after finitely many steps, the previous argument shows the existence of pj=1 νj finite-rank projections Πj,i ∈ L(U ) \ {0},
1 ≤ i ≤ p,
1 ≤ j ≤ νi ,
and of a family E ∈ C ∞ (Ω, L(U )) such that E(Ω) ⊂ Iso(U ) and L(λ) = E(λ)Pp (λ − λp ) · · · P1 (λ − λ1 ),
λ ∈ Ω,
where, for every 1 ≤ i ≤ p, Pi (t) := [tΠνi ,i + IU − Πνi ,i ] · · · [tΠ1,i + IU − Π1,i ],
t ∈ K.
Clearly, the product polynomial P defined by P(λ) := Pp (λ − λp ) · · · P1 (λ − λ1 ),
λ ∈ K,
satisfies that P(λ) is a finite-rank perturbation of the identity IU for all λ ∈ K. Hence, P(K) ⊂ Fred0 (U ). Moreover, if dim U = dim V < ∞,
(7.51)
then P is a matrix polynomial, as discussed in Chapter 10 and, therefore, according to the global results of Gohberg, Lancaster & Rodman [50], P possesses a global Smith form. Obviously, such form provides us with a global Smith form for L in Ω. As L is a general family, this result indeed provides a substantial extension of the previous finite-dimensional global results of Gohberg, Lancaster & Rodman [50]. Further, it is rather obvious that the regularity of L can be relaxed to assume L ∈ C r (Ω, L(U, V )) with p νj , r= i=1
or even r = max{ν1 , . . . , νp }. In such a case, the existence of E ∈ C(Ω, L(U, V )) satisfying E(Ω) ⊂ Iso(U, V ) and L(λ) = E(λ)P(λ),
λ ∈ Ω,
is still guaranteed by the theory developed in Chapter 5. At a further stage, condition (7.51) can be overcome by using the fact that P is a finite-rank perturbation of I_U. Therefore, the theory developed in this book does not inherit exclusively a local nature, but essentially a global nature, in the sense that it provides a general scheme to get global results through a canonical procedure.
As already anticipated in Section 3.6, the condition that

λ0 ∈ ⋃_{0≤ν} Algν(L)

has been shown to be pivotal for characterizing the existence of the local Smith form of L at λ0. As a consequence of the results of Chapter 8, any analytic family must satisfy that condition at every isolated eigenvalue. This feature might explain why most of the work on Jordan chains has been developed for holomorphic and meromorphic pencils.
The proof of the result of Exercise 12 can be found in Sarreither [117], where a number of different results about the multiplicity of a product can be found. The proof of the result of Exercise 5 can be found in Mora-Corral [99]. Similarly, Exercises 7, 8 and 10 are based on the results of [99].
Chapter 8
Analytic and Classical Families. Stability This chapter focuses attention on the analytic operator families. After studying some universal spectral properties of these families, this chapter deals with the classical families LA of the form LA (λ) := λIU − A,
λ ∈ K,
(8.1)
for a given A ∈ L(U ), in order to show that, in this particular case, the classic concepts of algebraic ascent and multiplicity equal the generalized concepts introduced in the previous four chapters. Consequently, the algebraic multiplicity analyzed in this book, from a series of different perspectives, is indeed a generalization of the classic concept of algebraic multiplicity. The plan of this chapter is the following. Section 8.1 characterizes the isolated eigenvalues of a general analytic family. Section 8.2 shows that the discrete spectrum of an analytic family consists of isolated eigenvalues. Section 8.3 shows the coincidence between the classic concept of multiplicity and the generalized concepts introduced in this book. Finally, Section 8.4 proves the global invariance by homotopy of the complex global multiplicity of a holomorphic family in a bounded domain. Throughout this chapter we consider K ∈ {R, C}, two K-Banach spaces U and V , an open subset Ω ⊂ K, a point λ0 ∈ Ω, and a general family L ∈ C ω (Ω, L(U, V )) with L0 ∈ Fred0 (U, V ). Under these assumptions, we already know that L(λ) ∈ Fred0 (U, V ) for λ ∼ λ0 . As in Chapter 7, we will also adopt the notation (7.1).
8.1 Isolated eigenvalues
The following result characterizes the isolated eigenvalues of an analytic operator family. It might be regarded as an improvement of Theorem 7.8.3.
Theorem 8.1.1. Let λ0 ∈ Ω and L ∈ C^ω(Ω, L(U, V)) be with L0 := L(λ0) ∈ Fred0(U, V). Then, the following conditions are equivalent:
i) There exists an open neighborhood Ω′ ⊂ Ω of λ0 such that L(λ) ∈ Iso(U, V) for all λ ∈ Ω′ \ {λ0}.
ii) λ0 ∈ Algν(L) for some ν ∈ N.
iii) L possesses a local Smith form at λ0, in the sense of Definition 7.4.3.
iv) The set of lengths of the Jordan chains of L at λ0 is bounded above.
v) No Jordan chain of L at λ0 can be continued indefinitely.
vi) m[L; λ0] < ∞, where m is the unique multiplicity of Theorem 6.0.1.
Proof. By definition of algebraic eigenvalue, (ii) implies (i). According to Theorems 7.8.3, 7.6.3 and 6.0.1, conditions (ii), (iii), (iv), (v) and (vi) are equivalent.
Suppose (i). By Lemma 7.8.2, there exist an open neighborhood Ω̃ ⊂ Ω of λ0, a topological direct sum decomposition U = U0 ⊕ U1, with dim U0 = dim N[L0] < ∞, and three operator families

E ∈ C^ω(Ω̃, L(U, V)),   F ∈ C^ω(Ω̃, L(U)),   M ∈ C^ω(Ω̃, L(U0)),

with E0 ∈ Iso(U, V) and F0 ∈ Iso(U), such that

L(λ) = E(λ) [M(λ) ⊕ I_{U1}] F(λ),   λ ∈ Ω̃.   (8.2)

As we are assuming that L(λ) ∈ Iso(U, V) for all λ ∈ Ω̃ \ {λ0}, choosing a basis in U0 shows that

det M(λ) ≠ 0,   λ ∈ Ω̃ \ {λ0}.

As det M ∈ C^ω(Ω̃), necessarily ordλ0 det M < ∞. Therefore, thanks to Proposition 3.1.1, λ0 must be an algebraic eigenvalue of M. Obviously, according to (8.2), this implies (ii) and concludes the proof.

The family L defined through (4.20) is not analytic, despite being of class C^∞; the value 0 is an isolated eigenvalue of L, but 0 ∉ Alg(L). Consequently, the analyticity is crucial in Theorem 8.1.1, as well as in the remaining results of this chapter.
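The phenomenon behind this remark can be made concrete with the standard flat function: a C^∞ scalar family vanishing at 0 to infinite order has 0 as an isolated zero which cannot be an algebraic eigenvalue, since all Taylor coefficients vanish. The snippet below is only illustrative (it is not claimed that (4.20) is built from this particular function); it shows numerically that e^{−1/λ²} is o(λ^n) for every n.

```python
import numpy as np

def flat(t):
    """The standard C-infinity, non-analytic flat function exp(-1/t^2)."""
    return np.exp(-1.0 / t ** 2) if t != 0 else 0.0

# flat(h) / h**10 -> 0 as h -> 0: the zero at 0 has no finite order,
# so 0 cannot belong to Alg_nu for any finite nu.
for h in (0.5, 0.3, 0.2, 0.1):
    print(h, flat(h) / h ** 10)
```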
8.2 The structure of the spectrum The next result ascertains the structure of the spectrum of an analytic family of Fredholm operators with index zero. Theorem 8.2.1. Suppose the following: i) Ω is an open connected subset of K. ii) L ∈ C ω (Ω, L(U, V )) satisfies L(Ω) ⊂ Fred(U, V ). iii) L(Ω) ∩ Iso(U, V ) = ∅. Then, L(Ω) ⊂ Fred0 (U, V ), and, hence, by the open mapping theorem, Σ(L) = Eig(L). Moreover, Σ(L) is discrete in Ω and, hence, due to Theorem 8.1.1, Σ(L) = Eig(L) ⊂ Alg(L). Proof. By (i) and (ii), L(Ω) is a connected set contained in Fred(U, V ). Due to (iii), it satisfies L(Ω) ∩ Fred0 (U, V ) = ∅. Therefore, L(Ω) ⊂ Fred0 (U, V ), because the Fredholm index is continuous. Consequently, Σ(L) consists of eigenvalues and, necessarily, must be closed. In addition, according to (iii), Σ(L) Ω. Let S be the set of accumulation points of Σ(L). Then, S is closed in Ω and S ⊂ Σ(L). Assume that Σ(L) is not discrete and let λ0 ∈ S. By Lemma 7.8.2, there ˜ ⊂ Ω of λ0 , a topological direct sum decomposition exist an open neighborhood Ω U = U0 ⊕ U1 , with dim U0 = dim N [L0 ] < ∞, and three operator families ˜ L(U, V )), E ∈ C ω (Ω,
˜ L(U )), F ∈ C ω (Ω,
˜ L(U0 )), M ∈ C ω (Ω,
with E0 ∈ Iso(U, V ) and F0 ∈ Iso(U ), satisfying (8.2). Clearly, ˜ Σ(M) = Σ(L) ∩ Ω and, consequently, choosing a basis in U0 , the point λ0 must be an accumula˜ Thus, by the identity theorem for analytic tion point of zeros of det M ∈ C ω (Ω). functions, ˜ det M = 0 in Ω. ˜ ⊂ S, and, hence, S is an open subset of Ω. As it is also closed, Therefore, Ω and Ω is connected, necessarily S = Ω and Σ(L) = Ω, which contradicts (iii). Consequently, Σ(L) must be discrete in Ω. Finally, Theorem 8.1.1 concludes the proof, because any isolated eigenvalue of an analytic family is algebraic.
8.3 Classic algebraic multiplicity Throughout this section, the attention will be focused on the classical case when LA (λ) = λIU − A,
λ ∈ K,
(8.3)
for a given A ∈ L(U ). Our main goal is to show that the classic algebraic multiplicity of A equals the generalized algebraic multiplicity χ[LA ; ·] analyzed in this book. According to Corollary 6.5.4, in the finite-dimensional case when U = Km for some integer m ≥ 1, it is already known that λ0 ∈ σ(A) = Σ(LA ) = Eig(LA ),
χ[LA ; λ0 ] = ma (λ0 ),
where ma (λ0 ) denotes the algebraic multiplicity of λ0 as an eigenvalue of the matrix A. Obtaining a general version of this result becomes slightly more delicate. Now we consider a K-Banach space U and an operator A ∈ L(U ). Clearly, LA ∈ C ω (K, L(U )). Moreover, if the modulus of λ ∈ K is sufficiently large, one has that λ−1 A < 1 and, therefore, for such range of values of λ, LA (λ) = λIU − A = λ IU − λ−1 A ∈ Iso(U ). If one assumes that, for some open set Ω ⊂ K, LA (Ω) ⊂ Fred(U ),
LA (Ω) ∩ Iso(U ) = ∅,
then, according to Theorem 8.2.1, we find that

σ(A) ∩ Ω = Σ(LA) ∩ Ω = Eig(LA) ∩ Ω = Eig(A) ∩ Ω.

Here and throughout the rest of the book, given an operator B ∈ L(U), the set Eig(B) denotes the set of eigenvalues of the operator B. Fix any λ0 ∈ σ(A) ∩ Ω. Then, the classic algebraic multiplicity of A at λ0, denoted by ma(λ0), is defined through

ma(λ0) := dim ⋃_{j=1}^{∞} N[(λ0 I_U − A)^j].
By Theorem 1.2.1, this quantity indeed provides us with the algebraic multiplicity of λ0 as an eigenvalue of A, if U = K^m. In the general case when U is an arbitrary Banach space, either

N[(λ0 I_U − A)^j] ⊊ N[(λ0 I_U − A)^{j+1}] for all j ≥ 0,   (8.4)

or, alternatively, there exists an integer ν(λ0) ≥ 1 such that

N[(λ0 I_U − A)^i] ⊊ N[(λ0 I_U − A)^j] ⊂ N[(λ0 I_U − A)^{ν(λ0)}] = N[(λ0 I_U − A)^ℓ]

for all 0 ≤ i < j ≤ ν(λ0) ≤ ℓ, much like in the finite-dimensional setting. When the second option occurs, ν(λ0) is referred to as the algebraic ascent of λ0 as an eigenvalue of A, and N[(λ0 I_U − A)^{ν(λ0)}] is named the ascent generalized eigenspace associated with λ0. Consequently, in all these circumstances, we are led to the identity

ma(λ0) = sup_{j≥0} dim N[(λ0 I_U − A)^j].
Obviously, alternative (8.4) cannot occur if dim U < ∞. The key result revealing why ma(λ0) = χ[LA; λ0] is the following one.
Proposition 8.3.1. Given A ∈ L(U), let LA be the family defined by (8.3). Then, for every integer s ≥ 1 and λ0 ∈ K, the operator

N[trng{L^A_0, . . . , L^A_{s−1}}] → N[(λ0 I_U − A)^s],   col[x0, . . . , x_{s−1}] ↦ x_{s−1},   (8.5)

is a linear topological isomorphism whose inverse is given by

N[(λ0 I_U − A)^s] → N[trng{L^A_0, . . . , L^A_{s−1}}],   y ↦ col[x0, . . . , x_{s−1}],   (8.6)

where we have denoted x_{s−i} := (A − λ0 I_U)^{i−1} y, 1 ≤ i ≤ s. Consequently,

dim N[L^A_{[s−1]}] = dim N[(λ0 I_U − A)^s],   s ≥ 1.
Proof. Note that

L^A_0 = λ0 I_U − A,   L^A_1 = I_U,   L^A_j = 0,   j ≥ 2.
Fix s ≥ 1 and let (x0, . . . , x_{s−1}) ∈ U^s. If

col[x0, . . . , x_{s−1}] ∈ N[trng{L^A_0, . . . , L^A_{s−1}}],

by an elementary induction, it is apparent that

x_i ∈ N[(λ0 I_U − A)^{i+1}],   0 ≤ i ≤ s − 1.
Therefore, the mapping (8.5) is well defined, linear and continuous.
Conversely, for any y ∈ N[(λ0 I_U − A)^s], it is easy to verify that, when one sets

x_{s−i} := (A − λ0 I_U)^{i−1} y,   1 ≤ i ≤ s,

it follows that

col[x0, . . . , x_{s−1}] ∈ N[trng{L^A_0, . . . , L^A_{s−1}}].
Consequently, (8.6) also is well defined, linear and continuous. Finally, it can be checked straightforwardly that these maps are mutually inverses. This concludes the proof. Owing to Proposition 8.3.1 and Theorem 7.6.2, the following fundamental result holds. Theorem 8.3.2. Let λ0 ∈ K and A ∈ L(U ) be such that λ0 is an isolated eigenvalue of A, and λ0 IU − A ∈ Fred0 (U ). Then ma (λ0 ) = χ[LA ; λ0 ] < ∞
and
ν(λ0 ) = α[LA ; λ0 ],
where LA (λ) := λIU − A,
λ ∈ K.
Moreover, ν(λ0 ) equals the order of λ0 as a pole of the (analytic) resolvent operator λ → R(λ; A) := (λIU − A)−1 ,
λ ∈ K \ σ(A).
Proof. Clearly, LA ∈ C^ω(K, L(U)). Moreover, by the assumptions, LA satisfies LA(λ0) ∈ Fred0(U) and condition (i) of Theorem 8.1.1. Thus, by Theorem 8.1.1, there exists an integer ν ≥ 1 such that λ0 ∈ Algν(LA). By Definition 4.3.1, ν equals the order of the pole of R(·; A) at λ0. Thanks to Theorem 5.3.1, ν = α[LA; λ0]. Moreover, according to Theorem 7.6.2, we have that

dim N[L^A_{[j−1]}] < dim N[L^A_{[j]}] ≤ dim N[L^A_{[ν−1]}] = dim N[L^A_{[ℓ]}]

whenever 1 ≤ j ≤ ν − 1 ≤ ℓ, and that

dim N[L^A_{[ν−1]}] = χ[LA; λ0].

Therefore, due to Proposition 8.3.1, it becomes apparent that ν(λ0) = ν and that

ma(λ0) = dim N[(λ0 I_U − A)^ν] = dim N[L^A_{[ν−1]}] = χ[LA; λ0].
This concludes the proof.
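For matrices, Theorem 8.3.2 can be observed directly: m_a(λ0), computed from the kernels of the powers of λ0 I − A, agrees with the multiplicity of λ0 as a root of det(λI − A), and the ascent ν(λ0) is the size of the largest Jordan block at λ0. The numpy sketch below is only an illustration; the helper names are ad hoc.

```python
import numpy as np

def classic_ma_and_ascent(A, lam0, tol=1e-8):
    """m_a(lam0) = sup_j dim N[(lam0*I - A)^j] and the ascent nu(lam0)."""
    n = A.shape[0]
    B = lam0 * np.eye(n) - A
    dims, P = [0], np.eye(n)
    for _ in range(n):
        P = P @ B
        dims.append(n - np.linalg.matrix_rank(P, tol))
    nu = next(j for j in range(1, n + 1) if dims[j] == dims[min(j + 1, n)])
    return dims[-1], nu

# eigenvalue 2 with Jordan blocks of sizes 3 and 1, plus a simple eigenvalue 5
A = np.diag([2.0, 2.0, 2.0, 2.0, 5.0])
A[0, 1] = A[1, 2] = 1.0

ma, nu = classic_ma_and_ascent(A, 2.0)
chi = int(np.sum(np.isclose(np.linalg.eigvals(A), 2.0, atol=1e-6)))
print(ma, nu, chi)   # 4 3 4: m_a(2) = chi[L_A; 2] = 4, and the ascent is 3
```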
In the special, but important, case when A ∈ K(U ) is a compact operator, the general theory provides us with the following classic result. Corollary 8.3.3. Suppose A ∈ K(U ) and let σ(A) denote the spectrum of A. Then, σ(A) \ {0} consists of isolated eigenvalues with finite algebraic multiplicity. Proof. For each λ ∈ K \ {0}, the operator LA (λ) := λIU − A = λ(IU − λ−1 A) is a non-zero multiple of a compact perturbation of the identity. Thus, LA (K \ {0}) ⊂ Fred0 (U ). Moreover, LA (λ) ∈ Iso(U ) if |λ| > A . Therefore, according to Theorem 8.2.1 (applied to Ω := K \ {0}), we find that σ(A) ∩ (K \ {0}) is a discrete set contained in Eig(A). Theorem 8.3.2 concludes the proof.
8.4 Stability of the complex algebraic multiplicity
When K = C, the fact L ∈ C^1(Ω, L(U, V)) entails L ∈ H(Ω, L(U, V)). So, for complex operator families, we have actually been dealing with holomorphic families, except in the least interesting case when the family is merely continuous, where none of the algebraic invariants introduced in this book can be defined, unless the family consists of isomorphisms.
Although at first glance one might be tempted to think that the theories of complex and real multiplicities are essentially the same, except perhaps for some incidental simplification of the proofs in the complex setting, there is, however, a crucial difference between both theories. Namely, whereas the complex multiplicity is stable, in the sense that small perturbations of the family do not affect the value of the multiplicity, it turns out that the real multiplicity is unstable. Indeed, in the simplest case when U = V = R, the multiplicity of L at λ0 is given through ordλ0 L, and it is well known that the order does not enjoy any reasonable stability property. This section shows the stability of the complex multiplicity for holomorphic operator families.
Throughout this section we suppose that K = C and consider a non-empty bounded connected open set Ω ⊂ C and a family L such that

L ∈ C(Ω̄, L(U, V)),   L|_Ω ∈ H(Ω, L(U, V)),   (8.7)

and

L(Ω) ⊂ Fred(U, V)  and  L(∂Ω) ⊂ Iso(U, V),   (8.8)
¯ and ∂Ω, the closure and the boundary of Ω, where, as usual, we have denoted by Ω respectively. Then, as the Fredholm index is an integer-valued continuous function ¯ is connected, we have that and Ω ¯ ⊂ Fred0 (U, V ). L(Ω) Moreover, by Theorem 8.2.1, the set Σ(L) is contained in Alg(L) and consists of a finite number of algebraic eigenvalues. It should be noted that, thanks to (8.8), L(λ) ∈ Iso(U, V ) for all λ ∈ Ω sufficiently close to ∂Ω. From now on, we will denote by m the multiplicity whose existence and uniqueness was established by Theorem 6.0.1. By the theory developed in the previous chapters, we already know that it equals the multiplicities χ, µ and κ introduced in Chapters 4, 5 and 7, respectively. The following concept makes sense. Definition 8.4.1. The multiplicity of L in Ω, denoted by m[L; Ω], is defined through m[L; Ω] := m[L; λ]. (8.9) λ∈Ω
When Σ(L) = {λ1, . . . , λn} for some integer n ≥ 1, with λi ≠ λj for 1 ≤ i < j ≤ n, then (8.9) becomes

m[L; Ω] := Σ_{i=1}^{n} m[L; λi],
while m[L; Ω] = 0 if L(Ω) ⊂ Iso(U, V ). Consequently, under hypotheses (8.7) and (8.8), m[L; Ω] ∈ N.
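In finite dimensions m[L; λ0] = ord_{λ0} det L (this identification is used below in the proof of Proposition 8.4.2), so m[L; Ω] is simply the number of zeros of det L in Ω counted with their orders, and it can be evaluated through the argument principle. The following numpy sketch is illustrative only; the quadrature and the example family are ad hoc, and the boundary circle is assumed free of eigenvalues.

```python
import numpy as np

def m_L_Omega(L, center, radius, nodes=4096):
    """m[L; Omega] for a holomorphic matrix family on Omega = D_radius(center):
    the winding number of det L along the boundary circle (argument principle).
    Assumes det L has no zeros on the boundary."""
    th = np.linspace(0.0, 2.0 * np.pi, nodes, endpoint=False)
    z = center + radius * np.exp(1j * th)
    dets = np.array([np.linalg.det(L(w)) for w in z])
    phase = np.unwrap(np.angle(np.append(dets, dets[:1])))
    return int(np.round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# Sigma(L) = {0, 1} inside D_2(0), with m[L; 0] = 2 and m[L; 1] = 1.
L = lambda lmb: np.array([[lmb ** 2, 1.0], [0.0, lmb - 1.0]], dtype=complex)
print(m_L_Omega(L, 0.0, 2.0))    # 3 = m[L; 0] + m[L; 1]
```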
8.4.1 Local stability
The next result establishes the local stability of the multiplicity m[L; Ω].
Proposition 8.4.2. Suppose L, M ∈ H(Ω, L(U, V)) satisfy λ0 ∈ Alg(L), and L(λ0) ∈ Fred0(U, V). Then, there exist r, δ > 0 such that the estimate

sup_{λ∈Cr(λ0)} ‖L(λ) − M(λ)‖ ≤ δ   (8.10)
implies m[M; Dr (λ0 )] = m[L; Dr (λ0 )] = m[L; λ0 ]. Proof. It is based on the simultaneous application to L and M of the finitedimensional reduction described in the proof of Lemma 7.8.2. Since L0 ∈ Fred0 (U, V ), there exists a finite-rank operator F ∈ L(U, V ) for which L0 + F ∈ Iso(U, V ).
For sufficiently small δ, r > 0, by (8.10) and the Cauchy integral formula, M0 is close enough to L0 so that M0 + F ∈ Iso(U, V ) as well. Let EL , EM ∈ H(Ω, L(U, V )) be the families defined by EL (λ) := L(λ) + F,
EM (λ) := M(λ) + F,
λ ∈ Ω.
Obviously, EL (λ) and EM (λ) are isomorphisms for λ sufficiently close to λ0 . As F has finite rank, N [F ] has a finite-dimensional topological complement U0 in U . Let P ∈ L(U ) be the projection satisfying R[P ] = U0
and N [P ] = N [F ].
Then, P has finite rank and F P = F. Thus, for every λ ∼ λ0 and N ∈ {L, M}, we have that

I_U − E_N(λ)^{−1} F = [I_U − P E_N(λ)^{−1} F P] [I_U − (I_U − P) E_N(λ)^{−1} F P],

and, setting

F_N(λ) := I_U − (I_U − P) E_N(λ)^{−1} F P,
λ ∼ λ0 ,
one gets that the families FL and FM are holomorphic in a neighborhood of λ0 . These families take invertible values for λ ∼ λ0 and actually FN (λ)−1 = IU + (IU − P )EN (λ)−1 F P,
N ∈ {L, M},
λ ∼ λ0 .
Consequently, for each λ ∼ λ0 and N ∈ {L, M}, we are led to

N(λ) = E_N(λ) [I_U − P E_N(λ)^{−1} F P] F_N(λ).

Moreover, setting

D_N(λ) := I_U − P E_N(λ)^{−1} F P,
N ∈ {L, M},
λ ∼ λ0 ,
we have that, for each N ∈ {L, M} and λ ∼ λ0 , N(λ) = EN (λ) DN (λ) FN (λ) and DN (λ)(U0 ) ⊂ U0 ,
DN (λ)|N [F ] = I|N [F ] .
We now define, for each N ∈ {L, M} and λ ∼ λ0 , GN (λ) := DN (λ)|U0 ∈ L(U0 ).
Then, with respect to the decomposition U = R[P ] ⊕ N [P ], the family DN can be expressed as DN (λ) = diag{GN (λ), IN [F ] } = GN (λ) ⊕ IN [F ] ,
λ ∼ λ0 .
According to the proof of (ii) implies (i) in Theorem 7.8.3, we find that m[L; λ0 ] = m[GL ; λ0 ] and m[M; λ0 ] = m[GM ; λ0 ]. Thus, by Corollary 6.5.1, m[L; λ0 ] = ordλ0 det GL
and m[M; λ0 ] = ordλ0 det GM .
By standard results in one-dimensional complex function theory, there exist δ′, r > 0 such that if

sup_{λ∈Cr(λ0)} |det G_L(λ) − det G_M(λ)| ≤ δ′,   (8.11)
then ordλ0 det GL = ordλ0 det GM and, hence, m[L; λ0 ] = m[M; λ0 ]. Finally, by the construction of GL and GM , it is apparent that there exist δ, r > 0 such that condition (8.10) implies (8.11), because the determinant is continuous and there exists a constant C > 0 such that GL (λ) − GM (λ) ≤ C L(λ) − M(λ) for every λ sufficiently close to λ0 . This concludes the proof.
8.4.2 Global stability
Once we have established the local stability of the multiplicity, we can proceed to obtain its global stability.
Theorem 8.4.3 (of stability). Let Ω ≠ ∅ be a bounded connected open subset of C, and L an operator family satisfying (8.7) and (8.8). Then, there exists δ > 0 such that, for every operator family M satisfying

M ∈ C(Ω̄, L(U, V)),   M|_Ω ∈ H(Ω, L(U, V)),

and

sup_{λ∈∂Ω} ‖L(λ) − M(λ)‖ ≤ δ,   (8.12)

one has

M(Ω) ⊂ Fred0(U, V),   M(∂Ω) ⊂ Iso(U, V),   (8.13)

and

m[L; Ω] = m[M; Ω].   (8.14)
Proof. By the discussion carried out before Definition 8.4.1, we know that ¯ ⊂ Fred0 (U, V ). L(Ω) Moreover, Σ(L) is finite, possibly empty, and m[L; Ω] = m[L; λ] ∈ N. λ∈Σ(L)
If δ > 0 is sufficiently small, then condition (8.12) implies M(λ) ∈ Iso(U, V ) for all λ ∈ Ω sufficiently close to ∂Ω. In addition, by the Cauchy integral formula, for each ε > 0 there exists δ > 0 such that inequality (8.12) guarantees sup L(λ) − M(λ) ≤ ε.
(8.15)
¯ λ∈Ω
It suffices to prove that, under condition (8.15), all the conclusions of the theorem ¯ is compact, there exists ε0 > 0 hold true for sufficiently small ε > 0. Clearly, as Ω such that (8.13) is satisfied for every ε ∈ (0, ε0 ] and M satisfying (8.15). Moreover, if Σ(L) = ∅, then, by shortening ε, if necessary, one gets Σ(M) = ∅ as long as M satisfies (8.15). Therefore, in such circumstances, we find that m[L; Ω] = m[M; Ω] = 0, which ends the proof of the theorem. Assume that, for some integer n ≥ 1, Σ(L) = {λ1 , . . . , λn }. As Σ(L) is finite, due to Proposition 8.4.2, there exist r, δ > 0 with n #
¯ r (λi ) ⊂ Ω, D
¯ r (λi ) ∩ D ¯ r (λj ) = ∅ if 1 ≤ i < j ≤ n, D
i=1
for which inequality (8.15) implies m[L; λi ] = m[M; Dr (λi )], and, hence, m[L; Ω] =
n
1 ≤ i ≤ n,
m[M; Dr (λi )].
i=1
As ¯\ K := Ω
n #
Dr (λi )
i=1
¯ such that L(K) ⊂ Iso(U, V ), the number ε > 0 can be is a compact subset of Ω shortened, if necessary, so that M(K) ⊂ Iso(U, V ). For such a choice of ε > 0, identity (8.14) is satisfied, and the proof is completed.
8.4.3 Homotopy invariance
Based on Theorem 8.4.3, the invariance by homotopy of m[L; Ω] readily follows. Precisely, the following result holds true.
Corollary 8.4.4 (Invariance by homotopy). Let Ω ≠ ∅ be a bounded connected open subset of C, and X a connected topological space. Then, for every H ∈ C(X × Ω̄, L(U, V)) satisfying
• H(x, ·) is holomorphic in Ω for all x ∈ X,
• H(X × Ω) ⊂ Fred(U, V),
• H(X × ∂Ω) ⊂ Iso(U, V),
and any x0 ∈ X, one has that

m[H(x, ·); Ω] = m[H(x0, ·); Ω]   for all   x ∈ X.
Proof. By Theorem 8.4.3, the map h : X → N defined by h(x) := m[H(x, ·); Ω],
x ∈ X,
is continuous. As X is connected and N has the discrete topology, h must be constant, which concludes the proof. In most applications, the space X is the interval [0, 1].
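The stability and homotopy-invariance statements can be observed numerically in the matrix case: deforming the family while keeping the boundary values invertible leaves m[·; Ω] unchanged. The sketch below is illustrative only; it repeats the winding-number routine of the previous sketch so that it runs on its own, and the perturbation is an arbitrary one chosen so that no eigenvalue crosses the boundary circle.

```python
import numpy as np

def m_L_Omega(L, center, radius, nodes=4096):
    th = np.linspace(0.0, 2.0 * np.pi, nodes, endpoint=False)
    z = center + radius * np.exp(1j * th)
    dets = np.array([np.linalg.det(L(w)) for w in z])
    phase = np.unwrap(np.angle(np.append(dets, dets[:1])))
    return int(np.round((phase[-1] - phase[0]) / (2.0 * np.pi)))

base = lambda lmb: np.array([[lmb ** 2, 1.0], [0.0, lmb - 1.0]], dtype=complex)
bump = np.array([[0.2, 0.0], [0.1, 0.0]], dtype=complex)

# H(x, .) keeps the boundary of D_2(0) free of eigenvalues for 0 <= x <= 1,
# so m[H(x, .); Omega] must be constant in x (Corollary 8.4.4).
for x in np.linspace(0.0, 1.0, 5):
    print(round(x, 2), m_L_Omega(lambda w: base(w) + x * bump, 0.0, 2.0))  # always 3
```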
8.5 Exercises 1.
For a given A ∈ L(U ), consider the classical family LA (λ) := λIU − A,
λ ∈ K,
and suppose λ0 ∈ K is an isolated point of σ(A). According to Theorem 8.1.1, λ0 ∈ Algν (LA ) for some integer ν ≥ 1. Let 1 , . . . , ν ≥ 0 be ν integers so that the partial κ-multiplicities of LA at λ0 are given through (7.23). Prove that dim N [(λ0 IU − A)s ] =
s−1
jj + s
j=1
ν
j ,
0 ≤ s ≤ ν.
j=s
2. Given p ∈ [1, ∞], let p be the space of sequences introduced in Exercise 15 of Chapter 7. Define S ∈ L(p ) as (Sx)(n) = x(n + 1),
x ∈ p ,
Prove that dim N [S α ] = α,
α ∈ N.
n ≥ 0.
Let Ω ⊂ K be open, λ0 ∈ Ω and L ∈ C ∞ (Ω, L(U )) such that L0 ∈ Fred0 (U ),
3.
N [L0 ] ∩ N [L1 ] = {0}. and L(λ)L0 = L0 L(λ),
λ ∼ λ0 .
{M{j} }∞ j=0
Let be a sequence of derived families from L at λ0 , as discussed in Chapter 5. Prove, step by step, the following assertions: • For every integer j ≥ 0 and λ ∼ λ0 , Lj0 M{j} (λ) = L(λ)Lj0 . • For every integer j ≥ 0,
Lj0 M{j} (λ0 ) = L1 Lj0 .
• For every integer j ≥ 0, N [Lj+1 ] = N [M{j} (λ0 )] ⊕ N [Lj0 ]. 0 • For every integer j ≥ 1, N [Lj0 ] = N [M{0} (λ0 )] ⊕ · · · ⊕ N [M{j−1} (λ0 )]. 4. Use the result of Exercise 3 to give an alternative proof of Theorem 8.3.2 directly from the results of Chapter 5. 5.
Given a linear compact operator K ∈ K(U ), let LK be the family defined as LK (λ) := λIU − K,
λ ∈ K.
Prove that, for every open subset Ω ⊂ K \ {0}, the (finite or infinite) number m[LK ; Ω] equals the total number of eigenvalues of K in Ω, counted according to their classical algebraic multiplicities. 6.
Let U be a real Banach space, set O := {K ∈ K(U ) : IU − K ∈ Iso(U )},
and consider the map, denoted by Ind (often referred to as the topological index map), defined by Ind : O → N K → (−1)n[K] where n(K) stands for the number of real eigenvalues of K greater than 1, counted according to their algebraic multiplicities. Then, define the map, called topological degree and denoted by Deg, Deg(IU − K) := Ind(K),
K ∈ O.
Prove the following: • Deg(IU ) = 1. • If U = Rm for some integer m ≥ 1, then Deg(IU − K) = sign det(IU − K),
K ∈ O.
• For every K1 , K2 ∈ O, Deg ([IU − K1 ][IU − K2 ]) = Deg(IU − K1 ) · Deg(IU − K2 ). ;n • Let U = j=1 Uj be a topological direct sum decomposition of U for some integer n ≥ 1, and K ∈ O such that K(Uj ) ⊂ Uj for all j ∈ {1, . . . , n}. For each j ∈ {1, . . . , n}, define Kj ∈ L(Uj ) as the restriction of K to Uj Then, Deg(I − K) =
n
Deg IUj − Kj .
j=1
7. In this exercise we suppose K = R, and, given K ∈ C ω (Ω, L(U )) such that K(Ω) ⊂ K(U ), we consider the family L defined by L(λ) := IU − K(λ),
λ ∈ Ω.
Let λ0 ∈ Ω be an isolated point of Σ(L). Use a sequence of derived families from L at λ0 , as discussed in Chapter 5, and the properties of Exercise 6 to show the existence of a constant η ∈ {−1, 1} such that Deg(L(λ)) = Ind(K(λ)) = η sign(λ − λ0 )m[L;λ0 ] ,
λ ∼ λ0 .
Consequently, Ind(K(λ)) changes as λ crosses λ0 if and only if m[L; λ0 ] is odd. 8. Let Ω = ∅ be a bounded open connected subset of C, and L an operator family satisfying (8.7) and (8.8). Suppose that there exists λ0 ∈ C \ ∂Ω such that {(λ − λ0 )x : −∞ < x < 0} ∩ σ(L(λ)) = ∅
for all
λ ∈ ∂Ω.
Prove the following: • If λ0 ∈ Ω, then m[L; Ω] = 1. Consequently, there exists λ∗ ∈ Ω such that L(λ) ∈ Iso(U ) for each λ ∈ Ω \ {λ∗ }, while λ∗ ∈ Σ(L) and m[L; λ∗ ] = 1. / Ω, then m[L; Ω] = 0. Therefore, L(Ω) ⊂ Iso(U ). • If λ0 ∈
8.6 Comments on Chapter 8 One of the most pioneering studies of Fredholm-operator-valued holomorphic functions can be found in Gohberg [41]. The paper of Gohberg [42] presents a description of related results with references to the original sources. Stability results started in Gohberg & Kre˘ın [45]. The stability of the multiplicity is due to Eni [28]; see also Markus & Sigal [95] and Gohberg & Sigal [54]. Later results appeared in Wolf [128] and Boer & Thijsse [15].
A number of different proofs of the results of Sections 8.1, 8.2 and 8.3 can be found in, for example, Gohberg, Goldberg & Kaashoek [43], Ramm [113], and L´opez-G´omez [82], though Theorem 8.1.1 is a substantial improvement of all those results, and we believe that the proof of Theorem 8.3.2 given here is simpler than the previous ones. Exercises 3 and 4 propose an alternative proof of Theorem 8.3.2. In this occasion, instead of invoking the results of Chapter 7, we are simply using the results of Chapter 5. Although the proof is based on some rather elementary facts and basic definitions, it runs slightly more lengthily than the one given in Section 8.3. It goes back to Magnus [92]. The relevance of Corollary 8.3.3 is based upon its huge number of applications to the theory of partial differential equations and integral equations. It has been a milestone in the development of modern functional analysis. Theorem 8.4.3 can be regarded as an infinite-dimensional version of the celebrated Rouch´e’s theorem. Actually, the theorem of invariance by homotopy is a global version of the Rouch´e theorem. Exercises 6 and 7 summarize some of the most important properties of the topological index for compact perturbations of the identity from the point of view of nonlinear analysis. They are pivotal results in bifurcation theory, as it will become apparent in Chapter 12 (see L´opez-G´omez [82] for further details).
Chapter 9
Algebraic Multiplicity Through Logarithmic Residues The stability results of Section 8.4 can be regarded as infinite-dimensional versions of the classic Rouch´e theorem. A closely related topic in complex function theory is the so-called argument principle, otherwise known as the logarithmic residue theorem, which has been established by Theorem 3.4.1 (for classical families) and Corollary 6.5.2 in a finite-dimensional setting. The main goal of this chapter is to obtain a number of infinite-dimensional generalizations of all these results. As a straightforward consequence of the main theorem of this chapter, it follows that if λ0 is an isolated eigenvalue of a holomorphic family L ∈ H(Ω, L(U, V )) with L0 ∈ Fred0 (U, V ), then, for every sufficiently small r > 0, the operator 1 L (λ)L(λ)−1 dλ 2πi Cr (λ0 ) has finite rank, and χ[L; λ0 ] = tr
1 2πi
L (λ)L(λ)−1 dλ.
(9.1)
Cr (λ0 )
Strikingly, this result has a real non-analytic counterpart, which constitutes the main theorem of this chapter. More precisely, given two real Banach spaces U, V , an open interval Ω ⊂ R, a family L ∈ C r (Ω, L(U, V )) for some r ∈ N ∪ {∞} such that L0 ∈ Fred0 (U, V ) and λ0 ∈ Algν (L) for a certain integer ν ≥ 1 with 2ν ≤ r, the main result of this chapter establishes the existence of (unique) finite-rank operators −1 ∈ L(V ), n ∈ {−ν, . . . , −1}, LL n
such that, for λ = λ0 sufficiently close to λ0 , the following asymptotic expansion at λ0 holds true, L (λ)L(λ) =
−1 −1 LL (λ − λ0 )n + o (λ − λ0 )−1 , n n=−ν
and, crucially,
χ[L; λ0 ] = tr L L−1 −1 .
(9.2)
Identity (9.2) provides us with a sharp generalization of (9.1) to the real nonanalytic setting. The plan of this chapter is as follows. Section 9.1 shows the existence of a finite Laurent development (or asymptotic expansion) of L−1 at any λ0 ∈ Algν (L) with ν ≤ r and analyzes some of its properties. Section 9.2 introduces the concept of trace of a finite-rank operator and studies its most basic properties, focusing special attention on those necessary to prove the main theorem of this chapter. Section 9.3 establishes the main theorem, Theorem 9.3.2, which has been already discussed above. Section 9.4 applies Theorem 9.3.2 for deriving (9.1) in the holomorphic setting. Finally, Section 9.5 constructs the underlying spectral projector, thus generalizing Theorem 3.4.1(d) substantially.
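In the finite-dimensional holomorphic case, tr(L′(λ)L(λ)^{−1}) = (det L)′(λ)/det L(λ) by Jacobi's formula, so (9.1) reduces to the classical argument principle and can be checked by direct quadrature on a small circle. The following numpy sketch is purely illustrative; the example family and the helper name are ad hoc.

```python
import numpy as np

def log_residue_trace(L, dL, lam0, radius=0.3, nodes=2000):
    """Trapezoidal-rule value of tr((1/(2*pi*i)) * integral over C_radius(lam0)
    of L'(z) L(z)^{-1} dz) for a matrix family."""
    th = np.linspace(0.0, 2.0 * np.pi, nodes, endpoint=False)
    z = lam0 + radius * np.exp(1j * th)
    dz = 1j * radius * np.exp(1j * th)              # dz/d(theta)
    acc = 0.0 + 0.0j
    for w, dw in zip(z, dz):
        acc += np.trace(dL(w) @ np.linalg.inv(L(w))) * dw
    return acc * (2.0 * np.pi / nodes) / (2.0j * np.pi)

# det L = lambda^3, so chi[L; 0] = 3; the logarithmic residue recovers it.
L  = lambda lmb: np.array([[lmb ** 2, lmb], [0.0, lmb]], dtype=complex)
dL = lambda lmb: np.array([[2 * lmb, 1.0], [0.0, 1.0]], dtype=complex)
print(np.round(log_residue_trace(L, dL, 0.0).real))   # 3.0
```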
9.1 Finite Laurent developments of L−1 The following result introduces the concept of finite Laurent development for the inverse L−1 of a real non-analytic operator family L. It establishes the existence of a unique asymptotic expansion of L−1 at any algebraic eigenvalue. Theorem 9.1.1. Let λ0 ∈ Ω and L ∈ C r (Ω, L(U, V )) be such that L0 ∈ Fred0 (U, V ) for some r ∈ N, and λ0 ∈ Algν (L) for some ν ∈ {0, . . . , r}. Then, there exist r − ν + 1 (unique) operators (L−1 )i ∈ L(V, U ),
i ∈ {−ν, . . . , r − 2ν},
such that L(λ)−1 =
r−2ν
(λ − λ0 )i (L−1 )i + o (λ − λ0 )r−2ν
as λ → λ0 .
i=−ν
Moreover, (L−1 )−ν = 0, dim R[(L−1 )i ] < ∞
for all
− ν ≤ i ≤ min{−1, r − 2ν},
and if r ≥ 2ν, then (L−1 )0 ∈ Fred0 (V, U ). According to Theorem 9.1.1, we can introduce the following concept.
(9.3)
Definition 9.1.2. The finite asymptotic expansion (9.3) will be referred to as the finite Laurent development of L−1 at λ0 . Accordingly, the operators (L−1 )i , for −ν ≤ i ≤ r − 2ν, will be referred to as the Laurent coefficients of L−1 at λ0 . Proof of Theorem 9.1.1. The uniqueness of the operators (L−1 )−ν , . . . , (L−1 )r−2ν can be obtained very easily. Indeed, multiplying (9.3) by (λ − λ0 )ν we obtain a Taylor expansion, whose operator coefficients are known to be unique. In the case ν = 0, we have L0 ∈ Iso(U, V ) and, hence, L−1 is of class C r in an open neighborhood of λ0 . In such a case, (9.3) is the Taylor expansion of L−1 at λ0 and we are done. Suppose 1 ≤ ν ≤ r. Then, the existence of the asymptotic expansion follows from the theory developed in Chapter 5. By Corollary 5.3.2, there exist ν finiterank projections Πj ∈ L(U ), for 0 ≤ j ≤ ν − 1, an open neighborhood D ⊂ Ω of λ0 , and a family T ∈ C r−ν (D, L(V, U )) such that T(D) ⊂ Iso(V, U ) and L(λ)−1 = CΠ0 (λ − λ0 )−1 · · · CΠν−1 (λ − λ0 )−1 T(λ),
λ ∈ D \ {λ0 },
(9.4)
where, for any finite-rank projection P ∈ L(U ) and t ∈ K \ {0}, it should be remembered that CP (t)−1 = t−1 P + IU − P. Clearly, the product of the first ν operator factors on the right-hand side of (9.4) can be expressed in the form CΠ0 (λ − λ0 )−1 · · · CΠν−1 (λ − λ0 )−1 =
0
Mi (λ − λ0 )i ,
(9.5)
i=−ν
for certain operators Mj ∈ L(U ), for −ν ≤ j ≤ 0, whose explicit knowledge is not necessary here, though it is easily realized that M0 = [IU − Π0 ] · · · [IU − Πν−1 ]
(9.6)
and that, for every j ∈ {−ν, . . . , −1}, the operator Mj has finite rank, because Πi has finite rank for all 0 ≤ i ≤ ν − 1. As this entails IU − Πi ∈ Fred0 (U ), equality (9.6) implies that M0 ∈ Fred0 (U ). On the other hand, since T ∈ C r−ν (D, L(V, U )), we have that T(λ) =
r−ν
(λ − λ0 )i Ti + o((λ − λ0 )r−ν ) as λ → λ0 .
i=0
Thus, according to (9.5) and (9.4), we find that L(λ)−1 =
r−2ν min{r−ν,i+ν} i=−ν
j=max{0,i}
Mi−j Tj (λ − λ0 )i + o (λ − λ0 )r−2ν
as λ → λ0 . Consequently, setting
min{r−ν,i+ν} −1
(L
)i :=
Mi−j Tj ∈ L(V, U ),
−ν ≤ i ≤ r − 2ν,
(9.7)
j=max{0,i}
yields (9.3). As, for every j ∈ {−ν, . . . , −1}, the operator Mj has finite rank, it is apparent that (L−1 )i has finite rank for all −ν ≤ i ≤ min{−1, r − 2ν}. Moreover, as we are assuming that λ0 ∈ Algν (L), then ν is the minimum integer satisfying (4.18) and, therefore, (L−1 )−ν = 0. Finally, suppose r ≥ 2ν. Then, according to (9.7), (L−1 )0 =
ν
M−j Tj .
j=0
As T0 ∈ Iso(V, U ) and M0 ∈ Fred0 (U ), then M0 T0 ∈ Fred0 (V, U ). Moreover, ν
M−j Tj
j=1
is a finite-rank operator. Therefore, (L−1 )0 ∈ Fred0 (V, U ), because it is a finiterank perturbation of a Fredholm operator of index zero. This completes the proof. In the analytic case, combining Theorem 8.2.1 with Theorem 9.1.1 provides us with the following fundamental result. It establishes that any λ0 ∈ Σ(L) must be an algebraic eigenvalue of L and that, therefore, is a pole of L−1 , which admits a (unique) finite Laurent development at λ0 . Corollary 9.1.3. Suppose: i) Ω is an open connected subset of K. ii) L ∈ C ω (Ω, L(U, V )) satisfies L(Ω) ⊂ Fred(U, V ). iii) L(Ω) ∩ Iso(U, V ) = ∅. Then, L(Ω) ⊂ Fred0 (U, V ), the set Σ(L) = Eig(L) is contained Alg(L) and is discrete in Ω, the inverse family L−1 defined as L−1 (λ) := L(λ)−1 ,
λ ∈ Ω \ Σ(L),
is analytic in Ω \ Σ(L), and every λ0 ∈ Σ(L) is a pole of L−1 . Moreover, for each ν ∈ N and λ0 ∈ Algν (L), there exist an open neighborhood D ⊂ Ω of λ0 and a (unique) sequence of operators {(L−1 )n }∞ n=−ν in L(V, U ) such that (L−1 )−ν = 0, and L−1 (λ) =
∞ n=−ν
(L−1 )n (λ − λ0 )n ,
λ ∈ D \ {λ0 },
(9.8)
uniformly on compact subsets of D \ {λ0 }. In addition, (L−1 )0 ∈ Fred0 (V, U ),
dim R[(L−1 )n ] < ∞,
n ∈ {−ν, . . . , −1}.
Proof. The first part of the corollary is a direct consequence of Theorem 8.2.1. Let λ0 ∈ Σ(L). Then, λ0 ∈ Algν (L) for some integer ν ≥ 1. According to Theorem 9.1.1 applied to r = 2ν, there exist ν + 1 (unique) operators (L−1 )i ∈ L(V, U ), for i ∈ {−ν, . . . , 0}, an open neighborhood D ⊂ Ω of λ0 , and R ∈ C ω (D \ {λ0 }, L(V, U )), with R(λ) = o(1) as λ → λ0 , such that 0
L−1 (λ) =
(L−1 )i (λ − λ0 )i + R(λ),
λ ∈ D \ {λ0 }.
i=−ν
Moreover, the operator (L−1 )−ν is non-zero, (L−1 )0 ∈ Fred0 (V, U ), and dim R[(L−1 )n ] < ∞
for all n ∈ {−ν, . . . , −1}.
By the theorem of Riemann on removable singularities, setting R(λ0 ) = 0, we have that R ∈ C ω (D, L(V, U )). This concludes the proof very easily. The existence, uniqueness and convergence of the whole series (9.8) follows from standard facts in the theory of analytic functions. Definition 9.1.4. The infinite development (9.8) is called the Laurent series of L−1 at λ0 . Accordingly, the operators (L−1 )n , for n ∈ Z ∩ [−ν, ∞), are referred to as the Laurent coefficients of L−1 at λ0 . More generally, suppose p ≤ q are integers, W is a K-Banach space, and a map R : Ω \ {λ0 } → W satisfies R(λ) =
q
(λ − λ0 )n Rn + o((λ − λ0 )q ),
λ ∈ Ω \ {λ0 }.
n=p
Then, the coefficient operators Rn ∈ W must be unique for p ≤ n ≤ q. We will sometimes denote them by c(R, λ0 , n) := Rn ,
p ≤ n ≤ q.
According to this notation, (9.3) can be equivalently written in the form L(λ)−1 =
r−2ν
c(L−1 , λ0 , i)(λ − λ0 )i + o (λ − λ0 )r−2ν .
i=−ν
9.2 The trace operator This section extends the finite-dimensional concept of trace to the infinite-dimensional setting, and obtains a series of important properties concerning the traces of the product of two finite Laurent series at a λ0 ∈ Ω.
9.2.1 Concept of trace and basic properties The following definition extends the classic finite-dimensional concept of trace to an infinite-dimensional setting. Definition 9.2.1. Let U be a K-Banach space and A ∈ L(U ) a finite-rank operator. Then, the trace of A, denoted by tr A, is defined through tr A := tr AR , where AR stands for the operator defined by restriction of A, A|R[A] : R[A] → R[A], which is finite dimensional. The next result collects the basic properties of this concept. They are easily obtained from the corresponding properties of its finite-dimensional counterpart. Proposition 9.2.2. Let A ∈ L(U ) be a finite-rank operator. Then, i) For every finite-dimensional subspace M ⊂ U with R[A] ⊂ M , tr A = tr AM , where AM stands for the restriction A|M : M → M . ii) For every finite-rank operator B ∈ L(U ) and β ∈ K, tr(βA + B) = β tr A + tr B. iii) If A is a projection then tr A = rank A. iv) For every Banach space V , every finite-rank operator B ∈ L(U, V ), and C ∈ L(V, U ), tr(BC) = tr(CB). Proof. Let M be a finite-dimensional subspace of U such that R[A] ⊂ M . Let AR ∈ L(R[A]) denote the restriction of A from R[A] to itself, and AM ∈ L(M ) the restriction of A from M to itself. By definition, tr A = tr AR . Let U1 be a topological complement of R[A] in M , and let A1 denote the restriction of A from U1 to R[A]. With respect to the decomposition M = R[A] ⊕ U1 , the operator AM can be expressed in the form AR A1 AM = 0 0 and, therefore, tr AM = tr AR , which concludes the proof of property (i).
9.2. The trace operator
231
Let B ∈ L(U ) be a finite-rank operator, and β ∈ K. Let U1 be a finitedimensional subspace of U such that R[A] + R[B] ⊂ U1 . Let A1 ∈ L(U1 ) and B1 ∈ L(U1 ) denote the restrictions of A and B from U1 to itself, respectively. Then, by the properties of the finite-dimensional trace operator, we find from property (i) that tr(βA + B) = tr(βA1 + B1 ) = β tr A1 + tr B1 = β tr A + tr B, which proves property (ii). Suppose A is a finite-rank projection. Then, the restriction AR of A from R[A] to itself equals the identity operator in R[A], whence tr A = tr AR = tr IR[A] = dim R[A] = rank A, which ends the proof of property (iii). Finally, assume that, for another K-Banach space V , the operator B ∈ L(U, V ) has finite rank and C ∈ L(V, U ). Since R[BC] ⊂ R[B], we obtain that BC is a finite-rank operator. Moreover, R[CB] = C(R[B]) and, hence, rank(CB) ≤ rank B. Let B1 denote the restriction of B from R[CB] to R[B], and C1 the restriction of C from R[B] to R[CB]. By (i) and the properties of the finite-dimensional trace we obtain that tr(BC) = tr(B1 C1 ) = tr(C1 B1 ) = tr(CB),
which proves property (iv).
9.2.2 Traces of the coefficients of the product of two families This subsection gives some useful properties concerning the traces of the Laurent operator coefficients of a product of two operator families. Proposition 9.2.3. Let p ≤ q be integers, L : Ω \ {λ0 } → L(U, V ) a map of the form q Ln (λ − λ0 )n + o((λ − λ0 )q ), L(λ) = n=p
with dim R[Ln ] < ∞
for all
p ≤ n ≤ min{q, −1},
∞
(9.9)
and M ∈ C (Ω, L(V, U )). Then, for every p ≤ n ≤ min{q, −1}, the operators c(LM, λ0 , n) ∈ L(V )
and
c(ML, λ0 , n) ∈ L(U )
have finite rank and tr c(LM, λ0 , n) = tr c(ML, λ0 , n).
232
Chapter 9. Algebraic Multiplicity Through Logarithmic Residues
Proof. For λ in a perforated neighborhood of λ0 , we have that L(λ)M(λ) =
q n
Lj Mn−j (λ − λ0 )n + o((λ − λ0 )q ),
n=p j=p
M(λ)L(λ) =
q n
Mn−j Lj (λ − λ0 )n + o((λ − λ0 )q ).
n=p j=p
Therefore, by the uniqueness of the coefficients in these asymptotic developments, we find that, for every p ≤ n ≤ min{q, −1}, c(LM, λ0 , n) =
n
Lj Mn−j ,
c(ML, λ0 , n) =
j=p
n
Mn−j Lj .
j=p
According to (9.9), each of these coefficient operators must have finite rank. Therefore, by properties (ii), (iv) of Proposition 9.2.2, we find that tr c(LM, λ0 , n) =
n
tr (Lj Mn−j ) =
j=p
n
tr (Mn−j Lj ) = tr c(ML, λ0 , n),
j=p
which completes the proof. Proposition 9.2.4. Consider r ∈ N ∪ {∞}. Let L ∈ C r (Ω, L(U, V ))
and
F ∈ C ∞ (Ω, L(U ))
be such that L0 ∈ Fred0 (U, V ) and F0 ∈ Iso(U ). Assume that λ0 ∈ Algν (L) for some integer ν ≥ 1 with 2ν ≤ r +1. Then, for each i ∈ {−ν, . . . , −1}, the operators c((LF) (LF)−1 , λ0 , i) and c(L L−1 , λ0 , i) have finite rank and tr c((LF) (LF)−1 , λ0 , i) = tr c(L L−1 , λ0 , i). Proof. In some perforated neighborhood of λ0 , we can express (LF) (LF)−1 = L L−1 + LF F−1 L−1 .
(9.10)
The family LF F−1 is of class C r in a neighborhood of λ0 . By Theorem 9.1.1, there ˜ ⊂ Ω of λ0 such that, for each λ ∈ Ω ˜ \ {λ0 }, the exists an open neighborhood Ω −1 operator L(λ) exists and L(λ)−1 =
−1
c(L−1 , λ0 , i)(λ − λ0 )i + o (λ − λ0 )−1 .
i=−ν
Moreover, dim R[c(L−1 , λ0 , i)] < ∞
for all i ∈ {−ν, . . . , −1}.
(9.11)
Thus, due to Proposition 9.2.3, for every i ∈ {−ν, . . . , −1}, the operators c(LF F−1 L−1 , λ0 , i) and c(F F−1 , λ0 , i) have finite rank and tr c(LF F−1 L−1 , λ0 , i) = tr c(F F−1 , λ0 , i). Actually, since F F−1 is of class C ∞ in a neighborhood of λ0 , c(F F−1 , λ0 , i) = 0,
i ∈ {−ν, . . . , −1},
and, therefore, tr c(LF F−1 L−1 , λ0 , i) = 0,
i ∈ {−ν, . . . , −1}.
(9.12)
On the other hand, as L is of class C r−1 , it follows from (9.11) that dim R[c(L L−1 , λ0 , i)] < ∞
for all i ∈ {−ν, . . . , −1}.
Consequently, by (9.10), dim R[c((LF) (LF)−1 , λ0 , i)] < ∞
for all i ∈ {−ν, . . . , −1},
and, thanks to Proposition 9.2.2(ii), we conclude from (9.10) and (9.12) that tr c((LF) (LF)−1 , λ0 , i) = tr c(L L−1 , λ0 , i),
i ∈ {−ν, . . . , −1}.
This ends the proof.
9.3 The multiplicity through a logarithmic residue The next result shows that transversalizing a family L at λ0 ∈ Ω extraordinarily simplifies the finite Laurent expansion of L L−1 at any λ0 ∈ Algk (L) for some integer k ≥ 1. More precisely, it characterizes the ranges of c(L, λ0 , i) · c(L−1 , λ0 , i),
−k ≤ i ≤ 0,
through the subspaces in which the direct sum (4.7) decomposes. This property will enable us to obtain the main result of this chapter, which asserts tr c(L L−1 , λ0 , −1) = χ[L; λ0 ].
Lemma 9.3.1. Let r ∈ N ∪ {∞} and λ0 ∈ Ω. Suppose L ∈ C r (Ω, L(U, V )) satisfies L0 ∈ Fred0 (U, V ), and λ0 is a k-transversal eigenvalue of L for some integer k ≥ 1 with 2k ≤ r. Then, R[L0 (L−1 )0 ] = R[L0 ],
R[Li (L−1 )−i ] = Li (
i−1 6
N [Lj ]),
1 ≤ i ≤ k,
(9.13)
j=0
and
k
Li (L−1 )−i = IV ,
(9.14)
i=0 −1
Li (L
−1
)−i Lj (L
−1
)−j = δij Li (L
)−i ,
0 ≤ i, j ≤ k.
Proof. Thanks to Theorem 9.1.1, the inverse operator L−1 admits, for λ in a perforated neighborhood of λ0 , a unique finite Laurent expansion of the form L(λ)−1 =
0
(L−1 )n (λ − λ0 )n + o(1),
(9.15)
n=−k
for some operators
(L−1 )−k , . . . , (L−1 )0 ∈ L(V, U )
such that (L−1 )−k = 0 and dim R[(L−1 )n ] < ∞,
n ∈ {−k, . . . , −1}.
A direct calculation yields IV = L(λ)L(λ)−1 k 0 = Ln (λ − λ0 )n + o((λ − λ0 )k ) (L−1 )n (λ − λ0 )n + o(1) n=0
=
n=−k
0 n+k
Li (L−1 )n−i (λ − λ0 )n + o(1),
n=−k i=0
for λ in a certain perforated neighborhood of λ0 . Consequently, owing to the uniqueness of the Laurent coefficients, guaranteed by Theorem 9.1.1, we find that k
Li (L−1 )−i = IV
(9.16)
i=0
and
n+k i=0
Li (L−1 )n−i = 0,
n ∈ {−k, . . . , −1}.
(9.17)
In particular, the first identity of (9.14) holds. Moreover, according to (9.17), we have that, for each v−k , . . . , v−1 ∈ V , n+k
Li (L−1 )n−i vn−i = 0,
n ∈ {−k, . . . , −1},
i=0
and, consequently, as λ0 is a k-transversal eigenvalue of L, Lemma 4.1.1 implies that −k ≤ n ≤ −1, 0 ≤ i ≤ n + k. Li (L−1 )n−i vn−i = 0, In other words, R[(L−1 )−i ] ⊂
i−1 6
N [Lj ],
1 ≤ i ≤ k,
(9.18)
j=0
and, hence, −1
R[L0 (L
−1
)0 ] ⊂ R[L0 ],
R[Li (L
)−i ] ⊂ Li (
i−1 6
N [Lj ]),
1 ≤ i ≤ k.
j=0
Therefore, according to (9.16), by the uniqueness of the projections associated with every factor of a direct sum, the operators L0 (L−1 )0 ,
L1 (L−1 )−1 ,
...,
Lk (L−1 )−k ,
(9.19)
must be the projections associated with the direct sum decomposition (4.7), which concludes the proof.

Note that (9.13) and (9.14) indeed establish that the operators (9.19) are the projections associated with each of the factors of the direct sum (4.7).

The main result of this chapter can be stated as follows.

Theorem 9.3.2. Let r ∈ N ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Fred0(U, V) and λ0 ∈ Algν(L) for some integer ν ≥ 1 with 2ν ≤ r. Then, for every n ∈ {−ν, . . . , −1}, the operator c(L′L⁻¹, λ0, n) ∈ L(V) has finite rank and
\[
\operatorname{tr} c(L'L^{-1},\lambda_0,n) =
\begin{cases}
0, & n \in \{-\nu,\dots,-2\},\\
\chi[L;\lambda_0], & n = -1.
\end{cases} \tag{9.20}
\]

Proof. Assume that λ0 is a ν-transversal eigenvalue of L. Then, by (9.15) and
\[
L'(\lambda) = \sum_{n=0}^{\nu} (n+1)L_{n+1}(\lambda-\lambda_0)^n + o\big((\lambda-\lambda_0)^{\nu}\big),
\]
we obtain that, for λ ∼ λ0 and λ ≠ λ0,
\[
L'(\lambda)L(\lambda)^{-1} = \sum_{n=-\nu}^{0}\ \sum_{i=1}^{n+\nu+1} i\,L_i (L^{-1})_{n+1-i}\,(\lambda-\lambda_0)^n + o(1).
\]
Thus, from (9.18) it is apparent that, whenever ν ≥ 2,
\[
c(L'L^{-1},\lambda_0,n) = 0, \qquad n \in \{-\nu,\dots,-2\}.
\]
Moreover,
\[
c(L'L^{-1},\lambda_0,-1) = \sum_{i=1}^{\nu} i\,L_i\, c(L^{-1},\lambda_0,-i) = \sum_{i=1}^{\nu} i\,L_i (L^{-1})_{-i}.
\]
Consequently, due to Lemma 9.3.1, c(L′L⁻¹, λ0, −1) is a linear combination of finite-rank projections. In particular, it has finite rank. Therefore, since L_i(L⁻¹)_{−i} is a projection for every 1 ≤ i ≤ ν, we find from Proposition 9.2.2 and Definition 4.3.3 that
\[
\operatorname{tr} c(L'L^{-1},\lambda_0,-1) = \sum_{i=1}^{\nu} i \dim L_i\Big(\bigcap_{j=0}^{i-1} N[L_j]\Big) = \chi[L;\lambda_0],
\]
which concludes the proof of the theorem when λ0 is a ν-transversal eigenvalue of L.

Suppose now that λ0 ∈ Algν(L). Then, according to Theorem 5.3.1, there exists Φ ∈ P[L(U)] such that Φ(λ0) = I_U and λ0 is a ν-transversal eigenvalue of the transversalized family L^Φ := LΦ. Thanks to Proposition 9.2.4, we have that
\[
\operatorname{tr} c\big((L^\Phi)'(L^\Phi)^{-1},\lambda_0,n\big) = \operatorname{tr} c(L'L^{-1},\lambda_0,n), \qquad n \in \{-\nu,\dots,-1\}.
\]
Moreover, since λ0 is a ν-transversal eigenvalue of L^Φ, we already know that
\[
\operatorname{tr} c\big((L^\Phi)'(L^\Phi)^{-1},\lambda_0,n\big) =
\begin{cases}
0, & n \in \{-\nu,\dots,-2\},\\
\chi[L^\Phi;\lambda_0], & n = -1.
\end{cases}
\]
Finally, by Definition 4.3.3, χ[L; λ0] = χ[L^Φ; λ0]. Therefore, (9.20) holds true, which concludes the proof.
Theorem 9.3.2 is satisfied for ν = 0 as well. Indeed, in such a case, L0 ∈ Iso(U, V) and, hence, L′L⁻¹ is of class C^{r−1} in a neighborhood of λ0. Consequently, c(L′L⁻¹, λ0, −1) = 0 and, therefore,
\[
\operatorname{tr} c(L'L^{-1},\lambda_0,-1) = \operatorname{tr} 0 = 0 = \chi[L;\lambda_0].
\]
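To fix ideas, the following finite-dimensional illustration (ours, not part of the original treatment, but a direct consequence of the local Smith form of Chapter 7) may be useful. Suppose U = V = K^n and, near λ0, L(λ) = diag((λ−λ0)^{κ1}, …, (λ−λ0)^{κn}) with κ1 ≥ … ≥ κn ≥ 0. Then
\[
L'(\lambda)L(\lambda)^{-1} = \operatorname{diag}\Big(\frac{\kappa_1}{\lambda-\lambda_0},\dots,\frac{\kappa_n}{\lambda-\lambda_0}\Big),
\qquad
\operatorname{tr} c(L'L^{-1},\lambda_0,-1) = \sum_{i=1}^{n}\kappa_i = \chi[L;\lambda_0],
\]
in agreement with (9.20); the coefficients c(L′L⁻¹, λ0, n) vanish for every n ≤ −2 because each diagonal entry of L′L⁻¹ has a simple pole at λ0.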
9.4 Holomorphic and classical families

When K = C and L ∈ H(Ω, L(U, V)), Theorem 9.3.2 provides us with the following classic result, which extends to arbitrary complex Banach spaces the finite-dimensional result established by Corollary 6.5.2.

Theorem 9.4.1. Suppose:

i) Ω is an open connected subset of C.

ii) L ∈ H(Ω, L(U, V)) satisfies L(Ω) ⊂ Fred(U, V).

iii) L(Ω) ∩ Iso(U, V) ≠ ∅.

Then, for every λ0 ∈ Σ(L),
\[
\chi[L;\lambda_0] = \operatorname{tr}\frac{1}{2\pi i}\int_{\gamma} L'(\lambda)L(\lambda)^{-1}\,d\lambda,
\]
where γ is any Cauchy contour whose inner domain D_γ satisfies
\[
\bar D_\gamma \subset \Omega, \qquad \bar D_\gamma \cap \Sigma(L) = \{\lambda_0\}.
\]

Proof. Thanks to Theorem 8.2.1, L(Ω) ⊂ Fred0(U, V), and Σ(L) = Eig(L) ⊂ Alg(L) is a discrete set in Ω. Fix λ0 ∈ Σ(L) and let γ be a Cauchy contour whose inner domain D_γ satisfies the requirements of the statement. Also, let ν ≥ 1 be an integer such that λ0 ∈ Algν(L). Then, λ0 ∈ Algk(L′L⁻¹) for some k ∈ {1, . . . , ν} and, therefore, the Laurent series
\[
L'(\lambda)L^{-1}(\lambda) = \sum_{n=-k}^{\infty} c(L'L^{-1},\lambda_0,n)(\lambda-\lambda_0)^n \tag{9.21}
\]
converges uniformly on compact subsets of
\[
\big(\bar D_\gamma + D_\delta(0)\big)\setminus\{\lambda_0\},
\]
for sufficiently small δ > 0. Therefore, integrating (9.21) along γ shows that
\[
c(L'L^{-1},\lambda_0,-1) = \frac{1}{2\pi i}\int_{\gamma} L'(\lambda)L^{-1}(\lambda)\,d\lambda.
\]
Consequently, by Theorem 9.3.2,
\[
\chi[L;\lambda_0] = \operatorname{tr} c(L'L^{-1},\lambda_0,-1) = \operatorname{tr}\frac{1}{2\pi i}\int_{\gamma} L'(\lambda)L^{-1}(\lambda)\,d\lambda,
\]
which concludes the proof.
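For readers who like to see the contour formula in action, here is a minimal numerical sketch (our own illustration, not from the text) for a 2 × 2 holomorphic family; the matrix family, the contour radius and the discretization are all ad hoc choices.

```python
import numpy as np

# L(lam) = [[lam - lam0, 1], [0, (lam - lam0)**2]] has det L = (lam - lam0)^3,
# so the algebraic multiplicity at lam0 should equal 3.
lam0 = 0.7 + 0.2j

def L(lam):
    return np.array([[lam - lam0, 1.0], [0.0, (lam - lam0) ** 2]], dtype=complex)

def dL(lam):
    return np.array([[1.0, 0.0], [0.0, 2.0 * (lam - lam0)]], dtype=complex)

# Trapezoidal rule on a small circle gamma around lam0.
M = 2000
theta = 2 * np.pi * np.arange(M) / M
radius = 0.1
lams = lam0 + radius * np.exp(1j * theta)
dlams = 1j * radius * np.exp(1j * theta)          # d(lambda)/d(theta)

integrand = np.array([np.trace(dL(l) @ np.linalg.inv(L(l))) for l in lams])
chi = np.sum(integrand * dlams) * (2 * np.pi / M) / (2j * np.pi)

print(chi)   # approximately 3, the order of the zero of det L at lam0
```

Since tr(L′(λ)L(λ)⁻¹) = (log det L(λ))′ in finite dimension, the printed value is just the winding number of det L around λ0, as Theorem 9.4.1 predicts.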
According to Theorem 9.4.1, when A ∈ L(U) and the classical family L^A defined as
\[
L^A(\lambda) := \lambda I_U - A, \qquad \lambda \in \mathbb{C},
\]
satisfies L^A(Ω) ⊂ Fred(U) for some open connected subset Ω ⊂ C, then
\[
\chi[L^A;\lambda_0] = \operatorname{tr}\frac{1}{2\pi i}\int_{\gamma}(\lambda I_U - A)^{-1}\,d\lambda,
\]
for all λ0 ∈ Eig(A) ∩ Ω and any Cauchy contour γ whose inner domain D_γ satisfies \bar D_γ ⊂ Ω and σ(A) ∩ \bar D_γ = {λ0}. Moreover, this is always the case if A ∈ K(U) and λ0 ≠ 0. Further, by Theorem 8.3.2, under these circumstances, we actually have
\[
\chi[L^A;\lambda_0] = m_a(\lambda_0) = \dim N[(\lambda_0 I_U - A)^{\nu(\lambda_0)}]
\]
and λ0 ∈ Alg_{ν(λ0)}(L^A). Therefore,
\[
m_a(\lambda_0) = \operatorname{tr}\frac{1}{2\pi i}\int_{\gamma}(\lambda I_U - A)^{-1}\,d\lambda.
\]
Actually, in the next section we will show that
\[
\big((L^A)^{-1}\big)_{-1} := c\big((L^A)^{-1},\lambda_0,-1\big) = \frac{1}{2\pi i}\int_{\gamma}(\lambda I_U - A)^{-1}\,d\lambda \in L(U)
\]
is a projection onto N[(λ0 I_U − A)^{ν(λ0)}]; thus generalizing the corresponding finite-dimensional result of Theorem 3.4.1.
9.5 Spectral projection

According to Theorem 7.6.2, if r ∈ N∗ ∪ {∞} and L ∈ C^r(Ω, L(U, V)) satisfies L0 ∈ Fred0(U, V), then λ0 ∈ Algν(L) for some integer 1 ≤ ν ≤ r if and only if
\[
\dim N[L_{[j-1]}] < \dim N[L_{[j]}] \quad\text{for all } 1 \le j \le \nu-1, \qquad
\dim N[L_{[\nu-1]}] = \dim N[L_{[\nu]}],
\]
and, in such a case, χ[L; λ0] = dim N[L_{[ν−1]}]. The following result provides us with a projection of U^ν onto N[L_{[ν−1]}].

Theorem 9.5.1. Let r ∈ N ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Fred0(U, V) and λ0 ∈ Algν(L) for some integer ν ≥ 1 with 2ν ≤ r + 1. Then, the operator
\[
P :=
\begin{pmatrix}
(L^{-1})_{-\nu} & & \\
\vdots & \ddots & \\
(L^{-1})_{-1} & \cdots & (L^{-1})_{-\nu}
\end{pmatrix}
\begin{pmatrix}
L_\nu & \cdots & L_1 \\
\vdots & \ddots & \vdots \\
L_{2\nu-1} & \cdots & L_\nu
\end{pmatrix} \in L(U^\nu)
\]
is a projection onto N[L_{[ν−1]}]. Moreover,
\[
R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\}\big] = N[L_{[\nu-1]}] \tag{9.22}
\]
and, therefore, due to Theorem 7.6.2,
\[
\chi[L;\lambda_0] = \dim R[P] = \dim R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\}\big].
\]

Proof. As LL⁻¹ = I_V, identifying terms with the same order in the asymptotic expansion of LL⁻¹, one easily arrives at
\[
\operatorname{trng}\{L_0,\dots,L_{\nu-1}\}\cdot \operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\} = 0. \tag{9.23}
\]
Similarly, it follows from L⁻¹(λ)L(λ) = I_U, for λ ∼ λ0 and λ ≠ λ0, that
\[
\begin{pmatrix}
(L^{-1})_0 & \cdots & (L^{-1})_{-\nu+1} \\
\vdots & \ddots & \vdots \\
(L^{-1})_{\nu-1} & \cdots & (L^{-1})_0
\end{pmatrix}
\begin{pmatrix}
L_0 & & \\
\vdots & \ddots & \\
L_{\nu-1} & \cdots & L_0
\end{pmatrix} = I_{U^\nu} - P. \tag{9.24}
\]
By (9.23),
\[
R[P] \subset N[\operatorname{trng}\{L_0,\dots,L_{\nu-1}\}] = N[L_{[\nu-1]}].
\]
Let x ∈ N[L_{[ν−1]}]. Then, thanks to (9.24), Px = x. Consequently, P is a projection onto N[L_{[ν−1]}]. Moreover, it follows from the definition of P that
\[
R[P] \subset R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\}\big],
\]
and, according to (9.23),
\[
R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\}\big] \subset N[L_{[\nu-1]}].
\]
Therefore, (9.22) holds. The remaining assertions follow from Theorem 7.6.2. This concludes the proof.

As a consequence of Theorem 9.5.1, one can easily obtain the following classic result, which generalizes Theorem 3.4.1(d).

Theorem 9.5.2. Given A ∈ L(U), let L^A ∈ P[L(U)] be defined by
\[
L^A(\lambda) := \lambda I_U - A, \qquad \lambda \in \mathbb{K},
\]
and assume that λ0 ∈ K is an isolated eigenvalue of L^A such that L^A_0 ∈ Fred0(U). Then,
\[
\big((L^A)^{-1}\big)_{-1} \in L(U)
\]
is a projection onto N[(λ0 I_U − A)^{ν(λ0)}].
If, in addition, K = C, then for any Cauchy contour γ whose inner domain D_γ satisfies \bar D_γ ⊂ Ω and σ(A) ∩ \bar D_γ = {λ0}, one has that
\[
c\big((L^A)^{-1},\lambda_0,-1\big) = \frac{1}{2\pi i}\int_{\gamma}(\lambda I_U - A)^{-1}\,d\lambda. \tag{9.25}
\]

Proof. By Theorem 8.3.2, λ0 ∈ Alg_{ν(λ0)}(L^A). As usual, we set
\[
\big((L^A)^{-1}\big)_{j} := c\big((L^A)^{-1},\lambda_0,j\big), \qquad j \in \{-\nu(\lambda_0),\dots,-1\}.
\]
Since
\[
L^A_1 = I_U, \qquad L^A_j = 0, \quad j \ge 2,
\]
we find from Theorem 9.5.1 that the operator
\[
P :=
\begin{pmatrix}
0 & \cdots & 0 & \big((L^A)^{-1}\big)_{-\nu(\lambda_0)} \\
0 & \cdots & 0 & \big((L^A)^{-1}\big)_{-\nu(\lambda_0)+1} \\
\vdots & & \vdots & \vdots \\
0 & \cdots & 0 & \big((L^A)^{-1}\big)_{-1}
\end{pmatrix} \in L\big(U^{\nu(\lambda_0)}\big)
\]
is a projection onto N[L^A_{[ν(λ0)−1]}]. Now, we denote by P_{ν(λ0)−1} ∈ L(U^{ν(λ0)}, U) the operator defined through
\[
P_{\nu(\lambda_0)-1}\,\operatorname{col}[u_0,\dots,u_{\nu(\lambda_0)-1}] = u_{\nu(\lambda_0)-1}, \qquad u_0,\dots,u_{\nu(\lambda_0)-1} \in U.
\]
As P is a projection onto N[L^A_{[ν(λ0)−1]}], we have that
\[
R\big[\big((L^A)^{-1}\big)_{-1}\big] \subset P_{\nu(\lambda_0)-1}\big(N[L^A_{[\nu(\lambda_0)-1]}]\big).
\]
Conversely, for any
\[
u_{\nu(\lambda_0)-1} \in P_{\nu(\lambda_0)-1}\big(N[L^A_{[\nu(\lambda_0)-1]}]\big),
\]
there exist u_0, …, u_{ν(λ0)−2} ∈ U such that u := col[u_0, …, u_{ν(λ0)−1}] ∈ N[L^A_{[ν(λ0)−1]}]. Moreover, since Pu = u, we have that
\[
\big((L^A)^{-1}\big)_{-1}\, u_{\nu(\lambda_0)-1} = u_{\nu(\lambda_0)-1}.
\]
Consequently, ((L^A)⁻¹)_{−1} is a projection onto P_{ν(λ0)−1}(N[L^A_{[ν(λ0)−1]}]). On the other hand, by Proposition 8.3.1, we have that
\[
P_{\nu(\lambda_0)-1}\big(N[L^A_{[\nu(\lambda_0)-1]}]\big) = N[(\lambda_0 I_U - A)^{\nu(\lambda_0)}].
\]
Therefore, ((L^A)⁻¹)_{−1} is a projection onto N[(λ0 I_U − A)^{ν(λ0)}]. If K = C, then, by Theorem 9.4.1, it is apparent that (9.25) is satisfied. This concludes the proof.
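The identity (9.25) is easy to test numerically. The following minimal sketch (our own choice of matrix, contour and discretization, not part of the text) computes the Riesz projection for a 4 × 4 matrix with a 3 × 3 Jordan block at λ0 = 2 and a simple eigenvalue at 5, and checks that it is a projection onto the generalized eigenspace.

```python
import numpy as np

A = np.array([[2., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 5.]])
lam0, radius, M = 2.0, 1.0, 4000
I = np.eye(4)

theta = 2 * np.pi * np.arange(M) / M
lams = lam0 + radius * np.exp(1j * theta)
dlams = 1j * radius * np.exp(1j * theta)

# P = (1 / 2 pi i) * contour integral of (lambda I - A)^{-1}
P = sum(np.linalg.inv(l * I - A) * dl for l, dl in zip(lams, dlams)) * (2 * np.pi / M) / (2j * np.pi)
P = P.real                                      # A is real, so P is real up to round-off

print(np.allclose(P @ P, P, atol=1e-8))         # P is a projection
print(np.round(np.trace(P)))                    # trace = rank = 3 = m_a(2)
print(np.allclose(P @ np.eye(4)[:, :3], np.eye(4)[:, :3], atol=1e-8))  # P fixes N[(2I - A)^3]
```

The eigenvalue 5 lies outside the contour, so only the generalized eigenspace of λ0 = 2 contributes, in agreement with Theorem 9.5.2.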
9.6 Exercises

1. The statement of Theorem 9.3.2 holds true substituting L⁻¹L′ for L′L⁻¹. State this formally and prove it.

2. The proof of Theorem 9.3.2 can be easily adapted to show that, for every scalar-valued function f of class C^{k−1} in a neighborhood of λ0,
\[
\operatorname{tr} c(fL'L^{-1},\lambda_0,-1) = \operatorname{tr} c(fL^{-1}L',\lambda_0,-1) = f(\lambda_0)\,\chi[L;\lambda_0].
\]
State this formally and prove it.

3. This exercise provides an example of an infinite-dimensional operator family L ∈ C^r(Ω, L(U, V)) with L(Ω) ⊂ Fred0(U, V) for which the Laurent series of L⁻¹ can be determined explicitly. Precisely, consider integers n, r ≥ 1, an open interval D ⊂ R with 0 ∈ D, a function g ∈ C^r(D, R) satisfying g(0) = 0, and define
\[
f(\lambda) := (n\pi)^2 + g(\lambda), \qquad \lambda \in D.
\]
Let U denote the Banach space of functions u ∈ C²[0, 1] such that u(0) = u(1) = 0, and V := C[0, 1]. The following tasks are requested to be accomplished:

• Prove that the operator
\[
\frac{d^2}{dx^2}\colon U \to V, \qquad u \mapsto u'',
\]
is an isomorphism.

• Use the Ascoli–Arzelà theorem to prove that the linear operator
\[
\Big(\frac{d^2}{dx^2}\Big)^{-1}\colon V \to V
\]
is compact.

• Let L denote the family defined by
\[
L(\lambda)u := u + f(\lambda)\Big(\frac{d^2}{dx^2}\Big)^{-1}u, \qquad \lambda \in D, \quad u \in V.
\]
Prove that L ∈ C^r(D, L(V)) satisfies L(D) ⊂ Fred0(V).
• Prove that, for each λ ∈ D such that sin√(f(λ)) ≠ 0,
the operator L(λ) is invertible and actually, for every v ∈ V and x ∈ [0, 1], x 4 4 4 4 sin( f (λ)t)v(t)dt + sin( f (λ)x) L(λ)−1 v(x) = v(x) + f (λ) cos( f (λ)x) 0 4 4 1 4 4 4 f (λ) cos f (λ) 1 4 × − sin( f (λ)t)v(t)dt + f (λ) cos( f (λ)t)v(t)dt . sin f (λ) 0 0 • Prove that N [L(λ0 )] = span[sin(nπ·)], and that 0 is an isolated eigenvalue of L if and only if it is an isolated zero of g. • From now on, suppose ord0 g = k for some k ∈ N∗ with r ≥ 2k − 1 to make sure that 4 ord0 sin f = k and 0 ∈ Algk (L). • Use the identity
4 −1 = (−1)n 2nπgk−1 lim λk sin f (λ)
λ→0
to infer that, for every v ∈ V and x ∈ [0, 1], c(L−1 , 0, −k)v(x) = −2n2 π 2 gk−1 • Show L (λ)L(λ)−1 = kgk and
d2 dx2
−1
d2 dx2
−1
1
sin(nπt)v(t)dt sin(nπx). 0
c(L−1 , 0, −k)λ−1 + o(λ−1 )
sin(nπ·) = −(nπ)−2 sin(nπ·).
• Conclude that, for every v ∈ V and x ∈ [0, 1], 1 c(L L−1 , 0, −1)v(x) = 2k sin(nπt)v(t)dt sin(nπx) 0
and that c(L L χ[L; 0] = k.
−1
, 0, −1) is a rank-one operator whose trace is k. Therefore,
4. Use the local Smith form of Theorem 7.8.3 to give an alternative proof of Theorems 9.1.1 and 9.3.2. 5. Imposing the necessary regularity requirements on the families involved, prove the results of Exercise 9 of Chapter 4 and Exercise 7 of Chapter 5 by using the representation of the multiplicity as a logarithmic residue. 6. Imposing the necessary regularity requirements on the families involved, prove the results of Exercise 8 of Chapter 5 by using the representation of the multiplicity as a logarithmic residue.
7. Prove the following generalization of Proposition 9.2.3: Let
\[
L : \Omega\setminus\{\lambda_0\} \to L(V,U), \qquad M : \Omega\setminus\{\lambda_0\} \to L(U,V),
\]
be two maps satisfying
\[
L(\lambda) = \sum_{n=p_1}^{p_2} L_n(\lambda-\lambda_0)^n + o\big((\lambda-\lambda_0)^{p_2}\big), \qquad
M(\lambda) = \sum_{n=q_1}^{q_2} M_n(\lambda-\lambda_0)^n + o\big((\lambda-\lambda_0)^{q_2}\big),
\]
for some integers p1, p2, q1, q2, with p1 ≤ p2 and q1 ≤ q2, and certain finite-rank operators L_i and M_j for p1 ≤ i ≤ min{p2, −1} and q1 ≤ j ≤ min{q2, −1}. Then, for every p1 + q1 ≤ n ≤ min{p1 + q2, p2 + q1, −1}, the operators
\[
c(LM,\lambda_0,n) \in L(V) \quad\text{and}\quad c(ML,\lambda_0,n) \in L(U)
\]
have finite rank and
\[
\operatorname{tr} c(LM,\lambda_0,n) = \operatorname{tr} c(ML,\lambda_0,n).
\]

8. Let U, V, W be three K-Banach spaces, r, s ∈ N ∪ {∞}, and
\[
L \in C^r(\Omega, L(U,V)), \qquad M \in C^s(\Omega, L(W,V)),
\]
be such that L0 ∈ Fred0(U, V) and M0 ∈ Fred0(W, V). Suppose λ0 ∈ Algk(L) ∩ Algℓ(M) for some integers k, ℓ ≥ 1 satisfying 2k ≤ r + 1 and 2ℓ ≤ s + 1. Prove the following:

• λ0 ∈ Algp(LM) for some integer max{k, ℓ} ≤ p ≤ k + ℓ, and LM ∈ C^t for some t ≥ min{r, s}.

• Suppose 2p ≤ t + 1 in order to prove that, for each −k − ℓ ≤ n ≤ −1, the operators c((LM)′(LM)⁻¹, λ0, n), c(L′L⁻¹, λ0, n) and c(M′M⁻¹, λ0, n) have finite rank and
\[
\operatorname{tr} c\big((LM)'(LM)^{-1},\lambda_0,n\big) = \operatorname{tr} c(L'L^{-1},\lambda_0,n) + \operatorname{tr} c(M'M^{-1},\lambda_0,n).
\]

• Show that, for n = −1, the previous identity provides us with the product formula. Note that this is a generalization of Proposition 9.2.4.

9. Prove that Theorem 9.3.2 also holds for families of class C^{2ν−1} regularity. State this formally and prove it.

10. Let r ∈ N ∪ {∞} and L, M ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Fred0(U, V) and λ0 ∈ Algk(L) for some integer k ≥ 1 with r ≥ 2k − 1. Adapt the proof of Proposition 7.10.3 in order to show that if
\[
L_n = M_n, \qquad 0 \le n \le 2k-1,
\]
then
\[
L(\lambda)^{-1} - M(\lambda)^{-1} = o\big((\lambda-\lambda_0)^{-1}\big).
\]
11. Let r ∈ N∗ ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Fred0(U, V) and λ0 ∈ Algν(L) for some integer ν ≥ 1 with 2ν ≤ r, and, for every n ∈ {−ν, . . . , −1}, set (L⁻¹)_n := c(L⁻¹, λ0, n) and
\[
S_n := \operatorname{trng}\{(L^{-1})_{-\nu},(L^{-1})_{-\nu+1},\dots,(L^{-1})_n\} \in L\big(V^{n+\nu+1},U^{n+\nu+1}\big).
\]
Prove the following assertions:

• N[L0] = {u ∈ U : col[0, . . . , 0, u] ∈ R[S_{−1}]}.

• For every u ∈ N[L0] \ {0},
\[
\operatorname{rank} u = \max\big\{s \in \{1,\dots,\nu\} : \operatorname{col}[0,\dots,0,u] \in R[S_{-s}]\big\}.
\]

• Let u ∈ N[L0] \ {0} satisfy rank u = s and S_{−s} col[x_s, . . . , x_k] = col[0, . . . , 0, u]. Set
\[
u_0 = u, \qquad u_j = \sum_{n=s}^{\nu} (L^{-1})_{j-n}\, x_n, \qquad 1 \le j \le s-1.
\]
Show that (u_0, . . . , u_{s−1}) is a Jordan chain of L at λ0.

12. Let r ∈ N ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Φ0(U, V) and λ0 ∈ Algν(L) for some integer ν ≥ 1 with 2ν ≤ r + 1. Prove the following assertions:

• The operator Q defined by
\[
Q :=
\begin{pmatrix}
L_\nu & \cdots & L_1 \\
\vdots & \ddots & \vdots \\
L_{2\nu-1} & \cdots & L_\nu
\end{pmatrix}
\begin{pmatrix}
(L^{-1})_{-\nu} & & \\
\vdots & \ddots & \\
(L^{-1})_{-1} & \cdots & (L^{-1})_{-\nu}
\end{pmatrix} \in L(V^\nu)
\]
is a projection.

• The mapping
\[
R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\}\big] \to R[Q], \qquad
x \mapsto
\begin{pmatrix}
L_\nu & \cdots & L_1 \\
\vdots & \ddots & \vdots \\
L_{2\nu-1} & \cdots & L_\nu
\end{pmatrix} x,
\]
is an isomorphism whose inverse is given by the restriction of trng{(L⁻¹)_{−ν}, . . . , (L⁻¹)_{−1}} from R[Q] to R[trng{(L⁻¹)_{−ν}, . . . , (L⁻¹)_{−1}}].
13. Let r ∈ N∗ ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Fred0(U, V) and λ0 ∈ Algν(L) for some integer ν ≥ 1 with 2ν ≤ r + 1, and set
\[
(L^{-1})_i := c(L^{-1},\lambda_0,i), \qquad i \in \{-\nu,\dots,-1\},
\]
\[
A := \operatorname{trng}\{L_0,\dots,L_{\nu-1}\}, \qquad B := \operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{-1}\},
\]
and
\[
C :=
\begin{pmatrix}
L_\nu & \cdots & L_1 \\
\vdots & \ddots & \vdots \\
L_{2\nu-1} & \cdots & L_\nu
\end{pmatrix}, \qquad
D :=
\begin{pmatrix}
(L^{-1})_0 & \cdots & (L^{-1})_{-\nu+1} \\
\vdots & \ddots & \vdots \\
(L^{-1})_{\nu-1} & \cdots & (L^{-1})_0
\end{pmatrix}.
\]
Use Exercise 12 to prove the following:

• The next identities are satisfied:
\[
\chi[L;\lambda_0] = \dim N[A] = \dim N[DA] = \dim N[AD] = \operatorname{rank} B = \operatorname{rank}(BC) = \operatorname{rank}(CB).
\]

• The next identities are satisfied:
\[
\operatorname{tr}\Big(\sum_{n=1}^{\nu} n\,L_n (L^{-1})_{-n}\Big) = \operatorname{tr} c(L'L^{-1},\lambda_0,-1), \qquad
\operatorname{tr}\Big(\sum_{n=1}^{\nu} n\,(L^{-1})_{-n} L_n\Big) = \operatorname{tr} c(L^{-1}L',\lambda_0,-1).
\]

• Infer, from the previous items, the validity of Theorem 9.3.2.

14. Let r ∈ N∗ ∪ {∞} and L ∈ C^r(Ω, L(U, V)) be such that L0 ∈ Fred0(U, V) and λ0 ∈ Algν(L) for some integer ν ≥ 1 with 2ν ≤ r + 1. Let ℓ1, . . . , ℓν ≥ 0 be a series of integer numbers such that the partial κ-multiplicities of L at λ0 are given by (7.23). Set
\[
(L^{-1})_n := c(L^{-1},\lambda_0,n), \qquad -\nu \le n \le -1,
\]
and
\[
r_s := \dim R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{s}\}\big], \qquad -\nu \le s \le -1.
\]
Do the following:

• Prove that each of the spaces
\[
R\big[\operatorname{trng}\{(L^{-1})_{-\nu},\dots,(L^{-1})_{s}\}\big], \qquad -\nu \le s \le -1,
\]
is finite dimensional.

• Use the Smith form at λ0 to get the identity
\[
\operatorname{col}[r_{-1},\dots,r_{-\nu}] = \operatorname{trng}\{1,\dots,\nu\}\,\operatorname{col}[\ell_\nu,\dots,\ell_1].
\]

15. This exercise completes Theorem 7.10.4. Consider r ∈ N ∪ {∞, ω}, four K-Banach spaces U1, U2, V1 and V2, and two families, L ∈ S^r_{λ0}(U1, V1) and M ∈ S^r_{λ0}(U2, V2), such that λ0 is an algebraic eigenvalue of both L and M of (possibly different) order less than or equal to k, for some integer k ≥ 1 with 2k ≤ r + 1. Set
\[
(L^{-1})_i := c(L^{-1},\lambda_0,i), \qquad (M^{-1})_i := c(M^{-1},\lambda_0,i), \qquad -k \le i \le -1.
\]
Prove the equivalence of the following assertions:

• L is C^{r−k}-equivalent to M at λ0.

• For all −k ≤ s ≤ −1,
\[
\dim R\big[\operatorname{trng}\{(L^{-1})_{-k},\dots,(L^{-1})_{s}\}\big] = \dim R\big[\operatorname{trng}\{(M^{-1})_{-k},\dots,(M^{-1})_{s}\}\big].
\]

• For each 1 ≤ j ≤ k there exist E_j ∈ L(U2, U1) and F_j ∈ L(V1, V2) such that E1 and F1 are isomorphisms and
\[
\operatorname{trng}\{(L^{-1})_{-k},\dots,(L^{-1})_{-1}\} = \operatorname{trng}\{E_1,\dots,E_k\}\cdot \operatorname{trng}\{(M^{-1})_{-k},\dots,(M^{-1})_{-1}\}\cdot \operatorname{trng}\{F_1,\dots,F_k\}.
\]

• There exist E1, . . . , E_k ∈ L(U2, U1) such that E1 is an isomorphism and
\[
R\big[\operatorname{trng}\{(L^{-1})_{-k},\dots,(L^{-1})_{-1}\}\big] = \operatorname{trng}\{E_1,\dots,E_k\}\,R\big[\operatorname{trng}\{(M^{-1})_{-k},\dots,(M^{-1})_{-1}\}\big].
\]
9.7 Comments on Chapter 9

The logarithmic residue theorem giving rise to (9.1), which has been formally established by Theorem 9.4.1, is a classic result attributable to Gohberg & Sigal [54], although in the most pioneering versions of this result the multiplicity χ[L; λ0] could not appear, of course. The main result of the chapter, established by Theorem 9.3.2, comes from López-Gómez & Mora-Corral [90], as well as most of the results of Sections 9.1, 9.2 and 9.3. The corollaries of Section 9.4 are due to Markus & Sigal [95] and Gohberg & Sigal [54], and the construction of the spectral projector in Section 9.5 was carried out by Magnus & Mora-Corral [94]. Nevertheless, Corollary 9.1.3 might be considered to be a classic result attributable to Markus & Sigal [95] (see Gohberg, Goldberg & Kaashoek [43], and Ramm [113], for a number of different approaches). Far beyond the concept of trace introduced in Section 9.2, there are fairly general concepts of trace (see, for example, the review paper of Gohberg & Kreĭn [45] and the books of Kato [65], and Gohberg, Goldberg & Kaashoek [43], as well as the references therein); however, those generalities do not add value to the analysis of this chapter. The result of Exercise 11 is due to Gohberg [42] in the meromorphic case. Exercises 12 and 13 are based on results by Magnus & Mora-Corral [94]. They use the asymptotic developments of L⁻¹ to obtain multiplicity formulae involving certain kernels and ranges of the underlying operator coefficients. Exercise 15 completes the equivalence theory already studied in Section 7.10 by means of the new devices introduced in this chapter. Establishing interrelations between the Laurent expansion of L⁻¹ and the left and right Jordan chains of L at λ0 has been shown to be extremely fruitful
in spectral theory (see Gohberg, Lancaster & Rodman [50]). These ideas go back to Keldyš [66, 67]. Exercise 11 provides a generalization of some classic results in the context of matrix polynomials. In Exercises 11–13 a series of Toeplitz matrices appear. This fact is attributable to the intrinsic nature of the underlying theory, as Toeplitz matrices arise naturally when developing formal products. The interested reader is referred to Magnus & Mora-Corral [94] for a general perspective on Toeplitz and Hankel matrices in the context of the algebraic multiplicity, and to Putnam [109] or Peller [106] for generally accessible approaches to these matrices. Exercise 4 proposes the elaboration of an alternative proof of Theorem 9.3.2, on this occasion through the local Smith form studied in Chapter 7. The real non-analytic theory developed in this chapter actually makes rigorous a large number of results obtained independently in mathematical physics by means of formal asymptotic developments (see Helffer [58], Hislop & Sigal [60], Ramm [113], and the list of references therein).
Chapter 10
The Spectral Theorem for Matrix Polynomials

This chapter studies the polynomial families of the form
\[
L(\lambda) = \sum_{j=0}^{\ell} A_j\lambda^j, \qquad \lambda \in \mathbb{C}, \tag{10.1}
\]
where ℓ ∈ N and
\[
A_0,\dots,A_\ell \in M_N(\mathbb{C}), \qquad A_\ell \ne 0,
\]
for some N ∈ N∗. These families are called matrix polynomials in most of the available literature. More precisely, the family L defined in (10.1) is said to be a matrix polynomial of order N and degree ℓ. The main goal of this chapter is to obtain a spectral theorem for matrix polynomials, respecting the spirit of the Jordan Theorem 1.2.1. Generically, the spectrum of the family L consists of Nℓ (different) eigenvalues and, hence, we cannot expect a decomposition of C^N into the direct sum of all underlying generalized eigenvectors. To state the main result of this chapter, suppose
\[
\Sigma(L) = \{\lambda_1,\dots,\lambda_p\}, \qquad \lambda_i \ne \lambda_j, \quad 1 \le i < j \le p,
\]
and, for each 1 ≤ j ≤ p, let νj := ν(λj) denote the order of λj as an algebraic eigenvalue of L. Then, according to Theorem 7.6.2,
\[
\chi[L;\lambda_j] = \dim N\big[L_{[\nu_j-1,j]}\big], \qquad 1 \le j \le p,
\]
where
\[
L_{[\nu_j-1,j]} := \operatorname{trng}\{L_{0,j},\dots,L_{\nu_j-1,j}\}, \qquad
L_{i,j} = \frac{1}{i!}\,L^{(i)}(\lambda_j), \qquad i \ge 0, \quad 1 \le j \le p.
\]
Set
\[
C :=
\begin{pmatrix}
0 & I_{\mathbb{C}^N} & & & \\
 & 0 & I_{\mathbb{C}^N} & & \\
 & & \ddots & \ddots & \\
 & & & 0 & I_{\mathbb{C}^N} \\
-A_0 & -A_1 & \cdots & \cdots & -A_{\ell-1}
\end{pmatrix} \in M_{N\ell}(\mathbb{C});
\]
the main theorem of this chapter establishes the existence of p isomorphisms
\[
J_j \in \operatorname{Iso}\big(N[L_{[\nu_j-1,j]}],\, N[(\lambda_j I_{\mathbb{C}^{N\ell}} - C)^{\nu_j}]\big), \qquad 1 \le j \le p,
\]
such that
\[
\mathbb{C}^{N\ell} = \bigoplus_{j=1}^{p} J_j\big(N[L_{[\nu_j-1,j]}]\big) = \bigoplus_{j=1}^{p} N[(\lambda_j I_{\mathbb{C}^{N\ell}} - C)^{\nu_j}],
\]
10.1 Linearization of a matrix polynomial Let N ∈ N∗ and ∈ N; given A0 , . . . , A ∈ MN (C),
A = 0,
the map L ∈ H(C, MN (C)) defined by (10.1) is said to be a matrix polynomial of order N and degree . Obviously, matrix polynomials belong to P[MN (C)]. The matrix polynomial L defined by (10.1) is said to be monic when A = ICN . According to the notation and concepts introduced in the presentation of Part II, the spectrum of L, denoted by Σ(L), is given by Σ(L) = {λ ∈ C : det L(λ) = 0}. Note that det L is a complex-valued polynomial. If L is monic, then det L is a monic complex-valued polynomial of degree N and, therefore, according to Theorem 8.2.1, Σ(L) ⊂ Alg(L). From now on, all vectors are regarded as column vectors, and we will use the notation trng introduced in (2.24) and (4.34). In addition, square matrices of order N will often be decomposed in square blocks of order N , and, for simplifying notation, for every integer m ≥ 1, the identity operator of Cm will be denoted by Im . The next result provides us with a global linearization of a matrix polynomial, in the sense that there exist a Banach space U , two entire operator-valued functions E, F ∈ H(C, L(V )), where V := CN ⊕ U , and an operator C ∈ L(V ) such that E(C), F(C) ⊂ Iso(V ) and L(λ) ⊕ IU = E(λ)(λIV − C)F(λ),
λ ∈ C.
In such a case, the space V is called the phase space of L. Lemma 10.1.1. Let N, ≥ 1 be integers, and L ∈ P[MN (C)] a monic matrix polynomial of the form (10.1), for some A0 , . . . , A ∈ MN (C) with A = IN . Consider the matrix ⎞ ⎛ 0 IN ⎟ ⎜ 0 IN ⎟ ⎜ ⎟ ⎜ . . ⎟ ∈ MN (C), ⎜ . . C := ⎜ . . ⎟ ⎟ ⎜ ⎝ 0 IN ⎠ −A0
−A1
···
−A−1
and, for each 1 ≤ j ≤ − 1, the matrix polynomials ej ∈ P[MN (C)] and E, F ∈ P[MN (C)], defined, for every λ ∈ C, by ej (λ) :=
−j
Aj+i λi ,
1 ≤ j ≤ − 1,
i=0
F(λ) = trng{IN , λIN , . . . , λ−1 IN },
and
⎛1 e (λ) ⎜ −I ⎜ N ⎜ ⎜ E(λ) = ⎜ ⎜ ⎜ ⎝
e2 (λ) 0
···
−IN
..
.
..
.
e−1 (λ)
IN
0 −IN
0
⎞ ⎟ ⎟ ⎟ ⎟ ⎟. ⎟ ⎟ ⎠
Then, det E and det F are non-zero constant polynomials, and the following global factorization is satisfied: diag{L(λ), IN (−1) } = E(λ) (λIN − C) F(λ),
λ ∈ C.
(10.2)
Proof. It is immediate to see that det F is the constant polynomial 1, and det E is the constant polynomial 1 or −1. Identity (10.2) follows easily by a direct calculation.
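The factorization (10.2) is also easy to probe numerically. The sketch below (our own toy example; the coefficients A0, A1 are arbitrary choices) builds the companion-type matrix C for a monic matrix polynomial of order N = 2 and degree ℓ = 2, and checks that det L(λ) = det(λI − C), so that Σ(L) = σ(C).

```python
import numpy as np

N, ell = 2, 2
A0 = np.array([[2., 1.], [0., 3.]])
A1 = np.array([[0., 1.], [1., 0.]])
I = np.eye(N)

def L(lam):
    return A0 + lam * A1 + lam ** 2 * I          # L(lam) = A0 + A1*lam + I*lam^2

# Companion-type matrix C of size N*ell, as in Lemma 10.1.1.
C = np.block([[np.zeros((N, N)), I],
              [-A0,              -A1]])

# det L(lam) and det(lam*I - C) agree at arbitrary sample points.
for lam in (0.3, -1.7, 2.0 + 0.5j):
    print(np.isclose(np.linalg.det(L(lam)), np.linalg.det(lam * np.eye(N * ell) - C)))

# In particular, the eigenvalues of C are exactly the roots of det L.
print(np.sort_complex(np.linalg.eigvals(C)))
print(np.allclose([abs(np.linalg.det(L(mu))) for mu in np.linalg.eigvals(C)], 0, atol=1e-8))
```

This is the sense in which C is a global linearization: the N ℓ eigenvalues of the classical family λI − C carry all the spectral information of L.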
10.2 Generalized Jordan theorem The following result collects some basic consequences of the factorization (10.2). Proposition 10.2.1. Let L ∈ P[MN (C)] be a monic matrix polynomial of degree ≥ 1, and set M := diag{L, IN (−1) } ∈ P[MN (C)]. Let E, F ∈ P[MN (C)] and C ∈ MN (C) be the maps given by Lemma 10.1.1. Then, for every λ0 ∈ Σ(L) and s ∈ N, the following three maps are linear isomorphisms: ϕ1 : N [L[s] ] → N [M[s] ] col[x0 , x1 , . . . , xs ] → col[x0 , 0, x1 , 0, . . . , xs , 0], ϕ2 : N [M[s] ] → N [LC [s] ] y → F[s] y, C s+1 ] = N [(λ0 IN − C)s+1 ] ϕ3 : N [LC [s] ] → N [(L0 )
col[y0 , y1 , . . . , ys ] → ys , where we have denoted 0 = (0, . . . , 0) ∈ CN (−1) ,
LC (λ) := λIN − C,
λ ∈ C.
Consequently, s+1 ] . Is+1,λ0 := ϕ3 ◦ ϕ2 ◦ ϕ1 ∈ Iso N [L[s] ], N [(LC 0)
(10.3)
Proof. According to Lemma 10.1.1, M = ELC F.
(10.4)
Taking into account M0 = diag{L0 , IN (−1) },
Mj = diag{Lj , 0},
j ≥ 1,
one can see easily that ϕ1 is a linear isomorphism. By the Leibniz rule, we find from (10.4) that, for every s ≥ 0 and λ0 ∈ Σ(L), M[s] = E[s] LC [s] F[s] . Moreover, E[s] and F[s] are isomorphisms, because E0 and F0 are invertible matrices. Consequently, ϕ2 must be a linear isomorphism. The fact that ϕ3 is an isomorphism was proved by Proposition 8.3.1. This concludes the proof. Now we fix a monic matrix polynomial L ∈ P[MN (C)] of degree ≥ 1. Then, det L is a monic complex-valued polynomial of degree N . Thus, p det L(λ) = (λ − λj )mj , λ ∈ C, j=1
for some integers m1 , . . . , mp ≥ 1 and complex numbers λ1 , . . . λp such that λi = λj if 1 ≤ i < j ≤ p. Necessarily, p mj N = j=1
and, according to Lemma 10.1.1, we have that {λ1 , . . . , λp } = Σ(L) = Σ(diag{L, IN (−1) }) = Σ(LC ) = σ(C). In the sequel, much like in Theorem 1.2.1, we will work simultaneously with all points of Σ(L), and, so, we ought to refine the notation used previously throughout this book by setting Ni,j =
1 i) N (λj ), i!
i ≥ 0,
1 ≤ j ≤ p,
and N[s,j] := trng{N0,j , . . . , Ns,j },
s ≥ 0,
1 ≤ j ≤ p,
for every complex Banach space U and N ∈ H(C, L(U )). Furthermore, the notation introduced by (10.3) will be simplified by simply setting Is,j := Is,λj ,
1 ≤ j ≤ p,
s ≥ 0.
There is no need to emphasize the dependence on L, because L is fixed throughout the remaining of this chapter. The following generalized version of Theorem 1.2.1 is satisfied. Theorem 10.2.2 (Generalized Jordan theorem). Let L ∈ P[MN (C)] be a monic matrix polynomial of degree ≥ 1 such that Σ(L) = {λ1 , . . . , λp },
λi = λj ,
1 ≤ i < j ≤ p,
(10.5)
and, for each 1 ≤ j ≤ p, let νj denote the order of λj as an algebraic eigenvalue of L. Then, p CN = Iνj ,j N [L[νj −1,j] ] . (10.6) j=1
In particular, by Theorem 7.6.2 and, according to Definition 8.4.1, we have that N =
p
dim N [L[νj −1,j] ] =
j=1
p
κ[L; λj ] = m[L; C].
j=1
Proof. Let C be the matrix given by Lemma 10.1.1. According to (10.2), we have that Σ(L) = σ(C) and, actually, thanks to Theorems 8.3.2 and 7.3.1, for each 1 ≤ j ≤ p, the integer νj equals the algebraic ascent of λj as an eigenvalue of C. Thus, by Theorem 1.2.1 applied to the matrix C, we find that C
N
=
p
N [(λj IN − C)νj ].
j=1
Finally, owing to Proposition 10.2.1, Iνj ,j N [L[νj −1,j] ] = N [(λj IN − C)νj ],
1 ≤ j ≤ p,
which ends the proof. Remark 10.2.3. Suppose = 1. Then, E(λ) = F(λ) = IN ,
L(λ) = λIN + A0 ,
C = −A0 ,
λ ∈ C,
and, hence, (10.6) becomes C
N
=
p
N [(λj IN + A0 )νj ].
j=1
Consequently, Theorem 10.2.2 indeed provides us with Theorem 1.2.1.
10.3 Remarks on scalar polynomials Suppose N = 1 and ≥ 1. Then Aj ∈ C for all 0 ≤ j ≤ , and L is a monic scalar-valued polynomial. In such a case, (10.6) becomes C =
p
N [(λj I − C)νj ],
(10.7)
j=1
where
⎛ ⎜ ⎜ ⎜ C =⎜ ⎜ ⎜ ⎝
0
1 0
⎞ 1 .. .
..
.
0 −A0
−A1
···
1 −A−1
⎟ ⎟ ⎟ ⎟ ∈ M (C). ⎟ ⎟ ⎠
(10.8)
By Lemma 10.1.1, the set of roots of L equals the set of roots of the polynomial λ ↦ det(λI_ℓ − C). Thanks to (10.7), there exists j ∈ {1, . . . , p} such that N[(λj I_ℓ − C)^{νj}] ≠ {0}. Consequently, det(λj I_ℓ − C) = 0 and we conclude that L(λj) = 0. Therefore, Theorem 10.2.2 indeed provides us with the fundamental theorem of algebra. Now, we consider the linear homogeneous differential equation of order ℓ whose symbol is given by L, that is to say,
\[
\sum_{j=0}^{\ell} A_j D^j u = 0, \qquad D^j u = u^{(j)}, \quad 0 \le j \le \ell,
\]
which can be briefly expressed in the form
\[
L(D)u = 0. \tag{10.9}
\]
By well-known results in the theory of ordinary differential equations, the set of entire solutions u ∈ H(C) of equation (10.9), denoted by He , is an -dimensional linear subspace of H(C). Actually, a basis of He is {pi,j : 0 ≤ i ≤ νj − 1, 1 ≤ j ≤ p},
where, by definition, z ∈ C,
pi,j (z) := z i eλj z ,
0 ≤ i ≤ νj − 1,
1 ≤ j ≤ p.
Moreover, the map H(C) → H(C, C ) u → col[u, Du, . . . , D−1 u] is a linear isomorphism between He and the linear subspace Hs of H(C, C ) formed by all entire solutions of the first-order homogeneous linear system U = CU,
U = col[u0 , . . . , u−1 ],
where C is the matrix defined in (10.8). Further, the column vector-valued functions of the exponential matrix given by z → ezC ,
z ∈ C,
form a basis of Hs . These properties of the matrix C are paradigmatic in the theory of linear differential equations and systems.
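A quick numerical sketch of the scalar case (our own example, with an arbitrary polynomial) may be helpful: the eigenvalues of the companion matrix (10.8) are the roots of L, and each root λj of order νj contributes the solutions z^i e^{λj z}, 0 ≤ i ≤ νj − 1, of L(D)u = 0.

```python
import numpy as np

# Example: L(lambda) = (lambda - 1)^2 (lambda + 2) = lambda^3 - 3*lambda + 2.
coeffs = np.array([2., -3., 0.])           # A0, A1, A2 (the leading coefficient is 1)
ell = len(coeffs)

C = np.zeros((ell, ell))
C[:-1, 1:] = np.eye(ell - 1)               # shifted identity blocks of (10.8)
C[-1, :] = -coeffs                         # last row: -A0, -A1, -A2

print(np.round(np.sort_complex(np.linalg.eigvals(C)), 4))   # approx [-2, 1, 1]

# Correspondingly, e^{-2z}, e^{z} and z*e^{z} span the entire solutions of L(D)u = 0,
# three independent solutions, matching ell = 3.
```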
10.4 Constructing a basis in the phase space This section adopts another perspective for expressing Theorem 10.2.2. It will be shown how the canonical sets of Jordan chains of L at each of its eigenvalues can be used to construct an appropriate basis in the phase space CN , in the spirit of the construction of a Jordan basis for an operator A that allows us to express it through its Jordan canonical form. The following preparatory lemma provides an explicit formula for each of the isomorphisms introduced by (10.3). Lemma 10.4.1. Let L ∈ P[MN (C)] be a monic matrix polynomial of degree ≥ 1 satisfying (10.5). For s ∈ N and j ∈ {1, . . . , p}, consider col[x0 , . . . , xs ] ∈ N [L[s,j] ]. Set col[y0 , . . . , y−1 ] := Is+1,j col[x0 , . . . , xs ] ∈ (CN ) . Then, yk =
s i=max{0,s−k}
k λi−s+k xi , i−s+k j
0 ≤ k ≤ − 1.
(10.10)
Proof. Adopting the convention a =0 b
257
if b > a
or b < 0,
a a for every a, b ∈ Z, = a−b b
we still have
and, in addition, (10.10) can be written in the more appropriate form s k xi , 0 ≤ k ≤ − 1. λi−s+k yk = j i − s + k i=0
(10.11)
(10.12)
Fix s ≥ 0 and 1 ≤ j ≤ p, and let col[x0 , . . . , xs ] ∈ N [L[s,j] ]. Then, according to the definition of Is+1,j , Is+1,j col[x0 , . . . , xs ] =
s
Fi,j col[xs−i , 0].
(10.13)
i=0
To prove (10.12) we will proceed by induction on s ∈ N. Suppose s = 0. Then, according to (10.13), we have that IN } col[x0 , 0] I1,j x0 = F0,j col[x0 , 0] = trng{IN , λj IN , . . . , λ−1 j = col[λkj x0 ,
0 ≤ k ≤ − 1].
Here and in the rest of the chapter we use the following notation: if q ∈ N and for each 0 ≤ n ≤ q there is a vector un in a Banach space Un , then we define col[un ,
0 ≤ n ≤ q] := col[u0 , . . . , uq ].
The previous calculation shows that yk = λkj x0 ,
0 ≤ k ≤ − 1,
which coincides with the realization of (10.12) at s = 0. Therefore, the result holds for s = 0. Assume that (10.12) is satisfied for some particular s ≥ 0 and consider col[y0 , . . . , y−1 ] := Is+2,j col[x0 , . . . , xs , xs+1 ]. Then, taking into account (10.13) and rearranging terms, we can infer that col[y0 , . . . , y−1 ] =
s+1 i=0
Fi,j col[xs+1−i , 0] =
s
Fi,j col[xs+1−i , 0] + Fs+1,j col[x0 , 0]
i=0
= Is+1,j col[x1 , . . . , xs , xs+1 ] + Fs+1,j col[x0 , 0].
According to the induction hypothesis, we have that Is+1,j col[x1 , . . . , xs+1 ] s k i−s+k = col xi+1 , 0 ≤ k ≤ − 1 λ i−s+k j i=0 s+1 k h−(s+1)+k = col xh , 0 ≤ k ≤ − 1 . λ h − (s + 1) + k j h=1
On the other hand, differentiating F gives 0 0 1 0−(s+1) 1−(s+1) Fs+1,j = trng λj IN , λ IN , s+1 s+1 j 1 − 1 −1−(s+1) ..., λj IN s+1 and, consequently, .
/ k k−(s+1) x0 , 0 ≤ k ≤ − 1 Fs+1,j col[x0 , 0] = col λ s+1 j . / k k−(s+1) = col λ x0 , 0 ≤ k ≤ − 1 , k − (s + 1) j where we have used (10.11). Therefore, adding up these two column vectors shows that indeed col[y0 , . . . , y−1 ] = Is+2,j col[x0 , . . . , xs , xs+1 ] s+1 k h−(s+1)+k λ xh , 0 ≤ k ≤ − 1 , = col h − (s + 1) + k j h=0
which concludes the proof.
Now, we will invoke Proposition 7.7.1 in order to construct a basis of the subspace N [L[νj −1,j] ] for every j ∈ {1, . . . , p}. Thanks to Proposition 10.2.1, Theorem 10.2.2, and Lemma 10.4.1, these bases will provide us with a basis of the whole phase space CN , thus respecting the spirit of the construction of a Jordan basis to obtain the associated canonical form. Naturally, for each 1 ≤ j ≤ p, the basis of N [L[νj −1,j] ] comes from a canonical set of Jordan chains, which will be labeled according to Section 7.2. Setting rj := dim N [L0,j ],
1 ≤ j ≤ p,
κji := κ[L; λj ],
1 ≤ i ≤ rj ,
1 ≤ j ≤ p,
we have νj = κj1 ≥ · · · ≥ κjrj ≥ 1,
1 ≤ j ≤ p,
and, hence, owing to Definition 7.2.5 and Theorem 7.6.2, κ[L; λj ] =
rj
1 ≤ j ≤ p.
κji = dim N [L[νj −1,j] ],
i=1
Now, for each 1 ≤ j ≤ p, let xj0,1 , . . . , xjκj1 −1,1 ; · · · ; xj0,rj , . . . , xjκjr
j
−1,rj
,
(10.14)
denote a canonical set of Jordan chains of L at λj , as in (7.5). Then, thanks to Proposition 7.7.1, for every 1 ≤ j ≤ p, the set rj ! #
" col 0, xj0,i , col 0, xj0,i , xj1,i , . . . , col 0, xj0,i , . . . , xjκji −1,i
i=1
is a basis of N [L[νj −1,j] ]. Thus, due to Proposition 10.2.1 and Lemma 10.4.1, for every 1 ≤ j ≤ p, the set of column vectors B j :=
rj ! #
j col xj0,i , λj xj0,i , λ2j xj0,i , . . . , λ−1 x j 0,i ,
i=1
j −2 j col xj1,i , λj xj1,i +xj0,i , λ2j xj1,i +2λj xj0,i , . . . , λ−1 x +( −1)λ x 1,i 0,i , j j ··· , min{,κji } " −1 −a j j j j col xκji−1,i , λj xκji−1,i +xκji−2,i , . . . , λj xκji−a,i −a a=1 νj provides us with a basis of N [(LC 0,j ) ]. Therefore, by Theorem 1.2.1,
B :=
p #
Bj
(10.15)
j=1
is a basis of the phase space CN . The next result characterizes the matrix whose columns consist of the coordinates of the vectors of the basis B. Given two arbitrary integers m, n ≥ 1, the set Mm×n (C) stands for the space of rectangular matrices with m rows and n columns with complex entries. Moreover, for each 1 ≤ j ≤ p, we denote mj := κ[L; λj ] = dim N [L[νj −1,j] ], and consider the matrix, denoted by Xj , whose columns consist of the coordinates of all vectors of the canonical set of Jordan chains (10.14) Xj := [xj0,1 , . . . , xjκj1 −1,1 , . . . , xj0,rj , . . . , xjκjr
j
−1,rj ]
∈ MN ×mj (C).
260
Chapter 10. The Spectral Theorem for Matrix Polynomials
Then, we can construct the matrix formed by all these canonical sets of Jordan chains X := [X1 , . . . , Xp ] ∈ MN ×N (C). We also consider T T Jj := diag{Jj,κ , . . . , Jj,κ } ∈ Mmj (C), j1 jr j
where for each 1 ≤ j ≤ p, ⎛ λj 1 0 ⎜ ⎜ 0 λj 1 ⎜ ⎜ 0 0 λj ⎜ T Jj,κji = ⎜ . .. .. ⎜ .. . . ⎜ ⎜ ⎝0 0 0 0 0 0
··· ··· ··· .. .
0 0 0 .. . · · · λj ··· 0
⎞ 0 ⎟ 0⎟ ⎟ 0⎟ ⎟ ∈ Mκji (C), .. ⎟ .⎟ ⎟ ⎟ 1⎠ λj
1 ≤ j ≤ p,
1 ≤ i ≤ rj ,
as well as the matrix J := diag{J1 , . . . , Jp } ∈ MN (C). Then, the following result is satisfied. Lemma 10.4.2. The matrix whose columns are the vectors of the basis (10.15) is given through B := col[X, XJ, XJ 2, . . . , XJ −1 ] ∈ MN (C).
(10.16)
Proof. First, note that J n = diag{J1n , . . . , Jpn },
n ≥ 0,
and, hence, XJ n = [X1 J1n , X2 J2n , . . . , Xp Jpn ],
0 ≤ n ≤ − 1.
Consequently, we have to calculate the columns of the set of matrices Xj Jjn : 1 ≤ j ≤ p, 0 ≤ n ≤ − 1 . To accomplish this task, we begin by calculating the powers of a matrix A ∈ Mh (C) of the form A = trng{λ, 1, 0, . . . , 0} = λIh + N,
N := trng{0, 1, 0, . . . , 0},
for some integer h ≥ 1 and λ ∈ C. According to the Newton binomial formula, for every integer n ≥ 0, n n n−i i n λ N . A = i i=0
Moreover, ⎧ ⎪ . . . , 0, 1, 0, . . . , 0}, ⎨ trng{0, i N = i h−i−1 ⎪ ⎩ 0,
0 ≤ i ≤ h − 1, i ≥ h.
Therefore, ⎧ ⎪ ⎨
⎫ ⎪ ⎬ n n−1 n n n A = trng λ , ,..., λ λ, 1, 0, · · · , 0 , ⎪ ⎪ 1 n−1 ⎩ ⎭ h−n−1
whenever 1 ≤ n ≤ h − 1, whereas 0 1 n n−1 n An = trng λn , ,..., λ λn−h+1 1 h−1 if n ≥ h. Note also that (AT )n = (An )T ,
n ≥ 0.
With all these features in mind, we can fix j ∈ {1, . . . , p} in order to find the expression of the columns of the matrices Xj Jjn ,
0 ≤ n ≤ − 1.
Now we set Xji := [xj0,i , . . . , xjκji −1,i ],
1 ≤ i ≤ rj .
Then, r
Xj = [Xj1 , . . . , Xj j ], and, hence,
r
T T )n , . . . , Xj j (Jj,κ )n ]. Xj Jjn = [Xj1 (Jj,κ j1 jr j
Fix i ∈ {1, . . . , rj }. Then, we have that j j j j j T Xji Jj,κ = λ x , x + λ x , . . . , x + λ x j 0,i j 1,i j κji −1,i , 0,i κji −2,i ji 2 2 T 2 j Xji Jj,κ = λ x , λj xj0,i + λ2j xj1,i , j 0,i ji 1 2 xj0,i + λj xj1,i + λ2j xj2,i , . . . , 1 2 xjκji −3,i + λj xjκji −2,i + λ2j xjκji −1,i , 1
and, in general, for each integer n ≥ 0, n n n−1 j T n j Xji Jj,κ = λ x , x0,i + λnj xj1,i , λ j 0,i ji 1 j n n−1 j n n−2 j x1,i + λnj xj2,i , λ λj x0,i + 1 j 2 κji −1 n n−κ +1+h j xh,i . ..., λj ji κji − 1 − h h=0
Therefore, the columns of the matrix B are given by the ordered set rj ! p # # j=1 i=1
j col xj0,i , λj xj0,i , λ2j xj0,i , . . . , λ−1 x j 0,i ,
2 col xj1,i , xj0,i + λj xj1,i , λj xj0,i + λ2j xj1,i , . . . , 1 − 1 −2 j j x λj x0,i + λ−1 j 1,i , 1 . . . , col xjκji −1,i , xjκji −2,i + λj xjκji −1,i , 2 λj xjκji −2,i + λ2j xjκji −1,i , . . . , xjκji −3,i + 1 κji −1 " −1 −κ +h λj ji xjh,i . κji − 1 − h h=0
Finally, note that this set of vectors coincide with the set of vectors forming the basis B. This concludes the proof.
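To see the construction of Lemma 10.4.2 at work in the simplest possible setting, here is a small numerical sketch (our own toy example, with Jordan chains written down by hand) for the scalar polynomial L(λ) = (λ − 1)²(λ − 2): the matrix B = col[X, XJ, XJ²] is invertible and its columns are Jordan chains of the companion matrix C.

```python
import numpy as np

ell = 3
C = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [2., -5., 4.]])              # companion matrix of lambda^3 - 4*lambda^2 + 5*lambda - 2

X = np.array([[1., 0., 1.]])               # chain (1, 0) at lambda = 1 and eigenvector 1 at lambda = 2
J = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 2.]])               # Jordan blocks matching the chain lengths

B = np.vstack([X @ np.linalg.matrix_power(J, m) for m in range(ell)])

print(abs(np.linalg.det(B)) > 1e-12)       # B is invertible: its columns are a basis of C^3
print(np.allclose(C @ B, B @ J))           # the columns of B are Jordan chains of C
```

The relation C B = B J is exactly the phase-space reformulation of the canonical sets of Jordan chains of L.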
10.5 Exercises 1.
Let L ∈ P[MN (C)]. Prove that either Σ(L) is finite or it equals C.
2.
Give an example of a polynomial matrix L of the form L(λ) = A0 + λA1 ,
λ ∈ C,
for some A0 , A1 ∈ MN (C) \ {0}, for which 0 ∈ σ(A1 ) and Σ(L) is a finite non-empty set. 3. Let L ∈ P[MN (C)] be a monic matrix polynomial satisfying (10.5). For each 1 ≤ j ≤ p let (10.14) be a canonical set of Jordan chains of L at λj . Prove that CN = span{xjk,i : 1 ≤ j ≤ p, 1 ≤ i ≤ rj , 0 ≤ k ≤ κji − 1}.
4. Let L ∈ P[MN (C)] be the matrix polynomial of degree ≥ 1 defined by (10.1) with A = IN . Prove that the set of matrix coefficients (A0 , A1 , . . . , A−1 ) ∈ (MN (C)) for which Σ(L) consists of N different eigenvalues is dense in (MN (C)) . Prove also that its complement has N 2 -dimensional Lebesgue measure zero. 5. Discuss the implications of Lemma 10.4.2 when Σ(L) possesses N different eigenvalues. 6. Generalize Theorem 10.2.2 and Lemma 10.4.2 to cover the case when A is invertible, not necessarily the identity, and give a proof of the corresponding result.
10.6 Comments on Chapter 10 The spectral theorem for matrix polynomials is attributable to Gohberg, Lancaster & Rodman [47]. Extensions of these results to operator polynomials are due to Gohberg, Lancaster & Rodman [49]. Extensions to analytic matrix functions are due to Gohberg & Rodman [52]. Basic texts in the theory of matrix polynomials are Gantmacher [39], Gel’fand [40], Lancaster [73], and Gohberg, Lancaster & Rodman [50]. The use of linearizations in the context of theory of matrices is classic (see the books by Gohberg, Lancaster & Rodman [50] and Rodman [114]). The first infinite-dimensional results on linearization are due to Gohberg, Kaashoek & Lay [44]; see also Boer [14], Mitiagin [97, 98], and Kaashoek, Mee & Rodman [64] for later results about explicit linearizations. Within the context of the theory of multiplicity, they have also been used by Arason & Magnus [5], and Magnus & Mora-Corral [94]. Two excellent books on the theory of matrix and operator polynomials are Gohberg, Lancaster & Rodman [50] and Rodman [114]. Their contents go much further than this chapter. Apart from the facts pointed out in Section 10.3, there are many other relationships between the theory of matrix polynomials and the theories of (linear) ordinary differential equations, partial differential equations and difference equations. The reader is sent to Gohberg, Lancaster & Rodman [50, 51], Rodman [114], Thijsse [123], and Wloka, Rowley & Lawruk [127] for further details. These results originated in Lancaster [74, 75]. In most of the available literature (as well as in the classic Jordan canonical form), instead of establishing Theorem 10.2.2, the construction of the basis for the phase space CN is accomplished by means of Lemma 10.4.2. So the spectral theorem is actually established through the invertibility of the matrix (10.16) (see for example, Gohberg, Lancaster & Rodman [50], and Wloka, Rowley & Lawruk [127]). Consequently, the statement of Theorem 10.2.2 in this book might be considered to be new. This change of methodology in the establishment of the spectral
theorem of this chapter is deeply inspired by Chapter 1, where Jordan Theorem 1.2.1 was established as the main spectral theorem, and, at a further stage, a Jordan basis was constructed. Nevertheless, in most of specialized literature, a great part of the effort is focused on the construction of an explicit basis, despite being unnecessary for ascertaining the inherent canonical form. This clearly illustrates that algebraic techniques are still quite imbued with the spirit of the invariant calculus of the nineteenth century, when papers used to contain hundred of pages constructing complete systems of generating invariants. Although Hilbert became world-famous for solving Gordan’s problem through an extremely simple, non-constructive, method, which originated one of the most controversial mathematical debates of that time span, yet in the twentieth-first century, the spirit of Hilbert seems to be far from being utterly adopted.
Chapter 11
Further Developments of the Algebraic Multiplicity This chapter exposes briefly some further developments of the theory of multiplicity whose treatment lies outside the general scope of this book. Consequently, it possesses an expository and bibliographic character. Section 11.1 deals with general Fredholm families, not necessarily of index zero. Section 11.2 analyzes meromorphic families. Section 11.3 discusses families of linear, not necessarily bounded, operators. Finally, Section 11.4 considers general families of linear, not necessarily Fredholm, operators.
11.1 General Fredholm operator families Following the pioneering work of Gohberg [41], Gohberg, Kaashoek & Lay [44] and Gohberg, Lancaster & Rodman [50] studied general Fredholm families of arbitrary index. In such a case, one can still construct Jordan chains and local Smith forms. Precisely, the existence of a local Smith form for analytic families of Fredholm operators can be stated as follows. Theorem 11.1.1. Let λ0 ∈ Ω and L ∈ C ω (Ω, L(U, V )) be such that λ0 ∈ Σ(L) and L0 ∈ Fred(U, V ). Then, there exist an open neighborhood Ω ⊂ Ω of λ0 , two families E ∈ C ω (Ω , L(V )) and F ∈ C ω (Ω , L(U )) with E0 ∈ Iso(V ) and F0 ∈ Iso(U ), two topological decompositions U = U0 ⊕ U1 ,
V = V0 ⊕ V1 ,
with dim U0 < ∞ and dim V0 < ∞, an isomorphism I ∈ Iso(U1 , V1 ), an integer n ≥ 1, and n unique κ1 , . . . , κn ∈ N, with κ1 ≥ · · · ≥ κn ≥ 1,
such that, for every λ ∈ Ω′,
\[
L(\lambda) = E(\lambda)\,\big(S(\lambda)\oplus I\big)\,F(\lambda),
\]
where, according to the sign of the Fredholm index of L0, either
\[
S(\lambda) = \begin{pmatrix}\operatorname{diag}\big((\lambda-\lambda_0)^{\kappa_1},\dots,(\lambda-\lambda_0)^{\kappa_n}\big)\\ 0\end{pmatrix} \in L(U_0,V_0), \tag{11.1}
\]
or else
\[
S(\lambda) = \begin{pmatrix}\operatorname{diag}\big((\lambda-\lambda_0)^{\kappa_1},\dots,(\lambda-\lambda_0)^{\kappa_n}\big) & 0\end{pmatrix} \in L(U_0,V_0). \tag{11.2}
\]
In the context of Theorem 11.1.1, the integers κ1 , . . . , κn are defined as the partial κ-multiplicities of L at λ0 . The corresponding equivalence result reads as follows. Theorem 11.1.2. Let U1 , U2 , V1 , V2 be four Banach spaces, λ0 ∈ Ω, and L ∈ C ω (Ω, L(U1 , V1 )),
M ∈ C ω (Ω, L(U2 , V2 ))
two analytic operator families such that L0 ∈ Fred(U1 , V1 ) and M0 ∈ Fred(U2 , V2 ). Then, the following assertions are equivalent: i) There exist an open neighborhood Ω ⊂ Ω of λ0 and two families E ∈ C ω (Ω , L(V2 , V1 )),
F ∈ C ω (Ω , L(U1 , U2 )),
with E0 ∈ Iso(V2 , V1 ) and F0 ∈ Iso(U1 , U2 ), such that L(λ) = E(λ)M(λ)F(λ),
λ ∈ Ω .
ii) L and M have the same partial κ-multiplicities at λ0 , and ind L0 = ind M0 .
11.2 Meromorphic families Meromorphic families also have a local Smith form. Precisely, the following result is satisfied (see Gohberg, Kaashoek & Lay [44] and the references therein). Theorem 11.2.1. Suppose K = C, and let L : Ω → L(U, V ) be a meromorphic map such that L(λ) ∈ Fred(U, V ) for all λ ∈ Ω \ {λ0 }. Then, there exist an open neighborhood Ω ⊂ Ω of λ0 , two families E ∈ H(Ω , L(V )) and F ∈ H(Ω , L(U )), with E0 ∈ Iso(V ) and F0 ∈ Iso(U ), two topological decompositions U = U0 ⊕ U1 ,
V = V0 ⊕ V1 ,
with dim U0 < ∞ and dim V0 < ∞, an isomorphism I ∈ Iso(U1 , V1 ), an integer n ≥ 1, and n unique (up to permutations) κ1 , . . . , κn ∈ Z \ {0} such that, for every λ ∈ Ω \ {λ0 }, L(λ) = E(λ)(S(λ) ⊕ I)F(λ) where S(λ) is given through either (11.1) or (11.2). As in Section 11.1, the integers κ1 , . . . , κn , which may be positive or negative, are defined as the partial κ-multiplicities of L at λ0 . The corresponding equivalence theorem reads as follows. Theorem 11.2.2. Suppose K = C, and let U1 , U2 , V1 , V2 be four Banach spaces, λ0 ∈ Ω, and L : Ω → L(U1 , V1 ), M : Ω → L(U2 , V2 ) two meromorphic maps such that L(λ) ∈ Fred(U1 , V1 ) and M(λ) ∈ Fred(U2 , V2 ) for all λ ∈ Ω \ {λ0 }. Then, the following assertions are equivalent: i) There exist an open neighborhood Ω ⊂ Ω of λ0 and two families E ∈ H(Ω , L(V2 , V1 )),
F ∈ H(Ω , L(U1 , U2 )),
with E0 ∈ Iso(V2 , V1 ) and F0 ∈ Iso(U1 , U2 ), such that L(λ) = E(λ)M(λ)F(λ),
λ ∈ Ω \ {λ0 }.
ii) L and M have the same partial κ-multiplicities at λ0 , and there exists an ˜ ⊂ Ω of λ0 such that open neighborhood Ω ind L(λ) = ind M(λ),
˜ \ {λ0 }. λ∈Ω
Naturally, there is a counterpart of the Rouch´e theorem in the context of general Fredholm meromorphic families of arbitrary index, but we omit its details here, and the reader is sent to Gohberg & Sigal [54].
11.3 Unbounded operators There are also some available concepts of algebraic multiplicity for operator families whose values are linear, but not necessarily continuous, operators. Among them, the concept introduced by Laloux & Mawhin [71, 72] stands out because of its number of applications to the theory of nonlinear differential equations and systems. Respecting the terminology introduced in this book, given two complex normed spaces U and V , Laloux & Mawhin [71, 72] considered holomorphic families L of the form L(λ) := L − λA, λ ∈ C,
where L : dom L ⊂ U → V is a linear operator, not necessarily continuous, such that R[L] is closed in V and dim N [L] = codim R[L] < ∞, while A : U → V is linear and enjoys certain continuity and compactness assumptions, which will be made precise below. Naturally, in this context, the set of eigenvalues of L, denoted by Eig(L), is defined through Eig(L) := {λ ∈ C : N [L − λA] = {0}}. Under these general assumptions on L, there exist projections P ∈ L(U ) and Q ∈ L(V ) such that R[P ] = N [L], R[L] = N [Q], and the operator LP defined by restriction of L, LP := L|dom L∩N [P ] : dom L ∩ N [P ] → R[L] is invertible. Then, Laloux and Mawhin imposed Σ(L) = C,
L−1 P (I − Q)A ∈ K(U )
and QA ∈ L(U, V ).
Under these assumptions, for each µ0 ∈ C \ Eig(L), the operator A0 := (L − µ0 A)−1 A is compact and it turns out that Σ(L) equals the set of eigenvalues of A0 + µ0 IU . Moreover, for any λ0 ∈ Eig(L), the classic algebraic multiplicity of λ0 − µ0 as an eigenvalue of A0 is defined and does not depend on µ0 . Therefore, the following concept of multiplicity, attributable to Laloux and Mawhin, is consistent M [L; λ0 ] := ma (λ0 − µ0 ), where ma stands for the classic algebraic multiplicity with respect to A0 . Although this concept is a substantial generalization of the classic concept of algebraic multiplicity of an operator, it turns out that it cannot be generalized to cover the concepts of multiplicity introduced in Chapters 4, 5, 7, 9, and axiomatixed in Chapter 6, because it exclusively works out for a very special type of operator families, under very strong compactness requirements that, in general, are not satisfied.
11.4 Non-Fredholm operators There also exists a general theory for operator families whose values are arbitrary, not necessarily Fredholm, linear continuous operators. It is the abstract theory developed by Magnus [93], Arason & Magnus [4, 5, 6], and Magnus & MoraCorral [94]. According to it, given a complex Banach space U , an open set D ⊂ C, and a holomorphic family L ∈ H(D, L(U )), (11.3) the spectrum of L is defined as the set Σ(L) of points λ ∈ D such that L(λ) is not invertible, and, given any bounded open set Ω such that ¯ ⊂D Ω
and ∂Ω ∩ Σ(L) = ∅,
(11.4)
the algebraic multiplicity of L in Ω, denoted by m[L, Ω], is defined as the isomorphism class of the quotient Banach space H(Ω, U ) . L(H(Ω, U )) Naturally, when the quotient is a finite-dimensional space of dimension d, the multiplicity of L in Ω is said to be finite, and it actually equals d, of course. Every pair (L, Ω) satisfying (11.3) and (11.4) is called an admissible pair. The following result collects the main properties of this concept of multiplicity, whose significance relies upon its great generality. Theorem 11.4.1. Let U, V be complex Banach spaces, L ∈ H(D, L(U )) and M ∈ H(D, L(V )). Suppose Ω ⊂ C is a bounded open set such that (L, Ω) and (M, Ω) are admissible pairs. Then, the following properties are satisfied: i) m[L, Ω] = 0 if and only if Ω ∩ Σ(L) = ∅. ii) If U = V , then m[LM, Ω] = m[L, Ω] + m[M, Ω], where the sum of two isomorphism classes of Banach spaces is understood as the isomorphism class of the direct sum of any two representatives of these classes. iii) m[L ⊕ M, Ω] = m[L, Ω] + m[M, Ω]. iv) For every pair of open bounded subsets Ω1 and Ω2 of C for which (L, Ω1 ) and (L, Ω2 ) are admissible pairs and Ω1 ∩ Ω2 = ∅, necessarily m[L, Ω1 ∪ Ω2 ] = m[L, Ω1 ] + m[L, Ω2 ]. v) For any bounded open subset Ω ⊂ C and any continuous map H : [0, 1] × D → L(U )
such that H(t, ·) ∈ H(D, L(U )) and (H(t, ·), Ω) is an admissible pair for all t ∈ [0, 1], necessarily m[H(t, ·), Ω] = m[H(0, ·), Ω]
for all t ∈ [0, 1].
In this abstract setting there are also available uniqueness theorems, in the spirit of those given in Chapter 6. Moreover, according to Theorem 9.5.1 and Exercises 12 and 13 of Chapter 9, the next result makes it apparent that, in the context of holomorphic families, this abstract concept of multiplicity is indeed a generalization of the one studied in the previous chapters (see Magnus & MoraCorral [99]). Theorem 11.4.2. Let U be a complex Banach space, L ∈ H(D, L(U )) and λ0 ∈ Σ(L) such that λ0 is a pole of L−1 of order ν ≥ 1. Let Ω ⊂ D be an open and bounded set for which (L, Ω) is an admissible pair and Ω ∩ Σ(L) = {λ0 }. Then, the following properties are satisfied: • m[L, Ω] is represented by N [trng{L0 , . . . , Lν−1 }] = R[trng{(L−1 )−ν , . . . , (L−1 )−1 }]. • The spaces (11.5) equal the range of the projection ⎞⎛ ⎛ ··· Lν (L−1 )−ν ⎟⎜ . ⎜ . . .. ⎟⎜ . ⎜ .. .. . ⎠⎝ . ⎝ −1 −1 (L )−1 · · · (L )−ν L2ν−1 · · ·
⎞ L1 .. ⎟ ⎟ . ⎠ Lν
and, consequently, m[L, Ω] is also represented by such range. • m[L, Ω] is represented by the range of the projection ⎛ ⎞⎛ ⎞ · · · L1 Lν (L−1 )−ν ⎜ . ⎟ .. . ⎟⎜ .. .. ⎜ . ⎟. ⎜ . .. ⎟ . . ⎝ . ⎠⎝ ⎠ −1 −1 (L )−1 · · · (L )−ν L2ν−1 · · · Lν
(11.5)
Part III
Nonlinear Spectral Theory
Part III: Summary Part III of the book is an attempt to describe the importance of the theory developed in Part II to Nonlinear Analysis. It is, therefore, a short introduction to Nonlinear Analysis from the point of view of Linear Algebra and Linear Operator Theory. In the mathematical modeling of real-world systems, it is imperative to deal with abstract nonlinear operators between real Banach spaces under minimal regularity conditions. In these models, the underlying maps tend to be of class C r for some r ≥ 1, but they are not analytic in general. In fact, in many applications, the parameter λ is real (by the intrinsic nature of these problems) and the map L is merely continuous over some interval. In the context of Nonlinear Analysis, the most important topological magnitude to be measured is the change of the index Ind(L(λ), 0) as λ crosses some λ0 ∈ Σ(L). In finite dimension, such index equals the sign of the determinant of L(λ). A general principle in Nonlinear Analysis establishes that if Ind(L(λ), 0) changes as λ crosses λ0 , then a connected set of solutions (λ, u), with u = 0, to the equation L(λ)u + N(λ, u) = 0 emanates from Ω × {0} at λ0 , for every continuous and compact map N such that N(λ, u) = o( u ),
u → 0.
Such a value of λ0 is said to be a nonlinear eigenvalue of L. This part contains only one chapter, in which the most important abstract concepts of Nonlinear Analysis are introduced, and some of their main properties are established. Under the assumption that χ[L; λ0 ] is well defined and finite, the main theorem of the chapter establishes that λ0 is a nonlinear eigenvalue of L if and only if the algebraic multiplicity χ[L; λ0 ] is odd. It turns out that the theory of algebraic multiplicity studied throughout this book provides us with an algorithm to determine the changes of the topological index Ind(L(λ), 0). This represents a strong link between Linear Operator Theory and Nonlinear Analysis. This part will use two basic properties of compact operators. Namely, that K(U ) is a closed subset of L(U ), and that if K ∈ K(U ) then IU + K ∈ Fred0 (U ). As Part III consists of an expository chapter that transfers information, techniques and results from Linear to Nonlinear Analysis, we have not made a list of exercises. Instead, we have decided to close the chapter and, hence, the book, by making a general discussion about some of the central questions in the study of nonlinear problems. Once the reading of this book has been completed, the interested reader might wish to have a further glance at L´ opez-G´omez [82].
Chapter 12
Nonlinear Eigenvalues

Throughout this chapter, the field K will always be the real field R; we consider a real Banach space U, an open interval Ω ⊂ R, a neighborhood U of 0 ∈ U, an integer number r ≥ 0, a family L ∈ C^r(Ω, L(U)), and a nonlinear map N ∈ C(Ω × U, U) satisfying the following conditions:

(AL) L(λ) − I_U ∈ K(U) for every λ ∈ Ω, i.e., L(λ) is a compact perturbation of the identity map.

(AN) N is compact, i.e., the image by N of any bounded set of Ω × U is relatively compact in U. Also, for every compact K ⊂ Ω,
\[
\lim_{u\to 0}\,\sup_{\lambda\in K}\frac{\|N(\lambda,u)\|}{\|u\|} = 0.
\]
From now on, we consider the operator F ∈ C(Ω × U, U) defined as
\[
F(\lambda,u) := L(\lambda)u + N(\lambda,u), \tag{12.1}
\]
and the associated equation
\[
F(\lambda,u) = 0, \qquad (\lambda,u) \in \Omega\times U. \tag{12.2}
\]
By Assumptions (AL) and (AN), it is apparent that
\[
F(\lambda,0) = 0, \qquad D_u F(\lambda,0) = L(\lambda), \qquad \lambda \in \Omega,
\]
and, hence, (12.2) can be thought of as a nonlinear perturbation around (λ, 0) of the linear equation
\[
L(\lambda)u = 0, \qquad \lambda \in \Omega, \quad u \in U. \tag{12.3}
\]
Equation (12.2) can be expressed as a fixed-point equation for a compact operator. Indeed, F(λ, u) = 0 if and only if
\[
u = [I_U - L(\lambda)]u - N(\lambda,u).
\]
Most integral equations of Mathematical Physics, reaction-diffusion systems in Chemistry, and many paradigmatic models of Mathematical Biology and Ecology admit an abstract formulation of this type. In these models, U is an appropriate space of real functions; for example, the space C0(D̄) of all continuous functions u : D̄ → R such that u = 0 on ∂D, for a given domain D of R^N, for some integer N ≥ 1. By the intrinsic nature of real-world mathematical models, in most of them the interest of the researcher is focused on functions measuring empirical real magnitudes, e.g., chemical concentrations, population densities, temperatures, pressures, etc.; consequently, one is naturally led to consider mathematical models over the real field. Moreover, in many circumstances, these models involve nonlinear maps of the form

    N(λ, u) := λ|u|^p,    (λ, u) ∈ R × C0(D̄),

for a given real number p > 1, and, hence, they usually fail to be analytic. Similarly, L may not be real analytic, as its dependence on the parameter is not fixed by analysts, but determined from empirical studies. For example, if U is finite dimensional, the map L might be given through

    L(λ) := |λ|^{r+1},    λ ∈ R,

for some integer r ≥ 0, which is of class C^r and satisfies Assumption (AL), but, if r is even, it is not analytic. Consequently, in most applications the natural setting is the real non-analytic one, rather than the holomorphic one.

The part of Mathematical Analysis devoted to the problem of ascertaining the structure of the solution set of (12.2) is called Nonlinear Analysis. As (λ, 0) is a solution of (12.2) for every λ ∈ Ω, it will be referred to as a trivial solution. The most ambitious goal of Nonlinear Analysis is to find as much information as possible about F^{-1}(0) from the fact that Ω × {0} ⊂ F^{-1}(0). To accomplish that goal, the first step consists in determining the values λ0 ∈ Ω for which a sequence of non-trivial solutions emanates at (λ0, 0). These are the so-called bifurcation values from Ω × {0} (see Section 12.1 for the precise concept). Surprisingly, there are some values λ0 ∈ Ω for which whether or not bifurcation occurs from Ω × {0} at (λ0, 0) is based exclusively on the properties of the family L about λ0, and not on the specific choice of the nonlinearity N. Such values of λ0 are called nonlinear eigenvalues of L (see Section 12.1). Note that λ0 ∈ Σ(L) if and only if (λ0, 0) is a bifurcation point from R × {0} for the linear equation (12.3). In such a case, the set of solutions emanating from R × {0} at (λ0, 0) is a linear manifold. The main goal of this chapter is to prove that λ0 is a nonlinear eigenvalue of L if and only if the algebraic multiplicity of L at λ0, as discussed in Part II, is an odd natural number. Our proof of this fundamental characterization relies on the topological degree (introduced in Exercise 6 of Chapter 8, in the linear case) and on the existence of the local Smith form established in Chapter 7.
This chapter is organized as follows. Section 12.1 introduces the concept of nonlinear eigenvalue and states the theorem of characterization of nonlinear eigenvalues through the oddness of χ[L; λ0]. Section 12.2 gives a short self-contained introduction to the topological degree (without proofs). Section 12.3 establishes that χ[L; λ0] is odd if and only if the local topological degree of L(λ) at 0 changes as λ crosses λ0. Finally, Section 12.4 provides the proof of the characterization theorem presented in Section 12.1. At a later stage, Nonlinear Analysis studies the local and global structures of the components of the set of non-trivial solutions emanating from Ω × {0} at (λ0, 0). Singularity Theory and Real Algebraic Geometry provide us with rather satisfactory answers to the local problem, while Global Bifurcation Theory analyzes the global structure of these components. These topics are beyond the specific scope of this chapter and will not be covered in the book. Throughout this chapter, given a real Banach space U, we denote by GLc(U) the set of T ∈ Iso(U) such that T − IU ∈ K(U).
12.1 Bifurcation values. Nonlinear eigenvalues

The following concept plays a crucial role in Nonlinear Analysis.

Definition 12.1.1. Let λ0 ∈ Ω. If (λ0, 0) belongs to the closure of F^{-1}(0) ∩ [Ω × (U \ {0})], then (λ0, 0) is called a bifurcation point of (12.2) from Ω × {0}. In addition, λ0 is said to be a bifurcation value of (12.2) from Ω × {0}.

Of course, (λ0, 0) is a bifurcation point of (12.2) from the curve of trivial solutions Ω × {0} if for each natural n there exists (λn, un) ∈ F^{-1}(0) ∩ [Ω × (U \ {0})] such that

    lim_{n→∞} (λn, un) = (λ0, 0).

The next result establishes that the set of bifurcation values from Ω × {0} must be contained in Σ(L).

Lemma 12.1.2. Suppose λ0 is a bifurcation value of (12.2) from Ω × {0}. Then, λ0 ∈ Σ(L).

Proof. For each integer n ≥ 1, there exist λn ∈ Ω and un ∈ U \ {0} such that F(λn, un) = 0 and

    lim_{n→∞} (λn, un) = (λ0, 0).
Then, for every n ≥ 1,

    −L(λ0) (un/‖un‖) = [L(λn) − L(λ0)] (un/‖un‖) + N(λn, un)/‖un‖,

and, hence, by Assumption (AN), it is apparent that

    lim_{n→∞} L(λ0) (un/‖un‖) = 0.

Consequently, since

    un/‖un‖ = [IU − L(λ0)] (un/‖un‖) + L(λ0) (un/‖un‖),    n ≥ 1,

and IU − L(λ0) ∈ K(U), there exists a subsequence of {un}_{n≥1}, say {u_{n_m}}_{m≥1}, for which

    ϕ := lim_{m→∞} u_{n_m}/‖u_{n_m}‖

exists. Necessarily, ‖ϕ‖ = 1; in particular, ϕ ≠ 0. Also, L(λ0)ϕ = 0, which concludes the proof.
Simple examples show that the converse of Lemma 12.1.2 is false in general. Indeed, when we make the choice

    U = U = Ω = R,    L(λ) = λ^2,    N(λ, u) = u^3,

the equation (12.2) becomes

    λ^2 u + u^3 = 0,    (12.4)

whose unique solutions are those of the form (λ, 0); consequently, in this case F does not admit a bifurcation value from Ω × {0}, despite 0 ∈ Σ(L). Note that if, instead of (12.4), we consider λ^2 u − u^3 = 0, then 0 becomes a bifurcation value from Ω × {0}. Therefore, in case L(λ) = λ^2, whether or not 0 is a bifurcation value from Ω × {0} is based on the special nature of the nonlinearity N.

Note that if λ0 ∈ Σ(L), then there always exists a map N satisfying Assumption (AN) for which (λ0, 0) is a bifurcation point of (12.2) from Ω × {0}, for example N = 0. When this occurs for any admissible nonlinearity N, it is said that λ0 is a nonlinear eigenvalue of L. More precisely, this concept is introduced in the following definition.
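A one-line verification of this dichotomy (added here; it is not part of the original text): the modified equation factors as

    λ^2 u − u^3 = u (λ − u)(λ + u) = 0,

so the two curves u = λ and u = −λ of non-trivial solutions emanate from (0, 0), whereas

    λ^2 u + u^3 = u (λ^2 + u^2) = 0

forces u = 0, so that the only solutions of (12.4) are indeed the trivial ones.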
Definition 12.1.3. Let L satisfy Assumption (AL), and consider λ0 ∈ Σ(L). Then, λ0 is said to be a nonlinear eigenvalue of L if λ0 is a bifurcation value of (12.2) from Ω × {0} for all N satisfying Assumption (AN).

The concept of algebraic multiplicity studied in Part II provides us with the following fundamental characterization. Recall that a continuum is a compact and connected metric space.

Theorem 12.1.4 (Characterization of nonlinear eigenvalues). Suppose L satisfies Assumption (AL) and λ0 ∈ Alg_ν(L) for some integer number 1 ≤ ν ≤ r. Then, the following assertions are equivalent:

i) λ0 is a nonlinear eigenvalue of L.

ii) χ[L; λ0] ∈ 2N + 1.

Moreover, in such a case, there is a continuum C ⊂ F^{-1}(0) satisfying

    C ≠ {(λ0, 0)}    and    C ∩ (Ω × {0}) = {(λ0, 0)}.
In the general context of this chapter, the set of non-trivial solutions of equation (12.2) is defined by

    S := { F^{-1}(0) ∩ [Ω × (U \ {0})] } ∪ (Σ(L) × {0}),    (12.5)

so that it consists of all solutions (λ, u) ∈ Ω × U of (12.2) with u ≠ 0, plus all possible bifurcation points from Ω × {0}. The last assertion of Theorem 12.1.4 establishes that S has a component C strictly containing (λ0, 0), if λ0 is a nonlinear eigenvalue of L. By a component it is meant a closed and connected subset which is maximal for the inclusion. As anticipated in the presentation of this chapter, the proof of Theorem 12.1.4 is postponed until Section 12.4. The following two sections provide some key technical devices that will be used in the proof of Theorem 12.1.4.

Now we present a result that states that Σ(L) and S are closed.

Proposition 12.1.5. Σ(L) is a closed subset of Ω, and S is a closed subset of Ω × U.

Proof. First we prove that Σ(L) is closed in Ω. Let {λn}_{n∈N} be a sequence in Σ(L) such that

    lim_{n→∞} λn = λω ∈ Ω.
Define

    K(λ) := IU − L(λ),    λ ∈ Ω.

We find from Assumption (AL) that K(λ) ∈ K(U) for every λ ∈ Ω. Moreover, for each n ≥ 1, there exists un ∈ U such that

    K(λn)un = un,    ‖un‖ = 1.
Clearly,

    K(λω)un = un + [K(λω) − K(λn)] un,    n ≥ 1.    (12.6)

As K is continuous,

    lim_{n→∞} [K(λω) − K(λn)] un = 0,

and, by compactness of K(λω), it follows from (12.6) that there exists a subsequence of {un}_{n≥1} (not re-labeled) such that the limit

    uω := lim_{n→∞} un ∈ U

is well defined. Consequently, passing to the limit as n → ∞ in (12.6) shows K(λω)uω = uω. Equivalently, L(λω)uω = 0. Moreover, ‖uω‖ = 1, since ‖un‖ = 1 for every n ≥ 1; therefore, λω ∈ Σ(L). This shows that Σ(L) is closed in Ω.

Now we prove that S is closed in Ω × U. Define

    Z := F^{-1}(0) ∩ [Ω × (U \ {0})].

As F is continuous, Z is closed in Ω × (U \ {0}), so

    Z = Z̄ ∩ [Ω × (U \ {0})].

By Lemma 12.1.2,

    Z̄ ∩ (Ω × {0}) ⊂ Σ(L) × {0}.

Therefore,

    [Z̄ ∩ (Ω × U)] ∪ (Σ(L) × {0}) ⊂ S

and

    S = [Z̄ ∪ (Σ(L) × {0})] ∩ (Ω × U),

which shows that S is closed in Ω × U.
12.2 A short introduction to the topological degree

Essentially, the topological degree is a generalized counter of the number of zeros that a continuous map f has in a bounded open set D such that f^{-1}(0) ∩ ∂D = ∅. The topological degree was introduced by Brouwer [16] in the special case U = R^N as a technical device to prove his celebrated fixed point theorem; it was later extended by Leray & Schauder [80] to include general compact perturbations of the identity map in arbitrary Banach spaces. This extension provides us, rather naturally, with the celebrated fixed point theorem of Schauder [118] and Tychonoff [125], as will become apparent in Subsection 12.2.3.

When D is the bounded component enclosed by a plane Jordan curve γ of class C^1 and f is holomorphic in a neighborhood of D̄, with f^{-1}(0) ∩ ∂D = ∅, then the degree of f in D, subsequently denoted by Deg(f, D), can be defined through

    Deg(f, D) := (1 / 2πi) ∫_γ f'/f,

and, therefore, by the Argument Principle, Deg(f, D) equals the total number of zeros of f in D counted according to their respective orders.

Although for an arbitrary continuous function f, the degree Deg(f, D) does not provide us with the exact number of zeros of f in D, it will certainly satisfy some fundamental properties that any suitable generalized counter of zeros should exhibit; in particular:

• The identity map has degree one in D if 0 ∈ D.
• The topological degree of f in any disjoint union of open sets equals the sum of its degrees in each of them separately.
• The degree is invariant by continuous deformations provided the maps of the deformation do not have zeros along the boundary ∂D.
• f^{-1}(0) ∩ D ≠ ∅ if Deg(f, D) ≠ 0.

Strikingly, these four properties determine uniquely the topological degree.
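As a quick illustration of the holomorphic formula above (our addition, not part of the original text): for f(z) = z^k on the unit disk D = D1(0), one has f'/f = k/z, whence

    Deg(f, D) = (1 / 2πi) ∫_{C1} k/z dz = k,

in agreement with the Argument Principle, since f has a single zero of order k at the origin.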
12.2.1 Existence and uniqueness

Possibly the most versatile and quickest way to introduce the degree proceeds through the following existence and uniqueness theorem. The existence part goes back to Brouwer [16] in R^N, and to Leray & Schauder [80] in arbitrary Banach spaces. The uniqueness part is attributable to Führer [38] in the context of the Brouwer degree, and to Amann & Weiss [3] in the general setting of Leray and Schauder.

Theorem 12.2.1 (Existence and uniqueness of the topological degree). Let U be a real Banach space. Let A denote the set of pairs (f, D) such that D ⊂ U is open and bounded, f ∈ C(D(f), U) with D̄ ⊂ D(f) ⊂ U, the map f − IU is compact, and 0 ∉ f(∂D). Then, there exists a unique mapping

    Deg : A → Z

satisfying the following properties:
(D1) For every open and bounded subset D ⊂ U with 0 ∈ D,

    Deg(IU, D) = 1.

(D2) For every (f, D) ∈ A and every pair of open disjoint subsets D1, D2 of D such that 0 ∉ f(D̄ \ (D1 ∪ D2)),

    Deg(f, D) = Deg(f, D1) + Deg(f, D2).

(D3) For every D ⊂ U and H ∈ C([0, 1] × D̄, U) such that (H(t, ·), D) ∈ A for all t ∈ [0, 1],

    Deg(H(0, ·), D) = Deg(H(1, ·), D).

The value Deg(f, D) is called the Leray–Schauder degree (or the topological degree, or simply the degree) of f in D.

Property D1 is usually referred to as the normalization property. Note that, for every n ∈ Z \ {1}, the map n·Deg satisfies Properties D2 and D3, but not D1. Thus, Property D1 indeed normalizes the degree of the identity map to make its value equal to its number of zeros in any D containing 0.

Property D2 entails three basic properties that any counter of zeros must satisfy. Precisely, by choosing D = D1 = D2 = ∅, it shows that

    Deg(f, ∅) = 0,    (12.7)
thus showing that no continuous map can have a zero in the empty set. Moreover, in the special case when D = D1 ∪ D2, it shows that

    Deg(f, D) = Deg(f, D1) + Deg(f, D2),    (12.8)

so establishing the additivity property of the counter. Finally, in the special case when D2 = ∅, (12.7) and (12.8) imply

    Deg(f, D) = Deg(f, D1),

which is usually referred to as the excision property of the degree. If, in addition, D1 = ∅, the excision property establishes that

    Deg(f, D) = 0    if f^{-1}(0) ∩ D̄ = ∅.

Equivalently, if (f, D) ∈ A and Deg(f, D) ≠ 0, then

    f^{-1}(0) ∩ D ≠ ∅,    (12.9)

which guarantees the existence of a zero of f in D when the degree of f in D is non-zero. Property (12.9) is usually referred to as the solution property of the degree.

Property D3 establishes the invariance by homotopy of the counter. This pivotal property shows that the generalized number of zeros of f in D, measured by Deg(f, D), remains unchanged if the map is deformed continuously in such a way that it does not lose or gain zeros through ∂D. This property endows the degree with the possibility of being computed.
12.2.2 Computing the degree. Analytic construction

This subsection describes a standard procedure for computing the topological degree. Let R denote the set of (f, D) ∈ A such that f is of class C^1 in D and, for every u ∈ f^{-1}(0) ∩ D, we have that Df(u) ∈ Iso(U). These pairs are usually referred to as regular pairs. Using the Inverse Function Theorem and the facts that f is a compact perturbation of the identity and that f^{-1}(0) ∩ ∂D = ∅, it is easy to see that f^{-1}(0) ∩ D must be finite, possibly empty.

The following result is a direct consequence of the infinite dimensional version by Smale [120] of the celebrated theorem of Sard [115], and of Property D3 of Theorem 12.2.1.

Theorem 12.2.2 (of regularization). Let U be a real Banach space. Let A denote the class of admissible pairs introduced by Theorem 12.2.1 and let R ⊂ A be the associated class of regular pairs. Then, for every (f, D) ∈ A there exists h ∈ C([0, 1] × D̄, U) such that

    h(0, ·) = f,    (h(1, ·), D) ∈ R,

and (h(t, ·), D) ∈ A for every t ∈ [0, 1]; moreover, for every such h,

    Deg(f, D) = Deg(h(1, ·), D).

Consequently, the computation of Deg(f, D) can be reduced to calculating the degree of any homotopic regular pair. To calculate the degree of a regular pair, the following procedure should be implemented. Let (f, D) ∈ R. If f^{-1}(0) ∩ D̄ = ∅, then Deg(f, D) = 0, due to (12.9). If not, let n ≥ 1 be the number of elements of f^{-1}(0) ∩ D̄, and let

    f^{-1}(0) ∩ D̄ = {u1, . . . , un} ⊂ D.

Now, let ε > 0 satisfy

    Bε(uj) ⊂ D,    1 ≤ j ≤ n,

and

    B̄ε(ui) ∩ B̄ε(uj) = ∅,    1 ≤ i < j ≤ n.

Then, according to Property D2,

    Deg(f, D) = Σ_{j=1}^{n} Deg(f, Bε(uj)).    (12.10)

Moreover, by the excision property, Deg(f, Bε(uj)) is independent of ε as long as

    Bε(uj) ⊂ D    and    f^{-1}(0) ∩ B̄ε(uj) = {uj}.

More generally, if (f, D) ∈ A and u0 ∈ D is such that f does not have zeros in a perforated neighborhood of u0, then Deg(f, Bε(u0)) is well defined and independent of ε > 0 as long as

    f^{-1}(0) ∩ B̄ε(u0) ⊂ {u0}.
The value Deg(f, Bε(u0)) will be referred to as the index of f at u0, and denoted by Ind(f, u0). With this concept, (12.10) can be expressed in the form

    Deg(f, D) = Σ_{j=1}^{n} Ind(f, uj).    (12.11)

Now, for every 1 ≤ j ≤ n and u ∈ B̄ε(uj), we set

    Rj(u) := f(u) − Df(uj)(u − uj).

Then, since (f, D) ∈ R and f(uj) = 0, it is easy to see that if ε > 0 is small enough, then the homotopy Hj defined as

    Hj(t, u) := Df(uj)(u − uj) + t Rj(u),    (t, u) ∈ [0, 1] × B̄ε(uj),

satisfies (Hj(t, ·), Bε(uj)) ∈ A for every 1 ≤ j ≤ n and t ∈ [0, 1]. Therefore, by the homotopy invariance of the degree (Property D3), we find from (12.11) that

    Deg(f, D) = Σ_{j=1}^{n} Ind(Df(uj)(· − uj), uj).    (12.12)
Finally, since f − IU is compact, we have that

    Df(uj) − IU ∈ K(U),    1 ≤ j ≤ n,

and, owing to the celebrated formula of Leray & Schauder [80], we find that

    Ind(Df(uj)(· − uj), uj) = (−1)^{m_j},    1 ≤ j ≤ n,

where m_j stands for the sum of the algebraic multiplicities of all negative eigenvalues of Df(uj). In particular,

    Ind(T, 0) = sign det T    (12.13)

if dim U < ∞ and T ∈ Iso(U). Therefore, we conclude from (12.12) that

    Deg(f, D) = Σ_{j=1}^{n} (−1)^{m_j}.    (12.14)
Conversely, the most popular analytical construction of the topological degree consists in defining the topological degree of a pair (f, D) ∈ R by (12.14) and extending it to the class A through Theorem 12.2.2.

In practical applications, to compute the topological degree of a pair (f, D) ∈ A, one should accomplish the following tasks:

1. Construct a regularizing homotopy to transform f into some map F with (F, D) ∈ R. Calculate F^{-1}(0).
2. Calculate DF(u0) for every u0 ∈ D ∩ F^{-1}(0).
3. Compute the sum of the algebraic multiplicities of all negative eigenvalues of DF(u0) for every u0 ∈ D ∩ F^{-1}(0).
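To make the last two steps concrete in the finite-dimensional setting, here is a minimal sketch (not taken from the book): by (12.13) and (12.14), once the zeros of a regular map F and its Jacobians there are known, the degree is the sum of the signs of the Jacobian determinants. The map F, its zeros and its Jacobians used below are hypothetical inputs chosen only for illustration.

import numpy as np

def index_at_zero(jacobian):
    # Local index of a non-degenerate zero in finite dimension: by the
    # Leray-Schauder formula it equals (-1)**m, where m is the sum of the
    # algebraic multiplicities of the negative eigenvalues of the Jacobian;
    # this coincides with sign(det(jacobian)), cf. (12.13).
    return int(np.sign(np.linalg.det(jacobian)))

def degree_of_regular_pair(jacobians_at_zeros):
    # Degree of a regular pair (F, D): sum of the local indices over the
    # finitely many zeros of F in D, cf. (12.11) and (12.14).
    return sum(index_at_zero(J) for J in jacobians_at_zeros)

# Hypothetical illustration: F(x, y) = (x**2 - 1, y) on the open ball of
# radius 2 has the zeros (1, 0) and (-1, 0), with Jacobians diag(2, 1) and
# diag(-2, 1), so its degree is 1 + (-1) = 0.
print(degree_of_regular_pair([np.diag([2.0, 1.0]), np.diag([-2.0, 1.0])]))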
12.2.3 The fixed point theorem of Brouwer, Schauder and Tychonoff

To illustrate the scheme of Subsection 12.2.2, we will prove the following special, but significant, version of the fixed point theorem of Brouwer [16], Schauder [118], and Tychonoff [125].

Theorem 12.2.3 (of Brouwer–Schauder–Tychonoff). Let U be a real Banach space. Let B ⊂ U be an open ball. Suppose f : B̄ → B̄ is a continuous and compact map. Then,

    f(u*) = u*

for some u* ∈ B̄.

Proof. Let u0 ∈ U and R > 0 satisfy B = BR(u0). If f(u*) = u* for some u* ∈ ∂B, then the conclusion of the theorem is true. So, suppose

    f(u) ≠ u    for all u ∈ ∂B,    (12.15)

and consider the map H defined as

    H(t, u) := u − u0 + t[u0 − f(u)],    (t, u) ∈ [0, 1] × B̄.

Clearly, H is continuous and, for every t ∈ [0, 1], the map H(t, ·) − IU is compact. Moreover,

    H(t, u) ≠ 0    for all (t, u) ∈ [0, 1] × ∂B.    (12.16)

Indeed, by (12.15), property (12.16) holds for t = 1. Thus, if (12.16) fails, there must exist t* ∈ [0, 1) and u* ∈ ∂B such that H(t*, u*) = 0. Consequently,

    u* − u0 = t*[f(u*) − u0]

and, hence,

    R = ‖u* − u0‖ = t* ‖f(u*) − u0‖ ≤ R t* < R,

which is impossible. Therefore, (12.16) is satisfied, and, consequently, according to Property D3,

    Deg(IU − f, B) = Deg(H(1, ·), B) = Deg(H(0, ·), B) = 1,

since Du H(0, ·) = IU, and IU does not have any negative eigenvalue. Consequently, owing to the solution property (12.9), it is apparent that f possesses a fixed point in B. This concludes the proof.
12.2.4 Further properties

The following result collects some well-known properties of the topological degree that will be used in the next two sections. Properties ii)–iv) follow straight away from the Leray–Schauder formula and basic theory of linear operators, while the general property of invariance by homotopy formulated in Property i) follows from the invariance by homotopy and the excision property. From now on, given O ⊂ R × U and t ∈ R, we define

    Ot := {u ∈ U : (t, u) ∈ O}.

Proposition 12.2.4. The following properties hold:

i) Suppose a < b, and O is an open bounded subset of [a, b] × U. Let H ∈ C(Ō, U) be such that the map H(t, ·) − IU : Ot → U is compact for every t ∈ [a, b], and

    H(t, u) ≠ 0    for each t ∈ [a, b] and (t, u) ∈ ∂O.

Then,

    Deg(H(a, ·), Oa) = Deg(H(b, ·), Ob).

ii) Ind(A, 0) ∈ {−1, 1} for every A ∈ GLc(U). Moreover, the mapping Ind(·, 0) : GLc(U) → {−1, 1} is continuous.

iii) For every A, B ∈ GLc(U),

    Ind(AB, 0) = Ind(A, 0) · Ind(B, 0).

iv) Let U1, U2 be two closed subspaces of U such that U = U1 ⊕ U2. Let T ∈ GLc(U) be such that T(Ui) ⊂ Ui, for i = 1, 2. Then,

    Ind(T, 0) = Ind(T|U1, 0) · Ind(T|U2, 0).
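A finite-dimensional illustration of Properties ii)–iv) (ours, not part of the original text): if dim U < ∞, then GLc(U) = Iso(U) and, by (12.13), Ind(A, 0) = sign det A; hence

    Ind(AB, 0) = sign det(AB) = (sign det A)(sign det B) = Ind(A, 0) · Ind(B, 0),

and, when T(U1) ⊂ U1 and T(U2) ⊂ U2, the factorization det T = det(T|U1) · det(T|U2) yields Property iv).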
12.3 The algebraic multiplicity as an indicator of the change of index

The following result (anticipated in Exercise 7 of Chapter 8) shows that the algebraic multiplicity χ[L; λ0] can detect all changes of the topological index Ind(L(λ), 0) as the real parameter λ varies. In fact, it shows that the theory of the multiplicity developed in Part II provides us with an algorithm to compute any change of index, thus establishing some deep connections between Algebra, Analysis, Topology and the theory of Dynamical Systems.
Proposition 12.3.1. Suppose L satisfies (AL) and λ0 ∈ Alg_ν(L) for some integer 1 ≤ ν ≤ r. Then, the following assertions are equivalent:

i) Ind(L(λ), 0) changes as λ crosses λ0.

ii) χ[L; λ0] ∈ 2N + 1.

Proof. Since L(λ) is a compact perturbation of the identity map for every λ ∈ Ω, then L(λ) ∈ Fred0(U). Let {M{j}}_{j=0}^{ν} be a derived sequence from L at λ0 with corresponding sequence of (finite-rank) projections {Pj}_{j=0}^{ν}. Then,

    M{0} = L,    M{ν}(λ0) ∈ Iso(U),    M{ν} ∈ C(Ω, L(U)),

and, by (5.35),

    L(λ) = M{ν}(λ) C_{P_{ν−1}}(λ − λ0) · · · C_{P_0}(λ − λ0),    λ ∈ Ω,

where C_{P_0}, . . . , C_{P_{ν−1}} are the operator families defined in (5.33).

Now we prove by induction on j that M{j} is a compact perturbation of the identity for any j ∈ {0, . . . , ν}. Clearly, by definition and Assumption (AL),

    IU − M{0}(λ) = IU − L(λ) ∈ K(U),    λ ∈ Ω.

Suppose that IU − M{j}(λ) ∈ K(U) for all λ ∈ Ω and a particular j ∈ {0, . . . , ν − 1}. Then, for every λ ∈ Ω \ {λ0}, we find from the definition of derived sequence that

    M{j+1}(λ) − IU = M{j}(λ) − IU + (1 − (λ − λ0))/(λ − λ0) · M{j}(λ) Pj,

which is compact, as a finite-rank perturbation of a compact operator. In addition, M{j+1}(λ0) − IU is compact too, as a limit of compact operators. Consequently, M{j} is a compact perturbation of the identity for every j ∈ {0, . . . , ν}. In particular, there exists δ > 0 such that

    M{ν}(λ) ∈ GLc(U),    λ ∈ (λ0 − δ, λ0 + δ).

Then, by Proposition 12.2.4 iii), for all 0 < |λ − λ0| < δ,

    Ind(L(λ), 0) = Ind(M{ν}(λ), 0) · Π_{j=0}^{ν−1} Ind(C_{P_j}(λ − λ0), 0).

In addition, by Proposition 12.2.4 ii), there exists η ∈ {−1, 1} such that

    Ind(M{ν}(λ), 0) = η,    |λ − λ0| < δ.
Moreover, according to Proposition 12.2.4 iv), (5.33), (12.13), and Theorem 12.2.1 (D1), for every j ∈ {0, . . . , ν − 1} and 0 < |λ − λ0| < δ, it is apparent that

    Ind(C_{P_j}(λ − λ0), 0) = Ind((λ − λ0) I_{R[P_j]}, 0) · Ind(I_{N[P_j]}, 0)
                            = sign det((λ − λ0) I_{R[P_j]})
                            = sign(λ − λ0)^{dim R[P_j]},

and, consequently, we conclude from Corollary 5.3.2 (b) that

    Ind(L(λ), 0) = η Π_{j=0}^{ν−1} sign(λ − λ0)^{dim R[P_j]} = η sign(λ − λ0)^{Σ_{j=0}^{ν−1} rank P_j} = η sign(λ − λ0)^{χ[L;λ0]},

which ends the proof.
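A worked finite-dimensional example (our addition) may help to visualize Proposition 12.3.1: take U = R^2 and L(λ) = diag(λ − λ0, (λ − λ0)^2). By Theorem 7.6.3, χ[L; λ0] = 1 + 2 = 3, and for 0 < |λ − λ0| small,

    Ind(L(λ), 0) = sign det L(λ) = sign (λ − λ0)^3 = sign(λ − λ0),

which indeed changes as λ crosses λ0. Replacing the first diagonal entry by (λ − λ0)^2 gives χ[L; λ0] = 4 and Ind(L(λ), 0) = sign (λ − λ0)^4 = 1 for λ ≠ λ0, so the index does not change, in agreement with the proposition.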
12.4 Characterization of nonlinear eigenvalues

This section is devoted to the proof of Theorem 12.1.4, in which we use the local Smith form and Proposition 12.3.1. Thus, throughout this section we assume that L satisfies Assumption (AL) and that λ0 ∈ Alg_ν(L) for some integer 1 ≤ ν ≤ r. Then, thanks to Theorem 5.3.1, χ[L; λ0] is well defined and finite. In particular, λ0 is isolated in Σ(L). Moreover, by Proposition 12.3.1, Ind(L(λ), 0) changes as λ crosses λ0 if and only if χ[L; λ0] is odd. Suppose this is the case, let N ∈ C(Ω × U, U) be any map satisfying Assumption (AN), and consider the map F defined by (12.1), and the equation (12.2). As λ0 is isolated in Σ(L), and Ω is open, there exists δ > 0 such that

    [λ0 − δ, λ0 + δ] ⊂ Ω    and    [λ0 − δ, λ0 + δ] ∩ Σ(L) = {λ0}.

Moreover, according to Lemma 12.1.2, there exists η0 > 0 such that, for every η ∈ (0, η0], the cylinder

    Qη := {(λ, u) ∈ R × U : |λ − λ0| ≤ δ, ‖u‖ ≤ η}

satisfies Qη ⊂ Ω × U and

    |λ − λ0| < δ    if (λ, u) ∈ Qη ∩ S.    (12.17)

Recall from (12.5) the definition of S. These cylinders satisfy the following important property.

Lemma 12.4.1. For every η ∈ (0, η0], we have ∂Qη ∩ S ≠ ∅.
Proof. Assume that

    ∂Qη ∩ S = ∅    (12.18)

for some η ∈ (0, η0]; we shall derive a contradiction. According to (12.17), by the excision property and the invariance by homotopy of the degree, arguing as in Subsection 12.2.2, it is easily seen that

    Deg(F(λ0 − δ, ·), Bη) = Ind(L(λ0 − δ), 0),
    Deg(F(λ0 + δ, ·), Bη) = Ind(L(λ0 + δ), 0).    (12.19)

Moreover, by (12.18), it follows from the invariance by homotopy of the degree that Deg(F(λ, ·), Bη) is constant for every λ ∈ [λ0 − δ, λ0 + δ]. Therefore, we conclude from (12.19) that

    Ind(L(λ0 − δ), 0) = Ind(L(λ0 + δ), 0),    (12.20)

which contradicts the fact that Ind(L(λ), 0) changes as λ crosses λ0, and concludes the proof.

According to (12.17) and Lemma 12.4.1, for each η ∈ (0, η0] there exists

    (λη, uη) ∈ ∂Qη ∩ S

with ‖uη‖ = η. Since |λη − λ0| ≤ δ for every η ∈ (0, η0], there exist λ̃ ∈ [λ0 − δ, λ0 + δ] and a sequence {ηn}_{n≥1} of positive numbers such that

    lim_{n→∞} ηn = 0,    lim_{n→∞} (λ_{ηn}, u_{ηn}) = (λ̃, 0).

Therefore, thanks to Lemma 12.1.2, λ̃ = λ0. Consequently, (λ0, 0) is a bifurcation point from Ω × {0} of equation (12.2) and, hence, λ0 is a nonlinear eigenvalue of L. This shows that ii) implies i) in Theorem 12.1.4.

To show the existence of the continuum C we will use the following result, borrowed from Whyburn [126], whose proof is omitted here.

Lemma 12.4.2. Let M be a compact metric space and A and B two disjoint non-empty compact subsets of M. Then, one of the following (mutually exclusive) alternatives occurs:

i) There exists a continuum C ⊂ M such that

    C ∩ A ≠ ∅    and    C ∩ B ≠ ∅.

ii) There exist two disjoint compact subsets MA and MB of M such that

    A ⊂ MA,    B ⊂ MB,    M = MA ∪ MB.
Now we take η ∈ (0, η0] and consider the set M := Qη ∩ S. According to Proposition 12.1.5, M is closed and bounded. Moreover, since (λ, u) ∈ M if and only if

    |λ − λ0| ≤ δ,    ‖u‖ ≤ η,    u = [IU − L(λ)]u − N(λ, u),

it follows from Assumptions (AL) and (AN) that M is compact. Now, consider the sets A and B defined as

    A := {(λ, u) ∈ M : ‖u‖ = η},    B := {(λ0, 0)}.

By Lemma 12.4.1 and (12.17), A is a non-empty compact subset of M. Clearly, so is B. Moreover, A and B are disjoint. Thus, one of the alternatives of Lemma 12.4.2 is satisfied. We shall prove that ii) does not hold. Suppose, looking for a contradiction, that ii) is satisfied. Then, since MA and MB are disjoint compact sets, there exists ε > 0 such that the ε-neighborhood of MB,

    MBε := MB + {(λ, u) ∈ R × U : ‖(λ, u)‖ < ε},

satisfies MA ∩ MBε = ∅ and

    |λ − λ0| < δ    if (λ, u) ∈ MBε;    (12.21)

this is possible thanks to (12.17). Then, since M = MA ∪ MB, we have that ∂MBε ∩ M = ∅. Now we show that if β > 0 is small enough, then

    Qβ ∩ MA = ∅.    (12.22)

Indeed, if not, there would exist a sequence {(λn, un)}_{n∈N} in MA such that

    lim_{n→∞} (λn, un) = (λ̃, 0)

for some λ̃ ∈ [λ0 − δ, λ0 + δ]. Since (λ̃, 0) ∈ MA ⊂ M, we have that λ̃ = λ0. So (λ0, 0) ∈ MA ∩ B, which is a contradiction. This proves that (12.22) is true for β > 0 small enough; fix such a β. Then the open set

    O := Q̊β ∪ MBε,

where Q̊β denotes the interior of Qβ, satisfies MB ⊂ O and ∂O ∩ M = ∅. Thus, by Proposition 12.2.4 i), we find that

    Deg(F(λ0 − δ, ·), O_{λ0−δ}) = Deg(F(λ0 + δ, ·), O_{λ0+δ}).

Also, according to (12.21), it is obvious that

    O_{λ0−δ} = O_{λ0+δ} = Bβ ⊂ U,
and, by (12.19), equality (12.20) holds true, which is impossible. This contradiction shows that Alternative i) of Lemma 12.4.2 holds, and, hence, the existence of the continuum C with the properties described in that lemma concludes the proof of the last assertion of Theorem 12.1.4.

Next, we suppose that χ[L; λ0] ∈ 2N. To complete the proof of Theorem 12.1.4 it suffices to construct an operator N satisfying Assumption (AN) and such that the only zeros of F in a neighborhood of (λ0, 0) lie in Ω × {0}. This would show that λ0 cannot be a nonlinear eigenvalue of L, and would conclude the proof.

According to Theorem 7.8.3, L possesses a local Smith form at λ0. More precisely, there exist an open neighborhood Ω' ⊂ Ω of λ0, two families E ∈ C(Ω', L(U)) and G ∈ C(Ω', L(U)), with E(λ0), G(λ0) ∈ Iso(U), a direct sum topological decomposition U = U0 ⊕ U1, and n := dim U0 positive integers k1 ≥ · · · ≥ kn, with k1 ≤ r, such that

    L(λ) = E(λ) [D(λ) ⊕ I_{U1}] G(λ),    λ ∈ Ω',

where

    D(λ) = diag((λ − λ0)^{k1}, . . . , (λ − λ0)^{kn}) ∈ L(U0),    λ ∈ Ω'.

Through the change of variables

    v = G(λ)u,    λ ∈ Ω',

it is easy to see that equation (12.2) is equivalent, in a neighborhood of λ0, to

    [D(λ) ⊕ I_{U1}] v + M(λ, v) = 0,    (12.23)

where

    M(λ, v) := E(λ)^{-1} N(λ, G(λ)^{-1} v),

in the sense that D(λ) ⊕ I_{U1} satisfies Assumption (AL), M satisfies Assumption (AN), and F(λ, u) = 0 if and only if (λ, v) := (λ, G(λ)u) solves (12.23). In particular, (λ0, 0) is a bifurcation point of (12.2) if and only if it is a bifurcation point of (12.23). Consequently, to complete the proof of Theorem 12.1.4 it suffices to construct an M satisfying Assumption (AN) for which all solutions (λ, v) of (12.23) in a neighborhood of (λ0, 0) satisfy v = 0. Set

    v = x + y,    x ∈ U0,    y ∈ U1,

and choose any M satisfying

    M(λ, x + y) = M(λ, x) ∈ U0

for all (λ, x, y) ∈ R × U0 × U1 in a neighborhood of (λ0, 0, 0).
Then, equation (12.23) can be expressed as

    D(λ)x + M(λ, x) = 0,    y = 0,

and, consequently, it suffices to find some ε > 0 and n continuous functions

    Mi : [λ0 − ε, λ0 + ε] × B̄ε → R,    1 ≤ i ≤ n,

such that

    lim_{x→0} max_{|λ−λ0|≤ε} |Mi(λ, x)| / ‖x‖ = 0,    1 ≤ i ≤ n,

for which (λ, 0) are the only solutions (λ, x) of the n-dimensional system

    diag((λ − λ0)^{k1}, . . . , (λ − λ0)^{kn}) (x1, . . . , xn)^T + (M1(λ, x), . . . , Mn(λ, x))^T = 0,    (12.24)

for |λ − λ0| ≤ ε and ‖x‖ ≤ ε; here x1, . . . , xn stand for the components of x with respect to the canonical basis of R^n, and the ball Bε is in R^n. Note that Assumption (AN) is automatically satisfied, since M has been chosen with finite rank.

Now we make the appropriate choice of Mi, for each i ∈ {1, . . . , n}. Let i ∈ {1, . . . , n} be such that ki is even. Then, by making the choice

    Mi(λ, x) := x_i^3,    |λ − λ0| ≤ ε,    ‖x‖ ≤ ε,

the ith equation of system (12.24) becomes

    (λ − λ0)^{ki} xi + x_i^3 = 0,

whose unique solution (λ, xi) is (λ, 0). According to Theorem 7.6.3, we have that

    χ[L; λ0] = Σ_{i=1}^{n} ki.

As we are assuming that χ[L; λ0] is even, there is an even number of indexes i ∈ {1, . . . , n} such that ki is odd. If there are none at all, the proof is done. Suppose i, j ∈ {1, . . . , n} are such that i ≠ j, and ki and kj are odd. Then, making the choice

    Mi(λ, x) := x_j^3,    Mj(λ, x) := −x_i^3,    |λ − λ0| ≤ ε,    ‖x‖ ≤ ε,

the system consisting of the ith and jth equations of (12.24) becomes

    (λ − λ0)^{ki} xi + x_j^3 = 0,
    (λ − λ0)^{kj} xj − x_i^3 = 0.

As ki − kj is even, it is easy to conclude that any solution of the above system must satisfy xi = xj = 0, which ends the proof of Theorem 12.1.4.
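For completeness, the short elimination behind this last claim (our addition, not in the original text) runs as follows. From the first equation, x_j^3 = −(λ − λ0)^{ki} xi, and combining it with the second equation,

    x_j^9 = −(λ − λ0)^{3ki} x_i^3 = −(λ − λ0)^{3ki + kj} xj,

so that xj (x_j^8 + (λ − λ0)^{3ki + kj}) = 0. Since ki and kj are both odd, the exponent 3ki + kj is even, hence (λ − λ0)^{3ki + kj} ≥ 0; the second factor therefore vanishes only when xj = 0, so xj = 0 in every case, and then the second equation yields xi = 0.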
12.5 Comments on Chapter 12

Some specialized monographs about nonlinear integral equations are Krasnosel'skiĭ [69], López-Gómez [82], and Kielhöfer [68]; each of them enjoys a different taste. Some simple prototypical models of great relevance from the point of view of their applications to Mathematical Biology and Ecology can be found in the paradigmatic monographs of Cantrell & Cosner [18], Murray [102] and Okubo [103], and in the paper of López-Gómez & Molina-Meyer [87], which attempts to be an instance of how serious mathematics should be diffused among empirical scientists.

The exposition of this chapter is based on López-Gómez [82, 85], Magnus [92], and Mora-Corral [101]. The concepts of bifurcation point and nonlinear eigenvalue are classic and, essentially, go back to Poincaré [108], at least in the context of nonlinear oscillations in Celestial Mechanics. However, the notation and terminologies used in this chapter were essentially fixed by Krasnosel'skiĭ [69] and his School of Nonlinear Analysis. In fact, under the assumptions of this chapter, the first general result concerning nonlinear eigenvalues was a celebrated theorem of Krasnosel'skiĭ, which established that if

    A ∈ K(U),    L(λ) = λIU − A,    λ ∈ R,

and λ0 ≠ 0 is an eigenvalue of A of odd multiplicity ma(λ0), then λ0 is a nonlinear eigenvalue of L. Note that, according to Theorem 8.3.2, χ[L; λ0] = ma(λ0).

Even in the classical setting of Krasnosel'skiĭ, the characterization described in Theorem 12.1.4 goes back to Esquinas & López-Gómez [29, 30, 31, 82], to whom Theorem 12.1.4 is attributable. The proof of Theorem 12.1.4 in Section 12.4 reveals that λ0 is a nonlinear eigenvalue of L provided Ind(L(λ), 0) changes as λ crosses λ0. Thus, at least for this implication, the condition that λ0 is an algebraic eigenvalue of L is not necessary. Some further (optimal) improvements by Ize [62] and Fitzpatrick & Pejsachowicz [32] showed that λ0 is a nonlinear eigenvalue of L if and only if Ind(L(λ), 0) changes as λ crosses λ0. To prove this deep characterization, they used Obstruction Theory (a non-constructive technique of Algebraic Topology) in order to show that Ind(L(λ), 0) changes as λ crosses λ0 if λ0 is a nonlinear eigenvalue. Although in these contexts Algebraic Topology provides us with optimal results, these techniques do not give an explicit nonlinearity for which (λ, 0) is the unique solution of (12.2) near (λ0, 0) if Ind(L(λ), 0) remains constant as λ crosses λ0; our proof of Theorem 12.1.4 through the Smith form is, in contrast, constructive. That proof goes back to Mora-Corral [101] and, apparently, is substantially shorter than the original one by Esquinas & López-Gómez [29, 30, 31, 82], where, instead of the Smith form, transversalization theory was used straight away. It should not be forgotten that transversalization theory was the main technical device in
characterizing the existence of the local Smith form in Chapter 7 in the general non-analytical case. Consequently, the shortening of the lengthy proofs of López-Gómez [82, Chapter 4] has been actually facilitated by the theory developed in Chapter 7 of this book. Incidentally, it was the development of the necessary technical tools for proving Theorem 12.1.4 that originated the theory of transversality studied in Chapter 4. The interested reader is sent to López-Gómez [82, Chapter 4] for any further details concerning this issue. Later, transversalization theory has been shown to be an important device throughout this book. The idea of using the Smith form in the proof of Theorem 12.1.4 seems to go back to Rabier [110].

Lemma 12.4.2 describes a topological property, traditionally attributed to Whyburn [126], although the result seems to be considerably earlier. See Kuratowski [70] for a compendium of results in this spirit on connection and separation, and Alexander [1] for an exposition from the point of view of Bifurcation Theory.

Within the context of local bifurcation theory, the topological degree was first used by Krasnosel'skiĭ [69] to prove his celebrated bifurcation theorem, stated above. As shown in the proof of Theorem 12.1.4, if Ind(L(λ), 0) changes as λ crosses λ0, then λ0 is a nonlinear eigenvalue of L, and a continuum of non-trivial solutions emanates from Ω × {0} at λ0. This is one of the most important principles of Nonlinear Analysis, due to its huge number of applications.

As discussed in this chapter, the topological degree was introduced by Brouwer [16] and later refined by Leray & Schauder [80], in connection with fixed point theory. However, some pivotal underlying ideas, mostly those related to the winding number, might be attributable to C.F. Gauss, L. Kronecker and H. Poincaré, among other significant predecessors. All proofs of the results of Section 12.2 can be easily constructed from Lloyd [81], which is a paradigmatic textbook on topological degree theory from an analytic point of view. In constructing the topological degree, Brouwer [16] founded Algebraic Topology. Naturally, there are a number of textbooks where degree theory is studied adopting the point of view of Algebraic Topology, but this issue is beyond the general scope of this book. More general developments of the Leray–Schauder degree for wider classes of fixed-point equations, where the operators enjoy weaker regularity conditions than Assumptions (AL) and (AN), were introduced by Browder, Nussbaum and Petryshyn in the late sixties (cf. Lloyd [81], Deimling [26] and Petryshyn [107]) as well as by Mawhin [96], and, more recently, by Fitzpatrick & Pejsachowicz [32, 33, 34], Pejsachowicz & Rabier [104, 105], and Benevieri & Furi [10, 11, 12], who have extended the range of applicability of the classic topological degree to cover large classes of Fredholm operators of index zero, not necessarily compact perturbations of the identity map.

From now on, we suppose that Ω × U = R × U and that λ0 is a nonlinear eigenvalue of L. Then, according to Theorem 12.1.4, there is a continuum C ⊂ S such that {(λ0, 0)} ⊊ C. The set S of non-trivial solutions of (12.2) has some component Cm such that (λ0, 0) ∈ Cm. By a component, we mean a closed and connected set which is maximal for the inclusion. Global bifurcation theory is the
part of Nonlinear Analysis that studies the global topological behavior of these components Cm. This branch originated from the pioneering works of Rabinowitz [111, 112], who established that Cm must satisfy some of the following (not mutually exclusive) alternatives:

A1. Cm is unbounded in R × U.

A2. There exists a nonlinear eigenvalue λ̃0 ∈ Σ(L) \ {λ0} of L such that (λ̃0, 0) ∈ Cm.

Since the publication of these works, this alternative is referred to as the global alternative of P. H. Rabinowitz, and represents a paradigmatic result in global Nonlinear Analysis, due to its huge number of applications to the theory of nonlinear Partial Differential Equations. In linear equations, i.e., when N = 0, Alternative A1 always occurs, as there is a linear manifold of non-trivial solutions at λ0. But, in nonlinear problems, according to the nature of N, any of the two alternatives, or even both simultaneously, can occur (see López-Gómez [82, Chapter 7], and López-Gómez & Molina-Meyer [86]). Some sharp improvements of these pioneering results were later found by Dancer [24, 25], Ize [61], Magnus [92] and López-Gómez [82] in connection with the problem of ascertaining the fine structure of the compact components of S meeting R × {0}. The analysis of these and other related problems in connection, for example, with the analysis of the local structure of the set of solutions bifurcating from Ω × {0} at a nonlinear eigenvalue, lies within the field of Singularity Theory and real Algebraic Geometry, but they are outside the general scope of the present book. We will, consequently, stop this final discussion here.

From a topological perspective, the algebraic multiplicity χ[L; λ0] merely provides us with a finite algorithm to calculate the local topological index Ind(L(λ), 0). Therefore, it seems imperative to be able to compute it for a sufficiently large class of operator families, not necessarily analytic, and this task has been widely accomplished in the book.
Bibliography [1] J.C. Alexander, A primer on connectivity. In Fixed point theory, pp. 455– 483. Lecture Notes in Mathematics 886, Springer, Berlin-New York 1981. [2] H. Amann, Gew¨ ohnliche Differentialgleichungen. De Gruyter Lehrbuch, Walter de Gruyter, Berlin 1983. [3] H. Amann and S.A. Weiss, On the uniqueness of the topological degree. Math. Z. 130 (1973), 39–54. [4] J. Arason and R. Magnus, The universal multiplicity theory for analytic operator-valued functions. Math. Proc. Cambridge Philos. Soc. 118 (1995), 315–320. [5] J. Arason and R. Magnus, An algebraic multiplicity theory for analytic operator-valued functions. Math. Scand. 82 (1998), 265–286. [6] J. Arason and R. Magnus, A multiplicity theory for analytic functions that take values in a class of Banach algebras. J. Operator Theory 45 (2001), 161–174. [7] D.K. Arrowsmith and C.M. Place, Ordinary differential equations. A qualitative approach with applications. Chapman and Hall Mathematics Series, Chapman & Hall, London 1982. [8] J.A. Ball, I. Gohberg and L. Rodman, Interpolation of Rational Matrix Functions. Operator Theory: Advances and Applications 45, Birkh¨ auser, Basel 1990. [9] H. Bart, M.A. Kaashoek and D.C. Lay, Relative inverses of meromorphic operator functions and associated holomorphic projection functions. Math. Ann. 218 (1975), 199–210. [10] P. Benevieri and M. Furi, A simple notion of orientability for Fredholm maps of index zero between Banach manifolds and degree theory. Ann. Sci. Math. Qu´ebec 22 (1998), 131–148. [11] P. Benevieri and M. Furi, On the concept of orientabily for Fredholm maps between real Banach manifolds. Top. Meth. Nonl. Anal. 16 (2000), 279–306.
296
Bibliography
[12] P. Benevieri and M. Furi, Bifurcation results for families of Fredholm maps of index zero between Banach spaces. In Nonlinear Analysis and its Applications (St. John’s NF, 1999), pages 35–47. Nonlinear Analysis Forum 6, 2001. [13] T.S. Blyth and E.F. Robertson, Linear algebra. Essential Student Algebra 4. Chapman and Hall, New York 1986. [14] B. den Boer, Linearization of operator functions on arbitrary open sets. Integral Equations and Operator Theory 1 (1978), 19–27. [15] H. den Boer and G.P.A. Thijsse, Semistability of sums of partial multiplicities under additive perturbation. Integral Equations Operator Theory 3 (1980), 23–42. ¨ [16] L.E.J. Brouwer, Uber Abbildung von Mannigfaltigkeiten. Math. Annal. 71 (1911), 97–115. [17] D.M. Burton, Abstract and linear algebra. Addison-Wesley, Reading-LondonDon Mills 1972. [18] R.S. Cantrell and C. Cosner, Spatial ecology via reaction-diffusion equations. Wiley Series in Mathematical and Computational Biology, John Wiley & Sons, Chichester 2003. [19] S.N. Chow and J.K. Hale, Methods of bifurcation theory. Grundlehren der Mathematischen Wissenschaften 251, Springer, New York-Berlin 1982. [20] P.G. Ciarlet, Introduction a ` l’analyse num´erique matricielle et a ` l’optimisation. Masson, Paris 1982. [21] E.A. Coddington and N. Levinson, Theory of ordinary differential equations. McGraw-Hill, New York-Toronto-London 1955. [22] M.G. Crandall and P.H. Rabinowitz, Bifurcation from simple eigenvalues. J. Funct. Analysis 8 (1971), 321–340. [23] M.G. Crandall and P.H. Rabinowitz, Bifurcation, perturbation of simple eigenvalues, and linearized stability. Arch. Rat. Mech. Anal. 52 (1973), 161– 180. [24] E.N. Dancer, On the structure of solutions of non-linear eigenvalue problems. Ind. Univ. Math. J. 23 (1974), 1069–1076. [25] E.N. Dancer, Bifurcation from simple eigenvalues and eigenvalues of geometric multiplicity one. Bull. London Math. Soc. 34 (2002), 533–538. [26] K. Deimling, Nonlinear Functional Analysis. Springer, Berlin 1985. [27] N. Dunford and J.T. Schwartz, Linear operators. Part I. Pure and Applied Mathematics 7, Interscience Publishers, New York-London 1958. [28] V.M. Eni, Stability of the root-number of an analytic operator-function and perturbations of its characteristic numbers and eigenvectors. Dokl. Akad. Nauk SSSR 173 (1967), 1251–1254. English translation: Soviet Math. Dokl. 8 (1967), 542–545. [29] J. Esquinas, Optimal multiplicity in local bifurcation theory, II: General case. J. Diff. Equations 75 (1988), 206–215.
Bibliography
297
[30] J. Esquinas and J. L´ opez-G´omez, Resultados o ´ptimos en teor´ıa de bifurcaci´ on y aplicaciones. In Actas del IX Congreso de Ecuaciones Diferenciales y Aplicaciones, pp. 159–162. Universidad de Valladolid, Valladolid 1986. [31] J. Esquinas and J. L´ opez-G´omez, Optimal multiplicity in local bifurcation theory, I: Generalized generic eigenvalues. J. Diff. Equations 71 (1988), 72– 92. [32] P.M. Fitzpatrick and J. Pejsachowicz, A local bifurcation theorem for C 1 Fredholm maps. Proc. Amer. Math. Soc. 109 (1990), 995–1002. [33] P.M. Fitzpatrick and J. Pejsachowicz, Parity and generalized multiplicity. Trans. Amer. Math. Soc. 326 (1991), 281–305. [34] P.M. Fitzpatrick and J. Pejsachowicz, Orientation and the Leray–Schauder theory for fully nonlinear elliptic boundary value problems. Mem. Amer. Math. Soc. 483, Providence 1993. [35] A. Friedman and M. Shinbrot, Nonlinear eigenvalue problems. Acta Math. 121 (1968), 77–125. [36] G. Frobenius, Theorie der linearen Formen mit ganzen Coefficienten. J. Reine Angew. Math. 86 (1879), 146–208. [37] F.G. Frobenius, Gesammelte Abhandlungen. Band I. Edited by J.-P. Serre, Springer, Berlin-New York 1968. [38] L. F¨ uhrer, Theorie des Abbildungsgrades in endlichdimensionalen R¨ aumen. Ph. D. Dissertation, Freie Univ. Berlin, Berlin 1971. [39] F.R. Gantmacher, The theory of matrices, vol. 1, 2. Chelsea, New York 1959. Translated from Russian. [40] I.M. Gel’fand, Lectures on linear algebra. Interscience Tracts in Pure and Applied Mathematics 9, Interscience, New York – London 1961. [41] I.C. Gohberg, On linear operators depending analytically on a parameter (Russian). Doklady Akad. Nauk. SSSR (N.S.) 78 (1951), 629–632. [42] I.C. Gohberg, Certain questions of the spectral theory of finitely meromorphic operator functions (Russian). Izv. Akad. Nauk. Armjan. SSSR Ser. Mat. 6 (1971), 160–181; corrections in ibid. 7 (1972), 152. [43] I.C. Gohberg, S. Goldberg and M.A. Kaashoek, Classes of Linear Operators, vol. 1. Operator Theory: Advances and Applications 49, Birkh¨ auser, Basel 1990. [44] I.C. Gohberg, M.A. Kaashoek and D.C. Lay, Equivalence, Linearization, and Decomposition of Holomorphic Operator Functions. J. Funct. Analysis 28 (1978), 102–144. [45] I.C. Gohberg and M.G. Kre˘ın, Fundamental aspects of defect numbers, root numbers and indexes of linear operators (Russian). Uspehi Mat. Nauk. (N.S.) 12 (1957), no. 2(74), 43–118. English Translation: The basic propositions on defect numbers, root numbers and indices of linear operators. Amer. Math. Soc. Transl. (2) 13 (1960), 185–264.
298
Bibliography
[46] I.C. Gohberg and M.G. Kre˘ın, Introduction to the Theory of Linear Nonselfadjoint Operators. Translations of Mathematical Monographs 18, American Mathematical Society, Providence 1969. [47] I. Gohberg, P. Lancaster and L. Rodman, Spectral analysis of matrix polynomials, I. Canonical forms and divisors. Linear Algebra and Applications 20 (1978), 1–44. [48] I. Gohberg, P. Lancaster and L. Rodman, Spectral analysis of matrix polynomials, II. The resolvent form and spectral divisors. Linear Algebra and Applications 21 (1978), 65–88. [49] I. Gohberg, P. Lancaster and L. Rodman, Representations and divisibility of operator polynomials. Canad. J. Math. 30 (1978), 1045–1069. [50] I. Gohberg, P. Lancaster and L. Rodman, Matrix polynomials. Computer Science and Applied Mathematics, Academic Press, New York 1982. [51] I. Gohberg, P. Lancaster and L. Rodman, Invariant subspaces of matrices with applications. Canadian Mathematical Society Series of Monographs and Advanced Texts, Wiley Interscience, John Wiley, New York 1986. [52] I. Gohberg and L. Rodman, Analytic matrix functions with prescribed local data. J. D’Analyse Math. 40 (1981), 90–128. [53] I. Gohberg and L. Rodman, Analytic operator-valued functions with prescribed local data. Acta Math. (Szeged) 45 (1983), 189–199. [54] I.C. Gohberg and E.I. Sigal, An Operator Generalization of the Logarithmic Residue Theorem and the Theorem of Rouch´e. Math. Sbornik 84(126) (1971), 607–629. English Trans.: Math. USSR Sbornik 13 (1971), 603–625. [55] M. de Guzm´an, Ecuaciones diferenciales ordinarias. Teor´ıa de estabilidad y control. Alhambra, Madrid 1975. [56] J.K. Hale, Ordinary differential equations. 2nd edition, Krieger Publishing, Huntington (N.Y.) 1980. [57] P. Hartman, Ordinary differential equations. John Wiley, New York-LondonSydney 1964. [58] B. Helffer, Semi-Classical Analysis for the Schr¨ odinger Operator and Applications. Lectures Notes in Mathematics 1336, Springer, Berlin 1988. [59] M.W. Hirsch and S. Smale, Differential equations, dynamical systems, and linear algebra. Pure and Applied Mathematics 60, Academic Press, New York 1974. [60] P.D. Hislop and I.M. Sigal, Introduction to Spectral Theory With Applications to Schr¨ odinger Operators. Applied Mathematical Sciences 113, Springer, New York 1996. [61] J. Ize, Bifurcation theory for Fredholm operators. Mem. Amer. Math. Soc. 174, Providence 1976. [62] J. Ize, Necessary and sufficient conditions for multiparameter bifurcation. Rocky Mountain J. Math. 18 (1988), 305–337.
Bibliography
299
[63] N. Jacobson, Lectures in abstract algebra, Vol. II. Linear algebra. Van Nostrand, Toronto-New York-London 1953. [64] M.A. Kaashoek, C.V.M. van der Mee and L. Rodman, Analytic operator functions with compact spectrum, I. Spectral nodes, linearization and equivalence. Integral Equations Operator Theory 4 (1981), 504–547. [65] T. Kato, Perturbation Theory for Linear Operators. 2nd Edition, Springer, Berlin 1976. [66] M.V. Keldyˇs, On the characteristic values and characteristic functions of certain classes of non-self-adjoint equations (Russian). Doklady Akad. Nauk SSSR (N.S.) 77 (1951), 11–14. [67] M.V. Keldyˇs, The completeness of eigenfunctions of certain classes of nonselfadjoint linear operators (Russian). Uspehi Mat. Nauk 26 (1971), 15–41. English Translation: Russian Math. Surveys 26 (1971), 15–44. [68] H. Kielh¨ ofer, Bifurcation Theory. An Introduction with Applications to PDEs. Applied Mathematical Sciences 156, Springer, New York 2004. [69] M.A. Krasnosel’ski˘ı, Topological Methods in the Theory of Nonlinear Integral Equations. Gos. Izdat. Tehn.-Teor. Lit., Moscow 1956. English trans.: Pergamon Press, New York 1964. [70] K. Kuratowski, Topology. Vol. II. Academic Press, New York-London, PWN Polish Scientific Publishers, Warsaw 1968. [71] B. Laloux and J. Mawhin, Coincidence index and multiplicity. Trans. Amer. Math. Soc. 217 (1976), 143–162. [72] B. Laloux and J. Mawhin, Multiplicity, Leray–Schauder formula, and bifurcation. J. Diff. Equations 24 (1977), 309–322. [73] P. Lancaster, Theory of matrices. Academic Press, New York-London 1969. [74] P. Lancaster, A fundamental theorem on lambda-matrices with applications, I. Ordinary differential equations with constant coefficients. Linear Algebra and Applications 18 (1977), 189–211. [75] P. Lancaster, A fundamental theorem on lambda-matrices with applications, II. Difference equations with constant coefficients. Linear Algebra and Applications 18 (1977), 213–222. [76] S. Lang, Linear algebra. 3rd edition, Undergraduate Texts in Mathematics, Springer, New York, 1987. [77] P.D. Lax, Linear algebra. Pure and Applied Mathematics, Wiley Interscience, John Wiley, New York 1997. [78] J. Leiterer, Local and global equivalence of meromorphic operator functions, I. Math. Nachr. 83 (1978), 7–29. [79] J. Leiterer, Local and global equivalence of meromorphic operator functions, II. Math. Nachr. 84 (1978), 145–170. [80] J. Leray and J. Schauder, Topologie et ´equations fonctionelles. Ann. Sci. ´ Ecole Norm. Sup. S´er. 3 51 (1934), 45–78.
300
Bibliography
[81] N.G. Lloyd, Degree Theory. Cambridge Tracts in Mathematics, Cambridge University Press, Cambridge 1978. [82] J. L´ opez-G´omez, Spectral Theory and Nonlinear Functional Analysis. CRC Press, Chapman and Hall RNM 426, Boca Raton 2001. [83] J. L´ opez-G´omez, Ecuaciones diferenciales y variable compleja. Con teor´ıa espectral y una introducci´ on al grado topol´ ogico de Brouwer. Prentice Hall, Madrid 2001. [84] J. L´ opez-G´omez, Ecuaciones diferenciales y variable compleja. Problemas propuestos y resueltos. Prentice Hall, Madrid 2002. [85] J. L´ opez-G´omez, Spectral Theory and Nonlinear Analysis. In Ten Mathematical Essays on Approximation in Analysis and Topology (J. Ferrera, J. L´opez-G´omez and F. Ruiz del Portal, editors), pp. 151–176, Elsevier, Amsterdam 2005. [86] J. L´ opez-G´omez and M. Molina-Meyer, Bounded components of positive solutions of abstract fixed point equations: mushrooms, loops and isolas. J. Diff. Equations 209 (2005), 416–441. [87] J. L´ opez-G´omez and M. Molina-Meyer, The competitive exclusion principle versus biodiversity through competitive segregation and further adaptation to spatial heterogeneities. Theoretical Population Biology 69 (2006), 94-109. [88] J. L´ opez-G´omez and C. Mora-Corral, Characterizing the existence of local Smith forms for C ∞ families of matrix operators. Contemporary Mathematics 321, pp. 139–151, Amer. Math. Soc., Providence 2003. [89] J. L´ opez-G´omez and C. Mora-Corral, Local Smith form and equivalence for one-parameter families of Fredholm operators of index zero. In Spectral Theory and Nonlinear Analysis with Applications to Spatial Ecology. (S. CanoCasanova, J. L´opez-G´omez and C. Mora-Corral, editors), pp. 127–161, World Scientific, Singapore 2005. [90] J. L´ opez-G´omez and C. Mora-Corral, Finite Laurent developments and the logarithmic residue theorem in the real non-analytic case. Integral Equations Operator Theory 51 (2005), 519–552. [91] J. L´ opez-G´omez and C. Mora-Corral, Counting zeros of C 1 Fredholm maps of index one. Bull. London Math. Soc. 37 (2005), 778–792. [92] R.J. Magnus, A generalization of multiplicity and the problem of bifurcation. Proc. London Math. Soc. (3) 32 (1976), 251–278. [93] R. Magnus, On the multiplicity of an analytic operator-valued function. Math. Scand. 77 (1995), 108–118. [94] R. Magnus and C. Mora-Corral, Natural representations of the multiplicity of an analytic operator-valued function at an isolated point of the spectrum. Integral Equations and Operator Theory 53 (2005), 87–106. [95] A.S. Markus and E.I. Sigal, The multiplicity of the characteristic number of an analytic operator function (Russian). Mat. Issled. 5 (1970), 129–147.
Bibliography
301
[96] J. Mawhin, Equivalence theorems for nonlinear operators equations and coincidence degree theory for some mappings in locally convex topological vector spaces. J. Diff. Equations 12 (1972), 610–636. [97] B. Mitiagin, Linearization of holomorphic operator functions, I. Integral Equations and Operator Theory 1 (1978), 114–131. [98] B. Mitiagin, Linearization of holomorphic operator functions, II. Integral Equations and Operator Theory 1 (1978), 226–249. [99] C. Mora-Corral, On the Uniqueness of the Algebraic Multiplicity. J. London Math. Soc. 69 (2004), 231–242. [100] C. Mora-Corral, Axiomatizing the algebraic multiplicity. In The first 60 years of Nonlinear Analysis of Jean Mawhin, (M. Delgado, A. Su´ arez, J. L´opezG´ omez and R. Ortega, editors), pp. 175–187, World Scientific, Singapore 2004. [101] C. Mora-Corral, Algebraic Multiplicities and Bifurcation Theory. Ph.D. Thesis, Universidad Complutense de Madrid, Madrid 2004. [102] J.D. Murray, Mathematical Biology. Biomathematics Texts, Springer, Berlin 1989. [103] A. Okubo, Diffusion and Ecological Problems: Mathematical Models. Springer, New York 1980. [104] J. Pejsachowicz and P.J. Rabier, A substitute for the Sard–Smale theorem in the C 1 case. J. d’Anal. Math. 76 (1998), 265–288. [105] J. Pejsachowicz and P.J. Rabier, Degree theory for C 1 Fredholm mappings of index 0. J. d’Anal. Math. 76 (1998), 289–319. [106] V.V. Peller, Hankel operators and their applications. Springer Monographs in Mathematics, Springer, New York 2003. [107] W.V. Petryshyn, Generalized topological degree and semilinear equations. Cambridge Tracts in Mathematics 117, Cambridge University Press, Cambridge 1995. [108] H. Poincar´e, Les m´ethodes nouvelles de la m´ecanique c´eleste. Tomes I–III. Dover, New York 1957. [109] C.R. Putnam, Commutation properties of Hilbert space operators and related topics. Ergebnisse der Mathematik und ihrer Grenzgebiete 36, Springer, Berlin 1967. [110] P.J. Rabier, Generalized Jordan chains and two bifurcation theorems of Krasnoselskii. Nonl. Anal. TMA 13 (1989), 903–934. [111] P.H. Rabinowitz, Some global results for nonlinear eigenvalue problems. J. Funct. Anal. 7 (1971), 487–513. [112] P.H. Rabinowitz, Some aspects of nonlinear eigenvalue problems, Rocky Mountain J. of Maths. 3 (1973), 161–202. [113] A.G. Ramm, Singularities of the inverses of Fredholm operators. Proc. Roy. Soc. Edinburgh 102A (1986), 117–121.
302
Bibliography
[114] L. Rodman, An introduction to operator polynomials. Operator Theory: Advances and Applications 38, Birkh¨ auser, Basel 1989. [115] A. Sard, The measure of the critical values of differentiable maps. Bull. Amer. Math. Soc. 48 (1942), 883–890. [116] P. Sarreither, Transformationseigenschaften endlicher Ketten und allgemeine Verzweigungsaussagen. Math. Scand. 35 (1974), 115–128. [117] P. Sarreither, Zur algebraischen Vielfachheit eines Produktes von Operatorscharen. Math. Scand. 41 (1977), 185–192. [118] J. Schauder, Der Fixpunktsatz in Funktionalr¨ aumen. Studia Math. 2 (1930), 171–180. [119] E.I. Sigal, The multiplicity of a characteristic value of a product of operatorfunctions (Russian). Mat. Issled. 5 (1970), 118–127. [120] S. Smale, An infinite dimensional version of Sard’s theorem. Amer. J. Math. 87 (1965), 861–866. [121] H.J.S. Smith, On systems of linear indeterminate equations and congruences. Phil. Trans. Roy. Soc. London 151 (1861), 293–326. [122] A.E. Taylor, Introduction to functional analysis. John Wiley & Sons, Chapman & Hall, London 1958. [123] G.Ph.A. Thijsse, On solutions of elliptic differential equations with operator coefficients. Integral Equations and Operator Theory 1 (1978), 567–579. [124] V.P. Trofimov, The root subspaces of operators that depend analytically on a parameter (Russian). Mat. Issled. 3 (1968), 117–125. [125] A.N. Tychonoff, Ein Fixpunktsatz. Math. Ann. 111 (1935), 767–776. [126] G.T. Whyburn, Topological Analysis. Princeton University Press, Princeton 1958. [127] J.T. Wloka, B. Rowley and B. Lawruk, Boundary Value Problems for Elliptic Systems. Cambridge University Press, Cambridge 1995. [128] R. Wolf, Zur Stabilit¨ at der algebraischen Vielfachheit von Eigenwerten von holomorphen Fredholm-Operatorfunktionen. Applicable Anal. 9 (1979), 165– 177. [129] B.F. Wyman, M.K. Sain, G. Conte and A.M. Perdon, Poles and Zeros of Matrices of Rational Functions. Linear Algebra Appl. 157 (1991), 113–139. [130] K. Yosida, Functional analysis. 2nd edition, Die Grundlehren der mathematischen Wissenschaften 123, Springer, New York 1968.
Notation

General

N    The set of natural numbers, including zero
N*    N \ {0}
Z, R, C    The sets of integer, real and complex numbers, respectively
K    Any of the fields R or C
δ_{ij}    Kronecker's delta: δ_{ij} = 1 if i = j, and δ_{ij} = 0 if i ≠ j
C^r(Ω, V)    The space of functions of class C^r from Ω to V
C^r(Ω)    C^r(Ω, K)
C(Ω, V)    C^0(Ω, V)
C(Ω)    C^0(Ω, K)
C^ω(Ω, V)    The space of analytic functions from Ω to V
H(Ω, V)    The space of the holomorphic functions from Ω to V
H(Ω)    H(Ω, C)
o(λ − λ0)^p    Any function f defined on a perforated neighborhood of λ0 such that lim_{λ→λ0} (λ − λ0)^{−p} f(λ) = 0
|a|    Absolute value of a ∈ R, or modulus of a ∈ C
Re e    Real part of (the complex number or the vector) e
Im e    Imaginary part of (the complex number or the vector) e
λ̄    Complex conjugate of λ ∈ C
ē    Complex conjugate of the vector e
Card A    Number of elements of the set A, if A is finite, and ∞ if A is infinite
\binom{a}{b}    Binomial coefficient
ord_{λ0} f    Order of λ0 as a zero of f
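As a simple illustration of the last two entries: if f(λ) = (λ − λ0)^3, then ord_{λ0} f = 3 and f = o(λ − λ0)^2, since
lim_{λ→λ0} (λ − λ0)^{−2} f(λ) = lim_{λ→λ0} (λ − λ0) = 0,
whereas f is not o(λ − λ0)^3, because the corresponding limit equals 1.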
Operator calculus

‖u‖    Norm of u ∈ U
‖T‖    Operator norm of T ∈ L(U, V)
ρ(A)    Resolvent set of the operator A; that is, C \ σ(A)
R(z; A)    Resolvent operator of A at z ∈ ρ(A); that is, (zI − A)^{−1}
P(A)    Action of the polynomial P over the matrix A
f(A)    Action of the holomorphic function f over the matrix A
spr A    Spectral radius of the matrix A
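A minimal worked example of this calculus, recalled here only for orientation: for the diagonal matrix A = diag{a1, . . . , aN} ∈ M_N(C) one has σ(A) = {a1, . . . , aN}, and for every z ∈ ρ(A),
R(z; A) = (zI_N − A)^{−1} = diag{(z − a1)^{−1}, . . . , (z − aN)^{−1}},
so that spr A = max_{1≤i≤N} |a_i|, in agreement with the Gelfand formula spr A = lim_{n→∞} ‖A^n‖^{1/n}.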
Vector spaces

U, V, W    Real or complex Banach spaces
U0, U1    Closed vector subspaces of U
L(U, V)    Banach space of the linear and bounded operators from U to V
L(U)    L(U, U)
Iso(U, V)    Set of linear isomorphisms from U to V
Iso(U)    Iso(U, U)
Fred(U, V)    Subset of L(U, V) formed by the Fredholm operators
Fred(U)    Fred(U, U)
Fred_0(U, V)    Set of operators of Fred(U, V) with index zero
Fred_0(U)    Fred_0(U, U)
K(U, V)    Set of compact operators from U to V
K(U)    K(U, U)
GL_c(U)    Set of T ∈ Iso(U) such that T − I_U ∈ K(U)
det A    Determinant of the square matrix A
tr A    Trace of the finite-rank operator A ∈ L(U)
σ(A)    Spectrum of the matrix A or of the linear operator A
Eig(A)    Set of eigenvalues of the matrix A or of the linear operator A
m_a(λ)    Algebraic multiplicity of λ as an eigenvalue of A
m_g(λ)    Geometric multiplicity of λ as an eigenvalue of A
u    A vector in U
M_{α×β}(K)    Space of matrices with coefficients in K with α rows and β columns
M_N(K)    M_{N×N}(K)
I_U    Identity operator in U
I    Identity operator, when the space is clear from the context
I_N    Identity operator in L(K^N), or identity matrix in M_N(K)
N[T]    Kernel (null-space) of T ∈ L(U, V)
R[T]    Range (image) of T ∈ L(U, V)
rank T    dim R[T]
span[u1, . . . , un]    Set of linear combinations of u1, . . . , un ∈ U
dim U0    Dimension of the vector space U0
codim U1    Codimension in U of the subspace U1 ⊂ U, that is, dim(U/U1)
ind T    Fredholm index of the operator T ∈ L(U, V), that is, dim N[T] − codim R[T]
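A standard example of the last entry, not specific to this text: on U = ℓ^2, the right shift S(x1, x2, . . .) = (0, x1, x2, . . .) satisfies N[S] = {0} and codim R[S] = 1, so S ∈ Fred(ℓ^2) with ind S = 0 − 1 = −1, while the left shift T(x1, x2, . . .) = (x2, x3, . . .) has dim N[T] = 1 and R[T] = ℓ^2, hence ind T = 1; in particular, neither operator belongs to Fred_0(ℓ^2).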
diag{a1, . . . , an}    Diagonal matrix with diagonal elements a1, . . . , an in this order, or block-diagonal matrix or operator
trng    Block-triangular matrix or operator defined in (4.34)
trng{L0, . . . , Ls}    Block-triangular matrix or operator (see (7.1))
L[s]    Block-decomposition of a matrix or of an operator
[a1, . . . , ak], col[a1, . . . , an]    Matrix or operator defined in (4.33)
⊕    Direct sum of vector subspaces
×    Direct product (or Cartesian product) of vector spaces
u^T    Transposed vector of u
A^T    Transposed matrix of A
P_A    Characteristic polynomial of A
⟨u, v⟩    Skew-linear product of the vectors u, v ∈ C^N
B_r(u)    Open ball of center u ∈ U and radius r > 0
B_r    B_r(0)
D_R(z0)    Open disk in C of radius R > 0 and center z0 ∈ C
D_R    D_R(0)
C_R(z0)    Positively oriented boundary of D_R(z0)
C_R    C_R(0)
D_γ    The inner domain of the Cauchy contour γ
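These disks and contours enter, for instance, the Dunford integral formula referenced in the Index; recalled here in its matrix form for orientation: if A ∈ M_N(C) and R > spr A, then every f ∈ H(D_{R'}) with R' > R satisfies
f(A) = (2πi)^{−1} ∫_{C_R} f(z) R(z; A) dz,
since C_R is a positively oriented contour surrounding σ(A).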
One-parameter families of linear operators

Ω    An open subset of K
λ0    A point of Ω
Ω̄    Closure of the set Ω
∂Ω    Boundary of the set Ω
P[L(U, V)]    Set of polynomial operator families from U to V
P[L(U)]    P[L(U, U)]
rank u    Rank of u ∈ U as an eigenvector of a family L at λ0
Σ(L)    Spectrum (set of singular values) of a family L
Eig(L)    Subset of Σ(L) formed by all eigenvalues
Alg_ν(L)    Set of ν-algebraic values
Alg(L)    Union of all sets Alg_ν(L), for all ν ≥ 0
c(L, λ0, j)    jth coefficient of the Taylor or Laurent development of L at λ0
L_j    c(L, λ0, j)
(L^{−1})_j    c(L^{−1}, λ0, j)
M{j}    jth derived family, according to Definition 5.1.1
χ[L; λ0]    Multiplicity of L at λ0, according to Definition 4.3.3
µ[L; λ0]    Multiplicity of L at λ0, according to Definition 5.1.3
κ[L; λ0]    Multiplicity of L at λ0, according to Definition 7.2.5
m[L; λ0]    Multiplicity of L at λ0, when no special definition is considered
κ_j[L; λ0]    jth partial κ-multiplicity of L at λ0; see Definition 7.2.5
µ_j[L; λ0]    jth partial µ-multiplicity of L at λ0; see Definition 5.1.3
χ_j[L; λ0]    µ_j[L; λ0]
α[L; λ0]    Ascent of L at λ0
m[L; Ω]    Multiplicity of L in the open set Ω
R_P    The family defined by (5.1)
C_P    The family defined by (5.33)
L_A    The family defined by (8.1)
S^r_{λ0}(U, V)    The set of families L of class C^r from a neighborhood of λ0 to L(U, V) such that L(λ0) ∈ Fred_0(U, V)
S^r_{λ0}(U)    S^r_{λ0}(U, U)
S^r_{λ0}    The class of families L for which there exist Banach spaces U and V with L ∈ S^r_{λ0}(U, V)
A^∞_{λ0}(U, V)    The set of families L ∈ S^∞_{λ0}(U, V) such that λ0 ∈ Alg(L)
A^∞_{λ0}(U)    A^∞_{λ0}(U, U)
M_{λ0}(U, V)    The set of families L from a neighborhood of λ0 to L(U, V) such that the multiplicity of L at λ0 is defined
M_{λ0}(U)    M_{λ0}(U, U)
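A concrete example may help to fix these conventions; it is added here for illustration and is consistent with the coincidence theorems listed in the Index: for the polynomial family L ∈ P[L(C^2)] given by L(λ) = diag{λ − λ0, (λ − λ0)^2}, the point λ0 is an algebraic eigenvalue with partial multiplicities 1 and 2, and
χ[L; λ0] = µ[L; λ0] = κ[L; λ0] = 3,
which is also the order of λ0 as a zero of det L(λ) = (λ − λ0)^3.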
Nonlinear operators

S    Set of non-trivial solutions of the equation F(λ, u) = 0
Deg(f, D)    Topological degree of F in D
Ind(f, u0)    Leray–Schauder index of f at the isolated zero u0
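For orientation, the classical Leray–Schauder formula referenced in the Index may be recalled in its standard form, which is not specific to this text: if f = I_U − K with K ∈ K(U), u0 is an isolated zero of f, f is differentiable at u0, and I_U − K′(u0) ∈ Iso(U), then Ind(f, u0) = (−1)^m, where m is the sum of the algebraic multiplicities of the real eigenvalues of K′(u0) greater than 1.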
Index

additivity property, 280 adjoint, 31 algebraic eigenvalue concept, 89 order, 89 algebraic multiplicity classic, 6, 212 of an operator family χ, 91 κ, 160 µ, 111 non-Fredholm, 269 partial, 111, 160 dual, 133 algebraic value, 89 annihilator polynomial, 9 ascent of an eigenvalue of an operator, 8, 213 of an operator family, 110 dual, 133 Axiom A1, 140, 144, 145, 147 A2, 140, 144, 145, 147 A3, 144 Banach lemma, 62 bifurcation point, 275 value, 275 canonical form of Jordan, 23 Smith, 171 canonical set of Jordan chains, 159, 180 Cauchy contour, 49 domain, 49 integral formula, 49 product, 54
Cayley–Hamilton theorem, 12 chain Jordan, 155 canonical set, 159 indefinitely continued, 158 left, 202 of eigenvectors, 23 characteristic equation, 6 characterization of nonlinear eigenvalues, 277 compact operator, 76 Courant–Fisher theorem, 35 Crandall–Rabinowitz transversality condition, 106 degree, 221, 279 additivity property, 280 excision property, 280 homotopy invariance, 280 Leray–Schauder formula, 282 normalization property, 280 solution property, 280 derived family, 109 sequence, 109 dual, 133 diagonal, 5 block, 20 matrix, 5, 7, 20, 58 Dunford integral formula, 49 theorem, 50 eigenspace, 5 generalized, 7, 8 ascent, 8, 20, 23, 213 eigenvalue algebraic, 89 ascent
of a matrix, 8 of an operator, 213 of an operator family, 110 isolated, 210 nonlinear, 105, 277 of a matrix, 5 of an operator family, 78 semisimple, 7 simple, 7, 99 perturbation, 32, 100 transversal, 87 eigenvector of a matrix, 5 generalized, 8 of an operator family, 156 rank, 157 equivalence of norms, 38 of operator families, 194 theorem, 196 excision property, 280 exponential matrix, 53 finite Laurent development, 227 rank operator, 76 fixed point theorem, 283 Fredholm index, 76 operator, 76 Hermitian matrix, 31 homotopy invariance, 280 implicit function theorem, 32, 100 index of a Fredholm operator, 76 topological, 221, 282 interpolation polynomial, 13 isomorphism, 76 Jordan block, 20 real, 26 canonical form, 23 real, 26 chain, 23, 155 canonical set, 159 indefinitely continued, 158
left, 202 theorem, 8 generalized, 254 kernel, 5, 76 Kronecker's delta, 14 Laurent coefficients, 65, 227, 229 finite development, 227 series, 65, 229 theorem, 65, 228 Leray–Schauder formula, 282 linearization of a matrix polynomial, 251 logarithmic residue theorem, 149, 235, 237 matrix adjoint, 31 diagonal, 5, 7, 20, 58 diagonalizable, 7, 32 exponential, 53 Hermitian, 31, 60 nilpotent, 23, 58 of the change of basis, 4, 32 orthogonal, 32 positive, 35 positive definite, 35 similar, 4, 33 trace, 33 matrix polynomial concept, 251 linearization, 251 monic, 251 multiplicity geometric, 6 invariance by homotopy, 220 of a family, 91, 101, 111 dual, 133 of a matrix, 6 of a root, 6 of an operator, 212
Neumann series, 62 nilpotent, 23, 58 nonlinear eigenvalue, 105, 277 norm, 38, 76 equivalence, 38, 39 subordinated, 40, 76 normalization property, 140 operator family, 78 ascent of an eigenvalue, 8, 213 compact, 76 finite rank, 76 Fredholm, 76 identity, 76 isomorphism, 76 projection, 76 rank, 76 trace, 230 operator family algebraic eigenvalue, 89 algebraic multiplicity χ, 91 κ, 160 µ, 111 non-Fredholm, 269 ascent, 110 dual, 133 concept, 78 derived sequence, 109 eigenvalue, 78 eigenvector, 156 equivalence, 194 polynomial, 78 product, 78 rank map, 157 singular value, 78 spectrum, 78 transversal eigenvalue, 87 transversal value, 87 transversality order, 87 transversalized, 90 transversalizing, 90 order of a function at a zero, 77 of a perturbed eigenvalue, 101 of an algebraic eigenvalue, 89 of transversality, 87
orthogonal matrix, 32 subspace, 34 vector, 32 orthonormal basis, 34 partial multiplicities, 111, 160 dual, 133 permutation, 64 phase space, 251 polynomial, 5 annihilator, 9 characteristic, 5, 6 interpolation, 13 matrix, 251 product formula, 123, 140, 144, 145, 147, 203, 243 projection operator, 76 sequence associated with a derived family, 109 similarity, 141 spectral, 69, 239 range, 5, 76 rank map of an operator family, 157 of an eigenvector, 157 of an operator, 76 finite, 76 Rayleigh quotient, 34 regularization theorem, 281 removable singularities theorem, 107 residue, 69 resolvent holomorphy, 47 operator, 45 pole, 66 set, 45 semisimple eigenvalue, 7 similar matrix, 4, 33 simple eigenvalue, 7, 99 singular value, 78 skew-linear product, 31, 32, 34 Smith form, 171
characterization of the existence, 181, 186 solution property, 280 spectral projection, 69, 239 radius, 59 structure, 211 theorem for matrix polynomials, 254 spectrum of a matrix concept, 6 continuous dependence, 35 of an operator family, 78 stability theorem, 218 theorem of fixed point of Brouwer–Schauder–Tychonoff, 283 fundamental of algebra, 255 of Cayley–Hamilton, 12 of characterization of isolated eigenvalues, 210 of characterization of nonlinear eigenvalues, 277 of characterization of the existence of the Smith form, 181, 186 of characterization of the local equivalence, 196 of coincidence of χ and κ, 178 of coincidence of χ and µ, 118 of Courant–Fisher, 35 of Dunford, 50 of existence and uniqueness of the topological degree, 279 of existence of finite Laurent developments for L^{−1}, 226 of holomorphy of the resolvent, 47, 228 of interpolation, 13 of invariance, 161 of invariance by homotopy of the complex multiplicity, 220 of Jordan, 8 generalized, 254 of Laurent, 65, 228 of regularization of Sard–Smale, 281 of removable singularities, 107
of spectral structure, 211 of stability, 218 of the logarithmic residue, 149, 235, 237 of transversalization, 90 of uniqueness, 140, 144, 145, 147 product formula, 123 spectral for matrix polynomials, 254 spectral mapping, 52 topological degree, 221, 279 additivity property, 280 excision property, 280 homotopy invariance, 280 Leray–Schauder formula, 282 normalization property, 280 solution property, 280 index, 221, 282 trace of a matrix, 33 of an operator, 230 transpose, 31 transversal eigenvalue, 87 characterization, 87 value, 87 transversality condition of Crandall–Rabinowitz, 106 transversalization theorem, 90 uniqueness theorem, 140, 144, 145, 147