DETERMINISTIC OBSERVATION THEORY AND APPLICATIONS
This book presents a general theory as well as a constructive methodology to solve "observation problems," that is, reconstructing the full information about a dynamical process on the basis of partial observed data. A general methodology to control processes on the basis of the observations is also developed. Illustrative but also practical applications in the chemical and petroleum industries are shown. This book is intended for use by scientists in the areas of automatic control, mathematics, chemical engineering, and physics. J-P. Gauthier is Professor of Mathematics at the Université de Bourgogne, Dijon, France. I. Kupka is Professor of Mathematics at the Université de Paris VI, France.
DETERMINISTIC OBSERVATION THEORY AND APPLICATIONS
JEAN-PAUL GAUTHIER
IVAN KUPKA
The Pitt Building, Trumpington Street, Cambridge, United Kingdom
The Edinburgh Building, Cambridge CB2 2RU, UK
40 West 20th Street, New York, NY 10011-4211, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
Ruiz de Alarcón 13, 28014 Madrid, Spain
Dock House, The Waterfront, Cape Town 8001, South Africa
http://www.cambridge.org
© Cambridge University Press 2004
First published in printed format 2001
ISBN 0-511-02878-4 eBook (Adobe Reader)
ISBN 0-521-80593-7 hardback
We dedicate this book to our wives, Irène and Prudence, respectively
The purpose of this book is to present a complete theory of observability and observation of finite-dimensional nonlinear systems in the deterministic setting. The theory is used to prove very general results in dynamic output stabilization of nonlinear systems. Two concrete real-world applications are briefly described.
Dijon, September 9, 2000
Contents
Preface

1 Introduction
  1. Systems under Consideration
  2. What Is Observability?
  3. Summary of the Book
  4. The New Observability Theory Versus the Old Ones
  5. A Word about Prerequisites
  6. Comments

Part I. Observability and Observers

2 Observability Concepts
  1. Infinitesimal and Uniform Infinitesimal Observability
  2. The Canonical Flag of Distributions
  3. The Phase-Variable Representation
  4. Differential Observability and Strong Differential Observability
  5. The Trivial Foliation
  6. Appendix: Weak Controllability

3 The Case d_y ≤ d_u
  1. Relation Between Observability and Infinitesimal Observability
  2. Normal Form for a Uniform Canonical Flag
  3. Characterization of Uniform Infinitesimal Observability
  4. Complements
  5. Proof of Theorem 3.2

4 The Case d_y > d_u
  1. Definitions and Notations
  2. Statement of Our Differential Observability Results
  3. Proof of the Observability Theorems
  4. Equivalence between Observability and Observability for Smooth Inputs
  5. The Approximation Theorem
  6. Complements
  7. Appendix

5 Singular State-Output Mappings
  1. Assumptions and Definitions
  2. The Ascending Chain Property
  3. The Key Lemma
  4. The ACP(N) in the Controlled Case
  5. Globalization
  6. The Controllable Case

6 Observers: The High-Gain Construction
  1. Definition of Observer Systems and Comments
  2. The High-Gain Construction
  3. Appendix

Part II. Dynamic Output Stabilization and Applications

7 Dynamic Output Stabilization
  1. The Case of a Uniform Canonical Flag
  2. The General Case of a Phase-Variable Representation
  3. Complements

8 Applications
  1. Binary Distillation Columns
  2. Polymerization Reactors

Appendix
Solutions to Part I Exercises
Bibliography
Index of Main Notations
Index
Preface
A long time ago, while working on paper [19], we felt that there was a need to write a book on the subject of observability. Now, after many vicissitudes, this is a done thing. During the conception of the book, the very novel point of view we had developed in our papers did not change. We discovered that it was really the right one, and was extremely efficient. In fact, based on it, we could build a totally new, complete, and general theory, and a new methodology for the problems related to observability, such as "output stabilization." At the same time, we applied our methodology to practical problems, and we realized that our methods were extremely efficient in practice.

At the very beginning, we intended to write a "survey" on the problems of observability, including nonlinear filtering. As the work progressed, we changed our minds. First, from the practical point of view, we faced a daunting task: a book of that type, had it ever seen the light of day, would have been a monster. But, more important, our theory would have been drowned in a mass of disparate, disconnected facts. Hence, this book presents only the general theory we have discovered, with a selection of real-life applications to convince the reader of the practical capability of the method. We strictly avoided the type of academic examples which are rife in many control theory publications.

Several principles guided us in the elaboration of this book:

• First, the book should be short. Including some developments in the stochastic context was a definite possibility, but this would have required the use of deep mathematical tools for meager returns. Enough mathematical theories are already used in the book.

• Second, the book is an excellent opportunity to convince people with a mathematical bent that "observation theory" is not out of place in mathematics. For that reason, the style of this book is a mathematical one. Also, we want to show that applied problems in the real world
can be dealt with by using beautiful mathematics. On the other hand, mathematics is not the main object of this book, but an excellent tool to achieve our goals.

• Third, we want to convince applied people (e.g., control engineers, chemical engineers) that our methodology is efficient. Therefore, they should strive to understand it and, above all, to use it. For this purpose, we want to point out the following:

  – We strove to make all the necessary mathematical tools accessible to uninitiated readers.

  – Bypassing the details of the proofs does not impair the understanding of the statements of the theoretical results and the constructive parts of the theory (many of the proofs are not obvious).

  – Chapter 8, containing the applications, is friendlier to the nonmathematical reader, albeit rigorous.

The development of the practical applications in this book, and of others not mentioned in it, was possible thanks to the cooperation with the French branch of the Shell company, its research center at "Grand-Couronne." One of the applications was actually implemented at the refinery of Petit-Couronne (France). The first author particularly wants to express his deep gratitude to the whole process control group there, more especially to Denis Bossane, François Deza, Marjoleine Van Doothing, and Frederic Viel, for their help, support, and for the good time we spent together. A very special and friendly remembrance goes to Daniel Rakotopara, the head of the group at that time, who so unfortunately died recently. J-P. Gauthier expresses his warmest thanks to Jean-Jacques Dell'amico (head of the research center) and Pierre Sommelet (chief of the group), who not only took care of financial needs, but also are great friends.

Chapters 3, 4, 5, and 7 of this book contain, among others, the results of the papers [18], [19], [32]. For their kind permission to reproduce parts of papers [18], [19], and [32], we thank, respectively, the Society for Industrial and Applied Mathematics (Observability and observers for nonlinear systems, SIAM Journal on Control, Vol. 32, No. 4, pp. 975–994, 1994), Springer-Verlag (Observability for systems with more outputs than inputs, Mathematische Zeitschrift 223, pp. 47–78, 1996), and Kluwer Academic Publishers (with P. Jouan, Finite singularities of nonlinear systems. Output stabilization, observability, and observers. Journal of Dynamical and Control Systems 2(2), pp. 255–288, 1996).

Mexico City, September, 2000
1 Introduction
In this book, we present a new, general, and complete theory of observability and observation, deriving from our papers [18, 19, 32]. This theory is entirely in the deterministic setting. Let us mention here that there are several papers preceding these three that exploit the same basic ideas with weaker results. See [16, 17], in collaboration with H. Hammouri. A list of all main notations is given in an index, page 221.
1. Systems under Consideration

We are concerned with general nonlinear systems of the form:

(Σ)    dx/dt = f(x, u),
       y = h(x, u),                                   (1)

typically denoted by Σ, where x, the state, belongs to X, an n-dimensional, connected, Hausdorff paracompact differentiable manifold, y, the output, takes values in R^{d_y}, and u, the control variable, takes values in U ⊂ R^{d_u}. For the sake of simplicity, we take U = R^{d_u} or U = I^{d_u}, where I ⊂ R is a closed interval. But typically U could be any closed submanifold of R^{d_u} with a boundary, a nonempty interior, and possibly with corners. Unless explicitly stated, X has no boundary.

The set of systems will be denoted by S = F × H, where F is the set of u-parametrized vector fields f, and H is the set of functions h. In general, except when explicitly stated, f and h are C^∞. However, depending on the context, we will have to consider also analytic systems (C^ω), or C^r systems, for some r ∈ N. Thus, if necessary, the required degree of differentiability will be stated, but in most cases the notations will remain S, F, H.

The simplest case is when U is empty, the so-called "uncontrolled case." In that situation, we will be able to prove more results than in the general case.
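To fix ideas, here is a minimal numerical sketch, not taken from the book: a hypothetical system of the form (1) with a two-dimensional state, one control, and one output, integrated with a crude Euler scheme so as to produce the output trajectory that the observation problem asks us to invert. The dynamics f and output h below are our own illustration.

    import numpy as np

    # Hypothetical system (our illustration, not an example from the book):
    #   dx/dt = f(x, u),  y = h(x, u),  with x in R^2, u and y scalar.
    def f(x, u):
        return np.array([x[1] + u * x[0], -x[0]])

    def h(x, u):
        return x[0]

    def output_trajectory(x0, u_func, T=5.0, dt=1e-3):
        """Euler-integrate the state and record the observed output y(t)."""
        x, ys = np.array(x0, dtype=float), []
        for k in range(int(T / dt)):
            u = u_func(k * dt)
            ys.append(h(x, u))
            x = x + dt * f(x, u)
        return np.array(ys)

    # Two different initial states; the observation problem is to decide whether
    # (and how) their output trajectories differ for a given input.
    y_a = output_trajectory([1.0, 0.0], u_func=lambda t: np.sin(t))
    y_b = output_trajectory([0.0, 1.0], u_func=lambda t: np.sin(t))
    print(np.max(np.abs(y_a - y_b)))  # positive: this input separates the two states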
Usually, in practical situations, the output function h of the system does not depend on u. Unfortunately, from the theoretical point of view, this assumption is very awkward and leads to clumsy statements. For that reason, we will generally assume that h depends on the control u.
2. What Is Observability?

The preliminary definition we give here is the oldest one; it comes from the basic theory of linear control systems. Roughly speaking, "observability" stands for the possibility of reconstructing the full trajectory from the observed data, that is, from the output trajectory in the uncontrolled case, or from the couple (output trajectory, control trajectory) in the controlled case. In other words, observability means that the mapping

    initial state → output trajectory

is injective, for all fixed control functions. More precise definitions will be given later in the book.
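In the linear special case, this preliminary definition reduces to the classical Kalman rank criterion: for ẋ = Ax + Bu, y = Cx, the initial state → output trajectory map is injective exactly when the observability matrix built from C, CA, ..., CA^{n−1} has rank n. The following small numpy check of this standard fact uses matrices chosen purely for illustration.

    import numpy as np

    # Classical linear test (standard fact; matrices are our own illustration):
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    C = np.array([[1.0, 0.0]])          # y = x1 observed

    # Observability matrix O = [C; CA; ...; CA^{n-1}]
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    print(np.linalg.matrix_rank(O) == n)   # True: the pair (C, A) is observable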
3. Summary of the Book

1. When the number d_y of observations is smaller than or equal to the number d_u of controls, then the relevant observability property is very rigid and is not stable under small perturbations, for germs of systems. Because of that rigidity, this observability property can be given a simple geometric characterization. This is the content of the paper [18] and the purpose of Chapter 3.

2. If, on the contrary, d_y > d_u, a remarkable phenomenon happens: The observability becomes generic, in a very strong sense, and for very general classes of control functions. In Chapter 4, we state and prove a cornucopia of genericity results about observability as we define it. The most important of these results are contained in paper [19]. Some of these results present real technical difficulties.

3. The singular case: in the preceding two cases, the initial state → output trajectory mapping is regular. What happens if it becomes singular? This problem is too complex. In classical singularity theory, there is a useful and manageable concept of mapping with singularities: that of a "finite mapping." It is interesting that, in the uncontrolled analytic case, this concept can be extended to our initial state → output trajectory mappings, according to a very original idea of P. Jouan. This idea leads to the very interesting results of paper [32]. The controlled case is very different: If the system is singular, then it is not controllable. In this case, we also have several results,
giving a complete solution of the observation problem. These developments form the content of Chapter 5.

4. Observers: An observer is a device that performs the practical task of state reconstruction. In all cases mentioned above (1, 2, 3), an asymptotic observer can be constructed explicitly, under the guise of a differential equation that estimates the state of the system asymptotically. The estimation error has an arbitrarily large exponential decay. This is the so-called "high-gain construction." This construction is an adaptation to the nonlinear case of the "Luenberger" or of the "extended Kalman filter" method. The last one performs very well in practice. We will present these topics in Chapter 6.

5. Output stabilization: this study can be applied to output stabilization in the preceding cases 1, 2, and 3 above. One of our main results in [32] states that one can stabilize a system asymptotically via an asymptotic observer, using the output observations only, if one can stabilize it asymptotically using smooth state feedback. This result is "semi-global": One can do this on arbitrarily large compacta. Let us note that, in cases 1 and 2, the initial state → output trajectory mapping is always immersive. In that case, the stabilizing feedback can be arbitrary. But, if the initial state → output trajectory mapping is not immersive, then it has to belong to a certain special ring of functions. These results are developed in Chapter 7.

6. In the last chapter, Chapter 8, we give a summary description of two applications in the area of chemical engineering. These represent the fallout from our long cooperation with the Shell company. The first one, about distillation columns, is of practical interest because distillation columns really are generic objects in the petroleum and chemical industries. This application is a perfect illustration of the methods we are proposing for the problems of both observation and dynamic output stabilization. The second application deals with polymerization reactors, and it constitutes also a very interesting and pertinent illustration. Both applications are the subjects of the articles [64, 65].

The classical notions of observability are inadequate for our purposes. For reasons discussed in the next section, Chapter 2 is devoted to the introduction of new concepts of observability. We hope that our book will vindicate our iconoclastic gesture of discarding the old observability concepts.

4. The New Observability Theory Versus the Old Ones

As we have said, observability is the injectivity of the mapping initial state → output trajectory. However, the concept of injectivity per se is
very hard to handle mathematically because it is unstable. Hence, we have to introduce stronger concepts of observability, for example adding to the injectivity the condition of immersivity (infinitesimal injectivity), as in the classical theory of differentiable mappings.

In this book, we haven't discussed any of the other approaches to observability that have been proposed elsewhere, and we haven't referenced any of them. The reason for this is simple: We have no use for either the concepts or the results of these other approaches. In fact, we claim that our approach to observability theory, which is entirely new, is far superior to any of the approaches proposed so far. Since we cannot discuss all of them, let us focus on the most popular: the output injection method.

The output injection method is in the spirit of the feedback linearization method (popular for the control of nonlinear systems). As for feedback linearization, one tries to go back to the well-established theory of linear systems. First, one characterizes the systems that can be written as a linear system, plus a perturbation depending on the outputs only (in some coordinates). Second, for these systems only, one applies slight variations of the standard linear constructions of observer systems. This approach suffers from terminal defects.

A. It applies to an extremely small class of systems only. In precise mathematical terms, it means the following. In situation 2 above, where observability is generic, it applies to a class of systems of infinite codimension. In case 1, where the observability is nongeneric, it also applies to an infinite codimension subset of the set of observable systems.

B. Basically, the approach ignores the crucial distinction between the two cases: 1. d_y ≤ d_u, 2. d_y > d_u.

C. The approach does not take into account generic singularities, and it is essentially local in scope.

Of course, these defects have important practical consequences in terms of sensitivity. In particular, in case 2, where the observability property is stable, the method is unstable.

5. A Word about Prerequisites

In this book, we have tried to keep the mathematical prerequisites to a strict minimum. What we need are the following mathematical tools: transversality theory, stratification theory and subanalytic sets, a few facts from several complex variables theory, center manifold theory, and Lyapunov's direct and inverse theorems.
For the benefit of the reader, a summary of the results needed is provided in the Appendix. It is accessible to those with only a modest mathematical background.

6. Comments

6.1. Comment about the Dynamic Output Stabilization Problem

At several places in the book, we make the assumption that the state space X is just the Euclidean space R^n. If one wants only to estimate the state, this is not a reasonable assumption: the state space can be anything. However, for the dynamic output stabilization of systems that are state-feedback stabilizable, it is a reasonable assumption, because the basin of attraction of an asymptotically stable equilibrium point of a vector field is diffeomorphic to R^n (see [51]).

6.2. Historical Comments

6.2.1. About "Observability"

The observability notion was introduced first in the context of linear systems theory. In this context, the Luenberger observer and the Kalman filter were introduced, in the deterministic and stochastic settings, respectively. For linear systems, the observability notion is independent of the control function (either the initial state → output trajectory mapping is injective for all control functions, or it is not injective for each control function). This is no longer true for nonlinear systems. Moreover, as we show in this book, in the general case where d_y ≤ d_u, observability (for all inputs) is not at all a generic property. For these reasons (and certainly also just for tractability), several different, weaker notions of observability have been introduced, which are generic and which agree with the old observability notion in the special case of linear systems. In this setting, there is the pioneering work [24]. As we said, these notions are totally inadequate for our purposes, and we just forget about them.

6.2.2. About Universal Inputs

Let us say that a control function separates two states if the corresponding output trajectories, from these two initial states, do not coincide. For a nonlinear system, a universal input is a control function that separates all the couples of states that can be separated by some control. We want to mention a pioneering work by H. J. Sussmann [47], in which it is proved, roughly speaking, that "universal inputs do exist." For this purpose, the author made use of the properties of subanalytic sets, in a spirit very similar to the one in this book.
6.2.3. About the Applications

In Chapter 8, we present two applications from chemical engineering science. There are already several other applications of our theory in many fields, but we had to choose. The two applications we have chosen look rather convincing, because they are not "academic," and some refinements of the theory are really used. Moreover, these two applications, besides their illustrative character, are very important in practice and have been addressed by research workers in control theory, using other techniques, for many years. It is hard to give an exhaustive list of other studies (related to control and observation theory) on distillation columns and polymerization reactors. However, let us give a few references that are significant:

For distillation columns: [58], [61], [62].
For polymerization reactors: [56], [57], [60], [59].
Regarding distillation columns, it would be very interesting (and probably very difficult) to study the case of azeotropic distillations, which is not addressed in this volume. It seems that the theory collapses entirely in the azeotropic case.
Part I Observability and Observers
2 Observability Concepts
In this chapter, we will state and explain the various definitions of observability that will be used in this book (see Section 4 in Chapter 1).
1. Infinitesimal and Uniform Infinitesimal Observability

The space of control functions under consideration will just be the space L^∞[U] of all measurable bounded, U-valued functions u : [0, T_u[ → U, defined on semi-open intervals [0, T_u[ depending on u. The space of our output functions will be the space L[R^{d_y}] of all measurable functions y : [0, T_y[ → R^{d_y}, defined on the semi-open intervals [0, T_y[.

Usually, input and output functions are defined on closed intervals. However, this is irrelevant. The following considerations led us to work with semi-open intervals. For any input û ∈ L^∞[U] and any initial state x_0, the maximal solution of the Cauchy problem for positive times

dx̂/dt = f(x̂(t), û(t)),   x̂(0) = x_0,

is defined on a semi-open interval [0, e(û, x_0)[, where 0 < e(û, x_0) ≤ T_û. If e(û, x_0) < T_û, then e(û, x_0) is the positive escape time of x_0 for the time-dependent vector field f(., û(t)). It is well known that, for all û ∈ L^∞[U], the function x_0 → e(û, x_0) ∈ R̄*_+ is lower semi-continuous (R̄*_+ = {a | 0 < a ≤ ∞}).

Definition 1.1. The input-output mapping P_Σ of Σ is defined as follows:

P_Σ : L^∞[U] × X → L[R^{d_y}],   (û, x_0) → P_Σ(û, x_0),

where P_Σ(û, x_0) is the function ŷ : [0, e(û, x_0)[ → R^{d_y} defined by ŷ(t) = h(x̂(t), û(t)). The mapping P_{Σ,û} : X → L[R^{d_y}] is P_{Σ,û}(x_0) = P_Σ(û, x_0).
Definition 1.2. A system Σ is called observable if for any triple (û, x_1, x_2) ∈ L^∞[U] × X × X, x_1 ≠ x_2, the set of all t ∈ [0, min(e(û, x_1), e(û, x_2))[ such that P_Σ(û, x_1)(t) ≠ P_Σ(û, x_2)(t) has positive measure.

[Footnote 1: In nonlinear control theory, the notion of observability defined here is usually referred to as "uniform observability." Let us stress that it is just the old basic observability notion used for linear systems.]

Now, we define the "first variation" of Σ, or the "lift of Σ on TX." The mapping f : X × U → TX induces the partial tangent mapping T_X f : TX × U → TTX (tangent bundle of TX). Then, if ω denotes the canonical involution of TTX (see [1]), ω ∘ T_X f defines a parametrized vector field on TX, also denoted by T_X f. Similarly, the function h : X × U → R^{d_y} has a differential d_X h : TX × U → R^{d_y}. The first variation of Σ is the input-output system:

(TΣ)   dξ/dt = T_X f(ξ, u) = T_X f_u(ξ),
       η = d_X h(ξ, u) = d_X h_u(ξ).                  (2)

Its input-output mapping is denoted by dP_Σ, and the trajectories of (1) and (2) are related as follows:

If ξ : [0, T_ξ[ → TX is a trajectory of (2) associated with the input û, the projection π(ξ) : [0, T_ξ[ → X is a trajectory of Σ associated with the same input. Conversely, if φ_t(x_0, û) : [0, e(û, x_0)[ → X is the trajectory of Σ starting from x_0 for the input û, the map x → φ_τ(x, û) is a diffeomorphism from a neighborhood of x_0 onto its image, for all τ ∈ [0, e(û, x_0)[. Let T_X φ_τ : T_{x_0}X → T_z X, z = φ_τ(x_0, û), be its tangent mapping. Then, for all ξ_0 ∈ T_{x_0}X:

e_{TΣ}(û, ξ_0) = e_Σ(û, π(ξ_0)) = e_Σ(û, x_0),

and, for almost all τ ∈ [0, e(û, x_0)[:

dP_Σ(û, ξ_0)(τ) = d_X h(T_X φ_τ(û, ξ_0), û(τ)) = d_X P^τ_{Σ,û}(ξ_0).        (3)

The right-hand side of these equalities (3) is the differential of the function P^τ_{Σ,û} : V^τ → R^{d_y}, where V^τ is the open set V^τ = {x ∈ X | 0 < τ < e(û, x)}, and P^τ_{Σ,û}(x) = P_Σ(û, x)(τ).

For any a > 0, let L^∞_loc([0, a[; R^{d_y}) denote the space of measurable functions v : [0, a[ → R^{d_y} which are locally in L^∞. For all û ∈ L^∞(U), x_0 ∈ X, the restriction of dP_Σ to {û} × T_{x_0}X defines a linear mapping:

dP_{û,x_0} : T_{x_0}X → L^∞_loc([0, e(û, x_0)[; R^{d_y}),
dP_{û,x_0}(ξ_0)(t) = dP_Σ(û, ξ_0)(t).                  (4)
Definition 1.3. The system Σ is called infinitesimally observable at (û, x) ∈ L^∞[U] × X if the linear mapping dP_{û,x} is injective. It is called infinitesimally observable at û ∈ L^∞[U] if it is infinitesimally observable at all pairs (û, x), x ∈ X, and called uniformly infinitesimally observable if it is infinitesimally observable at all û ∈ L^∞[U].

Remark 1.1. In view of relation (3) above, the fact that the system Σ is infinitesimally observable at û ∈ L^∞[U] means that the mapping P_{Σ,û} : X → L[R^{d_y}] is an immersion of X into L[R^{d_y}]. (As was stated, P_{Σ,û} is differentiable in the following sense: we know that e(û, x) ≥ e(û, x_0) − ε in a neighborhood U_ε of x_0. Then P_{Σ,û} is differentiable in the classical sense from U_ε into L^∞([0, e(û, x_0) − ε]; R^{d_y}). P_{Σ,û} is an immersion in the sense that these differential maps are injective.)

This notion of uniform infinitesimal observability is the one which makes sense in practice, when d_y ≤ d_u. In most of the examples from real life we know of, when d_y ≤ d_u, the system is uniformly infinitesimally observable. A very frequent situation in practice is the following: The physical state space for x is an open subset X̌ of X, and X̌ is positively invariant under the dynamics of Σ. The trajectories x̂(t) that are unobservable take their values in the boundary ∂X̌, and the corresponding controls û(t) take their values in ∂U. In particular, this will be the case for the first example we show in Chapter 8.
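One way to read Definition 1.3 numerically, on the toy system used in Chapter 1 (again our own illustration, not an example of the book): integrate the variational equation (2) along a trajectory and test whether ξ_0 → η(·) has a trivial kernel, for instance through the positive definiteness of the associated finite-horizon Gram matrix.

    import numpy as np

    # Our toy system again: x' = (x2 + u*x1, -x1), y = x1 (illustration only).
    f    = lambda x, u: np.array([x[1] + u * x[0], -x[0]])
    dfdx = lambda x, u: np.array([[u, 1.0], [-1.0, 0.0]])
    dhdx = lambda x, u: np.array([[1.0, 0.0]])

    def gram(x0, u_func, T=5.0, dt=1e-3):
        """Gram matrix of xi0 -> eta(.) = d_X h . Phi(t) . xi0 over [0, T]."""
        x, Phi, G = np.array(x0, dtype=float), np.eye(2), np.zeros((2, 2))
        for k in range(int(T / dt)):
            u = u_func(k * dt)
            row = dhdx(x, u) @ Phi             # d P_{u,x0} evaluated at time t
            G += row.T @ row * dt
            Phi = Phi + dt * dfdx(x, u) @ Phi  # variational equation (2)
            x = x + dt * f(x, u)
        return G

    print(np.linalg.eigvalsh(gram([1.0, 0.0], lambda t: np.sin(t))))
    # Both eigenvalues should come out positive: no nonzero xi0 is mapped to
    # eta = 0, so the discretized, finite-horizon d P_{u,x0} is injective here.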
2. The Canonical Flag of Distributions

In this section, we assume that d_y = 1. As above, set h_u(x) = h(x, u), f_u(x) = f(x, u). Associated with the system Σ, there is a family of flags {D_0(u) ⊃ D_1(u) ⊃ ··· ⊃ D_{n−1}(u)} of distributions on X (parametrized by the value u ∈ U of the control):

D_0(u) = Ker(d_X h_u(x)),

where d_X denotes again the differential with respect to the x variable only. For 0 ≤ k < n − 1:

D_{k+1}(u) = D_k(u) ∩ Ker(d_X L^{k+1}_{f_u}(h_u)),

where L_{f_u} is the Lie derivative operator on X, w.r.t. the vector field f_u. Let us set

D(u) = {D_0(u) ⊃ D_1(u) ⊃ ··· ⊃ D_{n−1}(u)}.        (5)

This u-dependent flag of distributions is not regular in general (i.e., D_i(u) does not have the constant rank n − i − 1).
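A small symbolic sketch of the flag D_0(u) ⊃ D_1(u) on a hypothetical two-dimensional, single-output system (ours, not the book's), using sympy to compute the differentials whose common kernels define the D_i(u):

    import sympy as sp

    # Hypothetical 2-state, 1-control, 1-output system (not from the book):
    #   x1' = x2 + u*x1,  x2' = -x1,  y = h = x1.
    x1, x2, u = sp.symbols('x1 x2 u')
    x = sp.Matrix([x1, x2])
    f = sp.Matrix([x2 + u*x1, -x1])          # f_u
    h = sp.Matrix([x1])                      # h_u  (d_y = 1)

    lie = lambda phi, vf: phi.jacobian(x) * vf      # Lie derivative L_{f_u}

    dh   = h.jacobian(x)                     # d_X h_u
    dLfh = lie(h, f).jacobian(x)             # d_X L_{f_u} h_u

    D0 = dh.nullspace()                            # D_0(u) = Ker d_X h_u
    D1 = sp.Matrix.vstack(dh, dLfh).nullspace()    # D_1(u) = D_0(u) ∩ Ker d_X L_{f_u} h_u
    print(D0)   # [Matrix([[0], [1]])] : D_0(u) = span{d/dx2}, independent of u
    print(D1)   # []                   : D_1(u) = {0} for every u, so this flag is uniform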
Definition 2.1. The flag D(u) is called the canonical flag associated to Σ. In the case where the flag D(u) is regular and independent of u (notation: ∂_u D(u) = 0), the canonical flag is said to be uniform.

The case in which D(u) is uniform will be especially important in Chapter 3. In fact, this case will characterize uniform infinitesimal observability.

Note: Here, for us, a distribution D is just a subset of TX, the intersection of which with each tangent plane T_x X is a vector subspace of T_x X. Once the flag D(u) defined here is regular, the distributions D_i(u) are smooth distributions in the usual sense.
3. The Phase-Variable Representation

Here L^k_{f_u}(h_u) denotes the d_y-tuple of functions, the components of which are L^k_{f_u}(h^i_u), where h^i_u is the ith component of h_u = h(., u). We consider control functions that are sufficiently continuously differentiable only: k times, for example. Consider R^{(k−1)d_u} = R^{d_u} × ··· × R^{d_u} (k − 1 times) and R^{k d_y} = R^{d_y} × ··· × R^{d_y} (k times). We denote the components of v ∈ R^{(k−1)d_u} by (v', ..., v^{(k−1)}) and the components of y ∈ R^{k d_y} by (y, y', ..., y^{(k−1)}).

Definition 3.1. (Valid for X with corners.) There exist smooth mappings Φ^Σ_k and SΦ^Σ_k (the notation SΦ_k stands for suspension of Φ_k):

Φ^Σ_k : X × U × R^{(k−1)d_u} → R^{k d_y},
Φ^Σ_k : (x_0, u, u', ..., u^{(k−1)}) → (y, y', ..., y^{(k−1)}),            (6)

SΦ^Σ_k : X × U × R^{(k−1)d_u} → R^{k d_y} × R^{k d_u},
SΦ^Σ_k : (x_0, u, u', ..., u^{(k−1)}) → (y, y', ..., y^{(k−1)}, u, u', ..., u^{(k−1)}),    (7)

which are polynomial in the variables (u', ..., u^{(k−1)}) and smooth in (x_0, u), such that if (x̂, û) : [0, T_û[ → X × U is a semitrajectory of our system starting at x_0, and t → y(t) = h(x̂(t), û(t)) is the corresponding output trajectory, then the jth derivative y^{(j)}(0) of y(t) at time 0 is the jth block-component of Φ^Σ_k(x_0, u(0), du/dt(0), ..., d^{k−1}u/dt^{k−1}(0)).

Let us say that the system Σ has the phase-variable property of order k, denoted by PH(k), if, for all x_0 ∈ X and u(.) k-times differentiable:

y^{(k)} = H̆(SΦ^Σ_k(x_0, u, u', ..., u^{(k−1)}), u^{(k)}),        (8)
for some smooth (C^∞) function H̆ : R^{k d_y} × R^{(k+1)d_u} → R^{d_y}. Notice that if such a function does exist, it is not unique in general.

If one denotes temporarily by C^∞_k the ring of smooth functions g : R^{k d_y + (k+1)d_u} → R, the property PH(k) means that the components y_i^{(k)} of y^{(k)} belong to the ring (S̄Φ^Σ_k)* C^∞_k, the pull back of C^∞_k by the mapping S̄Φ^Σ_k, where

S̄Φ^Σ_k = SΦ^Σ_k × Id_{d_u},
S̄Φ^Σ_k(x, u, u', ..., u^{(k−1)}, u^{(k)}) = (SΦ^Σ_k(x, u, u', ..., u^{(k−1)}), u^{(k)}).

Then, we can consider the differential system Σ_k on R^{k d_y}:

(Σ_k)  ż_1 = z_2, ..., ż_{k−1} = z_k,
       ż_k = H̆(z_1, ..., z_k, u(t), ..., d^k u/dt^k(t)).        (9)

This differential system Σ_k is called a phase-variable representation of Σ. It has the following property, a consequence of the uniqueness of the solutions of smooth O.D.E.s. For any C^k function u, Φ^Σ_k maps the trajectories x(t) of Σ associated with u(t) into the corresponding trajectories of Σ_k: If x(t) is a trajectory of Σ corresponding to u(t), then the curve t → Φ^Σ_k(x(t), u(t), u'(t), ..., u^{(k−1)}(t)) is the trajectory of Σ_k corresponding to u(t), starting from Φ^Σ_k(x(0), u(0), u'(0), ..., u^{(k−1)}(0)). In particular, the output trajectory t → y(t) is mapped into t → z_1(t), where z_1 denotes the first d_y components of the state z of Σ_k.

A very important particular case where the property PH(k) holds is the following. Assume that the map SΦ^Σ_k is an injective immersion. For any open relatively compact subset Ω ⊂ X, let us consider the restriction SΦ^{Σ,Ω}_k = (SΦ^Σ_k)|_{Ω × U × R^{(k−1)d_u}}. If I_Ω denotes the image of SΦ^{Σ,Ω}_k, I_Ω ⊂ R^{k d_y} × R^{k d_u}, then y^{(k)}(x, u, u', ..., u^{(k)}) defines a function h̆ on I_Ω × R^{d_u}, and easy arguments using partitions of unity show that we can extend this function smoothly to all of R^{k d_y} × R^{(k+1)d_u}. We temporarily leave this simple fact for the reader to show: in Chapter 5, we will prove a slightly stronger (but not more difficult) result. Denoting this extension of h̆ by H̆, we get a phase-variable representation of order k for Σ restricted to Ω.

As we shall see, there are other interesting cases in which the map SΦ^Σ_k is only injective, but PH(k), the phase-variable property of order k, holds for Σ, for some k. This situation will be studied in Chapter 5. Strongly related to this phase-variable property are the notions of differential observability and strong differential observability.
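Before moving on, here is a concrete uncontrolled instance of the property PH(k), again our own illustration rather than an example from the book: for the harmonic oscillator with position output, y'' = −y, so PH(2) holds with H̆(z_1, z_2) = −z_1, and Φ_2 = (h, L_f h) maps trajectories of Σ onto trajectories of the chain of integrators Σ_2.

    import sympy as sp

    # Uncontrolled illustration (ours): x1' = x2, x2' = -x1, y = x1.
    x1, x2 = sp.symbols('x1 x2')
    x = sp.Matrix([x1, x2])
    f = sp.Matrix([x2, -x1])
    h = sp.Matrix([x1])

    lie = lambda phi: phi.jacobian(x) * f

    y0 = h                 # y              = x1
    y1 = lie(y0)           # y'  = L_f h    = x2
    y2 = lie(y1)           # y'' = L_f^2 h  = -x1

    # Phi_2(x) = (y, y') = (x1, x2); here y'' = -y, i.e. y^(2) = H(y, y')
    # with H(z1, z2) = -z1, which is exactly the phase-variable property PH(2).
    print(sp.simplify(y2[0] + y0[0]))   # 0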
4. Differential Observability and Strong Differential Observability

"Differential observability" just means injectivity of the map SΦ^Σ_k. "Strong differential observability" will mean that moreover SΦ^Σ_k is also an immersion. Let us relate precisely these notions to the notion of a dynamical extension of Σ. The control functions u are assumed to be sufficiently smooth.

We can consider the Nth dynamical extension Σ^N of Σ, and the Nth dynamical extension f^N of f, defined as follows. f^N is just the vector field on X × U × R^{(N−1)d_u} = X × U × (R^{d_u} × ··· × R^{d_u}) (N − 1 factors R^{d_u}):

f^N(x, u^{(0)}, ..., u^{(N−1)}) = ∑_{i=1}^{n} f_i(x, u^{(0)}) ∂/∂x_i + ∑_{i=0}^{N−2} ∑_{j=1}^{d_u} u_j^{(i+1)} ∂/∂u_j^{(i)}.        (10)

Moreover, if we set b^N = (b_i^N), with b_i^N = ∂/∂u_i^{(N−1)}, and u^{(N)} = (u_i^{(N)}), the "new control variable,"

u^{(N)} b^N = ∑_{i=1}^{d_u} u_i^{(N)} b_i^N,

then we can give the following definition.

Definition 4.1. The Nth dynamical extension Σ^N = (F^N, ȟ) of Σ is just the control system on X × U × R^{(N−1)d_u} with control variable u^{(N)} ∈ R^{d_u}, parametrized vector field F^N = f^N + u^{(N)} b^N, and observation function ȟ = (h(x, u^{(0)}), u^{(0)}).

Remark 4.1. If U = I^{d_u}, I ≠ R, the state space of Σ^N has corners. In fact, Σ^N is just the system we get by adding to the state variables the N − 1 first derivatives of the inputs. The Nth derivative is the new control. The observations are the observations of Σ and the control variables u_i, denoted here by u_i^{(0)}, 1 ≤ i ≤ d_u, to stress that the function is the zeroth derivative of itself. Also, u^{(0)} (resp. u^{(j)}) denotes the vector with components u_i^{(0)} (resp. u_i^{(j)}), 1 ≤ i ≤ d_u.

Let us set u_N = (u^{(0)}, ..., u^{(N−1)}), and more generally, for a smooth function y(t), with successive derivatives y^{(i)}(0) at t = 0, y_N = (y(0), y'(0), ..., y^{(N−1)}(0)). The maps Φ^Σ_N, SΦ^Σ_N have already been defined in Section 3. It will also be important to make the system Σ vary in the set S of systems. Hence, we
will have to consider the following maps:

SΦ_N : X × U × R^{(N−1)d_u} × S → R^{N d_y} × R^{N d_u},
(x, u_N, Σ) → (h(x, u^{(0)}), L_{f^N} h(x, u_2), ..., (L_{f^N})^{N−1} h(x, u_N), u_N) = (y_N, u_N),        (11)

SΦ^Σ_N : X × U × R^{(N−1)d_u} → R^{N d_y} × R^{N d_u},
(x, u_N) → SΦ_N(x, u_N, Σ),

and

Φ_N : X × U × R^{(N−1)d_u} × S → R^{N d_y},
(x, u_N, Σ) → (h(x, u^{(0)}), L_{f^N} h(x, u_2), ..., (L_{f^N})^{N−1} h(x, u_N)) = y_N,        (12)

Φ^Σ_N : X × U × R^{(N−1)d_u} → R^{N d_y},
(x, u_N) → Φ_N(x, u_N, Σ).

SΦ_N(x, u_N, Σ) = (Φ_N(x, u_N, Σ), u_N), and SΦ^Σ_N(x, u_N) = (Φ^Σ_N(x, u_N), u_N).

Definition 4.2. Σ is said to be differentially observable of order N if SΦ^Σ_N is an injective mapping, and to be strongly differentially observable if SΦ^Σ_N is an injective immersion.

As we mentioned in Section 3, if Σ is strongly differentially observable of order N, then Σ possesses also the phase-variable property PH(N), when restricted to Ω, where Ω is any open relatively compact subset of X. The reason for these definitions is that when d_y > d_u, strong differential observability is easily tractable for the purpose of construction of observer systems. Moreover, roughly speaking, it is a generic property. Therefore, it is a relevant definition in that case. This is the subject of Chapter 4. The motivation to consider differential (not strong) observability is that it is the most general concept adapted to the study of dynamic output stabilization (Chapter 7).

5. The Trivial Foliation

Associated to Σ, there is a subspace Θ of the space C^∞(X) of smooth functions h : X → R. The subspace Θ is the smallest subspace of C^∞(X) containing the components h^i_u of h_u = h(., u), for all u ∈ U, which is closed under Lie differentiation on X with respect to all of the vector fields f_v = f(., v), v ∈ U. It is the real vector subspace of C^∞(X) generated by the functions (L_{f_{u_r}})^{k_r} (L_{f_{u_{r−1}}})^{k_{r−1}} ··· (L_{f_{u_1}})^{k_1}(h_{u_0}), for u_0, ..., u_r ∈ U, where L denotes the Lie derivative operator on X.
Here, Θ is called the observation space of Σ. The space d_X Θ of differentials (w.r.t. x) of elements of Θ defines a codistribution that is, in general, singular. The distribution Δ annihilated by d_X Θ is called the trivial distribution associated to Σ. The level sets of Θ (i.e., the intersections of the level sets of elements of Θ) define the associated foliation, called the trivial foliation, associated to Σ. These notions are classical [24].

We call this foliation the trivial foliation because it is actually trivial (in the sense that the leaves are zero-dimensional) for generic systems. This is also true for systems that are uniformly infinitesimally observable, as our results will show. However, it is worth pointing out that the following theorem holds.

Theorem 5.1. The rank of d_X Θ is constant along the (positive or negative time) trajectories of Σ, in the analytic case [24].

Proof. By standard controllability arguments (recalled in Section 6 below), it is sufficient to show that the rank of d_X Θ is the same at a point x_0 and at a point x_1 of the form x_1 = exp(t f_v)(x_0), v ∈ U. To show this, we use the following analytic expansions, for ψ ∈ Θ, valid for small t:

(i)  ψ(exp(t f_v)(x_0)) = ∑_{k=0}^{∞} (t^k / k!) (L_{f_v})^k ψ(x_0),
(ii) d_X(ψ(exp(t f_v))(x_0)) = ∑_{k=0}^{∞} (t^k / k!) d_X((L_{f_v})^k ψ)(x_0).        (13)

If we denote by T the tangent mapping of the mapping exp(t f_v)(x), T : T_{x_0}X → T_{x_1}X, and by T* its adjoint, formula (13), (ii), shows that T* : T*_{x_1}X → T*_{x_0}X maps the covector d_X ψ(x_1) ∈ d_X Θ(x_1) to a covector in d_X Θ(x_0). Hence, rank(d_X Θ(x_1)) ≤ rank(d_X Θ(x_0)). Interchanging the roles of x_0 and x_1, by taking −t instead of t, we conclude that rank(d_X Θ(x_1)) = rank(d_X Θ(x_0)).

Hence, as soon as the analytic system Σ is controllable in the weak sense of the transitivity of its Lie algebra (see Section 6), the distribution Δ is regular. In particular, the leaves of the trivial foliation (the level sets of Θ) are submanifolds of the same dimension.

Now, let us consider the case where Δ is regular, nontrivial, and Σ not necessarily analytic. By Theorem 5.1, it is always regular in the analytic controllable case.
Exercise 5.1. Show that Δ is preserved by the dynamics of Σ (i.e., for any control function u(.) ∈ L^∞[U], T_X φ_t(., u) maps Δ(x_0) onto Δ(φ_t(x_0, u))).

Because h(., u) is constant on the leaves of Δ, for two distinct initial conditions, sufficiently close in the same leaf, the corresponding output trajectories coincide for t small enough, whatever the control function. In particular, Σ is not observable, for any fixed value of the control function, even if restricted to small open sets: For each control, one can find couples of points, arbitrarily close, that are not distinguished by the observations, for small times.

The following simple fact is important. We leave it as an exercise.

Exercise 5.2. In the case U = ∅, show the following:

– If (iff in the C^ω case) Θ separates the points of X, then Σ is observable.        (14)

There is an alternative way to define the distribution Δ in the analytic case, which will be of interest, together with Theorem 5.1, in Chapter 5. Let us first define the vector subspace Θ̃ of H = C^∞(X × U) as follows: Θ̃ is the smallest real vector subspace of H which contains the components h_i of h and which is closed under the action of the Lie derivatives L_{f_u} on X and under the derivations ∂_j = ∂/∂u_j, j = 1, ..., d_u. Θ̃ is generated by functions of the form

L^{k_1}_{f_u} (∂_{j_1})^{s_1} L^{k_2}_{f_u} (∂_{j_2})^{s_2} ··· L^{k_r}_{f_u} (∂_{j_r})^{s_r} h_{i,u},   k_i, s_i ≥ 0.        (15)

Fixing u ∈ U, we obtain the vector subspace Θ̃(u) ⊂ C^∞(X) and the space d_X Θ̃(u) of differentials of the elements of Θ̃(u), with respect to the x variable only. We call Δ̄(u) the distribution annihilated by d_X Θ̃(u).

Theorem 5.2. (a) Δ ⊂ Δ̄(u); (b) In the analytic case, Δ̄(u) is independent of u and Δ = Δ̄(u).

Proof. First, let us show that Δ ⊂ Δ̄(u), whatever u is. Consider the following functions φ_{i,k}(x, u, u_1, ..., u_r), 1 ≤ i ≤ d_y:

φ_{i,k}(x, u, u_1, ..., u_r) = L^{k_1}_{f_{u+u_1}} ··· L^{k_{r−1}}_{f_{u+u_1+···+u_{r−1}}} h_{i, u+u_1+···+u_r},
φ_{i,k} : D → R,   D ⊂ X × U × (R^{d_u})^r.        (16)
As functions of x, for u, u_1, ..., u_r fixed, these functions are generators of Θ. The main fact we will use is that

∂^{α_{1,i_1}}/∂u_{1,i_1}^{α_{1,i_1}} ··· ∂^{α_{r,i_r}}/∂u_{r,i_r}^{α_{r,i_r}} (φ_{i,k}(x, u_0, u_1, ..., u_r))
  = ∂^{α_{1,i_1}}/∂u_{0,i_1}^{α_{1,i_1}} L^{k_1}_{f_{u_0+u_1}} ··· ∂^{α_{r−1,i_{r−1}}}/∂u_{0,i_{r−1}}^{α_{r−1,i_{r−1}}} L^{k_{r−1}}_{f_{u_0+···+u_{r−1}}} ∂^{α_{r,i_r}}/∂u_{0,i_r}^{α_{r,i_r}} h_{i, u_0+u_1+···+u_r}.

Hence, differentiating the φ_{i,k}(x, u, u_1, ..., u_r) a certain number of times with respect to the variables u_{l,j} = (u_l)_j, 1 ≤ j ≤ d_u, 1 ≤ l ≤ r, and evaluating at u_1 = ··· = u_r = 0, produces certain functions φ̄_{i,k}(x, u), which are generators of Θ̃(u). If η ∈ Δ, then d_X φ_{i,k} · η = 0, which implies that d_X φ̄_{i,k} · η = 0, which is equivalent to the fact that η ∈ Δ̄(u).

In the analytic case, we can go further. One has the analytic expansion

φ_{i,k}(x, u, u_1, ..., u_r) = ∑_α (1/α!) Λ_α(x) ū^α,        (17)

where α = (α_1, ..., α_{r d_u}), α! = α_1! ··· α_{r d_u}!, ū^α = (u_{1,1})^{α_1} ··· (u_{r,d_u})^{α_{r d_u}}, and with

Λ_α(x) = (∂_1)^{α_1} ··· (∂_{d_u})^{α_{d_u}} L^{k_1}_{f_u} ··· L^{k_r}_{f_u} (∂_1)^{α_{(r−1)d_u+1}} ··· (∂_{d_u})^{α_{r d_u}} h_{i,u}(x).        (18)

This analytic expansion is valid for u_1, ..., u_r small enough. Therefore, one has

d_X φ_{i,k} · η = ∑_α (1/α!) (d_X Λ_α(x) · η) ū^α,        (19)

for all u_1, ..., u_r small enough, and all tangent vectors η ∈ T_x X. In particular, if η ∈ Δ̄(u), the right-hand side of (19) vanishes. This shows that d_X φ_{i,k} · η vanishes for u_1, ..., u_r small enough. By analyticity and connectedness, d_X φ_{i,k} · η vanishes for all u, u_1, ..., u_r. This shows that η ∈ Δ.

The point of interest, used in Chapter 5, will be that in the analytic controllable case, Δ̄(u) = Δ is a regular distribution. This is a consequence of Theorem 5.1 and Theorem 5.2 above.

Exercise 5.3. Find a C^∞ example for which Δ̄(u) ≠ Δ for some u.
6. Appendix: Weak Controllability

Definition 6.1. A system Σ being given, with state space X, the Lie subalgebra of the Lie algebra of smooth vector fields on X generated by the vector fields f_u (f_u(x) = f(x, u)) is called the Lie algebra of Σ, and is denoted by Lie(Σ). The Lie algebra Lie(Σ) defines an involutive (possibly singular) distribution on X.

Definition 6.2. Let a system Σ be given, with state space X. The system Σ is said to be weakly controllable if the Lie algebra Lie(Σ) is transitive on X, i.e., dim(Lie(Σ)(x)), the dimension of Lie(Σ) evaluated at x, as a vector subspace of T_x X, is equal to n = dim(X), for all x ∈ X.

The following facts are standard, and are used in the book. They come from the classical Frobenius theorem, Chow theorem, and Hermann–Nagano theorem.

1. If a system Σ is weakly controllable, then the accessibility set A_Σ(x_0) of x_0 ∈ X, i.e., the set of points that can be joined from x_0 by some trajectory of Σ in positive time, has nonempty interior in X, for all x_0 ∈ X.
2. If a system Σ is weakly controllable, then the orbit O_Σ(x_0) of x_0 ∈ X, i.e., the set of points that can be joined to x_0 by some continuous curve which is a concatenation of trajectories of Σ in positive or negative time, is equal to X, for all x_0 ∈ X.
3. If moreover Σ is analytic, statements (1) and (2) above are "if and only if."
4. If Σ is C^∞, not weakly controllable, but the distribution Lie(Σ) on X has constant rank, or if Σ is analytic, then A_Σ(x_0) has nonempty relative interior in the orbit O_Σ(x_0), which is just the integral leaf through x_0 of the distribution Lie(Σ).
5. In statements (1), (2), (3), and (4) above, it is possible to restrict to piecewise constant control functions.
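A minimal symbolic check of Definition 6.2 on a hypothetical system (ours, not the book's): take the vector fields f_u for two constant control values, compute one Lie bracket, and verify that together they span the tangent space, so that Lie(Σ) is transitive and the system is weakly controllable.

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    x = sp.Matrix([x1, x2])

    def bracket(g1, g2):
        """Lie bracket [g1, g2] = Dg2·g1 - Dg1·g2 of vector fields on R^2."""
        return g2.jacobian(x) * g1 - g1.jacobian(x) * g2

    # f_u for two constant control values (hypothetical single-control system
    # x1' = u, x2' = x1; not an example taken from the book):
    f0 = sp.Matrix([0, x1])      # u = 0
    f1 = sp.Matrix([1, x1])      # u = 1

    span = sp.Matrix.hstack(f0, f1, bracket(f0, f1))
    print(span.rank())   # 2 = dim X at every point: f1 - f0 and the bracket span
                         # T_x X, so Lie(Sigma) is transitive (weak controllability)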
3 The Case d_y ≤ d_u

We will treat only the case d_y = 1, d_u ≥ 1. General results for the case d_u ≥ d_y > 1 are more difficult to obtain. However, Chapter 8 gives two nontrivial practical examples, where d_y = d_u = 2, which are uniformly infinitesimally observable.

In this chapter, with the exception of the first section, we assume that d_y = 1, d_u ≥ 1, and that everything is analytic. We characterize analytic systems that are uniformly infinitesimally observable when restricted to an open dense subset of X. The necessary and sufficient condition is that ∂_u D(u) = 0, i.e., the canonical flag is uniform. This condition ∂_u D(u) = 0 is extremely restrictive and is not preserved by small perturbations of the system.

The analyticity assumption with respect to the x variable is made for purely technical reasons. It can certainly be removed to get similar results (see Exercise 4.6 below). On the other hand, analyticity with respect to u is essential. It is possible to obtain results in the nonanalytic case, but they will be weaker in the following sense: to have uniform infinitesimal observability, a certain condition has to hold on an open dense subset of X uniformly in u. In the nonanalytic case, we can only show that this condition has to hold on an open dense subset of X × U. This is much weaker. The reason for the "much weaker" result is: To prove the analytic case, we use the permanence properties of projections of semialgebraic or subanalytic sets.

Again d_X (resp. d_U) denotes the differential with respect to the x variables (resp. u variables) only.

1. Relation Between Observability and Infinitesimal Observability

The relation is stated in the following theorem (also valid for d_y > 1).
Theorem 1.1.
(i) For any system Σ and any input û, the set θ(û) of states x ∈ X such that Σ is infinitesimally observable at (û, x) is open in X (and could be empty, of course).
(ii) If Σ is observable for an input û, then θ(û) is dense everywhere in X.
(iii) If Σ is infinitesimally observable at (û, x), then there exists an open neighborhood V of x such that the restriction P_{Σ,û}|_V is injective (i.e., Σ restricted to V is observable for the input û).

Proof. Because the output function h depends on u, the output trajectories are measurable functions only and hence are not uniquely defined pointwise. Some regularization procedure will be needed to palliate this difficulty.

For û ∈ L^∞(U), let us define the distribution D(û): D(û)_x = Ker(dP_{û,x}). It follows from the definition that Σ is infinitesimally observable at (û, x) iff D(û)_x = {0}.

For ξ ∈ T_x X, regularize the function dP_{û,x}(ξ)(t), defined on [0, e(û, x)[, by setting

d̄P_{û,x}(ξ)(t) = ∫_0^t dP_{û,x}(ξ)(τ) dτ = ω(x, t)(ξ).

For each (x, t), ω(x, t) is an R^{d_y}-valued linear form on T_x X. The function (x, t) ∈ X × [0, e(û, x)[ → ω(x, t) ∈ T_x^* X ⊗ R^{d_y} is continuous in (x, t), absolutely continuous in t, and analytic in x. It is clear that D(û)_x = ∩_{t ∈ [0, e(û,x)[} Ker(ω(x, t)).

There is a finite set of times t_i ∈ [0, e(û, x)[, 1 ≤ i ≤ r, such that D(û)_x = ∩_{i=1,...,r} Ker(ω(x, t_i)). If Σ is infinitesimally observable at (û, x), then ∩_{i=1,...,r} Ker(ω(x, t_i)) = {0}, and the same happens in a neighborhood of x by the lower semi-continuity of the mapping x → e(û, x) and the continuity of ω(., t_i). This shows (i).

Let Y be any open subset of X. Set c = sup_{x∈Y} Codim(D(û)_x). To prove (ii), it is sufficient to prove that c = n. Applying the same argument as in the proof of (i), it is clear that the set Z of all x ∈ Y such that Codim(D(û)_x) = c is open. Now, consider an open subset W of Z, such that D(û)|_W = ∩_{i=1,...,r} Ker(ω(x, t_i)) for some t_1, ..., t_r. It is clear that ω(., t_i) = d_X F_i, where F_i : W → R^{d_y} is the function

F_i(x) = ∫_0^{t_i} h(φ_τ(x, û), û(τ)) dτ.

Hence, the restriction D(û)|_W is an integrable distribution, the leaves of which are the connected components of the level manifolds of the mappings F_1, ..., F_r. Let us take a compact connected set K contained in a leaf L of D(û)|_W and containing more than one point. The infimum over K of the function x → e(û, x) is attained at some x_0. For any time T ∈ [0, e(û, x_0)[,
there is an open neighborhood V_T of K in W such that e(û, x) > T for all x ∈ V_T. Because ω(., T)|_{V_T} = d_X F^T, where F^T(x) = ∫_0^T h(φ_τ(x, û), û(τ)) dτ, and D(û)|_W ⊂ Ker(ω(., T)), F^T is constant on any connected component of the intersection L ∩ V_T, in particular on the one containing K. Hence, F^T is constant on K for all T ∈ [0, e(û, x_0)[. Differentiating with respect to T shows that P_Σ(û, x)(t) = P_Σ(û, x_0)(t) for all x ∈ K, and almost all t ∈ [0, e(û, x_0)[. This contradicts the observability assumption. Hence, c = n, and this proves (ii). Number (iii) is easy and left to the reader.

In the remaining part of the chapter, d_y = 1.

2. Normal Form for a Uniform Canonical Flag

We assume that the canonical flag associated to the system Σ is uniform. We will show first that this is equivalent to the fact that Σ can be put everywhere locally in a certain normal form, called the observability canonical form.

Theorem 2.1. Σ has a uniform canonical flag if and only if, for all x_0 ∈ X, there is a coordinate neighborhood of x_0, (V_{x_0}, x^0, ..., x^{n−1}), such that in these coordinates, Σ can be written as follows:

dx^0/dt = f_0(x^0, x^1, u), ..., dx^i/dt = f_i(x^0, x^1, ..., x^{i+1}, u), ...,
dx^{n−2}/dt = f_{n−2}(x^0, x^1, ..., x^{n−1}, u), dx^{n−1}/dt = f_{n−1}(x^0, x^1, ..., x^{n−1}, u),
y = h(x^0, u),

and, for all (x, u) ∈ V_{x_0} × U,

∂h/∂x^0(x^0, u) ≠ 0, ∂f_0/∂x^1(x^0, x^1, u) ≠ 0, ..., ∂f_{n−2}/∂x^{n−1}(x^0, ..., x^{n−1}, u) ≠ 0.        (20)

Proof. Let us choose coordinates x^0, ..., x^{n−1} in a neighborhood V of x_0, such that V is a cube in these coordinates, and D_i(u) = ∩_{j=0,..,i} Ker(dx^j). Then D_{n−1} = {0}, D_{n−2} = Span{∂/∂x^{n−1}}, and for 0 ≤ j ≤ n − 2,

L_{∂/∂x^{n−1}} L^j_{f_u} h = 0 = L_{f_u} L_{∂/∂x^{n−1}} L^{j−1}_{f_u} h + L_{[∂/∂x^{n−1}, f_u]} L^{j−1}_{f_u} h.

But L_{∂/∂x^{n−1}} L^{j−1}_{f_u} h = 0,
∂ ∂ x n−1
23
, f u ∈ Dn−3 .
An obvious induction shows that ∂ , f u ∈ Dk−2 , ∂xk which implies ∂h = 0 for i ≥ 1, ∂xi ∂ fi (2) = 0 for j ≥ i + 2. ∂x j (1)
∂ fi Now, it is impossible that ∂∂h x 0 ( xˆ , v) = 0, or ∂ x i+1 ( xˆ , v) ∂=L i 0h for some v i, 0 ≤ i ≤ n − 2, for (xˆ , v) ∈ V × U, because it implies ∂ xfvi (xˆ ) = 0, ∂ ∂ which contradicts the fact that Di (v)(xˆ ) = Span{ ∂ x i+1 , . . . , ∂ x n−1 }. Conversely, if h depends only on u and x 0 , if the component f i depends ∂ fi only on u, x 0 , . . . , x i+1 , for 1 ≤ i ≤ n − 2, and if ∂∂h x 0 = 0, ∂ x i+1 = 0, for i = 0, . . . , n − 2, then it is easy to check that the canonical flag is uniform. j
j
Let us set h u (x) = h j (x, u) = L f u h u (x). Corollary 2.2. has a uniform canonical flag if and only if, for all x0 ∈ X and for all v ∈ U, there exists an open neighborhood Vx0 ,v of x0 , such that the functions x 0 = h 0v |Vx0 ,v , x 1 = h 1v |Vx0 ,v . . . . , x n−1 = h n−1 v |Vx0 ,v , form a i coordinate system on Vx0 ,v , and on U × Vx0 ,v , each h is a function of u, x 0 , . . . , x i only, 0 ≤ i ≤ n − 1. Proof. It is clear that the normal form of Theorem 2.1 implies that x 0 = h 0v |Vx0 ,v , x 1 = h 1v |Vx0 ,v . . . . , x n−1 = h n−1 v |Vx0 ,v , form a coordinate system on a neighborhood of x0 , and each h i is a function of u, x 0 , . . . , x i only, 0 ≤ i ≤ n − 1. Conversely, if these two statements hold, one computes immediately D(u):
∂ ∂ ∂ ∂ D(u) = span ⊃ span , . . . , , . . . , ∂x1 ∂ x n−1 ∂x2 ∂ x n−1
∂ ⊃ . . . ⊃ span ⊃ {0} . ∂ x n−1 Hence, the canonical flag is uniform.
3. Characterization of Uniform Infinitesimal Observability

The first observation that can be made is the following theorem:

Theorem 3.1. Assume that Σ is such that its canonical flag is uniform. Then, for all x_0 ∈ X, there is an open neighborhood V_{x_0} of x_0 such that the restriction Σ|_{V_{x_0}} of the system Σ to V_{x_0} is observable and uniformly infinitesimally observable.

Proof. We apply Theorem 2.1. Take a coordinate neighborhood (V_{x_0}, x^0, ..., x^{n−1}), such that Σ is in the normal form (20). The equation of the first variation for Σ is, setting x(t) = (x^0(t), ..., x^{n−1}(t)):

ẋ(t) = f(x(t), u(t)),
ξ̇_0 = d_X f_0(x^0(t), x^1(t), u(t)) · ξ,
...
ξ̇_{n−2} = d_X f_{n−2}(x^0(t), ..., x^{n−1}(t), u(t)) · ξ,
ξ̇_{n−1} = d_X f_{n−1}(x^0(t), ..., x^{n−1}(t), u(t)) · ξ,
η = d_X h(x^0(t), u(t)) ξ_0.        (21)

We assume that Σ|_{V_{x_0}} is not uniformly infinitesimally observable. This means that we can find a trajectory of Σ|_{V_{x_0}}, and a corresponding trajectory ξ(t) of (21), such that 1) ξ(0) ≠ 0, 2) η(t) = 0 a.e. However, because d_X h(x^0, u) is never 0, this implies that ξ_0(t) = 0 everywhere, by continuity. Hence, ξ̇_0(t) = 0 almost everywhere. Again, Equation (21) of the first variation, plus the fact that ∂f_0/∂x^1 is never zero, imply that ξ_1(t) = 0. An obvious induction shows that ξ_i(t) = 0 for 0 ≤ i ≤ n − 1, which contradicts the fact that ξ(0) ≠ 0.

It remains to be shown that Σ|_{V_{x_0}} is observable. We assume that we can find u(.), and two distinct initial conditions w(0), z(0), such that the corresponding trajectories w(t), z(t) of Σ|_{V_{x_0}} produce the same output for almost all t < T: h(w(t), u(t)) = h(z(t), u(t)) a.e., where T is the minimum of the escape times of both trajectories relative to V_{x_0}. We can always take V_{x_0} so that its image by (x^0, ..., x^{n−1}) is a cube, and h(w^0(t), u(t)) − h(z^0(t), u(t)) = 0 for all t in a subset E of [0, T[ of full measure. But, for t ∈ E, it implies that:

0 = (w^0(t) − z^0(t)) ∂h/∂x^0(c(t), u(t)),

for some c(t), w^0(t) ≤ c(t) ≤ z^0(t) if w^0(t) ≤ z^0(t).
Because ∂h/∂x^0 never vanishes, we get that w^0(t) = z^0(t) almost everywhere, and hence everywhere because w^0(t) and z^0(t) are absolutely continuous functions. An induction similar to the one used in the first part of the proof shows that w(t) = z(t) for all t ∈ [0, T[. This contradicts the fact that w(0) ≠ z(0).

Remark 3.1. We have proved a bit more than the statement of the theorem: we have proved that if Σ has the normal form (20) on a convex subset C of R^n, then the restriction Σ|_C is observable and uniformly infinitesimally observable.

The main result in this chapter is that, conversely, the uniformity condition on the canonical flag is a necessary condition for uniform infinitesimal observability, at least on an open dense subset of X. Let us point out the following fact about this result: it is true "almost everywhere" with respect to X, but it is global with respect to U. This is the hard part to prove. If one is interested in a result true almost everywhere with respect to both x and u, the proof is much easier.

Before proceeding, let us make the following standing assumptions: either

(H1) U = I^{d_u}, I ⊂ R is a compact interval, and the system is analytic, or
(H2) U = R^{d_u}, f and h are algebraic with respect to u.

Let M̃ be the subset of U × X:

M̃ = {(u, x) | d_X h^0_u(x) ∧ ··· ∧ d_X h^{n−1}_u(x) = 0}.

Let M be its projection on X. Then:

Theorem 3.2. Assume either (H1) or (H2) and that Σ is uniformly infinitesimally observable. Then:
1. The set M is a subanalytic (resp. semianalytic in case (H2)) set of codimension at least 1. In the case (H1), M is closed. In any case, denote by M̄ its closure.
2. The restriction Σ|_{X \ M̄} of Σ to X \ M̄ has a uniform canonical flag.

This theorem will be proved in Section 5 of this chapter. Before this, let us give some comments and examples, and state some complementary results.
4. Complements 4.1. Exercises Exercise 4.1. Let be a system with uniform canonical flag. Show that is strongly differentially observable of order n, and hence has the phase variable property of order n, when restricted to sufficiently small open subsets of X. Exercise 4.2. Show that the (uncontrolled) system on R 2 : x˙ 1 = x2 , x˙ 2 = 0, y = (x1 )3 , is observable on R 2 , but not infinitesimally observable at x0 = 0. Exercise 4.3. The output function does not depend on u, and n = dim(X ) = 2. Assume that we work in the class of systems such that h does not depend on u, u ∈ I du , I compact. Fix x 0 ∈ X. 1. Show that the property that has a uniform canonical flag in a neighborhood of x 0 is stable under C 2 -small perturbations of . 2. Show that, if n > 2, this is not true. Exercise 4.4. In the class of control affine systems (i.e., x˙ = f (x) + u g(x), y = h(x), U = R), show that the result 1 of Exercise 4.3 is false. Exercise 4.5. Show that the system on X = R: x˙ = 1, 1 sin 2x(1 + u 2 ) 2 1 x− + x sin2 u, u ∈ R y= 1 2 2 2 2(1 + u ) is uniformly infinitesimally observable, but Theorem 3.2 is false (U is not compact).
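As a sanity check on Exercise 4.2, the following sympy sketch (ours, not part of the text) writes the flow of that uncontrolled system explicitly: the cube root of the output, x_1 + t x_2, is affine in t, so the output trajectory determines the initial condition and the system is observable on R²; but the output of the first variation along the trajectory through x_0 = 0 vanishes identically, so infinitesimal observability fails there.

```python
# Sketch for Exercise 4.2 (our illustration): x1' = x2, x2' = 0, y = x1**3.
import sympy as sp

x1, x2, t, xi1_0, xi2_0 = sp.symbols('x1 x2 t xi1_0 xi2_0', real=True)

# flow of f = (x2, 0): x1(t) = x1 + t*x2, x2(t) = x2
x1_t = x1 + t * x2
y_t = x1_t**3

# Observability: the cube root of the output, x1 + t*x2, is affine in t,
# so the output trajectory determines (x1, x2).
print(sp.expand(y_t))

# Infinitesimal observability at x0 = 0: the first variation is
# xi1' = xi2, xi2' = 0, with variational output eta = 3*x1(t)**2 * xi1(t).
xi1_t = xi1_0 + t * xi2_0
eta = 3 * x1_t**2 * xi1_t
print(sp.simplify(eta.subs({x1: 0, x2: 0})))   # 0: eta vanishes for every xi(0)
```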
4.2. Control Affine Systems We consider the control affine analytic systems (with single control, to simplify): x˙ = f (x) + u g(x), ( A ) (22) y = h(x), u ∈ R. In that case, there is a stronger statement than Theorem 3.2, which is much easier to prove. Consider the mapping : X → R n , (x) = (h(x), L f h(x), . . . , L n−1 f h(x)). The set of points x ∈ X at which is not a local
diffeomorphism, i.e., the set where d_X h(x) ∧ d_X L_f h(x) ∧ . . . ∧ d_X L_f^{n−1} h(x) = 0, is an analytic subset, closed in X, denoted by M.

Theorem 4.1.
1. If (Σ_A) is observable, then M has codimension at least 1, and, on each open subset Y ⊂ X \ M such that the restriction Φ|_Y is a diffeomorphism, Φ|_Y maps Σ_A|_Y into a system (Σ̄_A) of the form:

(Σ̄_A)   ẋ_1 = x_2 + u g_1(x_1),
         ẋ_2 = x_3 + u g_2(x_1, x_2),
         . . .
         ẋ_{n−1} = x_n + u g_{n−1}(x_1, . . . , x_{n−1}),                    (23)
         ẋ_n = φ(x) + u g_n(x),
         y = x_1.

2. Conversely, if Ω is an open subset of R^n on which the system has the form (Σ̄_A), then the restriction Σ|_Ω is observable.

The proof is simple, and contains the basic idea for the proof of Theorem 3.2.

Proof of Theorem 4.1. The second part is very simple. Assume that two trajectories corresponding to u(t), say x(t) and x̃(t), produce the same output x_1(t) = x̃_1(t). Then d(x_1(t) − x̃_1(t))/dt = 0 almost everywhere. It follows from (23) that, almost everywhere, 0 = (x_2(t) − x̃_2(t)) + u(t)(g_1(x_1(t)) − g_1(x̃_1(t))). Hence, 0 = (x_2(t) − x̃_2(t)) almost everywhere, and by continuity, 0 = (x_2(t) − x̃_2(t)) everywhere (i.e., for all t ∈ [0, min(e(u, x(0)), e(u, x̃(0)))[ ). An obvious induction shows that x(t) = x̃(t) everywhere, and in particular x(0) = x̃(0).

For the first part, it is clear that the set M is analytic. Assume that its codimension is zero. Then M = X and d_{n−1}(x) = dh(x) ∧ dL_f h(x) ∧ . . . ∧ dL_f^{n−1} h(x) is identically zero. Let k ≤ n − 1 be the first index such that d_k ≡ 0. We cannot have k = 0, because that would imply that h is a constant function, and then the system is not observable. We can take a small open coordinate system (x^1, . . . , x^n) with first k coordinates (h(x), L_f h(x), . . . , L_f^{k−1} h(x)). Then, in these coordinates, L_f^n h is a function of (x^1, . . . , x^k) for all n: d_k(x) = 0 = dx^1 ∧ . . . ∧ dx^k ∧ dL_f^k h(x). Hence, L_f^k h(x) = φ(x^1, . . . , x^k) for a certain smooth function φ. In the same way, L_f^{k+1} h(x) = L_f φ(x) = (∂φ/∂x^1) x^2 + . . . + (∂φ/∂x^{k−1}) x^k + (∂φ/∂x^k) φ = ψ(x^1, . . . , x^k) for a certain smooth function ψ. By induction, we see that all the functions L_f^n h(x) are functions of
(x 1 , . . . , x k ) only. Hence they do not separate the points, and by (14) in Chapter 2, is not observable for the control function u(.) ≡ 0. Now, let Y be an open subset of X such that |Y is a diffeomorphism. On Y, (h(x), L f h(x), . . . , L n−1 f h(x)) is a coordinate system. Let us show that g has the required form, in these coordinates (it is already clear that f and h have the required form). Assume that the first component g1 of g depends on x i , for some i > 1. Let us consider two initial conditions x1 , x2 ∈ Y, such that (a) x11 = x21 , and (b) g1 (x1 ) = g1 (x2 ). This is possible because of the assumption that g1 , does not depend on x 1 only. We will construct a control function u(t), such that these two initial conditions together with the control u(t) produce the same output function. For this, let us consider the product system × of by itself: x˙ ∗1 = f (x1∗ ) + u g(x1∗ ),
ẋ*_2 = f(x*_2) + u g(x*_2),                                                  (24)
y = h(x*_1) − h(x*_2).

The state space of Σ × Σ is X × X. Let us substitute in Σ × Σ the following function û for u:

û(x*_1, x*_2) = − (L_f h(x*_1) − L_f h(x*_2)) / (L_g h(x*_1) − L_g h(x*_2)).  (25)
The resulting differential system is well defined in a neighborhood of the initial condition (x1 , x2 ) because the function uˆ is smooth in a neighborhood of (x1, x2 ): The condition g1 (x1 )= g1 (x2 ) is equivalent to L g h(x1 )− L g h(x2 )= 0. Call (x1∗ (t), x2∗ (t)) the trajectory of this differential system, starting at (x1 , x2 ). ˆ 1∗ (t), x2∗ (t)), the It is easy to check that, for the control function u(t) = u(x output of × is identically zero (for small times). This shows that is not observable for u(t). Similarly, an induction shows that gi depends only on (x 1 , . . . , x i ). Exercise 4.6. Give a statement and a proof of Theorem 4.1 in the C ∞ case. 4.3. Bilinear Systems (Single Output) Bilinear systems are systems on X = R n , that are control affine and state affine: x˙ = Ax + u (Bx + b), (B) (26) y = C x, where A : R n → R n , B : R n → R n are linear, b ∈ R n , C ∈ (R n )∗ . For these bilinear systems, the previous result, Theorem 4.1, can be made much stronger.
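Since the state enters linearly in (26), once the control function is known a bilinear system is just a linear time-dependent system; in particular, for a constant control value u, observability can be tested with the classical Kalman rank condition on the pair (C, A + uB). The sketch below (our own illustration, with made-up numpy matrices) checks this rank condition in the companion/lower-triangular normal form described by Theorem 4.2 below; the staircase structure of the stacked rows C, C(A + uB), C(A + uB)², . . . forces full rank for every u.

```python
# Hypothetical numerical check (ours): Kalman rank condition for (C, A + u*B),
# with A a companion matrix, B lower triangular, C = (1, 0, ..., 0).
import numpy as np

def observability_matrix(C, A):
    """Stack C, C A, ..., C A^(n-1)."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., -2., 3.]])                              # a companion matrix
B = np.tril(np.random.default_rng(0).normal(size=(3, 3)))  # lower triangular
C = np.array([[1., 0., 0.]])

for u in (0.0, 0.5, -1.0):
    O = observability_matrix(C, A + u * B)
    print(u, np.linalg.matrix_rank(O))   # rank 3 for each u: (C, A + u*B) is observable
```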
Exercise 4.7. Show the following theorem:

Theorem 4.2. The single-output bilinear system (B) is observable if and only if it has the following form in the (linear) coordinate system x* = (Cx, CAx, . . . , CA^{n−1}x): ẋ* = Āx* + u(B̄x* + b̄), y = C̄x*, where C̄ = (1, 0, . . . , 0), B̄ is lower triangular, and Ā is a companion matrix:

        | 0   1   0   . . .   0   |
        | 0   0   1   . . .   0   |
  Ā =   |         . . .           |                                          (27)
        | 0   . . .       0   1   |
        | a_1   a_2   . . .   a_n |

The bilinear systems (single output or not) play a very special role from the point of view of the observability property: the initial-state → output-trajectory mapping is an affine mapping. In fact, the control function being known, they are just linear time-dependent systems. Therefore, for instance, the observer problem can be solved just by using the linear theory. A very important result is stated in the following exercise.

Exercise 4.8. (Fliess–Kupka theorem in the analytic case)
1. Define a reasonable notion of an immersion of a system into another one.
2. Show that a control affine analytic system can be immersed into a bilinear one if and only if its observation space is finite-dimensional. (See Chapter 2, Section 5, for the definition of the observation space.)

This result ((2) in Exercise 4.8) is not very difficult to prove. The original result, in paper [15], is a similar theorem in the C∞ case, the proof of which is not that easy.

5. Proof of Theorem 3.2

We will need the following lemma:

Lemma 5.1. Let Y be a connected analytic manifold, Z ⊂ R^d be an open neighborhood of x_0 ∈ R^d, and let f^0, . . . , f^n : Y × Z → R be n + 1 analytic functions such that:
1. d_Z f^0 ∧ . . . ∧ d_Z f^n ∧ d_Y d_Z f^n = 0 on Y × Z,
2. There exists y_0 ∈ Y such that f^0_{y_0}, . . . , f^n_{y_0} : Z → R are independent,
3. There exist analytic functions h^i : Y × Ω_i → R, Ω_i open in R^n, 0 ≤ i ≤ n − 1, such that f^i(y, z) = h^i(y, f^0_{y_0}(z), . . . , f^{n−1}_{y_0}(z)) for all (y, z) ∈ Y × Z.

Then, there exists a subneighborhood Z′ ⊂ Z of x_0, and a function h^n : Y × Ω_n → R, Ω_n ⊂ R^{n+1}, Ω_n open, such that for all (y, z) ∈ Y × Z′,

f^n(y, z) = h^n(y, f^0_{y_0}(z), . . . , f^n_{y_0}(z)).
Proof. Let Z ⊂ Z be an open subneighborhood of x0 , Y0 ⊂ Y be an open neighborhood of y0 in Y , and ϕ = (x 0 , . . . . , x d ) : Z → R d , a global coordinate system on Z such that x i = f yi0 , 0 ≤ i ≤ n, and, (i) ϕ(Z ) is a convex subset of R d , (ii) f y0 , . . . , f yn , x n+1 , . . . , x d : Z → R, is a coordinate system on Z for all y ∈ Y0 . By restricting Y0 , we can assume that it carries a coordinate system y˜ = y 1 , . . . , y m : Y0 → R, such that y˜ (Y0 ) is convex and y˜ (y0 ) = 0. Then, d Z f 0 ∧ . . . ∧ d Z f n ∧ dY d Z f n = 0 on Y × Z implies:
d_Z f^0 ∧ . . . ∧ d_Z f^n ∧ d_Z (∂f^n/∂y^i) = 0 on Y_0 × Z′ for all 1 ≤ i ≤ m.

This in turn implies that

f^n(ty, z) = ∫_0^t Σ_{k=1}^m h̄^n_k(sy, x^0(z), . . . , x^{n−1}(z), f^n(sy, z)) y^k ds + f^n(y_0, z),      (28)

on Y_0 × Z′, for some functions h̄^n_k : Y_0 × Ω̄ → R, Ω̄ ⊂ R^{n+1}, Ω̄ open. Now, there exists a neighborhood Y_0′ ⊂ Y_0 of y_0 such that Equation (28) has a unique solution (given f^n(y_0, z)) defined on Y_0′ × Z′ and for all t ∈ [0, 1]. This solution is an analytic function of t, y, x^0, . . . , x^{n−1}, f^n_{y_0}. If we take t = 1, we obtain

f^n(y, z) = H^n(y, x^0(z), . . . , x^{n−1}(z), f^n_{y_0}(z)) on Y_0′ × Z′,

or f^n_y = H^n(y, x^0, . . . , x^n) on Y_0′ × Z′. However, because x^0, . . . , x^d : Z′ → R is a coordinate system over Z′, we have f^n = G(y, x^0, . . . , x^d) on Y × Z′. On the open subset Y_0′ × Z′,

∂G/∂x^j = ∂H^n/∂x^j = 0,  n + 1 ≤ j ≤ d.

Hence, ∂G/∂x^j = 0 on Y × Z′, for n + 1 ≤ j ≤ d.
Again, by the convexity of Z′,

G(y, x^0, . . . , x^d) = ∫_0^1 [ (∂G/∂x^{n+1})(y, x^0, . . . , x^n, t x^{n+1} + (1 − t)x_0^{n+1}, . . . , t x^d + (1 − t)x_0^d) (x^{n+1} − x_0^{n+1})
    + . . . + (∂G/∂x^d)(y, x^0, . . . , x^n, t x^{n+1} + (1 − t)x_0^{n+1}, . . . , t x^d + (1 − t)x_0^d) (x^d − x_0^d) ] dt
    + G(y, x^0, . . . , x^n, x_0^{n+1}, . . . , x_0^d).

Hence, f^n = G(y, x^0, . . . , x^n) on Y × Z′.

We will prove, by induction on N, the following statement (A_N): Let M_N be the projection on X of the semi-analytic (resp. analytic, partially algebraic) subset M̃_N = {(u, x) | d_X h^0_u(x) ∧ . . . ∧ d_X h^N_u(x) = 0}:

- M_N is a subanalytic (respectively semi-analytic) subset of X of codimension ≥ 1, and
- For any a ∈ X \ M̄_N, and any v ∈ U, there exists an open neighborhood V of a such that the restriction of h^i to U × V is a function of u and of the restrictions h^0_{v|V}, . . . , h^i_{v|V} of h^0_v, . . . , h^i_v to V only, for all i, 0 ≤ i ≤ N (this second property we denote by (P_N)).
It is clear, by Corollary 2.2 that the property (An−1 ) implies Theorem 3.2. In fact, because all the functions h 0v , . . . , h n−1 are independent on V, there exists v form an a (could be smaller) neighborhood Va of a, such that h 0v , . . . , h n−1 v analytic coordinate system on Va . Also, Mn = M, Mn−1 ⊃ . . . ⊃ M1 ⊃ M0 . Assume that we have proved that A0 , . . . , A N −1 , and let us prove A N . This will be done in four steps. In order to prove steps 1, 2, and 4, we will construct feedback laws contradicting infinitesimal observability. In step 1, this feedback will be a constant control. In step 2, it is a general feedback for the lift of our system on the tangent bundle (i.e., a feedback depending on ξ, with the notations of Section 1, Chapter 2), and in step 4, it will be a feedback depending on x only. Let Z N be the set of all x ∈ X such that dh 0u (x) ∧ . . . ∧ dh uN (x) = 0 for all u ∈ U . Because Z N = ∩u∈U Z N (u), where Z N (u) = {x ∈ X | dh 0u (x) ∧ . . . ∧ dh uN (x) = 0}, is an analytic subset of X, it follows that Z N also is analytic ([43], Corollary 2, p. 100). Step 1. We Claim that the Codimension of Z N is at Least 1 Were it otherwise, Z N would contain an open set . Then, \ M¯ N −1 is also open and nonempty. Because for any u ∈ U, dh 0u (x) ∧ . . . ∧ dh uN (x) = 0 on \ M¯ N −1 , any point a ∈ \ M¯ N −1 has, for a given v ∈ U, an open neighborhood W in \ M¯ N −1 , such that h vN is a function of h 0v , . . . , h vN −1 in W.
If ξ : [0, e[ → T W is any trajectory of TΣ corresponding to the constant control u(t) ≡ v, and such that ξ(0) ∈ T_a X, ξ(0) ≠ 0, dh^0_v(ξ(0)) = . . . = dh^{N−1}_v(ξ(0)) = 0, then we will see that dh^0_v(ξ(t)) = . . . = dh^{N−1}_v(ξ(t)) = 0 for all t ∈ [0, e[. In fact, d/dt (dh^i_v(ξ(t))) = d(L_{f_v}(h^i_v))(ξ) = dh^{i+1}_v(ξ(t)) for all i, 0 ≤ i ≤ N − 1. Because dh^N_v(ξ) is a linear combination of dh^0_v(ξ), . . . , dh^{N−1}_v(ξ), it follows from the uniqueness part of the Cauchy theorem for ordinary differential equations that dh^0_v(ξ(t)) = . . . = dh^{N−1}_v(ξ(t)) = 0. Thus, Σ is not uniformly infinitesimally observable, which is a contradiction.

Step 2. On TU × T(X \ M̄_{N−1}), d_X h^0 ∧ . . . ∧ d_X h^N ∧ d_U d_X h^N = 0

If x^1, . . . , x^n is a coordinate system on some open subset X′ of X, and u^1, . . . , u^{d_u} is a coordinate system on some open subset U′ of U, then

d_X h^i = Σ_{j=1}^n (∂h^i/∂x^j)(u, x) dx^j,    d_U d_X h^N = Σ_{k=1}^{d_u} Σ_{j=1}^n (∂²h^N/∂x^j ∂u^k) du^k ∧ dx^j.      (29)

Assume that the assertion of Step 2 is not true. Then there exists a pair (u_0, ξ_0) ∈ Int(U) × T_a(X \ M̄_{N−1}) such that

d_X h^i_{u_0}(ξ_0) = 0 for 0 ≤ i ≤ N,      (30)

but d_U d_X h^N(·, ξ_0) is not identically zero on T_{u_0}U.

We will construct an analytic feedback ū : W → U, W an open neighborhood of ξ_0 in T(X \ M̄_{N−1}), ū(ξ_0) = u_0, and a solution ξ̂ : [0, e[ → W of the feedback system

dξ/dt = T_X f(ξ, ū(ξ)),   ξ̂(0) = ξ_0,

such that d_X h^i_{û(t)}(ξ̂(t)) = 0 for all t ∈ [0, e[ and 0 ≤ i ≤ N − 1, where û(t) = ū(ξ̂(t)). This contradicts the infinitesimal observability assumption.

The feedback ū is the solution of d_X h^N_{ū(ξ)}(ξ) = 0, for all ξ ∈ W, ū(ξ_0) = u_0, obtained by the implicit function theorem. We have

d/dt (d_X h^0_{û(t)}(ξ̂(t))) = d_X h^1_{û(t)}(ξ̂(t)) + d_U d_X h^0(dû(t)/dt, ξ̂(t)),

d/dt (d_X h^i_{û(t)}(ξ̂(t))) = d_X h^{i+1}_{û(t)}(ξ̂(t)) + d_U d_X h^i(dû(t)/dt, ξ̂(t)),  1 ≤ i ≤ N − 2,

d/dt (d_X h^{N−1}_{û(t)}(ξ̂(t))) = d_U d_X h^{N−1}(dû(t)/dt, ξ̂(t)),      (31)
because, by construction,

d_X h^N_{ū(ξ̂(t))}(ξ̂(t)) = d_X h^N_{û(t)}(ξ̂(t)) = 0.      (32)
We apply the induction assumption to a = (ξ0 ) and u 0 . We call Va the corresponding neighborhood. Restrict both W and Va so that (W ) ⊂ Va . For ¯ ), and any 0 ≤ i ≤ N − 1, h i is a function of u and h 0v , . . . , h iv any v ∈ u(W only on U × Va . It follows that
d_U d_X h^i(dû(t)/dt, ξ̂(t)) is a linear combination of d_X h^0_{û(t)}(ξ̂(t)), . . . , d_X h^i_{û(t)}(ξ̂(t)), for all 0 ≤ i ≤ N − 1 and all t. The system (31) becomes a linear time-dependent system in the functions d_X h^i_{û(t)}(ξ̂(t)). At t = 0, d_X h^i_{û(t)}(ξ̂(t))|_{t=0} = d_X h^i_{u_0}(ξ_0) = 0, by (30). By Cauchy's uniqueness theorem, we get that d_X h^i_{û(t)}(ξ̂(t)) = 0 for all t. This contradicts the fact that Σ is uniformly infinitesimally observable.
Step 3. Proof of (P_N). Take any point a in X \ (Z_N ∪ M̄_{N−1}). There exists a v ∈ Int(U) such that (v, a) is not in M̃_N. We know that d_X h^0 ∧ . . . ∧ d_X h^N ∧ d_U d_X h^N = 0 everywhere on X \ M̄_{N−1}. Now, we can apply (A_{N−1}) to a and v. Because (v, a) is not in M̃_N, d_X h^0_v(a) ∧ . . . ∧ d_X h^N_v(a) ≠ 0. Restricting the neighborhood V_a given by (A_{N−1}), we can assume that the set {h^0_v, . . . , h^N_v} can be extended to a coordinate system in V_a, meeting the assumptions of Lemma 5.1. Applying this lemma to Y = U, Z = V_a, f^i = h^i, we get that h^N is a function of u and h^0_v, . . . , h^N_v only in U × V_a.

Step 4. Proof of the Fact that M_N has Codimension 1. We choose a and V_a as in Step 3. For simplicity, let us denote the restrictions h^0_v|_{V_a}, . . . , h^N_v|_{V_a} by x^0, . . . , x^N. Then h^i = H^i(u, x^0, . . . , x^i) for 0 ≤ i ≤ N in U × V_a. We have

d_X h^0 ∧ . . . ∧ d_X h^N = (∂H^0/∂x^0)(∂H^1/∂x^1) · · · (∂H^N/∂x^N) dx^0 ∧ . . . ∧ dx^N,

d_X h^0 ∧ . . . ∧ d_X h^{N−1} = (∂H^0/∂x^0)(∂H^1/∂x^1) · · · (∂H^{N−1}/∂x^{N−1}) dx^0 ∧ . . . ∧ dx^{N−1}.

Because V_a ∩ M̄_{N−1} = ∅, the partial derivatives ∂H^0/∂x^0, ∂H^1/∂x^1, . . . , ∂H^{N−1}/∂x^{N−1} are all everywhere nonzero in U × V_a. Because M̃_N = {(u, x) ∈ U × X | d_X h^0_u(x) ∧ . . . ∧ d_X h^N_u(x) = 0},
we see that

M̃_N ∩ (U × V_a) = {(u, x) ∈ U × V_a | (∂H^N/∂x^N)(u, x) = 0}.

What remains to be proven is that M_N has an empty interior. Otherwise, M_N would contain an open set Ω, and Ω \ (Z_N ∪ M̄_{N−1}) would be a nonempty open set. Take a point a ∈ Ω \ (Z_N ∪ M̄_{N−1}). Apply the considerations just developed to a, and restrict the neighborhood V_a we have constructed to V_a ∩ (Ω \ (Z_N ∪ M̄_{N−1})). Denote by P : M̃_N ∩ (U × V_a) → V_a the restriction to M̃_N of the projection U × V_a → V_a. Because P is surjective, Sard's theorem and the implicit function theorem show that there is an open subset W of V_a, and an analytic mapping ū : W → U, such that

(∂H^N/∂x^N)(ū(x), x) = 0 for all x ∈ W.

The same reasoning as before will show that Σ cannot be uniformly infinitesimally observable. Let ξ̂ : [0, e[ → T W be any maximal (for positive times) solution of the feedback system

dξ̂/dt = T_X f(ū(π(ξ̂)), ξ̂) in T W,

such that ξ̂(0) = ξ_0 ≠ 0, but d_X h^i_{ū(x_0)}(ξ_0) = 0 for 0 ≤ i ≤ N − 1, x_0 = π(ξ_0). As before, we have

d/dt (d_X h^i_û(ξ̂)) = d_X h^{i+1}_û(ξ̂) + d_U d_X h^i(dû/dt, ξ̂),  0 ≤ i ≤ N − 1,

where û(t) = ū(π(ξ̂(t))), û : [0, e[ → U. Because ∂H^i/∂x^i ≠ 0 in U × V_a, 0 ≤ i ≤ N − 1, d_U d_X h^i(dû/dt, ξ̂) is a linear combination of d_X h^0_û(ξ̂), . . . , d_X h^i_û(ξ̂). Also,

d_X h^N_û(ξ̂) = Σ_{j=0}^N (∂H^N/∂x^j)(û, π(ξ̂)) dx^j(ξ̂).
However,

(∂H^N/∂x^N)(û, π(ξ̂)) = (∂H^N/∂x^N)(ū(π(ξ̂)), π(ξ̂)) = 0.

So d_X h^N_û(ξ̂) is again a linear combination of d_X h^0_û(ξ̂), . . . , d_X h^{N−1}_û(ξ̂). Again we can apply Cauchy's uniqueness theorem to get a contradiction. Thus, M_N, and hence M̄_N, have codimension 1, because the interior of M_N is empty. This ends the proof of Theorem 3.2.
4 The Case d y > du
We refer to the notations of Chapter 2, Sections 3 and 4. The main purpose in this chapter is to show that, in the case of d y > du , the picture is completely reversed: Roughly speaking, the observability becomes a generic property. More precisely, the strong differential observability property of order 2n + 1 (in the sense of the Definition 4.2, Chapter 2) is generic (in the Baire sense only: It is an open problem to prove that the set of strongly differentially observable systems contains an open dense set of systems. If we were able to show this openness property, some deep technical complications could be avoided in the proof of the other results: in particular, the “complexification step” below). Another very important result is the following. Observability (for all L ∞ inputs) is a dense property, that is, any system can be approximated by an observable one. In the case where X is compact, strong differential observability means that S N is an embedding, or that N (., u N ) is an embedding for all u N . Therefore, the set of systems, such that S N is an embedding, is generic, if N ≥ 2n + 1. Of course, there is no chance to prove such a general result if X is not compact: Exercise 0.1. Show that, even among ordinary smooth mappings between finite dimensional manifolds X, Y , embeddings may not be dense, whatever the dimension N = dim(Y ) with respect to n = dim(X ). (For hints, see [26, p. 54].) The reason for the fact stated in Exercise 0.1 is that embeddings are proper mappings. For our practical purposes (synthesis of observers and output stabilization), we don’t need that the fundamental mapping S N be proper. It is sufficient for it to be an injective immersion. Hence, all the genericity results we prove are true for a noncompact X and for the Whitney topology. But for 36
the sake of simplicity, we shall assume in this chapter that X is a compact manifold, and leave all generalizations to the reader as exercises. Also, we will assume that X is an analytic manifold. One should be conscious of the fact that this is not a restriction: Any C ∞ manifold possesses a compatible C ω structure (see [26, p. 66]). Again, in this chapter, U = I du , where I is a closed bounded interval. Because we make an extensive use of subanalytic sets and their properties, this compactness assumption cannot be relaxed.
1. Definitions and Notations

The systems under consideration are of the form

(Σ)   dx/dt = f(x, u^(0));   y = h(x, u^(0)),      (33)

or

(Σ)   dx/dt = f(x, u^(0));   y = h(x),      (34)
in order to take into account the more practical cases in which the output function h does not depend on u: The proofs of the genericity results in that case are not different from the proofs in the general case, where h depends on u, but these results do not follow from the results in the general case. In agreement with the notations introduced in Section 4 of Chapter 2, we shall use the notation u (0) for the control variable. For technical reasons, we will have to handle these two classes of systems in the C r case, and also some other classes of systems. Let us explain this below. We will assume that f and h are at least C r w.r.t. (x, u (0) ), for r large enough. We shall endow the set of systems with the topology of C r uniform convergence over X × U. The set of systems with this topology will be denoted by Sr . In the particular case where h does not depend on u, it will be denoted by S 0,r . Then, Sr = F r × H r , S 0,r = F r × H 0,r , where F r denotes the set of u-parametrized vector fields f over X , that are C r with respect to both x and u. Also, H r denotes the set of C r maps h: X × U → R d y , and H 0,r denotes the set of C r maps h: X → R d y . The spaces F r , H r , and H 0,r will also be endowed with the C r topology. The spaces H r , H 0,r , and F r are the Banach spaces of C r sections of the following bundles B H , B H 0 , B F over X × U , X and X × U respectively: B H = X × U × R d y , B H 0 = X × R d y , B F = T X × U. The spaces S 0,r , H 0,r are closed subspaces of the Banach spaces Sr , H r , respectively.
The bundles of k-jets of C r sections of B F , B H , B H 0 are denoted by J k F, J k H 0 , respectively. The symbol × X will mean the standard fiber-product of bundles over X. For bundles E : E → X × U and F : F → X, we set E× X F = {(e, f ) | pr1 ◦ E (e) = F ( f )}, with pr1 : X × U → X. Then E× X F is naturally a fiber bundle over X × U . The set of k-jets of systems is the fiber product J k S = J k F × X ×U J k H. Also, J k S 0 = J k F× X J k H 0 . The sets J k S 0 and J k H 0 × U are also subbundles of J k S and J k H , respectively. The evaluation jet mapping on the jet spaces is denoted by evk , J k H,
evk : Sr × X × U → J k S, (resp. S 0,r × X × U → J k S 0 ), evk , x, u (0) = j k x, u (0) = j k f x, u (0) , j k h x, u (0) , where j k (x, u (0) ) denotes the k-jet of at (x, u (0) ). As we said, the statements of our results are the same for Sr and S 0,r . Our proofs also are the same because the dependence of h in the control plays no role in them. In the following, we shall have to consider subspaces of Sr , F r , H r , or 0,r S , F r , H 0,r , which will be denoted by the letters S, F, H , possibly with some additional indices, and which will have the following two properties: (A1 ) S, F, H are subspaces of Sr , F r , H r , or S 0,r , F r , H 0,r . They are Banach spaces for a stronger norm than the one from the overspaces Sr , F r , . . . (A2 ) For all (x1 , u 1 ) = (x2 , u 2 ), (xi , u i ) ∈ X × U, i = 1, 2, there is a ∈ S (resp. S 0 ) such that the k-jets j k (xi , u i ) at the points (xi , u i ) have arbitrary prescribed values. For the proof of our observability Theorems 2.1, 2.2, and 2.3 below, we will need only the obvious case where S = Sr (resp. S 0,r ). For the proof of our Theorem 2.4 (the real analytic case), we will make the choice S = S K or 0 , which will be described now (for the details, refer to [21]). S = SK Here, X˜ denotes a complex manifold. O X (resp. O X˜ ) denotes the sheaf of germs of real analytic (resp. holomorphic) functions on X (resp. X˜ ). A complexification of (X, O X ) is a complex manifold ( X˜ , OX˜ ) together with an antiinvolution (σ, σ ∗ ) and a homomorphism (ρ, ρ ∗ ) : (X, O X ⊗ R C) → (F, OX˜ |F), where F is the set of fixed points of σ. Complexifications of real analytic Hausdorff paracompact manifolds do exist. Their germs along X are isomorphic. The complexification is a natural correspondence in the sense that X˜ × U˜ is a complexification of X × U in
a natural way, and so is the tangent bundle T X˜ with respect to the tangent bundle T X of X. We consider a complexification X˜ of X. By Grauert’s theorem (see [21, 22]), X˜ contains a neighborhood of X , which is a Stein manifold. Replacing X˜ by this neighborhood, we can assume that X˜ is Stein. Then, Cdu being the natural complexification of R du , X˜ × Cdu is also a Stein manifold, that we denote by X × U. Let K be the class of closures of open, connected, σ -invariant, relatively compact neighborhoods of X × U in X × U . For K ∈ K, let us denote by B F(K ) (resp. V F(K )), the set of all functions h(x, u (0) ) (resp. parametrized vector fields f (x, u (0) )) that are defined and continuous on K and holomorphic on Int(K ). The set B F(K ) endowed with the sup-norm becomes a Banach space. V F(K ) has also a Banach space structure, obtained by embedding X˜ into a complex space C p , the embedding being compatible with the conju B F(K ) (resp. V F(K )) is the real gation involutions on X˜ and C p . The set Banach subspace of conjugate-invariant elements of B F(K ) (resp. V F(K )). B F(K ) The restriction res K : h → h |X ×U (resp. f → f |X ×U ) maps (resp. V F(K )) continuously and injectively into C ∞ (X × U ) (resp. V F ∞ (X × U )). The subset C ω (X × U ) (resp. V F ω (X × U )) of all analytic functions (resp. parametrized vector fields) on X × U is the inductive limit over K ordered by inclusion, Lim( B F(K )) (resp. Lim(V F(K ))). −→ −→ ω This means that for any h ∈ C (X × U ) (resp. f ∈ V F ω (X × U )), B F(K ) (resp. f ◦ ∈ V F(K )) such that h = res K (h ◦ ) ∃K ∈ K, and h ◦ ∈ ◦ (resp. f = res K ( f )). We denote by S K , FK , HK , S K = FK × HK , S K ⊂ Sr , the Banach spaces formed by the restrictions res K () = (res K ( f ), res K (h)) of elements = ( f, h) of V F(K ) × B F(K ), with the norm induced by V F(K ) × B F(K ). Let us state some properties of S K needed later on: (i) the norm on S K is stronger than the norm on Sr , (ii) for any finite set of points pi = (xi , u i ), 1 ≤ i ≤ N in X × U, and elements ji ∈ J k S(xi , u i ), any K ∈ K, there is a ◦ ∈ S K such that j k ◦ (xi , u i ) = ji , (iii) the evaluation map ev K : S K × X × U → J k S, , x, u (0) → j k x, u (0) , is a C ∞ mapping. Hence, in particular, the assumptions (A1 ), (A2 ) above are met. The spaces 0 , H 0 , are also defined in the same way and have the same properties. SK K
Let us end this section with a few more notations. At a point x such that f (x) = 0, TX f (x) denotes the linearisation of f at x, i.e. the endomorphism of Tx X associated to the vector field f Denoting by X the diagonal of X × X, define the maps DS N , DS N, (0) , u (1) , . . . , u (N −1) ) ): (remember the notation u = (u D N , and D N N DS N : ((X × X ) \ X ) × U × R (N −1)du × S → R N d y × R N d y × R N du , DS N (x1 , x2 , u N , ) = N (x 1 , u N ), N (x 2 , u N ), u N , D N : ((X × X ) \ X ) × U × R (N −1)du × S → R N d y × R N d y , D N (x1 , x2 , u N , ) = N (x 1 , u N ), N (x 2 , u N ) , D N (x 1 , x 2 , u N ) = D N (x 1 , x 2 , u N , ),
DS N (x 1 , x 2 , u N ) = DS N (x 1 , x 2 , u N , ).
As we said, in the remainder of the chapter we will be interested in the following properties: (N −1)du into R N d y × R N du . (F): S N is an embedding from X × U × R Since X and U are compact, by the definition of S N , this is equivalent to (F1 ) and (F2 ): (F1 ): S N is one-to-one, (F2 ): S N is an immersion. (F1 ) is equivalent to the fact that the map D N : ((X × X ) \ X ) × U × (N −1)d N d N d u y y →R × R , avoids the diagonal in R N d y × R N d y . R 2. Statement of Our Differential Observability Results In the next section, we shall prove the following theorems, that hold for r > 0, large enough. Theorem 2.1. The set of systems such that (F2 ) is true, i.e., S N is an r 0,r immersion, contains an open dense subset of S (resp. S ), for N ≥ 2n. Theorem 2.2. The set of systems such that (F) is true (i.e., S N is an embedding, equivalently is strongly differentially observable) contains a residual subset of Sr (resp. S 0,r ), for N ≥ 2n + 1. A bound B > 0 on the derivatives of the controls being given, denote by I B the interval [−B, B]. Theorem 2.3. The set of systems such that the restriction of S N to X × U × (k−1)du IB is an embedding, is open, dense in Sr (resp. S 0,r ), for N ≥ 2n + 1.
Theorem 2.4. (X analytic). The set of analytic systems such that S N is r 0,r an embedding, is dense in S (resp. S ), for N ≥ 2n + 1. Now, we shall give several examples showing that all these theorems are false when d y = du = 1. In all the examples, X = S 1 , the circle, and U = [−1, 1]. Consider θ˙ = 1, 1 ( ) y = ϕ1 (θ) + ϕ2 (θ)u, with the assumption (H ): ϕ1 (θ0 ) = 0, ϕ1 (θ0 ) = 0, ϕ2 (θ0 ) = 0. One should observe that the condition (H ) is stable under small perturbations and holds if θ0 = 0, ϕ1 (θ ) = cos(θ), ϕ2 (θ) = sin(θ). Exercise 2.1. Show that, if θ = θ0 , taking u (0) = 0, we can compute u (1) , . . . , u (N ) satisfying the equation y y˙ dθ . = 0, . y (N ) and show that there exists an open neighborhood U of 1 (C 2 open in S ∞ ), such that S k is not an immersion for any k, for ∈ U. This is a counterexample of Theorems 2.1 and 2.2 when d y ≤ du . It is also a counterexample of Theorem 2.4. A better example is the following system ε1 , for ε small: 1 θ˙ = 1, ε y = εϕ1 (θ) + ϕ2 (θ)u, with the same assumption (H ) on θ0 , ϕ1 and ϕ2 . Chose an arbitrary integer k > 0 and a real B > 0. Exercise 2.2. Show that there is an ε0 sufficiently small so that, for the system ε10 and for a C k neighborhood V of ε10 in S ∞ , there is a θ0 and a point (0) (1) u (0) , u (1) , . . . , u (k−1) such that S k is not immersive at θ0 , u , u , . . . , (k−1) (0) (i) u , u ∈ U, u ∈ I B , 1 ≤ i ≤ k − 1, ∈ V. This is a counterexample to Theorem 2.3.
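For the first counterexample Σ_1 above (φ_1 = cos, φ_2 = sin, θ_0 = 0), the computation asked for in Exercise 2.1 can be made completely mechanical: ∂y/∂θ already vanishes at (θ_0, u^(0)) = (0, 0) by assumption (H), and each further equation ∂y^(k)/∂θ = 0 is affine in u^(k) with a nonzero coefficient, so u^(1), . . . , u^(N) can be computed recursively. A hypothetical sympy sketch of this recursion (ours, not the book's):

```python
# Recursively choose the jet u^(1), ..., u^(N) so that d/dtheta of y, y', ..., y^(N)
# all vanish at theta0 = 0, u^(0) = 0, for theta' = 1, y = cos(theta) + sin(theta)*u.
import sympy as sp

theta = sp.symbols('theta', real=True)
N = 5
u = [sp.symbols(f'u{i}', real=True) for i in range(N + 1)]   # u^(0), ..., u^(N)

def time_derivative(expr):
    # total time derivative along theta' = 1, with (u^(i))' = u^(i+1)
    d = sp.diff(expr, theta)
    for i in range(N):
        d += sp.diff(expr, u[i]) * u[i + 1]
    return d

y = sp.cos(theta) + sp.sin(theta) * u[0]
derivs = [y]
for _ in range(N):
    derivs.append(time_derivative(derivs[-1]))

subs = {theta: 0, u[0]: 0}
for k in range(1, N + 1):
    eq = sp.diff(derivs[k], theta).subs(subs)      # affine in u^(k), coefficient 1
    uk = sp.solve(sp.Eq(eq, 0), u[k])[0]
    subs[u[k]] = sp.simplify(uk)
    print(f'u^({k}) =', subs[u[k]])
```

With these values the whole column (∂y/∂θ, . . . , ∂y^(N)/∂θ) vanishes at the chosen jet, which is exactly the degeneracy asked for in the exercise.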
Using the same typology, we can also construct an example showing that, if d y > du , the set of systems such that S N is an immersion is not open, for any N . Consider the system: ˙ 2 θ = 1, ε y = ϕ1 (θ) + εϕ2 (θ)u, 1 y2 = 0, with ϕ1 (θ ) = cos(θ ), ϕ2 (θ ) = sin(θ). At θ0 = 0, the assumption (H ) is satisfied. 2
Exercise 2.3. Show that, for ε = 0, S2 ε is an immersion. For ε = 0, ε ε2 small, Sk is not an immersion for any k, using the same reasoning as in the previous examples. Exercise 2.4. Show that the mapping: Sr → C r −N +1 X × U × R (N −1)du , R N d y × U × R (N −1)du , → S N, is not continuous for the Whitney topology (over C r −k+1 (X × U × R (N −1)du , R kd y ×U × R (N −1)du )). Remark 2.1. Theorem 2.4 is not a consequence of Theorem 2.2: this theorem does not prove that there is an open dense subset of systems satisfying (F).
3. Proof of the Observability Theorems The considerations of this section apply to both Sr and S 0,r . Moreover, in order to prove Theorem 2.4, relative to analytic systems, case which is crucial for the proof of Theorem 5.1 below, we will have to apply our reasonings to 0 , for some K ∈ K. To avoid cluttering our the cases where S = S K or S = S K text with “respectively” and other alternatives, and for the sake of clarity, we will denote by S any of these classes of systems in all the proofs below, and H 0 will be considered as the subspace of H of all functions in H , independent of u (0) . Below, when the difference between H and H 0 is not explicitly stated, h( p) or d rX h( p; . . .) will mean h(x0 ) or d rX h(x0 ; . . .), if h ∈ H 0 and p q p = (x0 , u (0) ), and all the expressions d X dU h with q > 0 will be taken equal k 0 to 0. Note that the “bad sets” for J S (see below) are just the intersections of the bad sets for J k S with J k S 0 .
3.1. Openness and Density of Immersivity 3.1.1. The “Bad Sets” All our bad sets will be partially algebraic or semi-algebraic subbundles of vector bundles: Definition 3.1. A subbundle B of a vector bundle E is said partially algebraic (semi-algebraic) (PA or PSA respectively), if its typical fiber is an algebraic (or semi-algebraic) subset of the vector space which is the typical fiber of E. It is equivalent to say that the fibers of the bundle are algebraic (resp. semi algebraic) in the corresponding fibers of E. Definition 3.2. (i) Bˆ 1 (k) is the subset of J k S × R (k−1)du of all ( j k (x, u (0) ), u ) such (0) that: 1) f (x, u (0) ) = 0, 2) rank(TX k (x, u , u )) < n, k (k−1)d u of all ( j k (x, u (0) ), u ) such (ii) Bˆ 2 (k) is the subset of J S × R that: 1) resp. f (x, u (0) ) = 0, 2) the linear observed system (0) (d X h(x, u (0) ), TX f (x, u (0) )) is observable, 3) rank(TX k (x, u , u )) < n, (here, u = u (1) , . . . , u (k−1) ). See Appendix 7.1 for the notion of observability for linear systems. Definition 3.3. B3 (k) is the subset of J k S of all j k (x, u (0) ) such that: (1) f (x, u (0) ) = 0 and (2) the linear observed system (d X h(x, u (0) ), TX f (x, u (0) )) is unobservable. Definition 3.4. Bim (k) is the union B1 (k) ∪ B2 (k) ∪ B3 (k), where Bi (k), i = 1, 2 denotes the canonical projection of Bˆ i (k) in J k S. The following two lemmas are obvious: Lemma 3.1. Bˆ 1 (k), Bˆ 2 (k), B1 (k), B2 (k), B3 (k), Bim (k) are PSA in their respective ambient vector bundles. k Lemma 3.2. S k is an immersion if and only if j avoids Bim (k).
3.1.2. Estimate of the Codimension of Bim (k) Clearly, if B is a PSA subbundle of a vector bundle E, Codim(B, E), the codimension of B in E, is equal to the codimension of the typical fiber of B in the typical fiber of E.
a) Estimate of Codim(B3 (k), J k S): let J k S( p), p = (x, u (0) ), be a typical
fiber of J k S and let V ⊂ J k S( p) be the vector subspace V = { j k ( p)| f ( p) = 0}. V has codimension n in J k S( p). Let λ : V → (Tx∗ X )d y × End(Tx X ) be the mapping λ( j k ( p)) = (d X h( p), TX f ( p)). If U ⊂ (Tx∗ X )d y × End(Tx X ) denotes the set of unobservable couples (C, A), then the typical fiber of B3 (k) is λ−1 (U). But, by Appendix 7.1, Codim(U, (Tx∗ X )d y × End(Tx X )) = d y . Hence, Codim(B3 (k; p), V ) = d y , and Codim(B3 (k; p), J k S( p)) = Codim(B3 (k; p), V ) + Codim(V, J k S( p)) = n + dy . b) Estimate of Codim(B1 (k), J k S): let G be the open subset of
J^k F ×_X T X × R^{(k−1)d_u} of all (j^k f(p), ξ, u) such that:

(i) f(p) ≠ 0 (again, p = (x, u^(0))),
(ii) ξ ≠ 0.

Let μ : J^k H ×_{X×U} G (resp. J^k H^0 ×_{X×U} G) → R^{k d_y}, μ(j^k h(p), j^k f(p), ξ, u) = T_X Φ_k^Σ(p, u; ξ), where Σ = (f, h). The set G is the total space of an open subbundle of the vector bundle J^k F ×_X T X × R^{(k−1)d_u} → J^k F × R^{(k−1)d_u},
with fibers the tangent spaces to X. The bundle G is conical in the sense that each fiber of G is a cone in the corresponding tangent space to X, and µ is a homogeneous submersion (see Appendix 7.2, Lemma 7.2). Hence, because Bˆ 1 (k) is the canonical projection of µ−1 (0) in J k S × R (k−1)du : Codim µ−1 (0), J k S× X T X × R (k−1)du = kd y , Codim Bˆ 1 (k), J k S × R (k−1)du = kd y + 1 − n, Codim(B1 (k), J k S) = k(d y − du ) + 1 + du − n. c) Estimation of Codim(B2 (k), J k S) (the most difficult case). Bˆ 2 (k) is the
subset of J k S × R (k−1)du of all j k ( p, u ) such that: (1) f ( p) = 0; (2) the linear system (d X h( p), TX f ( p)) is observable, p = (x0 , u (0) ); (3) the tangent kd y has a nonzero kernel. mapping TX k ( p, u ) : Tx0 X → R In the study of Bˆ 2 (k), we will need the following lemma.
Lemma 3.3. Let W be a finite dimensional vector space, A : W → W, and C : W → R l be two linear mappings. If there exists a nonzero vector ∈ W such that (i) C Ar = 0 for 0 ≤ r ≤ t, t some integer and (ii) the vectors {Ar |0 ≤ r ≤ t + 1} in W are linearly dependent, then the system (C, A) is not observable. (See Appendix 7.1 for the notion of observability of linear systems.) Proof. (ii) implies that there exists an integer m, 0 ≤ m ≤ t, such that Am+1 belongs to the linear span, Span{Ai |0 ≤ i ≤ m}. Then, Ar ∈ Span{Ai |0 ≤ i ≤ m} for all integers r. Because Span{Ai |0 ≤ i ≤ m} ⊂ Ker C, Ar ∈ Ker C for all r. Hence, (C, A) is not observable. To study Bˆ 2 (k) more conveniently, we shall split it into several subsets that are easier to handle. Definition 3.5. For every integer ρ, 1 ≤ ρ ≤ k − 1: (i) Bˆ 6 (k, ρ) will be the subsets of Bˆ 2 (k) of all ( j k ( p), u ) such that: (1) u (1) = u (2) = . . . = u (ρ−1) = 0, (2) dU f ( p, u (ρ) ) = 0. (ii) Bˆ 7 (k, ρ) will be the subsets of Bˆ 2 (k) of all ( j k ( p), u ) such that: (1) u (1) = u (2) = . . . = u (ρ−1) = 0, (2) dU f ( p, u (ρ) ) = 0. (iii) Let B6 (k, ρ), B7 (k, ρ) denote the canonical projections of Bˆ 6 (k, ρ), Bˆ 7 (k, ρ), respectively, in J k S. Lemma 3.4. (i) Bˆ 2 (k) = ∪k−1 ( Bˆ 6 (k, ρ) ∪ Bˆ 7 (k, ρ)), ρ=1
B2 (k, ρ) = ∪k−1 ρ=1 (B6 (k, ρ) ∪ B7 (k, ρ)), (ii) Bˆ 6 (k, ρ), Bˆ 7 (k, ρ) (resp. B6 (k, ρ), B7 (k, ρ)) are PSA subbundles of J k S × R (k−1)du (resp. J k S), (iii) Bˆ 6 (k, ρ) = Bˆ 7 (k, ρ) = B6 (k, ρ) = B7 (k, ρ) = ∅ if ρ > n. Proof. (i) and (ii) are trivial. (iii): By Lemma 7.3, Appendix 7.2, for any ( j k ( p), u ) ∈ Bˆ 6 (k, ρ) ∪ Bˆ 7 (k, ρ), there exists a ∈ Tx0 X, = 0, such that d X h( p; TX f ( p)i ) = 0 for 0 ≤ i ≤ ρ − 1, ( p = (x0 , u (0) )). Applying Lemma 3.3 with t = ρ − 1, we get that (d X h( p), TX f ( p)) is not observable because the vectors TX f ( p)i , 0 ≤ i ≤ ρ − 1 are necessarily linearly dependent because dim Tx0 X = n < ρ. This is a contradiction.
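Before estimating these codimensions, here is a quick numerical sanity check of Lemma 3.3 (with made-up matrices, not taken from the book): we build a pair (C, A) and a vector ξ satisfying its two hypotheses with t = 1, and verify that the observability matrix indeed loses rank.

```python
# Sanity check of Lemma 3.3 (our illustration): C A^r xi = 0 for r = 0, 1 and
# {xi, A xi, A^2 xi} dependent imply span{A^r xi} is A-invariant and inside ker C,
# so the pair (C, A) is unobservable.
import numpy as np

n = 4
xi = np.array([1., 0., 0., 0.])
A = np.zeros((n, n))
A[1, 0] = 1.0      # A @ xi = e2
A[0, 1] = 1.0      # A @ e2 = xi, hence A^2 @ xi = xi (dependence among xi, A xi, A^2 xi)
A[2, 2] = 3.0
A[3, 3] = -1.0
C = np.array([[0., 0., 1., 1.]])   # C @ xi = 0 and C @ (A @ xi) = 0   (here t = 1)

O = np.vstack([C @ np.linalg.matrix_power(A, r) for r in range(n)])
print(np.linalg.matrix_rank(O))    # 2, strictly smaller than n = 4: (C, A) is unobservable
```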
Now, we shall estimate the codimensions of B6(k, ρ) and B7(k, ρ).

d) Estimation of Codim(B7(k, ρ), J^k S). Let B̃_8(k, ρ) be the subset of J^k S ×_X P(T X) × R^{d_u} of all (j^k Σ(p), l, w) such that:
1. f ( p) = 0, 2. dU f ( p; w) = 0, 3. d X h( p; TX f ( p)i ) = 0, 0 ≤ i ≤ ρ − 1 and d X h( p; TX f ( p)ρ ) + d X dU h( p; ⊗ w) = 0, for all ∈ l, 4. The vectors TX f ( p)i , 0 ≤ i ≤ ρ are linearly independent. We claim that the canonical projection B8 (k, ρ) of B˜ 8 (k, ρ) in J k S contains B7 (k, ρ). An element e ∈ B7 (k, ρ) is the image of an element ( j k ( p), u ) ∈ Bˆ 7 (k, ρ). By condition 3 in the definition of Bˆ 2 (k), there exists a ∈ Tx0 X, = 0, such that TX k ( p, u ) = 0. P(Tx X ) is the projective space associated to Tx X. Let l be the class of in P(Tx0 X ). If we show that the triple ( j k ( p), l, u (ρ) ) belongs to B˜ 8 (k, ρ), it will follow that B8 (k, ρ) ⊃ B7 (k, ρ). Therefore, we have to check that the triple ( j k ( p), l, u ) satisfies the conditions 1-4 defining B˜ 8 (k, ρ). But conditions 1 and 2 of the definition of B˜ 8 (k, ρ) follow from condition 1 of the definition of Bˆ 2 (k) and (ii)-2 of Definition 3.5 of Bˆ 7 (k, ρ). Condition 3 follows from the fact that T X k ( p, u ) = 0 and Lemma 7.3-(2), Appendix 7.2. Finally, condition 4 follows from the condition 3 just proven and Lemma 3.3 applied with t = ρ − 1, W = Tx0 X, A = TX f ( p), C = d X h( p) and . We get a contradiction with the observability condition 2 of Definition 3.2. Clearly, B˜ 8 (k, ρ), B8 (k, ρ) are PSA subbundles of the bundles k J S× X P(T X ) × R du and J k S respectively. The inclusion B8 (k, ρ) ⊃ B7 (k, ρ) implies that 1. Codim(B7 (k, ρ), J k S) ≥ Codim(B8 (k, ρ), J k S). We have also: 2. Codim(B8 (k, ρ), J k S) ≥ Codim( B˜ 8 (k, ρ), J k S× X P(T X ) × R du ) − du − n + 1. To estimate Codim( B˜ 8 (k, ρ), J k S× X P(T X ) × R du ), let us note that
B˜ 8 (k, ρ) is the projection of the conical subset −1 (0) of J k S× X T X × R du where is a fiberwise mapping: G → pr x∗ T X × R (1+ρ)d y defined as follows. The set G is the PSA subbundle of J k S× X T X × R du of all ( j k ( p), , w) such that (i) f ( p) = 0, and (ii) the vectors TX f ( p)i , 0 ≤ i ≤ ρ, are linearly independent. ( j k ( p), ,w) = (dU f ( p;w), d X h( p; ), d X h( p;TX f ( p)), . . . , d X h ( p; TX f ( p)ρ−1 ), d X h( p; TX f ( p)ρ ) + d X dU h( p; ⊗ w)).
Clearly, is a submersion because the TX f ( p)i , 0 ≤ i ≤ ρ are linearly independent. Hence, Codim( −1 (0), G) = n + (1 + ρ)d y . Clearly, G is a submanifold of J k S× X T X × R du of codimension n. Hence Codim( −1 (0), J k S× X T X × R du ) = 2n + (1 + ρ)d y , Codim( B˜ 8 (k, ρ), J k S× X P(T X ) × R du ) = 2n + (1 + ρ)d y . Using 1 and 2 above: 3. Codim(B7 (k, ρ), J k S) ≥ n + (1 + ρ)d y − du + 1. e) Estimation of Codim(B6 (k, ρ), J k S). Let G ⊂ J k F× X T X × R (k−1)du
be the subset of all ( j k f ( p), , u ) such that: (i) f ( p) = 0; (ii) u (1) = u (2) = . . . = u (ρ−1) = 0; and (iii) the vectors TX f ( p)i , 0 ≤ i ≤ ρ, are linearly independent; and (iv) dU f ( p, u (ρ) ) = 0. The set G is clearly a PSA conical subbundle of J k F× X T X × R (k−1)du and a submanifold of codimension: 4. Codim G, J k F× X T X × R (k−1)du = n + (ρ − 1)du . Let µ: J k H × X ×U G (resp. J k H 0 × X ×U G) → R kd y : µ( j k h( p), j k f ( p), , u ) = TX k ( p, u ; ), where = ( f, h).
Bˆ 6 (k, ρ) is contained in the image of µ−1 (0) by the canonical projection: J k S× X T X × R (k−1)du → J k S × R (k−1)du . To see this, let ( j k ( p), u ) ∈ Bˆ 6 (k, ρ). Then, by condition 3 of Definition 3.2, there is a ∈ Tx0 X, ( p = (x0 , u (0) )), = 0, such that TX k ( p, u ; ) = k −1 0. Let us show that ( j ( p), , u ) belongs to µ (0). All we have to do is to check that ( j k f ( p), , u ) ∈ G, where = ( f, h). Now, f ( p) = 0 by condition 1 of Definition 3.2 of Bˆ 2 (k), u (1) = u (2) = . . . = u (ρ−1) = 0, and dU f ( p, u (ρ) ) = 0 by the conditions 1 and 2 of Definition 3.5 for Bˆ 6 (k, ρ). To check that the TX f ( p)i , 0 ≤ i ≤ ρ are linearly independent, note that by the Lemma 7.3 (2) of Appendix 7.2, the fact that TX k ( p; u , ) = 0 i implies that d X h( p; TX f ( p) ) = 0 for 0 ≤ i ≤ ρ − 1. Then, Lemma 3.3 and condition 2 of Definition 3.2 for Bˆ 2 (k) imply that the TX f ( p)i ,
0 ≤ i ≤ ρ, are linearly independent. Hence, Codim Bˆ 6 (k, ρ), J k S × R (k−1)du ≥ Codim µ−1 (0), J k S× X T X × R (k−1)du + 1 − n. Statement 1 of Lemma 7.3 in Appendix 7.2 implies immediately that µ is a submersion. Hence Codim(µ−1 (0), J k H × X ×U G) = kd y, (resp. Codim(µ−1 (0), J k H 0 × X ×U G) = kd y ). Using 4,
Codim µ−1 (0), J k S× X T X × R (k−1)du = kd y + n + (ρ − 1)du , Codim Bˆ 6 (k, ρ), J k S × R (k−1)du ≥ kd y + n + (ρ − 1)du − n + 1, 5. Codim(B6 (k, ρ), J k S) ≥ kd y + 1 + (ρ − k)du .
Points 3, 5 and Lemma 3.4 (i) imply that Codim(B2 (k), J k S) ≥ min{n + (1 + ρ)d y + 1 − du , kd y + 1 + (ρ − k)du |ρ ≥ 1}. Hence Codim(B2 (k), J k S) ≥ min{n + 2d y + 1 − du , k(d y − du ) + 1 + du }. f) Final estimation of Codim(Bim (k), J k S). Combining the results of this
section, a, b, c, d, and e, we obtain Codim(Bim (k), J k S) ≥ min{n + d y , k(d y − du ) + 1 − n + du , n + 2d y + 1 − du , k(d y − du ) + 1 + du }. 3.1.3. Proof of the Openness and Density, for Immersivity Clearly, for k ≥ 2n, Codim(Bim (k)) > n + du , because d y > du . The same applies to the closure cl(Bim (k)), which is still PSA (see [38]). By Abraham’s theorem (see [1]) on transversal density and openness of nonintersection, the set of ∈ S such that j k avoids cl(Bim (k)) is open, dense. Of course, we have used our assumptions A1 , A2 of Section 2, which imply that the map j k : S → C r (X × U, J k S) is a C r representation, and evk : S × X × U → J k S is a submersion. Remark 3.1. In the case where d y = du , Bim (k) has codimension du − n + 1. This shows that the set on which S k is not immersive may have generically
full codimension, whatever k. This is in agreement with the results of Chapter 3. (In that case, to get openness, we have to use an argument of openness of transversality to a closed stratified set, such as the one in [20].) 3.2. Density of Injectivity 3.2.1. The Bad Sets Definition 3.6. Let B4 (k) denote the subset of J k S × J k S of all couples ( j k ( p), j k (q)) such that: (1) p = q, p = (x1 , u (0) ), q = (x2 , u (0) ); (2) f ( p) = f (q) = 0, and (3) h( p) = h(q). Definition 3.7. (i) Let Bˆ 5 (k) be the subset of (J k S)2 × R (k−1)du of all tuples (( j k ( p), j k (q), u ) such that 1) p = q, ( p = (x1 , u (0) ), q = (x2 , u (0) )), 2) f ( p) = 0 or f (q) = 0, 3) k ( p, u ) = k (q, u ), k 2 (ii) B5 (k) denotes the canonical projection of Bˆ 5 (k) in (J S) . Again, we have the obvious lemmas: Lemma 3.5. B4 (k), Bˆ 5 (k), B5 (k), are respectively PSA subbundles of (J k S)2∗ , (J k S)2∗ × R (k−1)du , (J k S)2∗ , where (J k S)2∗ (resp. (J k S)2∗ × R (k−1)du ) denotes the restriction of (J k S)2 (resp. (J k S)2 × R (k−1)du ) to (X × X \ X ) × U, where X and U are the diagonals of X × X and U × U respectively. Lemma 3.6. Let Z ⊂ X × X \ X and let ∈ S be such that the mapping; Z ×U →(J k S)2∗ , (x, y, u (0) ) → ( j k (x, u (0) ), j k (y, u (0) )) avoids Bin (k) = (0) (0) (0) B4 (k) ∪ B5 (k). Then k (x, u , u ) = k (y, u , u ) for all (x, y, u , u ) ∈ (k−1)d u . Z ×U × R 3.2.2. Estimation of the Codimension of Bin (k)1 a) Estimation of the codimension of B4 (k) in (J k S)2∗ : It is obvious that this
codimension is 2n + d y . b) Estimation of the codimension of Bˆ 5 (k) in (J k S)2∗ × R (k−1)du and of B5 (k) in (J k S)2∗ : We will treat only the case f (x, u (0) ) = 0. The case f (y, u (0) ) = 0 is similar. Let (x, y, u (0) ) ∈ (X × X \ X ) × U. The typical fiber Bˆ 5 (k, x, y, u (0) ) of Bˆ 5 (k) in J k S(x, u (0) ) × J k S(y, u (0) ) × R (k−1)du is characterized by the following properties: (i) f (x, u (0) ) = 0 and (0) (0) (ii) k (x, u , u ) − k (y, u , u ) = 0. 1
Do not confuse with Bim (k) introduced in Definition 3.4.
Let G be the subset of J k S(x, u (0) ) × J k S(y, u (0) ) × R (k−1)du of all tuples such that f (x, u (0) ) = 0 and let χ : G → kd k (0) y R be the mapping: χ ( j (x, u (0) ), j k (y, u (0) ), u ) = k (x, u , u ) − (0) (0) −1 k (y, u , u ). Then, Bˆ 5 (k; x, y, u , u ) = χ (0). The map χ is an algebraic mapping, affine in j k h(x, u (0) ). By Lemma 7.1 in Appendix 7.2, for fixed u ∈ R (k−1)du and j k f (x, u (0) ), k the linear mapping j k h(x, u (0) ) ∈ J k H (x, u (0) ) → j (x, u (0) , u ) is surjective. This shows that the map χ : G → R kd y is a submersion. Since Bˆ 5 (k; x, y, u (0) ) = χ −1 (0), Codim( Bˆ 5 (k; x, y, u (0) ), J k S(x, u (0) ) × J k S(y, u (0) ) × R (k−1)du ) = kd y . Hence, Codim( Bˆ 5 (k), (J k S)2∗ × R (k−1)du ) = kd y . It follows that ( j k (x, u (0) ), j k (y, u (0) ), u ),
Codim(B5 (k), (J k S)2∗ ) ≥ k(d y − du ) + du . c) Estimation of the codimension Codim(Bin (k), (J k S)2∗ ).
Codim(Bin(k), (J^k S)^2_*) ≥ min(2n + d_y, k(d_y − d_u) + d_u).

3.2.3. Proof of the Density of Injectivity
Let k ≥ 2n + 1. Then, Codim(Bin(k), (J^k S)^2_*) ≥ 2n + 1 + d_u. Therefore, another application of Abraham's transversal density theorem (we omit the details), as in Section 3.1.3, shows that the set of all Σ ∈ S such that (j^k Σ(x, u^(0)), j^k Σ(y, u^(0))) avoids Cl(Bin(k)) for all (x, y) ∈ Z ⊂ X × X \ Δ_X and all u^(0) ∈ U is residual. If Z is compact, it is open by openness of the nonintersection property on compact sets.

3.3. Proof of the Observability Theorems 2.1, 2.2, 2.3, and 2.4
Just applying the results of Sections 3.1.3 and 3.2 to the cases where S = S^r (resp. S = S^{0,r}) proves Theorems 2.1 and 2.2.

Proof of Theorem 2.3. The set of embeddings of X × U × I_B^{(k−1)d_u} in R^{k d_y} × R^{k d_u} is open. On the other hand, the map

S^r → C^{r−k+1}(X × U × I_B^{(k−1)d_u}, R^{k d_y} × R^{k d_u}),
Σ → SΦ_k^Σ|_B, where SΦ_k^Σ|_B is the restriction of SΦ_k^Σ to X × U × I_B^{(k−1)d_u},

is continuous (by compactness; compare with Exercise 2.4). Hence, the set of Σ ∈ S^r (resp. Σ ∈ S^{0,r}) such that SΦ_k^Σ|_B is an embedding is open. It is dense by Theorem 2.2.
Proof of Theorem 2.4. We consider a fixed C r system 0 . We approximate it in the C r topology by an analytic one, 1 . By Section 1, 1 is in one of our 0 ). The Banach space S satisfies the assumptions Banach spaces S K (resp. S K K A1 and A2 of Section 1, by the properties (i), (ii), and (iii) of the same section. We just apply the results of Sections 3.1.3 and 3.2.3 to the case in which 0 ), to get the result. S = S K (resp. S K 4. Equivalence between Observability and Observability for Smooth Inputs In this section, we consider analytic systems, and we prove that, for these systems, C ω – observability (i.e., observability for all C ω inputs) implies observability (i.e., observability for all L ∞ inputs). In fact, these notions are equivalent. This result will be crucial in order to prove our final approximation theorem (Theorem 5.1) in Section 5.3 To prove this, we need some technical tools, mainly, the tangent cone to a subanalytic set. Exercise 4.1. Show that, in the C ∞ case, observability and C ω observability are not equivalent properties. 4.1. Preliminaries We start with X, a real analytic manifold; Z ⊂ X , a subanalytic subset; Z reg , the subset formed by the regular points of Z (i.e., the points x ∈ Z having a neighborhood O in X , such that O ∩ Z is a real analytic connected manifold). The set Z reg is a disjoint union of analytic manifolds, and it is open and dense in Z . The subset Z reg is also subanalytic in X (see Appendix, Section 1.3). This is a key point for the proof of the facts P1 to P6 below. Let us now define T C(Z ), the tangent cone to Z : T Z reg , the tangent bundle to Z reg is well defined because Z reg is smooth. Denote by X : T X → X the canonical projection. We define T C(Z ) = −1 X (Z ) ∩ T Z reg , where T Z reg is the closure of T Z reg in T X. Now, let us state the main properties of T C(Z ): P1 : T C(Z ) is subanalytic in T X. If Z is closed then T C(Z ) = T Z reg , P2 : If Z is smooth, T C(Z ) = T Z , P3 : If Y ⊂ Z is subanalytic open in Z , then the restriction: T C(Z )|Y = T C(Y ). 3
There is a related result in the paper [46], based upon desingularization techniques.
T C(Z )|Y = −1 Z (Y ), where Z : T C(Z ) → Z denotes the canonical projection (restriction of X : T X → X to T C(Z )). In particular, because Z reg is open in Z , T C(Z )|Z reg = T C(Z reg ) = T Z reg . P4 : If Z = {Z α |α ∈ A} is a stratification of Z satisfying the condition (a) of Whitney, (see Appendix, Section 1.4), then T C(Z ) ⊃ T Z α for all α ∈ A. P5 : If x : δ → X is an absolutely continuous curve on the interval δ, the image of which is contained in Z , then the set of t ∈ δ, such that d x(t) dt exists and is contained in T C(Z ), has full measure. P6 : The dimension of T C(Z ) is 2 dim(Z ). 1. Proof of P1 (see also [13]). The question being local, we can assume that
X = R n , T X = X × X, Z ⊂ X. −1 X (Z ) = Z × X is therefore subanalytic as a product of subanalytic sets. Hence, it is sufficient to show that T Z reg is subanalytic. As a consequence, all we have to prove is that if Z is subanalytic and smooth, then T Z is subanalytic. Let S be the unit sphere of X = R n . Consider Inc , the subset of X × X × S defined by Inc = {(x, y, v)|x, y ∈ X, v ∈ S, (y − x) ∧ v = 0, < (y − x), v > ≥ 0}, (here, < ., . > denotes the Euclidean scalar product on R n ). This subset Inc is semi-analytic, it is also a smooth submanifold of X × X × S with a boundary. Let p: Inc → X × X be the restriction to Inc of the canonical projection Pr1,2 : X × X × S → X × X. The set Z × Z is subanalytic in X × X. The diagonal X of X × X is analytic in X × X . (2) Therefore, Z ∗ = Z × Z \( X ∩ Z × Z ) = Z × Z \ Z is subanalytic. (2) (2) Hence, p −1 (Z ∗ ) = Pr−1 1,2 (Z ∗ ) ∩ Inc = Sec(Z ) is still subanalytic. −1 Set Dir(Z ) = Sec(Z ) ∩ p ( Z ). Both Sec(Z ) and p −1 ( Z ) = {(z, z, v)|z ∈ Z , v ∈ S} are subanalytic, hence so is Dir(Z ). (Dir(Z ) is nothing but the set of (z, z, v) such that v is tangent to Z at z: (z, z, v) ∈ Dir(Z ) n iff there is a sequence (xn , yn ) → (z, z), xn = yn , xn , yn ∈ Z , yynn −x −xn → v). Consider now µ : X × X × S × R → X × X = T X, µ(x, y, v, t) = (x, tv). Restricted to X × S × R, µ is proper. R + ⊂ R is semi-analytic, therefore, µ(Dir(Z ) × R + ) is subanalytic. But, clearly, µ(Dir(Z ) × R + ) = T C(Z ). Hence, T C(Z ) is subanalytic. 2. Proof of P2 . If Z is smooth, Z reg = Z , T Z reg = T Z . −1 T C(Z ) = T Z reg ∩ −1 X (Z ) = T Z ∩ X (Z ) = T Z .
3. Proof of P3. T C(Z)|_Y = π_Z^{−1}(Y) = π_X^{−1}(Y) ∩ T C(Z) = π_X^{−1}(Y) ∩ π_X^{−1}(Z) ∩ cl(T Z_reg) = π_X^{−1}(Y) ∩ cl(T Z_reg), where cl denotes the closure in T X. Otherwise, T C(Y) = π_X^{−1}(Y) ∩ cl(T Y_reg). But Y_reg = Z_reg ∩ Y, because Y is open in Z. Hence, T Y_reg ⊂ T Z_reg and cl(T Y_reg) ⊂ cl(T Z_reg). Therefore, T C(Y) ⊂ T C(Z)|_Y. Conversely, if ξ_y ∈ π_X^{−1}(Y) ∩ T C(Z), y ∈ Y, then there is a sequence ξ_{y_n} → ξ_y, ξ_{y_n} ∈ T Z_reg, y_n ∈ Z_reg, y_n → y. For n sufficiently large, because Y is open in Z, y_n ∈ Y_reg and ξ_{y_n} ∈ T Y_reg; hence, ξ_y ∈ cl(T Y_reg). In the end, ξ_y ∈ π_X^{−1}(Y) ∩ cl(T Y_reg) = T C(Y).
4. Proof of P4 . Let Z α be a maximal stratum of Z (i.e., there is no Z β , Z α = Z β , Z α ⊂ Z β ). Then, Z α ⊂ Z reg and by P3 , T Z α ⊂ T Z reg ⊂ T C(Z ). If Z α is not maximal, i.e., Z α ⊂ Z β , then, by “Whitney condition (a),” T (Z α ) ⊂ T (Z β ). Hence, T (Z α ) ⊂ T Z reg . Also, because Z α is smooth, T (Z α ) ⊂ −1 X (Z α ). Finally, T (Z α ) ⊂ T C(Z ). 5. Proof of P5 . Let {Z α |α ∈ A} be a Whitney stratification of Z by analytic manifolds that are also subanalytic in X. For each α ∈ A, let δα be the set of t ∈ δ such that x(t) ∈ Z α . The set of points of δα that are not accumulation points of δα is countable and hence has measure zero. Let δα be the set of t ∈ δα that are accumulation points of δα . The set of t such that d x(t) dt exists has full measure in δ. Pick such a t in δα for some α. If t ∈ δα , then d x(t) dt ∈ Tx(t) Z α because Z α is smooth. But we know by P4 that T (Z α ) ⊂ T C(Z ). Hence, d x(t) dt ∈ T C(Z ). 6. Proof of P6 . (For the concept of dimension, see Appendix, Section 1.3). dim(Z ) = dim(Z reg ). dim(T C(Z )) = dim(T Z reg ) = 2 dim(Z reg ).
4.2. Back to Observability We consider an analytic system, just as in the previous sections: () x˙ = f (x, u), y = h(x, u). The considerations below are valid whether or not h depends on u. We just assume that u takes values in a semi-analytic compact set U (which is the case for the systems considered previously, U = I du , if I is a compact interval). The output space Y is just assumed to be an analytic manifold, X is not assumed to be compact. (2) Let X ∗ = X × X \ X, X being the diagonal of X × X, and let h × h: X × X × U → Y × Y be the mapping (x1 , x2 , u) → (h(x1 , u), h(x2 , u)). (2) Now, we define two decreasing sequences of subanalytic subsets of X ∗ × U (2) and X ∗ respectively, as follows:
Δ̂_0 = (X*^(2) × U) ∩ (h × h)^{-1}(Δ_Y) = {(x_1, x_2, u) | h(x_1, u) = h(x_2, u)},  Δ_0 = Pr_{1,2}(Δ̂_0),
where Pr_{1,2} : X × X × U → X × X is the projection (x_1, x_2, u) → (x_1, x_2). Pr_{1,2} is a proper map. It is clear that Δ̂_0 is semi-analytic in X × X × U and closed in X*^(2) × U. The set Δ_0 is subanalytic in X × X and closed in X*^(2), which is itself an analytic manifold.
Assume that Δ̂_i, Δ_i have been defined for i ≤ j, with Δ̂_i subanalytic and closed in X*^(2) × U, and Δ_i subanalytic and closed in X*^(2). Let us denote by f̂ : X*^(2) × U → T X*^(2) ⊂ T X × T X the analytic map (x_1, x_2, u) → (f(x_1, u), f(x_2, u)). Then, we define
Δ̂_{j+1} = f̂^{-1}(T C(Δ_j)) ∩ Δ̂_j,  Δ_{j+1} = Pr_{1,2}(Δ̂_{j+1}).
It is clear that Δ̂_{j+1} is subanalytic and closed in X*^(2) × U and that Δ_{j+1} is subanalytic and closed in X*^(2). Also:
Δ̂_0 ⊃ Δ̂_1 ⊃ … ⊃ Δ̂_i ⊃ …,  Δ_0 ⊃ Δ_1 ⊃ … ⊃ Δ_i ⊃ …
We will show that as soon as i ≥ 2n + 1 = N, Δ_i = Δ_N, and Δ̂_i = Δ̂_{N+1} for i ≥ N + 1. First of all, let us recall the notion of local dimension [25]: if Z is a subanalytic subset of a submanifold X, and if x ∈ X,
dim_x(Z) = Inf_V {dim(V ∩ Z_reg) | V is an open neighborhood of x},
with the convention that dim ∅ = −1. If Z′ ⊂ Z is a subanalytic subset of X, then for all x ∈ X: dim_x(Z′) ≤ dim_x(Z). We need the following fundamental lemma:
Lemma 4.1. If for some integer m ∈ {−1, 0, 1, 2, …} and for some z ∈ X*^(2), dim_z(Δ_m) = dim_z(Δ_{m+1}), then for all j ≥ m, dim_z(Δ_j) = dim_z(Δ_m).
The proof of this lemma is postponed to Section 4.3.
Corollary 4.2. As soon as i > dim(Δ_0), Δ_{i+1} = Δ_i.
Proof. By Lemma 4.1, for all z ∈ X*^(2), as soon as i > dim_z(Δ_0), dim_z(Δ_{i+1}) = dim_z(Δ_i). Because dim(Δ_0) = sup_{z ∈ X*^(2)} dim_z(Δ_0), we get dim_z(Δ_{i+1}) = dim_z(Δ_i) for all z ∈ X*^(2) and all i > dim(Δ_0). This implies that Δ_{i+1} = Δ_i, because the Δ_i are closed in X*^(2).
Corollary 4.3. As soon as i > 2n + 1, Δ̂_{i+1} = Δ̂_i.
Proof. Δ̂_{i+1} = f̂^{-1}(T C(Δ_i)) ∩ Δ̂_i. Because i > dim(Δ_0) + 1, Corollary 4.2 shows that Δ_i = Δ_{i−1}. Therefore, Δ̂_{i+1} = f̂^{-1}(T C(Δ_{i−1})) ∩ Δ̂_i = f̂^{-1}(T C(Δ_{i−1})) ∩ Δ̂_{i−1}, because Δ̂_i = f̂^{-1}(T C(Δ_{i−1})) ∩ Δ̂_{i−1}. Hence, Δ̂_{i+1} = Δ̂_i.
Setting Δ = ∩_{i≥0} Δ_i and Δ̂ = ∩_{i≥0} Δ̂_i, Corollary 4.3 obviously implies the following proposition:
Proposition 4.4. (i) Δ = Δ_m for m > 2n, and (ii) Δ̂ = Δ̂_m for m > 2n + 1.
The basic fact about Δ is:
Proposition 4.5. Assume that there exist two trajectories (x̄_i, ū) : [0, T̄] → X × U, i = 1, 2, of the system such that x̄_1(0) ≠ x̄_2(0) and h(x̄_1, ū) = h(x̄_2, ū) for almost all t ∈ [0, T̄]. Then Δ ≠ ∅ and Δ̂ ≠ ∅.
The proof of Proposition 4.5 is also postponed to Section 4.3. By definition and by Proposition 4.4, Δ and Δ̂ satisfy:
(a) f̂(Δ̂) ⊂ T C(Δ),
(b) Δ = Pr_{1,2}(Δ̂).
Pick any (x_1, x_2) ∈ Δ_reg. There is a u such that (x_1, x_2, u) ∈ Δ̂. (a) and (b) above and the property P3 of T C(Δ) imply that, for any such u ∈ U:
(c) f̂(x_1, x_2, u) ∈ T_{(x_1,x_2)} Δ_reg.
Also, because Pr_{1,2}(Δ̂) = Δ and Pr_{1,2} is proper, it is known (see Appendix, Section 1.4) that the restriction Pr_{1,2}|_Δ̂ can be stratified, that is: for any stratum S of Δ, Δ̂ ∩ Pr_{1,2}^{-1}(S) is a union of strata in Δ̂, and the restriction of Pr_{1,2} to such a stratum Ŝ, Pr_{1,2}|_Ŝ, is an analytic submersion. We pick two such strata S and Ŝ, such that S has maximum dimension dim(Δ) (so that S is contained in Δ_reg). Considering the submersion Pr_{1,2}|_Ŝ : Ŝ → S, we see that we can choose an analytic section
û : V → Ŝ,  û(x_1, x_2) = (x_1, x_2, u(x_1, x_2)),
where V is an open subset of S. Also, by (c) above, we know that, for all (x_1, x_2) ∈ V, H(x_1, x_2) = (f(x_1, u(x_1, x_2)), f(x_2, u(x_1, x_2))) ∈ T_{(x_1,x_2)} V. So H is an analytic vector field on V. Let γ : [0, T] → V, γ = (x_1(t), x_2(t)), be an integral curve of
H in V. Setting v(t) = u(x_1(t), x_2(t)) for t ∈ [0, T], one has by construction h(x_1(t), v(t)) = h(x_2(t), v(t)), because û(V) ⊂ Ŝ ⊂ Δ̂_0 = {(x_1, x_2, u) | h(x_1, u) = h(x_2, u)}. On the other hand, x_1(t) and x_2(t) are, by construction, distinct trajectories of our system Σ, relative to the input v(t). Hence, v(t) is an analytic function defined on the time interval [0, T] that "makes our system unobservable." Finally, we have shown that, if our system is unobservable for some input, then it is also unobservable for another C^ω input. This conclusion is summarized in the next theorem.
Theorem 4.6. For an analytic system Σ (either Σ ∈ S^ω or Σ ∈ S^{0,ω}), the following properties are equivalent:
(i) Σ is observable for all L^∞ inputs;
(ii) Σ is observable for all C^ω inputs.
4.3. Proof of Lemma 4.1 and Proposition 4.5
1. Proof of Lemma 4.1. It is sufficient to show that dim_z Δ_{m+2} = dim_z Δ_{m+1}. There is an open neighborhood V_0 of z in X*^(2) such that for every open neighborhood V of z contained in V_0, dim(V ∩ Δ_{i,reg}) = dim_z(Δ_i) for i = m, m + 1, m + 2. Because dim_z Δ_m = dim_z Δ_{m+1}, V ∩ Δ_{m+1} contains a submanifold ω_V of dimension dim_z Δ_m, open in Δ_m. If y ∈ ω_V, there is a u_y ∈ U such that (y, u_y) ∈ Δ̂_{m+1}, since y ∈ Δ_{m+1}. Therefore, f̂(y, u_y) ∈ T C_y(Δ_m). By property P2 of Section 4.1, T C_y(Δ_m) = T C_y(ω_V) = T_y(ω_V) = T C_y(Δ_{m+1}). (Because ω_V ⊂ Δ_{m+1} ⊂ Δ_m, ω_V is also open in Δ_{m+1}.) Finally, f̂(y, u_y) ∈ T C(Δ_{m+1}) and (y, u_y) ∈ Δ̂_{m+1}, which shows that (y, u_y) ∈ Δ̂_{m+2} and hence y ∈ Δ_{m+2}. Therefore, for each open neighborhood V of z contained in V_0, ω_V ⊂ Δ_{m+2}.
On the other hand, there exists an open neighborhood W of z, W ⊂ V_0, such that dim_z(Δ_{m+2}) = dim(W ∩ Δ_{m+2}). Because ω_W ⊂ Δ_{m+2} and dim(W ∩ Δ_{m+1}) = dim_z(Δ_{m+1}) (by the definition of V_0), one has
dim_z Δ_m = dim ω_W ≤ dim_z Δ_{m+2} = dim(W ∩ Δ_{m+2}) ≤ dim(W ∩ Δ_{m+1}) = dim_z Δ_{m+1}.
Finally, dim_z Δ_m = dim_z Δ_{m+2} = dim_z Δ_{m+1}.
2. Proof of Proposition 4.5. Since the x̄_i, i = 1, 2, are continuous and x̄_1(0) ≠ x̄_2(0), there exists a T, 0 < T < T̄, such that x̄_1(t) ≠ x̄_2(t) for all t ∈ [0, T]. Let us denote by z̄ : [0, T] → X*^(2) the curve z̄(t) = (x̄_1(t), x̄_2(t)). We will show by induction on i that for almost all t ∈ [0, T], (z̄(t), ū(t)) ∈ Δ̂_i. This will imply the proposition.
For i = 0, it results from the assumption of the proposition. Assume that it is proven for i ≤ m. Then z̄ is an absolutely continuous curve taking its values in Δ_m. By property P5 of Section 4.1, for almost all t ∈ [0, T],
dz̄/dt ∈ T C(Δ_m).
This shows that for almost all t ∈ [0, T],
f̂(z̄(t), ū(t)) ∈ T C(Δ_m);
hence, for almost all t ∈ [0, T], (z̄(t), ū(t)) ∈ f̂^{-1}(T C(Δ_m)) ∩ Δ̂_m = Δ̂_{m+1}.
5. The Approximation Theorem
Recall that, if Σ (analytic) is as in the previous Sections 2 and 3, and such that SΦ_k is an embedding, then Σ is observable for all C^ω inputs (Σ is observable for all C^k inputs, which is stronger): a C^k input u(t) being given on some interval [0, ε], and x_1, x_2, x_1 ≠ x_2, being given initial conditions, assume that the corresponding outputs are equal on some time subinterval [0, τ]. Then their k − 1 first derivatives at time zero are also equal, and these derivatives, together with the u^{(j)}(0), are just the components of SΦ_k by definition. This is impossible, as SΦ_k is injective. The system is therefore observable for u.
In fact, the fact that SΦ_k is an embedding (strong differential observability) expresses that P_{Σ,u}, the initial-state → output-trajectory mapping, is an embedding for all of the considered k-times differentiable inputs u. Observability, on the other hand, only means that P_{Σ,u} is injective, but for all L^∞ inputs u.
The results of Sections 2 and 4 show that, for d_y > d_u:
1. Any C^r system Σ_0 can be approximated by an analytic one Σ_1 which is observable for all C^ω inputs (and for all C^{2n+1} inputs): Theorem 2.4.
2. Σ_1 is in fact observable (for all L^∞ inputs): Theorem 4.6.
Therefore:
Theorem 5.1. (Approximation by observable analytic systems). Any system Σ_0 ∈ S^r (resp. S^{0,r}), r sufficiently large, can be approximated by an
observable one Σ_1 ∈ S^r (resp. S^{0,r}) (observable for all L^∞ inputs); moreover, Σ_1 can be chosen analytic and such that SΦ_k is an embedding, for some k.
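To see concretely what it means for the jet-of-outputs map to be an immersion or an embedding, here is a minimal symbolic sketch (our own toy example, not from the text; Python with sympy, for an uncontrolled pendulum-like system on R² that we assume for illustration): it computes the first three output derivatives Φ_3 = (h, L_f h, L_f² h) and checks that its Jacobian has rank 2, so Φ_3 is an immersion; here it is even injective, hence an embedding.

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

# Toy uncontrolled system (our choice): dx/dt = f(x), y = h(x).
f = sp.Matrix([x2, -sp.sin(x1)])
h = x1

def lie(phi, vf):
    # Lie derivative of the scalar phi along the vector field vf.
    return (sp.Matrix([phi]).jacobian(x) * vf)[0, 0]

phi = [h]
for _ in range(2):
    phi.append(sp.simplify(lie(phi[-1], f)))

Phi3 = sp.Matrix(phi)          # (h, L_f h, L_f^2 h) = (x1, x2, -sin(x1))
print(Phi3.T)
print(Phi3.jacobian(x).rank()) # 2: Phi_3 is an immersion on R^2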
6. Complements
The two following results are important. The first one concerns uncontrolled systems, i.e., Σ is of the form
(Σ_uc)  dx/dt = f(x),  y = h(x).
The manifold X is again assumed to be compact. Then, the following theorem holds.
Theorem 6.1. The set of uncontrolled systems Σ_uc that are strongly differentially observable of order N = 2n + 1 (i.e., such that Φ_N is an embedding) is open and dense.
Exercise 6.1. Prove Theorem 6.1. This is easy, as a consequence of the main theorems in this chapter. For a direct proof, see [16].
The second important result concerns the class of control affine systems. These systems are very common in practice. Recall that they are of the form
(Σ_ca)  dx/dt = f(x) + ∑_{i=1}^{d_u} g_i(x) u_i,  y = h(x).
The following theorem holds.
Theorem 6.2. Theorems 2.1, 2.2, 2.3, and 2.4 are all true in the class of control affine systems.
Exercise 6.2. Prove Theorem 6.2. This exercise is not that easy, although the general idea of the proof is the same. This has been done in [3].
Also, the following interesting result holds. X is again an analytic compact manifold. Let us say that a vector field f on X is observable if there exists a continuous function h : X → R such that Σ = (f, h) is observable.
Theorem 6.3. (1) An analytic vector field f is observable iff it has only isolated singularities. (2) If (1) holds, then the set of analytic maps h such that Σ = (f, h) is observable is dense in H^r.
Exercise 6.3. Prove Theorem 6.3. This is not an easy exercise. For hints, see [31].

7. Appendix
7.1. Unobservable Linear Systems
Let K be a field. For our purposes, we need K = R or C (real or complex numbers). Let L_K denote the set of all linear systems with observations, i.e., the vector space Lin(K^n, K^{d_y}) × End(K^n). For L ∈ L_K, L = (C, A), denote by I_L the subspace ∩_{i≥0} ker C A^i of K^n. The subspace I_L is invariant under A. The system L is called unobservable if I_L is not reduced to {0}. Denote by U_K the subset of L_K formed by these systems. It is easy to see that there exists a universal family of polynomials P_1, …, P_r, in variables representing the coefficients of the matrices C and A, with coefficients in Z, such that U_K is an affine variety, i.e., the set of points of L_K that are the common zeros of this family. This implies that U_R = U_C ∩ L_R, and that
Codim_R(U_R, L_R) ≥ Codim_C(U_C, L_C).    (35)
We want to compute Codim_R(U_R, L_R). To do this, let us introduce the space V_K = {(C, A) | C annihilates an eigenvector of A in K^n}. Clearly, V_K ⊂ U_K. Hence, Codim_K(V_K, L_K) ≥ Codim_K(U_K, L_K). However, for K = C, these codimensions are actually equal because V_C = U_C: any vector subspace of C^n invariant by A and not reduced to {0} contains an eigenvector of A. Hence we have, using (35):
Codim_R(V_R, L_R) ≥ Codim_R(U_R, L_R) ≥ Codim_C(U_C, L_C) = Codim_C(V_C, L_C).    (36)
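The sets I_L and U_K are easy to see on a concrete pair (C, A) (a small numerical sketch of our own, not part of the codimension argument; Python with numpy, and the particular matrices below are assumptions for illustration): the kernel of the stacked matrix (C; CA; …; CA^{n−1}) is exactly I_L = ∩_{i≥0} ker C A^i, so L = (C, A) is unobservable precisely when this matrix has rank < n.

import numpy as np

n = 3
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
C = np.array([[0., 1., 0.]])   # C annihilates the eigenvector e1 of A

# Stack C, CA, ..., CA^{n-1}; its null space is I_L.
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print(np.linalg.matrix_rank(O))   # 2 < 3, so L = (C, A) is unobservable; I_L = span{e1}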
Now, we show that Codim_K(V_K, L_K) = d_y if K = R or K = C. Let V_K* be the subspace of L_K × P(K^n), P(K^n) being the (n − 1)-dimensional projective space, of all couples (L, l) such that C l = 0 and A l ⊂ l. Then, it is an algebraic submanifold
of L_K × P(K^n), of codimension n + d_y − 1. If Π : L_K × P(K^n) → L_K denotes the projection on the first factor, then Π(V_K*) = V_K. So, Codim_K(V_K, L_K) ≥ n + d_y − 1 − (n − 1) = d_y. Let V_K° be the subset of all (C, A, l) in V_K* such that l is the eigenspace of A corresponding to a simple eigenvalue, and the only eigenvectors annihilated by C are those belonging to l. The set V_K° is open in V_K*, and the restriction of Π to it is a diffeomorphism onto its image. So Codim_K(V_K, L_K) = d_y for K = R or C. Then, (36) implies that Codim_K(U_K, L_K) = d_y, for K = R or C.

7.2. Lemmas
If V is a vector space, Sym^a(V) denotes the space of symmetric tensors of degree a on V, which can be canonically identified with the homogeneous polynomials of degree a over V*, the dual of V. The symbol ⊙ denotes the symmetric tensor product.
To stress that the map Φ_k defined in Chapter 2, Section 3, depends only on the k-jet j^k Σ of Σ, let us denote it by Φ_{j^k Σ} in this section. Let (x, u^(0), û) ∈ X × U × R^{(k−1)d_u}, and set p = (x, u^(0)). Let f ∈ F be such that f(p) ≠ 0. Then, j^k f(p) ∈ J^k F(p). Recall that J^k H^0(p) is the subspace of J^k H(p) formed by the j^k h(p) such that h depends on x only.
Lemma 7.1.
(i) The mapping J^k H^0 → R^{k d_y}, j^k h → Φ_{j^k Σ}(x, u^(0), û), is linear and surjective.
(ii) The mapping J^k H^0 → R^{k d_y} ⊗ T_x* X, j^k h → T_X Φ_{j^k Σ}(x, u^(0), û), is linear and surjective.
(iii) For any λ ∈ T_x X, λ ≠ 0, the mapping J^k H(p) (resp. J^k H^0(x)) → R^{k d_y}, p = (x, u^(0)), j^k h(p) (resp. j^k h(x)) → T_X Φ_{j^k Σ}(x, u^(0), û; λ), is linear and surjective.
Remark 7.1. It is not true that the mapping J^k H^0 → J^1(X × U, R^{k d_y}), j^k h(x) → j^1 Φ_{j^k Σ}(p, û), is surjective.
Proof. Let f be a representative of j^k f(p), with p = (x, u^(0)). Take a coordinate system (O, x_1, …, x_n) for X at x. Then, if f_k denotes the kth dynamical extension of f:
(L_{f_k})^r h(p, û) = d_X^r h(x; f(p)^{⊙r}) + ∑_{a=0}^{r−1} d_X^a h(x; R_{a,r}(j^k f(p), û)).
The expression R_{a,r} is a universal polynomial mapping J^k F(p) × R^{(k−1)d_u} → Sym^a(T_x X). This gives immediately (i), because the mapping of (i) sends j^k h to (…, (L_{f_k})^r h(p, û), …). Let e_1, …, e_n be a basis of T_x X such that f(p) = e_1. Then,
d_X (L_{f_k})^r h(p, û)(e_i) = d_X^{r+1} h(x; e_i ⊙ e_1^{⊙r}) + ∑_{a=0}^{r} d_X^a h(x; Q_{a,r}(j^{r−a} f(p), û, e_i)),
where Q_{a,r} : J^{r−a} F(p) × R^{(k−1)d_u} → Sym^a(T_x X) is a universal mapping. Because the elements e_i ⊙ e_1^{⊙r} are linearly independent in Sym^{r+1}(T_x X), the surjectivity of the mapping in (ii) follows. Point (iii) can be proven in a similar way.
Set G ⊂ J^k F ×_X T X × R^{(k−1)d_u}, with (j^k f(p), λ, û) ∈ G if: (i) f(p) ≠ 0 (again, p = (x, u^(0))), and (ii) λ ≠ 0. Let
µ : J^k H ×_{X×U} G (resp. J^k H^0 ×_{X×U} G) → R^{k d_y},  µ(j^k h(p), j^k f(p), λ, û) = T_X Φ_{j^k Σ}(p, û; λ),
where Σ = (f, h).
Lemma 7.2. µ is a submersion.
Proof. For any fixed (j^k f(p), λ, û) ∈ G, the mapping j^k h(p) (resp. j^k h(x_0)) → µ(j^k h(p), j^k f(p), λ, û) (resp. µ(j^k h(x_0), j^k f(p), λ, û)) is the linear mapping of Lemma 7.1 above. By (iii) in that lemma, this linear mapping is surjective. Hence, µ is a submersion.
Lemma 7.3.
1. Let (j^k f(y), λ, û) be an element of J^k F ×_X T X × R^{(k−1)d_u} such that (i) f(y) = 0, (ii) u^(1) = … = u^{(ρ−1)} = 0 (ρ ≥ 1), (iii) the vectors (T_X f(y))^i λ, 0 ≤ i ≤ ρ, are linearly independent, and (iv) d_U f(y; u^{(ρ)}) ≠ 0. Then, with y = (x_0, u^(0)) and Σ = (f, h), the linear mapping J^k H(y) → R^{k d_y} (resp. J^k H^0(x_0) → R^{k d_y}),
j^k h(y) → T_X Φ_{j^k Σ}(y, û; λ)  (resp. j^k h(x_0) → T_X Φ_{j^k Σ}(y, û; λ)),
is surjective.
2. For 0 ≤ N < ρ:
d_X (L_{f_k})^N h(y, û; λ) = d_X h(y; (T_X f(y))^N λ)  (resp. d_X h(x_0; (T_X f(y))^N λ));
for N = ρ:
d_X (L_{f_k})^ρ h(y, û; λ) = d_X h(y; (T_X f(y))^ρ λ) + d_X d_U h(y; λ ⊗ u^{(ρ)})  (resp. d_X h(x_0; (T_X f(y))^ρ λ)).
Proof. In the following considerations, h has to be taken in H, or in H^0 regarded as a subspace of H by considering a function h ∈ H^0 as a function of (x, u^(0)) independent of u^(0). This has no effect on our formulas other than simplifying them without cancelling the important terms: all the terms d_X^p d_U^q h with q > 0 should be set equal to zero. If h ∈ H^0, when we write h(y) or d_X^r h(y; …) for y = (x_0, u^(0)), we mean h(x_0) or d_X^r h(x_0; …).
To prove 1, we have to compute the mapping of the lemma, at least partially. It is not easy to compute the Lie derivatives d_X (L_{f_k})^r h directly, so we have to proceed differently. We use the "flow" of f_k and computations with formal power series. Let (O, x_1, …, x_n) be a coordinate system of X at x_0. There is an open set O_1, a number ε > 0, and a smooth mapping ϕ : ]−ε, ε[ × O_1 → O, (t, x) → ϕ(t, x), such that: (1) x_0 ∈ O_1 ⊂ O; (2) ϕ(0, x) = x for x ∈ O_1; and (3) (d/dt) ϕ(t, x) = f(ϕ(t, x), V(t)) for all (t, x) ∈ ]−ε, ε[ × O_1, where V(t) is the polynomial mapping
V(t) = (1/ρ!) u^{(ρ)} t^ρ + … + (1/(k−1)!) u^{(k−1)} t^{k−1}.
For later purposes, set v_i = (1/i!) u^{(i)}. Hence, V(t) = v_ρ t^ρ + … + v_{k−1} t^{k−1}. To compute d_X (L_{f_k})^N h, we use the following simple observations: let u = (u^(0), û). If h ∈ H (resp. H^0):
1. (L_{f_k})^N h(x, u) = (∂^N/∂t^N) h(ϕ(t, x), V(t))|_{t=0} = h_N(x, u), say,
2. d_X (L_{f_k})^N h(x, u; λ) = d_X h_N(x, u; λ) for λ ∈ T_x X ≈ R^n, using the coordinate system. (Note that h_0 = h.)
Let ϕ_N(x, u) = (1/N!) (∂^N/∂t^N) ϕ(t, x)|_{t=0} for x ∈ O_1. Condition (2) on ϕ implies that ϕ_0(x, u) = x for x ∈ O_1. We denote by ϕ̂ the Taylor series of ϕ at t = 0:
ϕ̂ = ∑_{r=0}^{∞} t^r ϕ_r,
and by ĥ the Taylor series of t → h(ϕ(t, x), V(t)) at t = 0:
ĥ = ∑_{r=0}^{∞} t^r h_r.
Then
ĥ = ∑_{p,q≥0} (1/p!)(1/q!) d_X^p d_U^q h(ϕ̂, …, ϕ̂, V, …, V)  (p ϕ̂'s and q V's).    (37)
Let us compute h_N and d_X h_N. For a multi-index α ∈ N^l, |α| = α_1 + … + α_l. For x ∈ O_1:
h_N(x, u) = ∑ { (1/p!)(1/q!) d_X^p d_U^q h(x, u^(0); ϕ_α(x, u) ⊗ v_β) | p ≥ 0, q ≥ 0, α ∈ N^p, β ∈ N^q, |α| + |β| = N },
where ϕ_α(x, u) (resp. v_β) denotes the symmetric tensor product ϕ_{α_1}(x, u) ⊙ … ⊙ ϕ_{α_p}(x, u) (resp. v_{β_1} ⊙ … ⊙ v_{β_q}) of the vectors ϕ_{α_1}(x, u), …, ϕ_{α_p}(x, u) ∈ R^n (resp. v_{β_1}, …, v_{β_q} ∈ R^{d_u}). For any x ∈ O_1 and any λ ∈ T_x X ≈ R^n:
d_X h_N(x, u; λ) = I_N + II_N,    (38)
I_N = ∑ { (1/p!)(1/q!) d_X^{p+1} d_U^q h(x, u^(0); λ ⊗ ϕ_α(x, u) ⊗ v_β) | p ≥ 0, q ≥ 0, α ∈ N^p, β ∈ N^q, |α| + |β| = N },    (39)
II_N = ∑ { (1/p!)(1/q!) d_X^p d_U^q h(x, u^(0); ϕ_{α_1} ⊙ … ⊙ d_X ϕ_{α_i}(x, u; λ) ⊙ … ⊙ ϕ_{α_p}(x, u) ⊗ v_β) | p ≥ 1, q ≥ 0, 1 ≤ i ≤ p, α ∈ N^p, β ∈ N^q, |α| + |β| = N }.
It is easy to see that
II_N = ∑ { (1/(p−1)!)(1/q!) d_X^p d_U^q h(x, u^(0); d_X ϕ_r(x, u; λ) ⊙ ϕ_γ(x, u) ⊗ v_β) | p ≥ 1, q ≥ 0, 1 ≤ r, γ ∈ N^{p−1}, β ∈ N^q, r + |γ| + |β| = N }.    (40)
Before proceeding further, we have to determine the ϕ_N and d_X ϕ_N for small N. The claim is that
ϕ_N(x_0, u) = 0 for 1 ≤ N ≤ ρ,  ϕ_{ρ+1}(x_0, u) = (1/(ρ+1)) d_U f(x_0, u^(0); (1/ρ!) u^{(ρ)}),    (41)
d_X ϕ_N(x_0, u; λ) = (1/N!) (T_X f(x_0, u^(0)))^N λ,  0 ≤ N ≤ ρ.    (42)
To see this, write the equation (d/dt) ϕ(t, x) = f(ϕ(t, x), V(t)) as a formal power series in t:
∑_{N=1}^{∞} N ϕ_N(x, u) t^{N−1} = ∑_{p,q≥0} (1/(p! q!)) d_X^p d_U^q f(x, u^(0); ϕ̂(t, x)^p ⊗ V(t)^q).    (43)
Hence,
N ϕ_N(x, u) = ∑ { (1/(p! q!)) d_X^p d_U^q f(x, u^(0); ϕ_α(x, u) ⊗ v_β) | p ≥ 0, q ≥ 0, α ∈ N^p, β ∈ N^q, |α| + |β| = N − 1 }.
Because v_i = 0 for i < ρ, |β| ≥ qρ. This shows that if N ≤ ρ:
ϕ_1(x, u) = f(x, u^(0)),
N ϕ_N(x, u^(0)) = ∑ { (1/p!) d_X^p f(x, u^(0); ϕ_α(x, u)) | α ∈ N^p, |α| = N − 1 }.    (44)
Because f(x_0, u^(0)) = 0, an easy induction on N shows that ϕ_N(x_0, u) = 0 if 1 ≤ N ≤ ρ. In case N = ρ + 1, no terms with p ≥ 1 can appear, because for any α ∈ N^p with |α| ≤ ρ, α_i ≤ ρ for all i, 1 ≤ i ≤ p. Hence,
(ρ + 1) ϕ_{ρ+1}(x_0, u) = d_U f(x_0, u^(0); (1/ρ!) u^{(ρ)}).  (Note that v_ρ = (1/ρ!) u^{(ρ)}.)
Deriving (44), we get for λ ∈ T_x X ≈ R^n:
N d_X ϕ_N(x, u; λ) = III_N + IV_N,    (45)
III_N = ∑ { (1/p!) d_X^{p+1} f(x, u^(0); λ ⊗ ϕ_α(x, u)) | α ∈ N^p, |α| = N − 1 },
IV_N = ∑ { (1/(p−1)!) d_X^p f(x, u^(0); d_X ϕ_r(x, u; λ) ⊙ ϕ_γ(x, u)) | γ ∈ N^{p−1}, r ≥ 1, |γ| + r = N − 1 }.
Because ϕ_N(x_0, u) = 0 if 1 ≤ N ≤ ρ, for x = x_0 this last formula reduces to N d_X ϕ_N(x_0, u; λ) = d_X f(x_0, u^(0); d_X ϕ_{N−1}(x_0, u; λ)). An easy induction gives d_X ϕ_N(x_0, u; λ) = (1/N!) (d_X f(x_0, u^(0)))^N λ, for 0 ≤ N ≤ ρ.
CB385-Gauthier
June 21, 2001
11:11
Char Count= 0
7. Appendix
65
Let us go back to d X h N . In the light of Formulas (41) and (42), the maxiN mum value of p appearing in a nonzero term in I N|x=x0 is m = [ ρ+1 ], integer N part of ρ+1 because p(ρ + 1) ≤ |α| ≤ N . In a nonzero term in I N|x=x0 with p = m, qρ ≤ |β| ≤ N − m(ρ + 1) ≤ ρ. Hence, q = 0 or q = 1. Finally, we get the following: If N − m(ρ + 1) < ρ, I N|x=x0 =
1 m+1 d X h x0 , u (0) ; ⊗ ϕα x0 , u (0) α ∈ N m , |α| = N m! 1 p+1 q d X dU h x0 , u (0) ; ⊗ ϕα x0 , u (0) ⊗ vβ p ≤ m − 1, + p!q! α ∈ N p , β ∈ N q , |α| + |β| = N ,
if N − m(ρ + 1) = ρ, I N|x=x0 =
1 m+1 d X h x0 , u (0) ; ⊗ ϕα x0 , u (0) α ∈ N m , |α| = N m! m 1 m+1 d X dU h x0 , u (0) ; ⊗ ϕρ+1 x0 , u (0) ⊗ vρ + m! 1 p+1 q + d X dU h x0 , u (0) ; ⊗ ϕα x0 , u (0) ⊗ vβ p ≤ m − 1, p!q! p q α ∈ N , β ∈ N , |α| + |β| = N .
The maximum value of p in a nonzero term II N|x=x0 is m + 1, because ( p − 1)(ρ + 1) ≤ |γ | ≤ N . In a nonzero term in II N|x=x0 with p = m + 1, qρ ≤ |β| ≤ N − |γ | ≤ N − m(ρ + 1) ≤ ρ. If N − m(ρ + 1) < ρ then q = 0. If N − m(ρ + 1) = ρ, qρ ≤ |β| ≤ N − |γ | − r ≤ ρ − 1 because r ≥ 1. We get, setting N − m(ρ + 1) = ε: II N|x=x0 = A N + B N , AN =
BN =
1 m+1 d X h x0 , u (0) ; d X ϕr (x0 , u; ) ⊗ ϕγ (x0 , u) m!
1 ≤ r ≤ ε, γ ∈ N m , |γ | + r = N
1 p q d X dU h x0 , u (0) ; d X ϕr (x0 , u; ) ⊗ ϕγ (x0 , u) ⊗ vδ ( p − 1)!q! p−1 q 1 ≤ r ≤ ε, p ≤ m, γ ∈ N , δ ∈ N , r + |γ | + |δ| = N .
P1: FBH CB385-Book
CB385-Gauthier
June 21, 2001
11:11
Char Count= 0
The Case d y > du
66
Putting I N|x=x0 , II N|x=x0 together and using Formulas (41) and (42): Let N N ] and ε = N − m(ρ + 1), be an integer between 0 and k − 1, let m = [ ρ+1 0 ≤ ε ≤ ρ. If ε < ρ and ∈ Tx0 X ≈ R n , d X h N x0 , u (0) ; = d Xm+1 h x0 , u (0) ; N p q d X dU h x0 , u (0) ; N , p,q 1 ≤ p ≤ m, q ≤ N , + (46) if ε = ρ and ∈ Tx0 X ≈ R n , d X h N x0 , u (0) ; = d Xm+1 h x0 , u (0) ; N 1 + d Xm+1 dU h x0 , u (0) ; ⊗ ϕρ+1 (x0 , u)m ⊗ vρ m! p q + d X dU h x0 , u (0) ; N , p,q 1 ≤ p ≤ m, q ≤ N , (47) where ˆ = N = N ( j 1 f (y), , u) Um (i) =
ε 1 1 TX f (y)r ⊗ Um (ε − r ), (48) r! m! r =0
{ϕγ (x0 , u)|γ ∈ N m , |γ | = m(ρ + 1) + i}.
(49)
In particular Um (0) = ϕρ+1 (x0 , u)m =
m 1 (0) 1 (ρ) u d f x , u ; , U 0 (ρ + 1)m ρ!
(50)
ˆ N , p,q = N , p,q ( j N f (y), , u) =
1 d X ϕr (x0 , u; ) ⊗ ϕγ (x0 , u) ⊗ vδ | ( p − 1)!q! r ≥ 0, γ ∈ N p−1 , δ ∈ N q , r + |γ | + |δ| = N . (51)
Clearly, N is a polynomial mapping: J k S(y) × Tx0 X × R (k−1)du → Symm+1 (Tx0 X ), linear in ,
P1: FBH CB385-Book
CB385-Gauthier
June 21, 2001
11:11
Char Count= 0
7. Appendix
67
and N , p,q is a polynomial mapping: J k S(y) × Tx0 X × R (k−1)du → Sym p (Tx0 X ) ⊗ Symq (R du ). In view of the expressions (48), (49), and (50) of the N and the assumpˆ | tions (iii) and (iv) of Part 1 of the lemma, the elements { N ( j 1 f (y), , u) m(ρ + 1) ≤ N ≤ m(ρ + 1) + ρ} in Sym m+1 (Tx0 X ) are linearly independent. Because ( j k h(y)) = (d X h 0 (x0 , u; ), d X h 1 (x0 , u; ), . . . , d X h k−1 (x0 , u; )), Formulas (46) and (47) show that is surjective. Part 2 of the lemma follows from Formulas (46) and (47): If m = 0, U0 (i) = 0 for i > 1, U0 (0) = 1, and N , p,q = 0 since 1 ≤ p ≤ m = 0, which is impossible.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5 Singular State-Output Mappings
In the two previous chapters, all initial − state → output − trajectory mappings are regular in some sense: either the system has a uniform canonical flag, and it is also, at least locally, strongly differentially observable (see Exercise 4.1 of Chapter 3), or, in the case where d y > du , systems are generically strongly differentially observable of some order. In both cases, the initial − state → output − trajectory mapping is an immersion in some sense, and as a consequence, the systems have the phase variable property. It can happen that the initial − state → output − trajectory mapping is not an immersion, but that nevertheless, the system possesses the phase variable property of some order. It is interesting to study these singular situations because, for observation or output stabilization, only the phase variable property matters, as will be clear in the next chapters. This study is the purpose of this chapter. The uncontrolled case is very different from the controlled one. We will show that, in the uncontrolled analytic case, a reasonable assumption is that the map N is a finite mapping for some N . Unfortunately, in this case, there is no C ∞ version of our results. In both cases (controlled and uncontrolled), the first step of the study is local (at the level of germs of systems). Afterward, assuming observability (injectivity), the phase variable representations can be glued together using a partition of unity. On the other hand, in the controlled case, we do not need the analyticity assumption. However, for the sake of simplicity of the exposition, we let it stand. 1. Assumptions and Definitions Here, we consider only analytic systems, of the following form: dx = f (x, u), y = h(x, u), (controlled case), or, () dt dx = f (x), y = h(x), (uncontrolled case). () dt 68
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
1. Assumptions and Definitions
69
As a first step, we will consider germs of such systems at a point (x0 , u (0) ) ∈ X × U (controlled case), or x0 ∈ X (uncontrolled case). In this chapter, U is not assumed to be compact. In most cases, for global considerations, we will consider that U = R du . 1.1. Notations Again in this section the value u = u (0) of the control plays a role different from the higher order derivatives u (i) , i ≥ 1. Hence, we introduce the following notations. Given a N −jet f N +1 = ( f (0) , f (1) , . . . , f (N ) ) of a curve f in a Euclidean space R m , we will write f˜N = f (1) , . . . , f (N ) , (52) f N +1 = f (0) , f˜N . We use this notation in the case of infinite jets (N = ∞), and we drop the subscript N = ∞ ( f ∞ = f , f˜∞ = f˜). N d y and Let us define the restricted mappings N ,u˜ N −1 : X × U → R N d d y u ×R , S N ,u˜ N −1 : X × U → R (0) = x , u (0) , u˜ N −1 , N ,u˜ N −1 x 0 , u N 0 (53) (0) (0) = x , u (0) , u˜ . S N −1 , u N ,u˜ N −1 x 0 , u N 0 Let Ox0 be the ring of germs of analytic functions at x0 ∈ R d . Let be a subring of Ox0 , x0 ∈ R d . For u 0 ∈ R p , {u; u 0 } will denote the ring of germs at (x0 , u 0 ) ∈ R d × R p of analytic mappings of the form G(u, ϕ1 (x), . . . , ϕr (x)), for G analytic at (u 0 , ϕ1 (x0 ), . . . , ϕr (x0 )), and for any finite subset {ϕ1 , . . . , ϕr } ⊂ . If is an analytic algebra (in the sense of [43], for instance), then {u; u 0 } is also an analytic algebra. 1.2. Rings of Functions We have to consider several rings of (germs of) analytic functions attached to the germ of a system at a point. There are different definitions for the controlled and uncontrolled case. Let us fix a point x0 ∈ X , and an infinite jet (0) (u 0 , u0 ) = u 0 . ˆ N (x0 ,u 0 N ). Let us define the rings N (x0 ), or N (x0 ,u 0 N ),
1. In the uncontrolled case:
∗
N (x0 ) = N (O y0 ),
(54)
the pull back by N of the ring O y0 of germs of analytic real valued functions ϕ(y, y˜ N −1 ) at the point y0 = N (x 0 ), i.e.,
N = G ◦ (55) N (x) G is an analytic germ at y0 = N (x 0 ) . 2. In the controlled case: ∗
N x0 , u 0 N = S O y0 , N ,u˜ 0,N −1 ˆ N x0 , u 0 = S ∗ Oz 0 ,
N N where y0 = S N ,u˜ 0,N −1 (x 0 , u 0 ) and z 0 = S N (x 0 , u 0 , u0 N −1 ). (0)
(0)
(0)
If there is no ambiguity about the choice of x0 , (u 0 , u0 N −1 ) = u 0 N , we ˆ N in place of N (x0 ), N (x0 ,u 0 N ),
ˆ N (x0 ,u 0 N ). For will write N , N ,
ˆ ˆ N +1 :
ˆN ⊂ each N , N can be canonically identified to a subring of
ˆ
N +1 . ˆ N are Noetherian In both the controlled and the uncontrolled case, N and
rings. They form increasing sequences: . . . ⊂ N ⊂ N +1 ⊂ . . . ⊂ Ox0 or O(x0 ,u (0) ) , 0
ˆN ⊂
ˆ N +1 ⊂ . . . ... ⊂
(56)
ˇ will denote the ring of germs of analytic mapIn the controlled case,
(0) pings of the form G(u, ϕ1 , . . . , ϕ p ) at the point (x0 , u 0 ), for any positive integer p and for functions ϕi of the form s s s ϕi = L kf1u ∂ j1 1 L kf2u ∂ j2 2 . . . . . L kfru ∂ jr r h, kl , sl ≥ 0, (57) (see Formula (15) in the definition of , Chapter 2). Recall that ∂ j denotes the derivation with respect to the jth control variable. ˇ is closed under the action of the derivations L f u and ∂ j , Obviously,
1 ≤ j ≤ du . Also, ˇ for all N .
N ⊂
(58)
ˇ N will denote the ring of germs of analytic mappings of the form
(0) G(u, ϕ1 , . . . , ϕ p ) at the point (x0 , u 0 ), for all functions ϕi of the form (57) above, with ki + si ≤ N − 1. Remark 1.1. ˆ N {u (N ) , u (N ) } is exactly the ring of germs at a point of 1. The ring
0
(analytic) elements of the rings N , defined in Chapter 2, Section 3,
ˇ is just the ring of analytic germs at (x0 , u ) generated by 2. The ring
0 the germs of the elements of the space , plus the control variables, ( has been defined in Section 5 of Chapter 2).
2. The Ascending Chain Property Definition 2.1. A germ of analytic system (at the point x0 or at the point (0) (x0 , u 0 , u0 )) satisfies the “ascending chain property of order N ,” denoted by AC P(N ), if: Uncontrolled case: j = N for j ≥ N , ˆ j {u ( j) ; u ( j) } for j ≥ N . ˆ j+1 =
Controlled case:
0 Convention: For simplicity, we will say that a vector function belongs to ˆ N if each of its components does.
N or
The next two lemmas show the relation between the ascending chain property AC P(N ) and the phase-variable property P H (N ). The phase-variable property P H (N ) has been defined in Chapter 2 for systems. For germs of analytic systems, the definition is similar and left to the reader.
Lemma 2.1. satisfies the AC P(N ) at some point iff N +1 = N , (resp. ˆ N {u (N ) ; u (N ) }) in the uncontrolled (resp. controlled) case. ˆ N +1 =
0 Lemma 2.2. Each of the following two conditions is necessary and sufficient for to satisfy the AC P(N ): (i) y (N ) = N (y (0) , y˜ N −1 , u (0) , u˜ N ) for some analytic function N (0) (N ) (locally defined in a neighborhood of (S N (x 0 , u 0 , u0 N −1 ), u 0 ); (ii) y ( j) = j (y (0) , y˜ N −1 , u (0) , u˜ j ) for some analytic function j locally defined and for all j ≥ N . In the uncontrolled case, the conditions (i) and (ii) of Lemma 2.2 give: (i) y (N ) = N (y (0) , y˜ N −1 ), and (ii) y ( j) = j (y (0) , y˜ N −1 ). Remark 2.1. A priori condition (i) is necessary for to satisfy the AC P(N ), and condition (ii) is sufficient. Remark 2.2. The second (resp. first) condition of Lemma 2.2 is equivalent to the phase variable property P H ( j) of any order j ≥ N (resp. the phase variable property P H (N ) of order N ).
Let us prove these lemmas, considering the controlled case only. (0)
Proof of Lemma 2.1. Assume that satisfies the AC P(N ) at (x0 , u 0 , ˆ N +1 =
ˆ N {u (N ) ; u (N ) }. u0 N −1 ). Then, by definition,
0 ˆ N +1 =
ˆ N {u (N ) ; u (N ) }. By definition, For the converse, assume that
0 (N +1) (N +1) ˆ N +1 u ˆ N +2 ⊃
; u0 .
ˆ N +2 . Then, for some analytic G, we have: Let ϕ ∈
ϕ = G y (0) , y˜ N +1 , u (0) , u˜ N +1 . (N )
denoting the ith component of y (N ) , 1 ≤ i ≤ d y , we have For yi (N ) ˆ N +1 =
ˆ N {u (N ) ; u (N ) }. Hence, yi ∈
0 (N ) yi = i y (0) , y˜ N −1 , u (0) , u˜ N for some i . Therefore, (N +1)
yi
(N )
= y˙ i
d (0) = d i y (0) , y˜ N −1 , u (0) , u˜ N . y , y˜ N −1 , u (0) , u˜ N dt |t=0
= H i (y (0) , y˜ N , u (0) , u˜ N +1 ). This implies that ϕ = G y (0) , y˜ N , H y (0) , y˜ N , u (0) , u˜ N +1 , u (0) , u˜ N +1 ˆ N +1 u (N +1) ; u (N +1) . ∈
0 (N +1)
ˆ N +1 {u (N +1) ; u ˆ N +2 =
Hence,
0
}.
ˆ i+1 =
ˆ i {u (i) ; u (i) } Remark 2.3. It is easy to check that the condition
0 ˆ N +k =
ˆ N {u (N ) , . . . , u (N +k−1) ; u (N ) , . . . , for i ≥ N is equivalent to
0 (N +k−1) u0 } for k ≥ 1. (0)
Proof of Lemma 2.2. If satisfies the AC P(N ) at (x0 , u 0 , u0 ), then, (N ) by definition, yi = N ,i (y (0) , y˜ N −1 , u (0) , u˜ N ). Conversely, if y (N ) = ˆ N +1 , then N (y (0) , y˜ N −1 , u (0) , u˜ N ), by definition again, if ϕ ∈
(0) ϕ = G y , y˜ N , u (0) , u˜ N ˆ N u (N ) ; u (N ) , = G y (0) , y˜ N −1 , N y (0) , y˜ N −1 , u (0) , u˜ N , u (0) , u˜ N ∈
0 and
ˆ N +1 ⊂
ˆ N u (N ) ; u (N ) .
0
By Lemma 2.1, satisfies the AC P(N ). This proves (i). Applying (i), an obvious induction gives (ii). 3. The Key Lemma 3.1. Finite Germs Here, we prove a lemma about the ascending chain property, which will be used later on. Let ( f j , j > 0) be a sequence of analytic germs: (X, x0 ) → (Y, f j (x0 )). X, Y are analytic manifolds. As was done previously in a particular case, we can associate a sequence of local rings j to the sequence ( f j ) in the following way: We denote by j : X → Y j the map j (x) = ( f 1 (x), . . . , f j (x)), and by j :
j = ( j )∗ O j (x0 ) , the pull back by the map j of the ring O j (x0 ) of germs of analytic maps at the point j (x0 ). Clearly, again we have . . . ⊂ j ⊂ j+1 ⊂ . . . ⊂ Ox0 . Definition 3.1. We say that the sequence ( f j ) satisfies the AC P(N ) at x0 , if
j = N for j ≥ N . Definition 3.2. (of finite multiplicity). F : X → Y has finite multiplicity at x0 if Ox0 /[F ∗ (m(O y0 )).Ox0 ] has finite dimension as a real vector space. Here m(O y0 ) is the ideal of germs of analytic functions at (Y, y0 ), y0 = F(x0 ), which are zero at y0 . The dimension is the multiplicity. There is a simple and convenient criterion for a germ to be of finite multiplicity:
F has finite multiplicity at x0 iff there is an integer r > 0 such that: r m Ox0 ⊂ F ∗ m O y0 .Ox0 . (59) Therefore, to check that F has finite multiplicity at x0 = 0, (F : R n → Y ), it is sufficient to check that xiri belongs to F ∗ (m(O y0 )).Ox0 for some positive integers ri , i = 1, . . . , n. Let N : (X, x0 ) → Y N , N = ( f 1 , . . . , f N ), where the f i : (X, x0 ) → Y are germs of mappings at x0 . A “prolongation of N ” is an arbitrary sequence ( fˆj ) of germs of mappings, fˆj : (X, x0 ) → Y, such that fˆj = f j for j ≤ N .
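The criterion is easy to exercise on a one-variable germ (a small symbolic sketch of our own, not from the text; Python with sympy, for the germ F(x) = x³ at x_0 = 0, with X = Y = R): in the polynomial model the ideal F*(m(O_{y_0}))·O_{x_0} is generated by x³, x³ itself lies in it, and the quotient is spanned by 1, x, x², so F has finite multiplicity, equal to 3.

import sympy as sp

x = sp.symbols('x')
F = x**3                      # germ F(x) = x^3 at x0 = 0, F(0) = 0
ideal = [F]                   # generators of F*(m(O_{y0})).O_{x0} in this model

# Criterion (59): some power x**r must lie in the ideal.
print(sp.reduced(x**3, ideal, x)[1] == 0)   # True: finite multiplicity

# The multiplicity = dim of the quotient: monomials with nonzero remainder.
basis = [m for m in (x**k for k in range(6)) if sp.reduced(m, ideal, x)[1] != 0]
print(len(basis))                           # 3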
P1: FBH CB385-Book
CB385-Gauthier
74
May 31, 2001
17:54
Char Count= 0
Singular State-Output Mappings
3.2. The Lemma We have the following key lemma: Lemma 3.1. (see, [32]). The following properties are equivalent: (i) All prolongations of N satisfy the AC P(k) for some k ≥ N , (k depends on the prolongation), (ii) N has finite multiplicity. Proof. (ii)=⇒ (i): We are going to use the following crucial property (P) (for the proof, see [43], Chapter 1): If a germ N has finite multiplicity at x0 , then Ox0 is a finitely generated N module. This last fact is also expressed by saying that N is a finite germ at x0 . For any given prolongation of N , the associated sequence of local rings
j satisfies: N ⊂ N +1 ⊂ . . . ⊂ Ox0 , and each of the j , j ≥ N , is a N submodule of Ox0 . As the image of a Noetherian ring by a morphism of rings, N is a Noetherian ring. Any finitely generated module over a Noetherian ring is Noetherian module over this ring (see [53], p. 158, Theorem 18). Therefore, Ox0 is a Noetherian N module. Hence, any increasing sequence of N submodules is stationary. This shows that the sequence j is stationary, which means that the prolongation satisfies the AC P(k) for some k. (i)⇒ (ii). This part of the proof uses the same trick as the proof of Nakayama’s lemma. We consider N = ( f 1 , . . . , f N ). We can assume that Y = R and f 1 (x0 ) = . . . = f N (x0 ) = 0. A prolongation ( fˆj ) being given, let m( j ) ⊂ j be the maximal ideal of j . The ideal generated by m( j ) in Ox0 is denoted by m( j ).Ox0 . We will first construct a prolongation ( fˆj ) such that m( r ) = m( N ).Ox0 for some r. We chose fˆN +1 ∈ m( N ).Ox0 , but fˆN +1 ∈ / m( N ), fˆN +2 ∈ m( N ).Ox0 , but fˆN +2 ∈ / m( N +1 ), and so on. For such a sequence, m( r ) = m( r +1 ) if r ≥ N , hence, r ⊂ r +1 but r = r +1 for r ≥ N . This contradicts the assumption (i) of the lemma. Therefore, for some r, m( r ) = m( N ).Ox0 . Now, m( N ).Ox0 is obviously a finitely generated r module (it is generated by fˆ1 , . . . , fˆr ). Also x1 fˆi ∈ m( N ).Ox0 , 1 ≤ i ≤ r, hence, x1 fˆi =
r k=1
ai,k fˆk , where ai,k ∈ r .
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. The Key Lemma
75
Therefore, (x1 I d − A)r = 0, where A is the matrix formed by the ai,k . It follows that the determinant = det(x1 I d − A) = 0. Expanding this determinant, we get x1r + αi x1i = 0, with αi ∈ r . i
For each i, we have αi = βi + γi , where βi is a constant, γi is a nonunit of r . We have x1r + βi x1i = − γi x1i ∈ m( N ).Ox0 . i
i
Evaluating this formula at x = x0 = 0 (which we can assume), we get that β0 = 0. If all the βi , i < r, are zero, then, x1r ∈ m( N ).Ox0 . If one is r−p p nonzero, then, for some p < r, x1 (x1 + · · · + δ2 x1 + δ1 ) ∈ m( N ).Ox0 , with δi constant and δ1 = 0. Hence (x1P + · · · + δ2 x1 + δ1 ) is a unit in Ox0 . This shows that for some l > 0, x1l ∈ m( N ).Ox0 . The same is true for all the variables xi , 1 ≤ i ≤ n. By the criterion, N has finite multiplicity. 3.3. Why Analyticity? If one looks at the classical proof of Property (P) above, one notices that it essentially follows from the Weierstrass Preparation Theorem (see [43]). If we assume that our mappings are C ∞ , then Lemma 3.1 is false. A counterexample is provided below. The step (i) =⇒ (ii) of the lemma is still valid in the C ∞ case but the step (ii) =⇒ (i) is not. Using the Thom–Malgrange theory (see [49], page 189, Corollary 3.3), it is still true that Ox0 is a finitely generated N module, but the proof of (ii) =⇒ (i) breaks down because N is not in general Noetherian. We shall construct a smooth map 2 on R 2 with finite multiplicity, and a smooth prolongation of it, which does not satisfy the AC P(k) for any k. This will imply that N , in this counterexample, is not Noetherian. Here, we show a smooth map 2 on R 2 , with finite multiplicity, and a smooth prolongation that does not satisfy the AC P(k) for any k. 3.4. Counterexample Let W be the “Weierstrass manifold,” W = {(x0 , x1 , t)|t 2 + x1 t + x0 = 0} ⊂ R 3 , and : W → R 2 , (x0 , x1 , t) → (x0 , x1 ). Certainly, W is a smooth
P1: FBH CB385-Book
CB385-Gauthier
76
May 31, 2001
17:54
Char Count= 0
Singular State-Output Mappings
manifold. Set f 0 = x0 , f 1 = x1 and f n = gn (x0 , x1 )t, with the sequence gn constructed as follows. Consider on R 2 the polar coordinates (r, θ), and the vector field X : X = −r
∂ ∂ + . ∂r ∂θ
We can construct some spiraloid disjoint subsets Sn of R 2 as follows: We ∗ , I small enough for S ∩ {x = 0} = pick an interval I1 = [a1 , b1 ] ⊂ R+ 1 1 1 {x1 = 0}, where S1 is the union set of all trajectories of X passing through the points (x 0 , 0), x 0 ∈ I1 . Now, we choose a second interval I2 = [a2 , b2 ], with 0 < a2 < b2 < a1 and I2 ∩ S1 ∩ {x1 = 0} = ∅, and construct the set S2 as the union set of all the trajectories of X through (I2 , 0). Iterating the construction, we get Sn . We chose gn in such a way that its support is cl(Sn ), the closure of Sn , and gn > 0 on Int(Sn ). This is possible because the complement of this set is closed, and since, given any closed set, there exists a C ∞ function having this set as zero set. The multiplicity of F = ( f 0 , f 1 ), F : W → R 2 is finite at (0, 0, 0), (it is 2). We show that the sequence f n does not satisfy the AC P(k) for any k. For this, we work in an arbitrary small ball B centered at (0, 0, 0) in W. We assume that f n+1 = ( f 0 , f 1 , . . . , f n ) on B for some smooth . By construction, if p = p ∈ Int(Sn+1 ), then f n+1 ( p ) = ( f 0 ( p ), f 1 ( p ), 0, . . . 0). Let D be the discriminant set of W (i.e., D = {(x0 , x1 , t)|x0 = 14 x12 }). We consider c = (c0 , c1 ), c ∈ B ∩ Int(Sn+1 ) ∩ D, c ∈ B ∩ −1 (c), and a sequence ( pk ) in −1 (Int(Sn+1 ))\D such that limk→∞ pk = c , and we set pk = pk . By definition, we have gn+1 (x0 , x1 )t = (x0 , x1 ) on Int(Sn+1 ). Differentiating, we get ∂t ∂ ∂gn+1 (x0 , x1 )t + gn+1 (x0 , x1 ) = (x0 , x1 ), ∂ x0 ∂ x0 ∂ x0 which should hold at pk , ∂t ∂ ∂gn+1 ( pk )t( pk ) + gn+1 ( pk ) ( pk ) = ( pk ). ∂ x0 ∂ x0 ∂ x0 Taking the limit when k → ∞, we get ∂t ∂ ∂gn+1 (c)t(c) + gn+1 (c) (c) = (c), ∂ x0 ∂ x0 ∂ x0 where gn+1 (c) is different from zero, and t(c) = − c21 . Hence, ∂∂tx0 (c) is well 1 defined. However, t 2 + x1 t + x0 = 0 implies ∂∂tx0 = − 2t+x . At c, x1 = c1 , 1 c1 t(c) = − 2 , and 2t + x1 = 0. This is a contradiction.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. The Key Lemma
77
Hence, despite the fact that the multiplicity is finite, the equality f n+1 = ( f 0 , . . . , f n ), never holds.
3.5. Consequences of the Key Lemma The main consequence of Lemma 3.1 is the following theorem. Theorem 3.2. Let be an uncontrolled system. Let x0 ∈ X be fixed. If for some k, k has finite multiplicity at x 0 , then satisfies the AC P(N ) at x 0 for some N ≥ k, and by Lemma 2.2, y (N ) = N (y (0) , y˜ N −1 ) for some analytic function N defined in a neighborhood of N (x 0 ). Example 3.1. X = R, y = h(x), x˙ = f (x), x0 is arbitrary, h is nonconstant. In that case, of course, the notion of multiplicity is equivalent to the usual natural notion of multiplicity of a smooth function of a single variable. The multiplicity is always finite because h is nonconstant, and hence for some N , we have (locally): y (N ) = N (y (0) , y˜ N −1 ). Example 3.2. X = R 2 , y = x1 , x˙ 1 = x23 , x˙ 2 = f (x1 , x2 ), x0 = (0, 0). This system is observable, and by our criterion, the multiplicity is finite. Hence, for some N , we have also: y (N ) = N (y (0) , y˜ N −1 ). Of course, it can happen that N = ( f 1 , . . . , f N ) does not have finite multiplicity, but some particular prolongations ( fˆr ) satisfy the AC P(k) for some k. (Just take the prolongation by the zero sequence for instance.) For uncontrolled systems, there are many other interesting examples where the AC P(N ) holds for some N , but the multiplicity is not finite. A case where the AC P(N ) holds everytime is the following: Exercise 3.1. (Linear systems observed polynomially). X = R n , y = p(x) is a polynomial, x˙ = Ax is a linear vector field. Show that the AC P(N ) holds for some N (compare with Exercise 4.8, Chapter 3).
Exercise 3.2. () : X = R 2 , y = x1 (x12 + x22 ), A = −10 10 . 1. Show that is observable. By the previous exercise, the AC P(N ) holds: y (2) = −y (0) . 2. Show that the multiplicity is infinite. (For hints, see [30].)
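The identity y^(2) = −y^(0) asserted in Exercise 3.2 can be checked mechanically (a short sympy sketch of our own; we read the matrix in the statement as the standard rotation matrix A = (0 1; −1 0) — an assumption on our part, and the computation is unchanged for the opposite orientation):

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

A = sp.Matrix([[0, 1], [-1, 0]])   # assumed rotation matrix (see remark above)
f = A * x                           # linear vector field x' = Ax
h = x1 * (x1**2 + x2**2)            # polynomial output of Exercise 3.2

def lie(phi, vf):
    return (sp.Matrix([phi]).jacobian(x) * vf)[0, 0]

y0 = h
y1 = sp.expand(lie(y0, f))
y2 = sp.expand(lie(y1, f))
print(sp.simplify(y2 + y0))         # 0, i.e. y'' = -y, so the ACP holds with N = 2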
P1: FBH CB385-Book
CB385-Gauthier
78
May 31, 2001
17:54
Char Count= 0
Singular State-Output Mappings
The following important theorem will be proven later on as a consequence of more general results in the controlled case. It is a globalization of Theorem 3.2. Theorem 3.3. (Globalization of the AC P(N ) in the uncontrolled case). Assume that X is compact, is observable, and for each x0 ∈ X , there is an N (depending on x0 , it could be), such that N has finite multiplicity at x 0 . Then there is a k and a C ∞ function ϕ, defined and compactly supported on R kd y , such that y (k) (x) = ϕ y (0) (x), y˜ k−1 (x) , for all x ∈ X, that is, satisfies P H (k), the phase variable property of order k, globally on X. 4. The AC P(N) in the Controlled Case As stated in Theorem 3.2, the local AC P(N ) holds in the uncontrolled case, as soon as there is a k such that k has finite multiplicity at the point under consideration. As we shall see, in the controlled case, the situation is not so clear cut. Here, as we have said, the C ω assumption could be relaxed. Let us keep it for the clarity of exposition. But, in Chapter 7, we will use the results in the case where the systems are C ∞ . The main local result is the following. (0) Theorem 4.1. A point x0 and an infinite jet u 0 = (u 0 , u0 ) are fixed. ˇ satisfies the AC P(N ) iff = N . Moreover, in that case, N = N +1 , ˆ N.
N ⊂
ˇ = N . This comes Proof. To start, let us prove that the AC P(N ) implies
ˆ N , N ∩
ˆ N is stable from the following lemma (Lemma 4.2): h ∈ N ∩
ˇ ˇ by L f u and ∂u j , 1 ≤ j ≤ du . Therefore, by definition, ⊂ N . But, N ⊂
ˇ by definition. Hence, = N . Also, obviously, in that case, N = N +1 . ˇ = N implies that N ⊂
ˆ N . We denote temporarily Let us show that
by Y¯ the vector Y¯ (x, u (0) , u˜ N −1 ) = (y (0) , y (1) , . . . , y (N −1) ) = y N , by Z (x, u) the vector Z = (y(x, u (0) ), y (1) (x, u (0) , u01 ), . . . , y (N −1) (x, u (0) , u0 N −1 )), and by Z (x, u (0) ) the vector of all expressions L kf1u (∂ j1 )s1 L kf2u (∂ j2 )s2 . . . . . ˇ N ). L kfru (∂ jr )sr h, where ki + si ≤ N − 1 (the generators of
An obvious computation shows that (60) Y¯ x, u (0) , u˜ N −1 = Z x, u (0) + ϕ Z x, u (0) , u˜ N −1 , where ϕ is some polynomial mapping and ϕ(Z , u0 N −1 ) = 0 for all Z .
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
4. The AC P(N ) in the Controlled Case
79
Just to show how it works, the two first components of the above equality are, in the single input case, y (0) x, u (0) = h x, u (0) , y (1) x, u (0) , u (1) = L f u h + ∂u h(u 0 )(1) + ∂u h u (1) − (u 0 )(1) . ˇ = N , N is stable by ∂ j and L f u . Therefore, Now, assuming that
h being in N , all the components of Z (x, u (0) ) are in N , hence, Z = (Z , u (0) ) for a certain . Therefore, the relation (60) can be rewritten as Y¯ x, u (0) , u˜ N −1 = Z x, u (0) + ϕ Z , u (0) , u˜ N −1 , (61) and the differential of ϕ((Z , u (0) ), u˜ N −1 ) with respect to Z at u0 N −1 is zero. Hence, the map H, H Z , u (0) , u˜ N −1 = Z + ϕ Z , u (0) , u˜ N −1 , u (0) , u N −1 is a diffeomorphism from an open neighborhood U 0 of (Z (x0 , (u 0 )(0) ), (u 0 )(0) , u0 N −1 ), onto an open neighborhood V 0 of (Y¯ (x0 , (u 0 )(0) , u0 N −1 ), (u 0 )(0) , u0 N −1 ). In particular, Z (x, u (0) ) = G(Y¯ (x, u (0) , u˜ N −1 ), u (0) , u˜ N −1 ), for all (x, u (0) , u˜ N −1 ) ∈ W 0 , a certain neighborhood of (x0 , (u 0 )(0) , u0 N −1 ). Thereˆ N , and ϕ(Z , u (0) ) ∈
ˆ N for all ϕ : N ⊂
ˆ N. fore, Z (x, u (0) ) ∈
ˇ ⊂
ˆ N implies the AC P(N ). If To complete the proof, let us show that
ˇ ⊂
ˆ N , then { ˇ u˜ N −1 ; u0 N −1 } ⊂
ˆ N . But, by definition,
ˆ N ⊂ { ˇ u˜ N −1 ;
ˇ ˆ u0 N −1 }, hence {u˜ N −1 ; u0 N −1 } = N . Otherwise, also, ˆ N +1 ⊂ { ˇ u˜ N ; u0 N } =
ˆ N u (N ) ; (u 0 )(N ) .
ˆ N +1 =
ˆ N {u (N ) ; (u 0 )(N ) }, and, by definition, the AC P(N ) Therefore,
holds. Lemma 4.2. If the AC P(N ) holds, then: ˆ N ) ⊂ N ∩
ˆ N , j = 1, . . . , du , (i) ∂ j ( N ∩
ˆ N ) ⊂ N ∩
ˆ N. (ii) L f u ( N ∩
dk ϕ
ˆ N +k , (where d is the operator defined by ˆ N, k ∈
Proof. First, if ϕ ∈
dt dt the dynamics of in the obvious way): By definition, ϕ = G y (0) , y˜ N −1 , u (0) , u˜ N −1 , d (0) dϕ = dG y (0) , y˜ N −1 , u (0) , u˜ N −1 . y , y˜ N −1 , u (0) , u˜ N −1 dt dt ˆ N +1 . = H y (0) , y˜ N , u (0) , u˜ N ∈
An obvious induction gives the result.
P1: FBH CB385-Book
CB385-Gauthier
80
May 31, 2001
17:54
Char Count= 0
Singular State-Output Mappings
Second, we know (Remark 2.3 after the proof of Lemma 2.1) that ˆ N u (N ) , . . . , u (N +k−1) ; u (N ) , . . . , u (N +k−1) . ˆ N +k =
0 0 ˆ N , then Hence, if ϕ ∈ N ∩
dNϕ ˆ 2N =
ˆ N u (N ) , . . . , u (2N −1) ; u (N ) , . . . , u (2N −1) . ∈
0 0 N dt However, an obvious computation shows that du dNϕ (N ) ˜ = ϕ ( u ) + u j ∂ j (ϕ), 1 N −1 dt N j=1
because ϕ ∈ N . Therefore, since neither ϕ1 nor ∂ j (ϕ) depend on u (N ) , ˆ N. ∂ j (ϕ) ∈
u dϕ (1) ˆ N +1 =
ˆ N {u (N ) ; u (N ) }. Hence u j ∂ j (ϕ) ∈
Now, dt = L f u (ϕ) + dj=1 0 du (1) ˆ N . Because ∂ j (ϕ) ∈
ˆ N , this implies that L f u (ϕ) + j=1 u j ∂ j (ϕ) ∈
ˆ N. L f u (ϕ) ∈
ˆ N and do not depend on u (i) , i ≥ 1. Hence, L f u (ϕ) and ∂ j (ϕ) all belong to
L f u (ϕ) and ∂ j (ϕ) all belong to N by definition. Remark 4.1. (0) (i) If the AC P(N ) is true at (x0 , u 0 , u0 ) for some u0 , then, ˇN =
ˇ N +1 at x0 , u (0) ,
0 ˇ N +1 is implied by the fact that
ˇ N0 has finite ˇN =
(ii) This condition
multiplicity for some N0 (in the sense that the map with components ˇ N0 , given by Formula (57), has finite multiplicity). the generators of
Exercise 4.1. () : X = R 2 , U = R, y = x1 , x˙ 1 = x23 − x1 , x˙ 2 = x28 + x24 u. (0)
We work at x0 = (0, 0), and u 0 = 0. Show that: 1. is observable. ˇ = 4 = {G(x1 , x 3 , x 10 , x 17 , u)}, 2.
2 2 2 so that the AC P(4) holds (in fact it holds as soon as (x0 )2 = 0, and the AC P(2) holds everywhere else).
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Globalization
81
3. Show that actually, y (4) can be written as a polynomial: y (4) = P y (0) , y (1) , y (2) , y (3) , u (0) , u (1) , u (2) , and compute P. 5. Globalization 5.1. Preliminaries We assume that is given, and we fix a compact subset K of X. We also assume that is differentially observable of order N . We denote by ,K N the following mapping: ,K Uncontrolled case: N is the restriction of N to K , ,K ,K N : K → N (K ) ⊂ R N d y , ,K (N −1)du , Controlled case: N is the restriction of S N to K × U × R ,K ,K N : K × U × R (N −1)du → N (K × U × R (N −1)du ) ⊂ R N d y × R N du. Lemma 5.1. ,K is a homeomorphism onto its image, which is closed. N Proof. ,K is proper. Apply Proposition 2, p. 113 of [7]. N Comments 1. Lemma 5.1 shows that, as soon as is differentially observable, x can be expressed on K as a continuous function ϕ NK of (y (0) , y˜ N −1 , u (0) , u˜ N −1 ). 2. If X = R n , then, by Urysohn’s lemma, ϕ NK can be extended to a continuous function defined on all of R N d y × R N du , hence, x can be written as a continuous function, defined on all of R N d y × R N du : x = ϕ NK y (0) , y˜ N −1 , u (0) , u˜ N −1 . 3. If X = R n , (or if X is not R n but ϕ NK is globally defined and continuous on R N d y × R N du ), the classical assumption that ϕ NK is smooth is equivalent to the strong differential observability assumption (in restriction to K ). It is much stronger than the differential observability assumption made here. Of course, it implies (in the uncontrolled case) that the multiplicity is finite. Actually, the multiplicity is one. It is the case in Chapters 3 and 4. In both chapters, strong differential observability holds (by assumption in Chapter 4, and as a consequence of uniform infinitesimal observability in Chapter 3).
P1: FBH CB385-Book
CB385-Gauthier
82
May 31, 2001
17:54
Char Count= 0
Singular State-Output Mappings
Exercise 5.1. Consider the system : () x˙ = 1, y = cos(x) + cos(αx), x ∈ R, where α is an irrational number. 1. Show that is observable. 2. Show that the observation space of is finite dimensional. Compare with Exercise 3.1. Therefore the AC P(N ) holds for some N . 3. Show that x cannot be expressed as a continuous function of (y (0) , y˜ M ), whatever M. Exercise 5.2. Show that, in Example 3.2 (uncontrolled): 1. is differentially observable of order 2. 2. Depending on the choice of f, it can happen that N is an immersion is not an immersion for any N . for some N , or N Exercise 5.3. Show that, in Exercise 4.1: 1. is differentially observable of order 2. 2. S N is not an immersion for any N . 5.2. Main Result The main result in this Section is the following Theorem: Theorem 5.2. Assume that satisfies the AC P(N ) at each point, and is differentially observable of order N . Consider K , any fixed compact subset of X. Then, there exists a C ∞ function ϕ NK , compactly supported w.r.t. (y (0) , y˜ N −1 ), such that y (N ) = ϕ NK y (0) , y˜ N −1 , u (0) , u˜ N , (62) for all x ∈ K , all u, u˜ N . That is, satisfies P H (N ), the phase variable property of order N , in restriction to K . Compactly supported w.r.t. (y (0) , y˜ N −1 ) means that, for any K , a compact
subset of U × R N du , ϕ NK restricted to R N d y × K , is compactly supported. (It is not equivalent that ϕ NK is compactly supported, for all fixed u, u˜ N .)
Proof. Let Z = (x, u (0) , u˜ N ) denote a typical element of X × U × R N du , Z = (Z 0 , u˜ N ), Z 0 = (x, u (0) ) ∈ X × U. Also, set S N (x, u, u˜ N ) = (N ) (S N (x, u, u˜ N −1 ), u ).
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Globalization
83
For each Z , there is a neighborhood U Z of Z , and a map ϕ Z defined on V Z open, relatively compact in R N d y × R (N +1)du such that y (N ) = ϕ Z y (0) , y˜ N −1 , u (0) , u˜ N on U Z .
V Z,
Z Z Z Z Z We consider S N (U ) = V0 , V0 ∩ V = V0 , and N du Z ) = S V Z ∩ S N (K × U × R N (U ).
(63)
This is possible since S N is a homeomorphism onto its image when restricted to K × U × R N du by Lemma 5.1. We take U Z , V Z so small that the diameter of V Z is less than 1 in R N d y × R (N +1)du . We also consider N du ). This set V 0 is open, and V 0 = (R N d y × R (N +1)du )\S N (K × U × R 0 we set ϕ = 0. The family {V Z |Z ∈ K × U × R N du } ∪ {V 0 } forms an open covering of N R d y × R (N +1)du . By paracompactness, we can find a locally finite refinement of this covering, denoted by {W i |i ∈ I }. To each of these W i , we associate ϕ i as follows: W i is contained in V 0 , or in V Z i for some Z i ∈ K × U × R N du . If W i is contained in V 0 , then we set ϕ i = ϕ 0 = 0. If W i is contained in V Z i (we select one), then we set ϕ i = ϕ Z i . We chose a partition of unity {χ i } subordinated to this locally finite open covering {W i }, and we set, ϕ= χ i ϕi . i∈I
Clearly, ϕ is a C ∞ function, compactly supported w.r.t. (y (0) , y˜ N −1 ). It remains to prove that the equality (62) of the theorem holds. Let Z ∈ K × U × R N du . Set ω = S N (Z ). Then, ω belongs to a certain finite number of sets W i , say, W 1 , . . . , W k . All we have to prove is that ϕ i (ω) = y (N ) (Z ) for i = 1, . . . , k. This follows immediately from the injecZ i stated at the beginning of the tivity of S N and the property (63) of the V proof. 5.3. Consequences The following corollary will be also used in Chapter 7. Corollary 5.3. Theorem 5.2 is valid not only for y (N ) , but also for any function ˆ N {u (N ) ; u (N ) } (i.e., the germs of α at each point belong to ˆ N +1 =
α in
0 ˆ N , and in that case: ˆ N +1 ). It is true also for any function α in
(64) α = ϕ NK y (0) , y˜ N −1 , u (0) , u˜ N −1 , for all x ∈ K , all (u (0) , u˜ N −1 ).
P1: FBH CB385-Book
CB385-Gauthier
84
May 31, 2001
17:54
Char Count= 0
Singular State-Output Mappings
Proof. In the proof of Theorem 5.2, we only used the fact that α belongs to ˆ N {u (N ) ; u (N ) } at each point. Moreover, if α depends only on (x, u (0) , u˜ N −1 ),
0 then, applying the result, we get that α(x, u (0) , u˜ N −1 ) = ϕ NK (y (0) , y˜ N −1 , u (0) , u˜ N −1 , 0) for all x ∈ K , all (u (0) , u˜ N −1 ). Example 5.1. Example 3.2 and Exercise 4.1 (see also the Exercises 5.2 and 5.3) satisfy, in the uncontrolled and controlled cases, the assumptions of Theorem 5.2. In the case of Exercise 4.1, it has been already stated that Formula (62) holds globally, (for ϕ NK a certain polynomial). Exercise 5.4. (Single output, d y = 1). Let be a system with uniform canonical flag. Remember that satisfies the phase variable property P H (n) in restriction to small neighborhoods of each point in X (Exercise 4.1, Chapter 3). Moreover, assume that is differentially observable of some order N . Show that satisfies the P H (N ) (in restriction to any compact subset K of X ). Now, as a consequence, we can give the proof of Theorem 3.3, in the uncontrolled case. Proof of Theorem 3.3. Because is observable, then the observation space = span{L kf h(x)|k ≥ 0} separates points (see Exercise 5.2, Chapter 2). Define V m ⊂ X × X : V m = (x1 , x2 )|L kf h(x1 ) = L kf h(x2 ), 0 ≤ k ≤ m . The sequence V m is a decreasing sequence of analytic subsets of X × X, which is compact, and ∩m≥0 V m = X, the diagonal in X × X . By (Corollary 1, p. 99 of [43]), V m is a stationary sequence: for some N , V N = X. Hence, N is injective and is differentially observable of order N . Also, each x ∈ X has an open neighborhood U x such that the AC P(N x ) holds on U x . Extracting a finite open covering leads to an N such that the AC P(N ) holds at all x ∈ X. Applying Theorem 5.2 for N = sup(N , N ) gives the result. 6. The Controllable Case Let us assume that is controllable, in the usual weak sense of the transitivity of its Lie algebra (see the Appendix of Chapter 2). We will use the Theorems 5.1 and 5.2, of Chapter 2, which in this case allow us to conclude that the ¯ , as was already trivial foliation is regular, and equal to the foliation stated just after the proof of Theorem 5.2 in Chapter 2.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
6. The Controllable Case
85
Assume that these foliations are not trivial (i.e., their dimension is strictly > 0). Then, as was also stated in Section 5 of Chapter 2, Σ cannot be observable for any fixed input. Hence, Σ cannot be differentially observable. The consequence, in the (analytic) controlled case, if Σ is controllable, is that this part of the theory is void: assume that, as in the assumptions of Theorem 5.2, Σ satisfies the ACP(N) and is differentially observable. Then the "trivial foliation" has to be trivial, which implies that the associated distribution has full rank n. By Theorem 4.1, the ACP(N) then holds everywhere, the corresponding differentials have rank n, and S_N is an immersion. In fact, we are back to the situation of Chapters 3 or 4, where Σ is strongly differentially observable.
Exercise 6.1. Study the controllability (in the weak sense) for the system of Exercise 4.1.
6 Observers: The High-Gain Construction
The subject of this chapter is observers. The purpose of an observer is to obtain information about the state of the system from the observed data. In Section 1 of this chapter, we are going to discuss the concept of observers. The main ingredient of that concept is the notion of "estimation." There is no completely satisfactory definition of estimation. For that reason, we have to present several definitions of an observer, each having its domain of application. We shall explain these different definitions of observers and point out the relations between them.

In the remainder of the chapter, we shall construct explicitly several types of observers. The fundamental idea behind all of these constructions is to use the classical observers for linear systems and to kill the nonlinearities by an appropriate time rescaling. The construction and its variations that we present provide explicit, efficient, and robust algorithms for state estimation. It is closely related to the results of the three previous chapters (3, 4, 5), and it applies in all the cases that were dealt with in these chapters. Our observers can be used for several purposes:

1. State estimation in itself.
2. Dynamic output stabilization.

They will be used in Chapter 7 for purpose 2.

Defining an observer presents several problems. First, there is no good definition of a state observer when the state space X is not compact. A second difficulty is the peak phenomenon, explained below in Section 1.2.2, for observers with arbitrary exponential decay. Finally, let us point out that our construction of observers is related to nonlinear filtering theory. But this is beyond the scope of this book. A good reference for this relation is Reference [12].

In this chapter, we will make the following basic assumption: the system (Σ) is differentially observable of a certain order N ≥ 1.
1. Definition of Observer Systems and Comments An observer system is a system O , the inputs of which consist of the observed data of , i.e., the inputs of , their derivatives, and the outputs of . The
task of the observer is the estimation of the state of . Let us make a few remarks. The inputs are selected by the “operator” of the system. In particular, he can chose them differentiable, and then, their derivatives are known. On the other hand, it is hard to estimate the derivatives of the outputs from the knowledge of them. For this reason, we strictly avoid any use of these derivatives in our theory. Actually, the first problem we shall deal with will be to estimate the derivatives of the outputs. Because is differentially observable, for sufficiently smooth controls, estimating the state is equivalent to estimating the N − 1 first derivatives of the outputs, y˜ N −1 . We denote by U r,B the set of inputs u : [0, ∞[→ U = I du , such that u is r − 1 times continuously differentiable, its r th derivative belongs to L ∞ ([0, ∞[; R du ) and all the derivatives up to order r are bounded by B > 0. The set U 0,B is just the subset of L ∞ ([0, ∞[; U ) formed by the u(.) that are bounded by B. (Here, U = I du is not necessarily compact: I = R is possible). The norm . is the canonical Euclidean norm on R d y or on R N d y .
1.1. Output Observers

1.1.1. Definitions

We use again the notation ũ_r = (u^(1), ..., u^(r)), introduced in Section 1 of Chapter 5.

Definition 1.1. A U^{r,B} output observer of Σ, relative to Ω, is a system O_y (depending on Σ, r, B, and Ω), r ≥ N:

dz/dt = F(z, u^(0)(t), ũ_r(t), y^(0)(t)),
η(t) = H(z(t), u^(0)(t), ũ_r(t), y^(0)(t)),   (65)

on the d_Z-dimensional manifold Z, where Ω ⊂ X is open; where η, the output, belongs to R^{N d_y}; and where F and H are C∞ and satisfy the following condition: for all u ∈ U^{r,B} and for all x_0 ∈ Ω such that the corresponding semitrajectory x(t, x_0) of Σ is defined on [0, +∞[ and stays in Ω, and for all z_0 ∈ Z, the output η(t) is well defined and

lim_{t→+∞} ‖η(t) − y_N(t)‖ = 0.   (66)
Remark 1.1. We will be mostly interested in two cases: X is compact and Ω = X, or X is noncompact but Ω is relatively compact. Definition 1.2 below strengthens Definition 1.1.

Definition 1.2. An exponential U^{r,B} output observer of Σ, relative to Ω, is a one-parameter family of output observers of Σ, depending on the real parameter α > 0, with state manifold Z independent of α, which satisfies the following condition (67), strengthening (66): for any compact subset K̄ of Z, for all z_0 ∈ K̄, for all x_0 ∈ Ω, and for all u ∈ U^{r,B},

‖η(t) − y_N(t)‖ ≤ k(α) e^{−αt} ‖η(0) − y_N(0)‖,   (67)

as long as x(t, x_0) stays in Ω, where k : R_+ → R_+ is a function of polynomial growth, depending on K̄ in general. Such a one-parameter family will typically be denoted by O_y^e.

Remark 1.2. The fact that k(α) has polynomial growth warrants that, if α is large enough, the estimate can be made arbitrarily close to the real value in arbitrarily short time.

1.1.2. Comment

Let us assume that O_y is an output observer and r = N. Because Σ is differentially observable of order N, we obtain an estimation x̂(t) of the state x(t) of Σ as follows. For Ω' open, cl(Ω) ⊂ Ω', denote by E(t, z_0) the set

E(t, z_0) = { x* ∈ cl(Ω') : ‖η(t) − S_N(x*, u(t), ũ_{N−1}(t))‖ = inf_{x ∈ Ω'} ‖η(t) − S_N(x, u(t), ũ_{N−1}(t))‖ }.   (68)

If Ω is relatively compact, then Ω' can be taken relatively compact. In that case, the set E(t, z_0) is not empty, and for any metric d on X compatible with its topology, lim_{t→+∞} d(E(t, z_0), x(t)) = 0. If Ω is not relatively compact, this is not true.

Exercise 1.1. In this situation, show that lim_{t→+∞} #(E(t, z_0)) = 1 if, moreover, Σ is strongly differentially observable (of order N), for a trajectory x(t, x_0) that stays in Ω for all t ≥ 0.
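As an illustration of the set (68) (and of how an output-observer estimate can be turned into a state estimate), here is a minimal numerical sketch. The system, the map S_N, the grid, and the value of η are all hypothetical choices made for the example; in general S_N is obtained from the N first time derivatives of the output.

```python
# Toy illustration of the estimation set (68): minimize the distance between the
# observer output eta and the image of S_N over a grid on a compact set.
# Hypothetical example: x' = (x2, -sin(x1)), y = h(x) = x1^3 + x1, N = 2, so that
# S_2(x) = (h(x), L_f h(x)) = (x1^3 + x1, (3 x1^2 + 1) x2).  Nothing here is from the book.
import numpy as np

def S_N(x):
    return np.array([x[0]**3 + x[0], (3.0 * x[0]**2 + 1.0) * x[1]])

eta = np.array([0.3, -1.1])                         # a (hypothetical) estimate of (y, y')
xs = np.linspace(-2.0, 2.0, 201)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
dists = np.linalg.norm(np.array([S_N(x) for x in grid]) - eta, axis=1)
E = grid[dists <= dists.min() + 1e-9]               # discrete approximation of E(t, z0)
print("approximate E(t, z0):", E)
```

Because the toy S_2 above is injective, the approximate set reduces to a single grid point, which is the state estimate x̂(t).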
1.1.3. The Observability Distance

The following could be a way to overcome the problems linked to the noncompactness of X or Ω: one should try to construct a canonical distance over X, related to the observability properties. This canonical distance could then be used in the definition of observers. A trivial way to do this is to define the distance on X:

d_O(x, y) = sup_{‖(u^(0), ũ_{N−1})‖ ≤ B} ‖S_N(x, u^(0), ũ_{N−1}) − S_N(y, u^(0), ũ_{N−1})‖.   (69)

Exercise 1.2. Show that (69) actually defines a distance over X.

Unfortunately, this distance is not compatible with the topology of X in general:

Exercise 1.3. Consider the system of Exercise 5.1 in Chapter 5 (uncontrolled case). Show that the observability distance is not compatible with the topology of X.

Moreover, this distance is not very natural, because it depends both on B (the bound on the input and its derivatives) and on N (the degree of differential observability). There is a special case where the situation is better: if X is compact, and if Σ is analytic, uncontrolled, and just observable, then by the proof of Theorem 3.3 in the previous Chapter 5, there is an N such that Σ is differentially observable of order N. Taking the smallest such N, we get a canonical observability distance in that case. Unfortunately, this is not very interesting, because X is compact.

Exercise 1.4. Show that this distance is compatible with the topology of X.

1.2. State Observers

1.2.1. Definitions

Definition 1.3. A U^{r,B} state observer of Σ, relative to Ω, is a system O_x:

dz/dt = F(z, u^(0), ũ_r, y^(0)),   η = H(z, u^(0), ũ_r, y^(0)),   (70)
on the d_Z-dimensional manifold Z, where Ω ⊂ X is open; η, the output, belongs to X; and F and H are C∞ and satisfy the condition: for all u ∈ U^{r,B} and for all x_0 ∈ Ω such that the corresponding semitrajectory x(t, x_0) of Σ is defined for t ∈ [0, +∞[ and stays in Ω, and for all z_0 ∈ Z, the output η(t, z_0) is well defined and

lim_{t→+∞} d(η(t, z_0), x(t, x_0)) = 0,   (71)

where d is any metric compatible with the topology of X. Again, this definition makes sense only if Ω is relatively compact.

Definition 1.4. An exponential U^{r,B} state observer of Σ, relative to Ω, typically denoted by O_x^e, is a one-parameter family of state observers for Σ, depending on the real parameter α > 0 (on the same manifold Z), which satisfies the following condition (72), strengthening (71): for any compact K̄ ⊂ Z and for any Riemannian distance d on X, there exist a > 0 and k : R_+ → R_+ with polynomial growth, k and a depending on d and K̄, such that for all x_0 ∈ Ω, for all u ∈ U^{r,B}, and for all z_0 ∈ K̄,

Inf[a, d(η(t, z_0), x(t, x_0))] ≤ k(α) e^{−αt} d(η(0, z_0), x_0),   (72)
as long as x(t, x_0) stays in Ω. It is important to note that, in the preceding definition, one can replace "there exists a > 0" by "for all a, 0 < a ≤ a_0."

Again, if X is not compact and Ω is not relatively compact, the inequality (72) does not make sense: not all Riemannian metrics are equivalent, hence the inequality (72) cannot hold for all Riemannian metrics.

Remark 1.3. There is no hope of having a reasonable theory if we ask that condition (72) in Definition 1.4 be valid for any distance on X (compatible with the topology of X), even if Ω is relatively compact. But for Riemannian distances, everything is fine, because differentiable mappings between Riemannian manifolds are locally Lipschitz.

Remark 1.4. The inequality (72) is more complicated than the inequality (67) because of the peak phenomenon, well known to engineers (and to control theorists). In fact, it happens already in the linear theory. We explain it now.
1.2.2. Peak Phenomenon

Assume that Ω is relatively compact. Let d be a given Riemannian metric, and assume that the following inequality is satisfied, instead of (72):

d(η(t, z_0), x(t, x_0)) ≤ k_d(α) e^{−αt} d(η(0, z_0), x_0),   (73)

where k_d has polynomial growth. Inequality (73) cannot hold for all Riemannian metrics on X. This is due to the peak phenomenon: it can happen that there exist a trajectory x(t, x_0) of Σ, with x_0 ∈ Ω, and a function α ∈ R_+^* ↦ t_α ∈ R_+, which tends to zero as α tends to +∞, such that, if η_α(t, z_0) denotes a corresponding trajectory of the observer, η_α(t_α, z_0) → ∞ as α tends to +∞. One can construct a Riemannian metric on X such that, for the associated distance function δ on X, δ(η_α(t_α, z_0), x(t_α, x_0)) tends to +∞ faster than any power of α. The peak phenomenon already occurs in the linear theory for the classical Luenberger or Kalman observers. Of course, estimations of x that do not belong to Ω are irrelevant, but this is unimportant. The only important point is that relevant estimations of x are obtained in arbitrarily short time, for α large enough.

1.2.3. Consistency of Our Definition of an Exponential State Observer

Exercise 1.5. Show that, on a manifold X and on any compact set C ⊂ X, the distances induced on C by Riemannian distances are all equivalent.

Lemma 1.1. If Ω is relatively compact, then Definition 1.4 is independent of the choice of the Riemannian metric d on X.

Proof. Let K be the subset of R^{(r+1)d_u} formed by the (u^(0), ũ_r) whose components are bounded by B. Let K' be the set K' = ∪{h(Ω, u^(0)); |u_i^(0)| ≤ B}, and let K'' = H(K̄ × K × K'). This set K'' is relatively compact. Consider Ω' open, relatively compact, with cl(Ω) ⊂ Ω'. Let δ be the distance associated with another Riemannian metric over X. Then, by the previous exercise, there are λ, µ > 0 such that λ d(x, y) ≤ δ(x, y) ≤ µ d(x, y) for any (x, y) in cl(Ω' ∪ K''). Set a_δ = δ(Ω, X \ Ω') = δ(∂Ω, ∂Ω') and a_d = d(Ω, X \ Ω') = d(∂Ω, ∂Ω'). One has a_δ ≤ µ a_d. Choose Ω' small enough that a_d ≤ a.
We want to show that, under the conditions of the definition,

Inf[a_δ, δ(η(t, z_0), x(t, x_0))] ≤ k̄(α) e^{−αt} δ(η(0, z_0), x_0),

for a certain function k̄ with polynomial growth.

If η(t, z_0) ∉ Ω', then δ(η(t, z_0), x(t, x_0)) ≥ a_δ and d(η(t, z_0), x(t, x_0)) ≥ a_d, and

Inf[a_δ, δ(η(t, z_0), x(t, x_0))] = a_δ ≤ µ a_d = µ Inf[a_d, d(η(t, z_0), x(t, x_0))].

If η(t, z_0) ∈ Ω', then δ(η(t, z_0), x(t, x_0)) ≤ µ d(η(t, z_0), x(t, x_0)), hence

Inf[a_δ, δ(η(t, z_0), x(t, x_0))] ≤ µ Inf[a_d, d(η(t, z_0), x(t, x_0))].

Therefore, for any η(t, z_0),

Inf[a_δ, δ(η(t, z_0), x(t, x_0))] ≤ µ Inf[a_d, d(η(t, z_0), x(t, x_0))] ≤ µ Inf[a, d(η(t, z_0), x(t, x_0))]
≤ µ k(α) e^{−αt} d(η(0, z_0), x_0) ≤ (µ/λ) k(α) e^{−αt} δ(η(0, z_0), x_0).

This shows the result, with k̄ = (µ/λ) k.
1.2.4. Alternative Definitions of an Exponential State Observer We give now two other apparently different, but more tractable, definitions than Definition 1.4. Definition 1.5. An exponential U r,B state observer of , relative to , is a one parameter family of state observers for , depending on the real parameter α > 0 (on the same manifold Z , independent of α), which satisfies the following condition: There exists a Riemannian metric d such that relation (73) holds for all x0 ∈ , for all K¯ compact, for all z 0 ∈ K¯ , and for all u ∈ U r,B , as long as x(t, x0 ) stays in . The function kd has polynomial growth and depends on K¯ . Definition 1.6. An exponential U r,B state observer of , relative to , is a one parameter family of state observers for , depending on the real parameter α > 0 (on the same manifold Z , independent of α), which satisfies the following condition: There exists a Riemannian metric d such that relation (73) holds for all x0 ∈ , for all z 0 ∈ Z , and for all u ∈ U r,B , as long as x(t, x0 ) stays in . The function kd has polynomial growth.
Of course, Definition 1.6 is strictly contained in Definition 1.5. On the other hand, if is relatively compact, by Lemma 1.1, Definition 1.5 is itself contained in Definition 1.4. But, in fact, we have the following proposition. Proposition 1.2. Definitions 1.4 and 1.5 are equivalent. The only motive for introducing the equivalent Definition 1.4 is that it is independent of any special Riemannian metric over X.
Proof. (Proposition 1.2.). Take a Riemannian metric d over X such that X has finite diameter D. Exercise 1.6. Prove that such a Riemannian metric d does exist. Take the a from Definition 1.4 applied to the metric d, and a real λ > 1, with D < λa. Consider two trajectories in X, x(t, x0 ), η(t, z 0 ) of the system and of the observer system, x0 ∈ , z 0 ∈ K¯ . If Inf [a, d(η(t, z 0 ), x(t, x0 ))] = d(η(t, z 0 ), x(t, x0 )), then d(η(t, z 0 ), x(t, x0 )) ≤ k(α)e−αt d(η(0, z 0 ), x0 ), ≤ λk(α)e−αt d(η(0, z 0 ), x0 ). If Inf [a, d(η(t, z 0 ), x(t, x0 ))] = a, then d(η(t, z 0 ), x(t, x0 )) ≤ D < λa ≤ λk(α)e−αt d(η(0, z 0 ), x0 ). Then, in all cases, d(η(t, z 0 ), x(t, x0 )) ≤ λk(α)e−αt d(η(0, z 0 ), x0 ). This shows that (73) holds with kd = λk. The observers that we will construct will be of two types, as in the classical linear theory: (1) Luenberger type and (2) Kalman’s type. For the Luenberger case, the statement of Definition 1.6 is valid, and in the Kalman’s case, the statement of Definition 1.5 is valid. This is true both for the classical linear theory and our nonlinear theory.
1.3. Relations between State Observers and Output Observers

1. Assume that O_y is an output observer. If X = R^n and if Ω is relatively compact, then, by Lemma 5.1 and the comment just after it in Chapter 5, there is a continuous function ϕ_N : R^{N d_y} × R^{N d_u} → X such that x = ϕ_N(y, ỹ_{N−1}, u, ũ_{N−1}). This function provides a
continuous single-valued estimation xˆ (t) of x(t, x0 ) : xˆ (t) = ϕ N (η(t), n u(t), u˜ N −1 (t)). If |.| denotes any norm on X = R , one has lim |x(t) − xˆ (t)| = 0.
t→+∞
(74)
allows to construct Therefore, in that case, the output observer r,B, Oy r,B an U state observer, relative to . 2. The assumptions are the same as in 1 but is strongly differentially observable of order N , and (67) holds. Then, the function ϕ N can be taken smooth, compactly supported, and if d is any Riemannian distance over X, d(x(t), xˆ (t)) ≤ kd (α) e−αt for some function kd with polynomial growth, depending on d, , K¯ . , r ≥ 0 is a state observer. Then, a fortiori, it is an 3. Assume that rO,B, x r,B state observer for some r0 , for all r ≥ r0 ≥ N . We know that U is relatively compact. We can use the mapping r in order to construct an U r,B output observer as follows: we can replace r by a smooth ˜ r , which is constant outside a compact set, and the (C ∞ ) mapping restriction of which to ×V coincides with r . Here, V is the set of (r − 1) jets at t = 0 of control functions u(t), the r − 1 first derivatives of which are bounded by B (V = (U ∩ (I B )du ) ×(I B )(r −1)du ). Taking ˜ r with H, as the output mapping the composition H˜ of this mapping r,B output observer relative to . of the observer, we get an U r,B, ˜ r (η(t, z 0 ), u r (t)) − ˜ r (x(t, x0 ), 4. If O x is exponential, then: i. ˜ r (η(t, z 0 ), u r (t)) ≤ λd(η(t, z 0 ), x(t, x0 )) for λ large enough. ii. ˜ r (x(t, x0 ), u r (t)) ≤ M because ˜ r is single-valued outside u r (t)) − a compact. Hence, for λ large enough: ˜ r (η(t, z 0 ), u r (t)) − ˜ r (x(t, x0 ), u r (t))
≤ λ Inf d(η(t, z 0 ), x(t, x0 )), M λ , ˜ r (η(t, z 0 ), u r (t)) − ˜ r (x(t, x0 ), u r (t)) ≤ λ k(α)e−αt d(η(0, z 0 ), x0 ). Exercise 1.7. (d y > du , r ≥ 2n + 1). If moreover is strongly differen˜ r can be chosen so that tially observable of order r , prove that
˜ r (η(0, z 0 ), u r (0)) − ˜ r (x(0, x0 ), u r (0)), for all d(η(0, z 0 ), x0 ) ≤ γ z 0 ∈ K¯ and all x0 ∈ , where γ depends on the compact K¯ . This shows that the modified observer is an U r,B exponential output observer.
2. The High-Gain Construction 2.1. Discussion about the High-Gain Construction The high-gain construction is a general way to construct either state or output U r,B observers, that are also exponential. Before explaining this construction, we want to point out a certain number of facts, concerning the results in this chapter. 1. Systems with a phase variable representation: For the systems appearing in Chapters 4 and 5, we obtain a phase variable representation of a certain order N . As we shall see, our high-gain observers work for systems in the phase variable representation for C N controls. Hence, these apply to these general classes of systems. 2. Systems with a uniform canonical flag (the single output controlled case of Chapter 3): In that case, as we know (see Exercise 5.4 of Chapter 5), satisfies the phase variable property of some order, either locally or in restriction to arbitrarily large compact sets if is also differentially observable. Hence, the construction also applies for sufficiently smooth inputs. But there is a stronger result: If has the observability canonical form (20) of Chapter 3, then our observers work also for arbitrary L ∞ inputs, i.e., they are U 0,B observers. 3. The high gain construction has mainly two versions: Referring to the terminology of linear systems theory, the first version is in the Luenberger style and the second version is in the Kalman filter style (a deterministic version of the Kalman filter). 4. There are versions of the “high gain observer” that are “continuous– continuous”, and others that are “continuous–discrete”: Continuous– continuous means that the observer equation are ordinary differential equations (ODE’s), and observations are continuous functions of time. In the continuous–discrete version, which is more realistic, the observer equations are ODE’s with jumps, and observations are sampled. 2.2. The Luenberger Style Observer This section will concern uniformly infinitesimally observable systems. Let us assume that X = R n and our system , analytic, has the observability canonical form (20) globally. Recall that this canonical form exists locally as soon as has a uniform canonical flag. Let us denote by x i the vector (x 0 , . . . , x i ). The following two additional assumptions will be crucial.
(A1) Each of the maps f_i, i = 0, ..., n − 1, is globally Lipschitz w.r.t. x̄_i, uniformly with respect to u and x_{i+1}.
(A2) There exist two reals α, β, 0 < α < β, such that

α ≤ ∂h/∂x_0 ≤ β,   α ≤ ∂f_i/∂x_{i+1} ≤ β,  0 ≤ i ≤ n − 2.   (75)

In fact, these assumptions can be automatically satisfied as soon as one is interested only in the trajectories that stay in a given compact convex set Ω ⊂ X = R^n, as shown in the following exercise.

Exercise 2.1. Assume that X = R^n, Ω ⊂ X is the closure of an open, relatively compact, convex subset of X, and Σ has the normal form (20) (it is sufficient that it has this normal form only on Ω). Show that, for all B > 0, Σ can be extended smoothly (C∞) outside of Ω × V_B so that the assumptions (A1), (A2) are satisfied (globally) on X × U. (Here, V_B = {u ∈ U; |u| ≤ B}.)

In order to prove the main result of this section, we will need the following technical lemma.

Lemma 2.1. Consider time-dependent real matrices A(t) and C(t):

A(t) = [ 0  ϕ_2(t)  0     ...  0
         0  0       ϕ_3(t) ... 0
         ...
         0  0       ...  0  ϕ_n(t)
         0  0       ...  0  0      ],     C(t) = (ϕ_1(t), 0, ..., 0).

A(t) is n × n, and C(t) is 1 × n. Assume that there are two real constants α, β such that 0 < α < β and α < ϕ_i(t) < β, 1 ≤ i ≤ n. Then there are a real λ > 0, a vector K̄ ∈ R^n, and a symmetric positive definite n × n matrix S, with λ and S depending on α, β only, such that

(A(t) − K̄ C(t))ᵀ S + S (A(t) − K̄ C(t)) ≤ −λ Id.

Here, (A(t) − K̄ C(t))ᵀ means the transpose of (A(t) − K̄ C(t)), and ≤ is the (partial) ordering of symmetric matrices defined by the cone of symmetric positive semidefinite matrices.
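Before turning to the proof, here is a small numerical sanity check of the lemma; it is a sketch, not the constructive argument of the proof. The gain and the matrix S below are produced by a standard design at the midpoint of the interval (pole placement plus a Lyapunov equation, both assumptions of this sketch), and the matrix inequality is then tested on the whole box [α, β]^n. Since the left-hand side is affine in (ϕ_1, ..., ϕ_n), it suffices to check the 2^n vertices of the box; for a wide interval this particular candidate may fail, in which case a larger gain is needed.

```python
# Numerical sanity check of Lemma 2.1 (illustrative only; values are hypothetical).
import itertools
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.signal import place_poles

n, alpha, beta = 3, 0.8, 1.2            # assumed bounds on the phi_i

def A_of(phi):                           # phi = (phi_1, ..., phi_n)
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = phi[i + 1]
    return A

def C_of(phi):
    return np.array([[phi[0]] + [0.0] * (n - 1)])

mid = np.full(n, 0.5 * (alpha + beta))
# Candidate gain: make A - K C Hurwitz at the midpoint by pole placement.
K = place_poles(A_of(mid).T, C_of(mid).T, [-1.0, -2.0, -3.0]).gain_matrix.T
Acl = A_of(mid) - K @ C_of(mid)
S = solve_continuous_lyapunov(Acl.T, -np.eye(n))      # Acl' S + S Acl = -Id at midpoint

worst = max(
    np.max(np.linalg.eigvalsh((A_of(p) - K @ C_of(p)).T @ S
                              + S @ (A_of(p) - K @ C_of(p))))
    for p in itertools.product([alpha, beta], repeat=n))
print("S positive definite:", np.all(np.linalg.eigvalsh(S) > 0))
print("largest eigenvalue over the box vertices:", worst, "(negative means a lambda > 0 exists)")
```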
Proof. (Personal communication from W. Dayawansa.) The proof is an induction on the dimension n. For n = 1, we consider the quadratic form S(x, x) = x²/2. Then S(x, (A − K̄C)x) = −K̄ ϕ_1(t) x²/2, and, if an arbitrary λ > 0 is given, any real K̄ sufficiently large does the job.

Step n: We are looking for K̄ = (k, K), k ∈ R, K ∈ R^{n−1}, and we have to consider

(A(t) − K̄ C(t)) (x_1, x_2)ᵀ = [ −k ϕ_1   C_1(t) ;  −K ϕ_1   A_1(t) ] (x_1, x_2)ᵀ,   x_1 ∈ R, x_2 ∈ R^{n−1},   (76)

with C_1(t) = (ϕ_2(t), 0, ..., 0), and

A_1(t) = [ 0  ϕ_3(t)  0     ...  0
           0  0       ϕ_4(t) ... 0
           ...
           0  0       ...  0  ϕ_n(t)
           0  0       ...  0  0      ].

First, we make the following linear coordinate change: z_1 = x_1, z_2 = x_2 + ℓ x_1, where ℓ ∈ R^{n−1}. The square matrix appearing in the right-hand side of (76) becomes

B(t) = [ −k ϕ_1 − C_1 ℓ                         C_1
         −(K + ℓ k) ϕ_1 − (A_1 + ℓ C_1) ℓ       A_1 + ℓ C_1 ].   (77)

By the induction hypothesis relative to α and β, there are a λ > 0, an ℓ, and a quadratic Lyapunov function z_2ᵀ S_{n−1} z_2 such that (A_1 + ℓ C_1)ᵀ S_{n−1} + S_{n−1}(A_1 + ℓ C_1) ≤ −λ Id. We will look for S_n of the form

S_n = [ 1/2   0
        0     S_{n−1} ].

Setting V(Z, Y) = Zᵀ S_n Y, we get

2V(Z, B(t)Z) = (−k ϕ_1 − C_1 ℓ) z_1² + (C_1 z_2) z_1 + 2 z_2ᵀ S_{n−1}(A_1 + ℓ C_1) z_2
               + 2 z_2ᵀ S_{n−1}[−(K + ℓ k) ϕ_1 − (A_1 + ℓ C_1) ℓ] z_1,

2V(Z, B(t)Z) ≤ (−k ϕ_1 − C_1 ℓ) z_1² − λ ‖z_2‖² + (ϕ_2 + 2 ‖S_{n−1}‖ ‖(K + ℓ k) ϕ_1 + (A_1 + ℓ C_1) ℓ‖) |z_1| ‖z_2‖.
We can choose K = −ℓ k, and k is to be determined. For any ε > 0,

|z_1| ‖z_2‖ = |z_1/ε| ‖ε z_2‖ ≤ (ε²/2) ‖z_2‖² + (1/(2ε²)) |z_1|².

Hence,

2V(Z, B(t)Z) ≤ [(−k ϕ_1 − C_1 ℓ) + δ(t)/(2ε²)] |z_1|² + [−λ + δ(t) ε²/2] ‖z_2‖²,

where δ(t) = ϕ_2 + 2 ‖S_{n−1}‖ ‖(A_1 + ℓ C_1) ℓ‖. Because δ(t) is bounded from above, ε can be chosen small enough that (−λ + δ(t) ε²/2) < −λ/2. Because ϕ_1 is bounded from below, k can be chosen large enough that (−k ϕ_1 − C_1 ℓ) + δ(t)/(2ε²) < −λ/2. Hence 2V(Z, B(t)Z) ≤ −(λ/2) ‖Z‖². Setting

θ = [ 1   0
      ℓ   Id ],

we have Z = θ x, and 2 xᵀ θᵀ S_n B(t) θ x ≤ −(λ/2) xᵀ θᵀ θ x. Hence, setting S̃_n = θᵀ S_n θ, we have

2 xᵀ S̃_n (A(t) − K̄ C(t)) x ≤ −(λ/2) xᵀ θᵀ θ x,

and we obtain

2 xᵀ S̃_n (A(t) − K̄ C(t)) x ≤ −(λ/2) γ ‖x‖²,
for some γ > 0, which ends the proof.

Now, let us define the dynamics of our observer system O as follows:

dx̂/dt = f(x̂, u) − K_θ (h(x̂, u) − y),   (78)

where K_θ = Δ_θ K̄, Δ_θ = diag(θ, θ², ..., θⁿ) for θ > 1, and K̄ (together with S and λ) comes from Lemma 2.1, relative to the α, β of assumption (A2), (75).

Theorem 2.2. For any a > 0, there is a θ > 1 (large enough) such that, for all (x_0, x̂_0) ∈ X × X,

‖x̂(t) − x(t)‖ ≤ k(a) e^{−at} ‖x̂_0 − x_0‖,   (79)

for some polynomial k of degree n, where x̂(t) and x(t) denote the solutions at time t of the observer system O and of the system Σ, with respective initial conditions x̂_0 and x_0.

Corollary 2.3. For all B > 0 and for any relatively compact Ω ⊂ X, the system O given by Formula (78) is a U^{0,B} exponential state observer for Σ, relative to Ω.
Corollary 2.4. Let Ω be an open, relatively compact, convex subset of X = R^n. Assume that the restriction Σ|_{cl(Ω)} is globally in the observability canonical form (20). Then, for all B > 0, there is a U^{0,B} exponential state observer for Σ, relative to Ω.

Observation: in both corollaries, the polynomial k(a) is independent of the compact set K̄ in the definition of the observer.

Corollary 2.3 is an immediate consequence of Theorem 2.2 and of the definition of a state observer. Corollary 2.4 is a consequence of Exercise 2.1 and of Theorem 2.2.
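Before the proof, here is a minimal simulation sketch of the observer (78) on a hypothetical two-dimensional system already in the canonical form; the system, the gain K = (2, 1), the value of θ, and the integration scheme are illustrative choices, not taken from the book.

```python
# High-gain Luenberger-style observer (78) on the toy system
#   x1' = x2,  x2' = -sin(x1) - x2 + u,  y = x1,
# with K = (2, 1) so that A - K C is Hurwitz (A the 2x2 antishift, C = (1, 0)),
# and K_theta = diag(theta, theta^2) K.
import numpy as np

def f(x, u):
    return np.array([x[1], -np.sin(x[0]) - x[1] + u])

theta = 5.0
K_theta = np.array([theta, theta**2]) * np.array([2.0, 1.0])

dt, T = 1e-3, 10.0
x, xhat = np.array([1.0, -0.5]), np.zeros(2)
for k in range(int(T / dt)):
    u = np.sin(0.3 * k * dt)                                  # a bounded input
    y = x[0]                                                  # observed output h(x) = x1
    x = x + dt * f(x, u)                                      # explicit Euler for the system
    xhat = xhat + dt * (f(xhat, u) - K_theta * (xhat[0] - y)) # observer (78)
print("final estimation error:", np.linalg.norm(xhat - x))
```

Increasing θ speeds up the convergence of the error, at the price of a larger transient (the peak phenomenon of Section 1.2.2).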
Proof of Theorem 2.2. Our system is written as

(Σ)   ẋ_1 = ϕ_1(x_1, x_2, u), ..., ẋ_n = ϕ_n(x_1, ..., x_n, u),   that is, ẋ = f(x, u),
      y = ϕ_0(x_1, u).

The equation of the observer is

dx̂/dt = f(x̂, u) − K_θ (ϕ_0(x̂, u) − y).

Setting ε = x̂ − x, we get

ε̇ = f(x̂, u) − f(x, u) − K_θ (ϕ_0(x̂, u) − y).   (80)

We consider trajectories x(t), x̂(t), ε(t) of (Σ), (O), (80) respectively, corresponding to the control function u(t). We write x̄_i for (x_1, ..., x_i). One has

ϕ_i(x̂, u) − ϕ_i(x, u) = ϕ_i(x̂̄_i, x̂_{i+1}, u) − ϕ_i(x̄_i, x_{i+1}, u)
 = ϕ_i(x̂̄_i(t), x̂_{i+1}(t), u(t)) − ϕ_i(x̄_i(t), x̂_{i+1}(t), u(t)) + ϕ_i(x̄_i(t), x̂_{i+1}(t), u(t)) − ϕ_i(x̄_i(t), x_{i+1}(t), u(t))
 = ϕ_i(x̂̄_i(t), x̂_{i+1}(t), u(t)) − ϕ_i(x̄_i(t), x̂_{i+1}(t), u(t)) + (∂ϕ_i/∂x_{i+1})(x̄_i(t), δ_i(t), u(t)) ε_{i+1}(t),

for some δ_i(t). We set

g_{i+1}(t) = (∂ϕ_i/∂x_{i+1})(x̄_i(t), δ_i(t), u(t)),
to obtain ε˙ i = ϕi (xˆ i (t), xˆ i+1 (t), u(t)) − ϕi (x i (t), xˆ i+1 (t), u(t)) + gi+1 (t)εi+1 (t) − (K θ )i g1 (t)ε1 , where g1 (t) =
∂ϕ0 ∂ x1 (δ0 (t), u(t)).
Hence,
¯ ε˙ = (A(t) − K θ C(t))ε + F, with C(t) = (g1 (t), 0, . . . , 0),
0, g2 (t), 0, . . . , , 0 0, 0, g3 (t), 0, . . . , 0 , A(t) = . 0, . . . . . . . . . . , gn (t) 0, 0, . . . . . . . . . . . , 0
and
. . ¯ F = ϕi (xˆ i (t), xˆ i+1 (t), u(t)) − ϕi (x i (t), xˆ i+1 (t), u(t)) . . .
The assumption A1 states that the ϕi are Lipschitz with respect to x i . We set x 0 = (θ )−1 x, xˆ 0 = (θ )−1 xˆ , ε0 = (θ )−1 ε. We have ¯ ε˙ 0 = (θ )−1 ε˙ = (θ )−1 (A(t) − K θ C(t))θ ε 0 + (θ )−1 F. Otherwise,
. . −1 ¯ −1 0 0 (θ ) F = (θ ) ϕi θ xˆ i (t), xˆ i+1 (t), u(t) −ϕi θ x i (t), xˆ i+1 (t), u(t) . . . ¯ ≤ Lε0 , where √L is the Lipschitz The main fact is that (θ )−1 F n constant of the ϕi ’s with respect to x i (an easy verification left to the reader.)
Hence, using Lemma 2.1, 1 d 0 0 (ε Sε ) = ε0 S ε˙ 0 = ε 0 S(θ )−1 (A(t) − K θ C(t))θ ε 0 + ε 0 S(θ )−1 F¯ 2 dt ≤ θ ε0 S(A(t) − K C(t))ε0 + LS ε0 2
λ ≤ − θ + LS ε0 2 . 2 Hence, for an arbitrary γ > 0, if θ is sufficiently large, d 0 0 (ε Sε ) ≤ −γ ε0 Sε0 , dt which gives ε 0 Sε0 ≤ ε 0 Sε0 (0)e−γ t . The result follows immediately.
2.3. The Case of a Phase-Variable Representation

Here, d_y is arbitrary. Let us assume that we have a system in the phase-variable representation, y^(N) = ϕ(y^(0), ỹ_{N−1}, u^(0), ũ_N); then the previous construction can be adapted to obtain an exponential U^{N,B} output observer, at the cost of an additional assumption:

(A3) ϕ is compactly supported w.r.t. (y^(0), ỹ_{N−1}).

(Remember that this means that for any compact subset K of U × R^{N d_u}, the restriction of ϕ to R^{N d_y} × K is compactly supported.) As we know, this assumption is satisfied in many situations, for instance:

- in Chapter 5, in the situation of Theorem 5.2;
- in particular, in the case where d_y > d_u, we have generically a phase-variable representation. If we restrict to a compact subset Ω̄ of X, Theorem 5.2 gives us a phase-variable representation for Σ, with the additional property (A3).

Let us denote by A the (N d_y, d_y) block-antishift matrix:

A = [ 0  Id_{d_y}  0        ...  0
      0  0         Id_{d_y} ... 0
      ...
      0  0         ...  0   Id_{d_y}
      0  0         ...  0   0        ].
Here, ẑ denotes a typical element of R^{N d_y}. Then dẑ/dt = Aẑ is the linear ODE on R^{N d_y} corresponding to the vector field

A(ẑ) = Σ_{i=1,...,N−1} Σ_{j=1,...,d_y}  ẑ_{i d_y + j}  ∂/∂ẑ_{(i−1) d_y + j}.

Also, b(ẑ, u^(0), ũ_N) denotes a vector field on R^{N d_y}, depending on (u^(0), ũ_N), defined as follows:

b(ẑ, u^(0), ũ_N) = Σ_{j=1}^{d_y}  ϕ_j(ẑ, u^(0), ũ_N)  ∂/∂ẑ_{(N−1) d_y + j}.

C : R^{N d_y} → R^{d_y} is the linear mapping with matrix C = (Id_{d_y}, 0, ..., 0); i.e., C(ẑ) denotes the vector of R^{d_y} whose components are the d_y first components of ẑ.

Definition 2.1. A square matrix A is called stable if all its eigenvalues have strictly negative real parts.

Consider the system O on R^{N d_y}, with output η ∈ R^{N d_y}:

dẑ/dt = A(ẑ) + b(ẑ, u^(0), ũ_N) − K_θ (C(ẑ) − y(t)),   η = ẑ,   (81)

where θ > 1 is a given real, K_θ = Δ_θ K, Δ_θ is the block-diagonal matrix Δ_θ = Block-diag(θ Id_{d_y}, θ² Id_{d_y}, ..., θ^N Id_{d_y}), and K is such that the matrix A − KC is a stable matrix.

Exercise 2.2. Prove that such a K does exist.

Theorem 2.5. O is an exponential U^{N,B} output observer relative to an arbitrarily large relatively compact Ω ⊂ X (if the assumption (A3) is satisfied; if not, one has first to modify the mapping ϕ outside S_N(Ω × R^{N d_u}) in order that (A3) be satisfied, in which case the observer system depends on Ω).

Proof. This proof is just an adaptation of the proof of Theorem 2.2. It gives a simpler and more explicit construction of the parameters K_θ of the observer. Let us do it in detail again. First, we find a symmetric positive definite matrix S by solving the Lyapunov equation:

(A − KC)ᵀ S + S(A − KC) = −Id.   (82)
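The following is a sketch of how the ingredients of (81) and (82) can be produced numerically; the sizes, the pole locations, and the value of θ are assumptions made for the example, and the gain K is obtained here by pole placement (any K making A − KC stable would do).

```python
# Gain design for the phase-variable output observer (81):
# build the block-antishift matrix A and C = (Id, 0, ..., 0), choose K so that A - K C
# is stable, solve the Lyapunov equation (82) for S, and form Delta_theta and K_theta.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.signal import place_poles

N, dy = 3, 2                                     # hypothetical sizes
A = np.zeros((N * dy, N * dy))
for i in range(N - 1):
    A[i * dy:(i + 1) * dy, (i + 1) * dy:(i + 2) * dy] = np.eye(dy)
C = np.zeros((dy, N * dy)); C[:, :dy] = np.eye(dy)

poles = [-1.0, -1.5, -2.0, -2.5, -3.0, -3.5]     # any stable, distinct choice works
K = place_poles(A.T, C.T, poles).gain_matrix.T   # A - K C is then a stable matrix
S = solve_continuous_lyapunov((A - K @ C).T, -np.eye(N * dy))   # equation (82)

theta = 10.0
Delta = np.kron(np.diag(theta ** np.arange(1, N + 1)), np.eye(dy))
K_theta = Delta @ K                              # the gain used in (81)
print("A - K C stable:", np.all(np.linalg.eigvals(A - K @ C).real < 0),
      "| S positive definite:", np.all(np.linalg.eigvalsh(S) > 0))
```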
Exercise 2.3. Show that this equation has a unique symmetric positive definite solution S.

We know that, because Σ has the phase-variable property of order N, the trajectories of Σ (corresponding to sufficiently differentiable controls) are mapped by S_N into the trajectories of the system

ż = A(z) + b(z, u^(0)(t), ũ_N(t)).   (83)

If the phase-variable property holds in restriction to a relatively compact subset Ω ⊂ X only, this last property holds for the (pieces of) semitrajectories that stay in Ω only.

We consider any trajectory of Σ, corresponding to the initial condition x_0 and the control u(t), t ≥ 0, such that |d^i u/dt^i| ≤ B, 0 ≤ i ≤ N. Then z(0) = z_0 = S_N(x_0, u^(0)(0), ũ_{N−1}(0)). The corresponding trajectory of (83) is just z(t) = (y^(0)(t), ỹ_{N−1}(t)), the successive time derivatives of the output of Σ (as long as x(t) ∈ Ω). We fix an arbitrary initial condition ẑ(0) = ẑ_0 and set ε(t) = ẑ(t) − z(t). By construction, ε satisfies

dε/dt = (A − K_θ C) ε(t) + b(ẑ(t), u^(0)(t), ũ_N(t)) − b(z(t), u^(0)(t), ũ_N(t)).   (84)

For simplicity of notation, we will omit the variables u^(0)(t), ũ_N(t) from now on and rewrite this equation as

dε/dt = (A − K_θ C) ε(t) + b(ẑ(t)) − b(z(t)).   (85)

Set ε = Δ_θ ε°, z = Δ_θ z°, ẑ = Δ_θ ẑ°. We obtain

dε°/dt = θ (A − KC) ε°(t) + (1/θ^N) (b(Δ_θ ẑ°(t)) − b(Δ_θ z°(t))).

The map ϕ is C∞, compactly supported in (y^(0), ỹ_{N−1}). Hence it has a Lipschitz constant L_B with respect to (y^(0), ỹ_{N−1}), depending on the bound B on the control and its first derivatives. As before, the Lipschitz constant of (1/θ^N) b(Δ_θ z) is also L_B if θ > 1. One has

(1/2) d/dt (ε°ᵀ S ε°) = −(θ/2) ε°ᵀ ε° + ε°ᵀ S (1/θ^N)(b(Δ_θ ẑ°(t)) − b(Δ_θ z°(t)))
≤ −(θ/2) ‖ε°‖² + ‖S‖ L_B ‖ε°‖²
≤ −(θ/2 − ‖S‖ L_B) ‖ε°‖².
Hence, for θ > 2 L_B ‖S‖,

(1/2) d/dt (ε°ᵀ S ε°) ≤ −(θ/(2‖S‖) − L_B) ε°ᵀ S ε°,

and

ε°ᵀ S ε° ≤ e^{−2(θ/(2‖S‖) − L_B) t} ε°(0)ᵀ S ε°(0),
‖ε°‖² ≤ e^{−2(θ/(2‖S‖) − L_B) t} ‖S^{-1}‖ ‖S‖ ‖ε°(0)‖².

For θ > sup(1, 2 L_B ‖S‖), we have

‖ε‖² = ε°ᵀ (Δ_θ)² ε° ≤ θ^{2N} ‖ε°‖² ≤ e^{−2(θ/(2‖S‖) − L_B) t} ‖S^{-1}‖ ‖S‖ θ^{2N} ε(0)ᵀ (Δ_θ)^{-2} ε(0),
‖ε‖² ≤ e^{−2(θ/(2‖S‖) − L_B) t} ‖S^{-1}‖ ‖S‖ θ^{2(N−1)} ‖ε(0)‖².

Observation: again, the function k in the definition of the exponential output observer does not depend on the compact K̄ (in which the observer is initialized). This will no longer be the case in the next paragraph.
2.4. The Extended Kalman Filter Style Construction 2.4.1. Introduction and Main Result No prerequisites about the Kalman filter are required to understand this section. We remain in the deterministic setting, and we present complete proofs of all results. Let us just add a few comments about the extended Kalman filter. The “extended Kalman filter” (for simplicity denoted by EKF) applies the linear time-dependent version of the Kalman filter to the linearized system along the estimate of the trajectory. If it was along the real trajectory, then the procedure would be perfectly well defined. However, it has to be along the estimate of the trajectory, because the real trajectory is unknown. The purpose of the filter is precisely to estimate it. Therefore, it is an easy exercise to check that the equations of the extended Kalman filter are not intrinsic. They depend on the coordinate system. These equations were introduced by engineers, and they perform very well in practice, because they take the noise into account. Our point of view in this section is as follows: 1. We use special coordinates, for instance, the special coordinates of the uniform observability canonical form (23) in the single output control affine case, or the coordinates of a phase-variable representation in other cases. These coordinates are essentially uniquely defined, hence, the extended Kalman filter written in these coordinates becomes a welldefined object.
2. It is possible to adapt the high-gain construction shown in the previous section so that the equations of the EKF in the special coordinates give the same results as in the previous section (arbitrary exponential convergence of the estimation error).

The main difference with the Luenberger-style version is that the correction term "K_θ" is not constant: it is computed as a function of the information appearing at the current time t. We have observed in the applications that the EKF performs very well in practice, probably for this reason.

Let us present this construction in the single-output, control-affine case: this last requirement seems essential. We make the following assumption:

(a1) Σ is globally in the normal form (23).

This is true in several situations, for example if Σ is observable and if we make one of the following assumptions (a2), (a3):

(a2) Φ = (h, L_f h, ..., L_f^{n−1} h) is a global diffeomorphism, or, weaker,
(a3) in restriction to Ω̄, the closure of an open relatively compact subset of X, Φ is a diffeomorphism.

Remember that the observability assumption implies that Φ should be almost everywhere a local diffeomorphism from X into R^n, and that, in the coordinates defined by Φ, the system has to be in the normal form (23). (See Theorem 4.1 of Chapter 3.) In the case of assumption (a3), all the functions ϕ and g_i in the normal form (23) can be extended to all of R^n so that they are smooth and compactly supported w.r.t. all their arguments, and g_i depends on (x_1, ..., x_i) only. Let us recall the normal form (23):

ẋ_1 = x_2 + u g_1(x_1),
ẋ_2 = x_3 + u g_2(x_1, x_2),
...
ẋ_{n−1} = x_n + u g_{n−1}(x_1, ..., x_{n−1}),
ẋ_n = ϕ(x) + u g_n(x),
y = x_1.
Let us assume moreover that: a4) ϕ and gi are globally Lipschitz. In the case of (a3), this will be automatically true, by what we just said.
Denote again by A the antishift matrix

A = [ 0  1  0  ...  0
      ...
      0  ...  ...  0  1
      0  0  ...  0  0 ],

and let C denote the linear form over R^n with matrix C = (1, 0, ..., 0). Let us rewrite the normal form (23) in matrix notation as

ẋ = Ax + b(x, u),   y = Cx,   (86)

where b_i, the ith component of b, depends only on x̄_i = (x_1, ..., x_i) and u.

Let Q be a given symmetric positive definite n × n matrix, and r, θ positive real numbers, Δ_θ = diag(1, 1/θ, ..., (1/θ)^{n−1}). Let b*(x, u) denote the Jacobian matrix of b(x, u) w.r.t. x. Set Q_θ = θ² Δ_θ^{-1} Q Δ_θ^{-1}. The equations

(i)  dz/dt = Az + b(z, u) − S(t)^{-1} Cᵀ r^{-1} (Cz − y(t)),   (87)
(ii) dS/dt = −(A + b*(z, u))ᵀ S − S (A + b*(z, u)) + Cᵀ r^{-1} C − S Q_θ S,
     η = z,   (88)

define what is called the extended Kalman filter for our system (86). (Q_θ and r are analogous to the covariance matrices of the state noise and the output noise in the stochastic context.) We will show the following:

Theorem 2.6. Under the assumptions (a1) and (a4), for θ > 1 and for all T > 0, the extended Kalman filter (87) satisfies, for t ≥ T/θ:

‖z(t) − x(t)‖ ≤ θ^{n−1} k(T) ‖z(T/θ) − x(T/θ)‖ e^{−(θ ω(T) − µ(T))(t − T/θ)},   (89)

for some positive continuous functions k(T), ω(T), µ(T).

Corollary 2.7. Under the assumptions (a1) and (a4), for any open relatively compact Ω ⊂ X = R^n and for any B > 0, the extended Kalman filter is an exponential U^{0,B} state observer, relative to Ω.

In fact, this theorem and this corollary generalize to the case of a multioutput system having a phase-variable representation. Let us assume that Σ
has the phase-variable representation y^(N) = ϕ(y, ỹ_{N−1}, u, ũ_N), and that ϕ is compactly supported w.r.t. (y, ỹ_{N−1}), as in Section 2.3. We consider systems on R^{Np} of the general form

dx/dt = A_{N,p} x + b(x, u),   (90)

where p is an integer, A_{N,p} is the (Np, p)-antishift matrix

A_{N,p} = [ 0  Id_p  0  ...  0
            ...
            0  ...  ...  0  Id_p
            0  ...  ...  ...  0 ],

and where

∂b_i/∂x_j ≡ 0  if, for some integer k,  kp < i ≤ (k+1)p  and  j > (k+1)p,

and all the functions b_i(x, u) are compactly supported w.r.t. their x arguments. Clearly, this form includes not only the normal form (23), but also the systems with p outputs that are in the phase-variable representation (in this case, u in (90) denotes not only the control, but also its N first derivatives). Let us consider the same extended Kalman filter equations:

(i)  dz/dt = A_{N,p} z + b(z, u) − S(t)^{-1} Cᵀ r^{-1} (Cz − y(t)),
(ii) dS/dt = −(A_{N,p} + b*(z, u))ᵀ S − S (A_{N,p} + b*(z, u)) + Cᵀ r^{-1} C − S Q_θ S,   (91)
     η = z,

where C = (Id_p, 0, ..., 0), Q_θ = θ² Δ^{-1} Q Δ^{-1}, Δ = BlockDiag(Id_p, (1/θ) Id_p, ..., (1/θ)^{N−1} Id_p). The expression b*(z, u) is again the Jacobian matrix of b(z, u) w.r.t. z. As in the case of our Luenberger-type observers, we have:

Theorem 2.8. Theorem 2.6 holds also for systems of the form (90).

Corollary 2.9. If a system Σ has a phase-variable representation of order N, then for all open relatively compact subsets Ω ⊂ X and for all B > 0, the system (91) is an exponential U^{N,B} output observer for Σ, relative to Ω.

Corollary 2.9 is just a restatement of Corollary 2.7 in this new context.
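The following is a minimal simulation sketch of the high-gain EKF equations (87)-(88), written for a hypothetical planar system already in the normal form; the system, the tuning constants, and the explicit Euler integration are assumptions of the sketch, not prescriptions of the book.

```python
# High-gain extended Kalman filter (87)-(88) on the toy system
#   x1' = x2,  x2' = -sin(x1) - x2 + u,  y = x1,
# so that b(x, u) has only a second component and the triangular structure holds.
import numpy as np

n = 2
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

def b(x, u):
    return np.array([0.0, -np.sin(x[0]) - x[1] + u])

def b_star(x, u):                      # Jacobian of b w.r.t. x
    return np.array([[0.0, 0.0], [-np.cos(x[0]), -1.0]])

theta, r, Q = 5.0, 1.0, np.eye(n)
Delta_inv = np.diag([1.0, theta])                 # inverse of diag(1, 1/theta)
Q_theta = theta**2 * Delta_inv @ Q @ Delta_inv    # Q_theta as defined before (87)

dt, T = 1e-3, 10.0
x = np.array([1.0, -0.5])
z, S = np.zeros(n), 0.1 * np.eye(n)
for k in range(int(T / dt)):
    u = np.sin(0.3 * k * dt)
    y = x[0]
    x = x + dt * (A @ x + b(x, u))                            # the "true" system
    Abs = A + b_star(z, u)
    z = z + dt * (A @ z + b(z, u)
                  - np.linalg.solve(S, C.T).flatten() * (z[0] - y) / r)   # (i)
    S = S + dt * (-Abs.T @ S - S @ Abs + C.T @ C / r - S @ Q_theta @ S)   # (ii)
print("final estimation error:", np.linalg.norm(z - x))
```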
2.4.2. Preparation for the Proof of Theorems 2.6 and 2.8

To prove the theorems, we need a series of lemmas. The first one is a continuity result for control affine systems, very important in itself. Let Σ be a control affine system (without outputs):

(Σ)   ẋ = f(x) + Σ_{i=1}^{p} g_i(x) u_i,   x(0) = x_0.

Let P : Dom(P) ⊂ L∞([0,T], R^p) → C⁰([0,T], X) be the "input → state" mapping of Σ, i.e., the mapping that to u(·) associates the state trajectory t ∈ [0,T] ↦ x(t). Let us endow L∞([0,T], R^p) with the weak-* topology, and C⁰([0,T], X) with the topology of uniform convergence.

Lemma 2.10. 1) The domain of P is open in L∞([0,T], R^p); 2) P is continuous on bounded sets.

Proof. See Appendix 3.1 in this chapter.

Consider now a bilinear system on R^{Np}, of the form

ẋ = A_{N,p} x + Σ_{k,l=0, l≤k}^{N−1} Σ_{i,j=1}^{p} u_{kp+i, lp+j} e_{kp+i, lp+j} x,   y = Cx,   (92)

where, as above, A_{N,p} is the (Np, p)-antishift matrix; C : R^{Np} → R^p, C = (Id_p, 0, ..., 0) is the projection on the p first components; and e_{i,j} is the matrix defined by e_{ij}(v_k) = δ_{jk} v_i, where {v_j} is the canonical basis of R^{Np}. The second term in (92) is a lower p-block-triangular matrix. For u(·) = (u_{ij}(·)), a measurable bounded control function for (92), defined on [0,T], we denote by Φ_u(t, s) the associated resolvent matrix (Φ_u(t, s) ∈ Gl_+(Np, R), s, t ∈ [0,T]). Let us define the Gramm observability matrix of (92) by

G_u = ∫_0^T Φ_u(τ, T)ᵀ Cᵀ C Φ_u(τ, T) dτ.

The matrix G_u is a symmetric positive semidefinite matrix. It is easy to check that the bilinear system (92) is observable for all u(·) measurable, bounded.

Lemma 2.11. If a bound B is given on the controls u_{i,j}, then there exist positive numbers 0 < α < β, depending on B, T only, such that

α Id_{Np} ≤ G_u ≤ β Id_{Np}.
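As an illustration of Lemma 2.11, here is a sketch of how G_u can be computed numerically for one particular bounded control; the sizes, the control law, and the quadrature are hypothetical choices. The code integrates the resolvent Φ(t, 0) together with the standard Gramian from 0, and then uses G_u = Φ(0,T)ᵀ G̃ Φ(0,T) with Φ(0,T) = Φ(T,0)^{-1}.

```python
# Numerical observability Gramian of the bilinear system (92) for N = 3, p = 1.
import numpy as np
from scipy.integrate import solve_ivp

N, p, T = 3, 1, 1.0
m = N * p
C = np.zeros((p, m)); C[:, :p] = np.eye(p)

def A_of(t):
    # A_{N,p} plus a bounded lower-triangular control term (an illustrative choice, |u_ij| <= 1)
    U = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1):
            U[i, j] = 0.5 * np.sin((i + j + 1) * t)
    return np.eye(m, k=p) + U

def rhs(t, w):
    Phi = w[:m * m].reshape(m, m)
    dPhi = A_of(t) @ Phi                         # resolvent from 0: dPhi/dt = A(t) Phi
    dG = (Phi.T @ C.T @ C @ Phi).ravel()         # standard Gramian integrand
    return np.concatenate([dPhi.ravel(), dG])

w0 = np.concatenate([np.eye(m).ravel(), np.zeros(m * m)])
sol = solve_ivp(rhs, (0.0, T), w0, rtol=1e-9, atol=1e-12)
Phi_T0 = sol.y[:m * m, -1].reshape(m, m)
Gtilde = sol.y[m * m:, -1].reshape(m, m)
M = np.linalg.inv(Phi_T0)                        # Phi(0, T)
G_u = M.T @ Gtilde @ M
eig = np.linalg.eigvalsh((G_u + G_u.T) / 2)
print("eigenvalue bounds of G_u:", eig.min(), eig.max())
```

The smallest eigenvalue printed is a lower bound α for this particular control; Lemma 2.11 asserts that such a bound can be taken uniform over all controls bounded by B.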
Exercise 2.4. 1. Using Lemma 2.10, prove that the map u(·) ↦ G_u is continuous. 2. Using precompactness of the weak-* topology, and the observability of (92), prove Lemma 2.11.

All the other lemmas in this section deal with properties of solutions of Riccati matrix equations. Let us set

A(t) = A_{N,p} + Σ_{k,l=0, l≤k}^{N−1} Σ_{i,j=1}^{p} u_{kp+i, lp+j}(t) e_{kp+i, lp+j},

for a fixed control function u(·) bounded by B. Consider the Riccati-type equation

dS/dt = −A(t)ᵀ S − S A(t) + Cᵀ r^{-1} C − S Q S,   (93)
S(0) = S_0,   (94)

where Q and r are given Np × Np and p × p symmetric positive definite matrices. We are led to consider also the associated other Riccati-type equation:

dP/dt = P A(t)ᵀ + A(t) P − P Cᵀ r^{-1} C P + Q,   (95)
P(0) = P_0.

Of course, we look for symmetric solutions of both equations. The following standard facts will be useful in this section.

Facts:

1. If A, B are n × n matrices, then |trace(AB)| ≤ √(trace(AᵀA)) √(trace(BᵀB)).
2. If S is n × n, symmetric positive semidefinite, then trace(S²) ≤ (trace(S))².
3. If S is as in 2, then trace(S²) ≥ (1/n)(trace(S))².
4. If S is as in 2, then trace(SQS) ≥ (q/n)(trace(S))², with q = min_{‖x‖=1} xᵀQx (a consequence of 2, 3).
5. If S is as in 2 and |·| is any norm on n × n matrices, then

l |S| ≤ trace(S) ≤ m |S|,   (96)

for some l, m > 0. In particular, this is true for the norm ‖·‖ associated to the Euclidean norm ‖·‖ on R^n (for A an n × n matrix, ‖A‖ = (ρ(AᵀA))^{1/2}, ρ being the spectral radius).

Here, we set m = Np, we denote by S_m the set of symmetric m × m matrices and by S_m(+) the set of positive definite ones, and for S ∈ S_m we set |S| = (trace(S²))^{1/2}.
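The two Riccati equations (93) and (95) are related by inversion, a fact stated later in this section (Theorem 2.18). The following sketch checks this relation numerically on a small example; the constant matrix A, the sizes, and the initial condition are illustrative assumptions only.

```python
# Check numerically that the solutions of (93) and (95) satisfy S(t) = P(t)^{-1}
# when P_0 = S_0^{-1} (cf. Theorem 2.18 below).
import numpy as np
from scipy.integrate import solve_ivp

m, r = 3, 1.0
A = np.eye(m, k=1)                           # antishift, a valid A(t) for zero control
C = np.zeros((1, m)); C[0, 0] = 1.0
Q = np.eye(m)
S0 = np.diag([2.0, 1.0, 0.5])

def riccati_S(t, s):
    S = s.reshape(m, m)
    return (-A.T @ S - S @ A + C.T @ C / r - S @ Q @ S).ravel()   # equation (93)

def riccati_P(t, x):
    P = x.reshape(m, m)
    return (P @ A.T + A @ P - P @ C.T @ C @ P / r + Q).ravel()    # equation (95)

T = 2.0
S = solve_ivp(riccati_S, (0, T), S0.ravel(), rtol=1e-10, atol=1e-12).y[:, -1].reshape(m, m)
P = solve_ivp(riccati_P, (0, T), np.linalg.inv(S0).ravel(),
              rtol=1e-10, atol=1e-12).y[:, -1].reshape(m, m)
print("max |S(T) - P(T)^{-1}| =", np.abs(S - np.linalg.inv(P)).max())
```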
Lemma 2.12. For any λ ∈ R*, any solution S : [0, T[ → S_m of (93) (possibly T = +∞) satisfies, for all t ∈ [0, T[:

S(t) = e^{−λt} Φ_u(t, 0) S_0 Φ_u(t, 0)ᵀ + ∫_0^t e^{−λ(t−s)} Φ_u(t, s) Cᵀ r^{-1} C Φ_u(t, s)ᵀ ds
       + λ ∫_0^t e^{−λ(t−s)} Φ_u(t, s) ( S(s) − S(s) Q S(s)/λ ) Φ_u(t, s)ᵀ ds,   (97)

where Φ_u(t, s) is the solution of

dΦ_u(t, s)/dt = −A(t)ᵀ Φ_u(t, s),   Φ_u(t, t) = Id_m.

Proof. Let Ŝ : [0, T[ → S_m, Ŝ(t) = e^{λt} S(t). Then Ŝ(t) satisfies the equation

dŜ/dt (t) = −A(t)ᵀ Ŝ(t) − Ŝ(t) A(t) + e^{λt} Cᵀ r^{-1} C + λ Ŝ(t) − e^{−λt} Ŝ(t) Q Ŝ(t),

for t ∈ [0, T[. Applying the variation of constants formula to the equation

dΛ/dt (t) = −A(t)ᵀ Λ(t) − Λ(t) A(t) + F(t),

we get, going back to S, the relation (97).

We will also need the following auxiliary lemma:

Lemma 2.13. Let a, b, and c be three positive constants. Let x : [0, T[ → R_+ (possibly T = +∞) be an absolutely continuous function satisfying, for almost all t, 0 < t < T, the inequality ẋ ≤ −a x² + 2b x + c. Denote by ξ, −η the roots of −aX² + 2bX + c = 0, ξ, η > 0. Then x(t) ≤ max(x(0), ξ) for all t ∈ [0, T[. If x(0) > ξ, then, for all t > 0 in [0, T[:

x(t) ≤ ξ + (ξ + η) / (e^{a(ξ+η)t} − 1).
P1: FBH CB385-Book
CB385-Gauthier
June 21, 2001
11:11
Char Count= 0
2. The High-Gain Construction
111
˙ ) ≤ a(ξ − x(τ ))(x(τ ) + η) < 0, a contradiction. Hence, either E = τ ∈ F, x(τ ∅ or E = [0, t1 [, t1 > 0. In the first case, x(t) ≤ ξ for all t ∈ [0, T [. In the sec˙ ) ≤ a(ξ − x(τ ))(x(τ ) + η) < 0 ond case, x(t) ≤ ξ for t ≥ t1 . On [0, t1 [, x(τ a.e., and so, x(τ ) ≤ x(0) for τ ∈ [0, t1 [. ˙ ) −x(τ In the second case, on [0, t1 [, a(x(τ )−ξ )(x(τ )+η) ≥ 1 a.e., hence, log
x(τ ) + η x(0) + η ≥ a(ξ + η)τ + log , x(τ ) − ξ x(0) − ξ
for τ ∈ [0, t1 [. )+η a(ξ +η)τ . This gives x(τ ) ≤ ξ + η+ξ Therefore, x(τ x(τ )−ξ ≥ e ea(ξ +η)τ −1 for 0 ≤ τ ≤ t1 . However, because x(τ ) ≤ ξ for τ ≥ t1 , x(τ ) ≤ ξ + ea(ξη+ξ +η)τ −1 for τ ∈]0, T [.
Lemma 2.14. If S : [0, T [→ Sm (+) is a solution of Equation (93), then, for almost all t ∈ [0, T [ d(trace(S(t)) ≤ −a trace(S(t))2 + 2b trace(S(t)) + c, dt where a = m1 λmin , λmin is the smallest eigenvalue of Q, b = supt √ trace(A(t) A(t), c = trace(C r −1 C). Proof. This follows from (93) and facts 1 to 5 above. Lemma 2.15. Let S : [0, e(S)[→ Sm be a maximal semisolution of (93). If S(0) = S0 is positive definite, then e(S) = +∞ and S(t) is positive definite for all t ≥ 0. Proof. Assume that S is not always positive definite. Let θ = inf{t | S(t) ∈ / Sm (+)}. Then S(t) ∈ Sm (+) for all t ∈ [0, θ[. Using Lemmas 2.13 and 2.14, we see that trace(S(t)) ≤ max(trace(S0 ), ξ ). Then, |S(t)| = trace(S(t)2 ) ≤ trace(S(t)) ≤ max(trace(S0 ), ξ ). Choose λ > |Q| max(trace(S0 ), ξ ). Apply Lemma 2.12 with t = θ. S(θ ) = (I) + (II) + (III), (I) = e−λθ u (θ, 0)S0 u (θ, 0), θ (II) = e−λ(θ −s) u (θ, s)C r −1 C u (θ, s) ds, 0
(III) = λ
θ
e 0
−λ(θ −s)
S(s)Q S(s) u (θ, s) ds. u (θ, s) S(s) − λ
(I) is obviously positive definite. (II) is positive semidefinite at least. For (III): √ √
S(s)Q S(s) S(s)Q S(s) = S(s) I dm − S(s) − S(s), λ λ √ √ and | S(s)Q S(s)| ≤ |S(s)Q| ≤ |Q| max(trace(S0 ), ξ ). Hence, (S(s) − S(s)Q S(s) ) is positive definite. So is (III). Therefore, S(θ) is positive definite, λ a contradiction. Now, by Lemma 2.14 and Lemma 2.13, trace(S(t) ≤ max(trace(S0 ), ξ ) for all t ∈ [0, e(S)[. This implies that |S(t)| ≤ trace(S(t) ≤ max(trace(S0 ), ξ ). Hence, S(t) remains bounded as t → e(S), and e(S) = +∞. Let S : [0, +∞[→ Sm be a solution of (93), such that S(0) = S0 ∈ Sm (+). Then, as we know from Lemma 2.15 , S(t) ∈ Sm (+) for all t ∈ R+ , and by +η Lemmas 2.14 and 2.13, trace(S(t)) ≤ ξ + ea(ξξ+η)t −1 . Because S(t) ≤ trace(S(t)) I dm , we get: Lemma 2.16. Assume that S0 ∈ Sm (+). For all T2 > 0, for all t ≥ T2 , +η ). S(t) ≤ A(T2 ) I dm , with A(T2 ) = (ξ + ea(ξξ+η)T 2 −1 Lemma 2.17. For each time T3 > 0, there exists a constant γ3 > 0 such that, if S : [0, +∞[→ Sm is a solution of (93) such that S(0) = S0 ∈ Sm (+), then: S(t) ≥ γ3 I dm , for t ≥ T3 . Proof. Chose λ > A |Q|. Apply Lemma 2.12 but starting from T > 0. Then for t ≥ T, S(t) = (I) + (II) + (III), (I) = e−λ(t−T ) u (t, T )S(T ) u (t, T ), t e−λ(t−s) u (t, s)C r −1 C u (t, s) ds, (II) = T
t
(III) = λ
e T
−λ(t−s)
S(s)Q S(s) u (t, s) S(s) − u (t, s) ds. λ
(I ) is obviously positive definite. If T ≥ T2 , using a reasoning similar to the one in the proof of Lemma 2.15 and the bound in Lemma 2.16, we see S(s) ) is positive definite. Hence, (III) is that for all σ ∈ [T, t], (S(s) − S(s)Q λ positive definite.
From now on, we take T = T2 , the arbitrary constant of Lemma 2.16. For t ≥ T2 + Tˆ0 , Tˆ0 arbitrary, we have t e−λ(t−s) u (t, s)C r −1 C u (t, s) ds, (II) ≥ (II) ≥
t−Tˆ0
t
t−Tˆ0
e−λ(t−s) u¯ (Tˆ0 , s − t + Tˆ0 )C r −1 C u¯ (Tˆ0 , s − t + Tˆ0 ) ds,
¯ ) = u(t + t − Tˆ0 ). Then where u¯ is the control u(t Tˆ0 ˆ (II) ≥ e−λ(T0 −s) u¯ (Tˆ0 , s)C r −1 C u¯ (Tˆ0 , s) ds, 0
By Lemma 2.11, for all Tˆ0 > 0, there exists a constant γ0 > 0 such that Tˆ0 u¯ (Tˆ0 , s)C r −1 C u¯ (Tˆ0 , s) ds ≥ γ0 I dm . 0
Then, (II) ≥
ˆ e−λT0
Tˆ0 0
u¯ (Tˆ0 , s)C r −1 C u¯ (Tˆ0 , s) ds,
(II) ≥ e−λT0 γ0 I dm . ˆ
Now, S(t) ≥ (II) ≥ e−λT0 γ0 I dm , if t ≥ T2 + Tˆ0 . Hence, we get the lemma ˆ taking T3 = T2 + Tˆ0 , γ3 = e−λT0 γ0 . ˆ
The following theorem summarizes all the results in the previous lemmas.
Theorem 2.18. The solution S(t) of the Riccati equation (93) is well defined and positive definite for all t ≥ 0. Moreover, S(t) = P(t)−1 , where P(t) is the solution of (95) for P0 = (S0 )−1 , and for all T > 0, there are constants 0 < γ < δ, depending on T ,B, Q, r only (not on S0 !) such that, for t ≥ T : γ I d N p ≤ S(t) ≤ δ I d N p . One has to be very careful: all original versions of statements and proofs of this theorem are wrong. In particular, the following classical inequality is false:
1 α2 + β2 , I d N p ≤ P(t) ≤ I d N p 1 + α 2 β1 α1 where α1 , β1 are the bounds on the Gramm observability matrix (α, β in Lemma 2.11), and α2 , β2 are the bounds on the Gramm controllability matrix. For t small, t ≤ T, we will need a more straightforward estimation:
Lemma 2.19. There is a function ϕ(T ) = ϕ1 (T ) + P(0)ϕ2 (T ), depending on P(0), Q, s = Sup0≤t≤T A(t) only, such that P(t) ≤ ϕ(T ), 0 ≤ t ≤ T. Proof. For 0 ≤ t ≤ T, Equation (95) gives, after elementary manipulations, t (2sP(τ ) + Q) dτ, P(t) ≤ P(0) + 0
which, using Gronwall’s inequality, gives P(t) ≤ (P(0) + QT )e2sT . 2.4.3. Proof of Theorems 2.6 and 2.8, and Corollaries 2.7 and 2.9 Recall that = BlockDiag(I d p , θ1 I d p , . . . , ( θ1 ) N −1 I d p ). We set x˜ = x, z˜ = z, ε = z − x, ε˜ = ε, S˜ = θ−1 S−1 , −1 ˜ = b(−1 z), b˜ ∗ (z) = b∗ (−1 z)−1 . P˜ = S˜ = θ1 P, b(z)
(98)
Exercise 2.5. Show that: ˜ is Lipschitz, with the same Lipschitz constant as the one of b, 1. b(z) 2. b˜ ∗ is bounded, with the same bound as the one of b∗ (θ ≥ 1). Trivial computations give the following: d ˜ r −1 C)˜ε + 1 (b(˜ ˜ z ) − b( ˜ x˜ )) , (˜ε ) = θ (A N p − PC dt θ
1 ˜ ∗ ˜ ˜ 1 ˜∗ d ˜ −1 ( S) = θ − A N p + b S − S A N p + b + C r C − S˜ Q S˜ , dt θ θ
1 ˜ ∗ 1 ˜∗ ˜ d ˜ −1 ˜ ˜ ˜ ( P) = θ P A N p + b + A N p + b P − PC r C P + Q . dt θ θ (99) Therefore, we set
t t t ¯ ¯ = ε¯ (t), S˜ = S(t), P˜ = P(t), ε˜ θ θ θ and we get d ¯ r −1 C)¯ε + 1 (b(¯ ˜ z ) − b( ˜ x¯ ))], (¯ε ) = (A N p − PC dt θ
1 ˜ ∗ 1 ˜∗ d ¯ ¯ ¯ ¯ ( S) = − A N p + b (¯z ) S − S A N p + b (¯z ) + C r −1 C − S¯ Q S, dt θ θ
1 1 d ¯ ¯ r −1 C P¯ + Q. ( P) = P¯ A N p + b˜ ∗ (¯z ) + A N p + b˜ ∗ (¯z ) P¯ − PC dt θ θ (100)
We fix T > 0, and we consider successively the two situations t ≤ T and t ≥ T. Proof of Theorems 2.6, 2.8. The following is a direct consequence of Lemma 2.11 and Theorem 2.18: Because the control u is bounded by B and θ > 1, for t ≥ T, we can find constants, 0 < α(T ) < β(T ), independent of S¯ 0 , such that ¯ ≤ β(T )I d, α(T )I d ≤ S(t) 1 ¯ )≤ 1 . I d ≤ P(T β(T ) α(T )
(101)
It will be important that α(T ), β(T ) do not depend on θ. Also, by Theorem 2.18, if we start with an initial condition S(0) (resp. P(0)) in the set of positive definite symmetric matrices, then, for all t ≥ 0, ¯ (resp. P(t)) ¯ S(t) is also positive definite. Starting from (100), the following computations are straightforward: d ¯ ε (t)) = 2¯ε(t) S(t) ¯ d (¯ε (t)) + ε¯ (t) d ( S(t))¯ ¯ ε (t) (¯ε (t) S(t)¯ dt dt dt
1 ˜ ¯ ˜ = 2¯ε (t) S(t) A N p ε¯ + (b(¯z ) − b(x¯ )) − 2(C ε¯ ) r −1 C ε¯ θ ¯ ¯ 1 b˜ ∗ (¯z )¯ε ε (t) S(t) − 2¯ε (t) S(t)A N p ε¯ (t) − 2¯ θ −1 ¯ ¯ + (C ε¯ ) r C ε¯ − ε¯ (t) S(t)Q S(t)¯ε (t). If we denote by Q m > 0, the smallest eigenvalue of Q, we have, for T ≤ t: ¯ ¯ ¯ ≤ −Q m α(T ) S(t), − S(t)Q S(t) which implies
1 ˜ d ¯ ε (t)) ≤ −Q m α(T )¯ε S(t)¯ ¯ ε + 2¯ε S(t) ¯ ˜ x¯ ) − b˜ ∗ (¯z )¯ε ) (¯ε (t) S(t)¯ (b(¯z ) − b( dt θ L ¯ ε + S(t) ¯ ¯ε2 , ≤ −Q m α(T )¯ε S(t)¯ θ because of Exercise 2.5. Hence,
L d ¯ ¯ ¯ (¯ε (t) S(t)¯ε (t)) ≤ −¯ε S(t)¯ε Q m α(T ) − S(t) dt θα(T )
¯ ε Q m α(T ) − β(T ) L . ≤ −¯ε S(t)¯ α(T ) θ
This implies that, for all t, T ≤ t, β(T ) L
¯ ε (t) ≤ ε¯ (T ) S(T ¯ )¯ε (T )e−(Q m α(T )− α(T ) θ )(t−T ) . ε¯ (t) S(t)¯ β(T ) L
¯ ε (t) ≤ β(T )¯ε(T )2 e−(Q m α(T )− α(T ) θ )(t−T ) , and finally, Therefore, ε¯ (t) S(t)¯ t≥T : ) L β(T ) −(Q m α(T )− β(T )(t−T ) α(T ) θ e ¯ε (T )2 . ¯ε(t)2 ≤ α(T ) It follows that, for τ ≥
T θ
(102)
,
2 ) β(T ) −(Q m α(T )θ − β(T L)(τ − Tθ ) ε˜ T , α(T ) e α(T ) θ
˜ε(τ )2 ≤ and 2
ε(τ ) ≤ θ
2 T ε θ ,
β(T ) 2(N −1) β(T ) −(Q m α(T )θ − α(T ) L)(τ − Tθ ) α(T )
e
which proves Theorems 2.6 and 2.8. Proof of Corollaries 2.7 and 2.9. Starting from (102), and using the result of Lemma 2.19, it is only a matter of trivial computations to prove the corollaries. Let us give the details: We consider the case t ≤ T, and we assume that S(0) = S0 lies in the compact set: cI d ≤ S0 ≤ d I d. As a cond sequence, P(0) ≤ 1c I d. By Equation (100), we have, for t ≤ T : dt (¯ε) = 1 ˜ −1 ¯ ˜ (A N p − PC r C)¯ε + θ (b(¯z ) − b(x¯ )), hence,
t ¯ + ν dτ, ¯ε(τ )2 2A N p + 2C2 r −1 P ¯ε(t)2 ≤ ¯ε (0)2 + θ 0 ¯ and by Lemma 2.19, we know that P(t) ≤ ϕ1 (T ) + P¯ 0 ϕ2 (T ). Then, ¯ ≤ ϕ1 (T ) + P0 ϕ2 (T ) ≤ ϕ1 (T ) + because P¯ 0 = θ1 P0 , θ > 1, P(t) 1 c ϕ2 (T ) = ϕ(T ). t 2 2 2 −1 ¯ε (t) ≤ ¯ε(0) + (2s + 2C r ϕ(T ) + ν) ¯ε (τ )2 dτ. 0
Gronwall’s inequality implies that ¯ε(t)2 ≤ (T )¯ε(0)2 , with (T ) = e(2s+2C r 2
−1
ϕ(T )+ν)T . In particular, ¯ ε (T )2 ≤ (T )¯ε (0)2 .
Plugging this in (102), we get ¯ε (t)2 ≤
) L β(T ) −(Q m α(T )− β(T )(t−T ) α(T ) θ e (T )¯ε (0)2 , for t ≥ T. α(T )
(103)
Going back to t ≤ T, we have ¯ε (t)2 ≤ (T )¯ε(0)2 ≤ (T ) ≤
β(T ) ¯ε(0)2 α(T )
) L β(T ) (Q m α(T )− β(T )(T −t) α(T ) θ (T )¯ε (0)2 , e α(T )
)L β(T )L as soon as (Q m α(T ) − β(T α(T ) θ ) > 0, i.e., θ > Q m α(T )2 = θ0 (T ). Hence, in all cases (either t ≤ T or T ≤ t), if θ ≥ θ0 (T ), we have
¯ε(t)2 ≤
) L β(T ) −(Q m α(T )− β(T )(t−T ) α(T ) θ e (T )¯ε(0)2 . α(T )
This can be rewritten as ) L ) L β(T ) (Q m α(T )− β(T )T −(Q m α(T )− β(T )t α(T ) θ α(T ) θ e e (T )¯ε (0)2 , ¯ε(t)2 ≤ α(T ) and β(T ) L
¯ε(t)2 ≤ H (T )e−(Q m α(T )− α(T ) θ )t ¯ε(0)2 , with H (T ) =
β(T ) Q m α(T )T . α(T ) (T )e
Therefore, setting t = θτ : β(T )
˜ε (τ )2 ≤ H (T )e−(Q m α(T )θ − α(T ) L)τ ˜ε (0)2 , τ ≥ 0. Because ε = −1 ε˜ , and θ > 1, θ 2(N −1) ˜ε(τ )2 , we get, for all t ≥ 0:
ε(τ )2 ≤ (−1 )2 ˜ε (τ )2 ≤ β(T )
ε(t)2 ≤ θ 2(N −1) H (T )e−(Q m α(T )θ − α(T ) L)t ε(0)2 . Corollaries 2.7 and 2.9 are proven. Remark 2.1. Note that the function H (T ) depends on c, S(0) ≥ cI d. So that, contrarily to the case of the high-gain Luenberger observer, the function k(.) with polynomial growth in the definitions (1.2, 1.4) of exponential observers actually depends on the compact K¯ in the definitions. 2.4.4. The Continuous-Discrete Version of the High-Gain Extended Kalman Filter This is a more realistic version of the previous high gain observer: Observations are sampled. As the continuous high-gain extended Kalman filter, it
applies to systems that are in the normal form (90), in restriction to compact sets. In particular, it applies to all systems that have a phase-variable representation for sufficiently smooth controls, and to control affine systems that have a uniform canonical flag, for general $L^\infty$ controls.
For the statement of our result, let us make exactly the same assumptions as in Section 2.4.1. For simplicity in exposition, let us consider the single output case only. Let us choose a time step $\delta t$ that is small enough. The equations of the continuous-discrete version $O_{c.d.}$ of our extended Kalman filter are, for $t \in [(k-1)\delta t, k\delta t[$:

(Prediction step)
$$
\text{(i)}\quad \frac{dz}{dt} = Az + b(z,u),
$$
$$
\text{(ii)}\quad \frac{dS}{dt} = -(A + b^*(z,u))'S - S(A + b^*(z,u)) - SQ_\theta S,
\tag{104}
$$
and at time $k\delta t$: (Innovation step)
$$
\text{(i)}\quad z_k(+) = z_k(-) - S_k(+)^{-1}C'r^{-1}\delta t\,\big(Cz_k(-) - y_k\big),
$$
$$
\text{(ii)}\quad S_k(+) = S_k(-) + C'r^{-1}C\,\delta t.
\tag{105}
$$
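Purely as an illustration, the prediction-innovation cycle (104)-(105) can be coded in a few lines. The sketch below is ours, not part of the book's development: the names are arbitrary, the functions b and b* and the matrices Q_theta, r^{-1}, C must be supplied by the user in the normal-form coordinates, and the explicit Euler sub-stepping of the prediction step is only for readability.

```python
import numpy as np

def hg_ekf_cont_disc_step(z, S, u, y_k, dt, A, C, b, b_star, Q_theta, r_inv, n_sub=20):
    """One cycle of the continuous-discrete high-gain EKF, following (104)-(105).

    Prediction: integrate dz/dt = A z + b(z, u) and the Riccati equation
    dS/dt = -(A + b*(z, u))' S - S (A + b*(z, u)) - S Q_theta S over one
    sampling interval (crude explicit Euler sub-stepping, for illustration only).
    Innovation: correct S and then z with the sampled output y_k.
    """
    h = dt / n_sub
    for _ in range(n_sub):
        Ab = A + b_star(z, u)
        z = z + h * (A @ z + b(z, u))
        S = S + h * (-Ab.T @ S - S @ Ab - S @ Q_theta @ S)
    # innovation step (105) at the sampling instant
    S = S + C.T @ r_inv @ C * dt
    z = z - np.linalg.solve(S, C.T @ r_inv @ (C @ z - y_k)) * dt
    return z, S
```

Note that, as in (105), S is updated before z, because the correction of z uses the updated matrix S_k(+)^{-1}.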
The assumptions being the same as for Theorem 2.6 and Corollary 2.7, we have Theorem 2.20. For all T > 0, there are two positive constants, θ0 and µ, such that, for all δt small enough, θ > θ0 and θ δt < µ, one has, for all t > T:
$$
\|z(t) - x(t)\| \le \bar k\,\theta^{\,n-1}\,e^{-(\lambda\theta-\omega)\left(t-\frac{T}{\theta}\right)}\,
\left\|z\!\left(\tfrac{T}{\theta}\right) - x\!\left(\tfrac{T}{\theta}\right)\right\|,
$$
for some positive constants $\bar k$, $\lambda$, $\omega$.
This is the continuous-discrete analog of Formula (89). Hence, it is possible to state the continuous-discrete analogs of the other corollaries in the previous section. We leave this to the reader.

Lemma 2.21. If $Q$ is symmetric positive definite, and if $\lambda$ is small, then
$$
(Q + \lambda\,QC'CQ)^{-1} = Q^{-1} - C'(\lambda^{-1} + CQC')^{-1}C.
$$

Exercise 2.6. Prove Lemma 2.21.
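As a sanity check (this is our own verification sketch, not necessarily the intended solution of Exercise 2.6), the identity of Lemma 2.21 follows from a single multiplication, writing $M = \lambda^{-1}Id + CQC'$:
$$
(Q + \lambda QC'CQ)\big(Q^{-1} - C'M^{-1}C\big)
= Id - QC'M^{-1}C + \lambda QC'C - \lambda QC'CQC'M^{-1}C
$$
$$
= Id + QC'\big[\lambda\,Id - (Id + \lambda CQC')M^{-1}\big]C
= Id + QC'\big[\lambda\,Id - \lambda\,MM^{-1}\big]C = Id.
$$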
Proof of Theorem 2.20. We only sketch the proof. All details are similar to those of the proof of Theorem 2.8 and the notations are the same. For details of these computations in the standard linear case, see [29]. For $t \in [(k-1)\delta t, k\delta t[$,
$$
\text{(i)}\quad \frac{d}{dt}\big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)
\le \theta\left(-\tilde\varepsilon'\tilde S Q\tilde S\tilde\varepsilon
+ \frac{2}{\theta}\,\tilde\varepsilon'\tilde S\big(\tilde b(\tilde z)-\tilde b(\tilde x)-\tilde b^*(\tilde z)\tilde\varepsilon\big)\right),
\tag{106}
$$
$$
\text{(ii)}\quad \frac{d\tilde S}{dt}
= \theta\left(-\tilde S Q\tilde S - \Big(A + \frac{1}{\theta}\tilde b^*\Big)'\tilde S - \tilde S\Big(A + \frac{1}{\theta}\tilde b^*\Big)\right);
$$
at $k\delta t$, we use Lemma 2.21 to get
$$
\text{(i)}\quad \big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)_k(+)
= \big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)_k(-)
- \tilde\varepsilon_k(-)'C'\left(\frac{r}{\theta\,\delta t} + C\tilde S_k(-)^{-1}C'\right)^{-1}C\,\tilde\varepsilon_k(-),
$$
$$
\text{(ii)}\quad \tilde S_k(+) = \tilde S_k(-) + C'r^{-1}C\,\theta\,\delta t,
\tag{107}
$$
provided that $\theta\,\delta t$ is small, $\theta\,\delta t < \mu$. By (107), (i), $\big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)_k(+) \le \big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)_k(-)$. We set
$$
B(\tilde S) = \|\tilde S\|\,\|\tilde S^{-1}\|,\qquad
B_k = \sup_{t\in[(k-1)\delta t,k\delta t[} B(\tilde S(t)),\qquad
\lambda_k = \sup_{t\in[(k-1)\delta t,k\delta t[} \frac{\lambda_{\min}(\tilde S Q\tilde S)}{\lambda_{\max}(\tilde S)}(t),
$$
where $\lambda_{\min}$ and $\lambda_{\max}$ denote the smallest and largest eigenvalues. Simple computations give, for $k \ge 1$,
$$
\big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)_k(+) \le e^{-(\lambda_k\theta-\omega B_k)\delta t}\,\big(\tilde\varepsilon'\tilde S\tilde\varepsilon\big)_{k-1}(+).
\tag{108}
$$
Therefore, to finish, we have to bound $\tilde S_k$ from above and from below. Setting again $\bar S(t) = \tilde S\big(\tfrac{t}{\theta}\big)$, we get, for $t \in [\theta(k-1)\delta t, \theta k\delta t[$:
$$
\frac{d\bar S}{dt} = -\bar S Q\bar S - \Big(A + \frac{1}{\theta}\tilde b^*\Big)'\bar S - \bar S\Big(A + \frac{1}{\theta}\tilde b^*\Big).
\tag{109}
$$
At $k\delta t$:
$$
\bar S_k(+) = \bar S_k(-) + C'r^{-1}C\,\theta\,\delta t.
\tag{110}
$$
A reasoning similar to that in the previous section shows that, for all $T > 0$, if $\delta t$ is small enough, there is $\theta_0 > 0$ such that, for $\theta_0 \le \theta \le \frac{\mu}{\delta t}$ and
for $k\,\delta t \ge T$, $\alpha\,Id \le \bar S_k \le \beta\,Id$, for some constants $0 < \alpha < \beta$. This, with (108), gives the result.
3. Appendix
3.1. Continuity of Input-State Mappings
Let us consider a control affine system on a manifold $M$:
$$
\dot x = X(x) + \sum_{i=1}^{d_u} Y_i(x)u_i,\qquad x(0) = x_0.
$$
Here, we consider $T > 0$ fixed, and the space of control functions is $L^\infty([0,T];R^{d_u})$ with the weak-* topology. The space of state trajectories of the system is the space $C^0([0,T];M)$ of continuous functions $P : [0,T] \to M$, with the topology of uniform convergence. We want to prove the following result:

Theorem 3.1. The input-state mapping $P : u(.) \to x(.)$ has open domain and is continuous for these topologies, on any bounded set.

As usual in this type of reasoning, we can assume that (1) $M = R^n$, and (2) $X$ and the $Y_i$ are compactly supported vector fields. Also, to simplify the computations, let us prove the result for a system with a single control only. To prove the continuity on a bounded set of the mapping $P$, it is sufficient to prove that it is sequentially continuous for the weak-* topology, because the restriction of this topology to any bounded set is metrizable. We fix $u$, and we consider a sequence $(u_n)$ converging *-weakly to $u$. The corresponding trajectories are denoted by $x(.)$ and $x_n(.)$ respectively. The following lemma is an obvious consequence of Gronwall's inequality and of the fact that $X$ and $Y$ are Lipschitz:

Lemma 3.2. There is a $k > 0$ such that, $\forall n$, $\forall t \in [0,T]$,
$$
\|x_n(t) - x(t)\| \le k\,\sup_{\theta\in[0,T]}\left\|\int_0^\theta (u_n(s)-u(s))\,Y(x(s))\,ds\right\|.
$$
Proof.
$$
\|x_n(t) - x(t)\| \le \left\|\int_0^t \big[X(x_n) + Y(x_n)u_n(s) - (X(x) + Y(x)u(s))\big]\,ds\right\|
\le A + B,
$$
with
$$
A = \left\|\int_0^t \big[X(x_n) + Y(x_n)u_n(s) - (X(x) + Y(x)u_n(s))\big]\,ds\right\|,\qquad
B = \left\|\int_0^t \big[Y(x)u_n(s) - Y(x)u(s)\big]\,ds\right\|.
$$
Then
$$
B \le \sup_{\theta\in[0,T]}\left\|\int_0^\theta (u_n(s)-u(s))\,Y(x(s))\,ds\right\|,\qquad
A \le \int_0^t \|X(x_n) - X(x)\|\,ds + \int_0^t \|Y(x_n) - Y(x)\|\,|u_n(s)|\,ds.
$$
Because $u_n$ converges *-weakly, the sequence $(\|u_n\|_\infty;\ n \in N)$ is bounded. $X$ and $Y$ are Lipschitz, hence $A \le m\int_0^t \|x_n(s) - x(s)\|\,ds$. Therefore,
$$
\|x_n(t) - x(t)\| \le B + m\int_0^t \|x_n(s) - x(s)\|\,ds.
$$
Gronwall's inequality gives the result.

Proof of Theorem 3.1. For all $\theta \in [0,T]$, for all $\delta > 0$, let us consider a subdivision $\{t_j\}$ of $[0,T]$ such that $\theta \in [t_i, t_{i+1}]$ and $t_{j+1} - t_j < \delta$ for all $j$. We have
$$
\left\|\int_0^\theta (u_n(s)-u(s))Y(x(s))\,ds\right\|
\le \left\|\int_0^{t_i} (u_n(s)-u(s))Y(x(s))\,ds\right\|
+ \left\|\int_{t_i}^\theta (u_n(s)-u(s))Y(x(s))\,ds\right\|.
$$
With $\varepsilon > 0$ being given, we take $\delta \le \frac{\varepsilon}{2\gamma m}$, where $\gamma = \sup_{x\in R^n}\|Y(x)\|$ and $m = \sup_n\|u_n - u\|_\infty$. We get, for all $\theta \in [0,T]$:
$$
\left\|\int_0^\theta (u_n(s)-u(s))Y(x(s))\,ds\right\|
\le \left\|\int_0^{t_i} (u_n(s)-u(s))Y(x(s))\,ds\right\| + \frac{\varepsilon}{2}.
$$
By the *-weak convergence of $(u_n)$, for $n > N$, we have
$$
\left\|\int_0^{t_i} (u_n(s)-u(s))Y(x(s))\,ds\right\| \le \frac{\varepsilon}{2}.
$$
By the lemma, $\forall n > N$, $\forall t \in [0,T]$, $\|x_n(t) - x(t)\| \le k\varepsilon$.
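The content of Theorem 3.1 can be visualized numerically. The toy example below is ours and is not part of the proof: the controls u_n(t) = sin(nt) converge *-weakly to u = 0 on [0, T] without converging pointwise, yet the corresponding trajectories converge uniformly to the trajectory driven by u = 0.

```python
import numpy as np

# Toy system (our own choice):  dx/dt = X(x) + Y(x) u(t),
# with X(x) = -x and Y(x) = 1 / (1 + x**2).

def trajectory(u_func, x0=1.0, T=5.0, steps=20000):
    dt = T / steps
    x = x0
    xs = [x]
    for i in range(steps):
        t = i * dt
        x = x + dt * (-x + u_func(t) / (1.0 + x * x))
        xs.append(x)
    return np.array(xs)

x_limit = trajectory(lambda t: 0.0)
for n in (1, 10, 100, 1000):
    x_n = trajectory(lambda t, n=n: np.sin(n * t))
    # sup-norm deviation from the limit trajectory; it shrinks as n grows
    print(n, np.max(np.abs(x_n - x_limit)))
```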
Part II Dynamic Output Stabilization and Applications
7 Dynamic Output Stabilization
Using the results of the previous chapters, we can derive a constructive method to solve the following problem. We are given a system
$$
\dot x = f(x,u),\qquad y = h(x,u),
$$
where $u \in U = I^{d_u}$, and where $I$ is a closed interval of $R$ (possibly unbounded). This system is assumed to have an equilibrium point $x_0$, which is asymptotically stabilizable by smooth, $U$-valued state feedback, that is, there is a smooth function $\alpha(x)$ such that $x_0$ is an asymptotically stable equilibrium of the vector field $\dot x = f(x, \alpha(x))$.
The problem is the following: Is it possible to stabilize asymptotically by using not the state information (as does the state feedback $\alpha(x)$), but only the output information? As usual in this type of problem, we avoid differentiating the outputs because, from the physical point of view, we would have to differentiate the noise, or the measurement errors, which is not reasonable.
We will be interested only in the behavior of the system within the basin of attraction of the equilibrium $x_0$; hence, we can restrict $X$ to this basin of attraction and assume that $X = R^n$ (see [51]). The basic idea, coming from the linear theory, is to construct a state observer, and to control the system using the feedback $\alpha$ evaluated on the estimate $\hat x$ of the state.
1. We will show that this is possible for the systems of Chapter 3, for which a uniform canonical flag exists, and for which we can construct an exponential state observer, by the results of the previous chapter.
2. In the general case where we have a phase-variable representation only, i.e., for the systems of Chapters 4 and 5, we will show that this is possible by using exponential output observers, but the construction is a bit more sophisticated.
In fact, we will not be able to cover (as in the linear case) the whole original basin of attraction: We will obtain asymptotic stability within arbitrarily large compact sets contained in this basin of attraction, only. At the end of the chapter, we will say a few words about a situation in which the results can be improved from a practical point of view: Our theorems depend very essentially on the high-gain construction. Moreover, the output stabilization is “twice high gain” in a sense that will be clear later on. First, we need high gain for the observer to estimate exponentially. Second, this exponential rate of the observer has to be very large. It is important to understand that in some situations that are very common in practice, only exponential convergence of the estimation of the state is required but the exponential rate can be small. 1. The Case of a Uniform Canonical Flag We will make the assumptions of Section 2.2 of Chapter 6: X = R n , and our system is globally in the normal form (20). We know already that we can modify the normal form outside any open ball B 0 for the Lipschitz conditions (A1) and (A2) of Chapter 6 to be satisfied globally over X. In the proof of the main theorem (1.1) below, the semitrajectories of under consideration will not leave a compact subset, denoted below by Dm+1 . This justifies making these assumptions. We will consider first the Luenberger type observer. The same task can be performed by the Kalman type observer, but it applies to the control affine case only, and the proof is more complicated, as we shall see. 1.1. Semi-Global Asymptotic Stabilizability The most convenient notion to be handled in this chapter is not global asymptotic stabilizability, but the weaker semi-global asymptotic stabilizability, that we define immediately. Notation. In order to shorten certain statements, let us say that a vector field on X is “asymptotically stable at x0 ∈ X within a compact set ⊂ X ” if x0 is an asymptotically stable equilibrium point and the basin of attraction of x0 contains . Definition 1.1. We say that the (unobserved) system on X, x˙ = f (x, u) is semiglobally asymptotically stabilizable at (x0 , u 0 ) if, for each compact ⊂ X, there is a smooth feedback α : X → Int(U ), α (x0 ) = u 0 , such
that the vector field on $X$,
$$
\dot x = f(x, \alpha_\Omega(x)),
\tag{111}
$$
is asymptotically stable at $x_0$ within $\Omega$.
We make the assumption that $X = R^n$, but it may not be clear that the state space $X$ of a semiglobally asymptotically stabilizable system should be $R^n$. However, in fact, this is true: By definition, any compact subset $\Omega \subset X$ is contained in the basin of attraction of $x_0$ for a certain smooth vector field. By the results of [51], such a basin of attraction is diffeomorphic to $R^n$. Then, because we assumed $X$ paracompact, the Brown–Stallings theorem (see [41]) gives the result.
Comment. It does not follow immediately from [51] that the basin of attraction of the origin, for an asymptotically stable vector field, is diffeomorphic to $R^n$, because it is assumed in that paper that the state space is $R^n$. However, one can modify slightly the arguments to make them work for a general (paracompact) manifold.

1.2. Stabilization with the Luenberger-Type Observer
We assume (after reparametrization) that $x = 0$, $u = 0$ is the equilibrium point under consideration, and the smooth stabilizing feedbacks are $\alpha_\Omega(x)$, i.e., $\alpha_\Omega(0) = 0$, and $x = 0$ is an asymptotically stable (within $\Omega$) equilibrium point of the vector field
$$
\dot x = f(x, \alpha_\Omega(x)).
\tag{112}
$$
Now $\Omega$ is fixed, together with the corresponding $\alpha_\Omega$. The basin of attraction of zero is $B_\Omega$, $\Omega \subset B_\Omega$. We have to study the following system on $X \times X$:
$$
\dot x = f(x, \tilde\alpha(z)),\qquad
\dot z = f(z, \tilde\alpha(z)) - K_\theta\big(h(z, \tilde\alpha(z)) - y\big),\qquad
y = h(x, \tilde\alpha(z)),
\tag{113}
$$
where $K_\theta$ is as in Theorem 2.2 of Chapter 6. The purpose is to show that, if $\theta$ is large enough, $(0,0)$ is an asymptotically stable equilibrium of (113), and the basin of attraction of $(0,0)$ can be made arbitrarily large in $B_\Omega \times X$ by increasing $\theta$. Here, $\tilde\alpha$ is a certain other smooth feedback, depending on the compact $\Omega$, that we construct now. Using the inverse Lyapunov theorems (Appendix, Section 3.3), we can find a smooth, proper strict Lyapunov function $V : B_\Omega \to R_+$ for the vector field
(112), $V(0) = 0$. This means that, along the trajectories of (112), $\frac{dV(x)}{dt} < 0$ (except for $x = 0$). The function $V$ reaches its maximum $m$ over $\Omega$. Let us consider $D_{m+1} = \{x\,|\,V(x) \le m+1\}$, and let us replace $\alpha_\Omega$ by $\tilde\alpha$ such that:
1. $\tilde\alpha = \alpha_\Omega$ on $D_{m+1}$;
2. $\tilde\alpha$ is smooth, compactly supported, with values in $\mathrm{Int}(U)$ (we have already assumed above that, by translation in the $U$ space, $0 \in \mathrm{Int}(U)$).
Let us also set $B = \sup_{x\in R^n}\|\tilde\alpha(x)\|$. We will show the following.

Theorem 1.1. Given arbitrary compact sets $\Omega, \Omega' \subset X$, if $\theta$ is large enough, then (113) is asymptotically stable at the origin within $\Omega \times \Omega'$.

Comment. This theorem means that, provided that we know a compact set where the system starts, and provided that we modify the stabilizing feedback at infinity, we can just plug the estimate of the state given by our "state observer" into the feedback, and the resulting system is asymptotically stable at the origin. Moreover, the semitrajectories starting from $\Omega \times \Omega'$ tend to the origin. That is, we can asymptotically stabilize at the origin via the observer, using observations only. This can be done within arbitrarily large compact sets. As we shall see in the proof, it is very important that the observer is exponential, with arbitrary exponential decay: We need the exponential decay for local asymptotic stabilization, but we also need an arbitrarily large exponential rate in order to estimate the state very quickly.

Proof of Theorem 1.1. The proof is divided into three steps. In this type of consideration, it is always convenient to rewrite the equations of the global dynamics system-observer (113) in terms of the estimate $z$ and the estimation error $\varepsilon = z - x$:
$$
\dot z = f(z, \tilde\alpha(z)) - K_\theta\big(h(z, \tilde\alpha(z)) - h(z-\varepsilon, \tilde\alpha(z))\big),
$$
$$
\dot\varepsilon = f(z, \tilde\alpha(z)) - f(z-\varepsilon, \tilde\alpha(z)) - K_\theta\big(h(z, \tilde\alpha(z)) - h(z-\varepsilon, \tilde\alpha(z))\big).
\tag{114}
$$
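Before going through the three steps, it may help to see the closed loop (113)-(114) running numerically. The sketch below is ours: f, h, the feedback, and the gain K_theta are user-supplied, and the two-dimensional toy system and the particular gain coefficients are chosen only for illustration (they are not the constructions of Chapter 6).

```python
import numpy as np

def simulate_output_feedback(f, h, alpha_tilde, K_theta, x0, z0, T=20.0, dt=1e-3):
    """Euler simulation of the coupled loop (113):
       dx/dt = f(x, alpha_tilde(z)),
       dz/dt = f(z, alpha_tilde(z)) - K_theta @ (h(z, alpha_tilde(z)) - y),  y = h(x, alpha_tilde(z))."""
    x, z = np.array(x0, float), np.array(z0, float)
    for _ in range(int(T / dt)):
        u = alpha_tilde(z)
        y = h(x, u)
        x = x + dt * f(x, u)
        z = z + dt * (f(z, u) - K_theta @ (h(z, u) - y))
    return x, z

# Toy double integrator in observability normal form (our choice):
#   x1' = x2, x2' = u, y = x1, with stabilizing feedback u = -x1 - 2*x2.
f = lambda x, u: np.array([x[1], u])
h = lambda x, u: np.array([x[0]])
alpha = lambda z: -z[0] - 2.0 * z[1]
theta = 10.0
K = np.array([[2.0 * theta], [theta ** 2]])   # a typical high-gain Luenberger-type gain (our coefficients)
x_f, z_f = simulate_output_feedback(f, h, alpha, K, x0=[1.0, -1.0], z0=[0.0, 0.0])
print(x_f, z_f)   # both end up close to the origin
```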
First Step: (Asymptotic Stability of the Origin) The submanifold {ε = 0} is an invariant manifold for the system (114). Moreover, the dynamics on {ε = 0} is exactly (at least on a neighborhood of zero), the dynamics defined by the vector field (112). Therefore, the dynamics on this invariant manifold is asymptotically stable. On the other hand, let us
consider the linearized dynamics at $(z,\varepsilon) = 0$, setting $C = \frac{\partial h}{\partial x}(0,0)$:
$$
\text{(i)}\quad \dot z = \frac{\partial}{\partial z}\big(f(z,\tilde\alpha(z))\big)\Big|_{z=0}\,z - K_\theta C\varepsilon,
$$
$$
\text{(ii)}\quad \dot\varepsilon = \frac{\partial f}{\partial z}(0,0)\,\varepsilon - K_\theta C\varepsilon.
\tag{115}
$$
The linear vector field
$$
\dot z = \frac{\partial}{\partial z}\big(f(z,\tilde\alpha(z))\big)\Big|_{z=0}\,z
$$
has a number n s (possibly 0) of eigenvalues with strictly negative real part. Exercise 1.1. Show that the linear vector field on R n defined by Equation (115, ii) is asymptotically stable provided that θ is large enough. Hint: Use the same type of time rescaling as in the proof of Theorem 2.2, Chapter 6. Because of the last exercise, the number of eigenvalues with strictly negative real part of (115) is exactly n + n s . If n s = n, then the asymptotic stability of (114) at the origin follows from the asymptotic stability of the linearized equation (115). If n s < n, then the remaining n − n s eigenvalues of the linearized system (115) have zero real part, because (112) is asymptotically stable. In fact, there is a n − n s dimensional asymptotically stable center manifold C for the vector field (112) at the origin. By the fact that {ε = 0} is an invariant manifold for (114), it follows that C × {ε = 0} is also a (asymptotically stable) center manifold for (114). Hence, by the standard theorems on center manifolds, (114) is asymptotically stable at the origin, provided that θ is large enough. Second Step: Semitrajectories (x(t), z(t)), starting from × are bounded ( is an arbitrary compact subset of R n ) The dynamics of x is given by: x˙ = f (x, α(z)). ˜
(116)
˙ is bounded (Dm+1 is compact, f (x, u) is smooth, α(z) ˜ is Over Dm+1 , x bounded because α˜ is compactly supported over R n ). Therefore, all semitrajectories starting from ⊂ Dm need a certain strictly positive time Tmin to leave Dm+1 . Because α(z) ˜ is bounded ( α(z) ˜ ≤ B), Theorem 2.2 of Chapter 6 says that the inequality (79) in this theorem is satisfied.
Here is the place where the Lipschitz assumptions (A1 ), (A2 ) are important. If we have not (A1 ), (A2 ), but only the normal form (20), then by
Exercise 2.1 in Chapter 6, we can modify f outside Dm+1 for (A1 ), (A2 ) to hold, and in this case, the inequality (79) holds as long as the semitrajectory x(t) stays in Dm+1 . The quantity xˆ 0 − x0 in (79) (equal to z 0 − x0 ) is bounded because (x0 , z 0 ) ∈ × . Therefore, because of the polynomial k(a) against the exponential e−at in this theorem, we can chose θ large enough so that from Tmin on, the error ε(t) becomes smaller than an arbitrary given η > 0. Let us consider the compact boundary ∂ Dm+1 . It is a level set of the Lyapunov function V defined above, for the vector field (112): x˙ = f (x, α (x)) = F(x). Therefore, on ∂ Dm+1 , V˙ (t) = L F V (x) is smaller than a certain µ < 0. It follows that, if η is small enough, V˙ (t) is also negative along the trajectories of x˙ = f (x, α(x ˜ + ε)), at points of ∂ Dm+1 : ∂V ∂V ˙ ˜ + ε)) = ∂ x f (x, α (x + ε)). V (t) = ∂ x f (x, α(x The x-trajectories in (113) cannot leave Dm+1 , provided that θ is large enough, because ε is small and so V˙ (t) remains < 0 on the boundary ∂ Dm+1 . Comment. At this point, it is important to understand that the stabilization is in a sense “doubly high gain”. First, θ has to be large for the observer to perform its task. Second, θ has to be large enough for the trajectories of x stay in Dm+1 during this time. In many situations we meet in practice (for example in the first application in the next chapter), the trajectories of x are naturally bounded. In that case, the decay of ε has to be exponential, but it is not necessary that this exponential rate is large. Third Step: Semitrajectories That Stay in Dm+1 × X Tend to Zero This will prove that the basin of attraction of the origin in (113) contains × . Take such a semitrajectory (z(t), ε(t)), t ≥ 0 of (114). This semitrajectory being bounded, its ω-limit set is nonempty. Let (z ∗ , ε ∗ ) be in this ω-limit set . Then, the whole semitrajectory (z ∗ (t), ε∗ (t)), t ≥ 0 starting from (z ∗ , ε∗ ) is entirely contained in because is invariant. However, because ε(t) tends to zero when t → +∞, ε∗ = 0. Therefore, ε ∗ (t) = 0 for all t ≥ 0. On the other hand, the dynamics on Dm+1 × {ε = 0} is just (112), which is asymptotically stable at the origin within Dm+1 , by assumption (and Dm+1 × {ε = 0} is positively invariant by construction). Hence, (z ∗ (t), ε∗ (t)) → 0. Therefore, the origin is also in the ω-limit set of the semitrajectory (z(t), ε(t)), t ≥ 0 because the ω-limit set is closed. It follows that the semitrajectory (z(t), ε(t)), t ≥ 0 enters in a finite time in the basin of attraction of the origin.
1.3. Stabilization with the High-Gain EKF
Assume that we are in the control affine case. Then, we could try to use the high-gain extended Kalman filter exactly in the same way (see Theorem 2.6 and Corollary 2.7 in Chapter 6). There are several additional difficulties. Let us consider the systems in normal form (86). Let us set again $\varepsilon = z - x$. The state $(z, \varepsilon, S)$ of the couple (system, observer) lives in $R^n \times R^n \times S_n(+)$, where $S_n(+)$ is the set of positive definite symmetric matrices.
$$
\dot x = Ax + b(x,u),\qquad
\frac{dz}{dt} = Az + b(z,u) - S(t)^{-1}C'r^{-1}\big(Cz - y(t)\big),
$$
$$
\frac{dS}{dt} = -(A + b^*(z,u))'S - S(A + b^*(z,u)) + C'r^{-1}C - SQ_\theta S,
\tag{117}
$$
with
$$
u = \tilde\alpha(z).
\tag{118}
$$
As soon as x(t) stays in Dm+1 , we know that we have the same type of exponential stability as in the previous case (Corollary 2.7 of Chapter 6). Hence, the same argument works, by increasing θ, to show that x(t) stays in Dm+1 for t ≥ 0. We also know that S(t) and S(t)−1 remain bounded in the open cone of positive definite matrices (see (101)). Hence, all semitrajectories (z(t), ε(t), S(t)), t ≥ 0 starting from × × are relatively compact in R n × R n × Sn (+), where is any compact set in Sn (+) (provided that θ is large enough). Moreover, ε(t) → 0. Hence, a point (z ∗ , ε∗ , S ∗ ) in the ω-limit set of the given semitrajectory satisfies ε∗ = 0. If T (t) = (z ∗ (t), 0, S ∗ (t)), t ≥ 0 is the semitrajectory starting from (z ∗ , ε∗ , S ∗ ), then again, the equation for z ∗ (t) is the one in (112), which is asymptotically stable within Dm+1 , by assumption. Hence, by the invariance and closedness of the ω-limit set, we can take z ∗ = 0. Again, the set = {z = 0, ε = 0} is ˇ of S starting invariant under the global dynamics, and the semitrajectory S(t) from a point of is a solution of a time-invariant Riccati equation, relative to an observable time-invariant linear system. It is known from the linear theory ˇ that S(t) tends to the solution S∞ of the corresponding algebraic Riccati equation. (see [8, 29]). Hence, the point (0, 0, S∞ ) belongs to . It remains to prove the local asymptotic stability of this point. If we can prove that the linearization at S∞ of the time-invariant Riccati equation is asymptotically stable, then the result will follow from the same argument as in the proof above: {ε = 0} will be an invariant manifold, containing the asymptotically stable center manifold (if any), and all other eigenvalues of the linearized system will have negative real parts. This
exponential stability of a time-invariant Riccati equation also follows from the linear theory. (See for instance [8], Theorem 5.4, p. 73.) Therefore, we get: Theorem 1.2. Replacing the Luenberger high-gain observer by the highgain extended Kalman filter, Theorem 1.1 is still valid, i.e., for any triple of compact subsets , , , × × ⊂ R n × R n × Sn (+), for θ large enough, the high-gain extended Kalman filter coupled with the system to which the feedback control α(z) ˜ is applied, is asymptotically stable within × × at (0, 0, S∞ ). Exercise 1.2. Give all details for the proof of Theorem 1.2. With exactly the same type of argument, it is clear that we can also use the continuous-discrete version of the high-gain extended Kalman filter: Exercise 1.3. State and prove a version of Theorem 1.2 using the continuousdiscrete version of the high-gain EKF. 2. The General Case of a Phase-Variable Representation 2.1. Preliminaries We will now deal with the case of Chapters 4 and 5, where the system has a phase-variable representation of a certain order. This happens generically if d y > du (at least for bounded and sufficiently differentiable controls, but it will be the case here). The systems of the previous paragraph have a phase-variable representation, as we have noticed already. So one could ask why the considerations in the previous paragraph? The answer is that the procedure for stabilizing (asymptotically) via output information is much less complicated in that case. In particular, now, we will have to deal with a certain number of successive derivatives of the inputs, because we have only U N ,B output observers. In the previous paragraph, we had a U 0,B state observer, hence, these problems did not appear. We will obtain the result with the high-gain Luenberger output observer. Generalizations to the case of the high-gain extended Kalman filters (either continuous-continuous or continuous-discrete) can be done in the same way as in the proof of Theorem 1.2 above. We will leave these generalizations as exercises. 2.1.1. Rings of C ∞ Functions Recall that in Chapter 5 we introduced several rings of (germs of) anaˆ N, ˇ N , . ˇ Recall that ˆ N was just the pull back ring: lytic functions: N ,
ˆ N = (S )∗ (Oz 0 ), where z 0 = S (x0 , u (0) , N N 0 u 0N −1 ). We will consider ˜ N, ¯ N, ˜ of the rings ˆ N , N , , ˇ i.e., the C ∞ analogs ˜ N x0 , u (0) , u˜ N −1 = G ◦ S , N (0) where G varies over the germs of C ∞ functions at the point S N (x 0 , u , u˜ N −1 ), and ¯ N x0 , u (0) , u˜ N −1 = G ◦ S N ,u˜ N −1 , (0) where G varies over the germs of C ∞ functions at the point S N ,u˜ N −1(x 0 , u ). (0) ˜ if there is no ambiguity, will be the ring ˜ 0 , u ), or simply , The ring (x of germs of functions of the form
G(u, ϕ1 , . . . , ϕ p ) at the point (x0 , u (0) ), where G is C ∞ and all the functions ϕi are of the form ϕi = L kf1u (∂ j1 )s1 L kf2u (∂ j2 )s2 . . . . . L kfru (∂ jr )sr h. (Again, ∂ j = ( ∂u∂ j )). Recall that, in the analytic case, the condition AC P(N ) is equivalent to ˆ N by Theorem 4.1 of Chapter 5. Of course, if AC P(N ) holds, ˇ = N ⊂ then a fortiori, ˜ = ¯N ⊂ ˜N :
(119)
˜ belong to N ⊂ ˆ N , hence, C ∞ Because the above generators u, ϕi of ¯ N . Therefore ˜ ⊂ ¯ N . Using the same reasonfunctions of them belong to ˜ ¯ ˜ ¯ ˜ ing, ⊂ N ⊂ N . Also, by definition, N ⊂ . Moreover, the analyticity assumption plays no role in this result, so that it is also true for a C ∞ system that AC P(N ) is equivalent to the condition (119). 2.1.2. Assumptions We will start with the most general situation, that is, we assume that our system satisfies the assumptions of Theorem 5.2 of Chapter 5, i.e., (H1 ) satisfies AC P(N ) at each point, (H2 ) is differentially observable of order N . In particular, if is strongly differentially observable of order N (the “generic” situation of Chapter 4), these assumptions are satisfied. For the same reasons as above, we assume also that X = R n . In this chapter, is semiglobally asymptotically stabilizable at (x0 = 0, u 0 = 0). We will have to make an additional assumption (H3 ), relative to the stabilizing feedbacks α :
¯ N (x1 , u (0) , (H3 ) The germs of the α, j (.) at x1 , j = 1, . . . , du , belong to n (0) (N −1)d u u˜ N −1 ), for all x1 ∈ X = R and for all (u , u˜ N −1 ) ∈ U × R . Equiv˜ by virtue of (119). alently, these germs belong to 2.1.3. Comments 1. This assumption (H3 ), (together with (H1 ) and (H2 )), is automatically satisfied if is strongly differentially observable of order N : in that ¯ N is just the ring of germs of smooth functions at case, the ring (0) (x1 , u ). Exercise 2.1. Prove this last statement. ˜ N, ¯ N, ˜ of our rings ˆ N , N , ˇ 2. We have defined the C ∞ analogs for the following reasons. First, in this section, we want to deal with C ∞ systems (recall that the results of Chapter 5 are valid also in the C ∞ case). Second, it could happen that, even if is analytic and asymptotically stabilizable, then it is asymptotically stabilizable by a feedback that is only C ∞ . 3. At this point, it is important to say a few words about U. In the previous section, we assumed that U = I du , where I is a closed interval of R. Assume that I is not equal to R. Then we can find a diffeomorphism : Int(I ) → R. The rings above are intrinsic objects that do not depend on coordinates on U. So that AC P(N ) does not depend on a change of variable over u. On the same way, the assumption (H2 ) is ¯ N at each point intrinsic. Also, the fact that the germ of α, j belongs to is intrinsic. Therefore, because our stabilizing feedbacks α take their values in the interior of U, we see that we can replace u by v, vi = (u i ) and assume that I = R. This is what we will do in the remainder of this section.
2.1.4. A Crucial Lemma In order to state and prove our main theorem in this section, we need a preliminary result: Lemma 2.1. Assume that is given, and that (H1 ) and (H3 ) are satisfied. Then, (H1 ) and (H3 ) are also satisfied for r , the r th dynamical extension of .
The definition of the r th dynamical extension of is given in Chapter 2, Definition 4.1. Proof. It is obvious that, if satisfies (H1 ), so does r . ¯ N and of the r th dynamical extension of , we From the definitions of see that it is sufficient to prove the lemma for r = 1. x˙ = f (x, u), ( 1 ) u˙ = u 1 , y = (h(x, u), u). A compact ⊂ X being given, we consider α : X → R du satisfying (H3 ), that is, the origin is asymptotically stable within for the vector field (112). Let B be the basin of attraction of the origin for (112). By the inverse Lyapunov’s theorems ([35, 40]), there is a proper C ∞ Lyapunov function for (112), i.e., (i) V is proper over B , V (0) = 0, V (x) > 0 if x = 0 and (ii) on B and along the trajectories of (112), V˙ < 0 except for x = 0 where V˙ (0) = 0. We will also consider some arbitrary interval I B = [−B, B] and prove that we can construct a feedback for 1 with the required properties, and such that the basin of attraction of the origin contains × (I B )du . We choose for 1 the feedback u 1 = L f α − r (u − α ), denoted by αr , for r > 0. Set H˜ (x, u) = u − α . We get for the system 1 under feedback:
dx ˜ dt = f (x, α (x) + H ), ( 1,r ) (120) d H˜ ˜ dt = −r H . du ( H˜ i )2 . This function W Consider the function W (x, H˜ ) = V (x) + 12 i=1 d u is proper on B × R . Denote by DWk the compact set DWk = {(x, u)| W (x, u) ≤ k}, k ≥ 0. Let M be the maximum of W on × (I B )du . Let = DW M+1 , = DW M . Then ⊂ Int( ), is compact and contains × (I B )du . We set = \Int( ). Along the trajectories of 1,r , we have du d ( H˜ i )2 . (W (x, H˜ )) = d V (x). f (x, α (x) + H˜ ) − r dt i=1
(121)
We claim that if r is sufficiently large, this quantity is strictly negative ˜ for (x, √u) ∈ . First, if V (x) is smaller than M/2, H has to be larger than M, and r can be chosen large enough for the quantity (121) above to be negative (on ∩ {V (x) < M 2 }). Second, let us examine the case
where $V(x) \ge \frac{M}{2}$:
d (W (x, H˜ )) = d V (x). f (x, α (x)) dt du + (u i − α,i )d V (x).i (x, u − α (x)) i=1 du
−r =−
(u i − α,i )2
i=1 d u i=1
2
√ 1 (u i − α,i ) r − √ d V (x).i (x, u − α (x)) 2 r
+ d V (x). f (x, α (x)) +
du 1 (d V (x).i )2 . 4r i=1
du 2 But, for r large enough, 4r1 i=1 (d V (x).i ) is smaller than the mini mum of |d V (x). f (x, α (x))| over ∩ {V (x) ≥ M 2 }. This shows that the semitrajectories of 1,r starting from stay in . Now, going back to the expression (120) of 1,r , let us show that it is asymptotically stable at the origin: H 0 = { H˜ = 0} is an invariant manifold for 1,r . On H 0 , the dynamics is x˙ = f (x, α (x)), hence, it is asymptotically stable at x = 0 by assumption. If the linearized vector field at x = 0 is asymptotically stable, the same holds for 1,r obviously. If it is not, then we pick a (asymptotically stable) center manifold C of x˙ = f (x, α (x)). Obviously, C × {0} is also an asymptotically stable center manifold for 1,r . Hence, 1,r is asymptotically stable at the origin. Let us now consider a semitrajectory {x(t), u(t)|t ≥ 0} of 1,r starting from × (I B )du . We know that this semitrajectory stays in , and hence, is bounded. Its ω-limit set is nonempty. Let (x ∗ , α (x ∗ )) be a point in . (It should be of this form, because u(t) − α (x(t)) → 0). Because is invariant, the dynamics on is given by x˙ = f (x, α (x)), which is asymptotically stable within B by assumption. Hence, the trajectory issued from (x ∗ , α (x ∗ )) is in the basin of attraction of the origin of 1,r . Since the basin of attraction is open, the same is true for the trajectory (x(t), u(t)). It follows that (x ∗ , α (x ∗ )) = 0. We have proven that 1 is asymptotically stabilizable at the origin within × I B , with the feedback u 1 = αr (x, u), for r large enough. Recall that ¯N = ˜ (relaαr (x, u) = L f α − r (u − α ). By our assumptions, α, j ∈ ˜ is stable by L f . Hence, the germs of αr, j (x, u) ∈ tive to ). By definition, ¯ N ().
Now we will show that: (i) αr is a function of the state of 1 only, and ˜ N ( 1 ). This implies that αr is in ¯ N ( 1 ), because an (ii) αr belongs to 1 ˜ N ( ), which does not depend on the derivatives of the control element of ¯ N ( 1 ) by definition. Hence, αr, j (x, u) ∈ ¯ N ( 1 ). Number of 1 , is in (i) is obvious; and (ii) because satisfies the AC P(N ), we know by (119) ˜ N (). Also, ˜ N () ⊂ ˜ N ( 1 ), and αr, j (x, u) ∈ ¯ N (), ¯ N () ⊂ that 1 ¯ hence, αr, j (x, u) ∈ N ( ). This completes the proof. Corollary 2.2. If satisfies (H1 ), (H3 ), then N is stabilizable within any ˜ N () (the germs arbitrary compact set with a feedback α that belongs to ˜ N ()). of which belong to Proof. Obvious. 2.2. Output Stabilization Again First, we will consider the Luenberger version of the output observer. We set M = N (d y + du ), where N is such that satisfies the condition AC P(N ) and is differentially observable of order N . R M = R N (d y +du ) . We choose ⊂ X, ⊂ R N d y , ⊂ R N du , three arbitrary compact sets. We denote (as in the previous chapters) by Am, p the (mp, p) block-antishift matrix, i.e., Am, p : R mp → R mp , 0, I d p , 0, . . . . . . . . , 0 . . . Am, p = 0, . . . . . . . . . . . , 0, I d p 0, . . . . . . . . . . . . . . . , 0 Also, Cm, p : R mp → R p and bm, p : R p → R mp denote the matrices, Cm, p = (I d p , 0, . . . . . . , 0), 0 . . bm, p = . . 0 I dp N We take a feedback α× , given by Corollary 2.2, that stabilizes the N th N dynamical extension of within × . Recall that N is given by
x˙ = f (x, u), ω˙ = A N ,du ω + b N ,du u N , with the notation ω = (u (0) , u˜ N −1 ).
(122)
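The block matrices A_{N,du}, C_{N,du}, and b_{N,du} used in (122) and in the coupled system below are easy to assemble explicitly; the following small numpy sketch (ours, purely illustrative) builds them from the descriptions given above.

```python
import numpy as np

def antishift(m, p):
    """A_{m,p}: the (mp x mp) block anti-shift matrix, with Id_p on the
    first block superdiagonal and zeros elsewhere."""
    A = np.zeros((m * p, m * p))
    for i in range(m - 1):
        A[i * p:(i + 1) * p, (i + 1) * p:(i + 2) * p] = np.eye(p)
    return A

def C_block(m, p):
    """C_{m,p} = (Id_p, 0, ..., 0): selects the first block of R^{mp}."""
    C = np.zeros((p, m * p))
    C[:, :p] = np.eye(p)
    return C

def b_block(m, p):
    """b_{m,p}: injects R^p into the last block of R^{mp}."""
    b = np.zeros((m * p, p))
    b[-p:, :] = np.eye(p)
    return b

# With these, the omega-dynamics of (122) reads:
#   omega_dot = antishift(N, du) @ omega + b_block(N, du) @ u_N
```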
Then, the feedback system
x˙ = f x, u (0) ,
N ω˙ = A N ,du ω + b N ,du α× (x, ω),
(123)
is asymptotically stable at (x0 , 0) within × . Let B˜ denote the basin of attraction of (x0 , 0) for (123). ˜ The function Let V be a proper Lyapunov function for this vector field on B. V has a maximum m over × . Setting Dk = {s|V (s) ≤ k}, k ≥ 0, let us consider Dm , Dm+1 . Then, × ⊂ Dm ⊂ Int(Dm+1 ). Using the C ∞ version of Corollary 5.3 of Chapter 5, we can find a C ∞ function α defined on all of R M such that N (0) (0) α× , u˜ N −1 = α S (124) x, u N x, u , u˜ N −1 , for all (x, u (0) , u˜ N −1 ) ∈ Dm+1 , and moreover, α can be taken compactly supported. Hence, α reaches its maximum over R M . So do u (0) , u (i) , 1 ≤ i ≤ N − 1 over Dm+1 . Let B be the maximum of these maxima. Let ˜ be the image of Dm+1 by the projection 1 : X × R N du → X. The set ˜ is compact. ˜ We consider the ϕ N given by Theorem 5.2 of Chapter 5, applied to and ˜ i.e., , ˜ y (N ) = ϕ N y (0) , y˜ N −1 , u (0) , u˜ N , ˜ all u (0) , u˜ N . for all x ∈ , N N ,B output observer We can now “couple” the feedback α× with the U in its Luenberger form (Formula (81), Section 2.3, Chapter 6). Let us write the equation of the full coupled system over X × R M : (i) x˙ = f x, u (0) , (ii) ω˙ = A N ,du ω + b N ,du α(z, ω),
˜ (iii) z˙ = (A N ,d y − K θ C N ,d y )z + K θ h x, u (0) + b N ,d y ϕ N (z, ω, α(z, ω)). (125) Our result will be the following, as expected: Theorem 2.3. Assumptions (H1 ), (H2 ), and (H3 ) are made. For any ⊂ R N d y , for θ large enough, the system (125) is asymptotically stable within × × at (x0 , 0, 0). This theorem means that in all the cases we have dealt with in the previous chapters, we can stabilize asymptotically, using output information only,
within arbitrarily large compact sets as soon as we can stabilize asymptotically within compact sets by smooth-state feedback. In particular, if the system is strongly differentially observable of some order N, which is generic if dy > du , and if is smooth state feedback stabilizable, this theorem applies.
The proof of this theorem will go through the three same steps as the proof of Theorem 1.1. Proof of Theorem 2.3. Let us observe first that in X × R M , the set S = {z = N (x, ω)} is an embedded submanifold (it is the graph of a smooth mapping). Set S = (Dm+1 × R N d y ) ∩ S. It is clear that S is an invariant manifold for (125): If we set Y = N (x, ω), and ε = z− Y, remember that the equation of ε over Dm+1 is given by Equations (84) or (85) of Chapter 6. If we rewrite it using the notations of this chapter, it gives: ˜ dε = (A N ,d y − K θ C N d y )ε + b N ,d y ϕ N (z, ω, α(z, ω) dt ˜ − b N ,d y ϕ N (Y, ω, α(z, ω))),
(126)
so that, if ε = 0, it is just dε dt = 0. Moreover, by construction, Int((Dm+1 × R N d y ) ∩ S) = {(x, ω, z)|(x, ω) ∈ Int(Dm+1 ), z = N (x, ω)}. If ε = 0, Dm+1 is positively invariant by the dynamics (i) and (ii) in (125), because it is the dynamics of an asymptotically stable system on X × R N du , and Dm+1 is a sublevel set of a Lyapunov function. It follows that in fact S = (Dm+1 × R N d y ) ∩ S is a compact manifold with a boundary, which is positively invariant under the dynamics (125). To summarize, setting S = Int(S ), S is an embedded manifold with the same property that it is positively invariant under (125). The point (x0 , 0, 0) ∈ S and the dynamics on S (and on S ) is globally asymptotically stable. 2.2.1. First Step: Local Asymptotic Stability of (125) The linearized dynamics for ε at (x0 , 0, 0) is, after straightforward computations, ε˙ = (A N ,d y − K θ C N ,d y )ε + b N ,d y Lε.
(127)
Exercise 2.2. Using the same time reparametrization method as in the proof of Theorem 2.5 of Chapter 6, prove that for θ large enough, this linear vector field is asymptotically stable. Again, if the linearized dynamics on S is asymptotically stable, the same is true for (125): All the eigenvalues of the linearized vector field at the
P1: FBH CB385-Book
CB385-Gauthier
140
May 31, 2001
17:54
Char Count= 0
Dynamic Output Stabilization
equilibrium have negative real part. If the linearized dynamics on S is not asymptotically stable, again, the asymptotically stable center manifold for the dynamics restricted to S is also a center manifold for the full system (125). It follows that (125) is asymptotically stable at (x0 , 0, 0). 2.2.2. Second Step: All the Semitrajectories Staying in Dm+1 × R N d y Are Contained in the Basin of Attraction of (x0 , 0, 0) Let { p(t) = (x(t), ω(t), z(t)), t ≥ 0} be such a semitrajectory. Theorem 2.5 of Chapter 6 applies: |ωi, j | ≤ B for 1 ≤ i ≤ N , 1 ≤ j ≤ du , Hence, limt→+∞ ε(t) = 0. Therefore, {z(t), t ≥ 0} is bounded. So that the ω-limit set of { p(t), t ≥ 0} is nonempty. Let p ∗ = (x ∗ , ω∗ , z ∗ ) be in . Because ∗ ∗ ∗ ε(t) → 0, it follows that z ∗ = N (x , ω ). Hence, p ∈ S . S is positively invariant, and every semitrajectory starting in S tends to zero, as we said. Thus, the semitrajectory starting from p ∗ is in the basin of attraction of zero. The same happens to the semitrajectory p(t), and at the end, limt→+∞ p(t) = (x0 , 0, 0). 2.2.3. Third Step: All Semitrajectories Starting From × × Stay in Dm+1 × R N d y The argument is exactly the same as in the proof of Theorem 1.1. These trajectories take a certain time Tmin to cross (Dm+1 × R N d y )\(Dm × R N d y ): in this set, u (N ) (t) is bounded by B. This time can be used if we increase θ, to make ε very small at time Tmin and afterward. (Here we use again “the exponential against the polynomial,” in the estimation of the error in the output observer. We use also the fact that because z(0), ω(0), and x(0) belong to given compact sets, the error at time 0 has a bound.) Now, we consider our Lyapunov function V over the boundary ∂ Dm+1 . Along our trajectories, when d V (x(t), ω(t)) is strictly smaller than a certain negative constant −ζ 2 ε = 0, dt (because ∂ Dm+1 is compact and does not contain (x0 , 0)). However, when ε(t) is nonzero, Equations (125) (i) and (ii) can be rewritten as (i) x˙ = f x, u (0) , (ii) ω˙ = A N ,du ω + b N ,du α N (x(t), ω(t)) + ε(t), ω(t) . d Hence, if ε(t) is sufficiently small, dt V (x(t), ω(t)) is still strictly negative. This just means that (x(t), ω(t)) stays in Dm+1 . By virtue of step 2 above, all these semitrajectories tend to (x0 , 0, 0). By virtue of step 1, Equations (125) are asymptotically stable at (x0 , 0, 0), and the basin of attraction contains × × .
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. Complements
141
Exercise 2.3. Consider the system of Exercise 4.1 in Chapter 5: X = R 2 , U = R, y = h(x) = x1 , x˙ 1 = x23 − x1 , x˙ 2 = x28 + x24 u. 1. Show that the feedback u r (x) = −r (x2 )3 , r > 0, stabilizes asymptotically at (0, 0). Show that the basin of attraction is br = {x|x2 < r }. 2. Show that the previous theorem applies, but is not strongly differentially observable, of any order. Theorem 2.4. The statement of Theorem 2.3 also holds when we replace the Luenberger version of the output observer by its extended Kalman filter version (Corollary 2.9 of Chapter 6). Exercise 2.4. Give a proof of Theorem 2.4. Exercise 2.5. State and prove a version of Theorem 2.4, using the highgain extended Kalman filter in its continuous-discrete form (Theorem 2.20 of Chapter 6). 3. Complements 3.1. Systems with Positively Invariant Compact State Spaces This is a situation that seems to be very common in practice. In particular, it will appear in the first application of the next chapter. We make the additional assumption: (H4 ): (i) The system is such that the state space is not X = R n , but a certain relatively compact open subset ⊂ R n . We assume that is also defined and smooth on the boundary ∂, and the closure Cl() is positively invariant for the dynamics of whatever the control u(.), with values in U . (ii) The state feedback is asymptotically stabilizing within Cl(). Observation. (H4 , (i)) implies that also is positively invariant for the dynamics of . Proposition 3.1. In the case in which the assumption (H4 ) holds, all the theorems in the two previous sections in this chapter are true with a refinement: As soon as the observers are exponential, with any rate of decay of the error, the full system controller+observer is asymptotically stable within Cl() × Xˆ , where Xˆ is the state space of the observer.
P1: FBH CB385-Book
CB385-Gauthier
142
May 31, 2001
17:54
Char Count= 0
Dynamic Output Stabilization
Proof. 1. The proof of local asymptotic stability is the same. 2. The proof of the fact that bounded semitrajectories tend to zero is the same. 3. The semitrajectories of the full coupled system are automatically bounded: (a) x(t), the state of , is bounded, by assumption; (b) the observer error going to zero, the estimate is also bounded; (c) (in the case of the high-gain EKF only), the component S(t) is bounded and tends to S ∞ exactly for the same reason as in the general case. Exercise 3.1. Give details of the previous proof, especially in the case of the high-gain extended Kalman filter. This is very important in practice, because, as we explained, the observer is not doubly high-gain in that case.
P1: FBH CB385-Book
CB385-Gauthier
June 21, 2001
11:11
Char Count= 0
8 Applications
In this chapter, to illustrate the theoretical content of this book we will briefly present and comment on two applications of our theory to the real world of chemical engineering. The first application was really applied in practice. The second was done in simulation only.
1. Binary Distillation Columns Distillation columns are generic objects in the petroleum industry, and more generally, in chemical engineering. Here we limit ourselves to the case of binary distillation columns, i.e., distillation columns treating a mixture of two components. The example we consider in practice is a “depropanizer” column, i.e., a column distillating a mixture of butane and propane. There is at least one of these columns in each petroleum refinery. This work has been also generalized to some cases of multicomponent (nonbinary) distillation, but this topic will not be discussed here. For more details on this application, the reader should consult [64].
1.1. Presentation of a Distillation Column, the Problems, and the Equations 1.1.1. Binary Distillation Columns The column is represented in Figure 1. It is composed of a certain number of trays (n), a top condenser and a boiler. The top condenser is considered as the tray 1, and the bottom of the column is considered as the tray n. In practice, n is approximately 20. The feed (here, the butane-propane mixture) enters the column at a medium tray (the f th one, f ∼ = 10). The light component (propane) is expected to be extracted at the top of the column (distillate). The heavy component (butane) is the bottom product (residue) on the figure. The distillate and the residue 143
[Figure 1. Schematic of a distillation column: the feed (F, Z_F) enters at a middle tray; the reflux L flows down from the top condenser (tray 1) while the vapor V rises along the column; the distillate is drawn off at flowrate V − L at the top, the liquid flowrate below the feed is L + F, and the residue leaves the boiler (tray n) at flowrate L + F − V.]
The distillate and the residue
are partly recycled in the column. During this recycling, the residue is boiled, and the distillate is condensed. On each tray j arrive the vapor coming from tray j + 1 and the liquid coming from tray j − 1. A certain (constant) amount H_j of liquid mixture is on tray j at time t. The assumptions that are made are the standard Lewis hypotheses (see [54] for justification). Under these assumptions, the flowrate of liquid going down in the column from the top condenser is constant (it does not depend on the tray j) above the feed and below the feed: it is L above the feed, and L + F below the feed (the feed mixture is assumed to be liquid and to enter the column at its "bubble point"). The flowrate of vapor is constant (it does not depend on the tray j) and equal to V all along the column. The control variables are L(t) and V(t), the "reflux flowrate" and "vapor flowrate" (L(t) is controlled directly by the user, and V(t) is controlled via the boiler at the bottom of the column). On each tray j:
• In the liquid phase, the composition (molar fraction) of the light component (propane) is denoted by x_j; the composition of the heavy component (butane) is 1 − x_j.
• In the vapor phase, the composition of the light component on tray j is denoted by y_j, and the composition of the heavy component is just 1 − y_j.
On each tray, liquid-vapor equilibrium is assumed, and y_j is related to x_j by a certain function k:
$$
y_j = k(x_j).
\tag{128}
$$
The function k is called the liquid-vapor equilibrium law. It comes from thermodynamics and depends only on the binary mixture (and the pressure in the column, which is assumed to be constant). We assume that the function k is monotonic, k(0) = 0, k(1) = 1. (This corresponds to what is called nonazeotropic distillation, the most common situation in practice.) Typically, the function k has the following expression:
$$
k(x) = \frac{\alpha x}{1 + (\alpha - 1)x},
\tag{129}
$$
where $\alpha > 1$ is a constant, depending on the mixture only (and on the pressure), called the "relative volatility" of the mixture. Now anyone is able to write a material balance on each tray to get the equations for the distillation column.
1.1.2. The Equations of the Column
The condenser equation is
$$
H_1\frac{dx_1}{dt} = V(y_2 - x_1).
\tag{130}
$$
The equation of a general tray above the feed tray is
$$
H_j\frac{dx_j}{dt} = L(x_{j-1} - x_j) + V(y_{j+1} - y_j).
\tag{131}
$$
The equation of the feed tray is
$$
H_f\frac{dx_f}{dt} = F(Z_F - x_f) + L(x_{f-1} - x_f) + V(y_{f+1} - y_f).
\tag{132}
$$
The equation of a general tray below the feed is
$$
H_j\frac{dx_j}{dt} = (F + L)(x_{j-1} - x_j) + V(y_{j+1} - y_j).
\tag{133}
$$
The equation of the bottom tray is
$$
H_n\frac{dx_n}{dt} = (F + L)(x_{n-1} - x_n) + V(x_n - y_n).
\tag{134}
$$
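For simulation purposes, equations (128)-(134) assemble into a single state-space right-hand side. The sketch below is ours, with purely illustrative numerical values (it is not the industrial model used in the application).

```python
import numpy as np

def column_rhs(x, L, V, Z_F, H, F, alpha, f):
    """Right-hand side of the binary column model (130)-(134).

    x     : compositions x_1..x_n on the trays (x[0] is the condenser, tray 1),
    L, V  : reflux and vapor flowrates (the two controls),
    Z_F   : feed composition, F : feed flowrate, H : tray holdups (length n),
    alpha : relative volatility in the equilibrium law (129),
    f     : index of the feed tray (1-based, as in the text).
    """
    n = len(x)
    k = lambda xi: alpha * xi / (1.0 + (alpha - 1.0) * xi)    # (128)-(129)
    y = k(np.asarray(x))
    dx = np.zeros(n)
    dx[0] = V * (y[1] - x[0])                                               # (130)
    for j in range(1, f - 1):                                               # trays 2..f-1
        dx[j] = L * (x[j - 1] - x[j]) + V * (y[j + 1] - y[j])               # (131)
    dx[f - 1] = (F * (Z_F - x[f - 1]) + L * (x[f - 2] - x[f - 1])
                 + V * (y[f] - y[f - 1]))                                   # (132)
    for j in range(f, n - 1):                                               # trays f+1..n-1
        dx[j] = (F + L) * (x[j - 1] - x[j]) + V * (y[j + 1] - y[j])         # (133)
    dx[n - 1] = (F + L) * (x[n - 2] - x[n - 1]) + V * (x[n - 1] - y[n - 1]) # (134)
    return dx / np.asarray(H)

# Illustrative call: n = 20 trays, feed at tray 10; all values are made up.
x0 = np.linspace(0.95, 0.05, 20)
print(column_rhs(x0, L=2.0, V=2.5, Z_F=0.5, H=np.ones(20), F=1.0, alpha=2.0, f=10))
```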
This set of equations – (130), (131), (132), (133), and (134) – in which y j = k(x j ), constitutes the mathematical model of the binary distillation column. In these equation, x j , the j th state variable, is the molar fraction of the light product (propane) on tray j. The constant H j is the molar quantity of liquid phase contained on the tray j (a geometric constant). The variable Z F (t) is the feed composition (again, molar fraction of the light component); F is the feed flowrate (a constant); L(t), the first control variable is the reflux flowrate; and V (t), the second control variable is the vapor flowrate in the column. The column is expected to separate the products of the mixture so that the output variables are x1 , 1 − xn (the quality of the light product at the top of the column, and the quality of the heavy product at the bottom of the column). The output x1 has to be close to 1, xn has to be close to zero. Typical target values in the case of a butane–propane mixture are x1 = 0.996, xn = 0.004. Remark 1.1. This model is very simple. In fact, more sophisticated models have basically the same structure. They include a set of (algebraic) equations, which expresses the thermal balance on each plate. This set of equations is replaced here by the Lewis hypotheses. Also, the expressions (128) and (129), are replaced by a more complicated relation describing liquid–vapor equilibria. However, the structure remains the same, and all the developments presented in this chapter can be extended to these more sophisticated models. 1.1.3. The Problems to Be Solved Continuous quality measurements are extremely expensive (taking into ac-
count the maintenance of the devices, it costs about $100,000 a year for a single continuous measurement). Hence, it is completely unreasonable to expect to have a continuous measurement of the quality of a product, which is not a final product (to be sold on the market). In particular, in the case of a butane–propane distillation, there is no hope of measuring Z F . (This means that we don’t know what is entering the column.) If any quality is measured (continuously), it will be the quality of the output final products, x1 , xn . Here, we assume that the functions x 1 (t), x n (t) are measured continuously. This is an expensive assumption, but it was almost realized in our case. In
fact, we had discrete measurements, each of four minutes. This sampling time has to be compared to the response time of about one hour. First Problem. (a) Using the output observations only, estimate the state (x1 , . . . , xn )(t) of the distillation column, (i.e., estimate all compositions on each plate). (b) Estimate the quality of the feed Z F (t). This first problem is purely a problem of state estimation (there is no control at all). The second problem concerns control: As we shall see (Section 1.2.2), given reasonable target values of the outputs, (x1∗ , xn∗ ), there is a unique value of the controls (L ∗ , V ∗ ) corresponding to an equilibrium x ∗ for these target values (provided that the unknown Z F is constant). Second problem. Stabilize asymptotically the column at the equilibrium x ∗ corresponding to the given target values (x1∗ , xn∗ ) of the outputs. If possible, reduce the response time. Of course, we have to solve this second problem using output informations only. Notice that in our case, as we shall show, the equilibria are globally asymptotically stable in open loop (i.e., if we fix the control values to (L ∗ , V ∗ ), and if the feed composition Z F -which is unknown-, is a constant). That is, adding the feed composition as a state variable satisfying the additional equation Z˙ F = 0, we have a trivial stabilizing smooth state feedback, u 1 (x, Z F ) = (L ∗ (Z F ), V ∗ (Z F )),
(135)
However, of course, this feedback is not very good from the point of view of “response time”. Here, we will give solutions to both problems using our theory. These solutions were tested in practice. They perform very well and are robust. In particular, the users were really interested by the observation procedure. Especially, they were interested in the estimation of the quality of the feed of the distillation column. It is even a bit surprising that very reasonable dynamic state estimations are obtained on the basis of two measurements only, to estimate 20 state variables, even with the Luenberger version of our high gain observers. In the case of the other application (the polymerization reactor of the next section), we will be more or less in the same situation, moreover the reactor is open loop unstable.
1.2. The Main Properties of the Distillation Column 1.2.1. Invariant Domains For an obvious physical reason at the bottom of the column, let us assume that F ≥ V − L . Consider the domain: Uφ,λ = {u = (L , V ) | 0 < λ ≤ L ≤ L max , 0 < φ ≤ V − L ≤ (V − L)max ≤ F}.
(136)
In practice, we will assume that the set of values of control is Uφ,λ , where φ, λ are arbitrarily small but strictly positive constants. This means in particular that the rates L , V, F in the column are all strictly positive. (This assumption prevents the system to be close to unobservability: as we shall see, unobservability holds if F = V = L = 0). F and Z F are assumed to be constant, and Z F ∈ [z −f , z +f ], z −f > 0, + z f < 1. In practice, Z F is not a constant, but its variation during the stabilization process can be neglected. The reason for this, in the case of the butane–propane column, is that the feed mixture is the top output (the distillate) of another much bigger distillation column. If a = (a1 , . . . , an ), b = (b1 , . . . , bn ), ai and bi are small positive real numbers, let us denote by a,b the cube a,b = [a1 , 1 − b1 ] × . . . × [an , 1 − bn ], 0 = 0,0 = [0, 1]n . The cube 0 is the physical state space. Theorem 1.1. (i) 0 is positively invariant under the dynamics of the distillation column. Moreover, (ii) for any compact K ⊂ Int( 0 ), it does exist a and b such that K ⊂ a,b and a,b is positively invariant. Proof. (i) is a consequence of (ii). Let us prove (ii) constructively. For this, we will exhibit (a, b), arbitrarily small, such that a,b is positively invariant. ¯ a¯ > 0 small and an = ε, We define a as follows: for j ≤ f, a j = j a, ε > 0 small. For f < j < n, a j = a j+1 + q j+1 , q j = q j+1 k M , qn = q, q > 0 small, k M = supx∈[0,1] ddkx . In the case where the relative volatility is assumed to be constant, i.e., if k(x) has the expression (129), then, k M = α. We assume that qε , qa¯ , aε¯ are small enough. ¯ − ε ), The vector b is defined in the following way: if 1 ≤ j ≤ f, q1 = b(1 ¯ ¯ ¯ for ε , b > 0 small, b1 = ε b, b2 = b, b j+1 = b j + q j , q j = β j q1 , β > sup V kL (x) . If j > f, bn = c, c > 0 small, b j+1 = b j + q j , qn−1 = ε c, k (x) . We assume also that c/b¯ is q j = β q j+1 = (β )n− j−1 qn−1 , β < inf VF+L small.
The proof consists of showing that, if $x = (x_1, \ldots, x_n) \in [0,1]^n$:
1. if $x_j = a_j$, then $\frac{dx_j}{dt} > 0$;
2. if $x_j = 1 - b_j$, then $\frac{dx_j}{dt} < 0$.
because k is a strictly increasing function and y2 = k(x2 ), x2 ≥ a2 . Then, ¯ − a) ¯ > 0. H1 ddtx1 x =a =a¯ ≥ V (k(2a) 1 1 2 ≤ j ≤ f − 1: the equation of x j is the Equation (131). Hence, d x j ¯ − V (k( j a) ¯ − k(x j+1 )), Hj = L(x j−1 − j a) dt x j =a j = j a¯
¯ − V (k( j a) ¯ − k(( j + 1)a)) ¯ ≥ L(( j − 1)a¯ − j a) ¯ (xa ), ≥ −L a¯ + V ak for some xa close to 0. Hence, k (xa ) > 1. Because V − L > 0, d x j H j dt x =a = j a¯ > 0. j j j = f : by Equation (132), d x f ¯ + L(x f −1 − f a) ¯ + V (k(x f +1 ) − k( f a)) ¯ Hf = F(Z F − f a) dt x f = f a¯
¯ + L(( f − 1)a¯ − f a) ¯ + V (k(a f +1 ) − k( f a)) ¯ ≥ F(Z F − f a) 1 − α n− f −1 ≥ F Z F − F f a¯ − L a¯ − V k (xa ) f a¯ − ε − q 1−α ¯ q, ε are small, this quantity is strictly for some xa close to zero. Because a, positive. j = f + 1: Equation (133) for j = f + 1 gives d x f +1 = (F + L)(x f − a f +1 ) + V (k(x f +2 ) − k(a f +1 )) H f +1 dt x f +1 =a f +1
≥ (F + L)(a f − a f +1 ) + V (k(a f +2 ) − k(a f +1 )) 1 − α n− f −1 ≥ (F + L) f a¯ − ε − q 1−α −V k (xa )α n− f −2 q = B.
However, B > 0 is equivalent to
1 > (V k′(x_a)/((F + L) f)) α^{n−f−2} (q/ā) + (1/f)(ε/ā + (q/ā)(1 − α^{n−f−1})/(1 − α)),
which is true because ε/ā and q/ā are both small.
f + 2 ≤ j ≤ n − 1: by Equation (133),
H_j dx_j/dt |_{x_j = a_j} = (F + L)(x_{j−1} − a_j) + V(k(x_{j+1}) − k(a_j))
≥ (F + L)(a_{j−1} − a_j) + V(k(a_{j+1}) − k(a_j))
≥ (F + L) q_j − V k′(x_a) q_{j+1} ≥ ((F + L)α − V k′(x_a)) q_{j+1},
for some x_a close to zero. Since α > k′(x_a) and F + L > V, this quantity is strictly positive.
j = n: Equation (134) gives
H_n dx_n/dt |_{x_n = a_n = ε} = (F + L)(x_{n−1} − ε) + V(ε − k(ε))
≥ (F + L)(a_{n−1} − ε) + V(ε − k(ε)) ≥ (F + L) q + V(ε − ε k′(x_a)),
for x_a close to zero. The ratio ε/q is small, hence this quantity is > 0.
1.2.2. Stationary Points
We assume that x_1, x_n, F, Z_F are given. Let us set β = (Z_F − x_n)/(x_1 − x_n). Typically, at target stationary points, x_1 is close to 1, x_n is close to zero, and Z_F is medium: the "quality" Z_F of the feed is close to 0.5 (between 0.2 and 0.8, say). Then, β is "medium" (between 0.2 and 0.8). Let us consider the following open domain D in R^2 (Z_F being fixed):
D = {(x_1, x_n); k^{◦(n−1)}(x_n) > x_1, k(x_n(1 − β) + β) < x_1, x_n < Z_F < x_1},
where k^{◦(n−1)}(x_n) = k ◦ k ◦ . . . ◦ k(x_n) (n − 1 times). The second and third conditions are always satisfied for target stationary points: x_n(1 − β) + β is close to β, and k(β) < 1.
The equilibrium coefficient k(x) = αx/(1 + (α − 1)x), and α is between 1 and 3 for any binary mixture "not easy to distillate," in particular for the butane–propane mixture.
As we shall see, the first condition in the definition of D is a necessary condition for (x_1, x_n) to correspond to an equilibrium. It is necessary and sufficient if the second condition is met.
First, the sum of all (stationary) equations gives the global balance of light product over the column: V = βF + L. By considering the equations 1 to f − 1, we can compute x_j, 2 ≤ j ≤ f:
(i) y_2 = x_1,
(ii) y_j = x_1 (1 − L/(L + βF)) + (L/(L + βF)) x_{j−1},    (137)
and using the equations of the trays n to f + 1, we compute x_j, f ≤ j ≤ n:
(i) x_{n−1} = x_n (1 − (βF + L)/(F + L)) + ((βF + L)/(F + L)) k(x_n),
(ii) x_{n−j} = x_n (1 − (βF + L)/(F + L)) + ((βF + L)/(F + L)) k(x_{n−j+1}).    (138)
Exercise 1.2. Prove Formulas (137) and (138).
Exercise 1.3. Prove that the stationary points satisfy x_1 > x_2 > . . . > x_n.
Using these formulas, it is easy to compute two different estimates x_{f1} and x_{f2} of x_f, and to show the following proposition.
Proposition 1.2. (i) x_{f1}(L) is a smooth function of L, with strictly negative derivative, and x_{f1} : ]0, +∞[ → ]k^{◦(−f+1)}(x_1), k^{−1}(x_1)[; (ii) x_{f2}(L) is a smooth function of L, with strictly positive derivative, and x_{f2} : ]0, +∞[ → ]x̃, k^{◦(n−f)}(x_n)[, where x̃ < (1 − β)x_n + β.
Exercise 1.4. Prove Proposition 1.2.
If (x_1, x_n) belong to D, then these two intervals overlap. It follows that the following proposition holds:
Proposition 1.3. There is a smooth mapping u^s_{Z_F} : D → (R^+)^2, which to (x_1, x_n) associates (L, V) = u^s_{Z_F}(x_1, x_n), the value of the stationary controls
corresponding to the target stationary outputs (x1 , xn ). The mapping u sZ F is in fact smooth w.r.t. (z F , x1 , xn ). Now conversely, let us examine what are the equilibria for L , V fixed. Summing the (stationary) equations of the column, we have again, using V − L > 0 and F + L − V > 0: 0<
(V − L)/F = β = (Z_F − x_n)/(x_1 − x_n) < 1.    (139)
Assume that x_n > Z_F. Then x_n > x_1, hence Z_F > x_1. In that case, two easy inductions on the series of trays above and below the feed show that this is impossible: by Equation (137), for j < f, x_{j+1} < Z_F, and by Equation (138), for j > f, x_{j−1} > Z_F, which give two contradictory estimates of x_f. Hence, we have the following lemma.
Lemma 1.4. At all equilibria, x_1 > Z_F > x_n.
This we already assumed in the first part of this paragraph. Therefore, Z_F − x_n > 0, x_1 − x_n > 0. Now, by (139), x_n expresses in terms of x_1: x_n = (Z_F − βx_1)/(1 − β). It follows, because we look for x_1, x_n ∈ [0, 1], that x_1 is between Z_F and Z_F/β. Also, when x_1 increases from Z_F to Z_F/β, x_n decreases from Z_F to zero.
Using this, and (137) and (138) again, it is easy to prove by induction that we get two estimates x_{f1}(x_1), x_{f2}(x_1) of x_f, with x_{f1} : [Z_F, Z_F/β] → [x_−, x_+], x_− < Z_F, and x_{f2} : [Z_F, Z_F/β] → [0, y_+], y_+ > Z_F; x_{f1}(x_1) has a strictly positive derivative, x_{f2}(x_1) has a strictly negative derivative. It follows with the implicit function theorem that (x_1, . . . , x_n) express smoothly in terms of (L, V, Z_F).
Proposition 1.5. There is a smooth mapping Eq : D̃ → [0, 1]^n, (L, V, Z_F) → (x_1, . . . , x_n), which to (L, V, Z_F) associates the corresponding unique equilibrium (x_1, . . . , x_n) ∈ [0, 1]^n. Here, D̃ = {(L, V, Z_F) | V − L > 0, 1 > β = (V − L)/F, 1 > Z_F > 0}.
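Propositions 1.3 and 1.5 translate directly into a small numerical procedure: for fixed (L, V, Z_F), propagate (137) from the top and (138) from the bottom of the column and match the two estimates of x_f by a one-dimensional search in x_1 ∈ [Z_F, Z_F/β]. A sketch in Python follows; the function names, the bisection tolerance, and the sample data (n, f, α, L, V, F, Z_F) are ours, purely for illustration.

```python
def k(x, alpha=2.5):          # equilibrium law k(x) = alpha*x / (1 + (alpha-1)*x)
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def k_inv(y, alpha=2.5):      # inverse of k
    return y / (alpha - (alpha - 1.0) * y)

def xf_estimates(x1, L, V, F, ZF, n, f, alpha=2.5):
    beta = (V - L) / F
    xn = (ZF - beta * x1) / (1.0 - beta)          # from (139)
    # Rectifying section, formulas (137): y_2 = x_1, then y_j and x_j for j = 2..f.
    xj = x1
    for _ in range(2, f + 1):
        yj = x1 * (1.0 - L / (L + beta * F)) + (L / (L + beta * F)) * xj
        xj = k_inv(yj, alpha)
    xf_top = xj
    # Stripping section, formulas (138): from x_n up to x_f.
    xj = xn
    for _ in range(1, n - f + 1):
        xj = xn * (1.0 - V / (F + L)) + (V / (F + L)) * k(xj, alpha)
    xf_bottom = xj
    return xf_top, xf_bottom

def equilibrium_x1(L, V, F, ZF, n, f, tol=1e-10):
    """Find x_1 such that the two estimates of x_f agree (Proposition 1.5)."""
    beta = (V - L) / F
    lo, hi = ZF + 1e-9, ZF / beta - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        top, bot = xf_estimates(mid, L, V, F, ZF, n, f)
        if top < bot:
            lo = mid        # x_f1 increases, x_f2 decreases with x_1
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(equilibrium_x1(L=2.0, V=2.5, F=1.0, ZF=0.5, n=11, f=6))
```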
Exercise 1.5. Show the details of the proof of Proposition 1.5. In particular, for (x1 , xn ) ∈ D, and if Z F is medium (i.e., Z F ∈ [0.2, 0.8]), there is a unique equilibrium (x, L , V ) corresponding to (x 1 , x n ), and no other equilibrium x ∗ ∈ [0, 1]n corresponds to (L , V ). Remark 1.2. Moreover, it is easy to check that no equilibrium x ∗ lies in the boundary ∂([0, 1]n ). 1.2.3. Asymptotic Stability Let us fix constant values of the control, L = L ∗ , V = V ∗ . (Of course, we assume V ∗ > 0, L ∗ > 0, V ∗ − L ∗ > 0, L ∗ + F − V ∗ > 0). Let us rewrite the equation of the column, for this constant control u ∗ as a vector field, x˙ = f (x). We know by the previous paragraph that we have a single equilibrium x ∗ ∈]0, 1[n , f (x ∗ ) = 0. Exercise 1.6. Show that the linearized vector field T f (x ∗ ) at the equilibrium is asymptotically stable. For hints, consult the paper by Rosenbrock [55]. We will now construct a locally Lipschitz, but nondifferentiable Lyapunov function V R (x) for the distillation column. Let us set V R (x) =
Σ_{i=1}^{n} |f_i(x)|.    (140)
As we shall see in our case, this nondifferentiable Lyapunov function will be sufficient for our purposes. The derivative along trajectories has to be replaced by the right-hand derivative. Here, the right-hand derivative of V_R along the trajectories,
D^+(V_R, f) = lim_{t→0+} (V_R(Exp(t f)(x)) − V_R(x))/t,
always exists and is given by
D^+(V_R, f) = Σ_{k=1}^{n} α_k ḟ_k(x) = Σ_{k=1}^{n} α_k L_f f_k(x),    (141)
where α_k is defined by
α_k = +1 if f_k > 0 or (f_k = 0 and ḟ_k > 0),
α_k = 0 if f_k = 0 and ḟ_k = 0,    (142)
α_k = −1 if f_k < 0 or (f_k = 0 and ḟ_k < 0).
Obviously, V_R ≥ 0, and V_R(x) = 0 iff x = x*.
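The quantities in (140)–(142) are straightforward to evaluate numerically. The sketch below (Python) is one possible way to do so, using a forward-difference Jacobian; the function names are ours, and the two-dimensional linear field used for the illustration is a hypothetical stand-in for the column dynamics, which are not reproduced here.

```python
import numpy as np

def alpha(fk, fdotk, tol=1e-12):
    """Sign selector of Equation (142)."""
    if fk > tol or (abs(fk) <= tol and fdotk > tol):
        return 1.0
    if fk < -tol or (abs(fk) <= tol and fdotk < -tol):
        return -1.0
    return 0.0

def V_R(f, x):
    """Nondifferentiable Lyapunov function (140): sum of |f_i(x)|."""
    return np.sum(np.abs(f(x)))

def D_plus_V_R(f, x, eps=1e-7):
    """Right-hand derivative (141) along xdot = f(x): f_dot = (Df) f(x),
    with Df approximated by forward differences, and alpha_k given by (142)."""
    fx = f(x)
    n = len(fx)
    J = np.empty((n, n))
    for i in range(n):
        xp = np.array(x, dtype=float)
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    fdot = J @ fx
    return sum(alpha(fx[k], fdot[k]) * fdot[k] for k in range(n))

# Toy illustration: a Metzler matrix with nonpositive column sums, the same sign
# structure as (143); the Lemma 1.6 argument then gives D^+(V_R, f) <= 0.
A = np.array([[-1.0, 0.3], [0.2, -1.5]])
f = lambda x: A @ np.asarray(x, dtype=float)
x = np.array([0.4, -0.2])
print(V_R(f, x), D_plus_V_R(f, x))
```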
Lemma 1.6. V R is a Lipschitz Lyapunov function for the distillation column, i.e., D + (V R , f ) ≤ 0. Moreover, D + (V R , f )(x) = 0 implies f 1 (x) = f n (x) = 0. Proof. In this proof, and only in it, let us make the change of variables Hi xi → xi .
Let us set ψ_i = −Σ_{k=1}^{n} ∂f_k/∂x_i. A direct computation shows that we have the following properties for the distillation column, on the cube [0, 1]^n:
ψ_1 > 0, ψ_n > 0, ψ_i = 0 for n > i > 1,
∂f_i/∂x_i ≤ 0, i = 1, . . . , n,   ∂f_k/∂x_i ≥ 0, k ≠ i.    (143)
A trajectory x(t) being fixed, let us set G(t) = ∫_0^t Σ_{k=1}^{n} |ψ_k f_k| dθ. By Theorem 1.1, x is defined on [0, +∞[ if x_0 ∈ [0, 1]^n. Then, dG/dt exists and is ≥ 0 for all t ≥ 0.
Let us compute A(t) = dG/dt + D^+(V_R, f):
A(t) = Σ_{k=1}^{n} α_k ḟ_k + Σ_{k=1}^{n} |ψ_k f_k|,
where α_k is given by (142). Therefore,
A(t) = Σ_{i=1}^{n} f_i Σ_{k=1}^{n} α_k ∂f_k/∂x_i + Σ_{k=1}^{n} |ψ_k f_k|.    (144)
Otherwise, f_i α_i ∂f_i/∂x_i = −|f_i ∂f_i/∂x_i| (by (142) and (143)). Hence, by the definition of ψ_i and by (143):
f_i α_i ∂f_i/∂x_i = −|f_i| (|ψ_i| + Σ_{k=1, k≠i}^{n} ∂f_k/∂x_i).    (145)
Also,
f_i Σ_{k=1, k≠i}^{n} α_k ∂f_k/∂x_i ≤ |f_i| Σ_{k=1, k≠i}^{n} ∂f_k/∂x_i.    (146)
By replacing (145) and (146) in (144), we get
A(t) ≤ Σ_{i=1}^{n} ( −|f_i ψ_i| − |f_i| Σ_{k≠i} ∂f_k/∂x_i + |f_i| Σ_{k≠i} ∂f_k/∂x_i + |f_i ψ_i| ) = 0.
Therefore,
D^+(V_R, f) ≤ −dG/dt ≤ 0.    (147)
By (143), dG/dt is zero iff f_1 = f_n = 0.
We can now apply the usual reasoning stating that x(t) tends to an invariant set contained in {D^+(V_R, f) = 0}. Let us check that it works also in our nonsmooth situation.
Let x(t) be a semitrajectory starting from x_0 ∈ [0, 1]^n. This semitrajectory being bounded, its ω-limit set is nonempty. Let x̄ be an ω-limit point. Because V_R is Lipschitz, V_R is almost everywhere differentiable. Hence, D^+(V_R, f) is almost everywhere the derivative of V_R(x(t)). It is bounded, and V_R(x(t)) is absolutely continuous. Hence,
V_R(x(t)) − V_R(x_0) = ∫_0^t D^+(V_R, f)(x(θ)) dθ ≤ −∫_0^t Ġ(x(θ)) dθ,
and
V_R(x̄) − V_R(x_0) ≤ −∫_0^∞ Ġ(x(θ)) dθ.
Now, Ġ(x) ≥ 0 is a continuous function of x. Assume that Ġ(x̄) > 0. Then we can find a closed ball B_r(x̄), of radius r, centered at x̄, such that Ġ(z) > δ > 0 for all z ∈ B_r(x̄).
Let t_k → ∞ be a sequence such that x(t_k) → x̄. For k large enough, x(t_k) ∈ B_{r/2}(x̄). Because ẋ is bounded, x(t) stays an infinite total time in B_r(x̄), and
∫_0^∞ Ġ(x(θ)) dθ = +∞,
which contradicts the previous bound. Therefore, Ġ(x̄) = 0.
The ω-limit set being invariant, if x̄(t) is the semitrajectory starting from x̄, then Ġ(x̄(t)) = 0. This implies that f_1(x̄(t)) = f_n(x̄(t)) = 0 for all t ≥ 0. Therefore, x̄_1(t) is constant, and dx̄_1(t)/dt = 0 = V(ȳ_2(t) − x̄_1(t)) implies that ȳ_2(t), and hence x̄_2(t), are constant. By induction, x̄_1(t), . . . , x̄_f(t) are constant. A similar reasoning starting from x̄_n(t) = constant shows that, in fact, x̄(t) = constant = x̄. Therefore, x̄ = x* because the equilibrium is unique. Hence, x* is a globally asymptotically stable equilibrium in restriction to [0, 1]^n (we already know that it is asymptotically stable).
Theorem 1.7. x ∗ is a globally asymptotically stable equilibrium for the equations of the distillation column in restriction to [0, 1]n (if L(t) ≡ L ∗ , V (t) ≡ V ∗ , Z F (t) ≡ Z ∗F ). 1.2.4. Observability Recall that the feed composition Z F of the column is assumed to be an unknown constant. Therefore, we have to consider the state space e = [0, 1]n+1 , and the extended observed system, of the form x˙ = f Z F (x, u(t)), Z˙ F = 0, y = (x1 , xn ),
(148)
where u(t) = (L(t), V (t)) takes values in U φ, λ. We see that using Equation (130) and knowing x1 (t), we can compute y2 (t) almost everywhere; hence, we can compute x2 (t) everywhere because x2 (t) is absolutely continuous and y2 (t) = k(x2 (t)), where k is monotonous. This can be done provided that V is nonzero. The same reasoning using Equation (131) shows that, if V is nonzero, we can compute x3 (t), . . . , x f (t) for all t ≥ 0. In the same way, using (133) and (134), we can compute xn−1 (t), . . . , x f (t) on the basis of xn (t), for all t > 0, as soon as F + L > 0. Now, we see that we can compute Z F using the data of x f , x f −1 , x f +1 , and Equation (132), provided that F is positive. Therefore, if the set of values of control is U φ, λ, φ > 0, λ > 0, our system is observable. Theorem 1.8. If φ > 0, λ > 0, the extended system (148) is observable for all L ∞ inputs with values in Uφ,λ. (The state space is [0, 1]n+1 or ]0, 1[n+1 .) On the contrary, if L , V, F are all zero, the system is obviously unobservable. Exercise 1.7. Study intermediate situations, where at least one but not all variables L , V, F are identically zero. Remark 1.3. The bad input takes values in the boundary of the physical control set U p = {L ≥ 0, F ≥ 0, V − L ≥ 0, F + L − V ≥ 0}. For this input, there are indistinguishable trajectories entirely contained in the boundary of the physical state space X p = {0 ≤ xi , Z F ≤ 1}. This seems to be a very common situation in practice. Exercise 1.8. Prove that, in fact, if φ > 0, λ > 0, the system is also uniformly infinitesimally observable.
Remark 1.4. The distillation column has the same number of outputs and inputs (2). It is observable, and even uniformly infinitesimally observable. Hence, it is nongeneric from the point of view of observability, by our theory. 1.3. Observers To simplify the exposition, let us assume that n = 2 f − 1. Let us consider the following embedding E : R 2 f → R 2 f +1 : (x1 , . . . , x2 f −1 , Z F ) → X = (X 1 , . . . , X f , Z F ), with
X^1 = (X^1_1, X^1_2) = (x_1, x_{2f−1}),
X^i = (X^i_1, X^i_2) = (x_i, x_{2f−i}),
X^{f−1} = (X^{f−1}_1, X^{f−1}_2) = (x_{f−1}, x_{f+1}),    (149)
X^f = (x_f, x_f), X^{f+1} = Z_F.
Let us set X^{(i)} = (X^1, . . . , X^i). Then, the equation of the column can be rewritten as
Ẋ^1 = F^1(X^{(2)}, L, V),
. . .
Ẋ^i = F^i(X^{(i+1)}, L, V),
. . .
Ẋ^{f−1} = F^{f−1}(X^{(f)}, L, V),    (150)
Ẋ^f = F^f(X, L, V),
Ẋ^{f+1} = 0,
y = X^1.
Moreover, in this equation, we have
∂F^i/∂X^{i+1} = diag(α_i(X^{(i+1)}, u), β_i(X^{(i+1)}, u)), 1 ≤ i ≤ f − 1,
∂F^f/∂X^{f+1} = (α_f, β_f)^T,    (151)
where α_i, β_i, 1 ≤ i ≤ f − 1, are strictly positive functions, and α_f, β_f are strictly positive constants. (In fact, β_i(X^{(i+1)}, u) depends only on u.) Also, we can extend the function k(x) smoothly to all of R, outside the interval [0, 1], so that, for all x ∈ R:
0 < µ < k′(x) < ν < +∞.    (152)
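The text only asserts that an extension of k with the property (152) exists. The sketch below (Python) gives one possible concrete choice: a C^1 extension that is affine outside [0, 1] (a C-infinity version then follows by mollification). The function names and the sample value of α are ours.

```python
def k(x, alpha=2.5):
    """Equilibrium law on [0, 1] (constant relative volatility)."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def k_prime(x, alpha=2.5):
    return alpha / (1.0 + (alpha - 1.0) * x) ** 2

def k_extended(x, alpha=2.5):
    """C^1 extension of k to all of R, affine outside [0, 1].
    Its derivative stays in [k'(1), k'(0)] = [1/alpha, alpha], so (152) holds
    with any 0 < mu < 1/alpha and nu > alpha."""
    if x < 0.0:
        return k_prime(0.0, alpha) * x                  # slope alpha at 0, k(0) = 0
    if x > 1.0:
        return 1.0 + k_prime(1.0, alpha) * (x - 1.0)    # slope 1/alpha at 1, k(1) = 1
    return k(x, alpha)
```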
Therefore, in fact, we get that Equations (150) of the column are defined on all of R n , smooth, and A2 . If u takes values in Uφ,λ , φ, λ > 0, there are positive constants A and B, such that 0 < A ≤ αi , βi ≤ B, A1 . the F i in (150) are globally Lipschitz on R n+2 , with respect to X i , uniformly with respect to u and X i+1 . Let us denote by A, C the following block matrices: ∂ F1 , 0, . . . . . . . . . . . . . , 0 0, ∂ X2 . . f −1 , A= ∂F (153) , 0 0, 0, . . . . . . . . , 0, ∂X f f 0, . . . . . . . . . . . . . , 0, ∂ F ∂ X f +1 0, . . . . . . . . . . . . . . . . . . . . . , 0 C = (I d2 , 0, . . . . . . . . . . . . . . . , 0). A : R 2 f +1 → R 2 f +1 , C : R 2 f +1 → R 2 . Along a trajectory X (t) of the system, set A = A(t). Exercise 1.9. Prove that Lemma 2.1 of Chapter 6 generalizes to this case, i.e., ˜ λ˜ > 0, K : R 2 → R n+2 , S, symmetric positive definite, there exist S, K , λ, such that (A(t) − K C) S + S(A(t) − K C) ≤ −λ˜ I d. Moreover, S, K , λ˜ depend only on φ, λ (the parameters of the set of values of control Uφ,λ ). In this exercise, the special form of the 2 × 2 matrices ∂∂XFii+1 is crucial. Now, Theorem 2.2 and Corollary 2.3 of Chapter 6 also generalize. Consider the system ( O ) with K θ = θ K ,
dX̂/dt = F(X̂, u) − K_θ (C X̂ − y(t)),    (154)
where K_θ = Δ_θ K and Δ_θ is the block-diagonal matrix
Δ_θ = diag(θ Id_2, θ^2 Id_2, . . . , θ^f Id_2, θ^{f+1}).
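A minimal numerical sketch of the gain scaling in (154) follows (Python). The concrete column vector field F, the output matrix C, and the base gain K are left as user-supplied objects; the placeholder values at the end are ours, purely to check shapes, and are not the column model.

```python
import numpy as np

def delta_theta(theta, f):
    """Block-diagonal scaling diag(theta*Id2, ..., theta^f*Id2, theta^(f+1)) of (154)."""
    d = np.concatenate([np.repeat(theta ** i, 2) for i in range(1, f + 1)] + [[theta ** (f + 1)]])
    return np.diag(d)

def observer_step(X_hat, u, y, F, C, K, theta, f, dt):
    """One explicit Euler step of dX^/dt = F(X^, u) - Delta_theta K (C X^ - y)."""
    K_theta = delta_theta(theta, f) @ K
    return X_hat + dt * (F(X_hat, u) - K_theta @ (C @ X_hat - y))

# Shape check with placeholder data (f = 3, state dimension 2f+1 = 7):
f, n_x = 3, 7
C = np.hstack([np.eye(2), np.zeros((2, n_x - 2))])   # C = (Id2, 0, ..., 0)
K = np.ones((n_x, 2))                                 # placeholder base gain
F = lambda X, u: np.zeros(n_x)                        # placeholder dynamics
print(observer_step(np.zeros(n_x), None, np.array([0.7, 0.02]), F, C, K, theta=5.0, f=f, dt=1.0))
```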
Theorem 1.9. The system (154) is an exponential state observer for all L ∞ inputs with values in Uφ,λ , relative to {(x, Z F ) ∈]0, 1[n+1 }. Remark 1.5. In fact, we have more than that: (154) is an U 0,B exponential state observer for (150) relative to any open relatively compact ⊂ R n+2 . However, estimations out of [0, 1]n+2 have no physical meaning. Exercise 1.10. Prove Theorem 1.9. (use the proof of Theorem 2.2 of Chapter 6). There is also the possibility to generalize the high gain EKF to this case. Let us show how. Considering the modified equation of the distillation column (150), remember that X 1i = xi , X 2i = xn−i+1 , i = 1, . . . , f. Set i X˜ 2 = X 2i , i = 1, . . . , f, 3 1 2 X˜ 1 = X 11 , X˜ 1 = k X 12 , X˜ 1 = k X 12 k X 13 , . . . , f −1 f f X˜ 1 = k X 12 . . . k X 1 k X1 , f +1 f +1 f −1 f X˜ 1 = k X 12 . . . k X 1 k X 1 Z F , X˜ 2 = Z F , i i i 1 f +1 ˜ X˜ = X˜ 1 , X˜ 2 , X˜ = ( X˜ , . . . ., X˜ ), X ∈ R n+3 = R 2 f +2 .
(155)
It is easy to check that the equation of the distillation column is now of the following form.
Lemma 1.10. Equations (150) become
dX̃/dt = A(u) X̃ + F̃(X̃, u),
y = X̃^1,    (156)
where A(u) is the block matrix whose only nonzero blocks are the superdiagonal ones: the block (i, i + 1) equals U_1 for 1 ≤ i ≤ f − 1, the block (f, f + 1) equals U_2, and the last block-row is zero, with
U_1 = diag(V, F + L), U_2 = diag(F, F),    (157)
and F˜ is lower block-triangular: the components F˜ 2i−1 ( X˜ , u) and F˜ 2i ( X˜ , u) 1 i depend only on ( X˜ , . . . , X˜ ), 1 ≤ i ≤ f + 1. Moreover, F˜ 2i−1 ( X˜ , u) and F˜ 2i ( X˜ , u), which are well defined on [0, 1]2i can be extended to all of R 2i as C ∞ compactly supported functions. Exercise 1.11. Prove Lemma 1.10. Now, the following theorem holds: Theorem 1.11. The high-gain extended Kalman filter construction extends (in both its continuous-continuous and continuous-discrete forms) to the systems of the form (156), provided that u ∈ Uφ,λ , φ, λ > 0. Exercise 1.12. Prove Theorem 1.11. Unfortunately, Theorem 1.11 is only a theoretical result. Because of the complication and the poor “conditioning” of the change of variables X → X˜ , this last construction did not show very good performances in practice. For a reasonable version of the high gain EKF from the practical point of view, see [64]. 1.4. Feedback Stabilization Before stabilizing by using the outputs only and an observer, we have to stabilize first by state feedback. We already know from Section 1.2.3 that, given target output values y ∗ = (x1∗ , xn∗ ), there is a unique value of the controls u ∗ = (L ∗ , V ∗ ) and a corresponding unique stationary state x ∗ , such the distillation column is asymptotically stable within [0, 1]n at x ∗ if one applies as controls the constant stationary values u ∗ . Moreover, u ∗ is a smooth function ¯ ⊂ [0, 1]3 . Therefore, of (x1∗ , xn∗ , Z F ), defined on a certain open domain D ∗ ∗ given (x1 , xn ), we have a first trivial feedback for the distillation column u ∗1 (Z F ) = u ∗ (x1∗ , xn∗ , Z F ).
(158)
This feedback will be sufficient to solve the second problem posed in Section 1.1.3. However, it does not increase the response time of the column. Let us now briefly discuss how we can construct another feedback u ∗2 (Z F , x1 , . . . , xn ) for this purpose. This feedback is constructed on the basis of u ∗1 using inverse Lyapunov’s theorems, and the classical idea to enhance stability.
Let us reparametrize the controls and set u = (u 1 , u 2 ), u 1 = L , u 2 = V − L . By our assumptions on the domain Uφ,λ , we can assume that 0 < u imin ≤ u i ≤ u imax , i = 1, 2, and that for Z F ∈ [.2, .8], u imin < (u ∗1 )i (Z F ) < u imax . For i = 1, 2, (u ∗1 )i can be extended to a smooth function: R →]u imin , u imax [, still denoted by (u ∗1 )i . Let ϕi : R → R be as follows: ϕi is C ∞ , increasing, is equal to u imin in a neighborhood of −∞, to u imax in a neighborhood of +∞, and to the identity on an interval containing the compact (u ∗1 )i ([.2, .8]). Let us observe also that the equations of the distillation column are controlaffine: x˙ = f (x) + u 1 g1 (x) + u 2 g2 (x) = f (x) + g(x)u,
(159)
and because the feed composition is unknown, Z˙ F = 0
(160)
is an additional equation to be taken into account. The feedback u ∗1 (Z F ) = (L ∗ , V ∗ − L ∗ ) being a smooth function of the unknown Z F , we can find VZ F (x), a family of smooth proper Lyapunov functions defined on open neighborhoods of [0, 1]n , for the vector fields x˙ = f (x) + g(x)u ∗1 (Z F ) = FZ∗F (x). Exercise 1.13. Show that such a family VZ F (x) does exist. This exercise is not very difficult. However, it is a bit technical. Using the result of Exercise 1.6, we can construct explicitly a smooth family of local Lyapunov functions, that is, solving the linear Lyapunov equation, depending smoothly on Z F , at each equilibrium point (which also depends smoothly on Z F ). Second, using the global asymptotic stability of each equilibrium within [0, 1]n , one can modify the arguments in the classical proof of the global Lyapunov’s inverse theorems. For R ≥ 0, let us define the feedback u ∗2 (Z F , x) as follows: (u ∗2 )i (Z F , x) = ϕi (−R L gi VZ F (x) + (u ∗1 )i (Z F )), i = 1, 2.
(161)
If R = 0, the smooth feedback u ∗2 is just the feedback u ∗1 . If R > 0, then the derivative of VZ F along the trajectories of the column is more negative than the derivative along the trajectories corresponding to the (constant) feedback u ∗1 (Z F ), by construction. Hence, with this feedback u ∗2 , the column is also globally asymptotically stable, and in practice we get a faster convergence. It is difficult to handle the Lyapunov function VZ F in practice. For a construction of another feedback based on the same idea but using the (singular)
Lyapunov function V R in place of VZ F , see Ref. [64]. (One cannot use directly this Lyapunov function, because it leads to discontinuous feedbacks. But, it is possible to smooth these feedbacks.)
1.5. Output Stabilization We can use the feedback u ∗1 (Z F ) coupled to the high-gain observer ( O ) of Formula (154). Again, we set Xˆ = (xˆ , Zˆ F ) ∈ R n+2 for the vector of estimates of x = (x1 , . . . , xn ) and Z F . Then, Xˆ = (xˆ , Zˆ F ) = E(xˆ 1 , . . . xˆ n , Zˆ F ), where E is the mapping defined in Section 1.3. We get a set of equations of the form (i)
dX̂/dt = F(X̂, u*_1(Ẑ_F)) − K_θ (C X̂ − y(t)),
(ii) dx/dt = F̃(x, u*_1(Ẑ_F)).    (162)
Under this control u*_1(Ẑ_F)(t), we know that x stays in [0, 1]^n because (u*_1)_i(Ẑ_F)(t) ∈ ]u_i^min, u_i^max[ (see Section 1.2.1). Any semitrajectory {x(t); t ≥ 0} is bounded (stays in [0, 1]^n). Set ε = X̂ − E(x, Z_F). By Theorem 1.9, ε goes exponentially to zero. Hence, any semitrajectory {ζ(t) = (X̂(t), x(t)); t ≥ 0} of (162) starting from R^{n+2} × [0, 1]^n is bounded in R^{2n+2}. The ω-limit set of ζ(t) is nonempty. The set {ε = 0} is an invariant manifold, and the dynamics on {ε = 0} is exponentially stable (Exercise 1.6). Therefore, the whole system (162) is asymptotically stable within R^{n+2} × [0, 1]^n, with the usual arguments.
2. Polymerization Reactors This second application concerns “polymerization reactors.” The main difference with the previous example of distillation columns is that the “output stabilization problem” deals with (exponentially) unstable equilibria. In practice, we are concerned with styrene polymerization, which is very common. For details, we refer to paper [65] and to thesis [63].
2.1. The Equations of the Polymerization Reactor The kinetics of radical polymerization is well known. Here, we consider a “stirred tank reactor” in which “free radical polymerization” takes place. The reactor is continuously fed by monomer, initiator, and solvent. The growth of monomer molecules into polymer chains is induced by “free radicals,” which are generated by the initiator decomposition. Schematically, a certain number of reactions occur, as: initiator decomposition, initiation, propagation, chain transfer, and chain termination. A certain number of more or less realistic assumptions are made, that we mention for people who know something about polymerization: equal reactivity, quasi steady-state for all radical species. The equations are obtained just by writing a mass balance of monomer, solvent, and initiator. The polymer is obtained by difference. The variables W M , W S , W I denote the weight fractions of monomer, solvent, and initiator in the reactor. The variables W M F , W S F , W I F denote the same weight fractions in the feed of the reactor. We get the usual equations. Mass balance: (1)
dW_M/dt = (Q_MF/(ρV))(W_MF − W_M) − (k_p + k_ttm) W_M C_R − 2 f k_d W_I M_M/M_I,
(2) dW_S/dt = (Q_MF/(ρV))(W_SF − W_S) − k_tts W_S C_R,    (163)
(3) dW_I/dt = (Q_MF/(ρV))(W_IF − W_I) − k_d W_I.
Here Q M F is the mass flow of the feed, V is the volume of the reactor, C R is the total concentration of radical chains, and f is the “initiator efficiency” (a constant). The constants M M , M S , M I are the molecular weights of monomer, solvent, and initiator. The variables k p , kttm , ktts , kd are the kinetic rates, and
they depend on the temperature T in the reactor. (The subscripts p, ttm, . . . mean "propagation," "transfer to monomer," . . . ). Moreover, we have
(a) 1/ρ = W_M/ρ_M + W_S/ρ_S + W_P/ρ_P = W_M/ρ_M + W_S/ρ_S + (1 − W_M − W_S)/ρ_P,    (164)
(b) C_R = ( f k_d C_I / (k_tc + k_td) )^{1/2}, C_I = ρ W_I / M_I.
Here, ρ_M, ρ_S, ρ_P are the densities of monomer, solvent, and polymer (constants). The variable C_I is the molar concentration of initiator.
Additionally, we have the thermal balance, expressing the heat exchange with a cooling flow. Heat balance:
dT/dt = (Q_MF/(ρV))(C_PF/C_P)(T_F − T) + (U/(ρV C_P))(T_heat − T) − k_p C_R W_M H/(M_M C_P).    (165)
The variables C_P, C_PF are the heat capacities of the reacting mixture and the feed (assumed to be constants, for simplicity). The feed temperature is denoted by T_F, and U is the heat transfer coefficient (both are constants); T_heat is the temperature of the cooling flow (one of the control variables, later on). Also, H denotes the global reaction heat for the polymerization reaction (a negative constant).
The next (and last) step is the characterization of polymer chains. The length of polymer chains (the quality of the products of the reaction; the length of the polystyrene molecules in our case) is usually characterized by the leading moments
λ_0 = (1/ρ) Σ_{x=1}^{∞} C_Px, λ_1 = (1/ρ) Σ_{x=1}^{∞} x C_Px, λ_2 = (1/ρ) Σ_{x=1}^{∞} x^2 C_Px.    (166)
Here, x is the chain length and C_Px is the concentration in the reacting mixture of dead polymer of length x. One has, by difference on the mass balance,
λ_1 = (1 − W_M − W_S)/M_M.    (167)
In place of λ_0, λ_2, chemical engineers prefer to consider the following two quantities, M_n and PD, called respectively the "number average molecular
weight" and the "polydispersity":
M_n = M_M λ_1/λ_0 = (1 − W_M − W_S)/λ_0,
PD = λ_0 λ_2 M_M^2 / (1 − W_M − W_S)^2.    (168)
These three quantities, λ_1, M_n, and PD, characterize the quality of the polymer produced by the reactor. As we saw, λ_1 is a direct expression of the state variables W_M, W_S of the set of Equations (163). The variables λ_0 and λ_2 satisfy
dλ_0/dt = (Q_MF/(ρV))(λ_0F − λ_0) + Φ_0/ρ,
dλ_2/dt = (Q_MF/(ρV))(λ_2F − λ_2) + Φ_2/ρ,    (169)
where λ_0F and λ_2F are constants, and Φ_0 and Φ_2 are smooth functions of the state variables (W_M, W_S, W_I, T) in (163) and (165). For the detailed expressions of Φ_0, Φ_2, one can consult [63] or [65].
2.2. Assumptions and Simplifications of the Equations
1. All the kinetic constants (k_d, k_ttm, . . .), depending on the temperature of the reaction, have the Arrhenius form
k = k_0 e^{−E/(RT)}.    (170)
2. Engineers usually want to go to a certain equilibrium by acting on the two control variables: u 1 = W I F and u 2 = Theat . Then, u 1 is the initiator concentration in the feed, and u 2 is the cooling temperature. All the other variables are constant (such as W M F and W S F , fractions of monomer and solvent in the feed). They depend on the target steady-state only. Because the control is two-dimensional, u = (u 1 , u 2 ), it is reasonable to fix two output variables at the steady states. Usually, they are y¯ 1 = T, the reaction temperature, and y¯ 2 = χ = 1 − WWMMF . In practice, χ is called the “conversion rate.” It represents the ratio of monomer actually converted in polymer. The point is that, here the observations will not be the same variables as the outputs to be controlled: They are: y = (y1 , y2 ) = (T, ρ1 ). So that, we are in the situation of a two inputs, two outputs (to be driven to a “set value”), two observations, control system.
The state is 6-dimensional: x¯ = (W M , W S , W I , T, λ0 , λ2 ). The six equations are the mass balance (163), the heat balance (165), and the equations of the moments (169). In fact, this set of six equations is triangular, as one can see. The evolution of the first part of the state x = (W M , W S , W I , T ) does not depend on λ0 , λ2 . Otherwise, the two output variables, y¯ 1 = T and y¯ 2 = χ, and also the observed variables, y1 = T and y2 = ρ1 do not depend on λ0 , λ2 . This means that λ0 , λ2 are “unobservable” variables. We can just forget them: This means that we quotient the system by the “trivial foliation” , which is not trivial in that case (see Section 5 of Chapter 2). If we don’t care about the behavior of λ0 , λ2 , this is perfectly correct. In fact, it happens that after asymptotically stabilizing the first part x of the state, the second part λ = (λ0 , λ2 ) goes to a uniquely determined equilibrium. This follows from the set of equations describing the reactor: If we have a stabilizing feedback for the first part x of the state, then the full system is also asymptotically stable (this is not obvious, but can be proven). Exercise 2.1. Prove the last statement (making reasonable assumptions on the functions 0 , 2 ). Therefore, the equations of the reactor reduce to the set of four equations for x, given by (163) and (165). The full treatment of these equations with our theory is shown in [65]. However, it is a bit tricky. Here, for the sake of clarity and simplicity of exposition, we will make one more simplification, reasonable in the case of the styrene reactor. We will assume that Equation (163), part (1) reduces to (1 )
dW_M/dt = (Q_MF/(ρV))(W_MF − W_M) − k_p W_M C_R.    (171)
It means that in Equation (163, 1), the effect of the kinetic coefficients kttm and kd (transfer to monomer and initiator decomposition) is negligible with respect to that of k p (propagation). It is what is expected of the reaction: to produce long chains. See [63] for details. Therefore, at the end, we get the final set of equations that we will consider to describe the polymerization
reactor:
(1) dW_M/dt = (Q_MF/(ρV))(W_MF − W_M) − k_p W_M C_R,
(2) dW_S/dt = (Q_MF/(ρV))(W_SF − W_S) − k_tts W_S C_R,
(3) dW_I/dt = (Q_MF/(ρV))(W_IF − W_I) − k_d W_I,    (172)
(4) dT/dt = (Q_MF/(ρV))(C_PF/C_P)(T_F − T) + (U/(ρV C_P))(T_heat − T) − k_p C_R W_M H/(M_M C_P).
The additional relations (164) give the expressions of ρ and C R in terms of the other variables (W M , W S , W I , T ). The control variables are u = (u 1 , u 2 ) = (W I F , Theat ), the variables to be controlled (i.e., driven to an expected stationary value) are y¯ = ( y¯ 1 , y¯ 2 ) = (W M , T ), and the observed variables are y = (y1 , y2 ) = (T, ρ1 ). The state x = (W M , W S , W I , T ) belongs to X = [0, 1]3 × R + . The equations of the reactor are control affine: x˙ = f (x) + g1 (x)u 1 + g2 (x)u 2 .
(173)
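As a concrete illustration of (172) together with the algebraic relations (164) and the Arrhenius form (170), the following Python sketch evaluates the right-hand side of the reduced reactor model. The parameter dictionary mirrors the numerical values given later in (174); the function names are ours, and the final evaluation point simply uses the target values of (175)-(176).

```python
import numpy as np

# Numerical values of (174); kinetic rates follow the Arrhenius form (170).
P = dict(R=8.32, ktd=0.0, QMF=0.1, V=1.0, f=0.6, MI=0.164, MM=0.104,
         rhoM=830.0, rhoS=790.0, rhoP=1025.0, U=950.0, WMF=0.9, WSF=0.1,
         CP=1855.0, CPF=1978.0, TF=288.15, H=-74400.0)

def rates(T, p=P):
    """Kinetic 'constants' k = k0 * exp(-E/(R T)), cf. (170) and (174)."""
    RT = p["R"] * T
    return dict(kd=1.58e15 * np.exp(-128.0e3 / RT),
                kp=1.051e4 * np.exp(-29.54e3 / RT),
                ktts=5.58e5 * np.exp(-71.8e3 / RT),
                ktc=0.6275e6 * np.exp(-7.026e3 / RT))

def reactor_rhs(x, u, p=P):
    """Right-hand side of (172) with the algebraic relations (164).
    State x = (W_M, W_S, W_I, T); control u = (W_IF, T_heat)."""
    WM, WS, WI, T = x
    WIF, Theat = u
    k = rates(T, p)
    rho = 1.0 / (WM / p["rhoM"] + WS / p["rhoS"] + (1.0 - WM - WS) / p["rhoP"])  # (164a)
    CI = rho * WI / p["MI"]                                                      # (164b)
    CR = np.sqrt(p["f"] * k["kd"] * CI / (k["ktc"] + p["ktd"]))                  # (164b)
    q = p["QMF"] / (rho * p["V"])
    dWM = q * (p["WMF"] - WM) - k["kp"] * WM * CR
    dWS = q * (p["WSF"] - WS) - k["ktts"] * WS * CR
    dWI = q * (WIF - WI) - k["kd"] * WI
    dT = (q * (p["CPF"] / p["CP"]) * (p["TF"] - T)
          + p["U"] / (rho * p["V"] * p["CP"]) * (Theat - T)
          - k["kp"] * CR * WM * p["H"] / (p["MM"] * p["CP"]))
    return np.array([dWM, dWS, dWI, dT])

# Evaluate the vector field near the operating point of (175)-(176):
print(reactor_rhs(np.array([0.7, 0.0999997, 0.03837, 330.0]), (0.0413, 319.38)))
```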
In practice, the assumption of a continuous measurement for T and ρ is reasonable. 2.3. The Problems to Be Solved First Problem. Observation (estimation of the full state x). Second problem. Given target values for y¯ , asymptotically stabilize the reactor at an equilibrium x ∗ corresponding to y¯ . Do this using the observations y only. If possible, fix the “response time.” Depending on the operating conditions (mainly depending on χ, the “monomer conversion rate”), the reactor has equilibria with different behaviors: 1. Low monomer conversion rate χ. These equilibria are (locally) asymptotically stable (exponentially).
2. High monomer conversion rate χ. These equilibria also are locally asymptotically stable. From the economic point of view, they would be the most interesting because χ just expresses the quantity of monomer that is transformed into polymer. However, they are not reasonable targets, because catastrophic phenomena occur, such as solidification, due to the high viscosity of the reacting mixture. 3. Medium monomer conversion rate χ. Unfortunately, these equilibria are unstable. Our theory shows the possibility of stabilizing asymptotically within large domains at these equilibria of type (3). This could be very interesting in
practice. In practice, the users are already much interested by solving Problem 1 (just observation).
2.4. Properties of the Equations (172) 2.4.1. The Equilibria The target values of W M and T > 0 being chosen, at equilibria, Equations (172) give (1) 0 = W M F − W M −
(k_p W_M V/Q_MF) ρ C_R.
Here k p is a function of T, hence, ρC R is determined uniquely by the knowledge of T and W M . If 0 < W M < W M F (which will be the case), then ρC R > 0. (4) 0 =
(C_PF/C_P)(T_F − T) + (U/(Q_MF C_P))(T_heat − T) − (k_p W_M H V/(M_M C_P Q_MF)) ρ C_R.
Now, ρC R , W M , T being known, Theat is determined uniquely (k p is a function of T, and other variables are constants). (2) 0 = W S F − W S −
(k_tts V/Q_MF) ρ C_R W_S.
This equation determines W S uniquely, because ρC R > 0. We find 0 < WS < WS F . Now, W S and W M being known, ρ is determined uniquely by Equation (164a), and hence C R is determined uniquely. Then, W I > 0 is determined uniquely by (164b). (3) 0 = W I F − W I −
(k_d ρ V/Q_MF) W_I.
This last equation determines uniquely the control W_IF. We have shown the following lemma.
Lemma 2.1. If the target stationary value ȳ* of ȳ = (W_M, T) is given, then there is a unique corresponding equilibrium x* of x and a unique corresponding value u* of the stationary control u = (W_IF, T_heat). Moreover, for reasonable values of all the other constants and of ȳ*, the corresponding values of x* are reasonable (i.e., W_M, W_S, W_I, W_IF are between 0 and 1, T and T_heat are larger than 0, ρ is > 0, . . .).
For instance, let us choose a target stationary value corresponding to a medium conversion rate, i.e., such that the corresponding equilibrium is unstable, as we shall see. To do this, we have to give the values of all constants in the equations:
R = 8.32, k_td = 0, k_d = 1.58·10^15 e^{−128·10^3/(RT)}, k_p = 1.051·10^4 e^{−29.54·10^3/(RT)},
k_tts = 5.58·10^5 e^{−71.8·10^3/(RT)}, k_tc = 0.6275·10^6 e^{−7.026·10^3/(RT)},
Q_MF = 0.1, V = 1, f = 0.6, M_I = 0.164, M_M = 0.104, ρ_M = 830, ρ_S = 790, ρ_P = 1025,
U = 950, W_MF = 0.9, W_SF = 0.1, C_P = 1855, C_PF = 1978, T_F = 288.15, H = −74400.    (174)
Let us choose the target values ȳ* = (W_M*, T*) as
ȳ* = (W_M*, T*) = (0.7, 330).    (175)
Then, the computations of Lemma 2.1 above give the corresponding values for the stationary state x* and the stationary control u* = (W_IF*, T_heat*):
x* = (W_M*, W_S*, W_I*, T*) = (0.7, 0.0999997, 0.03837, 330),
u* = (W_IF*, T_heat*) = (0.0413, 319.38).    (176)
The following observations will be crucial: Let us consider the equations of the reactor (172) at isothermal and isoinitiator operation, i.e., the two first equations in which the temperature is constrained at the value T ∗ , the equilibrium temperature, and W I is constrained at the value W I∗ , the equilibrium initiator concentration. Linearizing this set of two equations at the equilibrium gives the following second set of eigenvalues:
E 2 = {−0.0001429, −0.0001165}. Thus, the following holds: ∗ , T ∗ are fixed to the above values. Observation 1: The target outputs W M In isothermal and isoinitiator operation, the reactor is asymptotically stable at the corresponding equilibrium. Also, we can check on the linearized equations that the quadratic function VL (W M , W S ) =
1 ∗ 2 ((W M − W M ) + (W S − W S∗ )2 ) 2
(178)
is a quadratic Lyapunov function for the linearized Equations (172, (1) and (2)), (in isothermal and isoinitiator operation). In fact, there is more than that, as we shall see immediately. First, let us go back to the complete equations of the reactor in isothermal and isoinitiator operation: QMF dW M = (W M F − W M ) − k p W M C R , dt ρV QMF dW S = (W S F − W S ) − ktts W S C R , (2) dt ρV
(1)
with
(179)
WM WS WP WS 1 − WM − WS WM + + + + = , ρM ρS ρP ρM ρS ρP ρW I f kd C I (b) C R = , CI = , ktc + ktd MI (180) and T = T ∗ , W I = W I∗ . It is easy to check that the domain, 1 = (a) ρ
ISO = {(W M , W S ); 0 ≤ W M ≤ W M F , 0 ≤ W S ≤ W S F }, (181) is a positively invariant domain for (179). Also, one can consider the quadratic function VL (W M , W S ) defined in (178) above.
Figure 2. The derivative of VL along the trajectories, in isothermal and isoinitiator operation.
Observation 2: The target outputs W_M*, T* are fixed to the above values. In
isothermal and isoinitiator operation, the function VL , which is a Lyapunov function of the linearized at the equilibrium, is also a Lyapunov function of the nonlinearized system, in restriction to the domain ΩISO . (Here, by “Lyapunov function, we mean strict Lyapunov function”, i.e., the derivative of VL along trajectories is strictly negative, except for the equilibrium point trajectory.) Unfortunately, to show that this observation holds requires numerical investigation (easy). It is the reason why we gave the values (174) of all constants in the equations of the reactor. Figure 2 shows the graph of the function d VL ∗ ∗ dt (x, y), where x = W M − W M , y = W S − W S , in restriction to the domain I S O . Therefore, using the fact that I S O is a compact positively invariant domain, we can state the following proposition:
Proposition 2.2. The target outputs W_M*, T* are fixed to the above numerical values. The equations of the reactor in isothermal and isoinitiator operation are asymptotically stable within Ω_ISO.
Remark 2.1. Notice that to prove this proposition, one has to use Lasalle’s invariance principle: The function VL is not proper on the open domain Int( I S O ).
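The "easy numerical investigation" behind Observation 2 and Figure 2 can be sketched as follows (Python; constants from (174), target values from (175)-(176), function names ours): one evaluates the derivative of V_L along (179)-(180) on a grid covering Ω_ISO and checks its sign away from the equilibrium.

```python
import numpy as np

P = dict(R=8.32, ktd=0.0, QMF=0.1, V=1.0, f=0.6, MI=0.164,
         rhoM=830.0, rhoS=790.0, rhoP=1025.0, WMF=0.9, WSF=0.1)
T_star, WI_star, WM_star, WS_star = 330.0, 0.03837, 0.7, 0.0999997   # from (175)-(176)

RT = P["R"] * T_star
kd, kp = 1.58e15 * np.exp(-128.0e3 / RT), 1.051e4 * np.exp(-29.54e3 / RT)
ktts, ktc = 5.58e5 * np.exp(-71.8e3 / RT), 0.6275e6 * np.exp(-7.026e3 / RT)

def iso_rhs(WM, WS):
    """Isothermal / isoinitiator dynamics (179)-(180) with T = T*, W_I = W_I*."""
    rho = 1.0 / (WM / P["rhoM"] + WS / P["rhoS"] + (1.0 - WM - WS) / P["rhoP"])
    CR = np.sqrt(P["f"] * kd * (rho * WI_star / P["MI"]) / (ktc + P["ktd"]))
    q = P["QMF"] / (rho * P["V"])
    return q * (P["WMF"] - WM) - kp * WM * CR, q * (P["WSF"] - WS) - ktts * WS * CR

worst = -np.inf
for WM in np.linspace(1e-3, P["WMF"], 60):
    for WS in np.linspace(1e-3, P["WSF"], 60):
        dWM, dWS = iso_rhs(WM, WS)
        dV = (WM - WM_star) * dWM + (WS - WS_star) * dWS   # dV_L/dt for (178)
        worst = max(worst, dV)
print("max of dV_L/dt over the grid:", worst)   # expected <= 0, up to grid resolution
```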
2.4.2. Invariance
Note first that, at points of the form W_I = 0, there is a singularity, because of the presence of the square root in the expression of C_R, Equation (180b). This difficulty will be overcome by restricting to (arbitrarily large) positively invariant domains contained in the "physical" domain.
Let us first go back to the equation of ρ, (180a):
1/ρ = W_M (1/ρ_M − 1/ρ_P) + W_S (1/ρ_S − 1/ρ_P) + 1/ρ_P.    (182)
In practice, it is clear that the density of the polymer is larger than the density of the monomer and of the solvent. See the numerical values (174) above. Hence, if 0 ≤ W_M, W_S ≤ 1, then ρ > 0.
Now, W_MF and W_SF are fixed constants, 0 < W_MF, W_SF < 1. Let us consider for the restricted state space any subset Ω ⊂ R^4 of the form
Ω = [0, W_MF] × [0, W_SF] × [ε_I, 1] × [T_min, T_max],    (183)
and for the restricted set of values of the control u = (W_IF, T_heat), a set U of the form U = [ε_1, 1] × [T_min, T_max], with
(P): ε_I, ε_1, ε_I/ε_1, T_min are > 0, small, and T_max is large.
By Lemma 2.1, the data of the target ȳ* = (W_M*, T*) determines uniquely the target equilibrium x* and the target stationary control u* = (W_IF*, T_heat*). If (P) is satisfied, then x* ∈ Int(Ω), u* ∈ Int(U).
Remark 2.2. Notice that this "restricted state space" is in fact very big, and that physically, the state lives in Ω.
Lemma 2.3. If (P) is satisfied, then the restricted state space Ω is positively invariant under the dynamics (172). On the restricted state space, the system is smooth.
Exercise 2.2. Prove Lemma 2.3. For the exercise, use the fact that H in (174) is negative.
2.4.3. Observability
As we said above, the observed output is y = (y_1, y_2) = (T, 1/ρ). The inputs are still u = (W_IF, T_heat). The equations under consideration are (172) and (164), on the compact domain Ω defined above (183). The set of values of
control is U = [ε_1, 1] × [T_min, T_max]. These equations are analytic on Ω × U, hence they are analytic on a small open neighborhood Ω̃ × Ũ of Ω × U.
Let us consider the following mapping Φ : Ω → R^4,
Φ(T, W_M, W_S, W_I) = (T, r = 1/ρ, λ = −k_p C_R W_M H/(M_M C_P), µ = d(1/ρ)/dt).    (184)
Note the important fact that d(1/ρ)/dt is well defined, by (182), and does not depend on the controls (W_IF, T_heat). Hence, Φ is a well-defined smooth mapping on Ω, analytic on an open neighborhood Ω̃ of Ω.
Exercise 2.3. Show that Φ is a diffeomorphism in restriction to Int(Ω).
Now, let us write the equations of the reactor in the new coordinates Φ(T, W_M, W_S, W_I) = (T, r, λ, µ) = X. The important observation is that the equation of T in the new coordinates is
dT/dt = ϕ_0(T, r, u_2) + λ.    (185)
Therefore, in the new coordinates, the reactor system becomes
(Σ_rea)   Ṫ = ϕ_0(T, r, u_2) + λ, ṙ = µ, λ̇ = ϕ_1(X, u), µ̇ = ϕ_2(X, u).    (186)
The observed outputs being T and r, it is easily shown that the system Σ_rea has the following properties:
Lemma 2.4. Σ_rea, defined on the subset Φ(Int(Ω)) ⊂ R^4, is observable and uniformly infinitesimally observable.
Exercise 2.4. Prove the previous Lemma 2.4.
Note: It is clear that the functions ϕ_0, ϕ_1, ϕ_2, defined on open subsets of R^3, R^6, R^6, respectively, can be extended as smooth compactly supported functions to all of these vector spaces. Given an arbitrarily large compact subset K in Int(Ω), this can be done without perturbing them on Φ(K) × U. In the following, these extensions will be denoted again by ϕ_0, ϕ_1, ϕ_2, so that we can assume that the reactor system is globally in the form (186), with compactly supported functions ϕ_i, i = 0, 1, 2.
2.5. Observers It follows from the canonical form (186) of Lemma 2.4, and from the note after, that all of the methods of Chapter 6 can be applied, in order to construct exponential observers for the polymerization reactor, for all U -valued measurable inputs. In fact, again, we are in a case where d y = du but the system rea is observable in all possible strong senses. The theory of Chapter 6 can be applied to this case. We leave it to the reader. All the types of construction of observers we proposed apply. Exercise 2.5. There is a need of a small adaptation for the Luenberger style observer for uniformly infinitesimally observable systems, (Theorem 2.2 of Chapter 6), because we are not in the case d y = 1. Make this adaptation. 2.6. Feedback Stabilization Using Proposition 2.2, we will be able to construct feedbacks that stabilize asymptotically the reactor within “reasonably large” subdomains of the restricted state space . Let us consider the following feedbacks: ρ(W M , W S )V kd (T ) W I∗ − α(W I − W I∗ ) , W IαF = ϕ 1 + QMF (187) QMFCP β ∗ ∗ T − β(T − T ) + A , Theat = ψ 1 + U where A=−
QMFCPF H V TF + ρk p C R W M , U U MM
(188)
where α and β are real numbers, and ϕ and ψ, smooth, satisfy: ϕ : R → R, ϕ(θ ) = θ if 2ε1 ≤ θ ≤ 1 − ε1 , ∂ϕ ≥ 0 ∀θ, ∂θ ψ : R → R, ψ(θ) = θ if Tmax − ε2 ≥ θ ≥ Tmin + ε2 ,
ϕ(θ ) = ε1 if θ ≤ ε1 , ϕ(θ) = 1 if θ ≥ 1,
ψ(θ) = Tmin if θ ≤ Tmin , ψ(θ) = Tmax if θ ≥ Tmax , ∂ψ ≥ 0 ∀θ, ε2 small. ∂θ β
For x ∈ , W IαF (x) ∈ [ε1 , 1] and Theat (x) ∈ [Tmin , Tmax ]. Notice also that β if x = x ∗ , then W IαF = W I∗F , Theat = T ∗ .
Hence, by construction, the equations of the reactor controlled by the feedback just constructed are, in a neighborhood Ux ∗ of the equilibrium x ∗ : (1)
dW_M/dt = (Q_MF/(ρV))(W_MF − W_M) − k_p W_M C_R,
(2) dW_S/dt = (Q_MF/(ρV))(W_SF − W_S) − k_tts W_S C_R,
(3) d(W_I − W_I*)/dt = −(k_d + (1 + α) Q_MF/(ρV)) (W_I − W_I*),    (189)
(4) (ρV C_P/U) d(T − T*)/dt = −(1 + Q_MF C_P/U + β) (T − T*).
(191)
¯ ⊂ , there is no additional restriction on W M , W S , For the domain the maximum of W I is more than twice the stationary value (and is high, physically), and the temperature T is in a large domain around T ∗ (Tmax is 20 degrees more). If we show that the feedback controls take values in U on this domain ¯ and if these values are such that the functions ϕ(θ) andψ(θ) are just ,
¯ is positively invariant for the reactor under equal to θ, it will follow that feedback. We already know that is positively invariant, and the equations of the reactor under feedback show that the two last components of the state, W I ¯ and T , cannot go out the bounds just given in the definition (190) of . Let us start with the feedback W I0F . Clearly, W I0F is larger than W I∗ (kd and ρ are positive functions on , Q M F and V are positive constants). ¯ The The function kd is increasing, majorized by kd (350.), and ρ ≤ ρ P on . numerical values in (174) show that: kd(350.)ρ P V W I∗ = 0.089. W I0F ≤ 1 + QMF ¯ 2ε1 < W 0 (x) < 0.1. The same is true for W α We conclude that on , IF IF if α is sufficiently small. 0 ≤ ψ(T ∗ (1 + Q M F C P )). On the other hand, the quantity −A in Now, Theat U (188) above is positive and can be majorized as follows: −A ≤ −A∗ =
WIm ax
QMFCPF TF U
H V f kd − ρ P k p (350.)W M F ρ P WIm ax (350.), U MM MI ktc = 0.1.
0 is minored by ψ(A∗ + T ∗ (1 + Q MUF C P )). The quantity And Theat QMF CP ∗ + T (1 + U ) is computed using the numerical values (174), and β is strictly positive. Therefore, for β small enough, Theat is between Tmin + ε2 and Tmax − ε2 , for Tmax large, Tmin , ε2 > 0, small. We have shown:
A∗
Proposition 2.5. (For the numerical values (174), and for α, β small enough). ¯ is positively invariant for the reactor under feedback (189), The domain ¯ and the feedback takes values in U on . Now, one can use again Proposition (2.2) in order to show the following: Proposition 2.6. (For the numerical values (174) and for α, β small enough.) ¯ for the reactor under The equilibrium x ∗ is asymptotically stable within , feedback (189). For other more complicated feedbacks, in the more practical situation of the nonsimplified reactor, see paper [65] and thesis [63].
2.7. Dynamic Output Stabilization. First, let us notice that we are not in the good situation of Section 3 of Chapter 6. However, we are in the context of Chapter 7, Theorem 1.1 or Theorem 2.3. Exercise 2.7. Make the necessary adaptations of the proof of Theorem 1.1, Chapter 7 for it applies to the situation of the reactor (we are in the case d y ≤ du , but d y = 1). The result is that it is possible to stabilize asymptotically the polymerization reactor, using output informations only, at unstable equilibria, within ¯ large sets (arbitrarily large compacta in Int( )).
Remark 2.3. In fact, in practice, we observe that the basin of attraction B of ¯ We can asymptotix ∗ for the reactor under feedback is much larger than . cally stabilize using the observer within any compact subset of B ∩ Int( ).
Appendix
1. Subanalytic Sets For details on this section, see [5, 6, 25, 45]. 1.1. Subanalytic Sets and Subanalytic Mappings Let M be a real analytic manifold. A subset S ⊂ M is called subanalytic if any point p ∈ M has an open neighborhood U , such that either U ∩ S is empty or N ( f i (Ai )\gi (Bi )), U ∩ S = ∪i=1
where A1 , . . . , A N , B1 , . . . , B N are analytic manifolds and f i : Ai → M, gi : Bi → M, 1 ≤ i ≤ N , are proper analytic mappings. Because the structure of S depends on M, we shall always mention it when we speak about the subanalytic sets. Notation: (S, M). Examples of such sets are: (i) the closed analytic subsets of M, (ii) complements of such sets, (iii) polyhedra in Euclidean spaces. Subanalytic mappings: Let (Si , Mi ), i = 1, 2, be two subanalytic sets and f : S1 → S2 a continuous mapping. We say that f is subanalytic if its graph G( f ) considered as a subset of M1 × M2 is subanalytic in M1 × M2 . 1.2. Stability Properties of Subanalytic Sets M, N will denote analytic manifolds. Sub(M), (resp. Sub(N )) will denote the family of all subanalytic subsets of M (resp. N ). Then 1. If S, T ∈ Sub(M); S ∪ T ∈ Sub(M), S ∩ T ∈ Sub(M), S\T ∈ Sub(M). If S ∈ Sub(M), T ∈ Sub(N ), S × T ∈ Sub(M × N ), 2. If S ∈ Sub(M) and S¯ is the topological closure of S in M, S¯ ∈ Sub(M); ¯ of S in M is in Sub(M), from 1, it follows that the frontier S\S 3. Each connected component of an S ∈ Sub(M) belongs to Sub(M). The family of the connected components of S is locally finite (i.e., any 179
point p ∈ M has a neighborhood meeting only a finite number of these components), 4. If S ∈ Sub(M) and f : S → N is subanalytic (N ∈ Sub(N ) !), then, if f is proper, f (S) ∈ Sub(N ), 5. If f is proper, S ∈ Sub(N ), then f −1 (S) ∈ Sub(M). 1.3. Singular Points of Subanalytic Sets, Dimension Let M be an analytic manifold, of dimension m. Given S ∈ Sub(M), a point p is called regular if it has an open neighborhood O such that O ∩ S is a closed analytic submanifold of O. We denote by Reg(S) the set of all regular points of S, and by Sing(S) its complement in S. The points in Sing(S) are called singular. Reg(S) is open in S and is an analytic submanifold of M. Its dimension (i.e., the maximum dimension of its connected components) will be called the dimension of S. It coincides with the Hurwicz–Wallman dimension of S as a topological space. Proposition 1.1. (Theorem 7.2 p. 37, [5]) If S ∈ Sub(M), Sing(S) is a subanalytic set in M and is closed in S. Reg(S) is also subanalytic in M. dim(Sing(S)) < dim(S) always. (dim(∅) = −1). 1.4. Whitney Stratifications Let M be an analytic manifold, countable at infinity. Let S ∈ Sub(M). Theorem 1.2. There exists a partition P of S such that: (i) Each P ∈ P is a real analytic connected submanifold in M, and is subanalytic in M, (ii) P is locally finite in M (i.e., if p ∈ M, there exits a neighborhood U of p such that {A ∈ P | A ∩ U = ∅} is finite, (iii) For any A, B ∈ P such that A¯ ∩ B is nonempty, B is in fact contained ¯ in A, (iv) For any A, B ∈ P such that A¯ ⊃ B, (A, B) is a Whitney pair. Whitney pair (X, Y ) at y ∈ Y : let X, Y be connected submanifolds in M, and y a point in Y such that y ∈ X¯ . We say that (X, Y ) is a Whitney pair at y if there exists a chart (, ψ) of M at y having the following property: r Given any sequences (x ), (y ) such that (α) (x |n ∈ N ) ∈ ∩ X, n n n
(yn |n ∈ N ) ∈ ∩ Y, xn = yn for all n ∈ N , xn → y, yn → y as
n → +∞, (β) the lines R(ψ(yn ) − ψ(xn )) converge to a line l, (γ ) the vector spaces (T ψ(Txn X ); n ∈ N ) converge to a vector subspace T of R m (the convergence is in the Grassmannian G(m, p), p = dim(X ), m = dim(M)); then, T ⊃ l. A couple (X, Y ) is called a Whitney pair if X¯ ⊃ Y and (X, Y ) is a Whitney pair at all points of Y. Let M, N be analytic manifolds both countable at infinity. Let S ∈ Sub(M), T ∈ Sub(N ), and f : S → T be a subanalytic mapping. We assume that f is proper. Theorem 1.3. There exist partitions P S , PT of S and T , respectively, satisfying the conditions (i), (ii), (iii), and (iv) of Theorem 1.2, and such that for any X ∈ P S , there exists a Y ∈ PT containing f (X ), and the restriction f |X is a C ω submersion. 1.5. Conditions a and b The property of a couple (X, Y ) at y in the definition of a Whitney pair has been called by Whitney: condition b. He has also introduced a weaker condition, condition a as follows: r Given a pair (X, Y ) of connected C 1 manifolds and a point y ∈ Y ∩ X¯ ,
we say that the pair (X, Y ) satisfies the condition a at y if there exists a chart (, ψ) of M at y with the following property: Given any sequence (xn |n ∈ N ) such that (α) (xn |n ∈ N ) ⊂ ∩ X, xn = y for all n ∈ N , and xn → y an n → +∞, (β) the vector spaces {T ψ(Txn X ) | n ∈ N } converge to a vector subspace T of R m , then T ⊃ T ψ(Ty Y ). It is known and easy to see that condition b implies condition a (but not vice versa). It is also easy to see that these conditions are independent of the chart (, ψ): if one is true for a chart, it is true for all the others. In fact, at the cost of introducing additional concepts, one can state conditions a and b independently of the choice of any chart. 2. Transversality 2.1. The Concept of Transversality Let r ∈ N or r = ∞ or r = ω. Let M, P be two C r manifolds, and N be a C r submanifold of P. Except explicit mention of the contrary, we allow both M and N to have corners. Denote by f : M → P a C r mapping.
Definition 2.1. (i) let x ∈ M. Then f is said to be transversal to N at x if either f (x) ∈ /N or f (x) ∈ N and Tx f (Tx M) + T f (x) (N ) = T f (x) P. (All these concepts have a meaning at any point of a manifold with corners.) If S ⊂ M is a subset, we say that f is transversal to N on S if it is transversal to N at all x ∈ S. (ii) If M is a submanifold of P and f : M → P is the canonical injection of M into P, we say that M and N are transversal at x (resp. on S). Remark 2.1. It is clear that the transversality relation defined in (ii) is symmetric in M and N . 2.2. Transversality Theorems 2.2.1. Stability If r = ω, the
Cr
topology is the C ∞ topology.
Proposition 2.1. If x0 ∈ M, f (x0 ) ∈ N and f is transversal to N at x0 , then there exists a neighborhood O of f in C r (M, P) endowed with the C r topology (if r = ω, the C ∞ topology), and a neighborhood U of x0 in M such that for any g ∈ O, any x ∈ U, g is transversal to N at x. Remark 2.2. Proposition 2.1 is still true if we do not assume that f (x0 ) ∈ N , provided that N is closed (in P). Otherwise, it is false as the following example shows. Example 2.1. Let r = ∞, M = R, P = R 2 with canonical coordinates (x, y). Let N = {(x, 0) | x > 0}, f : R → R 2 , f (t) = (t, t 2 ). The mapping f is transversal to N at zero, because f (0) = (0, 0) ∈ / N . But for any neighborhood O of f in C ∞ (M, P), any neighborhood U of 0 in M = R, there exists an ε ∈ U and an f ε ∈ O, where f ε : M → P is the mapping f ε (t) = (t, (t − ε)2 ), and f ε is not transversal to N at ε. 2.2.2. Abstract Transversality Theorem Here, we assume that M is without corners, and that M and P are second countable. r ∈ N ∪ {+∞}. Let T be a Hausdorff topological space and
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
2. Transversality
183
ρ : T → C r (M, P) be a continuous mapping for either the C r or the Whitney C r topology, having the following property: r For each t ∈ T , there exists a C r manifold A, a point a ∈ A, and
a continuous mapping ϕ : A → T such that (i) ϕ(a) = t, and (ii) the mapping : A × M → P, (a, x) = (ρ ◦ ϕ(a))[x] (evaluation at x !) is C r and transversal to N .
Theorem 2.2. (Abstract transversality theorem) If r > dim(M)− Codim(N , P), then, (i) the set Tr of all t ∈ T such that ρ(t) is transversal to N is a dense residual subset in T, (ii) if N is closed and either ρ is continuous in the Whitney C r topology on C r (M, P), or M is compact, then Tr is open dense in T. (iii) if Codim(N , P) > dim(M), the result of (ii) is still true if we take r = 0. Transversal means that the mapping f : M → P avoids N . 2.2.3. Transversality to Stratified Sets M is again without corners. Assume now that N is a C r stratified subset of P with stratification N (we assume that the stratifications are locally finite !). Definition 2.2. f is said to be transversal to the stratification N if for any S ∈ N, f is transversal to S (each stratum S ∈ N is a C r submanifold of P). In general, the condition for a map to be transversal to a stratified set is not open, even if the set is closed. To ensure this fact, we have to consider special stratifications. Definition 2.3. A stratification N of a subset N of P is called a “Whitney-a” stratification if (i) for each stratum S ∈ N, S¯ is a union of strata, and if S1 , S2 ∈ N, S2 ∩ S¯ 1 = ∅, then S2 ⊂ S¯ 1 , (ii) for each pair S1 , S2 ∈ N such that S2 ⊂ S¯ 1 , the pair (S1 , S2 ) satisfies the condition “Whitney-a” at each point of S2 . Let us assume that M and P are second countable. Proposition 2.3. Let the stratification N of N be a C r Whitney-a stratification. Assume that N is closed. Then, the subset of all f ∈ C r (M, P), which are transversal to N, is open in the C r Whitney topology.
P1: FBH CB385-Book
CB385-Gauthier
184
May 31, 2001
17:54
Char Count= 0
Appendix
2.2.4. Transversality on Corner Manifolds Let : E → B be a C ∞ vector bundle over the C ∞ manifold B. Let J k () be the bundle of k-jets of sections of (k ∈ N ∗ ). Let M be a C ∞ submanifold with corners of B, of the same dimension. Denote by ∞ () (resp. ∞ ( M )) the space of all C ∞ sections of (respectively of M , restriction of to −1 (M)). Let N ⊂ J k () be a closed C ∞ stratified subset of J k (), endowed with a Whitney-a stratification N. We assume that B is second countable, and we will restrict it in the following tacitly to appropriate neighborhoods of M. Theorem 2.4. The subset of all f ∈ ∞ ( M ) such that the mapping j k f : M → J k () is transversal to N is open, dense in the C ∞ Whitney topology on ∞ ( M ). Proof. That the subset is open is a consequence of Proposition 2.3 (actually an easy modification of it). To see that it is dense, we use Stein extension theorem to construct an extension operator E : ∞ ( M ) → ∞ (), which is continuous for the Whitney C ∞ topologies on ∞ ( M ) and ∞ (). Let TrM (resp. Tr ) be the set of all sections f ∈ ∞ ( M ) (resp. ∞ () ) such that j k f is transversal to N . Then, res(Tr ) ⊂ TrM , where res : ∞ () → ∞ ( M ) is the restriction operator. If f ∈ ∞ ( M ), then any neighborhood of E( f ) contains elements of Tr . Now let U be a neighborhood of f in ∞ ( M ), V its inverse image, V = res−1 (U ). Then, V is a neighborhood of E( f ). It contains an element g of Tr . Hence, res(g) ∈ TrM ∩ U. 2.3. The Precise Arguments We Use in Chapter 4 All the arguments we use are consequences of the results just stated. First, X is a smooth manifold without corners, and U = I du , where I is a closed bounded interval. The set of systems is the set of (C r , r = 1, . . . ∞) couples ( f, h), where f is a U -parametrized vector field over X, and h : X × U → R d y , or h : X → R d y . The set J k S of k-jets of elements of S is a vector bundle over X × U. If X is compact, S can be given the structure of a Banach space. If X is not compact, we endow it with the (C r or C ∞ ) Whitney topology. We assume that r is sufficiently large. Theorem 2.5. N is a C r closed stratified subset of J k S, with Whitney-a stratification N. The subset of elements = ( f, h) of S such that j k is transversal to N is open dense.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. Lyapunov’s Theory
185
Now, J k S∗2 denotes the restriction of (J k S)2 to Z = ((X × X )\X ) × U. For ∈ S, ( j k )2 : Z → J k S∗2 is the map (x1 , x2 , u) → ( j k (x1 , u), j k (x2 , u)). We chose W , a compact subset of ((X × X )\X ). Theorem 2.6. N is a C r closed stratified subset of J k S∗2 , with Whitney-a stratification N. The set of ∈ S, such that ( j k )2 is transversal to N on W × U, is open dense. These two theorems contain what is needed for the proof of Theorems 2.1, 2.2, and 2.3 of Chapter 4. The considerations in this appendix show that these theorems are valid for X noncompact and for the Whitney topology. For a more detailed and elementary exposition of what is needed if X is compact (and U without corners), the reader can consult [1], as well as [20] for transversality to stratified sets. For the proof of Theorem 2.4 of Chapter 4, which is one of the two main steps in proving the final approximation theorem (Theorem 5.1) in the same chapter, the following theorem is needed. X is analytic, compact. S is now a set of systems of the form S K , constructed in Chapter 4, Section 1. In that case, S K has also the structure of a Banach space. Theorem 2.7. Theorems 2.5 and 2.6 are still valid for S K . 3. Lyapunov’s Theory For details on this section, see [35–37, 40]. 3.1. The Concept of Stability and Asymptotic Stability Let M be a C ∞ Manifold, F a C ∞ vector field on M, and x0 ∈ M a stationary point of F. For x ∈ M, the maximal positive semitrajectory of x for F is a C ∞ curve: t ∈ [0, e(x)[→ ϕt (x) ∈ M, where e(x) > 0 and possibly e(x) = +∞, such that: (i) dϕdtt (x) = F(ϕt (x)) for all t ∈ [0, e(x)[, (ii) ϕ0 (x) = x, (iii) t → ϕt (x) is maximal, that is, either e(x) = +∞, or e(x) < +∞ and the set ∩0
P1: FBH CB385-Book
CB385-Gauthier
186
May 31, 2001
17:54
Char Count= 0
Appendix
is empty, where the bar denotes the closure of the set in M. (It is equivalent to say that the trajectory t → ϕt (x) cannot be extended beyond e(x).) The maximal positive trajectory of a x ∈ M is unique. Definition 3.1. (Stability). x0 is called stable for F if any neighborhood N0 of x0 in M contains a subneighborhood N1 of x0 such that the maximal positive semitrajectory of any x ∈ N1 is contained in N0 (ϕt (x) ∈ N0 for all t ∈ [0, e(x)[). 3.1.1. Consequences of Stability 1. If x0 is stable, then it has a neighborhood U such that e(x) = +∞ for all x ∈ U. To see this, choose any compact neighborhood N0 of x0 . Then, we can take U = N1 associated to N0 by the definition. In fact, because N0 is compact and ϕt (x) ∈ N0 for all t ∈ [0, e(x)[, ∩0 0, the positive semitrajectory of ϕt (x) is contained in that of x; (d) K is closed and / K, hence compact: let x¯ be in the closure of K . Then x¯ ∈ N0 . If x¯ ∈ then there exists a t¯ such that ϕt¯(x¯ ) ∈ M\N0 . Because M\N0 is open, there exists a neighborhood U of x¯ such that e(x) > t¯ for all x ∈ U and ϕt¯(U ) ⊂ M\N0 . Now, U ∩ K is not empty. So we have a contradiction because ϕt (U ∩ K ) ⊂ N0 for all t ≥ 0. We call a compact subset K of V having these properties (a), (b), (c), and (d) with respect to V, “adapted to V .” Definition 3.2. (Asymptotic stability). We say that x0 is asymptotically stable for F if it has a neighborhood U such that: (i) e(x) = +∞ for all x ∈ U, (ii) ϕt (U ) ⊂ U for all t ≥ 0, (iii) ∩t≥0 ϕt (U ) = {x0 }. There is a simple criterion to see if x0 is asymptotically stable:
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. Lyapunov’s Theory
187
Lemma 3.1. x0 is asymptotically stable if and only if: (i) x0 is stable, (ii) there exists a neighborhood V of x0 such that ϕt (x) tends to x0 when t tends to +∞, for all x ∈ V (we have seen in (1) above that x0 has a neighborhood N such that e(x) = +∞ for all x ∈ N ). Proof. Assume that x0 satisfies the conditions (i) and (ii) of the lemma. Choose a K ⊂ V adapted to V . All we have to do to prove that x0 is asymptotically stable is to show that ∩t≥0 ϕt (K ) = {x0 }. Were it otherwise, there would exist an open neighborhood O of x0 such that (M\O) ∩t≥0 ϕt (K ) would not be empty. Choose a neighborhood L of x0 adapted to O. The interior L ◦ of L is a neighborhood of x0 and for any x ∈ K , there exists a tx > 0 such that ϕtx (x) ⊂ L ◦ . Then, there exists a neighborhood Px of x such that e(x ) > tx for all x ∈ Px and ϕtx (Px ) ⊂ L ◦ . Choose a finite set n P ⊃ K . Take t¯ = max {x1 , . . . , xn } ⊂ K such that ∪i=1 xi 1≤i≤n tn i . Because ϕt (L) ⊂ L for all t ≥ 0, ϕt¯(Pxi ) ⊂ L for all i, 1 ≤ i ≤ n. Then, ϕt¯(K ) ⊂ L ⊂ O. Hence, ∩t≥0 ϕt (K )∩ (M\O) is empty, which is a contradiction. Assume now that x0 is asymptotically stable. We prove that each relatively compact open neighborhood V ⊂ V¯ ⊂ U of x0 contains an open neighborhood of x0 such that ∪t≥0 ϕt () ⊂ V. In fact, define the set as follows: = {x ∈ V | ϕt (x) ∈ V for all t ≥ 0}. Clearly, x0 ∈ . We have to show that it is open or that V \ is closed (in V !). V \ = {x | ∃ tx > 0, / V }. Hence, ϕtx (x) ∈ M\V }. For each x ∈ V \, let τ (x) = inf{t | ϕt (x) ∈ ϕτ (x) (x) ∈ V¯ \V for all x ∈ V \. Now, let xn be a sequence in V \ converging to x∞ (in V !). We have to show that x∞ ∈ V \. Because the ϕτ (x) (x) ∈ V¯ \V , which is compact, choosing a subsequence, we can assume that ϕτ (xn ) (xn ) → ξ as n → +∞ and that {τ (xn ) | n ∈ N } converges either to a finite number τ∞ or to +∞. ξ ∈ V¯ \V. If τ (xn ) → τ∞ , then ξ = ϕτ∞ (x∞ ) and x∞ ∈ V \. If τ (xn ) → +∞, then ϕ−t (ξ ) is defined for all t ≥ 0 and the negative semitrajectory γ of ξ, γ = {ϕ−t (ξ )|t ≥ 0} is contained in V¯ as we show below. Then, for any t ≥ 0, ϕt (V¯ ) ⊃ ϕt (γ ) ⊃ γ . So, ∩t≥0 ϕt (U ) ⊃ ∩t≥0 ϕt (V¯ ) ⊃ γ = {x0 }, which is a contradiction. ¯ ⊂ ¯ for all t ≥ 0. Because Hence, is open. Clearly, ϕt () ⊂ , ϕt () ¯ is compact, it is easy to see that ϕt (x) → x0 as t → +∞ for any x ∈ : ¯ let N be any open neighborhood of x0 . Then, there is a T > 0 such that ¯ ⊂ N . Otherwise, ϕt () ¯ ∩ (M\N ) is nonempty for all t > 0. Hence, ϕT () ¯ ∩ (M\N ) is nonempty, which is a contradiction. (∩t≥0 ϕt ()) To see that ϕ−t (ξ ) is defined for all t ≥ 0, and that γ ⊂ V¯ , assume these assertions are not true. Then there exists a t¯ > 0 such that ϕ−s (ξ )
P1: FBH CB385-Book
CB385-Gauthier
188
May 31, 2001
17:54
Char Count= 0
Appendix
is defined for all 0 ≤ s ≤ t¯ and ϕ−t¯(ξ ) ∈ / V¯ . Because M\V¯ is open, there exists a neighborhood of ξ such that ϕ−s is defined on for all 0 ≤ ¯ τ (xn ) > t¯ and ϕτ (xn ) (xn ) ∈ . Hence, s ≤ t¯, ϕ−t¯() ⊂ M\V¯ . For n > n, ϕτ (xn )−t¯(xn ) ∈ M\V¯ . However, by the definition of τ (xn ), ϕτ (xn )−t¯ (xn ) ∈ V , since 0 ≤ τ (xn ) − t¯ < τ (xn ). This is a contradiction. 3.2. Lyapunov’s Functions Definition 3.3. A function V : O → R defined in an open neighborhood O of x0 is called a Lyapunov function for F if: (i) V is continuous, V (x0 ) = 0 and V (x) > 0 if x ∈ O\{x0 }. (ii) For any x ∈ O, the Lie derivative V˙ (x) of V in the direction of F exists, is continuous as a function of x, and V˙ (x) ≤ 0 for all x ∈ O. Theorem 3.2. (Stability). If F has a Lyapunov function V : O → R at x0 , then x0 is stable. Proof. Choose a coordinate system (U, x1 , . . . , xm ) of M at x0 such that: (i) xi (x0 ) = 0, 1 ≤ i ≤ m; (ii) there exists a r0 > 0 such that the image of U by x1 × . . . × xm : U → R m is the ball B m (0, 2r0 ); and (iii) U ⊂ O. For any r ∈ [0, r0 [, denote by S(r ) (resp. B(r )) the inverse images of the sphere S(0, r ) (resp. the open ball B(0, r )) by x1 × . . . × xm . Hence, 2 2 2 S(0, r ) = {q ∈ U | m = {q ∈ U | m k=1 x k (q) = r }, B(0, r ) k=1 x k (q) < m 2 2 ¯ ) = B(r ) ∪ S(r ) = {q ∈ U | r 2 }. Finally, let B(r k=1 x k (q) ≤ r }. The sets ¯ S(r ), B(r ) are compact subsets of U for r ∈ [0, r0 [. ∗ =]0, +∞[ as follows: Define µ : [0, r0 ] → R+ µ(r ) = inf{V (x) | r ≤ x < r0 }. ¯ ), for r ∈]0, r0 ] form a neighborhood basis of x0 in M. To The sets B(r prove stability of x0 it is sufficient to show that for any r ∈]0, r0 ], there exists a ¯ ) such that the maximal positive trajectory neighborhood K r of x0 , K r ⊂ B(r ¯ ). of F (in M !) starting from x ∈ K r is contained in B(r µ(r ) ¯ Take K r = B(r ) ∩ {x| x ∈ O, V (x) ≤ 2 }. Because V is continuous, K r ¯ ) !). Also, K r ∩ S(r ) = ∅ because V (x) ≥ µ(r ) for is closed (O ⊃ U ⊃ B(r all x ∈ S(r ). Hence, K r ⊂ B(r ). Let x ∈ K r and let ξx : [0, e(x)[→ M be the positive maximal semitrajectory starting at x. Then, ξx−1 (B(r )) is not empty. Let [0, τ [ be its connected component that contains 0. The funcd tion t ∈ [0, τ [→ V (ξx (t)) is nonincreasing because dt V (ξx (t)) ≤ 0. Hence, µ(r ) V (ξx (t)) ≤ V (ξx (0)) = V (x) ≤ 2 for all t ∈ [0, τ [. If τ < e(x) , we get ) ¯ ), ξx (τ ) ∈ K r ⊂ B(r ), which and because ξx (τ ) ∈ B(r V (ξx (τ )) ≤ µ(r 2
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. Lyapunov’s Theory
189
contradicts the definition of τ. Hence, τ = e(x). But, then, the maximal ¯ ), and hence, e(x) = +∞. In fact, ξx positive semitrajectory ξx lies in B(r lies in K r . This shows the stability of x0 . Theorem 3.3. (Asymptotic stability). If F has a Lyapunov function V : O → R at x0 such that V˙ (x) < 0 for all x ∈ O\{x0 }, then x0 is asymptotically stable. Proof. We use the same considerations and notations as in the proof of Theorem 3.2. By Lemma 3.1, all we have to do is to show that there exists a neighborhood L of x0 in M such that ξx (t) → x0 as t → e(x) for any x ∈ L (it implies the fact that e(x) = +∞ !). As L , we chose K r0 . We have seen 2 in the proof of Theorem 3.2 that if x ∈ K r0 its maximal positive semitra2 jectory ξx : [0, e(x) [→ M lies in K r0 and e(x) = +∞. Assume that a se2 quence (ξx (tn ) | n ∈ N ), where tn→+∞ as n → +∞, converges to a point y = x0 . Because K r0 is closed, y ∈ K r0 . Because y = x0 , V˙ (y) < 0. Hence, 2 2 there exists h > 0, ε > 0 such that V (ξ y (h)) < V (y) − ε. By taking a subsequence, we can assume that ξx (tn ) → y as n → +∞ and tn+1 − tn ≥ h for all n ∈ N . Then, ξx (tn + h) → ξ y (h) as n → +∞. Hence, limn→+∞ V (ξx (tn + h)) ≥ V (ξx (tn+1 )) for n ∈ N . Hence, limn→+∞ V (ξx (tn + h)) ≥ limn→+∞ V (ξx (tn+1 )) = V (y). We get a contradiction.
3.3. Inverse Lyapunov’s Theorems and Construction of Lyapunov’s Functions Here, we assume M to be paracompact. Hence, it is a metric space. Denote by d a distance function on M (for instance, defined by a Riemannian metric on M). Assume that x0 is asymptotically stable. Let A denote the basin of attraction of x0 , that is, the set of all x ∈ M such that the maximal positive semitrajectory ϕt (x) of F starting at x at t = 0 is defined on [0, +∞[ and ϕt (x) tends to 0 as t → +∞. A is an open set. Theorem 3.4. There exists a C ∞ function V : A → R + having the following properties: (i) V (x) > 0 for all x ∈ A\{x0 }, V (x0 ) = 0, (ii) There exists continuous functions a, b : R+ → R such that a(0) = ∗ such that, for all x ∈ A: b(0) = 0, a(t), b(t) > 0 for t ∈ R+ d V (F)(x) ≤ −a(d(x, x0 ))b(V (x)),
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
190
17:54
Char Count= 0
Appendix
(iii) V is proper. Hence, V is a Lyapunov’s function for x0 in a very strong sense. For the proof, see [35, 40].
3.4. Lasalle’s Theorem Lasalle’s theorem is a generalization of the Lyapunov’s direct method. Theorem 3.5. Let V : M → R + be a continuous function that is proper, such that the Lie derivative of V with respect to F exists at each point of M. Denote it by V˙ and assume that V˙ is continuous and ≤ 0 everywhere. Then, for any x ∈ M, the maximal positive semitrajectory ϕt (x) of x for F is defined on [0, +∞[, and its ω-limit set lies in {z ∈ M | V˙ (z) = 0, V (z) = inft≥0 V (ϕt (x))}. Proof. t → V (ϕt (x)) is a decreasing function of t. Hence, limt→+∞ V (ϕt (x)) = inft≥0 V (ϕt (x)) = a, say. Let y ∈ ω(x). Then, y = limn→+∞ ϕtn (x) where tn → +∞ as n → +∞. Then, V (y) = a. If V˙ (y) = 0, V˙ (y) < 0 and F(y) = 0. Using the flow box theorem, we see that for t¯ big enough V (ϕt¯(x)) < a. But V (ϕt¯(x)) ≥ V (y) = a, which is a contradiction. The same reasoning when V is not proper and not necessarily ≥ 0 shows that the following proposition holds. Proposition 3.6. Assume that V is as in Theorem 3.5 but V is not proper. Then, for all x0 ∈ M such that x(t, x0 ) is defined for all t ≥ 0, the ω-limit set ω(x0 ) is empty or is contained in the set {z ∈ M | V˙ (z) = 0}.
4. Center Manifold Theory 4.1. Definitions and Center Manifold Theorem Let M be an open subset in R m , F a C ∞ vector field on M, and x0 ∈ M a stationary point of F. The vector field F has a linear part at x0 , which is a linear mapping L F : R m → R m , whose matrix in the canonical coordinates (x1 , . . . , xm ) is { ∂∂ xFji (x0 ) | 1 ≤ i, j ≤ m}, where F1 , . . . , Fm are the components of F.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
4. Center Manifold Theory
191
The linear part L F is an important invariant of F at x0 . Hence, it must have a coordinate-free interpretation. Let ] − ε, ε[×U → M, (t, x) → ϕt (x) be the local flow of F, for U a neighborhood of x0 . Let t : Tx0 R m → Tx0 R m , t ∈] − ε, ε[ be the tangent mapping of ϕt at t x0 . Then, L F = d dt |t=0 . Assume now that L F satisfies the following conditions: R m has a direct sum decomposition R m = n ⊕ h, invariant under L F (i.e., L F(n) ⊂ n, L F(h) ⊂ h), such that the restriction of L F to n has only purely imaginary eigenvalues, and its restriction √ to h has no purely imaginary eigenvalue (Note: 0 is purely imaginary, 0 = −1.0). Theorem 4.1. (Center manifold theorem). Under the above assumptions, for any r ∈ N ∗ , there exists an open neighborhood U of x0 , a C r closed submanifold N of U such that: (i) x0 ∈ N and Tx0 N = n, and (ii) for any x ∈ N , the maximum trajectory of F in U passing through x at time 0 is contained in N . Such a closed submanifold N of U also satisfies the following property: For any x ∈ U such that the maximal positive (resp. negative) semitrajectory of F in U starting (resp. ending) for t = 0 at x, is defined for all t ≥ 0 (resp. t ≤ 0), then the set ωU (x)(resp. αU (x)) of all limit points of that semitrajectory as t → +∞ (rep. t → −∞) is contained in N . Definition 4.1. A couple (U, N ) satisfying the conditions of Theorem 4.1 is called a local center manifold of class C r of F, at x0 . Let us make two important remarks. Remark 4.1. In general, there does not exist a local center manifold of class C ∞ of F at x0 . Example 4.1. (Van der Strien). M = R 3 with canonical coordinates (x, y, z), ∂ . Then, h is the x0 = 0, and F is the field (−yx − x 3 ) ∂∂x + (−z + x 2 ) ∂z z-axis, n is the plane {z = 0}. Assume that F has a local center manifold of class C ∞ at x0 , (U, N ). Because Tx0 N = {z = 0}, restricting U and N , we can assume that U = O×] − ε, ε[, where O = {(x, y, 0) | x 2 + y 2 < r 2 }, r, ε > 0, and N is the graph of a C ∞ function ν : O →] − ε, ε[. Condition (ii) of Theorem 4.1 implies that for all (x, y) ∈ O, F3 (x, y, ν(x, y)) =
∂ν ∂ν (x, y)F1 (x, y, ν(x, y)) + (x, y)F2 (x, y, ν(x, y)), ∂x ∂y
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
192
Char Count= 0
Appendix
F1 , F2 (= 0), F3 , being the components of F. Hence: −ν(x, y) + x 2 +
∂ν (x, y)(yx + x 3 ) = 0. ∂x
By successive derivations w.r.t. x, 0=−
∂ 2ν ∂ν ∂ν (x, y) + 2x + 2 (x, y)(yx + x 3 ) + (x, y)(y + 3x 2 ), ∂x ∂x ∂x
∂ 2ν ∂ 3ν ∂ 2ν 3 (x, y) + 2 + (x, y)(yx + x ) + 2 (x, y)(y + 3x 2 ) ∂x2 ∂x3 ∂x2 ∂ν + 6 (x, y)x, ∂x
0=−
and for n ≥ 3, 0=−
∂nν ∂ n+1 ν ∂nν 3 (x, y) + (x, y)(yx + x ) + n (x, y)(y + 3x 2 ) ∂xn ∂xn ∂ x n+1
+ 3n(n − 1)x
∂ n−1 ν ∂ n−2 ν (x, y) + n(n − 1)(n − 2) (x, y). ∂ x n−1 ∂ x n−2
Hence, for all y ∈] − r, r [, we get 0 = (y − 1)
∂ν ∂ 2ν (0, y), 2 = (1 − 2y) 2 (0, y), ∂x ∂x
and for n ≥ 3, ∂ n−2 ν (1 − ny) ∂nν (0, y) = (0, y). n(n − 1)(n − 2) ∂ x n ∂ x n−2 From this, we get that ∂∂ x 2 p+1ν (0, y) = 0 for all y ∈] − r, r [ , all p ∈ N , and that for all y ∈] − r, r [ , all q ∈ N , q ≥ 1, q 2q k=1 (1 − 2ky) ∂ ν (0, y). 2= (2q)!(q − 1)!2q−1 ∂ x 2q 2 p+1
As soon as q >
1 2r ,
evaluating at y =
ν can be at most of class
Cρ,
1 2q ,
we get a contradiction. Hence,
where ρ is the integer part of
1 2r .
Remark 4.2. Obviously, local center manifolds of F at x0 are not unique. However, one could think that the germ of a local center manifold of F at x0 is unique. This is not true, as the following example shows.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
4. Center Manifold Theory
193
Example 4.2. Let M = R 2 , with canonical coordinates (x, y), x0 = (0, 0), F = −x 3 ∂∂x − y ∂∂y . For any a, b ∈ R, set: νa,b : R → R, − 1 ae 2x 2 if x > 0, νa,b (x) = 0 if x = 0, − 2x12 be if x < 0. For any a, b, the graph of νa,b is a local center manifold of class C ∞ of F at x0 = (0, 0). 4.2. Applications of the Center Manifold Theorem Let M, F, x0 , L F, n, k be as in Section 4.1. Let (U, N ) be a C r local center manifold of F at x0 . We can then define a new vector field FN , the reduced vector field: FN is the restriction of F to N . By Property (ii) of Theorem 4.1, it is a vector field on the manifold N , of class C r −1 . It has x0 as a stationary point. Essentially, the local properties of F at x0 are entirely determined by those of FN at x0 : Theorem 4.2. (i) If the restriction of L F to h has an eigenvalue with positive real part, x0 is an unstable point of F. (ii) If the restriction of L F to h has no eigenvalue with positive real part, then x0 is stable (asymptotically stable) for F if and only if x0 is stable (respectively asymptotically stable) for FN in N . We do not need point (i) in the book. The proof of (ii) is given in [9].
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
194
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
Solutions to Part I Exercises
Some of the exercises in the theoretical part (Part I) of this book are important for the development of our observability theory. Hence, we correct completely most of them. For a few exercises, we only give a reference: The results are rather clear, but they involve long developments, similar to those in the book. We leave to the reader the exercises of the second part of the book, which contains only applications of our observability theory.
1. Chapter 2 1.1. Exercise 5.1 (a) is f u −invariant for all u ∈ U (i.e. [ , f u ] ⊂ ): [ , f u ] ⊂ iff L [X, f u ] ϕ = 0 for all ϕ ∈ and all vector fields X that are sections of . Otherwise L [X, f u ] ϕ = L X L f u ϕ − L f u L X ϕ = 0, because L X ϕ = 0 by definition of , and L X L f u ϕ = 0 because is stable by L f u . (b) The question is local. Because is regular and nontrivial, we can find coordinates x 1 , . . . , x p , x p+1 , . . . , x n , p = 0, such that = span{ ∂∂x 1 , . . . , ∂ ∂x p }. The definition of implies that ∂h = 0, 1 ≤ i ≤ p. ∂ xi Point (a) above implies that ∂ f u, j = 0, 1 ≤ i ≤ p, p + 1 ≤ j ≤ n. ∂xi 195
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
196
17:54
Char Count= 0
Solutions to Part I Exercises
Hence, we have the following (local) normal form for : x˙ 1 = f 1 (x, u), . . x˙ p = f p (x, u), x˙ p+1 = f p+1 (x p+1 , . . . , x n , u), . . x˙ n = f n (x p+1 , . . . , x n , u), y = h(x p+1 , . . . , x n , u).
(192)
The result follows immediately, because the flow ϕt (x) corresponding to (192) for any u(.), maps two points x1 and x2 in some leaf {x p+1 = c p+1 , . . . , x n = cn } into two points in the same leaf of . 1.2. Exercise 5.2 (a) = span{(L f )k h ; 0 ≤ k}. Let x1 , x2 ∈ X, x1 = x2 . By assumption, there is a k such that (L f )k h(x1 ) = (L f )k h(x2 ). Hence, if y1 (t) and y2 (t) denote the output functions associated with the initial conditions x1 and x2 respectively, then, ∂ k y2 ∂ k y1 (0) = (0). ∂t k ∂t k This shows that the “initial − state→ output − trajectory” mapping is injective. (b) In the analytic situation, the output function y(t) is analytic function of the time, and we have the Taylor expansion, valid for small t: y(t) =
∞ k=0
(L f )k h(x)
tk . k!
If the system is observable, two (analytic) output functions y1 (t) and y2 (t) cannot have the same Taylor expansion at t = 0, otherwise, they would coincide on the interval [0, min(e(x1 ), e(x2 ))[. Hence, separates the points. 1.3. Exercise 5.3. X = R, du = 1, d y = 1, U = [−1, 1], x˙ = 0, () y = x ϕ(u), where ϕ is C ∞ , flat at u = 0, but nonidentically zero.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
2. Chapter 3
197
2. Chapter 3 2.1. Exercise 4.1. Because has a uniform canonical flag, by Theorem 2.1, can be put everywhere locally under the normal form (20). A direct computation using this normal form shows that the map S n is an injective immersion. The considerations at the end of Section 3 of Chapter 2 give the result.
2.2. Exercise 4.2. (a) If two output trajectories y1 (t), y2 (t) of the system coincide on some √ √ interval [0, ε], then the functions 3 y1 (t), 3 y2 (t) coincide also. However, these functions are output functions of the (observable) linear system: x˙ 1 = x 2 , x˙ 2 = 0, y = x 1 . This shows that the system is observable. (b) The first variation has the following expression: x˙ 1 = x 2 , x˙ 2 = 0, ξ˙ 1 = ξ 2 , ξ˙ 2 = 0, η = 3 (x 1 )2 ξ 1 . If we start from x(0) = 0, for all initial conditions ξ (0), the output η(t) of the first variation is identically zero. This shows the result.
2.3. Exercise 4.3. 1. Because has a uniform canonical flag by Theorem 2.1, it can be put everywhere locally under the normal form (20). In this special case, this gives only the condition: d L f u h ∧ dh = 0. Therefore, has a uniform canonical flag on a neighborhood of x ∈ X iff d L f u h(x) ∧ dh(x) = 0. The point x 0 being fixed, the mapping ( f, h, u) → d L f u h(x 0 ) ∧ dh(x 0 ) is continuous for the C 2 topology on S. The result follows. 2. If n > 2, let us assume that du = 1, and U = [−1, 1]. The distribution D1 (u) is given by D1 (u) = ker dh ∩ ker d L f u h,
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
198
17:54
Char Count= 0
Solutions to Part I Exercises
and it has to be independent of u. Moreover, by the normal form, d L f 0 h ∧ dh = 0 and d L f u h ∧ d L f 0 h ∧ dh = 0: if not, the codistribution annihilating D1 (u) depends on u, which is impossible. Differentiating w.r.t u at u = 0, shows that, for all k ≥ 1 and for all u ∈ U, d L ( ∂ )k f u h |u=0 ∧ d L f 0 h ∧ dh = 0. ∂u
(193)
Let us work in the open subset O of the set Sr of C r systems, r large, such that d L f 0 h ∧ dh = 0 on , a fixed compact subset of X. A direct simple computation shows that the subset of J K +1 S defined by the relation (193) for all k ≤ K is a closed submanifold of codimension (n − 2)K . Because n > 2, this codimension can be made arbitrarily large, increasing K . By the transversality theorems, the set of C r systems that miss this condition at all x ∈ is open dense. Taking = {x0 } shows the result. 2.4. Exercise 4.4. In this special case, the conditions for the uniformity of the canonical flag are those of Theorem 4.1. The conditions of this theorem, in the two-dimensional case, can be expressed as follows: (a) dh ∧ d L f h = 0, (b) d L g h ∧ dh = 0. Again, let us work on the open set O of systems such that (a) is true at all x ∈ , a fixed compact subset of X. Then, for ∈ O, we have a vector field Z on an open neighborhood of , uniquely defined by (i) L Z h = 0, (ii) L Z L f h = 1. The condition (b) implies that L kZ (d L g h ∧ dh) = 0 on , for all k. A simple computation shows that this condition for k ≤ K defines a closed submanifold of J K +2 S of codimension K . Hence, the set of systems that miss this condition at all x ∈ is open, dense, for K > 2. Taking = {x0 } gives the result. 2.5. Exercise 4.5. 1 ∂y = sin(u)2 + sin(x(1 + u 2 ) 2 )2 . ∂x
Therefore,
∂y ∂x
= 0 iff u = kπ and x =
mπ 1 (1+k 2 π 2 ) 2
, for all integers k, m.
This last expression defines a countable dense set M on X = R. However, ˆ this this set is not invariant by the dynamics of the system. For any input u, shows that any output trajectory of the first variation of the system, starting
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
2. Chapter 3
199
from a ξ (0) ∈ / s0 (X ), where s0 is the zero section of π : T X → X, is nonzero as an element of L ∞ [0, T ], for T > 0. The system is uniformly infinitesimally observable. However, condition 1 of Theorem 3.2 is violated: M is dense. 2.6. Exercise 4.6. Let us denote by Mk , k = 0, . . . , n − 1, the subset of X = R n on which dh ∧ d L f h ∧ . . . ∧ dl kf h(x) = 0. Mk is open. The C ∞ version of Theorem 4.1 is as follows. Theorem 2.1. 1. If A is observable, then Mn−1 is open, dense, and, on each open subset Y ⊂ X \Mn−1 , such that the restriction |Y is a diffeomorphism, ¯ A of the form: |Y maps A|Y into a system g1 (x1 ) x2 x˙ 1 x˙ 2 x3 g2 (x1 , x2 ) . . . . (194) ¯ A , x˙ = . = . + u . . . . g (x , . . . , x x˙ xn n−1 n−1 1 n−1 ) gn (x) ϕ(x) x˙ n y = x1 . 2. Conversely, if is an open subset of R n on which the system has the ¯ A , then the restriction | is observable. form Proof. Once we have shown that Mn−1 open, dense (in place of being analytic, with codimension (1), the remainder of the proof is completely similar. Let us show by induction on n that Mn−1 is dense. First, if dh = 0 on an open connected subset, then h is constant on this subset, and A is not observable. Hence, M0 is dense. Assume that Mk , k < n − 1, is open, dense. Let U ⊂ X, U open, be such that dh ∧ d L f h ∧ . . . ∧ dl k+1 f h(x) = 0 on U. Then, on a V ⊂ U, V open, we can chose coordinates xi , x1 = h, . . . , xk+1 = L kf h, xk+2 , . . . , xn , with L k+1 f h(x) = ϕ(x1 , . . . , xk+1 ), where ϕ is smooth. With the same reasoning as in the proof of Theorem 4.1, L mf h(x) depends only on x1 , . . . , xk+1 on V for all m > 0. Therefore, considering the “drift system” (uncontrolled): x˙ = f (x), (d ) y = h(x), we can apply the result of Exercise 5.1 of Chapter 2, to d restricted to V, because on V, the foliation d is regular and nontrivial. The system d
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
200
17:54
Char Count= 0
Solutions to Part I Exercises
can be put in the normal form (192), and hence is not observable, which is contradiction. This ends the proof.
2.7. Exercise 4.7. Let us apply Theorem 4.1 to the special case of the bilinear system B. If B is observable, it implies that the linear forms C x, C Ax, . . . , C An−1 x form a linear coordinate system on R n . In this coordinate system, B is still bilinear, and has the bilinear normal form (194). This shows the result.
2.8. Exercise 4.8. 1. Given two systems , : x˙ = f (x, u), z˙ = f (z, u), , () , ( ) y = h (z, u), y = h(x, u), on two manifolds X, Z , with control spaces U and output spaces R d y , the most natural notion of an immersion of into is: Definition 2.1. τ : X → Z is called an immersion of into if: 1) τ is a smooth mapping (we do not require that τ is immersive, but nevertheless, the traditional terminology is “immersion”), 2) for all u ∈ L ∞ (U ), all x ∈ X, e (u, x) ≤ e (u, τ (x)), and, for all t ∈ [0, e (u, x)[: P (u, x)(t) = P (u, τ (x))(t). Here, e , e , P , P denote the explosion-time functions and the input-output mappings of , . 2. Assume that a system can be immersed into a “state affine” one : x˙ = f (x, u), z˙ = A(u)z + b(u) = B(u, z), () , ( ) y = h(x, u), y = C(u)z + d(u). Considering piecewise constant control functions, for all u 1 , . . . , u r +1 ∈ U, all x ∈ X, L f u1 ... f ur h u r +1 (x) = [L Bu1 ...Bur (C(u r +1 )z + d(u r +1 ))] ◦ τ (x). The right-hand terms in the previous expressions generate a finite dimensional vector space. Hence, the same applies to the left-hand terms. The space is finite dimensional.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
3. Chapter 4
201
3. Conversely, assume that dim( ) = m < ∞. Then, considering the following “state-linear” system on ( )∗ , dual space of : ϕ˙ = A(u)(ϕ), ( ) y = C(u)(ϕ), where r A(u):( )∗ →( )∗ , A(u)(ϕ)(ψ) = ϕ(L (ψ)), for ψ ∈ (A(u) = fu L f u , transpose of “L f u restricted to ”), r C(u)(ϕ) = ϕ(h ), u r τ : X → ( )∗ , τ (x)(ψ) = ev (ψ) = ψ(x), for ψ ∈ . x Then, the mapping τ is an immersion of into . 4. If moreover is control affine, the system above is control affine, hence bilinear. 3. Chapter 4 3.1. Exercise 0.1. The basic example comes from Hirsch [26]. It concerns mappings from Z to R n . Consider f : Z → R n , the image of which is the set of points with rational coordinates. Let g : Z → R n , any map such that f (i) − g(i) <
1 , |i|
for i = 0. The image of g is dense, hence, g is not an embedding. 3.2. Exercise 2.1. Let us denote here the circle by T, and the set of C r systems on T, with control space U and output space R, by Sr . The cyclic coordinate on T is θ. (i) Let 0 = (h 0 (θ, u), f 0 (θ, u)) be given, 0 ∈ S ∞ , with ∂h 0 ∂ 2h0 ∂ 2h0 (0, 0) = 0, (0, 0) = 0, (0, 0) = 0. ∂θ ∂θ∂u ∂θ 2
(195)
There exists a > 0, and neighborhoods V 0 ⊂ S 2 , W0 ⊂ T, θ0 : V 0 → W0 , C 2 -continuous, such that 0 (i)-1: ∂h ∂θ (θ0 (), 0) = 0, for all ∈ V , ∂2h (i)-2: | ∂θ ∂u (θ, 0)| > a, for all θ ∈ W0 , for all ∈ V 0 .
The statement (i)-1 is just a consequence of the implicit function theorem. (ii) There exists U˜ N −1 : (θ, ) ∈ W0 × (V 0 ∩ S N ) → U˜ N −1 (θ, ) ∈ ˜ R (N −1)d y , C N -continuous, such that dθ π2 N (θ, 0, U N −1 (θ, )) = 0. Here,
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
202
17:54
Char Count= 0
Solutions to Part I Exercises
π2 : R N d y → R (N −1)d y is the projections that forgets y : π2 (y, y˜ N −1 ) = y˜ N −1 . (ii) results from (i)-2 and from the following trivial computation:
∂h ∂ L fu h u + dθ y˙ = u˙ , ∂θ ∂u . .
dθ y
(i)
∂ ∂h (i) i (, u˜ i−1 ) + , = u ∂θ ∂u
∂ j1 k1 ∂ jr kr ∂ jr +1 where i is a universal function depending on ( ∂u ) L f u . . . .( ∂u ) L f u ( ∂u ) h u (x), u˜ i−1 , with js + ks ≤ i. Any system in V 0 ∩ S N has the properties (i) − 1, (i) − 2 above. With (ii) above, we can compute u˜ N −1 so that
˜ dθ N (θ0 (), 0, U N −1 (θ0 (), )) = 0, ˜ which means that N is not immersive at (θ0 (), 0, U N −1 (θ0 (), )). By taking for 0 , 1 in the statement of the exercise, we get the result.
3.3. Exercise 2.2. (a) U˜ N −1 (0, ε1 ) is uniquely defined for all ε > 0, as above (i.e., taking 1 the unique solution of dθ π2 Nε (0, 0, U˜ N −1 (0, ε1 )) = 0). Direct computations show that we can choose ε small enough for U˜ N −1 (0, ε1 ) ∈ (I B )(N −1)du . 2
(b) Let us take ε1 for 0 in (i) and (ii) of the proof of the previous exercise. If is C N -close to ε1 , then θ0 () is close to zero, and U˜ N −1 (θ0 , ) is close to U˜ N −1 (0, ε1 ) by (ii) in the previous exercise. Therefore, for sufficiently close (C N ) to ε1 , U˜ N −1 (θ0 , ) ∈ ˜ (I B )(N −1)du . This shows that N is not immersive at (θ0 , 0, U N −1 (θ0 , )), and U˜ N −1 (θ0 , ) ∈ (I B )(N −1)du . This is what we wanted to show.
3.4. Exercise 2.3. 2
2
˙ = (cos(θ), 0, − sin(θ), 0, u, u). ˙ S 2 0 is immersive. (a) S 2 0 (θ, u, u) (b) for ε nonzero,2 we are in the context of Exercise 2.1. Hence, for ε small enough, S k ε is not an immersion.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
4. Chapter 5
203
3.5. Exercise 2.4. Assume that → S N is continuous. Because the immersions are open in r −N +1
C X × U × R (N −1)du , R N d y × U × R (N −1)du ), Whitney topology , for r − N + 1 > 1, the set of such that S N is an immersion is open. This is false, by the previous exercise. 3.6. Exercise 4.1. This example is a personal communication of H. J. Sussmann. Let X = R 2 , U = [−1, 1], d y = 3. x˙ 1 = 1, () x˙ 2 = u, y1 = ϕ(x1 , x2 )x1 , y2 = ϕ(x1 , x2 )x2 , y3 = ϕ(x1 , x2 ), with u 0 : [0, 1] → U, a function which is everywhere C k , and nowhere C k+1 , t v(t) = 0 u 0 (τ )dτ, γ1 : [0, 1] → X, γ1 (t) = (t, v(t)), γ2 : [0, 1] → X, γ2 (t) = (t, v(t) + 1), ϕ : X → R is a smooth function, nonzero everywhere except on γ1 ([0, 1]) ∪ γ2 ([0, 1]) (such a function does exist because this last set is closed). 3.7. Exercise 6.1. This is a special case of Theorem 2.3. 3.8. Exercise 6.2. See [3]. 3.9. Exercise 6.3. See [31]. 4. Chapter 5 4.1. Exercise 3.1. The observation space of such a system is finite dimensional: It is contained in the vector space of polynomials of a fixed degree. By Exercise 4.8 in Chapter 3, it can be immersed in a linear (uncontrolled) system. This system can be taken to be observable. Any observable linear system satisfies the AC P(N ) for some N . Nevertheless, the multiplicity may be infinite, as the following exercise shows.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
204
17:54
Char Count= 0
Solutions to Part I Exercises
4.2. Exercise 3.2. 1. y = h(x) = x1 (x12 + x22 ), y˙ = L Ax h = x2 (x12 + x22 ). These two functions separate the points on R 2 . ∗ 2 ∗ 2. In fact, for N ≥ 2, N = ( 2 ) (O y0 ), and ( 2 ) (m(O y0 )).O x0 ⊂ (x 1 + 2 x2 ).Ox0 , which has infinite codimension in Ox0 . 4.3. Exercise 4.1. 1. y = x1 , y˙ = x23 − x1 . Any two distinct initial conditions are distinguished immediately. ˇ 2 = 2 = 2a. y = h = x1 , y˙ = L f u h = x23 − x1 , this shows that 3 {G(u, x1 , x2 )}. 2b. y¨ = x1 − x23 + 3 u x26 + 3 x210 , L f u (x23 ) = 3 x210 + 3 x26 u, this shows that:
ˇ 3 = 3 = G u, x1 , x 3 , x 10 . 2 2 2c. y (3) = −x1 + x23 − 3ux26 + 3u x26 + 18u 2 x29 − 3x210 + 48ux213 + 30x217 ,
L f u x210 = 10 x217 + u x213 , hence,
2d.
ˇ 4 = 4 = G u, x1 , x 3 , x 10 , x 17 . 2 2 2
L f u x217 = 17 x224 + u x220 ,
this shows that ˇ5 = ˇ 4 = 4 = . ˇ Hence, satisfies the AC P(4), by Theorem 4.1. 3. P = 3u y 2 − 12u u y 3 + 510u 3 y 4 + 510y 8 + 22u y y + 6u y y − 494u 2 y 2 y − 36u u y 2 y + 2040u 3 y 3 y + 4080y 7 y + 126u y 2 + 22u y 2 + 3u y 2 − 988u 2 y y 2 − 36u u y y 2 + 3060u 3 y 2 y 2 + 14280y 6 y 2 − 494u 2 y 3 − 12u u y 3 + 2040u 3 y y 3 + 28560y 5 y 3 + 510u 3 y 4 + 35700y 4 y 4 + 28560y 3 y 5 + 14280y 2 y 6 + 4080y y 7 + 510y 8 + 22u y y − 494u 2 y 2 y + 252u y y + 22u y y − 988u 2 y y y − 494u 2 y 2 y + 126u y 2 − y .
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
4. Chapter 5
205
4.4. Exercise 5.1. 1. If two trajectories x(t), x˜ (t) produce the same outputs y(t), y˜ (t), then y(0) = y˜ (0), y (0) = y˜ (0), y (0) = y˜ (0), y (0) = y˜ (0). This implies: cos(α x0 ) = cos(α x˜ 0 ), cos(x0 ) = cos(x˜ 0 ), sin(α x0 ) = sin(α x˜ 0 ), sin(x0 ) = sin(x˜ 0 ). Because α is irrational, this implies that x0 = x˜ 0 . is observable. 2. The observation space is finite dimensional: It is generated by cos(x), cos(αx), sin(x), sin(αx). The ACP(4) holds. Let us compare with Exercise 3.1: In Exercise 3.1, h, the output function, is a polynomial mapping. In this exercise, h is an almost periodic function on R. This is a special case of a general situation, for certain left invariant systems on Lie Groups. 3. The image of R by M is a relatively compact set M , for all M. Assume that x = ϕ(y, y˜ M−1 ) for some M, for a continuous mapping ϕ defined on R M . Then, ϕ( M ) = R, which is impossible because ϕ is globally defined and M is relatively compact. 4.5. Exercise 5.2. 1. The observation space of contains x1 , x23 . By Exercise 5.2 in Chapter 2, is differentially observable: the mapping 2 is injective. 2. If f (x1 , x2 ) = 1, is an embedding. If f (x , x ) = x , for N ≥ 2, 1 2 2 4 N = {G(x1 , x23 )}. It has not full rank at x = 0. Then, M is not an immersion for any M.
4.6. Exercise 5.3. 3 1. S 2 (x 1 , x 2 , u, u ) = (x 1 , x 2 − x 1 , u, u ). It is an injective mapping. is differentially observable of order 2. (N ) 2. An easy induction shows that ∂∂yx2 = 0 for all N . Then, is not |x=0 strongly differentially observable of any order.
4.7. Exercise 5.4. satisfies the P H (n) at each point by Exercise 4.1 in Chapter 3. Hence, it satisfies the P H (N ) or equivalently the AC P(N ) at each point, for N ≥ n. Theorem 5.2 shows the result.
P1: FBH CB385-Book
CB385-Gauthier
206
May 31, 2001
17:54
Char Count= 0
Solutions to Part I Exercises
4.8. Exercise 6.1. The point x = 0 is a fixed point for the system, and the axis x2 = 0 is invariant. Writing the dynamics in a control affine way, x˙ = f (x) + ug(x), and denoting the Lie bracket of f and g by e(x) = [ f, g](x), we see that g and e are independent, except if x2 = 0. If x2 = 0 but x1 = 0, f (x) is nonzero. Hence, the system is not controllable in the weak sense: X = R 2 is partitioned into five orbits. 5. Chapter 6 5.1. Exercise 1.1. Let us assume that limt→+∞ #E(t, z 0 ) ≥ 2. Then, by compactness of Cl(), Cl( ), U, I B , there exists tn → +∞, xˆ n1 , xˆ n2 ∈ E(tn , z 0 ), xˆ n1 = xˆ n2 , xn = x(tn ) → x ∗ , xˆ n1 → x ∗ , xˆ n2 → x ∗ , u nN = (0) ∗ ∗ (u n , u˜ nN −1 ) → u ∗N , ηn = η(tn ) → η∗ = N (x , u N ). Also, d X denoting again the differential with respect to the x variable only, 2 i n = 0, for i = 1, 2, d X N xˆ n , u N − ηn because, in coordinates, for all V, i n
xˆ , u − ηn 2 ≤ xˆ i + λV, u n − ηn 2 , N n N n N N if n is sufficiently large, and λ > 0 is small enough. But, by the implicit function theorem, and by the fact that S N is immern ) − η 2 = 0 has a single solution xˆ in a ˆ ( x , u sive, the equation d X n n N N neighborhood of (x ∗ , u ∗N , η∗ ). 5.2. Exercise 1.2. First, for a fixed γ = (u (0) , u˜ N −1 ), N (., γ ) − N (., γ ) defines a distance dγ (. , .), because S N is injective. For x and y fixed, the distance dγ (x, y) : G → R+ is continuous, where G is the set of values of γ . The set G is compact. Hence, d(x, y) = supγ dγ (x, y) < +∞. The other properties of a distance are easy to check.
5.3. Exercise 1.3. For N ≥ 4, N is an injective immersion, and the image is contained in a 4-dimensional subspace VN of R N . N of X = R by N Moreover, the closure of N is a 2-dimensional torus TN in VN , and N is dense in TN . ∗ Let x ∗ be any point of X = R, and y ∗ = N (x ). An open neighborhood ∗ N U y ∗ of y in R has a trace Vy ∗ on VN , and y ∗ on N . The set y ∗ N contains a sequence N (x n ), x n → +∞ in X = R . Hence, an open interval ∗ ∗ ]x − a, x + a[, a > 0, is not an open neighborhood of x ∗ in the topology defined by the observability distance.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Chapter 6
207
5.4. Exercise 1.4. Because X is compact and N is an injective smooth mapping, is an homeomorphism onto its image, which is closed (Lemma N 5.1 of Chapter 5). This shows the result. 5.5. Exercise 1.5. Let d, δ be two Riemannian metrics over X. It is a standard fact that smooth mappings between Riemannian manifolds are locally Lipschitz. As a consequence, because C is compact, there is an open neighborhood U C of C (the diagonal of C × C) in X × X, and a real α > 0, such that: d(x, y) ≤ α δ(x, y), (x, y) ∈ U C . Let V C be the complement of U C in X × X, and W C = V C ∩ (C × C). Then, V C is closed and W C is compact. Hence, δ : W C → R+ , reaches its minimum δm > 0. Let d M be the (finite) diameter of C for the metric d. For (x, y) ∈ W C , d(x, y) ≤ d M = dδmM δm ≤ dδmM δ(x, y). Therefore, for (x, y) ∈ C × C, d(x, y) ≤ sup(α, dδmM )δ(x, y). This shows the result. 5.6. Exercise 1.6. First, on any Riemannian manifold, there is a complete Riemannian metric (see [34], p. 12, or use the Whitney’s embedding theorem to construct the metric). Let us fix the manifold X, and choose such a complete Riemannian metric g on X, with associated distance d. Fix x0 ∈ X, and set f (x) = d(x0 , x). Let ϕ : X → R+ be a C ∞ function, ϕ(x) > sup(1, f (x)). Such a function can be easily constructed, using a partition of unity. Let g be the Riemannian metric g = e−2ϕ g. The associated distance function is δ. The diameter of X for the distance δ is finite: Let x ∈ X, and γ : [0, t] → X be a geodesic from x0 to x, relative to g, parametrized by the arclength, so that f (x) = d(x0 , x) = t. Such a geodesic does exist ([34], p. 126). Then, denote by L δ (γ ) the length of the curve γ with respect to the metric g : t t 1 −2ϕ(γ (τ )) 2 L δ (γ ) = g(γ˙ (τ ), γ˙ (τ )) dτ = e−ϕ(γ (τ )) dτ. e 0
0
t But ϕ(γ (τ )) ≥ d(γ (τ ), x0 ) = τ, e−ϕ(γ (τ )) ≤ e−τ , 0 e−ϕ(γ (τ )) dτ ≤ t −τ 0 e dτ ≤ 1. Therefore, L δ (γ ) ≤ 1, which shows that δ(x0 , x) ≤ 1, and X has diameter less than 2 for the distance δ.
P1: FBH CB385-Book
CB385-Gauthier
208
May 31, 2001
17:54
Char Count= 0
Solutions to Part I Exercises
5.7. Exercise 1.7. If X is compact, the first part of the following proof is of no use. Consider , open, relatively compact, cl() ⊂ , cl( ) ⊂ ⊂ X . Fix K¯ ⊂ Z , K¯ compact. Set V = U × (I B )(r −1)du . Then, r ( × V ) is / r (cl( ) × V ). relatively compact. Let y0 ∈ ˜ smooth, equal to r on ×V, (x, ˜ First, we consider , u r ) = y0 for (x, u r ) ∈ / × V. This is possible. Second, we will modify slightly this ¯ according to the following lemma. ˜ to get a mapping , mapping , ¯ :X× Lemma 5.1. (d y > du , r > 2n). There exists a smooth mapping r d ˜ such that V → R y , arbitrarily close (Whitney) to , (a) (b)
˜ on ( × V ) ∪ ((X \ ) × V ), ¯ = ) × V ) = ∅. ¯ ¯ (( \cl( )) × V ) ∩ (cl(
Proof. The proof can be obtained using the transversality theorems. Let us give a direct proof here. Let ψ : X → R+ , smooth, ψ = 0 on cl( ) ∪ (X \ ), ψ > 0 on the (relatively compact) complement O of this set, O = \cl( ). Consider the equation in (x, , x˜ , u r ) on O × R r d y × cl( ) × V : ˜ x˜ , u r ) = 0. ˜ u r ) − ( ψ(x) + (x,
(196)
This equation has only the following trivial solution in : =
1 ˜ ˜ u r )), ( (x˜ , u r ) − (x, ψ(x)
: O × cl( ) × V → R r d y , : (x, x˜ , u r ) → (x, x˜ , u r ). The mapping is smooth. Because d y > du and r > 2n, dim(O × × ¯ in this V ) < r d y , and R r d y \Im() is dense (Sard’s theorem). Take any dense set, and set: ¯ ¯ + (x, ˜ (x, u r ) = ψ(x) u r ). ¯ can be chosen in any Whitney neighborhood of , ˜ because ¯ can In fact, be taken arbitrarily small, and ψ is compactly supported. ¯ Equation (196) has no solution (x, x˜ , u r ) ∈ O × cl( ) × V. For , ¯ ˜ x˜ , u r ) for (x, x˜ , u r ) ∈ O × This means that (x, u r ) is never equal to ( cl( ) × V.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Chapter 6
209
¯ ˜ x˜ , u r ) because For (x, x˜ , u r ) ∈ (X \ ) × cl( ) × V , (x, u r ) = ( ¯ / r (cl( ) × V ). (x, u r ) = y0 , and y0 ∈ Finally, ˜ x˜ , u r ) = r (x˜ , u r ), ¯ (x, u r ) = ( for (x, x˜ , u r ) ∈ (X \cl( )) × cl( ) × V.
(197)
¯ is globally Lipschitz w.r.t. x, uniformly w.r.t. u r , and equal Moreover, to r on cl( ) × V. We have to show that ¯ 0 , u r ) − (x ¯ 0 , u r ), d(η0 , x0 ) ≤ γ (η for all η0 ∈ K¯ ⊂ X, x0 ∈ , u r ∈ V.
(198)
Here K¯ ⊂ X is a fixed compact subset of X . Typically, K¯ will be as cl(K ) in the proof of Lemma 1.1 of Chapter 6:
η0 = η(0, z 0 ) = H(z 0 , u 0 (0), u˜ r (0), h(x0 , u 0 (0))), takes it values in K¯ , when (z 0 , x0 ) takes values in K¯ × , where K¯ is a given compact subset of Z . If η0 ∈ K¯ \ (which is compact), then
¯ 0 , u r ) − (x ¯ 0 , u r ) ≥ δ > 0, M ≥ (η because, by the previous lemma, the minimum cannot be zero: We consider η0 ∈ K¯ \ , x0 ∈ cl() (two compact sets). The minimum will be reached, and if it is zero, it cannot be reached for η0 ∈ (X \cl( )), by (197), and it cannot be reached for η0 ∈ ∂ , because S r is an injective immersion. Hence, if η0 ∈ K¯ \ , L ¯ ¯ 0 , u r ). (η0 , u r ) − (x δ The only thing that remains to be proved is therefore d(η0 , x0 ) ≤ L ≤
¯ 0 , u r ) − (x ¯ 0 , u r ) d(η0 , x0 ) ≤ γ (η ≤ γ r (η0 , u r ) − r (x0 , u r ), for (η0 , x0 , u r ) ∈ × × V.
(199)
This can be rewritten in a simpler way, by considering the distance δ on X × R r du , associated to the Riemannian metric over X × R r du : ˙ u˙ r ), (x, ˙ u˙ r )) = g(x, ˙ x) ˙ + u˙ r 2Rr du . g ((x, (Here, g is the Riemannian metric on X, associated to d.)
P1: FBH CB385-Book
CB385-Gauthier
210
May 31, 2001
17:54
Char Count= 0
Solutions to Part I Exercises
Then, (199) is implied by
dg ((η0 , u r ), (x0 , u r )) ≤ γ S r (η0 , u r ) − S r (x0 , u r ) Rr (d y +du ) , (η0 , x0 , u r ) ∈ × × V.
(200)
Obviously, this is a consequence of the following general fact: Lemma 5.2. Let : X → R p be an injective immersion, dg a Riemannian distance on X, K ⊂ X a compact. In restriction to K : dg (x, y) ≤ l (x) − (y) R p . Proof. As in Exercise 1.5 of the same chapter, the question is local: Let U be any open neighborhood of the diagonal in X × X, V the complement of U in X × X, and W = (K × K ) ∩ V. Set m = inf(x,y)∈W (x) − (y). m > 0 because W is compact, and is an injective immersion. For M p (x, y) ∈ W, dg (x, y) ≤ M = M m m ≤ m (x) − (y) R . To solve the local problem, let us fix x0 ∈ X, y0 = (x0 ) ∈ R p , and consider a local diffeomorphism ψ : (R p , y0 ) → (R p , 0), mapping (X ) onto a coordinate plane P = {xk+1 = 0, . . . , x p = 0} (locally). Now, ψ ◦ (x) − ψ ◦ (y) R p is again a distance δ on a neighborhood Ux0 ⊂ X of x0 , which is Riemannian because P is totally geodesic in (R p , . R p ). Then, locally, we can write dg (x, y) ≤ k1 δ(x, y) = k1 ψ ◦ (x) − ψ ◦ (y) R p ≤ k1 k2 (x) − (y) R p . Covering the diagonal of K × K by such neighborhoods Ux0 × Ux0 , and extracting a finite covering gives the result. In particular, we have proven that ¯ 0 , u r ) − (x ¯ 0 , u r ), dg (η0 , x0 ) ≤ γ (η for all (η0 , x0 , u r ) ∈ K¯ × × V.
Going back to the statement of the exercise, we get ¯ ¯ ¯ x0 ), u r (t)) ≤ λγ k(α)e−αt (η(0, z 0 ), u r (0)) (η(t, z 0 ), u r (t)) − (x(t, ¯ − (x0 , u r (0)), where λγ depends on the compact K¯ , and for all z 0 ∈ K¯ , x0 ∈ , u ∈ U r,B . This shows that the modified observer is an exponential U r,B output observer, for N = r.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Chapter 6
211
5.8. Exercise 2.1. Let us consider ϕ : ˜ = × V B ⊂ R n+du → R, with ϕ = ϕ(x1 , . . . , xn , u). Here, we take ϕ equal to h or to one of the f i ’s in the normal form (20). Let z = (xr +1 , . . . , xn ), x = xr , y = (x1 , . . . , xr −1 , u 1 , . . . , ˜ such that u du ). ϕ(x, y, z) is an analytic function on , ϕ does not depend on z, (201) ∂ϕ > 0. ∂x ˜ is compact, convex (so are and V B ). By analyticity, we can extend ϕ to an open, relatively compact neighborhood V ⊂ R m of ˜ (m = n + du ), such that ∂ϕ > 0 on V, (i) ∂x (202) ∂ϕ = 0 on V. (ii) ∂z Lemma 5.3. There exists two open, relatively compact neighborhoods V , ˜ cl(V ) ⊂ V , cl(V ) ⊂ V, V and V convex. V of , Proof. Let us find only V , and repeat the proof with ˜ = V , to find V . ˜ where B(z, 1 ) is the closed ball cenConsider ˆ n = ˜ ∪ {B(z, n1 )|z ∈ ∂ }, n 1 tered on z, with radius n . Set Vn = Convex H ull( ˆ n ). We claim that for n sufficiently large, Vn ⊂ V. Then, we just take V = Int(Vn ) for n sufficiently large. If the claim is false, there is a sequence z n ∈ Vn , z n ∈ / V, z n = tn xn + (1 − tn )yn , xn , yn ∈ ˆ n , tn ∈ [0, 1]. By compactness, we can assume that z n → z, tn → t, xn → x, yn → y, ˜ Therefore, z ∈ . ˜ It follows that z n ∈ V for n large, which is a x, y ∈ . contradiction. Lemma 5.4. On V (and on V ), ϕ does not depend on z. Proof. For (x, y, z), (x, y, z ) ∈ V , by the convexity of V , 1 ∂ϕ (z i − z i ) (x, y, t z + (1 − t)z)dt, ϕ(x, y, z ) = ϕ(x, y, z) + ∂z i 0 = ϕ(x, y, z). Now, if : V → R 1+ p is the projection (x, y, z) → (x, y), with p = r + du − 1, then, V and V are convex and open, ˜ ⊂ V , cl(V ) ⊂ V ,
P1: FBH CB385-Book
CB385-Gauthier
212
May 31, 2001
17:54
Char Count= 0
Solutions to Part I Exercises
and ϕ defines an analytic function ϕ˜ on V : ϕ(x, ˜ y) = ϕ(x, y, z) for z ∈ −1 (x, y),
Let : R 1+ p
∂ ϕ˜ > 0, for (x, y) ∈ V . ∂x → R p be the projection (x, y) = y.
Lemma 5.5. There exists a smooth mapping s : R p → R, compactly supported, ˜ (s(y), y) ∈ V , for y ∈ . Proof. For all y0 ∈ V , we chose (x0 , y0 ) ∈ V . Consider neighborhoods U y0 ⊂ V , Vx0 × U y0 ⊂ V and the mapping s y0 : U y0 → Vx0 × U y0 , s y0 (y) = x0 . ˜ s0 = 0. We also set U0 = R p \ , The sets {U0 } ∪ {U y0 |y0 ∈ V } form an open covering of R p . Let {(Ui , χi ) |i ∈ I } be a partition of unity on R p , Ui ⊂ U0 , or Ui ⊂ U yi for some yi ∈ V . If Ui ⊂ U0 , we set si = s0 = 0. If Ui ⊂ U yi (we select one), we set si = s yi . We consider s : R p → R, χi (y)si (y). s(y) = If y ∈
˜ ,
by convexity of
i∈I V , (s(y), y)
∈ V .
With standard arguments, the function ϕ, ˜ defined on the open set V , 1+ p , in such a way that it is compactly can be extended smoothly to all of R supported and unchanged on cl(V ). Let us call the resulting function ϕ˜ again. Set x ∂ ϕ˜ ϕ(x, ¯ y) = ϕ(s(y), ˜ y) + F◦ (θ, y)dθ, ∂x s(y) where F : R → R is as follows: F is increasing and locally constant outside a compact, equal to the identity on the interval [m, M], equal to m2 at −∞, equal to 2 M at +∞, with m= M=
inf
(x,y)∈cl(V )
∂ ϕ˜ (x, y), m > 0, ∂x
∂ ϕ˜ (x, y). (x,y)∈cl(V ) ∂ x max
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Chapter 6
213
Let us show that the function ϕ¯ gives a solution of the exercise. 1. 2.
∂ ϕ¯ ∂ ϕ˜ ∂ ϕ¯ m ∂ x (x, y) = F ◦ ∂ x (x, y), hence, 2 ≤ ∂ x ≤ 2M. x ∂ ϕ¯ ∂ ˜ y) + ∂∂yi ( s(y) F ◦ ∂∂ ϕx˜ (θ, y)dθ) ∂ yi (x, y) = ∂ yi (ϕ(s(y),
= (I) + (II). (I) is bounded, because ϕ˜ is compactly supported. x (II) = − ∂∂syi (y)F ◦ ∂∂ ϕx˜ (s(y), y) + s(y) ∂∂yi (F ◦ ∂∂ ϕx˜ (θ, y))dθ = (III) + (IV). (III) is bounded because ∂∂syi (y) is bounded (s is compactly supported), and m2 ≤ F ≤ 2M. x 2 2 (IV) = s(y) F ( ∂∂ ϕx˜ (θ, y)) ∂∂x∂ϕ˜yi (θ, y)dθ. The functions F and ∂∂x∂ϕ˜yi are 2 bounded by b, and ∂∂x∂ϕ˜yi is zero for |θ| ≥ θ M , because ϕ˜ is compactly supported. θ Hence, (IV) ≤ −θMM b2 dθ = 2b2 θ M . ˜ 3. ϕ¯ coincides with ϕ˜ on ˜ : for (x, y) ∈ , x ∂ ϕ¯ ϕ(x, ¯ y) = ϕ(s(y), ¯ y) + (θ, y)dθ, s(y) ∂ x but ϕ(s(y), ¯ y) = ϕ(s(y), ˜ y) by definition of ϕ. ¯ Because (s(y), y) and (x, y) ∈ V , and V is convex, (θ, y) ∈ V for all θ ∈ [s(y), x], and ∂∂ ϕx¯ (θ, y) = F ◦ ∂∂ ϕx˜ (θ, y) = ∂∂ ϕx˜ (θ, y), by definition of F. Therefore, ϕ(x, ¯ y) = ϕ(x, ˜ y).
5.9. Exercise 2.2. This is a very elementary standard result from linear control theory. 5.10. Exercise 2.3. This is a standard result from stability theory (of linear differential equations). 5.11. Exercise 2.4. Let us consider the system on Gl+ (m, R), m = N p: p N −1 du (t, s) = A N , p u (t, s) + {u kp+i,lp+ j ekp+i,lp+ j u (t, s)}l≤k , dt k,l=0 i, j=1
= Au (t)u (t, s), u (s, s) = I d, as in Formula (92), where |u i, j (t)| ≤ B. Recall that u (t1 , t2 )u (t2 , t3 ) = u (t1 , t3 ). Set u (t, s) = [ −1 u ] (t, s). Then, d u (t, s) = −Au (t) u (t, s), dt u (s, s) = I d.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
214
17:54
Char Count= 0
Solutions to Part I Exercises
T Fix T > 0. Set G u = 0 u (T, s)C C u (T, s)ds. Here, G u is the matrix under consideration in Lemmas 2.12–2.17 of Chapter 6. T Gu = (u−1 ) (T, s)C Cu−1 (T, s)ds 0
=
T
u (s, T )C Cu (s, T )ds.
0
Hence, we have to consider the system: du (s, T ) = Au (s)u (s, T ), ds u (T, T ) = I d. This system is control affine. Hence, we can apply Lemma 2.10. (In fact it is a system with terminal condition on [0, T ], but the lemma also applies.) ∞ ∞ l Here, let us identify L ∞ ([0,T ],R l ) with (L ([0,T ],R) ) , and endow L ([0,T ],R) with the weak-* topology. The subset U B = {u = (u 1 , . . . , u l )|supi |u i |∞ ≤ B} ⊂ L∞ ([0,T ],R l ) is compact. The mapping 0 L∞ ([0,T ],R l ) → C ([0,T ],Gl+ (m,R)) ,
u → u (s, T ) is continuous, by Lemma 2.10. Also, the mapping 0 C([0,T ],Gl+ (m,R)) → Sm , T (t)C C(t)dt (.) → 0
is continuous (Lebesgue’s dominated convergence). Hence, the mapping F : L∞ ([0,T ],R l ) → Sm , u → G u maps the compact set U B onto a compact subset of Sm . In fact, F maps U B onto a compact subset of the set of positive semi definite matrices, by definition. It is sufficient to prove that all elements in F(U B ) are positive definite (i.e., F maps U B onto a compact subset of Sm (+)). This will imply ∗ Lemma 2.11. Assume that for u ∗ ∈ L ∞ ([0,T ],R l ) , G u is not positive definite. ∗ Let x0 ∈ K er (G u ), x0 = 0. Then, z(s) = u (s, T )x0 is a solution of dz(s) = Au (s)z(s), ds z(T ) = x0 , C z(s) = 0, for all s ∈ [0, T ]. This contradicts the observability.
P1: FBH CB385-Book
CB385-Gauthier
May 31, 2001
17:54
Char Count= 0
5. Chapter 6
215
5.12. Exercise 2.5. ˜ and z i denote the ith 1. Let b˜ i denote the ith p-block component of b, 1 ˜ p-block component of z; bi (z) = θ i−1 bi (z 1 , θ z 2 , . . . , θ i z i ). b˜ i (z) − b˜ i (w) =
1 bi z 1 , θ z 2 , . . . , θ i−1 z i
θ i−1
− bi w1 , θw2 , . . . , θ i−1 wi
Lb z 1 , θ z 2 , . . . , θ i−1 z i i−1 θ
− w1 , θw2 , . . . , θ i−1 wi
1 1 ≤ Lb z , z , . . . , z i θ i−1 1 θ i−2 2
1 1 − i−1 w1 , i−2 w2 , . . . , wi θ θ
≤
≤ L b z − w, because θ > 1. 1 bi,∗ j (−1 z)θ j−1 , 2. With i × i block notations for b∗ and b˜ ∗ , b˜ i,∗ j (z) = θ i−1 ∗ and b is lower block triangular. Hence, b˜ i, j = 0 for j > i; b˜ i,∗ j (z) = θ ( j−i) bi,∗ j (−1 z), j ≤ i, and θ > 1. Hence, b˜ i,∗ j (z) ≤ bi,∗ j (−1 z). 5.13. Exercise 2.6. (Q + λQC C Q)−1 is well defined for |λ| small. Let us show that (I) = (Q + λQC C Q)(Q −1 − C (λ−1 + C QC )−1 C) = I d. Here, C is a linear form, and
5.13. Exercise 2.6. $(Q + \lambda QC'CQ)^{-1}$ is well defined for $|\lambda|$ small. Let us show that
$$(I) = (Q + \lambda QC'CQ)\big(Q^{-1} - C'(\lambda^{-1} + CQC')^{-1}C\big) = Id.$$
Here, $C$ is a linear form, and
$$(I) = (Q + \lambda QC'CQ)\Big(Q^{-1} - C'C\,\frac{\lambda}{1 + \lambda CQC'}\Big) = Id + (II),$$
$$(II) = \lambda QC'C - \big(\lambda QC'C + \lambda^2 QC'CQC'C\big)\,\frac{1}{1 + \lambda CQC'}$$
$$= \frac{1}{1 + \lambda CQC'}\big(\lambda QC'C + \lambda^2 QC'(CQC')C - \lambda QC'C - \lambda^2 QC'CQC'C\big) = 0.$$
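This is the classical matrix-inversion identity used in the Riccati update of the extended Kalman filter. A quick numerical sanity check, with arbitrary data (the matrix $Q$, the form $C$, and $\lambda$ below are invented):

```python
# Sanity check of the identity of Exercise 2.6 with arbitrary data:
#   (Q + lam * Q C' C Q) (Q^{-1} - C' (1/lam + C Q C')^{-1} C) = Id,
# where Q is symmetric positive definite and C is a linear form (row vector).
import numpy as np

rng = np.random.default_rng(0)
n, lam = 4, 0.3
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                  # symmetric positive definite
C = rng.standard_normal((1, n))              # a linear form on R^n

left = Q + lam * Q @ C.T @ C @ Q
scalar = 1.0 / lam + (C @ Q @ C.T).item()    # 1/lambda + C Q C' (a scalar)
right = np.linalg.inv(Q) - C.T @ C / scalar
print(np.allclose(left @ right, np.eye(n)))  # True
```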
Bibliography
[1] R. Abraham and J. Robbin, Transversal Mappings and Flows, W.A. Benjamin, Inc., 1967.
[2] V. I. Arnold, S. M. Gusein-Zade, and A. N. Varchenko, Singularités des applications différentiables, French translation, ed. MIR Moscou, Vol. I, II, 1986.
[3] M. Balde and P. Jouan, Observability of control affine systems, ESAIM/COCV 3, pp. 345–359, 1998.
[4] E. Bierstone and P. Milman, Extensions and liftings of C∞ Whitney fields, L'Enseignement Mathématique, T. XXVIII, fasc. 1–2, 1977.
[5] E. Bierstone and P. Milman, Semi-analytic and subanalytic sets, Publications de l'IHES, No. 67, pp. 5–42, 1988.
[6] E. Bierstone and P. Milman, Geometric and differential properties of subanalytic sets, Annals of Maths 147, pp. 731–785, 1998.
[7] N. Bourbaki, Eléments de Mathématiques, Topologie générale, livre III, Actualités Scientifiques et Industrielles, 1142, Hermann, Paris, 1961.
[8] R. Bucy and P. Joseph, Filtering for stochastic processes with applications to guidance, Chelsea Publishing Company, 1968; second edition, 1987.
[9] J. Carr, Applications of centre manifold theory, Appl. Math. Sci. 35, Springer-Verlag, 1981.
[10] H. Cartan, Variétés analytiques réelles et variétés analytiques complexes, Bulletin de la Société Mathématique de France 85, pp. 77–99, 1957.
[11] H. Cartan, Variétés analytiques complexes et cohomologie, Colloque sur les fonctions de plusieurs variables, Bruxelles, pp. 41–55, 1953. Also in Collected Works of H. Cartan, Vol. II, pp. 669–683, Springer-Verlag, 1979.
[12] F. Deza, Contribution to the synthesis of exponential observers, Ph.D. thesis, INSA de Rouen, France, June 1991.
[13] Z. Denkowska and K. Wachta, La sous-analyticité de l'application tangente, Bulletin de l'Académie Polonaise des Sciences, XXX, No. 7–8, 1982.
[14] F. Deza, E. Busvelle, and J-P. Gauthier, High-gain estimation for nonlinear systems, Systems and Control Letters 18, pp. 295–299, 1992.
[15] M. Fliess and I. Kupka, A finiteness criterion for nonlinear input-output differential systems, SIAM Journal Contr. and Opt. 21, pp. 721–728, 1983.
[16] J-P. Gauthier, H. Hammouri, and I. Kupka, Observers for nonlinear systems, IEEE CDC Conference, Brighton, England, pp. 1483–1489, December 1991.
[17] J-P. Gauthier, H. Hammouri, and S. Othman, A simple observer for nonlinear systems, IEEE Trans. Aut. Control 37, pp. 875–880, 1992.
[18] J-P. Gauthier and I. Kupka, Observability and observers for nonlinear systems, SIAM Journal on Control, Vol. 32, No. 4, pp. 975–994, 1994.
[19] J-P. Gauthier and I. Kupka, Observability for systems with more outputs than inputs, Mathematische Zeitschrift 223, pp. 47–78, 1996.
[20] M. Goresky and R. Mc Pherson, Stratified Morse Theory, Springer-Verlag, 1988.
[21] F. Guaraldo, P. Macri, and A. Tancredi, Topics on real analytic spaces, Advanced Lectures in Maths, Friedrich Vieweg and Sohn, Braunschweig, 1986.
[22] H. Grauert, On Levi's problem and the imbedding of real analytic manifolds, Annals of Math. 68(2), pp. 460–472, Sept. 1958.
[23] R. Hardt, Stratification of real analytic mappings and images, Invent. Math. 28, pp. 193–208, 1975.
[24] R. Hermann et al., Nonlinear controllability and observability, IEEE Trans. Aut. Control AC-22, pp. 728–740, 1977.
[25] H. Hironaka, Subanalytic Sets, Number Theory, Algebraic Geometry and Commutative Algebra, in honor of Y. Akizuki, Kinokuniya, Tokyo, No. 33, pp. 453–493, 1973.
[26] M. W. Hirsch, Differential Topology, Springer-Verlag, Graduate Texts in Math., 1976.
[27] L. Hörmander, An Introduction to Complex Analysis in Several Variables, North Holland Math. Library, Vol. 7, 1973.
[28] M. Hurley, Attractors: persistence, and density of their basins, Trans. Am. Math. Soc. 269, pp. 247–271, 1982.
[29] A. Jaswinsky, Stochastic Processes and Filtering Theory, Academic Press, New York, 1970.
[30] P. Jouan, Singularités des systèmes non linéaires, observabilité et observateurs, Ph.D. thesis, University of Rouen, 1995.
[31] P. Jouan, Observability of real analytic vector fields on a compact manifold, Systems and Control Letters 26, pp. 87–93, 1995.
[32] P. Jouan and J-P. Gauthier, Finite singularities of nonlinear systems. Output stabilization, observability and observers, Journal of Dynamical and Control Systems 2(2), pp. 255–288, 1996.
[33] L. Kaup and B. Kaup, Holomorphic functions of several variables, De Gruyter Studies in Math., 1983.
[34] W. Klingenberg, Riemannian geometry, De Gruyter Studies in Math., 1982.
[35] J. Kurzweil, On the inversion of Lyapunov's second theorem, On stability of motion, Transl. Am. Math. Soc., pp. 19–77, 1956.
[36] J. Lasalle and S. Lefschetz, Stability by Lyapunov's Direct Method with Applications, Academic Press, New York, 1961.
[37] S. Lefschetz, Ordinary Differential Equations: Geometric Theory, J. Wiley Intersciences, 1963.
[38] S. Lojasiewicz, Triangulation of semi-analytic sets, Annal. Sc. Nor. Sup. PISA, pp. 449–474, 1964.
[39] D. G. Luenberger, Observers for multivariable systems, IEEE Trans. Aut. Control 11, pp. 190–197, 1966.
[40] J. L. Massera, Contribution to stability theory, Annals of Math. 64, pp. 182–206, 1956.
[41] J. W. Milnor, Differential Topology, Lectures on Modern Mathematics, T. L. Saaty, ed., Vol. II, Wiley, New York, 1964.
[42] J. W. Milnor, On the concept of attractor: correction and remarks, Comm. Math. Phys. 102, pp. 517–519, 1985.
[43] R. Narasimhan, Introduction to the Theory of Analytic Spaces, Lecture Notes in Mathematics 25, Springer-Verlag, Heidelberg–New York, 1966.
[44] N. Rouche, P. Habets, and M. Laloy, Stability Theory by Lyapunov's Direct Method, Lecture Notes in Applied Mathematical Sciences 22, Springer-Verlag, New York, 1977.
[45] M. Shiota, Geometry of Subanalytic and Semi-Algebraic Sets, Birkhäuser, P.M. 150, 1997.
[46] H. J. Sussmann, Trajectory regularity and real analyticity, some recent results, Proceedings of 25th CDC Conference, Athens, Greece, Dec. 1986.
[47] H. J. Sussmann, Single input observability of continuous time systems, Mathematical Systems Theory 12(4), pp. 371–393, 1979.
[48] M. Tamm, Subanalytic sets in the calculus of variations, Acta Mathematica 146, 3.4, pp. 167–199, 1981.
[49] J. C. Tougeron, Idéaux de fonctions différentiables, Springer-Verlag, New York, 1972.
[50] J. L. Verdier, Stratifications de Whitney et théorème de Bertini-Sard, Invent. Math. 36, pp. 295–312, 1976.
[51] F. W. Wilson, Jr., The structure of the level surfaces of a Lyapunov function, Journal Diff. Equ. 3, pp. 323–329, 1967.
[52] H. Whitney, Analytic extensions of differentiable functions defined in closed sets, Trans. Am. Math. Soc. 36, pp. 63–89, 1934.
[53] O. Zariski and P. Samuel, Commutative Algebra, Van Nostrand Company, 1958.
[54] C. D. Holland, Multicomponent Distillation, Prentice Hall, Englewood Cliffs, NJ, 1963.
[55] H. H. Rosenbrock, A Lyapunov function with applications to some nonlinear physical systems, Automatica 1, pp. 31–53, 1962.
[56] J. Alvarez, R. Suarez, and A. Sanchez, Nonlinear decoupling control of free radical polymerization continuous stirred tank reactors, Chem. Eng. Sci. 45, pp. 3341–3357, 1990.
[57] D. K. Adebekun and F. J. Schork, Continuous solution polymerization reactor control, 1, Ind. Eng. Chem. Res. 28, pp. 1308–1324, 1989.
[58] D. Bossane, Nonlinear observers and controllers for distillation columns, Ph.D. thesis, University of Rouen, France, 1993.
[59] M. Van Dootingh, Polymérisation radicalaire, commande géométrique et observation d'état à l'aide d'outils non-linéaires, Ph.D. thesis, University of Rouen, France, 1992.
[60] N. Petit et al., Control of an industrial polymerization reactor using flatness, Second NCN Workshop, Paris, June 5–9, 2000, to appear in Lecture Notes in Control and Information Sciences, Springer-Verlag.
[61] P. Rouchon, Dynamic simulation and nonlinear control of distillation columns (in French), Ph.D. thesis, Ecole des Mines de Paris, France, 1990.
[62] T. Takamatsu, I. Hashimoto, and Y. Nakai, A geometric approach to multivariable control system design of a distillation column, Automatica 15, pp. 178–202, 1979.
[63] F. Viel, Stabilité des systèmes non linéaires contrôlés par retour d'état estimé. Application aux réacteurs de polymérisation et aux colonnes à distiller, Ph.D. thesis, University of Rouen, France, 1994.
[64] F. Viel, E. Busvelle, and J-P. Gauthier, A stable control structure for binary distillation columns, International Journal on Control 67(4), pp. 475–505, 1997.
[65] F. Viel, E. Busvelle, and J-P. Gauthier, Stability of polymerization reactors using input-output linearization and a high-gain observer, Automatica 31, pp. 971–984, 1995.
Index of Main Notations
Following is a list of general notations used throughout the book. Local symbols are not listed. In particular, any symbol relative to the applications in Chapter 8 is not included in the list.

$R$, $N$: sets of real numbers and integers respectively
$[a, b[$: real semi-open interval $\{x \in R \mid a \le x < b\}$
$L_f h$: Lie derivative of the function $h$ in the direction of the vector field $f$
$L_f^k h$: $k$ times iterated Lie derivative
$\partial_j = \frac{\partial}{\partial u_j}$
$(\Sigma)$: $\frac{dx}{dt} = f(x,u)$, $y = h(x,u)$, system, Chapter 1, Section 1
$X$: state space, $n$-dimensional, idem
$d_u$, $d_y$: dimensions of control space and output space, idem
$U = I^{d_u}$, $I \subset R$ closed interval: set of values of control, idem
$F$: set of parametrized vector fields $\dot x = f(x,u)$; $F^r$, $r = 1, \dots, \infty, \omega$, idem
$H$: set of output functions $h(x,u)$; $H^r$, $r = 1, \dots, \infty, \omega$, idem
$S = F \times H$: set of systems $(\Sigma)$; $S^r$, $r = 1, \dots, \infty, \omega$, idem
$L^\infty[U]$, $L[R^{d_y}]$: sets of control functions and output functions, Chapter 2, Section 1
$e(u, x)$: escape time, idem
$P_\Sigma$: input–output mapping, idem
$P_u$: initial state $\to$ output-trajectory mapping, idem
$T\Sigma$: first variation of a system $\Sigma$, idem
$dP_\Sigma$: tangent mapping to $P_\Sigma$, idem
$d\hat P_\Sigma(u, x)$: idem
$D(u) = \{D_0(u) \supset D_1(u) \supset \dots \supset D_{n-1}(u)\}$: canonical flag, Chapter 2, Section 2
$\Phi_k : X \times U \times R^{(k-1)d_u} \to R^{k d_y}$, $(x_0, u, u', \dots, u^{(k-1)}) \mapsto (y, y', \dots, y^{(k-1)})$, Chapter 2, Section 3
$S\Phi_k : X \times U \times R^{(k-1)d_u} \to R^{k d_y} \times R^{k d_u}$, $(x_0, u, u', \dots, u^{(k-1)}) \mapsto (y, y', \dots, y^{(k-1)}, u, u', \dots, u^{(k-1)})$, idem
$PH(k)$: phase variable property of order $k$, idem
$f^N(x, u^{(0)}, \dots, u^{(N-1)})$, $b^N$: Chapter 2, Section 4
$\Sigma^N$: $N$th dynamical extension of $\Sigma$, idem
$u_N = (u^{(0)}, \dots, u^{(N-1)})$: idem
$y_N = (y^{(0)}, \dots, y^{(N-1)})$: idem
$\Phi_k(\Sigma^N)$, $S\Phi_k(\Sigma^N)$: idem
observation space of $\Sigma$, Chapter 2, Section 5
trivial distribution of $\Sigma$, idem
space of functions, relative to $\Sigma$, of the form $L_{f_u}^{k_1}(\partial_{j_1})^{s_1} L_{f_u}^{k_2}(\partial_{j_2})^{s_2} \cdots L_{f_u}^{k_r}(\partial_{j_r})^{s_r} h_{i,u}$, idem; its version at fixed $u$ (bar notation), idem
$Lie(\Sigma)$: Lie algebra of a system $\Sigma$, Chapter 2, Appendix
$A(x_0)$: accessibility set of $\Sigma$ through $x_0$, Chapter 2, Appendix
$O(x_0)$: orbit of $\Sigma$ through $x_0$, Chapter 2, Appendix
$d_X h$, $d_U h$: differentials of the mapping $h(x,u): X \times U \to R^s$ with respect to the variable $x \in X$ only (resp. $u \in U$ only); if $f_u = f(x,u) \in F$ is a parametrized vector field, $d_U f$ is a mapping $TU \approx U \times R^{d_u} \to \kappa(X)$, the set of smooth vector fields over $X$, etc., Chapter 3
$(\Sigma_A)$: $\dot x = f(x) + u\,g(x)$, $y = h(x)$, control affine system, Chapter 3, Section 4.2
$(\Sigma \times \Sigma)$: $\dot x_1 = f(x_1,u)$, $\dot x_2 = f(x_2,u)$, $y = h(x_1,u) - h(x_2,u)$, product of a system by itself, idem
$(B)$: $\dot x = Ax + u(Bx + b)$, $y = Cx$, $x \in R^n$, bilinear system, Chapter 3, Section 4.3
$S^{0,r}$: set of $C^r$ systems where $h$ is independent of $u$, Chapter 4, Section 1
$H^{0,r}$: set of $C^r$ output functions $h$ that are independent of $u$, idem
$B_H$, $B_{H^0}$, $B_F$: idem
$J^k F$, $J^k H$, $J^k H^0$: bundles of $k$-jets of $C^r$ sections of $B_F$, $B_H$, $B_{H^0}$, idem
$\times_X$: fiber product of bundles over $X$, idem
$J^k S$: $k$-jets of systems, idem
$ev_k$: evaluation mapping, idem
$S_K$, $S^{0,K}$: set of holomorphic systems over $K$, idem
$j^k \Sigma$, $j^k f$, $j^k h$: $k$-jets of $\Sigma$, $f$, $h$, idem
$T_X f(x)$: linearization of a vector field $f$ at $x \in X$, $f(x) = 0$, idem
$D_N: ((X \times X) \setminus \Delta X) \times U \times R^{(N-1)d_u} \times S \to R^{N d_y} \times R^{N d_y}$, idem
$DS_N: ((X \times X) \setminus \Delta X) \times U \times R^{(N-1)d_u} \times S \to R^{N d_y} \times R^{N d_y} \times R^{N d_u}$, idem
$D_N^\Sigma(x_1, x_2, u_N) = D_N(x_1, x_2, u_N, \Sigma)$, $DS_N^\Sigma(x_1, x_2, u_N) = DS_N(x_1, x_2, u_N, \Sigma)$, idem
$\hat\varphi = j_t^\infty(\varphi) = \sum_{n=0}^{\infty} t^n \varphi_n$: infinite jet of $\varphi$ w.r.t. $t$, Chapter 4, Section 7.2
$f_N = (f^{(0)}, f^{(1)}, \dots, f^{(N)})$: $N$-jet of a curve, Chapter 5, Section 1
$\tilde f_N = (f^{(1)}, \dots, f^{(N)})$, $f_N = (f^{(0)}, \tilde f_N)$: idem
rings of germs at $x_0$, resp. at $(x_0, u_0^N)$, idem
$\Phi_{N,\tilde u_{N-1}}: X \times U \to R^{N d_y}$: restricted mapping, idem
$S\Phi_{N,\tilde u_{N-1}}: X \times U \to R^{N d_y} \times R^{d_u}$: suspended restricted mapping, idem
ring of analytic germs of functions of the form $G(u, \varphi_1, \dots, \varphi_p)$, with $\varphi_i = L_{f_u}^{k_1}(\partial_{j_1})^{s_1} \cdots L_{f_u}^{k_r}(\partial_{j_r})^{s_r} h$, $k_i, s_i \ge 0$, idem
the same ring with $k_i + s_i \le N - 1$, idem
$\{u; u^0\}$: Chapter 5, Section 2
$ACP(N)$: ascending chain property, Chapter 5, Section 2
$O_y$, $O_{ye}$ (with superscripts $r, B, \dots$): output observer (resp. exponential), Chapter 6, Section 1
$O_x$, $O_{xe}$ (with superscripts $r, B, \dots$): state observer (resp. exponential), idem
$x^i = (x^0, \dots, x^i)$: Chapter 6, Section 2
$A_{n,p}$: $(np \times np)$ block-antishift matrix,
$$A_{n,p} = \begin{pmatrix} 0 & Id_p & 0 & \cdots & 0 \\ 0 & 0 & Id_p & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & Id_p \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix},$$
Chapter 6, Section 2.4.1
$C^\infty$ analog of the ring of germs at $(x_0, u_0^N)$, Chapter 7, Section 2
$C^\infty$ analog of the ring of germs at $x_0$, idem
$C^\infty$ analog of the ring of analytic germs above, idem
Index
Abraham’s Theorem, 48, 217 Abstract transversality theorem, 183 Accessibility set, 19 Affine variety, 59 Algebraic Riccati equation, 131 Almost periodic function, 205 Analytic algebra, 69 Anti-involution (of a complexification), 38 Antishift matrix, 106 Approximation theorem, 57 Ascending chain property, 71 Asymptotic stability, 186 Asymptotic stability within a compact set, 126 Azeotropic distillation, 6, 145 Bad sets, 43 Basin of attraction, 125 Bilinear systems (immersions into), 29 Bilinear systems (observable), 29 Block-antishift matrix, 101 Brown–Stallings theorem, 127 Bundles (of jets), 37 Bundles (partially algebraic or partially semi algebraic), 43 Canonical flag (of distributions), 11 Canonical flag (uniform, regular), 12 Canonical involution (of a double-tangent bundle), 10 Center manifold, 180 Center manifold Theorem, 191 Chow Theorem, 19 Companion matrix, 29 Complete Riemannian metric, 207 Complex manifold, 38 Complexification (of systems, manifolds), 38 Conditions a and b (Whitney), 181
Conical bundle, 44 Continuity (of the input-state mapping), 120 Control, Control Space, 1 Control functions, 9 Control affine (systems), 26 Controllability (weak), 19 Conversion rate, 165 Corner manifolds, 1, 12, 184 Cr topology, 37, 182 Dayawansa, 97 Dimension (of a subanalytic set), 180 Distillation column, 143 Dynamical extension, 14 Escape (or explosion) time, 9 Evaluation jet mapping, 38 Exponential observer, 88 Extended Kalman filter, 95, 106 continuous-discrete, 117 high gain, 104 Feed (of a column), 143 Feed composition, 146 Feed flowrate, 146 Fiber product of bundles, 38 Finite germ, 73 Finitely generated module, 74 First variation (of a system), 10 Fliess–Kupka Theorem, 29 Frobenius norm, 109 Frobenius Theorem, 19 Global reaction heat, 164 Gramm observability matrix, 108 Grassmannian, 181 Grauert’s Theorem, 39, 218
225
Observability distance, 89 Observable vector-field, 58 Observation space, 16 Observed data, 87 Observers (definition), 87 Observers (high gain), 95 Observers (output observers), 87 Observers (state observers), 89 ω-limit set, 130 Orbit (of a system through a point), 19 Output, Output space, 1 Output functions, 9 Output injection, 4 Output stabilization, 125 PA, Partially algebraic, 43 Partition of unity, 83 Peak phenomenon, 91 Phase variable (property, representation), 12 Polydispersity, 165 Polymerization reactors, 163 Prediction, 118 Projective tangent bundle, 46 Prolongation (of a sequence of germs), 73 PSA, Partially semi-algebraic, 43 Pull back ring, 73, 132 Reflux flowrate, 144 Regular (distribution), 11 Relative volatility, 145 Residual subset, 40, 183 Riccati (equation, solutions), 109 Riemannian (distance, metric), 90 Right hand derivative, 153 Rings of functions (attached to a system), 69 Rosenbrock, 153, 219 Sard’s Theorem 34, 208 Semi-global asymptotic stabilizability, 126 Sheaf of germs of analytic functions, 38 Singular points (of subanalytic sets), 180 Solvent, 163 Stable matrix, 102 Stability, asymptotic stability, 186 Stabilizing feedback, 3, 125 State, State space, 1 Stratifications (Whitney), 180 Stratified sets, 180, 183 Stein extension theorem, 184 Stein manifold, 39
P1: FBH CB385-IND
CB385-Gauthier
May 29, 2001
11:9
226 Subanalytic sets, 179 dimension, 180 local dimension, 54 singular points of, 180 Subanalytic mappings, 179 Suspension, 12 Sussmann, 5, 203, 219 Symmetric tensors, 60 Symmetric tensor product, 60 Systems, 1 Tangent cone (to a subanalytic set), 51 Thom-Malgrange theory, 75 Transitive Lie algebra, 19 Transversality (definition, theorems), 181 Tray (of a column), 143 Trivial foliation, 15