Control Engineering Series Editor William S. Levine Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742-3285 USA
Editorial Advisory Board Okko Bosgra Delft University The Netherlands Graham Goodwin University of Newcastle Australia Petar Kokotovic University of California Santa Barbara USA Manfred Morari ETH Zurich, Switzerland
William Powers Ford Motor Company USA
Mark Spong University of Illinois Urbana-Champaign USA Iori Hashimoto Kyoto University Kyoto, Japan
Control Engineering publishes research monographs and advanced graduate texts dealing with areas of current research in all areas of control engineering and its applications. We encourage the preparation of manuscripts in TeX (LaTeX is also acceptable) for delivery as camera-ready hard copy, which leads to rapid publication, or on a diskette. Proposals should be sent directly to the editor or to Birkhauser Boston, Computational Science and Engineering Program, 675 Massachusetts Avenue, Cambridge, MA 02139, USA.

Published Books
Robust Kalman Filtering for Signals and Systems with Large Uncertainties, I.R. Petersen, A.V. Savkin
Qualitative Theory of Hybrid Dynamical Systems, A.S. Matveev, A.V. Savkin
Lyapunov-Based Control of Mechanical Systems, M.S. de Queiroz, D.M. Dawson, S.P. Nagarkatti, F. Zhang
Nonlinear Control and Analytical Mechanics, H.G. Kwatny, G.L. Blankenship
Control Systems Theory with Engineering Applications, S.E. Lyshevski
Control Systems with Actuator Saturation, T. Hu, Z. Lin
Deterministic and Stochastic Time Delay Systems, E.-K. Boukas, Z.-K. Liu
Hybrid Dynamical Systems, A.V. Savkin, R.J. Evans
Hybrid Dynamical Systems Controller and Sensor Switching Problems
Andrey V. Savkin Robin J. Evans
With 22 Figures
Birkhauser Boston · Basel · Berlin
Andrey V. Savkin School of Electrical Engineering and Telecommunications University of New South Wales Sydney 2052 Australia
Robin J. Evans Department of Electrical and Electronic Engineering University of Melbourne Melbourne 3052 Australia
Library of Congress Cataloging-in-Publication Data Savkin, Andrey V. Hybrid dynamical systems: controller and sensor switching problems / Andrey V. Savkin, Robin J. Evans. p. cm. -- (Control engineering) ISBN 0-8176-4224-2 (alk. paper) 1. Real-time control. 2. Digital control systems. 3. Switching theory. I. Evans, Robin J. II. Title. III. Control engineering (Birkhauser) TJ217.7 .S28 2002 629.8--dc21 2001052614
ISBN 0-8176-4224-2
Printed on acid-free paper.
© 2002 Birkhauser Boston All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Birkhauser Boston, c/o Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.
Printed in the United States of America.
9 8 7 6 5 4 3 2 1
SPIN 10794465
Typesetting: Pages created by the authors in TeX. www.birkhauser.com Birkhauser Boston Basel Berlin A member of BertelsmannSpringer Science+Business Media GmbH
Contents
Preface

1 Introduction
1.1 Hybrid Dynamical Systems
1.2 Controller and Sensor Switching Problems
1.2.1 Switched Dynamical Systems
1.2.2 Switched Controller Systems
1.2.3 Robust Switched Controller Systems
1.2.4 Switched Sensor Systems
1.2.5 Switched Controllers for Strong Stabilization of Linear Systems
1.3 Notation

2 Quadratic State Feedback Stabilizability via Controller Switching
2.1 Introduction
2.2 Quadratic Stabilizability via Asynchronous Controller Switching
2.2.1 Asynchronous Quadratic Stabilizability of Linear Systems
2.3 The S-Procedure
2.3.1 An S-Procedure Result for Two Quadratic Forms
2.3.2 The S-Procedure and Completeness
2.4 A Sufficient Condition for Quadratic Stabilizability
2.5 The Case of Two Basic Controllers
2.6 Quadratic Stabilizability via Synchronous Switching
2.6.1 A Sufficient Condition for Quadratic Stabilizability via Synchronous Controller Switching
2.6.2 Stabilization via Synchronous Switching with Two Basic Controllers
2.7 Illustrative Example
2.8 Proof of Theorem 2.3.1

3 Robust State Feedback Stabilizability with a Quadratic Storage Function and Controller Switching
3.1 Introduction
3.2 Uncertain Systems with Norm-Bounded Uncertainty
3.2.1 Special Case: Sector-Bounded Nonlinearities
3.3 Robust Stabilizability via Asynchronous Controller Switching
3.3.1 Sufficient Conditions for Robust Stabilizability via Asynchronous Controller Switching
3.4 Robust Stabilizability via Synchronous Switching
3.4.1 Sufficient Conditions for Robust Stabilizability via Synchronous Controller Switching
3.5 Illustrative Examples
3.5.1 A Second-Order System
3.5.2 Two-Mass Spring System

4 H∞ Control with Synchronous Controller Switching
4.1 Introduction
4.2 State Feedback H∞ Control Problem
4.3 Output Feedback H∞ Control Problem
4.3.1 The Case of Linear Static Basic Controllers
4.4 Illustrative Example
4.5 Output Feedback H∞ Control over Infinite Time
4.5.1 Construction of an Infinite-Time Output Feedback H∞ Controller

5 Absolute Stabilizability via Synchronous Controller Switching
5.1 Introduction
5.2 Uncertain Systems with Integral Quadratic Constraints
5.3 State Feedback Stabilizability via Synchronous Controller Switching
5.4 Output Feedback Stabilizability via Synchronous Controller Switching
5.5 A Necessary and Sufficient Condition for Output Feedback Stabilizability
5.6 A Constructive Method for Output Feedback Absolute Stabilization
5.7 Systems with Structured Uncertainty
5.8 Illustrative Example

6 Robust Output Feedback Controllability via Synchronous Controller Switching
6.1 Introduction
6.2 Robust Output Feedback Controllability
6.3 A Necessary and Sufficient Condition for Robust Controllability

7 Optimal Robust State Estimation via Sensor Switching
7.1 Introduction
7.2 Robust Observability of Uncertain Linear Systems
7.3 Optimal Robust Sensor Scheduling
7.4 Model Predictive Sensor Scheduling

8 Almost Optimal Linear Quadratic Control Using Stable Switched Controllers
8.1 Introduction
8.2 Optimal Control via Stable Output Feedback Controllers
8.3 Construction of Almost Optimal Stable Switched Controller

9 Simultaneous Strong Stabilization of Linear Time-Varying Systems Using Switched Controllers
9.1 Introduction
9.2 The Problem of Simultaneous Strong Stabilization
9.3 A Method for Simultaneous Strong Stabilization

References

Index
Preface
This book is primarily a research monograph that presents in a unified manner some recent research on a class of hybrid dynamical systems (HDS). The book is intended both for researchers and advanced postgraduate students working in the areas of control engineering, theoretical computer science, or applied mathematics and with an interest in the emerging field of hybrid dynamical systems. The book assumes competence in the basic mathematical techniques of modern control theory.

The material presented in this book derives from a period of fruitful research collaboration between the authors that began in 1994 and is still ongoing. Some of the material contained herein has appeared as isolated results in journal papers and conference proceedings. This work presents this material in an integrated and coherent manner and also presents many new results. Much of the material arose from joint work with students and colleagues, and the authors wish to acknowledge the major contributions made by Ian Petersen, Efstratios Skafidas, Valery Ugrinovskii, David Cook, Iven Mareels, and Bill Moran.

There is currently no precise definition of a hybrid dynamical system; however, in broad terms it is a dynamical system that involves a mixture of discrete-valued and continuous-valued variables. Since the early 1990s, a bewildering array of results have appeared under the umbrella of HDS, ranging from the analysis of elementary on-off control systems to sophisticated mathematical logic-based descriptions of large real-time software systems. HDS problems were first addressed by computer scientists wishing to characterize and understand the behavior of software systems that interact
in real time with the external world. Representation methods such as Petri nets and transition systems were developed to capture the asynchronous and event-driven nature of the problem, which they did extremely well at a modeling level. However, they did not fully capture the real-valued continuous-time dynamical aspects, and many rather arbitrary extensions and augmentations to the modeling methods were required. Modern computer science research on HDS involves the use of various timed logic representations augmented with (sometimes simple) real-valued dynamics.

Control engineers have addressed HDS from a quite different viewpoint. They were partly motivated, for example, by the need to more fully model and understand the behavior of computer implementations of control systems. Typically, in such implementations, the sampled-data feedback dynamics are augmented by a discrete-event system that is used to monitor both real-valued and logic-valued variables, turn controllers on and off, reset integrators, set alarms, and perform other tasks. The discrete-event logic-based component is mostly designed on an ad hoc basis; yet it is often the case that the discrete-event component is by far the dominant part of the control system software. As system complexity increases, it becomes increasingly important to understand how the logic components of the system influence overall system behavior and, if possible, how they can be designed to ensure desirable system characteristics.

This book addresses one aspect of the problem of designing the logic component for a class of HDS called switched controller systems. By restricting ourselves to this class of HDS, we are able to present and prove many results concerning stability and optimality that are of practical significance for control system design. We hope that the reader finds this work both useful and interesting and is inspired to explore further the diverse and challenging area of HDS.
The authors wish to acknowledge the support they have received throughout the preparation of this work from the Department of Electrical and Electronic Engineering at the University of Melbourne, the Department of Electrical and Electronic Engineering at the University of Western Australia, and the School of Electrical Engineering and Telecommunications at the University of New South Wales, Sydney. The authors are also extremely grateful for the financial support they have received from the Australian Research Council and the Centre of Expertise in Networked Decision Systems. Finally, the second author is indebted to Margaret, Jacqueline, and Jamie for their endless support over very many years.
Sydney, Australia Melbourne, Australia
Andrey V. Savkin Robin J. Evans
1 Introduction
1.1 Hybrid Dynamical Systems
The term "hybrid dynamical system" (HDS) has many meanings, the most common of which is a dynamical system that involves the interaction of discrete and continuous dynamics. Such dynamical systems typically contain variables that take values from a continuous set (usually the set of real numbers) and also variables that take values from a discrete set (e.g., the set of symbols {q1, q2, ..., qk}). This model can be used to describe accurately a wide range of real-time industrial processes and their associated supervisory control and monitoring systems. A simple example is a home climate-control system. Due to its on-off nature, the thermostat is modeled as a discrete-event system, whereas the furnace or air conditioner is modeled as a continuous-time system. Other instances of such systems include automotive power-train systems, computer disk drives, robotic systems, automotive engine management, high-level flexible manufacturing systems, intelligent vehicle/highway systems, sea/air traffic management, modern spacecraft control systems, job scheduling, interconnected power systems, and chemical processes (e.g., see [12,20,38,51,70,74,87,137,144]). Actually, most dynamical systems that we have around us can be reasonably described in hybrid terms. Many interesting examples of hybrid dynamical systems from various areas can be found in the introductory book of van der Schaft and Schumacher [142].

One well-known instance of a hybrid system is a dynamical system described by a set of ordinary differential equations with discontinuous or multivalued right-hand sides. This mathematical model can be used, for example, to describe various engineering systems that contain relays, switches, and hysteresis. Properties of this type of hybrid system have been studied in great detail for the past fifty years, especially in the Soviet literature (e.g., see [5,34,37,63,138]). Another area that has been brought into the hybrid systems framework is the study of sliding mode control [140]. Computer-controlled or sampled-data systems [9,32,57] represent a very good example of a hybrid system where a continuous-time plant described by differential equations is controlled by a digital regulator described by difference equations. This is an area of great practical significance because advances in modern digital technology are such that nearly all control systems implemented today are based on microprocessors and sophisticated microcontrollers. If we consider quantization of the continuous-valued variables, then such hybrid systems contain not only continuous-valued signals but discrete-valued variables as well. Furthermore, it is now straightforward to include complex decision-making logic as part of the control loop. Thus an understanding of the hybrid system aspects is essential to characterization of the behavior of these systems.

A further class of hybrid dynamical systems was studied in the research monograph of Matveev and Savkin [73] (see also [71,72,103,112-115]). The research presented in this book has been motivated in part by two very interesting examples of the discrete control of a continuous variable system introduced in the paper by Chase, Serrano, and Ramadge [28]. These examples exhibit what may be regarded as two extremes of complexity in the behavior of hybrid dynamical systems: one is eventually periodic, and the other is chaotic. They are of interest in their own right but have also been used to model certain aspects of flexible manufacturing systems [77,87].
Natural generalizations of the hybrid systems in [28] are various switched flow networks consisting of a number of interconnected buffers. Such networks can be used to model flexible manufacturing assembly/disassembly systems [77,87]. They can also be interpreted as models for various computer and communication systems, especially those with time-sharing schemes [162]. As a general mathematical model for flow networks, [73] employs the concept of a differential automaton introduced by Tavernini [134] and closely related to the mathematical model of a hybrid dynamical system considered by Witsenhausen in 1966 [151]. It is quite typical for differential automata to have no equilibrium points. Therefore, the simplest attractor in such systems is a limit cycle. The main results of the book [73] describe some broad and important classes of hybrid dynamical systems that have the following properties: (i) There exist a finite number of limit cycles.
(ii) Any trajectory of the system converges to one of these limit cycles.
Hence any trajectory of the system is asymptotically periodic, and the system always exhibits a regular, stable, and predictable behavior. This conclusion is very important for applications.

There has been a great deal of research activity in the area of hybrid control systems (e.g., see [6,8,10,11,18,22,24,25,28,39,40,44,52,53,62,66,71,72,84,102,104,105,108,111,113,115,141,143,163]). This activity was motivated in part by the development of the theory of discrete-event dynamical systems in the 1980s and 1990s [27,50,78,86,97]. At the same time there has been a growing interest in hybrid dynamical systems among theoretical computer scientists and mathematical logicians [2,3,8]. In this literature, the most common example is a timed automaton. This is a hybrid system consisting of a set of simple integrators (clocks) coupled with a finite state automaton. Such systems can be used, for example, to model protocols with timing requirements and constraints. The main issue is verification that the system exhibits a desired behavior. The verification problem is nontrivial and in many cases may be undecidable.

The literature in the field of hybrid systems is vast, and we have limited ourselves to references that we found most useful or that contain material supplementing the text. The coverage in this brief overview is by no means complete. We apologize in advance to the many authors whose contributions have not been mentioned.

The area of hybrid systems is a fascinating new discipline bridging control engineering, theoretical computer science, and applied mathematics. In fact, many problems facing engineers and scientists as they seek to use computers to control complex physical systems naturally fit into the HDS framework. The study of hybrid dynamical systems represents a difficult and exciting challenge in control engineering and has been referred to as "the control theory of tomorrow" by SIAM News [41]. We hope that this monograph will help in some small way to meet this challenge.
1.2 Controller and Sensor Switching Problems
In this section, we briefly describe the HDS problems studied in this book.
1.2.1 Switched Dynamical Systems
In this book, we study an important class of hybrid dynamical systems called switched systems. By a switched system, we mean a dynamical system consisting of a finite number of continuous-time subsystems and a logical rule that orchestrates switching between them. It is clear that a switched system naturally fits into the HDS framework. In this case, the continuous state variables include the state variables of all of the continuous-time subsystems, and the discrete variable is the subsystem index. Switched dynamical systems have numerous applications in control of mechanical systems, process control, the automotive industry, power systems, aircraft and traffic control, and many other fields. The book [80] and the survey article [68] contain reports on various developments in the area of switched dynamical systems. From a mathematical viewpoint, a switched system can be described by an ordinary differential equation of the form

ẋ(t) = f_{i(t)}(x(t)),    (1.2.1)
where x(t) ∈ R^n is the state, i(t) ∈ {1, 2, ..., k} is a piecewise constant function of time, called a switching signal, and f1(·), f2(·), ..., fk(·) are given continuous vector functions such that

f1(0) = f2(0) = ··· = fk(0) = 0.

In the particular case where all of the individual subsystems are linear, we obtain a switched linear system

ẋ(t) = A_{i(t)} x(t).    (1.2.2)

Note that most papers on switched control systems study the linear switched system (1.2.2). In the article [68], Liberzon and Morse formulate the following three basic problems in stability and design of switched dynamical systems.
Problem A. Find conditions that guarantee that the switched system (1.2.1) is asymptotically stable for any switching signal.

Problem B. Describe those classes of switching signals for which the switched system (1.2.1) is asymptotically stable.

Problem C. Construct a switching signal that makes the switched system (1.2.1) asymptotically stable.

The last problem is a design problem, which is the most important from a practical viewpoint. This book is mainly concerned with various generalizations of Problem C.
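As a concrete numerical illustration of Problem C (not taken from the book), the following Python sketch simulates a switched linear system of the form (1.2.2) with two hypothetical subsystem matrices that are each unstable on their own. The state-dependent "steepest descent" rule i(x) = argmin_i xᵀA_i x is stabilizing here because the average (A1 + A2)/2 is Hurwitz, so at every state at least one subsystem decreases V(x) = xᵀx.

```python
import numpy as np

# Hypothetical subsystem matrices (illustration only): each A_i is
# unstable on its own (each has an eigenvalue at +1), but the average
# (A1 + A2)/2 = -0.5*I is Hurwitz, which makes the min-rule below work.
A1 = np.array([[1.0, 0.0], [0.0, -2.0]])
A2 = np.array([[-2.0, 0.0], [0.0, 1.0]])

def switching_signal(x):
    """State-dependent rule in the spirit of Problem C: choose the
    subsystem giving the steepest descent of V(x) = x'x, noting that
    x' A_i x = 0.5 * dV/dt along subsystem i."""
    return int(np.argmin([x @ A1 @ x, x @ A2 @ x]))

def simulate(x0, T=10.0, dt=1e-3):
    """Forward-Euler simulation of the switched system (1.2.1)/(1.2.2)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        A = (A1, A2)[switching_signal(x)]
        x = x + dt * (A @ x)
    return x

x_final = simulate([1.0, 1.0])
print(np.linalg.norm(x_final))   # the switched state decays toward 0
```

The resulting closed loop chatters along the surface |x1| = |x2|, where the two descent rates are equal; this is exactly the sliding mode behavior discussed later for systems with discontinuous right-hand sides.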
1.2.2 Switched Controller Systems

The system (1.2.1) together with an asymptotically stabilizing switching signal is called a switched controller system.
There are several theoretically interesting and practically significant problems concerning the use of switched controllers. In some situations, it is possible to design several controllers and then switch between them to provide a performance improvement over a fixed controller as well as new functionality [7,33,79]. In fact, sometimes it is easier to find a switching controller performing a desired task than to find a continuous one. Moreover, it is well known that there are dynamical systems that cannot be stabilized by a continuous controller, although a stabilizing discontinuous feedback can be found (e.g., see [45,46]). Control techniques based on switching between different controllers have been applied extensively in recent years in adaptive control, where they have been shown to improve transient response (e.g., see [65,82]). In other situations, the choice of linear or nonlinear controllers available to the designer is limited, and the design task is to use the available set of controllers in an optimal fashion [106-109,128,129,132]. The latter problem includes, for example, the optimal switching between gears in a gearbox and the optimal switching between heating and cooling modes of operation in an air-conditioning plant.
1.2.3 Robust Switched Controller Systems
One of the key requirements for any control system is that of robustness. This is the requirement that the control system maintain an adequate level of performance in the face of significant plant uncertainty. Such plant uncertainties may be due to variations in the plant parameters and the effects of nonlinearities and unmodeled dynamics that have not been included in the plant model. In fact, the requirement for robustness is one of the main reasons for using feedback in control system design. Hence, the design of robust control systems is a key area of research in the field of control engineering. This book is specifically concerned with the design of robust hybrid dynamical systems.

One approach to the design of robust control systems involves the notion of an uncertain system. In the research literature, various classes of uncertain systems have been investigated (e.g., see [15,30,31,60,75,117,118,124,160]). Uncertain systems are mathematical models that include representations of the plant uncertainties. The choice of uncertain system to model a given plant will generally involve a trade-off between how well the chosen uncertainty representation models the real uncertainty in the plant and the mathematical tractability of the resulting control system design problem. In this book, we will concentrate on classes of uncertain systems as models for HDS in which the uncertainties are described by a norm-bounded constraint or a certain integral quadratic constraint. These uncertainty descriptions have attracted considerable attention (e.g., see [60,75,93,94,99,116,118-121,123-125,159,160]) and have been shown to provide a good representation of the uncertainty arising in many real control problems. Moreover, our investigations indicate that they lead to
tractable mathematical frameworks for many problems arising in the control of hybrid systems. Most of the book concentrates on the case of switched controller uncertain systems of the form
ẋ(t) = f(t, x(t), u(t), w(t)),
y(t) = g(t, x(t), v(t)),    (1.2.3)

where x(t) ∈ R^n is the state, w(t) ∈ R^p and v(t) ∈ R^l are the uncertainty inputs, u(t) ∈ R^h is the control input, y(t) ∈ R^l is the measured output, and f(·,·,·,·) and g(·,·,·) are continuous matrix functions. We assume that the uncertainty inputs w(·) and v(·) belong to a given class. Furthermore, suppose that we have a collection of given controllers

u1(t) = U1(t, y(t)),
u2(t) = U2(t, y(t)),
...
uk(t) = Uk(t, y(t)),    (1.2.4)
where U1(·,·), U2(·,·), ..., Uk(·,·) are given continuous matrix functions. The controllers (1.2.4) are called basic controllers. The main problem considered in this book is to construct a rule for switching between the basic controllers (1.2.4) based on the measurement y(·) such that the system (1.2.3) is robustly asymptotically stable. Moreover, in many cases, we are able to derive necessary and sufficient conditions for the existence of a robustly stabilizing switching rule. A block diagram of the type of switched controller system we have in mind is shown in Figure 1.2.1. Chapters 2-6 of the book consider a number of stabilization and robust control problems for the different classes of switched controller systems (1.2.3), (1.2.4). These problems include quadratic stabilizability, robust stabilizability, absolute stabilizability, H∞ control, and robust controllability over a finite time interval. We consider various classes of system uncertainties, state and output feedback cases, and synchronous and asynchronous controller switching. Many of the results presented in Chapters 2-6 were originally presented in the papers [101,106-109,128,129,132].
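The closed-loop structure of (1.2.3)-(1.2.4) can be sketched in code: a plant driven by both a control input and an uncertainty input, a set of basic output feedback controllers, and a switch that selects one controller based on the measurement. The scalar plant, the two controllers, and the switching rule below are invented for illustration only; they are not the constructions developed in later chapters.

```python
import numpy as np

# All plant data, controllers, and the switching rule here are
# hypothetical illustrations, not results from the book.
def plant_step(x, u, w, dt):
    """One Euler step of a scalar instance of (1.2.3):
    xdot = f(t, x, u, w) = -0.5*x + u + w."""
    return x + dt * (-0.5 * x + u + w)

# Two basic output feedback controllers, as in (1.2.4).
basic_controllers = [
    lambda t, y: -1.0 * y,   # U1: low gain
    lambda t, y: -5.0 * y,   # U2: high gain
]

def switching_rule(y):
    """Toy measurement-based rule: high gain far from the origin."""
    return 1 if abs(y) > 0.5 else 0

x, dt = 2.0, 1e-3
for k in range(int(5.0 / dt)):
    t = k * dt
    y = x                          # y = g(t, x, v) with v = 0
    u = basic_controllers[switching_rule(y)](t, y)
    w = 0.1 * np.sin(10.0 * t)     # a bounded uncertainty input
    x = plant_step(x, u, w, dt)
print(abs(x))                      # remains small despite w
```

The point of the sketch is only the information flow of Figure 1.2.1: the switch observes y(·), not the full state, and at each instant exactly one basic controller drives the plant.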
1.2.4 Switched Sensor Systems
Another important class of switched dynamical systems studied in this book is that of uncertain switched sensor systems. More precisely, we consider an uncertain dynamical system with several sensors (measured outputs). Only one of these measurements is available to an intelligent or flexible state estimator at any time, but the estimator can dynamically switch the sensor mode. The corresponding design problem is to construct a rule for sensor switching that produces an optimal (in some sense) robust state estimate for the uncertain system. The deterministic interpretation of Kalman filtering provided in [19,93,120,125] is employed in Chapter 7 for set-valued state estimation of uncertain switched sensor systems. Chapter 7 considers an optimal robust set-valued state estimation problem for an uncertain sensor switched dynamical system in which the uncertainty is described by an integral quadratic constraint. These results were originally published in the conference paper [110].
1.2.5 Switched Controllers for Strong Stabilization of Linear Systems

The last two chapters of the book address various strong stabilization problems for linear systems. The term "strong stabilization" means stabilization in the class of (bounded-input bounded-output) stable controllers. These problems are good examples of the situation where switched controllers are used not because the choice of controllers available to the designer is limited as in Chapters 2-6 but because switched controllers have certain advantages over linear continuous-time ones. More precisely, the problems of strong stabilization considered in Chapters 8 and 9 may not have solutions in the class of linear controllers. Moreover, even if a solution does exist, the dimension of the required linear continuous-time compensator may tend to infinity. However, we show that constructive and real-time-implementable solutions can be found in the class of discontinuous switched controllers both for the problem of almost optimal strong stabilization of linear time-invariant systems (Chapter 8) and the problem of simultaneous strong stabilization of a finite collection of linear time-varying systems (Chapter 9). The results of Chapters 8 and 9 were originally published in [126,127].
1.3 Notation

In the remainder of this book, we frequently use the following notations.

{e1, e2, ..., en} — the set consisting of the elements e1, e2, ..., en.
‖·‖ — the standard Euclidean norm in R^n, i.e., ‖x‖ = (xᵀx)^{1/2}.

det A — the determinant of the square matrix A.

v(t + 0) — the limit of the function v(·) at the point t from the right, i.e., v(t + 0) ≜ lim_{ε>0, ε→0} v(t + ε).

L2[T1, T2) — the Hilbert space of square integrable vector-valued functions defined on the time interval [T1, T2), where T1 < T2 < ∞.
[Diagram: the plant output is fed to basic controllers 1 through k, and a controller switch selects which control signal u1, ..., uk is applied to the plant.]

FIGURE 1.2.1. Switched controller system.
2 Quadratic State Feedback Stabilizability via Controller Switching
2.1 Introduction
In this chapter, we consider the problem of quadratic stabilizability of a given system via state feedback controller switching. The switched controller is defined by a finite collection of given continuous-time controllers called the basic controllers. Our stabilizability problem is to design a suitable rule for switching from one basic controller to another. We introduce the concept of quadratic state feedback stabilizability via controller switching. Roughly speaking, a system is said to be quadratically stabilizable via controller switching if there exists a switched controller such that the closed-loop system is stable with a quadratic Lyapunov function. We derive necessary and sufficient conditions for quadratic state feedback stabilizability via controller switching both for asynchronously switched controller systems, where controller switching is determined by the value of the plant state, and for the more practically important problem of synchronously switched controller systems, where the controller can only be switched at prespecified switching times. We also propose algorithms to determine switching sequences that ensure quadratic stability of the corresponding closed-loop system. Furthermore, we give a number of more constructive sufficient conditions for quadratic stabilizability via controller switching. These conditions are based on a well-known theoretical tool from robust control theory called the S-procedure [156]. We present some relevant results on the S-procedure following the research monographs [37,94]. Some of the results
of this chapter were originally published in [132]. The remainder of the chapter is organized as follows. Section 2.2 introduces the definition of quadratic stabilizability via asynchronous controller switching and presents a necessary and sufficient condition for this kind of stabilizability. Some relevant results on the S-procedure are presented in Section 2.3. They will also be applied in Chapter 3. A sufficient condition for quadratic stabilizability via asynchronous controller switching is given in Section 2.4. A necessary and sufficient condition for quadratic stabilizability via asynchronous controller switching with two basic controllers is derived in Section 2.5. Section 2.6 addresses the problem of quadratic stabilizability via synchronous controller switching. A simple second-order example illustrating the main results of the current chapter is considered in Section 2.7. Finally, Section 2.8 presents a proof of the S-procedure result for two quadratic forms (Finsler's Theorem) stated in Section 2.3.
2.2
Quadratic Stabilizability via Asynchronous Controller Switching
In this section, we introduce the concept of quadratic stabilizability via state feedback asynchronous controller switching. Consider the nonlinear control system
ẋ(t) = f(x(t), u(t)),   (2.2.1)

where x(t) ∈ R^n is the state, u(t) ∈ R^h is the control input, and f(·) is a continuous function.
Asynchronous Controller Switching
In this section, we consider the class of state feedback control laws generated by switching between the collection of given state feedback controllers
u_1(t) = U_1(x(t)),
u_2(t) = U_2(x(t)),
⋮
u_k(t) = U_k(x(t)),   (2.2.2)

where U_1(·), U_2(·), …, U_k(·) are given continuous matrix functions of corresponding dimensions.

Definition 2.2.1 The state feedback controllers (2.2.2) are called basic controllers.
Let I(x) : R^n → {1, 2, …, k} be a symbolic piecewise constant function that maps the state of the system (2.2.1) to the set of symbols {1, 2, …, k}. Then, for any such function, we will consider the following class of asynchronous switching state feedback controllers:

u(t) = U_{i_x}(x(t))  ∀t ∈ [0, ∞),  where  i_x ≜ I(x(t)).   (2.2.3)
Hence an asynchronous controller switching strategy is a rule for switching from one basic controller to another based on the measured value of the system state.

Notation
We let 𝓛 denote the collection of all controllers of the form (2.2.2), (2.2.3) under asynchronous controller switching.

Remark
The closed-loop switched controller system (2.2.1), (2.2.3) can be represented in the form

ẋ(t) = g(x(t)).   (2.2.4)
The vector function g(x) may be discontinuous on the switching boundaries. An inherent difficulty of systems with a discontinuous right-hand side is that the solutions of (2.2.4) are not always well-defined in a classical sense, and the so-called sliding mode phenomenon is possible. This can be overcome by the introduction of Filippov solutions. In this book, we do not pay any attention to these technicalities, and the classical book of Filippov [34] on differential equations with discontinuous right-hand sides is recommended to all interested readers.

Definition 2.2.2 If there exists a state feedback controller of the form (2.2.2), (2.2.3) such that the closed-loop system (2.2.1), (2.2.3) is globally asymptotically stable, then the system (2.2.1) is said to be globally asymptotically stabilizable via asynchronous controller switching with the basic controllers (2.2.2).
Let P = P' > 0 be a square matrix of order n, and introduce a quadratic Lyapunov function of the form

V_P(x) ≜ x'Px.   (2.2.5)
Definition 2.2.3 Suppose that there exists a state feedback controller of the form (2.2.2), (2.2.3), a matrix P = P' > 0, and a constant ε > 0 such that for the derivative V̇_P(x(t)) of the quadratic Lyapunov function (2.2.5) along trajectories of the closed-loop system (2.2.1), (2.2.3), the condition

V̇_P(x(t)) ≤ -ε‖x(t)‖²   (2.2.6)

holds for all x ∈ R^n. Then, the system (2.2.1) is said to be quadratically stabilizable via asynchronous controller switching with the basic controllers (2.2.2).
Remark

It can be easily shown that quadratic stabilizability implies global asymptotic stabilizability (e.g., see [145]).
2.2.1
Asynchronous Quadratic Stabilizability of Linear Systems
In this subsection, we consider the problem of quadratic stabilizability via asynchronous controller switching for linear time-invariant systems of the form

ẋ(t) = Ax(t) + Bu(t),   (2.2.7)
where x(t) ∈ R^n is the state, u(t) ∈ R^h is the control input, t ∈ [0, ∞), and A, B are given constant matrices. Furthermore, we assume that the basic state feedback controllers are linear:

u_1(t) = L_1 x(t),
u_2(t) = L_2 x(t),
⋮
u_k(t) = L_k x(t).   (2.2.8)

To present the main result of this subsection, we need the following definition.
Definition 2.2.4 Let Z_1 = Z_1', Z_2 = Z_2', …, Z_k = Z_k' be given square matrices. The collection of matrices {Z_1, Z_2, …, Z_k} is said to be complete if for any x_0 ∈ R^n there exists an index i ∈ {1, 2, …, k} such that x_0' Z_i x_0 ≤ 0. Furthermore, the collection {Z_1, Z_2, …, Z_k} is said to be strictly complete if for any x_0 ∈ R^n \ {0} there exists an index i ∈ {1, 2, …, k} such that x_0' Z_i x_0 < 0.

We are now in a position to present a necessary and sufficient condition for quadratic stabilizability via asynchronous controller switching.
Theorem 2.2.5 Consider the linear system (2.2.7) and the basic linear controllers (2.2.8). Then the following statements are equivalent.

(i) The system (2.2.7) is quadratically stabilizable via asynchronous controller switching with the basic controllers (2.2.8).

(ii) There exists a square matrix P = P' > 0 such that the set of matrices

Z_1 = (A + BL_1)'P + P(A + BL_1),
Z_2 = (A + BL_2)'P + P(A + BL_2),
⋮
Z_k = (A + BL_k)'P + P(A + BL_k)   (2.2.9)

is strictly complete.

Furthermore, suppose that condition (ii) holds and introduce a symbolic function I(x) as I(x) ≜ i_x, where i_x is an index for which the minimum in

min_{i=1,2,…,k} x'((A + BL_i)'P + P(A + BL_i))x

is achieved. Then, asynchronous controller switching with the basic controllers (2.2.8) defined by I(x) quadratically stabilizes the system (2.2.7).

Proof
(i) ⇒ (ii) For any index i = 1, 2, …, k, introduce a square matrix

Z_i ≜ (A + BL_i)'P + P(A + BL_i).

If the system (2.2.7) is quadratically stabilizable via asynchronous controller switching with the Lyapunov function V_P(x) = x'Px, then for the closed-loop system we have

V̇_P(x(t)) = x(t)'((A + BL_i)'P + P(A + BL_i))x(t),   (2.2.10)

where i = I(x(t)). Hence, quadratic stabilizability implies that

x'((A + BL_i)'P + P(A + BL_i))x = x'Z_i x < 0

if x ≠ 0. In other words, we have proved that for any x ∈ R^n, x ≠ 0, there exists an index i ∈ {1, 2, …, k} such that x'Z_i x < 0. This condition is the definition of strict completeness of the set of matrices {Z_1, Z_2, …, Z_k}.

(ii) ⇒ (i) Assume that the set of matrices
{Z_1, Z_2, …, Z_k} is strictly complete. For any x ≠ 0, introduce the function

a(x) ≜ min_{i=1,2,…,k} x'((A + BL_i)'P + P(A + BL_i))x.

The strict completeness of {Z_i} implies that a(x) < 0 for any x ≠ 0. Furthermore, let

a_0 ≜ max_{x:‖x‖=1} a(x).

Because the function a(x) is continuous and a(x) < 0, the maximum is achieved and a_0 < 0. Finally, the inequality (2.2.6) holds with the asynchronous switching controller defined in the statement of the theorem and ε = -a_0. This completes the proof of the theorem. □
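The quantities in Theorem 2.2.5 are directly computable, which makes the switching rule easy to prototype. The sketch below is illustrative rather than the book's example: the plant, the gains L_1, L_2, and the choice of P are assumptions made for the demonstration (P is obtained from a Lyapunov equation for the first closed loop, which forces Z_1 < 0 and hence strict completeness of {Z_1, Z_2}), and the integration is plain forward Euler.

```python
import numpy as np

# Illustrative plant and basic gains (assumptions, not the book's example)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
L = [np.array([[-1.0, -2.0]]),    # stabilizing gain
     np.array([[1.0, 0.0]])]     # destabilizing gain
A_cl = [A + B @ Li for Li in L]

# P solving (A+BL1)'P + P(A+BL1) = -I, via the vectorized (Kronecker) form
n = A.shape[0]
A1 = A_cl[0]
M = np.kron(A1.T, np.eye(n)) + np.kron(np.eye(n), A1.T)
P = np.linalg.solve(M, -np.eye(n).reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)                       # symmetrize against round-off

Z = [Ai.T @ P + P @ Ai for Ai in A_cl]    # the matrices (2.2.9)

def strictly_complete_sampled(Z_list, n_samples=2000, seed=0):
    """Sampled (necessary-only) check of strict completeness: every sampled
    unit direction x must give x' Z_i x < 0 for some i."""
    rng = np.random.default_rng(seed)
    d = Z_list[0].shape[0]
    for _ in range(n_samples):
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        if min(x @ Zi @ x for Zi in Z_list) >= 0.0:
            return False
    return True

def switch_index(x):
    """Asynchronous rule of Theorem 2.2.5: argmin_i x' Z_i x."""
    return int(np.argmin([x @ Zi @ x for Zi in Z]))

# Forward-Euler simulation of the switched closed loop
x, dt = np.array([1.0, 1.0]), 0.01
for _ in range(1000):                     # 10 seconds
    x = x + dt * (A_cl[switch_index(x)] @ x)

print(strictly_complete_sampled(Z), np.linalg.norm(x))
```

Because V̇_P(x) equals the minimized quadratic form along the chosen closed loop, the state norm shrinks along the switched trajectory, as the final print confirms.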
2.3 The S-Procedure

The term "S-procedure" was introduced by Aizerman and Gantmacher in the monograph [1] (see also [36]) to denote a method that had been frequently used in the area of nonlinear control (e.g., see [37,93,94,156]). More recently, similar results have been extensively used in the Western control theory literature, and the name S-procedure has remained (e.g., see [23,75,119,123,130]). A feature of the S-procedure approach is that it allows nonconservative results to be obtained for control problems involving structured uncertainty. In fact, the S-procedure provides a method for converting robust control problems involving structured uncertainty into parameter-dependent problems involving unstructured uncertainty. Furthermore, S-procedure methods find application in many robust control problems (e.g., see [93,94,119,122,123]). A general and systematic description of the S-procedure can be found, for example, in the monograph [94].

Let real-valued functionals g_0(x), g_1(x), …, g_k(x) be defined on an abstract space X. Also, let τ_1, …, τ_k be a collection of real numbers that form a vector τ = [τ_1 … τ_k]', and let

S(τ, x) ≜ g_0(x) - Σ_{j=1}^k τ_j g_j(x).   (2.3.1)

We consider the following conditions on the functionals g_0(x), g_1(x), …, g_k(x).

S-procedure, Condition 1:

g_0(x) ≥ 0 for all x such that g_1(x) ≥ 0, …, g_k(x) ≥ 0.   (2.3.2)

S-procedure, Condition 2:
There exist constants τ_1 ≥ 0, …, τ_k ≥ 0 such that Σ_{i=1}^k τ_i > 0 and

S(τ, x) ≥ 0  ∀x ∈ X.   (2.3.3)
Clearly, condition (2.3.3) implies condition (2.3.2). The term S-procedure refers to the procedure of replacing condition (2.3.2) with the stronger condition (2.3.3). One can easily find examples where condition (2.3.3) does not follow from condition (2.3.2). However, if one imposes certain additional restrictions on the functionals g_0(x), g_1(x), …, g_k(x), then the implication from (2.3.2) to (2.3.3) may be true. In this case, the S-procedure is said to be lossless for the condition g_0(x) ≥ 0 and the constraints g_1(x) ≥ 0, …, g_k(x) ≥ 0. In a similar fashion, the losslessness of the S-procedure can be defined for the condition g_0(x) > 0 and the constraints g_1(x) > 0, …, g_k(x) > 0. Other combinations are also possible. It should be pointed out that conditions of the form (2.3.2) often arise in problems involving the construction of Lyapunov functions. In a typical application of the S-procedure, the functionals g_0(x), g_1(x), …, g_k(x) depend on physical parameters. One often seeks regions in the space of parameters where condition (2.3.2) is satisfied. The presence of multiple constraints g_1(x) ≥ 0, …, g_k(x) ≥ 0 usually brings additional difficulty to the problem. However, if the S-procedure is applied, the problem can be reduced to one that does not involve multiple constraints. Let A and B be the domains in the set of physical parameters defined by conditions (2.3.2) and (2.3.3), respectively. Because condition (2.3.3) implies condition (2.3.2), it immediately follows that B ⊆ A; that is, the application of the S-procedure leads to a restriction on the set of admissible physical parameters. However, if the S-procedure is lossless, then A = B and no such restriction on the set of admissible physical parameters occurs.
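For quadratic functionals g_j(x) = x'G_j x, condition (2.3.3) amounts to positive semidefiniteness of G_0 - Σ τ_j G_j, so the admissible multipliers can be found by an eigenvalue test. A hedged sketch with a single constraint; the matrices G_0 and G_1 are illustrative assumptions, not from the text:

```python
import numpy as np

# g0(x) = x1^2 - 0.5*x2^2 is nonnegative whenever g1(x) = x1^2 - x2^2 >= 0,
# so Condition 1 (2.3.2) holds for this illustrative pair.
G0 = np.diag([1.0, -0.5])
G1 = np.diag([1.0, -1.0])

# Condition 2 (2.3.3) search: tau >= 0 with G0 - tau*G1 positive
# semidefinite, so that S(tau, x) = x'(G0 - tau*G1)x >= 0 for all x.
taus = [t for t in np.linspace(0.0, 5.0, 501)
        if np.min(np.linalg.eigvalsh(G0 - t * G1)) >= -1e-12]
print(taus[0], taus[-1])   # the admissible multipliers form an interval

# Sanity check of (2.3.3) => (2.3.2) on sampled points.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    if x @ G1 @ x >= 0.0:
        assert x @ G0 @ x >= -1e-12
```

For this pair the search finds multipliers in the interval [0.5, 1]; any one of them certifies the constrained nonnegativity without checking the constraint pointwise.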
2.3.1
An S-Procedure Result for Two Quadratic Forms
In this subsection, we present a well-known theorem on the S-procedure. Although the first applications of the S-procedure in control theory began in the 1950s, the corresponding mathematical ideas can be traced back as far as Hausdorff [43] and Toeplitz [135]. An interest in establishing the equivalence between conditions (2.3.2) and (2.3.3) for the case of two quadratic forms arose in connection with a problem of classifying definite matrix pencils. In the Western literature, the main result on this problem is commonly referred to as Finsler's Theorem (e.g., see [139]). This paper gives a comprehensive survey of this problem, including an extensive Western bibliography and historical notes. The following theorem proves the losslessness of the S-procedure for the case of two quadratic forms [156].

Theorem 2.3.1 Let X be a real linear vector space and let g_0(x), g_1(x) be quadratic forms on X satisfying the following condition: g_0(x) ≥ 0 for all
x ∈ X such that g_1(x) ≥ 0. Then there exist constants τ_0 ≥ 0 and τ_1 ≥ 0 such that τ_0 + τ_1 > 0 and

τ_0 g_0(x) - τ_1 g_1(x) ≥ 0   (2.3.4)

for all x ∈ X.
The proof of Theo rem 2.3.1 will be given in the last section of this chapter.
2.3.2
The S-Procedure and Completeness
In this subsection, we reformulate S-procedure results in terms of comp lete and strict ly complete collections of square matrices adopt ed in this book. Theo rem 2.3.2 Let {Zl, Z2, . " ,Zk} be a given collection of squar e matrices. Suppose that there exist constants T1 > 0, T2 > 0, ... , Tk > 0 such that L:~=1 Ti > 0 and
(2.3.5) Then, the collection {Zl, Z2, ... ,Zk} is complete. Similarly if there exist constants Ti as before such that the quantity in (2.3.5) is strictly negative then the collection {Zl, Z2, ... ,Zk} is strictly complete. Proof
The statement of this theorem follows from the implication (2.3.3) ⇒ (2.3.2) of the S-procedure. □
Remark

Note that condition (2.3.5) is only a sufficient condition for completeness. However, in the case k = 2, this condition is equivalent to completeness. More precisely, the following theorem can be proved.

Theorem 2.3.3 Let Z_1 = Z_1' and Z_2 = Z_2' be given square matrices. Then, the following two statements are equivalent.

(i) The collection {Z_1, Z_2} is strictly complete.

(ii) There exist constants τ_1 ≥ 0 and τ_2 ≥ 0 such that τ_1 + τ_2 > 0 and

τ_1 Z_1 + τ_2 Z_2 < 0.   (2.3.6)
Proof

Introduce the following two quadratic forms:

g_0(x) ≜ -x'Z_1 x,  g_1(x) ≜ x'Z_2 x.

Now the statement of this theorem immediately follows from Theorem 2.3.1. □
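Condition (2.3.6) of Theorem 2.3.3 can be tested by a crude grid search over the multipliers (a proper implementation would use an LMI solver; the grid here is only a heuristic check). The matrices Z_1 and Z_2 below are illustrative assumptions:

```python
import numpy as np

def exists_negative_combination(Z1, Z2, grid=200):
    """Search tau in [0, 1] for tau*Z1 + (1 - tau)*Z2 < 0, i.e.,
    condition (2.3.6) with tau1 = tau, tau2 = 1 - tau."""
    for tau in np.linspace(0.0, 1.0, grid + 1):
        Zc = tau * Z1 + (1.0 - tau) * Z2
        Zc = 0.5 * (Zc + Zc.T)                 # symmetric part
        if np.max(np.linalg.eigvalsh(Zc)) < 0.0:
            return True
    return False

# Illustrative symmetric pair: neither matrix is negative definite alone
# (each has eigenvalues 2 and -4), but their average is -I.
Z1 = np.array([[-1.0, 3.0], [3.0, -1.0]])
Z2 = np.array([[-1.0, -3.0], [-3.0, -1.0]])
print(exists_negative_combination(Z1, Z2))
```

By Theorem 2.3.3 the found multiplier certifies that {Z_1, Z_2} is strictly complete, even though neither matrix is negative definite on its own.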
2.4
A Sufficient Condition for Quadratic Stabilizability
In this section, we derive some sufficient conditions for quadratic stabilizability via asynchronous controller switching.

Definition 2.4.1 A square matrix A is said to be stable if all its eigenvalues lie in the open left half complex plane.

It is well known that for any stable matrix A there exists a matrix P = P' > 0 such that

A'P + PA < 0.   (2.4.1)
Notation

Let A be a stable matrix. We will use the notation P(A) for any positive-definite matrix P such that the inequality (2.4.1) holds.

We are now in a position to present a sufficient condition for quadratic stabilizability via asynchronous controller switching.

Theorem 2.4.2 Consider the linear system (2.2.7) and the basic linear controllers (2.2.8). Suppose that there exist constants τ_1 ≥ 0, τ_2 ≥ 0, …, τ_k ≥ 0 such that Σ_{i=1}^k τ_i = 1 and the matrix

A_τ ≜ A + B Σ_{i=1}^k τ_i L_i

is stable. Then the system (2.2.7) is quadratically stabilizable via asynchronous controller switching with the basic controllers (2.2.8).

Furthermore, one of the quadratically stabilizing asynchronously switching controllers can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i_x, where i_x is an index for which the minimum in

min_{i=1,2,…,k} x'((A + BL_i)'P(A_τ) + P(A_τ)(A + BL_i))x

is achieved. Then, asynchronous controller switching with the basic controllers (2.2.8) defined by I(x) quadratically stabilizes the system (2.2.7).
Proof

If the matrix A_τ introduced in the statement of the theorem is stable, then there exists a positive-definite matrix P(A_τ) such that

A_τ' P(A_τ) + P(A_τ) A_τ < 0.

This inequality can be rewritten as

Σ_{i=1}^k τ_i ((A + BL_i)'P(A_τ) + P(A_τ)(A + BL_i)) < 0,

or as

Σ_{i=1}^k τ_i Z_i < 0,

where

Z_i ≜ (A + BL_i)'P(A_τ) + P(A_τ)(A + BL_i).

Hence, Theorem 2.3.2 implies that {Z_1, Z_2, …, Z_k} is strictly complete. Finally, the statement of this theorem now follows from Theorem 2.2.5. This completes the proof of the theorem. □

Remark
In other words, Theorem 2.4.2 states that a sufficient condition for the plant (2.2.7) to be quadratically stabilizable via asynchronous controller switching with the set of basic controllers (2.2.8) is that there exists a linear state feedback stabilizing controller that can be represented as a convex combination of linear controllers from the set (2.2.8).
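The remark suggests a simple computational test: search the convex combinations of the basic gains for one that stabilizes the plant. A sketch under assumed data; the plant and gains below are hypothetical, chosen so that each gain fails on its own while a combination succeeds:

```python
import numpy as np

def stabilizing_convex_combination(A, B, L1, L2, grid=100):
    """Search tau in [0, 1] for a stable A_tau = A + B(tau*L1 + (1-tau)*L2),
    i.e., the condition of Theorem 2.4.2 for k = 2."""
    for tau in np.linspace(0.0, 1.0, grid + 1):
        A_tau = A + B @ (tau * L1 + (1.0 - tau) * L2)
        if np.max(np.linalg.eigvals(A_tau).real) < 0.0:
            return tau, A_tau
    return None

# Hypothetical data (not the book's example): each gain alone is
# destabilizing, yet a convex combination of the two is stabilizing.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
L1 = np.array([[-3.0, 1.0]])
L2 = np.array([[1.0, -3.0]])

for Li in (L1, L2):
    assert np.max(np.linalg.eigvals(A + B @ Li).real) > 0.0  # unstable alone

result = stabilizing_convex_combination(A, B, L1, L2)
tau, A_tau = result
print(tau, np.linalg.eigvals(A_tau).real.max())
```

Any returned A_tau can then be fed into a Lyapunov solver to obtain P(A_tau) and the switching rule of Theorem 2.4.2.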
2.5 The Case of Two Basic Controllers

In this section, we apply Finsler's Theorem, the S-procedure result for two quadratic forms, to derive a necessary and sufficient condition for quadratic stabilizability via asynchronous controller switching in the case of two basic controllers. Suppose that k = 2, and consider the case of the two basic controllers

u_1(t) = L_1 x(t),  u_2(t) = L_2 x(t).   (2.5.1)

Now we are in a position to present the main result of this section.

Theorem 2.5.1 Consider the linear system (2.2.7) and the basic linear controllers (2.5.1). Then the following two statements are equivalent.
(i) The system (2.2.7) is quadratically stabilizable via asynchronous controller switching with the basic controllers (2.5.1).

(ii) There exist constants τ_1 ≥ 0 and τ_2 ≥ 0 such that τ_1 + τ_2 = 1 and the matrix

A + B(τ_1 L_1 + τ_2 L_2)

is stable.

Furthermore, if condition (ii) holds, then one quadratically stabilizing asynchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i_x, where i_x is an index for which the minimum in

min_{i=1,2} x'((A + BL_i)'P(A_τ) + P(A_τ)(A + BL_i))x

is achieved, with A_τ ≜ A + B(τ_1 L_1 + τ_2 L_2). Then the asynchronous controller switching with the basic controllers (2.5.1) defined by I(x) quadratically stabilizes the system (2.2.7).

Proof

(i) ⇒ (ii) This implication follows from Theorems 2.3.3 and 2.2.5.

(ii) ⇒ (i) This part of the theorem is a special case of Theorem 2.4.2. □
Remark

In other words, Theorem 2.5.1 states that a necessary and sufficient condition for the plant (2.2.7) to be quadratically stabilizable via an asynchronously switching controller with two basic controllers is that there exists a linear state feedback stabilizing controller that can be represented as a convex combination of the two linear basic controllers.
2.6
Quadratic Stabilizability via Synchronous Switching
In this section, we address the case where controller switching can only occur at prespecified times.

Synchronous Controller Switching

Suppose that we have the same collection of given linear controllers (2.2.2), but now they can only be switched at the discrete times jT, j = 0, 1, …, where T > 0 is the switching interval.

More precisely, let T > 0 be a given time, and for each j ∈ N let I_j(·) be a function that maps the set of plant state measurements {x(·)|_0^{jT}} to the set of symbols {1, 2, …, k}. Then, for any sequence of such functions
""I'
22
2. Quad ratic State Feedb ack Stabil izabil ity via Contr oller Switc hing
{I_j}_{j=0}^∞, we will consider the following dynamic nonlinear state feedback controller:

u(t) = U_{i_j}(x(t))  ∀j ∈ {0, 1, 2, …}  ∀t ∈ [jT, (j+1)T),  where  i_j = I_j(x(·)|_0^{jT}).   (2.6.1)
A synchronous controller switching strategy is a rule for switching from one basic controller to another at the discrete times jT.

Definition 2.6.1 If there exists a state feedback controller of the form (2.2.2), (2.6.1) such that the closed-loop system (2.2.1), (2.6.1) is globally asymptotically stable, then the system (2.2.1) is said to be globally asymptotically stabilizable via synchronous controller switching with the basic controllers (2.2.2).
Let P = P' > 0 be a square matrix of order n, and introduce a quadratic Lyapunov function of the form

V_P(x) ≜ x'Px.   (2.6.2)
Definition 2.6.2 Suppose that there exists a state feedback controller of the form (2.2.2), (2.6.1), a matrix P = P' > 0, and a constant ε > 0 such that the following condition holds for the quadratic Lyapunov function (2.6.2) along all the solutions of the closed-loop system (2.2.1), (2.6.1):

V_P(x((j+1)T)) - V_P(x(jT)) ≤ -ε‖x(jT)‖²   (2.6.3)

for all j = 0, 1, 2, …. Then, the system (2.2.1) is said to be quadratically stabilizable via synchronous controller switching with the basic controllers (2.2.2).

Remark
As in the case of asynchronous controller switching, it can be easily shown that quadratic stabilizability implies global asymptotic stabilizability (e.g., see [145]).

Now we consider the problem of quadratic stabilizability of the linear system (2.2.7) via synchronous controller switching with the linear basic controllers (2.2.8). Consider the plant (2.2.7), and let T > 0 be the switching period. Define
Φ_i ≜ exp((A + BL_i)T)   (2.6.4)

to be the state transition matrix for the system (2.2.7) under the influence of the linear controller i between the time instants jT and (j+1)T. Here i = 1, 2, …, k. We now present the main result of this section.
Theorem 2.6.3 Consider the linear system (2.2.7) and the basic linear controllers (2.2.8). Let the matrices Φ_i be defined by (2.6.4). Then the following statements are equivalent.

(i) The system (2.2.7) is quadratically stabilizable via synchronous controller switching with the basic controllers (2.2.8).

(ii) There exists a square matrix P = P' > 0 such that the set of matrices

Z_1 ≜ Φ_1'PΦ_1 - P,
Z_2 ≜ Φ_2'PΦ_2 - P,
⋮
Z_k ≜ Φ_k'PΦ_k - P   (2.6.5)

is strictly complete.

Furthermore, suppose that condition (ii) holds and introduce a symbolic function I(x) as I(x) ≜ i_x, where i_x is an index for which the minimum in

min_{i=1,2,…,k} x'Z_i x

is achieved. Then, synchronous controller switching with the basic controllers (2.2.8) defined by I(x(jT)) quadratically stabilizes the system (2.2.7).

Proof
(i) ⇒ (ii) For any index i = 1, 2, …, k, introduce a square matrix

Z_i ≜ Φ_i'PΦ_i - P.

If the system (2.2.7) is quadratically stabilizable via synchronous controller switching with the Lyapunov function V_P(x) = x'Px, then for the closed-loop system we have that

V_P(x((j+1)T)) - V_P(x(jT)) = x(jT)' Z_i x(jT),   (2.6.6)

where i = i_j is the index of the controller applied on [jT, (j+1)T). Hence, quadratic stabilizability implies

x(jT)' Z_i x(jT) < 0

if x(jT) ≠ 0. In other words, we have proved that for any x ∈ R^n, x ≠ 0, there exists an index i ∈ {1, 2, …, k} such that x'Z_i x < 0. This condition is the definition of strict completeness of the set of matrices {Z_1, Z_2, …, Z_k}.
(ii) ⇒ (i) Assume that the set of matrices {Z_i} is strictly complete. For any x ≠ 0, introduce the function

a(x) ≜ min_{i=1,2,…,k} x'Z_i x.

The strict completeness of {Z_i} implies that a(x) < 0 for any x ≠ 0. Furthermore, let

a_0 ≜ max_{x:‖x‖=1} a(x).

Because the function a(x) is continuous and a(x) < 0, the maximum is achieved and a_0 < 0. Finally, the inequality (2.6.3) holds with the synchronous switching controller defined in the statement of the theorem and ε = -a_0. This completes the proof of the theorem. □
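The synchronous machinery of (2.6.4), (2.6.5) and Theorem 2.6.3 can be prototyped in a few lines. All data below are illustrative assumptions (not the book's example): the matrix exponential is computed by eigendecomposition for brevity (scipy.linalg.expm would be the general-purpose choice), and P is obtained from a discrete Lyapunov equation for Phi_1, which makes Z_1 < 0 and hence {Z_1, Z_2} strictly complete.

```python
import numpy as np

def expm_diag(M):
    """Matrix exponential via eigendecomposition (M assumed diagonalizable)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
L = [np.array([[-2.0, -1.0]]),            # stabilizing gain (assumption)
     np.array([[1.0, 0.0]])]             # destabilizing gain (assumption)
T = 0.5                                   # switching period

Phi = [expm_diag((A + B @ Li) * T) for Li in L]      # (2.6.4)

# P from the discrete Lyapunov equation Phi1' P Phi1 - P = -I,
# solved in vectorized (Kronecker) form.
n = 2
Mk = np.kron(Phi[0].T, Phi[0].T) - np.eye(n * n)
P = np.linalg.solve(Mk, -np.eye(n).reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)

Z = [Phi_i.T @ P @ Phi_i - P for Phi_i in Phi]       # (2.6.5)

# Synchronous switching: at each instant jT pick argmin_i x' Z_i x,
# then propagate the state exactly over one period.
x = np.array([1.0, -1.0])
for _ in range(60):                                  # 60 switching periods
    i = int(np.argmin([x @ Zi @ x for Zi in Z]))
    x = Phi[i] @ x
print(np.linalg.norm(x))
```

Because V_P(x((j+1)T)) - V_P(x(jT)) equals the minimized quadratic form, the state decays geometrically along the switching instants.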
2.6.1
A Sufficient Condition for Quadratic Stabilizability via Synchronous Controller Switching
In this subsection, we derive some sufficient conditions for quadratic stabilizability via synchronous controller switching.
Theorem 2.6.4 Consider the linear system (2.2.7) and the basic linear controllers (2.2.8). Suppose that there exist constants τ_1 ≥ 0, τ_2 ≥ 0, …, τ_k ≥ 0 and a square matrix P = P' > 0 such that Σ_{i=1}^k τ_i = 1 and

Σ_{i=1}^k τ_i Z_i < 0,

where the matrices Z_i are defined by (2.6.4) and (2.6.5). Then the system (2.2.7) is quadratically stabilizable via synchronous controller switching with the basic controllers (2.2.8).

Furthermore, a quadratically stabilizing synchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i_x, where i_x is an index for which the minimum in

min_{i=1,2,…,k} x'Z_i x

is achieved. Then, synchronous controller switching with the basic controllers (2.2.8) defined by I(x(jT)) quadratically stabilizes the system (2.2.7).

Proof

The statement of this theorem follows immediately from Theorems 2.6.3 and 2.3.2. □
2.6.2
Stabilization via Synchronous Switching with Two Basic Controllers
In this subsection, we apply Finsler's Theorem, the S-procedure result for two quadratic forms, to derive a necessary and sufficient condition for quadratic stabilizability via synchronous controller switching in the case of two basic controllers. Suppose that k = 2, and consider the case of the two basic linear controllers (2.5.1). Now we are in a position to present the main result of this subsection.

Theorem 2.6.5 Consider the linear system (2.2.7) and the two basic linear controllers (2.5.1). Then the following two statements are equivalent:

(i) The system (2.2.7) is quadratically stabilizable via synchronous controller switching with the basic controllers (2.5.1).

(ii) There exist constants τ_1 ≥ 0 and τ_2 ≥ 0 and a matrix P = P' > 0 such that τ_1 + τ_2 = 1 and

τ_1 Z_1 + τ_2 Z_2 < 0,

where the matrices Z_1 and Z_2 are defined by (2.6.4) and (2.6.5).

Furthermore, if condition (ii) holds, then a quadratically stabilizing synchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i_x, where i_x is an index for which the minimum in

min_{i=1,2} x'Z_i x

is achieved. Synchronous controller switching with the basic controllers (2.5.1) defined by I(x(jT)) quadratically stabilizes the system (2.2.7).

Proof

(i) ⇒ (ii) This implication follows from Theorems 2.3.3 and 2.6.3.

(ii) ⇒ (i) This part of the theorem is a special case of Theorem 2.6.4. □
2.7 Illustrative Example

In this section, we present a simple example to illustrate the theoretical results of this chapter. We consider a second-order linear system of the form (2.2.7) with A =
[-1~25
i],
B = [
~].
(2.7.1)
Furthermore, we will consider the case of two linear basic state feedback controllers (2.5.1), where k = 2 and

L_1 = [ 1  -2 ],  L_2 = [ 3  -6 ].
It can be easily seen that both matrices A + BL_1 and A + BL_2 are unstable (i.e., they have eigenvalues in the right half complex plane). Therefore, this plant cannot be stabilized by either of the two linear controllers given. First, we stabilize the system via asynchronous controller switching. We apply Theorem 2.5.1. For this example, we can take τ_1 = τ_2 = 0.5 and

P = [ 2.625  2.0
      2.0    2.5 ].
Figure 2.7.1 shows the controller index i(x) that describes which of the two basic controllers we must use at each point of the state space.

FIGURE 2.7.1. Controller regions for asynchronous controller switching (the (x1, x2) plane, -2 ≤ x1, x2 ≤ 2, partitioned into regions labeled u1 and u2).
Next we consider the problem of quadratic stabilizability of the same linear system via synchronous controller switching with the same two basic controllers and the switching period T = 0.02.
FIGURE 2.7.2. Controller regions for synchronous controller switching (the (x1, x2) plane, -2 ≤ x1, x2 ≤ 2, partitioned into regions labeled u1 and u2).
We apply Theorem 2.6.5 with τ_1 = τ_2 = 0.5 and

P = [ 0.1524  0.0393
      0.0393  0.2499 ].

Figure 2.7.2 shows the controller index i(x) that describes which of the two basic controllers we must use at each point of the state space. In other words, if at switching time jT our trajectory is in the region u_i, then we use the controller i over the time interval (jT, (j+1)T].
2.8 Proof of Theorem 2.3.1

In this section, we give the proof of the result on the S-procedure for two quadratic forms. It should be pointed out that this proof was presented in the book [94] and is based on the approach of [37,156]. To prove Theorem 2.3.1, we will need the following lemma.
Notation

Let X be a real linear vector space and let g_0(x) and g_1(x) be two quadratic forms on X. Consider the mapping X → R² defined by

x ↦ (g_0(x), g_1(x)).   (2.8.1)

We also define a set M as the range of this mapping:

M ≜ {(g_0(x), g_1(x)) : x ∈ X}.   (2.8.2)

Lemma 2.8.1 The mapping (2.8.1) maps the space X onto a convex cone M.
Proof

It is obvious that the set M is a cone. It remains to show that the set M is a convex set. Consider two points in M:

y_1 = (g_0(x_1), g_1(x_1)),  y_2 = (g_0(x_2), g_1(x_2)).   (2.8.3)

We wish to prove that for every point y_0 ∈ {λy_1 + (1 - λ)y_2 : λ ∈ [0, 1]} there exists a vector x_0 ∈ X such that y_0 = (g_0(x_0), g_1(x_0)). Let 𝓛 be the line passing through the points y_1 and y_2. Also, suppose that this line is represented by the equation

a η_1 + b η_2 = c;

see Figure 2.8.1(b).

FIGURE 2.8.1.
Now consider the function

φ(λ_1, λ_2) ≜ a g_0(λ_1 x_1 + λ_2 x_2) + b g_1(λ_1 x_1 + λ_2 x_2)   (2.8.4)

and the mapping

(λ_1, λ_2) ↦ (g_0(λ_1 x_1 + λ_2 x_2), g_1(λ_1 x_1 + λ_2 x_2)).   (2.8.5)

The mapping (2.8.5) maps the points (1, 0), (0, 1) into the points y_1, y_2, respectively. Also, the points (-1, 0), (0, -1) are mapped into the points y_1, y_2, respectively, because g_0(-x) = g_0(x) and g_1(-x) = g_1(x). We now consider the set

A ≜ {(λ_1, λ_2) : φ(λ_1, λ_2) = c}.

Obviously, the set A is not empty because it contains the points (±1, 0), (0, ±1). From equation (2.8.4) and the fact that g_0(x) and g_1(x) are quadratic forms, it follows that

φ(λ_1, λ_2) = αλ_1² + 2βλ_1λ_2 + γλ_2²,

where α, β, and γ are real constants. Depending on the values of these constants, one of the following two cases will occur.

Case 1. α = β = γ = 0 (and hence c = 0). In this case,

a g_0(λ_1(t)x_1 + λ_2(t)x_2) + b g_1(λ_1(t)x_1 + λ_2(t)x_2) = 0

for all t ∈ [0, 1] and any continuous curve (λ_1(t), λ_2(t)) connecting (1, 0) and (0, 1). Thus, the corresponding continuous curve, which connects the points y_1 and y_2, lies entirely in 𝓛.

Case 2. At least one of the constants α, β, γ is not equal to zero. Let δ ≜ αγ - β². The set A, which is defined by the equation αλ_1² + 2βλ_1λ_2 + γλ_2² = c, is then a second-order curve whose form depends on the value of δ. If δ > 0, A will be an ellipse. If δ < 0, A will be a hyperbola, and if δ = 0, A will be a pair of parallel lines. Also, because g_0(-x) = g_0(x) and g_1(-x) = g_1(x), the set A is symmetric with respect to the origin. Thus, the set A is either a simply connected set or else consists of two connected sets A_1 and A_2, as illustrated in Figure 2.8.1(a). There are two possible alternatives:
(a) The points (1, 0) and (0, 1) belong to the same branch of the set A, say A_1. In this case, there exists a continuous curve (λ_1(t), λ_2(t)), 0 ≤ t ≤ 1, that lies in A_1 and connects the points (1, 0) and (0, 1). The corresponding continuous curve

y(t) ≜ (g_0(λ_1(t)x_1 + λ_2(t)x_2), g_1(λ_1(t)x_1 + λ_2(t)x_2))

is contained in the line 𝓛 and connects the points y_1 and y_2.

(b) The points (1, 0) and (0, 1) belong to different branches of the set A, say (1, 0) ∈ A_1 and (0, 1) ∈ A_2. Then, it follows from the symmetry of the set A that (0, -1) ∈ A_1 and (-1, 0) ∈ A_2; see Figure 2.8.1(a). In this case, we can connect the points (1, 0) and (0, -1) by a continuous curve that entirely lies in A_1 (recall that (0, -1) is mapped into y_2). As in the previous case, the corresponding continuous curve is contained in the line 𝓛 and connects the points y_1 and y_2.

Thus, in both of the preceding cases, one can find a continuous curve (λ_1(t), λ_2(t)) such that the continuous curve y(t) is contained in the line 𝓛 and connects the points y_1 and y_2. Hence, given any point y_0 on the segment between y_1 and y_2, there exists a t_0 ∈ [0, 1] such that y_0 = y(t_0). The corresponding vector x_0 = λ_1(t_0)x_1 + λ_2(t_0)x_2 then satisfies

(g_0(x_0), g_1(x_0)) = (g_0(λ_1(t_0)x_1 + λ_2(t_0)x_2), g_1(λ_1(t_0)x_1 + λ_2(t_0)x_2)) = y(t_0) = y_0.

Thus, we have established the required convexity of the cone M. This completes the proof of the lemma. □

Proof of Theorem 2.3.1
In Lemma 2.8.1, we have shown that the set M, which is the image of X under the mapping (2.8.1), is a convex cone. Let

Q ≜ {(η_1, η_2) ∈ R² : η_1 < 0, η_2 ≥ 0}.
From the assumption that g_0(x) ≥ 0 for all x such that g_1(x) ≥ 0, it follows that M ∩ Q = ∅. This fact can be established by contradiction. Indeed, suppose that there exists a point

y⁰ = (η_1⁰, η_2⁰) ∈ M ∩ Q.

Then, there exists a vector x⁰ ∈ X such that

(g_0(x⁰), g_1(x⁰)) = y⁰.

Furthermore, because y⁰ = (η_1⁰, η_2⁰) ∈ Q, then η_2⁰ = g_1(x⁰) ≥ 0. Hence, the preceding assumption implies η_1⁰ = g_0(x⁰) ≥ 0. However, this fact contradicts the condition y⁰ = (η_1⁰, η_2⁰) ∈ Q, which requires that η_1⁰ < 0. Thus, we must have M ∩ Q = ∅.

Because M and Q are convex cones and M ∩ Q = ∅, there exists a line that separates these convex sets. This conclusion follows from the Separating Hyperplane Theorem (e.g., see Theorem 11.3 of [98]); that is, there exist constants γ_0, γ_1 with |γ_0| + |γ_1| > 0 such that

γ_0 η_1 - γ_1 η_2 ≤ 0 for all (η_1, η_2) ∈ Q,   (2.8.6)
γ_0 η_1 - γ_1 η_2 ≥ 0 for all (η_1, η_2) ∈ M.   (2.8.7)

Note that because (-1, 0) ∈ Q, condition (2.8.6) implies that γ_0 ≥ 0. Furthermore, because (-c, 1) ∈ Q for all c > 0, it follows from (2.8.6) that γ_1 ≥ -γ_0 c for all c > 0. Hence, γ_1 ≥ 0. Using the definition of the set M, the theorem now follows from (2.8.7) with τ_0 = γ_0 and τ_1 = γ_1. This completes the proof of the theorem. □
3 Robust State Feedback Stabilizability with a Quadratic Storage Function and Controller Switching
3.1
Introduction
In this chapter, we consider the problem of robust state feedback stabilizability with controller switching for a class of linear uncertain systems. The class of uncertain systems under investigation consists of linear time-varying systems with norm-bounded uncertainty. As in Chapter 2, we consider both asynchronous and synchronous controller switching. We introduce the concept of robust stabilizability via state feedback controller switching with a quadratic storage function. This notion is stronger than that of quadratic stabilizability from Chapter 2 and is closely related to the concept of dissipativity as in [47-49, 150]. We present a necessary and sufficient condition for robust stabilizability via asynchronous (synchronous) state feedback controller switching with a quadratic storage function. Furthermore, we derive a number of constructive sufficient conditions for robust stabilizability via controller switching that are based on the S-procedure approach of Chapter 2. Some of the main results of this chapter were originally published in [132]. The body of the chapter is organized as follows. Section 3.2 describes the class of linear uncertain systems with time-varying norm-bounded uncertainty. Section 3.3 is devoted to robust stabilizability via asynchronous controller switching. Section 3.4 addresses the more practically important problem of robust stabilizability via synchronous controller switching. Section 3.5 presents two computational examples to illustrate the theoretical results of the chapter.
3.2 Uncertain Systems with Norm-Bounded Uncertainty
In this section, we define a class of uncertain linear systems with so-called norm-bounded time-varying uncertainty. In the continuous-time case, this class of uncertain systems is described by the following state equations (e.g., see [60,88,89,92,152,154]):

    ẋ(t) = [A + B₁Δ(t)K]x(t) + [B₂ + B₁Δ(t)G]u(t),    (3.2.1)

where x(t) ∈ R^n is the state, u(t) ∈ R^h is the control input, and Δ(t) ∈ R^{p×q} is a time-varying matrix of uncertain parameters satisfying the bound

    Δ(t)′Δ(t) ≤ I.    (3.2.2)

Also in equation (3.2.1), A, B₁, B₂, K, and G are matrices of suitable dimensions. In the following, it will be convenient to use another form of equation (3.2.1). We introduce the vector variable

    z(t) ≜ Kx(t) + Gu(t),

which is the uncertainty output of the system. Furthermore, let

    w(t) ≜ Δ(t)z(t)    (3.2.3)

be the uncertainty input. Thus, for the uncertainty block in Figure 3.2.1, z(·) is the input and w(·) is the output. We can now rewrite the system (3.2.1) as follows:

    ẋ(t) = Ax(t) + B₁w(t) + B₂u(t),
    z(t) = Kx(t) + Gu(t).    (3.2.4)

The bound (3.2.2) on the uncertainty in this system then becomes

    ‖w(t)‖ ≤ ‖z(t)‖.    (3.2.5)

Here x(t) ∈ R^n is the state, u(t) ∈ R^h is the control input, z(t) ∈ R^q is the uncertainty output, and w(t) ∈ R^p is the uncertainty input.
Remark From the preceding definitions, it follows that although this class of uncertain systems allows for time-varying uncertain parameters and also for cone-bounded memoryless nonlinearities, it does not allow for dynamic uncertainties such as might arise from unmodeled dynamics. This fact, along with other deficiencies of this class of uncertain systems that were mentioned in Chapter 1, motivates the integral quadratic uncertainty description that will be presented in Chapter 5.
FIGURE 3.2.1. Uncertainty block.
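The equivalence of the two descriptions (3.2.1) and (3.2.4), together with the resulting bound (3.2.5), is easy to check numerically. The sketch below does so on a sample point; all matrices are randomly generated and purely illustrative, with Δ scaled so that its largest singular value is at most one, as required by (3.2.2).

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, p, q = 4, 2, 3, 3

# Hypothetical data matrices for the uncertain system (3.2.1)/(3.2.4).
A  = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, p))
B2 = rng.standard_normal((n, h))
K  = rng.standard_normal((q, n))
G  = rng.standard_normal((q, h))

# An admissible uncertainty: scale so that Delta' Delta <= I,
# i.e. the largest singular value of Delta is at most 1.
Delta = rng.standard_normal((p, q))
Delta /= max(1.0, np.linalg.norm(Delta, 2))

x = rng.standard_normal(n)
u = rng.standard_normal(h)

# Uncertainty output z = Kx + Gu and uncertainty input w = Delta z.
z = K @ x + G @ u
w = Delta @ z

# The two forms give the same state derivative ...
xdot_1 = (A + B1 @ Delta @ K) @ x + (B2 + B1 @ Delta @ G) @ u   # form (3.2.1)
xdot_2 = A @ x + B1 @ w + B2 @ u                                 # form (3.2.4)

# ... and the norm bound (3.2.5) holds: ||w|| <= ||z||.
print(np.allclose(xdot_1, xdot_2), np.linalg.norm(w) <= np.linalg.norm(z) + 1e-12)
```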
3.2.1 Special Case: Sector-Bounded Nonlinearities
This class of uncertainties arose from the celebrated theory of absolute stability (e.g., see [83]). Consider the time-invariant uncertain system (3.2.1) with scalar uncertainty input w and uncertainty output z, in which G = 0 and the uncertainty is described by the equation

    w(t) = φ(z(t)),    (3.2.6)

where φ(·) : R → R is an uncertain nonlinear mapping. This system is represented in the block diagram shown in Figure 3.2.2. We will suppose that the uncertain nonlinearity φ(·) satisfies the sector bound (e.g., see [83])

    0 ≤ φ(z)z ≤ kz²  for all z ∈ R,    (3.2.7)

where k > 0 is a given constant associated with the system.

FIGURE 3.2.2. Uncertain system with a single nonlinear uncertainty.

Using the change of variables z̃ = (k/2)z, w̃ = φ((2/k)z̃) − z̃, this system is transformed to the system

    ẋ(t) = (A + (k/2)B₁K)x(t) + B₁w̃(t) + B₂u(t),
    z̃(t) = (k/2)Kx(t).

The bound (3.2.7) on the uncertainty in this system then becomes a standard bound on the norm of the uncertainty input:

    |w̃(t)| ≤ |z̃(t)|.    (3.2.8)

This observation motivates us to think of sector-bounded uncertainty as a special case of norm-bounded uncertainty.
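This loop transformation can be checked numerically. The sketch below uses one hypothetical nonlinearity in the sector [0, k], namely φ(z) = kz/(1 + z²), and verifies both the sector bound (3.2.7) and the transformed norm bound (3.2.8) on a grid of points:

```python
import numpy as np

k = 2.0                        # hypothetical sector constant

def phi(z):
    # A nonlinearity in the sector [0, k]: 0 <= phi(z) z <= k z^2.
    return k * z / (1.0 + z * z)

zs = np.linspace(-5.0, 5.0, 1001)

# Sector bound (3.2.7).
sector_ok = np.all((phi(zs) * zs >= -1e-12) & (phi(zs) * zs <= k * zs**2 + 1e-12))

# Loop transformation: z~ = (k/2) z, w~ = phi((2/k) z~) - z~.
z_t = (k / 2.0) * zs
w_t = phi((2.0 / k) * z_t) - z_t

# Transformed bound (3.2.8): |w~| <= |z~|.
norm_ok = np.all(np.abs(w_t) <= np.abs(z_t) + 1e-12)
print(sector_ok, norm_ok)
```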
3.3 Robust Stabilizability via Asynchronous Controller Switching

In this chapter, we will consider the linear uncertain system described by the equations (3.2.4). For this system, we consider the problem of robust stabilizability via asynchronous controller switching with the following linear basic state feedback controllers:

    u₁(t) = L₁x(t),
    u₂(t) = L₂x(t),
        ⋮
    u_k(t) = L_k x(t).    (3.3.1)

Introduce the corresponding definition of robust stabilizability.

Definition 3.3.1 Suppose that there exists a state feedback controller of the form (3.3.1), (2.2.3) and a constant c > 0 such that for the closed-loop uncertain system (3.2.1), (2.2.3), the following two conditions hold:

(i) Any solution [x(·), u(·)] of the closed-loop system is defined on [0, +∞) and belongs to L₂[0, +∞).

(ii) The following inequality is satisfied for all solutions of the closed-loop system:

    ∫₀^∞ (‖x(t)‖² + ‖u(t)‖²) dt ≤ c‖x(0)‖².

Then the system (3.2.4) is said to be robustly stabilizable via asynchronous controller switching with the basic controllers (3.3.1).
Remark
It follows from the preceding definition that any solution of the closed-loop system (3.2.1), (2.2.3) with a robustly stabilizing controller (2.2.3) has the property that x(t) → 0 as t → +∞. Indeed, because [x(·), u(·)] ∈ L₂[0, +∞), we can conclude from (3.2.1) that ẋ(·) ∈ L₂[0, +∞). However, using the fact that x(·) ∈ L₂[0, +∞) and ẋ(·) ∈ L₂[0, +∞), it now follows that x(t) → 0 as t → +∞.

Furthermore, we can introduce a stronger definition of robust stabilizability via asynchronous controller switching. Let H = H′ > 0 be a square matrix of order n and introduce a quadratic function of the form

    V_H(x) ≜ x′Hx.    (3.3.2)

Definition 3.3.2 Suppose that there exists a state feedback controller of the form (3.3.1), (2.2.3), a square matrix H = H′ > 0, and a constant ε > 0 such that for the derivative V̇_H(x(t)) of the quadratic function (3.3.2) along trajectories of the closed-loop system (3.2.4), (2.2.3), the condition

    V̇_H(x(t)) + ‖z(t)‖² − ‖w(t)‖² ≤ −ε‖x(t)‖²    (3.3.3)

holds for all x ∈ R^n and w ∈ R^p. Then the system (3.2.4) is said to be robustly stabilizable with a quadratic storage function via asynchronous controller switching with the basic controllers (3.3.1). The function V_H(·) is called a storage function (e.g., see [47, 150]).
The following theorem shows a connection between the concepts of robust stabilizability and robust stabilizability with a quadratic storage function.

Theorem 3.3.3 Suppose that the linear system (3.2.4) is robustly stabilizable with a quadratic storage function via asynchronous controller switching with the basic controllers (3.3.1). Then the linear uncertain system (3.2.1) is robustly stabilizable via asynchronous controller switching with the basic controllers (3.3.1).

Proof

Indeed, if the system (3.2.4) is robustly stabilizable with a quadratic storage function via asynchronous controller switching with the basic controllers (3.3.1), it follows from (3.3.3) that

    x(T)′Hx(T) − x(0)′Hx(0) + ∫₀^T (‖z(t)‖² − ‖w(t)‖²) dt ≤ −ε ∫₀^T ‖x(t)‖² dt.
It immediately follows from this and (3.2.5) that

    x(T)′Hx(T) − x(0)′Hx(0) ≤ −ε ∫₀^T ‖x(t)‖² dt

for any solution of the closed-loop system (3.2.1), (2.2.3). Because H > 0, we obtain

    ∫₀^T ‖x(t)‖² dt ≤ ε₁‖x(0)‖²    (3.3.4)

for all solutions of the closed-loop system (3.2.1), (2.2.3), where ε₁ ≜ ε⁻¹‖H‖. Now the relationships (3.3.1) and (3.3.4) imply both conditions of Definition 3.3.1. This completes the proof of the theorem. □

We now investigate the relationship between the concept of robust stabilizability with a quadratic storage function and the concept of quadratic stabilizability (e.g., see [99]). Indeed, consider the uncertain system (3.2.1) with uncertainty satisfying the norm-bounded condition (3.2.5). Now suppose that the uncertain system (3.2.4) is robustly stabilizable with a quadratic storage function via a controller (2.2.3). Then, it is obvious from the definitions that the closed-loop uncertain system (3.2.1), (2.2.3) is quadratically stable. Thus, we have established that if the system (3.2.4) is robustly stabilizable with a quadratic storage function via the controller (2.2.3), then the corresponding uncertain system (3.2.1) is quadratically stabilizable using the same controller. Note that the converse of the preceding result is not necessarily true in general. However, if one restricts attention to linear controllers, then the notion of robust stabilizability with a quadratic storage function is equivalent to the notion of quadratic stabilizability. This fact can be easily established using Finsler's Theorem; see Theorem 2.3.1.

We can now present a necessary and sufficient condition for robust stabilizability with a quadratic storage function via asynchronous controller switching.

Theorem 3.3.4 Consider the linear system (3.2.4) and the basic linear controllers (3.3.1). Then the following statements are equivalent:
(i) The system (3.2.4) is robustly stabilizable with a quadratic storage function via asynchronous controller switching with the basic controllers (3.3.1).
(ii) There exists a square matrix H = H′ > 0 such that the set of matrices

    Z₁ ≜ A′H + HA + L₁′B₂′H + HB₂L₁ + K′K + L₁′G′GL₁ + HB₁B₁′H,
    Z₂ ≜ A′H + HA + L₂′B₂′H + HB₂L₂ + K′K + L₂′G′GL₂ + HB₁B₁′H,
        ⋮
    Z_k ≜ A′H + HA + L_k′B₂′H + HB₂L_k + K′K + L_k′G′GL_k + HB₁B₁′H    (3.3.5)

is strictly complete.

Furthermore, suppose that condition (ii) holds, and introduce a symbolic function I(x) as I(x) ≜ i(x), where i(x) is an index for which the minimum in

    min_{i=1,2,…,k} x′Z_i x

is achieved. Then, asynchronous controller switching with the basic controllers (3.3.1) defined by I(x) robustly stabilizes the system (3.2.4) with a quadratic storage function.

Proof

(i) ⇒ (ii) For any index i = 1, 2, …, k, consider the square matrix Z_i defined by (3.3.5). If the system (3.2.4) is robustly stabilizable via asynchronous controller switching with the quadratic storage function V_H(x) = x′Hx, then we have for the closed-loop system (3.2.4), (2.2.3) that

    V̇_H(x) + ‖z‖² − ‖w‖² = x′(A_i′H + HA_i + K′K + L_i′G′GL_i)x + x′HB₁w + w′B₁′Hx − w′w ≤ −εx′x    (3.3.6)

for all x and w, where i = I(x(t)) and A_i ≜ A + B₂L_i. We fix the vector x. Then, taking the supremum over w in (3.3.6), we obtain

    sup_{w∈R^p} [V̇_H(x) + ‖z‖² − ‖w‖²] = x′Z_i x.

This and (3.3.6) imply that x′Z_i x < 0 if x ≠ 0. In other words, we have proved that for any x ∈ R^n, x ≠ 0, there exists an index i ∈ {1, 2, …, k} such that x′Z_i x < 0. This condition is the definition of strict completeness of the set of matrices {Z₁, Z₂, …, Z_k}.
(ii) ⇒ (i) Assume that the set of matrices {Z_i}, i ∈ {1, …, k}, is strictly complete. For any x ≠ 0, introduce the function

    α(x) ≜ min_{i=1,2,…,k} x′Z_i x.

The strict completeness of {Z_i} implies that α(x) < 0 for any x ≠ 0. Furthermore, let

    α₀ ≜ max_{x:‖x‖=1} α(x).

Because the function α(x) is continuous and α(x) < 0, the maximum is achieved and α₀ < 0. Finally, the inequality (3.3.3) holds with the asynchronously switching controller defined in the statement of the theorem and ε = −α₀. This completes the proof of the theorem. □
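Strict completeness of a finite set of symmetric matrices can be tested approximately by sampling the unit sphere; by homogeneity of the quadratic forms, this suffices. A minimal sketch with a hypothetical pair {Z₁, Z₂}, neither of which is negative definite on its own, together with the switching rule I(x) from the theorem:

```python
import numpy as np

# Hypothetical pair of symmetric matrices: neither is negative definite on
# its own, but for every x != 0 at least one form x' Z_i x is negative.
Z = [np.diag([-1.0, 0.5]), np.diag([0.5, -1.0])]

def switch_index(x):
    # The symbolic function I(x): an index achieving min_i x' Z_i x.
    return int(np.argmin([x @ Zi @ x for Zi in Z]))

# Sample the unit sphere; homogeneity of x -> x' Z_i x makes this enough.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)

vals = np.array([min(x @ Zi @ x for Zi in Z) for x in X])
print(vals.max() < 0.0)          # strictly complete on the sample
```

For this pair, min(x′Z₁x, x′Z₂x) attains its worst (largest) value −0.25 on the unit sphere, so the sampled maximum stays strictly negative.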
3.3.1 Sufficient Conditions for Robust Stabilizability via Asynchronous Controller Switching

In this subsection, we derive some sufficient conditions for robust stabilizability with a quadratic storage function via asynchronous controller switching.

Theorem 3.3.5 Consider the linear system (3.2.4) and the basic linear controllers (3.3.1). Suppose that there exist constants τ₁ ≥ 0, τ₂ ≥ 0, …, τ_k ≥ 0 and a square matrix H = H′ > 0 such that Σ_{i=1}^k τ_i > 0 and

    Σ_{i=1}^k τ_i Z_i < 0,

where the matrices Z₁, Z₂, …, Z_k are defined by (3.3.5). Then, the system (3.2.4) is robustly stabilizable with a quadratic storage function via asynchronous controller switching with the basic controllers (3.3.1). Furthermore, a robustly stabilizing asynchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i(x), where i(x) is an index for which the minimum in

    min_{i=1,2,…,k} x′Z_i x

is achieved. Asynchronous controller switching with the basic controllers (3.3.1) defined by I(x) robustly stabilizes the system (3.2.4) with the quadratic storage function x′Hx.

Proof

The statement of this theorem immediately follows from Theorems 3.3.4 and 2.3.2. □
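Once candidate matrices and weights are available (in practice H and the τ_i would be sought with an LMI solver, which is not shown here), the test of Theorem 3.3.5 is a single eigenvalue computation. A sketch with hypothetical symmetric Z₁, Z₂ and τ₁ = τ₂ = 0.5:

```python
import numpy as np

# Hypothetical data: symmetric matrices Z1, Z2 (in practice assembled
# from (3.3.5)) and S-procedure weights tau_1 = tau_2 = 0.5.
Z1 = np.diag([-1.0, 0.5])
Z2 = np.diag([0.5, -1.0])
tau = (0.5, 0.5)

S = tau[0] * Z1 + tau[1] * Z2
lam_max = float(np.linalg.eigvalsh(S).max())
print(lam_max)   # a negative value certifies strict completeness of {Z1, Z2}
```

Here S = diag(−0.25, −0.25), so the largest eigenvalue is −0.25 and the sufficient condition holds.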
In this subsection, we also apply Finsler's theorem, or the S-procedure for two quadratic forms, to derive a necessary and sufficient condition for robust stabilizability with a quadratic storage function via asynchronous controller switching in the case of two basic linear controllers. Suppose that k = 2, and consider the case of the following two basic controllers:

    u₁(t) = L₁x(t),  u₂(t) = L₂x(t).    (3.3.7)

Theorem 3.3.6 Consider the linear system (3.2.4) and the basic linear controllers (3.3.7). Then the following two statements are equivalent.

(i) The system (3.2.4) is robustly stabilizable with a quadratic storage function via asynchronous controller switching with the basic controllers (3.3.7).

(ii) There exist constants τ₁ ≥ 0 and τ₂ ≥ 0 and a square matrix H = H′ > 0 such that τ₁ + τ₂ > 0 and

    τ₁Z₁ + τ₂Z₂ < 0,

where the matrices Z₁ and Z₂ are defined by (3.3.5).

Furthermore, if condition (ii) holds, then a robustly stabilizing asynchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i(x), where i(x) is an index for which the minimum in

    min_{i=1,2} x′Z_i x

is achieved. Asynchronous controller switching with the basic controllers (3.3.7) defined by I(x) robustly stabilizes the system (3.2.4) with the quadratic storage function x′Hx.

Proof

(i) ⇒ (ii) This implication follows from Theorems 2.3.3 and 3.3.4.

(ii) ⇒ (i) This part of the theorem is a special case of Theorem 3.3.5. □
3.4 Robust Stabilizability via Synchronous Switching

In this section, we consider the problem of robust stabilizability with a quadratic storage function via synchronous controller switching. More precisely, we consider the class of state feedback synchronously switching controllers (2.6.1) with a given switching time T > 0 and given linear basic controllers (3.3.1).

Introduce the corresponding definition of robust stabilizability via synchronous controller switching.
Definition 3.4.1 Suppose that there exists a state feedback controller of the form (3.3.1), (2.6.1) and a constant c > 0 such that for the closed-loop uncertain system (3.2.1), (2.6.1), the following two conditions hold:

(i) Any solution [x(·), u(·)] of the closed-loop system is defined on [0, +∞) and belongs to L₂[0, +∞).

(ii) The following inequality is satisfied for all solutions of the closed-loop system:

    ∫₀^∞ (‖x(t)‖² + ‖u(t)‖²) dt ≤ c‖x(0)‖².

Then the system (3.2.4) is said to be robustly stabilizable via synchronous controller switching with the basic controllers (3.3.1).

Remark

As in the case of asynchronous controller switching, it follows from the preceding definition that any solution of the closed-loop system (3.2.1), (2.6.1) with a robustly stabilizing controller (2.6.1) has the property that x(t) → 0 as t → +∞. See the remark following Definition 3.3.1.

Let H = H′ > 0 be a square matrix of order n, and introduce a quadratic function V_H(·) of the form (3.3.2).
Definition 3.4.2 Suppose that there exists a state feedback controller of the form (3.3.1), (2.6.1), a square matrix H = H′ > 0, and a constant ε > 0 such that for the quadratic function (3.3.2) along trajectories of the closed-loop system (3.2.4), (2.6.1), the condition

    V_H(x((j+1)T)) − V_H(x(jT)) + ∫_{jT}^{(j+1)T} (‖z(t)‖² − ‖w(t)‖²) dt ≤ −ε ∫_{jT}^{(j+1)T} ‖x(t)‖² dt    (3.4.1)

holds for all solutions of the closed-loop system (3.2.4), (2.6.1) with any locally square-integrable input w(·). Then the system (3.2.4) is said to be robustly stabilizable with a quadratic storage function via synchronous controller switching with the basic controllers (3.3.1). The function V_H(·) is called a storage function.
The following theorem shows a connection between the concepts of robust stabilizability and robust stabilizability with a quadratic storage function in the case of synchronous controller switching. Theorem 3.4.3 Suppose that the linear system (3.2.4) is robustly stabilizable with a quadratic storage function via synchronous controller switching with the basic controllers (3.3.1). Then the linear uncertain system (3.2.1) is robustly stabilizable via synchronous controller switching with the basic controllers (3.3.1).
Proof The proof of this theorem is similar to the proof of Theorem 3.3.3. □

To establish a necessary and sufficient condition for robust stabilizability with a quadratic storage function via synchronous controller switching, we will need the following matrix Riccati differential equations:

    −Ṗ_i(t) = P_i(t)B₁B₁′P_i(t) + (A + B₂L_i)′P_i(t) + P_i(t)(A + B₂L_i) + K′K + L_i′G′GL_i,
    P_i(T) = H,    (3.4.2)

where i ∈ {1, 2, …, k}. It is well known that the solution of the preceding Riccati equation may not exist over the whole time interval [0, T]. We are now in a position to present a necessary and sufficient condition for robust stabilizability with a quadratic storage function via synchronous controller switching.

Theorem 3.4.4 Consider the linear system (3.2.4) and the basic linear controllers (3.3.1). Then the following statements are equivalent.
(i) The system (3.2.4) is robustly stabilizable with a quadratic storage function via synchronous controller switching with the basic controllers (3.3.1).
(ii) There exists a square matrix H = H′ > 0 and a nonempty index set Ω ⊂ {1, 2, …, k} such that for any i ∈ Ω, the solution P_i(·) of the Riccati equation (3.4.2) is defined on the time interval [0, T] and the set of matrices

    { Z_i ≜ P_i(0) − H }_{i∈Ω}    (3.4.3)

is strictly complete.

Furthermore, suppose that condition (ii) holds, and introduce a symbolic function I(x) as I(x) ≜ i(x), where i(x) is an index for which the minimum in

    min_{i∈Ω} x′Z_i x
is achieved. Then, synchronous controller switching with the basic controllers (3.3.1) defined by I(x) robustly stabilizes the system (3.2.4) with a quadratic storage function.

Proof
(i) ⇒ (ii) Assume that condition (i) holds with some quadratic storage function (3.3.2). Let x ∈ R^n, x ≠ 0, and i = I(x). Introduce the following functional:

    J_j[w(·)] ≜ x((j+1)T)′Hx((j+1)T) + ∫_{jT}^{(j+1)T} (ε‖x(t)‖² + ‖z(t)‖² − ‖w(t)‖²) dt,    (3.4.4)

where [x(·), z(·), w(·)] is the solution of (3.2.4) with x(jT) = x and u(t) = L_i x(t). Then condition (3.4.1) implies that

    sup_{w(·)∈L₂[jT,(j+1)T]} J_j[w(·)] ≤ x′Hx.

This implies that the solution of the Riccati equation (3.4.2) is defined on [0, T] (e.g., see [16]). Moreover, it is well known (e.g., see [16]) that

    sup_{w(·)∈L₂[jT,(j+1)T]} J_j^0[w(·)] = x′P_i(0)x,

where J_j^0 denotes the functional (3.4.4) with ε = 0. This and condition (3.4.1) imply that x′(P_i(0) − H)x < 0. In other words, we have proved that for any x ∈ R^n, x ≠ 0, there exists an index i ∈ {1, 2, …, k} such that the solution of the corresponding Riccati equation is defined on [0, T] and x′Z_i x < 0. This condition is the definition of strict completeness of the set of matrices {Z_i}.

(ii) ⇒ (i) Assume that condition (ii) holds; we prove that the synchronous switching controller defined in the statement of the theorem robustly stabilizes the system (3.2.4) with the storage function (3.3.2). Indeed, due to the continuous dependence of the solutions of the Riccati equations (3.4.2) on the parameters of the equations, there exists a constant ε > 0 such that the solution of the Riccati equation

    −Ṗ_i^ε(t) = P_i^ε(t)B₁B₁′P_i^ε(t) + (A + B₂L_i)′P_i^ε(t) + P_i^ε(t)(A + B₂L_i) + K′K + εI + L_i′G′GL_i,
    P_i^ε(T) = H    (3.4.5)

is defined on [0, T] for any i ∈ Ω and the set of matrices

    { Z_i^ε ≜ P_i^ε(0) − H }_{i∈Ω}    (3.4.6)

is strictly complete. Furthermore, equation (3.4.5) implies that

    sup_{w(·)∈L₂[jT,(j+1)T]} J_j[w(·)] = x(jT)′P_i^ε(0)x(jT)

for any solution of the closed-loop system (3.2.4), (2.6.1) and the functional (3.4.4) (e.g., see [16]). This and the strict completeness of the set of matrices (3.4.6) immediately imply condition (3.4.1). This completes the proof of the theorem. □
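Numerically, condition (ii) of Theorem 3.4.4 amounts to integrating each Riccati equation (3.4.2) backward from t = T and then testing the matrices (3.4.3). A sketch for a single hypothetical gain L with G = 0; the substitution s = T − t turns the terminal-value problem into an initial-value problem that a standard ODE solver can handle:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical problem data (G = 0, so the quadratic state term is K'K).
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0], [0.1]])
B2 = np.array([[0.0], [1.0]])
K  = np.array([[1.0, 0.0]])
L  = np.array([[-1.0, -2.0]])
H  = 0.1 * np.eye(2)
T  = 0.1

Acl = A + B2 @ L
Q   = K.T @ K

def rhs(s, p):
    # s = T - t: the backward equation -P' = ... becomes dP/ds = ...
    P = p.reshape(2, 2)
    dP = P @ B1 @ B1.T @ P + Acl.T @ P + P @ Acl + Q
    return dP.ravel()

sol = solve_ivp(rhs, (0.0, T), H.ravel(), rtol=1e-8, atol=1e-10)
P0 = sol.y[:, -1].reshape(2, 2)   # P_i(0)
Z  = P0 - H                        # candidate matrix Z_i = P_i(0) - H from (3.4.3)
print(sol.success, np.allclose(P0, P0.T, atol=1e-6))
```

If the solver fails or the solution escapes to infinity before s = T, the corresponding index i simply cannot be included in the set Ω.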
3.4.1 Sufficient Conditions for Robust Stabilizability via Synchronous Controller Switching

In this subsection, we derive some sufficient conditions for robust stabilizability with a quadratic storage function via synchronous controller switching.

Theorem 3.4.5 Consider the linear system (3.2.4) and the basic linear controllers (3.3.1). Suppose that there exist a square matrix H = H′ > 0, a nonempty index set Ω ⊂ {1, 2, …, k}, and constants {τ_i ≥ 0}_{i∈Ω} such that Σ_{i∈Ω} τ_i > 0, for any i ∈ Ω the solution P_i(·) of the Riccati equation (3.4.2) is defined on the time interval [0, T], and

    Σ_{i∈Ω} τ_i Z_i < 0,

where the matrices Z_i are defined by (3.4.3). Then, the system (3.2.4) is robustly stabilizable with a quadratic storage function via synchronous controller switching with the basic controllers (3.3.1). Furthermore, a robustly stabilizing synchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i(x), where i(x) is an index for which the minimum in

    min_{i∈Ω} x′Z_i x

is achieved. Then, synchronous controller switching with the basic controllers (3.3.1) defined by I(x) robustly stabilizes the system (3.2.4) with the quadratic storage function x′Hx.

Proof

The statement of this theorem immediately follows from Theorems 3.4.4 and 2.3.2. □

Theorem 3.4.6 Consider the linear system (3.2.4) and the basic linear controllers (3.3.7). Then, the following two statements are equivalent:
(i) The system (3.2.4) is robustly stabilizable with a quadratic storage function via synchronous controller switching with the basic controllers (3.3.7).
(ii) There exist a square matrix H = H′ > 0, a nonempty index set Ω ⊂ {1, 2}, and constants {τ_i ≥ 0}_{i∈Ω} such that Σ_{i∈Ω} τ_i > 0, for any i ∈ Ω the solution P_i(·) of the Riccati equation (3.4.2) is defined on the time interval [0, T], and

    Σ_{i∈Ω} τ_i Z_i < 0,

where the matrices Z_i are defined by (3.4.3).

Furthermore, if condition (ii) holds, then a robustly stabilizing synchronously switching controller can be described as follows. Introduce a symbolic function I(x) as I(x) ≜ i(x), where i(x) is an index for which the minimum in

    min_{i∈Ω} x′Z_i x

is achieved. Then, synchronous controller switching with the basic controllers (3.3.7) defined by I(x) robustly stabilizes the system (3.2.4) with the quadratic storage function x′Hx.

Proof
(i) ⇒ (ii) This implication follows from Theorems 2.3.3 and 3.4.4.

(ii) ⇒ (i) This part of the theorem is a special case of Theorem 3.4.5. □
3.5 Illustrative Examples

In this section, we consider two examples to illustrate the main results of this chapter.

3.5.1 A Second-Order System

In this subsection, we consider the second-order linear uncertain system

    ẋ(t) = [ 0  1 ; k  0 ] x(t) + [ 0 ; 1 ] u(t),    (3.5.1)

where k is a parameter from the interval −1.35 ≤ k ≤ −1.15. Note that if k = −1.25, this system coincides with the example from Section 2.7. The system (3.5.1) can be rewritten in the form (3.2.1) as

    ẋ(t) = [ 0  1 ; −1.25  0 ] x(t) + [ 0 ; 1 ] Δ(t) [ 0.1  0 ] x(t) + [ 0 ; 1 ] u(t),    (3.5.2)

where |Δ(t)| ≤ 1. Furthermore, the system (3.5.2) can be reduced to the system (3.2.4) with

    A = [ 0  1 ; −1.25  0 ],  B₁ = [ 0 ; 1 ],  B₂ = [ 0 ; 1 ],  K = [ 0.1  0 ],  G = 0.    (3.5.3)

As in Section 2.7, we will consider the case of two linear basic state feedback controllers (3.3.7), where

    L₁ = [ 1  −2 ],  L₂ = [ 3  −6 ].    (3.5.4)

It can be easily seen that both matrices A + B₂L₁ and A + B₂L₂ are unstable (i.e., they have eigenvalues in the right half of the complex plane). Therefore, this plant cannot be stabilized by either of the two given linear controllers, even in the case Δ(·) ≡ 0.
FIGURE 3.5.1. Controller-switching regions for the case of asynchronous controller switching.
First, we stabilize the system via asynchronous controller switching. According to Theorem 3.3.6, the system is robustly stabilizable with a quadratic storage function by asynchronous controller switching if and only if there exist a square matrix H = H′ > 0 and constants τ₁ ≥ 0, τ₂ ≥ 0 such that τ₁ + τ₂ > 0 and τ₁Z₁ + τ₂Z₂ < 0, where Z₁ and Z₂ are defined by (3.3.5). This condition is satisfied if we choose

    H = [ 0.1347  0.1177 ;
          0.1177  0.1285 ]    and    τ₁ = τ₂ = 0.5.

The switching regions of this asynchronously switched controller are shown in Figure 3.5.1.

We now consider the problem of robust stabilizability of the same system via synchronous controller switching with the same set of basic controllers and the switching time T = 0.02. In this case, we apply Theorem 3.4.6 with

    H = [ 0.1524  0.0393 ;
          0.0393  0.2499 ]    and    τ₁ = 0.5348,  τ₂ = 0.4652.

FIGURE 3.5.2. Controller-switching regions for the case of synchronous controller switching.
The switching regions of the corresponding synchronously switched controller are shown in Figure 3.5.2.
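For this example, the verification reduces to assembling Z₁ and Z₂ from (3.3.5) and examining the largest eigenvalue of τ₁Z₁ + τ₂Z₂. A sketch of that computation follows; the numerical entries are transcribed from a scanned source and may carry transcription errors, so the reported eigenvalue should be read as illustrating the procedure rather than as a definitive certificate.

```python
import numpy as np

# Data transcribed from (3.5.3), (3.5.4), and the chosen H.
A  = np.array([[0.0, 1.0], [-1.25, 0.0]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [1.0]])
K  = np.array([[0.1, 0.0]])                    # G = 0
L1 = np.array([[1.0, -2.0]])
L2 = np.array([[3.0, -6.0]])
H  = np.array([[0.1347, 0.1177], [0.1177, 0.1285]])

def Z_of(L):
    # Z_i from (3.3.5), specialized to G = 0.
    Acl = A + B2 @ L
    return Acl.T @ H + H @ Acl + K.T @ K + H @ B1 @ B1.T @ H

Z1, Z2 = Z_of(L1), Z_of(L2)
lam = float(np.linalg.eigvalsh(0.5 * Z1 + 0.5 * Z2).max())
print("lambda_max(0.5*Z1 + 0.5*Z2) =", round(lam, 4))
```

A negative printed value would certify the condition of Theorem 3.3.6 for these data.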
3.5.2 Two-Mass Spring System

To illustrate the results of this chapter, we now consider the problem of robust stabilizability with a quadratic storage function via asynchronous and synchronous controller switching applied to the benchmark system introduced in [148]. This system consists of two carts connected by a spring, as shown in Figure 3.5.3. It is assumed that the masses are m₁ = 1 and m₂ = 1, and the spring constant k is treated as an uncertain parameter subject to the bound 1.05 ≤ k ≤ 1.45. This may reflect the nonlinear nature of the true spring. It is assumed that the whole state vector x(t) is available for measurement. From this, we obtain an uncertain system of the form (3.2.1):

    ẋ(t) = [ 0  0  1  0 ; 0  0  0  1 ; −1.25  1.25  0  0 ; 1.25  −1.25  0  0 ] x(t)
           + [ 0 ; 0 ; −0.2 ; 0.2 ] Δ(t) [ 1  −1  0  0 ] x(t) + [ 0 ; 0 ; 1 ; 0 ] u(t),    (3.5.5)

where |Δ(t)| ≤ 1. This uncertain system can be reduced to the system (3.2.4) with the following coefficients:
    A = [ 0  0  1  0 ; 0  0  0  1 ; −1.25  1.25  0  0 ; 1.25  −1.25  0  0 ],
    B₁ = [ 0 ; 0 ; −1 ; 1 ],  B₂ = [ 0 ; 0 ; 1 ; 0 ],
    K = [ 0.2  −0.2  0  0 ],  G = 0.    (3.5.6)

We will consider the case of controller switching with two basic controllers of the form (3.3.7), where

    L₁ = [ 0  0.1  −6  −6 ],  L₂ = [ 0  −8  0  0.01 ].
As in the previous subsection, the nominal linear system is unstable with each of these two basic controllers.

Again, in the case of asynchronous controller switching, we apply Theorem 3.3.6 with
    H = [  0.2668  −0.1136   0.0087  −0.0313 ;
          −0.1136   0.3305   0.1965   0.0563 ;
           0.0087   0.1965   0.3082   0.0196 ;
          −0.0313   0.0563   0.0196   0.0355 ]    (3.5.7)

and

    τ₁ = τ₂ = 0.5.    (3.5.8)
For computer simulations, we take the initial condition

    x(0) = [ 1 ; −1 ; 0 ; 0 ]    (3.5.9)

and the uncertainty input

    Δ(t) = sin 10t.    (3.5.10)

In this case, the evolution of the states x(t) and the control input u(t) versus time t is shown in Figures 3.5.4 and 3.5.5.

Now consider the case of synchronous controller switching with switching time T = 0.05. Apply Theorem 3.4.6 with the square matrix H defined by (3.5.7) and constants τ₁, τ₂ defined by (3.5.8). For the corresponding synchronously switched controller, the evolution of the states x(t) and the control input u(t) versus time t with initial condition (3.5.9) and uncertainty input (3.5.10) is shown in Figures 3.5.6 and 3.5.7.
FIGURE 3.5.3. Two-mass spring system.
FIGURE 3.5.4. Evolution of the system states in the case of asynchronous controller switching.
FIGURE 3.5.5. Evolution of the control input in the case of asynchronous controller switching.
FIGURE 3.5.6. Evolution of the system states in the case of synchronous controller switching.
FIGURE 3.5.7. Evolution of the control input in the case of synchronous controller switching.
4 H∞ Control with Synchronous Controller Switching

4.1 Introduction
In this chapter, we consider H∞ control problems that employ synchronous state and output feedback controller switching. These problems are of interest in their own right but will also be used in the following chapters as a theoretical tool in robust stabilizability problems. As in the previous chapters, the controller is defined by a collection of given controllers called basic controllers. Then, our control strategy is a rule for switching from one basic controller to another. The control goal is to achieve a level of performance defined by an integral performance index similar to the requirement in standard H∞ control theory (e.g., see [16,54,59]). The switching rule is computed by solving a Riccati differential equation of the game type and a discrete-time dynamic programming equation. Riccati differential equations of the type considered in this chapter have been widely studied in the theory of H∞ control, and there exist reliable methods for obtaining solutions. The solution of discrete-time dynamic programming equations has been the subject of much research in the field of optimal control theory. Furthermore, many methods of obtaining numerical solutions have been proposed for specific optimal control problems. The main results of this chapter were originally presented in the papers [106-109,128].

The remainder of the chapter is organized as follows. Section 4.2 provides a necessary and sufficient condition for a state feedback problem on a finite time interval. This condition is given in terms of the existence of a solution to a dynamic programming equation. In Section 4.3, we obtain a solution
for a finite-time output feedback H∞ control problem with a linear plant. The main result of the section is given in terms of the existence of suitable solutions to a Riccati differential equation of the H∞ filtering type and a dynamic programming equation. If such solutions exist, then it is shown that they can be used to construct a corresponding controller. We also consider a special case where the basic controllers are linear. An illustrative example of a finite-time output feedback H∞ control problem is given in Section 4.4. Finally, Section 4.5 addresses the problem of output feedback H∞ control over infinite time.
4.2 State Feedback H∞ Control Problem
Let $N$ be a given number and $t_0 < t_1 < \cdots < t_{N-1} < t_N$ be given times. Consider the time-varying nonlinear system defined on the finite time interval $[t_0, t_N]$:

$$\dot{x}(t) = f(t, x(t), u(t), w(t)), \qquad (4.2.1)$$

where $x(t) \in \mathbf{R}^n$ is the state, $w(t) \in \mathbf{R}^p$ is the disturbance input, $u(t) \in \mathbf{R}^h$ is the control input, and $f(\cdot,\cdot,\cdot,\cdot)$ is a continuous vector function.
Synchronous Controller Switching

Suppose that we have a collection of given controllers

$$u_1(t) = U_1(t, x(t)), \quad u_2(t) = U_2(t, x(t)), \quad \ldots, \quad u_k(t) = U_k(t, x(t)), \qquad (4.2.2)$$

where $U_1(\cdot,\cdot), U_2(\cdot,\cdot), \ldots, U_k(\cdot,\cdot)$ are given continuous matrix functions. The controllers (4.2.2) are called basic controllers. We will consider the following class of state feedback controllers. Let $I_j(\cdot)$ be a function that maps from the set of the state measurements $\{x(\cdot)\,|_{t_0}^{t_j}\}$ to the set of symbols $\{1, 2, \ldots, k\}$. Then, for any sequence of such functions $\{I_j\}_{j=0}^{N-1}$, we will consider the following dynamic nonlinear state feedback controller:

$$u(t) = U_{i_j}(t, x(t)) \quad \forall j \in \{0, 1, \ldots, N-1\} \;\; \forall t \in [t_j, t_{j+1}), \quad \text{where } i_j = I_j\big(x(\cdot)\,|_{t_0}^{t_j}\big). \qquad (4.2.3)$$

Hence, our control strategy is a rule for switching from one basic controller to another. Such a rule constructs a sequence of symbols $\{i_j\}_{j=0}^{N-1}$ from the state measurement $x(\cdot)$. The sequence $\{i_j\}_{j=0}^{N-1}$ is called the switching sequence. Also, $\mathcal{L}$ denotes the class of all controllers of the form (4.2.2), (4.2.3).
Definition 4.2.1 Let $S_0(x(t_0)) \ge 0$, $S_f(x(t_N)) \ge 0$, and $W(t,x,u,w)$ be given continuous functions. If there exists a controller of the form (4.2.2), (4.2.3) such that

$$S_f(x(t_N)) + \int_{t_0}^{t_N} W(t, x(t), u(t), w(t))\,dt \le S_0(x(t_0)) \qquad (4.2.4)$$

for all solutions to the closed-loop system (4.2.1), (4.2.3) with any disturbance input $w(\cdot) \in L_2[t_0, t_N]$, then the H∞ control problem with cost functions $S_0(\cdot)$, $S_f(\cdot)$, and $W(\cdot,\cdot,\cdot,\cdot)$ is said to have a solution via synchronous controller switching with the basic controllers (4.2.2).

Remark

In the case of a cost function of the form

$$W(t, x, u, w) = \|z(t, x, u)\|^2 - \|w\|^2,$$

where $z(t,x,u)$ is a controlled output of the system (4.2.1), condition (4.2.4) is similar to the finite interval H∞ control requirement from the standard H∞ control theory (e.g., see [16,54,59]).
Notation

Let $M(\cdot)$ be a given function from $\mathbf{R}^n$ to $\mathbf{R}$, and let $x_0 \in \mathbf{R}^n$ be a given vector. Then,

$$F_j^i(x_0, M(\cdot)) \triangleq \sup_{w(\cdot) \in L_2[t_j, t_{j+1})} \left[ M(x(t_{j+1})) + \int_{t_j}^{t_{j+1}} W(t, x(t), U_i(t, x(t)), w(t))\,dt \right], \qquad (4.2.5)$$

where the supremum is taken over all solutions to the system (4.2.1) with $w(\cdot) \in L_2[t_j, t_{j+1})$, $u(t) = U_i(t, x(t))$, and initial condition $x(t_j) = x_0$.

The main result of this section now follows.

Theorem 4.2.2 Consider the system (4.2.1) and the basic controllers (4.2.2). Let $S_0(x(t_0)) \ge 0$, $S_f(x(t_N)) \ge 0$, and $W(t,x,u,w)$ be given continuous functions. Suppose that $F_j^i(\cdot,\cdot)$ is defined by (4.2.5). Then, the following statements are equivalent:
(i) The H∞ control problem with the cost functions $S_0(\cdot)$, $S_f(\cdot)$, and $W(\cdot,\cdot,\cdot,\cdot)$ has a solution via synchronous controller switching with the basic controllers (4.2.2).

(ii) The dynamic programming equation

$$V_N(x_0) = S_f(x_0), \qquad V_j(x_0) = \min_{i=1,\ldots,k} F_j^i\big(x_0, V_{j+1}(\cdot)\big) \qquad (4.2.6)$$

has a solution for $j = 0, 1, \ldots, N-1$ such that $V_0(x_0) \le S_0(x_0)$ for all $x_0 \in \mathbf{R}^n$.

Furthermore, if condition (ii) holds and $\bar{i}_j(x_0)$ is an index such that the minimum in (4.2.6) is achieved for $i = \bar{i}_j(x_0)$, then the controller (4.2.2), (4.2.3) associated with the switching sequence $\{i_j\}_{j=0}^{N-1}$, where $i_j \triangleq \bar{i}_j(x(t_j))$, solves the H∞ control problem (4.2.4).

Proof
(i) $\Rightarrow$ (ii): Introduce for any $j = 0, 1, \ldots, N$ the following function

$$V_j(x_0) \triangleq \inf_{u(\cdot) \in \mathcal{L}}\; \sup_{w(\cdot) \in L_2[t_j, t_N]} \left[ S_f(x(t_N)) + \int_{t_j}^{t_N} W(t, x(t), u(t), w(t))\,dt \right], \qquad (4.2.7)$$

where the supremum is taken over all solutions to the system (4.2.1) with initial condition $x(t_j) = x_0$. According to the theory of dynamic programming (e.g., see [16,54]), $V_j(\cdot)$ satisfies the equations (4.2.6). It also follows from (4.2.7) that if condition (i) holds, then $V_0(x_0) \le S_0(x_0)$.

(ii) $\Rightarrow$ (i): The equations (4.2.6) imply that for the controller associated with the switching sequence $\{i_j\}_{j=0}^{N-1}$ we have

$$\sup_{w(\cdot) \in L_2[t_0, t_N]} \left[ S_f(x(t_N)) + \int_{t_0}^{t_N} W(t, x(t), u(t), w(t))\,dt \right] \le V_0(x(t_0)).$$

Because $V_0(x(t_0)) \le S_0(x(t_0))$, the controller associated with the switching sequence $\{i_j\}_{j=0}^{N-1}$ solves the problem (4.2.4). □
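In practice, the recursion (4.2.6) is solved numerically. The following plain-Python sketch discretizes a scalar state space on a grid, replaces the supremum over $w(\cdot)$ by a maximum over a finite disturbance sample, and records the minimizing index $\bar{i}_j$ at each grid point. The scalar plant, the two basic controllers, the grid, and the costs are all assumptions made for this sketch, not data from the text.

```python
# Illustrative scalar plant dx/dt = x + u + w with two basic state feedback
# controllers U_1(x) = -3x and U_2(x) = -0.5x, running cost
# W(t, x, u, w) = x^2 + u^2 - w^2, and terminal cost S_f(x) = x^2.
# Every numerical choice here is an assumption made for the sketch.
CONTROLLERS = [lambda x: -3.0 * x, lambda x: -0.5 * x]
N, DT = 5, 0.2                                  # switching instants t_j = j*DT
GRID = [i * 0.25 for i in range(-20, 21)]       # state grid for V_j
W_SAMPLES = [0.5 * m for m in range(-4, 5)]     # finite disturbance sample

def step(x0, u_fn, w):
    """Euler integration over one switching interval; returns x(t_{j+1})
    and the accumulated integral of W."""
    x, cost, steps = x0, 0.0, 4
    h = DT / steps
    for _ in range(steps):
        u = u_fn(x)
        cost += (x * x + u * u - w * w) * h
        x += (x + u + w) * h
    return x, cost

def interp(V, x):
    """Piecewise-linear interpolation of V on GRID, clamped at the ends."""
    if x <= GRID[0]:
        return V[0]
    if x >= GRID[-1]:
        return V[-1]
    k = int((x - GRID[0]) / 0.25)
    t = (x - GRID[k]) / 0.25
    return (1.0 - t) * V[k] + t * V[k + 1]

# Backward recursion (4.2.6): V_N = S_f, then V_j(x) = min_i F_j^i(x, V_{j+1}),
# with the supremum over w(.) replaced by a max over W_SAMPLES.
V = [x * x for x in GRID]
policy = []                         # policy[j][m]: minimizing index at GRID[m]
for j in range(N - 1, -1, -1):
    layer_V, layer_i = [], []
    for x0 in GRID:
        vals = []
        for u_fn in CONTROLLERS:
            outcomes = [step(x0, u_fn, w) for w in W_SAMPLES]
            vals.append(max(c + interp(V, xe) for xe, c in outcomes))
        layer_V.append(min(vals))
        layer_i.append(vals.index(min(vals)))
    V = layer_V
    policy.insert(0, layer_i)
```

After the loop, `V[m]` approximates $V_0$ at `GRID[m]`, and checking $V_0(x_0) \le S_0(x_0)$ against a candidate $S_0$ tests condition (ii); `policy` stores the switching rule $\bar{i}_j$ to be applied online at the instants $t_j$.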
4.3 Output Feedback H∞ Control Problem
In this section, we consider the following time-varying linear output feedback control system defined on the time interval $[t_0, t_N]$:

$$\begin{aligned} \dot{x}(t) &= A(t)x(t) + B_1(t)w(t) + B_2(t)u(t), \\ z(t) &= K(t)x(t) + G(t)u(t), \\ y(t) &= C(t)x(t) + v(t), \end{aligned} \qquad (4.3.1)$$

where $x(t) \in \mathbf{R}^n$ is the state, $w(t) \in \mathbf{R}^p$ and $v(t) \in \mathbf{R}^l$ are the disturbance inputs, $u(t) \in \mathbf{R}^h$ is the control input, $z(t) \in \mathbf{R}^q$ is the controlled output, $y(t) \in \mathbf{R}^l$ is the measured output, and $A(\cdot)$, $B_1(\cdot)$, $B_2(\cdot)$, $K(\cdot)$, $G(\cdot)$, and $C(\cdot)$ are bounded piecewise continuous matrix functions.
Output Feedback Synchronous Controller Switching

Suppose that we have a collection of given controllers

$$u_1(t) = U_1(t, y(t)), \quad u_2(t) = U_2(t, y(t)), \quad \ldots, \quad u_k(t) = U_k(t, y(t)), \qquad (4.3.2)$$

where $U_1(\cdot,\cdot), U_2(\cdot,\cdot), \ldots, U_k(\cdot,\cdot)$ are given continuous matrix functions such that $U_1(\cdot,0) = 0$, $U_2(\cdot,0) = 0$, ..., $U_k(\cdot,0) = 0$. The controllers (4.3.2) are called output feedback basic controllers. We will consider the following class of output feedback controllers. Let $I_j(\cdot)$ be a function that maps from the set of the output measurements $\{y(\cdot)\,|_{t_0}^{t_j}\}$ to the set of symbols $\{1, 2, \ldots, k\}$. Then, for any sequence of such functions $\{I_j\}_{j=0}^{N-1}$, we will consider the following dynamic nonlinear output feedback controller:

$$u(t) = U_{i_j}(t, y(t)) \quad \forall j \in \{0, 1, \ldots, N-1\} \;\; \forall t \in [t_j, t_{j+1}), \quad \text{where } i_j = I_j\big(y(\cdot)\,|_{t_0}^{t_j}\big). \qquad (4.3.3)$$

As before, our control strategy is a rule for switching from one basic controller to another. Such a rule constructs a symbolic sequence $\{i_j\}_{j=0}^{N-1}$ from the output measurement $y(\cdot)$. The sequence $\{i_j\}_{j=0}^{N-1}$ is called the switching sequence.
Definition 4.3.1 Let $X_0 = X_0' > 0$ and $X_f = X_f' > 0$ be given matrices. If there exists a function $V(x_0) \ge 0$ such that $V(0) = 0$ and, for any vector $x_0 \in \mathbf{R}^n$, there exists a controller of the form (4.3.2), (4.3.3) such that

$$x(t_N)'X_f x(t_N) + \int_{t_0}^{t_N} \big(\|z(t)\|^2 - \|w(t)\|^2 - \|v(t)\|^2\big)\,dt \le (x(t_0) - x_0)'X_0(x(t_0) - x_0) + V(x_0) \qquad (4.3.4)$$

for all solutions to the closed-loop system (4.3.1), (4.3.3) with any disturbance inputs $[w(\cdot), v(\cdot)] \in L_2[t_0, t_N]$, then the output feedback H∞ control problem with the cost matrices $X_0$ and $X_f$ is said to have a solution via synchronous controller switching with the output feedback basic controllers (4.3.2).

Remark

The problem (4.3.4) is an H∞ control problem with transients (e.g., see [59]) with a special class of output feedback controllers.
Our solution to the preceding problem involves the following Riccati differential equation:

$$\dot{P}(t) = A(t)P(t) + P(t)A(t)' + P(t)\big[K(t)'K(t) - C(t)'C(t)\big]P(t) + B_1(t)B_1(t)'. \qquad (4.3.5)$$

Also, we consider a set of state equations of the form

$$\dot{x}(t) = \Big[A(t) + P(t)\big[K(t)'K(t) - C(t)'C(t)\big]\Big]x(t) + P(t)C(t)'y(t) + B_2(t)u(t). \qquad (4.3.6)$$
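Condition (ii) of the theorem below is checked numerically by integrating (4.3.5) forward from $P(t_0) = X_0^{-1}$ and monitoring positive definiteness along the way. A minimal fixed-step Euler sketch for constant 2x2 coefficients follows; the particular matrices (with $K = C = I$ and $X_0 = I$) are assumptions chosen only for illustration.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def scale(a, M):
    return [[a * M[i][j] for j in range(2)] for i in range(2)]

def pos_def(M):
    # 2x2 symmetric matrix: Sylvester's criterion
    return M[0][0] > 0.0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0.0

# Assumed constant coefficients (not from the text); X0 = I, so P(t0) = I.
A  = [[-1.0, 1.0], [0.0, -2.0]]
B1 = [[0.3, 0.0], [0.0, 0.3]]
K  = [[1.0, 0.0], [0.0, 1.0]]
C  = [[1.0, 0.0], [0.0, 1.0]]

def riccati_rhs(P):
    # (4.3.5): P' = AP + PA' + P[K'K - C'C]P + B1 B1'
    Q = add(mul(transpose(K), K), scale(-1.0, mul(transpose(C), C)))
    return add(mul(A, P), mul(P, transpose(A)),
               mul(mul(P, Q), P), mul(B1, transpose(B1)))

P  = [[1.0, 0.0], [0.0, 1.0]]        # P(t0) = X0^{-1}
h  = 0.01
ok = True                            # remains True iff P(t) > 0 throughout
for _ in range(1000):                # integrate over [t0, tN] = [0, 10]
    P = add(P, scale(h, riccati_rhs(P)))
    ok = ok and pos_def(P)
```

For time-varying data one would evaluate $A(t)$, $B_1(t)$, and so on inside the loop; a stiffer problem calls for a smaller step or a higher-order scheme. The final check $P(t_N)^{-1} > X_f$ reduces to comparing eigenvalues of the two symmetric matrices.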
Notation

Let $M(\cdot)$ be a given function from $\mathbf{R}^n$ to $\mathbf{R}$, and let $x_0 \in \mathbf{R}^n$ be a given vector. Introduce the following cost function:

$$W(t, x(t), u(t), y(t)) \triangleq \|K(t)x(t) + G(t)u(t)\|^2 - \|C(t)x(t) - y(t)\|^2. \qquad (4.3.7)$$

Then,

$$F_j^i(x_0, M(\cdot)) \triangleq \sup_{y(\cdot) \in L_2[t_j, t_{j+1})} \left[ M(x(t_{j+1})) + \int_{t_j}^{t_{j+1}} W(t, x(t), U_i(t, y(t)), y(t))\,dt \right], \qquad (4.3.8)$$

where the supremum is taken over all solutions to the system (4.3.6) with $y(\cdot) \in L_2[t_j, t_{j+1})$, $u(t) = U_i(t, y(t))$, and initial condition $x(t_j) = x_0$.

We now present the main result on the finite-time output feedback H∞ control problem.

Theorem 4.3.2 Consider the system (4.3.1) and the output feedback basic controllers (4.3.2). Let $X_0 = X_0' > 0$ and $X_f = X_f' > 0$ be given matrices. Suppose that $K(\cdot)'G(\cdot) = 0$ and $F_j^i(\cdot,\cdot)$ is defined by (4.3.8). Then, the following statements are equivalent:

(i) The output feedback H∞ control problem with cost matrices $X_0$ and $X_f$ has a solution via synchronous controller switching with output feedback basic controllers (4.3.2).

(ii) The solution $P(\cdot)$ to the Riccati equation (4.3.5) with initial condition $P(t_0) = X_0^{-1}$ is defined and positive-definite on the interval $[t_0, t_N]$, $P(t_N)^{-1} > X_f$, and the dynamic programming equation

$$V_N(x_0) = x_0'\big[X_f + X_f(P(t_N)^{-1} - X_f)^{-1}X_f\big]x_0, \qquad V_j(x_0) = \min_{i=1,\ldots,k} F_j^i\big(x_0, V_{j+1}(\cdot)\big) \qquad (4.3.9)$$

has a solution for $j = 0, 1, \ldots, N-1$ such that $V_0(x_0) \ge 0$ for all $x_0 \in \mathbf{R}^n$ and $V_0(0) = 0$.
Furthermore, suppose that condition (ii) holds, let $\bar{i}_j(x_0)$ be an index such that the minimum in (4.3.9) is achieved for $i = \bar{i}_j(x_0)$, and let $x(\cdot)$ be the solution to the equation (4.3.6) with initial condition $x(t_0) = x_0$. Then the controller (4.3.2), (4.3.3) associated with the switching sequence $\{i_j\}_{j=0}^{N-1}$, where $i_j \triangleq \bar{i}_j(x(t_j))$, solves the output feedback H∞ control problem (4.3.4) with $V(\cdot) = V_0(\cdot)$.

To prove this theorem, we will use the following lemma.

Lemma 4.3.3 Let $X_0 = X_0' > 0$ and $X_f = X_f' > 0$ be given matrices, $x_0 \in \mathbf{R}^n$ be a given vector, $V(x_0)$ be a given constant, and $y_0(\cdot)$ and $u_0(\cdot)$ be given vector functions. Suppose that the solution $P(\cdot)$ to the Riccati equation (4.3.5) with initial condition $P(t_0) = X_0^{-1}$ is defined and positive-definite on the interval $[t_0, t_N]$. Then, the condition
$$x(t_N)'X_f x(t_N) + \int_{t_0}^{t_N} \big(\|z(t)\|^2 - \|w(t)\|^2 - \|v(t)\|^2\big)\,dt \le (x(t_0) - x_0)'X_0(x(t_0) - x_0) + V(x_0) \qquad (4.3.10)$$

holds for all solutions to the system (4.3.1) with $y(\cdot) = y_0(\cdot)$ and $u(\cdot) = u_0(\cdot)$ if and only if

$$\int_{t_0}^{t_N} W(t, x(t), u_0(t), y_0(t))\,dt \le V(x_0) + (x_f - x(t_N))'P(t_N)^{-1}(x_f - x(t_N)) - x_f'X_f x_f \qquad (4.3.11)$$

for all $x_f \in \mathbf{R}^n$, with the cost function (4.3.7) and $x(\cdot)$ the solution to the equation (4.3.6) with $u(\cdot) = u_0(\cdot)$, $y(\cdot) = y_0(\cdot)$, and initial condition $x(t_0) = x_0$.

Proof Given an input-output pair $[u_0(\cdot), y_0(\cdot)]$, condition (4.3.10) must hold for all vector functions $x(\cdot)$, $w(\cdot)$, and $v(\cdot)$ satisfying equation (4.3.1) with $u(\cdot) = u_0(\cdot)$ and such that

$$y_0(t) = C(t)x(t) + v(t) \quad \forall t \in [t_0, t_N]. \qquad (4.3.12)$$

Substitution of (4.3.12) into (4.3.10) implies that (4.3.10) holds if and only if

$$J[x_f, w(\cdot)] \ge x_f'X_f x_f - V(x_0) \qquad (4.3.13)$$

for all $w(\cdot) \in L_2[t_0, t_N]$ and $x_f \in \mathbf{R}^n$, where $J[x_f, w(\cdot)]$ is defined by

$$J[x_f, w(\cdot)] \triangleq (x(t_0) - x_0)'X_0(x(t_0) - x_0) + \int_{t_0}^{t_N} \Big( \|w(t)\|^2 - \|K(t)x(t) + G(t)u_0(t)\|^2 + \|y_0(t) - C(t)x(t)\|^2 \Big)\,dt \qquad (4.3.14)$$
and $x(\cdot)$ is the solution to (4.3.1) with disturbance input $w(\cdot)$ and boundary condition $x(t_N) = x_f$.

Now, consider the minimization problem

$$\min_{w(\cdot) \in L_2[t_0,t_N]} J[x_f, w(\cdot)], \qquad (4.3.15)$$

where the minimum is taken over all $x(\cdot)$ and $w(\cdot)$ connected by (4.3.1) with the boundary condition $x(t_N) = x_f$. This problem is a linear quadratic optimal tracking problem in which the system operates in reverse time. We wish to convert the preceding tracking problem into a tracking problem of the form considered in [67] and [19]. To achieve this, first define $x_1(t)$ to be the solution to the state equations

$$\dot{x}_1(t) = A(t)x_1(t) + B_2(t)u_0(t), \qquad x_1(t_0) = 0. \qquad (4.3.16)$$

Now let $\tilde{x}(t) \triangleq x(t) - x_1(t)$. Then it follows from (4.3.1) and (4.3.16) that $\tilde{x}(t)$ satisfies the state equations

$$\dot{\tilde{x}}(t) = A(t)\tilde{x}(t) + B_1(t)w(t), \qquad (4.3.17)$$

where $\tilde{x}(t_0) = x(t_0)$. Furthermore, the cost function (4.3.14) can be rewritten as

$$J[x_f, w(\cdot)] = \tilde{J}[\tilde{x}_f, w(\cdot)] = (\tilde{x}(t_0) - x_0)'X_0(\tilde{x}(t_0) - x_0) + \int_{t_0}^{t_N} \Big( \|w(t)\|^2 - \big\|K(t)[\tilde{x}(t) + x_1(t)] + G(t)u_0(t)\big\|^2 + \big\|y_0(t) - C(t)[\tilde{x}(t) + x_1(t)]\big\|^2 \Big)\,dt,$$

where $\tilde{x}(t_N) = \tilde{x}_f = x_f - x_1(t_N)$. Equation (4.3.17) and the preceding cost function now define a tracking problem of the form considered in [67], where $y_0(\cdot)$, $u_0(\cdot)$, and $x_1(\cdot)$ are all treated as reference inputs. In fact, the only difference between this tracking problem and the tracking problem considered in the proof of the result of [19] is that in this chapter we have a sign-indefinite quadratic cost function. The solution to this tracking problem is well known (e.g., see Section 3.6 of [67]). Indeed, if the Riccati equation (4.3.5) has a positive-definite solution defined on $[t_0, t_N]$ with initial condition $P(t_0) = X_0^{-1}$, then the minimum in (4.3.15) will be achieved for any $x_0$, $u_0(\cdot)$, and $y_0(\cdot)$. Furthermore, as in [19], we can write

$$\min_{w(\cdot) \in L_2[t_0,t_N]} J[x_f, w(\cdot)] = (\tilde{x}_f - \bar{x}_1(t_N))'P(t_N)^{-1}(\tilde{x}_f - \bar{x}_1(t_N)) - \int_{t_0}^{t_N} W(t, \bar{x}_1(t) + x_1(t), u_0(t), y_0(t))\,dt, \qquad (4.3.18)$$
where $\bar{x}_1(\cdot)$ is the solution to the state equations

$$\dot{\bar{x}}_1(t) = \Big(A(t) + P(t)\big[K(t)'K(t) - C(t)'C(t)\big]\Big)\big[\bar{x}_1(t) + x_1(t)\big] - A(t)x_1(t) + P(t)C(t)'y_0(t)$$

with initial condition $\bar{x}_1(t_0) = x_0$. Now let $\hat{x}(\cdot) \triangleq \bar{x}_1(\cdot) + x_1(\cdot)$. Using the fact that $\tilde{x}_f = x_f - x_1(t_N)$, it follows that (4.3.18) can be rewritten as

$$\min_{w(\cdot) \in L_2[t_0,t_N]} J[x_f, w(\cdot)] = (x_f - \hat{x}(t_N))'P(t_N)^{-1}(x_f - \hat{x}(t_N)) - \int_{t_0}^{t_N} W(t, \hat{x}(t), u_0(t), y_0(t))\,dt,$$

where $\hat{x}(\cdot)$ is the solution to the state equations (4.3.6) with initial condition $\hat{x}(t_0) = x_0$. From this we can conclude that condition (4.3.10) with a given input-output pair $[u_0(\cdot), y_0(\cdot)]$ is equivalent to the inequality (4.3.11). □

Remark

Lemma 4.3.3 will be used in the following chapters of the book as an important theoretical tool.
Proof of Theorem 4.3.2 If condition (i) holds, then there exists a solution to the output feedback finite interval H∞ control problem. Hence, it follows from the standard H∞ control theory (e.g., see [16,59]) that the solution $P(\cdot)$ to the Riccati equation (4.3.5) with initial condition $P(t_0) = X_0^{-1}$ is defined and positive-definite on $[t_0, t_N]$. Now Lemma 4.3.3 implies that condition (i) is equivalent to the existence of a controller (4.3.2), (4.3.3) such that the inequality (4.3.11) holds for all solutions to the system (4.3.6) with any initial condition $x(t_0) = x_0$ and any input-output pair $[y(\cdot), u(\cdot)]$ connected by (4.3.3). The inequality (4.3.11) is equivalent to the following two conditions: $P(t_N)^{-1} > X_f$ and

$$x(t_N)'\big[X_f + X_f(P(t_N)^{-1} - X_f)^{-1}X_f\big]x(t_N) + \int_{t_0}^{t_N} W(t, x(t), u(t), y(t))\,dt \le V(x(t_0)).$$

Finally, consider the system (4.3.6) with $y(\cdot)$ treated as the disturbance input and apply Theorem 4.2.2 with the functional (4.3.7), $S_0(\cdot) = V(\cdot)$, and $S_f(x) = x'\big[X_f + X_f(P(t_N)^{-1} - X_f)^{-1}X_f\big]x$. The statement of the theorem follows immediately. □
4.3.1 The Case of Linear Static Basic Controllers
In this subsection, we consider the following collection of output feedback basic controllers:

$$u_1(t) = L_1(t)y(t), \quad u_2(t) = L_2(t)y(t), \quad \ldots, \quad u_k(t) = L_k(t)y(t). \qquad (4.3.19)$$

Suppose that the solution $P(\cdot)$ to (4.3.5) with initial condition $P(t_0) = X_0^{-1}$ is defined and positive-definite on $[t_0, t_N]$. Introduce for all $i = 1, 2, \ldots, k$ and all $j = 0, 1, \ldots, N-1$ the following Hamiltonian system:

$$\begin{bmatrix} \dot{x}(t) \\ \dot{p}(t) \end{bmatrix} = H_{ij}(t)\begin{bmatrix} x(t) \\ p(t) \end{bmatrix}, \quad H_{ij}(t) \triangleq \begin{bmatrix} A + PK'K + B_2L_iC & \tfrac{1}{2}(PC' + B_2L_i)(PC' + B_2L_i)' \\ -2K'K & -(A + PK'K + B_2L_iC)' \end{bmatrix}. \qquad (4.3.20)$$

Let $\Psi_{ij}(t_j, t)$ be the transition matrix function of this system, that is, $\Psi_{ij}(t_j, t_j) = I$ and $\frac{\partial}{\partial t}\Psi_{ij}(t_j, t) = H_{ij}(t)\Psi_{ij}(t_j, t)$ for $t \in [t_j, t_{j+1})$. Also, introduce $n \times n$ matrices $M_{ij}$, $R_{ij}$, $Z_{ij}$, and $Y_{ij}$ by partitioning $\Psi_{ij}(t_j, t_{j+1})$ as follows:

$$\Psi_{ij}(t_j, t_{j+1}) \triangleq \begin{bmatrix} M_{ij} & R_{ij} \\ Z_{ij} & Y_{ij} \end{bmatrix}. \qquad (4.3.21)$$

Now suppose that

$$\det R_{ij} \neq 0 \quad \forall i = 1, \ldots, k \text{ and } j = 0, 1, \ldots, N-1, \qquad (4.3.22)$$

and introduce the quadratic forms

$$\tilde{F}_j^i(\tilde{x}_j, \tilde{x}_{j+1}) \triangleq \tilde{x}_j'R_{ij}^{-1}\tilde{x}_{j+1} - \tilde{x}_j'R_{ij}^{-1}M_{ij}\tilde{x}_j - \tilde{x}_{j+1}'Y_{ij}R_{ij}^{-1}\tilde{x}_{j+1} + \tilde{x}_{j+1}'\big(Y_{ij}R_{ij}^{-1}M_{ij} - Z_{ij}\big)\tilde{x}_j, \qquad (4.3.23)$$

where $\tilde{x}_j, \tilde{x}_{j+1} \in \mathbf{R}^n$.

We are now in a position to present the following result.

Theorem 4.3.4 Consider the system (4.3.1) and the output feedback basic controllers (4.3.19). Let $X_0 = X_0' > 0$ and $X_f = X_f' > 0$ be given matrices. Suppose that $G(\cdot) = 0$, the solution $P(\cdot)$ to the equation (4.3.5) with initial condition $P(t_0) = X_0^{-1}$ is defined and positive-definite on $[t_0, t_N]$, assumption (4.3.22) holds, and $\tilde{F}_j^i(\cdot,\cdot)$ is defined by (4.3.23). Then, the following statements are equivalent:
(i) The output feedback H∞ control problem with the cost matrices $X_0$ and $X_f$ has a solution via synchronous controller switching with linear output feedback basic controllers (4.3.19).

(ii) The inequality $P(t_N)^{-1} > X_f$ holds and the dynamic programming equation

$$V_N(x_N) = x_N'\big[X_f + X_f(P(t_N)^{-1} - X_f)^{-1}X_f\big]x_N, \qquad V_j(x_j) = \min_{i=1,\ldots,k}\; \sup_{x_{j+1}\in\mathbf{R}^n} \big[\tilde{F}_j^i(x_j, x_{j+1}) + V_{j+1}(x_{j+1})\big] \qquad (4.3.24)$$

has a solution for $j = 0, 1, \ldots, N-1$ such that $V_0(x_0) \ge 0$ for all $x_0 \in \mathbf{R}^n$ and $V_0(0) = 0$.

Furthermore, suppose that condition (ii) holds, let $\bar{i}_j(x_j)$ be an index such that the minimum in (4.3.24) is achieved for $i = \bar{i}_j(x_j)$, and let $x(\cdot)$ be the solution to the equation (4.3.6) with initial condition $x(t_0) = x_0$. Then the controller (4.3.19), (4.3.3) associated with the switching sequence $\{i_j\}_{j=0}^{N-1}$, where $i_j \triangleq \bar{i}_j(x(t_j))$, solves the output feedback H∞ control problem (4.3.4) with $V(\cdot) = V_0(\cdot)$.
Proof

Let $M(\cdot)$ be a function, and let $W(\cdot,\cdot,\cdot,\cdot)$ be defined by (4.3.7). Consider the problem

$$\hat{F}_j^i(x_j, x_{j+1}, M(\cdot)) \triangleq \sup_{y(\cdot) \in L_2[t_j, t_{j+1})} \left[ M(x(t_{j+1})) + \int_{t_j}^{t_{j+1}} W(t, x(t), L_i(t)y(t), y(t))\,dt \right], \qquad (4.3.25)$$

where the supremum is taken over all solutions to the system (4.3.6) with $y(\cdot) \in L_2[t_j, t_{j+1})$ and boundary conditions $x(t_j) = x_j$ and $x(t_{j+1}) = x_{j+1}$. The standard optimal control theory (e.g., see [67]) implies that if condition (4.3.22) holds for the Hamiltonian system (4.3.20), then the supremum in (4.3.25) is achieved and

$$\hat{F}_j^i(x_j, x_{j+1}, M(\cdot)) = x(t_j)'p(t_j) - x(t_{j+1})'p(t_{j+1}) + M(x_{j+1}),$$

where $[x(\cdot), p(\cdot)]$ is the solution to the system (4.3.20) with boundary conditions $x(t_j) = x_j$ and $x(t_{j+1}) = x_{j+1}$. From (4.3.21) and (4.3.22), we obtain that

$$x(t_j)'p(t_j) - x(t_{j+1})'p(t_{j+1}) = \tilde{F}_j^i(x_j, x_{j+1}).$$

Hence,

$$F_j^i(x_j, M(\cdot)) = \sup_{x_{j+1}\in\mathbf{R}^n} \big[M(x_{j+1}) + \tilde{F}_j^i(x_j, x_{j+1})\big],$$

where $F_j^i$ is defined by (4.3.8). Now the statement follows from Theorem 4.3.2. □
4.4 Illustrative Example

In this section, we present an example to illustrate the main result of Section 4.3. In particular, we consider the system (4.3.1) with $n = 2$, $G(t) = 0$, and constant coefficient matrices $A(t)$, $B_1(t)$, $B_2(t)$, $K(t)$, and $C(t)$.
FIGURE 4.4.1. Evolution of the norm $\|x(t)\|$ of the plant states with time (curves: Controller 1, Controller 2, Switching Controller).
This plant will be controlled via synchronous controller switching with the following two basic output feedback controllers:

$$u_1(t) = 3y(t), \qquad u_2(t) = -y(t).$$

Moreover, we take the following parameters: $N = 100$, $t_j = 0.1j$ for $j = 0, 1, \ldots, 100$, $X_0 = I$, $X_f = 0.1I$, and $x_0 = 0$. We apply Theorem 4.3.4 and numerically solve the dynamic programming equation (4.3.24). Figure 4.4.1 presents the simulation results for the initial condition $x(0)$ and the disturbance inputs

$$w(t) = \frac{1}{10}\sin(5\pi t), \qquad v(t) = \frac{1}{20}\cos(5\pi t).$$

It can easily be seen that this system with control input $u(\cdot) = u_1(\cdot)$ or $u(\cdot) = u_2(\cdot)$ is unstable and $\|x(t)\| \to \infty$ for the corresponding solutions of the system; see Figure 4.4.1. However, the switching controller obtained from Theorem 4.3.4 makes the state of the system very small. The control input corresponding to the switching sequence is displayed in Figure 4.4.2.

FIGURE 4.4.2. Evolution of the control input u(t) with time.
4.5 Output Feedback H∞ Control over Infinite Time

In this section, we consider the following linear time-invariant output feedback control system defined on the infinite time interval $[0, \infty)$:

$$\begin{aligned} \dot{x}(t) &= Ax(t) + B_1w(t) + B_2u(t), \\ z(t) &= Kx(t) + Gu(t), \\ y(t) &= Cx(t) + v(t), \end{aligned} \qquad (4.5.1)$$

where $x(t) \in \mathbf{R}^n$ is the state, $w(t) \in \mathbf{R}^p$ and $v(t) \in \mathbf{R}^l$ are the disturbance inputs, $u(t) \in \mathbf{R}^h$ is the control input, $z(t) \in \mathbf{R}^q$ is the controlled output, $y(t) \in \mathbf{R}^l$ is the measured output, and $A$, $B_1$, $B_2$, $K$, $G$, and $C$ are given matrices.

In this section, we have a collection of given nonlinear output feedback basic controllers

$$u_1(t) = U_1(y(t)), \quad u_2(t) = U_2(y(t)), \quad \ldots, \quad u_k(t) = U_k(y(t)), \qquad (4.5.2)$$

where $U_1(\cdot), U_2(\cdot), \ldots, U_k(\cdot)$ are given continuous matrix functions such that $U_1(0) = 0$, $U_2(0) = 0$, ..., $U_k(0) = 0$.

We will consider the problem of H∞ control via synchronous controller switching. However, unlike in the previous sections of this chapter, the switching times are supposed to be uniformly spaced. More precisely, let $T > 0$ be a given time, and let $j \in \mathbf{N}$ and $I_j(\cdot)$ be a function that maps from the set of the output measurements $\{y(\cdot)\,|_0^{jT}\}$ to the set of symbols $\{1, 2, \ldots, k\}$. Then, for any sequence of such functions $\{I_j\}_{j=0}^{\infty}$, we will consider the following dynamic nonlinear output feedback controller:

$$u(t) = U_{i_j}(y(t)) \quad \forall j \in \{0, 1, 2, \ldots\} \;\; \forall t \in [jT, (j+1)T), \quad \text{where } i_j = I_j\big(y(\cdot)\,|_0^{jT}\big). \qquad (4.5.3)$$
Definition 4.5.1 Consider the system (4.5.1). Then the infinite-time output feedback H∞ control problem is said to have a solution via synchronous controller switching with the output feedback basic controllers (4.5.2) if there exist constants $c > 0$ and $\varepsilon > 0$ and a function $V(x_0) \ge 0$ such that $V(0) = 0$ and for any vector $x_0 \in \mathbf{R}^n$ there exists a controller of the form (4.5.2), (4.5.3) such that the following conditions hold:

(i) For any initial condition $x(0)$ and disturbance inputs $[w(\cdot), v(\cdot)] \in L_2[0, \infty)$, the closed-loop system has a unique solution that is defined on $[0, \infty)$.

(ii) The closed-loop system with $[w(\cdot), v(\cdot)] \equiv 0$ is globally asymptotically stable.

(iii) The inequality

$$\varepsilon\|x(NT)\|^2 + \int_0^{NT} \big(\|z(t)\|^2 - \|w(t)\|^2 - \|v(t)\|^2\big)\,dt \le c\|x(0) - x_0\|^2 + V(x_0) \qquad (4.5.4)$$

holds for all $N \in \mathbf{N}$ and for all solutions to the closed-loop system with any disturbance inputs $[w(\cdot), v(\cdot)] \in L_2[0, NT]$.
4.5.1 Construction of an Infinite-Time Output Feedback H∞ Controller

Our solution to the preceding problem involves the following Riccati algebraic equation:

$$AP + PA' + P\big[K'K - C'C\big]P + B_1B_1' = 0. \qquad (4.5.5)$$

Also, we consider a set of state equations of the form

$$\dot{x}(t) = \Big[A + P\big[K'K - C'C\big]\Big]x(t) + PC'y(t) + \big[PK'G + B_2\big]u(t). \qquad (4.5.6)$$

Notation

Let $L(\cdot)$ be a given function from $\mathbf{R}^n$ to $\mathbf{R}$, and let $x_0 \in \mathbf{R}^n$ be a given vector. Introduce the following cost function:

$$W(x(t), u(t), y(t)) \triangleq \|Kx(t) + Gu(t)\|^2 - \|Cx(t) - y(t)\|^2. \qquad (4.5.7)$$

Then,

$$F^i(x_0, L(\cdot)) \triangleq \sup_{y(\cdot) \in L_2[0,T]} \left[ L(x(T)) + \int_0^T W(x(t), U_i(y(t)), y(t))\,dt \right], \qquad (4.5.8)$$

where the supremum is taken over all solutions to the system (4.5.6) with $y(\cdot) \in L_2[0,T]$, $u(t) = U_i(y(t))$, and initial condition $x(0) = x_0$.
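The minimal nonnegative solution of the algebraic equation (4.5.5), when it exists, can be approximated by integrating the corresponding differential Riccati equation forward from a zero initial condition until it settles, mirroring the limit construction used in the proof of Theorem 4.5.2 below. A scalar sketch with assumed coefficients follows; for $n = 1$, (4.5.5) is just a quadratic in $P$, so the answer can be checked in closed form.

```python
# Scalar (n = 1) sketch with assumed data a = -1, b1 = 1, k = 1, c = 2
# (not taken from the text).  Equation (4.5.5) then reduces to
#     2*a*P + (k**2 - c**2)*P**2 + b1**2 = 0,
# whose minimal nonnegative root is P = 1/3; the code approximates it by
# integrating the differential Riccati equation forward from P(0) = 0.
a, b1, k, c = -1.0, 1.0, 1.0, 2.0

def rhs(P):
    return 2.0 * a * P + (k * k - c * c) * P * P + b1 * b1

P, h = 0.0, 0.001
for _ in range(20000):              # integrate up to t = 20
    P += h * rhs(P)

closed_loop = a + P * (k * k - c * c)   # A + P[K'K - C'C]; should be stable
```

The stability of `closed_loop` is exactly the condition on the matrix $A + P[K'K - C'C]$ appearing in condition (ii) of Theorem 4.5.2; here it evaluates to $-2 < 0$.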
Assumptions

We make the following two assumptions.

Assumption 4.1 The pair $(A, B_1)$ is controllable.

Assumption 4.2 The pair $(A, K)$ is observable.

The main result of this section follows.
Theorem 4.5.2 Consider the linear system (4.5.1) and the basic controllers (4.5.2). Suppose that Assumptions 4.1 and 4.2 hold and that $F^i(\cdot,\cdot)$ is defined by (4.5.8). Then, the following statements are equivalent:

(i) The infinite-time output feedback H∞ control problem has a solution via synchronous controller switching with output feedback basic controllers (4.5.2).

(ii) There exist a constant $\varepsilon_0 > 0$ and a solution $P > 0$ to the Riccati equation (4.5.5) such that the matrix

$$A + P\big[K'K - C'C\big]$$

is stable and the dynamic programming equation

$$V(x_0) = \min_{i=1,\ldots,k} F^i(x_0, V(\cdot)) \qquad (4.5.9)$$

has a solution $V(x_0)$ such that $V(0) = 0$ and $V(x_0) \ge \varepsilon_0\|x_0\|^2$ for all $x_0 \in \mathbf{R}^n$.

Furthermore, suppose that condition (ii) holds; let $\bar{i}(x_0)$ be an index such that the minimum in (4.5.9) is achieved for $i = \bar{i}(x_0)$, and let $x(\cdot)$ be the solution to the equation (4.5.6) with initial condition $x(0) = x_0$. Then the controller (4.5.2), (4.5.3) associated with the switching sequence $\{i_j\}_{j=0}^{\infty}$, where $i_j \triangleq \bar{i}(x(jT))$, solves the infinite-time output feedback H∞ control problem with

$$c = \|P^{-1}\|, \qquad \varepsilon = \frac{1}{2}\min\left\{\frac{1}{\|P^{-1}\|},\, \varepsilon_0\right\},$$

and with $V(\cdot)$ the solution to (4.5.9).
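Assumptions 4.1 and 4.2 are the standard controllability and observability rank conditions, which are easy to verify numerically; for $n = 2$ with a single input and a single output they reduce to two 2x2 determinants. The data below are assumed purely for illustration.

```python
# Rank tests for Assumptions 4.1 and 4.2 with assumed 2x2 data:
# (A, B1) is controllable iff [b1, A b1] has full rank, and (A, K) is
# observable iff [k; k A] has full rank (here b1 is a column, k a row).
A  = [[0.0, 1.0], [-2.0, -3.0]]
b1 = [0.0, 1.0]                       # single-input B1
k  = [1.0, 0.0]                       # single-output K

Ab1 = [sum(A[i][j] * b1[j] for j in range(2)) for i in range(2)]
kA  = [sum(k[i] * A[i][j] for i in range(2)) for j in range(2)]

det_ctrb = b1[0] * Ab1[1] - b1[1] * Ab1[0]   # det [b1, A b1]
det_obsv = k[0] * kA[1] - k[1] * kA[0]       # det [k; k A]

controllable = det_ctrb != 0.0
observable   = det_obsv != 0.0
```

For larger $n$ one would build the full controllability matrix $[B_1, AB_1, \ldots, A^{n-1}B_1]$ and the observability matrix stacked from $K, KA, \ldots, KA^{n-1}$ and compute their ranks.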
Proof (i) $\Rightarrow$ (ii): If condition (i) holds, then for any $\delta \in (0,1)$ the controller (4.5.2), (4.5.3) corresponding to $x_0 = 0$ is a solution to the following output feedback finite time interval H∞ control problem:

$$J \triangleq \sup_{[w(\cdot),v(\cdot)] \in L_2[0,T]} \frac{(1-\delta)\displaystyle\int_0^T \|z(t)\|^2\,dt}{c\|x(0)\|^2 + \displaystyle\int_0^T \big(\|w(t)\|^2 + \|v(t)\|^2\big)\,dt} < 1 \qquad (4.5.10)$$

for any $T \in (0, \infty)$, where $c > 0$ is a constant such that condition (4.5.4) holds. Now consider the disturbance input $v(\cdot) = -Cx(\cdot)$. Then we have $y(\cdot) \equiv 0$ and $u(\cdot) \equiv 0$. Therefore, from (4.5.10) we have

$$\lambda_\delta \triangleq \sup_{w(\cdot) \in L_2[0,T]} \frac{\displaystyle\int_0^T \big((1-\delta)\|Kx(t)\|^2 - \|Cx(t)\|^2\big)\,dt}{c\|x(0)\|^2 + \displaystyle\int_0^T \|w(t)\|^2\,dt} < 1$$

for any $T \in (0, \infty)$. Hence, it follows from Lemma 5 of [81] that the solution $P_\delta(\cdot)$ to the Riccati equation

$$\dot{P}_\delta(t) = AP_\delta(t) + P_\delta(t)A' + P_\delta(t)\big[(1-\delta)K'K - C'C\big]P_\delta(t) + B_1B_1'$$

with initial condition $P_\delta(0) = cI$ is defined and positive definite on $[0, \infty)$. Because condition (4.3.10) with $X_0 = cI$ and $X_f = \varepsilon I$ holds for any $N \in \mathbf{N}$ and $u(\cdot)$ and $y(\cdot)$ connected by the controller (4.5.2), (4.5.3), Lemma 4.3.3 implies that the inequality (4.3.11) holds. From the inequality (4.3.11), we obtain that
$$P_\delta(NT)^{-1} \ge \varepsilon I \quad \forall N \in \mathbf{N}. \qquad (4.5.11)$$

Furthermore, we can take $\delta \to 0$ and obtain, from continuous dependence of solutions to the Riccati equations on parameters on a finite time interval and condition (4.5.11), that there exists a constant $c_0 > 0$ such that, for any $c > c_0$, the solution $P_c(\cdot)$ to the Riccati equation (4.3.5) with initial condition $P_c(0) = cI$ is defined and positive-definite on $[0, \infty)$ and

$$P_c(t)^{-1} \ge \varepsilon I \quad \forall t \ge 0. \qquad (4.5.12)$$

Now, let $P_\infty(\cdot)$ be the solution to (4.3.5) with initial condition $P_\infty(0) = 0$. Then, well-known properties of the Riccati equation (4.3.5) (e.g., see [153]) imply that

$$P_\infty(t) \le P_c(t) \quad \forall t \ge 0. \qquad (4.5.13)$$

The relations (4.5.12) and (4.5.13) imply that the limit

$$P = \lim_{t\to\infty} P_\infty(t) \ge 0$$

exists and $P^{-1} \ge \varepsilon I$. Now, it is clear that $P$ is a solution to equation (4.5.5). Also, because $P_\infty(t) \to P$ as $t \to \infty$, we have that $P$ is the minimal constant solution. Hence, $P$ is a stabilizing solution (e.g., see [153]). Now, let $c > 0$ be a constant such that condition (4.5.4) holds and $cI > P$. We have proved earlier that there exists a solution to equation (4.3.5) with initial condition $P(0) = cI$ and

$$P(t) \to P \quad \text{as } t \to \infty. \qquad (4.5.14)$$

Let $\Omega$ be the class of all controllers of the form (4.5.2), (4.5.3), and let $N \in \mathbf{N}$ be fixed. Introduce for any $j = 0, 1, \ldots, N$ the function

$$V_j^N(x_0) \triangleq \inf_{u(\cdot)\in\Omega}\; \sup_{y(\cdot)\in L_2[jT,NT]} \left[ \int_{jT}^{NT} W(x(t), u(t), y(t))\,dt + \varepsilon\|x(NT)\|^2 \right], \qquad (4.5.15)$$
where the supremum is taken over all solutions to the system (4.5.6) with initial condition $x(jT) = x_0$. According to the theory of dynamic programming (e.g., see [14,16]), $V_j^N(\cdot)$ satisfies the equations

$$V_N^N(x_0) = \varepsilon\|x_0\|^2; \qquad V_j^N(x_0) = \min_{i=1,\ldots,k} \hat{F}_j^i\big(x_0, V_{j+1}^N(\cdot)\big),$$

where

$$\hat{F}_j^i(x_0, L(\cdot)) \triangleq \sup_{y(\cdot)\in L_2[jT,(j+1)T)} \left[ L\big(x((j+1)T)\big) + \int_{jT}^{(j+1)T} W(x(t), U_i(y(t)), y(t))\,dt \right] \qquad (4.5.16)$$

and the supremum is taken over all solutions to the system (4.5.6) with $y(\cdot) \in L_2[jT,(j+1)T)$, $u(t) = U_i(y(t))$, and initial condition $x(jT) = x_0$.

Now, we prove that there exists a function $Z(\cdot) \ge 0$ such that $Z(0) = 0$ and

$$V_j^N(x_0) \le Z(x_0) \quad \forall j, N. \qquad (4.5.17)$$

Indeed, condition (4.3.11) of Lemma 4.3.3 together with the definition (4.5.15) implies that

$$V_0^N(\cdot) \le V(\cdot) \quad \forall N. \qquad (4.5.18)$$

It follows from (4.5.15) that

$$V_j^N(x_0) \le V_0^N(x_0) + \inf_{u(\cdot)\in\Omega} \int_0^{jT} W(x(t), u(t), y_0(t))\,dt \qquad (4.5.19)$$

for all $y_0(\cdot) \in L_2[0, jT]$. Now consider the input $y_0(t) = 0$ for $t \in [0, NT)$. Then $u(t) = 0$ for $t \in [0, jT)$ and for all $u(\cdot) \in \Omega$, and the inequality (4.5.19) implies

$$V_j^N(x_0) \le V_0^N(x_0) + \int_0^{jT} \big(\|Kx(t)\|^2 - \|Cx(t)\|^2\big)\,dt \qquad (4.5.20)$$

for the solutions to (4.5.6) with $u \equiv 0$ and $y \equiv 0$. Because $P$ is a stabilizing solution, condition (4.5.14) implies that $P(\cdot)$ is a stabilizing solution to the Riccati equation (4.3.5). Hence, the system (4.5.6) with $u \equiv 0$ and $y \equiv 0$ is exponentially stable. From this and the inequalities (4.5.18) and (4.5.20), we have that there exists a constant $d > 0$ such that condition (4.5.17) holds with $Z(x_0) = V(x_0) + d\|x_0\|^2$. Also, it follows from (4.5.15) that

$$V_j^N(x_0) \le V_j^{N+1}(x_0). \qquad (4.5.21)$$
Hence, from (4.5.17) and (4.5.21), we have that the limit

$$V_j(x_0) \triangleq \lim_{N\to\infty} V_j^N(x_0) \le Z(x_0)$$

exists. Clearly, $V_j(\cdot)$ is a solution to the equation

$$V_j(x_0) = \min_{i=1,\ldots,k} \hat{F}_j^i\big(x_0, V_{j+1}(\cdot)\big).$$

Now, conditions (4.5.14) and (4.5.17) imply that the limit

$$V(x_0) \triangleq \lim_{j\to\infty} V_j(x_0)$$

exists and $V(\cdot)$ is a solution to equation (4.5.9). Also, it follows from the preceding that $V(x_0) \ge \varepsilon\|x_0\|^2$ and $V(0) = 0$. This completes the proof of this statement.

(ii) $\Rightarrow$ (i): Equation (4.5.9) implies that for the controller associated with the switching sequence $\{i_j\}_{j=0}^{\infty}$ defined by equation (4.5.9), we have

$$\int_0^{NT} W(x(t), u(t), y(t))\,dt \le V(x(0)) - V(x(NT)) \le V(x(0)) - \varepsilon_0\|x(NT)\|^2.$$

Furthermore, Lemma 4.3.3 implies that condition (4.3.10) holds with

$$X_0 = P^{-1}, \qquad X_f = \varepsilon I, \qquad \varepsilon = \frac{1}{2}\min\left\{\frac{1}{\|P^{-1}\|},\, \varepsilon_0\right\}.$$

Hence, condition (4.5.4) holds with $c = \|P^{-1}\|$. Conditions (i) and (ii) of Definition 4.5.1 follow immediately from the inequality (4.5.4) and observability of $(A, K)$. This completes the proof of the theorem. □

Remark

It will be shown in the following chapter that the disturbance rejection problem considered in this section is equivalent to a certain robust stabilization problem (e.g., see [116]). In this robust stabilization problem, the underlying linear system is described by the state equations (4.5.1). However, in this case, $w(t)$ and $v(t)$ are the uncertainty inputs and $z(t)$ is the uncertainty output. The uncertainty inputs $w(t)$ and $v(t)$ are required to satisfy a certain integral quadratic constraint. Then Theorem 4.5.2 gives a solution for the corresponding problem of robust stabilization via output feedback synchronous controller switching (4.5.2), (4.5.3).
5 Absolute Stabilizability via Synchronous Controller Switching

5.1 Introduction
This chapter considers the problems of state and output feedback robust stabilizability for a class of uncertain dynamical systems. The system under consideration consists of a linear uncertain continuous-time plant with control and uncertainty inputs and a synchronously switching controller. The plant uncertainty is required to satisfy a certain "integral quadratic constraint." This uncertainty description was developed in the work of Yakubovich and has been extensively studied (e.g., see [93,94,116,118,119,160]). It has been shown to provide a good representation of the uncertainty arising in many real control problems. Associated with this class of uncertain systems is a corresponding robust stabilizability notion often referred to as absolute stabilizability. The controller is defined by a collection of given nonlinear state or output feedback controllers, which are called basic controllers. The control goal is to absolutely stabilize the linear uncertain plant by synchronous switching from one basic controller to another. A necessary and sufficient condition for absolute output feedback stabilizability via controller switching is given for the case of uncertain systems with unstructured uncertainty. This condition is stated in terms of the existence of suitable solutions to a Riccati algebraic equation of the H∞ filtering type and a dynamic programming equation. If such solutions exist, then it is shown that they can be used to construct an absolutely stabilizing controller. The solution to discrete-time dynamic programming
equations has been the subject of much research in the field of optimal control theory. Despite the fact that several methods of obtaining numerical solutions have been proposed for specific optimal control problems, it is not easy to solve dynamic programming equations in many realistic situations. We consider a special case where the basic controllers are linear and give a sufficient condition for absolute stabilizability and a corresponding method for absolute stabilization that is implementable in real time. Furthermore, we consider the case of uncertain systems with structured uncertainty and derive a sufficient condition for absolute stabilizability via synchronous controller switching. Some of the results of this chapter were originally published in [129]. The body of this chapter is organized as follows. Section 5.2 describes the class of uncertain dynamical systems with an integral quadratic constraint. Section 5.3 considers the problem of absolute state feedback stabilizability. Section 5.4 introduces the concept of absolute output feedback stabilizability via controller switching. A necessary and sufficient condition for absolute output feedback stabilizability via controller switching is given in Section 5.5. In Section 5.6, we derive a constructive and real-time-implementable method for robust output feedback stabilizability via synchronous controller switching. Section 5.7 addresses the problem of absolute stabilizability via controller switching for uncertain systems with structured uncertainty. Finally, Section 5.8 presents an illustrative example that shows how to implement the main results of the chapter.
5.2 Uncertain Systems with Integral Quadratic Constraints
In this section, we introduce mathematically rigorous definitions of the uncertainty classes referred to in Chapter 1 as uncertainties satisfying integral quadratic constraints. To motivate this uncertainty description, first consider a transfer function uncertainty block as shown in Figure 5.2.1. Assuming that the transfer function Δ(s) is stable and using Parseval's Theorem, it follows that the H∞ norm bound ‖Δ(jω)‖ ≤ 1 for all ω ∈ R is equivalent to the time-domain bound

$$\int_0^\infty \|w(t)\|^2\,dt \le \int_0^\infty \|z(t)\|^2\,dt \qquad (5.2.1)$$

for all signals z(·) (provided these integrals exist).

FIGURE 5.2.1. Transfer function uncertainty.

The time-domain uncertainty bound (5.2.1) is an example of an integral quadratic constraint. This time-domain uncertainty bound applies equally well in the case of a time-varying real uncertainty parameter Δ(t) or a nonlinear mapping. Note that, by applying Laplace transforms, it can be seen that an integral quadratic constraint of the form (5.2.1) is equivalent to the following frequency-domain integral quadratic constraint:

$$\int_{-\infty}^{\infty} \begin{bmatrix} \hat z(j\omega) \\ \hat w(j\omega) \end{bmatrix}^* \begin{bmatrix} I & 0 \\ 0 & -I \end{bmatrix} \begin{bmatrix} \hat z(j\omega) \\ \hat w(j\omega) \end{bmatrix} d\omega \ge 0.$$

Here, ẑ(s) and ŵ(s) denote the Laplace transforms of the signals z(t) and w(t), respectively. The integral quadratic constraint uncertainty bound (5.2.1) can be easily extended to model the noise acting on the system as well as uncertainty in the system dynamics. This situation is illustrated in Figure 5.2.2. To model this situation, we modify the integral quadratic constraint (5.2.1) to
".
",:::
'j"
2
{ " IIw(tlll dt < d+
:--"; .
,."
1'' IIz(tlIl dt, 2
(5.2.2)
..
.•
~
...
;:
i-.' . .~',~' ;.. J
.~.
of the where d > 0 is a const ant that determines the boun d on the size the noise (again assuming that the integrals exist). If the signal z(t) is zero, ming uncer tainty block D. makes no contr ibutio n to the signal w(t) (assu ver, zero initial condition on the dynamics of the uncer tainty block). Howe intew(t) can still be nonzero due to the presence of the noise signal. This boun d gral quadr atic const raint modeling of noise corresponds to an energy that on the noise rathe r than a stoch astic white noise description. Also note ) can the presence of the d term in the integral quadr atic const raint (5.2.2 allow for a nonzero initial condition on the uncer tainty dynamics. m We now prese nt a formal definition of a conti nuous -time uncer tain syste uncer with an integral quadr atic const raint uncer tainty description. The tain syste m is described by the following state equations:
$$\begin{aligned} \dot x(t) &= A x(t) + B_1 w(t) + B_2 u(t), \\ z(t) &= K x(t) + G u(t), \end{aligned} \qquad (5.2.3)$$

where x(t) ∈ Rⁿ is the state, w(t) ∈ R^p is the uncertainty input, u(t) ∈ R^h is the control input, z(t) ∈ R^q is the uncertainty output, and A, B₁, B₂, K, and G are given matrices.
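The time-domain bound (5.2.1) can be illustrated numerically for a concrete stable uncertainty block. The sketch below is only an illustration, not part of the formal development; the transfer function Δ(s) = 0.8/(s + 1) (which has H∞ norm 0.8 < 1) and the test signal are arbitrary choices.

```python
import numpy as np
from scipy import signal

# Illustrative stable uncertainty block Delta(s) = 0.8/(s + 1); H-infinity norm 0.8 < 1.
delta = signal.TransferFunction([0.8], [1.0, 1.0])

t = np.linspace(0.0, 20.0, 4001)
dt = t[1] - t[0]
z = np.sin(2.0 * t)                        # an arbitrary finite-energy test signal
_, w, _ = signal.lsim(delta, U=z, T=t)     # w = Delta z, zero initial condition

E_z = np.sum(z ** 2) * dt                  # energy of the uncertainty output z
E_w = np.sum(w ** 2) * dt                  # energy of the uncertainty input w

# The integral quadratic constraint (5.2.1): the block cannot amplify energy.
print(E_w <= E_z)                          # prints True
```

Because the block is causal, stable, and starts at rest, the inequality also holds on every finite horizon, which is what the truncated constraints used later in this section exploit.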
FIGURE 5.2.2. Uncertain system with noise inputs.
The uncertainty in the preceding system can be described by equations of the form

$$w(t) = \phi(t, z(\cdot)). \qquad (5.2.4)$$
This uncer tainty description allows for nonlinear, time-varying, dynam ic uncertainties. An impo rtant feature of dynamic uncer tainty as oppos ed to an exogenous distur bance input is the functional dependence of the uncer tainty input on the control input as described by equations (5.2.4). The corresponding set of admissible uncer tainti es will therefore depen d on the control input u(·). The block diagr am of the uncer tain syste m (5.2.3), (5.2.4) is shown in Figur e 3.2.2. For the uncer tain syste m (5.2.3) and uncer tainty (5.2.4), a boun d on the uncer tainty is deter mined by the following integr al quadr atic const raint condition.
Definition 5.2.1 Let T > 0 be a given time. An uncertainty input w(·) is said to be an admissible uncertainty input for the system (5.2.3) if, for any locally square-integrable control input u(·) for which any solution to equations (5.2.3) with the inputs [u(·), w(·)] exists on [0, ∞), there exists a sequence {N_q, q = 1, 2, ...} ⊂ N ∪ {+∞} with lim_{q→∞} N_q = +∞ and a constant d > 0 such that

$$\int_0^{N_q T} \|w(t)\|^2\,dt \le d + \int_0^{N_q T} \|z(t)\|^2\,dt \quad \forall q. \qquad (5.2.5)$$

Remark

Note that {N_q} and d depend on the uncertainty input w(·). Also, in (5.2.5) the uncertainty output z(·) depends on the uncertainty input w(·).
As mentioned in Chapter 1, the preceding definition of admissible uncertainty originates in the work of Yakubovich [155-157] and has been extensively studied in the Russian literature (e.g., see [64,100,131,147,156,158-160]). In particular, references [147] and [64] give a number of examples of physical systems in which the uncertainty naturally fits into the preceding framework. The reader may notice a connection between the integral quadratic constraint uncertainty description and Popov's notion of hyperstability; see [95]. The preceding definition extends this class of uncertain systems to systems containing a control input. This enables us to use H∞ control theory in the problem of controller synthesis for such uncertain systems. Note that, in the preceding definition of the integral quadratic constraint, we do not require any assumption that the uncertain system be stable or even that its solutions exist on an infinite time interval. The integral quadratic constraint (5.2.5) involves a sequence of times {N_q}_{q=1}^∞. In the particular case where u(·) is defined by a stabilizing controller that guarantees that x(·) ∈ L₂[0, ∞) for any w(·) ∈ L₂[0, ∞), there is no need to introduce the sequence {N_q}_{q=1}^∞ to describe the uncertainty. Indeed, given a constraint in the form (5.2.5) and a stabilizing control input u(·), by passing to the limit in (5.2.5) as N_q → ∞, one can replace the integral over [0, N_qT] in (5.2.5) with an integral over the infinite interval [0, ∞). Furthermore, by making use of Parseval's Theorem, it follows that this integral quadratic constraint is equivalent to the frequency-domain integral quadratic constraint:

$$\int_{-\infty}^{\infty} \begin{bmatrix} \hat z(j\omega) \\ \hat w(j\omega) \end{bmatrix}^* \begin{bmatrix} I & 0 \\ 0 & -I \end{bmatrix} \begin{bmatrix} \hat z(j\omega) \\ \hat w(j\omega) \end{bmatrix} d\omega \ge -d.$$

Here, ẑ(s) and ŵ(s) denote the Laplace transforms of the signals z(t) and w(t), respectively. However, in this chapter, we wish to define the class of admissible uncertainties for a generic control input and avoid referring to any particular stabilizing properties of the control input when the constraints on the uncertainty are being defined. This is achieved in Definition 5.2.1 by considering control inputs, uncertainty inputs, and the corresponding solutions defined on a sequence of expanding finite intervals [0, N_qT].

Remark
It is important to note that the class of uncertainties satisfying an integral quadratic constraint of the form (5.2.5) includes norm-bounded uncertainties as a particular case. Indeed, consider the uncertain system (3.2.1) with norm-bounded uncertainties satisfying (3.2.2). Then, any norm-bounded uncertainty input w(·) of the form (3.2.3) satisfies condition (3.2.5). Therefore, it also satisfies the integral constraint

$$\int_0^{T} \|w(t)\|^2\,dt \le \int_0^{T} \|z(t)\|^2\,dt$$
for all T > 0. Thus, any norm-bounded uncertainty of the form (3.2.3) satisfies the integral quadratic constraint (5.2.5) with any d > 0 and an arbitrarily chosen sequence {N_q}_{q=1}^∞.
5.3 State Feedback Stabilizability via Synchronous Controller Switching
In this section, we introduce the concept of absolute stabilizability for the uncertain system (5.2.3), (5.2.5). For this system, we consider the problem of absolute stabilizability via synchronous controller switching with the following linear basic state feedback controllers:
$$u_1(t) = L_1 x(t), \quad u_2(t) = L_2 x(t), \quad \ldots, \quad u_k(t) = L_k x(t), \qquad (5.3.1)$$

where L₁, L₂, ..., L_k are given matrices.
More precisely, let T > 0 be a given time, and let j ∈ N and I_j(·) be a function that maps from the set of the plant state measurements {x(·)|₀^{jT}} to the set of symbols {1, 2, ..., k}. Then, for any sequence of such functions {I_j}_{j=0}^∞, we will consider the following dynamic nonlinear state feedback controller:

$$u(t) = L_{i_j} x(t) \quad \forall j \in \{0, 1, 2, \ldots\} \quad \forall t \in [jT, (j+1)T), \quad \text{where } i_j = I_j(x(\cdot)|_0^{jT}). \qquad (5.3.2)$$
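In computational terms, the controller (5.3.2) holds one gain over each interval [jT, (j+1)T) and recomputes the index at the switching instants from the measured state. The following sketch makes this structure concrete; the scalar plant, the two gains, and the greedy selection rule are placeholder choices for illustration only, not the selection rule constructed later in the chapter.

```python
import numpy as np

# Placeholder data: a scalar unstable plant dx/dt = Ax + Bu and two basic gains.
A, B = np.array([[0.5]]), np.array([[1.0]])
L = [np.array([[-2.0]]), np.array([[-1.0]])]      # basic controllers u = L_i x
T, dt = 0.05, 0.001                               # switching period and Euler step
steps_per_period = int(round(T / dt))

def select_index(x):
    # Placeholder rule I_j: pick the gain with the most negative closed-loop drift.
    return int(np.argmin([(x.T @ (A + B @ Li) @ x)[0, 0] for Li in L]))

x = np.array([[1.0]])
i = select_index(x)                               # index i_0 chosen at t = 0
for k in range(1, 5001):                          # simulate 5 seconds
    x = x + dt * ((A + B @ L[i]) @ x)             # u(t) = L_{i_j} x(t) on [jT, (j+1)T)
    if k % steps_per_period == 0:                 # synchronous switch at t = jT
        i = select_index(x)

print(abs(x[0, 0]) < 1e-3)                        # state driven near the origin
```

Note that the index is piecewise constant: it may only change at the synchronous instants t = jT, exactly as in (5.3.2).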
We now introduce a corresponding notion of robust stabilizability for the uncertain system (5.2.3), (5.2.5) referred to as absolute stabilizability; see also [116].
Definition 5.3.1 The uncertain system (5.2.3), (5.2.5) is said to be absolutely stabilizable via synchronous controller switching with the basic state feedback controllers (5.3.1) if there exists a constant c > 0 and a state feedback controller of the form (5.3.1), (5.3.2) such that the following conditions hold:

(i) For any initial condition x(0) and uncertainty input w(·) ∈ L₂[0, ∞), the closed-loop system defined by (5.2.3) and (5.3.2) has a unique solution that is defined on [0, ∞).

(ii) The closed-loop system defined by (5.2.3) and (5.3.2) with w(t) ≡ 0 is globally asymptotically stable.
(iii) Given an admissible uncertainty w(·) for the uncertain system (5.2.3), (5.2.5), any solution to the equations (5.2.3), (5.3.2) satisfies [x(·), u(·), z(·), w(·)] ∈ L₂[0, ∞) and

$$\int_0^\infty \left( \|x(t)\|^2 + \|u(t)\|^2 + \|z(t)\|^2 + \|w(t)\|^2 \right) dt \le c\left[ \|x(0)\|^2 + d \right]. \qquad (5.3.3)$$
The following theorem shows a connection between absolute stabilizability and robust stabilizability with a quadratic storage function (see Definition 3.4.2).

Theorem 5.3.2 Suppose that the system (5.2.3) is robustly stabilizable with a quadratic storage function via synchronous controller switching with the basic controllers (5.3.1). Then, the uncertain system (5.2.3), (5.2.5) is absolutely stabilizable via synchronous controller switching with basic state feedback controllers (5.3.1).
Proof

According to Definition 3.4.2, if the system (5.2.3) is robustly stabilizable with a quadratic storage function, there exists a matrix H = H' > 0 and a constant ε > 0 such that

$$x((j+1)T)' H x((j+1)T) - x(jT)' H x(jT) + \int_{jT}^{(j+1)T} \left( \|z(t)\|^2 - \|w(t)\|^2 \right) dt \le -\varepsilon \int_{jT}^{(j+1)T} \|x(t)\|^2\,dt \qquad (5.3.4)$$

for any j. This implies that

$$\varepsilon \int_0^{(j+1)T} \|x(t)\|^2\,dt \le x(0)' H x(0) - \int_0^{(j+1)T} \left( \|z(t)\|^2 - \|w(t)\|^2 \right) dt \qquad (5.3.5)$$

for any j. Combining this inequality with the integral quadratic constraint (5.2.5), we obtain

$$\varepsilon \int_0^{N_q T} \|x(t)\|^2\,dt \le x(0)' H x(0) + d \qquad (5.3.6)$$

for any N_q. Now condition (5.3.3) follows from this, (5.2.5), (5.2.3), and (5.3.1). This completes the proof of the theorem. □
5.4 Output Feedback Stabilizability via Synchronous Controller Switching
In this section, we introduce the concept of absolute output feedback stabilizability for uncertain systems with a single integral quadratic constraint. We consider the linear uncertain system defined on the infinite time interval [0, ∞):

$$\begin{aligned} \dot x(t) &= A x(t) + B_1 w(t) + B_2 u(t), \\ z(t) &= K x(t) + G u(t), \\ y(t) &= C x(t) + v(t), \end{aligned} \qquad (5.4.1)$$
where x(t) ∈ Rⁿ is the state, w(t) ∈ R^p and v(t) ∈ R^l are the uncertainty inputs, u(t) ∈ R^h is the control input, z(t) ∈ R^q is the uncertainty output, y(t) ∈ R^l is the measured output, and A, B₁, B₂, K, G, and C are given matrices. The uncertainty in the preceding system can be described by equations of the form
$$\begin{bmatrix} w(t) \\ v(t) \end{bmatrix} = \phi(t, z(\cdot)), \qquad (5.4.2)$$

which means that the system contains a single uncertainty block that produces uncertainty inputs w(t) and v(t). This system is represented in the block diagram shown in Figure 5.4.1, where

$$\psi(t) \triangleq \begin{bmatrix} w(t) \\ v(t) \end{bmatrix}.$$
We will consider uncertainty inputs w(·) and v(·) such that the following integral quadratic constraint holds.
Definition 5.4.1 An uncertainty input [w(·), v(·)] is said to be an admissible uncertainty input for the system (5.4.1) if, for any locally square-integrable control input u(·) for which any solution to equations (5.4.1) with [u(·), w(·), v(·)] exists on [0, ∞), there exists a sequence {N_q, q = 1, 2, ...} ⊂ N ∪ {+∞} with lim_{q→∞} N_q = +∞ and a constant d > 0 such that

$$\int_0^{N_q T} \left( \|w(t)\|^2 + \|v(t)\|^2 \right) dt \le d + \int_0^{N_q T} \|z(t)\|^2\,dt \quad \forall q. \qquad (5.4.3)$$
Remark

It is clear that the uncertain system (5.4.1) allows for uncertainty satisfying a norm-bounded condition. In this case, the uncertain system is described by the state equations

$$\begin{aligned} \dot x(t) &= [A + B_1 \Delta(t) K_1] x(t) + [B_2 + B_1 \Delta(t) G_1] u(t), \\ y(t) &= [C + \Delta(t) K_2] x(t) + \Delta(t) G_2 u(t), \quad \|\Delta(t)\| \le 1, \end{aligned} \qquad (5.4.4)$$
FIGURE 5.4.1. Uncertain output feedback system.
where Δ(t) is the uncertainty matrix and ‖·‖ denotes the standard induced matrix norm. To verify that such uncertainty is admissible for the uncertain system (5.4.1), let

$$w(t) = \Delta(t)[K_1 x(t) + G_1 u(t)], \quad v(t) = \Delta(t)[K_2 x(t) + G_2 u(t)], \quad K \triangleq [K_1' \;\; K_2']', \quad G \triangleq [G_1' \;\; G_2']',$$

where ‖Δ(t)‖ ≤ 1 for all t ≥ 0. Then, w(·) and v(·) satisfy condition (5.4.3) with any N_q and d = 0.

Let T > 0 be a given switching time. We will consider the problem of absolute stabilizability of the uncertain system (5.4.1), (5.4.3) via synchronous controller switching of the form (4.5.3) with the basic nonlinear output feedback controllers (4.5.2). We now introduce a corresponding notion of absolute output feedback stabilizability for the uncertain system (5.4.1), (5.4.3).

Definition 5.4.2 The uncertain system (5.4.1), (5.4.3) is said to be absolutely stabilizable via synchronous controller switching with basic output feedback controllers (4.5.2) if there exists a constant c > 0 and a function V : Rⁿ → R such that V(x₀) > 0, V(0) = 0, and for any vector x₀ ∈ Rⁿ there exists an output feedback controller of the form (4.5.2), (4.5.3) such that the following conditions hold:
(i) For any initial condition x(0) and uncertainty input [w(·), v(·)] ∈ L₂[0, ∞), the closed-loop system defined by (5.4.1) and (4.5.3) has a unique solution that is defined on [0, ∞).

(ii) The closed-loop system defined by (5.4.1) and (4.5.3) with [w(t), v(t)] ≡ 0 is globally asymptotically stable.
(iii) Given an admissible uncertainty [w(·), v(·)] for the uncertain system (5.4.1), (5.4.3), any solution to the equations (5.4.1), (4.5.3) satisfies [x(·), u(·), z(·), w(·), v(·)] ∈ L₂[0, ∞) and

$$\int_0^\infty \left( \|x(t)\|^2 + \|u(t)\|^2 + \|z(t)\|^2 + \|w(t)\|^2 + \|v(t)\|^2 \right) dt \le V(x_0) + c\left[ \|x_0 - x(0)\|^2 + d \right]. \qquad (5.4.5)$$
Remark

In Definition 5.4.2, the vector x₀ is the expected value of the initial condition x(0). It can be seen from (5.4.5) that the closer x(0) is to x₀, the "more stable" the closed-loop system (5.4.1), (4.5.3) will be.
5.5 A Necessary and Sufficient Condition for Output Feedback Stabilizability
In this section, we derive an equivalent condition for absolute output feedback stabilizability via synchronous controller switching. Let ε > 0 be a given constant. Our solution to the preceding problem involves the following Riccati algebraic equation:

$$A P + P A' + P[K'K + \varepsilon I - C'C] P + B_1 B_1' = 0. \qquad (5.5.1)$$
Also, we consider a set of state equations of the form

$$\dot{\hat x}(t) = \left[ A + P[K'K + \varepsilon I - C'C] \right] \hat x(t) + P C' y(t) + B_2 u(t). \qquad (5.5.2)$$
Assumptions

We make the following assumptions.

Assumption 5.1 The pair (A, B₁) is controllable.

Assumption 5.2 K'G = 0.

Assumption 5.3 G'G > 0.

Notation
Let L(·) be a given function from Rⁿ to R, and let x̂₀ ∈ Rⁿ be a given vector. Introduce the following cost function:

$$W(\hat x(t), u(t), y(t)) \triangleq \|K \hat x(t) + G u(t)\|^2 + \varepsilon \left( \|\hat x(t)\|^2 + \|u(t)\|^2 \right) - \|C \hat x(t) - y(t)\|^2. \qquad (5.5.3)$$
Then,

$$P^i(\hat x_0, L(\cdot)) \triangleq \sup_{y(\cdot) \in L_2[0,T]} \left[ L(\hat x(T)) + \int_0^T W(\hat x, U_i(y), y)\,dt \right], \qquad (5.5.4)$$
where the supremum is taken over all solutions to the system (5.5.2) with y(·) ∈ L₂[0, T], u(t) = U_i(y(t)), and initial condition x̂(0) = x̂₀. Also, consider the following dynamic programming equation:

$$V(\hat x_0) = \min_{i=1,\ldots,k} P^i(\hat x_0, V(\cdot)). \qquad (5.5.5)$$

Dynamic programming equations of this type have been used in nonlinear H∞ control theory (e.g., see equation 3.6 in [54]). We now present a necessary and sufficient condition for absolute stabilizability.
Theorem 5.5.1 Consider the uncertain system (5.4.1), (5.4.3) and the basic controllers (4.5.2). Suppose that Assumptions 5.1-5.3 hold and P^i(·, ·) is defined by (5.5.4). Then, the following two statements are equivalent:

(i) The uncertain system (5.4.1), (5.4.3) is absolutely stabilizable via synchronous controller switching with the basic output feedback controllers (4.5.2).

(ii) There exists a constant ε > 0 and a solution P > 0 to the Riccati equation (5.5.1) such that the matrix D ≜ A + P[K'K + εI − C'C] is stable and the dynamic programming equation (5.5.5) has a solution V : Rⁿ → R satisfying V(x̂₀) > 0 and V(0) = 0.

Furthermore, suppose that condition (ii) holds. Consider a function i : Rⁿ → {1, 2, ..., k}, where i(x̂₀) is an index such that the minimum in (5.5.5) is achieved for i = i(x̂₀), and let x̂(·) be the solution to the equation (5.5.2) with initial condition x̂(0) = x₀. Then, the controller (4.5.2), (4.5.3) associated with the switching sequence {i_j}_{j=0}^∞, where i_j ≜ i(x̂(jT)), solves the absolute stabilizability problem for the uncertain system (5.4.1), (5.4.3).

Proof

(i) ⇒ (ii): To prove this implication, we first establish the following claim.
Claim

If the controller (4.5.3) is absolutely stabilizing for the uncertain system (5.4.1), (5.4.3), then there exist constants ε̃ > 0 and c₀ > 0 and a function Ṽ(x₀) such that

$$\int_0^\infty \left( \tilde\varepsilon \left( \|x(t)\|^2 + \|u(t)\|^2 \right) + \|z(t)\|^2 - \|w(t)\|^2 - \|v(t)\|^2 \right) dt \le \tilde V(x_0) + c_0 \|x_0 - x(0)\|^2 \qquad (5.5.6)$$
for all solutions of the closed-loop system (5.4.1), (4.5.3) with [w(·), v(·)] ∈ L₂[0, ∞). To establish this claim, let

$$F_0(z(\cdot), w(\cdot), v(\cdot)) \triangleq \int_0^\infty \left( \|z(t)\|^2 - \|w(t)\|^2 - \|v(t)\|^2 \right) dt$$

and

$$d = \begin{cases} -F_0(z(\cdot), w(\cdot), v(\cdot)) & \text{if } F_0(z(\cdot), w(\cdot), v(\cdot)) < 0, \\ 0 & \text{if } F_0(z(\cdot), w(\cdot), v(\cdot)) \ge 0. \end{cases} \qquad (5.5.7)$$

With these definitions, it can be seen that for all [w(·), v(·)] ∈ L₂[0, ∞), the corresponding solution to the closed-loop system (5.4.1), (4.5.3) will satisfy condition (5.4.3) with N_q = ∞. Hence, with x̂(0) = x₀, condition (5.4.5) implies that

$$\int_0^\infty \left( \|x(t)\|^2 + \|u(t)\|^2 + \|z(t)\|^2 \right) dt \le c\left[ d + \|x_0 - x(0)\|^2 \right] + V(x_0) \qquad (5.5.8)$$
for all solutions of the closed-loop system (5.4.1), (4.5.3). We now consider two cases.

Case 1

F₀(z(·), w(·), v(·)) ≥ 0. In this case, it follows from (5.5.7) that d = 0, and (5.5.8) implies that the inequality (5.5.6) is satisfied with ε̃ = 1, c₀ = c, and Ṽ(·) = V(·).

Case 2

F₀(z(·), w(·), v(·)) < 0. In this case, it follows from (5.5.7) and (5.5.8) that the inequality (5.5.6) holds with ε̃ = c⁻¹, c₀ = 1, and Ṽ(·) = c⁻¹V(·). Thus, the claim has been established in both cases.

Hence, if a controller of the form (4.5.3) is absolutely stabilizing for the system (5.4.1), (5.4.3), then it solves the H∞ control problem (5.5.6). Now, Theorem 4.5.2 implies that condition (ii) holds. This completes the proof of this part of the theorem.

(ii) ⇒ (i): According to Theorem 4.5.2, if condition (ii) holds, then the corresponding controller (4.5.2), (4.5.3) solves the H∞ control problem (5.5.6) with c₀ = c and Ṽ(·) = V(·). Hence, it follows from Theorem 4.5.2 that conditions (i) and (ii) of Definition 5.4.2 hold. Furthermore, combining (5.5.6) and (5.4.3), we obtain condition (5.4.5). This completes the proof of the theorem. □
5.6 A Constructive Method for Output Feedback Absolute Stabilization
The previous section describes a necessary and sufficient condition for absolute stabilizability of a class of uncertain controlled switching systems. The result is given in terms of the existence of suitable solutions to a dynamic programming equation and a Riccati algebraic equation of the H∞ filtering type. However, this method is computationally expensive. In this section, we describe a sufficient condition for robust stabilizability that is well-suited to real-time implementation. Consider the case of linear output feedback basic controllers
$$u_1(t) = L_1 y(t), \quad u_2(t) = L_2 y(t), \quad \ldots, \quad u_k(t) = L_k y(t), \qquad (5.6.1)$$

where L₁, L₂, ..., L_k are given matrices. Let X = X' > 0 be a given matrix. Consider the Riccati differential equations (5.6.2) for i ∈ {1, 2, ..., k}, where
$$A_i \triangleq A + P(K'K + \varepsilon I) + B_2 L_i C, \quad Q_i \triangleq P C' + B_2 L_i, \quad H_i \triangleq K'K + \varepsilon I.$$
We are now in a position to present a method for absolute stabilization via synchronous controller switching.
Theorem 5.6.1 Consider the uncertain system (5.4.1), (5.4.3) and the basic linear controllers (5.6.1). Suppose that G = 0 and there exists a constant ε > 0 and a matrix X = X' > 0 such that the following two conditions hold:

(i) There exists a solution P > 0 to the Riccati equation (5.5.1) such that the matrix D ≜ A + P[K'K + εI − C'C] is stable.

(ii) For any i ∈ {1, 2, ..., k}, the solution X_i(·) to the Riccati equation (5.6.2) is defined on [0, T] and the collection of matrices {X₁(0) − X, ..., X_k(0) − X} is complete (see Definition 2.2.4 of Chapter 2).
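For two matrices, completeness in the sense of Definition 2.2.4 can be certified via the S-procedure: if some convex combination τM₁ + (1 − τ)M₂ is negative definite, then for every nonzero x at least one of the quadratic forms x'M_i x is negative. The grid-search sketch below illustrates this sufficient test; the 2 × 2 matrices are arbitrary placeholders, and this is not claimed to be the numerical procedure used in the book.

```python
import numpy as np

def strictly_complete_pair(M1, M2, grid=1001):
    """Sufficient S-procedure test: {M1, M2} is strictly complete if some
    convex combination tau*M1 + (1 - tau)*M2 is negative definite."""
    for tau in np.linspace(0.0, 1.0, grid):
        if np.linalg.eigvalsh(tau * M1 + (1.0 - tau) * M2).max() < 0.0:
            return True
    return False

# Arbitrary illustration: neither matrix is negative definite on its own,
# yet their 50/50 mix is diag(-1, -1) < 0, so the pair is strictly complete.
M1 = np.array([[1.0, 0.0], [0.0, -3.0]])
M2 = np.array([[-3.0, 0.0], [0.0, 1.0]])
print(strictly_complete_pair(M1, M2))      # prints True
```

This is exactly the kind of certificate used in the example of Section 5.8, where scaling constants τ₁, τ₂ are exhibited for the pair {X₁(0) − X, X₂(0) − X}.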
Then, the uncertain system (5.4.1), (5.4.3) is absolutely stabilizable via synchronous controller switching with the basic output feedback controllers (5.6.1). Furthermore, suppose that conditions (i) and (ii) hold; let i(x̂₀) be an index such that x̂₀'[X_{i(x̂₀)}(0) − X]x̂₀ ≤ 0, and let x̂(·) be the solution to the equation (5.5.2) with initial condition x̂(0) = x₀. Then, the controller (5.6.1), (4.5.3) associated with the switching sequence {i_j}_{j=0}^∞, where i_j ≜ i(x̂(jT)), solves the absolute stabilizability problem for the uncertain system (5.4.1), (5.4.3).
Remark

In Theorem 5.6.1, we assume that G = 0. However, the corresponding result can be extended to the case of any G.
Proof of Theorem 5.6.1 Suppose that conditions (i) and (ii) hold. Let N be a given number and let X₀ = P⁻¹, where P > 0 is a solution to the Riccati equation (5.5.1). For any j = 1, 2, ..., N, consider all pairs [y(·), u(·)] ∈ L₂[(j−1)T, jT] related by (5.5.2), (5.6.1), (4.5.3). Then,
"( 'T) 'X" ( 'T)
sup [ x J
x J
+
ljT
(j- l)T
(II-IIKX(C(t)1x(tl )+-y(Ellx(t)11
= x« j -1 )T )'Xi (O )x« j -l) T)
2
2
t))112
dt]
)
< x« j -1) T) 'Xx «(j -1 )T ), (5.6.3)
wh ere the sup rem um is tak en ove r all xC), y(-), and u(·) rel ate d by (5.5.2), (5.6.1), (4.5.3) and suc h tha t x« j -1 )T ) = Xo and y(.) E L [(j -1 )T ,jT ). 2 Ap ply ing the ine qua lity (5.6.3) for j = N, N -1 , ... ,2, 1, we ob tai n tha t .
$$\sup_{\hat x(0) = \hat x_0,\; y(\cdot) \in L_2[0, NT]} \int_0^{NT} \left( \|K \hat x(t)\|^2 + \varepsilon \|\hat x(t)\|^2 - \|C \hat x(t) - y(t)\|^2 \right) dt \le \hat x_0' X \hat x_0. \qquad (5.6.4)$$
Furthermore, Lemma 4.3.3 and the inequality (5.6.4) imply that

$$\int_0^{NT} \left( \|K x(t)\|^2 + \varepsilon \|x(t)\|^2 - \|C x(t) - y(t)\|^2 \right) dt \le (x(0) - \hat x_0)' X_0 (x(0) - \hat x_0) + \hat x_0' X \hat x_0. \qquad (5.6.5)$$
Combining this inequality with the integral quadratic constraint (5.4.3) and the assumption G = 0, we obtain condition (iii) of Definition 5.4.2. Conditions (i) and (ii) of Definition 5.4.2 follow immediately from (5.4.5) and the assumption G = 0. This completes the proof of this theorem. □
Remark

The method for absolute stabilization presented in this section requires the off-line solution of Riccati equations and checking for a completeness condition. Then, real-time implementation requires only standard on-line linear state estimation and a simple look-up procedure to determine which of the basic controllers to use.
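The look-up step itself is a single quadratic-form comparison per basic controller. A schematic sketch follows; the matrices below are placeholders, whereas in the actual method the X_i(0) come from the Riccati equations (5.6.2) and the estimate x̂ from the estimator (5.5.2).

```python
import numpy as np

def select_controller(x_hat, Xi0_list, X):
    """Look-up at t = jT: choose i minimizing x_hat' [X_i(0) - X] x_hat."""
    return int(np.argmin([x_hat @ (Xi0 - X) @ x_hat for Xi0 in Xi0_list]))

# Placeholder data for a two-controller configuration.
X = np.eye(2)
Xi0_list = [np.diag([0.5, 2.0]), np.diag([2.0, 0.5])]

# Estimates pointing along different directions select different controllers.
print(select_controller(np.array([1.0, 0.1]), Xi0_list, X))   # prints 0
print(select_controller(np.array([0.1, 1.0]), Xi0_list, X))   # prints 1
```

The cost of this step grows only linearly in the number k of basic controllers, which is what makes the method implementable in real time.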
5.7 Systems with Structured Uncertainty
We consider the linear uncertain system with structured uncertainty defined on the infinite time interval [0, ∞):

$$\begin{aligned} \dot x(t) &= A x(t) + \sum_{s=0}^{m} B_1^s w_s(t) + B_2 u(t), \\ z_0(t) &= K_0 x(t) + G_0 u(t), \\ z_1(t) &= K_1 x(t) + G_1 u(t), \\ &\ \ \vdots \\ z_m(t) &= K_m x(t) + G_m u(t), \\ y(t) &= C x(t) + v(t), \end{aligned} \qquad (5.7.1)$$
where x(t) ∈ Rⁿ is the state, w_s(t) ∈ R^{p_s} and v(t) ∈ R^l are the uncertainty inputs, u(t) ∈ R^h is the control input, z_s(t) ∈ R^{q_s} are the uncertainty outputs, y(t) ∈ R^l is the measured output, and A, B_1^s, B₂, K_s, G_s, and C are given matrices. The uncertainty in the preceding system can be described by equations of the form

$$w_0(t) = \phi_0(t, z_0(\cdot)), \quad w_1(t) = \phi_1(t, z_1(\cdot)), \quad \ldots, \quad \xi(t) = \phi_m(t, z_m(\cdot)), \qquad (5.7.2)$$

where

$$\xi(t) \triangleq \begin{bmatrix} w_m(t) \\ v(t) \end{bmatrix}.$$
The block diagram of this uncertain system with structured uncertainty is shown in Figure 5.7.1. Such a structured uncertainty corresponds to the case in which the process being modeled has a number of sources of uncertainty, each acting "independently." We now present our formal definition of a continuous-time uncertain system with structured uncertainty satisfying several integral quadratic constraints. For the uncertain system (5.7.1) and uncertainties (5.7.2), a bound on the uncertainty is determined by the following integral quadratic constraint condition.
FIGURE 5.7.1. Block diagram of an uncertain system with structured uncertainty.
Definition 5.7.1 An uncertainty input [w₀(·), w₁(·), ..., w_m(·), v(·)] is said to be an admissible uncertainty input for the system (5.7.1) if, for any locally square-integrable control input u(·) for which any solution to equations (5.7.1) with [u(·), w₀(·), w₁(·), ..., w_m(·), v(·)] exists on [0, ∞), there exists a sequence {N_q, q = 1, 2, ...} ⊂ N ∪ {+∞} with lim_{q→∞} N_q = +∞ and constants d_s > 0, s = 0, 1, ..., m, such that

$$\int_0^{N_q T} \|w_s(t)\|^2\,dt \le d_s + \int_0^{N_q T} \|z_s(t)\|^2\,dt, \quad s = 0, 1, \ldots, m-1, \qquad \int_0^{N_q T} \left( \|w_m(t)\|^2 + \|v(t)\|^2 \right) dt \le d_m + \int_0^{N_q T} \|z_m(t)\|^2\,dt \quad \forall q. \qquad (5.7.3)$$
We now introduce a corresponding notion of absolute output feedback stabilizability for the uncertain system (5.7.1) with structured uncertainty satisfying (5.7.3).
Definition 5.7.2 The uncertain system (5.7.1), (5.7.3) is said to be absolutely stabilizable via synchronous controller switching with basic output feedback controllers (4.5.2) if there exists a constant c > 0 and a function V : Rⁿ → R such that V(x₀) > 0, V(0) = 0, and for any vector x₀ ∈ Rⁿ there exists an output feedback controller of the form (4.5.2), (4.5.3) such that the following conditions hold:

(i) For any initial condition x(0) and uncertainty input [w₀(·), w₁(·), ..., w_m(·), v(·)] ∈ L₂[0, ∞), the closed-loop system defined by (5.7.1) and (4.5.3) has a unique solution that is defined on [0, ∞).

(ii) The closed-loop system defined by (5.7.1) and (4.5.3) with [w₀(·), w₁(·), ..., w_m(·), v(·)] ≡ 0 is globally asymptotically stable.

(iii) Given an admissible uncertainty
[w₀(·), w₁(·), ..., w_m(·), v(·)] for the uncertain system (5.7.1), (5.7.3), then any solution to the equations (5.7.1), (4.5.3) satisfies

[x(·), u(·), w₀(·), w₁(·), ..., w_m(·), v(·), z₀(·), z₁(·), ..., z_m(·)] ∈ L₂[0, ∞)

and

$$\int_0^\infty \left( \|x(t)\|^2 + \|u(t)\|^2 + \sum_{s=0}^{m} \|z_s(t)\|^2 + \sum_{s=0}^{m} \|w_s(t)\|^2 + \|v(t)\|^2 \right) dt \le V(x_0) + c\left[ \|x_0 - x(0)\|^2 + \sum_{s=0}^{m} d_s \right]. \qquad (5.7.4)$$
Let τ₁ > 0, τ₂ > 0, ..., τ_m > 0 be given constants. Introduce a new uncertain system with unstructured uncertainty by the state equations:

$$\begin{aligned} \dot x(t) &= A x(t) + \tilde B_1 w(t) + B_2 u(t), \\ z(t) &= \tilde K x(t) + \tilde G u(t), \\ y(t) &= C x(t) + v(t), \end{aligned} \qquad (5.7.5)$$
where x(t) is the state, w(t) and v(t) are the uncertainty inputs, u(t) is the control input, z(t) ∈ R^q is the uncertainty output, y(t) ∈ R^l is the measured output, and B̃₁, K̃, and G̃ are given by

$$\tilde B_1 \triangleq \left[ B_1^0 \;\; \tfrac{1}{\sqrt{\tau_1}} B_1^1 \;\; \cdots \;\; \tfrac{1}{\sqrt{\tau_m}} B_1^m \right], \quad \tilde K \triangleq \begin{bmatrix} K_0 \\ \sqrt{\tau_1}\, K_1 \\ \vdots \\ \sqrt{\tau_m}\, K_m \end{bmatrix}, \quad \tilde G \triangleq \begin{bmatrix} G_0 \\ \sqrt{\tau_1}\, G_1 \\ \vdots \\ \sqrt{\tau_m}\, G_m \end{bmatrix}. \qquad (5.7.6)$$
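The scaled coefficient matrices in (5.7.6) are assembled mechanically from the constants τ_s. The sketch below builds B̃₁ and K̃ for random placeholder data (the dimensions and values are arbitrary) and checks the stacking identity K̃'K̃ = K₀'K₀ + Σ_{s=1}^m τ_s K_s'K_s, so that the single uncertainty output aggregates a τ-weighted combination of the individual blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
tau = [1.0, 1.5]                          # tau_1, ..., tau_m > 0 (arbitrary)
B1s = [rng.standard_normal((n, 1)) for _ in range(m + 1)]   # B1^0, ..., B1^m
Ks = [rng.standard_normal((2, n)) for _ in range(m + 1)]    # K_0, ..., K_m

# (5.7.6): columns of B1 are scaled by 1/sqrt(tau_s), rows of K by sqrt(tau_s).
B1_tilde = np.hstack([B1s[0]] + [B1s[s] / np.sqrt(tau[s - 1]) for s in range(1, m + 1)])
K_tilde = np.vstack([Ks[0]] + [np.sqrt(tau[s - 1]) * Ks[s] for s in range(1, m + 1)])

lhs = K_tilde.T @ K_tilde
rhs = Ks[0].T @ Ks[0] + sum(tau[s - 1] * Ks[s].T @ Ks[s] for s in range(1, m + 1))
print(np.allclose(lhs, rhs), B1_tilde.shape, K_tilde.shape)
```

The reciprocal scaling of B̃₁ against K̃ is what lets the scaled signals w and z of (5.7.8) reproduce the original dynamics (5.7.1) exactly.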
Furthermore, assume that the uncertainty in this system satisfies the following integral quadratic constraint:

$$\int_0^{N_q T} \left( \|w(t)\|^2 + \|v(t)\|^2 \right) dt \le d + \int_0^{N_q T} \|z(t)\|^2\,dt \quad \forall q. \qquad (5.7.7)$$
The main result of this section now follows.

Theorem 5.7.3 Suppose that there exist constants τ₁ > 0, τ₂ > 0, ..., τ_m > 0 such that the uncertain system with unstructured uncertainty (5.7.5), (5.7.6), (5.7.7) is absolutely stabilizable via synchronous controller switching with the basic output feedback controllers (5.6.1). Then, the uncertain system with structured uncertainty (5.7.1), (5.7.3) is also absolutely stabilizable via synchronous controller switching with the basic output feedback controllers (5.6.1).
Proof Indeed, the system (5.7.1) can be rewritten in the form (5.7.5), (5.7.6) with

$$w(t) \triangleq \begin{bmatrix} w_0(t) \\ \sqrt{\tau_1}\, w_1(t) \\ \vdots \\ \sqrt{\tau_m}\, w_m(t) \end{bmatrix}, \quad z(t) \triangleq \begin{bmatrix} z_0(t) \\ \sqrt{\tau_1}\, z_1(t) \\ \vdots \\ \sqrt{\tau_m}\, z_m(t) \end{bmatrix}. \qquad (5.7.8)$$
Furthermore, the integral quadratic constraints (5.7.3) imply the integral quadratic constraint (5.7.7). Hence, any controller that absolutely stabilizes the system (5.7.5), (5.7.7) will also absolutely stabilize the system (5.7.1), (5.7.3). This completes the proof of the theorem. □
5.8 Illustrative Example
To illustrate the theoretical results of this chapter, we consider the uncertain system (5.4.1), (5.4.3) with T = 0.05, t_j = 0.05j for j = 0, 1, 2, ..., and the coefficients

$$A = \begin{bmatrix} 0 & 1 & 0.4 & 0 \\ -1.25 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.6 & 0 & -1.25 & 1 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0.8 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \quad K = \begin{bmatrix} 0 & 0 & 0 & 1.05 \end{bmatrix}.$$
This uncertain system will be controlled by the following two basic output feedback controllers:

$$u_1(t) = \begin{bmatrix} 0.05 & 0.5 \end{bmatrix} y(t), \quad u_2(t) = \begin{bmatrix} 1.05 & 1.5 \end{bmatrix} y(t).$$
We apply Theorem 5.6.1 to find an absolutely stabilizing synchronously switching controller. Let the matrix X be defined as

$$X = \begin{bmatrix} 0.0640 & 0.0378 & 0.0151 & 0.0142 \\ 0.0378 & 0.0959 & 0.0220 & 0.0232 \\ 0.0151 & 0.0220 & 0.1127 & 0.0941 \\ 0.0142 & 0.0232 & 0.0941 & 0.1089 \end{bmatrix} > 0.$$
We numerically solve the Riccati equations (5.5.1) and (5.6.2). Then, using the S-procedure with τ₁ = 1.0 and τ₂ = 1.5, we show that the set of matrices {X₁(0) − X, X₂(0) − X} is strictly complete. The simulation results are presented in Figure 5.8.1. Here, we take the initial condition

$$x(0) = \begin{bmatrix} 0.1 \\ -0.1 \\ -0.2 \\ 0.2 \end{bmatrix}$$
and the uncertainty inputs

w(t) = sin(10t) ||z(t)||,      v(t) = 0.1 cos(t) z(t).

It is easy to show that these uncertainty inputs satisfy the integral quadratic constraint (5.4.3). It is also easily seen that our system with control input u(·) = u1(·) or u(·) = u2(·) is unstable, because the corresponding nominal linear closed-loop systems have poles in the right half-plane. However, the switching controller obtained from Theorem 5.6.1 robustly stabilizes the uncertain system. The first component of the control input u(·) is displayed in Figure 5.8.2. The control input is discontinuous because the controller repeatedly switches between the basic controllers.
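The strict completeness test used above via the S-procedure reduces to a matrix-definiteness check: it suffices to exhibit scalars τ1, τ2 ≥ 0 such that τ1·M1 + τ2·M2 is positive-definite, with M1 = X1(0) − X and M2 = X2(0) − X. A minimal numerical sketch (the matrices below are illustrative stand-ins, not the example's actual data):

```python
import numpy as np

def s_procedure_certificate(M1, M2, tau1, tau2):
    """Return True if tau1*M1 + tau2*M2 is positive-definite.

    By the S-procedure, such a certificate shows the set {M1, M2} is
    strictly complete: for every x != 0 at least one of x'M1x, x'M2x
    is positive.
    """
    S = tau1 * M1 + tau2 * M2
    return bool(np.min(np.linalg.eigvalsh(S)) > 0.0)

# Toy symmetric matrices standing in for X1(0) - X and X2(0) - X.
M1 = np.array([[ 1.0, 0.0], [0.0, -0.5]])
M2 = np.array([[-0.5, 0.0], [0.0,  1.0]])

print(s_procedure_certificate(M1, M2, 1.0, 1.5))
```

With these toy matrices, neither M1 nor M2 is definite alone, but the weighted combination is, which is exactly the situation exploited in the example.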
FIGURE 5.8.1. Evolution of the plant states with time.
FIGURE 5.8.2. Evolution of the first component of the control input u(t) with time.
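Simulations like those shown in Figures 5.8.1 and 5.8.2 can be reproduced by a simple Euler integration of the switched closed loop. The sketch below uses a toy two-state plant and a fixed periodic switching rule as placeholders; the chapter's example uses the fourth-order system above and the switching rule produced by Theorem 5.6.1.

```python
import numpy as np

# Toy 2-state plant standing in for (5.4.1); the matrices and the
# periodic switching rule are illustrative placeholders only.
A  = np.array([[0.0, 1.0], [-1.0, 0.2]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [1.0]])
K  = np.eye(2)                   # z = K x
C  = np.eye(2)                   # y = C x + v
G1 = np.array([[-0.5, -0.5]])    # basic controller 1: u = G1 y
G2 = np.array([[-1.5, -1.0]])    # basic controller 2: u = G2 y

T, dt, steps = 0.05, 0.001, 10000   # switching period, Euler step, steps
x = np.array([0.1, -0.1])
for k in range(steps):
    t = k * dt
    z = K @ x
    w = np.sin(10 * t) * np.linalg.norm(z)   # uncertainty input w(t)
    v = 0.1 * np.cos(t) * z                  # uncertainty input v(t)
    y = C @ x + v
    G = G1 if int(t / T) % 2 == 0 else G2    # synchronous switching
    u = G @ y
    x = x + dt * (A @ x + (B1 * w).ravel() + B2 @ u)

print(np.all(np.isfinite(x)))
```

The discontinuities visible in Figure 5.8.2 come from the line that swaps G1 for G2 at the switching instants jT.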
6 Robust Output Feedback Controllability via Synchronous Controller Switching

6.1 Introduction
A fundamental concept in linear systems theory is that of controllability. This concept was introduced by Kalman [56] and plays an important role in the design of controllers. A linear system without uncertainty is said to be controllable if there exists a control input that steers a given initial state to the origin in a specified time. The concept of controllability is well studied and understood in the case of linear systems without uncertainty. However, in most real-world problems, the model of the process includes some uncertain parameters. Moreover, we consider the case where only output feedback is available. In this chapter, we address the problem of robust output feedback controllability for a class of uncertain linear time-varying systems. We define an uncertain system to be robustly output feedback controllable if there exists an output feedback controller that steers a given initial state to the vicinity of the origin in a specified time. Because the underlying system is uncertain, and only output feedback is available, it may be unreasonable to expect a controller to steer a given initial state exactly to the origin in a finite time. For example, in [91], it was shown that even in the state feedback case, an uncertain system must satisfy a very strong assumption for this to be possible. The class of output feedback controllers considered in this chapter consists of nonlinear digital controllers with control signals belonging to a given set. Such controllers arise in many practical situations. It is well known,
for example, that the presence of saturation limits on the control input significantly complicates the controller design even in the case of linear time-invariant systems without uncertainty. As in Chapter 5, the class of uncertain systems considered in this chapter consists of uncertain time-varying linear systems in which the uncertainty is described by a certain integral quadratic constraint. However, this integral quadratic constraint is defined on a finite-time interval. For this class of uncertain systems, we define the robust output feedback control problem with a control constraint described previously. For a given uncertain system, if this problem has a solution, then the system is said to be robustly output feedback controllable with the given control set. The main result of the chapter gives a necessary and sufficient condition for an uncertain linear system to be robustly output feedback controllable with a given control set. This condition is given in terms of the existence of suitable solutions to a Riccati differential equation of the game type and a dynamic programming equation. If such solutions exist, then it is shown that they can be used to construct a corresponding robust output feedback controller. The main results of this chapter were originally published in [101]. The remainder of the chapter is organized as follows. Section 6.2 describes the class of uncertain systems under consideration and introduces the concept of robust output feedback constrained controllability. Section 6.3 presents the main results of the chapter and their proofs.
6.2 Robust Output Feedback Controllability
Let N be a given number and T > 0 be a given time. Consider the time-varying uncertain linear system defined on the finite time interval [0, NT]:
ẋ(t) = A(t)x(t) + B1(t)w(t) + B2(t)u(t),
z(t) = K(t)x(t) + G(t)u(t),
y(t) = C(t)x(t) + v(t),      (6.2.1)
where x(t) ∈ R^n is the state, w(t) ∈ R^p and v(t) ∈ R^l are the uncertainty inputs, u(t) ∈ R^h is the control input, z(t) ∈ R^q is the uncertainty output, y(t) ∈ R^l is the measured output, and A(·), B1(·), B2(·), K(·), G(·), and C(·) are bounded piecewise-continuous matrix functions defined on the interval [0, NT]. Also, suppose that the following assumption holds.

Assumption 6.1. G(t)'K(t) = 0 for all t ∈ [0, NT].
System Uncertainty
The uncertainty in the preceding system is required to satisfy the following integral quadratic constraint. Let X0 = X0' > 0 be a given matrix, a ∈ R^n be a given vector, d > 0 be a given constant, and Q(·) = Q(·)' and R(·) = R(·)' be given bounded piecewise-continuous matrix weighting functions satisfying the following condition: there exists a constant δ0 > 0 such that Q(t) > δ0·I and R(t) > δ0·I for all t ∈ [0, NT]. For a given finite time interval [0, s] where s ∈ (0, NT], we will consider the uncertainty inputs w(·) and v(·) and initial conditions x(0) such that
(x(0) − a)'X0(x(0) − a) + ∫_0^s (w(t)'Q(t)w(t) + v(t)'R(t)v(t)) dt < d + ∫_0^s ||z(t)||² dt.      (6.2.2)
Note that the preceding uncertainty description allows for uncertainties in which the uncertainty inputs w(·) and v(·) depend dynamically on the uncertainty output z(·). In this case, the constant d may be interpreted as a measure of the size of the initial conditions on the nominal system and the uncertainty dynamics. It is clear that the uncertain system (6.2.1), (6.2.2) allows for uncertainty that satisfies a standard norm-bounded constraint. In this case, the uncertain system could be described by the state equations (5.4.4). Also, the initial conditions must satisfy the inequality
(x(0) − a)'X0(x(0) − a) < d.

To verify that such uncertainty is admissible for the uncertain system (6.2.1), (6.2.2), let w(t) = Δ1(t)z(t), v(t) = Δ2(t)z(t), where
|| [Δ1(t)'  Δ2(t)']' || ≤ 1

for all t ∈ [0, NT]. Then condition (6.2.2) is satisfied with Q(·) ≡ I and R(·) ≡ I.
Let Ω ⊂ R^h be a given nonempty set. We will consider the following class of nonlinear digital output feedback controllers that update the control signal u(t) at discrete times, with u(t) constant between updates:
u(jT) = U[jT, y(·)|_0^{jT}], where u(jT) ∈ Ω and
u(t) = u(jT)  ∀t ∈ [jT, (j+1)T)  ∀j = 0, 1, ..., N − 1.      (6.2.3)
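A controller of the form (6.2.3) is a sampled-data law: at each instant jT it picks a value from the constraint set Ω using the measurements collected so far, and holds that value until (j+1)T. A minimal sketch of this update-and-hold structure (the selection rule, signals, and interface names are hypothetical):

```python
import numpy as np

def run_digital_controller(select_u, T, N, measure, dt=0.01):
    """Sketch of a controller of the form (6.2.3): at each update
    instant jT the control value is chosen from the past measurements
    and then held constant on [jT, (j+1)T).

    select_u : callable mapping the list of past samples of y to a
               control value in the constraint set Omega (hypothetical
               interface, for illustration only).
    measure  : callable t -> sample of the measured output y(t).
    """
    u_hist, y_hist = [], []
    t = 0.0
    for j in range(N):
        u = select_u(y_hist)            # u(jT), based on y(.)|_0^{jT}
        for _ in range(int(round(T / dt))):
            y_hist.append(measure(t))   # sample the measured output
            u_hist.append(u)            # zero-order hold between updates
            t += dt
    return np.array(u_hist)

# Example: Omega = {-1, 0, +1}; pick the value aligned with the last sample.
omega = (-1.0, 0.0, 1.0)
sel = lambda ys: 0.0 if not ys else max(omega, key=lambda u: u * ys[-1])
u = run_digital_controller(sel, T=0.1, N=5, measure=lambda t: np.sin(t))
print(len(u))
```

When Ω is finite, as in the remark below, each update is exactly a switch between finitely many basic controllers.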
Remark
In the case where the set Ω is finite, the controller of the form (6.2.3) is a synchronously switched output feedback controller similar to those from Chapters 4 and 5.

Notation
Let a ∈ R^n be a given vector, U[·] be any controller of the form (6.2.3), and d > 0 be a given constant. Then, X_s[a, d, U[·]] denotes the set of all
possible states x(s) at time s for the uncertain closed-loop system (6.2.1), (6.2.3) with uncertainty inputs and initial conditions satisfying the constraint (6.2.2). Also, let u0(·) and y0(·) be given vector functions, a ∈ R^n be a given vector, and d > 0 be a given constant. Then, Z_s[a, d, u0(·), y0(·)] denotes the set of all possible states x(s) at time s for the uncertain open-loop system (6.2.1) with u(·) = u0(·), y(·) = y0(·) and uncertainty inputs and initial conditions satisfying the constraint (6.2.2).
Definition 6.2.1 Let a ∈ R^n be a given vector. A controller U_a[·] of the form (6.2.3) is said to be a bounding control for the uncertain system (6.2.1), (6.2.2) if the set X_s[a, d, U_a[·]] is bounded for all s ∈ (0, NT] and d > 0.

Definition 6.2.2 Let Ω ⊂ R^h be a given set. The uncertain system (6.2.1), (6.2.2) is said to be robustly output feedback controllable on [0, NT] via synchronous controller switching with the control set Ω if for any a ∈ R^n there exists a bounding control U_a[·] of the form (6.2.3).
6.3 A Necessary and Sufficient Condition for Robust Controllability
Our solution to the preceding problems involves the following Riccati differential equation:
Ṗ(t) = A(t)P(t) + P(t)A(t)' + P(t)[K(t)'K(t) − C(t)'R(t)C(t)]P(t) + B1(t)Q(t)^{-1}B1(t)'.      (6.3.1)
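A Riccati differential equation of this form can be propagated numerically by any standard ODE scheme, and checking that P(t) stays positive-definite along the way is exactly what the existence conditions later in the chapter require. A sketch with classical RK4 and illustrative time-invariant coefficients (not taken from the book):

```python
import numpy as np

# Toy time-invariant coefficients standing in for A, B1, K, C, Q, R
# of equation (6.3.1); the values are illustrative only.
A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1 = np.array([[0.0], [1.0]])
K  = 0.5 * np.eye(2)
C  = np.array([[1.0, 0.0]])
Q  = np.eye(1)
R  = np.eye(1)
X0 = np.eye(2)

def pdot(P):
    # Right-hand side of the Riccati differential equation (6.3.1).
    return (A @ P + P @ A.T
            + P @ (K.T @ K - C.T @ R @ C) @ P
            + B1 @ np.linalg.inv(Q) @ B1.T)

P, dt = np.linalg.inv(X0), 1e-3      # initial condition P(0) = X0^{-1}
for _ in range(300):                 # classical RK4 on [0, 0.3]
    k1 = pdot(P)
    k2 = pdot(P + 0.5 * dt * k1)
    k3 = pdot(P + 0.5 * dt * k2)
    k4 = pdot(P + dt * k3)
    P = P + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# The controllability condition asks for P(t) to stay positive-definite.
print(np.all(np.linalg.eigvalsh(P) > 0))
```

The right-hand side maps symmetric matrices to symmetric matrices, so the iterates remain symmetric up to rounding and an eigenvalue check suffices for definiteness.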
Also, we consider a set of state equations of the form

dx̂(t)/dt = [A(t) + P(t)(K(t)'K(t) − C(t)'R(t)C(t))] x̂(t) + P(t)C(t)'R(t)y(t) + B2(t)u(t).      (6.3.2)
Furthermore, introduce the following cost function

W(x̂(t), u(t), y(t)) ≜ ||K(t)x̂(t) + G(t)u(t)||² − (C(t)x̂(t) − y(t))'R(t)(C(t)x̂(t) − y(t)).      (6.3.3)
Let u0 ∈ R^h and x0 ∈ R^n be given vectors, L(·) be a given function from R^n to R, and j ∈ {0, 1, ..., N − 1} be a given number. Then,

F_j(x0, u0, L(·)) ≜ sup_{y(·)∈L2[jT,(j+1)T]} [ L(x̂((j+1)T)) + ∫_{jT}^{(j+1)T} W(x̂(t), u0, y(t)) dt ],      (6.3.4)
where the supremum is taken over all solutions to the system (6.3.2) with y(·) ∈ L2[jT, (j+1)T], u(t) ≡ u0, and initial condition x̂(jT) = x0. Also, consider the following dynamic programming equation:
V_N(x0) = 0  ∀x0 ∈ R^n,
V_j(x0) = inf_{u0∈Ω} F_j(x0, u0, V_{j+1}(·))  ∀j = N − 1, ..., 1, 0.      (6.3.5)
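The backward recursion (6.3.5) can be sketched directly once F_j is computable. In the toy below (illustrative only), the sup over y(·) ∈ L2 in (6.3.4) is replaced by a max over a few sampled disturbance values, and the continuous-time interval dynamics by a one-step map:

```python
import numpy as np

# Toy scalar stand-in for the recursion (6.3.5); all models here are
# simplified placeholders for the chapter's F_j.
omega = (-1.0, 0.0, 1.0)       # finite control set Omega
w_samples = (-0.5, 0.0, 0.5)   # crude stand-in for the worst-case y(.)
a, b, N = 1.2, 1.0, 4

def step(x, u, w):             # one-step surrogate for (6.3.2) on [jT,(j+1)T)
    return a * x + b * u + w

def stage_cost(x, u):          # surrogate for the integral of W in (6.3.4)
    return x * x + 0.1 * u * u

def F(x0, u, V_next):          # F_j(x0, u, V_{j+1}): worst case over samples
    return max(V_next(step(x0, u, w)) + stage_cost(x0, u) for w in w_samples)

def make_V():
    V = lambda x: 0.0          # terminal condition V_N = 0
    for _ in range(N):         # backward sweep j = N-1, ..., 0
        V = (lambda Vn: lambda x: min(F(x, u, Vn) for u in omega))(V)
    return V

V0 = make_V()
print(V0(0.0) < np.inf)
```

Finiteness of V0 at every state is precisely the solvability condition that appears in the theorem below; here it holds trivially because the toy costs are bounded on bounded sets.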
Note that V_j(x0) may be equal to +∞ for some j, x0. Within this framework, the equation (6.3.5) always has a solution. We now present the main result of this chapter.

Theorem 6.3.1 Consider the uncertain system (6.2.1), (6.2.2) and suppose that Assumption 6.1 holds. Let Ω ⊂ R^h be a given nonempty set. Then the following statements are equivalent:
(i) The uncertain system (6.2.1), (6.2.2) is robustly output feedback controllable on [0, NT] via synchronous controller switching with the control set Ω.
(ii) The solution P(·) to the Riccati differential equation (6.3.1) with initial condition P(0) = X0^{-1} is defined and positive-definite on [0, NT], and the dynamic programming equation (6.3.5) has a solution V_j(x0) such that V_0(x0) < +∞ for all x0 ∈ R^n.
Moreover, suppose that condition (ii) holds. Then, for any ν > 0, we can choose a sequence of functions {u_j^ν(x0)}_{j=0}^{N−1} such that u_j^ν(x0) ∈ Ω for all j = 0, 1, ..., N − 1 and x0 ∈ R^n and

V_j(x0) > F_j(x0, u_j^ν(x0), V_{j+1}(·)) − ν  ∀j = N − 1, ..., 1, 0, ∀x0 ∈ R^n.      (6.3.6)
Furthermore, consider a controller U_a[·] of the form (6.2.3) with

u(jT) = u_j^ν(x̂(jT)),      (6.3.7)

where x̂(·) is the solution to the equation (6.3.2) with initial condition x̂(0) = a. Then, any such controller is bounding for the uncertain system (6.2.1), (6.2.2).
To prove this theorem, we will use the following two lemmas.

Lemma 6.3.2 Consider the uncertain system (6.2.1), (6.2.2) and suppose that Assumption 6.1 holds. Let s > 0 be a given time. Suppose that there exist a vector a ∈ R^n and vector functions u0(·) and y0(·) defined on [0, s] such that the set Z_s[a, d, u0(·), y0(·)] is bounded for all d > 0. Then, the solution X(·) to the Riccati differential equation

−Ẋ(t) = X(t)A(t) + A(t)'X(t) + X(t)B1(t)Q(t)^{-1}B1(t)'X(t) + K(t)'K(t) − C(t)'R(t)C(t)      (6.3.8)

with initial condition X(0) = X0 is defined on [0, s] and X(s) > 0.
Proof of Lemm a 6.3.2
Suppose that the condition of the lemma holds and let x_s ∈ R^n be a given vector. We have by definition of Z_s[a, d, u0(·), y0(·)] that x_s ∈ Z_s[a, d, u0(·), y0(·)] if and only if there exist vector functions x(·), w(·), and v(·) satisfying equation (6.2.1) and such that the constraint (6.2.2) holds, and

y0(t) = C(t)x(t) + v(t)      (6.3.9)

for all t ∈ [0, s]. Substitution of (6.3.9) into (6.2.2) implies that x_s ∈ Z_s[a, d, u0(·), y0(·)] if and only if there exists an uncertainty input w(·) ∈ L2[0, s] such that

J[x_s, w(·)] < d,      (6.3.10)
where J[x_s, w(·)] is defined by

J[x_s, w(·)] ≜ (x(0) − a)'X0(x(0) − a) + ∫_0^s ( w(t)'Q(t)w(t) − ||K(t)x(t) + G(t)u0(t)||² + (y0(t) − C(t)x(t))'R(t)(y0(t) − C(t)x(t)) ) dt      (6.3.11)

and x(·) is the solution to (6.2.1) with uncertainty input w(·) and boundary condition x(s) = x_s. Note that the boundary condition is imposed on x(s) rather than x(0) because x(0) enters into the constraint (6.2.2) and hence the cost (6.3.11). Thus x(0) must be treated as a free variable. We now consider the functional J0[x_s, w(·)] = J[x_s, w(·)] with a = 0, u0(·) ≡ 0, and y0(·) ≡ 0. Then J0[x_s, w(·)] is a homogeneous quadratic functional with a terminal cost term. Also, note that the quantity J[x_s, w(·)] will be a quadratic function of [x_s, w(·), a, u0(·), y0(·)]. In particular, for the pair [u0(·), y0(·)] given earlier, and for the given a, J[x_s, w(·)] will be a nonhomogeneous quadratic functional of [x_s, w(·)]. We now prove that the homogeneous part of this quadratic functional must be positive, that is,

J0[x_s, w(·)] > 0      (6.3.12)

for all x_s ≠ 0 and for all w(·) ∈ L2[0, s]. We prove this claim by contradiction. Assume that there exists a vector x_s^0 ≠ 0 and a vector function w0(·) ∈ L2[0, s] such that J0[x_s^0, w0(·)] < 0.
Consider the function

q(c) ≜ J[c·x_s^0, c·w0(·)]  ∀c ∈ R,      (6.3.13)
where J[·,·] is defined by (6.3.11). Then, the function q(c) is quadratic in c. Hence,

q(c) = a0·c² + a1·c + a2,      (6.3.14)

where a0, a1, and a2 are constants. Moreover, it is clear that
a0 = J0[x_s^0, w0(·)]. Therefore, condition (6.3.13) implies that a0 < 0. Furthermore, the set Z_s[a, d, u0(·), y0(·)] is bounded for all d > 0. Hence, for any d > 0, there exists c0 > 0 such that, for any c with |c| > c0, the vector c·x_s^0 does not belong to Z_s[a, d, u0(·), y0(·)]. This implies that for any d > 0 there exists c0 > 0 such that, for any c with |c| > c0, we have q(c) > d. However, this is impossible for any quadratic function (6.3.14) with a0 < 0. Therefore, condition (6.3.12) holds. Hence,

inf_{w(·)∈L2[0,s]} J0[x_s, w(·)] ≥ 0      (6.3.15)

for all x_s ∈ R^n. The optimization problem (6.3.15) subject to the constraint defined by the system (6.2.1) is a linear quadratic optimal control problem in which time is reversed. Time is reversed in this optimal control problem because the terminal state x(s) = x_s is fixed and the initial state x(0) is free. In this linear quadratic optimal control problem, a sign-indefinite quadratic cost function is being considered. Using a known result from linear quadratic optimal control theory, we conclude that condition (6.3.15) implies that the solution X(·) to the Riccati equation (6.3.8) with initial condition X(0) = X0 is defined on [0, s] and satisfies X(s) ≥ 0 (e.g., see page 23 of [29]). We now prove that X(s) > 0. Indeed, if the inequality X(s) > 0 does not hold, then there exists a vector x_s^0 ≠ 0 such that x_s^0'X(s)x_s^0 ≤ 0. Then the infimum in the problem (6.3.15) with x_s = x_s^0 is achieved at some vector function w0(·) ∈ L2[0, s] (e.g., see [29]) and

inf_{w(·)∈L2[0,s]} J0[x_s^0, w(·)] = J0[x_s^0, w0(·)] = x_s^0'X(s)x_s^0 ≤ 0.

However, this contradicts (6.3.12). This completes the proof of this lemma. □

Lemma 6.3.3 Consider the uncertain system (6.2.1), (6.2.2) and suppose that Assumption 6.1 holds. Let y0(·) and u0(·) be given vector functions, a ∈ R^n be a given vector, d > 0 be a given constant, and x_s ∈ R^n be a given vector. Suppose that the solution P(·) to the Riccati equation (6.3.1) with initial condition P(0) = X0^{-1} is defined and positive-definite on the interval [0, s]. Then the following two statements are equivalent:
(i) The vector x_s does not belong to the set Z_s[a, d, u0(·), y0(·)].
(ii) The inequality

(x_s − x̂(s))'P(s)^{-1}(x_s − x̂(s)) ≥ d + ∫_0^s W(x̂(t), u0(t), y0(t)) dt      (6.3.16)

holds for the cost function (6.3.3) and the solution x̂(·) to the equation (6.3.2) with u(·) = u0(·), y(·) = y0(·), and initial condition x̂(0) = a.

Proof of Lemma 6.3.3
We have by definition of Z_s[a, d, u0(·), y0(·)] that x_s ∈ Z_s[a, d, u0(·), y0(·)] if and only if there exist vector functions x(·), w(·), and v(·) satisfying equations (6.2.1) and (6.3.9) and such that the constraint (6.2.2) holds. Substitution of (6.3.9) into (6.2.2) implies that x_s ∈ Z_s[a, d, u0(·), y0(·)] if and only if there exists an uncertainty input w(·) ∈ L2[0, s] such that the inequality (6.3.10) holds, where J[x_s, w(·)] is defined by (6.3.11) and x(·) is the solution to (6.2.1) with uncertainty input w(·) and boundary condition x(s) = x_s. Now, the statement of this lemma immediately follows from Lemma 4.3.3. This completes the proof of the lemma. □
Proof of Theorem 6.3.1
(i) ⇒ (ii) Suppose that condition (i) holds. We first prove that the solution P(·) to the Riccati equation (6.3.1) with initial condition P(0) = X0^{-1} is defined and positive-definite on [0, NT]. Let a ∈ R^n be any given vector, and let U_a[·] be a bounding controller for the uncertain system (6.2.1), (6.2.2). Also, let [u0(·), y0(·)] be any output-input pair of the controller U_a[·]; that is, u0(·) = U_a[y0(·)]. Then, we have

Z_s[a, d, u0(·), y0(·)] ⊂ X_s[a, d, U_a[·]]  ∀d > 0, ∀s ∈ [0, NT].
Hence, the set Z_s[a, d, u0(·), y0(·)] is bounded for all d > 0 and s ∈ [0, NT]. Now, Lemma 6.3.2 implies that the solution X(·) to the Riccati equation (6.3.8) with initial condition X(0) = X0 is defined and positive-definite on [0, NT]. From this, it follows that the required solution to the Riccati equation (6.3.1) is given by P(·) = X(·)^{-1}. Furthermore, Lemma 6.3.3 implies that
Z_s[a, d, u0(·), y0(·)] = { x_s ∈ R^n : (x_s − x̂(s))'P(s)^{-1}(x_s − x̂(s)) < d + ∫_0^s W(x̂(t), u0(t), y0(t)) dt },      (6.3.17)
where x̂(·) is the solution to (6.3.2) with x̂(0) = a, y(t) = y0(t), and u(t) = u0(t) = U_a[y0(·)]. Now, let U_Ω be the class of all controllers of the form (6.2.3). Introduce for any j = 0, 1, ..., N the function
V_j(x0) ≜ inf_{u(·)∈U_Ω} sup_{y(·)∈L2[jT,NT]} ∫_{jT}^{NT} W(x̂(t), u(t), y(t)) dt,      (6.3.18)
where the supremum is taken over all solutions to the system (6.3.2) with initial condition x̂(jT) = x0. Then, because the set Z_NT[a, d, u0(·), y0(·)] is bounded, it follows from the description (6.3.17) that

sup_{y0(·)∈L2[0,NT]} ∫_0^{NT} W(x̂(t), u0(t), y0(t)) dt < +∞,

where the supremum is taken over all solutions of (6.3.2) with x̂(0) = a, y(t) = y0(t), and u(t) = u0(t) = U_a[y0(·)]. From this and (6.3.18), we have V_0(a) < +∞ for all a ∈ R^n.
Furthermore, according to the theory of dynamic programming (e.g., see [17]), V_j(·) satisfies the equations (6.3.5). This completes the proof of this part of the theorem.
(ii) ⇒ (i) Suppose that condition (ii) holds, and let ν > 0 be given. For any a ∈ R^n, we will consider a corresponding control of the form (6.2.3), (6.3.7). Then, (6.3.6) implies that for the system (6.3.2) with x̂(0) = a and the control input u(·) defined by (6.2.3), (6.3.7), we have

∫_0^{NT} W(x̂(t), u(t), y(t)) dt < V_0(x̂(0)) − V_N(x̂(NT)) + Nν = V_0(a) + Nν.
Hence, Lemma 6.3.3 implies that

X_s[a, d, U_a[·]] ⊂ { x_s ∈ R^n : (x_s − x̂(s))'P(s)^{-1}(x_s − x̂(s)) < d + V_0(a) + Nν },

where x̂(·) is the solution to (6.3.2) with x̂(0) = a. Therefore, the set X_s[a, d, U_a[·]] is bounded for all a ∈ R^n, d > 0, and s ∈ (0, NT]. This completes the proof of the theorem. □
7 Optimal Robust State Estimation via Sensor Switching

7.1 Introduction
This chapter considers the sensor scheduling problem, which consists of estimating the state of an uncertain process based on measurements obtained by switching among a given set of noisy sensors. Classical estimation theory deals with the problem of forming an estimate of a process given measurements produced by sensors observing the process (e.g., see [4]). A standard solution is to compute the posterior density of the process state conditioned on all of the available measurements. A more difficult class of estimation problem arises in applications such as robotics, command and control systems, and networked systems, where an estimator is given dynamic control over the measurements. These sensor scheduling problems occur, for example, when a flexible or intelligent sensor is able to operate in one of several different measurement modes and the estimator can dynamically switch the sensor mode. Alternatively, several sensors may be remotely linked to the estimator via a low-bandwidth communication channel, and only one sensor can send measurement data during any measurement interval. Again the estimator can dynamically select which sensor uses the channel. Finally, sensor scheduling problems arise when measurements from a large number of sensors are available to the estimator but the computational power is such that only data from a small selection of the sensors can be processed at any given time, hence forcing the estimator to dynamically select which sensor data are important for the task at hand. Sensor scheduling has been addressed for stochastic systems in [13, 76, 96],
where it is assumed that the process is generated by a known linear system with Gaussian input noise. It is shown that the optimal sensor schedule can be computed a priori and that this schedule is independent of the observed data. In particular, a sufficient statistic for a linear zero-mean Gaussian process with linear sensors and a minimum variance estimation objective is given by the estimation error covariance matrix, which can be determined by the solution to a Riccati differential equation. This matrix depends on the sequence of sensors used but is independent of the actual observed measurements. Hence, for any given sequence of sensors, the estimation error covariance can be determined before the experiment has commenced. As a consequence, the optimal sensor sequence can be determined a priori and is given by the sequence that minimizes, under some measure (such as the trace of the error covariance matrix at the final time), the precomputable solution to a Riccati differential equation. In practice, however, it often occurs that the system model is not precisely known and standard stochastic models cannot be readily applied. Many of the recent advances in the area of robust control system design assume that the system to be controlled is modeled as an uncertain system (e.g., see [30]). In this chapter, we follow the approach of Chapters 5 and 6, where the uncertainties are modeled by unknown functions that satisfy an integral quadratic constraint. The problems of robust state estimation and model validation for this class of uncertain systems were originally considered in [120, 125]. The papers [120, 125] build on the deterministic interpretation of Kalman filtering presented in [19]. In this framework, the estimation problem is one of characterizing the set of possible states that could have given rise to the observed measurements. This approach to state estimation was also presented in the research monograph [93].
The results of this chapter were originally published in [110]. The remainder of the chapter is organized as follows. In Section 7.2, we introduce the concept of uniform robust observability for linear uncertain systems and give a necessary and sufficient condition for this requirement to be satisfied. This necessary and sufficient condition is given in terms of a pair of coupled Riccati differential equations. In Section 7.3, the measurement process is defined by a collection of given sensors that we call basic sensors. Without loss of generality, it is assumed that only one of the basic sensors can be used at any time. Hence, our sensor schedule is a rule for switching from one basic sensor to another. The objective is to ensure robust observability and an optimal estimate of the system state. We show that the optimal switching rule can be computed by solving a set of Riccati differential equations of the game type and a dynamic programming procedure. It is shown that for the framework considered here the optimal sensor sequence depends on the past history of measurements. This is unlike sensor scheduling problems with Gaussian noise models and mean-square estimation criteria, where the sensor schedule is independent of the observed measurements and can be computed a priori using only the
statistical structure of the measurement and process noise. Finally, in Section 7.4, we apply ideas of model predictive or finite horizon control (e.g., see (26]) to derive a method for robust sensor scheduling that is nonoptimal but implementable in real time.
7.2 Robust Observability of Uncertain Linear Systems
Consider the time-varying uncertain system
ẋ(t) = A(t)x(t) + B(t)w(t),
z(t) = K(t)x(t),
y(t) = C(t)x(t) + v(t),      (7.2.1)
where x(t) ∈ R^n is the state, w(t) ∈ R^p and v(t) ∈ R^l are the uncertainty inputs, z(t) ∈ R^q is the uncertainty output, y(t) ∈ R^l is the measured output, and A(·), B(·), K(·), and C(·) are bounded piecewise-continuous matrix functions.
System Uncertainty
The uncertainty in the preceding system is required to satisfy the following integral quadratic constraint. Let X0 = X0' > 0 be a given matrix, x0 ∈ R^n be a given vector, and d > 0 be a given constant. For a given finite time interval [0, s], we will consider the uncertainty inputs w(·) and v(·) and initial conditions x(0) such that
(x(0) − x0)'X0(x(0) − x0) + ∫_0^s (||w(t)||² + ||v(t)||²) dt < d + ∫_0^s ||z(t)||² dt.      (7.2.2)
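Constraints of the form (7.2.2) can be checked numerically for candidate uncertainty signals. The sketch below verifies that a static norm-bounded block w = Δ1·z, v = Δ2·z with the stacked norm ||[Δ1; Δ2]|| ≤ 1 satisfies the integral inequality pointwise, hence under any quadrature (dimensions and signals are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stacked block [Delta1; Delta2], scaled so its spectral norm
# is at most 1 (illustrative: w, v, z all 2-dimensional).
D = rng.standard_normal((4, 2))
D /= max(1.0, np.linalg.norm(D, 2))
D1, D2 = D[:2, :], D[2:, :]          # w = D1 z, v = D2 z

# Sampled uncertainty output z(t) on a uniform grid.
ts = np.linspace(0.0, 1.0, 1001)
Z = np.stack([np.sin(3 * ts), np.cos(2 * ts)])   # 2 x 1001 samples
W, V = D1 @ Z, D2 @ Z

# Since ||w||^2 + ||v||^2 = ||[D1; D2] z||^2 <= ||z||^2 at every t,
# the integral part of (7.2.2) holds for any quadrature rule.
lhs = np.mean(np.sum(W**2, axis=0) + np.sum(V**2, axis=0))
rhs = np.mean(np.sum(Z**2, axis=0))
print(lhs <= rhs)
```

Combined with an initial condition satisfying (x(0) − x0)'X0(x(0) − x0) < d, such signals are admissible for the uncertain system (7.2.1), (7.2.2).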
Remark
The uncertainty in the uncertain system (7.2.1), (7.2.2) can be regarded as a feedback interconnection between the nominal linear system (7.2.1) and an uncertainty block that takes the uncertainty output z and produces the uncertainty inputs w and v. This uncertainty block can be described by equations of the form
[ w(t) ; v(t) ] = φ(t, z(·)).
This system is represented in the block diagram shown in Figure 7.2.1.

FIGURE 7.2.1. Uncertain linear system.
Notation
Let y(t) = y0(t) be a fixed measured output of the uncertain system (7.2.1), and let the finite time interval [0, s] be given. Then, X_s[x0, y0(·)|_0^s, d] denotes the set of all possible states x(s) at time s for the uncertain system (7.2.1) with uncertainty inputs and initial conditions satisfying the constraint (7.2.2).
Definition 7.2.1 The uncertain system (7.2.1), (7.2.2) is said to be robustly observable on [0, T] if, for any vector x0 ∈ R^n, any time s ∈ (0, T], any constant d > 0, and any fixed measured output y(t) = y0(t), the set X_s[x0, y0(·)|_0^s, d] is bounded.
Our necessary and sufficient condition for robust observability involves the following Riccati differential equation:
Ṗ(t) = A(t)P(t) + P(t)A(t)' + P(t)[K(t)'K(t) − C(t)'C(t)]P(t) + B(t)B(t)'.      (7.2.3)
Also, we consider a set of state equations of the form
dx̂(t)/dt = [A(t) + P(t)(K(t)'K(t) − C(t)'C(t))] x̂(t) + P(t)C(t)'y0(t).      (7.2.4)
The following theorem gives a necessary and sufficient condition for robust observability. This result is close to the main result of [125].
Theorem 7.2.2 Let X0 = X0' > 0 be a given matrix. Consider the uncertain system (7.2.1), (7.2.2). Then, the following two statements hold:
(i) The system (7.2.1), (7.2.2) is robustly observable on [0, T] if and only if the solution P(·) to the Riccati equation (7.2.3) with initial condition P(0) = X0^{-1} is defined and positive-definite on the interval [0, T].
(ii) Suppose that the system (7.2.1), (7.2.2) is robustly observable on [0, T]. Also, let s ∈ (0, T] be given, and let x0 ∈ R^n be a given vector, d > 0 be a given constant, and y0(t) be a given vector function defined on [0, s]. Then, the set X_s[x0, y0(·)|_0^s, d] is given by
X_s[x0, y0(·)|_0^s, d] = { x_s ∈ R^n : (x_s − x̂(s))'P(s)^{-1}(x_s − x̂(s)) < d + ρ_s[y0(·)] },      (7.2.5)

where

ρ_s[y0(·)] ≜ ∫_0^s [ ||K(t)x̂(t)||² − ||C(t)x̂(t) − y0(t)||² ] dt      (7.2.6)

and x̂(·) is defined by the equation (7.2.4) with initial condition x̂(0) = x0.
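Given P(s), the estimate x̂(s), and the scalar ρ_s[y0(·)], membership of a candidate state in the set (7.2.5) is a single quadratic-form test. A sketch with illustrative numbers (not from the book's examples):

```python
import numpy as np

def in_possible_state_set(x, xhat_s, P_s, d, rho_s):
    """Membership test for the set (7.2.5): x belongs to
    X_s[x0, y0(.)|_0^s, d] iff
    (x - xhat(s))' P(s)^{-1} (x - xhat(s)) < d + rho_s[y0(.)].
    """
    e = x - xhat_s
    return float(e @ np.linalg.solve(P_s, e)) < d + rho_s

# Illustrative values for P(s), xhat(s), d, and rho_s.
P_s    = np.array([[2.0, 0.0], [0.0, 0.5]])
xhat_s = np.array([1.0, -1.0])
d, rho = 1.0, 0.2

print(in_possible_state_set(np.array([1.5, -1.0]), xhat_s, P_s, d, rho))
print(in_possible_state_set(np.array([1.0,  0.0]), xhat_s, P_s, d, rho))
```

Using `np.linalg.solve` rather than forming P(s)^{-1} explicitly keeps the test numerically well behaved when P(s) is ill-conditioned.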
Proof The statement of this theorem immediately follows from Lemma 6.3.2 and Lemma 6.3.3. □
Remark
It is of interest to note that equations (7.2.3) and (7.2.4) define a state estimator that is closely related to the state estimator that occurs in the output feedback H∞ control problem (e.g., see [69, 90]).
Notation
Let λ(S) be some measure of the size of a bounded convex set S. Let M = M' > 0 be a square matrix, a ∈ R^n be a vector, and d > 0 be a number. Then,

E(M, a, d) ≜ { x ∈ R^n : (x − a)'M(x − a) < d }.      (7.2.7)
Assumptions
We suppose that the following assumptions hold.
Assumption 7.1 For all a1, a2, λ(E(M, a1, d)) = λ(E(M, a2, d)).
Assumption 7.2 If d1 > d2, then λ(E(M, a, d1)) > λ(E(M, a, d2)).
Assumption 7.3 λ(E(M, a, d)) → ∞ as d → ∞.
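One concrete choice of λ satisfying Assumptions 7.1-7.3 is the volume of the ellipsoid E(M, a, d), which depends only on M and d. A short sketch:

```python
import numpy as np
from math import gamma, pi

def ellipsoid_volume(M, d):
    """Volume of E(M, a, d) = {x : (x - a)' M (x - a) < d}.

    The volume is independent of the centre a (Assumption 7.1),
    strictly increasing in d (Assumption 7.2), and tends to infinity
    as d -> infinity (Assumption 7.3).
    """
    n = M.shape[0]
    unit_ball = pi ** (n / 2) / gamma(n / 2 + 1)   # volume of unit n-ball
    # Semi-axes are sqrt(d / eig_i(M)), so the volume scales by
    # sqrt(d^n / det M).
    return unit_ball * np.sqrt(d ** n / np.linalg.det(M))

M = np.array([[2.0, 0.0], [0.0, 0.5]])
v1, v2 = ellipsoid_volume(M, 1.0), ellipsoid_volume(M, 4.0)
print(v1 < v2)   # Assumption 7.2 for this pair of levels
```

Since det M = 1 here, E(M, a, 1) has the same area as the unit disk, which gives a quick consistency check on the formula.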
Notation
We will use the notation λ(H, d) for the number λ(E(H, a, d)), where E(H, a, d) is defined by (7.2.7) (according to Assumption 7.1, this number does not depend on a).
Definition 7.2.3 The uncertain system (7.2.1), (7.2.2) is said to be uniformly robustly observable on [0, T] if it is robustly observable and, for any vector x0 ∈ R^n and any constant d > 0, the condition

c[x0, d] ≜ sup_{y0(·)} λ(X_T[x0, y0(·)|_0^T, d]) < ∞      (7.2.8)

holds, where the supremum is taken over all fixed measured outputs y0(t).
Our goal in this section is to give an effective necessary and sufficient condition of uniform robust observability and determine the upper bound c[x0, d] from Definition 7.2.3. Our necessary and sufficient condition for uniform robust observability involves the following Riccati differential equation:
−Ẏ(t) = [A(t) + P(t)K(t)'K(t)]'Y(t) + Y(t)[A(t) + P(t)K(t)'K(t)] + K(t)'K(t) + Y(t)P(t)C(t)'C(t)P(t)Y(t).      (7.2.9)
Now we are in a position to present a necessary and sufficient condition of uniform robust observability.

Theorem 7.2.4 Let X0 = X0' > 0 be a given matrix. Consider the uncertain system (7.2.1), (7.2.2). Then, the system (7.2.1), (7.2.2) is uniformly robustly observable on [0, T] if and only if the following two statements hold:
(i) The solution P(·) to the Riccati equation (7.2.3) with initial condition P(0) = X0^{-1} is defined and positive-definite on the interval [0, T].
(ii) The solution Y(·) to the Riccati equation (7.2.9) with boundary condition Y(T) = 0 is defined and nonnegative-definite on the interval [0, T].
Furthermore, if conditions (i) and (ii) hold, then the upper bound (7.2.8) is defined by

c[x0, d] = λ(P(T)^{-1}, d + x0'Y(0)x0).      (7.2.10)
Proof
Necessity. The necessity of condition (i) immediately follows from Definition 7.2.3 and Theorem 7.2.2. Now we prove the necessity of (ii). Indeed, Theorem 7.2.2 implies that the set X_T[x0, y0(·)|_0^T, d] is defined by (7.2.5). This and the requirement (7.2.8) imply that

sup_{y0(·)∈L2[0,T]} ∫_0^T [ ||K(t)x̂(t)||² − ||C(t)x̂(t) − y0(t)||² ] dt < ∞      (7.2.11)
where the supremum is taken over all solutions to the linear system (7.2.4) with x̂(0) = x0. Using the linear substitution

y(t) = C(t)x̂(t) − y0(t),      (7.2.12)
the requirement (7.2.11) can be rewritten as

sup_{y(·)∈L2[0,T]} ∫_0^T [ ||K(t)x̂(t)||² − ||y(t)||² ] dt < ∞,      (7.2.13)
where the supremum is taken over all solutions to the linear system

dx̂(t)/dt = [A(t) + P(t)K(t)'K(t)] x̂(t) + P(t)C(t)'y(t)      (7.2.14)

with x̂(0) = x0. Furthermore, (7.2.13) immediately implies that

sup_{y(·)∈L2[t0,T]} ∫_{t0}^T [ ||K(t)x̂(t)||² − ||y(t)||² ] dt < ∞,      (7.2.15)
where the supremum is taken over all solutions to the linear system (7.2.14) with x̂(t0) = x0. Moreover, it is easy to see that

sup_{y(·)∈L2[t0,T]} ∫_{t0}^T [ ||K(t)x̂(t)||² − ||y(t)||² ] dt ≥ 0
for any x0. This and (7.2.15) immediately imply that condition (ii) holds (e.g., see [29]). This completes the proof of this part of the theorem.
Sufficiency. Assume that conditions (i) and (ii) hold. Then it follows from Theorem 7.2.2 that the set X_s[x0, y0(·)|_0^s, d] is given by (7.2.5). Furthermore, using the linear substitution (7.2.12) and standard results of the theory of linear quadratic optimal control (e.g., see [29]), we obtain that
(T [II K (t)x(t)11 2 -lI(C(t)x(t) - yo(t)) 11 2 ] dt = x~Y(O)xo,
Jo
where the supremum is taken over all solutions to the linear system (7.2.4) with X(O) = xo. Now, (7.2.5), (7.2.6), and (7.2.10) imply uniform robust observability and the equation (7.2.10). This completes the proof of the theorem. 0
7.3 Optimal Robust Sensor Scheduling
Consider the time-varying uncertain system defined on the finite interval [0, T]:

    ẋ(t) = A(t)x(t) + B(t)w(t),
    z(t) = K(t)x(t),                                                  (7.3.1)
    y*(t) = C*(t)x(t) + v*(t),
7. Optimal Robust State Estimation via Sensor Switching
where x(t) ∈ R^n is the state, w(t) ∈ R^p and v*(t) ∈ R^l are the uncertain inputs, z ∈ R^q is the uncertainty output, and y*(·) is the continuously measured output. Here, A(·), B(·), K(·) are given piecewise-continuous matrix functions. The matrix function C*(·) is defined by a particular sensor schedule used to measure the system state. Note that the dimension of C*(·) can change with time, depending on the sensors used. Let N be a given positive integer, and let

    0 = t_0 < t_1 < t_2 < ... < t_N = T

denote the permissible sensor-switching times.

Synchronous Sensor Scheduling
Suppose that we have a collection of measured outputs, which are called basic sensors:

    y^1(·) = C^1(·)x(·) + v^1(·),
    y^2(·) = C^2(·)x(·) + v^2(·),
    ...                                                               (7.3.2)
    y^k(·) = C^k(·)x(·) + v^k(·),

where C^1(·), C^2(·), ..., C^k(·) are given matrix functions. Let I_j(·) be a function that maps the set of past measurements {y*(·)|_0^{t_j}} to the set of symbols {1, 2, ..., k}. Then, for any sequence of functions {I_j}_{j=0}^{N−1}, we consider the following dynamic sensor schedule:

    ∀ j ∈ {0, 1, ..., N − 1},  y*(t) = y^{i_j}(t)  ∀ t ∈ [t_j, t_{j+1}],  where i_j = I_j(y*(·)|_0^{t_j}).   (7.3.3)
Hence, the sensor schedule is a rule for sequencing the basic sensors, and it constructs a sequence of symbols {i_j}_{j=0}^{N−1} from the past measurements. Let L denote the class of all sensor schedules of the form (7.3.2), (7.3.3). As in the previous section, the uncertainty in the system (7.3.1) is required to satisfy the following integral quadratic constraint.

System Uncertainty

Given X_0 = X_0' > 0, x_0 ∈ R^n, d > 0, and a finite time interval [0, s], s ≤ T, we consider the class of uncertain inputs {w(·), v*(·)} ∈ L_2[0, s] and initial conditions x(0) such that
    (x(0) − x_0)'X_0(x(0) − x_0) + ∫_0^s (‖w(t)‖² + ‖v*(t)‖²) dt ≤ d + ∫_0^s ‖z(t)‖² dt.   (7.3.4)
Notation Let M ∈ L be a given sensor schedule and y*(·) be the corresponding realized measured output. Then, for the finite time interval [0, s], s ≤ T, X_s[x_0, y*(·)|_0^s, d, M] is the set of all possible states x(s) at time s for the uncertain system (7.3.1) with sensor schedule M, uncertain inputs w(·) and v*(·), and initial conditions satisfying the integral quadratic constraint (7.3.4).
Definition 7.3.1 Let M ∈ L be a given sensor schedule. The system (7.3.1), (7.3.4) is said to be robustly observable with the sensor schedule M on the interval [0, T] if, for any vector x_0 ∈ R^n, any time s ∈ [0, T], any constant d > 0, and any realized measured output y*(·), the set X_s[x_0, y*(·)|_0^s, d, M] is bounded.

Let A be some measure of the size of a convex set satisfying Assumptions 7.1-7.3.
Definition 7.3.2 Let M ∈ L be a given sensor schedule. The uncertain system (7.3.1), (7.3.4) is said to be uniformly robustly observable with the sensor schedule M on [0, T] if it is robustly observable with this sensor schedule and, for any vector x_0 ∈ R^n and any constant d > 0, the condition

    c[x_0, d, M] ≜ sup_{y*(·)} A(X_T[x_0, y*(·)|_0^T, d, M]) < ∞      (7.3.5)

holds, where the supremum is taken over all fixed realized measured outputs y*(·).

Notation Let N_0 ⊂ L denote the set of all sensor schedules such that the system (7.3.1), (7.3.4) is uniformly robustly observable.
Definition 7.3.3 The uncertain system (7.3.1), (7.3.4) is said to be uniformly robustly observable via synchronous sensor switching with the basic sensors (7.3.2) on [0, T] if the set N_0 is nonempty; in other words, if there exists a sensor schedule such that the system (7.3.1), (7.3.4) is uniformly robustly observable with this schedule.

Definition 7.3.4 Assume that the uncertain system (7.3.1), (7.3.4) is uniformly robustly observable via synchronous sensor switching with the basic sensors (7.3.2) on [0, T]. Let x_0 be a given vector and d > 0 be a given number. A sensor schedule M^0 is said to be optimal for the parameters x_0 and d if

    c[x_0, d, M^0] = inf_{M∈N_0} c[x_0, d, M],

where c[x_0, d, M] is defined by (7.3.5).
Let m ≜ [i_0, i_1, ..., i_{N−1}], where 1 ≤ i_j ≤ k, be an index sequence defining a sensor schedule. Our solution to the optimal sensor-scheduling problem with continuous-time measurements involves the Riccati differential equations associated with the sequence m,

    Ṗ^m(t) = A(t)P^m(t) + P^m(t)A(t)' + P^m(t)[K(t)'K(t) − C*_m(t)'C*_m(t)]P^m(t) + B(t)B(t)',  P^m(0) = X_0^{−1},   (7.3.6)
and the following set of state estimator equations:

    dx̂(t)/dt = [A(t) + P^m(t)(K(t)'K(t) − C*_m(t)'C*_m(t))]x̂(t) + P^m(t)C*_m(t)'y*(t),  x̂(0) = x_0.   (7.3.7)

Here,

    C*_m(t) ≜ C^{i_j}(t) for t ∈ [t_j, t_{j+1}), j = 0, 1, ..., N − 1.   (7.3.8)
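For a fixed index sequence m, the schedule-dependent Riccati equation (7.3.6) can be integrated numerically segment by segment. The following sketch does this in Python; the matrices A, B, K, the two basic sensors C^1, C^2, the weight X_0, and the uniform switching grid are illustrative assumptions, not data from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data (not from the text): a 2-state plant and two basic sensors.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[0.5, 0.0]])              # uncertainty output z = K x
C = {1: np.array([[1.0, 0.0]]),         # basic sensor 1
     2: np.array([[0.0, 1.0]])}         # basic sensor 2
X0 = np.eye(2)                          # weight X_0 in the constraint (7.3.4)
T, N = 1.0, 4
t_grid = np.linspace(0.0, T, N + 1)     # switching times t_0 < t_1 < ... < t_N

def riccati_rhs(t, p_flat, Cm):
    # Right-hand side of (7.3.6): P' = AP + PA' + P(K'K - Cm'Cm)P + BB'.
    P = p_flat.reshape(2, 2)
    dP = A @ P + P @ A.T + P @ (K.T @ K - Cm.T @ Cm) @ P + B @ B.T
    return dP.ravel()

def terminal_P(m):
    """Integrate (7.3.6) over [0, T] for the index sequence m = [i_0, ..., i_{N-1}]."""
    P = np.linalg.inv(X0)               # initial condition P^m(0) = X_0^{-1}
    for j, i in enumerate(m):
        sol = solve_ivp(riccati_rhs, (t_grid[j], t_grid[j + 1]), P.ravel(),
                        args=(C[i],), rtol=1e-8, atol=1e-10)
        P = sol.y[:, -1].reshape(2, 2)
    return P

P_T = terminal_P([1, 2, 1, 2])
print(np.linalg.eigvalsh(P_T))
```

Positive-definiteness of the computed P^m(·) along the way corresponds to condition (i) of Theorem 7.2.4 restricted to this schedule.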
Let x_0 be a vector and y*(·) be a vector function. Introduce the value

    F_j(x_0, y*(·)) ≜ ∫_{t_j}^{t_{j+1}} [‖K(t)x̂(t)‖² − ‖C*_m(t)x̂(t) − y*(t)‖²] dt,   (7.3.9)

where x̂(t) is the solution of (7.3.7) with x̂(t_j) = x_0 and C*_m(·) defined by (7.3.8). For all x_0 ∈ R^n, j = 1, 2, ..., N, and 1 ≤ i_j ≤ k, introduce the functions Y_j[x_0, i_0, i_1, ..., i_{j−1}] ∈ R^{n×n} and v_j[x_0, i_0, i_1, ..., i_{j−1}] ∈ R as solutions of the following dynamic programming procedure. First, we define

    Y_N[x_0, i_0, i_1, ..., i_{N−1}] := P^m(T)^{−1}  ∀ x_0,  m = [i_0, i_1, ..., i_{N−1}],
    v_N[x_0, i_0, i_1, ..., i_{N−1}] := 0  ∀ x_0, i_0, i_1, ..., i_{N−1}.   (7.3.10)
Note that we do not assume that the solution of the Riccati equation (7.3.6) exists on [0, T] for every m. If the solution does not exist for some m, we take P^m(T)^{−1} := ∞. Furthermore, for all x_0 ∈ R^n and j = 0, 1, ..., N − 1, let i_j(x_0) be an index for which the minimum in the minimization problem

    min_{i=1,2,...,k} sup_{y*(·)∈L_2[t_j,t_{j+1}]} A(Y_{j+1}[x̂(t_{j+1}), i_0, i_1, ..., i_{j−1}, i], d)   (7.3.11)

is achieved, where x̂(·) is the solution of (7.3.7) on [t_j, t_{j+1}] with x̂(t_j) = x_0.
Note that this index may be nonunique. Moreover, if

    sup_{y*(·)∈L_2[t_j,t_{j+1}]} A(Y_{j+1}[x̂(t_{j+1}), i_0, i_1, ..., i_{j−1}, i_j(x_0)], d) < ∞,   (7.3.12)

then there exists a matrix M_j(x_0) > 0 and a number d_j(x_0) > 0 such that the supremum in (7.3.12) is equal to A(M_j(x_0), d_j(x_0)). Now, let
"Cj [xo, io, iI,
, ij-l] := M j (xo),
Vj[x(tj+l),io,i l ,
(7.3.13)
,ij-l] := dj(xo).
The main result of this section is now given by the following theorem.

Theorem 7.3.5 Consider the uncertain system (7.3.1), (7.3.4) with the basic sensors (7.3.2). Then the following two statements are equivalent:

(i) The uncertain system (7.3.1), (7.3.4) is uniformly robustly observable via synchronous sensor switching with the basic sensors (7.3.2) on [0, T].

(ii) The dynamic programming procedure defined by (7.3.10), (7.3.11), (7.3.13) has a finite solution Y_j[x_0, i_0, ..., i_{j−1}], v_j[x_0, i_0, ..., i_{j−1}] for j = 0, 1, ..., N − 1 and all x_0 ∈ R^n.

Furthermore, if condition (ii) holds and i_j(x_0) is an index defined in the preceding dynamic programming procedure, then the sensor schedule defined by the sequence of indexes i_j(x̂(t_j)) is optimal.

Proof For all j = 0, 1, ..., N − 1, x_0 ∈ R^n, 1 ≤ i_0 ≤ k, ..., 1 ≤ i_{j−1} ≤ k, define the number s_j[x_0, i_0, i_1, ..., i_{j−1}] ≤ ∞ by

    s_j[x_0, i_0, i_1, ..., i_{j−1}] ≜ sup_{y*(·)∈L_2[t_j,T]} A(X_T[x_0, y*(·)|_0^T, d]),
where the supremum is taken over all solutions of the system (7.3.1), (7.3.4) with the sensor-switching sequence i_0, i_1, ..., i_{j−1} on [0, t_j) and x(t_j) = x_0. According to Theorem 7.2.4, the set X_T[x_0, y*(·)|_0^T, d] is always an ellipsoid of the form

    E(M, a, d) ≜ {x ∈ R^n : (x − a)'M(x − a) ≤ d},

where the matrix M is one of the finite set of matrices P^m(T)^{−1}. Hence, if s_j[x_0, i_0, i_1, ..., i_{j−1}] < ∞, then there exists a matrix Y_j[x_0, i_0, i_1, ..., i_{j−1}] and a number v_j[x_0, i_0, i_1, ..., i_{j−1}] such that

    s_j[x_0, i_0, i_1, ..., i_{j−1}] = A(E(Y_j[x_0, i_0, i_1, ..., i_{j−1}], a, v_j[x_0, i_0, i_1, ..., i_{j−1}]))
for some vector a. The dynamic programming procedure (7.3.10), (7.3.11), (7.3.13) can be derived from Bellman's principle of optimality (e.g., see [17]): An optimal policy has the property that, no matter what the previous decisions (i.e., controls) have been, the remaining decisions must constitute an optimal policy with regard to the state resulting from those previous decisions. The optimality of the sensor-switching policy defined in the theorem follows immediately from the relationships (7.3.10), (7.3.11), (7.3.13) and Bellman's principle of optimality. This completes the proof of this theorem. □
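The recursion (7.3.10), (7.3.11), (7.3.13) involves suprema over measurement realizations and is hard to carry out exactly. As a crude open-loop stand-in (a deliberate simplification, not the procedure of Theorem 7.3.5), one can enumerate all k^N index sequences, integrate (7.3.6) for each, and compare the terminal ellipsoid matrices under one fixed size measure. All data below are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data (not from the text): 2-state plant, two basic sensors.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[0.3, 0.0]])
C = {1: np.array([[1.0, 0.0]]), 2: np.array([[0.0, 1.0]])}
T, N, k = 1.0, 3, 2
t_grid = np.linspace(0.0, T, N + 1)

def terminal_P(m):
    # Integrate the Riccati equation (7.3.6) piecewise over the schedule m.
    P = np.eye(2)                               # P^m(0) = X_0^{-1} with X_0 = I
    for j, i in enumerate(m):
        rhs = lambda t, p: (A @ p.reshape(2, 2) + p.reshape(2, 2) @ A.T
                            + p.reshape(2, 2) @ (K.T @ K - C[i].T @ C[i]) @ p.reshape(2, 2)
                            + B @ B.T).ravel()
        sol = solve_ivp(rhs, (t_grid[j], t_grid[j + 1]), P.ravel(), rtol=1e-8)
        P = sol.y[:, -1].reshape(2, 2)
    return P

def size_measure(P):
    # One admissible size measure A: the trace of the ellipsoid matrix,
    # proportional to the sum of squared semi-axes of E(P^{-1}, a, d).
    return np.trace(P)

best = min(itertools.product(range(1, k + 1), repeat=N),
           key=lambda m: size_measure(terminal_P(m)))
print("best open-loop schedule:", best)
```

This brute-force search grows as k^N; the dynamic programming procedure above is precisely what avoids the combinatorial enumeration.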
7.4
Model Predictive Sensor Scheduling
Dynamic programming equations of the type derived in Section 7.3 have been the subject of much research in the field of optimal control. However, such equations are difficult to solve in realistic situations. In this section, we apply ideas of model predictive control (e.g., see [26]) to give a nonoptimal but real-time-implementable method for sensor switching.
Definition 7.4.1 Assume that the uncertain system (7.3.1), (7.3.4) is uniformly robustly observable via synchronous sensor switching with the basic sensors (7.3.2) on [0, T]. Let x_0 be a given vector and d > 0 be a given number. A sensor schedule M^0 ∈ N_0 is said to be one-step-ahead optimal for the parameters x_0 and d if, for any j = 0, 1, ..., N − 1, any realized measured output y*(·)|_0^{t_j}, and any schedule M such that M coincides with M^0 on [0, t_j], the condition

    sup_{h(·)∈H} A(X_{t_{j+1}}[x_0, h(·)|_0^{t_{j+1}}, d, M]) ≥ sup_{h(·)∈H} A(X_{t_{j+1}}[x_0, h(·)|_0^{t_{j+1}}, d, M^0])

holds, where

    H ≜ {h(·) ∈ L_2[0, t_{j+1}] : h(t) = y*(t) ∀ t ∈ [0, t_j]}.
Remark

The idea of Definition 7.4.1 is very straightforward: we wish to design a schedule such that, at any sensor-switching time t_j, the upper bound on the size of the set of all possible states X_{t_{j+1}}[x_0, y*(·)|_0^{t_{j+1}}, d, M^0] is minimal. Let j ≤ N − 1 and i_0, i_1, ..., i_{j−1} be a fixed sequence of indexes (1 ≤ i_r ≤ k), and let i = 1, 2, ..., k. The result of this section involves the following k pairs of Riccati differential equations associated with the sequence
[i_0, i_1, ..., i_{j−1}, i] and defined over the time interval [t_j, t_{j+1}]:

    Ṗ^i(t) = A(t)P^i(t) + P^i(t)A(t)' + P^i(t)[K(t)'K(t) − C^i(t)'C^i(t)]P^i(t) + B(t)B(t)',  P^i(t_j) = P^m(t_j),   (7.4.1)
    −Ẏ^i(t) = [A(t) + P^i(t)K(t)'K(t)]'Y^i(t) + Y^i(t)[A(t) + P^i(t)K(t)'K(t)] + K(t)'K(t) + Y^i(t)P^i(t)C^i(t)'C^i(t)P^i(t)Y^i(t),  Y^i(t_{j+1}) = 0.   (7.4.2)

Here, P^m(·) is the solution to the Riccati equation (7.3.6) with

    m = [i_0, i_1, ..., i_{j−1}].                                     (7.4.3)

Furthermore, introduce the values
    c^i[x_0, d] ≜ A(P^i(t_{j+1})^{−1}, d + x̂(t_j)'Y^i(t_j)x̂(t_j)),

where x̂(t) is defined by (7.3.7). If for some i the solution to at least one of the Riccati equations does not exist on the time interval [t_j, t_{j+1}], we take c^i[x_0, d] := ∞. Now we are in a position to present a method to design a one-step-ahead-optimal sensor-switching strategy.

Theorem 7.4.2 Consider the uncertain system (7.3.1), (7.3.4) with the basic sensors (7.3.2). A schedule M^0 is one-step-ahead optimal if and only if, for any j = 0, 1, ..., N − 1 and any sensor index sequence i_0, i_1, ..., i_{j−1} associated with some realized measured output y*(·)|_0^{t_j}, the following two statements hold:

(i) For i = i_j, the solution P^i(·) to the Riccati equation (7.4.1) is defined and positive-definite on the interval [t_j, t_{j+1}], and the solution Y^i(·) to the Riccati equation (7.4.2) is defined and nonnegative-definite on the interval [t_j, t_{j+1}].

(ii) The minimum

    min_{i=1,2,...,k} c^i[x_0, d]

is achieved at i = i_j.

Proof This theorem immediately follows from Theorem 7.2.4. □
Remark Note that our method to design a one-step-ahead-optimal sensor switching rule requires at each step an on-line solution to k pairs of Riccati differential equations and a simple look-up procedure to determine which of the basic sensors to use.
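A minimal sketch of this look-up step follows. For brevity it propagates only the forward equation (7.4.1) and ranks candidates by the trace of P^i(t_{j+1}); this ranking is an ad hoc simplification of the values c^i[x_0, d] (it omits the backward equation (7.4.2)), and all matrices are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data (not from the text): 2-state plant, two basic sensors.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[0.3, 0.0]])
C = {1: np.array([[1.0, 0.0]]), 2: np.array([[0.0, 1.0]])}
T, N, k = 1.0, 4, 2
t_grid = np.linspace(0.0, T, N + 1)

def step(P, i, t0, t1):
    # Advance the forward Riccati equation (7.4.1) with sensor i over [t0, t1].
    rhs = lambda t, p: (A @ p.reshape(2, 2) + p.reshape(2, 2) @ A.T
                        + p.reshape(2, 2) @ (K.T @ K - C[i].T @ C[i]) @ p.reshape(2, 2)
                        + B @ B.T).ravel()
    sol = solve_ivp(rhs, (t0, t1), P.ravel(), rtol=1e-8)
    return sol.y[:, -1].reshape(2, 2)

P, schedule = np.eye(2), []
for j in range(N):
    # Greedy one-step-ahead choice: try each basic sensor on [t_j, t_{j+1}]
    # and keep the index giving the "smallest" candidate matrix.
    cands = {i: step(P, i, t_grid[j], t_grid[j + 1]) for i in range(1, k + 1)}
    i_best = min(cands, key=lambda i: np.trace(cands[i]))
    schedule.append(i_best)
    P = cands[i_best]
print("one-step-ahead schedule:", schedule)
```

Only k Riccati integrations per step are needed, in line with the remark above.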
8 Almost Optimal Linear Quadratic Control Using Stable Switched Controllers
8.1
Introduction
This chapter is concerned with the output feedback stabilization and optimal control of a linear time-invariant system via a stable controller. It is well known that standard methods for output feedback controller design, such as the LQG method, may lead to unstable controllers even if the original plant is stable. However, many control engineering practitioners regard the use of an unstable controller as highly undesirable. This has motivated a number of authors to consider the problem of strong stabilization. This problem involves finding an output feedback controller to stabilize a system such that the controller itself is also stable. For the case in which one restricts attention to linear time-invariant controllers, a necessary and sufficient condition for strong stabilizability has been obtained in terms of a certain parity interlacing condition; see [146, 161]. This parity interlacing condition is much stronger than the conditions of stabilizability and detectability that are the conditions to simply stabilize the system (e.g., see [55]). Furthermore, the dimension of the required linear time-invariant controller may be arbitrarily large (e.g., see [133]). In [61], a discrete-time strong stabilization problem is considered in which a time-varying (periodic) controller is allowed. These results are further extended in [58]. In the case where time-varying controllers are allowed, the conditions for the existence of a suitable controller are much less restrictive than for the linear time-invariant case. Also, the controller obtained is of the same dimension as the plant. However, [61] and [58] do not consider
the question of optimality with respect to a quadratic performance index. Furthermore, the results of [61] are given only in the discrete-time case. In this chapter, we consider a problem of optimal control and stabilization via a stable controller in which the controller is no longer required to be time-invariant or finite-dimensional. We propose a method for almost optimal control of linear systems using stable switched controllers. These controllers can be described by ordinary differential equations with discontinuous right-hand sides. Note that the papers [35] and [42] also consider the issue of optimal control when a stable controller is required. However, these papers only consider the case in which the controller is required to be linear and time-invariant. The main result of this chapter shows that if an extended class of controllers is allowed, then the assumptions of stabilizability and detectability are all that are required. The controller that we construct is described by a set of state equations that are twice the dimension of the plant. Also, the state of the controller is subject to jumps. However, the implementation of such a controller is no more complicated than that of a linear time-invariant controller. The main results of this chapter were originally published in [126]. The remainder of the chapter is organized as follows. Section 8.2 contains definitions and some preliminary results. Finally, Section 8.3 presents the main results and proofs.
8.2
Optimal Control via Stable Output Feedback Controllers
Consider the linear system

    ẋ(t) = Ax(t) + Bu(t),
    y(t) = Cx(t),                                                     (8.2.1)

where x(t) ∈ R^n is the state, u(t) ∈ R^h is the control input, and y(t) ∈ R^l is the measured output. We will consider the stabilization problem for the system (8.2.1) via a linear time-varying causal output feedback controller of the form

    u(t) = U(t, y(t)|_0^t).                                           (8.2.2)
Definition 8.2.1 The linear output feedback controller (8.2.2) is said to be (bounded-input bounded-output) stable if, for any controller input y(·) such that

    sup_{t∈[0,∞)} ‖y(t)‖ < ∞,

the corresponding controller output u(·) satisfies the condition

    sup_{t∈[0,∞)} ‖u(t)‖ < ∞.
We now introduce the notion of strong stabilizability for the system (8.2.1).

Definition 8.2.2 The linear system (8.2.1) is said to be strongly stabilizable if there exists a stable linear output feedback controller of the form (8.2.2) such that the following condition holds: for any initial condition x(0) = x_0, the closed-loop system (8.2.1), (8.2.2) has a unique solution [x(·), u(·)] that is defined on [0, ∞) and [x(·), u(·)] ∈ L_2[0, ∞).

Remark

It follows from the preceding definition that if the system (8.2.1) is strongly stabilizable, then the corresponding closed-loop system (8.2.1), (8.3.1) will have the property that x(t) → 0 as t → ∞. Indeed, because [x(·), u(·)] ∈ L_2[0, ∞), we can conclude from (8.2.1) that ẋ(·) ∈ L_2[0, ∞). However, using the fact that x(·) ∈ L_2[0, ∞) and ẋ(·) ∈ L_2[0, ∞), it now follows that x(t) → 0 as t → ∞.
Definition 8.2.3 The pair (A, B) is said to be stabilizable if for any x_0 ∈ R^n there exists a control input u_0(·) ∈ L_2[0, ∞) such that, for the solution x(·) to the equation (8.2.1) with u(·) = u_0(·) and with initial condition x(0) = x_0, we have x(·) ∈ L_2[0, ∞).
We will use the following lemma, which follows immediately from standard results in linear systems theory (e.g., see [55]).

Lemma 8.2.4 The pair (A, B) is stabilizable if and only if there exists a matrix K such that the matrix A + BK is stable.

Now, let Q = Q' ≥ 0 and R = R' > 0 be given matrices and t_0 ≥ 0 be a given time. Consider the following linear quadratic optimal control problem for the system (8.2.1) with initial condition x(t_0) = x_0, where the cost functional

    J(t_0, x_0, u(·)) ≜ ∫_{t_0}^∞ [x(t)'Qx(t) + u(t)'Ru(t)] dt        (8.2.3)

is to be minimized. It is well known (e.g., see [149]) that if the pair (A, B) is stabilizable, then the minimum in this optimal control problem is achieved for all x_0 and t_0 if and only if there exists a solution X = X' to the algebraic Riccati equation

    A'X + XA − XBR^{−1}B'X + Q = 0                                    (8.2.4)
such that the matrix A + BK is stable, where

    K ≜ −R^{−1}B'X.                                                   (8.2.5)

Such a solution is said to be a stabilizing solution of (8.2.4). Furthermore, the optimal state feedback control u_opt(·) is given by u_opt(t) = Kx(t), and the optimal value of the cost functional (8.2.3) is given by

    J_opt(x_0) = x_0'Xx_0.                                            (8.2.6)
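Numerically, the stabilizing solution of (8.2.4) and the gain (8.2.5) can be obtained from a standard Riccati solver; the sketch below uses SciPy's solve_continuous_are on illustrative matrices (the plant data are invented for the example).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data (not from the text): an unstable 2-state plant.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                     # Q = Q' >= 0
R = np.array([[1.0]])             # R = R' > 0

# Stabilizing solution of the algebraic Riccati equation (8.2.4).
X = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ X)  # gain (8.2.5): K = -R^{-1} B' X

# A + BK must be stable, and the optimal cost from x0 is x0' X x0, as in (8.2.6).
print(np.linalg.eigvals(A + B @ K).real)
x0 = np.array([1.0, 0.0])
print("optimal cost:", x0 @ X @ x0)
```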
8.3
Construction of Almost Optimal Stable Switched Controller
In this section, we present the main result of the current chapter.
Assumption We make the following assumption.

Assumption 8.1 The pair (A, C) is detectable (i.e., there exists a matrix H such that the matrix A + HC is stable).
Notation Let g(t) be a given vector function that is right-continuous and may be left-discontinuous. Then g(t_0^−) denotes the value of g(t) just before t_0; that is,

    g(t_0^−) ≜ lim_{ε>0, ε→0} g(t_0 − ε),

assuming that the limit exists.
Switched Controller

Let T > 0 be a given time. Also, let H be a matrix such that the matrix A + HC is stable and K be a matrix such that the matrix A + BK is stable. (Lemma 8.2.4 implies that if the pair (A, B) is stabilizable, then such a matrix K exists.) We consider the following state equations that contain jumps in their solutions:

    ẋ_c(t) = A_c x_c(t) + B_c y(t) for t ≠ jT,  x_c(0) = 0,
    x_c(jT) = D_c x_c(jT^−) for j = 1, 2, 3, ...,                     (8.3.1)
    u(t) = C_c x_c(t) ∀ t,
where x_c(t) ∈ R^{2n},

    A_c ≜ [ A + BK    0
            BK        A + HC ],    B_c ≜ [  0
                                           −H ],
                                                                      (8.3.2)
    C_c ≜ [ K   0 ],    D_c ≜ [ 0   I
                                0   I ].

Because A + BK and A + HC are stable matrices, we have that A_c is stable, and hence there exists a time T_0 > 0 such that

    ‖e^{A_c T}‖ < 1/√2                                                (8.3.3)

for all T ≥ T_0.

Remark
The motivation for the preceding controller structure is as follows. One approach to the optimal control of the system is to apply an open-loop linear quadratic optimal controller. However, this does not work in the case of an unstable open-loop system. To overcome this difficulty, we can incorporate an observer in parallel with this open-loop controller. At periodic intervals, the open-loop controller state can then be reset to the observer state. This overcomes the instability problem because the observer state converges exponentially to the true state. This parallel combination of an open-loop LQ controller and an observer leads to the controller (8.3.1), (8.3.2). Remark
Although the controller equations (8.3.1), (8.3.2) do not define a finite-dimensional linear time-invariant system, the computer implementation of (a discrete-time approximation to) such a controller would not be more complicated than that of a linear time-invariant controller.

Definition 8.3.1 Suppose that there exists a stabilizing solution to the Riccati equation (8.2.4). A linear output feedback controller of the form (8.3.1) is said to be almost optimal (against initial state error) for the linear quadratic optimal control problem (8.2.1), (8.2.3) if, given any δ > 0, there exists a constant ε > 0 such that, for any t_0 ≥ 0, the closed-loop system (8.2.1), (8.3.1) has the following property: Suppose that

    ‖x(t_0) − x_c^1(t_0)‖ ≤ ε‖x(t_0)‖ and ‖x(t_0) − x_c^2(t_0)‖ ≤ ε‖x(t_0)‖,   (8.3.4)

where we partition the controller state vector x_c(t) into two n-dimensional vectors

    x_c(t) = [ x_c^1(t)
               x_c^2(t) ].                                            (8.3.5)

Then

    |J(t_0, x(t_0), u(·)) − J_opt(x(t_0))| ≤ δ|J_opt(x(t_0))|.        (8.3.6)
We now present the main result of this chapter.

Theorem 8.3.2 Consider the system (8.2.1) and suppose that Assumption 8.1 holds. Then, the following two statements are equivalent:

(i) The system (8.2.1) is strongly stabilizable.

(ii) The pair (A, B) is stabilizable.

Furthermore, suppose that condition (ii) holds, let H be any matrix such that the matrix A + HC is stable (such a matrix exists due to Assumption 8.1), and let K be any matrix such that the matrix A + BK is stable (such a matrix exists due to Lemma 8.2.4). Also, let T > 0 be any time such that condition (8.3.3) holds. Then, the system (8.2.1) can be strongly stabilized via the linear output feedback controller (8.3.1), (8.3.2). Moreover, if there exists a stabilizing solution to the Riccati equation (8.2.4) and the matrix K is defined by (8.2.5), then the controller (8.3.1), (8.3.2) is almost optimal (against initial state error in the sense of (8.3.4)-(8.3.6)) for the linear quadratic optimal control problem (8.2.3).
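The ingredients of Theorem 8.3.2 can be computed numerically: gains K and H from standard Riccati solvers (one admissible choice, not the only one), the controller matrices from (8.3.2), and a reset period T satisfying (8.3.3) by doubling T until the norm bound holds. All plant data below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are

# Illustrative data (not from the text): an unstable 2-state plant.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
Cm = np.array([[1.0, 0.0]])
n = 2

# One admissible choice of gains: K from an LQR Riccati equation and
# H from the dual (observer) Riccati equation, so A+BK and A+HC are stable.
X = solve_continuous_are(A, B, np.eye(n), np.eye(1))
K = -B.T @ X                                    # R = I
Y = solve_continuous_are(A.T, Cm.T, np.eye(n), np.eye(1))
H = -Y @ Cm.T                                   # A + H Cm stable

# Controller matrices of (8.3.2).
Ac = np.block([[A + B @ K, np.zeros((n, n))],
               [B @ K, A + H @ Cm]])
Dc = np.block([[np.zeros((n, n)), np.eye(n)],
               [np.zeros((n, n)), np.eye(n)]])  # jump: reset x_c^1 to x_c^2

# Find a reset period T with ||exp(Ac T)|| < 1/sqrt(2), i.e. condition (8.3.3);
# since ||Dc|| = sqrt(2), the jump map Dc exp(Ac T) is then a contraction.
T = 0.5
while np.linalg.norm(expm(Ac * T), 2) >= 1.0 / np.sqrt(2.0):
    T *= 2.0
print("reset period T =", T)
print("contraction factor =", np.linalg.norm(Dc @ expm(Ac * T), 2))
```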
Proof

(i) ⇒ (ii): This statement follows from the definitions.

(ii) ⇒ (i): To prove this part of the theorem, we first establish that the controller (8.3.1), (8.3.2) is stable. For the solutions to the linear system (8.3.1) with y(·) = 0, we have that x_c(t) = F(t, τ)x_c(τ), where F(t, τ) is the transition matrix function for the linear system (8.3.1). Moreover, F(t, τ) is the solution to the following linear matrix equation that contains jumps in its solution:

    Ḟ(t, τ) = A_c F(t, τ) for t ≠ jT,  F(τ, τ) = I,
    F(jT, τ) = D_c F(jT^−, τ) for t = jT,                             (8.3.7)
where j is an integer such that jT > τ. It is clear that if there is an integer j such that jT ≤ τ ≤ t < (j + 1)T, then

    F(t, τ) = e^{A_c(t−τ)}.                                           (8.3.8)

Furthermore, if the integers j_1 and j_2 are such that j_1 < j_2, then

    F(j_2T, j_1T) = [D_c e^{A_c T}]^{(j_2−j_1)}.                      (8.3.9)

Suppose that j_1T ≤ τ < (j_1 + 1)T and j_2T ≤ t < (j_2 + 1)T, where τ ≤ t and j_1 < j_2. Then,

    F(t, τ) = e^{A_c(t−j_2T)} [D_c e^{A_c T}]^{(j_2−j_1−1)} D_c e^{A_c((j_1+1)T−τ)}.   (8.3.10)
Furthermore, from this, (8.3.8), and (8.3.9), we have that

    ‖F(t, τ)‖ ≤ c_1c_2 a^{(j_2−j_1)},

where

    c_1 = sup_{t_0∈[0,T)} ‖e^{A_c t_0}‖,  c_2 = sup_{t_0∈[0,T)} ‖e^{−A_c t_0}‖,  a = ‖D_c e^{A_c T}‖.

Because ‖D_c‖ = √2, condition (8.3.3) implies that a < 1. Now let c ≜ −(ln a)/T. Then c > 0 and, using the fact that

    j_2 − j_1 ≥ (t − τ)/T − 2,

we have

    ‖F(t, τ)‖ ≤ c_1c_2 a^{−2} e^{−c(t−τ)}.                            (8.3.11)
Furthermore, for any solution to the system (8.3.1), we have that

    u(t) = C_c ∫_0^t F(t, τ)B_c y(τ) dτ.

This and the bound (8.3.11) imply that the controller output u(·) is bounded for any bounded controller input y(·). Hence, the controller (8.3.1), (8.3.2) is bounded-input bounded-output stable. Now we prove that the closed-loop system (8.2.1), (8.3.1), (8.3.2) is stable. Indeed, let the controller state vector x_c be partitioned as in (8.3.5). Then from the relations (8.2.1), (8.3.1), and (8.3.2), we have that
    ż(t) = [A + HC]z(t),                                              (8.3.12)

where z(t) ≜ x(t) − x_c^2(t). Furthermore, the system (8.2.1) with control input defined by (8.3.1), (8.3.2) can be rewritten as

    ẋ(t) = [A + BK]x(t) + BKw(t),                                     (8.3.13)

where w(t) ≜ x_c^1(t) − x(t). From (8.2.1), (8.3.1), and (8.3.2), it follows that w(t) satisfies the following linear differential equation that contains jumps in its solution:

    ẇ(t) = Aw(t) for t ≠ jT,
    w(jT) = z(jT) for j = 1, 2, 3, ...,                               (8.3.14)

where z(·) is defined by (8.3.12). From these equations, it follows that there exists a constant c > 0 such that, given any integer j,

    ‖w(t)‖ ≤ c‖z(jT)‖                                                 (8.3.15)
for all t ∈ [jT, (j + 1)T). Hence, using the fact that the matrices A + HC and A + BK are stable, it now follows that all solutions to the equations (8.3.13), (8.3.14), (8.3.12) belong to L_2[0, ∞). In fact, the system (8.3.13), (8.3.14), (8.3.12) will be exponentially stable. Hence, the controller (8.3.1) is stabilizing. We now suppose that K is defined by (8.2.5) and consider the almost optimality of the controller (8.3.1). For the optimal state feedback controller, the closed-loop system is described by the equations

    ẋ = (A + BK)x,
    u = Kx,

with initial condition x(t_0). With the controller (8.3.1), the closed-loop system is described by the equations (8.3.13), (8.3.14), (8.3.12), and

    u = Kx + Kw,

with initial conditions x(t_0), w(t_0) = x_c^1(t_0) − x(t_0), and z(t_0) = x(t_0) − x_c^2(t_0), respectively. Now, using the exponential stability of the system (8.3.13), (8.3.14), (8.3.12) and the bound (8.3.15), it follows that, given any δ > 0, there exists an ε > 0 such that (8.3.4) implies (8.3.6). In other words, the controller (8.3.1) is almost optimal for the cost functional (8.2.3). This completes the proof of the theorem. □

Remark

It is well known that Assumption 8.1 and condition (ii) in the preceding theorem together form a necessary and sufficient condition for the system (8.2.1) to be stabilizable via output feedback control. Hence, it follows from Theorem 8.3.2 that if a system is stabilizable, then it will be strongly stabilizable via a controller of the form (8.3.1), (8.3.2). However, it should be pointed out that if a standard method such as LQG is used to stabilize the system (8.2.1), then this leads to a controller of the same order n as the original system, whereas the jump state equations (8.3.1), (8.3.2) are of order 2n.
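The closed-loop behavior described in the proof of Theorem 8.3.2 can be simulated directly: integrate the plant, the open-loop LQ state x_c^1, and the observer state x_c^2 between reset times, and apply the jump at each reset. The plant below is illustrative, and the gains are one admissible choice obtained from standard Riccati solvers.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Illustrative data (not from the text): an unstable plant with Riccati gains.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
Cm = np.array([[1.0, 0.0]])
X = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = -B.T @ X                        # A + BK stable (R = I)
Y = solve_continuous_are(A.T, Cm.T, np.eye(2), np.eye(1))
H = -Y @ Cm.T                       # A + H Cm stable

def closed_loop(t, v):
    # v = [x, xc1, xc2]: plant state, open-loop LQ state, observer state.
    x, xc1, xc2 = v[:2], v[2:4], v[4:6]
    u = K @ xc1                                        # u = Cc xc, per (8.3.1)
    dx = A @ x + B @ u
    dxc1 = (A + B @ K) @ xc1                           # open-loop LQ controller
    dxc2 = A @ xc2 + B @ u + H @ (Cm @ xc2 - Cm @ x)   # observer
    return np.concatenate([dx, dxc1, dxc2])

T, n_resets = 1.0, 20
v = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])   # x(0) arbitrary, xc(0) = 0
for j in range(n_resets):
    sol = solve_ivp(closed_loop, (j * T, (j + 1) * T), v, rtol=1e-8)
    v = sol.y[:, -1]
    v[2:4] = v[4:6]                 # jump of (8.3.1): reset x_c^1 to x_c^2(jT-)
print("plant state after", n_resets, "resets:", v[:2])
```

The plant state decays to zero even though the plant is open-loop unstable, illustrating the reset mechanism discussed in the motivation remark.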
9 Simultaneous Strong Stabilization of Linear Time-Varying Systems Using Switched Controllers
9.1
Introduction
The problem of simultaneously stabilizing a finite collection of linear timeinvariant plants with a single controller is a problem that has attracted considerable interest (e.g., see [21]). In particular, if one restricts attention to the case of linear time-invariant controllers, the problem of finding a useful necessary and sufficient condition for simultaneous stabilization has remained unsolved. The motivation for the simultaneous stabilization problem is derived from the fact that many problems of robust control with parameter uncertainty can be approximated by a simultaneous stabilization problem. Also, the fact that the simultaneous stabilization problem is tantalizingly simple to state and yet apparently very difficult to solve has attracted many researchers. One case in which the problem of simultaneous stabilization via a linear time-invariant controller can be solved is the case of two plants. In this case, the problem can be reduced to a problem of strong stabilizability (e.g., see [146]). As described in the previous chapter, the problem of strong stabilizability involves finding a stabilizing controller that is itself stable. There are conditions for the solution to the strong stabilization problem. These conditions are given in terms of a certain parity interlacing property of the plant poles and zeros; see [161]. However, even in the case of two plants, no bound can be given on the order of the required compensator (e.g., see [133], [136]). One approach to the problem of simultaneous stabilization involves the
use of time-varying periodic controllers. In [85], an approach to simultaneous stabilization is presented that involves a finite-dimensional periodic "deadbeat" controller in the discrete-time case and an infinite-dimensional periodic controller in the continuous-time case. In this chapter, we propose a new method for simultaneous stabilization of a finite collection of linear time-varying plants. Our approach is based on the use of a switched controller. This controller is similar to the controllers described in Chapter 8, and its dynamics can be described by a system of ordinary differential equations with discontinuous right-hand sides. The results of this chapter lead to an infinite-dimensional time-varying controller that simultaneously stabilizes a collection of time-varying plants. However, the implementation of this controller is quite straightforward and no more computationally demanding than the implementation of a finite-dimensional time-varying controller. An important feature of the results of this chapter is that we can handle the problem of simultaneous stabilization of a collection of time-varying plants. The simultaneous stabilization of time-varying plants cannot be easily handled using the approach of [85]. Also, the assumptions required in order to apply the results of this chapter are quite weak and hold for "almost all" collections of plants. Furthermore, the controller proposed in this chapter is a stable controller. That is, we solve a strong simultaneous stabilization problem. The main results of the chapter were originally published in the paper [127]. The remainder of the chapter is organized as follows. Section 9.2 contains definitions and the problem statement. Section 9.3 presents a method for strong simultaneous stabilization using switched controllers.
9.2
The Problem of Simultaneous Strong Stabilization
Let k be a given positive integer. Consider k linear time-varying systems

    ẋ_i(t) = A_i(t)x_i(t) + B_i(t)u(t),
    y(t) = C_i(t)x_i(t),                                              (9.2.1)

where i ∈ {1, 2, ..., k}, x_i(t) ∈ R^{n_i} is the state, u(t) ∈ R^h is the control input, y(t) ∈ R^l is the measured output, and A_i(·), B_i(·), and C_i(·) are piecewise-continuous matrix functions defined and bounded on [0, ∞). We will consider the simultaneous stabilization problem for the systems (9.2.1) with a stable (see Definition 8.2.1 from Chapter 8) time-varying causal output feedback controller of the form

    u(t) = U(t, y(t)|_0^t).                                           (9.2.2)
We now introduce the notion of simultaneous strong stabilizability for the systems (9.2.1).
Definition 9.2.1 The linear systems (9.2.1) are said to be simultaneously strongly stabilizable if there exists a stable output feedback controller of the form (9.2.2) such that the following condition holds: for any i ∈ {1, 2, ..., k} and for any initial condition x_i(0) = x_{0i}, the closed-loop system (9.2.1), (9.2.2) has a unique solution [x_i(·), u(·)] that is defined on [0, ∞) and [x_i(·), u(·)] ∈ L_2[0, ∞).

Remark It follows from the preceding definition that if the system (9.2.1) is exponentially stabilizable, then the corresponding closed-loop system (9.2.1), (9.3.2) will have the property that x_i(t) → 0 as t → ∞. Indeed, because [x_i(·), u(·)] ∈ L_2[0, ∞), we can conclude from (9.2.1) that ẋ_i(·) ∈ L_2[0, ∞). However, using the fact that x_i(·) ∈ L_2[0, ∞) and ẋ_i(·) ∈ L_2[0, ∞), it now follows that x_i(t) → 0 as t → ∞.
Definition 9.2.2 The matrix function A(·) is said to be exponentially stable if there exist constants c > 0 and ν > 0 such that

    ‖x(t)‖ ≤ ce^{−ν(t−τ)}‖x(τ)‖  ∀ t ≥ τ ≥ 0

for any solution to the linear system ẋ(t) = A(t)x(t).
Definition 9.2.3 The pair (A(·), B(·)) is said to be stabilizable if there exists a bounded piecewise-continuous matrix function K(·) such that the matrix function A(·) + B(·)K(·) is exponentially stable.

Definition 9.2.4 The pair (A(·), C(·)) is said to be detectable if there exists a bounded piecewise-continuous matrix function D(·) such that the matrix function A(·) + D(·)C(·) is exponentially stable.
9.3 A Method for Simultaneous Strong Stabilization

In this section, we present the main result of the current chapter. Introduce the matrices A(t) and B(t) as

$$
A(t) \triangleq \begin{bmatrix} A_1(t) & 0 & \cdots & 0 \\ 0 & A_2(t) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_k(t) \end{bmatrix}, \qquad
B(t) \triangleq \begin{bmatrix} B_1(t) \\ B_2(t) \\ \vdots \\ B_k(t) \end{bmatrix}. \tag{9.3.1}
$$
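The block-diagonal and stacked construction of (9.3.1) can be sketched numerically at a fixed time t. The two small plants below are illustrative stand-ins, not examples from the text.

```python
import numpy as np

def stack_systems(A_list, B_list):
    """Form the aggregate matrices of (9.3.1) at a fixed time t:
    A = diag(A_1, ..., A_k) and B obtained by stacking B_1, ..., B_k."""
    n = sum(Ai.shape[0] for Ai in A_list)
    h = B_list[0].shape[1]
    A = np.zeros((n, n))
    B = np.zeros((n, h))
    row = 0
    for Ai, Bi in zip(A_list, B_list):
        ni = Ai.shape[0]
        A[row:row + ni, row:row + ni] = Ai  # i-th diagonal block
        B[row:row + ni, :] = Bi             # i-th block of the stack
        row += ni
    return A, B

# Two illustrative plants (k = 2, n1 = 2, n2 = 1, a single input, h = 1).
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[0.5]]);                   B2 = np.array([[1.0]])
A, B = stack_systems([A1, A2], [B1, B2])
print(A.shape, B.shape)  # (3, 3) (3, 1)
```

Note that every plant shares the single control input u(t), which is why the B_i are stacked vertically rather than placed on a diagonal.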
9. Simultaneous Strong Stabilization of Linear Time-Varying Systems
Assumptions We make the following assumptions.

Assumption 9.1 For any i ∈ {1, 2, ..., k}, the pair (A_i(·), C_i(·)) is detectable.

Assumption 9.2 The pair (A(·), B(·)) is stabilizable.
Construction of Switched Controller Let D_i(·) be matrix functions such that the matrix functions A_i(·) + D_i(·)C_i(·) are exponentially stable, and let K(·) be a matrix function such that the matrix function A(·) + B(·)K(·) is exponentially stable. Now, let T > 0 be a given time and define n ≜ Σ_{i=1}^k n_i. We consider the following state equations that contain jumps in their solution:

$$
\begin{aligned}
\dot{x}_c(t) &= A_c(t)x_c(t) + B_c(t)y(t) \quad \text{for } t \ne jT, \qquad x_c(0) = 0, \\
x_c(jT) &= D_c\, x_c(jT^-) \quad \text{for } j = 1, 2, 3, \ldots, \\
u(t) &= C_c(t)x_c(t) \quad \forall t,
\end{aligned} \tag{9.3.2}
$$
where x_c(t) ∈ R^{2n} and

$$
\begin{gathered}
A_c(t) \triangleq \begin{bmatrix} A(t) + B(t)K(t) & 0 \\ B(t)K(t) & A(t) + H(t) \end{bmatrix}, \qquad
B_c(t) \triangleq \begin{bmatrix} 0 \\ -D(t) \end{bmatrix}, \qquad
C_c(t) \triangleq \begin{bmatrix} K(t) & 0 \end{bmatrix}, \qquad
D_c \triangleq \begin{bmatrix} 0 & I_n \\ 0 & I_n \end{bmatrix}, \\[4pt]
H(t) \triangleq \begin{bmatrix} D_1(t)C_1(t) & 0 & \cdots & 0 \\ 0 & D_2(t)C_2(t) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & D_k(t)C_k(t) \end{bmatrix}, \qquad
D(t) \triangleq \begin{bmatrix} D_1(t) \\ D_2(t) \\ \vdots \\ D_k(t) \end{bmatrix}.
\end{gathered} \tag{9.3.3}
$$
Furthermore, for any i ∈ {1, 2, ..., k} introduce the (n × n_i) matrix Q_i(t) ≜ −D(t)C_i(t). Also, let P_1(t), P_2(t), ..., P_k(t) be (n × n) matrices defined as follows:

$$
\begin{aligned}
P_1(t) &\triangleq \begin{bmatrix} Q_1(t) & 0_{n \times n_2} & \cdots & 0_{n \times n_k} \end{bmatrix}, \\
P_2(t) &\triangleq \begin{bmatrix} 0_{n \times n_1} & Q_2(t) & 0_{n \times n_3} & \cdots & 0_{n \times n_k} \end{bmatrix}, \\
&\;\;\vdots \\
P_k(t) &\triangleq \begin{bmatrix} 0_{n \times n_1} & 0_{n \times n_2} & \cdots & 0_{n \times n_{k-1}} & Q_k(t) \end{bmatrix},
\end{aligned}
$$

where 0_{n×n_i} denotes the zero matrix of dimension n × n_i. Also, let

$$
G_i(t) \triangleq \begin{bmatrix} A(t) + B(t)K(t) & 0 \\ B(t)K(t) + P_i(t) & A(t) + H(t) \end{bmatrix}. \tag{9.3.4}
$$
Note that, because the A_i(·) + D_i(·)C_i(·) are exponentially stable matrix functions, A(·) + H(·) is an exponentially stable matrix function. Furthermore, let F_0(·, ·) be the state transition matrix for the linear system ẋ_c = A_c(t)x_c(t); that is, F_0(·, ·) is the solution to the matrix equation

$$
\dot{F}_0(t, \tau) = A_c(t)F_0(t, \tau); \qquad F_0(\tau, \tau) = I. \tag{9.3.5}
$$

Also, let F_i(·, ·) be the state transition matrix for the linear system ẋ_c = G_i(t)x_c(t); that is, F_i(·, ·) is the solution to the matrix equation

$$
\dot{F}_i(t, \tau) = G_i(t)F_i(t, \tau); \qquad F_i(\tau, \tau) = I \quad \text{for } i = 1, 2, \ldots, k.
$$

Then, because A(·) + B(·)K(·) and A(·) + H(·) are exponentially stable matrix functions, the matrix functions A_c(·), G_1(·), G_2(·), ..., G_k(·) are exponentially stable, and hence there exists a time T > 0 such that

$$
\begin{aligned}
a_0 &\triangleq \sup_{j = 0, 1, 2, \ldots} \|F_0((j+1)T, jT)\| < \frac{1}{\sqrt{2}}, \\
a_1 &\triangleq \sup_{j = 0, 1, 2, \ldots} \|F_1((j+1)T, jT)\| < \frac{1}{\sqrt{2}}, \\
&\;\;\vdots \\
a_k &\triangleq \sup_{j = 0, 1, 2, \ldots} \|F_k((j+1)T, jT)\| < \frac{1}{\sqrt{2}}.
\end{aligned} \tag{9.3.6}
$$
Remark Although the controller equations (9.3.2), (9.3.3) do not define a finite-dimensional linear system, the computer implementation of (a discrete-time approximation to) such a controller would be no more complicated than that of a linear time-varying controller. The main result of this chapter is now summarized in the following theorem.
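The remark above can be illustrated with a forward-Euler sketch: between the reset instants jT the controller state obeys an ordinary linear ODE, and at t = jT it is simply multiplied by D_c. All matrices here are small illustrative stand-ins (with n = 1), not the chapter's constructions.

```python
import numpy as np

# Forward-Euler implementation sketch of the jump controller (9.3.2).
dt, T, t_end = 1e-3, 1.0, 5.0
n = 1
Ac = np.array([[-1.0, 0.0],
               [0.5, -2.0]])                    # Hurwitz stand-in for Ac(t)
Bc = np.array([[0.0], [-1.0]])                  # stand-in for Bc(t)
Cc = np.array([[1.0, 0.0]])                     # stand-in for Cc(t)
Dc = np.block([[np.zeros((n, n)), np.eye(n)],
               [np.zeros((n, n)), np.eye(n)]])  # reset matrix, norm sqrt(2)

xc = np.zeros((2 * n, 1))
steps_per_window = int(round(T / dt))
for k in range(int(round(t_end / dt))):
    if k > 0 and k % steps_per_window == 0:
        xc = Dc @ xc                            # jump: xc(jT) = Dc xc(jT-)
    y = np.array([[np.sin(k * dt)]])            # some bounded measured output
    xc = xc + dt * (Ac @ xc + Bc @ y)           # Euler step between jumps
u = (Cc @ xc).item()
print(u)
```

The reset is a single matrix multiply per period T, so the per-step cost is indeed that of an ordinary linear time-varying controller.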
Theorem 9.3.1 Consider the systems (9.2.1), and suppose that Assumptions 9.1 and 9.2 hold. Let D_i(·) be any matrix functions such that the matrix functions A_i(·) + D_i(·)C_i(·) are exponentially stable (such matrix functions exist due to Assumption 9.1), and let K(·) be any matrix function such that the matrix function A(·) + B(·)K(·) is exponentially stable (such a matrix function exists due to Assumption 9.2). Also, let T > 0 be any time such that condition (9.3.6) holds. Then the systems (9.2.1) can be simultaneously strongly stabilized via the output feedback controller (9.3.2), (9.3.3).
Proof To prove this theorem, we first establish that the controller (9.3.2), (9.3.3) is stable. For the solutions to the linear system (9.3.2) with y(·) ≡ 0, we have that x_c(t) = F(t, τ)x_c(τ), where F(t, τ) is the state transition matrix for the linear system (9.3.2). F(t, τ) is the solution to the following linear matrix equation that contains jumps in its solution:

$$
\begin{aligned}
\dot{F}(t, \tau) &= A_c(t)F(t, \tau) \quad \text{for } t \ne jT; \qquad F(\tau, \tau) = I, \\
F(jT, \tau) &= D_c\, F(jT^-, \tau) \quad \text{for } t = jT, \text{ where } j \text{ is an integer such that } jT > \tau.
\end{aligned} \tag{9.3.7}
$$

It is clear that if there is an integer j such that jT ≤ τ ≤ t < (j+1)T, then

$$
F(t, \tau) = F_0(t, \tau), \tag{9.3.8}
$$

where F_0(·, ·) is the solution of (9.3.5). Furthermore, if the integers j_1 and j_2 are such that j_1 < j_2, then

$$
F(j_2T, j_1T) = D_c F_0(j_2T, (j_2-1)T) \times D_c F_0((j_2-1)T, (j_2-2)T) \times \cdots \times D_c F_0((j_1+1)T, j_1T). \tag{9.3.9}
$$

Suppose that j_1T ≤ τ < (j_1+1)T and j_2T ≤ t < (j_2+1)T, where τ < t and j_1 < j_2. Then

$$
F(t, \tau) = F_0(t, j_2T)\, F(j_2T, j_1T)\, F_0(\tau, j_1T)^{-1}. \tag{9.3.10}
$$

Furthermore, using (9.3.10), (9.3.8), and (9.3.9), we have that

$$
\|F(t, \tau)\| \le c_1 c_2 \lambda_0^{j_2 - j_1},
$$

where

$$
c_1 = \sup_{j = 0, 1, \ldots;\; t_0 \in [0, T)} \|F_0(jT + t_0, jT)\|, \qquad
c_2 = \sup_{j = 0, 1, \ldots;\; t_0 \in [0, T)} \|F_0(jT + t_0, jT)^{-1}\|,
$$
and λ_0 = ‖D_c‖a_0. Because ‖D_c‖ = √2, condition (9.3.6) implies that λ_0 < 1. Now, let c ≜ −ln λ_0. Then c > 0 and, using the fact that j_2 − j_1 > (t − τ)/T − 2, we have

$$
\|F(t, \tau)\| \le \frac{c_1 c_2}{\lambda_0^2}\, e^{-c(t-\tau)/T}. \tag{9.3.11}
$$

Furthermore, for any solution to the system (9.3.2), we have that

$$
u(t) = C_c(t) \int_0^t F(t, \tau)\, B_c(\tau)\, y(\tau)\, d\tau.
$$
This and the bound (9.3.11) imply that the controller output u(·) is bounded for any bounded controller input y(·). Hence, the controller (9.3.2), (9.3.3) is bounded-input bounded-output stable.

Now we prove that for any i ∈ {1, 2, ..., k}, the closed-loop system (9.2.1), (9.3.2), (9.3.3) is stable. Indeed, let the controller state vector x_c(t) be partitioned into two n-dimensional vectors

$$
x_c(t) \triangleq \begin{bmatrix} h(t)' & v(t)' \end{bmatrix}',
$$

and then let h(t) and v(t) be partitioned into k vectors each,

$$
h(t) \triangleq \begin{bmatrix} h_1(t)' & h_2(t)' & \cdots & h_k(t)' \end{bmatrix}', \qquad
v(t) \triangleq \begin{bmatrix} v_1(t)' & v_2(t)' & \cdots & v_k(t)' \end{bmatrix}',
$$

with dimensions n_1, n_2, ..., n_k. Then, from the relations (9.2.1), (9.3.2), and (9.3.3), we have that

$$
\dot{z}(t) = \bigl(A_i(t) + D_i(t)C_i(t)\bigr)z(t), \tag{9.3.12}
$$

where z(t) ≜ x_i(t) − v_i(t). Furthermore, let w(t) ≜ h_i(t) − x_i(t). From (9.2.1), (9.3.2), and (9.3.3), it follows that w(t) satisfies the following linear differential equation, which contains jumps in its solution:
$$
\begin{aligned}
\dot{w}(t) &= A_i(t)w(t) \quad \text{for } t \ne jT, \\
w(jT) &= -z(jT) \quad \text{for } j = 1, 2, 3, \ldots,
\end{aligned} \tag{9.3.13}
$$

where z(·) is defined by (9.3.12). From these equations, it follows that there exists a constant c > 0 such that, given any integer j,

$$
\|w(t)\| \le c\,\|z(jT)\| \quad \text{for all } t \in [jT, (j+1)T). \tag{9.3.14}
$$

Furthermore, because

$$
y(t) = C_i(t)x_i(t) = C_i(t)h_i(t) - C_i(t)w(t),
$$

the system (9.2.1), (9.3.2), (9.3.3) can now be rewritten as
$$
\begin{aligned}
\dot{x}_c(t) &= G_i(t)x_c(t) - B_c(t)C_i(t)w(t) \quad \text{for } t \ne jT, \qquad x_c(0) = 0, \\
x_c(jT) &= D_c\, x_c(jT^-) \quad \text{for } j = 1, 2, 3, \ldots,
\end{aligned} \tag{9.3.15}
$$
where w(·) is defined by (9.3.13), (9.3.12), and G_i(t) is defined by (9.3.4). Moreover, as in the preceding proof of the stability of our output feedback controller, condition (9.3.6) implies that the linear homogeneous system

$$
\begin{aligned}
\dot{x}_c(t) &= G_i(t)x_c(t) \quad \text{for } t \ne jT, \\
x_c(jT) &= D_c\, x_c(jT^-) \quad \text{for } j = 1, 2, 3, \ldots
\end{aligned}
$$

is exponentially stable. This, together with the exponential stability of the system (9.3.12) and the relation (9.3.14), implies that all solutions to the system (9.3.15), (9.3.12), (9.3.13) belong to L_2[0, ∞). Hence, using the fact that x_i(t) = z(t) + v_i(t) and u(t) = C_c(t)x_c(t), it now follows that all solutions to the equations (9.2.1), (9.3.2), (9.3.3) belong to L_2[0, ∞). In fact, the system (9.2.1), (9.3.2), (9.3.3) will be exponentially stable. Hence, the controller (9.3.2), (9.3.3) is stabilizing. This completes the proof of the theorem. □
References
[1] M. A. Aizerman and F. R. Gantmacher. Absolute Stability of Regulator Systems. Holden-Day, San Francisco, 1964. [2] R. Alur, C. Couroubetis, T. Henzinger, and P. Ho. Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems. In R.L. Grossman, A. Nerode, A.P. Ravn, and H. Rishel, editors, Hybrid Systems I. Verification and Control, pages 209-229. Springer-Verlag, Berlin, 1993. [3] R. Alur and D. L. Dill. A theory of timed automata. Theoretical Computer Science, 126:183-235, 1994. [4] B.D.a. Anderson and J.B. Moore. Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ, 1979. [5] D. V. Anosov. Stability of equilibrium positions in relay systems. Automation and Remote Control, 20:135-149,1959. [6] P. J. Antsaklis, J. A. Stiver, and M. Lemmon. Hybrid systems modeling and autonomous control systems. In R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rishel, editors, Hybrid Systems. Springer-Verlag, New York, 1993. [7] Z. Artstein. Stabilization with hybrid feedback. In R. Alur, T.A. Henzinger, and E.D. Sontag, editors, Hybrid Systems III. Verification and Control, pages 173-185. Springer-Verlag, Berlin, 1996.
138
References
(8] E. Asarin, O. Maler, and A. Pnueli. Reachability analysis of dynamical systems having piecewise-constant derivatives. Theoretical Computer Science, 138:35-65, 1995. [9] K. J. Astrom and B. Wittenmark. Computer-Controlled Systems. Prentice Hall, Upper Saddle River, NJ, 1997. (10] J.P. Aubin. Impulse and Hybrid Control Systems: A Viability Approach. Preprint, Universite Paris-Dauphine, Paris, 2000. [11] A. Back, J. Guckenheimer, and M. Myers. A dynamical simulation facility for hybrid systems. In R.L. Grossman, A. Nerode, A.P. Ravn, and H. Rishel, editors, Hybrid Systems. Springer-Verlag, New York, 1993. [12} A. Balluchi, M. Di Benedetto, C. Pinello, C. Rossi, and A. Sangovanni-Vincentelli. Hybrid control for automotive engine management: The cut-off case. In T. A. Henzinger and S. Sastry, editors, Hybrid Systems: Computation and Control. Springer-Verlag, Berlin, 1998. [13] J. S. Baras and A. Bensoussan. Optimal sensor scheduling in nonlinear filtering of diffusion processes. SIAM Journal of Control and Optimization, 27(4):786-813, 1989. (14] J. S. Baras and M. R. James. Robust output feedback control for discrete-time nonlinear systems: The finite time case. In Proceedings of the 32nd IEEE Conference on Decision and Control, pages 51-55, San Antonio, Texas, December 1993. [15] B. R. Barmish. Stabilization of uncertain systems via linear control. IEEE Transactions on Automatic Control, 28(3):848-850, 1983.
[16] T. Basar and P. Bernhard. HOC-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Birkhauser, Boston, 1991. [17] R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, 1957. [18] A. Bemporad and M. Morari. Control of systems integrating logic, dynamics, and constraints. Automatica, 35(3):407-428, 1999. [19} D. P. Bertsekas and 1. B. Rhodes. Recursive state estimation for a set-membership description of uncertainty. IEEE Transactions on A utomatic Control, 16(2):117-128, 1971.
; t. ~
.
References
139
[20] A. Beydoun, L. Y. Wang, J. Sun, and S. Sivashankar. Hybrid control of automotive powertrain systems: A case study. In T .A. Henzinger and S. Sastry, editors, Hybrid Systems: Computation and Control. Springer-Verlag, Berlin, 1998. [21] V. D. Blondel. Simultaneous Stabilization of Linear Systems. Springer-Verlag, London, 1994. [22] V. D. Blondel and J. N. Tsitsiklis. Complexity of stability and controllability of elementary hybrid systems. Automatica,35(3):479-490, 1999. [23] S. Boyd, 1. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, PA, 1994. [24] R.W. Brockett. Hybrid models for motion control systems. In H.L. Trentelman and J.C. Willems, editors, Essays in Control. Birkhauser, Boston, 1993. [25] P.E. Caines and Y.-J. Wei. Hierarchical hybrid control systems: A lattice theoretic formulation. IEEE Transactions on Automatic Control, 43(4):501-508,1998. [26] E. F. Camacho and C. Bordons. Model Predictive Control in the Process Industry. Springer-Verlag, London, 1995.
[27] C.G. Cassandros. Discrete Event Systems. Irwin and Aksen Associates, Homewood, IL, 1993. [28] C. Chase, J. Serrano, and P. Ramadge. Periodicity and chaos from switched flow systems: Contrasting examples of discretely controlled continuous systems. IEEE Transactions on Automatic Control, 38(1):70-83, 1993. [29] D. J. Clements and B. D. O. Anderson. Singular Optimal Control: The Linear-Quadratic Problem. Springer-Verlag, Berlin, 1978. [30J P. Dorato, R. Tempo, and G. Muscato. Bibliography on robust control. Automatica, 29(1):201-214, 1993. [31] J. C. Doyle. Analysis of feedback systems with structured uncertainty. lEE Proceedings Part D, 129(6):242-250, 1982. [32J A. Feuer and G. C. Goodwin. Sampling in Digital Signal Processing and Control. Birkhauser, Boston, 1996. [33] A. Feuer, G. C. Goodwin, and M. Salgado. Potential benefits of hybrid control for linear time invariant plants. In Proceedings of the American Control Conference, Albuquerque, NM, 1997.
140
References
[34] A.F. Filippov. Differential Equations with Discontinuous Righthand Sides. Kluwer Academic Publishers, 1988. [35] G. Ganesh and J. B. Pearson. H 2 -optimization with stable controllers. A utomatica, 25(4) :629-634, 1989. [36] F. R. Gantmacher and V. A. Yakubovich. Absolute stability of nonlinear control systems. In Proceedings of the 2nd USSR Congress on Theoretical and Applied Mechanics, Nauka, Moscow, 1965. [37J A. Kh. Gelig, G. A. Leonov, and V. A. Yakubovich. Stability of Nonlinear Systems with a N on- Unique Equilibrium State. N auka, Moscow, 1978. [38] S.B. Gershwin. Hierarchical flow control: A framework for scheduling and planning discrete events in manufacturing systems. Proceedings of the IEEE, 77(1):195-209, 1989. [39) A. Gollu and P.P. Varaiya. Hybrid dynamical systems. In Proceedings of the 28m IEEE Conference on Decision and Control, Tampa, 1989. [40J J. Guckenheimer. Planar hybrid systems. In P. Antsaklis, W. Kohn, A. Nerode, and S. Sastry, editors, Hybrid Systems II. Verification and Control. Springer-Verlag, Berlin, 1995. [41J B. B. Habbard. Hybrid systems: The control theory of tomorrow? SIAM News, pages 12-13, July 1995.
[42] Y. Halevi, D. S. Bernstein, and W. H. Haddad. On stable full-order and reduced-order LQG controllers. Optimal Control Applications and Methods, 12(3):163-172,1991. [43] F. Hausdorff. Der wertvorrat einer bilinearform. Mathematisches Zeitschrift, 3:314-316, 1919.
[44] T.A. Henzinger, P.-H. Ho, and H. Wong-Toi. Algorithmic analysis of nonlinear hybrid systems. IEEE Transactions on Automatic Control, 43(4):540-554, 1998. [45J J. P. Hespanha, D. Liberzon, and A. S. Morse. Logic based switching control of a nonholonomic system with parametric modeling uncertainty. Systems and Control Letters, 38(3):167-177, 1999. [46J J. P. Hespanha and A. S. Morse. Stabilization of nonholonomic integrators via logic based switching. Automatica, 35:385-393, 1999. [47] D. Hill and P. Moylan. The stability of nonlinear dissipative systems. IEEE Transactions on Automatic Control, AC-21:708-711, 1976.
~~".
.:
•• 1."
,
~.f
f.' r
References
141
[48] D. Hill and P. Moylan. Stability results for nonlinear feedback systems. Automatica, 13:377-382 , 1977. [49] D. Hill and P. Moylan. Dissipative dynamical systems: Basic inputoutput and state properties. Journal of the Franklin Institute, 309:327-357, 1980.
·t~. ·
[50] Y. C. Ho and X. Cao. Perturbation Analysis of Discrete Event Dynamic Systems. Kluwer Academic Publishers, Dordrecht, 1991.
1,
[52J C. Horn and P. J. Ramadge. Dynamics of switched arrival systems with thresholds. In Proceedings of the 32nd IEEE Conference on Decision and Control, pages 288-293, San Antonio, Texas, December 1993.
"
"
I I.'·
tt~ i.~:•·.
[51] D.J. Hoitomt , P.B. Luh, E. Nlax, and K.R. Pattipati. Scheduling jobs with simple precedence constraints on parallel machines. IEEE Control Systems Magazine, 10(2):34-40, 1990.
[53] C. Horn and P. J. Ramadge. A topological analysis of a family of dynamical systems with nonstandard chaotic and periodic behavior. International Journal of Control, 67(6):979-1020,1997.
"
[54] M. R. James and J. S. Baras. Robust H oo output feedback control for nonlinear systems. IEEE Transactions on Automatic Control, 40(6) :1007-1017, 1995. [55J T. Kailath. 1980.
I
ff 1
Linear Systems. Prentice-Hall, Englwood Cliffs, NJ,
[56J R. E. Kalman. On the general theory of control systems. In Proceedings of the First IFAC Congress, volume 1, pages 481-492, Moscow, USSR, 1961. [57] R.E. Kalman and J.E. Bertram. A unified approach to the theory of sampling systems. Journal of the Franklin Institute, 26(7):405-436, 1959. [58] P. Khargonekar, A. M. Pascoal, and R. Ravi. Strong, simultaneous, and reliable stabilization of finite dimensional linear time-varying plants. IEEE Transactions on Automatic Control, 33(12):1158-1161, 1988. [59] P. P. Khargonekar, K. M. Nagpal, and K. R. Poolla. H oo control with transients. SIAM Journal on Control and Optimization, 29(6):13731393, 1991. [60J P. P. Khargonekar, 1. R. Petersen, and K. Zhou. Robust stabilization of uncertain systems and Hoo optimal control. IEEE Transactions on Automatic Control, AC-35(3):356-361, 1990.
J~~ :.:::. :1
142
References
[61] P. P. Khargonekar, K. Poolla, and A. Tannenbaum. Robust control of linear time-invariant plants using periodic compensation. IEEE Transactions on Automatic Control, 30(11):1088-1095, 1985. [62] M. Kourjanski and P. Varaiya. Stability of hybrid systems. In R. Alur, T.A. Henzinger, and E.D. Sontag, editors, Hybrid Systems111- Verification and Control, pages 413-423. Springer-Verlag, Berlin, 1996. [63] Jvr. A. Krasnosel'skii and A. V. Pokrovskii. Systems with Hysteresis. Springer-Verlag, Berlin, 1989. [64] A. A. Krasovskii, editor. Handbook of Automatic Control Theory. Nauka, Moscow, 1987. [65] S. R. Kulkarni and P. J. Ramadge. Model and controller selection policies based on output prediction errors. IEEE Transactions on Automatic Control, 41:1594-1604, 1996. [66] M. Lemmon, J. A. Stiver, and P. J. Antsaklis. Event identification and intelligent hybrid control. In R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rishel, editors, Hybrid Systems. Springer-Verlag, New York, 1993. [67] F. L. Lewis. Optimal Control. Wiley, New York, 1986. [68] D. Liberzon and A. S. Morse. Basic problems in stability and design of switched systems. IEEE Control Systems Magazine, 19(5):59-90, 1999. [69] D. J. N. Limebeer, B. D. O. Anderson, P. P. Khargonekar, and M. Green. A game theoretic approach to Hoo control for time-varying systems. SIAM Journal on Control and Optimization, 30:262-283, 1992. [70] J. Lygeros, D.N. Godbole, and S. Sastry. Verified hybrid controllers for automated vehicles. IEEE Transactions on Automatic Control, 43(4):522-539,1998. [71] A. S. Matveev and A. V. Savkin. Reduction and decomposition of differential automata: Theory and applications. In T.A. Henzinger and S. Sastry, editors, Hybrid Systems: Computation and Control. Springer-Verlag, Berlin, 1998. [72] A. S. Matveev and A. V. Savkin. Towards a qualitative theory of differential automata. part 2: An analog of the Poincare-Bendixon theorem. 
In Proceedings of the 37th Conference on Decision and Control, Tampa, FL, 1998.
References
143
[73] A. S. Matveev and A. V. Savkin. Qualitative Theory of Hybrid Dynamical Systems. Birkhauser, Boston, 2000. (74] B. McCarragher and H. Asada. The discrete event modelling and planning of robotic assembly tasks. ASME Journal of Dynamic Systems, Measurement and Control, 117:394-400, 1995. [75] A. Megretsky and 8. Treil. Power distribution inequalities in optimization and robustness of uncertain systems. Journal of Mathematical Systems, Estimation and Control, 3(3):301-319, 1993. [76] B.M. Miller and W.J. Runggaldier. Optimization of observations: A stochastic control approach. SIAM Journal of Control and Optimization, 35(5) :1030-1052, 1997. [77] D. Mitra and 1. Mitrani. Analysis of a kanban discipline for cell coordination in production lines. Management Science, 36(12), 1990. [78] J. Moody and P. Antsaklis. Supervisory Control of Discrete-Event Systems Using Petri Nets. Kluwer Academic Publishers, Dordrecht, 1998. [79] A. 8. Morse. Control using logic-based switching. In A. Isidori, editor, Trends in Control: A European Perspective. Springer-Verlag, Berlin, 1995. [80] A. S. Morse(ed.). Control Using Logic Based Switching. SpringerVerlag, London, 1997. [81] K. M. Nagpal and P. P. Khargonekar. Filtering and smoothing in an HOC setting. IEEE Transactions on Automatic Control, AC36(2):152-166,1991. [82] K. S. Narendra and J. Balakrishnan. Adaptive control using multiple models. IEEE Transactions on Automatic Control, 42:171-187, 1997. [83] K. S. Narendra and J. H. Taylor. Frequency Domain Criteria for Absolute Stability. Academic Press, New York, 1973. [84] A. Nerode and W. Kohn. Models for hybrid systems: Automata, topologies, controllability, observability. In R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rishel, editors, Hybrid Systems. Springer-Verlag, New York, 1993. [85] A. W. Olbrot. Robust stabilization of uncertain systems by periodic feedback. International Journal of Control, 45(3):747-758, 1987. [86] J.8. Ostroff and W.M. Wonham. 
A framework for real-time discreteevent control. IEEE Transactions on Automatic Control, 35(4):386397, 1990.
144
References
(87] J. R. Perkins and P. R. Kumar. Stable, distributed, real-time scheduling of flexible manufacturing/assembly/disassembly systems. IEEE Transactions on Automatic Control, 34(2):139-148,1989. [88] 1. R. Petersen. A procedure for simultaneously stabilizing a collection of single input linear systems using nonlinear state feedback control. Automatica, 23(1):33-40, 1987. [89] 1. R. Petersen. Guaranteed cost LQG control of uncertain linear systems. lEE Proceedings, Part D, 142(2):95-102,1995. [90] 1. R. Petersen, B. D. O. Anderson, and E. A. Jonckheere. A first principles solution to the non-singular Hoc control problem. International Journal of Robust and Nonlinear Control, 1(3) :171-185, 1991.
I
r
I 1
I
II I
[91] 1. R. Petersen, M. Corless, and E. P. Ryan. A necessary and sufficient condition for quadratic finite time feedback controllability. In M. Mansour, S. Balemi, and W. Truol, editors, Robustness of Dynamic Systems with Parameter Uncertainties: Proceedings of the International Workshop on Robust Control, pages 165-173, Ascona, Switzerland, March 1992. Birkhauser, Basel, 1992. [92) 1. R. Petersen and D. C. McFarlane. Optimal guaranteed cost control and filtering for uncertain linear systems. IEEE Transactions on Automatic Control, 39(9):1971-1977,1994. [93] 1. R. Petersen and A. V. Savkin. Robust Kalman Filtering for Signals and Systems with Large Uncertainties. Birkhauser, Boston, 1999. [94] 1. R. Petersen, V. Ugrinovskii, and A. V. Savkin. Robust Control Design Using Hoc Methods. Springer-Verlag, London, 2000. [95] V. M. Popov. Hyperstability of Control Systems. Springer-Verlag, London, 1973. [96] C. Raga, P. Willet, and Y. Bar-Shalom. Censoring sensor: A low communication rate scheme for distributed detection. IEEE Transactions on Aerospace and Electronic Systems, 28(2):554-568, 1996. (97] P. J. Ramadge and W. M. Wonham. The control of discrete event systems. Proceedings of the IEEE, 77:81-98, 1989. [98] R. T. Rockafellar. Convex Analysis. Princeton, NJ, 1970.
Princeton University Press,
[99] M. A. Rotea and P. P. Khargonekar. Stabilization of uncertain systems with norm bounded uncertainty. A control Lyapunov approach. SIAM Journal of Control and Optimization, 27(6):1462-1476,1989.
1
'.
References
145
[100] A. V. Savkin. Absolute stability of nonlinear control systems with nonstationary linear part. Automation and Remote Control, 41(3):362-367, 1991. [101] A. V. Savkin. Robust output feedback constrained controllability of uncertain linear time-varying systems. Journal of Mathematical Analysis and Applications, 215:376-387, 1997. [102] A. V. Savkin. Controllability of complex switched server queueing flow networks modelled as hybrid dynamical systems. In Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, December 1998. [103] A. V. Savkin. Optimal stable real-time scheduling of a flexible manufacturing system modelled as a switched server system. In Proceedings of the 37th Conference on Decision and Control, Tampa, Florida, USA, 1998. [104] A. V. Savkin. Regularizability of complex switched server queueing networks modelled as hybrid dynamical systems. Systems and Control Letters, 35(5):291-300, 1998. [105] A. V. Savkin. A hybrid dynamical system of order n with all trajectories converging to (n - I)! limit cycles. In Proceedings of the 14th IFAC World Congress, Beijing, China, July 1999. [106] A. V. Savkin and R. J. Evans. Robust stabilizability and h-infinity control problems for hybrid dynamical systems. In Proceedings of Control 95, Melbourne, Australia, October 1995. [107] A. V. Savkin and R. J. Evans. A new approach to robust control of hybrid systems: Infinite time case. In Proceedings of the 13th IFAC World Congress, pages G:297-301, San Francisco, California, July 1996. [108] A. V. Savkin and R. J. Evans. A new approach to robust control of hybrid systems over infinite time. IEEE Transactions on Automatic Control, 43(9):1292-1296, 1998. [109] A. V. Savkin, R. J. Evans, and 1. R. Petersen. A new approach to robust control of hybrid systems. In R. Alur, T.A. Henzinger, and E.D. Sontag, editors, Hybrid Systems III. Verification and Control. Springer-Verlag, Berlin, 1996. [110] A. V. Savkin, R. J. Evans, and E. Skafidas. 
The problem of optimal robust sensor scheduling. In Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 2000.
146
References
[111] A. V. Savkin and A. S. Matveev. Cyclic linear differential automata: A simple class of hybrid dynamical systems. In Proceedings of the 37th Conference on Decision and Control, Tampa, Florida, USA, 1998. [112] A. V. Savkin and A. S. Matveev. Cyclic linear differential automata: A simple class of hybrid dynamical systems. Automatica, 36:727-734, 1999. [113] A. V. Savkin and A. S. Matveev. Existence and stability of periodic trajectories in switched server systems. In Proceedings of the 14th IFAC World Congress, Beijing, China, July 1999. [114] A. V. Savkin and A. S. Matveev. Existence and stability of periodic trajectories in switched server systems. Automatica, 36:775-779, 1999. [115] A. V. Savkin and A. S. Matveev. Globally periodic behavior of switched flow networks with a cyclic switching policy. Systems and Control Letters, 38:151-155, 1999. [116] A. V. Savkin and 1. R. Petersen. A connection between HOC! control and the absolute stabilizability of uncertain systems. Systems and Control Letters, 23(3):197-203, 1994. [117] A. V. Savkin and 1. R. Petersen. A method for robust stabilization related to the Popov stability criterion. International Journal of Control, 62(5):1105-1115,1995. [118] A. V. Savkin and 1. R. Petersen. Minimax optimal control of uncertain systems with structured uncertainty. International Journal of Robust and Nonlinear Control, 5(2):119-137, 1995. [119] A. V. Savkin and 1. R. Petersen. Nonlinear versus linear control in the absolute stabilizability of uncertain linear systems with structured uncertainty. IEEE Transactions on Automatic Control, 40(1):122127, 1995. [120] A. V. Savkin and 1. R. Petersen. Recursive state estimation for uncertain systems with an integral quadratic constraint. IEEE Transactions on Automatic Control, 40(6):1080, 1995. [121] A. V. Savkin and 1. R. Petersen. Robust control with rejection of harmonic disturbances. IEEE Transactions on Automatic Control, 40(11):1968-1971, 1995. [122] A. V. Savkin and 1. R. 
Petersen. Robust state estimation for uncertain systems with averaged integral quadratic constraints. International Journal of Control, 64(5):923-939, 1995.
References
147
A. V. Savkin and 1. R. Petersen. Structured dissipativeness and absolute stability of nonlinear systems. International Journal of Control, 62(2) :443-460, 1995. A. V. Savkin and 1. R. Petersen. An uncertainty averaging approach to optimal guaranteed cost control of uncertain systems with structured uncertainty. Automatica, 31(11):1649-1654, 1995. A. V. Savkin and 1. R. Petersen. Model validation for robust control of uncertain systems with an integral quadratic constraint. Automatica, 32(4):603-606, 1996. [126] A. V. Savkin and 1. R. Petersen. Almost optimal LQ-control using stable periodic controllers. Automatica, 34(10):1251-1254, 1998.
I 1
J
!I .1
1
i
II !
I I
.1
1
I l
I1
[127] A. V. Savkin and 1. R. Petersen. A method for simultaneous strong stabilization of linear time-varying systems. International Journal of Systems Science, 31(6):685-695, 2000. [128] A. V. Savkin, 1. R. Petersen, E. Skafidas, and R.J . Evans. Hybrid dynamical systems: Robust control synthesis problems. Systems and Control Letters, 29(2) :81-90, 1996. [129] A. V. Savkin, E. Skafidas, and R. J. Evans. Robust output feedback stabilizability via controller switching. Automatica, 35(1):6974, 1999. [130] J .S. Shamma. Robust stability analysis of time-varying systems using time-varying quadratic forms. Systems and Control Letters, 24(1):1317, 1995. (131] S. V. Shil'man. Generating Function Method in the Theory of Dynamic Systems. Nauka, Moscow, 1978. (132} E. Skafidas, R. J. Evans, A. V. Savkin, and 1. R. Petersen. Stability results for switched controller systems. A utomatica, 35(4) :553-564, 1999. [133] M. C. Smith and K. P. Sondergeld. On the order of stable compensators. Automatica, 22:127-129, 1986. [134] L. Tavernini. Differential automata and their discrete simulators. Nonlinear Analysis. Theory, Methods and Applications, 11(6):665683, 1987. [135] O. Toeplitz. Das algebraische analogen zu einen satze von fejer. Mathematisches Zeitschrift, 2:187-197, 1918. [136] O. Toker. On the order of simultaneously stabilizing compensators. IEEE Transactions on Automatic Control, 41(3):430-433, 1996.
148
References
[137] C. Tomlin, G.J. Pappas, and S. Sastry. Conflict resolution for air traffic management: A study in multiagent hybrid systems. IEEE Transactions on Automatic Control, 43(4):509-521,1998. [138] Ya. Z. Tsypkin. Relay Control Systems. Cambridge University Press, Cambridge, UK, 1984. [139] F. Uhlig. A recurring theorem about pairs of quadratic forms and extensions: A survey. Linear Algebra and Its Applications, 25:219237,1979. [140] V. 1. Utkin. Sliding Modes in Control Optimization. Springer-Verlag, Berlin, 1992. [141] A. van der Schaft and H. Schumacher. Complementarity modeling of hybrid systems. IEEE Transactions on Automatic Control, 43 (4) :483-490, 1998. [142] A. van der Schaft and H. Schumacher. An Introduction to Hybrid Dynamical Systems. Springer-Verlag, London, 1999. [143] J. H. van Schuppen. A sufficient condition for controllability of a class of hybrid systems. In T.A. Henzinger and S. Sastry, editors, Hybrid Systems: Computation and Control. Springer-Verlag, Berlin, 1998. [144] P. P. Varaiya. Smart cars on smart roads: Problems of control. IEEE Transactions on Automatic Control, 38(2):195-207, 1993. [145} M. Vidyasagar. Nonlinear Systems Analysis. Prentice Hall, Englewood Cliffs, NJ, 1978. [146} M. Vidyasagar. Control System Synthesis: A Factorization Approach. MIT Press, Cambridge, MA, 1985. [147] A. A. Voronov. Stability, Controllability, and Observability. Nauka, Moscow, 1979. [148] B. Wie and D. S. Bernstein. Benchmark problems for robust control design. In Proceedings of the 1992 American Control Conference, Chicago, 1992. [149] J. C. Willems. Least squares stationary optimal control and the algebraic Riccati equation. IEEE Transactions on Automatic Control, 16(6):621-634,1971. [150] J. C. Willems. Dissipative dynamical systems. Part I: General theory. Archive of Rational Mechanics and Analysis, 45:321-351, 1972.
· 1
· •.•.•. l••
i~:.,
,.
l
tf
References
149
[151] H. S. Witsenhausen. A class of hybrid-state continuous-time dynamic systems. IEEE Transactions on Automatic Control, 11(2):161-167, 1966. [152] L. Xie and C. E. de Souza. Robust H oo control for linear systems with norm-bounded time-varying uncertainty. IEEE Transactions on Automatic Control, 37(8):1188-1191, 1992.
f
I
I .1
I
[153J L. Xie and C. E. de Souza. H oo state estimation for linear periodic systems. IEEE Transactions on Automatic Control, 38(11):17041707, 1993.
j
I 1
1
r
I l
I Jl
[154] L. Xie, M. Fu, and C.D. de Souza. H oo control and quadratic stabilization of systems with parameter uncertainty via output feedback. IEEE Transactions on Automatic Control, 37(8):1253-1256,1992. [155] V. A. Yakubovich. Frequency conditions of absolute stability of control systems with many nonlinearities. A vtomatica i Telemekhanica, 28(6):5-30, 1967. [156] V. A. Yakubovich. 1'Iinimization of quadratic functionals under the quadratic constraints and the necessity of a frequency condition in the quadratic criterion for absolute stability of nonlinear control systems. Soviet Mathematics Doklady, 14:593-597, 1973. [157] V. A. Yakubovich. The Methods of Research of Nonlinear Control Systems, chapter 2-3. Nauka, Moscow, 1975.
[158] V. A. Yakubovich. Abstract theory of absolute stability of nonlinear systems. Vestnik Leningrad University, Series 1, 41(13):99-118, 1977.

[159] V. A. Yakubovich. Absolute stability of nonlinear systems with a periodically nonstationary linear part. Soviet Physics Doklady, 32(1):5-7, 1988.

[160] V. A. Yakubovich. Dichotomy and absolute stability of nonlinear systems with periodically nonstationary linear part. Systems and Control Letters, 11(3):221-228, 1988.
[161] D. C. Youla, J. J. Bongiorno, and C. N. Lu. Single-loop feedback stabilization of linear multivariable plants. Automatica, 10:159-173, 1974.

[162] Z. L. Zhang, D. Towsley, and J. Kurose. Statistical analysis of the generalized processor sharing scheduling discipline. IEEE Journal on Selected Areas in Communications, 13:1071-1080, 1995.
[163] P. V. Zhivoglyadov and R. H. Middleton. On stability in hybrid systems. In Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, December 1998.
Index

absolute stability, 35
absolute stabilizability, 6, 80, 83, 85, 87, 91
adaptive control, 5
algebraic Riccati equation, 84, 123
almost optimal controller, 125, 126
asynchronous controller switching, 6, 12, 15, 19, 21, 26, 37, 38, 40, 41, 47, 49
basic controllers, 6, 12, 14, 36, 49, 56, 59, 80
basic sensors, 114
Bellman's principle of optimality, 118
bounding control, 100
complete collection of matrices, 14
computer-controlled system, 2
controllability, 97
cost function, 57, 60, 84, 100
cost matrix, 59
detectability, 124, 131
differential automaton, 2
discrete-event dynamical system, 3
dissipativity, 33
dynamic programming equation, 60, 85, 101
dynamic programming procedure, 117
exponential stability, 131
Finsler's Theorem, 17, 20, 25, 38, 41
flexible manufacturing system, 2
global asymptotic stabilizability, 13
H∞ control, 6, 55, 63, 79, 85
H∞ control problem with transients, 59
H∞ norm bound, 76
HDS, 1
Hilbert space, 8
hybrid dynamical system, 1
hyperstability, 79
integral quadratic constraint, 5, 76, 78, 82, 91, 98, 109, 114
intelligent state estimator, 7
jump state equation, 124, 132
Kalman filtering, 7, 108
Laplace transform, 77, 79
limit cycle, 2
linear quadratic optimal control problem, 123, 126
Lyapunov function, 13, 22
model predictive sensor scheduling, 118
norm-bounded constraint, 38, 82, 99
norm-bounded uncertainty, 34, 79
one-step-ahead optimal sensor schedule, 118
optimal sensor schedule, 115, 117
optimal tracking problem, 62
output feedback H∞ control problem, 59, 111
Parseval's Theorem, 76
quadratic stabilizability, 6, 14, 15, 19, 21-26
quadratic storage function, 37, 42, 45, 81
Riccati differential equation, 43, 60, 87, 100, 110, 112, 116, 119
robust controllability, 6
robust observability, 110
robust output feedback controllability, 100
robust stabilizability, 6, 36, 37, 42, 45, 48, 81
robustness, 5
S-procedure, 16-18, 20, 25, 27, 41, 93
sampled-data system, 2
sector bounded nonlinearity, 35
sensor schedule, 114
set-valued state estimation, 7
simultaneous strong stabilizability, 131, 134
simultaneous strong stabilization, 7
sliding mode control, 2
stabilizability, 123, 131
stabilizing solution, 124
stable controller, 7, 122
stable matrix, 19
state estimator, 111, 116
state feedback H∞ control problem, 57
state transition matrix, 133
strictly complete collection of matrices, 14, 18
strong stabilizability, 123, 126
strong stabilization, 7
structured uncertainty, 89
switched controller, 7, 132
switched controller system, 4
switched flow network, 2
switched sensor system, 6
switched system, 3
switching sequence, 56
switching signal, 4
synchronous controller switching, 6, 21-25, 27, 42, 43, 45, 48, 49, 56, 59, 80, 83, 91, 99, 101
synchronous sensor scheduling, 114
synchronous sensor switching, 115, 117
system uncertainty, 6, 77, 82, 89, 98, 114
Tavernini model, 2
timed automaton, 3
transfer function uncertainty, 76
two-mass spring system, 49
uncertain system, 5, 7, 34, 36, 77, 82, 89, 98, 109, 113
uncertainty, 5, 76
uniform robust observability, 112, 115, 117
verification, 3
Witsenhausen model, 2