M_{Δ_T}) for the multipliers in (6.19), (6.20), and (6.21), respectively. A finite-dimensional multiplier description of the diagonal operator Δ in (6.18) is then obtained as
It turns out that we can obtain good results with very simple basis multipliers for this particular example. Table 6.2 shows numerical results obtained with LMI-Lab. The obtained bound on T0 is very close to optimal. To see this, consider the transfer function
Table 6.2: Numerical results for the antireset windup compensator example. The result shows stability for time delays up to T0 = 0.01.
Figure 6.5: The diagram shows the Nyquist curve for G_cl. The stability condition in (6.22) can be satisfied with a proper choice of the Popov parameter Λ and the multiplier H.
where the time delay has been included. If the time delay T is given (not uncertain but bounded as before), then stability of the control system in Figure 6.4 can be investigated by considering the inequality
where Λ ∈ R and where H satisfies the same conditions as above. The Nyquist curve of G_cl is given in Figure 6.5 for the case when T = 0.01. The system is very close to instability for this choice of time delay. In fact, the high-frequency part of the Nyquist curve passes very close to the point (−1, 0), in which case the system would be unstable under unity-gain feedback. However, from the numerical results in Table 6.2 it follows that we can satisfy the condition in (6.22) with an appropriate Popov parameter Λ and a suitably chosen multiplier H.
6.7
Conclusions
We have introduced a format for multiplier computation in robustness analysis. It is applicable to a large class of systems. The resulting computational problem can be formulated as a convex optimization problem in terms of linear matrix inequalities. It is easy to implement the ideas as a set of subroutines that interfaces with any of the existing software packages for solving convex optimization problems in terms of LMIs. All manipulation of transfer functions is then done in terms of corresponding state-space realizations. A Matlab 4 version of such a toolbox has been used for the examples in this chapter [224]. A new toolbox for IQC analysis is currently under development by Megretski and coworkers; see [272]. The new toolbox uses the new data structures and the graphical user interface builder of Matlab 5.0 to obtain a user-friendly environment for system analysis. There are many interesting problems that remain to be solved. It would, for example, be useful to develop efficient mathematical methods for choosing basis multipliers, in other words, methods that can give guidelines for improving a particular basis multiplier.
Jonsson and Rantzer
It is also useful in this perspective to have methods for obtaining bounds on what can be achieved with a particular infinite-dimensional cone of multipliers. This gives an indication of the quality of a particular basis. We have used results from duality theory for infinite-dimensional convex programs to obtain such bounds for general classes of problems in [220, 222, 223].
6.8
Appendix
6.8.1
Proof of Theorem 6.2 and Remark 6.4
Let w ∈ L2[0, ∞) be in W_inp. The stability assumption implies that Δ(y) ∈ L2^m[0, ∞). Consider the operator
We note that the condition Δ [ G11(∞) G12(∞) ] = 0 implies that
Hence, H is a bounded and Hermitian-valued operator on L2(−∞, ∞). Evaluation of the corresponding quadratic functional for [Δ(y)* w*]* gives
The equality follows from the observations
and
where F^{-1} denotes the inverse Fourier transform. We note that this expression equals zero, since it is assumed that the system is initially at rest. Hence, we get
This proves the theorem. The statement of Remark 6.4 follows, since the upper left block of the inequality (6.7) implies that
for all ω ∈ [0, ∞]. By assumption (ib) the second term is nonnegative. Hence, the stability follows from Theorem 6.1.
127
6.8. Appendix
6.8.2
Multiplier sets for Example 6.3
To obtain a finite-dimensional parameterization of the multipliers in (6.19), we introduce strictly proper basis functions H_i ∈ RL∞, i = 1, ..., N, for a finite-dimensional parameterization of H. We can express H as
where λ^+, λ_i > 0. We can assume that H_i = H_ic + H_iac, where the causal part has the realization H_ic = C_ic(sI − A_ic)^{-1}B_ic and where the anticausal part has the realization H_iac = C_iac(sI − A_iac)^{-1}B_iac. The L1-norm of the weighting function corresponding to H_i can be computed as
Then the constraint ensures that ||h||_1 ≤ h_0. To obtain a parameterization in the format Ψ(Ψ_Δ, M_Δ), we define
We can now use Ψ(Ψ_Δ, M_Δ), where Ψ_Δ is as above and where
The Popov multiplier in (6.20) can in our format be represented as P(Ψ_Δ, M_Δ), where
To obtain a parameterization of the form P_{Δ_T}(Ψ_{Δ_T}, M_{Δ_T}) in equation (6.21), we let χ(jω) = R(jω)* U R(jω), where R ∈ RH∞ is a basis multiplier and where U = U^T > 0 are the corresponding coordinates. We note that we have the factorization Ψ_0(s) = H*(s)H(s), where
We now get the parameterization P_{Δ_T}(Ψ_{Δ_T}, M_{Δ_T}),
Chapter 7
Linear Matrix Inequality Methods for Robust H2 Analysis: A Survey with Comparisons

Fernando Paganini and Eric Feron

7.1

Introduction

The robust H2 problem is rooted in the efforts of the 1970s to provide stability margins to the H2-optimal (LQG) regulator. The difficulties encountered in this combination of classical and modern control [348, 101, 109] led to a shift in focus to other performance criteria (H∞, L1), which by means of the small gain theorem are directly linked to robust stability guarantees. Robust control based on the H∞ and L1 measures is now a mature field [446, 88]. The impression has remained, however, that such methods lean too heavily on robustness and sacrifice an adequate view of performance; the latter is often more naturally described by an H2 criterion, which can be used to capture both the transient response of the system and the response to stationary noise. These interpretations are reviewed in section 7.2. The promise of a successful combination of robustness and H2-performance was renewed in the late 1980s by the close ties between the state-space methods of H2 and H∞ control (see [107]). An abundant literature has pursued combinations of H2 and H∞ in various forms (see [48, 228, 447, 110, 384, 330, 334, 386, 187, 210, 134] and references therein), including mixed H2/H∞ control (multiobjective design) and the related but separate question of robust H2-performance analysis. Focusing on the latter question, in section 7.3 we provide a review of these methods, aiming at the most general results and a streamlined presentation, which is difficult to find in this large literature. Central to this task are linear matrix inequalities (LMIs) and the so-called S-procedure, which refers (see Chapter 1 or [433]) to the use of multipliers to bound an optimization under nonconvex quadratic constraints; the procedure is called "lossless" when the bound is nonconservative.
The above state-space theory has the limitation that it is not accompanied by external, input-output descriptions or necessary and sufficient characterizations, as are available for robust stability and H∞- or L1-performance. In pursuit of such characterizations, an operator version of robust H2-performance was proposed in [318, 319, 320], which views the H2 norm as the worst-case response under a suitable class of "typical" white noise signals. This method also brings in an infinite-dimensional S-procedure, which in this case is proven to be lossless. In section 7.4 we will review this alternative and the resulting characterizations, both in the frequency domain and in the state space. The surveyed methods for robust H2-performance evaluation will be expressed in terms of a series of SDPs. Subsequently, in sections 7.5 and 7.6 we will proceed to compare these approaches in mathematical terms, in terms of computational cost, and by illustrative examples, showing their relative strengths and limitations. The common language will be key to providing transparent connections. We conclude in section 7.7 with a few final remarks on the state of this area of research.
7.2
Problem formulation
The chapter will focus on continuous-time signals and systems, but analogous methods can be developed for the discrete-time counterpart, which is in fact somewhat easier, since some objects (impulses, white noise) are more easily formalized in that setting.
7.2.1
The H2 norm
We begin by defining the H2 norm of a linear time-invariant (LTI) system and remarking on its motivation as a performance measure. Let H be a stable LTI system with transfer function H(jω); the H2 norm of H is defined as
If H is finite dimensional, we can compute this norm using a state-space realization
which must have no feed-through term for the H2 norm to be finite. Then we have

||H||_2^2 = Tr(C Q C^T), where Q solves A Q + Q A^T + B B^T = 0. (7.3)
The main motivations of the H2 norm as a performance measure are the following:

1. ||H||_2^2 is the steady-state variance of the response of the system to white noise; this idealization of signals with flat spectrum over all frequencies is a common modeling tool for disturbances. Both deterministic (see section 7.4) and stochastic formalizations of this notion are possible. The stochastic version can be related to (7.3) by noting that Q satisfying A Q + Q A^T + B B^T = 0 is the asymptotic covariance of the state x when it is driven by continuous-time white noise (formally treated in section 7.3.2); thus the steady-state variance of the output z = Cx is Tr(C Q C^T) = ||H||_2^2.

2. For scalar inputs, ||H||_2^2 is the energy of the system impulse response; thus this quantity can be used to measure the transient response of an error variable to known inputs or initial conditions (which may be generated by an impulse). In the multivariable case, we can write

||H||_2^2 = Σ_k ||H(e_k δ)||_2^2,

where e_k denotes the kth coordinate vector in input space. In other words, the norm measures the sum of impulse response norms over every input channel. Equivalently, ||H||_2^2 is equal to E ||H(v_0 δ)||_2^2, the expected response for an impulse of random direction v_0 with covariance E(v_0 v_0^T) = I (which generates an initial condition of covariance E(x_0 x_0^T) = B B^T). This interpretation is closely related to (7.2); if P satisfies A^T P + P A + C^T C = 0, then x_0^T P x_0 is the energy of the transient response starting from x(0) = x_0; Tr(B^T P B) adds this energy over initial conditions obtained by applying an impulse at each channel.

3. For scalar outputs, ||H||_2 is the L2 to L∞ induced norm of the system and thus measures amplitude deviations for finite energy disturbances.

The above motivations, mainly the first two, strongly support the use of this norm as a measure of performance. In particular, notice that by adding an input filter or weighting function, we can use the impulse response to study the response to given, known inputs (e.g., steps) and the white noise response to study the response to disturbances of known (colored) spectrum. The common feature is thus the following: when there is information on the spectral content of the inputs, H2 is an adequate performance measure. In contrast, other methods such as H∞ treat disturbances or commands as being the worst in a very broad class, which is often unrealistic. Optimal control based on the H2-performance measure has an elegant solution, developed in the 1960s under the name LQG control (see [11]).
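The two state-space formulas above, Tr(C Q C^T) via the controllability Gramian Q and Tr(B^T P B) via the observability Gramian P, are easy to verify numerically. A minimal sketch using scipy, with an arbitrary stable realization chosen for illustration (the matrices are not taken from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary stable example realization (illustrative, not from the text).
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability Gramian Q: A Q + Q A^T + B B^T = 0
Q = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian P:  A^T P + P A + C^T C = 0
P = solve_continuous_lyapunov(A.T, -C.T @ C)

h2_from_Q = np.sqrt(np.trace(C @ Q @ C.T))  # sqrt of Tr(C Q C^T)
h2_from_P = np.sqrt(np.trace(B.T @ P @ B))  # sqrt of Tr(B^T P B)
print(h2_from_Q, h2_from_P)  # the two Lyapunov-based values agree
```

The agreement of the two values reflects the equivalence of the white noise and impulse response interpretations for LTI systems.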
7.2.2
Robust H2-performance
Figure 7.1: Uncertain system (M, Δ).

The robust H2 problem arises when model uncertainty is added to an H2-performance specification; a standard formulation is depicted in Figure 7.1. The nominal map M is
taken to be a finite-dimensional LTI system with state-space realization
Notice that for the problem to be meaningful there can be no feed-through term between v and z. In addition, we will assume here that Dqv = 0; i.e., there is no unfiltered input seen from the uncertainty Δ. This assumption is required in all the approaches except possibly for that of section 7.3.4, so we will adopt it from here onward. The uncertainty Δ is modeled as an unknown operator on the space L2 of square-integrable, vector-valued functions, restricted to the spatial structure
This standard form accommodates a variety of uncertainty models, including real parameter variations and unmodeled dynamics, which can be LTI, linear time-varying (LTV), or nonlinear (NL). These choices are reflected in the dynamic nature of the perturbation blocks, which are normalized to size 1 in the L2-induced norm. B_Δ denotes the unit ball of structured perturbations under consideration.

Remark. It is customary to restrict Δ to be a causal operator (past outputs depend only on past inputs). This restriction deserves some comment, since Δ does not represent a physical system; it merely provides a parameterization for a class of systems. The convenience of a causal parameterization is that it allows the fictitious feedback loop of Figure 7.1 to be treated like a "physical" loop and questions of stability (which go beyond the signal space L2) to be considered within this framework. As we shall see, the causality restriction plays an important role in the bounds for robust H2-performance.

The system (M, Δ) is robustly stable if M is stable and if I − ΔM11 has a causal, bounded inverse for every Δ ∈ B_Δ. The question of robust stability under structured uncertainty has been extensively studied (e.g., [314, 441, 370, 274]). We will assume in what follows that the system is robustly stable. With this assumption, the closed-loop map T_zv(Δ) from v to z is a bounded operator for all Δ ∈ B_Δ; we wish to impose an H2-performance specification on this map. Now the only case in which we have an unequivocal definition of ||T_zv(Δ)||_2 (analogous to (7.1)) is when this map is LTI, i.e., when the blocks of Δ are LTI or real parametric. For this case, the question at hand of how to evaluate sup_Δ ||T_zv(Δ)||_2 is purely computational; unfortunately, robust performance evaluation here is necessarily hard, because already the robust stability problem is known to be NP-hard [72], so the aim is to obtain tight, tractable bounds.
As is standard in robust stability and H∞- or L1-performance problems, these bounds will involve uncertainty "scalings" or "multipliers" of the form
which commute with the structure (7.7); we denote by A the set of positive matrices of the form (7.8). With these other performance measures, the resulting bounds also have an interpretation in their own right as tests for LTV or NL uncertainty structures [227, 370, 274, 336]; we wish to emulate this in the H2 context. For NLTV perturbations, however, the right definition of the H2 norm is not so clear, a difficulty that was pointed out in [384]; in particular, one can build on various
approaches: impulse response measures, stochastic white noise rejection, or deterministic versions of white noise. Sections 7.3 and 7.4 will provide an overview of these approaches and cast them in a common language, by exploiting the quadratic structure of uncertainty (described by quadratic signal constraints), noise (in terms of its spectrum, or second-order statistics), and performance (L2 norms, variance). This structure allows for the use of state-space tools of linear-quadratic optimization, as well as the S-procedure [433, 274], in which quadratic constraints are incorporated into a cost function by means of Lagrange multipliers. These tools come together to yield convex methods for analysis involving LMIs.
7.3
State-space bounds involving causality
We begin by stating an SDP problem, where the unknowns involve a state-space matrix and a matrix of constant uncertainty multipliers. To simplify the formulas we will assume that Dqp and Dzp are zero.

Problem 1. Find J_{s,nltv} := inf Tr(Bv^T P Bv) subject to P > 0, Λ ∈ A, and
Remark. This problem can also be stated with (7.9) replaced by a Riccati equation ((7.13) below), which depends nonlinearly on Λ. We prefer the LMI version due to its convexity, but some authors [384, 334] have pursued nonconvex computation with the Riccati version; in the case of a scalar multiplier, the nonconvexity can be handled easily by a linear search. We will show that J_{s,nltv} provides a bound for robust H2-performance under structured, causal, otherwise arbitrary perturbations Δ. This holds both for the impulse response and the stationary noise interpretations. For simplicity we take Δ = diag[Δ_1, ..., Δ_F] and the multiplier structure Λ = diag[λ_1 I, ..., λ_F I].
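For a single uncertainty block the multiplier reduces to one scalar λ, and the line search mentioned in the remark is easy to sketch. Since (7.9) and (7.13) are not reproduced here, the code below assumes the standard Riccati form associated with bounds of this type, A^T P + P A + λ Cq^T Cq + Cz^T Cz + (1/λ) P Bp Bp^T P = 0, and uses made-up scalar data (both are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Made-up scalar data for Figure 7.1 (all D-terms zero, one uncertainty block).
A  = np.array([[-1.0]])
Bp = np.array([[1.0]])    # input from the uncertainty
Bv = np.array([[1.0]])    # noise input
Cq = np.array([[0.5]])    # output to the uncertainty
Cz = np.array([[1.0]])    # performance output

def h2_bound(lam):
    """Assumed Riccati form of Problem 1 for a scalar multiplier lam > 0;
    returns Tr(Bv' P Bv), or inf when no stabilizing solution exists."""
    try:
        P = solve_continuous_are(A, Bp,
                                 lam * Cq.T @ Cq + Cz.T @ Cz,
                                 -lam * np.eye(1))
        return float(np.trace(Bv.T @ P @ Bv))
    except (np.linalg.LinAlgError, ValueError):
        return np.inf  # Riccati infeasible for this lam

# Line search over the scalar multiplier handles the nonconvexity.
lams = np.logspace(-1, 2, 400)
J = min(h2_bound(l) for l in lams)
print(J)  # robust H2 bound (squared norm); about 1.0 for this data
```

For this data the search settles near λ = 2 with a bound of about 1; reassuringly, the worst case over constant real δ ∈ [−1, 1] for the same data is 1/(2(1 − 0.5)) = 1.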
7.3.1
The impulse response interpretation
As remarked in section 7.2, the H2 criterion can be interpreted as a measure of the impulse response energy, added over the input channels. For the closed-loop map we write
If T_zv(Δ) is LTI, this coincides with the standard H2 criterion mentioned before, but the definition applies to nonlinear time-varying (NLTV) systems as well.

Remark. Alternatively, one could use the average response to an impulse of random direction v_0 δ, where v_0 is a random vector of covariance E(v_0 v_0^T) = I. While the two notions need not be equivalent for a given NL system, the same bounds apply in the worst case over Δ. Many authors [384, 334, 210, 64, 134, 187] have applied this approach to robust H2-performance under a variety of uncertainty structures and obtained bounds of the form of Problem 1 or its Riccati equation counterpart. The following is a derivation based on the S-procedure.
Theorem 7.1. Suppose that the system of Figure 7.1 is robustly stable against the ball of structured NLTV perturbations. Then
Proof. To begin with, we observe that the effect of the impulse in the kth channel is to "load" an initial condition x_0 = Bv e_k into the system, which subsequently responds autonomously. For this reason we first focus on the problem for a fixed initial condition and no input. We now write the bounds
In the first step the ith uncertainty block is replaced by the integral quadratic constraint (IQC) ||p_i||_2 ≤ ||q_i||_2, where q_i and p_i are the corresponding portions of q, p; this constraint would characterize the class of contractive (possibly noncausal) NLTV operators. However, by requiring p ∈ L2[0, ∞), we are imposing some causality in the problem by not allowing p to anticipate the impulse. This does not, however, impose full causality in the map from q to p; hence the inequality. The second step is an application of the S-procedure: the constraints are incorporated into the cost function via Lagrange multipliers λ_i ≥ 0, and it is straightforward to show the stated inequality. What is not obvious is that in this case of IQCs the S-procedure is lossless, and in fact [274, 433] we have equality in (7.11). This property will be relevant to the comparisons of section 7.5. To compute (7.11), observe that for fixed λ_i we have a classical linear-quadratic optimization problem
Following, e.g., [420], it is known that the optimal value of (7.12) is x_0^T P x_0, where P is the stabilizing solution of the algebraic Riccati equation
For such a solution to exist, the H∞ norm condition
must hold, which happens to be feasible given the robust stability assumption. To see this, notice that robust stability under NLTV uncertainty implies [370] that ||Λ^{1/2} M11 Λ^{-1/2}||_∞ can be made less than 1, and this implies (7.14) by scaling up Λ. To derive an LMI version, we use the fact (see [420]) that the Riccati solution is the minimizing solution of the LMI (7.9), which is jointly affine in Λ and P. Thus we have
which is a convex optimization problem. Now to consider the sum over impulse channels, we write
Here (7.17) results from (7.15). Notice that we have commuted the supremum with the sum in (7.16) and the infimum with the sum in (7.18), both potentially conservative steps (see section 7.5.3) for the multivariable case. In the scalar v case, the only source of conservatism is (7.10). □

A final comment on the preceding result is that for NLTV uncertainty the response to an impulse may vary in time; however, exactly the same argument shows that the bound applies to the response to an impulse δ_t applied at time t. Furthermore, it follows that the bound applies to the impulse response averaged over time, which we denote
for the single input case (and with a spatial sum for the multi-input case).
7.3.2
Stochastic white noise interpretation
We now inquire to what degree the preceding bound can be applied to questions of stationary white noise rejection. Of course, in the LTI uncertainty case the implication is direct, since we are bounding the standard H2 norm. The LTV and NLTV cases require more care; in particular, since time variation is allowed, the output to stationary noise will not be stationary, and any performance measure must resolve this by taking either the average or the worst case over these variations. We consider here the averaging approach; the worst-case measure is considered in section 7.4. Define the average output variance norm of a system H as
where z is the response of H to stochastic white noise (formalized below). Does Problem 1 provide a bound for sup_Δ ||T_zv(Δ)||_{2,aov}? The answer is affirmative for LTV uncertainty, since for LTV systems ||H||_{2,avgimp} = ||H||_{2,aov}, as shown in [32]. However, this identity relies on superposition and hence does not extend to NL systems. Nevertheless, a stochastic argument can be given to show that the bound still applies for NLTV, causal uncertainty. This method was proposed in [330]; we extend it here to the structured uncertainty problem.

Theorem 7.2. Suppose that the system of Figure 7.1 is robustly stable against the ball B_Δ of structured NLTV perturbations. Then
Proof. As before, robust stability implies the feasibility of (7.9). To formalize the notion of continuous-time white noise applied in v, (7.4) is replaced by the stochastic differential equation
in the sense of Ito calculus, where V(t) is multidimensional Brownian motion. For an introduction to stochastic calculus, including the Ito formula used below, see [304]. The white noise v(t) would be the derivative of V(t), except that this process is not differentiable, hence the need for this alternative calculus. For (7.19) to be well defined, the stochastic process p(t) must be as follows:

• adapted to V(t), meaning that it is a function of past values of V(t); this in fact holds when p(t) is obtained through the configuration of Figure 7.1 with a causal Δ;

• locally square integrable, i.e., E(∫_{t_1}^{t_2} |p(t)|^2 dt) < ∞, which follows from robust stability.

Remark. Additional technical assumptions are needed for stochastic differential equations that may possibly restrict the allowable class of Δ; these are outside our scope. Now we use Ito calculus to compute stochastic differentials; the relevant rules are dt^2 = 0, dt dV = 0, (dV)(dV)^T = I dt, and the Ito formula
Applying these to f(x) = xTPx for a symmetric matrix P, we have
Now take P, Λ satisfying (7.9); pre- and postmultiplying by (x^T, p^T) and (x^T, p^T)^T we obtain
Integrating on [0, τ] and exploiting ||p_i||_{L2[0,τ]} ≤ ||q_i||_{L2[0,τ]} (Δ_i is causal and contractive) gives
Now integrating (7.20) on the interval [0, τ] for zero initial conditions, and using (7.21), we find
Taking expectations, the last term (a stochastic integral of zero mean) disappears; the resulting inequality bounds the average output variance, and taking the supremum over Δ and the infimum over P, Λ completes the proof. □
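The stochastic interpretation used in this proof can be illustrated by simulating the Ito equation directly. A sketch with an illustrative scalar system and no uncertainty (Euler-Maruyama discretization; the numbers are made up for the example):

```python
import numpy as np

# dx = a x dt + dV, z = c x; the stationary variance of z is
# Tr(C Q C^T) with a Q + Q a + 1 = 0, i.e. Q = 0.5 for a = -1.
a, c = -1.0, 1.0
dt, n_steps = 1e-2, 200_000
rng = np.random.default_rng(0)

x = 0.0
acc, count = 0.0, 0
for k in range(n_steps):
    dV = np.sqrt(dt) * rng.standard_normal()  # Brownian increment
    x = x + a * x * dt + dV                   # Euler-Maruyama step
    if k > n_steps // 2:                      # discard the transient
        acc += (c * x) ** 2
        count += 1

avg_var = acc / count
print(avg_var)  # close to the Lyapunov prediction 0.5
```

The time-averaged output variance approaches Tr(C Q C^T) = 0.5, the value predicted by the Lyapunov equation of section 7.2.1.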
7.3.3

Refinements for LTI uncertainty

We have described the constant multiplier method, suitable for arbitrary contractive perturbations, but additional properties of the uncertainty can also be exploited. For LTI uncertainty dynamic multipliers can be used; using, e.g., the impulse response interpretation (they are all equivalent for LTI systems), (7.10)-(7.11) can be replaced by
where λ_i(jω) are in principle any positive transfer functions. However, at this level of generality the restriction p ∈ L2[0, ∞) (related to causality) is not easily handled. To obtain a computable bound in the style of Problem 1, λ_i(jω) must be restricted a priori to the span of a finite set of rational basis functions, as is done in [134].8 This means that the multipliers Λ(jω) are restricted to a certain subspace S_N of rational functions, N denoting the dimensionality of this space, leading to Problem 2.

Problem 2. Find
The preceding optimization bounds (7.23) from above and can be reduced to SDP computation by state-space methods. For details on this procedure—in particular, how to handle noncausal multipliers—we refer to [134]. Further refinements ("G-scalings," Popov multipliers [187, 437, 57]) apply to the real parametric uncertainty case; we refer to Chapter 9 in this book for some of these methods.
7.3.4

A dual state-space bound

We briefly discuss a bound that is dual to Problem 1; this bound uses a different restriction on the feed-through terms in the realization (7.4)-(7.6): the terms with output z must be zero, but not necessarily Dqv. To set things on common ground we will assume, as before, that all the D terms are zero.

Problem 3. Find J'_s
8 [134] uses passive rather than contractive uncertainty blocks; the methods are easily adapted to the present case.
Note that (7.24) is feasible in Q provided that (compare with (7.14))
whose feasibility in Λ is once again equivalent to robust stability under NLTV perturbations. A first question is whether J_{s,nltv} = J'_s; it is shown in section 7.6 via examples that the answer is negative and also that there is no fixed ordering between them. In terms of interpretation, J'_s has not been studied as much, but the following can be shown.

Theorem 7.3. Suppose that the LMI (7.24) is feasible. Then with Δ varying in the class of real constant matrices of the structure (7.7),
Proof. The main idea, which can be traced back to [50] in the context of Riccati equation conditions, is to bound the controllability Gramian (asymptotic state covariance) Q_Δ of the stable closed loop
In other words, Q_Δ is the solution to the Lyapunov equation (analogous to (7.3))
We will show that if Q, Λ ∈ A satisfy (7.24), then
by deriving a Lyapunov equation for the difference Q − Q_Δ. To do this, first rewrite (7.24) by Schur complement as
Subtracting (7.25) from it gives
which can be rewritten after some algebra as
Since the last two terms are nonnegative (note that ||Δ|| ≤ 1), we conclude that
which leads to (7.26) using the stability of A + Bp Δ Cq. Having shown this, we complete the proof by writing
and taking the supremum over Δ and the infimum over Q, Λ. □
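Since (7.24) is not reproduced here, the following sketch assumes the standard gramian-type Riccati form associated with dual bounds of this kind, A Q + Q A^T + (1/λ) Q Cq^T Cq Q + Bv Bv^T + λ Bp Bp^T = 0 (an assumption), and checks the conclusion of Theorem 7.3 numerically on made-up scalar data against constant contractive perturbations:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Made-up scalar data (illustrative); lam is a fixed scalar multiplier.
A, Bp, Bv, Cq, Cz = (np.array([[v]]) for v in (-1.0, 1.0, 1.0, 0.5, 1.0))
lam = 2.0

# Assumed gramian-type Riccati associated with the dual LMI:
# A Q + Q A' + (1/lam) Q Cq' Cq Q + Bv Bv' + lam Bp Bp' = 0
Q = solve_continuous_are(A.T, Cq.T,
                         Bv @ Bv.T + lam * Bp @ Bp.T,
                         -lam * np.eye(1))
J_dual = float(np.trace(Cz @ Q @ Cz.T))

# Compare against the actual closed-loop gramian for constant |delta| <= 1.
worst = 0.0
for delta in np.linspace(-1.0, 1.0, 21):
    Acl = A + delta * Bp @ Cq                       # A + Bp*delta*Cq
    Qd = solve_continuous_lyapunov(Acl, -Bv @ Bv.T)  # Lyapunov gramian
    worst = max(worst, float(np.trace(Cz @ Qd @ Cz.T)))

print(J_dual, worst)  # J_dual upper-bounds the worst case
```

The bound indeed dominates the worst case over constant δ; its value need not coincide with that of Problem 1 for the same data, consistent with the observation above that there is no fixed ordering between the two.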
Extensions and open questions

• A similar procedure shows (see [389]) that the bound applies to the case of a memoryless time-varying uncertainty Δ(t), under the ||·||_{2,aov} norm.

• However, we are not aware of analogous studies with a stochastic interpretation for the case of perturbations with memory (even LTI). Given (7.26), one can conjecture that if Q, Λ satisfy (7.24), then Q should bound the time-averaged covariance of the system state when driven by continuous-time white noise and when the uncertainty is structured, causal NLTV. This conjecture might follow if, instead of this algebraic argument, one attempts a signal-based S-procedure argument similar to the one in Theorem 7.2; the extension is not, however, immediate, and the conjecture is, to our knowledge, open.

• A case where the extension to perturbations with memory has been pursued is the L2 to L∞ induced norm interpretation (also studied in [50] for static matrix Δ). Notice that only for scalar outputs does this correspond to the H2 norm, so we focus on this case (see [210, 50] for generalizations). Here Tr(Cz Q Cz^T) = |Cz Q Cz^T|, and it is shown in [210, 377] that Problem 3 gives a bound on the L2 to L∞ induced norm of T_zv(Δ) under arbitrary NLTV, causal uncertainty. In other words
The proof [210, 377] follows an S-procedure argument similar to the one in section 7.3.1, plus a more extensive use of LMI properties. Another open question is the refinement of this bound by dynamic multipliers for LTI uncertainty, analogous to section 7.3.3.
7.4
Frequency domain methods and their interpretation
A separate line of research [318, 319, 320] has sought "external" characterizations of robust H2-performance, paralleling the frequency domain theory available for robust stability or H∞-performance. This includes the structured singular value methods [314] for LTI uncertainty, as well as recent strong results that have shown the losslessness of the convex upper bound over classes of NLTV uncertainty [370, 274, 336]. These results are replicated in the H2-performance setting by applying an operator point of view to stationary white noise rejection.
7.4.1
The frequency domain bound and LTI uncertainty
Consider the following optimization.

Problem 4. Find J_{f,lti} := inf (1/2π) ∫ Tr(Y(ω)) dω subject to Λ(ω) ∈ A and
To relate this condition to the structured singular value methods [314], notice that replacing Y(ω) by γ^2 I in (7.27) imposes the robust H∞-performance specification
This means that the worst-case gain of the system across frequency and spatial direction is bounded by γ. In (7.27) the "slack variable" Y(ω) allows this gain to vary, provided that the accumulated effect over frequency and spatial direction is minimized, which is indicative of an H2-type specification. For LTI uncertainty, this argument is easily formalized as follows.

Theorem 7.4. Suppose that the family of LMIs (7.27) is feasible. Then with Δ varying in the class of LTI perturbations of the structure (7.7),
Proof. Concentrating for simplicity on the full-block case Δ = diag[Δ_1, ..., Δ_F], consider a fixed frequency ω. Multiplying (7.27) by (p(jω)*, v(jω)*) on the left and its adjoint on the right yields (refer to Figure 7.1)
Since Δ is time invariant and contractive, we have |q_i(jω)| ≥ |p_i(jω)|, so λ_i(jω) ≥ 0 implies
follows. Using definition (7.1), we have
for every Δ, which proves the desired upper bound. The preceding proof is close in spirit to the methods of structured singular value theory, by which the above-mentioned H∞ condition is proved. Analogously, one can also refine the bound for the case of real perturbations by adding to the left-hand side of (7.27) a "G-scaling" term of the form
where the additional scaling G(ω) has the structure G = diag[G_1, ..., G_r, 0, ..., 0], with blocks G_i = G_i* in the locations of the real uncertainty (see [441]). Proof of sufficiency follows similar lines. The next question we discuss is the conservatism of this bound, analogously to the study of "μ-simple" structures pursued in [314]. We state the following result.

Proposition 7.1. Assume that v is scalar and that 2L + F ≤ 2 in the structure (7.7). Let B_Δ denote the ball of LTI, possibly noncausal perturbations of this structure. Then
This characterization is a direct consequence of the structured singular value theory, since for scalar v, Y(ω) measures the worst-case gain of the system at that frequency, as follows from the characterization of μ-simple structures [314], adding the usual "performance block." For noncausal uncertainty this worst-case gain is achievable at every frequency by a suitably chosen Δ(jω), which thus gives an H2 norm equal to the stated bound. Having identified the nonconservative case, we now comment on the sources of conservatism in the general case:

• Non-μ-simple structures. Clearly, as the number of blocks increases we expect conservatism similar to that of the structured singular value.

• Multivariable noise. As seen in the preceding proof, we are imposing the matrix bound T_zv(Δ)(jω)* T_zv(Δ)(jω) ≤ Y(ω) as a way of constraining the traces. We will see in section 7.5.3 that this can be conservative.

• Causal uncertainty. Under this restriction, Δ(jω) would not be allowed to interpolate an arbitrary frequency function; we will return to this issue in section 7.5.
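For scalar v and one LTI, possibly noncausal, block, the frequency-wise worst-case gain that Y(ω) captures can be approximated by brute force: maximize the closed-loop gain over |δ(jω)| = 1 at each frequency and accumulate. A sketch with made-up first-order data (the transfer functions below are illustrative, not from the text):

```python
import numpy as np

# Made-up scalar data: M maps (p, v) -> (q, z) with
# q = 0.5/(s+1) * (p + v)  and  z = 1/(s+1) * (p + v).
ws = np.linspace(0.0, 60.0, 6001)                       # frequency grid
deltas = np.exp(1j * np.linspace(0.0, 2 * np.pi, 181))  # |delta| = 1

s = 1j * ws[:, None]
Mqp = 0.5 / (s + 1)  # also equals Mqv for this data
# T_zv(delta) = Mzv + Mzp * delta * (1 - Mqp*delta)^(-1) * Mqv
Tzv = (1.0 / (s + 1)) * (1 + deltas * Mqp / (1 - deltas * Mqp))

# Frequency-wise worst-case gain, the role played by Y(w) for scalar v.
Y = np.max(np.abs(Tzv) ** 2, axis=1)

# Accumulate over frequency (factor 2 for negative frequencies, then 1/2pi).
J_freq = np.sum(Y) * (ws[1] - ws[0]) / np.pi
print(Y[0], J_freq)  # Y(0) = 4 for this data; J_freq exceeds the nominal 1/2
```

The frequency-wise worst case (Y(0) = 4 here, against a nominal gain of 1 at ω = 0) can greatly exceed the nominal response, and its accumulated value exceeds the nominal H2 norm squared (1/2 for this data), illustrating the worst-case character of the frequency domain bound.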
7.4.2
Set descriptions of white noise and losslessness results
In view of these limitations, it is natural to ask what exactly the bound J_{f,lti} is computing in regard to this problem. It turns out that the frequency domain conditions correspond to treating the white noise, as well as the uncertainty, from a worst-case perspective, in a sense explained below. This approach applies as well to the case of NLTV uncertainty; in that case the conditions involve the use of constant uncertainty scalings A ∈ A, as the ones considered in section 7.3. In particular, we will consider the following modified problem. Problem 5. Find J
We now outline the set-based approach to white noise rejection [318, 320], which is directly tailored to the robustness analysis problem. By treating both noise and uncertainty from a worst-case perspective, exact characterizations can be obtained. At first sight, this approach may seem strange to readers accustomed to a stochastic treatment of noise; notice, however, that a stochastic process is merely a model for the generation of signals with the required statistical spectrum, and other models (e.g., deterministic chaos) are possible. Here we take the standpoint that rather than model the generating mechanism, we can directly characterize a set of signals of white spectrum, defined by suitable constraints, and subsequently pursue worst-case analysis over such a set. One way to do this, inspired by statistical tests for white noise commonly used in time series analysis, is to constrain the cumulative spectrum. We focus for the moment on scalar L2 signals and define the set
The imposed constraints are represented in Figure 7.2. The signals in W_{η,B} have approximately flat spectrum (controlled by the accuracy η > 0) up to bandwidth B, since the integrated spectrum must exhibit approximately linear growth in this region.
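A discrete-time sketch of such a cumulative-spectrum whiteness test is easy to write down; the FFT-based implementation, function name, and tolerance below are our own illustration, not the chapter's exact definition of W_{η,B}:

```python
import numpy as np

# Accept a signal as "white" if its accumulated spectrum grows approximately
# linearly; eta plays the role of the accuracy parameter in W_{eta,B}.
# This is a hypothetical sketch, not the authors' continuous-time definition.
def in_white_set(v, eta):
    spec = np.abs(np.fft.rfft(v)) ** 2
    cum = np.cumsum(spec) / np.sum(spec)            # accumulated spectrum, normalized
    linear = np.arange(1, len(cum) + 1) / len(cum)  # ideal flat-spectrum growth
    return bool(np.max(np.abs(cum - linear)) <= eta)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
colored = np.convolve(white, np.ones(20) / 20, mode="same")  # low-pass filtered

print(in_white_set(white, 0.1))    # True: spectrum is approximately flat
print(in_white_set(colored, 0.1))  # False: energy concentrated at low frequency
```

The deviation bound on the cumulative periodogram is exactly the kind of quadratic, shift-invariant constraint on v(jω) that the S-procedure of Theorem 7.5 absorbs into a multiplier.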
Paganini and Feron
Figure 7.2: Constraints on the accumulated spectrum.
Notice that this integrated spectrum will have a finite limit as Ω → ∞ for L2 signals, so we only impose a sublinear upper bound for frequencies above B. Having defined such approximate sets, the white noise rejection measure for a given system H will be based on the worst-case rejection of signals in W_{η,B} in the limit as
For an LTI system under some regularity assumptions, ‖·‖_{2,wn} can be shown to coincide with the standard H2 norm; see [320] for details. Notice that the preceding definition can apply to any bounded operator, even a nonlinear one. We are now ready to state a characterization of the frequency domain test. Theorem 7.5. Suppose that the system of Figure 7.1 is robustly stable against the ball B_Δ of structured NLTV perturbations. Then
This result is proved in [320] and is based on the method of IQCs, which are used as in [274] to characterize the uncertainty but here are used also for the set W_{η,B}. In particular, (7.29) imposes a parametric (in β) family of constraints on v(jω), which are quadratic and shift invariant. The S-procedure in this case generalizes to include an infinite-dimensional Lagrange multiplier corresponding to these constraints, in addition to the uncertainty multipliers. It is precisely this multiplier that gives rise to Y(ω) in Problems 4 and 5. It is shown in [320] that this S-procedure is lossless, which is the key step in proving Theorem 7.5. We remark that a similar interpretation can be given to the cost function J_{f,lti}, as a test for robust performance over white noise sets in the sense defined above, for uncertainty of arbitrarily slow time variation, in the sense of [336]. Thus this method provides a parallel theory to the analysis with an H∞-performance measure. An important comment in regard to Theorem 7.5 is that it computes the limit a posteriori to the supremum, instead of computing sup_Δ ‖T_{zv}(Δ)‖_{2,wn}. In other words,
it is a uniform limit across Δ that makes the test exact. In fact, this interchange can be conservative when the method is extended to multivariable noise (by imposing cross-correlation constraints on the components), as was recently pointed out in [387]; we return to this in section 7.5.3.
7.4.3 Computational aspects
In terms of computation, Problem 4 is infinite dimensional as posed. Note, however, that with A(jω) free to vary in frequency, the problem decouples over frequency, and a gridding method analogous to the μ-analysis tools [31] can be used. At a given frequency point ω_i we can solve the low-dimensional SDP as follows.
Problem 6. Find inf Tr(Y(ω_i)) subject to
We obtain a value that has the intuitive appeal of bounding the worst-case gain at each frequency, which can subsequently be accumulated over the frequency grid. Other computational methods based on this condition include basis functions and the use of a dual form [170]. If A is constant, a gridding method is not attractive because it would give a large, coupled problem. However, in this case the problem can be reduced exactly to the following finite-dimensional state-space problem, which is of the LMI type in the unknowns Z, P_-, P_+, and A. For details on this equivalence see [320]. Problem 7. J_{f,nltv} = inf Tr(Z) subject to
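For a single unstructured uncertainty block and scalar noise, the per-frequency problem has a closed-form answer: the optimal Y(ω) is just the worst-case gain over the unit disc of uncertainties, attained on the boundary by the maximum modulus principle. The sketch below mimics the gridding-and-accumulation scheme with hypothetical first-order data (the four transfer functions are illustrative choices of ours, not from the chapter):

```python
import numpy as np

# Hypothetical interconnection data: T_zv(Delta) = Gzv + Gzp*Delta*(1 - Gqp*Delta)^{-1}*Gqv
Gzv = lambda s: 1.0 / (s + 1)
Gzp = lambda s: 0.3 / (s + 1)
Gqv = lambda s: 1.0 / (s + 2)
Gqp = lambda s: 0.3 / (s + 2)

def trap(y, x):  # simple trapezoidal quadrature over the grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w = np.linspace(0.0, 50.0, 2001)  # frequency grid (truncated at 50 rad/s)
th = np.exp(1j * np.linspace(0, 2 * np.pi, 128, endpoint=False))
Y = np.empty_like(w)
for k, wk in enumerate(w):
    s = 1j * wk
    # worst case over |Delta| <= 1 is attained on the boundary Delta = e^{j*theta}
    T = Gzv(s) + Gzp(s) * th / (1 - Gqp(s) * th) * Gqv(s)
    Y[k] = np.max(np.abs(T) ** 2)

# accumulate: J^2 ~ (1/pi) * integral_0^inf Y(w) dw (even integrand)
J = np.sqrt(trap(Y, w) / np.pi)
nominal = np.sqrt(trap(np.abs(Gzv(1j * w)) ** 2, w) / np.pi)  # ~ 1/sqrt(2)
```

Here the boundary search over phases replaces the per-frequency LMI of Problem 6; for structured uncertainty one would solve the SDP at each gridpoint instead.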
7.5
Discussion and comparisons
We have presented a number of approaches for the evaluation of robust H2-performance. As we have seen, many different performance criteria fall under the umbrella of "H2," since they all coincide in the case of LTI uncertainty. In this case all upper bounds can be directly compared. In problems involving LTV or NL perturbations, the comparison is more involved since we are using different interpretations; nevertheless, some direct comparisons can be pointed out, as explained in this section. We will focus on the bounds J_s and J_f of sections 7.3.1-7.3.3 and section 7.4, respectively.
7.5.1
NLTV uncertainty
For arbitrary NLTV perturbations, we have seen that Problem 1 provides a bound on the impulse response as well as the average response to stochastic noise in the worst case
over Δ. In contrast, Problem 5 applies to the worst-case white noise response, which may be a larger quantity. Concretely, for scalar noise we clearly have the relationship
since the impulse (or, more exactly, an L2 approximation of it) is always an element of the "white" set W_{η,B}. The following questions arise:
• Can the inequality be strict?
• Do the answers to Problem 1 and Problems 5 and 7 reflect an analogous inequality? This is not obvious, since we only know that J_{s,nltv} is an upper bound for the first quantity in (7.30).
These questions have an affirmative answer. We first discuss the inequality between the LMI bounds, which can be proved (in general, including also multivariable noise) by transforming Problem 1 to the equivalent Problem 8. Problem 8. J_{s,nltv} = inf Tr(Z) subject to
This transformation can be done by standard matrix operations, the new variables being the inverses of P and A in Problem 1. It is not hard to observe that the constraints on Z are weaker in Problem 8 than in Problem 7; hence the minimization will give a smaller number. This proves Proposition 7.2. Proposition 7.2. J_{s,nltv} ≤ J_{f,nltv}. In the next section we will provide an example where J_{s,nltv} < J_{f,nltv}. This implies, in particular, that the inequality in (7.30) can be strict, answering also the first question above. The previous derivation does not, however, provide much insight. We now give an alternative argument, based on the frequency domain, focusing for simplicity on the case of scalar noise. A frequency domain version of the optimization (7.11) is
Now, introducing the slack variable Y(ω) to bound the integrand in (7.31), we can rewrite this problem as the minimization of ∫ Y(ω) dω/(2π) subject to
for all p(jω) in the Fourier image of L2[0, ∞). Since z and q are the response of M to v(t) = δ(t),
which translates (7.32) to
Now we have an expression that closely resembles Problem 5; in fact, if p(jω) were allowed to be any frequency function, (7.33) would reduce to (7.27) and the two problems would be equivalent. However, the constraint p(t) ∈ L2[0, ∞), which embeds some causality in Problem 1, implies that p(jω) is not free but must have an analytic continuation to the right half plane. This restriction will in general lead to a smaller supremum, and thus we prove Proposition 7.2. Remark. A consequence of the previous argument is that in the scalar noise case, if we remove the causality restriction on Δ, then the worst-case impulse response norm is J_{f,nltv}, so the impulse is the worst-case white noise signal in that case. The situation is different if causality is enforced and may lead to a smaller result. A clarification is in order here: we are not saying that Theorem 7.5 fails with causal perturbations (indeed it holds; see [317]); what happens is that the impulse ceases to be the worst-case signal. Also, a gap appears between treating white noise from a worst-case perspective and from an average-case perspective.
7.5.2
LTI uncertainty
We can also extend the previous argument to the case of LTI uncertainty. Notice that here we have an unambiguous H2 norm we are trying to compute, for which both approaches provide bounds. In this regard, once again we find that removing the restriction p ∈ L2[0, ∞) from (7.23) leads to the result J_{f,lti} but that there is a gap between the two. This means that for causal, LTI uncertainty, the state-space approach could in principle give a tighter bound. Notice, however, that we do not have a J_{s,lti} bound, only a family J^N_{s,lti} obtained by basis expansions of order N for the frequency-varying scalings. This means that while
we know nothing about the situation with a given, finite N. This is particularly relevant since the computational cost of state-space LMIs grows dramatically with the state dimension, in contrast with a more tractable growth rate for computation based on Problem 4, as is now discussed. Computation time for a standard LMI solver such as [153] grows approximately as n_dec^5, where n_dec is the number of decision variables. Now for state-space conditions such as Problem 1, n_dec includes the variables of the P matrix, which are of the order of the square of the number of states; roughly speaking, then, computation time grows as the tenth power of the state dimension, which indicates that the cost will be prohibitive for high-dimensional models. This also sets a limit on the applicability of dynamic multipliers for Problem 2, since they increase the state dimension. In contrast, the decoupled frequency-domain bound of Problem 4 for LTI uncertainty is not very sensitive to this problem: while of course a high-order system calls for the use of a fine frequency grid, this effect is milder since computation time grows linearly with the number of gridpoints. Also, there is no limitation on the order of multipliers. We will return to this issue in section 7.6.
7.5.3 A common limitation: Multivariable noise
Throughout this chapter we have focused on upper bounds for robust H2-performance, and only in Theorem 7.5 and Proposition 7.1 have we exhibited characterizations of some of these bounds. The conservatism inherent in, e.g., Theorem 7.1 or 7.4 has not been quantified. Recently, [387] has pointed out a source of conservatism in these approaches in the treatment of multivariable noise, which we now explain. We focus the discussion on the LTI uncertainty case, but in principle analogous limitations hold in general. The following example is adapted from [387]. Consider the system with Δ being a full 2 × 2 LTI uncertainty block, v ∈ R², p ∈ R², q ∈ R², and z ∈ R, and state-space matrices
From an input-output perspective, the uncertain system is of the form
where W is a system with transfer function
In this case the worst-case H2 norm can be computed exactly:
Now let us consider the bound of Problem 4. It is not difficult to show that the LMI (7.27) reduces to
and therefore the cost is
which shows conservatism by a factor of √2 with respect to the true worst-case H2 norm. More insight is gained by looking at the bound
that corresponds to (7.28) from the proof of Theorem 7.4. For this bound to hold for every Δ, Y(ω) is forced to satisfy (7.34); however, each Δ can only produce gain in a one-dimensional direction, which means that the H2 norm will never be that large.
We now look at this example from the point of view of the state-space bounds of section 7.3.1 and its refinements for LTI uncertainty of section 7.3.3. Instead of computing the state-space LMIs, however, it is easier to directly evaluate the sum
which is upper bounded by J_{s,nltv} (and also by J^N_{s,lti}), as follows from the proof of Theorem 7.1. In this problem,
and the same happens for the second input channel. Therefore, the sum in (7.35) is once again 2‖W‖₂², which indicates that we will have at least a factor of √2 of conservatism, as we had with Problem 4. The key difficulty is in the interchange of the supremum with the sum: different perturbations Δ achieve the worst case for each input channel. It is clear that an analogous example can be constructed for noise dimension m, giving a conservatism of √m. Thus we find that the need for a convex characterization makes both methods ill suited to treat the decoupling of uncertainty with input channel direction. At this time, this limitation appears to be inherent to any convex method for this problem.
7.6
Numerical examples
We consider in this section several examples of robust H2-performance evaluation to compare the different approaches. The examples all involve scalar noise, so that the preceding limitation does not come into play. In all cases, the convex optimization problems were solved using the LMI Control Toolbox [153].
7.6.1
Comparing J_s and J'_s
As mentioned before, the two dual state-space bounds of section 7.3 are not equivalent. This is now illustrated. For
it can be shown by LMI computation that J_{s,nltv} < J'_{s,nltv}, but the opposite inequality holds for
7.6.2 NLTV case: Gap between stochastic and worst-case white noise
This example exhibits a case where the inequality in Proposition 7.2 is strict. For the system
we obtain results confirming that the gap can indeed be significant.
7.6.3
LTI case: Tighter bounds due to causality
We can also use the same example for the case of LTI uncertainty and solve Problem 4 with frequency-dependent multipliers, which gives J_{f,lti} = 3.434. Notice that this only modestly reduces the constant-multiplier result J_{f,nltv} and is still larger than J_{s,nltv}. So here the benefit of frequency-dependent multipliers is outweighed by the causality constraint embedded in Problem 1.
7.6.4
LTI case: Tighter bounds due to frequency-dependent multipliers
We now show an example in the extreme opposite end, where Jf,iti provides a nonconservative estimate of robust performance, tighter than the state-space bounds. Consider the second-order system with uncertain damping
Here δ ∈ [−1, 1] is parametric uncertainty in the damping coefficient. In this case, the H2 norm of the system can be computed analytically and is equal to 1/√(2(2 + δ)), which has a maximum of 1/√2 ≈ 0.707 over the range of uncertainty. Rewriting the problem in standard form, we compute the different bounds. Problem 4 with frequency-dependent multipliers gives 0.787 when the real parametric nature of the uncertainty is not exploited and tightens to 0.707 when we add the additional G-scaling. Problem 1 with constant multipliers gives a value of 0.883; we also studied the bound when a Popov multiplier was included as in [437], but the bound did not improve. So in this case the use of frequency-dependent multipliers is essential, while the causality restriction does not come into play. For the state-space method to become tighter, the use of frequency-dependent multipliers as in Problem 2 would be required.
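The analytic value quoted above can be checked numerically. We assume the plant is G(s) = 1/(s² + (2 + δ)s + 1), which is consistent with the stated formula ‖G‖₂ = 1/√(2(2 + δ)); the state-space realization and the vectorized Lyapunov solve below are our own sketch:

```python
import numpy as np

def h2_norm(delta):
    # Assumed realization of G(s) = 1/(s^2 + (2 + delta)s + 1)
    A = np.array([[0.0, 1.0], [-1.0, -(2.0 + delta)]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    # Controllability Gramian: A P + P A^T + B B^T = 0, solved via
    # vec(AP + PA^T) = (I (x) A + A (x) I) vec(P) with column-major vec
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    p = np.linalg.solve(M, -(B @ B.T).reshape(-1, order="F"))
    P = p.reshape(n, n, order="F")
    return float(np.sqrt((C @ P @ C.T).item()))  # ||G||_2 = sqrt(C P C^T)

deltas = np.linspace(-1.0, 1.0, 201)
norms = [h2_norm(d) for d in deltas]
print(max(norms))      # ~ 0.7071, attained at delta = -1
print(h2_norm(0.0))    # 0.5, the nominal value 1/sqrt(2*2)
```

The worst case sits at the boundary δ = −1, matching the maximum 1/√2 ≈ 0.707 quoted in the text.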
7.6.5
LTI case: Using both causality and dynamic multipliers
Next, we study an example used in [134] to illustrate the improvement due to dynamic multipliers and to analyze the role of causality. Table 7.1, adapted from [134], gives values for J^N_{s,lti} from Problem 2 as a function of the order N of the dynamic multipliers. In comparison, the frequency-domain bound from Problem 4 on the same problem with dynamic multipliers gives a value of J_{f,lti} = 314.15, similar to the bound obtained from Problem 1 with constant multipliers but significantly worse than the bounds that
Table 7.1
Figure 7.3: Frequency responses and bound.
incorporate both causality and dynamic multipliers. More insight is gained by looking at the frequency plots. In Figure 7.3, the envelope curve depicts the bound Y(ω) on the worst-case gain at each frequency, obtained from Problem 4. Since this problem has single-block uncertainty and scalar noise, it follows from the μ-theory that this bound can be achieved at every frequency: i.e., for each ω we can find some Δ(ω) such that the system gain lies on the envelope. The question of causality comes into play when we want to interpolate such a family of perturbations with a causal LTI system. To illustrate this, consider the other curves in the figure, which represent the gains |T_{zv}(Δ_i)(jω)|² for specific choices of constant, complex Δ_i. The central curve is for Δ = −1; the others are for different phases. Each curve touches the envelope at a certain frequency for which the selected phase is the worst case. Now, while it is possible to interpolate a finite set of phases with a causal, contractive perturbation (this would give a response with all these peaks and "valleys" between them), it is in general impossible to do so for all ω; hence the conservatism of the method of Problem 4.
7.6.6
A large-scale engineering example
As a final example with engineering content, we consider the application of these techniques to the Middeck Active Control Experiment (MACE) [285], which studies the control of lightly damped mechanical structures for space applications. (We thank Kyle Yang for providing the MACE models.) The disturbance rejection specifications make it a natural testbed for studies in robust H2-performance [204, 437]. A feature of these problems is that a large number of modes fall in the frequency
range of interest, leading to high-order models. For example, the simplest MACE model used for SISO control in [204, 437] has 24 states (which doubles to 48 once a controller is included) and 4 real uncertain parameters. This gives 1176 variables for the matrix P in Problem 1, which shows the high cost of state-space analysis, especially for LMI computation, which has growth rate n_dec^5. In addition, due to memory requirements this problem is beyond the reach (see [437]) of the standard LMI software [153] on workstation platforms. For this reason, [204, 437] have had to rely on nonconvex techniques based, for example, on the Riccati equation version (7.13). These have no convergence guarantees and actually have difficulty handling a more detailed, 59-state model required for MIMO (three-axis) control. In comparison, the frequency-domain bound of Problem 4 applied to the SISO model has 12 decision variables for A(ω), G(ω), and Y(ω) at each ω. Using an n_dec^5 growth rate for the LMI solver, the computational cost per frequency gridpoint is about 10^10 times cheaper than the state-space computation; even using a very fine frequency grid (e.g., 10,000 points), the overall cost is many orders of magnitude lower. Also, memory constraints are not an issue here. We have computed the frequency-domain bounds of Problem 4 for both MACE models on a SUN workstation, in reasonable computation time and with no convergence difficulties.
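The operation counts quoted above can be reproduced with a back-of-the-envelope calculation, using the n_dec^5 growth rule stated in the text:

```python
# Decision-variable counts for the SISO MACE model, per the figures in the text
n_states = 48                         # 24-state model plus a same-order controller
n_P = n_states * (n_states + 1) // 2  # free entries of the symmetric matrix P
print(n_P)                            # 1176 decision variables in Problem 1

n_freq = 12                           # variables for A(w), G(w), Y(w) per gridpoint
ratio = (n_P / n_freq) ** 5           # per-gridpoint advantage under n_dec^5 growth
print(f"{ratio:.1e}")                 # ~ 9.0e+09, i.e., about 10^10
```

Even with a 10,000-point grid, the frequency-domain cost stays roughly 10^6 times below a single state-space LMI solve under this growth model.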
7.7
Conclusion
We have reviewed different approaches for the evaluation of robust H2-performance under structured uncertainty, all of which are based on multipliers for the uncertainty and SDP computation. However, the approaches build on different interpretations of the H2 metric and do not give equivalent results. For arbitrary NLTV uncertainty (constant uncertainty multipliers), we have the following problems: (i) Problem 1, which bounds the impulse response norm and the stochastic white noise response under causal uncertainty; (ii) Problem 3, which bounds the L2 → L∞ induced norm; and (iii) Problem 5 (equivalent to Problem 7), which can be characterized as lossless under a worst-case treatment of white noise rejection. The three problems give different numerical values, which is not surprising given the different nature of the interpretations. The main open question here is whether a stochastic interpretation can be developed for Problem 3, more general than the very limited memoryless case that was discussed. For the LTI uncertainty case, all approaches attempt to compute the same quantity. In this case refinements are available only for (i) and (iii) in the above list (Problems 2 and 4, respectively) using frequency-dependent multipliers. We have seen that by using a multiplier of high enough order, Problem 2 will give a tighter bound; this advantage should be taken with a grain of salt, however, given the high cost of state-space LMI computation as the state dimension is increased. In comparison, the frequency-domain method is always tractable: it essentially comes for free given a μ analysis for robust stability and would appear to be a good first-cut analysis. A competing first-cut test is the constant multiplier bound of Problem 1 (or its extension to Popov multipliers), provided that the state dimension is not too large. Only in rare cases would an expensive dynamic multiplier computation in state space be justified.
In such a case, a reasonable combination of the approaches is to find the multipliers A(jω) from Problem 4 and, based on this information, select an appropriate basis function to use in Problem 2.
However, we have seen that both approaches are potentially conservative when dealing with multivariable noise signals, with a gap increasing with the noise dimension. This difficulty appears at present to be inherent in a convex analysis for this problem, but further research is required to settle this question, which also highlights the need for lower bounds to assess these gaps, analogous to those available [314, 441] for other performance measures. Some work in this direction for the state-space bounds is reported in [57]. We have concentrated on the question of robust H2 analysis and have not attempted to cover the (more difficult) problem of controller synthesis. We mention, however, that the approach of section 7.3 has led to a family of synthesis methods combining mixed H2/H∞ control with a search over multipliers (see, e.g., [384, 334] or Chapter 9 in this book). Also, synthesis methods based on a set-based approach for noise as in section 7.4 have been developed in [89]. A rare feature of H∞ theory has been the complete duality between two ways of thinking, one based on state space and linear-quadratic optimization, the other on operator theory and the frequency domain. After two decades of research, the robust H2 problem has found these two faces but has not achieved the same unity or a satisfactory treatment of multivariable white noise: LQG has not quite adapted to the world of robustness.
Part IV
Synthesis
Chapter 8
Robust H2 Control
Kyle Y. Yang, Steven R. Hall, and Eric Feron
8.1
Introduction
Among all performance indices known to control engineering, the H2-performance index holds a special place for historical and practical reasons. The historical reason is that minimizing the H2 norm of a linear control system via feedback, better known as the LQG problem, is among the first optimal control problems to have been solved analytically (for an extensive presentation and bibliography, see [11]). The practical reason is that this problem can be solved using reliable and fast computational procedures [25, 239, 409]. It is, however, well known that the performance of the LQG-optimal controller can be very sensitive to perturbations on the nominal system [101]. In view of this fact, devising analysis and synthesis tools that will, respectively, evaluate and minimize worst-case H2 norms of control systems is especially relevant. In this chapter, we consider the following problems: First, given a linear system perturbed by linear time-invariant (LTI) perturbations, what is its worst-case H2 norm? Second, given such a perturbed system, what LTI control system will minimize the closed-loop worst-case H2 norm? The first question is the subject of a number of works. Packard and Doyle [313] and Bernstein and Haddad [47, 48, 49] are among the first to consider the problem of robust H2-performance in the face of dynamic and parametric uncertainty. Stoorvogel [383, 384], Petersen and McFarlane [332], and Petersen, McFarlane, and Rotea [334] find bounds on the worst-case H2 norm of a system subject to norm-bounded, noncausal, possibly nonlinear and time-varying uncertainties. Peres and Geromel [328] and Peres, Geromel, and Souza [329] find upper bounds on the H2 norm of linear time-varying (LTV) and uncertain LTI systems based on quadratic Lyapunov functions.
The book and the papers by Boyd, El Ghaoui, Feron, and Balakrishnan [64, 136, 60] and Feron [133] show that the computation of all these bounds on H2-performance can be reduced to convex optimization problems involving linear matrix inequalities (LMIs), which can be solved via efficient convex optimization techniques. In [64, 133], attempts are made to refine the upper bounds on H2-performance when dealing with particular classes of perturbations, such as static nonlinearities and parametric uncertainties, using Lur'e-Lyapunov functions and causal multipliers. Other attempts at obtaining reliable upper bounds on robust H2-performance include the recent paper by Paganini, D'Andrea, and Doyle [321].
Yang, Hall, and Feron
This chapter presents a general account of the recent research on the use of multipliers to evaluate and design controllers that optimize the worst-case H2 norm of linear systems perturbed by LTI perturbations. Using multipliers is a well-known technique to determine the stability of uncertain systems (see [98, 419] and the references therein) and has proven to yield effective computational procedures [357, 355, 102]. A variety of other methods have been used in practice to try to make H2 controllers less sensitive to parameter variations in structural systems. For a survey of the most promising of these techniques, the reader is referred to [177]. However, these methods (such as the sensitivity-weighted LQG method or the multiple-model method) do not provide a priori guarantees that the system will be robust, nor can they provide a bound on the performance of the system. This chapter uses an iterative scheme, commonly referred to as a "D-K iteration" [446], to design robust controllers. A D-K iteration consists of alternating synthesis and analysis steps. In the analysis, or D, step, the controller is held fixed, and optimal weights (stability multipliers) are found to calculate the performance of the closed-loop system. In the synthesis, or K, step, the weights are held fixed and a controller is synthesized. One advantage of this design methodology is that at each step, the closed-loop cost should decrease monotonically. Unfortunately, the overall D-K iteration is not convex. It can converge to a local minimum and can even fail to converge. To be clear, note that we refer to a D-K iteration as a "design" (not a "synthesis") method. Several methods exist to synthesize robust, dynamic H2 controllers. Synthesis performed using coupled Riccati equations is presented in [48]. In contrast, a robust synthesis performed using an optimization subject to LMI constraints is discussed in [438]. Not all robust control design techniques rely upon a D-K iteration.
The great difficulty with trying to simultaneously optimize all the unknowns in a robust control design is that the cost function and constraints are bilinear. Some work has been conducted on the solution of bilinear matrix inequalities, but such routines typically require the solution of alternating LMIs [165]. Later in this chapter, we restrict our attention to designing H2 controllers for systems described by a particular multiplier, namely, the Popov multiplier. A design scheme for these controllers, the so-called H2/Popov controllers, is investigated by How [201]. How sets up an augmented cost function, composed of a bound on the H2 cost and an associated, nonlinear, matrix equality constraint on the performance. This augmented cost function is then minimized using a gradient search algorithm. This technique simultaneously solves for both the controller and the weighting functions (multipliers). In practice, this can be a problematic optimization, as it can be difficult to find an initial guess for the unknown parameters, and the multipliers are not motivated by any physical or heuristic rules. This chapter is organized as follows. First, the general robust H2 analysis problem is formulated and upper bounds on robust H2-performance are proposed via a multiplier formulation. Then, the problem specified in the first section is specialized to two classes of uncertainties, and a complete solution procedure relying on either LMIs or Riccati equations is proposed for both cases. In the third part of this chapter, the problem of synthesizing robust controllers that minimize a worst-case H2-performance index is considered and solved. In the fourth and last part of this chapter, numerical issues associated with the design of H2/Popov controllers are considered. Numerical results are provided, including those for a flexible space structure, the Middeck Active Control Experiment (MACE).
8.2 Notation
L2(R) denotes the Hilbert space of functions h mapping R into R^{n×p} that satisfy
It is equipped with the standard scalar product
and, for all h ∈ L2(R), the Euclidean norm ⟨h, h⟩^{1/2} of h is denoted ‖h‖₂. L2(R₊) denotes the subspace of L2(R) made of the functions h satisfying h(t) = 0 when t < 0. L2e denotes the space of functions h mapping R into R^{n×p} satisfying h(t) = 0 for t < 0 and
Following the usage of Francis [142], we suppress the dependence of these spaces on the integers n and p. For a given operator Δ and a function p mapping R into R^n, (Δp)(t) denotes the value taken by the image function Δp at time t. An operator Δ is causal if for any function p and any time t, (Δp)(t) depends only on the past values of p up to time t. It is anticausal if (Δp)(t) depends only on the future values of p from time t. In any other case, Δ is noncausal. Given a set of operators Δ1, …, ΔL, diag(Δ1, …, ΔL) stands for the operator that maps the function taking the value [u1(t)^T … uL(t)^T]^T at time t to the function taking the value [(Δ1 u1)(t)^T … (ΔL uL)(t)^T]^T at time t. Let G map L2(R) into L2(R) and be linear. The adjoint of G, denoted G*, is the unique linear operator satisfying ⟨Gu, v⟩ = ⟨u, G*v⟩ for all u, v ∈ L2(R).
Defining s as the usual Laplace variable, the transfer function of G is denoted by G(s), if it exists. Let us now introduce the notions of finite gain, positivity, and passivity that we will use throughout this chapter. Definition (see [343]). An operator F mapping L2(R) into L2(R) has finite gain (or, equivalently, is bounded) if there exists a positive k such that for any u ∈ L2, The smallest such k is called the gain of F and denoted Definition (see [98]). A linear operator G mapping L2(R) into L2(R) is said to be positive if for any u ∈ L2(R), G is said to be strictly positive if there exists k > 0 such that for any u ∈ L2(R), Definition (see [98]). A linear, causal operator G mapping L2e into L2e is said to be passive if for any u ∈ L2e and T > 0,
8.3
Yang, Hall, and Feron
Analysis problem statement and approach
Consider the system
where x : R → R^n, w : R → R^{n_w}, z : R → R^{n_z}, e : R → R^{n_e}, and d : R → R^{n_d}, and all quantities are equal to 0 for t < 0. Assume that the matrix A is stable. Δ is a perturbation that satisfies the following set of assumptions:
In addition, Δ is bounded in some form: For example, Δ may be bounded above and below by the identity matrix, or Δ may be only known to be a passive system. Δ may be known to be a static gain or be a dynamic operator or any combination of the above. Many of these formats and the equivalences among them are described in detail in [102, 98], for example. Much of the existing literature is devoted to studying the robust stability of system (8.1) against an uncertainty, Δ. The system is robustly stable if, for d = 0 and any initial condition x_0, the signals x, w, z, and e belong to L2(R+) for all allowable instances of Δ. In the analysis problem, we assume system (8.1) to be robustly stable, and we are interested in evaluating its worst-case H2-performance against the uncertainty Δ: Let
where
The H2 norm of system (8.1) is defined as
Equivalently, using Parseval's theorem, ||G_Δ||_2 may also be expressed as ||g_Δ||_2, where g_Δ is the impulse response matrix of G_Δ. In the subsequent developments of this chapter, it will also be convenient to express it as
where e is the output of system (8.1) with the following assumptions: The input d is identically 0 and the initial condition x_0 is equal to B_d v, where v is a random variable
satisfying Evv^T = I. (The expectation appearing in (8.4) is therefore to be taken with respect to v.) The robust H2 analysis problem is to compute the worst-case H2 norm of system (8.1) over all possible values of Δ that satisfy (8.2) and the additional boundedness conditions. This computation is, in general, a challenging problem. Thus, we propose to replace it by the computation of upper bounds on the robust H2 norm, using a technique similar to the classical technique of Lagrange multipliers: Consider any family, M, of operators, M, mapping L2(R) into L2(R). For an appropriate choice of M, the following lemma gives an upper bound on the worst-case H2 norm of system (8.1).

Lemma 8.1. The following inequality holds:
where w, z, and e are the inputs and outputs of the system
where all variables belong to L2(R+), and v is a random variable satisfying Evv^T = I. For this lemma to hold, it is sufficient that
for any admissible perturbation Δ and any z and w such that z = Δw. Note that the multiplier M can indeed be seen as a Lagrange multiplier. Such an approach is not unlike the one encountered in the papers by Yakubovich [433, 434], Fradkov and Yakubovich [141], and Megretski [269, 267, 270], where it is called the S-procedure. In the remainder of this chapter, we will show that a suitable choice of the family of multipliers M allows for the right-hand side of (8.5) to be easily computed using state-space computations. Note that, using Parseval's theorem, the computation of the upper bound in Lemma 8.1 may be done in the frequency domain as well. The essential difficulty with frequency-domain approaches, however, is their inability to easily discriminate signals belonging to L2(R+) from signals that belong to L2(R). The result is that a frequency-domain approach may lead to perturbations w that anticipate the impulse B_d v sent into the system, which is impossible in physically significant systems and may result in conservative estimates. For that reason, the analysis of robust H2-performance distinguishes itself from the analysis of robust stability or robust H∞-performance. A detailed discussion of this issue together with numerical comparisons is provided in Chapter 7 of this book, by Paganini and Feron, and in [322].
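For a nominal state-space realization (A, B_d, C_e) with stable A, the H2 norm of the preceding section can be computed from the observability Gramian, ||G||_2^2 = Tr(B_d^T Q B_d) with A^T Q + Q A + C_e^T C_e = 0; by Parseval this equals the impulse-response energy mentioned above. A sketch with illustrative matrices (the Lyapunov equation is solved by vectorization rather than a dedicated solver):

```python
import numpy as np

# H2 norm via the observability Gramian Q: A^T Q + Q A + C^T C = 0,
# ||G||_2^2 = Tr(B^T Q B).  The Lyapunov equation is solved by the
# Kronecker-product vectorization identity vec(MXN) = (N^T kron M) vec(X).

def lyap_obs(A, C):
    """Solve A^T Q + Q A + C^T C = 0 for the observability Gramian Q."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    q = np.linalg.solve(K, -(C.T @ C).reshape(-1, order="F"))
    return q.reshape((n, n), order="F")

def h2_norm(A, B, C):
    Q = lyap_obs(A, C)
    return float(np.sqrt(np.trace(B.T @ Q @ B)))

# Scalar check: the impulse response g(t) = e^{-t} has energy 1/2, so the
# H2 norm is 1/sqrt(2).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(h2_norm(A, B, C))  # about 0.7071
```

The vectorized solve costs O(n^6) and is only for illustration; a production routine would use a Bartels-Stewart-type Lyapunov solver.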
8.4 Upper bound computation via families of linear multipliers

8.4.1 Choosing the right multipliers
Depending on the perturbation class under consideration, the choice of multipliers M will vary. There is a significant body of literature devoted to determining the appropriate choice of multipliers for the families of uncertainties under consideration. An extensive presentation of the techniques to obtain large classes of multipliers is given in [273], for example. The following are examples of such multipliers. If Δ is a diagonal matrix of positive operators, then the largest known family of corresponding multipliers is written as
where M_{11} is any diagonal, positive, and self-adjoint operator. If, in addition, Δ is a static gain, then M_{11} need only be a diagonal and positive operator. Likewise, if Δ is a matrix of numbers that are only known to lie between −1 and 1, then the largest known useful family of multipliers is given by
where M_{11} is self-adjoint and M_{12} is a skew-symmetric operator. If Δ is any diagonal, LTI operator whose gain is less than 1, then the largest known family of multipliers is
where M_{11} is any positive, self-adjoint operator.
8.4.2 Computing upper bounds on robust H2-performance
In this section, we consider numerical schemes to solve the right-hand side of (8.5). Two specific uncertainty instances will be considered. First, Δ will be assumed to be an unknown but diagonal, passive, and LTI operator. Second, Δ will be assumed to be a diagonal matrix whose diagonal elements all have absolute value less than 1. In this second case, a special class of multipliers, named Popov multipliers, will be considered. It will be shown that they deserve special treatment.

LTI and positive uncertain operators

Following an idea arising in [61, 353, 64], we consider the family of finite-dimensional operators
(We refer the reader to [142] for a complete discussion of the representation of self-adjoint and general noncausal operators via transfer functions with unstable poles.) Thus, the set M is parameterized linearly by the real numbers m_{ij}, i = 1, ..., n_w, j = 0, ..., N. Each transfer function M_i(s) may alternatively be written as M_i(s) = C_{M_i}(sI − A_{M_i})^{−1} B_{M_i} + D_{M_i}, where
and
Likewise, the transfer function M(s) may also be written as M(s) = C_M(sI − A_M)^{−1} B_M + D_M, with
and
Positivity of M is ensured by the inequality M_i(jω) + M_i(jω)* > 0 for all ω ∈ R and can be expressed in a convenient form via a straightforward application of Theorems 3 and 4 of Willems [420], subsequently corrected in [421].

Lemma 8.2. The inequality
is satisfied if and only if there exists a symmetric matrix P_i satisfying
Note that this lemma requires controllability of the pair (A_{M_i}, B_{M_i}) to hold and that this assumption is indeed satisfied.
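As a cheap numerical proxy for the LMI test of Lemma 8.2, the frequency-domain condition M_i(jω) + M_i(jω)* > 0 can be checked approximately by gridding the frequency axis. The realization below is illustrative, not one of the chapter's basis multipliers:

```python
import numpy as np

# Grid-based check of strict positive realness: sample the smallest
# eigenvalue of M(jw) + M(jw)* over a logarithmic frequency grid.

def freq_resp(A, B, C, D, w):
    n = A.shape[0]
    return C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D

def is_strictly_positive_real(A, B, C, D, ws):
    for w in ws:
        M = freq_resp(A, B, C, D, w)
        if np.linalg.eigvalsh(M + M.conj().T).min() <= 0:
            return False
    return True

# M(s) = (s + 3)/(s + 1): Re M(jw) = (3 + w^2)/(1 + w^2) > 0 for all w.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[2.0]]);  D = np.array([[1.0]])
ws = np.logspace(-3, 3, 200)
print(is_strictly_positive_real(A, B, C, D, ws))  # True
```

Unlike the KYP-based LMI of the lemma, a grid check is only necessary evidence, not a certificate: the condition could fail between grid points.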
In order to calculate the upper bound on the worst-case H2 norm given in Lemma 8.1, we introduce the augmented system
where A_{MH}, B_{MH}, C_{MH}, D_{MH}, and C_{MHe} are given by
We now summarize the computation of the upper bound in the following theorem. Theorem 8.1. Consider system (8.6). The quantity
where e, w, and z satisfy (8.6), is computed as the minimum of Tr F over the variables F, P, P_1, ..., P_{n_w}, m_{ij}, i = 1, ..., n_w, j = 0, ..., N, satisfying the constraints
and
where

has been partitioned conformally with the dimensions of A, A_{cl}, and A_M (where A_{cl} and A_M are given by (8.10)). A detailed proof of this theorem may be found in [134].

Popov multipliers for real parametric uncertainties

Consider the case of system (8.1) with D_{zw} = 0. When real uncertainties are present and when the system is scaled such that these uncertainties lie between −1 and 1, a popular class of multipliers known as Popov multipliers can be employed. A Popov multiplier is described by
where H and N are diagonal and the coefficients of H are nonnegative. The Popov multiplier does not lend itself to exactly the same calculations as the other multipliers, because of the derivative term present in it. More precisely, given the impulsive input d = δ(t)v, we compute
Thus, Lemma 8.1 needs to be adapted and becomes Lemma 8.3.

Lemma 8.3. The following inequality holds:
where w, z, and e are the inputs and outputs of the system (8.6). Under these conditions, the right-hand side of (8.5) can be computed using state-space techniques given by the following theorem.

Theorem 8.2. The robust H2-performance of system (8.1) is bounded above by the minimum of max_Δ Tr B_d^T(P + C_z^T(Δ + I)N C_z)B_d over the variables P, H, and N that satisfy the constraints H > 0 and
If, in addition, the multiplier N is required to be positive, then the objective function simplifies to
and computing the upper bound on robust H2-performance reduces to optimizing a linear objective subject to LMI constraints [438]. If N is not required to be positive, then a more accurate upper bound as well as lower bounds on the worst-case H2-performance may be obtained by solving another set of LMIs, as described in detail in [58]. The LMI condition (8.18) may also be written as the equivalent Riccati equality constraint
It can be shown that the solution to (8.20) is the smallest (in the sense of the partial ordering of symmetric matrices) of all matrices satisfying (8.18).
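The remark about the smallest solution can already be seen in a scalar caricature (not the chapter's (8.18)/(8.20), whose matrices are not reproduced here): the stabilizing root of a Riccati equality is the left endpoint of the interval of solutions of the corresponding inequality.

```python
import numpy as np

# Illustrative scalar Riccati equality 2*a*p + q + s*p^2 = 0 with
# a = -2, q = s = 1: the roots are p = 2 -/+ sqrt(3).  Every p satisfying
# the inequality 2*a*p + q + s*p^2 <= 0 lies between the two roots, so the
# stabilizing root 2 - sqrt(3) is the smallest solution of the inequality.

a, q, s = -2.0, 1.0, 1.0
roots = np.sort(np.roots([s, 2 * a, q]))  # s*p^2 + 2*a*p + q = 0
p_min = roots[0]

def satisfies_inequality(p):
    return 2 * a * p + q + s * p * p <= 1e-12

print(p_min)          # 2 - sqrt(3), about 0.2679
print(a + s * p_min)  # closed-loop pole a + s*p, negative (stable)
print(all(satisfies_inequality(p) for p in np.linspace(p_min, roots[1], 50)))
```

The larger root also solves the equality but gives an unstable closed-loop pole, which is why only the smallest solution is of interest.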
Figure 8.1: The controller design problem.
8.5 Robust H2 synthesis
We turn now to the question of robust H2 controller synthesis. For a fixed set of multipliers, we wish to design a full-order LTI controller that minimizes the cost function. Because the form of the resulting synthesis equations (and inequalities) is highly dependent on our choice of multipliers, we choose to specify a priori that we will use the Popov multipliers discussed previously. Popov multipliers are convenient because they do not increase the number of states in the controller beyond those of the plant. Furthermore, H2/Popov controllers have been used successfully in structural control problems [204], and we can compare the utility of our synthesis routines with those used in previous designs.
8.5.1 Synthesis problem statement

Consider the system in Figure 8.1. The LTI plant, G, in the system is given by
with all signals as defined for (8.1), and u : R → R^{n_u}. Note that A is no longer assumed to be stable. The realization is assumed to be minimal. Also, we assume that (A, B_u) is stabilizable and that (A, C_y) is detectable. The exogenous disturbance, d, is assumed to be stationary and ergodic. As in the analysis problem, the goal is to minimize E||e||_2^2. However, for simplicity, in this case we assume that the control cost is decoupled from the state cost such that C_e^T D_{eu} = 0. The uncertainty block, Δ, is a diagonal matrix whose diagonal entries are n_w static, independent, real, uncertain parameters. For convenience, we assume that all the entries of Δ have magnitude less than unity. Note that the dynamics matrix of a perturbed system is not the nominal A, but A + ΔA, where ΔA = B_w Δ C_z and −I ≤ Δ ≤ I. Our goal is to find the LTI controller, K, given by
with x_c : R → R^n, that minimizes the H2 norm of the system, as measured at the output error signal, e. For the nominal system, the controller that minimizes the H2 norm is given by the celebrated LQG controller. However, it is also known that the problem of determining whether a system that is subject to real parametric uncertainties is robustly
stable is NP-hard [71, 442]. Therefore, finding the extremal values of the H2 norm is not generally possible at low computational cost. For a closed-loop system with a known controller, the performance can be analyzed using (8.19) if the given matrices are replaced by their closed-loop counterparts. That is, A ← Ā, where

C_e ← C̄_e, and so on. The analysis Riccati equation (8.20) is then of size 2n rather than n. The closed-loop cost function that bounds the system H2 norm is therefore given by
8.5.2 Synthesis via LMIs
Robust H2 synthesis appears complicated because, ostensibly, we must minimize a nonlinear cost function subject to nonlinear constraints. The closed-loop cost equation (8.27) cannot be solved using standard Riccati equation solvers because it contains unknown controller parameters. Even if the constraint is written as a matrix inequality, we find that it is bilinear in the unknowns. Similarly, writing out (8.27) reveals that the cost is quadratic in the unknown controller input matrix, B_c. Fortunately, we do not have to solve a nonlinear optimization. The following theorem, from [251], shows that the cost function and constraint can be linearized, making the problem convex. The proof of this theorem relies heavily upon the elimination lemma of [315].

Theorem 8.3. For the plant, G, with controller, K, and fixed stability multipliers, H and N, define a matrix B_c ∈ R^{n×n_y} that is full column rank. Furthermore, define

Then an H2/Popov controller exists if and only if a positive definite, symmetric matrix P ∈ R^{2n×2n} exists and minimizes
while satisfying
and
for some symmetric matrix Q_u ∈ R^{n×n}, where Â = A − B_w C_z and E_u = D_{eu}^T D_{eu}. Furthermore, the closed-loop cost is equal to the constrained minimum of (8.29). Given a solution to the optimization problem for P and Q_u, the resulting controller can be found via an algebraic solution of the closed-loop LMI robustness constraint. Interestingly, for a case without an uncertainty block, this theorem yields the solution for the nonrobust LQG controller. Furthermore, in the case where the N scaling function is set to zero, this formulation produces a closed-form realization for the controller in terms of the Lyapunov matrix [438].
8.5.3 Synthesis via a separation principle
We now discuss the separation structure of the H2/Popov controller. It is well known that for a given, fixed LTI system, any output feedback controller that stabilizes the system could be built by combining the solutions of two separate problems—a state feedback problem and an output estimation problem (see, for instance, [446]). This can be discerned via a Youla parameterization of the controller. However, this separation structure is not always exploited for controller synthesis; often it exists but is not considered useful. Two obvious examples where the separation principle plays an explicit and significant role in controller synthesis are the H2 (LQG) controller and the H∞ controller [107]. The separation principle requires us to solve two subsidiary problems—a robust full-information (FI) problem and a robust output estimation (OE) problem. In the FI problem, we desire to find the static feedback gain that minimizes the cost for the uncertain system. This can be found by solving a single Riccati equation. In the OE problem, we design a dynamic filter to track a system input with minimum error on average, leading to a solution via a pair of coupled Riccati equations. The final H2/Popov control signal is equal to the optimal state estimate from the OE problem, multiplied by the gain from the FI problem. This is summarized in the following theorem from [437, 436].

Theorem 8.4. Consider the plant, G, with controller, K. Assume that we are given fixed stability multipliers, H and N. Define the following constant matrices
where E_u = D_{eu}^T D_{eu}. Furthermore, assume that both Z and Σ are positive definite. If a symmetric matrix R ∈ R^{n×n} exists such that it is the unique, positive semidefinite, stabilizing solution to the related full-state feedback Riccati equation
where Â = A − B_w C_z, and if positive semidefinite, symmetric matrices X ∈ R^{n×n} and Y ∈ R^{n×n} exist that are the stabilizing solutions to the related output estimation coupled
Riccati equations
and
where E_d = D_{yd} D_{yd}^T, and
then a controller given by
will stabilize the system for all allowable Δ. Furthermore, for all allowable Δ, the closed-loop performance of the system, J, will be bounded by
These results are effectively a stochastic version of principles found in [107] and [110]. The coupled Riccati equations are also related to those of [48]. In [436], the cost function, J, is shown to overbound the H2 cost of the system. It is believed to be the case, though it has not been proven, that the controller resulting from the separation-based solution is identical to a controller derived from the LMI-based solution of Theorem 8.3.
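In the nominal case (no uncertainty block, so the multipliers play no role) the separation structure reduces to LQG: a feedback gain from one Riccati equation and an estimator gain from another, combined into an output-feedback controller. A hedged sketch with illustrative matrices follows; the Riccati solver uses the standard Hamiltonian eigenvector construction, not the chapter's coupled equations:

```python
import numpy as np

def care(A, B, Q, R):
    """Stabilizing solution of A^T X + X A - X B R^{-1} B^T X + Q = 0
    via the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    vals, vecs = np.linalg.eig(H)
    stable = vecs[:, vals.real < 0]          # n stable eigenvectors
    return (stable[n:] @ np.linalg.inv(stable[:n])).real

A = np.array([[0.0, 1.0], [-1.0, 0.5]])      # unstable open loop
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2); R = np.eye(1)                 # state / control weights
W = np.eye(2); V = np.eye(1)                 # process / sensor noise weights

K = np.linalg.inv(R) @ B.T @ care(A, B, Q, R)       # state-feedback (FI) gain
L = care(A.T, C.T, W, V) @ C.T @ np.linalg.inv(V)   # estimator (OE) gain
Ac = A - B @ K - L @ C                               # controller dynamics
closed = np.block([[A, -B @ K], [L @ C, Ac]])
print(np.linalg.eigvals(closed).real.max() < 0)      # closed loop stable
```

By the separation principle, the closed-loop spectrum is the union of the spectra of A − BK and A − LC, which is why the combined 4-state system is stable whenever both Riccati solutions are stabilizing.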
8.6 Implementation of a practical design scheme
In this section, we will compare three different methods used to design H2/Popov controllers. They are the following: (1) a D-K iteration in which both the analysis and synthesis steps are solved as LMI problems (using Theorems 8.3 and 8.2), (2) the augmented cost function approach of How [204], and (3) a novel D-K iteration that solves the synthesis problem using the separation principle Riccati equations and solves the analysis problem with a gradient optimization. The steps in the final method will next be described in detail, and then two design problems will be presented.
8.6.1 Analysis step
The H2/Popov analysis problem requires that, for a fixed controller, we find the (2n)^2 parameters of the Lyapunov matrix, P, and the 2n_w parameters in the scalings, H and N, that minimize the closed-loop cost function (8.27). Since the problem is convex, there are no local minima. In fact, because it is convex, there are quite a few different algorithms that could be chosen to solve the problem. Even algorithms that are not specifically designed for convex problems, such as a gradient search routine, should converge to the unique minimum. Regardless of the solution technique, however, we need to make the routine as efficient as possible, both in terms of speed as well as in terms of memory requirements, so that it can be used for problems with greater than, say, 30 states. We can reduce the size of the analysis problem by explicitly minimizing over just the stability multipliers, rather than both the multipliers and the Lyapunov matrix. Consider the closed-loop cost, J, given in (8.27). The gradient of J with respect to H alone is a size-n_w vector, ∂J/∂H = [... ∂J/∂h_i ...]^T. Each of the scalar derivatives, ∂J/∂h_i, is equal to Tr(B_d B_d^T ∂P/∂h_i), where h_i is the ith diagonal element of the H matrix. However, we wish to consider only gradient directions that keep the optimization trajectory in the space specified by the closed-loop form of the Riccati constraint (8.20). By examining the first-order variations of the Riccati equation with respect to P and h_i, we find that the change in P due to a change in H can be calculated using a Lyapunov equation. After defining
where an overbar indicates that a matrix is closed loop, we can appeal to duality to show that

where Q_u is the solution to the Lyapunov equation

and Δ_{H_i} is a matrix of zeros except for the ith diagonal element, which is equal to unity. Therefore, to calculate the constrained gradient of P with respect to H, we need only evaluate a single Lyapunov equation, (8.50), and then calculate n_w scalar values using (8.49). The constrained gradient with respect to N is calculated similarly. The analysis routine is implemented using the sequential quadratic programming (SQP) algorithm found in Matlab [174] with a slightly modified line search routine. SQP routines do not rely upon convexity. Constraints are imposed that h_i > 0 and n_j > 0 for all i and j.
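The duality trick can be checked on a simplified constraint in which P solves a Lyapunov rather than the chapter's Riccati equation (8.20), which is not reproduced here: one dual Lyapunov solve yields every component of the gradient, which we verify against finite differences.

```python
import numpy as np

# Simplified stand-in for the constrained-gradient computation: P solves
# A^T P + P A + H = 0 with H = diag(h), and J(h) = Tr(B_d^T P B_d).  By
# duality, solving a single dual equation A Q + Q A^T + B_d B_d^T = 0
# gives dJ/dh_i = Tr(Q * Delta_Hi) = Q_ii for every i at once.

def lyap(A, W):
    """Solve A X + X A^T + W = 0 by vectorization."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -W.reshape(-1, order="F")).reshape((n, n), order="F")

A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # illustrative stable closed loop
Bd = np.array([[1.0], [1.0]])

def cost(h):
    P = lyap(A.T, np.diag(h))              # A^T P + P A + diag(h) = 0
    return float(np.trace(Bd.T @ P @ Bd))

Q = lyap(A, Bd @ Bd.T)                     # single dual Lyapunov solve
grad = np.diag(Q)                          # dJ/dh_i = Q_ii

h0, eps = np.array([1.0, 2.0]), 1e-6
fd = np.array([(cost(h0 + eps * e) - cost(h0 - eps * e)) / (2 * eps)
               for e in np.eye(2)])
print(np.allclose(grad, fd, atol=1e-5))    # True
```

In this simplified setting the cost is linear in h, so the duality-based gradient is exact; in the chapter's Riccati-constrained problem the same pattern (one dual solve, then n_w traces) yields the constrained first-order derivatives.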
8.6.2 Synthesis step

Given fixed scalings, H and N, we wish to solve equations (8.36)-(8.38) for R, X, and Y to synthesize an H2/Popov controller. In principle, solving these three size-n, nonlinear equations poses no serious memory problems for practical systems. The important question is whether or not these equations can be solved quickly. One method that is used to solve this type of coupled equations is a homotopy algorithm [83]. The problem with such methods is that in order to calculate the derivative
of the solution versus the homotopy parameters, they require the solution of two matrix equations of size n^2 × n^2 at each step. Thus, this method is not attractive for large systems. On the other hand, a straightforward method to try to solve the coupled equations is to iterate between solutions of standard Riccati equations; i.e., first solve (8.36) for R, then fix X = X_k and solve (8.38) for Y_{k+1}, then fix Y = Y_{k+1} and solve (8.37) for X_{k+2} using standard Riccati solvers. This sequence is repeated until the solutions appear to converge. We refer to this procedure as the "standard" iteration. The main problem with the standard iteration is that (8.37) does not necessarily have a stabilizing solution. Often, the procedure can iterate for some time, but eventually Y_k enters a region where there is no solution for X_{k+1}. This is because the sign of the quadratic X term is positive; it is akin to the form of a typical H∞ Riccati equation. In contrast, (8.38) has a negative quadratic term, so, given detectability assumptions, it always has a stabilizing solution for Y. Reference [438] proposes two alternative iterative methods that eliminate the problem of not finding a solution for X. The first of the iterative methods, called the control gain (CG) iteration, solves the following equation in lieu of solving (8.37):
where (···) indicates a repeat of the previous term in parentheses. It can be seen that, if the iteration is at a stationary point (X_k = X_{k+1}), then this equation is the same as (8.37). The primary difference is that the quadratic term is now negative, so that a stabilizing X_{k+1} always exists. The CG iteration is akin to a perturbation method because it always has a solution at the initial step. A valid initial condition is X_{k=0} = X = 0. The second iterative scheme described in [438] is the CG with relaxation (CGR) iteration. In this scheme, (8.38) is effectively solved twice for every new solution of (8.51). Unfortunately, because these are matrix rather than scalar equations, there is no known way to determine whether these iterative techniques will converge from any given starting point. However, [438] contains a comparison of the CG, CGR, and standard iterations, together with a synthesis algorithm based on the LMI method of Theorem 8.3. It is shown on an 8-state benchmark example that the CG and CGR iterations could synthesize controllers for a much wider range of uncertainties than could the standard iteration. The LMI-based synthesis algorithm synthesized controllers for all desired levels of uncertainty. However, in the ranges where the CG and CGR iterations converged, they were significantly faster than the LMI-based algorithm.
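The failure mode of the standard iteration can be seen in a scalar cartoon; the chapter's matrix equations (8.36)-(8.38) are not reproduced here, so the two equations below are purely illustrative. The Y-step, with its negative quadratic term, always has a root, while the X-step, with a positive quadratic term, can lose its real solution:

```python
import numpy as np

a = -2.0                   # stable scalar dynamics
w, q, c = 1.0, 1.0, 0.5    # illustrative weights and coupling

def y_step(x):
    # 2*a*y + w + c*x - y**2 = 0: negative quadratic term, root always
    # exists; take the root with a - y < 0.
    return a + np.sqrt(a * a + w + c * x)

def x_step(y):
    # 2*a*x + q + c*y + x**2 = 0: positive quadratic term (H-infinity-like);
    # a real solution exists only while the discriminant stays nonnegative.
    disc = a * a - q - c * y
    if disc < 0:
        raise RuntimeError("no real solution for X: standard iteration fails")
    return -a - np.sqrt(disc)

# Alternate between the two equations until the pair converges.
x = 0.0
for _ in range(50):
    y = y_step(x)
    x = x_step(y)
print(round(x, 6), round(y, 6))
# both residuals should be ~0 at the fixed point
print(abs(2*a*y + w + c*x - y*y) < 1e-9 and abs(2*a*x + q + c*y + x*x) < 1e-9)
```

With these mild coupling values the alternation contracts and converges; calling `x_step` with a large enough y (e.g. y = 10) makes the discriminant negative, reproducing the breakdown the text describes.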
8.6.3 Design examples
We now present H2/Popov controller designs that were performed using the three different design methodologies. In the cases using a D-K iteration, algorithms were initialized using a synthesis step with H = I and N = 0. The routines iterated between analysis and synthesis steps until the cost function (evaluated after every synthesis step) converged. All computations were performed in Matlab on a Sun Sparc-20 workstation. For more details, the reader is referred to [437, 436].
Figure 8.2: The MACE system.
Four mass system

This problem consists of 4 masses connected by 3 springs, corresponding to 8 states. Using a noncollocated force input, the goal is to control the position of one of the masses, which is measured. The stiffness of two of the springs is uncertain (see [438] for details). The system is small enough that a D-K iteration that utilizes LMIs for both the analysis and synthesis steps can be run [438]. Equivalent designs for H2/Popov controllers for this system also appear in [202]. The proposed algorithm converged smoothly and quickly to a design for this system. The CG iteration was used for all synthesis steps. Overall, the design required 3.5 minutes to converge (≈22 sec/iteration). In contrast, the LMI-based design required 129 seconds per iteration [438]. Furthermore, as expected, the controller correctly matched the designs found in [438] and [202].
MACE system

The MACE development model is discussed in more detail in [204]. The structure, pictured in Figure 8.2, consists of four Lexan tubes, connected by aluminum nodes. It is suspended by wires from a sophisticated air-spring suspension system. In the center of the structure are three torque wheels, and at one end is an actuated payload—the pointing/scanning payload. The goal of the controller is to keep the pointing/scanning payload as still as possible, on average. For this configuration of the MACE system, broadband disturbances are injected into the system through the actuators. These actuators also serve to control the structure. Furthermore, the performance and sensed signals for the structure are the same. We design controllers for the same two models discussed in [204]: a 24-state SISO model and a 59-state 3-input/3-output model. The SISO model has 4 uncertain frequencies in its modes, and the MIMO model has 11 uncertain frequencies.
Table 8.1: SISO MACE problem: Solution time (min:sec). Data do not include the iteration run as a check on convergence or the initial controller synthesis.

Total solution time: 12:00
# of synthesis steps: 4
Avg. synthesis time: 0:44
# of analysis steps: 3
Avg. analysis time: 3:16
Time per full iteration: 4:00
Figure 8.3: Cost function of the SISO MACE problem vs. time. Cost after a synthesis step denoted by o; cost after an analysis step denoted by *. Open-loop cost is 17.1. Plot includes extra iteration.

These models were found to be too large to be run using the LMI-based D-K iteration used on the previous example. The results, however, can be compared with the H2/Popov controllers designed using the augmented cost function optimization of How [204]. The new D-K iteration performed very well for the SISO MACE system. All synthesis steps were performed using a CG iteration. The overall D-K iteration converged in only three iterations (4 synthesis steps and 3 analysis steps). However, to double-check that the solution had actually converged to the correct solution, the iteration was forced to restart from the solution point with tighter tolerances. As shown in Table 8.1, the total solution time for the problem was only 12 minutes. The cost function is plotted as a function of time in Figure 8.3. The plot graphically demonstrates that the analysis steps required more time to solve than the synthesis steps. This design can be compared with the SISO controller design of [204]. It is estimated that, after accounting for differences in computational speed, the new routine is at least three times faster than the older routine. With a nominal plant, the new controller guarantees a cost no higher than 0.103 times the open-loop cost, while the older design only guarantees 0.11 times the open-loop cost. To check a measure of the conservatism of our control designs, we examine cases when the stiffness changes in the four modes are exactly correlated; i.e., they are either all 1%
Figure 8.4: SISO MACE: H2/Popov robustness guaranteed to ±0.02 (dotted line). Cost normalized by open-loop cost. LQG performance: solid line. H2/Popov performance: dashed line. Optimal LQG cost = 0.0922. Actual nominal H2/Popov controller cost = 0.0962.

high together, or 2% low together, etc. The exact performance of perturbed systems with controllers from the D-K iteration is plotted in Figure 8.4. Notice that the range of stiffnesses for which the LQG controller can stabilize the system is very narrow. As expected, the H2/Popov controller achieves a significantly wider range of robustness. The controller was designed to be robust to ±2% variations in stiffness. In fact, the H2/Popov controller maintains a level of near-optimal performance over more than twice the required range. Furthermore, based on the nominal performance, it is clear that the controller has not sacrificed performance significantly for the sake of robustness. A similar plot was shown for the design in [204]. For perturbations in the ±2% range, the controller from [204] achieves inferior performance compared with the new controller. Furthermore, the controller from [204] maintains a flat level of performance for perturbations ranging in size from 0.1 down to below −0.25; it is more conservative than the new controller. Because the new SISO controller is relatively nonconservative, it is possible to design controllers with even greater robustness margins. By starting from the new controller's multipliers, the solution can be expanded to handle 110% and then 125% of the original uncertainty size. Also, [436] shows that the Bode plot of the new SISO controller is virtually indistinguishable from that of the LQG controller. This indicates that subtle changes in a controller can have significant effects on its robustness characteristics. For the MIMO MACE design, it was found that the initial conditions of H = I, N = 0 could not start the initial synthesis routine. However, the initial condition did
Table 8.2: MIMO MACE problem: Solution time (hr:min).

Percentage of required uncertainty | Total solution time | # of synthesis steps | Avg. synthesis time | # of analysis steps | Avg. analysis time
65      | 5:11  | 7  | 0:07 | 6  | 0:44
82.5    | 2:35  | 4  | 0:07 | 3  | 0:44
100     | 4:11  | 6  | 0:07 | 5  | 0:43
Overall | 11:57 | 17 | 0:07 | 14 | 0:44
start a design at 65% of the required uncertainty level, using the CG iteration. Therefore, a continuation method was used to extend this solution. The 65% solution was used to start a new design at 82.5%. A final design was then run at the required level. The final two designs were performed using the CGR iteration for the synthesis step. The time required for each of the designs to be completed is shown in Table 8.2. If the MIMO system could have been solved in one step, as the SISO problem was, then the design time would essentially be cut by two-thirds, down to perhaps 4 hours. Four hours is 20 times longer than was required for the SISO system. However, if design time for a system with n states scales with n^3, then the MIMO case should take just 15 times longer than the SISO problem. In contrast, if the design time scales with n^4, then the MIMO case should take 37 times longer than the SISO problem. Therefore, for this system, bringing a single D-K iteration to convergence is a problem that seems to scale at a rate better than n^4. Due to a lack of data, detailed comparisons between the new MIMO controller and the MIMO controller of [204] are difficult to make. However, it is noted that the new design should theoretically achieve a 13.45 dB improvement in the cost over the nominal open-loop cost. As detailed in [204], the H2/Popov controller designed using the augmented cost function was implemented on the MACE system in the laboratory. It was experimentally found to achieve a 12.4 dB improvement over the open-loop system.
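The scaling argument is a quick arithmetic check: with 24 and 59 states, an n^3 model predicts roughly a 15-fold increase in design time and an n^4 model roughly a 37-fold increase, bracketing the observed factor of 20.

```python
# Scaling check for the 59-state MIMO design vs. the 24-state SISO design.
n_siso, n_mimo = 24, 59
ratio3 = (n_mimo / n_siso) ** 3   # n^3 cost model
ratio4 = (n_mimo / n_siso) ** 4   # n^4 cost model
print(round(ratio3, 1), round(ratio4, 1))  # about 14.9 and 36.5
observed = 20  # 4 hours vs. 12 minutes, as estimated in the text
print(ratio3 < observed < ratio4)          # True
```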
8.7 Summary
For an uncertain system, a multiplier analysis allows one to derive H2-performance bounds that the system is guaranteed not to exceed, even for the worst-case system perturbation (Δ). Although such results are necessarily conservative, there is evidence that, for practical systems, the conservatism is not overly restrictive. In particular, because the multiplier framework operates in the time domain, it is able to restrict the causality of disturbances acting upon the system, thereby reducing conservatism. The multiplier framework has been derived in the context of a generic optimization problem, rather than one specifically derived from a control analysis framework. Of course, the choice of the type of multipliers does depend heavily upon knowledge of the system and its uncertainties. The multiplier framework was shown to lead directly to an optimization of a cost function subject to LMI constraints. These constraints could also be written as Riccati equations. The analysis problem was solved using two different methods. LMI optimization routines were found to be limited to the analysis of small systems because of memory limitations. Therefore, a constrained gradient optimization routine was developed in which gradients were evaluated by solving Lyapunov equations.
174
Yang, Hall, and Feron
The problem of synthesizing a robust controller for a fixed set of multipliers was also studied. It was shown that the inherently bilinear problem could be reduced to an optimization subject to LMI constraints or to the solution of a set of coupled Riccati equations. The coupled Riccati equation solution can be derived by separating the problem into a robust full information control problem and a robust output estimation problem. Iterative methods to solve the coupled Riccati equations were investigated. Assuming that the iterative routine converges, the speed advantage of the iterative routine is expected to be even more significant as system size increases. A D-K iteration design algorithm was tested on two systems, including a large structural control problem. The iteration that employed coupled Riccati equation solvers for synthesis and a gradient optimization for analysis proved efficient and accurate. In a development setting, it is expected that the reduced computation time experienced with this routine would allow a control designer to implement and test a larger number of compensators, thereby improving the finished product.
Chapter 9
A Linear Matrix Inequality Approach to the Design of Robust H2 Filters

Carlos E. de Souza and Alexandre Trofino

9.1
Introduction

One of the fundamental problems in control systems is the estimation of the state variables of a dynamic system using available noisy measurements. Over the past three decades considerable interest has been devoted to estimation methods that are based on the minimization of the variance of the estimation error, i.e., the celebrated Kalman filtering approach [10]. These methods rely on the knowledge of a perfect dynamic model for the signal generating system in order to provide an optimal performance. In many cases, however, only an approximate model of the system is available. In such situations, it has been known [79], [91] that the standard Kalman filtering method fails to provide a guaranteed error variance. In order to cope with this problem, over the past few years interest has been devoted to the design of linear estimators that achieve a guaranteed upper bound on the error variance for any allowed modeling uncertainty; see, e.g., [182], [218], [331], [333], [369], [390], [429], [430], [431]. These filters are referred to as robust H2 filters and can be regarded as extensions of the standard Kalman filter to the case of uncertain linear systems. The design of robust H2 filters on a finite horizon for linear systems with an ellipsoidal-type parameter uncertainty in the state and input noise matrices has been studied in [218]. Robust H2 filters for linear systems with norm-bounded parameter uncertainty that provide an optimal bound for the error variance, in the stationary regime, have been developed in [182], [333], and [429]. The filter design in [333] allows for uncertainty in the state matrix only and can be viewed as a particular case of [182], which treated the design of robust, reduced-order filters in the presence of parameter uncertainty in both the state and the output matrices. On the other hand, [429] considered the general case of linear systems with norm-bounded parameter uncertainty in all the matrices of the system state-space model, including the coefficient matrices of the noise signals. An alternative technique for designing robust H2 filters for systems with norm-bounded uncertainty has been developed in [431]. However, the proposed filter is suboptimal
in the sense of the error variance upper bound. Recently, a general treatment of the robust minimum variance estimation problem for linear systems subject to norm-bounded parameter uncertainty in the state and output matrices has been presented in [369] for time-varying systems on a finite horizon as well as for the stationary infinite-horizon case. In this chapter we consider the design of robust H2 filters for linear continuous-time systems with parameter uncertainty in all the matrices of the system state-space model, including the coefficient matrices of the noise signals. The parameter uncertainty belongs to a given convex-bounded polyhedral domain and is allowed to be structured. The process noise and the measurement noise are assumed to be white signals with known statistics. The problem we address is the design of a linear stationary, asymptotically stable filter with an optimized upper bound for the estimation error variance, irrespective of the uncertainty. Both the design of full-order and of reduced-order robust filters are analyzed. Linear matrix inequality (LMI)-based methodologies for designing such robust filters will be developed. The proposed design methodologies have the advantage that they can easily handle additional design constraints that keep the problem convex, for instance, when the filter is required to satisfy certain structure constraints. This is in contrast with the existing robust filter designs of [333], [369], [429], and [431], which are aimed at linear systems with norm-bounded parameter uncertainty and are given in terms of Riccati equations. We would like to stress that an alternative LMI-based solution to the robust filtering problem addressed in this chapter has also been independently proposed in [156] and [157], which treated the continuous- and discrete-time cases, respectively. The chapter is organized as follows. A formal statement of the robust filtering problem is given in section 9.2.
In section 9.3, an LMI-based methodology for designing full-order robust filters is developed. First, in order to gain insight for designing robust filters, the case of linear systems with a known state-space model is treated. Connections between the proposed LMI design and the celebrated Kalman filter will also be analyzed. This will be followed by the robust filter design. Section 9.4 parallels section 9.3 and deals with reduced-order robust filters.
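The convex-bounded polyhedral (polytopic) uncertainty description used throughout the chapter has a convenient computational consequence: a matrix inequality that is affine in the system data and holds at every vertex automatically holds over the whole polytope. A small numerical illustration of this vertex argument (all matrices hypothetical, assuming numpy is available):

```python
# Vertex certificates for a polytopic family: if A^T P + P A < 0 holds at the
# vertices A1, A2 for a common P, it holds for every convex combination,
# since the left-hand side is affine in A.
import numpy as np

A1 = np.array([[-2.0, 0.0], [0.0, -2.0]])   # hypothetical vertex matrices
A2 = np.array([[-1.0, 1.0], [0.0, -1.0]])
P = np.eye(2)                               # candidate common Lyapunov matrix

def lyap_max_eig(A):
    return np.max(np.linalg.eigvalsh(A.T @ P + P @ A))

assert lyap_max_eig(A1) < 0 and lyap_max_eig(A2) < 0   # vertex certificates

lam = 0.37                                  # arbitrary convex weight
A_mid = lam * A1 + (1 - lam) * A2
worst = lyap_max_eig(A_mid)                 # negative by convexity
```

This is exactly why the robust designs below only need finitely many LMI constraints, one per vertex, despite the uncertainty set being a continuum.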
9.2
The filtering problem
We consider the following linear time-invariant (LTI) system (Σ):
where x(t) ∈ R^n is the state, x0 is a zero-mean random vector, w(t) ∈ R^{nw} is a zero-mean white noise signal with identity power spectral density matrix that is uncorrelated with x0 for all t > 0, y(t) ∈ R^{ny} is the measurement, z(t) ∈ R^{nz} is a linear combination of the state variables to be estimated, and A, B, C, D, and L are constant real matrices of appropriate dimensions, where L is known and A, B, C, and D are unknown matrices such that the system matrix
belongs to a given polytope D described by
i.e., any admissible system matrix S can be written as an unknown convex combination of ns vertices S_i, i = 1, ..., ns, given by
where A_i, B_i, C_i, and D_i, i = 1, ..., ns, are given matrices. Clearly, ns = 1 corresponds to the case where the system (Σ) is perfectly known. We observe that the case where the input and measurement noise signals are different, say, v1(t) and v2(t), respectively, is a particular case of (9.1)-(9.3), where w(t) = [v1^T(t) v2^T(t)]^T and the matrices B and D are replaced by [B 0] and [0 D], respectively. In this chapter we shall address the problem of designing a robust linear filter for estimating z with a guaranteed performance in the mean square error sense, irrespective of the uncertainty. More specifically, we look for a linear estimate ẑ = F(y) of the signal z, where F is a causal linear, asymptotically stable operator to be determined in order to minimize a suitable upper bound for the worst-case asymptotic error variance, namely,
where Φ is the set of the admissible filters. It is assumed that Φ is the set of all LTI, asymptotically stable operators with state-space realization of the form
where the matrices Af ∈ R^{nf×nf}, Kf ∈ R^{nf×ny}, and Lf ∈ R^{nz×nf} are to be determined and nf is a given positive integer. In the case where nf = n, the filter will be referred to as a full-order robust filter, and as a reduced-order robust filter when nf < n. Observe that in terms of the state variables of (Σ) and (9.9), the estimation error e := z − ẑ can be described by the following state-space model:
where
and

9.3
Full-order robust filter design
In this section we shall deal with the design of full-order robust filters. In order to pave the way for deriving the robust filter, we initially consider the case where the system matrix S is perfectly known, i.e., ns = 1. Considering the error system (9.10), it follows from well-known results that, subject to the asymptotic stability of this system, the variance of the estimation error as t → ∞ is given by
where X is the solution to the Lyapunov equation
In light of the above, consider the following optimization problem:
Note that if such a solution P exists, then the error system of (9.10) is asymptotically stable and P > X. Moreover, the objective function above is strictly larger than the asymptotic error variance of (9.12). However, since the optimal solution to problem (9.14) is arbitrarily close to the solution to the Lyapunov equation (9.13), the gap between the objective function of (9.14) and the asymptotic error variance can be made arbitrarily small. In view of the above, we obtain using Schur complements that the filtering problem of (9.7)-(9.8) can be reformulated as follows:
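The quantities in (9.12) and (9.13) are straightforward to compute numerically. A hedged sketch (all numbers illustrative): one natural realization of the error system stacks the plant state x and the filter state of (9.9), and the asymptotic variance is then the trace of a Lyapunov-equation solution.

```python
# Error-system realization and asymptotic error variance (sketch).
# With filter (Af, Kf, Lf) from (9.9) and known (A, B, C, D, L), stacking
# the plant and filter states gives one realization of the error dynamics:
#   Aa = [[A, 0], [Kf C, Af]],  Ba = [[B], [Kf D]],  Ca = [L, -Lf],
# and the asymptotic variance is Tr(Ca X Ca^T) with
#   Aa X + X Aa^T + Ba Ba^T = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A, B, C, D, L = (np.array([[-1.0]]), np.array([[1.0]]),
                 np.array([[1.0]]), np.array([[0.5]]), np.array([[1.0]]))
Af, Kf, Lf = np.array([[-2.0]]), np.array([[0.8]]), np.array([[1.0]])

Aa = np.block([[A, np.zeros((1, 1))], [Kf @ C, Af]])
Ba = np.vstack([B, Kf @ D])
Ca = np.hstack([L, -Lf])

assert np.all(np.linalg.eigvals(Aa).real < 0)      # error system is stable
X = solve_continuous_lyapunov(Aa, -Ba @ Ba.T)      # Aa X + X Aa^T = -Ba Ba^T
variance = float(np.trace(Ca @ X @ Ca.T))          # asymptotic E[e^T e]
```

The LMI reformulation that follows replaces the equality (9.13) by an inequality, which is what makes the filter matrices themselves available as decision variables.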
Note that the inequality constraints of (9.16) and (9.17) are not convex in the decision variables P, Af, Kf, and Lf. However, as will be shown in the next theorem, by performing appropriate transformations and introducing new decision variables these inequalities can be transformed into LMIs. It should be observed that since we are dealing with the case where the system matrix S is perfectly known, it follows that the solution to our filtering problem is given by the Kalman filter. Thus, it is clear that the inequality constraints of (9.16) and (9.17) should be feasible if the Kalman filter exists. Theorem 9.1. Consider the system (Σ) where A, B, C, and D are known matrices. The full-order filter of the form of (9.9) that minimizes the asymptotic variance of the estimation error can be obtained in terms of the following LMI problem in MA, P0, P1 ∈ R^{n×n}, MK ∈ R^{n×ny}, ML ∈ R^{nz×n}, and N ∈ R^{nz×nz}:
subject to
With the optimal solution N, P0, P1, MA, MK, and ML found, suitable matrices of the optimal filter are given by
where P2 and P3 are any n × n matrices with P2 symmetric and such that P3 P2^{-1} P3^T = P0. Moreover, the asymptotic estimation error variance satisfies E[e^T e] = Tr[N] − ε, where ε > 0 is arbitrarily small. Proof 9.1. It will be shown that the inequalities of (9.16) and (9.17) are equivalent to the LMIs of (9.19)-(9.21). First, we partition P according to Aa, namely,
where P1 and P2 are n × n symmetric positive definite matrices. Note that, without loss of generality, the matrix P3 can be assumed nonsingular. To see this, suppose that the optimal P for the problem of (9.15)-(9.17) is such that P3 is singular. Let the matrix
where α is a positive scalar and
Observe that since P > 0, we have that Q > 0 for α > 0 in a neighborhood of the origin. Thus, it can be easily verified that there exists an arbitrarily small α > 0 such that Q3 is nonsingular and the inequalities of (9.16) and (9.17) are feasible with P replaced by Q and such that the objective function of (9.15) will be increased only by an arbitrarily small quantity. Since Q3 is nonsingular, we thus conclude that there is no loss of generality in assuming the matrix P3 to be nonsingular. In light of the definitions of the matrices Aa and Ca in (9.11), the inequality of (9.17) can be rewritten as
Premultiplying and postmultiplying (9.23) by J^T and J, respectively, where
and introducing the new variables
we have that the inequality of (9.23) becomes the LMI of (9.21). On the other hand, by Schur complements, the condition P > 0 of (9.16) is equivalent to the inequalities of (9.19). We now prove the equivalence of the first inequality of (9.16) and (9.20). First, note that in view of the definition of Ba in (9.11), the inequality of (9.16) becomes
Premultiplying and postmultiplying (9.25) by J^T and J, respectively, where
and considering the definitions of P0 and MK, we readily obtain that the inequality of (9.25) is equivalent to the LMI of (9.20). Finally, the filter matrices of (9.22) are obtained immediately from (9.24), which concludes the proof. □

Remark 9.1. Note that the filter matrices of (9.22) can be rewritten as
This implies that the matrix P3 can be viewed as a similarity transformation on the state-space realization of the filter and, as such, has no effect on the filter mapping from y to ẑ. Thus, without loss of generality, we can set P3 = I_n, and suitable state-space matrices of the optimal filter are as follows:

Remark 9.2. Theorem 9.1 provides an LMI method for designing a full-order linear, stationary minimum variance filter for system (Σ) in the case where the system matrix S is known. It should be noted that in this situation, and subject to the detectability of (A, C) and stabilizability of (A, B), the Kalman filter also provides a state-space realization for the optimal filter. In the sequel we shall discuss the connections between the filter of Theorem 9.1 and the Kalman filter. We start by presenting an LMI formulation for the Kalman filter. First, note that in Kalman filtering, we set Af = A − Kf C and Lf = L, which implies that the estimation error can be described by the following nth-order state-space model:
where x̃ := x − x̂. The filter gain Kf is then determined in order to minimize the asymptotic variance of the estimation error, namely, to solve the following problem:
is asymptotically stable. The above can be reformulated in terms of the following optimization problem:
Hence, introducing the new variable Y = -WKf, we are led to the following LMI formulation for the Kalman filter:
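As Remark 9.2 indicates, in the known-system case the optimal filter can be cross-checked against the Kalman filter. A hedged sketch that computes the Kalman gain from the filter algebraic Riccati equation rather than from the LMI (all data hypothetical; the sketch assumes B D^T = 0, i.e., no cross-correlation between process and measurement noise):

```python
# Kalman gain via the filter algebraic Riccati equation
#   A X + X A^T - X C^T (D D^T)^{-1} C X + B B^T = 0,
#   Kf = X C^T (D D^T)^{-1}.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, 1.0], [0.0, -0.5]])   # illustrative data
B = np.array([[1.0, 0.0], [0.5, 0.0]])     # process-noise channel only
C = np.array([[1.0, 0.0]])
D = np.array([[0.0, 0.3]])                 # measurement-noise channel only

R = D @ D.T
# solve_continuous_are(a, b, q, r) solves a^T x + x a - x b r^{-1} b^T x + q = 0,
# so a = A^T and b = C^T recover the filter Riccati equation above.
X = solve_continuous_are(A.T, C.T, B @ B.T, R)
Kf = X @ C.T @ np.linalg.inv(R)

# Riccati residual should vanish and the error matrix A - Kf C must be stable.
res = A @ X + X @ A.T - X @ C.T @ np.linalg.inv(R) @ C @ X + B @ B.T
```

Under the detectability and stabilizability assumptions of Remark 9.2, this Riccati route and the LMI formulation target the same filter, which is what the equivalence argument below makes precise.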
Observe that since in Kalman filtering the estimation error can be described in terms of an n-dimensional state-space model, the state-space realization (9.10) for the estimation error is nonminimal. The implication of this is that the LMI of (9.21) can be relaxed to a nonstrict inequality as long as the (1,1)-block is positive definite. This will ensure the asymptotic stability of the estimation error dynamics, as will be shown below. Moreover, (9.21) can be reduced to an inequality of lower dimension. Premultiplying and postmultiplying the inequality of (9.21) by J^T and J, respectively, where
we get
where
If we introduce the constraint P1 = P0 + εI, where ε is a positive scalar with ε → 0, the inequality of (9.30) implies, in the limiting case, that
Moreover, (9.30) reduces to
Hence, it follows from (9.31) that
Next, choosing P2 = −P3, and considering (9.24), we get
Thus, we obtain
On the other hand, premultiplying and postmultiplying the inequality of (9.20) by J^T and J, respectively, where
it follows that (9.20) holds if and only if
Taking into account that P1 → P0, we obtain in the limiting case that (9.34) and (9.19) are equivalent to
Observe that the constraints of (9.32), (9.33), and (9.35) are exactly those of the LMI formulation of the Kalman filter of (9.27)-(9.29). Therefore, we conclude that by setting P0 = P1 and choosing P2 = −P3, the filter of Theorem 9.1 reduces to the Kalman filter. We now consider the design of full-order robust filters. To this end, we consider the system (Σ) and the uncertainty domain D of (9.5). In light of the arguments developed in the case where the system matrix S of (9.4) is perfectly known, the robust filtering problem is formulated in terms of the optimization problem of (9.14) or, equivalently,
where Aa, Ba, and Ca are as in (9.11), and the system matrix S is any matrix belonging to the polytope D. In this case, it follows that as t → ∞,
i.e., Tr[N] provides a suitable upper bound for the asymptotic error variance. Recall that when the system matrix S is known, i.e., ns = 1, this bound coincides with the variance of the estimation error. Our robust filter will thus be designed to minimize the upper bound of the asymptotic error variance of (9.39) or, equivalently, the robust filter design is formulated in terms of the optimization problem of (9.36)-(9.38). In light of the above arguments and considering Theorem 9.1, we have the following result. Theorem 9.2. The full-order filter of the form of (9.9) for the system (Σ) that minimizes the bound for the asymptotic error variance of (9.39) can be obtained in terms of the following LMI problem in MA, P0, P1 ∈ R^{n×n}, MK ∈ R^{n×ny}, ML ∈ R^{nz×n}, and N ∈ R^{nz×nz}:
subject to
for i = 1, ..., ns, where the matrices S_i are the vertices of the polytope D. With the optimal solution N, P0, P1, MA, MK, and ML found, suitable matrices of the optimal robust filter are given by
where P2 and P3 are any n × n matrices with P2 symmetric and such that P3 P2^{-1} P3^T = P0. Moreover, the asymptotic estimation error variance satisfies E[e^T e] ≤ Tr[N] for all admissible uncertainties. Proof 9.2. Using the same arguments as in the proof of Theorem 9.1, it follows that the optimization problem of (9.41)-(9.43) is equivalent to
where Aai and Bai are as in (9.11) with A, B, C, and D replaced by A_i, B_i, C_i, and D_i, respectively. Now, taking into account the convexity of the uncertainty domain and considering that the inequality constraints of (9.46) and (9.47) are affine in the matrices A_i, B_i, C_i, and D_i, we have the result that the optimization problem of (9.45)-(9.47) is equivalent to that of (9.36)-(9.38). □

Remark 9.3. Theorem 9.2 provides an LMI-based design of a linear stationary filter for the system (Σ), which minimizes the upper bound of (9.39) for the asymptotic variance of the estimation error. This is in contrast with the existing robust H2 filter designs of [333], [369], [429], and [431], which are aimed at linear systems with norm-bounded parameter uncertainty and are given in terms of Riccati equations.

Remark 9.4. It should be noted that, similarly to Remark 9.1, without loss of generality, suitable state-space matrices of the optimal robust filter of Theorem 9.2 are as follows:
9.4
Reduced-order robust filter design
In the previous section we addressed the problem of designing a robust filter with the same order as the system model. Now, we shall consider the design of reduced-order robust filters, i.e., the case where the order nf of the filter is smaller than the order n of the system model. Similarly to section 9.3, we first consider the design for the case where the system matrix S is known. In order to be able to solve the optimization problem of (9.15)-(9.17) via LMIs, we shall restrict the (1,2)-block, P3, of P, with dimensions n × nf, to be of the form P3 = [P4^T 0]^T. Note that now
and the above inequality is not guaranteed to be tight. Thus, in general, Tr[N] only provides an upper bound for the asymptotic error variance. In this situation, we have the following result.
Theorem 9.3. Consider the system (Σ) where A, B, C, and D are known matrices. A filter of order nf < n of the form (9.9) that minimizes the bound for the asymptotic error variance of (9.48) can be obtained in terms of the following LMI problem in MA, P0 ∈
subject to
where
and where P0 ∈ R^{n×nf}, MA ∈ R^{n×nf}, and MK ∈ R^{n×ny}. With the optimal solution N, P0, P1, MA, MK, and ML found, suitable matrices of the optimal filter are given by
where P2 and P4 are any nf × nf matrices with P2 symmetric and such that P4 P2^{-1} P4^T = P0. Moreover, the asymptotic estimation error variance satisfies E[e^T e] ≤ Tr[N]. Proof 9.3. The proof is along the same lines as that of Theorem 9.1. First, we partition P according to Aa, namely,
where P1 ∈ R^{n×n} and P2 ∈ R^{nf×nf} are positive definite matrices and P4 ∈ R^{nf×nf}. Moreover, as in the proof of Theorem 9.1, without loss of generality, the matrix P4 is assumed to be nonsingular. Next introduce the matrix
Premultiplying and postmultiplying (9.52) by J^T and J, respectively, it is easy to verify that this leads to the inequality of (9.17). Next, by Schur complements and considering the definitions of P0 and P3 as above, it follows from the inequalities of (9.50) that P > 0. Finally, the remainder of the proof parallels that of Theorem 9.1.
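The nonsingular matrix P4 appearing here acts, as in Remark 9.1, as a similarity transformation on the filter realization, and such a transformation does not change the filter's input-output behavior. A quick numerical confirmation at one frequency (matrices illustrative):

```python
# Similarity invariance of a filter realization: (Af, Kf, Lf) and
# (T Af T^{-1}, T Kf, Lf T^{-1}) have the same transfer function
# H(s) = Lf (sI - Af)^{-1} Kf for any nonsingular T.
import numpy as np

Af = np.array([[-1.0, 0.5], [0.0, -3.0]])
Kf = np.array([[1.0], [2.0]])
Lf = np.array([[1.0, -1.0]])
T = np.array([[2.0, 1.0], [0.0, 1.0]])     # any nonsingular transformation

def freq_resp(Af, Kf, Lf, w):
    return Lf @ np.linalg.inv(1j * w * np.eye(2) - Af) @ Kf

w = 0.7
H1 = freq_resp(Af, Kf, Lf, w)
H2 = freq_resp(T @ Af @ np.linalg.inv(T), T @ Kf, Lf @ np.linalg.inv(T), w)
```

This is why the particular choice of P2 and P4 in the theorems is immaterial: any valid factorization yields the same filter mapping from y to ẑ.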
In light of Theorem 9.3 and using similar arguments as in the case of the full-order robust filter design, we have the following result, which provides an LMI method for designing reduced-order, linear stationary filters for the system (Σ) that minimize the upper bound of (9.39) for the asymptotic variance of the estimation error. Theorem 9.4. The filter of order nf < n of the form (9.9) that minimizes the bound for the asymptotic error variance of (9.39) can be obtained in terms of the following LMI problem in MA, P0 ∈ R^{nf×nf}, P1 ∈ R^{n×n}, MK ∈ R^{nf×ny}, ML ∈ R^{nz×nf}, and N ∈ R^{nz×nz}:
subject to
for i = 1, ..., ns, and where P0, MA, and MK are as in (9.53). With the optimal solution N, P0, P1, MA, MK, and ML found, suitable matrices of the optimal robust filter are given by
where P2 and P4 are any nf × nf matrices with P2 symmetric and such that P4 P2^{-1} P4^T = P0. Moreover, the asymptotic estimation error variance satisfies E[e^T e] ≤ Tr[N] for all admissible uncertainties. Remark 9.5. It can be easily verified that the matrix P4 of Theorems 9.3 and 9.4 can be viewed as a similarity transformation on the state-space realization of the corresponding filters. Thus, without loss of generality, suitable state-space matrices of the reduced-order filters of Theorems 9.3 and 9.4 are as follows:
9.5
Conclusions
This chapter addressed the design of robust H2 filters for linear continuous-time systems with a state-space model subject to parameter uncertainty that belongs to a given convex-bounded polyhedral domain. We developed LMI-based methodologies for designing linear stationary, asymptotically stable filters with an optimized upper bound for the asymptotic estimation error variance, irrespective of the uncertainty. These filters can be regarded as an extension of the celebrated Kalman filter to the case of systems with polytopic uncertainty in the matrices of the system state-space model. Both full-order and reduced-order robust filter designs have been considered. The proposed design methodologies have the advantage that they can easily handle additional design constraints that keep the problem convex, for instance, when the filter is required to satisfy certain structure constraints.
Chapter 10
Robust Mixed Control and Linear Parameter-Varying Control with Full Block Scalings

Carsten W. Scherer

10.1
Introduction
In the past years mixed control problems have attracted considerable interest, and the linear matrix inequality (LMI) approach to controller design has been shown to be beneficial in tackling design specifications that had been deemed intractable. For the design of output feedback controllers, it has been revealed quite recently how a suitable nonlinear change of controller parameters allows us to proceed in a straightforward fashion from an analysis specification for a controlled system formulated in terms of matrix inequalities to the corresponding synthesis inequalities for controller design [78, 264, 365]. One purpose of this chapter is to extend this general paradigm to robust performance specifications if the underlying system is affected by time-varying parametric uncertainties [265, 396]. We introduce a general technique that allows us to equivalently translate robust performance objectives formulated in terms of a common Lyapunov function into the corresponding analysis test with multipliers. Similar approaches, such as those in [265, 396], are usually based on the so-called S-procedure, which introduces conservatism, since the resulting multipliers have, in general, a block-diagonal structure. Instead, we propose to use full block multipliers that are only indirectly described by LMIs such that the reformulation will not introduce conservatism. Hence we call the underlying abstract result for quadratic forms a full block S-procedure. For robust stability specifications with affine dependence on the uncertainties, such an equivalent reformulation has been provided before in [341]. The main goal of this chapter is to generalize these ideas to robust performance specifications if the matrices describing the system can be rational functions of the parameters and to base the transformation on one abstract result that can be applied with
ease to several performance specifications. Moreover, we will provide a discussion of the (straightforward) consequences for robust controller design. Related results for the robust stabilization problem have appeared in the inspiring paper [212]. Finally, we will turn to the design of linear parameter-varying controllers, which has been fully tackled for block-diagonal multipliers [312, 191, 367]. Our purpose is to show how to overcome this structural restriction and arrive at a solution even if the multipliers are full block matrices that are only indirectly described by LMIs. Contrary to previous approaches, one has to modify the structure of the scheduling function: one cannot just employ a copy of the parameters to schedule the controller, but one has to use a quadratic function thereof.
10.2
System description
We concentrate on systems that are affected by time-varying parametric uncertainties. Since we deal with linear fractional system representations, we assume that all these parameters are collected in the matrix Δ. The admissible set of values that can be taken by the uncertainties is denoted as
and assumed to be compact. Note that Δ captures both the size of the uncertainties as well as their structure. Typically, for a rational dependence of the system description on the parameters, the matrix can be taken to be block-diagonal, and the blocks take a real repeated structure; for the results in this chapter, such a structure is not required. Given the value set Δ, the actual time-varying parametric uncertainties consist of all continuous curves
Let us now look at
a linear time-invariant (LTI) system in which the uncertainties enter in a linear fractional fashion to define the time-varying uncertain system under investigation. Obviously, w1 → z1 constitutes the uncertainty channel, whereas wj → zj, j = 2, ..., m, are the performance channels used to describe the desired performance specifications, and u → y is the control channel. In fact, u is the control input and y is the measured output, and any LTI system that closes the loop as
is said to be a controller. The resulting controlled system admits the description
with the realization
The multiobjective robust controller design problem can be sketched as follows: find a controller that exponentially stabilizes the closed-loop system and that achieves a mixture of various performance specifications on the different performance channels of the controlled system for all possible uncertainties affecting the system. In the next section we extend well-known analysis tests for nominal performance as described in [365] to the corresponding robust performance tests against time-varying uncertainties in terms of a constant quadratic Lyapunov function. The essential new ingredient will be the ease with which one can derive the corresponding multiplier test by just referring to the abstract full block S-procedure that is presented in Lemma 10.1 in the appendix.
10.3
Robust performance analysis with constant Lyapunov matrices
In this section, we provide analysis tests that are based on finding a constant quadratic Lyapunov function in order to guarantee the following properties:
• Well-posedness of the linear fractional transformation (LFT) used to describe the uncertain system.
• Uniform (in the uncertainty) exponential stability.
• Robust performance, specified as quadratic performance (such as bounding the L2-gain or dissipativity) or as bounding the H2-norm, the generalized H2-norm, or the peak-to-peak gain.
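For a nominal LTI channel, the L2-gain bound mentioned in the last item can be estimated numerically as the peak, over frequency, of the largest singular value of the transfer matrix. A crude grid-based sketch for the illustrative first-order system 1/(s+1), whose gain is exactly 1:

```python
# Grid-based estimate of the L2-gain of a nominal LTI channel:
# peak over frequency of the largest singular value of
# G(jw) = C (jwI - A)^{-1} B + D.
import numpy as np

A = np.array([[-1.0]])     # illustrative system: G(s) = 1/(s+1)
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

def sigma_max(w):
    G = C @ np.linalg.inv(1j * w * np.eye(1) - A) @ B + D
    return np.linalg.svd(G, compute_uv=False)[0]

grid = np.linspace(0.0, 50.0, 2001)
gain_est = max(sigma_max(w) for w in grid)   # peaks at w = 0 for this system
```

The LMI tests below certify such bounds robustly, over the whole uncertainty set, rather than by frequency gridding of a single nominal system.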
10.3.1
Well-posedness and robust stability
We call the description (10.3) well posed if
Then the channel wi → zi of the uncertain closed-loop system admits the alternative representation
with the function
which is, due to (10.5), continuous (and even smooth) on Δ.
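The map just described is a linear fractional one; in the usual LFT convention (which the partitioning in (10.6) is assumed here to follow), the closed-loop matrix is A(Δ) = A + B1 Δ (I − D11 Δ)^{-1} C1, defined whenever I − D11 Δ is nonsingular. A small numerical sketch with illustrative matrices:

```python
# Evaluating an LFT closed-loop matrix and checking well-posedness.
import numpy as np

A   = np.array([[-2.0, 1.0], [0.0, -1.0]])   # illustrative data
B1  = np.array([[1.0], [0.5]])
C1  = np.array([[1.0, 0.0]])
D11 = np.array([[0.2]])

def A_of(Delta):
    M = np.eye(1) - D11 @ Delta
    # well-posedness: I - D11*Delta must be nonsingular for this Delta
    assert np.abs(np.linalg.det(M)) > 1e-9, "LFT not well posed for this Delta"
    return A + B1 @ Delta @ np.linalg.inv(M) @ C1

A0 = A_of(np.array([[0.0]]))   # Delta = 0 recovers the nominal A
Ad = A_of(np.array([[0.5]]))   # a perturbed closed-loop matrix
```

Compactness of the value set then guarantees that the nonsingularity condition, once verified over the whole set, holds with a uniform margin.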
If the interconnection is well posed, (10.3) or (10.6) is uniformly exponentially stable if there exist constants κ and α > 0 such that, for every uncertainty Δ(·) and for every unforced (w2 = 0, ..., wm = 0) system trajectory ξ(·),
Under the hypothesis (10.5), it is well known that uniform exponential stability is guaranteed by the existence of an X > 0 that satisfies the Lyapunov inequality A(Δ)^T X + X A(Δ) < 0 for all Δ in the admissible value set Δ. In the following result the Lyapunov inequality is rewritten into a form that facilitates the application of the full block S-procedure. Theorem 10.1. Suppose the interconnection (10.3) is well posed, and suppose there exists an X > 0 satisfying
Then the uncertain system (10.3) is uniformly exponentially stable. Proof 10.1. The very standard proof is only included for the convenience of the reader. By compactness of Δ, there exists an ε > 0 such that
for all Δ ∈ Δ. Now suppose Δ(·) is any admissible uncertainty curve, and let ξ(·) be an arbitrary unforced system trajectory. With v(t) := ξ(t)^T X ξ(t), we obtain
by left multiplying by ξ(t)^T and right multiplying by ξ(t). With the integrating factor e^{αt}, this implies
Due to λmin(X)‖ξ(t)‖^2 ≤ v(t) ≤ λmax(X)‖ξ(t)‖^2, we finally obtain
which finishes the proof with explicit formulas for the constants α and κ.
□
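The proof's estimates can be made concrete: a Lyapunov certificate A^T X + X A ≤ −εI yields the explicit constants κ = sqrt(λmax(X)/λmin(X)) and α = ε/(2 λmax(X)). A numerical check for a fixed, illustrative A (assuming numpy/scipy are available):

```python
# Turning a Lyapunov certificate into an explicit decay bound:
#   ||xi(t)|| <= kappa * exp(-a t) * ||xi(0)||.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])         # illustrative stable matrix
X = solve_continuous_lyapunov(A.T, -np.eye(2))   # A^T X + X A = -I  (eps = 1)
lmin, lmax = np.linalg.eigvalsh(X)[[0, -1]]
kappa, a = np.sqrt(lmax / lmin), 1.0 / (2.0 * lmax)

xi0 = np.array([1.0, -1.0])
ok = all(np.linalg.norm(expm(A * t) @ xi0)
         <= kappa * np.exp(-a * t) * np.linalg.norm(xi0) + 1e-12
         for t in np.linspace(0.0, 5.0, 51))
```

For the uncertain case, the same bound holds along every admissible curve Δ(·), since the compactness argument in the proof supplies a single ε valid over the whole value set.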
The condition (10.7) is formulated in terms of the rational function A(Δ) on the whole set Δ, which makes it hard to verify directly. The main purpose of the full block S-procedure proposed in this chapter is to equivalently reformulate this test into a more explicit condition that makes use of multipliers. Here, the relevant set of multipliers is defined as
Whenever required we will tacitly assume that any such multiplier is partitioned as
For robust stability, we will reveal in detail how to apply Lemma 10.1 such that we can easily extend the technique to robust performance tests.
Theorem 10.2. The interconnection (10.3) is well posed and X satisfies (10.7) if and only if there exists a multiplier P ∈ P with
Proof 10.2. To apply Lemma 10.1, we introduce
and
Hence S_U = ker(U^T) ∩ S is described as
Therefore, we conclude that S_U is complementary to S_0 if and only if I − ΔD11 is nonsingular. Moreover, if I − ΔD11 is nonsingular, we arrive at the alternative description of S_U as
Then it is obvious that the following conditions are equivalent: inequality (10.7) and N < 0 on S_U; inequality (10.9) and N + T^T P T < 0 on S; and, finally, the condition P ∈ P and P > 0 on ker(U). The latter just follows from the observation that the kernel of U is nothing but the image of [Δ^T I]^T. All this shows that Theorem 10.2 is a reformulation of Lemma 10.1, which finishes the proof.
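Numerically, the subspace conditions appearing in this proof reduce to definiteness of congruences with basis matrices: for instance, "N < 0 on ker(U^T)" can be checked as K^T N K < 0 for any K whose columns span that kernel. An illustrative check with hypothetical N and U:

```python
# Checking negativity of a quadratic form on a subspace via a kernel basis.
import numpy as np
from scipy.linalg import null_space

N = np.diag([-1.0, -2.0, 5.0])           # indefinite on the whole space
U = np.array([[0.0], [0.0], [1.0]])      # ker(U^T) = span{e1, e2}

K = null_space(U.T)                      # orthonormal basis of ker(U^T)
max_eig = np.max(np.linalg.eigvalsh(K.T @ N @ K))   # < 0: N negative on subspace
```

Here N fails to be negative definite on the whole space, yet is negative on the subspace; this is exactly the gap the multipliers P bridge when the subspace depends on Δ.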
10.3.2
Robust quadratic performance
Suppose Ppi is a symmetric matrix that defines a performance index. Then the robust quadratic performance (QP) specification on the channel i is formulated as follows: there exists an ε > 0 such that
holds for any trajectory of the uncertain system (10.3) with ξ(0) = 0. The following technical hypothesis on the performance index is both indispensable for applying the full block S-procedure and for arriving at nominal controller synthesis procedures that can be based on solving LMIs:
Among others, this hypothesis holds true for the following very important special cases: robust QP with
guarantees that the L2-gain of the channel wi → zi is robustly smaller than γ;
guarantees strict robust dissipativity of the channel wi → zi, generalizing positive realness. Based on Lyapunov arguments, one can easily obtain a sufficient LMI condition for robust QP. Theorem 10.3. Suppose the interconnection (10.3) is well posed, and suppose there exists an X satisfying
for all Δ ∈ 𝚫. Then the uncertain system (10.3) is uniformly exponentially stable and satisfies the robust QP specification for the channel w_i → z_i. Proof 10.3. The upper-left block of (10.12) reads as
At this point we exploit (10.11) to infer (10.7), such that we can conclude uniform robust exponential stability. The proof of robust performance is, again, straightforward. By continuity and compactness of 𝚫, we can add εI in the appropriate diagonal block on the left-hand side of (10.12) for some small ε > 0 without violating the inequality. For any w_i ∈ L2, let ξ(·) and z_i(·) be the corresponding state and output trajectory of the uncertain system (10.3) with ξ(0) = 0. By right and left multiplication with the corresponding trajectory vector and its transpose, we infer
Integrating over [0, T] and letting T go to infinity implies the desired inequality (10.10) if we recall ξ(0) = 0 and lim_{T→∞} ξ(T) = 0. On the basis of Lemma 10.1, we can obtain with ease the equivalent multiplier test. Theorem 10.4. The interconnection (10.3) is well posed, and X satisfies (10.12) if and only if there exists a P ∈ P such that
Proof 10.4. As in the proof of Theorem 10.2, apply Lemma 10.1 to
and
10.3.3
Robust H2 performance
If the disturbance w_i that affects the system is stochastic in nature, we can also consider the H2 criterion as a performance measure for the perturbed system. Let us assume that ξ(0) = 0, that w_i is white noise, and that we are interested in bounding the variance of the output z_i by a given number γ. We need to assume that the uncertainty model and the controller are such that
Then it is easy to arrive at the following analysis result. Theorem 10.5. Suppose the interconnection (10.3) is well posed, and suppose there exist X > 0, Z > 0 satisfying, for all Δ ∈ 𝚫,
Then the uncertain system (10.3) is uniformly exponentially stable, and the variance of the output z_i on the time interval [0, ∞) is smaller than γ. Due to the zero row and column, the inequality in (10.16) could be simplified; we keep it in this form to display the similarity to the robust QP test. Proof 10.5. Uniform exponential stability follows as in the proof of Theorem 10.3. To verify robust performance, one can easily rewrite (10.16) with Y = X^{−1} as
Using Lemma 10.2, (10.17) is seen to be equivalent to
Let us now recall that the state covariance matrix of the uncertain system is given as the solution of the initial value problem
Standard comparison results for linear matrix differential equations imply K(t) ≤ Y for t ≥ 0, and hence we infer
which leads to the desired bound on the output variance. We end up with two inequalities in the parameter Δ. Therefore, we have to apply Lemma 10.1 to each of these inequalities individually. This leads to two independent multipliers that equivalently reformulate the robust H2 analysis condition into its multiplier version. Theorem 10.6. The interconnection (10.3) is well posed and X, Z satisfy (10.16)-(10.17) if and only if there exist P_1, P_2 ∈ P such that
Remark. Instead of the stochastic interpretation, the criterion derived here also admits a deterministic interpretation: it guarantees a bound on the sum of the energies of the output responses to initial conditions taken as the columns of B_i(Δ(0)).
10.3.4
Robust generalized H2 performance
The generalized H2 norm is the gain of the system mapping w_i ∈ L2 into z_i ∈ L∞, i.e., the energy-to-peak gain [347]. The corresponding performance specification is to robustly guarantee the bound
Note that the gain can only be finite if (10.15) holds, which is assumed throughout this section.
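The bound (10.20) is not legible in this reproduction. A hedged reconstruction, consistent with the energy-to-peak interpretation and with the proof that follows, is:

```latex
\sup_{t\ge 0}\; z_i(t)^T z_i(t) \;<\; \gamma^2 \int_0^\infty w_i(t)^T w_i(t)\,dt
\qquad \text{for all } w_i \in L_2,\ \xi(0)=0 .
```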
Theorem 10.7. Suppose that the interconnection (10.3) is well posed and that there exists an X > 0 such that, for all Δ ∈ 𝚫, the inequality (10.16) and
hold true. Then (10.3) is robustly exponentially stable, and the gain of L2 ∋ w_i → z_i ∈ L∞ is smaller than γ. Proof 10.6. As for QP, the proof of stability is obvious, and one can infer from (10.16) that there exists an ε > 0 such that
for any w_i ∈ L2. Integrating over [0, T] leads to ξ(T)^T X ξ(T) ≤ (γ − ε) ∫_0^T w_i(t)^T w_i(t) dt and hence to a uniform bound on ξ(T)^T X ξ(T). The second inequality (10.21) implies C_i(Δ)^T C_i(Δ) ≤ γX and therefore, due to D_{ii}(Δ) = 0, the pointwise bound z_i(T)^T z_i(T) ≤ γ ξ(T)^T X ξ(T). Both relations taken together lead to (10.20).
□
As for the H2 performance specification, we have to introduce two independent multipliers to equivalently reformulate these conditions into the corresponding multiplier versions. Theorem 10.8. The interconnection (10.3) is well posed and X satisfies (10.16) and (10.21) if and only if there exist multipliers P_1, P_2 ∈ P with (10.18) and
10.3.5
Robust bound on peak-to-peak gain
Let us finally include in our list the important time-domain specification of keeping the peak-to-peak gain L∞ ∋ w_i → z_i ∈ L∞ smaller than γ. To be precise, we intend to robustly guarantee
Again, simple Lyapunov arguments provide sufficient conditions for this inequality to hold. Theorem 10.9. Suppose the interconnection (10.3) is well posed. If there exist X, λ > 0, μ such that, for all Δ ∈ 𝚫,
then (10.3) is robustly exponentially stable, and the gain of L∞ ∋ w_i → z_i ∈ L∞ is robustly smaller than γ.
Proof 10.7. As before we observe that (10.24) implies
We infer ξ(t)^T X ξ(t) ≤ μ ∫_0^t e^{−λ(t−τ)} ||w_i(τ)||² dτ and hence
The inequality (10.25) leads (due to γ − μ > 0) to
for some small ε > 0. If we combine the two inequalities, we infer that (10.23) holds. The reformulation into the corresponding multiplier version is, again, straightforward. Theorem 10.10. The interconnection (10.3) is well posed, and (10.24)-(10.25) hold if and only if there exist multipliers P_1, P_2 ∈ P that satisfy
Note that X and λ enter these inequalities nonlinearly. In an analysis problem, it is advisable to search for the best upper bound on the peak-to-peak gain by fixing λ > 0 and minimizing the bound γ over the LMIs (10.26)-(10.27) (which is a convex optimization problem) to obtain the optimal value γ*(λ). One can then perform an additional line search over λ to further minimize γ*(λ) and arrive at the smallest achievable bound.
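The λ-fixed/γ-minimized strategy above can be sketched as follows. This is a hedged sketch: `min_gamma_for_lambda` is a hypothetical stand-in for the inner convex LMI optimization over (10.26)-(10.27) (replaced here by a smooth surrogate), and the outer line search assumes γ*(λ) is unimodal on the search interval.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-6):
    """Minimize a unimodal scalar function f on [lo, hi] by golden-section search."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:               # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:                     # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

def min_gamma_for_lambda(lam):
    # Stand-in for the inner convex step: in practice this would call an SDP
    # solver on the LMIs (10.26)-(10.27) with lambda > 0 fixed and return the
    # minimal gamma.  The quadratic below is a purely hypothetical surrogate.
    return (lam - 0.3) ** 2 + 1.0

# Outer line search over lambda to minimize gamma*(lambda).
lam_star = golden_section_min(min_gamma_for_lambda, 1e-3, 2.0)
```

Because the outer search only needs function values, any LMI toolbox can be plugged in for the inner step without changing the structure.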
10.4
Dualization
On the basis of Lemma 10.2, one can dualize all robust performance tests provided in the previous section. Let us concentrate on QP only, with an index matrix that is nonsingular and whose inverse is denoted as
Note that this property can be enforced by a slight perturbation that changes neither the problem formulation nor its solution. For an arbitrary matrix M, we observe that
Hence the set of dual scalings has to be defined as
and each dual multiplier P̃ ∈ P̃ is partitioned in the same fashion as P. Corollary 10.1. There exists an X and a multiplier P ∈ P with (10.12) if and only if there exists a dual Lyapunov matrix Y > 0 and a dual multiplier P̃ ∈ P̃ with
The Lyapunov matrices and multipliers are related as
Proof 10.8. Suppose X and P satisfy (10.12). Since Δ = 0 is in the value set 𝚫, we conclude that R > 0. Hence we infer
This implies that S as defined in (10.14) is, in fact, a negative subspace of N of maximal dimension. Moreover, since (10.13) implies ( · )^T P ( · ) < 0 for a suitable full column rank matrix, P is nonsingular. Hence the same is true of N. Due to (10.28) (after a row permutation), the proof is finished by applying Lemma 10.2.
10.5
How to verify the robust performance tests
The set of multipliers P is obviously convex. However, since this set is described by infinitely many inequalities, and since it is not obvious, in general, how to reduce these to finitely many conditions, it is hard even to decide whether a given matrix P is contained in P. This precludes the verification of the robust performance tests in Section 10.3 by standard algorithms.
This motivates confining, at the expense of conservatism, the search to a smaller set of multipliers that admits a simple description, preferably in terms of finitely many LMIs. For that purpose we assume that the value set 𝚫 is the convex hull of finitely many prespecified matrices:
Let us then introduce the multiplier set
Note that, in general, this is a strict inclusion, such that the restriction of the search to P_1 introduces conservatism. However, a straightforward convexity argument (based on Q < 0) reveals that
Hence the set P_1 can indeed be fully described by finitely many LMIs, and the search for P ∈ P_1 renders our robust performance tests verifiable by solving standard LMI problems. Remark. If D_11 = 0, such that the parameters enter (10.3) affinely, we observe that all the robust performance inequalities already imply that any multiplier in P satisfies Q < 0 and is, hence, contained in P_1. In this case, P and P_1 coincide and no conservatism has been introduced. It is possible to reduce the conservatism for block-diagonal uncertainties. For that purpose let us assume that
Then we obtain (10.29), where the N = 2^m generators Δ_v are defined by letting each δ_i vary in {−1, 1}. Let us now partition the upper-left block Q of the multiplier P conformably with Δ, which defines m diagonal blocks Q_1, ..., Q_m. If we introduce
we infer
on the basis of a simple multiconvexity argument. Again, the set P_2 is described in terms of finitely many LMIs. As expected, one can demonstrate by simple examples that the set P_2 is in general larger than P_1, and that P_2 can lead to less conservative robust performance tests. All these techniques apply in the same fashion to the corresponding dual inequalities, as provided in section 10.9.2, for the corresponding sets of dual multipliers P̃, P̃_1, and P̃_2.
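The vertex argument can be illustrated numerically. The sketch below is a toy demonstration (all matrix values are hypothetical): for a multiplier P with upper-left block Q < 0, the quadratic form x^T (Δ; I)^T P (Δ; I) x is concave in Δ for each x, so its positivity at the generators of the value set extends to the whole convex hull.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def form(P, Delta):
    # Evaluates (Delta; I)^T P (Delta; I) for a square Delta.
    k = Delta.shape[0]
    M = np.vstack([Delta, np.eye(k)])
    return M.T @ P @ M

k = 2
Q = -np.eye(k)                          # Q < 0: makes the form concave in Delta
S = 0.2 * rng.standard_normal((k, k))   # hypothetical off-diagonal block
R = 3.0 * np.eye(k)
P = np.block([[Q, S], [S.T, R]])

# Generators Delta_v = diag(delta_1, ..., delta_k) with delta_i in {-1, 1};
# the value set is their convex hull (all diagonal Delta with entries in [-1, 1]).
gens = [np.diag(v) for v in itertools.product([-1.0, 1.0], repeat=k)]
vertex_ok = all(np.linalg.eigvalsh(form(P, D)).min() > 0 for D in gens)

# Since Q < 0, positivity at the generators extends to the hull by concavity;
# we spot-check this on random convex combinations of the generators.
hull_ok = all(
    np.linalg.eigvalsh(form(P, sum(w * D for w, D in zip(ws, gens)))).min() > 0
    for ws in (rng.dirichlet(np.ones(len(gens))) for _ in range(100))
)
```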
10.6
Mixed robust controller design
We have listed several important analysis tests for robust performance. In a typical multiobjective robust controller design problem, one tries to find a controller that meets a selection of these specifications on various channels of the controlled system. In order to render the underlying analysis problems algorithmically tractable, we constrain ourselves to the multiplier set P_1 or, for block-diagonal uncertainties, to P_2, respectively. Even for nominal performance specifications, the corresponding multiobjective control problems are still hard and mostly open. It is well known that the main difficulty arises from the fact that each of the performance specifications requires, in general, a different Lyapunov matrix to render it satisfied. The currently known design techniques do not easily overcome this obstacle. This has been the motivation to consider, instead, the so-called mixed control problems, in which the goal is to render the specifications satisfied with a common Lyapunov matrix X for all specifications under investigation. To be specific, we consider the robust mixed QP/H2 problem: try to robustly achieve QP with index P_{p,i} on channel w_i → z_i and an H2 bound γ on channel w_j → z_j. Note that the generalization to any other combination of (possibly repeated) performance specifications on other channels is obvious. The robust mixed QP/H2 control problem hence aims at finding a controller, multipliers P, P_1, P_2 ∈ P, and a Lyapunov matrix X, as well as an auxiliary variable Z > 0, that render the LMIs (10.13) and (10.18)-(10.19), with i replaced by j, satisfied.
10.6.1
Synthesis inequalities for output feedback control
Quite recently it has been observed [264, 78, 365, 361] how to pass in a formal manner from the analysis to the corresponding synthesis inequalities. This procedure is based on transforming the Lyapunov matrix X and the controller parameters into the new symmetric blocks X, Y and the transformed controller parameters K, L, M, N, which are collected in the variable v. Let us now introduce the functions
and
that are affine in v. In order to transform the synthesis inequalities, one needs to find a formal congruence transformation involving Z such that the blocks in the analysis inequalities transform as
Then it suffices to perform the substitution
to arrive at the synthesis inequalities in the new variable v and all other variables that appear in the analysis test (such as multipliers and auxiliary parameters). Once the synthesis inequalities have been solved, the inversion of (10.30) leads back to the Lyapunov matrix and to the desired controller parameters. This inverse is easily calculated by finding nonsingular matrices U, V with I − XY = UV^T and solving the equations
for X and A_c, B_c, C_c, D_c. If one performs these two steps for the robust mixed QP/H2 problem, one arrives at the synthesis inequalities
where we suppress the variable v for reasons of space. Unfortunately, testing the feasibility of these inequalities does not, in general, amount to solving a convex optimization or LMI problem. Hence one has to resort to controller/scalings iterations as they are known from μ-theory [265]. One might suggest the following procedure: consider the scaled uncertainty set r𝚫 and try to maximize r such that the synthesis inequalities are satisfied for the value set r𝚫. Start with a nominal design for r = 0. Then the iteration proceeds as follows: in the first step, fix K, L, M, N and find X, Y and multipliers P, P_1, P_2 corresponding to r𝚫 such that the synthesis inequalities hold and r is maximal. If one parametrizes the multipliers by finitely many LMIs, then testing feasibility for fixed r corresponds to an analysis problem and hence reduces to a standard LMI feasibility test; the maximization of r can be performed by bisection. In the second step, one fixes P, P_1, P_2 and maximizes r by varying the whole variable v. For fixed r, this corresponds to a nominal performance design problem (such that it reduces to an LMI feasibility test), and the maximization of r is again done by bisection.
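A skeleton of this iteration might look as follows. This is a hedged sketch: `analysis_feasible` and `synthesis_feasible` are hypothetical oracles standing in for the two LMI feasibility tests (controller fixed vs. multipliers fixed), and the toy thresholds at the bottom are placeholders, not part of the original text.

```python
def max_r_by_bisection(feasible, r_hi=4.0, tol=1e-4):
    """Largest r in [0, r_hi] with feasible(r), assuming monotone feasibility in r."""
    if not feasible(0.0):
        return None                      # even the nominal problem fails
    lo, hi = 0.0, r_hi
    while feasible(hi):                  # expand the bracket until infeasible
        lo, hi = hi, 2.0 * hi
        if hi > 1e6:
            return lo
    while hi - lo > tol:                 # standard bisection on the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

def scalings_iteration(analysis_feasible, synthesis_feasible, n_iter=10):
    r = 0.0
    for _ in range(n_iter):
        # Step 1: controller (K, L, M, N) fixed, search multipliers/Lyapunov variables.
        r = max(r, max_r_by_bisection(analysis_feasible) or 0.0)
        # Step 2: multipliers fixed, redesign the controller variables v.
        r = max(r, max_r_by_bisection(synthesis_feasible) or 0.0)
    return r

# Toy stand-ins: each step is "feasible" up to a hypothetical radius.
r_final = scalings_iteration(lambda r: r <= 1.2, lambda r: r <= 1.5)
```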
One can immediately devise many variations of this principal procedure. In particular, one might choose other combinations of fixed and varying parameters in the optimization steps to increase r [361].
10.6.2
Synthesis inequalities for state-feedback control
The procedure described in the previous section applies literally to state-feedback design. One just needs to employ the functions
and the equations to reconstruct the Lyapunov matrix and the (static) controller read as follows. As such, the synthesis inequalities are not affine in all variables. However, as demonstrated for the analysis inequalities, one can straightforwardly dualize the synthesis inequalities derived above. Due to the mere fact that B_j(v) and D_{ij}(v) actually do not depend on v, the dual inequalities define convex constraints even when letting the multipliers, the auxiliary parameter Z, and the whole parameter v vary; in fact, they can be easily rearranged to become affine in all unknowns. Hence, what is known for single-objective control problems or for block-diagonal multipliers generalizes completely to robust mixed control problems with full block scalings.
10.6.3
Elimination of controller parameters
It is well known how to eliminate the controller parameters in single-objective nominal design problems. This is also possible in single-objective robust control problems. Let us consider, as an example, the robust QP problem with index P_p on the channel w_2 → z_2. We provide a variation of the standard procedure based on the projection lemma that leads, even for a general QP index, to particularly simple formulas. We just apply Lemma 10.3 and observe that we can work with basis matrices K_1 and K_2 of the respective kernels. Then we arrive at the following equivalent synthesis test: find X, Y and multipliers P ∈ P, P̃ ∈ P̃ that satisfy
and the duality coupling condition
Nonconvexity enters via the generally nonconvex constraint (10.34). Although this result has not appeared in the literature, it is a straightforward (and notationally simple) extension of those in [341, 212] to robust QP. The main purpose of being so detailed is to point out the relation to linear parameter-varying (LPV) control in the next section.
10.7
LPV design with full scalings
In LPV control, it is assumed that the parameters Δ(t) that enter the system are not unknown but can be measured online. This allows one, among other applications, to approach (robust) gain-scheduling synthesis problems for nonlinear control systems. So far, the LPV problem has been solved for block-diagonal parameter matrices with the corresponding block-diagonal scalings [19, 191, 312, 367]. In [364] we sketched how to extend these techniques to full block scalings, and in this chapter we give for the first time the full problem solution. Adjusted to the structure of (10.1), we assume that the measured parameter curve enters the controller also in a linear fractional fashion. Hence an LPV controller is defined by scheduling the LTI system
with the actual parameter curve as
The controller is hence parameterized through the matrices A_c, B_c, C_c, D_c and through a possibly nonlinear scheduling function Δ_c(Δ). Note that previous approaches were based on Δ_c(Δ) = Δ, such that the controller is scheduled with an identical copy of the parameters. However, full block scalings require the extension to a more general scheduling function, which will turn out a posteriori to be quadratic. The goal is to construct an LPV controller that renders the QP specification with index P_p satisfied for the channel w_2 → z_2 for all possible parameter curves. As is standard, the solution of this problem is obtained with a simple trick. In fact, the controlled system can, alternatively, be obtained by scheduling the LTI system
with the parameters as
and then controlling this parameter-dependent system with the LTI controller (10.35). Hence, with the chosen controller structure, the LPV problem we started out with is equivalently reformulated as a robust performance design problem as discussed previously: find an LTI controller (10.35) that renders the system (10.36)-(10.37) uniformly exponentially stable such that the robust QP specification for w_2 → z_2 with nonsingular index P_p is satisfied. Due to the fact that the parameters are measured online, the uncertainties enter the system in a very specific form, which is reflected in the particular structure of the describing matrices in (10.36) and of the parameters in (10.37). This particular structure implies that the synthesis inequalities related to this robust performance problem are standard LMI problems. Hence they can be solved (without conservatism) using existing algorithms. To guarantee robust stability and performance of the closed-loop system, we employ extended multipliers, adjusted to the extended uncertainty structure, that are given as
and that satisfy
The corresponding dual multipliers P̃_e = P_e^{−1} are partitioned similarly as
As indicated by this notation, it will turn out that the LPV synthesis inequalities are influenced only by the multiplier blocks in P_e and P̃_e without indices, and they are actually identical to those of the robust control problem apart from the coupling condition (10.34). Therefore, testing these synthesis conditions indeed amounts to solving a standard LMI problem. Theorem 10.11. There exists a controller (10.35) and a scheduling function such that the system (10.36)-(10.37) controlled with (10.35) satisfies the analysis conditions for robust QP with multipliers (10.38)-(10.39) if and only if there exist X, Y and scalings P ∈ P_1, P̃ ∈ P̃_1 that satisfy the LMIs (10.31)-(10.33). Proof 10.9. The proof of "only if" is straightforward: eliminate the controller parameters in the analysis inequalities. Due to the specific structure of the describing matrices in (10.36), the resulting synthesis inequalities simplify to (10.31)-(10.33) such that only the multiplier parts
appear. The constructive proof of "if" is more involved. Let us assume that we have found a solution to (10.31)-(10.33). First step: Extension of scalings. Let us define the matrices
with the row partition of P. Note that im(Z̃) is the orthogonal complement of im(Z). For the given P and P̃, we try to find an extension P_e with (10.38) such that the dual multiplier P̃_e = P_e^{−1} is related to the given P̃ as in (10.40). After a suitable permutation, this amounts to finding an extension
where the specific parametrization of the new blocks in terms of a nonsingular matrix T and some symmetric N will turn out to be convenient. Such an extension is very simple to obtain. However, we also need to obey the positivity/negativity constraints in (10.38) that amount to
and
We can assume without loss of generality (perturb, if necessary) that P − P̃^{−1} is nonsingular. Then we set N accordingly and observe that (10.42) holds for any nonsingular T. The main goal is to adjust T to render (10.43)-(10.44) satisfied. We will in fact construct the subblocks T_1 = TZ and T_2 = TZ̃ of T = (T_1 T_2). Due to (10.41), conditions (10.43)-(10.44) read in terms of these blocks as
If we denote by n_+(S) and n_−(S), respectively, the numbers of positive and negative eigenvalues of the symmetric matrix S, we have to calculate n_−(N − Z(Z^T P Z)^{−1} Z^T) and n_+(N − Z̃(Z̃^T P Z̃)^{−1} Z̃^T). Simple Schur complement arguments reveal that
equals the displayed quantity. By Lemma 10.2 and Z^T P Z > 0, we infer Z̃^T P^{−1} Z̃ < 0, and hence n_−(Z^T P Z) = n_−(Z^T P^{−1} Z). This leads to
This implies that there exist T_1, T_2 with n_−(N) and n_+(N) columns, respectively, that satisfy (10.45). Since n_+(N) + n_−(N) equals the number of rows of T, these two blocks indeed define a square T that can be assumed (after perturbation, if necessary) to be nonsingular. This finishes the construction of the extended multiplier (10.38), where we observe that the dimensions of Q_22/R_22 equal the numbers of columns of T_1/T_2, which are, in turn, identical to n_−(N)/n_+(N). Second step: Construction of the scheduling function. Let us recall that
on 𝚫. By Lemma 10.3, we can hence infer that, for each Δ ∈ 𝚫, there indeed exists a Δ_c(Δ) that satisfies (10.39). Due to the structural simplicity, we can even provide an explicit formula showing that Δ_c(Δ) can be selected to depend smoothly on Δ. Indeed, by a straightforward Schur complement argument, (10.39) is equivalent to
for U = R_e − S_e^T Q_e^{−1} S_e > 0, V = −Q_e^{−1} > 0, W = Q_e^{−1} S_e. Since there exists a solution, the inequality can be rearranged to
and it is clear that one solution is obtained by rendering the (2,1)-block zero:
Note that Δ_c(Δ) has dimension n_−(N) × n_+(N). Third step: LTI controller construction. After having reconstructed the scalings, the design of the LTI part of the controller amounts to solving a nominal QP problem, which can be done along standard lines as shown, e.g., in [365]. Remark. The proof reveals that the scheduling function Δ_c(Δ) has as many rows/columns as there are negative/positive eigenvalues of P − P̃^{−1} (assuming without loss of generality that the latter is nonsingular). If P − P̃^{−1} happens to be positive or negative definite, there is no need to schedule the controller at all; we then obtain a controller that solves the robust QP problem. The search for X and Y and for the multipliers P ∈ P_1 and P̃ ∈ P̃_1 satisfying (10.31)-(10.33) amounts to testing the feasibility of a standard LMI. Moreover, the controller construction in the proof of Theorem 10.11 is constructive. Hence we conclude that we have found a full solution to the QP LPV control problem (including L2-gain and dissipativity specifications) for full block scalings P_e that satisfy Q_e < 0. The more interesting general case without this still restrictive negativity hypothesis is dealt with in a forthcoming paper.
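The dimension count in the remark is easy to evaluate numerically. A small sketch (the multiplier values below are hypothetical, chosen only to produce a mixed inertia) computes the inertia of P − P̃^{−1}, which sizes the scheduling function:

```python
import numpy as np

def inertia(M, tol=1e-9):
    """Return (number of negative, number of positive) eigenvalues of a symmetric M."""
    ev = np.linalg.eigvalsh(M)
    return int((ev < -tol).sum()), int((ev > tol).sum())

# Toy diagonal stand-ins for the multiplier P and the dual multiplier P~.
P = np.diag([2.0, -1.0, 3.0])
P_dual = np.diag([0.25, -0.25, 0.2])     # plays the role of P~ (hypothetical)

N = P - np.linalg.inv(P_dual)            # N = P - P~^{-1}
n_minus, n_plus = inertia(N)
# The scheduling function Delta_c(Delta) would then be an n_minus x n_plus
# matrix; if N were definite, no scheduling would be needed at all.
```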
10.8
Conclusions
In this chapter we have discussed how to handle multiobjective robust performance analysis and synthesis problems for systems that are affected by time-varying parametric uncertainties. We introduced a general technique, the so-called full block S-procedure, that allows us to formally introduce full block multipliers in order to reduce the conservatism that could result from the restriction to standard block-diagonal multipliers. Finally, we provided a solution to the single-objective LPV control problem employing a subclass of full block scalings; the fully general case is left to a forthcoming paper.
10.9
Appendix: Auxiliary results on quadratic forms
10.9.1
A full block S-procedure
Suppose S is a subspace of R^n, T ∈ R^{l×n} is a full row rank matrix, and U ⊂ R^{k×l} is a compact set of matrices of full row rank. Define the family of subspaces
indexed by U ∈ U. Suppose N ∈ R^{n×n} is a fixed symmetric matrix. The goal is to render the implicit negativity condition explicit. We want to relate this property, under certain technical hypotheses, to the existence of a symmetric multiplier P that satisfies
As a technical hypothesis, we require that all subspaces S_U are complementary to a fixed subspace S_0 ⊂ S that has the two properties
In the intended applications, S is an unperturbed system, T picks the interconnection variables that are constrained by the uncertainties, the elements of U ∈ U define kernel representations of the uncertainties, and S_U is the uncertain system. The complementarity condition amounts to a well-posedness property of the uncertain system description, and the nonnegativity of N is a condition on the performance index of interest. Lemma 10.1. The two conditions
hold if and only if there exists a matrix P that satisfies
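The displayed conditions of Lemma 10.1 are not legible in this reproduction. A hedged reconstruction, consistent with how the lemma is invoked in the proof of Theorem 10.2, is:

```latex
N<0 \text{ on } \mathcal{S}_U \ \forall\, U\in\mathcal{U}
\quad\text{and}\quad N\ge 0 \text{ on } \mathcal{S}_0
\;\Longleftrightarrow\;
\exists\, P=P^T:\;
N+T^T P T<0 \text{ on } \mathcal{S}
\;\text{ and }\;
P\ge 0 \text{ on } \ker(U)\ \forall\, U\in\mathcal{U}.
```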
10.9.2
Dualization
The dualization of robust performance tests is most easily achieved with the following well-known auxiliary result that is the abstract version of a dualization argument in [212], which is based on manipulating block matrices.
Lemma 10.2. Suppose that N is symmetric and nonsingular, and that S is a negative subspace of N of maximal dimension. (That is, N is negative definite on S, and the number of negative eigenvalues of N coincides with the dimension of S.) Then N^{−1} is positive definite on
10.9.3
Solvability test for a quadratic inequality
The following explicit solvability characterization for a quadratic matrix inequality serves to conveniently eliminate controller parameters in synthesis tests. Consider the quadratic inequality
in the unstructured unknown X. We assume that and that
is nonsingular.
By Lemma 10.2, we can dualize to equivalently reformulate the inequality as
The solvability test makes use of matrices A_⊥ and B_⊥ whose columns form bases of the kernels of A and B, respectively. Lemma 10.3. The quadratic inequality (10.48) has a solution X if and only if
and
The proof of "only if" is obvious. The proof of the converse can be based on the projection lemma and is constructive; if a solution is known to exist, one can explicitly calculate it.
Chapter 11
Advanced Gain-Scheduling Techniques for Uncertain Systems
Pierre Apkarian and Richard J. Adams
11.1
Introduction
The gain-scheduling problem has been the subject of a great deal of research over recent years, from both theoretical and practical viewpoints. This renewed interest probably stems from the development of new techniques and software that allow for a more rigorous and systematic treatment of the gain-scheduling problem. The classical approach to this problem essentially consists of repeated design syntheses associated with some scheduling strategy connecting locally designed controllers. Such schemes, however, lack supporting theories that guarantee the behavior of the scheduled controller. A significant contribution toward the elimination of such weaknesses is the formulation of the gain-scheduling problem in the context of convex semidefinite programming [315], an elegant and solidly based branch of optimization theory [296, 292, 403]. Expressed in terms of linear matrix inequalities (LMIs), the gain-scheduling problem is readily and globally solved using currently available efficient optimization software [153]. LMI techniques now appear as very natural mechanisms for the formulation of gain-scheduling problems as well as for a vast array of other problems in the control field. Reference [64] gives an overview of the scope of application of such techniques. As emphasized in H∞ control theory, a key stage in the characterization of gain-scheduled controllers is the search for adequate Lyapunov functions that establish stability and a performance bound for the closed-loop system. The linear-fractional transformation (LFT) gain-scheduling techniques in [312, 19, 252, 253] and the so-called quadratic gain-scheduling techniques in [36, 21] make use of a fixed Lyapunov function, as opposed to one that depends on the scheduled variables, to characterize stability and performance. According to [424], such approaches are potentially very conservative because they allow for arbitrary rates of variation in the scheduled variables.
More dramatically, it has been shown in [424] that some systems are not even quadratically stabilizable, that is, are not stabilizable on the basis of a single Lyapunov function. A significant improvement over such techniques can be obtained by exploiting the concept of parameter-dependent Lyapunov functions. This is discussed in the context of robustness analysis and synthesis in [149, 135, 191] and for the gain-scheduling problem in [424, 191]. Parameter-dependent Lyapunov functions allow the incorporation of knowledge of the rate of variation into the analysis or synthesis technique and therefore lead to much less conservative answers. The reader is referred to [33, 424] for earlier work related to the approaches considered here. The discretization of continuous-time gain-scheduled controllers is considered in [17]. In this chapter we investigate two different techniques: that of [362, 78, 265] and an extension of [148, 147] to the gain-scheduling problem. These techniques impose no restriction on the plant and provide a simple and streamlined treatment of the gain-scheduling problem. Moreover, the technique in [362, 78] allows the incorporation of multiple specifications into the design problem, such as mixed H2/H∞ performance, pole clustering, or control effort constraints. The second technique is more restrictive but offers computational advantages. The focus of this work is on the computational effort for controller calculation and on the practical issues of controller implementation. A special emphasis is placed on the development of scaling techniques that take advantage of the problem's structural properties and thus reduce the conservatism of the gain-scheduling approach. It is further shown that combining the capabilities of both techniques provides a comprehensive and effective methodology, encompassing all components of the gain-scheduling task from theoretical constructions to real-time implementations. The chapter is structured as follows. Sections 11.2 to 11.4 give a thorough discussion of gain-scheduling synthesis techniques, together with some refinements and improvements in section 11.5. Finally, the validity and applicability of the concepts and techniques are demonstrated for a two-link flexible manipulator application in section 11.6.
The work here is also related to the linear parameter-varying (LPV) control problem considered in Chapter 10, where the rates of variation of the parameter are not taken into account, hence potentially leading to conservative answers. However, as is common to most modern control techniques, the refinements in this chapter increase the computational cost, which could be prohibitive for large systems. Some techniques to alleviate the computational cost due to the gridding phase are considered in [22, 399].
11.2
Output-feedback synthesis with guaranteed L2-gain performance
In this section we recap some known results on gain-scheduling techniques with bounded parameter variation rates and point out connections between different approaches. We first give a general characterization of gain-scheduled controllers, the solution to which involves both intermediate controller matrices and Lyapunov variables X and Y. This formulation will be referred to as the basic characterization, emphasizing the fact that it can be easily extended to multiple-objective problems, pole clustering problems, etc. [362, 78]. Next, a second formulation of gain-scheduled controllers is presented. It will be referred to as the projected characterization, as the intermediate controller matrices have been eliminated through projections [148]. Reconstructing the controller state-space data from the projected conditions has been addressed in [148, 147] for the customary H∞ control problem. The reconstruction procedure is again described here, in the case of the gain-scheduling problem, for completeness of the discussion. The reader is referred to [33, 34, 424] for details, insights, and applications of analogous gain-scheduling techniques. The problem addressed throughout the chapter is the following. Suppose we are given
an LPV plant G(0) with state-space realization
where the dimensions of the state-space matrices define the problem dimensions. The time-varying parameter θ := (θ1, . . . , θL)T as well as its rate of variation θ̇ are assumed bounded as follows: (a) each parameter θi ranges between known extremal values θ̲i and θ̄i:
(b) the rate of variation θ̇i is assumed well defined at all times and satisfies
where ν̲i ≤ ν̄i are known lower and upper bounds on θ̇i. The first assumption means that the parameter vector θ takes values in a hypercube Θ. Similarly, (11.3) defines a hypercube Θd of RL with vertices in
The gain-scheduled output-feedback control problem consists of finding a dynamic LPV controller, K(θ), with state-space equations
which ensures internal stability and a guaranteed L2-gain bound γ for the closed-loop operator (11.1)-(11.5) from the disturbance signal w to the error signal z, that is,
and all admissible trajectories (θ, θ̇) and zero-state initial conditions. Note that A and AK have the same dimensions, since we restrict the discussion to the full-order case. The formulation of such controllers can be handled via an extension of the bounded real lemma with quadratic parameter-dependent Lyapunov functions V(xcl, θ) = xclT P(θ) xcl, where xcl stands for the state vector of the closed-loop system. See [424, 149, 135, 362] for details. Note that the controller state-space matrices are allowed to depend explicitly on the derivative of the time-varying parameter, θ̇. Different techniques to remove the dependence on θ̇ will be extensively discussed in section 11.3; see also [34]. Except for the usual smoothness assumptions on the dependence on θ, the problem data and variables will be unrestricted in the subsequent derivations. The basic characterization of gain-scheduled controllers with guaranteed L2-gain performance is presented in the next theorem, where the dependence of data and variables on θ and θ̇ has been dropped for simplicity.
Apkarian and Adams
Theorem 11.1 (basic characterization). Consider the LPV plant governed by (11.1), with parameter trajectories constrained by (11.2), (11.3). There exists a gain-scheduled output-feedback controller (11.5) enforcing internal stability and a bound γ on the L2 gain of the closed-loop system (11.1) and (11.5), whenever there exist parameter-dependent symmetric matrices Y and X and a parameter-dependent quadruple of state-space data (AK, BK, CK, DK) such that for all pairs (θ, θ̇) in Θ × Θd
In such a case, a gain-scheduled controller of the form (11.5) is readily obtained with the following two-step scheme: • solve for N, M, the factorization problem
• compute AK, BK, CK with
Proof 11.1. See [362, 78]. Note that since all variables are involved linearly, the constraints (11.6) and (11.7) constitute an LMI system. This system is, however, infinite due to its dependence on (θ, θ̇) ranging over Θ × Θd. Using the projection lemma, detailed in [148], the controller variables can be eliminated, leading to a characterization involving the variables X and Y only. This is presented in the next theorem. Theorem 11.2 (projected solvability conditions). Consider the LPV plant governed by (11.1), with parameter trajectories constrained by (11.2) and (11.3). There exists a gain-scheduled output-feedback controller (11.5) enforcing internal stability and a bound γ on the L2 gain of the closed-loop system (11.1) and (11.5) whenever there exist parameter-dependent symmetric matrices Y(θ) and X(θ) such that for all pairs (θ, θ̇) in Θ × Θd the following infinite-dimensional LMI problem holds:
where NX and NY designate any bases of the null spaces of [C2 D21] and [B2T D12T], respectively. Proof 11.2. This is a straightforward application of the projection lemma [148] to the LMI (11.6), with respect to the matrix variable
Theorem 11.2 only provides existence conditions for controllers of the form (11.5). These conditions become necessary and sufficient if we confine the involved Lyapunov functions to the set of quadratic forms
As an immediate extension of the results in [147], the next theorem provides for controller construction. Once again, the dependence on θ has been dropped to facilitate manipulations. It is further assumed that • (H1) D12 and D21 are full-column and full-row rank, respectively. This assumption is without restriction and greatly simplifies the presentation. The construction is easily extended to the singular case along the lines of [147]. Theorem 11.3 (controller construction from projections). Assume the conditions of Theorem 11.2 hold for a pair (X, Y) and some performance level γ. Then a gain-scheduled controller can be constructed for any pair (θ, θ̇) in Θ × Θd by the following sequential scheme: Compute DK solution to
and set
Compute BK and CK solutions to the linear matrix equations
Compute
Solve for N, M the factorization problem
Finally, compute AK, BK, and CK with the help of (11.8)-(11.10). It should be noted that in spite of their different structures, the characterizations given in Theorems 11.1 and 11.2-11.3 are equivalent and can virtually be used interchangeably for controller synthesis. In contrast, when the focus is on computational complexity or practical implementation, these techniques exhibit significant differences. This is discussed in section 11.4. Finally, the case where only some parameters θi are subject to constraints on their derivatives is easily handled by removing the unconstrained parameters from the matrix functions X(.) and Y(.).
11.2.1
Extensions to multiobjective problems
A useful practical advantage of the basic technique is that it easily extends to multiobjective problems. Various channels of the closed-loop system can be specified independently with a rich list of specifications. See [365, 265] for a thorough discussion. As an example, it is possible to specify an L2-gain bound with regional pole constraints on the closed-loop dynamics of the underlying LTI systems (θ frozen). Such constraints consist of vertical and horizontal strips, disks, conic sectors, parabolas, or intersections of such regions. The LMIs (11.6)-(11.7) must then be complemented with
where the data λkj and μkj define the geometry of the region.
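For concreteness, such pole-clustering regions are commonly encoded as LMI regions in the sense of Chilali and Gahinet; the sketch below is standard material rather than the chapter's exact notation, and shows the general form and the disk case:

```latex
% An LMI region is a set
%   D = { z in C :  L + z M + \bar{z} M^T < 0 },   with L = L^T, M real.
% For example, a disk of radius r centered at (-q, 0) corresponds to
\[
  f_{\mathcal{D}}(z) =
  \begin{pmatrix} -r & q + z \\ q + \bar{z} & -r \end{pmatrix} < 0 ,
\]
% and clustering the eigenvalues of A in D is equivalent to the existence
% of P = P^T > 0 such that
\[
  L \otimes P + M \otimes (AP) + M^T \otimes (AP)^T < 0 .
\]
```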
11.3
Practical validity of gain-scheduled controllers
It must be stressed that an LPV controller derived from Theorem 11.1 or 11.2-11.3 is not gain scheduled in the usual sense of the term. Its implementation requires not only the real-time measurement of the parameter θ, but also of its time derivative θ̇. This is generally prohibitive, since parameter derivatives either are not available or are difficult to estimate during system operation. Gain-scheduled controllers that do not require a measurement of θ̇ will be called practically valid hereafter. As discussed in [33], there is no systematic and tractable approach for removing the dependence on θ̇ while maintaining the generality of Theorem 11.1 or 11.2-11.3. As suggested by the controller formula (11.8), a simple but conservative approach has been proposed in [34]. It consists of restricting the variable Y(θ) so that Ẏ = 0, that is, Y not depending on θ. This operation amounts to using a fixed Lyapunov function for the parameter-dependent control problem described in (11.12). It thereby sacrifices some performance, resulting in a higher γ. Keeping in mind that the dependence of the controller data on θ̇ stems from the term ẊY + NṀT in (11.8), the general characterization of Theorem 11.3 offers additional freedom that is worth pointing out. The discussion is summarized in Table 11.1. Row 1 of the table simply says that if the scheduled variable is assumed constant in time, a practically valid gain-scheduled controller can theoretically be constructed
Table 11.1: Selection of variables in the gain-scheduled control problem.

Variables X, Y         | Variables N, M            | Rate set     | Practical validity
X := X(θ), Y := Y(θ)   | NMT = I − X(θ)Y(θ)        | θ̇ = 0        | Yes
X := X(θ), Y := Y(θ)   | NMT = I − X(θ)Y(θ)        | θ̇ ∈ Θd       | No
X := X(θ), Y := Y0     | N := I − X(θ)Y0, M := I   | θ̇ ∈ Θd       | Yes
X := X0, Y := Y(θ)     | N := I, M := I − Y(θ)X0   | θ̇ ∈ Θd       | Yes
X := X0, Y := Y0       | NMT = I − X0Y0            | θ̇ unbounded  | Yes
using Theorem 11.1, or alternatively Theorems 11.2-11.3, for any matrix functions X(.) and Y(.) of θ. Such an approach ignores possible time variations of θ and provides neither performance nor stability guarantees for the closed-loop system in the face of time variations. With the same choice of matrix functions X(.) and Y(.), but the rate of variation of θ being confined to a compact Θd, row 2 says that there is no known technique to compute a practically valid gain-scheduled controller. In rows 3 and 4, we have made the conservative choices that X or Y is a constant matrix variable. In both cases, the gain-scheduling problem with bounded rate of variation admits practically valid controller solutions, provided the variables N and M are adequately selected in Theorems 11.1 and 11.3. With further conservatism, that is, θ̇ unbounded, row 5 says that the problem is again tractable and solvable using the same techniques. The case of time-varying parameters with bounds on the rate of variation can be constructively handled by the choices of rows 3 and 4. However, due to the loss of duality in the variables X and Y, such choices are not equivalent. As a consequence, there are some problems for which it is better to take a parameter-dependent X and a constant Y, while others will require the converse. Hence, both alternatives must be tried to get a less conservative design. In the controller construction scheme, the variables N and M are subject to the algebraic constraint I − XY = NMT, from which one easily infers the identity
In light of this identity, a practically valid gain-scheduled controller in the cases of rows 3 and 4 can be derived using the same formulas (11.9) and (11.10) but with AK suitably updated to
The same formulas are still valid for the case of frozen-in-time parameters, row 1, and for arbitrarily varying parameters, row 5, the variables X and Y being replaced by their constant values X0 and Y0 in the latter case. Summing up, Table 11.1 displays the options for handling any situation, from frozen-in-time parameters to arbitrarily time-varying parameters. However, the case in which both X and Y depend on θ with a bounded θ̇ still resists a convex formulation for a practically valid controller.
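For completeness, the identity invoked above follows from differentiating the factorization constraint; a brief sketch in the chapter's notation:

```latex
% Differentiating  I - XY = N M^T  with respect to time gives
\[
  -\dot{X} Y - X \dot{Y} = \dot{N} M^T + N \dot{M}^T .
\]
% For row 3 of Table 11.1 (Y = Y_0 constant, M = I), N = I - X Y_0 and
% hence \dot{N} = -\dot{X} Y_0, so the rate-dependent terms are carried
% entirely by \dot{X}.
```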
11.4
Reduction to finite-dimensional problems
Even with the simplifications of Table 11.1 in place, the characterizations of Theorem 11.1 or 11.2-11.3 involve the solution of a convex but infinite-dimensional and infinitely constrained problem. This is the price to pay for allowing a general parameter dependence in the plant (11.1). Generally speaking, there is no systematic rule for selecting the
functional dependence of the matrix functions X and Y on θ. We are therefore led to some simple heuristics in order to simplify the computation of solutions to the LMI problems (11.6)-(11.7) or (11.11)-(11.13). A simple but practical technique has been proposed in [424]. The key idea is to "mimic" the parameter dependence of the plant in the Lyapunov function variables X and Y. Interestingly, the same idea can be used in the more general context of the basic characterization of Theorem 11.1. In return, this offers new potential approaches for the synthesis of gain-scheduled controllers with multiple objective constraints (mixed H2-H∞, pole clustering, and others still to be found). To be more specific, consider the class of plants (11.1) having a linear-fractional transformation (LFT) dependence on nonlinear functions of the scheduled variable, that is, whose state-space data further satisfy
where ρi(.), i = 1, . . . , N, are differentiable functions of θ. Note that such a description encompasses many practical situations, since most systems in aeronautics and robotics can be represented as an LFT in nonlinear functions of the time-varying parameters. Copies of the plant's nonlinear functions, ρi(.), can be introduced into the quadruple and the pair (X(.), Y(.)) in an affine fashion,
and
The functional dependence of X and Y being fixed, the coefficient matrices of the expansions (11.21) and (11.22) play the role of decision variables in the infinitely constrained LMI problems (11.6)-(11.7) or (11.11)-(11.13). A simple remedy for turning such problems into a finite set of LMIs is to grid the value set of θ [424]. Since the derivative θ̇ appears linearly in the LMIs (11.6) and (11.11)-(11.12), there is only need to check the extreme points of the set Θd, denoted T, for all admissible values of θ. The overall procedure can be described as follows. Step 1. Define a grid Q for the value set of θ. Step 2. Minimize γ subject to the LMI constraints associated with Q × T. Step 3. Check the constraints with a denser grid. Step 4. If Step 3 fails, increase the grid density and return to Step 2. Computing solutions (11.21) and (11.22) to the LMI system associated with Q × T is a convex optimization that can be solved by polynomial-time algorithms [296, 63] and the software [153]. Such problems generally require a large number of variables and
constraints that today limit the scope of application of such techniques. With this considered, available solvers are still efficient for problems of reasonable size, say, up to 15 states and 2 or 3 scheduled variables. LMI-based gain-scheduling techniques have proven very powerful in a number of delicate applications [424, 33, 34, 21]. When restricted to the parameterization (11.21) and (11.22), the basic and projected characterizations are no longer equivalent. In the first one, we have further restrictions on the structure of the quadruple (AK(.), BK(.), CK(.), DK(.)). As a result, the first approach is generally more conservative, although we have observed very little difference in practice. See the application section 11.6 for comparisons. From a complexity viewpoint, the first technique requires a larger number of scalar variables to be optimized, the number of additional variables being approximately n(n + m2 + p2)L, where L is the number of blocks in the LFT (11.20). Its scope of application is therefore more restricted. In contrast, the controller equations resulting from the basic characterization are significantly less complex than those resulting from the projected characterization. Note that such controller constructions are essentially dominated by matrix inversions and QR decompositions in (11.8)-(11.10) and (11.15)-(11.17). At each sampling time, the basic characterization essentially requires 1 matrix inversion, whereas the projected characterization requires 2 QR decompositions and 3 matrix inversions for problem (11.14), plus 2 matrix inversions for the computation of AK, BK, CK by exploiting partitioning. Thus, in the light of these comments and because the expressions (11.21) essentially reduce to scalar-by-matrix multiplications, controllers resulting from the first technique are more easily implemented for rapidly varying LPV systems.
In addition, these controllers have an LFT representation in terms of the nonlinear functions, ρi(.), and hence are computationally comparable with those of the LFT gain-scheduling approaches in [312, 19]. Note also that for both techniques, the most computationally demanding step comes from the inversion of the term I − X(θ)Y(θ), typically a large matrix. It is sometimes possible to exploit rank deficiencies in the Xi's and the Yi's to further reduce computational effort, for instance, by using inversion with rank correction formulas. A simple case that does not require the inversion of I − X(θ)Y(θ) is when the LFT system (11.20) depends on a single nonlinear function ρ1(θ). Noting that
the inverse of I − X(θ)Y(θ) can be computed from the lower-left block in the above expression. Moreover, if we assume without restriction that 0 is in the image set of ρ1(.), then
The positivity condition in (11.23) ensures that the matrices
218
Apkarian and Adams
are simultaneously diagonalizable. Hence there exists a congruence transformation T and a diagonal matrix Λ1, computed offline, such that for any value of the map ρ1(.),
Therefore, a cheap way of computing (I − XY)−1 at each sampling instant is simply to invert the diagonal matrix in (11.24) and perform multiplications of the corresponding blocks. Since they offer complementary advantages, the techniques described above can be used together to yield a more effective methodology. Confirmed by practical experience, the following rules have proven useful.
1. All necessary tunings requiring repeated computations should be based on the less costly projected technique.
2. The procedure is completed by running the basic technique for controller implementation purposes. Although the last phase may be very slow, it is run only once in the whole design process.
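The "inversion with rank correction" idea mentioned earlier can be made concrete with a small sketch. When the parameter-dependent parts of X and Y are low rank, I − X(θ)Y(θ) becomes a fixed matrix plus a low-rank update, which the Sherman-Morrison-Woodbury identity inverts with only a small online solve. All matrices below are illustrative, not taken from the chapter.

```python
import numpy as np

# Sketch of "inversion with rank correction": I - X(theta)Y(theta) is
# modeled as a fixed matrix M0 plus a low-rank update rho(theta)*U V^T,
# and the Woodbury identity inverts it with only an r x r online solve.
rng = np.random.default_rng(0)
n, r = 8, 2                               # state dimension, correction rank

M0 = np.eye(n) - 0.05 * rng.standard_normal((n, n))   # fixed, well conditioned
U = 0.1 * rng.standard_normal((n, r))                  # low-rank factors
V = 0.1 * rng.standard_normal((n, r))

M0_inv = np.linalg.inv(M0)                # computed once, offline

def woodbury_inverse(rho):
    """Inverse of M0 + rho * U @ V.T via the Woodbury identity."""
    # (M0 + rho U V^T)^-1
    #   = M0^-1 - rho M0^-1 U (I_r + rho V^T M0^-1 U)^-1 V^T M0^-1
    core = np.eye(r) + rho * (V.T @ M0_inv @ U)        # r x r only
    return M0_inv - rho * (M0_inv @ U) @ np.linalg.solve(core, V.T @ M0_inv)

rho = 0.3                                 # stand-in for rho1(theta)
direct = np.linalg.inv(M0 + rho * U @ V.T)
assert np.allclose(woodbury_inverse(rho), direct)
```

The online cost is dominated by the r × r solve and a few matrix-vector products, instead of a full n × n inversion at every sampling instant.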
11.4.1
Overpassing the gridding phase
As discussed earlier, there is no direct technique to bypass the gridding phase and hence make the design more direct. Under special circumstances (LFT, affine, or polynomial parameter dependence of data and variables, or polytopic approximation of the original plant), techniques such as the S-procedure or multiconvexity concepts can be used repeatedly to obtain a finite number of (sufficient) LMI conditions. See the conference version of the present work [18] and also [135, 149] and references therein.
11.5
Reducing conservatism by scaling
As is common in robust control theory, it is possible to further enhance the design procedure by exploiting structural information on the operator relating the signals w and z. In fact, the conditions of Theorems 11.1 and 11.2 also provide robust stability conditions in the face of
• multiplicative memoryless time-varying uncertainties,
• nonexpansive dynamic uncertainty (for instance, LTI) [64].
This description, however, ignores potential structures of the operator Δ. We therefore assume, hereafter, that the plant is governed by (11.1) with w and z subject to
where Δ is a multiplicative memoryless time-varying operator with structure
and confined to the compact set
The set of scalings associated with the structure (11.26) is defined as
We have implicitly assumed, without restriction, that the problem has been squared so that mi = pi in the subsequent derivations. With this notation in mind, a scaled version of the bounded real lemma can be established. Lemma 11.1 (structured robust stability). The LPV system governed by
and (11.25)-(11.27), with parameter trajectories θ(t) constrained by (a) and (b), is internally stable whenever there exist a parameter-dependent symmetric matrix P(θ) and a parameter-dependent scaling S(θ) in SΔ such that P(θ) > 0 and
holds for all admissible values of the parameter vector θ and of its time derivative θ̇. Proof 11.3. See Appendix A. Note that Lemma 11.1 provides robust stability conditions for nonexpansive uncertain operators Δi. These conditions also guarantee a robust L2-gain performance bound γ with respect to any input/output channel (wi, zi), with the remaining channels corresponding to uncertainties of the form described earlier. See, for instance, [64] for details. Other extensions to repeated-scalar uncertainties Δi = δi Isi are straightforward. Also important is the fact that the scaling matrix S(θ) is allowed to vary with θ. Hence, the conservatism of the gain-scheduling technique can potentially be reduced for all values of θ independently. As an immediate consequence of the above lemma, the gain-scheduling techniques in Theorems 11.1 and 11.2-11.3 are readily extended to handle the structural constraints (11.25)-(11.26). Such extensions simply follow from the substitutions
in Theorems 11.1 and 11.3 and
in Theorem 11.2.
220
Apkarian and Adams
Owing to the dependence of the modified characterizations on both S and S−1, they are no longer standard LMI problems. Such problems belong to the class of LMI problems with rank-reduction constraints and are hence difficult to solve. Here, we adopted a simple computational scheme in the spirit of μ synthesis techniques. Recall that the LMI (11.6), with the scaling S(θ) in place, can take the form
Note that this is an LMI in the variables S−1, Y, AK, and CK provided that the variables X, BK, and DK are kept fixed. Therefore, coupled with Theorem 11.1, this result suggests a first scheme to compute a best possible S−1. Now our goal is to obtain the counterpart of the projected characterization of Theorem 11.2 for computing S−1. In view of the discussion of section 11.4, this comes at a reduced computational cost. Interestingly, such a scheme cannot be directly derived from the LMIs (11.11)-(11.12). We shall make use of a partially projected characterization as follows. Applying the projection lemma [148], with respect to the (2,1) free term in (11.31), equivalent LMI conditions are
Assuming now that (11.32) holds and projecting again with respect to the controller variable CK in (11.33), the following equivalent LMI condition is obtained:
Gathered together with (11.13), the matrix inequalities (11.32) and (11.34) form an LMI system in the variables Y and S−1 with fixed X, BK, and DK. It turns out that the conditions (11.31) and (11.7) on one side, or the conditions (11.13), (11.32), and (11.34) on the other side, form the basis of two possible schemes for iteratively reducing the conservatism of the gain-scheduling techniques of section 11.2. Such schemes proceed as follows. Step 0. Setting S(θ) = I, minimize γ with the basic or the projected technique. Step 1. Compute a scaling S(θ)−1 minimizing γ with the help of (11.31) and (11.7), or alternatively (11.32), (11.34), and (11.13).
Step 2. With S(θ) fixed, perform a gain-scheduling synthesis with the basic or projected technique. Step 3. Iterate Steps 1 and 2 until convergence. As before, we use a grid of the set Θ to perform the optimizations and a user-defined functional dependence of S−1 on θ. For LFT plants of the form (11.20), a practical choice is the affine expansion
It is important to mention that the previously described iterative procedure, although not giving a global solution to the problem, has proven very efficient in practice. A demonstration is given in the following section. Unlike the standard D-K iteration procedure, it involves not only scalings and Lyapunov variables, but also some controller variables in the same optimization step. In our opinion, this is a central factor favoring convergence, both in speed and accuracy.
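The convergence behavior of such alternating schemes can be illustrated on a toy problem. The biconvex objective below is NOT the chapter's LMI system; it merely shows why exact alternating steps, each a convex subproblem over one variable group, decrease the cost monotonically, as in the Step 1/Step 2 iteration above.

```python
import numpy as np

# Toy illustration of an alternating (D-K style) iteration: the objective
# is convex in x for fixed s, and in s for fixed x, but not jointly convex.
# Each exact coordinate minimization can only decrease the objective, so
# the sequence of values is monotonically nonincreasing.
def f(x, s):
    return (x * s - 1.0) ** 2 + 0.1 * (x ** 2 + s ** 2)

def alternate(x0=0.2, s0=2.0, iters=12):
    x, s = x0, s0
    history = [f(x, s)]
    for _ in range(iters):
        x = s / (s * s + 0.1)   # exact minimizer over x with s fixed
        s = x / (x * x + 0.1)   # exact minimizer over s with x fixed
        history.append(f(x, s))
    return history

hist = alternate()
# each exact coordinate step can only decrease the objective
assert all(a >= b - 1e-12 for a, b in zip(hist, hist[1:]))
```

As in the scaled-synthesis loop, monotone decrease does not imply a global optimum, only convergence to a (possibly local) fixed point.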
11.6
Control of a two-link flexible manipulator
11.6.1
Problem description
The gain-scheduled control of a two-link flexible manipulator is a nontrivial problem. The dynamics of such a system include both rigid-body and lightly damped structural modes. The problem is complicated by uncertainty in the high frequency dynamics of the system and by the variation of the dynamics with manipulator geometry. The first of these complications drives the requirement for closed-loop robust stability, while the second drives the requirement for gain scheduling. In addition, a rapid closed-loop response to position commands is desired. The ability of a control synthesis approach to handle the trade-offs between robustness, performance, and gain scheduling with the least possible conservatism is thus critical for such a system. For example, if the gain-scheduling parameters are allowed to vary infinitely quickly, closed-loop performance and robustness will suffer. If the uncertainty structure of the design model is not considered, it will be impossible to find a controller that meets the design objectives. These trade-offs are studied and addressed in this section. SECAFLEX is a two-link flexible planar manipulator driven by geared DC motors, used as a laboratory platform for control-structure interaction experiments at CERT-ONERA in Toulouse, France. The two flexible members are homogeneous beams. There is a concentrated mass at the elbow due to the DC motor and a concentrated mass at the tip of the second beam, which is the payload. The modeling of the manipulator has been studied extensively [4]. A simple drawing of the two-link manipulator is shown in Figure 11.1. θ1 and θ2 are the shoulder and elbow joint angles, respectively; τ1 and τ2 are the corresponding control torques. The second-order form of the manipulator equations of motion is
where M is the inertia matrix, D is the damping matrix, and K is the stiffness matrix. u(t) is the input vector, u = (τ1 τ2)T. Due to the variable geometry of the system, the inertia matrix is a function of the second joint angle, θ2. This dependence causes
Figure 11.1: Two-link flexible manipulator.
Figure 11.2: σi(G(jω, θ2)) for different manipulator geometries. θ2 = 0: solid; θ2 = π/2: dashed; θ2 = π: dotted.
significant changes in the response of the system to input torques over the range of possible configurations, θ2 ∈ [0, π] (rad). If we consider an output vector, y = (θ1 θ2)T, then we can define the transfer function from u to y as G(s, θ2). With the parameter θ2 frozen in time, Figure 11.2 illustrates the variation of the manipulator dynamics with geometry by showing the singular values of G(s, θ2) at three different values of θ2. The numerical values that define this system are as follows. The dependence of the inertia matrix on θ2 can be expressed as
where
11.6. Control of a two-link flexible manipulator
223
Figure 11.3: Synthesis model.
and the damping, stiffness, and control effectiveness matrices, respectively, are
The manipulator equations of motion can be rewritten in first-order LFT form in θ2 [1],
To provide closed-loop command tracking, a simple weighted minimization of the sensitivity function is used. A frequency-dependent weight, Wp, penalizes the error, e, between the angular position commands, w1, w2, and the system response, y. By forcing this weighted sensitivity function to be less than unity, the complementary sensitivity function approaches identity at low frequencies, thus providing good command tracking. In order to account for uncertainties in high frequency dynamics, an additive uncertainty model is incorporated into the synthesis model. The additive uncertainty weight is formulated by considering the difference between the full-order geometry-dependent model, G(s, θ2), and some reduced-order design model of lower order but still dependent on manipulator geometry. Consider WI, the additive uncertainty weighting function, and ΔI, a complex uncertainty block, scaled such that ||ΔI||∞ ≤ 1. We can define the error between the full-order and reduced-order models as E(s, θ2):
The additive uncertainty weight must then provide a frequency domain bound on this error; that is,
The LPV design model has the form (11.1) and is built by combining the above performance and robustness formulations into a single multiobjective synthesis model; see Figure 11.3. The design weights used for the following examples are
Table 11.2: Performance comparisons (γ) with ignored structure.

Technique                              | γ
Basic technique with X(θ2) and Y(θ2)   | 3.82
Projected technique with X(θ2) and Y(θ2) | 3.82
LFT technique in [19]                  | 3.82
It should be noted here that, in the manipulator example, θ = θ2 is not an independently evolving external parameter; it is a state of the system. By treating θ2 as an external parameter, we are actually immersing the "quasi-linear" dynamics (11.39) into the larger class of LPV dynamics (11.29). Therefore, in guaranteeing stability and performance for a class of parameter trajectories (through bounds on θ2 and θ̇2), we ignore the fact that the trajectories themselves are defined by the plant dynamics. The result is thus a degree of conservatism in the design. The resulting controller is in the class of nonlinear controllers, since θ := θ2 is here a subvector of the measurement y := (θ1, θ2)T. It is usually referred to as a scheduled-on-the-plant-output controller [371]. It captures the nonlinearity of the plant (11.39) in θ2. The controller matrices (AK, BK, CK, DK) are instantaneously updated with respect to the plant's state θ2, as described in section 11.2.
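The instantaneous update described above can be sketched as a real-time loop. All numerical values below are illustrative placeholders, not the chapter's actual controller; the sketch only shows the pattern of refreshing the controller matrices from the measured scheduling variable at each sample.

```python
import numpy as np

# Hypothetical sketch of a "scheduled-on-the-plant-output" update loop:
# at each sampling instant the measured elbow angle theta2 (a component
# of y) refreshes the controller matrices before the state update, here
# with an affine-in-cos(theta2) dependence mirroring X = X0 + cos(theta2)*X1.
AK0 = np.array([[-1.0, 0.2], [0.0, -2.0]])
AK1 = np.array([[0.1, 0.0], [0.0, 0.1]])
BK = np.eye(2)
CK = 0.5 * np.eye(2)
dt = 0.01                                  # sampling period (s)

def controller_step(xK, y):
    theta2 = y[1]                          # scheduled variable is measured
    AK = AK0 + np.cos(theta2) * AK1        # instantaneous matrix update
    u = CK @ xK                            # control torques
    xK = xK + dt * (AK @ xK + BK @ y)      # forward-Euler state update
    return xK, u

xK = np.zeros(2)
for _ in range(100):
    y = np.array([0.1, 0.5])               # stand-in measurement (theta1, theta2)
    xK, u = controller_step(xK, y)
```

Note how cheap the per-sample work is when the parameter dependence reduces to scalar-by-matrix multiplications, which is the implementation advantage of the basic technique noted in section 11.4.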
11.6.2 Numerical examples
Different cases have been investigated that highlight the merits and some of the concerns in using the gain-scheduling methodologies detailed in sections 11.2 to 11.5. Comparisons with existing gain-scheduling techniques are also given. Case 1. In this first case, the two-block structure associated with robustness and performance is ignored. The basic and projected techniques are applied with both X and Y depending on θ2:
As discussed earlier, this amounts to assuming that the scheduled variable θ2 is frozen in time. The corresponding γ levels are compared with the LFT gain-scheduling technique in [19], a technique that puts no bound on the parameter variation rates. The results are presented in Table 11.2. Surprisingly, all techniques give the same result. The achieved value of γ is actually a lower bound, as can be checked by performing an H∞ synthesis on a nominal model (θ2 = π/2). Therefore, if the structure of the problem is ignored, the techniques here or in [312, 19] may offer no advantages over existing LFT gain-scheduling techniques. Case 2. In this second case, the problem's uncertainty structure is explicitly taken into account. The performance and robustness objectives are relaxed by introducing a fixed scaling S := diag(0.5 · 10−3 I2×2, I2×2), found by performing a standard μ synthesis with constant scaling on the nominal plant (θ2 = π/2). The application of the gain-scheduling techniques to this scaled problem leads to the bound γ = 0.58. Results are presented in Table 11.3 for different selections of the Lyapunov variables X and Y, assuming θ2 is fixed in time (θ̇2 = 0). Both the basic and projected techniques guarantee the same performance bound for all admissible values of θ2, provided that both X and Y depend on θ2. A slight degradation occurs when the dependence on θ2 is restricted to X, but the design is still acceptable.
Table 11.3: Performance comparisons of basic and projected techniques (θ̇ = 0).

Technique (γ)       | X(θ2), Y(θ2) | X(θ2), Y0 | X0, Y(θ2) | X0, Y0
Basic technique     | 0.58         | 0.95      | 25.51     | 380.60
Projected technique | 0.58         | 0.95      | 25.35     | 380.60
Conversely, when only Y depends on θ2, both techniques appear very conservative. As predicted in section 11.3, the selections of columns 2 and 3 are not equivalent, and both must be tested to reduce conservatism. The designs in column 4 use fixed X and Y matrices and thus yield much more conservative answers. The corresponding value, γ = 380.60, was also obtained using the LFT technique in [19]. The consequences of this discussion are threefold.
• Ignoring the uncertainty structure introduces undesirable limitations on the power of advanced gain-scheduling techniques. One must compromise between robustness/performance requirements and gain-scheduling objectives in order to fully benefit from such techniques.
• Trade-offs can be systematically handled through the use of scalings. Good initial guesses for such scalings can often be found on the basis of a nominal system issued from the LPV plant.
• The basic and the projected techniques give about the same results, the first one being preferable from a real-time implementation perspective.
Case 3. We investigate here the effects of a bound on the rate of variation of θ2 on the achievable robustness/performance level with the projected synthesis technique. The scaling S is as defined previously, and we assume symmetric bounds
It is intuitively clear that increasing the bound on the rate of variation of θ2 degrades both robustness and performance. For our manipulator system, a realistic bound is 100 deg./sec. As can be observed in Figure 11.4, which plots γ as a function of the rate bound, the proposed techniques perform well for rates of variation up to an order of magnitude greater than those expected. Case 4. In this last case, the iterative schemes proposed in section 11.5 are used to further improve robustness and performance. We exploit the scheme based on the matrix inequalities (11.32), (11.34), and (11.13). Recall that such schemes take advantage of parameter-dependent scalings. According to the definition of SΔ in (11.28), the scaling here assumes the special form
The Lyapunov variables have been selected in the form X = X₀ + cos(θ₂)X₁ and Y = Y₀. Practical validity of the controller is thus ensured from the analysis in section 11.3. The evolution of γ during the alternate iterations is depicted in Figure 11.5. Convergence required 12 elementary steps as described in section 11.5 and led to a best γ value of 0.30. This is obviously the best design among those attempted. The result in Figure 11.5 was achieved with the projected technique and the characterizations given in (11.32) and (11.34) for the scaling computation. The resulting scaling matrix is
Apkarian and Adams
Figure 11.4: Performance level γ vs. bound on the rate of variation. *: value of γ for unbounded derivatives; o: value of γ for frozen parameter.
Figure 11.5: Evolution of γ vs. alternate iterations.
Finally, using the above scaling, we recomputed the gain-scheduled controller with the basic technique. The same value, γ = 0.30, was obtained. Since it is of a simpler form and provides satisfactory performance, this last gain-scheduled controller is used in the subsequent analysis and simulations.
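The parameter-dependent Lyapunov variable X = X₀ + cos(θ₂)X₁ used in case 4 must remain positive definite for every admissible θ₂. A scalar sketch of that check (the coefficients X0 and X1 below are hypothetical stand-ins for the matrix variables, not the values computed in the chapter):

```python
import math

# X(theta2) = X0 + cos(theta2)*X1 must stay positive over theta2 in [0, pi].
# Scalar stand-ins for the matrix variables; X0 > |X1| guarantees positivity.
X0, X1 = 2.0, 0.8

grid = [math.pi * k / 100.0 for k in range(101)]  # theta2 grid on [0, pi]
vals = [X0 + math.cos(t) * X1 for t in grid]
print(all(v > 0 for v in vals))  # True: the minimum value is X0 - X1 = 1.2
```

In the matrix case the same gridded test would check the smallest eigenvalue of X(θ₂) at each grid point.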
11.6.3 Frequency and time-domain validations
In this section, the best-γ controller from case 4 is analyzed. In Figure 11.2 we saw how the manipulator dynamics changed with θ₂. It is now interesting to examine how the resulting gain-scheduled controller varies with manipulator geometry. Figure 11.6 shows the singular values of the underlying LTI controllers K(s, θ₂) at three different values of θ₂. We see that the gain-scheduled controller evolves as physically expected, applying higher gains when the manipulator inertias are greater (θ₂ = 0) and reduced gains when the inertias are smaller (θ₂ = π). Notice that at frequencies above 10 rad/sec, both the manipulator system and the controller are relatively independent of the parameter. Extensive nonlinear simulations of the response of the closed-loop system to various commands have been performed. For the sake of brevity, we will present only one of these. The high frequency flexible modes that were removed from the synthesis model are
11.6. Control of a two-link flexible manipulator
Figure 11.6: σᵢ(K(jω, θ₂)) for different manipulator geometries. θ₂ = 0: solid; θ₂ = π/2: dashed; θ₂ = π: dotted.
Figure 11.7: Nonlinear simulation results. Solid: θ₁, τ₁; dashed: θ₂, τ₂.
reintroduced in the simulation model. With initial conditions of θ₁(0) = 0 and θ₂(0) = 0, a step command of 180 degrees in both angles is given. This maneuver was chosen to take the manipulator through the entire range of possible dynamics as quickly as possible. The angular responses and corresponding control inputs are depicted in Figure 11.7. The rise time (3.5 seconds), settling time (5 seconds), and overshoot (< 5%) are markedly superior to those observed using either robust LTI controllers or heuristically motivated gain-scheduling approaches [1]. The maneuver is completed without violating the limits on control authority, |τ₁| < 100 N·m and |τ₂| < 20 N·m.
11.7 Conclusions
Advanced gain-scheduling design approaches for LPV systems have been presented with emphasis on the practical goals of reduced computational burden and ease of implementation. Two complementary techniques for the calculation of such controllers have been investigated that, when used together, achieve these two objectives. The methodology is completed with a new scaling technique that takes into account the uncertainty structure of multiobjective synthesis problems. The challenging problem of the control of a two-link flexible manipulator is introduced in this context and used to demonstrate the validity of the theoretical solutions.
Appendix A
Proof of Lemma 11.1. The system governed by (11.29) and (11.25)-(11.27) is stable whenever there exists a quadratic Lyapunov function V(x, θ) = xᵀP(θ)x such that P(θ) > 0 and
for all admissible trajectories x(t), w(t), z(t), θ(t), and Δ(t). The signals w and z related by (11.25)-(11.27) satisfy the quadratic constraints
where S is in S_Δ and may depend on θ. Therefore, stability is ensured whenever (11.43) holds for all trajectories x(t), θ(t) and any pair (w(t), z(t)) characterized by (11.44). By an S-procedure argument [434, 433], this is guaranteed whenever
This expression can be rewritten in the form
which is nothing other than the quadratic form condition associated with the matrix inequality in (11.30). This completes the proof of the lemma.
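The S-procedure step in this proof can be illustrated with a minimal scalar sketch. The quadratic forms f and g below are hypothetical, not taken from the chapter: to certify f(v) < 0 on {v ≠ 0 : g(v) ≥ 0}, it suffices to exhibit a multiplier τ ≥ 0 such that f(v) + τg(v) < 0 for all v ≠ 0.

```python
# Hypothetical quadratic forms (not from the chapter):
#   f(x, w) = -3x^2 + w^2   (to be shown negative on the constrained set)
#   g(x, w) =  x^2  - w^2   (the constraint g >= 0, i.e. |w| <= |x|)
# S-procedure certificate: find tau >= 0 with f + tau*g negative definite.

tau = 2.0
coeff_x2 = -3.0 + tau * 1.0    # x^2-coefficient of f + tau*g
coeff_w2 = 1.0 + tau * (-1.0)  # w^2-coefficient of f + tau*g

# A diagonal quadratic form is negative definite iff both coefficients are < 0.
certified = tau >= 0 and coeff_x2 < 0 and coeff_w2 < 0
print(certified)  # True: tau = 2 proves f < 0 whenever g >= 0 and v != 0
```

In the lemma the same idea is applied with matrix-valued multipliers S ∈ S_Δ, and the certificate search becomes the LMI (11.30).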
Acknowledgment We are thankful to P. Gahinet and G. Becker for their helpful and stimulating support during this work.
Chapter 12
Control Synthesis for Well-Posedness of Feedback Systems
Tetsuya Iwasaki
12.1 Introduction
Active use of linear matrix inequalities (LMIs) for analysis and synthesis of robust control systems dates back at least to the late 1980s [51, 69, 158, 228]. A new trend for computer-aided control design had also been started during the same period, motivated by [61, 59], which showed the nontrivial impact of convex programming on control design. Recent recognition of LMIs as a powerful tool for robust control [108] has put a tremendous impetus on the development of new control analysis and synthesis methods [64, 377] with the aid of efficient computational algorithms [63, 153, 122, 296]. A wide variety of control synthesis problems have been addressed using LMI approaches in the state-space setting, including the H∞ [148, 215], the scaled H∞ [316], the gain-scheduled H∞ [19, 312], the positive real control [398], and other problems related to the H₂ performance [209, 363]. A remarkable fact is that all these different control problems reduce to the same type of computational problem involving LMIs: Standard problem (SP). Find positive definite (possibly structured) matrices X and Y satisfying LMIs and additional rank constraints. From this observation, it is tempting to state that "many" of the control synthesis problems posed in the state-space setting can be reduced to the same computational problem stated above when a certain LMI approach is employed. Our main objective is to show explicitly a general class of control problems that reduce to the SP. The class is defined by closed-loop specifications given in terms of the well-posedness of feedback systems [211, 212, 213] and includes the H∞, the positive real, and nominal/robust/gain-scheduled, continuous-time/discrete-time control problems as special cases. We show that the existence of a controller, yielding a closed-loop system whose well-posedness can be proved by a P-separator [212], can be checked by solving the SP.
Figure 12.1: Feedback system for analysis.
A fundamental fact to be revealed here is that the troublesome rank constraints in the SP disappear and the resulting problem becomes convex if the controller is allowed to use sufficient information on the plant. This fact can also be anticipated from Packard's result [312], which showed that the gain-scheduled H∞ control problem can be reduced to an LMI optimization problem if the controller is allowed to be scheduled by the same (or a greater) number of repeated scalar parameters as the plant. Our results extend Packard's result to a more general class of control problems including the H∞ control problem considered in [312] as a special case. The generality of our problem formulation also allows us to interpret the rank constraint arising in the reduced-order control problem [148, 215] as the lack of sufficient plant information.
12.2 Well-posedness analysis
12.2.1 Exact condition
Consider the feedback system shown in Figure 12.1, where M ∈ R^{r×q} is a known constant matrix and V is an unknown matrix belonging to a known compact set V ⊂ C^{q×r}. A large class of uncertain systems can be described by this feedback system [102]. For instance, linear time-invariant (LTI) continuous-time systems subject to structured uncertainties are given by the feedback system with M consisting of the state-space matrices of the nominal part and V containing the frequency variable s and the uncertain component. The feedback system in Figure 12.1 is said to be well posed if Det(I − VM) ≠ 0 for all V ∈ V. The notion of well-posedness is in fact equivalent to robust stability/performance when M and V are chosen appropriately [108, 211]. The following result gives a necessary and sufficient condition for well-posedness. Theorem 12.1. Let M ∈ R^{r×q} and V ⊂ C^{q×r} be given. Suppose that V is a compact set. The following statements are equivalent. (i) The feedback system is well posed, i.e., Det(I − VM) ≠ 0 for all V ∈ V. (ii) There exists a Hermitian matrix Θ such that
Proof 12.1. The result is proved in [213]. Here, we give a proof for completeness. Suppose statement (i) holds. Let
for sufficiently small ε > 0. Then, using the compactness of V, it can readily be verified by substitution that (iia) and (iib) hold. Suppose (i) does not hold, but (ii) holds. We show a contradiction as follows. First note that there exist z ≠ 0 and V ∈ V such that
Let w := V*z ≠ 0. Then using
we see that (iia) and (iib), respectively, imply ζ*Θζ < 0 and ζ*Θζ > 0, which is a contradiction. This result is closely related to recent results on integral quadratic constraints (IQCs) [273] and the linear-fractional transformation (LFT) scaling [26]. One of its implications is that robust stability of a certain class of systems can be examined by searching for a matrix Θ that satisfies the conditions in statement (ii). This search is, however, nontrivial in general due to the infinitely many constraints in (iib).
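Statement (i) can at least be probed numerically by sampling the compact set and testing the determinant. A hedged sketch (the matrix M and the set V = {δI : |δ| ≤ 1} below are hypothetical illustrations, not data from the text):

```python
# Well-posedness probe: check Det(I - V*M) != 0 over a sampled compact set
# V = {delta*I : |delta| <= 1}. The 2x2 matrices M below are hypothetical.

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def i_minus_vm(delta, M):
    # With V = delta*I we have I - V*M = I - delta*M.
    return [[1.0 - delta * M[0][0], -delta * M[0][1]],
            [-delta * M[1][0], 1.0 - delta * M[1][1]]]

samples = [k / 100.0 - 1.0 for k in range(201)]    # delta grid on [-1, 1]

M = [[0.2, 0.1],
     [0.0, 0.3]]                                    # small-gain example
print(all(abs(det2(i_minus_vm(d, M))) > 1e-9 for d in samples))      # True

M_bad = [[2.0, 0.0],
         [0.0, 0.0]]                                # singular at delta = 0.5
print(all(abs(det2(i_minus_vm(d, M_bad))) > 1e-9 for d in samples))  # False
```

Such gridding can only refute well-posedness; certifying it over the entire set is exactly what the separator Θ of statement (ii) provides.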
12.2.2 P-separator
The computational difficulty caused by statement (iib) of Theorem 12.1 can be avoided by restricting the search for Θ to a certain class depending upon the specific V. To this end, consider the following special class of V:
where Vn repeats kn times in Vn^kn, and
with given real matrices Rn = Rn^T, Qn = Qn^T, and Sn. For this class of V, the following result shows a class of Θ for which condition (iib) is automatically satisfied. Lemma 12.1. Consider the set V given in (12.2) and define
Then, for any Θ ∈ Θ, statement (iib) of Theorem 12.1 holds. Proof 12.2. Let Θ ∈ Θ and V ∈ V be given. Note that
Figure 12.2: Feedback configuration for control synthesis.
is a block diagonal matrix whose nth diagonal block Πn is given by
Using Pn > 0 and Wn > 0, it can be verified that Πn > 0, and hence we conclude that Π > 0. This result provides a sufficient condition for well-posedness of the feedback system in Figure 12.1: if there exists Θ ∈ Θ such that statement (iia) of Theorem 12.1 holds, then the feedback system is well posed for all V ∈ V. This condition can be verified by searching for the parameters Pn appearing in the definition of Θ, which is a standard LMI feasibility problem. We call the class of Θ defined in Lemma 12.1 the "P-separator," since it establishes the topological separation of the graphs of M and V [166, 211, 348]. The P-separator provides a unified description of the H∞ condition with the D-scaling and the positive realness condition with a multiplier. In general, the P-separator introduces conservatism; there may be no Θ ∈ Θ such that (iia) of Theorem 12.1 holds even if statement (ii) of Theorem 12.1 holds true. However, this gap vanishes if 2s + f ≤ 3, where s and f are, respectively, the numbers of repeated scalar blocks and (unrepeated) full blocks in V. This fact can be shown using the corresponding result for the D-scaling [314]. See [212] for details.
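As a concrete instance of the unification just mentioned, consider a single norm-bounded block V = {V : ‖V‖ ≤ 1}. Under a common sign convention (an assumption here, since the chapter's equations (12.2)-(12.3) are not reproduced above), the separator condition (iia) reduces to finding P > 0 with MᵀPM − P < 0, and the simplest candidate P = I is just the small-gain test ‖M‖ < 1:

```python
# Small-gain instance of the P-separator (hypothetical 2x2 data for M):
# try P = I, i.e. check that S = M^T M - I is negative definite.

def negdef_2x2(S):
    # 2x2 symmetric negative definiteness via leading principal minors:
    # S < 0  iff  S[0][0] < 0 and det(S) > 0.
    return S[0][0] < 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

M = [[0.2, 0.1],
     [0.0, 0.3]]
MtM = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
S = [[MtM[i][j] - (1.0 if i == j else 0.0) for j in range(2)]
     for i in range(2)]
print(negdef_2x2(S))  # True: P = I already certifies this M
```

When the candidate P = I fails, the LMI feasibility search over all P > 0 described in the text is the general tool.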
12.3 Synthesis for well-posedness
12.3.1 Problem formulation
Consider the feedback system in Figure 12.2, where M is a given real matrix and V_M and V_K belong to a known compact subset of complex matrices V:
It can readily be seen that the feedback system in Figure 12.2 can be recognized as the feedback connection of V and F(M,K). We then consider the following synthesis problem:
Well-posedness synthesis problem (WSP). Given a real matrix M and a compact subset of complex matrices V, find a matrix K such that the feedback system of V and F(M,K) is well posed, i.e.,
holds. This problem is a general mathematical problem that includes many control synthesis problems as special cases, as shown in section 12.4. In the context of control design, F_u(M, V_M) is the plant, while F_l(V_K, K) is the controller. In particular, M and K correspond to state-space matrices of the plant and the controller, respectively. Choosing the "perturbation" V appropriately, various classes of control problems can be generated. For instance, V may dictate the frequency variable, uncertain dynamics and/or parameters, scheduling parameters, nonlinear elements, and infinite-dimensional dynamics, leading to such control problems as reduced-order, robust, nonlinear, delay/preview, and gain-scheduled control. Unfortunately, it is difficult to solve the WSP exactly. We will restrict our attention to a subclass of the feasible controllers defined by the P-separator and give an exact solution to the auxiliary problem. To this end, the rest of this section is devoted to defining the auxiliary problem. From Theorem 12.1, condition (12.4) holds if and only if there exists a real symmetric matrix Θ such that
Hence, the WSP reduces to the problem of finding matrices K and Θ satisfying the above inequalities. However, it is difficult to solve these inequalities directly for two main reasons: (i) condition (12.5) imposes a nonconvex constraint on K and Θ, and (ii) condition (12.6) is essentially described by infinitely many inequality constraints unless V has only a finite number of elements. Below, we introduce a specific V and overcome the second difficulty by imposing the structure of the P-separator on Θ so that (12.6) is automatically satisfied. Thus, in the statement of the auxiliary problem given later, the latter difficulty will not show up while the former remains. Consider the following class of "perturbations":
where ℓ = mn or kn (n = 1,…,a), Vn repeats ℓ times in Vn^ℓ, and Vn is a subset of complex matrices defined by (12.3) with given real matrices Rn = Rn^T, Qn = Qn^T, and Sn such that
The significance of condition (12.8) may be found in [212]. For this class of V, we present the following definition.
Definition 12.1. Consider the set V given by (12.7). Define a subset of real symmetric matrices Θ as follows. A given matrix Θ belongs to Θ if there exist real symmetric positive definite matrices Pn ∈ R^{(mn+kn)×(mn+kn)} (n = 1,…,a) such that
where R̄ is given by
and similarly for S̄ and Q̄. Lemma 12.2. Let real matrices Rn = Rn^T, Qn = Qn^T, and Sn (n = 1,…,a) be given such that (12.8) holds, and define the set V by (12.7). Consider the set Θ in Definition 12.1. For any Θ ∈ Θ, condition (12.6) holds. Proof 12.3. Fix V := diag(V_M, V_K) ∈ V and Θ ∈ Θ. Let T be an orthogonal matrix that transforms V into the following block-diagonal matrix Ṽ:
Noting the identities
and similar identities for S and Q, we have
where the last inequality can be verified in a manner similar to the proof of Lemma 12.1. Thus (12.6) holds. We have shown that any Θ ∈ Θ satisfies (12.6). Hence, by restricting the search for Θ to the set Θ, the troublesome constraint (12.6) is automatically satisfied and the WSP reduces to the following.
For example, when a = 3, we have
Auxiliary well-posedness synthesis problem (AWSP). Given a real matrix M and the subset of real symmetric matrices Θ in Definition 12.1, find Θ ∈ Θ and K such that (12.5) holds. This problem is only a (conservative) approximation of the WSP, and there is a gap between the two in general. That is, any solution K of the AWSP is also a solution of the WSP, but the converse may not be true. Nevertheless, as shown later, there are certain classes of control problems (e.g., H∞ control, positive real control, and robust stability/performance synthesis against linear time-varying (LTV) dynamic uncertainties [270, 370]) that can be formulated as the AWSP exactly. Finally, we impose an assumption on the problem data M. From Figure 12.2, we see that M has two inputs and two outputs. Accordingly, M can be partitioned as
The star product F(M, K) in (12.5) can be written as the following LFT:
where matrix L is defined by
where
x
is the size of
We assume that M22 = 0. In this case, F_L(K, L) becomes affine in K as follows:
This assumption simplifies the proof of our main results and can be removed in a standard manner (e.g., [215]).
12.3.2 Main results
This section presents a solution to the AWSP. Proofs of the results will be given in the next section. The following theorem provides a necessary and sufficient condition for solvability of the AWSP. Theorem 12.2. Let integers a > 0, mn > 0, kn > 0, and real matrices Rn = Rn^T, Qn = Qn^T, Sn (n = 1,…,a), and M be given. Define the set Θ by Definition 12.1. The following statements are equivalent. (i) There exist Θ ∈ Θ and K satisfying (12.5).
(ii) There exist real symmetric matrices
such that
where
Assuming that statement (ii) of Theorem 12.2 holds, a solution for the AWSP can be computed as follows.
Procedure for computing Θ ∈ Θ and K
0. AWSP data: M and Θ, where the set Θ is defined by Definition 12.1 in terms of a, mn, kn, Rn, Sn, and Qn (n = 1,…,a).
1. Compute matrices X and Y satisfying condition (ii) of Theorem 12.2. Denote by Xn and Yn the nth block matrices on the diagonal of X and Y, respectively.
2. For each n = 1,…,a, if kn ≠ 0, let Fn ∈ R^{mn×kn} be such that and define Pn by If kn = 0, then just let Pn := Xn.
3. Compute R̄, S̄, and Q̄ that specify Θ as in Definition 12.1, using Pn.
4. Let
where
and L is given by (12.11).
In this procedure, steps 2, 3, and 4 are straightforward and require about as much computation as several singular value decompositions. On the other hand, step 1 is not easy. Constraints (12.12)-(12.14) on X and Y are LMIs, while the rank conditions in (12.15) are not convex. It is these rank constraints that make step 1 computationally nontrivial. This type of nonconvex problem has been shown to be NP-hard [144], and no efficient algorithms are currently known to guarantee global convergence. Nevertheless, there are several algorithms that work well in practice [159, 128, 176, 217]. The troublesome rank condition disappears in certain cases. An important special case is kn ≥ mn. Note that the size of the matrices Xn and Yn in (12.15) is mn. Hence, if kn ≥ mn, the rank condition holds automatically and the computational problem in step 1 becomes an LMI feasibility problem for which efficient algorithms exist [153, 122]. Now, a natural question is, what is the "physical" significance of the condition kn ≥ mn? By definition, kn and mn, respectively, denote the numbers of the nth "perturbation" Vn in the "controller" F_l(V_K, K) and the "plant" F_u(M, V_M). Hence, the condition kn ≥ mn means that the controller has sufficient information on the plant perturbation. As we see later, the rank condition remains for a robust control setup since the information on the plant perturbation is unknown to the controller, while for the gain-scheduling problem the rank condition disappears since the plant parameters are assumed to be measurable on line. The reason why fixed (low) order control design is difficult can also be explained by a similar argument.
12.3.3 Proof of the main results
In this section, we prove Theorem 12.2. The procedure for computing a solution to the AWSP will also be justified constructively by the proof. Consider the matrix inequality in (12.5), where the variables are Θ and K. First, we ask the following question: for a fixed Θ, under what condition does there exist K such that (12.5) holds? To answer the question, we use the following lemma.
Lemma 12.3. Let matrices A, B, and C be given. The following statements are equivalent:
When statement (ii) holds, a matrix K satisfying the condition in statement (i) is given by where Proof 12.4. See, e.g., [95, 446]. Using Lemma 12.3 and noting that F(M, K) in (12.5) can be written as F_L(K, L) with L given by (12.11), we obtain the condition for the existence of K in (12.5) as follows. Lemma 12.4. Let a real symmetric matrix Θ and a matrix L be given as
Suppose Q > 0. Then the following statements are equivalent.
(i) There exists a matrix K such that
(ii) The following three conditions hold:
Proof 12.5. Let Then we have
where Φ is defined accordingly. Since Q > 0, the left-hand side of inequality (12.22) is nonnegative definite, and hence Φ > 0. Therefore, condition (12.22) can be rewritten as
Then applying Lemma 12.3, we see that there exists K satisfying (12.24) if and only if
hold. It is easy to verify that these inequalities can be written as (12.20) and (12.21). □ Lemma 12.4 provides a necessary and sufficient condition for the existence of K satisfying condition (12.18). When the condition is met, one such K can be found by applying Lemma 12.3 to (12.24), as described in step 4 of the procedure for computing Θ ∈ Θ and K in the previous section. The AWSP has now been reduced to the problem of finding Θ ∈ Θ such that (12.19)-(12.21) hold, where L is defined by (12.11) in terms of the problem data M. It should be noted here that the supposition Q > 0 in Lemma 12.4 holds for any Θ ∈ Θ. Next we show that condition (12.19) is automatically satisfied by any Θ ∈ Θ. Let T be the transformation matrix introduced in the proof of Lemma 12.2. Noting the relationships in (12.10), we have
Using the identities [73]
we have
where the last inequality holds due to (12.8) and Pn > 0 [73]. Thus (12.19) holds. In the remaining part of the proof, we will show that conditions (12.20) and (12.21) reduce to (12.12) and (12.13), respectively. For this purpose, the following lemma is useful. Lemma 12.5. Consider the set Θ in Definition 12.1 and fix any Θ ∈ Θ. Then Θ⁻¹ is given by
where R̂ is defined by
and similarly for Ŝ and Q̂. Proof 12.6. Let an orthogonal matrix T be such that
holds for all Θ ∈ Θ (see the proof of Lemma 12.2). Let R̂n := Pn⁻¹ ⊗ Rn, and similarly for Ŝn and Q̂n. Noting the relation
we have
where we used TᵀT = I. Note that, from (12.11), the matrices L₁₂ and L₂₁ can be chosen as
Using these choices and substituting Θ ∈ Θ in Definition 12.1 and Θ⁻¹ in Lemma 12.5 into (12.20) and (12.21) yields (12.12) and (12.13), respectively, where Xn and Yn are the mn × mn matrices defined by
(Here * denotes irrelevant entries.) Now we need the following lemma.
Lemma 12.6. Given integers m > 0, k > 0, and m × m real symmetric matrices X and Y, the following statements are equivalent. (i) There exists P ∈ R^{(m+k)×(m+k)} such that and Rank
(ii)
hold.
Moreover, if (ii) holds, then one such P in (i) can be constructed as
where F ∈ R^{m×k} is a matrix factor such that Proof 12.7. See, for example, [215, 315]. From Lemma 12.6, we see the necessity of conditions (12.12)-(12.15). Conversely, if there exist X and Y satisfying these conditions, one can find Fn ∈ R^{mn×kn} satisfying (12.16). Then Pn can be constructed as in (12.17), which in turn specifies Θ as in Definition 12.1. Clearly, this Θ satisfies (12.20) and (12.21). Moreover, from (12.19)-(12.21), the existence of K such that (12.24) holds is guaranteed. These K and Θ will satisfy (12.5).
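The content of Lemma 12.6 can already be seen in the scalar case m = 1, where (as a hedged paraphrase of the lemma, whose displayed formulas are not reproduced above) the conditions read X ≥ Y⁻¹ together with rank(X − Y⁻¹) ≤ k:

```python
# Scalar (m = 1) reading of Lemma 12.6: X, Y > 0 extend to a positive
# definite P of size 1 + k iff X - 1/Y >= 0 and rank(X - 1/Y) <= k.
# For k = 0 this forces X*Y = 1; for k >= 1 only X >= 1/Y is needed.

def extends_to_P(x, y, k, tol=1e-12):
    if x <= 0 or y <= 0:
        return False
    gap = x - 1.0 / y                 # scalar version of X - Y^{-1}
    if gap < -tol:
        return False                  # X >= Y^{-1} violated
    rank = 0 if abs(gap) <= tol else 1
    return rank <= k

print(extends_to_P(2.0, 1.0, k=1))    # True:  gap = 1, rank 1 <= k
print(extends_to_P(2.0, 1.0, k=0))    # False: k = 0 would need X*Y = 1
print(extends_to_P(0.5, 2.0, k=0))    # True:  X = Y^{-1} exactly
```

This is also why kn ≥ mn makes the rank condition in step 1 of the procedure vacuous: the factor F then always exists.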
12.4 Applications to control problems
In this section, we show how the general results obtained in the previous section can be specialized to solve various control problems.
12.4.1 Fixed-order stabilization
Consider the npth-order LTI plant
and the ncth-order LTI controller
where λ is either the differentiation operator s for continuous-time systems or the delay operator z for discrete-time systems. Fixed-order stabilization problem. For the npth-order plant in (12.25), design an ncth-order controller in (12.26) such that the closed-loop system is stable, where nc < np. This problem is a special case of the WSP defined and solved in the previous section. The matrices M and K in (12.4), respectively, correspond to the plant and the controller as follows:
The set V is given by
where (continuous-time case), (discrete-time case).
This V is a special case of the general class defined by (12.7). In particular, if we specify the data in (12.7) as
(continuous time), (discrete time),
with a sufficiently small scalar ε > 0, then we have the above V (except for the difference caused by ε ≠ 0). Note that assumption (12.8) is satisfied due to the introduction of ε > 0. This auxiliary parameter will be removed later. As shown above, the fixed-order stabilization problem is a special case of the WSP. In fact, for this particular class of V, the use of the P-separator does not introduce conservatism, and therefore the problem reduces to an AWSP exactly. Hence, a solution to the problem can be obtained by applying Theorem 12.2 to this special case. Theorem 12.3. The fixed-order stabilization problem admits a solution if and only if there exist real symmetric matrices X and Y satisfying
and either for the continuous-time case, or
for the discrete-time case. This result is known [209]. It is straightforward to deduce the result from Theorem 12.2 except for the treatment of ε > 0, which deserves some explanation. If we specialize (12.12) and (12.13) to the continuous-time case, we get
Since X > 0 and Y > 0, there exists a sufficiently small ε > 0 satisfying these inequalities if and only if the inequalities hold for ε = 0. Thus we obtain the conditions in Theorem 12.3.
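Once Theorem 12.3 produces a controller, closed-loop stability can be double-checked directly. A hedged sketch with hypothetical plant data (A, B, C, and the static gain K below are invented for illustration; a 2×2 matrix is Hurwitz iff its trace is negative and its determinant positive):

```python
# Hypothetical continuous-time plant x' = A x + B u, y = C x and a static
# output-feedback gain K (the n_c = 0 case). Check that A + B*K*C is Hurwitz.

A = [[0.0, 1.0],
     [2.0, -1.0]]       # open loop unstable: det(A) = -2 < 0
B = [0.0, 1.0]          # input column
C = [1.0, 0.0]          # measured-output row
K = -4.0                # candidate stabilizing gain

Acl = [[A[i][j] + B[i] * K * C[j] for j in range(2)] for i in range(2)]
trace = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
print(trace < 0 and det > 0)  # True: the closed loop is Hurwitz
```

For larger systems the same a posteriori check would use the eigenvalues of the closed-loop matrix rather than a trace/determinant test.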
Figure 12.3: Robust control system.
12.4.2 Robust control
Consider the uncertain system described by F_u(P(λ), Δ), where Δ is the uncertain component, P(λ) is the nominal transfer function, and λ is either the differentiation operator s or the delay operator z. Assume that Δ belongs to the set
where γ > 0 is a given scalar. The nominal transfer function P(λ) is of order np and has the following state-space realization:
Note that F_u(P(λ), Δ) is the transfer function from u to y when the feedback loop w = Δz is added to the above state-space system. Our objective is to design an ncth-order controller C(λ) with state-space realization
such that the closed-loop system in Figure 12.3 is robustly stable against Δ ∈ Δ. Robust control problem. Consider an uncertain system described by the LFT F_u(P(λ), Δ) with the uncertainty set Δ in (12.30) and the nominal plant P(λ) in (12.31). Design a controller C(λ) in (12.32) such that the closed-loop system in Figure 12.3 is robustly stable against Δ ∈ Δ. This is a robust stability problem. Note, however, that a robust performance specification can also be incorporated by adding a fictitious "performance block" to Δ [104]. This robust control problem can be reduced to a WSP. Let M and K be given by
and define the set V as
where Λ is the "unstable pole region" defined by (12.28). Then we see that the specification of the robust control problem is exactly given by (12.4). Thus, with the problem data defined above, the robust control problem reduces to the WSP. To solve the problem, we "approximate" the original problem by the corresponding AWSP and apply Theorem 12.2. In general, this step introduces conservatism into the result. If the uncertainty consists of LTV dynamic systems, however, this step does not change the problem and, in this case, the WSP and AWSP are equivalent¹¹ [270, 370]. Applying Theorem 12.2 to the AWSP with the problem data¹²
which defines the above V, we obtain the following result. Theorem 12.4. The robust control problem admits a solution if there exist real symmetric matrices X, Y, V, and W satisfying
and either
for the continuous-time case, or
for the discrete-time case, where P is the set of scaling matrices
commuting with the structure of Δ.
¹¹In this case, V becomes an operator on L₂ and condition (12.4) must be interpreted appropriately.
¹²Here, ε > 0 is a sufficiently small scalar. Its role is the same as that discussed in section 12.4.1.
This result was first shown in [124, 316]. A proof of this theorem can be obtained by specifying the relevant parameters such as Rn, Sn, and Qn appropriately in Theorem 12.2. Note, however, that such a procedure leads to the following conditions instead of (12.35): Rank Rank where
The equivalence of this condition and (12.35) is clear from Lemma 12.6.
12.4.3 Gain-scheduled control
Consider the linear parameter-varying (LPV) system described by the following state equation: (12.40)
where λ is the differentiation operator or the delay operator, and the time-varying parameter Δ(t) belongs to the set
for all t. Note that this LPV system is given by an LFT of P(λ) and Δ(t), where P(λ) is the transfer function for system (12.40). Our objective is to design a controller such that the L₂-induced operator norm of the closed-loop mapping from w₁ to z₁ is strictly less than a given level γ for all possible parameter variations Δ(t) ∈ Δ. This problem is basically the same as the robust control problem in the previous section if no information on Δ(t) is assumed to be given other than Δ(t) ∈ Δ. Here, we assume that additional information on Δ(t) is available. In particular, we consider the situation where the value of Δ(t) can be measured in real time but is not known exactly when designing the controller. A possible structure for the controller that fits naturally into this situation is the following [312]:
Note that this controller has the same LFT structure as the plant and can be described as F_u(C, Δ), where C(λ) is the transfer function of the state-space system given by the first equation in (12.43). The feedback system consisting of the plant in (12.40) and (12.41) and the controller in (12.43) is depicted in Figure 12.4.
Figure 12.4: Gain-scheduled control system.
Gain-scheduled control problem. Consider the LPV plant described by (12.40) and (12.41). Assume that the time-varying parameter Δ(t) belongs to the set Δ defined in (12.42) for all t. Design an LPV controller in (12.43) such that the closed-loop system in Figure 12.4 is robustly stable against Δ(t) ∈ Δ and
holds, where γ > 0 is a given scalar. This problem can be reduced to an AWSP with the problem data
where the partitions in M and K indicate how F(M, K) in the AWSP is defined, Δp is the "performance block" inserted to accommodate the performance specification, F_γ := diag(γI, I/γ) is the scaling that normalizes the maximum norm of the performance block Δp, and Λ is the "unstable pole region" defined in (12.28). Note that the above V is specified by the following choice of parameters:
where ε > 0 is a sufficiently small scalar and its role is as before. Using these parameters, Theorem 12.2 can be specialized to the known results [19, 312] as follows. Theorem 12.5. The gain-scheduled control problem admits a solution if there exist real symmetric matrices X, Y, V ∈ D, and W ∈ D satisfying
for the continuous-time case, or
for the discrete-time case, where
and D is the set of scaling matrices
commuting with the structure of Δ.
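The phrase "commuting with the structure of Δ" can be made concrete. A hedged sketch with hypothetical block sizes and numbers: for Δ = diag(δI₂, δ₂) with a repeated scalar δ, a scaling D = diag(D₁, d₂) with a full 2×2 block D₁ commutes with every such Δ, whereas a D that couples the two blocks does not:

```python
# Structured scalings commute with structured Delta; unstructured ones don't.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Delta = diag(delta*I2, delta2): one repeated scalar block, one scalar block.
delta, delta2 = 0.7, -0.3
Delta = [[delta, 0.0, 0.0],
         [0.0, delta, 0.0],
         [0.0, 0.0, delta2]]

# D with a full block on the repeated-scalar channel: commutes.
D = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 5.0]]
print(matmul(D, Delta) == matmul(Delta, D))          # True

# D coupling the two blocks: does not commute with this Delta.
D_bad = [[1.0, 0.0, 1.0],
         [0.0, 1.0, 0.0],
         [1.0, 0.0, 1.0]]
print(matmul(D_bad, Delta) == matmul(Delta, D_bad))  # False
```

This is exactly the constraint encoded by the sets P and D in Theorems 12.4 and 12.5: the scaling blocks must align with the blocks of Δ.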
12.5 Concluding remarks
We have defined a general algebraic problem, the WSP, of finding a matrix K that makes a certain feedback connection well posed. A (partial) solution to the WSP is given in terms of the exact solution to the auxiliary problem, the AWSP. The solvability condition is given by the SP indicated in section 12.1; that is, the AWSP is solvable if and only if there exist real symmetric matrices satisfying certain LMIs and additional rank constraints (see Theorem 12.2). In general, the SP is not a convex problem due to the rank constraints. No efficient algorithms are known to guarantee global convergence to a feasible solution of the SP whenever the solution set is nonempty. Nevertheless, there are several heuristic algorithms that seem to perform well in practice [159, 128, 176, 217]. The result presented here indicates
the importance of the SP and should motivate further improvement of existing algorithms and development of new ones. A fundamental fact implied by our main result (also anticipated from the result of [312]) is the following: the AWSP can be solved exactly via convex programming if all the information on the plant "perturbation" is available to the controller; otherwise, the AWSP reduces to a nontrivial SP with nonconvex rank constraints. We stress that the plant "perturbation" may include not only the actual parameter variation as in [312] but also more general dynamical elements such as the frequency variable and infinite-dimensional or nonlinear components. In section 12.4, we have shown that our result can be specialized to existing results that solve important control problems. The basic strategy is to describe the control problem in question as a special case of the WSP or AWSP by appropriately specifying the "perturbation" V. Typically, the set V is defined in terms of the frequency variable λ and the uncertainty Δ. In fact, the class of control problems reducible to the WSP or AWSP can be enlarged by considering a wide variety of possible V. For instance, V may dictate infinite-dimensional system components such as e^{−sτ} and nonlinear components such as the limiter.
This page intentionally left blank
Part V
Nonconvex Problems
This page intentionally left blank
Chapter 13
Alternating Projection Algorithms for Linear Matrix Inequalities Problems with Rank Constraints Karolos M. Grigoriadis and Eric B. Beran 13.1
Introduction
A large number of controller synthesis problems, such as full-order stabilization, HQQ, ^-synthesis with constant scaling, robust gain-scheduling, and other full-order control design problems, have been formulated as convex feasibility or optimization problems involving linear matrix inequalities (LMIs) [315, 64, 216, 215, 148, 19, 43]. The book by Skelton, Iwasaki, and Grigoriadis [377] presents a collection and a unified formulation of many of these problems. Recently developed interior-point algorithms provide efficient computational tools for numerical solution [401, 404, 296, 403] and newly develop computer-aided control design software packages have been based on these algorithms; see, for example, the LMI Control Toolbox for Matlab [153, 150] and the Semidefin Programming Package [401, 402] with user friendly interfaces [68, 122]. These algorithms converge in polynomial time, and the problem structure is exploited to increase computational efficiency. The above LMI control design problems provide controllers of order equal to the order of the generalized plant. However, controller implementation constraints often dictate the use of low-order controllers because of simplicity, hardware limitations, or computational reasons; e.g., [448] discusses the controller-order limitations of the Hubble Space Telescope pointing control system. Many control design problems where the order of the controller is less than the order of the generalized plant can be formulated as LMI problems with an additional matrix coupling rank constraint that destroys the convexity of the optimization problem. Hence, interior-point algorithms with guaranteed convergence cannot be used to obtain a solution. Heuristic algorithms to address rankconstraint problems for low-order control synthesis have been proposed recently [94,124, 217, 128, 282], but convergence of these algorithms to a solution is not guaranteed. 
In this context, we present an approach based on alternating projection techniques 251
Grigoriadis and Beran
252
for low-order control design problems described by LMIs and coupling matrix rank constraints. Similar techniques have been used successfully in the past to solve image reconstructions and statistical estimation problems [84, 440]. In a general formulation, the idea behind these techniques is the following: Given a family of closed convex sets the sequence of alternating projections onto these sets converges to a point in the intersection of the family. Several variations of this standard approach have been proposed to solve minimum-distance problems with respect to a family of convex sets [70, 188] and to accelerate the rate of convergence using information about the direction toward the intersection [180]. We are interested to solve feasibility and optimization problems involving nonconvex matrix rank constraints. However, in such a case convergence of the alternating projection algorithms can be guaranteed only locally, i.e., when the initial starting point is in a neighborhood of a feasible solution [84]. Alternating projections for nonconvex constraint sets have been used in the past in image reconstruction problems with satisfactory performance [416]. In this work, the method of alternating projections combined with efficient semidefinite programming (SP) algorithms is presented and applied to the solution of fixed-order control synthesis problems. Extensive numerical experimentation is provided to assess the computational efficiency and reliability of the methods.
13.2
LMI-based control design with rank constraints
In this section we formulate the fixed-order stabilization with prescribed degree of stability and the HQQ control problems in terms of LMIs and a coupling matrix rank constraint. An algebraic approach is followed to eliminate the unknown matrix parameters from the design problem. When this LMI problem with the rank constraint is solved, then the desirable fixed-order controller is obtained via the solution of an additional LMI problem or via the use of explicit formulas. This approach follows the work of Iwasaki and Skelton [215] and Apkarian and Gahinet [19]. A similar mathematical formulation can be obtained for a large class of fixed-order control design problems that result in LMIs with additional rank constraints [377]. Hence, a unified approach can be followed for the solution of these problems.
13.2.1 Control problem formulation We consider linear time-invariant (LTI) systems of the form
where x € Rn is the state vector of the system, u € Rn" is the control input, w € Rni" is the external input (that includes plant disturbances and measurement noise), z € Rn* is the regulated output, and y G Rn" is the measured output. Without loss of generality we assume that Dyu = 0. We seek to design LTI controllers of fixed order nc
where xc € Rnc is the controller state. The matrix G contains all the unknown controller parameters.
13.2. LMI-based control design with rank constraints
253
If we augment the open-loop system (13.1) with the states corresponding to the controller (13.2), we obtain the augmented system
We have introduced the abbreviations
which allow us to write the control law (13.2) as u = Gy. Connecting the system (13.1) and the controller (13.2), and eliminating the variables u and y, we obtain the closed-loop system
where the closed-loop system matrix is an affine function of the unknown controller parameters as follows:
13.2.2 Stabilization with prescribed degree of stability To examine the stabilization problem consider the following representation of the system (13.1) that ignores the external input and the regulated output variables
The closed-loop system x = Ax — (A + BuGCy)x can be computed via (13.5) by deleting the appropriate columns and rows that correspond to the external input and the regulated output. We will say that the system x = Ax is stable with prescribed degree of stability a (or a-stable) if all the eigenvalues of A are to the left of the line -aj in the complex plane, where a is a given positive scalar. The following result provides necessary and sufficient conditions for a-stability. Theorem 13.1. Let A be a given square matrix and a be a given positive scalar. Then the following statements are equivalent: (i) The system x = Ax is a-stable. (ii) There exists a matrix Y > 0 such that (A + a/)T Y + Y (A + al) < 0. Notice that a-stability guarantees that the free response of the system decays to zero faster than e~at. We seek to design a controller of fixed-order nc to guarantee that the closed-loop system x = Ax is a-stable; that is, we are interested to solve the following problem:
254
Grigoriadis and Beran
Fixed-order a-stabilization problem. Consider the system (13.6) and let a be a given positive scalar. Find a controller (13.2) of order nc, if it exists, such that the closed system is a-stable. Using the matrix inequality condition (ii) of Theorem 13.1 we obtain the followin formulation of the fixed-order a-stabilization problem: Find a parameter matrix Y > 0 and a controller matrix G such that
Notice that condition (13.7) is a bilinear matrix inequality (BMI) in terms of the matrix variables Y and G. Techniques to solve these type of BMIs have been considered in the literature; e.g., see [45], [400], or [281]. However, only very low order problems can be addressed effectively. In this work, our objective is to eliminate the unknown control variables from the BMI (13.7). To this end, rewrite (13.7) as follows:
The following result provides necessary and sufficient conditions for the solvability of this matrix inequality (13.8) with resect to G. Lemma 13.1 (elimination lemma). Let matrices B € R nxm , C e R fcxn , and Q = QT G R n x n be given matrices. Consider the set of matrices
Then the following statements are equivalent: (ii) The following conditions hold:
Proof 13.1. This result is an extension of Finsler's theorem; see [209]. To use this elimination lemma notice that the orthogonal complements of the coefficient matrices Ba and C% can be computed as follows:
Hence, we obtain the following necessary and sufficient conditions for the existence of a controller that solves the a-stabilization problem
By introducing the notation
13.2. LMI-based control design with rank constraints
255
where X,Y e R n x n and X 22 ,y 22 6 R,«cxn Cj aft,ei simpie matrix manipulations and the choice of we obtain the conditions
Note that the above conditions are independent of the controller order. In addition, it is possible to show via the Schur's complement formula [64, page 7] and the matrix inversion lemma [446, section 2.3] that the relationship between X and Y in (13.11) provides the conditions
and Notice also that via the Schur's complement formula, condition (13.14) is equivalent to Rank(7 — XY) < nc. The rank condition (13.14) is obtained by observing that
and noting that
Conditions (13.12), (13.13), and (13.14), where X > 0 and Y > 0, are the necessary and sufficient conditions for the existence of an a-stabilizing controller of order nc. Notice that the rank constraint (13.14) is trivially satisfied when we seek for a full-order controller of order nc = n. We will denote the set of matrices X and Y that satisfy the constraints (13.12) and (13.13) by Fa. Since Fa is the feasibility set of a coupled set of LMIs, it is a convex set. If matrices X and Y that satisfy the conditions (13.12), (13.13), and (13.14) are obtained, then the unknown controller parameter matrix G that solves the a-stabilization problem can be obtained by solving the LMI (13.7) for G, where Y is obtained by (13.11). Alternatively, algebraic expressions that parameterize all controllers that correspond to a feasible solution X and Y are provided in [377].
13.2.3
HOC control
In this section, we consider the problem of designing a controller (13.2) of order nc to bound the L2 to L2 induced norm of the system (13.1) from w to z. It is known that this induced norm is the HQQ norm of the transfer function from w to z; see, for instance, [446, Chapter 4]. We seek to solve the following control problem. Fixed-order HQQ control problem. Consider the system (13.1) and let 7 be a given positive scalar. Find a controller of the form (13.2) of order nc, if it exists, such that the closed-loop system (13.4) has HQQ norm less than or equal to 7.
Grigoriadis and Beran
256
The following bounded real lemma provides necessary and sufficient conditions for a system to satisfy a given HQQ norm bound. Theorem 13.2 (bounded real lemma). Given a system of the form
then the following statements are equivalent: the system from w to z;
is the transfer function of
(ii) there exists a positive definite matrix Y such that
By inserting the expressions (13.5) for the closed-loop matrices in the bounded real lemma condition (ii) we obtain the following BMI formulation of the HQQ control problem: Find a parameter matrix Y > 0 and a controller matrix G such that
or, equivalently,
where the second term is repeated in (*) to provide a symmetric expression. Using the elimination Lemma 13.1 and following an algebraic procedure similar to the a-stabilization problem of section 13.2.2 we obtain the following necessary and sufficient conditions for the HOC control problem: There exists a controller that solves the fixedorder HOQ control problem if and only if there exist positive definite matrices X and Y such that
13.3. Alternating projecting schemes
257
and
Notice that the coupling constraints (13.19) and (13.20) are same as in the a stabilization problem. We will denote the set of matrices X and Y that satisfy the convex constraints (13.17), (13.18), and (13.19) by r Hoo . When the feasibility problem with constraints (13.17)-(13.20) is solved with respect to X and Y, then a fixed-order H^ controller that solves the design problem is obtained by solving the LMI (13.15) with respect to G or via the explicit formulas presented in [215, 377]. It is emphasized that a large class of many other fixed-order control design problems such as linear quadratic control, covariance control, positive-real control, /i-synthesis with constant scaling, and linear parameter-varying control can be formulated in a similar mathematical framework via linear matrix inequalities and a coupling matrix rank constraint; see [377].
13.3
Alternating projecting schemes
Based on the formulation provided in section 13.2, the fixed-order control design problem reduces to a feasibility problem of obtaining a matrix pair that satisfies a family of LMIs and a coupling matrix rank constraint. In this section, alternating projection methods are presented for the solution of this type of feasibility problem. These methods exploit the geometry of the design space to find feasible solutions, and they have been used successfully to address image restoration and statistical estimation problems in signal processing [84, 440, 416]. Both the standard alternating projection method and the directional alternating projection method are presented [70, 188, 180].
13.3.1
The standard alternating projection method
Consider a family C\, £2, • • • , Cm of closed, convex sets in the space of symmetric matrices. We suppose that the sets have a nonempty intersection, and we seek to solve the feasibility problem of finding a matrix in the intersection C\ {\Ci H • • • (\Cm. Let Pct denote the orthogonal projection operator onto the set C$, where i = l,...,ra. Hence, for any n x n symmetric matrix X, the matrix Pc^X) denotes the orthogonal projection of X onto Cj, that is, the matrix in Ci that has minimum distance from the matrix X. The orthogonal projection theorem [255] guarantees that this projection is unique. The question we would like to answer is the following: Is it possible to provide a solution to the feasibility problem by making use of the orthogonal projections onto each constraint set? The answer is yes and is provided by the following result, which we call the standard alternating projection theorem [76, 180]. Theorem 13.3. Let XQ be any n x n symmetric matrix and C\,C<2,... ,Cm be a family of closed, convex sets in the space of symmetric matrices. Then if there exists an intersection, the sequence of alternating projections
258
Grigoriadis and Beran
converges to a point in the intersection of the sets, i.e., Xj —»• X, where If no intersection exists, the sequence does not converge. Hence, starting from any symmetric matrix, the sequence of alternating projections onto the constraint sets converges to a solution of the feasibility problem, if one exists. The case of no intersection can be detected by examining the convergence of the even and odd subsequences of the above sequence of alternating projections. A schematic representation of the standard alternating projection method is shown in Figure 13.1. It can be easily verified that the limit X of the alternating projection sequence depends on the starting point XQ, as well as on the order of the projections. Hence, by rearranging the sequence of projections we can obtain a different feasible point.
Figure 13.1: Standard alternating convex projection algorithm. An important feature of the standard alternating projection algorithms (13.21) is that the algorithms can be implemented very easily, and usually the amount of calculations in one iteration is very small. However, in some cases the algorithms may suffer from slow convergence. For example, consider the case of two planes intersecting with a small angle. In this case the standard alternating projection algorithm (13.21) might oscillate for many iterations between the two sets before it converges to a point in the intersection. An effective remedy is often obtaioned by the directional alternating convex projection Algorithm, described below [180].
13.3.2 The directional alternating projection method The directional alternating projection method uses information about the geometry of the constraint sets to provide an algorithm with accelerated convergence to solve the matrix feasibility problem. The basic idea behind this approach is to utilize in each iteration the tangent plane of one of the constraint sets, so that the sequence of points
13.4. Mixed alternating projection/SP (AP/SP) design for low-order control
259
we obtain approaches the intersection of the sets more rapidly (see Figure 13.2). For simplicity, we will consider the case of two closed and convex constraint sets C\ and £2The directional alternating convex projection algorithm is described next.
Figure 13.2: Directional alternating projection algorithm. Theorem 13.4. Let XQ be any nxn symmetric matrix. Then the sequence of matrices {Xi}, i = 1, 2, . . . , oo, given by
converges to a point in the intersection of the sets Ci and €2Hence, starting from any symmetric matrix, the sequence of directional alternating projections (13.22) provides an accelerated numerical algorithm to obtain a feasible matrix in the intersection of the constraint sets C\ H^2- In fact, it can be easily verified that when the two sets C\ and €2 are hyperplanes in the space of symmetric matrices, then the alternating projection algorithm converges to a feasible point in one cycle, independently of the angle between the two hyperplanes.
13.4
Mixed alternating projection/SP (AP/SP) design for low-order control
In this section, we describe how to solve low-order control problems by exploiting the alternating projection technique described in the previous section and efficient SP algorithms. The presentation given here was originally presented in [44].
260
Grigoriadis and Beran
Recall that the fixed-order control design problem has a solution if and only if there exist a matrix pair (X, Y) in the intersection of a family of LMI constraint sets and a rank-constraint set. Let's denote by FConvex the LMI constraint that the matrices X and Y need to satisfy and define the constraint set
Hence, a necessary and sufficient condition for the existence of a controller for fixed order nc is that there exists (X, Y) € FCOnvex n ZUc. The set ZUc that restricts the controller order is a nonconvex constraint set. Also, notice that, depending on the specific control design problem, the set Fconvex corresponds to the set Fa or the set FH^ defined in section 13.2. The alternating projection scheme presented in the last section can now be used to find a matrix pair (X, Y) in the intersection of the convex constraint set FCOnvex and the nonconvex rank-constraint set Znc. Our first task is to compute the projections onto the convex constraint set Fa or FH^ • One approach is to decompose the LMI constraints as intersections of sets of simpler geometry and to find analytical expressions for the projection operators onto these simpler sets. This approach was followed in [176]. However, the iterative projections onto these multiple sets require a large number of iterations for convergence resulting in computationally expensive algorithms. Alternatively, the projection onto the set Fa or FH^ can be computed using SP for convex optimization. Hence, the multiple projections required in [176] onto the set FCOnvex are eliminated, and faster convergence can be achieved. The following result, see [64], provides the projection onto a general LMI constraint set F as the solution to an SP problem. Proposition 13.1. Let F be a convex set described by an LMI. Then the projection X* = PpX can be computed as the unique solution Y to the SP problem
In our problem, our objective is to compute the projection onto the joint set Fconvex C Sn x <Sn of matrix pairs (X, Y). This projection can be found by solving the following SP problem, where (XQ,YQ) are the given matrices that we seek to find the projection, and X, y, 6", T € <Sn are the free variables
We denote the minimizing solutions by (X*,y*); that is, the projection onto FCOnvex is written as
13.4. Mixed alternating projection/SP (AP/SP) design for low-order control
261
In addition to the above LMI constraints sets, we seek to compute the orthogonal projection onto the nonconvex constraint set ZUc. To this end, define the following sets in the space of symmetric matrices
and
Then the connection between
Notice that the sets T> and P are closed convex sets, where 72.^ is the only nonconvex set. The expressions for the orthogonal projections onto these sets are provided next.
Theorem 13.5. Let The orthogonal projection, Z* = PpZ, of Z onto the set X> is given by
The orthogonal projection onto the set P is provided by the following result, which follows from [193]. Theorem 13.6. Let Z G <Sn and ]et Z + J = LKLT be the eigenvalue-eigenvector decomposition of Z + J, where A is the diagonal matrix of the eigenvalues and L is the orthogonal matrix of the normalized eigenvectors. The orthogonal projection, Z* = PpZ, of Z onto the set P is given by
where A_ is the diagonal matrix obtained by replacing the negative eigenvalues in A by zero. Hence, this projection requires an eigenvalue-eigenvector decomposition of the 2n x 2n symmetric matrix Z + J. We note that the rank-constraint set 72.^, defined by (13.27), is a closed set, but it is not convex. Therefore, given a matrix Z in <S2n, there might be several matrices in 72* that minimize the distance from Z. We will call any such matrix a projection of Z on 7£fc. The following result provides a projection onto the set 72* [198]. Theorem 13.7. Let Z e Sn and let Z + J = UY,VT be a singular value decomposition of Z + J. The orthogonal projection, Z* = Pnk Z, of Z onto the set 72* is given by
where S& is the diagonal matrix obtained by replacing the smallest In — k = n — nc singular values in Z -f J by zero.
262
Grigoriadis and Beran
Notice that the sequence of the projections onto the two sets P and Tlk can be computed in one step via the eigenvalue-eigenvector decomposition of Z + J followed by zeroing the appropriate number of the smallest eigenvalues. If we denote this sequence of projections by Pp-R.k Z, then the directional alternating projection can be used to find the projection onto the set ZUc via the following sequence of iterations:
We will call this step of our alternating projection algorithm an inner iteration. Hence, the above inner iteration provides the projection Pznc (X, Y) of (X, Y) onto Zn<. . The alternating projection algorithm for the fixed order control problem can now be programmed utilizing SP for the projection onto rconvex and the above inner iteration scheme for the projection onto ZUc . The proposed procedure is the following: First find a solution that corresponds to a full-order controller. This is simply done by solving an LMI feasibility problem for the constraint set rc0nvex- Next, obtain a solution that corresponds to a controller of order at most nc — 1. This can be done via the SP problem
The obtained solution will be the starting point for our alternating projection algorithm. The steps of this alternating projection algorithm are as follows. Notice that in order to have fast convergence the directional alternating projection algorithm is used as mentioned in section 13.3.2. Step 1. Solve the SP problem (13.34) that corresponds to a controller of order at most nc = n — 1. The solution (Xo,Y0) will be our starting point. Step 2. Consider the problem where the controller order is reduced by one; i.e., set nc = nc — 1. Compute the following iterative sequence of projections:
Continue until (X^Yi) G rconvex n Znc, or until infeasibility has been detected. Step 3. Return to Step 2, until the controller order nc is the desired one.
263
13.5. Numerical experiments
We will call each sequence of iterations in Step 2 an outer iteration, as opposed to the inner iterations for the projection onto ZHc. It is emphasized that global convergence of the above alternating projection schemes is not guaranteed. Hence, it is possible that a specified low-order controller order might not be achieved even though such a controller exists.
13.5
Numerical experiments
In this section extensive numerical experimentation is provided to assess the efficiency and the complexity of the proposed alternating projection schemes for the solution of LMI problems with rank constraints. The fixed-order a-stabilization problem for a nth-order spring-mass system, a helicopter system, and the static output feedback stabilization problem for randomly generated systems are considered. The results were obtained on an HP735 workstation using Matlab version 4.2 and the SP package by Boyd and Vandenberghe [401].
13.5.1 Randomly generated stabilizable systems The first numerical experiment considers randomly generated static output-feedback stabilizable systems. A stabilizable system is obtained by reflecting the eigenvalues of randomly generated matrices via an eigenvalue-eigenvector decomposition, where the positive eigenvalues are replaced with their negative values. Then a product of arbitrary input, feedback gain, and output matrices is subtracted from this matrix to guarantee that the result is static-output-feedback stabilizable. By shifting all eigenvalues by an amount —a, where a is a positive scalar, desired a-degree of stability can be introduced in the static output feedback problem. Table 13.1: Sizes for the random generated experiments. Case a b c d e f
System order n 4 6 8 4 6 8
Number of inputs nu I 2 3 3 4 5
Number of outputs ny 1 2 3 2 3 4
We seek to obtain the lowest-order a-stabilizing controller obtained via the proposed alternating projection methods. We will compare this result with the following Kimura bound &{,, which provides an upper bound on the control order for the stabilizing control problem [229] This bound is for a = 0. Table 13.1 shows the cases we have considered. Notice that the Kimura bound kf, is equal to k\> = 3 for cases a, b, and c, and kb = 0 for cases d, e, and f. For each one of the above six cases 200 hundred random experiments were carried out. A degree of stability a = 0.1 has been introduced in the randomly generated
Grigoriadis and Beran
264
systems, and the objective is to obtain the lowest-order stabilizing controller that places the closed-loop poles to the left of —a. Tables 13.2 and 13.3 show the results for each one of the cases considered in Table 13.1. Table 13.2: The results for the random example cases a, b, and c. All controllers were of order 0 or 1, as expected with the Kimura bound kb = 3. The CPU time is the average time used on all examples. Controller order
0
Outer iterations 0 1 2 3 4 5 6 7-38
1 Average CPU time
Case a Rate # 176 88.00 0 0.00 3 1.50 3 1.50 4 2.00 3 1.50 0 0.00 6 3.00 5 2.50 18.71s
Case b Rate # 177 88.50 1.00 2 3.50 7 4 2.00 3 1.50 0.00 0 1.00 2 5 2.50 0 0.00 15.93s
Case c Rate # 184 92.00 3 1.50 7 3.50 2 1.00 1 0.50 0 0.00 1 0.50 1 0.50 1 0.50 26.15s
From these results it is observed that in the majority of the experiments, the lowestorder achievable controller is obtained in 0 iterations, that is, by solving the convex problem (13.34) as described in section 13.4. In all the experiments, the lowest-order achievable controller is of order lower than or equal to the Kimura bound &{,. In 5 experiments for case a, the lowest-order achievable controller was 1, instead of the zeroth order that is guaranteed by the construction of the experiments. Table 13.3: The results for the random example cases d, e, and f. All controllers were of order 0, as expected with the Kimura bound kb = 0. The CPU time is the average time used on all examples. Outer iterations 0 1 2 3 4 CPU time
Case d Rate # 200 100.00 0 0.00 0 0.00 0.00 0 0 0.00 1.17s
Case e Rate # 199 99.50 0 0.00 0 0.00 1 0.50 0 0.00 3.19s
Casef Rate # 196 98.00 0 0.00 3 1.50 0 0.00 1 0.50 11.18s
13.5.2 Helicopter example The following example is from [226]. The goal is to obtain a static state feedback controller for the following helicopter model such that the closed-loop poles are located to the left of -a = -0.1 at the complex plane. The system is of the form (13.6), and the
13.5. Numerical experiments
265
data are the following:
The algorithm converged in zero iterations; that is, the solution of the convex minimization problem (13.34) provides a static controller
that achieves the desired objective. The closed-loop poles are -20.8379, -0.1042, and -0.2572 ± 0.9738J. The required CPU time to obtain this controller is 1.27 sec.
13.5.3
Spring-mass systems
In this numerical experiment interconnected spring-mass system models were generated. The order of the system is equal to twice the number of the interconnected masses. We first consider the ^-stabilization problem for a 2-mass/spring system. We assume that the two bodies have equal mass mi = m-i = 1 and they are connected by a spring with stiffness k = 1. We assume that only the position of body 2 is measured and a control force acts on body 1; i.e., the problem is noncollocated. A state-space representation of this system, for the noise-free case (w = 0), in the form (13.6) is the following:
where the state variables x\ and Xs are the position and velocity, respectively, of body 1; and X2 and x\ are the position and velocity of body 2. We seek fixed-order a-stabilizing controllers using the mixed AP/SP method. The results are presented in Table 13.4. The convex approach (13.34) provides controllers of order 2 for 0 < a < 0.16. The same approach gives controllers of order 3 for a between 0.16 and 1.0. In fact a controller of order 3 is in theory possible for all possible a's, since the system is controllable and observable. Controllers of order 2 are obtained for 0.16 < a < 0.42 via our proposed AP/SP method. The number of flops required to obtain the given solution was estimated as follows: The SP program provides two types of CPU time, "user time" and "system time," but only the "user time" is included in this measure. Moreover, the number of flops per CPU second is estimated to be about 105. Computing the value in flops in Matlab gives the value of 6.6104. The total amount of flops is therefore estimated to be CPU time x!05-f Matlab counted mflops. We give the best controller computed with the mixed AP/SP method. For a = 0.42 and a desired controller order of 2, the mixed AP/SP method required 13.18 mflops and 12 iterations. The computed controller is
Grigoriadis and Beran
266
Table 13.4: Results of the combined alternating projection and semidefinite programming algorithm.
3
Effort required Mega flops Outer iter.
2
Mega flops Outer iter.
nc
Prescribed degree of stability a 0.16 0.2 0.3 0.42 0.01 Same as Same as 0.29 0.33 0.29 below 0 0 0 below
0.17 0
0.16 0
3.33 4
3.58 3
13.18 12
1.0 0.26 0
NC
and the closed-loop eigenvalues
We now proceed to examine higher-order spring-mass systems. Tables 13.5 and 13.6 provide the lowest-order achievable controller and the number of iterations needed for a = 0.001 and a = 0.1 for different numbers of the system masses. Hence, either zero or one iteration of the AP/SP algorithm is enough to provide a static output feedback controller depending on the desired degree of stability a.
Table 13.5: Results for degree of stability a = 0.001. Number of masses
Number of iterations
2 3 4 5
0 0 0 0
Lowest-order achievable controller 2 3 4 5
Table 13.6: Results for degree of stability a = 0.1. Number of masses
Number of iterations
2 3 4 5
0 0 1 1
Lowest-order achievable controller 2 3 5 6
13.6. Conclusions
13.6
267
Conclusions
Alternating projection algorithms are proposed to solve LMI control design problems with coupling rank constraints. These problems appear in many reduced-order control synthesis problems. The above problems are formulated as matrix feasibility problems of finding matrix parameters in the intersection of a family of LMI and matrix rank constraint sets. Alternating projection methods combined with SP algorithms utilize the geometry of the design space along with efficient LMI solvers to obtain matrix parameters that satisfy the design constraints. Directional alternating projections provide a more efficient implementation. However, only local convergence of the alternating projection algorithms to a matrix solution that satisfies the above constraints is guaranteed. Computational experiments have been used to demonstrate the applicability of the proposed algorithms for addressing fixed-order control problems. Fixed and random benchmark examples have been developed for this objective. These results indicate that the proposed algorithms combined with SP are effective in addressing low-order controllers for small and medium-size problems.
Chapter 14
Bilinearity and Complementarity in Robust Control

Mehran Mesbahi, Michael G. Safonov, and George P. Papavassilopoulos

14.1 Introduction
In this chapter, we present an overview of the key developments in the methodological, structural, and computational aspects of the bilinear matrix inequality (BMI) feasibility problem. In this direction, we present the connections of the BMI with robust control theory and its geometric properties, including interpretations of the BMI as a rank-constrained linear matrix inequality (LMI), as an extreme form problem (EFP), and as a semidefinite complementarity problem (SDCP). Computational implications and algorithms are also discussed. The simultaneous appearance of the (unknown) variables x and y in the matrix inequality

Σ_{i=1}^{n} Σ_{j=1}^{m} x_i y_j F_ij > 0    (14.1)

for a given set of symmetric matrices F_ij ∈ R^{p×p} (i = 1, …, n; j = 1, …, m) not only provides an unexpectedly powerful formulation for a wide range of robust control problems, but it also introduces new and elegant structural and computational questions. A possible initial attempt to rename the products x_i y_j as z_ij in (14.1) and rewrite it in terms of the z_ij's as an LMI [64],
introduces yet another twist to this problem, since the unknown variables z_ij are now constrained to be related in a rather peculiar manner; for example, z_ij z_kl = z_il z_kj for all index pairs.
Since at the present time this approach seems neither to offer aesthetic insight nor to suggest a computational approach for finding the variables x and y in (14.1) (or proving their nonexistence), we abandon our initial temptation and treat the problem with complete regard to its most distinguished property: bilinearity; we shall refer to this problem as the BMI. What makes the BMI an extremely important problem in robust control, however, goes far beyond an initial mathematical curiosity to make the LMI "bilinear." The BMI was introduced in response to some of the most important issues facing the field of robust control, in particular those that cannot be addressed in the LMI framework [167], [354]. The chapter presents various facets of the research performed by the authors and their colleagues on the BMI problem. The topics to be covered are categorized as 1. methodological, 2. structural, and 3. computational. Under the first category, we present the very important results pertaining to the formulation of robust synthesis problems as a BMI. In particular, we cover reformulating μ/km synthesis [103], [349], along with specifications on the controller order and structure, as a BMI. Our presentation in this part is based on the results reported in Safonov, Goh, and Ly [354] and is influenced by the dissertations of Goh [164] and Ly [260]. We then proceed to present some of the structural aspects of the BMI. It turns out that investigations into the geometry of the solution set of the BMI lead to some very interesting nonconvex and convex programming problems over cones. In particular, we discuss the results pertaining to the equivalence of the BMI with examining the magnitude of the diameter of a certain convex set [356]. The results connecting the BMI to a class of cone optimization problems, namely, the SDCPs [208], [280], [281], [279], [283], will also be given particular attention.
The computational methods for solving the BMI are briefly reviewed in the final section. Section 14.1.1 introduces the relevant preliminaries.
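The "peculiar" couplings among the renamed variables z_ij = x_i y_j mentioned at the start of this section are easy to see numerically. The sketch below is an illustration with arbitrary hypothetical data (dimensions and seed chosen here): packing the z_ij into a matrix Z = x y^T forces Z to have rank one, while a generic matrix violates the induced quadratic relations and hence arises from no (x, y).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
x = rng.standard_normal(n)
y = rng.standard_normal(m)

# Renaming z_ij = x_i y_j packs all bilinear terms into Z = x y^T.
Z = np.outer(x, y)

# The z_ij are not free parameters: z_ij z_kl = z_il z_kj for all indices,
# which is precisely the statement that Z has rank one.
assert np.isclose(Z[0, 1] * Z[2, 3], Z[0, 3] * Z[2, 1])
print("rank of Z:", np.linalg.matrix_rank(Z))  # 1

# A generic matrix violates these couplings, so it comes from no (x, y).
W = rng.standard_normal((n, m))
print("generic W satisfies the coupling:",
      bool(np.isclose(W[0, 1] * W[2, 3], W[0, 3] * W[2, 1])))  # almost surely False
```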
14.1.1 Preliminaries on convex analysis
Let H be a finite-dimensional Hilbert space equipped with the inner product ⟨·,·⟩ : H × H → R (e.g., the n-dimensional Euclidean space or the space of n × n matrices with the appropriate notion of an inner product defined on them). A set K ⊆ H is a cone if for all α > 0, αK ⊆ K. K is a convex cone if K is a cone and it is convex, i.e., for all α ∈ [0, 1], αK + (1 − α)K ⊆ K, or, equivalently, if K is a cone and K + K ⊆ K. A convex cone K is called pointed if K ∩ (−K) = {0} and solid if it has a nonempty interior. An extreme form (or an extreme ray) of a convex cone K is a subset E = {αx : α ≥ 0} of K such that if x = αy + (1 − α)z for 0 < α < 1 and y, z ∈ K, one can conclude that y, z ∈ E [185], [198]. The set of extreme forms (rays) of the cone K is denoted by ext K. The dual cone of a set S ⊆ H, denoted by S*, is defined to be
If S is a pointed closed convex cone, then the interior of its dual cone, int S*, is given by
A closed convex cone in a finite-dimensional Hilbert space is pointed if and only if its dual is solid [46]. In particular, the cone S_+^p of symmetric positive semidefinite matrices is a pointed, closed, and self-dual cone—the cone
that induces the ordering used in the LMI and the BMI formulations. It can easily be shown that S* is always a convex set, and that if S_1 ⊆ S_2, then S_2* ⊆ S_1*. In addition, S = (S*)* if and only if S is a closed convex cone. Given a pointed closed convex cone K ⊆ H, a linear map M : H → H is called K-positive if for all 0 ≠ X ∈ K, M(X) ∈ int K*. Furthermore, a linear map M : H → H is called K-copositive if ⟨X, M(X)⟩ ≥ 0 for all X ∈ K [46], [173], [208]. We are now ready to formulate the cone problems that are considered in section 14.3.2. The cone-LP is formulated as follows: Given a cone K ⊆ H, a linear map M : H → H, and elements Q and C in H, find Z ∈ H (if it exists) as a solution to
Similarly, the cone-LCP is formulated as follows: Given a cone K ⊆ H, a linear map M : H → H, and Q ∈ H, find Z ∈ H (if it exists) such that
The above instances of the cone-LP and the cone-LCP shall be referred to as cone-LP_K(C, Q, M) and cone-LCP_K(Q, M). When K is the nonnegative orthant in the n-dimensional Euclidean space, the cone-LP_K(C, Q, M) (14.3)–(14.5) and the cone-LCP_K(Q, M) (14.6)–(14.8) are equivalent to the familiar linear programming (LP) and linear complementarity problems (LCPs) [85], [90]. A problem that serves as a bridge between the BMI and the cone-LP/LCPs is the EFP: Given a pointed closed convex cone K ⊆ H and a linear map M : H → H, find X ∈ H (if it exists) such that
The above instance of the EFP is referred to as EFP_K(M). When K is the nonnegative orthant in the n-dimensional Euclidean space, the EFP is a trivial problem. It should be noted that the solution set of an EFP is generally nonconvex; given two extreme forms of K that solve EFP_K(M), a strict convex combination of them is not an extreme form of K. It is also important to note that the EFP requires M(X) to lie in the interior of the dual cone. This is in light of the fact that for certain important classes of linear maps M, including the map encountered in the context of the BMI, M(X) is known to lie in, but possibly on the boundary of, the dual cone, for all extreme forms X of the cone K. As shown in section 14.3.2, the EFP formulation, in spite of its nice geometrical interpretation, includes the BMI problem as a special case. Since the extreme forms of the matrix classes considered in this chapter can be characterized by their rank, the EFP formulation of the BMI also translates directly to a rank minimization problem over a convex set of matrices [281] (refer to section 14.3.2).
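Two of the facts above are easy to probe numerically. The sketch below (illustrative numpy code with arbitrary data, not part of the chapter's development) samples the self-duality of the positive semidefinite cone and verifies a hand-solved instance of the classical LCP, the nonnegative-orthant case of the cone-LCP:

```python
import numpy as np

rng = np.random.default_rng(2)

# Self-duality of the PSD cone: <X, Y> = trace(XY) >= 0 for all X, Y >= 0.
def random_psd(p):
    A = rng.standard_normal((p, p))
    return A @ A.T            # A A^T is positive semidefinite

for _ in range(100):
    X, Y = random_psd(4), random_psd(4)
    assert np.trace(X @ Y) >= -1e-10

# Orthant case of the cone-LCP: the classical LCP asks for z >= 0 with
# w = M z + q >= 0 and <z, w> = 0.  A hand-solved 2x2 instance:
M = np.eye(2)
q = np.array([-1.0, 2.0])
z = np.array([1.0, 0.0])      # candidate solution
w = M @ z + q                 # w = (0, 2): complementary to z
assert np.all(z >= 0) and np.all(w >= 0) and abs(z @ w) < 1e-12
print("LCP instance verified: z =", z, " w =", w)
```

Random sampling of course only illustrates, rather than proves, self-duality; the point is that both the conic inequality and the complementarity condition are directly checkable.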
14.1.2 Background on control theory
We use r_i and s_i to denote the dimensions of the ith input and the ith output of the finite-dimensional linear time invariant (LTI) plant G(s), respectively; the order of G(s) is
Figure 14.1: T(s) = F_l{G(s), K(s)}.
denoted by n; similarly, q and q_M designate the orders of the controller and the multiplier to be synthesized. Consider the LTI plant G(s) partitioned as
where
and A ∈ R^{n×n}, D_11 ∈ R^{s_1×r_1}, D_22 ∈ R^{s_2×r_2}, and G_ij(s) := D_ij + C_i(Is − A)^{−1}B_j. With respect to this particular partition of the transfer matrix G, the feedback connection shown in Figure 14.1 has the transfer function
where K(s) denotes a controller of order q given by
and
The inclusion of the uncertainty in this framework now comes as an extra bonus; see Figure 14.2. The resulting transfer function from u_1 to y_1 is simply
provided that all the appropriate inverses exist. Motivated by the way that uncertainty manifests itself in the generalized plants (those that include the nominal plant, the sensors, and the actuators), one is led to consider uncertainties of the form
Figure 14.2: T̂ := F_u{Δ(s), F_l{G(s), K(s)}}.
for some prescribed positive integers L, F, and n_1, …, n_L, such that ||Δ||_∞ ≤ 1. The δ_i's are real or complex valued and in general correspond to parametric uncertainties; the Δ_i's, on the other hand, are time invariant operators used to account for such things as unmodeled dynamics. Given that Δ has arisen from our modeling technique and/or our inadequate knowledge about the nature of things, and that G is a physical reality, K is our only hope to make the system in Figure 14.2 operate in a way that is desirable to us: the controller K is chosen to provide internal stability and (external) performance for all possible perturbations Δ; performance in the robust control setting is often taken as the ability of the system to reject the disturbances lumped in the term u_1, not only necessitating that ||T||_∞ < 1 (as the result of the small gain condition, the so-called bounded realness of the operator T [360], [444]) but also requiring ||T̂||_∞ to be as small as possible. Let us shift our attention back to the unperturbed configuration of Figure 14.1. It is well known that the bilinear sector transform T̃(s) = sect{T(s)} := (I − T(s))(I + T(s))^{−1} maps bounded real systems, ||T(s)||_∞ < 1, into positive real systems, Herm(T̃(jω)) > 0, and vice versa. A routine calculation reveals that the state-space matrices of T̃(s) = sect{T(s)} and T(s) are related by
where
the corresponding bilinearly transformed open-loop plant has the state-space system matrix
It is known that we may linearly parameterize the set of realizable S̃ matrices as follows [315], [316]: (14.21)
where S_Q is the state-space system matrix of
and
Thus we have all the necessary machinery to translate between positive real and bounded real conditions. The principal tool in translating frequency domain results related to positive real conditions into the corresponding matrix inequalities is the following important lemma.

Lemma 14.1 (generalized strong positive real LMI; see [13], [354]). Let G(s) = C(Is − A)^{−1}B + D be a minimal state-space realization. Then for some ε > 0, Herm(G(jω)) ≥ εI for all ω if and only if there exists P = P^T such that
Moreover, G(s) is stable if and only if P > 0.
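Both directions of this machinery can be probed numerically. The sketch below is a rough illustration under assumed data: the matrices are arbitrary, the sector transform is applied to a constant contraction rather than a transfer matrix, and the frequency grid is finite, so the second part is a sampled check of the positive real condition rather than an application of the lemma itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1) Bilinear sector transform: Ttil = (I - T)(I + T)^{-1} maps a contraction
#    (here a constant matrix with ||T||_2 = 0.5 < 1) to a matrix whose
#    Hermitian part is positive definite.
T = rng.standard_normal((3, 3))
T *= 0.5 / np.linalg.norm(T, 2)
I = np.eye(3)
Ttil = (I - T) @ np.linalg.inv(I + T)
herm = (Ttil + Ttil.T) / 2
print("min eig of Herm(Ttil):", np.linalg.eigvalsh(herm).min())  # > 0

# 2) Sampled strong positive realness for G(s) = C(sI - A)^{-1}B + D:
#    lambda_min(Herm(G(jw))) stays above some eps > 0 on a frequency grid.
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[2.0]])

def herm_min(w):
    G = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B + D
    return np.linalg.eigvalsh((G + G.conj().T) / 2).min()

eps = min(herm_min(w) for w in np.logspace(-2, 3, 50))
print("min over grid:", eps)  # > 2, since Re G(jw) = 2 + 1/(1+w^2) + 2/(4+w^2)
```

For the constant case the identity Herm(T̃) = (I + T)^{-T}(I − T^T T)(I + T)^{-1} makes the positivity transparent whenever ||T||_2 < 1.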
14.2 Methodological aspects of the BMI
Mathematics often takes a precise problem formulation as its starting point; theoretical research in engineering more often than not takes it close to its ending. It is common to see statements like "since this problem is equivalent to solving a system of linear equations, we consider the problem solved" in the engineering literature. But when should we consider an engineering problem solved? Certainly, one could respond to this question by saying that an engineering problem is solved when the corresponding mathematical problem is solved; however, this correspondence is never one to one and, moreover, there are still many questions as to when a mathematical problem, especially in optimization, is declared to be solved. Let us then take the following as our starting point: An engineering problem is solved when a computationally reasonable method (on a reasonable model of computation) exists for at least one of its equivalent mathematical formulations. Given that the statement above is a reasonable approximation to our actual motivation in control research, we strive for a problem formulation that leads to an efficient computational method for the solution of a control problem. On the other hand, we strongly prefer a mathematical formulation and a solution method that are directly linked to our engineering intuition and judgment. The frequency domain techniques in control system design are prime examples of such preferences. To make matters even more interesting, we often search for a formulation that not only captures our engineering considerations,
but that also has a nice mathematical representation. Alas, often these considerations cannot be satisfied at the same time, although exceptions exist.^13 The engineering problem considered in this chapter is the synthesis of controllers for systems whose models are not precisely known. We adopt the framework of robust control theory and in particular that of μ/km analysis and synthesis. Roughly speaking, with reference to Figure 14.1, μ/km synthesis is the problem of finding the controller K such that F_u{Δ, T} is stable for all admissible Δ's. Since one would like to exploit the structure of the uncertainty Δ (of the form (14.17)) in order to establish stability, we are led beyond requiring ||T||_∞ < 1 to such conditions as solving [446]
where D is a stable, minimum phase scaling matrix such that D(s)Δ(s) = Δ(s)D(s); the infimum is guaranteed to provide a lower bound (upper bound) for the multivariable stability margin k_m [351], [352] (respectively, μ [102]), where for an asymptotically stable transfer matrix T(s),
and
The D-K iteration is a solution method for μ/km synthesis, which proceeds by alternating between solving an H∞ optimization problem with D(s) fixed in (14.25) and a convex optimization with K fixed [31], [77], [349]. An improvement in the conservativeness of the D-K iteration is known as the D,G-K iteration [131], [314]. Both design techniques can be enhanced by considering μ/km synthesis in the positive real framework, resulting in what is known as the M-K iteration [353]. The M-K iteration approach has the very nice property of bypassing the curve fitting step of the D-K iteration; this latter approach in fact forms the basis for the proof of Theorem 14.1 (below).
14.2.1 Limitations of the LMI approach
It remains true, however, that much difficulty remains in the robust synthesis of practical, nonconservative controllers. At least three very important classes of robust control design issues have not been found to be readily transformable into the LMI framework, which has otherwise offered a very promising direction for studying many robust analysis problems. These are (1) μ/km synthesis via dynamical scalings/multipliers, (2) fixed order control synthesis, and (3) decentralized controller design (i.e., synthesis of controllers with "block diagonal" or other specified structure).

• Consider the robust control problem of μ/km synthesis. Current μ/km techniques [31], [77] remain inherently conservative, i.e., suboptimal, because the D-K iteration approach of alternately synthesizing first a controller K(s) and then diagonal scalings D(s) is in no way guaranteed to achieve a globally or, it turns out, even a locally optimal solution. The D-K iteration can, in theory, get stuck at points that are local minima with respect to D alone and with respect to K alone, but are not true local minima with respect to joint variations in D and K.
^13 LP, the simplex algorithm, and the corresponding economic interpretation are beautiful examples of such an interaction, even though finding a polynomial time algorithm for LP that corresponds to a certain pivoting strategy of the simplex method is still an open problem.
• One of the main objections to modern control synthesis theories, such as LQG, H∞, and μ/km, is that the resultant controllers are typically of relatively high order—at least as high as the original plant, and often much higher in the usual case where the plant must be augmented by dynamical scalings, multipliers, or weights in order to achieve the desired performance.

• Control system designs for very large or complex systems must often be implemented in a decentralized fashion; that is, local loops are decoupled and closed separately with little or no direct communication among local controllers. The synthesis of optimally robust decentralized systems has obvious benefits. However, even the synthesis of optimal decentralized H∞ and LQG control systems has remained beyond the scope of existing theories.

Neither the LMI framework nor other existing theories have yet proved sufficiently flexible to handle problems in the foregoing classes. The purpose of this section is to demonstrate that the BMI framework is sufficiently flexible to simultaneously accommodate all three of the foregoing types of control design specifications, in addition to handling all those that the LMI handles. In particular, we provide a proof of the following theorem.

Theorem 14.1 (see [354]). The μ/km synthesis problem can be formulated as the following matrix inequality problem: Given real matrices R_aug, U_aug, V_aug, and R̂, Û, V̂, find matrices Z, S_Q such that
where
and P = P^T and X = X^T ∈ R^{(n+q)×(n+q)}, and W is constrained to have the structure consistent with the uncertainty structure of the underlying μ/km problem (14.17). The rest of this section is devoted to the proof of Theorem 14.1 and one of its important consequences in the design of decentralized controllers.
14.2.2 Proof of Theorem 14.1
We first note the following: any constraint of the form x_i = 1, y_i = 1 for an instance of the BMI can be transformed to an instance with no such constraints [354]. Proof 14.1 (see [354]). In the μ/km synthesis procedure outlined in [353], the μ/km problem is shown to be solved if there exists a suitably structured block-diagonal rational multiplier matrix M(s) such that
No constraint is placed on the stability of M(s), but it is required to be uniformly bounded on the jω-axis and to satisfy the generalized strong positive real condition; i.e.,
14.2. Methodological aspects of the BMI
277
for some ε > 0,
It is supposed in [259] (without loss of generality if N = ∞) that M(s) is a weighted sum of certain suitably structured block-diagonal transfer function matrices M_i(s),
where
and W := [W_1, …, W_N] ∈ R^{s_1×Ns_1}. In most cases W_i ∈ R, but more generally, as when there are repeated uncertainty blocks, the W_i may be block diagonal matrices of a certain specified form. The specific details of the structure constraints on W_i and M_i(s) required for the various types of real and/or complex uncertainty block structures are described in [259] and [355]. One may form the "augmented" closed-loop system
then T_aug(s) has a state-space realization of the form (14.21),
In view of the form of (14.34), Ã_aug naturally assumes the form
where A_M denotes the A-matrix of M(s) and Ã_t ∈ R^{(n+q)×(n+q)} is the A-matrix of the bilinear sector transformed closed-loop plant sect{T(s)}. Under (14.30)–(14.31), the stability of sect{T(s)} is equivalent to the stability of the original untransformed closed-loop plant T(s) [353]. In view of (14.36) and (14.35), one has
where
Theorem 14.1 is now proved by applying Lemma 14.1 to check the positive realness of T̃_aug.
We note that the matrix P in the above theorem is the solution to the LMI that results from the application of Lemma 14.1. Since we do not require Ã_aug to be stable, no definiteness condition is imposed on P. Instead, stability of the closed-loop Ã_t (and
hence T(s)) is ensured by testing the existence of X = X^T > 0 such that Herm(−XÃ_t) > 0 (e.g., [289, page 63]); this is the role of the matrix X in the BMI (14.28). Clearly (14.28) is a BMI feasibility problem. It is jointly linear in the parameters of the matrices W, X, P. It is affine in the controller parameter matrix S_Q. Some special cases of this problem have been found by Packard et al. [315], [316] to be reducible to LMIs via the Parrott theorem. These include full-state feedback H∞ and full-order H∞ with constant diagonal scalings (M(s) = "constant matrix"), as well as certain simultaneous stabilization and related gain-scheduling problems. But an optimal solution to even the fixed order (q < n) H∞ problem (i.e., M(s) = I) has remained elusive despite some determined efforts. The foregoing BMI formulation (14.28) provides a simple formulation of these and related synthesis problems. Interestingly, the BMI formulation is flexible enough to accommodate constraints on controller structure as well. To simplify matters, we make the following assumption (refer to (14.13)): D_22 = 0. Hence, by (14.22), K(s) = Q(s). Note that no significant loss of generality results from this assumption, since it can always be made to hold via a singular perturbation of the plant (given that our constraints do not require an infinite bandwidth controller). With D_22 = 0, it is clear that the controller K(s) inherits the same block structure as Q(s). In particular, if the state-space matrices A_Q, B_Q, C_Q, D_Q are constrained to have a block-diagonal structure, then K(s) will be block diagonal, too; i.e., K(s) will be a "decentralized" controller.
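The block-diagonal claim is easy to illustrate. In the sketch below (numpy only; the two first-order local controllers and the test frequency are arbitrary hypothetical choices, and blkdiag is a small helper written here, not a library routine), stacking two local controllers block-diagonally yields a K(s) whose off-diagonal entries vanish, i.e., a decentralized controller.

```python
import numpy as np

def blkdiag(*blocks):
    """Tiny block-diagonal helper (illustrative; not a library routine)."""
    r = sum(b.shape[0] for b in blocks)
    c = sum(b.shape[1] for b in blocks)
    out = np.zeros((r, c))
    i = j = 0
    for b in blocks:
        out[i:i + b.shape[0], j:j + b.shape[1]] = b
        i += b.shape[0]
        j += b.shape[1]
    return out

# Two independent first-order local controllers, stacked block-diagonally.
A1, B1, C1, D1 = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]])
A2, B2, C2, D2 = np.array([[-3.0]]), np.array([[2.0]]), np.array([[1.0]]), np.array([[0.5]])
AQ, BQ = blkdiag(A1, A2), blkdiag(B1, B2)
CQ, DQ = blkdiag(C1, C2), blkdiag(D1, D2)

# Evaluate K(s) = DQ + CQ (sI - AQ)^{-1} BQ at a test frequency: the
# off-diagonal entries are (numerically) zero, so K(s) is decentralized.
s = 2.0j
K = DQ + CQ @ np.linalg.inv(s * np.eye(2) - AQ) @ BQ
print("K(2j) =\n", K)
print("off-diagonal entries:", K[0, 1], K[1, 0])
```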
14.2.3 Why is the BMI formulation important?
We conclude this section with a recapitulation of the reasons why the BMI formulation is important:

• The main attraction of the BMI formulation is its simplicity and generality. It allows the controller and multiplier/scaling optimization in μ/km synthesis to be formulated as a single finite-dimensional optimization over the controller parameters S_Q and the multiplier/scaling and Lyapunov parameters W, X, P. Consequently, application of nonlinear programming techniques to the BMI at least assures convergence to a joint local optimum in D and K—the D-K iteration of μ/km synthesis cannot make this claim. The BMI formulation also eliminates the curve fitting step of the traditional D-K iteration approaches to μ/km synthesis.

• A broad spectrum of robust control synthesis problems can be formulated within the BMI framework. These include order-constrained controller μ/km synthesis with specifications requiring such additional properties as decentralized control. Gain scheduling and simultaneous μ/km synthesis for several plants also fall in this framework.

We note, however, that the BMI formulation is not without its drawbacks. One major concern is that biconvex optimization problems are in general difficult to solve. Moreover, much available structure is hidden in the BMI formulation. For example, we know that the BMI for full-order output feedback H∞ synthesis may be reduced to an LMI. It seems unlikely, however, that order-constrained or decentralized control problems will admit an LMI embedding. It will be interesting to see just how broad a class of BMIs will admit an embedding within the LMI framework.
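The stalling phenomenon behind the joint-local-optimum claim can be seen in a toy biconvex problem (a deliberately simple stand-in chosen here, not a μ/km instance): alternating exact coordinate-wise minimization of f(d, k) = (1 − dk)² started from (0, 0) never moves, even though (0, 0) is not a joint local minimum.

```python
# Toy biconvex function: convex in d for fixed k and convex in k for fixed d,
# with global minimum 0 on the hyperbola d * k = 1.
f = lambda d, k: (1.0 - d * k) ** 2

d, k = 0.0, 0.0
for _ in range(10):
    # Exact minimization over d with k fixed: when k == 0, f(., 0) is constant,
    # so every d is optimal and we keep the current iterate; likewise for k.
    if k != 0.0:
        d = 1.0 / k
    if d != 0.0:
        k = 1.0 / d

print("after alternation:", (d, k), "f =", f(d, k))  # stuck at f = 1.0

# (0, 0) is not a joint local minimum: a small diagonal step improves f.
t = 0.1
print("joint step f(t, t) =", f(t, t))  # (1 - 0.01)^2 = 0.9801 < 1
```

A joint descent method applied to the single optimization over all parameters does not suffer from this particular failure mode, which is the point of the first bullet above.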
14.3 Structural aspects of the BMI
Optimization often provides a very convenient framework for constructing computational procedures for a wide range of problems. Applying the standard trick that turns a feasibility problem into an optimization problem, the BMI can be rewritten as

inf α    (14.40)

subject to

−Σ_{i,j} x_i y_j F_ij ≤ αI;    (14.41)

the BMI has a feasible point if and only if the value of the infimum is negative. At this point we could, in principle, apply some general-purpose global optimization technique to (14.40)–(14.41). Our intuition, however, suggests that the geometric properties of the BMI should be useful in the construction of computational procedures. Our goal is to gain enough insight into the geometric and analytic properties of the BMI that our choice of algorithm for its solution is judicious and goes beyond a rather blind application of some global optimization technique. There are at least three issues that have to be addressed in connection with the BMI and global optimization methods:

1. What are the geometric interpretations of the BMI?
2. What are the specific properties of the global optimization problem that arises from the BMI, and can these properties be used to devise more efficient algorithms for the BMI?
3. Which instances of the BMI can be solved efficiently? Moreover, are there instances for which certain "structural" properties can be established?

All of the above issues can be addressed by studying the BMI on its own. We believe, however, that many important structural and computational issues of the BMI can be studied by establishing a connection between the BMI and problems that are better understood in optimization theory. This section is devoted to such investigations. In the first part, we present the result showing that the BMI can be formulated as a convex maximization problem. The second part establishes the connection between the BMI and two optimization problems over cones, namely, the EFP and the SDCPs. The two approaches discussed in this section, besides providing very nice geometrical insights into the structure of the solution set of the BMI, also suggest computational procedures for its solution, a subject that we shall comment on in section 14.4.
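A minimal sketch of the feasibility-as-optimization trick: the inner infimum over α is just λ_max(−Σ_ij x_i y_j F_ij), so feasibility amounts to driving that quantity below zero. The data here is hypothetical: random symmetric F_ij except F_11 = I, which makes the instance feasible by construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, p = 2, 2, 3
sym = lambda A: (A + A.T) / 2

# A BMI instance Sum_ij x_i y_j F_ij > 0, feasible by construction:
# F[0][0] = I, so (x, y) = (e_1, e_1) is a solution.
F = [[sym(rng.standard_normal((p, p))) for _ in range(m)] for _ in range(n)]
F[0][0] = np.eye(p)

def alpha(x, y):
    """lambda_max of -Sum_ij x_i y_j F_ij; negative exactly when (x, y) is feasible."""
    S = sum(x[i] * y[j] * F[i][j] for i in range(n) for j in range(m))
    return np.linalg.eigvalsh(-S).max()

# At the known solution the objective is negative (value -1 for this instance) ...
print(alpha(np.array([1.0, 0.0]), np.array([1.0, 0.0])))

# ... and a crude random search over (x, y) probes the infimum.
best = min(alpha(rng.standard_normal(n), rng.standard_normal(m))
           for _ in range(2000))
print("best alpha from random search:", best)
```

Random search is of course no substitute for the global methods discussed below; it merely makes the geometry of (14.40)–(14.41) concrete.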
14.3.1 Concave programming approach
It is shown that a BMI has a nonempty solution set if and only if the diameter of a certain convex set is greater than two. The convex set in question is simply an intersection of ellipsoids centered at the origin. Along this avenue we end up addressing the first two issues raised above about the relationship between the BMI and global optimization methods. Specifically, we prove the following theorem.

Theorem 14.2 (see [356]). The BMI (14.1) is feasible if and only if the diameter of a suitably constructed convex set C is strictly greater than 2; i.e., diam(C) > 2.
The implications of Theorem 14.2 are twofold. First, it opens up a wide range of possibilities for the application of concave minimization algorithms to the solution of the BMI. At the same time, the concave minimization result indicates, in a rather transparent way, why it is not a good idea to spend research time looking for polynomial time BMI solvers.^14 This is due to the fact that concave minimization belongs to a class of computational problems for which the existence of a polynomial time algorithm is highly unlikely (concave minimization is NP-hard [410]). The NP-hardness of the BMI was also explicitly proved in [394]. The rest of the section is devoted to the proof of Theorem 14.2 and some of its implications [356]. We begin by noting that the BMI problem, that of finding vectors x and y such that Σ_{i,j} x_i y_j F_ij > 0 (14.1), is equivalent to finding the corresponding vectors such that Σ_{i,j} x_i y_j F_ij < 0, which can be written as
where
Let ρ be a real positive number such that
and let C ⊆ R^{n+m} be the convex set
where (14.47) Notice that by (14.45) the matrix Q(z) > 0 for all ||z|| = 1. It follows that the set C is an intersection of ellipsoids in R^{n+m}. Suppose that C has a diameter strictly greater than 2. C is the intersection of an infinite number of ellipsoids, and thus its maximum diameter is achieved at some w_1 = (x, y) on the boundary of C and, by symmetry, also at w_2 = −w_1. It thus holds that
where
and
Inequality (14.48) is equivalent to
^14 It is generally accepted in computer science that polynomial running time of an algorithm is equivalent to efficiency.
from which it follows that
Thus,
and consequently (−x, y) satisfies (14.1). On the other hand, suppose (−x, y) satisfies (14.1). Without loss of generality we may
uniformly in z, ||z|| = 1. Thus the radius of the set C is strictly greater than (||x||² + ||y||²)^{1/2} = 1. Since C is symmetric about the origin, the diameter is precisely twice the radius. Hence, the diameter of C is strictly greater than 2. The ρ's used in (14.47) may all be the same, or they may be chosen to depend on z. All we need is to guarantee that
and thus we can take ρ to be z-dependent, e.g.,
Finding a different ρ for each z is a laborious task. A single constant ρ satisfying (14.45) for all z can easily be computed via the matrix inequality
where the ij-th entry of the n × m matrix Q is given by
In view of Theorem 14.2, the following statement is obvious: Consider the optimization problem
subject to
Then there exists a feasible pair (x, y) for (14.1) if and only if the global optimum of (14.59)–(14.60) is strictly greater than 1; note that since all points of the form (x, y) = (x, 0) or (x, y) = (0, y) with ||x|| = 1 or ||y|| = 1 satisfy (14.60), the optimum of (14.59) is always greater than or equal to 1. Actually, instead of solving problem (14.59) for its global optimum, all we need is a point (x, y) where J(x, y) > 1. Note that (14.59)–(14.60) is
a nonlinear programming problem where a convex function is to be maximized subject to an infinite number of quadratic constraints (parameterized by z), all of which are ellipsoids centered at the origin. We note that the BMI can be formulated as follows: Find an n × m matrix N of rank 1 (i.e., N = xy^T) such that for all z with ||z|| = 1,
that is, in the Hilbert space of n × m real matrices, find a hyperplane that strictly separates the origin from the set W = {G(z) : ||z|| = 1}, with the additional requirement that the matrix N defining the normal to this hyperplane be of rank one. In the absence of the restriction rank(N) = 1, the problem may be interpreted as the LMI
subject to
a solution exists if and only if the minimal cost ε* < 0. If the rank of N is restricted to be less than min{m, n}, the LMI formulation fails. We notice that the formulation (14.62)–(14.64) of the BMI comes remarkably close to an LMI. At the same time, however, this formulation highlights, in a rather transparent way, the role that rank-restricted LMIs play in control synthesis problems. We shift our attention now to the cone programming formulations of the BMI.
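The missing rank-one ingredient is itself a well-understood computation: the nearest rank-one matrix in the Frobenius norm is obtained by truncating the SVD (the Eckart-Young theorem). A short numpy sketch with arbitrary hypothetical data:

```python
import numpy as np

def nearest_rank_one(N):
    """Closest rank-one matrix in the Frobenius norm (Eckart-Young):
    keep only the leading singular triple of N."""
    U, s, Vt = np.linalg.svd(N)
    return s[0] * np.outer(U[:, 0], Vt[0])

rng = np.random.default_rng(5)
N = rng.standard_normal((3, 4))
N1 = nearest_rank_one(N)
print("rank of N1:", np.linalg.matrix_rank(N1, tol=1e-10))  # 1

# The approximation error is exactly the energy in the discarded singular values.
s = np.linalg.svd(N, compute_uv=False)
err = np.linalg.norm(N - N1)
print("error:", err, "tail:", np.sqrt(np.sum(s[1:] ** 2)))
```

This projection is also the workhorse inside alternating-projection heuristics for rank-restricted LMIs of the kind mentioned above.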
14.3.2 Cone programming approach
We present an approach for solving the BMI based on its connections with certain problems defined over matrix cones. These problems are, among others, the cone generalizations of the LP and the LCP (referred to as the cone-LP and the cone-LCP, respectively). Specifically, we show that solving a given BMI is equivalent to examining the solution set of a suitably constructed cone-LP or cone-LCP. In this direction we end up addressing all of the issues raised above pertaining to the BMI and global optimization techniques. The main results proved in this section are the following.

Theorem 14.3 (see [281]). The BMI is an instance of the EFP.

Theorem 14.4 (see [281]). The BMI has a solution if and only if a suitably constructed SDCP with a copositive linear map has a rank-one solution.

The emphasis of this section is on the matrix theoretic aspects of the cone problems that the BMI leads to. In this direction, we have spent quite an effort to understand the geometry of certain classes of matrix cones and subsequently to establish their relevant properties, helpful in our understanding of the BMI. The rest of this section is devoted to the proofs of the above two theorems.

Few initial steps

Let us rewrite the BMI (14.1) as
where
As will become apparent from the subsequent developments, it is convenient to assume that m = p and, when necessary, that the y_j's (1 ≤ j ≤ m) are nonnegative. The first assumption is made to avoid defining inner products between matrix classes of different dimensions. The second assumption is made in section 14.3.2 to facilitate the formulation of the problems in terms of dual cones. In section 14.3.2 we shall drop the nonnegativity assumption on the vector y for the cone-LCP formulation. These assumptions are warranted for the following reasons. First, note that if we define F_j* = Σ_i x_i F_ij ∈ S^p (1 ≤ j ≤ m), then Σ_{i,j} x_i y_j F_ij = Σ_j y_j F_j*. But the last sum is a linear expression in the F_j*'s. Thus, as is customary in LP, one can assume that m ≤ p and that the y_j's are positive (by an appropriate augmentation). Now we would only need to define F_ij = 0 (1 ≤ i ≤ n; m < j ≤ p) for the assumption m = p to be justified. Recall that Gordan's theorem of the alternative [85], [382] relates the solvability of the following two systems of linear inequalities: Given A ∈ R^{m×n}, the system Ax > 0 has a solution if and only if the system y^T A = 0, y ≥ 0, y ≠ 0, has no solution. This theorem can be generalized to linear inequalities over matrix cones as follows.

Proposition 14.1. Given symmetric matrices A_i ∈ S^p (1 ≤ i ≤ n), the system Σ_{i=1}^n x_i A_i > 0 has a solution if and only if the system ⟨A_i, Z⟩ = 0 (i = 1, …, n), Z ≥ 0, Z ≠ 0,
has no solution. From Gordan's theorem of the alternative over the cone of symmetric positive semidefinite matrices, one concludes that the BMI (14.1) does not have a solution if and only if
Therefore, the BMI (14.1) has a solution if and only if
Now let
and Y = diag(y) ∈ S^p. Since (F_i^y)^T = F_i^y = Σ_j y_j F_ij (recalling that the F_ij's are symmetric matrices),
therefore
Combining (14.67) and (14.70), we conclude that (14.1) has a solution if and only if there exists Y > 0, Y = diag(y), for some y > 0, such that for all Z > 0, Z ≠ 0,
We observe the following:
1. According to (14.68), for all p × p skew-symmetric matrices Z, F_i Vec Z = 0 (i = 1, …, n). Consequently, if there exists a matrix Y ∈ R^{p×p} such that (14.71) holds, then one can assume that Y is symmetric, since the skew-symmetric part of Y does not contribute to the left-hand side of inequality (14.71): Let Y = Y_1 + Y_2, with Y_1 and Y_2 the symmetric and skew-symmetric parts of Y, respectively. Then for 1 ≤ i ≤ n, F_i Vec Y = F_i(Vec Y_1 + Vec Y_2) = F_i Vec Y_1.
2. According to (14.69), for all matrices Y ∈ R^{p×p}, F_i Vec Y = Vec W_i for some W_i ∈ S^p. Therefore, if X = (Vec Y)(Vec Y)^T, then M(X) can be represented by
Remark 14.1. Suppose that the vector y is not required to be nonnegative in the above analysis. It is clear that the above steps remain valid with the obvious modifications and that the end result would read as follows: The BMI has a solution if and only if there exists a diagonal matrix Y such that for all Z > 0, Z ≠ 0, inequality (14.71) holds. We shall use this observation later in section 14.3.2. Inequality (14.71) can be interpreted as requiring M(X) to belong to a certain matrix class. The matrices in this class are symmetric (given that X is symmetric) and have quadratic forms that are positive over the vec form of the nonzero matrices in S^p_+. This observation justifies the introduction of the following matrix classes. Denote by PSD the class of p² × p² symmetric positive semidefinite matrices, i.e., matrices for which the quadratic form is nonnegative over the vec form of the p × p matrices (R^{p×p}),
Let PSD0 denote the subset of p² × p² symmetric matrices with quadratic forms nonnegative over the vec form of the symmetric p × p matrices (S^p), and with the vec form of the skew-symmetric matrices in their null space; i.e.,
Clearly both PSD (14.74) and PSD0 (14.75) are closed convex cones. Moreover, it can be shown that certain essential features of the PSD cone generalize to the class of PSD0 matrices, including the rank-one decomposition property, self-duality, and the unit rank of the extreme forms [198], [281].
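The null-space property that defines PSD0 can be checked numerically on a small instance. The sketch below uses randomly generated matrices (not taken from the chapter) and verifies that a rank-one matrix (Vec Y)(Vec Y)^T built from a symmetric Y annihilates the vec form of every skew-symmetric Z, since Tr(YZ) = 0 whenever Y = Y^T and Z = −Z^T:

```python
import numpy as np

# Illustration (with made-up matrices) of the property underlying PSD0:
# for symmetric Y and skew-symmetric Z, tr(Y Z) = 0, so the rank-one
# matrix X = (vec Y)(vec Y)^T has vec Z in its null space.
rng = np.random.default_rng(0)
p = 3
Y = rng.standard_normal((p, p)); Y = Y + Y.T          # symmetric
Z = rng.standard_normal((p, p)); Z = Z - Z.T          # skew-symmetric

vec = lambda A: A.flatten(order="F")                  # column-major vec
X = np.outer(vec(Y), vec(Y))                          # rank one, in PSD0

# (vec Z)^T X (vec Z) = tr(Y^T Z)^2 = 0
qform = float(vec(Z) @ X @ vec(Z))
```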
Let C denote the class of symmetric PSD-copositive matrices,
A particular subset of C that will be useful in our cone formulations is the subset of matrices in C that have the p × p skew-symmetric matrices in their null space, i.e.,
Finally, let B denote the class of symmetric PSD-completely positive matrices
One can establish the following results. Lemma 14.2 (see [281]). The matrix classes B, C, and C0 are closed convex cones in S^{p²}. Moreover, B* = C, C* = B, C is solid, and B is pointed. Lemma 14.3 (see [281]). The extreme forms of B are the matrices (Vec Z)(Vec Z)^T, Z > 0. Note that it is now observed that (14.71) asks whether a nonlinear combination of the matrices F_i ∈ R^{p²×p²} (1 ≤ i ≤ n) belongs to the interior of the cone of PSD-copositive matrices C. In fact, due to the particular form of the linear map M (14.73), M(X) is required to be in C1 := C0 ∩ int C.

EFP formulation of the BMI

We present the proof of Theorem 14.3 in this section, i.e., the reformulation of the BMI as the EFP_B(M), where M is defined by (14.72) and B is the cone of PSD-completely positive matrices. Proof 14.2 (see [281]). If the BMI has a solution, then there exist Y = diag(y), y > 0, and X = (Vec Y)(Vec Y)^T such that M(X) ∈ int C, and hence the EFP_B(M) has a solution. Conversely, suppose that the EFP_B(M) has a solution X. Then there exists V > 0 such that X = (Vec V)(Vec V)^T and M(X) ∈ int C; i.e., for all Z > 0, Z ≠ 0,
Let Ȳ = T^T Y T be such that Ȳ is diagonal, T is nonsingular, and T^T = T^(−1). Then Vec Ȳ = (T ⊗ T)^T Vec Y, and
The last inequality follows from the fact that if Vec W = (T ⊗ T)^T Vec Z with Z > 0, Z ≠ 0, then W = T^T Z T > 0, W ≠ 0.
The starting point for the cone-LCP approach is the remark made after the proof of Theorem 14.3: The BMI has a solution if and only if the image of an extreme form of the matrix cone PSD0 under the linear map M lies in the interior of C, or in fact in C1. As the next proposition states, the cone PSD can be substituted for the cone PSD0 in this statement. Proposition 14.2 (see [281]). There exists an extreme form X of the matrix cone PSD such that M(X) ∈ int C if and only if there exists an extreme form Y of the matrix cone PSD0 such that M(Y) ∈ int C. Let us denote by p̄ = p(p + 1)/2 the dimension of the space of symmetric p × p matrices. Before stating the main result of this section, we make the following observation. Lemma 14.4 (see [281]). Let Y be an extreme form of the cone PSD0. Then there exists a symmetric W ∈ PSD0 such that Tr(YW) = 0 and rank(W) = p̄ − 1. Consider the cone-LCP_PSD(Q, M), with M defined by equation (14.72): Find X ∈ S^{p²} (if it exists) such that
Theorem 14.5 (see [281]). The BMI has a solution if and only if there exists a matrix Q ∈ int(−C) (or Q ∈ −C1) such that the cone-LCP_PSD(Q, M) has a rank-one solution. Proof 14.4 (see [281]). Suppose that the BMI has a solution X*; that is, X* = (Vec Y)(Vec Y)^T, Y ∈ S^p, M(X*) ∈ int C, and, in fact, M(X*) ∈ C1. By Lemma 14.4, there exists W ∈ PSD0, rank W = p̄ − 1, such that Tr(WX*) = 0. Without loss of generality, assume that ‖W‖ = 1. Let Q_α = αW − M(X*). Note that since M(X) is symmetric (for X ∈ PSD0), Q_α is also symmetric. Moreover, Q_α(Vec Z) = 0 for all skew-symmetric Z, since both M(X) and W are in PSD0 (see (14.73)). It suffices to show that there exists an α > 0 such that Q_α ∈ int(−C) (or Q_α ∈ −C1). Since M(X*) ∈ int C and B is closed, there exists β > 0 such that
Therefore, choosing α < β, we see that Q_α ∈ int(−C) (in fact Q_α ∈ −C1).
By construction, X* ∈ PSD, rank X* = 1, and X* lies in the solution set of the cone-LCP_PSD(Q_α, M). On the other hand, suppose that there exists a rank-one matrix X* in the solution set of the cone-LCP(Q, M), with Q ∈ int(−C). Then there exists Z* ∈ R^{p×p} such that
Since Q ∈ int(−C) and Q + M(X*) ∈ PSD, using the inclusion B ⊆ PSD, one has the following:
The last inequality follows from the fact that, for all A ∈ B (A ≠ 0), Tr(AQ) < 0. Consequently, M(X*) ∈ int C. In view of Proposition 14.2, there exists a rank-one matrix
such that M(Y*) ∈ int C. Therefore, the BMI has a solution. The above proof can be modified in an obvious way to conclude that it suffices to take Q ∈ −C1. We shall refer to the special case of the LCP over the PSD cone, (14.79)–(14.81), as the semidefinite complementarity problem (SDCP). An immediate consequence of the above theorem is that if no matrix Q ∈ int(−C) can be found for which the corresponding SDCP has a solution, then the BMI does not have a solution. Corollary 14.2 (see [281]). The BMI does not have a solution if the SDCP (14.79)–(14.81) is not solvable for any Q ∈ int(−C) (or in fact Q ∈ −C1). It is noteworthy that the linear map M in the SDCP formulation, which arises in the context of the BMI, is itself copositive with respect to the matrix cone PSD. Proposition 14.3 (see [281]). The linear map M defined by (14.72) is PSD-copositive. Consequently, if we define M*(X) = Σ_{i=1}^n F_i^T X F_i, and if the implication below holds, then for all Q ∈ −C1 the cone-LCP_PSD(Q, M) is solvable if it is feasible. Another cone-LCP formulation of the BMI, in addition to the one mentioned above, is to incorporate the problem of finding the matrix Q of Theorem 14.5 into the corresponding cone-LCP. For this purpose it is convenient to associate with the matrix cones PSD, B, and C (subsets of S^{p²}) the cones Vec(PSD), Vec(B), and Vec(C) (subsets of R^{p⁴}), which are obtained by applying the vec operator to these matrix cones, i.e.,
and
It is easy to verify that Vec(PSD), Vec(B), and Vec(C) are closed convex cones in R^{p⁴}. Recall that
Therefore, in view of the relation PSD = PSD*, the only matrices in Vec(PSD)* that are the Vec form of a symmetric matrix are those in Vec(PSD). Let F = Σ_{i=1}^n F_i ⊗ F_i ∈ R^{p⁴×p⁴}. For the linear map M defined by (14.72), and using the properties of Kronecker products, Vec M(X) = F Vec X. Combining the above ideas with the result of Theorem 14.5, one readily obtains the following corollary.
Corollary 14.3 (see [281]). Let
and
Then the BMI has a solution if and only if the homogeneous cone-LCP(Q, M) has a solution of the form
where −Q ∈ int C and X has rank one. We note that if Q ∈ −C and X ∈ PSD, then Q + M(X) is automatically symmetric, and therefore if Vec(Q + M(X*)) ∈ Vec(PSD)*, then Q + M(X*) ∈ PSD. The above corollary reduces the BMI feasibility problem to the problem of examining the solution set of a certain cone-LCP. This can be a "tractable" problem if the solution set is finite or if the linear map M enjoys certain "additional" properties. Since there are various results in complementarity theory that pertain to the cardinality of the solution set of a cone-LCP [208], classification of the efficiently solvable instances of the BMI can be based on these results as well.
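Two facts used above can be checked numerically on random data. The sketch below assumes a congruence-type map M(X) = Σ_i F_i X F_i^T as a stand-in for the map (14.72), whose display is omitted here: it verifies the Kronecker representation Vec M(X) = F Vec X with F = Σ_i F_i ⊗ F_i, and the PSD-copositivity of Proposition 14.3, which for such maps follows from Tr(X M(X)) = Σ_i ‖X^(1/2) F_i X^(1/2)‖_F² ≥ 0 for X ≥ 0:

```python
import numpy as np

# Numerical sketch with random data; M(X) = sum_i Fi X Fi^T is an assumed
# congruence-type form for the map (14.72) (its display is omitted in the
# text), used only for illustration.
rng = np.random.default_rng(1)
p = 3
vec = lambda A: A.flatten(order="F")               # column-major vec

# (i) Vec M(X) = F Vec X with F = sum_i (Fi kron Fi)
n = 2
Fs = [rng.standard_normal((p * p, p * p)) for _ in range(n)]
X = rng.standard_normal((p * p, p * p))
MX = sum(Fi @ X @ Fi.T for Fi in Fs)
Fbig = sum(np.kron(Fi, Fi) for Fi in Fs)
kron_ok = bool(np.allclose(vec(MX), Fbig @ vec(X)))

# (ii) PSD-copositivity: tr(X M(X)) >= 0 whenever X is PSD
def M(Y):
    return sum(Fi @ Y @ Fi.T for Fi in Fs)

copositive_ok = True
for _ in range(20):
    G = rng.standard_normal((p * p, p * p))
    Xpsd = G @ G.T                                 # a random PSD matrix
    if np.trace(Xpsd @ M(Xpsd)) < -1e-8:
        copositive_ok = False
```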
14.4
Computational aspects of the BMI
This part of the chapter is devoted to a brief overview of computational methods for solving the BMI, specifically those directly related to the presentation of sections 14.2 and 14.3. We first say a few words about the approaches that are not touched upon in this section. Motivated by the observation that the function
is convex in x for fixed y and vice versa, Goh, Safonov, and Papavassilopoulos [168] proposed a global optimization algorithm based on the branch-and-bound strategy for solving the BMI; in this approach, the bilinearity of the function (14.83) is successively exploited in the bounding part of each iteration. The dissertation of Liu [249] discusses many interesting aspects of the parallel implementation of BMI solvers based on the branch-and-bound strategy (see also [250]). In [45] a global optimization technique is proposed based on the generalized Benders decomposition, which is used in bilinear and biconvex programming [138], [155], [412]. Finally, we should mention the alternating LMI method [214], but this latter class of algorithms is not guaranteed to find a feasible point of the BMI (14.1), even if one exists. Returning to the methods that are linked directly to the aspects of the BMI investigated in this chapter, we present a brief overview of each. Starting with the optimization problem (14.40)–(14.41), obtained trivially from the BMI feasibility problem, one can devise a computational method based on the following strategy. Initially pick arbitrary x_0 and y_0 and then choose α_0 > 0 such
that (14.41) is satisfied (thus obtaining a feasible point for the optimization problem); proceed by trying to reduce α_k at each step without leaving the feasible region. In the spirit of the interior-point methods for solving LMIs and SDP problems, Goh et al. [169] proposed using a logarithmic barrier functional to ensure that the iterates stay inside the feasible region.

A barrier method for the BMI
(2) Choose some (x_0, y_0) and α_0 such that (14.41) is satisfied. (3) Until the stopping criterion is met:
The local minimization step (3a) is initiated from (α_k, x_k, y_k). The convergence of the above algorithm is contingent upon a wise choice of the parameter μ. Note that as μ_k → ∞, the first term of the objective functional in (3a) approaches that of minimizing the parameter α, whereas its second term ensures that this minimization is performed without leaving the feasible region of (14.40)–(14.41). Under some mild conditions, the barrier method described above is guaranteed to converge to a local minimum of (14.40)–(14.41). The choice of the parameter μ is guided by the methods for solving LMIs, and it is closely related to the self-concordant barrier parameter for the cone of PSD matrices [296]. We observe that the choice of the initial points x_0 and y_0 can be informed by the results of more conservative approaches, such as the alternating LMI approach or the D-K iteration. In any event, the algorithm above, which is based on the BMI formulation of the μ/k_m synthesis problem along with a methodology borrowed directly from convex programming, is guaranteed to provide improvements over the existing methods [164]. Barrier methods are not the only variant of the interior-point methods that can be used for obtaining local solutions to BMIs, although they are probably among the best understood. Other interior-point methods for locally solving the BMI are presented in the dissertation of Goh [164], in particular one that corresponds to the method of centers in the spirit of [63] and [206]. Computational examples are also provided in [164]. Motivated by the reformulation of the BMI as a concave minimization problem presented in section 14.3.1, Safonov and Papavassilopoulos [356] proposed the following conceptual algorithm for finding a feasible point of (14.1). The approach is based on the optimization problem (14.59)–(14.60), which is obtained directly from Theorem 14.2.
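The barrier strategy can be illustrated on a toy instance. In the sketch below the matrices are made up, the objective α − (1/μ) log det(F(x, y) + αI) is our reading of step (3a) (the actual functional appears in an omitted display), and a coarse grid replaces the local minimization; a midpoint check of the biconvexity of (14.83), via the convexity of the largest eigenvalue of a matrix affine in x, is included:

```python
import numpy as np

# Toy, grid-based sketch of the barrier strategy (made-up 2x2 BMI data;
# a practical solver would use a local minimization routine in step (3a)
# rather than a grid).
F0 = np.diag([-1.0, -0.5])
F1 = np.array([[1.0, 0.2], [0.2, 0.0]])
F2 = np.array([[0.0, 0.1], [0.1, 1.0]])
F3 = np.array([[0.5, 0.0], [0.0, 0.5]])

def F(x, y):
    return F0 + x * F1 + y * F2 + x * y * F3

# Biconvexity check: phi(x, y) = lambda_max(-F(x, y)) is the maximum
# eigenvalue of a matrix affine in x for fixed y, hence convex in x.
phi = lambda x, y: float(np.linalg.eigvalsh(-F(x, y))[-1])
midpoint_convex = phi(1.0, 0.5) <= 0.5 * (phi(0.0, 0.5) + phi(2.0, 0.5)) + 1e-9

grid = np.linspace(-2.0, 2.0, 17)
alphas = np.linspace(-2.0, 4.0, 61)
alpha_history = []
for mu in [1.0, 10.0, 100.0]:              # increasing barrier parameter
    best_psi, best_a = np.inf, None
    for x in grid:
        for y in grid:
            for a in alphas:
                w = np.linalg.eigvalsh(F(x, y) + a * np.eye(2))
                if w[0] <= 1e-9:           # stay strictly inside the region
                    continue
                psi = a - np.log(w).sum() / mu
                if psi < best_psi:
                    best_psi, best_a = psi, a
    alpha_history.append(best_a)
bmi_feasible = alpha_history[-1] < 0       # F(x, y) > 0 is attained on the grid
```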
In order to solve (14.1), we can solve a sequence of problems of the type (14.59)–(14.60), each having a finite number of inequalities. At the beginning of step k, assume that the points z^(1), …, z^(k−1) have been generated from an arbitrary initial guess z^(0) with ‖z^(0)‖ = 1. The kth step of the algorithm is then as follows. Solve for a globally maximizing pair (x, y) in (14.84)
subject to
If the global maximum J*_k = 1, stop; the problem (14.59) is infeasible. If J*_k > 1, let the solution be (x^(k), y^(k)) and solve
where
Note that the maximal value in (14.86) is the maximal eigenvalue λ_max(H). Take z^(k) to be a maximizing z in (14.86); i.e., z^(k) is any unit-norm eigenvector of the matrix (14.87) associated with λ_max(H). If λ_max(H) < 0, we stop, and the pair (z^(k), y^(k)) provides a solution of the BMI (14.1); otherwise, we choose ρ^(k) > λ_max(H) and go to step k + 1. It can be shown that this process stops in a finite number of steps if (14.1) has a solution; otherwise, (14.1) is infeasible. Note that solving (14.84) for the global maximum may be quite time-consuming, although there exist several algorithms for solving nonconvex maximization problems of this type [200]. Finally, we observe that it is not necessary to solve (14.84) for the global maximum: we can stop as soon as a pair (x^(k), y^(k)) with ‖x^(k)‖² + ‖y^(k)‖² > 1 has been generated. This may be detrimental to the speed of convergence of the algorithm, but it avoids spending a lot of time finding the global maximum of (14.84). If one chooses to do this, it may be advisable to solve (14.84) globally now and then. Last, we mention the solution method based on the SDCP approach, which examines the solution set of a given SDCP. We note that an SDCP often has only a finite number of solutions; in fact, there are many classes of SDCPs for which one can guarantee even uniqueness of the solution. This is the main advantage of the SDCP approach, since examining the feasible region defined by an LMI for the existence of a rank-one matrix is itself a difficult computational problem (the rank-minimization problem under LMI constraints is a powerful framework for studying many robust control synthesis problems as well [124], [282]). The results of section 14.3.2 provide an explicit expression for the linear maps that appear in the SDCP formulation of the BMI based on the matrices F_ij (i = 1, …, n; j = 1, …, m).
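The eigenvector update for z^(k) is a standard symmetric eigencomputation; H below is an arbitrary symmetric example, not the matrix (14.87):

```python
import numpy as np

# Extracting a unit-norm eigenvector associated with lambda_max(H),
# as required by the update step above (H is a made-up symmetric matrix).
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
w, V = np.linalg.eigh(H)          # eigenvalues in ascending order
lam_max = float(w[-1])
z = V[:, -1]                      # unit-norm maximizer of z^T H z
residual = float(np.linalg.norm(H @ z - lam_max * z))
unit_norm = abs(np.linalg.norm(z) - 1.0) < 1e-12
```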
Complementarity theory can be used to classify instances of the BMI that reduce to a convex optimization problem over the PSD cone. For example, as a consequence of Corollary 14.1, S-positivity guarantees the existence of a feasible point for the BMI, thus reducing the BMI to characterizing a single matrix. The existence of interior-point methods for monotone SDCPs also provides locally, and in some cases globally, convergent methods for certain classes of BMIs.
14.5
Conclusion
Rooted in some of the best traditions in control theory, the BMI provides a framework for addressing the most important issues facing the field of robust control. The BMI formulation
offers a direct interpretation of the robust control synthesis problem, exemplified by a search for the parameters of the controller on one hand and of the multiplier on the other, and at the same time it has led to elegant mathematical investigations in optimization theory, some of which were presented in this chapter. Last, and most important, the BMI formulation promises efficient, reliable, and automated procedures for the synthesis of nonconservative robust controllers, improving upon those obtained with the existing methods, including the D-K and M-K iterations.
Part VI
Applications
Chapter 15
Linear Controller Design for the NEC Laser Bonder via Linear Matrix Inequality Optimization

Junichiro Oishi and Venkataramanan Balakrishnan

15.1
Introduction
Over the past few decades, rapid strides have been made in control theory, resulting in the solution of several important controller design problems: LQR, LQG, and H∞, to name a few [388, 107, 175]. Many of these controllers have been successfully implemented in industrial applications. However, a disadvantage of most of these methods is that they are optimal in only a narrow sense, and actual engineering specifications (which are usually stated as competing constraints) must be translated or reinterpreted so as to fit into the narrow framework of these methods. In parallel with the theoretical developments in systems and control, there have been significant advances in optimization theory and algorithms, as well as an almost exponential growth in computing power, so that numerical controller design methods, especially those based on convex optimization, have become increasingly relevant [59, 65, 62, 61, 284, 303, 30]. Such numerical methods enjoy the advantage that several commonly encountered design requirements can be specified directly in a natural manner and that the interaction between various competing performance specifications can be readily studied. In this chapter, we describe the application of one such computer-aided design method to the control of the NEC Laser Bonder. This method combines the Youla parametrization of the set of achievable stable closed-loop maps with convex optimization based on linear matrix inequalities (LMIs) to numerically design optimal linear time-invariant (LTI) controllers under multiple design specifications. The significance of the Youla parametrization for computer-aided control system design has long been recognized; see, for example, [59, 61]. In these references, the Youla parametrization is used to reformulate the problem of LTI controller design for LTI
systems as a quadratic programming problem, which is then solved numerically. The approach that we present in this chapter is very close in spirit to the one in [59, 61]. The main difference is that here the LTI controller design problem is reformulated instead as an LMI optimization problem; we describe this reformulation in section 15.2. We apply this technique, in section 15.3, to design an LTI controller for the NEC Laser Bonder.
15.2
Controller design using the Youla parametrization and LMIs
A standard framework for controller design is shown in Figure 15.1. P is the model of the plant, i.e., the system to be controlled, and K is the controller that implements the control strategy for improving the performance of the system. The signal y is what the controller has access to, and u is the output of the controller that drives the plant; w and z represent inputs and outputs of interest. Note that z can include components of the control input u, so that specifications on the control input, such as bounds on the control effort, can be handled. In other words, the map Hzw from w to z contains all the input-output maps of interest. The controller design problem is to design an LTI controller K such that the closed-loop map Hzw, with the controller K in place, satisfies the design specifications. Often an "optimal" controller is sought, one that yields an optimal performance measure subject to the design specifications.
Figure 15.1: A standard controller design framework.

The Youla parametrization (see [411] for details) yields a convex (in particular, affine) parametrization, in terms of the Youla parameter Q, of the set of achievable stable closed-loop maps Hzw from w to z for the system in Figure 15.1. This fact is of great significance, as the numerical search for the "optimal" Youla parameter Q, one that minimizes an objective that is a convex function of Hzw subject to convex constraints on Hzw, is a convex optimization problem. In this section, we show how, by restricting Q to lie in a finite-dimensional subspace, we can reformulate several important practical controller design specifications as LMI constraints on Q. Some of the design specifications that we consider are constraints on the response z for a given reference input w, asymptotic tracking constraints, and bounds on the H2 and H∞ norms of Hzw. Thus several important LTI controller design problems can be solved numerically using optimization over LMIs.
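The source of the difficulty that the Youla parametrization removes can be seen numerically at a single frequency: the standard linear-fractional expression Hzw = P11 + P12 K (I − P22 K)^(−1) P21 (the chapter's display for Hzw is omitted here) is not affine in K. The data below are random placeholders:

```python
import numpy as np

# Sketch: the closed-loop map is not affine in the controller K, which is
# why optimizing directly over K is hard (random frequency-response data).
rng = np.random.default_rng(6)
P11 = rng.standard_normal((2, 2))
P12 = rng.standard_normal((2, 1))
P21 = rng.standard_normal((1, 2))
P22 = rng.standard_normal((1, 1)) * 0.1     # keep I - P22 K well conditioned

def Hzw(K):
    return P11 + P12 @ K @ np.linalg.inv(np.eye(1) - P22 @ K) @ P21

K1 = np.array([[0.5]])
K2 = np.array([[-0.5]])
mid_gap = float(np.linalg.norm(Hzw(0.5 * (K1 + K2))
                               - 0.5 * (Hzw(K1) + Hzw(K2))))
nonaffine = mid_gap > 1e-8                  # midpoint identity fails
```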
15.2.1 Youla parametrization

We begin with a brief review of the Youla parametrization of the set of achievable stable closed-loop maps for LTI plants with LTI controllers. Let the state equations describing the plant P be
Let the open-loop plant transfer matrix in Figure 15.1, denoted P, be partitioned as
Then, with K(λ) denoting the transfer function of the LTI controller, the transfer function from w to z is
The set H of achievable stable closed-loop maps from w to z is thus the set of maps Hzw that are stable and satisfy (15.2). The set of controllers K that stabilize the system is in general not a convex set. Thus optimizing over H using the description (15.2), with K as the (infinite-dimensional) optimization variable, is a difficult numerical problem. However, the theory of the Youla parametrization enables us to give a convex parametrization of H. It turns out (see [61] and the references therein for details) that the set H can also be written as
where T1, T2, and T3 are fixed, stable transfer matrices that can be computed as follows. Let Knom and Lnom be real matrices of appropriate sizes such that A − Bu Knom and A − Lnom Cy are stable (i.e., have all their eigenvalues in the open unit disk). Then T1, T2, and T3 are given by
where
The most important observation about this reparametrization of H is that it is affine in the infinite-dimensional parameter Q; it is therefore a convex parametrization of the set of achievable stable closed-loop maps from w to z. (The parameter Q is also referred to as the Youla parameter.) This fact has an important ramification: it is now possible
to use convex optimization techniques to find an optimal parameter Qopt and therefore an optimal controller Kopt. The general procedure for designing controllers using the Youla parametrization proceeds as follows. Let φ0, φ1, …, φm be (not necessarily differentiable) convex functionals on the closed-loop map that represent performance measures. These performance measures may be norms (typically H2 or H∞ norms), certain time-domain quantities (step-response overshoot, steady-state errors), etc. Then the problem
is a convex optimization problem (with an infinite-dimensional optimization variable Q). This problem corresponds to minimizing a measure of performance of the closed-loop system subject to other performance constraints. In practice, problem (15.3) is solved by searching for Q over a finite-dimensional subspace. Typically, Q is restricted to lie in the set
where Q1, …, QN are fixed, stable transfer matrices. For convenience, we let Θ = [θ1 ⋯ θN] and Q(Θ) = θ1 Q1 + ⋯ + θN QN. This enables us to solve problem (15.3) "approximately" by solving the following problem with a finite number of scalar optimization variables:
The transfer matrices Qi and their number N should be chosen so that the optimal parameter Q can be approximated with sufficient accuracy. With Θopt denoting the optimizer of problem (15.5), we can compute the optimal controller Kopt as follows:
where
and (AQ, BQ, CQ, DQ) is a realization of Q(Θopt). The book [61] describes perhaps the best-known work, the computer-aided control system design package QDES, that combines the Youla parametrization and convex optimization for LTI controller design (see also [59]). QDES consists of a front-end compiler that translates a control design problem into problem (15.5); this optimization problem is solved numerically by further reducing it to a quadratic program. This reduction to a quadratic program requires some approximation and sampling of frequency-domain constraints. However, the positive and bounded real lemmas and
their variations (see, for example, [13, 432, 225]) can be used to exactly reformulate a number of frequency-domain constraints as LMI constraints. This motivates the use of LMI techniques to solve problem (15.5), which we develop in the rest of the chapter.
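The affine dependence of the closed-loop map on the finite-dimensional parameter Θ, which is what makes the resulting optimization convex, can be illustrated at a fixed frequency; T1, T2, T3, and the Qi below are random placeholders, not computed from a plant:

```python
import numpy as np

# Sketch: H(Theta) = T1 + T2 * Q(Theta) * T3, with Q(Theta) = sum_i
# theta_i Q_i, is affine in Theta (random placeholder data).
rng = np.random.default_rng(4)
T1 = rng.standard_normal((2, 2))
T2 = rng.standard_normal((2, 3))
T3 = rng.standard_normal((3, 2))
Qs = [rng.standard_normal((3, 3)) for _ in range(4)]

def H(theta):
    Q = sum(t * Qi for t, Qi in zip(theta, Qs))
    return T1 + T2 @ Q @ T3

th1, th2 = rng.standard_normal(4), rng.standard_normal(4)
lam = 0.3
affine_ok = bool(np.allclose(H(lam * th1 + (1 - lam) * th2),
                             lam * H(th1) + (1 - lam) * H(th2)))
```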
15.2.2
Controller design specifications as LMI constraints
We now show how some typical constraints on Hzw can be reformulated as LMI constraints. Of course, since quadratic programs can be directly translated into LMI optimization problems [64], every controller design constraint listed in [59] can be reformulated as an LMI constraint. In addition, we will show how certain frequency-domain constraints can be naturally reformulated as LMI constraints. We restrict the Youla parameter Q to lie in the finite-dimensional subspace given in (15.4). The corresponding set of achievable closed-loop maps is
Constraints on the response to specific inputs

Suppose that a certain exogenous output (i.e., a component of z) is required to lie within specified upper and lower bounds for a given reference input signal (i.e., a component of w). Let Hzw,ref be the corresponding transfer function. Then the response constraint is simply an affine constraint on Hzw,ref and therefore on Θ (see [59, 61]).
LMIs for asymptotic tracking constraints

Suppose that a certain exogenous output is required to asymptotically track a specific input signal. Let Hzw,track be the corresponding transfer function. Then the tracking constraint is simply an equality constraint on Hzw,track(1) and therefore on Θ (see [59, 61]).
LMIs for H2 norm objectives

The H2 norm of a transfer function H of a stable linear system is defined as
From Parseval's theorem, the H2 norm can also be calculated as the square root of the sum of the squares of the impulse response coefficients. As the H2 norm measures the root-mean-square (RMS) value of the output of the system when it is driven by white noise with unit power spectral density, constraints on ||Hzw||2 provide a way of incorporating noise sensitivity constraints into the design. Suppose the transfer function H has a state-space realization (A, B, C, D). Then ||H||2 can be calculated as follows. Define the controllability Gramian W as the solution to the Lyapunov equation; the H2 norm then follows. Now consider the constraint that the H2 norm of the transfer function from some (or all) of the components of w to some (or all) of the components of z be less than some γ. Let this transfer function be denoted Hzw. Since Hzw lies in the finite-dimensional set of achievable maps above, the transfer function Hzw
has a state-space realization (Azw, Bzw, Czw(Θ), Dzw(Θ)), where Azw and Bzw are real matrices, and Czw and Dzw are affine functions of Θ. Then the constraint ||Hzw||2 < γ is equivalent to
This is easily rewritten as the LMI constraint
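The Gramian-based H2 computation can be sketched for a discrete-time example. The system below is made up, and the discrete Lyapunov equation W = A W A^T + B B^T is our reading of the omitted display, consistent with the unit-disk stability convention used in this chapter; the result is cross-checked against the impulse-response sum:

```python
import numpy as np

# Sketch: H2 norm of a (made-up) stable discrete-time system via the
# controllability Gramian, cross-checked against the impulse response.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.2]])

n = A.shape[0]
# vec(W) = (I - A kron A)^(-1) vec(B B^T)   (column-major vec)
vecW = np.linalg.solve(np.eye(n * n) - np.kron(A, A),
                       (B @ B.T).flatten(order="F"))
W = vecW.reshape((n, n), order="F")
h2_gram = float(np.sqrt(np.trace(C @ W @ C.T) + np.trace(D @ D.T)))

# Impulse response: h[0] = D, h[k] = C A^(k-1) B for k >= 1
h2_sq = float(np.trace(D @ D.T))
Ak = np.eye(n)
for _ in range(200):                       # truncated sum (A is stable)
    hk = C @ Ak @ B
    h2_sq += float(np.trace(hk @ hk.T))
    Ak = A @ Ak
h2_impulse = float(np.sqrt(h2_sq))
```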
LMIs for H∞ norm objectives

The H∞ norm of a transfer function H of a stable linear system is defined as
where σmax(M) denotes the maximum singular value (spectral norm) of a matrix M. The H∞ norm measures the L2- or energy-gain of the system; therefore, constraints on ||Hzw||∞ provide a way of incorporating noise amplification constraints into the design. In addition, H∞ constraints provide a way of incorporating robustness requirements into the design; see, for example, [98, 106, 175]. Suppose the transfer function H has a state-space realization (A, B, C, D). Then the bounded real lemma (see [64] and the references therein) enables the reformulation of the constraint ||H||∞ < γ as the following LMI constraint in P:
Now consider the constraint that the H∞ norm of the transfer function Hzw from some (or all) of the components of w to some (or all) of the components of z be less than some γ. Using a state-space realization (Azw, Bzw, Czw(Θ), Dzw(Θ)) for Hzw, where Azw and Bzw are real matrices, and Czw and Dzw are affine functions of Θ, the constraint ||Hzw||∞ < γ is equivalent to the existence of P = P^T such that
holds. This is easily rewritten as the LMI constraint
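As a cross-check on H∞ constraints, independent of the LMI reformulation, the norm can be estimated by gridding the unit circle. The example transfer function below is made up and has the known norm 1/(1 − a), attained at z = 1:

```python
import numpy as np

# Sketch: estimating the H-infinity norm of the (made-up) discrete-time
# transfer function H(z) = 1/(z - a) by gridding the unit circle.
a = 0.5
omegas = np.linspace(0.0, np.pi, 2001)
gains = np.abs(1.0 / (np.exp(1j * omegas) - a))
hinf_est = float(gains.max())
hinf_true = 1.0 / (1.0 - a)               # peak gain occurs at omega = 0
```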
The implication of section 15.2.2 is that the problem of LTI controller design with several common design specifications can be posed as an optimization problem (15.5) with LMI constraints (and objective) and solved numerically using standard software tools such as [126, 153]. We now demonstrate the application of this approach to controller design for the NEC Laser Bonder.
15.3
Design of an LTI controller for the NEC Laser Bonder
A laser bonder is a soldering machine that connects the leads of an integrated circuit (IC) chip to pads on the IC board. The bonding head consists of a small tool that presses a lead of the IC onto the corresponding pad. Lasers then melt the solder of the pad, thereby completing the connection. Once a lead is successfully connected, the tool moves up to its initial position. The positioning stage that holds the IC board then moves above the next lead, and the process is repeated for the new connection to be made. A schematic model of the bonder head is shown in Figure 15.2. The bonder head consists of a tool, a linear voice coil motor (VCM), which drives the tool up and down, and a position sensor (with a linear scale), which measures the vertical position of the tool. Our objective is the design of a controller that achieves the up-down positioning with high speed and precision.
Figure 15.2: Schematic model of the NEC Laser Bonder.
15.3.1
Modeling of NEC Laser Bonder
The equation governing the motion of the bonder head is
where F is the thrust force of the linear motor, ks is the spring constant, and h is the tool position. The thrust force is given by F(t) = kf i(t), where kf is the thrust force constant and i is the current through the motor. This current, in turn, satisfies
where Vm is the voltage input to the motor, R is the resistance of the motor, L is its inductance, and ke is its velocity-current coefficient. The values of the various constants are shown in Table 15.1.
Table 15.1: Model parameter values for the NEC Laser Bonder.

Mass M: 1.91 kg
Resistance R: 5.96 Ω
Inductance L: 2.25 mH
Spring constant ks: 422 N/m
Thrust force constant kf: 12.8 N/A
Velocity-current coefficient ke: 12.8 V·s/m
Equations (15.7) and (15.8) can be combined to yield the following state equation:
The controller to be designed has access to the position reference input href and the output of the tool position sensor hse, which is simply the actual tool position with some additive sensor noise nse. The control input, i.e., the output of the controller, is a motor voltage Vin; this signal, corrupted by an additive noise nact, drives the motor. In addition, there is a disturbance fd that acts on the motor. For convenience, we scale this disturbance by 1/M and define rd = fd/M. In our framework, the position reference input href, the scaled motor disturbance rd, the position sensor noise nse, and the actuator noise nact form the exogenous input w; the computed voltage Vin is the control input u; the actual position h and the actual motor input voltage Vm form the regulated variables z; and the reference input href and the sensed tool position hse form the measured variable y (see Figure 15.3). From (15.9), we then have
Substituting the parameter values shown in Table 15.1, we have the following state equations that describe a linear model for the Laser Bonder:
Figure 15.3: Block diagram of the NEC Laser Bonder.
Discretizing the system with a sampling frequency of 1 kHz, we obtain
where
and
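The physics in (15.7)-(15.8) together with the values in Table 15.1 determine the continuous-time model, which zero-order-hold discretization then converts to the 1 kHz discrete model. The sketch below (in Python rather than the chapter's Matlab, with the state ordering [h, ḣ, i] assumed) reproduces the lightly damped open-loop poles mentioned in section 15.3.3.

```python
import numpy as np
from scipy.signal import cont2discrete

# Physical parameters from Table 15.1
M, R, L = 1.91, 5.96, 2.25e-3          # kg, ohm, H
ks, kf, ke = 422.0, 12.8, 12.8         # N/m, N/A, V*s/m

# State x = [h, h_dot, i]:  M h'' = kf*i - ks*h,   L i' = Vm - R*i - ke*h_dot
A = np.array([[0.0,     1.0,      0.0],
              [-ks / M, 0.0,      kf / M],
              [0.0,     -ke / L,  -R / L]])
B = np.array([[0.0], [0.0], [1.0 / L]])   # input: motor voltage Vm
C = np.array([[1.0, 0.0, 0.0]])           # measured output: position h
D = np.zeros((1, 1))

# Zero-order-hold discretization at the 1 kHz sampling rate
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), 1e-3, method='zoh')

# The open-loop poles are stable but lightly damped, as noted in the text
print(np.abs(np.linalg.eigvals(Ad)))
```

The slow pole pair comes out with magnitude just below one, which matches the remark in section 15.3.3 that an LQG prestabilization step improves conditioning.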
15.3.2 Design specifications
There are three main design specifications: high speed, high precision, and high reliability. More precisely, we have the following design constraints.
SP-1 Tracking a given reference input. In the absence of disturbances and noises, for the reference input href, shown in solid lines in Figure 15.4, the position h is required to lie within the limits shown in dashed lines. This constraint ensures that the design satisfies a number of specifications:
- Tracking delay. The delay in tracking the reference input must not exceed 3 ms.
- Settling interval. The response is required to lie within ±3 μm from 20 ms onward.
- Overshoot. The overshoot must not exceed 3 μm.
SP-2 Asymptotic tracking. In the absence of disturbances and noises, the step response from the reference input href to the position h must eventually settle at one.
Figure 15.4: Constraints on the response to a given reference input. The solid line shows the reference input and the dashed lines the limits within which the response is required to lie.
SP-3 Bounds on input signal Vm. In the absence of disturbances and noises, the voltage input Vm to the motor is limited to ±10 V, reflecting the limitations of the transformer driving the motor.
The objective is as follows:
OBJ Minimize the H∞ norm of the transfer matrix from the reference input (href) and disturbance (rd) to the position (h) and motor voltage (Vm). This serves to mitigate the effect of disturbances at the signals of interest.
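For a stable discrete-time system, the H∞ norm appearing in OBJ is the peak over frequency of the largest singular value of the frequency response, so a coarse grid gives a quick lower-bound estimate. The helper below is an illustrative sketch, not part of the chapter's toolchain; the closing check uses a simple first-order system, not the Bonder model.

```python
import numpy as np

def hinf_norm_dt(A, B, C, D, n_grid=2000):
    """Grid estimate (a lower bound) of the H-infinity norm of a stable
    discrete-time system G(z) = C (zI - A)^{-1} B + D."""
    n = A.shape[0]
    worst = 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        G = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B) + D
        worst = max(worst, np.linalg.svd(G, compute_uv=False)[0])
    return worst

# Check on G(z) = 1/(z - 0.5): the peak gain is 1/(1 - 0.5) = 2 at w = 0
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
print(hinf_norm_dt(A, B, C, D))   # close to 2.0
```

In the chapter itself the norm is of course handled exactly, via the bounded real lemma LMI (15.6), rather than by gridding.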
15.3.3 Setting up the LMI problem
The open-loop system is stable, and Knom and Lnom can be taken to be zero in order to generate T1, T2, and T3. However, since the open-loop system is very lightly damped (i.e., the poles of the open-loop system are very close to, but less than, one in magnitude), we choose the feedback gain Knom and observer gain Lnom, using standard LQG control, to increase the damping of the system (i.e., the LQG controller serves to decrease the magnitude of the poles of the closed-loop system). This step, strictly speaking, is unnecessary; however, our experience has shown that the optimization problems are much better conditioned with this additional step. The set Q was chosen as
This set can be easily rewritten in the form of (15.4), with θ the vector of free parameters. From the argument in section 15.2.2, the specifications SP-1 and SP-2 each lead to two linear constraints on θ; in the notation of problem (15.5), we obtain four LMI
constraints φ1, . . . , φ4. The specification SP-3 is an equality constraint on θ, which can be represented in the form of two LMI constraints φ5 and φ6. The objective OBJ is incorporated in problem (15.5) by defining φ7 as the LMI constraint (15.6) and setting φ0 = γ².
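Because every closed-loop response is affine in the free parameter θ, a time-domain template such as SP-1 reduces to a pair of linear inequalities per sample time. The toy sketch below shows the mechanics of assembling and checking such constraints; h0, H, and the bounds are made-up numbers, not the Bonder data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration: response(t) = h0(t) + H(t, :) @ theta is affine in theta,
# so "lower(t) <= response(t) <= upper(t)" gives two linear inequalities per
# sampled time instant.
rng = np.random.default_rng(0)
n_samples, n_theta = 50, 4
h0 = np.zeros(n_samples)                  # response obtained with theta = 0
H = 0.1 * rng.standard_normal((n_samples, n_theta))
upper = np.ones(n_samples)
lower = -np.ones(n_samples)

# Stack  H theta <= upper - h0  and  -H theta <= h0 - lower
A_ub = np.vstack([H, -H])
b_ub = np.concatenate([upper - h0, h0 - lower])

# Feasibility check with a constant objective; theta = 0 is feasible by design
res = linprog(c=np.zeros(n_theta), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_theta)
print(res.success)
```

In the chapter the same affine structure is exploited, but the constraints enter problem (15.5) together with the LMI objective rather than as a stand-alone LP.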
15.3.4 Numerical results
We used the Matlab LMI Control Toolbox [153] to solve problem (15.5) to obtain θopt and thus Kopt. Figure 15.5 shows the nominal (i.e., noise- and disturbance-free) response of the closed-loop system with the optimal controller in place. The response lies between the specified upper and lower bounds. In particular, the tracking delay, settling interval, and overshoot constraints are satisfied. Moreover, the response converges to the steady-state reference input value of 0.001 m.
Figure 15.5: The response to the reference input, shown in a solid line. The dashed lines show the constraints.
Figure 15.6 shows the motor voltage Vm corresponding to the reference input. Note that its magnitude remains under 10 V at all times, as required. The optimal value of the objective OBJ subject to the constraints SP-1 through SP-3, i.e., the smallest H∞ norm from [href rd] to [h Vm] subject to the various design constraints, is 4.4 × 10^5. For purposes of comparison, we attempted to find the smallest H∞ norm achievable with LQG controllers that satisfied the design constraints and obtained a value of 5.8 × 10^5. Thus the H∞ norm with the best LQG controller that we could design is about 20% larger in this example; moreover, there is no systematic way of incorporating multiple design constraints in the LQG design procedure (we had to resort to trial and error). It is possible that a more experienced LQG control designer could match the LMI design; however, this makes the point that with the assistance of the computer-aided procedure presented herein, even inexperienced users can quickly create useful designs.
Figure 15.6: The motor voltage in response to the reference input, shown in a solid line. The dashed lines show the motor voltage constraints.
Next, we study numerically the interaction between two competing constraints, viz., the H∞ norm of the transfer matrix from [href rd] to [h Vm] and the tracking delay. Clearly, increasing the allowable tracking delay, i.e., relaxing the tracking delay constraint, offers the potential of reducing the H∞ norm. We study the trade-off between these quantities for three values of the settling interval. The trade-off curves are shown in Figure 15.7. These results show that in the case of a settling interval of ±3 μm, the H∞ norm does not get smaller when the tracking delay constraint is relaxed beyond 4 ms, suggesting that other design constraints are then binding. Similar comments can be made in the case of settling intervals of ±10 μm and ±30 μm. As expected, for every value of the tracking delay, the H∞ norm decreases when the settling interval is allowed to be larger. Besides quantifying the interaction between competing constraints, the trade-off curves serve to illustrate the limits of performance of LTI controllers; every point (d, γ) above a trade-off curve is an achievable design specification, meaning that there is an LTI controller that simultaneously satisfies the maximum actuator input and settling interval constraints and results in a closed-loop response that has a tracking delay that does not exceed d, with an H∞ norm from [href rd] to [h Vm] that does not exceed γ. Every point below the curve represents an (almost certainly)15 unachievable design specification.
15.4 Conclusions
We have presented a numerical control design method for designing optimal LTI controllers for LTI plants with multiple design constraints. The techniques combine convex optimization over LMIs with the Youla parametrization of the set of achievable stable closed-loop maps. The advantages of this approach are that engineering specifications
15 Recall that we are restricting Q to lie in the finite-dimensional subspace Qfin, hence the parenthetical qualification.
Figure 15.7: Trade-off between the tracking delay and the H∞ norm of the transfer function from [href rd] to [h Vm].
can be directly incorporated in the design, without calling on the experience or intuition of the designer to "tune" the design parameters. The particular advantage of the use of LMIs is that several important frequency-domain constraints can be exactly reformulated as LMI constraints using the bounded real lemma. In addition to designing controllers, it is possible to study the limits of performance of linear controllers, that is, to numerically determine the best achievable performance using linear controllers. We can also study the interaction between competing design constraints by numerically determining trade-off curves. The task of implementing the controllers presented herein on the "real" NEC Laser Bonder system remains. Another issue that is of interest is the incorporation of robustness requirements on the design so as to account for parameter and operating condition variations.
Chapter 16
Multiobjective Robust Control Toolbox for Linear-Matrix-Inequality-Based Control

Stephane Dussy

16.1 Introduction
When applying linear matrix inequality (LMI) methods to control problems, see, e.g., [64, 108, 315, 148], the LMI conditions that arise are often complicated, making a direct application of LMI solvers (LMI Control Toolbox [153], SDPsol [426], lmitool [126]) cumbersome. The user needs to master a wide field of theoretical knowledge before starting to compute a control law: both a strong background in control theory and a deep understanding of recent advances in optimization, namely, polynomial-time interior-point algorithms. In view of this theoretical difficulty inherent in the use of LMI techniques, the main purpose of the Multiobjective Robust Control Toolbox (MRC Toolbox) [116, 113] is to provide a set of functions that allow a robust control law to be synthesized under various constraints without expert knowledge of robust control theory. These user friendly functions require the practitioner to enter only the model state-space representation and the physical specifications. The toolbox functions are based on recent advances in control theory, in particular, those pertaining to multiobjective robust gain scheduling [115, 114, 111]. We consider a wide class of nonlinear uncertain systems for which we can derive a linear fractional representation (LFR). From this standard representation, we describe a formulation of the control problem based on the gain-scheduled framework introduced by Packard [35, 312]. This formulation includes various specifications such as a norm bound on the command input and saturations on specific outputs. The performance requirements are specified via an α-stability condition, i.e., exponential stability with a decay rate α, or via an L2-gain condition. The public domain MRC Toolbox [113] is a set of Matlab functions built on top of lmitool [126] and SP [402].
For the output-feedback case, these M-functions use a cone complementarity algorithm [128] adapted to robust control. They may be complemented with recent LMI-based tools, see, e.g., the toolbox developed by Beran [43] for linear time-invariant (LTI) systems. The theoretical framework of the methodology is defined in section 16.2. The main functions of the MRC Toolbox are described in section 16.3. Finally, numerical examples are given in section 16.4 with the control of an inverted pendulum [114] and a nonlinear benchmark called RTAC [115].
Notation. For a given integer vector r = [r1, . . . , rn], ri ∈ N, i = 1, . . . , n, we define the sets
16.2 Theoretical framework

16.2.1 Specifications
To describe the methodology, let us consider a parameter-dependent nonlinear system of the form
where x is the state; u is the command input; y is the measured output; z is the controlled output; ζi, i = 1, . . . , N, are some outputs of interest; and w are the disturbances. The vector ξ contains time-varying parameters, which are known to belong to some given bounded set 𝒟. Also, we assume that A(·,·), Bu(·,·), Cy(·,·), Bw(·,·), Dyw(·,·), and Dzu(·,·) are rational functions of their arguments. We seek a dynamic, possibly nonlinear, output-feedback controller, with input y and output u, such that a number of specifications are satisfied. To define our specifications, we consider a given polytope P of initial conditions. We also consider the set of admissible disturbances, chosen to be of the form
where wmax is a given scalar. To P, W, and the uncertainty set 𝒟, we associate the family X(x0, wmax) of trajectories of the above uncertain system in response to a given x0 ∈ P. Our design constraints are as follows:
S.1 The system is well posed along every trajectory in X(x0, wmax); that is, for every t > 0, the system matrices A(x, ξ(t)), Bu(x, ξ(t)), etc. are well defined.
S.2 Every trajectory in X(x0, 0) decays to zero at rate α; that is, limt→∞ eαt x(t) = 0.
S.3 For every trajectory in X(x0, wmax), the command input u satisfies ||u(t)||2 ≤ umax for every t > 0.
S.4 For every trajectory in X(x0, wmax), the outputs ζi = Ei x, i = 1, . . . , N, are bounded; that is, ||ζi(t)||2 ≤ ζi,max for every t > 0.
S.5 For every trajectory in X(0, wmax), the closed-loop system exhibits good disturbance rejection; i.e., a maximum L2-gain γ from w to z is guaranteed. This is equivalent to imposing a bound γ such that
We recall that the above specifications should hold robustly, that is, for every ξ(t) ∈ 𝒟.
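For a single fixed linear system x' = Ax, the decay-rate condition S.2 can be certified by a quadratic Lyapunov function: A + αI must be Hurwitz, which one Lyapunov-equation solve confirms. The sketch below illustrates that building block (it is not a toolbox routine, and the full robust conditions of section 16.2.4 are of course more involved).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def certify_decay_rate(A, alpha):
    """Quadratic certificate for S.2 on a fixed linear system x' = A x:
    x(t) -> 0 at rate alpha iff A + alpha*I is Hurwitz, i.e. the Lyapunov
    equation (A + alpha*I)^T P + P (A + alpha*I) = -I has a solution P > 0."""
    n = A.shape[0]
    As = A + alpha * np.eye(n)
    if np.linalg.eigvals(As).real.max() >= 0:
        return False
    # solve_continuous_lyapunov(a, q) solves a X + X a^T = q; with a = As.T
    # the solution X satisfies As^T X + X As = -I
    P = solve_continuous_lyapunov(As.T, -np.eye(n))
    return bool(np.linalg.eigvalsh((P + P.T) / 2).min() > 0)

# Example: the eigenvalues of A are -1 and -2, so any rate alpha < 1 works
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(certify_decay_rate(A, 0.5), certify_decay_rate(A, 1.5))  # True False
```

The LMI conditions of this chapter generalize exactly this test to the uncertain, gain-scheduled closed loop, with P a decision variable rather than the output of a Lyapunov solve.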
16.2.2 LFR of the open-loop system
The first step is to derive an LFR for the system. In other words, we seek a set of equations
that is formally equivalent to the nonlinear equations (16.1). We will refer to [114, 129], where the framework for the construction and the normalization of an LFR for such a system is described. For this example, we considered the case of bounded uncertainties. The LFR, and therefore the associated LMI-based conditions, are different when the uncertainties are expressed through sector conditions. In this case, the complete methodology is described in [112]. The next sections describe the basic idea of this methodology for nonlinear systems with bounded uncertainties.
16.2.3 LFR of the controller
We take our measurement-scheduled controller to be in the LFR format
where p, q are fictitious measured inputs and outputs of the controller. Note the crucial assumption here: The controller is scheduled with respect to the measured part of the nonlinear block Δ, namely, Δm(y) (of rank νm), and it is robust with respect to the uncertain part, namely, Δu(y) (of rank νu). As described in the first chapter of the book, our theoretical framework refers to the mixed parameter-scheduled/robust problem, also called robust measurement scheduling. The closed-loop system is shown in Figure 16.1.
16.2.4 LMI conditions
We seek quadratic Lyapunov functions ensuring specifications S.1-S.5 for the closed-loop system. As seen in [111, 129], it is possible to formulate the synthesis conditions ensuring these specifications as a set of LMI conditions associated with a nonconvex constraint. For the specific problem described in the previous sections, these conditions are as follows.
Figure 16.1: LFR of the closed-loop system. Let N be a matrix whose columns span the nullspace of [Cy Dyp Dyw]T. The synthesis conditions are
This quite complex formulation is easy to solve thanks to efficient LMI solvers. The above LMIs are interpreted as follows: An input norm bound is specified with (16.8), (16.9), and (16.10). Bounds on specific outputs are ensured with (16.8), (16.9), and (16.11). The remaining matrix conditions specify the stability and the guaranteed L2-gain of the system. Every condition above is an LMI, except for the nonconvex equations (16.7) and (16.9). When (16.4) holds, enforcing this condition can be done by minimizing Tr(SuTu) with the additional LMI constraint
In fact, the problem belongs to the class of "cone-complementarity problems," which are based on LMI constraints of the form
where V, W, Z are matrix variables (V, W being symmetric and of the same size) and F(V, W, Z) is a symmetric, matrix-valued, affine function. The corresponding cone-complementarity problem is to minimize Tr(VW) subject to (16.13). The heuristic proposed in [128], which is based on solving a sequence of LMI problems, can be used to solve this problem. This heuristic is guaranteed to converge to a stationary point.
Algorithm.
1. Find V0, W0, Z0 that satisfy the LMI constraints (16.13). If the problem is infeasible, stop. Otherwise, set k = 1.
2. Find Vk, Wk, Zk that solve the LMI problem
3. If the objective Tr(Vk−1 Wk + Wk−1 Vk) has reached a stationary point, stop. Otherwise, set k = k + 1 and go to step 2.
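Each pass of the heuristic is itself an LMI problem, which in general needs a semidefinite solver. In the scalar toy below, where the constraint [[v, 1], [1, w]] positive semidefinite reduces to v·w ≥ 1 with v, w > 0, the inner minimization has a closed form, so the linearization iteration can be traced without any solver; it stops at a stationary point with v·w = 1 and objective 2 (i.e., 2n for n = 1). This is an illustration of the iteration's mechanics only, not of the full matrix problem.

```python
import math

# Scalar toy of the cone-complementarity linearization: minimizing
# v_prev*w + w_prev*v over { v*w >= 1, v, w > 0 } has the closed form
# below (by AM-GM the minimum sits on the boundary v*w = 1, with
# v_prev*w = w_prev*v).
def inner_step(v_prev, w_prev):
    v = math.sqrt(v_prev / w_prev)
    return v, 1.0 / v

v, w = 4.0, 1.0                   # any feasible start with v*w >= 1
objective = v * w                 # placeholder before the first pass
for _ in range(20):
    v_prev, w_prev = v, w
    v, w = inner_step(v_prev, w_prev)
    objective = v_prev * w + w_prev * v

# At a stationary point v*w = 1 and the objective equals 2 (= 2n for n = 1)
print(v * w, objective)           # 1.0 2.0
```

The matrix version replaces the closed-form step with an LMI solve, exactly as in steps 1-3 above, and stops when the objective stalls.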
16.2.5 Main motivations for the toolbox
As described in the first chapter of the book, there are several key features in the proposed approach. • Efficient numerical solution. LMI optimization problems can be solved very efficiently using recent interior-point methods (the global optimum is found in modest computing time). This brings a numerical solution to problems when no analytical or closed-form solution is known. • A systematic method. The approach is very systematic and can be applied to a very wide variety of nonlinear control design problems. This includes (and is not reduced to) systems whose state-space representations are rational functions of the state vector.
• Multicriteria problems. The approach is particularly well suited to problems where several (possibly conflicting) specifications are to be satisfied by the closed-loop system. This is made possible by the use of a common Lyapunov function proving that every specification holds.
The main problem remains the complexity of the LMI conditions. Therefore, it appears necessary that this part of the control synthesis remain "transparent" to the user. In view of the above features, the approach opens the way to CAD tools and user friendly interfaces for the analysis and control of nonlinear uncertain systems. All of these key features make the LMI formulation one of the most promising approaches for reducing the gap between the engineering and research communities. During the last decade, control engineers have been frustrated by theoretical advances that could appear to them ever more complicated and cumbersome to apply to industrial problems. In fact, from an engineering point of view, every robust control problem can be summarized by the description of the system (including its variation) and the design goals (through performance specifications). The main purpose of the MRC Toolbox is to provide user friendly functions that require only the plant description and the physical specifications before synthesizing the appropriate control law. These "engineerlike requirements" are then translated by the toolbox into advanced LMI-based conditions. To do so, the theoretical background underlying these Matlab functions attempts to make the best trade-off between tractability and complexity of the synthesis conditions. Quadratic Lyapunov functions represent for the moment the best way to obtain LMI-based conditions (and therefore conditions that are easy to solve numerically) for a wide class of specifications and systems (systems with bounded uncertainties [114], Lur'e systems [112], discrete-time systems [117]).
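Once a solver returns a candidate common Lyapunov matrix P, verifying that it simultaneously certifies every vertex system reduces to plain eigenvalue tests. A minimal sketch, with toy vertex matrices deliberately chosen (stable diagonal part plus skew-symmetric part) so that P = I works:

```python
import numpy as np

def is_common_lyapunov(P, A_list, tol=1e-9):
    """Check that one quadratic function V(x) = x^T P x certifies stability
    for every vertex system x' = A_i x: P > 0 and A_i^T P + P A_i < 0."""
    if np.linalg.eigvalsh(P).min() <= tol:
        return False
    return all(np.linalg.eigvalsh(A.T @ P + P @ A).max() < -tol
               for A in A_list)

# Toy vertices: A_i = -c_i*I + (skew-symmetric part), so P = I works because
# the skew parts cancel in A^T P + P A = -2*c_i*I.
A1 = np.array([[-1.0, 2.0], [-2.0, -1.0]])
A2 = np.array([[-2.0, -5.0], [5.0, -2.0]])
print(is_common_lyapunov(np.eye(2), [A1, A2]))   # True
```

Finding such a P is the hard (LMI) part that the toolbox automates; checking one, as above, is cheap.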
16.3 Toolbox contents

16.3.1 General structure of the toolbox
The main driver, named mrct, can be called directly. The toolbox also contains the functions mrstsf, mrstof, mrl2sf, and mrl2of, which solve specific problems. mrct invokes these M-functions and also some other functions that remain hidden. The latter are referenced with block letters MR, e.g., MRspec, which interactively asks for the specifications. The data of the system, i.e., the matrices A, Bu, Bp, Bw, Cq, Dqu, Dqp, Dqw, Cy, Dyu, Dyp, Dyw, Cz, Dzu, Dzp, and Dzw from the LFR (16.2), can be packed and unpacked in a convenient way with the routines mrpck and mrunpck.
16.3.2 Description of the driver mrct
The use of mrct is straightforward and just requires following the on-line instructions. This function invokes MRspec, which asks the user to enter the specifications and stores them in a data file. According to these specifications, the driver follows the theoretical steps of LMI-based robust synthesis described in section 16.2.4:
1. Build the LMI conditions, each one reflecting a specification.
2. Solve the LMI problem. In output feedback, this requires running the cone complementarity algorithm.
3. Reconstruct the controller from the solutions of the LMI conditions. This can be achieved by solving another LMI problem (see [111] for more details).
Table 16.1: The main specific functions.

Problem description                             Function              Type
                                                α-stab.    L2-gain
State-feedback    Bounded uncertainties         mrstsf     mrl2sf    'b'
                  Real bounded uncertainties    mrstsf     mrl2sf    'br'
                  Sector conditions             mrstsf     mrl2sf    's'
Output-feedback   Bounded uncertainties         mrstof     mrl2of    'b'
                  Sector conditions             mrstof     mrl2of    's'
16.3.3 Description of the main specific functions
The computation of a controller can be performed directly through the functions mrstsf, mrstof, mrl2sf, and mrl2of, where the specifications are entered as arguments of the corresponding M-functions. Each function can solve different kinds of problems, depending on what the user specifies in the argument type of the function. Table 16.1 summarizes the various problems that can be handled with the MRC Toolbox. The specifications of the control problem appear through the arguments of the functions as described in the following example:
>> [K,opt,info] = mrl2of(type,G,alpha,gamma,normw,rho,cz,czb,v,nnm,r,lambda)
where K contains the desired controller matrices packed with mrkpck,
opt is the optimized value of the minimization problem, and
1. type specifies the kind of problem to be handled: 'b' for bounded uncertainties (the default choice), 's' for sector conditions, 'br' for real bounded uncertainties.
2. G contains the matrix representation of the system A, Bu, Bp, Bw, Cq, Dqu, Dqp, Dqw, Cy, Dyu, Dyp, Dyw, Cz, Dzu, Dzp, Dzw. This single matrix can be packed/unpacked with the functions mrpck and mrunpck.
3. alpha specifies the required decay rate of the closed-loop responses.
4. gamma specifies the imposed maximum L2-gain; gamma = 'opt' for a minimization problem.
5. normw specifies the norm bound on the disturbance energy.
6. rho represents the norm bound on the command input; rho = 'opt' for a minimization problem.
7. czb represents the bounds on the outputs of interest, which are stored rowwise in cz.
8. v contains the polytope of initial conditions stored columnwise.
9. nnm is the number of unmeasured uncertainties in Δ = diag(Δm, Δu), so that nnm = rank(Δu).
Figure 16.2: Inverted pendulum on a cart. 10. r contains the structure of A stacked in the row vector. It contains the way some of the uncertain parameters are repeated in A. 11. lambda represents the norm bound on the variables P, Q, 5", and T defined in section 16.2.4. Remark. The controller order for the output-feedback case is a priori equal to the order of the plant. The synthesis of a low-order controller can be achieved by using some well-known LMI-based algorithms [176, 414] that are not included in the toolbox.
16.4 Numerical examples
16.4.1 An inverted pendulum benchmark
The inverted pendulum is built on a cart moving in translation without constraint (see Figure 16.2). Our goal is to stabilize the inverted pendulum in the vertical position, with an uncertain but constant mass m. Here ζ is the position of the cart, θ is the angle of the pendulum with the vertical axis, and u is the control input acting on the cart translation. We seek to control the angle θ without limitation on the cart's horizontal position. We assume that the mass m is constant and unknown but bounded: m ∈ [m0 − Δm, m0 + Δm], where m0, Δm are given. The parameter here is taken to be ξ = (m − m0)/Δm, so that ξ ∈ 𝒟 = [−1, 1]. Our specifications are described by S.1-S.4, where α, umax, and xmax = [θmax θ̇max]T are given. The polytope of admissible initial conditions is determined by two given scalars
The methodology for this example is described in detail in [114]. The necessary steps for the synthesis of the desired controllers, and in particular the construction of the LFR, are given in the following Matlab file:
% LFR of the system
% -----------------
A = [0 1 ; 12.43 0]; Bu = [0 ; -0.87];
Bp = [0 0 0 0 0 0 0 0 0 ; -0.03 0.05 0.04 -0.26 1.07 -0.07 0.11 1.13 -0.26];
Cq = [0 0 0 0 0 0 0 0 1 ; 0 5.74 7.46 0 1 0 -5.52 0.92 0]';
Dqu = [0 -0.40 -0.52 1 0 0 0.38 -0.00 0]';
2.52 0 Dqp = [0 0 0 0 0 0 0 5 2 -0.12 0.05 0.. -0.01 0.02 0.25 -0.12 0.49 -0.03 5 2 -0.02 0.03 0.02 -0.16 0.64 -0.04 0.07 0.68 -0 16 0 0; 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 1.9 0 0 0 0 0 0.23 0 0.01 0.21 0 .16 0.11 -0.48 0.03 -0.05 -0.500.1 0 ; -0.00 -0.00 -0. 0.08 -0.00 0.00 0.00 0 0 0 0 0 0 0 0];
Cy = [1 0]; Dyu = 0; Dyp = zeros(1,9);
% fill matrices (z,w) with zeros
[Bw,Dqw,Dyw,Cz,Dzu,Dzp,Dzw] = mrzerozw(A,Bu,Bp,Cy);
G = mrpck(A,Bu,Bp,Bw,Cq,Dqu,Dqp,Dqw,Cy,Dyu,Dyp,Dyw,Cz,Dzu,Dzp,Dzw);

% Specifications
% --------------
type = 'b';               % bounded uncertainties
Alph = 0.15;              % required decay-rate
Rho = 50;                 % input norm bound
Cz = [1 0 ; 0 1];         % bounded outputs
Czb = [1 3];              % associated norm bounds:
                          % it means |[1 0]x| < 1 and |[0 1]x| < 3
v = [0.3 0.3 ; 0.3 -0.3]; % polytope of initial conditions:
                          % it means |x1(0)| < 0.3 and |x2(0)| < 0.3
nnm = 4;                  % last 4 uncertainties are unmeasured
r = [4 1 3 1];            % structure of Delta
Lamb = 2000;              % norm bound on the variables

% Controller Synthesis
% --------------------
[K,opt,info] = mrstof(type,G,Alph,Rho,Cz,Czb,v,nnm,r,Lamb);
[Ab,Bbp,Bby,Cbq,Dbqp,Dbqy,Cbu] = mrkunpck(K);
The above code gives the solution
The plots of Figure 16.3 show the envelopes (solid line) of the closed-loop responses θ(t) and u(t) of the inverted pendulum, with m varying between its two extremal values, i.e., from −50% to +50% of its nominal value m0. They also show (dotted line) the responses of the system for the nominal value m = m0.
Figure 16.3: Envelopes of θ(t) in rad and u(t) in newtons for the mixed robust/gain-scheduled controller.
Figure 16.4: Rotational actuator to control a translational oscillator.
16.4.2 The rotational/translational proof mass actuator
The RTAC is a nonlinear benchmark proposed by Bupp, Bernstein, and Coppola [75]. Consider the unbalanced oscillator described in Figure 16.4. It is built with a cart (mass M) fixed to the wall by a linear spring (stiffness k) and constrained to one-dimensional travel along the Z-axis. An embedded proof mass actuator (mass m and moment of inertia I) is attached to the center of mass of the cart and can rotate in the horizontal plane. The radius of rotation is e. The cart is subjected to a disturbance F, and a control torque N is applied to the proof mass. We seek a state-feedback controller such that S.1 and S.3-S.5 are ensured for the closed-loop system. We also want to minimize the upper bound umax on the command input. For more details, the complete methodology is described in [115]. The solution has been computed with the following Matlab file:
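For readers who want to reproduce the qualitative RTAC behavior, the widely used normalized form of the benchmark dynamics can be simulated directly: ξ'' + ξ = ε(θ'² sin θ − θ'' cos θ) + w and θ'' = u − ε ξ'' cos θ. The coupling value ε below is an assumed illustrative number, not the one behind this chapter's LFR data.

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 0.2   # coupling eps = m*e/sqrt((I + m*e^2)(M + m)); value assumed here

def rtac_rhs(t, x, u=0.0, w=0.0, eps=EPS):
    """Normalized RTAC dynamics with state x = [xi, xi_dot, th, th_dot].
    The two accelerations are coupled through the configuration-dependent
    mass matrix [[1, eps*cos(th)], [eps*cos(th), 1]]."""
    xi, xid, th, thd = x
    Mm = np.array([[1.0, eps * np.cos(th)],
                   [eps * np.cos(th), 1.0]])
    rhs = np.array([-xi + eps * thd**2 * np.sin(th) + w, u])
    xidd, thdd = np.linalg.solve(Mm, rhs)
    return [xid, xidd, thd, thdd]

# Unforced response from a small cart offset: no damping, so it oscillates
sol = solve_ivp(rtac_rhs, (0.0, 30.0), [0.05, 0.0, 0.0, 0.0],
                max_step=0.01, rtol=1e-8)
print(np.abs(sol.y[0]).max())   # bounded, on the order of the initial offset
```

Feeding a state-feedback law in through the `u` argument makes the same integrator usable for closed-loop experiments like the ones plotted in Figure 16.5.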
% LFR of the system
% -----------------
A = [0 1 0 0 ; -1.04 0 0 0 ; 0 0 0 1 ; 0.19 0 0 0];
Bu = [0 ; -0.19 ; 0 ; 1.04];
Bp = [0 0 0 0 ; 0.01 -0.01 -0.01 0.05 ; 0 0 0 0 ; 0.01 0.01 0.00 -0.01];
Bw = [0 ; 1.04 ; 0 ; -0.19];
Cq = [0.84 0 0 0 ; 1.23 0 0 0 ; 0 0 0 0 ; 0 0 0 1.00];
Dqp = [-0.01 0 0.01 -0.04 ; 0 0.02 0.02 -0.06 ; 0 0 0 0 ; 0 0 0 0];
Dqu = [0.16 ; 0.23 ; 1.00 ; 0];
Dqw = [-0.84 ; -1.23 ; 0 ; 0];
Cy = [1 0 0 0 ; 0 0 1 0];
Dyu = [0 ; 0]; Dyp = zeros(2,4); Dyw = [0 ; 0];
Cz = [0.32 0 0 0 ; 0 0 0.32 0 ; 0 0 0 0];
Dzu = [0 ; 0 ; 1]; Dzp = zeros(3,4); Dzw = [0 ; 0 ; 0];
G = mrpck(A,Bu,Bp,Bw,Cq,Dqu,Dqp,Dqw,Cy,Dyu,Dyp,Dyw,Cz,Dzu,Dzp,Dzw);

% Specifications
% --------------
type = 'br';              % real bounded uncertainties
Alph = 0;                 % required decay-rate
Gam = 30;                 % L2-gain
Rho = 'opt';              % minimize input norm bound
cz = [1 0 0 0 ; 0 0 1 0 ; 0 0 0 1];  % bounded outputs
czb = [1.28 0.5 0.5];     % associated norm bounds:
                          % it means |X1| < 1.28, |X3| < 0.5, and |X4| < 0.5
v = [0.05 ; 0 ; 0 ; 0];   % polytope of initial conditions:
                          % it means |X1(0)| < 0.05
r = [3 1];                % structure of Delta
Lamb = 5000;              % norm bound on the variables

% Controller Synthesis
% --------------------
[K,opt,info] = mrl2sf(type,G,Alph,Gam,Rho,cz,czb,v,r,Lamb);
The string 'opt' for Rho means that this variable will be minimized; that is, we seek to minimize the command input bound. We obtained the state-feedback controller
with the guaranteed bound Nmax = 0.4586. The plots of Figure 16.5 show the closed-loop responses Z(t) and N(t) of the RTAC for nonzero initial conditions. These plots point out one of the main drawbacks of this approach based on quadratic Lyapunov functions. Depending on the problem, the LMI conditions may be very conservative and lead to constraints more stringent than we expected them to be (this is the case here for the bound on the command input). Therefore, to be optimal, this approach requires some guidelines for relaxing the design parameters [115]. It then generally provides very helpful trade-off curves for numerous control design problems. Note that the LMI approach is particularly well suited for producing trade-off curves, since it provides a solution in reasonable time, it guarantees that all the criteria are robustly respected, and it allows us to minimize a parameter and thus to provide an optimal solution. It no longer requires running numerous simulations (corresponding to the different possible variations of the uncertain plant) in order to check that the specifications hold. Figure 16.6 shows trade-off curves of decay rate versus control effort with γ = ∞ (right-hand side) and of performance versus control effort with α = 0 (left-hand side). Performance is measured by γ, the upper bound on the closed-loop L2-gain, and control effort is measured
Figure 16.5: Closed-loop system responses with w = 0 and zero initial conditions except for Z(0) = 0.01; Z in meters, N in newtons.
Figure 16.6: Trade-off plots command input/decay rate/L2-gain for the state-feedback (solid line) and output-feedback (dashed line) controllers. On the right side, γ = ∞, and on the left side, α = 0. Nmax is in newtons.
by the upper bound Nmax on the peak command input. In each plot, two curves are shown: one for state feedback (solid line) and the other for output feedback (dashed line).
16.5 Concluding remarks
The MRC Toolbox provides a user friendly interface for the robust control of a wide class of uncertain nonlinear systems under constraints. It is based on recent results on LMI-based robust measurement scheduling and bounded multiobjective control. The toolbox provides easy and fast controller synthesis under various specifications. It handles nonconvex problems via an efficient heuristic. The main idea that gave birth to this software was to strike a compromise between complex theoretical foundations and friendly use by an engineer without expert knowledge of recent results in control theory. Future updates of this software shall include the implementation of an algorithm for the synthesis of low-order controllers [176, 414] and the construction of new functions for discrete-time systems [117]. These updates shall also include a graphical interface for the description of the system (via block diagrams and Simulink-like icons), its variation, and the design goals.
Chapter 17
Multiobjective Control for Robot Telemanipulators

Jean Pierre Folcher and Claude Andriot

17.1 Introduction
Teleoperation is the branch of robotics dedicated to manipulation in inaccessible environments. Such environments can be distant or hostile (space or nuclear) or can be at a different power scale (microsurgery or electronic assembly); see [80, 146]. A standard teleoperator system consists of a slave robot tracking a master robot via a bilateral link (the controller and the actuators). The master robot is manipulated by a human operator, and the slave robot interacts with the environment (the load). Ensuring good tracking performance and maintaining system stability are the control objectives, usually expressed in terms of network-based properties. It has been shown in [82] that passivity theory is useful for analyzing teleoperator stability for a wide range of operators and environments. A stability criterion was derived in [81] by verifying constraints on the scattering or admittance matrices of the teleoperator. A passivity approach for the design of a bilateral controller has been presented in [82]. A related method, based on H∞ optimization, is examined in [16, 435]. The use of convex optimization for a passive design is shown in [205]. A key point in the design of a bilateral controller is to ensure good trade-offs between the conflicting objectives of stability and performance. We propose a multiobjective design approach expressed in terms of linear matrix inequalities (LMIs). This formulation appears well suited for many control problems and especially for multiobjective ones; see [123, 365]. The present approach is close to the one discussed in [123] and generalizes the results obtained for linear time-invariant (LTI) systems subject to "structured" dissipative perturbations. The central idea is that a number of design specifications, such as robust (H2 norm) performance, robust input peak bound, etc., can be translated into LMI constraints associated with a nonconvex constraint of the form ST = I, where S, T are two (block-diagonal) matrix variables.
Our problem can thus be solved using LMI optimization combined with an efficient cone complementarity linearization algorithm described in [128]. This makes our multiobjective robust control design problem amenable to an efficient numerical solution. Finally, the multiobjective approach is illustrated on a prototype telerobotic system: the RD500 of the Commissariat à l'Énergie Atomique (CEA). The force control problem of a single-joint slave manipulator is considered. A proportional integral (PI) force controller derived in [299] is also designed in this chapter for performance comparison. The organization of the chapter is the following. In section 17.2, the main ideas of telemanipulation, background on network theory, and network-based telemanipulation control objectives are given. The multiobjective design approach for the control of linear systems subject to a dissipative perturbation is developed in section 17.3. The proposed approach is applied in section 17.4 to design the force control law of a single-joint slave manipulator.
17.2 Control of robot telemanipulators
17.2.1 Teleoperation systems

A general model of a teleoperation system is shown in Figure 17.1.
Figure 17.1: Teleoperation system.

A teleoperation system includes the human operator, the environment, and the teleoperator. The teleoperator itself is composed of the master manipulator, the slave manipulator, and the bilateral link. In the case of a mechanical bilateral link, the intrinsic reversibility property implies that the action of the environment on the slave manipulator is transmitted back to the master manipulator. The operator thus feels the interactions between the slave manipulator and the environment: this is kinesthetic feedback. To obtain good performance, the manipulators need to be mechanically reversible (small inertia and friction terms), and the bilateral link has to be rigid. To increase the distance between the master and slave workspaces, electrical teleoperators have been developed. They include two manipulators and a bilateral link, which consists of electrical actuators and a controller. Control laws implemented on the control computer achieve the kinesthetic feedback. In the sequel we study the design of control laws for electrical teleoperators.
17.2.2 Background on network theory
Telemanipulation systems can be seen as subsystems interacting dynamically. These composite systems are intrinsically structured, and network theory appears to be a convenient tool to describe teleoperator properties [408]. Although widely known as an analysis tool for electrical systems [13], its use for mechanical and robotic systems remains limited. In this section we review basic concepts of network theory extended to the analysis of robotic systems.

Networks. This representation is based on a particular choice of physical variables: the force f and the flow v are chosen such that the quantity P = fv represents a power. Basic mechanical systems can be described by the elementary LTI 2-port networks shown in Figure 17.2.
Figure 17.2: 2-port network.

Here fi, fo are the input and output force vectors, respectively, and vi, vo are the input and output speed vectors. Examples of basic networks are:
- an inertia and a viscous friction in series; see Figure 17.3(a);
- a stiffness connected in parallel with a viscous friction; see Figure 17.3(b);
- a geared drive with gear ratio n; see Figure 17.3(c).
Figure 17.3: Basic networks.

These networks are governed, respectively, by the following equations:
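The displayed elementary relations did not survive extraction. A plausible reconstruction, using the symbols M (inertia), B (friction), K (stiffness), and n (gear ratio) of Figure 17.3 and the force/flow conventions of this section, is:

```latex
\begin{aligned}
\text{(a)}\quad & f_i - f_o = (Ms + B)\,v_i, && v_i = v_o,\\
\text{(b)}\quad & f_i = f_o = \Big(\tfrac{K}{s} + B\Big)(v_i - v_o),\\
\text{(c)}\quad & f_o = n\,f_i, && v_i = n\,v_o.
\end{aligned}
```

These are the standard series inertia-friction, parallel stiffness-friction, and ideal geared-drive relations; the exact forms printed in the original may differ in sign conventions.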
Folcher and Andriot
The matrices Y(s), Z(s), and H(s) correspond to particular choices of input and output variables. They are called, respectively, the admittance matrix, the impedance matrix, and the hybrid matrix. Introducing the input wave vector a = [ai, ao]^T = [fi − vi, fo − vo]^T and the reflected wave vector b = [bi, bo]^T = [fi + vi, fo + vo]^T, the network equation becomes
b = S(s)a, where S(s) is called the scattering matrix. The matrices Y(s), Z(s), H(s), and S(s) are interrelated: for instance, Z(s) = Y(s)^-1 (when Y(s) is invertible) and S(s) = (Y(s) − I)(Y(s) + I)^-1.
Passivity. A multiport network is said to be passive if and only if, for any motion v and effort f satisfying the network equations, the inequality

E(T) = ∫_0^T f(t)^T v(t) dt ≥ 0

is ensured for all T ≥ 0; see [339]. E(T) is the energy provided to the network, so a passive network can only dissipate energy. Passive LTI networks possess a positive real admittance matrix Y(s), or equivalently a bounded real scattering matrix S(s). That is, for Re s > 0, the entries of Y(s) and S(s) are analytic, and Y(s)* + Y(s) ≥ 0 and I − S(s)*S(s) ≥ 0; see [13].
17.2.3 Toward an ideal scaled teleoperator
We consider that geometric similarity is enforced for the telemanipulation system. The control problem is broken down into independent problems for each pair of coupled joints, whose behavior is modeled as networks; see Figure 17.4.
Figure 17.4: Single-joint teleoperation network model.

No, Nm, Nb, Ns, and Ne are the networks related to, respectively, the operator, the master manipulator, the bilateral link, the slave manipulator, and the environment. fm, vm are the effort and the velocity of the master joint; fs, vs are the effort and the velocity of the slave joint; vlm, vls are the velocities seen by the bilateral link; and flm, fls are auxiliary forces (when the manipulator has no local control law, they correspond to the effector forces).

Teleoperator transparency. The goal is to obtain constant velocity and effort ratios between the master joint and the slave joint at all frequencies: this is transparency.
The associated ideal network will be described by the hybrid representation
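The displayed hybrid representation was lost in extraction; a plausible form, standard for ideal transparency (with master port variables fm, vm and slave port variables fs, vs as in Figure 17.4), is:

```latex
\begin{bmatrix} f_m \\ -v_s \end{bmatrix}
=
\begin{bmatrix} 0 & n_f \\ -n_v & 0 \end{bmatrix}
\begin{bmatrix} v_m \\ f_s \end{bmatrix},
```

so that f_m = n_f f_s and v_s = n_v v_m. This is a sketch consistent with the text below, not necessarily the chapter's exact sign convention.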
where nf and nv are the force ratio and the velocity ratio. For micromanipulation the fixed ratios satisfy nf, nv > 1, and for large-scale manipulation nf, nv < 1. When nf = nv = 1, the operator is virtually connected to the environment network Ne.

Teleoperation system stability. The teleoperation system, represented in the network framework in Figure 17.5, has to be stable. Modeling the environment (network Ne) and the operator (network No) accurately in a simple way is difficult. In [81], Colgate proposed a stability test based on passivity, which we recall.
Figure 17.5: Networks interconnection.

We suppose that No, modeling the operator, and Ne, modeling the environment, are passive networks; that is, for all T ≥ 0,
The interconnected system will be stable if and only if there exists a scalar β > 0 such that
where Y(s) is the admittance matrix of the teleoperator network Nt.

Structured network synthesis.
Transparency of the telemanipulator is ensured when
- the bilateral link is rigid;
- the master and the slave joints are backdrivable.

To meet the first specification, the network Nb, which represents the bilateral controller, is designed as a stiffness Kb connected in parallel with a viscous friction Bb; see Figure 17.3(b). This network can be interpreted as a proportional integral speed control law. The second requirement is achieved by considering force control strategies (see [197, 417, 418]), which ensure transparency properties for the networks Nm, Ns. In the next section we present a multiobjective robust control design approach to satisfy the teleoperation control specifications and take into account additional time-domain properties useful for coping with measurement noise and actuator saturation.
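The PI interpretation can be made explicit. Assuming the network of Figure 17.3(b) acts on the velocity error seen by the bilateral link (symbols Kb, Bb as above; fb is our name for the transmitted effort, introduced only for this sketch):

```latex
f_b(s) = \Big(\frac{K_b}{s} + B_b\Big)\big(v_{lm}(s) - v_{ls}(s)\big),
```

i.e., a proportional action Bb and an integral action Kb/s on the speed error, which is precisely a PI speed controller.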
17.3 Multiobjective robust control

17.3.1 Problem statement
We consider the system defined as follows:
where x ∈ R^n is the state, u ∈ R^nu is the command input, w ∈ R^nw is the exogenous input, z ∈ R^nz is the regulated output, and y ∈ R^ny is the measured output. We assume that Dqw, Dzp, and Dzw are zero. This system can be seen as a linear time-invariant system connected to m operators Δi, each assumed {Ui, Vi, Wi}-dissipative,
where Ui ≤ 0 and Wi ≥ 0 are given real matrices, together with Vi. For instance, the {−I, 0, σ²I}-dissipativity property expresses that the operator Δi has an L2 gain less than or equal to σ. Note that the composite operator Δ = diag(Δ1, ..., Δm) is also {SU, SV, SW}-dissipative, with SU = diag(S1U1, ..., SmUm), SV = diag(S1V1, ..., SmVm), and SW = diag(S1W1, ..., SmWm) for any scaling S = diag(S1, ..., Sm) with scalars Si > 0.
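For reference, the {U, V, W}-dissipativity used throughout is an integral quadratic constraint on the operator's input-output pair p = Δ(q). The following restatement is our sketch (the chapter's exact display was lost), chosen so that {−I, 0, σ²I} indeed gives an L2 gain bound σ:

```latex
\int_0^T \big( p(t)^T U\, p(t) + 2\, p(t)^T V\, q(t) + q(t)^T W\, q(t) \big)\, dt \;\ge\; 0
\qquad \text{for all } T \ge 0,\quad p = \Delta(q).
```

With U = −I, V = 0, W = σ²I this reads ∫(σ²‖q‖² − ‖p‖²) dt ≥ 0, i.e., an L2 gain at most σ.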
17.3.2 Control objectives
Our purpose is to find a strictly proper LTI controller of the form
where x_c ∈ R^n is the controller state and A, B, C are constant matrices of appropriate size, such that the closed-loop system (with state x̄ = [x^T x_c^T]^T) defined by
with Δ {SU, SV, SW}-dissipative, achieves simultaneously a number of desired properties.
Figure 17.6: Closed-loop system.

We partition the vectors w and z as w = [w1^T w2^T]^T, z = [z1^T z2^T]^T, and we define our control objectives as follows.

O.1 Robust stability. For every {U, V, W}-dissipative Δ, the closed-loop system (17.7) with w = 0 is stable.

O.2 Robust performance. For every {U, V, W}-dissipative Δ and every t ≥ 0, the responses z1 of the closed-loop system (17.7) to zero initial conditions and a unit impulse in the ith coordinate of w1 satisfy
where γ > 0 is a prespecified number. (This control objective can be interpreted as a "robust H2 norm" condition on the closed-loop system.)

O.3 Robust output peak bound. For every {U, V, W}-dissipative Δ, the responses z2 of the closed-loop system (17.7) to zero initial conditions and a unit impulse in the ith coordinate of w2, the other inputs being zero, must satisfy

where zmax is a given positive number.
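The two displayed inequalities were lost in extraction. Plausible forms, consistent with the "robust H2" and peak-bound interpretations above (γ and z_max as just defined), are:

```latex
\text{(O.2)}\quad \int_0^{t} z_1(\tau)^T z_1(\tau)\, d\tau \;\le\; \gamma^2
\qquad\text{and}\qquad
\text{(O.3)}\quad z_2(t)^T z_2(t) \;\le\; z_{\max}^2 \quad \forall\, t \ge 0 .
```

These are sketches of the standard conditions, not the chapter's exact displays.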
O.4 Robust input peak bound. For every {U, V, W}-dissipative Δ, for the trajectories defined in O.3, the resulting command input satisfies, for some prespecified number umax,
17.3.3 Conditions for robust synthesis

The analysis results are based on quadratic functions V(x̄) = x̄^T P x̄, and they lead to matrix inequalities involving the state-space description of the closed-loop system (17.7). This approach is similar to the one developed in [123], where the system included a time-varying norm-bounded gain that can be interpreted as pointwise quadratic constraints on fictitious signals p and q. Here the constraints on Δ involve integral quadratic constraints on the signals p and q, and the matrix inequalities we obtain are closely related to those in [123]. However, we cannot interpret V as a quadratic Lyapunov function in the conventional sense, because V decreases to zero but not necessarily monotonically. We assume that there exist a symmetric, positive-definite matrix P ∈ R^{2n×2n} and an S given by (17.5) such that, with w = 0,
Integrating both sides of this relation from 0 to T, we obtain
As Δ is {SU, SV, SW}-dissipative, the last right-hand-side term is nonpositive, and the above relation implies that the ellipsoid E_P = {x̄ : x̄^T P x̄ ≤ x̄(0)^T P x̄(0)} is invariant. Thus, the system (17.7) satisfies control objective O.1. Condition (17.9) holds if and only if
In addition, condition (17.11) ensures that the output energy in z1 in response to an impulse input is bounded above by x̄(0)^T P x̄(0). To be precise, a trajectory such as considered in O.2 satisfies
Condition (17.11) together with
guarantees that control objectives O.1-O.2 hold. Condition (17.11) also implies that for every v > 0, the ellipsoid Sp v = {x \ xTPx < v} is invariant [64]. That is, for every trajectory with zero input w and af(0) € £pv,
we have x̄(t) ∈ E_{P,ν} for every t ≥ 0. Applying this to a trajectory in response to the initial condition x̄(0) = B_{w2} e_i, we see that the condition x̄(0) ∈ E_{P,ν},
implies that x̄(t) ∈ E_{P,ν} for every t ≥ 0. If, in addition,
then the bound (17.8) holds. We conclude that the condition
together with (17.11) and (17.13), implies that control objectives O.1 and O.3 hold. Likewise, the input peak bound can be enforced by considering u as an output u = C_u x̄, where C_u = [0 C]. We obtain that the additional condition
with (17.11) and (17.13), enforces O.1 and O.4. The above results are summarized in the following theorem.

Theorem 17.1. If there exists a symmetric, positive-definite matrix P ∈ R^{2n×2n} such that conditions (17.11), (17.12), (17.13), (17.14), and (17.15) hold, then control objectives O.1-O.4 hold for the closed-loop system (17.7).

We now present a useful lemma needed to establish inequality conditions depending on the data matrices (A, Bp, ...) of system (17.1).

Lemma 17.1 (completion [312]). Let P, Q ∈ R^{n×n} be two positive-definite matrices. There exists a symmetric positive-definite matrix P̄ ∈ R^{2n×2n} such that the upper-left n×n block of P̄ is P, and that of P̄^{-1} is Q, if and only if Q > 0 and P ≥ Q^{-1}. For every Q > 0, P ≥ Q^{-1}, the set of matrices P̄ and their inverses Q̄ satisfying the conditions above are parameterized by
where M ∈ R^{n×n} is an arbitrary invertible matrix and N = (I − QP)M^{-T}. Without loss of generality, we may assume that P > Q^{-1} (this restricts our search to full-order controllers only; see [148]). We can define Y, Z as follows:
and we obtain that inequality (17.12) is equivalent to the existence of a symmetric matrix X such that
Similarly, condition (17.13) is equivalent to the existence of a scalar ν such that

Condition (17.14) is equivalent to the existence of scalars τ, ν such that
Finally, the control input peak condition (17.15) can be written as
or
17.3.4 Reconstruction of controller parameters

We seek A, B, C such that inequality (17.11) holds, using the approach in [359], based on the following lemma.

Lemma 17.2. Let the matrices G, Ḡ ∈ R^{2n×2n} and T ∈ R^{2n×2n} be given,

where R ∈ R^{n×n} is invertible. When the conditions
hold, then G < 0. Using the Schur complement, inequality (17.11) is ensured if
and
hold. Without loss of generality, we choose R = M with M² = P − Q^{-1} (the choice of M has no effect on the controller's transfer function, since it corresponds to a change of coordinates in the controller state, x_c → Mx_c). Condition Ḡ11 < 0 of Lemma 17.2 becomes

Note that inequality (17.22) is ensured by the (2,2) block of the left-hand-side term of inequality (17.24). Condition G11 < 0 of Lemma 17.2 can be written as

Condition G12 = −G11 R^{-T} is verified, for given P, Q, Y, Z, S, by fixing A as
We obtain the following result.

Theorem 17.2. If there exist P, Q, Y, Z, S, T, τ, ν that satisfy constraints (17.18), (17.19), (17.20), (17.21), (17.24), and (17.25), then there exists a controller of the form (17.6), with A given by (17.26), B = M^{-1}Z, and C = −YQ^{-1}M^{-1}, where M² = P − Q^{-1}, such that the closed-loop system (17.7) satisfies O.1-O.4.
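Lemma 17.1 and the reconstruction formulas can be sanity-checked numerically. The sketch below is our own illustration (not the chapter's toolbox): it builds a positive-definite P̄ whose upper-left block is P and whose inverse has upper-left block Q, using the lower-right block X = M^T (P − Q^{-1})^{-1} M implied by block inversion. P, Q, M are arbitrary test data satisfying P > Q^{-1} > 0.

```python
import numpy as np

# Numerical check of the completion lemma (Lemma 17.1):
# given P > Q^{-1} > 0 and an invertible M, the 2n x 2n matrix
#   Pbar = [[P, M], [M^T, X]],  with X = M^T (P - Q^{-1})^{-1} M,
# is positive definite, has upper-left block P, and its inverse
# has upper-left block Q.  (Illustrative data, not from the chapter.)
n = 2
P = np.eye(n)                      # P = I
Q = 2.0 * np.eye(n)                # Q^{-1} = 0.5 I, so P > Q^{-1}
M = np.array([[1.0, 0.3],
              [0.2, 1.0]])         # arbitrary invertible matrix

X = M.T @ np.linalg.inv(P - np.linalg.inv(Q)) @ M
Pbar = np.block([[P, M], [M.T, X]])

assert np.all(np.linalg.eigvalsh(Pbar) > 0)   # Pbar is positive definite
Qcheck = np.linalg.inv(Pbar)[:n, :n]          # upper-left block of Pbar^{-1}
print(np.allclose(Qcheck, Q))                 # True
```

The check follows from the block-inversion identity: the upper-left block of Pbar^{-1} is (P − M X^{-1} M^T)^{-1}, which the choice of X forces to equal Q.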
17.3.5 Checking the synthesis conditions
The conditions of Theorem 17.2 are not convex, due to the coupling conditions ST = I and τν = 1. However, they become simple LMI conditions once S and T are fixed. We may use the cone complementarity linearization algorithm proposed in [128]. Consider the LMIs

[S I; I T] ≥ 0,   [τ 1; 1 ν] ≥ 0,   (17.27)

which relax the nonconvex equalities ST = I and τν = 1.
The following heuristic is guaranteed to converge to a stationary point of the objective (see [128]).

1. Find P, Q, Y, Z, S0, T0, τ0, ν0 that satisfy constraints (17.18), (17.19), (17.20), (17.21), (17.24), and (17.25), where the nonconvex equalities ST = I, τν = 1 are replaced by the LMIs (17.27). If the problem is infeasible, stop. Otherwise, set k = 1.
2. Find Sk, Tk, τk, νk that solve the LMI problem of minimizing Tr(T_{k-1}S + S_{k-1}T) + ν_{k-1}τ + τ_{k-1}ν subject to the LMI constraints of step 1.
3. If the objective Tr(T_{k-1}S + S_{k-1}T) + ν_{k-1}τ + τ_{k-1}ν has reached a stationary point, stop. Otherwise, set k = k + 1 and go to step 2.
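The linearization step can be illustrated on a toy scalar instance (our own sketch, not the chapter's toolbox; all names are ours). Here the relaxation [s 1; 1 t] ⪰ 0 reduces to st ≥ 1 with s, t > 0, and the linearized subproblem min t_{k-1}s + s_{k-1}t subject to st ≥ 1 has the closed-form minimizer s = sqrt(s_{k-1}/t_{k-1}), t = 1/s, so no LMI solver is needed to see the iteration converge to the complementarity condition st = 1.

```python
import math

def ccl_scalar(s, t, iters=20, tol=1e-12):
    """Cone complementarity linearization on a scalar toy problem.

    Nonconvex constraint: s * t == 1, relaxed to s * t >= 1 with s, t > 0.
    Each step minimizes the linearized objective t_prev*s + s_prev*t
    subject to s*t >= 1; for scalars the minimizer is available in
    closed form, so no LMI solver is needed for this illustration.
    """
    for _ in range(iters):
        obj_prev = t * s + s * t      # objective value at the current point
        s_new = math.sqrt(s / t)      # argmin of t*s' + s*t' subject to s'*t' >= 1
        t_new = 1.0 / s_new
        obj = t * s_new + s * t_new   # linearized objective at the new point
        s, t = s_new, t_new
        if abs(obj - obj_prev) < tol: # objective stationary -> stop
            break
    return s, t

s, t = ccl_scalar(5.0, 3.0)
print(s * t)   # ~ 1.0 (complementarity attained)
```

In the matrix case each step is an SDP, but the structure is identical: the bilinear coupling is replaced by its linearization around the previous iterate.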
17.4 Force control of manipulator RD500
Ensuring backdrivability is the main objective of the control of the slave manipulator RD500 of the CEA. This control problem is addressed by reducing the manipulator impedance (inertia and friction terms). For instance, the behavior of a rigid joint modeled by an inertia M is described by
where ve, fv, fe, fu are the manipulator speed, the friction effort, the exogenous effort, and the control effort, respectively. When this joint is controlled by the force feedback law, where g > 0 is a constant gain, the closed-loop behavior is governed by
The resulting inertia and friction terms of the closed-loop system are divided by the factor g + 1. This means that the force feedback law is suitable for imposing the backdrivability property on a rigid joint. This feature of the force control law also holds for a flexible joint, which is the case we encounter in the sequel of this section.
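The division by g + 1 can be checked directly. Assuming the rigid-joint model M dve/dt = fe + fu − fv with viscous friction fv = B ve and the feedback fu = g fe (our notation for the elided displays; a sketch, not the chapter's exact equations):

```latex
M\dot v_e = (1+g)\,f_e - B\,v_e
\;\Longleftrightarrow\;
\frac{M}{1+g}\,\dot v_e = f_e - \frac{B}{1+g}\,v_e ,
```

so the apparent inertia and friction seen from the environment port are M/(1+g) and B/(1+g).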
17.4.1 Single-joint slave manipulator model
The single-joint slave teleoperator that we consider is shown in Figure 17.7. This manipulator is composed of a motor, a geared drive, a torque sensor, and a joint load. The motor is a current-controlled torque source associated with an inertia Mm and a viscous friction Bm. It delivers an effort fm, and its angular speed is vm. The geared drive with ratio n has a rotational compliance and friction, modeled by an inertia Md, a stiffness Kd, and a viscous friction Bd. A torque sensor delivers the measured effort fs and introduces a vibrational mode, modeled by a stiffness Ks. The joint load is defined by an inertia Ml and a viscous friction Bl. fe and ve are the effort applied by the environment and the angular speed of the joint, respectively.¹⁶ A convenient representation of this model based on network theory is shown in Figure 17.8.

¹⁶The effort fe and the flow ve, respectively, equal the bilateral force fs and the bilateral speed −vs defined in the single-joint teleoperation network model; see Figure 17.4.
Figure 17.7: Single-joint robot mechanical model.
Figure 17.8: Single-joint robot network model.
This network is built with basic subnetworks (an inertia and a viscous friction in series, a stiffness, a geared drive) governed by the elementary relations given in section 17.2.2. By combining these relations, we obtain the state-space equation

where the states are the motor speed vm, the geared drive speed vd, the joint load speed ve, the effort in the geared drive fd, and the effort in the torque sensor fs. The signals fe, da, and fm are, respectively, the effort applied by the environment, a motor speed disturbance, and the torque delivered by the motor. The sensed signal fy = fs + ns represents the measure of the torque sensor effort fs corrupted by an additive noise ns. The reference
effort¹⁷ fr and the sensed signal fy are processed by the force controller Kf to produce the motor torque signal fm such that

The ratio 1/n allows us to scale the controller Kf output as an effort in the joint coordinate. The resulting closed-loop system forms a one-port network (with force variable fe and flow ve) that we call the closed-loop network in the sequel. Parameter values for the RD500 of the CEA are given in Table 17.1.

Table 17.1: RD500 parameter values.

Symbol | Value | Unit
Motor inertia Mm | 3.1 × 10^-3 | kg m^2
Motor friction Bm | 9.4 × 10^-3 | N m s rad^-1
Geared drive ratio n | 275.3947 | -
Geared drive stiffness Kd | 64077 | N m rad^-1
Geared drive inertia Md | 2.152 | kg m^2
Geared drive friction Bd | 0 | N m s rad^-1
Torque sensor stiffness Ks | 12146 | N m rad^-1
Joint load inertia Ml | 10.98 | kg m^2
Joint load friction Bl | 25.798 | N m s rad^-1

17.4.2 Controller design
The slave manipulator is in contact with a wide range of environments, represented in Figure 17.8 by the network Ne. A first approach is to model the environment as a stiffness; that is, we obtain the additional relation

Two extremal configurations are considered:
- When the slave manipulator is free (no contact with the environment), fe is negligible and the value of Ke is zero.
- When the slave manipulator is locked, ve is negligible and the value of Ke is large.

The free configuration is associated with tasks where only the bilateral (position) control is active. The locked configuration corresponds to a high-impedance environment: the use of Thévenin's theorem shows that the system gain is high. For a given force controller, unstable behavior may occur in this configuration. We "incorporate" the environment stiffness in the system by defining

and also a new interaction port with effort f* and flow ve for this "augmented" closed-loop network. The following control specifications are required for the locked configuration:

¹⁷The reference effort fr equals the bilateral force fls (generated by the bilateral controller), which is defined in the single-joint teleoperation network model; see Figure 17.4.
(i) The "augmented" robot in closed loop with the force controller is stable when connected to any passive environment.
(ii) The step response has no overshoot (no damage to the environment), and the settling time is minimized.
(iii) The magnitude of the control input is reasonable (the maximum torque delivered by the motor is limited to 5 N m).
(iv) In the absence of a tracking input, the control input is insensitive to the sensor noise (motor lifetime).

Specification (i) is ensured when the "augmented" closed-loop network is strictly passive; see [82]. Using Tellegen's theorem as given in [14], it is easy to prove that the closed-loop network is then strictly passive as well (because the stiffness Ke is lossless). Therefore, the robot in closed loop with the force controller is stable when connected to any passive environment in the free configuration too. Specification (i) is related to the stability problem for a linear time-invariant system (the "augmented" closed-loop network) connected to a dissipative operator f* = Δ(ve) as defined in section 17.3, with Δ a {0, −I, 0}-dissipative operator. This closed-loop system is represented in Figure 17.9. The control input signal u is the scaled motor torque in the joint coordinate, that is, u = n fm, and the measured output is fr − fs. Note the similarity between Figure 17.6 in section 17.3 and Figure 17.9. Therefore, we can impose specifications (i)-(iv) using the multiobjective design approach of section 17.3.
Figure 17.9: Closed-loop system.

Specification (i) is directly control objective O.1. The tracking specification (ii) is expressed as a regulation problem (we consider fr = 0) with control objective O.3. The regulated output is z2 = fs, and the perturbation input is w2 = da. Low values of the design parameter zmax ensure good tracking performance. The actuator saturation
specification (iii) is directly taken into account by control objective O.4 with u = fm. The parameter umax reflects in the design the bound on the control input peak. Specification (iv) is reflected in control objective O.2 with z1 = u and w1 = ns. The design parameter γ adjusts the level of the control noise power induced by the sensor noise ns. The open-loop system G is described by (17.1) in section 17.3, where
stands for

with the state x = [vm, vd, ve, fd, fs, fe]^T. In the sequel of this chapter, the value of Ke is taken as Ke = 100000 N m rad^-1.

17.4.3 Results
We choose the design specifications: The LMIs were solved using the code SP [402] and its Matlab interface LMITOOL [126]. After 31 iterations of the heuristic presented in section 17.3.5, we obtain a stationary point. We compute a state-space representation of the multiobjective controller Kf using the formulas in Theorem 17.2. The controller has then been reduced to obtain a 3-state controller with negligible degradation of the open-loop frequency response in the bandwidth. We obtain the following transfer function Kf(s):
Proportional integral controller. For performance comparison purposes we consider a PI force controller Kf^PI(s) derived from [299],
which guarantees specification (i) for a single compliant joint under the assumption Bd = 0. This result applies to the single-joint robot represented in Figure 17.8, where the torque sensor stiffness, the joint load inertia Ml, and the viscous friction Bl are incorporated in the environment. According to Tellegen's theorem, it can be shown that this PI controller ensures specification (i) for the slave manipulator RD500. The Bode diagrams of the controller transfer functions are plotted in Figure 17.10, in solid line for Kf(s) and in dashed line for Kf^PI(s).
Figure 17.10: Kf(s) and Kf^PI(s) Bode plots.
Figure 17.11 shows the Bode plots of GKf(s) in solid line and GKf^PI(s) in dashed line. The multiobjective force controller Kf enjoys a 13.83 dB gain margin, a 78.69 degree phase margin, and a 1.71 rad/s bandwidth, which are reasonable values. In Figure 17.12, we plot the Nyquist diagram of the closed-loop admittance Ye(s) = ve(s)/fe(s) to check its strict positivity: for the two controllers, the closed-loop networks are passive and specification (i) is ensured. The tracking performances ensured by the force controllers Kf and Kf^PI are evaluated in Figures 17.13(a) and 17.13(b), respectively, which show the closed-loop responses to a 1000 N m reference step fr. The noise-free sensed outputs fs are plotted in solid line, and the normalized controller outputs in the joint coordinate, n fm, in dashed line. For the two controllers, no overshoot appears in the time response; that is, specification (ii) is ensured. Specification (iii) in the joint coordinate leads to a 1377 N m saturation level, and the normalized controller output peaks do not exceed this level. Comparing the transient responses plotted in Figures 17.13(a) and 17.13(b), note that damped oscillations appear in 17.13(b) and that its transient is slower. This is confirmed by the 5% settling times, which are 1.4 s and 2.9 s, respectively.
Figure 17.11: GKf(s) and GKf^PI(s) Bode plots.
Figure 17.12: Closed-loop admittance Ye(s) Nyquist plots.
17.5 Conclusion and perspectives
In this chapter, we present the design of a controller for robot telemanipulators based on a multiobjective approach involving LMI optimization problems. Telemanipulation control objectives (stability and tracking performance) are usually expressed in terms of network-based properties. The proposed approach, which considers the control of an LTI
system subject to a dissipative perturbation, can be related to the traditional network-based approach founded on passivity theory. Furthermore, the approach incorporates many practical specifications, such as H2 norm bounds or time-domain bounds (output and command input peaks), in the design. In our opinion, this approach directly uses teleoperation engineering expertise and allows us to analyze trade-offs between the different performance and stability objectives. Finally, we have applied the multiobjective approach to the force controller design of a single joint of the slave teleoperator RD500. The achieved performance has been evaluated against that of a previously designed PI controller. The multiobjective control law gives satisfactory results. Future work will include the master manipulator force controller and the bilateral controller designs to complete the structured network-based design. To keep a clear physical interpretation of the telemanipulation controller design, a multiobjective decentralized control approach is under study.

Figure 17.13: Time responses to a 1000 N m reference step fr.
Bibliography

[1] R. J. Adams, P. Apkarian, and J. P. Chretien, Robust control approaches for a two-link flexible manipulator, in 3rd International Conference on Dynamics and Control of Structures in Space, Cranfield, UK, May 1996, pp. 101-116.
[2] I. Adler and F. Alizadeh, Primal-Dual Interior Point Algorithms for Convex Quadratically Constrained and Semidefinite Optimization Problems, Technical Report RRR 46-95, RUTCOR, Rutgers University, New Brunswick, NJ, December 1995.
[3] M. Ait Rami and L. El Ghaoui, LMI optimization for nonstandard Riccati equations arising in stochastic control, IEEE Trans. Automat. Control, 41:1666-1670, 1996.
[4] D. Alazard and J. P. Chretien, The impact of local masses and inertias on the dynamic modelling of flexible manipulators, in International Symposium on Artificial Intelligence, Robotics and Automation in Space, Toulouse, France, 1992.
[5] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM J. Optim., 5:13-51, 1995.
[6] F. Alizadeh, J.-P. A. Haeberly, M. V. Nayakkankuppam, M. L. Overton, and S. Schmieta, SDPpack User Guide (Version 0.9 Beta), Technical Report 737, Courant Institute of Mathematical Sciences, New York, June 1997. Available at http://www.cs.nyu.edu/faculty/overton/sdppack.
[7] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton, Complementarity and nondegeneracy in semidefinite programming, Math. Programming (Ser. B), 77:111-128, 1997.
[8] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton, Primal-dual interior-point methods for semidefinite programming: Convergence rates, stability and numerical results, SIAM J. Optim., 8:746-768, 1998.
[9] K. D. Andersen, An efficient Newton barrier method for minimizing a sum of Euclidean norms, SIAM J. Optim., 6:74-95, 1996.
[10] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ, 1979.
[11] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods, Prentice-Hall, Englewood Cliffs, NJ, 1990.
[12] B. D. O. Anderson, S. Dasgupta, P. P. Khargonekar, F. J. Kraus, and M. Mansour, Robust strict positive realness: Characterization and construction, IEEE Trans. Circuits Systems, 37:869-876, 1990.
[13] B. D. O. Anderson and S. Vongpanitlerd, Network Analysis and Synthesis—A Modern Systems Theory Approach, Prentice-Hall, Englewood Cliffs, NJ, 1973.
[14] R. J. Anderson, Dynamic damping control: Implementation issues and simulation results, in Proc. IEEE International Conference on Robotics and Automation, 1990, pp. 68-77.
[15] R. J. Anderson and M. W. Spong, Bilateral control of teleoperators with time delay, in Proc. IEEE International Conference on Decision and Control, Austin, TX, 1989.
[16] C. Andriot, J. Dudragne, R. Fournier, and J. Vuillemey, On the bilateral control of teleoperators with flexible joints by the H∞ approach, in Proc. SPIE Conference on Teleoperation and Control, Boston, 1992, pp. 80-91.
[17] P. Apkarian, On the discretization of LMI-synthesized linear parameter-varying controllers, Automatica, 33:655-661, 1997.
[18] P. Apkarian and R. Adams, Advanced gain-scheduling techniques for uncertain systems, in Proc. American Control Conference, June 1997.
[19] P. Apkarian and P. Gahinet, A convex characterization of gain-scheduled H∞ controllers, IEEE Trans. Automat. Control, AC-40:853-864, 1995.
[20] P. Apkarian, P. Gahinet, and G. Becker, Self-scheduled H∞ control of linear parameter-varying systems, in Proc. American Control Conference, 1994, pp. 856-860.
[21] P. Apkarian, P. Gahinet, and G. Becker, Self-scheduled H∞ control of linear parameter-varying systems: A design example, Automatica, 31:1251-1261, 1995.
[22] P. Apkarian and H. D. Tuan, Parameterized LMIs in control theory, SIAM J. Control Optim., to appear.
[23] K. J. Astrom and T. Hagglund, PID Controllers: Theory, Design, and Tuning, Instrument Society of America, Research Triangle Park, NC, 1995.
[24] V. I. Arnold, On matrices depending on parameters, Russian Math. Surveys, 26:29-43, 1971.
[25] W. F. Arnold and A. J. Laub, Generalized eigenproblem algorithms and software for algebraic Riccati equations, Proc. IEEE, 72:1746-1754, 1984.
[26] T. Asai, S. Hara, and T. Iwasaki, Simultaneous modeling and synthesis for robust control by LFT scaling, in IFAC World Congress, G:309-314, 1996.
[27] D. D. Bainov and P. S. Simeonov, Systems with Impulse Effect: Stability, Theory and Applications, Wiley, New York, 1989.
[28] V. Balakrishnan, Linear matrix inequalities in robustness analysis with multipliers, Systems Control Lett., 25:265-272, 1995.
[29] V. Balakrishnan, Y. Huang, A. Packard, and J. C. Doyle, Linear matrix inequalities in analysis with multipliers, in Proc. American Control Conference, Baltimore, MD, 1994, pp. 1228-1232.
[30] V. Balakrishnan and A. Tits, Numerical optimization-based design, in The Control Handbook, W. Levine, ed., CRC Press, Boca Raton, FL, 1995, pp. 749-758.
[31] G. J. Balas, J. C. Doyle, K. Glover, A. Packard, and R. Smith, μ-Analysis and Synthesis Toolbox (μ-Tools), MathWorks, Inc., Natick, MA, 1991.
[32] B. A. Bamieh and J. B. Pearson Jr., The H2 problem for sampled-data systems, Systems Control Lett., 19:1-12, 1992.
[33] G. Becker, Parameter-dependent control of an under-actuated mechanical system, in Proc. IEEE Conference on Decision and Control, Los Angeles, 1995.
[34] G. Becker, Additional results on parameter-dependent controllers for LPV systems, in IFAC World Congress, San Francisco, 1996.
[35] G. Becker and A. Packard, Robust performance of linear parametrically varying systems using parametrically-dependent linear feedback, Systems Control Lett., 23:205-215, 1994.
[36] G. Becker, A. Packard, D. Philbrick, and G. Balas, Control of parametrically-dependent linear systems: A single quadratic Lyapunov approach, in Proc. American Control Conference, San Francisco, 1993, pp. 2795-2799.
[37] R. Bellman and K. Fan, On systems of linear inequalities in Hermitian matrix variables, in Proc. Sympos. Pure Math., V. L. Klee, ed., 1963, pp. 1-11.
[38] S. Benson, Y. Ye, and X. Zhang, Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization, Technical Report, Department of Management Sciences, The University of Iowa, Iowa City, IA; revised October 13, 1997.
[39] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, Robust semidefinite programming, in Semidefinite Programming and Applications, H. Wolkowicz, R. Saigal, and L. Vandenberghe, eds., 1998.
[40] A. Ben-Tal and A. Nemirovski, Robust Convex Programming, Technical Report 1/95, Optimization Laboratory, Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology, Haifa, Israel, 1995.
[41] A. Ben-Tal and A. Nemirovski, Robust truss topology design via semidefinite programming, SIAM J. Optim., 7:991-1016, 1997.
[42] A. Ben-Tal and A. Nemirovski, Robust convex programming, IMA J. Numer. Anal., 1998.
[43] E. Beran, Induced Norm Control Toolbox (INCT)—Users Manual, 1995. Available at http://www.iau.dtu.dk/Staff/ebb/INCT.
[44] E. Beran and K. Grigoriadis, A combined alternating projection and semidefinite programming algorithm for low-order control design, in Proc. IFAC 96, San Francisco, Vol. C, July 1996, pp. 85-90.
[45] E. Beran, L. Vandenberghe, and S. Boyd, A global BMI algorithm based on the generalized Benders decomposition, in European Control Conference, Brussels, Belgium, July 1997.
[46] A. Berman, Cones, Matrices, and Mathematical Programming, Springer-Verlag, Berlin, New York, 1973.
[47] D. Bernstein and W. Haddad, LQG control with an H∞ performance constraint: A Riccati equation approach, in Proc. American Control Conference, 1988, pp. 796-802.
[48] D. S. Bernstein and W. H. Haddad, LQG control with an H∞ performance bound: A Riccati equation approach, IEEE Trans. Automat. Control, 34:293-305, 1989.
[49] D. Bernstein and W. Haddad, Robust stability and performance analysis for linear dynamic systems, IEEE Trans. Automat. Control, AC-34:751-758, 1989.
[50] D. S. Bernstein and W. H. Haddad, Robust stability and performance analysis for state-space systems via quadratic Lyapunov bounds, SIAM J. Matrix Anal. Appl., 11:239-271, 1990.
[51] J. Bernussou, P. L. D. Peres, and J. C. Geromel, A linear programming oriented procedure for quadratic stabilization of uncertain systems, Systems Control Lett., 13:65-72, 1989.
[52] V. Blondel and J. N. Tsitsiklis, NP-hardness of some linear control design problems, European J. Control, 1, 1995.
[53] V. Blondel and J. N. Tsitsiklis, A Survey of Computational Complexity Results in Systems and Control, Technical Report, LIDS, Massachusetts Institute of Technology, Cambridge, MA, October 1998.
[54] P. Bolzern, P. Colaneri, and G. De Nicolao, Optimal robust state estimation for uncertain linear systems, in Proc. IFAC Symposium on Robust Control Design, Rio de Janeiro, Brazil, 1994, pp. 152-161.
[55] J. F. Bonnans, R. Cominetti, and A. Shapiro, Perturbed Optimization under the Second Order Regularity Hypothesis, Application to Semidefinite Optimization, 1996, manuscript.
[56] N. K. Bose, Applied Multidimensional System Theory, Van Nostrand Reinhold, New York, 1982.
[57] C. Boussios and E. Feron, Estimating the conservatism of the Popov criterion, in Proc. AIAA GNC Conference, 1996.
[58] C. Boussios and E. Feron, Estimating the conservatism of Popov's criterion for real parametric uncertainties, Systems Control Lett., 31:173-183, 1997.
[59] S. P. Boyd, V. Balakrishnan, C. H. Barratt, N. M. Khraishi, X. Li, D. G. Meyer, and S. A. Norman, A new CAD method and associated architectures for linear controllers, IEEE Trans. Automat. Control, 33:268-283, 1988.
[60] S. Boyd, V. Balakrishnan, E. Feron, and L. El Ghaoui, Control system analysis and synthesis via linear matrix inequalities, in Proc. American Control Conference, Vol. 2, San Francisco, June 1993, pp. 2147-2154.
[61] S. Boyd and C. Barratt, Linear Controller Design: Limits of Performance, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[62] S. Boyd, C. Barratt, and S. Norman, Linear controller design: Limits of performance via convex optimization, Proc. IEEE, 78:529-574, 1990.
[63] S. Boyd and L. El Ghaoui, Method of centers for minimizing generalized eigenvalues, Linear Algebra Appl., 188:63-111, 1992.
[64] S. P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics 15, SIAM, Philadelphia, 1994.
[65] S. Boyd, N. Khraishi, C. Barratt, S. Norman, V. Balakrishnan, and D. Meyer, QDES Version 1.5, Electrical Engineering Dept., Information Systems Laboratory, Stanford University, Stanford, CA, 1990.
[66] S. P. Boyd and L. Vandenberghe, Introduction to Convex Optimization with Engineering Applications, Course Notes, Information Systems Library, Stanford University, Stanford, CA, 1997. Available at http://www.stanford.edu/class/ee364/.
[67] S. Boyd, L. Vandenberghe, and M. Grant, Efficient convex optimization for engineering design, in Proc. IFAC Symposium on Robust Control Design, September 1994, pp. 14-23.
[68] S. Boyd and S. Wu, SDPSOL: A Parser/Solver for Semidefinite Programming and Determinant Maximization Problems with Matrix Structure, User's Guide, Version Beta, Stanford University, Stanford, CA, 1996. Available at http://www.stanford.edu/~boyd/SDPSOL.html.
[69] S. Boyd and Q. Yang, Structured and simultaneous Lyapunov functions for system stability problems, Internat. J. Control, 49:2215-2240, 1989.
[70] J. P. Boyle and R. L. Dykstra, A method for finding projections onto the intersection of convex sets in Hilbert space, Lecture Notes in Statist., 37:28-47, 1986.
[71] R. D. Braatz, P. M. Young, J. C. Doyle, and M. Morari, Computational complexity of μ calculation, in Proc. American Control Conference, June 1993, pp. 1682-1683.
[72] R. Braatz, P. Young, J. Doyle, and M. Morari, Computational complexity of μ calculation, IEEE Trans. Automat. Control, 39:1000-1002, 1994.
[73] J. W. Brewer, Kronecker products and matrix calculus in system theory, IEEE Trans. Circuits Systems, CAS-25:772-781, 1978.
[74] N. Brixius, F. A. Potra, and R. Sheng, Solving Semidefinite Programs with Mathematica, Technical Report 97/1996, Department of Mathematics, University of Iowa, 1996.
[75] R. Bupp, D. Bernstein, and V. Coppola, A benchmark problem for nonlinear control design: Problem statement, experimental testbed, and passive nonlinear compensation, in Proc. IEEE American Control Conference, Seattle, WA, June 1995, pp. 4363-4367.
[76] W. Cheney and A. A. Goldstein, Proximity maps for convex sets, Proc. Amer. Math. Soc., 12:448-450, 1959.
[77] R. Y. Chiang and M. G. Safonov, Robust Control Toolbox, Ver. 2.0, MathWorks, Inc., Natick, MA, 1992.
[78] M. Chilali and P. Gahinet, H∞ design with pole placement constraints: An LMI approach, IEEE Trans. Automat. Control, 41:358-367, 1995.
[79] R. C. Chung and P. R. Belanger, Minimum-sensitivity filter for linear time-invariant stochastic systems with uncertain parameters, IEEE Trans. Automat. Control, AC-21:98-100, 1976.
[80] J. E. Colgate, Power and impedance scaling in bilateral teleoperation, in Proc. IEEE International Conference on Robotics and Automation, 1991, pp. 2292-2297.
[81] J. E. Colgate, Robust impedance shaping telemanipulation, IEEE Trans. Robotics Automation, 9:374-384, 1993.
[82] J. E. Colgate and N. Hogan, Robust control of dynamically interacting systems, Internat. J. Control, 48:65-88, 1988.
[83] E. G. Collins Jr., L. D. Davis, and R. Stephen, Homotopy algorithm for maximum entropy design, AIAA J. Guidance Control Dynamics, 17:311-321, 1994.
[84] P. L. Combettes and H. J. Trussell, Method of successive projections for finding a common point of sets in metric spaces, J. Optim. Theory Appl., 67:487-507, 1990.
[85] R. W. Cottle, J. S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.
[86] G. E. Coxson and C. L. DeMarco, Testing robust stability of general matrix polytopes is an NP-hard computation, in Proc. Annual Allerton Conference on Communication, Control and Computing, Allerton House, Monticello, IL, 1991, pp. 105-106.
[87] J. Cullum, W. E. Donath, and P. Wolfe, The minimization of certain nondifferentiable sums of eigenvalues of symmetric matrices, Math. Programming Study, 3:35-55, 1975.
[88] M. A. Dahleh and I. Diaz-Bobillo, Control of Uncertain Systems: A Linear Programming Approach, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[89] R. D'Andrea, Generalized L2 synthesis: A new framework for control design, in Proc. 35th IEEE Conference on Decision and Control, 1996.
[90] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963.
[91] J. A. D'Appolito and C. E. Hutchinson, Low sensitivity filters for state estimation in the presence of large parameter uncertainties, IEEE Trans. Automat. Control, AC-14:310-312, 1969.
[92] S. Dasgupta, G. Chockalingam, B. D. O. Anderson, and M. Fu, Lyapunov functions for uncertain systems with applications to the stability of time varying systems, IEEE Trans. Circuits Systems, 41:93-106, 1994.
[93] J. David, Algorithms and Design for Robust Controllers, Ph.D. thesis, Catholic University of Leuven, 1994.
[94] J. David and B. De Moor, The opposite of analytic centers for solving minimum rank problems in control and identification, in Proc. 32nd Conference on Decision and Control, 1993.
[95] C. Davis, W. Kahan, and H. Weinberger, Norm preserving dilations and their applications to optimal error bounds, SIAM J. Numer. Anal., 19:445-469, 1982.
[96] B. de Moor, Total least squares for affinely structured matrices and the noisy realization problem, IEEE Trans. Acoust., Speech, Signal Processing, 42:3104-3113, 1994.
[97] M. A. H. Dempster, Stochastic Programming, Academic Press, New York, 1980.
[98] C. A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties, Academic Press, New York, 1975.
[99] O. Didrit, L. Jaulin, and E. Walter, Guaranteed analysis and optimization of parametric systems, with application to their stability degree, European J. Control, 1997.
[100] O. Diekmann, S. A. van Gils, S. M. Verduyn Lunel, and H. O. Walther, Delay Equations. Functional, Complex, and Nonlinear Analysis, Appl. Math. Sci., Springer-Verlag, New York, 1995.
[101] J. Doyle, Guaranteed margins for LQG regulators, IEEE Trans. Automat. Control, AC-23:756-757, 1978.
[102] J. Doyle, Analysis of feedback systems with structured uncertainties, IEE Proc., 129-D:242-250, 1982.
[103] J. C. Doyle, Synthesis of robust controllers and filters with structured plant uncertainty, in Proc. IEEE Conference on Decision and Control, Albuquerque, NM, December 1983.
[104] J. C. Doyle, Structured uncertainty in control system design, in Proc. IEEE Conference on Decision and Control, December 1985, pp. 260-265.
[105] J. C. Doyle, Uncertainty and robustness out of control, Talk delivered at the Workshop on Robust Control, Siena, Italy, July 1998.
[106] J. Doyle, B. Francis, and A. Tannenbaum, Feedback Control Theory, Macmillan, New York, 1992.
[107] J. Doyle, K. Glover, P. Khargonekar, and B. Francis, State-space solutions to standard H2 and H∞ problems, IEEE Trans. Automat. Control, 34:831-847, 1989.
[108] J. Doyle, A. Packard, and K. Zhou, Review of LFT's, LMI's and μ, in Proc. IEEE Conference on Decision and Control, Vol. 2, Brighton, December 1991, pp. 1227-1232.
[109] J. C. Doyle and G. Stein, Multivariable feedback design: Concepts for a classical/modern synthesis, IEEE Trans. Automat. Control, AC-26:4-16, 1981.
[110] J. Doyle, K. Zhou, K. Glover, and B. Bodenheimer, Mixed H2 and H∞ performance objectives II: Optimal control, IEEE Trans. Automat. Control, 39:1575-1587, 1994.
[111] S. Dussy, An LMI Approach for Multiobjective Robust Control, Ph.D. thesis, University of Paris IX-Dauphine, Paris, March 1998.
[112] S. Dussy, LMI-based robust control of Lur'e systems, IASTED Control Comput. J., 26:9-12, 1998.
[113] S. Dussy and L. El Ghaoui, Multiobjective Robust Control Toolbox (MRCT): User's Guide, 1996. Available at http://www.ensta.fr/~gropco/staff/dussy/gocpage.html.
[114] S. Dussy and L. El Ghaoui, Multiobjective bounded control of uncertain nonlinear systems: An inverted pendulum example, in Control of Uncertain Systems with Bounded Inputs, Springer-Verlag, Berlin, New York, 1997, pp. 55-73.
[115] S. Dussy and L. El Ghaoui, Measurement-scheduled control for the RTAC problem, Internat. J. Robust Nonlinear Control, 8:377-400, 1998.
[116] S. Dussy and L. El Ghaoui, Multiobjective robust control toolbox for LMI-based control, in Proc. IFAC Symposium on Computer Aided Control Systems Design, Gent, Belgium, April 1997, pp. 353-358.
[117] S. Dussy and L. El Ghaoui, Multiobjective robust measurement-scheduling for discrete-time systems: An LMI approach, in Proc. IFAC Conference on Systems Structure and Control, Bucharest, Romania, October 1997, pp. 343-348.
[118] A. El Bouhtouri and A. J. Pritchard, Stability radii of linear systems with respect to stochastic perturbations, Systems Control Lett., 19:29-33, 1992.
[119] L. El Ghaoui, LMITOOL: An Interface to Solve LMI Problems, ENSTA, France, 1995.
[120] L. El Ghaoui, State-feedback control of systems with multiplicative noise via linear matrix inequalities, Systems Control Lett., 24:223-228, 1995.
[121] L. El Ghaoui and M. Aitrami, Robust state-feedback stabilization of jump linear systems, Internat. J. Robust Nonlinear Control, 6:1015-1022, 1996.
[122] L. El Ghaoui, F. Delebecque, and R. Nikoukhah, LMITOOL: A User-Friendly Interface for LMI Optimization. Available at ftp.ensta.fr in pub/elghaoui/lmitool, February 1995.
[123] L. El Ghaoui and J. P. Folcher, Multiobjective robust control of LTI systems subject to unstructured perturbations, Systems Control Lett., 28:23-30, 1996.
[124] L. El Ghaoui and P. Gahinet, Rank minimization under LMI constraints: A framework for output feedback problems, in Proc. European Control Conference, July 1993, pp. 1176-1179.
[125] L. El Ghaoui and H. Lebret, Robust solutions to least-squares problems with uncertain data, SIAM J. Matrix Anal. Appl., 18:1035-1064, 1997.
[126] L. El Ghaoui, R. Nikoukhah, and F. Delebecque, LMITOOL: A Front-End for LMI Optimization, User's Guide, 1995. Available via anonymous ftp at ftp.ensta.fr, under /pub/elghaoui/lmitool.
[127] L. El Ghaoui, F. Oustry, and H. Lebret, Robust solutions to uncertain semidefinite programs, SIAM J. Optim., 9:33-52, 1998.
[128] L. El Ghaoui, F. Oustry, and M. Ait Rami, A cone complementarity linearization algorithm for static output-feedback and related problems, IEEE Trans. Automat. Control, 42:1171-1176, 1997.
[129] L. El Ghaoui and G. Scorletti, Control of rational systems using linear-fractional representations and linear matrix inequalities, Automatica, 32:1273-1284, 1996.
[130] M. K. H. Fan, A quadratically convergent local algorithm on minimizing the largest eigenvalue of a symmetric matrix, Linear Algebra Appl., 188/189:231-253, 1996.
[131] M. K. H. Fan, A. L. Tits, and J. C. Doyle, Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics, IEEE Trans. Automat. Control, 36:25-38, 1991.
[132] L. Faybusovich, Linear systems in Jordan algebras and primal-dual interior-point algorithms, J. Comput. Appl. Math., 86:149-175, 1997.
[133] E. Feron, Linear Matrix Inequalities for the Problem of Absolute Stability of Control Systems, Ph.D. thesis, Stanford University, Stanford, CA, October 1993.
[134] E. Feron, Analysis of robust H2 performance using multiplier theory, SIAM J. Control Optim., 35:160-177, 1997.
[135] E. Feron, P. Apkarian, and P. Gahinet, Analysis and synthesis of robust control systems via parameter-dependent Lyapunov functions, IEEE Trans. Automat. Control, 41:1041-1046, 1996.
[136] E. Feron, V. Balakrishnan, S. Boyd, and L. El Ghaoui, Numerical methods for H2 related problems, in Proc. American Control Conference, Vol. 4, Chicago, June 1992, pp. 2921-2922.
[137] R. Fletcher, Semidefinite matrix constraints in optimization, SIAM J. Control Optim., 23:493-513, 1985.
[138] C. A. Floudas and V. Visweswaran, A primal relaxed dual global optimization approach, J. Optim. Theory Appl., 10:237-260, 1972.
[139] R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL—A Modeling Language for Mathematical Programming, Scientific Press, San Francisco, 1993.
[140] A. Fradkov and V. A. Yakubovich, The S-Procedure and Duality Theorems for Nonconvex Problems of Quadratic Programming, Technical Report 1, Vestnik Leningrad University, 1973 (in Russian).
[141] A. L. Fradkov and V. A. Yakubovich, The S-procedure and duality relations in nonconvex problems of quadratic programming, Vestnik Leningrad Univ. Math., 6:101-109, 1979 (in Russian, 1973).
[142] B. A. Francis, A Course in H∞ Control Theory, Lecture Notes in Control and Inform. Sci. 88, Springer-Verlag, New York, 1987.
[143] M. Fu, H. Li, and S. I. Niculescu, Robust stability analysis and stabilization for time-delay systems using the integral quadratic constraint approach, in Stability and Control of Delay Systems, Lecture Notes in Control and Inform. Sci., Springer-Verlag, London, 1997.
[144] M. Fu and Z. Q. Luo, Computational complexity of a problem arising in fixed order output feedback design, Systems Control Lett., 30:209-215, 1997.
[145] K. Fujisawa and M. Kojima, SDPA (Semidefinite Programming Algorithm) User's Manual, Technical Report B-308, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 1995.
[146] T. Fukuda, K. Tanie, and T. Mitsuoka, A new method of master-slave type of teleoperation for a micro-manipulator system, in Proc. IEEE Micro Robots and Teleoperation Workshop, Hyannis, MA, 1987.
[147] P. Gahinet, Explicit controller formulas for LMI-based H∞ synthesis, in Proc. American Control Conference, 1994, pp. 2396-2400.
[148] P. Gahinet and P. Apkarian, A linear matrix inequality approach to H∞ control, Internat. J. Robust Nonlinear Control, 4:421-448, 1994.
[149] P. Gahinet, P. Apkarian, and M. Chilali, Parameter-dependent Lyapunov functions for real parametric uncertainty, IEEE Trans. Automat. Control, 41:436-442, 1996.
[150] P. Gahinet and A. Nemirovskii, General purpose LMI solver with benchmarks, in Proc. 32nd Conference on Decision and Control, San Antonio, TX, 1993.
[151] P. Gahinet, A. Nemirovskii, A. J. Laub, and M. Chilali, The LMI control toolbox, in Proc. IEEE Conference on Decision and Control, December 1994, pp. 2038-2041.
[152] P. Gahinet, A. Nemirovskii, A. J. Laub, and M. Chilali, The LMI control toolbox, in Proc. 33rd IEEE Conference on Decision and Control, 1994, pp. 2038-2041.
[153] P. Gahinet, A. Nemirovskii, A. J. Laub, and M. Chilali, LMI Control Toolbox, The MathWorks, Inc., Natick, MA, 1995.
[154] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, San Francisco, CA, 1979.
[155] A. M. Geoffrion, Generalized Benders decomposition, J. Optim. Theory Appl., 10:237-260, 1972.
[156] J. C. Geromel, Optimal linear filtering under parameter uncertainty, IEEE Trans. Signal Processing, 47:168-175, 1999.
[157] J. C. Geromel, J. Bernussou, G. Garcia, and M. C. de Oliveira, H2 and H∞ robust filtering for discrete-time linear systems, in Proc. IEEE Conference on Decision and Control, 1998, pp. 632-637.
[158] J. C. Geromel, P. L. D. Peres, and J. Bernussou, On a convex parameter space method for linear control design of uncertain systems, SIAM J. Control Optim., 29:381-402, 1991.
[159] J. C. Geromel, P. L. D. Peres, and S. R. Souza, Output feedback stabilization of uncertain systems through a min/max problem, in Proc. IFAC World Congress, 1993.
[160] M. X. Goemans, Semidefinite programming in combinatorial optimization, Math. Programming, 79:143-161, 1997.
[161] M. X. Goemans and D. P. Williamson, .878-approximation algorithms for MAX CUT and MAX 2SAT, in Proc. 26th ACM Symposium on the Theory of Computing, 1994, pp. 422-431.
[162] J. L. Goffin, A. Haurie, and J. Ph. Vial, Decomposition and nondifferentiable optimization with the projective algorithm, Management Sci., 38:284-302, 1992.
[163] J. L. Goffin, Z. Q. Luo, and Y. Ye, Complexity analysis of an interior cutting plane method for convex feasibility problems, SIAM J. Optim., 6:638-652, 1996.
[164] K. C. Goh, Robust Control Synthesis via Bilinear Matrix Inequalities, Ph.D. thesis, University of Southern California, Los Angeles, May 1995.
[165] K. C. Goh, J. H. Ly, L. Turan, and M. G. Safonov, μ/Km-synthesis via bilinear matrix inequalities, in Proc. IEEE Conference on Decision and Control, December 1994, pp. 2032-2037.
[166] K.-C. Goh and M. G. Safonov, Robust analysis, sectors, and quadratic functionals, in Proc. IEEE Conference on Decision and Control, 1995, pp. 1988-1993.
[167] K. C. Goh, M. G. Safonov, and J. H. Ly, Robust synthesis via bilinear matrix inequalities, Internat. J. Robust Nonlinear Control, 6:1079-1095, 1996.
[168] K. C. Goh, M. G. Safonov, and G. P. Papavassilopoulos, A global optimization approach for the BMI problem, in Proc. IEEE Conference on Decision and Control, Lake Buena Vista, FL, December 1994.
[169] K. C. Goh, L. Turan, M. G. Safonov, G. P. Papavassilopoulos, and J. H. Ly, Biaffine matrix inequality properties and computational methods, in Proc. American Control Conference, July 1994.
[170] K.-C. Goh and F. Wu, Duality and basis functions for H2 performance analysis, Automatica, 33:1949-1959, 1997.
[171] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed., Johns Hopkins University Press, Baltimore, 1989.
[172] J. Gondzio, Multiple centrality corrections in a primal-dual method for linear programming, Comput. Optim. Appl., 6:137-156, 1996.
[173] M. S. Gowda, Complementarity problems over locally compact cones, SIAM J. Control Optim., 27:836-841, 1989.
[174] A. Grace, Optimization Toolbox User's Guide, MathWorks, Inc., Natick, MA, 1992.
[175] M. J. Green and D. J. N. Limebeer, Linear Robust Control, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[176] K. M. Grigoriadis and R. E. Skelton, Low order control design for LMI problems using alternating projection methods, Automatica, 32:1117-1125, 1996.
[177] S. C. O. Grocott, J. P. How, and D. W. Miller, A comparison of robust control techniques for uncertain structural systems, in Proc. AIAA Guidance Navigation and Control Conference, August 1994, pp. 261-271.
[178] M. Grotschel, L. Lovasz, and A. Schrijver, Geometric Algorithms and Combinatorial Optimization, Springer-Verlag, Berlin, New York, 1983.
[179] K. Gu, Discretized LMI set in the stability problem of linear uncertain time-delay systems, Internat. J. Control, 68:923-934, 1997.
[180] L. G. Gubin, B. T. Polyak, and E. V. Raik, The method of projections for finding the common point of convex sets, USSR Comput. Math. Phys., 7:1-24, 1967.
[181] S. Gupta, Robust stability analysis using LMIs: Beyond small gain and passivity, Internat. J. Robust Nonlinear Control, 6:953-968, 1996.
[182] W. M. Haddad and D. S. Bernstein, Robust, reduced-order, nonstrictly proper state estimation via the optimal projection equations with Petersen-Hollot bounds, Systems Control Lett., 9:423-431, 1987.
[183] W. M. Haddad and D. S. Bernstein, Parameter-dependent Lyapunov functions, constant real parameter uncertainty, and the Popov criterion in robust analysis and synthesis: Parts 1 and 2, in Proc. Conference on Decision and Control, 1991, pp. 2262-2263, 2274-2279.
[184] J.-P. A. Haeberly, M. V. Nayakkankuppam, and M. L. Overton, Extending Mehrotra and Gondzio higher order corrections to mixed semidefinite-quadratic-linear programming, Optim. Methods Softw., to appear.
[185] M. Hall, Jr., Combinatorial Theory, Wiley, New York, 1987.
[186] J. K. Hale and S. Verduyn Lunel, Introduction to Functional Differential Equations, Springer-Verlag, New York, 1991.
[187] S. Hall and J. How, Mixed H2/μ performance bounds using dissipation theory, in Proc. 1993 Conference on Decision and Control, San Antonio, pp. 1536-1541.
[188] S. P. Han, A successive projection method, Math. Programming, 40:1-14, 1988.
[189] C. Helmberg and F. Rendl, A spectral bundle method for semidefinite programming, SIAM J. Optim., to appear.
[190] C. Helmberg, F. Rendl, R. Vanderbei, and H. Wolkowicz, An interior-point method for semidefinite programming, SIAM J. Optim., 6:342-361, 1996.
[191] A. Helmersson, Methods for Robust Gain Scheduling, Ph.D. thesis, Linkoping University, Linkoping, Sweden, 1995.
[192] U. Helmke and J. B. Moore, Optimization and Dynamical Systems, Springer-Verlag, London, 1994.
[193] N. J. Higham, Computing the nearest symmetric positive semidefinite matrix, Linear Algebra Appl., 103:103-118, 1988.
[194] J. B. Hiriart-Urruty and C. Lemarechal, Convex Analysis and Minimization Algorithms, Springer-Verlag, Berlin, 1993.
[195] J.-B. Hiriart-Urruty and D. Ye, Sensitivity analysis of all eigenvalues of a symmetric matrix, Numer. Math., 70:45-72, 1995.
[196] G. Hirzinger, Direct digital robot control using a force torque sensor, in Proc. IFAC Symposium on Real Time Digital Control Applications, Guadalajara, Mexico, 1983, pp. 243-255.
[197] N. Hogan, Impedance control: An approach to manipulation, Parts I, II, III, J. Dynamic Systems Measurement Control, 107:1-24, 1985.
[198] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990.
[199] R. Horn and C. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.
[200] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, 2nd ed., Springer-Verlag, Berlin, 1993.
[201] J. P. How, Robust Control Design with Real Parameter Uncertainty using Absolute Stability Theory, Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, February 1993.
[202] J. P. How, W. M. Haddad, and S. R. Hall, Robust control synthesis examples with real parameter uncertainty using the Popov criterion, in Proc. American Control Conference, June 1993, pp. 1090-1095.
[203] J. P. How and S. R. Hall, Connections between the Popov stability criterion and bounds for real parameter uncertainty, in Proc. American Control Conference, 1995, pp. 1084-1089.
[204] J. P. How, S. R. Hall, and W. M. Haddad, Robust controllers for the Middeck active control experiment using Popov controller synthesis, IEEE Trans. Control Systems Technology, 2:73-87, 1994.
[205] Z. Hu, S. E. Salcudean, and P. D. Loewen, Optimisation-based teleoperation controller design, in Proc. IFAC World Congress, San Francisco, CA, June 1996, pp. 405-410.
[206] P. Huard, Resolution of mathematical programming with nonlinear constraints by the method of centers, in Nonlinear Programming, J. Abadie, ed., North-Holland, Amsterdam, 1967.
[207] E. F. Infante and W. B. Castelan, A Lyapunov functional for a matrix difference-differential equation, J. Differential Equations, 29:439-451, 1978.
[208] G. Isac, Complementarity Problems, Springer-Verlag, New York, Berlin, 1992.
[209] T. Iwasaki, A Unified Matrix Inequality Approach to Linear Control Design, Ph.D. thesis, Purdue University, West Lafayette, IN, December 1993.
[210] T. Iwasaki, Robust performance analysis for systems with norm-bounded time-varying structured uncertainty, in Proc. 1994 American Control Conference, Baltimore, MD.
[211] T. Iwasaki and S. Hara, Well-posedness of feedback systems: Insights into exact robustness analysis, in Proc. IEEE Conference on Decision and Control, Kobe, 1996, pp. 1863-1868.
[212] T. Iwasaki and S. Hara, Well-posedness of feedback systems: Insights into exact robustness analysis and approximate computations, IEEE Trans. Automat. Control, 43:619-630, 1998.
[213] T. Iwasaki, S. Hara, and T. Asai, Well-posedness theorem: A classification of LMI/BMI-reducible control problems, in Proc. International Symposium on Intelligent Robotic Systems, Bangalore, India, 1995, pp. 145-157.
[214] T. Iwasaki and R. E. Skelton, A complete solution to the general H∞ problem: LMI existence conditions and state-space formulas, in Proc. American Control Conference, San Francisco, CA, June 1993.
[215] T. Iwasaki and R. E. Skelton, All controllers for the general H∞ control problem: LMI existence conditions and state space formulas, Automatica, 30:1307-1317, 1994.
[216] T. Iwasaki and R. E. Skelton, Parametrization of all stabilizing controllers via quadratic Lyapunov functions, J. Optim. Theory Appl., 85:291-308, 1995.
[217] T. Iwasaki and R. E. Skelton, The XY-centering algorithm for the dual LMI problem: A new approach to fixed order control design, Internat. J. Control, 62:1257-1272, 1995.
[218] B. N. Jain, Guaranteed error estimation in uncertain systems, IEEE Trans. Automat. Control, AC-33:230-232, 1975.
[219] F. Jarre, An interior-point method for minimizing the maximum eigenvalue of a linear combination of matrices, SIAM J. Control Optim., 31:1360-1377, 1993.
[220] U. Jonsson, Robustness Analysis of Uncertain and Nonlinear Systems, Ph.D. thesis, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, 1996.
[221] U. Jonsson, Stability analysis with Popov multipliers and integral quadratic constraints, Systems Control Lett., 31:85-92, 1997.
[222] U. Jonsson and A. Rantzer, On duality in robustness analysis, in Proc. 34th IEEE Conference on Decision and Control, New Orleans, LA, 1995, pp. 1443-1448.
[223] U. Jonsson and A. Rantzer, Duality bounds in robustness analysis, Automatica, 33:1835-1844, 1997.
[224] U. Jonsson and A. Rantzer, A Matlab Toolbox for System Analysis via Integral Quadratic Constraints, Technical Report TFRT-7556, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, January 1997.
[225] R. E. Kalman, Lyapunov functions for the problem of Lur'e in automatic control, Proc. Nat. Acad. Sci. U.S.A., 49:201-205, 1963.
[226] L. H. Keel, S. P. Bhattacharyya, and J. W. Howze, Robust control with structured perturbations, IEEE Trans. Automat. Control, AC-33:68-78, 1988.
[227] M. Khammash and J. B. Pearson, Performance robustness of discrete-time systems with structured uncertainty, IEEE Trans. Automat. Control, AC-36:398-412, 1991.
[228] P. Khargonekar and M. Rotea, Mixed H2/H∞ control: A convex optimization approach, IEEE Trans. Automat. Control, 36:824-837, 1991.
[229] H. Kimura, Pole assignment by gain output feedback, IEEE Trans. Automat. Control, AC-20:509-516, 1975.
[230] K. C. Kiwiel, Methods of Descent for Nondifferentiable Optimization, Lecture Notes in Math. 1133, Springer-Verlag, Berlin, New York, 1985.
[231] K. C. Kiwiel, A linearization algorithm for optimizing control systems subject to singular value inequalities, IEEE Trans. Automat. Control, AC-31:595-602, 1986.
[232] K. C. Kiwiel, A direct method of linearizations for continuous minimax problems, J. Optim. Theory Appl., 55:271-287, 1987.
[233] M. Kojima, M. Shida, and S. Shindoh, Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPs and SDLCPs, Math. Programming, 80:129-161, 1998.
[234] M. Kojima, S. Shindoh, and S. Hara, Interior-Point Methods for the Monotone Linear Complementarity Problem in Symmetric Matrices, Technical Report, Department of Information Science, Tokyo Institute of Technology, Japan, 1994.
[235] M. Kojima, S. Shindoh, and S. Hara, Interior point methods for the monotone linear complementarity problem in symmetric matrices, SIAM J. Optim., 7:86-125, 1997.
[236] I. E. Kose, F. Jabbari, and W. E. Schmitendorf, A direct characterization of L2-gain controllers for LPV systems, in Proc. IEEE Conference on Decision and Control, 1996, pp. 3990-3995.
[237] H. Kushner, Introduction to Stochastic Control, Holt, Rinehart and Winston, Inc., 1971.
[238] V. Lakshmikantham and S. Leela, Differential and Integral Inequalities, Vols. I and II, Academic Press, New York, 1969.
[239] A. J. Laub, A Schur method for solving algebraic Riccati equations, IEEE Trans. Automat. Control, AC-24:913-921, 1979.
[240] H. Lebret, Antenna pattern synthesis through convex optimization, in Advanced Signal Processing Algorithms, SPIE—The International Society for Optical Engineering, 1995, pp. 182-192.
[241] F. C. Lee, H. Flashner, and M. G. Safonov, Positivity-based control system synthesis using alternating LMIs, in Proc. American Control Conference, Seattle, WA, June 1995.
[242] F. C. Lee, H. Flashner, and M. G. Safonov, Positivity embedding for noncolocated and nonsquare flexible systems, Appl. Math. Comput., 70:233-246, 1995.
[243] F. C. Lee, H. Flashner, and M. G. Safonov, An LMI approach to positivity embedding, in Proc. American Control Conference, Seattle, WA, June 1995.
[244] C. Lemarechal, A. Nemirovskii, and Yu. Nesterov, New variants of bundle methods, Math. Programming, 69:111-147, 1995.
[245] C. Lemarechal, F. Oustry, and C. Sagastizabal, The U-Lagrangian of a convex function, Trans. Amer. Math. Soc., to appear.
[246] C. Lemarechal and C. Sagastizabal, Variable metric bundle methods: From conceptual to implementable forms, Math. Programming, 76:394-410, 1997.
[247] A. S. Lewis and M. L. Overton, Eigenvalue optimization, Acta Numer., 5:149-190, 1996.
[248] H. Li, S. I. Niculescu, L. Dugard, and J. M. Dion, Robust H∞ control of uncertain linear time-delay systems: A linear matrix inequality approach with guaranteed α-stability, Part II, in Proc. 35th IEEE Conference on Decision and Control, December 1996.
[249] S.-M. Liu, Parallel Algorithms for Global Nonconvex Minimization with Applications to Robust Control Design via BMIs, Ph.D. thesis, University of Southern California, Los Angeles, August 1995.
[250] S.-M. Liu and G. P. Papavassilopoulos, Numerical experience with parallel algorithms for solving the BMI problem, in Proc. IFAC World Congress, July 1996.
Bibliography
[251] C. Livadas, Optimal H2/Popov Controller Design Using Linear Matrix Inequalities, Master's thesis, Massachusetts Institute of Technology, Cambridge, February 1996.
[252] W. M. Lu and J. C. Doyle, H∞ control of LFT systems: An LMI approach, in Proc. IEEE Conference on Decision and Control, Tucson, AZ, 1992, pp. 1997-2001.
[253] W. Lu and J. C. Doyle, H∞ control of nonlinear systems: A convex characterization, IEEE Trans. Automat. Control, 40:1668-1674, 1995.
[254] W. M. Lu, K. Zhou, and J. C. Doyle, Stabilization of uncertain linear systems: An LFT approach, IEEE Trans. Automat. Control, 41:50-65, 1996.
[255] D. G. Luenberger, Optimization by Vector Space Methods, John Wiley, New York, 1968.
[256] D. G. Luenberger, Investment Science, Prentice-Hall, Englewood Cliffs, NJ, 1997.
[257] Z.-Q. Luo, J. F. Sturm, and S. Zhang, Duality and Self-Duality for Conic Convex Programming, Technical Report EI 9620/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands, 1996.
[258] A. I. Lur'e and V. N. Postnikov, On the theory of stability of control systems, Appl. Math. Mech., 8:464-465, 1944 (in Russian).
[259] J. H. Ly, M. G. Safonov, and R. Y. Chiang, On computation of multivariable stability margin using generalized Popov multipliers—LMI approach, in Proc. American Control Conference, July 1994.
[260] J. H. Ly, A Multiplier Approach to μ/km Synthesis, Ph.D. thesis, University of Southern California, Los Angeles, February 1995.
[261] J. H. Ly, K. C. Goh, and M. G. Safonov, LMI approach to multiplier km/μ-synthesis, in Proc. American Control Conference, Seattle, WA, June 1995.
[262] J. H. Ly, M. G. Safonov, and R. Y. Chiang, Real/complex multivariable stability margin computation via generalized Popov multiplier-LMI approach, in Proc. American Control Conference, Baltimore, MD, 1994, pp. 425-429.
[263] H. M. Markowitz, Mean-Variance Analysis in Portfolio Choice and Capital Markets, Blackwell, Oxford, 1987.
[264] I. Masubuchi, A. Ohara, and N. Suda, LMI-based controller synthesis: A unified formulation and solution, in Proc. American Control Conference, 1995, pp. 3473-3477.
[265] I. Masubuchi, A. Ohara, and N. Suda, Robust multi-objective controller design via convex optimization, in Proc. IEEE Conference on Decision and Control, 1996, pp. 263-264.
[266] MATLAB: High-Performance Numeric Computation and Visualization Software, Version 4.1, MathWorks, Inc., Natick, MA, 1993.
[267] A. Megretsky, Power Distribution Approach in Robust Control, Technical Report TRITA/MAT-92-0027, Department of Mathematics, Royal Institute of Technology, Stockholm, Sweden, 1992.
[268] A. Megretski, Power distribution approach in robust control, in Proc. IFAC Congress, Sydney, Australia, 1993, pp. 399-402.
[269] A. Megretsky, S-Procedure in Optimal Non-Stochastic Filtering, Technical Report TRITA/MAT-92-0015, Department of Mathematics, Royal Institute of Technology, Stockholm, Sweden, March 1992.
[270] A. Megretsky, Necessary and sufficient conditions of stability: A multiloop generalization of the circle criterion, IEEE Trans. Automat. Control, AC-38:753-756, 1993.
[271] A. Megretski, Integral Quadratic Constraints for Systems with Rate Limiters, Technical Report LIDS-P-2407, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, 1997.
[272] A. Megretski, C. Kao, U. Jonsson, and A. Rantzer, A Guide to IQC-Beta: Software for Robustness Analysis, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, 1997. Available at http://www.mit.edu/people/ameg/home.html.
[273] A. Megretski and A. Rantzer, System analysis via integral quadratic constraints, IEEE Trans. Automat. Control, 42:819-830, 1997.
[274] A. Megretsky and S. Treil, Power distribution inequalities in optimization and robustness of uncertain systems, J. Math. Systems Estim. Control, 3:301-319, 1993.
[275] S. Mehrotra, Higher Order Methods and Their Performance, Technical Report 90-16R1, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, 1991.
[276] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM J. Optim., 2:575-601, 1992.
[277] G. Meinsma, Y. Shrivastava, and M. Fu, Some properties of an upper bound of μ, IEEE Trans. Automat. Control, 41:1326-1330, 1996.
[278] D. Meizel, Stabilisation robuste des systèmes à dynamique perturbée, in Systèmes Non-Linéaires 2: Stabilité—Stabilisation, Vol. 2, A. J. Fossard and D. Normand-Cyrot, eds., Masson, Paris, 1993.
[279] M. Mesbahi, Matrix Inequalities, Stability Theory, Interior Point Methods, and Distributed Computation, Ph.D. thesis, University of Southern California, Los Angeles, June 1996.
[280] M. Mesbahi, Solving a class of rank minimization problems as semi-definite programs with applications to fixed order output feedback synthesis, Systems Control Lett., 33:31-36, 1998.
[281] M. Mesbahi and G. P. Papavassilopoulos, A cone programming approach to the bilinear matrix inequality and its geometry, Math. Programming, 77:247-272, 1997.
[282] M. Mesbahi and G. P. Papavassilopoulos, On the rank minimization problem over a positive semi-definite linear matrix inequality, IEEE Trans. Automat. Control, 42:239-243, 1997.
[283] M. Mesbahi, G. P. Papavassilopoulos, and M. G. Safonov, Matrix cones, complementarity problems, and the bilinear matrix inequality, in Proc. IEEE Conference on Decision and Control, New Orleans, LA, December 1995.
[284] D. G. Meyer, V. Balakrishnan, C. Barratt, S. Boyd, N. Khraishi, X. Li, and S. Norman, Intelligent specification language compilers, the Q-parametrization, and convex programming: Concepts for an advanced computer-aided control system design method, in Advanced Computing Concepts and Techniques in Control Engineering, M. J. Denham and A. J. Laub, eds., NATO Adv. Sci. Inst. Ser. F Comput. Sys. Sci. 47, Springer-Verlag, Berlin, 1987, pp. 487-496.
[285] D. Miller, J. de Luis, G. Stover, and E. Crawley, MACE: Anatomy of a modern control experiment, in Proc. IFAC 13th World Congress, July 1996.
[286] R. D. C. Monteiro, Polynomial convergence of primal-dual algorithms for semidefinite programming based on Monteiro and Zhang family of directions, SIAM J. Optim., 8:797-812, 1998.
[287] R. E. Moore, Methods and Applications of Interval Analysis, SIAM, Philadelphia, 1979.
[288] J. D. Murray, Mathematical Biology, Springer-Verlag, Berlin, New York, 1993.
[289] K. J. Narendra and J. H. Taylor, Frequency Domain Criteria for Absolute Stability, Academic Press, New York, 1973.
[290] A. Nemirovski, Subject: Quality of approximations, private communication, 19xx.
[291] A. Nemirovskii, Several NP-hard problems arising in robust stability analysis, Math. Control Signals Systems, 6:99-105, 1993.
[292] A. Nemirovski and P. Gahinet, The projective method for solving linear matrix inequalities, in Proc. American Control Conference, 1994, pp. 840-844.
[293] A. Nemirovski and P. Gahinet, The projective method for solving linear matrix inequalities, Math. Programming, 77:163-190, 1997.
[294] Yu. Nesterov, Complexity estimates of some cutting plane methods based on the analytic barrier, Math. Programming, 69:149-176, 1995.
[295] Yu. Nesterov and A. Nemirovskii, An interior-point method for generalized linear-fractional problems, Math. Programming, Ser. B, 1993.
[296] Yu. Nesterov and A. Nemirovskii, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994.
[297] Yu. Nesterov and M. J. Todd, Primal-dual interior-point methods for self-scaled cones, SIAM J. Optim., 8:324-364, 1998.
[298] Yu. E. Nesterov, M. J. Todd, and Y. Ye, Infeasible-Start Primal-Dual Methods and Infeasibility Detectors for Nonlinear Programming Problems, Technical Report, Department of Management Sciences, University of Iowa, Iowa City, IA, February 1998; Math. Programming, to appear.
[299] W. S. Newman and Y. Zhang, Stable interaction control and Coulomb friction compensation using natural admittance control, J. Robotic Systems, 11:3-11, 1994.
[300] S.-I. Niculescu, On Some Frequency Sweeping Tests for Delay-Dependent Stability: A Model Transformation Case Study, manuscript.
[301] S. I. Niculescu, Systèmes à retard: Aspects qualitatifs sur la stabilité et la stabilisation, Nouveaux Essais, Diderot, Paris, 1997.
[302] G. Niemeyer and J. J. E. Slotine, Stable adaptive teleoperation, in Proc. American Control Conference, 1991.
[303] S. Norman and S. Boyd, Numerical solution of a two-disk problem, in Recent Advances in Robust Control, P. Dorato and R. K. Yedavalli, eds., IEEE Press, Piscataway, NJ, pp. 285-287.
[304] B. Øksendal, Stochastic Differential Equations, Springer-Verlag, New York, 1985.
[305] F. Oustry, The U-Lagrangian of the maximum eigenvalue function, SIAM J. Optim., 9:526-549, 1999.
[306] F. Oustry, Vertical developments of a convex function, J. Convex Anal., 5:153-170, 1998.
[307] F. Oustry, A second-order bundle method to minimize the maximum eigenvalue function, Math. Programming, 1998, submitted.
[308] M. L. Overton, On minimizing the maximum eigenvalue of a symmetric matrix, SIAM J. Matrix Anal. Appl., 9:256-268, 1988.
[309] M. L. Overton, Large-scale optimization of eigenvalues, SIAM J. Optim., 2:88-120, 1992.
[310] M. L. Overton and R. S. Womersley, Second derivatives for optimizing eigenvalues of symmetric matrices, SIAM J. Matrix Anal. Appl., 16:667-718, 1995.
[311] M. L. Overton and X. Ye, Toward second-order methods for structured nonsmooth optimization, in Advances in Optimization and Numerical Analysis, S. Gomez and J.-P. Hennart, eds., Kluwer Academic Publishers, Norwell, MA, 1994, pp. 97-109.
[312] A. Packard, Gain scheduling via linear fractional transformations, Systems Control Lett., 22:79-92, 1994.
[313] A. Packard and J. C. Doyle, Robust control with an H2 performance objective, in Proc. American Control Conference, 1987, pp. 2141-2146.
[314] A. Packard and J. Doyle, The complex structured singular value, Automatica, 29:71-109, 1993.
[315] A. Packard, K. Zhou, P. Pandey, and G. Becker, A collection of robust control problems leading to LMI's, in Proc. IEEE Conference on Decision and Control, Brighton, UK, 1991, pp. 1245-1250.
[316] A. Packard, K. Zhou, P. Pandey, J. Leonhardson, and G. Balas, Optimal, constant I/O similarity scaling for full-information and state-feedback control problems, Systems Control Lett., 19:271-280, 1992.
[317] F. Paganini, Sets and Constraints in the Analysis of Uncertain Systems, Ph.D. thesis, California Institute of Technology, Pasadena, December 1995.
[318] F. Paganini, A set-based approach for white noise modeling, IEEE Trans. Automat. Control, 41:1453-1465, 1996.
[319] F. Paganini, Frequency domain conditions for robust H2 performance, IEEE Trans. Automat. Control, 44:38-49, 1999.
[320] F. Paganini, Convex methods for robust H2 analysis of continuous time systems, IEEE Trans. Automat. Control, 44:239-252, 1999.
[321] F. Paganini, R. D'Andrea, and J. Doyle, Behavioral approach to robustness analysis, in Proc. American Control Conference, Baltimore, MD, June 1994, pp. 2782-2786.
[322] F. Paganini and E. Feron, Analysis of robust H2 performance: Comparisons and examples, in Proc. IEEE Conference on Decision and Control, December 1997.
[323] C. H. Papadimitriou, Computational Complexity, Addison-Wesley, Reading, MA, 1994.
[324] P. Park and T. Kailath, Dual ^-spectral factorization, Internat. J. Control, 1997.
[325] P. C. Parks, A new proof of the Routh-Hurwitz stability criterion using the second method of Lyapunov, Proc. Cambridge Philos. Soc., 58:669-672, 1962.
[326] G. Pataki, On the Rank of Extreme Matrices in Semidefinite Programs and the Multiplicity of Optimal Eigenvalues, Technical Report MSRR-604(R), Management Science Research Group, GSIA, Carnegie Mellon University, Pittsburgh, PA, August 1995.
[327] R. P. Paul, Problems and research issues associated with the hybrid control of force and displacement, in Proc. IEEE International Conference on Robotics and Automation, 1987, pp. 1966-1971.
[328] P. L. D. Peres and J. C. Geromel, H2 control for discrete-time systems, optimality and robustness, Automatica, 29:225-228, 1993.
[329] P. L. D. Peres, S. R. Souza, and J. C. Geromel, Optimal H2 control for uncertain systems, in Proc. American Control Conference, Chicago, June 1992.
[330] M. A. Peters and A. A. Stoorvogel, Mixed H2/H∞ control in a stochastic framework, Linear Algebra Appl., 205-206:971-996, 1994.
[331] I. R. Petersen and D. C. McFarlane, Optimal guaranteed cost filtering for uncertain discrete-time linear systems, Internat. J. Robust Nonlinear Control, 6:267-280, 1991.
[332] I. R. Petersen and D. C. McFarlane, Optimal guaranteed cost control of uncertain linear systems, in Proc. American Control Conference, Chicago, June 1992.
[333] I. R. Petersen and D. C. McFarlane, Robust state estimation for uncertain systems, in Proc. 30th IEEE Conference on Decision and Control, Brighton, UK, 1991, pp. 2630-2631.
[334] I. R. Petersen, D. C. McFarlane, and M. A. Rotea, Optimal guaranteed cost control of discrete-time uncertain systems, in Proc. 12th IFAC World Congress, Sydney, Australia, July 1993, pp. 407-410.
[335] B. T. Polyak, Subgradient methods: A survey of Soviet research, in Nonsmooth Optimization, C. Lemaréchal and R. Mifflin, eds., Pergamon Press, Elmsford, NY, 1977.
[336] K. Poolla and A. Tikku, Robust performance against time-varying structured perturbations, IEEE Trans. Automat. Control, 40:1589-1602, 1995.
[337] V. M. Popov, Absolute stability of nonlinear systems of automatic control, Automat. Remote Control, 22:857-875, 1962.
[338] F. A. Potra and R. Sheng, Superlinear Convergence of Interior-Point Algorithms for Semidefinite Programming, Technical Report 86, Department of Mathematics, University of Iowa, Iowa City, November 1996.
[339] G. Raisbeck, A definition of passive linear networks in terms of time and energy, J. Appl. Physics, 25:1510-1514, 1954.
[340] A. Rantzer, On the Kalman-Yakubovich-Popov lemma, Systems Control Lett., 28:7-10, 1996.
[341] A. Rantzer and A. Megretski, System analysis via integral quadratic constraints, in Proc. IEEE Conference on Decision and Control, December 1994, pp. 3062-3067.
[342] A. Rantzer and A. Megretski, Stability criteria based on integral quadratic constraints, in Proc. IEEE Conference on Decision and Control, Kobe, Japan, 1996, pp. 215-220.
[343] F. Riesz and B. Sz.-Nagy, Functional Analysis, Dover, New York, 1990.
[344] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[345] J. Rohn, Systems of linear interval equations, Linear Algebra Appl., 126:39-78, 1989.
[346] J. Rohn, Overestimations in bounding solutions of perturbed linear equations, Linear Algebra Appl., 262:55-66, 1997.
[347] M. A. Rotea, Generalized H2 control, Automatica, 29:373-385, 1993.
[348] M. G. Safonov, Stability and Robustness of Multivariable Feedback Systems, Massachusetts Institute of Technology Press, Cambridge, MA, 1980.
[349] M. G. Safonov, L∞ optimal sensitivity vs. stability margin, in Proc. IEEE Conference on Decision and Control, Albuquerque, NM, December 1983.
[350] M. G. Safonov, Stability margins of diagonally perturbed multivariable feedback systems, IEE Proc., 129:251-256, 1982.
[351] M. G. Safonov and M. Athans, Gain and phase margin for multiloop LQG regulators, in Proc. IEEE Conference on Decision and Control, Clearwater Beach, FL, December 1976.
[352] M. G. Safonov and M. Athans, A multiloop generalization of the circle criterion for stability margin analysis, IEEE Trans. Automat. Control, 26:415-422, 1981.
[353] M. G. Safonov and R. Y. Chiang, Real/complex Km-synthesis without curve fitting, in Control and Dynamic Systems, Vol. 56, C. T. Leondes, ed., Academic Press, New York, 1993, pp. 303-324.
[354] M. G. Safonov, K. C. Goh, and J. H. Ly, Control system synthesis via bilinear matrix inequalities, in Proc. American Control Conference, Baltimore, MD, June 1994.
[355] M. G. Safonov and P. H. Lee, A multiplier method for computing real multivariable stability margin, in Proc. IFAC World Congress, Sydney, Australia, July 1993.
[356] M. G. Safonov and G. P. Papavassilopoulos, The diameter of an intersection of ellipsoids and BMI robust synthesis, in Proc. IFAC Symposium on Robust Control Design, Rio de Janeiro, Brazil, September 1994.
[357] M. G. Safonov and G. Wyetzner, Computer-aided stability criterion renders Popov criterion obsolete, IEEE Trans. Automat. Control, AC-32:1128-1131, 1987.
[358] A. M. Samoilenko and N. A. Perestyuk, Impulsive Differential Equations, World Scientific, Singapore, 1995.
[359] M. Sampei, T. Mita, and M. Nakamichi, An algebraic approach to H∞ output feedback control problems, Systems Control Lett., 14:13-24, 1990.
[360] I. W. Sandberg, On the L2-boundedness of solutions of nonlinear functional equations, Bell System Tech. J., 43:1581-1599, 1964.
[361] C. W. Scherer, From LMI analysis to multichannel mixed LMI synthesis: A general procedure, Selected Topics in Identification, Modelling and Control, 8:1-8, 1995.
[362] C. W. Scherer, Mixed H2/H∞ control, in Trends in Control: A European Perspective, A. Isidori, ed., Springer-Verlag, Berlin, 1995, pp. 173-216.
[363] C. Scherer, Mixed H2/H∞ control for linear parametrically varying systems, in Proc. IEEE Conference on Decision and Control, 1995, pp. 3182-3187.
[364] C. W. Scherer, Robust generalized H2 control for uncertain and LPV systems with general scalings, in Proc. IEEE Conference on Decision and Control, 1996, pp. 3970-3975.
[365] C. W. Scherer, P. Gahinet, and M. Chilali, Multiobjective output-feedback control via LMI optimization, IEEE Trans. Automat. Control, 42:896-911, 1997.
[366] H. Schramm and J. Zowe, A version of the bundle idea for minimizing a nonsmooth function: Conceptual idea, convergence analysis, numerical results, SIAM J. Optim., 2:121-152, 1992.
[367] G. Scorletti and L. El Ghaoui, Improved linear matrix inequality conditions for gain scheduling, in Proc. IEEE Conference on Decision and Control, 1995, pp. 3626-3631.
[368] G. Scorletti and L. El Ghaoui, Improved LMI conditions for gain scheduling and related control problems, Internat. J. Robust Nonlinear Control, 8:845-877, 1998.
[369] U. Shaked and C. E. de Souza, Robust minimum variance filtering, IEEE Trans. Signal Process., 40:2474-2483, 1995.
[370] J. S. Shamma, Robust stability with time-varying structured uncertainty, IEEE Trans. Automat. Control, 39:714-724, 1994.
[371] J. S. Shamma and M. Athans, Analysis of gain scheduled control for nonlinear plants, IEEE Trans. Automat. Control, 35:898-907, 1990.
[372] A. Shapiro, Perturbation theory of nonlinear programs when the set of optimal solutions is not a singleton, Appl. Math. Optim., 18:215-229, 1988.
[373] A. Shapiro, First and second order analysis of nonlinear semidefinite programs, Math. Programming, Ser. B, 77:301-320, 1997.
[374] A. Shapiro and M. K. H. Fan, On eigenvalue optimization, SIAM J. Optim., 5:552-568, 1995.
[375] N. Z. Shor, Minimization Methods for Non-Differentiable Functions, Springer-Verlag, Berlin, 1985.
[376] N. Z. Shor, Quadratic optimization problems, Soviet J. Circuits Systems Sci., 25:1-11, 1987.
[377] R. Skelton, T. Iwasaki, and K. Grigoriadis, A Unified Algebraic Approach to Linear Control Design, Taylor & Francis, London, 1998.
[378] R. S. Smith and M. Dahleh, The Modeling of Uncertainty in Control Systems, Lecture Notes in Control and Inform. Sci., Springer-Verlag, Berlin, 1993.
[379] G. Sonnevend, New algorithms in convex programming based on a notion of "center" (for systems of analytic inequalities) and on rational extrapolation, in Trends in Mathematical Optimization, K. H. Hoffmann, J. B. Hiriart-Urruty, C. Lemaréchal, and J. Zowe, eds., Internat. Ser. Numer. Math. 84, Birkhäuser-Verlag, Basel, 1988, pp. 311-327.
[380] A. G. Sparks and D. S. Bernstein, The scaled Popov criterion and bounds for the real structured singular value, in Proc. IEEE Conference on Decision and Control, December 1994, pp. 2998-3002.
[381] R. J. Stern and H. Wolkowicz, Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations, SIAM J. Optim., 5:286-313, 1995.
[382] J. Stoer and C. Witzgall, Convexity and Optimization in Finite Dimensions I, Springer-Verlag, Berlin, New York, 1970.
[383] A. A. Stoorvogel, The robust H2 problem: A worst case design, in Proc. IEEE Conference on Decision and Control, Brighton, England, December 1991, pp. 194-199.
[384] A. A. Stoorvogel, The robust H2 control problem: A worst-case design, IEEE Trans. Automat. Control, 38:1358-1370, 1993.
[385] M. K. Sundareshan and M. A. L. Thathachar, Instability of linear time-varying systems—Conditions involving noncausal multipliers, IEEE Trans. Automat. Control, 17:504-510, 1972.
[386] M. Sznaier, An exact solution to general SISO mixed H2/H∞ problems via convex optimization, IEEE Trans. Automat. Control, 39:2511-2517, 1994.
[387] M. Sznaier and J. Tierno, Is set modeling of white noise a good tool for robust H2 analysis?, in Proc. 37th IEEE Conference on Decision and Control, Tampa, FL, 1998, pp. 1183-1188.
[388] G. Stein and M. Athans, The LQG/LTR procedure for multivariable feedback control design, IEEE Trans. Automat. Control, AC-32:105-114, 1987.
[389] K. Takaba and T. Katayama, Robust H2 control of descriptor system with time-varying uncertainty, in Proc. American Control Conference, Philadelphia, June 1998, pp. 2421-2426.
[390] Y. Theodor and U. Shaked, Robust discrete-time minimum variance filtering, IEEE Trans. Signal Process., 44:181-189, 1996.
[391] M. J. Todd, K. C. Toh, and R. H. Tütüncü, On the Nesterov-Todd direction in semidefinite programming, SIAM J. Optim., 8:769-796, 1998.
[392] K. C. Toh, M. J. Todd, and R. H. Tütüncü, SDPT3: A MATLAB Software Package for Semidefinite Programming, 1996. Available at http://www.math.nus.sg/~mattohkc/index.html.
[393] O. Toker, On the complexity of the robust stability problem for linear parameter varying systems, in Proc. 13th Triennial IFAC World Congress, Vol. H, 1996, pp. 53-57.
[394] O. Toker and H. Ozbay, On the NP-hardness of solving bilinear matrix inequalities and simultaneous stabilization with static output feedback, in Proc. American Control Conference, 1995.
[395] O. Toker and H. Ozbay, Complexity issues in robust stability of linear delay-differential systems, Math. Control Signals Systems, 9:386-400, 1996.
[396] H. Tokunaga, T. Iwasaki, and S. Hara, Multi-objective robust control with transient specifications, in Proc. IEEE Conference on Decision and Control, 1996, pp. 3482-3483.
[397] M. Torki, Epi-différentiabilité du second ordre de certaines fonctions spectrales et conditions d'optimalité en optimisation de valeurs propres, Technical Report, LAO, Université Paul Sabatier, Toulouse, France, December 1997.
[398] L. Turan, C. H. Huang, and M. G. Safonov, Two-Riccati positive real synthesis: LMI approach, in Proc. American Control Conference, 1995, pp. 2432-2436.
[399] H. D. Tuan and P. Apkarian, Relaxations of parameterized LMIs with control applications, Internat. J. Robust Nonlinear Control, to appear.
[400] H. D. Tuan, S. Hosoe, and H. Tuy, D.C. Optimization Approach to Robust Control: Feasibility Problems, Technical Report 9601, Nagoya University, Nagoya, Japan, October 1996.
[401] L. Vandenberghe and S. Boyd, SP: Software for Semidefinite Programming. User's Guide, Stanford University, Stanford, CA, 1994. Available at http://www.stanford.edu/~boyd/SP.html.
[402] L. Vandenberghe and S. Boyd, SP, Software for Semidefinite Programming, User's Guide, December 1994. Available via anonymous ftp at isl.stanford.edu, under /pub/boyd/semidef_prog.
[403] L. Vandenberghe and S. Boyd, A primal-dual potential reduction method for problems involving matrix inequalities, Math. Programming, Ser. B, 69:205-236, 1995.
[404] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Rev., 38:49-95, 1996.
[405] L. Vandenberghe, S. Boyd, and A. El Gamal, Optimal wire and transistor sizing for circuits with non-tree topology, in Proc. IEEE/ACM International Conference on Computer-Aided Design, San Jose, CA, November 1997, pp. 252-259.
[406] L. Vandenberghe, S. Boyd, and A. El Gamal, Optimizing dominant time constant in RC circuits, IEEE Trans. Computer-Aided Design, 2:110-125, 1998.
[407] L. Vandenberghe, S. Boyd, and S.-P. Wu, Determinant maximization with linear matrix inequality constraints, SIAM J. Matrix Anal. Appl., 19:499-533, 1998.
[408] J. J. van Dixhoorn and F. J. Evans, Physical Structure in Systems Theory, Academic Press, New York, 1974.
[409] P. Van Dooren, A generalized eigenvalue approach for solving Riccati equations, SIAM J. Sci. Statist. Comput., 2:121-135, 1981.
[410] S. A. Vavasis, Nonlinear Optimization: Complexity Issues, Oxford University Press, New York, 1991.
[411] M. Vidyasagar, Control System Synthesis: A Factorization Approach, MIT Press, Cambridge, 1985.
[412] V. Visweswaran, C. A. Floudas, M. G. Ierapetritou, and E. N. Pistikopoulos, A Decomposition-Based Optimization Approach for Solving Bilevel Linear and Quadratic Programs, Kluwer Academic Publishers, Norwell, MA, 1996.
[413] P. G. Voulgaris, Optimal H2/L1 control via duality theory, IEEE Trans. Automat. Control, 40:1881-1888, 1995.
[414] W. Wang, J. Doyle, C. Beck, and K. Glover, Model reduction of LFT systems, in Proc. IEEE Conference on Decision and Control, Brighton, England, December 1991, pp. 1233-1238.
[415] Z. Q. Wang and S. Skogestad, Robust controller design for uncertain time delay systems, in Analysis and Optimization of Systems: State and Frequency Domain Approaches for Infinite-Dimensional Systems, Lecture Notes in Control and Inform. Sci., Springer-Verlag, Berlin, 1993, pp. 610-623.
[416] C. Weber and J. Allebach, Reconstruction of frequency offset Fourier data by alternating projections, in Proc. 24th Allerton Conference, Monticello, IL, 1986, pp. 194-201.
[417] D. E. Whitney, Force feedback of manipulator fine motion, J. Dynam. Systems Measurement Control, 9:91-97, 1977.
[418] D. E. Whitney, Historical perspective and state of the art in robot force control, Internat. J. Robotics Res., 6:3-14, 1987.
[419] J. C. Willems, The Analysis of Feedback Systems, Research Monographs 62, MIT Press, Cambridge, 1969.
[420] J. C. Willems, Least squares stationary optimal control and the algebraic Riccati equation, IEEE Trans. Automat. Control, AC-16:621-634, 1971.
[421] J. C. Willems, On the existence of a nonpositive solution to the Riccati equation, IEEE Trans. Automat. Control, AC-19:592-593, 1974.
[422] W. M. Wonham, Random differential equations in control theory, in Probabilistic Methods in Applied Mathematics, Vol. 2, A. T. Bharucha-Reid, ed., Academic Press, New York, 1970, pp. 31-212.
[423] S. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.
[424] F. Wu, X. Yang, A. Packard, and G. Becker, Induced L2-norm control for LPV systems with bounded parameter variation rates, in Proc. American Control Conference, Seattle, WA, 1995, pp. 2379-2383.
[425] S.-P. Wu and S. Boyd, sdpsol: A Parser/Solver for Semidefinite Programming and Determinant Maximization Problems with Matrix Structure. User's Guide, Beta Version, Stanford University, Stanford, CA, June 1996.
[426] S.-P. Wu and S. Boyd, Design and implementation of a parser/solver for SDPs with matrix structure, in Proc. Conference on Control and Applications, Dearborn, MI, September 1996, pp. 240-245.
[427] S.-P. Wu, L. Vandenberghe, and S. Boyd, MAXDET: Software for Determinant Maximization Problems. User's Guide, Alpha Version, Stanford University, Stanford, CA, April 1996.
[428] L. Xie and C. E. de Souza, LMI approach to delay-dependent robust stability and stabilization of uncertain linear delay systems, in Proc. 35th IEEE Conference on Decision and Control, December 1996.
[429] L. Xie and C. E. de Souza, On robust filtering for linear systems with parameter uncertainty, in Proc. 34th IEEE Conference on Decision and Control, New Orleans, LA, 1995, pp. 2087-2092.
[430] L. Xie, C. E. de Souza, and Y. C. Soh, Robust Kalman filter for uncertain discrete-time systems, IEEE Trans. Automat. Control, 39:1310-1314, 1994.
[431] L. Xie and Y. C. Soh, Robust Kalman filtering for uncertain systems, Systems Control Lett., 22:123-129, 1994.
[432] V. A. Yakubovich, The solution of certain matrix inequalities in automatic control theory, Soviet Math. Dokl., 3:620-623, 1962 (in Russian, 1961).
[433] V. A. Yakubovich, Nonconvex optimization problem: The infinite-horizon linear-quadratic control problem with quadratic constraints, Systems Control Lett., 19:13-22, 1992.
[434] V. A. Yakubovich, The S-procedure in non-linear control theory, Vestnik Leningrad Univ. Math., 4:73-93, 1977 (in Russian, 1971).
[435] J. Yan and S. E. Salcudean, Teleoperation controller design using H∞ optimization with application to motion-scaling, IEEE Trans. Control Systems Tech., 4:244-258, 1996.
[436] K. Y. Yang, Efficient Design of Robust Controllers for H2 Performance, Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, February 1997.
[437] K. Yang, S. Hall, and E. Feron, A new design method for robust H2 controllers using Popov multipliers, in Proc. American Control Conference, Albuquerque, NM, June 1997, pp. 1235-1240.
[438] K. Y. Yang, C. Livadas, and S. R. Hall, Using linear matrix inequalities to design controllers for robust H2 performance, in Proc. AIAA Guidance, Navigation, and Control Conference, July 1996.
[439] T. Yang and L. O. Chua, Impulsive stabilization for control and synchronization of chaotic systems: Theory and application to secure communication, IEEE Trans. Circuits Systems, 44:976-988, 1997.
[440] D. C. Youla and H. Webb, Image restoration by the method of convex projections, IEEE Trans. Medical Imaging, 1:81-94, 1982.
[441] P. Young, Robustness with Parametric and Dynamic Uncertainty, Ph.D. thesis, California Institute of Technology, Pasadena, CA, 1993.
[442] P. M. Young and J. C. Doyle, Properties of the mixed μ problem and its bounds, IEEE Trans. Automat. Control, 41:155-159, 1996.
[443] P. M. Young, M. P. Newlin, and J. C. Doyle, Practical computation of the mixed μ problem, in Proc. American Control Conference, Chicago, IL, 1992.
[444] G. Zames, On the input-output stability of nonlinear time-varying feedback systems—Parts I and II, IEEE Trans. Automat. Control, 11:228-238, 465-467, 1966.
[445] G. Zames and P. L. Falb, Stability conditions for systems with monotone and slope-restricted nonlinearities, SIAM J. Control, 6:89-108, 1968.
[446] K. Zhou, J. Doyle, and K. Glover, Robust and Optimal Control, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[447] K. Zhou, K. Glover, B. Bodenheimer, and J. Doyle, Mixed H2 and H∞ performance objectives I: Robust performance analysis, IEEE Trans. Automat. Control, 39:1564-1574, 1994.
[448] G. Zhu, K. M. Grigoriadis, and R. E. Skelton, Covariance control design for the Hubble space telescope, J. Guidance Control Dynam., 18:230-236, 1996.
Index

α-stability, 253
Affine multiplier, 97
Affine quadratic stability, 97
Affine quadratic stability test, 105
Alternating projection methods, 257
Analytic centers, 61
Asymptotic properties, 57
Asymptotic quadratic convergence, 69
Barrier, 61
Barrier methods, 290
Bilinear matrix inequality, 34, 156, 269
  computational aspects, 289
  methodological aspects, 274
  structural aspects, 279
Bilinear sector transform, 273
Bounded real, 274
£-positivity, 286, 291
Bundle, 57
Bundle methods, 24, 61
Central path, 23, 45
Certainty channel, 188
Clock mesh design, 51, 52
Combinatorial optimization, 50
Combinatorial problems, 29
Combinatorial problems with uncertain data, 29
Complementarity, 70
Complementarity conditions, 23, 43
Concave minimization, 290
Concave programming, 279
Cone optimization problems, 270
Cone programming, 282
Cone-LCP, 271, 287, 288
Cone-LCP formulation of the bilinear matrix inequality, 286
Cone-LP, 271
Conic complementarity problems, 34
Conservatism, 173
Conservative, 172
Constraints, 5
Control channel, 188
Control gain iteration, 169, 170
Control gain with relaxation iteration, 169, 173
Controller order, 270
Convex analysis, 58
Convex cone, 270
Convex optimization, 3
Convexity, 57
Coupled Riccati equations, 156, 167
Cumulative spectrum, 141
Cutting-plane method, 60
Decentralized, 276
Decentralized controller design, 275
Decision variables, 5
Delay differential systems, 27
Determinant maximization problems, 22
D, G-K iteration, 275
Differential geometry, 70
Discrete-time parametric Kalman-Yakubovich-Popov lemma, 106
D-K iteration, 156, 167, 171, 275
Dominant delay, 51
Dominant harmonics, 115
D-scaling, 232
Dual map, 15
Dual problem, 83
Duality, 21
Dynamical systems over cones, 35
Eigenvalue optimization, 57
Elimination lemma, 165
Ellipsoid algorithm, 60
Embedding, 5
Equality constraints, 85
Estimation, 175
Experiment design, 88
Extreme form, 270, 287
Extreme form problem, 269, 271, 282, 285
Extreme ray, 270
Index
370
Finite gain, 157 Fixed order (q < n] HOQ problem, 278 Fixed order control synthesis, 275 Force control, 332 Full block ^-procedure, 187 Full-order robust filter, 177 Gain-scheduled control, 245 Gain-scheduled output-feedback control problem, 211 Gain-scheduling techniques, 209 Generalized eigenvalue problem, 22, 90 Generalized Kalman-Yakubovich-Popov lemma, 98 Global convergence, 69 Global maximum, 291 Gordan's theorem of alternative, 283 Graph problem, 59 H2, 145 H2 norm, 130 impulse response interpretation, 133 stochastic white noise interpretation, 135 H2-performance, 129, 155 H2/Popov controllers, 156,164,165,169, 170 Hoc, 129 Homogeneous cone, 289
Impulse differential systems, 26 Integral quadratic constraints, 13, 110, 231 Interior-point methods, 22, 290 Intermediate oracle, 59 Interval calculus, 30 Kalman filter, 176, 180 Kalman-Yakubovich-Popov lemma, 28, 98 Li, 129 L2-gain performance, 210 L2-induced operator norm, 244 Lagrange multipliers, 133, 159 Lagrange relaxation, 7, 15, 19 Large-scale, 57 Limitations of the LMI approach, 275 Linear complementarity problem, 271 Linear Lyapunov function, 21 Linear matrix inequality, 3,129, 229, 274 Linear parameter-varying system, 244
Linear program, 16 Linear program with implementation constraints, 16 Linear programming, 271 Linear time-invariant, 95 Linear-fractional model, 9 Linear-fractional transformation, 10, 231, 235 Lovasz ^-function, 50 Lovasz problem, 74 Lyapunov equation, 168 Lyapunov exponent, 90 Lyapunov functions, 18, 19, 103 Lyapunov inequality, 86 Lyapunov-Krasovskii functional, 28 H/km analysis and synthesis, 275 H/km synthesis, 270, 275, 276 Matlab, 168 MAX-CUT, 29, 50 Max-det problem, 79 Maximum eigenvalue function, 59 Maximum eigenvalue minimization, 59 Mehrotra's formula, 46 Middeck Active Control Experiment (MACE), 156, 170 Mixed parameter-scheduled/robust problems, 6 Mixed semidefimte-quadratic-linear programs, 41 M-K iteration, 275 Moreau-Yosida, 62 Multiaffine Lyapunov function, 97 Multiaffine Lyapunov matrix, 98 Multiobjective robust control, 326 Multiobjective Robust Control Toolbox, 309 Multiplier formulation, 156 Multipliers, 98, 109, 156, 160 Network theory, 322 Nonconservative, 172 Nondegeneracy, 70 Nondifferentiable optimization methods, 24 NP-hardness of the bilinear matrix inequality, 280 Numerical experiments, 61 Numerical illustrations, 57 Optimal control, 25
371
Index Optimality conditions, 23 Parallel implementation of bilinear matrix inequality, 289 Parameter-dependent Lyapunov functions, 210, 211 Parameter-scheduled synthesis, 6 Parametric Kalman-Yakubovich-Popov lemma, 98 Parametric Lyapunov functions, 96 Parametric multiplier approach, 95 Passive, 157 Passivity, 157, 325 Path following, 23 Path-following algorithm, 44 Peak-to-peak gain, 195 Performance block, 242, 245 Performance channels, 188 Pointed closed convex cone, 270 Polynomial interpolation, 31 Polytopic linear differential inclusions quadratic stability, 75 Polytopic model, 14 Poor oracle, 59, 76 Poor-oracle methods, 58 Popov analysis, 86 Popov multiplier, 111, 156 Popov multipliers, 160, 162 Positive, 157 Positive real, 274 Positivity, 157 Potential function, 77 Predictive robust control, 28 Predictor-corrector scheme, 44 Primal-dual interior-point method, 22, 44 Projection operator, 257 Proximal point, 62 PSD-completely positive matrices, 285 •pSD-copositive, 288 PSD-copositive matrices, 285 P-separator, 232, 241 Quadratic stability, 20 Quadratic-sdp problem, 68 Quality of relaxation, 18, 36 Rank relaxation, 16 Rank-constrained linear matrix inequalities, 269 Rank-constrained problems, 34
Rank-minimization problem, 291 Rank-restricted linear matrix inequalities, 282 Rayleigh variational formulation, 58 Reduced-order robust filter, 177 Riccati equality, 163 Riccati equations, 156 Rich oracle, 59, 64, 77 Robot telemanipulators, 322 Robust control, 242 Robust controller design, 199 Robust decision problems, 6 Robust feasibility problem, 6 Robust full-information problem, 166 Robust H 2 , 129 Robust H2 control, 155 Robust H2 niters, 175 Robust H2-performance, 131, 155, 193 Robust H2 synthesis, 164 Robust least squares, 31, 51, 53 Robust linear programming, 16 Robust measurement scheduling, 311 Robust optimality problem, 6 Robust output estimation problem, 166 Robust performance, 116 Robust performance analysis, 113 Robust portfolio optimization, 33 Robust quadratic performance, 191 Robust semidefinite program, 14 Robust stability, 158, 189 Robustness, 129 Robustness analysis, 6, 111 SDPpack, 47 Second-order analysis, 57, 70 Second-order bundle methods, 24 Second-order schemes, 70 Self-concordant barrier, 290 Semidefinite complementarity problem, 269, 282
Semidefinite program, 3 Semidefinite program duality, 25 Semidefinite-quadratic-linear program, 41 Separation principle, 166, 167 Software for solving semidefinite programs, 24
Solid, 270 5-procedure, 8, 129, 130, 159 Star product, 235 Stochastic Lyapunov functions, 20 Stochastic optimization, 7
372 Structured linear algebra, 31 Structured total least squares, 32 Subdifferential, 58 Subgradient methods, 24, 60 Support function, 58 Symmetrized Kronecker product, 45 Toolbox for IQC analysis, 125 Topological separation, 232 Truncation operator, 110 Truss topology design, 53 W-Hessian, 77 ZY-Lagrangian theory, 70 Uncertain systems, 5 Uncertainty, 272 Uncertainty model, 9 Uncertainty set assumption, 9 Unsymmetric eigenvalue problem, 17 Upper bound, 159, 160 Upper bounds on robust ^-performance, 156 Well-posedness, 9, 189, 230 Worst-case H/2 norm, 159 Worst-case H2-performance, 156, 158 Worst-case asymptotic error variance, 177 Youla parameterization, 166
Index