II.1. STATE-SPACE ANALYSIS Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010

State-space methods

Developed by Kalman and colleagues in the 1960s as an alternative to frequency-domain techniques (Bode, Nichols...)

Starting in the 1980s, numerical analysts developed powerful linear algebra routines for matrix equations: numerical stability, low computational complexity, large-scale problems

Matlab, launched by Cleve Moler (1977-1984), heavily relies on the LINPACK, EISPACK & LAPACK packages

[Figure: pseudospectrum of a Toeplitz matrix]

Linear systems stability

The continuous-time linear time-invariant (LTI) system

ẋ(t) = Ax(t), x(0) = x0

where x(t) ∈ Rⁿ, is asymptotically stable, meaning

lim_{t→∞} x(t) = 0 for all x0

if and only if there exists a quadratic Lyapunov function V(x) = x'Px such that

V(x(t)) > 0, V̇(x(t)) < 0

along system trajectories; equivalently, matrix A satisfies max_i real λi(A) < 0
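As a quick numerical check, the eigenvalue characterization above can be tested directly; this is a sketch with NumPy, and the matrix A below is a hypothetical example:

```python
import numpy as np

# hypothetical example matrix with eigenvalues -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# asymptotic stability <=> all eigenvalues of A have negative real part
spectral_abscissa = np.max(np.linalg.eigvals(A).real)
print(spectral_abscissa < 0)  # True: the system xdot = A x is asymptotically stable
```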

Lyapunov stability

Note that V(x) = x'Px = x'(P + P')x/2, so that the Lyapunov matrix P can be chosen symmetric without loss of generality

Since

V̇(x) = ẋ'Px + x'Pẋ = x'A'Px + x'PAx

positivity of V(x) and negativity of V̇(x) along system trajectories can be expressed as an LMI

A'P + PA ≺ 0, P ≻ 0

[Figure: matrices P satisfying Lyapunov's LMI for an example 2×2 matrix A]

Lyapunov equation

The Lyapunov LMI can be written equivalently as the Lyapunov equation

A'P + PA + Q = 0, where Q ≻ 0

The following statements are equivalent:
- the system ẋ = Ax is asymptotically stable
- for some matrix Q ≻ 0, the matrix P solving the Lyapunov equation satisfies P ≻ 0
- for all matrices Q ≻ 0, the matrix P solving the Lyapunov equation satisfies P ≻ 0

The Lyapunov LMI can be solved numerically without interior-point methods, since solving the above equation amounts to solving a linear system of n(n + 1)/2 equations in n(n + 1)/2 unknowns
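This reduction to a linear system is what standard Lyapunov solvers exploit; a minimal sketch with SciPy, where the example matrix A is assumed stable (note SciPy's sign convention differs from the equation above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical stable matrix
Q = np.eye(2)

# SciPy solves A_s X + X A_s^H = Q_s, so pass A_s = A' and Q_s = -Q
# to obtain the solution of A'P + PA + Q = 0
P = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(A.T @ P + P @ A + Q, 0))  # residual check: True
print(np.all(np.linalg.eigvalsh(P) > 0))    # P > 0 certifies stability: True
```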

Alternative to Lyapunov LMI

Recall the theorem of alternatives for the LMI F(x) = F0 + Σi xi Fi. Exactly one statement is true:
- there exists x s.t. F(x) ≻ 0
- there exists a nonzero Z ⪰ 0 s.t. trace F0 Z ≤ 0 and trace Fi Z = 0 for i > 0

The alternative to the Lyapunov LMI

F(P) = [ -A'P - PA  0 ; 0  P ] ≻ 0

is the existence of a nonzero matrix

Z = [ Z1  0 ; 0  Z2 ] ⪰ 0 such that Z1 A' + A Z1 - Z2 = 0

Alternative to Lyapunov LMI (proof)

Suppose that there exists such a matrix Z ⪰ 0 and extract the Cholesky factor Z1 = UU'

Since Z1 A' + A Z1 ⪰ 0 we must have AUU' = USU', where S = S1 + S2 with S1 = -S1' and S2 ⪰ 0

It follows from AU = US that U spans an invariant subspace of A associated with eigenvalues of S, which all satisfy real λi(S) ≥ 0

Conversely, suppose λi(A) = σ + jω with σ ≥ 0 for some i, with eigenvector v. Then the rank-one matrices Z1 = vv* and Z2 = 2σvv* solve the alternative LMI

Discrete-time Lyapunov LMI

Similarly, the discrete-time LTI system xk+1 = Axk is asymptotically stable iff there exists a quadratic Lyapunov function V(x) = x'Px such that

V(xk) > 0, V(xk+1) - V(xk) < 0

along system trajectories; equivalently, matrix A satisfies max_i |λi(A)| < 1

Here too this can be expressed as an LMI

A'PA - P ≺ 0, P ≻ 0
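A discrete-time analogue of the earlier numerical sketch, again with a hypothetical matrix whose eigenvalues lie inside the unit disk:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2], [0.0, -0.3]])  # hypothetical matrix, |eigenvalues| < 1
Q = np.eye(2)

# SciPy solves A_s X A_s^H - X + Q = 0, so pass A_s = A'
# to obtain the solution of A'PA - P + Q = 0
P = solve_discrete_lyapunov(A.T, Q)

print(np.all(np.abs(np.linalg.eigvals(A)) < 1))        # Schur stability: True
print(np.all(np.linalg.eigvalsh(P) > 0))               # P > 0: True
print(np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0)) # LMI holds: True
```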

More general stability regions

Let

D = {s ∈ C : [1  s]* [ d0  d1 ; d1*  d2 ] [1 ; s] < 0}

with scalars d0, d1, d2 be a region of the complex plane (half-plane or disk)

Matrix A is said D-stable when its spectrum σ(A) = {λi(A)} belongs to region D

This is equivalent to the generalized Lyapunov LMI

[I  A'] [ d0 P  d1 P ; d1* P  d2 P ] [I ; A] = d0 P + d1 PA + d1* A'P + d2 A'PA ≺ 0, P ≻ 0

LMI stability regions

We can consider D-stability in LMI regions

D = {s ∈ C : D(s) = D0 + D1 s + D1* s̄ ≺ 0}

such as:
- real(s) < -α : dominant behavior (decay rate)
- real(s) < -α, |s| < r : bandwidth
- α1 < real(s) < α2 : vertical strip
- |imag(s)| < α : horizontal strip (oscillations)
- |imag(s)| < -real(s) tan θ : damping cone

or intersections thereof

Example for the cone:

D0 = 0, D1 = [ sin θ  cos θ ; -cos θ  sin θ ]
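As a quick numeric illustration, membership of a point s in the damping cone can be checked by testing negative definiteness of D(s); this is a sketch in which the half-angle θ and the test points are hypothetical:

```python
import numpy as np

theta = np.pi / 4  # hypothetical cone half-angle
D1 = np.array([[np.sin(theta), np.cos(theta)],
               [-np.cos(theta), np.sin(theta)]])

def in_region(s):
    # D(s) = D0 + D1 s + D1* conj(s) with D0 = 0; the region is D(s) < 0 (neg. def.)
    Ds = D1 * s + D1.conj().T * np.conj(s)
    return bool(np.all(np.linalg.eigvalsh(Ds) < 0))

print(in_region(-1 + 0.5j))  # well damped: inside the 45-degree cone -> True
print(in_region(-1 + 2.0j))  # too lightly damped -> False
```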

Lyapunov LMI for LMI stability regions

Matrix A has all its eigenvalues in the region D = {s ∈ C : D0 + D1 s + D1* s̄ ≺ 0} if and only if the following LMI is feasible:

D0 ⊗ P + D1 ⊗ (AP) + D1* ⊗ (PA') ≺ 0, P ≻ 0

where ⊗ denotes the Kronecker product. Literally replace s with A!

This can be extended readily to quadratic matrix inequality stability regions

D = {s ∈ C : D0 + D1 s + D1* s̄ + D2 s s̄ ≺ 0}

parabolae, hyperbolae, ellipses etc., convex (D2 ⪰ 0) or not
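For the open left half-plane (D0 = 0 and D1 = 1, both 1×1), the Kronecker formula collapses to the dual Lyapunov inequality AP + PA' ≺ 0; a small sanity check of this reduction, with hypothetical data:

```python
import numpy as np

A = np.array([[-1.0, 2.0], [0.0, -3.0]])  # hypothetical stable matrix
P = np.eye(2)                             # any symmetric P works for the identity check
D0 = np.zeros((1, 1))
D1 = np.ones((1, 1))

# D0 (x) P + D1 (x) (AP) + D1* (x) (PA')
M = np.kron(D0, P) + np.kron(D1, A @ P) + np.kron(D1.conj().T, P @ A.T)
print(np.allclose(M, A @ P + P @ A.T))  # True: half-plane case is the Lyapunov LMI
```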

Uncertain systems and robustness

When modeling systems we face several sources of uncertainty, including:

non-parametric (unstructured) uncertainty
- unmodeled dynamics
- truncated high-frequency modes
- non-linearities
- effects of linearization, time-variation...

parametric (structured) uncertainty: physical parameters vary within given bounds
- interval uncertainty (l∞)
- ellipsoidal uncertainty (l2)
- diamond uncertainty (l1)

How can we overcome uncertainty?
- model predictive control
- adaptive control
- robust control

A control law is robust if it is valid over the whole range of admissible uncertainty (it can be designed off-line, and is usually cheap)

Uncertainty modeling

Consider the continuous-time LTI system

ẋ(t) = Ax(t), A ∈ 𝒜

where matrix A belongs to an uncertainty set 𝒜

For unstructured uncertainties we consider norm-bounded matrices

𝒜 = {A + BΔC : ||Δ||2 ≤ µ}

For structured uncertainties we consider polytopic matrices

𝒜 = conv {A1, ..., AN}

There are other, more sophisticated uncertainty models not covered here

Uncertainty modeling is an important and difficult step in control system design!

Robust stability

The continuous-time LTI system

ẋ(t) = Ax(t), A ∈ 𝒜

is robustly stable when it is asymptotically stable for all A ∈ 𝒜

If S denotes the set of stable matrices, then robust stability is ensured as soon as 𝒜 ⊂ S

Unfortunately, S is a non-convex cone!

[Figure: non-convex set of continuous-time stable matrices parametrized by entries x, y, z]

Symmetry

If dynamic systems were symmetric, i.e. A = A', continuous-time stability max_i real λi(A) < 0 would be equivalent to

A + A' ≺ 0

and discrete-time stability max_i |λi(A)| < 1 to

[ I  A ; A'  I ] ≻ 0

which are both LMIs!

We can show that stability of a symmetric linear system can always be proven with the Lyapunov matrix P = I

Fortunately, the world is not symmetric!
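A quick check with a hypothetical symmetric matrix whose eigenvalues lie in (-1, 0), so that it is stable in both the continuous-time and discrete-time senses:

```python
import numpy as np

A = np.array([[-0.5, 0.2], [0.2, -0.4]])  # hypothetical symmetric matrix
assert np.allclose(A, A.T)

# for symmetric A both stability tests reduce to eigenvalue bounds
cont_stable = bool(np.all(np.linalg.eigvalsh(A + A.T) < 0))
disc_stable = bool(np.all(np.abs(np.linalg.eigvalsh(A)) < 1))
print(cont_stable, disc_stable)  # True True
```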

Robust and quadratic stability

Because of the non-convexity of the cone of stable matrices, robust stability is sometimes difficult to check numerically, meaning that the computational cost is an exponential function of the number of system parameters

Remedy: the continuous-time LTI system ẋ(t) = Ax(t) is quadratically stable if its robust stability can be guaranteed with the same quadratic Lyapunov function for all A ∈ 𝒜

Obviously, quadratic stability is more pessimistic, or more conservative, than robust stability:

quadratic stability ⟹ robust stability

but the converse is not always true

Quadratic stability for polytopic uncertainty

The system with polytopic uncertainty

ẋ(t) = Ax(t), A ∈ conv {A1, ..., AN}

is quadratically stable iff there exists a matrix P solving the LMI

Ai'P + PAi ≺ 0 for all i, P ≻ 0

Proof by convexity:

Σi λi (Ai'P + PAi) = A(λ)'P + PA(λ) ≺ 0

for all λi ≥ 0 such that Σi λi = 1

This is a vertex result: stability of a whole family of matrices is ensured by stability of the vertices of the family. Usually vertex results ensure computational tractability
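Even without an SDP solver, the vertex result can be illustrated numerically: take a candidate P (here, as a heuristic, the Lyapunov solution for the midpoint matrix of a hypothetical two-vertex polytope) and verify the vertex LMIs by an eigenvalue check:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical polytope vertices (both stable)
A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-2.0, 0.0], [1.0, -1.0]])

# heuristic candidate: Lyapunov matrix of the midpoint matrix
P = solve_continuous_lyapunov((0.5 * (A1 + A2)).T, -np.eye(2))

def negdef(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# verify the vertex LMIs Ai'P + PAi < 0 and P > 0 numerically
ok = (negdef(A1.T @ P + P @ A1) and negdef(A2.T @ P + P @ A2)
      and np.all(np.linalg.eigvalsh(P) > 0))
print(ok)  # True: this P certifies quadratic stability of the whole polytope
```

When the heuristic candidate fails the vertex check, nothing is concluded; a genuine SDP solve over the vertex LMIs is then needed.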

Quadratic and robust stability: example

Consider the uncertain system matrix

A(δ) = [ 4 4 ; 5 0 ] + δ [ 2 2 ; 1 4 ]

with real parameter δ such that |δ| ≤ µ, a polytope with vertices A(-µ) and A(µ)

stability | max µ
quadratic | 0.7526
robust    | 1.6666

Quadratic stability for norm-bounded uncertainty

The system with norm-bounded uncertainty

ẋ(t) = (A + BΔC)x(t), ||Δ||2 ≤ µ

is quadratically stable iff there exists a matrix P solving the LMI

[ A'P + PA + C'C  PB ; B'P  -γ²I ] ≺ 0, P ≻ 0

with γ⁻¹ = µ

This is called the bounded-real lemma, proved next with the S-procedure

We can maximize the level of allowed uncertainty by minimizing the scalar γ
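The bounded-real lemma links this LMI to the H∞ norm of the transfer function C(sI - A)⁻¹B: the LMI is feasible exactly for γ above that norm, so the largest certified uncertainty level is µ = 1/γ. A rough frequency-gridding sketch with hypothetical data:

```python
import numpy as np

# hypothetical stable nominal system (A, B, C)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# grid-based estimate of the H-infinity norm of C (sI - A)^{-1} B
w = np.logspace(-3, 3, 2000)
gains = [np.linalg.norm(C @ np.linalg.solve(1j * wi * np.eye(2) - A, B), 2)
         for wi in w]
gamma = max(gains)      # smallest gamma for which the LMI is (approx.) feasible
mu_max = 1.0 / gamma    # largest norm bound mu certified this way
print(gamma, mu_max)
```

For this example the transfer function is 1/(s² + 3s + 2), whose peak gain 0.5 is attained at low frequency, so µ ≈ 2.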

Norm-bounded uncertainty as feedback

The uncertain system ẋ = (A + BΔC)x can be written as the feedback system

ẋ = Ax + Bw
z = Cx
w = Δz

so that for the Lyapunov function V(x) = x'Px we have

V̇(x) = 2x'Pẋ = 2x'P(Ax + Bw) = x'(A'P + PA)x + 2x'PBw = [x ; w]' [ A'P + PA  PB ; B'P  0 ] [x ; w]
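The quadratic-form rewriting above can be verified numerically on random vectors; all matrices below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
P = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric candidate Lyapunov matrix

# check that Vdot = 2 x'P(Ax + Bw) equals the quadratic form [x;w]' M [x;w]
M = np.block([[A.T @ P + P @ A, P @ B],
              [B.T @ P, np.zeros((1, 1))]])
x = rng.standard_normal((2, 1))
w = rng.standard_normal((1, 1))
xi = np.vstack([x, w])
lhs = float(2 * x.T @ P @ (A @ x + B @ w))
rhs = float(xi.T @ M @ xi)
print(np.isclose(lhs, rhs))  # True
```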

Norm-bounded uncertainty as feedback (2)

Since Δ'Δ ⪯ µ²I it follows that

w'w = z'Δ'Δz ≤ µ²z'z

or equivalently

[x ; w]' [ C'C  0 ; 0  -γ²I ] [x ; w] ≥ 0

Combining with the quadratic expression

V̇(x) = [x ; w]' [ A'P + PA  PB ; B'P  0 ] [x ; w]

and using the S-procedure we obtain

[ A'P + PA  PB ; B'P  0 ] + [ C'C  0 ; 0  -γ²I ] ≺ 0

or equivalently

[ A'P + PA + C'C  PB ; B'P  -γ²I ] ≺ 0, P ≻ 0

Norm-bounded uncertainty: generalization

Now consider the feedback system

ẋ = Ax + Bw
z = Cx + Dw
w = Δz

with additional feedthrough term Dw

We assume that matrix I - ΔD is non-singular (well-posedness of the feedback interconnection), so that we can write

w = Δz = Δ(Cx + Dw)  ⟹  (I - ΔD)w = ΔCx  ⟹  w = (I - ΔD)⁻¹ΔCx

and derive the linear fractional transformation (LFT) uncertainty description

ẋ = Ax + Bw = (A + B(I - ΔD)⁻¹ΔC)x

Norm-bounded LFT uncertainty

The system with norm-bounded LFT uncertainty

ẋ = (A + B(I - ΔD)⁻¹ΔC)x, ||Δ||2 ≤ µ

is quadratically stable iff there exists a matrix P solving the LMI

[ A'P + PA + C'C  PB + C'D ; B'P + D'C  D'D - γ²I ] ≺ 0, P ≻ 0

Notice the lower right block D'D - γ²I ≺ 0, which ensures non-singularity of I - ΔD, hence well-posedness

LFT modeling can be used more generally to cope with rational functions of uncertain parameters, but this is not covered in this course

Sector-bounded uncertainty

Consider the feedback system

ẋ = Ax + Bw
z = Cx + Dw
w = f(z)

where the vector function f(z) satisfies the sector condition

z'f(z) ≤ 0, f(0) = 0

[Figure: sector-bounded nonlinearity f(z)]

f(z) can be considered as an uncertainty, but also as a non-linearity

Quadratic stability for sector-bounded uncertainty

We want to establish quadratic stability with the quadratic Lyapunov function V(x) = x'Px, whose derivative

V̇(x) = 2x'P(Ax + Bf(z)) = [x ; f(z)]' [ A'P + PA  PB ; B'P  0 ] [x ; f(z)]

must be negative when

2z'f(z) = 2(Cx + Df(z))'f(z) = [x ; f(z)]' [ 0  C' ; C  D + D' ] [x ; f(z)]

is non-positive, so we invoke the S-procedure to derive the LMI

[ A'P + PA  PB - C' ; B'P - C  -D - D' ] ≺ 0, P ≻ 0

This is called the positive-real lemma

Parameter-dependent Lyapunov functions

Quadratic stability: fast variation of parameters
- computationally tractable
- conservative, or pessimistic (worst-case)

Robust stability: no variation of parameters
- computationally difficult (in general)
- exact (is it really relevant?)

Is there something in between?

For example, given an LTI system affected by box, or interval, uncertainty

ẋ(t) = A(λ(t))x(t) = (A0 + Σi=1..N λi(t)Ai)x(t), λ ∈ Λ = {λ : λ̲i ≤ λi ≤ λ̄i}

we may consider parameter-dependent Lyapunov matrices, such as

P(λ(t)) = P0 + Σi λi(t)Pi

Polytopic Lyapunov certificate

The quadratic Lyapunov function V(x) = x'P(λ)x must be positive with negative derivative along system trajectories, hence

P(λ) ≻ 0 for all λ ∈ Λ

and

A'(λ)P(λ) + P(λ)A(λ) + Ṗ(λ) ≺ 0 for all λ ∈ Λ

We have to solve parametrized LMIs

Parametrized LMIs feature non-linear terms in λ, so it is not enough to check the vertices of Λ, denoted by vert Λ:

λ1² - λ1 + λ2 ≥ 0 on vert Λ but not everywhere on Λ = [0, 1] × [0, 1]
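The scalar counterexample above (as reconstructed here) can be checked directly: the polynomial is nonnegative at all four vertices of the box yet negative inside it, so a vertex check alone is not a valid certificate:

```python
# f is nonnegative at the vertices of [0,1]^2 but not on the whole box
f = lambda l1, l2: l1**2 - l1 + l2

vertices = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(all(f(*v) >= 0 for v in vertices))  # True
print(f(0.5, 0.0))                        # -0.25: negative away from the vertices
```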

Parametrized LMIs

Central problem in robustness analysis: find x such that

F(x, λ) = Σα λ^α Fα(x) ⪰ 0 for all λ ∈ Λ

where Λ is a compact set, typically the unit simplex or the unit ball

This is a convex but infinite-dimensional problem, which is difficult in general

Matrix extensions of polynomial positivity conditions exist, for which various hierarchies of LMIs are available:
- Pólya's theorem
- Schmüdgen's representation
- Putinar's representation

See the EJC 2006 survey by Carsten Scherer