Lecture 15: Linear matrix inequalities and the S-procedure


EE363 Winter 2008-09

Lecture 15: Linear matrix inequalities and the S-procedure

- Linear matrix inequalities
- Semidefinite programming
- S-procedure for quadratic forms and quadratic functions

Linear matrix inequalities

suppose $F_0, \ldots, F_n$ are symmetric $m \times m$ matrices

an inequality of the form

$$F(x) = F_0 + x_1 F_1 + \cdots + x_n F_n \geq 0$$

is called a linear matrix inequality (LMI) in the variable $x \in \mathbf{R}^n$ (here $\geq$ is the matrix inequality, i.e., $F(x)$ is positive semidefinite)

here, $F : \mathbf{R}^n \to \mathbf{R}^{m \times m}$ is an affine function of the variable $x$
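As a concrete illustration (not part of the original notes), here is a minimal sketch of checking such an LMI for feasibility numerically, assuming the cvxpy modeling library; the function name and interface are hypothetical.

```python
# A minimal sketch, assuming cvxpy is available; names are illustrative.
import cvxpy as cp

def lmi_feasible(F):
    """F = [F0, F1, ..., Fn]: symmetric m x m numpy arrays.
    Returns x with F0 + x1*F1 + ... + xn*Fn >= 0 (PSD), or None."""
    n = len(F) - 1
    x = cp.Variable(n)
    Fx = F[0] + sum(x[i] * F[i + 1] for i in range(n))
    prob = cp.Problem(cp.Minimize(0), [Fx >> 0])  # '>> 0' encodes PSD
    prob.solve()
    return x.value if prob.status == cp.OPTIMAL else None
```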

LMIs:

- can represent a wide variety of inequalities
- arise in many problems in control, signal processing, communications, statistics, ...
- most important for us: LMIs can be solved very efficiently by newly developed methods (EE364)
- 'solved' means: we can find an $x$ that satisfies the LMI, or determine that no solution exists

Example

$$F(x) = \begin{bmatrix} x_1 + x_2 & x_2 + 1 \\ x_2 + 1 & x_3 \end{bmatrix} \geq 0$$

$$F_0 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad
F_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad
F_2 = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}, \quad
F_3 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

the LMI $F(x) \geq 0$ is equivalent to

$$x_1 + x_2 \geq 0, \qquad x_3 \geq 0, \qquad (x_1 + x_2)x_3 - (x_2 + 1)^2 = x_1 x_3 + x_2 x_3 - x_2^2 - 2x_2 - 1 \geq 0$$

... a set of nonlinear inequalities in $x$
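A quick numerical spot check of this equivalence (illustrative only; for a $2 \times 2$ symmetric matrix, positive semidefiniteness is exactly nonnegativity of the diagonal entries and the determinant):

```python
# Spot-check: PSD test of F(x) agrees with the three scalar inequalities.
import numpy as np

def F(x1, x2, x3):
    return np.array([[x1 + x2, x2 + 1.0], [x2 + 1.0, x3]])

def psd(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

def scalar_ineqs(x1, x2, x3, tol=1e-9):
    return (x1 + x2 >= -tol and x3 >= -tol
            and (x1 + x2) * x3 - (x2 + 1.0) ** 2 >= -tol)

rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2, x3 = rng.uniform(-2.0, 4.0, size=3)
    assert psd(F(x1, x2, x3)) == scalar_ineqs(x1, x2, x3)
```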

Certifying infeasibility of an LMI

if $A$, $B$ are symmetric PSD, then $\mathbf{Tr}(AB) \geq 0$:

$$\mathbf{Tr}(AB) = \mathbf{Tr}\left(A^{1/2} B^{1/2} B^{1/2} A^{1/2}\right) = \left\| A^{1/2} B^{1/2} \right\|_F^2$$

suppose $Z = Z^T$ satisfies

$$Z \geq 0, \qquad \mathbf{Tr}(F_0 Z) < 0, \qquad \mathbf{Tr}(F_i Z) = 0, \quad i = 1, \ldots, n$$

then if $F(x) = F_0 + x_1 F_1 + \cdots + x_n F_n \geq 0$,

$$0 \leq \mathbf{Tr}(ZF(x)) = \mathbf{Tr}(ZF_0) < 0,$$

a contradiction

$Z$ is a certificate that proves the LMI $F(x) \geq 0$ is infeasible
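A small sketch of numerically verifying a candidate certificate $Z$ (an illustrative helper, assuming NumPy; not from the original notes):

```python
# Verify that Z certifies infeasibility of F(x) = F0 + sum_i x_i F_i >= 0.
import numpy as np

def is_infeasibility_certificate(Z, F, tol=1e-8):
    """F = [F0, F1, ..., Fn]; checks Z >= 0, Tr(F0 Z) < 0, Tr(Fi Z) = 0."""
    if np.any(np.linalg.eigvalsh((Z + Z.T) / 2) < -tol):
        return False                                  # Z not PSD
    if np.trace(F[0] @ Z) >= 0:
        return False                                  # need Tr(F0 Z) < 0
    return all(abs(np.trace(Fi @ Z)) <= tol for Fi in F[1:])
```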

Example: Lyapunov inequality

suppose $A \in \mathbf{R}^{n \times n}$

the Lyapunov inequality

$$A^T P + PA + Q \leq 0$$

is an LMI in the variable $P$

meaning: $P$ satisfies the Lyapunov LMI if and only if the quadratic form $V(z) = z^T P z$ satisfies $\dot{V}(z) \leq -z^T Q z$ for the system $\dot{x} = Ax$

the dimension of the variable $P$ is $n(n+1)/2$ (since $P = P^T$)

here, $F(P) = -A^T P - PA - Q$ is affine in $P$

(we don't need special LMI methods to solve the Lyapunov inequality; we can solve it analytically, by solving the Lyapunov equation $A^T P + PA + Q = 0$)
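Since this special case can be solved analytically, here is a short sketch using SciPy's Lyapunov-equation solver (the example matrix is an assumption, not from the notes):

```python
# Solve the Lyapunov equation A^T P + P A + Q = 0 with SciPy.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a stable example matrix
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so pass a = A^T and q = -Q to get A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

assert np.all(np.linalg.eigvalsh(P) > 0)      # P > 0
assert np.allclose(A.T @ P + P @ A + Q, 0)    # equation holds
```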

Extensions

multiple LMIs: we can consider multiple LMIs as one, large LMI, by forming block diagonal matrices:

$$F^{(1)}(x) \geq 0, \ldots, F^{(k)}(x) \geq 0 \quad \Longleftrightarrow \quad \mathbf{diag}\left(F^{(1)}(x), \ldots, F^{(k)}(x)\right) \geq 0$$

example: we can express a set of linear inequalities as an LMI with diagonal matrices:

$$a_1^T x \leq b_1, \ldots, a_k^T x \leq b_k \quad \Longleftrightarrow \quad \mathbf{diag}\left(b_1 - a_1^T x, \ldots, b_k - a_k^T x\right) \geq 0$$

linear equality constraints: $a^T x = b$ is the same as the pair of linear inequalities $a^T x \leq b$, $a^T x \geq b$

Example: bounded-real LMI

suppose $A \in \mathbf{R}^{n \times n}$, $B \in \mathbf{R}^{n \times m}$, $C \in \mathbf{R}^{p \times n}$, and $\gamma > 0$

the bounded-real LMI is

$$\begin{bmatrix} A^T P + PA + C^T C & PB \\ B^T P & -\gamma^2 I \end{bmatrix} \leq 0, \qquad P \geq 0$$

with variable $P$

meaning: if $P$ satisfies this LMI, then the quadratic Lyapunov function $V(z) = z^T P z$ proves the RMS gain of the system $\dot{x} = Ax + Bu$, $y = Cx$ is no more than $\gamma$

(in fact, we can solve this LMI by solving an ARE-like equation, so we don't need special LMI methods ...)
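A minimal sketch of checking a candidate RMS-gain bound $\gamma$ via this LMI, assuming cvxpy; the function name is illustrative:

```python
# Feasibility of the bounded-real LMI for a given gamma, via cvxpy.
import cvxpy as cp
import numpy as np

def rms_gain_at_most(A, B, C, gamma):
    n, m = A.shape[0], B.shape[1]
    P = cp.Variable((n, n), symmetric=True)
    M = cp.bmat([[A.T @ P + P @ A + C.T @ C, P @ B],
                 [B.T @ P, -gamma**2 * np.eye(m)]])
    prob = cp.Problem(cp.Minimize(0), [P >> 0, M << 0])
    prob.solve()
    return prob.status == cp.OPTIMAL
```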

Strict inequalities in LMIs

sometimes we encounter strict matrix inequalities

$$F(x) \geq 0, \qquad F_{\text{strict}}(x) > 0$$

where $F$, $F_{\text{strict}}$ are affine functions of $x$

practical approach: replace $F_{\text{strict}}(x) > 0$ with $F_{\text{strict}}(x) \geq \epsilon I$, where $\epsilon$ is small and positive

if $F$ and $F_{\text{strict}}$ are homogeneous (i.e., linear functions of $x$), we can replace them with $F(x) \geq 0$, $F_{\text{strict}}(x) \geq I$

example: we can replace $A^T P + PA \leq 0$, $P > 0$ (with variable $P$) with $A^T P + PA \leq 0$, $P \geq I$

Quadratic Lyapunov function for time-varying LDS

we consider the time-varying linear system $\dot{x}(t) = A(t)x(t)$, with $A(t) \in \{A_1, \ldots, A_K\}$

we want to establish some property, such as: all trajectories are bounded

this is hard to do in general (cf. time-invariant LDS)

let's use the quadratic Lyapunov function $V(z) = z^T P z$; we need $P > 0$, and $\dot{V}(z) \leq 0$ for all $z$ and all possible values of $A(t)$, which gives

$$P > 0, \qquad A_i^T P + PA_i \leq 0, \quad i = 1, \ldots, K$$

by homogeneity, we can write these as the LMIs

$$P \geq I, \qquad A_i^T P + PA_i \leq 0, \quad i = 1, \ldots, K$$

in this case $V$ is called a simultaneous Lyapunov function for the systems $\dot{x} = A_i x$, $i = 1, \ldots, K$

there is no analytical method (e.g., using AREs) to solve such an LMI, but it is easily done numerically (see the sketch below)

if such a $P$ exists, it proves boundedness of trajectories of $\dot{x}(t) = A(t)x(t)$, with

$$A(t) = \theta_1(t)A_1 + \cdots + \theta_K(t)A_K, \qquad \theta_i(t) \geq 0, \quad \theta_1(t) + \cdots + \theta_K(t) = 1$$

in fact, it works for the nonlinear system $\dot{x} = f(x)$, provided for each $z \in \mathbf{R}^n$,

$$Df(z) = \theta_1(z)A_1 + \cdots + \theta_K(z)A_K$$

for some $\theta_i(z) \geq 0$, $\theta_1(z) + \cdots + \theta_K(z) = 1$
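A minimal sketch of the numerical search, assuming cvxpy; the helper name is illustrative:

```python
# Search for a simultaneous quadratic Lyapunov function for xdot = A_i x.
import cvxpy as cp
import numpy as np

def simultaneous_lyapunov(As):
    """As: list of n x n numpy arrays. Returns P with P >= I and
    A_i^T P + P A_i <= 0 for all i, or None if the LMIs are infeasible."""
    n = As[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> np.eye(n)]                      # P >= I (homogeneity)
    constraints += [A.T @ P + P @ A << 0 for A in As]   # V nonincreasing for each A_i
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return P.value if prob.status == cp.OPTIMAL else None
```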

Semidefinite programming

a semidefinite program (SDP) is an optimization problem with linear objective and LMI and linear equality constraints:

$$\begin{array}{ll}
\text{minimize} & c^T x \\
\text{subject to} & F_0 + x_1 F_1 + \cdots + x_n F_n \geq 0 \\
& Ax = b
\end{array}$$

most important property for us: we can solve SDPs globally and efficiently

meaning: we either find a globally optimal solution, or determine that there is no $x$ that satisfies the LMI & equality constraints
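In cvxpy (an assumed choice of solver front end, not part of the original notes), an SDP in exactly this form might be posed as follows; names are illustrative:

```python
# A generic SDP: minimize c^T x  s.t.  F(x) >= 0, Ax = b.
import cvxpy as cp

def solve_sdp(c, F, A, b):
    """c: length-n numpy vector, F = [F0, ..., Fn] symmetric matrices,
    A, b: equality constraint data. Returns (optimal x, optimal value)."""
    n = len(c)
    x = cp.Variable(n)
    Fx = F[0] + sum(x[i] * F[i + 1] for i in range(n))
    prob = cp.Problem(cp.Minimize(c @ x), [Fx >> 0, A @ x == b])
    prob.solve()
    return x.value, prob.value
```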

example: let $A \in \mathbf{R}^{n \times n}$ be stable and $Q = Q^T \geq 0$

then the LMI $A^T P + PA + Q \leq 0$, $P \geq 0$ in $P$ means the quadratic Lyapunov function $V(z) = z^T P z$ proves the bound

$$\int_0^\infty x(t)^T Q x(t)\, dt \leq x(0)^T P x(0)$$

now suppose that $x(0)$ is fixed, and we seek the best possible such bound; this can be found by solving the SDP

$$\begin{array}{ll}
\text{minimize} & x(0)^T P x(0) \\
\text{subject to} & A^T P + PA + Q \leq 0, \quad P \geq 0
\end{array}$$

with variable $P$ (note that the objective is linear in $P$)

(in fact we can solve this SDP analytically, by solving the Lyapunov equation)
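A sketch of this SDP together with the analytical cross-check, assuming cvxpy and SciPy; the matrices $A$, $Q$ and the initial condition are made-up examples:

```python
# Best bound x0^T P x0, cross-checked against the Lyapunov equation.
import cvxpy as cp
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable example matrix
Q = np.eye(2)
x0 = np.array([1.0, 0.0])

P = cp.Variable((2, 2), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.quad_form(x0, P)),
                  [A.T @ P + P @ A + Q << 0, P >> 0])
prob.solve()

P_lyap = solve_continuous_lyapunov(A.T, -Q)   # A^T P + P A + Q = 0
assert abs(prob.value - x0 @ P_lyap @ x0) < 1e-4
```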

S-procedure for two quadratic forms

let $F_0 = F_0^T$, $F_1 = F_1^T \in \mathbf{R}^{n \times n}$

when is it true that, for all $z$, $z^T F_1 z \geq 0 \implies z^T F_0 z \geq 0$?

in other words, when does nonnegativity of one quadratic form imply nonnegativity of another?

simple condition: there exists $\tau \in \mathbf{R}$, $\tau \geq 0$, with $F_0 \geq \tau F_1$; then for sure we have

$$z^T F_1 z \geq 0 \implies z^T F_0 z \geq 0$$

(since if $z^T F_1 z \geq 0$, we then have $z^T F_0 z \geq \tau z^T F_1 z \geq 0$)

fact: the converse holds, provided there exists a point $u$ with $u^T F_1 u > 0$

this result is called the lossless S-procedure, and is not easy to prove

(the condition that there exists a point $u$ with $u^T F_1 u > 0$ is called a constraint qualification)
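Searching for such a $\tau$ is itself a one-variable LMI feasibility problem; a minimal cvxpy sketch (assumed library, illustrative names):

```python
# Find tau >= 0 with F0 >= tau * F1, certifying the implication
# z^T F1 z >= 0  =>  z^T F0 z >= 0.
import cvxpy as cp

def s_procedure_multiplier(F0, F1):
    tau = cp.Variable(nonneg=True)
    prob = cp.Problem(cp.Minimize(0), [F0 - tau * F1 >> 0])
    prob.solve()
    return tau.value if prob.status == cp.OPTIMAL else None
```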

S-procedure with strict inequalities

when is it true that, for all $z$, $z^T F_1 z \geq 0$, $z \neq 0 \implies z^T F_0 z > 0$?

in other words, when does nonnegativity of one quadratic form imply positivity of another for nonzero $z$?

simple condition: suppose there is a $\tau \in \mathbf{R}$, $\tau \geq 0$, with $F_0 > \tau F_1$

fact: the converse holds, provided there exists a point $u$ with $u^T F_1 u > 0$

again, this is not easy to prove

Example

let's use the quadratic Lyapunov function $V(z) = z^T P z$ to prove stability of

$$\dot{x} = Ax + g(x), \qquad \|g(x)\| \leq \gamma \|x\|$$

we need $P > 0$ and $\dot{V}(x) \leq -\alpha V(x)$ for all $x$ ($\alpha > 0$ is given)

$$\dot{V}(x) + \alpha V(x) = 2x^T P(Ax + g(x)) + \alpha x^T P x
= x^T (A^T P + PA + \alpha P)x + 2x^T P z
= \begin{bmatrix} x \\ z \end{bmatrix}^T
\begin{bmatrix} A^T P + PA + \alpha P & P \\ P & 0 \end{bmatrix}
\begin{bmatrix} x \\ z \end{bmatrix}$$

where $z = g(x)$

$z$ satisfies $z^T z \leq \gamma^2 x^T x$, so we need $P > 0$ and

$$\begin{bmatrix} x \\ z \end{bmatrix}^T
\begin{bmatrix} A^T P + PA + \alpha P & P \\ P & 0 \end{bmatrix}
\begin{bmatrix} x \\ z \end{bmatrix} \leq 0
\quad \text{whenever} \quad
\begin{bmatrix} x \\ z \end{bmatrix}^T
\begin{bmatrix} \gamma^2 I & 0 \\ 0 & -I \end{bmatrix}
\begin{bmatrix} x \\ z \end{bmatrix} \geq 0$$

by the S-procedure, this happens if and only if

$$\begin{bmatrix} A^T P + PA + \alpha P & P \\ P & 0 \end{bmatrix}
\leq \tau \begin{bmatrix} \gamma^2 I & 0 \\ 0 & -I \end{bmatrix}$$

for some $\tau \geq 0$ (the constraint qualification holds here)

thus, necessary and sufficient conditions for the existence of a quadratic Lyapunov function can be expressed as the LMI

$$P > 0, \qquad \begin{bmatrix} A^T P + PA + \alpha P + \tau \gamma^2 I & P \\ P & -\tau I \end{bmatrix} \leq 0$$

in the variables $P$, $\tau$ (note that the condition $\tau \geq 0$ is automatic from the 2,2 block)

by homogeneity, we can write this as

$$P \geq I, \qquad \begin{bmatrix} A^T P + PA + \alpha P + \tau \gamma^2 I & P \\ P & -\tau I \end{bmatrix} \leq 0$$

solving this LMI to find $P$ is a powerful method; it beats, for example, solving the Lyapunov equation $A^T P + PA + I = 0$ and hoping the resulting $P$ works
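A minimal cvxpy sketch of this search over $P$ and $\tau$ (assumed library; illustrative names):

```python
# Search for P, tau proving xdot = A x + g(x), ||g(x)|| <= gamma ||x||,
# admits V(z) = z^T P z with Vdot <= -alpha V.
import cvxpy as cp
import numpy as np

def quadratic_lyapunov(A, gamma, alpha):
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    tau = cp.Variable()   # tau >= 0 is implied by the 2,2 block
    M = cp.bmat([[A.T @ P + P @ A + alpha * P + tau * gamma**2 * np.eye(n), P],
                 [P, -tau * np.eye(n)]])
    prob = cp.Problem(cp.Minimize(0), [P >> np.eye(n), M << 0])
    prob.solve()
    return (P.value, tau.value) if prob.status == cp.OPTIMAL else None
```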

S-procedure for multiple quadratic forms

let $F_0 = F_0^T, \ldots, F_k = F_k^T \in \mathbf{R}^{n \times n}$

when is it true that, for all $z$,

$$z^T F_1 z \geq 0, \ldots, z^T F_k z \geq 0 \implies z^T F_0 z \geq 0 \qquad (1)$$

in other words, when does nonnegativity of a set of quadratic forms imply nonnegativity of another?

simple sufficient condition: suppose there are $\tau_1, \ldots, \tau_k \geq 0$, with

$$F_0 \geq \tau_1 F_1 + \cdots + \tau_k F_k$$

then for sure the property (1) above holds

(in this case this is only a sufficient condition; it is not necessary)

using the matrix inequality condition

$$F_0 \geq \tau_1 F_1 + \cdots + \tau_k F_k, \qquad \tau_1, \ldots, \tau_k \geq 0$$

as a sufficient condition for

for all $z$, $\quad z^T F_1 z \geq 0, \ldots, z^T F_k z \geq 0 \implies z^T F_0 z \geq 0$

is called the (lossy) S-procedure

the matrix inequality condition is an LMI in $\tau_1, \ldots, \tau_k$, and is therefore easily solved

the constants $\tau_i$ are called multipliers
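A minimal cvxpy sketch of solving for the multipliers (assumed library; illustrative names):

```python
# The lossy S-procedure as an LMI in the multipliers tau_1, ..., tau_k.
import cvxpy as cp

def s_procedure_multipliers(F0, Fs):
    """Fs = [F1, ..., Fk]. Returns taus with F0 >= sum_i tau_i * F_i,
    tau_i >= 0 (a certificate for the implication), or None."""
    tau = cp.Variable(len(Fs), nonneg=True)
    prob = cp.Problem(cp.Minimize(0),
                      [F0 - sum(tau[i] * Fi for i, Fi in enumerate(Fs)) >> 0])
    prob.solve()
    return tau.value if prob.status == cp.OPTIMAL else None
```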