# Lecture 13 Linear quadratic Lyapunov theory


EE363 Winter 2008-09, Lecture 13: Linear quadratic Lyapunov theory

Outline:

- the Lyapunov equation
- Lyapunov stability conditions
- the Lyapunov operator and integral
- evaluating quadratic integrals
- analysis of the ARE
- discrete-time results
- linearization theorem

## The Lyapunov equation

The Lyapunov equation is

A^T P + P A + Q = 0

where A, P, Q ∈ R^{n×n}, and P, Q are symmetric.

Interpretation: for the linear system ẋ = Ax, if V(z) = z^T P z, then

V̇(z) = (Az)^T P z + z^T P (Az) = -z^T Q z

i.e., if z^T P z is the (generalized) energy, then z^T Q z is the associated (generalized) dissipation.

Linear-quadratic Lyapunov theory: linear dynamics, quadratic Lyapunov function.
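The dissipation interpretation above can be checked numerically. A minimal sketch (the matrices are illustrative, not from the lecture): pick a symmetric P, form the Q it induces, and compare a finite-difference derivative of V along the true trajectory x(t) = e^{tA}z with -x(t)^T Q x(t).

```python
import numpy as np
from scipy.linalg import expm

# For x' = Ax and V(z) = z^T P z with A^T P + P A = -Q, the derivative of V
# along the trajectory x(t) = e^{tA} z should equal -x(t)^T Q x(t).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1, -2
P = np.array([[1.6, 0.2], [0.2, 0.4]])     # any symmetric P
Q = -(A.T @ P + P @ A)                     # its associated dissipation matrix

z = np.array([1.0, -0.5])
t, h = 0.3, 1e-6
x = expm(t * A) @ z                        # x(t)
xp = expm((t + h) * A) @ z                 # x(t + h)
Vdot_numeric = (xp @ P @ xp - x @ P @ x) / h   # finite-difference dV/dt
assert abs(Vdot_numeric - (-(x @ Q @ x))) < 1e-4
```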

We consider the system ẋ = Ax, with λ_1, …, λ_n the eigenvalues of A.

If P > 0, then the sublevel sets of V(z) = z^T P z are ellipsoids (and bounded), and z^T P z = 0 ⟺ z = 0.

Boundedness condition: if P > 0, Q ≥ 0, then all trajectories of ẋ = Ax are bounded (this means Rλ_i ≤ 0, and if Rλ_i = 0, then λ_i corresponds to a Jordan block of size one), and the ellipsoids {z | z^T P z ≤ a} are invariant.

## Stability condition

If P > 0, Q > 0, then the system ẋ = Ax is (globally asymptotically) stable, i.e., Rλ_i < 0.

To see this, note that

V̇(z) = -z^T Q z ≤ -λ_min(Q) z^T z ≤ -(λ_min(Q)/λ_max(P)) z^T P z = -α V(z)

where α = λ_min(Q)/λ_max(P) > 0.

## An extension based on observability

(LaSalle's theorem for linear dynamics, quadratic function)

If P > 0, Q ≥ 0, and (Q, A) observable, then the system ẋ = Ax is (globally asymptotically) stable.

To see this, we first note that all eigenvalues satisfy Rλ_i ≤ 0. Now suppose that v ≠ 0, Av = λv, Rλ = 0. Then

‖Q^{1/2} v‖² = v* Q v = -v* (A^T P + P A) v = -(λ + λ̄) v* P v = 0

(since Rλ = 0), which implies Q^{1/2} v = 0, so Qv = 0, contradicting observability (by PBH).

Interpretation: the observability condition means no trajectory can stay in the zero-dissipation set {z | z^T Q z = 0}.

## An instability condition

If Q ≥ 0 and P ≱ 0, then A is not stable.

To see this, note that V̇ ≤ 0, so V(x(t)) ≤ V(x(0)). Since P ≱ 0, there is a w with V(w) < 0; the trajectory starting at w does not converge to zero.

In this case, the sublevel sets {z | V(z) ≤ 0} (which are unbounded) are invariant.

## The Lyapunov operator

The Lyapunov operator is given by

L(P) = A^T P + P A

(a special case of the Sylvester operator). L is nonsingular if and only if A and -A share no common eigenvalues, i.e., A does not have a pair of eigenvalues which are negatives of each other:

- if A is stable, the Lyapunov operator is nonsingular
- if A has an imaginary (nonzero, iω-axis) eigenvalue, then the Lyapunov operator is singular

Thus if A is stable, for any Q there is exactly one solution P of the Lyapunov equation A^T P + P A + Q = 0.

## Solving the Lyapunov equation

In A^T P + P A + Q = 0, we are given A and Q and want to find P.

If the Lyapunov equation is solved as a set of n(n+1)/2 linear equations in n(n+1)/2 variables, the cost is O(n^6) operations.

Fast methods, which exploit the special structure of the linear equations, can solve the Lyapunov equation with cost O(n^3); these are based on first reducing A to Schur or upper Hessenberg form.
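The "slow" approach above can be written in a few lines using the vec/Kronecker identity vec(A^T P + P A) = (I ⊗ A^T + A^T ⊗ I) vec(P). A minimal numpy sketch (the example matrices are mine, not the lecture's); production code would use an O(n^3) Bartels-Stewart routine instead:

```python
import numpy as np

# Naive dense solver for A^T P + P A + Q = 0, as n^2 linear equations.
def solve_lyapunov(A, Q):
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)       # matrix of the Lyapunov operator
    vecP = np.linalg.solve(M, -Q.reshape(-1, order="F"))  # column-major vec
    return vecP.reshape((n, n), order="F")

A = np.array([[0.0, 1.0], [-2.0, -3.0]])        # stable example matrix
Q = np.eye(2)
P = solve_lyapunov(A, Q)
assert np.allclose(A.T @ P + P @ A + Q, 0)      # residual check
assert np.all(np.linalg.eigvalsh(P) > 0)        # P > 0, as theory predicts
```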

## The Lyapunov integral

If A is stable there is an explicit formula for the solution of the Lyapunov equation:

P = ∫_0^∞ e^{tA^T} Q e^{tA} dt

To see this, we note that

A^T P + P A = ∫_0^∞ ( A^T e^{tA^T} Q e^{tA} + e^{tA^T} Q e^{tA} A ) dt
            = ∫_0^∞ (d/dt) ( e^{tA^T} Q e^{tA} ) dt
            = e^{tA^T} Q e^{tA} |_{t=0}^{∞}
            = -Q
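This integral formula is easy to verify numerically on a toy stable A: accumulate a Riemann sum of e^{tA^T} Q e^{tA} and compare with a Lyapunov solver. A sketch under illustrative data (note scipy's solve_continuous_lyapunov(a, q) solves aX + Xa^T = q, so we pass a = A^T, q = -Q):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # stable: eigenvalues -1, -3
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
P_solver = solve_continuous_lyapunov(A.T, -Q)   # A^T P + P A = -Q

dt, T = 1e-3, 20.0
E = expm(dt * A)                      # exact one-step propagator e^{dt A}
M = np.eye(2)                         # holds e^{tA}
P_integral = np.zeros((2, 2))
for _ in range(int(T / dt)):
    P_integral += M.T @ Q @ M * dt    # adds e^{tA^T} Q e^{tA} dt
    M = M @ E
assert np.allclose(P_integral, P_solver, atol=1e-2)
```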

## Interpretation as cost-to-go

If A is stable, and P is the (unique) solution of A^T P + P A + Q = 0, then

V(z) = z^T P z = z^T ( ∫_0^∞ e^{tA^T} Q e^{tA} dt ) z = ∫_0^∞ x(t)^T Q x(t) dt

where ẋ = Ax, x(0) = z.

Thus V(z) is the cost-to-go from point z (with no input) under the integral quadratic cost function with matrix Q.

If A is stable and Q > 0, then for each t, e^{tA^T} Q e^{tA} > 0, so

P = ∫_0^∞ e^{tA^T} Q e^{tA} dt > 0

Meaning: if A is stable, we can choose any positive definite quadratic form z^T Q z as the dissipation, i.e., V̇ = -z^T Q z, then solve a set of linear equations to find the (unique) quadratic form V(z) = z^T P z. V will be positive definite, so it is a Lyapunov function that proves A is stable.

In particular: a linear system is stable if and only if there is a quadratic Lyapunov function that proves it.

Generalization: if A is stable, Q ≥ 0, and (Q, A) observable, then P > 0.

To see this, the Lyapunov integral shows P ≥ 0. If Pz = 0, then

0 = z^T P z = z^T ( ∫_0^∞ e^{tA^T} Q e^{tA} dt ) z = ∫_0^∞ ‖Q^{1/2} e^{tA} z‖² dt

so we conclude Q^{1/2} e^{tA} z = 0 for all t. This implies that Qz = 0, QAz = 0, …, QA^{n-1}z = 0, contradicting (Q, A) observable.

## Monotonicity of Lyapunov operator inverse

Suppose A^T P_i + P_i A + Q_i = 0, i = 1, 2. If Q_1 ≥ Q_2, then for all t,

e^{tA^T} Q_1 e^{tA} ≥ e^{tA^T} Q_2 e^{tA}

If A is stable, we have

P_1 = ∫_0^∞ e^{tA^T} Q_1 e^{tA} dt ≥ ∫_0^∞ e^{tA^T} Q_2 e^{tA} dt = P_2

In other words: if A is stable then Q_1 ≥ Q_2 implies P_1 ≥ P_2, i.e., the inverse Lyapunov operator (mapping the dissipation Q to the solution P) is monotonic: it preserves the matrix inequality when A is stable. (Question: is L monotonic?)

## Evaluating quadratic integrals

Suppose ẋ = Ax is stable, and define

J = ∫_0^∞ x(t)^T Q x(t) dt

To find J, we solve the Lyapunov equation A^T P + P A + Q = 0 for P; then J = x(0)^T P x(0).

In other words: we can evaluate the quadratic integral exactly, by solving a set of linear equations, without even computing a matrix exponential.
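A quick sanity check of this recipe on a made-up example: solve the Lyapunov equation for J, then confirm against brute-force numerical integration of the trajectory (which the method lets us avoid):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-4.0, -2.0]])   # stable example system
Q = np.eye(2)
x0 = np.array([1.0, 0.0])

P = solve_continuous_lyapunov(A.T, -Q)     # solves A^T P + P A = -Q
J_lyap = x0 @ P @ x0                       # exact value of the integral

dt, T = 1e-3, 30.0                         # brute-force check by simulation
E = expm(dt * A)                           # exact one-step propagator
x, J_sim = x0.copy(), 0.0
for _ in range(int(T / dt)):
    J_sim += (x @ Q @ x) * dt
    x = E @ x
assert abs(J_lyap - J_sim) < 1e-2
```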

## Controllability and observability Grammians

For A stable, the controllability Grammian of (A, B) is defined as

W_c = ∫_0^∞ e^{tA} B B^T e^{tA^T} dt

and the observability Grammian of (C, A) is

W_o = ∫_0^∞ e^{tA^T} C^T C e^{tA} dt

These Grammians can be computed by solving the Lyapunov equations

A W_c + W_c A^T + B B^T = 0,    A^T W_o + W_o A + C^T C = 0

We always have W_c ≥ 0, W_o ≥ 0; W_c > 0 if and only if (A, B) is controllable, and W_o > 0 if and only if (C, A) is observable.
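As a small illustration (hypothetical A, B of my choosing), the controllability Grammian comes from one Lyapunov solve, and its positive definiteness matches the rank test for controllability:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable
B = np.array([[0.0], [1.0]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)   # solves A X + X A^T = -B B^T
ctrb = np.hstack([B, A @ B])                  # controllability matrix [B, AB]
assert np.linalg.matrix_rank(ctrb) == 2       # (A, B) controllable
assert np.all(np.linalg.eigvalsh(Wc) > 0)     # hence W_c > 0
```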

## Evaluating a state feedback gain

Consider

ẋ = Ax + Bu,    y = Cx,    u = Kx,    x(0) = x_0

with the closed-loop system ẋ = (A + BK)x stable. To evaluate the quadratic integral performance measures

J_u = ∫_0^∞ u(t)^T u(t) dt,    J_y = ∫_0^∞ y(t)^T y(t) dt

we solve the Lyapunov equations

(A + BK)^T P_u + P_u (A + BK) + K^T K = 0
(A + BK)^T P_y + P_y (A + BK) + C^T C = 0

Then we have J_u = x_0^T P_u x_0, J_y = x_0^T P_y x_0.
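A short sketch of this evaluation for an illustrative plant and gain (all values here are my own toy choices, not from the lecture):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # unstable open loop
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-3.0, -3.0]])             # a stabilizing gain
x0 = np.array([1.0, 0.0])

Acl = A + B @ K
assert np.all(np.linalg.eigvals(Acl).real < 0)   # closed loop is stable

# One Lyapunov solve per performance measure:
Pu = solve_continuous_lyapunov(Acl.T, -(K.T @ K))
Py = solve_continuous_lyapunov(Acl.T, -(C.T @ C))
J_u = x0 @ Pu @ x0     # integral of u^T u
J_y = x0 @ Py @ x0     # integral of y^T y
assert J_u > 0 and J_y > 0
```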

## Lyapunov analysis of ARE

Write the ARE (with Q ≥ 0, R > 0)

A^T P + P A + Q - P B R^{-1} B^T P = 0

as

(A + BK)^T P + P(A + BK) + (Q + K^T R K) = 0

with K = -R^{-1} B^T P.

We conclude: if A + BK is stable, then P ≥ 0 (since Q + K^T R K ≥ 0), i.e., any stabilizing solution of the ARE is PSD.

If in addition (Q, A) is observable, then we conclude P > 0. To see this, we need to show that (Q + K^T R K, A + BK) is observable; if not, there is v ≠ 0 s.t.

(A + BK)v = λv,    (Q + K^T R K)v = 0

which implies

v* (Q + K^T R K) v = v* Q v + v* K^T R K v = ‖Q^{1/2} v‖² + ‖R^{1/2} K v‖² = 0

so Qv = 0, Kv = 0. But then

(A + BK)v = Av = λv,    Qv = 0

which contradicts (Q, A) observable.

The same argument shows that if P > 0 and (Q, A) is observable, then A + BK is stable.
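The Lyapunov rewriting of the ARE can be confirmed numerically. A sketch on a hypothetical double-integrator example: solve the ARE, form K = -R^{-1}B^T P, and check that P satisfies the rewritten equation, P > 0, and A + BK is stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # (Q, A) observable since Q > 0
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)     # stabilizing ARE solution
K = -np.linalg.solve(R, B.T @ P)         # K = -R^{-1} B^T P
Acl = A + B @ K

# The ARE rewritten as a Lyapunov equation in the closed loop:
residual = Acl.T @ P + P @ Acl + Q + K.T @ R @ K
assert np.allclose(residual, 0, atol=1e-8)
assert np.all(np.linalg.eigvalsh(P) > 0)         # P > 0
assert np.all(np.linalg.eigvals(Acl).real < 0)   # A + BK stable
```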

## Monotonic norm convergence

Suppose that A + A^T < 0, i.e., the (symmetric part of) A is negative definite. This can be expressed as A^T P + P A + Q = 0, with P = I, Q = -(A + A^T) > 0.

Meaning: x^T P x = ‖x‖² decreases along every nonzero trajectory, i.e., ‖x(t)‖ is always decreasing monotonically; x(t) is always moving towards the origin.

This implies A is stable, but the converse is false: for a stable system, we need not have A + A^T < 0. (For a stable system with A + A^T not negative definite, x(t) converges to zero, but ‖x(t)‖ need not decrease monotonically.)

For a stable system we can always change coordinates so we have monotonic norm convergence. Let P denote the solution of A^T P + P A + I = 0, and take T = P^{-1/2}.

In the new coordinates A becomes Ã = T^{-1} A T, and

Ã + Ã^T = P^{1/2} A P^{-1/2} + P^{-1/2} A^T P^{1/2}
        = P^{-1/2} ( P A + A^T P ) P^{-1/2}
        = -P^{-1} < 0

so in the new coordinates, convergence is monotonic because ‖x̃(t)‖ is always decreasing.
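A numerical illustration of this coordinate change (example A is mine): pick a stable A whose symmetric part is not negative definite, then verify that Ã + Ã^T < 0 after the transformation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

A = np.array([[0.0, 10.0], [-0.1, -1.0]])       # stable, but A + A^T is not < 0
assert np.max(np.linalg.eigvalsh(A + A.T)) > 0  # no monotone convergence yet

P = solve_continuous_lyapunov(A.T, -np.eye(2))  # A^T P + P A = -I
T = np.linalg.inv(sqrtm(P).real)                # T = P^{-1/2}
At = np.linalg.inv(T) @ A @ T                   # Ã = T^{-1} A T
assert np.all(np.linalg.eigvalsh(At + At.T) < 0)  # monotone in new coordinates
```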

## Discrete-time results

All linear quadratic Lyapunov results have discrete-time counterparts. The discrete-time Lyapunov equation is

A^T P A - P + Q = 0

Meaning: if x_{t+1} = A x_t and V(z) = z^T P z, then ΔV(z) = V(Az) - V(z) = -z^T Q z.

- if P > 0 and Q > 0, then A is (discrete-time) stable (i.e., |λ_i| < 1)
- if P > 0 and Q ≥ 0, then all trajectories are bounded (i.e., |λ_i| ≤ 1; |λ_i| = 1 only for 1×1 Jordan blocks)
- if P > 0, Q ≥ 0, and (Q, A) observable, then A is stable
- if P ≱ 0 and Q ≥ 0, then A is not stable

## Discrete-time Lyapunov operator

The discrete-time Lyapunov operator is given by

L(P) = A^T P A - P

L is nonsingular if and only if, for all i, j, λ_i λ_j ≠ 1 (roughly speaking, if and only if A and A^{-1} share no eigenvalues).

If A is stable, then L is nonsingular; in fact

P = Σ_{t=0}^∞ (A^T)^t Q A^t

is the unique solution of the Lyapunov equation A^T P A - P + Q = 0.

The discrete-time Lyapunov equation can be solved quickly (i.e., O(n^3)) and can be used to evaluate infinite sums of quadratic functions, etc.
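The naive discrete-time solver uses vec(A^T P A - P) = (A^T ⊗ A^T - I) vec(P); a numpy sketch (toy matrices of my choosing) that also checks the series formula:

```python
import numpy as np

A = np.array([[0.5, 1.0], [0.0, -0.3]])   # |eigenvalues| = 0.5, 0.3 < 1
Q = np.eye(2)
n = 2

# Solve A^T P A - P + Q = 0 via Kronecker products.
M = np.kron(A.T, A.T) - np.eye(n * n)
P = np.linalg.solve(M, -Q.reshape(-1, order="F")).reshape((n, n), order="F")
assert np.allclose(A.T @ P @ A - P + Q, 0)

# Compare with a truncation of the series P = sum_t (A^T)^t Q A^t.
P_series = sum(np.linalg.matrix_power(A.T, t) @ Q @ np.linalg.matrix_power(A, t)
               for t in range(200))
assert np.allclose(P, P_series)
```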

## Converse theorems

Suppose x_{t+1} = A x_t is stable and A^T P A - P + Q = 0.

- if Q > 0, then P > 0
- if Q ≥ 0 and (Q, A) observable, then P > 0

In particular, a discrete-time linear system is stable if and only if there is a quadratic Lyapunov function that proves it.

## Monotonic norm convergence

Suppose A^T P A - P + Q = 0, with P = I and Q > 0. This means A^T A < I, i.e., ‖A‖ < 1.

Meaning: ‖x_t‖ decreases on every nonzero trajectory; indeed,

‖x_{t+1}‖ ≤ ‖A‖ ‖x_t‖ < ‖x_t‖

When ‖A‖ < 1, stability is obvious, since ‖x_t‖ ≤ ‖A‖^t ‖x_0‖. The system is called contractive since the norm is reduced at each step.

The converse is false: a system can be stable without ‖A‖ < 1.

Now suppose A is stable, and let P satisfy A^T P A - P + I = 0. Take T = P^{-1/2}.

In the new coordinates A becomes Ã = T^{-1} A T, so

Ã^T Ã = P^{-1/2} A^T P A P^{-1/2} = P^{-1/2} (P - I) P^{-1/2} = I - P^{-1} < I

i.e., ‖Ã‖ < 1. So for a stable system, we can change coordinates so the system is contractive.
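The discrete-time coordinate change can be illustrated the same way (example matrix is mine): a stable A with ‖A‖ > 1 becomes contractive after the change of coordinates:

```python
import numpy as np
from scipy.linalg import sqrtm, solve_discrete_lyapunov

A = np.array([[0.5, 2.0], [0.0, 0.4]])          # stable, but ‖A‖ > 1
assert np.max(np.abs(np.linalg.eigvals(A))) < 1
assert np.linalg.norm(A, 2) > 1                 # not contractive as given

P = solve_discrete_lyapunov(A.T, np.eye(2))     # solves A^T P A - P + I = 0
T = np.linalg.inv(sqrtm(P).real)                # T = P^{-1/2}
At = np.linalg.inv(T) @ A @ T                   # Ã = T^{-1} A T
assert np.linalg.norm(At, 2) < 1                # contractive in new coordinates
```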

## Lyapunov's linearization theorem

We consider the nonlinear time-invariant system ẋ = f(x), where f : R^n → R^n. Suppose x_e is an equilibrium point, i.e., f(x_e) = 0, and let A = Df(x_e) ∈ R^{n×n}. The linearized system, near x_e, is δẋ = A δx.

Linearization theorem:

- if the linearized system is stable, i.e., Rλ_i(A) < 0 for i = 1, …, n, then the nonlinear system is locally asymptotically stable
- if for some i, Rλ_i(A) > 0, then the nonlinear system is not locally asymptotically stable

Stability of the linearized system determines the local stability of the nonlinear system, except when all eigenvalues are in the closed left halfplane and at least one is on the imaginary axis.

Examples like ẋ = x^3 (which is not LAS) and ẋ = -x^3 (which is LAS) show the theorem cannot, in general, be tightened.

Examples:

| eigenvalues of Df(x_e)   | conclusion about ẋ = f(x) |
|--------------------------|---------------------------|
| -3, -0.1 ± 4i, -0.2 ± i  | LAS near x_e              |
| -3, -0.1 ± 4i, 0.2 ± i   | not LAS near x_e          |
| -3, -0.1 ± 4i, ±i        | no conclusion             |
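As a concrete (hypothetical) application of the theorem, take the damped pendulum f(x) = (x_2, -sin x_1 - 0.5 x_2), an example of my choosing: the Jacobian at the downward equilibrium has eigenvalues in the open left halfplane, while at the inverted equilibrium one eigenvalue has positive real part:

```python
import numpy as np

# Jacobian of f(x) = (x2, -sin(x1) - 0.5*x2) at x_e = (0, 0):
Df = np.array([[0.0, 1.0], [-np.cos(0.0), -0.5]])
eigs = np.linalg.eigvals(Df)
assert np.all(eigs.real < 0)        # linearization stable => locally AS

# Jacobian at the inverted equilibrium x_e = (pi, 0):
Df_inv = np.array([[0.0, 1.0], [-np.cos(np.pi), -0.5]])
assert np.max(np.linalg.eigvals(Df_inv).real) > 0   # not locally AS
```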

## Proof of linearization theorem

Let's assume x_e = 0, and express the nonlinear differential equation as

ẋ = Ax + g(x)

where ‖g(x)‖ ≤ K‖x‖².

Suppose that A is stable, and let P be the unique solution of the Lyapunov equation A^T P + P A + I = 0. The Lyapunov function V(z) = z^T P z proves stability of the linearized system; we'll use it to prove local asymptotic stability of the nonlinear system.

We have

V̇(z) = 2 z^T P (Az + g(z))
     = z^T (A^T P + P A) z + 2 z^T P g(z)
     ≤ -‖z‖² + 2 ‖z‖ ‖P‖ ‖g(z)‖
     ≤ -‖z‖² + 2K‖P‖ ‖z‖³
     = -‖z‖² (1 - 2K‖P‖ ‖z‖)

so for ‖z‖ ≤ 1/(4K‖P‖),

V̇(z) ≤ -‖z‖²/2 ≤ -(1/(2λ_max(P))) z^T P z = -(1/(2‖P‖)) V(z)

Finally, using ‖z‖² ≤ (1/λ_min(P)) z^T P z, we have

V(z) ≤ λ_min(P)/(16 K² ‖P‖²)  ⟹  ‖z‖ ≤ 1/(4K‖P‖)  ⟹  V̇(z) ≤ -(1/(2‖P‖)) V(z)

and we're done.

Comments:

- the proof actually constructs an ellipsoid inside the basin of attraction of x_e = 0, and a bound on the exponential rate of convergence
- the choice of Q = I was arbitrary; we can get better estimates using other Q's, better bounds on g, tighter bounding arguments, …

## Integral quadratic performance

Consider ẋ = f(x), x(0) = x_0. We are interested in the integral quadratic performance measure

J(x_0) = ∫_0^∞ x(t)^T Q x(t) dt

For any fixed x_0 we can find this (approximately) by simulation and numerical integration. (We'll assume the integral exists; we do not require Q ≥ 0.)

## Lyapunov bounds on integral quadratic performance

Suppose there is a function V : R^n → R such that

- V(z) ≥ 0 for all z
- V̇(z) ≤ -z^T Q z for all z

Then we have J(x_0) ≤ V(x_0), i.e., the Lyapunov function V serves as an upper bound on the integral quadratic cost.

(Since Q need not be PSD, we might not have V̇ ≤ 0; so we cannot conclude that trajectories are bounded.)

To show this, we note that

V(x(T)) - V(x(0)) = ∫_0^T V̇(x(t)) dt ≤ -∫_0^T x(t)^T Q x(t) dt

and so

∫_0^T x(t)^T Q x(t) dt ≤ V(x(0)) - V(x(T)) ≤ V(x(0))

Since this holds for arbitrary T, we conclude

∫_0^∞ x(t)^T Q x(t) dt ≤ V(x(0))

## Integral quadratic performance for linear systems

For a stable linear system, with Q ≥ 0, the Lyapunov bound is sharp, i.e., there exists a V such that

- V(z) ≥ 0 for all z
- V̇(z) ≤ -z^T Q z for all z

and for which V(x_0) = J(x_0) for all x_0.

(Take V(z) = z^T P z, where P is the solution of A^T P + P A + Q = 0.)


### The Real Numbers. Here we show one way to explicitly construct the real numbers R. First we need a definition.

The Real Numbers Here we show one way to explicitly construct the real numbers R. First we need a definition. Definitions/Notation: A sequence of rational numbers is a funtion f : N Q. Rather than write

### The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method Robert M. Freund February, 004 004 Massachusetts Institute of Technology. 1 1 The Algorithm The problem

### Nonlinear Algebraic Equations Example

Nonlinear Algebraic Equations Example Continuous Stirred Tank Reactor (CSTR). Look for steady state concentrations & temperature. s r (in) p,i (in) i In: N spieces with concentrations c, heat capacities

### Math 312 Homework 1 Solutions

Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

### 4.3 Lagrange Approximation

206 CHAP. 4 INTERPOLATION AND POLYNOMIAL APPROXIMATION Lagrange Polynomial Approximation 4.3 Lagrange Approximation Interpolation means to estimate a missing function value by taking a weighted average

### Cylinder Maps and the Schwarzian

Cylinder Maps and the Schwarzian John Milnor Stony Brook University (www.math.sunysb.edu) Bremen June 16, 2008 Cylinder Maps 1 work with Araceli Bonifant Let C denote the cylinder (R/Z) I. (.5, 1) fixed

### 1 if 1 x 0 1 if 0 x 1

Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

### Numerical methods for American options

Lecture 9 Numerical methods for American options Lecture Notes by Andrzej Palczewski Computational Finance p. 1 American options The holder of an American option has the right to exercise it at any moment

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### Section 3 Sequences and Limits, Continued.

Section 3 Sequences and Limits, Continued. Lemma 3.6 Let {a n } n N be a convergent sequence for which a n 0 for all n N and it α 0. Then there exists N N such that for all n N. α a n 3 α In particular

### Text: A Graphical Approach to College Algebra (Hornsby, Lial, Rockswold)

Students will take Self Tests covering the topics found in Chapter R (Reference: Basic Algebraic Concepts) and Chapter 1 (Linear Functions, Equations, and Inequalities). If any deficiencies are revealed,

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### JUST THE MATHS UNIT NUMBER 1.8. ALGEBRA 8 (Polynomials) A.J.Hobson

JUST THE MATHS UNIT NUMBER 1.8 ALGEBRA 8 (Polynomials) by A.J.Hobson 1.8.1 The factor theorem 1.8.2 Application to quadratic and cubic expressions 1.8.3 Cubic equations 1.8.4 Long division of polynomials

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### Notes on Symmetric Matrices

CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

### On using numerical algebraic geometry to find Lyapunov functions of polynomial dynamical systems

Dynamics at the Horsetooth Volume 2, 2010. On using numerical algebraic geometry to find Lyapunov functions of polynomial dynamical systems Eric Hanson Department of Mathematics Colorado State University