Lecture 13 Linear quadratic Lyapunov theory
- Clinton Stafford
- 9 years ago
EE363 Winter 2008-09

Lecture 13: Linear quadratic Lyapunov theory

- the Lyapunov equation
- Lyapunov stability conditions
- the Lyapunov operator and integral
- evaluating quadratic integrals
- analysis of ARE
- discrete-time results
- linearization theorem

Linear quadratic Lyapunov theory 13-1
The Lyapunov equation

the Lyapunov equation is

    A^T P + P A + Q = 0

where A, P, Q ∈ R^{n×n}, and P, Q are symmetric

interpretation: for the linear system ẋ = Ax, if V(z) = z^T P z, then

    V̇(z) = (Az)^T P z + z^T P (Az) = -z^T Q z

i.e., if z^T P z is the (generalized) energy, then z^T Q z is the associated (generalized) dissipation

linear-quadratic Lyapunov theory: linear dynamics, quadratic Lyapunov function
we consider the system ẋ = Ax, with λ_1, ..., λ_n the eigenvalues of A

if P > 0, then the sublevel sets are ellipsoids (and bounded), and V(z) = z^T P z = 0 ⟺ z = 0

boundedness condition: if P > 0, Q ≥ 0, then all trajectories of ẋ = Ax are bounded (this means Re λ_i ≤ 0, and if Re λ_i = 0, then λ_i corresponds to a Jordan block of size one), and the ellipsoids {z | z^T P z ≤ a} are invariant
Stability condition

if P > 0, Q > 0, then the system ẋ = Ax is (globally asymptotically) stable, i.e., Re λ_i < 0

to see this, note that

    V̇(z) = -z^T Q z ≤ -λ_min(Q) z^T z ≤ -(λ_min(Q)/λ_max(P)) z^T P z = -α V(z)

where α = λ_min(Q)/λ_max(P) > 0
An extension based on observability

(LaSalle's theorem for linear dynamics, quadratic function)

if P > 0, Q ≥ 0, and (Q, A) observable, then the system ẋ = Ax is (globally asymptotically) stable

to see this, we first note that all eigenvalues satisfy Re λ_i ≤ 0

now suppose that v ≠ 0, Av = λv, Re λ = 0; then Av̄ = λ̄v̄ = -λv̄, so

    ‖Q^{1/2} v‖² = v* Q v = -v* (A^T P + P A) v = -(λ̄ + λ) v* P v = 0

which implies Q^{1/2} v = 0, so Qv = 0, contradicting observability (by PBH)

interpretation: the observability condition means no trajectory can stay in the zero-dissipation set {z | z^T Q z = 0}
An instability condition

if Q ≥ 0 and P ≱ 0, then A is not stable

to see this, note that V̇ ≤ 0, so V(x(t)) ≤ V(x(0))

since P ≱ 0, there is a w with V(w) < 0; the trajectory starting at w does not converge to zero

in this case, the sublevel sets {z | V(z) ≤ 0} (which are unbounded) are invariant
The Lyapunov operator

the Lyapunov operator is given by

    L(P) = A^T P + P A

a special case of the Sylvester operator

L is nonsingular if and only if A and -A share no common eigenvalues, i.e., A does not have a pair of eigenvalues which are negatives of each other

- if A is stable, the Lyapunov operator is nonsingular
- if A has an imaginary (nonzero, iω-axis) eigenvalue, then the Lyapunov operator is singular

thus if A is stable, for any Q there is exactly one solution P of the Lyapunov equation A^T P + P A + Q = 0
Solving the Lyapunov equation

    A^T P + P A + Q = 0

we are given A and Q, and want to find P

if the Lyapunov equation is solved as a set of n(n+1)/2 equations in n(n+1)/2 variables, the cost is O(n^6) operations

fast methods, which exploit the special structure of the linear equations, can solve the Lyapunov equation with cost O(n^3); these are based on first reducing A to Schur or upper Hessenberg form
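A minimal numerical sketch of this, using SciPy's O(n^3) Lyapunov solver; the matrices A and Q below are illustrative, not from the lecture:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# illustrative stable A and symmetric Q
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(M, Y) solves M X + X M^T = Y;
# with M = A^T and Y = -Q this gives A^T P + P A + Q = 0
P = solve_continuous_lyapunov(A.T, -Q)

# check the residual, and that P is symmetric positive definite
assert np.allclose(A.T @ P + P @ A + Q, 0)
assert np.all(np.linalg.eigvalsh(P) > 0)   # A stable, Q > 0  =>  P > 0
```

The positive definiteness of P here is exactly the converse theorem on the following slides: A stable and Q > 0 force P > 0.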
The Lyapunov integral

if A is stable there is an explicit formula for the solution of the Lyapunov equation:

    P = ∫_0^∞ e^{tA^T} Q e^{tA} dt

to see this, we note that

    A^T P + P A = ∫_0^∞ ( A^T e^{tA^T} Q e^{tA} + e^{tA^T} Q e^{tA} A ) dt
                = ∫_0^∞ (d/dt) ( e^{tA^T} Q e^{tA} ) dt
                = e^{tA^T} Q e^{tA} |_0^∞
                = -Q
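The integral formula can be checked against the linear-equation solution numerically; a sketch with illustrative data, assuming NumPy/SciPy (the truncation point and grid are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# illustrative stable A and Q
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
Q = np.eye(2)

# exact solution of A^T P + P A + Q = 0
P = solve_continuous_lyapunov(A.T, -Q)

# crude trapezoid-rule approximation of P = int_0^inf e^{tA^T} Q e^{tA} dt,
# truncated at t = 20 (the integrand decays exponentially here)
ts = np.linspace(0.0, 20.0, 4001)
dt = ts[1] - ts[0]
vals = np.array([expm(t * A.T) @ Q @ expm(t * A) for t in ts])
P_int = dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

assert np.allclose(P, P_int, atol=1e-3)
```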
Interpretation as cost-to-go

if A is stable, and P is the (unique) solution of A^T P + P A + Q = 0, then

    V(z) = z^T P z = z^T ( ∫_0^∞ e^{tA^T} Q e^{tA} dt ) z = ∫_0^∞ x(t)^T Q x(t) dt

where ẋ = Ax, x(0) = z

thus V(z) is the cost-to-go from point z (with no input), under the integral quadratic cost function with matrix Q
if A is stable and Q > 0, then for each t, e^{tA^T} Q e^{tA} > 0, so

    P = ∫_0^∞ e^{tA^T} Q e^{tA} dt > 0

meaning: if A is stable,

- we can choose any positive definite quadratic form z^T Q z as the dissipation, i.e., -V̇ = z^T Q z
- then solve a set of linear equations to find the (unique) quadratic form V(z) = z^T P z
- V will be positive definite, so it is a Lyapunov function that proves A is stable

in particular: a linear system is stable if and only if there is a quadratic Lyapunov function that proves it
generalization: if A stable, Q ≥ 0, and (Q, A) observable, then P > 0

to see this, the Lyapunov integral shows P ≥ 0

if Pz = 0, then

    0 = z^T P z = z^T ( ∫_0^∞ e^{tA^T} Q e^{tA} dt ) z = ∫_0^∞ ‖Q^{1/2} e^{tA} z‖² dt

so we conclude Q^{1/2} e^{tA} z = 0 for all t

this implies that Qz = 0, QAz = 0, ..., QA^{n-1}z = 0, contradicting (Q, A) observable
Monotonicity of Lyapunov operator inverse

suppose A^T P_i + P_i A + Q_i = 0, i = 1, 2

if Q_1 ≥ Q_2, then for all t, e^{tA^T} Q_1 e^{tA} ≥ e^{tA^T} Q_2 e^{tA}

if A is stable, we have

    P_1 = ∫_0^∞ e^{tA^T} Q_1 e^{tA} dt ≥ ∫_0^∞ e^{tA^T} Q_2 e^{tA} dt = P_2

in other words: if A is stable then

    Q_1 ≥ Q_2  ⟹  L^{-1}(Q_1) ≤ L^{-1}(Q_2)

i.e., the inverse Lyapunov operator is monotonic, or preserves the matrix inequality, when A is stable (question: is L monotonic?)
Evaluating quadratic integrals

suppose ẋ = Ax is stable, and define

    J = ∫_0^∞ x(t)^T Q x(t) dt

to find J, we solve the Lyapunov equation A^T P + P A + Q = 0 for P; then

    J = x(0)^T P x(0)

in other words: we can evaluate the quadratic integral exactly, by solving a set of linear equations, without even computing a matrix exponential
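A sketch of this recipe on an illustrative system, with the exact value x(0)^T P x(0) checked against brute-force simulation and numerical integration (NumPy/SciPy assumed; the data are not from the lecture):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# illustrative stable system and cost
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
Q = np.eye(2)
x0 = np.array([1.0, -1.0])

# J = int_0^inf x^T Q x dt, evaluated exactly as x0^T P x0
P = solve_continuous_lyapunov(A.T, -Q)
J = x0 @ P @ x0

# sanity check: simulate x(t) = e^{tA} x0 and integrate by trapezoid rule
ts = np.linspace(0.0, 20.0, 4001)
dt = ts[1] - ts[0]
vals = np.array([(expm(t * A) @ x0) @ Q @ (expm(t * A) @ x0) for t in ts])
J_sim = dt * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

assert np.isclose(J, 0.5)            # exact value for this particular data
assert np.isclose(J, J_sim, atol=1e-3)
```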
Controllability and observability Gramians

for A stable, the controllability Gramian of (A, B) is defined as

    W_c = ∫_0^∞ e^{tA} B B^T e^{tA^T} dt

and the observability Gramian of (C, A) is

    W_o = ∫_0^∞ e^{tA^T} C^T C e^{tA} dt

these Gramians can be computed by solving the Lyapunov equations

    A W_c + W_c A^T + B B^T = 0,    A^T W_o + W_o A + C^T C = 0

we always have W_c ≥ 0, W_o ≥ 0; W_c > 0 if and only if (A, B) is controllable, and W_o > 0 if and only if (C, A) is observable
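Both Gramian Lyapunov equations are solved the same way; a sketch with an illustrative controllable and observable triple (A, B, C):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# illustrative stable (A, B, C)
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# A Wc + Wc A^T + B B^T = 0  and  A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# (A, B) is controllable and (C, A) observable here, so both
# Gramians should be positive definite
assert np.all(np.linalg.eigvalsh(Wc) > 0)
assert np.all(np.linalg.eigvalsh(Wo) > 0)
```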
Evaluating a state feedback gain

consider

    ẋ = Ax + Bu,  y = Cx,  u = Kx,  x(0) = x_0

with the closed-loop system ẋ = (A + BK)x stable

to evaluate the quadratic integral performance measures

    J_u = ∫_0^∞ u(t)^T u(t) dt,    J_y = ∫_0^∞ y(t)^T y(t) dt

we solve the Lyapunov equations

    (A + BK)^T P_u + P_u (A + BK) + K^T K = 0
    (A + BK)^T P_y + P_y (A + BK) + C^T C = 0

then we have J_u = x_0^T P_u x_0, J_y = x_0^T P_y x_0
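A sketch of this evaluation; the plant, output matrix, and stabilizing gain K below are illustrative choices, not from the lecture:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# illustrative plant, output, and stabilizing state feedback gain
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-2.0, -2.0]])          # chosen so A + B K is stable
x0 = np.array([1.0, 0.0])

Acl = A + B @ K
assert np.all(np.linalg.eigvals(Acl).real < 0)

# one Lyapunov equation per performance measure
Pu = solve_continuous_lyapunov(Acl.T, -(K.T @ K))
Py = solve_continuous_lyapunov(Acl.T, -(C.T @ C))

Ju = x0 @ Pu @ x0                     # int_0^inf u^T u dt
Jy = x0 @ Py @ x0                     # int_0^inf y^T y dt
assert Ju > 0 and Jy > 0
```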
Lyapunov analysis of ARE

write the ARE (with Q ≥ 0, R > 0)

    A^T P + P A + Q - P B R^{-1} B^T P = 0

as

    (A + BK)^T P + P (A + BK) + (Q + K^T R K) = 0

with K = -R^{-1} B^T P

we conclude: if A + BK is stable, then P ≥ 0 (since Q + K^T R K ≥ 0), i.e., any stabilizing solution of the ARE is PSD

if also (Q, A) is observable, then we conclude P > 0

to see this, we need to show that (Q + K^T R K, A + BK) is observable; if not, there is v ≠ 0 s.t.

    (A + BK)v = λv,    (Q + K^T R K)v = 0
which implies

    v* (Q + K^T R K) v = v* Q v + v* K^T R K v = ‖Q^{1/2} v‖² + ‖R^{1/2} K v‖² = 0

so Qv = 0 and Kv = 0, and hence

    (A + BK)v = Av = λv,    Qv = 0

which contradicts (Q, A) observable

the same argument shows that if P > 0 and (Q, A) is observable, then A + BK is stable
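These conclusions can be observed numerically with SciPy's ARE solver, which returns the stabilizing solution; the data below are illustrative (a double integrator with Q ≥ 0, R > 0):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative data with Q >= 0, R > 0
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# stabilizing solution of A^T P + P A + Q - P B R^{-1} B^T P = 0
P = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ P)       # K = -R^{-1} B^T P

# residual of the ARE
res = A.T @ P + P @ A + Q - P @ B @ np.linalg.solve(R, B.T @ P)
assert np.allclose(res, 0)

# as argued above: A + B K is stable, and (Q, A) observable forces P > 0
assert np.all(np.linalg.eigvals(A + B @ K).real < 0)
assert np.all(np.linalg.eigvalsh(P) > 0)
```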
Monotonic norm convergence

suppose that A + A^T < 0, i.e., the (symmetric part of) A is negative definite

we can express this as A^T P + P A + Q = 0, with P = I, Q > 0

meaning: x^T P x = ‖x‖² decreases along every nonzero trajectory, i.e.,

- ‖x(t)‖ is always decreasing monotonically to 0
- x(t) is always moving towards the origin

this implies A is stable, but the converse is false: for a stable system, we need not have A + A^T < 0

(for a stable system with A + A^T ≮ 0, ‖x(t)‖ converges to 0, but not monotonically)
for a stable system we can always change coordinates so that we have monotonic norm convergence

- let P denote the solution of A^T P + P A + I = 0
- take T = P^{-1/2}

in the new coordinates A becomes Ã = T^{-1} A T, and

    Ã + Ã^T = P^{1/2} A P^{-1/2} + P^{-1/2} A^T P^{1/2}
            = P^{-1/2} ( P A + A^T P ) P^{-1/2}
            = -P^{-1} < 0

in the new coordinates, convergence is obvious because ‖x̃(t)‖ is always decreasing
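A numerical sketch of this coordinate change, on an illustrative stable A whose symmetric part is not negative definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

# stable A whose symmetric part is NOT negative definite
A = np.array([[-0.1, 1.0],
              [ 0.0, -0.1]])
assert np.all(np.linalg.eigvals(A).real < 0)
assert np.max(np.linalg.eigvalsh(A + A.T)) > 0    # no monotonic convergence

# change coordinates with T = P^{-1/2}, where A^T P + P A + I = 0
P = solve_continuous_lyapunov(A.T, -np.eye(2))
T = np.linalg.inv(sqrtm(P).real)
At = np.linalg.inv(T) @ A @ T

# in the new coordinates the symmetric part is negative definite (= -P^{-1})
assert np.max(np.linalg.eigvalsh(At + At.T)) < 0
```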
Discrete-time results

all linear quadratic Lyapunov results have discrete-time counterparts

the discrete-time Lyapunov equation is

    A^T P A - P + Q = 0

meaning: if x_{t+1} = A x_t and V(z) = z^T P z, then ΔV(z) = V(Az) - V(z) = -z^T Q z

- if P > 0 and Q > 0, then A is (discrete-time) stable (i.e., |λ_i| < 1)
- if P > 0 and Q ≥ 0, then all trajectories are bounded (i.e., |λ_i| ≤ 1; |λ_i| = 1 only for 1×1 Jordan blocks)
- if P > 0, Q ≥ 0, and (Q, A) observable, then A is stable
- if P ≱ 0 and Q ≥ 0, then A is not stable
Discrete-time Lyapunov operator

the discrete-time Lyapunov operator is given by

    L(P) = A^T P A - P

L is nonsingular if and only if, for all i, j, λ_i λ_j ≠ 1 (roughly speaking, if and only if A and A^{-1} share no eigenvalues)

if A is stable, then L is nonsingular; in fact

    P = Σ_{t=0}^∞ (A^T)^t Q A^t

is the unique solution of the Lyapunov equation A^T P A - P + Q = 0

the discrete-time Lyapunov equation can be solved quickly (i.e., in O(n^3) operations) and can be used to evaluate infinite sums of quadratic functions, etc.
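A sketch checking the series formula against SciPy's discrete-time solver, with an illustrative A of spectral radius < 1:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# illustrative discrete-time stable A (spectral radius < 1)
A = np.array([[0.5, 0.4],
              [0.0, 0.8]])
Q = np.eye(2)

# solve_discrete_lyapunov(M, Y) solves M X M^H - X + Y = 0;
# with M = A^T this gives A^T P A - P + Q = 0
P = solve_discrete_lyapunov(A.T, Q)
assert np.allclose(A.T @ P @ A - P + Q, 0)

# compare with a truncation of the series P = sum_t (A^T)^t Q A^t
P_sum = np.zeros((2, 2))
At = np.eye(2)                  # holds A^t
for _ in range(200):
    P_sum += At.T @ Q @ At
    At = A @ At
assert np.allclose(P, P_sum)
```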
Converse theorems

suppose x_{t+1} = A x_t is stable and A^T P A - P + Q = 0

- if Q > 0, then P > 0
- if Q ≥ 0 and (Q, A) observable, then P > 0

in particular, a discrete-time linear system is stable if and only if there is a quadratic Lyapunov function that proves it
Monotonic norm convergence

suppose A^T P A - P + Q = 0, with P = I and Q > 0

this means A^T A < I, i.e., ‖A‖ < 1

meaning: ‖x_t‖ decreases on every nonzero trajectory; indeed, ‖x_{t+1}‖ ≤ ‖A‖ ‖x_t‖ < ‖x_t‖

when ‖A‖ < 1, stability is obvious, since ‖x_t‖ ≤ ‖A‖^t ‖x_0‖

the system is called contractive since the norm is reduced at each step

the converse is false: a system can be stable without ‖A‖ < 1
now suppose A is stable, and let P satisfy A^T P A - P + I = 0

take T = P^{-1/2}

in the new coordinates A becomes Ã = T^{-1} A T, so

    Ã^T Ã = P^{-1/2} A^T P A P^{-1/2} = P^{-1/2} (P - I) P^{-1/2} = I - P^{-1} < I

i.e., ‖Ã‖ < 1

so for a stable system, we can change coordinates so the system is contractive
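The discrete-time coordinate change can be sketched the same way; the A below is illustrative (stable but with operator norm larger than one):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, sqrtm

# stable (spectral radius < 1) but not contractive: ||A|| > 1
A = np.array([[0.5, 2.0],
              [0.0, 0.5]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1
assert np.linalg.norm(A, 2) > 1

# T = P^{-1/2}, with A^T P A - P + I = 0, makes the system contractive
P = solve_discrete_lyapunov(A.T, np.eye(2))
T = np.linalg.inv(sqrtm(P).real)
At = np.linalg.inv(T) @ A @ T
assert np.linalg.norm(At, 2) < 1      # ||A~|| < 1 in the new coordinates
```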
Lyapunov's linearization theorem

we consider the nonlinear time-invariant system ẋ = f(x), where f : R^n → R^n

suppose x_e is an equilibrium point, i.e., f(x_e) = 0, and let A = Df(x_e) ∈ R^{n×n}

the linearized system, near x_e, is δẋ = A δx

linearization theorem:

- if the linearized system is stable, i.e., Re λ_i(A) < 0 for i = 1, ..., n, then the nonlinear system is locally asymptotically stable
- if for some i, Re λ_i(A) > 0, then the nonlinear system is not locally asymptotically stable
stability of the linearized system determines the local stability of the nonlinear system, except when all eigenvalues are in the closed left halfplane, and at least one is on the imaginary axis

examples like ẋ = x^3 (which is not LAS) and ẋ = -x^3 (which is LAS) show the theorem cannot, in general, be tightened

examples:

    eigenvalues of Df(x_e)        conclusion about ẋ = f(x)
    -3, -0.1 ± 4i, -0.2 ± i       LAS near x_e
    -3, -0.1 ± 4i,  0.2 ± i       not LAS near x_e
    -3, -0.1 ± 4i,  ±i            no conclusion
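In practice the theorem is applied by computing Df(x_e) and its eigenvalues. A sketch using a finite-difference Jacobian on a damped-pendulum-like system; the dynamics f and the helper `jacobian` are illustrative, not from the lecture:

```python
import numpy as np

# illustrative nonlinear system: x = (angle, angular velocity)
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

def jacobian(f, xe, h=1e-6):
    """Forward-difference approximation of Df(xe)."""
    n = len(xe)
    J = np.zeros((n, n))
    fx = f(xe)
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(xe + e) - fx) / h
    return J

xe = np.zeros(2)                      # equilibrium: f(xe) = 0
assert np.allclose(f(xe), 0)
A = jacobian(f, xe)                   # A = Df(xe), approx [[0, 1], [-1, -0.5]]

# all eigenvalues in the open left half-plane => locally asymptotically stable
assert np.all(np.linalg.eigvals(A).real < 0)
```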
Proof of linearization theorem

let's assume x_e = 0, and express the nonlinear differential equation as

    ẋ = Ax + g(x)

where ‖g(x)‖ ≤ K ‖x‖²

suppose that A is stable, and let P be the unique solution of the Lyapunov equation A^T P + P A + I = 0

the Lyapunov function V(z) = z^T P z proves stability of the linearized system; we'll use it to prove local asymptotic stability of the nonlinear system
    V̇(z) = 2 z^T P (Az + g(z))
         = z^T (A^T P + P A) z + 2 z^T P g(z)
         = -z^T z + 2 z^T P g(z)
         ≤ -‖z‖² + 2 ‖z‖ ‖P‖ ‖g(z)‖
         ≤ -‖z‖² + 2 K ‖P‖ ‖z‖³
         = -‖z‖² (1 - 2 K ‖P‖ ‖z‖)

so for ‖z‖ ≤ 1/(4 K ‖P‖),

    V̇(z) ≤ -(1/2) ‖z‖² ≤ -(1/(2 λ_max(P))) z^T P z = -(1/(2 ‖P‖)) z^T P z
finally, using

    ‖z‖² ≤ (1/λ_min(P)) z^T P z

we have

    V(z) ≤ λ_min(P)/(16 K² ‖P‖²)  ⟹  ‖z‖ ≤ 1/(4 K ‖P‖)  ⟹  V̇(z) ≤ -(1/(2 ‖P‖)) V(z)

and we're done

comments:

- the proof actually constructs an ellipsoid inside the basin of attraction of x_e = 0, and a bound on the exponential rate of convergence
- the choice Q = I was arbitrary; better estimates can be obtained using other Qs, better bounds on g, tighter bounding arguments, ...
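The quantities in the proof are all computable once K is known. A hedged sketch for an illustrative system ẋ = Ax + g(x) with g(x) = (0, x_1²), for which ‖g(x)‖ ≤ ‖x‖², so K = 1 works:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# illustrative nonlinear system xdot = A x + g(x), g(x) = (0, x1^2)
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
K = 1.0                                # ||g(x)|| = x1^2 <= ||x||^2

# P from A^T P + P A + I = 0, as in the proof
P = solve_continuous_lyapunov(A.T, -np.eye(2))
Pn = np.linalg.norm(P, 2)              # ||P|| = lambda_max(P)

# the proof shows the sublevel set {z : z^T P z <= c} lies inside
# the basin of attraction of 0, with
c = np.min(np.linalg.eigvalsh(P)) / (16 * K**2 * Pn**2)

# and inside it V decays at least exponentially: Vdot <= -rate * V
rate = 1.0 / (2 * Pn)
assert c > 0 and rate > 0
```

As the slide notes, Q = I is an arbitrary choice here; other Qs (or a sharper bound on g) would give a larger certified ellipsoid.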
Integral quadratic performance

consider ẋ = f(x), x(0) = x_0

we are interested in the integral quadratic performance measure

    J(x_0) = ∫_0^∞ x(t)^T Q x(t) dt

for any fixed x_0 we can find this (approximately) by simulation and numerical integration

(we'll assume the integral exists; we do not require Q ≥ 0)
Lyapunov bounds on integral quadratic performance

suppose there is a function V : R^n → R such that

- V(z) ≥ 0 for all z
- V̇(z) ≤ -z^T Q z for all z

then we have J(x_0) ≤ V(x_0), i.e., the Lyapunov function V serves as an upper bound on the integral quadratic cost

(since Q need not be PSD, we might not have V̇ ≤ 0; so we cannot conclude that trajectories are bounded)
to show this, we note that

    V(x(T)) - V(x(0)) = ∫_0^T V̇(x(t)) dt ≤ -∫_0^T x(t)^T Q x(t) dt

and so

    ∫_0^T x(t)^T Q x(t) dt ≤ V(x(0)) - V(x(T)) ≤ V(x(0))

since this holds for arbitrary T, we conclude

    ∫_0^∞ x(t)^T Q x(t) dt ≤ V(x(0))
Integral quadratic performance for linear systems

for a stable linear system, with Q ≥ 0, the Lyapunov bound is sharp, i.e., there exists a V such that

- V(z) ≥ 0 for all z
- V̇(z) ≤ -z^T Q z for all z

and for which V(x_0) = J(x_0) for all x_0

(take V(z) = z^T P z, where P is the solution of A^T P + P A + Q = 0)
3. Reaction Diffusion Equations Consider the following ODE model for population growth
3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent
15 Limit sets. Lyapunov functions
15 Limit sets. Lyapunov functions At this point, considering the solutions to ẋ = f(x), x U R 2, (1) we were most interested in the behavior of solutions when t (sometimes, this is called asymptotic behavior
H/wk 13, Solutions to selected problems
H/wk 13, Solutions to selected problems Ch. 4.1, Problem 5 (a) Find the number of roots of x x in Z 4, Z Z, any integral domain, Z 6. (b) Find a commutative ring in which x x has infinitely many roots.
Reaction diffusion systems and pattern formation
Chapter 5 Reaction diffusion systems and pattern formation 5.1 Reaction diffusion systems from biology In ecological problems, different species interact with each other, and in chemical reactions, different
OPTIMAL CONTROL OF A COMMERCIAL LOAN REPAYMENT PLAN. E.V. Grigorieva. E.N. Khailov
DISCRETE AND CONTINUOUS Website: http://aimsciences.org DYNAMICAL SYSTEMS Supplement Volume 2005 pp. 345 354 OPTIMAL CONTROL OF A COMMERCIAL LOAN REPAYMENT PLAN E.V. Grigorieva Department of Mathematics
Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.
This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra
THE DYING FIBONACCI TREE. 1. Introduction. Consider a tree with two types of nodes, say A and B, and the following properties:
THE DYING FIBONACCI TREE BERNHARD GITTENBERGER 1. Introduction Consider a tree with two types of nodes, say A and B, and the following properties: 1. Let the root be of type A.. Each node of type A produces
Some Problems of Second-Order Rational Difference Equations with Quadratic Terms
International Journal of Difference Equations ISSN 0973-6069, Volume 9, Number 1, pp. 11 21 (2014) http://campus.mst.edu/ijde Some Problems of Second-Order Rational Difference Equations with Quadratic
University of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
ON LIMIT LAWS FOR CENTRAL ORDER STATISTICS UNDER POWER NORMALIZATION. E. I. Pancheva, A. Gacovska-Barandovska
Pliska Stud. Math. Bulgar. 22 (2015), STUDIA MATHEMATICA BULGARICA ON LIMIT LAWS FOR CENTRAL ORDER STATISTICS UNDER POWER NORMALIZATION E. I. Pancheva, A. Gacovska-Barandovska Smirnov (1949) derived four
Controllability and Observability
Chapter 3 Controllability and Observability In this chapter, we study the controllability and observability concepts. Controllability is concerned with whether one can design control input to steer the
15. Symmetric polynomials
15. Symmetric polynomials 15.1 The theorem 15.2 First examples 15.3 A variant: discriminants 1. The theorem Let S n be the group of permutations of {1,, n}, also called the symmetric group on n things.
Applied Linear Algebra I Review page 1
Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties
EXISTENCE AND NON-EXISTENCE RESULTS FOR A NONLINEAR HEAT EQUATION
Sixth Mississippi State Conference on Differential Equations and Computational Simulations, Electronic Journal of Differential Equations, Conference 5 (7), pp. 5 65. ISSN: 7-669. UL: http://ejde.math.txstate.edu
Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction
MATH 52: MATLAB HOMEWORK 2
MATH 52: MATLAB HOMEWORK 2. omplex Numbers The prevalence of the complex numbers throughout the scientific world today belies their long and rocky history. Much like the negative numbers, complex numbers
