# Lecture 7: Finding Lyapunov Functions


Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science. 6.243j (Fall 2003): Dynamics of Nonlinear Systems, by A. Megretski. Version of September 26, 2003.

This lecture gives an introduction to basic methods for finding Lyapunov functions and storage functions for given dynamical systems.

## 7.1 Convex search for storage functions

The set of all real-valued functions of the system state which do not increase along system trajectories is convex, i.e. closed under addition and under multiplication by a positive constant. This serves as the basis for a general procedure of searching for Lyapunov functions or storage functions.

### 7.1.1 Linearly parameterized storage function candidates

Consider a system model given by the discrete-time state space equations

$$x(t+1) = f(x(t), w(t)), \quad y(t) = g(x(t), w(t)), \tag{7.1}$$

where $x(t) \in X \subset \mathbb{R}^n$ is the system state, $w(t) \in W \subset \mathbb{R}^m$ is the system input, $y(t) \in Y \subset \mathbb{R}^k$ is the system output, and $f : X \times W \to X$, $g : X \times W \to Y$ are given functions. A functional $V : X \to \mathbb{R}$ is a *storage function* for system (7.1) with *supply rate* $\psi : Y \times W \to \mathbb{R}$ if

$$V(x(t+1)) - V(x(t)) \le \psi(y(t), w(t)) \tag{7.2}$$

for every solution of (7.1), i.e. if

$$V(f(\bar x, \bar w)) - V(\bar x) \le \psi(g(\bar x, \bar w), \bar w) \quad \forall\, \bar x \in X,\ \bar w \in W. \tag{7.3}$$

In particular, when $\psi \equiv 0$, this yields the definition of a Lyapunov function.

Finding, for a given supply rate, a valid storage function (or at least proving that one exists) is a major challenge in the constructive analysis of nonlinear systems. The most common approach is based on considering a linearly parameterized subset of storage function candidates $\mathcal{V}$, defined by

$$\mathcal{V} = \left\{ V(\bar x) = \sum_{q=1}^{N} \phi_q V_q(\bar x) \right\}, \tag{7.4}$$

where $\{V_q\}$ is a fixed set of basis functions and the $\phi_q$ are parameters to be determined. Every element of $\mathcal{V}$ is considered a storage function candidate, and one wants to set up an efficient search for values of $\phi_q$ which yield a function $V$ satisfying (7.3).

**Example 7.1** Consider the finite state automaton defined by equations (7.1) with value sets $X = \{1, 2, 3\}$, $W = \{0, 1\}$, $Y = \{0, 1\}$, and with dynamics defined by

$$f(1,1) = 2, \quad f(2,1) = 3, \quad f(3,1) = 1,$$

$$f(1,0) = 1, \quad f(2,0) = 2, \quad f(3,0) = 2,$$

$$g(1,1) = 1, \qquad g(\bar x, \bar w) = 0 \ \text{ for } (\bar x, \bar w) \ne (1,1).$$

In order to show that the number of 1's in the output is never much larger than one third of the number of 1's in the input, one can try to find a storage function $V$ with supply rate $\psi(\bar y, \bar w) = \bar w - 3\bar y$. Taking three basis functions $V_1, V_2, V_3$ defined by

$$V_k(\bar x) = \begin{cases} 1, & \bar x = k, \\ 0, & \bar x \ne k, \end{cases}$$

the conditions imposed on $\phi_1, \phi_2, \phi_3$ can be written as the set of six affine inequalities (7.3), two of which (those with $(\bar x, \bar w) = (1,0)$ and $(\bar x, \bar w) = (2,0)$) are satisfied automatically, while the other four are

$$\phi_2 - \phi_3 \le 0 \ \text{ at } (\bar x, \bar w) = (3,0),$$

$$\phi_2 - \phi_1 \le -2 \ \text{ at } (\bar x, \bar w) = (1,1),$$

$$\phi_3 - \phi_2 \le 1 \ \text{ at } (\bar x, \bar w) = (2,1),$$

$$\phi_1 - \phi_3 \le 1 \ \text{ at } (\bar x, \bar w) = (3,1).$$

Solutions of this linear program are given by $\phi_1 = c$, $\phi_2 = c - 2$, $\phi_3 = c - 1$,

where $c \in \mathbb{R}$ is arbitrary. It is customary to normalize storage and Lyapunov functions so that their minimum equals zero, which yields $c = 2$ and

$$\phi_1 = 2, \quad \phi_2 = 0, \quad \phi_3 = 1.$$

Now, summing the inequalities (7.2) from $t = 0$ to $t = T - 1$ yields

$$3 \sum_{t=0}^{T-1} y(t) \le V(x(0)) - V(x(T)) + \sum_{t=0}^{T-1} w(t),$$

which implies the desired relation between the numbers of 1's in the input and in the output, since $V(x(0)) - V(x(T))$ cannot be larger than 2.

### 7.1.2 Storage functions via cutting plane algorithms

The possibility of reducing the search for a valid storage function to convex optimization, as demonstrated by the example above, is a general trend. One general situation in which an efficient search for a storage function can be performed is when a cheap procedure for checking condition (7.3) (an *oracle*) is available. Assume that for every given element $V \in \mathcal{V}$ it is possible to find out whether condition (7.3) is satisfied and, when the answer is negative, to produce a pair of vectors $\bar x \in X$, $\bar w \in W$ for which the inequality in (7.3) does not hold.

Select a sufficiently large set $T_0$ (a polytope or an ellipsoid) in the space of the parameter vector $\phi = (\phi_q)_{q=1}^{N}$ (this set will limit the search for a valid storage function). Let $\phi^0$ be the center of $T_0$. Define $V$ by $\phi^0$ and apply the verification oracle to it. If $V$ is a valid storage function, the search ends successfully. Otherwise, the invalidity certificate $(\bar x, \bar w)$ produced by the oracle yields a hyperplane separating $\phi^0$ from the (unknown) set of $\phi$ defining valid storage functions, thus cutting a substantial portion from the search set $T_0$ and reducing it to a smaller set $T_1$. Now re-define $\phi^1$ as the center of $T_1$ and repeat the process, constructing a sequence of monotonically decreasing search sets $T_k$, until either a valid storage function is found or $T_k$ shrinks to nothing.
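As a concrete illustration, the sketch below (an assumption-laden toy, not part of the original notes; all function names and tolerances are made up for the demo) runs a central-cut ellipsoid method on Example 7.1: the oracle checks the dissipation inequality (7.3) over all six pairs $(\bar x, \bar w)$ of the automaton and, on failure, returns the gradient of the violated affine constraint as the separating hyperplane.

```python
import numpy as np

# Automaton and supply rate of Example 7.1.
f = {(1, 1): 2, (2, 1): 3, (3, 1): 1, (1, 0): 1, (2, 0): 2, (3, 0): 2}
g = lambda x, w: 1 if (x, w) == (1, 1) else 0
psi = lambda y, w: w - 3 * y

def oracle(phi, tol=1e-2):
    """Return None if V(x) = phi[x-1] satisfies (7.3); else a cut normal."""
    for (x, w), xn in f.items():
        if phi[xn - 1] - phi[x - 1] > psi(g(x, w), w) + tol:
            cut = np.zeros(3)
            cut[xn - 1] += 1.0   # gradient of V(f(x,w)) - V(x) w.r.t. phi
            cut[x - 1] -= 1.0
            return cut
    return None

def ellipsoid_search(oracle, phi, P, max_iter=2000):
    """Central-cut ellipsoid method over {x : (x-phi)' inv(P) (x-phi) <= 1}."""
    n = phi.size
    for _ in range(max_iter):
        c = oracle(phi)
        if c is None:
            return phi                      # valid storage function found
        c = c / np.sqrt(c @ P @ c)          # normalize the cut in the P metric
        phi = phi - (P @ c) / (n + 1)       # move the center past the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ c, P @ c))
    return None                             # search set shrank to nothing

phi = ellipsoid_search(oracle, np.zeros(3), 100.0 * np.eye(3))
print(phi is not None and oracle(phi) is None)   # True
```

A tolerance is added in the oracle because the exactly-feasible set of Example 7.1 is lower-dimensional; thickening it gives the positive volume the ellipsoid method needs in order to terminate.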
With an appropriate selection of the class of search sets $T_k$ (ellipsoids or polytopes are most frequently used) and with an adequate definition of a center (the so-called analytic center is used for polytopes), the volume of $T_k$ can be made to decrease exponentially, which yields fast convergence of the search algorithm.

### 7.1.3 Completion of squares

The success of the search procedure described in the previous section depends heavily on the choice of the basis functions $V_q$. A major difficulty to overcome is the verification of (7.3) for a given $V$. It turns out that the only known large linear space of functionals

$F : \mathbb{R}^n \to \mathbb{R}$ which admits an efficient check of non-negativity of its elements is the set of quadratic forms

$$F(\bar x) = \begin{bmatrix} \bar x \\ 1 \end{bmatrix}^{T} Q \begin{bmatrix} \bar x \\ 1 \end{bmatrix}, \qquad Q = Q^{T},$$

for which non-negativity is equivalent to positive semidefiniteness of the coefficient matrix $Q$.

This observation is exploited in the linear-quadratic case, when $f$, $g$ are linear functions and $\psi$ is a quadratic form:

$$f(\bar x, \bar w) = A\bar x + B\bar w, \quad g(\bar x, \bar w) = C\bar x + D\bar w, \quad \psi(\bar x, \bar w) = \begin{bmatrix} \bar x \\ \bar w \end{bmatrix}^{T} \Sigma \begin{bmatrix} \bar x \\ \bar w \end{bmatrix}.$$

Then it is natural to consider only quadratic storage function candidates $V(\bar x) = \bar x^{T} P \bar x$, for which the dissipation inequality $2\bar x^{T} P(A\bar x + B\bar w) \le \psi(\bar x, \bar w)$ transforms into the (symmetric) matrix inequality

$$\begin{bmatrix} PA + A^{T}P & PB \\ B^{T}P & 0 \end{bmatrix} \le \Sigma. \tag{7.5}$$

Since this inequality is linear with respect to its parameters $P$ and $\Sigma$, it can be solved relatively efficiently, even when additional linear constraints are imposed on $P$ and $\Sigma$.

Note that a quadratic functional is non-negative if and only if it can be represented as a sum of squares of linear functionals. The idea of checking non-negativity of a functional by trying to represent it as a sum of squares of functions from a given linear set can be used in searching for storage functions of general nonlinear systems as well. Indeed, let $\hat H : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^M$ and $\hat V : \mathbb{R}^n \to \mathbb{R}^N$ be arbitrary vector-valued functions. For every $\phi \in \mathbb{R}^N$, condition (7.3) with $V(\bar x) = \phi^{T} \hat V(\bar x)$ is implied by the identity

$$\phi^{T} \hat V(f(\bar x, \bar w)) - \phi^{T} \hat V(\bar x) + \hat H(\bar x, \bar w)^{T} S \hat H(\bar x, \bar w) = \psi(\bar x, \bar w) \quad \forall\, \bar x \in X,\ \bar w \in W, \tag{7.6}$$

as long as $S = S^{T} \ge 0$ is a positive semidefinite symmetric matrix. Note that both the storage function candidate parameter $\phi$ and the sum-of-squares parameter $S = S^{T} \ge 0$ enter constraint (7.6) linearly. Thus, the search for a valid storage function is reduced to a semidefinite program. In practice, the scalar components of the vector $\hat H$ should include enough elements that identity (7.6) can be achieved for every $\phi \in \mathbb{R}^N$ by choosing an appropriate $S = S^{T}$ (not necessarily positive semidefinite). For example, if $f$, $g$, $\psi$ are polynomials, it may be a good idea to use a polynomial $\hat V$ and to define $\hat H$ as the vector of monomials up to a given degree.
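The key fact used above — non-negativity of a quadratic functional is equivalent to positive semidefiniteness of $Q$, and a positive semidefinite $Q$ yields an explicit sum-of-squares decomposition — can be checked numerically. A minimal sketch (the polynomial below is an illustrative toy, not from the notes):

```python
import numpy as np

# Toy example: F(x) = x^2 - 2x + 1 = (x - 1)^2, written as z' Q z with
# z = [x; 1].  Non-negativity of F is equivalent to Q >= 0, and the
# eigendecomposition Q = sum_i lam_i v_i v_i' gives an explicit sum of squares.
Q = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

lam, vecs = np.linalg.eigh(Q)
print(lam.min() >= -1e-12)             # True: Q is positive semidefinite

def F(x):
    z = np.array([x, 1.0])
    return z @ Q @ z

def F_sos(x):                          # F rewritten as a sum of squares
    z = np.array([x, 1.0])
    return sum(l * (v @ z) ** 2 for l, v in zip(lam, vecs.T))

print(all(abs(F(x) - F_sos(x)) < 1e-9 for x in (-2.0, 0.0, 3.5)))  # True
```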

## 7.2 Storage functions with quadratic supply rates

As described in the previous section, one can search for storage functions by considering linearly parameterized sets of storage function candidates. It turns out that storage functions derived for subsystems of a given system can serve as convenient building blocks (i.e. as the components $V_q$ of $\hat V$). Indeed, assume that $V_q = V_q(x(t))$ are storage functions with supply rates $\psi_q = \psi_q(z(t))$. Typically, $z(t)$ includes $x(t)$ as a component, together with some additional elements, such as inputs, outputs, and other nonlinear combinations of system states and inputs. If the objective is to find a storage function $V$ with a given supply rate $\psi$, one can search for $V$ in the form

$$V(x(t)) = \sum_{q=1}^{N} \phi_q V_q(x(t)), \quad \phi_q \ge 0, \tag{7.7}$$

where the $\phi_q$ are the search parameters. Note that in this case it is known a priori that every $V$ in (7.7) is a storage function with supply rate

$$\psi(z(t)) = \sum_{q=1}^{N} \phi_q \psi_q(z(t)). \tag{7.8}$$

Therefore, in order to find a storage function with supply rate $\psi^0 = \psi^0(z(t))$, it is sufficient to find $\phi_q \ge 0$ such that

$$\sum_{q=1}^{N} \phi_q \psi_q(\bar z) \le \psi^0(\bar z) \quad \forall\, \bar z. \tag{7.9}$$

When $\psi^0$, $\psi_q$ are generic functions, even this simplified task can be difficult. However, in the important special case when $\psi^0$ and the $\psi_q$ are quadratic functionals, the search for $\phi_q$ in (7.9) becomes a semidefinite program. In this section, the use of storage functions with quadratic supply rates is discussed.

### 7.2.1 Storage functions for LTI systems

A quadratic form $V(\bar x) = \bar x^{T} P \bar x$ is a storage function for the LTI system

$$\dot x = Ax + Bw \tag{7.10}$$

with quadratic supply rate

$$\psi(\bar x, \bar w) = \begin{bmatrix} \bar x \\ \bar w \end{bmatrix}^{T} \Sigma \begin{bmatrix} \bar x \\ \bar w \end{bmatrix}$$

if and only if matrix inequality (7.5) is satisfied. The well-known Kalman-Popov-Yakubovich lemma (or positive real lemma) gives a useful frequency domain condition for the existence of such a $P = P^{T}$ for given $A$, $B$, $\Sigma$.

**Theorem 7.1** Assume that the pair $(A, B)$ is controllable. A symmetric matrix $P = P^{T}$ satisfying (7.5) exists if and only if

$$\begin{bmatrix} \bar x \\ \bar w \end{bmatrix}^{*} \Sigma \begin{bmatrix} \bar x \\ \bar w \end{bmatrix} \ge 0 \ \text{ whenever } \ j\omega \bar x = A\bar x + B\bar w \ \text{ for some } \omega \in \mathbb{R}. \tag{7.11}$$

Moreover, if there exists a matrix $K$ such that $A + BK$ is a Hurwitz matrix and

$$\begin{bmatrix} I \\ K \end{bmatrix}^{T} \Sigma \begin{bmatrix} I \\ K \end{bmatrix} \le 0,$$

then all such matrices $P = P^{T}$ are positive semidefinite.

**Example 7.2** Let $G(s) = C(sI - A)^{-1}B + D$ be a stable transfer function (i.e. matrix $A$ is a Hurwitz matrix) with a controllable pair $(A, B)$. Then $\|G(j\omega)\| \le 1$ for all $\omega \in \mathbb{R}$ if and only if there exists $P = P^{T} \ge 0$ such that

$$2\bar x^{T} P(A\bar x + B\bar w) \le |\bar w|^2 - |C\bar x + D\bar w|^2 \quad \forall\, \bar x \in \mathbb{R}^n,\ \bar w \in \mathbb{R}^m.$$

This can be proven by applying Theorem 7.1 with

$$\psi(\bar x, \bar w) = |\bar w|^2 - |C\bar x + D\bar w|^2$$

and $K = 0$.

### 7.2.2 Storage functions for sector nonlinearities

Whenever two components $v = v(t)$ and $w = w(t)$ of the system trajectory $z = z(t)$ are related in such a way that the pair $(v(t), w(t))$ lies in the cone between the two lines $w = k_1 v$ and $w = k_2 v$, $V \equiv 0$ is a storage function for the supply rate

$$\psi(z(t)) = (w(t) - k_1 v(t))(k_2 v(t) - w(t)).$$

For example, if $w(t) = v(t)^3$ then $V \equiv 0$ is a storage function for $\psi(z(t)) = v(t)w(t)$. If $w(t) = \sin(t)\sin(v(t))$ then $V \equiv 0$ is a storage function for $\psi(z(t)) = v(t)^2 - w(t)^2$.

### 7.2.3 Storage functions for a scalar memoryless nonlinearity

Whenever two components $v = v(t)$ and $w = w(t)$ of the system trajectory $z = z(t)$ are related by $w(t) = \varphi(v(t))$, where $\varphi : \mathbb{R} \to \mathbb{R}$ is an integrable function and $v(t)$ is a component of the system state, $V(x(t)) = \Phi(v(t))$ is a storage function with supply rate $\psi(z(t)) = \dot v(t) w(t)$, where

$$\Phi(y) = \int_0^{y} \varphi(\sigma)\, d\sigma.$$
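The dissipation inequality of Example 7.2 can be tested numerically on a concrete first-order system; the numbers below are illustrative assumptions, not from the notes. For $G(s) = 1/(s+2)$ (so $A = -2$, $B = 1$, $C = 1$, $D = 0$, and $|G(j\omega)| \le 1/2 \le 1$), the choice $P = 1$ is a certificate, since the inequality $2\bar x P(A\bar x + B\bar w) \le |\bar w|^2 - |C\bar x + D\bar w|^2$ amounts to positive semidefiniteness of a 2-by-2 matrix:

```python
import numpy as np

# Illustrative check: G(s) = 1/(s+2), candidate storage matrix P = 1.
# The quadratic inequality above holds for all (x, w) iff M >= 0, where
# M = [[-(A'P+PA) - C'C, -PB - C'D], [-B'P - D'C, I - D'D]].
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
P = np.array([[1.0]])

M = np.block([[-(A.T @ P + P @ A) - C.T @ C, -(P @ B) - C.T @ D],
              [-(B.T @ P) - D.T @ C,          np.eye(1) - D.T @ D]])

eigs = np.linalg.eigvalsh(M)
print(eigs.min() >= -1e-12)   # True: P = 1 certifies the gain bound
```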

## 7.3 Implicit storage functions

A number of important results in nonlinear system analysis rely on storage functions for which no explicit formula is known. It is frequently sufficient to provide a lower bound for the storage function (for example, to know that it takes only non-negative values) and to have an analytical expression for the supply rate function $\psi$. In order to work with such implicit storage functions, it is helpful to have theorems which guarantee the existence of non-negative storage functions for a given supply rate. In this regard, Theorem 7.1 can be considered an example of such a result, stating the existence of a storage function for a linear time-invariant system as an implication of a frequency-dependent matrix inequality. In this section we present a number of such statements which can be applied to nonlinear systems.

### 7.3.1 Implicit storage functions for abstract systems

Consider a system defined by a behavioral set $B = \{z\}$ of functions $z : [0, \infty) \to \mathbb{R}^q$. As usual, the system can be autonomous, in which case $z(t)$ is the output at time $t$, or with an input, in which case $z(t) = [v(t); w(t)]$ combines the vector input $v(t)$ and the vector output $w(t)$.

**Theorem 7.2** Let $\psi : \mathbb{R}^q \to \mathbb{R}$ be a function and let $B$ be a behavioral set consisting of functions $z : [0, \infty) \to \mathbb{R}^q$. Assume that the composition $\psi(z(t))$ is integrable over every bounded interval $(t_0, t_1)$ in $\mathbb{R}_+$ for all $z \in B$. For $t_0, t \in \mathbb{R}_+$ define

$$I(z, t_0, t) = \int_{t_0}^{t} \psi(z(\tau))\, d\tau.$$

The following conditions are equivalent:

(a) for every $z_0 \in B$ and $t_0 \in \mathbb{R}_+$ the set of values $I(z, t_0, t)$, taken for all $t \ge t_0$ and for all $z \in B$ defining the same state as $z_0$ at time $t_0$, is bounded from below;

(b) there exists a non-negative storage function $V : B \times \mathbb{R}_+ \to \mathbb{R}_+$ (such that $V(z_1, t) = V(z_2, t)$ whenever $z_1$ and $z_2$ define the same state of $B$ at time $t$) with supply rate $\psi$.
Moreover, when condition (a) is satisfied, a storage function $V$ from (b) can be defined by

$$V(z_0(\cdot), t_0) = -\inf I(z, t_0, t), \tag{7.12}$$

where the infimum is taken over all $t \ge t_0$ and over all $z \in B$ defining the same state as $z_0$ at time $t_0$.

**Proof** The implication (b) $\Rightarrow$ (a) follows directly from the definition of a storage function, which requires

$$V(z_0, t_1) - V(z_0, t_0) \le I(z, t_0, t_1) \tag{7.13}$$

for $t_1 \ge t_0$, $z_0 \in B$. Combining this with $V \ge 0$ yields

$$I(z, t_0, t_1) \ge -V(z, t_0) = -V(z_0, t_0)$$

for all $z$, $z_0$ defining the same state of $B$ at time $t_0$.

Now let us assume that (a) is valid. Then a finite infimum in (7.12) exists (as an infimum over a non-empty set bounded from below) and is not positive (since $I(z_0, t_0, t_0) = 0$). Hence $V$ is correctly defined and non-negative. To finish the proof, let us show that (7.13) holds. Indeed, if $z_1$ defines the same state as $z_0$ at time $t_1$, then

$$z_{01}(t) = \begin{cases} z_0(t), & t \le t_1, \\ z_1(t), & t > t_1, \end{cases}$$

defines the same state as $z_0$ at time $t_0 < t_1$ (explain why). Hence the infimum of $I(z, t_0, t)$ in the definition of $V$ is not larger than the infimum of the integrals of all such $z_{01}$, over intervals of length not smaller than $t_1 - t_0$. These integrals can in turn be decomposed as

$$I(z_{01}, t_0, t) = I(z_0, t_0, t_1) + I(z_1, t_1, t),$$

which yields the desired inequality.

### 7.3.2 Storage functions for ODE models

As an important special case of Theorem 7.2, consider the ODE model

$$\dot x(t) = f(x(t), w(t)), \tag{7.14}$$

defined by a function $f : X \times W \to \mathbb{R}^n$, where $X$, $W$ are subsets of $\mathbb{R}^n$ and $\mathbb{R}^m$ respectively. Consider the behavior set $B$ consisting of all functions $z(t) = [x(t); w(t)]$ where $x : [0, \infty) \to X$ is a solution of (7.14). In this case, two signals $z_1 = [x_1; w_1]$ and $z_2 = [x_2; w_2]$ define the same state of $B$ at time $t_0$ if and only if $x_1(t_0) = x_2(t_0)$. Therefore, according to Theorem 7.2, for a given function $\psi : X \times W \to \mathbb{R}$, the existence of a function $V : X \times \mathbb{R}_+ \to \mathbb{R}_+$ such that

$$V(x(t_2), t_2) - V(x(t_1), t_1) \le \int_{t_1}^{t_2} \psi(x(t), w(t))\, dt \quad \text{for all } 0 \le t_1 \le t_2,\ [x; w] \in B$$

is equivalent to finiteness of the infimum of the integrals

$$\int_{t_0}^{t} \psi(x(\tau), w(\tau))\, d\tau$$

over all solutions of (7.14) with a fixed $x(t_0) = x_0$ which can be extended to the time interval $[0, \infty)$.

In the case when $X = \mathbb{R}^n$ and $f : \mathbb{R}^n \times W \to \mathbb{R}^n$ is such that existence and uniqueness of solutions $x : [0, \infty) \to \mathbb{R}^n$ is guaranteed for all locally integrable inputs $w : [0, \infty) \to W$ and all initial conditions $x(t_0) = x_0 \in \mathbb{R}^n$, the infimum in (7.12) (and hence the corresponding storage function) does not depend on time. If, in addition, $f$ is continuous and $V$ is continuously differentiable, the well-known dynamic programming condition

$$\lim_{\epsilon \to 0^+}\ \inf_{\bar w \in W,\ \bar x \in B_\epsilon(\bar x_0)} \big\{ \psi(\bar x, \bar w) - \nabla V(\bar x) f(\bar x, \bar w) \big\} = \inf_{\bar w \in W} \big\{ \psi(\bar x_0, \bar w) - \nabla V(\bar x_0) f(\bar x_0, \bar w) \big\} \ge 0 \quad \forall\, \bar x_0 \in \mathbb{R}^n \tag{7.15}$$

will be satisfied. However, using (7.15) requires a lot of caution in most cases since, even for very smooth $f$, $\psi$, the resulting storage function $V$ does not have to be differentiable.

### 7.3.3 Zames-Falb quadratic supply rate

A non-trivial and powerful case of an implicitly defined storage function with a quadratic supply rate was introduced in the late 60's by G. Zames and P. Falb.

**Theorem 7.3** Let $A$, $B$, $C$ be matrices such that $A$ is a Hurwitz matrix, and

$$\int_0^{\infty} |C e^{At} B|\, dt < 1.$$

Let $\varphi : \mathbb{R} \to \mathbb{R}$ be a monotonic odd function such that

$$0 \le \bar w\, \varphi(\bar w) \le \bar w^2 \quad \forall\, \bar w \in \mathbb{R}.$$

Then the system

$$\dot x(t) = A x(t) + B w(t)$$

has a non-negative storage function with supply rate

$$\psi_+(\bar x, \bar w) = (\bar w - \varphi(\bar w))(\bar w - C\bar x),$$

and the system

$$\dot x(t) = A x(t) + B\,(w(t) - \varphi(w(t)))$$

has a non-negative storage function with supply rate

$$\psi_-(\bar x, \bar w) = (\bar w - \varphi(\bar w) - C\bar x)\, \bar w.$$

The proof of Theorem 7.3 begins by establishing that, for every function $h : \mathbb{R} \to \mathbb{R}$ with $L_1$ norm not exceeding 1, and for every square integrable function $w : \mathbb{R} \to \mathbb{R}$, the integral

$$\int (w(t) - \varphi(w(t)))\, y(t)\, dt, \quad \text{where } y = h * w,$$

is non-negative. This verifies that the assumptions of Theorem 7.2 are satisfied, and proves the existence of the corresponding storage function without actually finding it. Combining the Zames-Falb supply rate with the statement of the Kalman-Yakubovich-Popov lemma yields the following stability criterion.

**Theorem 7.4** Assume that the matrices $A_p$, $B_p$, $C_p$ are such that $A_p$ is a Hurwitz matrix, and there exists $\gamma > 0$ such that

$$\mathrm{Re}\,\big(1 - G(j\omega)\big)\big(1 - H(j\omega)\big) \ge \gamma \quad \forall\, \omega \in \mathbb{R},$$

where $H$ is the Fourier transform of a function with $L_1$ norm not exceeding 1, and

$$G(s) = C_p (sI - A_p)^{-1} B_p.$$

Then the system

$$\dot x(t) = A_p x(t) + B_p\, \varphi\big(C_p x(t) + v(t)\big)$$

has finite $L_2$ gain, in the sense that there exists $\theta > 0$ such that

$$\int_0^{\infty} |x(t)|^2\, dt \le \theta \left( |x(0)|^2 + \int_0^{\infty} |v(t)|^2\, dt \right)$$

for all solutions.

## 7.4 Example with cubic nonlinearity and delay

Consider the following system of differential equations (suggested by Petar Kokotovich), with an uncertain constant delay parameter $\phi$:

$$\dot x_1(t) = -x_1(t)^3 - x_2(t - \phi)^3, \tag{7.16}$$

$$\dot x_2(t) = x_1(t) - x_2(t). \tag{7.17}$$

Analysis of this system is easy when $\phi = 0$, and becomes more difficult when $\phi$ is an arbitrary constant in the interval $[0, \phi_0]$. The system is not exponentially stable for any value of $\phi$. Our objective is to show that, despite the absence of exponential stability, the method of storage functions with quadratic supply rates works.

### The case $\phi = 0$

For $\phi = 0$, we begin by describing (7.16),(7.17) by the behavior set $Z = \{z = [x_1; x_2; w_1; w_2]\}$, where

$$w_1 = x_1^3, \quad w_2 = x_2^3, \quad \dot x_1 = -w_1 - w_2, \quad \dot x_2 = x_1 - x_2.$$

The quadratic supply rates for $Z$ which follow from its linear equations are given by

$$\psi_{LTI}(z) = 2 \begin{bmatrix} -w_1 - w_2 \\ x_1 - x_2 \end{bmatrix}^{T} P \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},$$

where $P = P^{T}$ is an arbitrary symmetric 2-by-2 matrix, defining the storage function

$$V_{LTI}(z(\cdot), t) = x(t)^{T} P x(t).$$

Among the non-trivial quadratic supply rates $\psi$ valid for $Z$, the simplest are defined by

$$\psi_{NL}(z) = d_1 x_1 w_1 + d_2 x_2 w_2 + q_1 w_1(-w_1 - w_2) + q_2 w_2 (x_1 - x_2),$$

with the storage function

$$V_{NL}(z(\cdot), t) = 0.25\,\big(q_1 x_1(t)^4 + q_2 x_2(t)^4\big),$$

where $d_k \ge 0$. It turns out (and is easy to verify) that the only convex combinations of these supply rates which yield $\psi \le 0$ are the ones that make $\psi = \psi_{LTI} + \psi_{NL} = 0$, for example

$$P = \begin{bmatrix} 0.5 & 0 \\ 0 & 0 \end{bmatrix}, \quad d_1 = d_2 = q_2 = 1, \quad q_1 = 0.$$

The absence of strictly negative definite supply rates corresponds to the fact that the system is not exponentially stable. Nevertheless, a Lyapunov function candidate can be constructed from the given solution:

$$V(x) = x^{T} P x + 0.25\,(q_1 x_1^4 + q_2 x_2^4) = 0.5\,x_1^2 + 0.25\,x_2^4.$$

This Lyapunov function can be used along standard lines to prove global asymptotic stability of the equilibrium $x = 0$ in system (7.16),(7.17).

### The general case

Now consider the case when $\phi \in [0, 0.2]$ is an uncertain parameter. To show that the delayed system (7.16),(7.17) remains stable when $\phi \le 0.2$, (7.16),(7.17) can be represented by a more elaborate behavior set $Z = \{z(\cdot)\}$ with

$$z = [x_1; x_2; w_1; w_2; w_3; w_4; w_5; w_6] \in \mathbb{R}^8,$$

satisfying the LTI relations

$$\dot x_1 = -w_1 - w_2 + w_3, \quad \dot x_2 = x_1 - x_2,$$

and the nonlinear/infinite-dimensional relations

$$w_1 = x_1^3, \quad w_2 = x_2^3, \quad w_3 = x_2^3 - (x_2 + w_4)^3,$$

$$w_4(t) = x_2(t - \phi) - x_2(t), \quad w_5 = w_4^3, \quad w_6 = (x_1 - x_2)^3.$$

Some additional supply rates/storage functions are needed to bound the new variables. These will be selected from the perspective of a small gain argument. Note that the

perturbation $w_4$ can easily be bounded in terms of $\dot x_2 = x_1 - x_2$. In fact, the LTI system with transfer function $(e^{-\phi s} - 1)/s$ has a small gain (in almost any sense) when $\phi$ is small. Hence a small gain argument would be applicable, provided that the gain from $w_4$ to $\dot x_2$ could be bounded as well. It turns out that the $L_2$-induced gain from $w_4$ to $\dot x_2$ is unbounded. Instead, we can use $L_4$ norms. Indeed, the last two components $w_5$, $w_6$ of $w$ were introduced in order to handle $L_4$ norms within the framework of quadratic supply rates. More specifically, in addition to the usual supply rate

$$\psi_{LTI}(z) = 2 \begin{bmatrix} -w_1 - w_2 + w_3 \\ x_1 - x_2 \end{bmatrix}^{T} P \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},$$

the set $Z$ has supply rates

$$\begin{aligned} \psi(z) = {}& d_1 x_1 w_1 + d_2 x_2 w_2 + q_1 w_1(-w_1 - w_2 + w_3) + q_2 w_2 (x_1 - x_2) \\ & + d_3\,\big[\,0.99\,(x_1 w_1 + x_2 w_2) - x_1 w_3 + 25\,w_4 w_5 - 0.04\,(x_1 - x_2) w_6\,\big] \\ & + q_3\,\big[\,0.2^4\,(x_1 - x_2) w_6 - w_4 w_5\,\big], \qquad d_i \ge 0. \end{aligned}$$

Here the supply rates with coefficients $d_1$, $d_2$, $q_1$, $q_2$ are the same as before. The term with $d_3$, based on a zero storage function, follows from the inequality

$$0.04\,(x_1 - x_2)^4 + x_1\big(x_2^3 - (x_2 + w_4)^3\big) \le 0.99\,(x_1^4 + x_2^4) + 25\,w_4^4$$

(which is satisfied for all real numbers $x_1$, $x_2$, $w_4$, and can be checked numerically). The term with $q_3$ follows from a gain bound on the transfer function $G_\phi(s) = (e^{-\phi s} - 1)/s$ from $x_1 - x_2$ to $w_4$: it is easy to verify that the $L_1$ norm of its impulse response equals $\phi$, and hence the $L_4$-induced gain of the causal LTI system with transfer function $G_\phi$ will not exceed $\phi$. Consider the function

$$V_d(v(\cdot), T) = -\inf_{v_1} \int_T^{\infty} \left[\, 0.2^4\, v_1(t)^4 - \left( \int_{t - \phi}^{t} v_1(r)\, dr \right)^{4} \right] dt, \tag{7.18}$$

where the infimum is taken over all functions $v_1$ which are square integrable on $(0, \infty)$ and such that $v_1(t) = v(t)$ for $t \le T$. Because the $L_4$ gain bound of $G_\phi$ with $\phi \in [0, 0.2]$ does not exceed 0.2, the infimum in (7.18) is finite. Since we can always use $v_1(t) = 0$ for $t > T$, the infimum is non-positive, and hence $V_d$ is non-negative. The IQC defined by the $q_3$ term holds with $V = q_3 V_d(x_1 - x_2, t)$.
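The gain bound used for the $q_3$ term can be spot-checked in the frequency domain (an illustrative sketch, not part of the notes): the impulse response of $(e^{-\phi s} - 1)/s$ equals $-1$ on $[0, \phi]$ and $0$ elsewhere, so its $L_1$ norm is $\phi$, and therefore $|(e^{-j\omega\phi} - 1)/(j\omega)| = 2|\sin(\omega\phi/2)|/|\omega| \le \phi$ for all $\omega$:

```python
import numpy as np

# Frequency-domain check of the gain bound for (exp(-phi*s) - 1)/s, phi = 0.2.
phi = 0.2
omega = np.linspace(1e-6, 200.0, 100000)
gain = np.abs(np.exp(-1j * phi * omega) - 1.0) / omega
print(gain.max() <= phi + 1e-9)   # True: the gain never exceeds phi
```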
Let

$$\psi_0(z) = 0.01\,(x_1 w_1 + x_2 w_2) = 0.01\,(x_1^4 + x_2^4),$$

which reflects our intention to show that $x_1$, $x_2$ will be integrable with fourth power over $(0, \infty)$. Using

$$P = \begin{bmatrix} 0.5 & 0 \\ 0 & 0 \end{bmatrix}, \quad d_1 = d_2 = 0.01, \quad d_3 = q_2 = 1, \quad q_1 = 0, \quad q_3 = 25$$

yields the Lyapunov function

$$V(x_e(t)) = 0.5\,x_1(t)^2 + 0.25\,x_2(t)^4 + 25\,V_d(x_1 - x_2, t),$$

where $x_e$ is the total state of the system (in this case, $x_e(T) = [x(T); v_T(\cdot)]$, where $v_T(\cdot) \in L_2(0, \phi)$ denotes the signal $v(t) = x_1(T - \phi + t) - x_2(T - \phi + t)$ restricted to the interval $t \in (0, \phi)$). It follows that

$$\frac{dV(x_e(t))}{dt} \le -0.01\,\big(x_1(t)^4 + x_2(t)^4\big).$$

On the other hand, we saw previously that $V(x_e(t)) \ge 0$ is bounded from below. Therefore, $x_1(\cdot), x_2(\cdot) \in L_4$ (the fourth powers of $x_1$, $x_2$ are integrable over $(0, \infty)$) as long as the initial conditions are bounded. Thus, the equilibrium $x = 0$ in system (7.16),(7.17) is stable for $0 \le \phi \le 0.2$.
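The conclusions of this section can be probed by direct simulation (an illustrative sketch, not part of the notes): forward-Euler integration of (7.16),(7.17), with a history buffer implementing the delay. For $\phi = 0$ the Lyapunov function $V(x) = 0.5 x_1^2 + 0.25 x_2^4$ of the $\phi = 0$ case should be nonincreasing along trajectories, and for $\phi = 0.2$ the state should still decay toward the equilibrium $x = 0$. The step size, horizon, and initial condition are arbitrary choices.

```python
import numpy as np

def simulate(phi, x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of (7.16)-(7.17) with constant delay phi."""
    lag = int(round(phi / dt))
    x1, x2 = x0
    hist = [x2] * (lag + 1)          # past values of x2; hist[0] = x2(t - phi)
    traj = [(x1, x2)]
    for _ in range(steps):
        x2d = hist[0]                # delayed value x2(t - phi)
        x1, x2 = x1 + dt * (-x1**3 - x2d**3), x2 + dt * (x1 - x2)
        hist = hist[1:] + [x2]
        traj.append((x1, x2))
    return traj

V = lambda p: 0.5 * p[0]**2 + 0.25 * p[1]**4

traj0 = simulate(0.0, (1.5, -2.0))
vals = [V(p) for p in traj0]
print(all(b <= a + 1e-6 for a, b in zip(vals, vals[1:])))  # True: V nonincreasing
print(V(traj0[-1]) < 0.05 * vals[0])                       # True: V decays

traj = simulate(0.2, (1.5, -2.0))
print(V(traj[-1]) < 0.05 * V(traj[0]))                     # True: decays with delay
```

As the analysis predicts, the decay is asymptotic but not exponential, so the simulated state approaches zero only slowly.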


### Fuzzy Differential Systems and the New Concept of Stability

Nonlinear Dynamics and Systems Theory, 1(2) (2001) 111 119 Fuzzy Differential Systems and the New Concept of Stability V. Lakshmikantham 1 and S. Leela 2 1 Department of Mathematical Sciences, Florida

### Lecture 5: Conic Optimization: Overview

EE 227A: Conve Optimization and Applications January 31, 2012 Lecture 5: Conic Optimization: Overview Lecturer: Laurent El Ghaoui Reading assignment: Chapter 4 of BV. Sections 3.1-3.6 of WTB. 5.1 Linear

### Some Notes on Taylor Polynomials and Taylor Series

Some Notes on Taylor Polynomials and Taylor Series Mark MacLean October 3, 27 UBC s courses MATH /8 and MATH introduce students to the ideas of Taylor polynomials and Taylor series in a fairly limited

Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

### Algebra II Pacing Guide First Nine Weeks

First Nine Weeks SOL Topic Blocks.4 Place the following sets of numbers in a hierarchy of subsets: complex, pure imaginary, real, rational, irrational, integers, whole and natural. 7. Recognize that the

### Linear Programming, Lagrange Multipliers, and Duality Geoff Gordon

lp.nb 1 Linear Programming, Lagrange Multipliers, and Duality Geoff Gordon lp.nb 2 Overview This is a tutorial about some interesting math and geometry connected with constrained optimization. It is not

### Math212a1010 Lebesgue measure.

Math212a1010 Lebesgue measure. October 19, 2010 Today s lecture will be devoted to Lebesgue measure, a creation of Henri Lebesgue, in his thesis, one of the most famous theses in the history of mathematics.

### Control Systems Design

ELEC4410 Control Systems Design Lecture 17: State Feedback Julio H Braslavsky julio@eenewcastleeduau School of Electrical Engineering and Computer Science Lecture 17: State Feedback p1/23 Outline Overview

### MATHEMATICAL METHODS OF STATISTICS

MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS

### 1. R In this and the next section we are going to study the properties of sequences of real numbers.

+a 1. R In this and the next section we are going to study the properties of sequences of real numbers. Definition 1.1. (Sequence) A sequence is a function with domain N. Example 1.2. A sequence of real

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### Walrasian Demand. u(x) where B(p, w) = {x R n + : p x w}.

Walrasian Demand Econ 2100 Fall 2015 Lecture 5, September 16 Outline 1 Walrasian Demand 2 Properties of Walrasian Demand 3 An Optimization Recipe 4 First and Second Order Conditions Definition Walrasian

### On using numerical algebraic geometry to find Lyapunov functions of polynomial dynamical systems

Dynamics at the Horsetooth Volume 2, 2010. On using numerical algebraic geometry to find Lyapunov functions of polynomial dynamical systems Eric Hanson Department of Mathematics Colorado State University

### Lecture 2: August 29. Linear Programming (part I)

10-725: Convex Optimization Fall 2013 Lecture 2: August 29 Lecturer: Barnabás Póczos Scribes: Samrachana Adhikari, Mattia Ciollaro, Fabrizio Lecci Note: LaTeX template courtesy of UC Berkeley EECS dept.

### Math 317 HW #5 Solutions

Math 317 HW #5 Solutions 1. Exercise 2.4.2. (a) Prove that the sequence defined by x 1 = 3 and converges. x n+1 = 1 4 x n Proof. I intend to use the Monotone Convergence Theorem, so my goal is to show

### THE FUNDAMENTAL THEOREM OF ARBITRAGE PRICING

THE FUNDAMENTAL THEOREM OF ARBITRAGE PRICING 1. Introduction The Black-Scholes theory, which is the main subject of this course and its sequel, is based on the Efficient Market Hypothesis, that arbitrages

### Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

### The Heat Equation. Lectures INF2320 p. 1/88

The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)

### Tutorial on Convex Optimization for Engineers Part I

Tutorial on Convex Optimization for Engineers Part I M.Sc. Jens Steinwandt Communications Research Laboratory Ilmenau University of Technology PO Box 100565 D-98684 Ilmenau, Germany jens.steinwandt@tu-ilmenau.de

### Course 221: Analysis Academic year , First Semester

Course 221: Analysis Academic year 2007-08, First Semester David R. Wilkins Copyright c David R. Wilkins 1989 2007 Contents 1 Basic Theorems of Real Analysis 1 1.1 The Least Upper Bound Principle................

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### Iterative Techniques in Matrix Algebra. Jacobi & Gauss-Seidel Iterative Techniques II

Iterative Techniques in Matrix Algebra Jacobi & Gauss-Seidel Iterative Techniques II Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin

### A characterization of trace zero symmetric nonnegative 5x5 matrices

A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the

### Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem

### The minimum number of monochromatic 4-term progressions

The minimum number of monochromatic 4-term progressions Rutgers University May 28, 2009 Monochromatic 3-term progressions Monochromatic 4-term progressions 1 Introduction Monochromatic 3-term progressions

### Introduction to Convex Optimization for Machine Learning

Introduction to Convex Optimization for Machine Learning John Duchi University of California, Berkeley Practical Machine Learning, Fall 2009 Duchi (UC Berkeley) Convex Optimization for Machine Learning

### Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

### Convex analysis and profit/cost/support functions

CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences Convex analysis and profit/cost/support functions KC Border October 2004 Revised January 2009 Let A be a subset of R m

### ANALYTICAL MATHEMATICS FOR APPLICATIONS 2016 LECTURE NOTES Series

ANALYTICAL MATHEMATICS FOR APPLICATIONS 206 LECTURE NOTES 8 ISSUED 24 APRIL 206 A series is a formal sum. Series a + a 2 + a 3 + + + where { } is a sequence of real numbers. Here formal means that we don

### NORMAL FORMS, STABILITY AND SPLITTING OF INVARIANT MANIFOLDS II. FINITELY DIFFERENTIABLE HAMILTONIANS ABED BOUNEMOURA

NORMAL FORMS, STABILITY AND SPLITTING OF INVARIANT MANIFOLDS II. FINITELY DIFFERENTIABLE HAMILTONIANS ABED BOUNEMOURA Abstract. This paper is a sequel to Normal forms, stability and splitting of invariant

### 5. Convergence of sequences of random variables

5. Convergence of sequences of random variables Throughout this chapter we assume that {X, X 2,...} is a sequence of r.v. and X is a r.v., and all of them are defined on the same probability space (Ω,

### Lecture 3. Mathematical Induction

Lecture 3 Mathematical Induction Induction is a fundamental reasoning process in which general conclusion is based on particular cases It contrasts with deduction, the reasoning process in which conclusion

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

### Lecture 14: Section 3.3

Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in

### Mathematical Methods of Engineering Analysis

Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................

### ORDERS OF ELEMENTS IN A GROUP

ORDERS OF ELEMENTS IN A GROUP KEITH CONRAD 1. Introduction Let G be a group and g G. We say g has finite order if g n = e for some positive integer n. For example, 1 and i have finite order in C, since

### Some Problems of Second-Order Rational Difference Equations with Quadratic Terms

International Journal of Difference Equations ISSN 0973-6069, Volume 9, Number 1, pp. 11 21 (2014) http://campus.mst.edu/ijde Some Problems of Second-Order Rational Difference Equations with Quadratic

### 15 Limit sets. Lyapunov functions

15 Limit sets. Lyapunov functions At this point, considering the solutions to ẋ = f(x), x U R 2, (1) we were most interested in the behavior of solutions when t (sometimes, this is called asymptotic behavior

### ON GALOIS REALIZATIONS OF THE 2-COVERABLE SYMMETRIC AND ALTERNATING GROUPS

ON GALOIS REALIZATIONS OF THE 2-COVERABLE SYMMETRIC AND ALTERNATING GROUPS DANIEL RABAYEV AND JACK SONN Abstract. Let f(x) be a monic polynomial in Z[x] with no rational roots but with roots in Q p for

### Module MA1S11 (Calculus) Michaelmas Term 2016 Section 3: Functions

Module MA1S11 (Calculus) Michaelmas Term 2016 Section 3: Functions D. R. Wilkins Copyright c David R. Wilkins 2016 Contents 3 Functions 43 3.1 Functions between Sets...................... 43 3.2 Injective

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES

58 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 you saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c nn c nn...

### Prime Numbers. Chapter Primes and Composites

Chapter 2 Prime Numbers The term factoring or factorization refers to the process of expressing an integer as the product of two or more integers in a nontrivial way, e.g., 42 = 6 7. Prime numbers are

### Systems of Linear Equations

Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

### These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

### Cubic Functions: Global Analysis

Chapter 14 Cubic Functions: Global Analysis The Essential Question, 231 Concavity-sign, 232 Slope-sign, 234 Extremum, 235 Height-sign, 236 0-Concavity Location, 237 0-Slope Location, 239 Extremum Location,

### EE 580 Linear Control Systems VI. State Transition Matrix

EE 580 Linear Control Systems VI. State Transition Matrix Department of Electrical Engineering Pennsylvania State University Fall 2010 6.1 Introduction Typical signal spaces are (infinite-dimensional vector

### CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

### HOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)!

Math 7 Fall 205 HOMEWORK 5 SOLUTIONS Problem. 2008 B2 Let F 0 x = ln x. For n 0 and x > 0, let F n+ x = 0 F ntdt. Evaluate n!f n lim n ln n. By directly computing F n x for small n s, we obtain the following

### Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

### since by using a computer we are limited to the use of elementary arithmetic operations

> 4. Interpolation and Approximation Most functions cannot be evaluated exactly: x, e x, ln x, trigonometric functions since by using a computer we are limited to the use of elementary arithmetic operations

### TOPIC 3: CONTINUITY OF FUNCTIONS

TOPIC 3: CONTINUITY OF FUNCTIONS. Absolute value We work in the field of real numbers, R. For the study of the properties of functions we need the concept of absolute value of a number. Definition.. Let