5.1 Dynamic programming and the HJB equation


Lecture 20

Cold war era: Pontryagin vs. Bellman.

"In place of determining the optimal sequence of decisions from the fixed state of the system, we wish to determine the optimal decision to be made at any state of the system. Only if we know the latter, do we understand the intrinsic structure of the solution." (Richard Bellman, Dynamic Programming; quoted in [Vinter, p. 435])

Reading assignment: read and recall the derivation of the HJB equation done in the ECE 515 notes. Other references: Yong-Zhou, A-F, Vinter, Bressan (slides and tutorial paper), Sontag.

Motivation: the discrete problem

Bellman's dynamic programming approach is quite general, but it is probably easiest to understand for a purely discrete system (see, e.g., [Sontag, Sect. 8.1]):

x_{k+1} = f(x_k, u_k), k = 0, 1, ..., T-1

where x_k ∈ X, a finite set of cardinality N, and u_k ∈ U, a finite set of cardinality M (T, N, M are positive integers).

[Figure 5.1: Discrete time: going forward]

Each transition has a cost, plus possibly a terminal cost. Goal: minimize the total cost.

Most naive approach: starting at x_0, enumerate all possible trajectories, calculate the cost for each, compare, and select the optimal one. Computational effort? (done offline) There are M^T possible control sequences, and we need to add T terms to find the cost of each, so roughly M^T T operations (or O(M^T T)).

Alternative approach: let's go backwards! At k = T, terminal costs are known for each x. At k = T-1: for each x, find to which x we should jump at k = T so as to have the smallest cost-to-go (one-step running cost plus terminal cost). Write this cost next to x, and mark the selected path (in the picture).
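Before continuing, here is a minimal sketch of this backward recursion in Python (added for illustration and not part of the notes; the dynamics, costs, and the names V and policy are made-up):

# Hypothetical finite problem: states X, controls U, horizon T (made-up data).
X = [0, 1, 2]            # N = 3 states
U = [0, 1]               # M = 2 controls
T = 4                    # horizon

def f(x, u):             # made-up dynamics x_{k+1} = f(x_k, u_k)
    return (x + u) % len(X)

def cost(x, u):          # made-up one-step transition cost
    return (x - 1) ** 2 + u

def terminal(x):         # made-up terminal cost
    return abs(x - 2)

# Backward recursion: V[k][x] is the optimal cost-to-go from state x at time k.
V = {T: {x: terminal(x) for x in X}}
policy = {}
for k in range(T - 1, -1, -1):
    V[k], policy[k] = {}, {}
    for x in X:
        # compare all M controls: one-step cost plus cost-to-go (O(T*N*M) work overall)
        best_u = min(U, key=lambda u: cost(x, u) + V[k + 1][f(x, u)])
        policy[k][x] = best_u
        V[k][x] = cost(x, best_u) + V[k + 1][f(x, best_u)]

# The backward pass yields a feedback policy: for every state x and time k,
# policy[k][x] is the optimal decision, and V[0][x0] is the optimal cost from x0.
print(V[0])
print(policy[0])

Note how the sketch produces the cost-to-go and the optimal decision for every state and every time, which is exactly the feedback feature discussed next.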

[Figure 5.2: Discrete time: going backward]

Repeat for k = T-2, ..., 0. Claim: when we are done, we have an optimal path from each x_0 to some x_T (this path is unique unless there exists more than one path giving the same cost, in which case we can select a unique one by a coin toss). Is this obvious?

The above result relies on the following (which we already saw in the Maximum Principle):

Principle of optimality: a final portion of an optimal trajectory is itself optimal (with respect to its starting point as initial condition).

The reason for this is obvious: if there were another choice of the final portion with lower cost, we could choose it and get lower total cost, which contradicts optimality. What the principle of optimality does for us here is guarantee that the steps we discard (going backwards) cannot be parts of optimal trajectories. (But: we cannot discard anything going forward!)

Computational effort? (again, offline) At each time, for each x, we need to compare all M controls and add up the cost-to-go for each; this is roughly O(T N M) operations. Recall: in the forward approach we had O(T M^T).

Advantages of the backward approach (the "dynamic programming" approach):

- Computational effort is clearly smaller if N and M are fixed and T is large (a better organized calculation). But it is still large if N and M are large ("curse of dimensionality");

- Note that the comparison is not really fair, because the backward approach gives more: an optimal policy for every initial condition (and every k). For the forward approach, handling all initial conditions would require O(T N M^T) operations and still would not cover some states for k > 0;

- Even more importantly, the backward approach gives the optimal control policy in the form of feedback: given x, we know what to do (for any k). In the forward approach, the optimal policy depends not on x but on the result of the entire forward pass computation.

The HJB equation and sufficient conditions for optimality

Back to the continuous case [ECE 515, Yong-Zhou]:

ẋ = f(t, x, u), x(t_0) = x_0

J = ∫_{t_0}^{t_1} L(t, x, u) dt + K(x(t_1))

Here t_1 is fixed and x(t_1) is free (other variants are also possible: target sets [A-F]). The above cost can be written as J(t_0, x_0, u(·)).

Dynamic Programming: the idea is to consider, instead of the problem of minimizing J(t_0, x_0, u(·)) for given (t_0, x_0), the family of problems

min_u J(t, x, u)

where t ranges over [t_0, t_1) and x ranges over R^n. Goal: derive dynamic relationships among these problems, and ultimately solve all of them (see Bellman's quote at the beginning). So, we consider the cost functionals

J(t, x, u) = ∫_t^{t_1} L(τ, x(τ), u(τ)) dτ + K(x(t_1))

where x(·) has initial condition x(t) = x. (Yong-Zhou use (s, y) instead of (t, x_0).)

Value function (optimal cost-to-go):

V(t, x) := inf_{u_{[t, t_1]}} J(t, x, u(·))

By the definition of the control problem, we have V(t_1, x) = K(x). Note: this is a feature of our specific problem. If there is no terminal cost (K = 0), then V(t_1, x) = 0. More generally, we could have a target set S ⊂ R × R^n, and then we would have V(t, x) = 0 when (t, x) ∈ S [A-F].

By defining V via inf (rather than min), we are not even assuming that an optimal control exists (the inf does not need to be achieved). Also, we can have V = ∞ for some (t, x).

Principle of Optimality (or Principle of Dynamic Programming): For every (t, x) and every Δt > 0, we have

V(t, x) = inf_{u_{[t, t+Δt]}} { ∫_t^{t+Δt} L(τ, x(τ), u(τ)) dτ + V(t + Δt, x(t + Δt, u_{[t, t+Δt]})) }

(here x(t + Δt, u_{[t, t+Δt]}) is the state at time t + Δt corresponding to the control u_{[t, t+Δt]}). This is familiar:

[Figure 5.3: Continuous time: principle of optimality (this picture is in [Bressan]).]

It is the continuous version of what we had before.

The statement of the principle of optimality seems obvious: to search for an optimal control, we can search for a control over a small time interval that minimizes the cost over this interval plus the cost-to-go from there. However, it is useful to prove it formally, especially since we are using inf and not assuming existence of an optimal control.

Exercise 17 (due Apr 11) Provide a formal proof of the principle of optimality (using the definition of inf).

The principle of optimality is the dynamic relation among the family of problems we talked about earlier. But it is difficult to use, so we want to pass to its infinitesimal version (we will get a PDE). Namely, let us do something similar to the proof of the Maximum Principle: take Δt to be small, write x(t + Δt) = x(t) + Δx where Δx is then also small, and do a first-order Taylor expansion:

V(t + Δt, x + Δx) ≈ V(t, x) + V_t(t, x) Δt + ⟨V_x(t, x), Δx⟩

(V_x is the gradient; we can use the ⟨·,·⟩ notation). Also,

∫_t^{t+Δt} L(τ, x, u) dτ ≈ L(t, x, u(t)) Δt

(u(t) is the value at the left endpoint). Plug this into the principle of optimality: V(t, x) cancels out and we get

0 ≈ inf_{u_{[t, t+Δt]}} { L(t, x, u(t)) Δt + V_t(t, x) Δt + ⟨V_x(t, x), Δx⟩ }

Divide by Δt:

0 ≈ inf_{u_{[t, t+Δt]}} { L(t, x, u(t)) + V_t(t, x) + ⟨V_x(t, x), Δx/Δt⟩ }

Finally, let Δt → 0. Then Δx/Δt → f(t, x, u(t)), and the higher-order terms disappear. The inf is then over the instantaneous values of u at time t. Also, V_t does not depend on u, so we can take it out:

-V_t(t, x) = inf_u { L(t, x, u) + ⟨V_x(t, x), f(t, x, u)⟩ }   (5.1)

This PDE is called the HJB equation. Recall the boundary condition (where did K go?):

V(t_1, x) = K(x)

For different problem formulations, with target sets, this boundary condition will change, but the HJB equation is the same.

Note that the expression inside the inf is minus the Hamiltonian, with -V_x playing the role of p, so we have

V_t = sup_u H(t, x, -V_x, u)

If u* is the optimal control, then we can repeat everything (starting with the principle of optimality) with u* plugged in. Then sup becomes max, i.e., it is achieved by the control u* along its corresponding trajectory x*:

u*(t) = arg max_u H(t, x*(t), -V_x(t, x*(t)), u), t ∈ [t_0, t_1]   (5.2)

(this is a state feedback; more on this later). Plugging this into the HJB equation, we get

V_t(t, x*(t)) = H(t, x*(t), -V_x(t, x*(t)), u*(t))

The above derivation only shows the necessity of these conditions, namely the HJB equation and the H-maximization condition on u*. (If u* is not in the arg max then it cannot be optimal, but sufficiency is not proved yet.) The latter condition is very similar to the Maximum Principle, and the former is a new condition complementing it. However, it turns out that these conditions are also sufficient for optimality. It should be clear that we did not prove sufficiency yet: what we did so far is V = optimal cost, u* = optimal control implies HJB and (5.2).

Theorem 10 (sufficient conditions for optimality) Suppose that there exists a function V that satisfies the HJB equation on R × R^n (or [t_0, t_1] × R^n) and the boundary condition. (We are assuming that such a solution exists and is C^1.) Suppose that a given control u* and the corresponding trajectory x* satisfy the Hamiltonian maximization condition (5.2). (We are assuming that the min is achieved.) Then u* is an optimal control, and V(t_0, x_0) is the optimal cost. (Uniqueness of u* is not claimed; there can be multiple controls giving the same cost.)

Proof. Let us use (5.1). First, plug in x* and u*, which achieve the inf:

-V_t(t, x*) = L(t, x*, u*) + ⟨V_x(t, x*), f(t, x*, u*)⟩

We can rewrite this as

0 = L(t, x*, u*) + dV/dt

(the total derivative of V along x*). Integrate from t_0 to t_1 (x*(t_0) = x_0 is given and fixed):

0 = ∫_{t_0}^{t_1} L(t, x*, u*) dt + V(t_1, x*(t_1)) - V(t_0, x*(t_0))

where V(t_1, x*(t_1)) = K(x*(t_1)) and x*(t_0) = x_0, so we have

V(t_0, x_0) = ∫_{t_0}^{t_1} L(t, x*, u*) dt + K(x*(t_1)) = J(t_0, x_0, u*)

Now let us consider another, arbitrary control ū and the corresponding trajectory x̄, again with initial condition (t_0, x_0). Because of the minimization condition in the HJB equation,

-V_t(t, x̄) ≤ L(t, x̄, ū) + ⟨V_x(t, x̄), f(t, x̄, ū)⟩

or, as before,

0 ≤ L(t, x̄, ū) + dV/dt

(the total derivative along x̄). Integrating over [t_0, t_1]:

0 ≤ ∫_{t_0}^{t_1} L(t, x̄, ū) dt + V(t_1, x̄(t_1)) - V(t_0, x_0)

where V(t_1, x̄(t_1)) = K(x̄(t_1)),

hence

V(t_0, x_0) ≤ ∫_{t_0}^{t_1} L(t, x̄, ū) dt + K(x̄(t_1)) = J(t_0, x_0, ū)

We conclude that V(t_0, x_0) is the cost for u*, and the cost for an arbitrary other control ū cannot be smaller; therefore V(t_0, x_0) is the optimal cost and u* is an optimal control.
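As a quick illustration of how Theorem 10 is used (a worked example added here, not from the original notes), consider the scalar system ẋ = u with u unconstrained and cost J = ∫_{t_0}^{t_1} (1/2)u^2 dt + (1/2)x^2(t_1). The HJB equation (5.1) reads

-V_t = inf_u { (1/2)u^2 + V_x u }, V(t_1, x) = (1/2)x^2

The inf is achieved at u = -V_x, which turns the equation into V_t = (1/2)V_x^2. The candidate

V(t, x) = x^2 / (2(1 + t_1 - t))

satisfies the boundary condition, and V_x = x/(1 + t_1 - t), V_t = x^2/(2(1 + t_1 - t)^2) = (1/2)V_x^2, so it satisfies the HJB equation as well. The Hamiltonian maximization condition (5.2) then gives the feedback u*(t) = -x*(t)/(1 + t_1 - t), and by Theorem 10 this control is optimal, with optimal cost V(t_0, x_0) = x_0^2/(2(1 + t_1 - t_0)).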

Lecture 21

Example 13 Consider the parking problem from before: ẍ = u, u ∈ [-1, 1], goal: transfer from a given x_0 to rest at the origin (x = ẋ = 0) in minimum time. The HJB equation is

-V_t = inf_{u ∈ [-1,1]} {1 + V_{x_1} x_2 + V_{x_2} u}

with boundary condition V(t, 0) = 0. The control law that achieves the inf (so it is a min) is u = -sgn(V_{x_2}) (as long as V_{x_2} ≠ 0). Plugging this into the HJB equation, we have

-V_t = 1 + V_{x_1} x_2 - |V_{x_2}|

Can you solve the HJB equation? Can you use it to derive the bang-bang principle? Can you verify that the optimal control law we derived using the Maximum Principle is indeed optimal? (This is a free-time, fixed-endpoint problem. But the difference between different target sets shows up only in the boundary conditions.)

We saw other examples in ECE 515, and we know that the HJB equation is typically very hard to solve. There is an important special case for which the HJB equation simplifies:

- the system is time-invariant: ẋ = f(x, u);

- infinite horizon: t_1 = ∞ (and no terminal cost);

- the Lagrangian is time-independent: L = L(x, u).

(Another option is to work with free terminal time and a target set for the terminal state, e.g., time-optimal problems.) Then it is clear that the cost does not depend on the initial time, just on the initial state, so the value function depends on x only: V(t, x) ≡ V(x). Hence V_t = 0 and the HJB equation simplifies to

0 = inf_u {L(x, u) + ⟨V_x(x), f(x, u)⟩}   (5.3)

This is of course still a PDE unless dim x = 1. (But for scalar systems it becomes an ODE and can be solved.)

This case has a simpler HJB equation, but there is a complication compared to the finite-horizon case: the infinite integral ∫_{t_0}^{∞} L(x, u) dt may be infinite (+∞). So, we need to restrict attention to trajectories for which this integral is finite.

Example 14 LQR problem:

ẋ = Ax + Bu, J = ∫_{t_0}^{t_1} (x^T Qx + u^T Ru) dt (t_1 ≤ ∞)

We will study this case in detail later and find that optimal controls take the form of linear state feedbacks. When x(t) is given by exponentials, it is clear that for t_1 = ∞ the integral is finite if and only if these exponentials are decaying. So, the infinite-horizon case is closely related to the issue of closed-loop asymptotic stability.

We will investigate this connection for LQR in detail later. Back in the nonlinear case, let us for now assume the following:

- ∫_{t_0}^{∞} L(x, u) dt < ∞ is possible only when x(t) → 0 as t → ∞;

- V(0) = 0 (true, e.g., if f(0, 0) = 0, L(0, 0) = 0, and L(x, u) > 0 for all (x, u) ≠ (0, 0); then the best thing to do is not to move, which gives zero cost).

Then we have the following sufficient conditions for optimality (here we are assuming that V(x) < ∞ for all x ∈ R^n): Suppose that V(x) satisfies the infinite-horizon HJB equation (5.3) with boundary condition V(0) = 0. Suppose that a given control u* and the corresponding trajectory x* satisfy the H-maximization condition (5.2), and that x*(t) → 0 as t → ∞. Then V(x_0) is the optimal cost and u* is an optimal control.

Exercise 18 (due Apr 18) Prove this.

Some historical remarks

Reference: [Y-Z]. The PDE was derived by Hamilton in the 1830s in the context of calculus of variations. It was later studied by Jacobi, who improved Hamilton's results. Read [G-F, Sect. 23], where it is derived using the variation of a functional (as a necessary condition). Using it as a sufficient condition for optimality was proposed (again in calculus of variations) by Carathéodory in the 1920s (the "royal road"; he worked with V locally near a test trajectory).

The principle of optimality seems almost a trivial observation; it was made by Isaacs in the early 1950s (slightly before Bellman). Bellman's contribution was to recognize the power of the approach for studying value functions globally. He coined the term "dynamic programming", applied it to a wide variety of problems, and wrote a book (all in the 1950s). [A-F, Y-Z]

In the 1960s, Kalman made the explicit connection between Bellman's work and the Hamilton-Jacobi equation (it is not clear if Bellman knew this). He coined the term "HJB equation" and used it for control problems.

A most remarkable fact is that the Maximum Principle was developed in the Soviet Union independently around the same time (1950s-1960s)! So it is natural to discuss the connection.

5.2 Relationships between the Maximum Principle and the HJB equation

There is a notable difference between the form of the optimal control law provided by the Maximum Principle and the one provided by the HJB equation.

The Maximum Principle says: ẋ* = H_p, ṗ* = -H_x, and

u* pointwise maximizes H(x*(t), u, p*(t)). This is an open-loop specification.

The HJB equation says: u*(t) = arg max_u H(t, x*(t), -V_x, u). This is a closed-loop (feedback) specification: assuming we know V everywhere, u*(t) is determined by the current state x*(t). We discussed this feature at the beginning of this chapter, when we first described Bellman's approach: it determines the optimal strategy for all states, so we automatically get a feedback. But to know V(t, x) everywhere, we need to solve the HJB equation, which is hard; so there is no free lunch.

HJB: V_t = max_u H(t, x, -V_x, u), or sup if the max is not achieved; but assume it is, and we have u*. So the HJB equation, like the Maximum Principle, involves a Hamiltonian maximization condition. In fact, let us see if we can prove the Maximum Principle starting from the HJB equation. The Maximum Principle does not involve any PDE, but instead involves a system of ODEs (the canonical equations). We can convert the HJB equation into an ODE (in t only) if we introduce a new vector variable:

p*(t) := -V_x(t, x*(t))

The boundary condition for p* is

p*(t_1) = -V_x(t_1, x*(t_1))

But recall the boundary condition for the HJB equation, V(t_1, x) = K(x), so we get

p*(t_1) = -K_x(x*(t_1))

and this matches the boundary condition we had for p* in the Maximum Principle for the case of terminal cost. Seems OK (we did not check this carefully for target sets and transversality, though), but let us forget the boundary condition and derive a differential equation for p*:

ṗ* = d/dt(-V_x) = -V_tx - V_xx ẋ* (V_xx is a matrix) = (swapping partials) -V_xt - V_xx f = (-V_t - ⟨V_x, f⟩)_x + (f_x)^T V_x = L_x + (f_x)^T V_x

(the term (-V_t - ⟨V_x, f⟩)_x equals L_x by the HJB equation), i.e.,

ṗ* = L_x - (f_x)^T p*

and this is the correct adjoint equation.

We saw in the Maximum Principle that p* has interpretations as the normal to supporting hyperplanes, and also as a Lagrange multiplier. Now we see another interpretation: it is (up to sign, with our convention p* = -V_x) the gradient of the value function. We can view it as the sensitivity of the optimal cost with respect to x. Economic meaning: marginal value, or shadow price; it tells us by how much we can increase the benefit by increasing resources/spending, or how much we would be willing to pay someone else for this resource and still make a profit. See [Y-Z, p. 231], also [Luenberger (blue), pp. ].

The above calculation is easy enough. Why didn't we derive the HJB equation and then the Maximum Principle from it? Why did we have to spend so much time on the proof of the Maximum Principle? Answer: the above assumes V ∈ C^1; what if this is not the case?! In fact, all our previous developments (e.g., the sufficient conditions) rely on the HJB equation having a C^1 solution V. Let us see if this is a reasonable expectation.

Example 15 [Y-Z, p. 163], [Vinter, p. 36]

ẋ = xu, x ∈ R, u ∈ [-1, 1], x(t_0) = x_0, J = x(t_1) → min, t_1 fixed.

The optimal solution can be found by inspection:

- x_0 > 0: u* ≡ -1, ẋ = -x, x(t_1) = e^{-(t_1 - t_0)} x_0;

- x_0 < 0: u* ≡ 1, ẋ = x, x(t_1) = e^{t_1 - t_0} x_0;

- x_0 = 0: x(t) ≡ 0 for every u.

From this we get the value function

V(t_0, x_0) = e^{-(t_1 - t_0)} x_0 if x_0 > 0, e^{t_1 - t_0} x_0 if x_0 < 0, and 0 if x_0 = 0.

Let us fix t_0 and plot this as a function of x_0; in fact, for simplicity let t_0 = 0, t_1 = 1.

[Figure 5.4: V(t_0, x_0) as a function of x_0]

So V is not C^1 at x_0 = 0; it is only Lipschitz continuous. The HJB equation in the above example is

-V_t = inf_{u ∈ [-1,1]} {V_x xu} = -|V_x x|

(we are working with scalars here). Looks pretty simple. So, maybe it has another solution which is C^1, and the value function is just not the right solution? It turns out that:

- The above HJB equation does not admit any C^1 solution [Y-Z];

- The above simple example is not an exception: this situation is quite typical for problems involving control bounds and terminal cost [Vinter];

- The value function can be shown to be Lipschitz for a reasonable class of control problems [Bressan, Lemma 4].
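To see the kink concretely, here is a small numerical check (hypothetical Python, added for illustration; it uses t_0 = 0, t_1 = 1 as above):

import numpy as np

t1 = 1.0

def V(t, x):
    # value function of Example 15 (minimal achievable x(t1) from state x at time t)
    if x > 0:
        return np.exp(-(t1 - t)) * x
    elif x < 0:
        return np.exp(t1 - t) * x
    return 0.0

# One-sided slopes in x at x = 0, t = 0: they differ, so V is not differentiable there.
h = 1e-6
print((V(0.0, h) - V(0.0, 0.0)) / h)     # about exp(-1) = 0.368
print((V(0.0, 0.0) - V(0.0, -h)) / h)    # about exp(+1) = 2.718

# Away from x = 0 the HJB equation  -V_t = inf_{|u|<=1} V_x * x * u = -|V_x * x|  holds:
def Vt(t, x, eps=1e-6):
    return (V(t + eps, x) - V(t - eps, x)) / (2 * eps)

def Vx(t, x, eps=1e-6):
    return (V(t, x + eps) - V(t, x - eps)) / (2 * eps)

for x in [0.5, -0.7]:
    print(-Vt(0.3, x), -abs(Vx(0.3, x) * x))    # the two numbers approximately agree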

This has implications not just for connecting the HJB equation and the Maximum Principle but for the HJB theory itself: we need to reconsider the assumption that V ∈ C^1 and work with some more general solution concept that allows V to be nondifferentiable (just Lipschitz). (Cf. the situation with ODEs: x ∈ C^1, then x ∈ C^1 a.e. (absolutely continuous), then Filippov's solutions, etc.)

The above difficulty caused problems for a long time. The theory of dynamic programming remained non-rigorous until the 1980s when, after a series of related developments, the concept of viscosity solutions was introduced by Crandall and Lions (see [Y-Z, p. 213]). (This completes the historical timeline given earlier.) This is in contrast with the Maximum Principle theory, which was already quite solid in the 1960s.


Lecture 22

Viscosity solutions of the HJB equation

References: [Bressan's tutorial], also [Y-Z], [Vinter]. The concept of viscosity solution relies on some notions from nonsmooth analysis, which we now need to review.

One-sided differentials

Reference: [Bressan]. Consider a continuous (nothing else) function v : R^n → R. A vector p is called a super-differential of v at a given point x if

v(y) ≤ v(x) + ⟨p, y - x⟩ + o(|y - x|)

as y → x along any path. See [Bressan] for a different, somewhat more precise definition. Geometrically:

[Figure 5.5: (a) Super-differential, (b) Sub-differential]

In higher dimensions, this says that the plane y ↦ v(x) + ⟨p, y - x⟩ is tangent from above to the graph of v at x; p is the gradient of this linear function. The o(|y - x|) term means that this is an infinitesimal, or local, requirement. Such a p is in general not unique, so we have a set of super-differentials of v at x, denoted by D^+ v(x).

Similarly, we say that p ∈ R^n is a sub-differential of v at x if

v(y) ≥ v(x) + ⟨p, y - x⟩ - o(|y - x|)

as y → x along any path. Geometrically, everything is reversed: the line with slope p (for n = 1) or, in general, the linear function with gradient p is tangent from below, locally up to higher-order terms. The set of sub-differentials of v at x is denoted by D^- v(x).

Example 16 [Bressan]

v(x) = 0 if x < 0, v(x) = √x if 0 ≤ x ≤ 1, v(x) = 1 if x > 1.

[Figure 5.6: The function in Example 16]

D^+ v(0) = ∅, D^- v(0) = [0, ∞), D^+ v(1) = [0, 1/2], D^- v(1) = ∅. These are the only interesting points, because of the following result.

Lemma 11 (some useful properties of D^± v)

- p ∈ D^± v(x) if and only if there exists a C^1 function φ such that ∇φ(x) = p, φ(x) = v(x), and, for all y near x, φ(y) ≥ v(y) in the case of D^+ and φ(y) ≤ v(y) in the case of D^-. (Or we can have strict inequality for y ≠ x; no loss of generality.) I.e., φ - v has a local minimum at x in the case of D^+, and a local maximum in the case of D^-.

[Figure 5.7: Interpretation of the super-differential via a test function]

We will not prove this (see [Bressan]), but we will need it for all the other statements. φ is sometimes called a test function [Y-Z].

- If v is differentiable at x, then D^+ v(x) = D^- v(x) = {∇v(x)}, the gradient of v at x. I.e., super- and sub-differentials are extensions of the usual differential. (In contrast, Clarke's generalized gradient extends the idea of continuous differential [Y-Z, bookmark].)

Proof. We use the first property. ∇v(x) ∈ D^± v(x) is clear.(1) If φ ∈ C^1 and φ ≥ v with equality at x, then φ - v has a local minimum at x, so ∇(φ - v)(x) = 0, i.e., ∇φ(x) = ∇v(x), and there are no other p in D^+ v(x). Same for D^- v(x).

(1) The gradient provides, up to higher-order terms, both an over- and an under-approximation for a differentiable function.

- If D^+ v(x) and D^- v(x) are both non-empty, then v is differentiable at x and the previous property holds.

Proof. There exist φ_1, φ_2 ∈ C^1 such that φ_1(x) = v(x) = φ_2(x) and φ_1(y) ≤ v(y) ≤ φ_2(y) for all y near x. Thus φ_1 - φ_2 has a local maximum at x, so ∇(φ_1 - φ_2)(x) = 0, i.e., ∇φ_1(x) = ∇φ_2(x). Now write (assuming y > x; otherwise the inequalities are flipped but the conclusion is the same)

(φ_1(y) - φ_1(x))/(y - x) ≤ (v(y) - v(x))/(y - x) ≤ (φ_2(y) - φ_2(x))/(y - x)

As y → x, the first fraction approaches ∇φ_1(x) and the last one approaches ∇φ_2(x), and these two gradients are equal. Hence, by the Theorem about Two Cops (or the "Sandwich Theorem"), the limit of the middle fraction exists and equals the other two. This limit must be ∇v(x), and everything is proved.

- The set {x : D^+ v(x) ≠ ∅} (respectively, {x : D^- v(x) ≠ ∅}) is non-empty, and actually dense in the domain of v.

Idea of proof.

[Figure 5.8: Idea behind the denseness proof]

Take an arbitrary x_0. For φ steep enough (but C^1), φ - v will have a local maximum at a nearby x, as close to x_0 as we want. Just move φ to cross that point; then ∇φ(x) ∈ D^- v(x). (Same for D^+.)

Viscosity solutions of PDEs

Consider the PDE

F(x, v, v_x) = 0   (5.4)

(t can be part of x, as before).

Definition 3 A continuous function v(x) is a viscosity subsolution of the PDE (5.4) if

F(x, v(x), p) ≤ 0 for all p ∈ D^+ v(x) and all x.

Equivalently, F(x, φ(x), φ_x(x)) ≤ 0 for every C^1 function φ such that φ - v has a local minimum at x, and this is true for every x. Similarly, v(x) is a viscosity supersolution of the PDE (5.4) if

F(x, v(x), p) ≥ 0 for all p ∈ D^- v(x) and all x,

or, equivalently, F(x, φ(x), φ_x(x)) ≥ 0 for every C^1 function φ such that φ - v has a local maximum at x, and this is true for every x. Finally, v is a viscosity solution if it is both a viscosity supersolution and a viscosity subsolution.

Note that the above definitions impose conditions on v only at points where D^+ v and D^- v are non-empty. We know that the set of these points is dense. At all points where v is differentiable, the PDE must hold in the usual sense. (If v is Lipschitz, then this is a.e. by Rademacher's Theorem.)

Example 17 F(x, v, v_x) = 1 - |v_x|, so the PDE is 1 - |v_x| = 0. (Is this the HJB equation for some control problem?) v = ±x (plus a constant) are classical solutions. Claim: v(x) = |x| is a viscosity solution. At points of differentiability this is clear. At x = 0 we have: D^+ v(0) = ∅, so there is nothing to check; D^- v(0) = [-1, 1], and 1 - |p| ≥ 0 for all p ∈ [-1, 1], so this is fine.

Note the lack of symmetry: if we rewrite the PDE as |v_x| - 1 = 0, then |x| is not a viscosity solution! So we need to be careful with the sign.

The term "viscosity solution" comes from the fact that v can be obtained from smooth solutions v^ε of the PDE

F(x, v^ε, ∇v^ε) = ε Δv^ε

in the limit as ε → 0; the Laplacian on the right-hand side is a term used to model the viscosity of a fluid (cf. [Feynman, Eq. (41.17), vol. II]). See [Bressan] for a proof of this convergence result.

Begin optional: The basic idea behind that proof is to consider a test function φ ∈ C^2 (or approximate it by some ψ ∈ C^2). Let us say we are in the supersolution case. Then for each ε, φ - v^ε has a local maximum at some x_ε near a given x, hence ∇φ(x_ε) = ∇v^ε(x_ε) and Δφ(x_ε) ≤ Δv^ε(x_ε). This gives F(x_ε, v^ε, ∇φ) ≥ ε Δφ. Taking the limit as ε → 0, we have F(x, v, ∇φ) ≥ 0 as needed. End optional.

The HJB partial differential equation and the value function

Now go back to the optimal control problem and consider the HJB equation

-v_t - min_u {L(x, u) + ⟨v_x, f(x, u)⟩} = 0

(or inf), with boundary condition v(t_1, x) = K(x). Now we have (t, x) in place of what was just x earlier. Note that u is minimized over, so it does not explicitly enter the PDE.

Theorem 12 Under suitable technical assumptions (rather strong: f, L, K are bounded uniformly over x and globally Lipschitz as functions of x; see [Bressan] for details), the value function V(t, x) is the unique viscosity solution of the HJB equation. It is also Lipschitz (but not necessarily C^1).

We will not prove everything, but let us prove one claim: that V is a viscosity subsolution. We need to show that for every C^1 function φ(t, x) such that φ - V attains a local minimum at a point (t_0, x_0) (arbitrary time and state), we have

φ_t(t_0, x_0) + inf_u {L(x_0, u) + ⟨φ_x(t_0, x_0), f(x_0, u)⟩} ≥ 0

Assume the contrary: there exist φ(t, x) and u_0 ∈ U such that

1) φ(t_0, x_0) = V(t_0, x_0);

2) φ(t, x) ≥ V(t, x) for all (t, x);

3) φ_t(t_0, x_0) + L(x_0, u_0) + ⟨φ_x(t_0, x_0), f(x_0, u_0)⟩ < -Θ < 0 for some Θ > 0.

Goal: show that this control u_0 is too good to be true, i.e., that it provides a cost decrease inconsistent with the principle of optimality (too fast).


Lecture 23

Using the control u_0 on [t_0, t_0 + Δt], from the above three items we have

V(t_0 + Δt, x(t_0 + Δt)) - V(t_0, x_0) ≤ φ(t_0 + Δt, x(t_0 + Δt)) - φ(t_0, x_0) = ∫_{t_0}^{t_0 + Δt} (dφ/dt)(t, x(t)) dt = ∫_{t_0}^{t_0 + Δt} [φ_t(t, x(t)) + ⟨φ_x(t, x(t)), f(x(t), u_0)⟩] dt ≤ -∫_{t_0}^{t_0 + Δt} L(t, x(t), u_0) dt - Δt Θ

(by continuity, if Δt is small, using item 3).

On the other hand, recall the principle of optimality:

V(t_0, x_0) = inf_u { ∫_{t_0}^{t_0 + Δt} L dt + V(t_0 + Δt, x(t_0 + Δt)) } ≤ ∫_{t_0}^{t_0 + Δt} L(t, x(t), u_0) dt + V(t_0 + Δt, x(t_0 + Δt))

(the integral on the right is along the x(t) generated by u(t) ≡ u_0, t ∈ [t_0, t_0 + Δt]). This implies

V(t_0 + Δt, x(t_0 + Δt)) - V(t_0, x_0) ≥ -∫_{t_0}^{t_0 + Δt} L dt   (5.5)

(again along u(t) ≡ u_0). We have arrived at a contradiction: the two bounds on V(t_0 + Δt, x(t_0 + Δt)) - V(t_0, x_0) differ by Δt Θ > 0.

Try proving that V is a supersolution. The idea is the same, but the contradiction will be that the cost now does not decrease fast enough to satisfy the inf condition. See [Bressan] for help, and for all the other claims too.

Via uniqueness in the above theorem [Bressan], the sufficient conditions for optimality now generalize: if V is a viscosity solution of the HJB equation and it is the cost for a control u, then V is the optimal cost and u is an optimal control.
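As an aside, the vanishing-viscosity remark from Example 17 can be illustrated numerically. The sketch below (hypothetical Python, not from the notes) regularizes 1 - |v'| = 0 to 1 - |v_ε'| = ε v_ε'' on (-1, 1) with illustrative boundary conditions v_ε(±1) = 0, assumes for simplicity that the solution is even (so that v_ε'(0) = 0 and it suffices to work on [0, 1]), and checks that v_ε approaches |x| - 1, i.e., the viscosity solution |x| from Example 17 shifted by a constant to meet these boundary conditions:

import numpy as np
from scipy.integrate import solve_ivp

def viscous_solution(eps, n=2001):
    # Integrate w = v' from the assumed symmetry condition w(0) = 0, using 1 - |w| = eps * w'.
    xs = np.linspace(0.0, 1.0, n)
    sol = solve_ivp(lambda x, w: (1.0 - np.abs(w)) / eps, (0.0, 1.0), [0.0],
                    t_eval=xs, max_step=1e-3)
    w = sol.y[0]
    # Recover v by trapezoidal integration and shift so that v(1) = 0.
    v = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) / 2 * np.diff(xs))))
    return xs, v - v[-1]

for eps in [0.3, 0.1, 0.03]:
    xs, v = viscous_solution(eps)
    print(eps, np.max(np.abs(v - (np.abs(xs) - 1.0))))   # sup-distance to |x| - 1 on [0, 1]

The printed sup-distances shrink as ε decreases, in line with the convergence result quoted from [Bressan].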

Chapter 6

The linear quadratic regulator

We will specialize to the case of linear systems and quadratic cost. Things will be simpler, but we will be able to dig deeper. Another approach would have been to do this first and then generalize to nonlinear systems. Historically, however, things developed the way we cover them.

References: ECE 515 class notes; Kalman's 1960 paper (posted on the web; a must read!), which is the original reference on LQR and other things (controllability); there exist many classic texts: Brockett (FDLS), A-F, Anderson-Moore, Kwakernaak-Sivan (all of these should be on reserve at Grainger).

6.1 The finite-horizon LQR problem

System:

ẋ = A(t)x + B(t)u, x(t_0) = x_0

with x ∈ R^n, u ∈ R^m (unconstrained). Cost (the 1/2 is to match [Kalman], [A-F]):

J = ∫_{t_0}^{t_1} (1/2)(x^T(t)Q(t)x(t) + u^T(t)R(t)u(t)) dt + (1/2) x^T(t_1) M x(t_1)

where t_1 is a specified final time, x(t_1) is free, and Q(·), R(·), M are matrices satisfying

M = M^T ≥ 0, Q(t) = Q^T(t) ≥ 0, R(t) = R^T(t) > 0 for all t.

We will justify/revise these assumptions later (R > 0 because we will need R^{-1}). Free t_1 and target sets are also possible.

L = (1/2)(x^T Qx + u^T Ru): this is another explanation of the acronym LQR. Intuitively, this cost is very reasonable: keep x and u small (remember Q, R ≥ 0).

The optimal feedback law

Let us derive necessary conditions for optimality using the Maximum Principle first. Hamiltonian:

H = -(1/2)(x^T Qx + u^T Ru) + p^T(Ax + Bu)

(everything depends on t). The optimal control u* must maximize H. (Cf. Problem 11 earlier.)

H_u = -Ru + B^T p, so u* = R^{-1}B^T p*, and H_uu = -R < 0

so this is indeed a maximum (we see that we need the strict inequality R > 0 to write R^{-1}). We have a formula for u* in terms of the adjoint p*, so let us look at p* more closely.

Adjoint equation:

ṗ* = -H_x = Qx* - A^T p*

Boundary condition: p*(t_1) = -K_x(x*(t_1)) = -Mx*(t_1).

We will show that the relation p*(t) = (matrix)·x*(t) holds for all t, not just t_1. Canonical system (using u* = R^{-1}B^T p*):

(ẋ*; ṗ*) = [A, BR^{-1}B^T; Q, -A^T] (x*; p*)

Consider its transition matrix Φ(·, ·), and partition it as

[Φ_11, Φ_12; Φ_21, Φ_22]

(all blocks have two arguments), so

(x*(t); p*(t)) = [Φ_11(t, t_1), Φ_12(t, t_1); Φ_21(t, t_1), Φ_22(t, t_1)] (x*(t_1); p*(t_1))

(flowing backward; Φ(t, t_1) = Φ^{-1}(t_1, t), but this is not true for the individual blocks Φ_ij). Using the terminal condition p*(t_1) = -Mx*(t_1), we have

x*(t) = (Φ_11(t, t_1) - Φ_12(t, t_1)M) x*(t_1)

and

p*(t) = (Φ_21(t, t_1) - Φ_22(t, t_1)M) x*(t_1) = (Φ_21(t, t_1) - Φ_22(t, t_1)M)(Φ_11(t, t_1) - Φ_12(t, t_1)M)^{-1} x*(t)

This is formal for now; we will address the existence of the inverse later. But at least for t = t_1 we have

Φ(t_1, t_1) = [I, 0; 0, I], so Φ_11(t_1, t_1) = I, Φ_12(t_1, t_1) = 0,

hence Φ_11(t_1, t_1) - Φ_12(t_1, t_1)M = I is invertible, and by continuity it stays invertible for t near t_1. To summarize, p*(t) = -P(t)x*(t) for a suitable matrix P(t) defined by

P(t) = -(Φ_21(t, t_1) - Φ_22(t, t_1)M)(Φ_11(t, t_1) - Φ_12(t, t_1)M)^{-1}

which exists at least for t near t_1. (The minus sign is to match a common convention and to have P ≥ 0, which is later related to the cost.) The optimal control is

u*(t) = R^{-1}(t)B^T(t)p*(t) = -R^{-1}(t)B^T(t)P(t) x*(t) = -K(t)x*(t), where K(t) := R^{-1}(t)B^T(t)P(t),

which is a linear state feedback! Note that K is time-varying even if the system is LTI.

u*(t) = -K(t)x*(t): this shows that quadratic costs are very compatible with linear systems. (This idea essentially goes back to Gauss: least-squares problems lead to linear laws.)

Note that the LQR problem does not impose the feedback form of the solution. We could just as easily derive an open-loop formula for u*:

x*(t_0) = x_0 = (Φ_11(t_0, t_1) - Φ_12(t_0, t_1)M) x*(t_1)

hence

p*(t) = (Φ_21(t, t_1) - Φ_22(t, t_1)M)(Φ_11(t_0, t_1) - Φ_12(t_0, t_1)M)^{-1} x_0

again modulo the existence of an inverse. But as long as P(t) above exists, the closed-loop feedback system is well defined: we can solve it and convert from the feedback form to the open-loop form. But the feedback form is nicer in many respects.

We obtained a linear state feedback independently of the Riccati equation (which comes next). We have p*(t) = -P(t)x*(t), but the formula for P(t) is quite messy.

The Riccati differential equation

Idea: let us derive a differential equation for P by differentiating both sides of p* = -Px*:

ṗ* = -Ṗx* - Pẋ*

and using the canonical equations

ẋ* = Ax* + BR^{-1}B^T p* and ṗ* = Qx* - A^T p*.

Plugging these in, we have

Qx* - A^T p* = -Ṗx* - PAx* - PBR^{-1}B^T p*

Finally, using p* = -Px*,

Qx* + A^T Px* = -Ṗx* - PAx* + PBR^{-1}B^T Px*

This must hold along the optimal trajectory. But the initial state x_0 was arbitrary, and P(t) does not depend on x_0. So we must have the matrix differential equation

Ṗ(t) = -P(t)A(t) - A^T(t)P(t) - Q(t) + P(t)B(t)R^{-1}(t)B^T(t)P(t)

This is the Riccati differential equation (RDE). Boundary condition: P(t_1) = M (since p*(t_1) = -Mx*(t_1) and x*(t_1) was free).

The RDE is a first-order nonlinear (quadratic) ODE, a matrix one. We can propagate its solution backward from t_1 to obtain P(t). Global existence is an issue; more on this later. Another option is to use the previous formula by which we defined P(t), which calls for solving two first-order ODEs (the canonical system), and these are linear:

(ẋ; ṗ) = [A, BR^{-1}B^T; Q, -A^T] (x; p)

But we do not just need to solve these for some initial condition; we need the transition matrix Φ, which satisfies a matrix ODE. In fact, let X(t), Y(t) be n×n matrices solving the same linear matrix ODE as the canonical system,

(Ẋ; Ẏ) = [A, BR^{-1}B^T; Q, -A^T] (X; Y), X(t_1) = I, Y(t_1) = -M.

Then we can check that P(t) := -Y(t)X^{-1}(t) satisfies the RDE and the boundary condition [ECE 515].

Cf. the earlier discussion of the solution of the RDE in the context of calculus of variations (sufficient optimality conditions).
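As an illustration (hypothetical Python using scipy, added here; the matrices below are made-up numbers, not from the notes), the RDE can be integrated backward in time from the boundary condition P(t_1) = M by stacking P into a vector:

import numpy as np
from scipy.integrate import solve_ivp

# Made-up data for a 2-state, 1-input example.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
M = np.zeros((2, 2))
t0, t1 = 0.0, 5.0

def rde(t, p_flat):
    P = p_flat.reshape(2, 2)
    # RDE: dP/dt = -P A - A^T P - Q + P B R^{-1} B^T P
    dP = -P @ A - A.T @ P - Q + P @ B @ np.linalg.solve(R, B.T) @ P
    return dP.flatten()

# Integrate backward in time from the boundary condition P(t1) = M.
sol = solve_ivp(rde, (t1, t0), M.flatten(), dense_output=True, rtol=1e-8, atol=1e-10)
P_t0 = sol.sol(t0).reshape(2, 2)
K_t0 = np.linalg.solve(R, B.T @ P_t0)    # gain K(t) = R^{-1} B^T P(t) evaluated at t = t0
print(P_t0)
print(K_t0)                              # feedback u = -K(t) x

Here solve_ivp integrates from t_1 down to t_0 simply because the time span is given in decreasing order; the resulting P(t_0) defines the time-varying gain K(t_0) of the optimal feedback u = -K(t)x.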


Lecture 24

Summary so far: the optimal control u* (if it exists) is given by

u* = -R^{-1}B^T P x* (= R^{-1}B^T p*)

where P satisfies the RDE with boundary condition P(t_1) = M.

Note: if u* exists then it is automatically unique. Indeed, the solution of the RDE with a given boundary condition is unique, and this uniquely specifies the feedback law u = -Kx and the corresponding optimal trajectory (x_0 being given).

But the above analysis was via the Maximum Principle, so it only gives necessary conditions. What about sufficiency?

Road map: sufficiency; global existence of P; infinite horizon (t_1 → ∞).

The optimal cost

Actually, we already proved (local) sufficiency of this control for the LQR problem in Problem 11. To get global sufficiency, let us use the sufficient conditions for global optimality in terms of the HJB equation:

-V_t = inf_u { (1/2)x^T Qx + (1/2)u^T Ru + ⟨V_x, Ax + Bu⟩ }

with boundary condition

V(t_1, x) = (1/2)x^T Mx

The inf_u on the right-hand side is in this case a minimum, and it is achieved for u = -R^{-1}B^T V_x. Thus we can rewrite the HJB equation as

-V_t = (1/2)x^T Qx + (1/2)(V_x)^T BR^{-1}RR^{-1}B^T V_x + (V_x)^T Ax - (V_x)^T BR^{-1}B^T V_x = (1/2)x^T Qx - (1/2)(V_x)^T BR^{-1}B^T V_x + (V_x)^T Ax

We want to verify optimality of the control u = -R^{-1}B^T Px, P(t_1) = M. V did not appear in the Maximum Principle based analysis, but now we need to find it. The two expressions for u as well as the boundary conditions would match if

V(t, x) = (1/2)x^T P(t)x

This is a very natural guess for the value function. Let us verify that it indeed satisfies the HJB equation:

V_t = (1/2)x^T Ṗ x (note that this is the partial derivative with respect to t, not the total derivative), V_x = Px

Plugging this in:

-(1/2)x^T Ṗ x = (1/2)x^T Qx - (1/2)x^T P^T BR^{-1}B^T Px + x^T PAx

where x^T PAx = (1/2)x^T(PA + A^T P)x. We see that the RDE implies that this equality indeed holds. Thus our linear feedback is a (unique) optimal control, and V is the optimal cost.

Let us reflect on how we proved optimality:

- We derived a candidate control from the Maximum Principle;

- We found a suitable V that let us verify its optimality via the HJB equation.

This is a typical path, suggested by Carathéodory (and Kalman).

Exercise 19 (due Apr 25) Let t ≤ t_1 be an arbitrary time at which the solution P(t) of the RDE exists. Assumptions as in the lecture.

- Prove that P(t) is symmetric;

- Prove that P(t) ≥ 0;

- Can you prove that P(t) > 0? If not, can you prove it under some additional assumption(s)?

Existence of solutions for the RDE

We still have to address the issue of global existence for P(t). We know from the theory of ODEs that P(t) exists for t < t_1 sufficiently close to t_1. Two cases are possible:

1) P(t) exists for all t < t_1;

2) There exists a t̄ < t_1 such that for some i, j, P_ij(t) → ±∞ as t ↓ t̄, i.e., we have a finite escape time for some entry of P.

We know from calculus of variations that, in general, global existence of solutions to Riccati-type equations is not assured, and is related to the existence of conjugate points. In the present notation, P(t) = -Y(t)X^{-1}(t), and the times at which X(t) becomes singular correspond (roughly) to conjugate points as we defined them in calculus of variations. However, for the LQR problem we can prove global existence rather easily.

Suppose Case 2 above holds. We know from Exercise 19 that P(t) = P^T(t) ≥ 0 for t ∈ (t̄, t_1]. If an off-diagonal element P_ij goes to ±∞ as t ↓ t̄ while the diagonal elements stay bounded, then the 2×2 principal minor P_ii P_jj - P_ij^2 becomes negative, which is impossible.

So P_ii(t) → ∞ for some i. Let x_0 = e_i = (0, ..., 1, ..., 0)^T with 1 in the i-th position. Then x_0^T P(t)x_0 = P_ii(t) → ∞ as t ↓ t̄. But (1/2)x_0^T P(t)x_0 is the optimal cost for the system with initial condition x_0 at time t. This cost cannot exceed the cost corresponding to, e.g., the control u(τ) ≡ 0, τ ∈ [t, t_1]. The state trajectory for this control is x(τ) = Φ_A(τ, t)x_0, and the cost is

∫_t^{t_1} (1/2) x_0^T Φ_A^T(τ, t) Q(τ) Φ_A(τ, t) x_0 dτ + (1/2) x_0^T Φ_A^T(t_1, t) M Φ_A(t_1, t) x_0

which clearly remains bounded, as t ↓ t̄, by an expression of the form α(t̄, t_1)|x_0|^2 (with |x_0| = 1), a contradiction. So everything is valid for arbitrary t.

So, can we actually compute solutions to LQR problems?

Example 18 (simplest possible, from ECE 515)

ẋ = u, x, u ∈ R, J = ∫_{t_0}^{t_1} (1/2)(x^2 + u^2) dt

Here A = 0, B = 1, Q = 1, R = 1, M = 0. RDE: Ṗ = -1 + P^2, with P scalar. The ODE ż = -1 + z^2 does not have a finite escape time going backwards (remember that we are concerned with backward completeness).

(Digression. But ż = -z^2 does have a finite escape time backward in time! We would get this RDE if Q = 0, R = -1. So the assumption R > 0 is important. Where did we use our standing assumptions in the above completeness proof?)

Can we solve the RDE from this example? Separating variables,

∫_{P(t)}^{0} dP / (P^2 - 1) = ∫_t^{t_1} dt

(recall: P(t_1) = M = 0). The integral on the left-hand side is an inverse hyperbolic tangent, so the solution is

P(t) = tanh(t_1 - t)

(recall: sinh x = (e^x - e^{-x})/2, ...). So we see that even in such a simple example, solving the RDE analytically is not easy. However, the improvement compared to general nonlinear problems is very large: HJB (a PDE) becomes the RDE (an ODE), which can be solved efficiently by numerical methods. But we see a need for further simplification.

6.2 The infinite-horizon LQR problem

As in the general case of the HJB equation, we know that the value function becomes time-independent and the solution of the problem simplifies considerably if we make suitable additional assumptions. In this case, they are:

- The system is LTI: ẋ = Ax + Bu, with A, B constant matrices;

- The cost is time-independent: L = (1/2)(x^T Qx + u^T Ru), with Q, R constant matrices;

- No terminal cost: M = 0;

- Infinite horizon: t_1 = ∞.

The first three assumptions pose no problem and are certainly simplifying. The last one, as we know, requires care (e.g., we need to be sure that the cost is < ∞; expect this to be related to closed-loop stability). So, let us make the first three assumptions, keep t_1 finite for the moment, and a bit later examine the limit as t_1 → ∞:

J = ∫_{t_0}^{t_1} (1/2)(x^T Qx + u^T Ru) dt

where t_1 is large but finite, treated as a parameter. Let us explicitly include t_1 as a parameter in the value function: V_{t_1}(t_0, x_0), or V_{t_1}(t, x), t_0 ≤ t ≤ t_1. We know that this is given by

V_{t_1}(t_0, x_0) = (1/2) x_0^T P(t_0; 0, t_1) x_0

where P(·; 0, t_1) (here 0 is the zero matrix) stands for the solution of the RDE

Ṗ = -PA - A^T P - Q + PBR^{-1}B^T P   (6.1)

with boundary condition P(t_1) = 0 (since there is no terminal cost). The optimal control is

u*(t) = -R^{-1} B^T P(t; 0, t_1) x*(t)

Existence and properties of the limit

We want to study lim_{t_1 → ∞} P(·; 0, t_1). It does not necessarily exist in general, but it does under suitable assumptions on (A, B) which guarantee that the cost remains finite. In force from now on:

Assumption: (A, B) is a controllable pair.

Reason for this assumption: if not, then we may have unstable uncontrollable modes, and then there is no chance of having V_{t_1} < ∞ as t_1 → ∞. We will see that stabilizability might also be enough; more on this later.

Claim: the limit lim_{t_1 → ∞} P(·; 0, t_1) exists, and has some interesting properties.

Proof. [Kalman], [Brockett]

(1/2) x_0^T P(t_0; 0, t_1) x_0 = ∫_{t_0}^{t_1} (1/2)((x*)^T Qx* + (u*)^T Ru*) dt

where u* is the optimal control for the finite-horizon LQR problem which we found earlier. It is not hard to see that x_0^T P(t_0; 0, t_1) x_0 is a monotonically nondecreasing function of t_1, since Q ≥ 0, R > 0. Indeed, let now u* be optimal for the horizon t_1 + Δt. Then

V_{t_1 + Δt}(t_0, x_0) = ∫_{t_0}^{t_1 + Δt} (1/2)((x*)^T Qx* + (u*)^T Ru*) dt ≥ ∫_{t_0}^{t_1} (1/2)((x*)^T Qx* + (u*)^T Ru*) dt ≥ V_{t_1}(t_0, x_0)

We would know that it has a limit if we can show that it is bounded. Can we do that? Use controllability! There exists a time t̄ > t_0 and a control ū which steers the state from x_0 at t = t_0 to 0 at t = t̄. After t̄, set ū ≡ 0. Then, for all t_1 ≥ t̄,

V_{t_1}(t_0, x_0) ≤ J(t_0, x_0, ū) = ∫_{t_0}^{t̄} (1/2)(x̄^T Q x̄ + ū^T R ū) dt

(where x̄ is the corresponding trajectory), and this does not depend on t_1: a uniform bound.

So, lim_{t_1 → ∞} x_0^T P(t_0; 0, t_1) x_0 exists. Does this imply the existence of the matrix limit lim_{t_1 → ∞} P(·; 0, t_1)? Yes. Let us play the same kind of game as before (when we were proving global existence for P(t)) and plug in different x_0. Plug in x_0 = e_i: lim_{t_1 → ∞} P_ii(·; 0, t_1) exists. Plug in x_0 = e_i + e_j = (0, ..., 1, ..., 1, ..., 0)^T (1 in the i-th and j-th spots): lim (P_ii + 2P_ij + P_jj) exists, and since lim P_ii and lim P_jj exist, lim P_ij exists.

What are the interesting properties? Does lim_{t_1 → ∞} P(·; 0, t_1) depend on t_0? Remember that P(t_0; 0, t_1) is the solution at time t_0 of the time-invariant matrix ODE (6.1), and we are passing to the limit as t_1 → ∞; in the limit, the solution has flowed for an infinite time backwards starting from the zero matrix. So lim_{t_1 → ∞} P(t_0; 0, t_1) is a constant matrix, independent of t_0. Call it P_∞, or just P:

P = lim_{t_1 → ∞} P(t; 0, t_1)

Passing to the limit on both sides of the RDE, we see that Ṗ(t; 0, t_1) must also approach a constant matrix; this constant matrix must be the zero matrix. Hence P satisfies the algebraic Riccati equation (ARE)

PA + A^T P + Q - PBR^{-1}B^T P = 0   (6.2)

This simplification corresponds to getting rid of V_t in the HJB equation when passing to the infinite-horizon case. But now we are left with no derivatives at all! A way to think about this limit:

[Figure 6.1: Steady-state solution of RDE]

As t_1 → ∞, P(t; 0, t_1) approaches a steady state.
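To connect the ARE (6.2) with the limit of the finite-horizon solutions numerically, here is a small check (hypothetical Python using scipy; the double-integrator data are made-up, as in the earlier sketch):

import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Made-up controllable pair (A, B): a double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Steady-state solution of the algebraic Riccati equation (6.2).
P_inf = solve_continuous_are(A, B, Q, R)

# Backward integration of the RDE over a long horizon with P(t1) = 0.
def rde(t, p_flat):
    P = p_flat.reshape(2, 2)
    return (-P @ A - A.T @ P - Q + P @ B @ np.linalg.solve(R, B.T) @ P).flatten()

t1 = 30.0
sol = solve_ivp(rde, (t1, 0.0), np.zeros(4), rtol=1e-9, atol=1e-12)
P_rde = sol.y[:, -1].reshape(2, 2)

print(np.max(np.abs(P_inf - P_rde)))    # small: P(t; 0, t1) approaches the ARE solution
print(np.linalg.eigvals(A - B @ np.linalg.solve(R, B.T @ P_inf)))   # closed-loop eigenvalues

The closed-loop eigenvalues printed in the last line have negative real parts, in line with the connection to closed-loop asymptotic stability mentioned above.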



More information

2.2 Derivative as a Function

2.2 Derivative as a Function 2.2 Derivative as a Function Recall that we defined the derivative as f (a) = lim h 0 f(a + h) f(a) h But since a is really just an arbitrary number that represents an x-value, why don t we just use x

More information

Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems

Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems In Chapters 8 and 9 of this book we have designed dynamic controllers such that the closed-loop systems display the desired transient

More information

Notes on Factoring. MA 206 Kurt Bryan

Notes on Factoring. MA 206 Kurt Bryan The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

More information

v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors.

v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors. 3. Cross product Definition 3.1. Let v and w be two vectors in R 3. The cross product of v and w, denoted v w, is the vector defined as follows: the length of v w is the area of the parallelogram with

More information

Lectures 5-6: Taylor Series

Lectures 5-6: Taylor Series Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

More information

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a

More information

MATH 10034 Fundamental Mathematics IV

MATH 10034 Fundamental Mathematics IV MATH 0034 Fundamental Mathematics IV http://www.math.kent.edu/ebooks/0034/funmath4.pdf Department of Mathematical Sciences Kent State University January 2, 2009 ii Contents To the Instructor v Polynomials.

More information

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1 Implicit Functions Defining Implicit Functions Up until now in this course, we have only talked about functions, which assign to every real number x in their domain exactly one real number f(x). The graphs

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information

COLLEGE ALGEBRA. Paul Dawkins

COLLEGE ALGEBRA. Paul Dawkins COLLEGE ALGEBRA Paul Dawkins Table of Contents Preface... iii Outline... iv Preliminaries... Introduction... Integer Exponents... Rational Exponents... 9 Real Exponents...5 Radicals...6 Polynomials...5

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

Linear Programming Notes V Problem Transformations

Linear Programming Notes V Problem Transformations Linear Programming Notes V Problem Transformations 1 Introduction Any linear programming problem can be rewritten in either of two standard forms. In the first form, the objective is to maximize, the material

More information

Microeconomic Theory: Basic Math Concepts

Microeconomic Theory: Basic Math Concepts Microeconomic Theory: Basic Math Concepts Matt Van Essen University of Alabama Van Essen (U of A) Basic Math Concepts 1 / 66 Basic Math Concepts In this lecture we will review some basic mathematical concepts

More information

Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all.

Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all. 1. Differentiation The first derivative of a function measures by how much changes in reaction to an infinitesimal shift in its argument. The largest the derivative (in absolute value), the faster is evolving.

More information

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method Robert M. Freund February, 004 004 Massachusetts Institute of Technology. 1 1 The Algorithm The problem

More information

Using the Theory of Reals in. Analyzing Continuous and Hybrid Systems

Using the Theory of Reals in. Analyzing Continuous and Hybrid Systems Using the Theory of Reals in Analyzing Continuous and Hybrid Systems Ashish Tiwari Computer Science Laboratory (CSL) SRI International (SRI) Menlo Park, CA 94025 Email: ashish.tiwari@sri.com Ashish Tiwari

More information

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0.

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0. Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients We will now turn our attention to nonhomogeneous second order linear equations, equations with the standard

More information

Multi-variable Calculus and Optimization

Multi-variable Calculus and Optimization Multi-variable Calculus and Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Multi-variable Calculus and Optimization 1 / 51 EC2040 Topic 3 - Multi-variable Calculus

More information

Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88

Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88 Nonlinear Algebraic Equations Lectures INF2320 p. 1/88 Lectures INF2320 p. 2/88 Nonlinear algebraic equations When solving the system u (t) = g(u), u(0) = u 0, (1) with an implicit Euler scheme we have

More information

5 Homogeneous systems

5 Homogeneous systems 5 Homogeneous systems Definition: A homogeneous (ho-mo-jeen -i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x +... + a n x n =.. a m

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

Elasticity Theory Basics

Elasticity Theory Basics G22.3033-002: Topics in Computer Graphics: Lecture #7 Geometric Modeling New York University Elasticity Theory Basics Lecture #7: 20 October 2003 Lecturer: Denis Zorin Scribe: Adrian Secord, Yotam Gingold

More information

E3: PROBABILITY AND STATISTICS lecture notes

E3: PROBABILITY AND STATISTICS lecture notes E3: PROBABILITY AND STATISTICS lecture notes 2 Contents 1 PROBABILITY THEORY 7 1.1 Experiments and random events............................ 7 1.2 Certain event. Impossible event............................

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem

More information

FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 22

FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 22 FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 22 RAVI VAKIL CONTENTS 1. Discrete valuation rings: Dimension 1 Noetherian regular local rings 1 Last day, we discussed the Zariski tangent space, and saw that it

More information

Matrix Representations of Linear Transformations and Changes of Coordinates

Matrix Representations of Linear Transformations and Changes of Coordinates Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under

More information

Definition 8.1 Two inequalities are equivalent if they have the same solution set. Add or Subtract the same value on both sides of the inequality.

Definition 8.1 Two inequalities are equivalent if they have the same solution set. Add or Subtract the same value on both sides of the inequality. 8 Inequalities Concepts: Equivalent Inequalities Linear and Nonlinear Inequalities Absolute Value Inequalities (Sections 4.6 and 1.1) 8.1 Equivalent Inequalities Definition 8.1 Two inequalities are equivalent

More information

Polynomial and Rational Functions

Polynomial and Rational Functions Polynomial and Rational Functions Quadratic Functions Overview of Objectives, students should be able to: 1. Recognize the characteristics of parabolas. 2. Find the intercepts a. x intercepts by solving

More information

Controllability and Observability

Controllability and Observability Controllability and Observability Controllability and observability represent two major concepts of modern control system theory These concepts were introduced by R Kalman in 1960 They can be roughly defined

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2015 These notes have been used before. If you can still spot any errors or have any suggestions for improvement, please let me know. 1

More information

Math 181 Handout 16. Rich Schwartz. March 9, 2010

Math 181 Handout 16. Rich Schwartz. March 9, 2010 Math 8 Handout 6 Rich Schwartz March 9, 200 The purpose of this handout is to describe continued fractions and their connection to hyperbolic geometry. The Gauss Map Given any x (0, ) we define γ(x) =

More information

On the D-Stability of Linear and Nonlinear Positive Switched Systems

On the D-Stability of Linear and Nonlinear Positive Switched Systems On the D-Stability of Linear and Nonlinear Positive Switched Systems V. S. Bokharaie, O. Mason and F. Wirth Abstract We present a number of results on D-stability of positive switched systems. Different

More information

Finite dimensional topological vector spaces

Finite dimensional topological vector spaces Chapter 3 Finite dimensional topological vector spaces 3.1 Finite dimensional Hausdorff t.v.s. Let X be a vector space over the field K of real or complex numbers. We know from linear algebra that the

More information

1 Teaching notes on GMM 1.

1 Teaching notes on GMM 1. Bent E. Sørensen January 23, 2007 1 Teaching notes on GMM 1. Generalized Method of Moment (GMM) estimation is one of two developments in econometrics in the 80ies that revolutionized empirical work in

More information

An Introduction to Partial Differential Equations

An Introduction to Partial Differential Equations An Introduction to Partial Differential Equations Andrew J. Bernoff LECTURE 2 Cooling of a Hot Bar: The Diffusion Equation 2.1. Outline of Lecture An Introduction to Heat Flow Derivation of the Diffusion

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information