Controllability and Observability


Chapter 3

Controllability and Observability

In this chapter, we study the controllability and observability concepts. Controllability is concerned with whether one can design a control input to steer the state to arbitrary values. Observability is concerned with whether, without knowing the initial state, one can determine the state of a system given the input and the output.

3.1 Some linear mapping concepts

The study of controllability and observability for linear systems essentially boils down to studying two linear maps: the reachability map L_r, which maps, for zero initial state, the control input u(·) to the final state x(t_1); and the observability map L_o, which maps, for zero input, the initial state x(t_0) to the output trajectory y(·). The range and the null spaces of these maps, respectively, are critical for their study.

Let L : X (= R^p) → Y (= R^q) be a linear map, i.e.

    L(α x_1 + β x_2) = α (L x_1) + β (L x_2).

R(L) ⊆ Y denotes the range of L, given by:

    R(L) = { y ∈ Y : y = L x for some x ∈ X }

N(L) ⊆ X denotes the null space of L, given by:

    N(L) = { x ∈ X : L x = 0 }

Note: Both R(L) and N(L) are linear subspaces, i.e. for scalars α, β,

    y_1, y_2 ∈ R(L) ⇒ (α y_1 + β y_2) ∈ R(L)
    x_1, x_2 ∈ N(L) ⇒ (α x_1 + β x_2) ∈ N(L)

L is called onto or surjective if R(L) = Y (everything is within range).

L is called into or one-to-one (cf. many-to-one) if N(L) = {0}. Thus, L x_1 = L x_2 ⇒ x_1 = x_2.
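These definitions are easy to probe numerically. A minimal MATLAB sketch (the map L below is a hypothetical example):

    % A hypothetical linear map L : R^3 -> R^2
    L = [1 0 1;
         0 1 1];
    rank(L)      % 2 = dim Y, so R(L) = R^2 and L is onto
    null(L)      % a basis vector for N(L) exists, so L is not one-to-one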

The Finite Rank Theorem

Proposition: Let L ∈ R^{n×m}, so that L : u ∈ R^m ↦ y ∈ R^n. Then:

1. R(L) = R(L L^T)
2. N(L) = N(L^T L)
3. Any point u ∈ R^m can be written as u = L^T y_1 + u_n, where y_1 ∈ R^n and L u_n = 0. In other words, u can always be written as a sum of u_1 ∈ R(L^T) and u_n ∈ N(L).
4. Similarly, any point y ∈ R^n can be written as y = L u_1 + y_n, where u_1 ∈ R^m and L^T y_n = 0. In other words, y can always be written as a sum of y_1 ∈ R(L) and y_n ∈ N(L^T).

Note:

If L ∈ R^{n×m} and m > n (short and fat), then L L^T ∈ R^{n×n} is square. To check whether a system is controllable we need to check if L L^T is full rank, e.g. check its determinant.

If m < n (tall and thin), then L^T L ∈ R^{m×m} is square. Checking N(L^T L) will be useful when we talk about observability.

3.2 Controllability

Consider a state determined dynamical system with a transition map s(t_1, t_0, x_0, u(·)) such that

    x(t_1) = s(t_1, t_0, x(t_0), u(·)).

This can be continuous time or discrete time.

Definition (Controllability): The dynamical system is controllable on [t_0, t_1] if, for all initial and final states x_0, x_1, there exists u(·) so that s(t_1, t_0, x_0, u) = x_1. It is said to be controllable at t_0 if, for all x_0, x_1, there exist t_1 ≥ t_0 and u(·) ∈ U so that s(t_1, t_0, x_0, u) = x_1.

A system that is not controllable probably means the following:

- If the state space system is realized from input-output data, then the realization is redundant, i.e. some states have no relationship to either the input or the output.
- If the states are meaningful (physical) variables that need to be controlled, then the design of the actuators is deficient. The effect of control is limited.
- There is also a possibility of instability.

Remarks: The following are equivalent:

1. A system is controllable at t = t_0.
2. The system can be transferred from any state at t = t_0 to any other state in finite time.
3. For each x_0 ∈ Σ, the map u(·) ↦ s(t_1, t_0, x_0, u(·)) is surjective (onto).

Controllability does not say that x(t) remains at x_1. E.g. for ẋ = Ax + Bu, to make x(t) = x_1 for all t ≥ t_1, it is necessary that A x_1 ∈ Range(B). This is generally not true.

Example - An Uncontrollable System

A linear actuator pushing two masses, one on each end of the actuator, in space:

    m_1 ẍ_1 = u;    m_2 ẍ_2 = −u

State: X = [x_1, ẋ_1, x_2, ẋ_2]^T. Action and reaction are equal and opposite, and act on different bodies. Momentum is conserved:

    m_1 ẋ_1 + m_2 ẋ_2 = constant = m_1 ẋ_1(t_0) + m_2 ẋ_2(t_0)

E.g. if both masses are initially at rest, it is not possible to choose u(·) to reach a state with nonzero total momentum, such as ẋ_1 = 0, ẋ_2 ≠ 0. (A numerical check is sketched below.)

Effects of feedback on controllability

Consider a system ẋ = f(t, x(t), u(t)). Suppose that the system is under memoryless (static) state feedback:

    u(t) = v(t) − g(t, x(t))

with v(t) being the new input.

A system is controllable if and only if the system under memoryless feedback with v(t) as the input is controllable.

Proof: Suppose that the original system is controllable. Given a pair of initial and final states, x_0 and x_1, suppose that the control u(·) transfers the state from x_0 at t_0 to x_1 at t_1. For the new system, which also has x(t) as the state variable, we can use the control v(t) = u(t) + g(t, x(t)) to transfer the state from x_0 at t_0 to x_1 at t_1. Therefore, the new system is also controllable.

Suppose that the feedback system is controllable, and v(·) is the control that achieves the desired state transfer. Then for the original system, we can use the control u(t) = v(t) − g(t, x(t)) to achieve the same state transfer. Thus, the original system is also controllable.

Consider now the system under dynamic state feedback:

    ż(t) = α(t, x(t), z(t))
    u(t) = v(t) − g(t, x(t), z(t))

where v(t) is the new input and z(t) is the state of the controller.

If a system under dynamic feedback is controllable, then the original system is. The converse is not true, i.e. the original system being controllable does not necessarily imply that the system under dynamic state feedback is.

Proof: The same proof as for memoryless state feedback works for the first statement. For the second statement, we give an example of a controller: ż = z, u(t) = v(t) − g(t, x(t), z(t)). The state of the new system is (x, z), but no control can affect z(t). Thus, the new system is not controllable.

These results show that feedback control cannot make an uncontrollable system controllable, but can make a controllable system uncontrollable. To make a system controllable, one must redesign the actuators.

3.3 Controllability of Linear Systems

Continuous time system:

    ẋ(t) = A(t)x(t) + B(t)u(t),    x ∈ R^n

    x(t_1) = Φ(t_1, t_0) x_0 + ∫_{t_0}^{t_1} Φ(t_1, t)B(t)u(t) dt.

The reachability map on [t_0, t_1] is defined to be:

    L_{r,[t_0,t_1]}(u(·)) = ∫_{t_0}^{t_1} Φ(t_1, t)B(t)u(t) dt

Thus, the system is controllable on [t_0, t_1] if and only if L_{r,[t_0,t_1]} is surjective (onto). Notice that L_{r,[t_0,t_1]} determines the set of states that can be reached from the origin at t = t_1. The study of the range space of the linear map

    L_{r,[t_0,t_1]} : {u(·)} → R^n

is central to the study of controllability.
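Anticipating the matrix tests of Section 3.6, the uncontrollable two-mass example of Section 3.2 can be checked numerically. A minimal MATLAB sketch (ctrb from the Control System Toolbox; unit masses are an illustrative choice):

    % Two masses pushed apart by one actuator; state X = [x1, x1dot, x2, x2dot]
    m1 = 1; m2 = 1;
    A = [0 1 0 0;
         0 0 0 0;
         0 0 0 1;
         0 0 0 0];
    B = [0; 1/m1; 0; -1/m2];   % action and reaction: equal and opposite
    rank(ctrb(A, B))           % 2 < 4: the system is not controllable

The two uncontrollable directions correspond to the conserved total momentum and the freely drifting center of mass.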

Proposition: If a continuous time, possibly time varying, linear system is controllable on [t_0, t_1], then it is controllable on [t_0, t_2] where t_2 ≥ t_1.

Proof: To transfer from x(t_0) = x_0 to x(t_2) = x_2, define x_1 = Φ(t_1, t_2) x_2. Suppose that u*_{[t_0,t_1]} transfers x(t_0) = x_0 to x(t_1) = x_1. Choose

    u(t) = u*(t) for t ∈ [t_0, t_1];    u(t) = 0 for t ∈ (t_1, t_2].

This result does not hold for t_0 < t_2 < t_1. E.g. the system ẋ(t) = B(t)u(t) with

    B(t) = 0 for t ∈ [t_0, t_2];    B(t) = I_n for t ∈ (t_2, t_1]

is controllable on [t_0, t_1], but x(t_2) = x(t_0) for any control u(·).

Discrete time system:

    x(k+1) = A(k)x(k) + B(k)u(k),    x ∈ R^n, u ∈ R^m

The transition function of the discrete time system is given by:

    x(k_1) = s(k_1, k_0, x(k_0), u(·)) = Φ(k_1, k_0)x(k_0) + Σ_{k=k_0}^{k_1−1} Φ(k_1, k+1)B(k)u(k)    (3.1)

where

    Φ(k_f, k_i) = Π_{k=k_i}^{k_f−1} A(k)    (3.2)

The reachability map of the discrete time system can be written as:

    L_{r,[k_0,k_1]}(u(k_0), ..., u(k_1−1)) = Σ_{k=k_0}^{k_1−1} Φ(k_1, k+1)B(k)u(k) = L_r(k_0, k_1) U

where L_r(k_0, k_1) ∈ R^{n×(k_1−k_0)m} and U = [u(k_0); ...; u(k_1−1)].

Thus, the system is controllable on [k_0, k_1] if and only if L_{r,[k_0,k_1]} is surjective, which is true if and only if L_r(k_0, k_1) has rank n.

Proposition: Suppose that A(k) is nonsingular for each k_1 ≤ k < k_2. Then the discrete time linear system being controllable on [k_0, k_1] implies that it is controllable on [k_0, k_2] where k_2 ≥ k_1.

Proof: The proof is similar to the continuous time case. However, for a discrete time system, we need A(k) nonsingular for k_1 ≤ k < k_2 to ensure that Φ(k_2, k_1) is invertible. Q.E.D.

Example: Consider a unit point mass under control of a force:

    d/dt [x; ẋ] = [0 1; 0 0] [x; ẋ] + [0; 1] u

This is the same as ẍ = u. Suppose that x(0) = ẋ(0) = 0; we would like to translate the mass to x(T) = 1, ẋ(T) = 0. The reachability map L : u(·) ↦ x(T) is

    x(T) = ∫_0^T Φ(T, t)B(t)u(t) dt

Let us try to solve this problem using piecewise constant control:

    u(t) = u_1 for 0 ≤ t < T/10,  u_2 for T/10 ≤ t < 2T/10,  ...,  u_10 for 9T/10 ≤ t ≤ T

and find U = [u_1, u_2, ..., u_10]^T. The reachability map becomes:

    x(T) = (L_1  L_2  ...  L_10) U,    with L_r := (L_1  L_2  ...  L_10)

where, since Φ(T, t)B = [T − t; 1],

    L_i = ∫_{(i−1)T/10}^{iT/10} Φ(T, t)B(t) dt = (T/10) [T − (2i−1)T/20; 1]

e.g. L_1 = (T/10)[0.95T; 1], L_2 = (T/10)[0.85T; 1], ..., L_10 = (T/10)[0.05T; 1].

Whether the matrix L_r ∈ R^{2×10} is surjective (onto) (i.e. R(L_r) = R^2) determines whether we can achieve our task using piecewise constant control. In our case, let T = 1 for convenience:

    L_r = (1/10) [0.95 0.85 ... 0.05; 1 1 ... 1]    and    L_r L_r^T = [0.03325 0.05; 0.05 0.1]

which is non-singular (full rank). Thus, from the finite rank theorem R(L_r) = R(L_r L_r^T), the particle transfer problem is solvable using piecewise constant control.
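This computation is easy to reproduce. A minimal sketch in base MATLAB that builds L_r from the integrals above and also solves for the minimum norm piecewise constant control (anticipating Section 3.5):

    % Unit mass, 10 piecewise constant control segments on [0, T], T = 1
    T = 1; N = 10; dt = T/N;
    Lr = zeros(2, N);
    for i = 1:N
        a = (i-1)*dt; b = i*dt;
        Lr(:, i) = [T*(b-a) - (b^2 - a^2)/2;  b - a];  % integral of Phi(T,t)*B = [T-t; 1]
    end
    rank(Lr*Lr')                % 2: full rank, so the transfer is solvable
    xd = [1; 0];                % target: x(T) = 1, xdot(T) = 0
    U  = Lr' * ((Lr*Lr') \ xd)  % minimum norm control levels u_1, ..., u_10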

If we make the number of pieces in the piecewise constant control larger and larger, then the product of L_r with L_r^T becomes an integral:

    L_r L_r^T → ∫_{t_0}^{t_1} Φ(t_1, t)B(t)B(t)^T Φ(t_1, t)^T dt.

Therefore the continuous time linear system is controllable over the interval [t_0, t_1] if and only if the matrix

    ∫_{t_0}^{t_1} Φ(t_1, t)B(t)B(t)^T Φ(t_1, t)^T dt    (3.3)

is full rank.

3.4 Reachability Grammian

The matrix W_r = L_r L_r^T ∈ R^{n×n} is called the Reachability Grammian for the interval on which L_r is defined. For the continuous time system ẋ = A(t)x + B(t)u, the Reachability Grammian on the time interval [t_0, t_1] is defined to be:

    W_{r,[t_0,t_1]} = ∫_{t_0}^{t_1} Φ(t_1, t)B(t)B(t)^T Φ(t_1, t)^T dt

For time invariant systems,

    W_r([t_0, t_1]) = L_r L_r^* = ∫_{t_0}^{t_1} e^{A(t_1−t)} B B^T e^{A^T(t_1−t)} dt = ∫_0^{t_1−t_0} e^{Aτ} B B^T e^{A^T τ} dτ.

For the discrete time system x(k+1) = A(k)x(k) + B(k)u(k), the Reachability Grammian on the interval [k_0, k_1] is

    Σ_{k=k_0}^{k_1−1} Φ(k_1, k+1)B(k)B(k)^T Φ(k_1, k+1)^T.

From the finite rank theorem, we know that the controllability of a system on [t_0, t_1] (or on [k_0, k_1]) is equivalent to the Reachability Grammian on the same time interval being full rank.
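For an LTI system, the finite-horizon Grammian can be evaluated by direct quadrature. A minimal sketch in base MATLAB for the double integrator on [0, 1]:

    % Reachability Grammian of the double integrator over [0, 1]
    A = [0 1; 0 0]; B = [0; 1]; t1 = 1;
    Wr = integral(@(t) expm(A*(t1-t))*(B*B')*expm(A'*(t1-t)), 0, t1, ...
                  'ArrayValued', true);
    rank(Wr)    % 2: controllable on [0, 1]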

Reduction theorem: Controllability in terms of Controllability to Zero

Thus far, we have been deciding controllability by considering the reachability map L_r, which concerns what states we are able to steer to from x(t_0) = 0. The reduction theorem allows us to formulate the question in terms of whether a state can be steered to x(t_f) = 0 in finite time.

Theorem: The following are equivalent for a linear time varying differential system ẋ = A(t)x + B(t)u:

1. It is controllable on [t_0, t_1].
2. It is controllable to 0 on [t_0, t_1], i.e. for all x(t_0) = x_0, there exists u(τ), τ ∈ [t_0, t_1], such that the final state is x(t_1) = 0.
3. It is reachable from x(t_0) = 0, i.e. for all x_f, there exists u(τ), τ ∈ [t_0, t_1], such that for x(t_0) = 0 the final state is x(t_1) = x_f.

Proof: Clearly, (1) ⇒ (2) and (3). To prove (2) ⇒ (1) and (3) ⇒ (1), use

    x(t_1) = Φ(t_1, t_0)x(t_0) + ∫_{t_0}^{t_1} Φ(t_1, τ)B(τ)u(τ) dτ

and the fact that Φ(t_1, t_0) is invertible, to construct the appropriate x(t_0) and x(t_1).

Controllability map: L_{c,[t_0,t_1]}

The controllability (to zero) map on [t_0, t_1] is the map from u(·) to the initial state x_0 such that x(t_1) = 0:

    L_{c,[t_0,t_1]}(u(·)) = −∫_{t_0}^{t_1} Φ(t_0, t)B(t)u(t) dt

Proof: Notice that

    0 = x(t_1) = Φ(t_1, t_0)x_0 + ∫_{t_0}^{t_1} Φ(t_1, τ)B(τ)u(τ) dτ
    ⇒ x_0 = −Φ(t_1, t_0)^{−1} ∫_{t_0}^{t_1} Φ(t_1, τ)B(τ)u(τ) dτ
          = −∫_{t_0}^{t_1} Φ(t_0, t_1)Φ(t_1, τ)B(τ)u(τ) dτ
          = −∫_{t_0}^{t_1} Φ(t_0, τ)B(τ)u(τ) dτ

It follows that

    R(L_{c,[t_0,t_1]}) = Φ(t_0, t_1) R(L_{r,[t_0,t_1]})

These two ranges are typically not the same for time varying systems. However, one is full rank if and only if the other is, since Φ(t_0, t_1) is invertible.

For time invariant continuous time systems, R(L_{c,[t_0,t_1]}) = R(L_{r,[t_0,t_1]}).

The above two statements are not true for time invariant discrete time systems. Can you find counter-examples?

Because of the reduction theorem, the linear continuous time system is controllable on the interval [t_0, t_1] if and only if R(L_{c,[t_0,t_1]}) = R^n. From the finite rank theorem, this holds if and only if

    R(L_{c,[t_0,t_1]} L_{c,[t_0,t_1]}^*) = R^n.

The controllability-to-zero grammian on the time interval [t_0, t_1] is defined to be:

    W_c = L_c L_c^* = ∫_{t_0}^{t_1} Φ(t_0, t)B(t)B(t)^T Φ(t_0, t)^T dt

For a time invariant system,

    W_c = L_c L_c^* = ∫_{t_0}^{t_1} e^{A(t_0−t)} B B^T e^{A^T(t_0−t)} dt = ∫_{−(t_1−t_0)}^{0} e^{Aτ} B B^T e^{A^T τ} dτ.

Since controllability of a system on [t_0, t_1] is equivalent to controllability to 0, which is determined by the controllability map being surjective, it is also equivalent to the controllability grammian on the same time interval [t_0, t_1] being full rank.

3.5 Minimum norm control

Formulation

The Controllability Grammian is related to the cost of control. Given x(t_0) = x_0, find a control function or sequence u(·) so that x(t_1) = x_1. Let x_d = x_1 − Φ(t_1, t_0)x_0. Then we must have x_d = L_{r,[t_0,t_1]}(u), where L_{r,[t_0,t_1]} : {u(·)} → R^n is the reachability map, which for a continuous time system is:

    L_{r,[t_0,t_1]}(u) = ∫_{t_0}^{t_1} Φ(t_1, t)B(t)u(t) dt

and for a discrete time system:

    L_{r,[k_0,k_1]}(u(·)) = Σ_{k=k_0}^{k_1−1} Φ(k_1, k+1)B(k)u(k) = C(k_0, k_1) U

where C ∈ R^{n×(k_1−k_0)m} and U = [u(k_0); ...; u(k_1−1)].

Let us focus on the discrete time system. Since we need x_d ∈ Range(L_{r,[k_0,k_1]}) for solutions to exist, we assume Range(L_{r,[k_0,k_1]}) = R^n. This implies that W_{r,[k_0,k_1]} is invertible. Generally there are multiple solutions. To resolve the non-uniqueness, we find the solution so that

    J(U) = (1/2) Σ_{k=k_0}^{k_1−1} u(k)^T u(k) = (1/2) U^T U

is minimized. Here U = [u(k_0); u(k_0+1); ...; u(k_1−1)].

Least norm control solution

This is a constrained optimization problem, with J(U) as the cost to be minimized and L_r U − x_d = 0 as the constraint. It can be solved using the Lagrange multiplier method of converting a constrained optimization problem into an unconstrained one. Define an augmented cost function, with Lagrange multipliers λ ∈ R^n:

    J*(U, λ) = J(U) + λ^T (L_r U − x_d)

The optimum (U°, λ°) must satisfy:

    ∂J*/∂U (U°, λ°) = 0;    ∂J*/∂λ (U°, λ°) = 0

The second condition is just the constraint: L_r U° = x_d. The first means that

    L_r^T λ° + U° = 0

Solving, we get:

    λ° = −(L_r L_r^T)^{−1} x_d = −W_r^{−1} x_d
    U° = −L_r^T λ° = L_r^T W_r^{−1} x_d.

The optimal cost of control is:

    J(U°) = (1/2) x_d^T W_r^{−1} L_r L_r^T W_r^{−1} x_d = (1/2) x_d^T W_r^{−1} x_d

Thus, the inverse of the Reachability Grammian tells us how difficult it is to perform a state transfer from x = 0 to x_d. In particular, if W_r is not invertible, the cost is infinite for some x_d.

Geometric interpretation of least norm solution

Geometrically, we can think of the cost as J = U^T U, i.e. the inner product of U with itself. In inner product notation, J = ⟨U, U⟩_R. The advantage of this notation is that we can change the definition of the inner product, e.g.

    ⟨U, V⟩_R = U^T R V

where R is a positive definite matrix. The usual inner (dot) product has R = I. We say that U and V are normal to each other if ⟨U, V⟩_R = 0.

Any solution that satisfies the constraint must be of the form

    (U − U_p) ∈ Null(L_r)

where U_p is any particular solution, L_r U_p = x_d.

Claim: Let U° be the optimal solution and U any solution that satisfies the constraint. Then (U − U°) ⊥ U°, i.e. ⟨U − U°, U°⟩_R = 0, which is the normal equation for the least norm solution problem.

Proof: Direct substitution. With the weighted inner product, the optimal solution is U° = R^{−1} L_r^T (L_r R^{−1} L_r^T)^{−1} x_d, so

    ⟨U − U°, U°⟩_R = (U − U°)^T R R^{−1} L_r^T (L_r R^{−1} L_r^T)^{−1} x_d
                   = (L_r (U − U°))^T (L_r R^{−1} L_r^T)^{−1} x_d = 0

since L_r U = L_r U° = x_d.
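The least norm solution and the normal equation are easy to verify numerically. A minimal sketch in base MATLAB (the discrete-time pair, horizon, and target below are hypothetical; R = I):

    % 4-step transfer for a hypothetical discrete double integrator
    A = [1 1; 0 1]; B = [0; 1]; N = 4;
    Lr = [];
    for k = N-1:-1:0
        Lr = [Lr, A^k * B];      % Lr = [A^3*B, A^2*B, A*B, B], U = [u(0);...;u(3)]
    end
    xd = [3; -1];
    U0 = Lr' * ((Lr*Lr') \ xd);        % least norm solution
    U1 = U0 + null(Lr) * [1; 0];       % another feasible solution: add a null space term
    [norm(Lr*U1 - xd), (U1 - U0)'*U0]  % both ~0: U1 is feasible, and U1 - U0 is normal to U0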

Open-loop least norm control

Applying the above result to the linear continuous time system using the standard norm, i.e. J = ∫_{t_0}^{t_1} u^T(τ)u(τ) dτ:

    (L_{r,[t_0,t_1]}^* x)(t) = B(t)^T Φ(t_1, t)^T x
    W_{r,[t_0,t_1]} = L_r L_r^* = ∫_{t_0}^{t_1} Φ(t_1, τ)B(τ)B^T(τ)Φ(t_1, τ)^T dτ
    u_opt(t) = B(t)^T Φ(t_1, t)^T W_{r,[t_0,t_1]}^{−1} (x_1 − Φ(t_1, t_0)x_0)    (3.4)

Eq. (3.4) solves for u_opt(·) in a batch form given x_0, i.e. it is open loop.

Recursive formulation

Open loop control does not make use of feedback. It is not robust to disturbances or model uncertainty. Suppose now that x(t) is measured. Conceptually, we can re-solve (3.4) for u_opt(t) at each time, setting t_0 = t:

    u_opt(t) = B(t)^T Φ(t_1, t)^T W_{r,[t,t_1]}^{−1} (x_1 − Φ(t_1, t)x(t))
             = B(t)^T Φ(t_1, t)^T W_{r,[t,t_1]}^{−1} x_1 − B(t)^T Φ(t_1, t)^T W_{r,[t,t_1]}^{−1} Φ(t_1, t) x(t)

Computing W_{r,[t,t_1]} at each t is expensive. Try recursion. For any invertible matrix Q(t) ∈ R^{n×n}, Q(t)Q^{−1}(t) = I, therefore

    d/dt Q^{−1}(t) = −Q^{−1}(t) Q̇(t) Q^{−1}(t)

Since

    W_{r,[t,t_1]} = ∫_t^{t_1} Φ(t_1, τ)B(τ)B(τ)^T Φ(t_1, τ)^T dτ
    d/dt W_{r,[t,t_1]} = −Φ(t_1, t)B(t)B(t)^T Φ(t_1, t)^T

we can solve for W_{r,[t,t_1]}^{−1} dynamically by integrating on-line:

    d/dt W_{r,[t,t_1]}^{−1} = W_{r,[t,t_1]}^{−1} Φ(t_1, t)B(t)B(t)^T Φ(t_1, t)^T W_{r,[t,t_1]}^{−1}

Issue: This does not work well as t → t_1, because W_{r,[t,t_1]} can be close to singular.

Optimal cost

Optimal cost as a function of initial time t_0

Since

    W_{r,[t_0,t_f]} = ∫_{t_0}^{t_f} Φ(t_f, τ)B(τ)B(τ)^T Φ(t_f, τ)^T dτ

and the integrand is a positive semi-definite term, for each v ∈ R^n, v^T W_{r,[t_0,t_f]}^{−1} v decreases as t_0 decreases, i.e. given more time, the cost of control decreases. To see this, fix the final time t_f, and consider the minimum cost function

    E_min(x_f, t_0) = x_f^T W_{r,[t_0,t_f]}^{−1} x_f

and consider

    d/dt_0 E_min(x_f, t_0) = x_f^T (d/dt_0 W_{r,[t_0,t_f]}^{−1}) x_f.

Since

    d/dt_0 W_{r,[t_0,t_f]}^{−1} = −W_{r,[t_0,t_f]}^{−1} (d/dt_0 W_{r,[t_0,t_f]}) W_{r,[t_0,t_f]}^{−1}
                                = W_{r,[t_0,t_f]}^{−1} Φ(t_f, t_0)B(t_0)B(t_0)^T Φ(t_f, t_0)^T W_{r,[t_0,t_f]}^{−1}

is positive semi-definite, d/dt_0 E_min(x_f, t_0) ≥ 0. Hence, the optimal cost decreases as the initial time decreases, i.e. as the allowable time interval increases. In other words, the set of states reachable with a cost less than a given value grows as t_0 decreases.

If the system is unstable, in the sense that Φ(t_f, t_0) becomes unbounded as t_0 → −∞, then one can see that W_{r,[t_0,t_f]} also becomes unbounded. This means that for some x_f, E_min(x_f, t_0) → 0 as t_0 → −∞, i.e. some directions do not require any cost of control. Which directions do these correspond to?

For a time invariant system, decreasing t_0 is equivalent to increasing t_f. Thus, for a time-invariant system, increasing the time interval increases W_r, thus decreasing cost.

Optimal cost as a function of final state x_f

Since W_{r,[t_0,t_f]} is symmetric and positive semi-definite, it has n orthogonal eigenvectors V = [v_1, ..., v_n] and associated eigenvalues Λ = diag(λ_1, ..., λ_n), such that V V^T = I. Then

    W_{r,[t_0,t_f]} = V Λ V^T,    W_{r,[t_0,t_f]}^{−1} = V diag(λ_1^{−1}, ..., λ_n^{−1}) V^T

This shows that the most expensive direction to transfer to is the eigenvector associated with the minimum λ_i, and the least expensive direction is the eigenvector associated with the maximum λ_i.

3.6 Linear Time Invariant (LTI) Continuous Time Systems

Consider first the LTI continuous time system:

    ẋ = Ax + Bu
    y = Cx    (3.5)

Note: Because of time invariance, a system controllable (observable) at some t_0 is controllable (observable) at any time. We will develop tests for controllability and observability that use the matrices A and B directly. The three tests for controllability of system (3.5) are given by the following theorem.

Theorem: For the continuous time system (3.5), the following are equivalent:

1. The system is controllable over the interval [0, T], for some T > 0.
2. The system is controllable over any interval [t_0, t_1] with t_1 > t_0.
3. The reachability grammian

    W_{r,T} := ∫_0^T Φ(T, t)B(t)B(t)^T Φ(T, t)^T dt = ∫_0^T e^{At} B B^* e^{A^*t} dt

is full rank n. (This test works for time varying systems also.)

4. The controllability matrix

    C := (B  AB  A^2B  ...  A^{n−1}B)

has rank n. Notice that the controllability matrix has dimension n × nm, where m is the dimension of the control vector u.

5. (Belevitch-Popov-Hautus (PBH) test) For each s ∈ C, the matrix (sI − A, B) has rank n. Note that the rank of (sI − A, B) can be less than n only if s is an eigenvalue of A.

We need the Cayley-Hamilton Theorem to prove this result:

Theorem (Cayley-Hamilton): Let A ∈ R^{n×n}. Consider the polynomial ψ(s) = det(sI − A). Then ψ(A) = 0. Notice that if λ_i is an eigenvalue of A, then ψ(λ_i) = 0.

Proof: This is true for any A, but is easy to see using the eigen-decomposition of A when A is semi-simple: A = TΛT^{−1}, where Λ is diagonal with the eigenvalues on its diagonal. Since ψ(λ_i) = 0 by definition, ψ(A) = Tψ(Λ)T^{−1} = 0.
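Before the proof, the three tests can be exercised numerically. A minimal MATLAB sketch (ctrb, ss and gram from the Control System Toolbox; gram computes the infinite-horizon grammian, which requires the hypothetical pair below to be stable):

    % Three controllability tests on a hypothetical stable pair (A, B)
    A = [0 1; -2 -3]; B = [0; 1]; n = 2;
    rank(ctrb(A, B))                                % test 4: controllability matrix
    sys = ss(A, B, eye(n), 0);
    rank(gram(sys, 'c'))                            % test 3: grammian (A stable)
    arrayfun(@(s) rank([s*eye(n) - A, B]), eig(A))  % test 5: PBH at the eigenvalues of A

All three return n = 2, so the pair is controllable.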

Proof (controllability tests):

(1) ⇔ (3): We have already seen that controllability over the interval [0, T] means that the range space of the reachability map

    L_r(u) = ∫_0^T e^{A(T−τ)} B u(τ) dτ

must be the whole state space R^n. Notice that

    W_{r,T} := ∫_0^T e^{At} B B^* e^{A^*t} dt = L_r L_r^*

where L_r^* is the adjoint (think transpose) of L_r. From the finite rank linear map theorem, R(L_r) = R(L_r L_r^*); but L_r L_r^* is nothing but W_{r,T}.

(3) ⇒ (4): We will show this by showing not (4) ⇒ not (3). Suppose that (4) is not true. Then there exists a 1 × n vector v^T ≠ 0 so that

    v^T B = v^T AB = v^T A^2 B = ... = v^T A^{n−1} B = 0.

Consider now

    v^T W_{r,T} v = ∫_0^T ‖v^T e^{At} B‖_2^2 dt.

Since

    e^{At} = I + At + A^2 t^2 / 2! + ... + A^{n−1} t^{n−1} / (n−1)! + ...

and, by the Cayley-Hamilton theorem, A^k for k ≥ n is a linear combination of I, A, ..., A^{n−1}, it follows that v^T e^{At} B = 0 for all t. Hence v^T W_{r,T} v = 0, i.e. W_{r,T} is not full rank.

(4) ⇒ (2): We will show this by showing not (2) ⇒ not (4). If (2) is not true, then there exists a 1 × n vector v^T ≠ 0 so that

    v^T W_{r,T} v = ∫_0^T ‖v^T e^{At} B‖_2^2 dt = 0.

Because e^{At} is continuous in t, this implies that v^T e^{At} B = 0 for all t ∈ [0, T]. Hence all time derivatives of v^T e^{At} B vanish on [0, T]. In particular, at t = 0:

    v^T e^{At} B |_{t=0} = v^T B = 0
    v^T (d/dt) e^{At} B |_{t=0} = v^T A B = 0
    v^T (d^2/dt^2) e^{At} B |_{t=0} = v^T A^2 B = 0
    ...
    v^T (d^{n−1}/dt^{n−1}) e^{At} B |_{t=0} = v^T A^{n−1} B = 0

Hence

    v^T (B  AB  A^2B  ...  A^{n−1}B) = 0,

i.e. the controllability matrix of (A, B) is not full rank.

(4) ⇒ (5): We will show that not (5) implies not (4). Suppose (5) is not true, so that there exist a 1 × n vector v ≠ 0 and λ ∈ C such that

    v^T (λI − A) = 0,    v^T B = 0.

Then v^T A = λ v^T, v^T A^2 = λ^2 v^T, etc. Because v^T B = 0 by assumption, we have v^T AB = λ v^T B = 0, v^T A^2 B = λ^2 v^T B = 0, etc. Hence

    v^T B = v^T AB = v^T A^2 B = ... = v^T A^{n−1} B = 0.

Therefore the controllability matrix is not full rank.

The proof of (4) ⇒ (3) requires the 2nd representation theorem, and we shall leave it until afterward. Note that (2) ⇒ (1) is obvious. To see that (1) ⇒ (2), notice that the controllability matrix C does not depend on T, so controllability does not depend on the duration of the time interval. Since the system is time invariant, it does not depend on the initial time t_0 either.

Proof of (4) ⇒ (3) - Optional

To prove (4) ⇒ (3), we need the so-called 2nd Representation Theorem. It is included here for completeness. We will not go through this part.

Definition: Let A : R^n → R^n and let W ⊆ R^n be a subspace with the property that for each w ∈ W, Aw ∈ W. We say that W is A-invariant.

Theorem (2nd Representation theorem): Let A : R^n → R^n be a linear map (i.e. a matrix) and let W ⊆ R^n be an A-invariant k-dimensional subspace of R^n. There exists a nonsingular T = (e_1 e_2 ... e_n) s.t. e_1, ..., e_k form a basis of W and, in the basis e_1, e_2, ..., e_n, A is represented by:

    T^{−1} A T = [Ã_11 Ã_12; 0 Ã_22]

with Ã_11 ∈ R^{k×k}, where k = dim W. For any vector B ∈ W, the representation of B is given by B = T [B̃_1; 0].

The point of the theorem is that A can be put in a form with zeros in strategic locations (Ã_21 = 0), and similarly for any vector B ∈ W (B̃_2 = 0).

Proof: Take e_1, ..., e_k to be a basis of W, and choose e_{k+1}, ..., e_n to complete the basis of R^n. Because A e_i ∈ W for each i = 1, ..., k,

    A e_i = Σ_{l=1}^k α_{li} e_l.

Therefore the α_{li} are the coefficients of Ã_11, and the coefficients of Ã_21 are all zero. Let B ∈ W; then

    B = Σ_{l=1}^k β_l e_l

so that the β_l are simply the coefficients of B̃_1, and the coefficients of B̃_2 are zero.

Proof of (4) ⇒ (3) in the controllability test: Notice that

    A (B  AB  A^2B  ...  A^{n−1}B) = (AB  A^2B  ...  A^{n−1}B  A^nB);

therefore, by the Cayley-Hamilton theorem, the range space of the controllability matrix is A-invariant. Suppose that (4) is not true, so that the rank of (B AB A^2B ... A^{n−1}B) is less than n. Since the range of the controllability matrix is A-invariant, by the 2nd representation theorem there is an invertible matrix T so that in the new coordinates, z = T^{−1}x,

    ż = [Ã_11 Ã_12; 0 Ã_22] z + [B̃_1; 0] u    (3.6)

where B = T [B̃_1; 0], A = T [Ã_11 Ã_12; 0 Ã_22] T^{−1}, and the dimension of Ã_22 is non-zero (it is n − rank(C)). Let v_2^T be a left eigenvector of Ã_22, so that

    v_2^T (λI − Ã_22) = 0

for some λ ∈ C. Define v^T := (0  v_2^T) T^{−1}. Evaluating v^T B and v^T (λI − A) gives

    v^T B = (0  v_2^T) [B̃_1; 0] = 0
    v^T (λI − A) = (0  v_2^T) T^{−1} (λ T T^{−1} − T Ã T^{−1}) = (0  v_2^T)(λI − Ã) T^{−1} = 0.

This shows that [λI − A, B] has rank less than n.

Remarks:

1. Notice that the controllability matrix is not a function of the time interval. Hence, if an LTI system is controllable over some interval, it is controllable over any (non-zero) interval. Cf. the result for linear time varying systems.
2. Because of the above fact, we often say that the pair (A, B) is controllable.
3. The controllability test can be done by just examining A and B, without computing the grammian. The test in (4) is attractive in that it enumerates the vectors in the controllable subspace. However, since it involves powers of A, numerical stability needs to be considered.
4. The PBH test in (5), or the Hautus test for short, involves simply checking the condition at the eigenvalues. This is because for (sI − A, B) to have rank less than n, s must be an eigenvalue of A.
5. The range space of the controllability matrix is of special interest. It is called the controllable subspace and is the set of all states that can be reached from the zero initial condition. It is A-invariant.
6. Using the basis for the controllable subspace as part of the basis for R^n, the controllability property can be easily seen in (3.6).

3.7 Linear Time Invariant (LTI) discrete time systems

For the discrete time system

    x(k+1) = Ax(k) + Bu(k)
    y(k) = Cx(k)    (3.7)

where x ∈ R^n, u ∈ R^m, the conditions for controllability are fairly similar, except that we need to consider controllability over periods of length at least n. First, recall that the reachability map for a linear discrete time system, L_{r,[k_0,k_1]} : u(·) ↦ s(k_1, k_0, x_0 = 0, u(·)), is given by:

    L_{r,[k_0,k_1]}[u(·)] = Σ_{i=k_0}^{k_1−1} (Π_{j=i+1}^{k_1−1} A(j)) B(i) u(i)    (3.8)

Theorem: For the discrete time system (3.7), the following are equivalent:

1. The system is controllable over the interval [0, T], with T ≥ n (i.e. it can transfer from any arbitrary state at k = 0 to any other arbitrary state at k = T).
2. The system is controllable over the interval [k_0, k_1], with k_1 − k_0 ≥ n.
3. The controllability grammian

    W_{r,T} := Σ_{l=0}^{T−1} A^l B B^* (A^*)^l

is full rank n.
4. The controllability matrix

    (B  AB  A^2B  ...  A^{n−1}B)

has rank n. Notice that the controllability matrix has dimension n × nm, where m is the dimension of the control vector u.
5. For each z ∈ C, the matrix (zI − A, B) has rank n.

Proof: The proof of this theorem is exactly analogous to the continuous time case. However, in the case of the equivalence of (2) and (4), we can notice that

    x(n) = A^n x(0) + (B  AB  A^2B  ...  A^{n−1}B) [u(n−1); u(n−2); ...; u(0)]

so clearly one can transfer to arbitrary states at k = n iff the controllability matrix is full rank. That controllability does not improve for T > n is again a consequence of the Cayley-Hamilton theorem.

3.8 Observability

The observability question deals with whether, while not knowing the initial state, one can determine the state from the input and the output. This is equivalent to the question of whether one can determine the initial state x(0) given the input u(t), t ∈ [0, T], and the output y(t), t ∈ [0, T].

Definition: A state determined dynamical system is called observable on [t_0, t_1] if, from the input u(τ) and output y(τ), τ ∈ [t_0, t_1], the state x(t_0) = x_0 can be uniquely determined.

For observability, the null space of the observability map on the interval [t_0, t_1], given by

    L_o : x_0 ↦ y(·),    y(τ) = C(τ)Φ(τ, t_0) x_0,    τ ∈ [t_0, t_1]

must be trivial (i.e. contain only 0). Otherwise, if L_o(x_n)(t) = y(t) = 0 for all t ∈ [t_0, t_1], then for any α ∈ R,

    y(t) = C(t)Φ(t, t_0) x_0 = C(t)Φ(t, t_0)(x_0 + α x_n).

Recall from the finite rank theorem that if the observability map L_o is a matrix, then the null space of L_o satisfies N(L_o) = N(L_o^T L_o). For a continuous time system, L_o can be thought of as an infinitely long matrix; the matrix product operation in L_o^T L_o then becomes an integral, and is written L_o^* L_o, where L_o^* denotes the adjoint of L_o. The observability Grammian is given by:

    W_{o,[t_0,t_1]} = L_o^* L_o = ∫_{t_0}^{t_1} Φ(t, t_0)^T C^T(t) C(t) Φ(t, t_0) dt ∈ R^{n×n}

where L_o^* is the adjoint (think transpose) of the observability map.

3.9 Least squares estimation

Formulation and solution

Consider the continuous time varying system:

    ẋ(t) = A(t)x(t),    y(t) = C(t)x(t).

The observability map on [0, t_1] is given by:

    L_o : x(0) ↦ y(τ) = C(τ)Φ(τ, 0)x(0),    τ ∈ [0, t_1]    (3.9)

The so-called least squares estimate of x(0), given the output y(τ), τ ∈ [0, t_1], is:

    x̂(0 | t_1) = argmin_{x_0} ∫_0^{t_1} e^T(τ)e(τ) dτ

where e(τ) := y(τ) − C(τ)Φ(τ, 0)x_0 is the output prediction error.

The least squares estimation problem can be cast into a set of linear equations. For example, we can stack sampled outputs y(t), t ∈ {0, 0.1, 0.2, ..., t_1}, into a vector Y = [y(0); y(0.1); y(0.2); ...; y(t_1)] ∈ R^{pP} (P being the number of samples), and represent L_o as a matrix in R^{pP×n}. Then

    Y = L_o x_0 + E;    x̂_0 = argmin_x E^T E = (L_o^T L_o)^{−1} L_o^T Y

Here E = [E(0); ...; E(t_1)] ∈ R^{pP} is the modeling error or noise. The optimal estimate x̂_0 uses the minimum amount of noise E to explain the data Y. Notice that the resulting E satisfies E^T L_o = 0, i.e. E is normal to Range(L_o).

Using the observability map of the continuous time system (3.9) as L_o:

    x̂(0 | t_1) = W_o^{−1}(t_1) ∫_0^{t_1} Φ^T(τ, 0)C^T(τ)y(τ) dτ
    W_o(t_1) = L_o^* L_o = ∫_0^{t_1} Φ^T(τ, 0)C^T(τ)C(τ)Φ(τ, 0) dτ.

One can also find estimates of the state at time t_2 given data up to time t_1:

    x̂(t_2 | t_1) = Φ(t_2, 0) x̂(0 | t_1)

This gives rise to three classes of estimation problems:

    t_2 < t_1 : Filtering problem
    t_2 > t_1 : Prediction problem
    t_2 = t_1 : Observer problem

The estimate x̂(t | t) is especially useful for control systems, as it allows us to effectively do state feedback control.
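The batch least squares estimate is straightforward to compute from sampled data. A minimal sketch in base MATLAB (the oscillator, sampling grid, and noise level are hypothetical choices):

    % Batch least squares estimate of x(0) from sampled, noisy outputs
    A = [0 1; -1 0]; C = [1 0];        % harmonic oscillator, position measured
    x0 = [2; -1];                      % true initial state (unknown to the estimator)
    ts = 0:0.1:1;                      % sample times on [0, t1]
    Lo = zeros(numel(ts), 2); Y = zeros(numel(ts), 1);
    for i = 1:numel(ts)
        Lo(i, :) = C * expm(A*ts(i));  % stacked observability map
        Y(i) = Lo(i, :)*x0 + 0.01*randn;   % measurement with a little noise
    end
    x0_hat = (Lo'*Lo) \ (Lo'*Y)        % least squares estimate, close to x0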

Recursive Least Squares Observer

The least squares solution computed above is expensive to compute:

    x̂(t | t) = Φ(t, 0) W_o^{−1}(t) ∫_0^t Φ^T(τ, 0)C^T(τ)y(τ) dτ
    W_o(t) = L_o^* L_o = ∫_0^t Φ^T(τ, 0)C^T(τ)C(τ)Φ(τ, 0) dτ.

It is desirable if the estimate x̂(t | t) can be obtained recursively via a differential equation. Differentiating,

    d/dt x̂(t | t) = A(t) Φ(t, 0)W_o^{−1}(t) ∫_0^t Φ^T(τ, 0)C^T(τ)y(τ) dτ   [ = A(t) x̂(t | t) ]
                    + Φ(t, 0) (d/dt W_o^{−1}(t)) ∫_0^t Φ^T(τ, 0)C^T(τ)y(τ) dτ
                    + Φ(t, 0) W_o^{−1}(t) Φ^T(t, 0)C^T(t)y(t)

Now, by differentiating W_o(t)W_o^{−1}(t) = I, we have:

    d/dt W_o^{−1}(t) = −W_o^{−1}(t) (dW_o/dt) W_o^{−1}(t),    d/dt W_o(t) = Φ^T(t, 0)C^T(t)C(t)Φ(t, 0)

Hence

    d/dt x̂(t | t) = A(t)x̂(t | t) − Φ(t, 0)W_o^{−1}(t)Φ^T(t, 0) C^T(t)C(t)x̂(t | t) + Φ(t, 0)W_o^{−1}(t)Φ^T(t, 0) C^T(t)y(t)
                  = A(t)x̂(t | t) − P(t) C^T(t) [C(t)x̂(t | t) − y(t)]    (3.10)

where P(t) := Φ(t, 0)W_o^{−1}(t)Φ^T(t, 0) and ŷ(t | t) := C(t)x̂(t | t). P(t) satisfies:

    Ṗ(t) = A(t)P(t) + P(t)A^T(t) − P(t)C^T(t)C(t)P(t)

Typically, we would initialize P(0) to be a large non-singular matrix, e.g. P(0) = α I, where α is large. In the recursive least squares observer (3.10), the first term is an open-loop prediction term, and the second term is called an output injection, where an output prediction error is used to correct the estimate. We can think of the observer feedback gain as:

    L(t) = P(t)C^T(t)
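The observer (3.10) is easy to simulate by integrating the plant, the estimate, and P(t) together. A minimal sketch in base MATLAB using ode45 (the LTI pair, the zero input, and the initialization P(0) = αI are illustrative assumptions):

    % Recursive least squares observer (3.10) for a hypothetical LTI pair
    A = [0 1; -2 -3]; C = [1 0]; x0 = [1; 1]; alpha = 100;
    s0 = [x0; zeros(2,1); reshape(alpha*eye(2), 4, 1)];   % [x; xhat; P(:)]
    [t, s] = ode45(@(t,s) rlsobs(t, s, A, C), [0 5], s0);
    plot(t, s(:,1:2) - s(:,3:4))             % estimation error decays to zero

    function ds = rlsobs(~, s, A, C)
        x = s(1:2); xhat = s(3:4); P = reshape(s(5:8), 2, 2);
        y = C*x;                             % measurement from the true plant
        dx    = A*x;                         % plant, zero input
        dxhat = A*xhat - P*C'*(C*xhat - y);  % observer with gain L = P*C'
        dP    = A*P + P*A' - P*(C'*C)*P;     % covariance equation
        ds = [dx; dxhat; dP(:)];
    end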

Covariance Windup

Let us rewrite P(t) as follows:

    P(t) = Φ(t, 0) [∫_0^t Φ^T(τ, 0)C^T(τ)C(τ)Φ(τ, 0) dτ]^{−1} Φ^T(t, 0)
         = [∫_0^t Φ^T(τ, t)C^T(τ)C(τ)Φ(τ, t) dτ]^{−1} =: K^{−1}(t)

The matrix K(t) := P^{−1}(t) is known as the Information Matrix and satisfies:

    K̇ = C^T(t)C(t) − A^T(t)K(t) − K(t)A(t)

Notice that W_o(t) increases with time. Thus, W_o^{−1}(t) decreases with time, as does K^{−1}(t). This means that the observer gain L(t) will decrease. The observer will asymptotically rely more and more on open loop estimation:

    d/dt x̂(t | t) = A(t)x̂(t | t)

Forgetting factor

Forget old information (τ far away from the current time): choose λ > 0 and set

    K(t) = ∫_0^t Φ^T(τ, t)C^T(τ)C(τ)Φ(τ, t) e^{−λ(t−τ)} dτ;
    K̇ = C^T(t)C(t) − A^T(t)K(t) − K(t)A(t) − λK

Covariance reset

If K(t) becomes large, reset it to a small value: check the maximum singular value of K(t), σ̄(K(t)). If σ̄(K(t)) ≥ λ_max, set K(t) = εI, i.e. P(t) = I/ε.

3.10 Observability Tests

The observability question deals with whether one can determine what the initial state x(0) is, given the input u(t), t ∈ [0, T], and the output y(t), t ∈ [0, T]. For observability, the null space of the observability map,

    L_o : x_0 ↦ y(·) = C(·)Φ(·, 0)x_0,    τ ∈ [0, T],

must be trivial (i.e. contain only 0). Otherwise, if L_o(x_n)(t) = y(t) = 0 for all t ∈ [0, T], then for any α ∈ R,

    y(t) = C(t)Φ(t, 0)x_0 = C(t)Φ(t, 0)(x_0 + α x_n).

Hence, one cannot distinguish the various initial conditions from the output.

Just as for controllability, it is inconvenient to check the rank (and the null space) of L_o, which is tall and thin. We can instead check the rank and the null space of the observability grammian, given by W_{o,T} = L_o^* L_o, where L_o^* is the adjoint (think transpose) of the observability map.

Proposition: Null(L_o) = Null(L_o^* L_o).

Proof:

Let x ∈ Null(L_o), so L_o x = 0. Then, clearly, L_o^* L_o x = 0. Therefore, Null(L_o) ⊆ Null(L_o^* L_o).

Let x ∈ Null(L_o^* L_o). Then x^T L_o^* L_o x = 0, or (L_o x)^T (L_o x) = 0. This can only be true if L_o x = 0. Therefore, Null(L_o^* L_o) ⊆ Null(L_o).

Observability Tests for Continuous time LTI systems

Theorem: For the LTI continuous time system (3.5), the following are equivalent:

1. The system is observable over the interval [0, T].
2. The observability grammian

    W_{o,T} := ∫_0^T e^{A^*t} C^* C e^{At} dt

is full rank n.
3. The observability matrix

    O := [C; CA; CA^2; ...; CA^{n−1}]

has rank n. Notice that the observability matrix has dimension np × n, where p is the dimension of the output vector y.
4. For each s ∈ C, the matrix [sI − A; C] has rank n.

Proof: The proof is similar to the controllability case and will not be repeated here. Some differences: instead of considering a 1 × n vector v^T multiplying the controllability matrix and the grammians on the left hand side, we consider an n × 1 vector multiplying the observability matrix and the grammian on the right hand side; and instead of considering the range space of the controllability matrix, we consider the NULL space of the observability matrix. This null space is also A-invariant. Hence, if the observability matrix is not full rank, then using a basis for its null space as the last k basis vectors of R^n, the system can be represented as:

    ż = [Ã_11 0; Ã_21 Ã_22] z + [B̃_1; B̃_2] u
    y = (C̃_1  0) z

where C = (C̃_1  0) T^{−1} and A = T [Ã_11 0; Ã_21 Ã_22] T^{−1}, and the dimension of Ã_22 is non-zero (it is the dimension of the null space of the observability matrix).
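The observability tests mirror the controllability tests in code as well. A minimal sketch (obsv from the Control System Toolbox; the pair is hypothetical):

    % Observability tests on a hypothetical pair (A, C)
    A = [0 1; -2 -3]; C = [1 0]; n = 2;
    rank(obsv(A, C))                                % observability matrix test
    arrayfun(@(s) rank([s*eye(n) - A; C]), eig(A))  % PBH test at the eigenvalues of A

Both return n = 2, so the pair is observable.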

Remarks:

1. Again, observability of an LTI system does not depend on the time interval. So, theoretically speaking, observing the output and input for an arbitrarily short amount of time is sufficient to figure out x(0). In reality, when more data is available, one can do more averaging to eliminate the effects of noise (e.g. using the least squares, i.e. Kalman filter, approach).
2. The subspace of particular interest is the null space of the observability matrix. An initial state lying in this set generates an identically zero zero-input response. This subspace is called the unobservable subspace.
3. Using the basis of the unobservable subspace as part of the basis of R^n, the observability property can be easily seen.

Observability Tests for Discrete time LTI systems

The tests for observability of the discrete time system (3.7) are given similarly by the following theorem.

Theorem: For the discrete time system (3.7), the following are equivalent:

1. The system is observable over the interval [0, T], for some T ≥ n.
2. The observability grammian

    W_{o,T} := Σ_{k=0}^{T−1} (A^*)^k C^* C A^k

is full rank n.
3. The observability matrix

    O := [C; CA; CA^2; ...; CA^{n−1}]

has rank n. Notice that the observability matrix has dimension np × n, where p is the dimension of the output vector y.
4. For each z ∈ C, the matrix [zI − A; C] has rank n.

The observability matrix has the following interpretation: the zero-input response is given by:

    [y(0); y(1); y(2); ...; y(n−1)] = [C; CA; CA^2; ...; CA^{n−1}] x(0)

Thus, clearly, if the observability matrix is full rank, one can reconstruct x(0) from measurements of y(0), ..., y(n−1). One might think that by increasing the number of output measurements (e.g. adding y(n)) the system can become observable. However, because of the Cayley-Hamilton theorem, y(n) = CA^n x(0) can be expressed as Σ_{i=0}^{n−1} a_i CA^i x(0), consisting of rows already in the observability matrix.

3.11 PBH test and Eigen/Jordan Forms

Consider ẋ = Ax + Bu, y = Cx. Suppose first that A is semi-simple; then A = TDT^{−1} with D = diag(λ_1, ..., λ_n), where the λ_i might repeat. The PBH test in the transformed eigen-coordinates is to check, for each i, whether rank(λ_i I − D, T^{−1}B) = n. This shows that T^{−1}B must be non-zero in each row. Also, if λ_i = λ_j for some i ≠ j, then a single input system cannot be controllable.

Now suppose that A = TJT^{−1} and J is in Jordan form,

    J = [λ_1 1 0 0; 0 λ_1 0 0; 0 0 λ_1 0; 0 0 0 λ_2]

This Jordan form has 3 generalized eigenvector chains.

Controllability

From the homework, we saw that if the entry of T^{−1}B ∈ R^4 at the beginning of some chain is 0, i.e.

    T^{−1}B = [b_1; 0; b_3; b_4],  [b_1; b_2; 0; b_4],  or  [b_1; b_2; b_3; 0],

then the system is uncontrollable. Thus, T^{−1}B being nonzero at the beginning of each chain is a necessary condition for controllability. The PBH test shows that it is in fact a sufficient condition.

Observability

Similarly, if CT ∈ R^{1×4} has a zero entry at the end of any chain (these correspond to the coordinates of the eigenvectors), i.e.

    CT = (0  c_2  c_3  c_4),  (c_1  c_2  0  c_4),  or  (c_1  c_2  c_3  0),

then we saw from the homework that the system is not observable. Hence, CT having non-zero entries at the end of each chain is necessary for observability. The PBH test shows that it is also sufficient.
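A minimal sketch of the PBH test on a Jordan-form pair, in base MATLAB (the eigenvalues 2 and 3 are hypothetical; the chain structure matches the J above):

    % PBH test on a Jordan-form pair; J has chains {1,2}, {3}, {4}
    J = [2 1 0 0;
         0 2 0 0;
         0 0 2 0;
         0 0 0 3];
    b = [1; 1; 1; 1];
    rank([2*eye(4) - J, b])   % 3 < 4: two chains share the eigenvalue 2, so a
                              % single input can never be controllable here
    rank([3*eye(4) - J, b])   % 4: the PBH condition holds at the eigenvalue 3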

Note: If the system is either uncontrollable or unobservable, then the order of the transfer function will be less than the order of A. However, for a multi-input multi-output system, the fact that each transfer function in the transfer function matrix has order less than the order of A does NOT imply that the system is uncontrollable or unobservable. E.g.

    A = [−1 0; 0 −2],    B = I_2,    C = (1  1)

gives

    Y(s) = (1/(s+1)   1/(s+2)) U(s),

which is controllable and observable even though each entry is first order.

3.12 Kalman Decomposition

Controllable / uncontrollable decomposition

Suppose that the controllability matrix C ∈ R^{n×nm} of a system has rank n_1 < n. Then there exists an invertible transformation T ∈ R^{n×n} such that, with z = T^{−1}x,

    ż = [Ã_11 Ã_12; 0 Ã_22] z + [B̃_1; 0] u    (3.11)

where B = T [B̃_1; 0] and A = T [Ã_11 Ã_12; 0 Ã_22] T^{−1}, and the dimension of Ã_22 is n − n_1.

Observable / unobservable decomposition

Similarly, if the observability matrix is not full rank, then using a basis for its null space as the last k basis vectors of R^n, the system can be represented as:

    ż = [Ã_11 0; Ã_21 Ã_22] z + [B̃_1; B̃_2] u
    y = (C̃_1  0) z

where C = (C̃_1  0) T^{−1} and A = T [Ã_11 0; Ã_21 Ã_22] T^{−1}, and the dimension of Ã_22 is non-zero (it is the dimension of the null space of the observability matrix).
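MATLAB's staircase forms compute these decompositions directly. A minimal sketch (ctrbf, minreal and ss from the Control System Toolbox; the pair below is a hypothetical with a one-dimensional controllable subspace):

    % Controllable/uncontrollable decomposition via the staircase form
    A = [-1 1; 0 -2]; B = [1; 0]; C = [1 1];
    rank(ctrb(A, B))                            % 1 < 2
    [Abar, Bbar, Cbar, T, k] = ctrbf(A, B, C);  % staircase (controllable/uncontrollable) form
    sum(k)                                      % 1: dimension of the controllable subspace
    sysr = minreal(ss(A, B, C, 0));             % keeps only the controllable, observable part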

Kalman decomposition

The Kalman decomposition is just the combination of the observable/unobservable and the controllable/uncontrollable decompositions.

Theorem: There exists a coordinate transformation z = T^{−1}x ∈ R^n such that

    ż = [Ã_11 0 Ã_13 0; Ã_21 Ã_22 Ã_23 Ã_24; 0 0 Ã_33 0; 0 0 Ã_43 Ã_44] z + [B̃_1; B̃_2; 0; 0] u
    y = (C̃_1  0  C̃_2  0) z

Let T = (T_1  T_2  T_3  T_4) (with compatible block sizes). Then:

    T_2 (C ∩ Ō): T_2 = basis for Range(C) ∩ Null(O).
    T_1 (C ∩ O): (T_1 T_2) = basis for Range(C).
    T_4 (C̄ ∩ Ō): (T_2 T_4) = basis for Null(O).
    T_3 (C̄ ∩ O): (T_1 T_2 T_3 T_4) = basis for R^n.

Note: Only T_2 is uniquely defined.

Proof: This is just the combination of the obs/unobs and cont/uncont decompositions.

3.13 Stabilizability and Detectability

If the uncontrollable modes Ã_33, Ã_44 are stable (have eigenvalues in the left half plane), then the system is called stabilizable:

- One can use feedback to make the system stable; the uncontrollable modes decay, so they are not an issue.

If the unobservable modes Ã_22, Ã_44 are stable (have eigenvalues in the left half plane), then the system is called detectable:

- The states z_2, z_4 decay to 0. Eventually, they will have no effect on the output. The state can be reconstructed by ignoring the unobservable states (eventually).

The PBH tests can be modified to check whether the system is stabilizable or detectable; namely, check whether

    rank (sI − A  B) = n    and    rank [sI − A; C] = n

for all s with non-negative real parts. Specifically, one needs to check only those s which are the unstable eigenvalues of A.
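A minimal sketch of the modified PBH check in base MATLAB (the pair is a hypothetical with one unstable mode):

    % Stabilizability/detectability: PBH at the unstable eigenvalues only
    A = [1 0; 0 -2]; B = [1; 0]; C = [0 1]; n = 2;
    ev = eig(A); unstable = ev(real(ev) >= 0);         % here: s = 1
    arrayfun(@(s) rank([s*eye(n) - A, B]), unstable)   % 2 = n: stabilizable
    arrayfun(@(s) rank([s*eye(n) - A; C]), unstable)   % 1 < n: not detectable

Here the unstable mode x_1 can be moved by u but never appears in y, so the system is stabilizable but not detectable.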

3.14 Realization

Degree of controllability / observability

The formal definitions of controllability and observability are black and white. In reality, some states are very costly to control, or have little effect on the outputs. They are, therefore, effectively uncontrollable or unobservable.

The degree of controllability and observability can be evaluated by the sizes of W_r and W_o with an infinite time horizon:

    W_r(0, ∞) = lim_{T→∞} ∫_0^T e^{A(T−τ)} B B^T e^{A^T(T−τ)} dτ = lim_{T→∞} ∫_0^T e^{At} B B^T e^{A^T t} dt
    W_o(0, ∞) = lim_{T→∞} ∫_0^T e^{A^T τ} C^T C e^{Aτ} dτ

Notice that W_r satisfies the differential equation:

    d/dt W_r(0, t) = A W_r(0, t) + W_r(0, t) A^T + B B^T

Thus, if all the eigenvalues of A have strictly negative real parts (i.e. A is stable), then, as T → ∞,

    0 = A W_r + W_r A^T + B B^T

which is a set of linear equations in the entries of W_r, and can be solved effectively. The equation is called a Lyapunov equation. Matlab can solve for W_r using gram. Similarly, if A is stable, then as T → ∞, W_o satisfies the Lyapunov equation

    0 = A^T W_o + W_o A + C^T C

which can be solved by linear methods (or using Matlab via gram).

W_r is related to the minimum norm control. To reach x(0) = x_0 from x(−∞) = 0 in infinite time, u = L_r^* W_r^{−1} x_0 and

    min_{u(·)} J(u) = min_{u(·)} ∫ u^T(τ)u(τ) dτ = x_0^T W_r^{−1} x_0.

If a state x_0 is difficult to reach, then x_0^T W_r^{−1} x_0 is large, meaning that x_0 is practically uncontrollable.

Similarly, if the initial state is x(0) = x_0, then, with u(τ) ≡ 0,

    ∫_0^∞ y^T(τ)y(τ) dτ = x_0^T W_o x_0.

Thus, x_0^T W_o x_0 measures the effect of the initial condition on the output. If it is small, the effect of x_0 on the output y(·) is small; thus it is hard to observe, or practically unobservable.

Generally, one can look at the smallest eigenvalues of W_r and W_o; the span of the associated eigenvectors will be difficult to control, or difficult to observe.
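A minimal sketch of these computations (lyap, ss and gram from the Control System Toolbox; the stable triple is hypothetical):

    % Infinite-horizon grammians via Lyapunov equations
    A = [-1 0.5; 0 -3]; B = [1; 0.1]; C = [1 0];
    Wr = lyap(A, B*B');        % solves 0 = A*Wr + Wr*A' + B*B'
    Wo = lyap(A', C'*C);       % solves 0 = A'*Wo + Wo*A + C'*C
    sys = ss(A, B, C, 0);
    norm(Wr - gram(sys, 'c'))  % ~0: gram solves the same Lyapunov equation
    eig(Wr)                    % small eigenvalues of Wr: directions that are
                               % expensive to reach (practically uncontrollable)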

Relation to Transfer Function

For the system

    ẋ = Ax + Bu
    y = Cx + Du

the transfer function from U(s) to Y(s) is given by:

    G(s) = C(sI − A)^{−1}B + D = C adj(sI − A) B / det(sI − A) + D.

Note that the transfer function is not affected by a similarity transformation. From the Kalman decomposition, it is obvious that

    G(s) = C̃_1 (sI − Ã_11)^{−1} B̃_1 + D

where Ã_11, B̃_1, C̃_1 correspond to the controllable and observable component.

Poles: These are the values of s ∈ C s.t. G(s) becomes infinite. The poles of G(s) are clearly eigenvalues of Ã_11.

Zeros: For the SISO case, z ∈ C (complex numbers) is a zero if it makes G(z) = 0. Thus, zeros satisfy

    C adj(zI − A) B + D det(zI − A) = 0,

assuming the system is controllable and observable; otherwise, one needs to apply the Kalman decomposition first.

In the general MIMO case, a zero implies that there can be non-zero inputs U(s) that produce an output Y(s) that is identically zero. Therefore, there exist X(z), U(z) such that:

    (zI − A) X(z) = B U(z)
    0 = C X(z) + D U(z)

This will be true if and only if, at s = z, the rank of

    [sI − A  B; C  D]

is less than the normal rank of the matrix at other s. We shall return to multi-variable zeros when we discuss the effect of state feedback on the zeros of the system.
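Poles and zeros can be read off the state-space data numerically. A minimal sketch (ss, pole and tzero from the Control System Toolbox; the triple is a hypothetical controllable and observable example):

    % Poles and transmission zeros from (A, B, C, D)
    A = [-1 1; 0 -2]; B = [1; 1]; C = [1 0]; D = 0;
    sys = ss(A, B, C, D);
    pole(sys)     % -1, -2: the eigenvalues of A
    tzero(sys)    % -3: the s at which [sI-A B; C D] drops rank;
                  % here G(s) = (s+3)/((s+1)(s+2))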

3.15 Pole-zero cancellation

Consider a cascade system with input/output (u_1, y_2).

System 1, with input/output (u_1, y_1), has a zero at α and a pole at β:

    ẋ_1 = A_1 x_1 + B_1 u_1
    y_1 = C_1 x_1

System 2, with input/output (u_2, y_2), has a pole at α and a zero at β:

    ẋ_2 = A_2 x_2 + B_2 u_2
    y_2 = C_2 x_2

Cascade interconnection: u_2 = y_1. Then:

- The system pole at β is not observable from y_2.
- The system pole at α is not controllable from u_1.

Combined system: Consider x = [x_1; x_2]:

    A = [A_1 0; B_2 C_1 A_2];    B = [B_1; 0];    C = (0  C_2)

Use the PBH observability test with λ = β:

    [βI − A_1, 0; −B_2 C_1, βI − A_2; 0, C_2] [x_1; x_2] = ?

Let x_1 be the eigenvector of A_1 associated with β. Let x_2 = (βI − A_2)^{−1} B_2 C_1 x_1 (i.e. solve the second block row in the PBH test). Then, since

    Cx = C_2 x_2 = C_2 (βI − A_2)^{−1} B_2 · C_1 x_1

and β is a zero of system 2 (so C_2 (βI − A_2)^{−1} B_2 = 0), the PBH test shows that the mode β is not observable. The case where the α mode in system 2 is not controllable from u_1 is similar.

These examples show that, when cascading systems, it is important not to perform unstable pole/zero cancellations.
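The same conclusion can be checked numerically on a simple cascade. A minimal sketch (tf, ss, series, ctrb and minreal from the Control System Toolbox; the cancellation below is at the stable point s = −3, with hypothetical values):

    % Cascade in which system 1's zero cancels system 2's pole (alpha = -3)
    sys1 = ss(tf([1 3], [1 1]));   % (s+3)/(s+1): zero at -3, pole at -1
    sys2 = ss(tf(1, [1 3]));       % 1/(s+3): pole at -3
    cas  = series(sys1, sys2);     % u2 = y1
    rank(ctrb(cas.A, cas.B))       % 1 < 2: the cancelled mode at -3 is uncontrollable from u1
    minreal(tf(cas))               % the common factor cancels, leaving 1/(s+1)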

3.16 Balanced Realization

The objective is to reduce the number of states while minimizing the effect on the I/O response (transfer function). This is especially useful when the state equations are obtained from distributed parameter systems, P.D.E.s via finite element methods, etc., which generate many states.

If states are unobservable or uncontrollable, they can be removed. The issue at hand is what to do when some states are lightly controllable while others are lightly observable. Do we cancel them? In particular:

- Some states can be lightly controllable, but heavily observable.
- Some states can be lightly observable, but heavily controllable.

Both contribute significant components to the input/output (transfer function) response.

Idea: Exhibit states so that they are simultaneously lightly (or heavily) controllable and observable.

Consider a stable system:

    ẋ = Ax + Bu
    y = Cx + Du

If A is stable, the reachability and observability grammians can be computed by solving the Lyapunov equations:

    0 = A W_r + W_r A^T + B B^T
    0 = A^T W_o + W_o A + C^T C

If a state x is difficult to reach, then x^T W_r^{−1} x is large → practically uncontrollable. If x^T W_o x is small, the signal of x in the output y(·) is small; thus it is hard to observe → practically unobservable. Generally, one can look at the smallest eigenvalues of W_r and W_o; the span of the associated eigenvectors will be difficult to control, or difficult to observe.

Transformation of grammians (beware of which side T goes!): let z = Tx. The minimum norm control should be invariant to a coordinate transformation:

    x^T W_r^{−1} x = z^T W_{r,z}^{−1} z = x^T T^T W_{r,z}^{−1} T x

Therefore

    W_r^{−1} = T^T W_{r,z}^{−1} T    ⇒    W_{r,z} = T W_r T^T.

Similarly, the energy in a state transmitted to the output should be invariant:

    x^T W_o x = z^T W_{o,z} z    ⇒    W_{o,z} = T^{−T} W_o T^{−1}.

Theorem (Balanced realization): If the linear time invariant system is controllable and observable, then there exists an invertible transformation T ∈ R^{n×n} s.t. W_{o,z} = W_{r,z}.

The balanced realization algorithm:

1. Compute the controllability and observability grammians:

    >> Wr = gram(sys,'c');
    >> Wo = gram(sys,'o');

2. Write W_o = R^T R (one can use the SVD):

    >> [U,S,V] = svd(Wo);
    >> R = sqrtm(S)*V';

3. Diagonalize R W_r R^T (via SVD again): R W_r R^T = U_1 Σ^2 U_1^T, where U_1 U_1^T = I and Σ is diagonal:

    >> [U1,S1,V1] = svd(R*Wr*R');
    >> Sigma = sqrtm(S1);

4. Take T = Σ^{−1/2} U_1^T R:

    >> T = inv(sqrtm(Sigma))*U1'*R;

5. This gives W_{r,z} = W_{o,z} = Σ:

    >> Wrz = T*Wr*T'
    >> Woz = inv(T')*Wo*inv(T)

Proof: Direct substitution!

Once W_{r,z} and W_{o,z} are diagonal and equal, one can decide to eliminate the states that correspond to the smallest few entries. Do not remove unstable states from the realization, since these states need to be controlled (stabilized).

It is also possible to maintain some D.C. performance. Consider the system in the balanced realization form:

    d/dt [z_1; z_2] = [A_11 A_12; A_21 A_22] [z_1; z_2] + [B_1; B_2] u
    y = C_1 z_1 + C_2 z_2 + Du

where z_1 ∈ R^{n_1}, z_2 ∈ R^{n_2}, and n_1 + n_2 = n. Suppose that z_2 is associated with the simultaneously lightly controllable and lightly observable modes. The naive truncation approach is to remove z_2 to form the reduced system:

    ż_1 = A_11 z_1 + B_1 u
    y = C_1 z_1 + Du

However, this does not retain the D.C. (steady state) gain from u to y (which is important from a regulation application point of view). To maintain the steady state response, instead of eliminating z_2 completely, replace z_2 by its steady state value. In steady state:

    ż_2 = 0 = A_21 z_1 + A_22 z_2 + B_2 u    ⇒    z_2 = −A_22^{−1} (A_21 z_1 + B_2 u)

This is feasible if A_22 is invertible (0 is not an eigenvalue). A truncation approach that maintains the steady state gain is to replace z_2 by its steady state value in

    ż_1 = A_11 z_1 + A_12 z_2 + B_1 u
    y = C_1 z_1 + C_2 z_2 + Du

so that:

    ż_1 = (A_11 − A_12 A_22^{−1} A_21) z_1 + (B_1 − A_12 A_22^{−1} B_2) u =: A_r z_1 + B_r u
    y = (C_1 − C_2 A_22^{−1} A_21) z_1 + (D − C_2 A_22^{−1} B_2) u =: C_r z_1 + D_r u

The truncated system with state z_1 ∈ R^{n_1} is

    ż_1 = A_r z_1 + B_r u
    y = C_r z_1 + D_r u

which has the same steady state response as the original system.
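This DC-matched truncation is what MATLAB's modred performs on a balanced realization. A minimal sketch (balreal, modred, ss and dcgain from the Control System Toolbox; the two-state system is hypothetical):

    % Balanced truncation that matches the D.C. gain
    A = [-1 0.1; 0.1 -5]; B = [1; 0.1]; C = [1 0.1]; D = 0;
    sys = ss(A, B, C, D);
    [sysb, g] = balreal(sys);           % g: Hankel singular values (diagonal of Sigma)
    sysr = modred(sysb, 2, 'MatchDC');  % replace the weak state z2 by its steady state value
    [dcgain(sys), dcgain(sysr)]         % identical steady state gains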


More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

19 LINEAR QUADRATIC REGULATOR

19 LINEAR QUADRATIC REGULATOR 19 LINEAR QUADRATIC REGULATOR 19.1 Introduction The simple form of loopshaping in scalar systems does not extend directly to multivariable (MIMO) plants, which are characterized by transfer matrices instead

More information

Lecture 5: Singular Value Decomposition SVD (1)

Lecture 5: Singular Value Decomposition SVD (1) EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25-Sep-02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

Brief Introduction to Vectors and Matrices

Brief Introduction to Vectors and Matrices CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

Stability. Chapter 4. Topics : 1. Basic Concepts. 2. Algebraic Criteria for Linear Systems. 3. Lyapunov Theory with Applications to Linear Systems

Stability. Chapter 4. Topics : 1. Basic Concepts. 2. Algebraic Criteria for Linear Systems. 3. Lyapunov Theory with Applications to Linear Systems Chapter 4 Stability Topics : 1. Basic Concepts 2. Algebraic Criteria for Linear Systems 3. Lyapunov Theory with Applications to Linear Systems 4. Stability and Control Copyright c Claudiu C. Remsing, 2006.

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

3.1 State Space Models

3.1 State Space Models 31 State Space Models In this section we study state space models of continuous-time linear systems The corresponding results for discrete-time systems, obtained via duality with the continuous-time models,

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

Linear Algebra: Determinants, Inverses, Rank

Linear Algebra: Determinants, Inverses, Rank D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013 Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

More information

Formulations of Model Predictive Control. Dipartimento di Elettronica e Informazione

Formulations of Model Predictive Control. Dipartimento di Elettronica e Informazione Formulations of Model Predictive Control Riccardo Scattolini Riccardo Scattolini Dipartimento di Elettronica e Informazione Impulse and step response models 2 At the beginning of the 80, the early formulations

More information

18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2-106. Total: 175 points.

18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2-106. Total: 175 points. 806 Problem Set 4 Solution Due Wednesday, March 2009 at 4 pm in 2-06 Total: 75 points Problem : A is an m n matrix of rank r Suppose there are right-hand-sides b for which A x = b has no solution (a) What

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Review Jeopardy. Blue vs. Orange. Review Jeopardy

Review Jeopardy. Blue vs. Orange. Review Jeopardy Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

CS3220 Lecture Notes: QR factorization and orthogonal transformations

CS3220 Lecture Notes: QR factorization and orthogonal transformations CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,

More information

Understanding Poles and Zeros

Understanding Poles and Zeros MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING 2.14 Analysis and Design of Feedback Control Systems Understanding Poles and Zeros 1 System Poles and Zeros The transfer function

More information

1 Norms and Vector Spaces

1 Norms and Vector Spaces 008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)

More information

Controllability and Observability

Controllability and Observability Controllability and Observability Controllability and observability represent two major concepts of modern control system theory These concepts were introduced by R Kalman in 1960 They can be roughly defined

More information

3 Orthogonal Vectors and Matrices

3 Orthogonal Vectors and Matrices 3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first

More information

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8 Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

3. Reaction Diffusion Equations Consider the following ODE model for population growth

3. Reaction Diffusion Equations Consider the following ODE model for population growth 3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

More information

( ) which must be a vector

( ) which must be a vector MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

1 Review of Least Squares Solutions to Overdetermined Systems

1 Review of Least Squares Solutions to Overdetermined Systems cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares

More information

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Largest Fixed-Aspect, Axis-Aligned Rectangle

Largest Fixed-Aspect, Axis-Aligned Rectangle Largest Fixed-Aspect, Axis-Aligned Rectangle David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created: February 21, 2004 Last Modified: February

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

Finite dimensional C -algebras

Finite dimensional C -algebras Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n

More information

Examination paper for TMA4115 Matematikk 3

Examination paper for TMA4115 Matematikk 3 Department of Mathematical Sciences Examination paper for TMA45 Matematikk 3 Academic contact during examination: Antoine Julien a, Alexander Schmeding b, Gereon Quick c Phone: a 73 59 77 82, b 40 53 99

More information

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n. ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

More information

Lecture L3 - Vectors, Matrices and Coordinate Transformations

Lecture L3 - Vectors, Matrices and Coordinate Transformations S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

More information

SECTION 10-2 Mathematical Induction

SECTION 10-2 Mathematical Induction 73 0 Sequences and Series 6. Approximate e 0. using the first five terms of the series. Compare this approximation with your calculator evaluation of e 0.. 6. Approximate e 0.5 using the first five terms

More information

Epipolar Geometry. Readings: See Sections 10.1 and 15.6 of Forsyth and Ponce. Right Image. Left Image. e(p ) Epipolar Lines. e(q ) q R.

Epipolar Geometry. Readings: See Sections 10.1 and 15.6 of Forsyth and Ponce. Right Image. Left Image. e(p ) Epipolar Lines. e(q ) q R. Epipolar Geometry We consider two perspective images of a scene as taken from a stereo pair of cameras (or equivalently, assume the scene is rigid and imaged with a single camera from two different locations).

More information

Inner products on R n, and more

Inner products on R n, and more Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

More information

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

More information

1 Short Introduction to Time Series

1 Short Introduction to Time Series ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The

More information

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1 19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d. DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

More information