Linear Control Systems


Chapter 3

Linear Control Systems

Topics :

1. Controllability
2. Observability
3. Linear Feedback
4. Realization Theory

Copyright © Claudiu C. Remsing, 2006. All rights reserved.

Intuitively, a control system should be designed so that the input u(·) controls all the states, and also so that all the states can be observed from the output y(·). The concepts of (complete) controllability and observability formalize these ideas. Two further fundamental concepts of control theory, feedback and realization, are also introduced. Using (linear) feedback it is possible to exert a considerable influence on the behaviour of a (linear) control system.

3.1 Controllability

An essential first step in dealing with many control problems is to determine whether a desired objective can be achieved by manipulating the chosen control variables. If not, then either the objective will have to be modified or control will have to be applied in some different fashion. We shall discuss the general property of being able to transfer (or steer) a control system from any given state to any other by means of a suitable choice of control functions.

3.1.1 Definition. The linear control system Σ defined by

    ẋ = A(t)x + B(t)u(t)    (3.1)

where A(t) ∈ R^{m×m} and B(t) ∈ R^{m×l}, is said to be completely controllable (c.c.) if for any t₀, any initial state x(t₀) = x₀, and any given final state x_f, there exist a finite time t₁ > t₀ and a control u : [t₀, t₁] → R^l such that x(t₁) = x_f.

Note : (1) The qualifying term "completely" implies that the definition holds for all x₀ and x_f; several other types of controllability can be defined. (2) The control u(·) is assumed piecewise-continuous on the interval [t₀, t₁].

3.1.2 Example. Consider the control system described by

    ẋ₁ = a₁x₁ + a₂x₂ + u(t)
    ẋ₂ = a₃x₂.

Clearly, by inspection, this system is not completely controllable (c.c.), since u(·) has no influence on x₂, which is entirely determined by the second equation and x₂(t₀).

We have

    x_f = Φ(t₁, t₀) [ x₀ + ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)u(τ) dτ ]

or

    0 = Φ(t₁, t₀) [ x₀ − Φ(t₀, t₁)x_f + ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)u(τ) dτ ].

Since Φ(t₁, t₀) is nonsingular, it follows that if u(·) transfers x₀ to x_f, it also transfers x₀ − Φ(t₀, t₁)x_f to the origin in the same time interval. Since x₀ and x_f are arbitrary, it therefore follows that in the controllability definition the given final state can be taken to be the zero vector without loss of generality.

Note : For time-invariant control systems the initial time t₀ in the controllability definition can be set equal to zero.

The Kalman rank condition

For linear time-invariant control systems a general algebraic criterion (for complete controllability) can be derived.

3.1.3 Theorem. The linear time-invariant control system

    ẋ = Ax + Bu(t)    (3.2)

(or the pair (A, B)) is c.c. if and only if the (Kalman) controllability matrix

    C = C(A, B) := [B  AB  A²B  ...  A^{m-1}B] ∈ R^{m×ml}

has rank m.

Proof : (⇒) We suppose the system is c.c. and wish to prove that rank(C) = m. This is done by assuming rank(C) < m, which leads to a contradiction. In that case there exists a constant nonzero row m-vector q such that

    qB = 0,  qAB = 0,  ...,  qA^{m-1}B = 0.

In the expression

    x(t) = exp(tA) [ x₀ + ∫_0^t exp(−τA)Bu(τ) dτ ]

for the solution of (3.2) subject to x(0) = x₀, set t = t₁ and x(t₁) = 0 to obtain (since exp(t₁A) is nonsingular)

    x₀ = − ∫_0^{t₁} exp(−τA)Bu(τ) dτ.

Now, by the Cayley-Hamilton theorem, exp(−τA) can be expressed as some polynomial r(A) in A having degree at most m − 1, so we get

    x₀ = − ∫_0^{t₁} ( r₀ I_m + r₁ A + ... + r_{m-1} A^{m-1} ) B u(τ) dτ.

Multiplying this relation on the left by q gives q x₀ = 0. Since the system is c.c., this must hold for any vector x₀, which implies q = 0, a contradiction.

(⇐) We assume rank(C) = m, and wish to show that for any x₀ there is a control u : [0, t₁] → R^l which, when substituted into

    x(t) = exp(tA) [ x₀ + ∫_0^t exp(−τA)Bu(τ) dτ ]

produces x(t₁) = 0. Consider the symmetric matrix

    W_c := ∫_0^{t₁} exp(−τA) B Bᵀ exp(−τAᵀ) dτ.    (3.3)

One can show that W_c is nonsingular. Indeed, consider the quadratic form associated to W_c :

    αᵀ W_c α = ∫_0^{t₁} ψ(τ)ψᵀ(τ) dτ = ∫_0^{t₁} ‖ψ(τ)‖²_e dτ

where α ∈ R^{m×1} is an arbitrary column vector and ψ(τ) := αᵀ exp(−τA)B. It is clear that W_c is positive semi-definite, and will be singular only if there

exists an ᾱ ≠ 0 such that ᾱᵀ W_c ᾱ = 0. However, in this case, it follows (using the properties of the norm) that ψ̄(τ) ≡ 0 for 0 ≤ τ ≤ t₁. Hence we have

    ᾱᵀ ( I_m − τA + (τ²/2!)A² − (τ³/3!)A³ + ... ) B = 0,   0 ≤ τ ≤ t₁

from which it follows that

    ᾱᵀB = 0,  ᾱᵀAB = 0,  ᾱᵀA²B = 0,  ...

This implies that ᾱᵀC = 0. Since by assumption C has rank m, such a nonzero vector ᾱ cannot exist, so W_c is nonsingular.

Now, if we choose as the control vector

    u(t) = −Bᵀ exp(−tAᵀ) W_c⁻¹ x₀,   t ∈ [0, t₁]

then substitution into the solution formula gives

    x(t₁) = exp(t₁A) [ x₀ − ∫_0^{t₁} exp(−τA) B Bᵀ exp(−τAᵀ) dτ · W_c⁻¹ x₀ ]
          = exp(t₁A) [ x₀ − W_c W_c⁻¹ x₀ ] = 0

as required.
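The sufficiency half of this proof is constructive and can be checked numerically. Below is a minimal sketch in Python/NumPy (the pair (A, b), the horizon t₁ and the initial state are illustrative choices, not from the text): it approximates W_c by a midpoint rule, forms the steering control u(t) = −bᵀ e^{−tAᵀ} W_c⁻¹ x₀, and integrates the dynamics to confirm x(t₁) ≈ 0.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative data (not from the text): a controllable pair (A, b).
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
b = np.array([[0.0],
              [1.0]])
x0 = np.array([1.0, -1.0])
t1, N = 2.0, 400
h = t1 / N

# W_c = \int_0^{t1} e^{-tA} b b^T e^{-tA^T} dt, midpoint rule.
mid = (np.arange(N) + 0.5) * h
Wc = h * sum(expm(-t * A) @ b @ b.T @ expm(-t * A.T) for t in mid)

def u_star(t):
    # Steering control from the proof: u(t) = -b^T e^{-tA^T} W_c^{-1} x0.
    return float(-b.T @ expm(-t * A.T) @ np.linalg.solve(Wc, x0))

def f(t, x):
    return A @ x + (b * u_star(t)).ravel()

# Integrate xdot = A x + b u(t) by classical RK4.
x = x0.copy()
for i in range(N):
    t = i * h
    k1 = f(t, x); k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2); k4 = f(t + h, x + h * k3)
    x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print(x)   # approximately [0, 0]
```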

3.1.4 Corollary. If rank(B) = r, then the condition in Theorem 3.1.3 reduces to

    rank [B  AB  ...  A^{m-r}B] = m.

Proof : Define the matrix

    C_k := [B  AB  ...  A^kB],   k = 0, 1, 2, ...

If rank(C_j) = rank(C_{j+1}), then all the columns of A^{j+1}B must be linearly dependent on those of C_j. This then implies that all the columns of A^{j+2}B, A^{j+3}B, ... must also be linearly dependent on those of C_j, so that

    rank(C_j) = rank(C_{j+1}) = rank(C_{j+2}) = ...

Hence the rank of C_k increases by at least one each time the index k is increased by one, until the maximum value of rank(C_k) is attained at k = j. Since rank(C₀) = rank(B) = r and rank(C_k) ≤ m, it follows that r + j ≤ m, giving j ≤ m − r, as required.

3.1.5 Example. Consider the linear control system Σ described by

    ẋ = [0 1; 1 1] x + [1; 1] u(t).

The (Kalman) controllability matrix is

    C = C_Σ = [1 1; 1 2]

which has rank 2, so the control system Σ is c.c.

Note : When l = 1, B reduces to a column vector b and Theorem 3.1.3 can be restated as : A linear control system in the form ẋ = Ax + bu(t) can be transformed into the canonical form ẇ = Cw + du(t) if and only if it is c.c.

Controllability criterion

We now give a general criterion for (complete) controllability of control systems (time-invariant or time-varying), as well as an explicit expression for a control vector which carries out a required transfer of states.
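As a quick numerical companion to the rank condition, the sketch below builds the controllability matrix and checks its rank with NumPy (the matrices are those reconstructed in Example 3.1.5 above):

```python
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^{m-1}B]."""
    m = A.shape[0]
    blocks = [B]
    for _ in range(m - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
b = np.array([[1.0],
              [1.0]])

C = ctrb(A, b)
print(C)                              # [[1. 1.], [1. 2.]]
print(np.linalg.matrix_rank(C))       # 2  -> completely controllable
```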

3.1.6 Theorem. The linear control system Σ defined by

    ẋ = A(t)x + B(t)u(t)

is c.c. if and only if the symmetric matrix, called the controllability Gramian,

    W_c(t₀, t₁) := ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)Bᵀ(τ)Φᵀ(t₀, τ) dτ ∈ R^{m×m}    (3.4)

is nonsingular. In this case the control

    u*(t) = −Bᵀ(t)Φᵀ(t₀, t) W_c(t₀, t₁)⁻¹ [ x₀ − Φ(t₀, t₁)x_f ],   t ∈ [t₀, t₁]

transfers x(t₀) = x₀ to x(t₁) = x_f.

Proof : (⇐) Sufficiency. If W_c(t₀, t₁) is assumed nonsingular, then the control u*(·) defined above exists. Substitution of this expression into the solution

    x(t) = Φ(t, t₀) [ x₀ + ∫_{t₀}^{t} Φ(t₀, τ)B(τ)u(τ) dτ ]

of ẋ = A(t)x + B(t)u(t) gives

    x(t₁) = Φ(t₁, t₀) [ x₀ − ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)Bᵀ(τ)Φᵀ(t₀, τ) dτ · W_c(t₀, t₁)⁻¹ ( x₀ − Φ(t₀, t₁)x_f ) ]
          = Φ(t₁, t₀) [ x₀ − W_c(t₀, t₁) W_c(t₀, t₁)⁻¹ ( x₀ − Φ(t₀, t₁)x_f ) ]
          = Φ(t₁, t₀) [ x₀ − x₀ + Φ(t₀, t₁)x_f ]
          = Φ(t₁, t₀) Φ(t₀, t₁) x_f = x_f.

(⇒) Necessity. We need to show that if Σ is c.c., then W_c(t₀, t₁) is nonsingular. First, notice that if α ∈ R^{m×1} is an arbitrary column vector, then from

(3.4), since W = W_c(t₀, t₁) is symmetric, we can construct the quadratic form

    αᵀWα = ∫_{t₀}^{t₁} θᵀ(τ, t₀)θ(τ, t₀) dτ = ∫_{t₀}^{t₁} ‖θ(τ, t₀)‖²_e dτ

where θ(τ, t₀) := Bᵀ(τ)Φᵀ(t₀, τ)α, so that W_c(t₀, t₁) is positive semi-definite. Suppose that there exists some ᾱ ≠ 0 such that ᾱᵀWᾱ = 0. Then we get (writing θ̄ for θ when α = ᾱ)

    ∫_{t₀}^{t₁} ‖θ̄(τ, t₀)‖²_e dτ = 0

which in turn implies (using the properties of the norm) that

    θ̄(τ, t₀) ≡ 0,   t₀ ≤ τ ≤ t₁.

However, by assumption Σ is c.c., so there exists a control v(·) making x(t₁) = 0 if x(t₀) = ᾱ. Hence

    ᾱ = − ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)v(τ) dτ.

Therefore

    ‖ᾱ‖²_e = ᾱᵀᾱ = − ∫_{t₀}^{t₁} vᵀ(τ)Bᵀ(τ)Φᵀ(t₀, τ)ᾱ dτ = − ∫_{t₀}^{t₁} vᵀ(τ)θ̄(τ, t₀) dτ = 0

which contradicts the assumption that ᾱ ≠ 0. Hence W_c(t₀, t₁) is positive definite and is therefore nonsingular.

3.1.7 Example. The control system is

    ẋ = [1 −1; 4 −4] x + [1; 1] u(t).

Observe that λ = 0 is an eigenvalue of A, and b = (1, 1)ᵀ is a corresponding eigenvector, so the controllability rank condition does not hold. However, A is

similar to its companion matrix. Using the matrix

    T = [1 0; 1 −1]

computed before (see Example 2.5.1) and w = Tx, we have the system

    ẇ = [0 1; 0 −3] w + [1; 0] u.

Differentiation of the w₁ equation and substitution produces a second-order ODE for w₁ :

    ẅ₁ + 3ẇ₁ = 3u + u̇.

One integration produces a first-order ODE

    ẇ₁ + 3w₁ = 3 ∫_0^t u(τ) dτ + u

which shows that the action of arbitrary inputs u(·) affects the dynamics in only a one-dimensional space. The original x equations might lead us to think that u(·) can fully affect x₁ and x₂, but notice that the w₂ equation says that u(·) has no effect on the dynamics of the difference x₁ − x₂ = w₂. Only when the initial condition for w involves w₂(0) = 0 can u(·) be used to control a trajectory. That is, the inputs completely control only the states that lie in the subspace

    span {b, Ab} = span {b} = span { (1, 1)ᵀ }.

Solutions starting with x₁(0) = x₂(0) satisfy

    x₁(t) = x₂(t) = ∫_0^t u(τ) dτ + x₁(0).

One can steer along the line x₁ = x₂ from any initial point to any final point x₁(t₁) = x₂(t₁) at any finite time t₁ > 0 by appropriate choice of u(·). On the other hand, if the initial condition lies off the line x₁ = x₂, then the difference w₂ = x₁ − x₂ decays exponentially, so there is no chance of steering to an arbitrarily given final state in finite time.
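The geometry of this example is easy to check numerically. The sketch below, using the matrices as reconstructed above, verifies that [b, Ab] has rank 1 and that a constant input moves the state while the off-line component w₂ = x₁ − x₂ decays like e^{−3t}:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, -1.0],
              [4.0, -4.0]])
b = np.array([1.0, 1.0])

# Rank of [b, Ab]: since A b = 0, the controllable subspace is span{b}.
C = np.column_stack([b, A @ b])
print(np.linalg.matrix_rank(C))           # 1

# Simulate with a constant input u = 1 from an initial state off the line.
sol = solve_ivp(lambda t, x: A @ x + b * 1.0, (0.0, 3.0), [2.0, 0.0])
x1, x2 = sol.y
w2 = x1 - x2                               # decays like e^{-3t}
print(w2[0], w2[-1])                       # 2.0 ... close to 0
```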

Note : The control function u*(·) which transfers the system from x₀ = x(t₀) to x_f = x(t₁) requires calculation of the state transition matrix Φ(·, ·) and the controllability Gramian W_c(t₀, t₁). However, this is not too difficult for linear time-invariant control systems, although rather tedious. Of course, there will in general be many other suitable control vectors which achieve the same result.

3.1.8 Proposition. If u(·) is any other control taking x₀ = x(t₀) to x_f = x(t₁), then

    ∫_{t₀}^{t₁} ‖u(τ)‖²_e dτ > ∫_{t₀}^{t₁} ‖u*(τ)‖²_e dτ.

Proof : Since both u and u* satisfy

    x_f = Φ(t₁, t₀) [ x₀ + ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)u(τ) dτ ]

we obtain after subtraction

    0 = ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ) [ u(τ) − u*(τ) ] dτ.

Multiplication of this equation on the left by [ x₀ − Φ(t₀, t₁)x_f ]ᵀ [ W_c(t₀, t₁)⁻¹ ]ᵀ gives

    ∫_{t₀}^{t₁} (u*)ᵀ(τ) [ u*(τ) − u(τ) ] dτ = 0

and thus

    ∫_{t₀}^{t₁} ‖u*(τ)‖²_e dτ = ∫_{t₀}^{t₁} (u*)ᵀ(τ)u(τ) dτ.

Therefore

    0 < ∫_{t₀}^{t₁} ‖u*(τ) − u(τ)‖²_e dτ
      = ∫_{t₀}^{t₁} ( ‖u(τ)‖²_e + ‖u*(τ)‖²_e − 2(u*)ᵀ(τ)u(τ) ) dτ
      = ∫_{t₀}^{t₁} ( ‖u(τ)‖²_e − ‖u*(τ)‖²_e ) dτ

and so

    ∫_{t₀}^{t₁} ‖u(τ)‖²_e dτ = ∫_{t₀}^{t₁} ( ‖u*(τ)‖²_e + ‖u*(τ) − u(τ)‖²_e ) dτ > ∫_{t₀}^{t₁} ‖u*(τ)‖²_e dτ

as required.

Note : This result can be interpreted as showing that the control

    u*(t) = −Bᵀ(t)Φᵀ(t₀, t) W_c(t₀, t₁)⁻¹ [ x₀ − Φ(t₀, t₁)x_f ]

is optimal, in the sense that it minimizes the integral

    ∫_{t₀}^{t₁} ‖u(τ)‖²_e dτ = ∫_{t₀}^{t₁} ( u₁²(τ) + u₂²(τ) + ... + u_l²(τ) ) dτ

over the set of all (admissible) controls which transfer x₀ = x(t₀) to x_f = x(t₁); this integral can be thought of as a measure of the control energy involved.

Algebraic equivalence and decomposition of control systems

We now indicate a further aspect of controllability. Let P(·) be a matrix-valued mapping which is continuous and such that P(t) is nonsingular for all t. (The continuous mapping P : [t₀, ∞) → GL(m, R) is a path in the general linear group GL(m, R).) Then the system Σ̃ obtained from Σ by the transformation x̃ = P(t)x is said to be algebraically equivalent to Σ.

3.1.9 Proposition. If Φ(t, t₀) is the state transition matrix for Σ, then

    Φ̃(t, t₀) = P(t)Φ(t, t₀)P⁻¹(t₀)

is the state transition matrix for Σ̃.

Proof : We recall that Φ(t, t₀) is the unique matrix-valued mapping satisfying

    Φ̇(t, t₀) = A(t)Φ(t, t₀),   Φ(t₀, t₀) = I_m
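The minimum-energy property can be illustrated after discretization. With piecewise-constant inputs, steering becomes a linear system G u = target, whose minimum-norm solution (the pseudoinverse solution) corresponds to u*; adding any null-space component still steers but strictly increases the energy. A sketch with illustrative matrices (not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative controllable pair (not from the text).
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([1.0, -1.0])
t1, N = 2.0, 50
h = t1 / N

# Piecewise-constant inputs: x(t1) = e^{t1 A} x0 + G @ u, where column i of G
# approximates \int_{ih}^{(i+1)h} e^{(t1-s)A} B ds (midpoint rule).
G = np.hstack([h * expm((t1 - (i + 0.5) * h) * A) @ B for i in range(N)])
target = -expm(t1 * A) @ x0

u_min = np.linalg.pinv(G) @ target            # minimum-norm steering sequence

# Any other solution differs by a null-space component of G.
_, _, Vt = np.linalg.svd(G)
v = Vt[-1]                                     # a direction with G @ v ~ 0
u_other = u_min + 5.0 * v

energy = lambda u: h * float(u @ u)
print(energy(u_min), energy(u_other))          # the second is strictly larger
print(np.linalg.norm(G @ u_other - target))    # still (approximately) steers
```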

and is nonsingular. Clearly, Φ̃(t₀, t₀) = I_m. Differentiation of x̃ = P(t)x gives

    x̃̇ = Ṗx + Pẋ = (Ṗ + PA)x + PBu = (Ṗ + PA)P⁻¹ x̃ + PBu.

We need to show that Φ̃ is the state transition matrix for

    x̃̇ = (Ṗ + PA)P⁻¹ x̃ + PBu.

We have

    Φ̃̇(t, t₀) = d/dt [ P(t)Φ(t, t₀)P⁻¹(t₀) ]
             = Ṗ(t)Φ(t, t₀)P⁻¹(t₀) + P(t)Φ̇(t, t₀)P⁻¹(t₀)
             = [ Ṗ(t) + P(t)A(t) ] Φ(t, t₀)P⁻¹(t₀)
             = [ Ṗ(t) + P(t)A(t) ] P⁻¹(t) · P(t)Φ(t, t₀)P⁻¹(t₀)
             = [ Ṗ(t) + P(t)A(t) ] P⁻¹(t) Φ̃(t, t₀)

so Φ̃ is indeed the state transition matrix for Σ̃.

3.1.10 Proposition. If Σ is c.c., then so is Σ̃.

Proof : The system matrices for Σ̃ are

    Ã = (Ṗ + PA)P⁻¹  and  B̃ = PB

so the controllability Gramian for Σ̃ is

    W̃ = ∫_{t₀}^{t₁} Φ̃(t₀, τ)B̃(τ)B̃ᵀ(τ)Φ̃ᵀ(t₀, τ) dτ
      = ∫_{t₀}^{t₁} P(t₀)Φ(t₀, τ)P⁻¹(τ) P(τ)B(τ)Bᵀ(τ)Pᵀ(τ) (P⁻¹(τ))ᵀ Φᵀ(t₀, τ)Pᵀ(t₀) dτ
      = P(t₀) W_c(t₀, t₁) Pᵀ(t₀).

Thus the matrix W̃ = P(t₀)W_c(t₀, t₁)Pᵀ(t₀) is nonsingular, since the matrices W_c(t₀, t₁) and P(t₀) each have rank m.

The following important result on system decomposition then holds :

3.1.11 Theorem. When the linear control system Σ is time-invariant and its controllability matrix C_Σ has rank m₁ < m, there exists a control system, algebraically equivalent to Σ, having the form

    [ẋ⁽¹⁾; ẋ⁽²⁾] = [A₁ A₂; 0 A₃] [x⁽¹⁾; x⁽²⁾] + [B₁; 0] u(t)
    y = [C₁ C₂] x

where x⁽¹⁾ and x⁽²⁾ have orders m₁ and m − m₁, respectively, and (A₁, B₁) is c.c.

We shall postpone the proof of this until a later section (see the proof of Theorem 3.4.5), where an explicit formula for the transformation matrix will also be given.

Note : It is clear that the vector x⁽²⁾ is completely unaffected by the control u(·). Thus the state space has been divided into two parts, one being c.c. and the other uncontrollable.
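For time-invariant systems this decomposition can be computed from an orthonormal basis of the column space of C_Σ, since that subspace is A-invariant. A minimal sketch, with an illustrative uncontrollable pair (not from the text):

```python
import numpy as np

def ctrb(A, B):
    m = A.shape[0]
    blocks = [B]
    for _ in range(m - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative pair whose controllable subspace has dimension 2 < 3.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])
B = np.array([[0.0], [1.0], [0.0]])

C = ctrb(A, B)
m1 = np.linalg.matrix_rank(C)           # here m1 = 2

# Orthonormal basis of range(C), completed to a basis of R^m by the SVD.
U, _, _ = np.linalg.svd(C)
At = U.T @ A @ U                        # change of basis x~ = U^T x
Bt = U.T @ B
print(m1)
print(np.round(At, 10))                 # lower-left (m-m1) x m1 block is 0
print(np.round(Bt, 10))                 # last m-m1 entries are 0
```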

3.2 Observability

Closely linked to the idea of controllability is that of observability, which in general terms means that it is possible to determine the state of a system by measuring only the output.

3.2.1 Definition. The linear control system (with outputs) Σ described by

    ẋ = A(t)x + B(t)u(t)    (3.5)
    y = C(t)x

is said to be completely observable (c.o.) if for any t₀ and any initial state x(t₀) = x₀, there exists a finite time t₁ > t₀ such that knowledge of u(·) and y(·) for t ∈ [t₀, t₁] suffices to determine x₀ uniquely.

Note : There is in fact no loss of generality in assuming that u(·) is identically zero throughout the interval. Indeed, for any input u : [t₀, t₁] → R^l and initial state x₀, we have

    y(t) − ∫_{t₀}^{t} C(t)Φ(t, τ)B(τ)u(τ) dτ = C(t)Φ(t, t₀)x₀.

Defining

    ŷ(t) := y(t) − ∫_{t₀}^{t} C(t)Φ(t, τ)B(τ)u(τ) dτ

we get

    ŷ(t) = C(t)Φ(t, t₀)x₀.

Thus a linear control system is c.o. if and only if knowledge of the output ŷ(·) with zero input on the interval [t₀, t₁] allows the initial state x₀ to be determined.

3.2.2 Example. Consider the linear control system described by

    ẋ₁ = a₁x₁ + b₁u(t)
    ẋ₂ = a₂x₂ + b₂u(t)
    y = x₁.

The first equation shows that x₁(·) (= y(·)) is completely determined by u(·) and x₁(t₀). Thus it is impossible to determine x₂(t₀) by measuring the output, so the system is not completely observable (c.o.).

3.2.3 Theorem. The linear control system Σ is c.o. if and only if the symmetric matrix, called the observability Gramian,

    W_o(t₀, t₁) := ∫_{t₀}^{t₁} Φᵀ(τ, t₀)Cᵀ(τ)C(τ)Φ(τ, t₀) dτ ∈ R^{m×m}    (3.6)

is nonsingular.

Proof : (⇐) Sufficiency. Assuming u(t) ≡ 0 for t ∈ [t₀, t₁], we have

    y(t) = C(t)Φ(t, t₀)x₀.

Multiplying this relation on the left by Φᵀ(t, t₀)Cᵀ(t) and integrating produces

    ∫_{t₀}^{t₁} Φᵀ(τ, t₀)Cᵀ(τ)y(τ) dτ = W_o(t₀, t₁) x₀

so that if W_o(t₀, t₁) is nonsingular, the initial state is

    x₀ = W_o(t₀, t₁)⁻¹ ∫_{t₀}^{t₁} Φᵀ(τ, t₀)Cᵀ(τ)y(τ) dτ

and so Σ is c.o.

(⇒) Necessity. We now assume that Σ is c.o. and prove that W = W_o(t₀, t₁) is nonsingular. First, if α ∈ R^{m×1} is an arbitrary column vector, then

    αᵀWα = ∫_{t₀}^{t₁} ( C(τ)Φ(τ, t₀)α )ᵀ C(τ)Φ(τ, t₀)α dτ ≥ 0

so W_o(t₀, t₁) is positive semi-definite. Next, suppose there exists an ᾱ ≠ 0 such that ᾱᵀWᾱ = 0. It then follows that

    C(τ)Φ(τ, t₀)ᾱ ≡ 0,   t₀ ≤ τ ≤ t₁.

This implies that when x₀ = ᾱ the output is identically zero throughout the time interval, so that x₀ cannot be determined in this case from knowledge of y(·). This contradicts the assumption that Σ is c.o.; hence W_o(t₀, t₁) is positive definite, and therefore nonsingular.

Note : Since the observability of Σ is independent of B, we may refer to the observability of the pair (A, C).

Duality

3.2.4 Theorem. The linear control system (with outputs) Σ defined by

    ẋ = A(t)x + B(t)u(t)
    y = C(t)x
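For an LTI pair (A, C) with zero input, the reconstruction formula above reads x₀ = W_o⁻¹ ∫_0^{t₁} e^{τAᵀ} Cᵀ y(τ) dτ. A numerical sketch with illustrative matrices (not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative observable pair (not from the text).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
x0_true = np.array([0.7, -1.3])
t1, N = 2.0, 2000
h = t1 / N
mid = (np.arange(N) + 0.5) * h

Phis = [expm(t * A) for t in mid]
ys = [C @ Ph @ x0_true for Ph in Phis]     # samples of the zero-input output

# W_o = \int_0^{t1} e^{tA^T} C^T C e^{tA} dt and the matching output integral.
Wo = h * sum(Ph.T @ C.T @ C @ Ph for Ph in Phis)
rhs = h * sum(Ph.T @ C.T @ y for Ph, y in zip(Phis, ys))

print(np.linalg.solve(Wo, rhs))            # recovers x0_true
```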

is c.c. if and only if the dual system Σ* defined by

    ẋ = −Aᵀ(t)x + Cᵀ(t)u(t)
    y = Bᵀ(t)x

is c.o.; and conversely.

Proof : We can see that if Φ(t, t₀) is the state transition matrix for the system Σ, then Φᵀ(t₀, t) is the state transition matrix for the dual system Σ*. Indeed, differentiate I_m = Φ(t, t₀)Φ(t, t₀)⁻¹ = Φ(t, t₀)Φ(t₀, t) to get

    0 = d/dt I_m = Φ̇(t, t₀)Φ(t₀, t) + Φ(t, t₀)Φ̇(t₀, t)
      = A(t)Φ(t, t₀)Φ(t₀, t) + Φ(t, t₀)Φ̇(t₀, t)
      = A(t) + Φ(t, t₀)Φ̇(t₀, t).

This implies Φ̇(t₀, t) = −Φ(t₀, t)A(t), or

    d/dt Φᵀ(t₀, t) = −Aᵀ(t)Φᵀ(t₀, t).

Furthermore, the controllability Gramian

    W_c^Σ(t₀, t₁) = ∫_{t₀}^{t₁} Φ(t₀, τ)B(τ)Bᵀ(τ)Φᵀ(t₀, τ) dτ

(associated with Σ) is identical to the observability Gramian W_o^{Σ*}(t₀, t₁) (associated with Σ*). Conversely, the observability Gramian

    W_o^Σ(t₀, t₁) = ∫_{t₀}^{t₁} Φᵀ(τ, t₀)Cᵀ(τ)C(τ)Φ(τ, t₀) dτ

(associated with Σ) is identical to the controllability Gramian W_c^{Σ*}(t₀, t₁) (associated with Σ*).
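In the time-invariant case duality reduces to a transposition: the controllability matrix of (A, B) and the observability matrix of (−Aᵀ, Bᵀ) have the same rank (the sign only flips alternate blocks, so it does not affect the rank). A quick check:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0],
              [1.0]])

C_AB = np.hstack([B, A @ B])                 # controllability matrix of (A, B)
O_dual = np.vstack([B.T, B.T @ (-A.T)])      # observability matrix of (-A^T, B^T)
print(np.linalg.matrix_rank(C_AB),
      np.linalg.matrix_rank(O_dual))         # equal ranks
```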

Note : This duality theorem is extremely useful, since it enables us to deduce immediately from a controllability result the corresponding one on observability (and conversely). For example, to obtain the observability criterion for the time-invariant case, we simply apply Theorem 3.1.3 to Σ* to obtain the following result.

3.2.5 Theorem. The linear time-invariant control system

    ẋ = Ax + Bu(t)    (3.7)
    y = Cx

(or the pair (A, C)) is c.o. if and only if the (Kalman) observability matrix

    O = O(A, C) := [C; CA; CA²; ...; CA^{m-1}] ∈ R^{mn×m}

has rank m.

3.2.6 Example. Consider the linear control system Σ described by

    ẋ = [0 1; 1 1] x + [1; 1] u(t)
    y = x₁.

The (Kalman) observability matrix is

    O = O_Σ = [1 0; 0 1]

which has rank 2. Thus the control system Σ is c.o.
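The same kind of numerical check applies, here with the pair of Example 3.2.6 as reconstructed above:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
c = np.array([[1.0, 0.0]])         # y = x_1

O = np.vstack([c, c @ A])           # [C; CA] for m = 2
print(O)                            # [[1. 0.], [0. 1.]]
print(np.linalg.matrix_rank(O))     # 2  -> completely observable
```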

In the single-output case (i.e. n = 1), if u(·) ≡ 0 and y(·) is known in the form

    y(t) = γ₁e^{λ₁t} + γ₂e^{λ₂t} + ... + γ_m e^{λ_m t}

(assuming that all the eigenvalues λᵢ of A are distinct), then x₀ can be obtained more easily than by using

    x₀ = W_o(t₀, t₁)⁻¹ ∫_{t₀}^{t₁} Φᵀ(τ, t₀)Cᵀ(τ)y(τ) dτ.

For suppose that t₀ = 0 and consider the solution of ẋ = Ax in the spectral form, namely

    x(t) = (v₁x(0))e^{λ₁t}w₁ + (v₂x(0))e^{λ₂t}w₂ + ... + (v_m x(0))e^{λ_m t}w_m

where wᵢ and vᵢ are the right (column) and left (row) eigenvectors associated with λᵢ. We have

    y(t) = (v₁x(0))(cw₁)e^{λ₁t} + (v₂x(0))(cw₂)e^{λ₂t} + ... + (v_m x(0))(cw_m)e^{λ_m t}

and equating coefficients of the exponential terms gives

    vᵢ x(0) = γᵢ / (cwᵢ)   (i = 1, 2, ..., m).

This represents m linear equations for the m unknown components of x(0) in terms of γᵢ, vᵢ and wᵢ (i = 1, 2, ..., m).

Again, in the single-output case, C reduces to a row vector c and Theorem 3.2.5 can be restated as : A linear system (with outputs) in the form

    ẋ = Ax,   y = cx

can be transformed into the canonical form

    v̇ = Ev,   y = fv

if and only if it is c.o.
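A short numerical version of this recovery (illustrative A, c and x(0), not from the text; NumPy's eig returns the right eigenvectors, and the rows of the inverse eigenvector matrix serve as the left eigenvectors vᵢ):

```python
import numpy as np

# Illustrative single-output system with distinct eigenvalues -1, -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
c = np.array([1.0, 0.0])
x0_true = np.array([2.0, -1.0])

lam, W = np.linalg.eig(A)       # columns of W: right eigenvectors w_i
V = np.linalg.inv(W)            # rows of V: left eigenvectors v_i

# Coefficients of y(t) = sum_i gamma_i e^{lam_i t}:
gamma = np.array([(V[i] @ x0_true) * (c @ W[:, i]) for i in range(2)])

# Recover x(0) from the gammas: v_i x(0) = gamma_i / (c w_i).
rhs = np.array([gamma[i] / (c @ W[:, i]) for i in range(2)])
print(np.linalg.solve(V, rhs))  # recovers x0_true
```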

Decomposition of control systems

By duality, the result corresponding to Theorem 3.1.11 is :

3.2.7 Theorem. When the linear control system Σ is time-invariant and its observability matrix O_Σ has rank m₁ < m, there exists a control system, algebraically equivalent to Σ, having the form

    [ẋ⁽¹⁾; ẋ⁽²⁾] = [A₁ 0; A₂ A₃] [x⁽¹⁾; x⁽²⁾] + [B₁; B₂] u(t)
    y = C₁ x⁽¹⁾

where x⁽¹⁾ and x⁽²⁾ have orders m₁ and m − m₁, respectively, and (A₁, C₁) is c.o.

We close this section with a decomposition result which effectively combines Theorems 3.1.11 and 3.2.7 to show that a linear time-invariant control system can be split up into four mutually exclusive parts, respectively

- c.c. but unobservable
- c.c. and c.o.
- uncontrollable and unobservable
- c.o. but uncontrollable

3.2.8 Theorem. When the linear control system Σ is time-invariant, it is algebraically equivalent to

    [ẋ⁽¹⁾; ẋ⁽²⁾; ẋ⁽³⁾; ẋ⁽⁴⁾] = [A₁₁ A₁₂ A₁₃ A₁₄; 0 A₂₂ 0 A₂₄; 0 0 A₃₃ A₃₄; 0 0 0 A₄₄] [x⁽¹⁾; x⁽²⁾; x⁽³⁾; x⁽⁴⁾] + [B₁; B₂; 0; 0] u(t)

    y = C₂x⁽²⁾ + C₄x⁽⁴⁾

where the subscripts refer to the stated classification.

3.3 Linear Feedback

Consider a linear control system Σ defined by

    ẋ = Ax + Bu(t)    (3.8)

where A ∈ R^{m×m} and B ∈ R^{m×l}. Suppose that we apply a (linear) feedback, that is, each control variable is a linear combination of the state variables, so that

    u(t) = Kx(t)

where K ∈ R^{l×m} is a feedback matrix. The resulting closed loop system is

    ẋ = (A + BK)x.    (3.9)

The pole-shifting theorem

We ask whether it is possible to exert some influence on the behaviour of the closed loop system and, if so, to what extent. A somewhat surprising result, called the Spectrum Assignment Theorem, says in essence that for almost any linear control system Σ it is possible to obtain arbitrary eigenvalues for the matrix A + BK (and hence arbitrary asymptotic behaviour) using suitable feedback laws (matrices) K, subject only to the obvious constraint that complex eigenvalues must appear in conjugate pairs. "Almost any" means that this will be true for (completely) controllable systems.

Note : This theorem is most often referred to as the Pole-Shifting Theorem, a terminology that is due to the fact that the eigenvalues of A + BK are also the poles of the (complex) function

    z ↦ 1 / det(zI_m − A − BK).

This function appears often in classical control design.

The Pole-Shifting Theorem is central to linear control systems theory and is itself the starting point for more interesting analysis. Once we know that arbitrary sets of eigenvalues can be assigned, it becomes of interest to compare the performance

of different such sets. Also, one may ask what happens when certain entries of K are restricted to vanish, which corresponds to constraints on what can be implemented.

3.3.1 Theorem. Let Λ = {θ₁, θ₂, ..., θ_m} be an arbitrary set of m complex numbers (appearing in conjugate pairs). If the linear control system Σ is c.c., then there exists a matrix K ∈ R^{l×m} such that the eigenvalues of A + BK are the set Λ.

Proof (when l = 1) : Since ẋ = Ax + bu(t) is c.c., there exists a (linear) transformation w = Tx such that the given system is transformed into the canonical form

    ẇ = Cw + du(t)

where C is the companion matrix with last row

    ( −k_m  −k_{m-1}  −k_{m-2}  ...  −k₁ )

and d = (0, ..., 0, 1)ᵀ. The feedback control u = kw, where

    k := ( k̃_m  k̃_{m-1}  ...  k̃₁ ),

produces the closed loop matrix C + dk, which has the same companion form as C but with last row ( −γ_m  −γ_{m-1}  ...  −γ₁ ), where

    k̃ᵢ = kᵢ − γᵢ,   i = 1, 2, ..., m.    (3.10)

Since

    C + dk = T(A + bkT)T⁻¹

it follows that the desired matrix is

    K = kT,

the entries k̃ᵢ (i = 1, 2, ..., m) being given by (3.10). In this equation the kᵢ (i = 1, 2, ..., m) are the coefficients in the characteristic polynomial of A; that is,

    det(λI_m − A) = λ^m + k₁λ^{m-1} + ... + k_m

and the γᵢ (i = 1, 2, ..., m) are obtained by equating coefficients of λ in

    λ^m + γ₁λ^{m-1} + ... + γ_m ≡ (λ − θ₁)(λ − θ₂) ⋯ (λ − θ_m).

Note : The solution of (the closed loop system) ẋ = (A + BK)x depends on the eigenvalues of A + BK, so provided the control system Σ is c.c., the theorem tells us that using linear feedback it is possible to exert a considerable influence on the time behaviour of the closed loop system by suitably choosing the numbers θ₁, θ₂, ..., θ_m.

3.3.2 Corollary. If the linear time-invariant control system

    ẋ = Ax + Bu(t),   y = cx

is c.o., then there exists a matrix L ∈ R^{m×1} such that the eigenvalues of A + Lc are the set Λ.

This result can be deduced from Theorem 3.3.1 using the Duality Theorem.

3.3.3 Example. Consider the linear control system

    ẋ = [1 1; −12 2] x + [0; 1] u(t).

The characteristic polynomial of A is

    char_A(λ) = λ² − 3λ + 14

which has roots (3 ± i√47)/2. Suppose we wish the eigenvalues of the closed loop system to be −1 and −2, so that the desired characteristic polynomial is

    λ² + 3λ + 2.

We have

    k̃₁ = k₁ − γ₁ = −3 − 3 = −6
    k̃₂ = k₂ − γ₂ = 14 − 2 = 12.

Hence, with T = [1 0; 1 1] the transformation to the canonical form,

    K = kT = ( 12  −6 ) [1 0; 1 1] = ( 6  −6 ).

It is easy to verify that

    A + bK = [1 1; −6 −4]

does have the desired eigenvalues −1 and −2.
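A sketch of this construction in NumPy (the system matrices follow the reconstruction of Example 3.3.3 above; np.poly gives the characteristic coefficients):

```python
import numpy as np

def place_single_input(A, b, desired):
    """Pole placement for a controllable single-input pair, via the
    companion-form construction of Theorem 3.3.1 (a sketch)."""
    m = A.shape[0]
    k_coeffs = np.poly(A)[1:]            # [k_1, ..., k_m]
    g_coeffs = np.poly(desired)[1:]      # [gamma_1, ..., gamma_m]
    k = (k_coeffs - g_coeffs)[::-1]      # (k~_m, ..., k~_1)
    # Transformation w = Tx to companion form: T = [t; tA; ...; tA^{m-1}],
    # where t is the last row of the inverse controllability matrix.
    Cmat = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(m)])
    t = np.linalg.inv(Cmat)[-1]
    T = np.vstack([t @ np.linalg.matrix_power(A, i) for i in range(m)])
    return k @ T

A = np.array([[1.0, 1.0],
              [-12.0, 2.0]])
b = np.array([[0.0],
              [1.0]])
K = place_single_input(A, b, [-1.0, -2.0])
print(K)                                             # [ 6. -6.]
print(np.linalg.eigvals(A + b @ K.reshape(1, -1)))   # -1, -2
```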

3.3.4 Lemma. If the linear control system Σ defined by ẋ = Ax + Bu(t) is c.c. and B = [b₁  b₂  ...  b_l] with bᵢ ≠ 0, i = 1, 2, ..., l, then there exist matrices Kᵢ ∈ R^{l×m}, i = 1, 2, ..., l, such that the systems

    ẋ = (A + BKᵢ)x + bᵢu(t)

are c.c.

Proof : For convenience consider the case i = 1. Since the matrix

    C = [B  AB  A²B  ...  A^{m-1}B]

has full rank, it is possible to select from its columns at least one set of m vectors which are linearly independent. Define an m × m matrix M by choosing such a set as follows :

    M = [b₁  Ab₁  ...  A^{r₁-1}b₁  b₂  Ab₂  ...  A^{r₂-1}b₂  ...]

where rᵢ is the smallest integer such that A^{rᵢ}bᵢ is linearly dependent on all the preceding vectors, the process continuing until m columns of C are taken. Define an l × m matrix N having its r₁-th column equal to e₂, the second column of I_l, its (r₁ + r₂)-th column equal to e₃, its (r₁ + r₂ + r₃)-th column equal to e₄, and so on, all its other columns being zero. It is then not difficult to show that the desired matrix in the statement of the Lemma is K₁ = NM⁻¹.

Proof of Theorem 3.3.1 when l > 1 : Let K₁ be the matrix in the proof of Lemma 3.3.4 and define an l × m matrix K₀ having as its first row some vector k, and all its other rows zero. Then the control u = (K₁ + K₀)x leads to the closed loop system

    ẋ = (A + BK₁)x + BK₀x = (A + BK₁)x + b₁kx

where b₁ is the first column of B. Since the system ẋ = (A + BK₁)x + b₁u is c.c., it now follows from the proof of the theorem when l = 1 that k can be chosen so that the eigenvalues of A + BK₁ + b₁k are the set Λ, so the desired feedback control is indeed u = (K₁ + K₀)x.
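The column-selection construction of the Lemma can be scripted directly. A sketch (the test pair is an illustrative choice, not from the text): it builds M greedily, forms N as described, and verifies that (A + BK₁, b₁) is c.c.

```python
import numpy as np

def k1_from_lemma(A, B):
    """Construct K_1 = N M^{-1} as in the proof of Lemma 3.3.4
    (a sketch; assumes (A, B) is controllable)."""
    m, l = B.shape
    cols, r = [], []
    for i in range(l):
        ri, v = 0, B[:, [i]]
        while len(cols) < m:
            trial = cols + [v]
            if np.linalg.matrix_rank(np.hstack(trial)) == len(trial):
                cols.append(v); ri += 1
                v = A @ v                 # next candidate column A^j b_i
            else:
                break
        r.append(ri)
        if len(cols) == m:
            break
    M = np.hstack(cols)
    N = np.zeros((l, m))
    pos = 0
    for j, rj in enumerate(r[:-1]):       # column r_1+...+r_{j+1} gets e_{j+2}
        pos += rj
        if j + 1 < l and pos <= m:
            N[j + 1, pos - 1] = 1.0
    return N @ np.linalg.inv(M)

A = np.array([[0.0, 0.0],
              [0.0, 1.0]])
B = np.eye(2)                  # b_1 alone does not control this system
K1 = k1_from_lemma(A, B)
Anew = A + B @ K1
b1 = B[:, [0]]
C1 = np.hstack([b1, Anew @ b1])
print(np.linalg.matrix_rank(C1))   # 2 -> (A + B K_1, b_1) is c.c.
```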

If y = Cx is the output vector, then again by duality we can immediately deduce :

3.3.5 Corollary. If the linear control system

    ẋ = Ax + Bu(t),   y = Cx

is c.o., then there exists a matrix L ∈ R^{m×n} such that the eigenvalues of A + LC are the set Λ.

Algorithm for constructing a feedback matrix

The following method gives a practical way of constructing the feedback matrix K. Let all the eigenvalues λ₁, λ₂, ..., λ_m of A be distinct, and let

    W = [w₁  w₂  ...  w_m]

where wᵢ is an eigenvector corresponding to the eigenvalue λᵢ. With linear feedback u = −Kx, suppose that the eigenvalues of A and of A − BK are ordered so that those of A − BK are to be

    µ₁, µ₂, ..., µ_r, λ_{r+1}, ..., λ_m   (r ≤ m).

Then, provided the linear system Σ is c.c., a suitable matrix is

    K = f g W̃

where W̃ consists of the first r rows of W⁻¹,

    g = ( α₁/β₁  α₂/β₂  ...  α_r/β_r ),

    αᵢ = ∏_{j=1}^{r} (λᵢ − µⱼ) / ∏_{j=1, j≠i}^{r} (λᵢ − λⱼ)  if r > 1,   α₁ = λ₁ − µ₁  if r = 1,

    ( β₁  β₂  ...  β_r )ᵀ = W̃Bf,

f being any column l-vector such that all βᵢ ≠ 0.

3.3.6 Example. Consider the linear system

    ẋ = [0 1; −2 3] x + [2; −1] u(t).

We have λ₁ = 1, λ₂ = 2 and

    W = [1 1; 1 2],   W⁻¹ = [2 −1; −1 1].

Suppose that µ₁ = 3, µ₂ = 4, so that r = 2 and W̃ = W⁻¹. We have

    α₁ = (λ₁ − µ₁)(λ₁ − µ₂)/(λ₁ − λ₂) = (−2)(−3)/(−1) = −6
    α₂ = (λ₂ − µ₁)(λ₂ − µ₂)/(λ₂ − λ₁) = (−1)(−2)/1 = 2

and β = W̃Bf gives

    ( β₁; β₂ ) = ( 5f₁; −3f₁ ).

Hence we can take f₁ = 1, which results in

    g = ( α₁/β₁  α₂/β₂ ) = ( −6/5  −2/3 ).

Finally, the desired feedback matrix is

    K = gW⁻¹ = ( −6/5  −2/3 ) [2 −1; −1 1] = ( −26/15  8/15 ).
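The algorithm is easy to script. The sketch below implements K = f g W̃ and checks it on the numbers of Example 3.3.6 as reconstructed above (eigenvalues 1, 2 moved to 3, 4; the closed loop is A − BK):

```python
import numpy as np

def dyadic_feedback(A, B, mu, f):
    """K = f g W~ moving the first r eigenvalues of A to mu
    (distinct eigenvalues assumed); closed loop is A - B K."""
    r = len(mu)
    lam, W = np.linalg.eig(A)
    Wt = np.linalg.inv(W)[:r]                     # first r rows of W^{-1}
    beta = Wt @ B @ f
    alpha = np.array([
        np.prod([lam[i] - m for m in mu]) /
        np.prod([lam[i] - lam[j] for j in range(r) if j != i])
        for i in range(r)])
    g = alpha / beta
    return np.real(np.outer(f, g @ Wt))

A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
b = np.array([[2.0],
              [-1.0]])
K = dyadic_feedback(A, b, [3.0, 4.0], f=np.array([1.0]))
print(K)                                   # [[-26/15, 8/15]]
print(np.linalg.eigvals(A - b @ K))        # 3, 4
```

With B = [2 −1; −1 0] and f = np.array([1.0, 1.0]), the same function returns [[−3, 1], [−3, 1]], the second matrix found in Example 3.3.7 below.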

3.3.7 Example. Consider now the linear control system

    ẋ = [0 1; −2 3] x + [2 −1; −1 0] u(t).

We now obtain

    ( β₁; β₂ ) = W̃Bf = ( 5f₁ − 2f₂; −3f₁ + f₂ )

so that f₁ = 1, f₂ = 0 gives

    K = f g W̃ = ( 1; 0 ) ( −6/5  −2/3 ) W⁻¹ = [−26/15 8/15; 0 0].

However, f₁ = 1, f₂ = 1 gives β₁ = 3, β₂ = −2, so that

    g = ( −6/3  2/(−2) ) = ( −2  −1 )

and from K = f g W̃ we now have

    K = ( 1; 1 ) ( −2  −1 ) W⁻¹ = [−3 1; −3 1].

3.4 Realization Theory

The realization problem may be viewed as guessing the equations of motion (i.e. state equations) of a control system from its input/output behaviour or, if one prefers, setting up a physical model which explains the experimental data.

Consider the linear control system (with outputs) Σ described by

    ẋ = Ax + Bu(t)    (3.11)
    y = Cx

where A ∈ R^{m×m}, B ∈ R^{m×l} and C ∈ R^{n×m}. Taking Laplace transforms of (3.11) and assuming zero initial conditions gives

    s x(s) = A x(s) + B u(s)

and after rearrangement

    x(s) = (sI_m − A)⁻¹ B u(s).

The Laplace transform of the output is y(s) = C x(s), and thus

    y(s) = C (sI_m − A)⁻¹ B u(s) = G(s) u(s)

where the n × l matrix

    G(s) := C (sI_m − A)⁻¹ B    (3.12)

is called the transfer function matrix, since it relates the Laplace transform of the output vector to that of the input vector.

Exercise 41 Evaluate (the Laplace transform of the exponential)

    L[e^{at}](s) := ∫_0^∞ e^{−st} e^{at} dt

and then show that (for A ∈ R^{m×m}) :

    L[exp(tA)](s) = (sI_m − A)⁻¹.

Using the relation

    (sI_m − A)⁻¹ = ( s^{m-1} I_m + s^{m-2} B₁ + s^{m-3} B₂ + ... + B_{m-1} ) / char_A(s)    (3.13)

where the kᵢ and Bᵢ are determined successively by

    B₁ = A + k₁I_m,   Bᵢ = AB_{i-1} + kᵢI_m ;   i = 2, 3, ..., m − 1
    k₁ = −tr(A),   kᵢ = −(1/i) tr(AB_{i-1}) ;   i = 2, 3, ..., m

the expression (3.12) becomes

    G(s) = ( s^{m-1} G₀ + s^{m-2} G₁ + ... + G_{m-1} ) / χ(s) = H(s) / χ(s)

where χ(s) = char_A(s) and G_k = [ g⁽ᵏ⁾ᵢⱼ ] ∈ R^{n×l}, k = 0, 1, 2, ..., m − 1. The n × l matrix H(s) is called a polynomial matrix, since each of its entries is itself a polynomial; that is,

    hᵢⱼ = s^{m-1} g⁽⁰⁾ᵢⱼ + s^{m-2} g⁽¹⁾ᵢⱼ + ... + g⁽ᵐ⁻¹⁾ᵢⱼ.

Note : The formulas above, used mainly for theoretical rather than computational purposes, constitute Leverrier's algorithm.

3.4.1 Example. Consider the electrically-heated oven described in Section 1.3, and suppose that the values of the constants are such that the state equations are

    ẋ = [−2 1; 1 −1] x + [1; 0] u(t).

Suppose that the output is provided by a thermocouple in the jacket measuring the jacket (excess) temperature, i.e.

    y = ( 1  0 ) x.

The expression (3.12) gives

    G(s) = ( 1  0 ) (sI₂ − A)⁻¹ ( 1; 0 ) = (s + 1) / (s² + 3s + 1)

using (sI₂ − A)⁻¹ = adj(sI₂ − A) / char_A(s).
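Leverrier's algorithm is straightforward to implement and yields both char_A(s) and the resolvent coefficients. The sketch below reproduces G(s) = (s + 1)/(s² + 3s + 1) for the oven model as reconstructed above:

```python
import numpy as np

def leverrier(A):
    """Faddeev-Leverrier recursion: returns (k, Bs) with
    char_A(s) = s^m + k[0] s^{m-1} + ... + k[m-1] and
    (sI - A)^{-1} = (s^{m-1} I + s^{m-2} B_1 + ... + B_{m-1}) / char_A(s)."""
    m = A.shape[0]
    k, Bs = [], []
    Bprev = np.eye(m)
    for i in range(1, m + 1):
        ki = -np.trace(A @ Bprev) / i
        k.append(ki)
        Bi = A @ Bprev + ki * np.eye(m)
        if i < m:
            Bs.append(Bi)
            Bprev = Bi
    return np.array(k), Bs

A = np.array([[-2.0, 1.0],
              [1.0, -1.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 0.0]])

k, Bs = leverrier(A)
print(k)                      # [3. 1.]  ->  char_A(s) = s^2 + 3s + 1
G0 = (c @ np.eye(2) @ b).item()
G1 = (c @ Bs[0] @ b).item()
print(G0, G1)                 # 1.0, 1.0  ->  G(s) = (s + 1)/(s^2 + 3s + 1)
```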

Realizations

In practice it often happens that the mathematical description of a (linear time-invariant) control system in terms of differential equations is not known, but G(s) can be determined from experimental measurements or other considerations. It is then useful to find a system in our usual state space form to which G(·) corresponds.

In formal terms, given an n × l matrix G(s), whose elements are rational functions of s, we wish to find (constant) matrices A, B, C having dimensions m × m, m × l and n × m, respectively, such that

    G(s) = C (sI_m − A)⁻¹ B

and the system equations will then be

    ẋ = Ax + Bu(t),   y = Cx.

The triple (A, B, C) is termed a realization of G(·) of order m; it is not, of course, unique. Amongst all such realizations some will include matrices A having least dimensions : these are called minimal realizations, since the corresponding systems involve the smallest possible number of state variables.

Note : Since each element in

    (sI_m − A)⁻¹ = adj(sI_m − A) / det(sI_m − A)

has the degree of the numerator less than that of the denominator, it follows that

    lim_{s→∞} C (sI_m − A)⁻¹ B = 0

and we shall assume that any given G(s) also has this property, G(·) then being termed strictly proper.

3.4.2 Example. Consider the scalar transfer function

    g(s) = (2s + 7) / (s² − 5s + 6).

It is easy to verify that one realization of g(·) is

    A = [0 1; −6 5],   b = ( 0; 1 ),   c = ( 7  2 ).

It is also easy to verify that a quite different triple,

    A = [2 0; 0 3],   b = ( 1; 1 ),   c = ( −11  13 ),

is also a realization of g(·).
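Both triples (as reconstructed here) can be checked by evaluating C(sI − A)⁻¹B at a few sample points away from the poles:

```python
import numpy as np

def G(A, b, c, s):
    m = A.shape[0]
    return (c @ np.linalg.solve(s * np.eye(m) - A, b)).item()

g = lambda s: (2 * s + 7) / (s**2 - 5 * s + 6)

A1 = np.array([[0.0, 1.0], [-6.0, 5.0]])
b1 = np.array([[0.0], [1.0]]); c1 = np.array([[7.0, 2.0]])

A2 = np.array([[2.0, 0.0], [0.0, 3.0]])
b2 = np.array([[1.0], [1.0]]); c2 = np.array([[-11.0, 13.0]])

for s in (1.0j, 1.0 + 1.0j, 5.0):
    print(g(s), G(A1, b1, c1, s), G(A2, b2, c2, s))   # all three agree
```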
