Chapter 3

Linear Control Systems

Topics:
1. Controllability
2. Observability
3. Linear Feedback
4. Realization Theory

Copyright © Claudiu C. Remsing, 2006. All rights reserved.

Intuitively, a control system should be designed so that the input u(·) controls all the states, and also so that all states can be observed from the output y(·). The concepts of (complete) controllability and observability formalize these ideas. Two further fundamental concepts of control theory, feedback and realization, are also introduced. Using (linear) feedback it is possible to exert a considerable influence on the behaviour of a (linear) control system.

3.1 Controllability

An essential first step in dealing with many control problems is to determine whether a desired objective can be achieved by manipulating the chosen control variables. If not, then either the objective will have to be modified or control will have to be applied in some different fashion. We shall discuss the general property of being able to transfer (or steer) a control system from any given state to any other by means of a suitable choice of control functions.

3.1.1 Definition. The linear control system Σ defined by

    ẋ = A(t)x + B(t)u(t)                                              (3.1)

where A(t) ∈ R^{m×m} and B(t) ∈ R^{m×l}, is said to be completely controllable (c.c.) if for any t_0, any initial state x(t_0) = x_0, and any given final state x_f, there exist a finite time t_1 > t_0 and a control u : [t_0, t_1] → R^l such that x(t_1) = x_f.

Note: (1) The qualifying term "completely" implies that the definition holds for all x_0 and x_f; several other types of controllability can be defined. (2) The control u(·) is assumed piecewise-continuous in the interval [t_0, t_1].

3.1.2 Example. Consider the control system described by

    ẋ_1 = a_1 x_1 + a_2 x_2 + u(t)
    ẋ_2 = a_3 x_2.

Clearly, by inspection, this is not completely controllable (c.c.), since u(·) has no influence on x_2, which is entirely determined by the second equation and x_2(t_0).

Returning to the general system (3.1), we have

    x_f = Φ(t_1, t_0) [ x_0 + ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) u(τ) dτ ]

or

    0 = Φ(t_1, t_0) [ x_0 − Φ(t_0, t_1) x_f + ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) u(τ) dτ ].

Since Φ(t_1, t_0) is nonsingular, it follows that if u(·) transfers x_0 to x_f, it also transfers x_0 − Φ(t_0, t_1) x_f to the origin in the same time interval. Since x_0 and x_f are arbitrary, it therefore follows that in the controllability definition the given final state can be taken to be the zero vector without loss of generality.

Note: For time-invariant control systems, the initial time t_0 in the controllability definition can be set equal to zero.

The Kalman rank condition

For linear time-invariant control systems a general algebraic criterion (for complete controllability) can be derived.

3.1.3 Theorem. The linear time-invariant control system

    ẋ = Ax + Bu(t)                                                    (3.2)

(or the pair (A, B)) is c.c. if and only if the (Kalman) controllability matrix

    C = C(A, B) := [ B  AB  A²B  …  A^{m−1}B ] ∈ R^{m×ml}

has rank m.

Proof: (⇒) We suppose the system is c.c. and wish to prove that rank(C) = m. This is done by assuming rank(C) < m, which leads to a contradiction. In that case there exists a nonzero constant row m-vector q such that

    qB = 0,   qAB = 0,   …,   qA^{m−1}B = 0.

In the expression

    x(t) = exp(tA) [ x_0 + ∫_0^t exp(−τA) B u(τ) dτ ]

for the solution of (3.2) subject to x(0) = x_0, set t = t_1 and x(t_1) = 0 to obtain (since exp(t_1 A) is nonsingular)

    x_0 = − ∫_0^{t_1} exp(−τA) B u(τ) dτ.

Now, exp(−τA) can be expressed as some polynomial r(A) in A having degree at most m − 1 (with coefficients depending on τ), so we get

    x_0 = − ∫_0^{t_1} ( r_0 I_m + r_1 A + ⋯ + r_{m−1} A^{m−1} ) B u(τ) dτ.

Multiplying this relation on the left by q gives q x_0 = 0. Since the system is c.c., this must hold for any vector x_0, which implies q = 0, a contradiction.

(⇐) We assume rank(C) = m, and wish to show that for any x_0 there is a control u : [0, t_1] → R^l which, when substituted into

    x(t) = exp(tA) [ x_0 + ∫_0^t exp(−τA) B u(τ) dτ ],

produces x(t_1) = 0. Consider the symmetric matrix

    W_c := ∫_0^{t_1} exp(−τA) B B^T exp(−τA^T) dτ.                    (3.3)

One can show that W_c is nonsingular. Indeed, consider the quadratic form associated to W_c:

    α^T W_c α = ∫_0^{t_1} ψ(τ) ψ^T(τ) dτ = ∫_0^{t_1} ‖ψ(τ)‖²_e dτ,

where α ∈ R^{m×1} is an arbitrary column vector and ψ(τ) := α^T exp(−τA) B. It is clear that W_c is positive semi-definite, and will be singular only if there

exists an ᾱ ≠ 0 such that ᾱ^T W_c ᾱ = 0. However, in this case it follows (using the properties of the norm) that ψ(τ) ≡ 0 for 0 ≤ τ ≤ t_1. Hence we have

    ᾱ^T ( I_m − τA + (τ²/2!)A² − (τ³/3!)A³ + ⋯ ) B = 0,   0 ≤ τ ≤ t_1,

from which it follows that

    ᾱ^T B = 0,   ᾱ^T AB = 0,   ᾱ^T A²B = 0,   …

This implies that ᾱ^T C = 0. Since by assumption C has rank m, it follows that such a nonzero vector ᾱ cannot exist, so W_c is nonsingular.

Now, if we choose as the control vector

    u(t) = − B^T exp(−tA^T) W_c^{−1} x_0,   t ∈ [0, t_1],

then substitution into the expression for x(t_1) gives

    x(t_1) = exp(t_1 A) [ x_0 − ∫_0^{t_1} exp(−τA) B B^T exp(−τA^T) dτ · (W_c^{−1} x_0) ]
           = exp(t_1 A) [ x_0 − W_c W_c^{−1} x_0 ] = 0,

as required.

3.1.4 Corollary. If rank(B) = r, then the condition in Theorem 3.1.3 reduces to

    rank [ B  AB  …  A^{m−r}B ] = m.

Proof: Define the matrix

    C_k := [ B  AB  …  A^k B ],   k = 0, 1, 2, …

If rank(C_j) = rank(C_{j+1}), it follows that all the columns of A^{j+1}B must be linearly dependent on those of C_j. This then implies that all the columns of A^{j+2}B, A^{j+3}B, … must also be linearly dependent on those of C_j, so that

    rank(C_j) = rank(C_{j+1}) = rank(C_{j+2}) = ⋯

Hence the rank of C_k increases by at least one when the index k is increased by one, until the maximum value of rank(C_k) is attained when k = j. Since rank(C_0) = rank(B) = r and rank(C_k) ≤ m, it follows that r + j ≤ m, giving j ≤ m − r, as required.

3.1.5 Example. Consider the linear control system Σ described by

    ẋ = [ 1  1 ; 1  −1 ] x + [ 1 ; 1 ] u(t).

The (Kalman) controllability matrix is

    C = C_Σ = [ b  Ab ] = [ 1  2 ; 1  0 ],

which has rank 2, so the control system Σ is c.c.

Note: When l = 1, B reduces to a column vector b, and Theorem 3.1.3 can be restated as: a linear control system in the form ẋ = Ax + bu(t) can be transformed into the canonical form ẇ = Cw + du(t) if and only if it is c.c.
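For time-invariant systems the rank condition is easy to check numerically. Below is a minimal sketch in Python with NumPy; the helper ctrb is ours (not a library routine), and the matrices are those of Example 3.1.5 as given above.

```python
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(m-1) B]."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example 3.1.5:
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([[1.0], [1.0]])

C = ctrb(A, b)
print(C)                            # [[1. 2.] [1. 0.]]
print(np.linalg.matrix_rank(C))     # 2, so the pair (A, b) is c.c.
```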

Controllability criterion

We now give a general criterion for (complete) controllability of control systems (time-invariant or time-varying), as well as an explicit expression for a control vector which carries out a required transfer of states.

3.1.6 Theorem. The linear control system Σ defined by

    ẋ = A(t)x + B(t)u(t)

is c.c. if and only if the symmetric matrix, called the controllability Gramian,

    W_c(t_0, t_1) := ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) B^T(τ) Φ^T(t_0, τ) dτ ∈ R^{m×m}        (3.4)

is nonsingular. In this case the control

    u*(t) = − B^T(t) Φ^T(t_0, t) W_c(t_0, t_1)^{−1} [ x_0 − Φ(t_0, t_1) x_f ],   t ∈ [t_0, t_1],

transfers x(t_0) = x_0 to x(t_1) = x_f.

Proof: (⇐) Sufficiency. If W_c(t_0, t_1) is assumed nonsingular, then the control u*(·) defined above exists. Substitution of this expression into the solution

    x(t) = Φ(t, t_0) [ x_0 + ∫_{t_0}^{t} Φ(t_0, τ) B(τ) u(τ) dτ ]

of ẋ = A(t)x + B(t)u(t) gives

    x(t_1) = Φ(t_1, t_0) [ x_0 − ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) B^T(τ) Φ^T(t_0, τ) W_c(t_0, t_1)^{−1} ( x_0 − Φ(t_0, t_1) x_f ) dτ ]
           = Φ(t_1, t_0) [ x_0 − W_c(t_0, t_1) W_c(t_0, t_1)^{−1} ( x_0 − Φ(t_0, t_1) x_f ) ]
           = Φ(t_1, t_0) [ x_0 − x_0 + Φ(t_0, t_1) x_f ]
           = Φ(t_1, t_0) Φ(t_0, t_1) x_f = x_f.

(⇒) Necessity. We need to show that if Σ is c.c., then W_c(t_0, t_1) is nonsingular. First, notice that if α ∈ R^{m×1} is an arbitrary column vector, then from

(3.4), since W = W_c(t_0, t_1) is symmetric, we can construct the quadratic form

    α^T W α = ∫_{t_0}^{t_1} θ^T(τ, t_0) θ(τ, t_0) dτ = ∫_{t_0}^{t_1} ‖θ‖²_e dτ,

where θ(τ, t_0) := B^T(τ) Φ^T(t_0, τ) α, so that W_c(t_0, t_1) is positive semi-definite. Suppose that there exists some ᾱ ≠ 0 such that ᾱ^T W ᾱ = 0. Then we get (writing θ̄ for θ when α = ᾱ)

    ∫_{t_0}^{t_1} ‖θ̄‖²_e dτ = 0,

which in turn implies (using the properties of the norm) that

    θ̄(τ, t_0) ≡ 0,   t_0 ≤ τ ≤ t_1.

However, by assumption Σ is c.c., so there exists a control v(·) making x(t_1) = 0 if x(t_0) = ᾱ. Hence

    ᾱ = − ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) v(τ) dτ,

and therefore

    ‖ᾱ‖²_e = ᾱ^T ᾱ = − ∫_{t_0}^{t_1} v^T(τ) B^T(τ) Φ^T(t_0, τ) ᾱ dτ = − ∫_{t_0}^{t_1} v^T(τ) θ̄(τ, t_0) dτ = 0,

which contradicts the assumption that ᾱ ≠ 0. Hence W_c(t_0, t_1) is positive definite, and is therefore nonsingular.

3.1.7 Example. The control system is

    ẋ = [ 1  −1 ; 4  −4 ] x + [ 1 ; 1 ] u(t).

Observe that λ = 0 is an eigenvalue of A, and that b = [ 1 ; 1 ] is a corresponding eigenvector, so Ab = 0 and the controllability rank condition does not hold. However, A is

similar to its companion matrix. Using the matrix T = [ 1  0 ; 1  −1 ] computed before (see Example 2.5.1) and w = Tx, we have the system

    ẇ = [ 0  1 ; 0  −3 ] w + [ 1 ; 0 ] u.

Differentiation of the w_1 equation and substitution produces a second-order ODE for w_1:

    ẅ_1 + 3ẇ_1 = 3u + u̇.

One integration produces a first-order ODE

    ẇ_1 + 3w_1 = 3 ∫_0^t u(τ) dτ + u,

which shows that the action of arbitrary inputs u(·) affects the dynamics in only a one-dimensional space. The original x equations might lead us to think that u(·) can fully affect x_1 and x_2, but notice that the w_2 equation says that u(·) has no effect on the dynamics of the difference x_1 − x_2 = w_2. Only when the initial condition for w involves w_2(0) = 0 can u(·) be used to control a trajectory. That is, the inputs completely control only the states that lie in the subspace

    span { b, Ab } = span { b } = span { [ 1 ; 1 ] }.

Solutions starting with x_1(0) = x_2(0) satisfy

    x_1(t) = x_2(t) = ∫_0^t u(τ) dτ + x_1(0).

One can steer along the line x_1 = x_2 from any initial point to any final point x_1(t_1) = x_2(t_1) at any finite time t_1 by appropriate choice of u(·). On the other hand, if the initial condition lies off the line x_1 = x_2, then the difference w_2 = x_1 − x_2 decays exponentially, so there is no chance of steering to an arbitrarily given final state in finite time.
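Numerically, the Gramian (3.3) makes both situations visible. The sketch below approximates W_c by a trapezoidal rule (the quadrature helper, grid sizes and test state are arbitrary choices): for Example 3.1.7 the Gramian is singular, while for the controllable pair of Example 3.1.5 the control u(t) = −B^T exp(−tA^T) W_c^{−1} x_0 from the proof of Theorem 3.1.3 steers x_0 to the origin.

```python
import numpy as np
from scipy.linalg import expm

def trapz(vals, ts):
    """Trapezoidal rule along axis 0, assuming a uniform grid."""
    dt = ts[1] - ts[0]
    return (vals[0] + vals[-1]) * dt / 2 + vals[1:-1].sum(axis=0) * dt

def gramian(A, B, t1, n=2000):
    """W_c of (3.3), integrated numerically over [0, t1]."""
    ts = np.linspace(0.0, t1, n)
    return trapz(np.array([expm(-t * A) @ B @ B.T @ expm(-t * A).T
                           for t in ts]), ts)

b = np.array([[1.0], [1.0]])

# Example 3.1.7: b is an eigenvector for lambda = 0, so exp(-tA) b = b and
# W_c = t1 * b b^T, which is singular.
A1 = np.array([[1.0, -1.0], [4.0, -4.0]])
print(np.linalg.matrix_rank(gramian(A1, b, 1.0)))      # 1: not c.c.

# Example 3.1.5: W_c is nonsingular and u* steers x_0 to 0 at t1 = 1.
A2 = np.array([[1.0, 1.0], [1.0, -1.0]])
Wc = gramian(A2, b, 1.0)
x0 = np.array([1.0, -2.0])
ts = np.linspace(0.0, 1.0, 2000)
u = np.array([-b.T @ expm(-t * A2.T) @ np.linalg.solve(Wc, x0) for t in ts])
vals = np.array([expm(-t * A2) @ b @ u[i] for i, t in enumerate(ts)])
print(np.round(expm(A2) @ (x0 + trapz(vals, ts)), 4))  # ~ [0. 0.]
```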

Note: The control function u*(·) which transfers the system from x_0 = x(t_0) to x_f = x(t_1) requires calculation of the state transition matrix Φ(·, ·) and the controllability Gramian W_c(t_0, t_1). This is not too difficult for linear time-invariant control systems, although rather tedious. Of course, there will in general be many other suitable control vectors which achieve the same result.

3.1.8 Proposition. If u(·) is any other control taking x_0 = x(t_0) to x_f = x(t_1), then

    ∫_{t_0}^{t_1} ‖u(τ)‖²_e dτ > ∫_{t_0}^{t_1} ‖u*(τ)‖²_e dτ.

Proof: Since both u and u* satisfy

    x_f = Φ(t_1, t_0) [ x_0 + ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) u(τ) dτ ],

we obtain after subtraction

    0 = ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) [ u(τ) − u*(τ) ] dτ.

Multiplication of this equation on the left by [ x_0 − Φ(t_0, t_1) x_f ]^T [ W_c(t_0, t_1)^{−1} ]^T gives

    ∫_{t_0}^{t_1} (u*)^T(τ) [ u*(τ) − u(τ) ] dτ = 0,

and thus

    ∫_{t_0}^{t_1} ‖u*(τ)‖²_e dτ = ∫_{t_0}^{t_1} (u*)^T(τ) u(τ) dτ.

Therefore, since u ≠ u*,

    0 < ∫_{t_0}^{t_1} ‖u*(τ) − u(τ)‖²_e dτ
      = ∫_{t_0}^{t_1} [ u*(τ) − u(τ) ]^T [ u*(τ) − u(τ) ] dτ
      = ∫_{t_0}^{t_1} ( ‖u(τ)‖²_e + ‖u*(τ)‖²_e − 2 (u*)^T(τ) u(τ) ) dτ
      = ∫_{t_0}^{t_1} ( ‖u(τ)‖²_e − ‖u*(τ)‖²_e ) dτ

and so

    ∫_{t_0}^{t_1} ‖u(τ)‖²_e dτ = ∫_{t_0}^{t_1} ( ‖u*(τ)‖²_e + ‖u*(τ) − u(τ)‖²_e ) dτ > ∫_{t_0}^{t_1} ‖u*(τ)‖²_e dτ,

as required.

Note: This result can be interpreted as showing that the control

    u*(t) = − B^T(t) Φ^T(t_0, t) W_c(t_0, t_1)^{−1} [ x_0 − Φ(t_0, t_1) x_f ]

is optimal, in the sense that it minimizes the integral

    ∫_{t_0}^{t_1} ‖u(τ)‖²_e dτ = ∫_{t_0}^{t_1} ( u_1²(τ) + u_2²(τ) + ⋯ + u_l²(τ) ) dτ

over the set of all (admissible) controls which transfer x_0 = x(t_0) to x_f = x(t_1); this integral can be thought of as a measure of the control energy involved.

Algebraic equivalence and decomposition of control systems

We now indicate a further aspect of controllability. Let P(·) be a matrix-valued mapping which is continuous and such that P(t) is nonsingular for all t. (The continuous mapping P : [t_0, ∞) → GL(m, R) is a path in the general linear group GL(m, R).) Then the system Σ̃ obtained from Σ by the transformation x̃ = P(t)x is said to be algebraically equivalent to Σ.

3.1.9 Proposition. If Φ(t, t_0) is the state transition matrix for Σ, then

    Φ̃(t, t_0) = P(t) Φ(t, t_0) P^{−1}(t_0)

is the state transition matrix for Σ̃.

Proof: We recall that Φ(t, t_0) is the unique matrix-valued mapping satisfying

    Φ̇(t, t_0) = A(t) Φ(t, t_0),   Φ(t_0, t_0) = I_m,

and is nonsingular. Clearly, Φ̃(t_0, t_0) = I_m. Differentiation of x̃ = P(t)x gives

    (d/dt) x̃ = Ṗx + Pẋ = (Ṗ + PA)x + PBu = (Ṗ + PA)P^{−1} x̃ + PBu.

We need to show that Φ̃ is the state transition matrix for

    (d/dt) x̃ = (Ṗ + PA)P^{−1} x̃ + PBu.

We have

    (d/dt) Φ̃(t, t_0) = (d/dt) [ P(t) Φ(t, t_0) P^{−1}(t_0) ]
                     = Ṗ(t) Φ(t, t_0) P^{−1}(t_0) + P(t) Φ̇(t, t_0) P^{−1}(t_0)
                     = [ Ṗ(t) + P(t)A(t) ] P^{−1}(t) · P(t) Φ(t, t_0) P^{−1}(t_0)
                     = [ Ṗ(t) + P(t)A(t) ] P^{−1}(t) Φ̃(t, t_0).

3.1.10 Proposition. If Σ is c.c., then so is Σ̃.

Proof: The system matrices for Σ̃ are

    Ã = (Ṗ + PA)P^{−1}   and   B̃ = PB,

so the controllability Gramian for Σ̃ is

    W̃_c(t_0, t_1) = ∫_{t_0}^{t_1} Φ̃(t_0, τ) B̃(τ) B̃^T(τ) Φ̃^T(t_0, τ) dτ
                  = ∫_{t_0}^{t_1} P(t_0) Φ(t_0, τ) P^{−1}(τ) P(τ) B(τ) B^T(τ) P^T(τ) ( P^{−1}(τ) )^T Φ^T(t_0, τ) P^T(t_0) dτ
                  = P(t_0) W_c(t_0, t_1) P^T(t_0).

Thus the matrix W̃_c(t_0, t_1) is nonsingular, since the matrices W_c(t_0, t_1) and P(t_0) each have rank m.

The following important result on system decomposition then holds:

3.1.11 Theorem. When the linear control system Σ is time-invariant, if the controllability matrix C_Σ has rank m_1 < m then there exists a control system, algebraically equivalent to Σ, having the form

    [ ẋ^{(1)} ]   [ A_1  A_2 ] [ x^{(1)} ]   [ B_1 ]
    [ ẋ^{(2)} ] = [ 0    A_3 ] [ x^{(2)} ] + [ 0   ] u(t)

    y = [ C_1  C_2 ] x,

where x^{(1)} and x^{(2)} have orders m_1 and m − m_1, respectively, and (A_1, B_1) is c.c.

We shall postpone the proof of this until a later section (see the proof of Theorem 3.4.5), where an explicit formula for the transformation matrix will also be given.

Note: It is clear that the vector x^{(2)} is completely unaffected by the control u(·). Thus the state space has been divided into two parts, one being c.c. and the other uncontrollable.
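The decomposition can be carried out numerically with any basis adapted to the controllable subspace im C(A, B). The sketch below uses an orthogonal basis obtained from the SVD, which is not the explicit transformation of Theorem 3.4.5; the three-state pair is an illustrative choice.

```python
import numpy as np

def controllable_decomposition(A, B):
    """Orthogonal coordinates exhibiting the block form of Theorem 3.1.11."""
    m = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(m)])
    U, s, _ = np.linalg.svd(C)
    m1 = int(np.sum(s > 1e-10 * s[0]))   # numerical rank of C
    # The first m1 columns of U span im C(A, B); U^T A U is block triangular.
    return U.T @ A @ U, U.T @ B, m1

A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
B = np.array([[0.0], [1.0], [0.0]])
At, Bt, m1 = controllable_decomposition(A, B)
print(m1)               # 2
print(np.round(At, 6))  # lower-left (m - m1) x m1 block is ~0
print(np.round(Bt, 6))  # last m - m1 entries are ~0
```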

3.2 Observability

Closely linked to the idea of controllability is that of observability, which in general terms means that it is possible to determine the state of a system by measuring only the output.

3.2.1 Definition. The linear control system (with outputs) Σ described by

    ẋ = A(t)x + B(t)u(t)                                              (3.5)
    y = C(t)x

is said to be completely observable (c.o.) if for any t_0 and any initial state x(t_0) = x_0, there exists a finite time t_1 > t_0 such that knowledge of u(·) and y(·) for t ∈ [t_0, t_1] suffices to determine x_0 uniquely.

Note: There is in fact no loss of generality in assuming that u(·) is identically zero throughout the interval. Indeed, for any input u : [t_0, t_1] → R^l and initial state x_0, we have

    y(t) − C(t) ∫_{t_0}^t Φ(t, τ) B(τ) u(τ) dτ = C(t) Φ(t, t_0) x_0.

Defining

    ŷ(t) := y(t) − C(t) ∫_{t_0}^t Φ(t, τ) B(τ) u(τ) dτ,

we get

    ŷ(t) = C(t) Φ(t, t_0) x_0.

Thus a linear control system is c.o. if and only if knowledge of the output ŷ(·) with zero input on the interval [t_0, t_1] allows the initial state x_0 to be determined.

3.2.2 Example. Consider the linear control system described by

    ẋ_1 = a_1 x_1 + b_1 u(t)
    ẋ_2 = a_2 x_2 + b_2 u(t)
    y = x_1.

The first equation shows that x_1(·) (= y(·)) is completely determined by u(·) and x_1(t_0). Thus it is impossible to determine x_2(t_0) by measuring the output, so the system is not completely observable (c.o.).

3.2.3 Theorem. The linear control system Σ is c.o. if and only if the symmetric matrix, called the observability Gramian,

    W_o(t_0, t_1) := ∫_{t_0}^{t_1} Φ^T(τ, t_0) C^T(τ) C(τ) Φ(τ, t_0) dτ ∈ R^{m×m}        (3.6)

is nonsingular.

Proof: (⇐) Sufficiency. Assuming u(t) ≡ 0 for t ∈ [t_0, t_1], we have

    y(t) = C(t) Φ(t, t_0) x_0.

Multiplying this relation on the left by Φ^T(t, t_0) C^T(t) and integrating produces

    ∫_{t_0}^{t_1} Φ^T(τ, t_0) C^T(τ) y(τ) dτ = W_o(t_0, t_1) x_0,

so that if W_o(t_0, t_1) is nonsingular, the initial state is

    x_0 = W_o(t_0, t_1)^{−1} ∫_{t_0}^{t_1} Φ^T(τ, t_0) C^T(τ) y(τ) dτ,

so Σ is c.o.

(⇒) Necessity. We now assume that Σ is c.o. and prove that W = W_o(t_0, t_1) is nonsingular. First, if α ∈ R^{m×1} is an arbitrary column vector, then

    α^T W α = ∫_{t_0}^{t_1} ( C(τ) Φ(τ, t_0) α )^T C(τ) Φ(τ, t_0) α dτ ≥ 0,

so W_o(t_0, t_1) is positive semi-definite. Next, suppose there exists an ᾱ ≠ 0 such that ᾱ^T W ᾱ = 0. It then follows that

    C(τ) Φ(τ, t_0) ᾱ ≡ 0,   t_0 ≤ τ ≤ t_1.

This implies that when x_0 = ᾱ the output is identically zero throughout the time interval, so that x_0 cannot be determined in this case from knowledge of y(·). This contradicts the assumption that Σ is c.o.; hence W_o(t_0, t_1) is positive definite, and therefore nonsingular.

Note: Since the observability of Σ is independent of B, we may refer to the observability of the pair (A, C).
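The sufficiency half of the proof is a recipe for reconstructing x_0 from output measurements. A sketch for a time-invariant pair (the matrices, horizon and quadrature are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

def trapz(vals, ts):
    """Trapezoidal rule along axis 0, assuming a uniform grid."""
    dt = ts[1] - ts[0]
    return (vals[0] + vals[-1]) * dt / 2 + vals[1:-1].sum(axis=0) * dt

A = np.array([[1.0, 1.0], [1.0, -1.0]])
C = np.array([[1.0, 0.0]])
x0_true = np.array([2.0, -1.0])

# Zero-input output y(t) = C exp(tA) x0 on [0, 1], playing the measured data:
ts = np.linspace(0.0, 1.0, 4000)
y = np.array([C @ expm(t * A) @ x0_true for t in ts])

Wo = trapz(np.array([expm(t * A).T @ C.T @ C @ expm(t * A) for t in ts]), ts)
rhs = trapz(np.array([expm(t * A).T @ C.T @ y[i]
                      for i, t in enumerate(ts)]), ts)
print(np.round(np.linalg.solve(Wo, rhs), 4))   # ~ [2. -1.] = x_0
```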

Duality

3.2.4 Theorem. The linear control system (with outputs) Σ defined by

    ẋ = A(t)x + B(t)u(t)
    y = C(t)x

is c.c. if and only if the dual system Σ* defined by

    ẋ = −A^T(t)x + C^T(t)u(t)
    y = B^T(t)x

is c.o.; and conversely.

Proof: We can see that if Φ(t, t_0) is the state transition matrix for the system Σ, then Φ^T(t_0, t) is the state transition matrix for the dual system Σ*. Indeed, differentiate Φ(t, t_0) Φ(t_0, t) = I_m to get

    0 = (d/dt) [ Φ(t, t_0) Φ(t_0, t) ] = Φ̇(t, t_0) Φ(t_0, t) + Φ(t, t_0) Φ̇(t_0, t)
      = A(t) Φ(t, t_0) Φ(t_0, t) + Φ(t, t_0) Φ̇(t_0, t)
      = A(t) + Φ(t, t_0) Φ̇(t_0, t).

This implies Φ̇(t_0, t) = −Φ(t_0, t) A(t), or, taking transposes,

    (d/dt) Φ^T(t_0, t) = −A^T(t) Φ^T(t_0, t).

Furthermore, the controllability Gramian

    W_c^Σ(t_0, t_1) = ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) B^T(τ) Φ^T(t_0, τ) dτ

(associated with Σ) is identical to the observability Gramian W_o^{Σ*}(t_0, t_1) (associated with Σ*). Conversely, the observability Gramian

    W_o^Σ(t_0, t_1) = ∫_{t_0}^{t_1} Φ^T(τ, t_0) C^T(τ) C(τ) Φ(τ, t_0) dτ

(associated with Σ) is identical to the controllability Gramian W_c^{Σ*}(t_0, t_1) (associated with Σ*).

Note: This duality theorem is extremely useful, since it enables us to deduce immediately from a controllability result the corresponding one on observability (and conversely). For example, to obtain the observability criterion for the time-invariant case, we simply apply Theorem 3.1.3 to Σ* to obtain the following result.

3.2.5 Theorem. The linear time-invariant control system

    ẋ = Ax + Bu(t)                                                    (3.7)
    y = Cx

(or the pair (A, C)) is c.o. if and only if the (Kalman) observability matrix

    O = O(A, C) := [ C ; CA ; CA² ; … ; CA^{m−1} ] ∈ R^{mn×m}

has rank m.

3.2.6 Example. Consider the linear control system Σ described by

    ẋ = [ 1  1 ; 1  −1 ] x + [ 1 ; 1 ] u(t)
    y = x_1.

The (Kalman) observability matrix is

    O = O_Σ = [ C ; CA ] = [ 1  0 ; 1  1 ],

which has rank 2. Thus the control system Σ is c.o.
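The observability rank test, and the duality behind it, are equally mechanical (matrices of Example 3.2.6 as given above):

```python
import numpy as np

def obsv(A, C):
    """Kalman observability matrix [C; CA; ...; CA^(m-1)]."""
    rows = [C]
    for _ in range(A.shape[0] - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[1.0, 1.0], [1.0, -1.0]])
C = np.array([[1.0, 0.0]])

O = obsv(A, C)
print(O)                                 # [[1. 0.] [1. 1.]]
print(np.linalg.matrix_rank(O))          # 2, so (A, C) is c.o.

# Duality (Theorem 3.2.4): O(A, C) is the controllability matrix of
# the pair (A^T, C^T), transposed.
Cd = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T
                for k in range(A.shape[0])])
print(np.allclose(O, Cd.T))              # True
```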

In the single-output case (i.e. n = 1), if u(·) ≡ 0 and y(·) is known in the form

    y(t) = γ_1 e^{λ_1 t} + γ_2 e^{λ_2 t} + ⋯ + γ_m e^{λ_m t},

assuming that all the eigenvalues λ_i of A are distinct, then x_0 can be obtained more easily than by using

    x_0 = W_o(t_0, t_1)^{−1} ∫_{t_0}^{t_1} Φ^T(τ, t_0) C^T(τ) y(τ) dτ.

For suppose that t_0 = 0 and consider the solution of ẋ = Ax in the spectral form, namely

    x(t) = (v_1 x(0)) e^{λ_1 t} w_1 + (v_2 x(0)) e^{λ_2 t} w_2 + ⋯ + (v_m x(0)) e^{λ_m t} w_m,

where the w_i are right eigenvectors and the v_i are left (row) eigenvectors of A. We have

    y(t) = (v_1 x(0))(c w_1) e^{λ_1 t} + (v_2 x(0))(c w_2) e^{λ_2 t} + ⋯ + (v_m x(0))(c w_m) e^{λ_m t},

and equating coefficients of the exponential terms gives

    v_i x(0) = γ_i / (c w_i)   (i = 1, 2, …, m).

This represents m linear equations for the m unknown components of x(0) in terms of the γ_i, v_i and w_i (i = 1, 2, …, m).

Again, in the single-output case, C reduces to a row matrix c, and Theorem 3.2.5 can be restated as: a linear system (with outputs) in the form

    ẋ = Ax,   y = cx

can be transformed into the canonical form

    v̇ = Ev,   y = fv

if and only if it is c.o.
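For distinct eigenvalues this recovery is a single linear solve. A sketch (the matrices, the initial state, and the way the γ_i are generated are illustrative; in practice the γ_i would be fitted from measured output data):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
c = np.array([[1.0, 0.0]])
x0_true = np.array([1.0, 1.0])

lam, W = np.linalg.eig(A)                  # columns w_i: right eigenvectors
V = np.linalg.inv(W)                       # rows v_i: left eigenvectors

# Coefficients gamma_i of y(t) = sum_i gamma_i e^{lambda_i t}:
gamma = np.array([(V[i] @ x0_true) * (c @ W[:, i])[0] for i in range(2)])

# Recover x(0) from v_i x(0) = gamma_i / (c w_i):
rhs = np.array([gamma[i] / (c @ W[:, i])[0] for i in range(2)])
print(np.round(np.linalg.solve(V, rhs), 6))   # ~ [1. 1.] = x(0)
```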

Decomposition of control systems

By duality, the result corresponding to Theorem 3.1.11 is:

3.2.7 Theorem. When the linear control system Σ is time-invariant, if the observability matrix O_Σ has rank m_1 < m then there exists a control system, algebraically equivalent to Σ, having the form

    [ ẋ^{(1)} ]   [ A_1  0  ] [ x^{(1)} ]   [ B_1 ]
    [ ẋ^{(2)} ] = [ A_2  A_3 ] [ x^{(2)} ] + [ B_2 ] u(t)

    y = C_1 x^{(1)},

where x^{(1)} and x^{(2)} have orders m_1 and m − m_1, respectively, and (A_1, C_1) is c.o.

We close this section with a decomposition result which effectively combines Theorems 3.1.11 and 3.2.7 to show that a linear time-invariant control system can be split up into four mutually exclusive parts, respectively

    c.c. but unobservable
    c.c. and c.o.
    uncontrollable and unobservable
    c.o. but uncontrollable.

3.2.8 Theorem. When the linear control system Σ is time-invariant, it is algebraically equivalent to

    [ ẋ^{(1)} ]   [ A_{11}  A_{12}  A_{13}  A_{14} ] [ x^{(1)} ]   [ B_1 ]
    [ ẋ^{(2)} ] = [ 0       A_{22}  0       A_{24} ] [ x^{(2)} ] + [ B_2 ] u(t)
    [ ẋ^{(3)} ]   [ 0       0       A_{33}  A_{34} ] [ x^{(3)} ]   [ 0   ]
    [ ẋ^{(4)} ]   [ 0       0       0       A_{44} ] [ x^{(4)} ]   [ 0   ]

    y = C_2 x^{(2)} + C_4 x^{(4)},

where the subscripts refer to the stated classification.

3.3 Linear Feedback

Consider a linear control system Σ defined by

    ẋ = Ax + Bu(t)                                                    (3.8)

where A ∈ R^{m×m} and B ∈ R^{m×l}. Suppose that we apply a (linear) feedback, that is, each control variable is a linear combination of the state variables:

    u(t) = Kx(t),

where K ∈ R^{l×m} is a feedback matrix. The resulting closed loop system is

    ẋ = (A + BK)x.                                                    (3.9)

The pole-shifting theorem

We ask whether it is possible to exert some influence on the behaviour of the closed loop system and, if so, to what extent. A somewhat surprising result, called the Spectrum Assignment Theorem, says in essence that for almost any linear control system Σ it is possible to obtain arbitrary eigenvalues for the matrix A + BK (and hence arbitrary asymptotic behaviour) using suitable feedback laws (matrices) K, subject only to the obvious constraint that complex eigenvalues must appear in conjugate pairs. "Almost any" means that this is true for (completely) controllable systems.

Note: This theorem is most often referred to as the Pole-Shifting Theorem, a terminology due to the fact that the eigenvalues of A + BK are also the poles of the (complex) function

    z ↦ 1 / det(zI_m − A − BK).

This function appears often in classical control design.

The Pole-Shifting Theorem is central to linear control systems theory and is itself the starting point for more interesting analysis. Once we know that arbitrary sets of eigenvalues can be assigned, it becomes of interest to compare the performance

of different such sets. Also, one may ask what happens when certain entries of K are restricted to vanish, which corresponds to constraints on what can be implemented.

3.3.1 Theorem. Let Λ = {θ_1, θ_2, …, θ_m} be an arbitrary set of m complex numbers (appearing in conjugate pairs). If the linear control system Σ is c.c., then there exists a matrix K ∈ R^{l×m} such that the eigenvalues of A + BK are the set Λ.

Proof (when l = 1): Since ẋ = Ax + Bu(t) is c.c., there exists a (linear) transformation w = Tx such that the given system is transformed into

    ẇ = Cw + du(t),

where

    C = [ 0     1         0         …  0  ]
        [ 0     0         1         …  0  ]
        [ ⋮                             ⋮  ]
        [ 0     0         0         …  1  ]
        [ −k_m  −k_{m−1}  −k_{m−2}  …  −k_1 ],

    d = [ 0 ; 0 ; … ; 0 ; 1 ].

The feedback control u = kw, where

    k := [ k̂_m  k̂_{m−1}  …  k̂_1 ],

produces the closed loop matrix C + dk, which has the same companion form as C but with last row [ −γ_m  −γ_{m−1}  …  −γ_1 ], where

    k̂_i = k_i − γ_i,   i = 1, 2, …, m.                                (3.10)

Since C + dk = T(A + bkT)T^{−1}, it follows that the desired matrix is

    K = kT,

the entries k̂_i (i = 1, 2, …, m) being given by (3.10). In this equation the k_i (i = 1, 2, …, m) are the coefficients in the characteristic polynomial of A; that is,

    det(λI_m − A) = λ^m + k_1 λ^{m−1} + ⋯ + k_m,

and the γ_i (i = 1, 2, …, m) are obtained by equating coefficients of λ in

    λ^m + γ_1 λ^{m−1} + ⋯ + γ_m ≡ (λ − θ_1)(λ − θ_2) ⋯ (λ − θ_m).

Note: The solution of (the closed loop system) ẋ = (A + BK)x depends on the eigenvalues of A + BK, so provided the control system Σ is c.c., the theorem tells us that using linear feedback it is possible to exert a considerable influence on the time behaviour of the closed loop system by suitably choosing the numbers θ_1, θ_2, …, θ_m.

3.3.2 Corollary. If the linear time-invariant control system

    ẋ = Ax + Bu(t),   y = cx

is c.o., then there exists a matrix L ∈ R^{m×1} such that the eigenvalues of A + Lc are the set Λ.

This result can be deduced from Theorem 3.3.1 using the Duality Theorem.

3.3.3 Example. Consider a completely controllable single-input system

    ẋ = Ax + bu(t)

whose characteristic equation is

    char_A(λ) = λ² − 3λ + 14 = 0,

with roots (3 ± i√47)/2. Suppose we wish the eigenvalues of the closed loop system to be −1 and −2, so that the desired characteristic polynomial is

    λ² + 3λ + 2.

We have

    k̂_1 = k_1 − γ_1 = −3 − 3 = −6,
    k̂_2 = k_2 − γ_2 = 14 − 2 = 12.

Hence

    K = kT = [ 12  −6 ] T,

where T is the transformation taking the system to the canonical form ẇ = Cw + du(t), and it is easy to verify that A + bK does have the desired eigenvalues −1 and −2.
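Pole assignment is also available in SciPy. Note the sign convention: place_poles returns a gain K for the feedback u = −Kx, i.e. it assigns the spectrum of A − BK, so its gain is the negative of the K above. The companion-form matrices below are an illustrative stand-in with the same characteristic polynomial as in Example 3.3.3.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-14.0, 3.0]])   # companion form of s^2 - 3s + 14
b = np.array([[0.0], [1.0]])

K = place_poles(A, b, [-1.0, -2.0]).gain_matrix
print(K)                                   # [[-12. 6.]], i.e. -(12, -6)
print(np.round(np.linalg.eigvals(A - b @ K), 6))   # eigenvalues -1 and -2
```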

3.3.4 Lemma. If the linear control system Σ defined by ẋ = Ax + Bu(t) is c.c. and B = [ b_1  b_2  …  b_l ] with b_i ≠ 0 (i = 1, 2, …, l), then there exist matrices K_i ∈ R^{l×m} (i = 1, 2, …, l) such that each of the systems

    ẋ = (A + BK_i)x + b_i u(t)

is c.c.

Proof: For convenience, consider the case i = 1. Since the matrix

    C = [ B  AB  A²B  …  A^{m−1}B ]

has full rank, it is possible to select from its columns at least one set of m vectors which are linearly independent. Define an m × m matrix M by choosing such a set as follows:

    M = [ b_1  Ab_1  …  A^{r_1−1}b_1  b_2  Ab_2  …  A^{r_2−1}b_2  … ],

where r_i is the smallest integer such that A^{r_i} b_i is linearly dependent on all the preceding vectors, the process continuing until m columns of C are taken. Define an l × m matrix N having its r_1-th column equal to e_2 (the second column of I_l), its (r_1 + r_2)-th column equal to e_3, its (r_1 + r_2 + r_3)-th column equal to e_4, and so on, all its other columns being zero. It is then not difficult to show that the desired matrix in the statement of the Lemma is K_1 = NM^{−1}.

Proof of Theorem 3.3.1 when l > 1: Let K_1 be the matrix in the proof of Lemma 3.3.4, and define an l × m matrix K_0 having as its first row some vector k, and all its other rows zero. Then the control u = (K_1 + K_0)x leads to the closed loop system

    ẋ = (A + BK_1)x + BK_0 x = (A + BK_1)x + b_1 kx,

where b_1 is the first column of B. Since the system ẋ = (A + BK_1)x + b_1 u is c.c., it now follows from the proof of the theorem when l = 1 that k can be chosen so that the eigenvalues of A + BK_1 + b_1 k are the set Λ, so the desired feedback control is indeed u = (K_1 + K_0)x.
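The same SciPy routine handles the multi-input case directly (internally it does not use the Lemma's reduction to a single column). An illustrative three-state, two-input sketch:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 0.5]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])           # two inputs; the pair (A, B) is c.c.

K = place_poles(A, B, [-1.0, -2.0, -3.0]).gain_matrix
print(np.round(np.linalg.eigvals(A - B @ K), 6))   # -1, -2, -3 (u = -Kx)
```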

If y = Cx is the output vector, then again by duality we can immediately deduce:

3.3.5 Corollary. If the linear control system

    ẋ = Ax + Bu(t),   y = Cx

is c.o., then there exists a matrix L ∈ R^{m×n} such that the eigenvalues of A + LC are the set Λ.

Algorithm for constructing a feedback matrix

The following method gives a practical way of constructing the feedback matrix K. Let all the eigenvalues λ_1, λ_2, …, λ_m of A be distinct, and let

    W = [ w_1  w_2  …  w_m ],

where w_i is an eigenvector corresponding to the eigenvalue λ_i. With linear feedback u = −Kx, suppose that the eigenvalues of A and of A − BK are ordered so that those of A − BK are to be µ_1, µ_2, …, µ_r, λ_{r+1}, …, λ_m (r ≤ m). Then, provided the linear system Σ is c.c., a suitable matrix is

    K = f g W̃,

where W̃ consists of the first r rows of W^{−1},

    g = [ α_1/β_1   α_2/β_2   …   α_r/β_r ],

with

    α_i = ∏_{j=1}^{r} (λ_i − µ_j) / ∏_{j=1, j≠i}^{r} (λ_i − λ_j)   if r > 1,
    α_1 = λ_1 − µ_1   if r = 1,

    [ β_1  β_2  …  β_r ]^T = W̃ B f,

f being any column l-vector such that all β_i ≠ 0.

3.3.6 Example. Consider the linear system

    ẋ = [ 0  1 ; −2  −3 ] x + [ 2 ; 1 ] u(t).

We have λ_1 = −1 and λ_2 = −2, with

    W = [ 1  1 ; −1  −2 ],   W^{−1} = [ 2  1 ; −1  −1 ].

Suppose that µ_1 = −3 and µ_2 = −4, so that r = 2 and W̃ = W^{−1}. We have

    α_1 = (λ_1 − µ_1)(λ_1 − µ_2)/(λ_1 − λ_2) = 6,   α_2 = (λ_2 − µ_1)(λ_2 − µ_2)/(λ_2 − λ_1) = −2,

and β = W̃ B f gives

    [ β_1 ; β_2 ] = [ 5f_1 ; −3f_1 ].

Hence we can take f_1 = 1, which results in

    g = [ 6/5   2/3 ].

Finally, the desired feedback matrix is

    K = f g W̃ = [ 6/5  2/3 ] [ 2  1 ; −1  −1 ] = [ 26/15   8/15 ],

and one checks that A − bK has eigenvalues −3 and −4.

3.3.7 Example. Consider now the linear control system

    ẋ = [ 0  1 ; −2  −3 ] x + [ 2  1 ; 1  0 ] u(t).

We now obtain

    [ β_1 ; β_2 ] = [ 5f_1 + 2f_2 ; −3f_1 − f_2 ],

so that f_1 = 1, f_2 = 0 gives

    K = [ 26/15   8/15 ; 0   0 ].

However, f_1 = 1, f_2 = −1 gives β_1 = 3, β_2 = −2, so that g = [ 2  1 ], and from K = f g W̃ we now have

    K = [ 1 ; −1 ] [ 2  1 ] W̃ = [ 3  1 ; −3  −1 ].
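The algorithm translates directly into code. The sketch below implements K = f g W̃ for distinct eigenvalues (ordering them by decreasing real part, an arbitrary convention) and reproduces the last gain matrix of Example 3.3.7:

```python
import numpy as np

def dyadic_shift(A, B, mu, f):
    """Feedback K = f g W~ assigning mu in place of the first r eigenvalues.

    Assumes the eigenvalues of A are distinct and every beta_i is nonzero;
    the closed loop matrix is A - BK (feedback u = -Kx).
    """
    lam, W = np.linalg.eig(A)
    order = np.argsort(-lam.real)          # lambda_1, lambda_2, ...
    lam, W = lam[order], W[:, order]
    r = len(mu)
    Wt = np.linalg.inv(W)[:r]              # first r rows of W^{-1}
    beta = Wt @ B @ f
    alpha = np.array([np.prod(lam[i] - mu) /
                      np.prod(lam[i] - np.delete(lam[:r], i))
                      for i in range(r)])
    return np.outer(f, alpha / beta) @ Wt

# Example 3.3.7 with f = (1, -1)^T:
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[2.0, 1.0], [1.0, 0.0]])
K = dyadic_shift(A, B, np.array([-3.0, -4.0]), np.array([1.0, -1.0]))
print(np.round(K, 6))                              # [[ 3. 1.] [-3. -1.]]
print(np.round(np.linalg.eigvals(A - B @ K), 6))   # -3 and -4
```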

3.4 Realization Theory

The realization problem may be viewed as guessing the equations of motion (i.e. state equations) of a control system from its input/output behaviour or, if one prefers, setting up a physical model which explains the experimental data.

Consider the linear control system (with outputs) Σ described by

    ẋ = Ax + Bu(t)                                                    (3.11)
    y = Cx,

where A ∈ R^{m×m}, B ∈ R^{m×l} and C ∈ R^{n×m}. Taking Laplace transforms of (3.11) and assuming zero initial conditions gives

    s x(s) = A x(s) + B u(s),

and after rearrangement

    x(s) = (sI_m − A)^{−1} B u(s).

The Laplace transform of the output is y(s) = C x(s), and thus

    y(s) = C (sI_m − A)^{−1} B u(s) = G(s) u(s),

where the n × l matrix

    G(s) := C (sI_m − A)^{−1} B                                       (3.12)

is called the transfer function matrix, since it relates the Laplace transform of the output vector to that of the input vector.

Exercise 41. Evaluate (the Laplace transform of the exponential)

    L[e^{at}](s) := ∫_0^∞ e^{−st} e^{at} dt,

and then show that (for A ∈ R^{m×m})

    L[exp(tA)](s) = (sI_m − A)^{−1}.

Using the relation

    (sI_m − A)^{−1} = ( s^{m−1} I_m + s^{m−2} B_1 + s^{m−3} B_2 + ⋯ + B_{m−1} ) / char_A(s),       (3.13)

where the k_i and B_i are determined successively by

    B_1 = A + k_1 I_m,   B_i = A B_{i−1} + k_i I_m   (i = 2, 3, …, m − 1),
    k_1 = −tr(A),   k_i = −(1/i) tr(A B_{i−1})   (i = 2, 3, …, m),

the expression (3.12) becomes

    G(s) = ( s^{m−1} G_0 + s^{m−2} G_1 + ⋯ + G_{m−1} ) / χ(s) = H(s) / χ(s),

where χ(s) = char_A(s) and G_k = [ g^{(k)}_{ij} ] ∈ R^{n×l}, k = 0, 1, 2, …, m − 1. The n × l matrix H(s) is called a polynomial matrix, since each of its entries is itself a polynomial; that is,

    h_{ij} = s^{m−1} g^{(0)}_{ij} + s^{m−2} g^{(1)}_{ij} + ⋯ + g^{(m−1)}_{ij}.

Note: The formulas above, used mainly for theoretical rather than computational purposes, constitute Leverrier's algorithm.

3.4.1 Example. Consider the electrically-heated oven described in Section 1.3, and suppose that the values of the constants are such that the state equations are

    ẋ = [ −2  1 ; 1  −1 ] x + [ 1 ; 0 ] u(t).

Suppose that the output is provided by a thermocouple in the jacket measuring the jacket (excess) temperature, i.e.

    y = [ 1  0 ] x.

The expression (3.12) gives

    G(s) = [ 1  0 ] [ s + 2  −1 ; −1  s + 1 ]^{−1} [ 1 ; 0 ] = (s + 1) / (s² + 3s + 1),

using

    (sI_2 − A)^{−1} = adj(sI_2 − A) / char_A(s).

Realizations

In practice it often happens that the mathematical description of a (linear time-invariant) control system in terms of differential equations is not known, but G(s) can be determined from experimental measurements or other considerations. It is then useful to find a system in our usual state space form to which G(·) corresponds.
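Although described above as a theoretical device, Leverrier's algorithm runs fine numerically. The sketch below computes the k_i and B_i for the oven model of Example 3.4.1 and checks G(s) from (3.13) at a sample point:

```python
import numpy as np

def leverrier(A):
    """k_1..k_m of char_A(s) and B_1..B_{m-1} of (3.13)."""
    m = A.shape[0]
    ks, Bs, Bprev = [], [], np.eye(m)
    for i in range(1, m + 1):
        M = A @ Bprev
        ks.append(-np.trace(M) / i)
        Bprev = M + ks[-1] * np.eye(m)
        Bs.append(Bprev)
    return np.array(ks), Bs[:-1]     # final B_m = 0 (a Cayley-Hamilton check)

A = np.array([[-2.0, 1.0], [1.0, -1.0]])
ks, Bs = leverrier(A)
print(ks)                            # [3. 1.]: char_A(s) = s^2 + 3s + 1
print(np.allclose(np.poly(A)[1:], ks))        # True, agrees with numpy

b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 0.0]])
s = 2.0
num = (c @ (s * np.eye(2) + Bs[0]) @ b)[0, 0]  # H(s) = s I_m + B_1 for m = 2
print(num / (s**2 + 3*s + 1), (s + 1) / (s**2 + 3*s + 1))   # both 3/11
```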

In formal terms: given an n × l matrix G(s) whose elements are rational functions of s, we wish to find (constant) matrices A, B, C having dimensions m × m, m × l and n × m, respectively, such that

    G(s) = C (sI_m − A)^{−1} B,

and the system equations will then be

    ẋ = Ax + Bu(t),   y = Cx.

The triple (A, B, C) is termed a realization of G(·) of order m; it is not, of course, unique. Amongst all such realizations some will include matrices A having least dimensions; these are called minimal realizations, since the corresponding systems involve the smallest possible number of state variables.

Note: Since each element in

    (sI_m − A)^{−1} = adj(sI_m − A) / det(sI_m − A)

has the degree of its numerator less than that of its denominator, it follows that

    lim_{s→∞} C (sI_m − A)^{−1} B = 0,

and we shall assume that any given G(s) also has this property, G(·) then being termed strictly proper.

3.4.2 Example. Consider the scalar transfer function

    g(s) = (2s + 7) / (s² − 5s + 6).

It is easy to verify that one realization of g(·) is

    A = [ 0  1 ; −6  5 ],   b = [ 0 ; 1 ],   c = [ 7  2 ].

It is also easy to verify that a quite different triple is

    A = [ 2  0 ; 0  3 ],   b = [ 1 ; 1 ],   c = [ −11  13 ].
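A quick numerical check that both triples realize the same g(s): compare C(sI − A)^{−1}B with g at a few sample points (chosen away from the poles s = 2 and s = 3):

```python
import numpy as np

def G(A, b, c, s):
    """Scalar transfer function c (sI - A)^{-1} b."""
    return (c @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ b)[0, 0]

A1 = np.array([[0.0, 1.0], [-6.0, 5.0]])
b1 = np.array([[0.0], [1.0]])
c1 = np.array([[7.0, 2.0]])

A2 = np.array([[2.0, 0.0], [0.0, 3.0]])
b2 = np.array([[1.0], [1.0]])
c2 = np.array([[-11.0, 13.0]])

for s in [1.0j, 1.0, 10.0]:
    g = (2 * s + 7) / (s**2 - 5 * s + 6)
    print(np.isclose(G(A1, b1, c1, s), g), np.isclose(G(A2, b2, c2, s), g))
```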


More information

The determinant of a skew-symmetric matrix is a square. This can be seen in small cases by direct calculation: 0 a. 12 a. a 13 a 24 a 14 a 23 a 14

The determinant of a skew-symmetric matrix is a square. This can be seen in small cases by direct calculation: 0 a. 12 a. a 13 a 24 a 14 a 23 a 14 4 Symplectic groups In this and the next two sections, we begin the study of the groups preserving reflexive sesquilinear forms or quadratic forms. We begin with the symplectic groups, associated with

More information

More than you wanted to know about quadratic forms

More than you wanted to know about quadratic forms CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit

More information

Facts About Eigenvalues

Facts About Eigenvalues Facts About Eigenvalues By Dr David Butler Definitions Suppose A is an n n matrix An eigenvalue of A is a number λ such that Av = λv for some nonzero vector v An eigenvector of A is a nonzero vector v

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

Matrix Algebra and Applications

Matrix Algebra and Applications Matrix Algebra and Applications Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Matrix Algebra and Applications 1 / 49 EC2040 Topic 2 - Matrices and Matrix Algebra Reading 1 Chapters

More information

it is easy to see that α = a

it is easy to see that α = a 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

SOLVING POLYNOMIAL EQUATIONS

SOLVING POLYNOMIAL EQUATIONS C SOLVING POLYNOMIAL EQUATIONS We will assume in this appendix that you know how to divide polynomials using long division and synthetic division. If you need to review those techniques, refer to an algebra

More information

Presentation 3: Eigenvalues and Eigenvectors of a Matrix

Presentation 3: Eigenvalues and Eigenvectors of a Matrix Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues

More information