Inverse Problems for Selfadjoint Matrix Polynomials


1 Peter Lancaster, University of Calgary, Canada. Castro Urdiales, SPAIN.

2 Preliminaries. Given A_0, A_1, ..., A_l ∈ C^{n×n} (or possibly in R^{n×n}), define

L(λ) := Σ_{j=0}^{l} A_j λ^j,  λ ∈ C,  det A_l ≠ 0.

Spectrum of L: σ(L) := {λ ∈ C : det L(λ) = 0} (the eigenvalues). Eigenvectors: if λ_0 ∈ σ(L), the vectors x ∈ C^n with x ≠ 0 and L(λ_0)x = 0 are eigenvectors. Direct problems: given L(λ), find σ(L), eigenvectors, etc. Inverse problems: given a set of candidates for eigenvalues and eigenvectors (possibly incomplete), find system(s) L(λ) consistent with this spectral data.
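
In practice σ(L) is computed by linearization: the ln×ln companion matrix has exactly the spectrum of L. A minimal numerical sketch (the helper name `polyeig` is my own, not from the talk):

```python
import numpy as np

def polyeig(coeffs):
    """Eigenvalues of L(lam) = sum_j A_j lam**j, with coeffs = [A_0, ..., A_l].

    Assumes A_l is nonsingular; returns the l*n eigenvalues of the
    companion (linearized) matrix, which coincide with sigma(L).
    """
    A = [np.asarray(c, dtype=complex) for c in coeffs]
    l, n = len(A) - 1, A[0].shape[0]
    Alinv = np.linalg.inv(A[l])
    C = np.zeros((l * n, l * n), dtype=complex)
    C[: (l - 1) * n, n:] = np.eye((l - 1) * n)        # shift structure
    for j in range(l):                                # last block row
        C[(l - 1) * n:, j * n:(j + 1) * n] = -Alinv @ A[j]
    return np.linalg.eigvals(C)

# Scalar sanity check: lam**2 - 3*lam + 2 has roots 1 and 2.
ev = np.sort(polyeig([np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]).real)
```

For each computed λ_0 one then has det L(λ_0) ≈ 0, the defining property above.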

3 The ultimate inverse problem. One can argue that the ultimate inverse problem is the expression of the coefficients A_j in terms of spectral data; i.e., when the spectral data are properly (completely) defined, they should determine the coefficients of the system. Hence our interest in canonical forms.

4 Important special cases.
1. l = 1.
2. l = 2: the quadratic eigenvalue problem.
3. A_j ∈ R^{n×n} for all j.
4. A_j^* = A_j ∈ C^{n×n} for all j, A_l > 0? (et al?) An equivalent first-degree problem is selfadjoint in a natural inner product on C^{ln}. We say that L(λ) is selfadjoint.
5. A_j^T = A_j ∈ R^{n×n} for all j, A_l > 0? (et al?) Selfadjoint ... on R^{ln}.
Note that the role of the eigenvalues of L(λ) as the singularities of L(λ)^{-1} is of great importance.

6 Sign characteristics (from GLR, 2005). Theorem: Let L(λ) be an n×n selfadjoint matrix polynomial with nonsingular leading coefficient, and let μ_1(λ), ..., μ_n(λ) be the real analytic functions of real λ such that det(μ_j(λ)I_n − L(λ)) = 0 for j = 1, ..., n. Let λ_1 < ... < λ_r be the distinct real eigenvalues of L(λ) (zeros of det L(λ)). For j = 1, ..., n and every i = 1, ..., r, write

μ_j(λ) = (λ − λ_i)^{m_ij} ν_ij(λ),  where ν_ij(λ_i) ≠ 0 is real.

Then the nonzero numbers among m_i1, ..., m_in are the partial multiplicities of L(λ) associated with λ_i, and the sign of ν_ij(λ_i) (for m_ij ≠ 0) is the sign characteristic associated with the elementary divisor (λ − λ_i)^{m_ij} of L(λ).

7 Sign characteristics and inverse problems. Spectral properties for selfadjoint matrix functions include sign characteristics, as above, so they have a role to play in the formulation of inverse problems. For example, suppose that the leading coefficient A_l is NOT positive definite, and consider the behaviour of the eigenfunctions μ_j(λ) as λ → ∞. Theorem: Let (π, n − π, 0) be the inertia of L_l and assume that there exists at least one real eigenvalue of L(λ). Let λ_max be the largest real eigenvalue of L(λ). Then there are π indices {i_1, ..., i_π} in {1, 2, ..., n} such that, for all λ > λ_max, μ_j(λ) > 0 if j ∈ {i_1, ..., i_π} and μ_j(λ) < 0 if j ∉ {i_1, ..., i_π}.
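
This behaviour is easy to observe numerically on a small made-up example (the matrices below are my own, chosen so that L_2 has inertia (1, 1, 0), i.e. π = 1):

```python
import numpy as np

# Hypothetical symmetric quadratic with indefinite leading coefficient.
L2 = np.diag([1.0, -1.0])      # inertia (1, 1, 0): pi = 1
L1 = np.zeros((2, 2))
L0 = np.diag([-4.0, 1.0])

def mu(lam):
    # The eigencurves mu_j(lam): eigenvalues of the symmetric matrix L(lam).
    return np.linalg.eigvalsh(L2 * lam**2 + L1 * lam + L0)

# det L(lam) = (lam**2 - 4)(1 - lam**2): real eigenvalues -2, -1, 1, 2,
# so lam_max = 2.  Beyond it, exactly pi = 1 of the curves is positive.
signs = [int(np.sum(mu(lam) > 0)) for lam in (2.5, 3.0, 10.0)]
```

For every λ > λ_max the count of positive eigencurves equals π = 1, as the theorem predicts.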

9 Quadratics using divisors (with Tisseur and then Chorianopoulos). Consider quadratic functions with Hermitian coefficients:

L(λ) = L_2λ^2 + L_1λ + L_0,  det L_2 ≠ 0.

We would like to express L(λ) in factored form,

L(λ) = (Iλ − S) L_2 (Iλ − A).

More precisely, given nonsingular Hermitian L_2 and the right divisor Iλ − A, describe the class of matrices S for which L(λ) is selfadjoint. First define

T_H = {A^*L_2 + Z_1 : A^*Z_1 − Z_1A = 0 and Z_1^* = Z_1}.

Theorem: Given L_2 nonsingular and a right divisor Iλ − A, the function L(λ) is Hermitian when λ ∈ R if and only if S = T L_2^{-1} and T ∈ T_H.
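
A small numerical check of this parametrization, under the reconstruction T = A^*L_2 + Z_1 with Z_1 Hermitian and A^*Z_1 = Z_1A (taking A real diagonal and Z_1 diagonal makes the commutation condition trivial; the data are my own):

```python
import numpy as np

L2 = np.array([[2.0, 1 + 1j], [1 - 1j, 3.0]])   # Hermitian, nonsingular
A = np.diag([1.0, 3.0])                         # right divisor I*lam - A
Z1 = np.diag([0.7, -2.0])                       # Hermitian, A* Z1 = Z1 A

T = A.conj().T @ L2 + Z1                        # an element of T_H
S = T @ np.linalg.inv(L2)

# Expand L(lam) = (I*lam - S) L2 (I*lam - A):
L1 = -(S @ L2 + L2 @ A)
L0 = S @ L2 @ A
herm = [bool(np.allclose(B, B.conj().T)) for B in (L2, L1, L0)]
```

All three coefficients come out Hermitian, so L(λ) is Hermitian-valued on the real axis, as the theorem requires.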

10 Example. Nonsingular (diagonal) leading coefficient L_2 and right divisor Iλ − A, where A = diag[1, 3, 4, 2 + i, 5 − 3i]. The calculated left divisor S has spectrum {..., 2 − i, 5 + 3i}.

11 Updating problems: Given L(λ) and a complete description of σ(L), adjust the coefficients of L(λ) to produce desired changes in σ(L).

Standard pair: X ∈ F^{n×ln}, T ∈ F^{ln×ln} (F = R or C) for which

det col(X, XT, ..., XT^{l−1}) ≠ 0.

A standard pair for L(λ) is a standard pair (X, T) for which

L(X, T) := L_l XT^l + ... + L_1 XT + L_0 X = 0.

With T in Jordan form, the columns of X determine eigenvectors (and generalized eigenvectors) of L(λ).
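
The companion linearization supplies a standard pair directly; the following sketch (random symmetric quadratic, my own test data) verifies both defining properties:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
sym = lambda a: (a + a.T) / 2
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)                    # symmetric, positive definite
D, K = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))

Minv = np.linalg.inv(M)
T = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ D]])         # (right) companion matrix
X = np.hstack([np.eye(n), np.zeros((n, n))])

Q = np.vstack([X, X @ T])                      # col(X, XT) must be nonsingular
LXT = M @ X @ T @ T + D @ X @ T + K @ X        # L(X, T) must vanish
ok = (abs(np.linalg.det(Q)) > 1e-8) and np.allclose(LXT, 0)
```

Here col(X, XT) is in fact the identity, so nonsingularity is immediate, and L(X, T) = 0 follows from the block structure of the companion matrix.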

12 From pairs to triples. If the leading coefficient L_l is invertible, then a triple (X, T, Y) is a standard triple for L(λ) if (X, T) is a standard pair for L(λ) and

Y = col(X, XT, ..., XT^{l−1})^{-1} col(0, ..., 0, L_l^{-1}).

Similarity of standard triples:

{standard triples} = {(XS, S^{-1}TS, S^{-1}Y) : (X, T, Y) is a standard triple, S nonsingular}.

JORDAN forms → JORDAN pairs → JORDAN triples.

13 Hermitian/real-symmetric systems. Symmetries in coefficients ↔ symmetries in canonical structures. Prime example of a standard triple:

X = [I 0 ... 0],  T = C_R,  Y = col(0, ..., 0, L_l^{-1}),

where C_R is the (right) companion matrix of L(λ). Any triple similar to the above is also a standard triple.

14 Symmetries in terms of standard triples. Definition: (a) A real standard triple (X, T, Y) is real selfadjoint if there is a real nonsingular H such that

Y^T = XH^{-1},  T^T = HTH^{-1},  X^T = HY.

(b) A complex standard triple (X, T, Y) is selfadjoint if there is a nonsingular Hermitian H such that

Y^* = XH^{-1},  T^* = HTH^{-1},  X^* = HY.

Theorem: Let L(λ) have real coefficients with A_l nonsingular. Then: (a) if L(λ) admits a real selfadjoint standard triple, then L(λ) is real and symmetric; (b) if L(λ) is real and symmetric, then all its real standard triples are selfadjoint.

17 APPLICATION - AN UPDATING PROBLEM. M, D, K ∈ R^{n×n} all real-symmetric, M > 0,

L(λ) = Mλ^2 + Dλ + K.

All eigenvalues distinct. Jordan pair:

J = diag[U_1 + iW, U_2, U_3, U_1 − iW],  X = [X_c X_R1 X_R2 X̄_c].

To complete the selfadjoint Jordan triple (X, J, PX^*) we need

P = [0 0 0 I_{n−r}; 0 0 I_r 0; 0 I_r 0 0; I_{n−r} 0 0 0].

19 Updating - contd. Define the moments of the system:

Γ_j = X(J^j P)X^* ∈ C^{n×n},  j = 1, 2, 3, ...;

then

M = Γ_1^{-1},  D = −MΓ_2M,  K = −MΓ_3M + DΓ_1D.

This gives a formal recursive solution to the inverse problem: given the spectral data in the form of the Jordan triple (X, J, PX^*), these formulae generate the coefficients of the system.
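
Since the moments Γ_j = X T^j Y are invariant under similarity of triples, the formulae can be checked with the companion-based standard triple, Y = col(0, M^{-1}), instead of the Jordan triple. A sketch with my own random symmetric test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
sym = lambda a: (a + a.T) / 2
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)                 # symmetric positive definite
D = sym(rng.standard_normal((n, n)))
K = sym(rng.standard_normal((n, n)))

Minv = np.linalg.inv(M)
T = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ D]])      # companion matrix
X = np.hstack([np.eye(n), np.zeros((n, n))])
Y = np.vstack([np.zeros((n, n)), Minv])     # third component of the triple

Gam = lambda j: X @ np.linalg.matrix_power(T, j) @ Y

# Recover the coefficients from the first three moments.
M2 = np.linalg.inv(Gam(1))
D2 = -M2 @ Gam(2) @ M2
K2 = -M2 @ Gam(3) @ M2 + D2 @ Gam(1) @ D2
ok = all(np.allclose(a, b) for a, b in ((M2, M), (D2, D), (K2, K)))
```

The recovered M, D, K agree with the originals to machine precision, confirming the sign pattern of the moment formulae.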

20 Updating - STRATEGY: Given a system with real symmetric M, D, K and M > 0, compute a Jordan triple (X, J, PX^*). Make the updates in X and J to produce X̂, Ĵ so that: (a) the canonical matrix P is not disturbed, and (b) the conditions XPX^* = 0 and X(JP)X^* > 0 are maintained. Compute the moments defined by (X̂, Ĵ, P) and hence new coefficients M̂, D̂, K̂.

21 A CANONICAL selfadjoint triple for REAL, SEMISIMPLE systems with A_l nonsingular.
1. Should display ALL eigenvalue/sign-characteristic/eigenvector information in convenient form.
2. Found by searching among the similarity class of all selfadjoint triples.
SPECTRAL DATA: Let δ be the signature of the leading coefficient L_l, and χ = 0 or 1 according as l is even or odd. Let r_1, ..., r_{q+χδ} be the real eigenvalues of positive type and r_{q+χδ+1}, ..., r_{2q+χδ} be the real eigenvalues of negative type, and construct diagonal matrices of sizes q + χδ and q:

R_+ = diag[r_1, ..., r_{q+χδ}],  R_− = diag[r_{q+χδ+1}, ..., r_{2q+χδ}].

22 A CANONICAL selfadjoint triple for REAL, SEMISIMPLE systems with A_l nonsingular. Write the 2s non-real eigenvalues as s conjugate pairs:

β_j = μ_j + iν_j,  β_{j+1} = β̄_j = μ_j − iν_j  (ν_j > 0),  j = 1, 3, ..., 2s − 1,

and set M = diag[μ_1, μ_3, ..., μ_{2s−1}], N = diag[ν_1, ν_3, ..., ν_{2s−1}].

23 A CANONICAL TRIPLE (L., Prells and Zaballa, 2012). If the non-real eigenvectors are u_j ± iv_j, for j = 1, 2, ..., s, they can be normalized in such a way that, with V = [v_1 ... v_s], U = [u_1 ... u_s] ∈ R^{n×s}, there is a REAL (CANONICAL) STANDARD TRIPLE (X, J, PX^T) with

J = Π^T J_R Π = diag(R_+, R_−, [M N; −N M]) ∈ R^{nl×nl},
P = Π^T P_R Π = diag(I_{q+χδ}, −I_q, −I_s, I_s) ∈ R^{nl×nl},
X = X_R Π = [X_+ X_− V U] ∈ R^{n×nl}.

(Still in the semisimple case. Admits real and complex spectrum.)

24 FIRST PROPERTIES. Moment conditions:

XJ^kPX^T = 0,  k = 0, 1, ..., l − 2;  XJ^{l−1}PX^T = L_l^{-1}.

Resolvent form:

λ^r L(λ)^{-1} = XJ^r(λI − J)^{-1}PX^T,  r = 0, 1, ..., l − 1,
λ^l L(λ)^{-1} = XJ^l(λI − J)^{-1}PX^T + L_l^{-1}.
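
The r = 0 case of the resolvent form, L(λ)^{-1} = X(λI − J)^{-1}PX^T, holds for any standard triple (with PX^T replaced by the triple's third component), so it can be tested with the companion triple and my own random symmetric data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
sym = lambda a: (a + a.T) / 2
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)                    # symmetric positive definite
D, K = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))

Minv = np.linalg.inv(M)
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ D]])         # companion matrix
X = np.hstack([np.eye(n), np.zeros((n, n))])
Y = np.vstack([np.zeros((n, n)), Minv])        # third component of the triple

lam = 0.37 + 1.1j                              # arbitrary non-real test point
lhs = np.linalg.inv(M * lam**2 + D * lam + K)  # L(lam)^{-1}
rhs = X @ np.linalg.inv(lam * np.eye(2 * n) - J) @ Y
ok = bool(np.allclose(lhs, rhs))
```

The two sides agree to machine precision; the higher powers λ^r follow by repeatedly applying the moment conditions.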

25 CASE OF EVEN DEGREE, l = 2m. The moment conditions can be written

col(X, XJ, ..., XJ^{m−1}) · P̂ · [X^T, J^TX^T, ..., (J^T)^{m−1}X^T] = 0.

Now separate the real spectrum of +ve and −ve types, along with separation of conjugate pairs (ref. canonical forms), and define

A = col(A_0, A_1, ..., A_{m−1}),  A_k = [X_+R_+^k  U_k],
B = col(B_0, B_1, ..., B_{m−1}),  B_k = [X_−R_−^k  V_k].

26 CASE OF EVEN DEGREE (the punch-line). With A and B as above: Theorem: For any real-symmetric matrix polynomial L(λ) of even degree, l = 2m, with invertible leading coefficient, there is a real orthogonal matrix Θ ∈ R^{mn×mn} such that B = AΘ. Conversely, let J, P be real canonical matrices as described and X = [X_+ X_− V U] ∈ R^{n×ln} with B := AΘ for some real orthogonal Θ and XJ^{l−1}PX^T nonsingular; then (X, J, PX^T) is a real selfadjoint triple. Implication for inverse problems?

27 The quadratic case. In the eigenvector matrix

X = X_R Π = [X_+ X_− V U] ∈ R^{n×2n}

we have [X_− V] = [X_+ U]Θ (began with Lancaster-Prells, 2005).

29 CONSTRUCTIONS. Use the 2s non-real eigenvalues μ_j ± iν_j (j = 1, 3, ..., 2s − 1) to define M := diag[μ_1, ..., μ_{2s−1}], N := diag[ν_1, ..., ν_{2s−1}] in R^{s×s}, and then define

[M_r N_r; −N_r M_r] := [M N; −N M]^r.

Then

H_k(Θ) := [I_{nm} Θ] [R_+^k 0 0 0; 0 M_k 0 N_k; 0 0 −R_−^k 0; 0 N_k 0 −M_k] [I_{nm}; Θ^T] =: [I_{nm} Θ] G_k [I_{nm}; Θ^T];

this defines G_k ∈ R^{ln×ln}.

30 Cauchy to the rescue. Keeping in mind H_k(Θ) = [I_{nm} Θ] G_k [I_{nm}; Θ^T] and using the Cauchy interlacing inequalities, we get (for the eigenvalues of H_k(Θ)): Theorem: For k = 1, 2, ... write the (known) eigenvalues of G_k in the form λ_1(G_k) ≤ ... ≤ λ_{2nm}(G_k). Then for any nm×nm orthogonal matrix Θ we have

2λ_i(G_k) ≤ λ_i(H_k(Θ)) ≤ 2λ_{i+nm}(G_k),  1 ≤ i ≤ nm.
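
The bound is easy to test numerically: H(Θ)/2 is a compression of G by [I Θ]^T/√2, which has orthonormal columns, so Cauchy interlacing applies directly. A sketch with a random symmetric G standing in for G_k (my own data):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4                                   # plays the role of nm in the slide
S = rng.standard_normal((2 * m, 2 * m))
G = (S + S.T) / 2                       # symmetric stand-in for G_k
Theta, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random orthogonal

W = np.hstack([np.eye(m), Theta])
H = W @ G @ W.T                         # H(Theta) = [I Theta] G [I; Theta^T]

g = np.sort(np.linalg.eigvalsh(G))
h = np.sort(np.linalg.eigvalsh(H))
ok = bool(np.all(2 * g[:m] <= h + 1e-9) and np.all(h <= 2 * g[m:] + 1e-9))
```

Both inequalities hold for every index, for any orthogonal Θ drawn this way.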

32 Semisimple, real, symmetric QUADRATICS. We form a real canonical triple (X, J, PX^T) for L(λ) = L_2λ^2 + L_1λ + L_0:

R_+ = diag[r_1, ..., r_q],  R_− = diag[r_{q+1}, ..., r_{2q}],
β_j = μ_j + iν_j,  β_{j+1} = β̄_j = μ_j − iν_j,  ν_j > 0,  j = 1, 3, ..., 2s − 1,
M = diag[μ_1, μ_3, ..., μ_{2s−1}],  N = diag[ν_1, ν_3, ..., ν_{2s−1}],

J = diag[R_+, R_−, [M N; −N M]] ∈ R^{2n×2n},
P = Π^T P_R Π = diag[I_q, −I_q, −I_s, I_s] ∈ R^{2n×2n},
X = [X_+ X_− V U] ∈ R^{n×2n}.

(X_+, X_− are n×q and V, U are n×s.)

33 An inverse quadratic problem: Given semisimple canonical matrices J, P ∈ R^{2n×2n} as above, can we always find an X ∈ R^{n×2n} to complete a canonical triple (X, J, PX^T)? NO! Recall that there are 2q real eigenvalues (counting multiplicities) and s pairs of complex conjugate eigenvalues, 0 ≤ q, s ≤ n. To characterize this problem we need another parameter: let p be the maximal multiplicity of any real eigenvalue (so that 1 ≤ p ≤ 2n). Theorem (L.-Zaballa): With the hypotheses above, there exists a canonical triple (X, J, PX^T) if and only if q + s ≥ p. (Notice that s = 0 is possible if q ≥ p. Also, q + s = n when the system is semisimple.)

34 Another inverse quadratic problem. Recall that, for a real canonical triple (X, J, PX^T), Γ_1 = XJPX^T = A_2^{-1}. Theorem: Given (semisimple) candidates J, P as components of a real canonical triple (X, J, PX^T), there is an associated semisimple, real, symmetric quadratic matrix polynomial with Γ_1 (= XJPX^T) nonsingular (or Γ_1 > 0) if and only if there is an orthogonal matrix Θ such that

H_1(Θ) := [I_n Θ] [R_+ 0 0 0; 0 M 0 N; 0 0 −R_− 0; 0 N 0 −M] [I_n; Θ^T]

is nonsingular (resp. positive definite).

35 EPILOGUE: GENERAL CANONICAL FORMS (L/Z). FIRST - reduction over C. Primitive matrices F_j and G_j: F_j is the j×j backward identity (ones on the antidiagonal), and G_j carries ones on the antidiagonal positions immediately below it; e.g.

F_1 = [1],  G_1 = [0],
F_4 = [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0],  G_4 = [0 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0],

etc.

36 EPILOGUE: GENERAL CANONICAL FORMS (L/Z).

P = ⊕_{j=1}^{q} ε_j F_{l_j} ⊕ ⊕_{k=1}^{s} F_{2m_k},
PJ = ⊕_{j=1}^{q} ε_j(α_j F_{l_j} + G_{l_j}) ⊕ ⊕_{k=1}^{s} [0, β̄_k F_{m_k} + G_{m_k}; β_k F_{m_k} + G_{m_k}, 0].

Theorem: If L(λ) is Hermitian and A_l is nonsingular, then there exists a selfadjoint Jordan triple of the form (X, J, PX^*). The set of numbers ε_j (= ±1) is determined uniquely by L(λ) up to permutation of the signs in the blocks of P corresponding to the Jordan blocks, α_j F_{l_j} + G_{l_j}, of J with the same real eigenvalue and size.

37 And the Jordan form J itself. The numbers ε_j are, of course, ±1 and are the sign characteristics of the real eigenvalues.

J = ⊕_{j=1}^{q} (α_j I_{l_j} + F_{l_j}G_{l_j}) ⊕ ⊕_{k=1}^{s} [β_k I_{m_k} + F_{m_k}G_{m_k}, 0; 0, β̄_k I_{m_k} + F_{m_k}G_{m_k}],

and it satisfies J^*P = PJ, i.e. J is selfadjoint in the (indefinite) P inner product.

38 Epilogue: THE REAL SYMMETRIC CASE (L/Z). Reduction over the reals is more complicated and requires another class of primitive matrices, E_{2m}, of even size 2m (for example E_6 when m = 3).

39 Epilogue: THE REAL SYMMETRIC CASE (L/Z).

P = ⊕_{j=1}^{q} ε_j F_{l_j} ⊕ ⊕_{k=1}^{s} F_{2m_k},
PK = ⊕_{j=1}^{q} ε_j(α_j F_{l_j} + G_{l_j}) ⊕ ⊕_{k=1}^{s} (μ_k F_{2m_k} + ν_k E_{2m_k} + ...).

(Extract K from this.) Theorem: If L(λ) is real and symmetric and A_l is nonsingular, then there exists a real Jordan triple of the form (X_ρ, K, PX_ρ^T). The set of numbers ε_j (= ±1) is determined uniquely by L(λ) up to permutation of the signs in the blocks of P corresponding to the Jordan blocks of K with the same real eigenvalue and size.

40 REFERENCES:
C. Chorianopoulos, P. Lancaster, Inverse problems for Hermitian quadratic matrix polynomials, Indag. Math., 23, 2012.
I. Gohberg, P. Lancaster, L. Rodman, Matrix Polynomials, Academic Press, 1982; reprinted SIAM, 2009.
I. Gohberg, P. Lancaster, L. Rodman, Indefinite Linear Algebra and Applications, Birkhäuser, Basel, 2005.
P. Lancaster, Model-updating for self-adjoint quadratic eigenvalue problems, Lin. Alg. and its Applications, 428, 2008.
P. Lancaster, U. Prells, Inverse problems for damped vibrating systems, J. of Sound and Vibration, 283, 2005.

41 REFERENCES (continued):
P. Lancaster, U. Prells, I. Zaballa, An orthogonality property for real symmetric matrix polynomials with application to the inverse problem, Operators and Matrices, to appear.
P. Lancaster, F. Tisseur, Hermitian quadratic matrix polynomials: solvents and inverse problems, Lin. Alg. and its Applications, 436, 2012.
P. Lancaster, I. Zaballa, A review of canonical forms for selfadjoint matrix polynomials, Operator Theory: Advances and Applications, 218, 2012.
P. Lancaster, I. Zaballa, On the inverse symmetric quadratic eigenvalue problem. Submitted.

42 Time to go!


More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems

Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems In Chapters 8 and 9 of this book we have designed dynamic controllers such that the closed-loop systems display the desired transient

More information

Ideal Class Group and Units

Ideal Class Group and Units Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

More information

Some Lecture Notes and In-Class Examples for Pre-Calculus:

Some Lecture Notes and In-Class Examples for Pre-Calculus: Some Lecture Notes and In-Class Examples for Pre-Calculus: Section.7 Definition of a Quadratic Inequality A quadratic inequality is any inequality that can be put in one of the forms ax + bx + c < 0 ax

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

JUST THE MATHS UNIT NUMBER 1.8. ALGEBRA 8 (Polynomials) A.J.Hobson

JUST THE MATHS UNIT NUMBER 1.8. ALGEBRA 8 (Polynomials) A.J.Hobson JUST THE MATHS UNIT NUMBER 1.8 ALGEBRA 8 (Polynomials) by A.J.Hobson 1.8.1 The factor theorem 1.8.2 Application to quadratic and cubic expressions 1.8.3 Cubic equations 1.8.4 Long division of polynomials

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

Some probability and statistics

Some probability and statistics Appendix A Some probability and statistics A Probabilities, random variables and their distribution We summarize a few of the basic concepts of random variables, usually denoted by capital letters, X,Y,

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Mean value theorem, Taylors Theorem, Maxima and Minima.

Mean value theorem, Taylors Theorem, Maxima and Minima. MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2014 Timo Koski () Mathematisk statistik 24.09.2014 1 / 75 Learning outcomes Random vectors, mean vector, covariance

More information

Finite Dimensional Hilbert Spaces and Linear Inverse Problems

Finite Dimensional Hilbert Spaces and Linear Inverse Problems Finite Dimensional Hilbert Spaces and Linear Inverse Problems ECE 174 Lecture Supplement Spring 2009 Ken Kreutz-Delgado Electrical and Computer Engineering Jacobs School of Engineering University of California,

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Linear Algebra Notes

Linear Algebra Notes Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

Linear Algebra and TI 89

Linear Algebra and TI 89 Linear Algebra and TI 89 Abdul Hassen and Jay Schiffman This short manual is a quick guide to the use of TI89 for Linear Algebra. We do this in two sections. In the first section, we will go over the editing

More information

3.1 State Space Models

3.1 State Space Models 31 State Space Models In this section we study state space models of continuous-time linear systems The corresponding results for discrete-time systems, obtained via duality with the continuous-time models,

More information

Theory of Matrices. Chapter 5

Theory of Matrices. Chapter 5 Chapter 5 Theory of Matrices As before, F is a field We use F[x] to represent the set of all polynomials of x with coefficients in F We use M m,n (F) and M m,n (F[x]) to denoted the set of m by n matrices

More information

G = G 0 > G 1 > > G k = {e}

G = G 0 > G 1 > > G k = {e} Proposition 49. 1. A group G is nilpotent if and only if G appears as an element of its upper central series. 2. If G is nilpotent, then the upper central series and the lower central series have the same

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Continuity of the Perron Root

Continuity of the Perron Root Linear and Multilinear Algebra http://dx.doi.org/10.1080/03081087.2014.934233 ArXiv: 1407.7564 (http://arxiv.org/abs/1407.7564) Continuity of the Perron Root Carl D. Meyer Department of Mathematics, North

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

Linear Control Systems

Linear Control Systems Chapter 3 Linear Control Systems Topics : 1. Controllability 2. Observability 3. Linear Feedback 4. Realization Theory Copyright c Claudiu C. Remsing, 26. All rights reserved. 7 C.C. Remsing 71 Intuitively,

More information

2.5 ZEROS OF POLYNOMIAL FUNCTIONS. Copyright Cengage Learning. All rights reserved.

2.5 ZEROS OF POLYNOMIAL FUNCTIONS. Copyright Cengage Learning. All rights reserved. 2.5 ZEROS OF POLYNOMIAL FUNCTIONS Copyright Cengage Learning. All rights reserved. What You Should Learn Use the Fundamental Theorem of Algebra to determine the number of zeros of polynomial functions.

More information

Zeros of Polynomial Functions

Zeros of Polynomial Functions Review: Synthetic Division Find (x 2-5x - 5x 3 + x 4 ) (5 + x). Factor Theorem Solve 2x 3-5x 2 + x + 2 =0 given that 2 is a zero of f(x) = 2x 3-5x 2 + x + 2. Zeros of Polynomial Functions Introduction

More information

9 MATRICES AND TRANSFORMATIONS

9 MATRICES AND TRANSFORMATIONS 9 MATRICES AND TRANSFORMATIONS Chapter 9 Matrices and Transformations Objectives After studying this chapter you should be able to handle matrix (and vector) algebra with confidence, and understand the

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

The last three chapters introduced three major proof techniques: direct,

The last three chapters introduced three major proof techniques: direct, CHAPTER 7 Proving Non-Conditional Statements The last three chapters introduced three major proof techniques: direct, contrapositive and contradiction. These three techniques are used to prove statements

More information

is in plane V. However, it may be more convenient to introduce a plane coordinate system in V.

is in plane V. However, it may be more convenient to introduce a plane coordinate system in V. .4 COORDINATES EXAMPLE Let V be the plane in R with equation x +2x 2 +x 0, a two-dimensional subspace of R. We can describe a vector in this plane by its spatial (D)coordinates; for example, vector x 5

More information

MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY

MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY PLAMEN KOEV Abstract In the following survey we look at structured matrices with what is referred to as low displacement rank Matrices like Cauchy Vandermonde

More information

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation

More information

Lecture 5 Rational functions and partial fraction expansion

Lecture 5 Rational functions and partial fraction expansion S. Boyd EE102 Lecture 5 Rational functions and partial fraction expansion (review of) polynomials rational functions pole-zero plots partial fraction expansion repeated poles nonproper rational functions

More information

Structure of the Root Spaces for Simple Lie Algebras

Structure of the Root Spaces for Simple Lie Algebras Structure of the Root Spaces for Simple Lie Algebras I. Introduction A Cartan subalgebra, H, of a Lie algebra, G, is a subalgebra, H G, such that a. H is nilpotent, i.e., there is some n such that (H)

More information

Banded Matrices with Banded Inverses and A = LPU

Banded Matrices with Banded Inverses and A = LPU Banded Matrices with Banded Inverses and A = LPU Gilbert Strang Abstract. If A is a banded matrix with a banded inverse, then A = BC = F... F N is a product of block-diagonal matrices. We review this factorization,

More information

Chapter 13: Basic ring theory

Chapter 13: Basic ring theory Chapter 3: Basic ring theory Matthew Macauley Department of Mathematical Sciences Clemson University http://www.math.clemson.edu/~macaule/ Math 42, Spring 24 M. Macauley (Clemson) Chapter 3: Basic ring

More information