MAA704: Matrix functions and matrix equations
Karl Lundengård
December 3, 2012





Contents of today's lecture
Matrix functions
Matrix equations (Kronecker product)
Solving general matrix equations

Some useful functions from calculus
Powers: f(x) = x^n, x ∈ C, n ∈ Z+
Roots: f(x) = ⁿ√x, x ∈ R, n ∈ Z+
Polynomials: p(x) = Σ_{k=0}^{n} a_k x^k, a_k, x ∈ C, n ∈ Z+
Exponential function: f(x) = e^x, x ∈ C
Can we make matrix versions of these?

What could A^n mean?
Natural interpretation:
A^n = AA···A (n factors), n ∈ Z+   (1)
A^0 = I                            (2)
A^{-n} = (A^{-1})^n, n ∈ Z+, A invertible   (3)
A ∈ M_{k×k}                        (4)

Calculating A^n
Direct calculation of A^n might mean a lot of work, especially for large n.
Is it easier to calculate for certain types of matrices?
How can the calculation of A^n be simplified?

Calculating A^n
For a diagonal m×m matrix D = diag(d_11, d_22, ..., d_mm):
D^n = diag(d_11^n, d_22^n, ..., d_mm^n)

Calculating A^n
For a block-diagonal matrix D = blockdiag(D_1, D_2, ..., D_k):
D^n = blockdiag(D_1^n, D_2^n, ..., D_k^n)
where the D_i, 1 ≤ i ≤ k, are square matrices of (possibly different) sizes.

Calculating A^n
D^n for diagonal matrices is easy! What about diagonalizable matrices?
D^n for block-diagonal matrices is relatively easy! Any matrix can be written in Jordan form, which is block diagonal. Can this be useful?

Calculating A^n
Let A and B be similar matrices: A = S^{-1}BS. For a positive integer n
A^n = (S^{-1}BS)(S^{-1}BS)···(S^{-1}BS) = S^{-1}B(SS^{-1})B(SS^{-1})···BS = S^{-1}B^n S
since SS^{-1} = I. It can be shown in the same way that A^n = S^{-1}B^n S for any integer n.
Diagonalizable matrix: A^n = S^{-1}D^n S
Any square matrix: A^n = S^{-1}J^n S
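A minimal numerical sketch of this idea (the example matrix and the use of numpy are my own choices, not part of the slides):

```python
import numpy as np

# Hypothetical diagonalizable example matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
n = 10

# numpy returns A = V D V^{-1}; in the slides' notation S corresponds to V^{-1}.
eigvals, V = np.linalg.eig(A)
A_pow = V @ np.diag(eigvals**n) @ np.linalg.inv(V)

# Compare with direct repeated multiplication.
print(np.allclose(A_pow, np.linalg.matrix_power(A, n)))  # True
```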

Calculating A^n
It can be easier to find B than S in A = S^{-1}BS. Finding B might be enough, since similar matrices share several properties:
Eigenvalues (but generally not eigenvectors)
Determinant
Trace
Rank

What could ⁿ√A = A^{1/n} mean?
Definition
If the following relation holds for two square matrices A, B ∈ M_{k×k}
A = BB···B (n factors)
then B is said to be an nth root of A. This is denoted B = ⁿ√A = A^{1/n}.
How do you find ⁿ√A? For how many different matrices B does B = ⁿ√A hold?

Square root
How do you find √B?
A = √B  ⟺  AA = B
For how many different A is A = √B?

Square root of a diagonal matrix
Theorem
If we have two diagonal square matrices C and D such that
D = diag(d_11, d_22, ..., d_mm)  and  C = diag(√d_11, √d_22, ..., √d_mm)
then CC = C^2 = D, so C is a square root of D.

Square root of a diagonal matrix
There are at least 2^m roots, since we can choose ±√d_ii for each diagonal entry.
If all √d_ii are chosen positive, this is called the principal root.
There can be many more roots.
Theorem
All of the following matrices are square roots of I_2 = [1 0; 0 1]:
(1/t)[±s r; r ∓s]  and  [±1 0; 0 ±1] (signs chosen independently)
if r^2 + s^2 = t^2.
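A quick numerical check of one member of this family, using the Pythagorean triple r = 4, s = 3, t = 5 (the specific triple is my own choice):

```python
import numpy as np

# B = (1/5) [[3, 4], [4, -3]] should square to the identity since 3^2 + 4^2 = 5^2.
B = np.array([[3.0, 4.0],
              [4.0, -3.0]]) / 5.0
print(np.allclose(B @ B, np.eye(2)))   # True
```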

Square root of similar matrices
Let A = S^{-1}BS. Then √A = S^{-1}√B S.

Square root of a Jordan matrix
J = blockdiag(J_{m_1}(λ_1), J_{m_2}(λ_2), ..., J_{m_k}(λ_k))

Square root of a Jordan block
J_m(λ) = [λ 1 0 ... 0; 0 λ 1 ... 0; 0 0 λ ... 0; ...; 0 0 0 ... λ]
       = λ [1 λ^{-1} 0 ... 0; 0 1 λ^{-1} ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1]
       = λ(I + K)
where K has λ^{-1} on the superdiagonal and zeros elsewhere (for λ ≠ 0).

Square root of a Jordan block
K is strictly triangular, so K is nilpotent: K^n = 0 for some finite integer n ≥ 0.
Theorem
√(J_m(λ)) = √λ √(I + K) = √λ (I + (1/2)K - (1/8)K^2 + (1/16)K^3 - (5/128)K^4 + ...)
Proof.
See compendium page 61.
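A minimal sketch of this series for a single Jordan block (the block size and the value of λ are my own choices; since K is nilpotent the binomial series terminates and is exact):

```python
import numpy as np
from scipy.special import binom   # generalized binomial coefficient binom(1/2, k)

m = 4
lam = 9.0
K = np.diag(np.ones(m - 1) / lam, k=1)   # lambda^{-1} on the superdiagonal
I = np.eye(m)

# sqrt(I + K) = sum_k binom(1/2, k) K^k; only the first m terms are nonzero.
sqrt_IK = sum(binom(0.5, k) * np.linalg.matrix_power(K, k) for k in range(m))
sqrt_J = np.sqrt(lam) * sqrt_IK          # square root of J = lam * (I + K)

J = lam * (I + K)
print(np.allclose(sqrt_J @ sqrt_J, J))   # True
```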

Matrix polynomial
Definition
A matrix polynomial of degree n is a function of the form
p(A) = Σ_{k=0}^{n} c_k A^k,  c_k ∈ K,  A ∈ M_{n×n}(K)

Interesting properties of the matrix polynomial
The coefficients behave like the coefficients of regular polynomials:
p(x) + q(x) has the same coefficients as p(A) + q(A)
p(x)q(x) has the same coefficients as p(A)q(A)
p(S^{-1}AS) = S^{-1}p(A)S
p(blockdiag(A_11, A_22, ..., A_kk)) = blockdiag(p(A_11), p(A_22), ..., p(A_kk))
where all A_ii are square blocks.

The Cayley-Hamilton theorem
Theorem (Cayley-Hamilton theorem)
Let p_A(λ) = det(λI - A). Then p_A(A) = 0.
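A quick numerical check of the theorem (the example matrix is hypothetical; np.poly returns the coefficients of the characteristic polynomial, highest power first):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

coeffs = np.poly(A)                      # coefficients of det(lambda*I - A)
n = A.shape[0]

# Evaluate p_A(A) = A^n + c_{n-1} A^{n-1} + ... + c_0 I; it should be (numerically) zero.
p_of_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.allclose(p_of_A, np.zeros_like(A)))  # True
```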

The adjugate formula
Theorem (The adjugate formula)
For any square matrix A the following equality holds
A adj(A) = adj(A) A = det(A) I
where adj(A) is a square matrix such that adj(A)_{ki} = A_{ik}, with A_{ik} the ik-cofactor of A.
Proof.
See compendium pages 62-64.

Sketch of proof for Cayley-Hamilton
Proof.
Write p_A(λ)I = (λI - A)B, where B = adj(λI - A) is a matrix polynomial in λ of degree n-1:
B = Σ_{k=0}^{n-1} λ^k B_k
(λI - A)B = (λI - A) Σ_{k=0}^{n-1} λ^k B_k
          = λ^n B_{n-1} + λ^{n-1}(B_{n-2} - AB_{n-1}) + ... + λ(B_0 - AB_1) + (-AB_0)
          = λ^n I + a_{n-1} λ^{n-1} I + ... + a_1 λ I + a_0 I = p_A(λ)I

Sketch of proof for Cayley-Hamilton
Proof.
If this expression holds for all λ, then the following three conditions must be fulfilled:
a_k I = B_{k-1} - AB_k  for 1 ≤ k ≤ n-1   (6)
I = B_{n-1}                                (7)
a_0 I = -AB_0                              (8)
Next consider the matrix polynomial p_A(A):
p_A(A) = A^n + A^{n-1} a_{n-1} + ... + A a_1 + I a_0
Combining this with conditions (6)-(8) results in
p_A(A) = A^n + A^{n-1}(B_{n-2} - AB_{n-1}) + ... + A(B_0 - AB_1) + (-AB_0)
       = A^n (I - B_{n-1}) + A^{n-1}(B_{n-2} - B_{n-2}) + ... + A(B_0 - B_0) = 0

Companion matrix
Definition
Let p(x) = x^n + a_{n-1}x^{n-1} + ... + a_1 x + a_0. Then
C(p) = [  0     1     0   ...    0        0      ]
       [  0     0     1   ...    0        0      ]
       [  :     :     :          :        :      ]
       [  0     0     0   ...    0        1      ]
       [ -a_0  -a_1  -a_2 ... -a_{n-2} -a_{n-1}  ]

State variable representation
Consider the linear differential equation (with constant scalar coefficients), y = y(t):
y^(n) + a_{n-1} y^(n-1) + ... + a_1 y' + a_0 y = f
Create the state variable vector
x = (y, y', ..., y^(n-2), y^(n-1))^T

State variable representation
Rewrite the equation as a linear equation system:
x_1' = x_2
x_2' = x_3
...
x_{n-1}' = x_n
x_n' = -a_0 x_1 - a_1 x_2 - ... - a_{n-1} x_n + f
Rewrite in the form dx/dt = C(p)x (plus the forcing term f in the last component), with
p(x) = x^n + a_{n-1}x^{n-1} + ... + a_1 x + a_0

State variable representation
Note that det(λI - C(p)) = p(λ), which means that p(λ) = 0 for any λ ∈ Sp(C(p)).
This can be a convenient way of finding roots of a polynomial, which can then be used to solve a differential equation.
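A minimal sketch of root-finding via the companion matrix (the example polynomial is my own choice; this is essentially how numpy.roots works):

```python
import numpy as np

# Hypothetical example: p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3).
a = np.array([-6.0, 11.0, -6.0])      # a_0, a_1, a_2 of the monic polynomial
n = len(a)

# Companion matrix C(p) as defined above.
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)            # ones on the superdiagonal
C[-1, :] = -a                         # last row: -a_0, -a_1, ..., -a_{n-1}

# The eigenvalues of C(p) are exactly the roots of p.
print(np.sort(np.linalg.eigvals(C)))  # approximately [1. 2. 3.]
```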

First order matrix differential equation
dU/dt = AU,  U(0) = I,  U, A ∈ M_{n×n}
How do you solve this equation? If A had not been a matrix the answer would be e^{at}; what is the corresponding matrix answer?

Instead of shoving matrices into a definition and hoping they fit, we can construct matrix functions that have some desirable property of the corresponding function of numbers. The exponential function e^{at} has three nice properties:
d/dt(e^{at}) = a e^{at} = e^{at} a
(e^{at})^{-1} = e^{-at}
e^a e^b = e^{a+b} = e^{b+a} = e^b e^a

Theorem
The following is true for the matrix exponential:
a) d/dt e^{tA} = A e^{tA} = e^{tA} A
b) (e^{tA})^{-1} = e^{-tA}
c) e^{A+B} = e^A e^B = e^B e^A  if AB = BA

Consider what happens to a general polynomial of At when we take the first time derivative:
p(At) = Σ_{k=0}^{n} a_k (At)^k
d/dt p(At) = Σ_{k=1}^{n} a_k k A^k t^{k-1} = A Σ_{l=0}^{n-1} a_{l+1}(l+1) A^l t^l
The last equality is obtained by simply taking l = k-1. Compare this to
A p(At) = A Σ_{l=0}^{n} a_l A^l t^l

If the polynomial coefficients are chosen such that a_{l+1}(l+1) = a_l, these expressions would be identical except for the final term. Let us choose the correct coefficients,
a_{l+1}(l+1) = a_l  ⟹  a_l = 1/l!,
and simply avoid the problem of the last term not matching up by letting the polynomial have infinite degree.

Definition
The matrix exponential is defined as
e^{At} = Σ_{k=0}^{∞} (t^k / k!) A^k,  A ∈ M_{n×n}

Calculating the matrix exponential
Calculating the matrix exponential e^A is generally difficult. Some classes of matrices are simple, for example diagonal matrices:
e^D = diag(e^{d_11}, e^{d_22}, ..., e^{d_nn})
This means the exponential of a diagonalizable matrix can be calculated like this:
e^A = S^{-1} e^D S
Prove this as an exercise. Hint: What is the Taylor series (power series) for e^{at}?
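A minimal sketch of the diagonalizable case (the example matrix is my own choice; scipy.linalg.expm is used only as a reference answer):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical diagonalizable (here symmetric) example matrix.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# numpy gives A = V D V^{-1}; in the slides' notation S corresponds to V^{-1}.
eigvals, V = np.linalg.eig(A)
expA = V @ np.diag(np.exp(eigvals)) @ np.linalg.inv(V)

print(np.allclose(expA, expm(A)))  # True
```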

1st order matrix differential equation
Theorem
The matrix exponential e^{xA} is the solution to the initial value problem
dU/dx = AU,  U(0) = I
Proof.
From the construction of e^{xA} we already know that it fulfills the equation. Checking that the initial condition is fulfilled can be done simply by setting x = 0 in the definition.

Can we systematically create a matrix function for every function of numbers?

General matrix functions
Definition (matrix function via Hermite interpolation)
If a function f(λ) is defined on all λ that are eigenvalues of a square matrix A, then the matrix function f(A) is defined as
f(A) = Σ_{i=1}^{h} Σ_{k=0}^{s_i - 1} (f^(k)(λ_i) / k!) φ_ik(A)
where the φ_ik are the Hermite interpolation polynomials (defined below), the λ_i are the distinct eigenvalues and s_i is the multiplicity of each eigenvalue.

General matrix functions
Definition (Hermite interpolation polynomials)
Let λ_i, 1 ≤ i ≤ n, be complex numbers and s_i, 1 ≤ i ≤ n, be positive integers. Let s be the sum of all s_i. The Hermite interpolation polynomials are a set of polynomials of degree lower than s that fulfill the conditions
φ_ik^(l)(λ_j) = 0   if i ≠ j or k ≠ l
φ_ik^(l)(λ_j) = l!  if i = j and k = l
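A minimal sketch of the simplest case of this definition, where all eigenvalues are distinct (s_i = 1) and the φ_i reduce to ordinary Lagrange basis polynomials; the example matrix and the choice f = exp are my own:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])               # eigenvalues 2 and 3 (distinct)
lam = np.linalg.eigvals(A)
I = np.eye(2)

# f(A) = sum_i f(lambda_i) * phi_i(A), phi_i(x) = prod_{j!=i} (x - lambda_j)/(lambda_i - lambda_j)
fA = np.zeros_like(A)
for i, li in enumerate(lam):
    phi = I.copy()
    for j, lj in enumerate(lam):
        if j != i:
            phi = phi @ (A - lj * I) / (li - lj)
    fA += np.exp(li) * phi

print(np.allclose(fA, expm(A)))          # True
```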

General matrix functions
Definition (matrix function via Cauchy integral)
Let C be a curve that encloses all eigenvalues of the square matrix A. If the function f is analytic (can be written as a power series) on C and inside C, then
f(A) = (1 / 2πi) ∮_C f(z)(zI - A)^{-1} dz
The matrix R_A(z) = (zI - A)^{-1} is called the resolvent of A.
This is equivalent to the Hermite interpolation definition (for analytic functions).

Matrix power series
Using the Taylor expansion
f(x) = Σ_{n=0}^{∞} (f^(n)(a) / n!) (x - a)^n
many functions (to be precise, all analytic functions) can be rewritten as a (convergent) polynomial of infinite degree. This can be proved from the Cauchy integral of the resolvent.

Matrix equations
Linear equation system as a matrix and column vectors: Ax = y, A and y known
Simple matrix equation: AX = Y or XB = Y, with A and Y known or B and Y known
Sylvester's equation: AX + XB = C, A, B, C known
Lyapunov's equation: AX + XA^H = C, A, C known

Solution to Sylvester's equation
Theorem
Sylvester's equation AX + XB = C has a unique solution if and only if A and -B have no common eigenvalues.
Proof.
To show this it is very useful to know about tensor products.

Tensor products
Generally, the tensor product on a set is the most general bilinear function on the set. It is often denoted by ⊗ and is sometimes referred to as the direct product and sometimes (for certain Hilbert spaces, for example) as the outer product.
Tensor products for matrices always have two properties:
a) Bilinearity: (µA + ηB) ⊗ C = µA⊗C + ηB⊗C and A ⊗ (µB + ηC) = µA⊗B + ηA⊗C
b) Associativity: (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
For matrices, ⊗ is usually the Kronecker product.

Kronecker product
Definition
The Kronecker product between two matrices, A ∈ M_{m×n} and B ∈ M_{p×q}, is defined as
A ⊗ B = [a_11 B  a_12 B ... a_1n B; a_21 B  a_22 B ... a_2n B; ...; a_m1 B  a_m2 B ... a_mn B]
or
A ⊗ B = [A b_11  A b_12 ... A b_1q; A b_21  A b_22 ... A b_2q; ...; A b_p1  A b_p2 ... A b_pq]
These two definitions are equivalent but not equal; one is a permutation of the other.
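A small illustration of the first (block) form, using numpy's built-in Kronecker product (the example matrices are my own choices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K = np.kron(A, B)        # 4x4 matrix whose (i, j) block is a_ij * B
print(K)
# [[ 0  5  0 10]
#  [ 6  7 12 14]
#  [ 0 15  0 20]
#  [18 21 24 28]]
```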

Some properties of the Kronecker product
Theorem
For the Kronecker product the following is true:
a) (µA + ηB) ⊗ C = µA⊗C + ηB⊗C
b) A ⊗ (µB + ηC) = µA⊗B + ηA⊗C
c) (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
d) (A ⊗ B)^T = A^T ⊗ B^T
e) conj(A ⊗ B) = conj(A) ⊗ conj(B) (complex conjugate)
f) (A ⊗ B)^H = A^H ⊗ B^H
g) (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, for all invertible A and B
h) det(A ⊗ B) = det(A)^k det(B)^n, with A ∈ M_{n×n}, B ∈ M_{k×k}

Some more properties of the Kronecker product
Theorem
For the Kronecker product the following is true:
i) (A ⊗ B)(C ⊗ D) = AC ⊗ BD, with A ∈ M_{m×n}, C ∈ M_{n×k}, B ∈ M_{p×q}, D ∈ M_{q×r}
j) A ⊗ B = (A ⊗ I_{k×k})(I_{n×n} ⊗ B), with A ∈ M_{n×n}, B ∈ M_{k×k}
k) AX = C ⟺ (I ⊗ A) vec(X) = vec(C) and XB = C ⟺ (B^T ⊗ I) vec(X) = vec(C),
where vec(X) stacks the columns of X = [X_{·1} X_{·2} ... X_{·n}] into a single column vector:
vec(X) = [X_{·1}; X_{·2}; ...; X_{·n}]
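A quick numerical check of property k) (sizes and random matrices are my own choices; note that vec stacks columns, so a column-major flatten is used):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))

def vec(M):
    """Stack the columns of M into one long column vector."""
    return M.flatten(order="F")

# vec(AX) = (I ⊗ A) vec(X)   and   vec(XB) = (B^T ⊗ I) vec(X)
print(np.allclose(np.kron(np.eye(4), A) @ vec(X), vec(A @ X)))    # True
print(np.allclose(np.kron(B.T, np.eye(3)) @ vec(X), vec(X @ B)))  # True
```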

Eigenvalues and Kronecker products
Theorem
Let {λ} be the eigenvalues of A and {µ} be the eigenvalues of B. Then the following is true:
a) {λµ} are the eigenvalues of A ⊗ B
b) {λ + µ} are the eigenvalues of A ⊗ I + I ⊗ B
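A quick numerical check of both statements (random symmetric matrices are used so all eigenvalues are real and easy to compare; the sizes are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); A = A + A.T   # symmetric, real eigenvalues
B = rng.standard_normal((2, 2)); B = B + B.T

lam = np.linalg.eigvalsh(A)
mu = np.linalg.eigvalsh(B)

# a) eigenvalues of A ⊗ B are all products lambda_i * mu_j
prods = np.sort((lam[:, None] * mu[None, :]).ravel())
print(np.allclose(prods, np.sort(np.linalg.eigvalsh(np.kron(A, B)))))   # True

# b) eigenvalues of A ⊗ I + I ⊗ B are all sums lambda_i + mu_j
sums = np.sort((lam[:, None] + mu[None, :]).ravel())
K = np.kron(A, np.eye(2)) + np.kron(np.eye(3), B)
print(np.allclose(sums, np.sort(np.linalg.eigvalsh(K))))                # True
```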

Eigenvalues and Kronecker products
Proof (sketch).
a) Use Schur's lemma: A = S^{-1} R_A S, B = T^{-1} R_B T, where R_A and R_B are triangular matrices with the eigenvalues along the diagonal. Then
A ⊗ B = (S^{-1} R_A S) ⊗ (T^{-1} R_B T)
      = (S^{-1} ⊗ T^{-1})(R_A ⊗ R_B)(S ⊗ T)
      = (S ⊗ T)^{-1}(R_A ⊗ R_B)(S ⊗ T)
R_A ⊗ R_B will be triangular and will have the elements a_ii b_jj = λ_i µ_j on the diagonal, thus λ_i µ_j is an eigenvalue of A ⊗ B since similar matrices have the same eigenvalues.
b) The same argument gives that A ⊗ I (respectively I ⊗ B) has the same eigenvalues as A (respectively B); adding the two terms together gives that λ + µ is an eigenvalue of A ⊗ I + I ⊗ B.

Solution to Sylvester's equation, continued
Theorem
Sylvester's equation AX + XB = C has a unique solution if and only if A and -B have no common eigenvalues.
Proof.
To show this it is very useful to know about tensor products.

Solution to Sylvester's equation, continued
Proof.
Rewrite the equation using the Kronecker product:
AX + XB = C  ⟺  (I ⊗ A + B^T ⊗ I) vec(X) = vec(C)
Call the matrix on the left-hand side K. This is an ordinary linear equation system, which has a unique solution if λ = 0 is not an eigenvalue of K. The eigenvalues of K are λ + µ, where λ is an eigenvalue of A and µ is an eigenvalue of B (B^T has the same eigenvalues as B). Thus, if A and -B have no common eigenvalues, no sum λ + µ is zero and 0 is not an eigenvalue of K.
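A minimal sketch of solving Sylvester's equation this way (sizes and random matrices are my own choices; scipy.linalg.solve_sylvester is used only as a reference answer):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((3, 4))

# Kronecker formulation: (I ⊗ A + B^T ⊗ I) vec(X) = vec(C)
K = np.kron(np.eye(4), A) + np.kron(B.T, np.eye(3))
x = np.linalg.solve(K, C.flatten(order="F"))
X = x.reshape((3, 4), order="F")

print(np.allclose(A @ X + X @ B, C))             # True
print(np.allclose(X, solve_sylvester(A, B, C)))  # True
```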

Summary
We have seen matrix versions of common functions: A^n, √A, e^A.
Seen how analytic functions can be turned into matrix functions using the Taylor expansion.
Introduced the Kronecker product.
Taken a look at some matrix equations: Sylvester's equation and first order matrix differential equations.