Systems of Differential Equations Matrix Methods

Outline

  Characteristic Equation
  The Cayley-Hamilton Theorem
  An Example
  The Cayley-Hamilton-Ziebur Method for u' = Au
  A Working Rule for Solving u' = Au
  Solving 2 × 2 u' = Au
  Finding d_1 and d_2
  A Matrix Method for Finding d_1 and d_2
  Other Representations of the Solution u
  Another General Solution of u' = Au
  Change of Basis Equation

Characteristic Equation

Definition 1 (Characteristic Equation)
Given a square matrix A, the characteristic equation of A is the polynomial equation

    det(A - rI) = 0.

The determinant det(A - rI) is formed by subtracting r from each diagonal entry of A. The polynomial p(r) = det(A - rI) is called the characteristic polynomial.

If A is 2 × 2, then p(r) is a quadratic. If A is 3 × 3, then p(r) is a cubic. The determinant is expanded by the cofactor rule, in order to preserve factorizations.

Characteristic Equation Examples

Create det(A - rI) by subtracting r from the diagonal of A, then evaluate by the cofactor rule.

    A = [ 2  3 ],      p(r) = | 2-r   3  | = (2-r)(4-r)
        [ 0  4 ]              |  0   4-r |

    A = [ 2  3  4 ],   p(r) = | 2-r   3    4  |
        [ 0  5  6 ]           |  0   5-r   6  | = (2-r)(5-r)(7-r)
        [ 0  0  7 ]           |  0    0   7-r |
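Computations like these are easy to cross-check in a computer algebra system. A minimal sketch using Python with sympy (an assumed tool, not part of the slides), for the 3 × 3 example:

```python
import sympy as sp

r = sp.symbols('r')
A = sp.Matrix([[2, 3, 4],
               [0, 5, 6],
               [0, 0, 7]])
# Form det(A - rI) by subtracting r from the diagonal of A.
p = (A - r * sp.eye(3)).det()
# Factoring recovers the roots r = 2, 5, 7 of this triangular example.
print(sp.factor(p))
```

Because A is upper triangular, the determinant is just the product of the diagonal entries (2 - r)(5 - r)(7 - r), which is what the cofactor expansion preserves.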

Cayley-Hamilton

Theorem 1 (Cayley-Hamilton)
A square matrix A satisfies its own characteristic equation. If

    p(r) = (-r)^n + a_{n-1} (-r)^{n-1} + ... + a_1 (-r) + a_0,

then the result is the equation

    (-A)^n + a_{n-1} (-A)^{n-1} + ... + a_1 (-A) + a_0 I = 0,

where I is the n × n identity matrix and 0 is the n × n zero matrix.

Cayley-Hamilton Example

Assume

    A = [ 2  3  4 ]
        [ 0  5  6 ]
        [ 0  0  7 ]

Then

    p(r) = | 2-r   3    4  |
           |  0   5-r   6  | = (2-r)(5-r)(7-r)
           |  0    0   7-r |

and the Cayley-Hamilton Theorem says that

    (2I - A)(5I - A)(7I - A) = [ 0  0  0 ]
                               [ 0  0  0 ]
                               [ 0  0  0 ]
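The factored matrix identity above can be checked numerically; a quick sketch with numpy (assumed available):

```python
import numpy as np

A = np.array([[2, 3, 4],
              [0, 5, 6],
              [0, 0, 7]], dtype=float)
I = np.eye(3)
# Cayley-Hamilton for this A: (2I - A)(5I - A)(7I - A) is the zero matrix.
P = (2 * I - A) @ (5 * I - A) @ (7 * I - A)
print(np.allclose(P, 0))  # True
```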

Cayley-Hamilton-Ziebur

Theorem 2 (Cayley-Hamilton-Ziebur Structure Theorem for u' = Au)
Each component function u_k(t) of the vector solution u(t) of u'(t) = Au(t) is a solution of the nth order linear homogeneous constant-coefficient differential equation whose characteristic equation is det(A - rI) = 0.

The theorem implies that the vector solution u(t) of u' = Au is a vector linear combination of atoms constructed from the roots of the characteristic equation det(A - rI) = 0.

Proof of the Cayley-Hamilton-Ziebur Theorem

Consider the case n = 2, because the proof details are similar in higher dimensions.

    r^2 + a_1 r + a_0 = 0            Expanded characteristic equation
    A^2 + a_1 A + a_0 I = 0          Cayley-Hamilton matrix equation
    A^2 u + a_1 A u + a_0 u = 0      Right-multiply by u = u(t)
    u' = Au,  u'' = A^2 u            Differentiate u' = Au
    u'' + a_1 u' + a_0 u = 0         Replace A^2 u by u'' and A u by u'

Then the components x(t), y(t) of u(t) satisfy the two differential equations

    x''(t) + a_1 x'(t) + a_0 x(t) = 0,
    y''(t) + a_1 y'(t) + a_0 y(t) = 0.

This system implies that the components of u(t) are solutions of the second order DE with characteristic equation det(A - rI) = 0.
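The structure theorem can be illustrated symbolically for the 2 × 2 matrix A = [ 1  2 ; 2  1 ] used later in these slides. Here det(A - rI) = r^2 - 2r - 3, so each component of the solution u(t) = exp(At) u(0) should satisfy x'' - 2x' - 3x = 0. A sketch with sympy (assumed tool):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [2, 1]])
u0 = sp.Matrix([1, 2])
# Exact solution of u' = Au via the matrix exponential: u(t) = exp(At) u(0)
u = (A * t).exp() * u0
# det(A - rI) = r^2 - 2r - 3, so each component solves x'' - 2x' - 3x = 0.
for x in u:
    residual = sp.diff(x, t, 2) - 2 * sp.diff(x, t) - 3 * x
    print(sp.simplify(residual))  # 0
```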

Cayley-Hamilton-Ziebur Method

The Cayley-Hamilton-Ziebur Method for u' = Au
Let atom_1, ..., atom_n denote the atoms constructed from the nth order characteristic equation det(A - rI) = 0 by Euler's Theorem. The solution of u' = Au is given for some constant vectors d_1, ..., d_n by the equation

    u(t) = (atom_1) d_1 + ... + (atom_n) d_n

Cayley-Hamilton-Ziebur Method Conclusions

Solving u' = Au is reduced to finding the constant vectors d_1, ..., d_n. The vectors d_j are not arbitrary: they are uniquely determined by A and u(0)!

A general method to find them is to differentiate the equation

    u(t) = (atom_1) d_1 + ... + (atom_n) d_n

n - 1 times, then set t = 0 and replace u^(k)(0) by A^k u(0) [because u' = Au, u'' = Au' = AAu, etc.]. The resulting n equations in the vector unknowns d_1, ..., d_n can be solved by elimination.

If all atoms constructed are base atoms constructed from real roots, then each d_j is a constant multiple of a real eigenvector of A: the atom e^{rt} corresponds to the eigenpair equation Av = rv.

A 2 × 2 Illustration

Let's solve

    u' = [ 1  2 ] u,    u(0) = [ 1 ]
         [ 2  1 ]              [ 2 ]

The characteristic polynomial of the non-triangular matrix A = [ 1  2 ; 2  1 ] is

    | 1-r   2  | = (1-r)^2 - 4 = (r+1)(r-3).
    |  2   1-r |

Euler's theorem implies the solution atoms are e^{-t}, e^{3t}. Then u is a vector linear combination of the solution atoms,

    u = e^{-t} d_1 + e^{3t} d_2.

How to Find d_1 and d_2

We solve for the vectors d_1, d_2 in the equation

    u = e^{-t} d_1 + e^{3t} d_2.

Advice: define d_0 = u(0) = [ 1 ; 2 ]. Differentiate the relation above and replace u' via u' = Au; then set t = 0 and replace u(0) by d_0 in the two formulas to obtain the relations

    d_0   =  d_1 +   d_2
    A d_0 = -d_1 + 3 d_2

We solve for d_1, d_2 by elimination. Adding the equations gives d_0 + A d_0 = 4 d_2 (and similarly 3 d_0 - A d_0 = 4 d_1), and then d_0 = [ 1 ; 2 ] implies

    d_1 = (3/4) d_0 - (1/4) A d_0 = [ -1/2 ]
                                    [  1/2 ]

    d_2 = (1/4) d_0 + (1/4) A d_0 = [ 3/2 ]
                                    [ 3/2 ]
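The elimination above is small enough to verify numerically. A sketch with numpy (an assumed tool), using the names d0, d1, d2 for the vectors:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
d0 = np.array([1.0, 2.0])   # d0 = u(0)
Ad0 = A @ d0                # A d0 = [5, 4]
# Elimination formulas: 4 d1 = 3 d0 - A d0 and 4 d2 = d0 + A d0
d1 = (3 * d0 - Ad0) / 4
d2 = (d0 + Ad0) / 4
print(d1, d2)               # [-0.5  0.5] [1.5 1.5]
# The two relations that determined d1, d2:
print(np.allclose(d1 + d2, d0), np.allclose(-d1 + 3 * d2, Ad0))  # True True
```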

Summary of the 2 × 2 Illustration

The solution of the dynamical system

    u' = [ 1  2 ] u,    u(0) = [ 1 ]
         [ 2  1 ]              [ 2 ]

is a vector linear combination of the solution atoms e^{-t}, e^{3t} given by the equation

    u = e^{-t} [ -1/2 ] + e^{3t} [ 3/2 ]
               [  1/2 ]          [ 3/2 ]

A Matrix Method for Finding d_1 and d_2

The Cayley-Hamilton-Ziebur Method produces a unique solution for d_1, d_2 because the coefficient matrix

    [  e^0   e^0  ]
    [ -e^0  3e^0  ]

is exactly the Wronskian matrix W of the basis of atoms e^{-t}, e^{3t} evaluated at t = 0. This same fact applies no matter the number of coefficients d_1, d_2, ... to be determined.

The answer for d_1 and d_2 can be written in matrix form in terms of the transpose W^T of the Wronskian matrix as

    aug(d_1, d_2) = aug(d_0, A d_0) (W^T)^{-1}.

Solving a 2 × 2 Initial Value Problem by the Matrix Method

    u' = Au,    u(0) = [ 1 ],    A = [ 1  2 ]
                       [ 2 ]         [ 2  1 ]

Then

    d_0 = [ 1 ],    A d_0 = [ 1  2 ] [ 1 ] = [ 5 ]
          [ 2 ]             [ 2  1 ] [ 2 ]   [ 4 ]

and

    aug(d_1, d_2) = [ 1  5 ] ( [  1  1 ]^T )^{-1} = [ -1/2  3/2 ]
                    [ 2  4 ] ( [ -1  3 ]    )       [  1/2  3/2 ]

The solution of the initial value problem is

    u(t) = e^{-t} [ -1/2 ] + e^{3t} [ 3/2 ] = [ -(1/2) e^{-t} + (3/2) e^{3t} ]
                  [  1/2 ]          [ 3/2 ]   [  (1/2) e^{-t} + (3/2) e^{3t} ]
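The matrix form aug(d_1, d_2) = aug(d_0, A d_0) (W^T)^{-1} translates directly into code. A numpy sketch (assumed tool):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
d0 = np.array([1.0, 2.0])
# Wronskian of the atoms e^{-t}, e^{3t} at t = 0:
# first row the atoms, second row their derivatives, evaluated at t = 0.
W = np.array([[1.0, 1.0],
              [-1.0, 3.0]])
# aug(d1, d2) = aug(d0, A d0) (W^T)^{-1}
D = np.column_stack([d0, A @ d0]) @ np.linalg.inv(W.T)
d1, d2 = D[:, 0], D[:, 1]
print(d1, d2)  # [-0.5  0.5] [1.5 1.5]
```

This reproduces the hand computation: d_1 = [-1/2, 1/2] and d_2 = [3/2, 3/2].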

Other Representations of the Solution u

Let y_1(t), ..., y_n(t) be a solution basis for the nth order linear homogeneous constant-coefficient differential equation whose characteristic equation is det(A - rI) = 0. Consider the solution basis atom_1, atom_2, ..., atom_n. Each atom is a linear combination of y_1, ..., y_n. Replacing the atoms in the formula

    u(t) = (atom_1) d_1 + ... + (atom_n) d_n

by these linear combinations implies there are constant vectors D_1, ..., D_n (generally different from the d_j above) such that

    u(t) = y_1(t) D_1 + ... + y_n(t) D_n

Another General Solution of u' = Au

Theorem 3 (General Solution)
The unique solution of u' = Au, u(0) = d_0 is

    u(t) = φ_1(t) d_0 + φ_2(t) A d_0 + ... + φ_n(t) A^{n-1} d_0

where φ_1, ..., φ_n are linear combinations of atoms constructed from roots of the characteristic equation det(A - rI) = 0, such that

    Wronskian(φ_1, ..., φ_n)(0) = I.

Proof of the Theorem

Proof: Details will be given for n = 3. The details for arbitrary matrix dimension n are a routine modification of this proof.

The Wronskian condition implies φ_1, φ_2, φ_3 are independent. Then each atom constructed from the characteristic equation is a linear combination of φ_1, φ_2, φ_3. It follows that the unique solution u can be written for some vectors d_1, d_2, d_3 as

    u(t) = φ_1(t) d_1 + φ_2(t) d_2 + φ_3(t) d_3.

Differentiate this equation twice, then set t = 0 in all three equations. The relations u' = Au and u'' = Au' = AAu imply the three equations

    d_0     = φ_1(0) d_1   + φ_2(0) d_2   + φ_3(0) d_3
    A d_0   = φ_1'(0) d_1  + φ_2'(0) d_2  + φ_3'(0) d_3
    A^2 d_0 = φ_1''(0) d_1 + φ_2''(0) d_2 + φ_3''(0) d_3

Because the Wronskian matrix at t = 0 is the identity I, these equations reduce to

    d_0     = 1 d_1 + 0 d_2 + 0 d_3
    A d_0   = 0 d_1 + 1 d_2 + 0 d_3
    A^2 d_0 = 0 d_1 + 0 d_2 + 1 d_3

which implies d_1 = d_0, d_2 = A d_0, d_3 = A^2 d_0. The claimed formula for u(t) is established and the proof is complete.

Change of Basis Equation

Illustrated here is the change of basis formula for n = 3; the formula for general n is similar. Let φ_1(t), φ_2(t), φ_3(t) denote the linear combinations of atoms obtained from the vector formula

    ( φ_1(t), φ_2(t), φ_3(t) ) = ( atom_1(t), atom_2(t), atom_3(t) ) C^{-1}

where C = Wronskian(atom_1, atom_2, atom_3)(0).

The solutions φ_1(t), φ_2(t), φ_3(t) are called the principal solutions of the linear homogeneous constant-coefficient differential equation constructed from the characteristic equation det(A - rI) = 0. They satisfy the initial conditions

    Wronskian(φ_1, φ_2, φ_3)(0) = I.
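For the 2 × 2 illustration earlier (atoms e^{-t}, e^{3t}), the change of basis equation and Theorem 3 can be checked together. A sympy sketch (assumed tool):

```python
import sympy as sp

t = sp.symbols('t')
atoms = sp.Matrix([[sp.exp(-t), sp.exp(3 * t)]])  # row vector of atoms
# C = Wronskian(atom_1, atom_2)(0): atoms in row 1, their derivatives in row 2
C = sp.Matrix([[1, 1],
               [-1, 3]])
# Change of basis: (phi_1, phi_2) = (atom_1, atom_2) C^{-1}
phi = sp.simplify(atoms * C.inv())
# Principal solutions satisfy Wronskian(phi_1, phi_2)(0) = I
W0 = sp.Matrix([[phi[0], phi[1]],
                [sp.diff(phi[0], t), sp.diff(phi[1], t)]]).subs(t, 0)
print(W0)  # Matrix([[1, 0], [0, 1]])
# Theorem 3: u(t) = phi_1(t) d_0 + phi_2(t) A d_0 solves u' = Au, u(0) = d_0
A = sp.Matrix([[1, 2], [2, 1]])
d0 = sp.Matrix([1, 2])
u = sp.simplify(phi[0] * d0 + phi[1] * (A * d0))
print(u)  # same solution found earlier: -(1/2)e^{-t} + (3/2)e^{3t}, (1/2)e^{-t} + (3/2)e^{3t}
```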