LS.6 Solution Matrices




LS.6 Solution Matrices

In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions and results for a $2\times 2$ system, but they generalize immediately to $n\times n$ systems.

1. Fundamental matrices. We return to the system

(1) $\mathbf{x}' = A(t)\,\mathbf{x}$,

with the general solution

(2) $\mathbf{x} = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t)$,

where $\mathbf{x}_1$ and $\mathbf{x}_2$ are two independent solutions to (1), and $c_1$ and $c_2$ are arbitrary constants. We form the matrix whose columns are the solutions $\mathbf{x}_1$ and $\mathbf{x}_2$:

(3) $X(t) = (\,\mathbf{x}_1\ \ \mathbf{x}_2\,) = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}$.

Since the solutions are linearly independent, we called them in LS.5 a fundamental set of solutions, and therefore we call the matrix in (3) a fundamental matrix for the system (1).

Writing the general solution using X(t). As a first application of $X(t)$, we can use it to write the general solution (2) efficiently. For according to (2), it is

$\mathbf{x} = c_1\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + c_2\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$,

which becomes, using the fundamental matrix,

(4) $\mathbf{x} = X(t)\,\mathbf{c}$, where $\mathbf{c} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$ (general solution to (1)).

Note that the vector $\mathbf{c}$ must be written on the right, even though the $c$'s are usually written on the left when they are the coefficients of the solutions $\mathbf{x}_i$.

Solving the IVP using X(t). We can now write down the solution to the IVP

(5) $\mathbf{x}' = A(t)\,\mathbf{x}$, $\quad \mathbf{x}(t_0) = \mathbf{x}_0$.

Starting from the general solution (4), we have to choose $\mathbf{c}$ so that the initial condition in (5) is satisfied. Substituting $t_0$ into (4) gives us the matrix equation for $\mathbf{c}$:

$X(t_0)\,\mathbf{c} = \mathbf{x}_0$.

Since the determinant $|X(t_0)|$ is the value at $t_0$ of the Wronskian of $\mathbf{x}_1$ and $\mathbf{x}_2$, it is non-zero because the two solutions are linearly independent (Theorem 5.2C). Therefore the inverse matrix exists (by LS.1), and the matrix equation above can be solved for $\mathbf{c}$:

$\mathbf{c} = X(t_0)^{-1}\,\mathbf{x}_0$;

using this value of $\mathbf{c}$ in (4), the solution to the IVP (5) can now be written

(6) $\mathbf{x} = X(t)\,X(t_0)^{-1}\,\mathbf{x}_0$.
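As a quick numerical sanity check of formula (6), the sketch below builds a fundamental matrix for a sample constant-coefficient system (this particular $A$ is a made-up illustration, not one of the examples in these notes) and verifies that $\mathbf{x}(t) = X(t)\,X(t_0)^{-1}\mathbf{x}_0$ satisfies both the initial condition and the ODE:

```python
import numpy as np

# Illustrative system x' = Ax (a made-up example, not from these notes):
# eigenvalues 1 and 2 with eigenvectors (1,0) and (1,1), so the columns of
# X(t) below are two independent solutions e^t (1,0) and e^{2t} (1,1).
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

def X(t):
    """A fundamental matrix: its columns are independent solutions of x' = Ax."""
    return np.array([[np.exp(t), np.exp(2*t)],
                     [0.0,       np.exp(2*t)]])

t0 = 0.0
x0 = np.array([3.0, 1.0])

def x(t):
    """Formula (6): x(t) = X(t) X(t0)^{-1} x0."""
    return X(t) @ np.linalg.solve(X(t0), x0)

# The initial condition x(t0) = x0 holds ...
assert np.allclose(x(t0), x0)

# ... and x(t) satisfies x' = Ax (checked with a centered difference at t = 0.7)
t, h = 0.7, 1e-6
assert np.allclose((x(t + h) - x(t - h)) / (2*h), A @ x(t), atol=1e-4)
```

Solving with `np.linalg.solve(X(t0), x0)` computes $X(t_0)^{-1}\mathbf{x}_0$ without forming the inverse explicitly, which is the usual numerical practice.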

Note that when the solution is written in this form, it's obvious that $\mathbf{x}(t_0) = \mathbf{x}_0$, i.e., that the initial condition in (5) is satisfied.

An equation for fundamental matrices. We have been saying "a" rather than "the" fundamental matrix since the system (1) doesn't have a unique fundamental matrix: there are many different ways to pick two independent solutions of $\mathbf{x}' = A\,\mathbf{x}$ to form the columns of $X$. It is therefore useful to have a way of recognizing a fundamental matrix when you see one. The following theorem is good for this; we'll need it shortly.

Theorem 6.1 $X(t)$ is a fundamental matrix for the system (1) if its determinant $|X(t)|$ is non-zero and it satisfies the matrix equation

(7) $X' = A\,X$,

where $X'$ means that each entry of $X$ has been differentiated.

Proof. Since $|X| \not\equiv 0$, its columns $\mathbf{x}_1$ and $\mathbf{x}_2$ are linearly independent, by section LS.5. And writing $X = (\,\mathbf{x}_1\ \ \mathbf{x}_2\,)$, (7) becomes, according to the rules for matrix multiplication,

$(\,\mathbf{x}_1'\ \ \mathbf{x}_2'\,) = A\,(\,\mathbf{x}_1\ \ \mathbf{x}_2\,) = (\,A\mathbf{x}_1\ \ A\mathbf{x}_2\,)$,

which shows that $\mathbf{x}_1' = A\,\mathbf{x}_1$ and $\mathbf{x}_2' = A\,\mathbf{x}_2$; this last line says that $\mathbf{x}_1$ and $\mathbf{x}_2$ are solutions to the system (1).

2. The normalized fundamental matrix. Is there a "best" choice for fundamental matrix? There are two common choices, each with its advantages. If the ODE system has constant coefficients, and its eigenvalues are real and distinct, then a natural choice for the fundamental matrix would be the one whose columns are the normal modes: the solutions of the form $\mathbf{x}_i = \boldsymbol{\alpha}_i e^{\lambda_i t}$, $i = 1, 2$.

There is another choice, however, suggested by (6), which is particularly useful in showing how the solution depends on the initial conditions. Suppose we pick $X(t)$ so that

(8) $X(t_0) = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.

Referring to the definition (3), this means the solutions $\mathbf{x}_1$ and $\mathbf{x}_2$ are picked so that

(8') $\mathbf{x}_1(t_0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $\quad \mathbf{x}_2(t_0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

Since the $\mathbf{x}_i(t)$ are uniquely determined by these initial conditions, the fundamental matrix $X(t)$ satisfying (8) is also unique; we give it a name.
Definition 6.2 The unique matrix $X_{t_0}(t)$ satisfying

(9) $X_{t_0}' = A\,X_{t_0}$, $\quad X_{t_0}(t_0) = I$

is called the normalized fundamental matrix at $t_0$ for $A$.

For convenience in use, the definition uses Theorem 6.1 to guarantee that $X_{t_0}$ will actually be a fundamental matrix; the condition $|X_{t_0}(t)| \neq 0$ in Theorem 6.1 is satisfied, since the definition implies $|X_{t_0}(t_0)| = 1$.

To keep the notation simple, we will assume in the rest of this section that $t_0 = 0$, as it almost always is; then $X_0$ is the normalized fundamental matrix. Since $X_0(0) = I$, we get from (6) the matrix form for the solution to an IVP:

(10) The solution to the IVP $\mathbf{x}' = A(t)\,\mathbf{x}$, $\mathbf{x}(0) = \mathbf{x}_0$ is $\mathbf{x}(t) = X_0(t)\,\mathbf{x}_0$.

Calculating $X_0$. One way is to find the two solutions in (8'), and use them as the columns of $X_0$. This is fine if the two solutions can be determined by inspection. If not, a simpler method is this: find any fundamental matrix $X(t)$; then

(11) $X_0(t) = X(t)\,X(0)^{-1}$.

To verify this, we have to see that the matrix on the right of (11) satisfies the two conditions in Definition 6.2. The second is trivial; the first is easy using the rule for matrix differentiation:

If $M = M(t)$ and $B$, $C$ are constant matrices, then $(BM)' = BM'$, $(MC)' = M'C$,

from which we see that, since $X$ is a fundamental matrix,

$\bigl(X(t)\,X(0)^{-1}\bigr)' = X'(t)\,X(0)^{-1} = A\,X(t)\,X(0)^{-1} = A\,\bigl(X(t)\,X(0)^{-1}\bigr)$,

showing that $X(t)\,X(0)^{-1}$ also satisfies the first condition in Definition 6.2.

Example 6.2A Find the solution to the IVP: $\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\mathbf{x}$, $\quad \mathbf{x}(0) = \mathbf{x}_0$.

Solution. Since the system is $x' = y$, $y' = -x$, we can find by inspection the fundamental set of solutions satisfying (8'):

$\mathbf{x}_1 = \begin{pmatrix} \cos t \\ -\sin t \end{pmatrix}$ and $\mathbf{x}_2 = \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$.

Thus by (10) the normalized fundamental matrix at 0 and the solution to the IVP is

$\mathbf{x} = X_0(t)\,\mathbf{x}_0 = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = x_0\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix} + y_0\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$.

Example 6.2B Give the normalized fundamental matrix at 0 for $\mathbf{x}' = \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}\mathbf{x}$.

Solution. This time the solutions (8') cannot be obtained by inspection, so we use the second method.
We calculated the normal modes for this system at the beginning of LS.2; using them as the columns of a fundamental matrix gives us

$X(t) = \begin{pmatrix} 3e^{2t} & e^{-2t} \\ e^{2t} & -e^{-2t} \end{pmatrix}$.

Using (11) and the formula for calculating the inverse matrix given in LS.1, we get

$X(0) = \begin{pmatrix} 3 & 1 \\ 1 & -1 \end{pmatrix}$, $\quad X(0)^{-1} = \frac{1}{4}\begin{pmatrix} 1 & 1 \\ 1 & -3 \end{pmatrix}$,

so that

$X_0(t) = \frac{1}{4}\begin{pmatrix} 3e^{2t} & e^{-2t} \\ e^{2t} & -e^{-2t} \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -3 \end{pmatrix} = \frac{1}{4}\begin{pmatrix} 3e^{2t} + e^{-2t} & 3e^{2t} - 3e^{-2t} \\ e^{2t} - e^{-2t} & e^{2t} + 3e^{-2t} \end{pmatrix}$.

3. The exponential matrix. The work in the preceding section with fundamental matrices was valid for any linear homogeneous square system of ODE's, $\mathbf{x}' = A(t)\,\mathbf{x}$. However, if the system has constant coefficients, i.e., the matrix $A$ is a constant matrix, the results are usually expressed by using the exponential matrix, which we now define.

Recall that if $x$ is any real number, then

(12) $e^x = 1 + x + \dfrac{x^2}{2!} + \cdots + \dfrac{x^n}{n!} + \cdots$.

Definition 6.3 Given an $n\times n$ constant matrix $A$, the exponential matrix $e^A$ is the $n\times n$ matrix defined by

(13) $e^A = I + A + \dfrac{A^2}{2!} + \cdots + \dfrac{A^n}{n!} + \cdots$.

Each term on the right side of (13) is an $n\times n$ matrix; adding up the $ij$-th entry of each of these matrices gives you an infinite series whose sum is the $ij$-th entry of $e^A$. (The series always converges.)

In the applications, an independent variable $t$ is usually included:

(14) $e^{At} = I + At + A^2\dfrac{t^2}{2!} + \cdots + A^n\dfrac{t^n}{n!} + \cdots$.

This is not a new definition; it's just (13) applied to the matrix $At$, in which every element of $A$ has been multiplied by $t$, since for example $(At)^2 = At \cdot At = A^2 t^2$.

Try out (13) and (14) on these two examples; the first is worked out in your book (Example 2, p. 47); the second is easy, since it is not an infinite series.

Example 6.3A Let $A = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$; show: $e^A = \begin{pmatrix} e^a & 0 \\ 0 & e^b \end{pmatrix}$, $\quad e^{At} = \begin{pmatrix} e^{at} & 0 \\ 0 & e^{bt} \end{pmatrix}$.

Example 6.3B Let $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$; show: $e^A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, $\quad e^{At} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$.

What's the point of the exponential matrix? The answer is given by the theorem below, which says that the exponential matrix provides a royal road to the solution of a square

system with constant coefficients: no eigenvectors, no eigenvalues; you just write down the answer!

Theorem 6.3 Let $A$ be a square constant matrix. Then

(15) (a) $e^{At} = X_0(t)$, the normalized fundamental matrix at 0;

(16) (b) the unique solution to the IVP $\mathbf{x}' = A\,\mathbf{x}$, $\mathbf{x}(0) = \mathbf{x}_0$ is $\mathbf{x} = e^{At}\,\mathbf{x}_0$.

Proof. Statement (16) follows immediately from (15), in view of (10). We prove (15) by using the description of a normalized fundamental matrix given in Definition 6.2: letting $X = e^{At}$, we must show $X' = AX$ and $X(0) = I$. The second condition follows from substituting $t = 0$ into the infinite series definition (14) for $e^{At}$. To show $X' = AX$, we assume that we can differentiate the series (14) term-by-term; then we have for the individual terms

(17) $\dfrac{d}{dt}\,A^n\dfrac{t^n}{n!} = A^n\dfrac{t^{n-1}}{(n-1)!}$,

since $A^n$ is a constant matrix. Differentiating (14) term-by-term then gives

(18) $\dfrac{dX}{dt} = \dfrac{d}{dt}\,e^{At} = A + A^2 t + \cdots + A^n\dfrac{t^{n-1}}{(n-1)!} + \cdots = A\,e^{At} = A\,X$.

Calculation of $e^{At}$. The main use of the exponential matrix is in (16): writing down explicitly the solution to an IVP. If $e^{At}$ has to be actually calculated for a specific system, several techniques are available.

a) In simple cases, it can be calculated directly as an infinite series of matrices.

b) It can always be calculated, according to Theorem 6.3, as the normalized fundamental matrix $X_0(t)$, using (11): $X_0(t) = X(t)\,X(0)^{-1}$.

c) A third technique uses the exponential law

(19) $e^{(B+C)t} = e^{Bt}\,e^{Ct}$, valid if $BC = CB$.

To use it, one looks for constant matrices $B$ and $C$ such that

(20) $A = B + C$, $\quad BC = CB$, $\quad e^{Bt}$ and $e^{Ct}$ are computable;

then

(21) $e^{At} = e^{Bt}\,e^{Ct}$.

Example 6.3C Let $A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$. Solve $\mathbf{x}' = A\,\mathbf{x}$, $\mathbf{x}(0) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, using $e^{At}$.

Solution. We set $B = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$ and $C = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$; then (20) is satisfied, and

$e^{At} = \begin{pmatrix} e^{2t} & 0 \\ 0 & e^{2t} \end{pmatrix}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} = e^{2t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$,

by (21) and Examples 6.3A and 6.3B. Therefore, by (16), we get

$\mathbf{x} = e^{At}\,\mathbf{x}_0 = e^{2t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = e^{2t}\begin{pmatrix} 1 + 2t \\ 2 \end{pmatrix}$.

Exercises: Sections 4G,H
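If SciPy is available, Example 6.3C can be double-checked numerically: `scipy.linalg.expm` computes a matrix exponential, so the sketch below (our own check, not part of the original notes) confirms the exponential law (19) for the commuting pair $B$, $C$ and the final answer $\mathbf{x} = e^{2t}(1+2t,\,2)$:

```python
import numpy as np
from scipy.linalg import expm  # numerical matrix exponential

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = np.array([[2.0, 0.0],
              [0.0, 2.0]])
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# (20): A = B + C with BC = CB
assert np.allclose(A, B + C) and np.allclose(B @ C, C @ B)

t = 0.8
# (19)/(21): e^{At} = e^{Bt} e^{Ct}, valid here because B and C commute
assert np.allclose(expm(A*t), expm(B*t) @ expm(C*t))

# (16): the IVP solution, which Example 6.3C found to be x = e^{2t} (1 + 2t, 2)
x0 = np.array([1.0, 2.0])
assert np.allclose(expm(A*t) @ x0, np.exp(2*t) * np.array([1 + 2*t, 2.0]))
```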

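The series definition (14) can also be checked directly: the short sketch below sums the first several terms of $I + At + A^2 t^2/2! + \cdots$ and compares the result with the closed forms from Examples 6.3A and 6.3B. (The helper name `expm_series` is ours, chosen for illustration.)

```python
import numpy as np
from math import factorial

def expm_series(A, t, terms=20):
    """Truncation of series (14): e^{At} is approximately sum of (At)^n / n!."""
    result = np.zeros_like(A, dtype=float)
    power = np.eye(len(A))          # (At)^0 = I
    for n in range(terms):
        result += power / factorial(n)
        power = power @ (A * t)     # advance to (At)^{n+1}
    return result

t = 0.5

# Example 6.3A: a diagonal A gives a diagonal exponential (here a = 2, b = -1)
A = np.array([[2.0, 0.0], [0.0, -1.0]])
assert np.allclose(expm_series(A, t), np.diag([np.exp(2*t), np.exp(-t)]))

# Example 6.3B: a nilpotent A, so the series terminates after two terms
A = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(expm_series(A, t), np.array([[1.0, t], [0.0, 1.0]]))
```

Truncating the series is only practical for small $t$ and well-behaved $A$; production code should prefer a library routine such as `scipy.linalg.expm`.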
MIT OpenCourseWare
http://ocw.mit.edu

18.03 Differential Equations
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.