Complex Eigenvalues




Today we consider how to deal with complex eigenvalues in a linear homogeneous system of first order equations. We will also look back briefly at how what we have done with systems recapitulates what we did with second order equations.

Complex Eigenvalues

We know that to solve a system of n equations written in matrix form as x′ = Ax, we must find n linearly independent solutions x₁, ..., xₙ. In the case where A has n real and distinct eigenvalues, we have already solved the system by using the solutions e^{λᵢt} vᵢ, where the λᵢ and vᵢ are the eigenvalues and eigenvectors of A.

We now consider the case where A has complex eigenvalues. We will assume that A has only real entries. Then the characteristic polynomial det(A − rI) has real coefficients, and therefore any complex eigenvalues occur in conjugate pairs: r = a + bi and r̄ = a − bi. Only slightly more surprising is the fact that the eigenvectors also occur in conjugate pairs. For example, suppose we have eigenvalue r with eigenvector v. Then they satisfy the equation (A − rI)v = 0. Now if we take the complex conjugate of both sides, and note that both A and I have only real entries, we get (A − r̄I)v̄ = 0. Therefore, an eigenvector associated with r̄ is v̄! If we have a solution e^{rt} v, we also have its conjugate e^{r̄t} v̄, and this means that we also have its real and imaginary parts, since

Re(x) = (x + x̄)/2   and   Im(x) = (x − x̄)/(2i).
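This conjugate-pair behavior is easy to confirm numerically. A minimal sketch, assuming NumPy is available (the matrix here is an arbitrary real example with eigenvalues 1 ± i, not one from these notes):

```python
import numpy as np

# A real 2x2 matrix whose characteristic polynomial l^2 - 2l + 2
# has the complex roots 1 + i and 1 - i.
A = np.array([[0.0, 1.0],
              [-2.0, 2.0]])

vals, vecs = np.linalg.eig(A)

# For a real matrix, complex eigenvalues come in conjugate pairs.
r, s = vals
assert np.isclose(r, np.conj(s))

# And if v is an eigenvector for r, then conj(v) is an eigenvector for
# conj(r), because conjugating (A - rI)v = 0 leaves the real A and I fixed.
v = vecs[:, 0]
assert np.allclose(A @ np.conj(v), np.conj(r) * np.conj(v))
```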

Now let us write the eigenvector split into real and imaginary parts, as v = a + bi. Note that a and b are real vectors. If we also write our eigenvalue with real and imaginary parts as r = λ + μi, then one solution can be rewritten as follows:

(a + bi) e^{(λ+μi)t} = (a + bi) e^{λt} (cos μt + i sin μt)
                     = e^{λt} (a cos μt − b sin μt) + i e^{λt} (a sin μt + b cos μt)

Of course, we also have the complex conjugate of this solution. Therefore, we can get both the real and imaginary parts as solutions. So we have found two real solutions:

u(t) = e^{λt} (a cos μt − b sin μt)   and   v(t) = e^{λt} (a sin μt + b cos μt)

Solve the system

x′ = [ 6  −13 ] x
     [ 1    0 ]

First we find the eigenvalues of the matrix A in x′ = Ax:

det(A − λI) = | 6−λ  −13 | = (6 − λ)(−λ) + 13 = λ² − 6λ + 13 = 0
              |  1    −λ |

Solving for λ yields

λ = (6 ± √(36 − 52))/2 = (6 ± 4i)/2 = 3 ± 2i.

We only need to find the eigenvector associated with one of these eigenvalues. Let's find the eigenvector for λ = 3 + 2i by solving (A − λI)v = 0. We row-reduce the augmented matrix

[ 6 − (3+2i)   −13     | 0 ]   =   [ 3 − 2i   −13     | 0 ]
[ 1          −(3+2i)   | 0 ]       [ 1        −3 − 2i | 0 ]
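To see that u(t), the real part, really is a solution of x′ = Ax, one can compare a numerical derivative of u against Au. A sketch, assuming NumPy; the matrix is again an illustrative real 2×2 with eigenvalues 1 ± i:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, 2.0]])   # eigenvalues 1 +/- i

# Split the eigenvalue r = lam + mu*i and eigenvector v = a + b*i
# into real and imaginary parts, as in the text.
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.imag))          # take the eigenvalue with mu > 0
lam, mu = vals[k].real, vals[k].imag
a, b = vecs[:, k].real, vecs[:, k].imag

def u(t):
    # u(t) = e^{lam t} (a cos(mu t) - b sin(mu t))
    return np.exp(lam * t) * (a * np.cos(mu * t) - b * np.sin(mu * t))

# A centered finite difference of u should agree with A u(t).
t, h = 0.7, 1e-6
du = (u(t + h) - u(t - h)) / (2 * h)
assert np.allclose(du, A @ u(t), atol=1e-4)
```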

A useful trick to convert a complex value into a real value is to multiply by the complex conjugate. So, to get rid of the complex number in the first column of row one, let us multiply that row by the conjugate 3 + 2i. Then (3 − 2i)(3 + 2i) = 9 − 4i² = 9 + 4 = 13, and we get

[ 13   −39 − 26i | 0 ]   →   [ 1   −3 − 2i | 0 ]
[ 1    −3 − 2i   | 0 ]       [ 1   −3 − 2i | 0 ]

after also dividing through by 13 on row one. Then we can subtract row one from row two, and we end the row reduction with:

[ 1   −3 − 2i | 0 ]
[ 0      0    | 0 ]

We note that we now have v₁ − (3 + 2i)v₂ = 0 in the first row, and nothing in the second row. Note that, as expected, we have eliminated at least one row in solving for our eigenvectors. So we have v₁ = (3 + 2i)v₂, and v₂ is a free variable. Let's assign v₂ = 1, and then we have the eigenvalue/eigenvector pair

λ = 3 + 2i,   v = [ 3 + 2i ]
                  [   1    ]

So we get a solution of the form

[ 3 + 2i ] e^{(3+2i)t} = e^{3t} e^{2it} [ 3 + 2i ] = e^{3t} (cos 2t + i sin 2t) [ 3 + 2i ]
[   1    ]                              [   1    ]                              [   1    ]

Remember: e^{2it} = cos 2t + i sin 2t. Multiplying through and separating into real and imaginary parts yields

[ 3e^{3t} cos 2t − 2e^{3t} sin 2t + i(3e^{3t} sin 2t + 2e^{3t} cos 2t) ]
[ e^{3t} cos 2t + i e^{3t} sin 2t                                      ]

= [ 3e^{3t} cos 2t − 2e^{3t} sin 2t ] + i [ 3e^{3t} sin 2t + 2e^{3t} cos 2t ]
  [ e^{3t} cos 2t                   ]     [ e^{3t} sin 2t                   ]

We know that the real and imaginary parts are both solutions, so our general solution is

x = c₁ [ 3e^{3t} cos 2t − 2e^{3t} sin 2t ] + c₂ [ 3e^{3t} sin 2t + 2e^{3t} cos 2t ]
       [ e^{3t} cos 2t                   ]      [ e^{3t} sin 2t                   ]
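The eigenvalue/eigenvector pair found above is easy to double-check numerically, assuming NumPy:

```python
import numpy as np

# The matrix from the example.
A = np.array([[6.0, -13.0],
              [1.0, 0.0]])

# Its eigenvalues should be 3 +/- 2i.
vals = np.linalg.eigvals(A)
assert np.allclose(sorted(vals, key=lambda z: z.imag), [3 - 2j, 3 + 2j])

# And (3 + 2i, 1)^T should be an eigenvector for 3 + 2i.
lam = 3 + 2j
v = np.array([3 + 2j, 1.0])
assert np.allclose(A @ v, lam * v)
```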

If we wish to set an initial condition, such as x(0) = (5, 1)ᵀ, we can solve for c₁ and c₂:

c₁ [ 3e⁰ cos 0 − 2e⁰ sin 0 ] + c₂ [ 3e⁰ sin 0 + 2e⁰ cos 0 ] = c₁ [ 3 ] + c₂ [ 2 ] = [ 5 ]
   [ e⁰ cos 0              ]      [ e⁰ sin 0               ]     [ 1 ]      [ 0 ]   [ 1 ]

which gives us the following augmented matrix:

[ 3  2 | 5 ]
[ 1  0 | 1 ]

Row reduction leads to

[ 1  0 | 1 ]
[ 0  1 | 1 ]

so c₁ = 1 and c₂ = 1 are the required constants.

Solve the system x′ = __ x, x(0) = __ :

Eigenvalues:
Eigenvectors:
General solution: x =
Solving the initial condition: x(0) =
Solution:
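Matching constants to an initial condition is just a real 2×2 linear solve: the columns of the coefficient matrix are the two real solutions evaluated at t = 0. A sketch, assuming NumPy and the illustrative values from the worked example, x(0) = (5, 1)ᵀ:

```python
import numpy as np

# The two real solutions at t = 0 give the columns (3, 1)^T and (2, 0)^T.
M = np.array([[3.0, 2.0],
              [1.0, 0.0]])
x0 = np.array([5.0, 1.0])   # illustrative initial condition

# Solve M c = x0 for the constants c1, c2.
c = np.linalg.solve(M, x0)
assert np.allclose(c, [1.0, 1.0])
```

For a different problem, replace M and x0 accordingly.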

x =
x₁(t) =
x₂(t) =

Second Order Equations as Systems

We know that any order n equation can be converted to a system of n first order equations. Let's see what happens when we use this approach to solve a second order equation.

Solve y″ + y′ − 2y = 0.

We know the characteristic equation is r² + r − 2 = 0, which has roots r = 1 and r = −2. Thus we know the general solution is

y(t) = c₁ eᵗ + c₂ e^{−2t}.

If we first convert to a system, we set x₁ = y, x₂ = y′, and get the following:

x₁′ = x₂
x₂′ = 2x₁ − x₂

that is,  x′ = [ 0   1 ] x
               [ 2  −1 ]

We find our eigenvalues:

det(A − λI) = | −λ    1   | = (−λ)(−1 − λ) − 2 = λ² + λ − 2 = 0
              |  2   −1−λ |

Thus λ = 1 and λ = −2 are the eigenvalues.
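The eigenvalues of the system matrix must match the roots of the characteristic equation r² + r − 2 = 0; checking this numerically, assuming NumPy:

```python
import numpy as np

# System form of y'' + y' - 2y = 0 with x1 = y, x2 = y':
# x1' = x2, x2' = 2 x1 - x2.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])

# Eigenvalues of A equal the roots of r^2 + r - 2 = 0, i.e. 1 and -2.
vals = np.linalg.eigvals(A)
roots = np.roots([1.0, 1.0, -2.0])
assert np.allclose(np.sort(vals.real), [-2.0, 1.0])
assert np.allclose(np.sort(roots.real), np.sort(vals.real))
```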

We find our eigenvectors. For λ = 1, we solve (A − I)x = 0:

[ −1   1 ] x = [ 0 ]
[  2  −2 ]     [ 0 ]

We have the single relation x₂ = x₁, so we can use (1, 1)ᵀ. For λ = −2, we solve (A + 2I)x = 0:

[ 2  1 ] x = [ 0 ]
[ 2  1 ]     [ 0 ]

Here we get x₂ = −2x₁, so we use (1, −2)ᵀ. Thus, our general solution is

x = c₁ eᵗ [ 1 ] + c₂ e^{−2t} [  1 ]
          [ 1 ]              [ −2 ]

that is,

x₁(t) = c₁ eᵗ + c₂ e^{−2t}
x₂(t) = c₁ eᵗ − 2c₂ e^{−2t}

Since x₁ = y, we see that we have obtained the same solution as we did before.

Solve y″ + 4y = 0.

We know that the characteristic equation is r² + 4 = 0, so r = ±2i. Thus our general solution is

y(t) = c₁ cos 2t + c₂ sin 2t.

If we convert this to a system, we let x₁ = y and x₂ = y′ to get

x₁′ = x₂
x₂′ = −4x₁

that is,  x′ = [  0   1 ] x
               [ −4   0 ]

We get our eigenvalues:

det(A − λI) = | −λ   1  | = λ² + 4 = 0
              | −4  −λ  |

So our eigenvalues are λ = ±2i. We can then find eigenvectors. If λ = 2i, we get

[ −2i    1  ] x = [ 0 ]
[ −4   −2i  ]     [ 0 ]

Solving this, we get eigenvector (i, −2)ᵀ. The eigenvector for λ = −2i is then the conjugate, (−i, −2)ᵀ. So expanding the solution corresponding to λ = 2i and (i, −2)ᵀ into real and imaginary parts yields

(cos 2t + i sin 2t) [  i ] = [ −sin 2t   ] + i [  cos 2t   ]
                    [ −2 ]   [ −2 cos 2t ]     [ −2 sin 2t ]

So our solution is

x = c₁ [ −sin 2t   ] + c₂ [  cos 2t   ]
       [ −2 cos 2t ]      [ −2 sin 2t ]

The first row gives x₁ = −c₁ sin 2t + c₂ cos 2t. Since c₁ and c₂ could be any values, this is equivalent to the answer we would get from solving the second order equation previously.
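As a final check on this example, the pair λ = 2i, v = (i, −2)ᵀ and the resulting real solutions can be verified numerically, assuming NumPy:

```python
import numpy as np

# System form of y'' + 4y = 0: x1' = x2, x2' = -4 x1.
A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])

# lambda = 2i with eigenvector (i, -2)^T, as found above.
lam = 2j
v = np.array([1j, -2.0])
assert np.allclose(A @ v, lam * v)

# Real/imaginary parts of e^{2it} v: the first components are
# -sin(2t) and cos(2t), spanning the same solutions as
# y = c1 cos(2t) + c2 sin(2t).
t = np.linspace(0.0, 1.0, 5)
sol = np.exp(2j * t)[:, None] * v
assert np.allclose(sol.real[:, 0], -np.sin(2 * t))
assert np.allclose(sol.imag[:, 0], np.cos(2 * t))
```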