CHAPTER 2. Eigenvalue Problems (EVP's) for ODE's




A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS

DE CLASS NOTES 4

A COLLECTION OF HANDOUTS ON PARTIAL DIFFERENTIAL EQUATIONS (PDE's)

CHAPTER 2
Eigenvalue Problems (EVP's) for ODE's

1. Eigenvalue Problems for Second Order Linear ODE's
2. Basis of Eigenvectors

Handout #1    EIGENVALUE PROBLEMS FOR SECOND ORDER LINEAR ODE'S    Professor Moseley

We consider the Eigenvalue Problem (EVP) for Second Order Linear ODE's (it is a special case of a more general EVP known as a regular Sturm-Liouville problem):

EVP    ODE    -(p(x)y')' - q(x)y = λy
       BC's   y(x₀) = 0,  y(x₁) = 0.

We assume p ∈ C¹(I) ∩ C(Ī), q ∈ C(Ī), and p(x) > 0 for all x ∈ Ī (e.g. p(x) = 1), where I = (x₀, x₁). Note that if λ is given, then this is a homogeneous boundary value problem (BVP). We can rewrite the problem as

EVP    ODE    p(x)y'' + p'(x)y' + (q(x) + λ)y = 0
       BC's   y(x₀) = 0,  y(x₁) = 0.

The theory of this EVP is similar to that of the matrix EVP Ax = λx, or (A − λI)x = 0. Since for fixed λ we have a homogeneous BVP, the "problem" EVP always has at least one solution, namely y = 0, no matter what the value of λ. However, the Eigenvalue Problem (EVP) is not simply to solve the BVP for a given λ. By definition, the EVP requires you to find all of the values of λ such that the BVP defined by that value of λ has at least one nontrivial solution. Remember that if it has one nontrivial solution, then by linearity it has an infinite number of solutions (all scalar multiples of solutions are solutions). In fact, the solution set will be a nontrivial subspace of C²(I) ∩ C(Ī), where I = (x₀, x₁). As with matrix EVP's, often this subspace will be one dimensional; that is, all of the eigenvectors (we will call them eigenfunctions, since our vector space is now a function space) are multiples of a single (non-unique) function which is a basis for the eigenspace.

THEOREM #1. The eigenvalue problem EVP defined above is self-adjoint. (We could also say that the operator in the eigenvalue problem is self-adjoint, but you must remember that the operator includes the boundary conditions as well as the differential operator.) The important thing to remember is that since the problem is self-adjoint, the eigenvalues are real. Also, the eigenfunctions can be chosen to be real.
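The analogy with the matrix EVP Ax = λx can be made concrete by discretization. The sketch below is an illustration, not part of the notes: the coefficients p(x) = 1 + x and q(x) = x are arbitrary choices satisfying the requirement p > 0. Approximating -(p(x)y')' - q(x)y by centered finite differences, with p sampled at the grid midpoints, produces a symmetric matrix; symmetry is the discrete counterpart of self-adjointness, so the eigenvalues come out real, just as Theorem #1 asserts.

```python
import numpy as np

# Illustration (not from the notes): discretize the Sturm-Liouville operator
# -(p(x)y')' - q(x)y on I = (0, 1) with y(0) = y(1) = 0. The coefficients
# below are arbitrary choices with p > 0, as the notes require.
p = lambda x: 1.0 + x
q = lambda x: x

N = 300                                        # number of interior grid points
h = 1.0 / (N + 1)                              # grid spacing
x = np.linspace(h, 1.0 - h, N)                 # interior grid points
p_left, p_right = p(x - h / 2), p(x + h / 2)   # p at the half-grid (midpoint) nodes

# Row j approximates [p_{j-1/2}(y_j - y_{j-1}) + p_{j+1/2}(y_j - y_{j+1})]/h^2 - q(x_j) y_j
A = (np.diag((p_left + p_right) / h**2 - q(x))
     - np.diag(p_right[:-1] / h**2, 1)
     - np.diag(p_left[1:] / h**2, -1))

assert np.allclose(A, A.T)          # symmetric: the discrete analogue of self-adjointness
lam = np.linalg.eigvalsh(A)         # eigvalsh: real eigenvalues of a symmetric matrix
assert np.all(np.isreal(lam))       # real, as Theorem #1 asserts for the continuous EVP
```

The midpoint sampling p(x ± h/2) is what makes the matrix exactly symmetric; differencing the expanded form p(x)y'' + p'(x)y' term by term would not preserve that symmetry.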
If the ODE has constant coefficients, the procedure for solving an EVP is similar to the procedure for solving a BVP or IVP for second order ODE's: since λ is a constant, we first find the general solution to the ODE and then apply the boundary conditions. We illustrate with an example.

EXAMPLE #1. Solve the eigenvalue problem

EVP    ODE    y'' + λy = 0
       BC's   y(0) = 0,  y(1) = 0.

Solution. Since the problem (or the operator which defines the problem) is self-adjoint, the eigenvalues are all real. The general solution of the ODE depends on three cases.

Case 1. λ < 0. In this case letting y = e^(rx) yields the auxiliary equation r² + λ = 0 or (since −λ > 0) r = ±√(−λ). Hence we obtain y = C₁ exp(√(−λ) x) + C₂ exp(−√(−λ) x).

Case 2. λ = 0. In this case letting y = e^(rx) yields the auxiliary equation r² = 0, so r = 0, 0 (a repeated root). Hence we obtain y = C₁x + C₂.

Case 3. λ > 0. In this case letting y = e^(rx) yields the auxiliary equation r² + λ = 0 or (since λ > 0) r = ±i√λ. Hence we obtain y = C₁ sin(√λ x) + C₂ cos(√λ x).

We now apply the boundary conditions in these three cases to determine if there are any (real) values of λ for which the "problem" EVP has nontrivial solutions.

Case 1. λ < 0. Recall that the general solution of the ODE for this case is y = C₁ exp(√(−λ) x) + C₂ exp(−√(−λ) x). Applying the boundary conditions yields

0 = C₁ exp(0) + C₂ exp(0)
0 = C₁ exp(√(−λ)) + C₂ exp(−√(−λ)).

The first condition is independent of the value of λ and yields C₂ = −C₁. Substituting into the second equation (which is not independent of λ) yields

C₁ exp(√(−λ)) − C₁ exp(−√(−λ)) = 0,  or  C₁ (exp(√(−λ)) − exp(−√(−λ))) = 0.

This means C₁ = 0 or exp(√(−λ)) − exp(−√(−λ)) = 0. If C₁ = 0, then C₂ = −C₁ = 0, so that y = 0; that is, we get only the trivial solution. There are two ways to handle the case when exp(√(−λ)) − exp(−√(−λ)) = 0. We do one; the other is left as an exercise.
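Before doing the algebra, a quick numerical check (an illustration, not part of the notes) shows what to expect: writing s = √(−λ) ≥ 0, the function eˢ − e⁻ˢ = 2 sinh(s) is strictly increasing (its derivative eˢ + e⁻ˢ is at least 2) and vanishes only at s = 0, so the condition exp(√(−λ)) − exp(−√(−λ)) = 0 can never hold for λ < 0.

```python
import numpy as np

# Illustration (not from the notes): the Case 1 condition is
# exp(s) - exp(-s) = 0 with s = sqrt(-lambda) >= 0. This function equals
# 2*sinh(s); it is zero at s = 0 and strictly positive for all s > 0,
# so no lambda < 0 can satisfy the condition.
s = np.linspace(0.0, 10.0, 100001)
f = np.exp(s) - np.exp(-s)
assert np.allclose(f, 2.0 * np.sinh(s))  # matches the definition of sinh
assert f[0] == 0.0                       # the only zero, at s = 0
assert np.all(f[1:] > 0.0)               # strictly positive on the sampled s > 0
```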

If you remember the definition of the hyperbolic sine function, sinh(x) = (eˣ − e⁻ˣ)/2, then dividing the condition by 2 yields sinh(√(−λ)) = 0. Since (if you remember the properties of the sinh function) the only zero of the sinh function is at zero, we obtain √(−λ) = 0, so that λ = 0. However, we are considering the case λ < 0. Hence no λ < 0 is an eigenvalue.

EXERCISE. Using only the properties of eˣ and ln x, show that if exp(√(−λ)) − exp(−√(−λ)) = 0, then λ = 0.

The bottom line is that there are no eigenvalues in Case 1.

Case 2. λ = 0. Recall that the general solution of the ODE for this case is y = C₁x + C₂. Applying the boundary conditions yields

0 = C₁(0) + C₂,
0 = C₁(1) + C₂.

The first condition implies C₂ = 0. The second condition then implies C₁ = −C₂ = 0. Hence y = 0. Thus we get only the trivial solution, and λ = 0 is not an eigenvalue.

Case 3. λ > 0. Recall that the general solution of the ODE for this case is y = C₁ sin(√λ x) + C₂ cos(√λ x). Applying the boundary conditions yields

0 = C₁ sin(0) + C₂ cos(0)
0 = C₁ sin(√λ) + C₂ cos(√λ).

The first condition is independent of the value of λ and yields C₂ = 0. Substituting into the second equation (which is not independent of λ) yields C₁ sin(√λ) = 0. This means C₁ = 0 or sin(√λ) = 0. If C₁ = 0, then y = 0; that is, we get only the trivial solution. Hence our only hope of finding eigenvalues is that the equation sin(√λ) = 0 has positive solutions. This equation is therefore called the characteristic equation for this eigenvalue problem. Recalling from the graph of the sine function that the zeros of the sine function are 0, ±π, ±2π, ±3π, ..., that is, ±nπ where n ∈ {0} ∪ N with N = {1, 2, 3, 4, ...}, we see that the solutions are given by

√λ = nπ,  or  λ = n²π².

Recall that n = 0, ±1, ±2, ±3, .... However, if n = 0, then λ = 0; but we are in the case where λ > 0, so we must throw this value of n out. Also note that if n is negative we get the same eigenvalues as we do when n is positive. Before throwing out the negative values of n, we must first examine the eigenfunctions. Recall that since sin(√λ) = 0, there is no requirement that C₁ = 0. Hence the nontrivial solutions of the "problem" EVP are y = C₁ sin(√λ x), that is, y = C₁ sin(nπx). Note that if n is negative, then using a trig identity (sin(−θ) = −sin θ), the minus sign can be taken out and incorporated into the arbitrary constant C₁. Hence the set of eigenfunctions is just the set of all scalar multiples of yₙ = sin(nπx), no matter whether n is positive or negative. Hence we obtain the table explicitly (you should list them this way when solving problems):

TABLE

Eigenvalues       Associated Eigenfunctions
λ₁ = π²           y₁ = sin(πx)
λ₂ = 4π²          y₂ = sin(2πx)
λ₃ = 9π²          y₃ = sin(3πx)
  ...               ...
λₙ = n²π²         yₙ = sin(nπx)

or, more compactly,

TABLE

Eigenvalues                       Associated Eigenfunctions
λₙ = n²π²,  n = 1, 2, 3, ...      yₙ = sin(nπx),  n = 1, 2, 3, ...

Note that there are an infinite number of eigenvalues, each with a one-dimensional eigenspace. The listed functions are the basis functions for the eigenspaces. The hope is that the entire collection will be a basis for the entire space; that is, that we have a full set of eigenfunctions. This will be explained more fully later.
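The table can be checked numerically. The sketch below is an illustration, not part of the notes: it builds the standard finite-difference matrix for −y'' with y(0) = y(1) = 0 and compares its smallest eigenpairs against λₙ = n²π² and yₙ = sin(nπx).

```python
import numpy as np

# Illustration (not from the notes): finite-difference check of the table.
# The matrix A discretizes -y'' on (0, 1) with y(0) = y(1) = 0; its smallest
# eigenvalues should approximate n^2*pi^2 and its eigenvectors should
# approximate samples of sin(n*pi*x).
N = 400
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)                      # interior grid points
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

vals, vecs = np.linalg.eigh(A)                      # eigenvalues in ascending order
for n in (1, 2, 3):
    # eigenvalue close to n^2 * pi^2 (Table: pi^2, 4*pi^2, 9*pi^2, ...)
    assert abs(vals[n - 1] - (n * np.pi) ** 2) < 0.01 * (n * np.pi) ** 2
    # eigenvector proportional to sin(n*pi*x), up to scale and sign
    v = vecs[:, n - 1]
    v = v / np.max(np.abs(v))
    target = np.sin(n * np.pi * x)
    err = min(np.max(np.abs(v - target)), np.max(np.abs(v + target)))
    assert err < 1e-2
```

Eigenfunctions are only determined up to a nonzero scalar, which is why the comparison normalizes v and allows either overall sign.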

Handout #3    BASIS OF EIGENVECTORS    Prof. Moseley