8 Hyperbolic Systems of First-Order Equations
Ref: Evans, Sec. 7.3.

8.1 Definitions and Examples

Let $U : \mathbb{R}^n \times (0,\infty) \to \mathbb{R}^m$. Let $A_i(x,t)$ be an $m \times m$ matrix for $i = 1, \ldots, n$. Let $F : \mathbb{R}^n \times (0,\infty) \to \mathbb{R}^m$. Consider the system

$$U_t + \sum_{i=1}^n A_i(x,t)\, U_{x_i} = F(x,t). \qquad (8.1)$$

Fix $\xi \in \mathbb{R}^n$ and let

$$A(x,t;\xi) \equiv \sum_{i=1}^n A_i(x,t)\,\xi_i.$$

The system (8.1) is hyperbolic if $A(x,t;\xi)$ is diagonalizable for all $x, \xi \in \mathbb{R}^n$, $t > 0$. In particular, a system is hyperbolic if for all $x, \xi \in \mathbb{R}^n$, $t > 0$ the matrix $A(x,t;\xi)$ has $m$ real eigenvalues

$$\lambda_1(x,t;\xi) \le \lambda_2(x,t;\xi) \le \cdots \le \lambda_m(x,t;\xi)$$

with corresponding eigenvectors $\{r_i(x,t;\xi)\}_{i=1}^m$ which form a basis for $\mathbb{R}^m$.

There are two special cases of hyperbolicity, which we now define. If $A_i(x,t)$ is symmetric for $i = 1, \ldots, n$, then $A(x,t;\xi)$ is symmetric for all $\xi \in \mathbb{R}^n$. Recall that a symmetric $m \times m$ matrix is diagonalizable. When the matrices $A_i(x,t)$ are all symmetric, we say the system (8.1) is symmetric hyperbolic. If $A(x,t;\xi)$ has $m$ real, distinct eigenvalues

$$\lambda_1(x,t;\xi) < \lambda_2(x,t;\xi) < \cdots < \lambda_m(x,t;\xi)$$

for all $x, \xi \in \mathbb{R}^n$, $t > 0$, then $A(x,t;\xi)$ is diagonalizable, and we say the system (8.1) is strictly hyperbolic.

Example 1. A $2 \times 2$ system $U_t + A U_x = F(x,t)$ whose constant coefficient matrix $A$ has two real, distinct eigenvalues is strictly hyperbolic.

Example 2. A $2 \times 2$ system $U_t + A U_x = F(x,t)$ whose constant coefficient matrix $A$ is symmetric is symmetric hyperbolic.
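Hyperbolicity of a concrete system can be probed numerically: sample unit directions $\xi$ and test whether $A(\xi)$ has real eigenvalues and a full basis of eigenvectors. The following NumPy sketch is my own illustration, not part of the notes; the rank-based diagonalizability check is only a numerical proxy, and the sample matrices are hypothetical.

```python
import numpy as np

def is_hyperbolic(A_list, n_samples=200, tol=1e-9, seed=0):
    """Numerical proxy for hyperbolicity of U_t + sum_i A_i U_{x_i} = F:
    for sampled unit directions xi, A(xi) = sum_i xi_i A_i must have real
    eigenvalues and a full set of linearly independent eigenvectors."""
    rng = np.random.default_rng(seed)
    n, m = len(A_list), A_list[0].shape[0]
    for _ in range(n_samples):
        xi = rng.normal(size=n)
        xi /= np.linalg.norm(xi)
        A_xi = sum(c * A for c, A in zip(xi, A_list))
        lam, R = np.linalg.eig(A_xi)
        if np.max(np.abs(lam.imag)) > tol:
            return False  # complex eigenvalues: not hyperbolic
        if np.linalg.matrix_rank(R.real, tol=1e-8) < m:
            return False  # eigenvectors do not span R^m
    return True

# Symmetric coefficient matrices give a symmetric hyperbolic system:
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[2.0, 0.0], [0.0, -1.0]])
print(is_hyperbolic([A1, A2]))                               # True
# A rotation-like matrix has eigenvalues +/- i, so it fails:
print(is_hyperbolic([np.array([[0.0, 1.0], [-1.0, 0.0]])]))  # False
```

The symmetric case never fails the test, in line with the definition of symmetric hyperbolicity above.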

Motivation

Recall that every linear, constant-coefficient, second-order hyperbolic equation can be written as

$$u_{tt} - c^2 \Delta u + \cdots = 0$$

through a change of variables, where $\cdots$ represents lower-order terms. One of the distinguishing features of the wave equation is that it has wave-like solutions. In particular, for the wave equation

$$u_{tt} - c^2 u_{xx} = 0,$$

the general solution is given by

$$u(x,t) = f(x + ct) + g(x - ct),$$

the sum of a wave moving to the left, $f(x+ct)$, and a wave moving to the right, $g(x-ct)$. Such solutions are known as travelling waves. More generally, for the wave equation in $\mathbb{R}^n$,

$$u_{tt} - c^2 \Delta u = 0, \qquad x \in \mathbb{R}^n, \qquad (8.2)$$

for any (smooth) function $f$ and any $\xi \in \mathbb{R}^n$,

$$u(x,t) = f(\xi \cdot x - \sigma t)$$

is a solution of (8.2) for $\sigma = \pm c|\xi|$. A solution of the form $f(\xi \cdot x - \sigma t)$ is known as a plane wave solution.

Motivated by the existence of plane wave solutions for the wave equation, we look for properties of the system (8.1) under which it too has plane wave solutions. The conditions under which plane wave solutions exist lead us to the definition of hyperbolicity given above.

First, we rewrite the wave equation as a system in the form of (8.1). Consider the wave equation in one spatial dimension,

$$u_{tt} = u_{xx}.$$

Let

$$U = \begin{bmatrix} u_x \\ u_t \end{bmatrix}.$$

Then

$$U_t = \begin{bmatrix} u_{xt} \\ u_{tt} \end{bmatrix} = \begin{bmatrix} u_{tx} \\ u_{xx} \end{bmatrix} = A U_x, \qquad \text{where } A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$

In general, for $x \in \mathbb{R}^n$, consider

$$u_{tt} = \Delta u.$$
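The relation $\sigma = \pm c|\xi|$ can be confirmed numerically: substitute $u(x,t) = f(\xi \cdot x - \sigma t)$ into the wave equation and check that the residual $u_{tt} - c^2 \Delta u$ vanishes. A sketch with illustrative choices of my own ($f = \sin$, $\xi = (1,2)$, $c = 2$), using central differences:

```python
import numpy as np

# Check that u(x,t) = f(xi . x - sigma*t) solves u_tt = c^2 Laplacian(u)
# when sigma = c|xi|.  Illustrative choices: f = sin, xi = (1, 2), c = 2.
c = 2.0
xi = np.array([1.0, 2.0])
sigma = c * np.linalg.norm(xi)

def u(x, t):
    return np.sin(xi @ x - sigma * t)

h = 1e-3
x0, t0 = np.array([0.3, -0.7]), 0.5

# second derivatives by central differences
u_tt = (u(x0, t0 + h) - 2.0 * u(x0, t0) + u(x0, t0 - h)) / h**2
lap = 0.0
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    lap += (u(x0 + e, t0) - 2.0 * u(x0, t0) + u(x0 - e, t0)) / h**2

residual = u_tt - c**2 * lap
print(abs(residual))  # small: only finite-difference error remains
```

Choosing $\sigma \ne \pm c|\xi|$ instead would leave an $O(1)$ residual, since $u_{tt} - c^2\Delta u = (c^2|\xi|^2 - \sigma^2) f''$.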

Let

$$U = \begin{bmatrix} u_{x_1} \\ \vdots \\ u_{x_n} \\ u_t \end{bmatrix}.$$

Then

$$U_t = \begin{bmatrix} u_{x_1 t} \\ \vdots \\ u_{x_n t} \\ u_{tt} \end{bmatrix}
= \begin{bmatrix} u_{t x_1} \\ \vdots \\ u_{t x_n} \\ u_{x_1 x_1} + \cdots + u_{x_n x_n} \end{bmatrix}
= \sum_{i=1}^n A_i U_{x_i},$$

where each $A_i$ is the $(n+1) \times (n+1)$ symmetric matrix whose entries $a^i_{jk}$ are given by

$$a^i_{jk} = \begin{cases} 1 & j = i,\ k = n+1, \text{ or } j = n+1,\ k = i, \\ 0 & \text{otherwise.} \end{cases}$$

Now we claim that for $A_i$ as defined above, for each $\xi \in \mathbb{R}^n$ there are $m = n+1$ distinct plane wave solutions $U(x,t) = V(\xi \cdot x - \sigma t)$ of

$$U_t = \sum_{i=1}^n A_i U_{x_i}.$$

In particular, define $A(\xi) \equiv \sum_{i=1}^n A_i \xi_i$, and let $\lambda_i(\xi)$, $R_i(\xi)$ be the $i$th eigenvalue of $A(\xi)$ and a corresponding eigenvector. Let

$$U(x,t) = f(\xi \cdot x + \lambda_i(\xi)\, t)\, R_i(\xi),$$

a plane wave with $\sigma = -\lambda_i(\xi)$. Then

$$U_t = \lambda_i(\xi)\, f'(\xi \cdot x + \lambda_i(\xi)\, t)\, R_i(\xi)
\qquad \text{and} \qquad
U_{x_j} = \xi_j\, f'(\xi \cdot x + \lambda_i(\xi)\, t)\, R_i(\xi).$$
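The matrices $A_i$ above are easy to build programmatically. The following sketch (my own illustration, not part of the notes) constructs them, confirms that each is symmetric, and checks the known spectrum of $A(\xi)$ for this system: eigenvalues $\pm|\xi|$ and $0$ with multiplicity $n-1$, i.e. the plane-wave speeds of the wave equation.

```python
import numpy as np

def wave_system_matrices(n):
    """Build the (n+1)x(n+1) matrices A_i with a^i_{jk} = 1 when
    (j = i, k = n+1) or (j = n+1, k = i), else 0 (the notes use
    1-based indices; here they are shifted to 0-based)."""
    mats = []
    for i in range(n):
        A = np.zeros((n + 1, n + 1))
        A[i, n] = 1.0
        A[n, i] = 1.0
        mats.append(A)
    return mats

n = 3
mats = wave_system_matrices(n)
assert all(np.array_equal(A, A.T) for A in mats)  # each A_i is symmetric

# A(xi) = sum_i xi_i A_i is a symmetric bordered matrix with eigenvalues
# {|xi|, -|xi|, 0 (n-1 times)}.
xi = np.array([1.0, 2.0, 2.0])                    # |xi| = 3
A_xi = sum(c * A for c, A in zip(xi, mats))
lam = np.sort(np.linalg.eigvalsh(A_xi))
print(lam)                                        # approximately [-3, 0, 0, 3]
```

The zero eigenvalues correspond to stationary modes; only the $\pm|\xi|$ pair carries the familiar two travelling waves.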

Therefore,

$$U_t = \sum_{j=1}^n A_j U_{x_j}
\iff \lambda_i(\xi)\, f' R_i(\xi) = f' \Big( \sum_{j=1}^n \xi_j A_j \Big) R_i(\xi)
\iff A(\xi) R_i(\xi) = \lambda_i(\xi) R_i(\xi),$$

which holds by the definition of $\lambda_i(\xi)$ and $R_i(\xi)$. Now notice that $A_i$ is a symmetric matrix for $i = 1, \ldots, n$. Therefore $A(\xi) = \sum_{i=1}^n A_i \xi_i$ is an $m \times m$ symmetric matrix for each $\xi \in \mathbb{R}^n$. Consequently, $A(\xi)$ has $m$ real eigenvalues and $m$ linearly independent eigenvectors $R_i(\xi)$. Therefore, for each $\xi \in \mathbb{R}^n$ and each eigenvalue/eigenvector pair $\lambda_i(\xi), R_i(\xi)$, we get a distinct plane wave solution $U(x,t) = V(\xi \cdot x - \sigma t)$ with $\sigma = -\lambda_i(\xi)$.

We use this fact to define hyperbolicity for systems of the form (8.1). In particular, we want to find a condition on the system (8.1) under which there are $m$ distinct plane wave solutions for each $\xi \in \mathbb{R}^n$. We look for a solution of (8.1) of the form $U(x,t) = V(\xi \cdot x - \sigma t)$. Plugging a function $U$ of this form into (8.1) with $F(x,t) \equiv 0$, we see this requires

$$-\sigma V' + \Big( \sum_{i=1}^n \xi_i A_i \Big) V' = 0. \qquad (8.3)$$

Now if $\sum_{i=1}^n \xi_i A_i$ is an $m \times m$ diagonalizable matrix, then (8.3) has $m$ linearly independent solutions $V_1', \ldots, V_m'$: the eigenvectors of $\sum_{i=1}^n \xi_i A_i$, corresponding to its $m$ eigenvalues $\sigma_1, \ldots, \sigma_m$. As a result, if $\sum_{i=1}^n \xi_i A_i$ is diagonalizable, then we have $m$ plane wave solutions of (8.1). This criterion gives us the definition of hyperbolicity described above.

8.2 Solving Hyperbolic Systems

In this section, we will solve hyperbolic systems of the form

$$U_t + A U_x = F(x,t), \qquad (8.4)$$

where $A$ is a constant-coefficient matrix. Note that if (8.4) is hyperbolic, then $A$ must be a diagonalizable matrix. Therefore, there exist an $m \times m$ invertible matrix $Q$ and an $m \times m$ diagonal matrix $\Lambda$ such that

$$Q^{-1} A Q = \Lambda.$$

In particular, $\Lambda$ is the diagonal matrix of eigenvalues of $A$ and the columns of $Q$ are corresponding eigenvectors. Therefore, $A = Q \Lambda Q^{-1}$. Substituting this into (8.4), our system becomes

$$U_t + Q \Lambda Q^{-1} U_x = F(x,t). \qquad (8.5)$$

Now multiplying (8.5) by $Q^{-1}$, our system becomes

$$Q^{-1} U_t + \Lambda\, Q^{-1} U_x = Q^{-1} F(x,t).$$

Letting $V = Q^{-1} U$, we arrive at the decoupled system

$$V_t + \Lambda V_x = Q^{-1} F(x,t).$$

Remark: If $A$ is symmetric, the eigenvectors may be chosen to be orthonormal, in which case $Q^{-1} = Q^T$.

Example 3. Find a solution to the initial-value problem

$$\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}_t + \begin{bmatrix} 1 & 4 \\ 4 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}_x = 0,
\qquad u_1(x,0) = \sin x, \quad u_2(x,0) = \cos x. \qquad (8.6)$$

Here

$$A = \begin{bmatrix} 1 & 4 \\ 4 & 1 \end{bmatrix}$$

is a symmetric matrix. Therefore, it has two real eigenvalues and its eigenvectors can be chosen to form an orthonormal basis for $\mathbb{R}^2$; in particular, $A$ can be diagonalized. The eigenvalue/eigenvector pairs are given by

$$\lambda_1 = 5, \quad v_1 = \frac{1}{\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix},
\qquad
\lambda_2 = -3, \quad v_2 = \frac{1}{\sqrt 2}\begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$

Therefore, $A = Q \Lambda Q^T$ where

$$Q = \frac{1}{\sqrt 2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = Q^T,
\qquad
\Lambda = \begin{bmatrix} 5 & 0 \\ 0 & -3 \end{bmatrix}.$$

Our system can be rewritten as

$$Q^T \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}_t + \Lambda\, Q^T \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}_x = 0.$$

Letting $V = Q^T U$, our system becomes

$$V_t + \Lambda V_x = 0, \qquad
V(x,0) = \begin{bmatrix} v_1(x,0) \\ v_2(x,0) \end{bmatrix} = Q^T \begin{bmatrix} u_1(x,0) \\ u_2(x,0) \end{bmatrix}.$$

That is, we have two separate transport equations,

$$\begin{cases} (v_1)_t + 5 (v_1)_x = 0, \\ v_1(x,0) = \tfrac{1}{\sqrt 2}(\sin x + \cos x), \end{cases}
\qquad
\begin{cases} (v_2)_t - 3 (v_2)_x = 0, \\ v_2(x,0) = \tfrac{1}{\sqrt 2}(\sin x - \cos x). \end{cases}$$

Our solutions are given by

$$v_1(x,t) = \tfrac{1}{\sqrt 2}\big(\sin(x - 5t) + \cos(x - 5t)\big),
\qquad
v_2(x,t) = \tfrac{1}{\sqrt 2}\big(\sin(x + 3t) - \cos(x + 3t)\big).$$

Now $V = Q^T U$ implies $U = Q V$. Therefore, our solution is given by

$$U(x,t) = \frac{1}{2}\begin{bmatrix}
\sin(x - 5t) + \cos(x - 5t) + \sin(x + 3t) - \cos(x + 3t) \\
\sin(x - 5t) + \cos(x - 5t) - \sin(x + 3t) + \cos(x + 3t)
\end{bmatrix}.$$

Remark. Notice in the above example that the value of the solution $U$ at the point $(x_0, t_0)$ depends only on the values of the initial conditions at the points $x_0 - 5t_0$ and $x_0 + 3t_0$. In the next section, we prove that for a symmetric hyperbolic system of the form

$$U_t + \sum_{i=1}^n A_i U_{x_i} = 0,$$

the domain of dependence for a solution at the point $(x_0, t_0)$ is contained within the region

$$\{(x,t) : |x - x_0| \le M (t_0 - t)\},$$

where $M$ is an upper bound on the absolute values of the eigenvalues of $A(\xi) = \sum_{i=1}^n A_i \xi_i$ over all $\xi \in \mathbb{R}^n$ such that $|\xi| = 1$; that is,

$$M = \max_{\substack{i = 1, \ldots, m \\ |\xi| = 1}} |\lambda_i(\xi)|,$$

where $\lambda_i(\xi)$ are the $m$ eigenvalues of $A(\xi)$.
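The closed-form solution of Example 3 can be checked numerically: it should reproduce the initial data at $t = 0$ and give a vanishing residual in $U_t + AU_x = 0$. A sketch of such a check (the finite-difference verification is my own addition, not part of the notes):

```python
import numpy as np

# Verify the Example 3 solution: U_t + A U_x = 0 with A = [[1,4],[4,1]],
# u1(x,0) = sin x, u2(x,0) = cos x.
A = np.array([[1.0, 4.0], [4.0, 1.0]])

def U(x, t):
    s1 = np.sin(x - 5*t) + np.cos(x - 5*t)   # v1-part, speed 5
    s2 = np.sin(x + 3*t) - np.cos(x + 3*t)   # v2-part, speed -3
    return 0.5 * np.array([s1 + s2, s1 - s2])

# initial conditions are reproduced exactly
x = np.linspace(0.0, 2.0 * np.pi, 7)
assert np.allclose(U(x, 0.0)[0], np.sin(x))
assert np.allclose(U(x, 0.0)[1], np.cos(x))

# PDE residual U_t + A U_x at a sample point, by central differences
h, x0, t0 = 1e-5, 0.4, 0.9
U_t = (U(x0, t0 + h) - U(x0, t0 - h)) / (2.0 * h)
U_x = (U(x0 + h, t0) - U(x0 - h, t0)) / (2.0 * h)
print(np.max(np.abs(U_t + A @ U_x)))  # ~ 0 (finite-difference error only)
```

The same recipe works for any constant symmetric $A$: diagonalize, solve the decoupled transport equations, and transform back with $U = QV$.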

8.3 Domain of Dependence for a Symmetric Hyperbolic System

In the above example, we saw that the domain of dependence for a solution $U$ at the point $(x_0, t_0)$ is contained within the triangular region

$$\{(x,t) : x_0 - 5(t_0 - t) \le x \le x_0 + 3(t_0 - t),\ 0 \le t \le t_0\}.$$

More generally, for any symmetric hyperbolic system of the form

$$U_t + A U_x = F(x,t), \qquad x \in \mathbb{R},$$

the domain of dependence is contained within the region

$$\{(x,t) : x_0 - M(t_0 - t) \le x \le x_0 + M(t_0 - t),\ 0 \le t \le t_0\},$$

where $M = \max_i |\lambda_i|$ and $\lambda_i$ are the $m$ eigenvalues of $A$.

This idea extends to systems in higher dimensions as well. Consider the symmetric hyperbolic system

$$U_t + \sum_{i=1}^n A_i U_{x_i} = 0, \qquad x \in \mathbb{R}^n,$$

where each $A_i$ is an $m \times m$ symmetric matrix. Let

$$A(\xi) \equiv \sum_{i=1}^n \xi_i A_i,$$

let $\lambda_i(\xi)$, $i = 1, \ldots, m$, be the $m$ eigenvalues of $A(\xi)$, and let

$$M = \max_{\substack{i = 1, \ldots, m \\ |\xi| = 1}} |\lambda_i(\xi)|.$$

We claim that $M$ is an upper bound on the speed of waves in any direction. We state this more precisely in the following theorem. First, we make some definitions. Let $(x_0, t_0) \in \mathbb{R}^n \times (0,\infty)$ and fix $t_1$ with $0 \le t_1 \le t_0$. Let

$$B = \{x \in \mathbb{R}^n : |x - x_0| \le M t_0\},$$
$$C = \{x \in \mathbb{R}^n : |x - x_0| \le M (t_0 - t_1)\},$$
$$S = \{(x,t) : 0 \le t \le t_1,\ |x - x_0| = M (t_0 - t)\}.$$

Theorem 4. Let $(x_0, t_0) \in \mathbb{R}^n \times (0,\infty)$. Let $B$, $C$ be as defined above. Assume $U$ is a solution of

$$U_t + \sum_{i=1}^n A_i U_{x_i} = 0, \qquad (8.7)$$

where each $A_i$ is an $m \times m$ constant-coefficient, symmetric matrix. If $U \equiv 0$ on $B$, then $U \equiv 0$ on $C$.
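The constant $M$ can be estimated numerically by sampling unit directions $\xi$ and maximizing $|\lambda_i(\xi)|$. In the sketch below (my own illustration), the matrices are chosen so that $A(\xi)$ has eigenvalues $\pm|\xi|$, hence $M = 1$ exactly; for general symmetric $A_i$ the sampled maximum is only a lower estimate of $M$.

```python
import numpy as np

# Estimate M = max over i and |xi| = 1 of |lambda_i(xi)| by sampling.
# Illustrative matrices: A(xi) = [[xi2, xi1], [xi1, -xi2]] has
# eigenvalues +/- |xi|, so M = 1.
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[1.0, 0.0], [0.0, -1.0]])

def max_wave_speed(A_list, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    M = 0.0
    for _ in range(n_samples):
        xi = rng.normal(size=len(A_list))
        xi /= np.linalg.norm(xi)
        A_xi = sum(c * A for c, A in zip(xi, A_list))
        M = max(M, np.max(np.abs(np.linalg.eigvalsh(A_xi))))
    return M

print(max_wave_speed([A1, A2]))  # approximately 1.0
```

`eigvalsh` is appropriate here because each sampled $A(\xi)$ is symmetric.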

Proof. Let $\Omega$ be the region bounded by $B$, $C$, and $S$, so that $\partial\Omega = B \cup C \cup S$. Multiplying (8.7) by $U^T$ and integrating over $\Omega$, we have

$$\iint_\Omega U^T \Big( U_t + \sum_{i=1}^n A_i U_{x_i} \Big)\, dx\, dt = 0.$$

First, we note that

$$U^T U_t = \tfrac{1}{2} \partial_t |U|^2.$$

Second, we have

$$U^T A_i U_{x_i} = \tfrac{1}{2} (U^T A_i U)_{x_i},$$

using the fact that each $A_i$ is symmetric. Then, by the divergence theorem, we have

$$\iint_\Omega \tfrac{1}{2} \partial_t |U|^2\, dx\, dt = \int_{\partial\Omega} \tfrac{1}{2} |U|^2\, \nu_t\, dS,$$

where $\nu = (\nu_1, \ldots, \nu_n, \nu_t)$ is the outward unit normal on $\partial\Omega$. Now

$$\int_{\partial\Omega} \tfrac{1}{2} |U|^2\, \nu_t\, dS
= \tfrac{1}{2}\int_C |U|^2\, dx - \tfrac{1}{2}\int_B |U|^2\, dx + \int_S \tfrac{1}{2} |U|^2\, \nu_t\, dS,$$

since $\nu_t = 1$ on $C$ and $\nu_t = -1$ on $B$. On $S$,

$$\nu = \frac{\big(x_1 - x_1^0, \ldots, x_n - x_n^0,\ M^2 (t_0 - t)\big)}{(t_0 - t)\, M \sqrt{1 + M^2}},
\qquad \text{so} \qquad
\nu_t = \frac{M}{\sqrt{1 + M^2}}.$$

Therefore,

$$\int_{\partial\Omega} \tfrac{1}{2} |U|^2\, \nu_t\, dS
= \tfrac{1}{2}\int_C |U|^2\, dx - \tfrac{1}{2}\int_B |U|^2\, dx
+ \tfrac{1}{2}\int_S |U|^2\, \frac{M}{\sqrt{1 + M^2}}\, dS.$$

Similarly, we use the divergence theorem on our other term:

$$\iint_\Omega \tfrac{1}{2} \sum_{i=1}^n (U^T A_i U)_{x_i}\, dx\, dt
= \int_{\partial\Omega} \tfrac{1}{2} \sum_{i=1}^n (U^T A_i U)\, \nu_i\, dS
= \int_S \tfrac{1}{2} \sum_{i=1}^n (U^T A_i U)\, \frac{x_i - x_i^0}{(t_0 - t)\, M \sqrt{1 + M^2}}\, dS,$$

since $\nu_i = 0$ on $B$ and $C$.
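The formula for the outward unit normal on the lateral surface $S$ can be sanity-checked numerically: for any point with $|x - x_0| = M(t_0 - t)$, the stated vector should have unit length and time component $M/\sqrt{1 + M^2}$. A small sketch (my own check; the parameter values are arbitrary):

```python
import numpy as np

# Verify that nu = (x - x0, M^2 (t0 - t)) / ((t0 - t) M sqrt(1 + M^2))
# is a unit vector with nu_t = M / sqrt(1 + M^2) on the cone S.
rng = np.random.default_rng(0)
M, t0, n = 2.5, 1.0, 3
x0 = np.zeros(n)
for _ in range(100):
    t = rng.uniform(0.0, 0.9 * t0)
    d = rng.normal(size=n)
    x = x0 + M * (t0 - t) * d / np.linalg.norm(d)  # |x - x0| = M (t0 - t)
    nu = np.append(x - x0, M**2 * (t0 - t)) / ((t0 - t) * M * np.sqrt(1 + M**2))
    assert np.isclose(np.linalg.norm(nu), 1.0)
    assert np.isclose(nu[-1], M / np.sqrt(1 + M**2))
print("normal formula verified")
```

This matches the hand computation: the spatial part contributes $1/(1+M^2)$ and the time part $M^2/(1+M^2)$ to $|\nu|^2$.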

Let

$$\xi_i \equiv \frac{x_i - x_i^0}{(t_0 - t)\, M}.$$

Then $|\xi| = 1$ on $S$, and therefore

$$\iint_\Omega \tfrac{1}{2} \sum_{i=1}^n (U^T A_i U)_{x_i}\, dx\, dt
= \frac{1}{2\sqrt{1 + M^2}} \int_S U^T \Big( \sum_{i=1}^n \xi_i A_i \Big) U\, dS
= \frac{1}{2\sqrt{1 + M^2}} \int_S U^T A(\xi)\, U\, dS.$$

Therefore, we have

$$0 = \iint_\Omega U^T \Big( U_t + \sum_{i=1}^n A_i U_{x_i} \Big)\, dx\, dt
= \tfrac{1}{2}\int_C |U|^2\, dx - \tfrac{1}{2}\int_B |U|^2\, dx
+ \frac{1}{2\sqrt{1 + M^2}} \int_S \big( |U|^2 M + U^T A(\xi)\, U \big)\, dS.$$

But

$$|U|^2 M + U^T A(\xi)\, U = U^T \big( M I + A(\xi) \big) U \ge 0,$$

since the eigenvalues of $A(\xi)$ are bounded below by $-M$. Therefore,

$$\int_C |U|^2\, dx \le \int_B |U|^2\, dx.$$

Therefore, if $U \equiv 0$ on $B$, then $U \equiv 0$ on $C$. $\square$

Now we prove uniqueness of solutions to symmetric hyperbolic systems.

Theorem 5 (Uniqueness). Consider the symmetric hyperbolic system

$$\begin{cases} U_t + \displaystyle\sum_{i=1}^n A_i U_{x_i} = F(x,t), & x \in \mathbb{R}^n, \\ U(x,0) = \Phi(x), \end{cases}$$

where the initial data $\Phi(x)$ has compact support. Then there exists at most one (smooth) solution.

Proof. Suppose there are two smooth solutions $U_1$ and $U_2$ with the same initial data $\Phi$. Let $W(x,t) \equiv U_1(x,t) - U_2(x,t)$. Then $W$ solves the homogeneous system and $W(x,0) \equiv 0$. We claim $W(x,t) \equiv 0$. Define the energy function

$$E(t) \equiv \int_{\mathbb{R}^n} |W|^2\, dx.$$
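The key inequality above says that $MI + A(\xi)$ is positive semidefinite for every unit vector $\xi$, because no eigenvalue of $A(\xi)$ lies below $-M$. A numerical sketch with hypothetical symmetric matrices of my own choosing:

```python
import numpy as np

# Check that M*I + A(xi) is positive semidefinite for sampled unit xi,
# where M = max |eigenvalue of A(xi)| over the same samples.
A1 = np.array([[2.0, 1.0], [1.0, 0.0]])
A2 = np.array([[0.0, -1.0], [-1.0, 3.0]])
rng = np.random.default_rng(1)
xis = rng.normal(size=(500, 2))
xis /= np.linalg.norm(xis, axis=1, keepdims=True)

def A_of(xi):
    return xi[0] * A1 + xi[1] * A2

M = max(np.max(np.abs(np.linalg.eigvalsh(A_of(xi)))) for xi in xis)
min_eig = min(np.min(np.linalg.eigvalsh(M * np.eye(2) + A_of(xi))) for xi in xis)
print(min_eig >= -1e-12)  # True: M*I + A(xi) is PSD on the samples
```

Since $U^T(MI + A(\xi))U$ is a quadratic form in $U$, positive semidefiniteness is exactly the pointwise inequality used in the proof.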

We know $E(0) = 0$. We claim $E(t) \equiv 0$, and, therefore, $W(x,t) \equiv 0$. We will show that $E(t) \equiv 0$ by showing that $E'(t) = 0$:

$$E'(t) = 2\int_{\mathbb{R}^n} W^T W_t\, dx
= -2\int_{\mathbb{R}^n} W^T \sum_{i=1}^n A_i W_{x_i}\, dx
= -\int_{\mathbb{R}^n} \sum_{i=1}^n (W^T A_i W)_{x_i}\, dx = 0,$$

using the fact that $\Phi$ having compact support implies $W(\cdot, t)$ has compact support (by the domain of dependence results we showed above), so the last integral vanishes. Therefore, $E'(t) = 0$, which combined with $E(0) = 0$ implies $E(t) = 0$ for all $t$. Therefore, $W \equiv 0$, which implies $U_1 \equiv U_2$ (assuming $U_1, U_2$ are smooth). $\square$
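The conclusion $E'(t) = 0$ can be seen concretely on a single decoupled component: for a periodic transport solution $v(x,t) = g(x - \lambda t)$, the integral of $|v|^2$ over one period does not change in time. A quick check (my own illustration, with $g = \sin$ and $\lambda = 5$; compact support is replaced here by periodicity, which also kills the boundary term):

```python
import numpy as np

# For a 2*pi-periodic transport solution v(x,t) = g(x - lam*t), the
# energy integral of |v|^2 over one period is independent of t.
lam = 5.0
x = np.linspace(0.0, 2.0 * np.pi, 2001)
dx = x[1] - x[0]

def energy(t):
    v = np.sin(x - lam * t)
    return np.sum(v[:-1] ** 2) * dx  # rectangle rule over one full period

E0, E1 = energy(0.0), energy(0.7)
print(E0, E1)  # both approximately pi
```

The rectangle rule is used deliberately: on a uniform grid over a full period it integrates trigonometric polynomials essentially exactly, so the two energies agree to machine precision.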