8 Hyperbolic Systems of First-Order Equations

Ref: Evans, Sec. 7.3

8.1 Definitions and Examples

Let $U : \mathbb{R}^n \times (0,\infty) \to \mathbb{R}^m$, let $A_i(x,t)$ be an $m \times m$ matrix for $i = 1, \ldots, n$, and let $F : \mathbb{R}^n \times (0,\infty) \to \mathbb{R}^m$. Consider the system
\[
U_t + \sum_{i=1}^n A_i(x,t)\, U_{x_i} = F(x,t). \tag{8.1}
\]
Fix $\xi \in \mathbb{R}^n$ and let
\[
A(x,t;\xi) \equiv \sum_{i=1}^n A_i(x,t)\, \xi_i.
\]
The system (8.1) is hyperbolic if $A(x,t;\xi)$ is diagonalizable for all $x, \xi \in \mathbb{R}^n$, $t > 0$. In particular, a system is hyperbolic if for all $x, \xi \in \mathbb{R}^n$, $t > 0$ the matrix $A(x,t;\xi)$ has $m$ real eigenvalues
\[
\lambda_1(x,t;\xi) \le \lambda_2(x,t;\xi) \le \cdots \le \lambda_m(x,t;\xi)
\]
with corresponding eigenvectors $\{r_i(x,t;\xi)\}_{i=1}^m$ which form a basis for $\mathbb{R}^m$.

There are two special cases of hyperbolicity which we now define. If $A_i(x,t)$ is symmetric for $i = 1, \ldots, n$, then $A(x,t;\xi)$ is symmetric for all $\xi \in \mathbb{R}^n$. Recall that if the $m \times m$ matrix $A(x,t;\xi)$ is symmetric, then it is diagonalizable. For the case when the matrices $A_i(x,t)$ are all symmetric, we say the system (8.1) is symmetric hyperbolic. If $A(x,t;\xi)$ has $m$ real, distinct eigenvalues
\[
\lambda_1(x,t;\xi) < \lambda_2(x,t;\xi) < \cdots < \lambda_m(x,t;\xi)
\]
for all $x, \xi \in \mathbb{R}^n$, $t > 0$, then $A(x,t;\xi)$ is diagonalizable. In this case, we say the system (8.1) is strictly hyperbolic.

Example 1. A constant-coefficient system such as
\[
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_t + \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_x = \begin{pmatrix} f_1(x,t) \\ f_2(x,t) \end{pmatrix}
\]
is strictly hyperbolic: the coefficient matrix has the real, distinct eigenvalues $1$ and $2$.

Example 2. A system such as
\[
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_t + \begin{pmatrix} 1 & 4 \\ 4 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_x = \begin{pmatrix} f_1(x,t) \\ f_2(x,t) \end{pmatrix}
\]
is symmetric hyperbolic: the coefficient matrix is symmetric.
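
These checks are easy to carry out numerically. The following sketch (Python with NumPy; the classify helper and its tolerance are ad hoc choices, not part of the notes) examines the two example matrices. In one spatial dimension $A(x,t;\xi) = \xi A$, so it suffices to test $A$ itself.

```python
import numpy as np

# Coefficient matrices from Examples 1 and 2 above.
A_strict = np.array([[1.0, 1.0],
                     [0.0, 2.0]])   # non-symmetric, distinct real eigenvalues
A_sym = np.array([[1.0, 4.0],
                  [4.0, 1.0]])      # symmetric

def classify(A, tol=1e-12):
    """Rough numerical classification of U_t + A U_x = F (n = 1, A(xi) = xi * A)."""
    lam, R = np.linalg.eig(A)
    real = np.all(np.abs(lam.imag) < tol)
    distinct = bool(real and np.min(np.diff(np.sort(lam.real))) > tol)
    symmetric = bool(np.allclose(A, A.T))
    # Diagonalizable iff the eigenvectors span R^m (full-rank eigenvector matrix).
    diagonalizable = bool(np.linalg.matrix_rank(R) == A.shape[0])
    return {"eigenvalues": np.sort(lam.real),
            "hyperbolic": bool(real) and diagonalizable,
            "strictly hyperbolic": distinct,
            "symmetric hyperbolic": symmetric}

print(classify(A_strict))   # eigenvalues [1, 2]: strictly hyperbolic
print(classify(A_sym))      # eigenvalues [-3, 5]: symmetric hyperbolic
```

Note that the rank test for diagonalizability is numerically fragile for nearly defective matrices; it is meant only as an illustration of the definitions.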

Motivation

Recall that all linear, constant-coefficient, second-order hyperbolic equations can be written as
\[
\partial_t^2 u - c^2 \Delta u + \cdots = 0
\]
through a change of variables, where $\cdots$ represents lower-order terms. One of the distinguishing features of the wave equation is that it has wave-like solutions. In particular, for the wave equation
\[
u_{tt} - c^2 u_{xx} = 0
\]
the general solution is given by
\[
u(x,t) = f(x + ct) + g(x - ct),
\]
the sum of a wave moving to the left and a wave moving to the right. The functions $f(x+ct)$ and $g(x-ct)$ are known as travelling waves. More generally, for the wave equation in $\mathbb{R}^n$,
\[
u_{tt} - c^2 \Delta u = 0, \qquad x \in \mathbb{R}^n, \tag{8.2}
\]
for any (smooth) function $f$ and any $\xi \in \mathbb{R}^n$,
\[
u(x,t) = f(\xi \cdot x - \sigma t)
\]
is a solution of (8.2) for $\sigma = \pm c|\xi|$. A solution of the form $f(\xi \cdot x - \sigma t)$ is known as a plane wave solution.

Motivated by the existence of plane wave solutions for the wave equation, we look for properties of the system (8.1) such that the equation will have plane wave solutions. The conditions under which plane wave solutions exist lead us to the definition of hyperbolicity given above.

First, we rewrite the wave equation as a system in the form of (8.1). Consider the wave equation in one spatial dimension,
\[
u_{tt} = u_{xx}.
\]
Let
\[
U = \begin{pmatrix} u_x \\ u_t \end{pmatrix}.
\]
Then
\[
U_t = \begin{pmatrix} u_{xt} \\ u_{tt} \end{pmatrix} = \begin{pmatrix} u_{tx} \\ u_{xx} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} u_x \\ u_t \end{pmatrix}_x \equiv A\, U_x,
\]
where
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]
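
The plane-wave claim can be checked directly with finite differences. Below is a minimal sketch (NumPy); the profile $f$, speed $c$, and direction $\xi$ are arbitrary choices.

```python
import numpy as np

# Check that u(x, t) = f(xi . x - sigma * t), sigma = c * |xi|, solves
# u_tt = c^2 * Laplacian(u), by evaluating the residual with central differences.
c = 2.0
xi = np.array([1.0, -0.5])
sigma = c * np.linalg.norm(xi)
f = lambda s: np.exp(-s**2)              # any smooth profile works

def u(x, t):
    return f(xi @ x - sigma * t)

def residual(x, t, h=1e-4):
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    lap = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        lap += (u(x + e, t) - 2 * u(x, t) + u(x - e, t)) / h**2
    return u_tt - c**2 * lap

rng = np.random.default_rng(0)
print(max(abs(residual(x, 0.3)) for x in rng.normal(size=(5, 2))))
# Output is ~1e-6 or smaller: zero up to finite-difference and rounding error.
```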

In general, for $x \in \mathbb{R}^n$, consider
\[
u_{tt} = \Delta u.
\]
Let
\[
U = \begin{pmatrix} u_{x_1} \\ \vdots \\ u_{x_n} \\ u_t \end{pmatrix}.
\]
Then
\[
U_t = \begin{pmatrix} u_{x_1 t} \\ \vdots \\ u_{x_n t} \\ u_{tt} \end{pmatrix}
    = \begin{pmatrix} u_{t x_1} \\ \vdots \\ u_{t x_n} \\ \sum_{i=1}^n u_{x_i x_i} \end{pmatrix}
    = \sum_{i=1}^n A_i\, U_{x_i},
\]
where each $A_i$ is an $(n+1) \times (n+1)$ symmetric matrix whose entries $a^i_{jk}$ are given by
\[
a^i_{jk} = \begin{cases} 1 & j = i,\ k = n+1 \ \text{ or } \ j = n+1,\ k = i, \\ 0 & \text{otherwise.} \end{cases}
\]
Now we claim that for $A_i$ as defined above, for each $\xi \in \mathbb{R}^n$ there are $m = n+1$ distinct plane wave solutions $U(x,t) = V(\xi \cdot x - \sigma t)$ of
\[
U_t = \sum_{i=1}^n A_i\, U_{x_i}.
\]
In particular, define
\[
A(\xi) \equiv \sum_{i=1}^n A_i\, \xi_i.
\]
Let $\lambda_i(\xi)$, $R_i(\xi)$ be the $i$th eigenvalue and corresponding eigenvector of $A(\xi)$, and let
\[
U(x,t) = f(\xi \cdot x - \lambda_i(\xi) t)\, R_i(\xi).
\]
Now
\[
U_t = -\lambda_i(\xi)\, f'(\xi \cdot x - \lambda_i(\xi) t)\, R_i(\xi)
\]
and
\[
U_{x_i} = \xi_i\, f'(\xi \cdot x - \lambda_i(\xi) t)\, R_i(\xi).
\]

Therefore,
\[
U_t - \sum_{i=1}^n A_i U_{x_i}
  = -f'(\xi \cdot x - \lambda_i(\xi) t) \Big[ \lambda_i(\xi) R_i(\xi) - \sum_{i=1}^n A_i \xi_i R_i(\xi) \Big]
  = -f'(\xi \cdot x - \lambda_i(\xi) t) \big[ \lambda_i(\xi) R_i(\xi) - A(\xi) R_i(\xi) \big] = 0,
\]
because
\[
A(\xi) R_i(\xi) = \lambda_i(\xi) R_i(\xi).
\]
Now notice that $A_i$ is a symmetric matrix for $i = 1, \ldots, n$. Therefore, $A(\xi) = \sum_{i=1}^n A_i \xi_i$ is an $(n+1) \times (n+1)$ symmetric matrix for each $\xi \in \mathbb{R}^n$. Consequently, $A(\xi)$ has $n+1$ real eigenvalues and $n+1$ linearly independent eigenvectors $R_i(\xi)$. Therefore, for each $\xi \in \mathbb{R}^n$ and each eigenvalue/eigenvector pair $\lambda_i(\xi), R_i(\xi)$, we get a distinct plane wave solution $U(x,t) = V(\xi \cdot x - \lambda_i(\xi) t)$.

We use this fact to define hyperbolicity for systems of the form (8.1). In particular, we want to find a condition on the system (8.1) under which there will be $m$ distinct plane wave solutions for each $\xi \in \mathbb{R}^n$. We look for a solution of (8.1) of the form $U(x,t) = V(\xi \cdot x - \sigma t)$. Plugging a function $U$ of this form into (8.1) with $F(x,t) \equiv 0$, we see this implies
\[
-\sigma V' + \sum_{i=1}^n \xi_i A_i V' = 0. \tag{8.3}
\]
Now if $\sum_{i=1}^n \xi_i A_i$ is an $m \times m$ diagonalizable matrix, then (8.3) will have $m$ linearly independent solutions $V_1, \ldots, V_m$. These solutions are the eigenvectors of $\sum_{i=1}^n \xi_i A_i$ which correspond to the $m$ eigenvalues $\sigma_1, \ldots, \sigma_m$. As a result, if $\sum_{i=1}^n \xi_i A_i$ is diagonalizable, then we have $m$ plane wave solutions of (8.1). This criterion gives us our definition of hyperbolicity described above.
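
The eigenstructure underlying this construction is easy to verify numerically. The sketch below (NumPy; the dimension $n$ and the direction $\xi$ are arbitrary choices) builds the matrices $A_i$ defined above and confirms that $A(\xi)$ is symmetric with eigenvalues $\pm|\xi|$ and $0$, matching the plane-wave speeds $\sigma = \pm c|\xi|$ of the wave equation with $c = 1$.

```python
import numpy as np

n = 3

def A(i, n):
    # a^i_{jk} = 1 if (j = i, k = n+1) or (j = n+1, k = i), else 0
    # (indices are 0-based here, so the last row/column is index n).
    M = np.zeros((n + 1, n + 1))
    M[i, n] = M[n, i] = 1.0
    return M

xi = np.array([0.5, -1.0, 2.0])
A_xi = sum(xi[i] * A(i, n) for i in range(n))

assert np.allclose(A_xi, A_xi.T)          # A(xi) is symmetric, as claimed
print(np.sort(np.linalg.eigvalsh(A_xi)))  # [-|xi|, 0, ..., 0, |xi|]
print(np.linalg.norm(xi))                 # |xi| matches the extreme eigenvalues
```

The zero eigenvalue has multiplicity $n - 1$; the corresponding plane waves are stationary and come from the first-order reduction itself (they violate the gradient constraint $\partial_{x_j} u_{x_i} = \partial_{x_i} u_{x_j}$) rather than from genuine propagation.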

8.2 Solving Hyperbolic Systems

In this section, we will solve hyperbolic systems of the form
\[
U_t + A U_x = F(x,t) \tag{8.4}
\]
where $A$ is a constant-coefficient matrix. Note that if (8.4) is hyperbolic, then $A$ must be a diagonalizable matrix. Therefore, there exist an $m \times m$ invertible matrix $Q$ and an $m \times m$ diagonal matrix $\Lambda$ such that
\[
Q^{-1} A Q = \Lambda.
\]
In particular, $\Lambda$ is the diagonal matrix of eigenvalues and $Q$ is the matrix of eigenvectors. Therefore, $A = Q \Lambda Q^{-1}$. Substituting this into (8.4), our system becomes
\[
U_t + Q \Lambda Q^{-1} U_x = F(x,t). \tag{8.5}
\]
Now multiplying (8.5) by $Q^{-1}$, our system becomes
\[
Q^{-1} U_t + \Lambda Q^{-1} U_x = Q^{-1} F(x,t).
\]
Letting $V \equiv Q^{-1} U$, we arrive at the decoupled system
\[
V_t + \Lambda V_x = Q^{-1} F(x,t).
\]
Remark: If $A$ is symmetric, the eigenvectors may be chosen to be orthonormal, in which case $Q^{-1} = Q^T$.

Example 3. Find a solution to the initial-value problem
\[
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_t + \begin{pmatrix} 1 & 4 \\ 4 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_x = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\qquad
\begin{aligned}
u_1(x,0) &= \sin x \\
u_2(x,0) &= \cos x.
\end{aligned} \tag{8.6}
\]
Here
\[
A = \begin{pmatrix} 1 & 4 \\ 4 & 1 \end{pmatrix}
\]
is a symmetric matrix. Therefore, it has two real eigenvalues and its eigenvectors form an orthonormal basis for $\mathbb{R}^2$. In particular, $A$ can be diagonalized. The eigenvalue/eigenvector pairs are given by
\[
\lambda_1 = 5, \quad v_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}; \qquad
\lambda_2 = -3, \quad v_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
\]
Therefore, $A = Q \Lambda Q^T$ where
\[
Q = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}, \qquad
Q^T = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}, \qquad
\Lambda = \begin{pmatrix} 5 & 0 \\ 0 & -3 \end{pmatrix}.
\]
Our system can be rewritten as
\[
Q^T \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_t + \Lambda Q^T \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]

Letting $V \equiv Q^T U$, our system becomes
\[
V_t + \Lambda V_x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad
\begin{pmatrix} v_1(x,0) \\ v_2(x,0) \end{pmatrix} = Q^T \begin{pmatrix} u_1(x,0) \\ u_2(x,0) \end{pmatrix}.
\]
That is, we have two separate transport equations,
\[
v_{1,t} + 5 v_{1,x} = 0, \qquad v_1(x,0) = \tfrac{1}{\sqrt{2}} (\sin x + \cos x)
\]
and
\[
v_{2,t} - 3 v_{2,x} = 0, \qquad v_2(x,0) = \tfrac{1}{\sqrt{2}} (\sin x - \cos x).
\]
Our solutions are given by
\[
\begin{aligned}
v_1(x,t) &= \tfrac{1}{\sqrt{2}} \big( \sin(x - 5t) + \cos(x - 5t) \big), \\
v_2(x,t) &= \tfrac{1}{\sqrt{2}} \big( \sin(x + 3t) - \cos(x + 3t) \big).
\end{aligned}
\]
Now $V = Q^T U$ implies $U = Q V$. Therefore, our solution is given by
\[
U(x,t) = \frac{1}{2} \begin{pmatrix}
\sin(x-5t) + \cos(x-5t) + \sin(x+3t) - \cos(x+3t) \\
\sin(x-5t) + \cos(x-5t) - \sin(x+3t) + \cos(x+3t)
\end{pmatrix}.
\]
Remark. Notice in the above example that the value of the solution $U$ at the point $(x_0, t_0)$ depends only on the values of the initial conditions at the points $x_0 - 5t_0$ and $x_0 + 3t_0$. In the next section, we prove that for a symmetric hyperbolic system of the form
\[
U_t + \sum_{i=1}^n A_i U_{x_i} = 0
\]
the domain of dependence for a solution at the point $(x_0, t_0)$ is contained within the region
\[
\{(x,t) : |x - x_0| \le M(t_0 - t)\},
\]
where $M$ is an upper bound on the moduli of the eigenvalues of $A(\xi) = \sum_{i=1}^n A_i \xi_i$ over all $\xi \in \mathbb{R}^n$ such that $|\xi| = 1$; that is,
\[
M = \max_{\substack{i = 1, \ldots, m \\ |\xi| = 1}} |\lambda_i(\xi)|,
\]
where $\lambda_i(\xi)$ are the $m$ eigenvalues of $A(\xi)$.
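
The decoupling procedure translates directly into code. The following sketch (NumPy; the sample point and step size $h$ are arbitrary choices) solves Example 3 by the steps above, transporting each component of $V = Q^T U$ along its characteristic, and spot-checks the result against the initial data and the PDE.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [4.0, 1.0]])
lam, Q = np.linalg.eigh(A)     # A symmetric: Q orthogonal, A = Q diag(lam) Q^T

phi = lambda x: np.array([np.sin(x), np.cos(x)])   # initial data U(x, 0)

def U(x, t):
    # v_k(x, t) = v_k(x - lam_k * t, 0) for each decoupled transport equation.
    V0 = lambda y: Q.T @ phi(y)
    V = np.array([V0(x - lam[k] * t)[k] for k in range(2)])
    return Q @ V

x, t, h = 0.7, 0.4, 1e-6
print(U(x, 0.0) - phi(x))                   # ~ [0, 0]: initial condition holds
U_t = (U(x, t + h) - U(x, t - h)) / (2 * h)
U_x = (U(x + h, t) - U(x - h, t)) / (2 * h)
print(U_t + A @ U_x)                        # ~ [0, 0]: PDE residual vanishes
```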

8.3 Domain of Dependence for a Symmetric Hyperbolic System

In the above example, we saw that the domain of dependence for a solution $U$ at the point $(x_0, t_0)$ is contained within the triangular region
\[
\{(x,t) : x_0 + 5(t - t_0) \le x \le x_0 - 3(t - t_0), \ 0 \le t \le t_0\}.
\]
More generally, for any symmetric hyperbolic system of the form
\[
U_t + A U_x = F(x,t), \qquad x \in \mathbb{R},
\]
the domain of dependence is contained within the region
\[
\{(x,t) : x_0 + M(t - t_0) \le x \le x_0 - M(t - t_0), \ 0 \le t \le t_0\},
\]
where
\[
M = \max_i |\lambda_i|
\]
and $\lambda_1, \ldots, \lambda_m$ are the $m$ eigenvalues of $A$. This idea extends to systems in higher dimensions as well. Consider the symmetric hyperbolic system
\[
U_t + \sum_{i=1}^n A_i U_{x_i} = 0, \qquad x \in \mathbb{R}^n,
\]
where each $A_i$ is an $m \times m$ symmetric matrix. Let
\[
A(\xi) \equiv \sum_{i=1}^n \xi_i A_i,
\]
and let $\lambda_i(\xi)$, $i = 1, \ldots, m$, be the $m$ eigenvalues of $A(\xi)$. Let
\[
M \equiv \max_{\substack{i = 1, \ldots, m \\ |\xi| = 1}} |\lambda_i(\xi)|.
\]
We claim that $M$ is an upper bound on the speed of waves in any direction.
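
For a concrete system, $M$ can be estimated by sampling directions $\xi$ on the unit sphere. The sketch below (NumPy; the sample count is an arbitrary choice) reuses the wave-equation matrices $A_i$ built earlier; for that system every unit direction gives extreme eigenvalues $\pm 1$, so the estimate returns the wave speed $1$.

```python
import numpy as np

n = 2

def A(i, n):
    M = np.zeros((n + 1, n + 1))
    M[i, n] = M[n, i] = 1.0     # first-order system for the wave equation
    return M

def M_estimate(samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        xi = rng.normal(size=n)
        xi /= np.linalg.norm(xi)          # restrict to |xi| = 1
        A_xi = sum(xi[i] * A(i, n) for i in range(n))
        best = max(best, np.max(np.abs(np.linalg.eigvalsh(A_xi))))
    return best

print(M_estimate())   # 1.0: the maximal wave speed for u_tt = Laplacian(u)
```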

We state this more precisely in the following theorem. First, we make some definitions. Let $(x_0, t_0) \in \mathbb{R}^n \times (0, \infty)$ and fix $0 \le t_1 < t_0$. Let
\[
\begin{aligned}
B &\equiv \{x \in \mathbb{R}^n : |x - x_0| \le M t_0\}, \\
C &\equiv \{x \in \mathbb{R}^n : |x - x_0| \le M(t_0 - t_1)\}, \\
S &\equiv \{(x,t) : 0 \le t \le t_1, \ |x - x_0| = M(t_0 - t)\},
\end{aligned}
\]
so that $B$ is the base (at time $t = 0$) of a truncated cone with vertex at $(x_0, t_0)$, $C$ is its top (at time $t = t_1$), and $S$ is its lateral surface.

Theorem 4. Let $(x_0, t_0) \in \mathbb{R}^n \times (0, \infty)$. Let $B, C$ be as defined above. Assume $U$ is a solution of
\[
U_t + \sum_{i=1}^n A_i U_{x_i} = 0, \tag{8.7}
\]
where each $A_i$ is an $m \times m$ constant-coefficient, symmetric matrix. If $U \equiv 0$ on $B$, then $U \equiv 0$ on $C$.

Proof. Let $\Omega$ be the region bounded by $B$, $C$ and $S$, so that $\partial\Omega = B \cup C \cup S$. Multiplying (8.7) by $U^T$ and integrating over $\Omega$, we have
\[
\iint_\Omega U^T \Big( U_t + \sum_{i=1}^n A_i U_{x_i} \Big)\, dx\, dt = 0.
\]
First, we note that
\[
U^T U_t = \tfrac{1}{2} \partial_t |U|^2.
\]
Second, we have
\[
U^T A_i U_{x_i} = \tfrac{1}{2} (U \cdot A_i U)_{x_i},
\]
using the fact that each $A_i$ is symmetric. Then, by the divergence theorem, we have
\[
\iint_\Omega \tfrac{1}{2} \partial_t |U|^2\, dx\, dt = \tfrac{1}{2} \int_{\partial\Omega} |U|^2\, \nu_t\, dS,
\]
where $\nu = (\nu_1, \ldots, \nu_n, \nu_t)$ is the outward unit normal on $\partial\Omega$. Now
\[
\int_{\partial\Omega} |U|^2 \nu_t\, dS = \int_C |U|^2\, dx - \int_B |U|^2\, dx + \int_S |U|^2 \nu_t\, dS,
\]
since $\nu_t = 1$ on $C$ and $\nu_t = -1$ on $B$. On $S$,
\[
\nu = \frac{(x_1 - x_{0,1}, \ldots, x_n - x_{0,n}, M^2(t_0 - t))}{(t_0 - t)\, M \sqrt{1 + M^2}},
\]
and therefore
\[
\nu_t = \frac{M}{\sqrt{1 + M^2}}.
\]
Therefore,
\[
\int_{\partial\Omega} |U|^2 \nu_t\, dS = \int_C |U|^2\, dx - \int_B |U|^2\, dx + \int_S |U|^2 \frac{M}{\sqrt{1 + M^2}}\, dS.
\]
Similarly, we use the divergence theorem on our other term:
\[
\iint_\Omega \sum_{i=1}^n (U \cdot A_i U)_{x_i}\, dx\, dt
  = \int_{\partial\Omega} \sum_{i=1}^n (U \cdot A_i U)\, \nu_i\, dS
  = \int_S \sum_{i=1}^n (U \cdot A_i U)\, \frac{x_i - x_{0,i}}{(t_0 - t)\, M \sqrt{1 + M^2}}\, dS,
\]
where the last equality holds because $\nu_i = 0$ for $i = 1, \ldots, n$ on $B$ and $C$.

Let
\[
\xi_i \equiv \frac{x_i - x_{0,i}}{(t_0 - t)\, M}.
\]
Then $|\xi| = 1$ on $S$, and, therefore,
\[
\iint_\Omega \sum_{i=1}^n (U \cdot A_i U)_{x_i}\, dx\, dt
  = \frac{1}{\sqrt{1 + M^2}} \int_S U \cdot \Big( \sum_{i=1}^n A_i \xi_i \Big) U\, dS
  = \frac{1}{\sqrt{1 + M^2}} \int_S U \cdot A(\xi) U\, dS.
\]
Therefore, we have
\[
0 = 2 \iint_\Omega U^T \Big( U_t + \sum_{i=1}^n A_i U_{x_i} \Big)\, dx\, dt
  = \int_C |U|^2\, dx - \int_B |U|^2\, dx + \frac{1}{\sqrt{1 + M^2}} \int_S \big( M |U|^2 + U \cdot A(\xi) U \big)\, dS.
\]
But
\[
M |U|^2 + U \cdot A(\xi) U = U \cdot (M I + A(\xi)) U \ge 0,
\]
as the eigenvalues of $A(\xi)$ are bounded below by $-M$ for $|\xi| = 1$. Therefore,
\[
\int_C |U|^2\, dx \le \int_B |U|^2\, dx.
\]
Therefore, if $U \equiv 0$ on $B$, then $U \equiv 0$ on $C$. $\square$

Now we prove uniqueness of solutions to symmetric hyperbolic systems.

Theorem 5 (Uniqueness). Consider the symmetric hyperbolic system
\[
\begin{aligned}
&U_t + \sum_{i=1}^n A_i U_{x_i} = F(x,t), \qquad x \in \mathbb{R}^n, \\
&U(x,0) = \Phi(x),
\end{aligned}
\]
where the initial data $\Phi(x)$ has compact support. Then there exists at most one (smooth) solution.

Proof. Suppose there are two smooth solutions $U_1$ and $U_2$ with the same initial data $\Phi$. Let $W(x,t) \equiv U_1(x,t) - U_2(x,t)$. Since $U_1$ and $U_2$ satisfy the same inhomogeneous system, $W$ satisfies the homogeneous system $W_t + \sum_{i=1}^n A_i W_{x_i} = 0$, and we know that $W(x,0) \equiv 0$. We claim that $W(x,t) \equiv 0$. Define the energy function
\[
E(t) \equiv \int_{\mathbb{R}^n} |W|^2\, dx.
\]

We know $E(0) = 0$. We claim $E(t) \equiv 0$, and, therefore, $W(x,t) \equiv 0$. We will show that $E(t) \equiv 0$ by showing that $E'(t) = 0$:
\[
E'(t) = 2 \int_{\mathbb{R}^n} W \cdot W_t\, dx
      = -2 \int_{\mathbb{R}^n} W \cdot \sum_{i=1}^n A_i W_{x_i}\, dx
      = -\int_{\mathbb{R}^n} \sum_{i=1}^n (W \cdot A_i W)_{x_i}\, dx = 0,
\]
using the fact that $\Phi$ has compact support, which implies $W(\cdot, t)$ has compact support for each $t$ (by the domain of dependence results we showed above). Therefore $E'(t) = 0$, which, together with $E(0) = 0$, implies $E(t) = 0$ for all times $t \ge 0$. Therefore $W \equiv 0$, which implies $U_1 \equiv U_2$ (assuming $U_1, U_2$ are smooth). $\square$
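
The energy identity $E'(t) = 0$ can be observed numerically. The sketch below (NumPy; the Gaussian data, grid, and sample times are arbitrary choices, and the data is only effectively compactly supported, standing in for the theorem's hypothesis) evaluates the exact decoupled solution of the homogeneous Example 3 system and checks that $E(t) = \int |U|^2\, dx$ stays constant.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [4.0, 1.0]])
lam, Q = np.linalg.eigh(A)

# Rapidly decaying initial data standing in for compact support.
phi = lambda x: np.array([np.exp(-x**2), np.exp(-(x - 1.0)**2)])

def U(x, t):
    V0 = lambda y: Q.T @ phi(y)
    return Q @ np.array([V0(x - lam[k] * t)[k] for k in range(2)])

x = np.linspace(-60.0, 60.0, 4001)   # wide enough that no wave reaches the edge
dx = x[1] - x[0]
for t in [0.0, 1.0, 2.0, 4.0]:
    Uxt = np.array([U(xi, t) for xi in x])   # shape (len(x), 2)
    E = dx * np.sum(Uxt**2)                  # Riemann sum for \int |U|^2 dx
    print(t, E)                              # E(t) is constant up to quadrature error
```

Because $Q$ is orthogonal, $|U|^2 = |V|^2$, and each component of $V$ merely translates, so the printed energies agree exactly, mirroring the conservation argument in the proof.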
