Math 373: Principles and Techniques of Applied Mathematics, Spring 2009
The $L^2$ Inner Product

1 Inner Products and Norms on Real Vector Spaces

Recall that an inner product on a real vector space $V$ is a function from $V \times V$ to $\mathbb{R}$ such that for any $v, w, x \in V$ and scalars $c \in \mathbb{R}$, we have
$$\langle v, w \rangle = \langle w, v \rangle$$
$$\langle cv, w \rangle = c \langle v, w \rangle$$
$$\langle v, w + x \rangle = \langle v, w \rangle + \langle v, x \rangle$$
$$\langle v, v \rangle > 0 \quad \text{for } v \neq 0$$

Given a vector space $V$ with an inner product, it is possible to define notions such as lengths and angles. The length or norm of an element $v \in V$ is given by $\|v\| = \sqrt{\langle v, v \rangle}$. It is not difficult to prove that inner products and lengths satisfy the Cauchy-Schwarz inequality:
$$|\langle v, w \rangle| \le \|v\| \, \|w\|$$
From this it follows that it makes sense to define the angle between $v$ and $w$ to be the angle $\theta$ such that
$$\cos\theta = \frac{\langle v, w \rangle}{\|v\| \, \|w\|},$$
since the right hand side is between $-1$ and $1$. We then say that two vectors $v$ and $w$ are orthogonal if $\langle v, w \rangle = 0$.

It also follows from the definition of an inner product, together with the Cauchy-Schwarz inequality, that for all $v, w \in V$ and scalars $c \in \mathbb{R}$, we have
$$\|cv\| = |c| \, \|v\|$$
$$\|v\| > 0 \quad \text{for } v \neq 0$$
$$\|v + w\| \le \|v\| + \|w\|$$
These are actually the defining properties of a norm.
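
As a quick illustration of these properties, here is a minimal numerical sketch (not part of the original notes) that checks the Cauchy-Schwarz and triangle inequalities and computes an angle, assuming the weighted inner product $\langle v, w \rangle = \sum_i d_i v_i w_i$ on $\mathbb{R}^3$ with positive weights $d_i$ (a hypothetical choice of inner product, purely for illustration):

```python
import numpy as np

# Assumed example: weighted inner product <v, w> = sum_i d_i * v_i * w_i on R^3.
# Any strictly positive weights d_i give a valid inner product.
d = np.array([1.0, 2.0, 3.0])

def inner(v, w):
    return float(np.sum(d * v * w))

def norm(v):
    return np.sqrt(inner(v, v))

v = np.array([1.0, -2.0, 0.5])
w = np.array([3.0, 1.0, -1.0])

# Cauchy-Schwarz: |<v, w>| <= ||v|| ||w||
assert abs(inner(v, w)) <= norm(v) * norm(w) + 1e-12

# Triangle inequality: ||v + w|| <= ||v|| + ||w||
assert norm(v + w) <= norm(v) + norm(w) + 1e-12

# Angle between v and w: cos(theta) = <v, w> / (||v|| ||w||)
theta = np.arccos(inner(v, w) / (norm(v) * norm(w)))
print("angle between v and w:", theta)
```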

2 Orthogonal Bases

Suppose that we have a basis $\{v_1, v_2, \ldots, v_n\}$ for a vector space $V$ consisting of vectors that are mutually orthogonal. That is, suppose that $\langle v_j, v_k \rangle = 0$ for every $j \neq k$. Now suppose we are given a vector $w$ that we wish to write as a linear combination of the $v_k$'s. First write
$$w = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$$
for some unknown coefficients $c_1$ through $c_n$. Now fix $k$ and take the inner product of both sides with $v_k$ to get
$$\langle w, v_k \rangle = c_1 \langle v_1, v_k \rangle + c_2 \langle v_2, v_k \rangle + \cdots + c_n \langle v_n, v_k \rangle$$
Now because of the orthogonality of the basis vectors, every one of the inner products on the right hand side is zero, except for one, namely $\langle v_k, v_k \rangle$. Thus the equation reduces to
$$\langle w, v_k \rangle = c_k \langle v_k, v_k \rangle$$
and thus
$$c_k = \frac{\langle w, v_k \rangle}{\langle v_k, v_k \rangle}.$$
We therefore have the general formula
$$w = \frac{\langle w, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 + \frac{\langle w, v_2 \rangle}{\langle v_2, v_2 \rangle} v_2 + \cdots + \frac{\langle w, v_n \rangle}{\langle v_n, v_n \rangle} v_n.$$
Thus if we have a basis consisting of orthogonal vectors, then it is very simple to express a given vector in terms of the basis vectors: simply take inner products to find the coefficients.
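
The short sketch below (an added illustration, not from the notes) carries out exactly this computation for an assumed orthogonal, but not orthonormal, basis of $\mathbb{R}^3$ under the usual dot product; no linear system has to be solved.

```python
import numpy as np

# Assumed orthogonal (not orthonormal) basis of R^3 under the standard dot product.
v1 = np.array([1.0,  1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0,  0.0, 2.0])
basis = [v1, v2, v3]

w = np.array([3.0, -1.0, 4.0])

# c_k = <w, v_k> / <v_k, v_k>
coeffs = [np.dot(w, vk) / np.dot(vk, vk) for vk in basis]

reconstruction = sum(c * vk for c, vk in zip(coeffs, basis))
print(coeffs)                          # [1.0, 2.0, 2.0]
print(np.allclose(reconstruction, w))  # True
```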

3 The $L^2$ Inner Product

Now consider the vector space $V$ of real-valued continuous functions on an interval $[a, b]$. The $L^2$ inner product on $V$ is defined by
$$\langle f, g \rangle = \int_a^b f(x)\, g(x) \, dx$$
It is left as an exercise to verify that this satisfies the properties of an inner product. The $L^2$ norm of a function is then defined by
$$\|f\| = \langle f, f \rangle^{1/2} = \left( \int_a^b [f(x)]^2 \, dx \right)^{1/2}.$$
The thing that makes Fourier series work so well is that the basis functions are all orthogonal with respect to the $L^2$ inner product.

Example 1. Consider the Fourier sine series
$$\varphi(x) = \sum_{n=1}^{\infty} A_n \sin\frac{n\pi x}{\ell}$$
of a function $\varphi$ on the interval $[0, \ell]$. Denoting the basis functions by $s_n(x) = \sin\frac{n\pi x}{\ell}$, we proved in class that
$$\langle s_m, s_n \rangle = \int_0^\ell \sin\frac{m\pi x}{\ell} \sin\frac{n\pi x}{\ell} \, dx = 0 \quad \text{for } m \neq n, \qquad \langle s_n, s_n \rangle = \int_0^\ell \sin^2\frac{n\pi x}{\ell} \, dx = \frac{\ell}{2}.$$
From this we can derive formulas for the coefficients $A_n$ as we did for the $c_k$ above:
$$A_n = \frac{\langle \varphi, s_n \rangle}{\langle s_n, s_n \rangle} = \frac{2}{\ell} \int_0^\ell \varphi(x) \sin\frac{n\pi x}{\ell} \, dx$$
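
Here is a minimal numerical sketch of Example 1 (an added illustration; it assumes $\ell = \pi$ and the test function $\varphi(x) = x$, neither of which appears in the notes). It checks the orthogonality relations by quadrature and compares the computed $A_n$ with the known exact values for this $\varphi$.

```python
import numpy as np
from scipy.integrate import quad

ell = np.pi          # assumed interval [0, ell]; any ell > 0 works
phi = lambda x: x    # assumed test function phi(x) = x

def s(n):
    return lambda x: np.sin(n * np.pi * x / ell)

def l2_inner(f, g):
    # <f, g> = integral_0^ell f(x) g(x) dx
    return quad(lambda x: f(x) * g(x), 0.0, ell)[0]

# Orthogonality: <s_m, s_n> = 0 for m != n, and <s_n, s_n> = ell / 2
print(l2_inner(s(2), s(5)))            # ~ 0
print(l2_inner(s(3), s(3)), ell / 2)   # both ~ ell / 2

# Sine coefficients A_n = <phi, s_n> / <s_n, s_n>
A = [l2_inner(phi, s(n)) / l2_inner(s(n), s(n)) for n in range(1, 8)]

# For phi(x) = x the exact values are A_n = 2*ell*(-1)**(n+1) / (n*pi)
exact = [2 * ell * (-1) ** (n + 1) / (n * np.pi) for n in range(1, 8)]
print(np.allclose(A, exact))           # True
```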

Example 2. Similar calculations apply to the Fourier cosine series
$$\varphi(x) = \frac{1}{2} A_0 + \sum_{n=1}^{\infty} A_n \cos\frac{n\pi x}{\ell}$$
of a function $\varphi$ on the interval $[0, \ell]$. Here the basis functions are $c_n(x) = \cos\frac{n\pi x}{\ell}$, together with the constant function $1$ (which can be thought of as $\cos\frac{0 \cdot \pi x}{\ell}$), from which we get
$$\langle c_m, c_n \rangle = 0 \quad \text{for } m \neq n, \qquad \langle c_n, c_n \rangle = \int_0^\ell \cos^2\frac{n\pi x}{\ell} \, dx = \frac{\ell}{2}, \qquad \langle 1, 1 \rangle = \int_0^\ell 1^2 \, dx = \ell,$$
and we have
$$A_n = \frac{\langle \varphi, c_n \rangle}{\langle c_n, c_n \rangle} = \frac{2}{\ell} \int_0^\ell \varphi(x) \cos\frac{n\pi x}{\ell} \, dx, \quad n = 1, 2, \ldots$$
$$A_0 = \frac{2 \langle \varphi, 1 \rangle}{\langle 1, 1 \rangle} = \frac{2}{\ell} \int_0^\ell \varphi(x) \, dx.$$
This explains the reason for the factor $\frac{1}{2}$ in front of $A_0$: it just ensures that the formulas for all the coefficients have the same form.

Example 3. Finally, consider the full Fourier series
$$\varphi(x) = \frac{1}{2} A_0 + \sum_{n=1}^{\infty} \left( A_n \cos\frac{n\pi x}{\ell} + B_n \sin\frac{n\pi x}{\ell} \right)$$
of a function $\varphi$ on the interval $[-\ell, \ell]$. The reason for using the entire interval $[-\ell, \ell]$ instead of $[0, \ell]$ is that the functions $s_n$ and $c_n$ are not all mutually orthogonal on $[0, \ell]$, but they are orthogonal on $[-\ell, \ell]$. That is,
$$\langle c_m, c_n \rangle = \int_{-\ell}^{\ell} \cos\frac{m\pi x}{\ell} \cos\frac{n\pi x}{\ell} \, dx = 0 \quad \text{for } m \neq n,$$
$$\langle s_m, s_n \rangle = \int_{-\ell}^{\ell} \sin\frac{m\pi x}{\ell} \sin\frac{n\pi x}{\ell} \, dx = 0 \quad \text{for } m \neq n,$$
$$\langle c_m, s_n \rangle = \int_{-\ell}^{\ell} \cos\frac{m\pi x}{\ell} \sin\frac{n\pi x}{\ell} \, dx = 0 \quad \text{for all } m, n.$$
Since
$$\langle c_n, c_n \rangle = \langle s_n, s_n \rangle = \ell, \qquad \langle 1, 1 \rangle = 2\ell,$$
we then have
$$A_n = \frac{\langle \varphi, c_n \rangle}{\langle c_n, c_n \rangle} = \frac{1}{\ell} \int_{-\ell}^{\ell} \varphi(x) \cos\frac{n\pi x}{\ell} \, dx$$
$$B_n = \frac{\langle \varphi, s_n \rangle}{\langle s_n, s_n \rangle} = \frac{1}{\ell} \int_{-\ell}^{\ell} \varphi(x) \sin\frac{n\pi x}{\ell} \, dx$$
$$A_0 = \frac{1}{\ell} \int_{-\ell}^{\ell} \varphi(x) \, dx.$$
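
As an added check of Example 3 (not in the original notes; it assumes $\ell = 2$ and the even test function $\varphi(x) = x^2$), the sketch below computes $A_0$, $A_n$, and $B_n$ by quadrature and compares them with the known Fourier series of $x^2$; since $\varphi$ is even, all $B_n$ should vanish.

```python
import numpy as np
from scipy.integrate import quad

ell = 2.0
phi = lambda x: x ** 2        # assumed even test function on [-ell, ell]

def inner(f, g):
    # L2 inner product on [-ell, ell]
    return quad(lambda x: f(x) * g(x), -ell, ell)[0]

cos_n = lambda n: (lambda x: np.cos(n * np.pi * x / ell))
sin_n = lambda n: (lambda x: np.sin(n * np.pi * x / ell))
one = lambda x: 1.0

# Using <c_n, c_n> = <s_n, s_n> = ell and <1, 1> = 2*ell:
A0 = 2 * inner(phi, one) / inner(one, one)            # = (1/ell) * integral of phi
A = [inner(phi, cos_n(n)) / ell for n in range(1, 6)]
B = [inner(phi, sin_n(n)) / ell for n in range(1, 6)]

# For phi(x) = x^2: A_0 = 2*ell^2/3, A_n = 4*ell^2*(-1)**n/(n*pi)**2, B_n = 0.
print(np.isclose(A0, 2 * ell ** 2 / 3))                                                    # True
print(np.allclose(A, [4 * ell ** 2 * (-1) ** n / (n * np.pi) ** 2 for n in range(1, 6)]))  # True
print(np.allclose(B, 0.0))                                                                 # True
```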

4 Hermitian Inner Products

Let $V$ be a complex vector space (the field of scalars is $\mathbb{C}$). Then a Hermitian inner product is a function from $V \times V$ to $\mathbb{C}$ such that
$$\langle v, w \rangle = \overline{\langle w, v \rangle}$$
$$\langle cv, w \rangle = c \langle v, w \rangle$$
$$\langle v, w + x \rangle = \langle v, w \rangle + \langle v, x \rangle$$
$$\langle v, v \rangle > 0 \quad \text{for } v \neq 0$$
The notions of orthogonality and length are defined in the same way as we did using inner products on real vector spaces. Also, given an orthogonal basis $v_1, \ldots, v_n$ for $V$, we can derive the coefficients $c_k$ in the expansion
$$w = c_1 v_1 + \cdots + c_n v_n$$
in the same way as before:
$$c_k = \frac{\langle w, v_k \rangle}{\langle v_k, v_k \rangle}$$
The $L^2$ inner product for complex-valued functions on an interval $[a, b]$ is given by
$$\langle f, g \rangle = \int_a^b f(x)\, \overline{g(x)} \, dx$$
It is not hard to prove that it satisfies the four conditions above.

Example 4. The complex form of the full Fourier series of a function $\varphi$ on $[-\ell, \ell]$ is given by
$$\varphi(x) = \sum_{n=-\infty}^{\infty} C_n e^{in\pi x/\ell}$$
The functions $e_n(x) = e^{in\pi x/\ell}$ are orthogonal with respect to the $L^2$ inner product on $[-\ell, \ell]$. To see this, we can use the fact that $e^{i\pi} = -1$ to compute, for $m \neq n$,
$$\begin{aligned}
\langle e_m, e_n \rangle &= \int_{-\ell}^{\ell} e^{im\pi x/\ell}\, \overline{e^{in\pi x/\ell}} \, dx = \int_{-\ell}^{\ell} e^{im\pi x/\ell} e^{-in\pi x/\ell} \, dx = \int_{-\ell}^{\ell} e^{i(m-n)\pi x/\ell} \, dx \\
&= \frac{\ell}{i\pi(m-n)} e^{i(m-n)\pi x/\ell} \Big|_{-\ell}^{\ell}
 = \frac{\ell}{i\pi(m-n)} \left( e^{i(m-n)\pi} - e^{-i(m-n)\pi} \right)
 = \frac{\ell}{i\pi(m-n)} \left( (-1)^{m-n} - (-1)^{n-m} \right) = 0.
\end{aligned}$$
Therefore
$$\langle e_n, e_n \rangle = \int_{-\ell}^{\ell} e^{in\pi x/\ell}\, \overline{e^{in\pi x/\ell}} \, dx = \int_{-\ell}^{\ell} 1 \, dx = 2\ell$$
and
$$C_n = \frac{\langle \varphi, e_n \rangle}{\langle e_n, e_n \rangle} = \frac{1}{2\ell} \int_{-\ell}^{\ell} \varphi(x) e^{-in\pi x/\ell} \, dx.$$
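
To close, here is a small numerical sketch of Example 4 (an added illustration, assuming $\ell = 1$ and reusing the test function $\varphi(x) = x^2$ from the previous sketch). It verifies the orthogonality of the $e_n$ under the Hermitian $L^2$ inner product and computes a few coefficients $C_n$, which for this even real $\varphi$ reduce to $A_n/2$.

```python
import numpy as np
from scipy.integrate import quad

ell = 1.0
phi = lambda x: x ** 2        # assumed test function, as in the previous sketch

def hermitian_inner(f, g):
    # <f, g> = integral_{-ell}^{ell} f(x) * conj(g(x)) dx, split into real and imaginary parts
    integrand = lambda x: f(x) * np.conj(g(x))
    re = quad(lambda x: np.real(integrand(x)), -ell, ell)[0]
    im = quad(lambda x: np.imag(integrand(x)), -ell, ell)[0]
    return re + 1j * im

e = lambda n: (lambda x: np.exp(1j * n * np.pi * x / ell))

# Orthogonality of the complex exponentials: <e_m, e_n> = 0 for m != n, and 2*ell for m = n
print(abs(hermitian_inner(e(2), e(-3))))          # ~ 0
print(hermitian_inner(e(4), e(4)).real, 2 * ell)  # both ~ 2*ell

# C_n = <phi, e_n> / <e_n, e_n> = (1/(2*ell)) * integral phi(x) e^{-i n pi x / ell} dx
C = {n: hermitian_inner(phi, e(n)) / (2 * ell) for n in range(-3, 4)}

# For real phi, C_{-n} = conj(C_n); for phi(x) = x^2, C_n = A_n/2 = 2*ell^2*(-1)**n/(n*pi)**2 for n != 0.
print(np.isclose(C[2], 2 * ell ** 2 / (2 * np.pi) ** 2))  # True
print(np.isclose(C[-1], np.conj(C[1])))                   # True
```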