Chapter 20. Vector Spaces and Bases



In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, and hyperplanes, and have seen that there is no limit to this hierarchy of structures. We have indexed these objects by their dimension in an ad hoc way. It is time to unify these structures and ideas; this chapter gives a brief introduction to this more abstract viewpoint. Paradoxically, this abstraction also makes Linear Algebra more applicable to other areas of Mathematics and Science. From this higher viewpoint, Linear Algebra is the study of vector spaces and linear mappings between vector spaces.

Definition of a vector space: A vector space is a set V of objects v, called vectors, which can be added and multiplied by scalars t in R, subject to the rules

    u + v = v + u,
    (u + v) + w = u + (v + w),
    t(u + v) = tu + tv,
    (t + s)v = tv + sv,
    t(sv) = (ts)v,

these holding for all u, v, w in V and all s, t in R. There must also be a vector 0 in V with the properties that

    0 + v = v,    0v = 0

for all v in V.

Examples of vector spaces:

- R^n is a vector space. Any line, plane, hyperplane, ... through the origin in R^n is a vector space. Even the set {0} consisting of the single vector 0 is a vector space, called the trivial vector space. In fact, any subset of R^n that is closed under addition and scalar multiplication is a vector space; such a vector space is called a subspace of R^n.

- For n ≥ 0, the set P_n of all polynomials c_0 + c_1 x + ... + c_n x^n with real coefficients is a vector space. Here the vectors are polynomials.

- The set V of real-valued functions f(x) satisfying the differential equation f'' + f = 0 is a vector space, for if f and g are solutions, so are f + g and cf, where c is a scalar. Note that V is a subspace of the giant vector space C^∞(R) consisting of all infinitely differentiable functions f : R → R. In fact V = E(-1) is an eigenspace for the linear map d^2/dx^2 : C^∞(R) → C^∞(R).

Any nonzero vector space V contains infinitely many vectors. To write them all down, we want to have something like the standard basis e_1, ..., e_n of R^n. Here, we have

    (x_1, x_2, ..., x_n) = x_1 e_1 + x_2 e_2 + ... + x_n e_n.    (1)
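
Here is a minimal Python/numpy sketch (the sample vector and variable names are my own choices, not from the text) illustrating equation (1): the coordinates of a vector are exactly its coefficients along the standard basis, and solving for those coefficients recovers them uniquely.

    import numpy as np

    n = 4
    x = np.array([3.0, -1.0, 2.0, 5.0])   # an arbitrary vector in R^4
    E = np.eye(n)                          # rows are e_1, ..., e_n

    # Equation (1): rebuild x from the combination x_1 e_1 + ... + x_n e_n.
    recombined = sum(x[i] * E[i] for i in range(n))
    assert np.allclose(recombined, x)

    # Uniqueness: solving for the coefficients returns exactly the entries of x.
    c = np.linalg.solve(E.T, x)
    assert np.allclose(c, x)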

Thus, any vector x = (x_1, x_2, ..., x_n) in R^n is a linear combination of the vectors e_1, ..., e_n. Moreover, this linear combination is unique: equation (1) is the only way to write x as a combination of the e_i.

Definition of a Basis: A basis of a vector space V is a set {v_1, ..., v_n} of vectors in V with the property that every vector in V can be uniquely expressed as a linear combination of v_1, ..., v_n.

Examples of bases:

- A basis of a line l is a nonzero vector v in l. Every vector in l can be uniquely expressed as cv, for some c in R.

- A basis of a plane P is a pair {u, v} of non-proportional vectors in P. Every vector in P can be uniquely expressed as au + bv, for some a, b in R.

- A basis of R^3 is any set of three vectors {u, v, w} in R^3 not contained in a plane. A basis of a hyperplane H in R^4 is a set of three vectors in H not contained in a plane. A basis of R^4 is a set of four vectors not contained in a hyperplane.

- For any n ≥ 1, the standard basis {e_1, ..., e_n} is a basis of R^n. There are many other bases, such as {e_1, e_1 + e_2, ..., e_1 + e_2 + ... + e_n}.

- The set {1, x, x^2, x^3, ..., x^n} is a basis of P_n. This means that any polynomial in P_n can be uniquely expressed as a linear combination c_0 + c_1 x + c_2 x^2 + ... + c_n x^n.

- The vector space of solutions of the differential equation f'' + f = 0 has basis {cos x, sin x}. In other words, any function satisfying f'' + f = 0 can be uniquely expressed as f = c_1 cos x + c_2 sin x for some real numbers c_1, c_2.

The above definition of a basis was simple to state, but it often takes time to fully grasp the idea. This is because the phrase "can be uniquely" is actually two conditions in compressed form. We now uncompress them, starting with "can be".

Definition of Span: The span of a set of vectors {v_1, ..., v_k} in V is the set of all linear combinations of the v_i, namely

    {c_1 v_1 + ... + c_k v_k : c_i in R}.

We also use "span" as a verb: {v_1, ..., v_k} spans V if every vector in V can be expressed as a linear combination of the v_i. The span of a set {v_1, ..., v_k} is closed under addition and scalar multiplication, hence forms a vector space in its own right.

Now to unravel the second part of the definition of a basis: uniqueness.

Definition of Linear Independence: A collection of vectors S in a vector space V is linearly independent if no vector in S is a linear combination of the other vectors in S. If some vector in S is a linear combination of the others, we say that S is linearly dependent.
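
In R^n, both conditions can be tested mechanically: k vectors are linearly independent exactly when the matrix having them as columns has rank k, and they span R^n exactly when that rank equals n. A hedged numpy sketch (the helper name is mine), applied to the alternative basis of R^3 mentioned above:

    import numpy as np

    def rank_of(vectors):
        """Rank of the matrix whose columns are the given vectors."""
        return np.linalg.matrix_rank(np.column_stack(vectors))

    e1, e2, e3 = np.eye(3)

    # {e1, e1+e2, e1+e2+e3}: rank 3 = k = n, so independent and spanning,
    # hence a basis of R^3.
    print(rank_of([e1, e1 + e2, e1 + e2 + e3]))   # prints 3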

Proposition 0.1 A subset S of V is linearly dependent if and only if there exist vectors v_1, ..., v_k in S and nonzero scalars c_1, ..., c_k in R such that

    c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0.

Proof: Suppose there exist vectors v_i in S and nonzero scalars c_i such that c_1 v_1 + ... + c_k v_k = 0. Then

    v_1 = -(c_2/c_1) v_2 - ... - (c_k/c_1) v_k,

so v_1 is a linear combination of the other vectors v_2, ..., v_k, and therefore S is linearly dependent.

Conversely, if S is linearly dependent, then we can write some v in S as a linear combination of other vectors v_1, ..., v_k in S:

    v = c_1 v_1 + ... + c_k v_k,

where the c_i are nonzero scalars (any terms with coefficient zero may simply be discarded). Then

    v - c_1 v_1 - ... - c_k v_k = 0

with all coefficients nonzero.

Example: The three vectors

    u = (1, 2, 3),   v = (4, 5, 6),   w = (7, 8, 9)

are linearly dependent because u - 2v + w = 0.

Definition of a Basis, rephrased: A basis of a vector space V is a subset {v_1, ..., v_n} of V satisfying the two properties:

(i) {v_1, ..., v_n} spans V;

(ii) {v_1, ..., v_n} is linearly independent.

To check that a given subset {v_1, ..., v_n} of V is a basis of V, you have to check items (i) and (ii). That is:

(i) To show spanning: take an arbitrary vector v in V, and show that there are scalars c_1, ..., c_n such that v = c_1 v_1 + ... + c_n v_n.

(ii) To show linear independence: suppose you have an equation of the form c_1 v_1 + ... + c_n v_n = 0, and show that this implies c_1 = c_2 = ... = c_n = 0.
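
The dependence relation in the example above is easy to confirm numerically; a short numpy check (variable names are mine):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])
    w = np.array([7.0, 8.0, 9.0])

    print(u - 2*v + w)   # [0. 0. 0.]: the relation u - 2v + w = 0 holds

    # The matrix with columns u, v, w has rank 2 < 3, so the set is
    # linearly dependent and cannot be a basis of R^3.
    print(np.linalg.matrix_rank(np.column_stack([u, v, w])))   # prints 2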

Example 1: Let V be the hyperplane in R^4 with equation

    x + y + z + w = 0.

The vectors

    v_1 = e_1 - e_2,   v_2 = e_2 - e_3,   v_3 = e_3 - e_4

are in V. Let's show that {v_1, v_2, v_3} is a basis of V.

Step (i): To check spanning, let v = (x, y, z, w) be an arbitrary vector in V. We want to find scalars c_1, c_2, c_3 such that v = c_1 v_1 + c_2 v_2 + c_3 v_3. Since w = -x - y - z, we have

    v = x v_1 + (x + y) v_2 + (x + y + z) v_3,

so c_1 = x, c_2 = x + y, c_3 = x + y + z do the job. This shows that {v_1, v_2, v_3} spans V.

Step (ii): To check linear independence, we suppose there are scalars c_1, c_2, c_3 such that c_1 v_1 + c_2 v_2 + c_3 v_3 = 0. Writing both sides out in terms of components, this means

    (c_1, c_2 - c_1, c_3 - c_2, -c_3) = (0, 0, 0, 0),

which amounts to the equations

    c_1 = 0,   c_2 - c_1 = 0,   c_3 - c_2 = 0,   -c_3 = 0.

The only solution to these equations is c_1 = c_2 = c_3 = 0. This shows that {v_1, v_2, v_3} is linearly independent.

We have now shown that {v_1, v_2, v_3} is a basis of V. This means that every vector v in V can be uniquely written as

    v = c_1 v_1 + c_2 v_2 + c_3 v_3 = (c_1, c_2 - c_1, c_3 - c_2, -c_3).

Example 2: Let us now take the vector space W of vectors (x, y, z, w, t) in R^5 satisfying the same equation x + y + z + w = 0. Then the set {v_1, v_2, v_3} of Example 1 is no longer a basis of W: the vector e_5 is in W, but e_5 is not in the span of {v_1, v_2, v_3}. By the method of Example 1 one can check that the enlarged set

    {e_1 - e_2, e_2 - e_3, e_3 - e_4, e_5}

is a basis of W.
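
Before moving on, here is a numpy sketch of Step (i) of Example 1 (the sample values are my own choices): solving v = c_1 v_1 + c_2 v_2 + c_3 v_3 numerically recovers the coefficients c_1 = x, c_2 = x + y, c_3 = x + y + z found above.

    import numpy as np

    e1, e2, e3, e4 = np.eye(4)
    v1, v2, v3 = e1 - e2, e2 - e3, e3 - e4

    x, y, z = 2.0, -5.0, 1.0
    v = np.array([x, y, z, -x - y - z])    # an arbitrary vector of the hyperplane

    # The 4x3 system v = c1*v1 + c2*v2 + c3*v3, solved by least squares.
    c, *_ = np.linalg.lstsq(np.column_stack([v1, v2, v3]), v, rcond=None)
    print(c)    # [ 2. -3. -2.], i.e. (x, x+y, x+y+z)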

Example 3: We have seen that the vector space P_n of polynomials of degree at most n has the basis {1, x, x^2, x^3, ..., x^n}. This may be the most obvious basis of P_n, but for many purposes it is not the best one. For numerical integration, one prefers to use instead the basis

    {P_0, P_1, P_2, ..., P_n}

consisting of Legendre polynomials. These are the unique polynomials satisfying the conditions

    deg P_k = k,   P_k(1) = 1,   ∫_{-1}^{1} P_k P_l dx = 0 if k ≠ l.

The first few Legendre polynomials are

    P_0 = 1,   P_1 = x,   P_2 = (1/2)(3x^2 - 1),   P_3 = (1/2)(5x^3 - 3x),    (2)

and a general formula for them is

    P_k(x) = (1 / (2 · 4 · ... · (2k))) d^k/dx^k [(x^2 - 1)^k].

All we need to know from this formula is that

    P_k(x) = a_k x^k + (lower powers of x)

for some nonzero constant a_k.

Let's show that {P_0, P_1, P_2, ..., P_n} is a basis of P_n.

Step (i): The span L_n of {P_0, P_1, P_2, ..., P_n} is a vector space, consisting of all linear combinations of the P_k, and we want to show that L_n = P_n. We first note that 1 = P_0 and x = P_1, so 1 and x are in L_n. Next, P_2 = (3/2)x^2 - 1/2, and both P_2 and 1 are in L_n, so (3/2)x^2 is in L_n, and hence x^2 is in L_n. Likewise, P_3 = a_3 x^3 + (linear combination of 1, x, x^2), and since 1, x, x^2 and P_3 are in L_n, we have x^3 in L_n. In general,

    P_k = a_k x^k + (linear combination of 1, x, ..., x^{k-1}),

and up to this point we will have shown that 1, x, ..., x^{k-1} are in L_n; since P_k is in L_n by definition, we get x^k in L_n. We continue like this until we have all powers 1, x, ..., x^n in L_n. Now, since a general polynomial Q in P_n is a linear combination of 1, x, ..., x^n, we have Q in L_n as well. This proves spanning.

Step (ii): To show linear independence, suppose c_0, c_1, ..., c_n are scalars such that

    c_0 P_0 + c_1 P_1 + ... + c_n P_n = 0

is the zero polynomial. Since deg P_n = n and deg P_k < n for k < n, the power x^n appears in c_n P_n and nowhere else. Since the coefficient of x^n must cancel out, we must have c_n = 0. The same argument now shows c_{n-1} = 0, etc., so all c_k = 0. This proves that {P_0, P_1, P_2, ..., P_n} is linearly independent, and it is therefore a basis of P_n.
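
The general formula is easy to experiment with. A sympy sketch (the function name is mine, and it uses the identity 2 · 4 · ... · (2k) = 2^k k!) that reproduces the list (2) and spot-checks the defining conditions:

    import sympy as sp

    x = sp.symbols('x')

    def legendre_P(k):
        """P_k(x) = 1/(2*4*...*(2k)) * d^k/dx^k [(x^2 - 1)^k]."""
        const = 2**k * sp.factorial(k)    # 2*4*...*(2k) = 2^k * k!
        return sp.expand(sp.diff((x**2 - 1)**k, x, k) / const)

    # Reproduces (2): 1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2
    for k in range(4):
        print(legendre_P(k))

    # Spot-check the defining conditions: value at 1, and orthogonality.
    assert all(legendre_P(k).subs(x, 1) == 1 for k in range(5))
    assert sp.integrate(legendre_P(2) * legendre_P(3), (x, -1, 1)) == 0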

where a and b are constants. Let λ and µ be the roots of the polynomial x^2 + ax + b, and assume for simplicity that λ ≠ µ. I claim that {e^{λx}, e^{µx}} is a basis of V.

Let D : C^∞(R) → C^∞(R) be the linear map on the vector space of infinitely differentiable functions given by Df = f'. The eigenvectors of D with eigenvalue λ are the solutions of the equation Df = λf, namely f = k e^{λx}, where k is a constant. In other words, e^{λx} spans ker(D - λ). Since x^2 + ax + b = (x - λ)(x - µ), we have

    f'' + af' + bf = (D^2 + aD + b)f = (D - λ)(D - µ)f = (D - µ)(D - λ)f.

If f'' + af' + bf = 0, then (D - µ)f lies in ker(D - λ) and vice-versa, so

    (D - µ)f = k_1 e^{λx}   and   (D - λ)f = k_2 e^{µx}

for some constants k_1 and k_2. Solving these two equations for Df, we get

    (λ - µ) Df = k_1 λ e^{λx} - k_2 µ e^{µx}.

Integrating both sides (substituting back into the two displayed equations shows the constant of integration must vanish), we get

    f = (1/(λ - µ)) (k_1 e^{λx} - k_2 e^{µx}).

This shows that {e^{λx}, e^{µx}} spans V.

We now show linear independence. Suppose

    c_1 e^{λx} + c_2 e^{µx} = 0

for some constants c_1 and c_2. Differentiating, we get

    c_1 λ e^{λx} + c_2 µ e^{µx} = 0.

Multiplying the first equation by µ and subtracting the second, we get

    c_1 (µ - λ) e^{λx} = 0.

Since µ ≠ λ, this forces c_1 = 0. Then c_2 e^{µx} = 0, so c_2 = 0. This proves that {e^{λx}, e^{µx}} is linearly independent and is therefore a basis of the solution space V.
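
The claim can also be checked symbolically. A sympy sketch (variable names are my own) verifying, for symbolic a and b, that every combination c_1 e^{λx} + c_2 e^{µx} with λ, µ the roots of x^2 + ax + b solves f'' + af' + bf = 0:

    import sympy as sp

    x, a, b, c1, c2, t = sp.symbols('x a b c1 c2 t')

    # The roots lambda and mu of t^2 + a*t + b (distinct when a^2 - 4b != 0).
    lam, mu = sp.solve(t**2 + a*t + b, t)

    f = c1 * sp.exp(lam * x) + c2 * sp.exp(mu * x)
    residual = sp.diff(f, x, 2) + a * sp.diff(f, x) + b * f
    print(sp.simplify(residual))   # 0: every such combination solves the ODE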

Exercise 20.1 Find a basis of the hyperplane in R^4 with equation x + 2y + 3z + 4w = 0.

Exercise 20.2 Let V be the vector space of solutions of the differential equation f'' - 2f' + f = 0.

(a) Show that the functions e^x and xe^x belong to V.

(b) Show that the functions e^x and xe^x are linearly independent. (In fact {e^x, xe^x} is a basis of V, but that is harder to prove.)

Exercise 20.3 Let P_n be the vector space of polynomials of degree at most n. Let p_0, p_1, ..., p_n be polynomials with deg(p_i) = i. Show that {p_0, p_1, ..., p_n} is a basis of P_n.

Exercise 20.4 On P_n, we have an analogue of the dot product, given by

    ⟨p, q⟩ = ∫_{-1}^{1} p(x) q(x) dx.

We say p and q are orthogonal if ⟨p, q⟩ = 0. In this problem you may use without proof the fact that distinct Legendre polynomials P_k and P_l are orthogonal: ⟨P_k, P_l⟩ = 0 if k ≠ l.

(a) Suppose f in P_n is orthogonal to each Legendre polynomial P_k, for 0 ≤ k ≤ n. Show that f = 0.

(b) We know that any f in P_n may be uniquely expressed as a linear combination of the P_k. Show that this unique linear combination is given by

    f(x) = Σ_{k=0}^{n} (⟨f, P_k⟩ / ⟨P_k, P_k⟩) P_k(x).

[Hint: Let g be the difference of the two sides and show that ⟨g, P_k⟩ = 0 for 0 ≤ k ≤ n. Then invoke part (a).]

Exercise 20.5 Let v_1, ..., v_k be nonzero vectors in R^n which are orthogonal with respect to the dot product. Prove that {v_1, ..., v_k} is linearly independent.
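
The following is not a solution to Exercise 20.4, just a numerical illustration of the expansion formula in part (b). A sympy sketch (the sample polynomial f and the helper name are my own choices), using sympy's built-in legendre polynomials:

    import sympy as sp

    x = sp.symbols('x')
    n = 3
    f = 4*x**3 + x - 2                      # an arbitrary element of P_3

    def inner(p, q):
        """The analogue of the dot product: integral of p*q over [-1, 1]."""
        return sp.integrate(p * q, (x, -1, 1))

    # f(x) = sum over k of <f, P_k>/<P_k, P_k> * P_k(x)
    P = [sp.legendre(k, x) for k in range(n + 1)]
    expansion = sum(inner(f, Pk) / inner(Pk, Pk) * Pk for Pk in P)
    print(sp.expand(expansion))             # 4*x**3 + x - 2, recovering f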