
7 Inner Product Spaces

7.1 Inner Products

Recall that if $z$ is a complex number, then $\bar{z}$ denotes the conjugate of $z$, $\operatorname{Re}(z)$ denotes the real part of $z$, and $\operatorname{Im}(z)$ denotes the imaginary part of $z$. By definition,
$$\overline{a + bi} = a - bi, \qquad \operatorname{Re}(a + bi) = a, \qquad \operatorname{Im}(a + bi) = b$$
for any $a, b \in \mathbb{R}$.

Throughout this chapter we take $\mathbb{F}$ to represent any field that is a subfield of the complex numbers $\mathbb{C}$, which is to say $\mathbb{F}$ is a field consisting of objects on which the operation of conjugation may be done. This of course includes $\mathbb{C}$ itself, as well as the field of real numbers $\mathbb{R}$, the rational numbers $\mathbb{Q}$, and others.

Definition 7.1. An inner product on a vector space $V$ over $\mathbb{F}$ is a function $\langle\,\cdot\,,\cdot\,\rangle : V \times V \to \mathbb{F}$ that associates each pair of vectors $(u, v) \in V \times V$ with a scalar $\langle u, v \rangle \in \mathbb{F}$ in accordance with the following axioms:

IP1. $\langle u, v \rangle = \overline{\langle v, u \rangle}$ for all $u, v \in V$.
IP2. $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ for all $u, v, w \in V$.
IP3. $\langle au, v \rangle = a\langle u, v \rangle$ for all $u, v \in V$ and $a \in \mathbb{F}$.
IP4. $\langle u, u \rangle > 0$ for all $u \neq 0$.

A vector space $V$ together with an associated inner product is called an inner product space and denoted by $(V, \langle\,\cdot\,,\cdot\,\rangle)$.

Remark. Care must be taken not to confuse the symbol $\langle u, v \rangle$ for the inner product of two vectors with, say, the symbol $\langle x, y \rangle$ for a euclidean vector in $\mathbb{R}^2$ that is used in some textbooks (particularly calculus books). One features a pair of vectors between angle brackets, while the other features a pair of scalars.

An inner product associated with a vector space $V$ over $\mathbb{C}$ is generally complex-valued and called a hermitian inner product or simply a hermitian product, in which case the pair $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is called a hermitian inner product space. Axiom IP1 is the conjugate symmetry property. If $V$ is a vector space over $\mathbb{R}$ (or some subfield of $\mathbb{R}$), then this axiom becomes
$$\langle u, v \rangle = \langle v, u \rangle \quad \text{for all } u, v \in V$$
and is called the symmetry property.

Axioms IP2 and IP3 taken together are the linearity properties, and using them we easily obtain
$$\langle u - v, w \rangle = \langle u + (-v), w \rangle = \langle u, w \rangle + \langle -v, w \rangle = \langle u, w \rangle - \langle v, w \rangle.$$
Axiom IP4 is the positive-definiteness property. Products which satisfy all axioms save IP4 (or which satisfy a modified version of IP4) are also of theoretical interest, but will not be entertained in this chapter.

Theorem 7.2. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space over $\mathbb{F}$. For $u, v, w \in V$ and $a \in \mathbb{F}$, the following properties hold:

1. $\langle 0, u \rangle = \langle u, 0 \rangle = 0$.
2. $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$.
3. $\langle u, av \rangle = \bar{a}\langle u, v \rangle$.
4. $\langle u, u \rangle = 0$ if and only if $u = 0$.
5. If $\langle u, v \rangle = \langle u, w \rangle$ for all $u \in V$, then $v = w$.

Proof.
Proof of Part (1): Let $u \in V$. By Axiom IP2 we have
$$\langle 0, u \rangle = \langle 0 + 0, u \rangle = \langle 0, u \rangle + \langle 0, u \rangle.$$
Subtracting $\langle 0, u \rangle$ from the leftmost and rightmost expressions yields $\langle 0, u \rangle = 0$ as desired. Then
$$\langle u, 0 \rangle = \overline{\langle 0, u \rangle} = \bar{0} = 0$$
completes the proof.

Proof of Part (2): For any $u, v, w \in V$ we have
$$\langle u, v + w \rangle = \overline{\langle v + w, u \rangle} \qquad \text{(Axiom IP1)}$$
$$= \overline{\langle v, u \rangle + \langle w, u \rangle} \qquad \text{(Axiom IP2)}$$
$$= \overline{\langle v, u \rangle} + \overline{\langle w, u \rangle} \qquad \text{(Property of complex conjugates)}$$
$$= \langle u, v \rangle + \langle u, w \rangle. \qquad \text{(Axiom IP1)}$$

Proof of Part (3): For any $u, v \in V$ and $a \in \mathbb{F}$ we have
$$\langle u, av \rangle = \overline{\langle av, u \rangle} \qquad \text{(Axiom IP1)}$$
$$= \overline{a\langle v, u \rangle} \qquad \text{(Axiom IP3)}$$
$$= \bar{a}\,\overline{\langle v, u \rangle} \qquad \text{(Property of complex conjugates)}$$
$$= \bar{a}\langle u, v \rangle. \qquad \text{(Axiom IP1)}$$

Proof of Part (4): The contrapositive of Axiom IP4 states that if $\langle u, u \rangle \leq 0$, then $u = 0$. Thus, in particular, $\langle u, u \rangle = 0$ implies that $u = 0$.

For the converse, suppose that $u = 0$. Then, applying Axiom IP2,
$$\langle u, u \rangle = \langle 0, 0 \rangle = \langle 0 + 0, 0 \rangle = \langle 0, 0 \rangle + \langle 0, 0 \rangle;$$
that is, $\langle 0, 0 \rangle + \langle 0, 0 \rangle = \langle 0, 0 \rangle$, from which we obtain $\langle 0, 0 \rangle = 0$. We conclude that $u = 0$ implies that $\langle u, u \rangle = 0$.

Proof of Part (5): Suppose that $\langle u, v \rangle = \langle u, w \rangle$ for all $u \in V$. Then
$$\langle u, v - w \rangle = \langle u, v + (-1)w \rangle = \langle u, v \rangle + \langle u, (-1)w \rangle = \langle u, v \rangle + \overline{(-1)}\langle u, w \rangle = \langle u, v \rangle - \langle u, w \rangle = \langle u, v \rangle - \langle u, v \rangle = 0$$
for all $u \in V$, making use of Proposition 3.3, parts (2) and (3), and the property $x + (-1)y = x - y$ for $x, y \in \mathbb{F}$. Letting $u = v - w$ subsequently yields $\langle v - w, v - w \rangle = 0$, so that $v - w = 0$ by part (4), and therefore $v = w$.

One sure result that obtains from Axiom IP4 and Theorem 7.2(4) is that $\langle u, u \rangle \geq 0$ for all $u \in V$. This will be important when the discussion turns to norms in the next section.

Recall the euclidean dot product as defined for vectors in $\mathbb{R}^n$: for
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \quad \text{and} \quad y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix},$$
$$x \cdot y = y^\top x = \begin{bmatrix} y_1 & \cdots & y_n \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \sum_{k=1}^n x_k y_k.$$
It is easily verified that the euclidean dot product applied to $\mathbb{R}^n$ satisfies the four axioms of an inner product, and so $(\mathbb{R}^n, \cdot)$ is an inner product space. It might be assumed that $(\mathbb{C}^n, \cdot)$ is also an inner product space (where as usual $\mathbb{C}^n$ is taken to have underlying field $\mathbb{C}$), but this is not the case. Consider for instance the vector $z = [\,1 \;\; i\,]^\top$ in $\mathbb{C}^2$. We have
$$z \cdot z = z^\top z = \begin{bmatrix} 1 & i \end{bmatrix}\begin{bmatrix} 1 \\ i \end{bmatrix} = 1 + i^2 = 1 + (-1) = 0;$$
that is, $z \cdot z = 0$ even though $z \neq 0$, and so Axiom IP4 fails! Or consider $z = [\,i \;\; 0 \;\; 0\,]^\top$ in $\mathbb{C}^3$, for which we find that
$$z \cdot z = z^\top z = \begin{bmatrix} i & 0 & 0 \end{bmatrix}\begin{bmatrix} i \\ 0 \\ 0 \end{bmatrix} = i^2 = -1 < 0,$$
and again Axiom IP4 fails.

To remedy the situation only requires a modest modification of the dot product definition. For the definition we need the conjugate transpose matrix operation: if $A = [a_{ij}] \in \operatorname{Mat}_{m,n}(\mathbb{C})$, then set
$$A^* = \bar{A}^\top = [\,\overline{a_{ji}}\,].$$
Thus, in particular, if
$$z = \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix} \in \mathbb{C}^n, \quad \text{then} \quad z^* = \begin{bmatrix} \bar{z}_1 & \cdots & \bar{z}_n \end{bmatrix}.$$

Definition 7.3. If $w, z \in \mathbb{C}^n$, then the hermitian dot product of $w$ and $z$ is
$$w \cdot z = z^* w = \begin{bmatrix} \bar{z}_1 & \cdots & \bar{z}_n \end{bmatrix}\begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} = \sum_{k=1}^n w_k \bar{z}_k. \tag{1}$$

The natural isomorphism $[a]_{1 \times 1} \mapsto a$ is an implicit part of the definition, so that the hermitian dot product produces a scalar value as expected.

Letting $\cdot$ denote the hermitian dot product, we return to the vector $[\,1 \;\; i\,]^\top \in \mathbb{C}^2$ and find that
$$\begin{bmatrix} 1 \\ i \end{bmatrix} \cdot \begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} \bar{1} & \bar{i} \end{bmatrix}\begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} 1 & -i \end{bmatrix}\begin{bmatrix} 1 \\ i \end{bmatrix} = 1 - i(i) = 1 - i^2 = 1 - (-1) = 2,$$
which is an outcome that does not run afoul of Axiom IP4 and so corrects the problem $[\,1 \;\; i\,]^\top$ presented for the euclidean dot product above.

The hermitian dot product becomes the euclidean dot product when applied to vectors in $\mathbb{R}^n$: letting $x, y \in \mathbb{R}^n$ we have
$$x \cdot y = y^* x = \bar{y}^\top x = \begin{bmatrix} \bar{y}_1 & \cdots & \bar{y}_n \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 & \cdots & y_n \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = y^\top x,$$
since $y_k \in \mathbb{R}$ implies that $\bar{y}_k = y_k$ for each $1 \leq k \leq n$. For this reason we will henceforth always assume (unless stated otherwise) that $\cdot$ denotes the hermitian dot product, and call it simply the dot product.
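The computations above are easy to reproduce numerically. The following NumPy sketch (an added illustration, not part of the original notes) contrasts the naive product $z^\top z$ with the hermitian dot product for $z = [\,1 \;\; i\,]^\top$; note that NumPy's `vdot` conjugates its first argument, so $w \cdot z = z^* w$ corresponds to `np.vdot(z, w)`.

```python
import numpy as np

z = np.array([1, 1j])

# Naive euclidean-style product z^T z: Axiom IP4 fails, since z != 0.
print(z @ z)             # 0j

# Hermitian dot product w . z = z* w: vdot conjugates its first argument.
print(np.vdot(z, z))     # (2+0j), a positive real number, as computed above
```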

Example 7.4. Let $a, b \in \mathbb{R}$ such that $a < b$, and let $V$ be the vector space over $\mathbb{R}$ consisting of all continuous functions $f : [a, b] \to \mathbb{R}$. Given $f, g \in V$, define
$$\langle f, g \rangle = \int_a^b fg. \tag{2}$$
We verify that $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space. Since $\langle f, g \rangle$ is real-valued for any $f, g \in V$, we have
$$\langle f, g \rangle = \int_a^b fg = \int_a^b gf = \langle g, f \rangle = \overline{\langle g, f \rangle},$$
and thus Axiom IP1 is confirmed. Next, for any $f, g, h \in V$ we have
$$\langle f + g, h \rangle = \int_a^b (f + g)h = \int_a^b (fh + gh) = \int_a^b fh + \int_a^b gh = \langle f, h \rangle + \langle g, h \rangle,$$
confirming Axiom IP2. Axiom IP3 obtains readily:
$$\langle af, g \rangle = \int_a^b (af)g = \int_a^b a(fg) = a\int_a^b fg = a\langle f, g \rangle.$$
Next, for any $f \in V$ we have $f^2(x) \geq 0$ for all $x \in [a, b]$, and so
$$\langle f, f \rangle = \int_a^b f^2 \geq 0$$
follows from an established property of the definite integral. Finally, if $\int_a^b f^2 = 0$ it follows from another property of definite integrals that $f(x) = 0$ for all $x \in [a, b]$, which is to say $f = 0$, and therefore Axiom IP4 holds.
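As a quick numerical illustration (added here; the interval $[a, b] = [0, 1]$ and the test functions are arbitrary choices), the sketch below spot-checks the four axioms for the integral inner product using SciPy's `quad`.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Example 7.4 with [a, b] = [0, 1]: <f, g> = integral of f*g.
def ip(f, g, a=0.0, b=1.0):
    return quad(lambda x: f(x) * g(x), a, b)[0]

f, g, h = np.sin, np.cos, np.exp

print(np.isclose(ip(f, g), ip(g, f)))                                 # IP1
print(np.isclose(ip(lambda x: f(x) + g(x), h), ip(f, h) + ip(g, h)))  # IP2
print(np.isclose(ip(lambda x: 3.0 * f(x), g), 3.0 * ip(f, g)))        # IP3
print(ip(f, f) > 0)                                                   # IP4
```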

Example 7.5. Recall the notion of the trace of a square matrix, which is a linear transformation $\operatorname{tr} : \operatorname{Mat}_n(\mathbb{F}) \to \mathbb{F}$ given by
$$\operatorname{tr}(A) = \sum_{i=1}^n a_{ii}$$
for each $A = [a_{ij}] \in \operatorname{Mat}_n(\mathbb{F})$. Letting $\mathbb{F} = \mathbb{R}$, define $\langle\,\cdot\,,\cdot\,\rangle : \operatorname{Sym}_n(\mathbb{R}) \times \operatorname{Sym}_n(\mathbb{R}) \to \mathbb{R}$ by
$$\langle A, B \rangle = \operatorname{tr}(AB).$$
The claim is that $(\operatorname{Sym}_n(\mathbb{R}), \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space. To substantiate the claim we must verify that the four axioms of an inner product are satisfied. Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be elements of $\operatorname{Sym}_n(\mathbb{R})$. The $ii$-entry of $AB$ is $\sum_{j=1}^n a_{ij}b_{ji}$, and so
$$\operatorname{tr}(AB) = \sum_{i=1}^n \sum_{j=1}^n a_{ij}b_{ji}. \tag{3}$$
The $ii$-entry of $BA$ is $\sum_{j=1}^n b_{ij}a_{ji}$, from which we obtain
$$\operatorname{tr}(BA) = \sum_{i=1}^n \sum_{j=1}^n b_{ij}a_{ji} = \sum_{j=1}^n \sum_{i=1}^n b_{ji}a_{ij} \qquad \text{(interchange } i \text{ and } j\text{)}$$
$$= \sum_{i=1}^n \sum_{j=1}^n a_{ij}b_{ji} \qquad \text{(interchange summations)}$$
$$= \operatorname{tr}(AB). \qquad \text{(equation (3))}$$
Hence
$$\langle A, B \rangle = \operatorname{tr}(AB) = \operatorname{tr}(BA) = \langle B, A \rangle,$$
and Axiom IP1 is confirmed to hold. In Chapter 4 it was found that the trace operation is a linear transformation, and so for any $A, B, C \in \operatorname{Sym}_n(\mathbb{R})$ and $x \in \mathbb{R}$ we have
$$\langle A + B, C \rangle = \operatorname{tr}((A + B)C) = \operatorname{tr}(AC + BC) = \operatorname{tr}(AC) + \operatorname{tr}(BC) = \langle A, C \rangle + \langle B, C \rangle$$
and
$$\langle xA, B \rangle = \operatorname{tr}((xA)B) = \operatorname{tr}(x(AB)) = x\operatorname{tr}(AB) = x\langle A, B \rangle,$$
which confirms Axioms IP2 and IP3. Next, observing that $A = [a_{ij}] \in \operatorname{Sym}_n(\mathbb{R})$ if and only if $a_{ij} = a_{ji}$ for all $1 \leq i, j \leq n$, we have
$$\langle A, A \rangle = \operatorname{tr}(A^2) = \sum_{i=1}^n \sum_{j=1}^n a_{ij}a_{ji} = \sum_{i=1}^n \sum_{j=1}^n a_{ij}a_{ij} = \sum_{i=1}^n \sum_{j=1}^n a_{ij}^2 \geq 0.$$
It is easy to see that if $\operatorname{tr}(A^2) = 0$, then we must have $a_{ij} = 0$ for all $1 \leq i, j \leq n$, and thus $A = O_n$. Axiom IP4 is confirmed.
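A numerical spot-check of this example (an added sketch, with randomly generated symmetric matrices) confirms that $\operatorname{tr}(AB) = \sum_{i,j} a_{ij}b_{ij}$ on $\operatorname{Sym}_n(\mathbb{R})$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_sym(n):
    # (M + M^T)/2 is symmetric for any square M.
    m = rng.standard_normal((n, n))
    return (m + m.T) / 2

A, B = rand_sym(4), rand_sym(4)

# For symmetric A, B: tr(AB) = sum a_ij b_ji = sum a_ij b_ij.
print(np.isclose(np.trace(A @ B), np.sum(A * B)))        # True
print(np.isclose(np.trace(B @ A), np.trace(A @ B)))      # IP1
print(np.trace(A @ A) >= 0)                              # <A,A> = sum a_ij^2
```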

7.2 Norms

Given an inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$ and a vector $u \in V$, we define the norm of $u$ to be the scalar
$$\|u\| = \sqrt{\langle u, u \rangle}.$$
If $\|u\| = 1$ we say that $u$ is a unit vector. Notice that, by Axiom IP4, $\|u\|$ is always a nonnegative real number. The distance $d(u, v)$ between two vectors $u, v \in V$ is given by
$$d(u, v) = \|u - v\|,$$
also always a nonnegative real number. If $\langle u, v \rangle = 0$ we say that $u$ and $v$ are orthogonal and write $u \perp v$.

Proposition 7.6. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space. If $W \subseteq V$ is a subspace of $V$, then
$$W^\perp = \{v \in V : \langle v, w \rangle = 0 \text{ for all } w \in W\} \tag{4}$$
is also a subspace of $V$.

Proof. Suppose $u, v \in W^\perp$. Then for any $w \in W$ we have
$$\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle = 0 + 0 = 0,$$
which shows that $u + v \in W^\perp$. Moreover, for any $a \in \mathbb{F}$ we have
$$\langle au, w \rangle = a\langle u, w \rangle = a(0) = 0$$
for any $w \in W$, which shows that $au \in W^\perp$. Since $W^\perp \subseteq V$ is closed under scalar multiplication and vector addition, we conclude that it is a subspace of $V$.

The subspace $W^\perp$ defined by (4) is called the orthogonal complement of $W$ (the symbol $W^\perp$ is often read as "$W$ perp"). If $v \in W^\perp$, then we say $v$ is orthogonal to $W$ and write $v \perp W$.

Proposition 7.7. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space. Let $w_1, \ldots, w_m \in V$, and define the subspace
$$U = \{v \in V : v \perp w_i \text{ for all } 1 \leq i \leq m\}.$$
If $W = \operatorname{Span}\{w_1, \ldots, w_m\}$, then $U = W^\perp$.

Proof. It is a routine matter to verify that $U$ is indeed a subspace of $V$. Let $v \in U$. For any $w \in W$ we have $w = c_1 w_1 + \cdots + c_m w_m$ for some $c_1, \ldots, c_m \in \mathbb{F}$, and then since $v \perp w_i$ implies $\langle w_i, v \rangle = 0$ we obtain
$$\langle w, v \rangle = \Big\langle \sum_{i=1}^m c_i w_i, v \Big\rangle = \sum_{i=1}^m c_i \langle w_i, v \rangle = \sum_{i=1}^m c_i(0) = 0$$
by Axioms IP2 and IP3.

Hence $v \perp w$ for all $w \in W$, so that $v \in W^\perp$ and therefore $U \subseteq W^\perp$.

Next, let $v \in W^\perp$. Then $\langle w, v \rangle = 0$ for all $w \in W$, or equivalently
$$\Big\langle \sum_{i=1}^m c_i w_i, v \Big\rangle = 0 \tag{5}$$
for any $c_1, \ldots, c_m \in \mathbb{F}$. If for any $1 \leq i \leq m$ we choose $c_i = 1$ and $c_j = 0$ for $j \neq i$, then (5) gives $\langle w_i, v \rangle = 0$. Thus $v \perp w_i$ for all $1 \leq i \leq m$, implying that $v \in U$ and so $W^\perp \subseteq U$. Therefore $U = W^\perp$.

Let $v \in (V, \langle\,\cdot\,,\cdot\,\rangle)$ such that $v \neq 0$. Given any $u \in (V, \langle\,\cdot\,,\cdot\,\rangle)$ there can be found some $c \in \mathbb{F}$ such that $\langle v, u - cv \rangle = 0$. Indeed,
$$\langle v, u - cv \rangle = 0 \iff \langle v, u \rangle - \langle v, cv \rangle = 0 \iff \langle v, u \rangle - \bar{c}\langle v, v \rangle = 0 \iff \bar{c} = \frac{\langle v, u \rangle}{\langle v, v \rangle} \iff c = \frac{\langle u, v \rangle}{\langle v, v \rangle}, \tag{6}$$
where $\langle v, v \rangle \neq 0$ since $v \neq 0$, and the last equivalence follows by conjugating both sides (recall that $\overline{\langle v, u \rangle} = \langle u, v \rangle$ and that $\langle v, v \rangle$ is real).

Definition 7.8. Let $v \neq 0$. The orthogonal projection of $u$ onto $v$ is given by
$$\operatorname{proj}_v u = \frac{\langle u, v \rangle}{\langle v, v \rangle}\, v.$$

Theorem 7.9. Let $u, v \in (V, \langle\,\cdot\,,\cdot\,\rangle)$.

1. Pythagorean Theorem: If $u \perp v$, then $\|u + v\|^2 = \|u\|^2 + \|v\|^2$.
2. Parallelogram Law: $\|u + v\|^2 + \|u - v\|^2 = 2\|u\|^2 + 2\|v\|^2$.
3. Schwarz Inequality: $|\langle u, v \rangle| \leq \|u\|\,\|v\|$.
4. Triangle Inequality: $\|u + v\| \leq \|u\| + \|v\|$.

Proof.
Proof of the Pythagorean Theorem: Suppose $u \perp v$, so that $\langle u, v \rangle = \langle v, u \rangle = 0$. By direct calculation we have
$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u + v \rangle + \langle v, u + v \rangle \qquad \text{(Axiom IP2)}$$
$$= \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle \qquad \text{(Theorem 7.2(2))}$$
$$= \langle u, u \rangle + \langle v, v \rangle = \|u\|^2 + \|v\|^2.$$

Proof of the Parallelogram Law: We have
$$\|u + v\|^2 = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \|u\|^2 + \langle u, v \rangle + \langle v, u \rangle + \|v\|^2 \tag{7}$$
from the proof of the Pythagorean Theorem, and
$$\|u - v\|^2 = \langle u - v, u - v \rangle = \|u\|^2 - \langle u, v \rangle - \langle v, u \rangle + \|v\|^2. \tag{8}$$
Adding equations (7) and (8) completes the proof.

Proof of the Schwarz Inequality: If $u = 0$ or $v = 0$, then by Theorem 7.2(1) we obtain
$$|\langle u, v \rangle| = |0| = 0 = \|u\|\,\|v\|,$$
which affirms the theorem's conclusion. Suppose $u, v \neq 0$, and let
$$c = \frac{\langle u, v \rangle}{\langle v, v \rangle} = \frac{\langle u, v \rangle}{\|v\|^2}.$$
Now, by (6),
$$\langle u - cv, cv \rangle = \bar{c}\langle u - cv, v \rangle = \bar{c}\,\overline{\langle v, u - cv \rangle} = \bar{c}\,\bar{0} = \bar{c}(0) = 0.$$
Thus $u - cv$ and $cv$ are orthogonal, and by the Pythagorean Theorem
$$\|u\|^2 = \|(u - cv) + cv\|^2 = \|u - cv\|^2 + \|cv\|^2.$$
Hence $\|cv\|^2 \leq \|u\|^2$ since $\|u - cv\|^2 \geq 0$. However, recalling that $z\bar{z} = |z|^2$ for any $z \in \mathbb{F}$, we obtain
$$\|cv\|^2 = \langle cv, cv \rangle = c\bar{c}\langle v, v \rangle = |c|^2\|v\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^4}\,\|v\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^2},$$
and so $\|cv\|^2 \leq \|u\|^2$ implies that
$$\frac{|\langle u, v \rangle|^2}{\|v\|^2} \leq \|u\|^2.$$
Therefore we have $|\langle u, v \rangle|^2 \leq \|u\|^2\|v\|^2$, and taking the square root of both sides completes the proof.

Proof of the Triangle Inequality: For any $u, v \in V$ we have $\langle u, v \rangle = a + bi$ for some $a, b \in \mathbb{R}$, so that the real part of $\langle u, v \rangle$ is $\operatorname{Re}(\langle u, v \rangle) = a$. (If $V$ is a vector space over $\mathbb{R}$ then $b = 0$, but this will not affect our analysis.) By the Schwarz Inequality we have
$$\sqrt{a^2 + b^2} = |a + bi| = |\langle u, v \rangle| \leq \|u\|\,\|v\|,$$
and since
$$\operatorname{Re}(\langle u, v \rangle) = a \leq |a| = \sqrt{a^2} \leq \sqrt{a^2 + b^2},$$
it follows that
$$\operatorname{Re}(\langle u, v \rangle) \leq \|u\|\,\|v\|. \tag{9}$$

Recalling the property of complex numbers $z + \bar{z} = 2\operatorname{Re}(z)$, we have
$$\langle u, v \rangle + \langle v, u \rangle = \langle u, v \rangle + \overline{\langle u, v \rangle} = 2\operatorname{Re}(\langle u, v \rangle). \tag{10}$$
Now,
$$\|u + v\|^2 = \|u\|^2 + \langle u, v \rangle + \langle v, u \rangle + \|v\|^2 \qquad \text{(equation (7))}$$
$$= \|u\|^2 + 2\operatorname{Re}(\langle u, v \rangle) + \|v\|^2 \qquad \text{(equation (10))}$$
$$\leq \|u\|^2 + 2\|u\|\,\|v\| + \|v\|^2, \qquad \text{(inequality (9))}$$
and so $\|u + v\|^2 \leq (\|u\| + \|v\|)^2$. Taking the square root of both sides completes the proof.

Proposition 7.10. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space, and let $v_1, \ldots, v_n \in V$ be such that $v_i \neq 0$ for each $1 \leq i \leq n$ and $v_i \perp v_j$ whenever $i \neq j$. If $v \in V$ and
$$c_i = \frac{\langle v, v_i \rangle}{\langle v_i, v_i \rangle}$$
for each $1 \leq i \leq n$, then
$$v - \sum_{i=1}^n c_i v_i$$
is orthogonal to $v_1, \ldots, v_n$.

Proof. Fix $1 \leq k \leq n$. Since $v_k \neq 0$ we have $\langle v_k, v_k \rangle \neq 0$. Also $\langle v_i, v_j \rangle = 0$ whenever $i \neq j$. Now,
$$\Big\langle v - \sum_{i=1}^n c_i v_i, v_k \Big\rangle = \langle v, v_k \rangle - \sum_{i=1}^n c_i \langle v_i, v_k \rangle = \langle v, v_k \rangle - c_k \langle v_k, v_k \rangle = \langle v, v_k \rangle - \frac{\langle v, v_k \rangle}{\langle v_k, v_k \rangle}\langle v_k, v_k \rangle = \langle v, v_k \rangle - \langle v, v_k \rangle = 0,$$
and therefore $\big(v - \sum_{i=1}^n c_i v_i\big) \perp v_k$ for any $1 \leq k \leq n$.

Proposition 7.11. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space, and let $v_1, \ldots, v_n \in V$ be such that $v_i \neq 0$ for each $1 \leq i \leq n$ and $v_i \perp v_j$ whenever $i \neq j$. If $v \in V$ and $c_i = \langle v, v_i \rangle / \langle v_i, v_i \rangle$ for each $1 \leq i \leq n$, then
$$\Big\| v - \sum_{i=1}^n c_i v_i \Big\| \leq \Big\| v - \sum_{i=1}^n a_i v_i \Big\|$$
for any $a_1, \ldots, a_n \in \mathbb{F}$.

Proof. Fix $v \in V$ and $a_1, \ldots, a_n \in \mathbb{F}$, and let $c_i = \langle v, v_i \rangle / \langle v_i, v_i \rangle$ for each $1 \leq i \leq n$. First we observe that for any scalars $x_1, \ldots, x_n$ we have
$$\Big\langle v - \sum_{k=1}^n c_k v_k, \sum_{i=1}^n x_i v_i \Big\rangle = \sum_{i=1}^n \Big\langle v - \sum_{k=1}^n c_k v_k, x_i v_i \Big\rangle \qquad \text{(Theorem 7.2(2))}$$
$$= \sum_{i=1}^n \bar{x}_i \Big\langle v - \sum_{k=1}^n c_k v_k, v_i \Big\rangle \qquad \text{(Theorem 7.2(3))}$$
$$= \sum_{i=1}^n \bar{x}_i(0) = 0, \qquad \text{(Proposition 7.10)}$$
which is to say that $v - \sum_{k=1}^n c_k v_k$ is orthogonal to any linear combination of the vectors $v_1, \ldots, v_n$. In particular
$$\Big( v - \sum_{i=1}^n c_i v_i \Big) \perp \sum_{i=1}^n (c_i - a_i)v_i,$$
and so by the Pythagorean Theorem
$$\Big\| v - \sum_{i=1}^n a_i v_i \Big\|^2 = \Big\| \Big( v - \sum_{i=1}^n c_i v_i \Big) + \sum_{i=1}^n (c_i - a_i)v_i \Big\|^2 = \Big\| v - \sum_{i=1}^n c_i v_i \Big\|^2 + \Big\| \sum_{i=1}^n (c_i - a_i)v_i \Big\|^2 \geq \Big\| v - \sum_{i=1}^n c_i v_i \Big\|^2.$$
Taking square roots completes the proof.
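The following sketch (added here; the orthogonal pair $v_1, v_2$ in $\mathbb{R}^3$ is an arbitrary choice) illustrates Propositions 7.10 and 7.11 numerically: the residual is orthogonal to each $v_i$, and no other choice of coefficients gives a smaller norm.

```python
import numpy as np

rng = np.random.default_rng(1)

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 2.0])      # v1 . v2 = 0, both nonzero
v = rng.standard_normal(3)

c1 = (v @ v1) / (v1 @ v1)
c2 = (v @ v2) / (v2 @ v2)
r = v - (c1 * v1 + c2 * v2)          # v minus its projections

# Proposition 7.10: the residual is orthogonal to v1 and v2.
print(np.isclose(r @ v1, 0.0), np.isclose(r @ v2, 0.0))

# Proposition 7.11: the c_i give the best approximation from Span{v1, v2}.
for _ in range(5):
    a1, a2 = rng.standard_normal(2)
    other = v - (a1 * v1 + a2 * v2)
    print(np.linalg.norm(r) <= np.linalg.norm(other) + 1e-12)
```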

7.3 Orthogonal Bases

If $B = \{v_1, \ldots, v_n\}$ is a basis for a vector space $V$ and $\langle\,\cdot\,,\cdot\,\rangle : V \times V \to \mathbb{F}$ is an inner product, then we refer to $B$ as a basis for the inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$.

Definition 7.12. Let $B = \{v_1, \ldots, v_n\}$ be a basis for an inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$. If $v_i \perp v_j$ whenever $i \neq j$, then $B$ is an orthogonal basis. If $B$ is an orthogonal basis such that $\|v_i\| = 1$ for all $i$, then $B$ is called an orthonormal basis.

Lemma 7.13. Let $v_1, \ldots, v_n \in (V, \langle\,\cdot\,,\cdot\,\rangle)$ be nonzero vectors. If $v_i \perp v_j$ whenever $i \neq j$, then $v_1, \ldots, v_n$ are linearly independent.

Proof. Suppose that $v_i \perp v_j$ whenever $i \neq j$. Let $x_1, \ldots, x_n \in \mathbb{F}$ and set
$$x_1 v_1 + \cdots + x_n v_n = 0. \tag{11}$$
Now, for each $1 \leq i \leq n$,
$$\Big\langle \sum_{k=1}^n x_k v_k, v_i \Big\rangle = \langle 0, v_i \rangle = 0.$$
On the other hand,
$$\Big\langle \sum_{k=1}^n x_k v_k, v_i \Big\rangle = \sum_{k=1}^n x_k \langle v_k, v_i \rangle = x_i \langle v_i, v_i \rangle.$$
Hence $x_i \langle v_i, v_i \rangle = 0$, and since $v_i \neq 0$ implies $\langle v_i, v_i \rangle \neq 0$, it follows that $x_i = 0$. Therefore (11) leads to the conclusion that $x_1 = \cdots = x_n = 0$, and so $v_1, \ldots, v_n$ are linearly independent.

Theorem 7.14 (Gram-Schmidt Orthogonalization Process). Let $m \in \mathbb{N}$. For any $n \in \mathbb{N}$, if $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space over $\mathbb{F}$ with $\dim(V) = m + n$, $W$ is a subspace of $V$ with orthogonal basis $(w_i)_{i=1}^m$, and
$$(w_1, \ldots, w_m, u_{m+1}, \ldots, u_{m+n}) \tag{12}$$
is a basis for $V$, then an orthogonal basis for $V$ is $(w_i)_{i=1}^{m+n}$, where
$$w_i = u_i - \sum_{k=1}^{i-1} \frac{\langle u_i, w_k \rangle}{\langle w_k, w_k \rangle}\, w_k \tag{13}$$
for each $m + 1 \leq i \leq m + n$. Moreover, for all $1 \leq k \leq n$,
$$\operatorname{Span}(w_i)_{i=1}^{m+k} = \operatorname{Span}(w_1, \ldots, w_m, u_{m+1}, \ldots, u_{m+k}). \tag{14}$$

Note that the existence of vectors $u_{m+1}, \ldots, u_{m+n} \in V$ such that (12) is a basis for $V$ is assured by Theorem 3.53. Also observe that, since $m, n \in \mathbb{N}$ implies $m + n \geq 2$, the theorem does not address one-dimensional vector spaces. This is because one-dimensional vector spaces are not of much interest: any nonzero vector serves as an orthogonal basis!

Proof. We carry out an argument by induction on $n$ by first considering the case when $n = 1$. That is, we let $m \in \mathbb{N}$ be arbitrary, and suppose $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space with $\dim(V) = m + 1$, $W$ is a subspace of $V$ with orthogonal basis $(w_i)_{i=1}^m$, and $B = (w_1, \ldots, w_m, u_{m+1})$ is a basis for $V$. Let
$$w_{m+1} = u_{m+1} - \sum_{k=1}^m \frac{\langle u_{m+1}, w_k \rangle}{\langle w_k, w_k \rangle}\, w_k.$$
If $w_{m+1} = 0$, then
$$u_{m+1} = \sum_{k=1}^m \frac{\langle u_{m+1}, w_k \rangle}{\langle w_k, w_k \rangle}\, w_k$$
obtains, so that $u_{m+1} \in \operatorname{Span}(w_i)_{i=1}^m$ and by Proposition 3.38 it follows that $B$ is a linearly dependent set, a contradiction. Hence $w_{m+1} \neq 0$ is assured. Moreover $w_{m+1}$ is orthogonal to $w_1, \ldots, w_m$ by Proposition 7.10, implying that $w_i \perp w_j$ for all $1 \leq i, j \leq m + 1$ such that $i \neq j$. Since $\{w_1, \ldots, w_{m+1}\}$ is an orthogonal set of nonzero vectors, by Lemma 7.13 it is also a linearly independent set. Therefore, by Theorem 3.52, $(w_i)_{i=1}^{m+1}$ is a basis for $V$ that is also an orthogonal basis. We have proven that the theorem is true in the base case when $n = 1$.

Next, suppose the theorem is true for some particular $n \in \mathbb{N}$. Fix $m \in \mathbb{N}$, suppose $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space with $\dim(V) = m + n + 1$, $W$ is a subspace of $V$ with orthogonal basis $(w_i)_{i=1}^m$, and
$$B = (w_1, \ldots, w_m, u_{m+1}, \ldots, u_{m+n+1})$$
is a basis for $V$. Let $V' = \operatorname{Span}(B \setminus \{u_{m+n+1}\})$, which is to say $(V', \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space with basis $B' = (w_1, \ldots, w_m, u_{m+1}, \ldots, u_{m+n})$, and $W$ is a subspace of $V'$. Since $\dim(V') = m + n$, by our inductive hypothesis we conclude that $(w_i)_{i=1}^{m+n}$, where
$$w_i = u_i - \sum_{k=1}^{i-1} \frac{\langle u_i, w_k \rangle}{\langle w_k, w_k \rangle}\, w_k$$
for each $m + 1 \leq i \leq m + n$, is an orthogonal basis for $V'$. Now, $V'$ is a subspace of $V$ with orthogonal basis $(w_i)_{i=1}^{m+n}$, and $C = (w_1, \ldots, w_{m+n}, u_{m+n+1})$ is a basis for $V$. (To substantiate the latter claim use Proposition 3.38 twice: first to find that $u_{m+n+1} \notin \operatorname{Span}(B') = V' = \operatorname{Span}(w_i)_{i=1}^{m+n}$, and then to find that $C$ is a linearly independent set. Now invoke Theorem 3.52.) Applying the base case proven above, only with $m$ replaced by $m + n$, we conclude that $(w_i)_{i=1}^{m+n+1}$ is an orthogonal basis for $V$, where
$$w_{m+n+1} = u_{m+n+1} - \sum_{k=1}^{m+n} \frac{\langle u_{m+n+1}, w_k \rangle}{\langle w_k, w_k \rangle}\, w_k.$$
We have now shown that if the theorem holds when $m \in \mathbb{N}$ is arbitrary and $\dim(V) = m + n$, then it holds when $m \in \mathbb{N}$ is arbitrary and $\dim(V) = m + n + 1$. All but the last statement of the theorem is now proven by the Principle of Induction.

Finally, to see that (14) holds for each $1 \leq k \leq n$, simply note from (13) that each vector in $(w_i)_{i=1}^{m+k}$ lies in $\operatorname{Span}(w_1, \ldots, w_m, u_{m+1}, \ldots, u_{m+k})$, and also each vector in $(w_1, \ldots, w_m, u_{m+1}, \ldots, u_{m+k})$ lies in $\operatorname{Span}(w_i)_{i=1}^{m+k}$.

Corollary 7.15. If $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is a nontrivial finite-dimensional inner product space over $\mathbb{F}$, then it has an orthonormal basis.

Example 7.16. Give the vector space $\mathbb{R}^3$ the customary dot product, thereby producing the inner product space $(\mathbb{R}^3, \cdot)$. Let
$$u_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \qquad u_3 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.$$
Then $B = \{u_1, u_2, u_3\}$ is a basis for $(\mathbb{R}^3, \cdot)$. Use the Gram-Schmidt Process to transform $B$ into an orthogonal basis for $(\mathbb{R}^3, \cdot)$, and then find an orthonormal basis for $(\mathbb{R}^3, \cdot)$.

Solution. Let $w_1 = u_1$. Then $\{w_1\}$ is an orthogonal basis for the subspace $W = \operatorname{Span}\{w_1\}$. Certainly $W \subseteq \mathbb{R}^3$, and we already know that $\{w_1, u_2, u_3\}$ is a basis for $\mathbb{R}^3$. Hence we have the essential ingredients to commence the Gram-Schmidt Process and find vectors $w_2$ and $w_3$ so that $\{w_1, w_2, w_3\}$ constitutes an orthogonal basis for $(\mathbb{R}^3, \cdot)$. The formula for finding $w_i$ (where $i = 2, 3$) is
$$w_i = u_i - \sum_{k=1}^{i-1} \frac{u_i \cdot w_k}{w_k \cdot w_k}\, w_k.$$
Hence
$$w_2 = u_2 - \frac{u_2 \cdot w_1}{w_1 \cdot w_1}\, w_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} - \frac{(-1, 1, 0) \cdot (1, 1, 1)}{(1, 1, 1) \cdot (1, 1, 1)} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}$$
and
$$w_3 = u_3 - \sum_{k=1}^{2} \frac{u_3 \cdot w_k}{w_k \cdot w_k}\, w_k = u_3 - \frac{u_3 \cdot w_1}{w_1 \cdot w_1}\, w_1 - \frac{u_3 \cdot w_2}{w_2 \cdot w_2}\, w_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} - \frac{1}{3}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/6 \\ 1/6 \\ -1/3 \end{bmatrix}.$$
(Note: it should not be surprising that $w_2 = u_2$, since $u_2$ is in fact already orthogonal to $w_1$.) We have obtained
$$\{w_1, w_2, w_3\} = \left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1/6 \\ 1/6 \\ -1/3 \end{bmatrix} \right\}$$
as an orthogonal basis for $(\mathbb{R}^3, \cdot)$. To find an orthonormal basis all we need do is normalize the vectors $w_1$, $w_2$ and $w_3$. We have
$$\hat{w}_1 = \frac{w_1}{\|w_1\|} = \left[ \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right]^\top, \qquad \hat{w}_2 = \frac{w_2}{\|w_2\|} = \left[ -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0 \right]^\top,$$
and
$$\hat{w}_3 = \frac{w_3}{\|w_3\|} = \left[ \frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}}, -\frac{2}{\sqrt{6}} \right]^\top.$$
The set $\{\hat{w}_1, \hat{w}_2, \hat{w}_3\}$ is an orthonormal basis for $(\mathbb{R}^3, \cdot)$.
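The computation of Example 7.16 can be checked with a short NumPy sketch (added here, not part of the notes); the function below implements formula (13) directly.

```python
import numpy as np

# Gram-Schmidt via formula (13), with the euclidean dot product on R^3.
def gram_schmidt(vectors):
    ws = []
    for u in vectors:
        w = u.astype(float)
        for prev in ws:
            # subtract the orthogonal projection of u onto each earlier w
            w = w - (u @ prev) / (prev @ prev) * prev
        ws.append(w)
    return ws

u1 = np.array([1, 1, 1])
u2 = np.array([-1, 1, 0])
u3 = np.array([1, 0, 0])

ws = gram_schmidt([u1, u2, u3])
print(ws[2])                      # [ 0.1666...  0.1666... -0.3333...]

for w in ws:                      # normalizing gives the w-hats above
    print(w / np.linalg.norm(w))
```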

Example 7.17. Recall the vector space $P_2(\mathbb{R})$ of polynomial functions of degree at most 2 with coefficients in $\mathbb{R}$, which here we shall denote simply by $P_2$. Define
$$\langle p, q \rangle = \int_{-1}^1 pq$$
for all $p, q \in P_2$. The verification that $(P_2, \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space proceeds in much the same way as Example 7.4. Apply the Gram-Schmidt Process to transform the standard basis $E = \{1, x, x^2\}$ into an orthonormal basis for $(P_2, \langle\,\cdot\,,\cdot\,\rangle)$.

Solution. Let $w_1 = 1$, the polynomial function with constant value 1. If $W = \operatorname{Span}\{w_1\}$, then $W$ is a subspace of $P_2$ such that $W \neq P_2$, and $\{w_1\}$ is an orthogonal basis for $W$. Starting with $w_1$, we employ the Gram-Schmidt Process to obtain $w_2$ and $w_3$ from $u_2 = x$ and $u_3 = x^2$, respectively. We have
$$w_2 = u_2 - \frac{\langle u_2, w_1 \rangle}{\langle w_1, w_1 \rangle}\, w_1 = x - \frac{\langle x, 1 \rangle}{\langle 1, 1 \rangle} = x - \frac{\int_{-1}^1 x\,dx}{\int_{-1}^1 dx} = x - \frac{0}{2} = x$$
and
$$w_3 = u_3 - \frac{\langle u_3, w_1 \rangle}{\langle w_1, w_1 \rangle}\, w_1 - \frac{\langle u_3, w_2 \rangle}{\langle w_2, w_2 \rangle}\, w_2 = x^2 - \frac{\langle x^2, 1 \rangle}{\langle 1, 1 \rangle} - \frac{\langle x^2, x \rangle}{\langle x, x \rangle}\, x = x^2 - \frac{\int_{-1}^1 x^2\,dx}{\int_{-1}^1 dx} - \frac{\int_{-1}^1 x^3\,dx}{\int_{-1}^1 x^2\,dx}\, x = x^2 - \frac{1}{3},$$
and so
$$\{w_1, w_2, w_3\} = \left\{ 1,\; x,\; x^2 - \tfrac{1}{3} \right\}$$
is an orthogonal basis for $P_2$. To find an orthonormal basis we need only normalize the vectors $w_1$, $w_2$ and $w_3$. From
$$\|w_1\| = \sqrt{\langle w_1, w_1 \rangle} = \sqrt{\langle 1, 1 \rangle} = \sqrt{\int_{-1}^1 dx} = \sqrt{2},$$
$$\|w_2\| = \sqrt{\langle w_2, w_2 \rangle} = \sqrt{\langle x, x \rangle} = \sqrt{\int_{-1}^1 x^2\,dx} = \sqrt{\frac{2}{3}},$$
and
$$\|w_3\| = \sqrt{\langle w_3, w_3 \rangle} = \sqrt{\Big\langle x^2 - \tfrac{1}{3},\, x^2 - \tfrac{1}{3} \Big\rangle} = \sqrt{\int_{-1}^1 \Big( x^2 - \tfrac{1}{3} \Big)^2 dx} = \sqrt{\frac{8}{45}},$$
we obtain
$$\hat{w}_1 = \frac{w_1}{\|w_1\|} = \frac{1}{\sqrt{2}}, \qquad \hat{w}_2 = \frac{w_2}{\|w_2\|} = \frac{\sqrt{6}}{2}\, x, \qquad \hat{w}_3 = \frac{w_3}{\|w_3\|} = \frac{\sqrt{10}}{4}\,(3x^2 - 1).$$
The set $\{\hat{w}_1, \hat{w}_2, \hat{w}_3\}$, which consists of the first three of what are known as normalized Legendre polynomials, is an orthonormal basis for $(P_2, \langle\,\cdot\,,\cdot\,\rangle)$.
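Example 7.17 can likewise be verified numerically. The sketch below (an added illustration) runs Gram-Schmidt on $\{1, x, x^2\}$ using `numpy.polynomial`, computing $\langle p, q \rangle = \int_{-1}^1 pq$ exactly via antiderivatives.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# <p, q> = integral of p*q over [-1, 1], via the antiderivative of p*q.
def ip(p, q):
    antideriv = (p * q).integ()
    return antideriv(1.0) - antideriv(-1.0)

basis = [P([1.0]), P([0.0, 1.0]), P([0.0, 0.0, 1.0])]   # 1, x, x^2

ws = []
for u in basis:
    w = u
    for prev in ws:
        w = w - (ip(u, prev) / ip(prev, prev)) * prev
    ws.append(w)

# Expect approx 0.7071, 1.2247 x, and -0.7906 + 2.3717 x^2,
# matching 1/sqrt(2), (sqrt(6)/2) x, and (sqrt(10)/4)(3x^2 - 1).
for w in ws:
    print(w / np.sqrt(ip(w, w)))
```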

Proposition 7.18. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space over $\mathbb{F}$ with $\dim(V) = n > 0$, let
$$B = \{w_1, \ldots, w_r, u_1, \ldots, u_s\}$$
be an orthogonal basis for $V$, and let
$$W = \operatorname{Span}\{w_1, \ldots, w_r\} \quad \text{and} \quad U = \operatorname{Span}\{u_1, \ldots, u_s\}.$$
Then $U = W^\perp$, $W = U^\perp$, and
$$\dim(W) + \dim(W^\perp) = \dim(V).$$

Proof. Let $u \in U$. Then there exist scalars $x_1, \ldots, x_s \in \mathbb{F}$ such that
$$u = \sum_{i=1}^s x_i u_i.$$
Let $w \in W$ be arbitrary, so that $w = \sum_{j=1}^r y_j w_j$ for scalars $y_1, \ldots, y_r \in \mathbb{F}$. Now,
$$\langle u, w \rangle = \Big\langle \sum_{i=1}^s x_i u_i, w \Big\rangle = \sum_{i=1}^s x_i \langle u_i, w \rangle \qquad \text{(Axiom IP2)}$$
$$= \sum_{i=1}^s x_i \Big\langle u_i, \sum_{j=1}^r y_j w_j \Big\rangle = \sum_{i=1}^s x_i \sum_{j=1}^r \langle u_i, y_j w_j \rangle \qquad \text{(Theorem 7.2(2))}$$
$$= \sum_{i=1}^s \sum_{j=1}^r x_i \bar{y}_j \langle u_i, w_j \rangle. \qquad \text{(Theorem 7.2(3))}$$
Since $B$ is an orthogonal basis we have $\langle u_i, w_j \rangle = 0$ for all $1 \leq i \leq s$ and $1 \leq j \leq r$, so that
$$\langle u, w \rangle = \sum_{i=1}^s \sum_{j=1}^r x_i \bar{y}_j \langle u_i, w_j \rangle = 0,$$
and therefore $u \perp w$. Since $w \in W$ is arbitrary, we conclude that $u \in W^\perp$ and hence $U \subseteq W^\perp$.

Next, let $v \in W^\perp$. Since $B$ is a basis for $V$, there exist scalars $x_1, \ldots, x_s, y_1, \ldots, y_r \in \mathbb{F}$ such that
$$v = \sum_{i=1}^s x_i u_i + \sum_{j=1}^r y_j w_j.$$
Fix $1 \leq k \leq r$. Since $y_k w_k \in W$ we have
$$\langle v, y_k w_k \rangle = 0. \tag{15}$$

On the other hand, since $\langle u_i, w_k \rangle = 0$ for all $1 \leq i \leq s$, and $\langle w_j, w_k \rangle = 0$ for all $j \neq k$, we have
$$\langle v, y_k w_k \rangle = \sum_{i=1}^s x_i \bar{y}_k \langle u_i, w_k \rangle + \sum_{j=1}^r y_j \bar{y}_k \langle w_j, w_k \rangle = y_k \bar{y}_k \langle w_k, w_k \rangle = |y_k|^2 \langle w_k, w_k \rangle. \tag{16}$$
Combining (15) and (16) yields $|y_k|^2 \langle w_k, w_k \rangle = 0$, and since $w_k \neq 0$ implies that $\langle w_k, w_k \rangle \neq 0$ by Axiom IP4, it follows that $y_k = 0$. We conclude, then, that
$$v = \sum_{i=1}^s x_i u_i \in U,$$
and so $W^\perp \subseteq U$. Therefore $U = W^\perp$, and by symmetry $W = U^\perp$. Finally, since $\{u_1, \ldots, u_s\}$ is a basis for $U$ and $\{w_1, \ldots, w_r\}$ is a basis for $W$, we obtain
$$\dim(V) = n = r + s = \dim(W) + \dim(U) = \dim(W) + \dim(W^\perp),$$
which completes the proof.

The conclusions of Proposition 7.18 in fact apply to any arbitrary subspace of an inner product space, as the next theorem establishes.

Theorem 7.19. Let $W$ be a subspace of an inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$ over $\mathbb{F}$. Then
$$(W^\perp)^\perp = W$$
and
$$\dim(W) + \dim(W^\perp) = \dim(V).$$

Proof. The proof is trivial in the case when $\dim(V) = 0$, since the only possible subspace is then $\{0\}$. So suppose henceforth that $n = \dim(V) > 0$.

If $W = \{0\}$, then $W^\perp = V$. Now,
$$(W^\perp)^\perp = V^\perp = \{0\} = W,$$
and since $\dim(\{0\}) = 0$ we have
$$\dim(V) = \dim(\{0\}) + \dim(V) = \dim(W) + \dim(W^\perp).$$
If $W = V$, then $W^\perp = \{0\}$ and a symmetrical argument to the one above leads to the same conclusions.

Set $m = \dim(W)$, and suppose $W \neq \{0\}$ and $W \neq V$. Then $m \leq n$ by Theorem 3.54(2), and $m \neq n$ by Theorem 3.54(3), so that $0 < m < n$. Since $W$ is a nontrivial vector space in its own right, by Corollary 7.15 it has an orthogonal basis $\{w_1, \ldots, w_m\}$. Since $W \neq V$ it follows by Theorem 7.14 that there exist $w_{m+1}, \ldots, w_n \in V$ such that $B = \{w_1, \ldots, w_n\}$ is an orthogonal basis for $V$. Observing that $W = \operatorname{Span}\{w_1, \ldots, w_m\}$ and defining $U = \operatorname{Span}\{w_{m+1}, \ldots, w_n\}$, by Proposition 7.18 we have $U = W^\perp$, $W = U^\perp$, and
$$\dim(W) + \dim(W^\perp) = \dim(V).$$

Finally, observe that
$$(W^\perp)^\perp = U^\perp = W,$$
which finishes the proof.

The dimension equation in Theorem 7.19 amounts to a generalization of Proposition 4.46 from the setting of real euclidean vector spaces (equipped specifically with the euclidean dot product) to that of abstract inner product spaces over an arbitrary field $\mathbb{F}$.

Example 7.20. As a compelling application of some of the developments thus far, we give a proof that the row rank of a matrix equals its column rank that is quite different (and shorter) than the proof given in §3.6. Let $A = [a_{ij}] \in \operatorname{Mat}_{m,n}(\mathbb{R})$. Define the linear transformation $L : \mathbb{R}^n \to \mathbb{R}^m$ by $L(x) = Ax$, and let $a_1, \ldots, a_m \in \mathbb{R}^n$ be such that $a_1^\top, \ldots, a_m^\top$ are the row vectors of $A$. Then $\operatorname{Nul}(L)$ is a subspace of the inner product space $(\mathbb{R}^n, \cdot)$ by Proposition 4.14, and so too is $\operatorname{Row}(A) = \operatorname{Span}\{a_1, \ldots, a_m\}$. (We cleave here to the convention that elements of $\mathbb{R}^n$ are column vectors, i.e. $n \times 1$ column matrices.) Now,
$$x \in \operatorname{Nul}(L) \iff Ax = 0 \iff \begin{bmatrix} a_1^\top x \\ \vdots \\ a_m^\top x \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} \iff x \cdot a_1 = \cdots = x \cdot a_m = 0 \iff x \perp a_1, \ldots, x \perp a_m,$$
so that
$$\operatorname{Nul}(L) = \{x \in \mathbb{R}^n : x \perp a_i \text{ for all } 1 \leq i \leq m\},$$
and by Proposition 7.7 we have $\operatorname{Nul}(L) = \operatorname{Row}(A)^\perp$. By Theorem 7.19
$$\dim(\operatorname{Row}(A)) + \dim(\operatorname{Row}(A)^\perp) = \dim(\mathbb{R}^n),$$
whence
$$\text{row-rank}(A) + \dim(\operatorname{Nul}(L)) = n$$
and finally
$$\text{row-rank}(A) = n - \dim(\operatorname{Nul}(L)).$$
Next, by Theorem 4.37,
$$\dim(\operatorname{Nul}(L)) + \dim(\operatorname{Img}(L)) = \dim(\mathbb{R}^n),$$
and since $\operatorname{Img}(L) = \operatorname{Col}(A)$ by Proposition 4.35, it follows that
$$n = \dim(\mathbb{R}^n) = \dim(\operatorname{Nul}(L)) + \dim(\operatorname{Col}(A)) = \dim(\operatorname{Nul}(L)) + \text{col-rank}(A),$$
and finally
$$\text{col-rank}(A) = n - \dim(\operatorname{Nul}(L)).$$
Therefore
$$\text{row-rank}(A) = \text{col-rank}(A) = n - \dim(\operatorname{Nul}(L)),$$
and we're done.
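A numerical illustration of this example (added here; the matrix is randomly generated) uses SciPy to compute a basis of $\operatorname{Nul}(L)$ and confirms both $\operatorname{Nul}(L) = \operatorname{Row}(A)^\perp$ and the equality of the ranks.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)

N = null_space(A)                  # columns form a basis of Nul(L)
print(np.allclose(A @ N, 0))       # every row of A is orthogonal to Nul(L)

n = A.shape[1]
# row rank, column rank, and n - dim Nul(L) all agree.
print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(A.T),
      n - N.shape[1])
```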

Proposition 7.21. If $W$ is a subspace of an inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$ over $\mathbb{F}$, then $V = W \oplus W^\perp$.

Proof. The situation is trivial in the cases when $W = \{0\}$ or $W = V$, so suppose $W$ is a subspace such that $W \neq \{0\}, V$. Let $\dim(W) = m$ and $\dim(V) = n$, and note that $0 < m < n$. Since $(W, \langle\,\cdot\,,\cdot\,\rangle)$ is a nontrivial inner product space, by Corollary 7.15 it has an orthogonal basis $\{w_1, \ldots, w_m\}$. By Theorem 7.14 there exist $w_{m+1}, \ldots, w_n \in V$ such that $B = \{w_1, \ldots, w_n\}$ is an orthogonal basis for $V$, and $W^\perp = \operatorname{Span}\{w_{m+1}, \ldots, w_n\}$ by Proposition 7.18.

Let $v \in V$. Since $\operatorname{Span}(B) = V$, there exist scalars $c_1, \ldots, c_n \in \mathbb{F}$ such that
$$v = \sum_{k=1}^n c_k w_k = \sum_{k=1}^m c_k w_k + \sum_{k=m+1}^n c_k w_k,$$
and so $v \in W + W^\perp$. Hence $V \subseteq W + W^\perp$, and since the reverse containment is obvious we have $V = W + W^\perp$.

Suppose that $v \in W \cap W^\perp$. From $v \in W^\perp$ we have $v \perp w$ for all $w \in W$, and since $v \in W$ it follows that $v \perp v$. Thus $\langle v, v \rangle = 0$, and so $v = 0$ by Theorem 7.2(4). Hence $W \cap W^\perp \subseteq \{0\}$, and since the reverse containment is obvious we have $W \cap W^\perp = \{0\}$.

Since $V = W + W^\perp$ and $W \cap W^\perp = \{0\}$, we conclude that $V = W \oplus W^\perp$.

Corollary 7.22. If $W$ is a subspace of an inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$ over $\mathbb{F}$, then
$$\dim(W \oplus W^\perp) = \dim(W) + \dim(W^\perp).$$

Proof. By Proposition 7.21 we have $V = W \oplus W^\perp$, and thus $\dim(V) = \dim(W \oplus W^\perp)$. The conclusion then follows from Theorem 7.19.

The corollary could also be proved quite easily by utilizing Proposition 4.36, which applies to abstract vector spaces over $\mathbb{F}$.

For the following theorem we take all vectors in $\mathbb{F}^n$ to be, as ever, $n \times 1$ column matrices (i.e. column vectors).

Theorem 7.23. Let $(V, \langle\,\cdot\,,\cdot\,\rangle)$ be an inner product space over $\mathbb{F}$. If $O = (w_1, \ldots, w_n)$ is an ordered orthonormal basis for $V$, then
$$\langle u, v \rangle = [v]_O^* [u]_O \tag{17}$$
for all $u, v \in V$.

Proof. Let $u, v \in V$, so there exist $a_1, \ldots, a_n, b_1, \ldots, b_n \in \mathbb{F}$ such that
$$u = a_1 w_1 + \cdots + a_n w_n \quad \text{and} \quad v = b_1 w_1 + \cdots + b_n w_n,$$
and hence
$$[u]_O = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \quad \text{and} \quad [v]_O = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}.$$

Now, because $O$ is orthonormal, $\langle w_i, w_j \rangle = 0$ whenever $i \neq j$, and $\langle w_i, w_i \rangle = \|w_i\|^2 = 1$ for all $i = 1, \ldots, n$. By Definition 7.1 and Theorem 7.2 we obtain
$$\langle u, v \rangle = \Big\langle \sum_{i=1}^n a_i w_i, \sum_{j=1}^n b_j w_j \Big\rangle = \sum_{i=1}^n \sum_{j=1}^n a_i \bar{b}_j \langle w_i, w_j \rangle = \sum_{i=1}^n a_i \bar{b}_i \langle w_i, w_i \rangle = \sum_{i=1}^n a_i \bar{b}_i = [v]_O^* [u]_O,$$
as desired.

In the case when $\mathbb{F} = \mathbb{R}$ we find that $[v]_O^* = [v]_O^\top$, since the components of $[v]_O$ are all real numbers, and thus we readily obtain the following.

Corollary 7.24. If $(V, \langle\,\cdot\,,\cdot\,\rangle)$ is an inner product space over $\mathbb{R}$, and $O = (w_1, \ldots, w_n)$ is an ordered orthonormal basis for $V$, then
$$\langle u, v \rangle = [v]_O^\top [u]_O$$
for all $u, v \in V$.

In Theorem 7.23, let $\varphi_O : V \to \mathbb{F}^n$ denote the $O$-coordinate map, so that
$$\varphi_O(v) = [v]_O$$
for all $v \in V$, and then (17) may be written as
$$\langle u, v \rangle = \varphi_O(u) \cdot \varphi_O(v),$$
recalling Definition 7.3. Now, if $\|\cdot\|_V$ denotes the norm in $V$ and $\|\cdot\|_{\mathbb{F}^n}$ the norm in $\mathbb{F}^n$, then
$$\|v\|_V = \sqrt{\langle v, v \rangle} = \sqrt{\varphi_O(v) \cdot \varphi_O(v)} = \|\varphi_O(v)\|_{\mathbb{F}^n} \tag{18}$$
for all $v \in V$. In fact, if $d_V$ and $d_{\mathbb{F}^n}$ are the distance functions on $V$ and $\mathbb{F}^n$, respectively, so that for any $u, v \in V$ and $x, y \in \mathbb{F}^n$ we have
$$d_V(u, v) = \|u - v\|_V \quad \text{and} \quad d_{\mathbb{F}^n}(x, y) = \|x - y\|_{\mathbb{F}^n},$$
then it follows from (18) that
$$d_V(u, v) = \|u - v\|_V = \|\varphi_O(u - v)\|_{\mathbb{F}^n} = \|\varphi_O(u) - \varphi_O(v)\|_{\mathbb{F}^n} = d_{\mathbb{F}^n}(\varphi_O(u), \varphi_O(v)), \tag{19}$$
recalling that $\varphi_O$ is an isomorphism. Equation (18) exhibits a property of the transformation $\varphi_O$ that is called norm-preserving, and equation (19) exhibits the distance-preserving property of $\varphi_O$.

Definition 7.25. Let $(U, \langle\,\cdot\,,\cdot\,\rangle_U)$ and $(V, \langle\,\cdot\,,\cdot\,\rangle_V)$ be inner product spaces, and let $\|\cdot\|_U$ and $\|\cdot\|_V$ denote the norms on $U$ and $V$ induced by the inner products $\langle\,\cdot\,,\cdot\,\rangle_U$ and $\langle\,\cdot\,,\cdot\,\rangle_V$, respectively. A linear transformation $L : U \to V$ is an isometry if it is norm-preserving; that is,
$$\|u\|_U = \|L(u)\|_V$$
for all $u \in U$. If $L$ is also an isomorphism, then $(U, \langle\,\cdot\,,\cdot\,\rangle_U)$ and $(V, \langle\,\cdot\,,\cdot\,\rangle_V)$ are said to be isometrically isomorphic.

Thus we see that the transformation $\varphi_O$ is an isometry as well as an isomorphism, where it must not be forgotten that $O$ represents an orthonormal basis for an inner product space $(V, \langle\,\cdot\,,\cdot\,\rangle)$ over $\mathbb{F}$ of dimension $n \geq 1$. By Corollary 7.15 every such inner product space admits an orthonormal basis, and so must be isometrically isomorphic to $(\mathbb{F}^n, \cdot)$.
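The norm-preserving property of the coordinate map is easy to see numerically. The sketch below (added here; the basis and test vectors are arbitrary choices) builds an orthonormal basis $O$ of $(\mathbb{R}^3, \cdot)$ and checks (17) and (18) for a pair of vectors.

```python
import numpy as np

# Columns of O: three mutually orthogonal vectors, normalized to length 1.
O = np.array([[1, 1, 1],
              [-1, 1, 0],
              [1, 1, -2]], dtype=float).T
O /= np.linalg.norm(O, axis=0)

u = np.array([2.0, -1.0, 3.0])
v = np.array([0.5, 4.0, -2.0])

cu = np.linalg.solve(O, u)        # [u]_O, the O-coordinates of u
cv = np.linalg.solve(O, v)        # [v]_O

print(np.isclose(u @ v, cu @ cv))                          # equation (17)
print(np.isclose(np.linalg.norm(u), np.linalg.norm(cu)))   # equation (18)
```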

7.4 Quadratic Forms

In this section elements of the vector space $\mathbb{F}^n$ will be represented by column matrices, which is to say any $x \in \mathbb{F}^n$ is to be regarded as an $n \times 1$ matrix:
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$
In particular if $x, y \in \mathbb{R}^n$, the euclidean dot product $x \cdot y$ is given as
$$x \cdot y = x^\top y,$$
and the euclidean norm $\|x\|$ is given as
$$\|x\| = \sqrt{x \cdot x} = \sqrt{x^\top x}. \tag{20}$$
Strictly speaking, since $x^\top$ is a $1 \times n$ matrix and $y$ is an $n \times 1$ matrix, the product $x^\top y$ is a $1 \times 1$ matrix. However, throughout this section as in the past, we identify a $1 \times 1$ matrix with its sole scalar-valued entry via the natural isomorphism $[c] \mapsto c$.

Definition 7.26. Let $A \in \operatorname{Mat}_n(\mathbb{F})$. The quadratic form associated with $A$ is the function $Q_A : \mathbb{F}^n \to \mathbb{F}$ given by
$$Q_A(x) = x^\top A x$$
for all $x \in \mathbb{F}^n$.

Again, the natural isomorphism $[c] \mapsto c$ is implicitly built into the definition of $Q_A$, so that $x^\top A x$ is regarded as a scalar. If $A = [a_{ij}]_n$ and $x = [x_1 \cdots x_n]^\top$, it is routine to verify that
$$Q_A(x) = \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j. \tag{21}$$

Example 7.27. Let
$$A = \begin{bmatrix} 3 & -1 & 2 \\ -1 & 1 & 4 \\ 2 & 4 & -2 \end{bmatrix} \quad \text{and} \quad x = \begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$
Then
$$Q_A(x) = \begin{bmatrix} x & y & z \end{bmatrix} A \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x & y & z \end{bmatrix} \begin{bmatrix} 3x - y + 2z \\ -x + y + 4z \\ 2x + 4y - 2z \end{bmatrix}$$
$$= x(3x - y + 2z) + y(-x + y + 4z) + z(2x + 4y - 2z) = 3x^2 - 2xy + 4xz + y^2 + 8yz - 2z^2$$
is the quadratic form associated with $A$.
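The sketch below (an added illustration) evaluates $Q_A$ for the matrix of Example 7.27 and confirms the expansion just computed.

```python
import numpy as np

A = np.array([[3, -1, 2],
              [-1, 1, 4],
              [2, 4, -2]], dtype=float)

def Q(v):
    # Q_A(x) = x^T A x, with the 1x1 result taken as a scalar.
    return v @ A @ v

x, y, z = 1.0, 2.0, -1.0
v = np.array([x, y, z])
print(Q(v))
print(3*x**2 - 2*x*y + 4*x*z + y**2 + 8*y*z - 2*z**2)   # same value
```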

More generally, if
$$A = \begin{bmatrix} a & b & c \\ b & d & e \\ c & e & f \end{bmatrix},$$
then
$$Q_A(x) = ax^2 + 2bxy + 2cxz + dy^2 + 2eyz + fz^2 \tag{22}$$
is the associated quadratic form.

For $n \in \mathbb{N}$ define $S^n$ to be the set of all unit vectors in the vector space $\mathbb{R}^{n+1}$ with respect to the euclidean dot product:
$$S^n = \{x \in \mathbb{R}^{n+1} : \|x\| = 1\} = \left\{ \begin{bmatrix} x_1 \\ \vdots \\ x_{n+1} \end{bmatrix} \in \mathbb{R}^{n+1} : \sum_{k=1}^{n+1} x_k^2 = 1 \right\}.$$
The set $S^n$ may be referred to as the $n$-sphere or the ($n$-dimensional) unit sphere. (It makes no difference whether we regard the elements of $S^n$ as vectors or points; for consistency's sake we keep on with the vector interpretation here, but later will make occasional use of the point interpretation to aid intuitive understanding.) If $n = 1$ we obtain a circle centered at $(0, 0)$,
$$S^1 = \left\{ \begin{bmatrix} x \\ y \end{bmatrix} \in \mathbb{R}^2 : x^2 + y^2 = 1 \right\},$$
and if $n = 2$ we obtain a sphere with center $(0, 0, 0)$,
$$S^2 = \left\{ \begin{bmatrix} x \\ y \\ z \end{bmatrix} \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1 \right\}.$$

The next proposition establishes an important property of the quadratic forms of symmetric matrices that have, in particular, real-valued entries. It depends on a fact from analysis, not proven here, that if $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$ is a continuous function and $S$ is a closed and bounded set, then $f$ attains a maximum value on $S$. That is, there exists some $x_0 \in S$ such that
$$f(x_0) = \max\{f(x) : x \in S\}.$$
Certainly $S^{n-1}$, as a subset of $\mathbb{R}^n$, is closed and bounded with respect to the euclidean dot product. Also a cursory examination of (21) should make it clear that, for any $A \in \operatorname{Mat}_n(\mathbb{R})$, the function $Q_A$ is a polynomial function. Hence $Q_A$ is continuous on $\mathbb{R}^n$ with respect to the euclidean dot product, which easily implies that $Q_A$ is continuous on $S^{n-1} \subseteq \mathbb{R}^n$.

Proposition 7.28. Let $A \in \operatorname{Sym}_n(\mathbb{R})$. Suppose $v, w \in S^{n-1}$ are such that
$$Q_A(v) = \max\{Q_A(x) : x \in S^{n-1}\} \quad \text{and} \quad Q_A(w) = \min\{Q_A(x) : x \in S^{n-1}\}.$$
Then $v$ and $w$ are eigenvectors of $A$.

Proof. Define $U \subseteq \mathbb{R}^n$ to be the set
$$U = \{u \in \mathbb{R}^n : u \cdot v = 0\}.$$
Since $\|v\| = 1$ implies that $v \neq 0$, by Example 4.37 we find that $U$ is a subspace of $\mathbb{R}^n$ and $\dim(U) = n - 1$. By Proposition 4.44
$$\dim(U^\perp) = \dim(\mathbb{R}^n) - \dim(U) = n - (n - 1) = 1,$$
and since clearly $v \in U^\perp$ and $\{v\}$ is a linearly independent set, it follows by Theorem 3.52(1) that $\{v\}$ is a basis for $U^\perp$. Hence
$$U^\perp = \operatorname{Span}(v) = \{cv : c \in \mathbb{R}\}.$$
Fix $u \in U$ such that $\|u\| = 1$, and define the vector-valued function $f : \mathbb{R} \to \mathbb{R}^n$ by
$$f(t) = \sin(t)u + \cos(t)v.$$
Since $v \cdot v = \|v\|^2 = 1$, $u \cdot u = \|u\|^2 = 1$, and $u \cdot v = 0$, we find that
$$\|f(t)\|^2 = f(t) \cdot f(t) = (\sin(t)u + \cos(t)v) \cdot (\sin(t)u + \cos(t)v) = \sin^2(t)\, u \cdot u + 2\cos(t)\sin(t)\, u \cdot v + \cos^2(t)\, v \cdot v = \sin^2(t) + \cos^2(t) = 1,$$
and so $f(t) \in S^{n-1}$ for all $t \in \mathbb{R}$. That is, the function $f$ can be regarded as defining a curve on the unit sphere $S^{n-1}$, and $f(0) = v$ shows that the curve passes through the point $v$. Letting
$$u = \begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix} \quad \text{and} \quad v = \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix},$$
we have
$$f(t) = \begin{bmatrix} u_1\sin(t) + v_1\cos(t) \\ \vdots \\ u_n\sin(t) + v_n\cos(t) \end{bmatrix},$$
and so by definition
$$f'(t) = \begin{bmatrix} u_1\cos(t) - v_1\sin(t) \\ \vdots \\ u_n\cos(t) - v_n\sin(t) \end{bmatrix} = \cos(t)u - \sin(t)v.$$
Now, letting $g = Q_A \circ f$ and defining the function $Af$ by $(Af)(t) = Af(t)$ for $t \in \mathbb{R}$, we have
$$g(t) = Q_A(f(t)) = f(t)^\top Af(t) = f(t) \cdot Af(t) = f(t) \cdot (Af)(t).$$
By the Product Rule of dot product differentiation,
$$g'(t) = f'(t) \cdot (Af)(t) + f(t) \cdot (Af)'(t) = f'(t) \cdot Af(t) + f(t) \cdot Af'(t). \tag{23}$$

Since $f(t) \cdot Af'(t)$ is a scalar it equals its own transpose, and so by Proposition 2.13 and the fact that $A^\top = A$ we obtain
$$f(t) \cdot Af'(t) = \big(f(t)^\top Af'(t)\big)^\top = f'(t)^\top A^\top f(t) = f'(t) \cdot Af(t).$$
Combining this result with (23) yields
$$g'(t) = 2f'(t) \cdot Af(t). \tag{24}$$
Because the function $f$ maps from $\mathbb{R}$ to $S^{n-1}$, the function $Q_A : S^{n-1} \to \mathbb{R}$ has a maximum at $v \in S^{n-1}$, and $g(0) = Q_A(f(0)) = Q_A(v)$, it follows that the function $g : \mathbb{R} \to \mathbb{R}$ has a local maximum at $t = 0$. Thus, since $g'(0)$ exists, it further follows by Fermat's Theorem in §4.1 of the Calculus Notes that $g'(0) = 0$. From (24) we have
$$u \cdot Av = f'(0) \cdot Af(0) = \tfrac{1}{2}g'(0) = 0,$$
and since $u \in U$ is arbitrary we conclude that $Av \perp u$ for all $u \in U$. Therefore
$$Av \in U^\perp = \{x \in \mathbb{R}^n : x \perp u \text{ for all } u \in U\} = \operatorname{Span}(v),$$
and so there must exist some $\lambda \in \mathbb{R}$ such that $Av = \lambda v$. Since $v \in \mathbb{R}^n$ is nonzero, we conclude that $v$ is an eigenvector of $A$. The proof that $w$ is also an eigenvector of $A$ is very much the same argument and so is omitted.

Proposition 7.29. If $A \in \operatorname{Sym}_n(\mathbb{R})$, then the maximum value of $Q_A$ on $S^{n-1}$ is equal to the largest real eigenvalue of $A$, and the minimum value equals the smallest real eigenvalue.

Proof. By Proposition 7.28 and the details of its proof, we know that $Q_A : S^{n-1} \to \mathbb{R}$ has a maximum at some $v \in S^{n-1}$, that $v$ is an eigenvector of $A$, and that the corresponding eigenvalue $\lambda$ is a real number. Now, recalling (20) and noting that $v \in S^{n-1}$ implies $\|v\| = 1$, we have
$$Q_A(v) = v^\top Av = v \cdot \lambda v = \lambda(v \cdot v) = \lambda\|v\|^2 = \lambda.$$
This demonstrates that the maximum value of $Q_A$ on $S^{n-1}$ equals a real eigenvalue of $A$. Now, if $\mu$ is a real eigenvalue of $A$ and $u$ is a corresponding eigenvector, then $\hat{u} = u/\|u\|$ is also a corresponding eigenvector, since by Proposition 6.6 the eigenspace $E_A(\mu)$ is a subspace of $\mathbb{R}^n$ and hence closed under scalar multiplication. Because $\hat{u} \in S^{n-1}$ and $Q_A$ has a maximum on $S^{n-1}$ at $v$, we have $Q_A(\hat{u}) \leq Q_A(v) = \lambda$. But we also have
$$Q_A(\hat{u}) = \hat{u}^\top A\hat{u} = \hat{u} \cdot \mu\hat{u} = \mu(\hat{u} \cdot \hat{u}) = \mu\|\hat{u}\|^2 = \mu,$$
and hence $\mu \leq \lambda$. This demonstrates that $\lambda$ is the largest real eigenvalue of $A$, and therefore the maximum value of $Q_A$ on $S^{n-1}$ equals the largest real eigenvalue of $A$. The proof that the minimum value of $Q_A$ on $S^{n-1}$ is equal to the smallest real eigenvalue of $A$ is similar and so omitted.

From Proposition 7.28 and the particulars of its proof we immediately obtain the following result.

Corollary 7.30. If $A \in \operatorname{Sym}_n(\mathbb{R})$, then $A$ has a real eigenvalue with a corresponding eigenvector in $\mathbb{R}^n$.

An eigenvector in $\mathbb{R}^n$ is also known as a real eigenvector, so the corollary could be phrased as follows: every real symmetric matrix has a real eigenvalue with a corresponding real eigenvector.

Example 7.31. Find the maximum and minimum value of the function $\varphi : \mathbb{R}^3 \to \mathbb{R}$ given by
$$\varphi(x, y, z) = x^2 - 4xy + 4y^2 - 4yz + z^2 \tag{25}$$
on the unit sphere $S^2$.

Solution. Comparing (25) to equation (22) in Example 7.27, we see we have $a = 1$, $b = -2$, $c = 0$, $d = 4$, $e = -2$, and $f = 1$. Thus the function $\varphi$ is the quadratic form associated with the matrix
$$A = \begin{bmatrix} a & b & c \\ b & d & e \\ c & e & f \end{bmatrix} = \begin{bmatrix} 1 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 1 \end{bmatrix}.$$
The characteristic polynomial of $A$ is
$$P_A(t) = \det(A - tI_3) = \begin{vmatrix} 1-t & -2 & 0 \\ -2 & 4-t & -2 \\ 0 & -2 & 1-t \end{vmatrix}$$
$$= (-1)^{1+1}(1-t)\begin{vmatrix} 4-t & -2 \\ -2 & 1-t \end{vmatrix} + (-1)^{1+2}(-2)\begin{vmatrix} -2 & -2 \\ 0 & 1-t \end{vmatrix} = -t^3 + 6t^2 - t - 4,$$
and so
$$P_A(t) = 0 \iff t^3 - 6t^2 + t + 4 = 0.$$
By the Rational Zeros Theorem of algebra, the only rational numbers that may be zeros of $P_A$ are $\pm 1$, $\pm 2$, and $\pm 4$. It happens that 1 is in fact a zero, and so by the Factor Theorem of algebra $t - 1$ must be a factor of $P_A(t)$. Now,
$$\frac{t^3 - 6t^2 + t + 4}{t - 1} = t^2 - 5t - 4,$$
whence we obtain
$$P_A(t) = 0 \iff (t - 1)(t^2 - 5t - 4) = 0 \iff t = 1 \text{ or } t^2 - 5t - 4 = 0,$$
and so $P_A(t) = 0$ implies that
$$t \in \left\{ 1,\; \frac{5 - \sqrt{41}}{2},\; \frac{5 + \sqrt{41}}{2} \right\}.$$
By Theorem 6.18 the eigenvalues of $A$ are
$$\lambda_1 = \frac{5 + \sqrt{41}}{2}, \qquad \lambda_2 = \frac{5 - \sqrt{41}}{2}, \qquad \lambda_3 = 1,$$
so by Proposition 7.29 the maximum value of $\varphi$ on $S^2$ is $\lambda_1$ (approximately 5.702) and the minimum value is $\lambda_2$ (approximately $-0.702$).

The statement of Corollary 7.30, achieved by means of rather nontrivial results from analysis and topology, can in fact be wholly subsumed by a much stronger theorem whose proof makes use of only the most basic properties of complex numbers. Recall that the standard form for elements of $\mathbb{C}^n$ is $x + iy$, where $x, y \in \mathbb{R}^n$. Recall also that $\bar{\bar{z}} = z$ for any $z \in \mathbb{C}^n$, so in particular $(z^*)^* = z$ and $\bar{z}^* = z^\top$.

Theorem 7.32. All eigenvalues of a real symmetric matrix $A$ are real, and if $x + iy \in \mathbb{C}^n$ is a complex eigenvector corresponding to $\lambda$, then either $x$ or $y$ is a real eigenvector corresponding to $\lambda$.

Proof. Suppose $A \in \operatorname{Sym}_n(\mathbb{R})$, and let $\lambda$ be an eigenvalue of $A$ with corresponding eigenvector $z = x + iy \in \mathbb{C}^n$. So $z \neq 0$ is such that $Az = \lambda z$. Since $A^\top = A$ and $\bar{A} = A$, we obtain
$$\overline{z^* A z} = \overline{z^*}\,\bar{A}\,\bar{z} = z^\top A \bar{z} = \big(z^\top A \bar{z}\big)^\top = \bar{z}^\top A^\top z = z^* A z, \tag{26}$$
where the transpose may be introduced because $z^\top A \bar{z}$ is a $1 \times 1$ matrix and hence symmetric. On the other hand,
$$z^* A z = z^*(\lambda z) = \lambda(z^* z) = \lambda\|z\|^2. \tag{27}$$
Equations (26) and (27), taken together, imply that
$$\bar{\lambda}\|z\|^2 = \lambda\|z\|^2,$$
where $\|z\| \neq 0$ since $z \neq 0$, and so we obtain $\bar{\lambda} = \lambda$. Therefore $\lambda$ is real.

Now,
$$A(x + iy) = \lambda(x + iy) \iff Ax + iAy = \lambda x + i\lambda y,$$
and since the entries of $A$ are real, $\lambda$ is real, and $x, y \in \mathbb{R}^n$, it follows that $Ax = \lambda x$ and $Ay = \lambda y$. We observed earlier that $x + iy \neq 0$, so either $x \neq 0$ or $y \neq 0$. Therefore either $x$ or $y$ is a real eigenvector corresponding to $\lambda$.
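To close, an added numerical sketch checks Example 7.31 and Theorem 7.32 at once: the eigenvalues of the symmetric matrix $A$ are real, the extremes match $(5 \pm \sqrt{41})/2$, and sampled values of $Q_A$ on $S^2$ stay between them, as Proposition 7.29 predicts.

```python
import numpy as np

A = np.array([[1, -2, 0],
              [-2, 4, -2],
              [0, -2, 1]], dtype=float)

# eigvalsh is for symmetric matrices; it returns real eigenvalues, ascending.
lam = np.linalg.eigvalsh(A)
print(lam)                                       # approx [-0.702, 1.0, 5.702]
print((5 - np.sqrt(41)) / 2, (5 + np.sqrt(41)) / 2)

# Q_A on random unit vectors stays within [min eigenvalue, max eigenvalue].
rng = np.random.default_rng(3)
for _ in range(1000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)
    q = x @ A @ x
    assert lam[0] - 1e-9 <= q <= lam[-1] + 1e-9
print("all sampled values of Q_A lie between the extreme eigenvalues")
```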


MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t)

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t) Solutions HW 9.4.2 Write the given system in matrix form x = Ax + f r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + We write this as ( ) r (t) θ (t) = ( ) ( ) 2 r(t) θ(t) + ( ) sin(t) 9.4.4 Write the given system

More information

Finite Dimensional Hilbert Spaces and Linear Inverse Problems

Finite Dimensional Hilbert Spaces and Linear Inverse Problems Finite Dimensional Hilbert Spaces and Linear Inverse Problems ECE 174 Lecture Supplement Spring 2009 Ken Kreutz-Delgado Electrical and Computer Engineering Jacobs School of Engineering University of California,

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

4 MT210 Notebook 4 3. 4.1 Eigenvalues and Eigenvectors... 3. 4.1.1 Definitions; Graphical Illustrations... 3

4 MT210 Notebook 4 3. 4.1 Eigenvalues and Eigenvectors... 3. 4.1.1 Definitions; Graphical Illustrations... 3 MT Notebook Fall / prepared by Professor Jenny Baglivo c Copyright 9 by Jenny A. Baglivo. All Rights Reserved. Contents MT Notebook. Eigenvalues and Eigenvectors................................... Definitions;

More information

Let H and J be as in the above lemma. The result of the lemma shows that the integral

Let H and J be as in the above lemma. The result of the lemma shows that the integral Let and be as in the above lemma. The result of the lemma shows that the integral ( f(x, y)dy) dx is well defined; we denote it by f(x, y)dydx. By symmetry, also the integral ( f(x, y)dx) dy is well defined;

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

Lecture 5 Principal Minors and the Hessian

Lecture 5 Principal Minors and the Hessian Lecture 5 Principal Minors and the Hessian Eivind Eriksen BI Norwegian School of Management Department of Economics October 01, 2010 Eivind Eriksen (BI Dept of Economics) Lecture 5 Principal Minors and

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Elementary Linear Algebra

Elementary Linear Algebra Elementary Linear Algebra Kuttler January, Saylor URL: http://wwwsaylororg/courses/ma/ Saylor URL: http://wwwsaylororg/courses/ma/ Contents Some Prerequisite Topics Sets And Set Notation Functions Graphs

More information

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i. Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

More information

1 Norms and Vector Spaces

1 Norms and Vector Spaces 008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

Mathematical Methods of Engineering Analysis

Mathematical Methods of Engineering Analysis Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................

More information

Quotient Rings and Field Extensions

Quotient Rings and Field Extensions Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

More information

[1] Diagonal factorization

[1] Diagonal factorization 8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

More information

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices. Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

PROJECTIVE GEOMETRY. b3 course 2003. Nigel Hitchin

PROJECTIVE GEOMETRY. b3 course 2003. Nigel Hitchin PROJECTIVE GEOMETRY b3 course 2003 Nigel Hitchin hitchin@maths.ox.ac.uk 1 1 Introduction This is a course on projective geometry. Probably your idea of geometry in the past has been based on triangles

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Scalar Valued Functions of Several Variables; the Gradient Vector

Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions vector valued function of n variables: Let us consider a scalar (i.e., numerical, rather than y = φ(x = φ(x 1,

More information

WHEN DOES A CROSS PRODUCT ON R n EXIST?

WHEN DOES A CROSS PRODUCT ON R n EXIST? WHEN DOES A CROSS PRODUCT ON R n EXIST? PETER F. MCLOUGHLIN It is probably safe to say that just about everyone reading this article is familiar with the cross product and the dot product. However, what

More information

4: SINGLE-PERIOD MARKET MODELS

4: SINGLE-PERIOD MARKET MODELS 4: SINGLE-PERIOD MARKET MODELS Ben Goldys and Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2015 B. Goldys and M. Rutkowski (USydney) Slides 4: Single-Period Market

More information

DECOMPOSING SL 2 (R)

DECOMPOSING SL 2 (R) DECOMPOSING SL 2 R KEITH CONRAD Introduction The group SL 2 R is not easy to visualize: it naturally lies in M 2 R, which is 4- dimensional the entries of a variable 2 2 real matrix are 4 free parameters

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information