Section 6.1: Inner Products and Norms


 Neal Paul
Definition. Let $V$ be a vector space over $F \in \{\mathbb{R}, \mathbb{C}\}$. An inner product on $V$ is a function that assigns, to every ordered pair of vectors $x$ and $y$ in $V$, a scalar in $F$, denoted $\langle x, y\rangle$, such that for all $x$, $y$, and $z$ in $V$ and all $c$ in $F$, the following hold:
1. $\langle x + z, y\rangle = \langle x, y\rangle + \langle z, y\rangle$
2. $\langle cx, y\rangle = c\langle x, y\rangle$
3. $\langle x, y\rangle = \overline{\langle y, x\rangle}$, where the bar denotes complex conjugation.
4. $\langle x, x\rangle > 0$ if $x \neq 0$.

Note that if $z$ is a complex number, then the statement $z \geq 0$ means that $z$ is real and nonnegative. Notice that if $F = \mathbb{R}$, (3) is just $\langle x, y\rangle = \langle y, x\rangle$.

Definition. Let $A \in M_{m \times n}(F)$. We define the conjugate transpose or adjoint of $A$ to be the $n \times m$ matrix $A^*$ such that $(A^*)_{i,j} = \overline{A_{j,i}}$ for all $i, j$.

Theorem 6.1. Let $V$ be an inner product space. Then for $x, y, z \in V$ and $c \in F$,
1. $\langle x, y + z\rangle = \langle x, y\rangle + \langle x, z\rangle$
2. $\langle x, cy\rangle = \bar{c}\langle x, y\rangle$
3. $\langle x, 0\rangle = \langle 0, x\rangle = 0$
4. $\langle x, x\rangle = 0$ if and only if $x = 0$
5. If $\langle x, y\rangle = \langle x, z\rangle$ for all $x \in V$, then $y = z$.

Proof. (Also see handout by Dan Hadley.)
1. $\langle x, y + z\rangle = \overline{\langle y + z, x\rangle} = \overline{\langle y, x\rangle + \langle z, x\rangle} = \overline{\langle y, x\rangle} + \overline{\langle z, x\rangle} = \langle x, y\rangle + \langle x, z\rangle$
2. $\langle x, cy\rangle = \overline{\langle cy, x\rangle} = \overline{c\langle y, x\rangle} = \bar{c}\,\overline{\langle y, x\rangle} = \bar{c}\langle x, y\rangle$
3. $\langle x, 0\rangle = \langle x, 0x\rangle = \bar{0}\langle x, x\rangle = 0$, and $\langle 0, x\rangle = \langle 0x, x\rangle = 0\langle x, x\rangle = 0$.
4. If $x = 0$ then by (3) of this theorem, $\langle x, x\rangle = 0$. If $x \neq 0$ then by (4) of the definition, $\langle x, x\rangle > 0$.
5. Suppose $\langle x, y\rangle = \langle x, z\rangle$ for all $x \in V$. Then $\langle x, y - z\rangle = 0$ for all $x \in V$. Taking $x = y - z$ gives $\langle y - z, y - z\rangle = 0$, so $y - z = 0$ by (4). So, $y = z$.

Definition. Let $V$ be an inner product space. For $x \in V$, we define the norm or length of $x$ by $\|x\| = \sqrt{\langle x, x\rangle}$.

Theorem 6.2. Let $V$ be an inner product space over $F$. Then for all $x, y \in V$ and $c \in F$, the following are true.
1. $\|cx\| = |c|\,\|x\|$.
2. $\|x\| = 0$ if and only if $x = 0$. In any case, $\|x\| \geq 0$.
3. (Cauchy–Schwarz Inequality) $|\langle x, y\rangle| \leq \|x\|\,\|y\|$.
4. (Triangle Inequality) $\|x + y\| \leq \|x\| + \|y\|$.
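Before the proof, a quick numerical sanity check of the axioms and of the Cauchy–Schwarz inequality (my own addition, not from the notes), using the standard inner product $\langle x, y\rangle = \sum_i x_i\bar{y_i}$ on $\mathbb{C}^3$. Note that numpy's `vdot` conjugates its first argument, so the arguments are swapped:

```python
import numpy as np

def inner(x, y):
    # Standard inner product on C^n: <x, y> = sum_i x_i * conj(y_i).
    # np.vdot conjugates its FIRST argument, hence the swapped order.
    return np.vdot(y, x)

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
c = 2.0 - 1.5j

# Axioms (1)-(3) of the definition:
additive = np.isclose(inner(x + z, y), inner(x, y) + inner(z, y))
homogeneous = np.isclose(inner(c * x, y), c * inner(x, y))
conj_symmetric = np.isclose(inner(x, y), np.conj(inner(y, x)))

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
cauchy_schwarz = abs(inner(x, y)) <= np.sqrt(inner(x, x).real * inner(y, y).real)
```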
Proof. (Also see notes by Chris Lynd.)
1. $\|cx\|^2 = \langle cx, cx\rangle = c\bar{c}\langle x, x\rangle = |c|^2\|x\|^2$, so $\|cx\| = |c|\,\|x\|$.
2. $\|x\| = 0$ iff $\sqrt{\langle x, x\rangle} = 0$ iff $\langle x, x\rangle = 0$ iff $x = 0$. If $x \neq 0$ then $\langle x, x\rangle > 0$ and so $\sqrt{\langle x, x\rangle} > 0$.
3. If $y = 0$ the result is true. So assume $y \neq 0$. Finished in class.
4. Done in class.

Definition. Let $V$ be an inner product space. $x, y \in V$ are orthogonal (or perpendicular) if $\langle x, y\rangle = 0$. A subset $S \subseteq V$ is called orthogonal if for all distinct $x, y \in S$, $\langle x, y\rangle = 0$. $x \in V$ is a unit vector if $\|x\| = 1$, and a subset $S \subseteq V$ is orthonormal if $S$ is orthogonal and every $x \in S$ satisfies $\|x\| = 1$.

Section 6.2

Definition. Let $V$ be an inner product space. Then $S \subseteq V$ is an orthonormal basis of $V$ if it is an ordered basis and orthonormal.

Theorem 6.3. Let $V$ be an inner product space and $S = \{v_1, v_2, \ldots, v_k\}$ be an orthogonal subset of $V$ such that $v_i \neq 0$ for all $i$. If $y \in \operatorname{Span}(S)$, then
$$y = \sum_{i=1}^{k} \frac{\langle y, v_i\rangle}{\|v_i\|^2}\, v_i.$$
Proof. Let $y = \sum_{i=1}^{k} a_i v_i$. Then
$$\langle y, v_j\rangle = \Big\langle \sum_{i=1}^{k} a_i v_i,\, v_j\Big\rangle = \sum_{i=1}^{k} a_i \langle v_i, v_j\rangle = a_j\langle v_j, v_j\rangle = a_j\|v_j\|^2.$$
So $a_j = \dfrac{\langle y, v_j\rangle}{\|v_j\|^2}$.

Corollary 1. If $S$ also is orthonormal, then $y = \sum_{i=1}^{k} \langle y, v_i\rangle v_i$.

Corollary 2. If $S$ is orthogonal and all vectors in $S$ are nonzero, then $S$ is linearly independent.
Proof. Suppose $\sum_{i=1}^{k} a_i v_i = 0$. Then for all $j$, $a_j = \dfrac{\langle 0, v_j\rangle}{\|v_j\|^2} = 0$. So $S$ is linearly independent.

Theorem 6.4. (Gram–Schmidt) Let $V$ be an inner product space and $S = \{w_1, w_2, \ldots, w_n\} \subseteq V$ be a linearly independent set. Define $S' = \{v_1, v_2, \ldots, v_n\}$, where $v_1 = w_1$ and for $2 \leq k \leq n$,
$$v_k = w_k - \sum_{i=1}^{k-1} \frac{\langle w_k, v_i\rangle}{\|v_i\|^2}\, v_i.$$
Then $S'$ is an orthogonal set of nonzero vectors and $\operatorname{Span}(S') = \operatorname{Span}(S)$.
Proof. By induction on $n$. Base case, $n = 1$: $S = \{w_1\}$, $S' = \{v_1\} = \{w_1\}$. Let $n > 1$ and $S_n = \{w_1, w_2, \ldots, w_n\}$. For $S_{n-1} = \{w_1, \ldots, w_{n-1}\}$ and $S'_{n-1} = \{v_1, \ldots, v_{n-1}\}$, we have by induction that $\operatorname{span}(S'_{n-1}) = \operatorname{span}(S_{n-1})$ and $S'_{n-1}$ is orthogonal. To show $S'_n$ is orthogonal, we just have to show that for all $j \in [n-1]$, $\langle v_n, v_j\rangle = 0$. Since
$$v_n = w_n - \sum_{k=1}^{n-1} \frac{\langle w_n, v_k\rangle}{\|v_k\|^2}\, v_k,$$
we have
$$\langle v_n, v_j\rangle = \langle w_n, v_j\rangle - \sum_{k=1}^{n-1} \frac{\langle w_n, v_k\rangle}{\|v_k\|^2}\,\langle v_k, v_j\rangle = \langle w_n, v_j\rangle - \frac{\langle w_n, v_j\rangle}{\|v_j\|^2}\,\langle v_j, v_j\rangle = 0.$$
We now show $\operatorname{span}(S_n) = \operatorname{span}(S'_n)$. We know $\dim(\operatorname{span}(S_n)) = n$, since $S_n$ is linearly independent. We know $\dim(\operatorname{span}(S'_n)) = n$, by Corollary 2 to Theorem 6.3. We know $\operatorname{span}(S_{n-1}) = \operatorname{span}(S'_{n-1})$. So, we just need to show $v_n \in \operatorname{span}(S_n)$. Now
$$v_n = w_n - \sum_{j=1}^{n-1} a_j v_j$$
for some constants $a_1, a_2, \ldots, a_{n-1}$. For all $j \in [n-1]$, $v_j \in \operatorname{span}(S_{n-1})$, since $\operatorname{span}(S'_{n-1}) = \operatorname{span}(S_{n-1})$ and $v_j \in S'_{n-1}$. Therefore, $v_n \in \operatorname{span}(S_n)$.

Theorem 6.5. Let $V$ be a nonzero finite dimensional inner product space. Then $V$ has an orthonormal basis $\beta$. Furthermore, if $\beta = \{v_1, v_2, \ldots, v_n\}$ and $x \in V$, then $x = \sum_{i=1}^{n} \langle x, v_i\rangle v_i$.

Proof. Start with a basis of $V$. Apply Gram–Schmidt (Theorem 6.4) to get an orthogonal set $\beta'$. Produce $\beta$ from $\beta'$ by normalizing $\beta'$; that is, multiply each $x \in \beta'$ by $1/\|x\|$. By Corollary 2 to Theorem 6.3, $\beta$ is linearly independent, and since it has $n$ vectors, it must be a basis of $V$. By Corollary 1 to Theorem 6.3, if $x \in V$, then $x = \sum_{i=1}^{n} \langle x, v_i\rangle v_i$.

Corollary 1. Let $V$ be a finite dimensional inner product space with an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$. Let $T$ be a linear operator on $V$, and $A = [T]_\beta$. Then for any $i, j$, $(A)_{i,j} = \langle T(v_j), v_i\rangle$.

Proof. By Theorem 6.5, for $x \in V$, $x = \sum_{i=1}^{n} \langle x, v_i\rangle v_i$. So $T(v_j) = \sum_{i=1}^{n} \langle T(v_j), v_i\rangle v_i$. Hence
$$[T(v_j)]_\beta = \big(\langle T(v_j), v_1\rangle, \langle T(v_j), v_2\rangle, \ldots, \langle T(v_j), v_n\rangle\big)^t.$$
So, $(A)_{i,j} = \langle T(v_j), v_i\rangle$.
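The recursion in Theorem 6.4 translates directly into code. The sketch below is my own illustration (the name `gram_schmidt` is not from the notes), using the standard inner product on $\mathbb{C}^n$:

```python
import numpy as np

def gram_schmidt(ws):
    """Theorem 6.4: v_k = w_k - sum_{i<k} <w_k, v_i>/||v_i||^2 * v_i (no normalization)."""
    vs = []
    for w in ws:
        v = np.asarray(w, dtype=complex).copy()
        for u in vs:
            # <w, u> under the standard inner product; np.vdot conjugates its first arg.
            v = v - (np.vdot(u, w) / np.vdot(u, u)) * u
        vs.append(v)
    return vs

# The resulting v_k are pairwise orthogonal and span the same subspace as the w_k.
vs = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
```

To obtain an orthonormal basis as in Theorem 6.5, divide each `v` by `np.linalg.norm(v)` afterward.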
Definition. Let $S$ be a nonempty subset of an inner product space $V$. We define $S^\perp$ (read "$S$ perp") to be the set of all vectors in $V$ that are orthogonal to every vector in $S$; that is, $S^\perp = \{x \in V : \langle x, y\rangle = 0 \text{ for all } y \in S\}$. $S^\perp$ is called the orthogonal complement of $S$.

Theorem 6.6. Let $W$ be a finite dimensional subspace of an inner product space $V$, and let $y \in V$. Then there exist unique vectors $u \in W$ and $z \in W^\perp$ such that $y = u + z$. Furthermore, if $\{v_1, v_2, \ldots, v_k\}$ is an orthonormal basis for $W$, then
$$u = \sum_{i=1}^{k} \langle y, v_i\rangle v_i.$$
Proof. Let $y \in V$. Let $\{v_1, v_2, \ldots, v_k\}$ be an orthonormal basis for $W$; this exists by Theorem 6.5. Let $u = \sum_{i=1}^{k}\langle y, v_i\rangle v_i$. We will show $y - u \in W^\perp$. It suffices to show that for all $i \in [k]$, $\langle y - u, v_i\rangle = 0$:
$$\langle y - u, v_i\rangle = \Big\langle y - \sum_{j=1}^{k}\langle y, v_j\rangle v_j,\, v_i\Big\rangle = \langle y, v_i\rangle - \sum_{j=1}^{k}\langle y, v_j\rangle\langle v_j, v_i\rangle = \langle y, v_i\rangle - \langle y, v_i\rangle = 0.$$
Thus $y - u \in W^\perp$. Suppose $x \in W \cap W^\perp$. Then $x \in W$ and $x \in W^\perp$, so $\langle x, x\rangle = 0$, and we have that $x = 0$. Suppose $r \in W$ and $s \in W^\perp$ are such that $y = r + s$. Then $r + s = u + z$, so $r - u = z - s$. This shows that both $r - u$ and $z - s$ are in both $W$ and $W^\perp$. But $W \cap W^\perp = \{0\}$, so it must be that $r - u = 0$ and $z - s = 0$, which implies $r = u$ and $z = s$, and we see that the representation of $y$ is unique.

Corollary 1. In the notation of Theorem 6.6, the vector $u$ is the unique vector in $W$ that is closest to $y$. That is, for any $x \in W$, $\|y - x\| \geq \|y - u\|$, with equality if and only if $x = u$.

Proof. Let $y \in V$, $u = \sum_{i=1}^{k}\langle y, v_i\rangle v_i$, and $z = y - u \in W^\perp$. Let $x \in W$. Then $\langle u - x, z\rangle = 0$, since $u - x \in W$ and $z \in W^\perp$. By Exercise 6.1, number 10, if $a$ is orthogonal to $b$, then $\|a + b\|^2 = \|a\|^2 + \|b\|^2$. So we have
$$\|y - x\|^2 = \|(u + z) - x\|^2 = \|(u - x) + z\|^2 = \|u - x\|^2 + \|z\|^2 \geq \|z\|^2 = \|y - u\|^2.$$
Now suppose $\|y - x\| = \|y - u\|$. Then $\|u - x\|^2 + \|z\|^2 = \|z\|^2$, and thus $\|u - x\|^2 = 0$, so $u - x = 0$ and $u = x$.
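Theorem 6.6 and its corollary can be seen concretely in $\mathbb{R}^3$ (my own numerical illustration): project $y$ onto $W$ via the orthonormal-basis formula, and observe that no other vector of $W$ is closer.

```python
import numpy as np

# Theorem 6.6 in R^3: W = span{v1, v2} with v1, v2 orthonormal;
# u = <y, v1> v1 + <y, v2> v2, and z = y - u lies in W-perp.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
y = np.array([3.0, -2.0, 5.0])

u = np.dot(y, v1) * v1 + np.dot(y, v2) * v2   # the closest vector in W to y
z = y - u                                      # the component in W-perp

# Corollary 1: ||y - x|| >= ||y - u|| for any x in W.
x = 1.7 * v1 - 0.4 * v2                        # an arbitrary vector in W
```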
Theorem 6.7. Suppose that $S = \{v_1, v_2, \ldots, v_k\}$ is an orthonormal set in an $n$-dimensional inner product space $V$. Then
(a) $S$ can be extended to an orthonormal basis $\{v_1, \ldots, v_k, v_{k+1}, \ldots, v_n\}$ of $V$.
(b) If $W = \operatorname{Span}(S)$, then $S_1 = \{v_{k+1}, v_{k+2}, \ldots, v_n\}$ is an orthonormal basis for $W^\perp$.
(c) If $W$ is any subspace of $V$, then $\dim(V) = \dim(W) + \dim(W^\perp)$.

Proof. (a) Extend $S$ to an ordered basis $S'' = \{v_1, \ldots, v_k, w_{k+1}, \ldots, w_n\}$. Apply Gram–Schmidt to $S''$; the first $k$ vectors do not change, and the result spans $V$. Then normalize to obtain $\beta = \{v_1, \ldots, v_k, v_{k+1}, \ldots, v_n\}$.
(b) $S_1 = \{v_{k+1}, \ldots, v_n\}$ is an orthonormal set, hence linearly independent. Also, it is a subset of $W^\perp$. It must span $W^\perp$, since if $x \in W^\perp$, then $x = \sum_{i=1}^{n}\langle x, v_i\rangle v_i = \sum_{i=k+1}^{n}\langle x, v_i\rangle v_i$.
(c) $\dim(V) = n = k + (n - k) = \dim(W) + \dim(W^\perp)$.

Section 6.3:

Theorem 6.8. Let $V$ be a finite dimensional inner product space over $F$, and let $g: V \to F$ be a linear functional. Then there exists a unique vector $y \in V$ such that $g(x) = \langle x, y\rangle$ for all $x \in V$.

Proof. Let $\beta = \{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for $V$ and let $y = \sum_{i=1}^{n}\overline{g(v_i)}\,v_i$. Then for $1 \leq j \leq n$,
$$\langle v_j, y\rangle = \Big\langle v_j,\, \sum_{i=1}^{n}\overline{g(v_i)}\,v_i\Big\rangle = \sum_{i=1}^{n} g(v_i)\langle v_j, v_i\rangle = g(v_j)\langle v_j, v_j\rangle = g(v_j),$$
and by linearity we have $g(x) = \langle x, y\rangle$ for all $x \in V$. To show $y$ is unique, suppose $g(x) = \langle x, y'\rangle$ for all $x$. Then $\langle x, y\rangle = \langle x, y'\rangle$ for all $x$. By Theorem 6.1(5), we have $y = y'$.

Example. (2b) Let $V = \mathbb{C}^2$ and $g(z_1, z_2) = z_1 - 2z_2$. Then $V$ is an inner product space with the standard inner product $\langle (x_1, x_2), (y_1, y_2)\rangle = x_1\bar{y_1} + x_2\bar{y_2}$, and $g$ is a linear functional on $V$. Find a vector $y \in V$ such that $g(x) = \langle x, y\rangle$ for all $x \in V$.

Sol: We need to find $(y_1, y_2) \in \mathbb{C}^2$ such that $g(z_1, z_2) = \langle (z_1, z_2), (y_1, y_2)\rangle$ for all $(z_1, z_2) \in \mathbb{C}^2$. That is:
$$z_1 - 2z_2 = z_1\bar{y_1} + z_2\bar{y_2}. \quad (1)$$
Using the standard ordered basis $\{(1, 0), (0, 1)\}$ for $\mathbb{C}^2$, the proof of Theorem 6.8 gives $y = \sum_{i=1}^{n}\overline{g(v_i)}\,v_i$. So,
$$(y_1, y_2) = \overline{g(1,0)}\,(1, 0) + \overline{g(0,1)}\,(0, 1) = \bar{1}\,(1, 0) + \overline{(-2)}\,(0, 1) = (1, 0) - 2(0, 1) = (1, -2).$$
Check (1): LHS $= z_1 - 2z_2$, and for $y_1 = 1$ and $y_2 = -2$ we have RHS $= z_1\bar{1} + z_2\overline{(-2)} = z_1 - 2z_2$.
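Example (2b) is easy to confirm numerically (my own check, not from the notes): with $y = (1, -2)$, the functional $g$ agrees with $\langle z, y\rangle$ for a random complex $z$.

```python
import numpy as np

# Example (2b): g(z1, z2) = z1 - 2*z2 should equal <z, y> for y = (1, -2).
y = np.array([1.0, -2.0])

def g(z):
    return z[0] - 2 * z[1]

rng = np.random.default_rng(1)
z = rng.normal(size=2) + 1j * rng.normal(size=2)

# Standard inner product <z, y> = sum_i z_i * conj(y_i);
# np.vdot conjugates its first argument, hence the order.
lhs = g(z)
rhs = np.vdot(y, z)
```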
Theorem 6.9. Let $V$ be a finite dimensional inner product space and let $T$ be a linear operator on $V$. There exists a unique operator $T^*: V \to V$ such that $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$ for all $x, y \in V$. Furthermore, $T^*$ is linear.

Proof. Let $y \in V$. Define $g: V \to F$ by $g(x) = \langle T(x), y\rangle$ for all $x \in V$.
Claim: $g$ is linear. Indeed, $g(ax + z) = \langle T(ax + z), y\rangle = \langle aT(x) + T(z), y\rangle = a\langle T(x), y\rangle + \langle T(z), y\rangle = a\,g(x) + g(z)$.
By Theorem 6.8, there is a unique $y' \in V$ such that $g(x) = \langle x, y'\rangle$. So we have $\langle T(x), y\rangle = \langle x, y'\rangle$ for all $x \in V$. We define $T^*: V \to V$ by $T^*(y) = y'$, so that $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$ for all $x \in V$.
Claim: $T^*$ is linear. For $cy + z \in V$, $T^*(cy + z)$ equals the unique $y'$ such that $\langle T(x), cy + z\rangle = \langle x, T^*(cy + z)\rangle$. But
$$\langle T(x), cy + z\rangle = \bar{c}\langle T(x), y\rangle + \langle T(x), z\rangle = \bar{c}\langle x, T^*(y)\rangle + \langle x, T^*(z)\rangle = \langle x, cT^*(y) + T^*(z)\rangle.$$
Since $y'$ is unique, $T^*(cy + z) = cT^*(y) + T^*(z)$.
Claim: $T^*$ is unique. Let $U: V \to V$ be linear such that $\langle T(x), y\rangle = \langle x, U(y)\rangle$ for all $x, y \in V$. Then $\langle x, T^*(y)\rangle = \langle x, U(y)\rangle$ for all $x, y \in V$, which implies $T^*(y) = U(y)$ for all $y \in V$. So $T^* = U$.

Definition. $T^*$ is called the adjoint of the linear operator $T$ and is defined to be the unique operator on $V$ satisfying $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$ for all $x, y \in V$. For $A \in M_n(F)$, we have the definition of $A^*$, the adjoint of $A$, from before: the conjugate transpose.

Fact. $\langle x, T(y)\rangle = \langle T^*(x), y\rangle$ for all $x, y \in V$.
Proof. $\langle x, T(y)\rangle = \overline{\langle T(y), x\rangle} = \overline{\langle y, T^*(x)\rangle} = \langle T^*(x), y\rangle$.

Theorem 6.10. Let $V$ be a finite dimensional inner product space and let $\beta$ be an orthonormal basis for $V$. If $T$ is a linear operator on $V$, then $[T^*]_\beta = ([T]_\beta)^*$.
Proof. Let $A = [T]_\beta$ and $B = [T^*]_\beta$, with $\beta = \{v_1, v_2, \ldots, v_n\}$. By the corollary to Theorem 6.5,
$$(B)_{i,j} = \langle T^*(v_j), v_i\rangle = \overline{\langle v_i, T^*(v_j)\rangle} = \overline{\langle T(v_i), v_j\rangle} = \overline{(A)_{j,i}} = (A^*)_{i,j}.$$
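At the matrix level, the defining property $\langle Ax, y\rangle = \langle x, A^*y\rangle$ with $A^*$ the conjugate transpose can be checked directly (my own sanity check, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_star = A.conj().T   # the adjoint (conjugate transpose) of A

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Defining property of the adjoint: <Ax, y> = <x, A* y>.
# np.vdot conjugates its first argument, so <u, v> is np.vdot(v, u).
lhs = np.vdot(y, A @ x)        # <Ax, y>
rhs = np.vdot(A_star @ y, x)   # <x, A* y>
```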
Corollary 2. Let $A$ be an $n \times n$ matrix. Then $L_{A^*} = (L_A)^*$.
Proof. Use $\beta$, the standard ordered basis. By Theorem 2.16, $[L_A]_\beta = A$ (2) and $[L_{A^*}]_\beta = A^*$ (3). By Theorem 6.10, $[(L_A)^*]_\beta = ([L_A]_\beta)^*$, which equals $A^*$ by (2), and this equals $[L_{A^*}]_\beta$ by (3). Therefore $[(L_A)^*]_\beta = [L_{A^*}]_\beta$, so $(L_A)^* = L_{A^*}$.

Theorem 6.11. Let $V$ be an inner product space, and let $T, U$ be linear operators on $V$. Then
1. $(T + U)^* = T^* + U^*$.
2. $(cT)^* = \bar{c}\,T^*$ for all $c \in F$.
3. $(TU)^* = U^*T^*$ (composition).
4. $(T^*)^* = T$.
5. $I^* = I$.

Proof.
1. $\langle (T + U)(x), y\rangle = \langle x, (T + U)^*(y)\rangle$, and
$$\langle (T + U)(x), y\rangle = \langle T(x), y\rangle + \langle U(x), y\rangle = \langle x, T^*(y)\rangle + \langle x, U^*(y)\rangle = \langle x, (T^* + U^*)(y)\rangle.$$
And $(T + U)^*$ is unique, so it must equal $T^* + U^*$.
2. $\langle cT(x), y\rangle = \langle x, (cT)^*(y)\rangle$, and
$$\langle cT(x), y\rangle = c\langle T(x), y\rangle = c\langle x, T^*(y)\rangle = \langle x, \bar{c}\,T^*(y)\rangle.$$
3. $\langle TU(x), y\rangle = \langle x, (TU)^*(y)\rangle$, and
$$\langle TU(x), y\rangle = \langle T(U(x)), y\rangle = \langle U(x), T^*(y)\rangle = \langle x, U^*(T^*(y))\rangle = \langle x, (U^*T^*)(y)\rangle.$$
4. $\langle T^*(x), y\rangle = \langle x, (T^*)^*(y)\rangle$ by definition, and $\langle T^*(x), y\rangle = \langle x, T(y)\rangle$ by the Fact.
5. $\langle I(x), y\rangle = \langle x, I^*(y)\rangle$, and $\langle I(x), y\rangle = \langle x, y\rangle = \langle x, I(y)\rangle$ for all $x, y \in V$. Therefore $I^*(y) = I(y)$ for all $y \in V$, and we have $I^* = I$.

Corollary 1. Let $A$ and $B$ be $n \times n$ matrices. Then
1. $(A + B)^* = A^* + B^*$.
2. $(cA)^* = \bar{c}\,A^*$ for all $c \in F$.
3. $(AB)^* = B^*A^*$.
4. $(A^*)^* = A$.
5. $I^* = I$.
Proof. Use Theorem 6.11 and the Corollary to Theorem 6.10. Or, use the computations below.

Example. (Exercise 5b) Let $A$ and $B$ be $m \times n$ matrices and $C$ an $n \times p$ matrix. Then
1. $(A + B)^* = A^* + B^*$.
2. $(cA)^* = \bar{c}\,A^*$ for all $c \in F$.
3. $(AC)^* = C^*A^*$.
4. $(A^*)^* = A$.
5. $I^* = I$.
Proof.
1. $((A + B)^*)_{i,j} = \overline{(A + B)_{j,i}} = \overline{(A)_{j,i} + (B)_{j,i}} = \overline{(A)_{j,i}} + \overline{(B)_{j,i}}$, and $(A^* + B^*)_{i,j} = (A^*)_{i,j} + (B^*)_{i,j} = \overline{(A)_{j,i}} + \overline{(B)_{j,i}}$.
2. Let $c \in F$. $((cA)^*)_{i,j} = \overline{(cA)_{j,i}} = \overline{c\,(A)_{j,i}} = \bar{c}\,\overline{(A)_{j,i}}$, and $(\bar{c}\,A^*)_{i,j} = \bar{c}\,(A^*)_{i,j} = \bar{c}\,\overline{(A)_{j,i}}$.
3.
$$((AC)^*)_{i,j} = \overline{(AC)_{j,i}} = \overline{\sum_{k=1}^{n}(A)_{j,k}(C)_{k,i}} = \sum_{k=1}^{n}\overline{(A)_{j,k}}\,\overline{(C)_{k,i}} = \sum_{k=1}^{n}(A^*)_{k,j}(C^*)_{i,k} = \sum_{k=1}^{n}(C^*)_{i,k}(A^*)_{k,j} = (C^*A^*)_{i,j}.$$

Fall: The following was not covered.

For $x, y \in F^n$, let $\langle x, y\rangle_n$ denote the standard inner product of $x$ and $y$ in $F^n$. Recall that if $x$ and $y$ are regarded as column vectors, then $\langle x, y\rangle_n = y^*x$.

Lemma 1. Let $A \in M_{m \times n}(F)$, $x \in F^n$, and $y \in F^m$. Then $\langle Ax, y\rangle_m = \langle x, A^*y\rangle_n$.
Proof. $\langle Ax, y\rangle_m = y^*(Ax) = (y^*A)x = (A^*y)^*x = \langle x, A^*y\rangle_n$.

Lemma 2. Let $A \in M_{m \times n}(F)$. Then $\operatorname{rank}(A^*A) = \operatorname{rank}(A)$.
Proof. $A^*A$ is an $n \times n$ matrix. By the Dimension Theorem, $\operatorname{rank}(A^*A) + \operatorname{nullity}(A^*A) = n$. We also have $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$. We will show that the nullspace of $A$ equals the nullspace of $A^*A$, i.e., that $A^*Ax = 0$ if and only if $Ax = 0$. Clearly $Ax = 0$ implies $A^*Ax = 0$. Conversely, if $A^*Ax = 0$, then by Lemma 1,
$$0 = \langle x, A^*Ax\rangle_n = \langle Ax, Ax\rangle_m,$$
so $Ax = 0$.

Corollary 1. If $A$ is an $m \times n$ matrix such that $\operatorname{rank}(A) = n$, then $A^*A$ is invertible.

Theorem 6.12. Let $A \in M_{m \times n}(F)$ and $y \in F^m$. Then there exists $x_0 \in F^n$ such that $(A^*A)x_0 = A^*y$ and $\|Ax_0 - y\| \leq \|Ax - y\|$ for all $x \in F^n$. Furthermore, if $\operatorname{rank}(A) = n$, then $x_0 = (A^*A)^{-1}A^*y$.

Proof. Define $W = \{Ax : x \in F^n\} = R(L_A)$. By the corollary to Theorem 6.6, there is a unique vector $u = Ax_0$ in $W$ that is closest to $y$. Then $\|Ax_0 - y\| \leq \|Ax - y\|$ for all $x \in F^n$. Also by Theorem 6.6, $z = y - u$ is in $W^\perp$. So $Ax_0 - y$ is in $W^\perp$, and therefore $\langle Ax, Ax_0 - y\rangle_m = 0$ for all $x \in F^n$.
By Lemma 1, $\langle x, A^*(Ax_0 - y)\rangle_n = 0$ for all $x \in F^n$. So $A^*(Ax_0 - y) = 0$, and we see that $x_0$ is a solution for $x$ in $A^*Ax = A^*y$. If, in addition, we know that $\operatorname{rank}(A) = n$, then by Lemma 2 we have $\operatorname{rank}(A^*A) = n$, so $A^*A$ is invertible and $x_0 = (A^*A)^{-1}A^*y$.

Fall: not covered until here.

Section 6.4:

Lemma. Let $T$ be a linear operator on a finite-dimensional inner product space $V$. If $T$ has an eigenvector, then so does $T^*$.
Proof. Let $v$ be an eigenvector of $T$ corresponding to the eigenvalue $\lambda$. For all $x \in V$, we have
$$0 = \langle 0, x\rangle = \langle (T - \lambda I)(v), x\rangle = \langle v, (T - \lambda I)^*(x)\rangle = \langle v, (T^* - \bar{\lambda}I)(x)\rangle.$$
So $v$ is orthogonal to $(T^* - \bar{\lambda}I)(x)$ for all $x$. Thus $R(T^* - \bar{\lambda}I) \neq V$, and so the nullity of $T^* - \bar{\lambda}I$ is not $0$: there exists $x \neq 0$ such that $(T^* - \bar{\lambda}I)(x) = 0$. Thus $x$ is an eigenvector of $T^*$ corresponding to the eigenvalue $\bar{\lambda}$.

Theorem (Schur). Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Suppose that the characteristic polynomial of $T$ splits. Then there exists an orthonormal basis $\beta$ for $V$ such that the matrix $[T]_\beta$ is upper triangular.

Proof. The proof is by mathematical induction on the dimension $n$ of $V$. The result is immediate if $n = 1$. So suppose that the result is true for linear operators on $(n-1)$-dimensional inner product spaces whose characteristic polynomials split. By the lemma, we can assume that $T^*$ has a unit eigenvector $z$. Suppose that $T^*(z) = \lambda z$ and that $W = \operatorname{span}(\{z\})$. We show that $W^\perp$ is $T$-invariant. If $y \in W^\perp$ and $x = cz \in W$, then
$$\langle T(y), x\rangle = \langle T(y), cz\rangle = \langle y, T^*(cz)\rangle = \langle y, cT^*(z)\rangle = \langle y, c\lambda z\rangle = \overline{c\lambda}\,\langle y, z\rangle = \overline{c\lambda}\,(0) = 0.$$
So $T(y) \in W^\perp$. By Theorem 5.21, the characteristic polynomial of $T_{W^\perp}$ divides the characteristic polynomial of $T$ and hence splits. By Theorem 6.7(c), $\dim(W^\perp) = n - 1$, so we may apply the induction hypothesis to $T_{W^\perp}$ and obtain an orthonormal basis $\gamma$ of $W^\perp$ such that $[T_{W^\perp}]_\gamma$ is upper triangular. Clearly, $\beta = \gamma \cup \{z\}$ is an orthonormal basis for $V$ such that $[T]_\beta$ is upper triangular.

Definition. Let $V$ be an inner product space, and let $T$ be a linear operator on $V$. We say that $T$ is normal if $TT^* = T^*T$.
An $n \times n$ real or complex matrix $A$ is normal if $AA^* = A^*A$.

Theorem 6.15. Let $V$ be an inner product space, and let $T$ be a normal operator on $V$. Then the following statements are true.
(a) $\|T(x)\| = \|T^*(x)\|$ for all $x \in V$.
(b) $T - cI$ is normal for every $c \in F$.
(c) If $x$ is an eigenvector of $T$, then $x$ is also an eigenvector of $T^*$. In fact, if $T(x) = \lambda x$, then $T^*(x) = \bar{\lambda}x$.
(d) If $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of $T$ with corresponding eigenvectors $x_1$ and $x_2$, then $x_1$ and $x_2$ are orthogonal.

Proof. (a) For any $x \in V$, we have
$$\|T(x)\|^2 = \langle T(x), T(x)\rangle = \langle T^*T(x), x\rangle = \langle TT^*(x), x\rangle = \langle T^*(x), T^*(x)\rangle = \|T^*(x)\|^2.$$
(b)
$$(T - cI)(T - cI)^* = (T - cI)(T^* - \bar{c}I) = TT^* - \bar{c}T - cT^* + c\bar{c}I = T^*T - \bar{c}T - cT^* + \bar{c}cI = (T^* - \bar{c}I)(T - cI) = (T - cI)^*(T - cI).$$
(c) Suppose that $T(x) = \lambda x$ for some $x \in V$. Let $U = T - \lambda I$. Then $U(x) = 0$, and $U$ is normal by (b). Thus (a) implies that
$$0 = \|U(x)\| = \|U^*(x)\| = \|(T^* - \bar{\lambda}I)(x)\| = \|T^*(x) - \bar{\lambda}x\|.$$
Hence $T^*(x) = \bar{\lambda}x$. So $x$ is an eigenvector of $T^*$.
(d) Let $\lambda_1$ and $\lambda_2$ be distinct eigenvalues of $T$ with corresponding eigenvectors $x_1$ and $x_2$. Then, using (c), we have
$$\lambda_1\langle x_1, x_2\rangle = \langle \lambda_1 x_1, x_2\rangle = \langle T(x_1), x_2\rangle = \langle x_1, T^*(x_2)\rangle = \langle x_1, \bar{\lambda}_2 x_2\rangle = \lambda_2\langle x_1, x_2\rangle.$$
Since $\lambda_1 \neq \lambda_2$, we conclude that $\langle x_1, x_2\rangle = 0$.

Definition. Let $T$ be a linear operator on an inner product space $V$. We say that $T$ is self-adjoint (Hermitian) if $T^* = T$. An $n \times n$ real or complex matrix $A$ is self-adjoint (Hermitian) if $A^* = A$.

Theorem 6.16. Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $T$ is normal if and only if there exists an orthonormal basis for $V$ consisting of eigenvectors of $T$.

Proof. The characteristic polynomial of $T$ splits over $\mathbb{C}$. By Schur's Theorem there is an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$ such that $A = [T]_\beta$ is upper triangular.
($\Rightarrow$) Assume $T$ is normal. $T(v_1) = A_{1,1}v_1$, so $v_1$ is an eigenvector with associated eigenvalue $A_{1,1}$. Assume $v_1, \ldots, v_{k-1}$ are all eigenvectors of $T$, with corresponding eigenvalues $\lambda_1, \ldots, \lambda_{k-1}$. Claim: $v_k$ is also an eigenvector. By Theorem 6.15, $T(v_j) = \lambda_j v_j$ implies $T^*(v_j) = \bar{\lambda}_j v_j$. Also, $T(v_k) = A_{1,k}v_1 + A_{2,k}v_2 + \cdots + A_{k,k}v_k$. We also know, by the Corollary to Theorem 6.5, that $A_{i,j} = \langle T(v_j), v_i\rangle$. So, for $j < k$,
$$A_{j,k} = \langle T(v_k), v_j\rangle = \langle v_k, T^*(v_j)\rangle = \langle v_k, \bar{\lambda}_j v_j\rangle = \lambda_j\langle v_k, v_j\rangle = 0.$$
So $T(v_k) = A_{k,k}v_k$, and we have that $v_k$ is an eigenvector of $T$. By induction, $\beta$ is a set of eigenvectors of $T$.
($\Leftarrow$) If $\beta$ is an orthonormal basis of eigenvectors, then $T$ is diagonalizable by Theorem 5.1, and $D = [T]_\beta$ is diagonal. Hence $[T^*]_\beta = D^*$ is also diagonal. Diagonal matrices commute, so $[T]_\beta[T^*]_\beta = [T^*]_\beta[T]_\beta$. It follows that $TT^*(x) = T^*T(x)$ for every $x \in V$, and hence $TT^* = T^*T$.

Lemma. Let $T$ be a self-adjoint operator on a finite-dimensional inner product space $V$. Then
(a) Every eigenvalue of $T$ is real.
(b) Suppose that $V$ is a real inner product space. Then the characteristic polynomial of $T$ splits.

Proof. (a) Let $\lambda$ be an eigenvalue of $T$, so $T(x) = \lambda x$ for some $x \neq 0$. Since $T$ is self-adjoint (hence normal), Theorem 6.15(c) gives
$$\lambda x = T(x) = T^*(x) = \bar{\lambda}x, \quad\text{so}\quad (\lambda - \bar{\lambda})x = 0.$$
But $x \neq 0$, so $\lambda - \bar{\lambda} = 0$, and we have that $\lambda = \bar{\lambda}$, so $\lambda$ is real.
(b) Let $\dim(V) = n$ and let $\beta$ be an orthonormal basis for $V$. Set $A = [T]_\beta$. Then
$$A^* = ([T]_\beta)^* = [T^*]_\beta = [T]_\beta = A,$$
so $A$ is self-adjoint. Define $T_A: \mathbb{C}^n \to \mathbb{C}^n$ by $T_A(x) = Ax$. Notice that $[T_A]_\gamma = A$, where $\gamma$ is the standard ordered basis, which is orthonormal. So $T_A$ is self-adjoint, and by (a) its eigenvalues are real. We know that over $\mathbb{C}$ the characteristic polynomial of $T_A$ factors into linear factors $t - \lambda$, and since each $\lambda$ is in $\mathbb{R}$, it also factors over $\mathbb{R}$. But $T_A$, $A$, and $T$ all have the same characteristic polynomial.

Theorem 6.17. Let $T$ be a linear operator on a finite-dimensional real inner product space $V$. Then $T$ is self-adjoint if and only if there exists an orthonormal basis $\beta$ for $V$ consisting of eigenvectors of $T$.

Proof. ($\Rightarrow$) Assume $T$ is self-adjoint. By the lemma, the characteristic polynomial of $T$ splits. Now by Schur's Theorem, there exists an orthonormal basis $\beta$ for $V$ such that $A = [T]_\beta$ is upper triangular. But
$$A^* = ([T]_\beta)^* = [T^*]_\beta = [T]_\beta = A.$$
So $A$ and $A^*$ are both upper triangular, thus $A$ is diagonal, and we see that $\beta$ is a set of eigenvectors of $T$.
($\Leftarrow$) Assume there is an orthonormal basis $\beta$ of $V$ of eigenvectors of $T$.
We know $D = [T]_\beta$ is diagonal with the eigenvalues on the diagonal. $D^*$ is diagonal and equal to $D$, since $D$ is real. But then $[T^*]_\beta = D^* = D = [T]_\beta$. So $T^* = T$.

Fall: The following was covered but will not be included on the final.

Section 6.5:

Definition. Let $T$ be a linear operator on a finite-dimensional inner product space $V$ (over $F$). If $\|T(x)\| = \|x\|$ for all $x \in V$, we call $T$ a unitary operator if $F = \mathbb{C}$ and an orthogonal operator if $F = \mathbb{R}$.

Theorem 6.18. Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Then the following statements are equivalent.
1. $TT^* = T^*T = I$.
2. $\langle T(x), T(y)\rangle = \langle x, y\rangle$ for all $x, y \in V$.
3. If $\beta$ is an orthonormal basis for $V$, then $T(\beta)$ is an orthonormal basis for $V$.
4. There exists an orthonormal basis $\beta$ for $V$ such that $T(\beta)$ is an orthonormal basis for $V$.
5. $\|T(x)\| = \|x\|$ for all $x \in V$.

Proof. (1) $\Rightarrow$ (2): $\langle T(x), T(y)\rangle = \langle x, T^*T(y)\rangle = \langle x, y\rangle$.
(2) $\Rightarrow$ (3): Let $v_i, v_j \in \beta$ with $i \neq j$. Then $0 = \langle v_i, v_j\rangle = \langle T(v_i), T(v_j)\rangle$, so $T(\beta)$ is orthogonal. By Corollary 2 to Theorem 6.3, any orthogonal set of nonzero vectors is linearly independent, and since $T(\beta)$ has $n$ vectors, it must be a basis of $V$. Also, $1 = \|v_i\|^2 = \langle v_i, v_i\rangle = \langle T(v_i), T(v_i)\rangle$. So $T(\beta)$ is an orthonormal basis of $V$.
(3) $\Rightarrow$ (4): By Gram–Schmidt, there is an orthonormal basis $\beta$ for $V$. By (3), $T(\beta)$ is orthonormal.
(4) $\Rightarrow$ (5): Let $\beta = \{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for $V$ such that $T(\beta)$ is orthonormal. Let $x = a_1v_1 + a_2v_2 + \cdots + a_nv_n$. Then
$$\|x\|^2 = |a_1|^2\langle v_1, v_1\rangle + \cdots + |a_n|^2\langle v_n, v_n\rangle = |a_1|^2 + \cdots + |a_n|^2$$
and
$$\|T(x)\|^2 = |a_1|^2\langle T(v_1), T(v_1)\rangle + \cdots + |a_n|^2\langle T(v_n), T(v_n)\rangle = |a_1|^2 + \cdots + |a_n|^2.$$
Therefore $\|T(x)\| = \|x\|$.
(5) $\Rightarrow$ (1): We are given $\|T(x)\| = \|x\|$ for all $x$. We know $\langle x, x\rangle = 0$ if and only if $x = 0$, and $\langle T(x), T(x)\rangle = 0$ if and only if $T(x) = 0$. Therefore $T(x) = 0$ if and only if $x = 0$. So $N(T) = \{0\}$, and therefore $T$ is invertible. We have $\langle x, x\rangle = \langle T(x), T(x)\rangle = \langle x, T^*T(x)\rangle$ for all $x$. Therefore $T^*T(x) = x$ for all $x$, which implies $T^*T = I$. But since $T$ is invertible, it must be that $T^* = T^{-1}$, and we have $TT^* = T^*T = I$.

Lemma. Let $U$ be a self-adjoint operator on a finite-dimensional inner product space $V$. If $\langle x, U(x)\rangle = 0$ for all $x \in V$, then $U = T_0$ (where $T_0(x) = 0$ for all $x$).

Proof. $U^* = U$, so $U$ is normal. If $F = \mathbb{C}$, by Theorem 6.16 there is an orthonormal basis $\beta$ of eigenvectors of $U$; if $F = \mathbb{R}$, the same follows from Theorem 6.17. Let $x \in \beta$. Then $U(x) = \lambda x$ for some $\lambda$, and
$$0 = \langle x, U(x)\rangle = \langle x, \lambda x\rangle = \bar{\lambda}\langle x, x\rangle.$$
Therefore $\bar{\lambda} = 0$, so $\lambda = 0$ and $U(x) = 0$ for every $x \in \beta$; hence $U = T_0$.

Corollary 1. Let $T$ be a linear operator on a finite-dimensional real inner product space $V$.
Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value 1 if and only if $T$ is both self-adjoint and orthogonal.
Proof. ($\Rightarrow$) Suppose $V$ has an orthonormal basis $\{v_1, v_2, \ldots, v_n\}$ such that for all $i$, $T(v_i) = \lambda_i v_i$ and $|\lambda_i| = 1$. Then by Theorem 6.17, $T$ is self-adjoint. We'll show $TT^* = T^*T = I$; then by Theorem 6.18, $\|T(x)\| = \|x\|$ for all $x \in V$, so $T$ is orthogonal. Since the $\lambda_i$ are real with $|\lambda_i| = 1$,
$$T^*T(v_i) = T(T(v_i)) = T(\lambda_i v_i) = \lambda_i T(v_i) = \lambda_i^2 v_i = v_i.$$
So $T^*T = I$. Similarly, $TT^* = I$.
($\Leftarrow$) Assume $T$ is self-adjoint. Then by Theorem 6.17, $V$ has an orthonormal basis $\{v_1, v_2, \ldots, v_n\}$ such that $T(v_i) = \lambda_i v_i$ for all $i$. If $T$ is also orthogonal, we have
$$|\lambda_i|\,\|v_i\| = \|\lambda_i v_i\| = \|T(v_i)\| = \|v_i\|,$$
so $|\lambda_i| = 1$.

Corollary 2. Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value 1 if and only if $T$ is unitary.

Definition. A square matrix $A$ is called an orthogonal matrix if $A^tA = AA^t = I$, and unitary if $A^*A = AA^* = I$. We say $B$ is unitarily equivalent to $D$ if there exists a unitary matrix $Q$ such that $D = Q^*BQ$.

Theorem 6.19. Let $A$ be a complex $n \times n$ matrix. Then $A$ is normal if and only if $A$ is unitarily equivalent to a diagonal matrix.

Proof. ($\Rightarrow$) Assume $A$ is normal. There is an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$ for $F^n$ consisting of eigenvectors of $A$, by Theorem 6.16. So $A$ is similar to a diagonal matrix $D$ by Theorem 5.1, where the matrix $S$ with column $i$ equal to $v_i$ is the invertible matrix of similarity: $S^{-1}AS = D$. Since the columns of $S$ are an orthonormal set, $S$ is unitary.
($\Leftarrow$) Suppose $A = P^*DP$ where $P$ is unitary and $D$ is diagonal. Then $A^*A = P^*D^*DP$, and
$$AA^* = (P^*DP)(P^*DP)^* = P^*DPP^*D^*P = P^*DD^*P.$$
But $D^*D = DD^*$, so $A^*A = AA^*$.

Theorem 6.20. Let $A$ be a real $n \times n$ matrix. Then $A$ is symmetric if and only if $A$ is orthogonally equivalent to a real diagonal matrix.

Theorem 6.21. Let $A \in M_n(F)$ be a matrix whose characteristic polynomial splits over $F$.
1. If $F = \mathbb{C}$, then $A$ is unitarily equivalent to a complex upper triangular matrix.
2. If $F = \mathbb{R}$, then $A$ is orthogonally equivalent to a real upper triangular matrix.

Proof.
(1) By Schur's Theorem there is an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$ such that $[L_A]_\beta = N$, where $N$ is a complex upper triangular matrix. Let $\beta'$ be the standard ordered basis; then $[L_A]_{\beta'} = A$. Let $Q$ be the change of coordinates matrix whose $i$th column is $v_i$. Then $N = Q^{-1}AQ$. We know that $Q$ is unitary, since its columns are an orthonormal set of vectors and so $Q^*Q = I$.

Section 6.6

Definition. If $V = W_1 \oplus W_2$, then a linear operator $T$ on $V$ is the projection on $W_1$ along $W_2$ if, whenever $x = x_1 + x_2$ with $x_1 \in W_1$ and $x_2 \in W_2$, we have $T(x) = x_1$. In this case, $R(T) = W_1 = \{x \in V : T(x) = x\}$ and $N(T) = W_2$. We refer to $T$ as a projection. Let $V$ be an inner product space, and let $T: V \to V$ be a projection. We say that $T$ is an orthogonal projection if $R(T)^\perp = N(T)$ and $N(T)^\perp = R(T)$.
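An orthogonal projection can be illustrated at the matrix level (my own sketch, not from the notes): with $Q$ a matrix whose columns are an orthonormal basis of a subspace $W$, the standard construction $P = QQ^*$ projects orthogonally onto $W$, and $P$ satisfies $P^2 = P = P^*$, the characterization given in Theorem 6.24.

```python
import numpy as np

# Orthogonal projection onto W = column space of B, built from an
# orthonormal basis of W (Q from a QR factorization): P = Q Q*.
rng = np.random.default_rng(3)
B = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
Q, _ = np.linalg.qr(B)
P = Q @ Q.conj().T

idempotent = np.allclose(P @ P, P)         # P^2 = P
self_adjoint = np.allclose(P.conj().T, P)  # P* = P
fixes_W = np.allclose(P @ B, B)            # P acts as the identity on W
```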
Theorem 6.24. Let $V$ be an inner product space, and let $T$ be a linear operator on $V$. Then $T$ is an orthogonal projection if and only if $T$ has an adjoint $T^*$ and $T^2 = T = T^*$.

Compare Theorem 6.24 to Theorem 6.9, where $V$ is finite-dimensional; this is the non-finite-dimensional version.

Theorem (The Spectral Theorem). Suppose that $T$ is a linear operator on a finite-dimensional inner product space $V$ over $F$ with the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Assume that $T$ is normal if $F = \mathbb{C}$ and that $T$ is self-adjoint if $F = \mathbb{R}$. For each $i$ ($1 \leq i \leq k$), let $W_i$ be the eigenspace of $T$ corresponding to the eigenvalue $\lambda_i$, and let $T_i$ be the orthogonal projection of $V$ on $W_i$. Then the following statements are true.
(a) $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$.
(b) If $W_i'$ denotes the direct sum of the subspaces $W_j$ for $j \neq i$, then $W_i^\perp = W_i'$.
(c) $T_iT_j = \delta_{i,j}T_i$ for $1 \leq i, j \leq k$.
(d) $I = T_1 + T_2 + \cdots + T_k$.
(e) $T = \lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k$.

Proof. Assume $F = \mathbb{C}$.
(a) $T$ is normal. By Theorem 6.16 there exists an orthonormal basis of eigenvectors of $T$. By Theorem 5.10, $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$.
(b) Let $x \in W_i$ and $y \in W_j$, $i \neq j$. Then $\langle x, y\rangle = 0$, and so $W_i' \subseteq W_i^\perp$. From (a), $\dim(W_i') = \sum_{j \neq i}\dim(W_j) = \dim(V) - \dim(W_i)$. By Theorem 6.7(c), we know also that $\dim(W_i^\perp) = \dim(V) - \dim(W_i)$. Hence $W_i' = W_i^\perp$.
(c) $T_i$ is the orthogonal projection of $V$ on $W_i$. Let $x \in V$ and write $x = w_1 + w_2 + \cdots + w_k$ with $w_i \in W_i$. Then $T_i(x) = w_i$, so $T_i(T_i(x)) = T_i(w_i) = w_i = T_i(x)$, and for $j \neq i$, $T_jT_i(x) = T_j(w_i) = 0$.
(d) $(T_1 + T_2 + \cdots + T_k)(x) = T_1(x) + T_2(x) + \cdots + T_k(x) = w_1 + w_2 + \cdots + w_k = x$.
(e) With $x = T_1(x) + T_2(x) + \cdots + T_k(x)$, we have $T(x) = T(T_1(x)) + T(T_2(x)) + \cdots + T(T_k(x))$. For all $i$, $T_i(x) \in W_i$, so $T(T_i(x)) = \lambda_iT_i(x)$. Hence
$$T(x) = \lambda_1T_1(x) + \lambda_2T_2(x) + \cdots + \lambda_kT_k(x) = (\lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k)(x).$$
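The spectral theorem can be observed concretely for a Hermitian matrix (my own illustration; `numpy.linalg.eigh` returns an orthonormal eigenbasis, from which the projections $T_i$, the resolution of the identity, and the spectral decomposition can be assembled):

```python
import numpy as np

# Build a Hermitian (self-adjoint) 3x3 matrix with distinct eigenvalues 1, 2, 3.
rng = np.random.default_rng(4)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(M)                       # unitary Q
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.conj().T

eigvals, V = np.linalg.eigh(A)               # columns of V: orthonormal eigenvectors
# Orthogonal projections onto the (one-dimensional) eigenspaces: T_i = v_i v_i*.
projs = [np.outer(V[:, i], V[:, i].conj()) for i in range(3)]

resolution = sum(projs)                                      # should equal I
decomposition = sum(l * P for l, P in zip(eigvals, projs))   # should equal A
```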
Definition. The set $\{\lambda_1, \lambda_2, \ldots, \lambda_k\}$ of eigenvalues of $T$ is called the spectrum of $T$, the sum $I = T_1 + T_2 + \cdots + T_k$ is called the resolution of the identity operator induced by $T$, and the sum $T = \lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k$ is called the spectral decomposition of $T$.

Corollary 1. If $F = \mathbb{C}$, then $T$ is normal if and only if $T^* = g(T)$ for some polynomial $g$.

Corollary 2. If $F = \mathbb{C}$, then $T$ is unitary if and only if $T$ is normal and $|\lambda| = 1$ for every eigenvalue $\lambda$ of $T$.

Corollary 3. If $F = \mathbb{C}$ and $T$ is normal, then $T$ is self-adjoint if and only if every eigenvalue of $T$ is real.

Corollary 4. Let $T$ be as in the spectral theorem with spectral decomposition $T = \lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k$. Then each $T_j$ is a polynomial in $T$.
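Corollary 2 can be checked numerically (my own addition): a random unitary matrix, obtained here from a QR factorization, is normal and all of its eigenvalues lie on the unit circle.

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)   # U is unitary: U* U = I

is_unitary = np.allclose(U.conj().T @ U, np.eye(4))
is_normal = np.allclose(U @ U.conj().T, U.conj().T @ U)
eigvals = np.linalg.eigvals(U)
on_unit_circle = np.allclose(np.abs(eigvals), 1.0)
```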
Practice Math 110 Final Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16. 1. Let A = 3 1 1 3 3 2. 6 6 5 a. Use Gauss elimination to reduce A to an upper triangular
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 205/6 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationRecall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.
ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the ndimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?
More information1 Eigenvalues and Eigenvectors
Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x
More informationDiagonal, Symmetric and Triangular Matrices
Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by
More informationWHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?
WHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)
More informationSection 4.4 Inner Product Spaces
Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More information9.3 Advanced Topics in Linear Algebra
548 93 Advanced Topics in Linear Algebra Diagonalization and Jordan s Theorem A system of differential equations x = Ax can be transformed to an uncoupled system y = diag(λ,, λ n y by a change of variables
More informationNOTES ON LINEAR TRANSFORMATIONS
NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all
More information(January 14, 2009) End k (V ) End k (V/W )
(January 14, 29) [16.1] Let p be the smallest prime dividing the order of a finite group G. Show that a subgroup H of G of index p is necessarily normal. Let G act on cosets gh of H by left multiplication.
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a nonempty
More information4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:
More informationLecture 2: Essential quantum mechanics
Department of Physical Sciences, University of Helsinki http://theory.physics.helsinki.fi/ kvanttilaskenta/ p. 1/46 Quantum information and computing Lecture 2: Essential quantum mechanics JaniPetri Martikainen
More informationMATH36001 Background Material 2015
MATH3600 Background Material 205 Matrix Algebra Matrices and Vectors An ordered array of mn elements a ij (i =,, m; j =,, n) written in the form a a 2 a n A = a 2 a 22 a 2n a m a m2 a mn is said to be
More information1. Let P be the space of all polynomials (of one real variable and with real coefficients) with the norm
Uppsala Universitet Matematiska Institutionen Andreas Strömbergsson Prov i matematik Funktionalanalys Kurs: F3B, F4Sy, NVP 0050615 Skrivtid: 9 14 Tillåtna hjälpmedel: Manuella skrivdon, Kreyszigs bok
More informationBANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
More informationα = u v. In other words, Orthogonal Projection
Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v
More informationSolution based on matrix technique Rewrite. ) = 8x 2 1 4x 1x 2 + 5x x1 2x 2 2x 1 + 5x 2
8.2 Quadratic Forms Example 1 Consider the function q(x 1, x 2 ) = 8x 2 1 4x 1x 2 + 5x 2 2 Determine whether q(0, 0) is the global minimum. Solution based on matrix technique Rewrite q( x1 x 2 = x1 ) =
More informationQuick Reference Guide to Linear Algebra in Quantum Mechanics
Quick Reference Guide to Linear Algebra in Quantum Mechanics Scott N. Walck September 2, 2014 Contents 1 Complex Numbers 2 1.1 Introduction............................ 2 1.2 Real Numbers...........................
More informationMath 333  Practice Exam 2 with Some Solutions
Math 333  Practice Exam 2 with Some Solutions (Note that the exam will NOT be this long) Definitions (0 points) Let T : V W be a transformation Let A be a square matrix (a) Define T is linear (b) Define
More informationMatrix Representations of Linear Transformations and Changes of Coordinates
Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under
More informationMathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 19967 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
More informationsome algebra prelim solutions
some algebra prelim solutions David Morawski August 19, 2012 Problem (Spring 2008, #5). Show that f(x) = x p x + a is irreducible over F p whenever a F p is not zero. Proof. First, note that f(x) has no
More information[1] Diagonal factorization
8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:
More informationInner Product Spaces. 7.1 Inner Products
7 Inner Product Spaces 71 Inner Products Recall that if z is a complex number, then z denotes the conjugate of z, Re(z) denotes the real part of z, and Im(z) denotes the imaginary part of z By definition,
More informationMATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
More information13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.
3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in threespace, we write a vector in terms
More informationLINEAR ALGEBRA AND MATRICES M3P9
LINEAR ALGEBRA AND MATRICES M3P9 December 11, 2000 PART ONE. LINEAR TRANSFORMATIONS Let k be a field. Usually, k will be the field of complex numbers C, but it can also be any other field, e.g., the field
More informationNotes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
More information1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)
Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible
More informationSec 4.1 Vector Spaces and Subspaces
Sec 4. Vector Spaces and Subspaces Motivation Let S be the set of all solutions to the differential equation y + y =. Let T be the set of all 2 3 matrices with real entries. These two sets share many common
More informationBASIC THEORY AND APPLICATIONS OF THE JORDAN CANONICAL FORM
BASIC THEORY AND APPLICATIONS OF THE JORDAN CANONICAL FORM JORDAN BELL Abstract. This paper gives a basic introduction to the Jordan canonical form and its applications. It looks at the Jordan canonical
More information4 MT210 Notebook 4 3. 4.1 Eigenvalues and Eigenvectors... 3. 4.1.1 Definitions; Graphical Illustrations... 3
MT Notebook Fall / prepared by Professor Jenny Baglivo c Copyright 9 by Jenny A. Baglivo. All Rights Reserved. Contents MT Notebook. Eigenvalues and Eigenvectors................................... Definitions;
More informationDETERMINANTS. b 2. x 2
DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in
More informationMATH 551  APPLIED MATRIX THEORY
MATH 55  APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More informationLINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LUdecomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
More information4. MATRICES Matrices
4. MATRICES 170 4. Matrices 4.1. Definitions. Definition 4.1.1. A matrix is a rectangular array of numbers. A matrix with m rows and n columns is said to have dimension m n and may be represented as follows:
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus ndimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationMATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.
MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all ndimensional column
More informationFinite Dimensional Hilbert Spaces and Linear Inverse Problems
Finite Dimensional Hilbert Spaces and Linear Inverse Problems ECE 174 Lecture Supplement Spring 2009 Ken KreutzDelgado Electrical and Computer Engineering Jacobs School of Engineering University of California,
More informationLinear Algebra Test 2 Review by JC McNamara
Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible
More informationCHARACTERISTIC ROOTS AND VECTORS
CHARACTERISTIC ROOTS AND VECTORS 1 DEFINITION OF CHARACTERISTIC ROOTS AND VECTORS 11 Statement of the characteristic root problem Find values of a scalar λ for which there exist vectors x 0 satisfying
More informationFacts About Eigenvalues
Facts About Eigenvalues By Dr David Butler Definitions Suppose A is an n n matrix An eigenvalue of A is a number λ such that Av = λv for some nonzero vector v An eigenvector of A is a nonzero vector v
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
More informationby the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the yaxis We observe that
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More information5. Orthogonal matrices
L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 51 Orthonormal
More informationFactorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More information3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.
Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R
More informationMATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix.
MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix. Matrices Definition. An mbyn matrix is a rectangular array of numbers that has m rows and n columns: a 11
More informationIntroduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra  1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 201112 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More informationMA106 Linear Algebra lecture notes
MA106 Linear Algebra lecture notes Lecturers: Martin Bright and Daan Krammer Warwick, January 2011 Contents 1 Number systems and fields 3 1.1 Axioms for number systems......................... 3 2 Vector
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationUNIT 2 MATRICES  I 2.0 INTRODUCTION. Structure
UNIT 2 MATRICES  I Matrices  I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress
More informationOrthogonal Projections and Orthonormal Bases
CS 3, HANDOUT A, 3 November 04 (adjusted on 7 November 04) Orthogonal Projections and Orthonormal Bases (continuation of Handout 07 of 6 September 04) Definition (Orthogonality, length, unit vectors).
More informationLinear Transformations
a Calculus III Summer 2013, Session II Tuesday, July 23, 2013 Agenda a 1. Linear transformations 2. 3. a linear transformation linear transformations a In the m n linear system Ax = 0, Motivation we can
More information4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns
L. Vandenberghe EE133A (Spring 2016) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows
More informationSection 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj
Section 5. l j v j = [ u u j u m ] l jj = l jj u j + + l mj u m. l mj Section 5. 5.. Not orthogonal, the column vectors fail to be perpendicular to each other. 5..2 his matrix is orthogonal. Check that
More information(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.
Theorem.7.: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product
More informationEC9A0: Presessional Advanced Mathematics Course
University of Warwick, EC9A0: Presessional Advanced Mathematics Course Peter J. Hammond & Pablo F. Beker 1 of 55 EC9A0: Presessional Advanced Mathematics Course Slides 1: Matrix Algebra Peter J. Hammond
More information17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function
17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):
More information1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationTopic 1: Matrices and Systems of Linear Equations.
Topic 1: Matrices and Systems of Linear Equations Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc Also, we shall review the method
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationLecture 6. Inverse of Matrix
Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More information15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
More information2. Introduction to quantum mechanics
2. Introduction to quantum mechanics 2.1 Linear algebra Dirac notation Complex conjugate Vector/ket Dual vector/bra Inner product/bracket Tensor product Complex conj. matrix Transpose of matrix Hermitian
More information7  Linear Transformations
7  Linear Transformations Mathematics has as its objects of study sets with various structures. These sets include sets of numbers (such as the integers, rationals, reals, and complexes) whose structure
More information