Math 110, Spring 2015: Homework 5 Solutions
Section 2.4

Exercise 2.4.4: Let A and B be n × n invertible matrices. Prove that AB is invertible and (AB)^{-1} = B^{-1}A^{-1}.

Proof. We first note that the matrices AB and B^{-1}A^{-1} are n × n matrices. By the definition of an invertible n × n matrix in the text, we need only prove that

(AB)(B^{-1}A^{-1}) = (B^{-1}A^{-1})(AB) = I,

where I denotes the n × n identity matrix. We can readily check this by using the associativity of matrix multiplication and the fact that IC = CI = C for all C ∈ M_{n×n}(R):

(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I.

Similarly,

(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I. ∎

Exercise 2.4.6: Prove that if A is invertible and AB = O, then B = O.

Proof. Multiplying both sides of the equation AB = O on the left by the inverse A^{-1} of A, we obtain

A^{-1}AB = A^{-1}O  ⇒  IB = O  ⇒  B = O,

as desired; here I denotes the identity matrix of the same dimensions as A. ∎
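As a quick numerical sanity check of the identity (AB)^{-1} = B^{-1}A^{-1} (not part of the proof, just a sketch using NumPy; the particular matrices are arbitrary choices):

```python
import numpy as np

# Two invertible 2 x 2 matrices (arbitrary example choices).
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 3.0], [0.0, 1.0]])

# Inverse of the product versus product of the inverses in reverse order.
inv_AB = np.linalg.inv(A @ B)
prod_of_invs = np.linalg.inv(B) @ np.linalg.inv(A)

assert np.allclose(inv_AB, prod_of_invs)
# The "wrong" order A^{-1} B^{-1} generally differs:
assert not np.allclose(inv_AB, np.linalg.inv(A) @ np.linalg.inv(B))
```

Note that the reversal of the factors really matters: for these two matrices, A^{-1}B^{-1} is a different matrix from (AB)^{-1}.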
Exercise 2.4.9: Let A and B be n × n matrices such that AB is invertible. Prove that A and B are invertible. Give an example to show that arbitrary matrices A and B need not be invertible if AB is invertible.

Solution: We first prove that A and B are invertible under the original hypotheses:

Proof. We'll prove this by converting it into a claim about linear transformations: By Corollary 2 in Section 2.4 of the text, A and B are invertible matrices if and only if the left-multiplication transformations L_A and L_B are invertible linear transformations. Furthermore, by the same Corollary, our assumption that AB is invertible tells us that the composition L_A L_B = L_{AB} is an invertible linear transformation.

Note first that, because A and B are n × n matrices, each of L_A and L_B maps R^n to itself. So to show that each of L_A and L_B is invertible, we may show either that it is one-to-one or that it is onto because, by the Dimension Theorem, the conditions of being one-to-one and onto are equivalent here.

We first show that L_B is one-to-one. Note that if L_B(x) = 0, then L_A L_B(x) = L_A(0) = 0. Thus,

N(L_B) ⊆ N(L_A L_B) = {0},

where the last equality follows from the assumption that L_A L_B is invertible and hence one-to-one. So N(L_B) = {0}, and therefore L_B is one-to-one.

Next, we show that L_A is onto. Indeed, any y ∈ R(L_A L_B) can be written as y = L_A(L_B(x)) for some x ∈ R^n. Thus,

R^n = R(L_A L_B) ⊆ R(L_A),

where the first equality follows from the assumption that L_A L_B is invertible and hence onto. So we have R(L_A) = R^n, and therefore L_A is onto.

Thus, L_A and L_B are invertible linear transformations; as noted above, this is equivalent to saying that A and B are invertible matrices. ∎

To wrap up this exercise, we construct an example to show that for arbitrary (i.e., not necessarily square) matrices A and B with AB invertible, A and B need not be invertible. The point here is that, by definition, an invertible matrix must be a square matrix.
So if AB is invertible, it must be square, say of dimensions n × n; then we must have A ∈ M_{n×m}(R) and B ∈ M_{m×n}(R) for some m ∈ N. So we should look for such examples with m ≠ n, since by definition such A and B cannot be invertible as they are not square matrices. Possibly the simplest example is the following: Take

A = [1 0] ∈ M_{1×2}(R) and B = (1, 0)^T ∈ M_{2×1}(R).

Then A and B are not square matrices, so they can't be invertible; on the other hand, AB = [1] = I_1, the 1 × 1 identity matrix, which is invertible.

Remark: If you want a principled way to come up with an example instead of just playing around until you find one, try using ideas similar to the proof for the first part of the exercise to prove the following: Suppose A is an n × m matrix such that L_A is onto (i.e., rank(A) = n) and B is an m × n matrix such that L_B is one-to-one. Suppose further that R(L_B) ∩ N(L_A) = {0}. Then AB ∈ M_{n×n}(R) is invertible. (I don't know what exact language was used in Math 54, but note that R(L_B) is the column space of B and N(L_A) is the null space of A. You can easily arrange for the three conditions we just listed by writing down matrices in reduced row echelon form.)
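A quick numerical sketch of a non-square pair whose product is invertible: A = [1 0] and B = (1, 0)^T give a 1 × 1 product equal to I_1, while neither factor can be invertible.

```python
import numpy as np

A = np.array([[1.0, 0.0]])        # 1 x 2: cannot be invertible (not square)
B = np.array([[1.0], [0.0]])      # 2 x 1: cannot be invertible (not square)

AB = A @ B                        # 1 x 1 product
assert AB.shape == (1, 1) and AB[0, 0] == 1.0   # AB = [1] = I_1, invertible

# In the other order, BA is 2 x 2 but only rank 1, hence not invertible.
BA = B @ A
assert np.linalg.matrix_rank(BA) == 1
```

The order of the factors matters: AB is the identity of the small space, while BA is a singular square matrix.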
Exercise 2.4.15: Let V and W be n-dimensional vector spaces, and let T: V → W be a linear transformation. Suppose that β is a basis for V. Prove that T is an isomorphism if and only if T(β) is a basis for W.

Proof. We first prove the "only if" implication. So assume that T: V → W is an isomorphism; we first claim that then T(β) must be a linearly independent set of n vectors in W. To that end, write β = {v_1, ..., v_n}. Because T is one-to-one, the vectors T(v_k) are distinct for 1 ≤ k ≤ n, and thus T(β) = {T(v_1), ..., T(v_n)} is a set of n vectors in W. Now suppose that some linear combination of these vectors is equal to the zero vector of W; i.e., suppose

a_1 T(v_1) + ··· + a_n T(v_n) = 0

for some scalars a_1, ..., a_n ∈ R. It suffices to prove that a_k = 0 for all k. Now, again because T is a one-to-one linear transformation, we have N(T) = {0}; thus,

a_1 T(v_1) + ··· + a_n T(v_n) = T(a_1 v_1 + ··· + a_n v_n) = 0  ⇒  a_1 v_1 + ··· + a_n v_n = 0.

Since β is assumed to be a basis, in particular the v_k are linearly independent; thus, we must have a_k = 0 for all k as desired. So T(β) is a linearly independent set of n vectors in W; because dim(W) = n, T(β) is a basis for W.

Now let us prove the "if" implication. To that end, assume that T(β) = {T(v_1), ..., T(v_n)} is a basis for W. Let y ∈ W be arbitrary. Since T(β) spans W, we can find scalars b_1, ..., b_n ∈ R such that, using the linearity of T,

y = b_1 T(v_1) + ··· + b_n T(v_n) = T(b_1 v_1 + ··· + b_n v_n) = T(x),

where x = b_1 v_1 + ··· + b_n v_n ∈ V. So we've shown that for all y ∈ W, y ∈ R(T). In other words, T is onto. By the Dimension Theorem, because dim(V) = dim(W), T must also be one-to-one; thus, T is an isomorphism. ∎

Remarks: For the "only if" direction, we chose to show that T(β) must be linearly independent; of course one could also have chosen to show that T(β) spans W. Similarly, for the "if" direction, it would have been totally acceptable to show directly that T must be one-to-one and then to deduce that T must also be onto from the Dimension Theorem.
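For intuition in coordinates: an invertible matrix carries a basis to a basis. The images of the standard basis vectors are the matrix's columns, and they are linearly independent exactly when the matrix has full rank. A small NumPy illustration (arbitrary example matrices, a sketch only):

```python
import numpy as np

M = np.array([[1.0, 2.0], [0.0, 1.0]])   # invertible, so L_M is an isomorphism

images = [M @ e for e in np.eye(2)]      # images of the standard basis under L_M
# Stack the images as columns; full rank <=> they form a basis of R^2.
assert np.linalg.matrix_rank(np.column_stack(images)) == 2

S = np.array([[1.0, 2.0], [2.0, 4.0]])   # singular: the images are dependent
assert np.linalg.matrix_rank(np.column_stack([S @ e for e in np.eye(2)])) == 1
```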
Note also that the "if" implication of this exercise is not true in general if we don't assume dim(V) = dim(W); try to think of a counterexample.

Exercise 2.4.16: Let B be an n × n invertible matrix. Define Φ: M_{n×n}(R) → M_{n×n}(R) by Φ(A) = B^{-1}AB. Prove that Φ is an isomorphism.

Proof. We first show that Φ is a linear transformation. Let A_1, A_2 ∈ M_{n×n}(R) and c ∈ R be arbitrary. Then

Φ(cA_1 + A_2) = B^{-1}(cA_1 + A_2)B = (cB^{-1}A_1 + B^{-1}A_2)B = cB^{-1}A_1B + B^{-1}A_2B = cΦ(A_1) + Φ(A_2),

so Φ is linear. Next, let C ∈ M_{n×n}(R) be any n × n matrix. Then

Φ(BCB^{-1}) = B^{-1}(BCB^{-1})B = (B^{-1}B) C (B^{-1}B) = I_n C I_n = C,

where I_n denotes the n × n identity matrix. In particular, C ∈ R(Φ); as C was arbitrary, Φ is onto. Since Φ is a linear transformation from the finite-dimensional vector space M_{n×n}(R) to itself, by the Dimension Theorem Φ must also be one-to-one. Thus, Φ is an isomorphism. ∎
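A numerical sketch of this conjugation map (the invertible B below is an arbitrary choice): Φ(A) = B^{-1}AB is linear, and every C is hit via Φ(BCB^{-1}).

```python
import numpy as np

B = np.array([[2.0, 1.0], [1.0, 1.0]])       # invertible (det = 1)
Binv = np.linalg.inv(B)

def phi(A):
    """The conjugation map Phi(A) = B^{-1} A B."""
    return Binv @ A @ B

rng = np.random.default_rng(0)
A1, A2, C = rng.standard_normal((3, 2, 2))   # three random 2 x 2 matrices
c = 3.0

# Linearity: Phi(c*A1 + A2) = c*Phi(A1) + Phi(A2)
assert np.allclose(phi(c * A1 + A2), c * phi(A1) + phi(A2))

# Surjectivity witness: Phi(B C B^{-1}) = C
assert np.allclose(phi(B @ C @ Binv), C)
```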
Exercise 2.4.17: Let V and W be finite-dimensional vector spaces and T: V → W be an isomorphism. Let V_0 be a subspace of V.

(a) Prove that T(V_0) is a subspace of W.

Proof. We need only use the fact that T is a linear transformation here; i.e., we do not need to use the full fact that T is an isomorphism. Because V_0 is a subspace of V, the zero vector 0_V of V is an element of V_0. So 0_W = T(0_V) ∈ T(V_0). To see that T(V_0) is closed under addition and scalar multiplication in W, let w_1, w_2 ∈ T(V_0) and c ∈ R be arbitrary. Then there exist v_1, v_2 ∈ V_0 such that T(v_1) = w_1 and T(v_2) = w_2. Furthermore, because V_0 is a subspace of V, we have cv_1 + v_2 ∈ V_0. Thus,

cw_1 + w_2 = cT(v_1) + T(v_2) = T(cv_1 + v_2) ∈ T(V_0).

As c, w_1, and w_2 were arbitrary, T(V_0) is closed under addition and scalar multiplication in W. Thus, T(V_0) is a subspace of W. ∎

(b) Prove that dim(V_0) = dim(T(V_0)).

Proof. Here we need to use the full fact that T is an isomorphism. Consider the restriction T_{V_0} of T to the subspace V_0. The restriction of a linear transformation to a subspace of its domain was defined in a previous homework; in this case we may view it as the linear transformation T_{V_0}: V_0 → T(V_0) defined by T_{V_0}(x) := T(x) for all x ∈ V_0. By definition, R(T_{V_0}) = T(V_0), so T_{V_0} is onto. Moreover, if x ∈ N(T_{V_0}), then 0_W = T_{V_0}(x) = T(x) ⇒ x = 0_V, because T is an isomorphism, hence one-to-one. Thus, T_{V_0} is one-to-one; since we also noted that it's onto, T_{V_0}: V_0 → T(V_0) is an isomorphism. In other words, V_0 and T(V_0) are isomorphic vector spaces; by Theorem 2.19, dim(V_0) = dim(T(V_0)). ∎

Exercise 2.4.2: Let V and W be finite-dimensional vector spaces with ordered bases β = {v_1, v_2, ..., v_n} and γ = {w_1, w_2, ..., w_m}, respectively. By Theorem 2.6 (p. 72), there exist linear transformations T_ij: V → W such that

T_ij(v_k) = w_i if k = j, and T_ij(v_k) = 0 if k ≠ j.

First prove that {T_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for L(V, W).
Remark: One could make this proof a bit shorter by using Theorem 2.20 and its Corollary; that would tell us that dim(L(V, W)) = mn. Once we knew the dimension, we could get away with showing either linear independence or spanning and then immediately deduce the other. But since the spirit of this exercise seems to be to reprove those results from a slightly different perspective, we'll just do everything from scratch. Note also that the arguments below can be motivated by considering the matrix representations of linear transformations with respect to the bases β and γ.
Proof. We first show that the set {T_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is linearly independent. Let T_0: V → W denote the zero transformation, which is the zero vector of the vector space L(V, W). Assume there exist scalars a_ij ∈ R such that

Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij T_ij = T_0;

we aim to show that we must have a_ij = 0 for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. Our assumption is equivalent to the condition

Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij T_ij(v) = 0    (1)

for all v ∈ V. Fix k ∈ {1, ..., n}. Then plugging v = v_k into equation (1) gives

Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij T_ij(v_k) = Σ_{i=1}^{m} a_ik w_i = a_{1k} w_1 + ··· + a_{mk} w_m = 0.

Since γ = {w_1, ..., w_m} is a basis for W, the w_i are linearly independent, and thus we must have a_ik = 0 for all 1 ≤ i ≤ m. Since k was arbitrary, we have shown a_ij = 0 for all 1 ≤ i ≤ m and 1 ≤ j ≤ n, as desired. So the given set is linearly independent in L(V, W).

Next, we show that {T_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} spans L(V, W). Let S ∈ L(V, W) be an arbitrary linear transformation. Again using the fact that γ = {w_1, ..., w_m} is a basis for W, for each 1 ≤ k ≤ n we can find scalars b_{1k}, b_{2k}, ..., b_{mk} ∈ R such that

S(v_k) = b_{1k} w_1 + ··· + b_{mk} w_m = Σ_{i=1}^{m} b_ik w_i.

Consider the linear transformation Σ_{i=1}^{m} Σ_{j=1}^{n} b_ij T_ij ∈ span({T_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n}). Note that, for each 1 ≤ k ≤ n,

Σ_{i=1}^{m} Σ_{j=1}^{n} b_ij T_ij(v_k) = Σ_{i=1}^{m} b_ik w_i = S(v_k).

Since β = {v_1, ..., v_n} is a basis for V, by the Corollary to Theorem 2.6 on page 73 of the text we have

S = Σ_{i=1}^{m} Σ_{j=1}^{n} b_ij T_ij ∈ span({T_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n}).

Since S ∈ L(V, W) was arbitrary, the given set spans L(V, W). Since we also showed it is linearly independent, it is a basis for L(V, W). ∎
Exercise (continued): Then let M^{ij} be the m × n matrix with 1 in the ith row and jth column and 0 elsewhere, and prove that [T_ij]_β^γ = M^{ij}.

Proof. We just compute to show that the corresponding entries of the two matrices are equal; i.e., we show that for all 1 ≤ k ≤ m and all 1 ≤ l ≤ n we have ([T_ij]_β^γ)_{kl} = (M^{ij})_{kl}. To do that, we just use the definition of the matrix representation of a linear transformation with respect to a pair of bases β and γ. Fix an arbitrary such k and l. Note that the lth column of [T_ij]_β^γ is the coordinate vector

[T_ij(v_l)]_γ = [w_i]_γ = e_i ∈ R^m if l = j, and [T_ij(v_l)]_γ = [0]_γ = 0 ∈ R^m if l ≠ j.

(Here e_i denotes the ith standard basis vector in R^m.) So the kth entry of this vector is

([T_ij]_β^γ)_{kl} = 1 if l = j and k = i, and 0 otherwise; that is, ([T_ij]_β^γ)_{kl} = (M^{ij})_{kl},

which is what we wanted to show. ∎

Exercise (continued): Again by Theorem 2.6, there exists a linear transformation Φ: L(V, W) → M_{m×n}(R) such that Φ(T_ij) = M^{ij}. Prove that Φ is an isomorphism.

Proof. Note that from the first part of this exercise, in which we showed that {T_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for L(V, W) consisting of mn distinct linear transformations, we can deduce dim(L(V, W)) = mn. Thus, because we also have dim(M_{m×n}(R)) = mn, we may apply Exercise 2.4.15 above to the linear transformation Φ: L(V, W) → M_{m×n}(R). Once we note that the set {M^{ij} : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is just the standard basis for M_{m×n}(R), the "if" implication of Exercise 2.4.15 tells us that Φ is an isomorphism. ∎

Remark: If you think about it, Φ here is in fact the exact same isomorphism Φ appearing in Theorem 2.20 of the text. We've basically given a somewhat lengthy elaboration of the proof of that theorem; comparing the proof we did here with that appearing in the book may be instructive.
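In coordinates, the basis {T_ij} corresponds to the standard matrix basis {M^{ij}} of M_{m×n}(R). A short sketch (with m = 2, n = 3 as an arbitrary example) that builds the M^{ij} and confirms they are mn linearly independent matrices spanning the mn-dimensional space:

```python
import numpy as np

m, n = 2, 3
basis = []
for i in range(m):
    for j in range(n):
        M = np.zeros((m, n))
        M[i, j] = 1.0          # 1 in row i, column j, 0 elsewhere
        basis.append(M)

# Flatten each matrix to a vector; linear independence <=> full rank m*n.
stacked = np.stack([M.reshape(-1) for M in basis])
assert len(basis) == m * n
assert np.linalg.matrix_rank(stacked) == m * n

# Any S in M_{m x n}(R) is the combination sum_{ij} S[i, j] * M^{ij}.
S = np.arange(6.0).reshape(m, n)
combo = sum(S[i, j] * basis[i * n + j] for i in range(m) for j in range(n))
assert np.allclose(combo, S)
```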
Exercise: Let V denote the vector space defined in Example 5 of Section 1.2, and let W = P(R). Define T: V → W by T(σ) = Σ_{i=0}^{n} σ(i)x^i, where n is the largest integer such that σ(n) ≠ 0. Prove that T is an isomorphism.

Remark 1: To be utterly pedantic, the statement of the exercise does not define T(0_V). (The zero vector 0_V of V is the sequence σ_0 defined by σ_0(m) = 0 for all m, so it does not make sense to say "n is the largest integer such that σ_0(n) ≠ 0.") So if we want to be completely precise, we should further define T(0_V) = 0_W, where 0_W is the zero polynomial in P(R). This exercise is also confusing as stated because Example 5 in Section 1.2 defines V to be the space of functions from the positive integers to R; thus, σ(0) does not make sense for σ ∈ V under that definition. The natural thing to do is to modify the definition of V slightly, taking V to be the vector space of functions from the nonnegative integers to R. (N.B.: The two definitions yield isomorphic vector spaces. To check your understanding, find a reasonably obvious isomorphism from the new V to the old V.) We'll use that modified definition of V for the following proof.

Remark 2: Note that V and W here are infinite-dimensional vector spaces; accordingly, we cannot hope to make any use of the Dimension Theorem or other finite-dimensional techniques.

Proof. We first show that T is a linear transformation. Let c ∈ R and σ, η ∈ V be arbitrary. Let N be the largest integer such that either σ(N) ≠ 0 or η(N) ≠ 0; if σ = η = 0_V, then set N = 0. Then

T(cσ + η) = Σ_{i=0}^{N} (cσ + η)(i)x^i = c Σ_{i=0}^{N} σ(i)x^i + Σ_{i=0}^{N} η(i)x^i = cT(σ) + T(η),

so T is linear.

Next, we show T is one-to-one. Indeed, suppose T(σ) = 0_W, the zero polynomial. Then by definition of the zero polynomial and the map T, we know that:

- σ(n) = 0 for all integers n > 0, and
- σ(0) = 0.

In other words, σ(n) = 0 for all nonnegative integers n, and thus σ = 0_V. So N(T) = {0_V}, and T is one-to-one.
Finally, let f ∈ W = P(R) be an arbitrary polynomial, and write f(x) = Σ_{i=0}^{n} a_i x^i, so that deg(f) = n. Define a sequence σ ∈ V by σ(k) = a_k for 0 ≤ k ≤ n, and σ(k) = 0 for k > n. Then T(σ) = f; since f ∈ W was arbitrary, T is onto.

Thus, we've shown that T is a linear transformation that is both one-to-one and onto; in other words, T is an isomorphism. ∎
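The isomorphism of the last exercise is easy to mimic concretely: a finitely supported sequence σ corresponds to the coefficient list of a polynomial. A sketch (the dictionary representation of σ and the helper names are my own, not from the text):

```python
def seq_to_poly(sigma):
    """T(sigma): map a finitely supported sequence (dict index -> value)
    to a polynomial given as a coefficient list [a_0, a_1, ..., a_n]."""
    support = [k for k, v in sigma.items() if v != 0]
    if not support:                    # sigma = 0_V maps to the zero polynomial
        return [0]
    n = max(support)                   # largest index with sigma(n) != 0
    return [sigma.get(i, 0) for i in range(n + 1)]

def poly_to_seq(coeffs):
    """The inverse map: coefficient list back to a sequence."""
    return {i: a for i, a in enumerate(coeffs) if a != 0}

sigma = {0: 2, 3: -1}                  # the sequence 2, 0, 0, -1, 0, 0, ...
assert seq_to_poly(sigma) == [2, 0, 0, -1]          # the polynomial 2 - x^3
assert poly_to_seq(seq_to_poly(sigma)) == sigma     # round trip = identity
```

The round trip being the identity in both directions is exactly the statement that T is one-to-one and onto.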
Section 2.5

Exercise 2.5.2: For each of the following pairs of ordered bases β and β' for R^2, find the change of coordinate matrix that changes β'-coordinates into β-coordinates.

(a) β = {e_1, e_2} and β' = {(a_1, a_2), (b_1, b_2)}.

Solution: For j = 1, 2, the jth column of the change-of-coordinates matrix Q is given by the β-coordinate vector of the jth vector in the basis β'. We compute these coordinate vectors as follows:

(a_1, a_2) = a_1 e_1 + a_2 e_2  ⇒  [(a_1, a_2)]_β = (a_1, a_2);
(b_1, b_2) = b_1 e_1 + b_2 e_2  ⇒  [(b_1, b_2)]_β = (b_1, b_2).

So the change-of-coordinates matrix is

Q = [I_{R^2}]_{β'}^{β} = [a_1 b_1; a_2 b_2],

where I_{R^2} denotes the identity transformation on the vector space R^2 (and we write matrices row by row, separating rows with semicolons).

(d) β = {(-4, 3), (2, -1)} and β' = {(2, 1), (-4, 1)}.

Solution: With the same approach as in part (a), we compute the relevant coordinate vectors:

(2, 1) = a(-4, 3) + c(2, -1)  ⇒  -4a + 2c = 2 and 3a - c = 1  ⇒  a = 2, c = 5  ⇒  [(2, 1)]_β = (2, 5);
(-4, 1) = b(-4, 3) + d(2, -1)  ⇒  -4b + 2d = -4 and 3b - d = 1  ⇒  b = -1, d = -4  ⇒  [(-4, 1)]_β = (-1, -4).

So the change-of-coordinates matrix is

Q = [a b; c d] = [2 -1; 5 -4].
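Part (d) can be checked numerically: stacking the β vectors as the columns of a matrix P and the β' vectors as the columns of P', the change-of-coordinates matrix solves P Q = P', i.e. Q = P^{-1} P'. A sketch:

```python
import numpy as np

P = np.array([[-4.0, 2.0], [3.0, -1.0]])       # columns: beta  = (-4,3), (2,-1)
P_prime = np.array([[2.0, -4.0], [1.0, 1.0]])  # columns: beta' = (2,1), (-4,1)

Q = np.linalg.solve(P, P_prime)                # solves P @ Q = P_prime
assert np.allclose(Q, [[2.0, -1.0], [5.0, -4.0]])

# Sanity check: Q converts beta'-coordinates to beta-coordinates. The vector
# with beta'-coordinates (1, 0) is (2, 1); its beta-coordinates are Q @ (1, 0).
assert np.allclose(P @ (Q @ np.array([1.0, 0.0])), [2.0, 1.0])
```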
Exercise 2.5.3: For each of the following pairs of ordered bases β and β' for P_2(R), find the change of coordinate matrix that changes β'-coordinates into β-coordinates.

(b) β = {1, x, x^2} and β' = {a_2 x^2 + a_1 x + a_0, b_2 x^2 + b_1 x + b_0, c_2 x^2 + c_1 x + c_0}.

Solution: We follow the same basic approach as in Exercise 2.5.2. The β-coordinate vectors of the basis vectors in β' can easily be computed by inspection in this case:

[a_2 x^2 + a_1 x + a_0]_β = (a_0, a_1, a_2), [b_2 x^2 + b_1 x + b_0]_β = (b_0, b_1, b_2), [c_2 x^2 + c_1 x + c_0]_β = (c_0, c_1, c_2).

So the change-of-coordinates matrix is

Q = [I_{P_2(R)}]_{β'}^{β} = [a_0 b_0 c_0; a_1 b_1 c_1; a_2 b_2 c_2].

(d) β = {x^2 - x + 1, x + 1, x^2 + 1} and β' = {x^2 + x + 4, 4x^2 - 3x + 2, 2x^2 + 3}.

Solution: For convenience, write β' = {f_1, f_2, f_3} and β = {g_1, g_2, g_3} (as ordered bases). Then we compute the coordinate vectors [f_1]_β = (a_1, a_2, a_3), [f_2]_β = (b_1, b_2, b_3), [f_3]_β = (c_1, c_2, c_3) by solving the systems

f_1 = a_1 g_1 + a_2 g_2 + a_3 g_3:  a_1 + a_3 = 1, -a_1 + a_2 = 1, a_1 + a_2 + a_3 = 4;
f_2 = b_1 g_1 + b_2 g_2 + b_3 g_3:  b_1 + b_3 = 4, -b_1 + b_2 = -3, b_1 + b_2 + b_3 = 2;
f_3 = c_1 g_1 + c_2 g_2 + c_3 g_3:  c_1 + c_3 = 2, -c_1 + c_2 = 0, c_1 + c_2 + c_3 = 3.

(In each system, the three equations come from matching the coefficients of x^2, x, and the constant term, respectively.) So the change-of-coordinates matrix is

Q = [I_{P_2(R)}]_{β'}^{β} = [a_1 b_1 c_1; a_2 b_2 c_2; a_3 b_3 c_3].
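For part (d), identifying each polynomial with its coordinate vector relative to {1, x, x^2} turns the three systems into one matrix equation: with the β polynomials' coefficient vectors as the columns of P and the β' polynomials' as the columns of P', we get Q = P^{-1} P'. A sketch (the solved numerical values of Q are my own computation, not stated in the text):

```python
import numpy as np

# Coefficient vectors (constant, x, x^2) as columns.
P = np.array([[1.0, 1.0, 1.0],        # beta  = {x^2 - x + 1, x + 1, x^2 + 1}
              [-1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
P_prime = np.array([[4.0, 2.0, 3.0],  # beta' = {x^2 + x + 4, 4x^2 - 3x + 2, 2x^2 + 3}
                    [1.0, -3.0, 0.0],
                    [1.0, 4.0, 2.0]])

Q = np.linalg.solve(P, P_prime)       # columns of Q are [f_j]_beta
assert np.allclose(Q, [[2.0, 1.0, 1.0],
                       [3.0, -2.0, 1.0],
                       [-1.0, 3.0, 1.0]])
```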
(f) β = {2x^2 - x + 1, x^2 + 3x - 2, -x^2 + 2x + 1} and β' = {9x - 9, x^2 + 2x - 2, 3x^2 + 5x + 2}.

Solution: Again, write β = {g_1, g_2, g_3} and β' = {f_1, f_2, f_3}. Compute the coordinate vectors [f_1]_β = (a_1, a_2, a_3), [f_2]_β = (b_1, b_2, b_3), [f_3]_β = (c_1, c_2, c_3) from the systems below (in the interest of saving space I will not show my work in solving the systems of equations, but you should):

f_1 = a_1 g_1 + a_2 g_2 + a_3 g_3:  2a_1 + a_2 - a_3 = 0, -a_1 + 3a_2 + 2a_3 = 9, a_1 - 2a_2 + a_3 = -9;
f_2 = b_1 g_1 + b_2 g_2 + b_3 g_3:  2b_1 + b_2 - b_3 = 1, -b_1 + 3b_2 + 2b_3 = 2, b_1 - 2b_2 + b_3 = -2;
f_3 = c_1 g_1 + c_2 g_2 + c_3 g_3:  2c_1 + c_2 - c_3 = 3, -c_1 + 3c_2 + 2c_3 = 5, c_1 - 2c_2 + c_3 = 2.

So the change-of-coordinates matrix is

Q = [I_{P_2(R)}]_{β'}^{β} = [a_1 b_1 c_1; a_2 b_2 c_2; a_3 b_3 c_3].
Exercise 2.5.6: For each matrix A and ordered basis β, find [L_A]_β. Also, find an invertible matrix Q such that [L_A]_β = Q^{-1}AQ.

(b) A = [1 2; 2 1] and β = {(1, 1), (1, -1)}.

Solution: For convenience, write β = {v_1, v_2} (as an ordered basis). Then the jth column of [L_A]_β is the β-coordinate vector [L_A(v_j)]_β. So we compute

L_A(v_1) = (3, 3) = 3v_1 + 0v_2  ⇒  [L_A(v_1)]_β = (3, 0);
L_A(v_2) = (-1, 1) = 0v_1 + (-1)v_2  ⇒  [L_A(v_2)]_β = (0, -1).

So

[L_A]_β = [3 0; 0 -1].

By the Corollary to Theorem 2.23 of the text, the change-of-coordinates matrix Q that changes β-coordinates into standard-basis-coordinates, given by

Q = [1 1; 1 -1],

satisfies [L_A]_β = Q^{-1}AQ.

(d) A = [13 1 4; 1 13 4; 4 4 10] and β = {(1, 1, -2), (1, -1, 0), (1, 1, 1)}.

Solution: Again, for convenience write β = {v_1, v_2, v_3}. Compute the relevant β-coordinate vectors:

L_A(v_1) = (6, 6, -12) = 6v_1 + 0v_2 + 0v_3  ⇒  [L_A(v_1)]_β = (6, 0, 0);
L_A(v_2) = (12, -12, 0) = 0v_1 + 12v_2 + 0v_3  ⇒  [L_A(v_2)]_β = (0, 12, 0);
L_A(v_3) = (18, 18, 18) = 0v_1 + 0v_2 + 18v_3  ⇒  [L_A(v_3)]_β = (0, 0, 18).

So we have

[L_A]_β = [6 0 0; 0 12 0; 0 0 18].

Again using the Corollary to Theorem 2.23, the change-of-coordinates matrix

Q = [1 1 1; 1 -1 1; -2 0 1]

satisfies [L_A]_β = Q^{-1}AQ.

Remark: In both parts of this exercise, it was easy enough to solve (by inspection) the systems of linear equations required to obtain the relevant coordinate vectors to construct [L_A]_β. On the other hand, if the vectors of the basis β are explicitly presented to you in standard coordinates, the matrix Q is always easy to construct: its columns are simply the standard coordinate vectors of the basis vectors from β, arranged in their proper order. So in general, you have a choice of which computation you'd rather do: (1) solve the linear systems needed to compute the relevant coordinate vectors, or (2) invert the matrix Q, by whatever technique you desire (probably either by row operations or by Cramer's rule). Which option is the less tedious varies on a case-by-case basis, of course.
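Both parts of Exercise 2.5.6 are easy to verify numerically for the data A = [1 2; 2 1], β = {(1,1), (1,-1)} and A = [13 1 4; 1 13 4; 4 4 10], β = {(1,1,-2), (1,-1,0), (1,1,1)}: conjugating by the matrix Q whose columns are the β vectors should produce a diagonal matrix. A sketch:

```python
import numpy as np

# Part (b)
A = np.array([[1.0, 2.0], [2.0, 1.0]])
Q = np.array([[1.0, 1.0], [1.0, -1.0]])        # columns are the beta vectors
LA_beta = np.linalg.inv(Q) @ A @ Q
assert np.allclose(LA_beta, np.diag([3.0, -1.0]))

# Part (d)
A3 = np.array([[13.0, 1.0, 4.0],
               [1.0, 13.0, 4.0],
               [4.0, 4.0, 10.0]])
Q3 = np.array([[1.0, 1.0, 1.0],
               [1.0, -1.0, 1.0],
               [-2.0, 0.0, 1.0]])              # columns: (1,1,-2), (1,-1,0), (1,1,1)
assert np.allclose(np.linalg.inv(Q3) @ A3 @ Q3, np.diag([6.0, 12.0, 18.0]))
```

The diagonal entries are the eigenvalues of A, since each β vector turns out to be an eigenvector; that is exactly why [L_A]_β comes out diagonal here.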
Exercise 2.5.7(b): In R^2, let L be the line y = mx, where m ≠ 0. Find an expression for T(x, y), where T is the projection on L along the line perpendicular to L. (See the definition of projection in the exercises of Section 2.1.)

Solution: We follow the geometrically intuitive approach of Example 3 in Section 2.5 of the text. Let L' denote the line in R^2 perpendicular to L; it has equation y = -x/m. Since L and L' are lines through the origin in R^2, they are one-dimensional subspaces of R^2. Indeed, β = {(1, m)} is a basis for L, and β' = {(-m, 1)} is a basis for L'.

We first verify that R^2 = L ⊕ L'. Geometrically, it is clear that the lines L and L' intersect (only) at the origin; that is, L ∩ L' = {(0, 0)}. Moreover, since neither of the basis vectors (1, m) and (-m, 1) is a scalar multiple of the other, the set γ = {(1, m), (-m, 1)} is linearly independent in R^2; since dim(R^2) = 2 the set must also span R^2, which implies R^2 = L + L'. Note that γ is in fact a basis of R^2.

By definition of the projection on L along L', we see that T(1, m) = (1, m) and T(-m, 1) = (0, 0). So we have

[T]_γ = [1 0; 0 0].

Moreover, we can form the change-of-coordinates matrix Q that converts γ-coordinates into standard-basis-coordinates, and we can take its inverse using the familiar formula for the inverse of a 2 × 2 matrix:

Q = [1 -m; m 1],   Q^{-1} = (1/(m^2 + 1)) [1 m; -m 1].

Then, as in Example 3, T = L_A, where A is the matrix

A = Q [T]_γ Q^{-1} = (1/(m^2 + 1)) [1 m; m m^2].

So we just compute

A (x, y)^T = ((x + my)/(m^2 + 1), (mx + m^2 y)/(m^2 + 1))^T  ⇒  T(x, y) = ((x + my)/(m^2 + 1), (mx + m^2 y)/(m^2 + 1)).

Remark: If you have learned elsewhere how to take orthogonal projections using dot products, you can verify that T(x, y) is indeed the same orthogonal projection of (x, y) to L as computed using the dot-product method.

Exercise 2.5.10: Prove that if A and B are similar n × n matrices, then tr(A) = tr(B). Hint: Use Exercise 13 of Section 2.3.

Proof. Exercise 13 of Section 2.3 gives the following commutativity property of the trace: For any C, D ∈ M_{n×n}(R), we have tr(CD) = tr(DC).
Since A and B are similar matrices, there exists some invertible n × n matrix Q such that B = Q^{-1}AQ, by definition. So we just compute, using the commutativity property:

tr(B) = tr(Q^{-1}(AQ)) = tr((AQ)Q^{-1}) = tr(A(QQ^{-1})) = tr(AI) = tr(A),

where I denotes the n × n identity matrix, as desired. ∎
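Two quick numerical sketches to close (sample matrices are arbitrary choices): the projection formula from Exercise 2.5.7(b) behaves like a projection (idempotent, fixes L, annihilates the perpendicular line), and similar matrices share a trace.

```python
import numpy as np

# Projection onto y = m x along the perpendicular line, as derived above.
def proj_matrix(m):
    return np.array([[1.0, m], [m, m * m]]) / (m * m + 1.0)

A = proj_matrix(2.0)
assert np.allclose(A @ A, A)                       # projections are idempotent
assert np.allclose(A @ [1.0, 2.0], [1.0, 2.0])     # fixes the line L: y = 2x
assert np.allclose(A @ [-2.0, 1.0], [0.0, 0.0])    # kills the perpendicular line

# Similar matrices have equal trace: tr(Q^{-1} A Q) = tr(A).
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3)) + 5.0 * np.eye(3)  # almost surely invertible
assert np.isclose(np.trace(np.linalg.inv(Q) @ M @ Q), np.trace(M))
```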
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationLS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
More informationMATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.
MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column
More informationMATH 551 - APPLIED MATRIX THEORY
MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More informationSection 4.4 Inner Product Spaces
Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer
More informationLectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal
More information160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation
More informationMATH1231 Algebra, 2015 Chapter 7: Linear maps
MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter
More informationLecture Notes 2: Matrices as Systems of Linear Equations
2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably
More informationMATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).
MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0
More informationChapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors
Chapter 9. General Matrices An n m matrix is an array a a a m a a a m... = [a ij]. a n a n a nm The matrix A has n row vectors and m column vectors row i (A) = [a i, a i,..., a im ] R m a j a j a nj col
More informationis in plane V. However, it may be more convenient to introduce a plane coordinate system in V.
.4 COORDINATES EXAMPLE Let V be the plane in R with equation x +2x 2 +x 0, a two-dimensional subspace of R. We can describe a vector in this plane by its spatial (D)coordinates; for example, vector x 5
More informationLecture 3: Finding integer solutions to systems of linear equations
Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture
More informationLinear Algebra. A vector space (over R) is an ordered quadruple. such that V is a set; 0 V ; and the following eight axioms hold:
Linear Algebra A vector space (over R) is an ordered quadruple (V, 0, α, µ) such that V is a set; 0 V ; and the following eight axioms hold: α : V V V and µ : R V V ; (i) α(α(u, v), w) = α(u, α(v, w)),
More informationChapter 6. Orthogonality
6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationMatrix Differentiation
1 Introduction Matrix Differentiation ( and some other stuff ) Randal J. Barnes Department of Civil Engineering, University of Minnesota Minneapolis, Minnesota, USA Throughout this presentation I have
More informationby the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that
More informationα = u v. In other words, Orthogonal Projection
Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationNotes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
More informationChapter 20. Vector Spaces and Bases
Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit
More informationNotes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More informationLet H and J be as in the above lemma. The result of the lemma shows that the integral
Let and be as in the above lemma. The result of the lemma shows that the integral ( f(x, y)dy) dx is well defined; we denote it by f(x, y)dydx. By symmetry, also the integral ( f(x, y)dx) dy is well defined;
More informationMath 115A - Week 1 Textbook sections: 1.1-1.6 Topics covered: What is a vector? What is a vector space? Span, linear dependence, linear independence
Math 115A - Week 1 Textbook sections: 1.1-1.6 Topics covered: What is Linear algebra? Overview of course What is a vector? What is a vector space? Examples of vector spaces Vector subspaces Span, linear
More informationLINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
More informationLecture L3 - Vectors, Matrices and Coordinate Transformations
S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More informationPYTHAGOREAN TRIPLES KEITH CONRAD
PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
More informationSolving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
More informationMATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform
MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish
More informationArkansas Tech University MATH 4033: Elementary Modern Algebra Dr. Marcel B. Finan
Arkansas Tech University MATH 4033: Elementary Modern Algebra Dr. Marcel B. Finan 3 Binary Operations We are used to addition and multiplication of real numbers. These operations combine two real numbers
More informationIRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction
IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible
More informationTHREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
More informationDERIVATIVES AS MATRICES; CHAIN RULE
DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Real-valued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we
More informationHOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)!
Math 7 Fall 205 HOMEWORK 5 SOLUTIONS Problem. 2008 B2 Let F 0 x = ln x. For n 0 and x > 0, let F n+ x = 0 F ntdt. Evaluate n!f n lim n ln n. By directly computing F n x for small n s, we obtain the following
More information1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]
1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not
More information8 Square matrices continued: Determinants
8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You
More informationMath 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.
Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationMATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
More informationMATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
More informationI. GROUPS: BASIC DEFINITIONS AND EXAMPLES
I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called
More informationMath 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More informationYou know from calculus that functions play a fundamental role in mathematics.
CHPTER 12 Functions You know from calculus that functions play a fundamental role in mathematics. You likely view a function as a kind of formula that describes a relationship between two (or more) quantities.
More informationLinear Algebra Notes
Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note
More informationChapter 7. Permutation Groups
Chapter 7 Permutation Groups () We started the study of groups by considering planar isometries In the previous chapter, we learnt that finite groups of planar isometries can only be cyclic or dihedral
More informationInner product. Definition of inner product
Math 20F Linear Algebra Lecture 25 1 Inner product Review: Definition of inner product. Slide 1 Norm and distance. Orthogonal vectors. Orthogonal complement. Orthogonal basis. Definition of inner product
More informationThe Matrix Elements of a 3 3 Orthogonal Matrix Revisited
Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation
More informationThe Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
More information