Chapter 2

Defn 1. (p. 65) Let $V$ and $W$ be vector spaces (over $F$). We call a function $T : V \to W$ a linear transformation from $V$ to $W$ if, for all $x, y \in V$ and $c \in F$, we have (a) $T(x + y) = T(x) + T(y)$ and (b) $T(cx) = cT(x)$.

Fact 1. (p. 65)
1. If $T$ is linear, then $T(0) = 0$. Note that the first $0$ is in $V$ and the second one is in $W$.
2. $T$ is linear if and only if $T(cx + y) = cT(x) + T(y)$ for all $x, y \in V$ and $c \in F$.
3. If $T$ is linear, then $T(x - y) = T(x) - T(y)$ for all $x, y \in V$.
4. $T$ is linear if and only if, for $x_1, x_2, \dots, x_n \in V$ and $a_1, a_2, \dots, a_n \in F$, we have $T\left(\sum_{i=1}^{n} a_i x_i\right) = \sum_{i=1}^{n} a_i T(x_i)$.

Proof.
1. We know $0x = 0$ for all vectors $x$. Hence, $0 = 0\,T(x) = T(0x) = T(0)$.
2. Assume $T$ is linear, and let $c \in F$ and $x, y \in V$. Then $T(cx) = cT(x)$, so $T(cx + y) = T(cx) + T(y) = cT(x) + T(y)$. Conversely, assume $T(cx + y) = cT(x) + T(y)$ for all $x, y \in V$ and $c \in F$. Taking $x = y = 0$ and $c = 1$ gives $T(0) = T(0) + T(0)$, so $T(0) = 0$. Now let $a \in F$ and $x, y \in V$. Then $T(ax) = T(ax + 0) = aT(x) + T(0) = aT(x)$, and $T(x + y) = T(1x + y) = 1\,T(x) + T(y) = T(x) + T(y)$.
3. To start with, the definition of subtraction is $x - y = x + (-y)$. We also wish to show that $(-1)x = -x$ for all $x \in V$. Given $0x = 0$, which we proved earlier, we have $(1 + (-1))x = 0$, so $1x + (-1)x = 0$. Since the additive inverse is unique, we conclude that $(-1)x = -x$. Assume $T$ is linear. Then $T(x - y) = T(x + (-1)y) = T(x) + (-1)T(y) = T(x) + (-T(y)) = T(x) - T(y)$ for all $x, y \in V$.
4. ($\Leftarrow$) Let $n = 2$, $a_1 = c$, and $a_2 = 1$. We have that $T(cx_1 + x_2) = cT(x_1) + T(x_2)$, so $T$ is linear by part 2. ($\Rightarrow$) This is done by induction. Using the definition of linear transformation, we get the base case ($n = 2$); that is, $T(a_1 x_1 + a_2 x_2) = T(a_1 x_1) + T(a_2 x_2) = a_1 T(x_1) + a_2 T(x_2)$. Now assume that $n \ge 2$ and that for all $x_1, x_2, \dots, x_n \in V$ and $a_1, a_2, \dots, a_n \in F$ we have $T\left(\sum_{i=1}^{n} a_i x_i\right) = \sum_{i=1}^{n} a_i T(x_i)$. Then for all $x_1, x_2, \dots, x_n, x_{n+1} \in V$ and $a_1, a_2, \dots, a_n, a_{n+1} \in F$, we have
\[
T\left(\sum_{i=1}^{n+1} a_i x_i\right) = T\left(\sum_{i=1}^{n} a_i x_i + a_{n+1} x_{n+1}\right) = T\left(\sum_{i=1}^{n} a_i x_i\right) + T(a_{n+1} x_{n+1}) = \sum_{i=1}^{n} a_i T(x_i) + a_{n+1} T(x_{n+1}) = \sum_{i=1}^{n+1} a_i T(x_i).
\]
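The one-condition test in Fact 1.2 lends itself to quick numerical checks. Below is a minimal sketch (not part of the original notes), assuming NumPy is available; the reflection map on $\mathbb{R}^2$ and the random sampling are illustrative choices only.

```python
import numpy as np

# Fact 1.2 as a sanity check: T is linear iff T(c*x + y) == c*T(x) + T(y).
# The map below (reflection across the line y = x in R^2) is just an example.
def T(v: np.ndarray) -> np.ndarray:
    return np.array([v[1], v[0]])

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    c = rng.normal()
    assert np.allclose(T(c * x + y), c * T(x) + T(y))
print("T passes the one-condition linearity test on random samples")
```

Of course, passing on random samples does not prove linearity; it only fails to refute it.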

Defn 2. (p. 67) Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. We define the null space (or kernel) $N(T)$ of $T$ to be the set of all vectors $x$ in $V$ such that $T(x) = 0$; that is, $N(T) = \{x \in V : T(x) = 0\}$. We define the range (or image) $R(T)$ of $T$ to be the subset of $W$ consisting of all images (under $T$) of vectors in $V$; that is, $R(T) = \{T(x) : x \in V\}$.

Theorem 2.1. Let $V$ and $W$ be vector spaces and $T : V \to W$ be linear. Then $N(T)$ and $R(T)$ are subspaces of $V$ and $W$, respectively.

Proof. To prove that $N(T)$ is a subspace of $V$, we let $x, y \in N(T)$ and $c \in F$. We will show that $0 \in N(T)$ and $cx + y \in N(T)$. We know that $T(0) = 0$, so $0 \in N(T)$. We have that $T(x) = 0$ and $T(y) = 0$, so $T(cx + y) = cT(x) + T(y) = c \cdot 0 + 0 = c \cdot 0 = 0$. (The last equality is a fact that can be proved from the axioms of a vector space.)

To prove that $R(T)$ is a subspace of $W$, assume $T(x), T(y) \in R(T)$ for some $x, y \in V$, and let $c \in F$. Then $cT(x) + T(y) = T(cx + y)$, where $cx + y \in V$, since $T$ is a linear transformation, and so $cT(x) + T(y) \in R(T)$. Also, since $0 \in V$ and $0 = T(0)$, we know that $0 \in R(T)$.

Let $A$ be a set of vectors in $V$. Then $T(A) = \{T(x) : x \in A\}$.

Theorem 2.2. Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. If $\beta = \{v_1, v_2, \dots, v_n\}$ is a basis for $V$, then $R(T) = \operatorname{span}(T(\beta))$.

Proof. $T(\beta) = \{T(v_1), T(v_2), \dots, T(v_n)\}$. We will show $R(T) = \operatorname{span}(T(\beta))$. We showed that $R(T)$ is a subspace of $W$. We know that $T(\beta) \subseteq R(T)$. So, by Theorem 1.5, $\operatorname{span}(T(\beta)) \subseteq R(T)$. Let $x \in R(T)$. Then there is some $x' \in V$ such that $T(x') = x$. Since $\beta$ is a basis of $V$, there exist scalars $a_1, a_2, \dots, a_n$ such that $x' = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$. Then
\[
x = T(x') = T(a_1 v_1 + a_2 v_2 + \cdots + a_n v_n) = a_1 T(v_1) + a_2 T(v_2) + \cdots + a_n T(v_n),
\]
which is in $\operatorname{span}(T(\beta))$. Thus $R(T) \subseteq \operatorname{span}(T(\beta))$.
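For $T = L_A$ on $F^n$ these objects are concrete. The sketch below (an illustration, not from the notes) uses an arbitrary rank-1 matrix; reading a null-space basis off the SVD is one standard numerical method, chosen here as an assumption of convenience.

```python
import numpy as np

# Defn 2 and Theorem 2.2 for T = L_A : R^3 -> R^2, with A an arbitrary
# rank-1 example: the images A e_j of the standard basis span R(T), and
# right-singular vectors with vanishing singular values span N(T).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Theorem 2.2: R(T) = span{T(e_1), T(e_2), T(e_3)} = column space of A.
images = [A @ e for e in np.eye(3)]
assert all(np.allclose(img, A[:, j]) for j, img in enumerate(images))

# N(T): the rows of Vt beyond rank(A) span the null space.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]
assert all(np.allclose(A @ v, 0) for v in null_basis)
print("rank =", rank, ", nullity =", len(null_basis))   # 1 and 2
```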

Defn 3. (p. 69) Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. If $N(T)$ and $R(T)$ are finite-dimensional, then we define the nullity of $T$, denoted $\operatorname{nullity}(T)$, and the rank of $T$, denoted $\operatorname{rank}(T)$, to be the dimensions of $N(T)$ and $R(T)$, respectively.

Theorem 2.3. (Dimension Theorem). Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. If $V$ is finite-dimensional, then
\[
\operatorname{nullity}(T) + \operatorname{rank}(T) = \dim(V).
\]

Proof. Suppose $\operatorname{nullity}(T) = k$ and $\beta = \{n_1, \dots, n_k\}$ is a basis of $N(T)$. Since $\beta$ is linearly independent in $V$, $\beta$ extends to a basis of $V$, $\beta' = \{n_1, \dots, n_k, x_{k+1}, \dots, x_n\}$, where, by Corollary 2c to Theorem 1.10, $x_i \notin N(T)$ for each $i$. Notice that possibly $\beta'' = \emptyset$. It suffices to show that $\beta'' = \{T(x_{k+1}), \dots, T(x_n)\}$ is a basis for $R(T)$.

By Theorem 2.2, $T(\beta')$ spans $R(T)$. But $T(\beta') = \{0\} \cup \beta''$. So $\beta''$ spans $R(T)$.

Suppose $\beta''$ is not linearly independent. Then there exist $c_{k+1}, \dots, c_n \in F$, not all $0$, such that
\[
0 = c_{k+1} T(x_{k+1}) + \cdots + c_n T(x_n).
\]
Then $0 = T(c_{k+1} x_{k+1} + \cdots + c_n x_n)$, and so $c_{k+1} x_{k+1} + \cdots + c_n x_n \in N(T)$. But then
\[
c_{k+1} x_{k+1} + \cdots + c_n x_n = a_1 n_1 + a_2 n_2 + \cdots + a_k n_k
\]
for some scalars $a_1, a_2, \dots, a_k$, since $\beta = \{n_1, \dots, n_k\}$ is a basis of $N(T)$. Then
\[
0 = a_1 n_1 + a_2 n_2 + \cdots + a_k n_k - c_{k+1} x_{k+1} - \cdots - c_n x_n.
\]
The $c_i$'s are not all zero, so this contradicts the fact that $\beta'$ is a linearly independent set. Hence $\beta''$ is a basis for $R(T)$, and $\operatorname{rank}(T) = n - k$.
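For $T = L_A$, rank and nullity can be computed independently and the Dimension Theorem checked numerically. A small sketch (not from the notes), assuming NumPy; the random matrix and the SVD tolerance are illustrative assumptions.

```python
import numpy as np

# Checking Theorem 2.3 for T = L_A : R^5 -> R^3, with A a random example:
# nullity(T) + rank(T) should equal dim(V) = 5.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))
n = A.shape[1]                              # dim(V)
rank = np.linalg.matrix_rank(A)             # dim R(L_A)
_, s, Vt = np.linalg.svd(A)
nullity = len(Vt) - int(np.sum(s > 1e-10))  # dim N(L_A), from the SVD
assert nullity + rank == n
```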

Theorem 2.4. Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. Then $T$ is one-to-one if and only if $N(T) = \{0\}$.

Proof. Suppose $T$ is one-to-one. Let $x \in N(T)$. Then $T(x) = 0$; also $T(0) = 0$. $T$ being one-to-one implies that $x = 0$. Therefore, $N(T) \subseteq \{0\}$. It is clear that $\{0\} \subseteq N(T)$. Therefore, $N(T) = \{0\}$.

Suppose $N(T) = \{0\}$. Suppose for $x, y \in V$, $T(x) = T(y)$. We have that $T(x) - T(y) = 0$, so $T(x - y) = 0$. Then it must be that $x - y = 0$, i.e., $x + (-y) = 0$. This implies that both $x$ and $y$ are additive inverses of $-y$. By uniqueness of additive inverses we have that $x = y$. Thus, $T$ is one-to-one.

Theorem 2.5. Let $V$ and $W$ be vector spaces of equal (finite) dimension, and let $T : V \to W$ be linear. Then the following are equivalent.
(a) $T$ is one-to-one.
(b) $T$ is onto.
(c) $\operatorname{rank}(T) = \dim(V)$.

Proof. Assume $\dim(V) = \dim(W) = n$.

(a) $\Rightarrow$ (b): To prove $T$ is onto, we show that $R(T) = W$. Since $T$ is one-to-one, we know $N(T) = \{0\}$ and so $\operatorname{nullity}(T) = 0$. By Theorem 2.3, $\operatorname{nullity}(T) + \operatorname{rank}(T) = \dim(V)$. So we have $\dim(W) = n = \operatorname{rank}(T)$. By Theorem 1.11, $R(T) = W$.

(b) $\Rightarrow$ (c): $T$ onto implies that $R(T) = W$. So $\operatorname{rank}(T) = \dim(W) = \dim(V)$.

(c) $\Rightarrow$ (a): If we assume $\operatorname{rank}(T) = \dim(V)$, then by Theorem 2.3 again we have that $\operatorname{nullity}(T) = 0$. But then we know $N(T) = \{0\}$. By Theorem 2.4, $T$ is one-to-one.

Theorem 2.6. Let $V$ and $W$ be vector spaces over $F$, and suppose that $\beta = \{v_1, v_2, \dots, v_n\}$ is a basis for $V$. For $w_1, w_2, \dots, w_n$ in $W$, there exists exactly one linear transformation $T : V \to W$ such that $T(v_i) = w_i$ for $i = 1, 2, \dots, n$.

Proof. Define a function $T : V \to W$ as follows. Since $\beta$ is a basis, we can express every vector in $V$ uniquely as a linear combination of vectors in $\beta$. For $x \in V$ with $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$, we define $T(x) = a_1 w_1 + a_2 w_2 + \cdots + a_n w_n$; in particular, $T(v_i) = w_i$ for each $i$. We will show that $T$ is linear. Let $c \in F$ and $x, y \in V$. Assume $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$ and $y = b_1 v_1 + b_2 v_2 + \cdots + b_n v_n$. Then
\[
cx + y = \sum_{i=1}^{n} (ca_i + b_i) v_i \tag{1}
\]
and
\[
T(cx + y) = T\Big(\sum_{i=1}^{n} (ca_i + b_i) v_i\Big) = \sum_{i=1}^{n} (ca_i + b_i) w_i = c \sum_{i=1}^{n} a_i w_i + \sum_{i=1}^{n} b_i w_i = cT(x) + T(y).
\]
Thus, $T$ is linear.

To show it is unique, we need to show that if $U : V \to W$ is a linear transformation such that $U(v_i) = w_i$ for all $i \in [n]$, then $U(x) = T(x)$ for all $x \in V$. (The notes write $F$ for this transformation; we use $U$ to avoid a clash with the field $F$.) Assume $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$. Then
\[
T(x) = a_1 w_1 + a_2 w_2 + \cdots + a_n w_n = a_1 U(v_1) + a_2 U(v_2) + \cdots + a_n U(v_n) = U(a_1 v_1 + \cdots + a_n v_n) = U(x).
\]

Cor 1. Let $V$ and $W$ be vector spaces, and suppose that $V$ has a finite basis $\{v_1, v_2, \dots, v_n\}$. If $U, T : V \to W$ are linear and $U(v_i) = T(v_i)$ for $i = 1, 2, \dots, n$, then $U = T$.
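The construction in Theorem 2.6 is directly computable: express $x$ in the basis, then recombine the targets with the same coefficients. A short sketch (an illustration, not from the notes), assuming NumPy; the basis and target vectors are arbitrary choices.

```python
import numpy as np

# Theorem 2.6 for V = R^2, W = R^3: given a basis {v1, v2} and arbitrary
# targets {w1, w2}, the unique linear T with T(vi) = wi sends
# x = a1 v1 + a2 v2 to a1 w1 + a2 w2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns are the basis vectors v1, v2
Wmat = np.array([[1.0, 0.0],
                 [2.0, 1.0],
                 [0.0, 3.0]])     # columns are the chosen targets w1, w2

def T(x: np.ndarray) -> np.ndarray:
    a = np.linalg.solve(B, x)     # coordinates of x in the basis {v1, v2}
    return Wmat @ a               # a1 w1 + a2 w2

assert np.allclose(T(B[:, 0]), Wmat[:, 0])   # T(v1) = w1
assert np.allclose(T(B[:, 1]), Wmat[:, 1])   # T(v2) = w2
```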

Defn 4. (p. 80) Given a finite-dimensional vector space $V$, an ordered basis is a basis where the vectors are listed in a particular order, indicated by subscripts. For $F^n$, we have $\{e_1, e_2, \dots, e_n\}$, where $e_i$ has a $1$ in position $i$ and zeros elsewhere. This is known as the standard ordered basis, and $e_i$ is the $i$th characteristic vector. Let $\beta = \{u_1, u_2, \dots, u_n\}$ be an ordered basis for a finite-dimensional vector space $V$. For $x \in V$, let $a_1, a_2, \dots, a_n$ be the unique scalars such that
\[
x = \sum_{i=1}^{n} a_i u_i.
\]
We define the coordinate vector of $x$ relative to $\beta$, denoted $[x]_\beta$, by
\[
[x]_\beta = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.
\]

Defn 5. (p. 80) Let $V$ and $W$ be finite-dimensional vector spaces with ordered bases $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_m\}$, respectively. Let $T : V \to W$ be linear. Then for each $j$, $1 \le j \le n$, there exist unique scalars $t_{i,j} \in F$, $1 \le i \le m$, such that
\[
T(v_j) = \sum_{i=1}^{m} t_{i,j} w_i, \qquad 1 \le j \le n,
\]
so that
\[
[T(v_j)]_\gamma = \begin{pmatrix} t_{1,j} \\ t_{2,j} \\ \vdots \\ t_{m,j} \end{pmatrix}.
\]
We call the $m \times n$ matrix $A$ defined by $A_{i,j} = t_{i,j}$ the matrix representation of $T$ in the ordered bases $\beta$ and $\gamma$, and write $A = [T]_\beta^\gamma$. If $V = W$ and $\beta = \gamma$, then we write $A = [T]_\beta$.

Fact 2.
\[
[a_1 x_1 + a_2 x_2 + \cdots + a_n x_n]_\beta = a_1 [x_1]_\beta + a_2 [x_2]_\beta + \cdots + a_n [x_n]_\beta.
\]

Proof. Let $\beta = \{v_1, v_2, \dots, v_n\}$, and say $x_j = b_{j,1} v_1 + b_{j,2} v_2 + \cdots + b_{j,n} v_n$.

The LHS:
\[
a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = a_1 (b_{1,1} v_1 + \cdots + b_{1,n} v_n) + a_2 (b_{2,1} v_1 + \cdots + b_{2,n} v_n) + \cdots + a_n (b_{n,1} v_1 + \cdots + b_{n,n} v_n)
\]
\[
= (a_1 b_{1,1} + a_2 b_{2,1} + \cdots + a_n b_{n,1}) v_1 + (a_1 b_{1,2} + a_2 b_{2,2} + \cdots + a_n b_{n,2}) v_2 + \cdots + (a_1 b_{1,n} + a_2 b_{2,n} + \cdots + a_n b_{n,n}) v_n.
\]
The $i$th position of the LHS is the coefficient $c_i$ of $v_i$ in $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$, and so $c_i = a_1 b_{1,i} + a_2 b_{2,i} + \cdots + a_n b_{n,i}$.

The RHS: the $i$th position of the RHS is the sum of the $i$th positions of each $a_j [x_j]_\beta$, which is the coefficient of $v_i$ in $a_j x_j = a_j (b_{j,1} v_1 + b_{j,2} v_2 + \cdots + b_{j,n} v_n)$, and thus $a_j b_{j,i}$. So the $i$th position of the RHS is $a_1 b_{1,i} + a_2 b_{2,i} + \cdots + a_n b_{n,i}$, which agrees with the LHS.
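Defn 5 says the columns of $[T]_\beta^\gamma$ are the coordinate vectors of the images of the basis. A brief sketch (not from the notes): we assume, purely as an example, $T = d/dx$ on polynomials of degree at most 2 with the monomial basis, working entirely in coordinates with NumPy.

```python
import numpy as np

# Defn 5 illustrated: T = d/dx on polynomials of degree <= 2, with
# beta = gamma = the monomial basis {1, x, x^2} (an assumed example).
# Column j of [T] is the coordinate vector of T(v_j) in gamma.

def diff_coords(p: np.ndarray) -> np.ndarray:
    # p = coordinates (c0, c1, c2) of c0 + c1 x + c2 x^2; d/dx gives c1 + 2 c2 x.
    return np.array([p[1], 2.0 * p[2], 0.0])

basis = np.eye(3)   # coordinate vectors of 1, x, x^2
T_matrix = np.column_stack([diff_coords(v) for v in basis])
print(T_matrix)
# [[0. 1. 0.]
#  [0. 0. 2.]
#  [0. 0. 0.]]
```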

Fact 2. (a) Let $V$ and $W$ be finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively, and let $T : V \to W$ be linear. Then
\[
[T]_\beta^\gamma [x]_\beta = [T(x)]_\gamma.
\]

Proof. Let $\beta = \{v_1, v_2, \dots, v_n\}$ and let $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$. Then
\[
[T(x)]_\gamma = [T(a_1 v_1 + a_2 v_2 + \cdots + a_n v_n)]_\gamma = [a_1 T(v_1) + a_2 T(v_2) + \cdots + a_n T(v_n)]_\gamma = a_1 [T(v_1)]_\gamma + a_2 [T(v_2)]_\gamma + \cdots + a_n [T(v_n)]_\gamma
\]
by Fact 2, and this equals
\[
\begin{pmatrix} t_{1,1} & \cdots & t_{1,n} \\ \vdots & & \vdots \\ t_{m,1} & \cdots & t_{m,n} \end{pmatrix} \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix} = [T]_\beta^\gamma [x]_\beta.
\]

Fact 3. Let $V$ be a vector space of dimension $n$ with basis $\beta$. Then $\{x_1, x_2, \dots, x_k\}$ is linearly independent in $V$ if and only if $\{[x_1]_\beta, [x_2]_\beta, \dots, [x_k]_\beta\}$ is linearly independent in $F^n$.

Proof. $\{x_1, x_2, \dots, x_k\}$ is linearly dependent
$\iff$ there exist $a_1, a_2, \dots, a_k$, not all zero, such that $a_1 x_1 + a_2 x_2 + \cdots + a_k x_k = 0$
$\iff$ there exist $a_1, a_2, \dots, a_k$, not all zero, such that $[a_1 x_1 + a_2 x_2 + \cdots + a_k x_k]_\beta = [0]_\beta = 0$ (since the only way to represent $0 \in V$ in any basis is by $0 \in F^n$)
$\iff$ there exist $a_1, a_2, \dots, a_k$, not all zero, such that $a_1 [x_1]_\beta + a_2 [x_2]_\beta + \cdots + a_k [x_k]_\beta = 0$
$\iff$ $\{[x_1]_\beta, [x_2]_\beta, \dots, [x_k]_\beta\}$ is a linearly dependent set.
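Fact 2(a) is easy to verify numerically once coordinates are computed by solving against a basis matrix. A sketch under assumed data (the bases and the map below are illustrative choices, not from the notes):

```python
import numpy as np

# Fact 2(a) in R^2 with non-standard bases: multiplying [x]_beta by
# [T]^gamma_beta gives [T(x)]_gamma.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # columns: basis beta of V = R^2
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])      # columns: basis gamma of W = R^2
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # T = L_M in standard coordinates

coords = lambda P, x: np.linalg.solve(P, x)   # [x] in the basis with columns P
T_rep = np.column_stack([coords(C, M @ B[:, j]) for j in range(2)])  # [T]^gamma_beta

x = np.array([3.0, -1.0])
assert np.allclose(T_rep @ coords(B, x), coords(C, M @ x))  # Fact 2(a)
```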

Defn 6. (p. 99) Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. A function $U : W \to V$ is said to be an inverse of $T$ if $TU = I_W$ and $UT = I_V$. If $T$ has an inverse, then $T$ is said to be invertible.

Fact 4. Let $V$ and $W$ be vector spaces with $T : V \to W$ linear. If $T$ is invertible, then the inverse of $T$ is unique.

Proof. Suppose $U : W \to V$ is such that $TU = I_W$ and $UT = I_V$, and $X : W \to V$ is such that $TX = I_W$ and $XT = I_V$. To show $U = X$, we must show that $U(w) = X(w)$ for all $w \in W$. We know $I_W(w) = w$ and $I_V(X(w)) = X(w)$. So
\[
U(w) = U(I_W(w)) = U(TX(w)) = (UT)(X(w)) = I_V(X(w)) = X(w).
\]

Defn 7. We denote the inverse of $T$ by $T^{-1}$.

Theorem 2.17. Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear and invertible. Then $T^{-1} : W \to V$ is linear.

Proof. By the definition of invertible, we have $T(T^{-1}(w)) = w$ for all $w \in W$. Let $x, y \in W$ and $c \in F$. Then
\[
T^{-1}(cx + y) = T^{-1}\big(cT(T^{-1}(x)) + T(T^{-1}(y))\big) = T^{-1}\big(T(cT^{-1}(x) + T^{-1}(y))\big) = cT^{-1}(x) + T^{-1}(y).
\]

Fact 5. If $T$ is a linear transformation between vector spaces of equal (finite) dimension, then the conditions of being (a) invertible, (b) one-to-one, and (c) onto are all equivalent.

Proof. Say $T : V \to W$ with $\dim V = \dim W = n$. We start with (a) $\Rightarrow$ (c). We have $TT^{-1} = I_W$. To show $T$ is onto, let $w \in W$; then for $x = T^{-1}(w)$ we have $T(x) = T(T^{-1}(w)) = I_W(w) = w$. Therefore, $T$ is onto.

By Theorem 2.5, we have (b) $\iff$ (c). We will show (b) $\Rightarrow$ (a). Let $\{v_1, v_2, \dots, v_n\}$ be a basis of $V$. Then $\{T(v_1), T(v_2), \dots, T(v_n)\}$ spans $R(T)$ by Theorem 2.2. Since $T$ is one-to-one, it is also onto by Theorem 2.5, so $R(T) = W$, and one of the corollaries to the Replacement Theorem implies that $\{x_1 = T(v_1), x_2 = T(v_2), \dots, x_n = T(v_n)\}$ is a basis for $W$. We define $U : W \to V$ by $U(x_i) = v_i$ for all $i$. By Theorem 2.6, this is a well-defined linear transformation.

Let $x \in V$, $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$ for some scalars $a_1, a_2, \dots, a_n$. Then
\[
UT(x) = UT(a_1 v_1 + \cdots + a_n v_n) = U(a_1 T(v_1) + \cdots + a_n T(v_n)) = U(a_1 x_1 + \cdots + a_n x_n) = a_1 U(x_1) + \cdots + a_n U(x_n) = a_1 v_1 + \cdots + a_n v_n = x.
\]
Let $x \in W$, $x = b_1 x_1 + b_2 x_2 + \cdots + b_n x_n$ for some scalars $b_1, b_2, \dots, b_n$. Then
\[
TU(x) = TU(b_1 x_1 + \cdots + b_n x_n) = T(b_1 U(x_1) + \cdots + b_n U(x_n)) = T(b_1 v_1 + \cdots + b_n v_n) = b_1 T(v_1) + \cdots + b_n T(v_n) = b_1 x_1 + \cdots + b_n x_n = x.
\]
Thus $U$ is an inverse of $T$, and $T$ is invertible.

Defn 8. (p. 100) Let $A$ be an $n \times n$ matrix. Then $A$ is invertible if there exists an $n \times n$ matrix $B$ such that $AB = BA = I$.

Fact 6. If $A$ is invertible, then the inverse of $A$ is unique.

Proof. Let $A \in M_n(F)$ with $AB = BA = I_n$, and suppose $AC = CA = I_n$. Then we have $B = BI_n = B(AC) = (BA)C = I_n C = C$.
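Defn 8 and Fact 6 are worth a one-line numerical check (a sketch, assuming NumPy; the matrix $A$ is an arbitrary invertible example):

```python
import numpy as np

# Defn 8 in action: np.linalg.inv returns the (unique, by Fact 6) matrix B
# with AB = BA = I.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)
I = np.eye(2)
assert np.allclose(A @ B, I) and np.allclose(B @ A, I)
```

If $A$ were singular, `np.linalg.inv` would raise `LinAlgError` rather than return a matrix.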

Defn 9. We denote the inverse of $A$ by $A^{-1}$.

Lemma 1. (p. 101) Let $T$ be an invertible linear transformation from $V$ to $W$. Then $V$ is finite-dimensional if and only if $W$ is finite-dimensional. In this case, $\dim(V) = \dim(W)$.

Proof. ($\Rightarrow$) $T : V \to W$ is invertible, so $TT^{-1} = I_W$ and $T^{-1}T = I_V$. Assume $\dim V = n$ and $\{v_1, v_2, \dots, v_n\}$ is a basis of $V$. We know $T$ is onto since, if $y \in W$, then for $x = T^{-1}(y)$ we have $T(x) = y$. We know $R(T) = W$ since $T$ is onto, and $\{T(v_1), T(v_2), \dots, T(v_n)\}$ spans $W$ by Theorem 2.2. So $W$ is finite-dimensional and $\dim W \le n = \dim V$.

($\Leftarrow$) Assume $\dim W = m$. $T^{-1} : W \to V$ is invertible. Applying the same argument as in the last case, we obtain that $V$ is finite-dimensional and $\dim V \le \dim W$.

We see that when either of $V$ or $W$ is finite-dimensional then so is the other, and $\dim W \le \dim V$ together with $\dim V \le \dim W$ imply that $\dim V = \dim W$.

Theorem 2.18. Let $V$ and $W$ be finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively. Let $T : V \to W$ be linear. Then $T$ is invertible if and only if $[T]_\beta^\gamma$ is invertible. Furthermore, $[T^{-1}]_\gamma^\beta = ([T]_\beta^\gamma)^{-1}$.

Proof. Assume $T$ is invertible. Let $\dim V = n$ and $\dim W = m$. By the Lemma, $n = m$. Let $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_n\}$. We have $T^{-1}$ such that $TT^{-1} = I_W$ and $T^{-1}T = I_V$. We will show that $[T^{-1}]_\gamma^\beta$ is the inverse of $[T]_\beta^\gamma$, i.e., the following two matrix equations:
\[
[T]_\beta^\gamma [T^{-1}]_\gamma^\beta = I_n \tag{6}
\]
\[
[T^{-1}]_\gamma^\beta [T]_\beta^\gamma = I_n \tag{7}
\]
To prove (6), we take the approach of showing that the $i$th column of $[T]_\beta^\gamma [T^{-1}]_\gamma^\beta$ is $e_i$, the $i$th characteristic vector. Notice that for any matrix $A$, $Ae_i$ is the $i$th column of $A$. Consider $[T]_\beta^\gamma [T^{-1}]_\gamma^\beta e_i$. By repeated use of Fact 2(a), we have:
\[
[T]_\beta^\gamma [T^{-1}]_\gamma^\beta e_i = [T]_\beta^\gamma [T^{-1}]_\gamma^\beta [w_i]_\gamma = [T]_\beta^\gamma [T^{-1}(w_i)]_\beta = [TT^{-1}(w_i)]_\gamma = [w_i]_\gamma = e_i.
\]
The proof of (7) is very similar: we show that the $i$th column of $[T^{-1}]_\gamma^\beta [T]_\beta^\gamma$ is $e_i$.

Consider $[T^{-1}]_\gamma^\beta [T]_\beta^\gamma e_i$. By repeated use of Fact 2(a), we have:
\[
[T^{-1}]_\gamma^\beta [T]_\beta^\gamma e_i = [T^{-1}]_\gamma^\beta [T]_\beta^\gamma [v_i]_\beta = [T^{-1}]_\gamma^\beta [T(v_i)]_\gamma = [T^{-1}(T(v_i))]_\beta = [v_i]_\beta = e_i.
\]
Thus we have ($\Rightarrow$) and the "furthermore" part of the statement.

Now we assume $A = [T]_\beta^\gamma$ is invertible; call its inverse $A^{-1}$. We know that $A$ is square, thus $n = m$. Let $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_n\}$. Notice that the $i$th column of $A$ is $A_i = (t_{1,i}, t_{2,i}, \dots, t_{n,i})^T$, where
\[
T(v_i) = t_{1,i} w_1 + t_{2,i} w_2 + \cdots + t_{n,i} w_n.
\]
To show $T$ is invertible, we will define a function $U$ and prove that it is the inverse of $T$. Let the $i$th column of $A^{-1}$ be $C_i = (c_{1,i}, c_{2,i}, \dots, c_{n,i})^T$. We define $U : W \to V$ by $U(w_i) = c_{1,i} v_1 + c_{2,i} v_2 + \cdots + c_{n,i} v_n$ for all $i \in [n]$. Since we have defined $U$ on the basis vectors $\gamma$, we know from Theorem 2.6 that $U$ is a well-defined linear transformation. We wish to show that $TU$ is the identity transformation on $W$ and $UT$ is the identity transformation on $V$. So, we show (1) $TU(x) = x$ for all $x \in W$ and (2) $UT(x) = x$ for all $x \in V$.

Starting with (1): first let us see why $T(U(w_i)) = w_i$ for all $i \in [n]$. We have
\[
T(U(w_i)) = T(c_{1,i} v_1 + c_{2,i} v_2 + \cdots + c_{n,i} v_n) = c_{1,i}(t_{1,1} w_1 + t_{2,1} w_2 + \cdots + t_{n,1} w_n) + c_{2,i}(t_{1,2} w_1 + t_{2,2} w_2 + \cdots + t_{n,2} w_n) + \cdots + c_{n,i}(t_{1,n} w_1 + t_{2,n} w_2 + \cdots + t_{n,n} w_n).
\]
Gathering coefficients of $w_1, w_2, \dots, w_n$, we have:
\[
T(U(w_i)) = (c_{1,i} t_{1,1} + c_{2,i} t_{1,2} + \cdots + c_{n,i} t_{1,n}) w_1 + (c_{1,i} t_{2,1} + c_{2,i} t_{2,2} + \cdots + c_{n,i} t_{2,n}) w_2 + \cdots + (c_{1,i} t_{n,1} + c_{2,i} t_{n,2} + \cdots + c_{n,i} t_{n,n}) w_n.
\]
We see that the coefficient of $w_j$ in the above expression is row $j$ of $A$ dotted with column $i$ of $A^{-1}$, which is the $(j, i)$ entry of $AA^{-1} = I_n$: it equals $1$ if $j = i$ and $0$ otherwise. Thus we have that $T(U(w_i)) = w_i$.

Now we see that for $x \in W$, $x = b_1 w_1 + b_2 w_2 + \cdots + b_n w_n$ for some scalars $b_1, b_2, \dots, b_n$, and
\[
TU(x) = TU(b_1 w_1 + \cdots + b_n w_n) = T(b_1 U(w_1) + \cdots + b_n U(w_n)) = b_1 T(U(w_1)) + \cdots + b_n T(U(w_n)) = b_1 w_1 + \cdots + b_n w_n = x.
\]
Similarly, $UT(x) = x$ for all $x \in V$.
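The "furthermore" clause of Theorem 2.18 can be checked numerically, in the spirit of the earlier coordinate sketch (the bases and map below are illustrative assumptions, not from the notes):

```python
import numpy as np

# Theorem 2.18, numerically: [T^{-1}]^beta_gamma equals ([T]^gamma_beta)^{-1}.
B = np.array([[1.0, 1.0], [0.0, 1.0]])   # columns: beta
C = np.array([[2.0, 0.0], [1.0, 1.0]])   # columns: gamma
M = np.array([[0.0, 1.0], [1.0, 0.0]])   # invertible T = L_M

coords = lambda P, x: np.linalg.solve(P, x)
rep = lambda f, Pin, Pout: np.column_stack(
    [coords(Pout, f(Pin[:, j])) for j in range(Pin.shape[1])])

T_rep = rep(lambda x: M @ x, B, C)                      # [T]^gamma_beta
Tinv_rep = rep(lambda y: np.linalg.solve(M, y), C, B)   # [T^{-1}]^beta_gamma
assert np.allclose(Tinv_rep, np.linalg.inv(T_rep))      # ([T]^gamma_beta)^{-1}
```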

Cor 1. Let $V$ be a finite-dimensional vector space with an ordered basis $\beta$, and let $T : V \to V$ be linear. Then $T$ is invertible if and only if $[T]_\beta$ is invertible. Furthermore, $[T^{-1}]_\beta = ([T]_\beta)^{-1}$.

Proof. Clear.

Defn 10. Let $A \in M_{m \times n}(F)$. Then the function $L_A : F^n \to F^m$ defined by $L_A(x) = Ax$ for $x \in F^n$ is called left-multiplication by $A$.

Fact 6. (a) $L_A$ is a linear transformation, and for $\beta = \{e_1, e_2, \dots, e_n\}$, $[L_A]_\beta = A$.

Proof. We showed this in class.

Cor 2. Let $A$ be an $n \times n$ matrix. Then $A$ is invertible if and only if $L_A$ is invertible. Furthermore, $(L_A)^{-1} = L_{A^{-1}}$.

Proof. Let $V = F^n$. Then $L_A : V \to V$. Applying Corollary 1 with $\beta = \{e_1, e_2, \dots, e_n\}$ and $[L_A]_\beta = A$, we have: $L_A$ is invertible if and only if $A$ is invertible. The "furthermore" part of this corollary says that $(L_A)^{-1}$ is left-multiplication by $A^{-1}$. We will show $L_A L_{A^{-1}} = L_{A^{-1}} L_A = I_V$, i.e., $L_A L_{A^{-1}}(x) = x$ and $L_{A^{-1}} L_A(x) = x$ for all $x \in V$. We have $L_A L_{A^{-1}}(x) = AA^{-1}x = x$ and $L_{A^{-1}} L_A(x) = A^{-1}Ax = x$.

Defn 11. Let $V$ and $W$ be vector spaces. We say that $V$ is isomorphic to $W$ if there exists a linear transformation $T : V \to W$ that is invertible. Such a linear transformation is called an isomorphism from $V$ onto $W$.

Theorem 2.19. Let $V$ and $W$ be finite-dimensional vector spaces (over the same field). Then $V$ is isomorphic to $W$ if and only if $\dim(V) = \dim(W)$.

Proof. ($\Rightarrow$) Let $T : V \to W$ be an isomorphism. Then $T$ is invertible, and $\dim V = \dim W$ by the Lemma.

($\Leftarrow$) Assume $\dim V = \dim W$. Let $\beta = \{v_1, \dots, v_k\}$ and $\gamma = \{w_1, w_2, \dots, w_k\}$ be bases for $V$ and $W$, respectively. Define $T : V \to W$ by $T(v_i) = w_i$ for all $i$. Then $[T]_\beta^\gamma = I$, which is invertible, and Theorem 2.18 implies that $T$ is invertible and thus an isomorphism.

Fact 7. Let $P \in M_n(F)$ be invertible. If $W$ is a subspace of $F^n$, then $L_P(W)$ is a subspace of $F^n$ and $\dim(L_P(W)) = \dim(W)$.

Proof. Let $x, y \in L_P(W)$ and $a \in F$. Then there exist $x', y' \in W$ such that $Px' = x$ and $Py' = y$. So $P(ax' + y') = aPx' + Py' = ax + y$, and since $ax' + y' \in W$, we get $ax + y \in L_P(W)$. Also, we know $0 \in W$ and $L_P(0) = 0$ since $L_P$ is linear, so we have $0 \in L_P(W)$. Therefore, $L_P(W)$ is a subspace.

Let $\{x_1, x_2, \dots, x_k\}$ be a basis of $W$. Then
\[
a_1 P(x_1) + a_2 P(x_2) + \cdots + a_k P(x_k) = 0 \implies P(a_1 x_1 + a_2 x_2 + \cdots + a_k x_k) = 0 \implies a_1 x_1 + a_2 x_2 + \cdots + a_k x_k = 0 \implies a_1 = a_2 = \cdots = a_k = 0,
\]
where the second implication uses the invertibility of $P$. So $\{P(x_1), P(x_2), \dots, P(x_k)\}$ is linearly independent.

To show that $\{P(x_1), P(x_2), \dots, P(x_k)\}$ spans $L_P(W)$, we let $z \in L_P(W)$. Then there is some $w \in W$ such that $L_P(w) = z$. If $w = a_1 x_1 + a_2 x_2 + \cdots + a_k x_k$, we have
\[
z = Pw = P(a_1 x_1 + a_2 x_2 + \cdots + a_k x_k) = a_1 P(x_1) + a_2 P(x_2) + \cdots + a_k P(x_k).
\]
With that we have that $\{P(x_1), P(x_2), \dots, P(x_k)\}$ is a basis of $L_P(W)$ and $\dim(W) = \dim(L_P(W))$.
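Fact 7 says an invertible matrix cannot change the dimension of a subspace. A quick numerical illustration (not from the notes), assuming NumPy; the random $P$ and the spanning set for $W$ are arbitrary choices:

```python
import numpy as np

# Fact 7, numerically: for invertible P, dim(L_P(W)) = dim(W).
# W is spanned by the columns of Wbasis (an arbitrary 2-dimensional choice).
rng = np.random.default_rng(2)
P = rng.normal(size=(4, 4))              # invertible with probability 1
Wbasis = rng.normal(size=(4, 2))         # spans a 2-dimensional subspace W
assert np.linalg.matrix_rank(P) == 4     # sanity check: P is invertible
assert np.linalg.matrix_rank(P @ Wbasis) == np.linalg.matrix_rank(Wbasis)
```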

So far, we have not defined the rank of a matrix. We have:

Defn 12. Let $A \in M_{m \times n}(F)$. Then $\operatorname{rank}(A) = \operatorname{rank}(L_A)$. Likewise, the range of a matrix, $R(A)$, is the same as $R(L_A)$.

Fact 8. Let $S \in M_n(F)$. If $S$ is invertible, then $R(S) = F^n$.

Proof. $L_S : F^n \to F^n$. We know $S$ invertible implies $L_S$ is invertible. Since $L_S$ is onto, $R(L_S) = F^n$. By the definition of $R(S)$, we have $R(S) = F^n$.

Fact 9. Let $S, T \in M_n(F)$. If $S$ and $T$ are invertible, then $ST$ is invertible and its inverse is $T^{-1}S^{-1}$.

Proof. This is the oldest proof in the book.
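Oldest proof or not, Fact 9 is easy to confirm numerically (a sketch, assuming NumPy; $S$ and $T$ are arbitrary random examples):

```python
import numpy as np

# Fact 9, checked numerically: (ST)^{-1} = T^{-1} S^{-1}.
rng = np.random.default_rng(3)
S, T = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
assert np.allclose(np.linalg.inv(S @ T),
                   np.linalg.inv(T) @ np.linalg.inv(S))
```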

Theorem 2.20. Let $V$ and $W$ be finite-dimensional vector spaces over $F$ of dimensions $n$ and $m$, respectively, and let $\beta$ and $\gamma$ be ordered bases for $V$ and $W$, respectively. Then the function $\Phi : \mathcal{L}(V, W) \to M_{m \times n}(F)$ defined by $\Phi(T) = [T]_\beta^\gamma$ for $T \in \mathcal{L}(V, W)$ is an isomorphism.

Proof. To show $\Phi$ is an isomorphism, we must show that it is invertible (that $\Phi$ is linear is a routine entry-by-entry check). Let $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_m\}$. We define $U : M_{m \times n}(F) \to \mathcal{L}(V, W)$ as follows. Let $A \in M_{m \times n}(F)$ have $i$th column
\[
\begin{pmatrix} a_{1,i} \\ a_{2,i} \\ \vdots \\ a_{m,i} \end{pmatrix}.
\]
So $U$ maps matrices to transformations: $U(A)$ is a transformation in $\mathcal{L}(V, W)$. We describe the action of the linear transformation $U(A)$ on each $v_i$, and by Theorem 2.6 this uniquely defines a linear transformation in $\mathcal{L}(V, W)$: for all $i \in [n]$,
\[
U(A)(v_i) = a_{1,i} w_1 + a_{2,i} w_2 + \cdots + a_{m,i} w_m.
\]
Then $A = [U(A)]_\beta^\gamma$. Thus, $\Phi U(A) = A$. To verify that $U(\Phi(T)) = T$, we see what the action is on each $v_i$:
\[
U(\Phi(T))(v_i) = U([T]_\beta^\gamma)(v_i) = t_{1,i} w_1 + t_{2,i} w_2 + \cdots + t_{m,i} w_m = T(v_i).
\]

Cor 1. Let $V$ and $W$ be finite-dimensional vector spaces over $F$ of dimensions $n$ and $m$, respectively. Then $\mathcal{L}(V, W)$ is finite-dimensional of dimension $mn$.

Proof. This follows from Theorem 2.19, since $\Phi$ is an isomorphism and $\dim M_{m \times n}(F) = mn$.

Theorem 2.22. Let $\beta$ and $\gamma$ be two ordered bases for a finite-dimensional vector space $V$, and let $Q = [I_V]_\beta^\gamma$. Then
(a) $Q$ is invertible.
(b) For any $v \in V$, $[v]_\gamma = Q[v]_\beta$.

Proof. Let $\beta = \{v_1, \dots, v_n\}$ and $\gamma = \{w_1, \dots, w_n\}$. We claim the inverse of $Q$ is $[I_V]_\gamma^\beta$. The $i$th column of $[I_V]_\beta^\gamma [I_V]_\gamma^\beta$ is $[I_V]_\beta^\gamma [I_V]_\gamma^\beta e_i$. We have
\[
[I_V]_\beta^\gamma [I_V]_\gamma^\beta e_i = [I_V]_\beta^\gamma [I_V]_\gamma^\beta [w_i]_\gamma = [I_V]_\beta^\gamma [I_V(w_i)]_\beta = [I_V(I_V(w_i))]_\gamma = [w_i]_\gamma = e_i
\]
and
\[
[I_V]_\gamma^\beta [I_V]_\beta^\gamma e_i = [I_V]_\gamma^\beta [I_V]_\beta^\gamma [v_i]_\beta = [I_V]_\gamma^\beta [I_V(v_i)]_\gamma = [I_V(I_V(v_i))]_\beta = [v_i]_\beta = e_i.
\]
This proves (a). For (b), we know
\[
[v]_\gamma = [I_V(v)]_\gamma = [I_V]_\beta^\gamma [v]_\beta = Q[v]_\beta.
\]

Defn 13. (p. 112) The matrix $Q = [I_V]_\beta^\gamma$ defined in Theorem 2.22 is called a change of coordinate matrix. Because of part (b) of the theorem, we say that $Q$ changes $\beta$-coordinates into $\gamma$-coordinates. Notice that $Q^{-1} = [I_V]_\gamma^\beta$.
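A concrete change of coordinates in $\mathbb{R}^2$ (an illustrative sketch, assuming NumPy; the two bases are arbitrary choices, not from the notes):

```python
import numpy as np

# Theorem 2.22 / Defn 13: with beta and gamma given by the columns of B and C,
# Q = [I]^gamma_beta changes beta-coordinates into gamma-coordinates.
# Column j of Q is [v_j]_gamma, i.e. the solution q of C q = B[:, j].
B = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[2.0, 0.0], [1.0, 1.0]])
Q = np.linalg.solve(C, B)                 # solves C Q = B column by column

v = np.array([5.0, 2.0])
v_beta = np.linalg.solve(B, v)            # [v]_beta
v_gamma = np.linalg.solve(C, v)           # [v]_gamma
assert np.allclose(v_gamma, Q @ v_beta)   # [v]_gamma = Q [v]_beta
```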

Defn 14. (p. 112) A linear transformation $T : V \to V$ is called a linear operator.

Theorem 2.23. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $\beta$ and $\gamma$ be ordered bases for $V$. Suppose that $Q$ is the change of coordinate matrix that changes $\beta$-coordinates into $\gamma$-coordinates. Then
\[
[T]_\beta = Q^{-1}[T]_\gamma Q.
\]

Proof. The statement is short for $[T]_\beta^\beta = Q^{-1}[T]_\gamma^\gamma Q$. Let $\beta = \{v_1, v_2, \dots, v_n\}$. As usual, we look at the $i$th column of each side. We have
\[
Q^{-1}[T]_\gamma^\gamma Q e_i = [I_V]_\gamma^\beta [T]_\gamma^\gamma [I_V]_\beta^\gamma [v_i]_\beta = [I_V]_\gamma^\beta [T]_\gamma^\gamma [I_V(v_i)]_\gamma = [I_V]_\gamma^\beta [T]_\gamma^\gamma [v_i]_\gamma = [I_V]_\gamma^\beta [T(v_i)]_\gamma = [I_V(T(v_i))]_\beta = [T(v_i)]_\beta = [T]_\beta^\beta [v_i]_\beta = [T]_\beta^\beta e_i,
\]
the $i$th column of $[T]_\beta^\beta$.

Cor 1. Let $A \in M_{n \times n}(F)$, and let $\gamma$ be an ordered basis for $F^n$. Then $[L_A]_\gamma = Q^{-1}AQ$, where $Q$ is the $n \times n$ matrix whose $j$th column is the $j$th vector of $\gamma$.

Proof. Let $\beta$ be the standard ordered basis for $F^n$. Then $A = [L_A]_\beta$. So by the theorem, $[L_A]_\gamma = Q^{-1}[L_A]_\beta Q = Q^{-1}AQ$.

Defn 15. (p. 115) Let $A$ and $B$ be in $M_n(F)$. We say that $B$ is similar to $A$ if there exists an invertible matrix $Q$ such that $B = Q^{-1}AQ$.

Defn 16. (p. 119) For a vector space $V$ over $F$, we define the dual space of $V$ to be the vector space $\mathcal{L}(V, F)$, denoted by $V^*$. Let $\beta = \{x_1, x_2, \dots, x_n\}$ be an ordered basis for $V$. For each $i \in [n]$, we define $f_i(x) = a_i$, where $a_i$ is the $i$th coordinate of $[x]_\beta$. Then $f_i$ is in $V^*$ and is called the $i$th coordinate function with respect to the basis $\beta$.

Theorem 2.24. Suppose that $V$ is a finite-dimensional vector space with the ordered basis $\beta = \{x_1, x_2, \dots, x_n\}$. Let $f_i$ ($1 \le i \le n$) be the $i$th coordinate function with respect to $\beta$, and let $\beta^* = \{f_1, f_2, \dots, f_n\}$. Then $\beta^*$ is an ordered basis for $V^*$, and, for any $f \in V^*$, we have
\[
f = \sum_{i=1}^{n} f(x_i) f_i.
\]

Proof. Let $f \in V^*$. Since $\dim V^* = \dim V = n$ and $\beta^*$ has $n$ elements, we need only show that $f = \sum_{i=1}^{n} f(x_i) f_i$, from which it follows that $\beta^*$ generates $V^*$, and hence is a basis by a corollary to the Replacement Theorem. Let $g = \sum_{i=1}^{n} f(x_i) f_i$. For $1 \le j \le n$, we have
\[
g(x_j) = \Big(\sum_{i=1}^{n} f(x_i) f_i\Big)(x_j) = \sum_{i=1}^{n} f(x_i) f_i(x_j) = \sum_{i=1}^{n} f(x_i)\, \delta_{i,j} = f(x_j).
\]

Therefore, $f = g$.
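For $V = F^n$ the coordinate functions are computable: if the basis vectors are the columns of a matrix $B$, then $f_i$ is given by the $i$th row of $B^{-1}$, since $B^{-1}B = I$ gives $f_i(x_j) = \delta_{i,j}$. A closing sketch of Theorem 2.24 (an illustration, not from the notes), assuming NumPy; the basis and the functional $f$ are arbitrary choices:

```python
import numpy as np

# Theorem 2.24 in R^3: the coordinate functions f_i of the basis beta
# (columns of B) are the rows of B^{-1}, and any functional f (represented
# as a row vector) satisfies f = sum_i f(x_i) f_i.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])       # columns: basis x_1, x_2, x_3
dual = np.linalg.inv(B)               # row i is the coordinate function f_i

f = np.array([[2.0, -1.0, 5.0]])      # an arbitrary functional on R^3
coeffs = f @ B                        # (f(x_1), f(x_2), f(x_3))
assert np.allclose(f, coeffs @ dual)  # f = sum_i f(x_i) f_i
```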