Abstract Vector Spaces, Linear Transformations, and Their Coordinate Representations


Contents

1 Vector Spaces
  1.1 Definitions
    1.1.1 Basics
    1.1.2 Linear Combinations, Spans, Bases, and Dimensions
    1.1.3 Sums and Products of Vector Spaces and Subspaces
  1.2 Basic Vector Space Theory
2 Linear Transformations
  2.1 Definitions
    2.1.1 Linear Transformations, or Vector Space Homomorphisms
  2.2 Basic Properties of Linear Transformations
  2.3 Monomorphisms and Isomorphisms
  2.4 Rank-Nullity Theorem
3 Matrix Representations of Linear Transformations
  3.1 Definitions
  3.2 Matrix Representations of Linear Transformations
4 Change of Coordinates
  4.1 Definitions
  4.2 Change of Coordinate Maps and Matrices

1 Vector Spaces

1.1 Definitions

1.1.1 Basics

A vector space (linear space) V over a field F is a set V on which the operations of addition, + : V × V → V, and left F-action or scalar multiplication, · : F × V → V, satisfy, for all x, y, z ∈ V and a, b ∈ F (with 1 the multiplicative identity of F):

VS1  x + y = y + x
VS2  (x + y) + z = x + (y + z)
VS3  there exists 0 ∈ V such that x + 0 = x
VS4  for each x ∈ V there exists y ∈ V such that x + y = 0
VS5  1x = x
VS6  (ab)x = a(bx)
VS7  a(x + y) = ax + ay
VS8  (a + b)x = ax + bx

Axioms VS1-VS4 say that (V, +) is an abelian group. If V is a vector space over F, then a subset W ⊆ V is called a subspace of V if W is itself a vector space over the same field F under the restrictions of the addition and scalar multiplication of V, + : W × W → W and · : F × W → W.

1.1.2 Linear Combinations, Spans, Bases, and Dimensions

If S ⊆ V, v₁, ..., vₙ ∈ S and a₁, ..., aₙ ∈ F, then a linear combination of v₁, ..., vₙ is the finite sum

    a₁v₁ + ⋯ + aₙvₙ    (1.1)

which is a vector in V. The aᵢ ∈ F are called the coefficients of the linear combination. If a₁ = ⋯ = aₙ = 0, then the linear combination is said to be trivial. In particular, considering the special case of 0 ∈ V, the zero vector, we note that 0 may always be represented as a linear combination of any vectors u₁, ..., uₙ ∈ V,

    0u₁ + ⋯ + 0uₙ = 0

where the coefficients are 0 ∈ F and the sum is 0 ∈ V. This representation is called the trivial representation of 0 by u₁, ..., uₙ. If, on the other hand, there are vectors u₁, ..., uₙ ∈ V and scalars a₁, ..., aₙ ∈ F, with at least one aᵢ ≠ 0, such that a₁u₁ + ⋯ + aₙuₙ = 0, then that linear combination is called a nontrivial representation of 0.

Using linear combinations we can generate subspaces, as follows. If S is a subset of V, then the span of S is given by

    span(S) := {v ∈ V | v is a linear combination of vectors in S}    (1.2)

The span of ∅ is by definition

    span(∅) := {0}    (1.3)

If span(S) = V, then S is said to generate or span V, and to be a generating set or a spanning set for V.

A nonempty subset S of a vector space V is said to be linearly independent if, taking any finite number of distinct vectors u₁, ..., uₙ ∈ S, we have for all a₁, ..., aₙ ∈ F that

    a₁u₁ + a₂u₂ + ⋯ + aₙuₙ = 0  ⟹  a₁ = ⋯ = aₙ = 0

That is, S is linearly independent if the only representation of 0 ∈ V by vectors in S is the trivial one. In this case, the vectors u₁, ..., uₙ themselves are also said to be linearly independent. Otherwise, if there is at least one nontrivial representation of 0 by vectors in S, then S is said to be linearly dependent.

Given S ⊆ V, a nonzero vector v ∈ V is said to be an essentially unique linear combination of the vectors in S if, up to order of terms, there is one and only one way to express v as a linear combination of vectors in S. That is, if there are nonzero a₁, ..., aₙ, b₁, ..., bₘ ∈ F, distinct u₁, ..., uₙ ∈ S and distinct v₁, ..., vₘ ∈ S such that

    v = a₁u₁ + ⋯ + aₙuₙ = b₁v₁ + ⋯ + bₘvₘ

then, re-indexing the bᵢ if necessary, m = n, and aᵢ = bᵢ and uᵢ = vᵢ for i = 1, ..., n.

A subset β ⊆ V is called a (Hamel) basis if it is linearly independent and span(β) = V. We also say that the vectors of β form a basis for V. Equivalently, as explained in Theorem 1.13 below, β is a basis if every nonzero vector v ∈ V is an essentially unique linear combination of vectors in β.

In the context of inner product spaces of infinite dimension, there is a difference between a vector space basis (the Hamel basis of V) and an orthonormal basis for V (the Hilbert basis for V): though the two always exist, they are not equal unless dim(V) < ∞.
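These definitions are field-general; as a quick numerical illustration (ours, not part of the original notes), over F = ℝ a finite set of vectors is linearly independent exactly when the matrix having those vectors as columns has full column rank. A minimal sketch with numpy, where the helper name is ours:

```python
# Sketch: over F = R, the vectors v1, ..., vn in R^m admit only the trivial
# representation of 0 exactly when the m x n matrix [v1 ... vn] has rank n.
import numpy as np

def is_linearly_independent(vectors):
    """True if the given vectors in R^m are linearly independent."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(is_linearly_independent([e1, e2]))            # True
print(is_linearly_independent([e1, e2, e1 + e2]))   # False: e1 + e2 - (e1 + e2) = 0
```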

The dimension of a vector space V is the cardinality of any basis for V, and is denoted dim(V). V is finite-dimensional if it is the zero vector space {0} or if it has a basis of finite cardinality. Otherwise, if its basis has infinite cardinality, it is called infinite-dimensional. In the former case, dim(V) = |β| = n < ∞ for some n ∈ ℕ, and V is said to be n-dimensional; in the latter case, dim(V) = |β| = κ, where κ is a cardinal number, and V is said to be κ-dimensional.

If V is finite-dimensional, say of dimension n, then an ordered basis for V is a finite sequence or n-tuple (v₁, ..., vₙ) of linearly independent vectors v₁, ..., vₙ ∈ V such that {v₁, ..., vₙ} is a basis for V. If V is infinite-dimensional but with a countable basis, then an ordered basis is a sequence (vₙ)_{n∈ℕ} such that the set {vₙ | n ∈ ℕ} is a basis for V.

The term standard ordered basis is applied only to the ordered bases for Fⁿ, (F^ω)₀, (F^B)₀ and F[x] and its subspaces Fₙ[x]. For Fⁿ, the standard ordered basis is given by

    (e₁, e₂, ..., eₙ)

where e₁ = (1, 0, ..., 0), e₂ = (0, 1, 0, ..., 0), ..., eₙ = (0, ..., 0, 1). Since F is any field, 1 represents the multiplicative identity and 0 the additive identity. For F = ℝ, the standard ordered basis is just as above with the usual 0 and 1. If F = ℂ, viewed as pairs of reals, then 1 = (1, 0) and 0 = (0, 0), so the ith standard basis vector for ℂⁿ looks like this:

    eᵢ = ((0, 0), ..., (1, 0), ..., (0, 0))    with (1, 0) in the ith place

For (F^ω)₀ the standard ordered basis is given by

    (eₙ)_{n∈ℕ},  where eₙ(k) = δₙₖ

and generally for (F^B)₀ the standard ordered basis is given by

    (e_b)_{b∈B},  where e_b(x) = 1 if x = b and e_b(x) = 0 if x ≠ b

When the term standard ordered basis is applied to the vector space Fₙ[x], the subspace of F[x] of polynomials of degree less than or equal to n, it refers to the basis

    (1, x, x², ..., xⁿ)

For F[x], the standard ordered basis is given by

    (1, x, x², ...)

1.1.3 Sums and Products of Vector Spaces and Subspaces

If S₁, ..., Sₖ ∈ P(V) are subsets or subspaces of a vector space V, then their (internal) sum is given by

    S₁ + ⋯ + Sₖ = ∑ᵢ₌₁ᵏ Sᵢ := {v₁ + ⋯ + vₖ | vᵢ ∈ Sᵢ for 1 ≤ i ≤ k}    (1.4)

More generally, if {Sᵢ | i ∈ I} ⊆ P(V) is any collection of subsets or subspaces of V, then their sum is defined to be

    ∑_{i∈I} Sᵢ := {v₁ + ⋯ + vₙ | vⱼ ∈ ⋃_{i∈I} Sᵢ and n ∈ ℕ}    (1.5)

If S is a subspace of V and v ∈ V, then the sum {v} + S = {v + s | s ∈ S} is called a coset of S and is usually denoted

    v + S    (1.6)
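As a small concrete check of the Fⁿ case (our illustration, with F = ℝ and numpy assumed): the standard ordered basis is the list of columns of the identity matrix, and the coefficients expressing any vector v in that basis are just the entries of v.

```python
# Sketch: the standard ordered basis (e1, ..., en) of R^n is the columns of
# the n x n identity matrix, and every v in R^n is the linear combination
# v = v[0]*e1 + ... + v[n-1]*en.
import numpy as np

n = 4
E = np.eye(n)                       # column i is e_{i+1}
v = np.array([3.0, -1.0, 0.0, 2.0])
recombined = sum(v[i] * E[:, i] for i in range(n))
assert np.allclose(recombined, v)   # the coefficients are the entries of v
```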

We are of course interested in the (internal) sum of subspaces of a vector space. In this case, we may have the special condition

    V = ∑ᵢ₌₁ᵏ Sᵢ   and   Sⱼ ∩ ∑_{i≠j} Sᵢ = {0} for j = 1, ..., k    (1.7)

When this happens, V is said to be the (internal) direct sum of S₁, ..., Sₖ, and this is symbolically denoted

    V = S₁ ⊕ S₂ ⊕ ⋯ ⊕ Sₖ = ⨁ᵢ₌₁ᵏ Sᵢ = {s₁ + ⋯ + sₖ | sᵢ ∈ Sᵢ}    (1.8)

In the infinite-dimensional case we proceed similarly. If F = {Sᵢ | i ∈ I} is a family of subspaces of V such that

    V = ∑_{i∈I} Sᵢ   and   Sⱼ ∩ ∑_{i≠j} Sᵢ = {0} for all j ∈ I    (1.9)

then V is the direct sum of the Sᵢ ∈ F, denoted

    V = ⨁F = ⨁_{i∈I} Sᵢ = {s_{j₁} + ⋯ + s_{jₙ} | the jᵢ ∈ I distinct, s_{jᵢ} ∈ S_{jᵢ} and s_{jᵢ} ≠ 0}    (1.10)

We can also define the (external) sum of distinct vector spaces which do not lie inside a larger vector space: if V₁, ..., Vₙ are vector spaces over the same field F, then their external direct sum is the cartesian product V₁ × ⋯ × Vₙ, with addition and scalar multiplication defined componentwise. And we denote the sum, confusingly, by the same notation:

    V₁ ⊕ V₂ ⊕ ⋯ ⊕ Vₙ := {(v₁, v₂, ..., vₙ) | vᵢ ∈ Vᵢ, i = 1, ..., n}    (1.11)

In the infinite-dimensional case, we have two types of external direct sum: one where there is no restriction on the sequences, the other where we allow only sequences with finite support. To distinguish these two cases we use the terms direct product and external direct sum, respectively. Let F = {Vᵢ | i ∈ I} be a family of vector spaces over the same field F. Then their direct product is given by

    ∏_{i∈I} Vᵢ := {v = (vᵢ)_{i∈I} | vᵢ ∈ Vᵢ}    (1.12)
               = {f : I → ⋃Vᵢ | f(i) ∈ Vᵢ}    (1.13)

The infinite-dimensional external direct sum is then defined:

    ⨁_{i∈I} Vᵢ := {v = (vᵢ)_{i∈I} | vᵢ ∈ Vᵢ and v has finite support}    (1.14)
               = {f : I → ⋃Vᵢ | f(i) ∈ Vᵢ and f has finite support}    (1.15)

When all the Vᵢ are equal to V, there is some special notation:

    Vⁿ := V ⊕ ⋯ ⊕ V  (n summands, finite-dimensional case)    (1.16)
    V^I := ∏_{i∈I} V    (1.17)
    (V^I)₀ := ⨁_{i∈I} V    (1.18)
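A minimal sketch (ours, not the notes') of the external direct sum of Eq. (1.11) over ℝ: elements are tuples and the operations act componentwise; the helper names ds_add and ds_scale are illustrative.

```python
# Sketch: elements of V1 ⊕ ... ⊕ Vn are tuples, with addition and scalar
# multiplication defined componentwise (Eq. 1.11); here each Vi is an R^{d_i}.
import numpy as np

def ds_add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def ds_scale(c, u):
    return tuple(c * ui for ui in u)

u = (np.array([1.0, 2.0]), np.array([3.0]))   # an element of R^2 ⊕ R^1
v = (np.array([0.0, 1.0]), np.array([5.0]))
print(ds_add(u, v))        # (array([1., 3.]), array([8.]))
print(ds_scale(2.0, u))    # (array([2., 4.]), array([6.]))
```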

Example 1.1  Common examples of vector spaces are the sequence spaces Fⁿ, F^ω, F^B, (F^ω)₀ and (F^B)₀ over a field F, i.e. the various types of cartesian products of F equipped with addition and scalar multiplication defined componentwise (here ω = ℕ, B is any set, and (F^ω)₀ and (F^B)₀ denote the functions f : ω → F, respectively f : B → F, with finite support). Some typical ones are

    ℚⁿ       ℝⁿ       ℂⁿ
    ℚ^ω      ℝ^ω      ℂ^ω
    ℚ^B      ℝ^B      ℂ^B
    (ℚ^ω)₀   (ℝ^ω)₀   (ℂ^ω)₀
    (ℚ^B)₀   (ℝ^B)₀   (ℂ^B)₀

each a vector space over ℚ, ℝ or ℂ, respectively.

1.2 Basic Vector Space Theory

Theorem 1.2  If V is a vector space over a field F, then for all x, y, z ∈ V and a, b ∈ F the following hold:

(1) x + y = z + y ⟹ x = z  (Cancellation Law)
(2) 0 ∈ V is unique.
(3) x + y = x + z = 0 ⟹ y = z  (Uniqueness of the Additive Inverse)
(4) 0x = 0
(5) (−a)x = −(ax) = a(−x)
(6) The finite external direct sum V₁ ⊕ ⋯ ⊕ Vₙ is a vector space over F if each Vᵢ is a vector space over F and if we define + and · componentwise by

    (v₁, ..., vₙ) + (u₁, ..., uₙ) := (v₁ + u₁, ..., vₙ + uₙ)
    c(v₁, ..., vₙ) := (cv₁, ..., cvₙ)

More generally, if F = {Vᵢ | i ∈ I} is a family of vector spaces over the same field F, then the direct product ∏_{i∈I} Vᵢ and external direct sum ⨁_{i∈I} Vᵢ are vector spaces over F, with + and · given pointwise: for f, g : I → ⋃Vᵢ and c ∈ F,

    (f + g)(i) := f(i) + g(i)   and   (cf)(i) := c·f(i)

Proof:
(1) x = x + 0 = x + (y + (−y)) = (x + y) + (−y) = (z + y) + (−y) = z + (y + (−y)) = z + 0 = z.
(2) If 0₁ and 0₂ are both additive identities, then 0₁ = 0₁ + 0₂ = 0₂.
(3)(a) x + y₁ = x + y₂ = 0 ⟹ y₁ = y₂ by the Cancellation Law.
   (b) Alternatively, y₁ = y₁ + 0 = y₁ + (x + y₂) = (y₁ + x) + y₂ = 0 + y₂ = y₂.
(4) 0x + 0x = (0 + 0)x = 0x = 0x + 0_V ⟹ 0x = 0_V by the Cancellation Law.
(5) ax + (−(ax)) = 0, and by (3) −(ax) is unique. But note ax + (−a)x = (a + (−a))x = 0x = 0, hence (−a)x = −(ax); also a(−x) = a((−1)x) = (a(−1))x = (−a)x, which we already know equals −(ax).
(6) All 8 vector space properties are verified componentwise, since that is how addition and scalar multiplication are defined. But the components are vectors in a vector space, so all 8 properties are satisfied. ∎

Theorem 1.3  Let V be a vector space over F. A nonempty subset S of V is a subspace iff for all a, b ∈ F and x, y ∈ S we have one of the following equivalent conditions:

(1) ax + by ∈ S
(2) x + y ∈ S and ax ∈ S
(3) x + ay ∈ S

Proof: If S satisfies any of the above conditions, then it is a subspace: vector space properties VS1-2 and VS5-8 follow from the fact that S ⊆ V, while VS3 follows in the first case by choosing a = b = 0 (and 0 ∈ V is unique, Theorem 1.2), in the second case by choosing a = 0, and in the third case by choosing a = −1 and y = x. VS4 then follows from VS3: in the first case choose a = 0, b = −1; in the second case note (−1)x = −x ∈ S; and in the third case take x = 0 ∈ S and a = −1. Conversely, if S is a subspace, then (1)-(3) hold by definition. ∎

Theorem 1.4  Let V be a vector space. If C = {Sᵢ | i ∈ K} is a collection of subspaces of V, then

    ∑_{i∈K} Sᵢ   and   ⋂_{i∈K} Sᵢ    (1.19)

are subspaces.

Proof: (1) If C = {Sᵢ | i ∈ K} is a collection of subspaces, then none of the Sᵢ is empty by the previous theorem, so if a, b ∈ F and v, w ∈ ∑_{i∈K} Sᵢ, then v = s_{i₁} + ⋯ + s_{iₙ} and w = s_{j₁} + ⋯ + s_{jₘ} with each summand in its corresponding subspace. Consequently,

    av + bw = as_{i₁} + ⋯ + as_{iₙ} + bs_{j₁} + ⋯ + bs_{jₘ} ∈ ∑_{i∈K} Sᵢ

because as_{iₖ} ∈ S_{iₖ} and bs_{jₖ} ∈ S_{jₖ}, the S_{iₖ} and S_{jₖ} being subspaces. Hence by the previous theorem ∑_{i∈K} Sᵢ is a subspace of V. The direct sum case is a subcase of this general case.
(2) If a, b ∈ F and u, v ∈ ⋂_{i∈K} Sᵢ, then u, v ∈ Sᵢ for all i ∈ K, whence au + bv ∈ Sᵢ for all i ∈ K, i.e. au + bv ∈ ⋂_{i∈K} Sᵢ, and ⋂_{i∈K} Sᵢ is a subspace by the previous theorem. ∎

Theorem 1.5  If S and T are subspaces of a vector space V, then S ∪ T is a subspace iff S ⊆ T or T ⊆ S.

Proof: Suppose S ∪ T is a subspace, so that for all a, b ∈ F and x, y ∈ S ∪ T we have ax + by ∈ S ∪ T. Suppose neither S ⊆ T nor T ⊆ S holds, and take x ∈ S\T and y ∈ T\S. Then x + y is neither in S nor in T (for if it were in S, say, then y = (x + y) + (−x) ∈ S, a contradiction), contradicting the fact that S ∪ T is a subspace. Conversely, suppose, say, S ⊆ T, and choose any a, b ∈ F and x, y ∈ S ∪ T. If x, y ∈ S, then ax + by ∈ S ⊆ S ∪ T, while if x, y ∈ T, then ax + by ∈ T ⊆ S ∪ T. If x ∈ S and y ∈ T, then x ∈ T as well and ax + by ∈ T ⊆ S ∪ T. Thus in all cases ax + by ∈ S ∪ T, and S ∪ T is a subspace. ∎

Theorem 1.6  If V is a vector space, then for all S ⊆ V, span(S) is a subspace of V. Moreover, if T is a subspace of V and S ⊆ T, then span(S) ⊆ T.

Proof: If S = ∅, then span(∅) = {0}, which is a subspace of V; and if T is a subspace, then ∅ ⊆ T and {0} ⊆ T. If S ≠ ∅, then any x, y ∈ span(S) have the form x = a₁v₁ + ⋯ + aₙvₙ and y = b₁w₁ + ⋯ + bₘwₘ for some a₁, ..., aₙ, b₁, ..., bₘ ∈ F and v₁, ..., vₙ, w₁, ..., wₘ ∈ S. Then, if a, b ∈ F, we have

    ax + by = a(a₁v₁ + ⋯ + aₙvₙ) + b(b₁w₁ + ⋯ + bₘwₘ)
            = (aa₁)v₁ + ⋯ + (aaₙ)vₙ + (bb₁)w₁ + ⋯ + (bbₘ)wₘ ∈ span(S)

so span(S) is a subspace of V. Now suppose T is a subspace of V and S ⊆ T. If w ∈ span(S), then w = a₁w₁ + ⋯ + aₖwₖ for some w₁, ..., wₖ ∈ S and a₁, ..., aₖ ∈ F. Since S ⊆ T, we have w₁, ..., wₖ ∈ T, whence w = a₁w₁ + ⋯ + aₖwₖ ∈ T. ∎

Corollary 1.7  If V is a vector space, then S ⊆ V is a subspace of V iff span(S) = S.

Proof: If S is a subspace of V, then by the theorem (taking T = S ⊇ S) we get span(S) ⊆ S. But if s ∈ S, then s = 1s ∈ span(S), so S ⊆ span(S), and therefore S = span(S). Conversely, if S = span(S), then S is a subspace of V by the theorem. ∎
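Since span(S) is a subspace (Theorem 1.6), membership w ∈ span(S) for finite S over ℝ can be tested numerically: adjoining w as an extra column leaves the rank unchanged exactly when w is a linear combination of the existing columns. A sketch of ours, with the function name illustrative:

```python
# Sketch: over R, w ∈ span(S) for finite S ⊂ R^m iff appending w as a column
# to the matrix of vectors of S does not increase the rank.
import numpy as np

def in_span(w, vectors):
    A = np.column_stack(vectors)
    Aw = np.column_stack([A, w])
    return np.linalg.matrix_rank(Aw) == np.linalg.matrix_rank(A)

S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(in_span(np.array([2.0, 3.0, 5.0]), S))   # True:  2*S[0] + 3*S[1]
print(in_span(np.array([0.0, 0.0, 1.0]), S))   # False
```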

Corollary 1.8  If V is a vector space and S, T ⊆ V, then the following hold:

(1) S ⊆ T ⊆ V ⟹ span(S) ⊆ span(T)
(2) S ⊆ T ⊆ V and span(S) = V ⟹ span(T) = V
(3) span(S ∪ T) = span(S) + span(T)
(4) span(S ∩ T) ⊆ span(S) ∩ span(T)

Proof: (1) and (2) are immediate, so we only need to prove (3) and (4):
(3) If v ∈ span(S ∪ T), then there are v₁, ..., vₘ ∈ S, u₁, ..., uₙ ∈ T and a₁, ..., aₘ, b₁, ..., bₙ ∈ F such that

    v = a₁v₁ + ⋯ + aₘvₘ + b₁u₁ + ⋯ + bₙuₙ

Note, however, that a₁v₁ + ⋯ + aₘvₘ ∈ span(S) and b₁u₁ + ⋯ + bₙuₙ ∈ span(T), so that v ∈ span(S) + span(T). Thus span(S ∪ T) ⊆ span(S) + span(T). Conversely, if v = s + t ∈ span(S) + span(T), then s ∈ span(S) and t ∈ span(T), so that by (1), since S ⊆ S ∪ T and T ⊆ S ∪ T, we must have span(S) ⊆ span(S ∪ T) and span(T) ⊆ span(S ∪ T). Consequently s, t ∈ span(S ∪ T), and since span(S ∪ T) is a subspace, v = s + t ∈ span(S ∪ T). That is, span(S) + span(T) ⊆ span(S ∪ T). Thus span(S) + span(T) = span(S ∪ T).
(4) By Theorem 1.6, span(S ∩ T), span(S) and span(T) are subspaces, and by Theorem 1.4, span(S) ∩ span(T) is a subspace. Now consider x ∈ span(S ∩ T). There exist vectors v₁, ..., vₙ ∈ S ∩ T and scalars a₁, ..., aₙ ∈ F such that x = a₁v₁ + ⋯ + aₙvₙ. But since v₁, ..., vₙ belong to both S and T, we have x ∈ span(S) and x ∈ span(T), so that x ∈ span(S) ∩ span(T). ∎

It is not in general true, however, that span(S) ∩ span(T) ⊆ span(S ∩ T). For example, if S = {e₁, e₂} ⊆ ℝ² and T = {e₁, (1, 1)}, then span(S) ∩ span(T) = ℝ² ∩ ℝ² = ℝ², but S ∩ T = {e₁}, so span(S ∩ T) = span({e₁}) = ℝe₁, the x-axis, and ℝ² ≠ ℝe₁.

Theorem 1.9  If V is a vector space and S ⊆ T ⊆ V, then

(1) S linearly dependent ⟹ T linearly dependent
(2) T linearly independent ⟹ S linearly independent

Proof: (1) If S is linearly dependent, there are scalars a₁, ..., aₘ ∈ F, not all zero, and vectors v₁, ..., vₘ ∈ S such that a₁v₁ + ⋯ + aₘvₘ = 0. Since S ⊆ T, the vectors v₁, ..., vₘ are also in T and have the same property, namely a₁v₁ + ⋯ + aₘvₘ = 0. Thus T is linearly dependent.
(2) Suppose T is linearly independent. If v₁, ..., vₙ ∈ S, a₁, ..., aₙ ∈ F and a₁v₁ + ⋯ + aₙvₙ = 0, then, since S ⊆ T implies v₁, ..., vₙ ∈ T, we must have a₁ = a₂ = ⋯ = aₙ = 0, and S is linearly independent. ∎

Theorem 1.10  If V is a vector space, then S ⊆ V is linearly dependent iff S = {0} or there exist distinct vectors v, u₁, ..., uₙ ∈ S such that v is a linear combination of u₁, ..., uₙ.

Proof: Suppose S is linearly dependent; then S contains at least one element by definition. Suppose S has only one element, v. Then either v = 0 or v ≠ 0. If v ≠ 0, then for a ∈ F we have av = 0 ⟹ a = 0, which means S is linearly independent, contrary to assumption. Therefore v = 0, and so S = {0}. If S contains more than one element, then since S is linearly dependent there exist distinct u₁, ..., uₙ ∈ S and a₁, ..., aₙ ∈ F, not all zero, such that

    a₁u₁ + ⋯ + aₙuₙ = 0

Re-indexing if necessary, we may assume a₁ ≠ 0.

If n = 1, then a₁u₁ = 0 with a₁ ≠ 0 forces u₁ = 0 ∈ S; choosing any u ∈ S distinct from u₁ (possible since S has more than one element), v = 0 = 0u is a linear combination of a vector of S distinct from it. If n > 1, then

    u₁ = (−a₁⁻¹a₂)u₂ + ⋯ + (−a₁⁻¹aₙ)uₙ

and v = u₁ is a linear combination of distinct vectors in S. Conversely, if S = {0}, clearly S is linearly dependent, since 1·0 = 0 is a nontrivial representation of 0. Otherwise, if there are distinct u₁, ..., uₙ, v ∈ S and a₁, ..., aₙ ∈ F such that v = a₁u₁ + ⋯ + aₙuₙ, then obviously 0 = (−1)v + a₁u₁ + ⋯ + aₙuₙ, and since −1 ≠ 0, S is linearly dependent. ∎

Corollary 1.11  If S = {u₁, ..., uₙ} ⊆ V, then S is linearly dependent iff u₁ = 0 or for some 1 ≤ k < n we have u_{k+1} ∈ span(u₁, ..., uₖ).

Theorem 1.12  If V is a vector space and u, v ∈ V are distinct, then {u, v} is linearly dependent iff u or v is a multiple of the other.

Proof: If {u, v} is linearly dependent, there are scalars a, b ∈ F, not both zero, such that au + bv = 0. If a = 0, then b ≠ 0, and so au + bv = bv = 0 ⟹ v = 0, whence v = 0u. If neither a nor b equals zero, then au + bv = 0 ⟹ u = −a⁻¹bv. Conversely, suppose, say, u = av. If a = 0, then u = 0, and since u ≠ v by hypothesis, v can be any vector other than 0 = u; then 1u + 0v = 0 with 1 ≠ 0, so {u, v} is linearly dependent. Otherwise, if a ≠ 0, we have u = av ⟹ 1u + (−a)v = 0, and both 1 ≠ 0 and −a ≠ 0, so {u, v} is linearly dependent. ∎

Theorem 1.13 (Properties of a Basis)  If V is a vector space, then the following are equivalent, and any S ⊆ V satisfying any of them is a basis:

(1) S is linearly independent and V = span(S).
(2) (Essentially Unique Representation) Every nonzero vector v ∈ V is an essentially unique linear combination of vectors in S.
(3) No vector in S is a linear combination of other vectors in S, and V = span(S).
(4) S is a minimal spanning set, that is, V = span(S) but no proper subset of S spans V.
(5) S is a maximal linearly independent set, that is, S is linearly independent but no proper superset of S is linearly independent.

Proof: (1) ⟹ (2): Suppose S is linearly independent and v ∈ V is a nonzero vector such that, for distinct s₁, ..., sₙ ∈ S and distinct t₁, ..., tₘ ∈ S,

    v = a₁s₁ + ⋯ + aₙsₙ = b₁t₁ + ⋯ + bₘtₘ

Grouping any sᵢ and tᵢ that are equal (say the first k of each, after re-indexing), we have

    0 = (a₁ − b₁)s₁ + ⋯ + (aₖ − bₖ)sₖ + a_{k+1}s_{k+1} + ⋯ + aₙsₙ − b_{k+1}t_{k+1} − ⋯ − bₘtₘ

Linear independence implies a_{k+1} = ⋯ = aₙ = b_{k+1} = ⋯ = bₘ = 0, so n = m = k, and aᵢ = bᵢ for i = 1, ..., n; whence v is essentially uniquely represented as a linear combination of vectors in S.

(2) ⟹ (3): By contraposition: if some v ∈ S were a linear combination v = a₁s₁ + ⋯ + aₙsₙ of other vectors in S, then v = 1v and v = a₁s₁ + ⋯ + aₙsₙ would be two essentially different representations of v, violating (2).
(3) ⟹ (1): If (3) holds and a₁s₁ + ⋯ + aₙsₙ = 0, where the sᵢ ∈ S are distinct and a₁ ≠ 0, then n > 1 and s₁ = −(1/a₁)(a₂s₂ + ⋯ + aₙsₙ), which violates (3). Thus every representation of 0 is trivial, and (3) implies (1).
(1) ⟺ (4): If S is a linearly independent spanning set and T is a proper subset of S that also spans V, then any vector in S\T would have to be a linear combination of the vectors in T, violating (3) for S ((3) being equivalent to (1), as we just showed). Thus S is a minimal spanning set. Conversely, if S is a minimal spanning set, then it must be linearly independent, for otherwise there would be some s ∈ S that is a linear combination of other vectors in S, which would mean S\{s} also spans V, contradicting minimality. Thus (4) implies (1).
(1) ⟺ (5): If (1) holds, so that S is linearly independent and spans V, but S is not maximal, then there is v ∈ V\S such that S ∪ {v} is also linearly independent. But v ∈ span(S) by (1), so v is a linear combination of vectors in S, a contradiction. Therefore S is maximal, and (1) implies (5). Conversely, if S is a maximal linearly independent set, then S must span V, for otherwise there is a v ∈ V\S that is not a linear combination of vectors in S, implying that S ∪ {v} is a linearly independent proper superset of S, violating maximality. Therefore span(S) = V, and (5) implies (1). ∎

Corollary 1.14  If V is a vector space, then S = {v₁, ..., vₙ} ⊆ V is a basis for V iff

    V = span(v₁) ⊕ span(v₂) ⊕ ⋯ ⊕ span(vₙ)    (1.20)

The previous theorem did not say that any such basis exists for a given vector space. It merely gave the defining characteristics of a basis, should we ever come across one. The next theorem assures us that all vector spaces have a basis, a consequence of Zorn's lemma.

Theorem 1.15 (Existence of a Basis)  If V is a nonzero vector space and I ⊆ S ⊆ V with I linearly independent and V = span(S), then there exists a basis B for V for which

    I ⊆ B ⊆ S    (1.21)

More specifically,

(1) Any nonzero vector space has a basis.
(2) Any linearly independent set in V is contained in a basis.
(3) Any spanning set in V contains a basis.

Proof: Let A ⊆ P(V) be the set of all linearly independent sets L such that I ⊆ L ⊆ S. Then A is non-empty because I ⊆ I ⊆ S, so I ∈ A. Now, if C = {Iₖ | k ∈ K} ⊆ A is a chain in A, that is, a totally ordered (under set inclusion) subset of A, then the union

    U = ⋃C = ⋃_{k∈K} Iₖ

is linearly independent and satisfies I ⊆ U ⊆ S, that is, U ∈ A; so every chain in A has an upper bound in A. By Zorn's lemma, A therefore has a maximal element B, which is linearly independent with I ⊆ B ⊆ S. But such a B is a basis for V = span(S), for if some s ∈ S were not a linear combination of elements of B, then B ∪ {s} would be linearly independent with B ⊊ B ∪ {s} ⊆ S, contradicting the maximality of B. Therefore S ⊆ span(B), and so V = span(S) ⊆ span(B) ⊆ V, i.e. V = span(B). This shows that (1) there is a basis B for V, (2) any linearly independent set I satisfies I ⊆ B for this B, and (3) any spanning set S satisfies B ⊆ S for this B. ∎

Corollary 1.16 (All Subspaces Have Complements)  If S is a subspace of a vector space V, then there exists a subspace T of V such that

    V = S ⊕ T    (1.22)

Proof: If S = V, let T = {0}. Otherwise, let A be a basis for S. By Theorem 1.15, A extends to a basis B for V with A ⊆ B, and B\A ≠ ∅ since S ≠ V. Let T = span(B\A). By Corollary 1.8,

    V = span(B) = span(A ∪ (B\A)) = span(A) + span(B\A) = S + T

But S ∩ T = {0} because of the essentially unique representation of vectors in V as linear combinations of vectors in B. Hence V = S ⊕ T, and T is a complement of S. ∎

Theorem 1.17 (Replacement Theorem)  If V is a vector space such that V = span(S) for some S ⊆ V with |S| = n, and if B ⊆ V is linearly independent with |B| = m, then

(1) m ≤ n
(2) there exists C ⊆ S with |C| = n − m such that V = span(B ∪ C)

Proof: The proof is by induction on m, beginning with m = 0. For this m, B = ∅, so taking C = S we have B ∪ C = ∅ ∪ S = S, which generates V. Now suppose the theorem holds for some m ≥ 0. For m + 1, let B = {v₁, ..., v_{m+1}} ⊆ V be linearly independent. By Theorem 1.9, B′ = {v₁, ..., vₘ} is linearly independent, and so by the induction hypothesis m ≤ n and there exists C′ = {u₁, ..., u_{n−m}} ⊆ S such that {v₁, ..., vₘ} ∪ C′ generates V. This means there are a₁, ..., aₘ, b₁, ..., b_{n−m} ∈ F such that

    v_{m+1} = a₁v₁ + ⋯ + aₘvₘ + b₁u₁ + ⋯ + b_{n−m}u_{n−m}    (1.23)

Note that if n = m, then C′ = ∅, so that v_{m+1} ∈ span(B′), contradicting the assumption that B is linearly independent. Therefore n > m, i.e. n ≥ m + 1. Moreover, some bᵢ, say b₁, is nonzero, for otherwise we would have v_{m+1} = a₁v₁ + ⋯ + aₘvₘ, again making B linearly dependent, in contradiction to assumption. Solving (1.23) for u₁, we get

    u₁ = (−b₁⁻¹a₁)v₁ + ⋯ + (−b₁⁻¹aₘ)vₘ + b₁⁻¹v_{m+1} + (−b₁⁻¹b₂)u₂ + ⋯ + (−b₁⁻¹b_{n−m})u_{n−m}

Let C = {u₂, ..., u_{n−m}}. Then u₁ ∈ span(B ∪ C), and since v₁, ..., vₘ, u₂, ..., u_{n−m} ∈ B ∪ C, they are also in span(B ∪ C), whence

    B′ ∪ C′ ⊆ span(B ∪ C)

Now, since span(B′ ∪ C′) = V by the induction hypothesis, we have by Corollary 1.8 that span(B ∪ C) = V. So B is linearly independent with |B| = m + 1 vectors, m + 1 ≤ n, span(B ∪ C) = V, and C ⊆ S with |C| = (n − m) − 1 = n − (m + 1) vectors, so the theorem holds for m + 1. Therefore the theorem is true for all m ∈ ℕ by induction. ∎

Corollary 1.18  If V is a vector space with a finite spanning set, then every basis for V contains the same number of vectors.

Proof: Suppose S is a finite spanning set for V with |S| = n, and let β and γ be two bases for V. By the replacement theorem, every linearly independent subset of V has at most n vectors, so β and γ are finite. Suppose |γ| > |β| = k; then some subset T ⊆ γ contains exactly k + 1 vectors. Since T is linearly independent and β generates V, the replacement theorem implies that k + 1 ≤ k, a contradiction. Therefore |γ| ≤ k. Reversing the roles of β and γ shows that k ≤ |γ|, so that |γ| = |β| = k. ∎
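The replacement theorem has a constructive flavor that is easy to illustrate numerically over ℝ (a sketch of ours, assuming numpy and a nonempty independent list B): repeatedly adjoin vectors of the spanning list S that enlarge the span, until B together with the chosen vectors C spans everything S does.

```python
# Sketch: greedily pick C ⊆ S so that B ∪ C spans span(S), mirroring the
# replacement theorem; B is assumed nonempty and linearly independent.
import numpy as np

def extend_to_spanning(B, S):
    current, C = list(B), []
    target = np.linalg.matrix_rank(np.column_stack(S))   # dim span(S)
    for s in S:
        if np.linalg.matrix_rank(np.column_stack(current + [s])) > \
           np.linalg.matrix_rank(np.column_stack(current)):
            current.append(s)
            C.append(s)
        if np.linalg.matrix_rank(np.column_stack(current)) == target:
            break
    return C   # here |C| = n - m, and B ∪ C spans span(S)

B = [np.array([1.0, 1.0, 0.0])]
S = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
C = extend_to_spanning(B, S)
print(len(B) + len(C))   # 3 = dim R^3, so B ∪ C is a basis
```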

Corollary 1.19  If V is a finite-dimensional vector space with dim(V) = n, then the following hold:

(1) Any finite spanning set for V contains at least n vectors, and a spanning set for V that contains exactly n vectors is a basis for V.
(2) Any linearly independent subset of V that contains exactly n vectors is a basis for V.
(3) Every linearly independent subset of V can be extended to a basis for V.

Proof: Let B be a basis for V.
(1) Let S ⊆ V be a finite spanning set for V. By Theorem 1.15, there is a basis B′ ⊆ S for V, and by Corollary 1.18, |B′| = n. Therefore |S| ≥ n, and |S| = n ⟹ S = B′, so that S is a basis for V.
(2) Let I ⊆ V be linearly independent with |I| = n. By the replacement theorem there is T ⊆ B with |T| = n − n = 0 vectors such that V = span(I ∪ T) = span(I ∪ ∅) = span(I). Since I is also linearly independent, it is a basis for V.
(3) If I ⊆ V is linearly independent with |I| = m vectors, then the replacement theorem asserts that there is H ⊆ B containing exactly n − m vectors such that V = span(I ∪ H). Now |I ∪ H| ≤ n, so that by (1), |I ∪ H| = n, and I ∪ H is a basis for V extending I. ∎

Theorem 1.20  Any two bases for a vector space V have the same cardinality.

Proof: If V is finite-dimensional, then the previous corollary applies, so we need only consider bases that are infinite sets. Let B = {bᵢ | i ∈ I} be a basis for V and let C be any other basis for V. Each c ∈ C can be written as a finite linear combination of vectors in B in which all the coefficients are nonzero: there is a finite set U_c ⊆ I such that

    c = ∑_{i∈U_c} aᵢbᵢ

for unique nonzero a_i ∈ F. Because C is a basis, we must have I = ⋃_{c∈C} U_c, for if i₀ ∈ I \ ⋃_{c∈C} U_c, then every c ∈ C could be expressed using the proper subset B′ = {bᵢ | i ≠ i₀} of B, so that V = span(C) ⊆ span(B′), contradicting the minimality of B as a spanning set (Theorem 1.13). Now, for all c ∈ C we have |U_c| < ℵ₀, which implies

    |B| = |I| = |⋃_{c∈C} U_c| ≤ ℵ₀·|C| = |C|

Reversing the roles of B and C gives |C| ≤ |B|, so by the Schröder-Bernstein theorem |B| = |C|. ∎

Corollary 1.21  If V is a vector space and S is a subspace of V, then:

(1) dim(S) ≤ dim(V)
(2) dim(S) = dim(V) < ∞ ⟹ S = V
(3) V is infinite-dimensional iff it contains an infinite linearly independent subset.

Theorem 1.22  Let V be a vector space.

(1) If B is a basis for V and B = B₁ ∪ B₂, where B₁ ∩ B₂ = ∅, then V = span(B₁) ⊕ span(B₂).
(2) If V = S ⊕ T and B₁ is a basis for S and B₂ is a basis for T, then B₁ ∩ B₂ = ∅ and B = B₁ ∪ B₂ is a basis for V.

Proof: (1) Since B₁ ∩ B₂ = ∅ and B = B₁ ∪ B₂ is a basis for V, we have 0 ∉ B₁ ∪ B₂. If a nonzero vector v lay in span(B₁) ∩ span(B₂), then v would have two essentially different representations in terms of the basis B (one using only vectors of B₁, the other only vectors of B₂), contradicting Theorem 1.13. Hence span(B₁) ∩ span(B₂) = {0}. Moreover, since B₁ ∪ B₂ is a basis for V, Corollary 1.8 gives V = span(B₁ ∪ B₂) = span(B₁) + span(B₂), and hence V = span(B₁) ⊕ span(B₂).
(2) If V = S ⊕ T, then S ∩ T = {0}, and since 0 ∉ B₁ ∪ B₂ while B₁ ⊆ S and B₂ ⊆ T, we have B₁ ∩ B₂ = ∅. Also, every v ∈ V = S ⊕ T has the form v = a₁u₁ + ⋯ + aₘuₘ + b₁w₁ + ⋯ + bₙwₙ with u₁, ..., uₘ ∈ B₁ and w₁, ..., wₙ ∈ B₂, so v ∈ span(B₁ ∪ B₂). And B₁ ∪ B₂ is linearly independent: a vanishing linear combination of its vectors splits as s + t = 0 with s ∈ span(B₁) = S and t ∈ span(B₂) = T, forcing s = t = 0 by directness, and then all coefficients vanish by the independence of B₁ and of B₂. Hence B = B₁ ∪ B₂ is a basis for V by Theorem 1.13. ∎

Theorem 1.23  If S and T are subspaces of a vector space V, then

    dim(S) + dim(T) = dim(S + T) + dim(S ∩ T)    (1.24)

As a consequence, if V = S + T is finite-dimensional, then V = S ⊕ T iff dim(V) = dim(S) + dim(T).

Proof: If B = {bᵢ | i ∈ I} is a basis for S ∩ T, then we can extend it to a basis A ∪ B for S and to another basis B ∪ C for T, where A = {aⱼ | j ∈ J}, C = {cₖ | k ∈ K}, A ∩ B = ∅ and B ∩ C = ∅. Then A ∪ B ∪ C is a basis for S + T: clearly span(A ∪ B ∪ C) = S + T, so we need only verify that A ∪ B ∪ C is linearly independent. To that end, suppose not: suppose there are a₁, ..., aₙ ∈ F, not all zero, and distinct v₁, ..., vₙ ∈ A ∪ B ∪ C such that

    a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0

Some of the vᵢ must lie in A, since B ∪ C is linearly independent. Isolating the vectors in A, say v₁, ..., vₖ, on one side of the equality gives a vector

    x = a₁v₁ + ⋯ + aₖvₖ = −(a_{k+1}v_{k+1} + ⋯ + aₙvₙ)

with v₁, ..., vₖ ∈ A and v_{k+1}, ..., vₙ ∈ B ∪ C, so that

    x ∈ span(A) ∩ span(B ∪ C) ⊆ span(A) ∩ T ⊆ S ∩ T = span(B)

since span(A) ⊆ S. But span(A) ∩ span(B) = {0} by Theorem 1.22, A ∪ B being a basis for S with A ∩ B = ∅. Consequently x = 0, and therefore a₁ = ⋯ = aₙ = 0 since A and B ∪ C are linearly independent sets, a contradiction. Thus A ∪ B ∪ C is linearly independent and hence a basis for S + T. Moreover, since A, B and C are pairwise disjoint,

    dim(S) + dim(T) = |A ∪ B| + |B ∪ C| = |A| + |B| + |B| + |C| = |A ∪ B ∪ C| + |B| = dim(S + T) + dim(S ∩ T)

Of course if V = S ⊕ T, then S ∩ T = {0} and V = S + T, so dim(S ∩ T) = 0 and

    dim(S) + dim(T) = dim(S + T) = dim(V)

while if V = S + T and dim(V) = dim(S) + dim(T) < ∞, then dim(S ∩ T) = 0, so S ∩ T = {0}, and so V = S ⊕ T. ∎

Corollary 1.24  If S and T are subspaces of a vector space V, then

(1) dim(S + T) ≤ dim(S) + dim(T)
(2) dim(S + T) ≥ max{dim(S), dim(T)}, if S and T are finite-dimensional.
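A numerical illustration of (1.24) over ℝ (ours, not from the notes): represent S and T by matrices of spanning columns; then dim(S + T) is the rank of the concatenated columns, and, when the columns of A and B are each independent, dim(S ∩ T) equals the dimension of the null space of the block matrix [A  −B], since Ax = By exactly when (x, y) lies in that null space.

```python
# Numeric check of dim(S) + dim(T) = dim(S + T) + dim(S ∩ T) in R^4.
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])  # S = span{e1, e2}
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # T = span{e2, e3}

dim_S = np.linalg.matrix_rank(A)
dim_T = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))              # dim(S + T)
# dim of null space of [A  -B]; equals dim(S ∩ T) when A, B have
# independent columns (the map (x, y) |-> Ax is then a bijection onto S ∩ T)
dim_cap = A.shape[1] + B.shape[1] - np.linalg.matrix_rank(np.hstack([A, -B]))

print(dim_S + dim_T, "=", dim_sum + dim_cap)   # 4 = 4  (here 3 + 1)
```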

Theorem 1.25 (Direct Sums)  If F = {Sᵢ | i ∈ I} is a family of subspaces of a vector space V such that V = ∑_{i∈I} Sᵢ, then the following statements are equivalent:

(1) V = ⨁_{i∈I} Sᵢ.
(2) 0 ∈ V has a unique expression as a sum of vectors each from a different Sᵢ, namely as a sum of zeros: for any distinct j₁, ..., jₙ ∈ I,

    v_{j₁} + v_{j₂} + ⋯ + v_{jₙ} = 0 with v_{jᵢ} ∈ S_{jᵢ} for each i  ⟹  v_{j₁} = ⋯ = v_{jₙ} = 0

(3) Each nonzero v ∈ V has a unique expression as a sum of vectors from distinct subspaces,

    v = v_{j₁} + v_{j₂} + ⋯ + v_{jₙ},  v_{jᵢ} ∈ S_{jᵢ}\{0}

(4) If γᵢ is a basis for Sᵢ, then γ = ⋃_{i∈I} γᵢ is a basis for V.

If V is finite-dimensional, then this may be restated in terms of ordered bases γᵢ and γ.

Proof: (1) ⟹ (2): Suppose (1) is true, that is, V = ⨁_{i∈I} Sᵢ, and suppose v₁ + ⋯ + vₖ = 0 with vᵢ ∈ S_{jᵢ} for distinct j₁, ..., jₖ ∈ I. Then for any j among the jᵢ,

    vⱼ = −∑_{i≠j} vᵢ ∈ ∑_{i≠j} Sᵢ

However, vⱼ ∈ Sⱼ, whence

    vⱼ ∈ Sⱼ ∩ ∑_{i≠j} Sᵢ = {0}

so that vⱼ = 0. This is true for all j = 1, ..., k, so v₁ = ⋯ = vₖ = 0, proving (2).

(2) ⟹ (3): Suppose (2) is true and let v ∈ V = ∑_{i∈I} Sᵢ be given by

    v = u₁ + ⋯ + uₙ = w₁ + ⋯ + wₘ

with the uᵢ taken from distinct subspaces, and likewise the wⱼ. Then, grouping terms from the same subspaces (say the first k pairs, after re-indexing),

    0 = v − v = (u₁ − w₁) + ⋯ + (uₖ − wₖ) + u_{k+1} + ⋯ + uₙ − w_{k+1} − ⋯ − wₘ

is a sum of vectors from distinct subspaces, so by (2)

    u₁ = w₁, ..., uₖ = wₖ   and   u_{k+1} = ⋯ = uₙ = w_{k+1} = ⋯ = wₘ = 0

proving uniqueness.

(3) ⟹ (4): Suppose each nonzero vector v ∈ V = ∑_{i∈I} Sᵢ can be written uniquely as v = v₁ + ⋯ + vₖ with nonzero vᵢ from distinct subspaces, and for each i ∈ I let γᵢ be a basis for Sᵢ. Since V = ∑_{i∈I} Sᵢ and span(γᵢ) = Sᵢ, we have V = span(γ) for γ = ⋃_{i∈I} γᵢ, so we only need to show that γ is linearly independent. To that end, take distinct indices j₁, ..., jₘ ∈ I and vectors

    v₁₁, v₁₂, ..., v_{1n₁} ∈ γ_{j₁}
    ⋮
    v_{m1}, v_{m2}, ..., v_{mnₘ} ∈ γ_{jₘ}

let a_{ij} ∈ F for i = 1, ..., m and j = 1, ..., nᵢ, and let wᵢ = ∑_{j=1}^{nᵢ} a_{ij}v_{ij} ∈ S_{jᵢ}. Then suppose

    w₁ + ⋯ + wₘ = ∑_{i,j} a_{ij}v_{ij} = 0

Since 0 ∈ ∑ᵢ₌₁ᵐ S_{jᵢ} and 0 = 0 + ⋯ + 0, the assumed uniqueness of expression of vectors of V forces wᵢ = 0 for all i, and since each γ_{jᵢ} is linearly independent, we must have a_{ij} = 0 for all i, j. Hence γ is linearly independent, and therefore a basis for V.

(4) ⟹ (1): If (4) is true, then there exist bases γᵢ for the Sᵢ such that γ = ⋃_{i∈I} γᵢ is a basis for V. By Corollary 1.8 and Theorem 1.22,

    V = span(γ) = span(⋃_{i∈I} γᵢ) = ∑_{i∈I} span(γᵢ) = ∑_{i∈I} Sᵢ

Choose any j ∈ I and suppose a nonzero vector v ∈ V satisfies v ∈ Sⱼ ∩ ∑_{i≠j} Sᵢ. Then

    v ∈ Sⱼ = span(γⱼ)   and   v ∈ ∑_{i≠j} Sᵢ = span(⋃_{i≠j} γᵢ)

which means v is a nontrivial linear combination of elements of γⱼ and also a nontrivial linear combination of elements of ⋃_{i≠j} γᵢ, so that v can be expressed as a linear combination of vectors of γ = ⋃_{i∈I} γᵢ in two essentially different ways, contradicting the uniqueness of representation of vectors in V in terms of a basis for V (Theorem 1.13). Consequently any such v must be 0. That is, Sⱼ ∩ ∑_{i≠j} Sᵢ = {0}, and the sum is direct, i.e. V = ⨁_{i∈I} Sᵢ, proving (1). ∎

2 Linear Transformations

2.1 Definitions

2.1.1 Linear Transformations, or Vector Space Homomorphisms

If V and W are vector spaces over the same field F, a function T : V → W is called a linear transformation or vector space homomorphism if it preserves the vector space structure, that is, if for all vectors x, y ∈ V and all scalars a, b ∈ F we have

    T(x + y) = T(x) + T(y)   and   T(ax) = aT(x)

or equivalently

    T(ax + by) = aT(x) + bT(y)    (2.1)

The set of all linear transformations from V to W is denoted

    L(V, W) or Hom(V, W)    (2.2)

Linear transformations, like group and ring homomorphisms, come in different types:

1. linear operator (vector space endomorphism) on V: if V is a vector space over F, a linear operator, or vector space endomorphism, is a linear transformation T ∈ L(V, V). The set of all linear operators, or vector space endomorphisms, is denoted

    L(V) or End(V)    (2.3)

2. monomorphism (embedding): a 1-1 or injective linear transformation.

3. epimorphism: an onto or surjective linear transformation.

4. isomorphism (invertible linear transformation) from V to W: a bijective (and therefore invertible) linear transformation T ∈ L(V, W), i.e. a monomorphism which is also an epimorphism. If such an isomorphism exists, V is said to be isomorphic to W, and this relationship is denoted

    V ≅ W    (2.4)

The set of all linear isomorphisms from V to W is denoted

    GL(V, W)    (2.5)

by analogy with the general linear group.

5. automorphism: a bijective linear operator. The set of all automorphisms of V is denoted

    Aut(V) or GL(V)    (2.6)

Some common and important linear transformations are:

1. The identity transformation, I_V : V → V, given by I_V(x) = x for all x ∈ V. Its linearity is obvious: I_V(ax + by) = ax + by = aI_V(x) + bI_V(y).

2. The zero transformation, T₀ : V → W, given by T₀(x) = 0 for all x ∈ V. Sometimes this is denoted by 0 if the context is clear, like the zero polynomial 0 ∈ F[x]. This is also clearly a linear transformation.

3. An idempotent operator T ∈ L(V), which satisfies T² = T.
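Concretely, over F = ℝ every matrix M ∈ ℝ^{m×n} defines a linear transformation x ↦ Mx in L(ℝⁿ, ℝᵐ), and a randomized numerical check of (2.1) is immediate. A quick sketch of ours:

```python
# Sketch: T(x) = M @ x is linear; check the defining identity (2.1)
# at random points (over R, with numpy assumed).
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 2))     # T in L(R^2, R^3)
T = lambda x: M @ x

x, y = rng.standard_normal(2), rng.standard_normal(2)
a, b = 2.0, -0.5
assert np.allclose(T(a * x + b * y), a * T(x) + b * T(y))
```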

A crucial concept, which will allow us to classify and so distinguish operators, is that of similarity of operators. Two linear operators T, U ∈ L(V) are said to be similar, denoted

    T ∼ U    (2.7)

if there exists an automorphism φ ∈ Aut(V) such that

    T = φ ∘ U ∘ φ⁻¹    (2.8)

Similarity is an equivalence relation on L(V):

1. T ∼ T, because T = I ∘ T ∘ I⁻¹.

2. T ∼ U implies T = φ ∘ U ∘ φ⁻¹ for some automorphism φ, so that U = φ⁻¹ ∘ T ∘ φ, or U = ψ ∘ T ∘ ψ⁻¹, where ψ = φ⁻¹ is an automorphism. Thus U ∼ T.

3. S ∼ T and T ∼ U implies S = φ ∘ T ∘ φ⁻¹ and T = ψ ∘ U ∘ ψ⁻¹, where φ and ψ are automorphisms, so

    S = φ ∘ T ∘ φ⁻¹ = φ ∘ (ψ ∘ U ∘ ψ⁻¹) ∘ φ⁻¹ = (φ ∘ ψ) ∘ U ∘ (φ ∘ ψ)⁻¹

and since φ ∘ ψ is an automorphism of V, we have S ∼ U.

This partitions L(V) into equivalence classes. As we will see, each equivalence class corresponds to a single matrix representation, depending only on the choice of basis for V, which may be different for each operator.

The kernel or null space of a linear transformation T ∈ L(V, W) is the pre-image of 0 ∈ W under T:

    ker(T) = T⁻¹(0)    (2.9)

i.e. ker(T) = {x ∈ V | T(x) = 0}. As we will show below, ker(T) is a subspace of V. The range or image of a linear transformation T ∈ L(V, W) is the range of T as a set function; it is variously denoted by R(T), T(V) or im(T):

    R(T) = im(T) = T(V) := {T(x) | x ∈ V}    (2.10)

As we show below, R(T) is a subspace of W. The rank of a linear transformation T ∈ L(V, W) is the dimension of its range:

    rank(T) := dim(R(T))    (2.11)

The nullity of T ∈ L(V, W) is the dimension of its kernel:

    null(T) := dim(ker(T))    (2.12)

Given an operator T ∈ L(V), a subspace S of V is said to be T-invariant if v ∈ S ⟹ T(v) ∈ S, that is, if T(S) ⊆ S. The restriction of T to a T-invariant subspace S is denoted T|_S, and of course T|_S ∈ L(S) is a linear operator on S. In the case that V = R ⊕ S decomposes as a direct sum of T-invariant subspaces R and S, we say the pair (R, S) reduces T, or is a reducing pair for T, and we say that T is the direct sum of the operators T|_R and T|_S, denoting this by

    T = T|_R ⊕ T|_S    (2.13)

The expression T = τ ⊕ σ ∈ L(V) means there are subspaces R and S of V for which (R, S) reduces T and τ = T|_R and σ = T|_S.

If V is a direct sum of subspaces V = R ⊕ S, then the function π_{R,S} : V → V given by

    π_{R,S}(v) = π_{R,S}(r + s) = r    (2.14)

where r ∈ R and s ∈ S, is called the projection onto R along S.
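For T(x) = Mx over ℝ, the rank, the nullity, and orthonormal bases for im(T) and ker(T) can all be read off the singular value decomposition of M. A sketch of ours (the rank tolerance is a standard numerical convention, not from the notes):

```python
# Sketch: rank(T), null(T), and bases for im(T) and ker(T) from the SVD.
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # a rank-1 map R^3 -> R^2
U, s, Vt = np.linalg.svd(M)
tol = max(M.shape) * np.finfo(float).eps * (s[0] if s.size else 0.0)
r = int(np.sum(s > tol))            # rank(T)

range_basis = U[:, :r]              # orthonormal basis of im(T)
kernel_basis = Vt[r:].T             # orthonormal basis of ker(T)
print("rank =", r, " nullity =", M.shape[1] - r)   # rank = 1  nullity = 2
```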

Example 2.1  The projection onto the y-axis along the x-axis is the map π_{R_y,R_x} : ℝ² → ℝ² given by π_{R_y,R_x}(x, y) = (0, y).

Example 2.2  The projection onto the y-axis along the line L = {(s, s) | s ∈ ℝ} is the map π_{R_y,L} : ℝ² → ℝ² given by π_{R_y,L}(s, s + t) = (0, t), i.e. π_{R_y,L}(x, y) = (0, y − x).

Any projection π_{R,S} is linear, i.e. π_{R,S} ∈ L(V): writing u = r₁ + s₁ and v = r₂ + s₂ with rᵢ ∈ R and sᵢ ∈ S,

    π_{R,S}(au + bv) = π_{R,S}(a(r₁ + s₁) + b(r₂ + s₂))
                     = π_{R,S}((ar₁ + br₂) + (as₁ + bs₂))
                     = ar₁ + br₂
                     = aπ_{R,S}(u) + bπ_{R,S}(v)

And clearly

    R = im(π_{R,S})   and   S = ker(π_{R,S})    (2.15)

Two projections ρ, σ ∈ L(V) are said to be orthogonal, denoted

    ρ ⊥ σ    (2.16)

if

    ρ ∘ σ = σ ∘ ρ = 0 ≡ T₀    (2.17)

This is of course by analogy with orthogonal vectors v ⊥ w, for which we have ⟨v, w⟩ = ⟨w, v⟩ = 0. We use projections to define a resolution of the identity, a sum of the form

    π₁ + ⋯ + πₖ = I ∈ L(V)

where the πᵢ are pairwise orthogonal projections, that is, πᵢ ⊥ πⱼ if i ≠ j. This occurs when V = S₁ ⊕ S₂ ⊕ ⋯ ⊕ Sₖ, πᵢ = π_{Sᵢ, ⊕_{j≠i} Sⱼ}, and πᵢ ⊥ πⱼ for i ≠ j. Resolutions of the identity will be important in establishing the spectral resolution of a linear operator.
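Example 2.2 in matrix form (our illustration over ℝ²): the two complementary projections for the splitting ℝ² = L ⊕ R_y are idempotent, mutually orthogonal in the sense of (2.17), and sum to the identity, i.e. they resolve the identity.

```python
# Sketch: (x, y) = (x, x) + (0, y - x) splits R^2 = L ⊕ R_y, giving the
# projection onto R_y along L and its complementary projection onto L.
import numpy as np

P_y = np.array([[0.0, 0.0],
                [-1.0, 1.0]])   # (x, y) |-> (0, y - x): onto R_y along L
P_L = np.array([[1.0, 0.0],
                [1.0, 0.0]])    # (x, y) |-> (x, x):     onto L along R_y

assert np.allclose(P_y @ P_y, P_y)               # idempotent: P^2 = P
assert np.allclose(P_y @ P_L, np.zeros((2, 2)))  # orthogonal pair, Eq. (2.17)
assert np.allclose(P_L @ P_y, np.zeros((2, 2)))
assert np.allclose(P_y + P_L, np.eye(2))         # resolution of the identity
```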

2.2 Basic Properties of Linear Transformations

Theorem 2.3  If V and W are vector spaces over the same field F and T ∈ L(V, W), then ker(T) is a subspace of V and im(T) is a subspace of W. Moreover, if T ∈ L(V), then {0}, ker(T), im(T) and V are T-invariant.

Proof: (1) If x, y ∈ ker(T) and a, b ∈ F, we have

    T(ax + by) = aT(x) + bT(y) = a0 + b0 = 0

so ax + by ∈ ker(T), and the statement follows by Theorem 1.3. If x, y ∈ im(T) and a, b ∈ F, then there are u, v ∈ V such that T(u) = x and T(v) = y, whence ax + by = aT(u) + bT(v) = T(au + bv) ∈ im(T).
(2) Clearly {0} and V are T-invariant, the first because T(0) = 0, the second because the codomain of T is V. If x ∈ im(T), then of course x ∈ V, so that T(x) ∈ im(T); while if x ∈ ker(T), then T(x) = 0 ∈ ker(T). ∎

Theorem 2.4  If V and W are vector spaces and B = {vᵢ | i ∈ I} is a basis for V, then for any T ∈ L(V, W) we have

    im(T) = span(T(B))    (2.18)

Proof: T(vᵢ) ∈ im(T) for each i ∈ I by definition, and because im(T) is a subspace while span(T(B)) is the smallest subspace containing T(B) (Theorem 1.6), we have span(T(B)) ⊆ im(T). For the reverse inclusion, choose any w ∈ im(T). Then there is v ∈ V such that w = T(v), so that by Theorem 1.13 there are unique a₁, ..., aₙ ∈ F and v₁, ..., vₙ ∈ B such that v = a₁v₁ + ⋯ + aₙvₙ. Consequently,

    w = T(v) = T(a₁v₁ + ⋯ + aₙvₙ) = a₁T(v₁) + ⋯ + aₙT(vₙ) ∈ span(T(B))  ∎

Theorem 2.5 (A Linear Transformation Is Defined by Its Action on a Basis)  Let V and W be vector spaces over the same field F. If B = {vᵢ | i ∈ I} is a basis for V, then we can define a unique linear transformation T ∈ L(V, W) by arbitrarily specifying the values wᵢ = T(vᵢ) ∈ W for each i ∈ I and extending T by linearity, i.e. specifying that for all a₁, ..., aₙ ∈ F our T satisfy

    T(a₁v₁ + ⋯ + aₙvₙ) = a₁T(v₁) + ⋯ + aₙT(vₙ) = a₁w₁ + ⋯ + aₙwₙ    (2.19)

Proof: By Theorem 1.13, for every v ∈ V there are unique a₁, ..., aₙ ∈ F and v₁, ..., vₙ ∈ B such that v = a₁v₁ + ⋯ + aₙvₙ. Defining T : V → W by (2.19), we have the following results:
(1) T ∈ L(V, W): For any u, v ∈ V, there exist unique a₁, ..., aₙ, b₁, ..., bₘ ∈ F and u₁, ..., uₙ, v₁, ..., vₘ ∈ B such that

    u = a₁u₁ + ⋯ + aₙuₙ   and   v = b₁v₁ + ⋯ + bₘvₘ

If w₁, ..., wₙ, z₁, ..., zₘ ∈ W are the vectors for which we have specified

    T(u) = a₁T(u₁) + ⋯ + aₙT(uₙ) = a₁w₁ + ⋯ + aₙwₙ
    T(v) = b₁T(v₁) + ⋯ + bₘT(vₘ) = b₁z₁ + ⋯ + bₘzₘ

then for all a, b ∈ F we have

    T(au + bv) = T(a(a₁u₁ + ⋯ + aₙuₙ) + b(b₁v₁ + ⋯ + bₘvₘ))
               = aa₁T(u₁) + ⋯ + aaₙT(uₙ) + bb₁T(v₁) + ⋯ + bbₘT(vₘ)
               = aT(a₁u₁ + ⋯ + aₙuₙ) + bT(b₁v₁ + ⋯ + bₘvₘ)
               = aT(u) + bT(v)

(2) T is unique: suppose U ∈ L(V, W) is such that U(vᵢ) = wᵢ = T(vᵢ) for all i ∈ I. Then for any v ∈ V we have v = ∑ᵢ₌₁ⁿ aᵢvᵢ for unique aᵢ ∈ F, so that

    U(v) = ∑ᵢ₌₁ⁿ aᵢU(vᵢ) = ∑ᵢ₌₁ⁿ aᵢwᵢ = T(v)

and U = T. ∎

Theorem 2.6  If V, W, Z are vector spaces over the same field F and T ∈ L(V, W) and U ∈ L(W, Z), then U ∘ T ∈ L(V, Z).

Proof: If v, w ∈ V and a, b ∈ F, then

    (U ∘ T)(av + bw) = U(T(av + bw)) = U(aT(v) + bT(w)) = aU(T(v)) + bU(T(w)) = a(U ∘ T)(v) + b(U ∘ T)(w)

so U ∘ T ∈ L(V, Z). ∎

Theorem 2.7  If V and W are vector spaces over F, then L(V, W) is a vector space over F under ordinary addition and scalar multiplication of functions.

Proof: First, note that if T, U ∈ L(V, W) and a, b, c, d ∈ F, then for all v, w ∈ V

    (aT + bU)(cv + dw) = aT(cv + dw) + bU(cv + dw)
                       = acT(v) + adT(w) + bcU(v) + bdU(w)
                       = c(aT(v) + bU(v)) + d(aT(w) + bU(w))
                       = c(aT + bU)(v) + d(aT + bU)(w)

so aT + bU ∈ L(V, W), and L(V, W) is closed under addition and scalar multiplication. Moreover, L(V, W) satisfies VS1-8 because W is a vector space, and addition and scalar multiplication in L(V, W) are defined pointwise from the values which functions S, T, U ∈ L(V, W) take in W. ∎

2.3 Monomorphisms and Isomorphisms

Theorem 2.8  If V, W, Z are vector spaces over the same field F and T ∈ GL(V, W) and U ∈ GL(W, Z) are isomorphisms, then

(1) T⁻¹ ∈ L(W, V)
(2) (T⁻¹)⁻¹ = T, and hence T⁻¹ ∈ GL(W, V).
(3) (U ∘ T)⁻¹ = T⁻¹ ∘ U⁻¹ ∈ GL(Z, V).

Proof: (1) If w, z ∈ W and a, b ∈ F, then since T is bijective there are unique u, v ∈ V such that T(u) = w and T(v) = z. Hence T⁻¹(w) = u and T⁻¹(z) = v, so that

    T⁻¹(aw + bz) = T⁻¹(aT(u) + bT(v)) = T⁻¹(T(au + bv)) = au + bv = aT⁻¹(w) + bT⁻¹(z)

(2) T⁻¹ ∘ T = I_V and T ∘ T⁻¹ = I_W show that T is the inverse of T⁻¹, that is, T = (T⁻¹)⁻¹.
(3) First, any composition of bijections is a bijection, so U ∘ T is bijective. Moreover,

    (U ∘ T) ∘ (T⁻¹ ∘ U⁻¹) = U ∘ (T ∘ T⁻¹) ∘ U⁻¹ = U ∘ I_W ∘ U⁻¹ = U ∘ U⁻¹ = I_Z
    (T⁻¹ ∘ U⁻¹) ∘ (U ∘ T) = T⁻¹ ∘ (U⁻¹ ∘ U) ∘ T = T⁻¹ ∘ I_W ∘ T = T⁻¹ ∘ T = I_V

so by the uniqueness of inverses we must have (U ∘ T)⁻¹ = T⁻¹ ∘ U⁻¹ ∈ L(Z, V), and by (1) and (2) it is an isomorphism. Alternatively, we can get (3) from (2) via Theorem 2.5: U ∘ T : V → Z, so (U ∘ T)⁻¹ : Z → V, and of course T⁻¹ ∘ U⁻¹ : Z → V. Let α, β and γ be bases for V, W and Z, respectively, such that T(α) = β and U(β) = (U ∘ T)(α) = γ, which is possible by Theorem 2.11 below (isomorphisms carry bases to bases), so that (U ∘ T)⁻¹(γ) = α. But we also have U⁻¹(γ) = β and T⁻¹(β) = α, so that

    (T⁻¹ ∘ U⁻¹)(γ) = α

By the uniqueness part of Theorem 2.5, (U ∘ T)⁻¹ = T⁻¹ ∘ U⁻¹, and by (1) it is an isomorphism. ∎

Theorem 2.9  If V and W are vector spaces over the same field F and T ∈ L(V, W), then T is injective iff ker(T) = {0}.

Proof: If T is 1-1 and x ∈ ker(T), then T(x) = 0 = T(0), so that x = 0, whence ker(T) = {0}. Conversely, if ker(T) = {0} and T(x) = T(y), then

    0 = T(x) − T(y) = T(x − y)  ⟹  x − y = 0

or x = y, and so T is 1-1. ∎

Theorem 2.10  Let V and W be vector spaces over the same field and S ⊆ V. Then for any T ∈ L(V, W):

(1) T is injective iff it carries linearly independent sets into linearly independent sets.
(2) If T is injective, then S ⊆ V is linearly independent iff T(S) ⊆ W is linearly independent.

Proof: (1) Suppose T is 1-1 and S ⊆ V is linearly independent. If v₁, ..., vₙ ∈ S and

    0 = a₁T(v₁) + ⋯ + aₙT(vₙ) = T(a₁v₁ + ⋯ + aₙvₙ)

then a₁v₁ + ⋯ + aₙvₙ ∈ ker(T) = {0} (Theorem 2.9), so a₁v₁ + ⋯ + aₙvₙ = 0, and the linear independence of S gives a₁ = ⋯ = aₙ = 0. Hence {T(v₁), ..., T(vₙ)} is linearly independent, and since this is true for all vᵢ ∈ S, T(S) is linearly independent; T carries linearly independent sets into linearly independent sets.

Conversely, suppose T carries linearly independent sets into linearly independent sets, let B be a basis for V, and suppose T(u) = T(v) for some u, v ∈ V. Since u = a₁u₁ + ⋯ + aₙuₙ and v = b₁v₁ + ⋯ + bₘvₘ for unique aᵢ, bⱼ ∈ F and u₁, ..., uₙ, v₁, ..., vₘ ∈ B, grouping the basis vectors common to both representations (say uᵢ = vᵢ for i = 1, ..., k, after re-indexing) we have

    0 = T(u) − T(v) = T(u − v)
      = (a₁ − b₁)T(u₁) + ⋯ + (aₖ − bₖ)T(uₖ) + a_{k+1}T(u_{k+1}) + ⋯ + aₙT(uₙ) − b_{k+1}T(v_{k+1}) − ⋯ − bₘT(vₘ)

The uᵢ and vⱼ appearing here are distinct vectors of the linearly independent set B, so their images form a linearly independent set, and all the coefficients vanish: a_{k+1} = ⋯ = aₙ = b_{k+1} = ⋯ = bₘ = 0, which means m = n = k, and aᵢ = bᵢ for i = 1, ..., k. Consequently

    u = a₁u₁ + ⋯ + aₖuₖ = v

whence T is 1-1.

(2) If T is 1-1 and S ⊆ V is linearly independent, then by (1) T(S) is linearly independent. Conversely, if T is 1-1 and T(S) is linearly independent, then for any K = {v₁, ..., vₖ} ⊆ S and any vanishing combination a₁v₁ + ⋯ + aₖvₖ = 0 we have

    0 = T(0) = T(a₁v₁ + ⋯ + aₖvₖ) = a₁T(v₁) + ⋯ + aₖT(vₖ)

so a₁ = ⋯ = aₖ = 0 by the linear independence of T(S). Thus K is linearly independent, and since this is true for all such K ⊆ S, S is linearly independent. ∎

23 so T (S) is linearly independent, since this is true for all s i S. Conversely, if T (S) is linearly independent we have for any T (s 1 ),..., T (s n ) T (S) n ( n ) T GL(V,W ) n 0 = a i T (s i ) = T a i s i = T (0) = a i s i = 0 i=1 so S is linearly independent. (1), (2) i=1 i=1 T (S) lin. indep. = a 1 = = a n = 0 (3) S basis for V T (S) linearly independent in W and W = span(t (S)) = T (S) is a basis for W. We could also say that since T 1 is an isomorphism by Theorem 2.8, we have that since T (S) is a basis for W, S = T 1 (T (S)) is a basis for V. Theorem 2.12 (Isomorphisms Preserve Bases) If V and W are vector spaces over the same field F and B is a basis for V, then T GL(V, W ) iff T (B) is a basis for W. Proof: If T GL(V, W ) is an isomorphism, then T is bijective, so by Theorem 2.11 T (B) is a basis for W. Conversely, if T (B) is a basis for W, then u V,!a 1,..., a n F and v 1,..., v n B such that u = a 1 v a n v n. Therefore, 0 = T (u) = T (a 1 v a n v n ) = a 1 T (v 1 ) + + a n T (v n ) = a 1 = = a n = 0 so ker(t ) = {0}, whence by Theorem 2.9 T is injective. But since span(t (B)) = W, we have for all w W that!a 1,... a n F such that w = a 1 T (v 1 ) + + a n T (v n ) = T (a 1 v a n v n ), so u = a 1 v a n v n V such that w = T (u), and W im(t ). We obviously have im(t ) W, and so T is surjective. Therefore it is bijective and an isomorphism. Theorem 2.13 (Isomorphisms Preserve Dimension) If V and W are vector spaces over the same field F, then V = W dim(v ) = dim(w ) (2.20) Proof: V = W = T GL(V, W ), so B basis for V = T (B) basis for W, and dim(v ) = B = T (B) = dim(w ). Conversely, if dim(v ) = B = C = dim(w ), where C is a basis for W, then T : B C bijective. Extending T to V by linearity defines a unique T L(V, W ) by Theorem 2.5 and T is an isomorphism because it is surjective, im(t ) = W, and injective, ker(t ) = {0}, so V = W. Corollary 2.14 If V and W are vector spaces over the same field and T L(V, W ), then dim(v ) < dim(w ) = T cannot be onto and dim(v ) > dim(w ) = T cannot be 1-1. Corollary 2.15 If V is an n-dimensional vector space over F, where n N, then V = F n (2.21) If κ is any cardinal number, B is a set of cardinality κ and V is a κ-dimensional vector space over F, then V = (F B ) 0 (2.22) Proof: As a result of the fact that dim(f n ) = n and dim ( (F B ) 0 ) = κ, by the previous theorem. Corollary 2.16 If V and W are vector spaces over the same field F and T GL(V, W ), then for any subspace S of V we have that T (S) is a subspace of W and dim(s) = dim(t (S)). Proof: If S is a subspace of V, then x, y S and a, b F, we have ax+by S, and T (x), T (y) S and T (ax+by) = at (x)+bt (y) T (S) imply that S is a subspace of W, while dim(s) = dim(t (S)) follows from the theorem. 23

2.4 Rank-Nullity Theorem

Lemma 2.17  If V and W are vector spaces over the same field F and T ∈ L(V, W), then any complement of the kernel of T is isomorphic to the range of T, that is,

    V = ker(T) ⊕ ker(T)ᶜ  ⟹  ker(T)ᶜ ≅ im(T)    (2.23)

where ker(T)ᶜ is any complement of ker(T) (which exists by Corollary 1.16).

Proof: Let T_c = T|_{ker(T)ᶜ} : ker(T)ᶜ → im(T), and note that T_c is injective by Theorem 2.9 because

    ker(T_c) = ker(T) ∩ ker(T)ᶜ = {0}

Moreover, T_c is surjective, i.e. T_c(ker(T)ᶜ) = im(T): we obviously have T_c(ker(T)ᶜ) ⊆ im(T), while for the reverse inclusion suppose v ∈ im(T), say v = T(u) for some u ∈ V. By Theorem 1.25 there are unique s ∈ ker(T) and t ∈ ker(T)ᶜ such that u = s + t, so that

    v = T(u) = T(s + t) = T(s) + T(t) = 0 + T(t) = T_c(t) ∈ T_c(ker(T)ᶜ)

so im(T) ⊆ T_c(ker(T)ᶜ) and T_c is surjective. Thus T_c is an isomorphism, whence ker(T)ᶜ ≅ im(T). ∎

Theorem 2.18 (Rank-Nullity Theorem)  If V and W are vector spaces over the same field F and T ∈ L(V, W), then

    dim(V) = rank(T) + null(T)    (2.24)

Proof: By the lemma we have ker(T)ᶜ ≅ im(T), which by Theorem 2.13 implies dim(ker(T)ᶜ) = dim(im(T)) = rank(T), while by Theorem 1.23, V = ker(T) ⊕ ker(T)ᶜ implies

    dim(V) = dim(ker(T)) + dim(ker(T)ᶜ) = dim(ker(T)) + dim(im(T)) = null(T) + rank(T)

which completes the proof. ∎

Corollary 2.19  Let V and W be vector spaces over the same field F and T ∈ L(V, W). If dim(V) = dim(W) < ∞, then the following are equivalent:

(1) T is injective.
(2) T is surjective.
(3) rank(T) = dim(V)

Proof: By the Rank-Nullity Theorem, rank(T) + null(T) = dim(V), and so

    T is 1-1  ⟺  ker(T) = {0}                 (Theorem 2.9)
              ⟺  null(T) = 0
              ⟺  rank(T) = dim(V) = dim(W)    (Rank-Nullity Theorem)
              ⟺  dim(im(T)) = dim(W)
              ⟺  im(T) = W                    (Corollary 1.21)
              ⟺  T is onto

which completes the proof. ∎
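A numerical check of (2.24) over ℝ (ours, not part of the notes): build a map ℝ⁵ → ℝ³ of rank 2 as a product of thin matrices and confirm that rank plus nullity equals the dimension of the domain.

```python
# Numeric check of the Rank-Nullity Theorem for T(x) = M @ x on V = R^5.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 5))  # rank(T) = 2 a.s.
r = np.linalg.matrix_rank(M)
nullity = M.shape[1] - r
print(r + nullity == M.shape[1])   # True: rank(T) + null(T) = dim(V) = 5
```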

Matrix Representations of Linear Transformations and Changes of Coordinates

Matrix Representations of Linear Transformations and Changes of Coordinates Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

T ( a i x i ) = a i T (x i ).

T ( a i x i ) = a i T (x i ). Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Linear Algebra. A vector space (over R) is an ordered quadruple. such that V is a set; 0 V ; and the following eight axioms hold:

Linear Algebra. A vector space (over R) is an ordered quadruple. such that V is a set; 0 V ; and the following eight axioms hold: Linear Algebra A vector space (over R) is an ordered quadruple (V, 0, α, µ) such that V is a set; 0 V ; and the following eight axioms hold: α : V V V and µ : R V V ; (i) α(α(u, v), w) = α(u, α(v, w)),

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

( ) which must be a vector

( ) which must be a vector MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

More information

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

More information

THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

More information

MA106 Linear Algebra lecture notes

MA106 Linear Algebra lecture notes MA106 Linear Algebra lecture notes Lecturers: Martin Bright and Daan Krammer Warwick, January 2011 Contents 1 Number systems and fields 3 1.1 Axioms for number systems......................... 3 2 Vector

More information

Math 333 - Practice Exam 2 with Some Solutions

Math 333 - Practice Exam 2 with Some Solutions Math 333 - Practice Exam 2 with Some Solutions (Note that the exam will NOT be this long) Definitions (0 points) Let T : V W be a transformation Let A be a square matrix (a) Define T is linear (b) Define

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

More information

5. Linear algebra I: dimension

5. Linear algebra I: dimension 5. Linear algebra I: dimension 5.1 Some simple results 5.2 Bases and dimension 5.3 Homomorphisms and dimension 1. Some simple results Several observations should be made. Once stated explicitly, the proofs

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

More information

GROUP ALGEBRAS. ANDREI YAFAEV

GROUP ALGEBRAS. ANDREI YAFAEV GROUP ALGEBRAS. ANDREI YAFAEV We will associate a certain algebra to a finite group and prove that it is semisimple. Then we will apply Wedderburn s theory to its study. Definition 0.1. Let G be a finite

More information

Math 312 Homework 1 Solutions

Math 312 Homework 1 Solutions Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ] 1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

MATH1231 Algebra, 2015 Chapter 7: Linear maps

MATH1231 Algebra, 2015 Chapter 7: Linear maps MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Finite dimensional C -algebras

Finite dimensional C -algebras Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Math 67: Modern Linear Algebra (UC Davis, Fall 2011) Summary of lectures Prof. Dan Romik

Math 67: Modern Linear Algebra (UC Davis, Fall 2011) Summary of lectures Prof. Dan Romik Math 67: Modern Linear Algebra (UC Davis, Fall 2011) Summary of lectures Prof. Dan Romik [Version of November 30, 2011 this document will be updated occasionally with new material] Lecture 1 (9/23/11)

More information

Notes on Linear Algebra. Peter J. Cameron

Notes on Linear Algebra. Peter J. Cameron Notes on Linear Algebra Peter J. Cameron ii Preface Linear algebra has two aspects. Abstractly, it is the study of vector spaces over fields, and their linear maps and bilinear forms. Concretely, it is

More information

3. Prime and maximal ideals. 3.1. Definitions and Examples.

3. Prime and maximal ideals. 3.1. Definitions and Examples. COMMUTATIVE ALGEBRA 5 3.1. Definitions and Examples. 3. Prime and maximal ideals Definition. An ideal P in a ring A is called prime if P A and if for every pair x, y of elements in A\P we have xy P. Equivalently,

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

GROUPS ACTING ON A SET

GROUPS ACTING ON A SET GROUPS ACTING ON A SET MATH 435 SPRING 2012 NOTES FROM FEBRUARY 27TH, 2012 1. Left group actions Definition 1.1. Suppose that G is a group and S is a set. A left (group) action of G on S is a rule for

More information

ZORN S LEMMA AND SOME APPLICATIONS

ZORN S LEMMA AND SOME APPLICATIONS ZORN S LEMMA AND SOME APPLICATIONS KEITH CONRAD 1. Introduction Zorn s lemma is a result in set theory that appears in proofs of some non-constructive existence theorems throughout mathematics. We will

More information

Cartesian Products and Relations

Cartesian Products and Relations Cartesian Products and Relations Definition (Cartesian product) If A and B are sets, the Cartesian product of A and B is the set A B = {(a, b) :(a A) and (b B)}. The following points are worth special

More information

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION 4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Group Theory. Contents

Group Theory. Contents Group Theory Contents Chapter 1: Review... 2 Chapter 2: Permutation Groups and Group Actions... 3 Orbits and Transitivity... 6 Specific Actions The Right regular and coset actions... 8 The Conjugation

More information

Inner products on R n, and more

Inner products on R n, and more Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

More information

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d. DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

More information

The cover SU(2) SO(3) and related topics

The cover SU(2) SO(3) and related topics The cover SU(2) SO(3) and related topics Iordan Ganev December 2011 Abstract The subgroup U of unit quaternions is isomorphic to SU(2) and is a double cover of SO(3). This allows a simple computation of

More information

1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0

1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0 Solutions: Assignment 4.. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. 5 A The second and third columns are

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Section 4.4 Inner Product Spaces

Section 4.4 Inner Product Spaces Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

Vector Spaces 4.4 Spanning and Independence

Vector Spaces 4.4 Spanning and Independence Vector Spaces 4.4 and Independence October 18 Goals Discuss two important basic concepts: Define linear combination of vectors. Define Span(S) of a set S of vectors. Define linear Independence of a set

More information

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3 MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Factoring of Prime Ideals in Extensions

Factoring of Prime Ideals in Extensions Chapter 4 Factoring of Prime Ideals in Extensions 4. Lifting of Prime Ideals Recall the basic AKLB setup: A is a Dedekind domain with fraction field K, L is a finite, separable extension of K of degree

More information

Nilpotent Lie and Leibniz Algebras

Nilpotent Lie and Leibniz Algebras This article was downloaded by: [North Carolina State University] On: 03 March 2014, At: 08:05 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered

More information

Lecture 18 - Clifford Algebras and Spin groups

Lecture 18 - Clifford Algebras and Spin groups Lecture 18 - Clifford Algebras and Spin groups April 5, 2013 Reference: Lawson and Michelsohn, Spin Geometry. 1 Universal Property If V is a vector space over R or C, let q be any quadratic form, meaning

More information

RINGS OF ZERO-DIVISORS

RINGS OF ZERO-DIVISORS RINGS OF ZERO-DIVISORS P. M. COHN 1. Introduction. A well known theorem of algebra states that any integral domain can be embedded in a field. More generally [2, p. 39 ff. ], any commutative ring R with

More information

MAT188H1S Lec0101 Burbulla

MAT188H1S Lec0101 Burbulla Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

Inner Product Spaces. 7.1 Inner Products

Inner Product Spaces. 7.1 Inner Products 7 Inner Product Spaces 71 Inner Products Recall that if z is a complex number, then z denotes the conjugate of z, Re(z) denotes the real part of z, and Im(z) denotes the imaginary part of z By definition,

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

Mathematics for Computer Science/Software Engineering. Notes for the course MSM1F3 Dr. R. A. Wilson

Mathematics for Computer Science/Software Engineering. Notes for the course MSM1F3 Dr. R. A. Wilson Mathematics for Computer Science/Software Engineering Notes for the course MSM1F3 Dr. R. A. Wilson October 1996 Chapter 1 Logic Lecture no. 1. We introduce the concept of a proposition, which is a statement

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

Quotient Rings and Field Extensions

Quotient Rings and Field Extensions Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

More information

BILINEAR FORMS KEITH CONRAD

BILINEAR FORMS KEITH CONRAD BILINEAR FORMS KEITH CONRAD The geometry of R n is controlled algebraically by the dot product. We will abstract the dot product on R n to a bilinear form on a vector space and study algebraic and geometric

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Row Ideals and Fibers of Morphisms

Row Ideals and Fibers of Morphisms Michigan Math. J. 57 (2008) Row Ideals and Fibers of Morphisms David Eisenbud & Bernd Ulrich Affectionately dedicated to Mel Hochster, who has been an inspiration to us for many years, on the occasion

More information

Lemma 5.2. Let S be a set. (1) Let f and g be two permutations of S. Then the composition of f and g is a permutation of S.

Lemma 5.2. Let S be a set. (1) Let f and g be two permutations of S. Then the composition of f and g is a permutation of S. Definition 51 Let S be a set bijection f : S S 5 Permutation groups A permutation of S is simply a Lemma 52 Let S be a set (1) Let f and g be two permutations of S Then the composition of f and g is a

More information

MATH10040 Chapter 2: Prime and relatively prime numbers

MATH10040 Chapter 2: Prime and relatively prime numbers MATH10040 Chapter 2: Prime and relatively prime numbers Recall the basic definition: 1. Prime numbers Definition 1.1. Recall that a positive integer is said to be prime if it has precisely two positive

More information

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A = MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

4.1 Modules, Homomorphisms, and Exact Sequences

4.1 Modules, Homomorphisms, and Exact Sequences Chapter 4 Modules We always assume that R is a ring with unity 1 R. 4.1 Modules, Homomorphisms, and Exact Sequences A fundamental example of groups is the symmetric group S Ω on a set Ω. By Cayley s Theorem,

More information

COMMUTATIVE RINGS. Definition: A domain is a commutative ring R that satisfies the cancellation law for multiplication:

COMMUTATIVE RINGS. Definition: A domain is a commutative ring R that satisfies the cancellation law for multiplication: COMMUTATIVE RINGS Definition: A commutative ring R is a set with two operations, addition and multiplication, such that: (i) R is an abelian group under addition; (ii) ab = ba for all a, b R (commutative

More information

4. CLASSES OF RINGS 4.1. Classes of Rings class operator A-closed Example 1: product Example 2:

4. CLASSES OF RINGS 4.1. Classes of Rings class operator A-closed Example 1: product Example 2: 4. CLASSES OF RINGS 4.1. Classes of Rings Normally we associate, with any property, a set of objects that satisfy that property. But problems can arise when we allow sets to be elements of larger sets

More information

(a) Write each of p and q as a polynomial in x with coefficients in Z[y, z]. deg(p) = 7 deg(q) = 9

(a) Write each of p and q as a polynomial in x with coefficients in Z[y, z]. deg(p) = 7 deg(q) = 9 Homework #01, due 1/20/10 = 9.1.2, 9.1.4, 9.1.6, 9.1.8, 9.2.3 Additional problems for study: 9.1.1, 9.1.3, 9.1.5, 9.1.13, 9.2.1, 9.2.2, 9.2.4, 9.2.5, 9.2.6, 9.3.2, 9.3.3 9.1.1 (This problem was not assigned

More information

it is easy to see that α = a

it is easy to see that α = a 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Chapter 13: Basic ring theory

Chapter 13: Basic ring theory Chapter 3: Basic ring theory Matthew Macauley Department of Mathematical Sciences Clemson University http://www.math.clemson.edu/~macaule/ Math 42, Spring 24 M. Macauley (Clemson) Chapter 3: Basic ring

More information

Section 1.7 22 Continued

Section 1.7 22 Continued Section 1.5 23 A homogeneous equation is always consistent. TRUE - The trivial solution is always a solution. The equation Ax = 0 gives an explicit descriptions of its solution set. FALSE - The equation

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

Classification of Cartan matrices

Classification of Cartan matrices Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

More information

16.3 Fredholm Operators

16.3 Fredholm Operators Lectures 16 and 17 16.3 Fredholm Operators A nice way to think about compact operators is to show that set of compact operators is the closure of the set of finite rank operator in operator norm. In this

More information

How To Prove The Dirichlet Unit Theorem

How To Prove The Dirichlet Unit Theorem Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

More information

5 Homogeneous systems

5 Homogeneous systems 5 Homogeneous systems Definition: A homogeneous (ho-mo-jeen -i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x +... + a n x n =.. a m

More information

FINITE DIMENSIONAL ORDERED VECTOR SPACES WITH RIESZ INTERPOLATION AND EFFROS-SHEN S UNIMODULARITY CONJECTURE AARON TIKUISIS

FINITE DIMENSIONAL ORDERED VECTOR SPACES WITH RIESZ INTERPOLATION AND EFFROS-SHEN S UNIMODULARITY CONJECTURE AARON TIKUISIS FINITE DIMENSIONAL ORDERED VECTOR SPACES WITH RIESZ INTERPOLATION AND EFFROS-SHEN S UNIMODULARITY CONJECTURE AARON TIKUISIS Abstract. It is shown that, for any field F R, any ordered vector space structure

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Chapter 20. Vector Spaces and Bases

Chapter 20. Vector Spaces and Bases Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit

More information

INTRODUCTORY SET THEORY

INTRODUCTORY SET THEORY M.Sc. program in mathematics INTRODUCTORY SET THEORY Katalin Károlyi Department of Applied Analysis, Eötvös Loránd University H-1088 Budapest, Múzeum krt. 6-8. CONTENTS 1. SETS Set, equal sets, subset,

More information