An Advanced Course in Linear Algebra. Jim L. Brown


July 20, 2015

Contents

1 Introduction
2 Vector spaces
  2.1 Getting started
  2.2 Bases and dimension
  2.3 Direct sums and quotient spaces
  2.4 Dual spaces
  2.5 Basics of module theory
  2.6 Problems
3 Choosing coordinates
  3.1 Linear transformations and matrices
  3.2 Transpose of a matrix via the dual map
  3.3 Problems
4 Structure theorems for linear transformations
  4.1 Invariant subspaces
  4.2 T-invariant complements
  4.3 Rational canonical form
  4.4 Jordan canonical form
  4.5 Diagonalizable operators
  4.6 Canonical forms via modules
  4.7 Problems
5 Bilinear and sesquilinear forms
  5.1 Basic definitions and facts
  5.2 Symmetric, skew-symmetric, and Hermitian forms
  5.3 The adjoint map
  5.4 Problems
6 Inner product spaces and the spectral theorem
  6.1 Inner product spaces
  6.2 The spectral theorem
  6.3 Polar decomposition and the singular value theorem
  6.4 Problems
7 Tensor products, exterior algebras, and the determinant
  7.1 Extension of scalars
  7.2 Tensor products of vector spaces
  7.3 Alternating forms, exterior powers, and the determinant
  7.4 Tensor products and exterior powers of modules
  7.5 Problems
A Groups, rings, and fields: a quick recap

Chapter 2: Vector spaces

In this chapter we give the basic definitions and facts having to do with vector spaces that will be used throughout the rest of the course. Many of the results in this chapter are covered in an undergraduate linear algebra class in terms of matrices. We often convert the language back to that of matrices, but the focus is on abstract vector spaces and linear transformations. Throughout this chapter F is a field.

2.1 Getting started

We begin with the definition of a vector space.

Definition. Let V be a non-empty set with an operation V × V → V, (v, w) ↦ v + w, referred to as addition, and an operation F × V → V, (c, v) ↦ cv, referred to as scalar multiplication, so that V satisfies the following properties:
1. (V, +) is an abelian group;
2. c(v + w) = cv + cw for all c ∈ F, v, w ∈ V;
3. (c + d)v = cv + dv for all c, d ∈ F, v ∈ V;
4. (cd)v = c(dv) for all c, d ∈ F, v ∈ V;
5. 1_F · v = v for all v ∈ V.
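The distributivity and compatibility axioms can be spot-checked mechanically. Below is a minimal sketch for the field F = Q, using Python's exact Fraction arithmetic; the helper names vadd and smul and the sample values are ours, not from the notes.

```python
from fractions import Fraction

# Vectors in Q^3 represented as tuples of Fractions.
def vadd(v, w):
    """Componentwise addition of two vectors."""
    return tuple(a + b for a, b in zip(v, w))

def smul(c, v):
    """Scalar multiplication c * v, componentwise."""
    return tuple(c * a for a in v)

c, d = Fraction(2, 3), Fraction(-5, 7)
v = (Fraction(1), Fraction(1, 2), Fraction(3))
w = (Fraction(0), Fraction(4), Fraction(-1, 3))

# Axiom 2: c(v + w) = cv + cw
assert smul(c, vadd(v, w)) == vadd(smul(c, v), smul(c, w))
# Axiom 3: (c + d)v = cv + dv
assert smul(c + d, v) == vadd(smul(c, v), smul(d, v))
# Axiom 4: (cd)v = c(dv)
assert smul(c * d, v) == smul(c, smul(d, v))
# Axiom 5: 1_F * v = v
assert smul(Fraction(1), v) == v
```

Exact rational arithmetic avoids the floating-point issues that would make such equality checks unreliable over R.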

We say V is a vector space (or an F-vector space). If we need to emphasize the addition and scalar multiplication we write (V, +, ·) instead of just V. We call elements of V vectors and elements of F scalars.

We now recall some familiar examples from undergraduate linear algebra. The verification that these are actually vector spaces is left as an exercise.

Example. Set F^n = {t(a_1, ..., a_n) : a_i ∈ F}, thought of as column vectors. Then F^n is a vector space with addition given componentwise, t(a_1, ..., a_n) + t(b_1, ..., b_n) = t(a_1 + b_1, ..., a_n + b_n), and scalar multiplication given by c · t(a_1, ..., a_n) = t(ca_1, ..., ca_n).

Example. Let F and K be fields and suppose F ⊂ K. Then K is an F-vector space. The typical example of this one sees in undergraduate linear algebra is C as an R-vector space.

Example. Let m, n ∈ Z_{>0}. Set Mat_{m,n}(F) to be the m by n matrices with entries in F. This forms an F-vector space with addition being matrix addition and scalar multiplication given by multiplying each entry of the matrix by the scalar. In the case m = n we write Mat_n(F) for Mat_{n,n}(F).

Example. Let n ∈ Z_{≥0}. Define P_n(F) = {f ∈ F[x] : deg(f) ≤ n}. This is an F-vector space. We also have that F[x] = ∪_{n≥0} P_n(F) is an F-vector space.

Example. Let U and V be subsets of R. Define C^0(U, V) to be the set of continuous functions from U to V. This forms an R-vector space. Let f, g ∈ C^0(U, V) and c ∈ R. Addition is defined point-wise, namely, (f + g)(x) = f(x) + g(x) for all x ∈ U. Scalar multiplication is given point-wise as well: (cf)(x) = cf(x) for all x ∈ U.

More generally, for k ∈ Z_{≥0} we let C^k(U, V) be the functions from U to V so that f^(j) is continuous for all 0 ≤ j ≤ k, where f^(j) denotes the jth derivative of f. This is an R-vector space. If U = V we write C^k(U) for C^k(U, U). We set C^∞(U, V) to be the set of smooth functions, i.e., f ∈ C^∞(U, V) if f ∈ C^k(U, V) for all k ≥ 0. This is an R-vector space. If U = V we write C^∞(U) for C^∞(U, U).

Example. Set F^∞ = {t(a_1, a_2, ...) : a_i ∈ F, a_i = 0 for all but finitely many i}. This is an F-vector space. Set F^N = {t(a_1, a_2, ...) : a_i ∈ F}. This is an F-vector space as well.

Example. Consider the sphere S^(n−1) = {(x_1, x_2, ..., x_n) ∈ R^n : x_1^2 + x_2^2 + ··· + x_n^2 = 1}. Let p = (a_1, ..., a_n) be a point on the sphere. We can realize the sphere as the w = 1 level surface of the function w = f(x_1, ..., x_n) = x_1^2 + ··· + x_n^2. The gradient of f is ∇f = 2x_1 e_1 + 2x_2 e_2 + ··· + 2x_n e_n, where e_1 = (1, 0, ..., 0), e_2 = (0, 1, 0, ..., 0), ..., e_n = (0, ..., 0, 1). This gives that the tangent plane at the point p is given by 2a_1(x_1 − a_1) + 2a_2(x_2 − a_2) + ··· + 2a_n(x_n − a_n) = 0. This is pictured in the n = 2 case (figure omitted).

Note the tangent plane is not a vector space because it does not contain a zero vector, but if we shift it to the origin we have a vector space called the tangent space of S^(n−1) at p: T_p(S^(n−1)) = {(x_1, ..., x_n) ∈ R^n : 2a_1 x_1 + ··· + 2a_n x_n = 0}. (The shifted tangent space was shown in a second graph; figure omitted.)

Lemma. Let V be an F-vector space.
1. The zero element 0_V ∈ V is unique.
2. We have 0_F · v = 0_V for all v ∈ V.
3. We have (−1_F) · v = −v for all v ∈ V.

Proof. Suppose that 0 and 0′ both satisfy the conditions necessary to be 0_V, i.e.,
0 + v = v = v + 0 for all v ∈ V,
0′ + v = v = v + 0′ for all v ∈ V.
We apply the first equation with v = 0′ to obtain 0′ = 0 + 0′, and the second with v = 0 to obtain 0 + 0′ = 0. Thus, we have that 0 = 0′.

Observe that 0_F · v = (0_F + 0_F) · v = 0_F · v + 0_F · v. So if we subtract 0_F · v from both sides we get that 0_V = 0_F · v.

Finally, we have (−1_F) · v + v = (−1_F) · v + (1_F) · v = (−1_F + 1_F) · v = 0_F · v = 0_V, i.e., (−1_F) · v + v = 0_V. So (−1_F) · v is the additive inverse of v, i.e., (−1_F) · v = −v.

Note we will often drop the subscripts on 0 and 1 when they are clear from context.

Definition. Let V be an F-vector space and let W be a nonempty subset of V. If W is an F-vector space with the same operations as V we call W a subspace (or an F-subspace) of V.

Example. We saw above that V = C is an R-vector space. Set W = {x + 0 · i : x ∈ R}. Then W is an R-subspace of V. We have that V is also a C-vector space. However, W is not a C-subspace of V as it is not closed under scalar multiplication by i.

Example. Let V = R^2. In the graph (figure omitted) W_1 is a subspace but W_2 is not. It is easy to check that any line passing through the origin is a subspace, but a line not passing through the origin cannot be a subspace because it does not contain the zero element.

Example. Let n ∈ Z_{≥0}. We have P_j(F) is a subspace of P_n(F) for all 0 ≤ j ≤ n.

Example. The F-vector space F^∞ is a subspace of F^N.

Example. Let V = Mat_n(F). The set of diagonal matrices is a subspace of V.

The following lemma gives an easy-to-check criterion for when one has a subspace. The proof is left as an exercise.

Lemma. Let V be an F-vector space and W ⊂ V. Then W is a subspace of V if
1. W is nonempty;
2. W is closed under addition;
3. W is closed under scalar multiplication.

As is customary in algebra, in order to study objects we first introduce the subobjects (subspaces in our case) and then the appropriate maps between the objects of interest. In our case these are linear transformations.

Definition. Let V and W be F-vector spaces and let T : V → W be a map. We say T is a linear transformation (or F-linear) if
1. T(v_1 + v_2) = T(v_1) + T(v_2) for all v_1, v_2 ∈ V;
2. T(cv) = cT(v) for all c ∈ F, v ∈ V.
The collection of all F-linear maps from V to W is denoted Hom_F(V, W).

Example. Define id_V : V → V by id_V(v) = v for all v ∈ V. Then id_V ∈ Hom_F(V, V). We refer to this as the identity transformation.

Example. Let T : C → C be defined by T(z) = conj(z), the complex conjugate. Since C is a C- and an R-vector space, it is natural to ask if T is C-linear or R-linear. Observe we have T(z + w) = conj(z + w) = conj(z) + conj(w) = T(z) + T(w) for all z, w ∈ C. However, if we let c ∈ C we have T(cv) = conj(cv) = conj(c) T(v). Note that if T(v) ≠ 0, then T(cv) = cT(v) if and only if conj(c) = c. Thus, T is not C-linear but is R-linear.

Example. Let m, n ∈ Z_{≥1}. Let A ∈ Mat_{m,n}(F). Define T_A : F^n → F^m by T_A(x) = Ax. This is an F-linear map.

Example. Set V = C^∞(R). In this example we give several linear maps that arise in calculus. Given any a ∈ R, define E_a : V → R, f ↦ f(a). We have E_a ∈ Hom_R(V, R) for every a ∈ R. Define D : V → V, f ↦ f′. We have D ∈ Hom_R(V, V).

Let a ∈ R. Define I_a : V → V by sending f to the function x ↦ ∫_a^x f(t) dt. We have I_a ∈ Hom_R(V, V) for every a ∈ R. Let a ∈ R. Define Ẽ_a : V → V, f ↦ f(a), where here we view f(a) as the constant function. We have Ẽ_a ∈ Hom_R(V, V). We can use these linear maps to rephrase the fundamental theorems of calculus as follows:
1. D ∘ I_a = id_V;
2. I_a ∘ D = id_V − Ẽ_a.

Exercise. Let V and W be F-vector spaces. Show that Hom_F(V, W) is an F-vector space.

Lemma. Let T ∈ Hom_F(V, W). Then T(0_V) = 0_W.

Proof. This is proved using the properties of the additive identity element: T(0_V) = T(0_V + 0_V) = T(0_V) + T(0_V), i.e., T(0_V) = T(0_V) + T(0_V). Now, subtract T(0_V) from both sides to obtain 0_W = T(0_V) as desired.

Exercise. Let U, V, W be F-vector spaces. Let S ∈ Hom_F(U, V) and T ∈ Hom_F(V, W). Then T ∘ S ∈ Hom_F(U, W).

In the next chapter we will focus on matrices and their relation with linear maps much more closely, but we have the following elementary result that we can prove immediately.

Lemma. Let m, n ∈ Z_{≥1}.
1. Let A, B ∈ Mat_{m,n}(F). Then A = B if and only if T_A = T_B.
2. Every T ∈ Hom_F(F^n, F^m) is given by T_A for some A ∈ Mat_{m,n}(F).

Proof. If A = B then clearly we must have T_A = T_B from the definition. Conversely, suppose T_A = T_B. Consider the standard vectors e_1 = t(1, 0, ..., 0), e_2 = t(0, 1, 0, ..., 0), ..., e_n = t(0, ..., 0, 1). We have T_A(e_j) = T_B(e_j) for j = 1, ..., n. However, it is easy to see that T_A(e_j) is the jth column of A. Thus, A = B.

Let T ∈ Hom_F(F^n, F^m). Set A = (T(e_1) | ··· | T(e_n)), the matrix whose jth column is T(e_j). It is now easy to check that T = T_A.

In general, we do not multiply vectors. However, if V = Mat_n(F), we can multiply vectors in here! So V is a vector space, but also a ring. In this case, V is an example of an F-algebra. Though we will not be concerned with algebras in these notes, we give the definition here for the sake of completeness.

Definition. An F-algebra is a ring A with a multiplicative identity together with a ring homomorphism f : F → A mapping 1_F to 1_A so that f(F) is contained in the center of A, i.e., if a ∈ A and f(c) ∈ f(F), then af(c) = f(c)a.

Exercise. Show that F[x] is an F-algebra.

The fact that we can multiply in Mat_n(F) is due to the fact that we can compose linear maps. In fact, it is natural to define matrix multiplication to be the matrix associated to the composition of the associated linear maps. This definition explains the bizarre multiplication rule defined on matrices.

Definition. Let m, n and p be positive integers. Let A ∈ Mat_{m,n}(F), B ∈ Mat_{n,p}(F). Then AB ∈ Mat_{m,p}(F) is the matrix corresponding to T_A ∘ T_B.

Exercise. Show this definition of matrix multiplication agrees with that given in undergraduate linear algebra class.

Definition. Let T ∈ Hom_F(V, W) be invertible, i.e., there exists T^(−1) ∈ Hom_F(W, V) so that T ∘ T^(−1) = id_W and T^(−1) ∘ T = id_V. We say T is an isomorphism, and we say V and W are isomorphic and write V ≅ W.

Exercise. Let T ∈ Hom_F(V, W). Show that T is an isomorphism if and only if T is bijective.

Example. Let V = R^2, W = C. These are both R-vector spaces. Define T : R^2 → C by (x, y) ↦ x + iy. We have T ∈ Hom_R(V, W). It is easy to see this is an isomorphism with inverse given by T^(−1)(x + iy) = (x, y). Thus, C ≅ R^2 as R-vector spaces.

Example. Let V = P_n(F) and W = F^(n+1). Define a map T : V → W by sending a_0 + a_1 x + ··· + a_n x^n to t(a_0, ..., a_n). It is elementary to check this is an isomorphism.
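The correspondence between matrices and linear maps can be illustrated numerically: the jth column of A is T_A(e_j), and the matrix product represents composition. A small sketch with our own sample matrices:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])   # A in Mat_{3,2}(R)
B = np.array([[0., 1., -1.], [2., 0., 1.]])    # B in Mat_{2,3}(R)

def T_A(x):
    """The linear map F^2 -> F^3 given by x -> Ax."""
    return A @ x

# Recover A column by column from its action on the standard basis e_1, e_2.
e = np.eye(2)
A_recovered = np.column_stack([T_A(e[:, j]) for j in range(2)])
assert np.array_equal(A_recovered, A)

# (AB)x = A(Bx): the product matrix AB represents the composition T_A o T_B.
x = np.array([1., -2., 3.])
assert np.allclose((A @ B) @ x, A @ (B @ x))
```

This is exactly why the "bizarre" row-times-column rule is the right definition: it is the unique matrix operation compatible with composing the associated maps.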
Example. Let V = Mat_n(F) and W = F^(n²). Define T : V → W by sending A = (a_{i,j}) to t(a_{1,1}, a_{1,2}, ..., a_{n,n}). This gives an isomorphism between V and W.

Definition. Let T ∈ Hom_F(V, W). The kernel of T is given by ker T = {v ∈ V : T(v) = 0_W}. The image of T is given by Im T = {w ∈ W : there exists v ∈ V with T(v) = w} = T(V).

One should note that in undergraduate linear algebra one often refers to ker T as the null space and Im T as the column space when T = T_A for some A ∈ Mat_{m,n}(F).

Lemma. Let T ∈ Hom_F(V, W). Then ker T is a subspace of V and Im T is a subspace of W.

Proof. First, observe that 0_V ∈ ker T, so ker T is nonempty. Now let v_1, v_2 ∈ ker T and c ∈ F. Then we have T(v_1 + v_2) = T(v_1) + T(v_2) = 0_W + 0_W = 0_W and T(cv_1) = cT(v_1) = c · 0_W = 0_W. Thus, ker T is a subspace of V.

Next we show that Im T is a subspace. We have Im T is nonempty because T(0_V) = 0_W ∈ Im T. Let w_1, w_2 ∈ Im T and c ∈ F. There exist v_1, v_2 ∈ V such that T(v_1) = w_1 and T(v_2) = w_2. Thus, w_1 + w_2 = T(v_1) + T(v_2) = T(v_1 + v_2) and cw_1 = cT(v_1) = T(cv_1). Thus, w_1 + w_2 and cw_1 are both in Im T, i.e., Im T is a subspace of W.

Example. Let m, n ∈ Z_{>0} with m > n. Define T : F^m → F^n by T(t(a_1, a_2, ..., a_m)) = t(a_1, a_2, ..., a_n).

The image of this map is F^n and the kernel is given by ker T = {t(0, ..., 0, a_{n+1}, ..., a_m) : a_i ∈ F}. It is easy to see that ker T ≅ F^(m−n).

Define S : F^n → F^m by S(t(a_1, ..., a_n)) = t(a_1, ..., a_n, 0, ..., 0), where there are m − n zeroes. The image of this map is isomorphic to F^n and the kernel is trivial.

Example. Define T : Q[x] → R by T(f(x)) = f(√3). This is a Q-linear map. The kernel of this map consists of those polynomials f ∈ Q[x] satisfying f(√3) = 0, i.e., those polynomials f for which (x^2 − 3) | f. Thus, ker T = (x^2 − 3)Q[x]. The image of this map clearly contains Q(√3) = {a + b√3 : a, b ∈ Q}. Let α ∈ Im(T). Then there exists f ∈ Q[x] so that f(√3) = α. Write f = q(x)(x^2 − 3) + r(x) with deg r < 2. Then α = f(√3) = q(√3)((√3)^2 − 3) + r(√3) = r(√3). Since deg r < 2, we have α = r(√3) = a + b√3 for some a, b ∈ Q. Thus, Im(T) = Q(√3).

2.2 Bases and dimension

One of the most important features of vector spaces is the fact that they have bases. This essentially means that one can always find a nice subset (in many interesting cases a finite set) that will completely describe the space. Many of the concepts of this section are likely familiar from undergraduate linear algebra. Throughout this section V denotes an F-vector space unless indicated otherwise.
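The projection example above can be checked numerically. This sketch uses our own concrete sizes m = 3, n = 2 and verifies the claims about the image and the kernel:

```python
import numpy as np

# The projection T : F^3 -> F^2, t(a1, a2, a3) -> t(a1, a2), as T_A.
A = np.array([[1., 0., 0.],
              [0., 1., 0.]])

# Image is all of F^2: A has full rank 2.
assert np.linalg.matrix_rank(A) == 2

# Vectors t(0, 0, a3) lie in the kernel.
assert np.array_equal(A @ np.array([0., 0., 7.]), np.zeros(2))

# dim ker T = 3 - rank = 1, matching ker T = F^(m-n) with m = 3, n = 2.
assert A.shape[1] - np.linalg.matrix_rank(A) == 1
```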

Definition. Let B = {v_i}_{i∈I} be a subset of V. We say v ∈ V is an F-linear combination of B, and write v ∈ span_F B, if there exists a finite collection {c_1, ..., c_n} ⊂ F such that v = Σ_{i=1}^n c_i v_i.

Definition. Let B = {v_i}_{i∈I} be a subset of V. We say B is F-linearly independent if whenever we have a finite linear combination Σ c_i v_i = 0 for some c_i ∈ F we must have c_i = 0 for all i.

Definition. Let B = {v_i} be a subset of V. We say B is an F-basis of V if
1. span_F B = V;
2. B is linearly independent.

As is usual, if F is clear from context we drop it from the notation and simply say linear combination, linearly independent, and basis. The first goal of this section is to show every vector space has a basis. We will need Zorn's lemma to prove this. We recall Zorn's lemma for the convenience of the reader.

Theorem (Zorn's Lemma). Let X be any partially ordered set with the property that every chain in X has an upper bound. Then X contains at least one maximal element.

Theorem. Let V be a vector space. Let A ⊂ C be subsets of V. Furthermore, assume A is linearly independent and C spans V. Then there exists a basis B of V satisfying A ⊂ B ⊂ C. In particular, if we set A = ∅, C = V, then this says that V has a basis.

Proof. Let X = {B ⊂ V : A ⊂ B ⊂ C, B is linearly independent}. We have X is a partially ordered set under inclusion and is nonempty because A ∈ X. Given any chain in X, the union of its members is again an element of X (any finite linear combination involves only finitely many vectors, which all lie in some single member of the chain), so every chain in X has an upper bound. This allows us to apply Zorn's Lemma to conclude X has a maximal element, say B. If span_F B = V, we are done. Suppose span_F B ≠ V. Since C spans V, there exists v ∈ C such that v ∉ span_F B. However, this gives B′ = B ∪ {v} is an element of X (it is linearly independent because v ∉ span_F B) that properly contains B, a contradiction. Thus, B is the desired basis.

One should note that the above proof gives that every vector space has a basis, but it does not provide an algorithm for constructing a basis. We will come back to this issue momentarily.
We first deal with the easier case that there is a finite collection of vectors B so that span_F B = V. In this case we say V is a finite dimensional F-vector space. We will make use of the following fact from undergraduate linear algebra.

Lemma ([3, Theorem 1.1]). A homogeneous system of m linear equations in n unknowns with m < n always has nontrivial solutions.

Corollary. Let B ⊂ V be such that span_F B = V and #B = m. Then any set with more than m elements cannot be linearly independent.

Proof. Let C = {w_1, ..., w_n} with n > m. Let B = {v_1, ..., v_m} be a spanning set for V. For each i write w_i = Σ_{j=1}^m a_{ji} v_j. Consider the system of equations Σ_{i=1}^n a_{ji} x_i = 0 for j = 1, ..., m. The previous lemma gives a solution (c_1, ..., c_n) to this system with (c_1, ..., c_n) ≠ (0, ..., 0). We have
0 = Σ_{j=1}^m (Σ_{i=1}^n a_{ji} c_i) v_j = Σ_{i=1}^n c_i Σ_{j=1}^m a_{ji} v_j = Σ_{i=1}^n c_i w_i.
This shows C is not linearly independent.

Theorem. Let V be a finite dimensional F-vector space. Any two bases of V have the same number of elements.

Proof. Let B and C be bases of V. Suppose that #B = m and #C = n. Since B is a basis, it is a spanning set, and since C is a basis it is linearly independent. The previous result now gives n ≤ m. We now reverse the roles of B and C to get the other direction.

Now that we have shown any two bases of a finite dimensional vector space have the same number of elements we can make the following definition.

Definition. Let V be a finite dimensional vector space. The number of elements in an F-basis of V is called the F-dimension of V and written dim_F V. Note that if V is not finite dimensional we will write dim_F V = ∞. The notion of basis is not very useful in this context, as we will see below, so we do not spend much time on it here.

The following theorem does not contain anything new, but it is useful to summarize some of the important facts to this point.

Theorem. Let V be a finite dimensional F-vector space with dim_F V = n. Let C ⊂ V with #C = m.
1. If m > n, then C is not linearly independent.
2. If m < n, then span_F C ≠ V.
3. If m = n, then the following are equivalent: C is a basis; C is linearly independent; C spans V.

This theorem immediately gives the following corollary.

Corollary. Let W ⊂ V be a subspace. Then dim_F W ≤ dim_F V. If dim_F V < ∞, then V = W if and only if dim_F V = dim_F W.

We now give several examples of bases of some familiar vector spaces. It is left as an exercise to check these are actually bases.

Example. Let V = F^n. Set e_1 = t(1, 0, ..., 0), e_2 = t(0, 1, 0, ..., 0), ..., e_n = t(0, ..., 0, 1). Then E = {e_1, ..., e_n} is a basis for V and is often referred to as the standard basis. We have dim_F F^n = n.

Example. Let V = C. From the previous example we have B = {1} is a basis of V as a C-vector space and dim_C C = 1. We saw before that C is also an R-vector space. In this case a basis is given by C = {1, i} and dim_R C = 2.

Example. Let V = Mat_{m,n}(F). Let e_{i,j} be the matrix with 1 in the (i, j)th position and zeros elsewhere. Then B = {e_{i,j}} is a basis for V over F and dim_F Mat_{m,n}(F) = mn.

Example. Set V = sl_2(C), where sl_2(C) = {(a b; c d) ∈ Mat_2(C) : a + d = 0}. One can check this is a C-vector space. Moreover, it is a proper subspace of Mat_2(C) because (1 0; 0 0) is not in sl_2(C). Set B = {(0 1; 0 0), (1 0; 0 −1), (0 0; 1 0)}. It is easy to see that B is linearly independent. We know that dim_C sl_2(C) < 4 because it is a proper subspace of Mat_2(C). This gives dim_C sl_2(C) = 3 and B is a basis. The vector space sl_2(C) is actually the Lie algebra associated to the Lie group SL_2(C). There is a very rich theory of Lie algebras and Lie groups, but this is too far afield to delve into here. (Note the algebra multiplication on sl_2(C) is not matrix multiplication; it is given by X · Y = [X, Y] = XY − YX.)

Example. Let f(x) ∈ F[x] be a polynomial of degree n. We can use this polynomial to split F[x] into equivalence classes, analogously to how one creates the field F_p. For details of this construction, please see Example A.0.25 found in Appendix A. The resulting set F[x]/(f(x)) is an F-vector space under addition given by [g(x)] + [h(x)] := [g(x) + h(x)] and scalar multiplication given by c[g(x)] := [cg(x)]. Note that given any g(x) ∈ F[x] we can use the Euclidean algorithm on F[x] (polynomial long division) to find unique polynomials q(x) and r(x) satisfying g(x) = f(x)q(x) + r(x) where r(x) = 0 or deg r < deg f. This shows that each nonzero equivalence class has a representative r(x) with deg r < deg f. Thus, we can write F[x]/(f(x)) = {[r(x)] : r(x) ∈ F[x] with r = 0 or deg r < deg f}. From this one can see that a spanning set for F[x]/(f(x)) is given by {[1], [x], ..., [x^(n−1)]}. This set is also seen to be linearly independent by observing that if there exist a_0, ..., a_{n−1} ∈ F so that [a_0 + a_1 x + ··· + a_{n−1} x^(n−1)] = [0], this means f(x) divides a_0 + a_1 x + ··· + a_{n−1} x^(n−1). However, this is impossible unless a_0 + a_1 x + ··· + a_{n−1} x^(n−1) = 0, because deg f = n. Thus a_0 = a_1 = ··· = a_{n−1} = 0. This gives that F[x]/(f(x)) is an F-vector space of dimension n.

Exercise. Show that R[x]/(x^2 + 1) ≅ C as R-vector spaces.

The following lemma gives an alternate way to check if a set is a basis.

Lemma. Let V be an F-vector space and let C = {v_j} be a subset of V. We have C is a basis of V if and only if every vector in V can be written uniquely as a linear combination of elements in C.

Proof. First suppose C is a basis. Let v ∈ V. Since span_F C = V, we can write any vector as a linear combination of elements in C. Suppose we can write v = Σ a_i v_i = Σ b_i v_i for some a_i, b_i ∈ F. Subtracting these equations gives 0 = Σ (a_i − b_i) v_i. We now use that the v_i are linearly independent to conclude a_i − b_i = 0, i.e., a_i = b_i for every i.
Now suppose every v ∈ V can be written uniquely as a linear combination of the elements in C. This immediately gives span_F C = V. It remains to show C is linearly independent. Suppose there exist a_i ∈ F with Σ a_i v_i = 0. We also have Σ 0 · v_i = 0, so the uniqueness gives a_i = 0 for every i, i.e., C is linearly independent.

Before we go further, we briefly delve into these concepts for infinite dimensional spaces. We begin with two familiar examples. The first works out exactly as one would hope; the second shows things are not as nice in general.

Example. Consider the vector space F[x]. This cannot be a finite dimensional vector space. For instance, if {f_1, ..., f_n} were a basis, the element x^(M+1) for M = max_{1≤j≤n} deg f_j would not be in the span of these vectors. We can find a basis for this space though. Consider the collection B = {1, x, x^2, ...}. It is clear this set is linearly independent and spans F[x], thus it forms a basis.
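The quotient construction F[x]/(f(x)) admits a quick concrete check. Below is a sketch of arithmetic in R[x]/(x^2 + 1), with each class stored as its unique representative a + bx as the coefficient pair (a, b); the representation and helper name are ours, not the notes'. Multiplying and reducing x^2 to −1 reproduces complex multiplication, in line with the exercise R[x]/(x^2 + 1) ≅ C.

```python
def mul_mod(p, q):
    """Multiply classes [a + b x] * [c + d x] in F[x]/(x^2 + 1).

    (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2,
    and reducing x^2 = -1 gives (ac - bd) + (ad + bc) x.
    """
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# [x] * [x] = [-1], mirroring i^2 = -1 in C.
assert mul_mod((0, 1), (0, 1)) == (-1, 0)

# (1 + 2x)(3 + x) = 3 + 7x + 2x^2 = 1 + 7x after reducing x^2 = -1.
assert mul_mod((1, 2), (3, 1)) == (1, 7)
```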

Example. Recall the vector space V = R^N defined earlier. This can be identified with sequences {a_n} of real numbers. One might be interested in a basis for this vector space. At first glance the most obvious choice would be E = {e_1, e_2, ...}. However, it is immediate that this set does not span V, as v = (1, 1, ...) cannot be represented as a finite linear combination of these elements. Since v is not in span_R E, we know that E ∪ {v} is a linearly independent set. However, it is clear this does not span either, as (1, 2, 3, 4, ...) is not in the span of this set. We know that V has a basis, but it can be shown that no countable collection of vectors forms a basis for this space. Thus, one cannot construct a basis of this space by adding one vector at a time. The next thing one might try is to allow oneself to add infinitely many vectors. However, without some notion of convergence this does not make sense. For instance, how would one define (1, 1, ...) + (2, 2, ...) + (3, 3, ...) + ···?

The previous example shows that while we know every vector space has a basis, it may not be practical to construct such a basis. In fact, this definition of basis is not very useful for infinite dimensional spaces and is given the name Hamel basis, since other more useful concepts are often referred to as bases in this setting. We will not say more about other notions here as these are more appropriate for a functional analysis course. We will deal mostly with finite dimensional vector spaces in this course.

The following proposition will be used repeatedly throughout the course. It says that a linear transformation between vector spaces is completely determined by what it does to a basis. One way we will often use this is to define a linear transformation by only specifying what it does to a basis. This provides another important application of the fact that vector spaces are completely determined by their bases.
Proposition. Let V, W be vector spaces.
1. Let T ∈ Hom_F(V, W). Then T is determined by its values on a basis of V.
2. Let B = {v_i} be a basis of V and C = {w_i} be any collection of vectors in W so that #B = #C. There is a unique linear transformation T ∈ Hom_F(V, W) satisfying T(v_i) = w_i.

Proof. Let B = {v_i} be a basis of V. Given any v ∈ V there are elements a_i ∈ F so that v = Σ a_i v_i. We have T(v) = T(Σ a_i v_i) = Σ T(a_i v_i) = Σ a_i T(v_i). Thus, if one knows the elements T(v_i), one knows T(v) for any v ∈ V. This gives the first claim.

The second part follows immediately from the first. Set T(v_i) = w_i for each i. For v = Σ a_i v_i ∈ V, define T(v) by T(v) = Σ a_i w_i. It is now easy to check this is a linear map from V to W and is unique.

Example. Let V = W = R^2 and let B = {v_1, v_2} be a basis of V. The previous result says that to define a map from V to W it is enough to say where to send v_1 and v_2. For instance, given the vectors w_1 = t(2, 3) and w_2 = t(5, 10) in W, we have a unique linear map T : R^2 → R^2 given by T(v_1) = w_1, T(v_2) = w_2. Note it is exactly this property that allows us to represent a linear map T : F^n → F^m as a matrix.

Corollary. Let T ∈ Hom_F(V, W), B = {v_i} be a basis of V, and C = {w_i = T(v_i)} ⊂ W. Then C is a basis for W if and only if T is an isomorphism.

Proof. Suppose C is a basis for W. The previous result allows us to define S : W → V such that S(w_i) = v_i. This is an inverse for T, so T is an isomorphism.

Now suppose T is an isomorphism. We need to show that C spans W and is linearly independent. Let w ∈ W. Since T is an isomorphism there exists v ∈ V with T(v) = w. Using that B is a basis of V we can write v = Σ a_i v_i for some a_i ∈ F. Applying T to v we have w = T(v) = T(Σ a_i v_i) = Σ a_i T(v_i) = Σ a_i w_i. Thus, w ∈ span_F C, and since w was arbitrary we have span_F C = W. Suppose there exist a_i ∈ F with Σ a_i w_i = 0. We have T(0) = 0 = Σ a_i w_i = Σ a_i T(v_i) = T(Σ a_i v_i). Applying that T is injective we have 0 = Σ a_i v_i. However, B is a basis so it must be the case that a_i = 0 for all i. This gives C is linearly independent and so a basis.
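The proposition can be made computational: with the basis vectors and their targets collected as matrix columns V and W, the unique linear map has matrix M = W V^(−1). The specific basis below is our own sample data (the targets echo the t(2, 3), t(5, 10) of the example).

```python
import numpy as np

# A basis of R^2 (our choice) and prescribed targets.
v1, v2 = np.array([1., 2.]), np.array([3., 5.])
w1, w2 = np.array([2., 3.]), np.array([5., 10.])

Vmat = np.column_stack([v1, v2])   # invertible since {v1, v2} is a basis
Wmat = np.column_stack([w1, w2])
M = Wmat @ np.linalg.inv(Vmat)     # matrix of the unique T with T(v_i) = w_i

assert np.allclose(M @ v1, w1)
assert np.allclose(M @ v2, w2)
# Linearity then determines T everywhere, e.g. T(2 v1 - v2) = 2 w1 - w2.
assert np.allclose(M @ (2 * v1 - v2), 2 * w1 - w2)
```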

The following result ties together the dimension of the kernel of a linear transformation, the dimension of the image, and the dimension of the domain vector space. This is extremely useful, as one often knows (or can bound) two of the three.

Theorem. Let V be a finite dimensional vector space and let T ∈ Hom_F(V, W). Then dim_F ker T + dim_F Im T = dim_F V.

Proof. Let dim_F ker T = k and dim_F V = n. Let A = {v_1, ..., v_k} be a basis of ker T and extend this to a basis B = {v_1, ..., v_n} of V. It is enough to show that C = {T(v_{k+1}), ..., T(v_n)} is a basis for Im T.

Let w ∈ Im T. There exists v ∈ V so that T(v) = w. Since B is a basis for V there exist a_i such that v = Σ_{i=1}^n a_i v_i. This gives
w = T(v) = T(Σ_{i=1}^n a_i v_i) = Σ_{i=1}^n a_i T(v_i) = Σ_{i=k+1}^n a_i T(v_i),
where we have used T(v_i) = 0 for i = 1, ..., k because A is a basis for ker T. Thus, span_F C = Im T.

It remains to show C is linearly independent. Suppose there exist a_i ∈ F such that Σ_{i=k+1}^n a_i T(v_i) = 0. Note
0 = Σ_{i=k+1}^n T(a_i v_i) = T(Σ_{i=k+1}^n a_i v_i).
Thus, Σ_{i=k+1}^n a_i v_i ∈ ker T. However, A spans ker T, so there exist a_1, ..., a_k in F such that
Σ_{i=k+1}^n a_i v_i = Σ_{i=1}^k a_i v_i,

i.e., Σ_{i=k+1}^n a_i v_i + Σ_{i=1}^k (−a_i) v_i = 0. Since B = {v_1, ..., v_n} is a basis of V we must have a_1 = ··· = a_k = a_{k+1} = ··· = a_n = 0. In particular, a_{k+1} = ··· = a_n = 0 as desired.

This theorem allows us to prove the following important results.

Corollary. Let V, W be F-vector spaces with dim_F V = n. Let V_1 ⊂ V be a subspace of dimension k and W_1 ⊂ W be a subspace of dimension n − k. Then there exists T ∈ Hom_F(V, W) such that V_1 = ker T and W_1 = Im T.

Proof. Let B = {v_1, ..., v_k} be a basis of V_1. Extend this to a basis {v_1, ..., v_k, ..., v_n} of V. Let C = {w_{k+1}, ..., w_n} be a basis of W_1. Define T by T(v_1) = ··· = T(v_k) = 0 and T(v_{k+1}) = w_{k+1}, ..., T(v_n) = w_n. This is the required linear map.

The previous corollary says the only limitation on a subspace being the kernel or image of a linear transformation is that the dimensions add up properly. One should contrast this to the case of homomorphisms in group theory, for example. There, in order to be a kernel one requires the subgroup to satisfy the further property of being a normal subgroup. This is another way in which vector spaces are very nice to work with.

The following corollary follows immediately from the preceding theorem. It makes checking that a map between two vector spaces of the same dimension is an isomorphism much easier, as one only needs to check the map is injective or surjective, not both.

Corollary. Let T ∈ Hom_F(V, W) and dim_F V = dim_F W. Then the following are equivalent:
1. T is an isomorphism;
2. T is surjective;
3. T is injective.

We can also rephrase this result in terms of matrices.

Corollary. Let A ∈ Mat_n(F). Then the following are equivalent:
1. A is invertible;
2. There exists B ∈ Mat_n(F) with BA = 1_n;
3. There exists B ∈ Mat_n(F) with AB = 1_n.

One should prove the previous two corollaries as well as the following corollary as exercises.
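The rank-nullity theorem can be verified numerically for a sample matrix (our own data), computing a kernel basis from the singular value decomposition:

```python
import numpy as np

# T = T_A : R^4 -> R^3; check dim ker T + dim Im T = dim V = 4.
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],    # = 2 * row 1, so the rank drops to 2
              [0., 1., 1., 0.]])

rank = np.linalg.matrix_rank(A)      # dim Im T
_, _, Vt = np.linalg.svd(A)          # full 4x4 right-singular-vector matrix
kernel_basis = Vt[rank:]             # remaining rows span ker T

assert np.allclose(A @ kernel_basis.T, 0)          # they really lie in ker T
assert rank + kernel_basis.shape[0] == A.shape[1]  # rank-nullity: 2 + 2 = 4
```

The rows of Vt beyond the rank index are orthonormal and are annihilated by A, which is why they furnish a basis of the kernel.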

Corollary. Let V, W be F-vector spaces and let dim_F V = m, dim_F W = n.
1. If m < n and T ∈ Hom_F(V, W), then T is not surjective.
2. If m > n and T ∈ Hom_F(V, W), then T is not injective.
3. We have m = n if and only if V ≅ W. In particular, V ≅ F^m.

Note that while it is true that if dim_F V = dim_F W = n < ∞ then V ≅ W, it is not the case that every linear map from V to W is an isomorphism. This result only says there is a map T : V → W that is an isomorphism. Clearly we can define a linear map T : V → W by T(v) = 0_W for all v ∈ V, and this is not an isomorphism.

The previous corollary gives a very important fact: if V is an n-dimensional F-vector space, then V ≅ F^n. This result gives that for any positive integer n, all F-vector spaces of dimension n are isomorphic. One obtains this isomorphism by choosing a basis. This is why in undergraduate linear algebra one often focuses almost exclusively on the vector spaces F^n and matrices as the linear transformations.

The following example gives a nice application of what we have studied thus far.

Example. Recall P_n(R) is the vector space of polynomials of degree less than or equal to n, and dim_R P_n(R) = n + 1. Set V = P_{n−1}(R). Let a_1, ..., a_k ∈ R be distinct and pick m_1, ..., m_k ∈ Z_{≥0} such that Σ_{j=1}^k (m_j + 1) = n. Our goal is to show that given any real numbers b_{1,0}, ..., b_{1,m_1}, ..., b_{k,m_k} there is a unique polynomial f ∈ P_{n−1}(R) satisfying f^(j)(a_i) = b_{i,j}. Define T : P_{n−1}(R) → R^n by
f ↦ t(f(a_1), ..., f^(m_1)(a_1), ..., f(a_k), ..., f^(m_k)(a_k)).
If f ∈ ker T, then for each i = 1, ..., k we have f^(j)(a_i) = 0 for j = 0, ..., m_i. Thus, for each i we have (x − a_i)^(m_i + 1) | f(x). Since these polynomials are relatively prime, their product divides f, and thus f is divisible by a polynomial of degree n. Since f ∈ P_{n−1}(R) this implies f = 0. Hence ker T = 0. Applying the rank-nullity theorem, Im T must have dimension n, i.e., Im T = R^n. This gives the result.
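The m_j = 0 case of the example above is ordinary interpolation: through n distinct points there is a unique polynomial of degree less than n, found by solving a Vandermonde system (invertible precisely because ker T = 0). The sample points below are our own.

```python
import numpy as np

a = np.array([-1., 0., 2.])            # distinct nodes a_i, so k = n = 3
b = np.array([4., 1., 7.])             # prescribed values b_i

V = np.vander(a, 3, increasing=True)   # rows (1, a_i, a_i^2)
coeffs = np.linalg.solve(V, b)         # unique solution: V is invertible

# The resulting degree < 3 polynomial passes through every point.
f = np.polynomial.Polynomial(coeffs)
assert np.allclose([f(x) for x in a], b)
```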

2.3 Direct sums and quotient spaces

In this section we cover two of the basic constructions in vector space theory: the direct sum and the quotient space. The direct sum is a way to split a vector space into subspaces that are independent. The quotient space construction is a way to identify some vectors with 0. We begin with the direct sum.

Definition. Let V be an F-vector space and V_1, ..., V_k be subspaces of V. The sum of V_1, ..., V_k is defined by
V_1 + · · · + V_k = {v_1 + · · · + v_k : v_i ∈ V_i}.

Definition. Let V_1, ..., V_k be subspaces of V. We say V_1, ..., V_k are independent if whenever v_1 + · · · + v_k = 0 with v_i ∈ V_i, then v_i = 0 for all i.

Definition. Let V_1, ..., V_k be subspaces of V. We say V is the direct sum of V_1, ..., V_k, and write V = V_1 ⊕ · · · ⊕ V_k, if the following two conditions are satisfied:
1. V = V_1 + · · · + V_k;
2. V_1, ..., V_k are independent.

Example. Set V = F^2, V_1 = {(x, 0) : x ∈ F}, V_2 = {(0, y) : y ∈ F}. Then we clearly have V_1 + V_2 = {(x, y) : x, y ∈ F} = V. Moreover, if (x, 0) + (0, y) = (0, 0), then (x, y) = (0, 0) and so x = y = 0. This gives (x, 0) = (0, 0) = (0, y), and so V_1 and V_2 are independent. Thus F^2 = V_1 ⊕ V_2.

Example. Let B = {v_1, ..., v_n} be a basis of V. Set V_i = F v_i = {a v_i : a ∈ F}. Clearly V_1, ..., V_n are subspaces of V, and by the definition of a basis we obtain V = V_1 ⊕ · · · ⊕ V_n.

Lemma. Let V be a vector space and V_1, ..., V_k subspaces of V. We have V = V_1 ⊕ · · · ⊕ V_k if and only if each v ∈ V can be written uniquely in the form v = v_1 + · · · + v_k with v_i ∈ V_i.

Proof. Suppose V = V_1 ⊕ · · · ⊕ V_k. Then certainly V = V_1 + · · · + V_k, and so given any v ∈ V there are elements v_i ∈ V_i so that v = v_1 + · · · + v_k. The only thing to show is that this expression is unique. Suppose v = v_1 + · · · + v_k = w_1 + · · · + w_k with v_i, w_i ∈ V_i. Then 0 = (v_1 − w_1) + · · · + (v_k − w_k). Since the V_i are independent we have v_i − w_i = 0, i.e., v_i = w_i for all i.

Conversely, suppose each v ∈ V can be written uniquely in the form v = v_1 + · · · + v_k with v_i ∈ V_i. Immediately we get V = V_1 + · · · + V_k.
Suppose 0 = v_1 + · · · + v_k with v_i ∈ V_i. We also have 0 = 0 + · · · + 0 with 0 ∈ V_i for each i, so by uniqueness we get v_i = 0 for all i. This gives that V_1, ..., V_k are independent, and so V = V_1 ⊕ · · · ⊕ V_k.
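The unique-decomposition lemma can be sketched numerically. In the following (assuming NumPy; the subspaces and vectors are a hypothetical instance) we take R^3 = V_1 ⊕ V_2 with V_1 spanned by two vectors and V_2 by one: stacking the basis vectors as columns gives an invertible matrix, so every v has exactly one expression v = v_1 + v_2.

```python
import numpy as np

# Hypothetical decomposition R^3 = V_1 ⊕ V_2, V_1 = span{u1, u2}, V_2 = span{w}.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0])
w  = np.array([0.0, 0.0, 1.0])
B = np.column_stack([u1, u2, w])   # invertible: the union of bases is a basis

v = np.array([3.0, -2.0, 7.0])
c = np.linalg.solve(B, v)          # unique coordinates in the combined basis

v1 = c[0] * u1 + c[1] * u2         # the component of v in V_1
v2 = c[2] * w                      # the component of v in V_2
assert np.allclose(v1 + v2, v)     # v = v_1 + v_2, uniquely
```

Uniqueness of `c` is exactly the invertibility of `B`, mirroring the exercise below relating bases of the V_i to a basis of V.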

Exercise. Let V_1, ..., V_k be subspaces of V. For each i let B_i be a basis of V_i. Set B = B_1 ∪ · · · ∪ B_k. Then
1. B spans V if and only if V = V_1 + · · · + V_k;
2. B is linearly independent if and only if V_1, ..., V_k are independent;
3. B is a basis if and only if V = V_1 ⊕ · · · ⊕ V_k.

Given a subspace U ⊆ V, it is natural to ask if there is a subspace W so that V = U ⊕ W. This leads us to the following definition.

Definition. Let V be a vector space and U ⊆ V a subspace. We say W ⊆ V is a complement of U in V if V = U ⊕ W.

Lemma. Let U ⊆ V be a subspace. Then U has a complement.

Proof. Let A be a basis of U, and extend A to a basis B of V. Set W = span_F(B \ A). One checks immediately that V = U ⊕ W.

Exercise. Let U ⊆ V be a subspace. Is the complement of U unique? If so, prove it. If not, give a counterexample.

We now turn our attention to quotient spaces. We have already seen an example of a quotient space: recall the example with V = F[x] and W = (f(x)) := {g ∈ F[x] : f | g} = [0]. One can check that W is a subspace of V. In that example we defined a vector space V/W = F[x]/(f(x)) and saw that the elements in W become 0 in the new space. This construction is the one we generalize to form quotient spaces.

Let V be a vector space and W ⊆ V a subspace. Define an equivalence relation on V as follows: v_1 ∼_W v_2 if and only if v_1 − v_2 ∈ W. We write the equivalence classes as
[v_1] = {v_2 ∈ V : v_1 − v_2 ∈ W} = v_1 + W.
Set V/W = {v + W : v ∈ V}. Addition and scalar multiplication on V/W are defined as follows. Let v_1, v_2 ∈ V and c ∈ F. Define
(v_1 + W) + (v_2 + W) = (v_1 + v_2) + W;
c(v_1 + W) = c v_1 + W.

Exercise. Show that V/W is an F-vector space.

We call V/W the quotient space of V by W.

Example. Let V = R^2 and W = {(x, 0) : x ∈ R}. Clearly W ⊆ V is a subspace. Let (x_0, y_0) ∈ V. To find (x_0, y_0) + W, we want all (x, y) such that (x_0, y_0) − (x, y) ∈ W. However, it is clear that (x_0 − x, y_0 − y) ∈ W if and only if y = y_0.
Thus, (x_0, y_0) + W = {(x, y_0) : x ∈ R}. The graph below gives the elements (0, 0) + W and (0, 1) + W.

One immediately sees from this that (x, y) + W is not a subspace of V unless (x, y) + W = (0, 0) + W. Define
π : R → V/W,  y_0 ↦ (0, y_0) + W.
It is straightforward to check this is an isomorphism, so V/W ≅ R.

Example. More generally, let m, n ∈ Z_{>0} with m > n. Consider V = F^m and let W be the subspace of V spanned by e_1, ..., e_n, where {e_1, ..., e_m} is the standard basis. We can form the quotient space V/W. This space is isomorphic to F^{m−n}, with a basis given by {e_{n+1} + W, ..., e_m + W}.

Example. Let V = F[x] and let W = (f(x)) with deg f = n. We saw before that the quotient space V/W = F[x]/(f(x)) has basis {[1], [x], ..., [x^{n−1}]}. Define T : V/W → P_{n−1}(F) by T([x^j]) = x^j. One can check this is an isomorphism, and so F[x]/(f(x)) ≅ P_{n−1}(F) as F-vector spaces.

Definition. Let W ⊆ V be a subspace. The canonical projection map is given by
π_W : V → V/W,  v ↦ v + W.
It is immediate that π_W ∈ Hom_F(V, V/W).

One important point to note is that when working with quotient spaces, if one defines a map from V/W to another vector space, one must always check the map is well-defined, since defining the map generally involves a choice of representative for v + W. In other words, one must show that if v_1 + W = v_2 + W then T(v_1 + W) = T(v_2 + W). Consider the following example.

Example. Let V = R^2 and W = {(x, 0) : x ∈ R}. We saw above that the elements of the quotient space V/W are of the form (x, y) + W, and (x_1, y_1) + W = (x_2, y_2) + W if and only if y_1 = y_2. Suppose we wish to define a linear map T : V/W → R. We could try to define such a map by specifying T((x, y) + W) = x.

However, we know that (x, y) + W = (x + 1, y) + W, so x = T((x, y) + W) = T((x + 1, y) + W) = x + 1. This doesn't make sense, so our map is not well-defined. The correct map in this situation sends (x, y) + W to y, since y is fixed across the equivalence class.

The following result allows us to avoid checking that the map is well-defined when it is induced from another linear map. One should look back at the examples of quotient spaces above to see how this theorem applies.

Theorem. Let T ∈ Hom_F(V, W). Define
T̄ : V/ker T → W,  v + ker T ↦ T(v).
Then T̄ ∈ Hom_F(V/ker T, W). Moreover, T̄ gives an isomorphism V/ker T ≅ Im T.

Proof. The first step is to show T̄ is well-defined. Suppose v_1 + ker T = v_2 + ker T, i.e., v_1 − v_2 ∈ ker T. So there exists x ∈ ker T such that v_1 − v_2 = x. We have
T̄(v_1 + ker T) = T(v_1) = T(v_2 + x) = T(v_2) + T(x) = T(v_2) = T̄(v_2 + ker T).
Thus, T̄ is well-defined.

The next step is to show T̄ is linear. Let v_1 + ker T, v_2 + ker T ∈ V/ker T and c ∈ F. We have
T̄((v_1 + ker T) + (v_2 + ker T)) = T̄((v_1 + v_2) + ker T) = T(v_1 + v_2) = T(v_1) + T(v_2) = T̄(v_1 + ker T) + T̄(v_2 + ker T)
and
T̄(c(v_1 + ker T)) = T̄(c v_1 + ker T) = T(c v_1) = c T(v_1) = c T̄(v_1 + ker T).
Thus, T̄ ∈ Hom_F(V/ker T, W).

It only remains to show that T̄ is a bijection onto Im T. Let w ∈ Im T. There exists v ∈ V so that T(v) = w. Thus T̄(v + ker T) = T(v) = w, so T̄ is surjective onto Im T. Now suppose v + ker T ∈ ker T̄. Then 0 = T̄(v + ker T) = T(v). Thus v ∈ ker T, which means v + ker T = 0 + ker T, and so T̄ is injective.
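The well-definedness issue in the R^2/W example above can be checked numerically. A minimal sketch in plain Python, with W the x-axis and two hypothetical representatives of the same coset:

```python
# Two representatives of the same coset of W = {(x, 0)} in R^2:
# p - q = (8, 0) lies in W, so p + W = q + W.
p, q = (3.0, 1.0), (-5.0, 1.0)

proj_y = lambda v: v[1]   # the well-defined map (x, y) + W ↦ y
proj_x = lambda v: v[0]   # the attempted map (x, y) + W ↦ x

assert proj_y(p) == proj_y(q)   # same value on both representatives
assert proj_x(p) != proj_x(q)   # depends on the representative: ill-defined
```

A map on V/W is exactly a map on V that is constant on each coset v + W, which is what the first assertion checks and the second fails.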

Example. Let V = F[x]. Define a map T : V → P_2(F) by sending f(x) = a_0 + a_1x + · · · + a_nx^n to a_0 + a_1x + a_2x^2. This is a surjective linear map. The kernel of this map is exactly (x^3) = {g(x) : x^3 | g(x)}. Thus, the previous result gives an isomorphism F[x]/(x^3) ≅ P_2(F) as F-vector spaces.

Example. Let V = Mat_2(F). Define a map T : Mat_2(F) → F^2 by
T([a b; c d]) = (b, c).
One can check this is a surjective linear map. The kernel of this map is given by
ker T = {[a 0; 0 d] : a, d ∈ F}.
Thus, Mat_2(F)/ker T is isomorphic to F^2 as F-vector spaces.

Theorem. Let W ⊆ V be a subspace. Let B_W = {v_i} be a basis for W and extend it to a basis B for V. Set B_U = B \ B_W = {z_i}. Let U = span_F B_U, i.e., U is a complement of W in V. Then the linear map
p : U → V/W,  z_i ↦ z_i + W
is an isomorphism. Thus, B̄_U = {z_i + W : z_i ∈ B_U} is a basis of V/W.

Proof. Note that p is linear because we defined it on a basis. It only remains to show p is an isomorphism. We will do this by showing B̄_U = {z_i + W : z_i ∈ B_U} is a basis for V/W. Let v + W ∈ V/W. Since v ∈ V, there exist a_i, b_j ∈ F such that v = ∑ a_i v_i + ∑ b_j z_j. We have ∑ a_i v_i ∈ W, so v − ∑ b_j z_j ∈ W. Thus
v + W = ∑ b_j z_j + W = ∑ b_j (z_j + W).
This shows that B̄_U spans V/W. Suppose there exist b_j ∈ F such that ∑ b_j (z_j + W) = 0 + W, i.e., ∑ b_j z_j ∈ W. So there exist a_i ∈ F such that ∑ b_j z_j = ∑ a_i v_i. Thus ∑ a_i v_i + ∑ (−b_j) z_j = 0. Since {v_i, z_j} is a basis of V we obtain a_i = 0 = b_j for all i, j. This gives that B̄_U is linearly independent and so completes the proof.

2.4 Dual spaces

We conclude this chapter by discussing dual spaces. Throughout this section V is an F-vector space.

Definition. The dual space of V, denoted V*, is given by V* = Hom_F(V, F). Elements of the dual space are called linear functionals.
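A concrete sketch of a linear functional, assuming NumPy (the particular vector a is hypothetical): every linear functional on R^n has the form v ↦ a · v for a fixed vector a, i.e., it is given by a 1 × n matrix.

```python
import numpy as np

# A linear functional phi on R^3, determined by a fixed vector a.
a = np.array([2.0, -1.0, 0.5])
phi = lambda v: float(a @ v)

# Linearity: phi(c*u + w) = c*phi(u) + phi(w).
u = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 3.0, -1.0])
c = 4.0
assert abs(phi(c * u + w) - (c * phi(u) + phi(w))) < 1e-12
```

Identifying phi with the row vector a is precisely the isomorphism V ≅ V* for V = R^n discussed in the theorem that follows; it depends on the choice of the standard basis.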

Theorem. The vector space V is isomorphic to a subspace of V*. If dim_F V < ∞, then V ≅ V*.

Proof. Let B = {v_i} be a basis of V. For each v_i, define an element v_i* by setting
v_i*(v_j) = 1 if i = j, and 0 otherwise.
One sees immediately that v_i* ∈ V*, as it is defined on a basis. Define T ∈ Hom_F(V, V*) by T(v_i) = v_i*. We claim T is an injective linear map. Let v ∈ V and suppose T(v) = 0. Write v = ∑ a_i v_i, so that ∑ a_i v_i* is the 0 map, i.e., (∑ a_i v_i*)(w) = 0 for any w ∈ V. In particular,
0 = ∑_i a_i v_i*(v_j) = a_j v_j*(v_j) = a_j.
Since a_j = 0 for all j, we have v = 0 and so T is injective. We now use the facts that V/ker T ≅ Im T and ker T = 0 to conclude that V is isomorphic to a subspace of V*, which gives the first statement of the theorem.

Assume dim_F V < ∞, so we can write B = {v_1, ..., v_n}. Given v* ∈ V*, define a_j = v*(v_j). Set v = ∑ a_i v_i ∈ V. Define S : V* → V by v* ↦ v. This map is an inverse to the map T given above, and thus V ≅ V*.

Note it is not always the case that V is isomorphic to its dual. In fact, if V is infinite dimensional it is never the case that V is isomorphic to its dual. In functional analysis or topology this problem is often overcome by requiring the linear functionals to be continuous, i.e., the dual space consists of continuous linear maps from V to F. In that case one would refer to our dual as the algebraic dual. However, we will only consider the algebraic setting here, so we don't bother with saying "algebraic dual."

Example. Let V be a vector space over a field F and denote its dimension by α. Note that α is not finite by assumption, but we need to work with different cardinalities here, so we must keep track of this. The cardinality of V as a set is given by α · #F = max{α, #F}. Moreover, V is naturally isomorphic to the set of functions from a set of cardinality α to F with finite support. We denote this space by F^{(α)}.
The dual space of V is the set of all functions from a set of cardinality α to F, i.e., V* ≅ F^α. If we set α* = dim_F V*, we wish to show α* > α. As above, the cardinality of V* as a set is max{α*, #F}. Let A = {v_i} be a countable linearly independent subset of V and extend it to a basis of V. For each nonzero c ∈ F define f_c : V → F by f_c(v_i) = c^i for v_i ∈ A and f_c = 0 on the other elements of the basis. One can show that {f_c} is linearly independent, so α* ≥ #F. Thus, we have
#V* = α* · #F = max{α*, #F} = α*.
However, we also have #V* = #F^α. Since α < #F^α because #F ≥ 2, we have α* = #F^α > α, as desired.
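In the finite-dimensional case the dual basis from the theorem above can be realized concretely. A sketch assuming NumPy, for V = R^3 with a hypothetical basis given by the columns of P: the functional v_i* reads off the i-th coordinate with respect to that basis, so v_i* is row i of P^{-1} and v_i*(v_j) = δ_{ij}.

```python
import numpy as np

# A (hypothetical) basis of R^3 as the columns of P.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
P_inv = np.linalg.inv(P)   # row i of P_inv is the dual functional v_i^*

# Check the defining property of the dual basis: v_i^*(v_j) = 1 if i = j else 0.
for i in range(3):
    for j in range(3):
        val = P_inv[i] @ P[:, j]
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-12
```

Note how the dual basis changes whenever P does: this is the basis-dependence of the isomorphism V ≅ V* discussed next.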

One should note that in the finite-dimensional case, when we have V ≅ V*, the isomorphism depends upon the choice of a basis. This means that while V is isomorphic to its dual, the isomorphism is non-canonical. In particular, there is no preferred isomorphism between V and its dual. Studying the possible isomorphisms that arise between V and its dual is interesting in its own right. We will return to this problem in Chapter 6.

Definition. Let V be a finite-dimensional F-vector space and let B = {v_1, ..., v_n} be a basis of V. The dual basis of V* with respect to B is given by B* = {v_1*, ..., v_n*}.

If V is finite dimensional then we have V ≅ V* ≅ (V*)*, i.e., V ≅ (V*)*. The major difference is that while there is no canonical isomorphism between V and V*, there is a canonical isomorphism V ≅ (V*)*! Note that in the proof below we construct the map from V to (V*)* without choosing any basis. This is what makes the map canonical. We do use a basis in proving injectivity, but the map does not depend on this basis, so it does not matter.

Proposition. There is a canonical injective linear map from V to (V*)*. If dim_F V < ∞, then this is an isomorphism.

Proof. Let v ∈ V. Define eval_v : V* → F by sending ϕ ∈ Hom_F(V, F) = V* to eval_v(ϕ) = ϕ(v). One must check that eval_v is a linear map. To see this, let ϕ, ψ ∈ V* and c ∈ F. We have
eval_v(cϕ + ψ) = (cϕ + ψ)(v) = cϕ(v) + ψ(v) = c eval_v(ϕ) + eval_v(ψ).
Thus, for each v ∈ V we obtain a map eval_v ∈ Hom_F(V*, F). This allows us to define a well-defined map
Φ : V → Hom_F(V*, F) = (V*)*,  v ↦ (eval_v : ϕ ↦ ϕ(v)).
We claim that Φ is a linear map. Since Φ maps into a space of maps, we check equality by checking the maps agree on each element, i.e., for v, w ∈ V and c ∈ F we want to show for each ϕ ∈ Hom_F(V, F) that Φ(cv + w)(ϕ) = cΦ(v)(ϕ) + Φ(w)(ϕ). Observe we have
Φ(cv + w)(ϕ) = eval_{cv+w}(ϕ) = ϕ(cv + w) = cϕ(v) + ϕ(w) = cΦ(v)(ϕ) + Φ(w)(ϕ).
Thus, we have Φ ∈ Hom_F(V, (V*)*). It remains to show that Φ is injective. Let v ∈ V, v ≠ 0.
Let B be a basis containing v. This is possible because we can start with the set {v} and 29

complete it to a basis. Let v* ∈ V* be the corresponding dual basis vector; then eval_v(v*) = v*(v) = 1. Moreover, for any w ∈ B with w ≠ v we have eval_w(v*) = v*(w) = 0. Thus, Φ(v) = eval_v is not the 0 map, so ker Φ = {0}, i.e., Φ is injective. If dim_F V < ∞, then Φ is an isomorphism because dim_F V = dim_F V* (since they are isomorphic), so dim_F V = dim_F V* = dim_F (V*)*.

Let T ∈ Hom_F(V, W). We obtain a natural map T* ∈ Hom_F(W*, V*) as follows. Let ϕ ∈ W*, i.e., ϕ : W → F. To obtain a map T*(ϕ) : V → F, we simply compose the maps, namely T*(ϕ) = ϕ ∘ T. It is easy to check this is a linear map. In particular, we have the following result.

Proposition. Let T ∈ Hom_F(V, W). The map T* defined by T*(ϕ) = ϕ ∘ T is a linear map from W* to V*.

We will see in the next chapter how the dual map gives the proper definition of a transpose.

2.5 Basics of module theory

The theory of vector spaces we have been studying so far is just a special case of the theory of modules. In spirit, modules are just vector spaces where one allows the scalars to lie in a ring instead of restricting them to a field. However, as we will see, restricting the scalars to a field yields many nice results that are not available for general modules. As usual, we assume all our rings have an identity element.

Definition. Let R be a ring. A left R-module is an abelian group (M, +) along with a map R × M → M, denoted (r, m) ↦ rm, satisfying
1. (r_1 + r_2)m = r_1m + r_2m for all r_1, r_2 ∈ R, m ∈ M;
2. r(m_1 + m_2) = rm_1 + rm_2 for all r ∈ R, m_1, m_2 ∈ M;
3. (r_1r_2)m = r_1(r_2m) for all r_1, r_2 ∈ R, m ∈ M;
4. 1_R m = m for all m ∈ M.

One can also define a right R-module M by acting on the right by the scalars. Moreover, given rings R and S, one can define an (R, S)-bimodule by acting on the left by R and on the right by S. For now we will work only with left R-modules and refer to these just as modules for this section.
Moreover, if R is a commutative ring and M is a left R-module, then M is a right R-module as well by setting m · r = rm. (Check that this does not work if R is not commutative!) Note that if we take R to be a field, this is exactly the definition of a vector space.

Example. Let R be a ring and set M = R^n = {(r_1, ..., r_n) : r_i ∈ R}. Then M is an R-module via componentwise addition and scalar multiplication.
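The R^n example can be sketched in plain Python with R = Z, where ordinary integers play the role of ring elements; the helper names `add` and `smul` are of course not from the text.

```python
# Z^n as a Z-module: componentwise addition and scalar multiplication.
def add(m1, m2):
    return tuple(a + b for a, b in zip(m1, m2))

def smul(r, m):
    return tuple(r * a for a in m)

m = (1, -2, 3)
# Module axioms in action:
assert add(m, smul(2, m)) == (3, -6, 9)      # m + 2m = 3m (distributivity)
assert smul(2, smul(3, m)) == smul(6, m)     # (r1 r2)m = r1(r2 m)
```

Replacing the integers by elements of any ring R (say, polynomials) gives the general construction; nothing in the two helpers uses division, which is exactly why the definition works over a ring and not just a field.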


(a) Write each of p and q as a polynomial in x with coefficients in Z[y, z]. deg(p) = 7 deg(q) = 9

Homework #01, due 1/20/10 = 9.1.2, 9.1.4, 9.1.6, 9.1.8, 9.2.3 Additional problems for study: 9.1.1, 9.1.3, 9.1.5, 9.1.13, 9.2.1, 9.2.2, 9.2.4, 9.2.5, 9.2.6, 9.3.2, 9.3.3 9.1.1 (This problem was not assigned

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible

Orthogonal Diagonalization of Symmetric Matrices

MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

Finite dimensional topological vector spaces

Chapter 3 Finite dimensional topological vector spaces 3.1 Finite dimensional Hausdorff t.v.s. Let X be a vector space over the field K of real or complex numbers. We know from linear algebra that the

Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for

Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0.

Basic Terminology for Systems of Equations in a Nutshell E L Lady A system of linear equations is something like the following: x 7x +4x =0 5x +8x x = Note that the number of equations is not required

Metric Spaces. Chapter 7. 7.1. Metrics

Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

Matrix Algebra and Applications

Matrix Algebra and Applications Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Matrix Algebra and Applications 1 / 49 EC2040 Topic 2 - Matrices and Matrix Algebra Reading 1 Chapters

Polynomial Invariants

Polynomial Invariants Dylan Wilson October 9, 2014 (1) Today we will be interested in the following Question 1.1. What are all the possible polynomials in two variables f(x, y) such that f(x, y) = f(y,

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

Introduction to finite fields

Introduction to finite fields Topics in Finite Fields (Fall 2013) Rutgers University Swastik Kopparty Last modified: Monday 16 th September, 2013 Welcome to the course on finite fields! This is aimed at

( ) which must be a vector

MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

Lecture 6. Inverse of Matrix

Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

Inner Product Spaces

Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

4. CLASSES OF RINGS 4.1. Classes of Rings class operator A-closed Example 1: product Example 2:

4. CLASSES OF RINGS 4.1. Classes of Rings Normally we associate, with any property, a set of objects that satisfy that property. But problems can arise when we allow sets to be elements of larger sets

NOTES ON DEDEKIND RINGS

NOTES ON DEDEKIND RINGS J. P. MAY Contents 1. Fractional ideals 1 2. The definition of Dedekind rings and DVR s 2 3. Characterizations and properties of DVR s 3 4. Completions of discrete valuation rings

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

Tensors on a vector space

APPENDIX B Tensors on a vector space In this Appendix, we gather mathematical definitions and results pertaining to tensors. The purpose is mostly to introduce the modern, geometrical view on tensors,

NOTES ON CATEGORIES AND FUNCTORS

NOTES ON CATEGORIES AND FUNCTORS These notes collect basic definitions and facts about categories and functors that have been mentioned in the Homological Algebra course. For further reading about category

Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

Chapter 10. Abstract algebra

Chapter 10. Abstract algebra C.O.S. Sorzano Biomedical Engineering December 17, 2013 10. Abstract algebra December 17, 2013 1 / 62 Outline 10 Abstract algebra Sets Relations and functions Partitions and