1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES

Definition 1 (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most often name sets using capital letters, like A, B, X, Y, etc., while the elements of a set will usually be given lowercase letters, like x, y, z, v, etc. Two sets X and Y are called equal if X and Y consist of exactly the same elements. In this case we write X = Y.

Example 1 (Examples of Sets). (1) Let X be the collection of all integers greater than or equal to 5 and strictly less than 10. Then X is a set, and we may write:

X = {5, 6, 7, 8, 9}

The above notation is an example of a set being described explicitly, i.e. just by listing out all of its elements. The set brackets { } indicate that we are talking about a set and not a number, sequence, or other mathematical object.

(2) Let E be the set of all even natural numbers. We may write:

E = {0, 2, 4, 6, 8, ...}

This is an example of an explicitly described set with infinitely many elements. The ellipsis (...) in the above notation is used somewhat informally, but in this case its meaning, that we should continue counting forever, is clear from the context.

(3) Let Y be the collection of all real numbers greater than or equal to 5 and strictly less than 10. Recalling notation from previous math courses, we may write:

Y = [5, 10)

This is an example of using interval notation to describe a set. Note that the set Y obviously consists of infinitely many elements, but that there is no obvious way to write down the elements of Y explicitly like we did for the set E in Example (2). Even though [5, 10) is a set, we don't need to use the set brackets in this case, as interval notation has a well-established meaning which we have used in many other math courses.

(4) Now and for the remainder of the course, let the symbol ∅ denote the empty set, that is, the unique set which consists of no elements. Written explicitly, ∅ = { }.
(5) Now and for the remainder of the course, let the symbol N denote the set of all natural numbers, i.e. N = {0, 1, 2, 3, ...}.

(6) Now and for the remainder of the course, let the symbol R denote the set of all real numbers. We may think of R geometrically as being the collection of all the points on the number line.
(7) Let R² denote the set of all ordered pairs of real numbers. That is, let R² be the set which consists of all pairs (x, y) where x and y are both real numbers. We may think of R² geometrically as the set of all points on the Cartesian coordinate plane. If (x, y) is an element of R², it will often be convenient for us to write the pair as the column vector [x; y]. For our purposes the two notations will be interchangeable. It is important to note here that the order matters when we talk about pairs, so in general we have [x; y] ≠ [y; x].

(8) Let R³ be the set of all ordered triples of real numbers, i.e. R³ is the set of all triples (x, y, z) such that x, y, and z are all real numbers. R³ may be visualized geometrically as the set of all points in 3-dimensional Euclidean coordinate space. We will also write elements (x, y, z) of R³ using the column vector notation [x; y; z].

(9) Lastly and most generally, let n be any natural number. We will let Rⁿ be the set of all ordered n-tuples of real numbers, i.e. the set of all n-tuples (x₁, x₂, ..., xₙ) for which each coordinate xᵢ, 1 ≤ i ≤ n, is a real number. We will also use the column vector notation [x₁; x₂; ...; xₙ] in this context.

Definition 2 (Set Notation). If A is a set and x is an element of A, then we write: x ∈ A. If B is a set such that every element of B is an element of A (i.e. if x ∈ B then x ∈ A), then we call B a subset of A and we write: B ⊆ A.

In order to distinguish particular subsets we wish to talk about, we will frequently use set-builder notation, which for convenience we will describe informally using examples, rather than give a formal definition. For an example, suppose we wish to formally describe the set E of all even natural numbers (see Example 1(2)). Then we may write

E = {x ∈ N : x is evenly divisible by 2}.
The above notation should be read as "The set of all x in N such that x is evenly divisible by 2," which clearly and precisely defines our set E. For another example, we could write Y = {x ∈ R : 5 ≤ x < 10}, which reads "The set of all x in R such that 5 is less than or equal to x and x is strictly less than 10." The student should easily verify that Y = [5, 10) from Example 1(3). In general, given a set A and a precise mathematical sentence P(x) about a variable x, the set-builder notation should be read as follows:

{x ∈ A : P(x)}   "The set of all elements x in A such that the sentence P(x) is true for the element x."
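Set-builder notation has a direct computational analogue in Python's set comprehensions, which can make the reading rule above concrete. The following sketch is not part of the original notes; since N and R are infinite, a finite range stands in for the universe here.

```python
# {x in A : P(x)} corresponds to {x for x in A if P(x)}.
universe = range(0, 20)          # finite stand-in for the natural numbers

# E = {x in N : x is evenly divisible by 2}, restricted to 0..19
E = {x for x in universe if x % 2 == 0}

# Y = {x in R : 5 <= x < 10}, restricted to the integers 0..19
Y = {x for x in universe if 5 <= x < 10}

print(sorted(E))   # the even numbers below 20
print(sorted(Y))   # [5, 6, 7, 8, 9]
```

The predicate after `if` plays exactly the role of the sentence P(x).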
2 Vector Spaces and Subspaces.

Definition 3. A (real) vector space is a nonempty set V, whose elements are called vectors, together with an operation +, called addition, and an operation ·, called scalar multiplication, which satisfy the following ten axioms:

Addition axioms.

(1) If u ∈ V and v ∈ V, then u + v ∈ V. (Closure under addition.)
(2) u + v = v + u for all u, v ∈ V. (Commutative property of addition.)
(3) (u + v) + w = u + (v + w) for all u, v, w ∈ V. (Associative property of addition.)
(4) There exists a vector 0 ∈ V which satisfies u + 0 = u for all u ∈ V. (Existence of an additive identity.)
(5) For every u ∈ V, there exists a vector −u ∈ V such that u + (−u) = 0. (Existence of additive inverses.)

Scalar multiplication axioms.

(6) If u ∈ V and c ∈ R, then c · u ∈ V. (Closure under scalar multiplication.)
(7) c · (u + v) = c · u + c · v for all c ∈ R, u, v ∈ V. (First distributive property of multiplication over addition.)
(8) (c + d) · u = c · u + d · u for all c, d ∈ R, u ∈ V. (Second distributive property of multiplication over addition.)
(9) c · (d · u) = (c · d) · u for all c, d ∈ R, u ∈ V. (Associative property of scalar multiplication.)
(10) 1 · u = u for every u ∈ V.

We use the arrow notation (u⃗, v⃗, etc.) to help differentiate vectors, which may or may not be real numbers, from scalars, which are always real numbers. When no confusion will arise, we will often drop the symbol · in scalar multiplication and simply write cu instead of c · u, c(u + v) instead of c · (u + v), etc.

Example 2. Let V be an arbitrary vector space.

(1) Prove that 0 + u = u for every u ∈ V.
(2) Prove that the zero vector is unique. That is, prove that if w ∈ V has the property that u + w = u for every u ∈ V, then we must have w = 0.
(3) Prove that for every u ∈ V, the additive inverse −u is unique. That is, prove that if w ∈ V has the property that u + w = 0, then we must have w = −u.

Proof. (1) By Axiom (2), the commutativity of addition, we have 0 + u = u + 0. Hence by Axiom (4), we have 0 + u = u + 0 = u.
(2) Suppose w ∈ V has the property that u + w = u for every u ∈ V. Then in particular, we have 0 + w = 0. But 0 + w = w by part (1) above; so w = 0 + w = 0.

(3) Let u ∈ V, and suppose w ∈ V has the property that u + w = 0. Let −u be the additive inverse of u guaranteed by Axiom (5). Adding −u to both sides of the equality above, and applying Axioms (2) and (3) (commutativity and associativity), we get

−u + (u + w) = −u + 0
(−u + u) + w = −u
(u + (−u)) + w = −u
0 + w = −u.

Now by part (1) above, we have w = 0 + w = −u.

Example 3 (Examples of Vector Spaces). (1) The real number line R is a vector space, where both + and · are interpreted in the usual way. In this case the axioms are the familiar properties of real numbers which we learn in elementary school.

(2) Consider the plane R² = {[x; y] : x, y ∈ R}. Define an operation + on R² by the rule

[x; y] + [z; w] = [x + z; y + w] for all x, y, z, w ∈ R,

and a scalar multiplication by the rule

c · [x; y] = [cx; cy] for every c ∈ R, x, y ∈ R.

Then R² becomes a vector space. (Verify that each of the axioms holds.)

(3) Consider the following purely geometric description. Let V be the set of all arrows in two-dimensional space, with two arrows being regarded as equal if they have the same length and point in the same direction. Define an addition + on V as follows: if u and v are two arrows in V, then lay them end-to-end, so the base of v lies at the tip of u. Then define u + v to be the arrow which shares its base with u and its tip with v. (A picture helps here.) Define a scalar multiplication by letting c · u be the arrow whose length is |c| times the length of u, pointing in the same direction as u if c > 0 and in the opposite direction if c < 0. Is V a vector space? (What is the relationship of V with R²?)

(4) In general if n ∈ N, n ≥ 1, then Rⁿ is a vector space, where the addition and scalar multiplication are coordinatewise a la part (2) above.

(5) Let n ∈ N, and let Pₙ denote the set of all polynomials of degree at most n.
That is, Pₙ consists of all polynomials of the form

p(x) = a₀ + a₁x + a₂x² + ... + aₙxⁿ

where the coefficients a₀, a₁, ..., aₙ are real numbers, and x is an abstract variable. Define + and · as follows: Suppose c ∈ R, and p, q ∈ Pₙ, so p(x) = a₀ + a₁x + ... + aₙxⁿ and q(x) = b₀ + b₁x + ... + bₙxⁿ for some coefficients a₀, ..., aₙ, b₀, ..., bₙ ∈ R. Then

(p + q)(x) = p(x) + q(x) = (a₀ + b₀) + (a₁ + b₁)x + ... + (aₙ + bₙ)xⁿ
and

(cp)(x) = c · p(x) = ca₀ + ca₁x + ... + caₙxⁿ.

Then Pₙ is a vector space.

Definition 4. Let V be a vector space, and let W ⊆ V. If W is also a vector space using the same operations + and · inherited from V, then we call W a vector subspace, or just subspace, of V.

Example 4. For each of the following, prove or disprove your answer.

(1) Let V = {[x; y] ∈ R² : x, y ≥ 0}, so V is the first quadrant of the Cartesian plane. Is V a vector subspace of R²?
(2) Let V be the set of all points on the graph of y = 5x. Is V a vector subspace of R²?
(3) Let V be the set of all points on the graph of y = 5x + 1. Is V a vector subspace of R²?
(4) Is R a vector subspace of R²?
(5) Is {0} a vector subspace of R²? (This space is called the trivial space.)

Proof. (2) We claim that the set V of all points on the graph of y = 5x is indeed a subspace of R². To prove this, observe that since V is a subset of R² with the same addition and scalar multiplication operations, it is immediate that Axioms (2), (3), (7), (8), (9), and (10) hold. (Verify this in your brain!) So we need only check Axioms (1), (4), (5), and (6).

If [x₁; y₁], [x₂; y₂] ∈ V, then by the definition of V we have y₁ = 5x₁ and y₂ = 5x₂. It follows that y₁ + y₂ = 5x₁ + 5x₂ = 5(x₁ + x₂). Hence the sum [x₁ + x₂; y₁ + y₂] satisfies the defining condition of V, and so [x₁; y₁] + [x₂; y₂] = [x₁ + x₂; y₁ + y₂] ∈ V. So V is closed under addition and Axiom (1) is satisfied.

Check that the additive identity [0; 0] is in V, so Axiom (4) is satisfied.

If [x₁; y₁] ∈ V and c ∈ R, then c[x₁; y₁] = [cx₁; cy₁] ∈ V since cy₁ = c(5x₁) = 5(cx₁), so V is closed under scalar multiplication. Hence Axiom (6) is satisfied. Moreover, if we take c = −1 in the previous equalities, we see that each vector [x₁; y₁] ∈ V has an additive inverse [−x₁; −y₁] ∈ V. So Axiom (5) is satisfied. Thus V meets all the criteria to be a vector space, as we claimed.
(3) On the other hand, if we take V to be the set of all points on the graph of y = 5x + 1, then V is not a vector subspace of R². To see this, it suffices to check, for instance, that V fails Axiom (1), i.e. V is not closed under addition. To show that V is not closed under addition, it suffices to exhibit two vectors in V whose sum is not in V. So let u = [0; 1] and let v = [1; 6]. Both u and v are in V. (Why?) But their sum u + v = [1; 7] is not a solution of the equation y = 5x + 1, and hence not in V. So V fails to satisfy Axiom (1), and cannot be a vector space.

We will not prove the following fact, but the reader should think about why it must be true.
Fact 1. Let V be a vector space, and let W ⊆ V. Then W, together with the addition and scalar multiplication inherited from V, satisfies Axioms (2), (3), (7), (8), (9), (10) in the definition of a vector space.

Theorem 1. Let V be a vector space and let W ⊆ V. Then W is a subspace of V if and only if the following three properties hold:

(1) 0 ∈ W.
(2) W is closed under addition. That is, for each u, v ∈ W, we have u + v ∈ W.
(3) W is closed under scalar multiplication. That is, for each c ∈ R and u ∈ W, we have c · u ∈ W.

Proof. One direction of the proof is trivial: if W is a vector subspace of V, then W satisfies the three conditions above because it satisfies Axioms (4), (1), and (6) respectively in the definition of a vector space.

Conversely, suppose W ⊆ V, and W satisfies the three conditions above. The three conditions imply that W satisfies Axioms (4), (1), and (6), respectively, while our previous fact implies that W also satisfies Axioms (2), (3), (7), (8), (9), and (10). So the only axiom left to check is (5). To see that (5) is satisfied, let u ∈ W. Since W ⊆ V, the vector u is in V, and hence has an additive inverse −u ∈ V. We must show that in fact −u ∈ W. Note that since W is closed under scalar multiplication, the vector (−1) · u is in W. But on our homework problem #1(d), we show that in fact (−1) · u = −u. So −u ∈ W as we hoped, and the proof is complete.

Example 5. What do the vector subspaces of R² look like? What about R³?

3 Linear Combinations and Spanning Sets.

Definition 5. Let V be a vector space. Let v₁, v₂, ..., vₙ ∈ V, and let c₁, c₂, ..., cₙ ∈ R. We say that a vector u defined by

u = c₁v₁ + c₂v₂ + ... + cₙvₙ

is a linear combination of the vectors v₁, ..., vₙ with weights c₁, ..., cₙ. Notice that since addition is associative in any vector space V, we may omit parentheses from the sum above. Also notice that the zero vector 0 is always a linear combination of any collection of vectors v₁, ..., vₙ, since we may always take c₁ = c₂ = ... = cₙ = 0.
Example 6. Let v₁ and v₂ be two fixed vectors in R².

(1) Which points in R² are linear combinations of v₁ and v₂, using integer weights?
(2) Which points in R² are linear combinations of v₁ and v₂, using any weights?

Example 7. Let a₁ = [1; −2; −5] and a₂ = [2; 5; 6] be vectors in R³.

(1) Let b = [7; 4; −3]. May b be written as a linear combination of a₁ and a₂?
(2) Let b = [3; …; 5]. May b be written as a linear combination of a₁ and a₂?

Partial Solution. (1) We wish to answer the question: Do there exist real numbers c₁ and c₂ for which

c₁a₁ + c₂a₂ = b?
Written differently, are there c₁, c₂ ∈ R for which

[c₁ + 2c₂; −2c₁ + 5c₂; −5c₁ + 6c₂] = [7; 4; −3]?

Recall that vectors in R³ are equal if and only if each pair of corresponding entries are equal. So to identify c₁ and c₂ which make the above equality true, we wish to solve the following system of equations in two variables:

c₁ + 2c₂ = 7
−2c₁ + 5c₂ = 4
−5c₁ + 6c₂ = −3

This can be done manually using elementary algebra techniques. We should get c₁ = 3 and c₂ = 2. Since a solution exists, b = 3a₁ + 2a₂ is indeed a linear combination of a₁ and a₂.

Definition 6. Let V be a vector space and v₁, ..., vₙ ∈ V. We denote the set of all possible linear combinations of v₁, ..., vₙ by Span{v₁, ..., vₙ}, and we call this set the subset of V spanned by v₁, ..., vₙ. We also call Span{v₁, ..., vₙ} the subset of V generated by v₁, ..., vₙ.

Example 8. (1) Find the span of a single nonzero vector in R². (2) Find the span of a pair of vectors in R³.

Theorem 2. Let V be a vector space and let W ⊆ V. Then W is a subspace of V if and only if W is closed under linear combinations, i.e., for every v₁, ..., vₙ ∈ W and every c₁, ..., cₙ ∈ R, we have c₁v₁ + ... + cₙvₙ ∈ W.

Corollary 1. Let V be a vector space and v₁, ..., vₙ ∈ V. Then Span{v₁, ..., vₙ} is a subspace of V.

4 Matrix Row Reductions and Echelon Forms.

As our previous few examples should indicate, our techniques for solving systems of linear equations are extremely relevant to our understanding of linear combinations and spanning sets. The student should recall the basics of solving systems from a previous math course, but in this section we will develop a new notationally convenient method for finding solutions to such systems.

Definition 7. Recall that a linear equation in n variables is an equation that can be written in the form

a₁x₁ + a₂x₂ + ... + aₙxₙ = b

where a₁, ..., aₙ, b ∈ R and x₁, ..., xₙ are variables. A system of linear equations is any finite collection of linear equations involving the same variables x₁, ..., xₙ.
A solution to the system is an n-tuple (s₁, s₂, ..., sₙ) ∈ Rⁿ that makes each equation in the system true if we substitute s₁, ..., sₙ for x₁, ..., xₙ respectively. The set of all possible solutions to a system is called the solution set. Two systems are called equivalent if they have the same solution set.

It is only possible for a system of linear equations to have no solutions, exactly one solution, or infinitely many solutions. (Geometrically one may think of parallel lines, intersecting lines, and coincident lines, respectively.)
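Looping back to Example 7(1): the weight computation there can be sanity-checked numerically. The sketch below is not part of the original notes; it assumes the vectors as reconstructed above (a₁ = (1, −2, −5), a₂ = (2, 5, 6), b = (7, 4, −3)) and uses least squares, which recovers the exact weights whenever the system is consistent.

```python
import numpy as np

# Columns of A are a1 and a2; asking whether b is a linear combination
# c1*a1 + c2*a2 = b is asking whether the system A c = b is consistent.
A = np.array([[ 1.0, 2.0],
              [-2.0, 5.0],
              [-5.0, 6.0]])
b = np.array([7.0, 4.0, -3.0])

# lstsq finds the best-fitting c; zero residual means b is in Span{a1, a2}.
c, *_ = np.linalg.lstsq(A, b, rcond=None)
print(c)                         # approximately [3. 2.]
consistent = bool(np.allclose(A @ c, b))
print(consistent)                # True: b = 3*a1 + 2*a2
```

The same check works for part (2) once the missing entry of b is filled in.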
A system of linear equations is called consistent if it has at least one solution; if the system's solution set is ∅, then the system is inconsistent.

Example 9. Solve the following system of three equations in three variables.

x₁ − 2x₂ + x₃ = 0
2x₂ − 8x₃ = 8
−4x₁ + 5x₂ + 9x₃ = −9

Solution. Our goal here will be to solve the system using the elimination technique, but with a stripped-down notation which preserves only the necessary information and makes computations by hand go quite a bit faster. First let us introduce a little terminology. Given the system we are working with, the matrix of coefficients is the 3×3 matrix below:

[  1  −2   1 ]
[  0   2  −8 ]
[ −4   5   9 ]

The augmented matrix of the system is the 3×4 matrix below:

[  1  −2   1 |  0 ]
[  0   2  −8 |  8 ]
[ −4   5   9 | −9 ]

Note that the straight black line between the third and fourth columns in the matrix above is not necessary. Its sole purpose is to remind us where the equals sign was in our original system, and it may be omitted if the student prefers.

To solve our system of equations, we will perform operations on the augmented matrix which encode the process of elimination. For instance, if we were using elimination on the given system, we might add 4 times the first equation to the third equation, in order to eliminate the x₁ variable from the third equation. Let's do the analogous operation to our matrix instead. Add 4 times the top row to the bottom row:

[  1  −2   1 |  0 ]
[  0   2  −8 |  8 ]
[  0  −3  13 | −9 ]

Notice that the new matrix we obtained above can also be interpreted as the augmented matrix of a system of linear equations, which is the new system we would have obtained by just using the elimination technique. In particular, the first augmented matrix and the new augmented matrix represent systems which are equivalent to one another. (However, the two matrices are obviously not equal to one another, which is why we will stick to the arrow notation instead of using equals signs!)
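Incidentally, this whole bookkeeping process can be scripted. The following sketch is not part of the original notes; it applies the three elementary row operations (interchange, scaling, replacement) to carry the augmented matrix of Example 9, as reconstructed above, all the way to reduced echelon form.

```python
import numpy as np

def rref(M):
    """Row-reduce M to reduced row echelon form using only the three
    elementary row operations (a standard textbook algorithm sketch)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Interchange: bring the row with the largest entry in this column up.
        best = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if np.isclose(M[best, col], 0.0):
            continue                     # no pivot in this column
        M[[pivot_row, best]] = M[[best, pivot_row]]
        # Scaling: make the leading entry 1.
        M[pivot_row] /= M[pivot_row, col]
        # Replacement: clear every other entry in the column.
        for r in range(rows):
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
    return M

# Augmented matrix of the system in Example 9.
aug = np.array([[ 1, -2,  1,  0],
                [ 0,  2, -8,  8],
                [-4,  5,  9, -9]])
print(rref(aug))    # rightmost column gives the solution (29, 16, 3)
```

The notes continue the same reduction by hand below, one operation at a time.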
Now we will continue the process, replacing our augmented matrix with a new, simpler augmented matrix, in such a way that the linear systems the matrices represent are all equivalent to one another.

Multiply the second row by 1/2:

[  1  −2   1 |  0 ]
[  0   1  −4 |  4 ]
[  0  −3  13 | −9 ]
Add 3 times the second row to the third row:

[  1  −2   1 |  0 ]
[  0   1  −4 |  4 ]
[  0   0   1 |  3 ]

Add 4 times the third row to the second row:

[  1  −2   1 |  0 ]
[  0   1   0 | 16 ]
[  0   0   1 |  3 ]

Add −1 times the third row to the first row:

[  1  −2   0 | −3 ]
[  0   1   0 | 16 ]
[  0   0   1 |  3 ]

Add 2 times the second row to the first row:

[  1   0   0 | 29 ]
[  0   1   0 | 16 ]
[  0   0   1 |  3 ]

Now let's stop and examine what we've done. The last augmented matrix above represents the following system:

x₁ = 29
x₂ = 16
x₃ = 3

So the system is solved, and has a unique solution (29, 16, 3).

Definition 8. Given an augmented matrix, the three elementary row operations are the following:

(1) (Replacement) Replace one row by the sum of itself and a multiple of another row.
(2) (Interchange) Interchange two rows.
(3) (Scaling) Multiply all entries in a row by a nonzero constant.

These are the "legal moves" when solving systems of equations using augmented matrices. Two matrices are called row equivalent if one can be transformed into the other using a finite sequence of elementary row operations.

Fact 2. If two augmented matrices are row equivalent, then the two linear systems they represent are equivalent, i.e. they have the same solution set.

Definition 9. A matrix is said to be in echelon form, or row echelon form, if it has the following three properties:

(1) All rows which have nonzero entries are above all rows which have only zero entries.
(2) Each leading nonzero entry of a row is in a column to the right of the leading nonzero entry of the row above it.
(3) All entries in a column below a leading nonzero entry are zeros.
A matrix is said to be in reduced echelon form or reduced row echelon form if it is in echelon form, and also satisfies the following two properties:

(4) Each leading nonzero entry is 1.
(5) Each leading 1 is the only nonzero entry in its column.

Every matrix may be transformed, via elementary row operations, into a matrix in reduced row echelon form. This process is called row reduction.

Example 10. Use augmented matrices and row reductions to find solution sets for the following systems of equations.

(1)
2x₁ − 6x₃ = −8
x₂ + 2x₃ = 3
3x₁ + 6x₂ − 2x₃ = −4

(2)
x₁ − 5x₂ + 4x₃ = −3
2x₁ − 7x₂ + 3x₃ = −2
−2x₁ + x₂ + 7x₃ = −1

5 Functions and Function Notation

Definition 10. Let X and Y be sets. A function f from X to Y is just a rule by which we associate to each element x ∈ X an element f(x) ∈ Y. The input set X is called the domain of f, while the set Y of possible outputs is called the codomain of f. To denote the domain and codomain when we define a function, we write f : X → Y. We regard the terms map, mapping, and transformation as all being synonymous with function.

Example 11. (1) For the familiar quadratic function f(x) = x², we would write f : R → R, since f takes real numbers for input and returns real numbers for output. Notice that the codomain R here is distinct from the range, i.e. the set of all actual outputs of the function, which the reader should know is just the set [0, ∞). This mismatch is an accepted part of the notation.

(2) On the other hand, for the familiar function f(x) = 1/x, we would NOT write f : R → R; this is because 0 ∈ R but 0 is not part of the domain of f.

6 Linear Transformations

Definition 11. Let V and W be vector spaces. A function T : V → W is a linear transformation if

(1) T(v + w) = T(v) + T(w) for every v, w ∈ V.
(2) T(cv) = cT(v) for every v ∈ V and c ∈ R.

Note that the transformations which are linear are precisely those which respect the addition and scalar multiplication operations of the vector spaces involved.

Example 12.
Are the following maps linear transformations?

(1) T : R → R², T(x) = [x; 3x].
(2) T : R → R², T(x) = [x; x²].
(3) T : R² → R², T(v) = Av, where A is a fixed 2×2 matrix.
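The two defining conditions of Definition 11 can be spot-checked numerically. This sketch is not from the notes, and the two maps below are as reconstructed in Example 12(1)-(2); note that passing random trials is only evidence of linearity, while a single failing trial is an honest counterexample.

```python
import numpy as np

rng = np.random.default_rng(0)

def looks_linear(T, n, trials=100):
    """Spot-check T(u + v) = T(u) + T(v) and T(c u) = c T(u) on random
    inputs of dimension n. Failure proves non-linearity; success is
    merely evidence."""
    for _ in range(trials):
        u, v = rng.normal(size=n), rng.normal(size=n)
        c = rng.normal()
        if not np.allclose(T(u + v), T(u) + T(v)):
            return False
        if not np.allclose(T(c * u), c * T(u)):
            return False
    return True

T1 = lambda x: np.array([x[0], 3 * x[0]])    # Example 12(1): x -> (x, 3x)
T2 = lambda x: np.array([x[0], x[0] ** 2])   # Example 12(2): x -> (x, x^2)

print(looks_linear(T1, 1))   # True: this map respects both operations
print(looks_linear(T2, 1))   # False: squaring breaks both conditions
```

For T2 the scalar condition fails because T2(cu) has second entry c²u² while cT2(u) has cu².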
Fact 3. Every matrix transformation T : Rⁿ → Rᵐ is a linear transformation.

Fact 4. If T : V → W is a linear transformation, then T preserves linear combinations. In other words, for every collection of vectors v₁, ..., vₙ ∈ V and every choice of weights c₁, ..., cₙ ∈ R, we have

T(c₁v₁ + ... + cₙvₙ) = c₁T(v₁) + ... + cₙT(vₙ).

Definition 12. Let V = Rⁿ be n-dimensional Euclidean space. Define vectors e₁, ..., eₙ ∈ V by:

e₁ = [1; 0; ...; 0]; e₂ = [0; 1; ...; 0]; ...; eₙ = [0; ...; 0; 1].

These vectors e₁, ..., eₙ are called the standard basis for Rⁿ. Note that any vector v = [v₁; v₂; ...; vₙ] may be written as a linear combination of the standard basis vectors in an obvious way:

v = v₁e₁ + v₂e₂ + ... + vₙeₙ.

Example 13. Suppose T : R² → R³ is a linear transformation such that T(e₁) and T(e₂) are two given vectors in R³. Find an explicit formula for T.

Theorem 3. Let T : Rⁿ → Rᵐ (n, m ∈ N) be a linear transformation. Then there exists a unique m × n matrix A such that T(v) = Av for every v ∈ Rⁿ. In fact, we have

A = [ T(e₁) T(e₂) ... T(eₙ) ],

the matrix whose columns are the images of the standard basis vectors.

Proof. Notice that for any v = [v₁; v₂; ...; vₙ] ∈ Rⁿ, we have v = v₁e₁ + ... + vₙeₙ. Since T is a linear transformation, it respects linear combinations, and hence

T(v) = v₁T(e₁) + ... + vₙT(eₙ) = [ T(e₁) T(e₂) ... T(eₙ) ] [v₁; v₂; ...; vₙ] = Av,

where A is as we defined in the statement of the theorem.

Example 14. Let T : R² → R² be the linear transformation which scales up every vector by a factor of 3, i.e. T(v) = 3v for every v ∈ R². Find a matrix representation for T.

Example 15. Let T : R² → R² be the map which rotates the plane about the origin at some fixed angle θ. Is T a linear transformation? If so, find its matrix representation.
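Theorem 3's recipe, stack the images of the standard basis vectors as columns, translates directly into code. The sketch below is not from the notes; the rotation formula used for the second check is the standard one, supplied here as an assumption since the notes leave Example 15 as an exercise.

```python
import numpy as np

def matrix_of(T, n):
    """Build the matrix A of a linear map T : R^n -> R^m by stacking
    the images of the standard basis vectors as columns (Theorem 3)."""
    cols = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0                # the standard basis vector e_{i+1}
        cols.append(T(e))
    return np.column_stack(cols)

# Example 14: T(v) = 3v on R^2 should give 3 times the identity matrix.
A = matrix_of(lambda v: 3.0 * v, 2)
print(A)                          # [[3. 0.], [0. 3.]]

# Example 15 (standard rotation formula, assumed): rotation by pi/2.
theta = np.pi / 2
rot = lambda v: np.array([np.cos(theta) * v[0] - np.sin(theta) * v[1],
                          np.sin(theta) * v[0] + np.cos(theta) * v[1]])
R = matrix_of(rot, 2)
print(np.round(R))                # [[ 0. -1.], [ 1.  0.]]
```

The function only evaluates T at n inputs, which is exactly the content of the theorem: a linear map is determined by where it sends the standard basis.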
7 The Null Space of a Linear Transformation

Definition 13. Let V and W be vector spaces, and let T : V → W be a linear transformation. Define the following set:

Nul T = {v ∈ V : T(v) = 0}.

Then this set Nul T is called the null space or kernel of the map T.

Example 16. Let T : R³ → R³ be the matrix transformation T(v) = Av for a given 3×3 matrix A.

(1) Given a vector u ∈ R³, is u ∈ Nul T?
(2) Given a vector w ∈ R³, is w ∈ Nul T?

Theorem 4. Let V, W be vector spaces and T : V → W a linear transformation. Then Nul T is a vector subspace of V.

Proof. We will show that Nul T is closed under taking linear combinations, and hence the result will follow from Theorem 2. Let v₁, ..., vₙ ∈ Nul T and c₁, ..., cₙ ∈ R be arbitrary. We must show that c₁v₁ + ... + cₙvₙ ∈ Nul T, i.e. that T(c₁v₁ + ... + cₙvₙ) = 0. To see this, simply observe that since T is a linear transformation, we have

T(c₁v₁ + ... + cₙvₙ) = c₁T(v₁) + ... + cₙT(vₙ).

But since v₁, ..., vₙ ∈ Nul T, we have T(v₁) = ... = T(vₙ) = 0. So in fact we have

T(c₁v₁ + ... + cₙvₙ) = c₁0 + ... + cₙ0 = 0.

It follows that c₁v₁ + ... + cₙvₙ ∈ Nul T, and Nul T is closed under linear combinations. This completes the proof.

Example 17. For the following matrix transformations, give an explicit description of Nul T by finding a spanning set.

(1) T : R⁴ → R², T([x₁; x₂; x₃; x₄]) = [x₁ − x₃; x₄] for every x₁, x₂, x₃, x₄ ∈ R.
(2) T : R⁵ → R³, T([x₁; x₂; x₃; x₄; x₅]) = [x₁ − x₂; x₃ − x₄; x₅] for every x₁, x₂, x₃, x₄, x₅ ∈ R.

Definition 14. Let V, W be sets. A mapping T : V → W is called one-to-one if the following holds: for every v, w ∈ V, if v ≠ w, then T(v) ≠ T(w).
Equivalently, T is one-to-one if the following holds: for every v, w ∈ V, if T(v) = T(w), then v = w. A map is one-to-one precisely when it sends distinct elements in V to distinct elements in W, i.e. no two elements in V are mapped to the same place in W.

Example 18. Are the following linear transformations one-to-one?

(1) T : R → R², T(x) = [x; 3x].
(2) T : R² → R², T(v) = Av for a given 2×2 matrix A.

Theorem 5. A linear transformation T : V → W is one-to-one if and only if Nul T = {0}.

Proof. (⇒) First suppose T is one-to-one, and let v ∈ Nul T. We will show v = 0. To see this, note that T(v) = 0 since v ∈ Nul T. But we also have 0 ∈ Nul T since Nul T is a subspace of V, so T(0) = 0 = T(v). Since T is one-to-one, we must have 0 = v, and hence Nul T is the trivial subspace Nul T = {0}.

(⇐) Conversely, suppose T is not one-to-one; we will show Nul T is nontrivial. Since T is not one-to-one, there exist two distinct vectors v, w ∈ V, v ≠ w, such that T(v) = T(w). Set u = v − w. Since v ≠ w and additive inverses are unique, we have u ≠ 0, and we also have

T(u) = T(v − w) = T(v) − T(w) = T(v) − T(v) = 0.

So u ∈ Nul T. Since u is not the zero vector, Nul T ≠ {0}.

Definition 15. Let V, W be sets. A mapping T : V → W is called onto if the following holds: for every w ∈ W, there exists a vector v ∈ V such that T(v) = w.

Example 19. Are the following linear transformations onto?

(1) T : R → R², T(x) = [x; 3x].
(2) T : R² → R, T([x₁; x₂]) = x₁ − x₂.
(3) T : R² → R², T([x₁; x₂]) = [x₁ − x₂; 2x₁ + x₂].

Definition 16. Let V, W be vector spaces and T : V → W a linear transformation. If T is both one-to-one and onto, then T is called an isomorphism. In this case the domain V and codomain W are called isomorphic as vector spaces, or just isomorphic. Intuitively, it means that V and W are indistinguishable from one another in terms of their vector space structure.

Example 20. Prove that the mapping T : R² → R², where T rotates the plane by a fixed angle θ, is an isomorphism.

Example 21.
Let W be the graph of the line y = 3x, a vector subspace of R². Prove that R is isomorphic to W.

8 The Range Space of a Linear Transformation and the Column Space of a Matrix

Definition 17. Let V, W be vector spaces and T : V → W a linear transformation. Define the following set:

Ran T = {w ∈ W : there exists a vector v ∈ V such that T(v) = w}.
Then Ran T is called the range of T. (This definition should coincide with the student's knowledge of the range of a function from previous courses.)

Theorem 6. Let V, W be vector spaces and T : V → W a linear transformation. Then Ran T is a vector subspace of W.

Proof. Again we will show that Ran T is closed under linear combinations, and appeal to Theorem 2. To that end, let w₁, ..., wₙ ∈ Ran T and let c₁, ..., cₙ ∈ R all be arbitrary. We must show c₁w₁ + ... + cₙwₙ ∈ Ran T. To see this, note that since w₁, ..., wₙ ∈ Ran T, there exist vectors v₁, ..., vₙ ∈ V such that T(v₁) = w₁, ..., T(vₙ) = wₙ. Set u = c₁v₁ + ... + cₙvₙ. Since V is a vector space, V is closed under taking linear combinations and hence u ∈ V. Moreover, we have

T(u) = T(c₁v₁ + ... + cₙvₙ) = c₁T(v₁) + ... + cₙT(vₙ) = c₁w₁ + ... + cₙwₙ.

So the vector c₁w₁ + ... + cₙwₙ is the image of u under T, and hence c₁w₁ + ... + cₙwₙ ∈ Ran T. This shows Ran T is closed under linear combinations and hence a vector subspace of W.

Theorem 7. Let V, W be vector spaces and T : V → W a linear transformation. Then T is onto if and only if Ran T = W.

Proof. Obvious if you think about it!

Corollary 2. Let V, W be vector spaces and T : V → W a linear transformation. Then the following statements are all equivalent:

(1) T is an isomorphism.
(2) T is one-to-one and onto.
(3) Nul T = {0} and Ran T = W.

Definition 18. Let m, n ∈ N, and let A be an m × n matrix (so A induces a linear transformation from Rⁿ into Rᵐ). Write A = [w₁ w₂ ... wₙ], where each of w₁, ..., wₙ is a column vector in Rᵐ. Define the following set:

Col A = Span{w₁, ..., wₙ}.

Then Col A is called the column space of the matrix A. Col A is exactly the set of all vectors in Rᵐ which may be written as a linear combination of the columns of the matrix A. Of course Col A is also a vector subspace of Rᵐ, by Corollary 1.
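Membership in Col A is a consistency question, so it can be tested the same way as in Example 7. The sketch below is not part of the notes; the matrix used is the one suggested by Example 23 as reconstructed further down (columns (6, 1, 7) and (−1, 1, 0)), supplied here as an assumption.

```python
import numpy as np

def in_col_space(A, b, tol=1e-10):
    """b is in Col A exactly when some linear combination of the columns
    of A reaches b, i.e. when the least-squares residual of A c = b is
    zero."""
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.allclose(A @ c, b, atol=tol))

A = np.array([[ 6.0, -1.0],
              [ 1.0,  1.0],
              [ 7.0,  0.0]])

# Any combination of the columns is in Col A by construction.
print(in_col_space(A, A @ np.array([2.0, 3.0])))    # True

# (1, 0, 0) forces a = 0 (third row), then the first two rows conflict.
print(in_col_space(A, np.array([1.0, 0.0, 0.0])))   # False
```

Since Col A here is a 2-dimensional subspace of R³ (a plane through the origin), most vectors of R³ fall outside it.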
Example 22. Let A be a given matrix with three rows.

(1) Determine whether or not a given vector b ∈ R³ is in Col A.
(2) What does Col A look like in R³?

Example 23. Let W = {[6a − b; a + b; 7a] : a, b ∈ R}. Find a matrix A so that W = Col A.

Theorem 8. Let T : Rⁿ → Rᵐ (n, m ∈ N) be a linear transformation. Let A be the m × n matrix representation of T guaranteed by Theorem 3. Then Ran T = Col A.

Proof. Recall from Theorem 3 that the matrix A is given by A = [T(e₁) T(e₂) ... T(eₙ)], where e₁, ..., eₙ are the standard basis vectors in Rⁿ. So its columns are just T(e₁), ..., T(eₙ).

The first thing we will show is that Ran T ⊆ Col A. So suppose w ∈ Ran T. Then there exists some vector v ∈ Rⁿ for which T(v) = w. Write v = [v₁; ...; vₙ] = v₁e₁ + ... + vₙeₙ for some real numbers v₁, ..., vₙ ∈ R. Then, since T is a linear transformation, we have

w = T(v) = T(v₁e₁ + ... + vₙeₙ) = v₁T(e₁) + ... + vₙT(eₙ).

So the above equality displays w as a linear combination of the columns of A, with weights v₁, ..., vₙ. This proves w ∈ Col A. Since w was taken arbitrarily out of Ran T, we must have Ran T ⊆ Col A.

The next thing we will show is that Col A ⊆ Ran T. So let w ∈ Col A. Then w can be written as a linear combination of the columns of A, so there exist some weights c₁, ..., cₙ ∈ R for which w = c₁T(e₁) + ... + cₙT(eₙ). Set v = c₁e₁ + ... + cₙeₙ. So v ∈ Rⁿ, and we claim that T(v) = w. To see this, just compute (again using the fact that T is linear):

T(v) = T(c₁e₁ + ... + cₙeₙ) = c₁T(e₁) + ... + cₙT(eₙ) = w.
This shows w ∈ Ran T, and again since w was taken arbitrarily out of Col A, we have shown that Col A ⊆ Ran T. Since Ran T ⊆ Col A and Col A ⊆ Ran T, we must have Ran T = Col A. This completes the proof.

Example 24. Let T : R² → R² be the linear transformation defined by T(e₁) = 2e₁ + 4e₂ and T(e₂) = e₁ + 2e₂. Let w = [2; …]. Is w ∈ Ran T?

9 Linear Independence and Bases.

Definition 19. Let V be a vector space and let v₁, ..., vₙ ∈ V. Consider the equation:

c₁v₁ + ... + cₙvₙ = 0

where c₁, ..., cₙ are interpreted as real variables. Notice that c₁ = ... = cₙ = 0 is always a solution to the equation, which is called the trivial solution, but that there may be others depending on our choice of v₁, ..., vₙ. If there exists a nonzero solution (c₁, ..., cₙ) to the equation, i.e. a solution where cₖ ≠ 0 for at least one k (1 ≤ k ≤ n), then the vectors v₁, ..., vₙ are called linearly dependent. Otherwise, if the trivial solution is the only solution, then the vectors v₁, ..., vₙ are called linearly independent.

Example 25. Let v₁ = [1; 2; 3], v₂ = [4; 5; 6], and let v₃ be a third given vector in R³.

(1) Are the vectors v₁, v₂, v₃ linearly independent?
(2) If possible, find a dependence relation, i.e. a nontrivial linear combination of v₁, v₂, v₃ which sums to 0.

Fact 5. Let v₁, ..., vₙ be column vectors in Rᵐ. Define an m × n matrix A by A = [v₁ v₂ ... vₙ], and define a linear transformation T : Rⁿ → Rᵐ by the rule T(v) = Av for every v ∈ Rⁿ. Then the following statements are all equivalent:

(1) The vectors v₁, ..., vₙ are linearly independent.
(2) Nul T = {0}.
(3) T is one-to-one.

Example 26. Let v₁ = [2; …] and v₂ = [4; …]. Are v₁ and v₂ linearly independent?

Fact 6. Let V be a vector space and v₁, ..., vₙ ∈ V. Then v₁, ..., vₙ are linearly dependent if and only if there exists some k ∈ {1, ..., n} such that vₖ ∈ Span{v₁, ..., vₖ₋₁, vₖ₊₁, ..., vₙ}, i.e. vₖ can be written as a linear combination of the other vectors.

Example 27. Let v₁ = [2; …] ∈ R².
(1) Describe all vectors v_2 ∈ R^2 for which v_1, v_2 are linearly independent.
(2) Describe all pairs of vectors v_2, v_3 ∈ R^2 for which v_1, v_2, v_3 are linearly independent.

Example 28. Let v_1 and v_2 be two given vectors in R^3. Describe all vectors v_3 ∈ R^3 for which v_1, v_2, v_3 are linearly independent.

Definition 20. Let V be a vector space and let b_1, ..., b_n ∈ V. The set {b_1, ..., b_n} ⊆ V is called a basis for V if
(1) b_1, ..., b_n are linearly independent, and
(2) V = Span{b_1, ..., b_n}.

Example 29. Let v_1, v_2, v_3 be three given vectors in R^3. Is {v_1, v_2, v_3} a basis for R^3?

Example 30. Let v_1 and v_2 be two given vectors in R^2. (1) Is {v_1, v_2} a basis for R^2? (2) What if v_2 is replaced by a different given vector?

Fact 7. The standard basis {e_1, ..., e_n} is a basis for R^n.

Fact 8. The set {1, x, x^2, ..., x^n} is a basis for P_n.

Fact 9. Let V be any vector space and let B = {b_1, ..., b_n} be a basis for V.
(1) If you remove any one element b_k from B, then the resulting set no longer spans V. This is because b_k is linearly independent from the other vectors in B, and hence cannot be written as a linear combination of them by Fact 6. In this sense a basis is a minimal spanning set.
(2) If you add any one element v, not already in B, to B, then the resulting set is no longer linearly independent. This is because B spans V, and hence there are no vectors in V which are independent from those in B by Fact 6. In this sense a basis is a maximal linearly independent set.

Theorem 9 (Unique Representation Theorem). Let V be a vector space and let B = {b_1, ..., b_n} be a basis for V. Then every vector v ∈ V may be written as a linear combination

v = c_1 b_1 + ... + c_n b_n

in one and only one way, i.e. v has a unique representation with respect to B.

Proof. Since {b_1, ..., b_n} is a basis for V, in particular the set spans V, so we have v ∈ Span{b_1, ..., b_n}. It follows that there exist some scalars c_1, ..., c_n ∈ R for which v = c_1 b_1 + ... + c_n b_n. So we need only check that the representation above is unique.
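A short numeric aside on the tests behind Definitions 19 and 20 (the vectors here are illustrative stand-ins, not the ones from the examples): independence of k vectors amounts to the matrix [v_1 ... v_k] having rank k, and n vectors form a basis of R^n exactly when the n × n matrix they form has full rank.

```python
import numpy as np

def independent(vectors):
    # v_1, ..., v_k are linearly independent iff the matrix [v_1 ... v_k]
    # has rank k (equivalently, Nul T = {0} as in Fact 5).
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

def is_basis_of_Rn(vectors):
    # n vectors form a basis of R^n iff they are independent AND span;
    # for an n x n matrix both conditions amount to full rank.
    M = np.column_stack(vectors)
    return M.shape[1] == M.shape[0] and np.linalg.matrix_rank(M) == M.shape[0]

e1, e2, e3 = np.eye(3)   # the standard basis of R^3 (Fact 7)
v = np.array([4.0, 5.0, 6.0])

print(independent([e1, v]))               # True
print(is_basis_of_Rn([e1, e2, e3]))       # True
print(is_basis_of_Rn([e1, e2, e1 + e2]))  # False: third vector is dependent
```

The last line illustrates Fact 6: e1 + e2 lies in Span{e1, e2}, so the three vectors are dependent and cannot form a basis.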
To that end, suppose d_1, ..., d_n ∈ R is another set of scalars for which v = d_1 b_1 + ... + d_n b_n. We will show that in fact d_1 = c_1, ..., d_n = c_n, and hence there is really only one choice of scalars to begin with. To see this, observe the following equalities:
(d_1 − c_1) b_1 + ... + (d_n − c_n) b_n = [d_1 b_1 + ... + d_n b_n] − [c_1 b_1 + ... + c_n b_n] = v − v = 0.

So we have written 0 as a linear combination of b_1, ..., b_n. But the collection is a basis and hence linearly independent, so the coefficients must all be zero, i.e. d_1 − c_1 = ... = d_n − c_n = 0. The conclusion now follows.

Dimension.

Theorem (Steinitz Exchange Lemma). Let V be a vector space and let B = {b_1, ..., b_n} be a basis for V. Let v ∈ V. By Theorem 9 there are unique scalars c_1, ..., c_n for which v = c_1 b_1 + ... + c_n b_n. If there is k ∈ {1, ..., n} for which c_k ≠ 0, then exchanging v for b_k yields another basis for the space, i.e. the collection {b_1, ..., b_{k−1}, v, b_{k+1}, ..., b_n} is a basis for V.

Proof. Call the new collection B̂ = {b_1, ..., b_{k−1}, v, b_{k+1}, ..., b_n}. We must show both that B̂ is linearly independent and that it spans V.

First we check that B̂ is linearly independent. Consider the equation

d_1 b_1 + ... + d_{k−1} b_{k−1} + d_k v + d_{k+1} b_{k+1} + ... + d_n b_n = 0.

Let's substitute the representation for v in the above:

d_1 b_1 + ... + d_{k−1} b_{k−1} + d_k (c_1 b_1 + ... + c_n b_n) + d_{k+1} b_{k+1} + ... + d_n b_n = 0.

Now collecting like terms we get:

(d_1 + d_k c_1) b_1 + ... + (d_{k−1} + d_k c_{k−1}) b_{k−1} + d_k c_k b_k + (d_{k+1} + d_k c_{k+1}) b_{k+1} + ... + (d_n + d_k c_n) b_n = 0.

Now since b_1, ..., b_n are linearly independent, all the coefficients in the above equation must be zero. In particular, we have d_k c_k = 0. But c_k ≠ 0 by our hypothesis, so we may divide through by c_k and get d_k = 0. Now substituting d_k = 0 back in, we have:

d_1 b_1 + ... + d_{k−1} b_{k−1} + d_{k+1} b_{k+1} + ... + d_n b_n = 0.

Now using the linear independence of b_1, ..., b_n one more time, we conclude that d_1 = ... = d_{k−1} = d_k = d_{k+1} = ... = d_n = 0. So the only solution is the trivial solution, and hence B̂ is linearly independent.

It remains only to check that V = Span B̂. To see this, let w ∈ V be arbitrary. By Theorem 9, write w as a linear combination of the vectors in B:

w = a_1 b_1 + ... + a_n b_n
for some scalars a_1, ..., a_n ∈ R. Now notice that since v = c_1 b_1 + ... + c_n b_n and since c_k ≠ 0, we may solve for the vector b_k as follows:

b_k = −(c_1/c_k) b_1 − ... − (c_{k−1}/c_k) b_{k−1} + (1/c_k) v − (c_{k+1}/c_k) b_{k+1} − ... − (c_n/c_k) b_n.

Note that for the above to make sense, it is crucial that c_k ≠ 0. Now, if we substitute the above expression for b_k in the equation w = a_1 b_1 + ... + a_n b_n and then collect like terms, we will see w written as a linear combination of the vectors b_1, ..., b_{k−1}, b_{k+1}, ..., b_n and the vector v. This implies that w ∈ Span B̂. Since w was arbitrary in V, we have V = Span B̂. So B̂ is indeed a basis for V and the theorem is proved.

Corollary 3. Let V be a vector space which has a finite basis. Then every basis of V is the same size.

Proof. Let B = {b_1, ..., b_n} be a basis of minimal size. Let D be any other basis for V, consisting of arbitrarily many elements of V. We will show that in fact D has exactly n elements.

Let d_1 ∈ D be arbitrary. Since d_1 ∈ V = Span{b_1, ..., b_n}, it is possible to write d_1 as a nontrivial linear combination d_1 = c_1 b_1 + ... + c_n b_n, i.e. c_k ≠ 0 for at least one k ∈ {1, ..., n}. By reordering the terms of the basis B if necessary, we may assume without loss of generality that c_1 ≠ 0. Define a new set B_1 = {d_1, b_2, ..., b_n}; by the Steinitz Exchange Lemma, B_1 is also a basis for V.

Now we go one step further. Since B was a basis of minimal size, we know D has at least n many elements. So let d_2 ∈ D be distinct from d_1. We know d_2 ∈ V = Span B_1, so there exist constants c_1, ..., c_n for which d_2 = c_1 d_1 + c_2 b_2 + ... + c_n b_n. Notice that we must have c_k ≠ 0 for some k ∈ {2, ..., n}, since if c_1 were the only nonzero coefficient, then d_2 would be a scalar multiple of d_1, which is not the case since they are linearly independent. By reordering the terms b_2, ..., b_n if necessary, we may assume without loss of generality that c_2 ≠ 0. Define B_2 = {d_1, d_2, b_3, ..., b_n}. By the Steinitz Exchange Lemma, B_2 is also a basis for V. Continue this process.
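The exchange step can be sanity-checked numerically (an illustrative sketch, with a made-up basis and vector): write v in a basis, then swap v in for a b_k whose coefficient c_k is nonzero; the new set is still a basis, while swapping at a zero coefficient fails, exactly as the hypothesis c_k ≠ 0 requires.

```python
import numpy as np

def is_basis(vectors):
    # n vectors are a basis of R^n iff the n x n matrix they form has full rank.
    M = np.column_stack(vectors)
    return M.shape[0] == M.shape[1] and np.linalg.matrix_rank(M) == M.shape[0]

b1, b2, b3 = np.eye(3)        # start from the standard basis of R^3
v = 2*b1 + 0*b2 + 5*b3        # unique coordinates c = (2, 0, 5)

# Solving B c = v recovers the coordinates (unique by Theorem 9).
c = np.linalg.solve(np.column_stack([b1, b2, b3]), v)
print(c)                      # [2. 0. 5.]

print(is_basis([v, b2, b3]))  # True:  c_1 = 2 != 0, so b1 may be exchanged for v
print(is_basis([b1, v, b3]))  # False: c_2 = 0, so exchanging b2 for v fails
```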
At each step k ∈ {1, ..., n}, we will have obtained a new basis B_k = {d_1, ..., d_k, b_{k+1}, ..., b_n} for V. Since D has at least n many elements, there is a d_{k+1} ∈ D which is not equal to any of d_1, ..., d_k. Write d_{k+1} = c_1 d_1 + ... + c_k d_k + c_{k+1} b_{k+1} + ... + c_n b_n for some constants c_1, ..., c_n, and observe that one of the constants c_{k+1}, ..., c_n must be nonzero since d_{k+1} is independent from d_1, ..., d_k. Then reorder the terms b_{k+1}, ..., b_n and use the Steinitz Exchange Lemma to replace b_{k+1} with d_{k+1} to obtain a new basis.

After finitely many steps we end up with the basis B_n = {d_1, ..., d_n} ⊆ D. Since B_n spans V, it is not possible for D to contain any more elements, since nothing in V is linearly independent from the vectors in B_n. So in fact D = B_n and the proof is complete.

Definition 21. Let V be a vector space, and suppose V has a finite basis B = {b_1, ..., b_n} consisting of n vectors. Then we say that the dimension of V is n, or that V is n-dimensional. We also write dim V = n. Notice that every finite-dimensional space V has a unique dimension by Corollary 3. If V has no finite basis, we say that V is infinite-dimensional.

Example 31. Determine the dimension of the following vector spaces.
(1) R.
(2) R^2.
(3) R^3.
(4) R^n.
(5) P_n.
(6) The trivial space {0}.
(7) Let V be the set of all infinite sequences (a_1, a_2, a_3, ...) of real numbers which eventually end in an infinite string of 0s. (Verify that V is a vector space using coordinatewise addition and scalar multiplication.)
(8) The space P of all polynomials. (Verify that P is a vector space using standard polynomial addition and scalar multiplication.)
(9) Let V be the set of all continuous functions f : R → R. (Verify that V is a vector space using function addition and scalar multiplication.)
(10) Any line through the origin in R^3.
(11) Any line through the origin in R^2.
(12) Any plane through the origin in R^3.

Corollary 4. Suppose V is a vector space with dim V = n. Then no linearly independent set in V consists of more than n elements.

Proof. This follows from the proof of Corollary 3, since when considering D we only used the fact that D was linearly independent, not spanning.

Corollary 5. Let V be a finite-dimensional vector space. Any linearly independent set in V may be expanded to make a basis for V.

Proof. If a linearly independent set {d_1, ..., d_n} already spans V, then it is already a basis. If it doesn't span, then we can find a vector d_{n+1} ∉ Span{d_1, ..., d_n}, so that {d_1, ..., d_n, d_{n+1}} is linearly independent by Fact 6. If this larger set spans V, then we're done. Otherwise, continue adding vectors. By the previous corollary, we know the process stops in finitely many steps.

Corollary 6. Let V be a finite-dimensional vector space. Any spanning set in V may be shrunk to make a basis for V.

Proof. Let S be a spanning set for V, i.e. V = Span S. Without loss of generality we may assume that 0 ∉ S, for if 0 is in S, then we can toss it out and the set S will still span V. If S is empty, then V = Span S = {0} and S is already a basis, and we are done. If S is not empty, then choose s_1 ∈ S. If V = Span{s_1}, we are done and {s_1} is a basis.
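The "toss out the zero vector, then keep only vectors independent of those already kept" procedure of Corollary 6 can be sketched numerically (an illustrative aside with made-up vectors; the length of the resulting basis is also the dimension of Span S):

```python
import numpy as np

def shrink_to_basis(vectors):
    # Keep each vector that is independent of those kept so far; the zero
    # vector and anything already in the span get discarded, as in the proof.
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            kept = candidate
    return kept

S = [np.array([0.0, 0.0]),    # zero vector: discarded
     np.array([1.0, 1.0]),
     np.array([2.0, 2.0]),    # already in the span of the previous one: discarded
     np.array([1.0, 0.0])]

basis = shrink_to_basis(S)
print(len(basis))             # 2, so Span S = R^2 and dim Span S = 2
```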
Otherwise, observe that if every vector in S were in Span{s_1}, then we would have Span S = Span{s_1}; since S spans V and {s_1} does not, this is impossible. So there is some s_2 ∈ S with s_2 ∉ Span{s_1}, i.e. s_1 and s_2 are linearly independent. If {s_1, s_2} spans V, we are done. Otherwise we can find a third linearly independent vector s_3, and so on. The process stops in finitely many steps.

Corollary 7. Let V be an n-dimensional vector space. Then a set of n vectors in V is linearly independent if and only if it spans V.

Coordinate Systems.

Definition 22. Let V be a vector space and B = {b_1, ..., b_n} be a basis for V. Let v ∈ V. By the Unique Representation Theorem, there are unique weights c_1, ..., c_n ∈ R for which v = c_1 b_1 + ... + c_n b_n. Call c_1, ..., c_n the coordinates of v relative to the basis B.
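In R^n, the weights of Definition 22 are found by solving the linear system B c = v, where B is the matrix whose columns are the basis vectors; the solution is unique precisely because those columns form a basis, so B is invertible. A minimal sketch with a hypothetical basis of R^3:

```python
import numpy as np

# Hypothetical basis of R^3 (the columns of B) and a target vector v.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])

# Solve B c = v; unique solution since the columns of B are a basis.
c = np.linalg.solve(B, v)
print(c)                      # [0. 2. 1.]

# Check: c_1 b_1 + c_2 b_2 + c_3 b_3 reproduces v.
assert np.allclose(c[0]*B[:, 0] + c[1]*B[:, 1] + c[2]*B[:, 2], v)
```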