Vector Spaces II: Finite Dimensional Linear Algebra


John Nachbar
September 2, 2014

Vector Spaces II: Finite Dimensional Linear Algebra¹

1 Definitions and Basic Theorems.

For basic properties and notation for R^N, see the notes Vector Spaces I.

Definition 1. X ⊆ R^N, X ≠ ∅, is a vector space iff the following conditions hold.

1. For any x, x̂ ∈ X, x + x̂ ∈ X. (X is closed under vector addition.)
2. For any x ∈ X and any α ∈ R, αx ∈ X. (X is closed under scalar multiplication.)

Since 0 ∈ R, if X is a vector space then, by condition (2) of the definition, X contains 0x = (0, ..., 0). That is, any vector space must contain the origin.

Example 1. R^N is itself a vector space.

Example 2. The set of points in R^2 given by the graph of y = ax + b, b ≠ 0, does not contain the origin and hence is not a vector space. On the other hand, the graph of y = ax is a vector space.

Definition 2. Let X ⊆ R^N be a vector space and suppose B ⊆ X is also a vector space. Then B is a vector subspace of X.

Example 3. If X ⊆ R^N is a vector space then it is a vector subspace of R^N.

Example 4. R^1 is a vector subspace of R^2. But the set [−1, 1] is not a vector subspace, because it is not closed under either vector addition or scalar multiplication (for example, 1 + 1 = 2 ∉ [−1, 1]).

Geometrically, a vector space in R^N looks like a line, plane, or higher dimensional analog thereof, through the origin.

A key feature of a vector space X ⊆ R^N is that X can be characterized by listing only a few of its vectors. The characterization is not unique, except in the trivial case X = {0}. These characterizing vectors are said to span the space.

Definition 3. Let S = {s^1, ..., s^r} be a set of r vectors in R^N. The span of S is

  span(S) = { x ∈ R^N : ∃ λ ∈ R^r such that x = ∑_{i=1}^r λ_i s^i }.

¹ cbna. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.

Example 5. The span of S = {(1, 0), (0, 1)} is all of R^2, as is easily verified by noting that x = (x_1, x_2) = x_1(1, 0) + x_2(0, 1). Less obviously, S = {(1, 1), (1, −1)} has the same span. I return to this below. The span of S = {(1, 1)} is all points of the form (α, α).

Theorem 1. Let S = {s^1, ..., s^r} be a non-empty subset of R^N. Then span(S) is a vector space.

Proof. I need to check that the definition of vector space is satisfied. Trivially, span(S) ⊆ R^N. It remains to show that x + x̂, αx ∈ span(S) for any x, x̂ ∈ span(S), any α ∈ R. Let x, x̂ ∈ span(S). Then there are vectors λ, λ̂ ∈ R^r such that x = ∑_{i=1}^r λ_i s^i and x̂ = ∑_{i=1}^r λ̂_i s^i. Then x + x̂ = ∑_{i=1}^r (λ_i + λ̂_i) s^i ∈ span(S). Similarly, for any α ∈ R, αx = α ∑_{i=1}^r λ_i s^i = ∑_{i=1}^r (αλ_i) s^i ∈ span(S). ∎

The following says that a vector s^i ∈ S is redundant, in the sense of not increasing span(S), iff s^i is itself in the span of the other vectors in S.

Theorem 2. Let s^i ∈ S = {s^1, ..., s^r} ⊆ R^N. Then s^i ∈ span(S \ {s^i}) iff span(S) = span(S \ {s^i}).

Proof. Possibly relabeling the elements of S, suppose that s^1 ∈ span(S \ {s^1}). Since S \ {s^1} ⊆ S, span(S \ {s^1}) ⊆ span(S). It remains to show that span(S) ⊆ span(S \ {s^1}). Since s^1 ∈ span(S \ {s^1}), there is a ρ ∈ R^{r−1} such that s^1 = ∑_{i=2}^r ρ_i s^i. Consider any x ∈ span(S). Then there is a λ ∈ R^r such that x = ∑_{i=1}^r λ_i s^i. Therefore

  x = λ_1 s^1 + ∑_{i=2}^r λ_i s^i
    = λ_1 ( ∑_{i=2}^r ρ_i s^i ) + ∑_{i=2}^r λ_i s^i
    = ∑_{i=2}^r (λ_1 ρ_i + λ_i) s^i.

Hence x ∈ span(S \ {s^1}), as claimed.

Conversely, if span(S) = span(S \ {s^i}) then, since s^i ∈ S ⊆ span(S) = span(S \ {s^i}), s^i ∈ span(S \ {s^i}). ∎

Given a vector space, one wishes to identify sets S that span that vector space efficiently, that is, sets S that contain a minimal number of elements. Such sets are independent in the following sense.

Definition 4. Let S = {s^1, ..., s^r} be a subset of R^N, S ≠ ∅. S is dependent iff there is a λ ∈ R^r, λ ≠ 0, such that

  ∑_{i=1}^r λ_i s^i = 0.
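Example 5's claim that {(1, 1), (1, −1)} also spans R^2 can be checked numerically: writing the vectors as the columns of a matrix and solving for the weights recovers the λ of Definition 3. A minimal sketch, assuming numpy is available (the variable names are mine, not from the notes):

```python
import numpy as np

# Columns are the vectors of S = {(1, 1), (1, -1)}.
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# For an arbitrary x in R^2, find weights lam with S @ lam = x,
# i.e. x = lam_1 * (1, 1) + lam_2 * (1, -1).
x = np.array([3.0, 5.0])
lam = np.linalg.solve(S, x)   # S is invertible, so a solution exists

assert np.allclose(S @ lam, x)          # x is reconstructed exactly
assert np.allclose(lam, [4.0, -1.0])    # here x = 4(1,1) - 1(1,-1)
```

Since the square matrix of spanning vectors is invertible, every x ∈ R^2 has such a representation, which is exactly what it means for S to span R^2.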

If S is not dependent then it is independent. Note that if S is independent and s^i ∈ S then s^i ≠ 0.

Theorem 3. Let S = {s^1, ..., s^r} be a subset of R^N. S is independent iff there is no s^i ∈ S such that s^i ∈ span(S \ {s^i}).

Proof. The argument is by contraposition. Possibly relabeling the elements of S, suppose that s^1 ∈ span(S \ {s^1}). Then there is a ρ ∈ R^{r−1} such that s^1 = ∑_{i=2}^r ρ_i s^i. Hence

  0 = s^1 − ∑_{i=2}^r ρ_i s^i,

which immediately implies that S is not independent.

Again, the argument is by contraposition. Suppose S is dependent. Then there is a λ ∈ R^r such that 0 = ∑_{i=1}^r λ_i s^i but λ ≠ 0. Possibly relabeling elements, suppose λ_1 ≠ 0. Then

  s^1 = ∑_{i=2}^r ( −λ_i / λ_1 ) s^i.

That is, s^1 ∈ span({s^2, ..., s^r}). ∎

By Theorem 2, an immediate corollary of Theorem 3 is the following.

Theorem 4. Let S = {s^1, ..., s^r} be a subset of R^N. S is independent iff there is no s^i ∈ S such that span(S) = span(S \ {s^i}).

By Theorem 4, a set S is independent iff it contains no vector that is redundant in the sense that it could be deleted from S without altering the span. This implies that the search for a minimal spanning set should focus on independent sets. In particular, I ask, for a given vector space X, for what independent sets S is span(S) = X? Such sets are called bases for X.

Definition 5. Let X be a vector space. S ⊆ X is a basis for X iff S spans X and S is independent.

Remark 1. A basis is also sometimes called a Hamel basis.

Suppose S = {s^1, ..., s^r} is a basis for X. Then, since S spans X, for any x ∈ X there is a λ ∈ R^r such that

  x = ∑_{i=1}^r λ_i s^i.

λ_i is the ith coordinate of x in this basis. For any basis, the coordinate representation is unique (for that basis): if also x = ∑_{i=1}^r ρ_i s^i, so that 0 = x − x = ∑_{i=1}^r (λ_i − ρ_i) s^i, then the independence of S implies λ_i − ρ_i = 0, or λ_i = ρ_i, for every i.
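Finding the coordinate representation λ amounts to solving a linear system whose columns are the basis vectors. A sketch, assuming numpy, using the non-standard basis {(2, 1), (1, 2)} and the point (11, 1) that appear in the notes:

```python
import numpy as np

# Basis vectors (2, 1) and (1, 2), as columns.
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([11.0, 1.0])     # the point, in standard coordinates
lam = np.linalg.solve(B, x)   # its coordinates in the basis B

assert np.allclose(lam, [7.0, -3.0])                  # coordinates (7, -3)
assert np.allclose(7.0 * B[:, 0] - 3.0 * B[:, 1], x)  # 7(2,1) - 3(1,2) = (11,1)
```

Uniqueness of the solution here is exactly the uniqueness of the coordinate representation: B is invertible because the basis vectors are independent.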

When X = R^N, the standard basis consists of the unit vectors e^n = (0, ..., 0, 1, 0, ..., 0), where the 1 appears in the nth place. This is indeed a basis for R^N. The e^n span R^N, since, for any x ∈ R^N, x = ∑_{n=1}^N x_n e^n. (This is exactly what one means when one writes x = (x_1, ..., x_N).) Likewise, the e^n are independent, since 0 = ∑_{n=1}^N λ_n e^n immediately implies (by the definition of the e^n) that λ_n = 0 for all n. In the standard basis, the nth coordinate is, of course, just x_n.

It is important to remember that the standard basis for R^N is not the only basis for R^N. Indeed, except for the trivial case X = {0}, every vector space has an infinite number of bases. Thus, for example, {(2, 1), (1, 2)} forms a basis for R^2. Theorem 8 below provides a tool for checking this assertion. Consider the point (11, 1) (written in the standard basis). In the basis {(2, 1), (1, 2)}, this same point has coordinates (7, −3), since (11, 1) = 7(2, 1) − 3(1, 2). This exercise may seem artificial, but finding coordinates in a new, non-standard, basis is effectively what one does when one solves systems of simultaneous linear equations.

Definition 6. Let X be a vector space. dim(X), the dimension of X, is r iff there is an independent set of r vectors in X, but no independent set of r + 1 vectors in X.

If dim(X) = r then one expects that any basis for X contains r vectors. I confirm this by first noting that if S spans X and S has t vectors then dim(X) cannot be more than t. (It could be strictly less if S were dependent, in which case some of the t vectors would be redundant.) Let |S| be the number of elements (vectors) in S.

Theorem 5. Let X be a vector space. If S spans X then dim(X) ≤ |S|.

Proof. I argue by contraposition. Consider any set of vectors S = {s^1, ..., s^t}. If dim(X) > |S| then there is an independent set Z = {z^1, ..., z^r} ⊆ X with |Z| = r > t = |S|. I show that S does not span X, and that in particular some z^i is not in the span of S.

Consider first z^1. If z^1 is not in the span of S, I am done. If z^1 is in the span of S then there is a λ^1 ∈ R^t such that

  z^1 = ∑_{i=1}^t λ^1_i s^i.

Since z^1 ≠ 0 (since Z is assumed independent), λ^1_i ≠ 0 for some i. Possibly relabeling the elements of S, suppose that in fact λ^1_1 ≠ 0. Then

  s^1 = (1/λ^1_1) z^1 + ∑_{i=2}^t ( −λ^1_i / λ^1_1 ) s^i.

It follows that s^1 ∈ span(T^1), where T^1 = {z^1, s^2, ..., s^t}. Therefore span(T^1) = span(S). Explicitly, since s^1 ∈ span(T^1), while s^2, ..., s^t ∈ T^1, it follows that span(S) ⊆ span(T^1). Conversely, since z^1 ∈ span(S), span(T^1) ⊆ span(S).

Next consider z^2. Again, if z^2 ∉ span(S) = span(T^1), I am done. Otherwise, if z^2 ∈ span(T^1) then there is a λ^2 ∈ R^t such that

  z^2 = λ^2_1 z^1 + ∑_{i=2}^t λ^2_i s^i.

Since z^2 ≠ 0 and since Z is assumed independent (hence, in particular, z^2 ≠ λ^2_1 z^1), λ^2_i ≠ 0 for some i > 1. Possibly relabeling the elements of S, suppose that in fact λ^2_2 ≠ 0. As above, I conclude that T^2 = {z^1, z^2, s^3, ..., s^t} has the same span as T^1 and hence the same span as S.

Proceeding in this way for i = 1, 2, ..., t, I either uncover a z^i not in the span of S or I find that the set T^t = {z^1, ..., z^t} ⊆ Z has the same span as S. Since r = |Z| > t, there is a z^{t+1} ∈ Z. Since Z is independent, z^{t+1} is not in the span of T^t (by Theorem 3), hence z^{t+1} is not in the span of S. Thus S does not span X. ∎

Theorem 6. Let X be a vector space. If S is a basis for X then dim(X) = |S|.

Proof. Suppose S is a basis for X. Since S is independent, dim(X) ≥ |S|. On the other hand, by Theorem 5, since S spans X, dim(X) ≤ |S|. Hence dim(X) = |S|. ∎

The following is then an immediate corollary.

Theorem 7. dim(R^N) = N.

Theorem 8. Let X be a vector space. Suppose that dim(X) = r.

1. If S ⊆ X and |S| = r then span(S) = X iff S is independent.
2. X has a basis, and every basis has r vectors.
3. If S ⊆ X is independent then there is an Ŝ ⊆ X such that S ⊆ Ŝ and Ŝ is a basis for X.

Proof. 1. I argue by contraposition. Suppose that S is dependent. Then, by Theorem 4, I can remove an element of S without changing the span. Since dim(X) = r, the contrapositive of Theorem 5 implies that S does not span X.

Conversely, suppose that S is independent. Consider any x ∈ X. Since dim(X) = r, {s^1, ..., s^r, x} is dependent. Then there is a λ ∈ R^{r+1}, λ ≠ 0, such that 0 = ∑_{i=1}^r λ_i s^i + λ_{r+1} x. Since S is independent, λ_{r+1} ≠ 0. Therefore

  x = ∑_{i=1}^r ( −λ_i / λ_{r+1} ) s^i,

hence x ∈ span(S).

2. Since dim(X) = r, there is an independent set S ⊆ X with |S| = r. By (1), S spans X, hence S is a basis for X. That every basis has r vectors follows from Theorem 6.

3. If |S| = r, simply set Ŝ = S. Otherwise, |S| = t < r. Since S is independent, I know from (1) that S does not span X. Therefore choose any x^1 ∈ X \ span(S). Form the set T^1 = {s^1, ..., s^t, x^1}. To see that this set is independent, consider any point (λ_1, ..., λ_t, ρ) ∈ R^{t+1} such that

  0 = ∑_{i=1}^t λ_i s^i + ρ x^1.

If ρ ≠ 0 then x^1 ∈ span(S). Since I have assumed x^1 ∉ span(S), ρ = 0. Since S is independent, this implies λ_i = 0 for all i. Hence T^1 is independent. Continuing in this way, I construct T^{r−t}, which is independent, has r vectors, and hence, by (1), is a basis for X. ∎

2 Linear Maps.

Definition 7. Let X and Y be vector spaces. L : X → Y is a linear map iff both of the following conditions hold.

1. For any x, x̂ ∈ X, L(x + x̂) = L(x) + L(x̂).
2. For any x ∈ X, α ∈ R, L(αx) = αL(x).

Remark 2. "Map" is just another word for "function." The word "map" is used most frequently when the target space is something other than R.

Choosing α = 0 in part (2) of the definition implies that, for any linear map, L(0) = 0. For example, suppose X = Y = R. Then L(x) = ax is a linear map for any a ∈ R. However, B(x) = ax + b, b ≠ 0, is not a linear map. Rather, it is called an affine map.

Definition 8. Let X and Y be vector spaces. The kernel or null space of a linear map L : X → Y, denoted K(L), is the zero set of L. That is,

  K(L) = L^{−1}(0) = {x ∈ X : L(x) = 0}.

The kernel plays a central role in much of the analysis to follow.

Theorem 9. Let X and Y be vector spaces. Let L : X → Y be linear. Then L(X) is a vector subspace of Y and K(L) is a vector subspace of X.

Proof. Consider any y, ŷ ∈ L(X). Since y ∈ L(X), there is an x ∈ X such that L(x) = y. Similarly, there is an x̂ ∈ X such that L(x̂) = ŷ. Then y + ŷ = L(x) + L(x̂) = L(x + x̂). Since x + x̂ ∈ X, this implies that y + ŷ ∈ L(X). Consider any y ∈ L(X) and any α ∈ R. Since y ∈ L(X), there is an x ∈ X such that L(x) = y. Then αy = αL(x) = L(αx). Since αx ∈ X, this implies that αy ∈ L(X). The proof for K(L) is analogous, so I omit it. ∎

Theorem 10. Let X and Y be vector spaces. Let L : X → Y be linear. Then L is 1-1 iff K(L) = {0}.

Proof. Consider any x ∈ X, x ≠ 0. Since L(0) = 0 and since L is 1-1, L(x) ≠ 0. Therefore x ∉ K(L), which establishes that K(L) = {0}.

Conversely, consider any x, x̂ ∈ X, x ≠ x̂. Then x − x̂ ∉ {0} = K(L), hence L(x − x̂) ≠ 0, hence L(x) ≠ L(x̂), which establishes that L is 1-1. ∎

Theorem 11. Let X and Y be vector spaces. Let L : X → Y be linear. Then L is onto iff dim(L(X)) = dim(Y).

Proof. The "only if" direction is trivial. Conversely, let dim(Y) = r. Let V be a basis for L(X). Since dim(L(X)) = dim(Y), |V| = r. Since V ⊆ Y and V is independent, Theorem 8 part 1 implies that V is a basis for Y, hence L(X) = Y. Hence L is onto. ∎

3 Matrices.

A linear map L : R^N → R^M has a natural representation in the standard bases of R^N and R^M. I represent vectors, in the relevant standard basis, as columns. Thus x ∈ R^N, x = (x_1, ..., x_N), is represented as the column with entries x_1, ..., x_N. For a standard basis element e^n ∈ R^N, L(e^n) is some vector a^n ∈ R^M,

  a^n = (a_{1n}, ..., a_{Mn}),

likewise represented as a column. Let x be any element of R^N. Then x = ∑_{n=1}^N x_n e^n. Hence

  L(x) = L( ∑_{n=1}^N x_n e^n ) = ∑_{n=1}^N x_n L(e^n) = ∑_{n=1}^N x_n a^n.

It is convenient to represent the set {a^1, ..., a^N} as

an M × N matrix

  A = [ a^1 ... a^N ] = [ a_{11} ... a_{1N} ; ... ; a_{M1} ... a_{MN} ].

Define

  Ax = ( ∑_{n=1}^N a_{1n} x_n , ..., ∑_{n=1}^N a_{Mn} x_n ).

Hence L(x) = Ax. I say that L is represented by the matrix A. Any linear map has a matrix representation, and conversely any matrix represents a linear map.

Theorem 12. Let L : R^N → R^M be linear. Then L is continuous.

Proof. Since L is linear, there is a matrix A such that, for any x ∈ R^N, L(x) = Ax. The claim then follows from the notes on Continuity. ∎

Theorem 12 does not generalize to arbitrary metric vector spaces. Linear functions are not, for example, automatically continuous in infinite dimensional spaces with a pointwise convergence metric.

Definition 9. Let L : R^N → R^M be a linear map. The transpose of L is L′ : R^M → R^N, given by L′(x) = A′x, where the columns of A′ are the rows of A:

  A′ = [ a_{11} ... a_{M1} ; ... ; a_{1N} ... a_{MN} ].

If A is M × N then A′ is N × M.

4 Fundamental Theorem of Linear Algebra

Dimension counting arguments play a central role in applications of linear algebra. The canonical example, discussed in Section 5.4, is the analysis of systems of simultaneous linear equations. The central fact used in this application and many others is the following result, sometimes called the Fundamental Theorem of Linear Algebra.

Theorem 13 (Fundamental Theorem of Linear Algebra). Let X and Y be vector spaces. If L : X → Y is linear then

  dim(K(L)) + dim(L(X)) = dim(X).
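For a matrix map, both the representation recipe of Section 3 (column n of A is L(e^n)) and Theorem 13 can be checked numerically: dim L(R^N) is the rank of A and dim K(L) is the nullity. A sketch assuming numpy; the particular map L is my own example, not one from the notes:

```python
import numpy as np

def L(x):
    # An example linear map L : R^3 -> R^2 (my own choice): it ignores
    # x[2] and its second output is twice its first, so its image is a line.
    return np.array([x[0] + 2.0 * x[1], 2.0 * x[0] + 4.0 * x[1]])

N = 3
# Column n of the representing matrix A is L(e^n).
A = np.column_stack([L(e) for e in np.eye(N)])
x = np.array([1.0, -2.0, 0.5])
assert np.allclose(L(x), A @ x)          # L(x) = Ax for this x

# Theorem 13: dim K(L) + dim L(X) = dim X = N.
rank = np.linalg.matrix_rank(A)          # dim L(R^N), here 1
s = np.linalg.svd(A, compute_uv=False)
nullity = N - int(np.sum(s > 1e-10))     # dim K(L), here 2
assert rank + nullity == N
```

The nullity is computed by counting the singular values that are (numerically) zero; the tolerance 1e-10 is an arbitrary choice for this example.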

Proof. Let dim(K(L)) = t and let W = {w^1, ..., w^t} be a basis for K(L). Let dim(L(X)) = r and let Z = {z^1, ..., z^r} be a basis for L(X). Since z^j ∈ L(X), there is a v^j ∈ X such that L(v^j) = z^j. If I can show that S = {w^1, ..., w^t, v^1, ..., v^r} is a basis for X then dim(X) = t + r, and I am done.

To show that S is independent, suppose 0 = ∑_{i=1}^t λ_i w^i + ∑_{j=1}^r ρ_j v^j. I must show that λ_i = 0 for all i and ρ_j = 0 for all j. Note that

  0 = L(0) = L( ∑_{i=1}^t λ_i w^i + ∑_{j=1}^r ρ_j v^j ) = ∑_{i=1}^t λ_i L(w^i) + ∑_{j=1}^r ρ_j L(v^j) = ∑_{j=1}^r ρ_j z^j

(since w^i ∈ K(L) for each i). Since Z is independent (it is a basis), this implies ρ_j = 0 for all j. Hence 0 = ∑_{i=1}^t λ_i w^i. Since W is independent (it is a basis), λ_i = 0 for all i.

To show that S spans X, let x be any element of X. I must show that x is in the span of S. Since L(x) ∈ L(X), and since Z spans L(X), there is a ρ ∈ R^r such that

  L(x) = ∑_{j=1}^r ρ_j z^j = ∑_{j=1}^r ρ_j L(v^j) = L( ∑_{j=1}^r ρ_j v^j ).

Let v* = ∑_{j=1}^r ρ_j v^j. Then x = (x − v*) + v*. But L(x) = L(v*), so L(x − v*) = 0, or x − v* ∈ K(L). Since W spans K(L), there is a λ ∈ R^t such that x − v* = ∑_{i=1}^t λ_i w^i. Thus,

  x = ∑_{i=1}^t λ_i w^i + ∑_{j=1}^r ρ_j v^j.

Thus x ∈ span(S). ∎

Theorem 14. Let X and Y be vector spaces with dim(X) = dim(Y). Let L : X → Y be linear. Then L is 1-1 iff L is onto.

Proof. If L is 1-1 then K(L) = {0}. From Theorem 13, dim(L(X)) = dim(X) = dim(Y), which, by Theorem 11, implies that L is onto. Similarly, if L is onto then dim(L(X)) = dim(Y) = dim(X), which implies K(L) = {0}, hence, by Theorem 10, L is 1-1. ∎

5 The Structure of Linear Maps.

Theorem 13 implies that linear maps must have a very particular structure, formalized in Theorem 15 below. To improve the flow of the narrative, I have divided this section into subsections. Section 5.1 gives the basic definition and results. Section 5.2 reinterprets Theorem 15 in terms of matrices and provides a concrete illustration. Section 5.3 discusses the interpretation of Theorem 15. Section 5.4 applies this machinery to the analysis of systems of simultaneous linear equations. All proofs are collected in Section 5.5.

5.1 The basic result.

Consider any vector space X and any P, Q ⊆ X. Define

  P + Q = {x ∈ X : ∃ p ∈ P, q ∈ Q such that x = p + q}.

Definition 10. Let P and Q be vector subspaces of a vector space X. P and Q are orthogonal iff p · q = 0 for any p ∈ P, q ∈ Q. P and Q are orthogonal complements iff they are orthogonal and P + Q = X.

If P and Q are orthogonal complements, I refer to P and Q as a decomposition of X. Similarly, if x = p + q with p ∈ P and q ∈ Q, I refer to p and q as a decomposition of x.

Example 6. For X = R^2, the horizontal and vertical axes are orthogonal complements. So are the space P spanned by (1, 1) and the space Q spanned by (1, −1).

The main result of this section is the following theorem.

Theorem 15. Let X and Y be vector spaces and let L : X → Y be linear.

1. L′(Y) and K(L) are orthogonal complements. Likewise, L(X) and K(L′) are orthogonal complements.
2. dim(L′(Y)) = dim(L(X)). Moreover, L maps the set L′(Y) 1-1 onto the set L(X).
3. For any y ∈ L(X), there is an x^y ∈ L′(Y) such that L^{−1}(y) = K(L) + {x^y}.

5.2 An application to matrices.

Let A be any M × N matrix and let L : R^N → R^M be the linear map represented by A. L(R^N) is the vector subspace of R^M spanned by the columns of A. Accordingly, it is referred to as the column space of A. Similarly, L′(R^M) is the space spanned by the rows of A and is, accordingly, referred to as the row space of A. The column rank of A is the number of linearly independent columns of A, which equals dim(L(R^N)). The row rank of A is the number of linearly independent rows of A, which equals dim(L′(R^M)). It follows from (2) of Theorem 15 that for any matrix A, the row rank equals the column rank, which I henceforth refer to simply as the rank of A, written rank(A).

By way of illustration, let X = Y = R^2. Suppose

  A = [ 0 2 ; 0 1 ]

and let L be the corresponding linear map. Note that

  A′ = [ 0 0 ; 2 1 ].

The column space, L(R^2), is one dimensional (hence rank(A) = 1) and is spanned by (2, 1). The row space, L′(R^2), is likewise one dimensional and is spanned by (0, 2). One can verify that the kernel K(L) is one dimensional and is spanned by (1, 0). Finally, one can verify that K(L′) is one dimensional and is spanned by (−1, 2). The various spaces are illustrated in Figure 1.

If the linear map L : R^N → R^N is invertible then let A^{−1} be the matrix representation of L^{−1}. Linear algebra textbooks provide effective procedures for computing

Figure 1: An illustration of Theorem 15, showing L′(R^2), K(L′), L(R^2), and K(L).

A^{−1} explicitly. For the moment, I wish only to note that a general function L is invertible if and only if it is 1-1 and onto. By Theorem 14, if L is a linear function then it suffices to check that L is onto, which means dim(L(R^N)) = N. Thus A is invertible if and only if the rank of A is N, a condition for invertibility familiar from elementary linear algebra.

5.3 The interpretation of Theorem 15.

Theorem 15 implies that the behavior of a linear map L : X → Y is characterized by, first, the decomposition of X into the orthogonal complements K(L) and L′(Y) and, second, by the behavior of L on L′(Y). More specifically, for any x^k ∈ K(L), consider L′(Y) + {x^k}, which is a copy of L′(Y) shifted parallel to L′(Y) by the vector x^k. See Figure 2 for an illustration using the example of Section 5.2. For any x ∈ L′(Y), L(x + x^k) = L(x). Thus, X can be chopped up into parallel copies of L′(Y), and the behavior of L on each copy is effectively the same as it is on L′(Y). In particular, L maps each such copy 1-1 onto L(X) (by (2) of Theorem 15).

One can also view X as being chopped up into the preimage sets of L, that is, sets of the form L^{−1}(y) for each y ∈ L(X). Of course, L^{−1}(0) = K(L). Part (3) of Theorem 15 states that every preimage set of L is simply a copy of K(L), translated by some vector in L′(Y). This is illustrated in Figure 3.

Translates of vector spaces are called linear manifolds. Thus L^{−1}(y) is a linear manifold for each y ∈ L(X). One can unambiguously define the dimension of L^{−1}(y) to be the same as the dimension of K(L), namely dim(X) − dim(L(X)).

A linear manifold in R^N of dimension N − 1 is called a plane. For N ≥ 4, one

Figure 2: For each x^k ∈ K(L), L maps L′(R^2) + {x^k} 1-1 onto L(R^2).

Figure 3: For each y ∈ L(R^2), there is an x^y such that L^{−1}(y) = K(L) + {x^y}.

often sees the word hyperplane rather than plane, but the word plane is still correct.

In applications, planes often arise as the preimage of a linear map. Specifically, suppose that L : R^N → R is linear and onto. Then for any y ∈ R, L^{−1}(y) has dimension equal to dim(K(L)) = dim(R^N) − dim(L(R^N)) = N − 1, and is thus a plane. Note that L is represented by a 1 × N matrix of the form

  A = [ a_1 ... a_N ].

The transpose of A is the column A′ = (a_1, ..., a_N), which can be viewed as a vector in R^N, call it v. Thus A′ = v and the canonical plane, namely K(L), is the set of vectors x ∈ R^N that are orthogonal to v.

5.4 Application: simultaneous equations, kernels, and planes.

A set of M linear equations in N unknowns can be written in the form Ax = y, where A is M × N and the ith row of A gives the coefficients for the ith equation. Note that x ∈ R^N while y ∈ R^M. This sort of problem arises frequently in economic applications. Geometrically, Ax = y has a solution iff y lies in the column space of A, in which case x is the coordinate representation for y in terms of the columns of A. One is interested in knowing whether any solutions x exist, and if so, how many.

Is there any solution to Ax = y? The answer is yes if and only if y is in the column space of A. A sufficient condition for y to be in the column space of A is rank(A) = M, a necessary condition for which is that N ≥ M (at least as many unknowns as equations). If M > N (more equations than unknowns) then Ax = y is said to be overdetermined. If M > rank(A) then Ax = y typically has no solutions; the notion of "typically" can be formalized, but I do not do so here.

How many solutions are there to Ax = y? The set of solutions is L^{−1}(y). Assume that this set is not empty. Then L^{−1}(y) is a linear manifold of dimension equal to the dimension of the kernel. By the Fundamental Theorem of Linear Algebra, this dimension is N − dim(L(R^N)) = N − rank(A). The solution is unique iff this dimension is zero, iff L is 1-1, iff rank(A) = N, a necessary condition for which is that M ≥ N (at least as many equations as unknowns). If N > M (more unknowns than equations) then Ax = y is said to be underdetermined.

In summary, if M > rank(A) = N then typically there is no solution to Ax = y, but if a solution exists then it is unique. If N > rank(A) then either there is no solution or there is a continuum of solutions; in the latter case L^{−1}(y) is a linear manifold of positive dimension. If N = M = rank(A) then there is a unique solution.
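These cases can be combined into a small numerical classifier: y lies in the column space of A iff appending y as an extra column does not raise the rank, and a solution, when one exists, is unique iff rank(A) = N. A sketch, assuming numpy (the helper name is mine):

```python
import numpy as np

def solution_count(A, y, tol=1e-10):
    """Classify the solution set of Ax = y: 'none', 'unique', or 'continuum'."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.linalg.matrix_rank(A, tol=tol)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, y]), tol=tol)
    if r_aug > r:
        return "none"          # y is not in the column space of A
    N = A.shape[1]
    # Solutions form a linear manifold of dimension N - rank(A).
    return "unique" if r == N else "continuum"

A = np.array([[1.0, 1.0], [1.0, -1.0]])       # rank 2 = N
assert solution_count(A, [2.0, 0.0]) == "unique"

B = np.array([[1.0, 1.0], [2.0, 2.0]])        # rank 1 < N
assert solution_count(B, [1.0, 3.0]) == "none"       # y off the column space
assert solution_count(B, [1.0, 2.0]) == "continuum"  # a line of solutions
```

The `tol` parameter matters in floating point: rank is decided by comparing singular values against a threshold, so nearly dependent rows can flip the classification.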

The preceding analysis was in terms of the columns of A. One can also analyze Ax = y in terms of the rows of A. Suppose M = 1 and that A has full rank, namely 1. Then for any y ∈ R, L^{−1}(y) is an (N − 1)-dimensional linear manifold, that is, a plane. If M = 2 and A has rank 2 then for any y ∈ R^2, L^{−1}(y) is the intersection of two planes, one corresponding to the first equation (i.e., the first row of A) and the other corresponding to the second equation. From Theorem 13 and Theorem 15, this intersection must be of dimension N − 2. Thus, having two equations rather than one drops the dimension of L^{−1}(y) by one.

By way of example, suppose N = 3, so that each plane is two-dimensional. Then their intersection is one-dimensional (a line). The only thing that might possibly go wrong is if the two planes are parallel, in which case there is no intersection at all unless the planes exactly coincide. But the planes cannot be parallel if the two rows of A are independent, as must be the case if the rank is two. This is intuitive and it is confirmed by Theorem 13 and Theorem 15.

And similarly if M = 3. L^{−1}(y) is now the intersection of 3 planes, and Theorem 13 and Theorem 15 tell us that if the rank of A is three then the dimension of this intersection is N − 3. Again, adding another equation drops the dimension of L^{−1}(y) by 1. And so on for M = 4, 5, ... until M = N, at which point Theorem 13 and Theorem 15 imply that if the rank of A is N then L^{−1}(y) has dimension zero, meaning that the intersection is a singleton. If I try to add one more equation, so that M = N + 1 > N, then it is no longer possible for A to have rank M. In a sense that can be formalized, although I do not do so, the intersection of M > N planes in R^N is typically empty.

This sort of dimension counting applies more generally.
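The N = 3 case above can be checked numerically for planes through the origin: each plane is the kernel of one row, so the intersection is the kernel of the stacked rows, of dimension N − rank. A sketch, assuming numpy (the two planes are my own example):

```python
import numpy as np

N = 3
# Two planes through the origin in R^3, each the kernel of one row of A.
A = np.array([[1.0, 0.0, 0.0],    # the plane x_1 = 0
              [0.0, 1.0, 0.0]])   # the plane x_2 = 0

# Their intersection is the kernel of the stacked rows, and by the
# Fundamental Theorem its dimension is N - rank(A) = 3 - 2 = 1.
dim_intersection = N - np.linalg.matrix_rank(A)
assert dim_intersection == 1      # the planes meet in a line (the x_3 axis)
```

Replacing the second row by a multiple of the first (parallel, coinciding planes) drops the rank to 1 and raises the intersection dimension to 2, matching the degenerate case discussed above.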
In general, given two linear manifolds in R^N of dimensions c and d, if c + d ≥ N then a typical intersection (and I won't formalize "typical," although a formalization is possible) has dimension c + d − N. This follows from the fact that any linear manifold can be described as the preimage of a linear map. In particular, one can easily show that for any linear manifold in R^N, if the manifold has dimension c then there is a linear map L from R^N onto R^{N−c} such that the linear manifold is a preimage of L. (Note that the preimage of such a map has dimension N − (N − c) = c, as desired.) The manifold can, in turn, be viewed as the intersection of N − c planes in R^N, generated by the N − c rows in the matrix representation of L. The intersection of two manifolds of dimension c and d is thus equivalent to the intersection of the planes generated by (N − c) + (N − d) rows, which has dimension N − (2N − c − d) = c + d − N, provided all rows are independent. For example, the intersection of two planes (each of dimension N − 1) has dimension (N − 1) + (N − 1) − N = N − 2. If c + d < N then typically the intersection of the manifolds is empty.

5.5 Proofs.

I establish Theorem 15 by means of a series of lemmas. The first of these establishes the orthogonality half of part (1) of Theorem 15.

Lemma 1. Let X and Y be vector spaces and let L : X → Y be linear. Then L′(Y) and K(L) are orthogonal. Similarly, L(X) and K(L′) are orthogonal.

Proof. Let A represent L, hence A′ represents L′. Consider any x ∈ K(L). Then Ax = 0. This means that x is orthogonal to each row of A, which means that x is orthogonal to each column of A′. Since L′(Y) is spanned by the columns of A′, Ax = 0 implies that x is orthogonal to every element of L′(Y), as was to be shown. Similarly for L(X) and K(L′). ∎

To show that K(L) and L′(Y) are in fact orthogonal complements, I need to show that the union of bases for K(L) and L′(Y) spans all of X. As a first step in doing so, I establish the following general fact about orthogonal spaces.

Lemma 2. Let P and Q be vector subspaces of a vector space X. If P and Q are orthogonal then the union of any basis for P and any basis for Q is independent.

Proof. Let V = {v^1, ..., v^r} be any basis for P and let W = {w^1, ..., w^t} be any basis for Q. Suppose that

  0 = ∑_{i=1}^r λ_i v^i + ∑_{j=1}^t ρ_j w^j.

Let p = ∑_{i=1}^r λ_i v^i ∈ P and let q = ∑_{j=1}^t ρ_j w^j ∈ Q. Then p + q = 0, hence

  0 = (p + q) · (p + q) = p · p + 2 p · q + q · q = p · p + q · q

(since, by assumption, p and q are orthogonal). Since p · p ≥ 0, with equality only if p = 0, and similarly for q · q, it follows that p = q = 0. But this implies, since V and W are bases, that λ_i = 0 for all i and ρ_j = 0 for all j, as was to be shown. ∎

The following is then immediate.

Lemma 3. Let P and Q be orthogonal vector subspaces of a vector space X. Then dim(P) + dim(Q) ≤ dim(X).

To prove part (1) of Theorem 15, it thus remains to show that dim(K(L)) + dim(L′(Y)) = dim(X). As a step toward establishing this, I first establish the first half of part (2) of Theorem 15.

Lemma 4. Let X and Y be vector spaces and let L : X → Y be linear. Then dim(L(X)) = dim(L′(Y)).

Proof. From Theorem 13,

  dim(X) = dim(L(X)) + dim(K(L))      (1)
  dim(Y) = dim(L′(Y)) + dim(K(L′)).   (2)

From Lemma 3,

  dim(X) ≥ dim(L′(Y)) + dim(K(L))     (3)
  dim(Y) ≥ dim(L(X)) + dim(K(L′)).    (4)

Equation (1) and inequality (3) together imply dim(L(X)) ≥ dim(L′(Y)). Equation (2) and inequality (4) together imply dim(L′(Y)) ≥ dim(L(X)). Combining, these imply

  dim(L(X)) = dim(L′(Y)),

as was to be shown. ∎

Proof of part (1) of Theorem 15. Orthogonality was established in Lemma 1. In view of Lemma 2, to show that K(L) and L′(Y) are orthogonal complements, it suffices to show that

  dim(K(L)) + dim(L′(Y)) = dim(X).

But by Theorem 13,

  dim(K(L)) + dim(L(X)) = dim(X),

and by Lemma 4, dim(L′(Y)) = dim(L(X)), and so the claim follows. The proof that K(L′) and L(X) are orthogonal complements is analogous. ∎

Proof of part (2) of Theorem 15. The first half of part (2) was established in Lemma 4. Define L̂ : L′(Y) → L(X) by L̂(x) = L(x) for any x ∈ L′(Y). I must show that L̂ is 1-1 and onto. By Theorem 14, it suffices to show that L̂ is 1-1. Consider any x, x* ∈ L′(Y). Suppose that L(x) = L(x*). Then L̂(x − x*) = 0. Since L′(Y) is a vector space, x − x* ∈ L′(Y). Since 0 = L̂(x − x*) = L(x − x*), x − x* ∈ K(L). But L′(Y) and K(L) are orthogonal, hence L′(Y) ∩ K(L) = {0}. (The latter follows from Lemma 1. More directly, suppose P and Q are orthogonal and let p ∈ P ∩ Q. Then p · p = 0, which implies p = 0.) Therefore, x − x* = 0, or x = x*. Since x and x* were arbitrary, this establishes that L̂ is 1-1 and hence also onto. ∎

Proof of part (3) of Theorem 15. Consider any y ∈ L(X). Since L maps L′(Y) 1-1 onto L(X), there is a unique element of L′(Y), call it x^y, such that L(x^y) = y. Consider any x ∈ L^{−1}(y). I must show that there is an x^k ∈ K(L) such that x = x^y + x^k. Since K(L) and L′(Y) are orthogonal complements, there is an x* ∈ L′(Y) and an x̄ ∈ K(L) such that x = x* + x̄. But y = L(x) = L(x* + x̄) = L(x*) + L(x̄) = L(x*), since L(x̄) = 0. It follows that x* = x^y, and hence x^k = x̄. ∎
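As a closing check, the conclusions of Theorem 15 can be verified numerically for the 2 × 2 example of Section 5.2, using the spanning vectors named there. A sketch, assuming numpy:

```python
import numpy as np

# The example of Section 5.2.
A = np.array([[0.0, 2.0],
              [0.0, 1.0]])

# Part 2 of Theorem 15: row rank equals column rank; here rank(A) = 1.
assert np.linalg.matrix_rank(A) == 1
assert np.linalg.matrix_rank(A.T) == 1

k_L  = np.array([1.0, 0.0])    # spans K(L):  A k_L = 0
row  = np.array([0.0, 2.0])    # spans L'(R^2), the row space
col  = np.array([2.0, 1.0])    # spans L(R^2), the column space
k_Lt = np.array([-1.0, 2.0])   # spans K(L'): A' k_Lt = 0

assert np.allclose(A @ k_L, 0.0)
assert np.allclose(A.T @ k_Lt, 0.0)

# Part 1: K(L) is orthogonal to L'(R^2), and K(L') is orthogonal to L(R^2).
assert np.dot(k_L, row) == 0.0
assert np.dot(k_Lt, col) == 0.0
```

Each pair of orthogonal one-dimensional subspaces here sums to R^2, so the assertions exhibit the two orthogonal-complement decompositions of part (1).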


More information

T ( a i x i ) = a i T (x i ).

T ( a i x i ) = a i T (x i ). Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)

More information

( ) which must be a vector

( ) which must be a vector MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

MATH 304 Linear Algebra Lecture 16: Basis and dimension.

MATH 304 Linear Algebra Lecture 16: Basis and dimension. MATH 304 Linear Algebra Lecture 16: Basis and dimension. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis. Equivalently, a subset S V is a basis for

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

Sec 4.1 Vector Spaces and Subspaces

Sec 4.1 Vector Spaces and Subspaces Sec 4. Vector Spaces and Subspaces Motivation Let S be the set of all solutions to the differential equation y + y =. Let T be the set of all 2 3 matrices with real entries. These two sets share many common

More information

Matrix Representations of Linear Transformations and Changes of Coordinates

Matrix Representations of Linear Transformations and Changes of Coordinates Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under

More information

MATH 240 Fall, Chapter 1: Linear Equations and Matrices

MATH 240 Fall, Chapter 1: Linear Equations and Matrices MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS

More information

Linear Span and Bases

Linear Span and Bases MAT067 University of California, Davis Winter 2007 Linear Span and Bases Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (January 23, 2007) Intuition probably tells you that the plane R 2 is of dimension

More information

Math 215 Exam #1 Practice Problem Solutions

Math 215 Exam #1 Practice Problem Solutions Math 5 Exam # Practice Problem Solutions For each of the following statements, say whether it is true or false If the statement is true, prove it If false, give a counterexample (a) If A is a matrix such

More information

Topic 1: Matrices and Systems of Linear Equations.

Topic 1: Matrices and Systems of Linear Equations. Topic 1: Matrices and Systems of Linear Equations Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc Also, we shall review the method

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Algebra and Linear Algebra

Algebra and Linear Algebra Vectors Coordinate frames 2D implicit curves 2D parametric curves 3D surfaces Algebra and Linear Algebra Algebra: numbers and operations on numbers 2 + 3 = 5 3 7 = 21 Linear algebra: tuples, triples,...

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Definition 12 An alternating bilinear form on a vector space V is a map B : V V F such that

Definition 12 An alternating bilinear form on a vector space V is a map B : V V F such that 4 Exterior algebra 4.1 Lines and 2-vectors The time has come now to develop some new linear algebra in order to handle the space of lines in a projective space P (V ). In the projective plane we have seen

More information

2.5 Gaussian Elimination

2.5 Gaussian Elimination page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

More information

The Classical Linear Regression Model

The Classical Linear Regression Model The Classical Linear Regression Model 1 September 2004 A. A brief review of some basic concepts associated with vector random variables Let y denote an n x l vector of random variables, i.e., y = (y 1,

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Markov Chains, part I

Markov Chains, part I Markov Chains, part I December 8, 2010 1 Introduction A Markov Chain is a sequence of random variables X 0, X 1,, where each X i S, such that P(X i+1 = s i+1 X i = s i, X i 1 = s i 1,, X 0 = s 0 ) = P(X

More information

4.6 Null Space, Column Space, Row Space

4.6 Null Space, Column Space, Row Space NULL SPACE, COLUMN SPACE, ROW SPACE Null Space, Column Space, Row Space In applications of linear algebra, subspaces of R n typically arise in one of two situations: ) as the set of solutions of a linear

More information

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns L. Vandenberghe EE133A (Spring 2016) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows

More information

1 Polyhedra and Linear Programming

1 Polyhedra and Linear Programming CS 598CSC: Combinatorial Optimization Lecture date: January 21, 2009 Instructor: Chandra Chekuri Scribe: Sungjin Im 1 Polyhedra and Linear Programming In this lecture, we will cover some basic material

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

3.4. Solving Simultaneous Linear Equations. Introduction. Prerequisites. Learning Outcomes

3.4. Solving Simultaneous Linear Equations. Introduction. Prerequisites. Learning Outcomes Solving Simultaneous Linear Equations 3.4 Introduction Equations often arise in which there is more than one unknown quantity. When this is the case there will usually be more than one equation involved.

More information

Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0.

Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0. Basic Terminology for Systems of Equations in a Nutshell E L Lady A system of linear equations is something like the following: x 7x +4x =0 5x +8x x = Note that the number of equations is not required

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

Images and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232

Images and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232 Images and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232 In mathematics, there are many different fields of study, including calculus, geometry, algebra and others. Mathematics has been

More information

Matrix Algebra LECTURE 1. Simultaneous Equations Consider a system of m linear equations in n unknowns: y 1 = a 11 x 1 + a 12 x 2 + +a 1n x n,

Matrix Algebra LECTURE 1. Simultaneous Equations Consider a system of m linear equations in n unknowns: y 1 = a 11 x 1 + a 12 x 2 + +a 1n x n, LECTURE 1 Matrix Algebra Simultaneous Equations Consider a system of m linear equations in n unknowns: y 1 a 11 x 1 + a 12 x 2 + +a 1n x n, (1) y 2 a 21 x 1 + a 22 x 2 + +a 2n x n, y m a m1 x 1 +a m2 x

More information

Lecture Notes 2: Matrices as Systems of Linear Equations

Lecture Notes 2: Matrices as Systems of Linear Equations 2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably

More information

We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P. Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

More information

Linear Algebra Notes

Linear Algebra Notes Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

More information

1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form 1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and

More information

MATH 2030: SYSTEMS OF LINEAR EQUATIONS. ax + by + cz = d. )z = e. while these equations are not linear: xy z = 2, x x = 0,

MATH 2030: SYSTEMS OF LINEAR EQUATIONS. ax + by + cz = d. )z = e. while these equations are not linear: xy z = 2, x x = 0, MATH 23: SYSTEMS OF LINEAR EQUATIONS Systems of Linear Equations In the plane R 2 the general form of the equation of a line is ax + by = c and that the general equation of a plane in R 3 will be we call

More information

Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible:

Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible: Cramer s Rule and the Adjugate Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible: Theorem [Cramer s Rule] If A is an invertible

More information

Linear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.

Linear Codes. In the V[n,q] setting, the terms word and vector are interchangeable. Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a sub-vector space of V[n,q]. If the subspace of V[n,q]

More information

MATH1231 Algebra, 2015 Chapter 7: Linear maps

MATH1231 Algebra, 2015 Chapter 7: Linear maps MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter

More information

Weak topologies. David Lecomte. May 23, 2006

Weak topologies. David Lecomte. May 23, 2006 Weak topologies David Lecomte May 3, 006 1 Preliminaries from general topology In this section, we are given a set X, a collection of topological spaces (Y i ) i I and a collection of maps (f i ) i I such

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure UNIT 2 MATRICES - I Matrices - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

More information

Lecture Note on Linear Algebra 15. Dimension and Rank

Lecture Note on Linear Algebra 15. Dimension and Rank Lecture Note on Linear Algebra 15. Dimension and Rank Wei-Shi Zheng, wszheng@ieee.org, 211 November 1, 211 1 What Do You Learn from This Note We still observe the unit vectors we have introduced in Chapter

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

Linear Algebra Test 2 Review by JC McNamara

Linear Algebra Test 2 Review by JC McNamara Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible

More information

Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

More information

By W.E. Diewert. July, Linear programming problems are important for a number of reasons:

By W.E. Diewert. July, Linear programming problems are important for a number of reasons: APPLIED ECONOMICS By W.E. Diewert. July, 3. Chapter : Linear Programming. Introduction The theory of linear programming provides a good introduction to the study of constrained maximization (and minimization)

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

More information

2.5 Complex Eigenvalues

2.5 Complex Eigenvalues 1 25 Complex Eigenvalues Real Canonical Form A semisimple matrix with complex conjugate eigenvalues can be diagonalized using the procedure previously described However, the eigenvectors corresponding

More information

Problems for Advanced Linear Algebra Fall 2012

Problems for Advanced Linear Algebra Fall 2012 Problems for Advanced Linear Algebra Fall 2012 Class will be structured around students presenting complete solutions to the problems in this handout. Please only agree to come to the board when you are

More information

18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2-106. Total: 175 points.

18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2-106. Total: 175 points. 806 Problem Set 4 Solution Due Wednesday, March 2009 at 4 pm in 2-06 Total: 75 points Problem : A is an m n matrix of rank r Suppose there are right-hand-sides b for which A x = b has no solution (a) What

More information

Linear Dependence Tests

Linear Dependence Tests Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

More information

Orthogonal Projections and Orthonormal Bases

Orthogonal Projections and Orthonormal Bases CS 3, HANDOUT -A, 3 November 04 (adjusted on 7 November 04) Orthogonal Projections and Orthonormal Bases (continuation of Handout 07 of 6 September 04) Definition (Orthogonality, length, unit vectors).

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

4: SINGLE-PERIOD MARKET MODELS

4: SINGLE-PERIOD MARKET MODELS 4: SINGLE-PERIOD MARKET MODELS Ben Goldys and Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2015 B. Goldys and M. Rutkowski (USydney) Slides 4: Single-Period Market

More information

Section 3 Sequences and Limits

Section 3 Sequences and Limits Section 3 Sequences and Limits Definition A sequence of real numbers is an infinite ordered list a, a 2, a 3, a 4,... where, for each n N, a n is a real number. We call a n the n-th term of the sequence.

More information

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

More information

Chapter Three. Functions. In this section, we study what is undoubtedly the most fundamental type of relation used in mathematics.

Chapter Three. Functions. In this section, we study what is undoubtedly the most fundamental type of relation used in mathematics. Chapter Three Functions 3.1 INTRODUCTION In this section, we study what is undoubtedly the most fundamental type of relation used in mathematics. Definition 3.1: Given sets X and Y, a function from X to

More information

Row and column operations

Row and column operations Row and column operations It is often very useful to apply row and column operations to a matrix. Let us list what operations we re going to be using. 3 We ll illustrate these using the example matrix

More information

THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

Linear Algebra. A vector space (over R) is an ordered quadruple. such that V is a set; 0 V ; and the following eight axioms hold:

Linear Algebra. A vector space (over R) is an ordered quadruple. such that V is a set; 0 V ; and the following eight axioms hold: Linear Algebra A vector space (over R) is an ordered quadruple (V, 0, α, µ) such that V is a set; 0 V ; and the following eight axioms hold: α : V V V and µ : R V V ; (i) α(α(u, v), w) = α(u, α(v, w)),

More information

What is Linear Programming?

What is Linear Programming? Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Math 113 Homework 3. October 16, 2013

Math 113 Homework 3. October 16, 2013 Math 113 Homework 3 October 16, 2013 This homework is due Thursday October 17th at the start of class. Remember to write clearly, and justify your solutions. Please make sure to put your name on the first

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Orthogonal Projections

Orthogonal Projections Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

4.2. Linear Combinations and Linear Independence that a subspace contains the vectors

4.2. Linear Combinations and Linear Independence that a subspace contains the vectors 4.2. Linear Combinations and Linear Independence If we know that a subspace contains the vectors v 1 = 2 3 and v 2 = 1 1, it must contain other 1 2 vectors as well. For instance, the subspace also contains

More information

f(x) is a singleton set for all x A. If f is a function and f(x) = {y}, we normally write

f(x) is a singleton set for all x A. If f is a function and f(x) = {y}, we normally write Math 525 Chapter 1 Stuff If A and B are sets, then A B = {(x,y) x A, y B} denotes the product set. If S A B, then S is called a relation from A to B or a relation between A and B. If B = A, S A A is called

More information

Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc.

Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc. 2 Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Theorem 8: Let A be a square matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true

More information

Math 54. Selected Solutions for Week Is u in the plane in R 3 spanned by the columns

Math 54. Selected Solutions for Week Is u in the plane in R 3 spanned by the columns Math 5. Selected Solutions for Week 2 Section. (Page 2). Let u = and A = 5 2 6. Is u in the plane in R spanned by the columns of A? (See the figure omitted].) Why or why not? First of all, the plane in

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

3.4. Solving simultaneous linear equations. Introduction. Prerequisites. Learning Outcomes

3.4. Solving simultaneous linear equations. Introduction. Prerequisites. Learning Outcomes Solving simultaneous linear equations 3.4 Introduction Equations often arise in which there is more than one unknown quantity. When this is the case there will usually be more than one equation involved.

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

1 Eigenvalues and Eigenvectors

1 Eigenvalues and Eigenvectors Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

More information

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3 MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

More information

Vector Spaces 4.4 Spanning and Independence

Vector Spaces 4.4 Spanning and Independence Vector Spaces 4.4 and Independence October 18 Goals Discuss two important basic concepts: Define linear combination of vectors. Define Span(S) of a set S of vectors. Define linear Independence of a set

More information

Interpolating Polynomials Handout March 7, 2012

Interpolating Polynomials Handout March 7, 2012 Interpolating Polynomials Handout March 7, 212 Again we work over our favorite field F (such as R, Q, C or F p ) We wish to find a polynomial y = f(x) passing through n specified data points (x 1,y 1 ),

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information