MATH10212 Linear Algebra

Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thomson.

Systems of Linear Equations

Definition. An n-dimensional vector is a row or a column of n numbers (or letters):

[a_1, ..., a_n]   or   a column with entries a_1, ..., a_n.

The set of all such vectors (either only rows, or only columns) with real entries is denoted by R^n. Short notation for vectors varies: a printed in boldface, or with a bar or an arrow over it.

Definition. A linear equation in n variables x_1, x_2, ..., x_n is an equation

a_1 x_1 + a_2 x_2 + ... + a_n x_n = b,

where the coefficients a_1, a_2, ..., a_n and the constant term b are constants.

A solution of a linear equation a_1 x_1 + a_2 x_2 + ... + a_n x_n = b is a vector [s_1, s_2, ..., s_n] whose components satisfy the equation when we substitute x_1 = s_1, x_2 = s_2, ..., x_n = s_n, that is, a_1 s_1 + a_2 s_2 + ... + a_n s_n = b.

A system of linear equations is a finite set of linear equations, each with the same variables. A solution of a system of linear equations is a vector that is simultaneously a solution of each equation in the system. The solution set of a system of linear equations is the set of all solutions of the system.
MATH10212 Linear Algebra: Brief lecture notes

Definition. A general solution of a linear system (or equation) is an expression of the unknowns in terms of certain parameters, which can independently take any values, producing all the solutions of the system (and only solutions).

Two linear systems are equivalent if they have the same solution set. For example,

x + y = 3          x - y = -1
x - y = -1   and       y = 2

are equivalent, since both have the unique solution [1, 2].

We solve a system of linear equations by transforming it into an equivalent one of a triangular or staircase pattern:

x - y - z = -4
    y + 3z = 11
        5z = 15

Using back substitution, we find successively that z = 3, then y = 11 - 3·3 = 2, and x = -4 + y + z = -4 + 2 + 3 = 1. So the unique solution is [1, 2, 3].

However, in many cases the solution is not unique, or may not exist. If it does exist, we need to find all solutions. Another example:

x - y + z = -1
    y + z = 1

Using back substitution: y = 1 - z; x = y - z - 1 = (1 - z) - z - 1 = -2z; thus x = -2t, y = 1 - t, z = t, where t is a parameter; so the solution set is {[-2t, 1 - t, t] : t ∈ R}; infinitely many solutions.

Matrices and Echelon Form

The coefficient matrix of a linear system contains the coefficients of the variables, and the augmented matrix is the coefficient matrix augmented by an extra column containing the constant terms. (At the moment, a matrix for us is simply a table of coefficients; no prior knowledge of matrices is assumed; properties of matrices will be studied later.) For the system

2x +  y -  z = 3
 x       + 5z = 1
 x + 3y - 2z = 0

the coefficient matrix is

2  1 -1
1  0  5
1  3 -2
and the augmented matrix is

2  1 -1 | 3
1  0  5 | 1
1  3 -2 | 0

If a variable is missing, its coefficient 0 is entered in the appropriate position in the matrix. If we denote the coefficient matrix of a linear system by A and the column vector of constant terms by b, then the form of the augmented matrix is [A | b].

Definition. A matrix is in row echelon form (r.e.f.) if:

1. Any rows consisting entirely of zeros are at the bottom.
2. In each nonzero row, the first nonzero entry (called the leading entry) is in a column to the left of any leading entries below it.

Definition. If the augmented matrix of a linear system is in r.e.f., then the leading variables are those corresponding to the leading entries; the free variables are all the remaining variables (possibly, none).

Remark. If the augmented matrix of a linear system is in r.e.f., then it is easy to solve the system (or to see that there are no solutions): namely, there are no solutions if and only if there is a bad row [0, 0, ..., 0 | b] with b ≠ 0 at the bottom. If there is no bad row, then one can solve the system using back substitution: express the leading variable in the equation corresponding to the lowest nonzero row, substitute into all the upper equations, then express the leading variable from the equation of the next row upward, substitute everywhere above, and so on.

Elementary Row Operations

These are what is used to arrive at r.e.f. for solving linear systems (and there are many other applications).

Definition. The following elementary row operations can be performed on a matrix:

1. Interchange two rows.
2. Multiply a row by a nonzero constant.
3. Add a multiple of a row to another row.

Remark. Observe that dividing a row by a nonzero constant is implied in the above definition, since, for example, dividing a row by 2 is the same as multiplying it by 1/2. Similarly, subtracting a multiple of a row from another row is the same as adding a negative multiple of it.

Notation for the three elementary row operations:

1. R_i ↔ R_j means interchange rows i and j.
2. kR_i means multiply row i by k (remember that k ≠ 0!).
3. R_i + kR_j means add k times row j to row i (and replace row i with the result, so only the i-th row is changed).

The process of applying elementary row operations to bring a matrix into row echelon form is called row reduction.

Remarks. E.r.o.s must be applied only one at a time, consecutively. The row echelon form of a matrix is not unique.

Lemma on inverse e.r.o.s. Elementary row operations are reversible by other e.r.o.s: operations 1-3 are undone by R_i ↔ R_j, (1/k)R_i (using k ≠ 0), and R_i - kR_j, respectively.

Fundamental Theorem on E.R.O.s for Linear Systems. Elementary row operations applied to the augmented matrix do not alter the solution set of a linear system. (Thus, two linear systems with row equivalent augmented matrices have the same solution set.)

Proof. Suppose that one system (old) is transformed into a new one by an elementary row operation (of one of the types 1, 2, 3). (Clearly, we only need to consider a single e.r.o.) Let S_1 be the solution set of the old system, and S_2 the solution set of the new one. We need to show that S_1 = S_2.

First, it is almost obvious that S_1 ⊆ S_2, that is, every solution of the old system is a solution of the new one. Indeed, if the operation was of type 1, then clearly
nothing changes, since the solution set does not depend on the order of equations. If it was an e.r.o. of type 2, then only the i-th equation changes: if a_{i1}u_1 + a_{i2}u_2 + ... + a_{in}u_n = b_i (old), then ka_{i1}u_1 + ka_{i2}u_2 + ... + ka_{in}u_n = k(a_{i1}u_1 + a_{i2}u_2 + ... + a_{in}u_n) = kb_i (new), so a solution [u_1, ..., u_n] of the old system remains a solution of the new one. Similarly if it was type 3: again only the i-th equation changes. If [u_1, ..., u_n] was a solution of the old system, then both a_{i1}u_1 + a_{i2}u_2 + ... + a_{in}u_n = b_i and a_{j1}u_1 + a_{j2}u_2 + ... + a_{jn}u_n = b_j, whence by adding k times the second to the first and collecting terms we get

(a_{i1} + ka_{j1})u_1 + (a_{i2} + ka_{j2})u_2 + ... + (a_{in} + ka_{jn})u_n = b_i + kb_j,

so [u_1, ..., u_n] remains a solution of the new system. Thus, in each case, S_1 ⊆ S_2.

But by the Lemma on inverse e.r.o.s each e.r.o. has an inverse, so the old system can also be obtained from the new one by an elementary row operation. Therefore, by the same argument, we also have S_2 ⊆ S_1. Since both S_2 ⊆ S_1 and S_1 ⊆ S_2, we have S_2 = S_1, as required.

This theorem is the theoretical basis of methods of solution by e.r.o.s.

Gaussian Elimination method for solving linear systems

1. Write the augmented matrix of the system of linear equations.
2. Use elementary row operations to reduce the augmented matrix to row echelon form.
3. If there is a bad row, then there are no solutions. If there is no bad row, then solve the equivalent system that corresponds to the row-reduced matrix, expressing the leading variables via the constant terms and free variables using back substitution.

Remark. When performed by hand, step 2 of Gaussian elimination allows quite a bit of choice. Here are some useful guidelines:

(a) Locate the leftmost column that is not all zeros.
(b) Create a leading entry at the top of this column using a type 1 e.r.o. R_1 ↔ R_i. (It helps if you make this leading entry equal to 1, if necessary using a type 2 e.r.o. (1/k)R_1.)
(c) Use the leading entry to create zeros below it: kill off all the entries of this column below the leading entry, using type 3 e.r.o.s R_i - aR_1.
(d) Cover (ignore) the row containing the leading entry, and repeat steps (a), (b), (c) on the remaining submatrix, and so on, each time in (d) ignoring the upper rows with the already created leading entries. Stop when the entire matrix is in row echelon form.
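The guidelines (a)-(d), followed by the bad-row check and back substitution, can be sketched as a short routine. This is a minimal sketch with exact rational arithmetic; the function name `solve` and the choice to return one particular solution (free variables set to 0) are our own, not part of the notes.

```python
from fractions import Fraction

def solve(aug):
    """Gaussian elimination on an augmented matrix [A | b].
    Returns None if the system is inconsistent; otherwise returns one
    particular solution, with all free variables set to 0."""
    M = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(M), len(M[0])
    pivots = []                          # (row, column) of each leading entry
    r = 0
    for c in range(cols - 1):            # last column holds the constant terms
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]  # type 1 e.r.o.: swap into place
        for i in range(r + 1, rows):     # type 3 e.r.o.s: zeros below the pivot
            k = M[i][c] / M[r][c]
            M[i] = [a - k * b for a, b in zip(M[i], M[r])]
        pivots.append((r, c))
        r += 1
    for i in range(r, rows):             # bad row [0 ... 0 | b] with b != 0?
        if M[i][-1] != 0:
            return None
    x = [Fraction(0)] * (cols - 1)       # free variables set to 0
    for i, c in reversed(pivots):        # back substitution
        s = sum(M[i][j] * x[j] for j in range(c + 1, cols - 1))
        x[c] = (M[i][-1] - s) / M[i][c]
    return x

# A staircase system x - y - z = -4, y + 3z = 11, 5z = 15:
print(solve([[1, -1, -1, -4], [0, 1, 3, 11], [0, 0, 5, 15]]))  # x=1, y=2, z=3
# An inconsistent system (a bad row appears):
print(solve([[1, 1, 2], [1, 1, 3]]))                           # None
```

Exact fractions are used instead of floats so that the bad-row test `!= 0` is reliable; with floating point, rounding could turn a genuine zero row into a spurious bad row.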
It is fairly obvious that this procedure always works. There are no solutions if and only if a bad row [0, 0, ..., 0 | b] with b ≠ 0 appears: indeed, then nothing can satisfy the corresponding equation 0x_1 + ... + 0x_n = b ≠ 0.

Variables corresponding to leading entries are leading variables; all other variables are free variables (possibly none, in which case the solution is unique). Clearly, when we back-substitute, free variables can take any values (hence "free"), while leading variables are uniquely expressed in terms of free variables and lower leading variables, which in turn are expressed similarly, and so on; so in the final form of the solution the leading variables are uniquely expressed in terms of free variables only, while the free variables can independently take any values. In other words, free variables play the role of independent parameters, and leading variables are expressed in these parameters.

Gauss-Jordan Elimination method for solving linear systems

We can reduce the augmented matrix even further than in Gaussian elimination.

Definition. A matrix is in reduced row echelon form (r.r.e.f.) if:

1. It is in row echelon form.
2. The leading entry in each nonzero row is a 1 (called a leading 1).
3. Each column containing a leading 1 has zeros everywhere else.

Gauss-Jordan Elimination:

1. Write the augmented matrix of the system of linear equations.
2. Use elementary row operations to reduce the augmented matrix to reduced row echelon form. (In addition to (c) above, also kill off all entries (i.e. create zeros) above each leading 1 in the same column.)
3. If there is a bad row, then there are no solutions. If there is no bad row (i.e. the resulting system is consistent), then express the leading variables in terms of the constant terms and any remaining free variables.

It is a bit more work to reach r.r.e.f., but then expressing the leading variables in terms of the free variables is much easier.

The Gaussian (or Gauss-Jordan) elimination methods yield the following
Corollary. Every consistent linear system over R has either a unique solution (if there are no free variables, so all variables are leading), or infinitely many solutions (when there are free variables, which can take arbitrary values).

(We included "over R" because sometimes linear systems are considered over other number systems, e.g. so-called finite fields, although in this module we work only over R.)

Remark. If one needs a particular solution (that is, just any one solution), simply set the parameters (free variables) to any values (usually the simplest choice is all 0s). E.g., for the general solution {[1 - t + 2u, t, 3 + u, u] : t, u ∈ R}, setting t = u = 0 we get a particular solution [1, 0, 3, 0]; or we can set, say, t = 1 and u = 2, and then we get a particular solution [4, 1, 5, 2], etc.

Definition. The rank of a matrix is the number of nonzero rows in its row echelon form. We denote the rank of a matrix A by rank(A).

Theorem 2.2 (The Rank Theorem). Let A be the coefficient matrix of a system of linear equations with n variables. If the system is consistent, then

number of free variables = n - rank(A).

Homogeneous Systems

Definition. A system of linear equations is called homogeneous if the constant term in each equation is zero. In other words, a homogeneous system has an augmented matrix of the form [A | 0]. E.g., the following system is homogeneous:

 x + 2y - 3z = 0
-x +  y + 2z = 0

Remarks. 1) Every homogeneous system is consistent, as it has (at least) the trivial solution [0, 0, ..., 0].
2) Hence, by the Corollary above, every homogeneous system has either a unique solution (the trivial solution) or infinitely many solutions. The next theorem says that the latter case must occur if the number of variables is greater than the number of equations.

Theorem 2.3. If [A | 0] is a homogeneous system of m linear equations with n variables, where m < n, then the system has infinitely many solutions.
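The Rank Theorem can be checked mechanically: reduce the coefficient matrix to r.e.f. and count the nonzero rows. A minimal sketch follows; the function name is our own, and the example matrix is the coefficient matrix of a small homogeneous system chosen for illustration.

```python
from fractions import Fraction

def rank(A):
    """rank(A) = number of nonzero rows in a row echelon form of A."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]          # swap a pivot into place
        for i in range(r + 1, len(M)):           # create zeros below it
            k = M[i][c] / M[r][c]
            M[i] = [a - k * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Coefficient matrix of a homogeneous system in n = 3 variables:
A = [[1, 2, -3], [-1, 1, 2]]
print(rank(A))   # 2, so n - rank(A) = 1 free variable: infinitely many solutions
```

With 2 equations and 3 variables this also illustrates Theorem 2.3: m < n forces rank(A) ≤ m < n, hence at least one free variable.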
By-product result for matrices

Definition. Matrices A and B are row equivalent if there is a sequence of elementary row operations that converts A into B. For example, the matrices

1  2        1  2
3  4   and  0 -2

are row equivalent (subtract 3 times the first row from the second).

Theorem 2.1. Matrices A and B are row equivalent if and only if they can be reduced to the same row echelon form.
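Theorem 2.1 suggests a mechanical test for row equivalence: reduce both matrices all the way to reduced row echelon form (which, unlike r.e.f., is unique) and compare. A sketch, with a function name and example matrices of our own:

```python
from fractions import Fraction

def rref(A):
    """Reduce a matrix to reduced row echelon form (Gauss-Jordan)."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]       # make the leading entry a 1
        for i in range(rows):                    # zeros above AND below it
            if i != r and M[i][c] != 0:
                k = M[i][c]
                M[i] = [a - k * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[1, 2], [3, 4]]
B = [[1, 2], [0, -2]]        # obtained from A by the e.r.o. R2 - 3R1
print(rref(A) == rref(B))    # True: same r.r.e.f., so row equivalent
```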
Spanning Sets, Linear (In)Dependence, Connections with Linear Systems

Linear Combinations, Spans

Recall that the sum of two vectors of the same length is

[a_1, a_2, ..., a_n] + [b_1, b_2, ..., b_n] = [a_1 + b_1, a_2 + b_2, ..., a_n + b_n],

and multiplication by a scalar k ∈ R is

k[a_1, a_2, ..., a_n] = [ka_1, ka_2, ..., ka_n]

(and likewise for columns).

Definition. A linear combination of vectors v_1, v_2, ..., v_k ∈ R^n with coefficients c_1, ..., c_k ∈ R is c_1v_1 + c_2v_2 + ... + c_kv_k.

Theorem 2.4. A system of linear equations with augmented matrix [A | b] is consistent if and only if b is a linear combination of the columns of A.

Method for deciding if a vector b is a linear combination of vectors a_1, ..., a_k (of course, all vectors must be of the same length): form the linear system whose augmented matrix has columns a_1, ..., a_k, b (the unknowns of this system are the sought coefficients). If it is consistent, then b is a linear combination of the vectors a_1, ..., a_k; if inconsistent, it is not. If one needs to express b as a linear combination of a_1, ..., a_k, just produce some particular solution, which gives the required coefficients.

We will often be interested in the collection of all linear combinations of a given set of vectors.

Definition. If S = {v_1, v_2, ..., v_k} is a set of vectors in R^n, then the set of all linear combinations of v_1, v_2, ..., v_k is called the span of v_1, v_2, ..., v_k and is denoted by span(v_1, v_2, ..., v_k) or span(S). Thus,

span(v_1, v_2, ..., v_k) = {c_1v_1 + c_2v_2 + ... + c_kv_k : c_i ∈ R}.
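The method just described amounts to a consistency check on the augmented matrix [a_1 ... a_k | b]: reduce it and look for a bad row. A minimal sketch with exact fractions; the function name and example vectors are our own.

```python
from fractions import Fraction

def is_combination(vectors, b):
    """Is b a linear combination of the given vectors (cf. Theorem 2.4)?
    Builds the augmented matrix whose columns are the vectors and b,
    reduces it, and reports whether a bad row appears."""
    n, k = len(b), len(vectors)
    M = [[Fraction(v[i]) for v in vectors] + [Fraction(b[i])] for i in range(n)]
    r = 0
    for c in range(k):
        pivot = next((i for i in range(r, n) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, n):
            q = M[i][c] / M[r][c]
            M[i] = [a - q * t for a, t in zip(M[i], M[r])]
        r += 1
    # Consistent iff every row below the last pivot has constant term 0:
    return all(M[i][-1] == 0 for i in range(r, n))

a1, a2 = [1, 0, 1], [0, 1, 1]
print(is_combination([a1, a2], [2, 3, 5]))   # True:  2*a1 + 3*a2
print(is_combination([a1, a2], [1, 1, 0]))   # False: a bad row appears
```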
Definition. If span(S) = R^n, then S is called a spanning set for R^n.

Obviously, to ask whether a vector b belongs to the span of vectors v_1, ..., v_k is exactly the same as to ask whether b is a linear combination of the vectors v_1, ..., v_k; see Theorem 2.4 and the method described above.

Linear (in)dependence

Definition. A set of vectors S = {v_1, v_2, ..., v_k} is linearly dependent if there are scalars c_1, c_2, ..., c_k, at least one of which is not zero, such that

c_1v_1 + c_2v_2 + ... + c_kv_k = 0.

A set of vectors that is not linearly dependent is called linearly independent. In other words, vectors v_1, v_2, ..., v_k are linearly independent if the equality c_1v_1 + c_2v_2 + ... + c_kv_k = 0 implies that all the c_i are zero (or: only the trivial linear combination of the v_i is equal to 0).

Remarks. In the definition of linear dependence, the requirement that at least one of the scalars c_1, c_2, ..., c_k must be nonzero allows for the possibility that some may be zero. Vectors u, v and w satisfying 3u + 2v - w = 0 are linearly dependent, and here all of the scalars are nonzero. On the other hand, for instance,

1·[2, 6] + 2·[-1, -3] + 0·[4, 5] = [0, 0],

so [2, 6], [-1, -3] and [4, 5] are linearly dependent, since at least one (in fact, two) of the three scalars 1, 2 and 0 is nonzero. (Note that the actual dependence arises simply from the fact that the first two vectors are multiples of one another.)

Since 0v_1 + 0v_2 + ... + 0v_k = 0 for any vectors v_1, v_2, ..., v_k, linear dependence essentially says that the zero vector can be expressed as a nontrivial linear combination of v_1, v_2, ..., v_k. Thus, linear independence means that the zero vector can be expressed as a linear combination of v_1, v_2, ..., v_k only in the trivial way: c_1v_1 + c_2v_2 + ... + c_kv_k = 0 only if c_1 = 0, c_2 = 0, ..., c_k = 0.
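The definition can be checked directly for a concrete candidate set of coefficients. A small sketch, with vectors and coefficients of our own choosing:

```python
from fractions import Fraction

def is_dependence(coeffs, vectors):
    """Do the coefficients witness linear dependence of the vectors?
    True iff some c_i is nonzero and c_1 v_1 + ... + c_k v_k = 0."""
    n = len(vectors[0])
    combo = [sum(Fraction(c) * v[i] for c, v in zip(coeffs, vectors))
             for i in range(n)]
    return any(c != 0 for c in coeffs) and all(x == 0 for x in combo)

u, v, w = [2, 6], [-1, -3], [4, 5]
print(is_dependence([1, 2, 0], [u, v, w]))   # True: 1u + 2v + 0w = 0
print(is_dependence([0, 0, 0], [u, v, w]))   # False: the trivial combination
```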
Theorem 2.6. Let v_1, v_2, ..., v_m be (column) vectors in R^n and let A be the n × m matrix A = [v_1 v_2 ... v_m] with these vectors as its columns. Then v_1, v_2, ..., v_m are linearly dependent if and only if the homogeneous linear system with augmented matrix [A | 0] has a nontrivial solution.

Proof. v_1, v_2, ..., v_m are linearly dependent if and only if there are scalars c_1, c_2, ..., c_m, not all zero, such that c_1v_1 + c_2v_2 + ... + c_mv_m = 0. By Theorem 2.4, this is equivalent to saying that the system with the augmented matrix [v_1 v_2 ... v_m | 0] has a nontrivial solution.

Method for determining if given vectors v_1, v_2, ..., v_m are linearly dependent: form the homogeneous system as in Theorem 2.6 (the unknowns are the coefficients). Reduce its augmented matrix to r.e.f. If there are no nontrivial solutions (= no free variables), then the vectors are linearly independent. If there are free variables, then there are nontrivial solutions and the vectors are dependent. To find a concrete dependence, find a particular nontrivial solution, which gives the required coefficients; for that, set the free variables to 1, say (not all to 0).

Example. Any set of vectors 0, v_2, ..., v_m containing the zero vector is linearly dependent. For we can find a nontrivial combination of the form

c_1·0 + c_2v_2 + ... + c_mv_m = 0

by setting c_1 = 1 and c_2 = c_3 = ... = c_m = 0.

The relationship between the intuitive notion of dependence and the formal definition is given in the next theorem.

Theorem 2.5. Vectors v_1, v_2, ..., v_m in R^n are linearly dependent if and only if at least one of the vectors can be expressed as a linear combination of the others.

Proof. If one of the vectors, say v_1, is a linear combination of the others, then there are scalars c_2, ..., c_m such that

v_1 = c_2v_2 + ... + c_mv_m.

Rearranging, we obtain

v_1 - c_2v_2 - ... - c_mv_m = 0,
which implies that v_1, v_2, ..., v_m are linearly dependent, since at least one of the scalars (namely, the coefficient 1 of v_1) is nonzero.

Conversely, suppose that v_1, v_2, ..., v_m are linearly dependent. Then there are scalars c_1, c_2, ..., c_m, not all zero, such that

c_1v_1 + c_2v_2 + ... + c_mv_m = 0.

Suppose c_1 ≠ 0. Then

c_1v_1 = -c_2v_2 - ... - c_mv_m,

and we may multiply both sides by 1/c_1 to obtain v_1 as a linear combination of the other vectors:

v_1 = -(c_2/c_1)v_2 - ... - (c_m/c_1)v_m.

Corollary. Two vectors u, v ∈ R^n are linearly dependent if and only if they are proportional. E.g., the vectors [1, 2, 1] and [1, 1, 3] are linearly independent, as they are not proportional. The vectors [1, 2, 1] and [2, 4, 2] are linearly dependent, since they are proportional (with coeff. 2).

Theorem 2.8. Any set of m vectors in R^n is linearly dependent if m > n.

Proof. Let v_1, v_2, ..., v_m be (column) vectors in R^n and let A be the n × m matrix A = [v_1 v_2 ... v_m] with these vectors as its columns. By Theorem 2.6, v_1, v_2, ..., v_m are linearly dependent if and only if the homogeneous linear system with augmented matrix [A | 0] has a nontrivial solution. But, according to Theorem 2.3 (not 2.6; a misprint in the Textbook here), this will always be the case if A has more columns than rows; that is the case here, since the number of columns m is greater than the number of rows n. (Note that here m and n have opposite meanings compared to Theorem 2.3.)

Theorem 2.7. Let v_1, v_2, ..., v_m be (row) vectors in R^n and let A be the m × n matrix

v_1
v_2
...
v_m

with these vectors as its rows. Then v_1, v_2, ..., v_m are linearly dependent if and only if rank(A) < m.

Note that there is no linear system in Theorem 2.7 (although e.r.o.s must be used to reduce A to r.e.f.; then rank(A) = number of nonzero rows of this r.e.f.).
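The homogeneous-system test of Theorem 2.6 reduces linear (in)dependence to counting leading entries: column vectors are dependent exactly when fewer leading entries than unknowns appear, i.e. when a free variable exists. A sketch; the function name and examples are our own.

```python
from fractions import Fraction

def linearly_dependent(vectors):
    """Theorem 2.6 method: put v_1, ..., v_m as the columns of A and
    reduce; the vectors are dependent iff the homogeneous system [A | 0]
    has a free variable, i.e. iff the number of leading entries < m."""
    m, n = len(vectors), len(vectors[0])
    M = [[Fraction(v[i]) for v in vectors] for i in range(n)]  # columns = vectors
    r = 0
    for c in range(m):
        pivot = next((i for i in range(r, n) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, n):
            q = M[i][c] / M[r][c]
            M[i] = [a - q * t for a, t in zip(M[i], M[r])]
        r += 1
    return r < m     # fewer leading entries than unknowns => free variable

print(linearly_dependent([[1, 2, 1], [1, 1, 3]]))   # False: not proportional
print(linearly_dependent([[1, 2, 1], [2, 4, 2]]))   # True:  proportional
```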
Proof. If v_1, v_2, ..., v_m are linearly dependent, then by Theorem 2.5 one of these vectors is equal to a linear combination of the others. Swapping rows by a type 1 e.r.o. if necessary, we can assume that v_m = c_1v_1 + ... + c_{m-1}v_{m-1}. We can now kill off the m-th row by the e.r.o.s R_m - c_1R_1, R_m - c_2R_2, ..., R_m - c_{m-1}R_{m-1}; the resulting matrix will have its m-th row consisting of zeros. Next, we apply e.r.o.s to reduce the submatrix consisting of the upper m - 1 rows to r.e.f. Clearly, together with the zero m-th row it will be an r.e.f. of A, with at most m - 1 nonzero rows. Thus, rank(A) ≤ m - 1.

The converse is assumed without proof. The idea is that if rank(A) ≤ m - 1, then an r.e.f. of A has a zero row at the bottom. Analysing the e.r.o.s that lead from A to this r.e.f., one can show (we assume this without proof) that one of the rows is a linear combination of the others; see the textbook.

Row Method for deciding if vectors v_1, ..., v_m are linearly dependent. Form the matrix A with rows v_i (even if originally you were given columns, just lay them down, rotating by 90° clockwise). Reduce A by e.r.o.s to r.e.f.; the number of nonzero rows in this r.e.f. is rank(A). The vectors are linearly dependent if and only if rank(A) < m. (Again: note that there is no linear system to solve here; there are no unknowns, and it does not matter if there is a bad row.)

Theorem on e.r.o.s and spans. E.r.o.s do not alter the span of the rows of a matrix.

(Again: there is no linear system here, no unknowns.)

Proof. Let v_1, v_2, ..., v_m be the rows of a matrix A, to which we apply e.r.o.s. Clearly, it is sufficient to prove that the span of the rows is not changed by a single e.r.o. Let u_1, u_2, ..., u_m be the rows of the new matrix. By the definition of e.r.o.s, every u_i is a linear combination of the v_j (most rows are even the same). Now, in every linear combination c_1u_1 + c_2u_2 + ... + c_mu_m we can substitute those expressions of the u_i via the v_j.
Expanding brackets and collecting terms, this becomes a linear combination of the v_j. In other words, span(u_1, ..., u_m) ⊆ span(v_1, ..., v_m). By the Lemma on inverse e.r.o.s, the old matrix is also obtained from the new one by the inverse e.r.o. By the same argument, span(v_1, ..., v_m) ⊆ span(u_1, ..., u_m). As a result, span(v_1, ..., v_m) = span(u_1, ..., u_m).
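The Row Method above can be sketched the same way: lay the vectors down as rows, reduce to r.e.f., and count the nonzero rows. The function name and example vectors are our own.

```python
from fractions import Fraction

def rank_of_rows(rows):
    """Row method: reduce the matrix whose rows are the given vectors
    to r.e.f. and count nonzero rows. The vectors are linearly
    dependent iff this rank is less than the number of vectors."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, m):
            q = M[i][c] / M[r][c]
            M[i] = [a - q * t for a, t in zip(M[i], M[r])]
        r += 1
    return r

vs = [[1, 2, 1], [1, 1, 3], [2, 3, 4]]   # third row = sum of the first two
print(rank_of_rows(vs) < len(vs))        # True: rank 2 < 3, so dependent
```

No linear system is solved here, in line with the remark above: only the rank of the row matrix matters.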
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationLinear Algebra Notes
Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note
More informationMATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).
MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0
More information1.5 SOLUTION SETS OF LINEAR SYSTEMS
12 CHAPTER 1 Linear Equations in Linear Algebra 1.5 SOLUTION SETS OF LINEAR SYSTEMS Many of the concepts and computations in linear algebra involve sets of vectors which are visualized geometrically as
More informationMathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 19967 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
More informationUsing row reduction to calculate the inverse and the determinant of a square matrix
Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible
More informationSolution to Homework 2
Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a nonempty
More informationI. GROUPS: BASIC DEFINITIONS AND EXAMPLES
I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called
More informationAbstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).
MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix
More informationMatrix Representations of Linear Transformations and Changes of Coordinates
Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under
More informationMAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =
MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the
More informationArithmetic and Algebra of Matrices
Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationMATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus ndimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationLS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
More informationPractical Guide to the Simplex Method of Linear Programming
Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More informationFactorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More information8 Primes and Modular Arithmetic
8 Primes and Modular Arithmetic 8.1 Primes and Factors Over two millennia ago already, people all over the world were considering the properties of numbers. One of the simplest concepts is prime numbers.
More informationSubspaces of R n LECTURE 7. 1. Subspaces
LECTURE 7 Subspaces of R n Subspaces Definition 7 A subset W of R n is said to be closed under vector addition if for all u, v W, u + v is also in W If rv is in W for all vectors v W and all scalars r
More informationMath 312 Homework 1 Solutions
Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please
More informationCOMP 250 Fall 2012 lecture 2 binary representations Sept. 11, 2012
Binary numbers The reason humans represent numbers using decimal (the ten digits from 0,1,... 9) is that we have ten fingers. There is no other reason than that. There is nothing special otherwise about
More informationSystems of Linear Equations
Chapter 1 Systems of Linear Equations 1.1 Intro. to systems of linear equations Homework: [Textbook, Ex. 13, 15, 41, 47, 49, 51, 65, 73; page 11]. Main points in this section: 1. Definition of Linear
More informationLinear Codes. Chapter 3. 3.1 Basics
Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length
More information5. Linear algebra I: dimension
5. Linear algebra I: dimension 5.1 Some simple results 5.2 Bases and dimension 5.3 Homomorphisms and dimension 1. Some simple results Several observations should be made. Once stated explicitly, the proofs
More informationT ( a i x i ) = a i T (x i ).
Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)
More informationIntroduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra  1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
More informationDirect Methods for Solving Linear Systems. Matrix Factorization
Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationRecall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
More informationLinear Equations in Linear Algebra
1 Linear Equations in Linear Algebra 1.5 SOLUTION SETS OF LINEAR SYSTEMS HOMOGENEOUS LINEAR SYSTEMS A system of linear equations is said to be homogeneous if it can be written in the form A 0, where A
More informationLinear Programming. March 14, 2014
Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1
More informationv w is orthogonal to both v and w. the three vectors v, w and v w form a righthanded set of vectors.
3. Cross product Definition 3.1. Let v and w be two vectors in R 3. The cross product of v and w, denoted v w, is the vector defined as follows: the length of v w is the area of the parallelogram with
More information8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes
Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us
More informationWhat is Linear Programming?
Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to
More informationα = u v. In other words, Orthogonal Projection
Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v
More informationMath 4310 Handout  Quotient Vector Spaces
Math 4310 Handout  Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More informationSection 1.7 22 Continued
Section 1.5 23 A homogeneous equation is always consistent. TRUE  The trivial solution is always a solution. The equation Ax = 0 gives an explicit descriptions of its solution set. FALSE  The equation
More informationMATH 551  APPLIED MATRIX THEORY
MATH 55  APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More informationMAT 242 Test 2 SOLUTIONS, FORM T
MAT 242 Test 2 SOLUTIONS, FORM T 5 3 5 3 3 3 3. Let v =, v 5 2 =, v 3 =, and v 5 4 =. 3 3 7 3 a. [ points] The set { v, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these
More informationGENERATING SETS KEITH CONRAD
GENERATING SETS KEITH CONRAD 1 Introduction In R n, every vector can be written as a unique linear combination of the standard basis e 1,, e n A notion weaker than a basis is a spanning set: a set of vectors
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More informationNotes from February 11
Notes from February 11 Math 130 Course web site: www.courses.fas.harvard.edu/5811 Two lemmas Before proving the theorem which was stated at the end of class on February 8, we begin with two lemmas. The
More information3. Mathematical Induction
3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)
More informationLinear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)
MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of
More informationLecture 4: Partitioned Matrices and Determinants
Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying
More informationLecture L3  Vectors, Matrices and Coordinate Transformations
S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3  Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationMath 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.
Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(
More informationPart 1 Expressions, Equations, and Inequalities: Simplifying and Solving
Section 7 Algebraic Manipulations and Solving Part 1 Expressions, Equations, and Inequalities: Simplifying and Solving Before launching into the mathematics, let s take a moment to talk about the words
More informationSECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA
SECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA This handout presents the second derivative test for a local extrema of a Lagrange multiplier problem. The Section 1 presents a geometric motivation for the
More informationSolution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
More informationLEARNING OBJECTIVES FOR THIS CHAPTER
CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. FiniteDimensional
More informationLinear Equations in Linear Algebra
May, 25 :46 l57ch Sheet number Page number cyan magenta yellow black Linear Equations in Linear Algebra WEB INTRODUCTORY EXAMPLE Linear Models in Economics and Engineering It was late summer in 949. Harvard
More information