MATH10212 Linear Algebra. Brief Lecture Notes: Systems of Linear Equations


Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thomson.

Systems of Linear Equations

Definition. An n-dimensional vector is a row or a column of n numbers (or letters):

[a_1, ..., a_n]   (a row), or the same numbers a_1, ..., a_n written as a column.

The set of all such vectors (either only rows, or only columns) with real entries is denoted by R^n. Short notation for vectors varies: a letter with a bar or an arrow over it, or a boldface letter.

Definition. A linear equation in n variables x_1, x_2, ..., x_n is an equation

a_1 x_1 + a_2 x_2 + ... + a_n x_n = b,

where the coefficients a_1, a_2, ..., a_n and the constant term b are constants.

A solution of a linear equation a_1 x_1 + a_2 x_2 + ... + a_n x_n = b is a vector [s_1, s_2, ..., s_n] whose components satisfy the equation when we substitute x_1 = s_1, x_2 = s_2, ..., x_n = s_n, that is,

a_1 s_1 + a_2 s_2 + ... + a_n s_n = b.

A system of linear equations is a finite set of linear equations, each with the same variables. A solution of a system of linear equations is a vector that is simultaneously a solution of each equation in the system. The solution set of a system of linear equations is the set of all solutions of the system.
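As a computational aside (not part of the original notes), checking whether a given vector is a solution of a system amounts to substituting its components into every equation at once; the NumPy sketch below does this for a small example system chosen only for illustration.

```python
import numpy as np

# Example system (chosen for illustration):
#   x + y = 3
#   x - y = -1
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])   # coefficients
b = np.array([3.0, -1.0])     # constant terms

s = np.array([1.0, 2.0])      # candidate solution [s1, s2]

# s is a solution exactly when substituting it satisfies every equation,
# i.e. when A @ s agrees with b componentwise.
print(np.allclose(A @ s, b))  # True
```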

Definition. A general solution of a linear system (or equation) is an expression of the unknowns in terms of certain parameters that can take, independently, any values, producing all the solutions of the system (and only solutions).

Two linear systems are equivalent if they have the same solution sets. For example,

x + y = 3
x - y = -1

and

x - y = -1
    y = 2

are equivalent, since both have the unique solution [1, 2].

We solve a system of linear equations by transforming it into an equivalent one of a triangular or staircase pattern:

x - y - z = -4
    y + 3z = 11
        5z = 15

Using back substitution, we find successively that z = 3, then y = 11 - 3*3 = 2, and x = -4 + y + z = 1. So the unique solution is [1, 2, 3].

!!! However, in many cases the solution is not unique, or may not exist. If it does exist, we need to find all solutions. Another example:

x - y + z = -1
    y + z = 1

Using back substitution: y = 1 - z; x = y - z - 1 = (1 - z) - z - 1 = -2z; thus x = -2t, y = 1 - t, z = t, where t is a parameter, so the solution set is {[-2t, 1 - t, t] : t ∈ R}: infinitely many solutions.
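Back substitution on a triangular system is easy to mechanise; the short NumPy sketch below (an illustration added to these notes, not from the textbook) solves the triangular example above from the bottom equation upwards.

```python
import numpy as np

def back_substitute(U, c):
    """Solve U x = c for an upper triangular U with nonzero diagonal entries,
    working from the last equation upwards."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known unknowns, then divide by the leading coefficient
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# The triangular example:  x - y - z = -4,  y + 3z = 11,  5z = 15
U = np.array([[1.0, -1.0, -1.0],
              [0.0,  1.0,  3.0],
              [0.0,  0.0,  5.0]])
c = np.array([-4.0, 11.0, 15.0])
print(back_substitute(U, c))   # [1. 2. 3.]
```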

Matrices and Echelon Form

The coefficient matrix of a linear system contains the coefficients of the variables, and the augmented matrix is the coefficient matrix augmented by an extra column containing the constant terms. (At the moment, a matrix for us is simply a table of coefficients; no prior knowledge of matrices is assumed; properties of matrices will be studied later.) For the system

2x + y - z = 3
x      + 5z = 1
x + 3y - 2z = 0

the coefficient matrix is

[ 2  1  -1 ]
[ 1  0   5 ]
[ 1  3  -2 ]

and the augmented matrix is

[ 2  1  -1 | 3 ]
[ 1  0   5 | 1 ]
[ 1  3  -2 | 0 ]

If a variable is missing, its coefficient 0 is entered in the appropriate position in the matrix. If we denote the coefficient matrix of a linear system by A and the column vector of constant terms by b, then the form of the augmented matrix is [A | b].

Definition. A matrix is in row echelon form (r.e.f.) if:

1. Any rows consisting entirely of zeros are at the bottom.

2. In each nonzero row, the first nonzero entry (called the leading entry) is in a column to the left of any leading entries below it.

Definition. If the augmented matrix of a linear system is in r.e.f., then the leading variables are those corresponding to the leading entries; the free variables are all the remaining variables (possibly, none).

Remark. If the augmented matrix of a linear system is in r.e.f., then it is easy to solve the system (or to see that there are no solutions): namely, there are no solutions if and only if there is a "bad row" [0, 0, ..., 0, b] with b ≠ 0 at the bottom. If there is no bad row, then one can solve the system using back substitution: express the leading variable in the equation corresponding to the lowest nonzero row, substitute into all the upper equations, then express the leading variable from the equation of the next row up, substitute everywhere above, and so on.
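For completeness, here is how the coefficient and augmented matrices of the example system can be formed in NumPy (an added illustration; the column of constants is simply appended to the coefficient matrix).

```python
import numpy as np

# System:  2x + y - z = 3,   x + 5z = 1,   x + 3y - 2z = 0
A = np.array([[2.0, 1.0, -1.0],   # coefficient matrix; missing variables get coefficient 0
              [1.0, 0.0,  5.0],
              [1.0, 3.0, -2.0]])
b = np.array([3.0, 1.0, 0.0])     # constant terms

augmented = np.column_stack([A, b])   # the augmented matrix [A | b]
print(augmented)
```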

Elementary Row Operations

These are what is used to arrive at r.e.f. for solving linear systems (and there are many other applications).

Definition. The following elementary row operations can be performed on a matrix:

1. Interchange two rows.

2. Multiply a row by a nonzero constant.

3. Add a multiple of a row to another row.

Remark. Observe that dividing a row by a nonzero constant is implied in the above definition, since, for example, dividing a row by 2 is the same as multiplying it by 1/2. Similarly, subtracting a multiple of a row from another row is the same as adding a negative multiple of a row to another row.

Notation for the three elementary row operations:

1. R_i <-> R_j means interchange rows i and j.

2. kR_i means multiply row i by k (remember that k ≠ 0!).

3. R_i + kR_j means add k times row j to row i (and replace row i with the result, so only the i-th row is changed).

The process of applying elementary row operations to bring a matrix into row echelon form is called row reduction.

Remarks. E.r.o.s must be applied only one at a time, consecutively. The row echelon form of a matrix is not unique.

Lemma on inverse e.r.o.s. Elementary row operations are reversible by other e.r.o.s: operations 1-3 are undone by R_i <-> R_j, (1/k)R_i (using k ≠ 0), and R_i - kR_j, respectively.
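The three operations, and the fact that each is undone by another e.r.o., can be spelled out as small NumPy helpers (an illustrative sketch added to these notes; row indices are 0-based in the code, whereas the notes count rows from 1).

```python
import numpy as np

def swap_rows(M, i, j):        # R_i <-> R_j
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def scale_row(M, i, k):        # k R_i, with k != 0
    assert k != 0
    M = M.copy()
    M[i] = k * M[i]
    return M

def add_multiple(M, i, j, k):  # R_i + k R_j  (only row i changes)
    M = M.copy()
    M[i] = M[i] + k * M[j]
    return M

A = np.array([[2.0, 1.0, -1.0, 3.0],
              [1.0, 0.0,  5.0, 1.0]])

B = add_multiple(A, 1, 0, -0.5)                      # R_2 - (1/2) R_1
print(np.allclose(add_multiple(B, 1, 0, 0.5), A))    # True: the inverse e.r.o. undoes it
```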

Fundamental Theorem on E.R.O.s for Linear Systems. Elementary row operations applied to the augmented matrix do not alter the solution set of a linear system. (Thus, two linear systems with row equivalent augmented matrices have the same solution set.)

Proof. Suppose that one system (old) is transformed into a new one by an elementary row operation (of one of the types 1, 2, 3). (Clearly, we only need to consider one e.r.o.) Let S_1 be the solution set of the old system, and S_2 the solution set of the new one. We need to show that S_1 = S_2.

First, it is almost obvious that S_1 ⊆ S_2, that is, every solution of the old system is a solution of the new one. Indeed, if it was type 1, then clearly nothing changes, since the solution set does not depend on the order of equations. If it was an e.r.o. of type 2, then only the i-th equation changes: if a_i1 u_1 + a_i2 u_2 + ... + a_in u_n = b_i (old), then k a_i1 u_1 + k a_i2 u_2 + ... + k a_in u_n = k(a_i1 u_1 + a_i2 u_2 + ... + a_in u_n) = k b_i (new), so a solution [u_1, ..., u_n] of the old system remains a solution of the new one. Similarly, if it was type 3, only the i-th equation changes: if [u_1, ..., u_n] was a solution of the old system, then both a_i1 u_1 + a_i2 u_2 + ... + a_in u_n = b_i and a_j1 u_1 + a_j2 u_2 + ... + a_jn u_n = b_j, whence by adding k times the second equation to the first and collecting terms we get

(a_i1 + k a_j1) u_1 + (a_i2 + k a_j2) u_2 + ... + (a_in + k a_jn) u_n = b_i + k b_j,

so [u_1, ..., u_n] remains a solution of the new system. Thus, in each case, S_1 ⊆ S_2.

But by the Lemma on inverse e.r.o.s, each e.r.o. has an inverse, so the old system can also be obtained from the new one by an elementary row operation. Therefore, by the same argument, we also have S_2 ⊆ S_1. Since both S_2 ⊆ S_1 and S_1 ⊆ S_2, we have S_2 = S_1, as required.

This theorem is the theoretical basis of methods of solution by e.r.o.s.

Gaussian Elimination method for solving linear systems

1. Write the augmented matrix of the system of linear equations.

2. Use elementary row operations to reduce the augmented matrix to row echelon form.

3. If there is a bad row, then there are no solutions. If there is no bad row, then solve the equivalent system that corresponds to the row-reduced matrix, expressing the leading variables via the constant terms and free variables using back substitution.

Remark. When performed by hand, step 2 of Gaussian elimination allows quite a bit of choice. Here are some useful guidelines:

(a) Locate the leftmost column that is not all zeros.

(b) Create a leading entry at the top of this column using a type 1 e.r.o. R_1 <-> R_i. (It helps if you make this leading entry equal to 1, if necessary using a type 2 e.r.o. (1/k)R_1.)

(c) Use the leading entry to create zeros below it: kill off all the entries of this column below the leading entry, using type 3 e.r.o.s R_i - aR_1.

(d) Cover (ignore) the first row containing the leading entry, and repeat steps (a), (b), (c) on the remaining submatrix ... and so on, every time in (d) ignoring the upper rows with the already created leading entries. Stop when the entire matrix is in row echelon form.
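Guidelines (a)-(d) translate almost line for line into code; the following Python sketch (added for illustration, not a library routine) reduces a matrix to a row echelon form and is applied to the augmented matrix of the example system above.

```python
import numpy as np

def row_echelon(M, tol=1e-12):
    """Reduce M to a row echelon form following steps (a)-(d)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0                                   # first row not yet dealt with
    for c in range(cols):                   # (a) scan columns from the left
        pivot = next((i for i in range(r, rows) if abs(M[i, c]) > tol), None)
        if pivot is None:
            continue                        # nothing nonzero in this column below row r
        M[[r, pivot]] = M[[pivot, r]]       # (b) bring a leading entry to the top
        for i in range(r + 1, rows):        # (c) create zeros below the leading entry
            M[i] -= (M[i, c] / M[r, c]) * M[r]
        r += 1                              # (d) cover this row and repeat
        if r == rows:
            break
    return M

aug = np.array([[2.0, 1.0, -1.0, 3.0],      # augmented matrix of the example system
                [1.0, 0.0,  5.0, 1.0],
                [1.0, 3.0, -2.0, 0.0]])
print(row_echelon(aug))
```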

It is fairly obvious that this procedure always works. There are no solutions if and only if a bad row [0, 0, ..., 0, b] with b ≠ 0 appears: indeed, then nothing can satisfy the corresponding equation 0x_1 + ... + 0x_n = b ≠ 0. Variables corresponding to leading entries are leading variables; all other variables are free variables (possibly there are none, and then the solution is unique). Clearly, when we back-substitute, the free variables can take any values (hence "free"), while the leading variables are uniquely expressed in terms of the free variables and the lower leading variables, which in turn are expressed in the same way; so in fact, in the final form of the solution, the leading variables are uniquely expressed in terms of the free variables only, while the free variables can independently take any values. In other words, the free variables are equal to independent parameters, and the leading variables are expressed in these parameters.

Gauss-Jordan Elimination method for solving linear systems

We can reduce the augmented matrix even further than in Gaussian elimination.

Definition. A matrix is in reduced row echelon form (r.r.e.f.) if:

1. It is in row echelon form.

2. The leading entry in each nonzero row is a 1 (called a leading 1).

3. Each column containing a leading 1 has zeros everywhere else.

Gauss-Jordan Elimination:

1. Write the augmented matrix of the system of linear equations.

2. Use elementary row operations to reduce the augmented matrix to reduced row echelon form. (In addition to (c) above, also kill off all entries, i.e. create zeros, above the leading one in the same column.)

3. If there is a bad row, then there are no solutions. If there is no bad row (i.e. the resulting system is consistent), then express the leading variables in terms of the constant terms and any remaining free variables.

This is a bit more work to reach r.r.e.f., but then expressing the leading variables in terms of the free variables is much easier.
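In practice the reduced row echelon form can be computed exactly with SymPy; the sketch below (an added aside) applies it to the augmented matrix of the running example and also reports the pivot columns, which identify the leading variables.

```python
from sympy import Matrix

# Augmented matrix of the example system, in exact (rational) arithmetic
aug = Matrix([[2, 1, -1, 3],
              [1, 0,  5, 1],
              [1, 3, -2, 0]])

R, pivot_cols = aug.rref()   # reduced row echelon form and the pivot columns
print(R)
print(pivot_cols)            # columns of the leading 1s, i.e. the leading variables
```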

The Gaussian (or Gauss-Jordan) elimination methods yield the following

Corollary. Every consistent linear system over R has either a unique solution (if there are no free variables, so all variables are leading), or infinitely many solutions (when there are free variables, which can take arbitrary values).

(We included "over R" because sometimes linear systems are considered over other number systems, e.g. so-called finite fields, although in this module we work only over R.)

Remark. If one needs a particular solution (that is, just any one solution), simply set the parameters (the free variables) to any values (usually the simplest is to set them to 0s). E.g. for the general solution {[1 - t + 2u, t, 3 + u, u] : t, u ∈ R}, setting t = u = 0 we get a particular solution [1, 0, 3, 0]; or we can set, say, t = 1 and u = 2, and then we get a particular solution [4, 1, 5, 2], etc.

Definition. The rank of a matrix is the number of nonzero rows in its row echelon form. We denote the rank of a matrix A by rank(A).

Theorem 2.2 (The Rank Theorem). Let A be the coefficient matrix of a system of linear equations with n variables. If the system is consistent, then

number of free variables = n - rank(A).

Homogeneous Systems

Definition. A system of linear equations is called homogeneous if the constant term in each equation is zero. In other words, a homogeneous system has an augmented matrix of the form [A | 0]. E.g., the following system is homogeneous:

x + 2y - 3z = 0
x + y + 2z = 0

Remarks. 1) Every homogeneous system is consistent, as it has (at least) the trivial solution [0, 0, ..., 0].

2) Hence, by the Corollary above, every homogeneous system has either a unique solution (the trivial solution) or infinitely many solutions. The next theorem says that the latter case must occur if the number of variables is greater than the number of equations.

Theorem 2.3. If [A | 0] is a homogeneous system of m linear equations with n variables, where m < n, then the system has infinitely many solutions.
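The Rank Theorem and Theorem 2.3 are easy to check numerically; the SymPy sketch below (added for illustration) computes the rank of the homogeneous example above, the resulting number of free variables, and a nontrivial solution.

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous example:  x + 2y - 3z = 0,  x + y + 2z = 0
A = Matrix([[1, 2, -3],
            [1, 1,  2]])

n = A.cols              # number of variables (3)
r = A.rank()            # rank = number of nonzero rows in a row echelon form of A
print(n - r)            # number of free variables, by the Rank Theorem: 1

# m = 2 equations < n = 3 variables, so by Theorem 2.3 there are infinitely
# many solutions; a basis of the solution set of [A | 0]:
print(A.nullspace())    # one nontrivial solution generating all of them
```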

By-product result for matrices

Definition. Matrices A and B are row equivalent if there is a sequence of elementary row operations that converts A into B.

Theorem 2.1. Matrices A and B are row equivalent if and only if they can be reduced to the same row echelon form.
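Theorem 2.1 gives a practical test for row equivalence: reduce both matrices and compare. In code it is convenient to compare the reduced row echelon forms, which (unlike general r.e.f.s) are unique; the matrices below are chosen purely for illustration.

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4]])
B = Matrix([[1, 2],     # obtained from A by the e.r.o.  R_2 - 2 R_1
            [0, 0]])
C = Matrix([[1, 0],
            [0, 1]])

# By Theorem 2.1 (comparing the unique reduced row echelon forms):
print(A.rref()[0] == B.rref()[0])   # True:  A and B are row equivalent
print(A.rref()[0] == C.rref()[0])   # False: A and C are not
```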

Spanning Sets, Linear (In)Dependence, Connections with Linear Systems

Linear Combinations, Spans

Recall that the sum of two vectors of the same length is computed componentwise:

[a_1, a_2, ..., a_n] + [b_1, b_2, ..., b_n] = [a_1 + b_1, a_2 + b_2, ..., a_n + b_n],

and multiplication by a scalar k ∈ R is

k [a_1, a_2, ..., a_n] = [k a_1, k a_2, ..., k a_n]

(and likewise for column vectors).

Definition. A linear combination of vectors v_1, v_2, ..., v_k ∈ R^n with coefficients c_1, ..., c_k ∈ R is

c_1 v_1 + c_2 v_2 + ... + c_k v_k.

Theorem 2.4. A system of linear equations with augmented matrix [A | b] is consistent if and only if b is a linear combination of the columns of A.

Method for deciding if a vector b is a linear combination of vectors a_1, ..., a_k (of course, all vectors must be of the same length): form the linear system whose augmented matrix has columns a_1, ..., a_k, b (the unknowns of this system are the coefficients of the combination). If it is consistent, then b is a linear combination of the vectors a_1, ..., a_k; if it is inconsistent, it is not. If one needs to express b as a linear combination of the vectors a_1, ..., a_k, just produce some particular solution, which gives the required coefficients.

We will often be interested in the collection of all linear combinations of a given set of vectors.

Definition. If S = {v_1, v_2, ..., v_k} is a set of vectors in R^n, then the set of all linear combinations of v_1, v_2, ..., v_k is called the span of v_1, v_2, ..., v_k and is denoted by span(v_1, v_2, ..., v_k) or span(S). Thus,

span(v_1, v_2, ..., v_k) = {c_1 v_1 + c_2 v_2 + ... + c_k v_k : c_i ∈ R}.
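The method just described (set up the system with the given vectors as columns and test consistency) can be run directly in SymPy; the vectors below are chosen only to illustrate it.

```python
from sympy import Matrix, linsolve, symbols

# Is b a linear combination of a1 and a2?
a1 = Matrix([1, 0, 1])
a2 = Matrix([0, 1, 1])
b  = Matrix([2, 3, 5])

A = a1.row_join(a2)                  # matrix whose columns are a1, a2
c1, c2 = symbols('c1 c2')            # the unknown coefficients
sols = linsolve((A, b), c1, c2)
print(sols)                          # {(2, 3)}: b = 2*a1 + 3*a2; an empty set would mean "no"
```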

Definition. If span(S) = R^n, then S is called a spanning set for R^n.

Obviously, to ask whether a vector b belongs to the span of vectors v_1, ..., v_k is exactly the same as to ask whether b is a linear combination of the vectors v_1, ..., v_k; see Theorem 2.4 and the method described above.

Linear (in)dependence

Definition. A set of vectors S = {v_1, v_2, ..., v_k} is linearly dependent if there are scalars c_1, c_2, ..., c_k, at least one of which is not zero, such that

c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0.

A set of vectors that is not linearly dependent is called linearly independent. In other words, the vectors {v_1, v_2, ..., v_k} are linearly independent if the equality c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0 implies that all the c_i are zero (or: only the trivial linear combination of the v_i is equal to 0).

Remarks. In the definition of linear dependence, the requirement that at least one of the scalars c_1, c_2, ..., c_k must be nonzero allows for the possibility that some may be zero. For example, if vectors u, v and w satisfy 3u + 2v - w = 0, they are linearly dependent, and here all of the scalars are nonzero. On the other hand, if 1·v_1 + 2·v_2 + 0·v_3 = 0 (which happens, for instance, whenever v_2 = -(1/2)v_1 and v_3 is arbitrary), then v_1, v_2 and v_3 are linearly dependent, since at least one (in fact, two) of the three scalars 1, 2 and 0 is nonzero. (Note that the actual dependence arises simply from the fact that the first two vectors are multiples of each other.)

Since 0v_1 + 0v_2 + ... + 0v_k = 0 for any vectors v_1, v_2, ..., v_k, linear dependence essentially says that the zero vector can be expressed as a nontrivial linear combination of v_1, v_2, ..., v_k. Thus, linear independence means that the zero vector can be expressed as a linear combination of v_1, v_2, ..., v_k only in the trivial way: c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0 only if c_1 = 0, c_2 = 0, ..., c_k = 0.
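A concrete dependence relation is easy to verify by direct computation; the NumPy check below uses three vectors, chosen only for illustration, that satisfy a relation of the form 3u + 2v - w = 0 as mentioned above.

```python
import numpy as np

# Three vectors (chosen for illustration) with a nontrivial dependence 3u + 2v - w = 0
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
w = 3 * u + 2 * v              # w = [3, 2]

# The combination with scalars 3, 2, -1 (not all zero) gives the zero vector,
# so {u, v, w} is linearly dependent.
print(np.allclose(3 * u + 2 * v - w, np.zeros(2)))   # True
```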

Theorem 2.6. Let v_1, v_2, ..., v_m be (column) vectors in R^n and let A be the n × m matrix

A = [v_1 v_2 ... v_m]

with these vectors as its columns. Then v_1, v_2, ..., v_m are linearly dependent if and only if the homogeneous linear system with augmented matrix [A | 0] has a nontrivial solution.

Proof. v_1, v_2, ..., v_m are linearly dependent if and only if there are scalars c_1, c_2, ..., c_m, not all zero, such that c_1 v_1 + c_2 v_2 + ... + c_m v_m = 0. By Theorem 2.4, this is equivalent to saying that the system with the augmented matrix [v_1 v_2 ... v_m | 0] has a nontrivial solution.

Method for determining if given vectors v_1, v_2, ..., v_m are linearly dependent: form the homogeneous system as in Theorem 2.6 (the unknowns are the coefficients). Reduce its augmented matrix to r.e.f. If there are no nontrivial solutions (= no free variables), then the vectors are linearly independent. If there are free variables, then there are nontrivial solutions and the vectors are dependent. To find a concrete dependence, find a particular nontrivial solution, which gives the required coefficients; for that, set the free variables to 1, say (at any rate, not all to 0).

Example. Any set of vectors 0, v_2, ..., v_m containing the zero vector is linearly dependent, for we can find a nontrivial combination of the form

c_1 0 + c_2 v_2 + ... + c_m v_m = 0

by setting c_1 = 1 and c_2 = c_3 = ... = c_m = 0.
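The column method of Theorem 2.6 corresponds to computing the null space of the matrix whose columns are the given vectors; a short SymPy sketch with vectors chosen for illustration:

```python
from sympy import Matrix

# Matrix whose columns are the vectors to test (chosen for illustration);
# here the third column is the sum of the first two, so we expect dependence.
A = Matrix([[1, 2, 3],
            [0, 1, 1],
            [1, 3, 4]])

deps = A.nullspace()      # nontrivial solutions of the homogeneous system [A | 0]
print(deps != [])         # True: the columns are linearly dependent
print(deps)               # a basis solution, here giving  -v1 - v2 + v3 = 0
```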

The relationship between the intuitive notion of dependence and the formal definition is given in the next theorem.

Theorem 2.5. Vectors v_1, v_2, ..., v_m in R^n are linearly dependent if and only if at least one of the vectors can be expressed as a linear combination of the others.

Proof. If one of the vectors, say v_1, is a linear combination of the others, then there are scalars c_2, ..., c_m such that

v_1 = c_2 v_2 + ... + c_m v_m.

Rearranging, we obtain

v_1 - c_2 v_2 - ... - c_m v_m = 0,

which implies that v_1, v_2, ..., v_m are linearly dependent, since at least one of the scalars (namely, the coefficient 1 of v_1) is nonzero.

Conversely, suppose that v_1, v_2, ..., v_m are linearly dependent. Then there are scalars c_1, c_2, ..., c_m, not all zero, such that

c_1 v_1 + c_2 v_2 + ... + c_m v_m = 0.

Suppose c_1 ≠ 0. Then

c_1 v_1 = -c_2 v_2 - ... - c_m v_m,

and we may multiply both sides by 1/c_1 to obtain v_1 as a linear combination of the other vectors:

v_1 = -(c_2/c_1) v_2 - ... - (c_m/c_1) v_m.

Corollary. Two vectors u, v ∈ R^n are linearly dependent if and only if they are proportional. E.g., the vectors [1, 2, 1] and [1, 1, 3] are linearly independent, as they are not proportional. The vectors [-1, 2, 1] and [2, -4, -2] are linearly dependent, since they are proportional (with coefficient -2).

Theorem 2.8. Any set of m vectors in R^n is linearly dependent if m > n.

Proof. Let v_1, v_2, ..., v_m be (column) vectors in R^n and let A be the n × m matrix

A = [v_1 v_2 ... v_m]

with these vectors as its columns. By Theorem 2.6, v_1, v_2, ..., v_m are linearly dependent if and only if the homogeneous linear system with augmented matrix [A | 0] has a nontrivial solution. But, according to Theorem 2.3 (not 2.6; there is a misprint in the textbook here), this will always be the case if A has more columns than rows, and that is the case here, since the number of columns m is greater than the number of rows n. (Note that here m and n have the opposite meanings compared to Theorem 2.3.)

Theorem 2.7. Let v_1, v_2, ..., v_m be (row) vectors in R^n and let A be the m × n matrix

[ v_1 ]
[ v_2 ]
[ ... ]
[ v_m ]

with these vectors as its rows. Then v_1, v_2, ..., v_m are linearly dependent if and only if rank(A) < m.

Note that there is no linear system in Theorem 2.7 (although e.r.o.s must be used to reduce A to r.e.f.; then rank(A) = the number of nonzero rows of this r.e.f.).

Proof. If v_1, v_2, ..., v_m are linearly dependent, then by Theorem 2.5 one of these vectors is equal to a linear combination of the others. Swapping rows by a type 1 e.r.o. if necessary, we can assume that v_m = c_1 v_1 + ... + c_{m-1} v_{m-1}. We can now kill off the m-th row by the e.r.o.s

R_m - c_1 R_1,  R_m - c_2 R_2,  ...,  R_m - c_{m-1} R_{m-1};

the resulting matrix will have its m-th row consisting of zeros. Next, we apply e.r.o.s to reduce the submatrix consisting of the upper m - 1 rows to r.e.f. Clearly, together with the zero m-th row it will be an r.e.f. of A, with at most m - 1 nonzero rows. Thus, rank(A) ≤ m - 1.

The converse is assumed without proof. The idea is that if rank(A) ≤ m - 1, then the r.e.f. of A has a zero row at the bottom. Analysing the e.r.o.s that lead from A to this r.e.f., one can show (we assume this without proof) that one of the rows is a linear combination of the others; see the corresponding example in the textbook.

Row Method for deciding if vectors v_1, ..., v_m are linearly dependent. Form the matrix A with rows v_i (even if originally you were given columns, just lay them down, i.e. rotate by 90° clockwise). Reduce A by e.r.o.s to r.e.f.; the number of nonzero rows in this r.e.f. equals rank(A). The vectors are linearly dependent if and only if rank(A) < m. (Again: note that there is no linear system to solve here; there are no unknowns, and it does not matter if there is a bad row.)

Theorem on e.r.o.s and spans. E.r.o.s do not alter the span of the rows of a matrix.

(Again: there is no linear system here, no unknowns.)

Proof. Let v_1, v_2, ..., v_m be the rows of a matrix A, to which we apply e.r.o.s. Clearly, it is sufficient to prove that the span of the rows is not changed by a single e.r.o. Let u_1, u_2, ..., u_m be the rows of the new matrix. By the definition of an e.r.o., every u_i is a linear combination of the v_j (most rows are even the same). Now, in every linear combination c_1 u_1 + c_2 u_2 + ... + c_m u_m we can substitute those expressions of the u_i via the v_j. Expanding brackets and collecting terms, this becomes a linear combination of the v_j. In other words, span(v_1, ..., v_m) ⊇ span(u_1, ..., u_m). By the Lemma on inverse e.r.o.s, the old matrix is also obtained from the new one by an elementary row operation. By the same argument, span(v_1, ..., v_m) ⊆ span(u_1, ..., u_m). As a result, span(v_1, ..., v_m) = span(u_1, ..., u_m).
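The row method above amounts to a single rank computation; the SymPy sketch below (vectors chosen for illustration) lays the given vectors down as rows and compares rank(A) with m.

```python
from sympy import Matrix

# Lay the given vectors down as the rows of A (chosen for illustration;
# the third row is the sum of the first two, so we expect dependence).
A = Matrix([[1, 0, 1],
            [2, 1, 3],
            [3, 1, 4]])

m = A.rows
print(A.rank())          # 2 = number of nonzero rows in an r.e.f. of A
print(A.rank() < m)      # True: the three row vectors are linearly dependent
```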
