Solving Systems of Linear Equations




LECTURE 5
Solving Systems of Linear Equations

Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations. In today's lecture I shall show how this matrix machinery can also be used to solve such systems. However, before we embark on solving systems of equations via matrices, let me remind you of what such solutions should look like.

1. The Geometry of Linear Systems

Consider a linear system of m equations in n unknowns:

(5.1)
a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n = b_m

What does the solution set look like? Let's examine this question case by case, in a setting where we can easily visualize the solution sets.

0 equations in 3 unknowns. This would correspond to a situation where you have 3 variables x_1, x_2, x_3 with no relations between them. Being free to choose whatever value we want for each of the 3 variables, it's clear that the solution set is just R^3, the 3-dimensional space of ordered sets of 3 real numbers.

1 equation in 3 unknowns. In this case, use the equation to express one variable, say x_1, in terms of the other variables:

x_1 = (1/a_{11}) ( b_1 - a_{12} x_2 - a_{13} x_3 )

The remaining variables x_2 and x_3 are then unrestricted. Letting these variables range freely over R will then fill out a 2-dimensional plane in R^3. Thus, in the case of 1 equation in 3 unknowns we have a two-dimensional plane as a solution space.

2 equations in 3 unknowns. As in the preceding example, the solution set of each individual equation will be a 2-dimensional plane. The solution set of the pair of equations will be the intersection of these two planes (for points common to both solution sets are points corresponding to solutions of both equations). Here there will be three possibilities:

(1) The two planes do not intersect. In other words, the two planes are parallel but distinct. Since they share no common point, there is no simultaneous solution of both equations.
(2) The intersection of the two planes is a line. This is the generic case.
(3) The two planes coincide. In this case, the two equations must be somehow redundant.

Thus we have either no solution, a 1-dimensional solution space (i.e. a line) or a 2-dimensional solution space.

3 equations in 3 unknowns. In this case, the solution set will correspond to the intersection of the three planes corresponding to the 3 equations. We will have four possibilities:

(1) The three planes share no common point. In this case there will be no solution.
(2) The three planes have one point in common. This will be the generic situation.
(3) The three planes share a single line.
(4) The three planes all coincide.

Thus, we either have no solution, a 0-dimensional solution (i.e., a point), a 1-dimensional solution (i.e. a line) or a 2-dimensional solution.

We now summarize and generalize this discussion as follows.

Theorem 5.1. Consider a linear system of m equations in n unknowns:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n = b_m

The solution set of such a system is either:

(1) The empty set {}; i.e., there is no solution.
(2) A hyperplane of dimension greater than or equal to n - m.

2. Elementary Row Operations

In the preceding lecture we remarked that our new-fangled matrix algebra allows us to represent linear systems such as (5.1) succinctly as matrix equations:

(5.2)
[ a_{11} a_{12} ... a_{1n} ] [ x_1 ]   [ b_1 ]
[ a_{21} a_{22} ... a_{2n} ] [ x_2 ]   [ b_2 ]
[  ...    ...   ...  ...   ] [ ... ] = [ ... ]
[ a_{m1} a_{m2} ... a_{mn} ] [ x_n ]   [ b_m ]

For example, the linear system

(5.3)
x_1 + 3x_2 = 3
x_1 + 2x_2 = 3

can be represented as

(5.4)
[ 1 3 ] [ x_1 ]   [ 3 ]
[ 1 2 ] [ x_2 ] = [ 3 ]

Now, when solving linear systems like (5.3) it is very common to create new but equivalent equations by, for example, multiplying by a constant or adding one equation to another. In fact, we have

Theorem 5.2. Let S be a system of m linear equations in n unknowns and let S' be another system of m equations in n unknowns obtained from S by applying some combination of the following operations:

- interchanging the order of two equations
- multiplying one equation by a non-zero constant
- replacing an equation with the sum of itself and a multiple of another equation in the system

Then the solution sets of S and S' are identical.

In particular, the solution set of (5.3) is equivalent to the solution set of

(5.5)
x_1 + 3x_2 = 3
-x_1 - 2x_2 = -3

(where we have multiplied the second equation by -1), as well as the solution set of

(5.6)
x_1 + 3x_2 = 3
x_2 = 0

(where we have replaced the second row in (5.5) by its sum with the first row), as well as the solution set of

(5.7)
x_1 = 3
x_2 = 0

(where we have replaced the first row in (5.6) by its sum with -3 times the second row). We can thus solve a linear system by means of the elementary operations described in the theorem above.

Now, because there is a matrix equation corresponding to every system of linear equations, each of the operations described in the theorem above corresponds to a matrix operation. To convert these operations into our matrix language, we first associate with a linear system

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n = b_m

its augmented matrix

[ a_{11} a_{12} ... a_{1n} | b_1 ]
[ a_{21} a_{22} ... a_{2n} | b_2 ]
[  ...    ...   ...  ...   | ... ]
[ a_{m1} a_{m2} ... a_{mn} | b_m ]

This is just the m x (n+1) matrix that is obtained by appending the column vector b to the columns of the m x n matrix A. We shall use the notation [A b] to denote the augmented matrix of A and b.

The augmented matrices corresponding to equations (5.5), (5.6), and (5.7) are thus, respectively,

[  1  3 |  3 ]      [ 1 3 | 3 ]      [ 1 0 | 3 ]
[ -1 -2 | -3 ]  ,   [ 0 1 | 0 ]  ,   [ 0 1 | 0 ]

From this example, we can see that the operations in Theorem 5.2 translate to the following operations on the corresponding augmented matrices:

Row Interchange: the interchange of two rows of the augmented matrix.
Row Scaling: multiplication of a row by a non-zero scalar.
Row Addition: replacing a row by its sum with a multiple of another row.

Henceforth, we shall refer to these operations as Elementary Row Operations.
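For readers who want to experiment, here is a minimal NumPy sketch (my own illustration, not part of the lecture; the helper names row_interchange, row_scale and row_addition are invented) of the three elementary row operations acting on an augmented matrix. Starting from the augmented matrix of (5.3), one scaling and two additions reproduce (5.5), (5.6) and (5.7), leaving the solution unchanged, as Theorem 5.2 promises.

import numpy as np

def row_interchange(M, i, j):
    M = M.copy()
    M[[i, j]] = M[[j, i]]          # swap rows i and j
    return M

def row_scale(M, i, c):
    M = M.copy()
    M[i] = c * M[i]                # multiply row i by the non-zero scalar c
    return M

def row_addition(M, i, j, c):
    M = M.copy()
    M[i] = M[i] + c * M[j]         # replace row i by row i + c * (row j)
    return M

# Augmented matrix of the system (5.3).
M = np.array([[1., 3., 3.],
              [1., 2., 3.]])

M = row_scale(M, 1, -1.0)          # multiply the second row by -1      -> (5.5)
M = row_addition(M, 1, 0, 1.0)     # second row + first row             -> (5.6)
M = row_addition(M, 0, 1, -3.0)    # first row + (-3) * second row      -> (5.7)
print(M)                           # [[1. 0. 3.], [0. 1. 0.]], i.e. x_1 = 3, x_2 = 0
# (row_interchange is not needed for this particular example.)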

3. Solving Linear Equations

Let's now reverse directions and think about how to recognize when the system of equations corresponding to a given matrix is easily solved. (To keep our discussion simple, in the examples given below we consider systems of n equations in n unknowns.)

3.1. Diagonal Matrices. A matrix equation Ax = b is trivial to solve if the matrix A is purely diagonal. For then the augmented matrix has the form

[ a_{11}   0    ...    0    | b_1 ]
[   0    a_{22} ...    0    | b_2 ]
[  ...    ...   ...   ...   | ... ]
[   0      0    ...  a_{nn} | b_n ]

and the corresponding system of equations reduces to

a_{11} x_1 = b_1   =>   x_1 = b_1 / a_{11}
a_{22} x_2 = b_2   =>   x_2 = b_2 / a_{22}
...
a_{nn} x_n = b_n   =>   x_n = b_n / a_{nn}

3.2. Lower Triangular Matrices. If the coefficient matrix A has the form

    [ a_{11}      0      ...      0         0    ]
    [ a_{21}    a_{22}   ...      0         0    ]
A = [  ...       ...     ...     ...       ...   ]
    [ a_{n-1,1} a_{n-1,2} ... a_{n-1,n-1}   0    ]
    [ a_{n1}    a_{n2}   ...  a_{n,n-1}   a_{nn} ]

(with zeros everywhere above the diagonal from a_{11} to a_{nn}), then it is called lower triangular. A matrix equation Ax = b in which A is lower triangular is also fairly easy to solve. For it is equivalent to a system of equations of the form

a_{11} x_1 = b_1
a_{21} x_1 + a_{22} x_2 = b_2
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
...
a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + ... + a_{nn} x_n = b_n

To find the solution of such a system, one solves the first equation for x_1 and then substitutes its solution b_1 / a_{11} for the variable x_1 in the second equation:

a_{21} ( b_1 / a_{11} ) + a_{22} x_2 = b_2   =>   x_2 = (1/a_{22}) ( b_2 - a_{21} b_1 / a_{11} )

One can now substitute the numerical expressions for x_1 and x_2 into the third equation and get a numerical expression for x_3. Continuing in this manner we can solve the system completely. We call this method solution by forward substitution.
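The forward-substitution recipe translates directly into a few lines of code. The following is a minimal Python/NumPy sketch (my own illustration, not from the lecture; the function name forward_substitute is invented), assuming a square lower triangular matrix with non-zero diagonal entries.

import numpy as np

def forward_substitute(L, b):
    """Solve L x = b for lower triangular L by forward substitution.
    A sketch only: assumes L is square with non-zero diagonal entries."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # everything to the left of the diagonal involves already-known unknowns
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

# quick check on a small lower triangular system
L = np.array([[2., 0., 0.],
              [1., 3., 0.],
              [4., 1., 1.]])
b = np.array([2., 4., 6.])
x = forward_substitute(L, b)
print(x)                         # [1. 1. 1.]
print(np.allclose(L @ x, b))     # True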

3.3. Upper Triangular Matrices. We can do a similar thing for systems of equations characterized by an upper triangular matrix A,

    [ a_{11}  a_{12}  ...  a_{1,n-1}    a_{1n}   ]
    [   0     a_{22}  ...  a_{2,n-1}    a_{2n}   ]
A = [  ...     ...    ...     ...        ...     ]
    [   0       0     ... a_{n-1,n-1}  a_{n-1,n} ]
    [   0       0     ...     0         a_{nn}   ]

(that is to say, a matrix with zeros everywhere below and to the left of the diagonal). The corresponding system of equations will be

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
             a_{22} x_2 + ... + a_{2n} x_n = b_2
             ...
             a_{n-1,n-1} x_{n-1} + a_{n-1,n} x_n = b_{n-1}
                                      a_{nn} x_n = b_n

which can be solved by substituting the solution x_n = b_n / a_{nn} of the last equation into the preceding equation and solving for x_{n-1},

x_{n-1} = (1/a_{n-1,n-1}) ( b_{n-1} - a_{n-1,n} ( b_n / a_{nn} ) )

and then substituting this result into the third-from-last equation, etc. This method is called solution by back-substitution.
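Back-substitution can be coded the same way, just running through the rows from bottom to top. Again this is only a sketch of my own (the name back_substitute is invented), assuming non-zero diagonal entries; the test matrix is the upper triangular stage reached in Example 5.3 of the next subsection.

import numpy as np

def back_substitute(U, b):
    """Solve U x = b for upper triangular U by back-substitution.
    A sketch only: assumes U is square with non-zero diagonal entries."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # everything to the right of the diagonal involves already-known unknowns
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[1.,  1., -1.],
              [0., -2.,  2.],
              [0.,  0., -2.]])
b = np.array([-2., 6., -6.])
print(back_substitute(U, b))     # [1. 0. 3.]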

3.4. Solution via Row Reduction.

3.4.1. Gaussian Reduction. In the general case a matrix will be neither upper triangular nor lower triangular, and so neither forward- nor back-substitution can be used directly to solve the corresponding system of equations. However, using the elementary row operations discussed in the preceding section we can always convert the augmented matrix of a (self-consistent) system of linear equations into an equivalent matrix that is upper triangular; and having done that, we can then use back-substitution to solve the corresponding set of equations. We call this method Gaussian reduction. Let me demonstrate the Gaussian method with an example.

Example 5.3. Solve the following system of equations using Gaussian reduction:

x_1 + x_2 - x_3 = -2
x_1 - x_2 + x_3 = 4
x_1 - x_2 - x_3 = -2

First we write down the corresponding augmented matrix

[ 1  1 -1 | -2 ]
[ 1 -1  1 |  4 ]
[ 1 -1 -1 | -2 ]

We now use elementary row operations to convert this matrix into one that is upper triangular. Adding -1 times the first row to the second row produces

[ 1  1 -1 | -2 ]
[ 0 -2  2 |  6 ]
[ 1 -1 -1 | -2 ]

Adding -1 times the first row to the third row produces

[ 1  1 -1 | -2 ]
[ 0 -2  2 |  6 ]
[ 0 -2  0 |  0 ]

Adding -1 times the second row to the third row produces

[ 1  1 -1 | -2 ]
[ 0 -2  2 |  6 ]
[ 0  0 -2 | -6 ]

The augmented matrix is now in upper triangular form. It corresponds to the following system of equations, which can easily be solved via back-substitution:

x_1 + x_2 - x_3 = -2
    -2x_2 + 2x_3 = 6
           -2x_3 = -6

-2x_3 = -6          =>   x_3 = 3
-2x_2 + 6 = 6       =>   x_2 = 0
x_1 + 0 - 3 = -2    =>   x_1 = 1

In summary, solution by Gaussian reduction consists of the following steps.

(1) Write down the augmented matrix corresponding to the system of linear equations to be solved.
(2) Use elementary row operations to convert the augmented matrix [A b] into one that is upper triangular. This is achieved by systematically using the first row to eliminate the first entries in the rows below it, the second row to eliminate the second entries in the rows below it, etc.
(3) Once the augmented matrix has been reduced to upper triangular form, write down the corresponding set of linear equations and use back-substitution to complete the solution.

Remark 5.4. The text refers to matrices that are upper triangular as being in row-echelon form.

Definition 5.5. A matrix is in row-echelon form if it satisfies two conditions:

(1) All rows containing only zeros appear below rows with non-zero entries.
(2) The first non-zero entry in a row appears in a column to the right of the first non-zero entry of any preceding row.

For such a matrix the first non-zero entry in a row is called the pivot for that row.
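The three-step recipe above can be summarized in code. The sketch below is my own illustration (the name gaussian_solve is invented, and it is not part of the lecture): it forms the augmented matrix, eliminates below each pivot, and finishes with back-substitution. It assumes every pivot encountered is non-zero, so no row interchanges (pivoting) are performed. Run on the system of Example 5.3 it reproduces the solution x_1 = 1, x_2 = 0, x_3 = 3.

import numpy as np

def gaussian_solve(A, b):
    """Solve A x = b by Gaussian reduction followed by back-substitution.
    A sketch only: assumes the pivots encountered are non-zero, so no row
    interchanges are performed."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])       # step (1): augmented matrix [A b]
    for j in range(n - 1):                      # step (2): eliminate below each pivot
        for i in range(j + 1, n):
            M[i] -= (M[i, j] / M[j, j]) * M[j]
    x = np.zeros(n)                             # step (3): back-substitution
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i+1:n] @ x[i+1:]) / M[i, i]
    return x

A = np.array([[1,  1, -1],
              [1, -1,  1],
              [1, -1, -1]])
b = np.array([-2, 4, -2])
print(gaussian_solve(A, b))                                         # [1. 0. 3.]
print(np.allclose(gaussian_solve(A, b), np.linalg.solve(A.astype(float), b.astype(float))))  # True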

3.4.2. Gauss-Jordan Reduction. In the Gaussian reduction method described above, one carries out row operations until the augmented matrix is upper triangular, and then finishes the problem by converting the augmented matrix back into a linear system and using back-substitution to complete the solution. Thus, row reduction is used to carry out only about half the work involved in solving a linear system. It is also possible to use row operations alone to construct the solution of a linear system Ax = b. This technique is called Gauss-Jordan reduction. The idea is this.

(1) Write down the augmented matrix corresponding to the system of linear equations to be solved.
(2) Use elementary row operations to convert the augmented matrix [A b] into one that is upper triangular. This is achieved by systematically using the first row to eliminate the first entries in the rows below it, the second row to eliminate the second entries in the rows below it, etc.:

[ a_{11} a_{12} ... a_{1n} | b_1 ]                    [ a_{11} a_{12} ... a_{1n} | b_1 ]
[ a_{21} a_{22} ... a_{2n} | b_2 ]   row operations   [   0    a_{22} ... a_{2n} | b_2 ]
[  ...    ...   ...  ...   | ... ]  --------------->  [  ...    ...   ...  ...   | ... ]
[ a_{n1} a_{n2} ... a_{nn} | b_n ]                    [   0      0    ... a_{nn} | b_n ]

(3) Continue to use row operations on the augmented matrix until all the entries above the diagonal of the first factor have been eliminated, and only 1's appear along the diagonal:

[ a_{11} a_{12} ... a_{1n} | b_1 ]                    [ 1  0  ...  0 | b'_1 ]
[   0    a_{22} ... a_{2n} | b_2 ]   row operations   [ 0  1  ...  0 | b'_2 ]
[  ...    ...   ...  ...   | ... ]  --------------->  [ ...  ...  ...| ...  ]  =  [ I b' ]
[   0      0    ... a_{nn} | b_n ]                    [ 0  0  ...  1 | b'_n ]

(4) The solution of the linear system corresponding to the augmented matrix [I b'] is trivial: x = Ix = b'. Moreover, since [I b'] was obtained from [A b] by row operations, x = b' must also be the solution of Ax = b.

Example 5.6. Solve the following system of equations using Gauss-Jordan reduction:

x_1 + x_2 - x_3 = -2
x_1 - x_2 + x_3 = 4
x_1 - x_2 - x_3 = -2

First we write down the corresponding augmented matrix

[ 1  1 -1 | -2 ]
[ 1 -1  1 |  4 ]
[ 1 -1 -1 | -2 ]

In the preceding example, I demonstrated that this augmented matrix is equivalent to

[ 1  1 -1 | -2 ]
[ 0 -2  2 |  6 ]
[ 0  0 -2 | -6 ]

We'll now continue to apply row operations until the first block has the form of a 3 x 3 identity matrix. Multiplying the second and third rows by -1/2 yields

[ 1  1 -1 | -2 ]
[ 0  1 -1 | -3 ]
[ 0  0  1 |  3 ]

Replacing the first row by its sum with -1 times the second row yields

[ 1  0  0 |  1 ]
[ 0  1 -1 | -3 ]
[ 0  0  1 |  3 ]

Replacing the second row by its sum with the third row yields

[ 1  0  0 |  1 ]
[ 0  1  0 |  0 ]
[ 0  0  1 |  3 ]

The system of equations corresponding to this augmented matrix is

x_1 = 1
x_2 = 0
x_3 = 3

This is our solution.

To maintain contact with the definitions used in the text, we have

Definition 5.7. A matrix is in reduced row-echelon form if it is in row-echelon form with all pivots equal to 1 and with zeros in every column entry above and below the pivots.
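As a companion to Example 5.6, here is a short Gauss-Jordan sketch of my own (the name gauss_jordan is invented). It scales each pivot to 1 and clears the rest of its column as it goes, rather than first reducing to triangular form and then working upward; both orderings end at the same reduced row-echelon form [I b']. It again assumes the matrix is square and every pivot encountered is non-zero.

import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A b] to reduced row-echelon form [I b']
    using only row operations. A sketch only: assumes A is square and that
    every pivot encountered is non-zero."""
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for j in range(n):
        M[j] /= M[j, j]                 # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]  # clear the rest of column j
    return M

A = np.array([[1,  1, -1],
              [1, -1,  1],
              [1, -1, -1]])
b = np.array([-2, 4, -2])
M = gauss_jordan(A, b)
print(M[:, -1])                         # [1. 0. 3.]  -- the solution read off from [I b']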

Theorem 5.8. Let Ax = b be a linear system and suppose [A' b'] is row equivalent to [A b], with A' a matrix in row-echelon form. Then:

(1) The system Ax = b is inconsistent if and only if the augmented matrix [A' b'] has a row with all entries equal to zero to the left of the partition and a non-zero entry to the right of the partition.
(2) If Ax = b is consistent and every column of A' contains a pivot, the system has a unique solution.
(3) If Ax = b is consistent and some column of A' has no pivot, the system has infinitely many solutions, with as many free variables as there are pivot-free columns in A'.

4. Elementary Matrices

I shall now show that all elementary row operations can be carried out by means of matrix multiplication. Even though carrying out row operations by matrix multiplication would be grossly inefficient from a computational point of view, this equivalence remains an important theoretical fact (as you'll see in some of the forthcoming proofs).

Definition 5.9. An elementary matrix is a matrix that can be obtained from the identity matrix by means of a single row operation.

Theorem 5.10. Let A be an m x n matrix, and let E be an m x m elementary matrix. Then multiplication of A on the left by E effects the same elementary row operation on A as that which was performed on the m x m identity matrix to produce E.

Corollary 5.11. If A' is a matrix that was obtained from A by a sequence of row operations, then there exists a corresponding sequence of elementary matrices E_1, E_2, ..., E_r such that A' = E_r ... E_2 E_1 A.
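Theorem 5.10 and Corollary 5.11 are easy to check numerically. In the sketch below (my own illustration, not part of the lecture; elem_add is an invented helper), each elementary matrix is built by performing one row-addition operation on the identity, and left-multiplying by the product E_3 E_2 E_1 reproduces the row reduction carried out in Example 5.3.

import numpy as np

def elem_add(n, i, j, c):
    """Elementary matrix (n x n) that replaces row i by row i + c * (row j), i != j."""
    E = np.eye(n)
    E[i, j] = c
    return E

# The augmented matrix [A b] of Example 5.3 above.
A = np.array([[1.,  1., -1., -2.],
              [1., -1.,  1.,  4.],
              [1., -1., -1., -2.]])

E1 = elem_add(3, 1, 0, -1.0)    # add -1 times row 0 to row 1
E2 = elem_add(3, 2, 0, -1.0)    # add -1 times row 0 to row 2
E3 = elem_add(3, 2, 1, -1.0)    # add -1 times row 1 to row 2

# E3 E2 E1 A is the upper triangular matrix reached in Example 5.3,
# illustrating Corollary 5.11.
print(E3 @ E2 @ E1 @ A)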