MATH 2030: SYSTEMS OF LINEAR EQUATIONS


1. Systems of Linear Equations

In the plane R^2 the general form of the equation of a line is ax + by = c, and the general equation of a plane in R^3 is ax + by + cz = d; we call these types of equations linear.

Definition 1.1. A linear equation in the n variables x_1, x_2, ..., x_n is an equation that may be written in the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = b,

where the coefficients a_1, ..., a_n and the constant term b are constants.

Example 1.2. The following equations are linear:

    2x + y = 3,    r - (2/3)s - (5/8)t = 3,    x_1 - x_2 + 2x_3 = 3,    2x + (pi/4)y - (tan 2)z = e,

while these equations are not linear:

    xy - z = 2,    x_1^2 + x_2^2 = 1,    x/y = 3z,    2^r + s = 1,    2x + (pi^2/4)y - sin(2z) = e.

A solution of a linear equation a_1 x_1 + ... + a_n x_n = b is a vector [s_1, s_2, ..., s_n] whose components satisfy the equation when we substitute x_i = s_i for each i in {1, ..., n}.

Example 1.3. In the first example, one possible solution to the first linear equation would be [2, -1], since the substitution of x = 2 and y = -1 yields 2(2) + (-1) = 3. The vector [1, 1] is another solution. We already know this equation describes a line in the plane, and its solutions may be written parametrically by letting x = t and solving for y, to produce [t, 3 - 2t].

The linear equation x_1 - x_2 + 2x_3 = 3 has [3, 0, 0], [0, 1, 2] and [6, 1, -1] as specific solutions. Of course, in R^3 this equation describes a plane. To see this, set x_2 = s and x_3 = t; the solutions are then described parametrically by [3 + s - 2t, s, t].

A system of linear equations is a finite set of linear equations, each with the same variables. A solution of a system of linear equations is a vector that is simultaneously a solution of each equation in the system. The solution set of a system of linear equations is the set of all solutions of the system. We call the process of finding the solution set solving the system.

Example 1.4. The system

    x + y = 1,    x - y = 0

has [1/2, 1/2] as a solution, since it satisfies both equations. Notice that the vector [1, 1] is not a solution, as it satisfies only the second equation and not the first.

Example 1.5. In R^2 there are three typical cases for a pair of linear equations:

    x + y = 1,  x - y = 0:  here the lines intersect exactly once, at the solution [1/2, 1/2].

    x - y = 1,  2x - 2y = 2:  here the lines intersect infinitely many times, and the solutions have the form [t, t - 1].

    x - y = 1,  x - y = 2:  this system has no solution. Geometrically the equations represent parallel lines; to see that there is no solution, solve for x in the first equation and substitute into the second, which gives 1 = 2, and this cannot happen on the real line.
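As a quick computational aside, a candidate vector can be tested against a system directly from the definition. The short Python sketch below does this for the system of Example 1.4; it is only an illustration, and the helper name is_solution is ours, not part of the notes.

    # Check whether a candidate vector satisfies every equation of a system.
    # Each equation is stored as (coefficients, constant) for a_1 x_1 + ... + a_n x_n = b.

    def is_solution(system, candidate):
        return all(
            abs(sum(a * x for a, x in zip(coeffs, candidate)) - b) < 1e-12
            for coeffs, b in system
        )

    # The system of Example 1.4:  x + y = 1,  x - y = 0.
    system = [([1, 1], 1), ([1, -1], 0)]

    print(is_solution(system, [0.5, 0.5]))  # True: [1/2, 1/2] solves both equations
    print(is_solution(system, [1, 1]))      # False: [1, 1] fails the first equation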

From these examples we introduce some terminology: a system of linear equations is consistent if it has at least one solution, while a system with no solutions is called inconsistent. Despite their simplicity, the three systems in the previous example illustrate the only three possibilities for the number of solutions of a system of linear equations with real coefficients. For the moment this remains unproven, but we record it as a proposition.

Proposition 1.6. A system of linear equations with real coefficients has either (1) a unique solution (consistent), (2) infinitely many solutions (consistent), or (3) no solutions (inconsistent).

Solving a System of Linear Equations. To start we introduce the notion of equivalence between linear systems: we say two linear systems are equivalent if they have the same solution set. As an example, the systems

    x - y = 1,  x + y = 3        and        x - y = 1,  y = 1

have the same unique solution [2, 1], and so they are equivalent. This example illustrates how we will go about finding the solution to a given system of linear equations: by finding increasingly simpler yet equivalent linear systems, we may determine the solution by inspection, as in this example where the triangular pattern of the second system gives y = 1 automatically.

Example 1.7. Q: What is the solution to the system

    x - y - z = 2,    y + 3z = 5,    5z = 10?

A: Taking the last equation, we solve for z = 2; substituting this value into the remaining two equations, they become x - y = 4 and y = -1. Repeating the process with y = -1 we find x = 3. Thus [3, -1, 2] is a solution to this system.

In this last example we used back substitution. Using this tool, we will determine a general strategy for transforming a given system into an equivalent one that can be solved easily.
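Back substitution itself is easy to automate. Here is a minimal Python sketch (the function name and layout are ours, for illustration only), applied to the triangular system of Example 1.7.

    def back_substitution(U, b):
        """Solve Ux = b for an upper triangular U with nonzero diagonal."""
        n = len(b)
        x = [0.0] * n
        for i in range(n - 1, -1, -1):        # work from the last equation upward
            s = sum(U[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / U[i][i]
        return x

    # The triangular system of Example 1.7:
    # x - y - z = 2,  y + 3z = 5,  5z = 10.
    U = [[1, -1, -1],
         [0,  1,  3],
         [0,  0,  5]]
    b = [2, 5, 10]

    print(back_substitution(U, b))   # [3.0, -1.0, 2.0]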

Without saying too much, we will illustrate this approach with one last example.

Example 1.8. Q: Solve the system

    x - y - z = 2,    3x - 3y + 2z = 16,    2x - y + z = 9.

A: As a start towards reducing this to a simple triangular form, as in the last example, we would like to reduce the coefficient of the x-term to zero in the second and third equations. This may be done by multiplying the first equation by an appropriate constant and subtracting this new equation from the one we wish to change. This will not affect x, y or z, and so the system may be written more compactly as

    [ 1  -1  -1 ] [ x ]   [  2 ]            [ 1  -1  -1 |  2 ]
    [ 3  -3   2 ] [ y ] = [ 16 ],    or     [ 3  -3   2 | 16 ]
    [ 2  -1   1 ] [ z ]   [  9 ]            [ 2  -1   1 |  9 ]

where the first three columns contain the coefficients of the variables in order, the final column contains the constant terms, and the vertical bar is meant to distinguish equality from a sum of terms. This matrix is called the augmented matrix of the system. As we have yet to introduce the operations that will act on the matrix, we will work with the system as linear equations and illustrate how our actions are reflected in the augmented matrix of each system.

    x - y - z = 2                        [ 1  -1  -1 |  2 ]
    3x - 3y + 2z = 16                    [ 3  -3   2 | 16 ]
    2x - y + z = 9                       [ 2  -1   1 |  9 ]

Subtract 3 times the first equation from the second equation:

    x - y - z = 2                        [ 1  -1  -1 |  2 ]
    5z = 10                              [ 0   0   5 | 10 ]
    2x - y + z = 9                       [ 2  -1   1 |  9 ]

Subtract 2 times the first equation from the third equation:

    x - y - z = 2                        [ 1  -1  -1 |  2 ]
    5z = 10                              [ 0   0   5 | 10 ]
    y + 3z = 5                           [ 0   1   3 |  5 ]

Interchange the second and third equations:

    x - y - z = 2                        [ 1  -1  -1 |  2 ]
    y + 3z = 5                           [ 0   1   3 |  5 ]
    5z = 10                              [ 0   0   5 | 10 ]

We can stop here: we have found a simpler equivalent system from which we can read off the solution as [3, -1, 2]. Thus this system and the one in Example 1.7 are equivalent, despite appearances otherwise.

The above calculation shows that any solution of the given system is also a solution of the final one. As this process is reversible, we have found a way to change from one equivalent system to another, and backwards if need be. Furthermore, these operations may be expressed as operations on the entries of matrices, so we may as well work with matrices, since they are equivalent to systems of linear equations.
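The same bookkeeping can be done on the augmented matrix directly. The following Python sketch (NumPy is assumed to be available; it is not part of the notes' toolkit) replays the three steps of Example 1.8 as operations on the rows of [A | b].

    import numpy as np

    # Augmented matrix of Example 1.8.
    M = np.array([[1., -1., -1.,  2.],
                  [3., -3.,  2., 16.],
                  [2., -1.,  1.,  9.]])

    M[1] = M[1] - 3 * M[0]    # subtract 3 times row 1 from row 2
    M[2] = M[2] - 2 * M[0]    # subtract 2 times row 1 from row 3
    M[[1, 2]] = M[[2, 1]]     # interchange rows 2 and 3

    print(M)
    # [[ 1. -1. -1.  2.]
    #  [ 0.  1.  3.  5.]
    #  [ 0.  0.  5. 10.]]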

Direct Methods for Solving Linear Systems

We want to make this procedure more systematic and general enough to apply to any system of linear equations. We will do this by reducing the augmented matrix of a system of linear equations to a simpler form, from which back substitution produces the solution.

Echelon form of matrices. For any linear system we define two helpful matrices that will be important in the work to come: the coefficient matrix, containing the coefficients of the variables, and the augmented matrix, which is the coefficient matrix with an extra column added containing the constant terms of the linear system.

From the last example, the augmented matrix associated with the linear system is again

    x - y - z = 2                        [ 1  -1  -1 |  2 ]
    3x - 3y + 2z = 16                    [ 3  -3   2 | 16 ]
    2x - y + z = 9                       [ 2  -1   1 |  9 ]

and so the coefficient matrix for this system is

    [ 1  -1  -1 ]
    [ 3  -3   2 ]
    [ 2  -1   1 ]

If a variable x_i is missing from the j-th linear equation of the system, the (j, i) entry of the coefficient matrix is zero. Symbolically, if we denote the coefficient matrix by A and the column vector of constant terms by b, the augmented matrix is [A | b].

There will be times when a matrix cannot be simplified to a triangular form; it is nonetheless possible to simplify a matrix to another helpful form.

Definition 1.9. A matrix is in row echelon form if it satisfies the following properties:
(1) Any rows consisting entirely of zeros are at the bottom.
(2) In each nonzero row, the first nonzero entry (the leading entry) is in a column to the left of any leading entries below it.

Example 1.10. All of these matrices are in row echelon form:

    [ 1  1 | 2 ]      [ 1  0 | 3 ]      [ 2  5  4 | 4 ]
    [ 0  3 | 5 ]      [ 0  2 | 3 ]      [ 0  1  2 | 4 ]
                      [ 0  0 | 2 ]      [ 0  0  0 | 5 ]

For any linear system, once an equivalent augmented matrix has been reduced to row echelon form, the system may be solved using back substitution.

Example 1.11. Q: If we assume that each of the above matrices is an augmented matrix, determine their solutions, if any.
A: Transforming each augmented matrix into the corresponding set of linear equations facilitates back substitution.
(1) The equations are x + y = 2 and 3y = 5, and so the solution is [1/3, 5/3].
(2) Here the linear system is x_1 = 3, 2x_2 = 3 and 0 = 2. This cannot happen, and so the system has no solution.
(3) Noticing that the bottom row of this matrix implies 0 = 5, we conclude that no solution exists.

Elementary Row Operations. To describe the procedure for reducing a matrix to row echelon form, we should define the operations on a matrix that maintain the equivalence between linear systems and their augmented matrices; these are called elementary row operations.

Definition 1.12. The following elementary row operations can be performed on a matrix:
(1) Interchange two rows.
(2) Multiply a row by a nonzero constant.
(3) Add a multiple of a row to another row.
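The three elementary row operations are easy to express in code. Here is a small Python/NumPy sketch (NumPy assumed available, helper names ours) implementing Definition 1.12; each function modifies the matrix in place.

    import numpy as np

    def interchange(M, i, j):
        """R_i <-> R_j"""
        M[[i, j]] = M[[j, i]]

    def scale(M, i, k):
        """kR_i, for a nonzero constant k"""
        M[i] = k * M[i]

    def add_multiple(M, i, j, k):
        """kR_i + R_j: add k times row i to row j"""
        M[j] = M[j] + k * M[i]

    # A quick check on a small matrix.
    M = np.array([[0., 2., 3.],
                  [1., 1., 1.]])
    interchange(M, 0, 1)
    scale(M, 1, 0.5)
    add_multiple(M, 1, 0, -1)
    print(M)
    # [[ 1.   0.  -0.5]
    #  [ 0.   1.   1.5]]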

To facilitate calculations we use a shorthand to denote the row operations symbolically:
(1) R_i <-> R_j denotes an interchange of the i-th and j-th rows.
(2) kR_i means we multiply the i-th row by k.
(3) Adding k times the i-th row to the j-th row is written kR_i + R_j.

The third rule allows one to subtract a row from another by taking k = -1, and division of a row's entries by a number r is easily done by taking k = 1/r in the second rule. The process of applying elementary row operations to bring a matrix into row echelon form is called row reduction.

Example 1.13. Q: Reduce the matrix to row echelon form:

    [  1   2   1   3 ]
    [ -2   3  -1   5 ]
    [  3  11   3  14 ]

A: We are going to work from the top left to the bottom right. The idea is to work with the leading entry in the top-most row and use it to create zeros below it; we call this entry a pivot, and this sub-process is called pivoting. Typically one also uses row operations to set the pivot equal to 1. With this in mind, we eliminate the entries below the 1 in the top-left corner. Composing the row operations R_2 + 2R_1 and R_3 - 3R_1 yields

    [ 1  2  1   3 ]
    [ 0  7  1  11 ]
    [ 0  5  0   5 ]

The first column is in echelon form, so we move on to the next column. To simplify this matrix we interchange the rows, R_2 <-> R_3, and scale the new R_2 by 1/5:

    [ 1  2  1   3 ]
    [ 0  1  0   1 ]
    [ 0  7  1  11 ]

We have found the second pivot; we must eliminate the seven below it to put this column in echelon form, via R_3 - 7R_2:

    [ 1  2  1  3 ]
    [ 0  1  0  1 ]
    [ 0  0  1  4 ]

At this point the entire matrix has been reduced to row echelon form.

Elementary row operations are reversible, so that if a matrix A is transformed into a new matrix B by any combination of such operations, there is a corresponding combination of inverse operations that transforms B back into A.

Definition 1.14. Matrices A and B are row equivalent if there is a sequence of elementary row operations that converts A into B.

The matrices in the previous example are row equivalent.
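Row reduction can also be automated. The Python/NumPy sketch below (NumPy assumed, function name ours) always pivots on the first nonzero entry in a column and scales it to 1, so the intermediate rows it produces may differ from the hand computation in Example 1.13, but its output is still a row echelon form of the same matrix; this also illustrates that row echelon form is not unique.

    import numpy as np

    def row_echelon(M):
        """Return a row echelon form of M using elementary row operations."""
        M = M.astype(float).copy()
        rows, cols = M.shape
        pivot_row = 0
        for col in range(cols):
            # look for a nonzero entry at or below the current pivot row
            candidates = [r for r in range(pivot_row, rows) if M[r, col] != 0]
            if not candidates:
                continue                                # nothing to pivot on in this column
            M[[pivot_row, candidates[0]]] = M[[candidates[0], pivot_row]]  # interchange
            M[pivot_row] = M[pivot_row] / M[pivot_row, col]                # scale pivot to 1
            for r in range(pivot_row + 1, rows):
                M[r] = M[r] - M[r, col] * M[pivot_row]                     # eliminate below
            pivot_row += 1
            if pivot_row == rows:
                break
        return M

    A = np.array([[ 1,  2,  1,  3],
                  [-2,  3, -1,  5],
                  [ 3, 11,  3, 14]])
    print(row_echelon(A))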

To generalize this idea we return to row echelon form.

Theorem 1.15. Matrices A and B are row equivalent if and only if they can be reduced to the same row echelon form.

Proof. Suppose A and B are row equivalent, and let E be a row echelon form of B. Composing the sequence of row operations that converts A into B with the sequence converting B into E, we obtain a sequence of row operations taking A to E, so A and B reduce to the same row echelon form.

For the other direction, suppose A and B can each be brought, by some combination of row operations, into the same row echelon form E. Inverting the elementary row operations that take B to E and appending them to the list of row operations that take A to E, we obtain a sequence of row operations that takes A to B, so A and B are row equivalent.

Gaussian Elimination. By using elementary row operations to alter the augmented matrix of a system of linear equations, we produce an equivalent system that is more easily solved using back substitution. More formally, this procedure is called Gaussian elimination, and it consists of three steps:
(1) Find the augmented matrix of the system of linear equations.
(2) Use elementary row operations to reduce the augmented matrix to row echelon form.
(3) Using back substitution, solve the equivalent system that corresponds to the row-reduced matrix.

Example 1.16. Q: Solve the system

    2x_2 + 3x_3 = 8,    2x_1 + 3x_2 + x_3 = 5,    x_1 - x_2 - 2x_3 = -5.

A: The augmented matrix is

    [ 0   2   3 |  8 ]
    [ 2   3   1 |  5 ]
    [ 1  -1  -2 | -5 ]

As there is a 1 in the first column of the third row, we swap it with the first row, R_1 <-> R_3, and then eliminate the 2 in the second row directly below the new pivot, via R_2 - 2R_1:

    [ 1  -1  -2 | -5 ]
    [ 0   5   5 | 15 ]
    [ 0   2   3 |  8 ]

Scaling the second row by 1/5 and following this with R_3 - 2R_2 produces the matrix

    [ 1  -1  -2 | -5 ]
    [ 0   1   1 |  3 ]
    [ 0   0   1 |  2 ]

The equivalent system of linear equations corresponding to this augmented matrix is

    x_1 - x_2 - 2x_3 = -5,    x_2 + x_3 = 3,    x_3 = 2;

back substitution gives the solution

    [ x_1 ]   [ 0 ]
    [ x_2 ] = [ 1 ]
    [ x_3 ]   [ 2 ]
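For a square system with a unique solution, the result of Gaussian elimination plus back substitution can be checked against a library solver. A one-line NumPy check of Example 1.16 (NumPy assumed available):

    import numpy as np

    A = np.array([[0., 2., 3.],      # coefficient matrix of Example 1.16
                  [2., 3., 1.],
                  [1., -1., -2.]])
    b = np.array([8., 5., -5.])

    print(np.linalg.solve(A, b))     # [0. 1. 2.] (up to rounding), matching back substitution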

Example 1.17. Q: Solve the system

    w - 2x + y - z = 2,    w - x + y - 2z = 5,    -2w + 2x - 2y + 4z = -10.

A: The augmented matrix is

    [  1  -2   1  -1 |   2 ]
    [  1  -1   1  -2 |   5 ]
    [ -2   2  -2   4 | -10 ]

As the first row has leading coefficient 1, we use it as the first pivot. Applying the elementary row operations R_2 - R_1 and R_3 + 2R_1, we produce the matrix

    [ 1  -2   1  -1 |  2 ]
    [ 0   1   0  -1 |  3 ]
    [ 0  -2   0   2 | -6 ]

One more row operation, R_3 + 2R_2, puts this into row echelon form:

    [ 1  -2   1  -1 | 2 ]
    [ 0   1   0  -1 | 3 ]
    [ 0   0   0   0 | 0 ]

The linear system associated with this augmented matrix is

    w - 2x + y - z = 2,    x - z = 3.

This system has infinitely many solutions, which we see by expressing the variables corresponding to the leading entries of the matrix (the leading variables) in terms of the other variables (the free variables). Substituting x = z + 3 into the equation for w, and treating the free variables y and z as parameters, the solution may be expressed in vector form as

    [ w ]   [ 8 ]     [ -1 ]     [ 3 ]
    [ x ] = [ 3 ] + y [  0 ] + z [ 1 ]
    [ y ]   [ 0 ]     [  1 ]     [ 0 ]
    [ z ]   [ 0 ]     [  0 ]     [ 1 ]

This second example showcases the fact that the free variables are simply the variables that are not leading variables. Since the number of leading variables equals the number of nonzero rows in the row echelon form of the coefficient matrix, we can predict the number of free variables before the solution is determined by back substitution.

Definition 1.18. The rank of a matrix is the number of nonzero rows in its row echelon form.

We denote the rank of a matrix A by rank(A). In the first example the rank was 3, and in the second example the rank was 2 (by construction).

Theorem 1.19 (Rank Theorem). Let A be the coefficient matrix of a system of linear equations with n variables. If the system is consistent, then

    number of free variables = n - rank(A).
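The Rank Theorem is easy to check numerically. The sketch below (NumPy assumed available) computes the rank of the coefficient matrices of Examples 1.16 and 1.17 and the predicted number of free variables.

    import numpy as np

    A16 = np.array([[0, 2, 3],
                    [2, 3, 1],
                    [1, -1, -2]])          # Example 1.16: expect rank 3, 0 free variables

    A17 = np.array([[1, -2, 1, -1],
                    [1, -1, 1, -2],
                    [-2, 2, -2, 4]])       # Example 1.17: expect rank 2, 2 free variables

    for A in (A16, A17):
        n = A.shape[1]                     # number of variables
        r = np.linalg.matrix_rank(A)
        print(f"rank = {r}, free variables = {n - r}")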

Example 1.20. Q: Solve the system

    x_1 - x_2 + 2x_3 = 3,    x_1 + 2x_2 - x_3 = -3,    2x_2 - 2x_3 = 1.

A: The augmented matrix is now

    [ 1  -1   2 |  3 ]
    [ 1   2  -1 | -3 ]
    [ 0   2  -2 |  1 ]

Subtracting the first row from the second, and scaling the result by 1/3, we find

    [ 1  -1   2 |  3 ]
    [ 0   1  -1 | -2 ]
    [ 0   2  -2 |  1 ]

To eliminate the column entries below the leading entry in the second row, R_3 - 2R_2 gives

    [ 1  -1   2 |  3 ]
    [ 0   1  -1 | -2 ]
    [ 0   0   0 |  5 ]

This system is inconsistent, as the last row gives the clear contradiction 0 = 5; it has no solutions.

Gauss-Jordan Elimination. We can modify Gaussian elimination so that back substitution becomes easier still. This is helpful when calculations are being done by hand on a system with infinitely many solutions, and it is done by reducing the row echelon matrix even further.

Definition 1.21. A matrix is in reduced row echelon form if it satisfies the following properties:
(1) It is in row echelon form.
(2) The leading entry in each nonzero row is a 1 (called a leading 1).
(3) Each column containing a leading 1 has zeros everywhere else.

As an example, here are all of the 2 x 2 matrices in reduced row echelon form:

    [ 1  0 ]    [ 1  a ]    [ 0  1 ]    [ 0  0 ]
    [ 0  1 ],   [ 0  0 ],   [ 0  0 ],   [ 0  0 ],

where a is any number in R. Unlike row echelon form, the reduced row echelon form is unique: for each matrix A there is only one matrix R in reduced row echelon form that is row equivalent to A.

As with Gaussian elimination, we introduce Gauss-Jordan elimination, whose steps consist of:
(1) Write the augmented matrix of the system of linear equations.
(2) Use elementary row operations to reduce the augmented matrix to reduced row echelon form.
(3) If the resulting system is consistent, solve for the leading variables in terms of the remaining free variables (if any).
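Computer algebra systems compute the reduced row echelon form directly. The Python sketch below (SymPy assumed available) applies rref() to the augmented matrix of Example 1.20; a pivot appearing in the last (constant) column is exactly the kind of contradiction detected above.

    from sympy import Matrix

    M = Matrix([[1, -1,  2,  3],     # augmented matrix of Example 1.20
                [1,  2, -1, -3],
                [0,  2, -2,  1]])

    R, pivots = M.rref()
    print(R)        # last row is [0, 0, 0, 1]
    print(pivots)   # (0, 1, 3): a pivot in the final column means the system is inconsistent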

Example 1.22. Q: Apply the Gauss-Jordan elimination algorithm to the system in Example 1.17.
A: We already know this linear system has an augmented matrix which is row equivalent to

    [ 1  -2   1  -1 | 2 ]
    [ 0   1   0  -1 | 3 ]
    [ 0   0   0   0 | 0 ]

To put this into reduced row echelon form, apply the row operation R_1 + 2R_2 to get

    [ 1   0   1  -3 | 8 ]
    [ 0   1   0  -1 | 3 ]
    [ 0   0   0   0 | 0 ]

Immediately we find the solutions are of the form

    [ w ]   [ 8 ]     [ -1 ]     [ 3 ]
    [ x ] = [ 3 ] + y [  0 ] + z [ 1 ]
    [ y ]   [ 0 ]     [  1 ]     [ 0 ]
    [ z ]   [ 0 ]     [  0 ]     [ 1 ]

Example 1.23. Q: Describe the line of intersection of the two planes

    x - 2y - 2z = -7    and    -x + 3y + 4z = 12.

A: The augmented matrix is

    [  1  -2  -2 | -7 ]
    [ -1   3   4 | 12 ]

Adding the first row to the second (R_2 + R_1) brings the matrix into row echelon form:

    [ 1  -2  -2 | -7 ]
    [ 0   1   2 |  5 ]

One more row operation, R_1 + 2R_2, brings this into reduced row echelon form:

    [ 1  0  2 | 3 ]
    [ 0  1  2 | 5 ]

with the associated linear system x + 2z = 3, y + 2z = 5. Choosing x and y as our leading variables and z as the free variable, the equation of the line may be written in vector form as

    [ x ]   [ 3 ]     [ -2 ]
    [ y ] = [ 5 ] + z [ -2 ]
    [ z ]   [ 0 ]     [  1 ]
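When there are free variables, SymPy's linsolve returns the whole solution set in parametric form. Here it is applied to the two planes of Example 1.23 (SymPy assumed available; this is only an illustrative cross-check).

    from sympy import Matrix, linsolve, symbols

    x, y, z = symbols('x y z')

    aug = Matrix([[ 1, -2, -2, -7],     # augmented matrix of Example 1.23
                  [-1,  3,  4, 12]])

    print(linsolve(aug, x, y, z))       # {(3 - 2*z, 5 - 2*z, z)}: the same line as above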

Example 1.24. Q: Let

    p = [  1 ],   q = [ 0 ],   u = [ 1 ],   v = [  3 ].
        [  0 ]        [ 2 ]        [ 1 ]        [ -1 ]
        [ -1 ]        [ 1 ]        [ 1 ]        [ -1 ]

Determine whether the lines x = p + tu and x = q + sv intersect, and if so, where.

A: If these lines intersect, there must be a single point x = [x; y; z] that lies on both lines at once, i.e. p + tu = x = q + sv, and so p + tu = q + sv, or tu - sv = q - p. Using the parametric forms of the lines, we obtain the system

    t - 3s = -1,    t + s = 2,    t + s = 2,

whose solution is easily checked to be t = 5/4 and s = 3/4. Substituting into x = p + tu, we find that the lines intersect at

    x = [ 9/4 ]
        [ 5/4 ]
        [ 1/4 ]

Homogeneous Systems. So far we have argued that every system of linear equations has either no solution, a unique solution, or infinitely many solutions. There is a special type of system that always has at least one solution.

Definition 1.25. A system of linear equations is called homogeneous if the constant term in each equation is zero.

Symbolically, this means the augmented matrix is of the form [A | 0]; every linear system has an associated homogeneous linear system produced by replacing the vector b with the zero vector. A homogeneous system is never inconsistent, and so it has either a unique solution or infinitely many solutions. We have a helpful theorem for determining when a homogeneous system has infinitely many solutions.

Theorem 1.26. If [A | 0] is a homogeneous system of m linear equations with n variables, where m < n, then the system has infinitely many solutions.

Proof. At the very least x = 0 is a solution, so the system is consistent. By the Rank Theorem we know that rank(A) <= m, since the rank is the number of nonzero rows in a row echelon form of A and A has only m rows. Furthermore, the number of free variables equals n - rank(A) >= n - m > 0, so there is at least one free variable (as m and n are integers), and hence there are infinitely many solutions.
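Theorem 1.26 can be illustrated with SymPy's nullspace(), which returns a basis for the solution set of [A | 0]. With more variables than equations there is always at least one basis vector, hence infinitely many solutions. (SymPy is assumed available, and the small matrix below is our own illustration, not one taken from the notes.)

    from sympy import Matrix

    # A homogeneous system with m = 2 equations in n = 3 variables (m < n):
    #   x + y + z = 0
    #   x - y + 2z = 0
    A = Matrix([[1,  1, 1],
                [1, -1, 2]])

    basis = A.nullspace()
    print(len(basis))   # 1: one free variable, so infinitely many solutions
    print(basis[0])     # a direction vector spanning the solution set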

Linear Systems over Z_p. So far we have only considered linear systems with real coefficients, which lead to vector solutions in R^n. Returning to the idea of a code vector, we may ask what the solutions of linear equations with coefficients in Z_p look like. When p is a prime number, Z_p behaves like R in many ways: one can add, subtract, multiply and divide (by nonzero elements) in a way that is reversible. This is the important point, because it allows us to solve systems of linear equations when the variables and coefficients belong to Z_p for some prime p; this is called solving the system over Z_p.

Consider the equation x_1 + x_2 + x_3 = 1 with coefficients in Z_2. It has exactly four solutions, due to the finite nature of the field Z_2:

    [ x_1 ]   [ 1 ]  [ 0 ]  [ 0 ]  [ 1 ]
    [ x_2 ] = [ 0 ], [ 1 ], [ 0 ], [ 1 ]
    [ x_3 ]   [ 0 ]  [ 0 ]  [ 1 ]  [ 1 ]

Over Z_3 we have quite a few more, which we can count by considering the ways a triple of values in Z_3 can sum to 1 modulo 3:

    3 arrangements of:  1 + 0 + 0 = 1 mod 3
    3 arrangements of:  2 + 1 + 1 = 1 mod 3
    3 arrangements of:  2 + 2 + 0 = 1 mod 3

Thus there are nine possible solutions over Z_3. We will not need to do such combinatorial guesswork: just as in R^n, Gauss-Jordan elimination works here as well.

Example 1.27. Q: Solve the following system of linear equations over Z_3:

    x_1 + 2x_2 + x_3 = 0,    x_1 + x_3 = 2,    x_2 + 2x_3 = 1.

A: In Z_3 we have -1 = 2 mod 3, so subtraction will not be needed; similarly, division is unnecessary, as 2 * 2 = 1 mod 3. The augmented matrix is

    [ 1  2  1 | 0 ]
    [ 1  0  1 | 2 ]
    [ 0  1  2 | 1 ]

To begin the row reduction, apply R_2 + 2R_1:

    [ 1  2  1 | 0 ]
    [ 0  1  0 | 2 ]
    [ 0  1  2 | 1 ]

Following this with R_1 + R_2 and R_3 + 2R_2, we find

    [ 1  0  1 | 2 ]
    [ 0  1  0 | 2 ]
    [ 0  0  2 | 2 ]

This matrix is now in row echelon form. To simplify it to reduced row echelon form, apply the row operations R_1 + R_3 and 2R_3:

    [ 1  0  0 | 1 ]
    [ 0  1  0 | 2 ]
    [ 0  0  1 | 1 ]

so the solution is [1, 2, 1].
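Because Z_3 is finite, claims like these can be verified by brute force. The Python sketch below counts the solutions of x_1 + x_2 + x_3 = 1 over Z_3 and checks the answer to Example 1.27 by trying every triple.

    from itertools import product

    # Number of solutions of x1 + x2 + x3 = 1 over Z_3.
    count = sum(1 for t in product(range(3), repeat=3) if sum(t) % 3 == 1)
    print(count)        # 9

    # Brute-force check of Example 1.27 over Z_3.
    solutions = [t for t in product(range(3), repeat=3)
                 if (t[0] + 2*t[1] + t[2]) % 3 == 0
                 and (t[0] + t[2]) % 3 == 2
                 and (t[1] + 2*t[2]) % 3 == 1]
    print(solutions)    # [(1, 2, 1)]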

Example 1.28. Q: Solve the following system of linear equations over Z_2:

    x_1 + x_2 + x_3 + x_4 = 1
    x_1 + x_2 = 0
    x_2 + x_3 = 0
    x_3 + x_4 = 1
    x_1 + x_4 = 1

A: To start we write the augmented matrix

    [ 1  1  1  1 | 1 ]
    [ 1  1  0  0 | 0 ]
    [ 0  1  1  0 | 0 ]
    [ 0  0  1  1 | 1 ]
    [ 1  0  0  1 | 1 ]

As the leading entry of the first column is a 1, we may use it as a pivot. To remove the remaining nonzero entries in this column we apply the row operations R_2 + R_1 and R_5 + R_1, as these are the only other rows with a nonzero entry in the first column (recall that over Z_2 addition and subtraction are the same operation):

    [ 1  1  1  1 | 1 ]
    [ 0  0  1  1 | 1 ]
    [ 0  1  1  0 | 0 ]
    [ 0  0  1  1 | 1 ]
    [ 0  1  1  0 | 0 ]

As the third row has a nonzero entry in the second column, we swap it with the second row (R_2 <-> R_3) and use it as the next pivot. Applying R_1 + R_2 and R_5 + R_2 eliminates the remaining nonzero entries in this column:

    [ 1  0  0  1 | 1 ]
    [ 0  1  1  0 | 0 ]
    [ 0  0  1  1 | 1 ]
    [ 0  0  1  1 | 1 ]
    [ 0  0  0  0 | 0 ]

The third pivot appears in the third row; to put the matrix into reduced row echelon form, apply the row operations R_2 + R_3 and R_4 + R_3:

    [ 1  0  0  1 | 1 ]
    [ 0  1  0  1 | 1 ]
    [ 0  0  1  1 | 1 ]
    [ 0  0  0  0 | 0 ]
    [ 0  0  0  0 | 0 ]

Writing down the associated linear system, this becomes x_1 + x_4 = 1, x_2 + x_4 = 1 and x_3 + x_4 = 1; the leading variables are x_1, x_2 and x_3, and x_4 is a free variable. In vector form the solutions are

    [ x_1 ]   [ 1 ]       [ 1 ]
    [ x_2 ] = [ 1 ] + x_4 [ 1 ]
    [ x_3 ]   [ 1 ]       [ 1 ]
    [ x_4 ]   [ 0 ]       [ 1 ]

and since x_4 = 0 or 1, we see that the only solutions are

    S = { [1, 1, 1, 0], [0, 0, 0, 1] }.

References

[1] D. Poole, Linear Algebra: A Modern Introduction, 3rd edition, Brooks/Cole.