1. Systems of linear equations

We are interested in the solutions to systems of linear equations. A linear equation is of the form

    3x - 5y + 2z + w = 3.

The key thing is that we don't multiply the variables together, nor raise them to powers, nor take logs or introduce sines and cosines. A system of linear equations is of the form

    3x - 5y + 2z = 3
    2x + y + 5z = 4.

This is a system of two linear equations in three variables. (The very first equation above is, by itself, a system consisting of one linear equation in four variables.) In this class we will be more interested in the nature of the solutions than in the exact solutions themselves. So let us try to form a picture of what to expect.

Suppose we start with an easy case, a system of two linear equations in two variables:

    x - 2y = -1
    2x + y = 3.

It is easy to check that this has the unique solution x = y = 1. To see that this is not the only qualitative behaviour, suppose we consider the system

    2x + y = 3
    2x + y = 3.

Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the single equation 2x + y = 3. In other words the solution set is infinite: any point of the line given by the equation 2x + y = 3 will do. The problem is that the second equation does not impose independent conditions. Note that we can disguise (to a certain extent) this problem by writing down the system

    2x + y = 3
    4x + 2y = 6.

All we did was take the second equation and multiply it by 2. But we are not fooled: we still realise that the second equation imposes no more conditions than the first. It is not independent of the first equation.
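
If you want to check such a computation mechanically, a couple of lines of Python (using numpy) will do it. This is only a sanity check of the arithmetic, not a substitute for the method developed below:

    import numpy as np

    # The system  x - 2y = -1,  2x + y = 3,  in matrix form.
    A = np.array([[1.0, -2.0],
                  [2.0,  1.0]])
    b = np.array([-1.0, 3.0])

    print(np.linalg.solve(A, b))  # [1. 1.], the unique solution x = y = 1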

We can tweak this example to get a system

    2x + y = 3
    2x + y = 2.

Now this system has no solutions whatsoever: if 2x + y = 3 then 2x + y ≠ 2. The key point is that these three examples exhaust the qualitative behaviour:

(1) One solution.
(2) Infinitely many solutions.
(3) No solutions.

In other words, if there are at least two solutions then there are infinitely many solutions; in particular, it turns out that the answer is never that there are exactly two solutions. We say that a linear system is inconsistent if there are no solutions; otherwise we say that the system is consistent.

It is instructive to think how this comes out geometrically. Suppose there are two variables x and y. Then we can represent the solutions to a system of linear equations in x and y as a set of points in the plane. Points in the plane are represented by pairs of numbers (x, y), and we will refer to these points as vectors. The set of all such points is called $\mathbb{R}^2$,

$$\mathbb{R}^2 = \{\, (x, y) \mid x, y \in \mathbb{R} \,\}.$$

Okay, so what are the possibilities for the solution set? Well, suppose we have a system of one equation. The solution set is a line. Suppose we have a system of two equations. If the equations impose independent conditions, then we get a single point, in fact the intersection of the two lines represented by the equations. Are there other possible solution sets? Yes, we already saw we might get no solutions: either take a system of two equations with no solutions (in fact parallel lines), or a system of three equations where we get three different points when we intersect each pair of lines. If we have three equations can we still get a solution? Yes, if all three lines are concurrent (pass through the same point). It is interesting to see how this works algebraically. Suppose we consider the system

    2x + y = 3
    x - 2y = -1
    3x - y = 2.

Then the vector (x, y) = (1, 1) still satisfies all three equations. Note that the third equation is nothing more than the sum of the first two equations. As soon as one realises this fact, it is clear that the third equation fails to impose independent conditions; that is, the third equation depends on the first two.
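
These three behaviours can be detected mechanically by comparing the rank of the coefficient matrix with the rank of the augmented matrix, a criterion we will not justify here. A short Python sketch (the helper name classify is ours, purely for illustration):

    import numpy as np

    def classify(A, b):
        """Classify a linear system by comparing ranks."""
        A = np.asarray(A, dtype=float)
        augmented = np.column_stack([A, b])
        rank_A = np.linalg.matrix_rank(A)
        rank_aug = np.linalg.matrix_rank(augmented)
        if rank_A < rank_aug:
            return "no solutions"           # inconsistent
        if rank_A == A.shape[1]:
            return "one solution"           # independent conditions
        return "infinitely many solutions"  # some equations are redundant

    print(classify([[1, -2], [2, 1]], [-1, 3]))  # one solution
    print(classify([[2, 1], [4, 2]], [3, 6]))    # infinitely many solutions
    print(classify([[2, 1], [2, 1]], [3, 2]))    # no solutions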

Note one further possibility. The following represents a system of three linear equations in two variables, with a line of solutions:

    x + y = 1
    2x + 2y = 2
    3x + 3y = 3.

These equations are very far from being independent. In fact, no matter how many equations there are, it is still possible (but more and more unlikely) that there are solutions.

Finally, let me point out some unusual possibilities. If we have a system of no equations, then the solution set is the whole of $\mathbb{R}^2$. In fact

    0x + 0y = 0

represents a single equation whose solution set is $\mathbb{R}^2$. By the same token,

    0x + 0y = 1

is a linear equation with no solutions. In summary, the solution set to a system of linear equations in two variables exhibits one of four different qualitative possibilities:

(1) No solutions.
(2) One solution.
(3) A line of solutions.
(4) The whole plane $\mathbb{R}^2$.

(Strictly speaking, so far we have only shown that these possibilities occur; we have not shown that these are the only possibilities.) In other words, the case when there are infinitely many solutions can be further refined into the last two cases.
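
For instance, feeding the line-of-solutions example above to a computer algebra system returns the whole line, parametrised by y. A sketch using sympy:

    from sympy import symbols, linsolve

    x, y = symbols('x y')
    # x + y = 1, 2x + 2y = 2, 3x + 3y = 3, written as expressions equal to zero.
    eqs = [x + y - 1, 2*x + 2*y - 2, 3*x + 3*y - 3]
    print(linsolve(eqs, x, y))  # {(1 - y, y)}: a line of solutions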

Now let us delve deeper into characterising the possible behaviour of the solutions. Let us consider a system of three linear equations in three unknowns. Can we list all geometric possibilities? One possibility is that we get a point. For example

    x = 0
    y = 0
    z = 0.

The first equation cuts down the space of solutions to a plane, the plane $\{\, (0, y, z) \mid y, z \in \mathbb{R} \,\}$. The second equation represents another plane, which intersects the first plane in a line, $\{\, (0, 0, z) \mid z \in \mathbb{R} \,\}$. The last equation defines a plane which intersects this line in the origin, (0, 0, 0). Or the first two equations could give a line and the third equation might also contain the same line:

    x = 0
    y = 0
    x + y = 0.

The last equation is the sum of the first two equations. But we could just as well consider

    x = 0
    y = 0
    3x - 2y = 0.

Again the problem is that the third equation is not independent of the first two equations. There are many other possibilities; for instance, two planes could be parallel, in which case there are no solutions.

It is time for some general notation and definitions. A vector $(r_1, r_2, \dots, r_n) \in \mathbb{R}^n$ is a solution to the linear equation

$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$$

if

$$a_1 r_1 + a_2 r_2 + \cdots + a_n r_n = b.$$

A system of m linear equations in n variables has the form

$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\ \ \vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m.
\end{aligned}$$

A solution to a system of linear equations is a vector $(r_1, r_2, \dots, r_n) \in \mathbb{R}^n$ which satisfies all m equations simultaneously. The solution set is the set of all solutions.

Okay, let us now turn to the boring bit. Given a system of linear equations, how does one solve the system?
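
The definition translates directly into code: substitute the candidate vector into each equation and check that every one of them holds. A minimal Python sketch (the helper name is_solution is ours, purely for illustration):

    def is_solution(rows, rhs, candidate):
        """Check a_i1*r_1 + ... + a_in*r_n == b_i for every equation i."""
        return all(
            sum(a * r for a, r in zip(row, candidate)) == b
            for row, b in zip(rows, rhs)
        )

    # The system x - 2y = -1, 2x + y = 3 from earlier:
    print(is_solution([[1, -2], [2, 1]], [-1, 3], (1, 1)))  # True
    print(is_solution([[1, -2], [2, 1]], [-1, 3], (0, 0)))  # False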

Again, let us start with a simple example,

    x - 2y = -1
    2x + y = 3.

There are many ways to solve this system, but let me focus on a particular method which generalises well to larger systems. Use the first equation to eliminate x from the second equation. To do this, take the first equation, multiply it by -2 and add it to the second equation (we prefer to think of multiplying by a negative number and adding, rather than subtracting). We get a new system of equations,

    x - 2y = -1
    5y = 5.

Since the second equation does not involve x at all, it is straightforward to use the second equation to get y = 1. Now use that value and substitute it into the first equation to determine x: x - 2 = -1, so that x = 1. This method is called Gaussian elimination. The last step, where we go from the bottom to the top and recursively solve for the variables, is called backwards substitution.

Let us do the same thing with a system of three equations in three variables:

    x - 2y - z = 4
    2x - 3y + z = 10
    -x + 5y + 11z = 3.

We start with the first equation and use it to eliminate the appearance of x from the second equation. To do this, take the first equation, multiply it by -2 and add it to the second equation:

    x - 2y - z = 4
    y + 3z = 2
    -x + 5y + 11z = 3.

Now let us eliminate the appearance of x from the third equation. To do this, add the first equation to the third equation:

    x - 2y - z = 4
    y + 3z = 2
    3y + 10z = 7.
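
Each elimination step is mechanical: choose the multiplier that cancels the leading coefficient, then add. Here is the very first step of the two-variable example in Python (storing each equation as [coefficients..., right-hand side] is our own convention):

    # x - 2y = -1 and 2x + y = 3, stored as [coeff_x, coeff_y, rhs].
    eq1 = [1, -2, -1]
    eq2 = [2,  1,  3]

    multiplier = -eq2[0] / eq1[0]  # -2, chosen to cancel the x term
    eq2 = [e2 + multiplier * e1 for e1, e2 in zip(eq1, eq2)]
    print(eq2)  # [0.0, 5.0, 5.0], i.e. 5y = 5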

The next step is to eliminate y from the third equation by using the second equation. To do this, take the second equation, multiply it by -3 and add it to the third equation:

    x - 2y - z = 4
    y + 3z = 2
    z = 1.

This completes the elimination step. Notice the characteristic upside-down staircase shape of the equations. Now we do backwards substitution. Since the last equation does not involve either x or y, we read off the value for z from the last equation, to get z = 1. Now take this value for z and use the second equation, which does not involve x, to solve for y: y + 3 = 2, which gives y = -1. Finally, substitute the values for y and z into the first equation to determine x: x + 2 - 1 = 4, to get x = 3. The solution is

    (x, y, z) = (3, -1, 1),

a vector in $\mathbb{R}^3$. It is easy to check that this is indeed a solution. Strictly speaking, we should also check that there are no other solutions; we will come to this point later.

Let us introduce some notation, which for the time being we will think of as just being a convenience. Instead of carrying around the variables x, y, z, etc., let us just put the information into a table. We represent the system

    x - 2y - z = 4
    2x - 3y + z = 10
    -x + 5y + 11z = 3

by the augmented matrix

$$B = \begin{pmatrix} 1 & -2 & -1 & 4 \\ 2 & -3 & 1 & 10 \\ -1 & 5 & 11 & 3 \end{pmatrix}.$$

The coefficient matrix is

$$A = \begin{pmatrix} 1 & -2 & -1 \\ 2 & -3 & 1 \\ -1 & 5 & 11 \end{pmatrix}.$$
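
As a check on the arithmetic, numpy solves the same system and confirms the solution found by hand (a sanity check only):

    import numpy as np

    A = np.array([[ 1, -2, -1],
                  [ 2, -3,  1],
                  [-1,  5, 11]], dtype=float)
    b = np.array([4, 10, 3], dtype=float)

    print(np.linalg.solve(A, b))           # [ 3. -1.  1.]
    print(np.allclose(A @ [3, -1, 1], b))  # True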

A is a matrix with three rows and three columns, denoted 3 × 3. Note that the rows represent equations and the columns represent variables. Let

$$b = \begin{pmatrix} 4 \\ 10 \\ 3 \end{pmatrix}.$$

b is also known as a column vector; clearly b, as a matrix, is 3 × 1. The augmented matrix is formed from the two matrices A and b,

$$(A \mid b).$$

If we let

$$v = \begin{pmatrix} x \\ y \\ z \end{pmatrix},$$

then we may also write down the system of equations very compactly as Av = b.

It is possible to understand the previous method of solving linear equations in terms of the augmented matrix only. We start with

$$\begin{pmatrix} 1 & -2 & -1 & 4 \\ 2 & -3 & 1 & 10 \\ -1 & 5 & 11 & 3 \end{pmatrix}.$$

The first step is to take the first row, multiply it by -2 and add it to the second row,

$$\begin{pmatrix} 1 & -2 & -1 & 4 \\ 0 & 1 & 3 & 2 \\ -1 & 5 & 11 & 3 \end{pmatrix}.$$

This puts a zero in the second row, first column. The next step is to take the first row, multiply it by 1 and add it to the third row,

$$\begin{pmatrix} 1 & -2 & -1 & 4 \\ 0 & 1 & 3 & 2 \\ 0 & 3 & 10 & 7 \end{pmatrix}.$$

This puts a zero in the third row, first column. The next step is to take the second row, multiply it by -3 and add it to the third row,

$$\begin{pmatrix} 1 & -2 & -1 & 4 \\ 0 & 1 & 3 & 2 \\ 0 & 0 & 1 & 1 \end{pmatrix}.$$

This puts a zero in the third row, second column. At this stage, we think of this augmented matrix as representing a system of equations and use the old method of back substitution to solve this system.
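
The whole procedure, forward elimination on the augmented matrix followed by backwards substitution, is easy to code. A bare-bones Python sketch (our own illustration; it assumes every pivot it meets is nonzero, so it never swaps rows):

    import numpy as np

    def gaussian_elimination(A, b):
        """Forward elimination on (A | b), then backwards substitution."""
        M = np.column_stack([A, b]).astype(float)  # the augmented matrix
        n = len(b)
        for i in range(n):                   # eliminate below the pivot M[i, i]
            for j in range(i + 1, n):
                M[j] += (-M[j, i] / M[i, i]) * M[i]
        x = np.zeros(n)
        for i in reversed(range(n)):         # backwards substitution
            x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
        return x

    A = [[1, -2, -1], [2, -3, 1], [-1, 5, 11]]
    print(gaussian_elimination(A, [4, 10, 3]))  # [ 3. -1.  1.]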

Let us look at a slightly more complicated example. Suppose that we start with a system of four equations in four unknowns,

    x + 3y - 2z - w = -1
    -6x - 15y + 9z + 9w = 9
    -x - z + 4w = 5
    4x + 10y - 5z - 2w = -5.

We first replace this by the augmented matrix,

$$\begin{pmatrix} 1 & 3 & -2 & -1 & -1 \\ -6 & -15 & 9 & 9 & 9 \\ -1 & 0 & -1 & 4 & 5 \\ 4 & 10 & -5 & -2 & -5 \end{pmatrix}.$$

The first step is to use the 1 in the first row, first column to eliminate the -6, -1 and 4 from the first column. To do this we add 6, 1 and -4 times the first row to the second, third and fourth rows, to get

$$\begin{pmatrix} 1 & 3 & -2 & -1 & -1 \\ 0 & 3 & -3 & 3 & 3 \\ 0 & 3 & -3 & 3 & 4 \\ 0 & -2 & 3 & 2 & -1 \end{pmatrix}.$$

The next step is to multiply the second row by 1/3 to get a one in the second row, second column,

$$\begin{pmatrix} 1 & 3 & -2 & -1 & -1 \\ 0 & 1 & -1 & 1 & 1 \\ 0 & 3 & -3 & 3 & 4 \\ 0 & -2 & 3 & 2 & -1 \end{pmatrix}.$$

Now we eliminate the 3 and the -2 in the second column. To do this we add -3 and 2 times the second row to the third and fourth rows,

$$\begin{pmatrix} 1 & 3 & -2 & -1 & -1 \\ 0 & 1 & -1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 4 & 1 \end{pmatrix}.$$

The next step is to get a one in the third row, third column. We cannot do this by rescaling the third row, but we can do it by swapping the third and fourth rows,

$$\begin{pmatrix} 1 & 3 & -2 & -1 & -1 \\ 0 & 1 & -1 & 1 & 1 \\ 0 & 0 & 1 & 4 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
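
The zero pivot here is exactly the situation the bare-bones sketch above cannot handle; library routines swap rows automatically. For instance, sympy's rref method (reduced row echelon form, which goes a little further than the staircase form above) reduces the same augmented matrix in one call:

    from sympy import Matrix

    M = Matrix([
        [ 1,   3, -2, -1, -1],
        [-6, -15,  9,  9,  9],
        [-1,   0, -1,  4,  5],
        [ 4,  10, -5, -2, -5],
    ])
    reduced, pivots = M.rref()
    print(reduced)  # last row (0, 0, 0, 0, 1), i.e. the equation 0 = 1
    print(pivots)   # (0, 1, 2, 4): the last pivot lies in the constants column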

Consider what happens when we try to solve the resulting linear equations by back substitution. The last equation reads

    0x + 0y + 0z + 0w = 1.

There are no values of x, y, z and w which work. The original system has no solutions.

Now suppose we start with the system

    x + 3y - 2z - w = -1
    -6x - 15y + 9z + 9w = 9
    -x - z + 4w = 4
    4x + 10y - 5z - 2w = -5.

The only thing we have changed is the 5, to 4 = 5 - 1. If we follow the same steps as before, we get down to the same matrix, except that the last entry is 0 = 1 - 1 (remember that at some point we swapped two rows),

$$\begin{pmatrix} 1 & 3 & -2 & -1 & -1 \\ 0 & 1 & -1 & 1 & 1 \\ 0 & 0 & 1 & 4 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

The last equation,

    0x + 0y + 0z + 0w = 0,

places no restriction on x, y, z and w. The previous equation reads

    z + 4w = 1.

Using this equation, we can solve for z in terms of w, to get z = 1 - 4w. We can use this value for z in the second equation to determine y in terms of w,

    y - (1 - 4w) + w = 1,

to get y = 2 - 5w. Finally, we can use these values for y and z in the first equation to solve for x,

    x + 3(2 - 5w) - 2(1 - 4w) - w = -1,

to get x = -5 + 8w. We get the family of solutions

    (x, y, z, w) = (-5 + 8w, 2 - 5w, 1 - 4w, w).

Note that no matter the value of w, we get a solution of the original system of linear equations.
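
A computer algebra system finds the same one-parameter family. A sketch using sympy's linsolve, with each equation written as an expression equal to zero:

    from sympy import symbols, linsolve

    x, y, z, w = symbols('x y z w')
    eqs = [
        x + 3*y - 2*z - w + 1,        # x + 3y - 2z - w = -1
        -6*x - 15*y + 9*z + 9*w - 9,  # -6x - 15y + 9z + 9w = 9
        -x - z + 4*w - 4,             # -x - z + 4w = 4
        4*x + 10*y - 5*z - 2*w + 5,   # 4x + 10y - 5z - 2w = -5
    ]
    print(linsolve(eqs, x, y, z, w))
    # {(8*w - 5, 2 - 5*w, 1 - 4*w, w)}, the family found by hand above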

For example, if we pick w = 0 we get the solution

    (x, y, z, w) = (-5, 2, 1, 0),

but if we choose the value w = 1 we get the solution

    (x, y, z, w) = (3, -3, -3, 1).

In particular this system has infinitely many solutions. As before, the reason for this is that the original system of equations was not independent. In fact even the first three equations are not independent; we can think of this as saying that the first three rows are not independent. The third row minus four times the first row is the same as the second row plus the first row (we can see this by following the steps of the Gaussian elimination). This is the same as saying that the third row is equal to the second row plus five times the first row. It is then automatic that any solution to the first and second equations is a solution to the third equation.

Note also that we can represent the solutions in a slightly different way,

    (x, y, z, w) = (-5, 2, 1, 0) + w(8, -5, -4, 1).
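
This way of writing the solutions, a particular solution plus a multiple of a solution of the system with all right-hand sides set to zero, is easy to verify numerically:

    import numpy as np

    A = np.array([[ 1,   3, -2, -1],
                  [-6, -15,  9,  9],
                  [-1,   0, -1,  4],
                  [ 4,  10, -5, -2]], dtype=float)
    b = np.array([-1, 9, 4, -5], dtype=float)

    particular = np.array([-5, 2, 1, 0], dtype=float)
    direction = np.array([8, -5, -4, 1], dtype=float)

    print(np.allclose(A @ particular, b))  # True: a particular solution
    print(np.allclose(A @ direction, 0))   # True: A sends the direction to zero
    for w in (0.0, 1.0, 2.5):              # every point of the line is a solution
        print(np.allclose(A @ (particular + w * direction), b))  # True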