A Primer on Solving Systems of Linear Equations

Prof. Steven R. Hall
Department of Aeronautics and Astronautics
Massachusetts Institute of Technology

In Signals and Systems, as well as other subjects in Unified, it will often be necessary to solve systems of linear equations, such as

\[
\begin{aligned}
x + 2y + z &= 1 \\
2x + 5y - 2z &= 2 \\
x + y - z &= 3
\end{aligned}
\tag{1}
\]

There are at least three ways to solve this set of equations: elimination of variables, Gaussian reduction, and Cramer's rule. These three approaches are discussed below.

1  Elimination of Variables

Elimination of variables is the method you learned in high school. In the example, you first eliminate x from the second two equations, by subtracting twice the first equation from the second, and subtracting the first equation from the third. The three equations then become

\[
\begin{aligned}
x + 2y + z &= 1 \\
y - 4z &= 0 \\
-y - 2z &= 2
\end{aligned}
\tag{2}
\]

Next, y is eliminated from the third equation, by adding the (new) second equation to the third, yielding

\[
\begin{aligned}
x + 2y + z &= 1 \\
y - 4z &= 0 \\
-6z &= 2
\end{aligned}
\tag{3}
\]

From the third equation, we conclude that

\[
z = -\tfrac{1}{3}. \tag{4}
\]

From the second equation, we conclude that

\[
y = 4z = -\tfrac{4}{3}. \tag{5}
\]
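The hand elimination above can be checked numerically. This is a minimal sketch using NumPy's linear solver (not part of the primer itself); the matrix and right-hand side are simply the coefficients of the example system:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system
A = np.array([[1.0, 2.0,  1.0],
              [2.0, 5.0, -2.0],
              [1.0, 1.0, -1.0]])
b = np.array([1.0, 2.0, 3.0])

solution = np.linalg.solve(A, b)
print(solution)  # approximately [4, -4/3, -1/3]
```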

Finally, from the first equation, we find that

\[
x = 1 - 2y - z = 4. \tag{6}
\]

2  Gaussian Reduction

A more organized way of solving the system of equations is Gaussian reduction. First, the augmented matrix of the system is formed:

\[
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 1 \\
2 & 5 & -2 & 2 \\
1 & 1 & -1 & 3
\end{array}\right]
\tag{7}
\]

Each row of the augmented matrix corresponds to one equation in the system of equations. The first three elements of each row are the coefficients of x, y, and z in the equation. The fourth element in each row is the right-hand side of the corresponding equation. The goal of the reduction process is to apply row operations to the matrix to get zeros below the main diagonal of the matrix. To do this for the example, subtract twice the first row from the second, and the first row from the third, to obtain

\[
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 1 \\
0 & 1 & -4 & 0 \\
0 & -1 & -2 & 2
\end{array}\right]
\tag{8}
\]

This produces zeros in the first column below the diagonal. Then add the second row to the third to obtain

\[
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 1 \\
0 & 1 & -4 & 0 \\
0 & 0 & -6 & 2
\end{array}\right]
\tag{9}
\]

Normally, the process is stopped at this point, and back substitution is used to solve for the variables. That is, the array above is really the same as the last reduction we did in elimination of variables, so we can proceed as before. Alternatively, we could continue the process further, by dividing the third row by \(-6\), and then eliminating terms above the diagonal as well, as follows:

\[
\left[\begin{array}{ccc|c}
1 & 2 & 1 & 1 \\
0 & 1 & -4 & 0 \\
0 & 0 & 1 & -\tfrac{1}{3}
\end{array}\right]
\tag{10}
\]

Subtract 1 times the third row from the first, and add 4 times the third row to the second:

\[
\left[\begin{array}{ccc|c}
1 & 2 & 0 & \tfrac{4}{3} \\
0 & 1 & 0 & -\tfrac{4}{3} \\
0 & 0 & 1 & -\tfrac{1}{3}
\end{array}\right]
\tag{11}
\]
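The forward-elimination and back-substitution steps can be sketched in code. Here `gaussian_solve` is a hypothetical helper (not from the primer): it forms the augmented matrix, zeros out each column below its pivot, then substitutes from the bottom row up. It assumes nonzero pivots, so it omits the row swaps a robust implementation would need:

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by forward elimination and back substitution.
    Sketch only: assumes nonzero pivots (no row swaps)."""
    n = len(b)
    # Form the augmented matrix, one equation per row
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    for k in range(n):                      # zero out column k below the pivot
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution, bottom row up
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:n]) / M[i, i]
    return x

print(gaussian_solve([[1, 2, 1], [2, 5, -2], [1, 1, -1]], [1, 2, 3]))
```

Run on the example system, this reproduces the reduced rows above and returns x = 4, y = −4/3, z = −1/3.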

Finally, subtract 2 times the second row from the first:

\[
\left[\begin{array}{ccc|c}
1 & 0 & 0 & 4 \\
0 & 1 & 0 & -\tfrac{4}{3} \\
0 & 0 & 1 & -\tfrac{1}{3}
\end{array}\right]
\tag{12}
\]

from which we conclude that

\[
x = 4, \qquad y = -\tfrac{4}{3}, \qquad z = -\tfrac{1}{3},
\]

as before. There is really no need to repeatedly write down the augmented matrix. Instead, it is convenient to write down the matrix with space between the rows, and update a row at a time, crossing out the old row. Note that the two methods, elimination of variables and Gaussian reduction, are really the same approach. The only real difference is that in Gaussian reduction, we don't bother to write down x, y, and z. Instead, the variables are associated with columns of the augmented matrix.

3  Cramer's Rule

Cramer's rule is a method that is useful primarily for low-order systems, with two or three unknowns. Cramer's rule states that each unknown can be expressed as the ratio of two matrix determinants. For example, x (the first variable) is given by

\[
x = \frac{\begin{vmatrix} 1 & 2 & 1 \\ 2 & 5 & -2 \\ 3 & 1 & -1 \end{vmatrix}}
         {\begin{vmatrix} 1 & 2 & 1 \\ 2 & 5 & -2 \\ 1 & 1 & -1 \end{vmatrix}}
  = \frac{-24}{-6} = 4. \tag{13}
\]

The denominator is the determinant of the matrix of coefficients of the equations, i.e.,

\[
\begin{vmatrix} 1 & 2 & 1 \\ 2 & 5 & -2 \\ 1 & 1 & -1 \end{vmatrix} = -6.
\]

The numerator is the determinant of the same matrix, except with the first column replaced by the three numbers on the right-hand side of the equations. Likewise, y is found by a similar ratio, with the top matrix found by replacing the second column of the denominator matrix by the right-hand side column:

\[
y = \frac{\begin{vmatrix} 1 & 1 & 1 \\ 2 & 2 & -2 \\ 1 & 3 & -1 \end{vmatrix}}{-6}
  = \frac{8}{-6} = -\tfrac{4}{3}. \tag{14}
\]

Finally,

\[
z = \frac{\begin{vmatrix} 1 & 2 & 1 \\ 2 & 5 & 2 \\ 1 & 1 & 3 \end{vmatrix}}{-6}
  = \frac{2}{-6} = -\tfrac{1}{3}. \tag{15}
\]

Cramer's rule is very handy for second- and third-order systems. However, it is much less useful for larger systems, because the determinant calculation becomes prohibitive if done in the conventional way. (See Section 4 below.) Determinants can also be found by Gaussian reduction; however, the reduction process does more than determine the determinant, it also solves the equations! So just use Cramer's rule for small "toy" problems.

4  Determinants

Finally, you need to know how to take determinants of matrices. For second-order matrices,

\[
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc. \tag{16}
\]

For third-order matrices,

\[
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}
 = aei + bfg + cdh - afh - bdi - ceg. \tag{17}
\]

Both of these have an obvious pattern: each of the terms is the product of diagonals, with a plus sign for one direction of diagonal, and a minus sign for the other direction. Note that for the third-order case, the diagonals wrap around.

For higher-order systems, there are two approaches to find the determinant. One approach is to do Gaussian reduction to obtain a triangular matrix; the determinant is then the product of the terms on the main diagonal. The other approach is to expand the determinant along, say, the first row of the matrix. For each element \(A_j\) along the first row of the matrix, find the matrix formed by deleting the first row and \(j\)th column. Call the determinant of that matrix \(d_j\). Then the determinant is

\[
A_1 d_1 - A_2 d_2 + \cdots + (-1)^{n+1} A_n d_n. \tag{18}
\]

For example,

\[
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}
 = a \begin{vmatrix} e & f \\ h & i \end{vmatrix}
 - b \begin{vmatrix} d & f \\ g & i \end{vmatrix}
 + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}. \tag{19}
\]

Actually, we can expand along any row or column as above. When expanding along an even-numbered row or column, the signs of the terms are reversed; in general, the term for the element in row \(i\), column \(j\) carries the sign \((-1)^{i+j}\). The only problem with this approach is that the amount of calculation required to compute the determinant grows like \(n!\), but only like \(n^3\) using Gaussian reduction. So it usually isn't used for systems much higher than third order.
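The cofactor expansion can be written as a short recursion. Here `det_cofactor` is an illustrative sketch (the name is made up); the recursion also makes the factorial cost visible, since each call spawns n smaller determinants:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row.
    Cost grows like n!; use elimination (~n^3) for real work."""
    A = np.asarray(A, float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete the first row and the j-th column
        minor = np.delete(A[1:], j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2, 1], [2, 5, -2], [1, 1, -1]]))  # prints -6.0
```

Applied to the coefficient matrix of the example system, it recovers the denominator determinant −6 used in Cramer's rule.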