Cramer's Rule and Gauss Elimination


September 28, 2004

Outline

Part I: Review of Previous Lecture
Part II: Cramer's Rule and Gauss Elimination

Part I: Review of Previous Lecture

Review of Previous Lecture: graphical interpretation, solvable and unsolvable problems, linear dependence and independence, and ill-conditioning.

Part II: Cramer's Rule and Gauss Elimination

Cramer's Rule (1750): Introduction. A linear system of equations can be solved by using Cramer's rule, which for a system of 2 equations
\[
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix}
=
\begin{Bmatrix} b_1 \\ b_2 \end{Bmatrix}
\]
yields x_1 = |A_1|/|A| and x_2 = |A_2|/|A|, where
\[
A_1 = \begin{bmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{bmatrix}, \qquad
A_2 = \begin{bmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{bmatrix}
\]

Cramer's Rule: Details. For a system of n equations, Cramer's rule requires that you calculate n + 1 determinants of n × n matrices. In the general case, for a system of equations [A]{x} = {b}, the matrix [A_i] is obtained by replacing the ith column of the original [A] matrix with the contents of the {b} vector. Each unknown variable x_i is found by dividing the determinant |A_i| by the determinant of the original coefficient matrix, |A|.
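
As a concrete illustration of this recipe, here is a minimal MATLAB sketch (the function name cramer is only illustrative, and it assumes det(A) is nonzero):

function x = cramer(A, b)
    % Solve A*x = b by Cramer's rule (assumes A is square and det(A) ~= 0)
    n = length(b);
    x = zeros(n, 1);
    dA = det(A);                % determinant of the coefficient matrix
    for i = 1:n
        Ai = A;
        Ai(:, i) = b;           % replace the ith column of A with b
        x(i) = det(Ai) / dA;    % x_i = |A_i| / |A|
    end
end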

Cramer's Rule: Example. Solve the following system of equations using Cramer's rule:

x_1 - x_2 + x_3 = 3
2x_1 + x_2 - x_3 = 0
3x_1 + 2x_2 + 2x_3 = 15

Convert the system of equations into matrix form:
\[
\begin{bmatrix} 1 & -1 & 1 \\ 2 & 1 & -1 \\ 3 & 2 & 2 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
=
\begin{Bmatrix} 3 \\ 0 \\ 15 \end{Bmatrix}
\]

Cramer's Rule: Example (continued). With
\[
[A] = \begin{bmatrix} 1 & -1 & 1 \\ 2 & 1 & -1 \\ 3 & 2 & 2 \end{bmatrix}, \qquad
\{x\} = \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}, \qquad
\{b\} = \begin{Bmatrix} 3 \\ 0 \\ 15 \end{Bmatrix},
\]
define matrices [A_1], [A_2], and [A_3] as
\[
[A_1] = \begin{bmatrix} 3 & -1 & 1 \\ 0 & 1 & -1 \\ 15 & 2 & 2 \end{bmatrix}, \qquad
[A_2] = \begin{bmatrix} 1 & 3 & 1 \\ 2 & 0 & -1 \\ 3 & 15 & 2 \end{bmatrix}, \qquad
[A_3] = \begin{bmatrix} 1 & -1 & 3 \\ 2 & 1 & 0 \\ 3 & 2 & 15 \end{bmatrix}
\]

Cramer's Rule: Example (continued). Calculate the determinants of [A], [A_1], [A_2], and [A_3]:

|A| = 12, |A_1| = 12, |A_2| = 24, |A_3| = 48

The unknowns x_1, x_2, and x_3 are then calculated as

x_1 = |A_1|/|A| = 12/12 = 1,   x_2 = |A_2|/|A| = 24/12 = 2,   x_3 = |A_3|/|A| = 48/12 = 4

Cramer's Rule: Example (continued). Be sure to double-check your answers by substituting them into the original equations:

x_1 - x_2 + x_3 = 1 - 2 + 4 = 3
2x_1 + x_2 - x_3 = 2 + 2 - 4 = 0
3x_1 + 2x_2 + 2x_3 = 3 + 4 + 8 = 15
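
The same example can be checked quickly in MATLAB with the built-in det function; this is just a transcription of the hand calculation above, not a recommended production method:

A  = [1 -1 1; 2 1 -1; 3 2 2];
b  = [3; 0; 15];
A1 = A;  A1(:, 1) = b;      % replace column 1 of A with b
A2 = A;  A2(:, 2) = b;      % replace column 2 of A with b
A3 = A;  A3(:, 3) = b;      % replace column 3 of A with b
x  = [det(A1); det(A2); det(A3)] / det(A)   % returns [1; 2; 4]
A * x                                       % reproduces [3; 0; 15]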

Cramer's Rule: Advantages/Disadvantages.

Advantages: the steps are easy to remember.

Disadvantages: it is computationally intensive compared to other methods. Evaluating the determinant of an n × n matrix by cofactor expansion requires (n - 1)(n!) operations, so Cramer's rule requires (n - 1)((n + 1)!) operations in total. For 8 equations, that works out to 7(9!) = 2,540,160 operations, or around 700 hours if you can perform one operation per second. Roundoff error may also become significant on large problems with non-integer coefficients.

Gauss Elimination: Introduction. Recall the scaffolding problem from the beginning of Chapter 3. Its matrix form was
\[
\begin{bmatrix}
1 & 1 & -1 & -1 & 0 & -1 \\
0 & 9 & 1 & 4 & 0 & 7 \\
0 & 0 & 1 & 1 & -1 & 0 \\
0 & 0 & 0 & 3 & 2 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 4
\end{bmatrix}
\begin{Bmatrix} T_A \\ T_B \\ T_C \\ T_D \\ T_E \\ T_F \end{Bmatrix}
=
\begin{Bmatrix} P_1 \\ 5P_1 \\ P_2 \\ P_2 \\ P_3 \\ P_3 \end{Bmatrix}
\]
Notice that its coefficient matrix contains nothing but zeroes below the diagonal. This is an example of an upper triangular matrix, and such systems of equations are very easy to solve.

Gauss Elimination: Introduction (continued). The original system of equations in the scaffolding problem was

T_A + T_B - T_C - T_D - T_F = P_1
9T_B + T_C + 4T_D + 7T_F = 5P_1
T_C + T_D - T_E = P_2
3T_D + 2T_E = P_2
T_E + T_F = P_3
4T_F = P_3

Notice that we can solve for T_F using only the sixth equation in the system: T_F = P_3/4. After solving for T_F, we can solve for T_E using only the fifth equation. The pattern continues, back-substituting through the system of equations until finally we solve for T_A using the first equation.
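
This back-substitution pattern is easy to express in code. Below is a minimal MATLAB sketch for a general upper triangular system U*x = c (the function name back_substitute is only illustrative, and it assumes the diagonal entries of U are nonzero):

function x = back_substitute(U, c)
    % Solve U*x = c where U is upper triangular with nonzero diagonal
    n = length(c);
    x = zeros(n, 1);
    for i = n:-1:1
        % subtract the contributions of the already-known unknowns,
        % then divide by the diagonal entry
        x(i) = (c(i) - U(i, i+1:n) * x(i+1:n)) / U(i, i);
    end
end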

Gauss Elimination: Introduction (continued). The goal of Gauss elimination is to convert any given system of equations into an equivalent upper triangular form. Once converted, we can back-substitute through the equations, solving for the unknowns algebraically.

Gauss Elimination: Rules. The operations used in converting a system of equations to upper triangular form are known as elementary operations:

Any equation may be multiplied by a nonzero scalar.
Any equation may be added to (or subtracted from) another equation.
The positions of any two equations in the system may be swapped.

Gauss Elimination: Example. Consider the system

2x_1 - x_2 + x_3 = 4    (1)
4x_1 + 3x_2 - x_3 = 6    (2)
3x_1 + 2x_2 + 2x_3 = 15   (3)

To eliminate the 4x_1 term in Equation 2, multiply Equation 1 by 2 and subtract it from Equation 2. To eliminate the 3x_1 term in Equation 3, multiply Equation 1 by 3/2 and subtract it from Equation 3. This gives the system

2x_1 - x_2 + x_3 = 4    (4)
5x_2 - 3x_3 = -2    (5)
(7/2)x_2 + (1/2)x_3 = 9    (6)

Gauss Elimination: Example (continued). To eliminate the (7/2)x_2 term from Equation 6, multiply Equation 5 by 7/10 and subtract it from Equation 6. This gives the system

2x_1 - x_2 + x_3 = 4    (7)
5x_2 - 3x_3 = -2    (8)
(13/5)x_3 = 52/5    (9)

Gauss Elimination: Example (continued). Equation 9 can easily be solved for x_3:

x_3 = (52/5)/(13/5) = 4

Equation 8 can easily be solved for x_2, once x_3 is known:

x_2 = (1/5)(-2 + 3x_3) = (1/5)(-2 + 3(4)) = 2

Equation 7 can easily be solved for x_1, once both x_2 and x_3 are known:

x_1 = (1/2)(4 + x_2 - x_3) = (1/2)(4 + 2 - 4) = 1

Gauss Elimination: Matrix Version. The Gauss elimination method can be applied to a system of equations in matrix form. Instead of eliminating terms from equations, we'll be replacing certain elements of the coefficient matrix with zeroes.

Gauss Elimination: Matrix Version (Step 0). Start by defining the augmented matrix [C^(0)] for the problem:
\[
[C^{(0)}] =
\begin{bmatrix}
a_{11}^{(0)} & a_{12}^{(0)} & \cdots & a_{1n}^{(0)} & a_{1,n+1}^{(0)} \\
a_{21}^{(0)} & a_{22}^{(0)} & \cdots & a_{2n}^{(0)} & a_{2,n+1}^{(0)} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n1}^{(0)} & a_{n2}^{(0)} & \cdots & a_{nn}^{(0)} & a_{n,n+1}^{(0)}
\end{bmatrix}
\]
where the first n columns are the elements of the original [A] matrix, and the last column contains the elements of the original {b} vector.

Gauss Elimination: Matrix Version (Step 1). Zero out the first column of the [C] matrix, rows 2 through n. To turn a_21 into a zero, multiply row 1 by a_21/a_11, then subtract the numbers on row 1 from row 2. To turn a_31 into a zero, multiply row 1 by a_31/a_11, then subtract the numbers on row 1 from row 3. Repeat for rows 4 through n. The result is
\[
[C^{(1)}] =
\begin{bmatrix}
a_{11}^{(0)} & a_{12}^{(0)} & \cdots & a_{1n}^{(0)} & a_{1,n+1}^{(0)} \\
0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} & a_{2,n+1}^{(1)} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)} & a_{n,n+1}^{(1)}
\end{bmatrix}
\]

Gauss Elimination: Matrix Version (Step 2). Zero out the second column of the [C] matrix, rows 3 through n. To turn a_32 into a zero, multiply row 2 by a_32/a_22, then subtract the numbers on row 2 from row 3. To turn a_42 into a zero, multiply row 2 by a_42/a_22, then subtract the numbers on row 2 from row 4. Repeat for rows 5 through n. The result is
\[
[C^{(2)}] =
\begin{bmatrix}
a_{11}^{(0)} & a_{12}^{(0)} & a_{13}^{(0)} & \cdots & a_{1n}^{(0)} & a_{1,n+1}^{(0)} \\
0 & a_{22}^{(1)} & a_{23}^{(1)} & \cdots & a_{2n}^{(1)} & a_{2,n+1}^{(1)} \\
0 & 0 & a_{33}^{(2)} & \cdots & a_{3n}^{(2)} & a_{3,n+1}^{(2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & a_{n3}^{(2)} & \cdots & a_{nn}^{(2)} & a_{n,n+1}^{(2)}
\end{bmatrix}
\]

Gauss Elimination: Matrix Version (Step n-1). Zero out the (n-1)th column of the [C] matrix, row n. To turn a_{n,n-1} into a zero, multiply row n-1 by a_{n,n-1}/a_{n-1,n-1}, then subtract the numbers on row n-1 from row n. The final result is the upper triangular augmented matrix
\[
[C^{(n-1)}] =
\begin{bmatrix}
a_{11}^{(0)} & a_{12}^{(0)} & a_{13}^{(0)} & \cdots & a_{1n}^{(0)} & a_{1,n+1}^{(0)} \\
0 & a_{22}^{(1)} & a_{23}^{(1)} & \cdots & a_{2n}^{(1)} & a_{2,n+1}^{(1)} \\
0 & 0 & a_{33}^{(2)} & \cdots & a_{3n}^{(2)} & a_{3,n+1}^{(2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & a_{nn}^{(n-1)} & a_{n,n+1}^{(n-1)}
\end{bmatrix}
\]
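
Putting the elimination steps and the back substitution together, here is a minimal MATLAB sketch of naive Gauss elimination (no pivoting; the function name gauss_eliminate is only illustrative, and it assumes the pivots a_kk never become zero):

function x = gauss_eliminate(A, b)
    % Solve A*x = b by naive Gauss elimination (no pivoting)
    n = length(b);
    C = [A, b];                           % augmented matrix [C^(0)]
    for k = 1:n-1                         % elimination step k
        for i = k+1:n
            factor = C(i, k) / C(k, k);   % a_ik / a_kk
            C(i, k:n+1) = C(i, k:n+1) - factor * C(k, k:n+1);
        end
    end
    x = zeros(n, 1);                      % back substitution
    for i = n:-1:1
        x(i) = (C(i, n+1) - C(i, i+1:n) * x(i+1:n)) / C(i, i);
    end
end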

Gauss Elimination: Matrix Version Example. Solve the following system of equations with Gauss elimination:
\[
\begin{bmatrix} 2 & -1 & 1 \\ 4 & 3 & -1 \\ 3 & 2 & 2 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
=
\begin{Bmatrix} 4 \\ 6 \\ 15 \end{Bmatrix}
\]
First, set up the augmented matrix [C^(0)]:
\[
[C^{(0)}] =
\begin{bmatrix}
2 & -1 & 1 & 4 \\
4 & 3 & -1 & 6 \\
3 & 2 & 2 & 15
\end{bmatrix}
\]

Gauss Elimination: Matrix Version Example (continued). Step 1a: eliminate the 4 in row 2, column 1. Multiply all the elements of row 1 by a_21/a_11 = 4/2 = 2, then subtract them from the elements of row 2:
\[
[C] =
\begin{bmatrix}
2 & -1 & 1 & 4 \\
4 - (2)(2) & 3 - (2)(-1) & -1 - (2)(1) & 6 - (2)(4) \\
3 & 2 & 2 & 15
\end{bmatrix}
=
\begin{bmatrix}
2 & -1 & 1 & 4 \\
0 & 5 & -3 & -2 \\
3 & 2 & 2 & 15
\end{bmatrix}
\]

Gauss Elimination: Matrix Version Example (continued). Step 1b: eliminate the 3 in row 3, column 1. Multiply all the elements of row 1 by a_31/a_11 = 3/2 = 1.5, then subtract them from the elements of row 3:
\[
[C^{(1)}] =
\begin{bmatrix}
2 & -1 & 1 & 4 \\
0 & 5 & -3 & -2 \\
3 - (1.5)(2) & 2 - (1.5)(-1) & 2 - (1.5)(1) & 15 - (1.5)(4)
\end{bmatrix}
=
\begin{bmatrix}
2 & -1 & 1 & 4 \\
0 & 5 & -3 & -2 \\
0 & 3.5 & 0.5 & 9
\end{bmatrix}
\]
This completes the first elimination step.

Gauss Elimination: Matrix Version Example (continued). Step 2: eliminate the 3.5 in row 3, column 2. Multiply all the elements of row 2 by a_32/a_22 = 3.5/5 = 0.7, then subtract them from the elements of row 3:
\[
[C^{(2)}] =
\begin{bmatrix}
2 & -1 & 1 & 4 \\
0 & 5 & -3 & -2 \\
0 & 3.5 - (0.7)(5) & 0.5 - (0.7)(-3) & 9 - (0.7)(-2)
\end{bmatrix}
=
\begin{bmatrix}
2 & -1 & 1 & 4 \\
0 & 5 & -3 & -2 \\
0 & 0 & 2.6 & 10.4
\end{bmatrix}
\]
This completes the second elimination step.

Gauss Elimination: Matrix Version Example (continued). We've now converted the original system of equations
\[
\begin{bmatrix} 2 & -1 & 1 \\ 4 & 3 & -1 \\ 3 & 2 & 2 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
=
\begin{Bmatrix} 4 \\ 6 \\ 15 \end{Bmatrix}
\]
into an equivalent upper triangular system of equations
\[
\begin{bmatrix} 2 & -1 & 1 \\ 0 & 5 & -3 \\ 0 & 0 & 2.6 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
=
\begin{Bmatrix} 4 \\ -2 \\ 10.4 \end{Bmatrix}
\]

Gauss Elimination: Matrix Version Example (continued). The new system of equations can be converted back to algebraic form as:

2x_1 - x_2 + x_3 = 4    (10)
5x_2 - 3x_3 = -2    (11)
2.6x_3 = 10.4    (12)

Solve Equation 12 for x_3: x_3 = 10.4/2.6 = 4. Then solve Equation 11 for x_2: x_2 = (1/5)(-2 + 3x_3) = 2. Then solve Equation 10 for x_1: x_1 = (1/2)(4 + x_2 - x_3) = 1.

Gauss Elimination: Matrix Version Example (continued). Double-check the solution by substituting the values of x_1, x_2, and x_3 into the original equations:

2x_1 - x_2 + x_3 = 2(1) - 2 + 4 = 4
4x_1 + 3x_2 - x_3 = 4(1) + 3(2) - 4 = 6
3x_1 + 2x_2 + 2x_3 = 3(1) + 2(2) + 2(4) = 15
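
In MATLAB, the same system can be solved and checked in a couple of lines, either with the built-in backslash operator or with the gauss_eliminate sketch above:

A = [2 -1 1; 4 3 -1; 3 2 2];
b = [4; 6; 15];
x = A \ b                       % built-in solver; returns [1; 2; 4]
% x = gauss_eliminate(A, b)     % the elimination sketch gives the same result
A * x                           % reproduces [4; 6; 15]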

Gauss Elimination: Advantages/Disadvantages.

Advantages: much less computation is required for larger problems. Gauss elimination requires roughly n^3/3 multiplications to solve a system of n equations. For 8 equations, this works out to around 170 operations, versus the roughly 2.5 million operations for Cramer's rule.

Disadvantages: the procedure is not quite as easy to remember for hand solutions. Roundoff error may become significant, but it can be partially mitigated by using more advanced techniques such as pivoting or scaling.
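
As a quick sanity check on the operation counts quoted above, a couple of lines of MATLAB reproduce the numbers for n = 8:

n = 8;
cramer_ops  = (n - 1) * factorial(n + 1)   % (n-1)(n+1)! = 2540160
gauss_mults = n^3 / 3                      % roughly 170 multiplications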

Solve Problem 3.4 using: Cramer's rule, Gauss elimination, and MATLAB's \ operator. Double-check your answers by substituting them back into the original equations.