Lecture 5. Gaussian and Gauss-Jordan elimination


International College of Economics and Finance (State University Higher School of Economics). Lectures on Linear Algebra by Vladimir Chernyak. Lecture 5. Gaussian and Gauss-Jordan elimination. To be read to the music of "Do, Do, Do" by George Gershwin.

GAUSSIAN METHOD OF SOLVING LINEAR EQUATIONS

There are essentially three ways of solving systems of linear equations: (1) substitution, (2) elimination of variables, and (3) matrix methods.

Substitution method

Substitution is the method usually taught in beginning algebra classes. To use this method, solve one equation of the system for one variable, say x_n, in terms of the other variables in that equation. Substitute this expression for x_n into the other m - 1 equations. The result is a new system of m - 1 equations in the n - 1 unknowns x_1, ..., x_{n-1}. Continue this process by solving one equation of the new system for x_{n-1} and substituting this expression into the other m - 2 equations to obtain a system of m - 2 equations in the n - 2 variables x_1, ..., x_{n-2}. Proceed until you reach a system with just a single equation, which is easily solved. Finally, use the earlier expressions of one variable in terms of the others to find all the x_i's.

Deficiencies of the substitution method

The substitution method is straightforward, but it can be cumbersome. Furthermore, it does not provide much insight into the nature of the general solution of linear systems. It is not a method around which one can build a general theory of linear systems. However, it is the most direct method for solving certain systems with a special, very simple form. As such, it will play a role in the general solution technique we now develop.

Elimination of Variables Method

The method which is most conducive to theoretical analysis is elimination of variables, another technique that should be familiar from high school algebra.

(a) Gaussian Elimination

First consider the simple case of a system with a unique solution.
First, consider a simple system of three linear equations in three unknowns. It is clear that if one "eliminates" the first variable from the second and third equations of this system by subtracting suitable multiples of the first equation from them, the result is a "nearly triangular" system.

Continuing the process, one can now "eliminate" the second variable from the third equation by multiplying the second equation by a suitable constant and adding the new equation to the third one. The result is a triangular system. The last equation now gives the last unknown directly. To find the second unknown, we substitute this value back into the second equation and, after some calculation, solve for it. Again, to find the first unknown, we substitute the other two values back into the first equation.

Instead of transforming equations, one can operate with matrices: the corresponding elementary operations are applied to the augmented matrix of the system. Once it is reduced, we can restore the system and finish the solution as before.

To solve a general system of m equations by elimination of variables, use the coefficient of the first variable in the first equation to eliminate that variable from all the equations below it. To do this, add proper multiples of the first equation to each of the succeeding equations. Now disregard the first equation and eliminate the next variable from the last m - 1 equations just as before, that is, by adding proper multiples of the second equation to each of the succeeding equations. If the second equation does not contain that variable but a lower equation does, you will have to interchange the order of these two equations before proceeding. Continue eliminating variables until you reach the last equation. The resulting simplified system can then easily be solved by substitution.

This method of reducing a given system of equations, by adding a multiple of one equation to another or by interchanging equations, until one reaches a system with a regular triangular or echelon matrix, and then solving it via back substitution, is called Gaussian elimination. An important characteristic of the resulting system is that each equation contains fewer variables than the previous one. At each stage of the Gaussian elimination process, we want to change some coefficient of our linear system to zero by adding a multiple of an earlier equation to the given one.
The coefficient a_ij in this earlier equation is then called a pivot, and we say that we "pivot on a_ij" to eliminate the entries below it. At each stage of the elimination procedure, we use a pivot to eliminate all coefficients directly below it. In our example, the coefficient of the first unknown in the first equation is the first pivot; at the next stage the coefficient of the second unknown in the second equation is the pivot. Note that 0 can never be a pivot in this process. If you want to eliminate x_j from a subsystem of equations and the coefficient of x_j is zero in the first equation of this subsystem but nonzero in a subsequent equation, you will have to reverse the order of these two equations before proceeding.
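The procedure just described (forward elimination, with a row interchange whenever a pivot position holds a zero, followed by back substitution) can be written out as a minimal Python sketch; the 3x3 example system in it is a hypothetical illustration, not the one displayed in the lecture:

```python
def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination and back substitution.

    A is a list of n rows of n coefficients; b is the right-hand side.
    Rows are interchanged when a pivot position holds a zero, as in the text.
    """
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # if the pivot is zero, interchange with a lower row whose entry is nonzero
        if M[k][k] == 0:
            swap = next(i for i in range(k + 1, n) if M[i][k] != 0)
            M[k], M[swap] = M[swap], M[k]
        # pivot on M[k][k]: eliminate the entries directly below it
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # back substitution on the resulting triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Hypothetical example: x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27
print(gaussian_elimination([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))
# -> [5.0, 3.0, -2.0]
```

Swapping rows only when a pivot entry is exactly zero is adequate in exact arithmetic; numerical practice instead pivots on the largest available entry in the column (partial pivoting) to limit round-off error.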

(b) Gauss-Jordan Elimination

The first stage of the usual Gaussian elimination is quite straightforward, but the back substitution stage is awkward and in a big system can cause much trouble. There is a variant of Gaussian elimination, called Gauss-Jordan elimination, which does not use back substitution at all but uses some additional elementary operations. This method starts like Gaussian elimination, i.e., by transforming the original system into a system with a regular triangular or echelon matrix. After reaching it, instead of using back substitution, apply Gaussian elimination operations from the bottom equation to the top to eliminate all but the first term on the left-hand side of each equation.

For example, continue transforming our system. Now the pivot is the coefficient of the third unknown in the third equation. One can "eliminate" this variable from the first and second equations by subtracting suitable multiples of the third equation from them. Then one can pivot on the coefficient of the second unknown in the second equation to eliminate the corresponding coefficient in the first equation. We get a system which needs no further work to see the solution. In matrix form, Gauss-Jordan elimination applies the same operations to the augmented matrix, and the solution can be read off the last column. Gauss-Jordan elimination is particularly useful in developing the theory of linear systems; Gaussian elimination is usually more efficient in solving actual linear systems.

Let us sum up the Gauss-Jordan elimination technique. To use it to solve a system of linear equations Ax = b, simply apply repeated elementary equation operations to the system (or express the system as an augmented matrix and apply repeated elementary row operations to it) until the coefficient matrix A is reduced to an identity matrix. The solution of the system can then be read from the remaining elements of the column vector b.
To transform the coefficient matrix into an identity matrix, work along the principal diagonal. First obtain a 1 in the a_11 position of the coefficient matrix; then use row operations to obtain 0s everywhere else in the first column. Next obtain a 1 in the a_22 position, and use row operations to get 0s everywhere else in the second column.
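The column-by-column reduction to the identity can be sketched as a minimal Python illustration (the example system is hypothetical, not the lecture's):

```python
def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] until A becomes the identity;
    the solution is then read from the last column (no back substitution)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        if M[k][k] == 0:  # interchange rows if the pivot position holds a zero
            swap = next(i for i in range(k + 1, n) if M[i][k] != 0)
            M[k], M[swap] = M[swap], M[k]
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]        # obtain a 1 in the pivot position
        for i in range(n):
            if i != k and M[i][k] != 0:         # clear the rest of column k
                factor = M[i][k]
                M[i] = [vi - factor * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]               # solution sits in the last column

# Hypothetical example: x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27
print(gauss_jordan([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))
# -> [5.0, 3.0, -2.0]
```

Note that each column is cleared above as well as below the pivot, which is exactly what replaces the back substitution stage of plain Gaussian elimination.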

Continue obtaining 1s along the principal diagonal and then clearing the rest of each column until the identity matrix is completed. Applying these ideas to our example, one can see that this version of Gauss-Jordan elimination is the most effective.

Matrix methods

Earlier we mentioned a third method for solving linear systems, namely matrix methods. We will study these methods in the next lectures, when we discuss matrix inversion and Cramer's rule. For now, it suffices to note that some of these more advanced methods derive from Gaussian elimination. An understanding of this technique will provide a solid base on which to build your knowledge of linear algebra and of further applied courses such as quantitative methods for economists.

SYSTEMS WITH MANY OR NO SOLUTIONS

Let us try to use the methods under discussion to solve a system with many solutions. Transforming such a system with matrix notation, it becomes obvious that we cannot get a unique solution from the transformed system. The variables corresponding to the pivots are basic variables; the remaining ones are free variables. By transposing all the terms containing free variables from the left-hand side to the right-hand side, we get a system whose left-hand side has a regular triangular matrix in the basic variables. Choosing arbitrary values for all the free variables and plugging them in, we obtain a new system with a regular triangular matrix and solve it for the basic variables. These formulas, together with the rule of choosing arbitrary values for the free variables, form the general solution of a system with many solutions. It is essential that, applying the Gaussian elimination method, we can detect the case of many solutions and handle it.

Now let us consider a system with no solution.

Transforming it with matrix notation, it is obvious that the last system has no solution. So, using the Gaussian elimination method, we can also detect the situation of no solution. Let us now generalise the results of our analysis.

Lemma on the Gaussian elimination iteration. Any matrix having at least one nonzero element can be transformed by elementary matrix operations into a form in which every other row has more leading zeros than the first row.

Proof. Comparing rows, one can find a row containing fewer leading zeros than the others. Interchanging rows, we can make this row the first one. Using Gaussian elimination operations, one can then make all the other rows have more leading zeros than the first row. ∎

Theorem on the Gaussian elimination method. The augmented matrix of a system of linear equations having at least one nonzero coefficient of an unknown can be reduced by elementary matrix operations to echelon form. After this process is done: (1) the system has no solution if and only if the last row of the echelon form of the augmented matrix contains only one nonzero element, and it is in the column of the constant terms (the last column of the augmented matrix); in all other cases the system has at least one solution; (2) if a solution exists, it is unique if and only if in the echelon form the coefficient matrix (all columns except the last column of the constant terms) has regular triangular form.

Proof. One can find a nonzero coefficient and, by interchanging rows and columns, move it to the first place in the first row. Then, applying the lemma on the Gaussian elimination iteration, transform the matrix into a form in which every other row has more leading zeros than the first row. Now delete the first row and consider the matrix consisting of all the other rows. If all elements of the new matrix are equal to zero, the process is finished (after deleting the zero rows we get an echelon matrix consisting of the first row alone, and the system has at least one solution).
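The criterion of the theorem can be checked mechanically. A minimal sketch (the `classify_system` helper and its three example systems are hypothetical illustrations) reduces an augmented matrix [A | b] to echelon form and reads off the classification:

```python
def classify_system(M, tol=1e-12):
    """Classify an augmented matrix [A | b] as 'no solution',
    'unique solution', or 'many solutions' via echelon form,
    in the spirit of the theorem above."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols - 1):          # never pivot in the constants column
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if abs(M[r][col]) > tol), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        for r in range(pivot_row + 1, rows):
            f = M[r][col] / M[pivot_row][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    # a row whose only nonzero entry sits in the constants column -> inconsistent
    for row in M:
        if all(abs(a) <= tol for a in row[:-1]) and abs(row[-1]) > tol:
            return "no solution"
    return "unique solution" if pivot_row == cols - 1 else "many solutions"

print(classify_system([[1, 1, 2], [2, 2, 4]]))   # x + y = 2 counted twice
print(classify_system([[1, 1, 2], [1, -1, 0]]))  # x + y = 2, x - y = 0
print(classify_system([[1, 1, 2], [1, 1, 3]]))   # contradictory right-hand sides
```

In exact arithmetic the tolerance could be taken as zero; with floating-point coefficients a small `tol` guards against entries that are zero only up to round-off.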
If all the coefficients in a row are equal to zero but its constant term is not, we have an equation of the form 0 = b (b ≠ 0), and the system has no solution. If there is a coefficient not equal to zero, we can repeat the process. After some iterations an echelon matrix will be obtained. ∎

According to this theorem, any system of linear equations can be solved using the Gaussian elimination method.

BIBLIOGRAPHY

1. Carl P. Simon, Lawrence Blume. Mathematics for Economists. W.W. Norton & Company, New York, London, 1994. Chapter 7.
2. V. Chernyak. Lecture notes on linear algebra. Introductory course. Moscow: Dialog-MSU.