# General Framework for an Iterative Solution of Ax = b. Jacobi's Method


## 2.6 Iterative Solutions of Linear Systems

Consistent linear systems in real life are solved in one of two ways: by direct calculation (using a matrix factorization, for example) or by an iterative procedure that generates a sequence of vectors that approach the exact solution. When the coefficient matrix is large and sparse (with a high proportion of zero entries), iterative algorithms can be more rapid than direct methods and can require less computer memory. Also, an iterative process may be stopped as soon as an approximate solution is sufficiently accurate for practical work. However, iterative methods can also fail or be extremely slow for some problems. The simple iterative methods below were discovered long ago, but they provided the foundation for what today is an active area of research at the interface of mathematics and computer science.

### General Framework for an Iterative Solution of Ax = b

Throughout the section, A is an invertible matrix. The goal of an iterative algorithm is to produce a sequence of vectors

x^(0), x^(1), ..., x^(k), ...

that converges to the unique solution of Ax = b, say x, in the sense that the entries in x^(k) are as close as desired to the corresponding entries in x for all k sufficiently large. To describe a recursion that produces x^(k+1) from x^(k), we write A = M - N for suitable matrices M and N, and then we rewrite the equation Ax = b as

Mx - Nx = b    and    Mx = Nx + b

If a sequence {x^(k)} satisfies

M x^(k+1) = N x^(k) + b    (k = 0, 1, ...)    (1)

and if the sequence converges to some vector x, then it can be shown that Ax = b. [The vector on the left in (1) approaches Mx, while the vector on the right in (1) approaches Nx + b. This implies that Mx = Nx + b, that is, Mx - Nx = b, and so Ax = b.] For x^(k+1) to be uniquely specified in (1), M must be invertible. Also, M should be chosen so that x^(k+1) is easy to calculate. The iterative methods below illustrate two simple choices for M.
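The recursion (1) is easy to experiment with numerically. Below is a minimal sketch (in Python rather than the MATLAB used later in this section; the function name and the small test system are our own choices), restricted to a diagonal M so that solving M y = N x + b is just a componentwise division:

```python
# Generic splitting iteration M x_{k+1} = N x_k + b, with M DIAGONAL
# so that each step solves M y = N x + b by simple division.

def split_iterate(M_diag, N, b, x0, steps):
    """Run recursion (1) for A = M - N, where M = diag(M_diag)."""
    x = list(x0)
    n = len(b)
    for _ in range(steps):
        # y_i = ( (N x)_i + b_i ) / M_ii
        x = [(sum(N[i][j] * x[j] for j in range(n)) + b[i]) / M_diag[i]
             for i in range(n)]
    return x

# Illustrative system: A = [[2, 1], [1, 3]], b = [3, 4], exact solution (1, 1).
# Split A = M - N with M = diag(A) and N = M - A.
M_diag = [2, 3]
N = [[0, -1], [-1, 0]]
b = [3, 4]
x = split_iterate(M_diag, N, b, [0, 0], 50)
print(x)  # close to [1.0, 1.0]
```

With M taken as the diagonal of A, this is exactly Jacobi's method, described next.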
### Jacobi's Method

This method assumes that the diagonal entries of A are all nonzero. Let D be the diagonal matrix formed from the diagonal entries of A. Jacobi's method uses D for M and D - A for N in (1), so that

D x^(k+1) = (D - A) x^(k) + b    (k = 0, 1, ...)

In a real-life problem, available information may suggest a value for x^(0). For simplicity, we take the zero vector as x^(0).

**EXAMPLE 1** Apply Jacobi's method to the system

10x1 - x2 - x3 = 18
-x1 + 15x2 - x3 = 12
-x1 - x2 + 20x3 = 17    (2)

Take x^(0) = (0, 0, 0) as an initial approximation to the solution, and use six iterations (that is, compute x^(1), ..., x^(6)).

**Solution** For some k, denote the entries in x^(k) by (x1, x2, x3) and the entries in x^(k+1) by (y1, y2, y3). The recursion

D x^(k+1) = (D - A) x^(k) + b

can be written as

10y1 = x2 + x3 + 18
15y2 = x1 + x3 + 12
20y3 = x1 + x2 + 17

and

y1 = (x2 + x3 + 18)/10
y2 = (x1 + x3 + 12)/15    (3)
y3 = (x1 + x2 + 17)/20

A faster way to get (3) is to solve the first equation in (2) for x1, the second equation for x2, the third for x3, and then rename the variables on the left as y1, y2, and y3, respectively.

For k = 0, take x^(0) = (x1, x2, x3) = (0, 0, 0), and compute

x^(1) = (y1, y2, y3) = (18/10, 12/15, 17/20) = (1.8, .8, .85)

For k = 1, use the entries in x^(1) as x1, x2, x3 in (3) and compute the new y1, y2, y3:

y1 = [(.8) + (.85) + 18]/10 = 1.965
y2 = [(1.8) + (.85) + 12]/15 ≈ .9767
y3 = [(1.8) + (.8) + 17]/20 = .98

Thus x^(2) = (1.965, .9767, .98). The entries in x^(2) are used on the right in (3) to compute the entries in x^(3), and so on. Here are the results, with calculations using MATLAB and results reported to four decimal places:

|    | x^(0) | x^(1) | x^(2) | x^(3) | x^(4) | x^(5) | x^(6) |
|----|-------|-------|-------|--------|--------|--------|--------|
| x1 | 0 | 1.8 | 1.965 | 1.9957 | 1.9993 | 1.9999 | 2.0000 |
| x2 | 0 | .8  | .9767 | .9963  | .9995  | .9999  | 1.0000 |
| x3 | 0 | .85 | .98   | .9971  | .9996  | .9999  | 1.0000 |
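The Jacobi iterates above can be reproduced with a few lines of code, a Python sketch of the recursion (3) (the helper name is our own; the MATLAB computation mentioned in the text would give the same numbers):

```python
# Jacobi iteration for the system of Example 1:
#   10x1 -  x2 -  x3 = 18
#   -x1 + 15x2 -  x3 = 12
#   -x1 -  x2 + 20x3 = 17
# Every new entry y_i uses only entries from the PREVIOUS iterate.

def jacobi_step(x):
    x1, x2, x3 = x
    return ((x2 + x3 + 18) / 10,   # the solved form (3)
            (x1 + x3 + 12) / 15,
            (x1 + x2 + 17) / 20)

x = (0.0, 0.0, 0.0)          # x^(0)
iterates = [x]
for _ in range(6):           # compute x^(1), ..., x^(6)
    x = jacobi_step(x)
    iterates.append(x)

print(iterates[1])  # (1.8, 0.8, 0.85)
print(iterates[2])  # close to (1.965, 0.9767, 0.98)
```

Note that `jacobi_step` reads all three old entries before any new entry is formed; this is the defining feature of Jacobi's method.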

If we decide to stop when the entries in x^(k) and x^(k-1) differ by less than .001, then we need five iterations (k = 5).

### The Gauss-Seidel Method

This method uses the recursion (1) with M the lower triangular part of A. That is, M has the same entries as A on the diagonal and below, and M has zeros above the diagonal. See Figure 1. As in Jacobi's method, the diagonal entries of A must be nonzero in order for M to be invertible.

    A = [ *  *  * ]        M = [ *  0  0 ]
        [ *  *  * ]            [ *  *  0 ]
        [ *  *  * ]            [ *  *  * ]

**FIGURE 1** The lower triangular part of A.

**EXAMPLE 2** Apply the Gauss-Seidel method to the system in Example 1, with x^(0) = 0 and six iterations:

10x1 - x2 - x3 = 18
-x1 + 15x2 - x3 = 12
-x1 - x2 + 20x3 = 17    (4)

**Solution** With M the lower triangular part of A and N = M - A, the recursion relation is

M x^(k+1) = (M - A) x^(k) + b

or

10y1 - x2 - x3 = 18
-y1 + 15y2 - x3 = 12
-y1 - y2 + 20y3 = 17

We will work with one equation at a time. When we reach the second equation, y1 is already known, so we can move it to the right side. Likewise, in the third equation y1 and y2 are known, so we move them to the right. Dividing by the coefficients of the terms remaining on the left, we obtain

y1 = (x2 + x3 + 18)/10
y2 = (y1 + x3 + 12)/15    (5)
y3 = (y1 + y2 + 17)/20

Another way to view (5) is to solve each equation in (4) for x1, x2, and x3, respectively, and regard the x's on the left as the new values:

x1 = (x2 + x3 + 18)/10
x2 = (x1 + x3 + 12)/15    (6)
x3 = (x1 + x2 + 17)/20

Use the first equation to calculate the new x1 [called y1 in (5)] from x2 and x3. Then use this new x1 along with x3 in the second equation to compute the new x2. Finally, in the third equation, use the new values for x1 and x2 to compute x3. In this way, the latest information about the variables is used to compute new values. [A computer program would use statements corresponding to the equations in (6).]

From x^(0) = (0, 0, 0), we obtain

x1 = [(0) + (0) + 18]/10 = 1.8
x2 = [(1.8) + (0) + 12]/15 = .92
x3 = [(1.8) + (.92) + 17]/20 = .986

Thus x^(1) = (1.8, .92, .986). The entries in x^(1) are used in (6) to produce x^(2), and so on. Here are the MATLAB calculations reported to four decimal places:

|    | x^(0) | x^(1) | x^(2) | x^(3) | x^(4) | x^(5) | x^(6) |
|----|-------|-------|--------|--------|--------|--------|--------|
| x1 | 0 | 1.8  | 1.9906 | 1.9998 | 2.0000 | 2.0000 | 2.0000 |
| x2 | 0 | .92  | .9984  | .9999  | 1.0000 | 1.0000 | 1.0000 |
| x3 | 0 | .986 | .9995  | 1.0000 | 1.0000 | 1.0000 | 1.0000 |

Observe that when k is 4, the entries in x^(k) and x^(k-1) differ by less than .001. The values in x^(6) in this case happen to be accurate to eight decimal places. There exist examples where Jacobi's method is faster than the Gauss-Seidel method, but usually a Gauss-Seidel sequence converges faster, as in Example 2. (If parallel processing is available, Jacobi might be faster, because the entries in x^(k+1) can be computed simultaneously.) There are also examples where one or both methods fail to produce a convergent sequence, and other examples where a sequence is convergent but converges too slowly for practical use. Fortunately, there is a simple condition that guarantees (but is not essential for) the convergence of both Jacobi and Gauss-Seidel sequences. This condition is often satisfied, for instance, in the large-scale systems that can occur during numerical solutions of partial differential equations (such as Laplace's equation for steady-state heat flow).
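The "latest information" bookkeeping in (6) translates directly into code: each assignment overwrites a variable that the next line then uses. A minimal Python sketch (names ours) for the system of Example 2:

```python
# Gauss-Seidel iteration for the system of Example 2. Unlike Jacobi,
# each formula immediately uses the values computed just above it.

def gauss_seidel_step(x):
    x1, x2, x3 = x
    x1 = (x2 + x3 + 18) / 10    # new x1 from the old x2, x3
    x2 = (x1 + x3 + 12) / 15    # uses the NEW x1
    x3 = (x1 + x2 + 17) / 20    # uses the NEW x1 and x2
    return (x1, x2, x3)

x = (0.0, 0.0, 0.0)             # x^(0)
for _ in range(6):              # compute x^(1), ..., x^(6)
    x = gauss_seidel_step(x)
print(x)  # very close to the exact solution (2, 1, 1)
```

A Jacobi version of the same function would read all three old entries before assigning any new one; the only change between the two methods is this order of updates.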

An n × n matrix A is said to be **strictly diagonally dominant** if the absolute value of each diagonal entry exceeds the sum of the absolute values of the other entries in the same row. In this case it can be shown that A is invertible and that both the Jacobi and Gauss-Seidel sequences converge to the unique solution of Ax = b, for any initial x^(0). (The speed of the convergence depends on how much the diagonal entries dominate the corresponding row sums.)

The coefficient matrix in Examples 1 and 2 is strictly diagonally dominant, but a matrix such as the following is not. Examine each row:

    [ 9  -2  -4 ]
    [ 1   6  -4 ]
    [ 3  -5   8 ]

The problem lies in the third row, because 8 is not larger than the sum of the magnitudes of the other entries (|3| + |-5| = 8). The practice problem below suggests a trick that sometimes works when a system is not strictly diagonally dominant.

### Practice Problem

Show that the Gauss-Seidel method will produce a sequence converging to the solution of the following system, provided the equations are arranged properly:

x1 - 3x2 + x3 = -2
-6x1 + 4x2 + 11x3 = 1
5x1 - 2x2 - 2x3 = 9

### Exercises

Solve the systems in Exercises 1-4 using Jacobi's method, with x^(0) = 0 and three iterations. [M] Repeat the iterations until two successive approximations agree within a tolerance of .001 in each entry.

1. 4x1 + x2 = 7
   x1 + 5x2 = …
2. x1 + x2 = 11
   x1 + 5x2 + 2x3 = …
   x2 + 7x3 = …
3. x1 + x2 = 25
   x1 + 8x2 = …
4. x1 + x2 = …
   x1 + 100x2 + 2x3 = …
   x2 + 50x3 = 98

In Exercises 5-8, use the Gauss-Seidel method, with x^(0) = 0 and two iterations. [M] Compare the number of iterations needed by Gauss-Seidel and Jacobi to make two successive approximations agree within a tolerance of .001.

5. The system in Exercise 1.
6. The system in Exercise 2.
7. The system in Exercise 3.
8. The system in Exercise 4.

Determine which of the matrices in Exercises 9 and 10 are strictly diagonally dominant.

9. a. …  b. …
10. a. …  b. …

Neither iteration method of this section works for the systems as written in Exercises 11 and 12. Compute the first three iterates for Gauss-Seidel, with x^(0) = 0. Then rearrange the equations to make Gauss-Seidel work, and compute two iterations. [M] Find an approximation accurate to three decimal places.

11. x1 + 6x2 = 4
    5x1 + x2 = …
12. x1 + 4x2 + x3 = 3
    4x1 + x2 = 10
    x2 + 4x3 = 6

The systems in Exercises 13 and 14 are not strictly diagonally dominant, and no rearrangement of the equations will make them so. Nevertheless, both Jacobi and Gauss-Seidel produce convergent sequences. [M] Find how many Gauss-Seidel iterations are needed to produce an approximation accurate to three decimal places. Set x^(0) = 0.

13. x1 + 2x2 = …
    x1 + 4x2 = …
14. x1 + 3x2 + 2x3 = 19
    x1 + 5x2 + 3x3 = 10
    x1 + 4x2 + 4x3 = 18

[M] In Exercises 15 and 16, try both Jacobi's method and the Gauss-Seidel method and describe what happens. Set x^(0) = 0. If a sequence seems to converge, try to find two successive approximations that agree within a tolerance of .001.

15. x1 + x3 = 6
    x1 + x2 = 3
    x1 + 2x2 + 3x3 = …
16. x1 + 2x2 + 2x3 = …
    x1 + 3x2 + 2x3 = 7
    2x1 + 2x2 + 3x3 = …

17. The main focus of Section 2.7 will be the equation x = Cx + d. Under suitable hypotheses on C, this equation can be solved by an iterative scheme using the recursion relation

    x^(k+1) = Cx^(k) + d    (7)

    a. Show that (7) is a special case of (1). What are M, N, and A?
    b. Take x^(0) = 0 and use (7) to find a formula for x^(2) not involving x^(1).
    c. Verify by induction that if x^(0) = 0, then

       x^(k) = d + Cd + C^2 d + ... + C^(k-1) d    (k = 1, 2, ...)

18. Suppose x satisfies Ax = b and x^(k) is determined by the recursion relation (1), where A is invertible, A = M - N, and M is invertible. The vector e^(k) = x^(k) - x is the error in the approximation of x by x^(k). The steps below imply that if (M^(-1)N)^k → 0 as k → ∞, then e^(k) → 0, which means that the sequence {x^(k)} converges to the solution x.

    a. Show that e^(k+1) = (M^(-1)N) e^(k).
    b. Prove by induction that e^(k) = (M^(-1)N)^k e^(0).

### Solution to Practice Problem

The system is not strictly diagonally dominant, so neither Jacobi nor Gauss-Seidel is guaranteed to work. In fact, both iterative methods produce sequences that fail to converge, even though the system has the unique solution x1 = 3, x2 = 2, x3 = 1.
However, the equations can be rearranged as

5x1 - 2x2 - 2x3 = 9
x1 - 3x2 + x3 = -2
-6x1 + 4x2 + 11x3 = 1

Now the coefficient matrix is strictly diagonally dominant, so we know Gauss-Seidel works with any initial vector. In fact, if x^(0) = 0, then x^(8) = (2.9987, 1.9992, .9996).
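Both claims in this solution can be checked numerically. The sketch below (Python; the function name is our own, and the signs in the system are reconstructed so that the stated solution x1 = 3, x2 = 2, x3 = 1 satisfies all three equations) verifies strict diagonal dominance of the rearranged system and then runs eight Gauss-Seidel sweeps:

```python
# Check strict diagonal dominance, then run Gauss-Seidel on the
# rearranged practice-problem system:
#    5x1 - 2x2 -  2x3 =  9
#     x1 - 3x2 +   x3 = -2
#   -6x1 + 4x2 + 11x3 =  1

def strictly_diagonally_dominant(A):
    """|a_ii| must exceed the sum of |a_ij|, j != i, in every row."""
    return all(abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(A))

A = [[5, -2, -2],
     [1, -3, 1],
     [-6, 4, 11]]
b = [9, -2, 1]

print(strictly_diagonally_dominant(A))  # True

# Gauss-Seidel: solve row i for x_i, always using the newest values.
x = [0.0, 0.0, 0.0]
for _ in range(8):
    for i in range(3):
        s = sum(A[i][j] * x[j] for j in range(3) if j != i)
        x[i] = (b[i] - s) / A[i][i]

print([round(v, 4) for v in x])  # [2.9987, 1.9992, 0.9996]
```

The same loop applied to the original equation order diverges, which is the behavior the solution describes.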


### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### Definition A square matrix M is invertible (or nonsingular) if there exists a matrix M 1 such that

0. Inverse Matrix Definition A square matrix M is invertible (or nonsingular) if there exists a matrix M such that M M = I = M M. Inverse of a 2 2 Matrix Let M and N be the matrices: a b d b M =, N = c

### Diagonal, Symmetric and Triangular Matrices

Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by

MODULAR ARITHMETIC KEITH CONRAD. Introduction We will define the notion of congruent integers (with respect to a modulus) and develop some basic ideas of modular arithmetic. Applications of modular arithmetic

### Lecture 5: Singular Value Decomposition SVD (1)

EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25-Sep-02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system

### Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).

MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix

### 1 Solving LPs: The Simplex Algorithm of George Dantzig

Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

### Linear Equations ! 25 30 35\$ & " 350 150% & " 11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development

MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Topic 4 Module 9 Introduction Systems of to Matrices Linear Equations Income = Tickets!

### Definition: Group A group is a set G together with a binary operation on G, satisfying the following axioms: a (b c) = (a b) c.

Algebraic Structures Abstract algebra is the study of algebraic structures. Such a structure consists of a set together with one or more binary operations, which are required to satisfy certain axioms.

### 1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

### ( % . This matrix consists of \$ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&

Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important

### Lecture 1 Newton s method.

Lecture 1 Newton s method. 1 Square roots 2 Newton s method. The guts of the method. A vector version. Implementation. The existence theorem. Basins of attraction. The Babylonian algorithm for finding

### Fourier Series. A Fourier series is an infinite series of the form. a + b n cos(nωx) +

Fourier Series A Fourier series is an infinite series of the form a b n cos(nωx) c n sin(nωx). Virtually any periodic function that arises in applications can be represented as the sum of a Fourier series.

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation