# Module 4: Solving Linear Algebraic Equations


## Section 5: Iterative Solution Techniques

In this approach, we start with some initial guess, say $x^{(0)}$, for the solution $x^*$ of $Ax = b$ and generate an improved solution estimate $x^{(k+1)}$ from the previous approximation $x^{(k)}$. This approach is very effective for solving the systems of equations that arise from differential equations, integral equations and related problems [Kelley].

Let the residue vector $r$ be defined as

$$ r^{(k)} = b - A x^{(k)} \tag{57} $$

i.e. $r_i^{(k)} = b_i - \sum_{j=1}^{n} a_{ij} x_j^{(k)}$ for $i = 1, 2, \ldots, n$. The iteration sequence is terminated when some norm of the residue becomes sufficiently small, i.e.

$$ \left\| r^{(k)} \right\| \le \varepsilon \tag{58} $$

where $\varepsilon$ is an arbitrarily small positive number. Another possible termination criterion can be

$$ \frac{\left\| x^{(k+1)} - x^{(k)} \right\|}{\left\| x^{(k+1)} \right\|} \le \varepsilon \tag{59} $$

It may be noted that the latter condition is practically equivalent to the previous termination condition.

A simple way to form an iterative scheme is the Richardson iteration [Kelley]

$$ x^{(k+1)} = \left( I - A \right) x^{(k)} + b = x^{(k)} + r^{(k)} \tag{60} $$

or the Richardson iteration preconditioned with approximate inversion

$$ x^{(k+1)} = \left( I - M A \right) x^{(k)} + M b = x^{(k)} + M r^{(k)} \tag{61} $$

where matrix $M$ is called an approximate inverse of $A$ if $\left\| I - M A \right\| < 1$.

A question that naturally arises is 'will the iterations converge to the solution of $Ax = b$?'. In this section, to begin with, some well known iterative schemes are presented. Their convergence analysis is presented next. In the derivations that follow, it is implicitly assumed that the diagonal elements of matrix $A$ are non-zero, i.e. $a_{ii} \neq 0$ for $i = 1, 2, \ldots, n$. If this is not the case, a simple row exchange is often sufficient to satisfy this condition.
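As an illustration of equations (57)-(61), a minimal NumPy sketch of the preconditioned Richardson iteration is given below. The test matrix `A`, right-hand side `b` and the simple diagonal choice for the approximate inverse `M` are assumptions made for the example, not data from the notes.

```python
import numpy as np

def richardson(A, b, M, x0, tol=1e-8, max_iter=1000):
    """Preconditioned Richardson iteration: x <- x + M (b - A x), eq. (61)."""
    x = x0.astype(float).copy()
    for k in range(max_iter):
        r = b - A @ x                      # residue vector, eq. (57)
        if np.linalg.norm(r) <= tol:       # termination criterion, eq. (58)
            return x, k
        x = x + M @ r                      # choosing M = I recovers eq. (60)
    return x, max_iter

# Assumed example data
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
M = np.diag(1.0 / np.diag(A))              # a crude approximate inverse of A
print(richardson(A, b, M, np.zeros(2)))
```

For this data, $\left\| I - MA \right\|_{\infty} = 0.4 < 1$, so $M$ qualifies as an approximate inverse in the sense defined above.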

### 5.1 Iterative Algorithms

#### Jacobi Method

Suppose we have a guess solution $x^{(k)} = \left[ x_1^{(k)}, x_2^{(k)}, \ldots, x_n^{(k)} \right]^T$ for $x^*$. To generate an improved estimate starting from $x^{(k)}$, consider the first equation in the set of equations $Ax = b$, i.e.,

$$ a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n = b_1 \tag{62} $$

Rearranging this equation, we can arrive at an iterative formula for computing $x_1^{(k+1)}$ as

$$ x_1^{(k+1)} = \frac{1}{a_{11}} \left[ b_1 - a_{12} x_2^{(k)} - \ldots - a_{1n} x_n^{(k)} \right] \tag{63} $$

Similarly, using the second equation from $Ax = b$, we can derive

$$ x_2^{(k+1)} = \frac{1}{a_{22}} \left[ b_2 - a_{21} x_1^{(k)} - a_{23} x_3^{(k)} - \ldots - a_{2n} x_n^{(k)} \right] \tag{64} $$

In general, using the $i$'th row of $Ax = b$, we can generate an improved guess for the $i$'th element of $x$ as follows

$$ x_i^{(k+1)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1,\, j \neq i}^{n} a_{ij} x_j^{(k)} \right] $$

The above equation can also be rearranged as follows

$$ x_i^{(k+1)} = x_i^{(k)} + \frac{r_i^{(k)}}{a_{ii}} \tag{65} $$

where $r_i^{(k)}$ is the $i$'th element of the residue vector defined by equation (57). The algorithm for implementing the Jacobi iteration scheme is summarized in Table 1.

Table 1: Algorithm for Jacobi Iterations

INITIALIZE: $x^{(0)}$, $k = 0$, tolerance $\varepsilon$, $\delta$ = large value
WHILE ( $\delta > \varepsilon$ )
  FOR $i = 1$ TO $n$: compute $x_i^{(k+1)}$ using equation (65)
  END FOR
  $\delta = \left\| x^{(k+1)} - x^{(k)} \right\|$ ; $k \leftarrow k + 1$
END WHILE
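The Jacobi update of Table 1 can be written compactly in NumPy because every element of $x^{(k+1)}$ depends only on $x^{(k)}$. The following sketch is for illustration; the strictly diagonally dominant test system is an assumption, not an example from the notes.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Jacobi iterations: x_i(k+1) = x_i(k) + r_i(k) / a_ii, eq. (65)."""
    x = x0.astype(float).copy()
    d = np.diag(A)
    for k in range(max_iter):
        r = b - A @ x                     # residue vector at iteration k, eq. (57)
        x_new = x + r / d                 # all elements updated from x(k) only
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Assumed strictly diagonally dominant test system
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(jacobi(A, b, np.zeros(3)))
```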

#### Gauss-Seidel Method

When matrix $A$ is large, there is a practical difficulty with the Jacobi method: it is required to store all components of $x^{(k)}$ in the computer memory (as separate variables) until the calculation of $x^{(k+1)}$ is over. The Gauss-Seidel method overcomes this difficulty by using $x_1^{(k+1)}$ immediately in the next equation while computing $x_2^{(k+1)}$, and so on. This modification leads to the following set of equations

$$ x_1^{(k+1)} = \frac{1}{a_{11}} \left[ b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - \ldots - a_{1n} x_n^{(k)} \right] \tag{66} $$

$$ x_2^{(k+1)} = \frac{1}{a_{22}} \left[ b_2 - a_{21} x_1^{(k+1)} - a_{23} x_3^{(k)} - \ldots - a_{2n} x_n^{(k)} \right] \tag{67} $$

$$ x_3^{(k+1)} = \frac{1}{a_{33}} \left[ b_3 - a_{31} x_1^{(k+1)} - a_{32} x_2^{(k+1)} - a_{34} x_4^{(k)} - \ldots - a_{3n} x_n^{(k)} \right] \tag{68} $$

In general, for the $i$'th element of $x$, we have

$$ x_i^{(k+1)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] $$

To simplify programming, the above equation can be rearranged as follows

$$ x_i^{(k+1)} = x_i^{(k)} + \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)} \right] \tag{69} $$

The algorithm for implementing the Gauss-Seidel iteration scheme is summarized in Table 2.

Table 2: Algorithm for Gauss-Seidel Iterations

INITIALIZE: $x^{(0)}$, $k = 0$, tolerance $\varepsilon$, $\delta$ = large value
WHILE ( $\delta > \varepsilon$ )
  FOR $i = 1$ TO $n$: compute $x_i^{(k+1)}$ using equation (69)
  END FOR
  $\delta = \left\| x^{(k+1)} - x^{(k)} \right\|$ ; $k \leftarrow k + 1$
END WHILE
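What distinguishes Gauss-Seidel from Jacobi is that each newly computed element is used immediately within the same sweep, so only one copy of the solution vector needs to be stored. The sketch below makes the in-place update explicit; the test system is the same assumed example used above.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=500):
    """Gauss-Seidel iterations: updated elements are used immediately, eq. (69)."""
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds x(k+1) entries, x[i+1:] still holds x(k) entries
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) <= tol * np.linalg.norm(x):
            return x, k + 1
    return x, max_iter

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, b, np.zeros(3)))
```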

#### Relaxation Method

Suppose we have a starting value, say $y$, of a quantity and we wish to approach a target value, say $y^*$, by some method. Let application of the method change the value from $y$ to $\hat{y}$. If $\hat{y}$ lies between $y$ and $y^*$, then we can approach $y^*$ faster by magnifying the change $\left( \hat{y} - y \right)$ [Strang]. In order to achieve this, we apply a magnifying factor $\omega > 1$ and get

$$ y^{new} = y + \omega \left( \hat{y} - y \right) \tag{70} $$

This amplification process is an extrapolation and is an example of over-relaxation. If the intermediate value $\hat{y}$ tends to overshoot the target $y^*$, then we may have to use $\omega < 1$; this is called under-relaxation.

Application of over-relaxation to the Gauss-Seidel method leads to the following set of equations

$$ x_i^{(k+1)} = x_i^{(k)} + \omega \left[ \hat{x}_i^{(k+1)} - x_i^{(k)} \right], \qquad i = 1, 2, \ldots, n \tag{71} $$

where the $\hat{x}_i^{(k+1)}$ are generated using the Gauss-Seidel method, i.e.,

$$ \hat{x}_i^{(k+1)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right] \tag{72} $$

The steps in the implementation of the over-relaxation iteration scheme are summarized in Table 3. It may be noted that $\omega$ is a tuning parameter, which is chosen such that $0 < \omega < 2$.

Table 3: Algorithm for Over-Relaxation Iterations

INITIALIZE: $x^{(0)}$, $\omega$, $k = 0$, tolerance $\varepsilon$, $\delta$ = large value
WHILE ( $\delta > \varepsilon$ )
  FOR $i = 1$ TO $n$: compute $\hat{x}_i^{(k+1)}$ using equation (72) and $x_i^{(k+1)}$ using equation (71)
  END FOR
  $\delta = \left\| x^{(k+1)} - x^{(k)} \right\|$ ; $k \leftarrow k + 1$
END WHILE
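Over-relaxation is a one-line change on top of the Gauss-Seidel sweep: compute the Gauss-Seidel value and then extrapolate by $\omega$, as in equations (71)-(72). In the sketch below the test system and the value $\omega = 1.1$ are arbitrary choices made for illustration.

```python
import numpy as np

def relaxation(A, b, x0, omega=1.1, tol=1e-8, max_iter=500):
    """Over-relaxation: Gauss-Seidel value, then extrapolation by omega."""
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            z = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]   # eq. (72)
            x[i] = x[i] + omega * (z - x[i])                                 # eq. (71)
        if np.linalg.norm(x - x_old) <= tol * np.linalg.norm(x):
            return x, k + 1
    return x, max_iter

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(relaxation(A, b, np.zeros(3), omega=1.1))
```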

### 5.2 Convergence Analysis of Iterative Methods [3, 2]

#### Vector-Matrix Representation

When $Ax = b$ is to be solved iteratively, a question that naturally arises is 'under what conditions do the iterations converge?'. The convergence analysis can be carried out if the above sets of iterative equations are expressed in vector-matrix notation. For example, the iterative equations in the Gauss-Seidel method can be arranged as follows

$$
\begin{aligned}
a_{11} x_1^{(k+1)} &= b_1 - a_{12} x_2^{(k)} - \ldots - a_{1n} x_n^{(k)} \\
a_{21} x_1^{(k+1)} + a_{22} x_2^{(k+1)} &= b_2 - a_{23} x_3^{(k)} - \ldots - a_{2n} x_n^{(k)} \\
&\ \, \vdots \\
a_{n1} x_1^{(k+1)} + a_{n2} x_2^{(k+1)} + \ldots + a_{nn} x_n^{(k+1)} &= b_n
\end{aligned}
\tag{73}
$$

Let $D$, $L$ and $U$ be the diagonal, strictly lower triangular and strictly upper triangular parts of $A$, i.e.,

$$ A = L + D + U \tag{74} $$

(The representation given by equation (74) should NOT be confused with the matrix factorization $A = LU$.) Using these matrices, the Gauss-Seidel iteration can be expressed as follows

$$ \left( L + D \right) x^{(k+1)} = -U x^{(k)} + b \tag{75} $$

or

$$ x^{(k+1)} = -\left( L + D \right)^{-1} U x^{(k)} + \left( L + D \right)^{-1} b \tag{76} $$

Similarly, rearranging the iterative equations for the Jacobi method, we arrive at

$$ x^{(k+1)} = -D^{-1} \left( L + U \right) x^{(k)} + D^{-1} b \tag{77} $$

and for the relaxation method we get

$$ \left( D + \omega L \right) x^{(k+1)} = \left[ \left( 1 - \omega \right) D - \omega U \right] x^{(k)} + \omega b \tag{78} $$

Thus, in general, an iterative method can be developed by splitting matrix $A$. If $A$ is expressed as

$$ A = S - T \tag{79} $$

then equation $Ax = b$ can be expressed as

$$ S x = T x + b $$

Starting from a guess solution

$$ x^{(0)} = \left[ x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)} \right]^T \tag{80} $$

we generate a sequence of approximate solutions as follows

$$ x^{(k+1)} = S^{-1} \left[ T x^{(k)} + b \right], \qquad k = 0, 1, 2, \ldots \tag{81} $$

The requirements on the $S$ and $T$ matrices are as follows [3]: matrix $A$ should be decomposed into $A = S - T$ such that

- matrix $S$ should be easily invertible, and
- the sequence $\left\{ x^{(k)} \right\}$ should converge to $x^*$, where $x^*$ is the solution of $Ax = b$.

The popular iterative formulations correspond to the following choices of matrices $S$ and $T$ [3,4]:

Jacobi Method:
$$ S = D, \qquad T = -\left( L + U \right) \tag{82} $$

Forward Gauss-Seidel Method:
$$ S = L + D, \qquad T = -U \tag{83} $$

Relaxation Method:
$$ S = \frac{1}{\omega} D + L, \qquad T = \left( \frac{1}{\omega} - 1 \right) D - U \tag{84} $$

Backward Gauss-Seidel: in this case, each iteration begins with the update of the $n$'th coordinate of $x$ rather than the first. This results in the following splitting of matrix $A$ [4]

$$ S = U + D, \qquad T = -L \tag{85} $$

In the Symmetric Gauss-Seidel approach, a forward Gauss-Seidel iteration is followed by a backward Gauss-Seidel iteration.
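The splitting viewpoint of equations (79)-(85) suggests one generic driver that covers Jacobi, Gauss-Seidel and relaxation simply by swapping $S$ and $T$. The helper names below and the test system are assumptions made for this sketch.

```python
import numpy as np

def split_iteration(S, T, b, x0, tol=1e-8, max_iter=500):
    """Generic scheme x(k+1) = S^{-1} [ T x(k) + b ], eq. (81)."""
    x = x0.astype(float).copy()
    for k in range(max_iter):
        x_new = np.linalg.solve(S, T @ x + b)
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k + 1
        x = x_new
    return x, max_iter

def jacobi_split(A):                      # eq. (82): S = D, T = -(L + U)
    D = np.diag(np.diag(A))
    return D, D - A

def gauss_seidel_split(A):                # eq. (83): S = L + D, T = -U
    S = np.tril(A)
    return S, S - A

def relaxation_split(A, omega):           # eq. (84): S = D/omega + L
    S = np.diag(np.diag(A)) / omega + np.tril(A, -1)
    return S, S - A

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
for name, (S, T) in [("Jacobi", jacobi_split(A)),
                     ("Gauss-Seidel", gauss_seidel_split(A)),
                     ("Relaxation", relaxation_split(A, 1.1))]:
    print(name, split_iteration(S, T, b, np.zeros(3)))
```

In each helper, $T$ is computed as $S - A$, which is exactly the $T$ required by the splitting $A = S - T$.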

#### Iterative Scheme as a Linear Difference Equation

In order to solve equation $Ax = b$, we have formulated an iterative scheme

$$ x^{(k+1)} = S^{-1} \left[ T x^{(k)} + b \right] \tag{86} $$

Let the true solution of $Ax = b$ be $x^*$. Since $A x^* = b$, the true solution satisfies

$$ x^* = S^{-1} \left[ T x^* + b \right] \tag{87} $$

Defining the error vector

$$ e^{(k)} = x^{(k)} - x^* \tag{88} $$

and subtracting equation (87) from equation (86), we get

$$ e^{(k+1)} = S^{-1} T \, e^{(k)} \tag{89} $$

Thus, if we start with some $e^{(0)}$, then after $k$ iterations we have

$$ e^{(1)} = S^{-1} T \, e^{(0)} \tag{90} $$

$$ e^{(2)} = S^{-1} T \, e^{(1)} = \left( S^{-1} T \right)^2 e^{(0)} \tag{91} $$

$$ \vdots \tag{92} $$

$$ e^{(k)} = \left( S^{-1} T \right)^k e^{(0)} \tag{93} $$

The convergence of the iterative scheme is assured if

$$ \lim_{k \to \infty} \left\| e^{(k)} \right\| = 0 \tag{94} $$

i.e.

$$ \lim_{k \to \infty} \left( S^{-1} T \right)^k e^{(0)} = \bar{0} \tag{95} $$

for any initial guess vector $x^{(0)}$.
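The error recursion (89) can be verified numerically: iterate on an assumed system with a known solution and compare the actual error $e^{(k+1)}$ with $S^{-1} T e^{(k)}$. The matrix, right-hand side and the Jacobi splitting below are illustrative assumptions.

```python
import numpy as np

# Assumed test system and Jacobi splitting: S = D, T = S - A
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
S = np.diag(np.diag(A))
T = S - A
x_true = np.linalg.solve(A, b)            # reference solution x*

x = np.zeros(3)
e = x - x_true                            # e(0), eq. (88)
for k in range(5):
    x = np.linalg.solve(S, T @ x + b)     # iteration (86)
    e_pred = np.linalg.solve(S, T @ e)    # predicted error, eq. (89)
    e = x - x_true                        # actual error e(k+1)
    print(k + 1, np.linalg.norm(e), np.linalg.norm(e - e_pred))
```

The mismatch printed in the last column stays at machine precision while the error norm shrinks geometrically, which is the behaviour predicted by equations (89)-(93).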

Alternatively, consider application of the general iteration equation (86) $k$ times starting from the initial guess $x^{(0)}$. At the $k$'th iteration step, we have

$$ x^{(k)} = \left( S^{-1} T \right)^k x^{(0)} + \left[ I + S^{-1} T + \left( S^{-1} T \right)^2 + \ldots + \left( S^{-1} T \right)^{k-1} \right] S^{-1} b \tag{96} $$

If we select $S^{-1} T$ such that

$$ \lim_{k \to \infty} \left( S^{-1} T \right)^k = \bar{0} \tag{97} $$

where $\bar{0}$ represents the null matrix, then, using the identity

$$ \left[ I - S^{-1} T \right]^{-1} = I + S^{-1} T + \left( S^{-1} T \right)^2 + \ldots $$

we can write, for large $k$,

$$ x^{(k)} \simeq \left[ I - S^{-1} T \right]^{-1} S^{-1} b = \left[ S - T \right]^{-1} b = A^{-1} b $$

The above expression clearly explains how the iteration sequence generates a numerical approximation to $A^{-1} b$, provided condition (97) is satisfied.

#### Convergence Criteria for Iteration Schemes

It may be noted that equation (89) is a linear difference equation of the form

$$ z^{(k+1)} = B z^{(k)} \tag{98} $$

with a specified initial condition $z^{(0)}$. Here, $z \in \mathbb{R}^n$ and $B$ is an $n \times n$ matrix. In Appendix A, we have analyzed the behavior of the solutions of linear difference equations of type (98). The criterion for convergence of iteration equation (89) can be derived using the results derived in Appendix A. The necessary and sufficient condition for convergence of (89) can be stated as

$$ \rho \left( S^{-1} T \right) < 1 $$

i.e. the spectral radius of matrix $S^{-1} T$ should be less than one.

The necessary and sufficient condition stated above requires computation of the eigenvalues of $S^{-1} T$, which is a computationally demanding task when the matrix dimension is large. For a large dimensional matrix, if we have to perform such demanding computations just to check this condition before starting the iterations, then we might as well solve the problem by a direct method rather than use an iterative approach to save computations. Thus, there is a need for alternate criteria for convergence which can be checked easily before starting the iterations. Theorem 14 in Appendix A states that the spectral radius of a matrix is no larger than any induced norm of the matrix. Thus, for matrix $S^{-1} T$ we have

$$ \rho \left( S^{-1} T \right) \le \left\| S^{-1} T \right\| $$

where $\left\| \cdot \right\|$ is any induced matrix norm. Using this result, we can arrive at the following sufficient conditions for the convergence of the iterations:

$$ \left\| S^{-1} T \right\|_1 < 1 \qquad \text{or} \qquad \left\| S^{-1} T \right\|_{\infty} < 1 $$

Evaluating the 1-norm or the $\infty$-norm of $S^{-1} T$ is significantly easier than evaluating its spectral radius. Satisfaction of any one of the above conditions implies $\rho \left( S^{-1} T \right) < 1$. However, it may be noted that these are only sufficient conditions: if $\left\| S^{-1} T \right\|_1 \ge 1$ or $\left\| S^{-1} T \right\|_{\infty} \ge 1$, we cannot conclude anything about the convergence of the iterations.

If matrix $A$ has some special properties, such as diagonal dominance or symmetry and positive definiteness, then convergence is ensured for some of the iterative techniques. Some of the important convergence results available in the literature are summarized here.
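These criteria are easy to automate for small test problems. The sketch below computes the spectral radius and the induced 1- and $\infty$-norms of the Jacobi and Gauss-Seidel iteration matrices for an assumed test matrix; keep in mind that the norm tests are only sufficient conditions.

```python
import numpy as np

def iteration_matrix(S, T):
    """B = S^{-1} T, the matrix governing the error dynamics, eq. (89)."""
    return np.linalg.solve(S, T)

def convergence_report(B, name):
    rho = max(abs(np.linalg.eigvals(B)))        # necessary and sufficient: rho < 1
    n1 = np.linalg.norm(B, 1)                   # induced 1-norm (sufficient only)
    ninf = np.linalg.norm(B, np.inf)            # induced infinity-norm (sufficient only)
    print(f"{name}: rho = {rho:.3f}, 1-norm = {n1:.3f}, inf-norm = {ninf:.3f}")

# Assumed test matrix
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
convergence_report(iteration_matrix(D, D - A), "Jacobi")               # S = D
S_gs = np.tril(A)
convergence_report(iteration_matrix(S_gs, S_gs - A), "Gauss-Seidel")   # S = L + D
```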

Definition 1: A matrix $A$ is called strictly diagonally dominant if

$$ \left| a_{ii} \right| > \sum_{j=1,\, j \neq i}^{n} \left| a_{ij} \right|, \qquad i = 1, 2, \ldots, n \tag{99} $$

Theorem 2 [2]: A sufficient condition for the convergence of the Jacobi and Gauss-Seidel methods is that the matrix $A$ of the linear system is strictly diagonally dominant.

Proof: Refer to Appendix B.

Theorem 3 [5]: The Gauss-Seidel iterations converge if matrix $A$ is symmetric and positive definite.

Proof: Refer to Appendix B.

Theorem 4 [3]: For an arbitrary matrix $A$, the necessary condition for the convergence of the relaxation method is $0 < \omega < 2$.

Proof: Refer to Appendix B.

Theorem 5 [2]: When matrix $A$ is strictly diagonally dominant, a sufficient condition for the convergence of the relaxation method is that $0 < \omega \le 1$.

Proof: Left to the reader as an exercise.

Theorem 6 [2]: For a symmetric and positive definite matrix $A$, the relaxation method converges if and only if $0 < \omega < 2$.

Proof: Left to the reader as an exercise.

Theorems 3 and 6 guarantee convergence of the Gauss-Seidel method or the relaxation method when matrix $A$ is symmetric and positive definite. Now, what do we do if matrix $A$ in $Ax = b$ is not symmetric and positive definite? We can multiply both sides of the equation by $A^T$ and transform the original problem as follows

$$ \left( A^T A \right) x = A^T b \tag{100} $$

If matrix $A$ is non-singular, then matrix $A^T A$ is always symmetric and positive definite, as

$$ x^T \left( A^T A \right) x = \left( A x \right)^T \left( A x \right) > 0 \quad \text{for any } x \neq \bar{0} \tag{101} $$

Now, for the transformed problem, we are guaranteed convergence if we use the Gauss-Seidel method, or the relaxation method with $0 < \omega < 2$.

Example 7 [3]: Consider the system (102). For the Jacobi method, working out (103)-(104) shows that the error norm at each iteration is reduced by a factor of 0.5.

For the Gauss-Seidel method, working out (105)-(106) shows that the error norm at each iteration is reduced by a factor of 1/4. This implies that, for the example under consideration, one Gauss-Seidel iteration reduces the error by as much as two Jacobi iterations.

For the relaxation method, we obtain equations (107)-(112). Now, if we plot $\rho \left( S^{-1} T \right)$ versus $\omega$, it is observed that the spectral radius attains a minimum at an optimum value of $\omega$. From equation (110), it follows that relation (113) holds at the optimum $\omega$. Evaluating (114)-(117) at this optimum shows a major reduction in the spectral radius when compared to the Gauss-Seidel method. Thus, the error norm at each iteration is reduced by a factor of 1/16 if we choose the optimal $\omega$.
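The dependence of the spectral radius on $\omega$ can be reproduced numerically by scanning $\omega$ and evaluating $\rho(S^{-1}T)$ for the relaxation splitting of equation (84). The 2x2 matrix below is an assumed illustration, not necessarily the system of Example 7; the qualitative picture is a sharp minimum at some $\omega$ between 1 and 2.

```python
import numpy as np

def relaxation_spectral_radius(A, omega):
    """Spectral radius of S^{-1} T for the relaxation splitting, eq. (84)."""
    S = np.diag(np.diag(A)) / omega + np.tril(A, -1)
    T = S - A
    return max(abs(np.linalg.eigvals(np.linalg.solve(S, T))))

# Assumed symmetric positive definite test matrix
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
omegas = np.linspace(0.05, 1.95, 39)
radii = [relaxation_spectral_radius(A, w) for w in omegas]
w_best = omegas[int(np.argmin(radii))]
print(f"approximate optimum omega = {w_best:.2f}, rho = {min(radii):.3f}")
```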

Example 8: Consider the system (118). If we use the Gauss-Seidel method to solve for $x$, the iterations do not converge, as shown by (119)-(120).

Now, let us modify the problem by pre-multiplying both sides by $A^T$, i.e. the modified problem is $\left( A^T A \right) x = A^T b$. The modified problem becomes (121). The matrix $A^T A$ is symmetric and positive definite and, according to Theorem 3, the iterations should converge if the Gauss-Seidel method is used. For the transformed problem, we have (122)-(123), and within 220 iterations we get the solution (124), which is close to the exact solution (125) computed as $x^* = A^{-1} b$.
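The transformation of equation (100) is straightforward to exercise in code: form $A^T A$ and $A^T b$ and hand the symmetric positive definite system to the Gauss-Seidel sweep. The nonsymmetric matrix below is an assumed illustration (not the system of Example 8), chosen so that plain Gauss-Seidel diverges while the transformed problem converges.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=2000):
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) <= tol * np.linalg.norm(x):
            return x, k + 1
    return x, max_iter

# Assumed nonsymmetric system for which plain Gauss-Seidel diverges
A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
b = np.array([5.0, 4.0])

# Transformed problem (A^T A) x = A^T b, eq. (100): symmetric positive definite
x, iters = gauss_seidel(A.T @ A, A.T @ b, np.zeros(2))
print(x, iters, "direct solution:", np.linalg.solve(A, b))
```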

Table 4: Rate of Convergence of Iterative Methods

Example 9: Consider the system (126). If it is desired to solve the resulting problem using the Jacobi method or the Gauss-Seidel method, will the iterations converge? To establish convergence of the Jacobi / Gauss-Seidel method, we can check whether $A$ is strictly diagonally dominant. Since the inequalities of equation (99) hold for every row, matrix $A$ is strictly diagonally dominant, which is a sufficient condition for the convergence of the Jacobi / Gauss-Seidel iterations (Theorem 2). Thus, the Jacobi / Gauss-Seidel iterations will converge to the solution starting from any initial guess.

From these examples, we can clearly see that the rate of convergence depends on the spectral radius of the iteration matrix $S^{-1} T$. A comparison of the rates of convergence obtained from the analysis of some simple problems is presented in Table 4 [2].
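Checking strict diagonal dominance, as done in Example 9, is easy to automate by applying Definition 1 row by row. The test matrix in this sketch is an assumption, not the system (126).

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Definition 1 / eq. (99): |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag_sum = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag_sum))

# Assumed test matrix
A = np.array([[7.0, 1.0, 2.0],
              [3.0, 9.0, 1.0],
              [1.0, 2.0, 5.0]])
print(is_strictly_diagonally_dominant(A))   # True, so Jacobi / Gauss-Seidel converge (Theorem 2)
```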
