Chapter 8 Matrix computations

Concepts: Gauss elimination; downward pass; upper triangular systems; upward pass; square nonsingular systems have unique solutions; efficiency: counting arithmetic operations.

The main algorithm considered in this chapter for solving linear systems Ax = b is called Gauss elimination. Its basic strategy is to replace the original system step by step with equivalent simpler ones until the resulting system can be analyzed very easily. Two systems are called equivalent if they have the same sets of solution vectors x. Just two kinds of operations are used to produce the simpler systems:

(I) interchange two equations;
(II) subtract from one equation a scalar multiple of another.

Obviously, operation (I) doesn't change the set of solution vectors: it produces an equivalent system. Operation (II), applied for instance in the example in Section 8.1, has the following general appearance:

[1]  a_1 x_1 + a_2 x_2 + ... + a_n x_n = b
[2]  a'_1 x_1 + a'_2 x_2 + ... + a'_n x_n = b'
[3]  (a'_1 - c a_1) x_1 + ... + (a'_n - c a_n) x_n = b' - c b

where [3] = [2] - c·[1]. Clearly, any solution x_1, ..., x_n of [1] and [2] satisfies [3]. On the other hand, [2] = [3] + c·[1], hence any solution of [1] and [3] also satisfies [2]. Thus, operation (II) doesn't change the solution set; it produces an equivalent system.
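For instance (a made-up 2-by-2 illustration, not the book's Section 8.1 example), consider

[1]  x_1 + 2 x_2 = 5
[2]  3 x_1 + 4 x_2 = 11,

whose solution is x_1 = 1, x_2 = 2. Subtracting c = 3 times [1] from [2] yields [3]: -2 x_2 = -4. The pair (1, 2) satisfies [1] and [3] just as it satisfied [1] and [2], and x_1 has been eliminated from the second equation.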

Downward pass

The first steps of Gauss elimination, called the downward pass, convert the original system into an equivalent upper triangular system. A linear system is called upper triangular if a_ij = 0 whenever i > j. Figure 8.2.1 contains pseudocode for the downward pass. The algorithm considers in turn the diagonal coefficients a_11, ..., a_{n-1,n-1}, called pivots. Lines (2) to (4) ensure that each pivot is nonzero, if possible.

That is, if a diagonal coefficient is zero, you search downward for a nonzero coefficient; if you find one, you interchange rows to make it the pivot. If you don't, you proceed to the next equation. Nonzero pivots a_kk are used in Lines (6) and (7) to eliminate the unknown x_k from all equations below Equation [k]. This process clearly produces an equivalent upper triangular system.

1) for (k = 1; k < min(m,n); ++k) {
2)   for (i = k; i <= m && a_ik == 0; ++i) ;
3)   if (i <= m) {
4)     if (i > k) { interchange Equations [i] and [k]; }
5)     for (i = k + 1; i <= m; ++i) {
6)       M = a_ik / a_kk;
7)       subtract M times Equation [k] from Equation [i]; }}}

Figure 8.2.1 Pseudocode for the downward pass

For example, the downward pass converts a system of five equations in six unknowns x_1, ..., x_6 into an equivalent upper triangular system.
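The pseudocode translates almost line for line into C. The following is a minimal sketch, not the book's code: the augmented-array layout (row i holds the coefficients a[i][0..N-1] and the right-hand side a[i][N]), the fixed sizes M and N, the 0-based indexing, and the 1e-12 "effectively zero" test are all choices made for this illustration.

#include <math.h>

#define M 3   /* number of equations */
#define N 3   /* number of unknowns  */

/* Downward pass of Gauss elimination, following Figure 8.2.1 with
   0-based indexing; the book's Equation [i] is row i-1 here. */
void downward_pass(double a[M][N + 1])
{
    int minmn = M < N ? M : N;
    for (int k = 0; k < minmn - 1; ++k) {
        /* Lines (2)-(4): search downward for a usable pivot in column k
           (the book tests a_ik == 0 exactly; a tolerance is safer). */
        int i = k;
        while (i < M && fabs(a[i][k]) < 1e-12)
            ++i;
        if (i == M)
            continue;                  /* no pivot found: next k */
        if (i > k) {                   /* interchange rows i and k */
            for (int j = 0; j <= N; ++j) {
                double t = a[k][j];
                a[k][j] = a[i][j];
                a[i][j] = t;
            }
        }
        /* Lines (5)-(7): eliminate x_k from every row below row k. */
        for (i = k + 1; i < M; ++i) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j <= N; ++j)
                a[i][j] -= m * a[k][j];
        }
    }
}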

You can assess this algorithm's efficiency by counting the number of scalar arithmetic operations it requires. They're all in Lines (6) and (7) of the pseudocode in Figure 8.2.1. When you execute these for particular values of k and i, you need one division to evaluate M, then n - k + 1 subtractions and as many multiplications to subtract M times Equation [k] from Equation [i]. (While the equations have n + 1 coefficients, you know the first k coefficients of the result of the subtraction are zero.) Thus the total number of scalar operations is

  Σ_{k=1}^{m-1} (m - k)[1 + 2(n - k + 1)]
    = Σ_{k=1}^{m-1} (m - k)(2n - 2k + 3)
    = Σ_{k=1}^{m-1} [m(2n + 3) - (2m + 2n + 3)k + 2k^2]
    = m(m - 1)(2n + 3) - (2m + 2n + 3)(m - 1)m/2 + 2(m - 1)m(2m - 1)/6
    = m^2 n - (1/3)m^3 + lower-order terms.

In particular, for large m = n, about (2/3)n^3 operations are required.
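This count is easy to sanity-check numerically. The following throwaway C sketch (not from the text) tallies the cost of Lines (6) and (7) pair by pair and compares it with the leading terms of the closed form:

#include <stdio.h>

int main(void)
{
    int m = 25, n = 25;   /* e.g., the 25-equation temperature system */
    long count = 0;
    /* One division plus 2(n - k + 1) multiplications/subtractions for
       each pair (k, i), in the book's 1-based indexing. */
    for (int k = 1; k < (m < n ? m : n); ++k)
        for (int i = k + 1; i <= m; ++i)
            count += 1 + 2 * (n - k + 1);
    long leading = (long)m * m * n - (long)m * m * m / 3;
    printf("exact: %ld   m^2 n - m^3/3: %ld\n", count, leading);
    /* Prints: exact: 10700   m^2 n - m^3/3: 10417 */
    return 0;
}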

This operation count has two major consequences. First, solving large linear systems can require an excessive amount of time. For example, computing the steady-state temperature distribution for the problem at the beginning of this section required solving a system of 25 equations, one for each interior cell. The result will be only a coarse approximation, because each cell is one square cm. Doubling the resolution to use cells with a 0.5-cm edge requires four times as many cells and equations, hence 4^3 = 64 times as many operations. Second, solving large linear systems can require a huge number of scalar operations, each of which depends on the preceding results. This can produce excessive round-off error. For example, computing the temperature distribution just mentioned for a 0.5-cm grid requires about 700,000 operations. These large numbers justify spending considerable effort to minimize the use of Gauss elimination in numerical analysis applications. Unfortunately, there are few alternatives. Linear systems of special form, where nonzero coefficients are rare and occur in symmetric patterns, can sometimes be solved by special algorithms that are more efficient. For systems of general form there are algorithms that are somewhat more efficient than Gauss elimination, but only for extremely large systems, and they are considerably more difficult to implement in software. Thus, round-off error in solving linear systems is often unavoidable. This subject has been studied in detail, particularly by Wilkinson [55].

Upward pass for nonsingular square systems

When you apply Gauss elimination to a system of m linear equations in n unknowns x_1, ..., x_n, the downward pass always yields an equivalent upper triangular system. This system may have a unique solution, infinitely many, or none at all. In one situation, you can easily determine a unique solution: namely, when m = n and none of the diagonal coefficients of the upper triangular system is zero. Such a system is called nonsingular. For example, if the upper triangular system has the form

[1]  a_11 x_1 + a_12 x_2 + a_13 x_3 + a_14 x_4 = b_1
[2]             a_22 x_2 + a_23 x_3 + a_24 x_4 = b_2
[3]                        a_33 x_3 + a_34 x_4 = b_3
[4]                                   a_44 x_4 = b_4

with a_11, a_22, a_33, a_44 ≠ 0, then you can solve [4] for x_4 and substitute this value into [3], then solve that equation for x_3. You can substitute the x_3 and x_4 values into Equation [2] and solve that for x_2. Finally, Equation [1] yields a value for x_1. This process is called the upward pass of Gauss elimination. In pseudocode, it has the form

for (k = n; k >= 1; --k)
    x_k = (b_k - Σ_{j=k+1}^{n} a_kj x_j) / a_kk;

For k = n down to 1, the assignment statement in the upward pass requires n - k subtractions, n - k multiplications, and one division. Thus, the total number of scalar arithmetic operations required is

  Σ_{k=1}^{n} [2(n - k) + 1] = 2 Σ_{k=1}^{n} (n - k) + n = 2 Σ_{k=0}^{n-1} k + n = n(n - 1) + n = n^2.

Notice two facts about nonsingular square systems. First, their solutions are unique: each iteration of the for-loop in the upward pass pseudocode determines the value of one entry of the solution vector. Second, the nonsingularity criterion (no zero among the diagonal coefficients of the upper triangular system) doesn't involve the right-hand sides of the equations.
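In C, the upward pass is equally direct. The sketch below reuses the augmented layout and the constant N from the downward-pass sketch above (again an illustration, not the book's code); it assumes the system is already upper triangular with nonzero diagonal.

/* Upward pass (back substitution) for a nonsingular upper triangular
   N x N system: row k holds a[k][0..N-1] and b_k in a[k][N].
   Computes x_k = (b_k - sum_{j>k} a_kj x_j) / a_kk for k = N-1, ..., 0. */
void upward_pass(double a[N][N + 1], double x[N])
{
    for (int k = N - 1; k >= 0; --k) {
        double s = a[k][N];              /* start from b_k       */
        for (int j = k + 1; j < N; ++j)  /* the subtractions and */
            s -= a[k][j] * x[j];         /* multiplications      */
        x[k] = s / a[k][k];              /* one division per row */
    }
}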

Here's a summary of the preceding discussion of the downward and upward passes. Consider a square linear system

  a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
  a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
        ...
  a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n,

that is, Ax = b with A = [a_ij], x = [x_1, ..., x_n]^T, and b = [b_1, ..., b_n]^T. Suppose the downward pass of Gauss elimination yields an upper triangular system with no zero among its diagonal coefficients ā_kk. Then this situation will occur for any coefficient vector b on the right-hand side, and each of these systems has a unique solution x. Each solution may be computed by the upward pass, executed after the downward. For large n, the downward pass requires about (2/3)n^3 scalar arithmetic operations; the upward pass requires about n^2.
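Tying the two passes together, a small driver (a sketch that assumes the downward_pass and upward_pass functions above sit in the same file, with M = N = 3) solves a sample nonsingular square system:

#include <stdio.h>

int main(void)
{
    /* 2x_1 + x_2 - x_3 = 8;  -3x_1 - x_2 + 2x_3 = -11;  -2x_1 + x_2 + 2x_3 = -3 */
    double a[M][N + 1] = {
        { 2,  1, -1,   8},
        {-3, -1,  2, -11},
        {-2,  1,  2,  -3},
    };
    double x[N];

    downward_pass(a);    /* equivalent upper triangular system       */
    upward_pass(a, x);   /* unique solution: the pivots are nonzero  */
    printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);  /* x = (2, 3, -1) */
    return 0;
}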