Solving Linear Systems of Equations. Gerald Recktenwald Portland State University Mechanical Engineering Department


2 These slides are a supplement to the book Numerical Methods with Matlab: Implementations and Applications, by Gerald W. Recktenwald, © Prentice-Hall, Upper Saddle River, NJ. These slides are copyright © Gerald W. Recktenwald. The PDF version of these slides may be downloaded or stored or printed only for noncommercial, educational use. The repackaging or sale of these slides in any form, without written consent of the author, is prohibited. The latest version of this PDF file, along with other supplemental material for the book, can be found at web.cecs.pdx.edu/~gerry/nmm/. Version 0.88, August 2006. page 1
3 Primary Topics: Basic Concepts; Gaussian Elimination; Limitations on Numerical Solutions to Ax = b; Factorization Methods; Nonlinear Systems of Equations. NMM: Solving Systems of Equations page 2
4 Basic Concepts: Matrix Formulation; Requirements for a Solution; Consistency; The Role of rank(A); Formal Solution when A is n × n. NMM: Solving Systems of Equations page 3
5 Pump Curve Model (1) Objective: Find the coefficients of the quadratic equation that approximates the pump curve data. [Figure: pump curve data, head (m) versus flow rate (m^3/s).] Model equation: h = c1 q^2 + c2 q + c3. Write the model equation for three points on the curve. This gives three equations for the three unknowns c1, c2, and c3. NMM: Solving Systems of Equations page 4
6 Pump Curve Model (2) Points from the pump curve give three (q, h) pairs. Substitute each pair of data points into the model equation:
115 = c1 q1^2 + c2 q1 + c3,
110 = c1 q2^2 + c2 q2 + c3,
92.5 = c1 q3^2 + c2 q3 + c3.
Rewrite in matrix form as
[ q1^2  q1  1 ] [c1]   [115 ]
[ q2^2  q2  1 ] [c2] = [110 ]
[ q3^2  q3  1 ] [c3]   [92.5]
NMM: Solving Systems of Equations page 5
7 Pump Curve Model (3) Using more compact symbolic notation: Ax = b, where A is the 3 × 3 matrix built from the measured q values, x = [c1; c2; c3] holds the unknown coefficients, and b holds the measured heads. NMM: Solving Systems of Equations page 6
8 Pump Curve Model (4) In general, for any three (q, h) pairs the system is still Ax = b with
A = [ q1^2  q1  1 ]    x = [c1]    b = [h1]
    [ q2^2  q2  1 ]        [c2]        [h2]
    [ q3^2  q3  1 ]        [c3]        [h3]
NMM: Solving Systems of Equations page 7
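The matrix assembly above can be sketched in code. The slides use Matlab; this is a hedged Python translation, and the (q, h) data below are made-up illustrative values, not the values from the pump curve slide:

```python
# Sketch: build A and b for the quadratic model h = c1*q**2 + c2*q + c3
# from three (q, h) pairs, as on the slide above.
def pump_curve_system(points):
    """Each row of A is [q^2, q, 1]; b holds the measured heads h."""
    A = [[q ** 2, q, 1.0] for q, h in points]
    b = [h for q, h in points]
    return A, b

# Manufacture data from a quadratic with known coefficients c = (c1, c2, c3),
# then check that c satisfies every row of A x = b.
c = (-2.0, 3.0, 115.0)
pts = [(q, c[0] * q ** 2 + c[1] * q + c[2]) for q in (1.0, 2.0, 3.0)]
A, b = pump_curve_system(pts)
for row, bi in zip(A, b):
    assert abs(sum(a * x for a, x in zip(row, c)) - bi) < 1e-12
```

Solving this 3 × 3 system with any of the elimination methods developed later recovers the coefficients c1, c2, c3.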
9 Matrix Formulation Recommended Procedure: 1. Write the equations in natural form. 2. Identify the unknowns, and order them. 3. Isolate the unknowns. 4. Write the equations in matrix form. NMM: Solving Systems of Equations page 8
10 Thermal Model of an IC Package (1) Objective: Find the temperature of an integrated circuit (IC) package mounted on a heat spreader. The system of equations is obtained from a thermal resistive network model. [Figure: physical model of the IC in an air flow, and the equivalent resistive network with heat flows Q1...Q5 through resistances R1...R5 connecting the chip temperature Tc, spreader temperature Tp, and wall temperature Tw to the ambient air at Ta.] NMM: Solving Systems of Equations page 9
11 Thermal Model of an IC Package (2) 1. Write the equations in natural form: Use the resistive model of heat flow between nodes to get
Q1 = (1/R1)(Tc - Tp)
Q2 = (1/R2)(Tp - Tw)
Q3 = (1/R3)(Tc - Ta)
Q4 = (1/R4)(Tp - Ta)
Q2 = (1/R5)(Tw - Ta)   (all heat entering the wall node leaves through R5)
Qc = Q1 + Q3
Q1 = Q2 + Q4
NMM: Solving Systems of Equations page 10
12 Thermal Model of an IC Package (3) 2. Identify the unknowns, and order them: The unknowns are Q1, Q2, Q3, Q4, Tc, Tp, and Tw. Thus x = [Q1; Q2; Q3; Q4; Tc; Tp; Tw]. 3. Isolate the unknowns:
R1 Q1 - Tc + Tp = 0,
R2 Q2 - Tp + Tw = 0,
R3 Q3 - Tc = -Ta,
R4 Q4 - Tp = -Ta,
R5 Q2 - Tw = -Ta,
Q1 + Q3 = Qc,
Q1 - Q2 - Q4 = 0.
NMM: Solving Systems of Equations page 11
13 Thermal Model of an IC Package (4) 4. Write in matrix form:
[ R1  0   0   0  -1   1   0 ] [Q1]   [  0 ]
[ 0   R2  0   0   0  -1   1 ] [Q2]   [  0 ]
[ 0   0   R3  0  -1   0   0 ] [Q3]   [-Ta ]
[ 0   0   0   R4  0  -1   0 ] [Q4] = [-Ta ]
[ 0   R5  0   0   0   0  -1 ] [Tc]   [-Ta ]
[ 1   0   1   0   0   0   0 ] [Tp]   [ Qc ]
[ 1  -1   0  -1   0   0   0 ] [Tw]   [  0 ]
Note: The coefficient matrix has many more zeros than nonzeros. This is an example of a sparse matrix. NMM: Solving Systems of Equations page 12
14 Requirements for a Solution 1. Consistency 2. The Role of rank(A) 3. Formal Solution when A is n × n 4. Summary NMM: Solving Systems of Equations page 13
15 Consistency (1) If an exact solution to Ax = b exists, b must lie in the column space of A. If it does, then the system is said to be consistent. If the system is consistent, an exact solution exists. NMM: Solving Systems of Equations page 14
16 Consistency (2) rank(A) gives the number of linearly independent columns in A. [A b] is the augmented matrix formed by appending the b vector to the columns of A:
[A b] = [ a11  a12  ...  a1n  b1 ]
        [ a21  a22  ...  a2n  b2 ]
        [ ...                 ... ]
        [ am1  am2  ...  amn  bm ]
If rank([A b]) > rank(A), then b does not lie in the column space of A. In other words, since [A b] has a larger set of basis vectors than A, and since the difference in the size of the basis set is solely due to the b vector, the b vector cannot be constructed from the column vectors of A. NMM: Solving Systems of Equations page 15
17 Role of rank(A) If A is m × n, and z is an n-element column vector, then Az = 0 has a nontrivial solution only if the columns of A are linearly dependent.¹ In other words, the only solution to Az = 0 is z = 0 when A is full rank. Given the m × n matrix A, the system Ax = b has a unique solution if the system is consistent and if rank(A) = n. ¹ Here 0 is the m-element column vector filled with zeros. NMM: Solving Systems of Equations page 16
18 Summary of Solution to Ax = b where A is m × n For the general case where A is m × n: If rank(A) = n and the system is consistent, the solution exists and is unique. If rank(A) = n and the system is inconsistent, no solution exists. If rank(A) < n and the system is consistent, an infinite number of solutions exist. If A is n × n and rank(A) = n, then the system is consistent and the solution is unique. NMM: Solving Systems of Equations page 17
19 Formal Solution when A is n × n The formal solution to Ax = b, where A is n × n, is x = A^(-1) b. If A^(-1) exists, then A is said to be nonsingular. If A^(-1) does not exist, then A is said to be singular. NMM: Solving Systems of Equations page 18
20 Formal Solution when A is n × n If A^(-1) exists, then Ax = b implies x = A^(-1) b, but do not compute the solution to Ax = b by finding A^(-1) and then multiplying b by A^(-1)! What we write: x = A^(-1) b. What we do: solve Ax = b by Gaussian elimination or an equivalent algorithm. NMM: Solving Systems of Equations page 19
21 Singularity of A If an n × n matrix A is singular, then: the columns of A are linearly dependent; the rows of A are linearly dependent; rank(A) < n; det(A) = 0; A^(-1) does not exist; a solution to Ax = b may not exist; and if a solution to Ax = b exists, it is not unique. NMM: Solving Systems of Equations page 20
22 Summary of Requirements for Solution of Ax = b Given the n × n matrix A and the n × 1 vector b, the solution to Ax = b exists and is unique for any b if and only if rank(A) = n. rank(A) = n automatically guarantees that the system is consistent. NMM: Solving Systems of Equations page 21
23 Gaussian Elimination Solving Diagonal Systems Solving Triangular Systems Gaussian Elimination Without Pivoting Hand Calculations Cartoon Version The Algorithm Gaussian Elimination with Pivoting Row or Column Interchanges, or Both Implementation Solving Systems with the Backslash Operator NMM: Solving Systems of Equations page
24 Solving Diagonal Systems (1) The system defined by
A = [ 1  0  0 ]    b = [ -1 ]
    [ 0  3  0 ]        [  6 ]
    [ 0  0  5 ]        [-15 ]
NMM: Solving Systems of Equations page 23
25 Solving Diagonal Systems (2) The system defined by
A = [ 1  0  0 ]    b = [ -1 ]
    [ 0  3  0 ]        [  6 ]
    [ 0  0  5 ]        [-15 ]
is equivalent to
x1 = -1,  3 x2 = 6,  5 x3 = -15
NMM: Solving Systems of Equations page 24
26 Solving Diagonal Systems (3) The system is equivalent to x1 = -1, 3 x2 = 6, 5 x3 = -15. The solution is
x1 = -1,  x2 = 6/3 = 2,  x3 = -15/5 = -3
NMM: Solving Systems of Equations page 25
27 Solving Diagonal Systems (4) Algorithm 8.1: given A, b: for i = 1...n, x_i = b_i / a_{i,i}, end. In Matlab:
>> A = ...  % A is a diagonal matrix
>> b = ...
>> x = b./diag(A)
This is the only place where element-by-element division (./) has anything to do with solving linear systems of equations. NMM: Solving Systems of Equations page 26
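Algorithm 8.1 can be sketched in Python (the slides use Matlab; this is a hedged translation, with the diagonal stored as a plain list):

```python
# Sketch of Algorithm 8.1: a diagonal system decouples into n scalar
# equations, so x_i = b_i / a_ii.
def diag_solve(d, b):
    """d holds the diagonal entries of A; returns x with x[i] = b[i]/d[i]."""
    return [bi / di for di, bi in zip(d, b)]

# Example diagonal system with diagonal (1, 3, 5)
x = diag_solve([1.0, 3.0, 5.0], [-1.0, 6.0, -15.0])
```

Each unknown is found with a single division; no elimination is needed.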
28 Triangular Systems (1) The generic lower and upper triangular matrices are
L = [ l11  0    ...  0   ]    U = [ u11  u12  ...  u1n ]
    [ l21  l22  ...  0   ]        [ 0    u22  ...  u2n ]
    [ ...                ]        [ ...                ]
    [ ln1  ln2  ...  lnn ]        [ 0    0    ...  unn ]
The triangular systems Ly = b and Ux = c are easily solved by forward substitution and backward substitution, respectively. NMM: Solving Systems of Equations page 27
29 Solving Triangular Systems (2)
A = [ 2  3   1 ]    b = [  9 ]
    [ 0  3  -2 ]        [ -1 ]
    [ 0  0   4 ]        [  8 ]
NMM: Solving Systems of Equations page 28
30 Solving Triangular Systems (3) The system
A = [ 2  3   1 ]    b = [  9 ]
    [ 0  3  -2 ]        [ -1 ]
    [ 0  0   4 ]        [  8 ]
is equivalent to
2 x1 + 3 x2 + x3 = 9
3 x2 - 2 x3 = -1
4 x3 = 8
NMM: Solving Systems of Equations page 29
31 Solving Triangular Systems (4) The system is equivalent to 2 x1 + 3 x2 + x3 = 9, 3 x2 - 2 x3 = -1, 4 x3 = 8. Solve in backward order (the last equation is solved first):
x3 = 8/4 = 2
NMM: Solving Systems of Equations page 30
32 Solving Triangular Systems (5) The system is equivalent to 2 x1 + 3 x2 + x3 = 9, 3 x2 - 2 x3 = -1, 4 x3 = 8. Solve in backward order (the last equation is solved first):
x3 = 8/4 = 2
x2 = (1/3)(-1 + 2 x3) = 3/3 = 1
NMM: Solving Systems of Equations page 31
33 Solving Triangular Systems (6) The system is equivalent to 2 x1 + 3 x2 + x3 = 9, 3 x2 - 2 x3 = -1, 4 x3 = 8. Solve in backward order (the last equation is solved first):
x3 = 8/4 = 2
x2 = (1/3)(-1 + 2 x3) = 3/3 = 1
x1 = (1/2)(9 - 3 x2 - x3) = 4/2 = 2
NMM: Solving Systems of Equations page 32
34 Solving Triangular Systems (7) Solving for xn, x(n-1), ..., x1 for an upper triangular system is called backward substitution. Algorithm 8.2:
given U, b
x_n = b_n / u_{n,n}
for i = n-1 ... 1
    s = b_i
    for j = i+1 ... n
        s = s - u_{i,j} x_j
    end
    x_i = s / u_{i,i}
end
NMM: Solving Systems of Equations page 33
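Algorithm 8.2 can be sketched in Python (a hedged translation of the pseudocode; the 3 × 3 system is a small example, with U stored as a list of rows):

```python
# Sketch of Algorithm 8.2: backward substitution for an upper triangular U.
def back_sub(U, b):
    n = len(b)
    x = [0.0] * n
    x[n - 1] = b[n - 1] / U[n - 1][n - 1]     # last equation first
    for i in range(n - 2, -1, -1):            # i = n-2, ..., 0
        s = b[i]
        for j in range(i + 1, n):             # subtract known terms
            s -= U[i][j] * x[j]
        x[i] = s / U[i][i]
    return x

U = [[2.0, 3.0, 1.0],
     [0.0, 3.0, -2.0],
     [0.0, 0.0, 4.0]]
x = back_sub(U, [9.0, -1.0, 8.0])
```

The work is one division per unknown plus the inner dot product, roughly n^2 flops in total.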
35 Solving Triangular Systems (8) Solving for x1, x2, ..., xn for a lower triangular system is called forward substitution. Algorithm 8.3:
given L, b
x_1 = b_1 / l_{1,1}
for i = 2 ... n
    s = b_i
    for j = 1 ... i-1
        s = s - l_{i,j} x_j
    end
    x_i = s / l_{i,i}
end
Using forward or backward substitution is sometimes referred to as performing a triangular solve. NMM: Solving Systems of Equations page 34
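Algorithm 8.3 is the mirror image of backward substitution. A hedged Python sketch, with an arbitrary small lower triangular example:

```python
# Sketch of Algorithm 8.3: forward substitution for a lower triangular L.
def forward_sub(L, b):
    n = len(b)
    x = [0.0] * n
    x[0] = b[0] / L[0][0]                     # first equation first
    for i in range(1, n):
        s = b[i]
        for j in range(i):                    # subtract known terms
            s -= L[i][j] * x[j]
        x[i] = s / L[i][i]
    return x

L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [-1.0, 2.0, 4.0]]
x = forward_sub(L, [4.0, 5.0, 8.0])
```

Together, forward_sub and back_sub are the two triangular solves used by the factorization methods later in the slides.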
36 Gaussian Elimination The goal is to transform an arbitrary, square system into an equivalent upper triangular system so that it may be easily solved with backward substitution. The formal solution to Ax = b, where A is an n × n matrix, is x = A^(-1) b. In Matlab:
>> A = ...
>> b = ...
>> x = A\b
NMM: Solving Systems of Equations page 35
37 Gaussian Elimination Hand Calculations (1) Solve
x1 + 3 x2 = 5
2 x1 + 4 x2 = 6
Subtract 2 times the first equation from the second equation:
x1 + 3 x2 = 5
-2 x2 = -4
This system is now in triangular form, and can be solved by backward substitution. NMM: Solving Systems of Equations page 36
38 Gaussian Elimination Hand Calculations (2) The elimination phase transforms the matrix and right-hand side to an equivalent system:
x1 + 3 x2 = 5        x1 + 3 x2 = 5
2 x1 + 4 x2 = 6  ->  -2 x2 = -4
The two systems have the same solution. The right-hand system is upper triangular. Solve the second equation for x2: x2 = -4/-2 = 2. Substitute the newly found value of x2 into the first equation and solve for x1: x1 = 5 - (3)(2) = -1. NMM: Solving Systems of Equations page 37
39 Gaussian Elimination Hand Calculations (3) When performing Gaussian elimination by hand, we can avoid copying the x_i by using a shorthand notation. For a 3 × 3 system Ax = b, form the augmented system Ã = [A b]. The vertical bar inside the augmented matrix is just a reminder that the last column is the b vector. NMM: Solving Systems of Equations page 38
40 Gaussian Elimination Hand Calculations (4) Add multiples of row 1 to rows 2 and 3 so that zeros appear below the first pivot; call the result Ã^(1). Then subtract a multiple of row 2 from row 3 to put a zero below the second pivot, giving Ã^(2). NMM: Solving Systems of Equations page 39
41 Gaussian Elimination Hand Calculations (5) The transformed system Ã^(2) is now in upper triangular form. Solve by back substitution: obtain x3 from the third row, then x2 = (1/ã22)(b̃2 - ã23 x3), then x1 = (1/ã11)(b̃1 - ã12 x2 - ã13 x3). NMM: Solving Systems of Equations page 40
42 Gaussian Elimination Cartoon Version (1) Start with the augmented system
[ x  x  x  x | x ]
[ x  x  x  x | x ]
[ x  x  x  x | x ]
[ x  x  x  x | x ]
The x's represent numbers; they are not necessarily the same values. Begin elimination using the first row as the pivot row and the first element of the first row as the pivot element. NMM: Solving Systems of Equations page 41
43 Gaussian Elimination Cartoon Version (2) Eliminate the elements under the pivot element in the first column, one row at a time. x' indicates a value that has been changed once:
[ x  x  x  x | x ]     [ x  x   x   x  | x  ]
[ x  x  x  x | x ] ->  [ 0  x'  x'  x' | x' ]
[ x  x  x  x | x ]     [ 0  x'  x'  x' | x' ]
[ x  x  x  x | x ]     [ 0  x'  x'  x' | x' ]
NMM: Solving Systems of Equations page 42
44 Gaussian Elimination Cartoon Version (3) The pivot element is now the diagonal element in the second row. Eliminate the elements under the pivot element in the second column. x'' indicates a value that has been changed twice:
[ x  x   x   x  | x  ]     [ x  x   x    x   | x   ]
[ 0  x'  x'  x' | x' ] ->  [ 0  x'  x'   x'  | x'  ]
[ 0  x'  x'  x' | x' ]     [ 0  0   x''  x'' | x'' ]
[ 0  x'  x'  x' | x' ]     [ 0  0   x''  x'' | x'' ]
NMM: Solving Systems of Equations page 43
45 Gaussian Elimination Cartoon Version (4) The pivot element is now the diagonal element in the third row. Eliminate the element under the pivot element in the third column. x''' indicates a value that has been changed three times:
[ x  x   x    x   | x   ]     [ x  x   x    x    | x    ]
[ 0  x'  x'   x'  | x'  ] ->  [ 0  x'  x'   x'   | x'   ]
[ 0  0   x''  x'' | x'' ]     [ 0  0   x''  x''  | x''  ]
[ 0  0   x''  x'' | x'' ]     [ 0  0   0    x''' | x''' ]
NMM: Solving Systems of Equations page 44
46 Gaussian Elimination Cartoon Version (5) Summary: Gaussian elimination is an orderly process of transforming an augmented matrix into an equivalent upper triangular form. The elimination operation is
ã_kj = ã_kj - (ã_ki / ã_ii) ã_ij
Elimination requires three nested loops. The result of the elimination phase is an upper triangular augmented system:
[ x  x   x    x    | x    ]
[ 0  x'  x'   x'   | x'   ]
[ 0  0   x''  x''  | x''  ]
[ 0  0   0    x''' | x''' ]
NMM: Solving Systems of Equations page 45
47 Gaussian Elimination Algorithm Algorithm 8.4:
form Ã = [A b]
for i = 1 ... n-1
    for k = i+1 ... n
        for j = i ... n+1
            ã_kj = ã_kj - (ã_ki / ã_ii) ã_ij
        end
    end
end
GEshow: The GEshow function in the NMM toolbox uses Gaussian elimination to solve a system of equations. GEshow is intended for demonstration purposes only. NMM: Solving Systems of Equations page 46
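Algorithm 8.4, combined with the backward substitution of Algorithm 8.2, gives a complete solver. A hedged Python sketch (the slides use Matlab; the 2 × 2 example data are illustrative):

```python
# Sketch of Algorithm 8.4: naive Gaussian elimination (no pivoting) applied
# to the augmented matrix [A b], followed by backward substitution.
def gauss_solve(A, b):
    n = len(b)
    At = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for i in range(n - 1):                          # forward elimination
        for k in range(i + 1, n):
            m = At[k][i] / At[i][i]                 # fails if pivot is zero
            for j in range(i, n + 1):
                At[k][j] -= m * At[i][j]
    x = [0.0] * n                                   # backward substitution
    for i in range(n - 1, -1, -1):
        s = At[i][n] - sum(At[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / At[i][i]
    return x

x = gauss_solve([[1.0, 3.0], [2.0, 4.0]], [5.0, 6.0])
```

The three nested loops of the elimination phase cost O(n^3) flops; the triangular solve adds only O(n^2). Note that this naive version divides by ã_ii without checking it, which motivates the pivoting discussion that follows.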
48 The Need for Pivoting (1) Solve a 4 × 4 system Ax = b whose coefficient matrix is full rank: there is nothing wrong with this system; the solution exists and is unique. Form the augmented system [A b] and begin elimination. NMM: Solving Systems of Equations page 47
49 The Need for Pivoting (2) Eliminate the entries below the first pivot by adding the appropriate multiples of the first row to the second, third, and fourth rows. The next stage of Gaussian elimination will not work because there is a zero in the pivot location, ã22. NMM: Solving Systems of Equations page 48
50 The Need for Pivoting (3) Swap the second and fourth rows of the augmented matrix. Continue with elimination: subtract the appropriate multiple of row 2 from the rows below it. NMM: Solving Systems of Equations page 49
51 The Need for Pivoting (4) Another zero has appeared in the pivot position. Swap rows 3 and 4. The augmented system is now ready for backward substitution. NMM: Solving Systems of Equations page 50
52 Pivoting Strategies Partial Pivoting: Exchange only rows Exchanging rows does not affect the order of the x i For increased numerical stability, make sure the largest possible pivot element is used. This requires searching in the partial column below the pivot element. Partial pivoting is usually sufficient. NMM: Solving Systems of Equations page 51
53 Partial Pivoting To avoid division by zero, swap the row having the zero pivot with one of the rows below it. [Diagram: rows completed in forward elimination; the row with the zero pivot element; the rows below it to search for a more favorable pivot element.] To minimize the effect of roundoff, always choose the row that puts the largest pivot element on the diagonal, i.e., find i_p such that |a_{i_p,i}| = max(|a_{k,i}|) for k = i, ..., n. NMM: Solving Systems of Equations page 52
54 Pivoting Strategies (2) Full (or complete) pivoting: exchange both rows and columns. Column exchange requires changing the order of the x_i. For increased numerical stability, make sure the largest possible pivot element is used. This requires searching in the pivot row, and in all rows below the pivot row, starting at the pivot column. Full pivoting is less susceptible to roundoff, but the increase in stability comes at a cost of more complex programming (not a problem if you use a library routine) and an increase in work associated with searching and data movement. NMM: Solving Systems of Equations page 53
55 Full Pivoting [Diagram: rows completed in forward elimination; the row with the zero pivot element; the rows and columns to search for a more favorable pivot element.] NMM: Solving Systems of Equations page 54
56 Gauss Elimination with Partial Pivoting Algorithm 8.5:
form Ã = [A b]
for i = 1 ... n-1
    find i_p such that |ã_{i_p,i}| >= |ã_{k,i}| for k = i ... n
    exchange row i_p with row i
    for k = i+1 ... n
        for j = i ... n+1
            ã_{k,j} = ã_{k,j} - (ã_{k,i} / ã_{i,i}) ã_{i,j}
        end
    end
end
NMM: Solving Systems of Equations page 55
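Algorithm 8.5 can be sketched in Python (a hedged translation; the example system is chosen so that naive elimination would divide by zero at the first step):

```python
# Sketch of Algorithm 8.5: Gaussian elimination with partial pivoting.
def gauss_pivot_solve(A, b):
    n = len(b)
    At = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for i in range(n - 1):
        # find the row with the largest pivot candidate in column i
        ip = max(range(i, n), key=lambda k: abs(At[k][i]))
        At[i], At[ip] = At[ip], At[i]               # exchange rows i and ip
        for k in range(i + 1, n):
            m = At[k][i] / At[i][i]
            for j in range(i, n + 1):
                At[k][j] -= m * At[i][j]
    x = [0.0] * n                                   # backward substitution
    for i in range(n - 1, -1, -1):
        s = At[i][n] - sum(At[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / At[i][i]
    return x

# Naive elimination fails here because a11 = 0; row exchange rescues it.
x = gauss_pivot_solve([[0.0, 1.0], [2.0, 1.0]], [1.0, 3.0])
```

The only change from Algorithm 8.4 is the search-and-swap before each elimination stage; the extra cost is O(n^2) comparisons, negligible next to the O(n^3) elimination work.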
57 Gauss Elimination with Partial Pivoting GEshow: The GEshow function in the NMM toolbox uses naive Gaussian elimination without pivoting to solve a system of equations. GEpivshow: The GEpivshow function in the NMM toolbox uses Gaussian elimination with partial pivoting to solve a system of equations. Both functions are for demonstration purposes only. They are also a convenient way to check your hand calculations. NMM: Solving Systems of Equations page 56
58 The Backslash Operator (1) Consider the scalar equation 5x = 20, so that x = (5)^(-1) 20. The extension to a system of equations is, of course, Ax = b, so that x = A^(-1) b, where A^(-1) b is the formal solution to Ax = b. In Matlab notation the system is solved with x = A\b. NMM: Solving Systems of Equations page 57
59 The Backslash Operator (2) Given an n × n matrix A and an n × 1 vector b, the \ operator performs a sequence of tests on the A matrix. Matlab attempts to solve the system with the method that gives the least roundoff and the fewest operations. When A is an n × n matrix: 1. Matlab examines A to see if it is a permutation of a triangular system. If so, the appropriate triangular solve is used. 2. Matlab examines A to see if it appears to be symmetric and positive definite. If so, Matlab attempts a Cholesky factorization and two triangular solves. 3. If the Cholesky factorization fails, or if A does not appear to be symmetric, Matlab attempts an LU factorization and two triangular solves. NMM: Solving Systems of Equations page 58
60 Limits on Numerical Solution to Ax = b Machine limitations: RAM requirements grow as O(n^2); the flop count grows as O(n^3); the time for data movement is an important speed bottleneck on modern systems. NMM: Solving Systems of Equations page 59
61 Data Access Time [Diagram: for a stand-alone computer, access time is shortest for the CPU, FPU, and cache on the internal bus, longer for RAM on the system bus, longer still for disk, and longest for other computers across the network.] NMM: Solving Systems of Equations page 60
62 Limits on Numerical Solution to Ax = b Limits of Floating Point Arithmetic Exact singularity Effect of perturbations to b Effect of perturbations to A The condition number NMM: Solving Systems of Equations page 61
63 Geometric Interpretation of Singularity (1) Consider a system describing two lines that intersect:
y = -x + 6
y = (1/2) x + 1
The matrix form of this equation is
[ 1     1 ] [x1]   [6]
[ -1/2  1 ] [x2] = [1]
The equations for two parallel but not intersecting lines are
[ 1  1 ] [x1]   [6]
[ 1  1 ] [x2] = [5]
Here the coefficient matrix is singular (rank(A) = 1), and the system is inconsistent. NMM: Solving Systems of Equations page 62
64 Geometric Interpretation of Singularity (2) The equations for two parallel and coincident lines are
[ 1  1 ] [x1]   [6]
[ 1  1 ] [x2] = [6]
The equations for two nearly parallel lines are
[ 1    1 ] [x1]   [6    ]
[ 1+δ  1 ] [x2] = [6 + δ]
NMM: Solving Systems of Equations page 63
65 Geometric Interpretation of Singularity (3) [Four sketches:] A and b are consistent, A is nonsingular; A and b are consistent, A is singular; A and b are inconsistent, A is singular; A and b are consistent, A is ill-conditioned. NMM: Solving Systems of Equations page 64
66 Effect of Perturbations to b Consider the solution of a system where b = [1; 2/3]. One expects that the exact solutions to Ax = [1; 2/3] and Ax = [1; 0.6667] will be different. Should these solutions be a lot different or a little different? NMM: Solving Systems of Equations page 65
67 Effect of Perturbations to b Perturb b with δb such that ‖δb‖/‖b‖ ≪ 1. The perturbed system is A(x + δx_b) = b + δb. The perturbations satisfy A δx_b = δb. Analysis shows (see next two slides for proof) that
‖δx_b‖/‖x‖ ≤ ‖A‖ ‖A^(-1)‖ ‖δb‖/‖b‖
Thus, the effect of the perturbation is small only if ‖A‖ ‖A^(-1)‖ is small:
‖δx_b‖/‖x‖ ≪ 1 only if ‖A‖ ‖A^(-1)‖ ~ 1
NMM: Solving Systems of Equations page 66
68 Effect of Perturbations to b (Proof) Let x + δx_b be the exact solution to the perturbed system A(x + δx_b) = b + δb (1). Expand: Ax + A δx_b = b + δb. Subtract Ax from the left side and b from the right side, since Ax = b: A δx_b = δb. Left multiply by A^(-1): δx_b = A^(-1) δb (2). NMM: Solving Systems of Equations page 67
69 Effect of Perturbations to b (Proof, p. 2) Take the norm of equation (2): ‖δx_b‖ = ‖A^(-1) δb‖. Applying the consistency requirement of matrix norms: ‖δx_b‖ ≤ ‖A^(-1)‖ ‖δb‖ (3). Similarly, Ax = b gives ‖b‖ = ‖Ax‖, and ‖b‖ ≤ ‖A‖ ‖x‖ (4). Rearrangement of equation (4) yields 1/‖x‖ ≤ ‖A‖/‖b‖ (5). NMM: Solving Systems of Equations page 68
70 Effect of Perturbations to b (Proof, p. 3) Multiply equation (3) by equation (5) to get
‖δx_b‖/‖x‖ ≤ ‖A‖ ‖A^(-1)‖ ‖δb‖/‖b‖ (6)
Summary: If x + δx_b is the exact solution to the perturbed system A(x + δx_b) = b + δb, then
‖δx_b‖/‖x‖ ≤ ‖A‖ ‖A^(-1)‖ ‖δb‖/‖b‖
NMM: Solving Systems of Equations page 69
71 Effect of Perturbations to A Perturb A with δA such that ‖δA‖/‖A‖ ≪ 1. The perturbed system is (A + δA)(x + δx_A) = b. Analysis shows that
‖δx_A‖/‖x + δx_A‖ ≤ ‖A‖ ‖A^(-1)‖ ‖δA‖/‖A‖
Thus, the effect of the perturbation is small only if ‖A‖ ‖A^(-1)‖ is small:
‖δx_A‖/‖x + δx_A‖ ≪ 1 only if ‖A‖ ‖A^(-1)‖ ~ 1
NMM: Solving Systems of Equations page 70
72 Effect of Perturbations to both A and b Perturb both A with δA and b with δb such that ‖δA‖/‖A‖ ≪ 1 and ‖δb‖/‖b‖ ≪ 1. The perturbed system is (A + δA)(x + δx) = b + δb. Analysis shows that
‖δx‖/‖x + δx‖ ≤ ( ‖A‖ ‖A^(-1)‖ / (1 - ‖A‖ ‖A^(-1)‖ ‖δA‖/‖A‖) ) [ ‖δA‖/‖A‖ + ‖δb‖/‖b‖ ]
Thus, the effect of the perturbation is small only if ‖A‖ ‖A^(-1)‖ is small:
‖δx‖/‖x + δx‖ ≪ 1 only if ‖A‖ ‖A^(-1)‖ ~ 1
NMM: Solving Systems of Equations page 71
73 Condition Number of A The condition number κ(A) ≡ ‖A‖ ‖A^(-1)‖ indicates the sensitivity of the solution to perturbations in A and b. The condition number can be measured with any p-norm. The condition number is always in the range 1 ≤ κ(A) ≤ ∞. κ(A) is a mathematical property of A. Any algorithm will produce a solution that is sensitive to perturbations in A and b if κ(A) is large. In exact math a matrix is either singular or nonsingular; κ(A) = ∞ for a singular matrix. κ(A) indicates how close A is to being numerically singular. A matrix with large κ is said to be ill-conditioned. NMM: Solving Systems of Equations page 72
74 Computational Stability In practice, applying Gaussian elimination with partial pivoting and back substitution to Ax = b gives the exact solution, x̂, to the nearby problem (A + E) x̂ = b, where ‖E‖/‖A‖ is on the order of ε_m. "Gaussian elimination with partial pivoting and back substitution gives exactly the right answer to nearly the right question." (Trefethen and Bau) NMM: Solving Systems of Equations page 73
75 Computational Stability An algorithm that gives the exact answer to a problem that is near to the original problem is said to be backward stable. Algorithms that are not backward stable will tend to amplify roundoff errors present in the original data. As a result, the solution produced by an algorithm that is not backward stable will not necessarily be the solution to a problem that is close to the original problem. Gaussian elimination without partial pivoting is not backward stable for arbitrary A. If A is symmetric and positive definite, then Gaussian elimination without pivoting is backward stable. NMM: Solving Systems of Equations page 74
76 The Residual Let x̂ be the numerical solution to Ax = b. x̂ ≠ x (where x is the exact solution) because of roundoff. The residual measures how close x̂ is to satisfying the original equation: r = b - A x̂. It is not hard to show that
‖x̂ - x‖/‖x̂‖ ≤ κ(A) ‖r‖/‖b‖
A small ‖r‖ does not guarantee a small ‖x̂ - x‖. If κ(A) is large, the x̂ returned by Gaussian elimination and back substitution (or any other solution method) is not guaranteed to be anywhere near the true solution to Ax = b. NMM: Solving Systems of Equations page 75
77 Rules of Thumb (1) Applying Gaussian elimination with partial pivoting and back substitution to Ax = b yields a numerical solution x̂ such that the residual vector r = b - A x̂ is small even if κ(A) is large. If A and b are stored to machine precision ε_m, the numerical solution to Ax = b by any variant of Gaussian elimination is correct to d digits, where
d = |log10(ε_m)| - log10(κ(A))
NMM: Solving Systems of Equations page 76
78 Rules of Thumb (2) d = |log10(ε_m)| - log10(κ(A)). Example: Matlab computations have ε_m ≈ 2.2 × 10^(-16). For a system with κ(A) ≈ 10^11, the elements of the solution vector will have
d = |log10(2.2 × 10^(-16))| - log10(10^11) ≈ 16 - 11 = 5
correct digits. NMM: Solving Systems of Equations page 77
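The rule of thumb is easy to evaluate in code. A hedged Python sketch using double precision, which matches Matlab's default arithmetic:

```python
# Sketch of the rule of thumb d = |log10(eps_m)| - log10(kappa(A)):
# the estimated number of correct digits in the computed solution.
import math
import sys

def correct_digits(kappa):
    eps_m = sys.float_info.epsilon        # about 2.2e-16 for doubles
    return abs(math.log10(eps_m)) - math.log10(kappa)

d = correct_digits(1.0e11)                # roughly 16 - 11 = 5 digits
```

A well-conditioned system (κ near 1) keeps nearly all 16 digits; each factor of 10 in κ(A) costs about one digit.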
79 Summary of Limits to Numerical Solution of Ax = b 1. κ(A) indicates how close A is to being numerically singular. 2. If κ(A) is large, A is ill-conditioned and even the best numerical algorithms will produce a solution, x̂, that cannot be guaranteed to be close to the true solution, x. 3. In practice, Gaussian elimination with partial pivoting and back substitution produces a solution with a small residual r = b - A x̂ even if κ(A) is large. NMM: Solving Systems of Equations page 78
80 Factorization Methods LU factorization Cholesky factorization Use of the backslash operator NMM: Solving Systems of Equations page 79
81 LU Factorization (1) Find L and U such that A = LU, where L is lower triangular and U is upper triangular:
L = [ l11  0    ...  0   ]    U = [ u11  u12  ...  u1n ]
    [ l21  l22  ...  0   ]        [ 0    u22  ...  u2n ]
    [ ...                ]        [ ...                ]
    [ ln1  ln2  ...  lnn ]        [ 0    0    ...  unn ]
Since L and U are triangular, it is easy to apply their inverses. NMM: Solving Systems of Equations page 80
82 LU Factorization (2) Since L and U are triangular, it is easy to apply their inverses. Consider the solution to Ax = b with A = LU: (LU)x = b. Regroup, since matrix multiplication is associative: L(Ux) = b. Let Ux = y; then Ly = b. Since L is triangular it is easy (without Gaussian elimination) to compute y = L^(-1) b. This expression should be interpreted as "solve Ly = b with a forward substitution." NMM: Solving Systems of Equations page 81
83 LU Factorization (3) Now, since y is known, solve for x: x = U^(-1) y, which is interpreted as "solve Ux = y with a backward substitution." NMM: Solving Systems of Equations page 82
84 LU Factorization (4) Algorithm 8.6: Solve Ax = b with LU factorization: 1. Factor A into L and U. 2. Solve Ly = b for y (forward substitution). 3. Solve Ux = y for x (backward substitution). NMM: Solving Systems of Equations page 83
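Algorithm 8.6 can be sketched in Python. This is a hedged sketch in Doolittle form (unit diagonal in L, no pivoting), which is one common convention; the book's lunopiv/lupiv functions are the reference implementations:

```python
# Sketch of Algorithm 8.6: LU factorization (Doolittle form, no pivoting)
# followed by the two triangular solves L y = b, then U x = y.
def lu_factor(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for i in range(n - 1):
        for k in range(i + 1, n):
            L[k][i] = U[k][i] / U[i][i]          # store the multiplier
            for j in range(i, n):
                U[k][j] -= L[k][i] * U[i][j]     # eliminate row k
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n                                # forward solve L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n                                # backward solve U x = y
    for i in range(n - 1, -1, -1):
        s = y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_factor(A)
x = lu_solve(L, U, [10.0, 12.0])
```

The payoff of factoring is that the expensive O(n^3) work is done once; each additional right-hand side costs only two O(n^2) triangular solves.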
85 The Built-in lu Function Refer to the lunopiv and lupiv functions in the NMM toolbox for expository implementations of LU factorization. Use the built-in lu function for routine work. NMM: Solving Systems of Equations page 84
86 Cholesky Factorization (1) A must be symmetric and positive definite (SPD). For SPD matrices, pivoting is not required. Cholesky factorization requires one half as many flops as LU factorization; since pivoting is not required, Cholesky factorization will be more than twice as fast as LU factorization, because data movement is avoided. Refer to the Cholesky function in the NMM Toolbox for a view of the algorithm. Use the built-in chol function for routine work. NMM: Solving Systems of Equations page 85
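The factorization itself computes A = L L^T one column at a time. A hedged Python sketch (the NMM Cholesky function and built-in chol are the reference versions; the 2 × 2 example is illustrative):

```python
# Sketch of Cholesky factorization A = L * L^T for a symmetric positive
# definite A; no pivoting is needed.
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(s)    # sqrt of <= 0 fails: A not SPD
            else:
                L[i][j] = s / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(A)
```

Note that the square root on the diagonal doubles as a test: if the argument is not positive, A is not positive definite, which is exactly how the backslash operator's SPD check can fail and fall back to LU.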
87 Backslash Redux The \ operator examines the coefficient matrix before attempting to solve the system. \ uses: A triangular solve if A is triangular, or a permutation of a triangular matrix Cholesky factorization and triangular solves if A is symmetric and the diagonal elements of A are positive (and if the subsequent Cholesky factorization does not fail.) LU factorization if A is square and the preceding conditions are not met. QR factorization to obtain the least squares solution if A is not square. NMM: Solving Systems of Equations page 86
88 Nonlinear Systems of Equations The system of equations Ax = b is nonlinear if A = A(x) or b = b(x). Characteristics of nonlinear systems: solution requires iteration; each iteration involves the work of solving a related linearized system of equations; for strongly nonlinear systems it may be difficult to get the iterations to converge; multiple solutions might exist. NMM: Solving Systems of Equations page 87
89 Nonlinear Systems of Equations Example: intersection of a parabola and a line:
y = αx + β
y = x^2 + σx + τ
or, using x1 = x, x2 = y:
α x1 - x2 = -β
(x1 + σ) x1 - x2 = -τ
which can be expressed in matrix notation as
[ α        -1 ] [x1]   [-β]
[ x1 + σ   -1 ] [x2] = [-τ]
The coefficient matrix A depends on x. NMM: Solving Systems of Equations page 88
90 Nonlinear Systems of Equations Graphical interpretation of solutions to
[ α        -1 ] [x1]   [-β]
[ x1 + σ   -1 ] [x2] = [-τ]
[Four sketches, obtained by changing the values of α and β:] two distinct solutions; one solution; one solution that is sensitive to the inputs; no solution. NMM: Solving Systems of Equations page 89
91 Newton's Method for Nonlinear Systems (1) Given Ax = b, where A is n × n, write f = Ax - b. Note: f = -r, the negative of the residual. The solution is obtained when f(x) = 0, where
f = [ f1(x1, x2, ..., xn) ]
    [ f2(x1, x2, ..., xn) ]
    [ ...                 ]
    [ fn(x1, x2, ..., xn) ]
NMM: Solving Systems of Equations page 90
92 Newton's Method for Nonlinear Systems (2) Let x^(k) be the guess at the solution for iteration k. Look for Δx^(k) so that x^(k+1) = x^(k) + Δx^(k) gives f(x^(k+1)) = 0. Expand f with the multidimensional Taylor theorem:
f(x^(k+1)) = f(x^(k)) + f'(x^(k)) Δx^(k) + O(‖Δx^(k)‖^2)
NMM: Solving Systems of Equations page 91
93 Newton's Method for Nonlinear Systems (3) f'(x^(k)) is the Jacobian of the system of equations:
f'(x) ≡ J(x) = [ ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn ]
               [ ∂f2/∂x1  ∂f2/∂x2  ...  ∂f2/∂xn ]
               [ ...                             ]
               [ ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn ]
Neglect the higher order terms in the Taylor expansion:
f(x^(k+1)) = f(x^(k)) + J(x^(k)) Δx^(k)
NMM: Solving Systems of Equations page 92
94 Newton's Method for Nonlinear Systems (4) Now, assume that we can find the Δx^(k) that gives f(x^(k+1)) = 0:
0 = f(x^(k)) + J(x^(k)) Δx^(k)   so   J(x^(k)) Δx^(k) = -f(x^(k))
The essence of Newton's method for systems of equations is: 1. Make a guess at x. 2. Evaluate f. 3. If f is small enough, stop. 4. Evaluate J. 5. Solve J Δx = -f for Δx. 6. Update: x ← x + Δx. 7. Go back to step 2. NMM: Solving Systems of Equations page 93
95 Newton's Method for Nonlinear Systems (5) Example: intersection of a line and parabola:
y = αx + β
y = x^2 + σx + τ
Recast as
α x1 - x2 + β = 0
x1^2 + σ x1 - x2 + τ = 0
or
f = [ α x1 - x2 + β          ]   [ 0 ]
    [ x1^2 + σ x1 - x2 + τ ] = [ 0 ]
NMM: Solving Systems of Equations page 94
96 Newton's Method for Nonlinear Systems (6) Evaluate each term in the Jacobian:
∂f1/∂x1 = α          ∂f1/∂x2 = -1
∂f2/∂x1 = 2 x1 + σ   ∂f2/∂x2 = -1
therefore
J = [ α         -1 ]
    [ 2 x1 + σ  -1 ]
NMM: Solving Systems of Equations page 95
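The seven Newton steps from slide 94, applied to this f and J, can be sketched in Python. This is a hedged sketch: the 2 × 2 linear solve for Δx is done in closed form (Cramer's rule) instead of calling a general elimination routine, and the α, β, σ, τ values at the bottom are arbitrary illustrative choices:

```python
# Sketch of Newton's method for the line/parabola intersection,
# using the Jacobian J = [[alpha, -1], [2*x1 + sigma, -1]] derived above.
def newton_intersect(alpha, beta, sigma, tau, x1, x2, tol=1e-12):
    for _ in range(50):
        f1 = alpha * x1 - x2 + beta               # evaluate f
        f2 = x1 * x1 + sigma * x1 - x2 + tau
        if abs(f1) + abs(f2) < tol:               # stop if f is small
            break
        j21 = 2.0 * x1 + sigma                    # evaluate J
        det = alpha * (-1.0) - (-1.0) * j21       # = j21 - alpha
        # solve J dx = -f by Cramer's rule for the 2x2 system
        dx1 = (-f1 * (-1.0) - (-1.0) * (-f2)) / det
        dx2 = (alpha * (-f2) - j21 * (-f1)) / det
        x1, x2 = x1 + dx1, x2 + dx2               # update x
    return x1, x2

# Line y = x + 1 and parabola y = x**2: intersections at x = (1 ± sqrt(5))/2.
x1, x2 = newton_intersect(1.0, 1.0, 0.0, 0.0, 2.0, 2.0)
```

Starting from a different initial guess converges to the other intersection, illustrating the "multiple solutions might exist" caution from slide 88. In a general n-variable problem, the Cramer's-rule step would be replaced by Gaussian elimination with partial pivoting on J Δx = -f.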
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More informationNumerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and NonSquare Systems
Numerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and NonSquare Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420001,
More informationDETERMINANTS. b 2. x 2
DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in
More informationOperation Count; Numerical Linear Algebra
10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floatingpoint
More informationLinear Algebraic and Equations
Next: Optimization Up: Numerical Analysis for Chemical Previous: Roots of Equations Subsections Gauss Elimination Solving Small Numbers of Equations Naive Gauss Elimination Pitfalls of Elimination Methods
More information6. Cholesky factorization
6. Cholesky factorization EE103 (Fall 201112) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix
More informationa 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
More information2.5 Gaussian Elimination
page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 4. Gaussian Elimination In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationMAT Solving Linear Systems Using Matrices and Row Operations
MAT 171 8.5 Solving Linear Systems Using Matrices and Row Operations A. Introduction to Matrices Identifying the Size and Entries of a Matrix B. The Augmented Matrix of a System of Equations Forming Augmented
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a nonempty
More information1.5 Elementary Matrices and a Method for Finding the Inverse
.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:
More information1 Gaussian Elimination
Contents 1 Gaussian Elimination 1.1 Elementary Row Operations 1.2 Some matrices whose associated system of equations are easy to solve 1.3 Gaussian Elimination 1.4 GaussJordan reduction and the Reduced
More informationDeterminants. Dr. Doreen De Leon Math 152, Fall 2015
Determinants Dr. Doreen De Leon Math 52, Fall 205 Determinant of a Matrix Elementary Matrices We will first discuss matrices that can be used to produce an elementary row operation on a given matrix A.
More informationSolving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationMatrix Inverse and Determinants
DM554 Linear and Integer Programming Lecture 5 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1 2 3 4 and Cramer s rule 2 Outline 1 2 3 4 and
More informationLinear Dependence Tests
Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks
More informationMATH 2030: SYSTEMS OF LINEAR EQUATIONS. ax + by + cz = d. )z = e. while these equations are not linear: xy z = 2, x x = 0,
MATH 23: SYSTEMS OF LINEAR EQUATIONS Systems of Linear Equations In the plane R 2 the general form of the equation of a line is ax + by = c and that the general equation of a plane in R 3 will be we call
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More informationChapter 1  Matrices & Determinants
Chapter 1  Matrices & Determinants Arthur Cayley (August 16, 1821  January 26, 1895) was a British Mathematician and Founder of the Modern British School of Pure Mathematics. As a child, Cayley enjoyed
More informationDirect Methods for Solving Linear Systems. Linear Systems of Equations
Direct Methods for Solving Linear Systems Linear Systems of Equations Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University
More informationUNIT 2 MATRICES  I 2.0 INTRODUCTION. Structure
UNIT 2 MATRICES  I Matrices  I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress
More information( % . This matrix consists of $ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&
Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important
More informationNotes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 918/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
More information1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form
1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and
More informationChapter 4: Systems of Equations and Ineq. Lecture notes Math 1010
Section 4.1: Systems of Equations Systems of equations A system of equations consists of two or more equations involving two or more variables { ax + by = c dx + ey = f A solution of such a system is an
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,
More informationLecture 5  Triangular Factorizations & Operation Counts
LU Factorization Lecture 5  Triangular Factorizations & Operation Counts We have seen that the process of GE essentially factors a matrix A into LU Now we want to see how this factorization allows us
More informationSystems of Linear Equations
A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Systems of Linear Equations Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION AttributionNonCommercialShareAlike (CC
More information9. Numerical linear algebra background
Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)
More informationLinear Equations ! 25 30 35$ & " 350 150% & " 11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development
MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Topic 4 Module 9 Introduction Systems of to Matrices Linear Equations Income = Tickets!
More informationMATH 240 Fall, Chapter 1: Linear Equations and Matrices
MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS
More information(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.
Theorem.7.: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product
More informationSYSTEMS OF EQUATIONS AND MATRICES WITH THE TI89. by Joseph Collison
SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections
More information6 Gaussian Elimination
G1BINM Introduction to Numerical Methods 6 1 6 Gaussian Elimination 61 Simultaneous linear equations Consider the system of linear equations a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 +
More information1 Determinants and the Solvability of Linear Systems
1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely sidestepped
More informationPower System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur. Lecture  19 Power Flow IV
Power System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur Lecture  19 Power Flow IV Welcome to lesson 19 on Power System Analysis. In this
More informationMatrices, Determinants and Linear Systems
September 21, 2014 Matrices A matrix A m n is an array of numbers in rows and columns a 11 a 12 a 1n r 1 a 21 a 22 a 2n r 2....... a m1 a m2 a mn r m c 1 c 2 c n We say that the dimension of A is m n (we
More informationMethods for Finding Bases
Methods for Finding Bases Bases for the subspaces of a matrix Rowreduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,
More informationSystems of Linear Equations
Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and
More informationNonlinear Algebraic Equations Example
Nonlinear Algebraic Equations Example Continuous Stirred Tank Reactor (CSTR). Look for steady state concentrations & temperature. s r (in) p,i (in) i In: N spieces with concentrations c, heat capacities
More informationMatrix Differentiation
1 Introduction Matrix Differentiation ( and some other stuff ) Randal J. Barnes Department of Civil Engineering, University of Minnesota Minneapolis, Minnesota, USA Throughout this presentation I have
More informationNUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS
NUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS DO Q LEE Abstract. Computing the solution to Least Squares Problems is of great importance in a wide range of fields ranging from numerical
More informationWe seek a factorization of a square matrix A into the product of two matrices which yields an
LU Decompositions We seek a factorization of a square matrix A into the product of two matrices which yields an efficient method for solving the system where A is the coefficient matrix, x is our variable
More information1 VECTOR SPACES AND SUBSPACES
1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such
More informationLinear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
More information5. Orthogonal matrices
L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 51 Orthonormal
More information2. Perform elementary row operations to get zeros below the diagonal.
Gaussian Elimination We list the basic steps of Gaussian Elimination, a method to solve a system of linear equations. Except for certain special cases, Gaussian Elimination is still state of the art. After
More informationTopic 1: Matrices and Systems of Linear Equations.
Topic 1: Matrices and Systems of Linear Equations Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc Also, we shall review the method
More information1 Eigenvalues and Eigenvectors
Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x
More informationMathematics Notes for Class 12 chapter 3. Matrices
1 P a g e Mathematics Notes for Class 12 chapter 3. Matrices A matrix is a rectangular arrangement of numbers (real or complex) which may be represented as matrix is enclosed by [ ] or ( ) or Compact form
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationLINEAR SYSTEMS. Consider the following example of a linear system:
LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x +2x 2 +3x 3 = 5 x + x 3 = 3 3x + x 2 +3x 3 = 3 x =, x 2 =0, x 3 = 2 In general we want to solve n equations in
More information= [a ij ] 2 3. Square matrix A square matrix is one that has equal number of rows and columns, that is n = m. Some examples of square matrices are
This document deals with the fundamentals of matrix algebra and is adapted from B.C. Kuo, Linear Networks and Systems, McGraw Hill, 1967. It is presented here for educational purposes. 1 Introduction In
More informationAPPLICATIONS OF MATRICES. Adj A is nothing but the transpose of the cofactor matrix [A ij ] of A.
APPLICATIONS OF MATRICES ADJOINT: Let A = [a ij ] be a square matrix of order n. Let Aij be the cofactor of a ij. Then the n th order matrix [A ij ] T is called the adjoint of A. It is denoted by adj
More information13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.
3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in threespace, we write a vector in terms
More information2.1: Determinants by Cofactor Expansion. Math 214 Chapter 2 Notes and Homework. Evaluate a Determinant by Expanding by Cofactors
2.1: Determinants by Cofactor Expansion Math 214 Chapter 2 Notes and Homework Determinants The minor M ij of the entry a ij is the determinant of the submatrix obtained from deleting the i th row and the
More information2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system
1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables
More informationLinear Least Squares
Linear Least Squares Suppose we are given a set of data points {(x i,f i )}, i = 1,...,n. These could be measurements from an experiment or obtained simply by evaluating a function at some points. One
More information1 Determinants. Definition 1
Determinants The determinant of a square matrix is a value in R assigned to the matrix, it characterizes matrices which are invertible (det 0) and is related to the volume of a parallelpiped described
More information1. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither.
Math Exam  Practice Problem Solutions. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither. (a) 5 (c) Since each row has a leading that
More informationDiagonal, Symmetric and Triangular Matrices
Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by
More informationCofactor Expansion: Cramer s Rule
Cofactor Expansion: Cramer s Rule MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Introduction Today we will focus on developing: an efficient method for calculating
More informationInverses and powers: Rules of Matrix Arithmetic
Contents 1 Inverses and powers: Rules of Matrix Arithmetic 1.1 What about division of matrices? 1.2 Properties of the Inverse of a Matrix 1.2.1 Theorem (Uniqueness of Inverse) 1.2.2 Inverse Test 1.2.3
More informationA matrix over a field F is a rectangular array of elements from F. The symbol
Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in twodimensional space (1) 2x y = 3 describes a line in twodimensional space The coefficients of x and y in the equation
More informationNumerical Methods for Solving Systems of Nonlinear Equations
Numerical Methods for Solving Systems of Nonlinear Equations by Courtney Remani A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 Honour s
More information10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method
578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after
More information8 Square matrices continued: Determinants
8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You
More informationLecture 23: The Inverse of a Matrix
Lecture 23: The Inverse of a Matrix Winfried Just, Ohio University March 9, 2016 The definition of the matrix inverse Let A be an n n square matrix. The inverse of A is an n n matrix A 1 such that A 1
More informationLecture Notes 2: Matrices as Systems of Linear Equations
2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably
More informationeissn: pissn: X Page 67
International Journal of Computer & Communication Engineering Research (IJCCER) Volume 2  Issue 2 March 2014 Performance Comparison of Gauss and GaussJordan Yadanar Mon 1, Lai Lai Win Kyi 2 1,2 Information
More information1 Review of Least Squares Solutions to Overdetermined Systems
cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares
More informationUsing row reduction to calculate the inverse and the determinant of a square matrix
Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible
More informationSolving a System of Equations
11 Solving a System of Equations 111 Introduction The previous chapter has shown how to solve an algebraic equation with one variable. However, sometimes there is more than one unknown that must be determined
More information3.4. Solving Simultaneous Linear Equations. Introduction. Prerequisites. Learning Outcomes
Solving Simultaneous Linear Equations 3.4 Introduction Equations often arise in which there is more than one unknown quantity. When this is the case there will usually be more than one equation involved.
More informationChapter 4  Systems of Equations and Inequalities
Math 233  Spring 2009 Chapter 4  Systems of Equations and Inequalities 4.1 Solving Systems of equations in Two Variables Definition 1. A system of linear equations is two or more linear equations to
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationLecture 5: Singular Value Decomposition SVD (1)
EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25Sep02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system
More informationLecture Notes: Matrix Inverse. 1 Inverse Definition
Lecture Notes: Matrix Inverse Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Inverse Definition We use I to represent identity matrices,
More information8.3. Solution by Gauss Elimination. Introduction. Prerequisites. Learning Outcomes
Solution by Gauss Elimination 8.3 Introduction Engineers often need to solve large systems of linear equations; for example in determining the forces in a large framework or finding currents in a complicated
More informationNumerical Methods for Civil Engineers. Linear Equations
Numerical Methods for Civil Engineers Lecture Notes CE 311K Daene C. McKinney Introduction to Computer Methods Department of Civil, Architectural and Environmental Engineering The University of Texas at
More information