SOLVING A SYSTEM OF LINEAR EQUATIONS


1 Introduction

In the previous chapter, we determined the value x that satisfies a single equation, f(x) = 0. Now we deal with the case of determining the values x_1, x_2, \ldots, x_n that simultaneously satisfy a set of linear equations of the form

a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2
\vdots
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n

For small numbers of equations (n \le 3), linear equations can be solved readily by simple techniques. However, for four or more equations, solutions become tedious and computers must be utilized. The system of equations introduced above can be represented in the compact form

[A]X = B

where [A] is the n \times n matrix of coefficients

[A] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}

B is the n \times 1 column vector of constants, and X is the n \times 1 column vector of unknowns:

B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}, \qquad X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

1.1 Examples

Examples of engineering problems that require the solution of a system of linear equations are:

1. Referring to fig. 1 and using Kirchhoff's law, the currents i_1, i_2, i_3, and i_4 can be determined by solving the following system of four equations:

9i_1 - 4i_2 - 2i_3 = 24
-4i_1 + 17i_2 - 6i_3 - 3i_4 = -16
-2i_1 - 6i_2 + 14i_3 - 6i_4 = 0
-3i_2 - 6i_3 + 11i_4 = 18

Fig. 1: electrical circuit for the example above.
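As a quick numerical check, this system can be entered in MATLAB and solved with the left-division operator described in section 4; a minimal sketch, using the coefficient values of the system as written above:

A = [ 9  -4  -2   0
     -4  17  -6  -3
     -2  -6  14  -6
      0  -3  -6  11];
b = [24; -16; 0; 18];
i = A\b     % currents i1, i2, i3, i4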

1.2 Overview of numerical methods for solving a system of linear algebraic equations

Two types of numerical methods, direct and iterative, are used for solving systems of linear algebraic equations. In direct methods, the solution is calculated by performing arithmetic operations with the equations. In iterative methods, an initial approximate solution is assumed and then used in an iterative process for obtaining successively more accurate solutions.

Direct methods

In direct methods, the system of equations that is initially given in the general form is manipulated into an equivalent system of equations that can be easily solved. Three systems of equations that can be easily solved are the upper triangular, lower triangular, and diagonal forms.

The upper triangular form is shown in fig. 2. The system in this form has all zero coefficients below the diagonal and is solved by a procedure called back substitution. It starts with the last equation, which is solved for x_n. The value of x_n is then substituted in the next-to-last equation, which is solved for x_{n-1}. The process continues in the same manner all the way up to the first equation:

Fig. 2: a system in upper triangular form.

x_n = \frac{b_n}{a_{nn}}

x_i = \frac{b_i - \sum_{j=i+1}^{n} a_{ij}x_j}{a_{ii}}, \quad i = n-1, n-2, \ldots, 1

The lower triangular form is shown in fig. 3. The system in this form has zero coefficients above the diagonal. A system in lower triangular form is solved in the same way as the upper triangular form but in the opposite order. The procedure is called forward substitution. It starts with the first equation, which is solved for x_1. The value of x_1 is then substituted in the second equation, which is solved for x_2. The process continues in the same manner all the way down to the last equation:

Fig. 3: a system in lower triangular form.

x_1 = \frac{b_1}{a_{11}}

x_i = \frac{b_i - \sum_{j=1}^{i-1} a_{ij}x_j}{a_{ii}}, \quad i = 2, 3, \ldots, n
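Both substitution procedures translate directly into MATLAB; a minimal sketch (function names are illustrative):

function x = backsub(A, b)
% Back substitution for an upper triangular system [A]X = B
n = length(b);
x = zeros(n,1);
x(n) = b(n)/A(n,n);
for i = n-1:-1:1
    x(i) = (b(i) - A(i,i+1:n)*x(i+1:n))/A(i,i);
end
end

function x = forwardsub(A, b)
% Forward substitution for a lower triangular system [A]X = B
n = length(b);
x = zeros(n,1);
x(1) = b(1)/A(1,1);
for i = 2:n
    x(i) = (b(i) - A(i,1:i-1)*x(1:i-1))/A(i,i);
end
end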

The diagonal form of a system of linear equations is shown in fig. 4. A system in diagonal form has nonzero coefficients along the diagonal and zeros everywhere else. Obviously, a system in this form is easily solved:

Fig. 4: a system in diagonal form.

x_i = \frac{b_i}{a_{ii}}

Three direct methods for solving systems of equations are described in this chapter: Gauss elimination, Gauss-Jordan, and LU decomposition.

Indirect methods

Two indirect (iterative) methods, Jacobi and Gauss-Seidel, are described in this chapter.

2 Direct Methods

2.1 Naive Gauss elimination method

This section presents the systematic techniques for forward elimination and back substitution that comprise Gauss elimination. Although these techniques are ideally suited for implementation on computers, some modifications are required to obtain a reliable algorithm. In particular, the computer program must avoid division by zero. The following method is called naive Gauss elimination because it does not avoid this problem. Subsequent sections deal with the additional features required for an effective computer program. The approach is designed to solve a general set of n equations, and the technique consists of two phases: elimination of unknowns and solution through back substitution.

a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2
\vdots
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n

Forward elimination of unknowns

The first phase is designed to reduce the set of equations to an upper triangular system. The initial step is to eliminate the first unknown, x_1, from the second through the n-th equations. To do this, multiply the first row in the previous equation by a_{21}/a_{11} to give

a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \cdots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1

Now this equation can be subtracted from the second row to give

\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \cdots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1

or

a'_{22}x_2 + \cdots + a'_{2n}x_n = b'_2

where the prime indicates that the elements have been changed from their original values. The procedure is then repeated for the remaining equations. For instance, the first row can be multiplied by a_{31}/a_{11} and the result subtracted from the third equation. Repeating the procedure for the remaining equations results in the following modified system:

a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1
a'_{22}x_2 + \cdots + a'_{2n}x_n = b'_2
\vdots
a'_{n2}x_2 + \cdots + a'_{nn}x_n = b'_n

For the foregoing steps, row one is called the pivot equation and a_{11} is called the pivot coefficient or element. Note that the process of multiplying the first row by a_{21}/a_{11} is equivalent to dividing it by a_{11} and multiplying it by a_{21}. Sometimes the division operation is referred to as normalization. We make this distinction because a zero pivot element can interfere with normalization by causing a division by zero. We will return to this important issue after we complete our description of naive Gauss elimination.

Now repeat the above to eliminate the second unknown from rows 3 through n of the last set of equations. To do this, multiply the second row by a'_{32}/a'_{22} and subtract the result from the third equation. Performing a similar elimination for the remaining equations yields

a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n = b'_2
a''_{33}x_3 + \cdots + a''_{3n}x_n = b''_3
\vdots
a''_{n3}x_3 + \cdots + a''_{nn}x_n = b''_n

where the double prime indicates that the elements have been modified twice.

The procedure can be continued using the remaining pivot equations. The final manipulation in the sequence is to use the (n-1)-th equation to eliminate the x_{n-1} term from the n-th equation. At this point, the system will have been transformed to an upper triangular system:

a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n = b'_2
a''_{33}x_3 + \cdots + a''_{3n}x_n = b''_3
\vdots
a^{(n-1)}_{nn}x_n = b^{(n-1)}_n

Back substitution

x_n = \frac{b^{(n-1)}_n}{a^{(n-1)}_{nn}}

x_i = \frac{b^{(i-1)}_i - \sum_{j=i+1}^{n} a^{(i-1)}_{ij}x_j}{a^{(i-1)}_{ii}}, \quad i = n-1, n-2, \ldots, 1

Example: Consider solving the following system using naive Gauss elimination:

3x_1 - 2x_2 + 5x_3 = 14
x_1 - x_2 = -1
2x_1 + 4x_3 = 14
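A minimal MATLAB sketch of both phases (function and variable names are illustrative; there is deliberately no safeguard against a zero pivot):

function x = naive_gauss(A, b)
% Naive Gauss elimination: forward elimination to upper triangular form,
% followed by back substitution. Fails if a zero pivot is encountered.
n = length(b);
for k = 1:n-1                       % pivot row
    for i = k+1:n                   % eliminate x_k from the rows below
        m = A(i,k)/A(k,k);          % multiplier
        A(i,k:n) = A(i,k:n) - m*A(k,k:n);
        b(i) = b(i) - m*b(k);
    end
end
x = zeros(n,1);                     % back substitution
x(n) = b(n)/A(n,n);
for i = n-1:-1:1
    x(i) = (b(i) - A(i,i+1:n)*x(i+1:n))/A(i,i);
end
end

For the example above, naive_gauss([3 -2 5; 1 -1 0; 2 0 4], [14; -1; 14]) should return x_1 = 1, x_2 = 2, x_3 = 3.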

Potential difficulties when applying the Gauss elimination method

Whereas there are many systems of equations that can be solved with naive Gauss elimination, there are some pitfalls that must be explored before writing a general computer program to implement the method.

Division by zero

The primary reason the foregoing technique is called "naive" is that during both the elimination and the back-substitution phases it is possible for a division by zero to occur. For example, if we use naive Gauss elimination to solve

2x_2 + 3x_3 = 8
4x_1 + 6x_2 + 7x_3 = -3
2x_1 + x_2 + 6x_3 = 5

the normalization of the first row would involve division by a_{11} = 0. Problems can also arise when a coefficient is very close to zero, due to round-off errors. The technique of pivoting has been developed to partially avoid these problems; it is described in the next section. For example, consider using Gauss elimination to solve

0.0003x_1 + 3.0000x_2 = 2.0001
1.0000x_1 + 1.0000x_2 = 1.0000

Note that in this form the first pivot element, a_{11} = 0.0003, is very close to zero. The exact solution is x_1 = 1/3 and x_2 = 2/3. Multiplying the first equation by 1/(0.0003) yields

x_1 + 10{,}000x_2 = 6667

which can be used to eliminate x_1 from the second equation:

-9999x_2 = -6666

which can be solved for x_2 = 2/3. This result can be substituted back into the first equation to evaluate x_1:

x_1 = \frac{2.0001 - 3(2/3)}{0.0003}

However, due to subtractive cancellation, the result is very sensitive to the number of significant figures carried in the computation:

Significant figures    x_2          x_1
3                      0.667        -3.33
4                      0.6667       0.0000
5                      0.66667      0.30000
6                      0.666667     0.330000
7                      0.6666667    0.3330000

Note how the solution for x_1 is highly dependent on the number of significant figures. On the other hand, if the equations are solved in reverse order, the row with the larger pivot element is normalized. The equations are

1.0000x_1 + 1.0000x_2 = 1.0000
0.0003x_1 + 3.0000x_2 = 2.0001

Elimination and substitution yield x_2 = 2/3. For different numbers of significant figures, x_1 can be computed from the first equation, as in

x_1 = \frac{1 - (2/3)}{1}

This case is much less sensitive to the number of significant figures in the computation:

Significant figures    x_2          x_1
3                      0.667        0.333
4                      0.6667       0.3333
5                      0.66667      0.33333
6                      0.666667     0.333333
7                      0.6666667    0.3333333

Gauss elimination with pivoting

As mentioned earlier, obvious problems occur when a pivot element is zero because the normalization step leads to division by zero. Problems may also arise when the pivot element is close to, rather than exactly equal to, zero: if the magnitude of the pivot element is small compared with the other elements, round-off errors can be introduced. Therefore, before each row is normalized, it is advantageous to determine the largest available coefficient (in absolute value) in the column below the pivot element. The rows can then be switched so that the largest element is the pivot element.
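A sketch of how the elimination loop changes with partial pivoting; relative to the naive_gauss sketch above, only the row search and swap are new (names are illustrative):

function x = gauss_pivot(A, b)
% Gauss elimination with partial pivoting: before each elimination step,
% the row with the largest magnitude coefficient in column k (on or below
% the diagonal) is swapped into the pivot position.
n = length(b);
for k = 1:n-1
    [~, p] = max(abs(A(k:n,k)));    % largest candidate pivot in column k
    p = p + k - 1;                  % offset to a row index of A
    A([k p],:) = A([p k],:);        % swap rows k and p
    b([k p]) = b([p k]);
    for i = k+1:n
        m = A(i,k)/A(k,k);
        A(i,k:n) = A(i,k:n) - m*A(k,k:n);
        b(i) = b(i) - m*b(k);
    end
end
x = backsub(A, b);                  % back substitution (section 1.2 sketch)
end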

2.2 Gauss-Jordan elimination method

The Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather than just the subsequent ones. In addition, all rows are normalized by dividing them by their pivot elements. Thus, the elimination step results in an identity matrix rather than a triangular matrix, as shown in fig. 5. Consequently, it is not necessary to employ back substitution to obtain the solution.

Fig. 5: reduction of the coefficient matrix to the identity matrix in the Gauss-Jordan method.

Gauss-Jordan elimination with pivoting

It is possible that the equations are written in such an order that, during the elimination procedure, a pivot equation has a pivot element equal to zero. Obviously, in this case it is impossible to normalize the pivot row (divide it by the pivot element). As with the Gauss elimination method, the problem can be corrected by pivoting.

Although the Gauss-Jordan technique and Gauss elimination might appear almost identical, the former requires approximately 50 percent more operations than Gauss elimination. Therefore, Gauss elimination is the simple elimination method of preference for obtaining solutions of linear algebraic equations. One of the primary reasons for introducing the Gauss-Jordan method, however, is that it is still used in engineering as well as in some numerical algorithms.

Example: Consider solving the following system using Gauss-Jordan elimination:

3x_1 - 2x_2 + 5x_3 = 14
x_1 - x_2 = -1
2x_1 + 4x_3 = 14
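A minimal MATLAB sketch of the method on the augmented matrix (illustrative names; no pivoting):

function x = gauss_jordan(A, b)
% Gauss-Jordan elimination: normalize each pivot row, then eliminate the
% unknown from ALL other rows, reducing [A b] to [I x].
Ab = [A b];                         % augmented matrix
n = size(A,1);
for k = 1:n
    Ab(k,:) = Ab(k,:)/Ab(k,k);      % normalize the pivot row
    for i = [1:k-1, k+1:n]          % every row except the pivot row
        Ab(i,:) = Ab(i,:) - Ab(i,k)*Ab(k,:);
    end
end
x = Ab(:,end);
end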

2.3 LU decomposition method

As described in the previous sections, Gauss elimination is designed to solve systems of linear algebraic equations

[A]X = B

Although it certainly represents a sound way to solve such systems, it becomes inefficient when solving equations with the same coefficients [A] but different right-hand-side constants (the b's). Recall that Gauss elimination involves two steps: forward elimination and back substitution. Of these, the forward-elimination step comprises the bulk of the computational effort, particularly for large systems of equations. LU decomposition methods separate the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side B. Thus, once [A] has been decomposed, multiple right-hand-side vectors can be evaluated in an efficient manner. Before showing how this can be done, let us first provide a mathematical overview of the decomposition strategy.

Overview of the LU decomposition

A two-step strategy (see fig. 6) for obtaining solutions can be explained as follows:

1. LU decomposition step. [A] is factored, or decomposed, into lower [L] and upper [U] triangular matrices:

[A] = [L][U]

2. Substitution step. [L] and [U] are used to determine a solution X for a right-hand side B. Writing [A]X = B as [L][U]X = B and defining the intermediate vector D = [U]X, the system becomes [L]D = B. First, D is generated by forward substitution; then the result is substituted back into [U]X = D to solve for X by back substitution.

The forward-substitution step can be represented concisely as

d_1 = \frac{b_1}{l_{11}}

d_i = \frac{b_i - \sum_{j=1}^{i-1} l_{ij}d_j}{l_{ii}}, \quad i = 2, 3, \ldots, n

and the back-substitution step as

x_n = \frac{d_n}{u_{nn}}

x_i = \frac{d_i - \sum_{j=i+1}^{n} u_{ij}x_j}{u_{ii}}, \quad i = n-1, n-2, \ldots, 1

where l_{ij} and u_{ij} denote the elements of [L] and [U].
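In MATLAB terms, the substitution step simply chains the two sketches from section 1.2; a hypothetical usage, given factors L and U of A:

d = forwardsub(L, b);   % forward substitution: [L]D = B
x = backsub(U, d);      % back substitution:    [U]X = D

Because the expensive decomposition of [A] is done once, each additional right-hand side costs only these two cheap substitution passes.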

Fig. 6: the two-step LU decomposition strategy.

For a given matrix, several methods can be used to determine the corresponding [L] and [U]. Two of the methods, one related to the Gauss elimination method and another called Crout's method, are described next.

LU decomposition using the Gauss elimination procedure

When the Gauss elimination procedure is applied to a matrix, the elements of the matrices [L] and [U] are actually calculated. The upper triangular matrix [U] is the matrix of coefficients [A] that is obtained at the end of the Gauss elimination procedure. For the lower triangular matrix [L], the elements on the diagonal are all 1, and the elements below the diagonal are the multipliers m_{ij} that multiply the pivot equation when it is used to eliminate the elements below the pivot coefficient. For the case of a system of three equations, the decomposition has the form

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & m_{32} & 1 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & 0 & a''_{33} \end{bmatrix}

where m_{21} = a_{21}/a_{11}, m_{31} = a_{31}/a_{11}, and m_{32} = a'_{32}/a'_{22}.

LU decomposition using Crout's method

In this method the matrix is decomposed into the product [L][U], where the diagonal elements of the matrix [U] are all 1s. It turns out that in this case, the elements of both matrices can be determined using formulas that can be easily programmed. For example, in the case of a system of four equations,

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} L_{11} & 0 & 0 & 0 \\ L_{21} & L_{22} & 0 & 0 \\ L_{31} & L_{32} & L_{33} & 0 \\ L_{41} & L_{42} & L_{43} & L_{44} \end{bmatrix} \begin{bmatrix} 1 & U_{12} & U_{13} & U_{14} \\ 0 & 1 & U_{23} & U_{24} \\ 0 & 0 & 1 & U_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}

Executing the matrix multiplication on the right-hand side of the equation gives:

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} L_{11} & L_{11}U_{12} & L_{11}U_{13} & L_{11}U_{14} \\ L_{21} & L_{21}U_{12} + L_{22} & L_{21}U_{13} + L_{22}U_{23} & L_{21}U_{14} + L_{22}U_{24} \\ L_{31} & L_{31}U_{12} + L_{32} & L_{31}U_{13} + L_{32}U_{23} + L_{33} & L_{31}U_{14} + L_{32}U_{24} + L_{33}U_{34} \\ L_{41} & L_{41}U_{12} + L_{42} & L_{41}U_{13} + L_{42}U_{23} + L_{43} & L_{41}U_{14} + L_{42}U_{24} + L_{43}U_{34} + L_{44} \end{bmatrix}

The elements of the matrices [L] and [U] can be determined by solving the previous equation. The solution is obtained by equating the corresponding elements of the matrices on both sides of the equation.

One can observe that the elements of the matrices [L] and [U] can be easily determined row after row from the known elements of [A] and the elements of [L] and [U] that have already been calculated. Starting with the first row, the value of L_{11} is calculated from L_{11} = a_{11}. Once L_{11} is known, the values of U_{12}, U_{13}, and U_{14} are calculated by:

U_{12} = a_{12}/L_{11}, \quad U_{13} = a_{13}/L_{11}, \quad U_{14} = a_{14}/L_{11}

Moving on to the next row, the next elements can be calculated in a similar manner. A procedure for determining the elements of the matrices [L] and [U] can be written as follows. If [A] is an n \times n matrix, the elements of [L] and [U] are given by:

Step 1: Calculate the first column of [L]:

L_{i1} = a_{i1}, \quad i = 1, 2, \ldots, n

Step 2: Substitute 1s in the diagonal of [U]:

U_{ii} = 1, \quad i = 1, 2, \ldots, n

Step 3: Calculate the elements in the first row of [U] (except U_{11}, which was already assigned in Step 2):

U_{1j} = \frac{a_{1j}}{L_{11}}, \quad j = 2, 3, \ldots, n

Step 4: Calculate the rest of the elements row after row (i is the row number and j is the column number). The elements of [L] are calculated first because they are used for calculating the elements of [U]. For i = 2, 3, \ldots, n:

L_{ij} = a_{ij} - \sum_{k=1}^{j-1} L_{ik}U_{kj}, \quad j = 2, 3, \ldots, i

U_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} L_{ik}U_{kj}}{L_{ii}}, \quad j = i+1, i+2, \ldots, n
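These four steps translate almost line for line into MATLAB; a minimal sketch (illustrative names, no pivoting):

function [L, U] = crout(A)
% Crout LU decomposition: A = L*U with 1s on the diagonal of U.
n = size(A,1);
L = zeros(n);
U = eye(n);                          % step 2: ones on the diagonal of U
L(:,1) = A(:,1);                     % step 1: first column of L
U(1,2:n) = A(1,2:n)/L(1,1);          % step 3: first row of U
for i = 2:n                          % step 4: remaining rows
    for j = 2:i                      % elements of L in row i
        L(i,j) = A(i,j) - L(i,1:j-1)*U(1:j-1,j);
    end
    for j = i+1:n                    % elements of U in row i
        U(i,j) = (A(i,j) - L(i,1:i-1)*U(1:i-1,j))/L(i,i);
    end
end
end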

LU decomposition with pivoting

Decomposition of a matrix into the matrices [L] and [U] means that [A] = [L][U]. In the presentation of the Gauss and Crout decomposition methods in the previous two subsections, it is assumed that it is possible to carry out all the calculations without pivoting. In reality, as was discussed before, pivoting may be required for a successful execution of the Gauss elimination procedure. Pivoting might also be needed with Crout's method. If pivoting is used, then the matrices [L] and [U] that are obtained are not the decomposition of the original matrix [A]: the product [L][U] gives a matrix with rows that have the same elements as [A], but, due to the pivoting, the rows are in a different order. When pivoting is used in the decomposition procedure, the changes that are made have to be recorded and stored. This is done by creating a matrix [P], called a permutation matrix, such that

[P][A] = [L][U]

The order of the rows of B has to be changed so that it is consistent with the pivoting. This is done by multiplying B by the permutation matrix [P].

Example: Consider solving the following system using LU decomposition:

x_1 - x_2 = -1
3x_1 - 2x_2 + 5x_3 = 14
2x_1 + 4x_3 = 14

3 Iterative methods

Iterative, or approximate, methods provide an alternative to the elimination methods described previously. Such approaches are similar to the techniques we developed to obtain the roots of a single equation (fixed-point iteration). Those approaches consisted of guessing a value and then using a systematic method to obtain a refined estimate of the root. Because the present part of the book deals with a similar problem, obtaining the values that simultaneously satisfy a set of equations, we might suspect that such approximate methods could be useful in this context.

For a system with n equations, the explicit equations for the unknowns x_i can be written as

x_i = \frac{b_i - \sum_{j=1, j \ne i}^{n} a_{ij}x_j}{a_{ii}}

For a system of n = 4 equations, the previous equation reduces to the four explicit equations shown in fig. 7.

Fig. 7: the four explicit equations for a system with n = 4.

For a system of n equations [A]X = B, a sufficient condition for convergence is that in each row of the matrix the absolute value of the diagonal element is greater than the sum of the absolute values of the off-diagonal elements:

|a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|

This condition is sufficient but not necessary for convergence when an iteration method is used. When this condition is satisfied, the matrix is classified as diagonally dominant, and the iteration process converges toward the solution. The solution, however, might converge even when the condition is not satisfied. Two iterative methods are presented next.
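This convergence condition is easy to test numerically; a small MATLAB sketch, assuming A holds a given n-by-n coefficient matrix:

% Check whether A is diagonally dominant (sufficient for convergence)
d = abs(diag(A));                   % magnitudes of the diagonal elements
offdiag = sum(abs(A), 2) - d;       % row sums of the off-diagonal magnitudes
is_dominant = all(d > offdiag)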

3.1 Jacobi iterative method

In the Jacobi method, an initial (first) value is assumed for each of the unknowns, x_1^{(1)}, x_2^{(1)}, \ldots, x_n^{(1)}. If no information is available regarding the approximate values of the unknowns, the initial value of all the unknowns can be assumed to be zero. The second estimate of the solution is calculated by substituting the first estimate into the right-hand side of the explicit equation above. In general, the (k+1)-th estimate of the solution is calculated from the k-th estimate by:

x_i^{(k+1)} = \frac{b_i - \sum_{j=1, j \ne i}^{n} a_{ij}x_j^{(k)}}{a_{ii}}

The iterations continue until the differences between the values obtained in successive iterations are small. The iterations can be stopped when the absolute value of the estimated relative error of all the unknowns is smaller than some predetermined value:

\left| \frac{x_i^{(k+1)} - x_i^{(k)}}{x_i^{(k+1)}} \right| < \epsilon, \quad i = 1, 2, \ldots, n

Example: Consider solving the following system using the Jacobi iterative method (three iterations):

3x_1 - 2x_2 + 5x_3 = 14
x_1 - x_2 = -1
2x_1 + 4x_3 = 14

3.2 Gauss-Seidel iterative method

In the Gauss-Seidel method, initial (first) values are assumed for the unknowns x_2, x_3, \ldots, x_n (all of the unknowns except x_1). If no information is available regarding the approximate values of the unknowns, the initial value of all the unknowns can be assumed to be zero. The assumed values are substituted into the explicit equation with i = 1 to calculate the value of x_1:

x_1 = \frac{b_1 - \sum_{j=2}^{n} a_{1j}x_j}{a_{11}}

Next, the same equation with i = 2 is used for calculating a new value for x_2. This is followed by i = 3 for calculating a new value for x_3. The process continues until i = n, which is the end of the first iteration. Then the second iteration starts with i = 1, where a new value for x_1 is calculated, and so on. In the Gauss-Seidel method, the current values of the unknowns are used for calculating the new value of the next unknown; in other words, as a new value of an unknown is calculated, it is immediately used in the next application. Applying the previous equations to the Gauss-Seidel method gives the iteration formulas:

x_1^{(k+1)} = \frac{b_1 - \sum_{j=2}^{n} a_{1j}x_j^{(k)}}{a_{11}}

x_i^{(k+1)} = \frac{b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k)}}{a_{ii}}, \quad i = 2, 3, \ldots, n-1

x_n^{(k+1)} = \frac{b_n - \sum_{j=1}^{n-1} a_{nj}x_j^{(k+1)}}{a_{nn}}

Convergence can be checked using the same criterion as in the Jacobi method:

\epsilon_{a,i} = \left| \frac{x_i^{(k)} - x_i^{(k-1)}}{x_i^{(k)}} \right| < \epsilon_s

for all i, where k and k-1 denote the present and previous iterations.
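A minimal MATLAB sketch of the Gauss-Seidel iteration (illustrative names; using xold instead of x on the right-hand side of the update would give the Jacobi method instead):

function x = gauss_seidel(A, b, x, es, maxit)
% Gauss-Seidel iteration starting from the initial guess x.
% New values x(1:i-1) are used as soon as they are computed.
n = length(b);
for k = 1:maxit
    xold = x;
    for i = 1:n
        x(i) = (b(i) - A(i,1:i-1)*x(1:i-1) ...
                     - A(i,i+1:n)*x(i+1:n))/A(i,i);
    end
    if max(abs((x - xold)./x)) < es    % estimated relative error
        return                         % (assumes nonzero iterates)
    end
end
end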

Example: Consider solving the following system using the Gauss-Seidel iterative method (three iterations):

3x_1 - 2x_2 + 5x_3 = 14
x_1 - x_2 = -1
2x_1 + 4x_3 = 14

4 Use of MATLAB built-in functions for solving systems of linear equations

4.1 Left division

Given a system of linear equations in the form [A]X = B, one can use left division in MATLAB to solve for X. The syntax is

X = A\B

4.2 Inverse operation

Given a system of linear equations in the form [A]X = B, one can use the inverse of the matrix to solve for X, since X = [A]^{-1}B. The syntax is

X = inv(A)*B

4.3 LU decomposition

MATLAB has a built-in function for LU decomposition that can be used to solve for X. MATLAB uses partial pivoting: the lu function gives [L], [U], and the permutation matrix [P] such that [L][U] = [P][A]. The syntax is

[L, U, P] = lu(A)
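A short sketch of the three approaches applied to the chapter's running example (coefficient values are those of the example system above):

A = [3 -2 5; 1 -1 0; 2 0 4];
B = [14; -1; 14];

X1 = A\B;              % left division
X2 = inv(A)*B;         % inverse operation (generally slower than \)
[L, U, P] = lu(A);     % LU decomposition with partial pivoting
X3 = U\(L\(P*B));      % substitution step: forward, then back substitution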

5 Application: Inverse of a matrix

As an application of the preceding methods, we proceed to finding the inverse of a matrix. The procedure will be demonstrated for a 3 \times 3 matrix but can easily be extended to a matrix of dimension n \times n. Consider matrices [A] and [B] such that [A][B] = [I], where [I] is the identity matrix:

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Carrying out the multiplication column by column, this is equivalent to the three systems

[A] \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad [A] \begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad [A] \begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

These systems can be solved by any of the methods discussed in this chapter.
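A sketch of the column-by-column construction in MATLAB, for a given n-by-n matrix A (left division stands in for any of the chapter's solvers):

n = size(A,1);
B = zeros(n);                       % will hold the inverse of A
I = eye(n);
for j = 1:n
    B(:,j) = A\I(:,j);              % solve [A]b_j = e_j for column j
end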

6 Ill-conditioned systems

A numerical solution of a system of equations is seldom an exact solution. Even though direct methods (Gauss, Gauss-Jordan, LU decomposition) can be exact, they are still susceptible to round-off errors when implemented on a computer. This is especially true with large systems and with ill-conditioned systems. An ill-conditioned system of equations is one in which small variations in the coefficients of the matrix [A] cause large changes in the solution. When an ill-conditioned system of equations is being solved numerically, there is a high probability that the solution obtained will have a large error, or that a solution will not be obtained at all. To illustrate this, and to be able to identify whether a system of linear equations is ill conditioned, we first introduce the concept of a norm. By definition, a norm is a real-valued function that provides a measure of the size or length of multi-component mathematical entities such as vectors and matrices.

Examples of norms

Euclidean norm, for an n-dimensional vector X = [x_1, x_2, \ldots, x_n]:

\|X\|_e = \sqrt{\sum_{i=1}^{n} x_i^2}

Uniform vector norm:

\|X\|_\infty = \max_{1 \le i \le n} |x_i|

Frobenius norm, for an n \times n matrix [A] with components a_{ij}:

\|A\|_e = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}^2}

Uniform matrix (row) norm:

\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|

Although there are theoretical benefits to using certain of the norms, the choice is sometimes influenced by practical considerations. For example, the uniform row norm is widely used because of the ease with which it can be calculated and the fact that it usually provides an adequate measure of matrix size.

Now that we have introduced the concept of a norm, we can use it to define another quantity, the matrix condition number:

\mathrm{Cond}[A] = \|A\| \, \|A^{-1}\|

This number is always greater than or equal to 1, since \|A\|\,\|A^{-1}\| \ge \|A A^{-1}\| = \|I\| \ge 1.

It can be shown that the true relative error of the solution of [A]X = b, \|\Delta X\|/\|X\|, is less than or equal to the condition number multiplied by the true relative error of the residual, \|\Delta(AX)\|/\|b\|:

\frac{\|\Delta X\|}{\|X\|} \le \mathrm{Cond}[A] \, \frac{\|\Delta(AX)\|}{\|b\|}

where

\Delta(AX) = [A]X_t - [A]X_{NS} \quad \text{and} \quad \Delta X = X_t - X_{NS}

and X_t and X_{NS} denote the true and the numerical solutions, respectively. If the condition number of the matrix [A] is large, there is a large probability that the true relative error of the solution will be large as well.

Example: Consider solving the following system of linear equations:

6x_1 - 2x_2 = \ldots
\ldots x_1 - \ldots x_2 = 17
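In MATLAB, the norms and the condition number discussed above are available directly; a short sketch for a given matrix A (the infinity-norm versions correspond to the uniform norms defined here):

nrm = norm(A, inf);    % uniform (row) matrix norm of A
c   = cond(A, inf);    % Cond[A] = norm(A,inf)*norm(inv(A),inf)

% A large value of c signals an ill-conditioned system.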
