1 Matrix Representation of linear equations


1 Linear System of Equations

GOAL
- Understand linear systems
- Learn solution methods for triangular linear systems
- Learn how to do vectorization in high performance computing
- Solve linear systems by LU factorization

KEY WORDS
Matrix, lower triangular matrix, upper triangular matrix, tridiagonal system, LU factorization, Gaussian elimination, pivoting

1 Matrix Representation of linear equations

A system of linear equations is a set of linear equations in several variables. For example, if the system involves m variables x = (x_1, x_2, ..., x_m)^T and n equations, then it can be written as

a_11 x_1 + a_12 x_2 + ... + a_1m x_m = b_1
a_21 x_1 + a_22 x_2 + ... + a_2m x_m = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nm x_m = b_n

This system of linear equations can be represented in the matrix-vector form Ax = b, where the matrix A = (a_ij)_{n×m}, the vector x = (x_1, x_2, ..., x_m)^T and the vector b = (b_1, b_2, ..., b_n)^T. If the number of unknowns m is more than the number of equations n, then the system is said to be under-determined and may have infinitely many solutions. On the other hand, if the number of unknowns m is less than the number of equations, then the system is said to be over-determined, and the system might have no solution. For simplicity, we only consider the case of m = n and the case of A being invertible in this chapter. The case of m < n is addressed in the chapter on Least Squares Methods. When m = n and A is invertible, x = A^{-1} b.

Solving the linear system Ax = b arose in an earlier part of the course, for example in Lecture 3, in the power series form of polynomial interpolation, where the system size is relatively small. The problem often arises in solving differential equations, e.g. boundary value problems, in areas of scientific computing and engineering applications, where the size of the system can be in the hundreds, thousands, and even millions. We are concerned with solving the linear system Ax = b by numerical algorithms. Several questions are to be addressed; the answer to each question is one topic of this chapter.
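As a concrete starting point, the matrix form Ax = b can be set up and handed to a library solver. A Python/NumPy sketch (the 3-by-3 system here is an arbitrary illustration, not one from these notes):

```python
import numpy as np

# An arbitrary invertible 3x3 system (m = n = 3):
#   2 x1 +   x2 -  x3 = 3
#  -  x1 + 3 x2 + 2 x3 = 4
#     x1 +   x2 +  x3 = 3
A = np.array([[ 2.0, 1.0, -1.0],
              [-1.0, 3.0,  2.0],
              [ 1.0, 1.0,  1.0]])
b = np.array([3.0, 4.0, 3.0])

# Solves Ax = b directly (via an LU-type factorization), without forming inv(A)
x = np.linalg.solve(A, b)
print(x)
```

Note that np.linalg.solve factors A rather than inverting it, which is exactly the strategy developed in the rest of this chapter.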

1. How does computer software, e.g. Matlab, compute inv(A) or A^{-1} b? Gaussian elimination.

2. Assuming A^{-1} exists (det A ≠ 0), is the numerical algorithm robust enough to compute inv(A) or A^{-1} b for all such A? If not, what can be done to improve the numerical algorithm? There will be some instability associated with Gaussian elimination, which can be remedied by Gaussian elimination with pivoting.

3. How long does it take to compute the answer? In other words, what is the complexity of the algorithm, or the computational cost as a function of n, the size of A? We will study the complexity of a numerical algorithm, e.g. Gaussian elimination, by counting the number of flops (floating point operations) as a function of n.

4. There is always a limit to our computational resources and computational time. If the current algorithm takes too long to compute the answer, what can be done to improve the efficiency of the algorithm (or reduce its cost), so that the computation is affordable? Iterative methods are an alternative approach to solving Ax = b with reduced cost compared with Gaussian elimination.

2 Triangular linear system

The direct method that solves the general matrix problem Ax = b is called Gaussian elimination. The goal of Gaussian elimination is to convert a given linear system into an equivalent triangular system. Triangular systems are easy to solve because the unknowns can be computed one at a time without any further manipulation of the matrix of coefficients. The goal here is to study direct methods for triangular linear systems.

2.1 Lower triangular system: general forward substitution

A general lower triangular system looks like Lx = b, where the matrix L = (l_ij)_{n×n} has the property that l_ij = 0 if i < j, and looks like

L = [ l_11
      l_21  l_22
      ...         ...
      l_n1  l_n2  ...  l_nn ]

with zeros above the diagonal.

Such a linear system can be written as

l_i1 x_1 + l_i2 x_2 + ... + l_ii x_i = b_i, for i = 1, 2, ..., n.

The solution for x is given by

x_i = (b_i - sum_{j=1}^{i-1} l_ij x_j) / l_ii

This procedure of solving a lower triangular system is called the general forward substitution.

Example 1 Solve for x from

[2 0 0; -1 3 0; 3 1/2 -1] x = [1; 2; 3]

From the first equation, we have x_1 = b_1/l_11 = 1/2. From the second equation -x_1 + 3x_2 = 2, we have

x_2 = (b_2 - l_21 x_1)/l_22 = (2 + x_1)/3 = 5/6.

From the third equation, we have

x_3 = (b_3 - l_31 x_1 - l_32 x_2)/l_33 = (3 - 3x_1 - (1/2)x_2)/(-1) = -13/12.

Therefore x = (1/2, 5/6, -13/12)^T.

Code 2 (Pseudo code) The pseudo code for forward substitution is as follows:

for i = 1 : n
    x_i = (b_i - sum_{j=1}^{i-1} l_ij x_j) / l_ii
end

Code 3 (Matlab code) The following is an implementation in Matlab. You might mimic this function in a language of your choice.

for i = 1 : n
    x(i) = b(i);
    for j = 1 : i-1
        x(i) = x(i) - L(i,j)*x(j);
    end
    x(i) = x(i)/L(i,i);
end
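For readers working outside Matlab, the same forward substitution can be sketched in Python/NumPy (the function name is my own), checked here on Example 1:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower triangular L by forward substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x_i = (b_i - sum_{j<i} l_ij x_j) / l_ii, with the j-loop as an inner product
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[ 2.0, 0.0,  0.0],
              [-1.0, 3.0,  0.0],
              [ 3.0, 0.5, -1.0]])
b = np.array([1.0, 2.0, 3.0])
print(forward_substitution(L, b))  # expect [1/2, 5/6, -13/12]
```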

Code 4 (Vectorization of Matlab code) Notice that the j-loop in the Matlab implementation in fact subtracts the inner product

sum_{j=1}^{i-1} l_ij x_j = L(i, 1:i-1) * x(1:i-1)

from the component b_i. Thus we can have a vectorized implementation as follows:

for i = 1 : n
    x(i) = (b(i) - L(i, 1:i-1)*x(1:i-1))/L(i,i);
end

Complexity Based on the pseudo/Matlab code above, the forward substitution for computing x involves 2 flops in each iteration of the j-loop, therefore 2(i-1) in the j-loops; 2i (= 2(i-1) + 2) flops in each iteration of the i-loop. Thus the computation of x involves

sum_{i=1}^{n} 2i = 2(1 + 2 + ... + n) = n(n+1) = O(n^2)

flops.

2.2 Upper triangular system: general backward substitution

A general upper triangular system looks like Ux = b, where the matrix U = (u_ij)_{n×n} has the property that u_ij = 0 if i > j, and looks like

U = [ u_11  u_12  ...  u_1n
            u_22  ...  u_2n
                  ...
                       u_nn ]

with zeros below the diagonal. Such a linear system can be written as

u_ii x_i + u_{i,i+1} x_{i+1} + ... + u_in x_n = b_i, for i = 1, 2, ..., n.

The solution for x is given by

x_i = (b_i - sum_{j=i+1}^{n} u_ij x_j) / u_ii, for i = n, ..., 1.

This procedure of solving an upper triangular system is called the general backward substitution.

Example 5 Solve for x from

[3 1 -1; 0 3 -1; 0 0 2] x = [3; 2; 2]

From the last equation, we have x_3 = b_3/u_33 = 2/2 = 1. From the second equation 3x_2 - x_3 = 2, we have

x_2 = (b_2 - u_23 x_3)/u_22 = (2 + x_3)/3 = 1.

From the first equation, we have

x_1 = (b_1 - u_13 x_3 - u_12 x_2)/u_11 = (3 + x_3 - x_2)/3 = 1.

Therefore x = (1, 1, 1)^T.

Code 6 (Pseudo code) The pseudo code for backward substitution is as follows:

for i = n : -1 : 1
    x_i = (b_i - sum_{j=i+1}^{n} u_ij x_j) / u_ii
end

Code 7 (Matlab code) The following is an implementation in Matlab. You might mimic this function in a language of your choice.

for i = n : -1 : 1
    x(i) = b(i);
    for j = i+1 : n
        x(i) = x(i) - U(i,j)*x(j);
    end
    x(i) = x(i)/U(i,i);
end

Code 8 (Vectorization of Matlab code) Notice that the j-loop in the Matlab implementation in fact subtracts the inner product

sum_{j=i+1}^{n} u_ij x_j = U(i, i+1:n) * x(i+1:n)

from the component b_i. Thus we can have a vectorized implementation as follows:

for i = n : -1 : 1
    x(i) = (b(i) - U(i, i+1:n)*x(i+1:n))/U(i,i);
end
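A matching Python/NumPy sketch of backward substitution (function name is my own), checked on Example 5:

```python
import numpy as np

def backward_substitution(U, b):
    """Solve U x = b for upper triangular U by backward substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (b_i - sum_{j>i} u_ij x_j) / u_ii, with the j-loop as an inner product
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[3.0, 1.0, -1.0],
              [0.0, 3.0, -1.0],
              [0.0, 0.0,  2.0]])
b = np.array([3.0, 2.0, 2.0])
print(backward_substitution(U, b))  # expect [1, 1, 1]
```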

Complexity The backward substitution for computing x involves 2 flops in each iteration of the j-loop, therefore 2(n - (i+1) + 1) = 2(n-i) in the j-loops; 2(n-i+1) (= 2(n-i) + 2) flops in each iteration of the i-loop. Thus the computation of x involves

sum_{i=1}^{n} 2(n-i+1) = 2(n + (n-1) + ... + 1) = n(n+1) = O(n^2)

flops.

3 LU Factorization

We have seen that triangular linear systems are easy to solve via forward and backward substitutions, without any further manipulation of the matrix. The goal of this section is to devise a method that solves linear systems by reducing them to triangular linear systems.

Consider the general matrix problem Ax = b. An LU factorization of A is an expression of A as A = LU, where L = (l_ij)_{n×n} is a lower triangular matrix and U = (u_ij)_{n×n} is an upper triangular matrix. If the matrix A is non-singular, then both L and U are non-singular (think about why).

What can we do with the factorization? It turns out that the solution x is easy to obtain once an LU factorization is known. To see why, we substitute A = LU into Ax = b to obtain LUx = b. By letting y = Ux, we see that both x and the new variable y satisfy the following enlarged system:

y = Ux
Ly = b

The above linear system can be decoupled, in the sense that the second equation can be solved independently of the first one. Therefore, the following procedure can be adopted to compute x:

Forward substitution: solve Ly = b.
Backward substitution: solve Ux = y.

That sounds great and easy, doesn't it? But wait a minute: how can we find an LU factorization for a general matrix A? The answer is given by Gaussian elimination, to be discussed later on.
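The factor-solve procedure can be sketched in Python/NumPy, assuming L and U are already known (the 2-by-2 pair below is an arbitrary illustration; scipy.linalg.lu_factor/lu_solve perform the same job in production code):

```python
import numpy as np

def solve_lu(L, U, b):
    """Solve (L U) x = b: forward substitution for L y = b, then backward for U x = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# An arbitrary pair of triangular factors, so that A = L U:
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[4.0, 2.0], [0.0, 3.0]])
b = np.array([2.0, 7.0])
x = solve_lu(L, U, b)
print(x)
```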

3.1 Tridiagonal linear system

Let's consider a special class of linear systems in which the matrices are tridiagonal. A tridiagonal matrix is one with the following property:

A = (a_ij)_{n×n}, a_ij = 0 if |i - j| > 1.

For simplicity, we shall represent a tridiagonal matrix by using three vectors:

a = (a_2, ..., a_n)
c = (c_1, ..., c_{n-1})
d = (d_1, ..., d_n)

and shall write

A = [ d_1  c_1
      a_2  d_2  c_2
           a_3  d_3  ...
                ...  d_{n-1}  c_{n-1}
                     a_n      d_n ]

We look for an LU factorization of the matrix A in the following form:

L = [ 1
      l_2  1
           l_3  1
                ...
                l_n  1 ]

U = [ u_1  c_1
           u_2  c_2
                u_3  ...
                     u_{n-1}  c_{n-1}
                              u_n ]

Equating A = LU, we find that u_1 = d_1, and for i = 2 : n,

at (i, i-1):  a_i = l_i u_{i-1}        =>  l_i = a_i / u_{i-1}
at (i, i):    d_i = l_i c_{i-1} + u_i  =>  u_i = d_i - l_i c_{i-1}
at (i, i+1):  c_i = c_i

The pseudo code for computing the triangular matrices L and U is then given by:

u_1 = d_1
for i = 2 : n
    l_i = a_i / u_{i-1}
    u_i = d_i - l_i c_{i-1}
end

Once L and U are computed, the solution of the tridiagonal matrix problem Ax = b can be obtained by a forward and a backward substitution.
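A Python/NumPy sketch of this tridiagonal factorization (the function name and vector layout are my own choices), exercised on the 3-by-3 matrix with d_i = 2 and a_i = c_i = -1 used in the example that follows:

```python
import numpy as np

def tridiag_lu(a, c, d):
    """LU factorization of a tridiagonal matrix.
    a: subdiagonal (a_2..a_n), c: superdiagonal (c_1..c_{n-1}), d: diagonal (d_1..d_n).
    Returns the multipliers l = (l_2..l_n) and the pivots u = (u_1..u_n)."""
    n = len(d)
    l = np.zeros(n - 1)
    u = np.zeros(n)
    u[0] = d[0]
    for i in range(1, n):
        l[i - 1] = a[i - 1] / u[i - 1]      # l_i = a_i / u_{i-1}
        u[i] = d[i] - l[i - 1] * c[i - 1]   # u_i = d_i - l_i c_{i-1}
    return l, u

l, u = tridiag_lu([-1.0, -1.0], [-1.0, -1.0], [2.0, 2.0, 2.0])
print(l)  # expect [-1/2, -2/3]
print(u)  # expect [2, 3/2, 4/3]
```

This O(n) procedure is far cheaper than a general factorization, which is the point of exploiting the tridiagonal structure.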

Example 3.1 The LU factorization of the 3-by-3 tridiagonal matrix

A = [2 -1 0; -1 2 -1; 0 -1 2]

is A = LU with

L = [1 0 0; l_2 1 0; 0 l_3 1],  U = [u_1 c_1 0; 0 u_2 c_2; 0 0 u_3]

where, according to the procedure outlined above, we can obtain

u_1 = d_1 = 2,
l_2 = a_2/u_1 = -1/2,  u_2 = d_2 - l_2 c_1 = 2 - 1/2 = 3/2,
l_3 = a_3/u_2 = -2/3,  u_3 = d_3 - l_3 c_2 = 2 - (-2/3)(-1) = 2 - 2/3 = 4/3.

Therefore, the LU factorization of A is

A = [1 0 0; -1/2 1 0; 0 -2/3 1] [2 -1 0; 0 3/2 -1; 0 0 4/3]

4 Naive Gaussian Elimination

The goal here is to develop a solver for the general matrix problem Ax = b. The approach is again to find a lower triangular matrix L and an upper triangular matrix U such that A = LU. Once such a factorization is accomplished, the solution x can be obtained by a forward and a backward substitution. The method for computing L and U is called Gaussian elimination.

4.1 An example of a 3 by 3 system

We wish to find the solution for the following linear system:

2x_1 +  x_2 - 3x_3 = -2
 -x_1 +  x_2 -  x_3 = 4/3
  x_1 + 5x_2        = 3        (1)

Solution procedure: The augmented matrix representation for the above linear system is given by

D = (A, b) = [2 1 -3 -2; -1 1 -1 4/3; 1 5 0 3]

where the fourth column corresponds to the right-hand-side vector b. In the following, our goal is to transform the linear system to an upper triangular system via elementary row eliminations. In other words, we want to make a_ij = 0 for i > j. We follow the order a_21, a_31, a_32.

1. To make the entry a_21 zero, we perform the following elementary row elimination:

D(2,:) <- D(2,:) - l_21 D(1,:), where l_21 = a_21/a_11 = -1/2.

In other words, we have

A(2,:) <- A(2,:) + (1/2) A(1,:),  b_2 <- b_2 + (1/2) b_1.

This leads to A(2,:) = [0, 3/2, -5/2], b_2 = 1/3. In matrix notation, this step can be illustrated by

D = [2 1 -3 -2; -1 1 -1 4/3; 1 5 0 3]  -->  D = [2 1 -3 -2; 0 3/2 -5/2 1/3; 1 5 0 3]

2. To eliminate a_31, we have

D(3,:) <- D(3,:) - l_31 D(1,:), where l_31 = a_31/a_11 = 1/2.

In other words, we have

A(3,:) <- A(3,:) - (1/2) A(1,:),  b_3 <- b_3 - (1/2) b_1.

This leads to A(3,:) = [0, 9/2, 3/2], b_3 = 4. In matrix notation, this step can be illustrated by

D = [2 1 -3 -2; 0 3/2 -5/2 1/3; 1 5 0 3]  -->  D = [2 1 -3 -2; 0 3/2 -5/2 1/3; 0 9/2 3/2 4]

3. To eliminate a_32 in the updated matrix A, we perform the following row operation:

D(3,:) <- D(3,:) - (a_32/a_22) D(2,:).

By letting l_32 = a_32/a_22 = (9/2)/(3/2) = 3, we have D(3,:) <- D(3,:) - 3 D(2,:). We illustrate the computation in matrix form as follows:

D = [2 1 -3 -2; 0 3/2 -5/2 1/3; 0 9/2 3/2 4]  -->  D = [2 1 -3 -2; 0 3/2 -5/2 1/3; 0 0 9 3]
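The three elimination steps above can be replayed numerically; a NumPy sketch on the same augmented matrix:

```python
import numpy as np

# Augmented matrix D = (A, b) for the 3-by-3 example
D = np.array([[ 2.0, 1.0, -3.0, -2.0],
              [-1.0, 1.0, -1.0,  4.0 / 3.0],
              [ 1.0, 5.0,  0.0,  3.0]])

for k in range(2):                  # eliminate below the diagonal in columns 1 and 2
    for i in range(k + 1, 3):
        l = D[i, k] / D[k, k]       # multiplier l_ik
        D[i, :] = D[i, :] - l * D[k, :]

print(D)  # expect rows [2 1 -3 -2], [0 3/2 -5/2 1/3], [0 0 9 3]
```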

Lastly, we solve the upper triangular matrix problem produced in the previous step, Ux = b, where

U = [2 1 -3; 0 3/2 -5/2; 0 0 9],  b = (-2, 1/3, 3)^T.

The solution via backward substitution is x = (-8/9, 7/9, 1/3)^T.

Remark 4.1 The first three steps in the solution procedure (Gaussian elimination) provide an LU factorization for the original coefficient matrix A. To see how it works, we collect the multipliers l_ij from the above elimination process and form a lower triangular matrix L as follows:

L = [1 0 0; l_21 1 0; l_31 l_32 1] = [1 0 0; -1/2 1 0; 1/2 3 1]

The standard matrix multiplication shows that

LU = [1 0 0; -1/2 1 0; 1/2 3 1] [2 1 -3; 0 3/2 -5/2; 0 0 9] = [2 1 -3; -1 1 -1; 1 5 0] = A.

Remark 4.2 Gaussian elimination applied to the augmented matrix and the LU factorization of the matrix A are two different (but similar) approaches to solving for x.

Code 4.3 The pseudo code for the LU factorization of this 3-by-3 matrix is as follows:

L = eye(3);
for j = 1 : 2
    for i = j+1 : 3
        L(i,j) = A(i,j)/A(j,j);
        A(i,:) = A(i,:) - L(i,j)*A(j,:);
    end
end
U = A;

4.2 General case

The objective here is to obtain an LU factorization for the general linear system Ax = b, where A is n by n:

A = [ a_11  a_12  a_13  ...  a_{1,n-1}  a_1n
      a_21  a_22  a_23  ...  a_{2,n-1}  a_2n
      a_31  a_32  a_33  ...  ...        a_3n
      ...
      a_n1  a_n2  a_n3  ...  a_{n,n-1}  a_nn ]

By following the spirit illustrated in the example of the 3-by-3 case, we eliminate the elements a_i1 for i = 2, ..., n by performing

A(i,:) <- A(i,:) - l_i1 A(1,:), with l_i1 = a_i1/a_11.

With the above elementary row operations, the matrix A is updated as

A^(1) = [ a_11  a_12       a_13       ...  a_1n
          0     a^(1)_22   a^(1)_23   ...  a^(1)_2n
          0     a^(1)_32   a^(1)_33   ...  a^(1)_3n
          ...
          0     a^(1)_n2   a^(1)_n3   ...  a^(1)_nn ]

where a^(1)_ij = a_ij - l_i1 a_1j for i, j = 2, ..., n. If we represent the above procedure as a matrix multiplication process, then we have

L^(1) A = A^(1), with

L^(1) = [ 1
          -l_21  1
          -l_31       1
          ...
          -l_n1            1 ]

The same procedure can then be applied to the matrix A^(1), where we eliminate the elements a^(1)_i2 for i = 3, ..., n by performing

A(i,:) <- A(i,:) - l_i2 A(2,:), with l_i2 = a^(1)_i2 / a^(1)_22.

If we represent the above procedure as a matrix multiplication process, then we have L^(2) A^(1) = A^(2), with

L^(2) = [ 1
              1
             -l_32  1
          ...
             -l_n2       1 ]

Repeat this process until the matrix A is reduced to an upper triangular one. At the k-th step of the elimination, the multipliers required are given by

for i = k+1 : n
    L(i,k) = A(i,k)/A(k,k)
end

The act of multiplying row k by L(i,k) and subtracting it from row i can be implemented as

A(i, k:n) = A(i, k:n) - L(i,k)*A(k, k:n)

Incorporating these ideas, we get the following procedure producing the upper triangular matrix:

L^(k) A^(k-1) = A^(k), k = 1, ..., n-1,

with A^(0) = A and A^(n-1) being an upper triangular matrix, denoted U. Combining the operations above,

L^(n-1) ... L^(2) L^(1) A = U,

hence A = (L^(1))^{-1} ... (L^(n-1))^{-1} U = LU, with L = (L^(1))^{-1} ... (L^(n-1))^{-1}. A simple computation shows that

(L^(k))^{-1} = [ 1
                   ...
                      1
                      l_{k+1,k}  1
                      ...
                      l_{n,k}        1 ]

and

L = (L^(1))^{-1} ... (L^(n-1))^{-1} = [ 1
                                        l_21   1
                                        ...
                                        l_n1   l_n2   ...   l_{n,n-1}   1 ]

Below is a pseudo code for computing the LU factorization of a general matrix A using Gaussian elimination.

Code 4.4

L = eye(n);
for k = 1 : n-1
    L(k+1:n, k) = A(k+1:n, k)/A(k,k);
    for i = k+1 : n
        A(i, k+1:n) = A(i, k+1:n) - L(i,k)*A(k, k+1:n);
    end
end
U = A;

Once an LU factorization is completed, one can get the solution for Ax = b by doing a forward and a backward substitution.

Complexity Based on the pseudo/Matlab code above, the LU factorization of a general matrix A of size n×n involves, for each k-loop,

(n-k) flops to form the multipliers;
2(n-k) + 1 flops in each iteration of the i-loop, therefore (2(n-k) + 1)(n-k) for the i-loop, with i ranging from k+1 to n.

Thus the computation involves

sum_{k=1}^{n-1} [(n-k) + (2(n-k) + 1)(n-k)]
  = sum_{k=1}^{n-1} (2(n-k) + 2)(n-k)
  = sum_{k=1}^{n-1} (2k + 2)k
  = (n-1)n(2n-1)/3 + (n-1)n
  = O(n^3)        (2)

flops.

Remark 4.5 Alternatively, one can form an augmented matrix D = [A, b] and perform Gaussian elimination on the D matrix to form an upper triangular system. Finally, one can perform backward substitution on the upper triangular system to solve for x. The complexity of the algorithm is the same as that of the LU factorization of A above, that is, O(n^3).

Remark 4.6 If one has to solve many linear systems with the same matrix A but different right-hand-side vectors b, then first performing the LU factorization of A at O(n^3) computational cost, and then performing forward and backward substitutions for the lower and upper triangular systems at O(n^2) cost per right-hand side, is preferred to optimize efficiency (save computational cost).
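A Python/NumPy sketch of Code 4.4 (the helper name is my own; there is no pivoting, so it can fail whenever a pivot A(k,k) vanishes), checked against the 3-by-3 matrix of Section 4.1:

```python
import numpy as np

def lu_factor_nopivot(A):
    """Doolittle LU factorization without pivoting, following Code 4.4.
    Returns unit lower triangular L and upper triangular U with A = L U."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = A[k+1:, k] / A[k, k]              # multipliers l_ik
        for i in range(k + 1, n):
            A[i, k+1:] = A[i, k+1:] - L[i, k] * A[k, k+1:]
        A[k+1:, k] = 0.0                               # eliminated entries are zero
    return L, np.triu(A)

A = np.array([[ 2.0, 1.0, -3.0],
              [-1.0, 1.0, -1.0],
              [ 1.0, 5.0,  0.0]])
L, U = lu_factor_nopivot(A)
print(L)
print(U)
```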

Remark 4.7 It turns out that the LU factorization (when it exists) is not unique. If L has 1's on its diagonal, then it is called a Doolittle factorization. If U has 1's on its diagonal, then it is called a Crout factorization. The LU factorization introduced above has 1's as the diagonal entries of L, therefore it is a Doolittle factorization. Note that the Doolittle factorization does not always exist (see the example in the next section with ε = 0).

Remark 4.8 In the above algorithm, there is a division operation in computing l_ij. This leads to a potential danger of division by zero. The Gaussian elimination with pivoting in the next section is designed to avoid such a problem.

5 Gaussian Elimination with Pivoting

The elimination process in the naive Gaussian elimination is based on the following elementary row operation:

A(i,:) <- A(i,:) - l_ik A(k,:),  l_ik = a_ik/a_kk.

When the updated diagonal entry a_kk vanishes, the above process can't be completed because the multiplier l_ik is not defined. We want to modify the naive Gaussian elimination in such a way that the elimination can continue even if a_kk = 0.

5.1 Stability: the need for pivoting

Consider the matrix problem Ax = b, where

A = [ε 1; 1 1],  b = [1; 2].

Assume that the parameter ε is sufficiently small; we would like to see how the naive Gaussian elimination is affected by the change of ε. The LU factorization of A is given by

A = [ε 1; 1 1] = [1 0; 1/ε 1] [ε 1; 0 1 - 1/ε] = LU.

We then solve for x in the following two steps: (1) use forward substitution for Ly = b, (2) backward substitution for Ux = y. In doing so, the intermediate variable y is given by

y_1 = 1,  y_2 = 2 - y_1/ε = 2 - 1/ε,
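The instability can be reproduced in double-precision arithmetic; a Python sketch, with ε = 1e-20 as an arbitrary choice small enough that 1 - 1/ε rounds to -1/ε:

```python
eps = 1e-20

# Naive LU of A = [eps 1; 1 1]: L = [1 0; 1/eps 1], U = [eps 1; 0 1 - 1/eps]
u22 = 1.0 - 1.0 / eps           # rounds to -1/eps in floating point
y1 = 1.0                        # forward substitution for L y = b, b = (1, 2)
y2 = 2.0 - y1 / eps
x2 = y2 / u22                   # backward substitution for U x = y
x1 = (1.0 - x2) / eps
print(x1, x2)                   # x2 comes out fine, x1 is completely wrong (exact: both near 1)
```

The rounding in u22 and y2 wipes out the information carried by the small numbers 1 and 2, and the subsequent cancellation in x1 = (1 - x2)/ε destroys the first component.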

and the final solution is given by

x_2 = y_2/(1 - ε^{-1}) = (2 - ε^{-1})/(1 - ε^{-1}),  x_1 = (1 - x_2)/ε.

When ε → 0, then x_2 → 1, and the computation of x_1 is unstable since the formula approaches the indeterminate form 0/0. Notice that when ε = 0, the exact solution to the given linear system is x_1 = x_2 = 1. Therefore, the LU factorization will not provide a stable solution. (See homework for examples with very small ε.)

5.2 Pivoting

The instability seen in the previous subsection is really caused by the fact that the elimination step involves division by the very small number ε. The entry a_11 = ε in the naive Gaussian elimination is known as the pivot element, and the computation of l_21 involves division by this small pivot element. How shall we avoid such a situation?

Consider the k-th step in the Gaussian elimination. To eliminate the entries a_ik, for i = k+1, ..., n, the following row operations are performed:

A(i,:) <- A(i,:) - l_ik A(k,:),  l_ik = a_ik/a_kk.

Wouldn't it be nice if a_kk were the largest in absolute value among the column entries a_kk, a_{k+1,k}, ..., a_nk? This would ensure that all the multipliers are less than or equal to 1 in absolute value. This suggests that at the beginning of the k-th step, we swap row k and row q, where a_qk is the largest element in absolute value among a_kk, a_{k+1,k}, ..., a_nk. This swapping idea is known as the selection of pivot elements. The resulting elimination is called Gaussian elimination with pivoting.

Code 5.1 Let D be the augmented matrix, i.e. D = [A, b].

for k = 1 : n-1
    r : index of the maximum of |D(k:n, k)|
    swap row r and row k
    for i = k+1 : n
        L(i,k) = D(i,k)/D(k,k);
        D(i, k+1:n+1) = D(i, k+1:n+1) - L(i,k)*D(k, k+1:n+1);
    end
end

Let the updated D = [A, b]; since A is now an upper triangular matrix, use backward substitution to solve for x.
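A Python/NumPy sketch of Code 5.1 combined with backward substitution (the function name is my own), applied to the small-pivot system of Section 5.1, which the naive algorithm mishandles:

```python
import numpy as np

def solve_pivot(A, b):
    """Gaussian elimination with partial pivoting on the augmented matrix,
    following Code 5.1, then backward substitution."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    D = np.hstack([A, b.reshape(-1, 1)])        # augmented matrix [A, b]
    for k in range(n - 1):
        r = k + np.argmax(np.abs(D[k:, k]))     # row of the largest pivot candidate
        D[[k, r]] = D[[r, k]]                   # swap rows k and r
        for i in range(k + 1, n):
            l = D[i, k] / D[k, k]
            D[i, k:] = D[i, k:] - l * D[k, k:]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # backward substitution
        x[i] = (D[i, n] - D[i, i+1:n] @ x[i+1:]) / D[i, i]
    return x

eps = 1e-20
x = solve_pivot([[eps, 1.0], [1.0, 1.0]], [1.0, 2.0])
print(x)  # expect approximately [1, 1]
```

With the rows swapped, every multiplier has absolute value at most 1 and the catastrophic division by ε never occurs.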

Example 5.2 Perform Gaussian elimination with pivoting to solve a linear system Ax = b with a 3-by-3 matrix A and b = (5, 8, 13)^T.

Solution
Step 1: form the augmented matrix D = [A, b].
Step 2 (pivoting): |D_21| > |D_11|, so swap row 1 and row 2.
Step 3: apply Gaussian elimination to eliminate D_21 and D_31.
Step 4 (pivoting): check |D_22| > |D_32|: no need for swapping.
Step 5: apply Gaussian elimination to eliminate D_32.
Step 6: backward substitution on the resulting upper triangular system gives x = (-1, 1, 3)^T.


More information

Linear Equations ! 25 30 35$ & " 350 150% & " 11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development

Linear Equations ! 25 30 35$ &  350 150% &  11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Topic 4 Module 9 Introduction Systems of to Matrices Linear Equations Income = Tickets!

More information

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method 578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

General Framework for an Iterative Solution of Ax b. Jacobi s Method

General Framework for an Iterative Solution of Ax b. Jacobi s Method 2.6 Iterative Solutions of Linear Systems 143 2.6 Iterative Solutions of Linear Systems Consistent linear systems in real life are solved in one of two ways: by direct calculation (using a matrix factorization,

More information

Reduced echelon form: Add the following conditions to conditions 1, 2, and 3 above:

Reduced echelon form: Add the following conditions to conditions 1, 2, and 3 above: Section 1.2: Row Reduction and Echelon Forms Echelon form (or row echelon form): 1. All nonzero rows are above any rows of all zeros. 2. Each leading entry (i.e. left most nonzero entry) of a row is in

More information

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us

More information

MAT188H1S Lec0101 Burbulla

MAT188H1S Lec0101 Burbulla Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u

More information

Unit 18 Determinants

Unit 18 Determinants Unit 18 Determinants Every square matrix has a number associated with it, called its determinant. In this section, we determine how to calculate this number, and also look at some of the properties of

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

CS3220 Lecture Notes: QR factorization and orthogonal transformations

CS3220 Lecture Notes: QR factorization and orthogonal transformations CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

Lecture 4: Partitioned Matrices and Determinants

Lecture 4: Partitioned Matrices and Determinants Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

Arithmetic and Algebra of Matrices

Arithmetic and Algebra of Matrices Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational

More information

Linearly Independent Sets and Linearly Dependent Sets

Linearly Independent Sets and Linearly Dependent Sets These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

Lecture Notes 2: Matrices as Systems of Linear Equations

Lecture Notes 2: Matrices as Systems of Linear Equations 2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably

More information

Typical Linear Equation Set and Corresponding Matrices

Typical Linear Equation Set and Corresponding Matrices EWE: Engineering With Excel Larsen Page 1 4. Matrix Operations in Excel. Matrix Manipulations: Vectors, Matrices, and Arrays. How Excel Handles Matrix Math. Basic Matrix Operations. Solving Systems of

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Numerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and Non-Square Systems

Numerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and Non-Square Systems Numerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and Non-Square Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001,

More information

Lecture 1: Systems of Linear Equations

Lecture 1: Systems of Linear Equations MTH Elementary Matrix Algebra Professor Chao Huang Department of Mathematics and Statistics Wright State University Lecture 1 Systems of Linear Equations ² Systems of two linear equations with two variables

More information

1 Review of Newton Polynomials

1 Review of Newton Polynomials cs: introduction to numerical analysis 0/0/0 Lecture 8: Polynomial Interpolation: Using Newton Polynomials and Error Analysis Instructor: Professor Amos Ron Scribes: Giordano Fusco, Mark Cowlishaw, Nathanael

More information

T ( a i x i ) = a i T (x i ).

T ( a i x i ) = a i T (x i ). Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)

More information

Lecture notes on linear algebra

Lecture notes on linear algebra Lecture notes on linear algebra David Lerner Department of Mathematics University of Kansas These are notes of a course given in Fall, 2007 and 2008 to the Honors sections of our elementary linear algebra

More information

Lecture 5: Singular Value Decomposition SVD (1)

Lecture 5: Singular Value Decomposition SVD (1) EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25-Sep-02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system

More information

Natural cubic splines

Natural cubic splines Natural cubic splines Arne Morten Kvarving Department of Mathematical Sciences Norwegian University of Science and Technology October 21 2008 Motivation We are given a large dataset, i.e. a function sampled

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

Department of Chemical Engineering ChE-101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI

Department of Chemical Engineering ChE-101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI Department of Chemical Engineering ChE-101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI Solving a System of Linear Algebraic Equations (last updated 5/19/05 by GGB) Objectives:

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

DETERMINANTS TERRY A. LORING

DETERMINANTS TERRY A. LORING DETERMINANTS TERRY A. LORING 1. Determinants: a Row Operation By-Product The determinant is best understood in terms of row operations, in my opinion. Most books start by defining the determinant via formulas

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Introduction to Matrices for Engineers

Introduction to Matrices for Engineers Introduction to Matrices for Engineers C.T.J. Dodson, School of Mathematics, Manchester Universit 1 What is a Matrix? A matrix is a rectangular arra of elements, usuall numbers, e.g. 1 0-8 4 0-1 1 0 11

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

Linear Algebra: Determinants, Inverses, Rank

Linear Algebra: Determinants, Inverses, Rank D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

More information

Matrix Multiplication

Matrix Multiplication Matrix Multiplication CPS343 Parallel and High Performance Computing Spring 2016 CPS343 (Parallel and HPC) Matrix Multiplication Spring 2016 1 / 32 Outline 1 Matrix operations Importance Dense and sparse

More information

MATHEMATICS FOR ENGINEERS BASIC MATRIX THEORY TUTORIAL 2

MATHEMATICS FOR ENGINEERS BASIC MATRIX THEORY TUTORIAL 2 MATHEMATICS FO ENGINEES BASIC MATIX THEOY TUTOIAL This is the second of two tutorials on matrix theory. On completion you should be able to do the following. Explain the general method for solving simultaneous

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

5 Numerical Differentiation

5 Numerical Differentiation D. Levy 5 Numerical Differentiation 5. Basic Concepts This chapter deals with numerical approximations of derivatives. The first questions that comes up to mind is: why do we need to approximate derivatives

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

MAT 242 Test 2 SOLUTIONS, FORM T

MAT 242 Test 2 SOLUTIONS, FORM T MAT 242 Test 2 SOLUTIONS, FORM T 5 3 5 3 3 3 3. Let v =, v 5 2 =, v 3 =, and v 5 4 =. 3 3 7 3 a. [ points] The set { v, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these

More information

Matrices 2. Solving Square Systems of Linear Equations; Inverse Matrices

Matrices 2. Solving Square Systems of Linear Equations; Inverse Matrices Matrices 2. Solving Square Systems of Linear Equations; Inverse Matrices Solving square systems of linear equations; inverse matrices. Linear algebra is essentially about solving systems of linear equations,

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

More information

Orthogonal Bases and the QR Algorithm

Orthogonal Bases and the QR Algorithm Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

Nonlinear Algebraic Equations Example

Nonlinear Algebraic Equations Example Nonlinear Algebraic Equations Example Continuous Stirred Tank Reactor (CSTR). Look for steady state concentrations & temperature. s r (in) p,i (in) i In: N spieces with concentrations c, heat capacities

More information

LU Factorization Method to Solve Linear Programming Problem

LU Factorization Method to Solve Linear Programming Problem Website: wwwijetaecom (ISSN 2250-2459 ISO 9001:2008 Certified Journal Volume 4 Issue 4 April 2014) LU Factorization Method to Solve Linear Programming Problem S M Chinchole 1 A P Bhadane 2 12 Assistant

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

8.1. Cramer s Rule for Solving Simultaneous Linear Equations. Introduction. Prerequisites. Learning Outcomes. Learning Style

8.1. Cramer s Rule for Solving Simultaneous Linear Equations. Introduction. Prerequisites. Learning Outcomes. Learning Style Cramer s Rule for Solving Simultaneous Linear Equations 8.1 Introduction The need to solve systems of linear equations arises frequently in engineering. The analysis of electric circuits and the control

More information