DETERMINANTS

1 Systems of two equations in two unknowns

A system of two equations in two unknowns has the form
$$a_{11}x_1 + a_{12}x_2 = b_1$$
$$a_{21}x_1 + a_{22}x_2 = b_2.$$
This can be written more concisely in matrix notation,
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix},$$
or simply $Ax = b$.

Let's try to work out a general formula for the solution to such a system. To solve for $x_1$, you need to eliminate $x_2$. Multiplying the first equation by $a_{22}$ and the second by $a_{12}$, and subtracting, gives
$$(a_{11}a_{22} - a_{12}a_{21})x_1 = a_{22}b_1 - a_{12}b_2.$$
If the coefficient of $x_1$ on the left is non-zero, this gives
$$x_1 = \frac{a_{22}b_1 - a_{12}b_2}{a_{11}a_{22} - a_{12}a_{21}}.$$
A similar method can be used to solve for $x_2$. To eliminate $x_1$, multiply the first equation by $a_{21}$ and the second by $a_{11}$ and subtract, obtaining
$$(a_{11}a_{22} - a_{12}a_{21})x_2 = a_{11}b_2 - a_{21}b_1.$$
If the coefficient of $x_2$ on the left is non-zero, you get
$$x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{11}a_{22} - a_{12}a_{21}}.$$

Notice that the number we had to divide by to solve for either $x_1$ or $x_2$, namely $a_{11}a_{22} - a_{12}a_{21}$, was the same. This number is called the determinant of the matrix
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.$$
It is denoted by either $\det A$ or $|A|$. Roughly speaking, it is the number you must divide by to solve the system $Ax = b$.

Notice that there is a theorem implicit in the above calculations. As long as the determinant $\det A$ is non-zero, the calculations can be carried out, leading to a unique solution.

Theorem 1. If $\det A \neq 0$, the $2 \times 2$ system $Ax = b$ has a unique solution, and the coefficient matrix $A$ is invertible.

The proof of the following complementary result is left as an exercise.

Theorem 2. If $\det A = 0$, then $A$ is not invertible, so the system $Ax = b$ either has no solutions or it has infinitely many solutions.

Our goal in the following sections is to develop an analogous theory of determinants for square matrices of arbitrary size.
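
As a quick sanity check of these formulas, here is a minimal Python sketch (the helper name solve_2x2 is mine, not from the notes) that solves a 2 x 2 system by dividing by the determinant, and refuses to proceed when the determinant is zero.

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x1 + a12*x2 = b1 and a21*x1 + a22*x2 = b2 via the 2x2 determinant."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("determinant is zero: no unique solution")
    x1 = (a22 * b1 - a12 * b2) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

# Example: x1 + 2*x2 = 5 and 3*x1 + 4*x2 = 6; the determinant is 1*4 - 2*3 = -2.
print(solve_2x2(1, 2, 3, 4, 5, 6))   # (-4.0, 4.5)
```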

2 Permutations

2.1 What is a permutation?

A permutation of the numbers $\{1, 2, \ldots, n\}$ is an arrangement of those numbers in a particular order. There are two permutations of the numbers $\{1, 2\}$, namely $(1, 2)$ and $(2, 1)$. There are six permutations of the numbers $\{1, 2, 3\}$, namely $(1, 2, 3)$, $(1, 3, 2)$, $(2, 1, 3)$, $(2, 3, 1)$, $(3, 1, 2)$, and $(3, 2, 1)$. There are 24 permutations of the numbers $\{1, 2, 3, 4\}$, which you may list if you care to. In general, the number of permutations of the numbers $\{1, 2, \ldots, n\}$ is $n!$.

2.2 Parity

An inversion in a permutation is an occurrence of a larger number before a smaller one. Thus the permutation $(3, 1, 2)$ contains two inversions, since 3 precedes the smaller numbers 1 and 2. The permutation $(3, 2, 1)$ contains three inversions, since 3 precedes 1 and 2, and 2 precedes 1. A permutation is called even if it contains an even number of inversions, and odd if it contains an odd number of inversions. Thus the permutations $(1, 2, 3)$, $(2, 3, 1)$, and $(3, 1, 2)$ are even, while the permutations $(1, 3, 2)$, $(3, 2, 1)$, and $(2, 1, 3)$ are odd.

The sign of a permutation $J = (j_1, j_2, \ldots, j_n)$, denoted $\operatorname{sgn}(J)$, is $1$ if $J$ is even and $-1$ if $J$ is odd. Thus
$$\operatorname{sgn}(1, 2, 3) = \operatorname{sgn}(2, 3, 1) = \operatorname{sgn}(3, 1, 2) = 1$$
and
$$\operatorname{sgn}(1, 3, 2) = \operatorname{sgn}(3, 2, 1) = \operatorname{sgn}(2, 1, 3) = -1.$$

What happens to the parity of a permutation if you interchange two entries? For example, the permutation $(2, 3, 1)$ is even. If you switch the first and third entries, you get $(1, 3, 2)$, which is odd, so the parity reverses. This always happens.

Theorem 3. If the permutation $J'$ is obtained from $J$ by interchanging two entries, then $J$ and $J'$ have opposite parities: $\operatorname{sgn}(J') = -\operatorname{sgn}(J)$.

Proof: Consider first the case when the two entries that are exchanged are adjacent, say $j_k$ and $j_{k+1}$. There are two cases to consider. If $j_k < j_{k+1}$, then interchanging them will increase the number of inversions by 1, thus reversing the parity. On the other hand, if $j_k > j_{k+1}$, then interchanging them will decrease the number of inversions by 1, again reversing the parity.

Now consider the case of interchanging two entries that are separated by $l$ other entries. This can be accomplished by repeatedly interchanging adjacent entries. Let's count the number of interchanges of adjacent entries that are required. First, take one of the two entries to be exchanged, and move it past each of the $l$ in-between entries, so that it becomes adjacent to the other entry that is to be moved. This requires exactly $l$ interchanges of adjacent entries. Now move the other entry to its final destination, by exchanging it past the one you've already moved, and then past the $l$ intermediate entries. This requires exactly $l + 1$ interchanges of adjacent entries. Thus the total number of interchanges of adjacent entries is $l + (l + 1) = 2l + 1$. Since this number is odd, the number of parity changes is odd, and so the parity is reversed.
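
The parity of a permutation is easy to compute by brute-force inversion counting. The following short Python sketch (the helper sgn is mine, not part of the notes) computes the sign this way, lists the signs of the six permutations of {1, 2, 3}, and illustrates Theorem 3 on one pair.

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation: +1 if the number of inversions is even, -1 if odd."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

# Signs of the six permutations of {1, 2, 3}.
for p in permutations((1, 2, 3)):
    print(p, sgn(p))

# Theorem 3: interchanging the first and third entries of (2, 3, 1) reverses the parity.
print(sgn((2, 3, 1)), sgn((1, 3, 2)))   # 1 -1
```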

3 Determinants of n x n matrices

3.1 The 3 x 3 case

The determinant of a $3 \times 3$ matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
is
$$\det A = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$
Notice that each term is of the form $\pm a_{1j_1}a_{2j_2}a_{3j_3}$, where $(j_1, j_2, j_3)$ is a permutation of $\{1, 2, 3\}$, and the sign is the sign of the permutation. Thus we can write
$$\det A = \sum \operatorname{sgn}(j_1, j_2, j_3)\, a_{1j_1}a_{2j_2}a_{3j_3},$$
where the sum runs over the six permutations $(j_1, j_2, j_3)$ of $\{1, 2, 3\}$.

3.2 The general case

The definition of the determinant of an $n \times n$ matrix follows the same pattern as the $3 \times 3$ case:
$$\det A = \sum_J \operatorname{sgn}(J)\, a_{1j_1}a_{2j_2} \cdots a_{nj_n}, \tag{1}$$
where the sum runs over all permutations $J = (j_1, j_2, \ldots, j_n)$ of the numbers $\{1, 2, \ldots, n\}$. (You should check that this is consistent with the definition given previously for $2 \times 2$ matrices.)

Before proceeding, you should note that this definition does not provide a practical method for calculating determinants of large matrices. The calculation of each term in the sum requires $n - 1$ multiplications, and since there are $n!$ terms, the number of multiplications required is $n!(n - 1)$. This number grows very rapidly as $n$ increases. If you could perform one million multiplications each second, it would take about 15 million years to calculate the determinant of a large matrix directly from the definition.
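
For concreteness, here is a minimal Python sketch of formula (1): it sums sgn(J) a_{1 j_1} ... a_{n j_n} over all n! permutations. The function name is mine; as the notes point out, this approach is only usable for very small n.

```python
from itertools import permutations

def det_from_definition(A):
    """Determinant via the permutation sum in formula (1); n! terms, so small n only."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation, by counting inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = -1 if inversions % 2 else 1
        for row in range(n):
            term *= A[row][perm[row]]
        total += term
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_from_definition(A))   # -3
```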

4 Properties of determinants

4.1 Primitive properties

There are three primitive properties of determinants that are deduced directly from the definition. Other properties are derived from these three primitive properties.

Property (i): The determinant function depends linearly on each individual row when the other rows are held fixed. To be explicit, write the $n \times n$ matrix $A$ as a column of row vectors
$$A = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_n \end{bmatrix},$$
and suppose that the $i$th row is a linear combination of two other row vectors:
$$A_i = t' A_i' + t'' A_i''.$$
Then
$$\det A = t' \det A' + t'' \det A'',$$
where $A'$ and $A''$ are the matrices obtained from $A$ by replacing the $i$th row with $A_i'$ and $A_i''$, respectively.

To verify Property (i), write
$$A_i' = [a_{i1}'\ \ a_{i2}'\ \cdots\ a_{in}'] \quad \text{and} \quad A_i'' = [a_{i1}''\ \ a_{i2}''\ \cdots\ a_{in}''],$$
so that the $i$th row of $A$ is $A_i = t' A_i' + t'' A_i''$. From the definition of the determinant,
$$\det A = \sum_J \operatorname{sgn}(J)\, a_{1j_1} \cdots (t' a_{ij_i}' + t'' a_{ij_i}'') \cdots a_{nj_n}$$
$$= t' \sum_J \operatorname{sgn}(J)\, a_{1j_1} \cdots a_{ij_i}' \cdots a_{nj_n} + t'' \sum_J \operatorname{sgn}(J)\, a_{1j_1} \cdots a_{ij_i}'' \cdots a_{nj_n}$$
$$= t' \det A' + t'' \det A''.$$

Property (ii): A square matrix with two equal rows has determinant 0.

To verify Property (ii), let $A$ be a square matrix with two equal rows, say rows $k$ and $l$. For each permutation $J$, let $J'$ be the permutation obtained by switching the entries in positions $k$ and $l$. Since rows $k$ and $l$ are equal, we have $a_{kj_k} = a_{lj_k} = a_{lj_l'}$ and $a_{lj_l} = a_{kj_l} = a_{kj_k'}$, while $a_{ij_i} = a_{ij_i'}$ when $i$ is not equal to $k$ or $l$. Therefore, the terms in (1) corresponding to the permutations $J$ and $J'$ are the same, except for the factors $\operatorname{sgn}(J)$ and $\operatorname{sgn}(J')$, which have opposite sign by Theorem 3. Therefore, these two terms sum to 0. Thus the terms of (1) can be paired into terms that sum to 0, and so $\det A = 0$.

Property (iii): The determinant of the $n \times n$ identity matrix is 1.

To verify Property (iii), let $\delta_{ij}$ be the entries of $I_n$, so that $\delta_{ij} = 0$ if $i \neq j$, and $\delta_{ii} = 1$. From the definition of the determinant,
$$\det I = \sum_J \operatorname{sgn}(J)\, \delta_{1j_1} \cdots \delta_{nj_n}.$$
However, if $j_i \neq i$ for even one value of $i$, then the corresponding term in the above sum has a factor of 0, and so the product is 0. Therefore, the only non-zero term comes from the permutation $(1, 2, \ldots, n)$, so
$$\det I = \operatorname{sgn}(1, 2, \ldots, n)\, \delta_{11} \cdots \delta_{nn} = 1.$$
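
A numerical spot-check never hurts. The sketch below (all names are mine) uses the explicit six-term formula for a 3 x 3 determinant from Section 3.1 to illustrate Properties (i)-(iii) on concrete matrices; the particular numbers are arbitrary.

```python
def det3(M):
    """Explicit 3x3 determinant (the six-term formula from Section 3.1)."""
    return (M[0][0]*M[1][1]*M[2][2] + M[0][1]*M[1][2]*M[2][0] + M[0][2]*M[1][0]*M[2][1]
          - M[0][2]*M[1][1]*M[2][0] - M[0][0]*M[1][2]*M[2][1] - M[0][1]*M[1][0]*M[2][2])

# Property (i): linearity in the first row with the other rows held fixed.
t1, t2 = 2, -3
r1, r2 = [1, 4, 7], [0, 2, 5]
combined = [t1 * a + t2 * b for a, b in zip(r1, r2)]
rest = [[3, 1, 4], [1, 5, 9]]
print(det3([combined] + rest), t1 * det3([r1] + rest) + t2 * det3([r2] + rest))   # -82 -82

# Property (ii): two equal rows give determinant 0.
print(det3([[1, 2, 3], [1, 2, 3], [4, 5, 6]]))   # 0

# Property (iii): the identity matrix has determinant 1.
print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # 1
```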

4.2 Derived properties

Let's now deduce some additional properties of determinants from the three primitive properties above.

Property (iv): Multiplying a row of $A$ by a constant $c$ changes the determinant by a factor of $c$.

This is really just a special case of Property (i).

Property (v): If $A$ has a row consisting entirely of zeroes, then $\det A = 0$.

To see this, notice that multiplying the row of zeroes by 0 has no effect on the matrix, and therefore no effect on its determinant. On the other hand, Property (iv) says that the determinant changes by a factor of 0, so $\det A = 0$.

Property (vi): If $B$ is obtained from $A$ by interchanging two rows, then $\det B = -\det A$.

This is a consequence of Property (ii). Suppose $B$ is obtained from $A$ by interchanging rows $k$ and $l$, with $k < l$. Write $A_1, \ldots, A_n$ for the rows of $A$, and let $C$ be the matrix obtained from $A$ by replacing both row $k$ and row $l$ with the sum $A_k + A_l$. Since rows $k$ and $l$ of $C$ are equal, Property (ii) gives
$$\det C = 0.$$
Applying Property (i) twice, once in row $k$ and again in row $l$, we can expand $\det C$ as a sum of four determinants:
$$\det C = \det C_{kk} + \det C_{kl} + \det C_{lk} + \det C_{ll} = 0,$$
where $C_{kl}$ denotes the matrix whose row $k$ is $A_k$, whose row $l$ is $A_l$, and whose remaining rows agree with those of $A$ (and similarly for the other three terms). By Property (ii), the first and last terms in this sum are 0, since each has two equal rows, while the second and third are $\det A$ and $\det B$, respectively. We have therefore shown that $\det A + \det B = 0$, so $\det B = -\det A$.
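
These row-operation rules are easy to confirm numerically. The sketch below assumes NumPy is available and uses numpy.linalg.det on an arbitrary 3 x 3 matrix to illustrate Properties (iv) and (vi).

```python
import numpy as np

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 2.0]])

# Property (iv): scaling one row scales the determinant by the same factor.
B = A.copy()
B[1] *= 5
print(np.linalg.det(B), 5 * np.linalg.det(A))    # both approximately -215.0

# Property (vi): interchanging two rows flips the sign of the determinant.
C = A.copy()
C[[0, 2]] = C[[2, 0]]
print(np.linalg.det(C), -np.linalg.det(A))       # both approximately 43.0
```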

Property (vii): Adding a multiple of one row of $A$ to another row of $A$ does not change the determinant.

To verify this, let $A_1, \ldots, A_n$ be the row vectors of $A$, and let $B$ be the matrix obtained by adding $t$ times row $k$ of $A$ to row $i$, with $i \neq k$. Thus row $i$ of $B$ can be written $B_i = A_i + tA_k$, while the other rows of $B$ are the same as the corresponding rows of $A$. By Property (i), we have
$$\det B = \det A + t \det A',$$
where $A'$ is the matrix obtained from $A$ by replacing row $i$ with $A_k$. Notice that the first term on the right is just $\det A$, while the second term is $t$ times the determinant of a matrix with two equal rows (rows $i$ and $k$ of $A'$ are both equal to $A_k$), which is zero by Property (ii). Thus the right-hand side reduces to $\det A$.

5 The effect of elementary row operations

Properties (iv), (vi) and (vii) tell you how elementary row operations affect the determinant.

Theorem 4. Let $A$ be an $n \times n$ matrix.
(1) If $B$ is obtained from $A$ by multiplying a single row of $A$ by a non-zero constant $\lambda$, then $\det B = \lambda \det A$.
(2) If $B$ is obtained from $A$ by interchanging two rows, then $\det B = -\det A$.
(3) If $B$ is obtained from $A$ by adding a multiple of one row to another row, then $\det B = \det A$.

Notice that in every case, an elementary row operation changes the determinant by a non-zero multiple. It follows that a sequence of several elementary row operations changes the determinant by a non-zero multiple, so we obtain:

Corollary 5. If $B$ is row equivalent to $A$, then $\det B = c \det A$ for some non-zero scalar $c$.

In particular, the above corollary applies when $B$ is the reduced row echelon form of $A$. Let's examine two cases. If $A$ is invertible, then the reduced row echelon form of $A$ is an identity matrix, which has determinant 1 by Property (iii). Thus, in this case, the corollary gives $\det A = c \det I = c \neq 0$. On the other hand, if $A$ is singular, the reduced row echelon form $E$ of $A$ contains a row of zeroes, and Property (v) gives $\det A = c \det E = c \cdot 0 = 0$. Therefore you get:

Theorem 6. $A$ is invertible if and only if $\det A \neq 0$.

5.1 Calculating determinants using EROs

In principle, Theorems 4 and 6 give you a new way to calculate determinants using elementary row operations. You can reduce $A$ to reduced row echelon form, keeping track of the effect on the determinant using Theorem 4, and then plug in the value of the determinant of the reduced matrix using either Property (iii) (if the reduced matrix is an identity matrix) or Property (v) (if the reduced matrix has a row of zeroes). In practice, you can avoid the necessity of completely reducing the matrix by first working out the determinant of an upper triangular matrix. The result is the following theorem.

Theorem 7. The determinant of an upper triangular matrix is the product of its diagonal entries.

Proof. Let's first consider the case when one of the diagonal entries is zero. In this case, the assertion is that $\det A = 0$, so we want to show that any upper triangular matrix $A$ with a zero on the diagonal must have determinant zero. By Theorem 6, it's enough to show that $A$ is singular. For this, consider the homogeneous system $Ax = 0$. Let $a_{ii}$ be the first zero on the diagonal. Then the variable $x_i$ does not occur from the $i$th equation on. Set $x_i = 1$ and $x_k = 0$ for $k > i$, and determine $x_1, \ldots, x_{i-1}$ by back substitution in the first $i - 1$ equations. Then $x$ is a non-trivial solution to the homogeneous system $Ax = 0$, and since the system has a non-trivial solution, its coefficient matrix $A$ is singular.

It remains to consider the case when all diagonal entries are non-zero. In this case, by applying Property (i) to each row in succession, we obtain
$$\det A = a_{11}a_{22} \cdots a_{nn} \det B, \tag{2}$$
where the entries of $B$ are $b_{ij} = a_{ij}/a_{ii}$. In particular, $B$ has a zero in every position where $A$ has one, so $B$ is also upper triangular. Moreover, $B$ has all ones along the diagonal. It follows that $B$ can be row reduced to an identity matrix $I$ using only the operation of adding multiples of rows to other rows. Since this type of elementary row operation does not affect the determinant, it follows that $\det B = \det I = 1$, so (2) gives $\det A = a_{11}a_{22} \cdots a_{nn}$, as required.

The procedure for calculating determinants using elementary row operations is now the following.
(1) Perform elementary row operations to reduce $A$ to an upper triangular matrix, using Theorem 4 to keep track of the effect on the determinant.
(2) Calculate the determinant of the resulting upper triangular matrix by multiplying its diagonal entries.

This method is usually much more efficient than calculating from the definition of determinants. We refer to your textbook for examples illustrating the technique.
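
The two-step procedure above translates directly into code. Here is a hedged Python sketch (the function name is mine): it reduces the matrix to upper triangular form using only row swaps and row additions, tracks the sign changes as in Theorem 4, and then multiplies the diagonal entries.

```python
def det_by_row_reduction(A):
    """Determinant via elementary row operations: reduce to upper triangular form,
    tracking sign changes from row swaps, then multiply the diagonal (Theorem 7)."""
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    sign = 1
    for col in range(n):
        # find a pivot in this column, at or below the diagonal
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0                   # no pivot: the matrix is singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign               # a row interchange flips the sign
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            # adding a multiple of one row to another leaves the determinant unchanged
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    product = sign
    for i in range(n):
        product *= A[i][i]
    return product

print(det_by_row_reduction([[2, 1, 3],
                            [0, 4, 1],
                            [5, 2, 2]]))   # -43.0
```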

6 Determinants of products

The goal in this section is to establish the following theorem.

Theorem 8. Let $A$ and $B$ be $n \times n$ matrices. Then $\det(AB) = (\det A)(\det B)$.

We begin with some facts about elementary matrices. Recall that an elementary matrix is one obtained from an identity matrix by a single elementary row operation. Determinants of elementary matrices can be read off from Theorem 4 and the fact that the determinant of an identity matrix is 1 (Property (iii)).

Lemma 9.
(1) If $E$ is obtained by multiplying a row of an identity matrix by the scalar $\lambda$, then $\det E = \lambda$.
(2) If $E$ is obtained by exchanging two rows of an identity matrix, then $\det E = -1$.
(3) If $E$ is obtained by adding a multiple of a row of an identity matrix to another row, then $\det E = 1$.

In view of Lemma 9, we can reformulate Theorem 4 as follows:

Lemma 10. If $B$ is an $n \times n$ matrix, and $E$ is an elementary $n \times n$ matrix, then $\det(EB) = (\det E)(\det B)$.

Iterating this result gives:

Lemma 11. If $B$ is an $n \times n$ matrix and $E_1, \ldots, E_K$ are elementary $n \times n$ matrices, then
$$\det(E_1 \cdots E_K B) = (\det E_1) \cdots (\det E_K)(\det B).$$

In particular, when $B = I_n$, we obtain:

Corollary 12. If $E_1, \ldots, E_K$ are elementary $n \times n$ matrices, then
$$\det(E_1 \cdots E_K) = (\det E_1) \cdots (\det E_K).$$

Proof of Theorem 8. Consider first the case when the first factor $A$ is singular. In this case, the product $AB$ is also singular, so by Theorem 6, it follows that $\det(AB) = 0$ and $\det A = 0$. Therefore
$$\det(AB) = 0 = 0 \cdot \det B = (\det A)(\det B),$$
so the theorem holds in this case.

To complete the proof, we must establish the theorem when $A$ is invertible. In this case, we can write $A$ as a product of elementary matrices: $A = E_1 \cdots E_K$. By Lemma 11 and Corollary 12, we obtain
$$\det(AB) = \det(E_1 \cdots E_K B) = (\det E_1) \cdots (\det E_K)(\det B) = \det(E_1 \cdots E_K)(\det B) = (\det A)(\det B),$$
and the theorem is proved.

As a special case of Theorem 8, suppose $A$ is invertible, and take $B = A^{-1}$. We obtain
$$1 = \det I = \det(AA^{-1}) = (\det A)(\det A^{-1}).$$
Dividing by $\det A$ gives:

Corollary 13. If $A$ is invertible, then $\det(A^{-1}) = \dfrac{1}{\det A}$.
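
Theorem 8 and Corollary 13 are easy to confirm on a small example. The sketch below (helper names mine) works with 2 x 2 matrices, where the determinant and the inverse have explicit formulas; the matrices chosen are arbitrary.

```python
def det2(M):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Theorem 8: det(AB) = det(A) * det(B)
print(det2(matmul2(A, B)), det2(A) * det2(B))   # 4 4

# Corollary 13: det(A^{-1}) = 1 / det(A)
d = det2(A)
A_inv = [[ A[1][1] / d, -A[0][1] / d],
         [-A[1][0] / d,  A[0][0] / d]]
print(det2(A_inv), 1 / d)                       # -0.5 -0.5
```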

7 Transposition

In this section, we establish the following theorem.

Theorem 14. For any $n \times n$ matrix $A$, we have $\det(A^t) = \det A$.

We begin with some special cases. First, if $A$ is singular, then $A^t$ is also singular, so both $A$ and $A^t$ have determinant 0:

Lemma 15. If $A$ is singular, then $\det A^t = \det A = 0$.

For elementary matrices, we have:

Lemma 16. If $E$ is an elementary matrix, then $\det(E^t) = \det E$.

Proof. There are three types of elementary matrices; we'll take them one at a time. If $E$ is obtained from an identity matrix by adding a multiple of row $k$ to row $l$, then $\det E = 1$, by Theorem 4. But $E^t$ is obtained by adding a multiple of row $l$ of an identity matrix to row $k$, so $\det(E^t) = 1$, again by Theorem 4. Therefore $\det(E^t) = 1 = \det E$. If $E$ is obtained by multiplying a row of the identity matrix by a constant, then $E$ is diagonal, and hence symmetric, so $E^t = E$, and so $\det(E^t) = \det E$. Finally, if $E$ is obtained by switching rows $k$ and $l$ of an identity matrix, then the only non-zero off-diagonal entries of $E$ are ones in positions $(k, l)$ and $(l, k)$, and therefore $E$ is symmetric, so again $\det(E^t) = \det E$.

Proof of Theorem 14. In view of Lemma 15, we may assume that $A$ is invertible. Therefore, $A$ can be expressed as a product of elementary matrices: $A = E_1 E_2 \cdots E_K$. By Theorem 8 and Lemma 16, you get
$$\det(A^t) = \det\big((E_1 E_2 \cdots E_K)^t\big) = \det(E_K^t \cdots E_1^t) = \det(E_K^t) \cdots \det(E_1^t) = \det(E_K) \cdots \det(E_1) = \det(E_1) \cdots \det(E_K) = \det(E_1 \cdots E_K) = \det A.$$

7.1 Consequences of the Transposition Theorem

Theorem 14 allows us to immediately deduce transposed forms of results we have already discussed. For example, transposing Theorem 7 gives:

Theorem 17. The determinant of a lower triangular matrix is the product of its diagonal entries.

In addition, the properties of determinants that we have formulated for rows have analogues for columns. In particular, Theorem 4 has an exact analogue for elementary column operations, so you can use both row and column operations in determinant calculations.

8 Cofactor expansions

Let $A$ be an $n \times n$ matrix. The minor $M_{ij}$ is the $(n-1) \times (n-1)$ matrix obtained by deleting the $i$th row and $j$th column of $A$. The cofactors of $A$ are the scalars
$$\operatorname{cof}_{ij}(A) = (-1)^{i+j} \det(M_{ij}).$$
For example, the $3 \times 3$ matrix
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$
has nine minors and nine cofactors. A typical minor is
$$M_{12} = \begin{bmatrix} 4 & 6 \\ 7 & 9 \end{bmatrix}.$$
The corresponding cofactor is
$$\operatorname{cof}_{12}(A) = (-1)^{1+2} \det(M_{12}) = -(36 - 42) = 6.$$
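
As a small illustration of these definitions, the following Python sketch (helper names mine; indices are 0-based internally) extracts the minor M_12 of the example matrix and computes the corresponding cofactor.

```python
def minor(A, i, j):
    """The minor M_ij: delete row i and column j (0-indexed here)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# The text's M_12 (rows and columns 1-indexed) is minor(A, 0, 1) here.
M12 = minor(A, 0, 1)
print(M12)                            # [[4, 6], [7, 9]]
print((-1) ** (1 + 2) * det2(M12))    # cof_12(A) = -(36 - 42) = 6
```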

Theorem 18. Let $A$ be an $n \times n$ matrix.
(1) (Cofactor expansion along the $i$th row.) For any fixed $i$ we have
$$\det A = \sum_{j=1}^n a_{ij} \operatorname{cof}_{ij}(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij} \det(M_{ij}).$$
(2) (Cofactor expansion along the $j$th column.) For any fixed $j$ we have
$$\det A = \sum_{i=1}^n a_{ij} \operatorname{cof}_{ij}(A) = \sum_{i=1}^n (-1)^{i+j} a_{ij} \det(M_{ij}).$$

We will omit a detailed proof, giving instead a brief sketch. The diligent reader can supply the details herself. First, let $\delta(A)$ be the cofactor expansion of $A$ along the first column:
$$\delta(A) = \sum_{i=1}^n (-1)^{i+1} a_{i1} \det(M_{i1}).$$
Using the properties of determinants that have already been established, one checks that the function $\delta$ satisfies the primitive properties (i)-(iii) of determinants. However, we have already observed that these three properties uniquely determine the determinant function, so we must have $\delta(A) = \det A$. This establishes the validity of the cofactor expansion along the first column. Next, we use the fact that the determinant is invariant under transposition to establish the validity of the cofactor expansion along the first row. To establish the validity of the cofactor expansion along the $i$th row, shift the $i$th row up to the first (changing the determinant by the factor $(-1)^{i-1}$ in the process), and then apply a cofactor expansion along the first row. Finally, the validity of a cofactor expansion along the $j$th column can be established by transposition.
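
Cofactor expansion along the first row gives a natural recursive algorithm. Here is a minimal Python sketch (function name mine); like the permutation-sum definition, it takes on the order of n! operations, so it is for illustration only.

```python
def det_by_cofactors(A):
    """Determinant by cofactor expansion along the first row (Theorem 18)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 0 and column j (0-indexed)
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        # (-1) ** j is the sign (-1) ** (1 + (j + 1)) written with 1-based indices
        total += (-1) ** j * A[0][j] * det_by_cofactors(M)
    return total

print(det_by_cofactors([[1, 2, 3],
                        [4, 5, 6],
                        [7, 8, 10]]))   # -3
```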

9 The adjoint formula and Cramer's Rule

9.1 Adjoints

For any $n \times n$ matrix $A$, the cofactors $\operatorname{cof}_{ij}(A)$ themselves form an $n \times n$ matrix called the cofactor matrix of $A$. The transpose of the cofactor matrix is the adjoint matrix of $A$, denoted $\operatorname{adj}(A)$.

Let's calculate the product matrix $A \operatorname{adj}(A)$. The entry in position $ij$ is
$$\sum_{k=1}^n a_{ik} \operatorname{cof}_{jk}(A). \tag{3}$$
We will consider separately the cases $i = j$ and $i \neq j$. When $i = j$, the formula (3) becomes
$$\sum_{k=1}^n a_{ik} \operatorname{cof}_{ik}(A),$$
which is just a cofactor expansion along the $i$th row for $\det A$. Thus we have established that the diagonal entries of the product $A \operatorname{adj}(A)$ are all equal to $\det A$.

It remains to calculate the off-diagonal entries. When $i \neq j$, the formula (3) may be viewed as a cofactor expansion for the matrix $A'$ obtained by replacing row $j$ of $A$ by row $i$. Thus we have, for the $ij$ entry of $A \operatorname{adj}(A)$,
$$\sum_{k=1}^n a_{ik} \operatorname{cof}_{jk}(A) = \det A'.$$
But rows $i$ and $j$ of $A'$ are equal, so by Property (ii) of determinants, $\det A' = 0$. Thus, we have shown that the diagonal entries of $A \operatorname{adj}(A)$ are all equal to $\det A$, and the off-diagonal entries are all 0. In other words, we have proved:

Theorem 19 (Adjoint Formula). For any $n \times n$ matrix $A$ we have
$$A \operatorname{adj}(A) = (\det A) I_n.$$

If $\det A \neq 0$, dividing through by $\det A$ gives a formula for the inverse of $A$.

Corollary 20. If $\det A \neq 0$ then
$$A^{-1} = \frac{1}{\det A} \operatorname{adj}(A).$$

We will finish our discussion of determinants by applying the above corollary to the solution of an $n \times n$ system of equations. Consider the system $Ax = b$, where the coefficient matrix $A$ is invertible. Then the unique solution is
$$x = A^{-1}b = \frac{\operatorname{adj}(A)\, b}{\det A}.$$
In particular, this gives
$$x_i = \frac{\sum_{j=1}^n \operatorname{cof}_{ji}(A)\, b_j}{\det A}.$$
But the sum in the numerator on the right may be viewed as a cofactor expansion along the $i$th column of the matrix $A_i$ obtained by replacing the $i$th column of $A$ by $b$. Therefore, we have established:

Theorem 21 (Cramer's Rule). Consider the $n \times n$ system $Ax = b$, with invertible coefficient matrix $A$. The unique solution is given by
$$x_i = \frac{\det A_i}{\det A},$$
where $A_i$ is the matrix obtained by replacing the $i$th column of $A$ with $b$.

Let's write out Cramer's Rule explicitly for the $2 \times 2$ system
$$a_{11}x_1 + a_{12}x_2 = b_1$$
$$a_{21}x_1 + a_{22}x_2 = b_2.$$
We have
$$x_1 = \frac{\det \begin{bmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{bmatrix}}{\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}}, \qquad x_2 = \frac{\det \begin{bmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{bmatrix}}{\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}}.$$
You should check that this is consistent with the formulas obtained in Section 1 for solutions to $2 \times 2$ systems. You should view Cramer's Rule as a generalization of those formulas to $n \times n$ systems.

Finally, it should be noted that, although Cramer's Rule is of theoretical interest, it is rarely an efficient method for solving linear systems. Except for very small systems, Gaussian elimination is usually much more computationally efficient.
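
To close, here is a hedged Python sketch of Cramer's Rule (the helper names det_rec and cramer are mine). It uses a recursive cofactor-expansion determinant, so it is only practical for small systems, exactly as the closing remark above warns; the test case is the same 2 x 2 system used in the sketch after Section 1.

```python
def det_rec(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_rec([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule: x_i = det(A_i) / det(A), where A_i is A
    with column i replaced by b. Assumes det(A) is non-zero."""
    d = det_rec(A)
    n = len(A)
    solution = []
    for i in range(n):
        # replace column i of A with the right-hand side b
        Ai = [row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]
        solution.append(det_rec(Ai) / d)
    return solution

# x1 + 2*x2 = 5 and 3*x1 + 4*x2 = 6
print(cramer([[1, 2], [3, 4]], [5, 6]))   # [-4.0, 4.5]
```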
