Determinants. Dr. Doreen De Leon Math 152, Fall 2015


1 Determinant of a Matrix

Elementary Matrices

We will first discuss matrices that can be used to produce an elementary row operation on a given matrix A.

Definition. An elementary matrix, typically denoted E, is a matrix that is obtained by performing a single elementary row operation on the identity matrix. As such, there are three types of elementary matrices.

1. The first type is formed by exchanging row i of I with row j of I, which the author denotes as E_{i,j}.

2. The second type is formed by multiplying row i of I by a nonzero scalar α, which the author denotes E_i(α).

3. The third type is formed by adding a nonzero multiple α of row i to row j, which the author denotes E_{i,j}(α).

Example: For each of the following, determine if it is an elementary matrix.

    (a) [ 1 0 1 ]    (b) [ 0 1 0 ]    (c) [ 3 0 0 ]
        [ 0 1 0 ]        [ 0 0 1 ]        [ 0 1 0 ]
        [ 0 0 1 ]        [ 1 0 0 ]        [ 0 0 1 ]

    (d) [ 3 0 2 ]    (e) [  1 0 0 ]
        [ 0 1 0 ]        [ -2 1 0 ]
        [ 0 0 1 ]        [  0 0 1 ]

Solution:

(a) Yes. Row 1 is formed by adding row 3 of the identity matrix to row 1.
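The three constructions above are easy to sketch in code. This is my own illustration (the function names `identity`, `E_swap`, `E_scale`, and `E_add` are not from the notes): each builds an elementary matrix by performing exactly one row operation on the identity.

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def E_swap(n, i, j):
    """E_{i,j}: exchange rows i and j of I_n (rows are 0-indexed here)."""
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

def E_scale(n, i, alpha):
    """E_i(alpha): multiply row i of I_n by a nonzero scalar alpha."""
    E = identity(n)
    E[i] = [alpha * x for x in E[i]]
    return E

def E_add(n, i, j, alpha):
    """E_{i,j}(alpha): add alpha times row i of I_n to row j."""
    E = identity(n)
    E[j] = [a + alpha * b for a, b in zip(E[j], E[i])]
    return E

# Example (a) from the text: add row 3 of I to row 1 (0-indexed: row 2 to row 0).
print(E_add(3, 2, 0, 1))   # [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
```

Matrix (b) in the example cannot be produced by any single one of these calls, which is exactly why it is not elementary.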

(b) No. Rows 1 and 3 of I are exchanged, and then the resulting rows 1 and 2 are exchanged.

(c) Yes. Row 1 of the identity matrix is multiplied by 3.

(d) No. Row 1 is multiplied by 3 and then twice row 3 of the identity matrix is added to row 1.

(e) Yes. Row 2 is formed by adding -2 times row 1 of the identity matrix to row 2.

We have the following theorem. Note that this is a summary of Theorem EMDRO in the textbook.

Theorem 1. If an elementary row operation is performed on an m x n matrix A, the resulting matrix can be written as EA, where the m x m elementary matrix E is created by performing the same row operation on I_m.

Elementary matrices have another useful property.

Theorem 2. If E is an elementary matrix, then E is nonsingular.

Proof. The idea is that we can row reduce E to the identity matrix by reversing the row operation that formed E. If E = E_{i,j}, then exchanging rows i and j again will give the identity matrix. If E = E_i(α), multiply row i of E by 1/α to obtain the identity matrix. Finally, if E = E_{i,j}(α), perform the row operation that multiplies row i by -α and adds it to row j. Therefore, each elementary matrix is row equivalent to the identity matrix and is thus nonsingular.

In fact, we have the following useful theorem.

Theorem 3. Suppose that A is a nonsingular matrix. Then there exist elementary matrices E_1, E_2, ..., E_t so that A = E_t E_{t-1} ... E_2 E_1.

Proof. Since A is nonsingular, it is row equivalent to the identity matrix. Therefore, there is a sequence of t row operations that converts I to A. For each of these row operations, form the associated elementary matrix and denote these matrices by E_1, E_2, ..., E_t. Applying the first row operation to I gives the matrix E_1 I. The second row operation gives E_2(E_1 I) = E_2 E_1 I, and so on. The result of the full sequence of t operations will yield A, so we have

    A = E_t E_{t-1} ... E_2 E_1 I = E_t E_{t-1} ... E_2 E_1.
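Theorem 1 can be spot-checked numerically. A minimal sketch (my own helper `matmul` and an example matrix, not from the notes): multiplying A on the left by an elementary matrix E performs the corresponding row operation on A directly.

```python
def matmul(A, B):
    # plain triple-loop matrix product, written as a comprehension
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 3, 2],
     [0, 4, 5],
     [2, 4, 0]]

# E for "add -2 times row 1 to row 2" (0-indexed rows 0 and 1), i.e. E_{1,2}(-2)
# obtained by performing that operation on I_3:
E = [[1, 0, 0],
     [-2, 1, 0],
     [0, 0, 1]]

B = matmul(E, A)
# Performing the row operation directly on A gives the same matrix.
direct = [A[0], [a - 2 * b for a, b in zip(A[1], A[0])], A[2]]
print(B == direct)   # True
```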

Definition of the Determinant

We need a few definitions first.

Definition. Suppose that A is an m x n matrix. Then the submatrix A_{ij} (denoted A(i|j) in the textbook) is the (m-1) x (n-1) matrix obtained from A by removing row i and column j.

Examples: Given

    A = [ 1 3 2 ]
        [ 0 4 5 ]
        [ 2 4 0 ],

    A_{12} = [ 0 5 ]       A_{23} = [ 1 3 ]
             [ 2 0 ],               [ 2 4 ].

Exercise: Given

    A = [ 1 2 4 5 ]
        [ 7 9 3 0 ]
        [ 1 1 2 1 ]
        [ 0 5 0 6 ],

find A_{12} and A_{34}.

Solution:

    A_{12} = [ 7 3 0 ]       A_{34} = [ 1 2 4 ]
             [ 1 2 1 ]                [ 7 9 3 ]
             [ 0 0 6 ],               [ 0 5 0 ].

Definition. Suppose A is an n x n matrix. Then its determinant, det(A) = |A|, is an element of C defined recursively by

1. If n = 1, then det(A) = a_{11}.

2. If n >= 2, then

    det(A) = a_{11} det(A_{11}) - a_{12} det(A_{12}) + ... + (-1)^{1+n} a_{1n} det(A_{1n})
           = sum_{j=1}^{n} (-1)^{1+j} a_{1j} det(A_{1j}).

Example: Find det(A) for

    A = [ 1  2 5 ]
        [ 0 -3 7 ]
        [ 1  0 1 ].

Solution: Using the formula given in the definition, we have

    det(A) = (-1)^{1+1}(1) det [ -3 7 ]  + (-1)^{1+2}(2) det [ 0 7 ]  + (-1)^{1+3}(5) det [ 0 -3 ]
                               [  0 1 ]                      [ 1 1 ]                      [ 1  0 ]
           = 1(-3 - 0) - 2(0 - 7) + 5(0 - (-3))
           = -3 + 14 + 15
           = 26.

Note that this definition also leads to the standard formula for the determinant of a 2 x 2 matrix.

Theorem 4. Let A = [ a b ]
                   [ c d ]. Then det(A) = ad - bc.

Definition. Given an n x n matrix A, the (i, j) cofactor of A, denoted C_{ij}, is given by

    C_{ij} = (-1)^{i+j} det(A_{ij}).

Then, using this, we can define

    det(A) = a_{11} C_{11} + a_{12} C_{12} + ... + a_{1n} C_{1n}.    (1)

Equation (1) is a cofactor expansion across the first row.

Computing Determinants

There are a number of ways to compute the determinant.

Theorem 5. The determinant of an n x n matrix A can be computed by a cofactor expansion across any row i,

    det(A) = (-1)^{i+1} a_{i1} det(A_{i1}) + (-1)^{i+2} a_{i2} det(A_{i2}) + ... + (-1)^{i+n} a_{in} det(A_{in}),

called a cofactor expansion along row i, or down any column j,

    det(A) = (-1)^{1+j} a_{1j} det(A_{1j}) + (-1)^{2+j} a_{2j} det(A_{2j}) + ... + (-1)^{n+j} a_{nj} det(A_{nj}),

called a cofactor expansion along column j.

Example: Compute det(A), where

    A = [ 1  2  5 -2 ]
        [ 0  0  3  0 ]
        [ 2 -6 -7  5 ]
        [ 5  0  4  4 ],

using a cofactor expansion.

Solution: We choose to do a cofactor expansion along row 2, since it has only one nonzero entry:

    det(A) = (-1)^{2+1} a_{21} det(A_{21}) + (-1)^{2+2} a_{22} det(A_{22}) + (-1)^{2+3} a_{23} det(A_{23}) + (-1)^{2+4} a_{24} det(A_{24})

           = 0 det(A_{21}) + 0 det(A_{22}) + (-1)^{2+3} (3) det [ 1  2 -2 ]  + 0 det(A_{24})
                                                                [ 2 -6  5 ]
                                                                [ 5  0  4 ]

           = -3 det [ 1  2 -2 ]
                    [ 2 -6  5 ]
                    [ 5  0  4 ].

We will use a cofactor expansion along row 3 to compute this determinant:

    det(A) = -3 ( (-1)^{3+1} (5) det [  2 -2 ]  + 0 + (-1)^{3+3} (4) det [ 1  2 ]  )
                                     [ -6  5 ]                           [ 2 -6 ]
           = -3[ 5(10 - 12) + 4(-6 - 4) ]
           = -3[ -10 - 40 ]
           = 150.

Another property of the determinant follows.

Theorem 6. Suppose that A is a square matrix. Then det(A^t) = det(A).

Example: Let

    A = [ 1 3 4 ]
        [ 0 2 0 ]
        [ 1 0 2 ].

Then, expanding along row 2,

    det(A) = 0 C_{21} + (-1)^{2+2} (2) det(A_{22}) + 0 C_{23} = 2 det [ 1 4 ]  = 2(1(2) - 4(1)) = -4.
                                                                      [ 1 2 ]

And, since

    A^t = [ 1 0 1 ]
          [ 3 2 0 ],
          [ 4 0 2 ]

expanding down column 2 of A^t gives

    det(A^t) = (-1)^{2+2} (2) det [ 1 1 ]  = 2(2 - 4) = -4 = det(A).
                                  [ 4 2 ]
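The recursive definition, Theorem 5, and Theorem 6 can all be spot-checked with a few lines of code. This is my own sketch (the names `det` and `expand_along_row` are not from the notes): expansion along every row gives the same value, and transposing leaves the determinant unchanged.

```python
def det(A):
    """Determinant by cofactor expansion across the first row (the definition)."""
    if len(A) == 1:
        return A[0][0]
    # 0-indexed j here has the same parity as the 1-indexed sign (-1)^{1+j}
    return sum((-1) ** j * A[0][j] *
               det([r[:j] + r[j+1:] for r in A[1:]]) for j in range(len(A)))

def expand_along_row(A, i):
    """Cofactor expansion along row i (0-indexed); Theorem 5 says this
    agrees with det(A) for every i."""
    minor = lambda j: [r[:j] + r[j+1:] for k, r in enumerate(A) if k != i]
    return sum((-1) ** (i + j) * A[i][j] * det(minor(j)) for j in range(len(A)))

A = [[1, 3, 4],
     [0, 2, 0],
     [1, 0, 2]]
At = [list(col) for col in zip(*A)]   # transpose

print(det(A))                                      # -4
print([expand_along_row(A, i) for i in range(3)])  # [-4, -4, -4]
print(det(At))                                     # -4, matching Theorem 6
```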

Example: Let

    A = [ 1 2 3 4 5 6 ]
        [ 0 1 8 3 4 7 ]
        [ 0 0 6 5 4 3 ]
        [ 0 0 0 2 1 5 ]
        [ 0 0 0 0 7 2 ]
        [ 0 0 0 0 0 5 ].

Find det(A).

Solution: Repeatedly expanding down the first column,

    det(A) = (-1)^{1+1} (1) det [ 1 8 3 4 7 ]
                                [ 0 6 5 4 3 ]
                                [ 0 0 2 1 5 ]
                                [ 0 0 0 7 2 ]
                                [ 0 0 0 0 5 ]

           = (1)(1) det [ 6 5 4 3 ]
                        [ 0 2 1 5 ]
                        [ 0 0 7 2 ]
                        [ 0 0 0 5 ]

           = (1)(1)(6) det [ 2 1 5 ]
                           [ 0 7 2 ]
                           [ 0 0 5 ]

           = (1)(1)(6)(2) det [ 7 2 ]
                              [ 0 5 ]

           = (1)(1)(6)(2)(7(5) - 2(0))
           = (1)(1)(6)(2)(7)(5) = 420.

Notice that this is equivalent to multiplying the numbers on the diagonal of A. We can generalize this to the theorem following this requisite definition.

Definition.

1. An upper triangular matrix is a square matrix whose entries below the main diagonal are zero.

2. A lower triangular matrix is a square matrix whose entries above the main diagonal are zero.

3. A diagonal matrix is a square matrix whose entries above and below the main diagonal are zero.

Theorem 7. If A is a triangular matrix, then det(A) is the product of the entries on the main diagonal.
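For a triangular matrix, Theorem 7 reduces the whole computation to one line. A sketch (my own code, applied to the example above):

```python
import math

A = [[1, 2, 3, 4, 5, 6],
     [0, 1, 8, 3, 4, 7],
     [0, 0, 6, 5, 4, 3],
     [0, 0, 0, 2, 1, 5],
     [0, 0, 0, 0, 7, 2],
     [0, 0, 0, 0, 0, 5]]

# det of a triangular matrix = product of the main-diagonal entries (Theorem 7)
det_A = math.prod(A[i][i] for i in range(len(A)))
print(det_A)   # 420
```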

Example: Evaluate

    det [ 1 2 3 4 5 6 ]
        [ 0 1 8 3 4 7 ]
        [ 0 0 6 5 4 3 ]
        [ 0 0 0 2 1 5 ]
        [ 0 0 0 0 7 2 ]
        [ 0 0 0 0 0 5 ].

Solution: The matrix is upper triangular, so by Theorem 7,

    det(A) = (1)(1)(6)(2)(7)(5) = 420.

2 Properties of Determinants of Matrices

Theorem 8. Suppose that A is a square matrix with a row with every entry a zero or a column with every entry a zero. Then det(A) = 0.

Proof. Suppose that A is an n x n matrix and that every entry in row i is 0. Then, we can find det(A) by doing a cofactor expansion along row i, giving

    det(A) = sum_{j=1}^{n} (-1)^{i+j} a_{ij} det(A_{ij})
           = sum_{j=1}^{n} (-1)^{i+j} (0) det(A_{ij})
           = sum_{j=1}^{n} 0 = 0.

The proof for the case of a column consisting entirely of zeros is similar.

Theorem 9 (Row Operations and the Determinant). Let A be a square matrix.

(a) If a multiple of one row of A is added to another row to produce a matrix B, then det(B) = det(A).

(b) If two rows of A are interchanged to produce B, then det(B) = -det(A).

(c) If one row of A is multiplied by a nonzero scalar α to produce B, then det(B) = α det(A).
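The three cases of Theorem 9 can be checked numerically before reading the examples. A sketch (my own helper `det3`, hard-coding the 3 x 3 cofactor formula): a row replacement keeps the determinant, a swap flips its sign, and scaling a row scales the determinant by the same factor.

```python
def det3(M):
    # 3 x 3 determinant via cofactor expansion across the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[1, 2, 4],
     [2, 0, 7],
     [0, 0, 5]]

replace = [A[0], [x - 2*y for x, y in zip(A[1], A[0])], A[2]]   # r2 -> r2 - 2 r1
swap    = [A[0], A[2], A[1]]                                    # r2 <-> r3
scale   = [[2*x for x in A[0]], A[1], A[2]]                     # r1 -> 2 r1

print(det3(A), det3(replace), det3(swap), det3(scale))   # -20 -20 20 -40
```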

Example: Let

    A = [ 1 2 4 ]
        [ 2 0 7 ]
        [ 0 0 5 ].

We form matrix B by

    [ 1 2 4 ]                   [ 1  2  4 ]
    [ 2 0 7 ]  r2 -> r2 - 2r1   [ 0 -4 -1 ]  = B.
    [ 0 0 5 ]                   [ 0  0  5 ]

Then,

    det(A) = 0 C_{31} + 0 C_{32} + 5 C_{33} = 5 (-1)^{3+3} det [ 1 2 ]  = 5(-4) = -20,
                                                               [ 2 0 ]

and det(B) = (1)(-4)(5) = -20. So, we see that det(B) = -20 = det(A).

Example: Let

    A = [ 1 2 1 ]
        [ 0 3 5 ]
        [ 0 0 6 ].

We form matrix B as follows:

    [ 1 2 1 ]              [ 2 4 2 ]
    [ 0 3 5 ]  r1 -> 2r1   [ 0 3 5 ]  = B.
    [ 0 0 6 ]              [ 0 0 6 ]

Then, det(A) = (1)(3)(6) = 18 and det(B) = (2)(3)(6) = 36. So, we see that det(B) = 2 det(A).

Example: Let

    A = [ 1 2 3 ]
        [ 0 1 1 ]
        [ 0 0 5 ].

We form matrix B by

    [ 1 2 3 ]              [ 1 2 3 ]
    [ 0 1 1 ]  r2 <-> r3   [ 0 0 5 ]  = B.
    [ 0 0 5 ]              [ 0 1 1 ]

Then,

    det(A) = (1)(1)(5) = 5,

and

    det(B) = 0 C_{21} + 0 C_{22} + 5 C_{23} = 5 (-1)^{2+3} det [ 1 2 ]  = -5.
                                                               [ 0 1 ]

So, we see that det(B) = -det(A).

Note that as a consequence of the above properties, we can show the following.

Theorem 10. Suppose that A is a square matrix with two equal rows or two equal columns. Then det(A) = 0.

Proof. Suppose A is an n x n matrix such that rows i and j are equal. Let B be the matrix formed by subtracting row i from row j. Then row j of B consists entirely of zeros, so det(A) = det(B) = 0.

Example: Compute det(A) by row reducing to echelon form, where

    A = [  1  3  0 2 ]
        [ -2 -5  8 3 ]
        [  3  5 -5 5 ]
        [  1  0 -6 3 ].

Solution:

    det(A) = det [  1  3  0 2 ]    (r2 -> r2 + 2r1, r3 -> r3 - 3r1, r4 -> r4 - r1)
                 [ -2 -5  8 3 ]
                 [  3  5 -5 5 ]
                 [  1  0 -6 3 ]

           = det [ 1  3  0  2 ]    (r3 -> r3 + 4r2, r4 -> r4 + 3r2)
                 [ 0  1  8  7 ]
                 [ 0 -4 -5 -1 ]
                 [ 0 -3 -6  1 ]

           = det [ 1 3  0  2 ]     (factor 27 out of row 3: r3 -> (1/27) r3)
                 [ 0 1  8  7 ]
                 [ 0 0 27 27 ]
                 [ 0 0 18 22 ]

           = 27 det [ 1 3  0  2 ]  (r4 -> r4 - 18 r3)
                    [ 0 1  8  7 ]
                    [ 0 0  1  1 ]
                    [ 0 0 18 22 ]

           = 27 det [ 1 3 0 2 ]
                    [ 0 1 8 7 ]
                    [ 0 0 1 1 ]
                    [ 0 0 0 4 ]

           = 27(1)(1)(1)(4) = 108.

Determinants, Row Operations, and Elementary Matrices

First, we will prove a few theorems about elementary matrices, which we will use to prove an important theorem in a bit.

Theorem 11. For every n, det(I_n) = 1.

Proof. We can see that I_n is a triangular matrix. Therefore, det(I_n) is equal to the product of the entries on the diagonal, or det(I_n) = 1 · 1 ··· 1 (n times) = 1.
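The echelon-form strategy from the example above is how determinants are computed in practice. A sketch of it as code (my own function name; exact `Fraction` arithmetic avoids round-off): track a sign flip for each swap, leave replacements alone, and finish with the product of the diagonal, per Theorems 7 and 9.

```python
from fractions import Fraction

def det_by_elimination(A):
    M = [[Fraction(x) for x in row] for row in A]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)            # no pivot in this column: det = 0
        if p != c:
            M[c], M[p] = M[p], M[c]       # a swap flips the sign (Theorem 9b)
            d = -d
        for r in range(c + 1, n):         # replacements keep det (Theorem 9a)
            m = M[r][c] / M[c][c]
            M[r] = [x - m * y for x, y in zip(M[r], M[c])]
    for c in range(n):                    # det of the triangular result (Thm 7)
        d *= M[c][c]
    return d

A = [[1, 3, 0, 2],
     [-2, -5, 8, 3],
     [3, 5, -5, 5],
     [1, 0, -6, 3]]
print(det_by_elimination(A))   # 108
```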

Theorem 12 (Determinants of Elementary Matrices). For the three possible versions of an elementary matrix, we have the determinants

(1) det(E_{i,j}) = -1,

(2) det(E_i(α)) = α,

(3) det(E_{i,j}(α)) = 1.

This theorem is proved by using the fact that each elementary matrix is obtained by performing a single elementary row operation on the identity matrix, and then applying the theorem on elementary row operations and the determinant (Theorem 9).

Theorem 13. If E is an elementary matrix, then det(EA) = det(E) det(A).

The proof of this theorem uses the theorem on determinants of elementary matrices and the fact that if we let B = EA, then B is the matrix formed by performing the row operation from E on A.

A Quick Note on Column Operations (not in text)

We can perform column operations on a matrix in the same way we perform row operations. Column operations have the same effect on determinants as row operations. NOTE: Do NOT perform column operations when solving systems of equations.

Determinants, Nonsingular Matrices, Matrix Multiplication

Theorem 14. Let A be a square matrix. Then A is singular if and only if det(A) = 0.

This is proved in the text, so I will prove the following equivalent theorem.

Theorem 15. A square matrix A is nonsingular if and only if det(A) ≠ 0.

Proof. A can be reduced to reduced row echelon form U with a finite number of row operations, so

    U = E_k E_{k-1} ··· E_1 A,

where each E_i represents an elementary matrix.

Then, det(u) = det(e k E k E A) = det(e k ) det(e k ) det(e ) det(a). Since det(e i ) 0 for all i, det(a) = 0 if and only if det(u) = 0. If A is nonsingular, then U = I, so det(u) = = det(a) 0. If det(a) = 0, then det(u) = 0 = U contains a row consisting entirely of zeros (since det U = u u 22 u nn ). Therefore, A is singular. Theorem 6 (Nonsingular Matrix Equivalences, Round 5). Suppose that A is a square matrix. Then the following are equivalent. () A is nonsingular. (2) A row reduces to the identity matrix. (3) The null space of A contains only the zero vector (i.e., N (A) = {0}). (4) The linear system LS(A, b) has a unique solution for every b. (5) The columns of A are linearly independent. (6) A is invertible. (7) The column space of A is C n (Col(A) = C n ). (8) The determinant of A is nonzero, i.e., det(a) 0. Finally, we have the following property of determinants. Theorem 7. If A and B are n n matrices, then det(ab) = det(a) det(b). Proof. If A or B is singular, then so is AB. So, det(ab) = 0 = det(a) det(b). If A is nonsingular, then Therefore, we have, A = E E 2 E t. det(ab) = det(e E 2 E t B) = det(e ) det(e 2 ) det(e t ) det(b) = det(e E 2 E t ) det(b) = det(a) det(b).

Corollary. Let A be an n x n matrix. Then det(A^k) = [det(A)]^k for k a nonnegative integer.

Example: Let

    A = [ 2 3 7 ]
        [ 0 9 5 ]
        [ 0 0 2 ].

Find det(A^3).

Solution: Since det(A^3) = [det(A)]^3 and det(A) = 2(9)(2) = 36,

    det(A^3) = [36]^3 = 46656.

Example: Show that if A is invertible, then

    det(A^{-1}) = 1/det(A).

Solution: Since A is invertible,

    AA^{-1} = I  =>  det(AA^{-1}) = det(I)  =>  (det(A))(det(A^{-1})) = 1  =>  det(A^{-1}) = 1/det(A).

3 Cramer's Rule

We first need the following notation. For an n x n matrix A and any b in R^n, let A_i(b) be the matrix obtained from A by replacing column i of A by the vector b; so,

    A_i(b) = [ a_1 ··· a_{i-1} b a_{i+1} ··· a_n ].

Theorem 18 (Cramer's Rule). Let A be an invertible n x n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by

    x_i = det(A_i(b))/det(A),  for i = 1, 2, ..., n.

Proof. Let A = [ a_1 a_2 ··· a_n ] and I = [ e_1 e_2 ··· e_n ]. If Ax = b, then

    A I_i(x) = A [ e_1 ··· e_{i-1} x e_{i+1} ··· e_n ]
             = [ Ae_1 ··· Ae_{i-1} Ax Ae_{i+1} ··· Ae_n ]
             = [ a_1 ··· a_{i-1} b a_{i+1} ··· a_n ]
             = A_i(b).

Then, (det(a))(det(i i (x)) = det(a i (b), and x i det(a) = det(a i (b)) = x i = det(a i(b)). det(a) Note that det(a) 0 since A is invertible. Example: Use Cramer s rule to solve 2x + x 2 = 7 3x + x 3 = 8 x 2 + 2x 3 = 3. Solution: For this problem, 2 0 A = 3 0 and det(a) = 4. 0 2 Applying Cramer s rule, we have 7 0 8 0 3 2 x = 4 = 2 4 = 3. 2 7 0 3 8 0 3 2 x 2 = 4 = 4 4 =. 2 7 3 0 8 0 3 x 3 = 4 = 4 4 =. 3 So, x =. 3

Use Cramer's Rule for Engineering Applications

Systems of first order differential equations solved using Laplace transforms can lead to systems of equations like

    6s x_1 + 4x_2 = 5
    9x_1 + 2s x_2 = -2.

We need to know (a) for what values of s the solution is unique, and (b) what the solution is for these values of s. First, we know from Cramer's rule that if the coefficient matrix A is invertible, the solution is unique. Since

    A = [ 6s  4 ]
        [  9 2s ],

we have

    det(A) = 12s^2 - 36 = 12(s^2 - 3) = 12(s + sqrt(3))(s - sqrt(3)).

Therefore, the system has a unique solution if s ≠ ±sqrt(3). For such an s, we have

    x_1 = det [  5  4 ]  / det(A) = (10s + 8)/(12(s^2 - 3)),
              [ -2 2s ]

    x_2 = det [ 6s  5 ]  / det(A) = (-12s - 45)/(12(s^2 - 3)).
              [  9 -2 ]

A Formula for A^{-1}

Cramer's rule leads to a general formula for the inverse of an n x n matrix as follows. The j-th column of A^{-1} is a vector x that satisfies

    Ax = e_j.

The i-th entry of x is the (i, j)-th entry of A^{-1}.

By Cramer's rule, the

    (i, j)-th entry of A^{-1} = x_i = det(A_i(e_j))/det(A).

We can show that

    det(A_i(e_j)) = (-1)^{i+j} det(A_{ji}) = C_{ji},

where C is the cofactor matrix (the matrix of cofactors of A), and A_{ji} is the (n-1) x (n-1) matrix formed by deleting row j and column i of A. So,

    A^{-1} = (1/det(A)) [ C_{11} C_{21} ··· C_{n1} ]
                        [ C_{12} C_{22} ··· C_{n2} ]
                        [  ...    ...        ...   ]
                        [ C_{1n} C_{2n} ··· C_{nn} ]

           = (1/det(A)) C^t.

The matrix C^t is the adjugate of A, adj(A). This leads us to the following theorem.

Theorem 19. Let A be an invertible n x n matrix. Then

    A^{-1} = (1/det(A)) adj(A).

Example: Find A^{-1} if

    A = [ 3 0 0 ]
        [ 1 1 0 ]
        [ 2 3 2 ].

Solution: First, note that det(A) = 3(1)(2) = 6. Then,

    C_{11} = (-1)^{1+1} det [ 1 0 ]  = 2        C_{12} = (-1)^{1+2} det [ 1 0 ]  = -2
                            [ 3 2 ]                                     [ 2 2 ]

    C_{13} = (-1)^{1+3} det [ 1 1 ]  = 3 - 2 = 1
                            [ 2 3 ]

    C_{21} = (-1)^{2+1} det [ 0 0 ]  = 0        C_{22} = (-1)^{2+2} det [ 3 0 ]  = 6
                            [ 3 2 ]                                     [ 2 2 ]

    C_{23} = (-1)^{2+3} det [ 3 0 ]  = -9
                            [ 2 3 ]

    C_{31} = (-1)^{3+1} det [ 0 0 ]  = 0        C_{32} = (-1)^{3+2} det [ 3 0 ]  = 0
                            [ 1 0 ]                                     [ 1 0 ]

    C_{33} = (-1)^{3+3} det [ 3 0 ]  = 3
                            [ 1 1 ]

So,

    A^{-1} = (1/6) [  2  0 0 ]   [  1/3    0    0  ]
                   [ -2  6 0 ] = [ -1/3    1    0  ]
                   [  1 -9 3 ]   [  1/6  -3/2  1/2 ].
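The adjugate formula of Theorem 19 can be sketched as code and checked against the example above (the names `det` and `inverse_by_adjugate` are mine; `Fraction` keeps the entries exact):

```python
from fractions import Fraction

def det(A):
    # determinant by cofactor expansion across the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([r[:j] + r[j+1:] for r in A[1:]]) for j in range(len(A)))

def inverse_by_adjugate(A):
    n, d = len(A), det(A)
    # cofactor matrix C: cof[i][j] = (-1)^{i+j} det(A_{ij})
    cof = [[(-1) ** (i + j) *
            det([r[:j] + r[j+1:] for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    # A^{-1} = (1/det A) C^t: transpose the cofactors and divide by det(A)
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[3, 0, 0], [1, 1, 0], [2, 3, 2]]
Ainv = inverse_by_adjugate(A)
print(Ainv[0])   # [Fraction(1, 3), Fraction(0, 1), Fraction(0, 1)]
```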