4 Determinant. Properties


Let me start with a system of two linear equations:
$$a_{11}x_1 + a_{12}x_2 = b_1,$$
$$a_{21}x_1 + a_{22}x_2 = b_2.$$
I multiply the first equation by $a_{22}$, the second by $a_{12}$, and subtract the second one from the first one. I get
$$x_1(a_{11}a_{22} - a_{21}a_{12}) = b_1 a_{22} - b_2 a_{12}.$$
Now I multiply the second equation by $a_{11}$, the first by $a_{21}$, and subtract the first one from the second one:
$$x_2(a_{11}a_{22} - a_{21}a_{12}) = a_{11}b_2 - a_{21}b_1.$$
Assuming that $a_{11}a_{22} - a_{21}a_{12} \neq 0$, I get the solution
$$x_1 = \frac{b_1 a_{22} - b_2 a_{12}}{a_{11}a_{22} - a_{21}a_{12}}, \qquad x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{11}a_{22} - a_{21}a_{12}}.$$

The expressions in the numerator and the denominator have a certain symmetry, and I am tempted to introduce a definition for such objects. Namely, I define the determinant of a $2 \times 2$ matrix $A = [a_{ij}]$ as
$$\det A = \det\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}.$$
Then, if I also introduce the matrices
$$B_1 = \begin{bmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{bmatrix}, \qquad B_2 = \begin{bmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{bmatrix},$$
I can rewrite my solution in the following elegant form:
$$x_1 = \frac{\det B_1}{\det A}, \qquad x_2 = \frac{\det B_2}{\det A}.$$

It turns out that it is also possible to define the determinant for square matrices of arbitrary size, but any definition by a formula does not really look motivated (in part because all the general formulas for the determinant of a square matrix of arbitrary size look really complicated), and this served in part as a reason to banish determinants from courses of linear algebra. On the other hand, determinants are indispensable for many theoretical concepts in mathematics and are basically unavoidable.¹ Hence, as a compromise, I will introduce the determinant through its properties, and only after that will I present several (not extremely useful computationally) formulas for the determinant.

Math 329: Intermediate Linear Algebra by Artem Novozhilov, e-mail: artem.novozhilov@ndsu.edu. Spring 2017.

¹ It is impossible, for example, to learn how to make a change of variables in a multiple integral without some basic understanding of determinants.
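The $2 \times 2$ formulas above are easy to check numerically. Here is a small sketch in Python (the function names are mine, not part of the notes):

```python
def det2(a11, a12, a21, a22):
    """Determinant of the 2x2 matrix [[a11, a12], [a21, a22]]."""
    return a11 * a22 - a12 * a21

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system via x1 = det B1 / det A, x2 = det B2 / det A."""
    d = det2(a11, a12, a21, a22)
    if d == 0:
        raise ValueError("det A = 0: no unique solution")
    x1 = det2(b1, a12, b2, a22) / d   # det B1 / det A
    x2 = det2(a11, b1, a21, b2) / d   # det B2 / det A
    return x1, x2

# Example: x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11
print(solve2(1, 2, 3, 4, 5, 11))  # -> (1.0, 2.0)
```

Substituting $x_1 = 1$, $x_2 = 2$ back into both equations confirms the answer.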

The properties of the determinant are motivated by the fact that the determinant of a $2 \times 2$ matrix, as I defined it above, has a very simple geometric meaning. Let $A = [a_{ij}]_{2 \times 2}$, and introduce the two column vectors
$$\mathbf{a}_1 = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix}, \qquad \mathbf{a}_2 = \begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix},$$
so that my matrix is $A = [\mathbf{a}_1 \ \mathbf{a}_2]$. These two vectors define a parallelogram in the plane which has these two vectors as two of its sides. The value $\det A = a_{11}a_{22} - a_{12}a_{21}$ is actually the area of this parallelogram (see Fig. 4.1 for two examples).

Exercise 1. Can you prove the last claim?

Figure 4.1: Geometric meaning of the determinant. $\mathbb{R}^2$ is on the left and $\mathbb{R}^3$ is on the right.

Therefore I will think of the properties of the determinant of an arbitrary square matrix as the properties that should hold for the volume of the parallelepiped defined by the columns of this matrix (Fig. 4.1). I note that I allow negative values for the determinant and hence talk about the signed volume.

First, the determinant must be linear in each argument. Technically, this means that I postulate (all bold small letters denote column vectors with $n$ components)
$$\det[\mathbf{a}_1 \ldots \alpha\mathbf{a}_k + \mathbf{b}_k \ldots \mathbf{a}_n] = \alpha \det[\mathbf{a}_1 \ldots \mathbf{a}_k \ldots \mathbf{a}_n] + \det[\mathbf{a}_1 \ldots \mathbf{b}_k \ldots \mathbf{a}_n]. \tag{4.1}$$
Here $\alpha$ is a scalar. The volume justification is that the usual formula tells me the volume equals the base area times the height, and if a vector is multiplied by a constant $\alpha$, then intuitively the height is multiplied by the same constant; if a vector is the sum of two vectors, then the corresponding height is also the sum of the two heights.

Intuitively, the volume of a parallelepiped built on vectors two of which coincide should be zero (think, e.g., of the three-dimensional case). That is,
$$\det[\mathbf{a}_1 \ldots \mathbf{a}_k \ldots \mathbf{a}_k \ldots \mathbf{a}_n] = 0. \tag{4.2}$$
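For the $2 \times 2$ case, the signed-area claim of Exercise 1 can at least be checked numerically: $|\det A|$ should equal the base length $\|\mathbf{a}_1\|$ times the height of $\mathbf{a}_2$ over the line spanned by $\mathbf{a}_1$. A sketch (the helper names are mine):

```python
import math

def signed_area(a1, a2):
    """det[a1 a2] for column vectors a1 = (a11, a21), a2 = (a12, a22)."""
    return a1[0] * a2[1] - a2[0] * a1[1]

def parallelogram_area(a1, a2):
    """Base times height: |a1| * distance from a2 to the line span(a1)."""
    base = math.hypot(*a1)
    # component of a2 orthogonal to a1
    t = (a2[0] * a1[0] + a2[1] * a1[1]) / (base * base)
    perp = (a2[0] - t * a1[0], a2[1] - t * a1[1])
    return base * math.hypot(*perp)

a1, a2 = (3.0, 1.0), (1.0, 2.0)
print(abs(signed_area(a1, a2)), parallelogram_area(a1, a2))  # both approximately 5
```

This is a numerical sanity check of one example, of course, not a proof.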

Note that properties (4.1) and (4.2) together imply that
$$\det[\mathbf{a}_1 \ldots \mathbf{a}_j + \alpha\mathbf{a}_k \ldots \mathbf{a}_k \ldots \mathbf{a}_n] = \det[\mathbf{a}_1 \ldots \mathbf{a}_j \ldots \mathbf{a}_k \ldots \mathbf{a}_n].$$
In words: an elementary column operation of the third type does not change the determinant.

Another important implication is that if I exchange two columns of the matrix, the determinant changes its sign. Here is a simple proof of this fact. Let me fix all the columns of the matrix except columns $i$ and $j$, and consider the determinant as a function of these two columns, which I will denote $f(\mathbf{a}, \mathbf{b})$. Now, using (4.1) and (4.2), I get
$$0 = f(\mathbf{a} + \mathbf{b}, \mathbf{a} + \mathbf{b}) = f(\mathbf{a}, \mathbf{a}) + f(\mathbf{a}, \mathbf{b}) + f(\mathbf{b}, \mathbf{a}) + f(\mathbf{b}, \mathbf{b}) = f(\mathbf{a}, \mathbf{b}) + f(\mathbf{b}, \mathbf{a}).$$

And finally, I will need a normalization. Namely (and quite naturally), I postulate that
$$\det I = 1. \tag{4.3}$$

It turns out that (4.1), (4.2), and (4.3) define a unique function on the collection of all square matrices. A proof that this function is unique will be given later, but for now I will show that a lot of other properties can be deduced from my assumptions (4.1), (4.2), and (4.3) (which, by the way, are easily checked in the $2 \times 2$ case; I invite the reader to do these computations).

Proposition 4.1. Let $A$ be a square matrix.
1. If $A$ has a zero column, then $\det A = 0$.
2. If $A$ is a diagonal matrix, then $\det A = a_{11} \cdots a_{nn}$.
3. If $A$ is a triangular matrix, then $\det A = a_{11} \cdots a_{nn}$.
4. $A$ is invertible if and only if $\det A \neq 0$.
5. $\det(\alpha A) = \alpha^n \det A$.

Proof. 1. By (4.1), $\det[\mathbf{a}_1 \ldots \mathbf{0} \ldots \mathbf{a}_n] = 0 \cdot \det[\mathbf{a}_1 \ldots \mathbf{0} \ldots \mathbf{a}_n] = 0$.

2. Since for a diagonal matrix I can factor out all the diagonal elements and get the identity matrix, the conclusion follows from (4.1) and (4.3).

3. If a triangular matrix has a zero on the main diagonal, then by elementary column operations I can always reduce it to a matrix with a zero column, and hence the determinant is zero by Claim 1. If there are no zero elements on the main diagonal, then by column operations of the third type I can reduce my matrix to a diagonal matrix with the same diagonal elements.
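Both implications (invariance under third-type operations, sign flip under a column switch) are easy to spot-check in the $2 \times 2$ case, where the determinant is given by the explicit formula above (a quick numerical sanity check, not a proof):

```python
def det2(c1, c2):
    """det[c1 c2] for columns c1, c2 in R^2."""
    return c1[0] * c2[1] - c2[0] * c1[1]

a = (2, 1)
b = (1, 3)
alpha = 7
shifted = (b[0] + alpha * a[0], b[1] + alpha * a[1])  # b + alpha*a

print(det2(a, shifted) == det2(a, b))   # third-type operation: unchanged -> True
print(det2(b, a) == -det2(a, b))        # column switch: sign flips -> True
```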
These operations do not change the determinant. Now, applying the already proved Claim 2, I get the desired result.

The reasoning above actually gives a computational recipe for calculating the determinant. Namely, using elementary column operations of the first and third types (if you are not very comfortable with column operations, do row operations on $A^T$ and then transpose the final result again), find the column echelon form, which is a triangular matrix, keeping track of the number of times you switched two columns, say $k$ times. The determinant of the triangular matrix is the product of its diagonal elements, and to get the determinant of the original matrix one just needs to multiply it by $(-1)^k$. By the way, recall that a square matrix is invertible if and only if its row (or, similarly, column) reduced echelon form is the identity matrix, and hence this computational procedure proves Claim 4. The last point follows from (4.1).
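This recipe is straightforward to implement. The sketch below is my own code, not the notes': it uses row operations (equivalent, as discussed later) with exact rational arithmetic, reduces the matrix to triangular form with operations of the first and third types, multiplies the diagonal, and flips the sign once per switch:

```python
from fractions import Fraction

def det(matrix):
    """Determinant via reduction to triangular form.

    First-type operations (row switches) each flip the sign;
    third-type operations leave the determinant unchanged.
    """
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    sign = 1
    for j in range(n):
        # find a nonzero pivot in column j, switching rows if needed
        pivot = next((i for i in range(j, n) if a[i][j] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: determinant is zero
        if pivot != j:
            a[j], a[pivot] = a[pivot], a[j]
            sign = -sign                # each switch changes the sign
        # third-type operations: clear the entries below the pivot
        for i in range(j + 1, n):
            factor = a[i][j] / a[j][j]
            a[i] = [x - factor * y for x, y in zip(a[i], a[j])]
    result = Fraction(sign)
    for j in range(n):
        result *= a[j][j]               # product of the diagonal elements
    return result

print(det([[0, 1, 2], [1, 0, 3], [2, 3, 0]]))  # the matrix of Example 4.2: 12
```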

Exercise 2. Carefully write down a proof of Claim 4 of the proposition.

Example 4.2. Compute
$$\det\begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 2 & 3 & 0 \end{bmatrix}.$$
Note that (first switching columns 1 and 2, then applying operations of the third type)
$$\begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 2 & 3 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 3 & 2 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 2 & -6 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & -12 \end{bmatrix}.$$
The determinant of the last matrix is $1 \cdot 1 \cdot (-12) = -12$. I recall that while I was reducing the matrix I switched columns once. Therefore the determinant of the original matrix is $12$.

Corollary 4.3. The system $A\mathbf{x} = \mathbf{b}$ with a square matrix $A$ has a unique solution if and only if $\det A \neq 0$.

Lemma 4.4. Let $A$ be a square matrix and $E$ be an elementary matrix of the same size. Then
$$\det(AE) = \det A \det E.$$

Proof. Since $E = IE$ and $\det I = 1$, I know the determinants of all elementary matrices. (If the previous sentence is not clear: take, e.g., the elementary column operation of the first type; I know that multiplication from the right by an elementary matrix of type 1 switches two columns, and therefore, using the properties of the determinant, I conclude that the determinant of this elementary matrix must be $-1$; analogously for the other elementary matrices.) Since multiplication by an elementary matrix amounts to an elementary column operation, the conclusion follows.

Corollary 4.5. For elementary matrices $E_1, \ldots, E_k$,
$$\det(AE_1 \cdots E_k) = \det A \det E_1 \cdots \det E_k.$$

Theorem 4.6. For square matrices $A$, $B$:
1. $\det A^T = \det A$.
2. $\det(AB) = \det A \det B$.

Proof. 1. I will rely on two facts. First, for any elementary matrix $E$ I have $\det E^T = \det E$, which can be checked by direct calculation. Second, if $A$ is not invertible, then $A^T$ is also not invertible, and both determinants are zero; hence it is sufficient to prove the claim only for invertible matrices. Let $A$ be invertible; then it can be represented as a product of elementary matrices, $A = E_1 \cdots E_k$. Taking the transpose of both sides and using Corollary 4.5 together with the fact that $\det E^T = \det E$ concludes the proof.

2. If $B$ is invertible, the proof follows from the fact that $B$ can be represented as a product of elementary matrices, together with Corollary 4.5.
If $B$ is not invertible, then the product $AB$ is also not invertible (why?), and I get $0 = 0$.
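Both claims of Theorem 4.6 can be spot-checked numerically in the $2 \times 2$ case (a sanity check on one example, not a proof; the helper functions are mine):

```python
def det2(m):
    """2x2 determinant, enough to spot-check the theorem."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
print(det2(transpose2(A)) == det2(A))            # det A^T = det A -> True
print(det2(matmul2(A, B)) == det2(A) * det2(B))  # det(AB) = det A det B -> True
```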

Exercise 3. Prove that if $B$ is not invertible, then $AB$ is also not invertible for any $A$.

Remark 4.7. The first point of the last theorem actually tells us that everything said about the properties of determinants with respect to elementary column operations is also true for elementary row operations.

Remark 4.8. Since the properties of determinants are so important, let me list them all together again. Here $A$ is a square matrix.
1. The determinant is linear in each row (column) when the other rows (columns) are kept fixed.
2. $\det(\alpha A) = \alpha^n \det A$.
3. If two rows (columns) are switched, the determinant changes its sign.
4. If there is a zero column or row, then $\det A = 0$.
5. For a triangular matrix, $\det A$ is equal to the product of the elements on the main diagonal. In particular, $\det I = 1$.
6. $\det A = 0$ if and only if $A$ is not invertible (or, equivalently, $\det A \neq 0$ if and only if $A$ is invertible).
7. $A\mathbf{x} = \mathbf{b}$ has a unique solution if and only if $\det A \neq 0$.
8. $\det A$ does not change if we perform elementary row or column operations of the third type (take a row or a column, multiply it by a constant, and add it to another row or column).
9. $\det A^T = \det A$.
10. $\det(AB) = \det A \det B$.
11. $\det A^{-1} = (\det A)^{-1}$.

So we know a lot of properties of determinants, and the row or column operations allow us to calculate determinants. Do we need anything else? Actually, yes. We are still lacking a proof of the uniqueness of the determinant (note that existence is actually guaranteed by the finite number of elementary operations required to put a given matrix into reduced row or column echelon form). We still have no proof that if we perform the elementary operations in different orders, we'll end up with the same answer. For this we need to show that properties (4.1)-(4.3) define a unique function, which would guarantee that the determinant is unique.
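Properties 2 and 11 from the list are likewise easy to verify for a concrete $2 \times 2$ matrix. The sketch below uses the standard $2 \times 2$ adjugate formula for the inverse, which is not derived in these notes, and exact rational arithmetic:

```python
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """2x2 inverse via the adjugate formula (assumes det != 0)."""
    d = Fraction(det2(m))
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(5)]]
alpha, n = Fraction(3), 2
scaled = [[alpha * x for x in row] for row in A]

print(det2(scaled) == alpha**n * det2(A))  # det(alpha*A) = alpha^n det A -> True
print(det2(inv2(A)) == 1 / det2(A))        # det A^{-1} = (det A)^{-1} -> True
```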