1 Determinants and the Solvability of Linear Systems




In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns. That section completely side-stepped one important question: whether a system has a solution and, if so, whether it is unique. Consider the case of two lines in the plane:

\[ \begin{aligned} ax + by &= e \\ cx + dy &= f \end{aligned} \tag{1} \]

Intersections between two lines can happen in any of three different ways:

1. the lines intersect at a unique point (i.e., a solution exists and is unique),
2. the lines are coincident (that is, the equations represent the same line and there are infinitely many points of intersection; in this case a solution exists, but is not unique), or
3. the lines are parallel but not coincident (so that no solution exists).

Experience has taught us that it is quite easy to decide which of these situations we are in before ever attempting to solve a linear system of two equations in two unknowns. For instance, the system

\[ \begin{aligned} 3x - 5y &= 9 \\ -5x + \tfrac{25}{3}y &= -15 \end{aligned} \]

obviously contains two representations of the same line (since one equation is a constant multiple of the other) and will have infinitely many solutions. In contrast, the system

\[ \begin{aligned} x + 2y &= 1 \\ 2x + 4y &= 5 \end{aligned} \]

will have no solutions. This is the case because, while the left sides of the equations (the sides that contain the coefficients of x and y, which determine the slopes of the lines) are in proportion to one another, the right sides are not in the same proportion. As a result, these two lines have the same slope but not the same y-intercept. Finally, the system

\[ \begin{aligned} 2x + 5y &= 11 \\ 7x - y &= 1 \end{aligned} \]

will have just one solution (one point of intersection), as the left sides of the equations are not at all in proportion to one another.

What is most important about the preceding discussion is that we can distinguish situation 1 (the lines intersecting at one unique point) from the others simply by looking at the coefficients a, b, c and d from equation (1). In particular, we can determine the ratios a : c

and b : d and determine whether these ratios are the same or different. Equivalently, we can look at whether the quantity ad − bc is zero or not. If ad − bc ≠ 0, then the system has one unique point of intersection; but if ad − bc = 0, then the system has either no points or infinitely many points of intersection. If we write equation (1) as a matrix equation

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} e \\ f \end{bmatrix}, \]

we see that the quantity ad − bc depends only on the coefficient matrix. Since this quantity determines whether or not the system has a unique solution, it is called the determinant of the coefficient matrix

\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \]

and is sometimes abbreviated as det(A), |A|, or

\[ \begin{vmatrix} a & b \\ c & d \end{vmatrix}. \]

While it is quite easy for us to determine in advance the number of solutions that arise from a system of two linear equations in two unknowns, the situation becomes a good deal more complicated if we add another variable and another equation. The solutions of such a system

\[ \begin{aligned} ax + by + cz &= l \\ dx + ey + fz &= m \\ gx + hy + kz &= n \end{aligned} \tag{2} \]

can be thought of as points of intersection between three planes. Again, there are several possibilities:

1. the planes intersect at a unique point,
2. the planes intersect along a line,
3. the planes intersect in a plane, or
4. the planes do not intersect.

It seems reasonable to think that situation 1 can once again be distinguished from the other three simply by performing some test on the numbers a, b, c, d, e, f, g, h and k. As in the case of the system (1), perhaps if we write system (2) as the matrix equation

\[ \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} l \\ m \\ n \end{bmatrix}, \]
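Before moving on to three variables, the two-variable criterion is easy to check by machine. The following sketch (not part of the original notes) evaluates ad − bc for the three example systems above; exact rational arithmetic via `Fraction` is used so that the coincident case comes out exactly zero:

```python
from fractions import Fraction

def det2(a, b, c, d):
    """Determinant ad - bc of the 2x2 coefficient matrix [[a, b], [c, d]]."""
    return a * d - b * c

# The three example systems above, as coefficient matrices [[a, b], [c, d]].
print(det2(3, -5, -5, Fraction(25, 3)))  # 0   -> coincident lines, infinitely many solutions
print(det2(1, 2, 2, 4))                  # 0   -> parallel lines, no solution
print(det2(2, 5, 7, -1))                 # -37 -> one unique point of intersection
```

Note that the determinant alone separates the unique-solution case from the other two; deciding between "no solutions" and "infinitely many" additionally requires comparing the right-hand sides.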

we will be able to define an appropriate quantity det(A) that depends only on the coefficient matrix

\[ A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix} \]

in such a way that, if det(A) ≠ 0, then the system has a unique solution (situation 1), but if det(A) = 0, then one of the other situations (2–4) is in effect. Indeed, it is possible to define det(A) for a square matrix A of arbitrary dimension. For our purposes, we do not so much wish to give a rigorous definition of such a determinant as we wish to be able to find it. As of now, we do know how to find it for a 2 × 2 matrix:

\[ \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}. \]

For square matrices of dimension larger than 2 we will find the determinant using cofactor expansion. Let A = (a_{ij}) be an arbitrary n × n matrix; that is,

\[ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}. \]

We define the (i, j)-minor of A, M_{ij}, to be the determinant of the matrix resulting from crossing out the i-th row and the j-th column of A. Thus, if

\[ B = \begin{bmatrix} 1 & -4 & 3 \\ -3 & 2 & 5 \\ 4 & 0 & -1 \end{bmatrix}, \]

we have nine possible minors M_{ij} of B, two of which are

\[ M_{21} = \begin{vmatrix} -4 & 3 \\ 0 & -1 \end{vmatrix} = 4 \quad\text{and}\quad M_{33} = \begin{vmatrix} 1 & -4 \\ -3 & 2 \end{vmatrix} = -10. \]

A concept that is related to the (i, j)-minor is the (i, j)-cofactor, C_{ij}, which is defined to be

\[ C_{ij} := (-1)^{i+j} M_{ij}. \]

Thus, the matrix B above has 9 cofactors C_{ij}, two of which are

\[ C_{21} = (-1)^{2+1} M_{21} = -4 \quad\text{and}\quad C_{33} = (-1)^{3+3} M_{33} = -10. \]

Armed with the concept of cofactors, we are prepared to say how the determinant of an arbitrary square matrix A = (a_{ij}) is found. It may be found by expanding in cofactors along the i-th row:

\[ \det(A) = a_{i1} C_{i1} + a_{i2} C_{i2} + \cdots + a_{in} C_{in} = \sum_{k=1}^{n} a_{ik} C_{ik}; \]
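The definitions of minor and cofactor translate directly into code. A minimal sketch (mine, not from the notes), using the matrix B above; rows and columns are 0-indexed in the code, so the minor M_{21} of the text is `minor(B, 1, 0)`:

```python
def minor(A, i, j):
    """The submatrix left after deleting row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # det(A) = sum_j a_{1j} C_{1j}, where C_{1j} = (-1)^(1+j) M_{1j}
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

B = [[1, -4, 3],
     [-3, 2, 5],
     [4, 0, -1]]

# The two minors worked out in the text (0-indexed positions).
print(det(minor(B, 1, 0)))  # M_21 = 4
print(det(minor(B, 2, 2)))  # M_33 = -10
```

The recursion bottoms out at 1 × 1 matrices, whose determinant is just the single entry; the 2 × 2 formula a11·a22 − a12·a21 then falls out as a special case of the expansion.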

or, alternatively, it may be found by expanding in cofactors along the j-th column:

\[ \det(A) = a_{1j} C_{1j} + a_{2j} C_{2j} + \cdots + a_{nj} C_{nj} = \sum_{k=1}^{n} a_{kj} C_{kj}. \]

Thus, det(B) for the 3 × 3 matrix B above is

\[ \det(B) = \begin{vmatrix} 1 & -4 & 3 \\ -3 & 2 & 5 \\ 4 & 0 & -1 \end{vmatrix} = 4(-1)^{3+1} \begin{vmatrix} -4 & 3 \\ 2 & 5 \end{vmatrix} + (0)(-1)^{3+2} \begin{vmatrix} 1 & 3 \\ -3 & 5 \end{vmatrix} + (-1)(-1)^{3+3} \begin{vmatrix} 1 & -4 \\ -3 & 2 \end{vmatrix} = 4(-20 - 6) + 0 - (2 - 12) = -94. \]

Here we found det(B) via a cofactor expansion along the third row. You should verify that a cofactor expansion along either of the other two rows leads to the same result. Had we expanded in cofactors along one of the columns, for instance column 2, we would have

\[ \det(B) = (-4)(-1)^{1+2} \begin{vmatrix} -3 & 5 \\ 4 & -1 \end{vmatrix} + (2)(-1)^{2+2} \begin{vmatrix} 1 & 3 \\ 4 & -1 \end{vmatrix} + (0)(-1)^{3+2} \begin{vmatrix} 1 & 3 \\ -3 & 5 \end{vmatrix} = 4(3 - 20) + 2(-1 - 12) + 0 = -94. \]

This process can be used iteratively on larger square matrices. For instance,

\[ \begin{vmatrix} 3 & 1 & 2 & 0 \\ -1 & 0 & 5 & -4 \\ 1 & 1 & 0 & -1 \\ 0 & 0 & -3 & 1 \end{vmatrix} = (0)(-1)^{4+1} \begin{vmatrix} 1 & 2 & 0 \\ 0 & 5 & -4 \\ 1 & 0 & -1 \end{vmatrix} + (0)(-1)^{4+2} \begin{vmatrix} 3 & 2 & 0 \\ -1 & 5 & -4 \\ 1 & 0 & -1 \end{vmatrix} + (-3)(-1)^{4+3} \begin{vmatrix} 3 & 1 & 0 \\ -1 & 0 & -4 \\ 1 & 1 & -1 \end{vmatrix} + (1)(-1)^{4+4} \begin{vmatrix} 3 & 1 & 2 \\ -1 & 0 & 5 \\ 1 & 1 & 0 \end{vmatrix}, \]

where our cofactor expansion of the original determinant of a 4 × 4 matrix along its fourth row expresses it in terms of the determinants of several 3 × 3 matrices. We may proceed to find these latter determinants using cofactor expansions as well:

\[ \begin{vmatrix} 3 & 1 & 0 \\ -1 & 0 & -4 \\ 1 & 1 & -1 \end{vmatrix} = (3)(-1)^{1+1} \begin{vmatrix} 0 & -4 \\ 1 & -1 \end{vmatrix} + (1)(-1)^{1+2} \begin{vmatrix} -1 & -4 \\ 1 & -1 \end{vmatrix} + (0)(-1)^{1+3} \begin{vmatrix} -1 & 0 \\ 1 & 1 \end{vmatrix} = 3(0 + 4) - (1 + 4) + 0 = 7, \]
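The claim that expansion along any row gives the same answer can be checked numerically. This sketch (mine, not from the notes) generalizes the recursive determinant to expand along an arbitrary row and applies it to the 4 × 4 matrix above:

```python
def minor(A, i, j):
    """The submatrix left after deleting row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det_along_row(A, i):
    """Cofactor expansion of det(A) along row i (0-indexed)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_along_row(minor(A, i, j), 0)
               for j in range(len(A)))

A = [[3, 1, 2, 0],
     [-1, 0, 5, -4],
     [1, 1, 0, -1],
     [0, 0, -3, 1]]

# Expanding along any of the four rows gives the same determinant.
print([det_along_row(A, i) for i in range(4)])  # [9, 9, 9, 9]
```

In hand computation one naturally expands along the row or column with the most zeros (here, the fourth row), since each zero entry kills an entire 3 × 3 subdeterminant.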

where we expanded in cofactors along the first row, and

\[ \begin{vmatrix} 3 & 1 & 2 \\ -1 & 0 & 5 \\ 1 & 1 & 0 \end{vmatrix} = (1)(-1)^{1+2} \begin{vmatrix} -1 & 5 \\ 1 & 0 \end{vmatrix} + (0)(-1)^{2+2} \begin{vmatrix} 3 & 2 \\ 1 & 0 \end{vmatrix} + (1)(-1)^{3+2} \begin{vmatrix} 3 & 2 \\ -1 & 5 \end{vmatrix} = -(0 - 5) + 0 - (15 + 2) = -12, \]

where this cofactor expansion was carried out along the second column. Thus

\[ \begin{vmatrix} 3 & 1 & 2 & 0 \\ -1 & 0 & 5 & -4 \\ 1 & 1 & 0 & -1 \\ 0 & 0 & -3 & 1 \end{vmatrix} = 0 + 0 + (3)(7) + (1)(-12) = 9. \]

The matrices whose determinants we computed in the preceding paragraphs are the coefficient matrices of the two linear systems

\[ \begin{aligned} x_1 - 4x_2 + 3x_3 &= b_1 \\ -3x_1 + 2x_2 + 5x_3 &= b_2 \\ 4x_1 - x_3 &= b_3 \end{aligned} \]

and

\[ \begin{aligned} 3x_1 + x_2 + 2x_3 &= b_1 \\ -x_1 + 5x_3 - 4x_4 &= b_2 \\ x_1 + x_2 - x_4 &= b_3 \\ -3x_3 + x_4 &= b_4 \end{aligned} \]

respectively; that is, if we write each of these systems as a matrix equation of the form Ax = b, with

\[ \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \end{bmatrix} \quad\text{and}\quad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \end{bmatrix}, \]

then the coefficient matrix A in each case is one for which we have already computed the determinant and found it to be nonzero. This means that, no matter what values are used for b_1, b_2, b_3 (and b_4 in the latter system), there is exactly one solution for the unknown vector x.
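As a concrete check of that last claim, the determinant also yields the solution itself through Cramer's rule (a technique not developed in this section): when det(A) ≠ 0, each unknown x_j equals det(A with its j-th column replaced by b), divided by det(A). A sketch under that assumption, applied to the first system above with an arbitrarily chosen right-hand side:

```python
from fractions import Fraction

def minor(A, i, j):
    """The submatrix left after deleting row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def solve(A, b):
    """Unique solution of Ax = b via Cramer's rule; requires det(A) != 0."""
    d = det(A)
    # x_j = det(A with column j replaced by b) / det(A)
    return [Fraction(det([row[:j] + [b[i]] + row[j + 1:]
                          for i, row in enumerate(A)]), d)
            for j in range(len(A))]

A = [[1, -4, 3], [-3, 2, 5], [4, 0, -1]]  # det(A) = -94, so Ax = b is uniquely solvable
b = [1, 2, 3]                             # an arbitrarily chosen right-hand side
x = solve(A, b)
# Substituting x back into the left-hand sides recovers b exactly.
print([sum(a * xi for a, xi in zip(row, x)) for row in A] == b)  # True
```

Exact `Fraction` arithmetic is used so the back-substitution check holds with equality; with floats one would compare up to rounding error instead.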