Linear Algebra Notes




Chapter 19. KERNEL AND IMAGE OF A MATRIX

Take an n x m matrix

        ( a_11  a_12  ...  a_1m )
    A = ( a_21  a_22  ...  a_2m )
        (  :     :          :   )
        ( a_n1  a_n2  ...  a_nm )

and think of it as a function A : R^m -> R^n. The kernel of A is defined as

    ker A = the set of all x in R^m such that Ax = 0.

Note that ker A lives in R^m. The image of A is

    im A = the set of all vectors in R^n which are Ax for some x in R^m.

Note that im A lives in R^n. Many calculations in linear algebra boil down to the computation of kernels and images of matrices. Here are some different ways of thinking about ker A and im A.

In terms of equations, ker A is the set of solution vectors x = (x_1, ..., x_m) in R^m of the n equations

    a_11 x_1 + a_12 x_2 + ... + a_1m x_m = 0
    a_21 x_1 + a_22 x_2 + ... + a_2m x_m = 0
      :                                           (19a)
    a_n1 x_1 + a_n2 x_2 + ... + a_nm x_m = 0,

and im A consists of those vectors y = (y_1, ..., y_n) in R^n for which the system

    a_11 x_1 + a_12 x_2 + ... + a_1m x_m = y_1
    a_21 x_1 + a_22 x_2 + ... + a_2m x_m = y_2
      :                                           (19b)
    a_n1 x_1 + a_n2 x_2 + ... + a_nm x_m = y_n
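These definitions translate directly into computation. As a quick numerical illustration (a sketch assuming NumPy; the example matrix is not from the notes), the singular value decomposition gives bases for both subspaces: right singular vectors with zero singular value span ker A, and left singular vectors with nonzero singular value span im A.

```python
import numpy as np

# An n x m matrix viewed as a map R^m -> R^n.  The rows here are
# proportional, so the rank is 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))

kernel_basis = Vt[rank:]        # rows: orthonormal basis of ker A, inside R^3
image_basis = U[:, :rank]       # columns: orthonormal basis of im A, inside R^2

print(rank)                     # 1, so dim(im A) = 1
print(len(kernel_basis))        # 2, so dim(ker A) = 2
print(A @ kernel_basis.T)       # numerically zero: each basis vector solves Ax = 0
```

The same recipe works for any shape of matrix, which is why rank computations in practice go through the SVD rather than through row reduction.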

has a solution x = (x_1, ..., x_m).

A single equation

    a_i1 x_1 + a_i2 x_2 + ... + a_im x_m = y_i

is called a hyperplane in R^m. (So a line is a hyperplane in R^2, and a plane is a hyperplane in R^3.) Geometrically, ker A is the intersection of the hyperplanes (19a), and im A is the set of vectors (y_1, ..., y_n) in R^n for which the hyperplanes (19b) intersect in at least one point.

If x and x' are two solutions of (19b) for the same y, then

    A(x - x') = Ax - Ax' = y - y = 0,

so x - x' belongs to the kernel of A. If ker A = 0 (i.e., consists of just the zero vector), then there can be at most one solution. In general, the bigger the kernel, the more solutions there are to a given equation that has at least one solution. Thus, if x is one solution, then all other solutions are obtained from x by adding a vector from ker A. However, we will see that there is a certain conservation principle at work here, which implies that the bigger the kernel, the smaller the image, so the less likely it is that there will be even one solution.

Intuitively, you can think of vectors in R^m as representing information. Then ker A is the information lost by A, while im A is the information retained by A.

We can also describe im A as the span (= the set of linear combinations) of the columns of A. That is, if u_1, ..., u_m are the columns of A, then im A consists of the vectors

    Ax = x_1 u_1 + x_2 u_2 + ... + x_m u_m in R^n,

with all possible choices of scalars x_1, ..., x_m.

Example 1 (the 2 x 3 case). Let

    A = ( a_11  a_12  a_13 )
        ( a_21  a_22  a_23 ).

Suppose first that A is the zero matrix (all a_ij = 0). Then ker A = R^3 and im A consists only of the zero vector.

Suppose then that A is not the zero matrix. Then ker A is the intersection of two planes through (0, 0, 0):

    a_11 x_1 + a_12 x_2 + a_13 x_3 = 0
    a_21 x_1 + a_22 x_2 + a_23 x_3 = 0.

Each plane corresponds to a row vector of A, and the row vector is the normal vector to the plane. If the row vectors of A are not proportional, then the planes are distinct. In this case the planes intersect in a line, and ker A is this line. If the rows of A are proportional, then the two equations determine just one plane, and ker A is this plane. For example,

    ker ( 1 2 3 ) = the line R(1, -2, 1),
        ( 3 2 1 )

while

    ker ( 1 2 3 ) = the plane x + 2y + 3z = 0.
        ( 2 4 6 )

What about im A? Since im A lies in R^2 and is nonzero (because we are assuming A is not 0), the image of A is either a line or all of R^2. How to tell? Recall that im A is spanned by the three column vectors

    u_1 = ( a_11 ),   u_2 = ( a_12 ),   u_3 = ( a_13 ).
          ( a_21 )          ( a_22 )          ( a_23 )

The image of A will be a line l exactly when these three column vectors all live on the same line l. If, say, u_1 is nonzero and the image is a line, then there are scalars s, t such that u_2 = s u_1 and u_3 = t u_1. This would mean that

    A = ( a_11  s a_11  t a_11 ).
        ( a_21  s a_21  t a_21 )

But look: this means the rows are proportional, since both are proportional to (1, s, t). By what we saw before, this means the kernel is a plane. In summary (recalling also the case A = 0):

    If im A is a line, then ker A is a plane.
    If im A is all of R^2, then ker A is a line.
    If im A = 0, then ker A = R^3.

We can summarize these cases in a table of dimensions for a 2 x 3 matrix:

    dim ker A   dim im A
        1           2      expected: rows not proportional
        2           1      rows proportional, A not 0
        3           0      A = 0 only

Note that for any 2 x 3 matrix A, we have dim(ker A) + dim(im A) = 3. As you vary A, the quantity dim(ker A) can vary from 1 to 3, and the quantity dim(im A) can vary from 0 to 2, but the sum dim(ker A) + dim(im A) remains constant at 3.

For 3 x 2 matrices

    A = ( a_11  a_12 )
        ( a_21  a_22 ) : R^2 -> R^3,
        ( a_31  a_32 )

the table is
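The 2 x 3 case can be checked numerically. A minimal sketch, assuming NumPy: the rows of A are the normal vectors of the two planes whose intersection is the kernel, so when the rows are not proportional their cross product points along the kernel line, and the rank separates the cases.

```python
import numpy as np

# Rows not proportional: rank 2, image is all of R^2, kernel is a line.
# The cross product of the two normal vectors lies in both planes.
A = np.array([[1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0]])
direction = np.cross(A[0], A[1])   # a nonzero vector on the kernel line
print(np.linalg.matrix_rank(A))    # 2
print(A @ direction)               # [0. 0.]: the cross product solves Ax = 0

# Rows proportional but A nonzero: rank 1, image a line, kernel a plane.
B = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
print(np.linalg.matrix_rank(B))    # 1
```

Here `direction` comes out as (-4, 8, -4), a multiple of (1, -2, 1), matching the geometric picture of two planes meeting in a line.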

    dim ker A   dim im A
        0           2      expected: columns not proportional
        1           1      columns proportional, A not 0
        2           0      A = 0 only

Again dim(ker A) + dim(im A) is constant, this time equal to 2, the dimension of the domain R^2. You can think of this as conservation of information. It is a general fact:

Kernel-Image Theorem. Let A be an n x m matrix. Then

    dim(ker A) + dim(im A) = m.

The corresponding table depends on whether n or m is bigger. For n <= m:

    dim ker A     dim im A
     m - n          n        expected: rows linearly independent
     m - n + 1      n - 1
       :            :
     m              0        A = 0 only

For n >= m:

    dim ker A     dim im A
     0              m        expected: columns linearly independent
     1              m - 1
       :            :
     m              0        A = 0 only

(A set of vectors is linearly independent if none of them is a linear combination of the others.) Note that the expected situation has minimal kernel. As you go down the rows in the tables, there are more and more conditions to be satisfied, hence each row is less likely than the one above, until finally only A = 0 satisfies all the conditions of the last row.

The conditions can be expressed as certain determinants being zero, as follows. Let u be the smaller of n and m. Each table has u + 1 rows; number the rows 0, 1, ..., u starting at the top row. Then a matrix satisfies the conditions for row 0 (the expected case) if some u x u subdeterminant of A is nonzero. The conditions for a lower row p are that some (u - p) x (u - p) subdeterminant of A is nonzero, but all (u - p + 1) x (u - p + 1) subdeterminants are zero. This is illustrated in the exercises.

Intuitively, the Kernel-Image Theorem says that the amount of information lost plus the amount of information retained equals the amount of information you started with. However, to really understand the Kernel-Image Theorem, we have to understand dimension inside R^n for any n.
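The theorem itself is easy to test numerically. A sketch assuming NumPy: dim(im A) is the rank, while dim(ker A) is counted directly here, as the number of right singular vectors that A sends to zero.

```python
import numpy as np

# Kernel-Image theorem check for several shapes: for an n x m matrix,
# dim(ker A) + dim(im A) = m.
rng = np.random.default_rng(0)

for n, m in [(2, 3), (3, 2), (3, 4), (4, 3)]:
    A = rng.integers(-3, 4, size=(n, m)).astype(float)
    dim_im = int(np.linalg.matrix_rank(A))
    _, _, Vt = np.linalg.svd(A)    # rows of Vt: an orthonormal basis of R^m
    dim_ker = sum(bool(np.allclose(A @ v, 0, atol=1e-8)) for v in Vt)
    print(n, m, dim_ker, dim_im)   # dim_ker + dim_im == m every time
```

The loop covers both shapes of table above: wide matrices, where the kernel is forced to be at least (m - n)-dimensional, and tall ones, where the expected kernel is zero.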

Exercise 19.1. Determine the kernel and image, and the dimensions of these, for the following matrices. (A line is described by giving a nonzero vector on the line, and a plane can be described by giving two nonproportional vectors in the plane.)

    (a) ( 1 1 )    (b) ( 3 0 )    (c) ( 1 1 )    (d) ( 1 1 1 )
        ( 2 2 )                       ( 2 2 )        ( 0 1 1 )
                                      ( 3 3 )        ( 0 0 1 )

    (e)            (f) ( 2 2 )    (g) ( 1 2 3 )   (h) ( 1 0 0 0 )
                       ( 2 3 )        ( 4 5 6 )       ( 0 1 0 0 )
                                      ( 7 8 9 )       ( 0 0 1 0 )

Exercise 19.2. Let

    A = ( a b )
        ( c d ).

(a) Suppose A is the zero matrix. What are ker A and im A?
(b) Suppose A is not 0, but det A = 0. What are ker A and im A?
(c) Suppose det A is not 0. What are ker A and im A?

Exercise 19.3. Let u = (u_1, u_2) and v = (v_1, v_2) be vectors in R^2, and let

    A = ( 1 0 u_1 v_1 )
        ( 0 1 u_2 v_2 ).

(This is the sort of matrix you used to map the hypercube in R^4 into R^2.)
(a) Describe the kernel of A in terms of the vectors u and v.
(b) What is the image of A?

Exercise 19.4. A 2 x 3 matrix

    A = ( a_11 a_12 a_13 )
        ( a_21 a_22 a_23 )

has three 2 x 2 subdeterminants:

    det ( a_11 a_12 ),   det ( a_11 a_13 ),   det ( a_12 a_13 ).
        ( a_21 a_22 )        ( a_21 a_23 )        ( a_22 a_23 )

Assume A is nonzero. Explain how these subdeterminants determine the dimensions of the kernel and image of A. (Study the 2 x 3 analysis given above.)

Exercise 19.5. Explain how to use subdeterminants to determine the dimensions of the image and kernel of a 2 x m matrix

    ( a_11 a_12 ... a_1m )
    ( a_21 a_22 ... a_2m ).

Exercise 19.6. Make the tables of dimensions of kernels and images of 3 x 4 and 4 x 3 matrices, and find a matrix for each row of each table.
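For checking answers to these exercises, a small helper (a sketch assuming NumPy, not part of the original notes) reads both dimensions off the rank, using the Kernel-Image Theorem:

```python
import numpy as np

# dim(im A) is the rank of A; by the Kernel-Image Theorem,
# dim(ker A) = m - rank for an n x m matrix.
def kernel_image_dims(A):
    A = np.asarray(A, dtype=float)
    m = A.shape[1]
    dim_im = int(np.linalg.matrix_rank(A))
    return m - dim_im, dim_im

print(kernel_image_dims([[1, 1],
                         [2, 2]]))        # (1, 1): kernel and image are lines
print(kernel_image_dims([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 1, 0]]))  # (1, 3)
```

This only reports the dimensions; the exercises also ask for explicit spanning vectors, which the rank alone does not give.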