MATH2210 Notebook 1, Fall Semester 2016/2017



MATH2210 Notebook 1, Fall Semester 2016/2017, prepared by Professor Jenny Baglivo. Copyright 2009-2017 by Jenny A. Baglivo. All Rights Reserved.

Contents

1 MATH2210 Notebook 1
  1.1 Solving Systems of Linear Equations
      1.1.1 Linear Equations and Linear Systems
      1.1.2 Solutions and Solution Sets
      1.1.3 Elementary Row Operations, Forward and Backward Pass
      1.1.4 Existence and Uniqueness
      1.1.5 Geometric Interpretations
  1.2 Row Reductions and Echelon Forms
      1.2.1 Echelon Form and Reduced Echelon Form
      1.2.2 Pivot Positions, Pivot Columns and Pivots
      1.2.3 Basic and Free Variables; Solutions to Linear Systems
  1.3 Vectors and Vector Equations
      1.3.1 Vectors in m-dimensional Space
      1.3.2 Vector Sum, Scalar Product, and Linear Combinations of Vectors
      1.3.3 Linear Combinations and Spans
      1.3.4 Vector Equations, Spans and Solution Sets
  1.4 Matrix Equations and Solution Sets
      1.4.1 Coefficient Matrix and Matrix-Vector Product
      1.4.2 Matrix Equation, Span and Consistency
      1.4.3 Solution Sets and Homogeneous Systems
      1.4.4 Solution Sets and Nonhomogeneous Systems
  1.5 Linearly Independent and Linearly Dependent Sets
  1.6 Linear and Matrix Transformations

      1.6.1 Transformations, Images and Pre-Images
      1.6.2 One-To-One and Onto Transformations
      1.6.3 Linear Transformations
      1.6.4 Linear and Matrix Transformations

1 MATH2210 Notebook 1

This notebook is concerned with introductory linear algebra concepts. The notes correspond to material in Chapter 1 of the Lay textbook.

1.1 Solving Systems of Linear Equations

In linear algebra we study linear equations and systems of linear equations. Linear algebra methods are used to solve problems in areas as diverse as ecology (e.g. population projections), economics (e.g. input-output analysis), engineering (e.g. analysis of air flow), and computer graphics (e.g. perspective drawing). Further, linear methods are fundamental in the statistical analysis of multivariate data.

1.1.1 Linear Equations and Linear Systems

A linear equation in the variables x1, x2, ..., xn is an equation of the form

    a1 x1 + a2 x2 + ... + an xn = b,

where a1, a2, ..., an and b are constants. A system of linear equations in the variables x1, x2, ..., xn is a collection of one or more linear equations in these variables. For example,

      x1 - 2x2 +  x3 =  0
           2x2 - 8x3 =  8
    -4x1 + 5x2 + 9x3 = -9

is a system of 3 linear equations in the 3 unknowns x1, x2, x3 (a "3-by-3 system").

1.1.2 Solutions and Solution Sets

A solution of a linear system is a list (s1, s2, ..., sn) of numbers that makes each equation in the system true when si is substituted for xi for i = 1, 2, ..., n. The list (29, 16, 3) is a solution to the system above. To check this, note that

      (29) - 2(16) +  (3) =  0
            2(16) - 8(3) =  8
    -4(29) + 5(16) + 9(3) = -9

The solution set of a linear system is the collection of all possible solutions of the system. For the example above, (29, 16, 3) is the unique solution. Two linear systems are equivalent if each has the same solution set.
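As an aside (not part of the notes), the check above can be carried out numerically with Python and numpy: substituting the claimed solution into the left-hand side should reproduce the right-hand side values.

```python
import numpy as np

# Coefficient matrix and right-hand side of the 3-by-3 system above.
A = np.array([[ 1.0, -2.0,  1.0],
              [ 0.0,  2.0, -8.0],
              [-4.0,  5.0,  9.0]])
b = np.array([0.0, 8.0, -9.0])

s = np.array([29.0, 16.0, 3.0])   # the claimed solution (29, 16, 3)
print(A @ s)                      # substituting s reproduces b

# numpy's solver recovers the same (unique) solution.
assert np.allclose(A @ s, b)
assert np.allclose(np.linalg.solve(A, b), s)
```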

1.1.3 Elementary Row Operations, Forward and Backward Pass

The strategy for solving a linear system is to replace the system with an equivalent one that is easier to solve. The equivalent system is obtained using elementary row operations:

1. (Replacement) Add to one row a multiple of another row,
2. (Interchange) Interchange two rows,
3. (Scaling) Multiply all entries in a row by a nonzero constant,

where each row corresponds to an equation in the system.

Example 1. We work out the strategy using the 3-by-3 system given above, where a 3-by-4 augmented matrix of the coefficients and right hand side values follows our progress toward a solution.

(1) R1:   x1 - 2x2 +  x3 =  0        [  1  -2   1   0 ]
    R2:        2x2 - 8x3 =  8        [  0   2  -8   8 ]
    R3: -4x1 + 5x2 + 9x3 = -9        [ -4   5   9  -9 ]

(2) R3 + 4R1 replaces R3:
      x1 - 2x2 +   x3 =  0           [  1  -2   1   0 ]
           2x2 -  8x3 =  8           [  0   2  -8   8 ]
          -3x2 + 13x3 = -9           [  0  -3  13  -9 ]

(3) (1/2)R2 replaces R2:
      x1 - 2x2 +   x3 =  0           [  1  -2   1   0 ]
            x2 -  4x3 =  4           [  0   1  -4   4 ]
          -3x2 + 13x3 = -9           [  0  -3  13  -9 ]

(4) R3 + 3R2 replaces R3:
      x1 - 2x2 + x3 = 0              [  1  -2   1   0 ]
            x2 - 4x3 = 4             [  0   1  -4   4 ]
                  x3 = 3             [  0   0   1   3 ]

(5) R1 - R3 replaces R1, and R2 + 4R3 replaces R2:
      x1 - 2x2 = -3                  [  1  -2   0  -3 ]
            x2 = 16                  [  0   1   0  16 ]
            x3 =  3                  [  0   0   1   3 ]

(6) R1 + 2R2 replaces R1:
      x1 = 29                        [  1   0   0  29 ]
      x2 = 16                        [  0   1   0  16 ]
      x3 =  3                        [  0   0   1   3 ]

Thus, the solution is x1 = 29, x2 = 16, and x3 = 3.

Forward and backward phases. In the forward phase of the row reduction process (systems (1) through (4)), R1 is used to eliminate x1 from R2 and R3; then R2 is used to eliminate x2 from R3. In the backward phase of the row reduction process (systems (5) and (6)), R3 is used to eliminate x3 from R1 and R2; then R2 is used to eliminate x2 from R1.
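The three elementary row operations can be sketched in code. The function below is an illustrative implementation (not from the notes): it merges the forward and backward passes by clearing each pivot column both below and above the pivot, and it chooses the largest available pivot in each column for numerical stability.

```python
import numpy as np

def reduce_augmented(M):
    """Row reduce an augmented matrix using the three elementary row
    operations: interchange, scaling, and replacement (a sketch that
    clears above and below each pivot in one sweep)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):                  # sweep the coefficient columns
        p = r + np.argmax(np.abs(M[r:, c]))    # candidate pivot row
        if np.isclose(M[p, c], 0.0):
            continue                           # not a pivot column
        M[[r, p]] = M[[p, r]]                  # interchange
        M[r] = M[r] / M[r, c]                  # scaling: leading entry -> 1
        for i in range(rows):                  # replacement: clear the column
            if i != r:
                M[i] = M[i] - M[i, c] * M[r]
        r += 1
        if r == rows:
            break
    return M

aug = np.array([[ 1, -2,  1,  0],
                [ 0,  2, -8,  8],
                [-4,  5,  9, -9]])
print(reduce_augmented(aug))   # last column gives x1 = 29, x2 = 16, x3 = 3
```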

1.1.4 Existence and Uniqueness

Two fundamental questions about linear systems are:

1. Does a solution exist?
2. If a solution exists, is it unique?

If a linear system has one or more solutions, then it is said to be consistent; otherwise, it is said to be inconsistent. A linear system can be inconsistent, or have a unique solution, or have infinitely many solutions. Note that in the example above, we knew that the system was consistent once we reached equivalent system (4); the remaining steps allowed us to find the unique solution.

Example 2. The following 3-by-3 linear system has an infinite number of solutions:

    2x1 +  4x2 + 2x3 = 16
    3x1 +  7x2 + 2x3 = 29
    4x1 + 12x2       = 52

To demonstrate this, consider the sequence of equivalent augmented matrices:

    [ 2   4   2  16 ]    [ 1  2   1   8 ]    [ 1  2   1  8 ]    [ 1  0   3  -2 ]
    [ 3   7   2  29 ] ~  [ 0  1  -1   5 ] ~  [ 0  1  -1  5 ] ~  [ 0  1  -1   5 ]
    [ 4  12   0  52 ]    [ 0  4  -4  20 ]    [ 0  0   0  0 ]    [ 0  0   0   0 ]

The last matrix corresponds to x1 + 3x3 = -2, x2 - x3 = 5, and 0 = 0. Thus, (-2 - 3x3, 5 + x3, x3) is a solution for any value of x3.

Footnote on Example 2: The elementary row operations used in this example were
    (1/2)R1 replaces R1;  R2 - 3R1 replaces R2;  R3 - 4R1 replaces R3;
    R3 - 4R2 replaces R3;  R1 - 2R2 replaces R1.

Example 3. The following 3-by-3 linear system is inconsistent:

          3x2 - 6x3 =  8
     x1 - 2x2 + 3x3 = -1
    5x1 - 7x2 + 9x3 =  0

To demonstrate this, consider the sequence of equivalent augmented matrices:

    [ 0   3  -6   8 ]    [ 1  -2   3  -1 ]    [ 1  -2   3  -1 ]    [ 1  -2   3  -1 ]
    [ 1  -2   3  -1 ] ~  [ 0   3  -6   8 ] ~  [ 0   3  -6   8 ] ~  [ 0   3  -6   8 ]
    [ 5  -7   9   0 ]    [ 5  -7   9   0 ]    [ 0   3  -6   5 ]    [ 0   0   0  -3 ]

The last matrix corresponds to x1 - 2x2 + 3x3 = -1, 3x2 - 6x3 = 8, and 0 = -3. Since 0 is not equal to -3, the system has no solution.

Footnote on Example 3: The elementary row operations used in this example were
    interchange R1 and R2;  R3 - 5R1 replaces R3;  R3 - R2 replaces R3.

Example 4. The following 3-by-4 system has an infinite number of solutions:

      x1 +  x2 -  x3        =   9
      x1 + 2x2 - 3x3 - 4x4  =   9
    -3x1 - 2x2 + 2x3 - 3x4  = -23

To demonstrate this, consider the sequence of equivalent augmented matrices:

    [  1   1  -1   0    9 ]    [ 1  1  -1   0  9 ]    [ 1  1  -1  0  9 ]    [ 1  0  0   3  5 ]
    [  1   2  -3  -4    9 ] ~  [ 0  1  -2  -4  0 ] ~  [ 0  1  -2 -4  0 ] ~  [ 0  1  0  -2  8 ]
    [ -3  -2   2  -3  -23 ]    [ 0  1  -1  -3  4 ]    [ 0  0   1  1  4 ]    [ 0  0  1   1  4 ]

The last matrix corresponds to x1 + 3x4 = 5, x2 - 2x4 = 8, and x3 + x4 = 4. Thus, (5 - 3x4, 8 + 2x4, 4 - x4, x4) is a solution for any value of x4.

Footnote on Example 4: The elementary row operations used in this example were
    R2 - R1 replaces R2;  R3 + 3R1 replaces R3;  R3 - R2 replaces R3;
    R2 + 2R3 replaces R2;  R1 + R3 replaces R1;  R1 - R2 replaces R1.
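The existence question in these examples can also be settled by comparing ranks, a computational aside that is not part of the notes: the system is inconsistent exactly when the augmented matrix has a higher rank than the coefficient matrix, and a consistent system has a unique solution exactly when the rank equals the number of unknowns.

```python
import numpy as np

def classify(A, b):
    """Classify a linear system Ax = b by comparing matrix ranks
    (a sketch of the existence and uniqueness questions)."""
    ra = np.linalg.matrix_rank(A)
    rab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if ra < rab:
        return "inconsistent"
    return "unique solution" if ra == A.shape[1] else "infinitely many solutions"

# Example 2 (infinitely many solutions) and Example 3 (inconsistent):
A2 = np.array([[2, 4, 2], [3, 7, 2], [4, 12, 0]]); b2 = np.array([16, 29, 52])
A3 = np.array([[0, 3, -6], [1, -2, 3], [5, -7, 9]]); b3 = np.array([8, -1, 0])
print(classify(A2, b2))   # infinitely many solutions
print(classify(A3, b3))   # inconsistent
```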

Example 5. The following 3-by-2 system is inconsistent:

    2x1 +  4x2 =  26
    3x1 - 15x2 = 144
    3x1 -  2x2 =  55

To demonstrate this, consider the sequence of equivalent augmented matrices:

    [ 2    4   26 ]    [ 1    2   13 ]    [ 1   2  13 ]    [ 1  2   13 ]
    [ 3  -15  144 ] ~  [ 0  -21  105 ] ~  [ 0   1  -5 ] ~  [ 0  1   -5 ]
    [ 3   -2   55 ]    [ 0   -8   16 ]    [ 0  -8  16 ]    [ 0  0  -24 ]

The last matrix corresponds to x1 + 2x2 = 13, x2 = -5, and 0 = -24. Since 0 is not equal to -24, the system has no solution.

Footnote on Example 5: The elementary row operations used in this example were
    (1/2)R1 replaces R1;  R2 - 3R1 replaces R2;  R3 - 3R1 replaces R3;
    (-1/21)R2 replaces R2;  R3 + 8R2 replaces R3.

1.1.5 Geometric Interpretations

Linear equations in two variables correspond to lines in the plane. Linear equations in three variables correspond to planes in 3-space. Thus, solution sets for m-by-2 and m-by-3 systems have natural geometric interpretations.

Example 6. Consider the following three 2-by-2 systems:

    (1)   x1 +  x2 = 5      (2)   x1 + 2x2 = 3      (3)   x1 +  x2 = 3
          x1 -  x2 = 0           2x1 + 4x2 = 8           2x1 + 2x2 = 6

(Figure: graphs of the three pairs of lines.)

1. System (1) has the unique solution (2.5, 2.5), the point of intersection of the two lines.
2. System (2) is inconsistent; the lines are unequal and parallel.
3. System (3) has an infinite number of solutions: (3 - x2, x2) for any value of x2, since the two equations graph to the same line.

Example 1, revisited. Each pair of planes intersects in a line. The lines are quite close in 3-space. In addition, the three planes have a single point of intersection, (29, 16, 3).

1.2 Row Reductions and Echelon Forms

1.2.1 Echelon Form and Reduced Echelon Form

A matrix is in (row) echelon form when

1. All nonzero rows are above any row of zeros.
2. Each leading entry (that is, leftmost nonzero entry) in a row is in a column to the right of the leading entries of the rows above it.
3. All entries in a column below a leading entry are zero.

An echelon matrix is in reduced form when (in addition)

4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.

Example 1. The 6-row matrix shown on the left below is in echelon form and the matrix shown on the right is in reduced echelon form:

    [ #  *  *  *  *  *  *  *  *  * ]      [ 1  *  *  0  0  *  *  0  0  * ]
    [ 0  0  0  #  *  *  *  *  *  * ]      [ 0  0  0  1  0  *  *  0  0  * ]
    [ 0  0  0  0  #  *  *  *  *  * ]      [ 0  0  0  0  1  *  *  0  0  * ]
    [ 0  0  0  0  0  0  0  #  *  * ]      [ 0  0  0  0  0  0  0  1  0  * ]
    [ 0  0  0  0  0  0  0  0  #  * ]      [ 0  0  0  0  0  0  0  0  1  * ]
    [ 0  0  0  0  0  0  0  0  0  0 ]      [ 0  0  0  0  0  0  0  0  0  0 ]

In the left matrix, the symbol # represents a leading nonzero entry and the symbol * represents any number (either zero or nonzero). In the right matrix, each leading entry has been converted to a 1, and each 1 is the only nonzero entry in its column.

Starting with the augmented matrix of a system of linear equations, the forward phase of the row reduction process will produce a matrix in echelon form. Continuing the process through the backward phase, we get a reduced echelon form. The following theorem explains why reduced echelon form matrices are important.

Theorem (Uniqueness Theorem). Each matrix is row-equivalent (that is, equivalent using elementary row operations) to one and only one reduced echelon form matrix. Further, we can read the solutions to the original system from the reduced echelon form of the augmented matrix.

1.2.2 Pivot Positions, Pivot Columns and Pivots

1. A pivot position is a position of a leading entry in an echelon form matrix.
2. A column that contains a pivot position is called a pivot column.
3. A pivot is a nonzero number that either is used in a pivot position to create zeros or is changed into a leading 1, which in turn is used to create zeros.

In general, there is no more than one pivot in any row and no more than one pivot in any column.

Example 1, continued. In the left matrix in Example 1, each # is located at a pivot position, and columns 1, 4, 5, 8, and 9 are pivot columns.

Example 2. Consider the following 4-by-5 matrix:

    [  0  -3  -6   4   9 ]
    [ -1  -2  -1   3   1 ]
    [ -2  -3   0   3  -1 ]
    [  1   4   5  -9  -7 ]

We first interchange R1 and R4:

    [  1   4   5  -9  -7 ]
    [ -1  -2  -1   3   1 ]
    [ -2  -3   0   3  -1 ]
    [  0  -3  -6   4   9 ]

Column 1 is a pivot column, and the 1 in the upper left corner is used to change the values in rows 2 and 3 to 0 (R2 + R1 replaces R2; R3 + 2R1 replaces R3):

    [ 1   4   5   -9   -7 ]
    [ 0   2   4   -6   -6 ]
    [ 0   5  10  -15  -15 ]
    [ 0  -3  -6    4    9 ]

Column 2 is a pivot column. We could interchange any of the remaining three rows to identify a pivot, but will just use the 2 in the current second row to change the values in rows 3 and 4 to 0 (R3 - (5/2)R2 replaces R3; R4 + (3/2)R2 replaces R4):

    [ 1   4   5  -9  -7 ]
    [ 0   2   4  -6  -6 ]
    [ 0   0   0   0   0 ]
    [ 0   0   0  -5   0 ]

Column 3 is not a pivot column. Column 4 is a pivot column, revealed after we interchange R3 and R4:

    [ 1   4   5  -9  -7 ]
    [ 0   2   4  -6  -6 ]
    [ 0   0   0  -5   0 ]
    [ 0   0   0   0   0 ]

The matrix is now in echelon form. Columns 1, 2, and 4 are pivot columns. Working with rows 3 and 2 (in that order) to clear columns 4 and 2, we get the following reduced echelon form matrix:

    [ 1  0  -3  0   5 ]
    [ 0  1   2  0  -3 ]
    [ 0  0   0  1   0 ]
    [ 0  0   0  0   0 ]

Note that if the original matrix was the augmented matrix of a 4-by-4 system of linear equations in x1, x2, x3, x4, then the last matrix corresponds to the equivalent linear system x1 - 3x3 = 5, x2 + 2x3 = -3, x4 = 0, 0 = 0. Thus, (5 + 3x3, -3 - 2x3, x3, 0) is a solution for any value of x3.

Footnote on Example 2: The last elementary row operations were
    (-1/5)R3 replaces R3;  (1/2)R2 replaces R2;  R2 + 3R3 replaces R2;
    R1 + 9R3 replaces R1;  R1 - 4R2 replaces R1.
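The Uniqueness Theorem says this reduced echelon form is the only one row-equivalent to the original matrix, so a computer algebra system must reproduce it exactly. As an aside, sympy's `Matrix.rref` does the whole reduction in exact arithmetic and also reports the pivot columns (0-based):

```python
from sympy import Matrix

# The 4-by-5 matrix of Example 2, entered exactly.
M = Matrix([[ 0, -3, -6,  4,  9],
            [-1, -2, -1,  3,  1],
            [-2, -3,  0,  3, -1],
            [ 1,  4,  5, -9, -7]])

R, pivots = M.rref()
print(R)        # the unique reduced echelon form
print(pivots)   # (0, 1, 3): columns 1, 2, and 4 in 1-based counting
```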

Example 3. Consider the following 3-by-6 matrix:

    [ 0   3  -6   6  4  -5 ]
    [ 3  -7   8  -5  8   9 ]
    [ 3  -9  12  -9  6  15 ]

We first interchange R1 and R3:

    [ 3  -9  12  -9  6  15 ]
    [ 3  -7   8  -5  8   9 ]
    [ 0   3  -6   6  4  -5 ]

Column 1 is a pivot column and we use the 3 in the upper left corner to convert the 3 in row 2 to 0 (R2 - R1 replaces R2). In addition, R1 is replaced by (1/3)R1 for simplicity:

    [ 1  -3   4  -3  2   5 ]
    [ 0   2  -4   4  2  -6 ]
    [ 0   3  -6   6  4  -5 ]

Column 2 is a pivot column. We could use the current second row, or interchange the second and third rows and use the new row two for the pivot. For simplicity, we use the 2 in the second row to convert the 3 in the third row to a zero (R3 - (3/2)R2 replaces R3). In addition, R2 is replaced by (1/2)R2 for simplicity:

    [ 1  -3   4  -3  2   5 ]
    [ 0   1  -2   2  1  -3 ]
    [ 0   0   0   0  1   4 ]

This matrix is in echelon form; columns 1, 2, and 5 are pivot columns. Working with rows 3 and 2 (in that order) to clear columns 5 and 2, we get the following reduced echelon form matrix:

    [ 1  0  -2  3  0  -24 ]
    [ 0  1  -2  2  0   -7 ]
    [ 0  0   0  0  1    4 ]

Note that if the original matrix was the augmented matrix of a 3-by-5 system of linear equations in x1, x2, x3, x4, x5, then the last matrix corresponds to the equivalent linear system x1 - 2x3 + 3x4 = -24, x2 - 2x3 + 2x4 = -7, x5 = 4. Thus, (-24 + 2x3 - 3x4, -7 + 2x3 - 2x4, x3, x4, 4) is a solution for any values of x3, x4.

Footnote on Example 3: The last elementary row operations were
    R2 - R3 replaces R2;  R1 - 2R3 replaces R1;  R1 + 3R2 replaces R1.

1.2.3 Basic and Free Variables; Solutions to Linear Systems

When solving linear systems,

1. A basic variable is any variable that corresponds to a pivot column in the augmented matrix of the system, and a free variable is any non-basic variable.
2. From the equivalent system produced using the reduced echelon form, we solve each equation for the basic variable in terms of the free variables (if any) in the equation.
3. If there is at least one free variable, then the original linear system has an infinite number of solutions.
4. If an echelon form of the augmented matrix has a row of the form [ 0 0 ... 0 b ] where b is nonzero, then the system is inconsistent. (There is no need to continue to find the reduced echelon form.)

Example 2, continued. For the 4-by-4 system in Example 2, the basic variables are ________________, and the free variables are ________________.

Example 3, continued. For the 3-by-5 system in Example 3, the basic variables are ________________, and the free variables are ________________.

1.3 Vectors and Vector Equations

1.3.1 Vectors in m-dimensional Space

In linear algebra, we let R^m be the set of m-by-1 matrices of real numbers,

    R^m = { [v1; v2; ...; vm] : each vi in R },

and we let v = [v1; v2; ...; vm] in R^m be a specific element.

1. v is known as a column vector, or simply a vector.
2. The value vi is the i-th component of v.
3. The zero vector, O, is the vector all of whose components equal zero.
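Since basic variables are exactly the variables of the pivot columns, they can be read off mechanically from the pivot report of a row reduction. A short sympy sketch (an aside, using the augmented matrix of Example 3 above):

```python
from sympy import Matrix

# Augmented matrix of the 3-by-5 system in Example 3.
M = Matrix([[0, 3, -6, 6, 4, -5],
            [3, -7, 8, -5, 8, 9],
            [3, -9, 12, -9, 6, 15]])

_, pivots = M.rref()
n = M.cols - 1                          # number of unknowns (drop the b column)
basic = [f"x{j + 1}" for j in pivots]
free = [f"x{j + 1}" for j in range(n) if j not in pivots]
print(basic, free)   # basic variables x1, x2, x5; free variables x3, x4
```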

1.3.2 Vector Sum, Scalar Product, and Linear Combinations of Vectors

Suppose v, w in R^m and c, d in R.

1. The vector sum v + w is the vector obtained by componentwise addition. That is, v + w is the vector whose i-th component is vi + wi for each i.
2. The scalar product cv is the vector obtained by multiplying each component by c. That is, cv is the vector whose i-th component is c(vi) for each i.
3. The vector cv + dw is known as a linear combination of the vectors v and w.

Example in 2-space. Let v1 = [1; 2] and v2 = [1; -1] be vectors in R^2. These vectors can be represented as points in the plane, or as directed line segments whose initial point is the origin. In the plot,

- Vector v1 is represented as the directed line segment from (0, 0) to (1, 2), and points on the line y = 2x represent the scalar multiples of v1: cv1, where c in R.
- Vector v2 is represented as the directed line segment from (0, 0) to (1, -1), and points on the line y = -x represent the scalar multiples of v2: dv2, where d in R.

(Figure: the two vectors and a grid of their linear combinations.)

Linear combinations of the two vectors, cv1 + dv2, are in one-to-one correspondence with R^2. The plot shows a grid of linear combinations where either c or d is an integer.

Problem. Write w = [7; 5] as a linear combination of v1 and v2.
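The three operations above translate directly to numpy arrays. A quick illustration (an aside), using v1 = (1, 2) and v2 = (1, -1) from the 2-space example as read here:

```python
import numpy as np

v1 = np.array([1, 2])
v2 = np.array([1, -1])

# Vector sum: componentwise addition.
assert (v1 + v2 == np.array([2, 1])).all()

# Scalar product: every component is multiplied by the scalar.
assert (3 * v1 == np.array([3, 6])).all()

# A linear combination c*v1 + d*v2; with c = 4 and d = 3 it lands on (7, 5),
# one way to check an answer to the problem above.
w = 4 * v1 + 3 * v2
print(w)
```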

Example in 3-space. Let v1 and v2 be the vectors in R^3 shown in the plot. The thick lines represent the scalar multiples of the two vectors: cv1 for c in R, and dv2 for d in R. The plane shown represents the linear combinations of the two vectors: cv1 + dv2, for c, d in R. The grid represents the linear combinations where either c or d is an integer.

(Figure: the two vectors, the plane they span, and the grid of combinations.)

Problem. Using v1 and v2 from above:

(a) Suppose that [x; y; z] = cv1 + dv2 for some c, d in R. Find an equation relating x, y, z.

(b) Can w = [____; 3; 0] be written as a linear combination of v1 and v2? Why?

1.3.3 Linear Combinations and Spans

If v1, v2, ..., vn in R^m and c1, c2, ..., cn in R, then

    w = c1 v1 + c2 v2 + ... + cn vn

is a linear combination of the vectors v1, v2, ..., vn. The span of the set {v1, v2, ..., vn} is the collection of all linear combinations of the vectors v1, v2, ..., vn:

    Span{v1, v2, ..., vn} = { c1 v1 + c2 v2 + ... + cn vn : ci in R for each i },

a subset of R^m. The span of a set of vectors always contains the zero vector O (let each ci = 0). For example,

1. Span{ [1; 2] } = { c[1; 2] : c in R } corresponds to the line y = 2x in R^2.

2. Span{ [1; 2], [1; -1] } = { c[1; 2] + d[1; -1] : c, d in R } = R^2. That is, every vector in R^2 can be written as a linear combination of [1; 2] and [1; -1].

3. Span{ [1; 2], [1; -1], [7; 5] } = { c[1; 2] + d[1; -1] + e[7; 5] : c, d, e in R } = R^2. That is, every vector in R^2 can be written as a linear combination of the three listed vectors. (In fact, the first two vectors were already sufficient to represent each v in R^2.)

4. Span{ v1, v2 } = { c v1 + d v2 : c, d in R }, for the vectors v1, v2 in R^3 of the last example, corresponds to a plane through the origin in R^3.

5. Span{ v1, v2, v3 } = Span{ v1, v2 } when the third vector in the list is a linear combination of the first two.
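Whether a given vector lies in a span is a solvability question: w is in Span{v1, v2} exactly when the system with columns v1, v2 and right-hand side w is consistent. A numerical sketch (an aside; the near-zero residual of a least-squares solve is used as the membership test):

```python
import numpy as np

# Columns are v1 = (1, 2) and v2 = (1, -1) from the examples above.
V = np.array([[1.0, 1.0],
              [2.0, -1.0]])
w = np.array([7.0, 5.0])

# Solve V c = w in the least-squares sense; an (essentially) exact fit
# means w lies in Span{v1, v2}, and c holds the weights.
c, *_ = np.linalg.lstsq(V, w, rcond=None)
print(c)                        # the weights (4, 3)
assert np.allclose(V @ c, w)    # membership confirmed
```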

Exercise. Let v1 = [____], v2 = [____] and v3 = [____] be vectors in R^3. Demonstrate that

    Span{v1, v2, v3} = { c v1 + d v2 + e v3 : c, d, e in R } = R^3.

That is, demonstrate that every vector in R^3 can be written as a linear combination of the three vectors v1, v2, v3.

1.3.4 Vector Equations, Spans and Solution Sets

Consider an m-by-n system of equations in the variables x1, x2, ..., xn, where each equation is of the form

    a_i1 x1 + a_i2 x2 + ... + a_in xn = b_i    for i = 1, 2, ..., m.

Let a_j be the vector of coefficients of x_j, and b be the vector of right hand side values. The m-by-n system can be rewritten as a vector equation:

    x1 a1 + x2 a2 + ... + xn an = b.

The system is consistent if and only if b is in Span{a1, a2, ..., an}.

For example, the linear system on the left below becomes the vector equation on the right:

    -x1 + 5x2 - 3x3 = 4
     x1 - 4x2 +  x3 = 3     ==>     x1 [-1; 1; 2] + x2 [5; -4; -7] + x3 [-3; 1; 0] = [4; 3; 2].
    2x1 - 7x2       = 2

Problem. Starting with the vector equation x1 a1 + x2 a2 + x3 a3 = b shown on the right above,

(a) Determine if the system is consistent. That is, determine if b is in Span{a1, a2, a3}.

(b) Suppose that [x; y; z] = c a1 + d a2 + e a3 for some c, d, e. Find an equation relating x, y, z.

(c) Find the reduced echelon form of the matrix whose columns are the a_j's: A = [ a1 a2 a3 ].

(d) What do we know about the set Span{a1, a2, a3}? Be as complete as possible.

1.4 Matrix Equations and Solution Sets

As above, an m-by-n system of linear equations, a_i1 x1 + a_i2 x2 + ... + a_in xn = b_i for i = 1, 2, ..., m, leads to a vector equation, x1 a1 + x2 a2 + ... + xn an = b, where a_j is the vector of coefficients of x_j, and b is the vector of right hand side values. Further, the system is consistent if and only if b is in Span{a1, ..., an}.

1.4.1 Coefficient Matrix and Matrix-Vector Product

1. The coefficient matrix A of an m-by-n system of linear equations is the m-by-n matrix whose columns are the a_j's: A = [ a1 a2 ... an ].

2. If A is an m-by-n matrix and v in R^n, then the matrix-vector product of A with v is defined to be the following linear combination of the columns of A:

    Av = [ a1 a2 ... an ] [v1; v2; ...; vn] = v1 a1 + v2 a2 + ... + vn an.

For example, if

    A = [ -1   5  -3 ]                [ 3 ]
        [  1  -4   1 ]     and    v = [ 1 ]
        [  2  -7   0 ]                [ 1 ]

then

    Av = 3 [-1; 1; 2] + 1 [5; -4; -7] + 1 [-3; 1; 0] = [-1; 0; -1].

Row-column definition. The definition of the matrix-vector product as a linear combination of column vectors is equivalent to the usual row-column definition of the product of two matrices. For example,

    Av = [ -1(3) + 5(1) - 3(1) ]   [ -1 ]
         [  1(3) - 4(1) + 1(1) ] = [  0 ]
         [  2(3) - 7(1) + 0(1) ]   [ -1 ]

Linearity property. The matrix-vector product of a linear combination of k vectors in R^n is equal to that linear combination of the matrix-vector products:

    A (c1 v1 + c2 v2 + ... + ck vk) = c1 A v1 + c2 A v2 + ... + ck A vk

for all ci in R and vi in R^n. That is, multiplication by A distributes over addition, and we can factor out constants.
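The equivalence of the two definitions can be checked directly in numpy (an aside, using the matrix and vector of the example above): the column-combination form and the built-in row-column product `A @ v` must agree.

```python
import numpy as np

A = np.array([[-1,  5, -3],
              [ 1, -4,  1],
              [ 2, -7,  0]])
v = np.array([3, 1, 1])

# Column definition: Av is the linear combination v1*a1 + v2*a2 + v3*a3...
col_combo = 3 * A[:, 0] + 1 * A[:, 1] + 1 * A[:, 2]

# ...and it agrees with the usual row-column product.
assert (col_combo == A @ v).all()
print(A @ v)   # (-1, 0, -1)
```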

1.4.2 Matrix Equation, Span and Consistency

Let A = [ a1 a2 ... an ] be the coefficient matrix of an m-by-n system of linear equations, x in R^n be the vector of unknowns, and b in R^m be the vector of right hand side values. Then

1. The matrix equation of the system is the equation Ax = b.
2. If Ax = b is consistent for all b, then the matrix A is said to span R^m.

For example, the linear system on the left becomes the matrix equation Ax = b on the right:

    -x1 + 5x2 - 3x3 = 4           [ -1   5  -3 ] [ x1 ]   [ 4 ]
     x1 - 4x2 +  x3 = 3    ==>    [  1  -4   1 ] [ x2 ] = [ 3 ]
    2x1 - 7x2       = 2           [  2  -7   0 ] [ x3 ]   [ 2 ]

Further, since the system is inconsistent, A does not span R^3.

The following theorem gives criteria for determining when A spans R^m.

Theorem (Consistency Theorem). Let A = [ a1 a2 ... an ] be an m-by-n coefficient matrix, and x in R^n be a vector of unknowns. The following statements are equivalent:

(a) The equation Ax = b has a solution for all b in R^m.
(b) Each b in R^m is a linear combination of the coefficient vectors a_j.
(c) The columns of the matrix A span R^m. That is, Span{a1, a2, ..., an} = R^m.
(d) A has a pivot position in each row.

Problem 1. Let A = [ ________ ]. (1) Determine if Ax = b is consistent for all b in R^3. (2) If it is not consistent for all b, find an equation characterizing when Ax = b is consistent.
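Criterion (d) of the Consistency Theorem has a direct computational form (an aside, not part of the notes): A has a pivot in every row exactly when its rank equals the number of rows m.

```python
import numpy as np

def spans_Rm(A):
    """A spans R^m exactly when A has a pivot position in each row,
    i.e. when rank(A) equals the number of rows m."""
    return np.linalg.matrix_rank(A) == A.shape[0]

# The coefficient matrix of the example above has rank 2 < 3,
# so it does not span R^3.
A = np.array([[-1,  5, -3],
              [ 1, -4,  1],
              [ 2, -7,  0]])
print(spans_Rm(A))          # False
assert not spans_Rm(A)
assert spans_Rm(np.eye(3))  # the 3-by-3 identity matrix does span R^3
```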

Problem 2. Let A = [ ________ ]. (1) Determine if Ax = b is consistent for all b in R^3. (2) If it is not consistent for all b, find an equation characterizing when Ax = b is consistent.

Problem 3. A matrix can have at most 1 pivot position in any row or any column. What, if anything, does this fact tell you about determining consistency for all b when m < n, m = n, m > n? Be as complete as possible in your explanation.

1.4.3 Solution Sets and Homogeneous Systems

Let Ax = b be the matrix equation of an m-by-n system of linear equations.

1. The solution set of the system is { x : Ax = b }, a subset of R^n. A solution set can be empty, contain one element, or contain an infinite number of elements.
2. If b = O (the zero vector), then the system is said to be homogeneous.

Homogeneous systems satisfy the following properties:

1. Every homogeneous system is consistent.
2. If v1, v2, ..., vk are solutions to the homogeneous system and c1, c2, ..., ck are constants, then the linear combination w = c1 v1 + c2 v2 + ... + ck vk is also a solution to the system.
3. The solution set of Ax = O can be written as the span of a certain set of vectors.

Problem 4. Demonstrate the first two properties of homogeneous systems.

Writing the solution set of Ax = O as a span. The technique for writing the solution set of Ax = O is illustrated using the following example. Consider the homogeneous system

    -x1 + 5x2 - 3x3 = 0           [ -1   5  -3 ] [ x1 ]   [ 0 ]
     x1 - 4x2 +  x3 = 0    ==>    [  1  -4   1 ] [ x2 ] = [ 0 ]
    2x1 - 7x2       = 0           [  2  -7   0 ] [ x3 ]   [ 0 ]

Since

    [ -1   5  -3  0 ]    [ 1  -5   3  0 ]    [ 1  0  -7  0 ]        x1 = 7x3
    [  1  -4   1  0 ] ~  [ 0   1  -2  0 ] ~  [ 0  1  -2  0 ]  ==>   x2 = 2x3
    [  2  -7   0  0 ]    [ 0   3  -6  0 ]    [ 0  0   0  0 ]        x3 free,

we can write the solutions to the homogeneous system as vectors satisfying

        [ x1 ]   [ 7x3 ]        [ 7 ]
    x = [ x2 ] = [ 2x3 ] = x3   [ 2 ]     where x3 is free.
        [ x3 ]   [  x3 ]        [ 1 ]

Thus, Span{ [7; 2; 1] } = { c [7; 2; 1] : c in R } is the solution set of the homogeneous system above.

Problem 5. In each case, write the solution set of Ax = O as a span of a set of vectors.

(a) Let A = [ ________ ].

(b) Let A = [ ________ ].
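sympy can produce the same spanning set (an aside): `Matrix.nullspace()` returns a list of vectors whose span is the solution set of Ax = O, matching the Span{(7, 2, 1)} found by row reduction above.

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system above.
A = Matrix([[-1,  5, -3],
            [ 1, -4,  1],
            [ 2, -7,  0]])

# nullspace() returns spanning vectors for {x : Ax = O}.
basis = A.nullspace()
print(basis)   # one spanning vector: (7, 2, 1)
assert basis == [Matrix([7, 2, 1])]
```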

(c) Let A = [ ________ ].

Writing the solution set of Ax = O in parametric vector form. If the solution set to Ax = O is Span{v1, v2, ..., vk}, then the solutions can be written in parametric vector form as follows:

    x = c1 v1 + c2 v2 + ... + ck vk,    where c1, c2, ..., ck in R,

and the ci's are the parameters in the general equation for the solution. For example, the parametric vector form for the solutions in Problem 5(b) is x = c1 v1, for c1 in R (only one parameter is needed); and the parametric vector form for the solutions in Problem 5(c) is x = c1 v1 + c2 v2, for c1, c2 in R (two parameters are needed).

1.4.4 Solution Sets and Nonhomogeneous Systems

A linear system is nonhomogeneous if it is not homogeneous. That is, a linear system is nonhomogeneous if its matrix equation is of the form Ax = b, where b is not O.

Theorem (Solution Sets). Suppose the equation Ax = b is consistent, and let p be a particular solution. Then the solution set of Ax = b is the set of all vectors of the form x = p + v_h, where v_h is a solution to Ax = O. That is, each solution can be written as the vector sum of a particular solution to the nonhomogeneous system and a solution to the homogeneous system with the same coefficient matrix.

For example, for a consistent system whose homogeneous solutions form the span of a single vector v, we can write x = p + v_h, where v_h = c v. (The solutions are written in parametric vector form, using c as the parameter.) We can visualize the solution sets to both systems as follows:

- Solid line: the solution set of the homogeneous system is the line c v, for c in R, which passes through the origin.
- Dashed line: the solution set of the original nonhomogeneous system is the line p + c v, for c in R.

Each point on the dashed line is obtained by adding p to a point on the solid line; the two lines are parallel.

Shift of a span. Since the set of solutions to Ax = O is the span of a set of vectors, the set of solutions to Ax = b can be described as the shift of a span, where each solution of the homogeneous system is shifted by the particular solution p.
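The theorem can be illustrated numerically (an aside): below, A and the homogeneous spanning vector come from the example of Section 1.4.3, the particular solution p is chosen for the illustration (it is not taken from the notes), and b is built as Ap so the system is consistent by construction. Every shift p + c*v_h then solves Ax = b.

```python
import numpy as np

A = np.array([[-1.0,  5.0, -3.0],
              [ 1.0, -4.0,  1.0],
              [ 2.0, -7.0,  0.0]])
vh = np.array([7.0, 2.0, 1.0])    # spans the solution set of Ax = O
p = np.array([3.0, 1.0, 1.0])     # a particular solution, chosen here
b = A @ p                         # makes Ax = b consistent by construction

# Every vector of the form p + c*vh solves the nonhomogeneous system.
for c in (-2.0, 0.0, 5.0):
    assert np.allclose(A @ (p + c * vh), b)
print("all shifts p + c*vh solve Ax = b")
```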

Problem 6. Let A = [ ________ ] and b = [ ________ ]. Write the solutions to Ax = b in parametric vector form. (That is, write the solutions in the form x = p + v_h, where p is a particular solution to the nonhomogeneous system and v_h is a general solution to the homogeneous system with the same coefficient matrix.)

1.5 Linearly Independent and Linearly Dependent Sets

The set {v1, v2, ..., vn}, a subset of R^m, is said to be linearly independent when

    c1 v1 + c2 v2 + ... + cn vn = O    only when each cj = 0.

Otherwise, the set is said to be linearly dependent. Note that if A = [ v1 v2 ... vn ] is the matrix whose columns are the vj's, then

    {v1, v2, ..., vn} is linearly independent  <==>  Ax = O has the trivial solution only.

Problem 1. In each case, determine if the set {v1, v2, ..., vn} is linearly independent or linearly dependent. If the set is linearly dependent, then find a linear dependence relationship among the vectors (that is, find constants cj, not all equal to zero, so that c1 v1 + ... + cn vn = O).

(a) Let {v1, v2, v3} = { [____], [____], [____] }, a subset of R^4.

(b) Let {v1, v2, v3} = { [____], [____], [____] }, a subset of R^3.

(c) Let {v_1, v_2, v_3} = { [ ], [ 0 ], [ 6 ] } ⊆ R^2.

Property of linear independence. The following theorem gives an important property of linearly independent sets.

Theorem (Uniqueness Theorem). If {v_1, v_2, ..., v_n} ⊆ R^m is a linearly independent set and w ∈ Span{v_1, v_2, ..., v_n}, then w can be written uniquely as a linear combination of the v_i's. That is, we can write

w = c_1 v_1 + c_2 v_2 + ... + c_n v_n

for a unique ordered list of scalars c_1, c_2, ..., c_n.

Problem. Demonstrate that the uniqueness theorem is true.
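A standard argument for the uniqueness theorem (a sketch of the usual two-representations proof, not copied from these notes) runs as follows:

```latex
% Sketch: uniqueness of the coefficients.
Suppose $w$ has two representations,
\[
  w = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n
  \quad\text{and}\quad
  w = d_1 v_1 + d_2 v_2 + \cdots + d_n v_n .
\]
Subtracting the second from the first gives
\[
  O = (c_1 - d_1) v_1 + (c_2 - d_2) v_2 + \cdots + (c_n - d_n) v_n .
\]
Since $\{v_1, v_2, \ldots, v_n\}$ is linearly independent, every coefficient
must vanish, so $c_j = d_j$ for each $j$; the representation is unique.
```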

Facts about linearly independent and linearly dependent sets include the following.

1. If v_i = O for some i, then the set {v_1, v_2, ..., v_n} is linearly dependent.

2. Suppose that v_1 and v_2 are nonzero vectors. Then the set {v_1, v_2} is linearly independent if and only if v_2 ≠ c v_1 for every scalar c.

3. If m < n, then the set {v_1, v_2, ..., v_n} ⊆ R^m is linearly dependent.

4. Suppose n ≥ 2 and each v_i ≠ O. Then the set is linearly dependent if and only if one vector in the set can be written as a linear combination of the others.

5. Let A be the m × n matrix whose columns are the v_j's: A = [ v_1 v_2 ... v_n ]. Then the set is linearly independent if and only if the homogeneous system Ax = O has the trivial solution only.

6. Let A be the m × n matrix whose columns are the v_j's: A = [ v_1 v_2 ... v_n ]. Then the set is linearly independent if and only if A has a pivot in every column.

Problem. Which, if any, of the following sets are linearly independent? Why?

(a) 3 0 0 0, 5 0, 6, 4 5 0 0 0.

(b) 6 0 8 4 7, , 6, 4 6 3.

(c) 6, 0, 8 6 7.

(d) 4.
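The pivot criterion (a pivot in every column) can be carried out mechanically. The following Python sketch row reduces with exact Fraction arithmetic and tests independence; the helper names and the sample vectors are my own, not taken from the problems above:

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row reduce a matrix (list of rows) and return the list of pivot columns."""
    A = [[Fraction(a) for a in row] for row in rows]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        # Find a row at or below r with a nonzero entry in column c.
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [a / A[r][c] for a in A[r]]          # scale pivot row
        for i in range(m):
            if i != r and A[i][c] != 0:             # clear the rest of the column
                A[i] = [a - A[i][c] * p for a, p in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return pivots

def is_linearly_independent(vectors):
    """Independent iff the matrix with these columns has a pivot in every column."""
    A = [list(row) for row in zip(*vectors)]        # place the vectors as columns
    return len(pivot_columns(A)) == len(vectors)

# Hypothetical examples:
assert is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert not is_linearly_independent([(1, 2, 3), (2, 4, 6)])    # v2 = 2*v1
assert not is_linearly_independent([(1, 0), (0, 1), (1, 1)])  # m < n (Fact 3)
```

The last assertion illustrates Fact 3: three vectors in R^2 can never be independent, because a 2 × 3 matrix has at most two pivots.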

1.6 Linear and Matrix Transformations

1.6.1 Transformations, Images and Pre-Images

A transformation T : R^n → R^m is a function (or rule) that assigns to each x ∈ R^n a value T(x) = b ∈ R^m. Thus,

1. The domain of T is all of n-space: Domain(T) = R^n. The range of T is a subset of m-space: Range(T) = {T(x) : x ∈ R^n} ⊆ R^m.

2. The value T(x) for a given x is also called its image.

3. Each x ∈ R^n whose image is b ∈ R^m is said to be a pre-image of b.

Note that images are unique, but pre-images may not be unique.

Problem. In each case, find the range of the transformation. That is, find Range(T).

(a) T : R^2 → R^2 with rule T([ x_1 ; x_2 ]) = [ x x ; x ].

(b) T : R^2 → R^2 with rule T([ x_1 ; x_2 ]) = [ x_1 + 6x_2 ; x_1 − 3x_2 ].
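To make the image/pre-image distinction concrete, here is a small Python sketch. The rule T(x_1, x_2) = (x_1, x_1) is a hypothetical example of my own (not one of the problems above), chosen because it is not one-to-one:

```python
# A transformation T : R^2 -> R^2 with rule T(x1, x2) = (x1, x1).
def T(x):
    x1, x2 = x
    return (x1, x1)

# The image of any given x is unique...
assert T((3, 5)) == (3, 3)

# ...but a point b in the range can have many pre-images:
assert T((3, 5)) == T((3, -7)) == (3, 3)

# Range(T) = {(a, a) : a real}, a proper subset of R^2, so T is not onto.
```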

(c) T : R^4 → R^3 with rule T([ x_1 ; x_2 ; x_3 ; x_4 ]) = [ x_3 + x_4 ; x_2 + x_4 ; x_1 + x_2 + x_3 + x_4 ].

(d) T : R^3 → R^3 with rule T([ x_1 ; x_2 ; x_3 ]) = [ 3x_1 ; 3x_2 ; 3x_3 ].

1.6.2 One-To-One and Onto Transformations

1. The transformation T : R^n → R^m is said to be one-to-one if each b ∈ Range(T) is the image of at most one x ∈ R^n.

2. The transformation T : R^n → R^m is said to be onto if Range(T) = R^m.

Problem (a), continued. Is T : R^2 → R^2 with rule T([ x_1 ; x_2 ]) = [ x x ; x ] one-to-one and/or onto? Explain.

Problem (b), continued. Is T : R^2 → R^2 with rule T([ x_1 ; x_2 ]) = [ x_1 + 6x_2 ; x_1 − 3x_2 ] one-to-one and/or onto? Explain.

Problem (c), continued. Is T : R^4 → R^3 with rule T([ x_1 ; x_2 ; x_3 ; x_4 ]) = [ x_3 + x_4 ; x_2 + x_4 ; x_1 + x_2 + x_3 + x_4 ] one-to-one and/or onto? Explain.

Problem (d), continued. Is T : R^3 → R^3 with rule T([ x_1 ; x_2 ; x_3 ]) = [ 3x_1 ; 3x_2 ; 3x_3 ] one-to-one and/or onto? Explain.

1.6.3 Linear Transformations

T : R^n → R^m is said to be a linear transformation if the following two conditions are satisfied:

1. T(cv) = cT(v) for all c ∈ R and v ∈ R^n, and

2. T(v + w) = T(v) + T(w) for all v, w ∈ R^n.

The conditions for linear transformations lead to the following facts:

Fact 1: If T is a linear transformation and w is a linear combination of v_1, v_2, ..., v_k, then T(w) is a linear combination of T(v_1), T(v_2), ..., T(v_k). In symbols,

w = c_1 v_1 + c_2 v_2 + ... + c_k v_k  ⟹  T(w) = c_1 T(v_1) + c_2 T(v_2) + ... + c_k T(v_k).

Fact 2: If T is a linear transformation, then T maps the span of a set of vectors to the span of the set of images of the vectors. In symbols,

T( Span{v_1, v_2, ..., v_k} ) = Span{ T(v_1), T(v_2), ..., T(v_k) }.

Fact 3: If T(x) = Ax for some m × n matrix A, then T is a linear transformation.

Problem. Which, if any, of the transformations from the preceding problem are linear transformations? Explain.
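Fact 3 can be checked numerically: a matrix transformation satisfies both linearity conditions. The sketch below uses a hypothetical 2 × 2 matrix A and a few test vectors of my own choosing:

```python
import itertools

# A hypothetical matrix; T(x) = A x should satisfy both linearity conditions.
A = [[2, -1],
     [0, 3]]

def T(x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

def scale(c, v):
    return tuple(c * vi for vi in v)

def add(v, w):
    return tuple(vi + wi for vi, wi in zip(v, w))

vectors = [(1, 2), (-3, 5), (0, -4)]

# Condition 2: T(v + w) = T(v) + T(w)
for v, w in itertools.product(vectors, repeat=2):
    assert T(add(v, w)) == add(T(v), T(w))

# Condition 1: T(cv) = c T(v)
for c in (-2, 0, 7):
    for v in vectors:
        assert T(scale(c, v)) == scale(c, T(v))
```

Since every row of Ax is a sum of constant multiples of the entries of x, both conditions follow from the distributive law, which is exactly what these assertions exercise.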

Problem 3. Let T : R^2 → R^2 be a linear transformation, and suppose that

T([  ]) = [ 4  ] and T([  ]) = [ 3  4 ].

(a) Find T([ 6  ]).

(b) Find a general formula for T([ x_1 ; x_2 ]).

1.6.4 Linear and Matrix Transformations

The following theorem says that every linear transformation can be written as a matrix transformation. Formally,

Theorem (Matrix Transformations). If T : R^n → R^m is a linear transformation, then the rule for T can be written as T(x) = Ax for a unique m × n matrix A.

The matrix A, known as the standard matrix of T, has an interesting form.

Standard basis vectors and the standard matrix. Let T : R^n → R^m be a linear transformation, and let e_j ∈ R^n be the vector whose j-th component is 1 and whose remaining components are 0, for j = 1, 2, ..., n. Then:

1. Each x ∈ R^n can be written uniquely as a linear combination of the e_j's:

x = [ x_1 ; x_2 ; ... ; x_n ] = x_1 [ 1 ; 0 ; ... ; 0 ] + x_2 [ 0 ; 1 ; ... ; 0 ] + ... + x_n [ 0 ; 0 ; ... ; 1 ] = x_1 e_1 + x_2 e_2 + ... + x_n e_n.

2. Since T is a linear transformation,

T(x) = x_1 T(e_1) + x_2 T(e_2) + ... + x_n T(e_n) = [ T(e_1) T(e_2) ... T(e_n) ] [ x_1 ; ... ; x_n ] = Ax.

3. Thus, the rule for T corresponds to matrix multiplication, where A is the matrix whose columns are the images of the e_j's:

A = [ T(e_1) T(e_2) ... T(e_n) ].

The vectors e_1, e_2, ..., e_n are known as the standard basis vectors of R^n. If we know the images of the standard basis vectors under the linear transformation T, then we know the standard matrix of T (and the formula for T in terms of the standard matrix). For example, if T : R^2 → R^3 is a linear transformation, then A = [ T(e_1) T(e_2) ] is the 3 × 2 matrix whose columns are the images of e_1 and e_2.
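The column-by-column construction of the standard matrix can be sketched in Python. The linear rule T below is a hypothetical example (not from the notes); the point is that applying T to e_1, ..., e_n recovers A:

```python
# A hypothetical linear rule T : R^2 -> R^3.
def T(x):
    x1, x2 = x
    return (x1 + 2 * x2, 3 * x1, -x2)

def standard_matrix(T, n):
    """Column j of the standard matrix is T(e_j)."""
    cols = []
    for j in range(n):
        e_j = tuple(1 if i == j else 0 for i in range(n))
        cols.append(T(e_j))
    # Transpose the list of image columns into a list of rows.
    return [list(row) for row in zip(*cols)]

A = standard_matrix(T, 2)
assert A == [[1, 2], [3, 0], [0, -1]]

# Multiplying by A reproduces the rule for any x:
x = (5, -4)
Ax = tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)
assert Ax == T(x)
```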

Problem 4. Suppose that T : R^2 → R^2 is a linear transformation,

T(e_1 + e_2) = [ 4 ; 3 ] and T(e_1 − e_2) = [  ; 0 ].

Find T(e_1), T(e_2), and the standard matrix of T.

Images of lines in the plane. If T : R^2 → R^2 is a linear transformation, then T maps lines to either lines or points.

Problem 5. Let T : R^2 → R^2 be the transformation with rule

T(x) = [ 3  ] x.

Find an equation for the image of the line y = x.

Problem 6. Let T : R^2 → R^2 be the transformation with rule

T(x) = [  4 ] x.

Find an equation for the image of the line y = −x (or x = −y).

One-to-one and onto transformations, revisited. Recall that T : R^n → R^m is said to be one-to-one if each b ∈ Range(T) is the image of at most one x ∈ R^n, and that T : R^n → R^m is said to be onto if Range(T) = R^m. The following theorem relates these definitions to properties of the standard matrix of a linear transformation.

Theorem (Linear Transformations). Let T : R^n → R^m be the linear transformation with rule T(x) = Ax, where A is an m × n matrix. Then:

1. T is onto if and only if the columns of A span all of R^m.

2. T is one-to-one if and only if the columns of A form a linearly independent set.

Aside 1: T is onto if and only if A has a pivot in every row.

Aside 2: T is one-to-one if and only if A has a pivot in every column.
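The pivot criteria can be tested mechanically. This Python sketch (helper names and sample matrices are my own) counts pivots via Gaussian elimination with exact Fraction arithmetic: a pivot in every row means onto, a pivot in every column means one-to-one:

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots, computed by Gaussian elimination."""
    A = [[Fraction(a) for a in row] for row in rows]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, len(A)):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * p for a, p in zip(A[i], A[r])]
        r += 1
    return r

def is_onto(A):
    return rank(A) == len(A)        # pivot in every row

def is_one_to_one(A):
    return rank(A) == len(A[0])     # pivot in every column

# Hypothetical standard matrices:
wide = [[1, 0, 2], [0, 1, 3]]       # T : R^3 -> R^2
tall = [[1, 0], [0, 1], [2, 3]]     # T : R^2 -> R^3
assert is_onto(wide) and not is_one_to_one(wide)
assert is_one_to_one(tall) and not is_onto(tall)
```

The two examples show the general pattern: a wide matrix (m < n) can be onto but never one-to-one, and a tall matrix (m > n) can be one-to-one but never onto.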

Problem 7. Which, if any, of the following transformations are one-to-one? Which, if any, are onto? Why?

(a) T : R^2 → R^3 with rule T([ x_1 ; x_2 ]) = [ 3x x ; x 3x ; x ].

(b) T : R^4 → R^3 with rule T([ x_1 ; x_2 ; x_3 ; x_4 ]) = [ x_1 − 4x_2 + 8x_3 + x_4 ; x_2 − 8x_3 + 3x_4 ; 5x_4 ].

(c) T : R^3 → R^3 with rule T([ x_1 ; x_2 ; x_3 ]) = [ x ; x_1 + 3x_2 + x_3 ; 3x_1 − x_2 + 4x_3 ].

Footnote: Rotations in 2-space. Recall that positive angles are measured counterclockwise from the positive x-axis, and negative angles are measured clockwise from the positive x-axis. The transformation that rotates points in the plane through an angle θ is a linear transformation. For example, the plot on the right shows the images of the standard basis vectors under the rotation through the positive angle θ = π/5.

Problem 8. Let T : R^2 → R^2 be the rotation about the origin through angle θ. Find the standard matrix of T. Your answer should be in terms of sin(θ) and cos(θ).
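For checking work on Problem 8: the standard matrix of the plane rotation through θ is the well-known [[cos θ, −sin θ], [sin θ, cos θ]]. A quick numerical sanity check in Python (helper names are my own):

```python
import math

def rotation_matrix(theta):
    """Standard matrix of the counterclockwise rotation through theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s, c]]

def apply(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

# Rotating e_1 a quarter turn should land (approximately) on e_2.
R = rotation_matrix(math.pi / 2)
image = apply(R, (1, 0))
assert abs(image[0] - 0) < 1e-12 and abs(image[1] - 1) < 1e-12
```

Note that the columns of R are exactly the images of e_1 and e_2, in line with the standard-matrix construction of Section 1.6.4.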

Footnote: Rotations about the z-axis. Assume that positive angles are measured counterclockwise from the positive x-axis, and negative angles are measured clockwise from the positive x-axis, when looking down from the positive z-axis. The transformation that rotates points in 3-space through angle θ about the z-axis is a linear transformation. For example, the plot on the right shows the images of the standard basis vectors under the rotation through the positive angle θ = π/5.

Problem 9. Let T : R^3 → R^3 be the rotation about the z-axis through angle θ. Find the standard matrix of T. Your answer should be in terms of sin(θ) and cos(θ).