Linear Transformations


Systems of linear equations, with matrix form Ax = b, are often usefully analyzed by viewing the equation as a problem that asks for an unknown input x to a function that produces a known output b. The rule for this function is the one that takes vector inputs x ∈ R^n and returns vector outputs Ax ∈ R^m. We call such functions with vector inputs and vector outputs transformations (of Euclidean spaces). Using standard notation for describing functions, we can refer to such a transformation as a function, or map, of the form T : R^n → R^m. Here, T is the name of the transformation that carries vector inputs from R^n to vector outputs in R^m according to some well-defined rule. A particularly simple example of a transformation is the identity map T : R^n → R^n with formula T(x) = x. In the context of systems of linear equations (the one of prime interest to us), we could define T by the formula T(x) = Ax, where A is some m × n matrix. See Example 1, p. 74, for details, and study the marginal diagrams there for a glimpse at how one might form a mental picture of such a transformation.
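For concreteness, here is a minimal sketch in Python with NumPy (not from the text; the matrix A below is an arbitrary choice used only for illustration) of a transformation defined by a matrix:

import numpy as np

# An arbitrary 2x3 matrix (so here m = 2, n = 3): T maps R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])

def T(x):
    # The rule of the transformation: take x in R^3, return Ax in R^2.
    return A @ x

x = np.array([1.0, 1.0, 2.0])
print(T(x))    # [ 3. -1.]  -- the output vector Ax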

Examples 2, 3, 4, and 5 on pp. 76-78 illustrate that much geometrical information is captured by the behavior of certain transformations of the form T(x) = Ax. It is no coincidence that these transformations obey the simple properties

T(u + v) = T(u) + T(v) for all inputs u, v;
T(cu) = cT(u) for all inputs u and scalars c.

Any transformation of Euclidean spaces that satisfies these two properties is called a linear transformation. The defining properties can be interpreted as saying that linear transformations preserve vector addition and scalar multiplication. Because of the algebraic properties of the matrix-vector product, it is clear that all transformations of the form T(x) = Ax are automatically linear transformations. But as we shall soon see, the converse statement is also true: every linear transformation has a matrix form T(x) = Ax! For instance, check that the identity map T(x) = x is a linear transformation. It can be represented in terms of the matrix-vector product formula T(x) = I_n x, involving the n × n identity matrix

        [ 1  0  ...  0 ]
  I_n = [ 0  1  ...  0 ]
        [ .  .       . ]
        [ 0  0  ...  1 ]
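As a quick numerical check (a sketch with randomly generated data, continuing in NumPy; the matrix size is arbitrary), one can confirm that a transformation of the form T(x) = Ax satisfies both defining properties:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))      # any 3x4 matrix defines T : R^4 -> R^3
T = lambda x: A @ x

u = rng.standard_normal(4)
v = rng.standard_normal(4)
c = 2.5

# T preserves vector addition and scalar multiplication:
print(np.allclose(T(u + v), T(u) + T(v)))   # True
print(np.allclose(T(c * u), c * T(u)))      # True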

Let's investigate some other important properties of linear transformations:

Theorem. Any linear transformation T : R^n → R^m takes the zero vector in R^n to the zero vector in R^m.

Proof. T(0) = T(0 + 0) = T(0) + T(0); subtracting T(0) from both sides gives T(0) = 0. //

Theorem. The transformation T : R^n → R^m is linear if and only if for all vectors u, v ∈ R^n and all scalars c, d, the relation T(cu + dv) = cT(u) + dT(v) holds.

Proof. If T is a linear transformation, then, using the defining properties, we have T(cu + dv) = T(cu) + T(dv) = cT(u) + dT(v). Conversely, if T is a transformation for which T(cu + dv) = cT(u) + dT(v) holds, then setting c = d = 1 shows that T(u + v) = T(u) + T(v) for all u and v in R^n; and setting d = 0 shows that T(cu) = cT(u) for every choice of c and u. So T must be linear. //
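The first theorem gives a quick way to recognize maps that are not linear. For instance (a sketch; the shift b is an arbitrary nonzero vector), the translation T(x) = x + b sends the zero vector to b, not to 0, so it cannot be linear:

import numpy as np

b = np.array([1.0, -2.0])       # an arbitrary nonzero shift
T = lambda x: x + b             # translation by b

print(T(np.zeros(2)))           # [ 1. -2.], not the zero vector,
                                # so T fails T(0) = 0 and cannot be linear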

Repeated application of the property T(cu + dv) = cT(u) + dT(v) shows that if T is a linear transformation, then for any collection of vectors v_1, v_2, ..., v_k and associated scalars c_1, c_2, ..., c_k,

T(c_1 v_1 + c_2 v_2 + ... + c_k v_k) = c_1 T(v_1) + c_2 T(v_2) + ... + c_k T(v_k).

That is, T carries any linear combination of a set v_1, v_2, ..., v_k of vectors in R^n to the same linear combination of their images T(v_1), T(v_2), ..., T(v_k) in R^m. This is often referred to as the superposition principle. We are now in a position to prove the theorem alluded to earlier:

Theorem. Any linear transformation T : R^n → R^m has an associated m × n matrix A for which T(x) = Ax. More specifically, A = [ T(e_1) T(e_2) ... T(e_n) ] is the matrix whose jth column is the image T(e_j) of the vector e_j, the jth column of the n × n identity matrix.
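This recipe can be carried out mechanically. The sketch below (NumPy; double_and_swap is a made-up example map) recovers the standard matrix of a transformation given only as a function, by feeding it the columns e_j of the identity matrix:

import numpy as np

def double_and_swap(x):
    # A made-up linear map T : R^2 -> R^2 with T(x1, x2) = (2*x2, 2*x1).
    return np.array([2.0 * x[1], 2.0 * x[0]])

n = 2
E = np.eye(n)                   # the columns of E are e_1, ..., e_n
A = np.column_stack([double_and_swap(E[:, j]) for j in range(n)])
print(A)                        # [[0. 2.]
                                #  [2. 0.]]  -- the standard matrix of T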

Proof. The identity matrix I_n satisfies

x = I_n x = [ e_1 e_2 ... e_n ] x = x_1 e_1 + x_2 e_2 + ... + x_n e_n.

So, by the linearity of T,

T(x) = T(x_1 e_1 + x_2 e_2 + ... + x_n e_n)
     = x_1 T(e_1) + x_2 T(e_2) + ... + x_n T(e_n)
     = [ T(e_1) T(e_2) ... T(e_n) ] x
     = Ax,

where x is written as the column vector with entries x_1, x_2, ..., x_n. //

The matrix A obtained by applying this theorem to a linear transformation T is called its standard matrix. For instance, pp. 85-87 present standard matrices for the linear transformations from R^2 to R^2 which represent reflections across lines, reflection through a point (the origin), dilations and shears, and projections onto certain lines.
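For example, a horizontal shear T(x_1, x_2) = (x_1 + k*x_2, x_2), of the kind tabulated there, has a standard matrix whose columns are the images of e_1 and e_2 (a sketch; the shear factor k = 1.5 is an arbitrary choice):

import numpy as np

k = 1.5                                          # arbitrary shear factor
shear = lambda x: np.array([x[0] + k * x[1], x[1]])

# The columns of the standard matrix are the images of e_1 and e_2:
A = np.column_stack([shear(np.array([1.0, 0.0])),
                     shear(np.array([0.0, 1.0]))])
print(A)                                         # [[1.  1.5]
                                                 #  [0.  1. ]]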

It is significant to note that the geometric transformations in Tables 1-3 (pp. 85-86) are one-to-one as functions, and they map R^2 onto R^2. In contrast, the projection maps in Table 4 (p. 87) are neither one-to-one nor onto. The properties of being one-to-one and onto are related to ideas we have explored earlier; this is spelled out in the following two theorems:

Theorem. The linear transformation T : R^n → R^m is one-to-one if and only if the zero vector in R^n is the only vector that is mapped by T to the zero vector in R^m, i.e., T(x) = 0 has only the trivial solution.

Proof. Since T is a linear transformation, T(0) = 0. So, if T is one-to-one, T(x) = 0 can have only the trivial solution x = 0. Conversely, suppose T is a linear transformation for which T(x) = 0 has only the trivial solution. Then, if u and v are vectors in R^n for which T(u) = T(v), it follows that T(u − v) = T(u) − T(v) = 0, from which we conclude that u − v = 0, or u = v. Therefore, T is one-to-one. //
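The projection example can be made concrete (a sketch; the matrix below projects R^2 onto the x_1-axis, in the spirit of Table 4):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])      # projection of R^2 onto the x1-axis

x = np.array([0.0, 1.0])        # a nonzero vector
print(A @ x)                    # [0. 0.]: a nontrivial solution of Ax = 0,
                                # so by the theorem T(x) = Ax is not one-to-one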

Theorem. Suppose the linear transformation T : R^n → R^m has standard matrix A. Then (1) T is one-to-one if and only if the columns of A are linearly independent, that is, if and only if A has a pivot entry in every column; and (2) T is onto if and only if the columns of A span R^m, that is, if and only if A has a pivot entry in every row.

Proof. (1) is an immediate consequence of the previous theorem, since we know that the columns of A are linearly independent if and only if Ax = 0 has only the trivial solution, which happens if and only if the homogeneous system has no free variables. To prove (2), recall the theorem that says that the columns of A span R^m if and only if the equation Ax = b is consistent for every b ∈ R^m. But this holds if and only if T(x) = Ax is an onto function. In particular, if A has a pivot entry in every row, we are assured that the system Ax = b always has a solution; but conversely, if Ax = b has a solution x for every possible b ∈ R^m, then every b is a linear combination of the columns of A. //
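Both conditions reduce to counting pivots, that is, to the rank of A. A small sketch, using numpy.linalg.matrix_rank as a stand-in for counting pivots by row reduction (the test matrices are arbitrary illustrations):

import numpy as np

def one_to_one_and_onto(A):
    m, n = A.shape
    r = np.linalg.matrix_rank(A)     # number of pivots of A
    return r == n, r == m            # pivot in every column / in every row

print(one_to_one_and_onto(np.eye(2)))                          # (True, True)
print(one_to_one_and_onto(np.array([[1.0, 0.0], [0.0, 0.0]]))) # (False, False)
print(one_to_one_and_onto(np.array([[1.0], [1.0]])))           # (True, False)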