Linear Equations in Linear Algebra

1.8 INTRODUCTION TO LINEAR TRANSFORMATIONS

LINEAR TRANSFORMATIONS

A transformation (or function or mapping) T from R^n to R^m is a rule that assigns to each vector x in R^n a vector T(x) in R^m. The set R^n is called the domain of T, and R^m is called the codomain of T.

The notation T : R^n -> R^m indicates that the domain of T is R^n and the codomain is R^m.

For x in R^n, the vector T(x) in R^m is called the image of x (under the action of T). The set of all images T(x) is called the range of T.

MATRIX TRANSFORMATIONS

For each x in R^n, T(x) is computed as Ax, where A is an m x n matrix. For simplicity, we denote such a matrix transformation by x |-> Ax.

The domain of T is R^n when A has n columns, and the codomain of T is R^m when each column of A has m entries.

MATRIX TRANSFORMATIONS

The range of T is the set of all linear combinations of the columns of A, because each image T(x) is of the form Ax.

Example 1: Let

    A = [  1  -3 ]      u = [  2 ]      b = [  3 ]      c = [ 3 ]
        [  3   5 ]          [ -1 ]          [  2 ]          [ 2 ]
        [ -1   7 ]                          [ -5 ]          [ 5 ]

and define a transformation T : R^2 -> R^3 by T(x) = Ax, so that

    T(x) = Ax = [  1  -3 ] [ x1 ]   [  x1 - 3x2 ]
                [  3   5 ] [ x2 ] = [ 3x1 + 5x2 ]
                [ -1   7 ]          [ -x1 + 7x2 ].
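The slides work this example by hand; as an optional cross-check (not part of the original text), a matrix transformation like this one can be evaluated with NumPy. The matrix A below is the one from Example 1, and the sample vector x is an arbitrary choice for illustration.

    import numpy as np

    # Matrix A from Example 1; it defines a transformation T : R^2 -> R^3.
    A = np.array([[ 1, -3],
                  [ 3,  5],
                  [-1,  7]])

    def T(x):
        # Matrix transformation x |-> Ax.
        return A @ x

    x = np.array([1, 1])   # an arbitrary vector in R^2
    print(T(x))            # -> [-2  8  6], a vector in R^3 (the image of x)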

MATRIX TRANSFORMATIONS

a. Find T(u), the image of u under the transformation T.
b. Find an x in R^2 whose image under T is b.
c. Is there more than one x whose image under T is b?
d. Determine if c is in the range of the transformation T.

MATRIX TRANSFORMATIONS

Solution:

a. Compute

    T(u) = Au = [  1  -3 ] [  2 ]   [  5 ]
                [  3   5 ] [ -1 ] = [  1 ]
                [ -1   7 ]          [ -9 ].

b. Solve T(x) = b for x. That is, solve Ax = b, or

    [  1  -3 ] [ x1 ]   [  3 ]
    [  3   5 ] [ x2 ] = [  2 ]    ----(1)
    [ -1   7 ]          [ -5 ].

MATRIX TRANSFORMATIONS

Row reduce the augmented matrix:

    [  1  -3   3 ]   [ 1  -3    3 ]   [ 1  -3    3  ]   [ 1   0   1.5 ]
    [  3   5   2 ] ~ [ 0  14   -7 ] ~ [ 0   1  -0.5 ] ~ [ 0   1  -0.5 ]    ----(2)
    [ -1   7  -5 ]   [ 0   4   -2 ]   [ 0   0    0  ]   [ 0   0    0  ]

Hence x1 = 1.5, x2 = -0.5, and x = (1.5, -0.5). The image of this x under T is the given vector b.
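As a side note (added here, not in the slides), the same solution can be recovered numerically. Because A is 3 x 2, np.linalg.solve does not apply; np.linalg.lstsq returns the least-squares solution, which coincides with the exact solution because the system is consistent.

    import numpy as np

    A = np.array([[ 1, -3],
                  [ 3,  5],
                  [-1,  7]], dtype=float)
    b = np.array([3, 2, -5], dtype=float)

    # Least-squares solve; exact here since Ax = b is consistent.
    x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(x)        # -> [ 1.5 -0.5]
    print(A @ x)    # -> [ 3.  2. -5.], i.e. T(x) = b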

MATRIX TRANSFORMATIONS

c. Any x whose image under T is b must satisfy equation (1). From (2), it is clear that equation (1) has a unique solution. So there is exactly one x whose image is b.

d. The vector c is in the range of T if c is the image of some x in R^2, that is, if c = T(x) for some x. This is another way of asking if the system Ax = c is consistent.

MATRIX TRANSFORMATIONS

To find the answer, row reduce the augmented matrix [A c]:

    [  1  -3   3 ]   [ 1  -3    3 ]   [ 1  -3    3 ]   [ 1  -3    3 ]
    [  3   5   2 ] ~ [ 0  14   -7 ] ~ [ 0   1    2 ] ~ [ 0   1    2 ]
    [ -1   7   5 ]   [ 0   4    8 ]   [ 0  14   -7 ]   [ 0   0  -35 ]

The third equation, 0 = -35, shows that the system is inconsistent. So c is not in the range of T.
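The consistency test in part (d) can also be stated in terms of rank: c is in the range of T exactly when adjoining c to A does not increase the rank. This NumPy sketch (an added illustration, not part of the slides) confirms the conclusion above.

    import numpy as np

    A = np.array([[ 1, -3],
                  [ 3,  5],
                  [-1,  7]])
    c = np.array([3, 2, 5])

    augmented = np.column_stack([A, c])      # the matrix [A | c]
    print(np.linalg.matrix_rank(A))          # -> 2
    print(np.linalg.matrix_rank(augmented))  # -> 3, so Ax = c has no solution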

SHEAR TRANSFORMATION

Example 2: Let

    A = [ 1  3 ]
        [ 0  1 ].

The transformation T : R^2 -> R^2 defined by T(x) = Ax is called a shear transformation.

It can be shown that if T acts on each point in the 2 x 2 square with corners (0, 0), (2, 0), (2, 2), and (0, 2), then the set of images forms a parallelogram.

SHEAR TRANSFORMATION

The key idea is to show that T maps line segments onto line segments and then to check that the corners of the square map onto the vertices of the parallelogram. For instance, the image of the point u = (0, 2) is

    T(u) = [ 1  3 ] [ 0 ]   [ 6 ]
           [ 0  1 ] [ 2 ] = [ 2 ],

and the image of (2, 2) is

    [ 1  3 ] [ 2 ]   [ 8 ]
    [ 0  1 ] [ 2 ] = [ 2 ].
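All four corners can be mapped at once; the short NumPy sketch below (an added illustration, assuming the 2 x 2 square described above) shows the base staying fixed while the top edge slides to the right.

    import numpy as np

    # Shear matrix from Example 2.
    A = np.array([[1, 3],
                  [0, 1]])

    # Corners of the 2 x 2 square, one corner per column.
    corners = np.array([[0, 2, 2, 0],
                        [0, 0, 2, 2]])

    # Images of the corners under x |-> Ax.
    print(A @ corners)
    # [[0 2 8 6]
    #  [0 0 2 2]]   the base (y = 0) is fixed; the top edge shifts right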

LINEAR TRANSFORMATIONS

T deforms the square as if the top of the square were pushed to the right while the base is held fixed.

Definition: A transformation (or mapping) T is linear if:

i. T(u + v) = T(u) + T(v) for all u, v in the domain of T;
ii. T(cu) = cT(u) for all scalars c and all u in the domain of T.

LINEAR TRANSFORMATIONS

Linear transformations preserve the operations of vector addition and scalar multiplication.

Property (i) says that the result of first adding u and v in R^n and then applying T is the same as first applying T to u and v and then adding T(u) and T(v) in R^m.

These two properties lead to the following useful facts. If T is a linear transformation, then

    T(0) = 0    ----(3)

LINEAR TRANSFORMATIONS

and

    T(cu + dv) = cT(u) + dT(v)    ----(4)

for all vectors u, v in the domain of T and all scalars c, d.

Property (3) follows from condition (ii) in the definition, because T(0) = T(0u) = 0T(u) = 0.

Property (4) requires both (i) and (ii):

    T(cu + dv) = T(cu) + T(dv) = cT(u) + dT(v).

If a transformation satisfies (4) for all u, v and c, d, it must be linear. (Set c = d = 1 for preservation of addition, and set d = 0 for preservation of scalar multiplication.)

LINEAR TRANSFORMATIONS

Repeated application of (4) produces a useful generalization:

    T(c1 v1 + ... + cp vp) = c1 T(v1) + ... + cp T(vp)    ----(5)

In engineering and physics, (5) is referred to as a superposition principle. Think of v1, ..., vp as signals that go into a system and T(v1), ..., T(vp) as the responses of that system to the signals.
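A quick numerical illustration of (5) (added here, reusing the matrix from Example 1 as the "system"; the signals and weights are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)

    A = np.array([[ 1, -3],
                  [ 3,  5],
                  [-1,  7]])
    T = lambda x: A @ x        # a linear transformation

    v1, v2 = rng.standard_normal(2), rng.standard_normal(2)   # input "signals"
    c1, c2 = 2.0, -0.5                                        # arbitrary weights

    # Superposition: the response to a weighted sum of inputs equals
    # the same weighted sum of the individual responses.
    print(np.allclose(T(c1*v1 + c2*v2), c1*T(v1) + c2*T(v2)))   # -> True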

LINEAR TRANSFORMATIONS

The system satisfies the superposition principle if, whenever an input is expressed as a linear combination of such signals, the system's response is the same linear combination of the responses to the individual signals.

Given a scalar r, define T : R^2 -> R^2 by T(x) = rx. T is called a contraction when 0 <= r <= 1 and a dilation when r > 1.
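For completeness (an added illustration, not in the slides), the contraction/dilation map can be written out directly; r = 0.5 shrinks vectors toward the origin and r = 3 stretches them.

    import numpy as np

    def scale(r):
        # T(x) = r x on R^2: a contraction for 0 <= r <= 1, a dilation for r > 1.
        return lambda x: r * np.asarray(x, dtype=float)

    x = np.array([4.0, -2.0])
    print(scale(0.5)(x))   # -> [ 2. -1.]  (contraction)
    print(scale(3.0)(x))   # -> [12. -6.]  (dilation)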