No bullshit guide to linear algebra. Ivan Savov


Chapter 6 Linear transformations

6.1 Finding matrix representations

Every linear transformation T : Rⁿ → Rᵐ can be represented as a matrix product with a matrix M_T ∈ R^(m×n). Suppose the transformation T is defined by a word description like "Let T be the counterclockwise rotation of all points in the xy-plane by 30°." How do we find the matrix M_T that corresponds to this transformation?

In this section we will discuss various useful linear transformations and derive their matrix representations. The goal of this section is to solidify the bridge in your understanding between the abstract specification of a transformation T(v) and the implementation of this transformation as a matrix-vector product M_T v. Once you find the matrix representation of a given transformation, you can apply that transformation to many vectors. For example, if you know the (x, y) coordinates of each pixel of an image, and you replace these coordinates with the outcome of the matrix-vector product M_T (x, y)ᵀ, you'll obtain a rotated version of the image. That is essentially what happens when you use the rotate tool in an image-editing program.

Concepts

In the previous section we learned about linear transformations and their matrix representations:

- T : Rⁿ → Rᵐ: a linear transformation, which takes an input v ∈ Rⁿ and produces an output w ∈ Rᵐ: T(v) = w.
- M_T ∈ R^(m×n): a matrix representation of the linear transformation T. The action of the linear transformation T is equivalent to multiplication by the matrix M_T:

      w = T(v)    ⇔    w = M_T v.

Theory

To find the matrix representation of the transformation T : Rⁿ → Rᵐ, it is sufficient to probe T with the n vectors of the standard basis for the input space Rⁿ:

    ê₁ = (1, 0, …, 0)ᵀ,  ê₂ = (0, 1, …, 0)ᵀ,  …,  ê_n = (0, 0, …, 1)ᵀ.
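The probing recipe can be sketched numerically. A minimal sketch in Python with NumPy, assuming T is the 30° counterclockwise rotation from the word description above (the function T and the test vector are illustrative choices, not part of the text):

```python
import numpy as np

# Assumed example transformation: counterclockwise rotation by 30 degrees,
# given only as a Python function (the "word description" of T).
theta = np.deg2rad(30)

def T(v):
    x, y = v
    return np.array([x * np.cos(theta) - y * np.sin(theta),
                     x * np.sin(theta) + y * np.cos(theta)])

# Probe T with the standard basis; the outputs become the columns of M_T.
M_T = np.column_stack([T(e) for e in np.eye(2)])

# The matrix-vector product M_T v now reproduces T(v) for any input v.
v = np.array([2.0, 1.0])
assert np.allclose(M_T @ v, T(v))
```

Once M_T is built, applying the transformation to many vectors is a single matrix multiplication, which is exactly the image-rotation use case described above.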

The matrix M_T that corresponds to the action of T on the standard basis is

    M_T = [ T(ê₁)  T(ê₂)  ⋯  T(ê_n) ].

This is an m × n matrix that has as its columns the outputs of T for the n probes.

Projections

The first kind of linear transformation we will study is the projection.

X projection

Consider Π_x, the projection onto the x-axis. The action of Π_x on any vector or point is to leave the x-coordinate unchanged and to set the y-coordinate to zero. We can find the matrix associated with this projection by analyzing how it transforms the two vectors of the standard basis:

    Π_x( (1, 0)ᵀ ) = (1, 0)ᵀ,    Π_x( (0, 1)ᵀ ) = (0, 0)ᵀ.

The matrix representation of Π_x is therefore given by:

    M_Πx = [ Π_x(ê₁)  Π_x(ê₂) ] = [ 1  0 ]
                                  [ 0  0 ].

Y projection

Can you guess what the matrix for the projection onto the y-axis will look like? We use the standard approach to compute the matrix representation of Π_y:

    M_Πy = [ Π_y(ê₁)  Π_y(ê₂) ] = [ 0  0 ]
                                  [ 0  1 ].

We can easily verify that the matrices M_Πx and M_Πy do indeed select the appropriate coordinate from

a general input vector v = (v_x, v_y)ᵀ:

    [ 1  0 ] [ v_x ]   [ v_x ]        [ 0  0 ] [ v_x ]   [ 0   ]
    [ 0  0 ] [ v_y ] = [ 0   ],       [ 0  1 ] [ v_y ] = [ v_y ].

Projection onto a vector

Recall that the general formula for the projection of a vector v onto another vector a is:

    Π_a(v) = ((a · v)/(a · a)) a.

Thus, if we wanted to compute the projection onto an arbitrary direction a, we would have to compute:

    M_Πa = [ Π_a(ê₁)  Π_a(ê₂) ].

Projection onto a plane

We can also compute the projection of a vector v ∈ R³ onto some plane P : n · x = n_x x + n_y y + n_z z = 0 as follows:

    Π_P(v) = v − Π_n(v).

The interpretation of the above formula is as follows. We compute the part of the vector v that is in the n direction, and then we subtract this part from v to obtain a point in the plane P. To obtain the matrix representation of Π_P we calculate what it does to the standard basis î = ê₁ = (1, 0, 0)ᵀ, ĵ = ê₂ = (0, 1, 0)ᵀ, and k̂ = ê₃ = (0, 0, 1)ᵀ.

Projections as outer products

We can obtain the projection matrix onto any unit vector as the outer product of the vector with itself. Let us consider as an example how we could find the matrix for the projection onto the x-axis:

    Π_x(v) = (î · v) î = M_Πx v.

Recall that the inner product (dot product) of two column vectors u and v is equivalent to the matrix product uᵀv, while their outer product is given by the matrix product u vᵀ. The inner product corresponds to a 1 × n matrix times an n × 1 matrix, so the answer is a 1 × 1 matrix, which is equivalent to a number: the value of the dot product. The outer product corresponds to an n × 1 matrix times a 1 × n matrix, so the answer is an n × n matrix. For example, the projection matrix onto the x-axis is given by

    M_Πx = î îᵀ.
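The coordinate-selecting projections and the plane-projection formula Π_P(v) = v − Π_n(v) can be checked numerically. A short sketch; the test vectors and the plane normal n = (0, 0, 1) (the xy-plane) are assumed examples:

```python
import numpy as np

M_Px = np.array([[1, 0],
                 [0, 0]])   # projection onto the x-axis
M_Py = np.array([[0, 0],
                 [0, 1]])   # projection onto the y-axis

v = np.array([3, 5])
assert np.array_equal(M_Px @ v, np.array([3, 0]))   # keeps v_x, kills v_y
assert np.array_equal(M_Py @ v, np.array([0, 5]))   # keeps v_y, kills v_x

def proj(a, u):
    """Projection of u onto the line with direction vector a."""
    return (a @ u) / (a @ a) * a

# Assumed example plane: normal n = (0, 0, 1), i.e., the xy-plane.
n = np.array([0.0, 0.0, 1.0])
u = np.array([1.0, 2.0, 3.0])
# Subtracting the part of u along n leaves a point in the plane.
assert np.allclose(u - proj(n, u), np.array([1.0, 2.0, 0.0]))
```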

What? Where did that equation come from? To derive it, you simply have to rewrite the projection formula in terms of matrix products and use the commutative law of scalar multiplication αv = vα and the associative law of matrix multiplication A(BC) = (AB)C. Check it:

    Π_x(v) = (î · v) î = î (î · v) = î (îᵀ v) = (î îᵀ) v = M_Πx v,

and indeed

    î îᵀ = [ 1 ] [ 1  0 ] = [ 1  0 ],      [ 1  0 ] [ v_x ]   [ v_x ]
           [ 0 ]            [ 0  0 ]       [ 0  0 ] [ v_y ] = [ 0   ].

We see that the outer product M = î îᵀ corresponds to the projection matrix M_Πx we were looking for. More generally, the projection matrix onto a line with direction vector a is obtained by constructing the unit vector â = a/‖a‖ and then calculating the outer product:

    M_Πa = â âᵀ.

Example

Find the projection matrix M_d ∈ R^(2×2) for the projection Π_d onto the 45° diagonal line, a.k.a. the line with equation y = x. The line y = x corresponds to the parametric equation {(x, y) ∈ R² | (x, y) = (0, 0) + t(1, 1), t ∈ R}, so the direction vector is a = (1, 1).

The projection matrix onto a = (1, 1) is computed most easily using the outer-product approach. First we compute the normalized direction vector â = (1/√2, 1/√2)ᵀ, and then we compute the matrix product:

    M_d = â âᵀ = [ 1/√2 ] [ 1/√2  1/√2 ] = [ ½  ½ ]
                 [ 1/√2 ]                  [ ½  ½ ].

Note that the notion of an outer product is usually not covered in a first linear algebra class, so don't worry about outer products showing up on the exam. I just wanted to introduce you to this equivalence between projections onto â and the outer product â âᵀ, because it is one of the fundamental ideas of quantum mechanics. The probing-with-the-standard-basis approach is the one you want to remember for the exam. We can verify that it gives the same answer:

    M_d = [ Π_d(ê₁)  Π_d(ê₂) ] = [ ((a · î)/(a · a)) a   ((a · ĵ)/(a · a)) a ] = [ ½  ½ ]
                                                                                 [ ½  ½ ].
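The equivalence between the outer-product construction and standard-basis probing can be verified in a few lines; a sketch for the diagonal line y = x from the example above:

```python
import numpy as np

a = np.array([1.0, 1.0])          # direction vector of the line y = x
a_hat = a / np.linalg.norm(a)     # normalized: (1/sqrt(2), 1/sqrt(2))
M_d = np.outer(a_hat, a_hat)      # projection matrix as an outer product

assert np.allclose(M_d, np.array([[0.5, 0.5],
                                  [0.5, 0.5]]))

# Cross-check against probing with the standard basis:
def proj(a, u):
    """Projection of u onto the line with direction vector a."""
    return (a @ u) / (a @ a) * a

M_probe = np.column_stack([proj(a, e) for e in np.eye(2)])
assert np.allclose(M_d, M_probe)
```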

Projections are idempotent

Any projection matrix M_Π satisfies

    M_Π M_Π = M_Π.

This is one of the defining properties of projections, and the technical term for it is idempotence: the operation can be applied multiple times without changing the result beyond the initial application.

Subspaces

Note that a projection acts very differently on different sets of input vectors. Some input vectors are left unchanged and some input vectors are killed. Murder! Well, murder in a mathematical sense, which means being multiplied by zero. Let Π_S be the projection onto the space S, and let S⊥ be the orthogonal space to S, defined by S⊥ = { w ∈ Rⁿ | w · s = 0 for all s ∈ S }. The action of Π_S is completely different on vectors from S and from S⊥. All vectors v ∈ S come out unchanged, Π_S(v) = v, whereas all vectors w ∈ S⊥ are killed: Π_S(w) = 0. The action of Π_S on any vector from S⊥ is equivalent to multiplication by zero. This is why we call S⊥ the null space of M_ΠS.

Reflections

We can easily compute the matrices for simple reflections in the standard two-dimensional space R².

X reflection

The reflection through the x-axis should leave the x-coordinate unchanged and flip the sign of the y-coordinate. We obtain the matrix by probing as usual:

    M_Rx = [ R_x(ê₁)  R_x(ê₂) ] = [ 1   0 ]
                                  [ 0  −1 ],

which correctly sends (x, y)ᵀ to (x, −y)ᵀ as required.
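Idempotence, the null space of a projection, and the reflection matrix just derived can all be checked numerically. A sketch, reusing the diagonal-projection matrix from the earlier example:

```python
import numpy as np

M_d = np.array([[0.5, 0.5],
                [0.5, 0.5]])      # projection onto the line y = x

# Idempotence: applying the projection twice changes nothing.
assert np.allclose(M_d @ M_d, M_d)

# The orthogonal direction (1, -1) lies in the null space: it is "killed".
assert np.allclose(M_d @ np.array([1.0, -1.0]), np.array([0.0, 0.0]))

# The reflection through the x-axis sends (x, y) to (x, -y).
M_Rx = np.array([[1,  0],
                 [0, -1]])
assert np.array_equal(M_Rx @ np.array([3, 5]), np.array([3, -5]))
```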

Y reflection

The matrix associated with R_y, the reflection through the y-axis, is given by:

    M_Ry = [ −1  0 ]
           [  0  1 ].

The numbers in the above matrix tell you to change the sign of the x-coordinate and leave the y-coordinate unchanged. In other words, everything that was to the left of the y-axis now has to go to the right, and vice versa. Do you see how easy and powerful this matrix formalism is? You simply have to put in the first column whatever you want to happen to the ê₁ vector and in the second column whatever you want to happen to the ê₂ vector.

Diagonal reflection

Suppose we want to find the formula for the reflection through the line y = x, which passes right through the middle of the first quadrant. We will call this reflection R_d (this time, my dear reader, the diagram is on you to draw). In words, what we can say is that R_d makes x and y swap places. Starting from the description "x and y swap places" it is not difficult to see what the matrix should be:

    M_Rd = [ 0  1 ]
           [ 1  0 ].

I want to point out an important property that all reflections have. We can always identify the action of a reflection by the fact that it does two very different things to two sets of points: (1) some points are left unchanged by the reflection, and (2) some points become the exact negatives of themselves. For example, the points that are invariant under R_y are the points that lie on the y-axis, i.e., the multiples of (0, 1)ᵀ. The points that become the exact negatives of themselves are those that have only an x-component, i.e., the multiples of (1, 0)ᵀ. The action of R_y on all other points can be obtained as a linear combination of the "leave unchanged" and the "multiply by −1" actions. We will discuss this line

of reasoning more at the end of this section, where we will see more generally how to describe the actions of R_y on its different input subspaces.

Reflections through lines and planes

What about reflections through an arbitrary line? Consider the line ℓ : { t a, t ∈ R } that passes through the origin. We can write down a formula for the reflection through ℓ in terms of the projection formula:

    R_a(v) = 2 Π_a(v) − v.

The reasoning behind this formula is as follows. First we compute the projection of v onto the line, Π_a(v), then we take two steps in that direction and subtract v once. Use a pencil to annotate the figure to convince yourself that the formula works.

Similarly, we can derive an expression for the reflection through an arbitrary plane P : n · x = 0:

    R_P(v) = 2 Π_P(v) − v = v − 2 Π_n(v).

The first form of the formula uses a reasoning similar to the formula for the reflection through a line. The second form can be understood as computing the shortest vector from the plane to v, subtracting that vector once from v to get to a point in the plane, and subtracting it a second time to move to the point R_P(v) on the other side of the plane.

Rotations

We now want to find the matrix which corresponds to the counterclockwise rotation by the angle θ. An input point A in the plane will get rotated around the origin by an angle θ to obtain a new point B. By now you know the drill. Probe with the standard basis:

    M_Rθ = [ R_θ(ê₁)  R_θ(ê₂) ].

To compute the values in the first column, observe that the point (1, 0) = (cos 0, sin 0) will be moved to the point (cos θ, sin θ). The second input, ê₂ = (0, 1), will get rotated to (−sin θ, cos θ). We therefore get the matrix:

    M_Rθ = [ cos θ  −sin θ ]
           [ sin θ   cos θ ].

Finding the matrix representation of a linear transformation is like a colouring-book activity for mathematicians: you just have to fill in the columns.

Inverses

Can you tell me what the inverse matrix of M_Rθ is? You could use the formula for finding the inverse of a 2 × 2 matrix, or you could use the [A | I]-and-RREF algorithm for finding the inverse, but both of these approaches would be waaaaay too much work for nothing. I want you to try to guess the formula intuitively. If R_θ rotates stuff by +θ degrees, what do you think the inverse operation will be? Yep! You got it. The inverse operation is R_−θ, which rotates stuff by −θ degrees and corresponds to the matrix

    M_R−θ = [  cos θ  sin θ ]
            [ −sin θ  cos θ ].

For any vector v ∈ R² we have R_−θ(R_θ(v)) = v = R_θ(R_−θ(v)), or in terms of matrices:

    M_R−θ M_Rθ = I = M_Rθ M_R−θ.

Cool, no? That is what "representation" really means: the abstract notion of composition of linear transformations is represented by the matrix product.

What is the inverse operation to the reflection through the x-axis, R_x? Reflect again! What is the inverse matrix of some projection Π_S? Good luck finding that one. The whole point of projections is to send some part of the input vectors to zero (the orthogonal part), so a projection is inherently many-to-one and therefore not invertible. You can also see this from its matrix representation: if a matrix does not have full rank, then it is not invertible.
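Both claims above, that R_−θ undoes R_θ, and that a projection matrix is rank-deficient and hence not invertible, can be verified numerically. A sketch with an assumed angle of 30°:

```python
import numpy as np

def R(theta):
    """Matrix of the counterclockwise rotation by the angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = np.deg2rad(30)

# The inverse of rotating by +theta is rotating by -theta:
assert np.allclose(R(-theta) @ R(theta), np.eye(2))
assert np.allclose(R(theta) @ R(-theta), np.eye(2))

# A projection, by contrast, does not have full rank, so it has no inverse.
M_d = np.array([[0.5, 0.5],
                [0.5, 0.5]])      # projection onto the line y = x
assert np.linalg.matrix_rank(M_d) == 1
```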

Non-standard basis probing

At this point I am sure you feel confident to face any linear transformation T : R² → R² and find its matrix M_T ∈ R^(2×2) by probing with the standard basis. But what if you are not allowed to probe T with the standard basis? What if you are given the outputs of T for some other basis {v₁, v₂}:

    T( (v₁ₓ, v₁ᵧ)ᵀ ) = (t₁ₓ, t₁ᵧ)ᵀ,    T( (v₂ₓ, v₂ᵧ)ᵀ ) = (t₂ₓ, t₂ᵧ)ᵀ.

Can we find the matrix M_T given this data? Yes, we can. Because the vectors form a basis, we can reconstruct the information about the matrix M_T from the input-output data provided. We are looking for the four unknowns m₁₁, m₁₂, m₂₁, and m₂₂ that make up the matrix M_T:

    M_T = [ m₁₁  m₁₂ ]
          [ m₂₁  m₂₂ ].

Luckily, the input-output data allows us to write four equations:

    m₁₁ v₁ₓ + m₁₂ v₁ᵧ = t₁ₓ,
    m₂₁ v₁ₓ + m₂₂ v₁ᵧ = t₁ᵧ,
    m₁₁ v₂ₓ + m₁₂ v₂ᵧ = t₂ₓ,
    m₂₁ v₂ₓ + m₂₂ v₂ᵧ = t₂ᵧ.

We can solve this system of equations using the usual techniques and find the coefficients m₁₁, m₁₂, m₂₁, and m₂₂. Let's see how to do this in more detail. We can think of the entries of M_T as a 4 × 1 vector of unknowns x = (m₁₁, m₁₂, m₂₁, m₂₂)ᵀ and then rewrite the four equations as a single matrix equation Ax = b:

    [ v₁ₓ  v₁ᵧ  0    0   ] [ m₁₁ ]   [ t₁ₓ ]
    [ 0    0    v₁ₓ  v₁ᵧ ] [ m₁₂ ] = [ t₁ᵧ ]
    [ v₂ₓ  v₂ᵧ  0    0   ] [ m₂₁ ]   [ t₂ₓ ]
    [ 0    0    v₂ₓ  v₂ᵧ ] [ m₂₂ ]   [ t₂ᵧ ].

We can then solve for x by computing x = A⁻¹b. As you can see, it is a little more work than probing with the standard basis, but it is still doable.

Eigenspaces

Probing the transformation T with any basis should give us sufficient information to determine its matrix with respect to the standard basis

using the above procedure. Given the freedom we have in choosing the probing basis, is there a natural basis for probing each transformation T? The standard basis is good for computing the matrix representation, but perhaps there is another choice of basis that would make the abstract description of T simpler.

Indeed, this is the case. For many linear transformations there exists a basis {e₁, e₂, …} such that the action of T on the basis vector e_i is equivalent to scaling e_i by a constant λ_i:

    T(e_i) = λ_i e_i.

Recall, for example, how projections leave some vectors unchanged (multiply by 1) and send some vectors to zero (multiply by 0). These subspaces of the input space are specific to each transformation and are called the eigenspaces ("own spaces") of the transformation T.

As another example, consider the reflection R_x, which has two eigenspaces. The space of vectors that are left unchanged (the eigenspace corresponding to λ = 1) is spanned by the vector (1, 0)ᵀ:

    R_x( (1, 0)ᵀ ) = (1, 0)ᵀ = 1 · (1, 0)ᵀ.

The space of vectors that become the exact negatives of themselves (the eigenspace corresponding to λ = −1) is spanned by (0, 1)ᵀ:

    R_x( (0, 1)ᵀ ) = (0, −1)ᵀ = −1 · (0, 1)ᵀ.

From the theoretical point of view, describing the action of T in its natural basis is the best way to understand what it does. For each of the eigenvectors in the various eigenspaces of T, the action of T is a simple scalar multiplication! In the next section we will study the notions of eigenvalues and eigenvectors in more detail. Note, however, that you are already familiar with the special case of the zero eigenspace, which we call the null space: the action of T on the vectors in its null space is equivalent to multiplication by the scalar 0.

6.3 Change of basis for matrices

TODO
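Two ideas from the end of the previous section, recovering M_T from non-standard-basis probes and the eigenspaces of R_x, can be checked numerically. A sketch; the basis {v₁, v₂} and the input-output data t₁, t₂ are assumed values chosen for illustration:

```python
import numpy as np

# Part 1: recover M_T from probes with a non-standard basis {v1, v2}.
# The input-output data below is an assumption chosen for illustration.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
t1, t2 = np.array([3.0, 1.0]), np.array([1.0, 3.0])

# Unknowns x = (m11, m12, m21, m22); one equation per output coordinate.
A = np.array([[v1[0], v1[1], 0.0,   0.0  ],
              [0.0,   0.0,   v1[0], v1[1]],
              [v2[0], v2[1], 0.0,   0.0  ],
              [0.0,   0.0,   v2[0], v2[1]]])
b = np.array([t1[0], t1[1], t2[0], t2[1]])

M_T = np.linalg.solve(A, b).reshape(2, 2)
assert np.allclose(M_T @ v1, t1) and np.allclose(M_T @ v2, t2)

# Part 2: the two eigenspaces of the reflection R_x.
M_Rx = np.array([[1.0,  0.0],
                 [0.0, -1.0]])
assert np.allclose(M_Rx @ np.array([1.0, 0.0]), np.array([1.0, 0.0]))    # lambda = +1
assert np.allclose(M_Rx @ np.array([0.0, 1.0]), np.array([0.0, -1.0]))   # lambda = -1
```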