Linear Algebra Notes for Marsden and Tromba Vector Calculus
n-dimensional Euclidean Space and Matrices

Definition of n-space

As was learned in Math b, a point in Euclidean three-space can be thought of in any of three ways: (i) as the set of triples (x, y, z), where x, y, and z are real numbers; (ii) as the set of points in space; (iii) as the set of directed line segments in space, based at the origin.

The first of these points of view is easily extended from 3 to any number of dimensions. We define R^n, where n is a positive integer (possibly greater than 3), as the set of all ordered n-tuples (x_1, x_2, ..., x_n), where the x_i are real numbers. For instance, (1, 5, 0, 4) is an element of R^4. The set R^n is known as Euclidean n-space, and we may think of its elements a = (a_1, a_2, ..., a_n) as vectors or n-vectors. By setting n = 1, 2, or 3, we recover the line, the plane, and three-dimensional space, respectively.

Addition and Scalar Multiplication

We begin our study of Euclidean n-space by introducing algebraic operations analogous to those learned for R^2 and R^3. Addition and scalar multiplication are defined as follows:

(i) (a_1, a_2, ..., a_n) + (b_1, b_2, ..., b_n) = (a_1 + b_1, a_2 + b_2, ..., a_n + b_n); and

(ii) for any real number α, α(a_1, a_2, ..., a_n) = (αa_1, αa_2, ..., αa_n).

Informal Discussion. What is the point of going from R, R^2, and R^3, which seem comfortable and real-world, to R^n? Our world is, after all, three-dimensional, not n-dimensional! First, it is true that the bulk of multivariable calculus is about R^2 and R^3. However, many of the ideas work in R^n with little extra effort, so why not do it? Second, the ideas really are useful! For instance, if you are studying a chemical reaction involving 5 chemicals, you will probably want to store and manipulate their concentrations as a 5-tuple, that is, an element of R^5. The laws governing chemical reaction rates also demand we do calculus in this 5-dimensional space.
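These componentwise operations are exactly what an array library computes. As a minimal sketch (the entries below are made up for illustration, and NumPy is not part of the notes themselves), a vector in R^5 can be stored as a 1-D array and added or scaled entrywise:

    import numpy as np

    # Two vectors in R^5, e.g. concentrations of 5 chemicals (illustrative values).
    a = np.array([1.0, 5.0, 0.5, 4.0, 2.0])
    b = np.array([0.5, 1.0, 2.0, 1.0, 3.0])

    print(a + b)    # componentwise sum (a_1 + b_1, ..., a_5 + b_5)
    print(2.0 * a)  # scalar multiplication (2a_1, ..., 2a_5)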
The Standard Basis of R^n

The n vectors in R^n defined by

    e_1 = (1, 0, 0, ..., 0),  e_2 = (0, 1, 0, ..., 0),  ...,  e_n = (0, 0, 0, ..., 1)

are called the standard basis vectors of R^n, and they generalize the three mutually orthogonal unit vectors i, j, k of R^3. Any vector a = (a_1, a_2, ..., a_n) can be written in terms of the e_i's as

    a = a_1 e_1 + a_2 e_2 + ... + a_n e_n.

Dot Product and Length

Recall that for two vectors a = (a_1, a_2, a_3) and b = (b_1, b_2, b_3) in R^3, we defined the dot product or inner product a · b to be the real number

    a · b = a_1 b_1 + a_2 b_2 + a_3 b_3.

This definition easily extends to R^n; specifically, for a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n), we define

    a · b = a_1 b_1 + a_2 b_2 + ... + a_n b_n.

We also define the length or norm of a vector a by the formula

    length of a = ||a|| = √(a · a) = √(a_1^2 + a_2^2 + ... + a_n^2).

The algebraic properties 1-5 for the dot product in three-space are still valid for the dot product in R^n. Each can be proven by a simple computation.

Vector Spaces and Inner Product Spaces

The notion of a vector space focusses on having a set of objects called vectors that one can add and multiply by scalars, where these operations obey the familiar rules of vector addition. Thus, we refer to R^n as an example of a vector space (also called a linear space). For example, the space of all continuous functions f defined on the interval [0, 1] is a vector space: the student is familiar with how to add functions and how to multiply them by scalars, and these operations obey algebraic properties similar to those of vector addition (such as, for example, α(f + g) = αf + αg).

This same space of functions also provides an example of an inner product space, that is, a vector space in which one has a dot product that satisfies the properties 1-5 (page 669 in Calculus, Chapter 3). Namely, we define the inner product of f and g to be

    f · g = ∫_0^1 f(x) g(x) dx.

Notice that this is analogous to the definition of the inner product of two vectors in R^n, with the sum replaced by an integral.
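In code the dot product and the norm are each one line. A minimal sketch with arbitrary illustrative vectors:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 0.0, -1.0])

    dot = a @ b               # a · b = a_1 b_1 + a_2 b_2 + a_3 b_3
    norm_a = np.sqrt(a @ a)   # ||a|| = sqrt(a · a)
    print(dot, norm_a)
    print(np.linalg.norm(a))  # computes the same length directly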
The Cauchy-Schwarz Inequality

Using the algebraic properties of the dot product, we will prove algebraically the following inequality, which is normally proved by a geometric argument in R^3.

Cauchy-Schwarz Inequality. For vectors a and b in R^n, we have

    |a · b| ≤ ||a|| ||b||.

If either a or b is zero, the inequality reduces to 0 ≤ 0, so we can assume that both are nonzero. Then

    0 ≤ || a/||a|| - b/||b|| ||^2
      = (a/||a||) · (a/||a||) - 2 (a · b)/(||a|| ||b||) + (b/||b||) · (b/||b||)
      = 1 - 2 (a · b)/(||a|| ||b||) + 1;

that is,

    2 (a · b)/(||a|| ||b||) ≤ 2.

Rearranging, we get

    a · b ≤ ||a|| ||b||.

The same argument using a plus sign instead of a minus sign gives -(a · b) ≤ ||a|| ||b||. The two together give the stated inequality, since |a · b| = ±(a · b), depending on the sign of a · b.

If the vectors a and b in the Cauchy-Schwarz inequality are both nonzero, the inequality shows that the quotient (a · b)/(||a|| ||b||) lies between -1 and 1. Any number in the interval [-1, 1] is the cosine of a unique angle θ between 0 and π; by analogy with the 2- and 3-dimensional cases, we call

    θ = cos^(-1) ( (a · b)/(||a|| ||b||) )

the angle between the vectors a and b.

Informal Discussion. Notice that the reasoning above is opposite to the geometric reasoning normally done in R^3. That is because, in R^n, we have no geometry to begin with, so we start with algebraic definitions and define the geometric notions in terms of them.
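With this definition the angle between two n-vectors can be computed directly from the dot product. A quick sketch (the vectors are arbitrary):

    import numpy as np

    a = np.array([1.0, 0.0, 1.0, 0.0])
    b = np.array([1.0, 1.0, 0.0, 0.0])

    cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.arccos(cos_theta)  # the unique angle in [0, pi]
    print(cos_theta, theta)       # 0.5 and pi/3 for these particular vectors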
The Triangle Inequality

We can derive the triangle inequality from the Cauchy-Schwarz inequality: a · b ≤ |a · b| ≤ ||a|| ||b||, so that

    ||a + b||^2 = ||a||^2 + 2 a · b + ||b||^2 ≤ ||a||^2 + 2 ||a|| ||b|| + ||b||^2.

Hence, we get ||a + b||^2 ≤ (||a|| + ||b||)^2; taking square roots gives the following result.

Triangle Inequality. Let a and b be vectors in R^n. Then

    ||a + b|| ≤ ||a|| + ||b||.

If the Cauchy-Schwarz and triangle inequalities are written in terms of components, they look like this:

    | Σ_{i=1}^n a_i b_i | ≤ ( Σ_{i=1}^n a_i^2 )^{1/2} ( Σ_{i=1}^n b_i^2 )^{1/2};  and

    ( Σ_{i=1}^n (a_i + b_i)^2 )^{1/2} ≤ ( Σ_{i=1}^n a_i^2 )^{1/2} + ( Σ_{i=1}^n b_i^2 )^{1/2}.

The fact that the proof of the Cauchy-Schwarz inequality depends only on the properties of vectors and of the inner product means that the proof works for any inner product space and not just for R^n. For example, for the space of continuous functions on [0, 1], the Cauchy-Schwarz inequality reads

    | ∫_0^1 f(x) g(x) dx | ≤ ( ∫_0^1 [f(x)]^2 dx )^{1/2} ( ∫_0^1 [g(x)]^2 dx )^{1/2},

and the triangle inequality reads

    ( ∫_0^1 [f(x) + g(x)]^2 dx )^{1/2} ≤ ( ∫_0^1 [f(x)]^2 dx )^{1/2} + ( ∫_0^1 [g(x)]^2 dx )^{1/2}.
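The integral versions can be checked numerically by approximating the inner product with a quadrature rule. A small sketch using the trapezoidal rule, with f and g chosen arbitrarily (any continuous functions on [0, 1] would do):

    import numpy as np

    x = np.linspace(0.0, 1.0, 10001)
    f = np.sin(3 * x)   # arbitrary continuous functions on [0, 1]
    g = np.exp(-x)

    def inner(u, v):
        # approximate the inner product: integral over [0, 1] of u(x) v(x) dx
        return np.trapz(u * v, x)

    lhs_cs = abs(inner(f, g))
    rhs_cs = np.sqrt(inner(f, f)) * np.sqrt(inner(g, g))
    print(lhs_cs <= rhs_cs)    # Cauchy-Schwarz holds: True

    lhs_tri = np.sqrt(inner(f + g, f + g))
    rhs_tri = np.sqrt(inner(f, f)) + np.sqrt(inner(g, g))
    print(lhs_tri <= rhs_tri)  # triangle inequality holds: True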
Example. Verify the Cauchy-Schwarz and triangle inequalities for a = (1, 2, 0, -1) and b = (0, 1, 1, 1).

Solution. We compute

    ||a|| = √(1 + 4 + 0 + 1) = √6,
    ||b|| = √(0 + 1 + 1 + 1) = √3,
    a · b = (1)(0) + (2)(1) + (0)(1) + (-1)(1) = 1,
    a + b = (1, 3, 1, 0),
    ||a + b|| = √(1 + 9 + 1 + 0) = √11.

Thus |a · b| = 1 and ||a|| ||b|| = √6 √3 = √18 ≈ 4.24, which verifies the Cauchy-Schwarz inequality. Similarly, we can check the triangle inequality: ||a + b|| = √11 ≈ 3.32, while ||a|| + ||b|| = √6 + √3 ≈ 4.18, which is larger.

Distance

By analogy with R^3, we can define the notion of distance in R^n; namely, if a and b are points in R^n, the distance between a and b is defined to be ||a - b||, that is, the length of the vector a - b.

There is no cross product defined on R^n except for n = 3, because it is only in R^3 that there is a unique direction perpendicular to two (linearly independent) vectors. It is only the dot product that has been generalized.

Informal Discussion. By focussing on different ways of thinking about the cross product, it turns out that there is a generalization of the cross product to R^n. This is part of a calculus one learns in more advanced courses that was discovered by Cartan around 1900. This calculus is closely related to how one generalizes the main integral theorems of Green, Gauss, and Stokes to R^n.

Matrices

Generalizing 2 × 2 and 3 × 3 matrices, we can consider m × n matrices, that is, arrays of mn numbers:

    A = [ a_11  a_12  ...  a_1n ]
        [ a_21  a_22  ...  a_2n ]
        [ ...               ... ]
        [ a_m1  a_m2  ...  a_mn ]
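In code such an array is naturally a 2-D NumPy array: the shape (m, n) records the m rows and n columns, and the entry a_ij sits at index [i-1, j-1] because indexing starts at 0. A minimal sketch with made-up entries:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])  # a 2 x 3 matrix: m = 2 rows, n = 3 columns

    print(A.shape)   # (2, 3)
    print(A[0, 2])   # the entry a_13 (first row, third column): 3.0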
Note that an m × n matrix has m rows and n columns. The entry a_ij goes in the ith row and the jth column. We shall also write A as [a_ij]. A square matrix is one for which m = n.

Addition and Scalar Multiplication

We define addition and multiplication by a scalar componentwise, as we did for vectors. Given two m × n matrices A and B, we can add them to obtain a new m × n matrix C = A + B, whose ijth entry c_ij is the sum of a_ij and b_ij. It is clear that A + B = B + A. Similarly, the difference D = A - B is the matrix whose entries are d_ij = a_ij - b_ij. Given a scalar λ and an m × n matrix A, we can multiply A by λ to obtain a new m × n matrix λA = C, whose ijth entry c_ij is the product λa_ij.

Matrix Multiplication

Next we turn to the most important operation, that of matrix multiplication. If A = [a_ij] is an m × n matrix and B = [b_ij] is an n × p matrix, then AB = C is the m × p matrix whose ijth entry is

    c_ij = Σ_{k=1}^n a_ik b_kj,

which is the dot product of the ith row of A and the jth column of B.
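A brief numerical sketch of these operations (the entries are made up); note that in NumPy the matrix product is written @, while * would multiply entrywise:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 2.0]])

    print(A + B)    # componentwise sum
    print(A - B)    # componentwise difference
    print(2.5 * A)  # scalar multiplication
    print(A @ B)    # matrix product: (A @ B)[i, j] = sum over k of A[i, k] * B[k, j]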
One way to motivate matrix multiplication is via substitution of one set of linear equations into another. Suppose, for example, one has the linear equations

    a_11 x_1 + a_12 x_2 = y_1
    a_21 x_1 + a_22 x_2 = y_2

and we want to substitute (or change variables by) the expressions

    x_1 = b_11 u_1 + b_12 u_2
    x_2 = b_21 u_1 + b_22 u_2.

Then one gets the equations (as is readily checked)

    c_11 u_1 + c_12 u_2 = y_1
    c_21 u_1 + c_22 u_2 = y_2,

where the coefficient matrix [c_ij] is given by the product of the matrices [a_ij] and [b_ij]. Similar considerations can be used to motivate the multiplication of arbitrary matrices (as long as the matrices can indeed be multiplied).
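This substitution motivation is easy to confirm numerically: composing the two changes of variables gives the same result as applying the product matrix directly. A small sketch with made-up coefficients:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])  # y = A x   (coefficients a_ij, made up)
    B = np.array([[1.0, 4.0],
                  [5.0, 6.0]])  # x = B u   (coefficients b_ij, made up)

    u = np.array([7.0, -2.0])   # an arbitrary u

    y_two_steps = A @ (B @ u)   # substitute x = B u, then apply A
    y_one_step = (A @ B) @ u    # use the product matrix C = AB directly
    print(np.allclose(y_two_steps, y_one_step))  # True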
Example 4. Computing both products AB and BA for a particular pair of matrices gives two different answers. This example shows that matrix multiplication is not commutative; that is, in general, AB ≠ BA.

Note that for AB to be defined, the number of columns of A must equal the number of rows of B.

Example 5. For a suitable pair of matrices A and B, the product AB is defined but BA is not, because the number of columns of B does not equal the number of rows of A.

Example 6. Here A is a column matrix and B is a row matrix. The product AB is then a square matrix, while BA is a 1 × 1 matrix.

If we have three matrices A, B, and C such that the products AB and BC are defined, then the products (AB)C and A(BC) will be defined and equal (that is, matrix multiplication is associative). It is legitimate, therefore, to drop the parentheses and denote the product by ABC.

Example 7. For particular matrices A, B, and C, computing A(BC) and (AB)C gives the same matrix either way.
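Both facts are easy to spot-check numerically with randomly chosen square matrices: the two products AB and BA generally differ, while (AB)C and A(BC) always agree.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-3, 4, size=(3, 3)).astype(float)
    B = rng.integers(-3, 4, size=(3, 3)).astype(float)
    C = rng.integers(-3, 4, size=(3, 3)).astype(float)

    print(np.array_equal(A @ B, B @ A))           # usually False: AB != BA
    print(np.allclose((A @ B) @ C, A @ (B @ C)))  # True: multiplication is associative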
It is convenient when working with matrices to represent the vector a = (a_1, ..., a_n) in R^n by the n × 1 column matrix

    [ a_1 ]
    [ a_2 ]
    [ ... ]
    [ a_n ],

which we also denote by a.

Informal Discussion. Less frequently, one also encounters the 1 × n row matrix [a_1 a_2 ... a_n], which is called the transpose of the column matrix and is denoted by a^T. Note that an element of R^n (e.g. (4, 0, 3, 6)) is written with parentheses and commas, while a row matrix (e.g. [7 0 3]) is written with square brackets and no commas. It is also convenient at this point to use the letters x, y, z instead of a, b, c to denote vectors and their components.

Linear Transformations

If A = [a_ij] is an m × n matrix and x = (x_1, ..., x_n) is in R^n, we can form the matrix product

    Ax = [ a_11  ...  a_1n ] [ x_1 ]   [ y_1 ]
         [ ...         ... ] [ ... ] = [ ... ]
         [ a_m1  ...  a_mn ] [ x_n ]   [ y_m ].

The resulting m × 1 column matrix can be considered as a vector (y_1, ..., y_m) in R^m. In this way, the matrix A determines a function from R^n to R^m defined by y = Ax. This function, sometimes indicated by the notation x ↦ Ax to emphasize that x is transformed into Ax, is called a linear transformation, since it has the following linearity properties:

    A(x + y) = Ax + Ay   for x and y in R^n;
    A(αx) = α(Ax)        for x in R^n and α in R.

This is related to the general notion of a linear transformation, that is, a map between vector spaces that satisfies these same properties.
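The transformation determined by a matrix is just matrix-vector multiplication, and both linearity properties can be spot-checked numerically (the matrix and vectors below are made up):

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0],
                  [3.0, 1.0, 0.0]])  # A maps R^3 to R^2

    x = np.array([1.0, -1.0, 2.0])
    y = np.array([0.5, 4.0, 1.0])
    alpha = 3.0

    print(A @ x)                                          # the vector Ax in R^2
    print(np.allclose(A @ (x + y), A @ x + A @ y))        # A(x + y) = Ax + Ay
    print(np.allclose(A @ (alpha * x), alpha * (A @ x)))  # A(alpha x) = alpha(Ax)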
One can show that any linear transformation T(x) of the vector space R^n to R^m actually is of the form T(x) = Ax above, for some matrix A. In view of this, one speaks of A as the matrix of the linear transformation.

As in calculus, an important operation on transformations is their composition. The main link with matrix multiplication is this: the matrix of a composition of two linear transformations is the product of the matrices of the two transformations. Again, one can use this to motivate the definition of matrix multiplication, and it is basic in the study of linear algebra.

Informal Discussion. We will be using abstract linear algebra very sparingly. If you have had a course in linear algebra, it will only enrich your vector calculus experience. If you have not had such a course, there is no need to worry: we will provide everything that you will need for vector calculus.

Example 8. For a particular 4 × 3 matrix A, the function x ↦ Ax from R^3 to R^4 sends x = (x_1, x_2, x_3) to the vector in R^4 whose ith entry is a_i1 x_1 + a_i2 x_2 + a_i3 x_3, the dot product of the ith row of A with x.

Example 9. The following illustrates what happens to a specific point when mapped by the 4 × 3 matrix of Example 8: multiplying A by the standard basis vector e_2 gives Ae_2 = the 2nd column of A.

Inverses of Matrices

An n × n matrix A is said to be invertible if there is an n × n matrix B such that

    AB = BA = I_n,
where

    I_n = [ 1  0  ...  0 ]
          [ 0  1  ...  0 ]
          [ ...          ]
          [ 0  0  ...  1 ]

is the n × n identity matrix. The matrix I_n has the property that I_n C = C I_n = C for any n × n matrix C. We denote B by A^(-1) and call A^(-1) the inverse of A. The inverse, when it exists, is unique.

Example. For a particular 3 × 3 matrix A one can exhibit a matrix A^(-1) satisfying AA^(-1) = I_3 = A^(-1)A, as may be checked by matrix multiplication; this matrix A^(-1) is the inverse of A.

If A is invertible, the equation Ax = b can be solved for the vector x by multiplying both sides by A^(-1) to obtain x = A^(-1) b.

Solving Systems of Equations

Finding inverses of matrices is more or less equivalent to solving systems of linear equations. The fundamental fact alluded to above is that if we write a system of n linear equations in n unknowns in the form Ax = b, then the system has a unique solution for any b if and only if the matrix A has an inverse, and in this case the solution is given by x = A^(-1) b.
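In code one rarely forms the inverse explicitly: np.linalg.solve handles Ax = b directly, and np.linalg.inv returns the inverse when it exists. A sketch using the coefficient matrix of the 2 × 2 example that follows, with an arbitrarily chosen right-hand side:

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])  # coefficient matrix of the example below
    b = np.array([1.0, 4.0])    # right-hand side (u, v) = (1, 4), chosen arbitrarily

    x = np.linalg.solve(A, b)   # solves A x = b without forming the inverse
    A_inv = np.linalg.inv(A)    # the inverse, when it exists
    print(x)
    print(np.allclose(x, A_inv @ b))          # True
    print(np.allclose(A @ A_inv, np.eye(2)))  # True: A times its inverse is I_2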
Example. Consider the system of equations

    3x + y = u
    x + 2y = v,

which can be readily solved by row reduction to give

    x = (2/5)u - (1/5)v
    y = -(1/5)u + (3/5)v.

If we write the system of equations as

    [ 3  1 ] [ x ]   [ u ]
    [ 1  2 ] [ y ] = [ v ]

and the solution as

    [ x ]   [  2/5  -1/5 ] [ u ]
    [ y ] = [ -1/5   3/5 ] [ v ],

then we see that the inverse of

    [ 3  1 ]        [  2/5  -1/5 ]
    [ 1  2 ]   is   [ -1/5   3/5 ],

which can also be checked by matrix multiplication.

If one encounters an n × n system that does not have a solution, then one can conclude that the matrix of coefficients does not have an inverse. One can also conclude that the matrix of coefficients does not have an inverse if there are multiple solutions.

Determinants

One way to approach determinants is to take the definition of the determinant of a 2 × 2 and a 3 × 3 matrix as a starting point. This can then be generalized to n × n determinants. We illustrate here how to write the determinant of a 4 × 4 matrix A = [a_ij] in terms of the determinants of 3 × 3 matrices:

    det A = a_11 det A_11 - a_12 det A_12 + a_13 det A_13 - a_14 det A_14,

where A_1j denotes the 3 × 3 matrix obtained from A by deleting the first row and the jth column (note that the signs alternate +, -, +, -). Continuing this way, one defines 5 × 5 determinants in terms of 4 × 4 determinants, and so on.
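The recursive expansion translates directly into code. A minimal sketch (not efficient for large n, but it mirrors the definition exactly; the test matrix is made up):

    import numpy as np

    def det(A):
        """Determinant by cofactor expansion along the first row."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(A[1:, :], j, axis=1)  # drop row 1 and column j + 1
            total += (-1) ** j * A[0, j] * det(minor)
        return total

    B = [[1, 2, 0, 1],
         [0, 1, 3, 2],
         [4, 0, 1, 0],
         [2, 1, 0, 1]]
    print(det(B), np.linalg.det(B))  # the two values agree up to rounding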
The basic properties of 3 × 3 determinants remain valid for n × n determinants. In particular, we note the fact that if A is an n × n matrix and B is the matrix formed by adding a scalar multiple of the kth row (or column) of A to the lth row (or, respectively, column) of A, then the determinant of A is equal to the determinant of B. (This fact is used in the Example below.)

A basic theorem of linear algebra states that an n × n matrix A is invertible if and only if the determinant of A is not zero. Another basic property is that det(AB) = (det A)(det B). In this text we leave these assertions unproved.

Example. Let A be a particular 4 × 4 matrix. Find det A. Does A have an inverse?

Solution. Adding a suitable multiple of the first column to the third column and then expanding by minors of the first row reduces det A to a 3 × 3 determinant. Adding a suitable multiple of the first column to the third column of this 3 × 3 determinant and expanding again gives a nonzero value for det A, and so A has an inverse. To actually find the inverse (if this were asked), one can solve the relevant system of linear equations (by, for example, row reduction) and proceed as in the previous example.
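These determinant facts are easy to spot-check numerically (the matrices are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    # Adding a multiple of one row to another leaves the determinant unchanged.
    C = A.copy()
    C[2, :] += 5.0 * C[0, :]
    print(np.isclose(np.linalg.det(C), np.linalg.det(A)))   # True

    # det(AB) = (det A)(det B)
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))  # True

    # Invertible if and only if the determinant is nonzero.
    print(np.linalg.det(A) != 0.0)                        # True here, so the inverse exists
    print(np.allclose(A @ np.linalg.inv(A), np.eye(4)))   # True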
Exercises

Perform the calculations indicated in Exercises 1-4.

1. (, 4, 5, 6, 7) + (1, 2, 3, 4, 5) =
2. (1, 2, 3, ..., n) + (,,,, n) =
3. (1, 2, 3, 4, 5) · (5, 4, 3, 2, 1) =
4. 4(5, 4, 3, ) - 6(8, 4,, 7) =

Verify the Cauchy-Schwarz inequality and the triangle inequality for the vectors given in Exercises 5-8.

5. a = (,, ), b = (4,, )
6. a = (,,, 6), b = (3, 8, 4, )
7. i + j + k, i + k
8. i + j, 3i + 4j

Perform the calculations indicated in Exercises 9-12.

In Exercises 13-20, find the matrix product or explain why it is not defined.
In Exercises 21-24, for the given matrix A, (a) define the mapping x ↦ Ax as was done in Example 8, and (b) calculate Aa for the given vector a.
In Exercises 25-28, determine whether the given matrix has an inverse.

29. Let I be the identity matrix and let A be the given matrix. Find a matrix B such that AB = I. Check that BA = I.

30. Verify that the inverse of

    [ a  b ]        1       [  d  -b ]
    [ c  d ]   is  ------   [ -c   a ].
                   ad - bc

31. Assuming det(AB) = (det A)(det B), verify that (det A)(det A^(-1)) = 1 and conclude that if A has an inverse, then det A ≠ 0.

32. Show that, if A, B and C are n × n matrices such that AB = I_n and CA = I_n, then B = C.

33. Define the transpose A^T of an m × n matrix A as follows: A^T is the n × m matrix whose ijth entry is the jith entry of A. For instance,

    [ a  b ]^T   [ a  c ]
    [ c  d ]   = [ b  d ],

and

    [ a  b ]^T
    [ d  e ]    = [ a  d  g ]
    [ g  h ]      [ b  e  h ].

One can see directly that, if A is a 2 × 2 or 3 × 3 matrix, then det(A^T) = det A. Use this to prove the same fact for 4 × 4 matrices.
34. Let B be the m × 1 column matrix whose entries are all equal to m. If A = [a_1  a_2  ...  a_m] is any 1 × m row matrix, what is AB?

35. Prove that if A is a 4 × 4 matrix, then (a) if B is a matrix obtained from A by multiplying any row or column by a scalar λ, then det B = λ det A; and (b) det(λA) = λ^4 det A.

In Exercises 36-38, A, B, and C denote n × n matrices.

36. Is det(A + B) = det A + det B? Give a proof or counterexample.

37. Does (A + B)(A - B) = A^2 - B^2?

38. Assuming the law det(AB) = (det A)(det B), prove that det(ABC) = (det A)(det B)(det C).

39. Solve the system of equations

    x + y + z =
    3x + y + 4z = 3
    x + y = 4.

Relate your solution to the inversion problem for the matrix of coefficients. Find the inverse if it exists.

40. (a) Try to invert the first given matrix by solving the appropriate system of equations by row reduction. Find the inverse if it exists, or show that an inverse does not exist. (b) Do the same for the second given matrix.