
3 MATH FACTS

3.1 Vectors

3.1.1 Definition

We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms of its components with respect to a reference system as

\vec{a} = \begin{bmatrix} 2 \\ 1 \\ 7 \end{bmatrix}.

The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

1. Vector addition: \vec{a} + \vec{b} = \vec{c}, e.g.,

\begin{bmatrix} 2 \\ 1 \\ 7 \end{bmatrix} + \begin{bmatrix} 3 \\ 3 \\ 2 \end{bmatrix} = \begin{bmatrix} 5 \\ 4 \\ 9 \end{bmatrix}.

Graphically, addition is stringing the vectors together head to tail.

2. Scalar multiplication:

2 \times \begin{bmatrix} 2 \\ 1 \\ 7 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \\ 14 \end{bmatrix}.

3.1.2 Vector Magnitude

The total length of a vector of dimension m, its Euclidean norm, is given by

\|\vec{x}\| = \sqrt{\sum_{i=1}^{m} x_i^2}.

This scalar is commonly used to normalize a vector to length one.
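As an illustrative sketch (not part of the original notes), the three operations above can be written in plain Python; the helper names `vec_add`, `vec_scale`, and `vec_norm` are my own:

```python
import math

def vec_add(a, b):
    # Component-wise addition: c_i = a_i + b_i
    return [ai + bi for ai, bi in zip(a, b)]

def vec_scale(s, a):
    # Scalar multiplication: every component scaled by s
    return [s * ai for ai in a]

def vec_norm(a):
    # Euclidean norm: square root of the sum of squared components
    return math.sqrt(sum(ai * ai for ai in a))

a = [2, 1, 7]
b = [3, 3, 2]
c = vec_add(a, b)                      # [5, 4, 9], as in the example
d = vec_scale(2, a)                    # [4, 2, 14]
a_hat = vec_scale(1 / vec_norm(a), a)  # normalized to length one
```

Dividing by the norm, as in the last line, is the normalization mentioned in the text.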

3.1.3 Vector Dot or Inner Product

The dot product of two vectors is a scalar equal to the sum of the products of the corresponding components:

\vec{x} \cdot \vec{y} = \vec{x}^T \vec{y} = \sum_{i=1}^{m} x_i y_i.

The dot product also satisfies

\vec{x} \cdot \vec{y} = \|\vec{x}\| \, \|\vec{y}\| \cos\theta,

where θ is the angle between the vectors.

3.1.4 Vector Cross Product

The cross product of two three-dimensional vectors \vec{x} and \vec{y} is another vector \vec{z}, \vec{x} \times \vec{y} = \vec{z}, whose

1. direction is normal to the plane formed by the other two vectors,
2. direction is given by the right-hand rule, rotating from \vec{x} to \vec{y},
3. magnitude is the area of the parallelogram formed by the two vectors (the cross product of two parallel vectors is zero), and
4. (signed) magnitude is equal to \|\vec{x}\| \, \|\vec{y}\| \sin\theta, where θ is the angle between the two vectors, measured from \vec{x} to \vec{y}.

In terms of their components,

\vec{x} \times \vec{y} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{vmatrix} = (x_2 y_3 - x_3 y_2)\,\hat{i} + (x_3 y_1 - x_1 y_3)\,\hat{j} + (x_1 y_2 - x_2 y_1)\,\hat{k}.

3.2 Matrices

3.2.1 Definition

A matrix, or array, is equivalent to a set of column vectors of the same dimension, arranged side by side, say

A = [\vec{a} \;\; \vec{b}] = \begin{bmatrix} 2 & 3 \\ 1 & 3 \\ 7 & 2 \end{bmatrix}.
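A quick sketch of both products in plain Python (the function names are mine, not from the notes); the final assertion checks property 1 of the cross product, that the result is normal to the plane of the inputs:

```python
def dot(x, y):
    # Scalar: sum of products of corresponding components
    return sum(xi * yi for xi, yi in zip(x, y))

def cross(x, y):
    # 3-D cross product, using the component formula above
    return [x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0]]

x = [1, 2, 0]
y = [3, 1, 0]
z = cross(x, y)
# z is normal to the plane of x and y, so it is orthogonal to both:
assert dot(z, x) == 0 and dot(z, y) == 0
```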

This matrix has three rows (m = 3) and two columns (n = 2); a vector is a special case of a matrix with one column. Matrices, like vectors, permit addition and scalar multiplication. We usually use an upper-case symbol to denote a matrix.

3.2.2 Multiplying a Vector by a Matrix

If A_{ij} denotes the element of matrix A in the i-th row and the j-th column, then the multiplication \vec{c} = A\vec{v} is constructed as:

c_i = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n = \sum_{j=1}^{n} A_{ij} v_j,

where n is the number of columns in A. \vec{c} will have as many rows as A has rows (m). Note that this multiplication is defined only if \vec{v} has as many rows as A has columns; they have consistent inner dimension n. The product \vec{v}A would be well-posed only if A had one row, and the proper number of columns.

There is another important interpretation of this vector multiplication. Let the subscript : indicate all rows, so that each A_{:j} is the j-th column vector. Then

\vec{c} = A\vec{v} = A_{:1} v_1 + A_{:2} v_2 + \cdots + A_{:n} v_n.

We are multiplying column vectors of A by the scalar elements of \vec{v}.

3.2.3 Multiplying a Matrix by a Matrix

The multiplication C = AB is equivalent to a side-by-side arrangement of column vectors C_{:j} = A B_{:j}, so that

C = AB = [A B_{:1} \;\; A B_{:2} \;\; \cdots \;\; A B_{:k}],

where k is the number of columns in matrix B. The same inner dimension condition applies as noted above: the number of columns in A must equal the number of rows in B. Matrix multiplication is:

1. Associative: (AB)C = A(BC).
2. Distributive: A(B + C) = AB + AC, (B + C)A = BA + CA.
3. NOT commutative: AB ≠ BA, except in special cases.
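The two multiplications above can be sketched directly in Python; this is an illustrative implementation of my own, with `matmul` built column by column exactly as in C_{:j} = A B_{:j}:

```python
def matvec(A, v):
    # c_i = sum_j A_ij v_j; v must have as many entries as A has columns
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def matmul(A, B):
    # Column by column: column j of C is A applied to column j of B
    k = len(B[0])
    cols = [matvec(A, [row[j] for row in B]) for j in range(k)]
    # Reassemble the columns side by side into C
    return [[cols[j][i] for j in range(k)] for i in range(len(A))]

A = [[2, 3], [1, 3], [7, 2]]           # the 3x2 example matrix above
assert matvec(A, [1, 1]) == [5, 4, 9]  # the sum of the two columns of A

# Non-commutativity in general:
X = [[1, 1], [0, 1]]
Y = [[1, 0], [1, 1]]
assert matmul(X, Y) != matmul(Y, X)
```

With v = (1, 1), the result is literally A_{:1} · 1 + A_{:2} · 1, the column interpretation described in the text.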

3.2.4 Common Matrices

Identity. The identity matrix is usually denoted I, and comprises a square matrix with ones on the diagonal, and zeros elsewhere, e.g.,

I_{3\times3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

The identity always satisfies A I_{n\times n} = I_{m\times m} A = A.

Diagonal Matrices. A diagonal matrix is square, and has all zeros off the diagonal. For instance, the following is a diagonal matrix:

A = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.

The product of a diagonal matrix with another diagonal matrix is diagonal, and in this case the operation is commutative.

3.2.5 Transpose

The transpose of a vector or matrix, indicated by a T superscript, results from simply swapping the row-column indices of each entry; it is equivalent to flipping the vector or matrix around the diagonal line. For example,

\vec{a} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \;\longrightarrow\; \vec{a}^T = \{1 \;\; 2 \;\; 3\}

A = \begin{bmatrix} 1 & 2 \\ 4 & 5 \\ 8 & 9 \end{bmatrix} \;\longrightarrow\; A^T = \begin{bmatrix} 1 & 4 & 8 \\ 2 & 5 & 9 \end{bmatrix}.

A very useful property of the transpose is

(AB)^T = B^T A^T.

3.2.6 Determinant

The determinant of a square matrix A is a scalar equal to the volume of the parallelepiped enclosed by the constituent vectors. The two-dimensional case is particularly easy to remember, and illustrates the principle of volume:

\det(A) = A_{11} A_{22} - A_{21} A_{12}
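A small sketch (mine, not from the notes) of the transpose and of the product rule (AB)^T = B^T A^T, checked on the 3×2 example above:

```python
def transpose(A):
    # Swap row and column indices: (A^T)_ji = A_ij
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def matmul(A, B):
    # Plain row-by-column matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [4, 5], [8, 9]]
assert transpose(A) == [[1, 4, 8], [2, 5, 9]]   # matches the example above

# The transpose-of-a-product rule: (AB)^T = B^T A^T
B = [[1, 0, 2], [3, 1, 1]]
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```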

\det\left( \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \right) = (1)(1) - (1)(-1) = 2.

(A figure here shows the parallelogram spanned by the two column vectors in the (x_1, x_2) plane.)

In higher dimensions, the determinant is more complicated to compute. The general formula allows one to pick a row k, perhaps the one containing the most zeros, and apply

\det(A) = \sum_{j=1}^{n} A_{kj} (-1)^{k+j} \Delta_{kj},

where \Delta_{kj} is the determinant of the sub-matrix formed by deleting the k-th row and the j-th column. The formula is symmetric, in the sense that one could also target the k-th column:

\det(A) = \sum_{j=1}^{n} A_{jk} (-1)^{k+j} \Delta_{jk}.

If the determinant of a matrix is zero, then the matrix is said to be singular; there is no volume, and this results from the fact that the constituent vectors do not span the matrix dimension. For instance, in two dimensions, a singular matrix has its vectors colinear; in three dimensions, a singular matrix has all its vectors lying in a (two-dimensional) plane. Note also that det(A) = det(A^T). If det(A) ≠ 0, then the matrix is said to be nonsingular.

3.2.7 Inverse

The inverse of a square matrix A, denoted A^{-1}, satisfies A A^{-1} = A^{-1} A = I. Its computation requires the determinant above, and the following definition of the n × n adjoint matrix:

\mathrm{adj}(A) = \begin{bmatrix} (-1)^{1+1} \Delta_{11} & \cdots & (-1)^{1+n} \Delta_{1n} \\ \vdots & & \vdots \\ (-1)^{n+1} \Delta_{n1} & \cdots & (-1)^{n+n} \Delta_{nn} \end{bmatrix}^T.
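The cofactor expansion is directly recursive, which makes a compact (if inefficient) sketch; this illustrative version expands along the first row, so k = 1 in the formula above:

```python
def det(A):
    # Cofactor (Laplace) expansion along the first row.
    # Indices here are 0-based, so the sign (-1)**(1+j) becomes (-1)**j.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Delta: determinant of the sub-matrix with row 0 and column j deleted
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += A[0][j] * (-1) ** j * det(minor)
    return total

assert det([[1, -1], [1, 1]]) == 2    # the 2-D example above
assert det([[1, 2], [2, 4]]) == 0     # colinear columns: singular
```

Expansion along the row with the most zeros, as the text suggests, skips most of the recursive calls; this sketch does not bother.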

Once this computation is made, the inverse follows from

A^{-1} = \frac{\mathrm{adj}(A)}{\det(A)}.

If A is singular, i.e., det(A) = 0, then the inverse does not exist. The inverse finds common application in solving systems of linear equations such as

A\vec{x} = \vec{b} \quad \Longrightarrow \quad \vec{x} = A^{-1}\vec{b}.

3.2.8 Eigenvalues and Eigenvectors

A typical eigenvalue problem is stated as

A\vec{x} = \lambda\vec{x},

where A is an n × n matrix, \vec{x} is a column vector with n elements, and λ is a scalar. We ask for which nonzero vectors \vec{x} (right eigenvectors) and scalars λ (eigenvalues) the equation is satisfied. Since the above is equivalent to (A - \lambda I)\vec{x} = 0, it is clear that det(A - \lambda I) = 0. This observation leads to the solutions for λ; here is an example for the two-dimensional case:

A = \begin{bmatrix} 4 & 5 \\ -2 & -3 \end{bmatrix}

A - \lambda I = \begin{bmatrix} 4-\lambda & 5 \\ -2 & -3-\lambda \end{bmatrix}

\det(A - \lambda I) = (4-\lambda)(-3-\lambda) + 10 = \lambda^2 - \lambda - 2 = (\lambda + 1)(\lambda - 2).

Thus, A has two eigenvalues, λ₁ = −1 and λ₂ = 2. Each is associated with a right eigenvector \vec{x}. In this example,

(A - \lambda_1 I)\vec{x}_1 = 0
\begin{bmatrix} 5 & 5 \\ -2 & -2 \end{bmatrix} \vec{x}_1 = 0
\vec{x}_1 = \{\sqrt{2}/2, \; -\sqrt{2}/2\}^T

(A - \lambda_2 I)\vec{x}_2 = 0
\begin{bmatrix} 2 & 5 \\ -2 & -5 \end{bmatrix} \vec{x}_2 = 0
\vec{x}_2 = \{5\sqrt{29}/29, \; -2\sqrt{29}/29\}^T.
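For the 2×2 case the eigenvalues solve the quadratic λ² − (trace)λ + det = 0 directly; this is a hedged sketch of my own (the helper `eig2` is not from the notes, and it assumes real eigenvalues), checked against the worked example:

```python
import math

def eig2(A):
    # Roots of the characteristic polynomial lambda^2 - tr*lambda + det = 0
    (a, b), (c, d) = A
    tr, dt = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * dt)   # assumes real eigenvalues
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[4, 5], [-2, -3]]
lam1, lam2 = eig2(A)                     # -1.0 and 2.0, as in the example

# Verify A x1 = lambda1 x1 for x1 = (sqrt(2)/2, -sqrt(2)/2):
x1 = [math.sqrt(2) / 2, -math.sqrt(2) / 2]
Ax1 = [4 * x1[0] + 5 * x1[1], -2 * x1[0] - 3 * x1[1]]
assert all(abs(Ax1[i] - lam1 * x1[i]) < 1e-12 for i in range(2))
```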

Eigenvectors are defined only within an arbitrary constant, i.e., if \vec{x} is an eigenvector then c\vec{x} is also an eigenvector for any c ≠ 0. They are often normalized to have unity magnitude and positive first element (as above). The condition rank(A - \lambda_i I) = rank(A) - 1 indicates that there is only one eigenvector for the eigenvalue λᵢ; more precisely, a unique direction for the eigenvector, since the magnitude can be arbitrary. If the left-hand side rank is less than this, then there are multiple eigenvectors that go with λᵢ.

The above discussion relates only to the right eigenvectors, generated from the equation A\vec{x} = \lambda\vec{x}. Left eigenvectors, defined as \vec{y}^T A = \lambda \vec{y}^T, are also useful for many problems, and can be defined simply as the right eigenvectors of A^T. A and A^T share the same eigenvalues λ, since they share the same determinant. Example:

(A^T - \lambda_1 I)\vec{y}_1 = 0
\begin{bmatrix} 5 & -2 \\ 5 & -2 \end{bmatrix} \vec{y}_1 = 0
\vec{y}_1 = \{2\sqrt{29}/29, \; 5\sqrt{29}/29\}^T

(A^T - \lambda_2 I)\vec{y}_2 = 0
\begin{bmatrix} 2 & -2 \\ 5 & -5 \end{bmatrix} \vec{y}_2 = 0
\vec{y}_2 = \{\sqrt{2}/2, \; \sqrt{2}/2\}^T.

3.2.9 Modal Decomposition

For simplicity, we consider matrices that have unique eigenvectors for each eigenvalue. The right and left eigenvectors corresponding to a particular eigenvalue λ can be defined to have unity dot product, that is \vec{x}_i^T \vec{y}_i = 1, with the normalization noted above. The dot products of a left eigenvector with the right eigenvectors corresponding to different eigenvalues are zero. Thus, if the sets of right and left eigenvectors are, respectively,

V = [\vec{x}_1 \; \cdots \; \vec{x}_n] \quad \text{and} \quad W = [\vec{y}_1 \; \cdots \; \vec{y}_n],

then we have

W^T V = I, \quad \text{or} \quad W^T = V^{-1}.

Next, construct a diagonal matrix containing the eigenvalues:

\Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix};
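The left-eigenvector relation can be checked numerically on the example above; this short sketch (mine, with the row-times-matrix product written out by hand) verifies \vec{y}^T A = \lambda \vec{y}^T for λ₁ = −1:

```python
import math

A = [[4, 5], [-2, -3]]
lam1 = -1
y1 = [2 * math.sqrt(29) / 29, 5 * math.sqrt(29) / 29]  # left eigenvector for lam1

# Row vector times matrix: (y^T A)_j = sum_i y_i A_ij
yTA = [y1[0] * A[0][0] + y1[1] * A[1][0],
       y1[0] * A[0][1] + y1[1] * A[1][1]]

# y^T A = lam1 y^T, component by component
assert all(abs(yTA[j] - lam1 * y1[j]) < 1e-12 for j in range(2))
```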

it follows that

AV = V\Lambda \quad \Longrightarrow \quad A = V \Lambda W^T = \sum_{i=1}^{n} \lambda_i \vec{v}_i \vec{w}_i^T.

Hence A can be written as a sum of modal components.[3]

[3] By carrying out successive multiplications, it can be shown that A^k has its eigenvalues at \lambda_i^k, and keeps the same eigenvectors as A.
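The decomposition A = V Λ W^T can be confirmed exactly for the 2×2 worked example, using exact rational arithmetic so no tolerance is needed. This sketch (mine) takes unnormalized right eigenvectors as the columns of V and obtains W^T as V^{-1}, per W^T V = I:

```python
from fractions import Fraction as F

A = [[F(4), F(5)], [F(-2), F(-3)]]
V = [[F(1), F(5)], [F(-1), F(-2)]]   # columns: x1 ~ (1, -1), x2 ~ (5, -2)
L = [[F(-1), F(0)], [F(0), F(2)]]    # Lambda = diag(-1, 2)

# W^T = V^{-1}, via the 2x2 inverse formula (1/det) [[d, -b], [-c, a]]
detV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
WT = [[ V[1][1] / detV, -V[0][1] / detV],
      [-V[1][0] / detV,  V[0][0] / detV]]

def mul(X, Y):
    # Plain matrix product for conforming lists of lists
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

assert mul(WT, V) == [[1, 0], [0, 1]]   # W^T V = I
assert mul(mul(V, L), WT) == A          # A = V Lambda W^T, exactly
```

Scaling the eigenvectors drops out: any nonzero column scaling of V is absorbed by V^{-1}, so the reconstruction is unchanged.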

MIT OpenCourseWare
http://ocw.mit.edu

2.017J Design of Electromechanical Robotic Systems
Fall 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.