Elements of Linear Algebra: Q&A


1 Elements of Linear Algebra: Q&A A matrix is a rectangular array of objects (elements that are numbers, functions, etc.) with its size indicated by the number of rows and columns, i.e., an m × n matrix A has m rows and n columns. If A is an m × n matrix, A^T is an n × m matrix. The determinant is a scalar computed from the elements of a square matrix and is only defined for a square matrix; it is not the sum of the diagonal elements (that sum is the trace). The determinant of a matrix can be computed using the Laplace expansion, in which a row or column is expanded in terms of minors and cofactors. An orthogonal matrix is an invertible n × n matrix Q with the property Q^−1 = Q^T.
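The defining property of an orthogonal matrix can be checked numerically. A minimal sketch (assuming NumPy is available; the rotation angle is an arbitrary illustrative value):

```python
import numpy as np

# A 2-D rotation matrix is orthogonal, so its inverse equals its transpose.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(np.linalg.inv(Q), Q.T)   # Q^-1 == Q^T
assert np.allclose(Q.T @ Q, np.eye(2))      # columns are orthonormal
```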

2 Elements of Linear Algebra: Q&A Given a system of m linear equations in n variables x_i (i = 1, …, n), written as Ax = b, the system is either: 1. Consistent, with a unique (one) solution x. 2. Consistent, with infinitely many possible solutions. 3. Inconsistent, with no solutions. If n > m, the system has more unknowns than equations; it is underdetermined. If the system is consistent, some of the variables can be chosen arbitrarily and the remaining variables defined in terms of the arbitrary ones. If n < m, the system has more equations than unknowns; it is overdetermined.
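The three cases can be distinguished by comparing rank(A) with rank([A|b]) and n. A sketch (the `classify` helper and the sample systems are illustrative, not from the notes):

```python
import numpy as np

def classify(A, b):
    """Classify Ax = b by comparing rank(A), rank([A|b]), and n."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if rA < rAb:
        return "inconsistent"
    return "unique" if rA == n else "infinitely many"

A = np.array([[1.0, 1.0], [1.0, -1.0]])
assert classify(A, np.array([2.0, 0.0])) == "unique"
assert classify(np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([2.0, 4.0])) == "infinitely many"
assert classify(np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([2.0, 5.0])) == "inconsistent"
```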

3 Elements of Linear Algebra: Q&A P3.16. Invertible matrix properties Assume that A is an n × n invertible matrix. Which statements are true? a. The system Ax = b has a unique solution for every vector b in R^n. b. The rows (and columns) of A are linearly independent. c. det(A) = 0. d. A can be reduced (by elementary operations) to the identity matrix. e. The rank of A is n. f. The rows of A span R^n.

4 Elements of Linear Algebra: Q&A P3.18. Linear Independence Consider the equations of combustion in which a mixture of CO, H2, and CH4 is burned with O2 to form CO2, CO, and H2O:

CO + 1/2 O2 = CO2
H2 + 1/2 O2 = H2O
CH4 + 2 O2 = CO2 + 2 H2O
CH4 + 3/2 O2 = CO + 2 H2O

Treating the compounds as real variables, determine whether the equations are independent. If not, write the dependent equation(s) in terms of the independent ones.
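One way to answer P3.18 is to write each reaction as a stoichiometric vector and check the rank of the resulting matrix. A sketch (species ordering and the rank check are illustrative choices, assuming NumPy):

```python
import numpy as np

# Stoichiometric vectors (products minus reactants) over the species
# [CO, H2, CH4, O2, CO2, H2O], one row per reaction above.
R = np.array([
    [-1.0,  0.0,  0.0, -0.5, 1.0, 0.0],   # CO  + 1/2 O2 -> CO2
    [ 0.0, -1.0,  0.0, -0.5, 0.0, 1.0],   # H2  + 1/2 O2 -> H2O
    [ 0.0,  0.0, -1.0, -2.0, 1.0, 2.0],   # CH4 + 2 O2   -> CO2 + 2 H2O
    [ 1.0,  0.0, -1.0, -1.5, 0.0, 2.0],   # CH4 + 3/2 O2 -> CO  + 2 H2O
])

assert np.linalg.matrix_rank(R) == 3       # only three independent reactions
assert np.allclose(R[3], R[2] - R[0])      # reaction 4 = reaction 3 - reaction 1
```

So the fourth reaction is dependent: it is the third reaction minus the first.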

5 Elements of Linear Algebra: AEM P2.35
[Worked solution shown as images on slides 5 and 6.]

7 Elements of Linear Algebra Eigenvalues & Eigenvectors As an engineer, you have undoubtedly been introduced to eigenvalues and possibly eigenvectors. We develop background here and will later make use of eigenvalues/eigenvectors in the discussion of second- and higher-order tensors. Given the linear equation Ax = λx, the vector x is called the eigenvector (characteristic vector) and the scalar λ is the eigenvalue (characteristic value) of matrix A; λ characterizes the length (and sense) of Ax relative to the eigenvector x. The spectrum of A is the set of eigenvalues of A, and the spectral radius of A is the largest of the absolute values of the eigenvalues.

8 Elements of Linear Algebra Example: Find the eigenvalues and eigenvectors of

A = | 3 0 0 |
    | 5 4 0 |
    | 3 6 1 |

Solution: 1. Compute the roots of the characteristic polynomial:

D(λ) = | 3−λ  0    0   |
       | 5    4−λ  0   | = (3−λ)(4−λ)(1−λ) = 0
       | 3    6    1−λ |

Roots: λ1 = 3, λ2 = 4, λ3 = 1.

9 Elements of Linear Algebra These roots are the eigenvalues. They form the spectrum, with a spectral radius of 4.
2. Compute the eigenvectors:

λ1 = 3:
0 = 0
5x1 + x2 = 0
3x1 + 6x2 − 2x3 = 0
Set x1 = 1: x = (1, −5, −27/2), or x = (2, −10, −27).

λ2 = 4:
−x1 = 0
5x1 = 0
3x1 + 6x2 − 3x3 = 0
Set x2 = 1: x = (0, 1, 2).

10 Elements of Linear Algebra

λ3 = 1:
2x1 = 0
5x1 + 3x2 = 0
3x1 + 6x2 = 0
Set x3 = 1: x = (0, 0, 1).

Properties of eigenvalues and eigenvectors of an n × n square matrix A:
1. A has at least one eigenvalue and at most n numerically different eigenvalues, but it may have fewer than n.
2. If x is an eigenvector of a matrix A corresponding to an eigenvalue λ, so is kx for any k ≠ 0, i.e., Ax = λx implies A(kx) = k(Ax) = λ(kx).

11 Elements of Linear Algebra 3. M_λ is the algebraic multiplicity, the number of times the root λ of the characteristic polynomial is repeated, and m_λ is the geometric multiplicity, the number of independent eigenvectors corresponding to λ. According to property 1 above, the sum of the algebraic multiplicities equals n, and in general m_λ ≤ M_λ. 4. A real matrix may have complex eigenvalues, which occur in conjugate pairs, and complex eigenvectors. 5. The eigenvalues of a symmetric matrix (A^T = A) are real. 6. The eigenvalues of a skew-symmetric matrix (A^T = −A) are pure imaginary or zero.
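The worked example and the scaling property can be checked numerically. A sketch, assuming NumPy (the matrix is the one from the example above; the hand-computed eigenvector for λ = 3 is used as a spot check):

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [5.0, 4.0, 0.0],
              [3.0, 6.0, 1.0]])

lam, X = np.linalg.eig(A)
assert np.allclose(sorted(lam), [1.0, 3.0, 4.0])   # spectrum {1, 3, 4}
assert np.isclose(max(abs(lam)), 4.0)              # spectral radius 4

# Eigenvector found by hand for lambda = 3, and a scalar multiple of it:
x = np.array([1.0, -5.0, -13.5])
assert np.allclose(A @ x, 3.0 * x)
assert np.allclose(A @ (2 * x), 3.0 * (2 * x))     # kx is also an eigenvector
```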

12 Elements of Linear Algebra Eigenvectors & Diagonalization Similar matrices have the same spectrum (i.e., the same eigenvalues): an n × n matrix Â is similar to A if Â = T^−1 A T for some invertible n × n matrix T. This is an important property, particularly for numerical analysis, where matrices are diagonalized (or nearly diagonalized) to compute approximations to eigenvalues and eigenvectors. The eigenvectors corresponding to a set of distinct eigenvalues form a linearly independent set; thus, these eigenvectors form a basis. If an n × n matrix A has a basis of eigenvectors, collected as the columns of X, then D = X^−1 A X is diagonal, with the eigenvalues of A as the entries on the main diagonal.
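The diagonalization D = X^−1 A X can be demonstrated on the example matrix, whose eigenvalues are distinct so its eigenvectors form a basis. A sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [5.0, 4.0, 0.0],
              [3.0, 6.0, 1.0]])

lam, X = np.linalg.eig(A)          # columns of X are eigenvectors
D = np.linalg.inv(X) @ A @ X       # D = X^-1 A X

# D is diagonal with the eigenvalues on the main diagonal.
assert np.allclose(D, np.diag(lam), atol=1e-8)
```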

13 The vector algebra included operations involving sums and products of vectors. The definitions and operations of linear algebra provide the basis for the linear transformations and matrix operations useful in tensor analysis. The vector calculus allows us to apply the methods of differential and integral calculus in general tensor analysis. We begin with the usual basic definitions and operations.

Derivative of a Vector Function of a Scalar
[Figure: vectors a(t) and a(t + Δt), with increment Δa = Δa ê_s.]

da/dt = lim(Δt→0) [a(t + Δt) − a(t)] / Δt

Writing the increment as Δa = Δa ê_s, with magnitude Δa = Δs,

da/dt = lim(Δt→0) (Δs/Δt) ê_s = (ds/dt) ê_s

14 Product Rules

d/dt (a · b) = (da/dt) · b + a · (db/dt)
d/dt (a × b) = (da/dt) × b + a × (db/dt)   (order preserved)

Note that because a vector is composed of two distinct parts, magnitude and direction, a nonzero derivative could result from: a) a change in magnitude but not direction, b) a change in direction but not magnitude, or c) a change in both magnitude and direction, as illustrated in the previous diagram.

15 For case b), a constant-length vector, |a| = const does not imply da/dt = 0:

a · a = a^2 = const  ⇒  d/dt (a · a) = 2 a · (da/dt) = 0  ⇒  da/dt ⊥ a

In general coordinates, the base vectors are not necessarily constant in magnitude or direction:

a = a^i e_i  ⇒  da/dt = (da^i/dt) e_i + a^i (de_i/dt)

By definition, the base vectors of Cartesian systems have constant magnitude and direction: de_i/dt = 0.

16 Example: Compute the acceleration of a body in a circular orbit.

ω = ω ê_z,  r = r ê_r,  v = ω × r = v ê_θ

[Figure: position r and velocity v at times t and t + Δt, with velocity increment Δv.]

17
a = dv/dt = d/dt (ω × r) = (dω/dt) × r + ω × (dr/dt) = ω × v   (dω/dt = 0)

ω × v = ω × (ω × r) = (v^2/r^2) [ê_z × (ê_z × ê_r)] r
      = (v^2/r) [(ê_z · ê_r) ê_z − (ê_z · ê_z) ê_r]
      = −(v^2/r) ê_r

18 Example: Prove

d/dt [ a · (da/dt × d²a/dt²) ] = a · (da/dt × d³a/dt³)

Solution:

d/dt [ a · (da/dt × d²a/dt²) ]
  = (da/dt) · (da/dt × d²a/dt²) + a · (d²a/dt² × d²a/dt²) + a · (da/dt × d³a/dt³)

The first term vanishes because da/dt is orthogonal to da/dt × d²a/dt², and the second vanishes because d²a/dt² × d²a/dt² = 0, leaving

  = a · (da/dt × d³a/dt³).  Q.E.D.
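The identity can be spot-checked numerically on a sample curve. A sketch, assuming NumPy; the curve a(t) = (t, t², t³) and the finite-difference step are illustrative choices, with the derivatives written out by hand:

```python
import numpy as np

# Sample curve a(t) = (t, t^2, t^3) and its hand-computed derivatives.
def a(t):    return np.array([t, t**2, t**3])
def da(t):   return np.array([1.0, 2*t, 3*t**2])
def dda(t):  return np.array([0.0, 2.0, 6*t])
def ddda(t): return np.array([0.0, 0.0, 6.0])

def lhs(t):  # the scalar triple product a . (a' x a'')
    return np.dot(a(t), np.cross(da(t), dda(t)))

h = 1e-5
for t in [0.5, 1.0, 2.0]:
    d_lhs = (lhs(t + h) - lhs(t - h)) / (2 * h)       # central difference
    rhs = np.dot(a(t), np.cross(da(t), ddda(t)))
    assert abs(d_lhs - rhs) < 1e-6                    # identity holds
```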

19 Cartesian Coordinate Systems A general Cartesian coordinate system is oblique, i.e., the basis vectors are generally not all mutually orthogonal. As stated earlier, however, the basis vectors of a Cartesian system are constant in magnitude and direction. The usual convention is to refer to the familiar orthonormal Cartesian system as the Cartesian system, with basis vectors usually denoted as { î, ĵ, k̂ }, { ê_x, ê_y, ê_z }, or { î_i }.

20
r = r(x, y, z) = r(x1, x2, x3)
r = x_j î_j

[Figure: Cartesian axes x, y, z with unit vectors î, ĵ, k̂ and position vector r.]

21 In any coordinate system, the differential distance between two points is given by the differential arclength, computed from dr · dr. In particular, for the Cartesian system,

dr · dr = (ds)^2 = dx_i dx_i = (dx)^2 + (dy)^2 + (dz)^2

[Figure: differential element with edges dx, dy, dz and diagonal ds.]

22 Curvilinear Coordinates Define a coordinate system (q^1, q^2, q^3) with the coordinate transformation from the Cartesian system,

q^1 = q^1(x1, x2, x3)
q^2 = q^2(x1, x2, x3)
q^3 = q^3(x1, x2, x3)

23 [Figure: coordinate surfaces q^1 = const, q^2 = const, q^3 = const and the coordinate curves formed by their intersections, with position vector r.]

24 If the transformation is linear, it defines a Cartesian system. If the transformation is nonlinear, it defines a curvilinear system. The Jacobian of the transformation is defined by the following determinant,

J = |∂x^i/∂q^j| = | ∂x1/∂q^1  ∂x1/∂q^2  ∂x1/∂q^3 |
                  | ∂x2/∂q^1  ∂x2/∂q^2  ∂x2/∂q^3 |
                  | ∂x3/∂q^1  ∂x3/∂q^2  ∂x3/∂q^3 |
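The Jacobian determinant can be computed directly from the partial derivatives. A sketch for plane polar coordinates x = r cos(q), y = r sin(q), where the known result is J = r (an illustrative transformation, assuming NumPy):

```python
import numpy as np

def jacobian(r, q):
    """Jacobian determinant |dx^i/dq^j| for x = r cos(q), y = r sin(q)."""
    J = np.array([[np.cos(q), -r * np.sin(q)],
                  [np.sin(q),  r * np.cos(q)]])
    return np.linalg.det(J)

assert np.isclose(jacobian(2.0, 0.7), 2.0)   # J = r
assert np.isclose(jacobian(5.0, 3.0), 5.0)
```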

25 If J ≠ 0, then J^−1 (the inverse Jacobian) is defined and the inverse transformation is also defined,

x1 = x1(q^1, q^2, q^3)
x2 = x2(q^1, q^2, q^3)
x3 = x3(q^1, q^2, q^3)

The position vector is r = r(q^i), i = 1, 2, 3, and a differential displacement is then

dr = (∂r/∂q^1) dq^1 + (∂r/∂q^2) dq^2 + (∂r/∂q^3) dq^3 = (∂r/∂q^i) dq^i

26 The vectors ∂r/∂q^i are tangent to the coordinate curves defined by the intersection of the coordinate surfaces (q^i = const). Using these vectors, we define a unitary basis,

e_i = ∂r/∂q^i,  i = 1, 2, 3

Note that, in general, the orientation and magnitude of the basis vectors are not constant, e.g.,

27 Oblique-Cartesian system: basis vectors have constant magnitude and orientation.
Curvilinear system: basis vectors generally have non-constant magnitude and orientation.

28 The coordinate transformation was written for a general system in terms of the original Cartesian system; we almost always write the transformations in this manner. In terms of the original Cartesian system, the unitary basis is given by

e_i = ∂r/∂q^i = (∂x^j/∂q^i) î_j,  i = 1, 2, 3

This is a linear system that is easily written in matrix format. The coefficient matrix is the Jacobian matrix,

| e_1 |   | ∂x1/∂q^1  ∂x2/∂q^1  ∂x3/∂q^1 | | î_1 |
| e_2 | = | ∂x1/∂q^2  ∂x2/∂q^2  ∂x3/∂q^2 | | î_2 |
| e_3 |   | ∂x1/∂q^3  ∂x2/∂q^3  ∂x3/∂q^3 | | î_3 |

29 Fundamental Metric Tensor In a unitary system, the square of the differential distance separating two infinitesimally spaced points is

dr · dr = (ds)^2 = (e_i · e_j) dq^i dq^j

Now define the components of the fundamental metric tensor as

g_ij ≡ e_i · e_j

Then

dr · dr = (ds)^2 = g_ij dq^i dq^j

30 In matrix format, the fundamental metric tensor is

G = | g11 g12 g13 |
    | g21 g22 g23 |
    | g31 g32 g33 |

Properties of the fundamental metric tensor:
1. Symmetric, i.e., e_i · e_j = e_j · e_i ⇒ g_ij = g_ji.
2. The norm (magnitude) of the unitary base vectors is

|e_i| = (e_i · e_i)^(1/2) = (g_ii)^(1/2)  (no summation)
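The metric components g_ij = e_i · e_j can be computed from the unitary basis. A sketch for plane polar coordinates, where the expected result is G = diag(1, r²) (illustrative sample values, assuming NumPy):

```python
import numpy as np

# Unitary basis for x = r cos(q), y = r sin(q): rows are e_r and e_q.
r, q = 2.0, 0.6
e = np.array([[np.cos(q),      np.sin(q)],        # e_r = dr_vec/dr
              [-r * np.sin(q), r * np.cos(q)]])   # e_q = dr_vec/dq

G = e @ e.T                                        # G[i, j] = e_i . e_j
assert np.allclose(G, np.diag([1.0, r**2]))
assert np.isclose(np.linalg.norm(e[1]), np.sqrt(G[1, 1]))  # |e_i| = sqrt(g_ii)
```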

31 3. Describes the curvature of the space:
a) A flat space has no curvature and is called Euclidean. In this case, all the g_ij components are constant.
b) A curved space is called Riemannian. In this case, the g_ij components are not constant. An example is Lobachevskian space, which has hyperbolic curvature. We can compare these two spaces by looking at the geometry of a triangle in each. In Euclidean geometry, we know the sum of the interior angles of a triangle is always 180°:

α + β + γ = 180°

[Figure: Euclidean triangle with interior angles α, β, γ.]

32 In Lobachevskian geometry, that sum is always less than 180°, the difference being proportional to the area of the triangle (Penrose, Roger, The Emperor's New Mind, p. 156):

180° − (α + β + γ) = const × area

[Figure: Lobachevskian triangle with interior angles α, β, γ.]

33 Example: Find the unitary basis vectors and components of the fundamental metric tensor for elliptic-cylindrical coordinates, defined by the following inverse transformation (a = constant):

x1 = a cosh(q^1) cos(q^2),  x2 = a sinh(q^1) sin(q^2),  x3 = q^3

In terms of the Cartesian basis, the unitary basis is

e_1 = (∂x^i/∂q^1) î_i = a sinh(q^1) cos(q^2) î_1 + a cosh(q^1) sin(q^2) î_2
e_2 = (∂x^i/∂q^2) î_i = −a cosh(q^1) sin(q^2) î_1 + a sinh(q^1) cos(q^2) î_2
e_3 = (∂x^i/∂q^3) î_i = î_3
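The analytic basis vectors for the elliptic-cylindrical transformation can be checked against finite differences of the transformation itself. A sketch; the constant a = 1.5, the sample point q, and the step size are illustrative assumptions (NumPy assumed):

```python
import numpy as np

a = 1.5  # illustrative value of the constant

def x(q):
    """Cartesian position for elliptic-cylindrical coordinates q = (q1, q2, q3)."""
    q1, q2, q3 = q
    return np.array([a * np.cosh(q1) * np.cos(q2),
                     a * np.sinh(q1) * np.sin(q2),
                     q3])

q = np.array([0.8, 0.4, 2.0])
h = 1e-6
e_exact = [
    np.array([a*np.sinh(q[0])*np.cos(q[1]),  a*np.cosh(q[0])*np.sin(q[1]), 0.0]),
    np.array([-a*np.cosh(q[0])*np.sin(q[1]), a*np.sinh(q[0])*np.cos(q[1]), 0.0]),
    np.array([0.0, 0.0, 1.0]),
]
for i in range(3):
    dq = np.zeros(3); dq[i] = h
    e_num = (x(q + dq) - x(q - dq)) / (2 * h)      # numerical dr/dq^i
    assert np.allclose(e_num, e_exact[i], atol=1e-6)
```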

34 Components of the fundamental metric tensor are:

g11 = e_1 · e_1 = a^2 [sinh^2(q^1) cos^2(q^2) + cosh^2(q^1) sin^2(q^2)] = a^2 [sinh^2(q^1) + sin^2(q^2)] = g22
g33 = 1,  g12 = g21 = g13 = g31 = g23 = g32 = 0

Components and Bases Recall,

a = (a · e^i) e_i = (a · e_i) e^i

Now set a = e_i:  e_i = (e_i · e_j) e^j

With the definition for the components of G, we have,

35
g^ij ≡ e^i · e^j  contravariant components of the fundamental metric.
g_ij ≡ e_i · e_j  covariant components of the fundamental metric.

Then, according to the cogredient and contragredient transformation laws, raising and lowering of the indices is accomplished with the following:

e_j = g_ij e^i  and  a_j = g_ij a^i
e^j = g^ij e_i  and  a^j = g^ij a_i

Note that when dealing with a unitary basis, cogredient components and vectors are referred to as covariant components, and contragredient components and vectors are referred to as contravariant components.

36 Also, if we dot both sides of the e_j transformation equation (e_j = g_ij e^i) with e^k, we get the neat result

δ_j^k = g_ij g^ik   (4)

For this relation, note the sum over i, e.g.,

δ_1^1 = g_11 g^11 + g_21 g^12 + g_31 g^13 = 1
δ_2^1 = g_12 g^11 + g_22 g^12 + g_32 g^13 = 0

Now, with a given unitary basis e_i, both sets of fundamental metric components can be generated via,

37
e^i = (e_j × e_k) / [e_1 e_2 e_3]   (ijk cyclic)   (5)
g^ij = e^i · e^j

38 The cross-product step is avoided by using the linear transformation e_j = g_ij e^i, or in matrix notation,

| e_1 |   | g11 g12 g13 | | e^1 |
| e_2 | = | g21 g22 g23 | | e^2 |   (6)
| e_3 |   | g31 g32 g33 | | e^3 |

and

| a_1 |   | g11 g12 g13 | | a^1 |
| a_2 | = | g21 g22 g23 | | a^2 |   (7)
| a_3 |   | g31 g32 g33 | | a^3 |

39 To determine the e^j in terms of the e_i, the matrix equation (6) must be inverted. Let

g ≡ det G,  M_ij = minor of g_ij,  C_ij ≡ (−1)^(i+j) M_ij = cofactor of g_ij

Employing Cramer's rule,

e^1 = (e_1 M11 − e_2 M21 + e_3 M31) / g = (e_i C_i1) / g

We obtain similar expressions for e^2 and e^3. In general, then,

e^j = (C_ij / g) e_i

40 Continuing in matrix format, you will probably recognize where this is leading from the previous section on linear algebra. Since g_ij = g_ji, the fundamental metric tensor is symmetric and C_ij = C_ji; then

e^j = (C_ij / g) e_i  so  [e^j] = (1/g) [C_ij]^T [e_i] = G^−1 [e_i],  with  G^−1 = (1/g) [C_ij]^T

We designate the elements of G^−1 with superscripts, i.e., G^−1 = [g^ij].

41 So what have we accomplished with all this? If G = [g_ij] is known, we can use linear transformations and the rules of linear algebra to determine the dual basis and covariant components without formulae that involve cross products. In fact, knowing what we now know about systems of linear equations, we could have anticipated this result from the matrix representation of Eq. (6), i.e.,

[e_i] = G [e^j]  ⇒  [e^j] = G^−1 [e_i]

Another thing to note: the result in Eq. (4) is also anticipated since, in matrix notation, the Kronecker delta is the unit matrix,

42
δ^i_j = I = | 1 0 0 |
            | 0 1 0 |
            | 0 0 1 |

Note that the product in Eq. (4) is just

δ_j^k = g_ij g^ik  ⇔  G G^−1 = I

43 The General Permutation Symbol In the Cartesian system, the cross product is well defined analytically and geometrically. What about general coordinates? We define the general permutation symbol by the operation

e_i × e_j = E_ijk e^k   (for a right-handed system)

where E_ijk = (e_i × e_j) · e_k, so that E_123 = [e_1 e_2 e_3]. Using det(g_ij) = g = [e_1 e_2 e_3]^2, we then write

E_ijk = √g ε_ijk  and  E^ijk = ε_ijk / √g

44 Physical Components of a Vector Recall that a physical component of a vector is defined by

â^i ê_i = a^i e_i  (no summation)

Then, using ê_i = e_i / |e_i|,

â^i = a^i |e_i| = a^i √(g_ii)  (no summation)

Therefore, the physical component, in terms of the contravariant and covariant components, is

â^i = a^i √(g_ii)  and similarly  â_i = a_i √(g^ii)  (no summation)

45 Orthogonal Curvilinear Coordinate Systems Because of the many conveniences of orthogonal systems, most space-coordinate systems used in engineering analysis are orthogonal. Many of these systems are also curvilinear, in particular the spherical and cylindrical systems with which you are familiar. In this section we will look at orthogonal curvilinear systems and how they relate to our original Cartesian system.

Scale Factors Define the scale factors

h1 = |e_1| = √g11,  h2 = |e_2| = √g22,  h3 = |e_3| = √g33

46 Orthogonal Curvilinear Coordinate Systems With these definitions, then,

e_1 = h1 ê_1,  e_2 = h2 ê_2,  e_3 = h3 ê_3

For a general curvilinear system, we earlier showed that a differential displacement is written as

dr = dq^1 e_1 + dq^2 e_2 + dq^3 e_3

Now, using the scale factors,

dr = (h1 dq^1) ê_1 + (h2 dq^2) ê_2 + (h3 dq^3) ê_3

So, for the arclength, the differential distances are

dr · dr = (ds)^2 = (h1 dq^1)^2 + (h2 dq^2)^2 + (h3 dq^3)^2
ds_1 = h1 dq^1,  ds_2 = h2 dq^2,  ds_3 = h3 dq^3
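Scale factors can be computed as h_i = |∂r/∂q^i|. A sketch for spherical coordinates (r, θ, φ), where the known results are h_r = 1, h_θ = r, h_φ = r sin θ; the sample point and finite-difference step are illustrative assumptions (NumPy assumed):

```python
import numpy as np

def x(q):
    """Cartesian position for spherical coordinates q = (r, theta, phi)."""
    r, th, ph = q
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

q = np.array([2.0, 0.9, 1.3])
h = 1e-6
scale = []
for i in range(3):
    dq = np.zeros(3); dq[i] = h
    scale.append(np.linalg.norm((x(q + dq) - x(q - dq)) / (2 * h)))  # |dr/dq^i|

# h_r = 1, h_theta = r, h_phi = r sin(theta)
assert np.allclose(scale, [1.0, q[0], q[0] * np.sin(q[1])], atol=1e-6)
```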

47 [Figure: differential displacement dr from r to r + dr, with edge lengths (ds)_1 = h1 dq^1, (ds)_2 = h2 dq^2, (ds)_3 = h3 dq^3 along the coordinate curves.]

48 The scale factors scale the dq^j to the appropriate magnitude and dimension for an orthogonal curvilinear system. In terms of the original Cartesian system,

h1 = |∂r/∂q^1| = [ (∂x1/∂q^1)^2 + (∂x2/∂q^1)^2 + (∂x3/∂q^1)^2 ]^(1/2)
h2 = |∂r/∂q^2| = [ (∂x1/∂q^2)^2 + (∂x2/∂q^2)^2 + (∂x3/∂q^2)^2 ]^(1/2)
h3 = |∂r/∂q^3| = [ (∂x1/∂q^3)^2 + (∂x2/∂q^3)^2 + (∂x3/∂q^3)^2 ]^(1/2)

49 Differential Volume Element In many applications, especially finite-volume and finite-element methods, you often must determine the volume of a differential element. For instance, a finite-volume form of the mass conservation equation in fluid mechanics requires a computation of the flux of mass through the boundaries, which must balance the creation of mass inside the volume. In most applications, the differential cell (volume) has a variable shape determined by a curvilinear coordinate system. Here we introduce a general expression for determining a differential volume. Recall how the scalar triple product is related to the volume of a parallelepiped (with appropriate sign):

50 [e_1 e_2 e_3] = volume of the parallelepiped (with appropriate sign).
In general,

dV = ds_1 · (ds_2 × ds_3) = dq^1 e_1 · (dq^2 e_2 × dq^3 e_3) = [e_1 e_2 e_3] dq^1 dq^2 dq^3 = √g dq^1 dq^2 dq^3 = J dq^1 dq^2 dq^3

For an orthogonal curvilinear system,

dV = dq^1 e_1 · (dq^2 e_2 × dq^3 e_3) = h1 h2 h3 [ê_1 ê_2 ê_3] dq^1 dq^2 dq^3 = h1 h2 h3 dq^1 dq^2 dq^3

51 Finally, for the Cartesian system, the familiar result

dV = dx1 dx2 dx3 = dx dy dz

Note we can gain a bit of insight into the physical meaning of the Jacobian J. Combining the general expression for the differential volume element with that for the Cartesian system, we find

J = dV / (dq^1 dq^2 dq^3) = dx dy dz / (dq^1 dq^2 dq^3)

This shows that the Jacobian of the transformation is the ratio of a differential volume in the Cartesian system to that of the general system. You can also see (if you haven't already discovered this) how the Jacobian is related to the fundamental metric, i.e., J = √g.
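The relation J = √g can be demonstrated for spherical coordinates, where J = r² sin θ. A sketch with illustrative sample values, assuming NumPy:

```python
import numpy as np

# dV check for spherical coordinates (r, theta, phi): J = sqrt(g) = r^2 sin(theta).
r, th, ph = 2.0, 0.9, 1.3
Jmat = np.array([
    [np.sin(th)*np.cos(ph), r*np.cos(th)*np.cos(ph), -r*np.sin(th)*np.sin(ph)],
    [np.sin(th)*np.sin(ph), r*np.cos(th)*np.sin(ph),  r*np.sin(th)*np.cos(ph)],
    [np.cos(th),            -r*np.sin(th),            0.0],
])  # columns are the unitary basis vectors e_r, e_theta, e_phi

J = np.linalg.det(Jmat)
G = Jmat.T @ Jmat                       # g_ij = e_i . e_j
assert np.isclose(J, r**2 * np.sin(th))
assert np.isclose(np.sqrt(np.linalg.det(G)), J)   # J = sqrt(g)
```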


More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8 Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Mean value theorem, Taylors Theorem, Maxima and Minima.

Mean value theorem, Taylors Theorem, Maxima and Minima. MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.

More information

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

28 CHAPTER 1. VECTORS AND THE GEOMETRY OF SPACE. v x. u y v z u z v y u y u z. v y v z

28 CHAPTER 1. VECTORS AND THE GEOMETRY OF SPACE. v x. u y v z u z v y u y u z. v y v z 28 CHAPTER 1. VECTORS AND THE GEOMETRY OF SPACE 1.4 Cross Product 1.4.1 Definitions The cross product is the second multiplication operation between vectors we will study. The goal behind the definition

More information

Brief Review of Tensors

Brief Review of Tensors Appendix A Brief Review of Tensors A1 Introductory Remarks In the study of particle mechanics and the mechanics of solid rigid bodies vector notation provides a convenient means for describing many physical

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Chapter 2. Parameterized Curves in R 3

Chapter 2. Parameterized Curves in R 3 Chapter 2. Parameterized Curves in R 3 Def. A smooth curve in R 3 is a smooth map σ : (a, b) R 3. For each t (a, b), σ(t) R 3. As t increases from a to b, σ(t) traces out a curve in R 3. In terms of components,

More information

Vectors and Index Notation

Vectors and Index Notation Vectors and Index Notation Stephen R. Addison January 12, 2004 1 Basic Vector Review 1.1 Unit Vectors We will denote a unit vector with a superscript caret, thus â denotes a unit vector. â â = 1 If x is

More information

L 2 : x = s + 1, y = s, z = 4s + 4. 3. Suppose that C has coordinates (x, y, z). Then from the vector equality AC = BD, one has

L 2 : x = s + 1, y = s, z = 4s + 4. 3. Suppose that C has coordinates (x, y, z). Then from the vector equality AC = BD, one has The line L through the points A and B is parallel to the vector AB = 3, 2, and has parametric equations x = 3t + 2, y = 2t +, z = t Therefore, the intersection point of the line with the plane should satisfy:

More information

Chapter 4 One Dimensional Kinematics

Chapter 4 One Dimensional Kinematics Chapter 4 One Dimensional Kinematics 41 Introduction 1 4 Position, Time Interval, Displacement 41 Position 4 Time Interval 43 Displacement 43 Velocity 3 431 Average Velocity 3 433 Instantaneous Velocity

More information

CBE 6333, R. Levicky 1 Differential Balance Equations

CBE 6333, R. Levicky 1 Differential Balance Equations CBE 6333, R. Levicky 1 Differential Balance Equations We have previously derived integral balances for mass, momentum, and energy for a control volume. The control volume was assumed to be some large object,

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Cross product and determinants (Sect. 12.4) Two main ways to introduce the cross product

Cross product and determinants (Sect. 12.4) Two main ways to introduce the cross product Cross product and determinants (Sect. 12.4) Two main ways to introduce the cross product Geometrical definition Properties Expression in components. Definition in components Properties Geometrical expression.

More information

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu)

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu) 6. Vectors For purposes of applications in calculus and physics, a vector has both a direction and a magnitude (length), and is usually represented as an arrow. The start of the arrow is the vector s foot,

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1982, 2008. This chapter originates from material used by author at Imperial College, University of London, between 1981 and 1990. It is available free to all individuals,

More information

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Brief Introduction to Vectors and Matrices

Brief Introduction to Vectors and Matrices CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued

More information

Content. Chapter 4 Functions 61 4.1 Basic concepts on real functions 62. Credits 11

Content. Chapter 4 Functions 61 4.1 Basic concepts on real functions 62. Credits 11 Content Credits 11 Chapter 1 Arithmetic Refresher 13 1.1 Algebra 14 Real Numbers 14 Real Polynomials 19 1.2 Equations in one variable 21 Linear Equations 21 Quadratic Equations 22 1.3 Exercises 28 Chapter

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two

More information

1 Symmetries of regular polyhedra

1 Symmetries of regular polyhedra 1230, notes 5 1 Symmetries of regular polyhedra Symmetry groups Recall: Group axioms: Suppose that (G, ) is a group and a, b, c are elements of G. Then (i) a b G (ii) (a b) c = a (b c) (iii) There is an

More information

Vector Algebra CHAPTER 13. Ü13.1. Basic Concepts

Vector Algebra CHAPTER 13. Ü13.1. Basic Concepts CHAPTER 13 ector Algebra Ü13.1. Basic Concepts A vector in the plane or in space is an arrow: it is determined by its length, denoted and its direction. Two arrows represent the same vector if they have

More information

Geometric description of the cross product of the vectors u and v. The cross product of two vectors is a vector! u x v is perpendicular to u and v

Geometric description of the cross product of the vectors u and v. The cross product of two vectors is a vector! u x v is perpendicular to u and v 12.4 Cross Product Geometric description of the cross product of the vectors u and v The cross product of two vectors is a vector! u x v is perpendicular to u and v The length of u x v is uv u v sin The

More information

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the

More information

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices. Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R

More information

Math 312 Homework 1 Solutions

Math 312 Homework 1 Solutions Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

More information

ON CERTAIN DOUBLY INFINITE SYSTEMS OF CURVES ON A SURFACE

ON CERTAIN DOUBLY INFINITE SYSTEMS OF CURVES ON A SURFACE i93 c J SYSTEMS OF CURVES 695 ON CERTAIN DOUBLY INFINITE SYSTEMS OF CURVES ON A SURFACE BY C H. ROWE. Introduction. A system of co 2 curves having been given on a surface, let us consider a variable curvilinear

More information

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

More information

Understanding Poles and Zeros

Understanding Poles and Zeros MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING 2.14 Analysis and Design of Feedback Control Systems Understanding Poles and Zeros 1 System Poles and Zeros The transfer function

More information

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a 88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small

More information

Review Sheet for Test 1

Review Sheet for Test 1 Review Sheet for Test 1 Math 261-00 2 6 2004 These problems are provided to help you study. The presence of a problem on this handout does not imply that there will be a similar problem on the test. And

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

Linear Algebra: Determinants, Inverses, Rank

Linear Algebra: Determinants, Inverses, Rank D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

More information

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t)

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t) Solutions HW 9.4.2 Write the given system in matrix form x = Ax + f r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + We write this as ( ) r (t) θ (t) = ( ) ( ) 2 r(t) θ(t) + ( ) sin(t) 9.4.4 Write the given system

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

v 1 v 3 u v = (( 1)4 (3)2, [1(4) ( 2)2], 1(3) ( 2)( 1)) = ( 10, 8, 1) (d) u (v w) = (u w)v (u v)w (Relationship between dot and cross product)

v 1 v 3 u v = (( 1)4 (3)2, [1(4) ( 2)2], 1(3) ( 2)( 1)) = ( 10, 8, 1) (d) u (v w) = (u w)v (u v)w (Relationship between dot and cross product) 0.1 Cross Product The dot product of two vectors is a scalar, a number in R. Next we will define the cross product of two vectors in 3-space. This time the outcome will be a vector in 3-space. Definition

More information

Mathematics (MAT) MAT 061 Basic Euclidean Geometry 3 Hours. MAT 051 Pre-Algebra 4 Hours

Mathematics (MAT) MAT 061 Basic Euclidean Geometry 3 Hours. MAT 051 Pre-Algebra 4 Hours MAT 051 Pre-Algebra Mathematics (MAT) MAT 051 is designed as a review of the basic operations of arithmetic and an introduction to algebra. The student must earn a grade of C or in order to enroll in MAT

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Matrix Differentiation

Matrix Differentiation 1 Introduction Matrix Differentiation ( and some other stuff ) Randal J. Barnes Department of Civil Engineering, University of Minnesota Minneapolis, Minnesota, USA Throughout this presentation I have

More information