Elements of Linear Algebra: Q&A


Elements of Linear Algebra: Q&A

A matrix is a rectangular array of objects (elements that are numbers, functions, etc.) with its size indicated by the number of rows and columns, i.e., an m × n matrix A has m rows and n columns. If A is an m × n matrix, A^T is an n × m matrix. The determinant is defined only for a square matrix; it should not be confused with the trace, which is the sum of the diagonal elements. The determinant of a matrix can be computed using the Laplace expansion, in which a row or column is expanded in terms of minors and cofactors. An orthogonal matrix is an invertible n × n matrix Q with the property Q^-1 = Q^T.
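As a quick numerical illustration of the last property (a numpy sketch added here, not part of the original notes), a plane rotation matrix is a standard example of an orthogonal matrix:

```python
import numpy as np

# A rotation matrix is a standard example of an orthogonal matrix Q:
# its inverse equals its transpose, Q^-1 = Q^T, and |det Q| = 1.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(np.linalg.inv(Q), Q.T)      # Q^-1 = Q^T
assert np.isclose(abs(np.linalg.det(Q)), 1.0)  # |det Q| = 1
```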

Elements of Linear Algebra: Q&A

Given a system of m linear equations in n variables x_i (i = 1, ..., n), written as Ax = b, the system is either
1. consistent, with a unique (one) solution x;
2. consistent, with infinitely many possible solutions; or
3. inconsistent, with no solutions.
If n > m, the system has more unknowns than equations; it is underdetermined. If the system is consistent, some of the variables can be chosen arbitrarily and the remaining variables defined in terms of the arbitrary ones. If n < m, the system has more equations than unknowns; it is overdetermined.
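The three cases can be distinguished with the rank test: Ax = b is consistent iff rank(A) equals the rank of the augmented matrix [A | b], and the solution is unique iff that common rank equals n. A small numpy sketch (the `classify` helper is my own, not from the notes):

```python
import numpy as np

def classify(A, b):
    """Classify Ax = b: consistent iff rank(A) == rank([A | b]);
    unique solution iff that rank also equals n (number of unknowns)."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if rA < rAb:
        return "inconsistent"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[1.0, 1.0], [1.0, -1.0]])
print(classify(A, np.array([2.0, 0.0])))                          # unique solution
print(classify(A[:1], np.array([2.0])))                           # infinitely many solutions
print(classify(np.array([[1.0], [1.0]]), np.array([1.0, 2.0])))   # inconsistent
```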

Elements of Linear Algebra: Q&A

P3.16. Invertible matrix properties

Assume that A is an n × n invertible matrix. Which statements are true?

a. The system Ax = b has a unique solution for every vector b in R^n.
b. The rows (and columns) of A are linearly independent.
c. det(A) = 0.
d. A can be reduced (by elementary operations) to the identity matrix.
e. The rank of A is n.
f. The rows of A span R^n.
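These properties can be sanity-checked numerically for a particular invertible matrix (a numpy sketch with an arbitrary example matrix of my own, not part of the problem set):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 4 * np.eye(n)   # shifted diagonal: invertible

assert np.linalg.matrix_rank(A) == n              # rank n; rows/columns independent
assert abs(np.linalg.det(A)) > 1e-12              # det(A) is nonzero for invertible A
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)                         # unique solution for this b
assert np.allclose(A @ x, b)
```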

Elements of Linear Algebra: Q&A

P3.18. Linear Independence

Consider the equations of combustion in which a mixture of CO, H2, and CH4 is burned with O2 to form CO2, CO and H2O.

1. CO + 1/2 O2 = CO2
2. H2 + 1/2 O2 = H2O
3. CH4 + 2 O2 = CO2 + 2 H2O
4. CH4 + 3/2 O2 = CO + 2 H2O

Treating the compounds as real variables, determine if the equations are independent. If not, write the dependent equation(s) in terms of the independent ones.
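The question reduces to the rank of the stoichiometric matrix. A numpy sketch (the column ordering and sign convention are my own choices):

```python
import numpy as np

# Columns: CO, H2, CH4, O2, CO2, H2O (reactants negative, products positive).
R = np.array([
    [-1,  0,  0, -0.5, 1, 0],   # 1. CO  + 1/2 O2 = CO2
    [ 0, -1,  0, -0.5, 0, 1],   # 2. H2  + 1/2 O2 = H2O
    [ 0,  0, -1, -2.0, 1, 2],   # 3. CH4 + 2 O2   = CO2 + 2 H2O
    [ 1,  0, -1, -1.5, 0, 2],   # 4. CH4 + 3/2 O2 = CO  + 2 H2O
])

assert np.linalg.matrix_rank(R) == 3      # only three independent equations
assert np.allclose(R[2], R[0] + R[3])     # equation 3 = equation 1 + equation 4
```

So only three of the four equations are independent; the third is the sum of the first and fourth.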

Elements of Linear Algebra: AEM P.35 (worked example; slide content not reproduced in this transcription)

Elements of Linear Algebra Eigenvalues & Eigenvectors

As an engineer, you have undoubtedly been introduced to eigenvalues and possibly eigenvectors. We develop the background here and will later make use of eigenvalues/eigenvectors in the discussion of second- and higher-order tensors. Given the linear equation

Ax = λx

the nonzero vector x is called an eigenvector (characteristic vector) and the scalar λ is an eigenvalue (characteristic value) of the matrix A; λ characterizes the length (and sense) of Ax relative to the eigenvector x. The spectrum of A is the set of eigenvalues of A, and the spectral radius of A is the largest of the absolute values of the eigenvalues.

Elements of Linear Algebra

Example: Find the eigenvalues and eigenvectors of

A = [ 3  0  0
      5  4  0
      3  6  1 ]

Solution: 1. Compute the roots of the characteristic polynomial

D(λ) = det(A − λI) = | 3−λ   0    0
                       5    4−λ   0
                       3     6   1−λ | = (3 − λ)(4 − λ)(1 − λ) = 0

roots: λ_1 = 3, λ_2 = 4, λ_3 = 1.

Elements of Linear Algebra

These roots are the eigenvalues. They form the spectrum, with a spectral radius of 4.

2. Compute the eigenvectors:

λ_1 = 3:  (A − 3I)x = 0 gives
  0 = 0
  5x_1 + x_2 = 0
  3x_1 + 6x_2 − 2x_3 = 0
Set x_1 = 1: x = (1, −5, −27/2), or equivalently x = (2, −10, −27).

λ_2 = 4:  (A − 4I)x = 0 gives
  −x_1 = 0
  5x_1 = 0
  3x_1 + 6x_2 − 3x_3 = 0
Set x_2 = 1: x = (0, 1, 2).

Elements of Linear Algebra

λ_3 = 1:  (A − I)x = 0 gives
  2x_1 = 0
  5x_1 + 3x_2 = 0
  3x_1 + 6x_2 = 0
Set x_3 = 1: x = (0, 0, 1).

Properties of eigenvalues and eigenvectors of an n × n square matrix A:

1. A has at least one eigenvalue and at most n numerically different eigenvalues.
2. If x is an eigenvector of a matrix A corresponding to an eigenvalue λ, so is kx for any k ≠ 0, i.e., Ax = λx implies A(kx) = k(Ax) = λ(kx).
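The worked example above can be verified with numpy (a check added here, not in the original notes):

```python
import numpy as np

A = np.array([[3., 0., 0.],
              [5., 4., 0.],
              [3., 6., 1.]])

w, V = np.linalg.eig(A)
assert np.allclose(sorted(w), [1., 3., 4.])   # eigenvalues found above

# Eigenvectors computed by hand (any nonzero scaling also works):
for lam, x in [(3., np.array([2., -10., -27.])),
               (4., np.array([0., 1., 2.])),
               (1., np.array([0., 0., 1.]))]:
    assert np.allclose(A @ x, lam * x)        # Ax = lambda x
```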

Elements of Linear Algebra

3. M_λ is the algebraic multiplicity, the number of times the root λ of the characteristic polynomial is repeated; m_λ is the geometric multiplicity, the number of linearly independent eigenvectors corresponding to λ. The sum of the algebraic multiplicities equals n, and in general m_λ ≤ M_λ.
4. A real matrix may have complex eigenvalues, which occur in conjugate pairs, and complex eigenvectors.
5. The eigenvalues of a symmetric matrix (A^T = A) are real.
6. The eigenvalues of a skew-symmetric matrix (A^T = −A) are pure imaginary or zero.

Elements of Linear Algebra Eigenvectors & Diagonalization

Similar matrices have the same spectrum (i.e., the same eigenvalues): an n × n matrix Â is similar to A if Â = T^-1 A T for some invertible n × n matrix T. This is an important property, particularly for numerical analysis, where matrices are diagonalized (or nearly diagonalized) to compute approximations to eigenvalues and eigenvectors. The eigenvectors corresponding to a set of distinct eigenvalues form a linearly independent set; thus, n such eigenvectors form a basis. If an n × n matrix A has a basis of eigenvectors, collected as the columns of X, then D = X^-1 A X is diagonal with the eigenvalues of A as the entries on the main diagonal.
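Continuing the earlier example, the eigenvectors diagonalize the matrix (a numpy sketch added here as illustration):

```python
import numpy as np

A = np.array([[3., 0., 0.],
              [5., 4., 0.],
              [3., 6., 1.]])

# Columns of X are the eigenvectors found earlier, for lambda = 3, 4, 1.
X = np.array([[  2., 0., 0.],
              [-10., 1., 0.],
              [-27., 2., 1.]])

D = np.linalg.inv(X) @ A @ X          # D = X^-1 A X
assert np.allclose(D, np.diag([3., 4., 1.]))
```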

The vector algebra included operations involving sums and products of vectors. The definitions and operations of linear algebra provide the basis for the linear transformations and matrix operations useful in tensor analysis. The vector calculus allows us to apply the methods of differential and integral calculus in general tensor analysis. We begin with the usual basic definitions and operations.

Derivative of a Vector Function of a Scalar

(Figure: vectors a(t) and a(t + Δt), with the difference Δa = ê_s Δs, where Δs = |Δa|.)

da/dt = lim_{Δt→0} [a(t + Δt) − a(t)] / Δt

da/dt = lim_{Δt→0} ê_s (Δs/Δt) = ê_s (ds/dt)

Product Rules

d(a·b)/dt = (da/dt)·b + a·(db/dt)
d(a × b)/dt = (da/dt) × b + a × (db/dt)   (order preserved)

Note that because a vector is composed of two distinct parts, magnitude and direction, a nonzero derivative could result from: a) a change in magnitude but not direction, b) a change in direction but not magnitude, or c) a change in both magnitude and direction, as illustrated in the previous diagram.

For case b), a constant-length vector, |a| = const does not imply da/dt = 0. Since a·a = |a|^2 = const,

d(a·a)/dt = (da/dt)·a + a·(da/dt) = 2 a·(da/dt) = 0

so |a| = const implies a·(da/dt) = 0: da/dt is perpendicular to a, even though da/dt ≠ 0 in general.

In general coordinates, the base vectors are not necessarily constant in magnitude or direction; with a = a^i e_i,

da/dt = (da^i/dt) e_i + a^i (de_i/dt)

By definition, the base vectors of Cartesian systems have constant magnitude and direction: de_i/dt = 0.
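The orthogonality of a constant-length vector and its derivative is easy to confirm numerically (a sketch with a unit vector of my own choosing, not from the notes):

```python
import numpy as np

# Constant-length vector function: a(t) = (cos t, sin t), |a| = 1 for all t.
t = np.linspace(0.0, 2.0 * np.pi, 7)
a = np.stack([np.cos(t), np.sin(t)], axis=1)
dadt = np.stack([-np.sin(t), np.cos(t)], axis=1)   # exact derivative da/dt

# |a| = const  ->  a . da/dt = 0 at every t
assert np.allclose(np.einsum('ij,ij->i', a, dadt), 0.0)
```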

Example: Compute the acceleration of a body in a circular orbit.

(Figure: circular orbit showing r(t), v(t) and r(t + Δt), v(t + Δt), and the velocity change Δv.)

ω = ω ê_z,  r = r ê_r,  v = ω × r,  v = |v| = ωr

a = dv/dt = d(ω × r)/dt = (dω/dt) × r + ω × (dr/dt) = ω × v   (since dω/dt = 0)

a = ω × v = (v^2/r) [ê_z × (ê_z × ê_r)]
  = (v^2/r) [(ê_z·ê_r) ê_z − (ê_z·ê_z) ê_r]
  = −(v^2/r) ê_r

the familiar centripetal acceleration, directed toward the center of the orbit.
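A numerical check of the result (a numpy sketch with arbitrary orbit parameters, not from the notes):

```python
import numpy as np

# Circular motion: r(t) = r0 e_r, omega = w e_z, v = omega x r, a = omega x v.
r0, w, t = 2.0, 3.0, 0.7
e_r = np.array([np.cos(w * t), np.sin(w * t), 0.0])
omega = np.array([0.0, 0.0, w])

r = r0 * e_r
v = np.cross(omega, r)
a = np.cross(omega, v)                    # a = omega x v, since d(omega)/dt = 0

assert np.allclose(a, -(np.dot(v, v) / r0) * e_r)   # a = -(v^2/r) e_r
```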

Example: Prove

d/dt [ a · (da/dt × d^2a/dt^2) ] = a · (da/dt × d^3a/dt^3)

Solution:

d/dt [ a · (da/dt × d^2a/dt^2) ]
  = (da/dt) · (da/dt × d^2a/dt^2)    [= 0]
  + a · (d^2a/dt^2 × d^2a/dt^2)      [= 0]
  + a · (da/dt × d^3a/dt^3)
  = a · (da/dt × d^3a/dt^3)   Q.E.D.

The first term vanishes because da/dt is perpendicular to da/dt × d^2a/dt^2, and the second because d^2a/dt^2 × d^2a/dt^2 = 0.

Cartesian Coordinate Systems

A general Cartesian coordinate system is oblique, i.e., the basis vectors are generally not all mutually orthogonal. As stated earlier, however, the basis vectors of a Cartesian system are constant in magnitude and direction. The usual convention is to refer to the familiar orthonormal Cartesian system as the Cartesian system, with basis vectors usually denoted { î, ĵ, k̂ }, { ê_x, ê_y, ê_z }, or { î_i }.

(Figure: Cartesian axes x, y, z with unit vectors î, ĵ, k̂ and position vector r.)

r = r(x, y, z) = r(x^1, x^2, x^3),   r = x^j î_j

In any coordinate system, the differential distance between two points is given by the differential arclength, computed from dr·dr. In particular, for the Cartesian system,

dr·dr = (ds)^2 = dx^i dx^i = (dx)^2 + (dy)^2 + (dz)^2

(Figure: differential element with edges dx, dy, dz and diagonal ds.)

Curvilinear Coordinates

Define a coordinate system (q^1, q^2, q^3) with the coordinate transformation from the Cartesian system,

q^1 = q^1(x^1, x^2, x^3)
q^2 = q^2(x^1, x^2, x^3)
q^3 = q^3(x^1, x^2, x^3)

(Figure: coordinate surfaces q^1 = const, q^2 = const, q^3 = const intersecting at a point, with position vector r and tangent vectors ∂r/∂q^i along the coordinate curves.)

If the transformation is linear, it defines a Cartesian system. If the transformation is nonlinear, it defines a curvilinear system. The Jacobian of the transformation is defined by the following determinant,

J = | ∂x^i/∂q^j | = | ∂x^1/∂q^1  ∂x^1/∂q^2  ∂x^1/∂q^3
                      ∂x^2/∂q^1  ∂x^2/∂q^2  ∂x^2/∂q^3
                      ∂x^3/∂q^1  ∂x^3/∂q^2  ∂x^3/∂q^3 |
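As a concrete illustration (a sympy sketch using cylindrical coordinates, a standard example not worked in the notes), the Jacobian determinant can be computed symbolically:

```python
import sympy as sp

# Cylindrical transformation: x = r cos(theta), y = r sin(theta), z = z.
r, th, z = sp.symbols('r theta z', positive=True)
x = (r * sp.cos(th), r * sp.sin(th), z)
q = (r, th, z)

Jmat = sp.Matrix([[sp.diff(xi, qj) for qj in q] for xi in x])  # [dx^i/dq^j]
J = sp.simplify(Jmat.det())
assert J == r          # the familiar result J = r for cylindrical coordinates
```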

If J ≠ 0, then J^-1 (the inverse Jacobian) is defined and the inverse transformation is also defined,

x^1 = x^1(q^1, q^2, q^3)
x^2 = x^2(q^1, q^2, q^3)
x^3 = x^3(q^1, q^2, q^3)

The position vector is r = r(q^i), i = 1, 2, 3, and a differential displacement is then

dr = (∂r/∂q^1) dq^1 + (∂r/∂q^2) dq^2 + (∂r/∂q^3) dq^3 = (∂r/∂q^i) dq^i

The vectors ∂r/∂q^i are tangent to the coordinate curves defined by the intersection of the coordinate surfaces (q^i = const). Using these vectors, we define a unitary basis,

e_i = ∂r/∂q^i,  i = 1, 2, 3

Note that, in general, the orientation and magnitude of these basis vectors are not constant, e.g.,

Oblique-Cartesian system: basis vectors have constant magnitude and orientation.
Curvilinear system: basis vectors generally have non-constant magnitude and orientation.

The coordinate transformation was written for a general system in terms of the original Cartesian system; we almost always write the transformations in this manner. In terms of the original Cartesian system, the unitary basis is given by

e_i = ∂r/∂q^i = (∂x^j/∂q^i) î_j,  i = 1, 2, 3

This is a linear system that is easily written in matrix format. The coefficient matrix is the (transposed) Jacobian matrix,

[e_1]   [∂x^1/∂q^1  ∂x^2/∂q^1  ∂x^3/∂q^1] [î_1]
[e_2] = [∂x^1/∂q^2  ∂x^2/∂q^2  ∂x^3/∂q^2] [î_2]
[e_3]   [∂x^1/∂q^3  ∂x^2/∂q^3  ∂x^3/∂q^3] [î_3]

Fundamental Metric Tensor

In a unitary system, the square of the differential distance separating two infinitesimally spaced points is

dr·dr = (ds)^2 = (e_i·e_j) dq^i dq^j

Now define the components of the fundamental metric tensor as

g_ij ≡ e_i·e_j

Then

dr·dr = (ds)^2 = g_ij dq^i dq^j

In matrix format, the fundamental metric tensor is

G = [ g_11  g_12  g_13
      g_21  g_22  g_23
      g_31  g_32  g_33 ]

Properties of the fundamental metric tensor:

1. Symmetric, i.e., e_i·e_j = e_j·e_i implies g_ij = g_ji.
2. The norm (magnitude) of the unitary base vectors is |e_i| = (e_i·e_i)^(1/2) = (g_ii)^(1/2) (no summation).

3. It describes the curvature of the space:
a) A flat space has no curvature and is called Euclidean. In this case, coordinates exist in which all the g_ij components are constant.
b) A curved space is called Riemannian. In this case, the g_ij components are not constant. An example is Lobachevskian space, which has hyperbolic curvature.

We can compare these two spaces by looking at the geometry of a triangle in each. In Euclidean geometry, we know the sum of the interior angles of a triangle is always 180°:

α + β + γ = 180°

(Figure: Euclidean triangle with interior angles α, β, γ.)

In Lobachevskian geometry, that sum is always less than 180°, the difference being proportional to the area of the triangle (Penrose, Roger, The Emperor's New Mind, p. 156):

180° − (α + β + γ) = const × area

(Figure: Lobachevskian triangle with interior angles α, β, γ.)

Example: Find the unitary basis vectors and components of the fundamental metric tensor for elliptic-cylindrical coordinates, defined by the following inverse transformation (a = constant):

x^1 = a cosh q^1 cos q^2,  x^2 = a sinh q^1 sin q^2,  x^3 = q^3

In terms of the Cartesian basis, the unitary basis is

e_1 = (∂x^i/∂q^1) î_i = a sinh q^1 cos q^2 î_1 + a cosh q^1 sin q^2 î_2
e_2 = (∂x^i/∂q^2) î_i = −a cosh q^1 sin q^2 î_1 + a sinh q^1 cos q^2 î_2
e_3 = (∂x^i/∂q^3) î_i = î_3

Components of the fundamental metric tensor are:

g_11 = e_1·e_1 = a^2 [sinh^2 q^1 cos^2 q^2 + cosh^2 q^1 sin^2 q^2] = g_22
g_33 = 1,  g_12 = g_21 = g_13 = g_31 = g_23 = g_32 = 0

Components and Bases

Recall,

a = (a·e^i) e_i = (a·e_j) e^j

Now set a = e_i:  e_i = (e_i·e_j) e^j

With the definition for the components of G, we have,
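The metric components of the example can be verified symbolically (a sympy check added here; the simplified form g_11 = a^2(sinh^2 q^1 + sin^2 q^2) follows from cosh^2 = 1 + sinh^2):

```python
import sympy as sp

# Elliptic-cylindrical coordinates from the example above.
a, q1, q2, q3 = sp.symbols('a q1 q2 q3', positive=True)
x = sp.Matrix([a * sp.cosh(q1) * sp.cos(q2),
               a * sp.sinh(q1) * sp.sin(q2),
               q3])

e = [x.diff(qi) for qi in (q1, q2, q3)]          # unitary basis vectors
G = sp.Matrix(3, 3, lambda i, j: sp.simplify(e[i].dot(e[j])))

assert sp.simplify(G[0, 0] - G[1, 1]) == 0       # g_11 = g_22
assert G[2, 2] == 1 and G[0, 1] == 0             # g_33 = 1, off-diagonals vanish
assert sp.simplify(G[0, 0] - a**2 * (sp.sinh(q1)**2 + sp.sin(q2)**2)) == 0
```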

g^ij ≡ e^i·e^j : contravariant components of the fundamental metric
g_ij ≡ e_i·e_j : covariant components of the fundamental metric

Then, according to the cogredient and contragredient transformation laws, raising and lowering of the indices is accomplished with the following,

e_j = g_ij e^i and a_j = g_ij a^i;  e^j = g^ij e_i and a^j = g^ij a_i.

Note that when dealing with a unitary basis, cogredient components and vectors are referred to as covariant components and vectors, and contragredient components and vectors are referred to as contravariant components and vectors.

Also, if we dot both sides of the e_j transformation equation (e_j = g_ij e^i) with e^k, we get the neat result

δ_j^k = g_ij g^ik    (4)

For this relation, note the sum over i, e.g.,

δ_1^1 = g_11 g^11 + g_21 g^21 + g_31 g^31 = 1,
δ_1^2 = g_11 g^12 + g_21 g^22 + g_31 g^32 = 0.

Now, with a given unitary basis e_i, both sets of fundamental metric components can be generated via

g_ij = e_i·e_j,   e^i = (e_j × e_k) / [e_1 e_2 e_3] (ijk cyclic),   g^ij = e^i·e^j    (5)

The cross-product step is avoided by using the linear transformation e_j = g_ij e^i, or in matrix notation,

[e_1]   [g_11  g_12  g_13] [e^1]
[e_2] = [g_21  g_22  g_23] [e^2]    (6)
[e_3]   [g_31  g_32  g_33] [e^3]

and

[a_1]   [g_11  g_12  g_13] [a^1]
[a_2] = [g_21  g_22  g_23] [a^2]    (7)
[a_3]   [g_31  g_32  g_33] [a^3]

To determine the e^j in terms of the e_i, the matrix equation (6) must be inverted. Let

g ≡ det G,  M_ij = minor of g_ij,  C_ij ≡ (−1)^(i+j) M_ij = cofactor of g_ij

Employing Cramer's rule,

e^1 = (1/g) | e_1  g_12  g_13
              e_2  g_22  g_23
              e_3  g_32  g_33 | = (e_1 M_11 − e_2 M_21 + e_3 M_31)/g = (e_i C_i1)/g

We obtain similar expressions for e^2 and e^3. In general, then,

e^j = (C_ij/g) e_i.

Continuing in matrix format, you will probably recognize where this is leading from the previous section on linear algebra. Since g_ij = g_ji, the fundamental metric tensor is symmetric and C_ij = C_ji; then

[e^j] = (1/g) [C_ij]^T [e_i],  so  (1/g) [C_ij]^T = (1/g) [C_ij] = G^-1.

We designate the elements of G^-1 with superscripts, i.e., G^-1 = [g^ij].

So what have we accomplished with all this? If G = [g_ij] is known, we can use linear transformations and the rules of linear algebra to determine the dual basis and covariant components without formulas that involve cross products. In fact, knowing what we now know about systems of linear equations, we could have anticipated this result from the matrix representation of Eq. (6), i.e.,

[e_j] = G [e^j]  implies  [e^j] = G^-1 [e_j]

Another thing to note is that the result in Eq. (4) is also anticipated since, in matrix notation, the Kronecker delta is the unit matrix,

[δ_j^i] = [ 1 0 0
            0 1 0
            0 0 1 ]

Note that the product in Eq. (4) is just

[δ_j^k] = [g_ij g^ik] = G G^-1 = I.
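The whole chain — metric from the basis, dual basis from G^-1, and Eq. (4) — can be demonstrated numerically (a numpy sketch with an arbitrary oblique basis of my own choosing):

```python
import numpy as np

# Rows of E are an oblique but linearly independent basis e_1, e_2, e_3 of R^3.
E = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])

G = E @ E.T                       # g_ij = e_i . e_j
Edual = np.linalg.inv(G) @ E      # dual basis: e^i = g^ij e_j

assert np.allclose(Edual @ E.T, np.eye(3))           # e^i . e_j = delta^i_j
assert np.allclose(G @ np.linalg.inv(G), np.eye(3))  # G G^-1 = I, i.e. Eq. (4)
```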

The General Permutation Symbol

In the Cartesian system, the cross product is well defined analytically and geometrically. What about general coordinates? We define the general permutation symbol by the operation

e_i × e_j = E_ijk e^k   (for a right-handed system)

where

E_ijk = (e_i × e_j)·e_k,  E_123 = [e_1 e_2 e_3].

Using det(g_ij) = g = [e_1 e_2 e_3]^2, we then write

E_ijk = √g ε_ijk  and  E^ijk = (1/√g) ε^ijk.

Physical Components of a Vector

Recall, a physical component of a vector is defined by

â^i ê_i = a^i e_i   (no summation)

Then, since ê_i = e_i/|e_i| = e_i/√g_ii,

â^i = a^i |e_i| = a^i √g_ii   (no summation)

Therefore, the physical component, in terms of the contravariant and covariant components, is

â^i = a^i √g_ii  and similarly  â_i = a_i √g^ii   (no summation).

Orthogonal Curvilinear Coordinate Systems

Because of the many conveniences of orthogonal systems, most space-coordinate systems used in engineering analysis are orthogonal. Many of these systems are also curvilinear, in particular the spherical and cylindrical systems with which you are familiar. In this section we will look at orthogonal curvilinear systems and how they relate to our original Cartesian system.

Scale Factors

Define the scale factors

h_1 = |e_1| = √g_11,  h_2 = |e_2| = √g_22,  h_3 = |e_3| = √g_33.

Orthogonal Curvilinear Coordinate Systems

With these definitions, then,

e_1 = h_1 ê_1,  e_2 = h_2 ê_2,  e_3 = h_3 ê_3

For a general curvilinear system, we earlier showed that a differential displacement is written as

dr = dq^1 e_1 + dq^2 e_2 + dq^3 e_3.

Now using the scale factors,

dr = (h_1 dq^1) ê_1 + (h_2 dq^2) ê_2 + (h_3 dq^3) ê_3.

So, for the arclength, the differential distances are

dr·dr = (ds)^2 = (h_1 dq^1)^2 + (h_2 dq^2)^2 + (h_3 dq^3)^2
ds_1 = h_1 dq^1,  ds_2 = h_2 dq^2,  ds_3 = h_3 dq^3

(Figure: differential element with edges (ds)_1 = h_1 dq^1, (ds)_2 = h_2 dq^2, (ds)_3 = h_3 dq^3 along the coordinate curves, and displacement dr from r to r + dr.)

The scale factors scale the dq^j to the appropriate magnitude and dimension for an orthogonal curvilinear system. In terms of the original Cartesian system,

h_1 = |∂r/∂q^1| = [(∂x^1/∂q^1)^2 + (∂x^2/∂q^1)^2 + (∂x^3/∂q^1)^2]^(1/2),
h_2 = |∂r/∂q^2| = [(∂x^1/∂q^2)^2 + (∂x^2/∂q^2)^2 + (∂x^3/∂q^2)^2]^(1/2),
h_3 = |∂r/∂q^3| = [(∂x^1/∂q^3)^2 + (∂x^2/∂q^3)^2 + (∂x^3/∂q^3)^2]^(1/2).
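These formulas can be exercised symbolically for spherical coordinates (a sympy sketch, a standard example not worked in the notes); comparing the squared scale factors h_i^2 = g_ii avoids square-root sign issues:

```python
import sympy as sp

# Spherical coordinates: x = r sin(phi) cos(theta), y = r sin(phi) sin(theta),
# z = r cos(phi); expected h_r = 1, h_phi = r, h_theta = r sin(phi).
r, phi, th = sp.symbols('r phi theta', positive=True)
x = sp.Matrix([r * sp.sin(phi) * sp.cos(th),
               r * sp.sin(phi) * sp.sin(th),
               r * sp.cos(phi)])

g = [sp.simplify(x.diff(qi).dot(x.diff(qi))) for qi in (r, phi, th)]  # g_ii = h_i^2

assert g[0] == 1                                        # h_r = 1
assert g[1] == r**2                                     # h_phi = r
assert sp.simplify(g[2] - r**2 * sp.sin(phi)**2) == 0   # h_theta = r sin(phi)
```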

Differential Volume Element

In many applications, especially finite-volume and finite-element methods, you often must determine the volume of a differential element. For instance, a finite-volume form of the mass conservation equation in fluid mechanics requires a computation of the flux of mass through the boundaries, which must balance the creation of mass inside the volume. In most applications, the differential cell (volume) is of some variable shape determined by a curvilinear coordinate system. Here we introduce a general expression for determining a differential volume. Recall how the scalar triple product is related to the volume of a parallelepiped (with appropriate sign):

[e_1 e_2 e_3] = volume of the parallelepiped (with appropriate sign).

In general,

dV = ds_1 · (ds_2 × ds_3) = dq^1 e_1 · (dq^2 e_2 × dq^3 e_3) = [e_1 e_2 e_3] dq^1 dq^2 dq^3
   = √g dq^1 dq^2 dq^3 = J dq^1 dq^2 dq^3

For an orthogonal curvilinear system,

dV = dq^1 e_1 · (dq^2 e_2 × dq^3 e_3) = h_1 h_2 h_3 dq^1 dq^2 dq^3 [ê_1 ê_2 ê_3] = h_1 h_2 h_3 dq^1 dq^2 dq^3

Finally, for the Cartesian system, we recover the familiar result

dV = dx^1 dx^2 dx^3 = dx dy dz

Note we can gain a bit of insight into the physical meaning of the Jacobian J. Combining the general expression for the differential volume element with that for the Cartesian system, we find

J = dV/(dq^1 dq^2 dq^3) = dx dy dz/(dq^1 dq^2 dq^3).

This shows that the Jacobian of the transformation is the ratio of a differential volume in the Cartesian system to that of the general system. You can also see (if you haven't already discovered this) how the Jacobian is related to the fundamental metric, i.e., J = √g.
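The relation J = √g can be verified symbolically for spherical coordinates (a sympy sketch added as illustration; the comparison is done on J^2 = g to avoid square-root sign issues):

```python
import sympy as sp

# Spherical coordinates again: J should equal r^2 sin(phi) = sqrt(det G).
r, phi, th = sp.symbols('r phi theta', positive=True)
x = sp.Matrix([r * sp.sin(phi) * sp.cos(th),
               r * sp.sin(phi) * sp.sin(th),
               r * sp.cos(phi)])
q = sp.Matrix([r, phi, th])

Jmat = x.jacobian(q)                   # [dx^i/dq^j]
J = sp.simplify(Jmat.det())            # dV = J dr dphi dtheta
G = Jmat.T * Jmat                      # metric: g_ij = e_i . e_j

assert sp.simplify(J**2 - G.det()) == 0   # J^2 = g, i.e. J = sqrt(g)
```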