Brief Review of Tensors
Appendix A: Brief Review of Tensors

A.1 Introductory Remarks

In the study of particle mechanics and the mechanics of rigid bodies, vector notation provides a convenient means for describing many physical quantities and laws. In studying the mechanics of deformable solid media, physical quantities of a more complex nature, such as stress and strain, assume importance.¹ Mathematically, such physical quantities are represented by matrices. In the analysis of general problems in continuum mechanics, the physical quantities encountered can be somewhat more complex than vectors and matrices. Like vectors and matrices, these physical quantities are independent of any particular coordinate system that may be used to describe them. At the same time, they are very often specified most conveniently by referring to an appropriate system of coordinates. Tensors, which are a generalization of vectors and matrices, offer a suitable way of mathematically representing these quantities. As abstract mathematical entities, tensors have an existence independent of any coordinate system or frame of reference, yet they are most conveniently described by specifying their components in an appropriate system of coordinates. Specifying the components of a tensor in one coordinate system determines its components in any other system; indeed, the law of transformation of tensor components is often used as a means of defining the tensor.

The objective of this appendix is to present a brief overview of tensors. Further details pertaining to this subject can be found in standard books on the subject, such as [2, 4], or in books dealing with continuum mechanics, such as [1, 3].

¹ We recall that in describing stresses and strains one must specify not only the magnitude of the quantity, but also the orientation of the face upon which it acts.
A.2 General Characteristics

The following general characteristics apply to all tensors:

Tensor Rank. Tensors may be classified by rank or order according to the particular form of the transformation law they obey. This classification is also reflected in the number of components a given tensor possesses in an N-dimensional space: a tensor of order p has N^p components. For example, in a three-dimensional Euclidean space, the number of components of a tensor is 3^p. It follows, therefore, that in three-dimensional space:

- A tensor of order zero has one component and is called a scalar. Physical quantities possessing magnitude only are represented by scalars.
- A tensor of order one has three components and is called a vector; quantities possessing both magnitude and direction are represented by vectors. Geometrically, vectors are represented by directed line segments that obey the parallelogram law of addition.
- A tensor of order two has nine components and is typically represented by a matrix.

Notation. The following symbols are used herein:

- Scalars are represented by lowercase Greek letters; for example, α.
- Vectors are represented by lowercase Latin letters; for example, a or {a}.
- Matrices and tensors are represented by uppercase Latin letters; for example, A or {A}.

Cartesian Tensors. When only transformations from one homogeneous coordinate system (e.g., a Cartesian coordinate system) to another are considered, the tensors involved are referred to as Cartesian tensors. The Cartesian coordinate system can be rectangular (x_1, x_2, x_3) or curvilinear, such as cylindrical (R, θ, z) or spherical (r, θ, φ).
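The component count N^p can be made concrete numerically. The following Python/NumPy sketch (an illustration added here, not part of the original appendix; the array names are arbitrary) represents a scalar, a vector, and a second-order tensor as arrays and checks their component counts for N = 3:

```python
import numpy as np

# A tensor of order p in N-dimensional space has N**p components.
# In 3-D (N = 3): scalar -> 1, vector -> 3, second-order tensor -> 9.
N = 3
scalar = np.float64(2.5)      # order 0: 3**0 = 1 component
vector = np.zeros(N)          # order 1: shape (3,),  3**1 = 3 components
tensor2 = np.zeros((N, N))    # order 2: shape (3,3), 3**2 = 9 components

for p, t in enumerate([scalar, vector, tensor2]):
    assert np.asarray(t).size == N**p
    print(f"order {p}: {np.asarray(t).size} components")
```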
A.3 Indicial Notation

A tensor of any order, its components, or both may be represented clearly and concisely by the use of indicial notation. This convention is believed to have been introduced by Einstein. In this notation, letter indices, either subscripts or superscripts, are appended to the generic or kernel letter representing the tensor quantity of interest; e.g., A_ij, B_ijk, δ_ij, a_kl, etc. Some benefits of using indicial notation include: (1) economy in writing; and (2) compatibility with computer languages (e.g., easy correlation with "do loops"). Some rules for using indicial notation follow.

Index Rule. In a given term, a letter index may occur no more than twice.

Range Convention. When an index occurs unrepeated in a term, that index is understood to take on the values 1, 2, ..., N, where N is a specified integer that, depending on the space considered, determines the range of the index.

Summation Convention. When an index appears twice in a term, that index is understood to take on all the values of its range, and the resulting terms are summed. For example,

    A_kk = A_11 + A_22 + ... + A_NN

Free Indices. By virtue of the range convention, unrepeated indices are free to take on any value over the range 1, 2, ..., N. These indices are thus termed free. The following items apply to free indices:

- Any equation must have the same free indices in each term.
- The tensorial rank of a given term is equal to the number of free indices.
- N^(number of free indices) equals the number of components represented by the symbol.

Dummy Indices. In the summation convention, repeated indices are often referred to as dummy indices, since their replacement by any other letter not appearing as a free index does not change the meaning of the term in which they occur. In the following equations, the repeated indices are thus dummy indices:

    A_kk = A_mm    and    a_ik b_kl = a_in b_nl

In the equation

    E_ij = e_im e_mj

i and j represent free indices and m is a dummy index. Assuming N = 3 and using the range convention, it follows that

    E_ij = e_i1 e_1j + e_i2 e_2j + e_i3 e_3j

Care must be taken to avoid breaking grammatical rules in the indicial language. For example, the expression

    a · b = (a_k ê_k) · (b_k ê_k)

is erroneous, since the summation on the dummy indices is ambiguous. To avoid such ambiguity, a dummy index can only be paired with one other dummy index in an expression. A good rule to follow is: use separate dummy indices for each implied summation in an expression.
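The range and summation conventions map directly onto loops, which is one of the benefits noted above. The following Python/NumPy sketch (illustrative only, not part of the original text) evaluates E_ij = e_im e_mj both with explicit loops and with np.einsum, whose index strings follow the same grammar as indicial notation:

```python
import numpy as np

N = 3
e = np.random.rand(N, N)  # components e_ij

# E_ij = e_im e_mj : m is a dummy (summed) index, i and j are free.
E_loops = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        for m in range(N):          # summation over the dummy index m
            E_loops[i, j] += e[i, m] * e[m, j]

# np.einsum uses the same index language: a repeated letter is summed.
E_einsum = np.einsum('im,mj->ij', e, e)
assert np.allclose(E_loops, E_einsum)

# A_kk (a repeated index on one symbol) is the trace:
A = np.random.rand(N, N)
assert np.isclose(np.einsum('kk->', A), np.trace(A))
```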
Contraction of Indices. Contraction refers to the process of summing over a pair of repeated indices; this reduces the order of a tensor by two. For example (see the numerical sketch after this list):

- Contracting the indices of A_ij (a second-order tensor) leads to A_kk (a zeroth-order tensor, or scalar).
- Contracting the indices of B_ijk (a third-order tensor) leads to B_ikk (a first-order tensor).
- Contracting the indices of C_ijkl (a fourth-order tensor) leads to C_ijmm (a second-order tensor).

Comma Subscript Convention. A subscript comma followed by a subscript index i indicates partial differentiation with respect to the coordinate x_i. Thus,

    φ,m ≡ ∂φ/∂x_m ;    a_i,j ≡ ∂a_i/∂x_j ;    C_ij,kl ≡ ∂²C_ij/(∂x_k ∂x_l) ;    etc.    (A.1)

If i remains a free index, differentiation of a tensor with respect to x_i produces a tensor of order one higher. For example,

    A_j,i = ∂A_j/∂x_i    (A.2)

If i is a dummy index, differentiation of a tensor with respect to x_i produces a tensor of order one lower. For example,

    V_m,m = ∂V_m/∂x_m = ∂V_1/∂x_1 + ∂V_2/∂x_2 + ... + ∂V_N/∂x_N    (A.3)
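As an illustration of contraction (added here for concreteness, assuming NumPy is available), the reductions of order listed above can be reproduced with np.einsum, where each repeated letter is summed exactly as in the summation convention:

```python
import numpy as np

N = 3
A = np.random.rand(N, N)          # second-order tensor A_ij
B = np.random.rand(N, N, N)       # third-order tensor B_ijk
C = np.random.rand(N, N, N, N)    # fourth-order tensor C_ijkl

# Contracting a pair of indices lowers the order by two:
a = np.einsum('kk->', A)          # A_kk  : order 2 -> order 0 (scalar)
b = np.einsum('ikk->i', B)        # B_ikk : order 3 -> order 1 (vector)
D = np.einsum('ijmm->ij', C)      # C_ijmm: order 4 -> order 2

print(np.shape(a), np.shape(b), np.shape(D))  # () (3,) (3, 3)
```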
A.4 Coordinate Systems

The definition of the geometric shapes of bodies is facilitated by the use of a coordinate system. With respect to a particular coordinate system, a vector may be defined by specifying the scalar components of that vector in that system.

A rectangular Cartesian coordinate (RCC) system is represented by three mutually perpendicular axes in the manner shown in Figure A.1.

[Figure A.1: Rectangular Cartesian coordinate system, with axes x_1, x_2, x_3 and unit base vectors ê_1, ê_2, ê_3.]

Any vector in the RCC system may be expressed as a linear combination of three arbitrary, nonzero, non-coplanar vectors called the base vectors. Base vectors are, by hypothesis, linearly independent. A set of base vectors in a given coordinate system is said to constitute a basis for that system. The most frequent choice of base vectors for the RCC system is the set of unit vectors ê_1, ê_2, ê_3, directed parallel to the x_1, x_2 and x_3 coordinate axes, respectively.

Remark

1. The summation convention is very often employed in connection with the representation of vectors and tensors by indexed base vectors written in symbolic notation.

In Euclidean space any vector is completely specified by its three components; the range on indices is thus 3 (i.e., N = 3). A point with coordinates (q_1, q_2, q_3) is thus located by a position vector x, where

    x = q_1 ê_1 + q_2 ê_2 + q_3 ê_3    (A.4)

In abbreviated form this is written as

    x = q_i ê_i    (A.5)

where i is a summed index (i.e., the summation convention applies even though the repeated index does not appear on the same kernel letter).

The base vectors constitute a right-handed unit vector triad, or right orthogonal triad, that satisfies the following relations:

    ê_i · ê_j = δ_ij    (A.6)

and
    ê_i × ê_j = ε_ijk ê_k    (A.7)

A set of base vectors satisfying the above conditions is often called an orthonormal basis. In equation (A.6), δ_ij denotes the Kronecker delta (a second-order tensor, typically denoted by I), defined by

    δ_ij = 1 if i = j ;    δ_ij = 0 if i ≠ j    (A.8)

In equation (A.7), ε_ijk is the permutation symbol or alternating tensor (a third-order tensor), defined in the following manner:

    ε_ijk = +1 if i, j, k are an even permutation of 1, 2, 3
    ε_ijk = -1 if i, j, k are an odd permutation of 1, 2, 3
    ε_ijk =  0 if i, j, k are not a permutation of 1, 2, 3    (A.9)

The even permutations of 1, 2, 3 are the cyclic readings of 1, 2, 3, 1, 2, 3, ..., namely (1,2,3), (2,3,1), (3,1,2); the odd permutations are the cyclic readings of 3, 2, 1, 3, 2, 1, ..., namely (3,2,1), (2,1,3), (1,3,2). The indices fail to be a permutation if two or more of them have the same value.

Remarks

1. The Kronecker delta is sometimes called the substitution operator since, for example,

    δ_ij b_j = δ_i1 b_1 + δ_i2 b_2 + δ_i3 b_3 = b_i    (A.10)

    δ_ij C_jk = C_ik    (A.11)

and so on. From its definition we note that δ_ii = 3.

2. In light of the above discussion, the scalar or dot product of two vectors a and b is written as

    a · b = (a_i ê_i) · (b_j ê_j) = a_i b_j δ_ij = a_i b_i    (A.12)

In the special case when a = b,

    a · a = a_k a_k = (a_1)² + (a_2)² + (a_3)²    (A.13)

The magnitude of a vector is thus computed as

    |a| = (a · a)^(1/2) = (a_k a_k)^(1/2)    (A.14)

3. The vector or cross product of two vectors a and b is written as

    a × b = (a_i ê_i) × (b_j ê_j) = a_i b_j ε_ijk ê_k    (A.15)
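The definitions (A.8) and (A.9) and the product formulas (A.10), (A.12) and (A.15) lend themselves to a direct numerical check. The sketch below (an illustrative Python/NumPy addition, not part of the original text) builds δ_ij and ε_ijk from their definitions and verifies the dot and cross products against library routines:

```python
import numpy as np

# Kronecker delta and permutation symbol, built from their definitions.
delta = np.eye(3)                               # delta_ij
eps = np.zeros((3, 3, 3))                       # epsilon_ijk
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0                          # even permutations
    eps[k, j, i] = -1.0                         # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

# Substitution property: delta_ij b_j = b_i  (A.10)
assert np.allclose(delta @ b, b)

# Dot product: a . b = a_i b_j delta_ij = a_i b_i  (A.12)
assert np.isclose(np.einsum('i,j,ij->', a, b, delta), a @ b)

# Cross product: (a x b)_k = a_i b_j eps_ijk  (A.15)
cross = np.einsum('ijk,i,j->k', eps, a, b)
assert np.allclose(cross, np.cross(a, b))
```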
4. The determinant of a square matrix A is

    det A = |A| = | A_11  A_12  A_13 |
                  | A_21  A_22  A_23 |
                  | A_31  A_32  A_33 |

          = A_11 A_22 A_33 + A_12 A_23 A_31 + A_13 A_21 A_32
            - A_31 A_22 A_13 - A_32 A_23 A_11 - A_33 A_21 A_12

          = ε_ijk A_1i A_2j A_3k    (A.16)

5. It can also be shown that

    ε_ijk det A = | A_i1  A_i2  A_i3 |  =  | A_1i  A_1j  A_1k |
                  | A_j1  A_j2  A_j3 |     | A_2i  A_2j  A_2k |
                  | A_k1  A_k2  A_k3 |     | A_3i  A_3j  A_3k |

    ε_ijk ε_rst det A = | A_ir  A_is  A_it |
                        | A_jr  A_js  A_jt |
                        | A_kr  A_ks  A_kt |

    ε_ijk ε_rst = | δ_ir  δ_is  δ_it |
                  | δ_jr  δ_js  δ_jt |
                  | δ_kr  δ_ks  δ_kt |

This leads to the following relations:

    ε_ijk ε_ist = δ_js δ_kt - δ_jt δ_ks
    ε_ijk ε_ijr = 2 δ_kr
    ε_ijk ε_ijk = 6
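The determinant formula (A.16) and the ε-δ relations can likewise be verified numerically. The following sketch (an added illustration, rebuilding the delta and eps arrays of the previous listing) checks them for a random matrix:

```python
import numpy as np

delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

A = np.random.rand(3, 3)

# det A = eps_ijk A_1i A_2j A_3k  (A.16); A[0], A[1], A[2] are the rows.
det = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
assert np.isclose(det, np.linalg.det(A))

# eps_ijk eps_ist = delta_js delta_kt - delta_jt delta_ks
lhs = np.einsum('ijk,ist->jkst', eps, eps)
rhs = (np.einsum('js,kt->jkst', delta, delta)
       - np.einsum('jt,ks->jkst', delta, delta))
assert np.allclose(lhs, rhs)

# eps_ijk eps_ijk = 6
assert np.isclose(np.einsum('ijk,ijk->', eps, eps), 6.0)
```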
A.5 Transformation Laws for Cartesian Tensors

Define a point P in space, referred to two rectangular Cartesian coordinate systems. The base vectors for one coordinate system are unprimed, while those for the second are primed. The origins of both coordinate systems are assumed to coincide. The position vector to this point is given by

    x = x_i ê_i = x'_j ê'_j    (A.17)

To obtain a relation between the two coordinate systems, form the scalar product of the above equation with either set of base vectors; viz.,

    ê'_k · (x_i ê_i) = ê'_k · (x'_j ê'_j)    (A.18)

Upon expansion,

    x_i (ê'_k · ê_i) = x'_j δ_kj = x'_k    (A.19)

Since ê_i and ê'_k are unit vectors, it follows from the definition of the scalar product that

    ê'_k · ê_i = (1)(1) cos(ê'_k, ê_i) ≡ R_ki    (A.20)

The R_ki are computed by taking (pairwise) the cosines of the angles between the x'_k and x_i axes. For a prescribed pair of coordinate axes, the elements of R_ki are thus constants that can easily be computed. From equation (A.19) it follows that the coordinate transformation for first-order tensors (vectors) is thus

    x'_k = R_ki x_i    (A.21)

where the free index is the first one appearing on R.

We next seek the inverse transformation. Beginning again with equation (A.17), we write

    ê_k · (x_i ê_i) = ê_k · (x'_j ê'_j)    (A.22)

Thus,

    x_i δ_ki = x'_j (ê_k · ê'_j)    (A.23)

or

    x_k = R_jk x'_j    (A.24)

The free index is now the second one appearing on R.

Remark

1. In both of the above transformations, the second index on R is associated with the unprimed system.

In order to gain insight into the direction cosines R_ij, we differentiate equation (A.21) with respect to x_i, giving (with due change of dummy indices)

    ∂x'_m/∂x_i = R_mj ∂x_j/∂x_i = R_mj δ_ji = R_mi    (A.25)
We next differentiate equation (A.24) with respect to x'_m, giving

    ∂x_k/∂x'_m = R_jk ∂x'_j/∂x'_m = R_jk δ_jm = R_mk    (A.26)

Using the chain rule, it follows that

    ∂x_k/∂x_i = δ_ki = (∂x_k/∂x'_m)(∂x'_m/∂x_i) = R_mk R_mi    (A.27)

In direct notation this is written as

    I = Rᵀ R    (A.28)

implying that the R are orthogonal tensors (i.e., R⁻¹ = Rᵀ). Linear transformations such as those given by equations (A.21) and (A.24), whose direction cosines satisfy the above equation, are thus called orthogonal transformations.

The transformation rules for second-order Cartesian tensors are derived in the following manner. Let S be a second-order Cartesian tensor, and let

    u = S v    (A.29)

in the unprimed coordinates. Similarly, in primed coordinates let

    u' = S' v'    (A.30)

We next desire to relate S' to S. Using equation (A.21), substitute for u' and v' in equation (A.30) to give

    R u = S' R v    (A.31)

But from equation (A.29),

    R u = R S v    (A.32)

implying that

    R S v = S' R v    (A.33)

Since v is an arbitrary vector, and since R is an orthogonal tensor, it follows that

    S' = R S Rᵀ    or    S'_ij = R_ik R_jl S_kl    (A.34)

In a similar manner,

    S = Rᵀ S' R    or    S_ij = R_mi R_nj S'_mn    (A.35)

The transformation rules for higher-order tensors are obtained in a similar manner. For example, for tensors of rank three,

    A'_ijk = R_il R_jm R_kn A_lmn    (A.36)

and

    A_ijk = R_li R_mj R_nk A'_lmn    (A.37)
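A concrete orthogonal transformation helps fix ideas. The sketch below (an added Python/NumPy illustration; the rotation angle is an arbitrary choice) builds the direction-cosine matrix R for a rotation about the x_3 axis and checks (A.28), (A.21), (A.24) and (A.34) numerically:

```python
import numpy as np

# Direction cosines R_ki = cos(x'_k, x_i) for a rotation by angle t about x_3.
t = 0.3
R = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])

# Orthogonality: R^T R = I  (A.28)
assert np.allclose(R.T @ R, np.eye(3))

# Vector transformation: x'_k = R_ki x_i  (A.21)
x = np.array([1.0, 2.0, 3.0])
xp = R @ x
# Inverse transformation: x_k = R_jk x'_j  (A.24)
assert np.allclose(R.T @ xp, x)

# Second-order tensor: S' = R S R^T, i.e. S'_ij = R_ik R_jl S_kl  (A.34)
S = np.random.rand(3, 3)
Sp = R @ S @ R.T
assert np.allclose(Sp, np.einsum('ik,jl,kl->ij', R, R, S))
```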
Finally, the fourth-order Cartesian tensor C transforms according to the following relations:

    C'_ijkl = R_ip R_jq R_kr R_ls C_pqrs    (A.38)

and

    C_ijkl = R_pi R_qj R_rk R_sl C'_pqrs    (A.39)

A.6 Principal Values and Principal Directions

In the present discussion, only symmetric second-order tensors with real components are considered. For every symmetric tensor A, defined at some point in space, there is associated with each direction (specified by the unit normal n) at the point a vector given by the inner product

    v = A n    (A.40)

This is shown schematically in Figure A.2.

[Figure A.2: The vector v = An associated with the direction n.]

Remark

1. A may be viewed as a linear vector operator that produces the vector v conjugate to the direction n.

If v is parallel to n, the above inner product may be expressed as a scalar multiple of n; viz.,

    v = A n = λ n    or    A_ij n_j = λ n_i    (A.41)

The direction n_i is called a principal direction, principal axis, or eigenvector of A. Substituting the relationship n_i = δ_ij n_j into equation (A.41) leads to

    (A - λI) n = 0    or    (A_ij - λδ_ij) n_j = 0    (A.42)

For a non-trivial solution, the determinant of the coefficients must be zero; viz.,

    det(A - λI) = 0    or    |A_ij - λδ_ij| = 0    (A.43)

This is called the characteristic equation of A.
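Numerically, the eigenvalue problem (A.41)-(A.43) for a symmetric tensor can be solved with standard library routines. The following illustrative sketch (the particular matrix A is an arbitrary choice) confirms that each principal pair satisfies (A.41), that each principal value annihilates the determinant in (A.43), and that the principal directions are mutually orthogonal:

```python
import numpy as np

# A symmetric second-order tensor with real components:
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, n = np.linalg.eigh(A)   # eigh handles symmetric A; eigenvectors in columns

for i in range(3):
    # Each pair satisfies A n = lambda n  (A.41) ...
    assert np.allclose(A @ n[:, i], lam[i] * n[:, i])
    # ... and each eigenvalue makes det(A - lambda I) vanish  (A.43).
    assert np.isclose(np.linalg.det(A - lam[i] * np.eye(3)), 0.0, atol=1e-9)

# Distinct principal values -> mutually orthogonal principal directions:
assert np.allclose(n.T @ n, np.eye(3))
```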
In light of the symmetry of A, the expansion of equation (A.43) gives

    | (A_11 - λ)   A_12         A_13       |
    |  A_12       (A_22 - λ)    A_23       |  =  0    (A.44)
    |  A_13        A_23        (A_33 - λ)  |

The evaluation of this determinant leads to a cubic polynomial in λ, known as the characteristic polynomial of A; viz.,

    λ³ - Ī_1 λ² + Ī_2 λ - Ī_3 = 0    (A.45)

where

    Ī_1 = tr(A) = A_11 + A_22 + A_33 = A_kk    (A.46)

    Ī_2 = (1/2)(A_ii A_jj - A_ij A_ij)    (A.47)

    Ī_3 = det(A)    (A.48)

The scalar coefficients Ī_1, Ī_2 and Ī_3 are called the first, second and third invariants, respectively, derived from the characteristic equation of A.

The three roots λ_(i), i = 1, 2, 3, of the characteristic polynomial are called the principal values or eigenvalues of A. Associated with each eigenvalue is an eigenvector n_(i). For a symmetric tensor with real components, the principal values are real. If the three principal values are distinct, the three principal directions are mutually orthogonal. When referred to principal axes, A assumes a diagonal form; viz.,

    A = | λ_(1)   0       0     |
        |  0      λ_(2)   0     |    (A.49)
        |  0      0       λ_(3) |

Remark

1. Eigenvalues and eigenvectors have a useful geometric interpretation in two- and three-dimensional space. If λ is an eigenvalue of A corresponding to v, then Av = λv, so that, depending on the value of λ, multiplication by A dilates v (if λ > 1), contracts v (if 0 < λ < 1), or reverses the direction of v (if λ < 0).

Example 1: Invariants of First-Order Tensors. Consider a vector v. If the coordinate axes are rotated, the components of v will change; however, the length (magnitude) of v remains unchanged. As such, the length is said to be invariant. In fact, a vector (first-order tensor) has only one invariant, its length.

Example 2: Invariants of Second-Order Tensors.
A second-order tensor possesses three invariants. Denoting the tensor by A, its invariants are (these differ from the ones derived from the characteristic equation of A):

    I_1 = tr(A) = A_11 + A_22 + A_33 = A_kk    (A.50)

    I_2 = (1/2) tr(A²) = (1/2) A_ik A_ki    (A.51)

    I_3 = (1/3) tr(A³) = (1/3) A_ik A_kj A_ji    (A.52)

Any function of the invariants is also an invariant. To verify that the first invariant is unchanged under coordinate transformation, recall that

    A'_ij = R_ik R_jl A_kl    (A.53)

Thus,

    A'_mm = R_mk R_ml A_kl = δ_kl A_kl = A_kk    (A.54)

For the second invariant,

    A'_ik A'_ki = (R_il R_km A_lm)(R_kn R_ip A_np)
                = R_il R_ip A_lm R_km R_kn A_np
                = δ_lp A_lm δ_mn A_np
                = A_pm A_mp    (A.55)

Finally, for the third invariant,

    A'_ik A'_km A'_mi = (R_il R_kp A_lp)(R_kn R_mq A_nq)(R_ms R_it A_st)
                      = R_il R_it A_lp R_kp R_kn A_nq R_mq R_ms A_st
                      = δ_lt A_lp δ_pn A_nq δ_qs A_st
                      = A_tp A_pq A_qt    (A.56)
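The invariance demonstrated above can also be confirmed numerically. In the following Python/NumPy sketch (added for illustration; the helper name invariants is hypothetical, not from the original text), the invariants (A.50)-(A.52) of a random tensor are compared before and after an orthogonal transformation:

```python
import numpy as np

def invariants(A):
    """Invariants I1, I2, I3 of a second-order tensor (A.50)-(A.52)."""
    I1 = np.trace(A)
    I2 = 0.5 * np.trace(A @ A)
    I3 = (1.0 / 3.0) * np.trace(A @ A @ A)
    return I1, I2, I3

# A rotation about x_3 and an arbitrary second-order tensor:
t = 0.7
R = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])
A = np.random.rand(3, 3)
Ap = R @ A @ R.T   # A'_ij = R_ik R_jl A_kl  (A.53)

# The invariants are unchanged by the coordinate transformation:
assert np.allclose(invariants(A), invariants(Ap))
```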
A.7 Tensor Calculus

Several important differential operators are summarized below.

Gradient Operator. The linear differential operator

    ∇ = ê_1 ∂/∂x_1 + ê_2 ∂/∂x_2 + ê_3 ∂/∂x_3 = ê_i ∂/∂x_i    (A.57)

is called the gradient or del operator.

Gradient of a Scalar Field. Let φ(x_1, x_2, x_3) be a scalar field. The gradient of φ is the vector

    ∇φ = grad φ = ê_i ∂φ/∂x_i = ê_i φ,i    (A.58)

If n = n_i ê_i is a unit vector, the scalar operator

    n · ∇ = n_i ∂/∂x_i    (A.59)

is called the directional derivative operator in the direction n.

Divergence of a Vector Field. Let v(x_1, x_2, x_3) be a vector field. The scalar quantity

    ∇ · v = div v = ∂v_i/∂x_i = v_i,i    (A.60)

is called the divergence of v.

Curl of a Vector Field. Let u(x_1, x_2, x_3) be a vector field. The vector quantity

    ∇ × u = curl u = ε_ijk (∂u_k/∂x_j) ê_i = ε_ijk u_k,j ê_i    (A.61)

is called the curl of u.

Remark

1. When using u_k,j for ∂u_k/∂x_j, the indices are reversed in order as compared to the definition of the vector (cross) product; that is,

    u × v = ε_kij u_i v_j ê_k    whereas    ∇ × v = ε_ijk v_k,j ê_i    (A.62)
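These operators can be exercised symbolically. The following sketch uses SymPy (an added illustration; the particular fields φ and v are arbitrary choices) to form the gradient (A.58), divergence (A.60), and curl (A.61) componentwise. For this particular v, which is the gradient of x1 x2 x3, both the divergence and the curl happen to vanish:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

# A scalar field and a vector field to differentiate:
phi = x1**2 * x2 + sp.sin(x3)
v = [x2 * x3, x1 * x3, x1 * x2]

def eps(i, j, k):
    # Permutation symbol on 0-based indices {0, 1, 2}.
    return (i - j) * (j - k) * (k - i) // 2

# Gradient of a scalar: (grad phi)_i = phi,i  (A.58)
grad_phi = [sp.diff(phi, xi) for xi in X]

# Divergence of a vector: div v = v_i,i  (A.60)
div_v = sum(sp.diff(v[i], X[i]) for i in range(3))

# Curl of a vector: (curl v)_i = eps_ijk v_k,j  (A.61)
curl_v = [sum(eps(i, j, k) * sp.diff(v[k], X[j])
              for j in range(3) for k in range(3)) for i in range(3)]

print(grad_phi)                         # [2*x1*x2, x1**2, cos(x3)]
print(sp.simplify(div_v))               # 0
print([sp.simplify(c) for c in curl_v]) # [0, 0, 0]
```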
The Laplacian Operator. The Laplacian operator is defined by

    ∇²( ) = div grad( ) = ∇ · ∇( ) = ∂²( )/∂x_i∂x_i = ( ),ii    (A.63)

Let φ(x_1, x_2, x_3) be a scalar field. The Laplacian of φ is then

    ∇²φ = (ê_i ∂/∂x_i) · (φ,j ê_j) = (∂²φ/∂x_i∂x_j)(ê_i · ê_j) = φ,ji δ_ij = φ,ii    (A.64)

Let v(x_1, x_2, x_3) be a vector field. The Laplacian of v is the following vector quantity:

    ∇²v = (∇²v_k) ê_k = v_k,ii ê_k    (A.65)

Remark

1. An alternate statement of the Laplacian of a vector is

    ∇²v = ∇(∇ · v) - ∇ × (∇ × v)    (A.66)
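Identity (A.66) can be checked symbolically as well. The following SymPy sketch (illustrative, with an arbitrarily chosen field v) compares the componentwise Laplacian (A.65) against ∇(∇ · v) - ∇ × (∇ × v):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

def eps(i, j, k):
    # Permutation symbol on 0-based indices {0, 1, 2}.
    return (i - j) * (j - k) * (k - i) // 2

def curl(u):
    return [sum(eps(i, j, k) * sp.diff(u[k], X[j])
                for j in range(3) for k in range(3)) for i in range(3)]

def laplacian(f):
    return sum(sp.diff(f, xi, 2) for xi in X)

# An arbitrary, non-trivial vector field:
v = [x1**2 * x2, sp.sin(x2) * x3, x1 * x3**2]

# Left side: componentwise Laplacian  (A.65)
lhs = [laplacian(vi) for vi in v]

# Right side: grad(div v) - curl(curl v)  (A.66)
div_v = sum(sp.diff(v[i], X[i]) for i in range(3))
rhs = [sp.diff(div_v, X[i]) - curl(curl(v))[i] for i in range(3)]

assert all(sp.simplify(lhs[i] - rhs[i]) == 0 for i in range(3))
```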
References

[1] Fung, Y. C., A First Course in Continuum Mechanics, 2nd Edition. Englewood Cliffs, NJ: Prentice-Hall (1977).
[2] Joshi, A. W., Matrices and Tensors in Physics, 2nd Edition. A Halsted Press Book. New York: J. Wiley and Sons (1984).
[3] Mase, G. E., Continuum Mechanics, Schaum's Outline Series. New York: McGraw-Hill Book Co. (1970).
[4] Sokolnikoff, I. S., Tensor Analysis: Theory and Applications. New York: J. Wiley and Sons (1958).