1 Scalars, Vectors and Tensors


DEPARTMENT OF PHYSICS, INDIAN INSTITUTE OF TECHNOLOGY, MADRAS
PH350 Classical Physics Handout: Scalars, Vectors and Tensors

In physics, we are interested in obtaining laws (in the form of mathematical equations) which govern the behaviour of different systems. These laws should hold at different places as well as times, and they should also be independent of the observer testing them. We thus infer that physical laws and the values of experimentally measurable quantities must be frame independent, i.e., independent of the choice of the origin of coordinates and of the coordinate axes. In order to check this, we need to be able to convert results in one frame into equivalent results in another frame. If we can write the physical laws, i.e., the mathematical equations which describe them, in terms of objects that transform in a definite manner under a change of frame, the rule for conversion is manifest. Examples of such objects are scalars, vectors and, more generally, tensors. When the equations are written in terms of these objects, we say that the equations are form invariant. For instance, an equation such as Newton's second law, $\vec F = m\vec a$, has vectors on both sides. While two different observers assign different numerical values to the components of these vectors, they will both agree that the equation holds. However, the elementary or high-school definition of a vector ("a quantity with both magnitude and direction") is rather unsatisfactory. In this unit, we shall first provide a better definition of a vector through its properties under a change of coordinate axes (a change of basis vectors).

1.1 What is a vector?

The standard example of a vector in three dimensions is the position vector $\vec r = x\hat e_x + y\hat e_y + z\hat e_z$. The three numbers $(x, y, z)$ specify the vector fully.
However, their actual numerical values depend on the particular coordinate axes chosen. In other words, a set $(\hat e_x, \hat e_y, \hat e_z)$ of mutually perpendicular unit vectors is chosen, and the values $(x, y, z)$ are with reference to this set.¹ This choice is not unique. A different orthonormal basis will give rise to a different set of numbers, say $(x', y', z')$, associated with the same position vector. How are the two sets related? It is useful to relabel the coordinates as $x_1 = x$, $x_2 = y$, $x_3 = z$.

¹ Such a set is called an orthonormal basis. Each vector has unit magnitude, and they are mutually orthogonal (perpendicular): $\hat e_x \cdot \hat e_y = 0$, etc.

The relation between the two sets of coordinates can be written as

$$x'_i = \sum_{j=1}^{3} R_{ij}\, x_j, \qquad (1)$$

where the nine numbers $R_{ij}$ are the coefficients showing how the two sets of axes are related. This can be rewritten in matrix form if we represent the coordinates as a column vector and the $R_{ij}$ as the entries of a matrix $R$, with $i$ as the row label and $j$ as the column label:

$$\begin{pmatrix} x'_1 \\ x'_2 \\ x'_3 \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}. \qquad (2)$$

We can show that the matrix $R$ satisfies the conditions ($R^T$ is the transpose of the matrix $R$)

$$R^T = R^{-1}, \quad \text{or} \quad R^T R = R\, R^T = I, \qquad (3)$$

and

$$\det R = 1, \qquad (4)$$

where $I$ is the $3\times 3$ unit matrix. The first condition ensures that the magnitudes of vectors remain unchanged in all bases.² The second condition preserves orientation, i.e., the basis vectors always form a right-handed triad. Such matrices are called rotation matrices, because two different choices of basis vectors are related by rotations. For instance, consider the case when two bases have one common basis vector, $\hat e'_z = \hat e_z$, i.e., they have the same z-axis. The coordinates are related by

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad (5)$$

i.e., the two sets of x- and y-axes are related by a rotation through an angle $\phi$ about the z-axis. We can now generalise this to arbitrary rotations. An arbitrary rotation is specified by an axis of rotation $\hat n$ and an angle of rotation $\phi$ about the direction of $\hat n$. Let us denote the corresponding rotation matrix by $R(\hat n, \phi)$. Thus, the rotation matrix given above corresponds to $R(\hat e_z, \phi)$. One can give an explicit form for the general rotation matrix, but we shall not do so now.

We can now provide the definition of a vector. A vector is a set of three quantities that transform, under rotations of the coordinate axes, exactly as the set of coordinates itself transforms. We usually represent the three quantities as a column vector, $\vec v = (v_1, v_2, v_3)^T$. Under the change of coordinates given by Eqn. (1), the components transform as

$$v'_i = \sum_{j=1}^{3} R_{ij}\, v_j. \qquad (6)$$

² Actually, it also ensures that the new basis vectors are unit vectors and orthogonal to each other.
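As a quick numerical check (a NumPy sketch of my own, not part of the original handout), one can verify conditions (3) and (4) for the z-axis rotation of Eqn. (5), and see that the magnitude of a vector transformed as in Eqn. (6) is unchanged:

```python
import numpy as np

# The z-axis rotation R(e_z, phi) of Eqn. (5), for an arbitrary angle.
phi = 0.3  # radians
R = np.array([
    [np.cos(phi),  np.sin(phi), 0.0],
    [-np.sin(phi), np.cos(phi), 0.0],
    [0.0,          0.0,         1.0],
])

# Condition (3): R is orthogonal, R^T R = I.
assert np.allclose(R.T @ R, np.eye(3))
# Condition (4): R preserves orientation, det R = +1.
assert np.isclose(np.linalg.det(R), 1.0)

# Transform a vector's components, v'_i = R_ij v_j; the magnitude is unchanged.
v = np.array([1.0, 2.0, 3.0])
v_prime = R @ v
assert np.isclose(np.linalg.norm(v_prime), np.linalg.norm(v))
```

The same checks pass for any rotation matrix, not just rotations about the z-axis.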

Exercise: Given a vector $\vec v$, find a rotation matrix $R$ (or equivalently, a change of basis) such that $\vec v\,' = (0, 0, v)^T$. Is $R$ unique? Explain.

1.2 The Einstein summation convention

This convention is very useful for writing, in a compact and uncluttered form, formulae involving several summations. The convention is simply that any index that occurs twice is summed over. The summation always runs from 1 to 3, since we are in three dimensions. Further, no index can occur more than twice in any equation. Thus, we can rewrite Eqn. (6) as

$$v'_i = R_{ij}\, v_j, \qquad (7)$$

where the summation over $j$ is implicitly assumed under the Einstein summation convention and $i$ is a free (i.e., unsummed) index. The index that is summed over is referred to as a dummy index. Care should be taken to use different letters for different summations. Further, a simple check is that the same free indices must occur on either side of any equation.

1.3 The dot-product: Scalars from vectors

A scalar is defined as an object that is unchanged, or invariant, under a change of basis (rotations). Given two vectors $\vec v = (v_1, v_2, v_3)^T$ and $\vec w = (w_1, w_2, w_3)^T$, we can construct a scalar from them. It is called the dot product, and is defined by

$$\vec v \cdot \vec w \equiv v_i\, \delta_{ij}\, w_j, \qquad (8)$$

where

$$\delta_{ij} = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}$$

is the Kronecker delta. When it is written as a $3\times 3$ matrix, it is nothing but the identity matrix $I$. Thus, it is not hard to see that

$$\vec v \cdot \vec w = \vec w \cdot \vec v = v_i w_i. \qquad (9)$$

(Note the summation over $i$, as per our convention.) The invariance of the dot product follows from the condition $R^T R = I$ satisfied by rotation matrices. Explicitly, this condition reads

$$R^T R = I \quad \Longleftrightarrow \quad R_{ji}\, R_{jk} = \delta_{ik}.$$

Here $i$ and $k$ are free indices, and the dummy index $j$ is summed from 1 to 3. The following steps show that the dot product is unchanged when going from one basis to another:

$$v'_j w'_j = R_{ji} v_i\, R_{jk} w_k = R_{ji} R_{jk}\, v_i w_k = \delta_{ik}\, v_i w_k = v_i w_i.$$
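The index gymnastics above map directly onto NumPy's `einsum`. The following sketch (my illustration, not the handout's) checks the invariance $v'_j w'_j = v_i w_i$ under a rotation; for illustration, the rotation is built by orthogonalising a random matrix via QR decomposition, which is one standard trick and not something the handout prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a proper rotation matrix (R^T R = I, det R = +1) from a random
# matrix via QR decomposition; flip the overall sign if det Q = -1.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

v = rng.standard_normal(3)
w = rng.standard_normal(3)

# Transform components: v'_j = R_jk v_k (einsum spells out the summation).
v_p = np.einsum("jk,k->j", R, v)
w_p = np.einsum("jk,k->j", R, w)

# The dot product is a scalar: v'_j w'_j = v_i w_i.
assert np.isclose(np.einsum("j,j->", v_p, w_p), np.einsum("i,i->", v, w))
```

Note how the repeated index in each `einsum` subscript string plays exactly the role of a dummy index, while the indices after `->` are the free indices.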

The dot product with the choice $\vec v = \vec w$ gives the square of the magnitude of a vector. That is,

$$v^2 = v_i v_i = v_1^2 + v_2^2 + v_3^2, \qquad (10)$$

which is clearly positive definite. Note that $v = 0$ implies $\vec v = \vec 0$. For any two vectors $\vec u$ and $\vec v$, we can show that $\vec u \cdot \vec v = |\vec u|\,|\vec v|\cos\theta$, where $\theta$ is the angle between the two vectors $\vec u$ and $\vec v$. Recall that in elementary vector analysis this formula was used to define the scalar product. We now see that it follows as a consequence of the definition $\vec u \cdot \vec v = u_i v_i$.

1.4 The cross-product: New vectors from old

Given two vectors $\vec v$ and $\vec w$, one can construct a new vector $\vec u$ by the cross product, defined below; the components of the cross product are

$$u_1 = v_2 w_3 - v_3 w_2, \quad u_2 = v_3 w_1 - v_1 w_3, \quad u_3 = v_1 w_2 - v_2 w_1, \qquad (11)$$

written symbolically as $\vec u = \vec v \times \vec w$. Eqs. (11) can be written in compact form as

$$u_i = \epsilon_{ijk}\, v_j w_k \equiv (\vec v \times \vec w)_i, \qquad (12)$$

where $\epsilon_{ijk}$ is the Levi-Civita tensor (also called the permutation symbol), defined as

$$\epsilon_{123} = 1, \qquad \epsilon_{ijk} = -\epsilon_{jik} = -\epsilon_{ikj} = -\epsilon_{kji}.$$

The second condition implies that the Levi-Civita tensor is totally antisymmetric, and is non-vanishing if and only if all three indices are distinct. Thus, $\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1$ and $\epsilon_{132} = \epsilon_{321} = \epsilon_{213} = -1$. All other components vanish. For any two vectors $\vec u$ and $\vec v$, one can show that $|\vec u \times \vec v| = |\vec u|\,|\vec v|\sin\theta$, where $\theta$ is the angle between the two vectors $\vec u$ and $\vec v$. Clearly, $\vec u \times \vec u = \vec 0$.

1.5 More complicated objects: Tensors

As mentioned earlier, scalars and vectors are not the only kinds of objects that one encounters in physical situations. A simple example of a more complicated object, a tensor, is given by $A_{ij} = v_i w_j$, where $\vec v$ and $\vec w$ are any two vectors. Under a change of basis, one can see that

$$A'_{ij} = v'_i w'_j = R_{ik} R_{jl}\, v_k w_l = R_{ik} R_{jl}\, A_{kl}.$$
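To make the Levi-Civita machinery of Sec. 1.4 concrete, here is a small NumPy sketch (my own illustration, not from the handout) that builds $\epsilon_{ijk}$ explicitly and checks that $u_i = \epsilon_{ijk} v_j w_k$ reproduces the ordinary cross product:

```python
import numpy as np

# Build the Levi-Civita symbol: +1 for even permutations of (1,2,3),
# -1 for odd permutations, 0 whenever any index repeats.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

v = np.array([2.0, 1.0, -1.0])
w = np.array([0.0, 3.0, 4.0])

# u_i = eps_ijk v_j w_k (summation over j and k), as in Eqn. (12).
u = np.einsum("ijk,j,k->i", eps, v, w)
assert np.allclose(u, np.cross(v, w))

# Antisymmetry implies v x v = 0.
assert np.allclose(np.einsum("ijk,j,k->i", eps, v, v), 0.0)
```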

A tensor like $A_{ij}$ can be represented as a $3\times 3$ matrix $A$. Then, the above transformation law can be written as

$$A' = R\, A\, R^T. \qquad (13)$$

Objects which transform in this fashion are called tensors of rank two (because their components are specified by two indices, $i$ and $j$). Not all tensors of rank two can be written as $v_i w_j$, and thus one uses the above equation as the defining condition. An example of such an object is given by the moment of inertia of a rigid body. Other examples include the stress and strain tensors, the dielectric tensor, and so on. Clearly, a tensor of rank two is the simplest generalisation of a vector. Generalising further, tensors of rank $r$ are objects which have $r$ indices. Under a rotation (given by the rotation matrix $R_{ij}$), a tensor of rank $r$, $T_{i_1 i_2 \cdots i_r}$, transforms as

$$T'_{i_1 i_2 \cdots i_r} = R_{i_1 j_1} R_{i_2 j_2} \cdots R_{i_r j_r}\, T_{j_1 j_2 \cdots j_r}. \qquad (14)$$

Thus, scalars are tensors of rank zero and vectors are tensors of rank one. The Levi-Civita tensor has rank three and the Kronecker delta has rank two. The elastic constants that relate stress and strain in linear materials form a tensor of rank four.

Exercise: Using the transformation law for a tensor of rank two, show that the Kronecker delta is invariant under rotations of the coordinate axes.

Exercise: Using the transformation law for a tensor of rank three and the antisymmetry of the Levi-Civita tensor, show that $\epsilon'_{ijk} = (\det R)\, \epsilon_{ijk}$. Thus, the Levi-Civita tensor is invariant under rotations, since $\det R = 1$.

Comment: Under arbitrary rotations, the Kronecker delta and the Levi-Civita tensor are the only invariant ("isotropic") tensors. Any higher-rank tensor that is isotropic can always be written in terms of these two basic tensors!

1.6 An important identity

As we just saw, the Kronecker delta and the Levi-Civita tensor are invariant under a change of basis. They are related by the following important identity, which enables one to derive just about all vector identities:

$$\epsilon_{ijk}\, \epsilon_{ilm} = \delta_{jl}\, \delta_{km} - \delta_{jm}\, \delta_{kl}. \qquad (15)$$

Here we must remember that the index $i$ is repeated, and so a summation over $i$ from $i = 1$ to $i = 3$ is implicit. The most straightforward proof is to verify explicitly that this is true for all values of the free indices $(j, k, l, m)$. This is left to the student. However, a simple check is that both the LHS and the RHS are antisymmetric under the exchanges $j \leftrightarrow k$ and $l \leftrightarrow m$.
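The two exercises above and identity (15) can all be checked numerically. This sketch (mine, not the handout's) uses `einsum` to apply the rank-2 and rank-3 transformation laws to $\delta_{ij}$ and $\epsilon_{ijk}$, and then verifies the identity over all index values at once:

```python
import numpy as np

# The Levi-Civita symbol, built index by index.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

delta = np.eye(3)  # the Kronecker delta as a matrix

# A sample rotation: the z-axis rotation R(e_z, phi) of Eqn. (5).
phi = 0.7
R = np.array([[np.cos(phi),  np.sin(phi), 0.0],
              [-np.sin(phi), np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])

# Rank-2 law: delta'_ij = R_ik R_jl delta_kl = delta_ij (delta is isotropic).
delta_p = np.einsum("ik,jl,kl->ij", R, R, delta)
assert np.allclose(delta_p, delta)

# Rank-3 law: eps'_ijk = R_il R_jm R_kn eps_lmn = (det R) eps_ijk.
eps_p = np.einsum("il,jm,kn,lmn->ijk", R, R, R, eps)
assert np.allclose(eps_p, np.linalg.det(R) * eps)

# Identity (15): eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl.
lhs = np.einsum("ijk,ilm->jklm", eps, eps)
rhs = (np.einsum("jl,km->jklm", delta, delta)
       - np.einsum("jm,kl->jklm", delta, delta))
assert np.allclose(lhs, rhs)
```

Of course, a numerical check for one rotation is not a proof; the explicit verification over all $(j, k, l, m)$ asked of the student remains the real argument.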

As an application of the identity (15), consider the multiple cross-product $\vec u \times (\vec v \times \vec w)$. The $i$-th component of this is

$$(\vec u \times (\vec v \times \vec w))_i = \epsilon_{ijk}\, u_j\, (\vec v \times \vec w)_k = \epsilon_{ijk}\, u_j\, \epsilon_{klm} v_l w_m = \epsilon_{kij}\, \epsilon_{klm}\, u_j v_l w_m.$$

Using the identity (15), we obtain

$$(\vec u \times (\vec v \times \vec w))_i = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\, u_j v_l w_m = u_m v_i w_m - u_j v_j w_i,$$

which can be rewritten as

$$\vec u \times (\vec v \times \vec w) = (\vec u \cdot \vec w)\, \vec v - (\vec u \cdot \vec v)\, \vec w. \qquad (16)$$

One can further show that $(\vec u \times \vec v) \times \vec w \neq \vec u \times (\vec v \times \vec w)$, i.e., the order of the cross-product is important.

1.7 Reflections and parity

We saw that rotations are given by matrices that satisfy $R^T = R^{-1}$ and $\det R = 1$. Are there transformations which satisfy only the first of these two conditions? Taking the determinant on both sides of $R^T R = I$, we find that $(\det R)^2 = 1$, which suggests that one also consider transformations such that $\det R = -1$. It is not hard to see that reflection about any plane is such an operation. Under reflection about the xy-plane, one obtains

$$x' = x, \quad y' = y, \quad z' = -z.$$

The matrix $R = \mathrm{diag}(1, 1, -1)$ clearly has determinant $-1$. Another example is the parity operation, under which all coordinates change sign:

$$P : \quad x \to -x, \quad y \to -y, \quad z \to -z. \qquad (17)$$

One can show that any matrix satisfying $R^T = R^{-1}$ with determinant $-1$ can be obtained by following the parity operation with a rotation. Thus, it suffices to consider parity alone.

A scalar is invariant under both rotations and parity, while a pseudoscalar is one that is invariant under rotations but changes sign under parity. A vector is one that transforms identically to the position vector under both rotations and parity, while a pseudovector transforms like the position vector under rotations but is invariant under parity. The momentum $\vec p$ of a particle is a vector, while its angular momentum $\vec L$ is a pseudovector. This follows from the fact that both $\vec r$ and $\vec p$ are vectors and hence change sign under parity. Thus $\vec L = \vec r \times \vec p$ does not change sign, and
hence is a pseudovector. In general, the cross-product of two vectors gives rise to a pseudovector, while the cross-product of a vector with a pseudovector is a vector. The dot product of two vectors is a scalar (e.g. the kinetic energy of a particle), while the dot product of a vector and a pseudovector is a pseudoscalar. Vectors are sometimes called polar vectors, to distinguish them from axial vectors, which is another name for pseudovectors.

The Lorentz force law $\vec F = q(\vec E + \vec v \times \vec B)$ enables us to fix the behaviour of the electric and magnetic fields under parity. Since both $\vec F$ and $\vec v$ are vectors and $q$ is a scalar, it follows that the electric field is a vector while the magnetic field is a pseudovector. We usually decide whether an object is a pseudovector or a pseudoscalar by considering simple known equations. Never equate a vector to a pseudovector, or a scalar to a pseudoscalar.

1.8 Form invariance of physical laws

The full power of working with objects which transform nicely under a change of basis (rotations) is that the equations which describe the physical motion of particles can be written in a manifestly invariant manner, i.e., they retain the same form in different bases. For instance, Newton's law $\vec F = m\vec a$ is the same in all bases: the LHS is a vector, and the RHS is the product of a scalar and a vector, hence a vector as well. In more complicated situations, the equations need not be relations between vectors, but between tensors of the same rank. That way, invariance is guaranteed, since both sides change in an identical fashion under changes of basis and reflections. The simple way to verify this is to check that the free indices are the same on both sides of any equation. One must also be cautious and check that each dummy (non-free) index occurs only twice. If any index occurs three or more times, there has been a mistake somewhere.
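The parity behaviour described in Sec. 1.7 can also be checked numerically. In this sketch (my illustration, not from the handout), applying $P = -I$ flips the vectors $\vec r$ and $\vec p$, leaves the pseudovector $\vec L = \vec r \times \vec p$ unchanged, and flips the pseudoscalar formed by dotting a vector with $\vec L$:

```python
import numpy as np

P = -np.eye(3)  # the parity operation of Eqn. (17)

rng = np.random.default_rng(1)
r = rng.standard_normal(3)  # position: a vector
p = rng.standard_normal(3)  # momentum: a vector

L = np.cross(r, p)  # angular momentum

# Transform r and p as vectors, then recompute the angular momentum.
r_p, p_p = P @ r, P @ p
L_p = np.cross(r_p, p_p)

assert np.allclose(r_p, -r) and np.allclose(p_p, -p)  # vectors change sign
assert np.allclose(L_p, L)                            # the pseudovector does not

# The dot product of a vector with a pseudovector is a pseudoscalar:
# it changes sign under parity.
a = rng.standard_normal(3)
assert np.isclose((P @ a) @ L_p, -(a @ L))
```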
1.9 Geometric meaning of the dot product and cross product

Area element as a vector

Given two vectors $\vec u$ and $\vec v$, one can construct a parallelogram with two edges $\vec u$ and $\vec v$ (see Fig. 1). By considerations of elementary geometry, the area of the parallelogram is

$$\text{Area} = |\vec u|\,|\vec v|\sin\theta,$$

where $\theta$ is the angle between the two vectors $\vec u$ and $\vec v$. Thus, one can see that the area can be written as $\text{Area} = |\vec u \times \vec v|$. It turns out that it is better to think of the area of a parallelogram (a planar figure) as the vector $(\vec u \times \vec v)$. Clearly, the direction of this vector is normal to the surface of the parallelogram. We thus define the area vector to be

$$\vec A = \vec u \times \vec v. \qquad (18)$$
[Figure 1: Area of a parallelogram, with edges $\vec u$ and $\vec v$, angle $\theta$ between them, and height $|\vec v|\sin\theta$.]

Note that there is an ambiguity of an overall sign in the definition, for we could also have chosen it to be $(\vec v \times \vec u)$. We will return to this point when discussing the unit vector perpendicular to an area element.

Volume as a scalar

Just as two vectors can define a parallelogram, three (non-coplanar) vectors $\vec u$, $\vec v$ and $\vec w$ can define a parallelepiped which has the three vectors as three of its edges (see Fig. 2).

[Figure 2: Volume of a parallelepiped with edges $\vec u$, $\vec v$, $\vec w$.]

Again by means of elementary geometry, one can show that the volume of the parallelepiped is given by

$$\text{Volume} = \vec u \cdot (\vec v \times \vec w). \qquad (19)$$

The above expression seems to treat the three vectors differently, but this is not so. We can re-express the above formula by using the Levi-Civita tensor:

$$\text{Volume} = \epsilon_{ijk}\, u_i v_j w_k = \vec u \cdot (\vec v \times \vec w) = \vec v \cdot (\vec w \times \vec u) = \vec w \cdot (\vec u \times \vec v). \qquad (20)$$

We note again that there is a sign ambiguity in the above definition. This can be removed by taking the magnitude of the RHS of the above equation, since we require the volume to be non-negative. Note that if $\vec u$, $\vec v$, $\vec w$ are coplanar, the volume of the parallelepiped vanishes automatically, as it must.

Exercise: Obtain an explicit $3\times 3$ matrix for an arbitrary rotation matrix $R(\hat n, \phi)$.

Recommended reading: V. Balakrishnan, "How is a vector rotated?", Resonance, Vol. 4, No. 10, pp. 61-68 (1999).
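As a final numerical sketch (mine, not from the handout), the triple-product expression (20) is symmetric under cyclic rearrangements, equals the determinant of the matrix whose rows are the three edge vectors, and vanishes for coplanar vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

# Volume = u . (v x w); cyclic rearrangements give the same number.
vol = u @ np.cross(v, w)
assert np.isclose(vol, v @ np.cross(w, u))
assert np.isclose(vol, w @ np.cross(u, v))

# eps_ijk u_i v_j w_k is also the determinant with rows u, v, w.
assert np.isclose(vol, np.linalg.det(np.array([u, v, w])))

# Coplanar vectors (here w2 lies in the u-v plane) give zero volume.
w2 = 2.0 * u - 3.0 * v
assert np.isclose(u @ np.cross(v, w2), 0.0)
```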


Dot product and vector projections (Sect. 12.3) There are two main ways to introduce the dot product

Dot product and vector projections (Sect. 12.3) Two definitions for the dot product. Geometric definition of dot product. Orthogonal vectors. Dot product and orthogonal projections. Properties of the dot

Classical Physics Prof. V. Balakrishnan Department of Physics Indian Institution of Technology, Madras. Lecture No. # 13

Classical Physics Prof. V. Balakrishnan Department of Physics Indian Institution of Technology, Madras Lecture No. # 13 Now, let me formalize the idea of symmetry, what I mean by symmetry, what we mean

Orthogonal Projections

Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

Linear transformations Affine transformations Transformations in 3D. Graphics 2011/2012, 4th quarter. Lecture 5: linear and affine transformations

Lecture 5 Linear and affine transformations Vector transformation: basic idea Definition Examples Finding matrices Compositions of transformations Transposing normal vectors Multiplication of an n n matrix

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

Motivation. Goals. General Idea. Outline. (Nonuniform) Scale. Foundations of Computer Graphics

Foundations of Computer Graphics Basic 2D Transforms Motivation Many different coordinate systems in graphics World, model, body, arms, To relate them, we must transform between them Also, for modeling

Determinants, Areas and Volumes

Determinants, Areas and Volumes Theodore Voronov Part 2 Areas and Volumes The area of a two-dimensional object such as a region of the plane and the volume of a three-dimensional object such as a solid

5. Orthogonal matrices

L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

Linear Algebra Test 2 Review by JC McNamara

Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible

Linear Algebra: Vectors

A Linear Algebra: Vectors A Appendix A: LINEAR ALGEBRA: VECTORS TABLE OF CONTENTS Page A Motivation A 3 A2 Vectors A 3 A2 Notational Conventions A 4 A22 Visualization A 5 A23 Special Vectors A 5 A3 Vector

[1] Diagonal factorization

8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

Physics 53. Rotational Motion 1. We're going to turn this team around 360 degrees. Jason Kidd

Physics 53 Rotational Motion 1 We're going to turn this team around 360 degrees. Jason Kidd Rigid bodies To a good approximation, a solid object behaves like a perfectly rigid body, in which each particle

Algebra and Linear Algebra

Vectors Coordinate frames 2D implicit curves 2D parametric curves 3D surfaces Algebra and Linear Algebra Algebra: numbers and operations on numbers 2 + 3 = 5 3 7 = 21 Linear algebra: tuples, triples,...

We have just introduced a first kind of specifying change of orientation. Let s call it Axis-Angle.

2.1.5 Rotations in 3-D around the origin; Axis of rotation In three-dimensional space, it will not be sufficient just to indicate a center of rotation, as we did for plane kinematics. Any change of orientation

UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I Matrices - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

Calculus and the centroid of a triangle

Calculus and the centroid of a triangle The goal of this document is to verify the following physical assertion, which is presented immediately after the proof of Theorem I I I.4.1: If X is the solid triangular

row row row 4

13 Matrices The following notes came from Foundation mathematics (MATH 123) Although matrices are not part of what would normally be considered foundation mathematics, they are one of the first topics

CBE 6333, R. Levicky 1 Differential Balance Equations

CBE 6333, R. Levicky 1 Differential Balance Equations We have previously derived integral balances for mass, momentum, and energy for a control volume. The control volume was assumed to be some large object,

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

Chapter 3. Forces, Momentum & Stress. 3.1 Newtonian mechanics: a very brief résumé

Chapter 3 Forces, Momentum & Stress 3.1 Newtonian mechanics: a very brief résumé In classical Newtonian particle mechanics, particles (lumps of matter) only experience acceleration when acted on by external

The calibration problem was discussed in details during lecture 3.

1 2 The calibration problem was discussed in details during lecture 3. 3 Once the camera is calibrated (intrinsics are known) and the transformation from the world reference system to the camera reference

Rotational Motion. Description of the motion. is the relation between ω and the speed at which the body travels along the circular path.

Rotational Motion We are now going to study one of the most common types of motion, that of a body constrained to move on a circular path. Description of the motion Let s first consider only the description

Mathematics Notes for Class 12 chapter 10. Vector Algebra

1 P a g e Mathematics Notes for Class 12 chapter 10. Vector Algebra A vector has direction and magnitude both but scalar has only magnitude. Magnitude of a vector a is denoted by a or a. It is non-negative

Row and column operations

Row and column operations It is often very useful to apply row and column operations to a matrix. Let us list what operations we re going to be using. 3 We ll illustrate these using the example matrix

3 Orthogonal Vectors and Matrices

3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first

5.3 The Cross Product in R 3

53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

13.4 THE CROSS PRODUCT

710 Chapter Thirteen A FUNDAMENTAL TOOL: VECTORS 62. Use the following steps and the results of Problems 59 60 to show (without trigonometry) that the geometric and algebraic definitions of the dot product