Vector and Tensor Algebra (including Column and Matrix Notation)





1 Vectors and tensors

In mechanics and other fields of physics, quantities are represented by vectors and tensors. Essential manipulations with these quantities are summarized in this section. For quantitative calculations and programming, components of vectors and tensors are needed, which can be determined in a coordinate system with respect to a vector basis. The three components of a vector can be stored in a column. The nine components of a second-order tensor are generally stored in a three-by-three matrix. A fourth-order tensor relates two second-order tensors. Matrix notation of such a relation is only possible when the 9 components of each second-order tensor are stored in a column; the 81 components of the fourth-order tensor are then stored in a 9×9 matrix. For some mathematical manipulations it is also advantageous to store the 9 components of a second-order tensor in a 9×9 matrix.

1.1 Vector

A vector represents a physical quantity which is characterized by its direction and its magnitude. The length of the vector represents the magnitude, while its direction is denoted with a unit vector along its axis, also called the working line. The zero vector is a special vector having zero length.

Fig. 1 : A vector a and its working line

a = a e
length : |a| = a ; unit direction vector : e , |e| = 1 ; zero vector : 0

1.1.1 Scalar multiplication

A vector can be multiplied with a scalar, which results in a new vector with the same working line. A negative scalar multiplier reverses the vector's direction.

Fig. 2 : Scalar multiplication of a vector a

b = α a

1.1.2 Sum of two vectors

Adding two vectors results in a new vector, which is the diagonal of the parallelogram spanned by the two original vectors.

Fig. 3 : Addition of two vectors

c = a + b

1.1.3 Scalar product

The scalar or inner product of two vectors is the product of their lengths and the cosine of the smallest angle between them. The result is a scalar, which explains its name. Because the product is generally denoted with a dot between the vectors, it is also called the dot product. The scalar product is commutative and linear. According to the definition it is zero for two perpendicular vectors.

Fig. 4 : Scalar product of two vectors a and b

a · b = |a| |b| cos(φ)
a · a = |a|² ≥ 0 ; a · b = b · a ; a · (b + c) = a · b + a · c

1.1.4 Vector product

The vector product of two vectors results in a new vector, whose axis is perpendicular to the plane of the two original vectors. Its direction is determined by the right-hand rule. Its length equals the area of the parallelogram spanned by the original vectors. Because the vector product is often denoted with a cross between the vectors, it is also referred to as the cross product. Instead of the cross, other symbols are used as well, e.g. :

a * b ; a ∧ b

The vector product is linear but not commutative.

Fig. 5 : Vector product of two vectors a and b

c = a × b = { |a| |b| sin(φ) } n = [ area parallelogram ] n
b × a = − a × b ; a × (b × c) = (a · c) b − (a · b) c
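The identities above are easy to check numerically. The following sketch (assuming NumPy is available; the vectors are arbitrary illustrative choices) verifies the dot-product definition, the orthogonality and length of the cross product, and the "bac-cab" rule:

```python
import numpy as np

# Arbitrary illustrative vectors (a and b happen to be non-parallel)
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0])
c = np.array([-2.0, 1.0, 2.0])

# Scalar product: a . b = |a| |b| cos(phi)
phi = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
assert np.isclose(a @ b, np.linalg.norm(a) * np.linalg.norm(b) * np.cos(phi))

# Vector product: a x b is perpendicular to both vectors,
# and its length is the area of the spanned parallelogram
axb = np.cross(a, b)
assert np.isclose(axb @ a, 0.0) and np.isclose(axb @ b, 0.0)
assert np.isclose(np.linalg.norm(axb),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(phi))

# Not commutative, and the triple vector product ("bac-cab") rule
assert np.allclose(np.cross(b, a), -axb)
assert np.allclose(np.cross(a, np.cross(b, c)), (a @ c) * b - (a @ b) * c)
```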

1.1.5 Triple product

The triple product of three vectors is a combination of a vector product and a scalar product, where the first one has to be calculated first because otherwise we would have to take the vector product of a vector and a scalar, which is meaningless. The triple product is a scalar, which is positive for a right-handed set of vectors and negative for a left-handed set. Its absolute value equals the volume of the parallelepiped spanned by the three vectors. When the vectors are in one plane, the spanned volume, and thus the triple product, is zero. In that case the vectors are not independent.

Fig. 6 : Triple product of three vectors a, b and c

a × b · c = { |a| |b| sin(φ) } { n · c } = { |a| |b| sin(φ) } { |c| cos(ψ) }
          = volume parallelepiped
          > 0 : a, b, c right-handed
          < 0 : a, b, c left-handed
          = 0 : a, b, c dependent

1.1.6 Tensor product

The tensor product of two vectors represents a dyad, which is a linear vector transformation. A dyad is a special tensor, to be discussed later, which explains the name of this product. Because it is often denoted without a symbol between the two vectors, it is also referred to as the open product. The tensor product is not commutative. Swapping the vectors results in the conjugate or transposed or adjoint dyad. In the special case that it is commutative, the dyad is called symmetric. A conjugate dyad is denoted with the index ( )^c or the index ( )^T (transpose). Both indices are used in these notes.

Fig. 7 : A dyad is a linear vector transformation

a b = dyad = linear vector transformation
a b · p = a (b · p) = r
a b · (α p + β q) = α a b · p + β a b · q = α r + β s
conjugated dyad : (a b)^c = b a
symmetric dyad : (a b)^c = a b

1.1.7 Vector basis

A vector basis in a three-dimensional space is a set of three vectors not in one plane. These vectors are referred to as independent. Each fourth vector can be expressed in the three base vectors. When the vectors are mutually perpendicular, the basis is called orthogonal. If the basis consists of mutually perpendicular unit vectors, it is called orthonormal.

Fig. 8 : A random and an orthonormal vector basis in three-dimensional space

random basis : { c1, c2, c3 } ; c1 × c2 · c3 ≠ 0
orthonormal basis : { e1, e2, e3 } with e_i · e_j = δ_ij (δ_ij = Kronecker delta)
  e_i · e_j = 0 for i ≠ j ; e_i · e_i = 1
right-handed basis : e1 × e2 = e3 ; e2 × e3 = e1 ; e3 × e1 = e2
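The defining property of the dyad, a b · p = a (b · p), can be sketched numerically: in components a dyad is the outer-product matrix. The vectors below are illustrative choices, not taken from the notes:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -3.0, 2.0])
p = np.array([2.0, 0.0, 1.0])
q = np.array([-1.0, 0.0, 4.0])

D = np.outer(a, b)                   # matrix of the dyad a b
r = D @ p                            # a b . p ...
assert np.allclose(r, a * (b @ p))   # ... equals a (b . p): the result lies on the line of a

# Swapping the vectors gives the conjugate (transposed) dyad
assert np.allclose(np.outer(b, a), D.T)

# Linearity: a b . (alpha p + beta q) = alpha (a b . p) + beta (a b . q)
alpha, beta = 2.0, -0.5
assert np.allclose(D @ (alpha * p + beta * q), alpha * (D @ p) + beta * (D @ q))
```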

1.1.8 Matrix representation of a vector

In every point of a three-dimensional space three independent vectors exist. Here we assume that these base vectors { e1, e2, e3 } are orthonormal, i.e. orthogonal (= perpendicular) and of unit length. A fourth vector a can be written as a weighted sum of these base vectors. The coefficients are the components of a with relation to { e1, e2, e3 }. The component a_i represents the length of the projection of the vector a on the line with direction e_i. We can denote this in several ways. In index notation, a short version of the above-mentioned summation is based on the Einstein summation convention. In column notation, (transposed) columns are used to store the components of a and the base vectors, and the usual rules for the manipulation of columns apply.

a = a1 e1 + a2 e2 + a3 e3 = Σ_{i=1}^{3} a_i e_i = a_i e_i
  = [ a1 a2 a3 ] [ e1 ; e2 ; e3 ] = ã^T ẽ = ẽ^T ã

Fig. 9 : A vector represented with components w.r.t. an orthonormal vector basis

1.1.9 Components

The components of a vector a with respect to an orthonormal basis can be determined directly. All components, stored in the column ã, can then be calculated as the inner product of the vector a and the column ẽ containing the base vectors.

a_i = a · e_i , i = 1, 2, 3

[ a1 ; a2 ; a3 ] = [ a · e1 ; a · e2 ; a · e3 ] = a · [ e1 ; e2 ; e3 ] = a · ẽ
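A minimal numerical sketch of a_i = a · e_i: the basis below is an assumption for illustration (the standard basis rotated about e3), stored row-wise so that the column of components is one matrix-vector product:

```python
import numpy as np

# Illustrative orthonormal basis: standard basis rotated 30 degrees about e3
t = np.deg2rad(30.0)
e1 = np.array([np.cos(t), np.sin(t), 0.0])
e2 = np.array([-np.sin(t), np.cos(t), 0.0])
e3 = np.array([0.0, 0.0, 1.0])
E = np.vstack([e1, e2, e3])        # the column ~e of base vectors (row-wise)

a = np.array([1.0, -2.0, 0.5])     # a vector given in the fixed frame

# Components a_i = a . e_i, i.e. the column ~a = E a
at = E @ a
assert np.isclose(at[0], a @ e1)

# Reconstruction: a = a1 e1 + a2 e2 + a3 e3 = ~a^T ~e
assert np.allclose(at[0] * e1 + at[1] * e2 + at[2] * e3, a)
```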

1.2 Coordinate systems

1.2.1 Cartesian coordinate system

A point in a Cartesian coordinate system is identified by three independent Cartesian coordinates, which measure distances along three perpendicular coordinate axes, starting from a reference point, the origin. In each point three coordinate axes exist which are parallel to the original coordinate axes. Base vectors are unit vectors tangential to the coordinate axes. They are orthonormal and independent of the Cartesian coordinates.

Fig. 10 : Cartesian coordinate system

Cartesian coordinates : (x1, x2, x3) or (x, y, z)
Cartesian basis : { e1, e2, e3 } or { ex, ey, ez }

1.2.2 Cylindrical coordinate system

A point in a cylindrical coordinate system is identified by three independent cylindrical coordinates. Two of these measure a distance, respectively from (r) and along (z) a reference axis through a reference point, the origin. The third coordinate measures an angle (θ), rotating from a reference plane around the reference axis. In each point three coordinate axes exist, two linear and one circular. Base vectors are unit vectors tangential to these coordinate axes. They are orthonormal, and two of them depend on the angular coordinate. The cylindrical coordinates can be transformed into Cartesian coordinates and vice versa :

x1 = r cos(θ) ; x2 = r sin(θ) ; x3 = z
r = sqrt(x1² + x2²) ; θ = arctan(x2 / x1) ; z = x3

The unit tangential vectors to the coordinate axes constitute an orthonormal vector basis { er, et, ez }. The derivatives of these base vectors can be calculated.

Fig. 11 : Cylindrical coordinate system

cylindrical coordinates : (r, θ, z)
cylindrical basis : { er(θ), et(θ), ez }

er(θ) = cos(θ) e1 + sin(θ) e2 ; et(θ) = −sin(θ) e1 + cos(θ) e2 ; ez = e3

∂er/∂θ = −sin(θ) e1 + cos(θ) e2 = et ; ∂et/∂θ = −cos(θ) e1 − sin(θ) e2 = −er

1.2.3 Spherical coordinate system

A point in a spherical coordinate system is identified by three independent spherical coordinates. One measures a distance (r) from a reference point, the origin. The two other coordinates measure angles (θ and φ) w.r.t. two reference planes. In each point three coordinate axes exist, one linear and two circular. Base vectors are unit vectors tangential to these coordinate axes. They are orthonormal and depend on the angular coordinates. The spherical coordinates can be translated to Cartesian coordinates and vice versa (φ is measured from the x1x2-plane, consistent with the base vectors below) :

x1 = r cos(θ) cos(φ) ; x2 = r sin(θ) cos(φ) ; x3 = r sin(φ)
r = sqrt(x1² + x2² + x3²) ; φ = arcsin(x3 / r) ; θ = arctan(x2 / x1)

The unit tangential vectors to the coordinate axes constitute an orthonormal vector basis { er, et, eφ }. The derivatives of these base vectors can be calculated.
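The cylindrical relations above can be checked in a few lines: a round trip between cylindrical and Cartesian coordinates, and a finite-difference check of ∂er/∂θ = et and ∂et/∂θ = −er (the coordinate values are illustrative):

```python
import numpy as np

# Cylindrical -> Cartesian and back (illustrative point in the first quadrant)
r, theta, z = 2.0, 0.7, -1.5
x1, x2, x3 = r * np.cos(theta), r * np.sin(theta), z
assert np.isclose(np.hypot(x1, x2), r)
assert np.isclose(np.arctan2(x2, x1), theta)

# Base vectors and their theta-derivatives, checked by central differences
def e_r(th): return np.array([np.cos(th), np.sin(th), 0.0])
def e_t(th): return np.array([-np.sin(th), np.cos(th), 0.0])

h = 1e-6
der = (e_r(theta + h) - e_r(theta - h)) / (2 * h)
det_ = (e_t(theta + h) - e_t(theta - h)) / (2 * h)
assert np.allclose(der, e_t(theta), atol=1e-8)    # d(er)/d(theta) = et
assert np.allclose(det_, -e_r(theta), atol=1e-8)  # d(et)/d(theta) = -er
```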

Fig. 12 : Spherical coordinate system

spherical coordinates : (r, θ, φ)
spherical basis : { er(θ, φ), et(θ), eφ(θ, φ) }

er(θ, φ) = cos(θ) cos(φ) e1 + sin(θ) cos(φ) e2 + sin(φ) e3
et(θ) = (1 / cos(φ)) ∂er(θ, φ)/∂θ = −sin(θ) e1 + cos(θ) e2
eφ(θ, φ) = ∂er(θ, φ)/∂φ = −cos(θ) sin(φ) e1 − sin(θ) sin(φ) e2 + cos(φ) e3

∂er/∂θ = −sin(θ) cos(φ) e1 + cos(θ) cos(φ) e2 = cos(φ) et
∂er/∂φ = −cos(θ) sin(φ) e1 − sin(θ) sin(φ) e2 + cos(φ) e3 = eφ
∂et/∂θ = −cos(θ) e1 − sin(θ) e2 = −cos(φ) er + sin(φ) eφ
∂eφ/∂θ = sin(θ) sin(φ) e1 − cos(θ) sin(φ) e2 = −sin(φ) et
∂eφ/∂φ = −cos(θ) cos(φ) e1 − sin(θ) cos(φ) e2 − sin(φ) e3 = −er

1.2.4 Polar coordinates

In two dimensions the cylindrical coordinates are often referred to as polar coordinates.

Fig. 13 : Polar coordinates

polar coordinates : (r, θ)
polar basis : { er(θ), et(θ) }

er(θ) = cos(θ) e1 + sin(θ) e2
et(θ) = d er(θ)/dθ = −sin(θ) e1 + cos(θ) e2
d et(θ)/dθ = −er(θ)

1.3 Position vector

A point in a three-dimensional space can be identified with a position vector x, originating from the fixed origin.

1.3.1 Position vector and Cartesian components

In a Cartesian coordinate system the components of this vector x w.r.t. the Cartesian basis are the Cartesian coordinates of the considered point. The incremental position vector dx points from one point to a neighboring point and has its components w.r.t. the local Cartesian vector basis.

Fig. 14 : Position vector

x = x1 e1 + x2 e2 + x3 e3
x + dx = (x1 + dx1) e1 + (x2 + dx2) e2 + (x3 + dx3) e3
incremental position vector : dx = dx1 e1 + dx2 e2 + dx3 e3
components of dx : dx1 = dx · e1 ; dx2 = dx · e2 ; dx3 = dx · e3

1.3.2 Position vector and cylindrical components

In a cylindrical coordinate system the position vector x has two components. The incremental position vector dx has three components w.r.t. the local cylindrical vector basis.

Fig. 15 : Position vector

x = r er(θ) + z ez
x + dx = (r + dr) er(θ + dθ) + (z + dz) ez
       = (r + dr) { er(θ) + (d er/dθ) dθ } + (z + dz) ez
       = r er(θ) + z ez + r et(θ) dθ + dr er(θ) + et(θ) dr dθ + dz ez
incremental position vector : dx = dr er(θ) + r dθ et(θ) + dz ez
components of dx : dr = dx · er ; dθ = (1/r) dx · et ; dz = dx · ez

1.3.3 Position vector and spherical components

In a spherical coordinate system the position vector x has only one component, which is its length. The incremental position vector dx has three components w.r.t. the local spherical vector basis.

x = r er(θ, φ)
x + dx = (r + dr) er(θ + dθ, φ + dφ)
       = (r + dr) { er(θ, φ) + (∂er/∂θ) dθ + (∂er/∂φ) dφ }
       = r er(θ, φ) + r cos(φ) et(θ) dθ + r eφ(θ, φ) dφ + dr er(θ, φ) + higher-order terms

incremental position vector : dx = dr er(θ, φ) + r cos(φ) dθ et(θ) + r dφ eφ(θ, φ)
components of dx : dr = dx · er ; dθ = (1 / (r cos(φ))) dx · et ; dφ = (1/r) dx · eφ

1.4 Gradient operator

In mechanics (and physics in general) it is necessary to determine changes of scalars, vectors and tensors w.r.t. the spatial position. This means that derivatives w.r.t. the spatial coordinates have to be determined. An important operator, used to determine these spatial derivatives, is the gradient operator.

1.4.1 Variation of a scalar function

Consider a scalar function f of the scalar variable x. The variation of the function value between two neighboring values of x can be expressed with a Taylor series expansion. If the variation is very small, this series can be linearized, which implies that only the first-order derivative of the function is taken into account.

Fig. 16 : Variation of a scalar function of one variable

df = f(x + dx) − f(x)
   = f(x) + (df/dx)|_x dx + (1/2) (d²f/dx²)|_x dx² + ... − f(x)
   ≈ (df/dx)|_x dx

Consider a scalar function f of two independent variables x and y. The variation of the function value between two neighboring points can be expressed with a Taylor series expansion. If the variation is very small, this series can be linearized, which implies that only first-order derivatives of the function are taken into account.

df = f(x + dx, y + dy) − f(x, y) ≈ (∂f/∂x)|_(x,y) dx + (∂f/∂y)|_(x,y) dy

A function of three independent variables x, y and z can be differentiated likewise to give the variation :

df ≈ (∂f/∂x)|_(x,y,z) dx + (∂f/∂y)|_(x,y,z) dy + (∂f/∂z)|_(x,y,z) dz

Spatial variation of a Cartesian scalar function

Consider a scalar function a of the three Cartesian coordinates x, y and z. The variation of the function value between two neighboring points can be expressed with a linearized Taylor series expansion. The variation in the coordinates can be expressed in the incremental position vector between the two points. This leads to the gradient operator. The gradient (or nabla or del) operator ∇ is not a vector, because it has no length or direction. The gradient of a scalar a is a vector :

da = (∂a/∂x) dx + (∂a/∂y) dy + (∂a/∂z) dz
   = (dx · ex) ∂a/∂x + (dx · ey) ∂a/∂y + (dx · ez) ∂a/∂z
   = dx · [ ex ∂a/∂x + ey ∂a/∂y + ez ∂a/∂z ] = dx · (∇a)

gradient operator : ∇ = ex ∂/∂x + ey ∂/∂y + ez ∂/∂z = ẽ^T ∇̃ = ∇̃^T ẽ

Spatial variation of a cylindrical scalar function

Consider a scalar function a of the three cylindrical coordinates r, θ and z. The variation of the function value between two neighboring points can be expressed with a linearized Taylor series expansion. The variation in the coordinates can be expressed in the incremental position vector between the two points. This leads to the gradient operator.

da = (∂a/∂r) dr + (∂a/∂θ) dθ + (∂a/∂z) dz
   = (dx · er) ∂a/∂r + ((1/r) dx · et) ∂a/∂θ + (dx · ez) ∂a/∂z
   = dx · [ er ∂a/∂r + et (1/r) ∂a/∂θ + ez ∂a/∂z ] = dx · (∇a)

gradient operator : ∇ = er ∂/∂r + et (1/r) ∂/∂θ + ez ∂/∂z = ẽ^T ∇̃ = ∇̃^T ẽ
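The defining relation da = dx · (∇a) can be sketched numerically in Cartesian coordinates; the scalar field below is an arbitrary illustrative choice:

```python
import numpy as np

# Illustrative scalar field a(x, y, z) and its analytic gradient
def a(x, y, z):      return x**2 * y + np.sin(z)
def grad_a(x, y, z): return np.array([2 * x * y, x**2, np.cos(z)])

p  = np.array([1.2, -0.4, 0.8])       # a point
dx = np.array([1e-6, 2e-6, -1e-6])    # a small incremental position vector

# Linearized variation: da ~ dx . (grad a); the error is of order |dx|^2
da = a(*(p + dx)) - a(*p)
assert np.isclose(da, dx @ grad_a(*p), rtol=1e-4)
```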

Spatial variation of a spherical scalar function

Consider a scalar function a of the three spherical coordinates r, θ and φ. The variation of the function value between two neighboring points can be expressed with a linearized Taylor series expansion. The variation in the coordinates can be expressed in the incremental position vector between the two points. This leads to the gradient operator.

da = (∂a/∂r) dr + (∂a/∂θ) dθ + (∂a/∂φ) dφ
   = (dx · er) ∂a/∂r + ((1/(r cos(φ))) dx · et) ∂a/∂θ + ((1/r) dx · eφ) ∂a/∂φ
   = dx · [ er ∂a/∂r + et (1/(r cos(φ))) ∂a/∂θ + eφ (1/r) ∂a/∂φ ] = dx · (∇a)

gradient operator : ∇ = er ∂/∂r + et (1/(r cos(φ))) ∂/∂θ + eφ (1/r) ∂/∂φ = ẽ^T ∇̃ = ∇̃^T ẽ

1.4.2 Spatial derivatives of a vector function

For a vector function a(x) the variation can be expressed as the inner product of the difference vector dx and the gradient of the vector a. The latter entity is a dyad. The inner product of the gradient operator ∇ and a is called the divergence of a. The outer product is referred to as the rotation or curl. When cylindrical or spherical coordinates are used, the base vectors are (partly) functions of the coordinates. Differentiation must then be done with care. The gradient, divergence and rotation can be written in components w.r.t. a vector basis. The rather straightforward algebraic notation can easily be elaborated. However, the use of column/matrix notation results in shorter and more transparent expressions.

grad(a) = ∇a ; div(a) = ∇ · a ; rot(a) = ∇ × a

Cartesian components

The gradient of a vector a can be written in components w.r.t. the Cartesian vector basis { ex, ey, ez }. The base vectors are independent of the coordinates, so only the components of the vector need to be differentiated. The divergence is the inner product of ∇ and a and thus results in a scalar value. The curl results in a vector.

∇a = ( ex ∂/∂x + ey ∂/∂y + ez ∂/∂z ) ( ax ex + ay ey + az ez )
   =   ex ax,x ex + ex ay,x ey + ex az,x ez
     + ey ax,y ex + ey ay,y ey + ey az,y ez
     + ez ax,z ex + ez ay,z ey + ez az,z ez
   = ẽ^T ( ∇̃ ã^T ) ẽ

∇ · a = ( ex ∂/∂x + ey ∂/∂y + ez ∂/∂z ) · ( ax ex + ay ey + az ez )
      = ∂ax/∂x + ∂ay/∂y + ∂az/∂z = tr( ∇̃ ã^T ) = tr( ∇a )

∇ × a = ( ex ∂/∂x + ey ∂/∂y + ez ∂/∂z ) × ( ax ex + ay ey + az ez )
      = ( az,y − ay,z ) ex + ( ax,z − az,x ) ey + ( ay,x − ax,y ) ez

Cylindrical components

The gradient of a vector a can be written in components w.r.t. the cylindrical vector basis { er, et, ez }. The base vectors er and et depend on the coordinate θ, so they have to be differentiated together with the components of the vector. The result is a 3×3 matrix.

∇a = { er ∂/∂r + et (1/r) ∂/∂θ + ez ∂/∂z } { ar er + at et + az ez }
   =   er ar,r er + er at,r et + er az,r ez
     + et (1/r) ar,t er + et (1/r) at,t et + et (1/r) az,t ez
     + et (1/r) ar et − et (1/r) at er
     + ez ar,z er + ez at,z et + ez az,z ez

Here ∂er/∂θ = et and ∂et/∂θ = −er have been used. In column/matrix notation :

∇a = ẽ^T ( ∇̃ ã^T ) ẽ + ẽ^T h ẽ , with h = (1/r) [ 0 0 0 ; −at ar 0 ; 0 0 0 ]

∇ · a = tr( ∇̃ ã^T ) + tr(h) = ar,r + (1/r) at,t + (1/r) ar + az,z

Laplace operator

The Laplace operator appears in many equations. It is the inner product of the gradient operator with itself.

Laplace operator : ∇² = ∇ · ∇

Cartesian components : ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

cylindrical components : ∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ∂²/∂z²

spherical components : ∇² = ∂²/∂r² + (2/r) ∂/∂r + (1/(r cos(φ)))² ∂²/∂θ² + (1/r²) ∂²/∂φ² − (tan(φ)/r²) ∂/∂φ

1.5 2nd-order tensor

A scalar function f takes a scalar variable, e.g. p, as input and produces another scalar variable, say q, as output, so we have : q = f(p). Such a function is also called a projection. Instead of input and output being a scalar, they can also be a vector. A tensor is the equivalent of the function f in this case. What makes tensors special is that they are linear functions, a very important property. A tensor is written here as a bold-face character. The tensors which are introduced first and will be used most of the time are second-order tensors. Each second-order tensor can be written as a summation of a number of dyads. Such a representation is not unique; in fact, the number of variations is infinite. With this representation in mind we can accept that the tensor relates input and output vectors with an inner product :

q = A · p

tensor = linear projection : A · (α m + β n) = α A · m + β A · n
representation : A = α1 a1 b1 + α2 a2 b2 + α3 a3 b3 + ...

1.5.1 Components of a tensor

As said before, a second-order tensor can be represented as a summation of dyads, and this can be done in an infinite number of variations. When we write each vector of the dyadic products in components w.r.t. a three-dimensional vector basis { e1, e2, e3 }, it is immediately clear that all possible representations result in the same unique sum of nine independent dyadic products of the base vectors. The coefficients of these dyads are the components of the tensor w.r.t. the vector basis { e1, e2, e3 }. The components of the tensor A can be stored in a 3×3 matrix A.

A = α1 a1 b1 + α2 a2 b2 + α3 a3 b3 + ...

each vector in components w.r.t. { e1, e2, e3 } :

A = α1 (a11 e1 + a12 e2 + a13 e3)(b11 e1 + b12 e2 + b13 e3) +
    α2 (a21 e1 + a22 e2 + a23 e3)(b21 e1 + b22 e2 + b23 e3) + ...
  =   A11 e1 e1 + A12 e1 e2 + A13 e1 e3
    + A21 e2 e1 + A22 e2 e2 + A23 e2 e3
    + A31 e3 e1 + A32 e3 e2 + A33 e3 e3

matrix/column notation :

A = [ e1 e2 e3 ] [ A11 A12 A13 ; A21 A22 A23 ; A31 A32 A33 ] [ e1 ; e2 ; e3 ] = ẽ^T A ẽ

1.5.2 Special tensors

Some second-order tensors are considered special with regard to their results and/or their representation. The dyad is generally considered to be a special second-order tensor. The null or zero tensor projects each vector onto the null vector, which has zero length and undefined direction. The unit tensor projects each vector onto itself. Conjugating (or transposing) a second-order tensor implies conjugating each of its dyads. We could also say that the front- and back-end of the tensor are interchanged. Matrix representation of special tensors, e.g. the null tensor, the unit tensor and the conjugate of a tensor, results in obvious matrices.

dyad : a b
null tensor : O with O · p = 0
unit tensor : I with I · p = p
conjugated tensor : A^c with A^c · p = p · A

null tensor ↔ null matrix : ẽ · O · ẽ^T = [ 0 0 0 ; 0 0 0 ; 0 0 0 ]
unit tensor ↔ unit matrix : ẽ · I · ẽ^T = [ 1 0 0 ; 0 1 0 ; 0 0 1 ]
  I = e1 e1 + e2 e2 + e3 e3 = ẽ^T ẽ
conjugate tensor ↔ transposed matrix : A = ẽ · A · ẽ^T ; A^T = ẽ · A^c · ẽ^T

1.5.3 Manipulations

We already saw that we can conjugate a second-order tensor, which is an obvious manipulation, taking in mind its representation as a sum of dyads. This also leads automatically to multiplication of a tensor with a scalar, summation of tensors, and taking the inner product of two tensors. The result of all these basic manipulations is a new second-order tensor. When we take the double inner product of two second-order tensors, the result is a scalar value, which is easily understood when both tensors are considered as sums of dyads again.

scalar multiplication : B = α A
summation : C = A + B
inner product : C = A · B
double inner product : A : B = A^c : B^c = scalar

NB : A · B ≠ B · A
A² = A · A ; A³ = A · A · A ; etc.

1.5.4 Scalar functions of a tensor

Scalar functions of a tensor exist which play an important role in physics and mechanics. As their name indicates, the functions result in a scalar value. This value is independent of the matrix representation of the tensor. In practice, components w.r.t. a chosen vector basis are used to calculate this value. However, the resulting value is independent of the chosen basis, and therefore the functions are called invariants of the tensor. Besides the Euclidean norm, we introduce three other (fundamental) invariants of the second-order tensor.

1.5.5 Euclidean norm

The first function presented here is the Euclidean norm of a tensor, which can be seen as a kind of weight or length. A vector basis is in fact not needed at all to calculate the Euclidean norm; only the length of a vector has to be measured. The Euclidean norm has some properties that can be proved, which is not done here. Besides the Euclidean norm, other norms can be defined.

m = ||A|| = max_e ||A · e|| with ||e|| = 1

properties :
1. ||A|| ≥ 0
2. ||α A|| = |α| ||A||
3. ||A · B|| ≤ ||A|| ||B||
4. ||A + B|| ≤ ||A|| + ||B||

1.5.6 1st invariant

The first invariant is also called the trace of the tensor. It is a linear function. Calculation of the trace is easily done using the matrix of the tensor w.r.t. an orthonormal vector basis.

J1(A) = tr(A) = { c1 × c2 · (A · c3) + cycl. } / ( c1 × c2 · c3 )

properties :
1. J1(A) = J1(A^c)
2. J1(I) = 3
3. J1(α A) = α J1(A)
4. J1(A + B) = J1(A) + J1(B)
5. J1(A · B) = A : B , so J1(A) = A : I

1.5.7 2nd invariant

The second invariant can be calculated as a function of the trace of the tensor and the trace of the tensor squared. The second invariant is a quadratic function of the tensor.

J2(A) = (1/2) { tr²(A) − tr(A²) } = { c1 · (A · c2) × (A · c3) + cycl. } / ( c1 · c2 × c3 )

properties :
1. J2(A) = J2(A^c)
2. J2(I) = 3
3. J2(α A) = α² J2(A)

1.5.8 3rd invariant

The third invariant is also called the determinant of the tensor. It is easily calculated from the matrix w.r.t. an orthonormal vector basis. The determinant is a third-order function of the tensor. It is an important value when it comes to checking whether a tensor is regular or singular.

J3(A) = det(A) = { (A · c1) × (A · c2) · (A · c3) } / ( c1 × c2 · c3 )

properties :
1. J3(A) = J3(A^c)
2. J3(I) = 1
3. J3(α A) = α³ J3(A)
4. J3(A · B) = J3(A) J3(B)
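A quick numerical sketch of the three invariants (the matrix is an illustrative choice): they are the coefficients of the characteristic polynomial, so every eigenvalue satisfies −λ³ + J1 λ² − J2 λ + J3 = 0, and they do not change under an orthogonal change of basis:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, -1.0],
              [1.0, 0.0, 1.5]])

J1 = np.trace(A)
J2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
J3 = np.linalg.det(A)

# Every eigenvalue is a root of -lam^3 + J1 lam^2 - J2 lam + J3 = 0
lams = np.linalg.eigvals(A)
assert np.allclose(-lams**3 + J1 * lams**2 - J2 * lams + J3, 0.0, atol=1e-9)

# Invariance under an (illustrative) orthogonal change of basis Q
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((3, 3)))
B = Q @ A @ Q.T
assert np.isclose(np.trace(B), J1)
assert np.isclose(np.linalg.det(B), J3)
```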

1.5.9 Invariants w.r.t. an orthonormal basis

From the matrix A of a tensor A w.r.t. an orthonormal basis, the three invariants can be calculated straightforwardly.

A = [ A11 A12 A13 ; A21 A22 A23 ; A31 A32 A33 ]

J1(A) = tr(A) = tr(A) = A11 + A22 + A33
J2(A) = (1/2) { tr²(A) − tr(A²) }
J3(A) = det(A) = det(A)
      = A11 A22 A33 + A12 A23 A31 + A21 A32 A13 − ( A13 A22 A31 + A12 A21 A33 + A23 A32 A11 )

1.5.10 Regular / singular tensor

When a second-order tensor is regular, its determinant is not zero. If the inner product of a regular tensor with a vector results in the null vector, the former vector must itself be the null vector. Considering the matrix of the tensor, this implies that its rows and columns are independent. A second-order tensor is singular when its determinant equals zero. In that case the inner product of the tensor with a vector that is not the null vector may result in the null vector. The rows and columns of the matrix of a singular tensor are dependent.

det(A) ≠ 0 ↔ A regular ↔ [ A · a = 0 → a = 0 ]
det(A) = 0 ↔ A singular ↔ [ ∃ a ≠ 0 : A · a = 0 ]

1.5.11 Eigenvalues and eigenvectors

Taking the inner product of a tensor with one of its eigenvectors results in a vector with the same direction (better : the same working line) as the eigenvector, but not necessarily the same length. It is standard procedure that an eigenvector is taken to be a unit vector. The length of the new vector is then the eigenvalue associated with the eigenvector.

A · n = λ n with n ≠ 0

From this definition we can derive an equation from which the eigenvalues and the eigenvectors can be determined. The coefficient tensor of this equation must be singular, as eigenvectors are never the null vector. Demanding its determinant to be zero results in a third-order equation, the characteristic equation, from which the eigenvalues can be solved.

22 A n = λ n A n λ n = 0 A n λi n = 0 (A λi) n = 0 with n 0 A λi singular det(a λi) =0 det(a λi) =0 characteristic equation characteristic equation : 3 roots : λ,λ 2,λ 3 After determining the eigenvalues, the associated eigenvectors can be determined from the original equation. eigenvector to λ i, i {, 2, 3} : (A λ i I) n i = 0 or (A λ i I)ñ i =0 dependent set of equations only ratio n : n 2 : n 3 can be calculated components n,n 2,n 3 calculation extra equation necessary normalize eigenvectors n i = n 2 + n2 2 + n2 3 = It can be shown that the three eigenvectors are orthonormal when all three eigenvalues have different values. When two eigenvalues are equal, the two associated eigenvectors can be chosen perpendicular to each other, being both already perpendicular to the third eigenvector. With all eigenvalues equal, each set of three orthonormal vectors are principal directions. The tensor is then called isotropic..5.2 Relations between invariants The three principal invariants of a tensor are related through the Cayley-Hamilton theorem. The lemma of Cayley-Hamilton states that every second-order tensor obeys its own characteristic equation. This rule can be used to reduce tensor equations to maximum second-order. The invariants of the inverse of a non-singular tensor are related. Any function of the principal invariants of a tensor is invariant as well. Cayley-Hamilton theorem A 3 J (A)A 2 + J 2 (A)A J 3 (A)I = O relation between invariants of A J (A )= J 2(A) J 3 (A) ; J 2 (A )= J (A) J 3 (A) ; J 3 (A )= J 3 (A).6 Special tensors Physical phenomena and properties are commonly characterized by tensorial variables. In derivations the inverse of a tensor is frequently needed and can be calculated uniquely when the tensor is regular. In continuum mechanics the deviatoric and hydrostatic part of a tensor are often used. Tensors may have specific properties due to the nature of physical phenomena and quantities. 
Many tensors are symmetric, for instance, which leads to special features of their eigenvalues and eigenvectors. Rotation (and rotation rate) is associated with skew symmetric and orthogonal tensors.

    inverse tensor               A⁻¹  ;  A⁻¹·A = I
    deviatoric part of a tensor  A^d = A − ⅓ tr(A) I
    symmetric tensor             A^c = A
    skew symmetric tensor        A^c = −A
    positive definite tensor     a·A·a > 0   ∀ a ≠ 0
    orthogonal tensor            ( A·a )·( A·b ) = a·b   ∀ a , b
    adjugated tensor             ( A·a ) × ( A·b ) = A_a·( a × b )   ∀ a , b

2.6.1 Inverse tensor

The inverse A⁻¹ of a tensor A only exists if A is regular, i.e. if det(A) ≠ 0. Inversion is applied to solve x from the equation A·x = y, giving x = A⁻¹·y. The inverse of a tensor product A·B equals B⁻¹·A⁻¹, so the sequence of the tensors is reversed. The matrix A⁻¹ of the tensor A⁻¹ is the inverse of the matrix A of A. Calculation of A⁻¹ can be done with various algorithms.

    det(A) ≠ 0  →  ∃! A⁻¹  such that  A⁻¹·A = I

    property :
    ( A·B )·a = b  →  a = ( A·B )⁻¹·b
    ( A·B )·a = A·( B·a ) = b  →  B·a = A⁻¹·b  →  a = B⁻¹·A⁻¹·b
    →  ( A·B )⁻¹ = B⁻¹·A⁻¹

    components ( minor(A_ij) = determinant of the sub-matrix of A obtained by deleting row i and column j ) :
    A⁻¹_ji = (1/det(A)) (−1)^{i+j} minor(A_ij)

2.6.2 Deviatoric part of a tensor

Each tensor can be written as the sum of a deviatoric and a hydrostatic part. In mechanics this decomposition is often applied because each part reflects a special aspect of the deformation or stress state.

    A^d = A − ⅓ tr(A) I  ;  ⅓ tr(A) I = A^h = hydrostatic or spherical part
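The component formula for the inverse can be coded directly; the sketch below (NumPy assumed, `inverse_by_minors` is a name chosen here) also checks the reversed-order rule for the inverse of a product:

```python
import numpy as np

def inverse_by_minors(A):
    """Invert a regular 3x3 matrix via cofactors, following
    A^{-1}_{ji} = (-1)^{i+j} minor(A_{ij}) / det(A)."""
    d = np.linalg.det(A)
    assert abs(d) > 1e-12, "tensor must be regular (det != 0)"
    Ainv = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            # sub-matrix of A with row i and column j deleted
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            Ainv[j, i] = (-1)**(i + j) * np.linalg.det(sub) / d
    return Ainv

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])
assert np.allclose(inverse_by_minors(A) @ A, np.eye(3))

# the inverse of a product reverses the order: (A.B)^{-1} = B^{-1}.A^{-1}
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
assert np.allclose(np.linalg.inv(A @ B),
                   inverse_by_minors(B) @ inverse_by_minors(A))
```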

properties

    1. ( A + B )^d = A^d + B^d
    2. tr( A^d ) = 0
    3. eigenvalues ( μi ) and eigenvectors ( mi ) :
       det( A^d − μ I ) = 0  →  det( A − { ⅓ tr(A) + μ } I ) = 0  →  μ = λ − ⅓ tr(A)
       ( A^d − μ I )·m = 0  →  ( A − { ⅓ tr(A) + μ } I )·m = 0  →  ( A − λ I )·m = 0  →  m = n

2.6.3 Symmetric tensor

A second-order tensor is a sum of dyads. When each dyad is written in reversed order, the conjugate tensor A^c results. The tensor is symmetric when each dyad in its sum is symmetric. A very convenient property of a symmetric tensor is that all eigenvalues and associated eigenvectors are real. The eigenvectors are, or can be chosen to be, orthonormal, and can be used as an orthonormal vector base. Writing A in components w.r.t. this basis results in the spectral representation of the tensor. The matrix A is then a diagonal matrix with the eigenvalues on the diagonal. Scalar functions of a tensor A can be calculated using the spectral representation, considering the fact that the eigenvectors are not changed.

    A^c = A

properties

    1. eigenvalues and eigenvectors are real
    2. λi all different  →  ni mutually perpendicular
    3. λi not all different  →  ni chosen mutually perpendicular
    eigenvectors span an orthonormal basis { n1 , n2 , n3 }

spectral representation of A

    A = A·I = A·( n1 n1 + n2 n2 + n3 n3 ) = λ1 n1 n1 + λ2 n2 n2 + λ3 n3 n3

functions

    A⁻¹ = (1/λ1) n1 n1 + (1/λ2) n2 n2 + (1/λ3) n3 n3
    √A = √λ1 n1 n1 + √λ2 n2 n2 + √λ3 n3 n3
    ln A = ln λ1 n1 n1 + ln λ2 n2 n2 + ln λ3 n3 n3
    sin A = sin(λ1) n1 n1 + sin(λ2) n2 n2 + sin(λ3) n3 n3

    J1(A) = tr(A) = λ1 + λ2 + λ3
    J2(A) = ½ { tr²(A) − tr(A·A) } = ½ { ( λ1 + λ2 + λ3 )² − ( λ1² + λ2² + λ3² ) }
    J3(A) = det(A) = λ1 λ2 λ3
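The spectral representation and tensor functions can be tried out numerically; a sketch under the assumption of NumPy (`tensor_func` is a name invented here):

```python
import numpy as np

# a symmetric tensor: real eigenvalues, orthonormal eigenvectors
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 0.0],
              [0.0, 0.0, 9.0]])
lam, N = np.linalg.eigh(A)   # columns of N are the eigenvectors

# spectral representation: A = sum_i lambda_i n_i n_i
A_spec = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
assert np.allclose(A_spec, A)

def tensor_func(A, f):
    """A scalar function of a symmetric tensor: replace each
    eigenvalue by f(lambda_i), keep the eigenvectors."""
    lam, N = np.linalg.eigh(A)
    return sum(f(l) * np.outer(n, n) for l, n in zip(lam, N.T))

sqrtA = tensor_func(A, np.sqrt)
assert np.allclose(sqrtA @ sqrtA, A)   # (sqrt A).(sqrt A) = A
```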

2.6.4 Skew symmetric tensor

The conjugate of a skew symmetric tensor is the negative of the tensor. The double dot product of a skew symmetric and a symmetric tensor is zero. Because the unit tensor is also a symmetric tensor, the trace of a skew symmetric tensor must be zero. A skew symmetric tensor has one unique axial vector.

properties

    1. A : B = tr( A·B ) = tr( A^c·B^c ) = A^c : B^c
       A^c = −A  →  A : B = −A : B^c
       B^c = B   →  A : B = −A : B  →  A : B = 0
    2. B = I  →  tr(A) = A : I = 0
    3. A·q = p  →  q·A·q = q·A^c·q = −q·A·q  →  q·A·q = 0  →  q·p = 0  →  q ⊥ p
       →  ∃! ω such that  A·q = p = ω × q

The components of the axial vector ω associated with the skew symmetric tensor A can be expressed in the components of A. This involves the solution of a system of three equations.

    A·q = ẽ^T [ A11 A12 A13 ; A21 A22 A23 ; A31 A32 A33 ] [ q1 ; q2 ; q3 ]
        = ẽ^T [ A11 q1 + A12 q2 + A13 q3 ; A21 q1 + A22 q2 + A23 q3 ; A31 q1 + A32 q2 + A33 q3 ]

    ω × q = ( ω1 e1 + ω2 e2 + ω3 e3 ) × ( q1 e1 + q2 e2 + q3 e3 )
          = ω1 q2 e3 − ω1 q3 e2 − ω2 q1 e3 + ω2 q3 e1 + ω3 q1 e2 − ω3 q2 e1
          = ẽ^T [ ω2 q3 − ω3 q2 ; ω3 q1 − ω1 q3 ; ω1 q2 − ω2 q1 ]

    →  A = [ 0 −ω3 ω2 ; ω3 0 −ω1 ; −ω2 ω1 0 ]

2.6.5 Positive definite tensor

The diagonal matrix components of a positive definite tensor must all be positive numbers. A positive definite tensor cannot be skew symmetric. When it is symmetric, all eigenvalues must be positive, and the tensor is then automatically regular: its inverse exists.
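The axial vector can be read off directly from the matrix of a skew symmetric tensor; a minimal sketch with NumPy (`axial_vector` is a name chosen here):

```python
import numpy as np

def axial_vector(A):
    """Axial vector w of a skew symmetric tensor, A.q = w x q, read off
    from A = [[0,-w3,w2],[w3,0,-w1],[-w2,w1,0]]."""
    assert np.allclose(A, -A.T)
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

w = np.array([1.0, 2.0, 3.0])
A = np.array([[0.0,  -w[2],  w[1]],
              [w[2],  0.0,  -w[0]],
              [-w[1], w[0],  0.0]])
q = np.array([0.5, -1.0, 2.0])

assert np.allclose(A @ q, np.cross(w, q))   # A.q = w x q
assert np.allclose(axial_vector(A), w)
assert np.isclose(np.trace(A), 0.0)         # trace of a skew tensor is zero
```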

    a·A·a > 0   ∀ a ≠ 0

properties

    1. A cannot be skew symmetric, because :
       a·A·a = a·A^c·a  →  a·( A − A^c )·a = 0
       A skew symm.  →  A^c = −A  →  a·A·a = 0  ∀ a
    2. A = A^c  →  ni·A·ni = λi > 0  →  all eigenvalues positive  →  regular

2.6.6 Orthogonal tensor

When an orthogonal tensor is used to transform a vector, the length of that vector remains the same. The inverse of an orthogonal tensor equals the conjugate of the tensor. This implies that the columns of its matrix are orthonormal, which also applies to the rows. An orthogonal tensor is either a rotation tensor or a mirror tensor. The determinant of A has either the value +1, in which case A is a rotation tensor, or −1, when A is a mirror tensor.

    ( A·a )·( A·b ) = a·b   ∀ a , b

properties

    1. ( A·v )·( A·v ) = v·v  →  || A·v || = || v ||
    2. a·A^c·A·b = a·b  ∀ a , b  →  A^c·A = I  →  A⁻¹ = A^c
    3. det( A·A^c ) = det(A)² = det(I) = 1  →  det(A) = ±1  →  A regular

Rotation of a vector base

A rotation tensor Q can be used to rotate an orthonormal vector basis m̃ to ñ. It can be shown that the matrix Q^(n) of Q w.r.t. ñ is the same as the matrix Q^(m) w.r.t. m̃. The column ñ with the rotated base vectors can be expressed in the column m̃ with the initial base vectors as ñ = Q^T m̃, so using the transpose of the rotation matrix Q.

    n1 = Q·m1  ;  n2 = Q·m2  ;  n3 = Q·m3

    Q^(m) = m̃·Q·m̃^T = ñ·m̃^T  ;  Q^(n) = ñ·Q·ñ^T = ñ·m̃^T
    →  Q^(n) = Q^(m) = Q  ;  ñ = Q^T m̃  ↔  m̃ = Q ñ
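The orthogonality properties and the rotation/mirror distinction can be illustrated numerically; a sketch assuming NumPy, with an example rotation about the 3-axis and a reflection in the 1-2 plane:

```python
import numpy as np

a = np.deg2rad(30.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],   # rotation about the 3-axis
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
M = np.diag([1.0, 1.0, -1.0])                 # mirror w.r.t. the 1-2 plane

v = np.array([1.0, 2.0, 3.0])
for Q in (R, M):
    # orthogonality: (Q.a).(Q.b) = a.b, hence lengths are preserved
    assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))
    # the inverse equals the conjugate (transpose)
    assert np.allclose(Q.T @ Q, np.eye(3))

assert np.isclose(np.linalg.det(R), 1.0)      # rotation tensor: det = +1
assert np.isclose(np.linalg.det(M), -1.0)     # mirror tensor:   det = -1
```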

We consider a rotation of the vector base { e1 , e2 , e3 } to { ε1 , ε2 , ε3 }, which is the result of three subsequent rotations : 1) a rotation about the 1-axis, 2) a rotation about the new 2-axis and 3) a rotation about the new 3-axis. For each individual rotation the rotation matrix can be determined. Writing c^(i) = cos(αi) and s^(i) = sin(αi) :

    ε1^(1) = e1
    ε2^(1) = c^(1) e2 + s^(1) e3
    ε3^(1) = −s^(1) e2 + c^(1) e3

    ε1^(2) = c^(2) ε1^(1) − s^(2) ε3^(1)
    ε2^(2) = ε2^(1)
    ε3^(2) = s^(2) ε1^(1) + c^(2) ε3^(1)

    ε1^(3) = c^(3) ε1^(2) + s^(3) ε2^(2)
    ε2^(3) = −s^(3) ε1^(2) + c^(3) ε2^(2)
    ε3^(3) = ε3^(2)

    Q1 = [ 1 0 0 ; 0 c^(1) −s^(1) ; 0 s^(1) c^(1) ]
    Q2 = [ c^(2) 0 s^(2) ; 0 1 0 ; −s^(2) 0 c^(2) ]
    Q3 = [ c^(3) −s^(3) 0 ; s^(3) c^(3) 0 ; 0 0 1 ]

The total rotation matrix Q is the product of the individual rotation matrices.

    ε̃^(1) = Q1^T ẽ  ;  ε̃^(2) = Q2^T ε̃^(1)  ;  ε̃ = ε̃^(3) = Q3^T ε̃^(2) = Q3^T Q2^T Q1^T ẽ = Q^T ẽ  ↔  ẽ = Q ε̃

    Q = Q1 Q2 Q3 =
    [ c^(2)c^(3)                    −c^(2)s^(3)                     s^(2)
      c^(1)s^(3) + s^(1)s^(2)c^(3)   c^(1)c^(3) − s^(1)s^(2)s^(3)  −s^(1)c^(2)
      s^(1)s^(3) − c^(1)s^(2)c^(3)   s^(1)c^(3) + c^(1)s^(2)s^(3)   c^(1)c^(2) ]

A tensor A with matrix A w.r.t. { e1 , e2 , e3 } has a matrix A* w.r.t. the basis { ε1 , ε2 , ε3 }. The matrix A* can be calculated from A by multiplication with Q, the rotation matrix w.r.t. { e1 , e2 , e3 }. The column à with the 9 components of A can be transformed to Ã* by multiplication with the 9×9 transformation matrix T : Ã* = T Ã. When A is symmetric, the transformation matrix T is 6×6. Note that T is not the representation of a tensor. The matrix T is not orthogonal, but its inverse can be calculated easily by reversing the rotation angles : T⁻¹ = T( −α1 , −α2 , −α3 ).
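The product of the three individual rotation matrices can be checked against the closed-form total matrix; a sketch assuming NumPy (`rotation_matrices` is a name chosen here):

```python
import numpy as np

def rotation_matrices(a1, a2, a3):
    """The three elementary rotation matrices Q1, Q2, Q3 of the notes."""
    c, s = np.cos, np.sin
    Q1 = np.array([[1, 0, 0], [0, c(a1), -s(a1)], [0, s(a1), c(a1)]])
    Q2 = np.array([[c(a2), 0, s(a2)], [0, 1, 0], [-s(a2), 0, c(a2)]])
    Q3 = np.array([[c(a3), -s(a3), 0], [s(a3), c(a3), 0], [0, 0, 1]])
    return Q1, Q2, Q3

a1, a2, a3 = 0.3, -0.5, 1.1
Q1, Q2, Q3 = rotation_matrices(a1, a2, a3)
Q = Q1 @ Q2 @ Q3                          # total rotation matrix

# closed-form total matrix, entry by entry (ci = cos ai, si = sin ai)
c1, s1 = np.cos(a1), np.sin(a1)
c2, s2 = np.cos(a2), np.sin(a2)
c3, s3 = np.cos(a3), np.sin(a3)
Q_ref = np.array([
    [c2*c3,             -c2*s3,             s2],
    [c1*s3 + s1*s2*c3,   c1*c3 - s1*s2*s3, -s1*c2],
    [s1*s3 - c1*s2*c3,   s1*c3 + c1*s2*s3,  c1*c2]])

assert np.allclose(Q, Q_ref)
assert np.allclose(Q.T @ Q, np.eye(3))    # Q is orthogonal
```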

    A = ẽ^T A ẽ = ε̃^T A* ε̃  →  A* = ( ε̃·ẽ^T ) A ( ẽ·ε̃^T ) = Q^T A Q  ;  Ã* = T Ã

2.6.7 Adjugated tensor

The definition of the adjugated tensor resembles that of the orthogonal tensor, only now the scalar product is replaced by a vector product.

    ( A·a ) × ( A·b ) = A_a·( a × b )   ∀ a , b

    property :  A^c·A_a = det(A) I

2.7 Fourth-order tensor

Transformations of second-order tensors are done by means of a fourth-order tensor. A second-order tensor is mapped onto a different second-order tensor by means of the double inner product with a fourth-order tensor. This mapping is linear. A fourth-order tensor can be written as a finite sum of quadrades, being open products of four vectors. When quadrades of three base vectors in three-dimensional space are used, the number of independent terms is 81, which means that the fourth-order tensor has 81 components. In index notation this can be written very compactly. Use of matrix notation requires a 3×3×3×3 matrix.

    ⁴A : B = C

    tensor = linear transformation :  ⁴A : ( α M + β N ) = α ⁴A : M + β ⁴A : N
    representation :  ⁴A = α1 a1 b1 c1 d1 + α2 a2 b2 c2 d2 + α3 a3 b3 c3 d3 + ..
    components :  ⁴A = e_i e_j A_ijkl e_k e_l

2.7.1 Conjugated fourth-order tensor

Different types of conjugate tensors are associated with a fourth-order tensor, and hence different types of symmetries. A left or right symmetric fourth-order tensor has 54 independent components. A tensor which is both left and right symmetric has 36 independent components. A middle symmetric tensor has 45 independent components. A total symmetric tensor is left, right and middle symmetric and has 21 independent components.

    fourth-order tensor :  ⁴A = a b c d
    total conjugate :      ⁴A^c = d c b a
    right conjugate :      ⁴A^rc = a b d c
    left conjugate :       ⁴A^lc = b a c d
    middle conjugate :     ⁴A^mc = a c b d

symmetries

    left :    ⁴A = ⁴A^lc  ;  B : ⁴A = B^c : ⁴A   ∀ B
    right :   ⁴A = ⁴A^rc  ;  ⁴A : B = ⁴A : B^c   ∀ B
    middle :  ⁴A = ⁴A^mc
    total :   ⁴A = ⁴A^c   ;  B : ⁴A : C = C^c : ⁴A : B^c   ∀ B , C

2.7.2 Fourth-order unit tensor

The fourth-order unit tensor maps each second-order tensor onto itself. The symmetric fourth-order unit tensor, which is total symmetric, maps a second-order tensor onto its symmetric part.

    ⁴I : B = B   ∀ B

    components :
    ⁴I = e1 e1 e1 e1 + e2 e1 e1 e2 + e3 e1 e1 e3 + e1 e2 e2 e1 + ..
       = e_i e_j e_j e_i = e_i e_j δ_il δ_jk e_k e_l

    ⁴I is not left or right symmetric :
    ⁴I : B = B ≠ B^c = ⁴I : B^c   ∀ B  ;  B : ⁴I = B ≠ B^c = B^c : ⁴I   ∀ B

    symmetric fourth-order unit tensor :
    ⁴I^s = ½ ( ⁴I + ⁴I^rc ) = ½ e_i e_j ( δ_il δ_jk + δ_ik δ_jl ) e_k e_l

2.7.3 Products

Inner and double inner products of fourth-order tensors with fourth- and second-order tensors result in new fourth-order or second-order tensors. Calculating such products requires that some rules are followed.

    ⁴A·B = ⁴C    →  A_ijkm B_ml = C_ijkl
    ⁴A : B = C   →  A_ijkl B_lk = C_ij    ;  ⁴A : B ≠ B : ⁴A
    ⁴A : ⁴B = ⁴C →  A_ijmn B_nmkl = C_ijkl ;  ⁴A : ⁴B ≠ ⁴B : ⁴A
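Since the conjugates are index permutations of A_ijkl, they are conveniently expressed with `np.transpose`; a sketch under the assumption of NumPy, checking the general identity ⁴A^rc : B = ⁴A : B^c:

```python
import numpy as np

rng = np.random.default_rng(3)
A4 = rng.standard_normal((3, 3, 3, 3))   # components of a 4th-order tensor

# conjugates as index permutations of A_ijkl (a b c d in dyadic form)
A4_c  = np.transpose(A4, (3, 2, 1, 0))   # total conjugate: d c b a
A4_rc = np.transpose(A4, (0, 1, 3, 2))   # right conjugate: a b d c
A4_lc = np.transpose(A4, (1, 0, 2, 3))   # left conjugate:  b a c d

B = rng.standard_normal((3, 3))
# double contraction (4A : B)_ij = A_ijkl B_lk
ddot = lambda T4, M: np.einsum('ijkl,lk->ij', T4, M)

# right conjugation on 4A acts like conjugation on B: 4A^rc : B = 4A : B^c
assert np.allclose(ddot(A4_rc, B), ddot(A4, B.T))
```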

    rules :
    ⁴A : ( B·C ) = ( ⁴A·B ) : C
    ½ ( A·B + B^c·A^c ) = ⁴I^s : ( A·B ) = ( ⁴I^s·A ) : B

2.8 Column and matrix notation

Three-dimensional continuum mechanics is generally formulated initially without using a coordinate system, using vectors and tensors. For solving real problems or for programming, we need components w.r.t. a vector basis. For a vector and a second-order tensor, the components can be stored in a column and a matrix. In this section a more extended column/matrix notation is introduced, which is especially useful when things have to be programmed.

2.8.1 Matrix/column notation for a second-order tensor

The components of a tensor A can be stored in a matrix A. For later purposes it is very convenient to store these components in a column as well. To distinguish this new column from the normal column with the components of a vector, we introduce a double under-wave. In this new column à the components of A are located at specific places. Just like any other column, à can be transposed. Another manipulation is, however, also possible : the transposition of the individual column elements. When this is the case we write Ã_t.

    A = e_i A_ij e_j  ;  A = [ A11 A12 A13 ; A21 A22 A23 ; A31 A32 A33 ]

    Ã^T   = [ A11 A22 A33 A12 A21 A23 A32 A31 A13 ]
    Ã_t^T = [ A11 A22 A33 A21 A12 A32 A23 A13 A31 ]

    conjugate tensor :  A^c  ↔  A_ji  ↔  A^T = [ A11 A21 A31 ; A12 A22 A32 ; A13 A23 A33 ]  ↔  Ã_t

Column notation for A : B

With the column of components of a second-order tensor, it is now very straightforward to write the double product of two tensors as a product of their columns.

    C = A : B = e_i A_ij e_j : e_k B_kl e_l = A_ij δ_jk δ_il B_kl = A_ij B_ji
      = A11 B11 + A12 B21 + A13 B31 + A21 B12 + A22 B22 + A23 B32 + A31 B13 + A32 B23 + A33 B33
      = [ A11 A22 A33 A12 A21 A23 A32 A31 A13 ] [ B11 B22 B33 B21 B12 B32 B23 B13 B31 ]^T
      = Ã^T B̃_t = Ã_t^T B̃

    idem :
    C = A : B      →  C = Ã^T B̃_t = Ã_t^T B̃
    C = A : B^c    →  C = Ã^T B̃   = Ã_t^T B̃_t
    C = A^c : B    →  C = Ã^T B̃   = Ã_t^T B̃_t
    C = A^c : B^c  →  C = Ã^T B̃_t = Ã_t^T B̃

Matrix/column notation C = A·B

The inner product of two second-order tensors A and B is a new second-order tensor C. The components of this new tensor can be stored in a 3×3 matrix C, but of course also in a column C̃. A matrix representation results when the components of A and B can be isolated. We will store the components of B in a column and the components of A in a matrix.

    C = A·B = e_i A_ik e_k · e_l B_lj e_j = e_i A_ik δ_kl B_lj e_j = e_i A_ik B_kj e_j

    C = A B =
    [ A11 B11 + A12 B21 + A13 B31   A11 B12 + A12 B22 + A13 B32   A11 B13 + A12 B23 + A13 B33
      A21 B11 + A22 B21 + A23 B31   A21 B12 + A22 B22 + A23 B32   A21 B13 + A22 B23 + A23 B33
      A31 B11 + A32 B21 + A33 B31   A31 B12 + A32 B22 + A33 B32   A31 B13 + A32 B23 + A33 B33 ]

    C̃ = [ C11 ; C22 ; C33 ; C12 ; C21 ; C23 ; C32 ; C31 ; C13 ]
      = [ A11 B11 + A12 B21 + A13 B31 ;
          A21 B12 + A22 B22 + A23 B32 ;
          A31 B13 + A32 B23 + A33 B33 ;
          A11 B12 + A12 B22 + A13 B32 ;
          A21 B11 + A22 B21 + A23 B31 ;
          A21 B13 + A22 B23 + A23 B33 ;
          A31 B12 + A32 B22 + A33 B32 ;
          A31 B11 + A32 B21 + A33 B31 ;
          A11 B13 + A12 B23 + A13 B33 ]
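The column storage and the double product as a product of columns can be sketched as follows (NumPy assumed; `col` and `col_t` are names invented here for the columns à and Ã_t):

```python
import numpy as np

# storage order 11, 22, 33, 12, 21, 23, 32, 31, 13 (0-based index pairs)
ORDER = [(0,0), (1,1), (2,2), (0,1), (1,0), (1,2), (2,1), (2,0), (0,2)]

def col(A):
    """The 9 components of A in the column order of the notes."""
    return np.array([A[i, j] for i, j in ORDER])

def col_t(A):
    """The same column with the individual elements transposed (A_ij -> A_ji)."""
    return col(A.T)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# double product A : B = A_ij B_ji as a product of columns
assert np.isclose(np.sum(A * B.T), col(A) @ col_t(B))
assert np.isclose(np.sum(A * B.T), col_t(A) @ col(B))
# A : B^c = A_ij B_ij
assert np.isclose(np.sum(A * B), col(A) @ col(B))
```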

The column C̃ can be written as the product of a matrix and a column which contain the components of the tensors A and B, respectively. To distinguish this new 9×9 matrix A from the normal 3×3 matrix A, which also contains the components of A, we have introduced a double underline. The matrix A can of course be transposed, giving A^T. We have to introduce, however, three new manipulations concerning the matrix A. First, it will be obvious that the individual matrix components can be transposed : A_ij → A_ji. When we do this, the result is written as A_t, just as was done with the column C̃. Two further manipulations concern the interchange of columns or rows and are denoted as ( )^c and ( )^r. Not every row and/or column is interchanged, but only : (4 ↔ 5), (6 ↔ 7) and (8 ↔ 9).

    C̃ = A B̃ :

    A = [ A11  0    0    0    A12  0    0    A13  0
          0    A22  0    A21  0    0    A23  0    0
          0    0    A33  0    0    A32  0    0    A31
          0    A12  0    A11  0    0    A13  0    0
          A21  0    0    0    A22  0    0    A23  0
          0    0    A23  0    0    A22  0    0    A21
          0    A32  0    A31  0    0    A33  0    0
          A31  0    0    0    A32  0    0    A33  0
          0    0    A13  0    0    A12  0    0    A11 ]

    B̃ = [ B11 ; B22 ; B33 ; B12 ; B21 ; B23 ; B32 ; B31 ; B13 ]

    idem :
    C = A·B      →  C̃ = A B̃    = A^c B̃_t  ;  C̃_t = A^r B̃ = A^rc B̃_t
    C = A·B^c    →  C̃ = A B̃_t  = A^c B̃
    C = A^c·B    →  C̃ = A_t B̃  = A_t^c B̃_t
    C = A^c·B^c  →  C̃ = A_t B̃_t = A_t^c B̃

2.8.2 Matrix notation of a fourth-order tensor

The components of a fourth-order tensor can be stored in a 9×9 matrix. This matrix has to be defined and subsequently used in the proper way. We denote the 9×9 matrix of ⁴A as A. When the matrix representation of ⁴A is A, it is easily seen that right- and left-conjugation result in matrices with swapped columns and rows, respectively.

    ⁴A = e_i e_j A_ijkl e_k e_l
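The blown-up 9×9 matrix (the role of the notes' m2mm.m) can be built generically; a sketch with NumPy, where `blow_up` is a name chosen here and the index `ORDER` matches the column storage of the notes:

```python
import numpy as np

ORDER = [(0,0), (1,1), (2,2), (0,1), (1,0), (1,2), (2,1), (2,0), (0,2)]

def col(A):
    return np.array([A[i, j] for i, j in ORDER])

def blow_up(A):
    """9x9 'blown-up' matrix of A so that col(A @ B) = blow_up(A) @ col(B),
    i.e. the matrix that represents left inner multiplication by A."""
    M = np.zeros((9, 9))
    for r, (i, j) in enumerate(ORDER):       # row r stores C_ij = A_ik B_kj
        for c, (k, l) in enumerate(ORDER):   # column c multiplies B_kl
            if l == j:
                M[r, c] = A[i, k]
    return M

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert np.allclose(col(A @ B), blow_up(A) @ col(B))
```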

    A = [ A1111 A1122 A1133 A1112 A1121 A1123 A1132 A1131 A1113
          A2211 A2222 A2233 A2212 A2221 A2223 A2232 A2231 A2213
          A3311 A3322 A3333 A3312 A3321 A3323 A3332 A3331 A3313
          A1211 A1222 A1233 A1212 A1221 A1223 A1232 A1231 A1213
          A2111 A2122 A2133 A2112 A2121 A2123 A2132 A2131 A2113
          A2311 A2322 A2333 A2312 A2321 A2323 A2332 A2331 A2313
          A3211 A3222 A3233 A3212 A3221 A3223 A3232 A3231 A3213
          A3111 A3122 A3133 A3112 A3121 A3123 A3132 A3131 A3113
          A1311 A1322 A1333 A1312 A1321 A1323 A1332 A1331 A1313 ]

    conjugation :
    ⁴A^c  ↔  A^T
    ⁴A^rc ↔  A^c
    ⁴A^lc ↔  A^r

Matrix/column notation C = ⁴A : B

The double product of a fourth-order tensor ⁴A and a second-order tensor B is a second-order tensor, here denoted as C. The components of C are stored in a column C̃, those of B in a column B̃. The components of ⁴A are stored in a 9×9 matrix. Using index notation we can easily derive relations between the aforementioned columns.

    C = ⁴A : B
    e_i C_ij e_j = e_i e_j A_ijmn e_m e_n : e_p B_pq e_q = e_i e_j A_ijmn δ_np δ_mq B_pq = e_i e_j A_ijmn B_nm
    →  C̃ = A B̃_t = A^c B̃

    idem C = B : ⁴A :
    e_i C_ij e_j = e_p B_pq e_q : e_m e_n A_mnij e_i e_j = B_pq δ_qm δ_pn A_mnij e_i e_j = B_nm A_mnij e_i e_j
    →  C̃^T = B̃^T A^r = B̃_t^T A

Matrix notation ⁴C = ⁴A·B

The inner product of a fourth-order tensor ⁴A and a second-order tensor B is a new fourth-order tensor, here denoted as ⁴C. The components of all these tensors can be stored in matrices. For a three-dimensional physical problem these would be of size 9×9. Here we only consider the 5×5 matrices which would result for a two-dimensional problem.

    ⁴C = ⁴A·B = e_i e_j A_ijkl e_k e_l · e_p B_pq e_q = e_i e_j A_ijkl e_k δ_lp B_pq e_q
       = e_i e_j A_ijkl B_lq e_k e_q = e_i e_j A_ijkp B_pl e_k e_l

    C = [ A_111p B_p1  A_112p B_p2  A_113p B_p3  A_111p B_p2  A_112p B_p1
          A_221p B_p1  A_222p B_p2  A_223p B_p3  A_221p B_p2  A_222p B_p1
          A_331p B_p1  A_332p B_p2  A_333p B_p3  A_331p B_p2  A_332p B_p1
          A_121p B_p1  A_122p B_p2  A_123p B_p3  A_121p B_p2  A_122p B_p1
          A_211p B_p1  A_212p B_p2  A_213p B_p3  A_211p B_p2  A_212p B_p1 ]

      = [ A1111 A1122 A1133 A1112 A1121     [ B11  0    0    B12  0
          A2211 A2222 A2233 A2212 A2221       0    B22  0    0    B21
          A3311 A3322 A3333 A3312 A3321       0    0    B33  0    0
          A1211 A1222 A1233 A1212 A1221       B21  0    0    B22  0
          A2111 A2122 A2133 A2112 A2121 ]     0    B12  0    0    B11 ]

    →  C = A B^cr = A^c B^c  ;  C^r = A^r B^r = A^cr B

Matrix notation ⁴C = B·⁴A

The inner product of a second-order tensor and a fourth-order tensor can also be written as the product of the appropriate matrices.

    ⁴C = B·⁴A = e_i B_ij e_j · e_p e_q A_pqrs e_r e_s = e_i B_ij δ_jp e_q A_pqrs e_r e_s
       = e_i e_q B_ip A_pqrs e_r e_s = e_i e_j B_ip A_pjkl e_k e_l

    C = [ B_1p A_p111  B_1p A_p122  B_1p A_p133  B_1p A_p112  B_1p A_p121
          B_2p A_p211  B_2p A_p222  B_2p A_p233  B_2p A_p212  B_2p A_p221
          B_3p A_p311  B_3p A_p322  B_3p A_p333  B_3p A_p312  B_3p A_p321
          B_1p A_p211  B_1p A_p222  B_1p A_p233  B_1p A_p212  B_1p A_p221
          B_2p A_p111  B_2p A_p122  B_2p A_p133  B_2p A_p112  B_2p A_p121 ]

      = [ B11  0    0    0    B12     [ A1111 A1122 A1133 A1112 A1121
          0    B22  0    B21  0         A2211 A2222 A2233 A2212 A2221
          0    0    B33  0    0         A3311 A3322 A3333 A3312 A3321
          0    B12  0    B11  0         A1211 A1222 A1233 A1212 A1221
          B21  0    0    0    B22 ]     A2111 A2122 A2133 A2112 A2121 ]

    →  C = B A = B^c A^r  ;  C^r = B^r A^c = B^cr A^cr

Matrix notation ⁴C = ⁴A : ⁴B

The double inner product of two fourth-order tensors ⁴A and ⁴B is again a fourth-order tensor ⁴C. Its matrix C can be derived as a product of the matrices A and B.

    ⁴C = ⁴A : ⁴B = e_i e_j A_ijkl e_k e_l : e_p e_q B_pqrs e_r e_s = e_i e_j A_ijkl δ_lp δ_kq B_pqrs e_r e_s
       = e_i e_j A_ijqp B_pqrs e_r e_s = e_i e_j A_ijqp B_pqkl e_k e_l

    C = [ A_11qp B_pq11  A_11qp B_pq22  A_11qp B_pq33  A_11qp B_pq12  A_11qp B_pq21
          A_22qp B_pq11  A_22qp B_pq22  A_22qp B_pq33  A_22qp B_pq12  A_22qp B_pq21
          A_33qp B_pq11  A_33qp B_pq22  A_33qp B_pq33  A_33qp B_pq12  A_33qp B_pq21
          A_12qp B_pq11  A_12qp B_pq22  A_12qp B_pq33  A_12qp B_pq12  A_12qp B_pq21
          A_21qp B_pq11  A_21qp B_pq22  A_21qp B_pq33  A_21qp B_pq12  A_21qp B_pq21 ]

    →  C = A B^r = A^c B

Matrix notation fourth-order unit tensor

The fourth-order unit tensor ⁴I can be written in matrix notation. Following the definition of the matrix representation of a fourth-order tensor, the matrix I may look a bit strange. The matrix representation of A = ⁴I : A is, however, consistently written as à = I^c Ã. In some situations the symmetric fourth-order unit tensor ⁴I^s is used.

    ⁴I = e_i e_j δ_il δ_jk e_k e_l   →   I entry ( row (ij), column (kl) ) = δ_il δ_jk :

    I = [ 1 0 0 0 0 0 0 0 0
          0 1 0 0 0 0 0 0 0
          0 0 1 0 0 0 0 0 0
          0 0 0 0 1 0 0 0 0
          0 0 0 1 0 0 0 0 0
          0 0 0 0 0 0 1 0 0
          0 0 0 0 0 1 0 0 0
          0 0 0 0 0 0 0 0 1
          0 0 0 0 0 0 0 1 0 ]
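The action of the fourth-order unit tensor and its symmetric part can be checked directly from their components; a sketch assuming NumPy and `np.einsum`:

```python
import numpy as np

# fourth-order unit tensor I_ijkl = d_il d_jk and its symmetric part
d = np.eye(3)
I4  = np.einsum('il,jk->ijkl', d, d)
I4s = 0.5 * (I4 + np.einsum('ik,jl->ijkl', d, d))   # (4I + 4I^rc) / 2

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))

# double contraction (4A : B)_ij = A_ijkl B_lk
ddot = lambda A4, M: np.einsum('ijkl,lk->ij', A4, M)

assert np.allclose(ddot(I4, B), B)                 # 4I   : B = B
assert np.allclose(ddot(I4s, B), 0.5 * (B + B.T))  # 4I^s : B = sym(B)
```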

    ⁴I^s = ½ ( ⁴I + ⁴I^rc )   →   I^s = ½ ( I + I^c ) = ½ [ 2 0 0 0 0
                                                            0 2 0 0 0
                                                            0 0 2 0 0
                                                            0 0 0 1 1
                                                            0 0 0 1 1 ]

Matrix notation II

In some relations the dyadic product I I of the second-order unit tensor with itself appears. Its matrix representation can easily be written as the product of the column Ĩ and its transpose.

    I I = e_i δ_ij e_j e_k δ_kl e_l = e_i e_j δ_ij δ_kl e_k e_l

    II = Ĩ Ĩ^T   with   Ĩ^T = [ 1 1 1 0 0 0 0 0 0 ] :

    II = [ 1 1 1 0 0 0 0 0 0
           1 1 1 0 0 0 0 0 0
           1 1 1 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0 ]

Matrix notation ⁴B = ⁴I·A

The inner product of the fourth-order unit tensor ⁴I and a second-order tensor A can be elaborated using their matrices.

    ⁴B = ⁴I·A = e_i e_j δ_il δ_jk e_k e_l · e_p A_pq e_q = e_i e_j δ_jk A_il e_k e_l

    B entry ( row (ij), column (kl) ) = δ_jk A_il   →   B = A^c :

    B = [ A11  0    0    A12  0   ..
          0    A22  0    0    A21 ..
          0    0    A33  0    0   ..
          0    A12  0    0    A11 ..
          A21  0    0    A22  0   ..
          ..   ..   ..   ..   ..  .. ]

Summary and examples

Below, the tensor/matrix transformation procedure is summarized and illustrated with a few examples. The storage of matrix components in columns or blown-up matrices is easily done with the Matlab .m-files m2cc.m and m2mm.m (see appendix ??).

    x  ↔  x̃ = [ x1 ; x2 ; x3 ]
    A  ↔  A = [ A11 A12 A13 ; A21 A22 A23 ; A31 A32 A33 ]  ;  Ã^T = [ A11 A22 A33 A12 A21 .. ]
       ↔  A (blown-up) = [ A11 0 0 0 A12 .. ; 0 A22 0 A21 0 .. ; 0 0 A33 0 0 .. ;
                           0 A12 0 A11 0 .. ; A21 0 0 0 A22 .. ; .. ]
    ⁴A ↔  A = [ A1111 A1122 A1133 A1112 A1121 .. ; A2211 A2222 A2233 A2212 A2221 .. ;
                A3311 A3322 A3333 A3312 A3321 .. ; A1211 A1222 A1233 A1212 A1221 .. ;
                A2111 A2122 A2133 A2112 A2121 .. ; .. ]
    ⁴I ↔  I = [ 1 0 0 0 0 .. ; 0 1 0 0 0 .. ; 0 0 1 0 0 .. ; 0 0 0 0 1 .. ; 0 0 0 1 0 .. ; .. ]

    Matlab :  A = mA  ;  Ã = ccA = m2cc(mA,9)  ;  A = mmA = m2mm(mA,9)

Now some manipulations are introduced, which are easily done in Matlab.

    A^c  ↔  A^T ; Ã_t ; A_t : transpose all components  →  mmAt = m2mm(mA')
    ⁴A^lc ↔  A^r : interchange rows 4/5, 6/7, 8/9       →  mmAr = mmA([1 2 3 5 4 7 6 9 8],:)
    ⁴A^rc ↔  A^c : interchange columns 4/5, 6/7, 8/9    →  mmAc = mmA(:,[1 2 3 5 4 7 6 9 8])

    c = A : B      →  c = Ã^T B̃_t
    C = A·B        →  C̃ = A B̃
    C = ⁴A : B     →  C̃ = A B̃_t
    C = B : ⁴A     →  C̃^T = B̃_t^T A
    ⁴C = ⁴A·B      →  C = A B^cr
    ⁴C = ⁴A : ⁴B   →  C = A B^r
    ⁴I  ↔  I
    I I ↔  Ĩ Ĩ^T

2.8.3 Gradients

Gradient operators are used to differentiate w.r.t. coordinates and are as such associated with the coordinate system. The base vectors (unit tangent vectors to the coordinate axes) in the Cartesian system, { e_x , e_y , e_z }, are independent of the coordinates { x , y , z }. Two base vectors in the cylindrical coordinate system with coordinates { r , θ , z } are a function of the coordinate θ : { e_r(θ) , e_t(θ) , e_z }. This dependency has of course to be taken into account when writing gradients of vectors and tensors in components w.r.t. the coordinate system, using matrix/column notation. The gradient operators are written in column notation as follows :

    Cartesian :
    ∇ = e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z = [ ∂/∂x  ∂/∂y  ∂/∂z ] [ e_x ; e_y ; e_z ] = ∇̃^T ẽ = ẽ^T ∇̃

    cylindrical :
    ∇ = e_r ∂/∂r + e_t (1/r) ∂/∂θ + e_z ∂/∂z = [ ∂/∂r  (1/r) ∂/∂θ  ∂/∂z ] [ e_r ; e_t ; e_z ] = ∇̃^T ẽ = ẽ^T ∇̃

Gradient of a vector in the Cartesian coordinate system

    ∇a = ẽ^T ∇̃ ( ã^T ẽ ) = ẽ^T ( ∇̃ ã^T ) ẽ = ẽ^T [ a_x,x a_y,x a_z,x ; a_x,y a_y,y a_z,y ; a_x,z a_y,z a_z,z ] ẽ

Gradient of a vector in the cylindrical coordinate system

    ∇a = ẽ^T { ∇̃ ( ã^T ẽ ) } = ẽ^T { ( ∇̃ ã^T ) ẽ + ( ∇̃ ẽ^T ) ã }

    ∇̃ ẽ^T = [ e_r,r  e_t,r  e_z,r ; (1/r) e_r,θ  (1/r) e_t,θ  (1/r) e_z,θ ; e_r,z  e_t,z  e_z,z ]
          = [ 0 0 0 ; (1/r) e_t  −(1/r) e_r  0 ; 0 0 0 ]

    →  ∇a = ẽ^T ( ∇̃ ã^T ) ẽ + ẽ^T [ 0 0 0 ; −(1/r) a_t  (1/r) a_r  0 ; 0 0 0 ] ẽ
          = ẽ^T ( ∇̃ ã^T ) ẽ + ẽ^T h ẽ

    ∇·a = tr( ∇̃ ã^T ) + tr( h )

Divergence of a tensor in the cylindrical coordinate system

    ∇·A = e_i ∇_i · ( e_j A_jk e_k )
        = e_i·( ∇_i e_j ) A_jk e_k + e_i·e_j ( ∇_i A_jk ) e_k + e_i·e_j A_jk ( ∇_i e_k )
        = e_i·( ∇_i e_j ) A_jk e_k + δ_ij ( ∇_i A_jk ) e_k + δ_ij A_jk ( ∇_i e_k )

    with  ∇_i e_j = δ_i2 δ_j1 (1/r) e_t − δ_i2 δ_j2 (1/r) e_r :

    ∇·A = e_t·( δ_j1 (1/r) e_t − δ_j2 (1/r) e_r ) A_jk e_k + ( ∇_j A_jk ) e_k
          + A_jk δ_ij ( δ_i2 δ_k1 (1/r) e_t − δ_i2 δ_k2 (1/r) e_r )
        = (1/r) A_1k e_k + ( ∇_j A_jk ) e_k + (1/r) ( A_21 e_t − A_22 e_r )
        = ( (1/r) A_11 − (1/r) A_22 ) e_1 + ( (1/r) A_12 + (1/r) A_21 ) e_2 + (1/r) A_13 e_3 + ( ∇_j A_jk ) e_k
        = g_k e_k + ( ∇_j A_jk ) e_k = ( ∇̃^T A ) ẽ + g̃^T ẽ
