A Some Basic Rules of Tensor Calculus




The tensor calculus is a powerful tool for the description of the fundamentals of continuum mechanics and for the derivation of the governing equations of applied problems. In general, there are two possibilities for the representation of tensors and tensorial equations: the direct (symbolic) notation and the index (component) notation. The direct notation operates with scalars, vectors and tensors as physical objects defined in the three-dimensional space. A vector (first rank tensor) $\mathbf{a}$ is considered as a directed line segment rather than a triple of numbers (coordinates). A second rank tensor $\mathbf{A}$ is any finite sum of ordered vector pairs $\mathbf{A} = \mathbf{a} \otimes \mathbf{b} + \ldots + \mathbf{c} \otimes \mathbf{d}$. Scalars, vectors and tensors are handled as invariant objects, i.e. independent of the choice of the coordinate system. This is the reason for the use of the direct notation in the modern literature of mechanics and rheology, e.g. [29, 32, 49, 123, 131, 199, 246, 313, 334] among others.

The index notation deals with components or coordinates of vectors and tensors. For a selected basis, e.g. $\mathbf{g}_i$, $i = 1, 2, 3$, one can write

$$\mathbf{a} = a^i \mathbf{g}_i, \qquad \mathbf{A} = \left(a^i b^j + \ldots + c^i d^j\right) \mathbf{g}_i \otimes \mathbf{g}_j$$

Here Einstein's summation convention is used: within one expression a twice repeated index is summed from 1 to 3, e.g.

$$a^k \mathbf{g}_k \equiv \sum_{k=1}^{3} a^k \mathbf{g}_k, \qquad A^{ik} b_k \equiv \sum_{k=1}^{3} A^{ik} b_k$$

In the above examples $k$ is a so-called dummy index. Within the index notation the basic operations with tensors are defined with respect to their coordinates, e.g. the sum of two vectors is computed as the sum of their coordinates, $c^i = a^i + b^i$. The introduced basis remains in the background. It must be remembered that a change of the coordinate system leads to a change of the components of tensors.

In this work we prefer the direct tensor notation over the index one. When solving applied problems the tensor equations can be translated into the language of matrices for a specified coordinate system.
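As a small numerical illustration (not part of the original text), the summation convention can be mimicked with numpy's `einsum`; the basis `g` and all coordinate values below are arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.random((3, 3))   # rows g[k] play the role of basis vectors g_k (sample values)
a = rng.random(3)        # coordinates a^k
A = rng.random((3, 3))   # coordinates A^{ik}
b = rng.random(3)        # coordinates b_k

# a^k g_k: the twice repeated index k is summed from 1 to 3
vec = np.einsum('k,ki->i', a, g)
vec_explicit = sum(a[k] * g[k] for k in range(3))

# A^{ik} b_k: contraction over the dummy index k
Ab = np.einsum('ik,k->i', A, b)
```

Here `einsum` simply spells out the repeated-index contraction that the convention leaves implicit.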
The purpose of this Appendix is to give a brief guide to the notations and rules of the tensor calculus applied throughout this work. For more comprehensive overviews on tensor calculus we recommend [54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented in [40, 111, 340], for example. Section A.1 provides a brief overview of basic algebraic operations with vectors and second rank tensors. Several rules from tensor analysis are summarized in Sect. A.2. Basic sets of invariants for different groups of symmetry transformations are presented in Sect. A.3, where a novel approach to find the functional basis is discussed.

A.1 Basic Operations of Tensor Algebra

A.1.1 Polar and Axial Vectors

A vector in the three-dimensional Euclidean space is defined as a directed line segment with specified magnitude (a scalar) and direction. The magnitude (the length) of a vector $\mathbf{a}$ is denoted by $|\mathbf{a}|$. Two vectors $\mathbf{a}$ and $\mathbf{b}$ are equal if they have the same direction and the same magnitude. The zero vector $\mathbf{0}$ has a magnitude equal to zero.

In mechanics two types of vectors can be introduced. The vectors of the first type are directed line segments; these polar vectors are associated with translations in the three-dimensional space. Examples of polar vectors include the force, the displacement, the velocity, the acceleration, the momentum, etc. The second type is used to characterize spinor motions and related quantities, i.e. the moment, the angular velocity, the angular momentum, etc. Figure A.1a shows the so-called spin vector $\mathbf{a}^{\ast}$, which represents a rotation about the given axis. The direction of rotation is specified by the circular arrow, and the magnitude of rotation is the corresponding length. For the given spin vector $\mathbf{a}^{\ast}$ the directed line segment $\mathbf{a}$ is introduced according to the following rules [334]:

1. the vector $\mathbf{a}$ is placed on the axis of the spin vector,
2. the magnitude of $\mathbf{a}$ is equal to the magnitude of $\mathbf{a}^{\ast}$,

Figure A.1 Axial vectors. a Spin vector, b axial vector in the right-screw oriented reference frame, c axial vector in the left-screw oriented reference frame

3. the vector $\mathbf{a}$ is directed according to the right-handed screw, Fig. A.1b, or the left-handed screw, Fig. A.1c.

The selection of one of the two cases in rule 3 corresponds to the convention of orientation of the reference frame [334] (it should not be confused with the right- or left-handed triples of vectors or coordinate systems). The directed line segment is called a polar vector if it does not change by changing the orientation of the reference frame. The vector is called axial if it changes its sign by changing the orientation of the reference frame. The above definitions are valid for scalars and tensors of any rank, too. Axial vectors (and tensors) are widely used in rigid body dynamics, e.g. [333], in the theories of rods, plates and shells, e.g. [25], in the asymmetric theory of elasticity, e.g. [231], as well as in the dynamics of micro-polar media, e.g. [108]. When dealing with polar and axial vectors it should be remembered that they have different physical meanings. Therefore, a sum of a polar and an axial vector is meaningless.

A.1.2 Operations with Vectors

Addition. For a given pair of vectors $\mathbf{a}$ and $\mathbf{b}$ of the same type the sum $\mathbf{c} = \mathbf{a} + \mathbf{b}$ is defined according to one of the rules in Fig. A.2. The sum has the following properties

$$\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a} \quad \text{(commutativity)},$$
$$(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c}) \quad \text{(associativity)},$$
$$\mathbf{a} + \mathbf{0} = \mathbf{a}$$

Multiplication by a Scalar. For any vector $\mathbf{a}$ and any scalar $\alpha$ a vector $\mathbf{b} = \alpha\mathbf{a}$ is defined in such a way that

$|\mathbf{b}| = |\alpha|\,|\mathbf{a}|$,
for $\alpha > 0$ the direction of $\mathbf{b}$ coincides with that of $\mathbf{a}$,
for $\alpha < 0$ the direction of $\mathbf{b}$ is opposite to that of $\mathbf{a}$.

For $\alpha = 0$ the product yields the zero vector, i.e. $\mathbf{0} = 0\mathbf{a}$. It is easy to verify that

$$\alpha(\mathbf{a} + \mathbf{b}) = \alpha\mathbf{a} + \alpha\mathbf{b}, \qquad (\alpha + \beta)\mathbf{a} = \alpha\mathbf{a} + \beta\mathbf{a}$$

Figure A.2 Addition of two vectors. a Parallelogram rule, b triangle rule

Figure A.3 Scalar product of two vectors. a Angles between two vectors, b unit vector and projection

Scalar (Dot) Product of Two Vectors. For any pair of vectors $\mathbf{a}$ and $\mathbf{b}$ a scalar $\alpha$ is defined by

$$\alpha = \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\varphi,$$

where $\varphi$ is the angle between the vectors $\mathbf{a}$ and $\mathbf{b}$. As $\varphi$ one can use either of the two angles between the vectors ($\varphi$ or $2\pi - \varphi$), Fig. A.3a. The properties of the scalar product are

$$\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} \quad \text{(commutativity)},$$
$$\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c} \quad \text{(distributivity)}$$

Two nonzero vectors are said to be orthogonal if their scalar product is zero. The unit vector directed along the vector $\mathbf{a}$ is defined by (see Fig. A.3b)

$$\mathbf{n}_a = \frac{\mathbf{a}}{|\mathbf{a}|}$$

The projection of the vector $\mathbf{b}$ onto the vector $\mathbf{a}$ is the vector $(\mathbf{b} \cdot \mathbf{n}_a)\mathbf{n}_a$, Fig. A.3b. The length of the projection is $|\mathbf{b}| \cos\varphi$.

Vector (Cross) Product of Two Vectors. For the ordered pair of vectors $\mathbf{a}$ and $\mathbf{b}$ the vector $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ is defined in the two following steps [334]:

the spin vector $\mathbf{c}^{\ast}$ is defined in such a way that
the axis is orthogonal to the plane spanned by $\mathbf{a}$ and $\mathbf{b}$, Fig. A.4a,
the circular arrow shows the direction of the shortest rotation from $\mathbf{a}$ to $\mathbf{b}$, Fig. A.4b,
the length is $|\mathbf{a}|\,|\mathbf{b}| \sin\varphi$, where $\varphi$ is the angle of the shortest rotation from $\mathbf{a}$ to $\mathbf{b}$;
from the resulting spin vector the directed line segment $\mathbf{c}$ is constructed according to one of the rules listed in Subsect. A.1.1.

The properties of the vector product are

$$\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a}, \qquad \mathbf{a} \times (\mathbf{b} + \mathbf{c}) = \mathbf{a} \times \mathbf{b} + \mathbf{a} \times \mathbf{c}$$

The type of the vector $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ can be established for the known types of the vectors $\mathbf{a}$ and $\mathbf{b}$, [334]. If $\mathbf{a}$ and $\mathbf{b}$ are polar vectors, the result of the cross product
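A quick numerical sketch of the scalar and vector products above (not from the original text; the sample vectors and numpy are assumptions):

```python
import numpy as np

# sample polar vectors (assumed values)
a = np.array([3.0, 0.0, 4.0])
b = np.array([1.0, 2.0, 2.0])

# scalar product and the angle between a and b
cos_phi = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# unit vector along a and the projection of b onto a: (b . n_a) n_a
n_a = a / np.linalg.norm(a)
proj = (b @ n_a) * n_a

# cross product: orthogonal to both factors, changes sign on exchange
c = np.cross(a, b)
```

The length of `proj` equals $|\mathbf{b}|\cos\varphi$, and $|\mathbf{c}| = |\mathbf{a}||\mathbf{b}|\sin\varphi$, as stated above.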

will be an axial vector. An example is the moment of momentum for a mass point $m$ defined by $\mathbf{r} \times (m\mathbf{v})$, where $\mathbf{r}$ is the position of the mass point and $\mathbf{v}$ is its velocity. The next example is the formula for the distribution of velocities in a rigid body, $\mathbf{v} = \boldsymbol{\omega} \times \mathbf{r}$. Here the cross product of the axial vector $\boldsymbol{\omega}$ (angular velocity) with the polar vector $\mathbf{r}$ (position vector) results in the polar vector $\mathbf{v}$.

Figure A.4 Vector product of two vectors. a Plane spanned by two vectors, b spin vector, c axial vector in the right-screw oriented reference frame

The mixed product of three vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ is defined by $(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}$. The result is a scalar. For the mixed product the following identities are valid

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}) \qquad \text{(A.1.1)}$$

If the cross product is applied twice, the first operation must be set in parentheses, e.g. $\mathbf{a} \times (\mathbf{b} \times \mathbf{c})$. The result of this operation is a vector. The following relation can be applied

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}) \qquad \text{(A.1.2)}$$

By use of (A.1.1) and (A.1.2) one can calculate

$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = \mathbf{a} \cdot [\mathbf{b} \times (\mathbf{c} \times \mathbf{d})] = \mathbf{a} \cdot (\mathbf{c}\, \mathbf{b} \cdot \mathbf{d} - \mathbf{d}\, \mathbf{b} \cdot \mathbf{c}) = \mathbf{a} \cdot \mathbf{c}\; \mathbf{b} \cdot \mathbf{d} - \mathbf{a} \cdot \mathbf{d}\; \mathbf{b} \cdot \mathbf{c} \qquad \text{(A.1.3)}$$

A.1.3 Bases

Any triple of linearly independent vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ is called a basis. A triple of vectors $\mathbf{e}_i$ is linearly independent if and only if $\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3) \neq 0$. For a given basis $\mathbf{e}_i$ any vector $\mathbf{a}$ can be represented as follows

$$\mathbf{a} = a^1 \mathbf{e}_1 + a^2 \mathbf{e}_2 + a^3 \mathbf{e}_3 \equiv a^i \mathbf{e}_i$$

The numbers $a^i$ are called the coordinates of the vector $\mathbf{a}$ for the basis $\mathbf{e}_i$. In order to compute the coordinates, the dual (reciprocal) basis $\mathbf{e}^k$ is introduced in such a way that

$$\mathbf{e}^k \cdot \mathbf{e}_i = \delta^k_i = \begin{cases} 1, & k = i, \\ 0, & k \neq i \end{cases}$$
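The identities (A.1.1)–(A.1.3) stated above lend themselves to a quick numerical check; a sketch with numpy and arbitrary sample vectors (both are assumptions, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.random((4, 3))   # four arbitrary sample vectors

# (A.1.1): cyclic property of the mixed product
m1 = a @ np.cross(b, c)
m2 = b @ np.cross(c, a)
m3 = c @ np.cross(a, b)

# (A.1.2): a x (b x c) = b (a . c) - c (a . b)
lhs = np.cross(a, np.cross(b, c))
rhs = b * (a @ c) - c * (a @ b)

# (A.1.3): (a x b) . (c x d) = (a . c)(b . d) - (a . d)(b . c)
lhs2 = np.cross(a, b) @ np.cross(c, d)
rhs2 = (a @ c) * (b @ d) - (a @ d) * (b @ c)
```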

Here $\delta^k_i$ is the Kronecker symbol. The coordinates $a^i$ can then be found by

$$\mathbf{e}^i \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{e}^i = a^m \mathbf{e}_m \cdot \mathbf{e}^i = a^m \delta^i_m = a^i$$

For the selected basis $\mathbf{e}_i$ the dual basis can be found from

$$\mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{(\mathbf{e}_1 \times \mathbf{e}_2) \cdot \mathbf{e}_3}, \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{(\mathbf{e}_1 \times \mathbf{e}_2) \cdot \mathbf{e}_3}, \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{(\mathbf{e}_1 \times \mathbf{e}_2) \cdot \mathbf{e}_3} \qquad \text{(A.1.4)}$$

By use of the dual basis a vector $\mathbf{a}$ can be represented as follows

$$\mathbf{a} = a_1 \mathbf{e}^1 + a_2 \mathbf{e}^2 + a_3 \mathbf{e}^3 \equiv a_i \mathbf{e}^i, \qquad a_m = \mathbf{a} \cdot \mathbf{e}_m, \qquad a^m \neq a_m \text{ in general}$$

In the special case of orthonormal vectors $\mathbf{e}_i$, i.e. $|\mathbf{e}_i| = 1$ and $\mathbf{e}_i \cdot \mathbf{e}_k = 0$ for $i \neq k$, it follows from (A.1.4) that $\mathbf{e}^k = \mathbf{e}_k$ and consequently $a^k = a_k$.

A.1.4 Operations with Second Rank Tensors

A second rank tensor is a finite sum of ordered vector pairs $\mathbf{A} = \mathbf{a} \otimes \mathbf{b} + \ldots + \mathbf{c} \otimes \mathbf{d}$ [334]. One ordered pair of vectors is called a dyad, and the symbol $\otimes$ denotes the dyadic (tensor) product of two vectors. A single dyad or a sum of two dyads are special cases of the second rank tensor. Any finite sum of more than three dyads can be reduced to a sum of three dyads. For example, let

$$\mathbf{A} = \sum_{i=1}^{n} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}$$

be a second rank tensor. Introducing a basis $\mathbf{e}_k$, the vectors $\mathbf{a}_{(i)}$ can be represented by $\mathbf{a}_{(i)} = a^k_{(i)} \mathbf{e}_k$, where $a^k_{(i)}$ are the coordinates of the vectors $\mathbf{a}_{(i)}$. Now we may write

$$\mathbf{A} = \sum_{i=1}^{n} a^k_{(i)} \mathbf{e}_k \otimes \mathbf{b}_{(i)} = \mathbf{e}_k \otimes \sum_{i=1}^{n} a^k_{(i)} \mathbf{b}_{(i)} = \mathbf{e}_k \otimes \mathbf{d}^k, \qquad \mathbf{d}^k \equiv \sum_{i=1}^{n} a^k_{(i)} \mathbf{b}_{(i)}$$

Addition. The sum of two tensors is defined as the sum of the corresponding dyads. The sum has the properties of associativity and commutativity. In addition, for a dyad the following operations are introduced

$$\mathbf{a} \otimes (\mathbf{b} + \mathbf{c}) = \mathbf{a} \otimes \mathbf{b} + \mathbf{a} \otimes \mathbf{c}, \qquad (\mathbf{a} + \mathbf{b}) \otimes \mathbf{c} = \mathbf{a} \otimes \mathbf{c} + \mathbf{b} \otimes \mathbf{c}$$

Multiplication by a Scalar. This operation is introduced first for one dyad. For any scalar $\alpha$ and any dyad $\mathbf{a} \otimes \mathbf{b}$

$$\alpha(\mathbf{a} \otimes \mathbf{b}) = (\alpha\mathbf{a}) \otimes \mathbf{b} = \mathbf{a} \otimes (\alpha\mathbf{b}), \qquad (\alpha + \beta)\,\mathbf{a} \otimes \mathbf{b} = \alpha\,\mathbf{a} \otimes \mathbf{b} + \beta\,\mathbf{a} \otimes \mathbf{b} \qquad \text{(A.1.5)}$$

By setting $\alpha = 0$ in the first equation of (A.1.5) the zero dyad can be defined, i.e. $0(\mathbf{a} \otimes \mathbf{b}) = \mathbf{0} \otimes \mathbf{b} = \mathbf{a} \otimes \mathbf{0}$. The above operations can be generalized for any finite sum of dyads, i.e. for second rank tensors.
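The dual-basis construction (A.1.4) can be checked numerically; a sketch with numpy, using an assumed (non-orthogonal) sample basis:

```python
import numpy as np

# an assumed, deliberately non-orthogonal basis: rows are e_1, e_2, e_3
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
vol = np.cross(e[0], e[1]) @ e[2]   # (e_1 x e_2) . e_3, nonzero for a basis

# dual (reciprocal) basis from Eq. (A.1.4): rows are e^1, e^2, e^3
e_dual = np.array([np.cross(e[1], e[2]),
                   np.cross(e[2], e[0]),
                   np.cross(e[0], e[1])]) / vol

# duality: e^k . e_i = delta^k_i
delta = e_dual @ e.T

# coordinates of an arbitrary vector: a^i = a . e^i, and a = a^i e_i
rng = np.random.default_rng(2)
a = rng.random(3)
coords = e_dual @ a
recon = coords @ e
```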

Inner Dot Product. For any two second rank tensors $\mathbf{A}$ and $\mathbf{B}$ the inner dot product is specified by $\mathbf{A} \cdot \mathbf{B}$. The rule and the result of this operation can be explained in the special case of two dyads, i.e. by setting $\mathbf{A} = \mathbf{a} \otimes \mathbf{b}$ and $\mathbf{B} = \mathbf{c} \otimes \mathbf{d}$

$$\mathbf{A} \cdot \mathbf{B} = \mathbf{a} \otimes \mathbf{b} \cdot \mathbf{c} \otimes \mathbf{d} = (\mathbf{b} \cdot \mathbf{c})\, \mathbf{a} \otimes \mathbf{d} = \alpha\, \mathbf{a} \otimes \mathbf{d}, \qquad \alpha \equiv \mathbf{b} \cdot \mathbf{c}$$

The result of this operation is a second rank tensor. Note that in general $\mathbf{A} \cdot \mathbf{B} \neq \mathbf{B} \cdot \mathbf{A}$, which can again be verified for two dyads. The operation can be generalized for two second rank tensors as follows

$$\mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)} \cdot \sum_{k=1}^{3} \mathbf{c}_{(k)} \otimes \mathbf{d}_{(k)} = \sum_{i=1}^{3}\sum_{k=1}^{3} (\mathbf{b}_{(i)} \cdot \mathbf{c}_{(k)})\, \mathbf{a}_{(i)} \otimes \mathbf{d}_{(k)}$$

Transpose of a Second Rank Tensor. The transpose of a second rank tensor $\mathbf{A}$ is constructed by the following rule

$$\mathbf{A}^T = \left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right)^T = \sum_{i=1}^{3} \mathbf{b}_{(i)} \otimes \mathbf{a}_{(i)}$$

Double Inner Dot Product. For any two second rank tensors $\mathbf{A}$ and $\mathbf{B}$ the double inner dot product is specified by $\mathbf{A} \cdot\cdot\, \mathbf{B}$. The result of this operation is a scalar. This operation can be explained for two dyads as follows

$$\mathbf{A} \cdot\cdot\, \mathbf{B} = \mathbf{a} \otimes \mathbf{b} \cdot\cdot\, \mathbf{c} \otimes \mathbf{d} = (\mathbf{b} \cdot \mathbf{c})(\mathbf{a} \cdot \mathbf{d})$$

By analogy to the inner dot product one can generalize this operation for two second rank tensors. It can be verified that $\mathbf{A} \cdot\cdot\, \mathbf{B} = \mathbf{B} \cdot\cdot\, \mathbf{A}$ for any second rank tensors $\mathbf{A}$ and $\mathbf{B}$. For a second rank tensor $\mathbf{A}$ and a dyad $\mathbf{a} \otimes \mathbf{b}$

$$\mathbf{A} \cdot\cdot\, \mathbf{a} \otimes \mathbf{b} = \mathbf{b} \cdot \mathbf{A} \cdot \mathbf{a} \qquad \text{(A.1.6)}$$

A scalar product of two second rank tensors $\mathbf{A}$ and $\mathbf{B}$ is defined by

$$\alpha = \mathbf{A} \cdot\cdot\, \mathbf{B}^T$$

One can verify that

$$\mathbf{A} \cdot\cdot\, \mathbf{B}^T = \mathbf{B}^T \cdot\cdot\, \mathbf{A} = \mathbf{B} \cdot\cdot\, \mathbf{A}^T$$

Dot Products of a Second Rank Tensor and a Vector. The right dot product of a second rank tensor $\mathbf{A}$ and a vector $\mathbf{c}$ is defined by

$$\mathbf{A} \cdot \mathbf{c} = \left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right) \cdot \mathbf{c} = \sum_{i=1}^{3} (\mathbf{b}_{(i)} \cdot \mathbf{c})\, \mathbf{a}_{(i)}$$

For a single dyad this operation is

$$\mathbf{a} \otimes \mathbf{b} \cdot \mathbf{c} = \mathbf{a}(\mathbf{b} \cdot \mathbf{c})$$
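In a Cartesian coordinate representation a dyad becomes an outer product of coordinate columns, the inner dot product becomes matrix multiplication, and the double inner dot product (in the convention above) becomes the trace of the matrix product. A sketch, with numpy and sample vectors assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c, d = rng.random((4, 3))   # sample vectors

# dyads a (x) b and c (x) d as 3x3 coordinate matrices
A = np.outer(a, b)
B = np.outer(c, d)

# inner dot product of two dyads: (a x b) . (c x d) = (b . c) a (x) d
AB = A @ B
AB_rule = (b @ c) * np.outer(a, d)

# in general A . B differs from B . A
BA = B @ A

# double inner dot product: A .. B = (b . c)(a . d) = tr(A . B)
ddot = np.einsum('ij,ji->', A, B)
ddot_rule = (b @ c) * (a @ d)
```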

The left dot product is defined by

$$\mathbf{c} \cdot \mathbf{A} = \mathbf{c} \cdot \left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right) = \sum_{i=1}^{3} (\mathbf{c} \cdot \mathbf{a}_{(i)})\, \mathbf{b}_{(i)}$$

The results of these operations are vectors. One can verify that in general

$$\mathbf{A} \cdot \mathbf{c} \neq \mathbf{c} \cdot \mathbf{A}, \qquad \mathbf{A} \cdot \mathbf{c} = \mathbf{c} \cdot \mathbf{A}^T$$

Cross Products of a Second Rank Tensor and a Vector. The right cross product of a second rank tensor $\mathbf{A}$ and a vector $\mathbf{c}$ is defined by

$$\mathbf{A} \times \mathbf{c} = \left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right) \times \mathbf{c} = \sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes (\mathbf{b}_{(i)} \times \mathbf{c})$$

The left cross product is defined by

$$\mathbf{c} \times \mathbf{A} = \mathbf{c} \times \left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right) = \sum_{i=1}^{3} (\mathbf{c} \times \mathbf{a}_{(i)}) \otimes \mathbf{b}_{(i)}$$

The results of these operations are second rank tensors. It can be shown that

$$\mathbf{A} \times \mathbf{c} = -[\mathbf{c} \times \mathbf{A}^T]^T$$

Trace. The trace of a second rank tensor is defined by

$$\operatorname{tr} \mathbf{A} = \operatorname{tr}\left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right) = \sum_{i=1}^{3} \mathbf{a}_{(i)} \cdot \mathbf{b}_{(i)}$$

By taking the trace of a second rank tensor the dyadic product is replaced by the dot product. It can be shown that

$$\operatorname{tr} \mathbf{A} = \operatorname{tr} \mathbf{A}^T, \qquad \operatorname{tr}(\mathbf{A} \cdot \mathbf{B}) = \operatorname{tr}(\mathbf{B} \cdot \mathbf{A}) = \operatorname{tr}(\mathbf{A}^T \cdot \mathbf{B}^T) = \mathbf{A} \cdot\cdot\, \mathbf{B}$$

Symmetric Tensors. A second rank tensor is said to be symmetric if it satisfies the following equality

$$\mathbf{A} = \mathbf{A}^T$$

An alternative definition can be given as follows: a second rank tensor is said to be symmetric if for any vector $\mathbf{c} \neq \mathbf{0}$ the following equality is valid

$$\mathbf{c} \cdot \mathbf{A} = \mathbf{A} \cdot \mathbf{c}$$

An important example of a symmetric tensor is the unit or identity tensor $\mathbf{I}$, which is defined in such a way that for any vector $\mathbf{c}$

$$\mathbf{c} \cdot \mathbf{I} = \mathbf{I} \cdot \mathbf{c} = \mathbf{c}$$
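In Cartesian coordinates the right and left dot products and the trace identities above reduce to familiar matrix statements; a short numerical sketch (numpy and sample data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((3, 3))   # coordinates of a (non-symmetric) tensor
B = rng.random((3, 3))
c = rng.random(3)

# right dot product A . c and the identity A . c = c . A^T
right = A @ c
left_of_transpose = c @ A.T

# trace identities
t1 = np.trace(A)
t2 = np.trace(A.T)
t3 = np.trace(A @ B)
t4 = np.trace(B @ A)
t5 = np.trace(A.T @ B.T)
```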

The representations of the identity tensor are

$$\mathbf{I} = \mathbf{e}_k \otimes \mathbf{e}^k = \mathbf{e}^k \otimes \mathbf{e}_k$$

for any basis $\mathbf{e}_k$ and dual basis $\mathbf{e}^k$, $\mathbf{e}^k \cdot \mathbf{e}_m = \delta^k_m$. For three orthonormal vectors $\mathbf{m}$, $\mathbf{n}$ and $\mathbf{p}$

$$\mathbf{I} = \mathbf{n} \otimes \mathbf{n} + \mathbf{m} \otimes \mathbf{m} + \mathbf{p} \otimes \mathbf{p}$$

A symmetric second rank tensor $\mathbf{P}$ satisfying the condition $\mathbf{P} \cdot \mathbf{P} = \mathbf{P}$ is called a projector. Examples of projectors are

$$\mathbf{m} \otimes \mathbf{m}, \qquad \mathbf{n} \otimes \mathbf{n} + \mathbf{p} \otimes \mathbf{p} = \mathbf{I} - \mathbf{m} \otimes \mathbf{m},$$

where $\mathbf{m}$, $\mathbf{n}$ and $\mathbf{p}$ are orthonormal vectors. The result of the dot product of the tensor $\mathbf{m} \otimes \mathbf{m}$ with any vector $\mathbf{a}$ is the projection of the vector $\mathbf{a}$ onto the line spanned by the vector $\mathbf{m}$, i.e. $\mathbf{m} \otimes \mathbf{m} \cdot \mathbf{a} = (\mathbf{a} \cdot \mathbf{m})\mathbf{m}$. The result of $(\mathbf{n} \otimes \mathbf{n} + \mathbf{p} \otimes \mathbf{p}) \cdot \mathbf{a}$ is the projection of the vector $\mathbf{a}$ onto the plane spanned by the vectors $\mathbf{n}$ and $\mathbf{p}$.

Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric if it satisfies the following equality

$$\mathbf{A} = -\mathbf{A}^T$$

or if for any vector $\mathbf{c}$

$$\mathbf{c} \cdot \mathbf{A} = -\mathbf{A} \cdot \mathbf{c}$$

Any skew-symmetric tensor $\mathbf{A}$ can be represented by

$$\mathbf{A} = \mathbf{a} \times \mathbf{I} = \mathbf{I} \times \mathbf{a}$$

The vector $\mathbf{a}$ is called the associated vector. Any second rank tensor can be uniquely decomposed into a symmetric and a skew-symmetric part

$$\mathbf{A} = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) + \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right) = \mathbf{A}_1 + \mathbf{A}_2,$$
$$\mathbf{A}_1 = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right), \quad \mathbf{A}_1 = \mathbf{A}_1^T, \qquad \mathbf{A}_2 = \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right), \quad \mathbf{A}_2 = -\mathbf{A}_2^T$$

Vector Invariant. The vector invariant, or Gibbsian cross, of a second rank tensor $\mathbf{A}$ is defined by

$$\mathbf{A}_\times = \left(\sum_{i=1}^{3} \mathbf{a}_{(i)} \otimes \mathbf{b}_{(i)}\right)_\times = \sum_{i=1}^{3} \mathbf{a}_{(i)} \times \mathbf{b}_{(i)}$$

The result of this operation is a vector. The vector invariant of a symmetric tensor is the zero vector. The following identities can be verified

$$(\mathbf{a} \times \mathbf{I})_\times = -2\mathbf{a}, \qquad \mathbf{a} \times \mathbf{I} \times \mathbf{b} = \mathbf{b} \otimes \mathbf{a} - (\mathbf{a} \cdot \mathbf{b})\mathbf{I}$$
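The skew-symmetric tensor $\mathbf{a} \times \mathbf{I}$, its associated vector, and the vector invariant can be checked numerically; a sketch in Cartesian coordinates (numpy and the sample vectors are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
a = rng.random(3)
c = rng.random(3)

def skew(v):
    """Coordinate matrix of the skew tensor v x I, so that skew(v) @ x = v x x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

W = skew(a)   # skew-symmetric tensor with associated vector a

def vector_invariant(T):
    """Gibbsian cross T_x of a tensor given by Cartesian coordinates T[i, j]."""
    # T = T_ij e_i (x) e_j  =>  T_x = T_ij e_i x e_j
    basis = np.eye(3)
    return sum(T[i, j] * np.cross(basis[i], basis[j])
               for i in range(3) for j in range(3))

S = W @ W   # a symmetric tensor, since (W.W)^T = W^T.W^T = W.W
```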

Linear Transformations of Vectors. A vector-valued function of a vector argument $\mathbf{f}(\mathbf{a})$ is said to be linear if

$$\mathbf{f}(\alpha_1\mathbf{a}_1 + \alpha_2\mathbf{a}_2) = \alpha_1\mathbf{f}(\mathbf{a}_1) + \alpha_2\mathbf{f}(\mathbf{a}_2)$$

for any two vectors $\mathbf{a}_1$ and $\mathbf{a}_2$ and any two scalars $\alpha_1$ and $\alpha_2$. It can be shown that any linear vector-valued function can be represented by $\mathbf{f}(\mathbf{a}) = \mathbf{A} \cdot \mathbf{a}$, where $\mathbf{A}$ is a second rank tensor. In many textbooks, e.g. [32, 293], a second rank tensor $\mathbf{A}$ is defined as a linear transformation of the vector space into itself.

Determinant and Inverse of a Second Rank Tensor. Let $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ be arbitrary linearly independent vectors. The determinant of a second rank tensor $\mathbf{A}$ is defined by

$$\det \mathbf{A} = \frac{(\mathbf{A} \cdot \mathbf{a}) \cdot [(\mathbf{A} \cdot \mathbf{b}) \times (\mathbf{A} \cdot \mathbf{c})]}{\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}$$

The following identities can be verified

$$\det(\mathbf{A}^T) = \det \mathbf{A}, \qquad \det(\mathbf{A} \cdot \mathbf{B}) = \det \mathbf{A}\, \det \mathbf{B}$$

The inverse of a second rank tensor, $\mathbf{A}^{-1}$, is introduced as the solution of the following equation

$$\mathbf{A}^{-1} \cdot \mathbf{A} = \mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{I}$$

$\mathbf{A}$ is invertible if and only if $\det \mathbf{A} \neq 0$. A tensor $\mathbf{A}$ with $\det \mathbf{A} = 0$ is called singular. Examples of singular tensors are projectors.

Cayley-Hamilton Theorem. Any second rank tensor satisfies the following equation

$$\mathbf{A}^3 - J_1(\mathbf{A})\mathbf{A}^2 + J_2(\mathbf{A})\mathbf{A} - J_3(\mathbf{A})\mathbf{I} = \mathbf{0}, \qquad \text{(A.1.7)}$$

where $\mathbf{A}^2 = \mathbf{A} \cdot \mathbf{A}$, $\mathbf{A}^3 = \mathbf{A} \cdot \mathbf{A} \cdot \mathbf{A}$ and

$$J_1(\mathbf{A}) = \operatorname{tr} \mathbf{A}, \qquad J_2(\mathbf{A}) = \frac{1}{2}\left[(\operatorname{tr} \mathbf{A})^2 - \operatorname{tr} \mathbf{A}^2\right],$$
$$J_3(\mathbf{A}) = \det \mathbf{A} = \frac{1}{6}(\operatorname{tr} \mathbf{A})^3 - \frac{1}{2}\operatorname{tr} \mathbf{A} \operatorname{tr} \mathbf{A}^2 + \frac{1}{3}\operatorname{tr} \mathbf{A}^3 \qquad \text{(A.1.8)}$$

The scalar-valued functions $J_i(\mathbf{A})$ are called the principal invariants of the tensor $\mathbf{A}$.

Coordinates of Second Rank Tensors. Let $\mathbf{e}_i$ be a basis and $\mathbf{e}^k$ the dual basis. Any two vectors $\mathbf{a}$ and $\mathbf{b}$ can be represented as follows

$$\mathbf{a} = a^i \mathbf{e}_i = a_j \mathbf{e}^j, \qquad \mathbf{b} = b^l \mathbf{e}_l = b_m \mathbf{e}^m$$

A dyad $\mathbf{a} \otimes \mathbf{b}$ has the following representations

$$\mathbf{a} \otimes \mathbf{b} = a^i b^j\, \mathbf{e}_i \otimes \mathbf{e}_j = a_i b_j\, \mathbf{e}^i \otimes \mathbf{e}^j = a^i b_j\, \mathbf{e}_i \otimes \mathbf{e}^j = a_i b^j\, \mathbf{e}^i \otimes \mathbf{e}_j$$

For the representation of a second rank tensor $\mathbf{A}$ one of the following four bases can be used

$$\mathbf{e}_i \otimes \mathbf{e}_j, \qquad \mathbf{e}^i \otimes \mathbf{e}^j, \qquad \mathbf{e}_i \otimes \mathbf{e}^j, \qquad \mathbf{e}^i \otimes \mathbf{e}_j$$

With these bases one can write
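The Cayley-Hamilton theorem (A.1.7) with the principal invariants (A.1.8) is easy to verify numerically; a sketch with numpy and an arbitrary sample tensor (both assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((3, 3))   # Cartesian coordinates of a sample tensor
I = np.eye(3)

# principal invariants, Eq. (A.1.8)
J1 = np.trace(A)
J2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
J3 = (np.trace(A)**3 / 6.0
      - 0.5 * np.trace(A) * np.trace(A @ A)
      + np.trace(A @ A @ A) / 3.0)

# Cayley-Hamilton, Eq. (A.1.7): A^3 - J1 A^2 + J2 A - J3 I = 0
CH = A @ A @ A - J1 * (A @ A) + J2 * A - J3 * I
```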

$$\mathbf{A} = A^{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j = A_{ij}\, \mathbf{e}^i \otimes \mathbf{e}^j = A^{i}_{\;j}\, \mathbf{e}_i \otimes \mathbf{e}^j = A_{i}^{\;j}\, \mathbf{e}^i \otimes \mathbf{e}_j$$

For a selected basis the coordinates of a second rank tensor can be computed as follows

$$A^{ij} = \mathbf{e}^i \cdot \mathbf{A} \cdot \mathbf{e}^j, \qquad A_{ij} = \mathbf{e}_i \cdot \mathbf{A} \cdot \mathbf{e}_j, \qquad A^{i}_{\;j} = \mathbf{e}^i \cdot \mathbf{A} \cdot \mathbf{e}_j, \qquad A_{i}^{\;j} = \mathbf{e}_i \cdot \mathbf{A} \cdot \mathbf{e}^j$$

Principal Values and Directions of Symmetric Second Rank Tensors. Consider the dot product of a second rank tensor $\mathbf{A}$ and a unit vector $\mathbf{n}$. The resulting vector $\mathbf{a} = \mathbf{A} \cdot \mathbf{n}$ differs in general from $\mathbf{n}$ both in length and in direction. However, one can find those unit vectors $\mathbf{n}$ for which $\mathbf{A} \cdot \mathbf{n}$ is collinear with $\mathbf{n}$, i.e. only the length of $\mathbf{n}$ is changed. Such vectors can be found from the equation

$$\mathbf{A} \cdot \mathbf{n} = \lambda\mathbf{n} \quad \text{or} \quad (\mathbf{A} - \lambda\mathbf{I}) \cdot \mathbf{n} = \mathbf{0} \qquad \text{(A.1.9)}$$

The unit vector $\mathbf{n}$ is called a principal vector and the scalar $\lambda$ a principal value of the tensor $\mathbf{A}$. Let $\mathbf{A}$ be a symmetric tensor. In this case the principal values are real numbers and there exist at least three mutually orthogonal principal vectors. The principal values can be found as the roots of the characteristic polynomial

$$\det(\mathbf{A} - \lambda\mathbf{I}) = -\lambda^3 + J_1(\mathbf{A})\lambda^2 - J_2(\mathbf{A})\lambda + J_3(\mathbf{A}) = 0$$

The principal values are denoted by $\lambda_I$, $\lambda_{II}$, $\lambda_{III}$. For known principal values and principal directions the second rank tensor can be represented as follows (spectral representation)

$$\mathbf{A} = \lambda_I\, \mathbf{n}_I \otimes \mathbf{n}_I + \lambda_{II}\, \mathbf{n}_{II} \otimes \mathbf{n}_{II} + \lambda_{III}\, \mathbf{n}_{III} \otimes \mathbf{n}_{III}$$

Orthogonal Tensors. A second rank tensor $\mathbf{Q}$ is said to be orthogonal if it satisfies the equation $\mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{I}$. If $\mathbf{Q}$ operates on a vector, the length of the vector remains unchanged, i.e. with $\mathbf{b} = \mathbf{Q} \cdot \mathbf{a}$,

$$|\mathbf{b}|^2 = \mathbf{b} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{a} = |\mathbf{a}|^2$$

Furthermore, an orthogonal tensor does not change the scalar product of two arbitrary vectors. For two vectors $\mathbf{a}$ and $\mathbf{b}$ as well as $\mathbf{a}' = \mathbf{Q} \cdot \mathbf{a}$ and $\mathbf{b}' = \mathbf{Q} \cdot \mathbf{b}$ one can calculate

$$\mathbf{a}' \cdot \mathbf{b}' = \mathbf{a} \cdot \mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{b}$$

From the definition of the orthogonal tensor it follows that

$$\mathbf{Q}^T = \mathbf{Q}^{-1}, \qquad \mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{Q} \cdot \mathbf{Q}^T = \mathbf{I},$$
$$\det(\mathbf{Q} \cdot \mathbf{Q}^T) = (\det \mathbf{Q})^2 = \det \mathbf{I} = 1 \quad \Rightarrow \quad \det \mathbf{Q} = \pm 1$$

Orthogonal tensors with $\det \mathbf{Q} = 1$ are called proper orthogonal or rotation tensors. The rotation tensors are widely used in rigid body dynamics, e.g. [333], and in the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either a rotation tensor or the composition of a rotation (proper orthogonal tensor) and the tensor $-\mathbf{I}$. Let $\mathbf{P}$ be a rotation tensor, $\det \mathbf{P} = 1$; then an orthogonal tensor $\mathbf{Q}$ with $\det \mathbf{Q} = -1$ can be composed by

$$\mathbf{Q} = (-\mathbf{I}) \cdot \mathbf{P} = \mathbf{P} \cdot (-\mathbf{I}), \qquad \det \mathbf{Q} = \det(-\mathbf{I})\, \det \mathbf{P} = -1$$

For any two orthogonal tensors $\mathbf{Q}_1$ and $\mathbf{Q}_2$ the composition $\mathbf{Q}_3 = \mathbf{Q}_1 \cdot \mathbf{Q}_2$ is an orthogonal tensor, too. This property is used in the theory of symmetry and symmetry groups, e.g. [232, 331]. Two important examples of orthogonal tensors are:

the rotation tensor about a fixed axis

$$\mathbf{Q}(\psi\mathbf{m}) = \mathbf{m} \otimes \mathbf{m} + \cos\psi\,(\mathbf{I} - \mathbf{m} \otimes \mathbf{m}) + \sin\psi\,\mathbf{m} \times \mathbf{I}, \qquad -\pi < \psi < \pi, \quad \det \mathbf{Q} = 1,$$

where the unit vector $\mathbf{m}$ represents the axis and $\psi$ is the angle of rotation;

the reflection tensor

$$\mathbf{Q} = \mathbf{I} - 2\mathbf{n} \otimes \mathbf{n}, \qquad \det \mathbf{Q} = -1,$$

where the unit vector $\mathbf{n}$ represents a normal to the mirror plane. One can prove the following identities [334]

$$(\mathbf{Q} \cdot \mathbf{a}) \times (\mathbf{Q} \cdot \mathbf{b}) = \det \mathbf{Q}\; \mathbf{Q} \cdot (\mathbf{a} \times \mathbf{b}),$$
$$\mathbf{Q} \cdot (\mathbf{a} \times \mathbf{Q}^T) = \mathbf{Q} \cdot (\mathbf{a} \times \mathbf{I}) \cdot \mathbf{Q}^T = \det \mathbf{Q}\; [(\mathbf{Q} \cdot \mathbf{a}) \times \mathbf{I}] \qquad \text{(A.1.10)}$$

A.2 Elements of Tensor Analysis

A.2.1 Coordinate Systems

The vector $\mathbf{r}$ characterizing the position of a point $P$ can be represented by use of the Cartesian coordinates $x^i$ as follows, Fig. A.5,

$$\mathbf{r}(x^1, x^2, x^3) = x^1 \mathbf{e}_1 + x^2 \mathbf{e}_2 + x^3 \mathbf{e}_3 = x^i \mathbf{e}_i$$

Instead of the coordinates $x^i$ one can introduce any triple of curvilinear coordinates $q^1, q^2, q^3$ by means of the one-to-one transformations

$$x^k = x^k(q^1, q^2, q^3) \quad \Leftrightarrow \quad q^k = q^k(x^1, x^2, x^3)$$

It is assumed that the above transformations are continuous and continuously differentiable as many times as necessary, and that the Jacobians satisfy

$$\det\left(\frac{\partial x^k}{\partial q^i}\right) \neq 0, \qquad \det\left(\frac{\partial q^i}{\partial x^k}\right) \neq 0$$

Figure A.5 Cartesian and curvilinear coordinates

With these assumptions the position vector can be considered as a function of the curvilinear coordinates $q^i$, i.e. $\mathbf{r} = \mathbf{r}(q^1, q^2, q^3)$. Surfaces $q^1 = \text{const}$, $q^2 = \text{const}$, and $q^3 = \text{const}$, Fig. A.5, are called coordinate surfaces. For given fixed values of $q^2$ and $q^3$ a curve can be obtained along which only $q^1$ varies. This curve is called the $q^1$-coordinate line, Fig. A.5. Analogously, one can obtain the $q^2$- and $q^3$-coordinate lines. The partial derivatives of the position vector with respect to the selected coordinates

$$\mathbf{r}_1 = \frac{\partial \mathbf{r}}{\partial q^1}, \qquad \mathbf{r}_2 = \frac{\partial \mathbf{r}}{\partial q^2}, \qquad \mathbf{r}_3 = \frac{\partial \mathbf{r}}{\partial q^3}, \qquad \mathbf{r}_1 \cdot (\mathbf{r}_2 \times \mathbf{r}_3) \neq 0$$

define the tangential vectors to the coordinate lines at a point $P$, Fig. A.5. The vectors $\mathbf{r}_i$ are used as the local basis at the point $P$. By use of (A.1.4) the dual basis $\mathbf{r}^k$ can be introduced. The vector $d\mathbf{r}$ connecting the point $P$ with a point $P'$ in the differential neighborhood of $P$ is defined by

$$d\mathbf{r} = \frac{\partial \mathbf{r}}{\partial q^1}dq^1 + \frac{\partial \mathbf{r}}{\partial q^2}dq^2 + \frac{\partial \mathbf{r}}{\partial q^3}dq^3 = \mathbf{r}_k\, dq^k$$

The square of the arc length of the line element in the differential neighborhood of $P$ is calculated by

$$ds^2 = d\mathbf{r} \cdot d\mathbf{r} = (\mathbf{r}_i\, dq^i) \cdot (\mathbf{r}_k\, dq^k) = g_{ik}\, dq^i dq^k,$$

where $g_{ik} \equiv \mathbf{r}_i \cdot \mathbf{r}_k$ are the so-called covariant components of the metric tensor. With $g_{ik}$ one can represent the basis vectors $\mathbf{r}_i$ by the dual basis vectors $\mathbf{r}^k$ as follows

$$\mathbf{r}_i = (\mathbf{r}_i \cdot \mathbf{r}_k)\mathbf{r}^k = g_{ik}\mathbf{r}^k$$

Similarly,

$$\mathbf{r}^i = (\mathbf{r}^i \cdot \mathbf{r}^k)\mathbf{r}_k = g^{ik}\mathbf{r}_k, \qquad g^{ik} \equiv \mathbf{r}^i \cdot \mathbf{r}^k,$$

where $g^{ik}$ are termed the contravariant components of the metric tensor. For the selected bases $\mathbf{r}_i$ and $\mathbf{r}^k$ the second rank unit tensor has the following representations

$$\mathbf{I} = \mathbf{r}_i \otimes \mathbf{r}^i = g_{ik}\,\mathbf{r}^i \otimes \mathbf{r}^k = g^{ik}\,\mathbf{r}_i \otimes \mathbf{r}_k = \mathbf{r}^i \otimes \mathbf{r}_i$$

A.2.2 The Hamilton (Nabla) Operator

A scalar field is a function which assigns a scalar to each spatial point $P$ within the domain of definition. Let us consider a scalar field $\varphi(\mathbf{r}) = \varphi(q^1, q^2, q^3)$. The total differential of $\varphi$ by moving from a point $P$ to a point $P'$ in the differential neighborhood is

$$d\varphi = \frac{\partial \varphi}{\partial q^1}dq^1 + \frac{\partial \varphi}{\partial q^2}dq^2 + \frac{\partial \varphi}{\partial q^3}dq^3 = \frac{\partial \varphi}{\partial q^k}dq^k$$

Taking into account that $dq^k = d\mathbf{r} \cdot \mathbf{r}^k$,

$$d\varphi = d\mathbf{r} \cdot \mathbf{r}^k \frac{\partial \varphi}{\partial q^k} = d\mathbf{r} \cdot \nabla\varphi$$

The vector $\nabla\varphi$ is called the gradient of the scalar field $\varphi$, and the invariant operator $\nabla$ (the Hamilton or nabla operator) is defined by

$$\nabla = \mathbf{r}^k \frac{\partial}{\partial q^k}$$

For a vector field $\mathbf{a}(\mathbf{r})$ one may write

$$d\mathbf{a} = (d\mathbf{r} \cdot \mathbf{r}^k)\frac{\partial \mathbf{a}}{\partial q^k} = d\mathbf{r} \cdot \mathbf{r}^k \otimes \frac{\partial \mathbf{a}}{\partial q^k} = d\mathbf{r} \cdot \nabla \otimes \mathbf{a} = (\nabla \otimes \mathbf{a})^T \cdot d\mathbf{r}, \qquad \nabla \otimes \mathbf{a} = \mathbf{r}^k \otimes \frac{\partial \mathbf{a}}{\partial q^k}$$

The gradient of a vector field is a second rank tensor. The operation $\nabla$ can be applied to tensors of any rank. For vectors and tensors the following additional operations are defined

$$\operatorname{div} \mathbf{a} \equiv \nabla \cdot \mathbf{a} = \mathbf{r}^k \cdot \frac{\partial \mathbf{a}}{\partial q^k}, \qquad \operatorname{rot} \mathbf{a} \equiv \nabla \times \mathbf{a} = \mathbf{r}^k \times \frac{\partial \mathbf{a}}{\partial q^k}$$

The following identities can be verified

$$\nabla \otimes \mathbf{r} = \mathbf{r}^k \otimes \frac{\partial \mathbf{r}}{\partial q^k} = \mathbf{r}^k \otimes \mathbf{r}_k = \mathbf{I}, \qquad \nabla \cdot \mathbf{r} = 3$$
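The curvilinear machinery above (tangent basis, metric coefficients, dual basis, unit tensor) can be made concrete for cylindrical coordinates $q = (r, \varphi, z)$; a sketch with numpy, where the tangent vectors are entered from their analytic partial derivatives and all numeric values are assumed:

```python
import numpy as np

# an assumed sample point in cylindrical coordinates (r, phi, z)
r, phi, z = 2.0, 0.6, -1.0

# tangent vectors r_i = d(position)/d(q^i) for x = (r cos phi, r sin phi, z)
r1 = np.array([np.cos(phi), np.sin(phi), 0.0])           # d r / d r
r2 = np.array([-r * np.sin(phi), r * np.cos(phi), 0.0])  # d r / d phi
r3 = np.array([0.0, 0.0, 1.0])                           # d r / d z
basis = np.array([r1, r2, r3])

# covariant metric coefficients g_ik = r_i . r_k
g = basis @ basis.T

# dual basis from Eq. (A.1.4); g^ik = r^i . r^k are the contravariant coefficients
vol = np.cross(r1, r2) @ r3
dual = np.array([np.cross(r2, r3), np.cross(r3, r1), np.cross(r1, r2)]) / vol
g_inv = dual @ dual.T

# unit tensor: I = r_i (x) r^i
I_rep = sum(np.outer(basis[i], dual[i]) for i in range(3))
```

For cylindrical coordinates the metric is diagonal, $g = \operatorname{diag}(1, r^2, 1)$, and $g^{ik}$ is its inverse.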

For a scalar $\alpha$, a vector $\mathbf{a}$ and a second rank tensor $\mathbf{A}$ the following identities are valid

$$\nabla \cdot (\alpha\mathbf{a}) = \mathbf{r}^k \cdot \frac{\partial(\alpha\mathbf{a})}{\partial q^k} = \left(\mathbf{r}^k \frac{\partial \alpha}{\partial q^k}\right) \cdot \mathbf{a} + \alpha\,\mathbf{r}^k \cdot \frac{\partial \mathbf{a}}{\partial q^k} = (\nabla\alpha) \cdot \mathbf{a} + \alpha\,\nabla \cdot \mathbf{a}, \qquad \text{(A.2.1)}$$

$$\nabla \cdot (\mathbf{A} \cdot \mathbf{a}) = \mathbf{r}^k \cdot \frac{\partial(\mathbf{A} \cdot \mathbf{a})}{\partial q^k} = \mathbf{r}^k \cdot \frac{\partial \mathbf{A}}{\partial q^k} \cdot \mathbf{a} + \mathbf{r}^k \cdot \mathbf{A} \cdot \frac{\partial \mathbf{a}}{\partial q^k} = (\nabla \cdot \mathbf{A}) \cdot \mathbf{a} + \mathbf{A} \cdot\cdot\, (\nabla \otimes \mathbf{a})^T \qquad \text{(A.2.2)}$$

Here the identity (A.1.6) is used. For a second rank tensor $\mathbf{A}$ and the position vector $\mathbf{r}$ one can prove the following identity

$$\nabla \cdot (\mathbf{A} \times \mathbf{r}) = \mathbf{r}^k \cdot \frac{\partial(\mathbf{A} \times \mathbf{r})}{\partial q^k} = \mathbf{r}^k \cdot \frac{\partial \mathbf{A}}{\partial q^k} \times \mathbf{r} + \mathbf{r}^k \cdot \mathbf{A} \times \mathbf{r}_k = (\nabla \cdot \mathbf{A}) \times \mathbf{r} - \mathbf{A}_\times \qquad \text{(A.2.3)}$$

Here we used the property of the vector invariant, $\mathbf{r}^k \cdot \mathbf{A} \times \mathbf{r}_k = -\mathbf{A}_\times$.

A.2.3 Integral Theorems

Let $\varphi(\mathbf{r})$, $\mathbf{a}(\mathbf{r})$ and $\mathbf{A}(\mathbf{r})$ be scalar, vector and second rank tensor fields. Let $V$ be the volume of a bounded domain with a regular surface $A(V)$ and $\mathbf{n}$ the outer unit normal to the surface at $\mathbf{r}$. The integral theorems can be summarized as follows.

Gradient Theorems

$$\int_V \nabla\varphi\, dV = \oint_{A(V)} \mathbf{n}\,\varphi\, dA, \qquad \int_V \nabla \otimes \mathbf{a}\, dV = \oint_{A(V)} \mathbf{n} \otimes \mathbf{a}\, dA, \qquad \int_V \nabla \otimes \mathbf{A}\, dV = \oint_{A(V)} \mathbf{n} \otimes \mathbf{A}\, dA$$

Divergence Theorems

$$\int_V \nabla \cdot \mathbf{a}\, dV = \oint_{A(V)} \mathbf{n} \cdot \mathbf{a}\, dA, \qquad \int_V \nabla \cdot \mathbf{A}\, dV = \oint_{A(V)} \mathbf{n} \cdot \mathbf{A}\, dA$$

Curl Theorems

$$\int_V \nabla \times \mathbf{a}\, dV = \oint_{A(V)} \mathbf{n} \times \mathbf{a}\, dA, \qquad \int_V \nabla \times \mathbf{A}\, dV = \oint_{A(V)} \mathbf{n} \times \mathbf{A}\, dA$$

A.2.4 Scalar-Valued Functions of Vectors and Second Rank Tensors

Let $\psi$ be a scalar-valued function of a vector $\mathbf{a}$ and a second rank tensor $\mathbf{A}$, i.e. $\psi = \psi(\mathbf{a}, \mathbf{A})$. Introducing a basis $\mathbf{e}_i$ (for simplicity taken orthonormal), the function $\psi$ can be represented in terms of the coordinates $a_i$ and $A_{ij}$ as follows

$$\psi(\mathbf{a}, \mathbf{A}) = \psi(a_i \mathbf{e}_i, A_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j) = \psi(a_i, A_{ij})$$

The partial derivatives of $\psi$ with respect to $\mathbf{a}$ and $\mathbf{A}$ are defined according to the following rule

$$d\psi = \frac{\partial \psi}{\partial a_i}da_i + \frac{\partial \psi}{\partial A_{ij}}dA_{ij} \qquad \text{(A.2.4)}$$

In the coordinate-free form the above rule can be rewritten as follows

$$d\psi = d\mathbf{a} \cdot \frac{\partial \psi}{\partial \mathbf{a}} + d\mathbf{A} \cdot\cdot \left(\frac{\partial \psi}{\partial \mathbf{A}}\right)^T = d\mathbf{a} \cdot \psi_{,\mathbf{a}} + d\mathbf{A} \cdot\cdot\, (\psi_{,\mathbf{A}})^T \qquad \text{(A.2.5)}$$

with

$$\psi_{,\mathbf{a}} \equiv \frac{\partial \psi}{\partial \mathbf{a}} = \frac{\partial \psi}{\partial a_i}\mathbf{e}_i, \qquad \psi_{,\mathbf{A}} \equiv \frac{\partial \psi}{\partial \mathbf{A}} = \frac{\partial \psi}{\partial A_{ij}}\mathbf{e}_i \otimes \mathbf{e}_j$$

One can verify that $\psi_{,\mathbf{a}}$ and $\psi_{,\mathbf{A}}$ are independent of the choice of the basis. One may prove the following formulae for the derivatives of the principal invariants of a second rank tensor $\mathbf{A}$

$$J_1(\mathbf{A})_{,\mathbf{A}} = \mathbf{I}, \qquad J_1(\mathbf{A}^2)_{,\mathbf{A}} = 2\mathbf{A}^T, \qquad J_1(\mathbf{A}^3)_{,\mathbf{A}} = 3\mathbf{A}^{2T},$$
$$J_2(\mathbf{A})_{,\mathbf{A}} = J_1(\mathbf{A})\mathbf{I} - \mathbf{A}^T, \qquad \text{(A.2.6)}$$
$$J_3(\mathbf{A})_{,\mathbf{A}} = \mathbf{A}^{2T} - J_1(\mathbf{A})\mathbf{A}^T + J_2(\mathbf{A})\mathbf{I} = J_3(\mathbf{A})(\mathbf{A}^T)^{-1}$$

A.3 Orthogonal Transformations and Orthogonal Invariants

An application of the theory of tensor functions is to find a basic set of scalar invariants for a given group of symmetry transformations, such that each invariant relative
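The derivative formulae (A.2.6) can be validated against central finite differences of the invariants with respect to the Cartesian coordinates $A_{ij}$; a sketch with numpy (the sample tensor and step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.random((3, 3))
I = np.eye(3)

def J1(A): return np.trace(A)
def J2(A): return 0.5 * (np.trace(A)**2 - np.trace(A @ A))
def J3(A): return np.linalg.det(A)

def num_grad(f, A, h=1e-6):
    """Numerical gradient df/dA_ij by central differences."""
    G = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3))
            E[i, j] = h
            G[i, j] = (f(A + E) - f(A - E)) / (2.0 * h)
    return G

# closed forms from Eq. (A.2.6)
G1 = I
G2 = J1(A) * I - A.T
G3 = (A @ A).T - J1(A) * A.T + J2(A) * I
```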

A.3 Orthogonal Transformations and Orthogonal Invariants 183 to the same group is expressible as a single-valued function of the basic set. The basic set of invariants is called functional basis. To obtain a compact representation for invariants, it is required that the functional basis is irreducible in the sense that removing any one invariant from the basis will imply that a complete representation for all the invariants is no longer possible. Such a problem arises in the formulation of constitutive equations for a given group of material symmetries. For example, the strain energy density of an elastic non-polar material is a scalar valued function of the second rank symmetric strain tensor. In the theory of the Cosserat continuum two strain measures are introduced, where the first strain measure is the polar tensor while the second one is the axial tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find a minimum set of functionally independent invariants for the considered tensorial arguments. For the theory of tensor functions we refer to [71]. Representations of tensor functions are reviewed in [280, 330]. An orthogonal transformation of a scalar α, a vector a and a second rank tensor A is defined by [25, 332] α (det Q) ζ α, a (det Q) ζ Q a, A (det Q) ζ Q A Q T, (A.3.1) where Q is an orthogonal tensor, i.e. Q Q T = I, det Q = ±1, I is the second rank unit tensor, ζ = 0 for absolute (polar) scalars, vectors and tensors and ζ = 1 for axial ones. An example of the axial scalar is the mixed product of three polar vectors, i.e. α = a (b c). A typical example of the axial vector is the cross product of two polar vectors, i.e. c = a b. An example of the second rank axial tensor is the skew-symmetric tensor W = a I, where a is a polar vector. 
Consider a group of orthogonal transformations S (e.g., the material symmetry transformations) characterized by a set of orthogonal tensors Q. A scalar-valued function of a second rank tensor f = f(A) is said to be an orthogonal invariant under the group S if

∀ Q ∈ S:  f(A') = (det Q)^η f(A),   (A.3.2)

where η = 0 if values of f are absolute scalars and η = 1 if values of f are axial scalars. Any second rank tensor B can be decomposed into a symmetric and a skew-symmetric part, i.e. B = A + a × I, where A is the symmetric part and a is the associated vector. Therefore f(B) = f(A, a). If B is a polar (axial) tensor, then a is an axial (polar) vector. For a set of second rank tensors and vectors the definition of an orthogonal invariant (A.3.2) can be generalized as follows

∀ Q ∈ S:  f(A'_1, A'_2, ..., A'_n, a'_1, a'_2, ..., a'_k) = (det Q)^η f(A_1, A_2, ..., A_n, a_1, a_2, ..., a_k),  A_i = A_i^T   (A.3.3)
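The decomposition B = A + a × I can be sketched numerically as follows (a NumPy illustration; the helper name axial_to_skew is ours, not from the text):

```python
import numpy as np

def axial_to_skew(a):
    """The skew tensor W = a x I, acting as W @ x == np.cross(a, x)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))          # an arbitrary second rank tensor

A = 0.5 * (B + B.T)                      # symmetric part
S = 0.5 * (B - B.T)                      # skew-symmetric part
a = np.array([S[2, 1], S[0, 2], S[1, 0]])  # associated (axial) vector of S

print(np.allclose(B, A + axial_to_skew(a)))  # True: B = A + a x I
x = rng.standard_normal(3)
print(np.allclose(axial_to_skew(a) @ x, np.cross(a, x)))  # True
```

The associated vector is read off from the three independent components of the skew part, which is why f(B) reduces to f(A, a).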

A.3.1 Invariants for the Full Orthogonal Group

In [335] orthogonal invariants for different sets of second rank tensors and vectors with respect to the full orthogonal group are presented. It is shown that orthogonal invariants are integrals of a generic partial differential equation (basic equations for invariants). Let us present the following two examples. Orthogonal invariants of a symmetric second rank tensor A are

I_k = tr A^k,  k = 1, 2, 3

Instead of I_k it is possible to use the principal invariants J_k defined by (A.1.8). Orthogonal invariants of a symmetric second rank tensor A and a vector a are

I_k = tr A^k, k = 1, 2, 3,  I_4 = a · a,  I_5 = a · A · a,
I_6 = a · A² · a,  I_7 = a · A² · (a × A · a)   (A.3.4)

In the above set of invariants only six are functionally independent. The relation between the invariants (the so-called syzygy, [71]) can be formulated as follows

I_7² = det | I_4        I_5        I_6        |
           | I_5        I_6        a · A³ · a |,   (A.3.5)
           | I_6        a · A³ · a a · A⁴ · a |

where a · A³ · a and a · A⁴ · a can be expressed by I_l, l = 1, ..., 6, applying the Cayley-Hamilton theorem (A.1.7). The set of invariants for a symmetric second rank tensor A and a vector a can be applied to a non-symmetric second rank tensor B, since B can be represented as B = A + a × I, A = A^T.

A.3.2 Invariants for the Transverse Isotropy Group

Transverse isotropy is an important type of symmetry transformation due to a variety of applications. Transverse isotropy is usually assumed in constitutive modeling of fiber reinforced materials, e.g. [21], fiber suspensions, e.g. [22], directionally solidified alloys, e.g. [213], deep drawing sheets, e.g. [50, 57] and piezoelectric materials, e.g. [285]. The invariants and generating sets for tensor-valued functions with respect to different cases of transverse isotropy are discussed in [79, 328] (see also relevant references therein).
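The syzygy (A.3.5) states that I_7 is, up to sign, the volume spanned by the vectors a, A · a, A² · a, so that I_7² equals their Gram determinant. A numerical spot-check (NumPy; the random symmetric tensor and vector are illustrative, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
A = 0.5 * (M + M.T)                   # symmetric second rank tensor
a = rng.standard_normal(3)

I4 = a @ a
I5 = a @ A @ a
I6 = a @ A @ A @ a
I7 = a @ A @ A @ np.cross(a, A @ a)   # mixed product of a, A.a, A^2.a

aA3a = a @ np.linalg.matrix_power(A, 3) @ a
aA4a = a @ np.linalg.matrix_power(A, 4) @ a
G = np.array([[I4, I5, I6],
              [I5, I6, aA3a],
              [I6, aA3a, aA4a]])      # Gram matrix of a, A.a, A^2.a
print(np.isclose(I7**2, np.linalg.det(G)))  # True
```

The check confirms that fixing I_1, ..., I_6 determines I_7 up to its sign only, which is why I_7 cannot be removed from the basis.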
In what follows we analyze the problem of a functional basis within the theory of linear first order partial differential equations rather than the algebra of polynomials. We extend the idea proposed in [335] for invariants with respect to the full orthogonal group to the case of transverse isotropy. The invariants will be found as integrals of the generic partial differential equations. Although a functional basis formed by these invariants does not include any redundant element, functional relations between its members may exist. It may therefore be useful to find simple forms of such relations. We show that the proposed approach supplies results in a direct, natural manner.

Invariants for a Single Second Rank Symmetric Tensor. Consider the proper orthogonal tensor which represents a rotation about a fixed axis, i.e.

Q(ϕm) = m ⊗ m + cos ϕ (I − m ⊗ m) + sin ϕ m × I,  det Q(ϕm) = 1,   (A.3.6)

where m is assumed to be a constant unit vector (the axis of rotation) and ϕ denotes the angle of rotation about m. The symmetry transformation defined by this tensor corresponds to transverse isotropy, whereby five different cases are possible, e.g. [299, 331]. Let us find the scalar-valued functions of a second rank symmetric tensor A satisfying the condition

f(A'(ϕ)) = f(Q(ϕm) · A · Q^T(ϕm)) = f(A),  A'(ϕ) ≡ Q(ϕm) · A · Q^T(ϕm)   (A.3.7)

Equation (A.3.7) must be valid for any angle of rotation ϕ. In (A.3.7) only the left-hand side depends on ϕ. Therefore its derivative with respect to ϕ can be set to zero, i.e.

df/dϕ = (dA'/dϕ) ·· (∂f/∂A')^T = 0   (A.3.8)

The derivative of A' with respect to ϕ can be calculated by the following rules

dA'(ϕ) = dQ(ϕm) · A · Q^T(ϕm) + Q(ϕm) · A · dQ^T(ϕm),
dQ(ϕm) = m × Q(ϕm) dϕ,  dQ^T(ϕm) = −Q^T(ϕm) × m dϕ   (A.3.9)

By inserting the above equations into (A.3.8) we obtain

(∂f/∂A)^T ·· (m × A − A × m) = 0   (A.3.10)

Equation (A.3.10) is classified in [92] as a linear homogeneous first order partial differential equation. The characteristic system of (A.3.10) is

dA/ds = m × A − A × m   (A.3.11)

Any system of n linear ordinary differential equations has no more than n − 1 functionally independent integrals [92]. By introducing a basis e_i the tensor A can be written in the form A = A_ij e_i ⊗ e_j, so that (A.3.11) becomes a system of six ordinary differential equations with respect to the coordinates A_ij. The five integrals of (A.3.11) may be written as follows

g_i(A) = c_i,  i = 1, 2, ..., 5,

where c_i are integration constants. Any function of the five integrals g_i is a solution of the partial differential equation (A.3.10).
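The rotation tensor (A.3.6) is the Rodrigues formula and can be sketched numerically as follows (NumPy; the axis m = e_3 and the angle are illustrative choices):

```python
import numpy as np

def rotation(m, phi):
    """Q(phi m) = m@m + cos(phi)(I - m@m) + sin(phi) m x I, per (A.3.6)."""
    m = np.asarray(m, float) / np.linalg.norm(m)
    mxI = np.array([[0.0, -m[2], m[1]],
                    [m[2], 0.0, -m[0]],
                    [-m[1], m[0], 0.0]])          # the skew tensor m x I
    return (np.outer(m, m)
            + np.cos(phi) * (np.eye(3) - np.outer(m, m))
            + np.sin(phi) * mxI)

m = np.array([0.0, 0.0, 1.0])
Q = rotation(m, 0.3)
print(np.allclose(Q @ Q.T, np.eye(3)))       # orthogonality
print(np.isclose(np.linalg.det(Q), 1.0))     # proper rotation
print(np.allclose(Q @ m, m))                 # m is the rotation axis
```

For m = e_3 the result is the familiar rotation matrix about the z-axis, which makes the three printed checks easy to verify by hand.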
Therefore the five integrals g i represent the invariants of the symmetric tensor A with respect to the symmetry transformation (A.3.6). The solutions of (A.3.11) are

A^k(s) = Q(sm) · A_0^k · Q^T(sm),  k = 1, 2, 3,   (A.3.12)

where A_0 is the initial condition. In order to find the integrals, the variable s must be eliminated from (A.3.12). Taking into account the identities

tr (Q · A^k · Q^T) = tr (Q^T · Q · A^k) = tr A^k,  m · Q(sm) = m,
(Q · a) × (Q · b) = (det Q) Q · (a × b)   (A.3.13)

and using the notation Q_m ≡ Q(sm), the integrals can be found as follows

tr (A^k) = tr (A_0^k),  k = 1, 2, 3,
m · A^l · m = m · Q_m · A_0^l · Q_m^T · m = m · A_0^l · m,  l = 1, 2,
m · A² · (m × A · m) = m · A_0² · Q_m^T · [(Q_m · m) × (Q_m · A_0 · m)]
                     = m · A_0² · (m × A_0 · m)   (A.3.14)

As a result we can formulate the six invariants of the tensor A with respect to the symmetry transformation (A.3.6) as follows

I_k = tr (A^k), k = 1, 2, 3,  I_4 = m · A · m,  I_5 = m · A² · m,
I_6 = m · A² · (m × A · m)   (A.3.15)

The invariants with respect to various symmetry transformations are discussed in [79]. For the case of transverse isotropy six invariants are derived in [79] by the use of another approach. In this sense our result coincides with the result given in [79]. However, from our derivation it follows that only five of the invariants listed in (A.3.15) are functionally independent. Taking into account that I_6 is the mixed product of the vectors m, A · m and A² · m, the relation between the invariants can be written as follows

I_6² = det | 1          I_4        I_5        |
           | I_4        I_5        m · A³ · m |   (A.3.16)
           | I_5        m · A³ · m m · A⁴ · m |

One can verify that m · A³ · m and m · A⁴ · m are transversely isotropic invariants, too. However, applying the Cayley-Hamilton theorem (A.1.7) they can be uniquely expressed by I_1, I_2, ..., I_5 in the following way [54]

m · A³ · m = J_1 I_5 − J_2 I_4 + J_3,
m · A⁴ · m = (J_1² − J_2) I_5 + (J_3 − J_1 J_2) I_4 + J_1 J_3,

where J_1, J_2 and J_3 are the principal invariants of A defined by (A.1.8). Let us note that the invariant I_6 cannot be dropped. In order to verify this, it is enough to consider two different tensors

A and B = Q_n · A · Q_n^T, where

Q_n ≡ Q(πn) = 2n ⊗ n − I,  n · n = 1,  n · m = 0,  det Q_n = 1

One can prove that the tensors A and B have the same invariants I_1, I_2, ..., I_5. Taking into account that m · Q_n = −m and applying the last identity in (A.3.13) we may write

I_6(B) = m · B² · (m × B · m) = m · A² · Q_n^T · (m × Q_n · A · m) = −m · A² · (m × A · m) = −I_6(A)

We observe that the only difference between the two considered tensors is the sign of I_6. Therefore, the triples of vectors m, A · m, A² · m and m, B · m, B² · m have different orientations and cannot be combined by a rotation. It should be noted that the functional relation (A.3.16) does not imply that the invariant I_6 is dependent and hence redundant, i.e. that it could be removed from the basis (A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of I_6.

To describe yielding and failure of oriented solids a dyad M = v ⊗ v has been used in [53, 75], where the vector v specifies a privileged direction. A plastic potential is assumed to be an isotropic function of the symmetric Cauchy stress tensor and the tensor generator M. Applying the representation of isotropic functions an integrity basis including ten invariants was found. In the special case v = m the number of invariants reduces to the five I_1, I_2, ..., I_5 defined by (A.3.15). Further details of this approach and applications in continuum mechanics are given in [59, 71]. However, the problem statement to find an integrity basis of a symmetric tensor A and a dyad M, i.e. to find scalar-valued functions f(A, M) satisfying the condition

f(Q · A · Q^T, Q · M · Q^T) = (det Q)^η f(A, M),  ∀ Q,  Q · Q^T = I,  det Q = ±1,   (A.3.17)

essentially differs from the problem statement (A.3.7). In order to show this we take into account that the symmetry group of a dyad M, i.e.
the set of orthogonal solutions of the equation Q · M · Q^T = M, includes the following elements

Q_1,2 = ±I,  Q_3 = Q(ϕm),  m = v/|v|,
Q_4 = Q(πn) = 2n ⊗ n − I,  n · n = 1,  n · v = 0,   (A.3.18)

where Q(ϕm) is defined by (A.3.6). The solutions of the problem (A.3.17) are automatically solutions of the following problem

f(Q_i · A · Q_i^T, M) = (det Q_i)^η f(A, M),  i = 1, 2, 3, 4,

i.e. the problem to find the invariants of A relative to the symmetry group (A.3.18). However, (A.3.18) includes many more symmetry elements than the problem statement (A.3.7) does. An alternative set of transversely isotropic invariants can be formulated by the use of the following decomposition

A = α m ⊗ m + β(I − m ⊗ m) + A_pd + t ⊗ m + m ⊗ t,   (A.3.19)

where α, β, A_pd and t are projections of A. With the projectors P_1 = m ⊗ m and P_2 = I − m ⊗ m we may write

α = m · A · m = tr (A · P_1),
β = ½ (tr A − m · A · m) = ½ tr (A · P_2),
A_pd = P_2 · A · P_2 − βP_2,   (A.3.20)
t = m · A · P_2

The decomposition (A.3.19) is the analogue of the following representation of a vector a

a = I · a = m ⊗ m · a + (I − m ⊗ m) · a = ψm + τ,  ψ = a · m,  τ = P_2 · a   (A.3.21)

Decompositions of the type (A.3.19) are applied in [68, 79]. The projections introduced in (A.3.20) have the following properties

tr A_pd = 0,  A_pd · m = m · A_pd = 0,  t · m = 0   (A.3.22)

With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed to the following system of equations

dα/ds = 0,  dβ/ds = 0,
dA_pd/ds = m × A_pd − A_pd × m,  dt/ds = m × t   (A.3.23)

From the first two equations we observe that α and β are transversely isotropic invariants. The third equation can be transformed to one scalar and one vector equation as follows

(dA_pd/ds) ·· A_pd = 0  ⇒  d(A_pd ·· A_pd)/ds = 0,  db/ds = m × b

with b ≡ A_pd · t. We observe that tr (A_pd²) = A_pd ·· A_pd is a transversely isotropic invariant, too. Finally, we have to find the integrals of the following system

dt/ds = m × t,  db/ds = m × b   (A.3.24)

The solutions of (A.3.24) are

t(s) = Q(sm) · t_0,  b(s) = Q(sm) · b_0,

where t_0 and b_0 are the initial conditions. The vectors t and b belong to the plane of isotropy, i.e. t · m = 0 and b · m = 0. Therefore, one can verify the following integrals

t · t = t_0 · t_0,  b · b = b_0 · b_0,  t · b = t_0 · b_0,
(t × b) · m = (t_0 × b_0) · m   (A.3.25)

We have found seven integrals, but only five of them are functionally independent. In order to formulate the relation between the integrals we compute

b · b = t · A_pd² · t,  t · b = t · A_pd · t

For any plane tensor A_p satisfying the equations A_p · m = m · A_p = 0 the Cayley-Hamilton theorem can be formulated as follows, see e.g. [71]

A_p² − (tr A_p) A_p + ½ [(tr A_p)² − tr (A_p²)] (I − m ⊗ m) = 0

Since tr A_pd = 0 we have

2A_pd² = tr (A_pd²)(I − m ⊗ m),  t · A_pd² · t = ½ tr (A_pd²)(t · t)

Because tr (A_pd²) and t · t are already defined, the invariant b · b can be omitted. The vector t × b is directed along the axis m. Therefore

t × b = γm,  γ = (t × b) · m,  γ² = (t × b) · (t × b) = (t · t)(b · b) − (t · b)²

Now we can summarize the six invariants and one relation between them as follows

Ī_1 = α,  Ī_2 = β,  Ī_3 = ½ tr (A_pd²),  Ī_4 = t · t = t · A · m,
Ī_5 = t · A_pd · t,  Ī_6 = (t × A_pd · t) · m,   (A.3.26)
Ī_6² = Ī_4² Ī_3 − Ī_5²

Let us assume that the symmetry transformation Q_n ≡ Q(πn) belongs to the symmetry group of the transverse isotropy, as it was assumed in [71, 59]. In this case f(A') = f(Q_n · A · Q_n^T) = f(A) must be valid. With Q_n · m = −m we can write

α' = α,  β' = β,  A'_pd = Q_n · A_pd · Q_n^T,  t' = −Q_n · t
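The projections (A.3.20), the properties (A.3.22) and the decomposition (A.3.19) can be spot-checked numerically (a NumPy sketch; the random symmetric tensor and the axis m = e_3 are illustrative choices, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3))
A = 0.5 * (M + M.T)                    # a symmetric second rank tensor
m = np.array([0.0, 0.0, 1.0])          # illustrative axis of symmetry

P1 = np.outer(m, m)                    # projectors from (A.3.20)
P2 = np.eye(3) - P1
alpha = m @ A @ m
beta = 0.5 * (np.trace(A) - alpha)
Apd = P2 @ A @ P2 - beta * P2          # plane deviatoric projection
t = m @ A @ P2                         # vector in the plane of isotropy

# decomposition (A.3.19) reassembles A exactly
recon = alpha * P1 + beta * P2 + Apd + np.outer(t, m) + np.outer(m, t)
print(np.allclose(A, recon))                              # True
# properties (A.3.22)
print(np.isclose(np.trace(Apd), 0.0),
      np.allclose(Apd @ m, 0.0), np.isclose(t @ m, 0.0))  # True True True
```

With m = e_3 the projections reduce to familiar block operations: α is the 33-component, β the mean of the in-plane diagonal, A_pd the in-plane deviator and t the out-of-plane shear column.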

Therefore in (A.3.26) Ī'_k = Ī_k, k = 1, 2, ..., 5, and

Ī'_6 = (t' × A'_pd · t') · m = ((Q_n · t) × (Q_n · A_pd · t)) · m = (t × A_pd · t) · Q_n · m = −(t × A_pd · t) · m = −Ī_6

Consequently

f(A') = f(Ī_1, Ī_2, ..., Ī_5, Ī'_6) = f(Ī_1, Ī_2, ..., Ī_5, −Ī_6)  ⇒  f(A) = f(Ī_1, Ī_2, ..., Ī_5, Ī_6²),

and Ī_6² can be omitted due to the last relation in (A.3.26).

Invariants for a Set of Vectors and Second Rank Tensors. Setting Q = Q(ϕm) in (A.3.3) and taking the derivative of (A.3.3) with respect to ϕ results in the following generic partial differential equation

Σ_{i=1}^{n} (∂f/∂A_i)^T ·· (m × A_i − A_i × m) + Σ_{j=1}^{k} (∂f/∂a_j) · (m × a_j) = 0   (A.3.27)

The characteristic system of (A.3.27) is

dA_i/ds = m × A_i − A_i × m,  i = 1, 2, ..., n,
da_j/ds = m × a_j,  j = 1, 2, ..., k   (A.3.28)

The above system is a system of N ordinary differential equations, where N = 6n + 3k is the total number of coordinates of A_i and a_j for a selected basis. The system (A.3.28) has no more than N − 1 functionally independent integrals. Therefore we can formulate:

Theorem A.3.1. A set of n symmetric second rank tensors and k vectors with N = 6n + 3k independent coordinates for a given basis has no more than N − 1 functionally independent invariants for N > 1 and one invariant for N = 1 with respect to the symmetry transformation Q(ϕm).

In essence, the proof of this theorem is given within the theory of linear first order partial differential equations [92]. As an example let us consider the set of a symmetric second rank tensor A and a vector a. This set has eight independent invariants. For a visual perception it is useful to keep in mind that the considered set is equivalent to the set A, a, A · a, A² · a. Therefore it is necessary to find the list of invariants whose fixation determines this set as a rigid whole. The generic equation (A.3.27) takes the form

(∂f/∂A)^T ·· (m × A − A × m) + (∂f/∂a) · (m × a) = 0   (A.3.29)
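The behaviour of the invariants (A.3.26) under Q_n, including the sign change of Ī_6 and the relation Ī_6² = Ī_4² Ī_3 − Ī_5², can be reproduced numerically (a NumPy sketch with illustrative random data; the function names are ours):

```python
import numpy as np

def projections(A, m):
    # projections (A.3.20) of a symmetric tensor A onto the axis m
    P2 = np.eye(3) - np.outer(m, m)
    alpha = m @ A @ m
    beta = 0.5 * (np.trace(A) - alpha)
    Apd = P2 @ A @ P2 - beta * P2
    t = m @ A @ P2
    return alpha, beta, Apd, t

def bar_invariants(A, m):
    alpha, beta, Apd, t = projections(A, m)
    return np.array([alpha, beta, 0.5 * np.trace(Apd @ Apd),
                     t @ t, t @ Apd @ t,
                     np.cross(t, Apd @ t) @ m])   # bar I_1 ... bar I_6

rng = np.random.default_rng(7)
M = rng.standard_normal((3, 3)); A = 0.5 * (M + M.T)
m = np.array([0.0, 0.0, 1.0])
n = np.array([1.0, 0.0, 0.0])                  # n orthogonal to m
Qn = 2.0 * np.outer(n, n) - np.eye(3)          # Q(pi n), det = +1

I = bar_invariants(A, m)
Ip = bar_invariants(Qn @ A @ Qn.T, m)
print(np.allclose(I[:5], Ip[:5]))              # bar I_1 ... bar I_5 unchanged
print(np.isclose(Ip[5], -I[5]))                # bar I_6 changes sign
print(np.isclose(I[5]**2, I[3]**2 * I[2] - I[4]**2))  # last relation in (A.3.26)
```

This mirrors the argument above: the enlarged symmetry group forces f to depend on Ī_6 only through Ī_6², which the relation then renders redundant.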

The characteristic system of (A.3.29) is

dA/ds = m × A − A × m,  da/ds = m × a   (A.3.30)

This system of ninth order has eight independent integrals. Six of them are invariants of A and a with respect to the full orthogonal group. They fix the considered set as a rigid whole. These orthogonal invariants are defined by Eqs (A.3.4) and (A.3.5). Let us note that the invariant I_7 in (A.3.4) cannot be ignored. To verify this it is enough to consider the two different sets A, a and B = Q_p · A · Q_p^T, a, where

Q_p = I − 2p ⊗ p,  p · p = 1,  p · a = 0

One can prove that the invariants I_1, I_2, ..., I_6 are the same for these two sets. The only difference is the sign of the invariant I_7, i.e.

a · B² · (a × B · a) = −a · A² · (a × A · a)

Therefore the triples of vectors a, A · a, A² · a and a, B · a, B² · a have different orientations and cannot be combined by a rotation. In order to fix the considered set with respect to the unit vector m it is enough to fix two further invariants

I_8 = m · A · m,  I_9 = m · a   (A.3.31)

The eight independent transversely isotropic invariants are given by (A.3.4), (A.3.5) and (A.3.31).

A.3.3 Invariants for the Orthotropic Symmetry Group

Consider the orthogonal tensors Q_1 = I − 2n_1 ⊗ n_1 and Q_2 = I − 2n_2 ⊗ n_2, det Q_1 = det Q_2 = −1. These tensors represent mirror reflections, whereby the mutually orthogonal unit vectors ±n_1 and ±n_2 are the normal directions to the mirror planes. The above tensors are symmetry elements of the orthotropic symmetry group. The invariants must be found from

f(Q_1 · A · Q_1^T) = f(Q_2 · A · Q_2^T) = f(A)

Consequently,

f(Q_1 · Q_2 · A · Q_2^T · Q_1^T) = f(Q_2 · A · Q_2^T) = f(A),

and the tensor Q_3 = Q_1 · Q_2 = 2n_3 ⊗ n_3 − I belongs to the symmetry group, where the unit vector n_3 is orthogonal to n_1 and n_2.
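The transversely isotropic invariants of the set A, a, including I_8 and I_9 from (A.3.31), and the sign change of I_7 under Q_p can be spot-checked as follows (NumPy sketch; the random data, the axis m = e_3 and the construction of p are illustrative choices):

```python
import numpy as np

def rotation(m, phi):
    # Q(phi m) from (A.3.6)
    mxI = np.array([[0.0, -m[2], m[1]], [m[2], 0.0, -m[0]], [-m[1], m[0], 0.0]])
    return (np.outer(m, m) + np.cos(phi) * (np.eye(3) - np.outer(m, m))
            + np.sin(phi) * mxI)

def invariants(A, a, m):
    tr = [np.trace(np.linalg.matrix_power(A, k)) for k in (1, 2, 3)]
    return np.array(tr + [a @ a, a @ A @ a, a @ A @ A @ a,
                          a @ A @ A @ np.cross(a, A @ a),   # I7
                          m @ A @ m, m @ a])                # I8, I9 of (A.3.31)

rng = np.random.default_rng(8)
M = rng.standard_normal((3, 3)); A = 0.5 * (M + M.T)
a = rng.standard_normal(3)
m = np.array([0.0, 0.0, 1.0])

# all listed invariants are unchanged along the orbit under Q(phi m)
Q = rotation(m, 0.77)
print(np.allclose(invariants(A, a, m), invariants(Q @ A @ Q.T, Q @ a, m)))

# I7 flips sign under the reflection Q_p = I - 2 p@p with p orthogonal to a
p = np.cross(a, rng.standard_normal(3)); p /= np.linalg.norm(p)
Qp = np.eye(3) - 2.0 * np.outer(p, p)
B = Qp @ A @ Qp.T
I7 = lambda T: a @ T @ T @ np.cross(a, T @ a)
print(np.isclose(I7(B), -I7(A)))
```

The sign flip of I_7 is the numerical counterpart of the orientation argument above: a, B · a, B² · a is the mirror image of a, A · a, A² · a.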
Taking into account that Q_i · n_i = −n_i (no summation convention) and Q_i · n_j = n_j for i ≠ j, and using the notation A'_i = Q_i · A · Q_i^T, we can write

tr (A'_i^k) = tr (A^k),  k = 1, 2, 3,  i = 1, 2, 3,
n_i · A'_i · n_i = n_i · Q_i · A · Q_i^T · n_i = n_i · A · n_i,  i = 1, 2, 3,
n_i · A'_i² · n_i = n_i · Q_i · A² · Q_i^T · n_i = n_i · A² · n_i,  i = 1, 2, 3   (A.3.32)

The above set includes nine scalars. The number of independent scalars is seven due to the obvious relations

tr (A^k) = n_1 · A^k · n_1 + n_2 · A^k · n_2 + n_3 · A^k · n_3,  k = 1, 2, 3,

of which only those with k = 1, 2 connect scalars of the above set.
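Both the reflection invariance (A.3.32) and the trace relations above can be spot-checked numerically (NumPy; choosing n_1, n_2, n_3 as the standard basis is an illustrative special case):

```python
import numpy as np

rng = np.random.default_rng(9)
M = rng.standard_normal((3, 3))
A = 0.5 * (M + M.T)          # a symmetric second rank tensor
n = np.eye(3)                # rows n[0], n[1], n[2] play the role of n1, n2, n3

# trace relations: tr A^k is the sum of the diagonal in any orthonormal triad
for k in (1, 2, 3):
    Ak = np.linalg.matrix_power(A, k)
    assert np.isclose(np.trace(Ak), sum(n[i] @ Ak @ n[i] for i in range(3)))

# the mirror reflection Q1 = I - 2 n1@n1 leaves all scalars of (A.3.32) unchanged
Q1 = np.eye(3) - 2.0 * np.outer(n[0], n[0])
Ap = Q1 @ A @ Q1.T
vals_A = ([n[i] @ A @ n[i] for i in range(3)]
          + [n[i] @ A @ A @ n[i] for i in range(3)])
vals_Ap = ([n[i] @ Ap @ n[i] for i in range(3)]
           + [n[i] @ Ap @ Ap @ n[i] for i in range(3)])
print(np.allclose(vals_A, vals_Ap))  # True
```

The trace identity holds for every power k; only the cases k = 1, 2 relate members of the nine-scalar set to each other, which is why seven of them remain independent.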