A Some Basic Rules of Tensor Calculus


The tensor calculus is a powerful tool for the description of the fundamentals in continuum mechanics and for the derivation of the governing equations of applied problems. In general, there are two possibilities for the representation of tensors and tensorial equations: the direct (symbolic) notation and the index (component) notation.

The direct notation operates with scalars, vectors and tensors as physical objects defined in the three-dimensional space. A vector (first rank tensor) $\mathbf{a}$ is considered as a directed line segment rather than a triple of numbers (coordinates). A second rank tensor $\mathbf{A}$ is any finite sum of ordered vector pairs $\mathbf{A} = \mathbf{a} \otimes \mathbf{b} + \ldots + \mathbf{c} \otimes \mathbf{d}$. Scalars, vectors and tensors are handled as invariant objects, i.e. independent of the choice of the coordinate system. This is the reason for the use of the direct notation in the modern literature of mechanics and rheology, e.g. [29, 32, 49, 123, 131, 199, 246, 313, 334] among others.

The index notation deals with components or coordinates of vectors and tensors. For a selected basis, e.g. $\mathbf{g}_i$, $i = 1, 2, 3$, one can write
\[
\mathbf{a} = a^i \mathbf{g}_i, \qquad \mathbf{A} = \left(a^i b^j + \ldots + c^i d^j\right) \mathbf{g}_i \otimes \mathbf{g}_j
\]
Here Einstein's summation convention is used: within one expression an index that appears twice is summed from 1 to 3, e.g.
\[
a^k \mathbf{g}_k \equiv \sum_{k=1}^{3} a^k \mathbf{g}_k, \qquad A^{ik} b_k \equiv \sum_{k=1}^{3} A^{ik} b_k
\]
In the above examples $k$ is a so-called dummy index. Within the index notation the basic operations with tensors are defined with respect to their coordinates, e.g. the sum of two vectors is computed as the sum of their coordinates, $c^i = a^i + b^i$. The introduced basis remains in the background. It must be remembered that a change of the coordinate system leads to a change of the components of tensors.

In this work we prefer the direct tensor notation over the index one. When solving applied problems the tensor equations can be translated into the language of matrices for a specified coordinate system.

The purpose of this Appendix is to give a brief guide to the notations and rules of the tensor calculus applied throughout this work. For more comprehensive overviews on tensor calculus we recommend [54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented, for example, in [40, 111, 340]. Section A.1 provides a brief overview of basic algebraic operations with vectors and second rank tensors. Several rules from tensor analysis are summarized in Sect. A.2. Basic sets of invariants for different groups of symmetry transformations are presented in Sect. A.3, where a novel approach to find the functional basis is discussed.

A.1 Basic Operations of Tensor Algebra

A.1.1 Polar and Axial Vectors

A vector in the three-dimensional Euclidean space is defined as a directed line segment with specified magnitude (scalar) and direction. The magnitude (the length) of a vector $\mathbf{a}$ is denoted by $|\mathbf{a}|$. Two vectors $\mathbf{a}$ and $\mathbf{b}$ are equal if they have the same direction and the same magnitude. The zero vector $\mathbf{0}$ has a magnitude equal to zero.

In mechanics two types of vectors can be introduced. The vectors of the first type are directed line segments; they are associated with translations in the three-dimensional space. Examples of such polar vectors include the force, the displacement, the velocity, the acceleration, the momentum, etc. The second type is used to characterize spinor motions and related quantities, i.e. the moment, the angular velocity, the angular momentum, etc. Figure A.1a shows the so-called spin vector $\mathbf{a}^\ast$, which represents a rotation about the given axis. The direction of rotation is specified by the circular arrow and the magnitude of rotation is the corresponding length.

[Figure A.1 Axial vectors: a spin vector, b axial vector in the right-screw oriented reference frame, c axial vector in the left-screw oriented reference frame]

For the given spin vector $\mathbf{a}^\ast$ the directed line segment $\mathbf{a}$ is introduced according to the following rules [334]:

1. the vector $\mathbf{a}$ is placed on the axis of the spin vector,
2. the magnitude of $\mathbf{a}$ is equal to the magnitude of $\mathbf{a}^\ast$,

3. the vector $\mathbf{a}$ is directed according to the right-handed screw, Fig. A.1b, or to the left-handed screw, Fig. A.1c.

The selection of one of the two cases in rule 3 corresponds to the convention of orientation of the reference frame [334] (it should not be confused with right- or left-handed triples of vectors or coordinate systems). A directed line segment is called a polar vector if it does not change by changing the orientation of the reference frame. A vector is called axial if it changes its sign by changing the orientation of the reference frame. The above definitions are valid for scalars and tensors of any rank, too. Axial vectors (and tensors) are widely used in rigid body dynamics, e.g. [333], in the theories of rods, plates and shells, e.g. [25], in the asymmetric theory of elasticity, e.g. [231], as well as in the dynamics of micro-polar media, e.g. [108]. When dealing with polar and axial vectors it should be remembered that they have different physical meanings. Therefore, a sum of a polar and an axial vector has no sense.

A.1.2 Operations with Vectors

Addition. For a given pair of vectors $\mathbf{a}$ and $\mathbf{b}$ of the same type the sum $\mathbf{c} = \mathbf{a} + \mathbf{b}$ is defined according to one of the rules in Fig. A.2. The sum has the following properties
\[
\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a} \;\text{(commutativity)}, \qquad
(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c}) \;\text{(associativity)}, \qquad
\mathbf{a} + \mathbf{0} = \mathbf{a}
\]

Multiplication by a Scalar. For any vector $\mathbf{a}$ and any scalar $\alpha$ a vector $\mathbf{b} = \alpha\mathbf{a}$ is defined in such a way that $|\mathbf{b}| = |\alpha|\,|\mathbf{a}|$; for $\alpha > 0$ the direction of $\mathbf{b}$ coincides with that of $\mathbf{a}$, and for $\alpha < 0$ the direction of $\mathbf{b}$ is opposite to that of $\mathbf{a}$. For $\alpha = 0$ the product yields the zero vector, i.e. $\mathbf{0} = 0\mathbf{a}$. It is easy to verify that
\[
\alpha(\mathbf{a} + \mathbf{b}) = \alpha\mathbf{a} + \alpha\mathbf{b}, \qquad (\alpha + \beta)\mathbf{a} = \alpha\mathbf{a} + \beta\mathbf{a}
\]

[Figure A.2 Addition of two vectors: a parallelogram rule, b triangle rule]

[Figure A.3 Scalar product of two vectors: a angles between two vectors, b unit vector and projection]

Scalar (Dot) Product of Two Vectors. For any pair of vectors $\mathbf{a}$ and $\mathbf{b}$ a scalar $\alpha$ is defined by
\[
\alpha = \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\varphi,
\]
where $\varphi$ is the angle between the vectors $\mathbf{a}$ and $\mathbf{b}$. As $\varphi$ one can use any of the two angles between the vectors, Fig. A.3a. The properties of the scalar product are
\[
\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} \;\text{(commutativity)}, \qquad
\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c} \;\text{(distributivity)}
\]
Two nonzero vectors are said to be orthogonal if their scalar product is zero. The unit vector directed along the vector $\mathbf{a}$ is defined by (see Fig. A.3b)
\[
\mathbf{n}_a = \frac{\mathbf{a}}{|\mathbf{a}|}
\]
The projection of the vector $\mathbf{b}$ onto the vector $\mathbf{a}$ is the vector $(\mathbf{b} \cdot \mathbf{n}_a)\mathbf{n}_a$, Fig. A.3b. The length of the projection is $|\mathbf{b}| \cos\varphi$.

Vector (Cross) Product of Two Vectors. For the ordered pair of vectors $\mathbf{a}$ and $\mathbf{b}$ the vector $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ is defined in the two following steps [334]:
- the spin vector $\mathbf{c}^\ast$ is defined in such a way that its axis is orthogonal to the plane spanned by $\mathbf{a}$ and $\mathbf{b}$, Fig. A.4a, the circular arrow shows the direction of the shortest rotation from $\mathbf{a}$ to $\mathbf{b}$, Fig. A.4b, and the length is $|\mathbf{a}|\,|\mathbf{b}| \sin\varphi$, where $\varphi$ is the angle of the shortest rotation from $\mathbf{a}$ to $\mathbf{b}$;
- from the resulting spin vector the directed line segment $\mathbf{c}$ is constructed according to one of the rules listed in Subsect. A.1.1.

The properties of the vector product are
\[
\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a}, \qquad
\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = \mathbf{a} \times \mathbf{b} + \mathbf{a} \times \mathbf{c}
\]
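The dot and cross products map directly onto standard numerical routines. The following sketch (it assumes NumPy and arbitrarily chosen numerical values, neither of which appears in the original text) checks the angle formula, the projection, and the basic properties of the cross product:

```python
import numpy as np

# Two arbitrary (polar) vectors for illustration
a = np.array([1.0, 2.0, -0.5])
b = np.array([0.3, -1.0, 2.0])

# Scalar (dot) product and the angle between the vectors
alpha = np.dot(a, b)
phi = np.arccos(alpha / (np.linalg.norm(a) * np.linalg.norm(b)))

# Unit vector along a and the projection of b onto a
n_a = a / np.linalg.norm(a)
proj_b_on_a = np.dot(b, n_a) * n_a

# Vector (cross) product and its basic properties
c = np.cross(a, b)
assert np.allclose(np.cross(b, a), -c)          # a x b = -(b x a)
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(phi))
assert np.isclose(np.dot(c, a), 0.0) and np.isclose(np.dot(c, b), 0.0)
```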

[Figure A.4 Vector product of two vectors: a plane spanned by two vectors, b spin vector, c axial vector in the right-screw oriented reference frame]

The type of the vector $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ can be established for known types of the vectors $\mathbf{a}$ and $\mathbf{b}$ [334]. If $\mathbf{a}$ and $\mathbf{b}$ are polar vectors, the result of the cross product is an axial vector. An example is the moment of momentum for a mass point $m$ defined by $\mathbf{r} \times (m\mathbf{v})$, where $\mathbf{r}$ is the position of the mass point and $\mathbf{v}$ is the velocity of the mass point. The next example is the formula for the distribution of velocities in a rigid body, $\mathbf{v} = \boldsymbol{\omega} \times \mathbf{r}$. Here the cross product of the axial vector $\boldsymbol{\omega}$ (angular velocity) with the polar vector $\mathbf{r}$ (position vector) results in the polar vector $\mathbf{v}$.

The mixed product of three vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ is defined by $(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}$. The result is a scalar. For the mixed product the following identities are valid
\[
\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}) \qquad (A.1.1)
\]
If the cross product is applied twice, the first operation must be set in parentheses, e.g. $\mathbf{a} \times (\mathbf{b} \times \mathbf{c})$. The result of this operation is a vector. The following relation can be applied
\[
\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}\,(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}\,(\mathbf{a} \cdot \mathbf{b}) \qquad (A.1.2)
\]
By use of (A.1.1) and (A.1.2) one can calculate
\[
(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d})
= \mathbf{a} \cdot [\mathbf{b} \times (\mathbf{c} \times \mathbf{d})]
= \mathbf{a} \cdot (\mathbf{c}\,\mathbf{b} \cdot \mathbf{d} - \mathbf{d}\,\mathbf{b} \cdot \mathbf{c})
= \mathbf{a} \cdot \mathbf{c}\;\mathbf{b} \cdot \mathbf{d} - \mathbf{a} \cdot \mathbf{d}\;\mathbf{b} \cdot \mathbf{c} \qquad (A.1.3)
\]

A.1.3 Bases

Any triple of linearly independent vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ is called a basis. A triple of vectors $\mathbf{e}_i$ is linearly independent if and only if $\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3) \neq 0$. For a given basis $\mathbf{e}_i$ any vector $\mathbf{a}$ can be represented as follows
\[
\mathbf{a} = a^1 \mathbf{e}_1 + a^2 \mathbf{e}_2 + a^3 \mathbf{e}_3 \equiv a^i \mathbf{e}_i
\]
The numbers $a^i$ are called the coordinates of the vector $\mathbf{a}$ for the basis $\mathbf{e}_i$. In order to compute the coordinates, the dual (reciprocal) basis $\mathbf{e}^k$ is introduced in such a way that
\[
\mathbf{e}^k \cdot \mathbf{e}_i = \delta^k_i = \begin{cases} 1, & k = i,\\ 0, & k \neq i, \end{cases}
\]
where $\delta^k_i$ is the Kronecker symbol. The coordinates $a^i$ can then be found by
\[
\mathbf{e}^i \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{e}^i = a^m \mathbf{e}_m \cdot \mathbf{e}^i = a^m \delta^i_m = a^i
\]
For the selected basis $\mathbf{e}_i$ the dual basis can be found from
\[
\mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{(\mathbf{e}_1 \times \mathbf{e}_2) \cdot \mathbf{e}_3}, \qquad
\mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{(\mathbf{e}_1 \times \mathbf{e}_2) \cdot \mathbf{e}_3}, \qquad
\mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{(\mathbf{e}_1 \times \mathbf{e}_2) \cdot \mathbf{e}_3} \qquad (A.1.4)
\]
By use of the dual basis a vector $\mathbf{a}$ can be represented as follows
\[
\mathbf{a} = a_1 \mathbf{e}^1 + a_2 \mathbf{e}^2 + a_3 \mathbf{e}^3 \equiv a_i \mathbf{e}^i, \qquad a_m = \mathbf{a} \cdot \mathbf{e}_m, \qquad a^m \neq a_m
\]
In the special case of orthonormal vectors $\mathbf{e}_i$, i.e. $|\mathbf{e}_i| = 1$ and $\mathbf{e}_i \cdot \mathbf{e}_k = 0$ for $i \neq k$, it follows from (A.1.4) that $\mathbf{e}^k = \mathbf{e}_k$ and consequently $a^k = a_k$.
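The dual basis construction (A.1.4) is easy to check numerically. A minimal sketch, assuming NumPy and an arbitrarily chosen non-orthogonal basis stored as matrix rows:

```python
import numpy as np

# An arbitrary (non-orthogonal) basis e_1, e_2, e_3, stored as rows
e = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 1.0]])

vol = np.dot(np.cross(e[0], e[1]), e[2])   # (e_1 x e_2) . e_3, nonzero for a basis

# Dual (reciprocal) basis according to (A.1.4)
e_dual = np.array([np.cross(e[1], e[2]),
                   np.cross(e[2], e[0]),
                   np.cross(e[0], e[1])]) / vol

# Check e^k . e_i = delta^k_i
assert np.allclose(e_dual @ e.T, np.eye(3))

# Coordinates of a vector a: a^i = a . e^i and a_i = a . e_i
a = np.array([0.7, -1.2, 2.5])
a_contra = e_dual @ a                     # a^i
a_co = e @ a                              # a_i
assert np.allclose(a_contra @ e, a)       # a = a^i e_i
assert np.allclose(a_co @ e_dual, a)      # a = a_i e^i
```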

A.1.4 Operations with Second Rank Tensors

A second rank tensor is a finite sum of ordered vector pairs $\mathbf{A} = \mathbf{a} \otimes \mathbf{b} + \ldots + \mathbf{c} \otimes \mathbf{d}$ [334]. One ordered pair of vectors is called a dyad. The symbol $\otimes$ denotes the dyadic (tensor) product of two vectors. A single dyad or a sum of two dyads are special cases of the second rank tensor. Any finite sum of more than three dyads can be reduced to a sum of three dyads. For example, let
\[
\mathbf{A} = \sum_{i=1}^{n} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)}
\]
be a second rank tensor. Introducing a basis $\mathbf{e}_k$, the vectors $\mathbf{a}^{(i)}$ can be represented by $\mathbf{a}^{(i)} = a^k_{(i)} \mathbf{e}_k$, where $a^k_{(i)}$ are the coordinates of the vectors $\mathbf{a}^{(i)}$. Now we may write
\[
\mathbf{A} = \sum_{i=1}^{n} a^k_{(i)}\, \mathbf{e}_k \otimes \mathbf{b}^{(i)}
= \mathbf{e}_k \otimes \sum_{i=1}^{n} a^k_{(i)} \mathbf{b}^{(i)}
= \mathbf{e}_k \otimes \mathbf{d}^k, \qquad
\mathbf{d}^k \equiv \sum_{i=1}^{n} a^k_{(i)} \mathbf{b}^{(i)}
\]

Addition. The sum of two tensors is defined as the sum of the corresponding dyads. The sum has the properties of associativity and commutativity. In addition, for a dyad $\mathbf{a} \otimes \mathbf{b}$ the following operations are introduced
\[
\mathbf{a} \otimes (\mathbf{b} + \mathbf{c}) = \mathbf{a} \otimes \mathbf{b} + \mathbf{a} \otimes \mathbf{c}, \qquad
(\mathbf{a} + \mathbf{b}) \otimes \mathbf{c} = \mathbf{a} \otimes \mathbf{c} + \mathbf{b} \otimes \mathbf{c}
\]

Multiplication by a Scalar. This operation is introduced first for one dyad. For any scalar $\alpha$ and any dyad $\mathbf{a} \otimes \mathbf{b}$
\[
\alpha(\mathbf{a} \otimes \mathbf{b}) = (\alpha\mathbf{a}) \otimes \mathbf{b} = \mathbf{a} \otimes (\alpha\mathbf{b}), \qquad
(\alpha + \beta)\,\mathbf{a} \otimes \mathbf{b} = \alpha\,\mathbf{a} \otimes \mathbf{b} + \beta\,\mathbf{a} \otimes \mathbf{b} \qquad (A.1.5)
\]
By setting $\alpha = 0$ in the first equation of (A.1.5) the zero dyad can be defined, i.e. $0(\mathbf{a} \otimes \mathbf{b}) = \mathbf{0} \otimes \mathbf{b} = \mathbf{a} \otimes \mathbf{0}$. The above operations can be generalized for any finite sum of dyads, i.e. for second rank tensors.
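The reduction of a sum of many dyads to three dyads can be illustrated numerically. The sketch below (a NumPy illustration with arbitrary random data, not part of the original text) represents dyads by outer products in an orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 7
a_list = rng.standard_normal((n, 3))
b_list = rng.standard_normal((n, 3))

# A second rank tensor given as a sum of n dyads a_(i) (x) b_(i)
A = sum(np.outer(a_list[i], b_list[i]) for i in range(n))

# Reduction to three dyads: A = e_k (x) d^k with d^k = sum_i a^k_(i) b_(i)
e = np.eye(3)                            # orthonormal basis e_k
d = np.array([sum(a_list[i, k] * b_list[i] for i in range(n)) for k in range(3)])
A_reduced = sum(np.outer(e[k], d[k]) for k in range(3))
assert np.allclose(A, A_reduced)
```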

Inner Dot Product. For any two second rank tensors $\mathbf{A}$ and $\mathbf{B}$ the inner dot product is specified by $\mathbf{A} \cdot \mathbf{B}$. The rule and the result of this operation can be explained in the special case of two dyads, i.e. by setting $\mathbf{A} = \mathbf{a} \otimes \mathbf{b}$ and $\mathbf{B} = \mathbf{c} \otimes \mathbf{d}$
\[
\mathbf{A} \cdot \mathbf{B} = \mathbf{a} \otimes \mathbf{b} \cdot \mathbf{c} \otimes \mathbf{d} = (\mathbf{b} \cdot \mathbf{c})\, \mathbf{a} \otimes \mathbf{d} = \alpha\, \mathbf{a} \otimes \mathbf{d}, \qquad \alpha \equiv \mathbf{b} \cdot \mathbf{c}
\]
The result of this operation is a second rank tensor. Note that in general $\mathbf{A} \cdot \mathbf{B} \neq \mathbf{B} \cdot \mathbf{A}$, as can again be verified for two dyads. The operation can be generalized for two second rank tensors as follows
\[
\mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \cdot \sum_{k=1}^{3} \mathbf{c}^{(k)} \otimes \mathbf{d}^{(k)}
= \sum_{i=1}^{3} \sum_{k=1}^{3} (\mathbf{b}^{(i)} \cdot \mathbf{c}^{(k)})\, \mathbf{a}^{(i)} \otimes \mathbf{d}^{(k)}
\]

Transpose of a Second Rank Tensor. The transpose of a second rank tensor $\mathbf{A}$ is constructed by the following rule
\[
\mathbf{A}^T = \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right)^T = \sum_{i=1}^{3} \mathbf{b}^{(i)} \otimes \mathbf{a}^{(i)}
\]

Double Inner Dot Product. For any two second rank tensors $\mathbf{A}$ and $\mathbf{B}$ the double inner dot product is specified by $\mathbf{A} \cdot\cdot\, \mathbf{B}$. The result of this operation is a scalar. This operation can be explained for two dyads as follows
\[
\mathbf{A} \cdot\cdot\, \mathbf{B} = \mathbf{a} \otimes \mathbf{b} \cdot\cdot\, \mathbf{c} \otimes \mathbf{d} = (\mathbf{b} \cdot \mathbf{c})(\mathbf{a} \cdot \mathbf{d})
\]
By analogy with the inner dot product one can generalize this operation for two second rank tensors. It can be verified that $\mathbf{A} \cdot\cdot\, \mathbf{B} = \mathbf{B} \cdot\cdot\, \mathbf{A}$ for arbitrary second rank tensors $\mathbf{A}$ and $\mathbf{B}$. For a second rank tensor $\mathbf{A}$ and a dyad $\mathbf{a} \otimes \mathbf{b}$
\[
\mathbf{A} \cdot\cdot\, \mathbf{a} \otimes \mathbf{b} = \mathbf{b} \cdot \mathbf{A} \cdot \mathbf{a} \qquad (A.1.6)
\]
A scalar product of two second rank tensors $\mathbf{A}$ and $\mathbf{B}$ is defined by
\[
\alpha = \mathbf{A} \cdot\cdot\, \mathbf{B}^T
\]
One can verify that
\[
\mathbf{A} \cdot\cdot\, \mathbf{B}^T = \mathbf{B}^T \cdot\cdot\, \mathbf{A} = \mathbf{B} \cdot\cdot\, \mathbf{A}^T
\]
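In an orthonormal basis a dyad corresponds to the outer product of the coordinate columns, the inner dot product to the matrix product, and the double inner dot product to the trace of the matrix product. A small NumPy sketch (the numerical values are arbitrary and introduced only for this illustration) verifies these correspondences:

```python
import numpy as np

a, b = np.array([1.0, 0.0, 2.0]), np.array([0.5, -1.0, 1.0])
c, d = np.array([2.0, 1.0, 1.0]), np.array([0.0, 3.0, -1.0])

# In an orthonormal basis a dyad a (x) b is represented by the outer product
A = np.outer(a, b)
B = np.outer(c, d)

# Inner dot product: (a(x)b).(c(x)d) = (b.c) a(x)d  ->  matrix multiplication
assert np.allclose(A @ B, np.dot(b, c) * np.outer(a, d))

# The inner dot product is in general not commutative
assert not np.allclose(A @ B, B @ A)

# Double inner dot product: A..B = (b.c)(a.d)  ->  trace of the matrix product
assert np.isclose(np.trace(A @ B), np.dot(b, c) * np.dot(a, d))

# Scalar product A..B^T: the sum of componentwise products
assert np.isclose(np.trace(A @ B.T), np.sum(A * B))
```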

Dot Products of a Second Rank Tensor and a Vector. The right dot product of a second rank tensor $\mathbf{A}$ and a vector $\mathbf{c}$ is defined by
\[
\mathbf{A} \cdot \mathbf{c} = \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right) \cdot \mathbf{c} = \sum_{i=1}^{3} (\mathbf{b}^{(i)} \cdot \mathbf{c})\, \mathbf{a}^{(i)}
\]
For a single dyad this operation is $\mathbf{a} \otimes \mathbf{b} \cdot \mathbf{c} = \mathbf{a}\,(\mathbf{b} \cdot \mathbf{c})$. The left dot product is defined by
\[
\mathbf{c} \cdot \mathbf{A} = \mathbf{c} \cdot \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right) = \sum_{i=1}^{3} (\mathbf{c} \cdot \mathbf{a}^{(i)})\, \mathbf{b}^{(i)}
\]
The results of these operations are vectors. One can verify that
\[
\mathbf{A} \cdot \mathbf{c} \neq \mathbf{c} \cdot \mathbf{A}, \qquad \mathbf{A} \cdot \mathbf{c} = \mathbf{c} \cdot \mathbf{A}^T
\]

Cross Products of a Second Rank Tensor and a Vector. The right cross product of a second rank tensor $\mathbf{A}$ and a vector $\mathbf{c}$ is defined by
\[
\mathbf{A} \times \mathbf{c} = \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right) \times \mathbf{c} = \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes (\mathbf{b}^{(i)} \times \mathbf{c})
\]
The left cross product is defined by
\[
\mathbf{c} \times \mathbf{A} = \mathbf{c} \times \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right) = \sum_{i=1}^{3} (\mathbf{c} \times \mathbf{a}^{(i)}) \otimes \mathbf{b}^{(i)}
\]
The results of these operations are second rank tensors. It can be shown that
\[
\mathbf{A} \times \mathbf{c} = -\left[\mathbf{c} \times \mathbf{A}^T\right]^T
\]

Trace. The trace of a second rank tensor is defined by
\[
\operatorname{tr} \mathbf{A} = \operatorname{tr} \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right) = \sum_{i=1}^{3} \mathbf{a}^{(i)} \cdot \mathbf{b}^{(i)}
\]
By taking the trace of a second rank tensor the dyadic product is replaced by the dot product. It can be shown that
\[
\operatorname{tr} \mathbf{A} = \operatorname{tr} \mathbf{A}^T, \qquad
\operatorname{tr} (\mathbf{A} \cdot \mathbf{B}) = \operatorname{tr} (\mathbf{B} \cdot \mathbf{A}) = \operatorname{tr} (\mathbf{A}^T \cdot \mathbf{B}^T) = \mathbf{A} \cdot\cdot\, \mathbf{B}
\]

Symmetric Tensors. A second rank tensor is said to be symmetric if it satisfies the equality
\[
\mathbf{A} = \mathbf{A}^T
\]
An alternative definition of the symmetric tensor can be given as follows: a second rank tensor is symmetric if for any vector $\mathbf{c} \neq \mathbf{0}$ the equality $\mathbf{c} \cdot \mathbf{A} = \mathbf{A} \cdot \mathbf{c}$ is valid. An important example of a symmetric tensor is the unit or identity tensor $\mathbf{I}$, which is defined in such a way that for any vector $\mathbf{c}$
\[
\mathbf{c} \cdot \mathbf{I} = \mathbf{I} \cdot \mathbf{c} = \mathbf{c}
\]

The representations of the identity tensor are
\[
\mathbf{I} = \mathbf{e}_k \otimes \mathbf{e}^k = \mathbf{e}^k \otimes \mathbf{e}_k
\]
for any basis $\mathbf{e}_k$ and its dual $\mathbf{e}^k$, $\mathbf{e}^k \cdot \mathbf{e}_m = \delta^k_m$. For three orthonormal vectors $\mathbf{m}$, $\mathbf{n}$ and $\mathbf{p}$
\[
\mathbf{I} = \mathbf{n} \otimes \mathbf{n} + \mathbf{m} \otimes \mathbf{m} + \mathbf{p} \otimes \mathbf{p}
\]
A symmetric second rank tensor $\mathbf{P}$ satisfying the condition $\mathbf{P} \cdot \mathbf{P} = \mathbf{P}$ is called a projector. Examples of projectors are
\[
\mathbf{m} \otimes \mathbf{m}, \qquad \mathbf{n} \otimes \mathbf{n} + \mathbf{p} \otimes \mathbf{p} = \mathbf{I} - \mathbf{m} \otimes \mathbf{m},
\]
where $\mathbf{m}$, $\mathbf{n}$ and $\mathbf{p}$ are orthonormal vectors. The result of the dot product of the tensor $\mathbf{m} \otimes \mathbf{m}$ with any vector $\mathbf{a}$ is the projection of the vector $\mathbf{a}$ onto the line spanned by the vector $\mathbf{m}$, i.e. $\mathbf{m} \otimes \mathbf{m} \cdot \mathbf{a} = (\mathbf{a} \cdot \mathbf{m})\mathbf{m}$. The result of $(\mathbf{n} \otimes \mathbf{n} + \mathbf{p} \otimes \mathbf{p}) \cdot \mathbf{a}$ is the projection of the vector $\mathbf{a}$ onto the plane spanned by the vectors $\mathbf{n}$ and $\mathbf{p}$.

Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric if it satisfies the equality
\[
\mathbf{A} = -\mathbf{A}^T
\]
or if for any vector $\mathbf{c}$
\[
\mathbf{c} \cdot \mathbf{A} = -\mathbf{A} \cdot \mathbf{c}
\]
Any skew-symmetric tensor $\mathbf{A}$ can be represented by
\[
\mathbf{A} = \mathbf{a} \times \mathbf{I} = \mathbf{I} \times \mathbf{a}
\]
The vector $\mathbf{a}$ is called the associated vector. Any second rank tensor can be uniquely decomposed into its symmetric and skew-symmetric parts
\[
\mathbf{A} = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) + \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right) = \mathbf{A}_1 + \mathbf{A}_2,
\qquad
\mathbf{A}_1 = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) = \mathbf{A}_1^T, \qquad
\mathbf{A}_2 = \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right) = -\mathbf{A}_2^T
\]

Vector Invariant. The vector invariant (or Gibbsian cross) of a second rank tensor $\mathbf{A}$ is defined by
\[
\mathbf{A}_\times = \left( \sum_{i=1}^{3} \mathbf{a}^{(i)} \otimes \mathbf{b}^{(i)} \right)_\times = \sum_{i=1}^{3} \mathbf{a}^{(i)} \times \mathbf{b}^{(i)}
\]
The result of this operation is a vector. The vector invariant of a symmetric tensor is the zero vector. The following identities can be verified
\[
(\mathbf{a} \times \mathbf{I})_\times = -2\mathbf{a}, \qquad
\mathbf{a} \times \mathbf{I} \times \mathbf{b} = \mathbf{b} \otimes \mathbf{a} - (\mathbf{a} \cdot \mathbf{b})\mathbf{I}
\]
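The associated vector of a skew-symmetric tensor and the vector invariant can be computed componentwise in an orthonormal basis. The helper functions below (cross_tensor and vector_invariant are names introduced here for illustration, assuming NumPy) verify the decomposition and the identity $(\mathbf{a}\times\mathbf{I})_\times = -2\mathbf{a}$:

```python
import numpy as np

def cross_tensor(a):
    """Skew-symmetric tensor a x I: (a x I) . b = a x b (orthonormal basis)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def vector_invariant(A):
    """Gibbsian cross A_x for a tensor given by its matrix in an orthonormal basis."""
    return np.array([A[1, 2] - A[2, 1], A[2, 0] - A[0, 2], A[0, 1] - A[1, 0]])

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))

# Unique decomposition into symmetric and skew-symmetric parts
B_sym, B_skw = 0.5 * (B + B.T), 0.5 * (B - B.T)
assert np.allclose(B, B_sym + B_skw)

# The vector invariant of the symmetric part vanishes
assert np.allclose(vector_invariant(B_sym), 0.0)

# The skew part equals a x I with the associated vector a
a = -0.5 * vector_invariant(B_skw)        # from (a x I)_x = -2a
assert np.allclose(B_skw, cross_tensor(a))
assert np.allclose(cross_tensor(a) @ np.array([1.0, 2.0, 3.0]),
                   np.cross(a, [1.0, 2.0, 3.0]))
```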

Linear Transformations of Vectors. A vector-valued function of a vector argument $\mathbf{f}(\mathbf{a})$ is said to be linear if
\[
\mathbf{f}(\alpha_1 \mathbf{a}_1 + \alpha_2 \mathbf{a}_2) = \alpha_1 \mathbf{f}(\mathbf{a}_1) + \alpha_2 \mathbf{f}(\mathbf{a}_2)
\]
for any two vectors $\mathbf{a}_1$ and $\mathbf{a}_2$ and any two scalars $\alpha_1$ and $\alpha_2$. It can be shown that any linear vector-valued function can be represented by $\mathbf{f}(\mathbf{a}) = \mathbf{A} \cdot \mathbf{a}$, where $\mathbf{A}$ is a second rank tensor. In many textbooks, e.g. [32, 293], a second rank tensor $\mathbf{A}$ is defined to be a linear transformation of the vector space into itself.

Determinant and Inverse of a Second Rank Tensor. Let $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ be arbitrary linearly independent vectors. The determinant of a second rank tensor $\mathbf{A}$ is defined by
\[
\det \mathbf{A} = \frac{(\mathbf{A} \cdot \mathbf{a}) \cdot \left[(\mathbf{A} \cdot \mathbf{b}) \times (\mathbf{A} \cdot \mathbf{c})\right]}{\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}
\]
The following identities can be verified
\[
\det(\mathbf{A}^T) = \det \mathbf{A}, \qquad \det(\mathbf{A} \cdot \mathbf{B}) = \det \mathbf{A}\, \det \mathbf{B}
\]
The inverse of a second rank tensor, $\mathbf{A}^{-1}$, is introduced as the solution of the equation
\[
\mathbf{A}^{-1} \cdot \mathbf{A} = \mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{I}
\]
$\mathbf{A}$ is invertible if and only if $\det \mathbf{A} \neq 0$. A tensor $\mathbf{A}$ with $\det \mathbf{A} = 0$ is called singular. Examples of singular tensors are projectors.

Cayley-Hamilton Theorem. Any second rank tensor satisfies the equation
\[
\mathbf{A}^3 - J_1(\mathbf{A})\mathbf{A}^2 + J_2(\mathbf{A})\mathbf{A} - J_3(\mathbf{A})\mathbf{I} = \mathbf{0}, \qquad (A.1.7)
\]
where $\mathbf{A}^2 = \mathbf{A} \cdot \mathbf{A}$, $\mathbf{A}^3 = \mathbf{A} \cdot \mathbf{A} \cdot \mathbf{A}$ and
\[
J_1(\mathbf{A}) = \operatorname{tr} \mathbf{A}, \qquad
J_2(\mathbf{A}) = \frac{1}{2}\left[(\operatorname{tr} \mathbf{A})^2 - \operatorname{tr} \mathbf{A}^2\right], \qquad
J_3(\mathbf{A}) = \det \mathbf{A} = \frac{1}{6}(\operatorname{tr} \mathbf{A})^3 - \frac{1}{2}\operatorname{tr} \mathbf{A}\, \operatorname{tr} \mathbf{A}^2 + \frac{1}{3}\operatorname{tr} \mathbf{A}^3 \qquad (A.1.8)
\]
The scalar-valued functions $J_i(\mathbf{A})$ are called the principal invariants of the tensor $\mathbf{A}$.
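The principal invariants (A.1.8) and the Cayley-Hamilton theorem (A.1.7) are easy to verify numerically for a randomly chosen tensor; a NumPy sketch (random values and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
I = np.eye(3)

# Principal invariants (A.1.8)
J1 = np.trace(A)
J2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
J3 = np.linalg.det(A)

# det A expressed through traces, as in (A.1.8)
J3_tr = (np.trace(A) ** 3 / 6.0
         - 0.5 * np.trace(A) * np.trace(A @ A)
         + np.trace(A @ A @ A) / 3.0)
assert np.isclose(J3, J3_tr)

# Cayley-Hamilton theorem (A.1.7): A^3 - J1 A^2 + J2 A - J3 I = 0
residual = A @ A @ A - J1 * (A @ A) + J2 * A - J3 * I
assert np.allclose(residual, 0.0)

# Definition of det A through three linearly independent vectors
a, b, c = I
lhs = np.dot(A @ a, np.cross(A @ b, A @ c)) / np.dot(a, np.cross(b, c))
assert np.isclose(lhs, J3)
```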

Coordinates of Second Rank Tensors. Let $\mathbf{e}_i$ be a basis and $\mathbf{e}^k$ the dual basis. Any two vectors $\mathbf{a}$ and $\mathbf{b}$ can be represented as follows
\[
\mathbf{a} = a^i \mathbf{e}_i = a_j \mathbf{e}^j, \qquad \mathbf{b} = b^l \mathbf{e}_l = b_m \mathbf{e}^m
\]
A dyad $\mathbf{a} \otimes \mathbf{b}$ has the following representations
\[
\mathbf{a} \otimes \mathbf{b}
= a^i b^j\, \mathbf{e}_i \otimes \mathbf{e}_j
= a_i b_j\, \mathbf{e}^i \otimes \mathbf{e}^j
= a^i b_j\, \mathbf{e}_i \otimes \mathbf{e}^j
= a_i b^j\, \mathbf{e}^i \otimes \mathbf{e}_j
\]
For the representation of a second rank tensor $\mathbf{A}$ one of the following four bases can be used
\[
\mathbf{e}_i \otimes \mathbf{e}_j, \qquad \mathbf{e}^i \otimes \mathbf{e}^j, \qquad \mathbf{e}_i \otimes \mathbf{e}^j, \qquad \mathbf{e}^i \otimes \mathbf{e}_j
\]
With these bases one can write
\[
\mathbf{A} = A^{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j = A_{ij}\, \mathbf{e}^i \otimes \mathbf{e}^j = A^{i}_{\ j}\, \mathbf{e}_i \otimes \mathbf{e}^j = A_{i}^{\ j}\, \mathbf{e}^i \otimes \mathbf{e}_j
\]
For a selected basis the coordinates of a second rank tensor can be computed as follows
\[
A^{ij} = \mathbf{e}^i \cdot \mathbf{A} \cdot \mathbf{e}^j, \qquad
A_{ij} = \mathbf{e}_i \cdot \mathbf{A} \cdot \mathbf{e}_j, \qquad
A^{i}_{\ j} = \mathbf{e}^i \cdot \mathbf{A} \cdot \mathbf{e}_j, \qquad
A_{i}^{\ j} = \mathbf{e}_i \cdot \mathbf{A} \cdot \mathbf{e}^j
\]

Principal Values and Directions of Symmetric Second Rank Tensors. Consider the dot product of a second rank tensor $\mathbf{A}$ and a unit vector $\mathbf{n}$. The resulting vector $\mathbf{a} = \mathbf{A} \cdot \mathbf{n}$ differs in general from $\mathbf{n}$ both in length and in direction. However, one can find those unit vectors $\mathbf{n}$ for which $\mathbf{A} \cdot \mathbf{n}$ is collinear with $\mathbf{n}$, i.e. only the length of $\mathbf{n}$ is changed. Such vectors can be found from the equation
\[
\mathbf{A} \cdot \mathbf{n} = \lambda\mathbf{n} \qquad \text{or} \qquad (\mathbf{A} - \lambda\mathbf{I}) \cdot \mathbf{n} = \mathbf{0} \qquad (A.1.9)
\]
The unit vector $\mathbf{n}$ is called a principal vector and the scalar $\lambda$ a principal value of the tensor $\mathbf{A}$. Let $\mathbf{A}$ be a symmetric tensor. In this case the principal values are real numbers and there exist at least three mutually orthogonal principal vectors. The principal values can be found as the roots of the characteristic polynomial
\[
\det(\mathbf{A} - \lambda\mathbf{I}) = -\lambda^3 + J_1(\mathbf{A})\lambda^2 - J_2(\mathbf{A})\lambda + J_3(\mathbf{A}) = 0
\]
The principal values are denoted by $\lambda_I$, $\lambda_{II}$, $\lambda_{III}$. For known principal values and principal directions the second rank tensor can be represented as follows (spectral representation)
\[
\mathbf{A} = \lambda_I\, \mathbf{n}_I \otimes \mathbf{n}_I + \lambda_{II}\, \mathbf{n}_{II} \otimes \mathbf{n}_{II} + \lambda_{III}\, \mathbf{n}_{III} \otimes \mathbf{n}_{III}
\]
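For a symmetric tensor the spectral representation can be obtained from a standard eigenvalue routine. A sketch with NumPy (numpy.linalg.eigh is chosen here because it is tailored to symmetric matrices; the random data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)                      # a symmetric second rank tensor

# Principal values and mutually orthogonal principal directions
lam, n = np.linalg.eigh(A)               # columns of n are unit principal vectors

# Spectral representation A = sum_i lambda_i n_i (x) n_i
A_spec = sum(lam[i] * np.outer(n[:, i], n[:, i]) for i in range(3))
assert np.allclose(A, A_spec)

# Each principal direction satisfies (A - lambda I) . n = 0
for i in range(3):
    assert np.allclose(A @ n[:, i], lam[i] * n[:, i])

# The matrix of principal vectors is orthogonal: Q^T . Q = I
assert np.allclose(n.T @ n, np.eye(3))
```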

Orthogonal Tensors. A second rank tensor $\mathbf{Q}$ is said to be orthogonal if it satisfies the equation $\mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{I}$. If $\mathbf{Q}$ operates on a vector its length remains unchanged: let $\mathbf{b} = \mathbf{Q} \cdot \mathbf{a}$, then
\[
|\mathbf{b}|^2 = \mathbf{b} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{a} = |\mathbf{a}|^2
\]
Furthermore, an orthogonal tensor does not change the scalar product of two arbitrary vectors. For two vectors $\mathbf{a}$ and $\mathbf{b}$ as well as $\mathbf{a}' = \mathbf{Q} \cdot \mathbf{a}$ and $\mathbf{b}' = \mathbf{Q} \cdot \mathbf{b}$ one can calculate
\[
\mathbf{a}' \cdot \mathbf{b}' = \mathbf{a} \cdot \mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{b}
\]
From the definition of the orthogonal tensor it follows that
\[
\mathbf{Q}^T = \mathbf{Q}^{-1}, \qquad
\mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{Q} \cdot \mathbf{Q}^T = \mathbf{I}, \qquad
\det(\mathbf{Q} \cdot \mathbf{Q}^T) = (\det \mathbf{Q})^2 = \det \mathbf{I} = 1
\;\Rightarrow\; \det \mathbf{Q} = \pm 1
\]
Orthogonal tensors with $\det \mathbf{Q} = 1$ are called proper orthogonal tensors or rotation tensors. The rotation tensors are widely used in rigid body dynamics, e.g. [333], and in the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either a rotation tensor or the composition of a rotation (proper orthogonal tensor) and the tensor $-\mathbf{I}$. Let $\mathbf{P}$ be a rotation tensor, $\det \mathbf{P} = 1$; then an orthogonal tensor $\mathbf{Q}$ with $\det \mathbf{Q} = -1$ can be composed by
\[
\mathbf{Q} = (-\mathbf{I}) \cdot \mathbf{P} = \mathbf{P} \cdot (-\mathbf{I}), \qquad
\det \mathbf{Q} = \det(-\mathbf{I})\, \det \mathbf{P} = -1
\]
For any two orthogonal tensors $\mathbf{Q}_1$ and $\mathbf{Q}_2$ the composition $\mathbf{Q}_3 = \mathbf{Q}_1 \cdot \mathbf{Q}_2$ is an orthogonal tensor, too. This property is used in the theory of symmetry and symmetry groups, e.g. [232, 331]. Two important examples of orthogonal tensors are:

- the rotation tensor about a fixed axis
\[
\mathbf{Q}(\psi\mathbf{m}) = \mathbf{m} \otimes \mathbf{m} + \cos\psi\,(\mathbf{I} - \mathbf{m} \otimes \mathbf{m}) + \sin\psi\, \mathbf{m} \times \mathbf{I}, \qquad
-\pi < \psi < \pi, \qquad \det \mathbf{Q} = 1,
\]
where the unit vector $\mathbf{m}$ represents the axis and $\psi$ is the angle of rotation;
- the reflection tensor
\[
\mathbf{Q} = \mathbf{I} - 2\mathbf{n} \otimes \mathbf{n}, \qquad \det \mathbf{Q} = -1,
\]
where the unit vector $\mathbf{n}$ represents the normal to the mirror plane.

One can prove the following identities [334]
\[
(\mathbf{Q} \cdot \mathbf{a}) \times (\mathbf{Q} \cdot \mathbf{b}) = \det \mathbf{Q}\; \mathbf{Q} \cdot (\mathbf{a} \times \mathbf{b}), \qquad
\mathbf{Q} \cdot (\mathbf{a} \times \mathbf{Q}^T) = \mathbf{Q} \cdot (\mathbf{a} \times \mathbf{I}) \cdot \mathbf{Q}^T = \det \mathbf{Q}\, \left[(\mathbf{Q} \cdot \mathbf{a}) \times \mathbf{I}\right] \qquad (A.1.10)
\]
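The rotation tensor about a fixed axis and the reflection tensor can be assembled directly from the formulas above. The sketch below (function names and numerical values are introduced here for illustration, assuming NumPy) also checks the first identity of (A.1.10) for both a rotation and a reflection:

```python
import numpy as np

def cross_tensor(a):
    """Matrix of a x I in an orthonormal basis."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def rotation(psi, m):
    """Rotation tensor Q(psi m) = m(x)m + cos(psi)(I - m(x)m) + sin(psi) m x I."""
    m = np.asarray(m, dtype=float)
    m = m / np.linalg.norm(m)
    P = np.outer(m, m)
    return P + np.cos(psi) * (np.eye(3) - P) + np.sin(psi) * cross_tensor(m)

m = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
Q = rotation(0.7, m)

assert np.allclose(Q.T @ Q, np.eye(3))           # orthogonality
assert np.isclose(np.linalg.det(Q), 1.0)         # proper orthogonal (rotation)
assert np.allclose(Q @ m, m)                     # the axis is unchanged

# Reflection tensor with respect to the plane with normal n
n = np.array([0.0, 0.0, 1.0])
R = np.eye(3) - 2.0 * np.outer(n, n)
assert np.isclose(np.linalg.det(R), -1.0)

# Identity (A.1.10): (Q.a) x (Q.b) = det(Q) Q.(a x b), also for reflections
a, b = np.array([1.0, -2.0, 0.5]), np.array([0.3, 0.4, -1.0])
for T in (Q, R):
    assert np.allclose(np.cross(T @ a, T @ b),
                       np.linalg.det(T) * (T @ np.cross(a, b)))
```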

A.2 Elements of Tensor Analysis

A.2.1 Coordinate Systems

The vector $\mathbf{r}$ characterizing the position of a point $P$ can be represented by use of the Cartesian coordinates $x^i$ as follows, Fig. A.5,
\[
\mathbf{r}(x^1, x^2, x^3) = x^1 \mathbf{e}_1 + x^2 \mathbf{e}_2 + x^3 \mathbf{e}_3 = x^i \mathbf{e}_i
\]
Instead of the coordinates $x^i$ one can introduce any triple of curvilinear coordinates $q^1, q^2, q^3$ by means of the one-to-one transformations
\[
x^k = x^k(q^1, q^2, q^3) \quad \Leftrightarrow \quad q^k = q^k(x^1, x^2, x^3)
\]
It is assumed that the above transformations are continuous and continuously differentiable as many times as necessary, and that the Jacobians do not vanish,
\[
\det\left(\frac{\partial x^k}{\partial q^i}\right) \neq 0, \qquad
\det\left(\frac{\partial q^i}{\partial x^k}\right) \neq 0
\]

[Figure A.5 Cartesian and curvilinear coordinates]

With these assumptions the position vector can be considered as a function of the curvilinear coordinates $q^i$, i.e. $\mathbf{r} = \mathbf{r}(q^1, q^2, q^3)$. The surfaces $q^1 = \text{const}$, $q^2 = \text{const}$ and $q^3 = \text{const}$, Fig. A.5, are called coordinate surfaces. For given fixed values $q^2 = q^2_\ast$ and $q^3 = q^3_\ast$ a curve can be obtained along which only $q^1$ varies. This curve is called the $q^1$-coordinate line, Fig. A.5. Analogously, one can obtain the $q^2$- and $q^3$-coordinate lines. The partial derivatives of the position vector with respect to the selected coordinates,
\[
\mathbf{r}_1 = \frac{\partial \mathbf{r}}{\partial q^1}, \qquad
\mathbf{r}_2 = \frac{\partial \mathbf{r}}{\partial q^2}, \qquad
\mathbf{r}_3 = \frac{\partial \mathbf{r}}{\partial q^3}, \qquad
\mathbf{r}_1 \cdot (\mathbf{r}_2 \times \mathbf{r}_3) \neq 0,
\]
define the tangent vectors to the coordinate lines at a point $P$, Fig. A.5. The vectors $\mathbf{r}_i$ are used as the local basis at the point $P$. By use of (A.1.4) the dual basis $\mathbf{r}^k$ can be introduced. The vector $d\mathbf{r}$ connecting the point $P$ with a point $P'$ in the differential neighborhood of $P$ is defined by
\[
d\mathbf{r} = \frac{\partial \mathbf{r}}{\partial q^1}\, dq^1 + \frac{\partial \mathbf{r}}{\partial q^2}\, dq^2 + \frac{\partial \mathbf{r}}{\partial q^3}\, dq^3 = \mathbf{r}_k\, dq^k
\]
The square of the arc length of the line element in the differential neighborhood of $P$ is calculated by
\[
ds^2 = d\mathbf{r} \cdot d\mathbf{r} = (\mathbf{r}_i\, dq^i) \cdot (\mathbf{r}_k\, dq^k) = g_{ik}\, dq^i dq^k,
\]
where $g_{ik} \equiv \mathbf{r}_i \cdot \mathbf{r}_k$ are the so-called covariant components of the metric tensor. With $g_{ik}$ one can represent the basis vectors $\mathbf{r}_i$ by the dual basis vectors $\mathbf{r}^k$ as follows
\[
\mathbf{r}_i = (\mathbf{r}_i \cdot \mathbf{r}_k)\, \mathbf{r}^k = g_{ik}\, \mathbf{r}^k
\]

Similarly,
\[
\mathbf{r}^i = (\mathbf{r}^i \cdot \mathbf{r}^k)\, \mathbf{r}_k = g^{ik}\, \mathbf{r}_k, \qquad g^{ik} \equiv \mathbf{r}^i \cdot \mathbf{r}^k,
\]
where $g^{ik}$ are termed the contravariant components of the metric tensor. For the selected bases $\mathbf{r}_i$ and $\mathbf{r}^k$ the second rank unit tensor has the following representations
\[
\mathbf{I} = \mathbf{r}_i \otimes \mathbf{r}^i = g^{ik}\, \mathbf{r}_i \otimes \mathbf{r}_k = g_{ik}\, \mathbf{r}^i \otimes \mathbf{r}^k = \mathbf{r}^i \otimes \mathbf{r}_i
\]

A.2.2 The Hamilton (Nabla) Operator

A scalar field is a function which assigns a scalar to each spatial point $P$ in the domain of definition. Let us consider a scalar field $\varphi(\mathbf{r}) = \varphi(q^1, q^2, q^3)$. The total differential of $\varphi$ by moving from a point $P$ to a point $P'$ in the differential neighborhood is
\[
d\varphi = \frac{\partial \varphi}{\partial q^1}\, dq^1 + \frac{\partial \varphi}{\partial q^2}\, dq^2 + \frac{\partial \varphi}{\partial q^3}\, dq^3 = \frac{\partial \varphi}{\partial q^k}\, dq^k
\]
Taking into account that $dq^k = d\mathbf{r} \cdot \mathbf{r}^k$,
\[
d\varphi = d\mathbf{r} \cdot \mathbf{r}^k \frac{\partial \varphi}{\partial q^k} = d\mathbf{r} \cdot \nabla\varphi
\]
The vector $\nabla\varphi$ is called the gradient of the scalar field $\varphi$, and the invariant operator $\nabla$ (the Hamilton or nabla operator) is defined by
\[
\nabla = \mathbf{r}^k \frac{\partial}{\partial q^k}
\]
For a vector field $\mathbf{a}(\mathbf{r})$ one may write
\[
d\mathbf{a} = (d\mathbf{r} \cdot \mathbf{r}^k)\, \frac{\partial \mathbf{a}}{\partial q^k}
= d\mathbf{r} \cdot \mathbf{r}^k \otimes \frac{\partial \mathbf{a}}{\partial q^k}
= d\mathbf{r} \cdot \nabla\mathbf{a} = (\nabla\mathbf{a})^T \cdot d\mathbf{r}, \qquad
\nabla\mathbf{a} = \mathbf{r}^k \otimes \frac{\partial \mathbf{a}}{\partial q^k}
\]
The gradient of a vector field is a second rank tensor. The operation $\nabla$ can be applied to tensors of any rank. For vectors and tensors the following additional operations are defined
\[
\operatorname{div} \mathbf{a} \equiv \nabla \cdot \mathbf{a} = \mathbf{r}^k \cdot \frac{\partial \mathbf{a}}{\partial q^k}, \qquad
\operatorname{rot} \mathbf{a} \equiv \nabla \times \mathbf{a} = \mathbf{r}^k \times \frac{\partial \mathbf{a}}{\partial q^k}
\]
The following identities can be verified
\[
\nabla\mathbf{r} = \mathbf{r}^k \otimes \frac{\partial \mathbf{r}}{\partial q^k} = \mathbf{r}^k \otimes \mathbf{r}_k = \mathbf{I}, \qquad
\nabla \cdot \mathbf{r} = 3
\]
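The tangent basis, the metric coefficients and the gradient can be illustrated for a concrete curvilinear system. The sketch below uses cylindrical coordinates as an assumed example (not taken from the original text) and approximates the partial derivatives by central differences, so the checks hold only up to the finite-difference tolerance:

```python
import numpy as np

# Cylindrical coordinates q = (r, phi, z) as an illustrative curvilinear system
def position(q):
    r, phi, z = q
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def tangent_basis(q, h=1e-6):
    """Tangent vectors r_i = d r / d q^i via central differences (rows)."""
    basis = []
    for i in range(3):
        dq = np.zeros(3); dq[i] = h
        basis.append((position(q + dq) - position(q - dq)) / (2.0 * h))
    return np.array(basis)

q = np.array([2.0, 0.6, -1.0])
r_i = tangent_basis(q)

# Covariant metric g_ik = r_i . r_k; for cylindrical coordinates diag(1, r^2, 1)
g = r_i @ r_i.T
assert np.allclose(g, np.diag([1.0, q[0] ** 2, 1.0]), atol=1e-6)

# Dual basis r^k (rows): r^i = g^{ik} r_k with g^{ik} the inverse of g_ik
r_dual = np.linalg.inv(g) @ r_i
assert np.allclose(r_dual @ r_i.T, np.eye(3), atol=1e-6)

# Gradient of a scalar field: grad(phi) = r^k d(phi)/d q^k
# test field phi = x^2 + y z, whose Cartesian gradient is (2x, z, y)
def phi_of_q(q):
    x, y, z = position(q)
    return x ** 2 + y * z

dphi_dq = np.array([(phi_of_q(q + dq) - phi_of_q(q - dq)) / (2e-6)
                    for dq in np.eye(3) * 1e-6])
grad = dphi_dq @ r_dual
x, y, z = position(q)
assert np.allclose(grad, [2.0 * x, z, y], atol=1e-4)
```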

For a scalar $\alpha$, a vector $\mathbf{a}$ and a second rank tensor $\mathbf{A}$ the following identities are valid
\[
\nabla(\alpha\mathbf{a}) = \mathbf{r}^k \otimes \frac{\partial (\alpha\mathbf{a})}{\partial q^k}
= \left(\mathbf{r}^k \frac{\partial \alpha}{\partial q^k}\right) \otimes \mathbf{a} + \alpha\, \mathbf{r}^k \otimes \frac{\partial \mathbf{a}}{\partial q^k}
= (\nabla\alpha) \otimes \mathbf{a} + \alpha\, \nabla\mathbf{a}, \qquad (A.2.1)
\]
\[
\nabla \cdot (\mathbf{A} \cdot \mathbf{a})
= \mathbf{r}^k \cdot \frac{\partial (\mathbf{A} \cdot \mathbf{a})}{\partial q^k}
= \mathbf{r}^k \cdot \frac{\partial \mathbf{A}}{\partial q^k} \cdot \mathbf{a} + \mathbf{r}^k \cdot \mathbf{A} \cdot \frac{\partial \mathbf{a}}{\partial q^k}
= (\nabla \cdot \mathbf{A}) \cdot \mathbf{a} + \mathbf{A} \cdot\cdot \left(\frac{\partial \mathbf{a}}{\partial q^k} \otimes \mathbf{r}^k\right)
= (\nabla \cdot \mathbf{A}) \cdot \mathbf{a} + \mathbf{A} \cdot\cdot\, (\nabla\mathbf{a})^T \qquad (A.2.2)
\]
Here the identity (A.1.6) is used. For a second rank tensor $\mathbf{A}$ and the position vector $\mathbf{r}$ one can prove the following identity
\[
\nabla \cdot (\mathbf{A} \times \mathbf{r})
= \mathbf{r}^k \cdot \frac{\partial (\mathbf{A} \times \mathbf{r})}{\partial q^k}
= \mathbf{r}^k \cdot \frac{\partial \mathbf{A}}{\partial q^k} \times \mathbf{r} + \mathbf{r}^k \cdot \mathbf{A} \times \frac{\partial \mathbf{r}}{\partial q^k}
= (\nabla \cdot \mathbf{A}) \times \mathbf{r} + \mathbf{r}^k \cdot \mathbf{A} \times \mathbf{r}_k
= (\nabla \cdot \mathbf{A}) \times \mathbf{r} - \mathbf{A}_\times \qquad (A.2.3)
\]
Here we used the definition of the vector invariant as follows
\[
\mathbf{A}_\times = \left(\mathbf{r}^k \otimes \mathbf{r}_k \cdot \mathbf{A}\right)_\times = \mathbf{r}^k \times (\mathbf{r}_k \cdot \mathbf{A}) = -\,\mathbf{r}^k \cdot \mathbf{A} \times \mathbf{r}_k
\]

A.2.3 Integral Theorems

Let $\varphi(\mathbf{r})$, $\mathbf{a}(\mathbf{r})$ and $\mathbf{A}(\mathbf{r})$ be scalar, vector and second rank tensor fields. Let $V$ be the volume of a bounded domain with a regular surface $A(V)$ and $\mathbf{n}$ be the outer unit normal to the surface at $\mathbf{r}$. The integral theorems can be summarized as follows:

Gradient Theorems
\[
\int_V \nabla\varphi\, dV = \int_{A(V)} \mathbf{n}\,\varphi\, dA, \qquad
\int_V \nabla\mathbf{a}\, dV = \int_{A(V)} \mathbf{n} \otimes \mathbf{a}\, dA, \qquad
\int_V \nabla\mathbf{A}\, dV = \int_{A(V)} \mathbf{n} \otimes \mathbf{A}\, dA
\]

Divergence Theorems
\[
\int_V \nabla \cdot \mathbf{a}\, dV = \int_{A(V)} \mathbf{n} \cdot \mathbf{a}\, dA, \qquad
\int_V \nabla \cdot \mathbf{A}\, dV = \int_{A(V)} \mathbf{n} \cdot \mathbf{A}\, dA
\]

Curl Theorems
\[
\int_V \nabla \times \mathbf{a}\, dV = \int_{A(V)} \mathbf{n} \times \mathbf{a}\, dA, \qquad
\int_V \nabla \times \mathbf{A}\, dV = \int_{A(V)} \mathbf{n} \times \mathbf{A}\, dA
\]

A.2.4 Scalar-Valued Functions of Vectors and Second Rank Tensors

Let $\psi$ be a scalar-valued function of a vector $\mathbf{a}$ and a second rank tensor $\mathbf{A}$, i.e. $\psi = \psi(\mathbf{a}, \mathbf{A})$. Introducing a basis $\mathbf{e}_i$, the function $\psi$ can be represented as follows
\[
\psi(\mathbf{a}, \mathbf{A}) = \psi(a^i \mathbf{e}_i, A^{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j) = \psi(a^i, A^{ij})
\]
The partial derivatives of $\psi$ with respect to $\mathbf{a}$ and $\mathbf{A}$ are defined according to the following rule
\[
d\psi = \frac{\partial \psi}{\partial a^i}\, da^i + \frac{\partial \psi}{\partial A^{ij}}\, dA^{ij}
= d\mathbf{a} \cdot \mathbf{e}^i \frac{\partial \psi}{\partial a^i}
+ d\mathbf{A} \cdot\cdot\, \mathbf{e}^j \otimes \mathbf{e}^i \frac{\partial \psi}{\partial A^{ij}} \qquad (A.2.4)
\]
In the coordinate-free form the above rule can be rewritten as follows
\[
d\psi = d\mathbf{a} \cdot \frac{\partial \psi}{\partial \mathbf{a}} + d\mathbf{A} \cdot\cdot \left(\frac{\partial \psi}{\partial \mathbf{A}}\right)^T
= d\mathbf{a} \cdot \psi_{,\mathbf{a}} + d\mathbf{A} \cdot\cdot\, (\psi_{,\mathbf{A}})^T \qquad (A.2.5)
\]
with
\[
\psi_{,\mathbf{a}} \equiv \frac{\partial \psi}{\partial \mathbf{a}} = \frac{\partial \psi}{\partial a^i}\, \mathbf{e}^i, \qquad
\psi_{,\mathbf{A}} \equiv \frac{\partial \psi}{\partial \mathbf{A}} = \frac{\partial \psi}{\partial A^{ij}}\, \mathbf{e}^i \otimes \mathbf{e}^j
\]
One can verify that $\psi_{,\mathbf{a}}$ and $\psi_{,\mathbf{A}}$ are independent of the choice of the basis. One may prove the following formulae for the derivatives of the principal invariants of a second rank tensor $\mathbf{A}$
\[
J_1(\mathbf{A})_{,\mathbf{A}} = \mathbf{I}, \qquad
J_1(\mathbf{A}^2)_{,\mathbf{A}} = 2\mathbf{A}^T, \qquad
J_1(\mathbf{A}^3)_{,\mathbf{A}} = 3\mathbf{A}^{2T},
\]
\[
J_2(\mathbf{A})_{,\mathbf{A}} = J_1(\mathbf{A})\mathbf{I} - \mathbf{A}^T, \qquad
J_3(\mathbf{A})_{,\mathbf{A}} = \mathbf{A}^{2T} - J_1(\mathbf{A})\mathbf{A}^T + J_2(\mathbf{A})\mathbf{I} = J_3(\mathbf{A})\,(\mathbf{A}^T)^{-1} \qquad (A.2.6)
\]
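The derivative formulas (A.2.6) can be checked against componentwise finite differences. In the sketch below (NumPy, arbitrary random data) the derivative is represented by the matrix with entries $\partial\psi/\partial A_{ij}$ in an orthonormal basis, which corresponds to $\psi_{,\mathbf{A}}$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
I = np.eye(3)

def J(A):
    """Principal invariants J1, J2, J3 of (A.1.8)."""
    J1 = np.trace(A)
    J2 = 0.5 * (J1 ** 2 - np.trace(A @ A))
    return np.array([J1, J2, np.linalg.det(A)])

def num_derivative(f, A, h=1e-6):
    """Matrix with entries df/dA_ij (central differences, orthonormal basis)."""
    D = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3)); E[i, j] = h
            D[i, j] = (f(A + E) - f(A - E)) / (2.0 * h)
    return D

# Analytical formulas (A.2.6) versus finite differences
J1, J2, J3 = J(A)
assert np.allclose(num_derivative(lambda X: J(X)[0], A), I, atol=1e-5)
assert np.allclose(num_derivative(lambda X: J(X)[1], A), J1 * I - A.T, atol=1e-5)
assert np.allclose(num_derivative(lambda X: J(X)[2], A),
                   (A @ A).T - J1 * A.T + J2 * I, atol=1e-5)
# Equivalent closed form J3 (A^T)^-1
assert np.allclose((A @ A).T - J1 * A.T + J2 * I, J3 * np.linalg.inv(A).T)
```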

A.3 Orthogonal Transformations and Orthogonal Invariants

An application of the theory of tensor functions is to find a basic set of scalar invariants for a given group of symmetry transformations, such that each invariant relative to the same group is expressible as a single-valued function of the basic set. The basic set of invariants is called the functional basis. To obtain a compact representation for invariants, it is required that the functional basis be irreducible, in the sense that removing any one invariant from the basis makes a complete representation of all the invariants no longer possible.

Such a problem arises in the formulation of constitutive equations for a given group of material symmetries. For example, the strain energy density of an elastic non-polar material is a scalar-valued function of the second rank symmetric strain tensor. In the theory of the Cosserat continuum two strain measures are introduced, where the first strain measure is a polar tensor while the second one is an axial tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find a minimum set of functionally independent invariants for the considered tensorial arguments. For the theory of tensor functions we refer to [71]. Representations of tensor functions are reviewed in [280, 330].

An orthogonal transformation of a scalar $\alpha$, a vector $\mathbf{a}$ and a second rank tensor $\mathbf{A}$ is defined by [25, 332]
\[
\alpha' \equiv (\det \mathbf{Q})^\zeta \alpha, \qquad
\mathbf{a}' \equiv (\det \mathbf{Q})^\zeta\, \mathbf{Q} \cdot \mathbf{a}, \qquad
\mathbf{A}' \equiv (\det \mathbf{Q})^\zeta\, \mathbf{Q} \cdot \mathbf{A} \cdot \mathbf{Q}^T, \qquad (A.3.1)
\]
where $\mathbf{Q}$ is an orthogonal tensor, i.e. $\mathbf{Q} \cdot \mathbf{Q}^T = \mathbf{I}$, $\det \mathbf{Q} = \pm 1$, $\mathbf{I}$ is the second rank unit tensor, $\zeta = 0$ for absolute (polar) scalars, vectors and tensors and $\zeta = 1$ for axial ones. An example of an axial scalar is the mixed product of three polar vectors, i.e. $\alpha = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$. A typical example of an axial vector is the cross product of two polar vectors, i.e. $\mathbf{c} = \mathbf{a} \times \mathbf{b}$. An example of a second rank axial tensor is the skew-symmetric tensor $\mathbf{W} = \mathbf{a} \times \mathbf{I}$, where $\mathbf{a}$ is a polar vector.

Consider a group of orthogonal transformations $S$ (e.g. the material symmetry transformations) characterized by a set of orthogonal tensors $\mathbf{Q}$. A scalar-valued function of a second rank tensor $f = f(\mathbf{A})$ is called an orthogonal invariant under the group $S$ if
\[
\forall\, \mathbf{Q} \in S: \quad f(\mathbf{A}') = (\det \mathbf{Q})^\eta f(\mathbf{A}), \qquad (A.3.2)
\]
where $\eta = 0$ if the values of $f$ are absolute scalars and $\eta = 1$ if the values of $f$ are axial scalars.

Any second rank tensor $\mathbf{B}$ can be decomposed into a symmetric and a skew-symmetric part, i.e. $\mathbf{B} = \mathbf{A} + \mathbf{a} \times \mathbf{I}$, where $\mathbf{A}$ is the symmetric part and $\mathbf{a}$ is the associated vector. Therefore $f(\mathbf{B}) = f(\mathbf{A}, \mathbf{a})$. If $\mathbf{B}$ is a polar (axial) tensor, then $\mathbf{a}$ is an axial (polar) vector. For a set of second rank tensors and vectors the definition of an orthogonal invariant (A.3.2) can be generalized as follows
\[
\forall\, \mathbf{Q} \in S: \quad
f(\mathbf{A}'_1, \mathbf{A}'_2, \ldots, \mathbf{A}'_n, \mathbf{a}'_1, \mathbf{a}'_2, \ldots, \mathbf{a}'_k)
= (\det \mathbf{Q})^\eta f(\mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_n, \mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_k), \qquad
\mathbf{A}_i = \mathbf{A}_i^T \qquad (A.3.3)
\]

A.3.1 Invariants for the Full Orthogonal Group

In [335] orthogonal invariants for different sets of second rank tensors and vectors with respect to the full orthogonal group are presented. It is shown that orthogonal invariants are integrals of a generic partial differential equation (basic equations for invariants). Let us present the two following examples:

- Orthogonal invariants of a symmetric second rank tensor $\mathbf{A}$ are
\[
I_k = \operatorname{tr} \mathbf{A}^k, \qquad k = 1, 2, 3
\]
Instead of $I_k$ it is possible to use the principal invariants $J_k$ defined by (A.1.8).
- Orthogonal invariants of a symmetric second rank tensor $\mathbf{A}$ and a vector $\mathbf{a}$ are
\[
I_k = \operatorname{tr} \mathbf{A}^k, \quad k = 1, 2, 3, \qquad
I_4 = \mathbf{a} \cdot \mathbf{a}, \quad
I_5 = \mathbf{a} \cdot \mathbf{A} \cdot \mathbf{a}, \quad
I_6 = \mathbf{a} \cdot \mathbf{A}^2 \cdot \mathbf{a}, \quad
I_7 = \mathbf{a} \cdot \mathbf{A}^2 \cdot (\mathbf{a} \times \mathbf{A} \cdot \mathbf{a}) \qquad (A.3.4)
\]
In the above set of invariants only 6 are functionally independent. The relation between the invariants (the so-called syzygy, [71]) can be formulated as follows
\[
I_7^2 = \det
\begin{bmatrix}
I_4 & I_5 & I_6\\
I_5 & I_6 & \mathbf{a} \cdot \mathbf{A}^3 \cdot \mathbf{a}\\
I_6 & \mathbf{a} \cdot \mathbf{A}^3 \cdot \mathbf{a} & \mathbf{a} \cdot \mathbf{A}^4 \cdot \mathbf{a}
\end{bmatrix}, \qquad (A.3.5)
\]
where $\mathbf{a} \cdot \mathbf{A}^3 \cdot \mathbf{a}$ and $\mathbf{a} \cdot \mathbf{A}^4 \cdot \mathbf{a}$ can be expressed by $I_l$, $l = 1, \ldots, 6$, applying the Cayley-Hamilton theorem (A.1.7). The set of invariants for a symmetric second rank tensor $\mathbf{A}$ and a vector $\mathbf{a}$ can be applied to a non-symmetric second rank tensor $\mathbf{B}$, since it can be represented by $\mathbf{B} = \mathbf{A} + \mathbf{a} \times \mathbf{I}$, $\mathbf{A} = \mathbf{A}^T$.
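The invariant set (A.3.4) and the syzygy (A.3.5) can be verified numerically. The sketch below (NumPy, with arbitrary random data and a proper orthogonal tensor built from a QR factorization, all of which are assumptions of this illustration) checks the invariance under a rotation and evaluates the Gram determinant of the triple $\mathbf{a}$, $\mathbf{A}\cdot\mathbf{a}$, $\mathbf{A}^2\cdot\mathbf{a}$:

```python
import numpy as np

def invariants(A, a):
    """The set (A.3.4) for a symmetric tensor A and a vector a."""
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A),
                     a @ a, a @ A @ a, a @ A2 @ a,
                     a @ A2 @ np.cross(a, A @ a)])

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)
a = rng.standard_normal(3)

# Invariance under an arbitrary proper orthogonal tensor
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                        # enforce det Q = +1
assert np.allclose(invariants(Q @ A @ Q.T, Q @ a), invariants(A, a))

# Syzygy (A.3.5): I7^2 equals the Gram determinant of {a, A.a, A^2.a}
I1, I2, I3, I4, I5, I6, I7 = invariants(A, a)
aA3a = a @ A @ A @ A @ a
aA4a = a @ A @ A @ A @ A @ a
G = np.array([[I4, I5, I6],
              [I5, I6, aA3a],
              [I6, aA3a, aA4a]])
assert np.isclose(I7 ** 2, np.linalg.det(G))
```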

A.3.2 Invariants for the Transverse Isotropy Group

Transverse isotropy is an important type of symmetry transformation due to a variety of applications. Transverse isotropy is usually assumed in constitutive modeling of fiber reinforced materials, e.g. [21], fiber suspensions, e.g. [22], directionally solidified alloys, e.g. [213], deep drawing sheets, e.g. [50, 57] and piezoelectric materials, e.g. [285]. The invariants and generating sets for tensor-valued functions with respect to different cases of transverse isotropy are discussed in [79, 328] (see also relevant references therein).

In what follows we analyze the problem of a functional basis within the theory of linear first order partial differential equations rather than the algebra of polynomials. We develop the idea proposed in [335] for the invariants with respect to the full orthogonal group to the case of transverse isotropy. The invariants will be found as integrals of the generic partial differential equations. Although a functional basis formed by these invariants does not include any redundant element, functional relations between them may exist. It may therefore be useful to find simple forms of such relations. We show that the proposed approach may supply results in a direct, natural manner.

Invariants for a Single Second Rank Symmetric Tensor. Consider the proper orthogonal tensor which represents a rotation about a fixed axis, i.e.
\[
\mathbf{Q}(\varphi\mathbf{m}) = \mathbf{m} \otimes \mathbf{m} + \cos\varphi\,(\mathbf{I} - \mathbf{m} \otimes \mathbf{m}) + \sin\varphi\, \mathbf{m} \times \mathbf{I}, \qquad \det \mathbf{Q}(\varphi\mathbf{m}) = 1, \qquad (A.3.6)
\]
where $\mathbf{m}$ is assumed to be a constant unit vector (axis of rotation) and $\varphi$ denotes the angle of rotation about $\mathbf{m}$. The symmetry transformation defined by this tensor corresponds to transverse isotropy, whereby five different cases are possible, e.g. [299, 331]. Let us find the scalar-valued functions of a second rank symmetric tensor $\mathbf{A}$ satisfying the condition
\[
f(\mathbf{A}'(\varphi)) = f(\mathbf{Q}(\varphi\mathbf{m}) \cdot \mathbf{A} \cdot \mathbf{Q}^T(\varphi\mathbf{m})) = f(\mathbf{A}), \qquad
\mathbf{A}'(\varphi) \equiv \mathbf{Q}(\varphi\mathbf{m}) \cdot \mathbf{A} \cdot \mathbf{Q}^T(\varphi\mathbf{m}) \qquad (A.3.7)
\]
Equation (A.3.7) must be valid for any angle of rotation $\varphi$. In (A.3.7) only the left-hand side depends on $\varphi$. Therefore its derivative with respect to $\varphi$ can be set to zero, i.e.
\[
\frac{df}{d\varphi} = \frac{d\mathbf{A}'}{d\varphi} \cdot\cdot \left(\frac{\partial f}{\partial \mathbf{A}'}\right)^T = 0 \qquad (A.3.8)
\]
The derivative of $\mathbf{A}'$ with respect to $\varphi$ can be calculated by the following rules
\[
d\mathbf{A}'(\varphi) = d\mathbf{Q}(\varphi\mathbf{m}) \cdot \mathbf{A} \cdot \mathbf{Q}^T(\varphi\mathbf{m}) + \mathbf{Q}(\varphi\mathbf{m}) \cdot \mathbf{A} \cdot d\mathbf{Q}^T(\varphi\mathbf{m}),
\]
\[
d\mathbf{Q}(\varphi\mathbf{m}) = \mathbf{m} \times \mathbf{Q}(\varphi\mathbf{m})\, d\varphi, \qquad
d\mathbf{Q}^T(\varphi\mathbf{m}) = -\mathbf{Q}^T(\varphi\mathbf{m}) \times \mathbf{m}\, d\varphi \qquad (A.3.9)
\]
By inserting the above equations into (A.3.8) we obtain
\[
\left(\frac{\partial f}{\partial \mathbf{A}}\right)^T \cdot\cdot\; (\mathbf{m} \times \mathbf{A} - \mathbf{A} \times \mathbf{m}) = 0 \qquad (A.3.10)
\]
Equation (A.3.10) is classified in [92] as a linear homogeneous first order partial differential equation. The characteristic system of (A.3.10) is
\[
\frac{d\mathbf{A}}{ds} = \mathbf{m} \times \mathbf{A} - \mathbf{A} \times \mathbf{m} \qquad (A.3.11)
\]
Any system of $n$ linear ordinary differential equations has not more than $n - 1$ functionally independent integrals [92]. By introducing a basis $\mathbf{e}_i$ the tensor $\mathbf{A}$ can be written in the form $\mathbf{A} = A^{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j$, and (A.3.11) is a system of six ordinary differential equations with respect to the coordinates $A^{ij}$. The five integrals of (A.3.11) may be written down as follows
\[
g_i(\mathbf{A}) = c_i, \qquad i = 1, 2, \ldots, 5,
\]
where $c_i$ are integration constants. Any function of the five integrals $g_i$ is a solution of the partial differential equation (A.3.10). Therefore the five integrals $g_i$ represent the invariants of the symmetric tensor $\mathbf{A}$ with respect to the symmetry transformation (A.3.6). The solutions of (A.3.11) are

\[
\mathbf{A}^k(s) = \mathbf{Q}(s\mathbf{m}) \cdot \mathbf{A}_0^k \cdot \mathbf{Q}^T(s\mathbf{m}), \qquad k = 1, 2, 3, \qquad (A.3.12)
\]
where $\mathbf{A}_0$ is the initial condition. In order to find the integrals, the variable $s$ must be eliminated from (A.3.12). Taking into account the following identities
\[
\operatorname{tr}(\mathbf{Q} \cdot \mathbf{A}^k \cdot \mathbf{Q}^T) = \operatorname{tr}(\mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{A}^k) = \operatorname{tr} \mathbf{A}^k, \qquad
\mathbf{m} \cdot \mathbf{Q}(s\mathbf{m}) = \mathbf{m}, \qquad
(\mathbf{Q} \cdot \mathbf{a}) \times (\mathbf{Q} \cdot \mathbf{b}) = (\det \mathbf{Q})\, \mathbf{Q} \cdot (\mathbf{a} \times \mathbf{b}) \qquad (A.3.13)
\]
and using the notation $\mathbf{Q}_m \equiv \mathbf{Q}(s\mathbf{m})$, the integrals can be found as follows
\[
\operatorname{tr}(\mathbf{A}^k) = \operatorname{tr}(\mathbf{A}_0^k), \quad k = 1, 2, 3, \qquad
\mathbf{m} \cdot \mathbf{A}^l \cdot \mathbf{m} = \mathbf{m} \cdot \mathbf{Q}_m \cdot \mathbf{A}_0^l \cdot \mathbf{Q}_m^T \cdot \mathbf{m} = \mathbf{m} \cdot \mathbf{A}_0^l \cdot \mathbf{m}, \quad l = 1, 2,
\]
\[
\mathbf{m} \cdot \mathbf{A}^2 \cdot (\mathbf{m} \times \mathbf{A} \cdot \mathbf{m})
= \mathbf{m} \cdot \mathbf{A}_0^2 \cdot \mathbf{Q}_m^T \cdot \left[(\mathbf{Q}_m \cdot \mathbf{m}) \times (\mathbf{Q}_m \cdot \mathbf{A}_0 \cdot \mathbf{m})\right]
= \mathbf{m} \cdot \mathbf{A}_0^2 \cdot \mathbf{Q}_m^T \cdot \mathbf{Q}_m \cdot (\mathbf{m} \times \mathbf{A}_0 \cdot \mathbf{m})
= \mathbf{m} \cdot \mathbf{A}_0^2 \cdot (\mathbf{m} \times \mathbf{A}_0 \cdot \mathbf{m}) \qquad (A.3.14)
\]
As a result we can formulate the six invariants of the tensor $\mathbf{A}$ with respect to the symmetry transformation (A.3.6) as follows
\[
I_k = \operatorname{tr}(\mathbf{A}^k), \quad k = 1, 2, 3, \qquad
I_4 = \mathbf{m} \cdot \mathbf{A} \cdot \mathbf{m}, \quad
I_5 = \mathbf{m} \cdot \mathbf{A}^2 \cdot \mathbf{m}, \quad
I_6 = \mathbf{m} \cdot \mathbf{A}^2 \cdot (\mathbf{m} \times \mathbf{A} \cdot \mathbf{m}) \qquad (A.3.15)
\]
The invariants with respect to various symmetry transformations are discussed in [79]. For the case of transverse isotropy six invariants are derived in [79] by the use of another approach. In this sense our result coincides with the result given in [79]. However, from our derivation it follows that only five of the invariants listed in (A.3.15) are functionally independent. Taking into account that $I_6$ is the mixed product of the vectors $\mathbf{m}$, $\mathbf{A} \cdot \mathbf{m}$ and $\mathbf{A}^2 \cdot \mathbf{m}$, the relation between the invariants can be written down as follows
\[
I_6^2 = \det
\begin{bmatrix}
1 & I_4 & I_5\\
I_4 & I_5 & \mathbf{m} \cdot \mathbf{A}^3 \cdot \mathbf{m}\\
I_5 & \mathbf{m} \cdot \mathbf{A}^3 \cdot \mathbf{m} & \mathbf{m} \cdot \mathbf{A}^4 \cdot \mathbf{m}
\end{bmatrix} \qquad (A.3.16)
\]
One can verify that $\mathbf{m} \cdot \mathbf{A}^3 \cdot \mathbf{m}$ and $\mathbf{m} \cdot \mathbf{A}^4 \cdot \mathbf{m}$ are transversely isotropic invariants, too. However, applying the Cayley-Hamilton theorem (A.1.7) they can be uniquely expressed by $I_1, I_2, \ldots, I_5$ in the following way [54]
\[
\mathbf{m} \cdot \mathbf{A}^3 \cdot \mathbf{m} = J_1 I_5 - J_2 I_4 + J_3, \qquad
\mathbf{m} \cdot \mathbf{A}^4 \cdot \mathbf{m} = (J_1^2 - J_2) I_5 - (J_1 J_2 - J_3) I_4 + J_1 J_3,
\]
where $J_1$, $J_2$ and $J_3$ are the principal invariants of $\mathbf{A}$ defined by (A.1.8).
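The six invariants (A.3.15), their insensitivity to rotations about $\mathbf{m}$, the relation (A.3.16), and the sign change of $I_6$ under the 180-degree rotation discussed below can all be checked numerically; a NumPy sketch with arbitrary random data and helper names introduced only for this illustration:

```python
import numpy as np

def rotation(phi, m):
    """Rotation tensor (A.3.6) about the unit axis m."""
    P = np.outer(m, m)
    W = np.array([[0.0, -m[2], m[1]],
                  [m[2], 0.0, -m[0]],
                  [-m[1], m[0], 0.0]])          # m x I
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * W

def ti_invariants(A, m):
    """Transversely isotropic invariants (A.3.15)."""
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A),
                     m @ A @ m, m @ A2 @ m,
                     m @ A2 @ np.cross(m, A @ m)])

rng = np.random.default_rng(5)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)
m = np.array([0.0, 0.0, 1.0])

# Invariance under a rotation about m
Q = rotation(1.234, m)
assert np.allclose(ti_invariants(Q @ A @ Q.T, m), ti_invariants(A, m))

# Relation (A.3.16) between the six invariants
I1, I2, I3, I4, I5, I6 = ti_invariants(A, m)
mA3m = m @ A @ A @ A @ m
mA4m = m @ A @ A @ A @ A @ m
G = np.array([[1.0, I4, I5],
              [I4, I5, mA3m],
              [I5, mA3m, mA4m]])
assert np.isclose(I6 ** 2, np.linalg.det(G))

# The sign of I6 changes under a 180-degree rotation about n orthogonal to m
n = np.array([1.0, 0.0, 0.0])
Qn = 2.0 * np.outer(n, n) - np.eye(3)
assert np.isclose(ti_invariants(Qn @ A @ Qn.T, m)[5], -I6)
```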

Let us note that the invariant $I_6$ cannot be dropped. In order to verify this, it is enough to consider two different tensors $\mathbf{A}$ and $\mathbf{B} = \mathbf{Q}_n \cdot \mathbf{A} \cdot \mathbf{Q}_n^T$, where
\[
\mathbf{Q}_n \equiv \mathbf{Q}(\pi\mathbf{n}) = 2\mathbf{n} \otimes \mathbf{n} - \mathbf{I}, \qquad
\mathbf{n} \cdot \mathbf{n} = 1, \qquad \mathbf{n} \cdot \mathbf{m} = 0, \qquad \det \mathbf{Q}_n = 1
\]
One can prove that the tensors $\mathbf{A}$ and $\mathbf{B}$ have the same invariants $I_1, I_2, \ldots, I_5$. Taking into account that $\mathbf{m} \cdot \mathbf{Q}_n = -\mathbf{m}$ and applying the last identity in (A.3.13) we may write
\[
I_6(\mathbf{B}) = \mathbf{m} \cdot \mathbf{B}^2 \cdot (\mathbf{m} \times \mathbf{B} \cdot \mathbf{m})
= -\mathbf{m} \cdot \mathbf{A}^2 \cdot \mathbf{Q}_n^T \cdot (\mathbf{m} \times \mathbf{Q}_n \cdot \mathbf{A} \cdot \mathbf{m})
= -\mathbf{m} \cdot \mathbf{A}^2 \cdot (\mathbf{m} \times \mathbf{A} \cdot \mathbf{m}) = -I_6(\mathbf{A})
\]
We observe that the only difference between the two considered tensors is the sign of $I_6$. Therefore the triples of vectors $\mathbf{m}$, $\mathbf{A} \cdot \mathbf{m}$, $\mathbf{A}^2 \cdot \mathbf{m}$ and $\mathbf{m}$, $\mathbf{B} \cdot \mathbf{m}$, $\mathbf{B}^2 \cdot \mathbf{m}$ have different orientations and cannot be combined by a rotation. It should be noted that the functional relation (A.3.16) in no way implies that the invariant $I_6$ is dependent and hence redundant, i.e. that it should be removed from the basis (A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of $I_6$.

To describe yielding and failure of oriented solids a dyad $\mathbf{M} = \mathbf{v} \otimes \mathbf{v}$ has been used in [53, 75], where the vector $\mathbf{v}$ specifies a privileged direction. A plastic potential is assumed to be an isotropic function of the symmetric Cauchy stress tensor and the tensor generator $\mathbf{M}$. Applying the representation of isotropic functions, an integrity basis including ten invariants was found. In the special case $\mathbf{v} = \mathbf{m}$ the number of invariants reduces to the five invariants $I_1, I_2, \ldots, I_5$ defined by (A.3.15). Further details of this approach and applications in continuum mechanics are given in [59, 71]. However, the problem statement to find an integrity basis of a symmetric tensor $\mathbf{A}$ and a dyad $\mathbf{M}$, i.e. to find scalar-valued functions $f(\mathbf{A}, \mathbf{M})$ satisfying the condition
\[
f(\mathbf{Q} \cdot \mathbf{A} \cdot \mathbf{Q}^T, \mathbf{Q} \cdot \mathbf{M} \cdot \mathbf{Q}^T) = (\det \mathbf{Q})^\eta f(\mathbf{A}, \mathbf{M}), \qquad
\forall\, \mathbf{Q}, \quad \mathbf{Q} \cdot \mathbf{Q}^T = \mathbf{I}, \quad \det \mathbf{Q} = \pm 1, \qquad (A.3.17)
\]
essentially differs from the problem statement (A.3.7). In order to show this we take into account that the symmetry group of a dyad $\mathbf{M}$, i.e. the set of orthogonal solutions of the equation $\mathbf{Q} \cdot \mathbf{M} \cdot \mathbf{Q}^T = \mathbf{M}$, includes the following elements
\[
\mathbf{Q}_{1,2} = \pm\mathbf{I}, \qquad
\mathbf{Q}_3 = \mathbf{Q}(\varphi\mathbf{m}), \quad \mathbf{m} = \frac{\mathbf{v}}{|\mathbf{v}|}, \qquad
\mathbf{Q}_4 = \mathbf{Q}(\pi\mathbf{n}) = 2\mathbf{n} \otimes \mathbf{n} - \mathbf{I}, \quad \mathbf{n} \cdot \mathbf{n} = 1, \quad \mathbf{n} \cdot \mathbf{v} = 0, \qquad (A.3.18)
\]
where $\mathbf{Q}(\varphi\mathbf{m})$ is defined by (A.3.6). The solutions of the problem (A.3.17) are automatically solutions of the following problem
\[
f(\mathbf{Q}_i \cdot \mathbf{A} \cdot \mathbf{Q}_i^T, \mathbf{M}) = (\det \mathbf{Q}_i)^\eta f(\mathbf{A}, \mathbf{M}), \qquad i = 1, 2, 3, 4,
\]

i.e. the problem of finding the invariants of $\mathbf{A}$ relative to the symmetry group (A.3.18). However, (A.3.18) includes many more symmetry elements compared with the problem statement (A.3.7).

An alternative set of transversely isotropic invariants can be formulated by use of the following decomposition
\[
\mathbf{A} = \alpha\, \mathbf{m} \otimes \mathbf{m} + \beta\,(\mathbf{I} - \mathbf{m} \otimes \mathbf{m}) + \mathbf{A}_{pd} + \mathbf{t} \otimes \mathbf{m} + \mathbf{m} \otimes \mathbf{t}, \qquad (A.3.19)
\]
where $\alpha$, $\beta$, $\mathbf{A}_{pd}$ and $\mathbf{t}$ are projections of $\mathbf{A}$. With the projectors $\mathbf{P}_1 = \mathbf{m} \otimes \mathbf{m}$ and $\mathbf{P}_2 = \mathbf{I} - \mathbf{m} \otimes \mathbf{m}$ we may write
\[
\alpha = \mathbf{m} \cdot \mathbf{A} \cdot \mathbf{m} = \operatorname{tr}(\mathbf{A} \cdot \mathbf{P}_1), \qquad
\beta = \frac{1}{2}\left(\operatorname{tr} \mathbf{A} - \mathbf{m} \cdot \mathbf{A} \cdot \mathbf{m}\right) = \frac{1}{2}\operatorname{tr}(\mathbf{A} \cdot \mathbf{P}_2),
\]
\[
\mathbf{A}_{pd} = \mathbf{P}_2 \cdot \mathbf{A} \cdot \mathbf{P}_2 - \beta\mathbf{P}_2, \qquad
\mathbf{t} = \mathbf{m} \cdot \mathbf{A} \cdot \mathbf{P}_2 \qquad (A.3.20)
\]
The decomposition (A.3.19) is the analogue of the following representation of a vector $\mathbf{a}$
\[
\mathbf{a} = \mathbf{I} \cdot \mathbf{a} = \mathbf{m} \otimes \mathbf{m} \cdot \mathbf{a} + (\mathbf{I} - \mathbf{m} \otimes \mathbf{m}) \cdot \mathbf{a} = \psi\mathbf{m} + \boldsymbol{\tau}, \qquad
\psi = \mathbf{a} \cdot \mathbf{m}, \qquad \boldsymbol{\tau} = \mathbf{P}_2 \cdot \mathbf{a} \qquad (A.3.21)
\]
Decompositions of the type (A.3.19) are applied in [68, 79]. The projections introduced in (A.3.20) have the following properties
\[
\operatorname{tr}(\mathbf{A}_{pd}) = 0, \qquad
\mathbf{A}_{pd} \cdot \mathbf{m} = \mathbf{m} \cdot \mathbf{A}_{pd} = \mathbf{0}, \qquad
\mathbf{t} \cdot \mathbf{m} = 0 \qquad (A.3.22)
\]
With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed into the following system of equations
\[
\frac{d\alpha}{ds} = 0, \qquad
\frac{d\beta}{ds} = 0, \qquad
\frac{d\mathbf{A}_{pd}}{ds} = \mathbf{m} \times \mathbf{A}_{pd} - \mathbf{A}_{pd} \times \mathbf{m}, \qquad
\frac{d\mathbf{t}}{ds} = \mathbf{m} \times \mathbf{t} \qquad (A.3.23)
\]
From the first two equations we observe that $\alpha$ and $\beta$ are transversely isotropic invariants. The third equation can be transformed into one scalar and one vector equation as follows
\[
\frac{d\mathbf{A}_{pd}}{ds} \cdot\cdot\, \mathbf{A}_{pd} = 0 \;\Rightarrow\; \frac{d(\mathbf{A}_{pd} \cdot\cdot\, \mathbf{A}_{pd})}{ds} = 0, \qquad
\frac{d\mathbf{b}}{ds} = \mathbf{m} \times \mathbf{b}
\]
with $\mathbf{b} \equiv \mathbf{A}_{pd} \cdot \mathbf{t}$. We observe that $\operatorname{tr}(\mathbf{A}_{pd}^2) = \mathbf{A}_{pd} \cdot\cdot\, \mathbf{A}_{pd}$ is a transversely isotropic invariant, too. Finally, we have to find the integrals of the following system

\[
\frac{d\mathbf{t}}{ds} = \mathbf{m} \times \mathbf{t}, \qquad
\frac{d\mathbf{b}}{ds} = \mathbf{m} \times \mathbf{b} \qquad (A.3.24)
\]
The solutions of (A.3.24) are
\[
\mathbf{t}(s) = \mathbf{Q}(s\mathbf{m}) \cdot \mathbf{t}_0, \qquad
\mathbf{b}(s) = \mathbf{Q}(s\mathbf{m}) \cdot \mathbf{b}_0,
\]
where $\mathbf{t}_0$ and $\mathbf{b}_0$ are the initial conditions. The vectors $\mathbf{t}$ and $\mathbf{b}$ belong to the plane of isotropy, i.e. $\mathbf{t} \cdot \mathbf{m} = 0$ and $\mathbf{b} \cdot \mathbf{m} = 0$. Therefore, one can verify the following integrals
\[
\mathbf{t} \cdot \mathbf{t} = \mathbf{t}_0 \cdot \mathbf{t}_0, \qquad
\mathbf{b} \cdot \mathbf{b} = \mathbf{b}_0 \cdot \mathbf{b}_0, \qquad
\mathbf{t} \cdot \mathbf{b} = \mathbf{t}_0 \cdot \mathbf{b}_0, \qquad
(\mathbf{t} \times \mathbf{b}) \cdot \mathbf{m} = (\mathbf{t}_0 \times \mathbf{b}_0) \cdot \mathbf{m} \qquad (A.3.25)
\]
We have found seven integrals, but only five of them are functionally independent. In order to formulate the relation between the integrals we compute
\[
\mathbf{b} \cdot \mathbf{b} = \mathbf{t} \cdot \mathbf{A}_{pd}^2 \cdot \mathbf{t}, \qquad
\mathbf{t} \cdot \mathbf{b} = \mathbf{t} \cdot \mathbf{A}_{pd} \cdot \mathbf{t}
\]
For any plane tensor $\mathbf{A}_p$ satisfying the equations $\mathbf{A}_p \cdot \mathbf{m} = \mathbf{m} \cdot \mathbf{A}_p = \mathbf{0}$ the Cayley-Hamilton theorem can be formulated as follows, see e.g. [71],
\[
\mathbf{A}_p^2 - (\operatorname{tr} \mathbf{A}_p)\mathbf{A}_p + \frac{1}{2}\left[(\operatorname{tr} \mathbf{A}_p)^2 - \operatorname{tr}(\mathbf{A}_p^2)\right](\mathbf{I} - \mathbf{m} \otimes \mathbf{m}) = \mathbf{0}
\]
Since $\operatorname{tr} \mathbf{A}_{pd} = 0$ we have
\[
2\mathbf{A}_{pd}^2 = \operatorname{tr}(\mathbf{A}_{pd}^2)\,(\mathbf{I} - \mathbf{m} \otimes \mathbf{m}), \qquad
\mathbf{t} \cdot \mathbf{A}_{pd}^2 \cdot \mathbf{t} = \frac{1}{2}\operatorname{tr}(\mathbf{A}_{pd}^2)\,(\mathbf{t} \cdot \mathbf{t})
\]
Because $\operatorname{tr}(\mathbf{A}_{pd}^2)$ and $\mathbf{t} \cdot \mathbf{t}$ are already defined, the invariant $\mathbf{b} \cdot \mathbf{b}$ can be omitted. The vector $\mathbf{t} \times \mathbf{b}$ is spanned on the axis $\mathbf{m}$. Therefore
\[
\mathbf{t} \times \mathbf{b} = \gamma\mathbf{m}, \qquad
\gamma = (\mathbf{t} \times \mathbf{b}) \cdot \mathbf{m}, \qquad
\gamma^2 = (\mathbf{t} \times \mathbf{b}) \cdot (\mathbf{t} \times \mathbf{b}) = (\mathbf{t} \cdot \mathbf{t})(\mathbf{b} \cdot \mathbf{b}) - (\mathbf{t} \cdot \mathbf{b})^2
\]
Now we can summarize six invariants and one relation between them as follows
\[
\bar{I}_1 = \alpha, \qquad
\bar{I}_2 = \beta, \qquad
\bar{I}_3 = \frac{1}{2}\operatorname{tr}(\mathbf{A}_{pd}^2), \qquad
\bar{I}_4 = \mathbf{t} \cdot \mathbf{t} = \mathbf{t} \cdot \mathbf{A} \cdot \mathbf{m}, \qquad
\bar{I}_5 = \mathbf{t} \cdot \mathbf{A}_{pd} \cdot \mathbf{t}, \qquad
\bar{I}_6 = (\mathbf{t} \times \mathbf{A}_{pd} \cdot \mathbf{t}) \cdot \mathbf{m},
\qquad
\bar{I}_6^2 = \bar{I}_4^2 \bar{I}_3 - \bar{I}_5^2 \qquad (A.3.26)
\]
Let us assume that the symmetry transformation $\mathbf{Q}_n \equiv \mathbf{Q}(\pi\mathbf{n})$ belongs to the symmetry group of the transverse isotropy, as was done in [71, 59]. In this case
\[
f(\mathbf{A}') = f(\mathbf{Q}_n \cdot \mathbf{A} \cdot \mathbf{Q}_n^T) = f(\mathbf{A})
\]
must be valid. With $\mathbf{Q}_n \cdot \mathbf{m} = -\mathbf{m}$ we can write
\[
\alpha' = \alpha, \qquad \beta' = \beta, \qquad
\mathbf{A}'_{pd} = \mathbf{Q}_n \cdot \mathbf{A}_{pd} \cdot \mathbf{Q}_n^T, \qquad
\mathbf{t}' = -\mathbf{Q}_n \cdot \mathbf{t}
\]

Therefore in (A.3.26) $\bar{I}'_k = \bar{I}_k$, $k = 1, 2, \ldots, 5$, and
\[
\bar{I}'_6 = (\mathbf{t}' \times \mathbf{A}'_{pd} \cdot \mathbf{t}') \cdot \mathbf{m}
= \left((\mathbf{Q}_n \cdot \mathbf{t}) \times \mathbf{Q}_n \cdot \mathbf{A}_{pd} \cdot \mathbf{t}\right) \cdot \mathbf{m}
= (\mathbf{t} \times \mathbf{A}_{pd} \cdot \mathbf{t}) \cdot \mathbf{Q}_n \cdot \mathbf{m}
= -(\mathbf{t} \times \mathbf{A}_{pd} \cdot \mathbf{t}) \cdot \mathbf{m} = -\bar{I}_6
\]
Consequently,
\[
f(\mathbf{A}') = f(\bar{I}_1, \bar{I}_2, \ldots, \bar{I}_5, -\bar{I}_6) = f(\bar{I}_1, \bar{I}_2, \ldots, \bar{I}_5, \bar{I}_6)
\;\Rightarrow\;
f(\mathbf{A}) = f(\bar{I}_1, \bar{I}_2, \ldots, \bar{I}_5, \bar{I}_6^2),
\]
and $\bar{I}_6^2$ can be omitted due to the last relation in (A.3.26).

Invariants for a Set of Vectors and Second Rank Tensors. Setting $\mathbf{Q} = \mathbf{Q}(\varphi\mathbf{m})$ in (A.3.3) and taking the derivative of (A.3.3) with respect to $\varphi$ results in the following generic partial differential equation
\[
\sum_{i=1}^{n} \left(\frac{\partial f}{\partial \mathbf{A}_i}\right)^T \cdot\cdot\; (\mathbf{m} \times \mathbf{A}_i - \mathbf{A}_i \times \mathbf{m})
+ \sum_{j=1}^{k} \frac{\partial f}{\partial \mathbf{a}_j} \cdot (\mathbf{m} \times \mathbf{a}_j) = 0 \qquad (A.3.27)
\]
The characteristic system of (A.3.27) is
\[
\frac{d\mathbf{A}_i}{ds} = \mathbf{m} \times \mathbf{A}_i - \mathbf{A}_i \times \mathbf{m}, \quad i = 1, 2, \ldots, n, \qquad
\frac{d\mathbf{a}_j}{ds} = \mathbf{m} \times \mathbf{a}_j, \quad j = 1, 2, \ldots, k \qquad (A.3.28)
\]
The above system is a system of $N$ ordinary differential equations, where $N = 6n + 3k$ is the total number of coordinates of $\mathbf{A}_i$ and $\mathbf{a}_j$ for a selected basis. The system (A.3.28) has not more than $N - 1$ functionally independent integrals. Therefore we can formulate:

Theorem A.3.1. A set of $n$ symmetric second rank tensors and $k$ vectors with $N = 6n + 3k$ independent coordinates for a given basis has not more than $N - 1$ functionally independent invariants for $N > 1$ and one invariant for $N = 1$ with respect to the symmetry transformation $\mathbf{Q}(\varphi\mathbf{m})$.

In essence, the proof of this theorem is given within the theory of linear first order partial differential equations [92]. As an example let us consider the set of a symmetric second rank tensor $\mathbf{A}$ and a vector $\mathbf{a}$. This set has eight independent invariants. For a visual perception it is useful to keep in mind that the considered set is equivalent to
\[
\mathbf{A}, \qquad \mathbf{a}, \qquad \mathbf{A} \cdot \mathbf{a}, \qquad \mathbf{A}^2 \cdot \mathbf{a}
\]
Therefore it is necessary to find the list of invariants whose fixation determines this set as a rigid whole. The generic equation (A.3.27) takes the form
\[
\left(\frac{\partial f}{\partial \mathbf{A}}\right)^T \cdot\cdot\; (\mathbf{m} \times \mathbf{A} - \mathbf{A} \times \mathbf{m})
+ \frac{\partial f}{\partial \mathbf{a}} \cdot (\mathbf{m} \times \mathbf{a}) = 0 \qquad (A.3.29)
\]

The characteristic system of (A.3.29) is
\[
\frac{d\mathbf{A}}{ds} = \mathbf{m} \times \mathbf{A} - \mathbf{A} \times \mathbf{m}, \qquad
\frac{d\mathbf{a}}{ds} = \mathbf{m} \times \mathbf{a} \qquad (A.3.30)
\]
This system of ninth order has eight independent integrals. Six of them are invariants of $\mathbf{A}$ and $\mathbf{a}$ with respect to the full orthogonal group; they fix the considered set as a rigid whole. These orthogonal invariants are defined by Eqs (A.3.4) and (A.3.5). Let us note that the invariant $I_7$ in (A.3.4) cannot be ignored. To verify this it is enough to consider two different sets
\[
\mathbf{A}, \; \mathbf{a} \qquad \text{and} \qquad \mathbf{B} = \mathbf{Q}_p \cdot \mathbf{A} \cdot \mathbf{Q}_p^T, \; \mathbf{a},
\]
where $\mathbf{Q}_p = \mathbf{I} - 2\mathbf{p} \otimes \mathbf{p}$, $\mathbf{p} \cdot \mathbf{p} = 1$, $\mathbf{p} \cdot \mathbf{a} = 0$. One can prove that the invariants $I_1, I_2, \ldots, I_6$ are the same for these two sets. The only difference is the invariant $I_7$, i.e.
\[
\mathbf{a} \cdot \mathbf{B}^2 \cdot (\mathbf{a} \times \mathbf{B} \cdot \mathbf{a}) = -\mathbf{a} \cdot \mathbf{A}^2 \cdot (\mathbf{a} \times \mathbf{A} \cdot \mathbf{a})
\]
Therefore the triples of vectors $\mathbf{a}$, $\mathbf{A} \cdot \mathbf{a}$, $\mathbf{A}^2 \cdot \mathbf{a}$ and $\mathbf{a}$, $\mathbf{B} \cdot \mathbf{a}$, $\mathbf{B}^2 \cdot \mathbf{a}$ have different orientations and cannot be combined by a rotation. In order to fix the considered set with respect to the unit vector $\mathbf{m}$ it is enough to fix the next two invariants
\[
I_8 = \mathbf{m} \cdot \mathbf{A} \cdot \mathbf{m}, \qquad I_9 = \mathbf{m} \cdot \mathbf{a} \qquad (A.3.31)
\]
The eight independent transversely isotropic invariants are given by (A.3.4), (A.3.5) and (A.3.31).

A.3.3 Invariants for the Orthotropic Symmetry Group

Consider the orthogonal tensors $\mathbf{Q}_1 = \mathbf{I} - 2\mathbf{n}_1 \otimes \mathbf{n}_1$ and $\mathbf{Q}_2 = \mathbf{I} - 2\mathbf{n}_2 \otimes \mathbf{n}_2$, $\det \mathbf{Q}_1 = \det \mathbf{Q}_2 = -1$. These tensors represent mirror reflections, whereby the unit orthogonal vectors $\pm\mathbf{n}_1$ and $\pm\mathbf{n}_2$ are the normal directions to the mirror planes. The above tensors are the symmetry elements of the orthotropic symmetry group. The invariants must be found from
\[
f(\mathbf{Q}_1 \cdot \mathbf{A} \cdot \mathbf{Q}_1^T) = f(\mathbf{Q}_2 \cdot \mathbf{A} \cdot \mathbf{Q}_2^T) = f(\mathbf{A})
\]
Consequently,
\[
f(\mathbf{Q}_1 \cdot \mathbf{Q}_2 \cdot \mathbf{A} \cdot \mathbf{Q}_2^T \cdot \mathbf{Q}_1^T) = f(\mathbf{Q}_1 \cdot \mathbf{A} \cdot \mathbf{Q}_1^T) = f(\mathbf{Q}_2 \cdot \mathbf{A} \cdot \mathbf{Q}_2^T) = f(\mathbf{A}),
\]
and the tensor $\mathbf{Q}_3 = \mathbf{Q}_1 \cdot \mathbf{Q}_2 = 2\mathbf{n}_3 \otimes \mathbf{n}_3 - \mathbf{I}$ belongs to the symmetry group, where the unit vector $\mathbf{n}_3$ is orthogonal to $\mathbf{n}_1$ and $\mathbf{n}_2$. Taking into account that $\mathbf{Q}_i \cdot \mathbf{n}_i = -\mathbf{n}_i$ (no summation convention) and $\mathbf{Q}_i \cdot \mathbf{n}_j = \mathbf{n}_j$ for $i \neq j$, and using the notation $\mathbf{A}'_i = \mathbf{Q}_i \cdot \mathbf{A} \cdot \mathbf{Q}_i^T$, we can write
\[
\operatorname{tr}\left(\mathbf{A}'^k_i\right) = \operatorname{tr}(\mathbf{A}^k), \quad k = 1, 2, 3, \quad i = 1, 2, 3,
\]
\[
\mathbf{n}_i \cdot \mathbf{A}'_i \cdot \mathbf{n}_i = \mathbf{n}_i \cdot \mathbf{Q}_i \cdot \mathbf{A} \cdot \mathbf{Q}_i^T \cdot \mathbf{n}_i = \mathbf{n}_i \cdot \mathbf{A} \cdot \mathbf{n}_i, \quad i = 1, 2, 3,
\]
\[
\mathbf{n}_i \cdot \mathbf{A}'^2_i \cdot \mathbf{n}_i = \mathbf{n}_i \cdot \mathbf{Q}_i \cdot \mathbf{A}^2 \cdot \mathbf{Q}_i^T \cdot \mathbf{n}_i = \mathbf{n}_i \cdot \mathbf{A}^2 \cdot \mathbf{n}_i, \quad i = 1, 2, 3 \qquad (A.3.32)
\]

The above set includes nine scalars. The number of independent scalars is 7 due to the obvious relations
\[
\operatorname{tr}(\mathbf{A}^k) = \mathbf{n}_1 \cdot \mathbf{A}^k \cdot \mathbf{n}_1 + \mathbf{n}_2 \cdot \mathbf{A}^k \cdot \mathbf{n}_2 + \mathbf{n}_3 \cdot \mathbf{A}^k \cdot \mathbf{n}_3, \qquad k = 1, 2
\]


More information

1 Symmetries of regular polyhedra

1 Symmetries of regular polyhedra 1230, notes 5 1 Symmetries of regular polyhedra Symmetry groups Recall: Group axioms: Suppose that (G, ) is a group and a, b, c are elements of G. Then (i) a b G (ii) (a b) c = a (b c) (iii) There is an

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Vector has a magnitude and a direction. Scalar has a magnitude

Vector has a magnitude and a direction. Scalar has a magnitude Vector has a magnitude and a direction Scalar has a magnitude Vector has a magnitude and a direction Scalar has a magnitude a brick on a table Vector has a magnitude and a direction Scalar has a magnitude

More information

Differential Relations for Fluid Flow. Acceleration field of a fluid. The differential equation of mass conservation

Differential Relations for Fluid Flow. Acceleration field of a fluid. The differential equation of mass conservation Differential Relations for Fluid Flow In this approach, we apply our four basic conservation laws to an infinitesimally small control volume. The differential approach provides point by point details of

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i. Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

More information

Review A: Vector Analysis

Review A: Vector Analysis MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Physics 8.02 Review A: Vector Analysis A... A-0 A.1 Vectors A-2 A.1.1 Introduction A-2 A.1.2 Properties of a Vector A-2 A.1.3 Application of Vectors

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

AP Physics - Vector Algrebra Tutorial

AP Physics - Vector Algrebra Tutorial AP Physics - Vector Algrebra Tutorial Thomas Jefferson High School for Science and Technology AP Physics Team Summer 2013 1 CONTENTS CONTENTS Contents 1 Scalars and Vectors 3 2 Rectangular and Polar Form

More information

F Matrix Calculus F 1

F Matrix Calculus F 1 F Matrix Calculus F 1 Appendix F: MATRIX CALCULUS TABLE OF CONTENTS Page F1 Introduction F 3 F2 The Derivatives of Vector Functions F 3 F21 Derivative of Vector with Respect to Vector F 3 F22 Derivative

More information

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS A QUIK GUIDE TO THE FOMULAS OF MULTIVAIABLE ALULUS ontents 1. Analytic Geometry 2 1.1. Definition of a Vector 2 1.2. Scalar Product 2 1.3. Properties of the Scalar Product 2 1.4. Length and Unit Vectors

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu)

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu) 6. Vectors For purposes of applications in calculus and physics, a vector has both a direction and a magnitude (length), and is usually represented as an arrow. The start of the arrow is the vector s foot,

More information

Continuous Groups, Lie Groups, and Lie Algebras

Continuous Groups, Lie Groups, and Lie Algebras Chapter 7 Continuous Groups, Lie Groups, and Lie Algebras Zeno was concerned with three problems... These are the problem of the infinitesimal, the infinite, and continuity... Bertrand Russell The groups

More information

Linear Algebra: Determinants, Inverses, Rank

Linear Algebra: Determinants, Inverses, Rank D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

More information

The Dot and Cross Products

The Dot and Cross Products The Dot and Cross Products Two common operations involving vectors are the dot product and the cross product. Let two vectors =,, and =,, be given. The Dot Product The dot product of and is written and

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

CONTINUUM MECHANICS. (Lecture Notes) Zdeněk Martinec

CONTINUUM MECHANICS. (Lecture Notes) Zdeněk Martinec CONTINUUM MECHANICS (Lecture Notes) Zdeněk Martinec Department of Geophysics Faculty of Mathematics and Physics Charles University in Prague V Holešovičkách 2, 180 00 Prague 8 Czech Republic e-mail: zm@karel.troja.mff.cuni.cz

More information

discuss how to describe points, lines and planes in 3 space.

discuss how to describe points, lines and planes in 3 space. Chapter 2 3 Space: lines and planes In this chapter we discuss how to describe points, lines and planes in 3 space. introduce the language of vectors. discuss various matters concerning the relative position

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Lecture L22-2D Rigid Body Dynamics: Work and Energy

Lecture L22-2D Rigid Body Dynamics: Work and Energy J. Peraire, S. Widnall 6.07 Dynamics Fall 008 Version.0 Lecture L - D Rigid Body Dynamics: Work and Energy In this lecture, we will revisit the principle of work and energy introduced in lecture L-3 for

More information

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two

More information

A vector is a directed line segment used to represent a vector quantity.

A vector is a directed line segment used to represent a vector quantity. Chapters and 6 Introduction to Vectors A vector quantity has direction and magnitude. There are many examples of vector quantities in the natural world, such as force, velocity, and acceleration. A vector

More information

APPLIED MATHEMATICS ADVANCED LEVEL

APPLIED MATHEMATICS ADVANCED LEVEL APPLIED MATHEMATICS ADVANCED LEVEL INTRODUCTION This syllabus serves to examine candidates knowledge and skills in introductory mathematical and statistical methods, and their applications. For applications

More information

1 Vectors: Geometric Approach

1 Vectors: Geometric Approach c F. Waleffe, 2008/09/01 Vectors These are compact lecture notes for Math 321 at UW-Madison. Read them carefully, ideally before the lecture, and complete with your own class notes and pictures. Skipping

More information

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1 19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

More information

Mathematics Notes for Class 12 chapter 10. Vector Algebra

Mathematics Notes for Class 12 chapter 10. Vector Algebra 1 P a g e Mathematics Notes for Class 12 chapter 10. Vector Algebra A vector has direction and magnitude both but scalar has only magnitude. Magnitude of a vector a is denoted by a or a. It is non-negative

More information

THEORETICAL MECHANICS

THEORETICAL MECHANICS PROF. DR. ING. VASILE SZOLGA THEORETICAL MECHANICS LECTURE NOTES AND SAMPLE PROBLEMS PART ONE STATICS OF THE PARTICLE, OF THE RIGID BODY AND OF THE SYSTEMS OF BODIES KINEMATICS OF THE PARTICLE 2010 0 Contents

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Some Comments on the Derivative of a Vector with applications to angular momentum and curvature. E. L. Lady (October 18, 2000)

Some Comments on the Derivative of a Vector with applications to angular momentum and curvature. E. L. Lady (October 18, 2000) Some Comments on the Derivative of a Vector with applications to angular momentum and curvature E. L. Lady (October 18, 2000) Finding the formula in polar coordinates for the angular momentum of a moving

More information

Vector Algebra CHAPTER 13. Ü13.1. Basic Concepts

Vector Algebra CHAPTER 13. Ü13.1. Basic Concepts CHAPTER 13 ector Algebra Ü13.1. Basic Concepts A vector in the plane or in space is an arrow: it is determined by its length, denoted and its direction. Two arrows represent the same vector if they have

More information

Two vectors are equal if they have the same length and direction. They do not

Two vectors are equal if they have the same length and direction. They do not Vectors define vectors Some physical quantities, such as temperature, length, and mass, can be specified by a single number called a scalar. Other physical quantities, such as force and velocity, must

More information

Mechanics 1: Vectors

Mechanics 1: Vectors Mechanics 1: Vectors roadly speaking, mechanical systems will be described by a combination of scalar and vector quantities. scalar is just a (real) number. For example, mass or weight is characterized

More information

Math 241, Exam 1 Information.

Math 241, Exam 1 Information. Math 241, Exam 1 Information. 9/24/12, LC 310, 11:15-12:05. Exam 1 will be based on: Sections 12.1-12.5, 14.1-14.3. The corresponding assigned homework problems (see http://www.math.sc.edu/ boylan/sccourses/241fa12/241.html)

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Georgia Department of Education Kathy Cox, State Superintendent of Schools 7/19/2005 All Rights Reserved 1

Georgia Department of Education Kathy Cox, State Superintendent of Schools 7/19/2005 All Rights Reserved 1 Accelerated Mathematics 3 This is a course in precalculus and statistics, designed to prepare students to take AB or BC Advanced Placement Calculus. It includes rational, circular trigonometric, and inverse

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Plane Stress Transformations

Plane Stress Transformations 6 Plane Stress Transformations ASEN 311 - Structures ASEN 311 Lecture 6 Slide 1 Plane Stress State ASEN 311 - Structures Recall that in a bod in plane stress, the general 3D stress state with 9 components

More information

Brief Review of Tensors

Brief Review of Tensors Appendix A Brief Review of Tensors A1 Introductory Remarks In the study of particle mechanics and the mechanics of solid rigid bodies vector notation provides a convenient means for describing many physical

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

2. Spin Chemistry and the Vector Model

2. Spin Chemistry and the Vector Model 2. Spin Chemistry and the Vector Model The story of magnetic resonance spectroscopy and intersystem crossing is essentially a choreography of the twisting motion which causes reorientation or rephasing

More information

Rotation Matrices and Homogeneous Transformations

Rotation Matrices and Homogeneous Transformations Rotation Matrices and Homogeneous Transformations A coordinate frame in an n-dimensional space is defined by n mutually orthogonal unit vectors. In particular, for a two-dimensional (2D) space, i.e., n

More information

APPENDIX D. VECTOR ANALYSIS 1. The following conventions are used in this appendix and throughout the book:

APPENDIX D. VECTOR ANALYSIS 1. The following conventions are used in this appendix and throughout the book: APPENDIX D. VECTOR ANALYSIS 1 Appendix D Vector Analysis The following conventions are used in this appendix and throughout the book: f, g, φ, ψ are scalar functions of x,t; A, B, C, D are vector functions

More information

How To Understand The Dynamics Of A Multibody System

How To Understand The Dynamics Of A Multibody System 4 Dynamic Analysis. Mass Matrices and External Forces The formulation of the inertia and external forces appearing at any of the elements of a multibody system, in terms of the dependent coordinates that

More information