Introduction to Tensor Calculus


Introduction to Tensor Calculus

Kees Dullemond & Kasper Peeters

This booklet contains an explanation about tensor calculus for students of physics and engineering with a basic knowledge of linear algebra. The focus lies mainly on acquiring an understanding of the principles and ideas underlying the concept of a tensor. We have not pursued mathematical strictness and pureness, but instead emphasise practical use (for a more mathematically pure resumé, please see the bibliography). Although tensors are applied in a very broad range of physics and mathematics, this booklet focuses on the application in special and general relativity.

We are indebted to all people who read earlier versions of this manuscript and gave useful comments, in particular G. Bäuerle (University of Amsterdam) and C. Dullemond Sr. (University of Nijmegen).

The original version of this booklet, in Dutch, appeared on October 28th; a major update followed on September 26th. This version is a re-typeset English translation made in 2008/2010.

Copyright © Kees Dullemond & Kasper Peeters.

Contents

1 The index notation
2 Bases, co- and contravariant vectors
  2.1 Intuitive approach
  2.2 Mathematical approach
3 Introduction to tensors
  3.1 The new inner product and the first tensor
  3.2 Creating tensors from vectors
4 Tensors, definitions and properties
  4.1 Definition of a tensor
  4.2 Symmetry and antisymmetry
  4.3 Contraction of indices
  4.4 Tensors as geometrical objects
  4.5 Tensors as operators
5 The metric tensor and the new inner product
  5.1 The metric as a measuring rod
  5.2 Properties of the metric tensor
  5.3 Co versus contra
6 Tensor calculus
  6.1 The covariance of equations
  6.2 Addition of tensors
  6.3 Tensor products
  6.4 First order derivatives: non-covariant version
  6.5 Rot, cross-products and the permutation symbol
7 Covariant derivatives
  7.1 Vectors in curved coordinates
  7.2 The covariant derivative of a vector/tensor field
A Tensors in special relativity
B Geometrical representation
C Exercises
  C.1 Index notation
  C.2 Co-vectors
  C.3 Introduction to tensors
  C.4 Tensors, general
  C.5 Metric tensor
  C.6 Tensor calculus


1 The index notation

Before we start with the main topic of this booklet, tensors, we will first introduce a new notation for vectors and matrices, and their algebraic manipulations: the index notation. It will prove to be much more powerful than the standard vector notation. To clarify this we will translate all well-known vector and matrix manipulations (addition, multiplication and so on) to index notation.

Let us take a manifold (= space) with dimension n. We will denote the components of a vector \vec{v} with the numbers v_1, ..., v_n. If one modifies the vector basis, in which the components v_1, ..., v_n of vector \vec{v} are expressed, then these components will change, too. Such a transformation can be written using a matrix A, of which the columns can be regarded as the old basis vectors \vec{e}_1, ..., \vec{e}_n expressed in the new basis \vec{e}\,'_1, ..., \vec{e}\,'_n,

\begin{pmatrix} v'_1 \\ \vdots \\ v'_n \end{pmatrix} = \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & & \vdots \\ A_{n1} & \cdots & A_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}    (1.1)

Note that the first index of A denotes the row and the second index the column. In the next chapter we will say more about the transformation of vectors. According to the rules of matrix multiplication the above equation means:

v'_1 = A_{11} v_1 + A_{12} v_2 + ... + A_{1n} v_n,
  \vdots
v'_n = A_{n1} v_1 + A_{n2} v_2 + ... + A_{nn} v_n,    (1.2)

or equivalently,

v'_1 = \sum_{\nu=1}^{n} A_{1\nu} v_\nu, \quad ... \quad, \quad v'_n = \sum_{\nu=1}^{n} A_{n\nu} v_\nu,    (1.3)

or even shorter,

v'_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} v_\nu  \qquad  (\forall \mu \in \mathbb{N}: 1 \le \mu \le n).    (1.4)

In this formula we have put the essence of matrix multiplication. The index \nu is a 'dummy' index and \mu is a 'running' index. The names of these indices, in this case \mu and \nu, are chosen arbitrarily.
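As a small aside, Eq. (1.4) can be executed literally on a computer. The following sketch (in Python with NumPy; the matrix and vector are arbitrary examples, not taken from the text) writes out the explicit double loop and compares it with ordinary matrix multiplication:

    import numpy as np

    n = 3
    A = np.array([[2.0, 0.0, 0.0],     # hypothetical transformation matrix
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    v = np.array([1.0, 3.0, 5.0])      # components in the old basis

    # Literal transcription of Eq. (1.4): v'_mu = sum_nu A[mu, nu] * v[nu]
    v_new = np.zeros(n)
    for mu in range(n):
        for nu in range(n):
            v_new[mu] += A[mu, nu] * v[nu]

    assert np.allclose(v_new, A @ v)   # agrees with matrix multiplication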

They could equally well have been called \alpha and \beta:

v'_\alpha = \sum_{\beta=1}^{n} A_{\alpha\beta} v_\beta  \qquad  (\forall \alpha \in \mathbb{N}: 1 \le \alpha \le n).    (1.5)

Usually the conditions for \mu (in Eq. 1.4) or \alpha (in Eq. 1.5) are not explicitly stated because they are obvious from the context. The following statements are therefore equivalent:

\vec{v} = \vec{y}  ⟺  v_\mu = y_\mu  ⟺  v_\alpha = y_\alpha,
\vec{v} = A \vec{y}  ⟺  v_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} y_\nu  ⟺  v_\nu = \sum_{\mu=1}^{n} A_{\nu\mu} y_\mu.    (1.6)

This index notation is also applicable to other manipulations, for instance the inner product. Take two vectors \vec{v} and \vec{w}; then we define the inner product as

\vec{v} \cdot \vec{w} := v_1 w_1 + ... + v_n w_n = \sum_{\mu=1}^{n} v_\mu w_\mu.    (1.7)

(We will return extensively to the inner product. Here it is just an example of the power of the index notation.) In addition to these kinds of manipulations, one can also just take the sum of matrices and of vectors:

C = A + B  ⟺  C_{\mu\nu} = A_{\mu\nu} + B_{\mu\nu},
\vec{z} = \vec{v} + \vec{w}  ⟺  z_\alpha = v_\alpha + w_\alpha,    (1.8)

or their difference,

C = A − B  ⟺  C_{\mu\nu} = A_{\mu\nu} − B_{\mu\nu},
\vec{z} = \vec{v} − \vec{w}  ⟺  z_\alpha = v_\alpha − w_\alpha.    (1.9)

Exercises 1 to 6 of Section C.1.

From the exercises it should have become clear that the summation symbols can always be put at the start of the formula and that their order is irrelevant. We can therefore in principle omit these summation symbols, if we make clear in advance over which indices we perform a summation, for instance by putting them after the formula,

\sum_{\nu=1}^{n} A_{\mu\nu} v_\nu  →  A_{\mu\nu} v_\nu \; \{\nu\},
\sum_{\beta=1}^{n} \sum_{\gamma=1}^{n} A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta}  →  A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} \; \{\beta, \gamma\}.    (1.10)

From the exercises one can already suspect that

- almost never is a summation performed over an index if that index appears only once in a product,
- almost always a summation is performed over an index that appears twice in a product,
- an index almost never appears more than twice in a product.

When one uses index notation in everyday routine, it soon becomes irritating to denote explicitly over which indices the summation is performed. From experience (see the above three points) one knows over which indices the summations are performed, so one will soon have the idea to introduce the convention that, unless explicitly stated otherwise:

- a summation is assumed over all indices that appear twice in a product, and
- no summation is assumed over indices that appear only once.

From now on we will write all our formulae in index notation with this particular convention, which is called the Einstein summation convention. For a more detailed look at index notation with the summation convention we refer to [4]. We will thus from now on rewrite

\sum_{\nu=1}^{n} A_{\mu\nu} v_\nu  →  A_{\mu\nu} v_\nu,
\sum_{\beta=1}^{n} \sum_{\gamma=1}^{n} A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta}  →  A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta}.    (1.11)

Exercises 7 to 10 of Section C.1.
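NumPy's einsum function implements exactly this convention: indices that appear twice in the subscript string are summed over. A minimal sketch (the matrices and vector are arbitrary examples, not from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))
    v = rng.random(3)

    w = np.einsum('mn,n->m', A, v)          # A_{mu nu} v_nu, first line of Eq. (1.11)
    D = np.einsum('ab,bc,cd->ad', A, B, C)  # A_{alpha beta} B_{beta gamma} C_{gamma delta}

    assert np.allclose(w, A @ v)
    assert np.allclose(D, A @ B @ C)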


2 Bases, co- and contravariant vectors

In this chapter we introduce a new kind of vector (the 'covector'), one that will be essential for the rest of this booklet. To get used to this new concept we will first show in an intuitive way how one can imagine this new kind of vector. After that we will follow a more mathematical approach.

2.1 Intuitive approach

We can map the space around us using a coordinate system. Let us assume that we use a linear coordinate system, so that we can use linear algebra to describe it. Physical objects (represented, for example, with an arrow-vector) can then be described in terms of the basis vectors belonging to the coordinate system (there are some hidden difficulties here, but we will ignore these for the moment). In this section we will see what happens when we choose another set of basis vectors, i.e. what happens upon a basis transformation.

In a description with coordinates we must be fully aware that the coordinates (i.e. the numbers) themselves have no meaning. Only with the corresponding basis vectors (which span the coordinate system) do these numbers acquire meaning. It is important to realize that the object one describes is independent of the coordinate system (i.e. the set of basis vectors) one chooses. Or in other words: an arrow does not change meaning when described in another coordinate system.

Let us write down such a basis transformation,

\vec{e}\,'_1 = a_{11} \vec{e}_1 + a_{12} \vec{e}_2,
\vec{e}\,'_2 = a_{21} \vec{e}_1 + a_{22} \vec{e}_2.    (2.1)

This could be regarded as a kind of multiplication of a vector with a matrix, as long as we take for the components of this vector the basis vectors. If we describe the matrix elements in words, one would get something like:

\begin{pmatrix} \vec{e}\,'_1 \\ \vec{e}\,'_2 \end{pmatrix} = \begin{pmatrix} \text{projection of } \vec{e}\,'_1 \text{ onto } \vec{e}_1 & \text{projection of } \vec{e}\,'_1 \text{ onto } \vec{e}_2 \\ \text{projection of } \vec{e}\,'_2 \text{ onto } \vec{e}_1 & \text{projection of } \vec{e}\,'_2 \text{ onto } \vec{e}_2 \end{pmatrix} \begin{pmatrix} \vec{e}_1 \\ \vec{e}_2 \end{pmatrix}    (2.2)

(Note that the basis vector-columns are not vectors, but just a very useful way to write things down.) We can also look at what happens with the components of a vector if we use a different set of basis vectors.

From linear algebra we know that the transformation matrix can be constructed by putting the old basis vectors, expressed in the new basis, in the columns of the matrix. In words,

\begin{pmatrix} v'_1 \\ v'_2 \end{pmatrix} = \begin{pmatrix} \text{projection of } \vec{e}_1 \text{ onto } \vec{e}\,'_1 & \text{projection of } \vec{e}_2 \text{ onto } \vec{e}\,'_1 \\ \text{projection of } \vec{e}_1 \text{ onto } \vec{e}\,'_2 & \text{projection of } \vec{e}_2 \text{ onto } \vec{e}\,'_2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}    (2.3)

It is clear that the matrices of Eq. (2.2) and Eq. (2.3) are not the same. We now want to compare the basis-transformation matrix of Eq. (2.2) with the coordinate-transformation matrix of Eq. (2.3). To do this we replace all the primed elements in the matrix of Eq. (2.3) by non-primed elements and vice versa. Comparison with the matrix in Eq. (2.2) shows that we also have to transpose the matrix. So if we call the matrix of Eq. (2.2) \Lambda, then Eq. (2.3) is equivalent to:

\vec{v}\,' = (\Lambda^{-1})^T \vec{v}.    (2.4)

Figure 2.1: The behaviour of the transformation of the components of a vector under the transformation of a basis vector: \vec{e}\,'_1 = \frac{1}{2}\vec{e}_1 ⟹ v'_1 = 2 v_1.

The normal vectors are called 'contravariant vectors', because they transform contrary to the basis vector columns. That there must be a different behaviour is also intuitively clear: if we describe an arrow by coordinates, and we then modify the basis vectors, then the coordinates must clearly change in the opposite way to make sure that the coordinates times the basis vectors produce the same physical arrow (see Fig. 2.1).

In view of these two opposite transformation properties, we could now attempt to construct objects that, contrary to normal vectors, transform the same as the basis vector columns. In the simple case in which, for example, the basis vector \vec{e}_1 transforms into \frac{1}{2}\vec{e}_1, the coordinate of this object must then also become \frac{1}{2} times as large. This is precisely what happens to the coordinates of a gradient of a scalar function! The reason is that such a gradient is the difference of the function per unit distance in the direction of the basis vector. When this unit suddenly shrinks (i.e. if the basis vector shrinks) this means that the gradient must shrink too (see Fig. 2.2 for a one-dimensional example). A gradient, which we have so far always regarded as a true vector, will from now on be called a 'covariant vector' or 'covector': it transforms in the same way as the basis vector columns.

The reason that gradients have usually been treated as ordinary vectors is that if the coordinate transformation transforms one cartesian coordinate system into the other (or in other words: one orthonormal basis into the other), then the matrices \Lambda and (\Lambda^{-1})^T are the same.

Exercises 1, 2 of Section C.2.

As long as one transforms only between orthonormal bases, there is no difference between contravariant vectors and covariant vectors. However, it is not always possible in practice to restrict oneself to such bases.
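The statement of Eq. (2.4) and Fig. 2.1 can be verified numerically. A minimal sketch (the lab-frame basis and the vector are arbitrary examples; \Lambda is chosen to halve \vec{e}_1 as in the figure):

    import numpy as np

    E = np.eye(2)                        # old basis vectors as columns (lab frame)
    Lam = np.array([[0.5, 0.0],          # e'_1 = (1/2) e_1, e'_2 = e_2
                    [0.0, 1.0]])
    E_new = E @ Lam.T                    # new basis vectors as columns

    v = np.array([0.4, 1.6])             # components in the old basis
    arrow = E @ v                        # the physical arrow itself

    v_new = np.linalg.inv(Lam).T @ v     # Eq. (2.4)
    assert np.allclose(E_new @ v_new, arrow)   # the same arrow in the new basis
    print(v_new)                         # [0.8 1.6]: the first component doubled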

When doing vector mathematics in curved coordinate systems (like polar coordinates, for example), one is often forced to use non-orthonormal bases. And in special relativity one is fundamentally forced to distinguish between co- and contravariant vectors.

Figure 2.2: Basis vector \vec{e}\,'_1 = \frac{1}{2}\vec{e}_1 ⟹ \nabla' f = \frac{1}{2}\nabla f.

2.2 Mathematical approach

Now that we have a notion of the difference between the transformation of a vector and the transformation of a gradient, let us have a more mathematical look at this. Consider an n-dimensional manifold with coordinates x_1, x_2, ..., x_n. We define the gradient of a function f(x_1, x_2, ..., x_n),

(\nabla f)_\mu := \frac{\partial f}{\partial x_\mu}.    (2.5)

The difference in transformation will now be demonstrated using the simplest of transformations: a homogeneous linear transformation (we did this in the previous section already, since we described all transformations with matrices). In general a coordinate transformation can also be non-homogeneous linear (e.g. a translation), but we will not go into this here.

Suppose we have a vector field defined on this manifold: \vec{v} = \vec{v}(\vec{x}). Let us perform a homogeneous linear transformation of the coordinates:

x'_\mu = A_{\mu\nu} x_\nu.    (2.6)

In this case not only the coordinates x_\mu change (and therefore the dependence of \vec{v} on the coordinates), but also the components of the vectors,

v'_\mu(\vec{x}\,') = A_{\mu\nu} v_\nu(\vec{x}),    (2.7)

where A is the same matrix as in Eq. (2.6) (this may look trivial, but it is useful to check it! Also note that we take as transformation matrix A the matrix that describes the transformation of the vector components, whereas in the previous section we took for \Lambda the matrix that describes the transformation of the basis vectors. So A is equal to (\Lambda^{-1})^T).

Now take the function f(x_1, x_2, ..., x_n) and form the gradient w_\alpha at a point P in the following way,

w_\alpha = \frac{\partial f}{\partial x_\alpha},    (2.8)

and in the new coordinate system as

w'_\alpha = \frac{\partial f}{\partial x'_\alpha}.    (2.9)

It now follows (using the chain rule) that

\frac{\partial f}{\partial x'_1} = \frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x'_1} + \frac{\partial f}{\partial x_2}\frac{\partial x_2}{\partial x'_1} + ... + \frac{\partial f}{\partial x_n}\frac{\partial x_n}{\partial x'_1}.    (2.10)

That is,

\frac{\partial f}{\partial x'_\mu} = \left(\frac{\partial x_\nu}{\partial x'_\mu}\right) \frac{\partial f}{\partial x_\nu},    (2.11)

w'_\mu = \left(\frac{\partial x_\nu}{\partial x'_\mu}\right) w_\nu.    (2.12)

This describes how a gradient transforms. One can regard the partial derivative \partial x_\nu / \partial x'_\mu as the matrix (A^{-1})^T, where A is defined as in Eq. (2.6). To see this we first take the inverse of Eq. (2.6):

x_\mu = (A^{-1})_{\mu\nu}\, x'_\nu.    (2.13)

Now take the derivative,

\frac{\partial x_\mu}{\partial x'_\alpha} = \frac{\partial \left( (A^{-1})_{\mu\nu}\, x'_\nu \right)}{\partial x'_\alpha} = (A^{-1})_{\mu\nu} \frac{\partial x'_\nu}{\partial x'_\alpha} + \frac{\partial (A^{-1})_{\mu\nu}}{\partial x'_\alpha}\, x'_\nu.    (2.14)

Because in this case A does not depend on x'_\alpha, the last term on the right-hand side vanishes. Moreover, we have that

\frac{\partial x'_\nu}{\partial x'_\alpha} = \delta_{\nu\alpha}, \qquad \delta_{\nu\alpha} = \begin{cases} 1 & \text{when } \nu = \alpha, \\ 0 & \text{when } \nu \ne \alpha. \end{cases}    (2.15)

Therefore, what remains is

\frac{\partial x_\mu}{\partial x'_\alpha} = (A^{-1})_{\mu\nu}\, \delta_{\nu\alpha} = (A^{-1})_{\mu\alpha}.    (2.16)

With Eq. (2.12) this yields for the transformation of a gradient

w'_\mu = (A^{-1})^T_{\mu\nu}\, w_\nu.    (2.17)

The indices are now in the correct position to put this in matrix form,

\tilde{w}' = (A^{-1})^T \tilde{w}.    (2.18)

(We again note that the matrix A used here denotes the coordinate transformation from the coordinates x to the coordinates x'.)

We have shown here what the difference is in the transformation properties of normal vectors ('arrows') and gradients. Normal vectors we will from now on call 'contravariant vectors' (though we usually simply call them vectors) and gradients we will call 'covariant vectors' (or 'covectors' or 'one-forms'). It should be noted that not every covariant vector field can be constructed as a gradient of a scalar function. A gradient has the property that \nabla \times (\nabla f) = 0, while not all covector fields may have this property. To distinguish vectors from covectors we will denote vectors with an arrow (\vec{v}) and covectors with a tilde (\tilde{w}). To make further distinction between contravariant and covariant vectors we will put the contravariant indices (i.e. the indices of contravariant vectors) as superscripts and the covariant indices (i.e. the indices of covariant vectors) as subscripts,

y^\alpha : contravariant vector,
w_\alpha : covariant vector, or covector.
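A quick numerical check of Eqs. (2.17)-(2.18) with a concrete function (the transformation A, the function f, the evaluation point, and the test vector are all arbitrary examples, not from the text):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])            # hypothetical coordinate transformation
    x = np.array([0.7, -1.2])             # a point in the old coordinates

    # f(x) = (c . x)^2 has gradient w = 2 (c . x) c in the old coordinates.
    c = np.array([1.0, 4.0])
    w = 2 * (c @ x) * c

    # In the new coordinates x' = A x we have f = (c . A^-1 x')^2, whose
    # gradient with respect to x' is exactly (A^-1)^T w, Eq. (2.18):
    w_new = np.linalg.inv(A).T @ w

    v = np.array([2.0, 5.0])              # a contravariant vector: v' = A v
    assert np.isclose(w @ v, w_new @ (A @ v))   # the contraction w_mu v^mu is invariant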

In practice it will turn out to be very useful to also introduce this convention for matrices. Without further argumentation (this will be given later) we note that the matrix A can be written as:

A : A^\mu{}_\nu.    (2.19)

The transposed version of this matrix is:

A^T : A_\nu{}^\mu.    (2.20)

With this convention the transformation rules for vectors resp. covectors become

v'^\mu = A^\mu{}_\nu v^\nu,    (2.21)

w'_\mu = \left( (A^{-1})^T \right)_\mu{}^\nu w_\nu = (A^{-1})^\nu{}_\mu w_\nu.    (2.22)

The delta \delta of Eq. (2.15) also gets a matrix form,

\delta_{\mu\nu} → \delta^\mu{}_\nu.    (2.23)

This is called the 'Kronecker delta'. It simply has the function of renaming an index:

\delta^\mu{}_\nu\, y^\nu = y^\mu.    (2.24)


3 Introduction to tensors

Tensor calculus is a technique that can be regarded as a follow-up on linear algebra. It is a generalisation of classical linear algebra. In classical linear algebra one deals with vectors and matrices. Tensors are generalisations of vectors and matrices, as we will see in this chapter. In Section 3.1 we will see in an example how a tensor can naturally arise. In Section 3.2 we will re-analyse the essential step of Section 3.1, to get a better understanding.

3.1 The new inner product and the first tensor

The inner product is very important in physics. Let us consider an example. In classical mechanics it is true that the 'work' that is done when an object is moved equals the inner product of the force acting on the object and the displacement vector \Delta\vec{x},

W = \langle \vec{F}, \Delta\vec{x} \rangle.    (3.1)

The work W must of course be independent of the coordinate system in which the vectors \vec{F} and \Delta\vec{x} are expressed. The inner product as we know it does not have this property in general,

s = \langle \vec{a}, \vec{b} \rangle = a_\mu b_\mu    (old definition),    (3.2)

s' = \langle \vec{a}\,', \vec{b}\,' \rangle = A_{\mu\alpha} a_\alpha A_{\mu\beta} b_\beta = (A^T A)_{\alpha\beta}\, a_\alpha b_\beta,    (3.3)

where A is the transformation matrix. Only if A^{-1} equals A^T (i.e. if we are dealing with orthonormal transformations) will s not change. The matrices will then together form the Kronecker delta \delta_{\beta\alpha}. It appears as if the inner product only describes the physics correctly in a special kind of coordinate system: a system which according to our human perception is 'rectangular', and has physical units, i.e. a distance of 1 in coordinate x_1 means indeed 1 meter in the x_1-direction. An orthonormal transformation produces again such a rectangular 'physical' coordinate system. If one has so far always employed such special coordinates anyway, this inner product has always worked properly. However, as we already explained in the previous chapter, it is not always guaranteed that one can use such special coordinate systems (polar coordinates are an example in which the local orthonormal basis of vectors is not the coordinate basis).
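Eq. (3.3) is easy to see in action. The sketch below (arbitrary example vectors and matrices, not from the text) shows that the old inner product changes under a non-orthonormal transformation but survives a rotation:

    import numpy as np

    A = np.array([[2.0, 0.0],               # stretches the first axis: not orthonormal
                  [0.0, 1.0]])
    a = np.array([1.0, 1.0])
    b = np.array([1.0, 2.0])

    s_old = a @ b                           # old definition, old coordinates
    s_new = (A @ a) @ (A @ b)               # same definition after transforming both vectors
    print(s_old, s_new)                     # 3.0 vs 6.0: the "scalar" changed

    R = np.array([[0.0, -1.0],              # a rotation, i.e. an orthonormal transformation
                  [1.0,  0.0]])
    assert np.isclose((R @ a) @ (R @ b), s_old)   # only then is s preserved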

The inner product between a vector \vec{x} and a covector \tilde{y}, however, is invariant under all transformations,

s = x^\mu y_\mu,    (3.4)

because for all A one can write

s' = x'^\mu y'_\mu = A^\mu{}_\alpha x^\alpha (A^{-1})^\beta{}_\mu y_\beta = (A^{-1})^\beta{}_\mu A^\mu{}_\alpha x^\alpha y_\beta = \delta^\beta{}_\alpha\, x^\alpha y_\beta = s.    (3.5)

With the help of this inner product we can introduce a new inner product between two contravariant vectors which also has this invariance property. To do this we introduce a covector w_\mu and define the inner product between x^\mu and y^\nu with respect to this covector w_\mu in the following way (we will introduce a better definition later):

s = w_\mu w_\nu x^\mu y^\nu    (first attempt).    (3.6)

(Warning: later it will become clear that this definition is not quite useful, but at least it will bring us on the right track toward finding an invariant inner product between two contravariant vectors.) The inner product s will now obviously transform correctly, because it is made out of two invariant ones,

s' = (A^{-1})^\mu{}_\alpha w_\mu (A^{-1})^\nu{}_\beta w_\nu A^\alpha{}_\rho x^\rho A^\beta{}_\sigma y^\sigma
   = (A^{-1})^\mu{}_\alpha A^\alpha{}_\rho (A^{-1})^\nu{}_\beta A^\beta{}_\sigma\, w_\mu w_\nu x^\rho y^\sigma
   = \delta^\mu{}_\rho \delta^\nu{}_\sigma\, w_\mu w_\nu x^\rho y^\sigma
   = w_\mu w_\nu x^\mu y^\nu = s.    (3.7)

We have now produced an invariant inner product for contravariant vectors by using a covariant vector w_\mu as a 'measure of length'. However, this covector appears twice in the formula. One can also rearrange these factors in the following way,

s = (w_\mu w_\nu) x^\mu y^\nu = \begin{pmatrix} x^1 & x^2 & x^3 \end{pmatrix} \begin{pmatrix} w_1 w_1 & w_1 w_2 & w_1 w_3 \\ w_2 w_1 & w_2 w_2 & w_2 w_3 \\ w_3 w_1 & w_3 w_2 & w_3 w_3 \end{pmatrix} \begin{pmatrix} y^1 \\ y^2 \\ y^3 \end{pmatrix}.    (3.8)

In this way the two appearances of the covector w are combined into one object: some kind of product of w with itself. It is some kind of matrix, since it is a collection of numbers labeled with indices \mu and \nu. Let us call this object g,

g = \begin{pmatrix} w_1 w_1 & w_1 w_2 & w_1 w_3 \\ w_2 w_1 & w_2 w_2 & w_2 w_3 \\ w_3 w_1 & w_3 w_2 & w_3 w_3 \end{pmatrix} = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix}.    (3.9)

Instead of using a covector w_\mu in relation to which we define the inner product, we can also directly define the object g: that is more direct. So, we define the inner product with respect to the object g as:

s = g_{\mu\nu}\, x^\mu y^\nu    (new definition).    (3.10)

Now we must make sure that the object g is chosen such that our new inner product reproduces the old one if we choose an orthonormal coordinate system. So, with Eq. (3.8), in an orthonormal system one should have

s = g_{\mu\nu} x^\mu y^\nu = \begin{pmatrix} x^1 & x^2 & x^3 \end{pmatrix} \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} \begin{pmatrix} y^1 \\ y^2 \\ y^3 \end{pmatrix} = x^1 y^1 + x^2 y^2 + x^3 y^3    (in an orthonormal system!).    (3.11)

To achieve this, g must become, in an orthonormal system, something like a unit matrix:

g_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}    (in an orthonormal system!).    (3.12)

One can see that one cannot produce this set of numbers according to Eq. (3.9). This means that the definition of the inner product according to Eq. (3.6) has to be rejected (hence the warning that was written there). Instead we have to start directly from Eq. (3.10). We no longer regard g as built out of two covectors, but regard it as a matrix-like set of numbers in its own right. However, it does not have the transformation properties of a classical matrix. Remember that the matrix A of the previous chapter had one index up and one index down: A^\mu{}_\nu, indicating that it has mixed contra- and covariant transformation properties. The new object g_{\mu\nu}, however, has both indices down: it transforms in both indices in a covariant way, like the w_\mu w_\nu which we initially used for our inner product. This curious object, which looks like a matrix but does not transform as one, is an example of a tensor. A matrix is also a tensor, as are vectors and covectors. Matrices, vectors and covectors are special cases of the more general class of objects called 'tensors'. The object g_{\mu\nu} is a kind of tensor that is neither a matrix nor a vector or covector. It is a new kind of object for which only tensor mathematics has a proper description. The object g is called a 'metric', and we will study its properties later in more detail: in Chapter 5.

With this last step we now have a complete description of the new inner product between contravariant vectors that behaves properly, in that it remains invariant under any linear transformation, and that it reproduces the old inner product when we work in an orthonormal basis. So, returning to our original problem,

W = \langle \vec{F}, \Delta\vec{x} \rangle = g_{\mu\nu} F^\mu \Delta x^\nu    (general formula).    (3.13)

In this section we have in fact put forward two new concepts: the new inner product and the concept of a 'tensor'. We will cover both concepts more deeply: the tensors in Section 3.2 and Chapter 4, and the inner product in Chapter 5.
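The invariance of Eq. (3.10) is easy to test numerically: if both indices of g transform covariantly, i.e. g' = (A^{-1})^T g A^{-1} in matrix form, then s is the same in every coordinate system. A sketch with arbitrary example data (assumptions, not from the text):

    import numpy as np

    rng = np.random.default_rng(1)
    A = np.eye(3) + 0.1 * rng.random((3, 3))   # some invertible transformation
    g = np.eye(3)                              # the metric in an orthonormal system
    x, y = rng.random(3), rng.random(3)

    Ainv = np.linalg.inv(A)
    g_new = Ainv.T @ g @ Ainv                  # both indices transform covariantly
    x_new, y_new = A @ x, A @ y                # contravariant vectors

    s = np.einsum('mn,m,n->', g, x, y)         # g_{mu nu} x^mu y^nu
    s_new = np.einsum('mn,m,n->', g_new, x_new, y_new)
    assert np.isclose(s, s_new)                # the new inner product is invariant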

3.2 Creating tensors from vectors

In the previous section we have seen how one can produce a tensor out of two covectors. In this section we will revisit this procedure from a slightly different angle.

Let us look at products between matrices and vectors, like we did in the previous chapters. One starts with an object with two indices and therefore n^2 components (the matrix) and an object with one index and therefore n components (the vector). Together they have n^2 + n components. After multiplication one is left with one object with one index and n components (a vector). One has therefore lost (n^2 + n) − n = n^2 numbers: they are 'summed away'. A similar story holds for the inner product between a covector and a vector. One starts with 2n components and ends up with one number,

s = x^\mu y_\mu = x^1 y_1 + x^2 y_2 + x^3 y_3.    (3.14)

Summation therefore reduces the number of components. In standard multiplication procedures from classical linear algebra such a summation usually takes place: for matrix multiplications as well as for inner products. In index notation this is denoted with paired indices using the summation convention. However, the index notation also allows us to multiply vectors and covectors

without pairing up the indices, and therefore without summation. The object one thus obtains does not have fewer components, but more:

s^\mu{}_\nu := x^\mu y_\nu = \begin{pmatrix} x^1 y_1 & x^1 y_2 & x^1 y_3 \\ x^2 y_1 & x^2 y_2 & x^2 y_3 \\ x^3 y_1 & x^3 y_2 & x^3 y_3 \end{pmatrix}.    (3.15)

We now did not produce one number (as we would have if we replaced \nu with \mu in the above formula) but instead an ordered set of numbers labelled with the indices \mu and \nu. So if we take the example \vec{x} = (1, 3, 5) and \tilde{y} = (4, 7, 6), then the tensor components of s^\mu{}_\nu are, for example: s^2{}_3 = x^2 y_3 = 3 \cdot 6 = 18 and s^1{}_1 = x^1 y_1 = 1 \cdot 4 = 4, and so on. This is the kind of 'tensor' object that this booklet is about. However, this object still looks very much like a matrix, since a matrix is also nothing more or less than a set of numbers labeled with two indices. To check if this is a true matrix or something else, we need to see how it transforms,

s'^\alpha{}_\beta = x'^\alpha y'_\beta = A^\alpha{}_\mu x^\mu (A^{-1})^\nu{}_\beta y_\nu = A^\alpha{}_\mu (A^{-1})^\nu{}_\beta (x^\mu y_\nu) = A^\alpha{}_\mu (A^{-1})^\nu{}_\beta\, s^\mu{}_\nu.    (3.16)

Exercise 2 of Section C.3.

If we compare the transformation in Eq. (3.16) with that of a true matrix of exercise 2, we see that the tensor we constructed is indeed an ordinary matrix. But if instead we use two covectors,

t_{\mu\nu} = x_\mu y_\nu = \begin{pmatrix} x_1 y_1 & x_1 y_2 & x_1 y_3 \\ x_2 y_1 & x_2 y_2 & x_2 y_3 \\ x_3 y_1 & x_3 y_2 & x_3 y_3 \end{pmatrix},    (3.17)

then we get a tensor with different transformation properties,

t'_{\alpha\beta} = x'_\alpha y'_\beta = (A^{-1})^\mu{}_\alpha x_\mu (A^{-1})^\nu{}_\beta y_\nu = (A^{-1})^\mu{}_\alpha (A^{-1})^\nu{}_\beta (x_\mu y_\nu) = (A^{-1})^\mu{}_\alpha (A^{-1})^\nu{}_\beta\, t_{\mu\nu}.    (3.18)

The difference with s^\mu{}_\nu lies in the first matrix of the transformation equation. For s it is the transformation matrix for contravariant vectors, while for t it is the transformation matrix for covariant vectors. The tensor t is clearly not a matrix, so we indeed created something new here. The g tensor of the previous section is of the same type as t.

The beauty of tensors is that they can have an arbitrary number of indices. One can also produce, for instance, a tensor with 3 indices,

A_{\alpha\beta\gamma} = x_\alpha y_\beta z_\gamma.    (3.19)

This is an ordered set of numbers labeled with three indices. It can be visualized as a kind of 'super-matrix' in 3 dimensions (see Fig. 3.1). These are tensors of rank 3, as opposed to tensors of rank 0 (scalars), rank 1 (vectors and covectors) and rank 2 (matrices and the other kind of tensors we introduced so far). We can distinguish between the contravariant rank and the covariant rank. Clearly A_{\alpha\beta\gamma} is a tensor of covariant rank 3 and contravariant rank 0. Its total rank is 3.

Figure 3.1: A tensor of rank 3.
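Eq. (3.15) and the worked numbers of the text translate directly into code. A sketch (the third factor in the rank-3 example reuses \vec{x} in place of a separate vector z, purely for illustration):

    import numpy as np

    x = np.array([1.0, 3.0, 5.0])            # the example vector of the text
    y = np.array([4.0, 7.0, 6.0])            # the example covector of the text

    s = np.einsum('m,n->mn', x, y)           # Eq. (3.15): 9 components, no summation
    assert s[1, 2] == 18.0 and s[0, 0] == 4.0   # s^2_3 and s^1_1 (0-based indices here)

    t3 = np.einsum('a,b,c->abc', x, y, x)    # a rank-3 object as in Eq. (3.19)
    print(t3.shape)                          # (3, 3, 3): a "super-matrix" in 3 dimensions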

One can also produce tensors of, for instance, contravariant rank 2 and covariant rank 3 (i.e. total rank 5): B^{\alpha\beta}{}_{\mu\nu\phi}. A similar tensor, C^\alpha{}_{\mu\nu\phi}{}^\beta, is also of contravariant rank 2 and covariant rank 3. Typically, when tensor mathematics is applied, the meaning of each index has been defined beforehand: the first index means this, the second means that, etc. As long as this is well-defined, one can have co- and contravariant indices in any order. However, since it usually looks better (though this is a matter of taste) to have the contravariant indices first and the covariant ones last, usually the meaning of the indices is chosen in such a way that this is accommodated. This is just a matter of how one chooses to assign meaning to each of the indices.

Again it must be clear that although a multiplication (without summation!) of n vectors and m covectors produces a tensor of contravariant rank n and covariant rank m, not every tensor of rank n + m can be constructed as such a product. Tensors are much more general than these simple products of vectors and covectors. It is therefore important to step away from this picture of combining vectors and covectors into a tensor, and to consider this construction as nothing more than a simple example.


4 Tensors, definitions and properties

Now that we have a first idea of what tensors are, it is time for a more formal description of these objects. We begin with a formal definition of a tensor in Section 4.1. Then, Sections 4.2 and 4.3 give some important mathematical properties of tensors. Section 4.4 gives an alternative notation of tensors, often used in the literature. And finally, in Section 4.5, we take a somewhat different view, considering tensors as operators.

4.1 Definition of a tensor

Definition of a tensor: An (N, M)-tensor at a given point in space can be described by a set of numbers with N + M indices which transforms, upon a coordinate transformation given by the matrix A, in the following way:

t'^{\alpha_1 ... \alpha_N}{}_{\beta_1 ... \beta_M} = A^{\alpha_1}{}_{\mu_1} \cdots A^{\alpha_N}{}_{\mu_N} (A^{-1})^{\nu_1}{}_{\beta_1} \cdots (A^{-1})^{\nu_M}{}_{\beta_M}\; t^{\mu_1 ... \mu_N}{}_{\nu_1 ... \nu_M}.    (4.1)

An (N, M)-tensor in a three-dimensional manifold therefore has 3^{N+M} components. It is contravariant in N components and covariant in M components. Tensors of the type

t^\alpha{}_\beta{}^\gamma    (4.2)

are of course not excluded, as they can be constructed from the above kind of tensors by rearrangement of indices (like the transposition of a matrix as in Eq. 2.20). Matrices (2 indices), vectors and covectors (1 index) and scalars (0 indices) are therefore also tensors, where the latter transforms as s' = s.
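For a (1,1)-tensor, Eq. (4.1) reads t'^\alpha{}_\beta = A^\alpha{}_\mu (A^{-1})^\nu{}_\beta t^\mu{}_\nu, which is just a similarity transformation. A sketch with arbitrary example data (assumptions, not from the text):

    import numpy as np

    rng = np.random.default_rng(2)
    A = np.eye(3) + 0.1 * rng.random((3, 3))   # some invertible transformation
    Ainv = np.linalg.inv(A)
    t = rng.random((3, 3))                     # a (1,1)-tensor, i.e. an ordinary matrix

    # Eq. (4.1) written out with einsum (sum over m and n):
    t_new = np.einsum('am,nb,mn->ab', A, Ainv, t)
    assert np.allclose(t_new, A @ t @ Ainv)    # for (1,1) this is A t A^-1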

4.2 Symmetry and antisymmetry

In practice it often happens that tensors display a certain amount of symmetry, like what we know from matrices. Such symmetries have a strong effect on the properties of these tensors. Often many of these properties, or even tensor equations, can be derived solely on the basis of these symmetries.

A tensor t is called 'symmetric' in the indices \mu and \nu if the components are equal upon exchange of the index values. So, for a 2nd rank contravariant tensor,

t^{\mu\nu} = t^{\nu\mu}    (symmetric (2,0)-tensor).    (4.3)

A tensor t is called 'anti-symmetric' in the indices \mu and \nu if the components are equal-but-opposite upon exchange of the index values. So, for a 2nd rank contravariant tensor,

t^{\mu\nu} = -t^{\nu\mu}    (anti-symmetric (2,0)-tensor).    (4.4)

It is not useful to speak of symmetry or anti-symmetry in a pair of indices that are not of the same type (co- or contravariant). The properties of symmetry only remain invariant upon basis transformation if the indices are of the same type.

Exercises 2 and 3 of Section C.4.

4.3 Contraction of indices

With tensors of at least one covariant and at least one contravariant index we can define a kind of 'internal inner product'. In the simplest case this goes as

t^\alpha{}_\alpha,    (4.5)

where, as usual, we sum over \alpha. This is the 'trace' of the matrix t^\alpha{}_\beta. Also this construction is invariant under basis transformations. In classical linear algebra this property of the trace of a matrix was also known.

Exercise 3 of Section C.4.

One can also perform such a 'contraction of indices' with tensors of higher rank, but then some uncontracted indices remain, e.g.

t^{\alpha\beta}{}_\alpha = v^\beta.    (4.6)

In this way we can convert a tensor of type (N, M) into a tensor of type (N−1, M−1). Of course, this contraction procedure causes information to get lost, since after the contraction we have fewer components. Note that contraction can only take place between one contravariant index and one covariant index. A contraction like t_{\alpha\alpha} is not invariant under coordinate transformation, and we should therefore reject such constructions. In fact, the summations of the summation convention only happen over a covariant and a contravariant index, be it a contraction of a tensor (like t^{\mu\alpha}{}_\alpha) or an inner product between two tensors (like t^{\mu\alpha} y_{\alpha\nu}).
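The contrast between the allowed contraction t^\alpha{}_\alpha and the rejected t_{\alpha\alpha} can be demonstrated numerically (arbitrary example tensors, not from the text):

    import numpy as np

    rng = np.random.default_rng(3)
    A = np.eye(3) + 0.1 * rng.random((3, 3))
    Ainv = np.linalg.inv(A)

    t11 = rng.random((3, 3))                    # a (1,1)-tensor
    tr = np.einsum('aa->', t11)                 # t^alpha_alpha, Eq. (4.5)
    tr_new = np.einsum('aa->', A @ t11 @ Ainv)  # after transforming as a (1,1)-tensor
    assert np.isclose(tr, tr_new)               # the contraction survives

    t02 = rng.random((3, 3))                    # a (0,2)-tensor, like g_{mu nu}
    bad = np.einsum('aa->', t02)                # the forbidden t_{alpha alpha}
    bad_new = np.einsum('aa->', Ainv.T @ t02 @ Ainv)   # both indices covariant
    print(np.isclose(bad, bad_new))             # False in general: not invariant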

4.4 Tensors as geometrical objects

Vectors can be seen as columns of numbers, but also as arrows. One can perform calculations with these arrows if one regards them as linear combinations of 'basis arrows',

\vec{v} = \sum_\mu v^\mu \vec{e}_\mu.    (4.7)

We would like to do something like this also with covectors and eventually, of course, with all tensors. For covectors it amounts to constructing a set of 'unit covectors' that serve as a basis for the covectors. We write

\tilde{w} = \sum_\mu w_\mu \tilde{e}^\mu.    (4.8)

Note: we denote the basis vectors and basis covectors with indices, but we do not mean the components of these basis (co-)vectors (after all: in which basis would that be?). Instead we label the basis (co-)vectors themselves with these indices. This way of writing was already introduced in Section 2.1. In spite of the fact that these indices therefore have a slightly different meaning, we still use the usual conventions of summation for these indices. Note that we mark the covariant basis vectors with an upper index and the contravariant basis vectors with a lower index. This may sound counter-intuitive ('did we not decide to use upper indices for contravariant vectors?') but this is precisely what we mean by the 'different meaning of the indices' here: this time they label the vectors and do not denote their components.

The next step is to express the basis of covectors in the basis of vectors. To do this, let us remember that the inner product between a vector and a covector is always invariant under transformations, independent of the chosen basis:

\vec{v} \cdot \tilde{w} = v^\alpha w_\alpha.    (4.9)

We now substitute Eq. (4.7) and Eq. (4.8) into the left-hand side of the equation,

\vec{v} \cdot \tilde{w} = v^\alpha \vec{e}_\alpha \cdot w_\beta \tilde{e}^\beta.    (4.10)

If Eq. (4.9) and Eq. (4.10) are to remain consistent, it automatically follows that

\tilde{e}^\beta \cdot \vec{e}_\alpha = \delta^\beta{}_\alpha.    (4.11)

On the left-hand side one has the inner product between a covector \tilde{e}^\beta and a vector \vec{e}_\alpha. This time the inner product is written in abstract form (i.e. not written out in components like we did before). The indices \alpha and \beta simply denote which of the basis covectors (\beta) is contracted with which of the basis vectors (\alpha). A geometrical representation of vectors, covectors and the geometric meaning of their inner product is given in Appendix B. Eq. (4.11) is the (implicit) definition of the co-basis in terms of the basis. This equation does not give the co-basis explicitly, but it defines it nevertheless uniquely. This co-basis is called the 'dual basis'.

By arranging the basis covectors in columns as in Section 2.1, one can show that they transform as

\tilde{e}'^\alpha = A^\alpha{}_\beta\, \tilde{e}^\beta,    (4.12)

when we choose A (which was equal to (\Lambda^{-1})^T) in such a way that

\vec{e}\,'_\alpha = \left( (A^{-1})^T \right)_\alpha{}^\beta\, \vec{e}_\beta.    (4.13)

In other words: basis-covector columns transform as vectors, in contrast to basis-vector columns, which transform as covectors. The description of vectors and covectors as geometric objects is now complete.

We can now also express tensors in such an abstract way (again we refer here to the mathematical description; a geometric graphical depiction of tensors is given in Appendix B). We then express these tensors in terms of 'basis tensors'. These can be constructed from the basis vectors and basis covectors we have constructed above,

t = t^{\mu\nu}{}_\rho\; \vec{e}_\mu \otimes \vec{e}_\nu \otimes \tilde{e}^\rho.    (4.14)

The operator \otimes is the 'tensor outer product'. Combining tensors (in this case the basis (co-)vectors) with such an outer product means that the rank of the resulting tensor is the sum of the ranks of the combined tensors. It is the geometric (abstract) version of the outer product we introduced in Section 3.2 to create tensors. The operator \otimes is not commutative,

\vec{a} \otimes \vec{b} \ne \vec{b} \otimes \vec{a}.    (4.15)
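Although Eq. (4.11) defines the dual basis only implicitly, it is straightforward to compute: if the basis vectors form the columns of a matrix E, the dual basis covectors are the rows of E^{-1}. A sketch with an arbitrary (non-orthonormal) example basis:

    import numpy as np

    E = np.array([[1.0, 1.0],
                  [0.0, 2.0]])              # columns: basis vectors e_1, e_2
    E_dual = np.linalg.inv(E)               # rows: dual basis covectors e~^1, e~^2

    # Contracting e~^beta with e_alpha reproduces the Kronecker delta of Eq. (4.11):
    assert np.allclose(E_dual @ E, np.eye(2))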

4.5 Tensors as operators

Let us revisit the new inner product, with \vec{v} a vector and \tilde{w} a covector,

w_\alpha v^\alpha = s.    (4.16)

If we drop index notation and use the usual symbols, we have

\tilde{w} \cdot \vec{v} = s.    (4.17)

This could also be written as

\tilde{w}(\vec{v}) = s,    (4.18)

where the brackets mean that the covector \tilde{w} acts on the vector \vec{v}. In this way, \tilde{w} is an operator (a function) which eats a vector and produces a scalar. Therefore, written in the usual way to denote maps from one set to another,

\tilde{w} : \mathbb{R}^n \to \mathbb{R}.    (4.19)

A covector is then called a 'linear function of direction': the result of the operation (i.e. the resulting scalar) is linearly dependent on the input (the vector), which is a directional object. We can also regard it the other way around,

\vec{v}(\tilde{w}) = s,    (4.20)

where the vector is now the operator and the covector the argument. To prevent confusion with Eq. (4.19), we write this map from now on in a somewhat different way,

\vec{v} : \mathbb{R}_n \to \mathbb{R},    (4.21)

where \mathbb{R}_n stands for the space of covectors.

An arbitrary tensor of rank (N, M) can also be regarded in this way. The tensor has as input a tensor of rank (M, N), or the product of more than one tensor of lower rank, such that the number of contravariant indices equals M and the number of covariant indices equals N. After contraction of all indices we obtain a scalar. An example (for a (2,1)-tensor):

t^{\alpha\beta}{}_\gamma\, a_\alpha b_\beta c^\gamma = s.    (4.22)

The tensor t can be regarded as a function of 2 covectors (a and b) and 1 vector (c), which produces a real number (scalar). The function is linearly dependent on its input, so that we can call this a 'multilinear function of direction'. This somewhat complex nomenclature for tensors is mainly found in older literature. Of course, the fact that tensors can be regarded to some degree as functions or operators does not mean that it is always useful to do so. In most applications tensors are considered as objects themselves, and not as functions.
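Eq. (4.22) and the multilinearity it expresses can be checked directly (all arrays below are arbitrary examples, not from the text):

    import numpy as np

    rng = np.random.default_rng(4)
    t = rng.random((3, 3, 3))                # t^{alpha beta}_gamma
    a, b = rng.random(3), rng.random(3)      # covectors a_alpha, b_beta
    c = rng.random(3)                        # vector c^gamma

    s = np.einsum('abg,a,b,g->', t, a, b, c)   # full contraction -> a scalar
    # Linearity in each argument slot, e.g. in c:
    assert np.isclose(np.einsum('abg,a,b,g->', t, a, b, 2 * c), 2 * s)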

5 The metric tensor and the new inner product

In this chapter we will go deeper into the topic of the new inner product. The new inner product, and the metric tensor g associated with it, is of great importance to nearly all applications of tensor mathematics in non-cartesian coordinate systems and/or curved manifolds. It allows us to produce mathematical and physical formulae that are invariant under coordinate transformations of any kind. In Section 5.1 we will give an outline of the role that is played by the metric tensor g in geometry. Section 5.2 covers the properties of this metric tensor. Finally, Section 5.3 describes the procedure of raising or lowering an index using the metric tensor.

5.1 The metric as a measuring rod

If one wants to describe a physical system, then one typically wants to do this with numbers, because they are exact and one can put them on paper. To be able to do this one must first define a coordinate system in which the measurements can be done. One could in principle take the coordinate system in any way one likes, but often one prefers to use an orthonormal coordinate system, because this is particularly practical for measuring distances using the law of Pythagoras.

However, it is often not quite practical to use such cartesian coordinates. For systems with (nearly) cylindrical symmetry or (nearly) spherical symmetry, for instance, it is much more practical to use cylindrical or polar coordinates, even though one could use cartesian ones. In some applications it is even impossible to use orthonormal coordinates, for instance in the case of the mapping of the surface of the Earth. In a typical projection of the Earth with meridians vertically and parallels horizontally, the northern countries are stretched enormously and the north/south pole is a line instead of a point. Measuring distances (or the length of a vector) on such a map with Pythagoras would produce wrong results.

What one has to do to get an impression of true distances, sizes and proportions is to draw at various places on the Earth-map a 'unit circle' that represents, say, 100 km in each direction (if we wish to measure distances in units of 100 km). At the equator these are presumably circles, but as one goes toward the pole they tend to become ellipses: they are circles that are stretched in the horizontal (east-west) direction. At the north pole such a 'unit circle' will be stretched infinitely in the horizontal direction. There is a strong relation between this unit circle and the metric g. In Appendix B one can see that indeed a metric can be graphically represented with a unit circle (in 2-D) or a unit sphere (in 3-D).

So how does one measure the distance between two points on such a curved manifold? In the example of the Earth's projection one sees that g is different at different latitudes. So if points A and B have different latitudes, there is no unique g that we can use to measure the distance. Indeed, there is no objective 'distance' that one can give. What one can do is to define a path from point A to point B and measure the distance along this path in little steps. At each step the distance travelled is so small that one can take the g at the middle of the step and get a reasonably good estimate of the length ds of that step. This is given by

ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu  \qquad  \text{with} \quad g_{\mu\nu} = g_{\mu\nu}(\vec{x}).    (5.1)

By integrating ds all the way from A to B one thus obtains the distance along this path. Perhaps the true 'distance' is then the distance along the path which yields the shortest distance between these points. What we have done here is to chop the path into such small bits that the curvature of the Earth is negligible. Within each step g changes so little that one can use the linear expressions for length and distance using the local metric g.

Let us take a slightly more concrete example and measure distances in 3-D using polar coordinates. From geometric considerations we know that the length dl of an infinitesimally small vector at position (r, \theta, \phi) in polar coordinates satisfies

dl^2 = dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2.    (5.2)

We could also write this as

dl^2 = \begin{pmatrix} dx^1 & dx^2 & dx^3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & r^2 & 0 \\ 0 & 0 & r^2 \sin^2\theta \end{pmatrix} \begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = g_{\mu\nu}\, dx^\mu dx^\nu.    (5.3)

All the information about the concept of 'length' or 'distance' is hidden in the metric tensor g. The metric tensor field g(\vec{x}) is usually called 'the metric of the manifold'.
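The diagonal metric of Eq. (5.3) can be obtained mechanically by pulling the Euclidean metric back through the coordinate map, g = J^T J, with J the Jacobian of (r, \theta, \phi) → (x, y, z). A sketch at an arbitrary example point:

    import numpy as np

    r, th, ph = 2.0, 0.6, 1.1
    # Jacobian of x = r sin(th) cos(ph), y = r sin(th) sin(ph), z = r cos(th);
    # columns are the partial derivatives with respect to r, th, ph:
    J = np.array([
        [np.sin(th)*np.cos(ph), r*np.cos(th)*np.cos(ph), -r*np.sin(th)*np.sin(ph)],
        [np.sin(th)*np.sin(ph), r*np.cos(th)*np.sin(ph),  r*np.sin(th)*np.cos(ph)],
        [np.cos(th),           -r*np.sin(th),             0.0],
    ])
    g = J.T @ J                              # pull-back of the Euclidean metric
    assert np.allclose(g, np.diag([1.0, r**2, (r*np.sin(th))**2]))   # Eq. (5.3)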

5.2 Properties of the metric tensor

The metric tensor g has some important properties. First of all, it is symmetric. This can be seen in various ways. One of them is that one can always find a coordinate system in which, locally, the metric tensor has the shape of a unit matrix, which is symmetric; and the symmetry of a tensor is conserved upon any transformation (see Section 4.2). Another way to see it is that the inner product of two vectors should not depend on the order of these vectors: g_{\mu\nu} v^\mu w^\nu = g_{\mu\nu} v^\nu w^\mu = g_{\nu\mu} v^\mu w^\nu (in the last step we renamed the running indices).

Every symmetric second-rank covariant tensor can be transformed into diagonal form, in which the diagonal elements are either 1, 0 or −1. For the metric tensor they will then all be 1: in an orthonormal coordinate system. In fact, an orthonormal coordinate system is defined as a coordinate system in which g_{\mu\nu} = diag(1, 1, 1). A 'normal' metric can always be put into this form. However, mathematically one could also conceive a manifold that has a metric that can be brought into the following form: g_{\mu\nu} = diag(1, 1, −1). This produces funny results for some vectors. For instance, the vector (0, 0, 1) would have an imaginary length, and the vector (0, 1, 1) has length zero. These are rather pathological properties and appear to be only interesting to mathematicians. A 'real' metric, after all, should always produce positive definite lengths for vectors unequal to the zero vector. However, as one can see in Appendix A, in special relativity a metric will be introduced that can be brought to the form diag(−1, 1, 1, 1). For now, however, we will assume that the metric is positive definite (i.e. can be brought to the form g_{\mu\nu} = diag(1, 1, 1) through a coordinate transformation).

It should be noted that a metric of signature diag(1, 1, −1) can never be brought into the form g_{\mu\nu} = diag(1, 1, 1) by coordinate transformation, and vice versa. This 'signature' of the metric is therefore an invariant. Normal metrics have signature diag(1, 1, 1). This does not mean that they are g_{\mu\nu} = diag(1, 1, 1), but that they can be brought into this form with a suitable coordinate transformation.

Exercise 1 of Section C.5.

So what about the inner product between two covectors? Just as with the vectors we use a kind of metric tensor, but this time of contravariant nature: g^{\mu\nu}. The inner product then becomes

s = g^{\mu\nu} x_\mu y_\nu.    (5.4)

The distinction between these two metric tensors is simply made by upper resp. lower indices. The properties we have derived for g_{\mu\nu} of course also hold for g^{\mu\nu}. And both metrics are intimately related: once g_{\mu\nu} is given, g^{\mu\nu} can be constructed, and vice versa. In a sense they are two realisations of the same metric object g. The easiest way to find g^{\mu\nu} from g_{\mu\nu} is to find a coordinate system in which g_{\mu\nu} = diag(1, 1, 1): then g^{\mu\nu} is also equal to diag(1, 1, 1). But in the next section we will derive a more elegant relation between the two forms of the metric tensor.

5.3 Co versus contra

Before we got introduced to covectors we presumably always used orthonormal coordinate systems. In such coordinate systems we never encountered the difference between co- and contravariant vectors. A gradient was simply a vector, like any kind of vector. An example is the electric field. One can regard an electric field as a gradient,

\tilde{E} = -\nabla V  \qquad ⟺ \qquad  E_\mu = -\frac{\partial V}{\partial x^\mu}.    (5.5)

On the other hand it is clearly also an arrow-like object, related to a force,

\vec{E} = \frac{1}{q}\vec{F} = \frac{m}{q}\vec{a}  \qquad ⟺ \qquad  E^\mu = \frac{1}{q}F^\mu = \frac{m}{q}a^\mu.    (5.6)

In an orthonormal system one can interchange \tilde{E} and \vec{E} (or equivalently E_\mu and E^\mu) without punishment. But as soon as we go to a non-orthonormal system, we will encounter problems. Then one is suddenly forced to distinguish between the two. So the question becomes: if one has a potential field V, how does one obtain the contravariant \vec{E}?

Perhaps the most obvious solution is: first produce the covariant tensor \tilde{E}, then transform to an orthonormal system, then switch from covector form to contravariant vector form (in this system their components are identical, at least for a positive definite metric) and then transform back to the original coordinate system. This is possible, but very time-consuming and unwieldy. A better method is to use the metric tensor:

E^\mu = g^{\mu\nu} E_\nu.    (5.7)

This is a proper tensor product, as the contraction takes place over an upper and a lower index. The object E^\mu, which was created from E_\nu, is now contravariant. One can see that this vector must indeed be the contravariant version of E_\nu by taking an orthonormal coordinate system: here g^{\mu\nu} = diag(1, 1, 1), i.e. g^{\mu\nu} is some kind of unit matrix.

28 5.3 Co versus contra matrix. This is the same as saying that in the orthonormal basis E µ = E ν, which is indeed true (again, only for a metric that can be diagonalised to diag(1, 1, 1), which for classical physics is always the case). However, in contrast to the formula E µ = E ν, the formula Eq. (5.7) is now also valid after transformation to any coordinate system, ( 1 ) µ α E µ = ( 1 ) µ α ( 1 ) ν β g µν β σe σ = ( 1 ) µ α δν σg µν E σ (5.8) = ( 1 ) µ α g µνe ν, (here we introduced σ to avoid having four ν symbols, which would conflict with the summation convention). If we multiply on both sides with α ρ we obtain exactly (the index ρ can be of course replaced by µ) Eq. (5.7). One can of course also do the reverse, E µ = g µν E ν. (5.9) If we first lower an index with g νρ, and then raise it again with g µν, then we must arrive back with the original vector. This results in a relation between g µν en g µν, E µ = g µν E ν = g µν g νρ E ρ = δ µ ρe ρ = E µ g αν g νβ = δ α β. (5.10) With this relation we have now defined the contravariant version of the metric tensor in terms of its covariant version. We call the conversion of a contravariant index into a covariant one lowering an index, while the reverse is raising an index. The vector and its covariant version are each others dual. 28

6 Tensor calculus

Now that we are familiar with the concept of a tensor, we need to go a bit deeper into the calculus that one can do with these objects. This is not entirely trivial. The index notation makes it possible to write all kinds of manipulations in an easy way, but it also makes it all too easy to write down expressions that have no meaning or have bad properties. We will need to investigate where these pitfalls may arise.

We will also start thinking about differentiation of tensors. Also here the rules of index notation are rather logical, but also here there are problems looming. These problems have to do with the fact that ordinary derivatives of tensors no longer transform as tensors. This is a problem that has far-reaching consequences, and we will deal with it in Chapter 7. For now we will merely make a mental note of this. At the end of this chapter we will introduce another special kind of tensor, the Levi-Civita tensor, which allows us to write down cross-products in index notation. Without this tensor the cross-product cannot be expressed in index notation.

6.1 The covariance of equations

In Chapter 3 we saw that the old inner product was not a good definition. It said that the work equals the (old) inner product between \vec{F} and \Delta\vec{x}. However, while the work is a physical notion, and thereby invariant under coordinate transformation, the inner product was not invariant. In other words: the equation was only valid in a preferred coordinate system. After the introduction of the new inner product the equation suddenly kept its validity in any coordinate system. We call this universal validity of the equation: 'covariance'. Be careful not to confuse this with 'covariant vector': it is a rather unpractical fate that the word 'covariant' has two meanings in tensor calculus: that of the type of index (or vector) on the one hand, and the universal validity of expressions on the other hand.

The following expressions and equations are therefore covariant,

x_\mu = y_\mu,    (6.1)

x^\mu = y^\mu,    (6.2)

because both sides transform in the same way. The following expression is not covariant:

x_\mu = y^\mu,    (6.3)

because the left-hand side transforms as a covector (i.e. with (A^{-1})^T) and the right-hand side as a contravariant vector (i.e. with A). If this equation holds by chance in one particular coordinate system, it will in general no longer hold after a coordinate transformation.
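Eq. (6.3) can be exhibited numerically: an equality between a covector and a vector may hold in one coordinate system and fail after a transformation, while a covariant equation like Eq. (6.1) survives. A sketch (A and the components are arbitrary examples):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 1.0]])
    Ainv = np.linalg.inv(A)

    x_lo = np.array([1.0, 2.0])        # covariant components x_mu
    y_up = np.array([1.0, 2.0])        # contravariant components y^mu: x_mu = y^mu here

    x_lo_new = Ainv.T @ x_lo           # covector rule, Eq. (2.18)
    y_up_new = A @ y_up                # vector rule, Eq. (2.21)
    print(np.allclose(x_lo_new, y_up_new))   # False: the non-covariant equation broke

    y_lo = x_lo.copy()                 # a covariant equation, x_mu = y_mu, Eq. (6.1)
    assert np.allclose(Ainv.T @ x_lo, Ainv.T @ y_lo)   # still holds after transforming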


More information

1 Symmetries of regular polyhedra

1 Symmetries of regular polyhedra 1230, notes 5 1 Symmetries of regular polyhedra Symmetry groups Recall: Group axioms: Suppose that (G, ) is a group and a, b, c are elements of G. Then (i) a b G (ii) (a b) c = a (b c) (iii) There is an

More information

2 Session Two - Complex Numbers and Vectors

2 Session Two - Complex Numbers and Vectors PH2011 Physics 2A Maths Revision - Session 2: Complex Numbers and Vectors 1 2 Session Two - Complex Numbers and Vectors 2.1 What is a Complex Number? The material on complex numbers should be familiar

More information

Review of Vector Analysis in Cartesian Coordinates

Review of Vector Analysis in Cartesian Coordinates R. evicky, CBE 6333 Review of Vector Analysis in Cartesian Coordinates Scalar: A quantity that has magnitude, but no direction. Examples are mass, temperature, pressure, time, distance, and real numbers.

More information

2. Spin Chemistry and the Vector Model

2. Spin Chemistry and the Vector Model 2. Spin Chemistry and the Vector Model The story of magnetic resonance spectroscopy and intersystem crossing is essentially a choreography of the twisting motion which causes reorientation or rephasing

More information

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections

More information

Arkansas Tech University MATH 4033: Elementary Modern Algebra Dr. Marcel B. Finan

Arkansas Tech University MATH 4033: Elementary Modern Algebra Dr. Marcel B. Finan Arkansas Tech University MATH 4033: Elementary Modern Algebra Dr. Marcel B. Finan 6 Permutation Groups Let S be a nonempty set and M(S be the collection of all mappings from S into S. In this section,

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Reflection and Refraction

Reflection and Refraction Equipment Reflection and Refraction Acrylic block set, plane-concave-convex universal mirror, cork board, cork board stand, pins, flashlight, protractor, ruler, mirror worksheet, rectangular block worksheet,

More information

v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors.

v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors. 3. Cross product Definition 3.1. Let v and w be two vectors in R 3. The cross product of v and w, denoted v w, is the vector defined as follows: the length of v w is the area of the parallelogram with

More information

CBE 6333, R. Levicky 1 Differential Balance Equations

CBE 6333, R. Levicky 1 Differential Balance Equations CBE 6333, R. Levicky 1 Differential Balance Equations We have previously derived integral balances for mass, momentum, and energy for a control volume. The control volume was assumed to be some large object,

More information

Vector has a magnitude and a direction. Scalar has a magnitude

Vector has a magnitude and a direction. Scalar has a magnitude Vector has a magnitude and a direction Scalar has a magnitude Vector has a magnitude and a direction Scalar has a magnitude a brick on a table Vector has a magnitude and a direction Scalar has a magnitude

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

discuss how to describe points, lines and planes in 3 space.

discuss how to describe points, lines and planes in 3 space. Chapter 2 3 Space: lines and planes In this chapter we discuss how to describe points, lines and planes in 3 space. introduce the language of vectors. discuss various matters concerning the relative position

More information

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013 Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

More information

Solutions to Problems in Goldstein, Classical Mechanics, Second Edition. Chapter 7

Solutions to Problems in Goldstein, Classical Mechanics, Second Edition. Chapter 7 Solutions to Problems in Goldstein, Classical Mechanics, Second Edition Homer Reid April 21, 2002 Chapter 7 Problem 7.2 Obtain the Lorentz transformation in which the velocity is at an infinitesimal angle

More information

PROJECTIVE GEOMETRY. b3 course 2003. Nigel Hitchin

PROJECTIVE GEOMETRY. b3 course 2003. Nigel Hitchin PROJECTIVE GEOMETRY b3 course 2003 Nigel Hitchin hitchin@maths.ox.ac.uk 1 1 Introduction This is a course on projective geometry. Probably your idea of geometry in the past has been based on triangles

More information

Operation Count; Numerical Linear Algebra

Operation Count; Numerical Linear Algebra 10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floating-point

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

F Matrix Calculus F 1

F Matrix Calculus F 1 F Matrix Calculus F 1 Appendix F: MATRIX CALCULUS TABLE OF CONTENTS Page F1 Introduction F 3 F2 The Derivatives of Vector Functions F 3 F21 Derivative of Vector with Respect to Vector F 3 F22 Derivative

More information

Mechanics 1: Conservation of Energy and Momentum

Mechanics 1: Conservation of Energy and Momentum Mechanics : Conservation of Energy and Momentum If a certain quantity associated with a system does not change in time. We say that it is conserved, and the system possesses a conservation law. Conservation

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Figure 1.1 Vector A and Vector F

Figure 1.1 Vector A and Vector F CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

More information

Section 4.4 Inner Product Spaces

Section 4.4 Inner Product Spaces Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

S-Parameters and Related Quantities Sam Wetterlin 10/20/09

S-Parameters and Related Quantities Sam Wetterlin 10/20/09 S-Parameters and Related Quantities Sam Wetterlin 10/20/09 Basic Concept of S-Parameters S-Parameters are a type of network parameter, based on the concept of scattering. The more familiar network parameters

More information

Vectors 2. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996.

Vectors 2. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. Vectors 2 The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. Launch Mathematica. Type

More information

5 Homogeneous systems

5 Homogeneous systems 5 Homogeneous systems Definition: A homogeneous (ho-mo-jeen -i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x +... + a n x n =.. a m

More information

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products Chapter 3 Cartesian Products and Relations The material in this chapter is the first real encounter with abstraction. Relations are very general thing they are a special type of subset. After introducing

More information

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Matrix Differentiation

Matrix Differentiation 1 Introduction Matrix Differentiation ( and some other stuff ) Randal J. Barnes Department of Civil Engineering, University of Minnesota Minneapolis, Minnesota, USA Throughout this presentation I have

More information

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a 88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small

More information

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

More information

Math 1B, lecture 5: area and volume

Math 1B, lecture 5: area and volume Math B, lecture 5: area and volume Nathan Pflueger 6 September 2 Introduction This lecture and the next will be concerned with the computation of areas of regions in the plane, and volumes of regions in

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

1 Vectors: Geometric Approach

1 Vectors: Geometric Approach c F. Waleffe, 2008/09/01 Vectors These are compact lecture notes for Math 321 at UW-Madison. Read them carefully, ideally before the lecture, and complete with your own class notes and pictures. Skipping

More information

The Math Circle, Spring 2004

The Math Circle, Spring 2004 The Math Circle, Spring 2004 (Talks by Gordon Ritter) What is Non-Euclidean Geometry? Most geometries on the plane R 2 are non-euclidean. Let s denote arc length. Then Euclidean geometry arises from the

More information

Overview. Essential Questions. Precalculus, Quarter 4, Unit 4.5 Build Arithmetic and Geometric Sequences and Series

Overview. Essential Questions. Precalculus, Quarter 4, Unit 4.5 Build Arithmetic and Geometric Sequences and Series Sequences and Series Overview Number of instruction days: 4 6 (1 day = 53 minutes) Content to Be Learned Write arithmetic and geometric sequences both recursively and with an explicit formula, use them

More information

9.4. The Scalar Product. Introduction. Prerequisites. Learning Style. Learning Outcomes

9.4. The Scalar Product. Introduction. Prerequisites. Learning Style. Learning Outcomes The Scalar Product 9.4 Introduction There are two kinds of multiplication involving vectors. The first is known as the scalar product or dot product. This is so-called because when the scalar product of

More information

THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Review of Fundamental Mathematics

Review of Fundamental Mathematics Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools

More information

Unit 3 (Review of) Language of Stress/Strain Analysis

Unit 3 (Review of) Language of Stress/Strain Analysis Unit 3 (Review of) Language of Stress/Strain Analysis Readings: B, M, P A.2, A.3, A.6 Rivello 2.1, 2.2 T & G Ch. 1 (especially 1.7) Paul A. Lagace, Ph.D. Professor of Aeronautics & Astronautics and Engineering

More information

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0.

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0. Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients We will now turn our attention to nonhomogeneous second order linear equations, equations with the standard

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Introduction to Tensor Calculus

Introduction to Tensor Calculus Introduction to Tensor Calculus arxiv:1603.01660v3 [math.ho] 23 May 2016 Taha Sochi May 25, 2016 Department of Physics & Astronomy, University College London, Gower Street, London, WC1E 6BT. Email: t.sochi@ucl.ac.uk.

More information

ELECTRIC FIELD LINES AND EQUIPOTENTIAL SURFACES

ELECTRIC FIELD LINES AND EQUIPOTENTIAL SURFACES ELECTRIC FIELD LINES AND EQUIPOTENTIAL SURFACES The purpose of this lab session is to experimentally investigate the relation between electric field lines of force and equipotential surfaces in two dimensions.

More information

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu)

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu) 6. Vectors For purposes of applications in calculus and physics, a vector has both a direction and a magnitude (length), and is usually represented as an arrow. The start of the arrow is the vector s foot,

More information

Map Patterns and Finding the Strike and Dip from a Mapped Outcrop of a Planar Surface

Map Patterns and Finding the Strike and Dip from a Mapped Outcrop of a Planar Surface Map Patterns and Finding the Strike and Dip from a Mapped Outcrop of a Planar Surface Topographic maps represent the complex curves of earth s surface with contour lines that represent the intersection

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

How Gravitational Forces arise from Curvature

How Gravitational Forces arise from Curvature How Gravitational Forces arise from Curvature 1. Introduction: Extremal ging and the Equivalence Principle These notes supplement Chapter 3 of EBH (Exploring Black Holes by Taylor and Wheeler). They elaborate

More information

www.mathsbox.org.uk ab = c a If the coefficients a,b and c are real then either α and β are real or α and β are complex conjugates

www.mathsbox.org.uk ab = c a If the coefficients a,b and c are real then either α and β are real or α and β are complex conjugates Further Pure Summary Notes. Roots of Quadratic Equations For a quadratic equation ax + bx + c = 0 with roots α and β Sum of the roots Product of roots a + b = b a ab = c a If the coefficients a,b and c

More information

1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v,

1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v, 1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It

More information

Geometric Transformations

Geometric Transformations Geometric Transformations Definitions Def: f is a mapping (function) of a set A into a set B if for every element a of A there exists a unique element b of B that is paired with a; this pairing is denoted

More information

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1 Implicit Functions Defining Implicit Functions Up until now in this course, we have only talked about functions, which assign to every real number x in their domain exactly one real number f(x). The graphs

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

To give it a definition, an implicit function of x and y is simply any relationship that takes the form:

To give it a definition, an implicit function of x and y is simply any relationship that takes the form: 2 Implicit function theorems and applications 21 Implicit functions The implicit function theorem is one of the most useful single tools you ll meet this year After a while, it will be second nature to

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Vector Spaces; the Space R n

Vector Spaces; the Space R n Vector Spaces; the Space R n Vector Spaces A vector space (over the real numbers) is a set V of mathematical entities, called vectors, U, V, W, etc, in which an addition operation + is defined and in which

More information

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS A QUIK GUIDE TO THE FOMULAS OF MULTIVAIABLE ALULUS ontents 1. Analytic Geometry 2 1.1. Definition of a Vector 2 1.2. Scalar Product 2 1.3. Properties of the Scalar Product 2 1.4. Length and Unit Vectors

More information

1.7 Graphs of Functions

1.7 Graphs of Functions 64 Relations and Functions 1.7 Graphs of Functions In Section 1.4 we defined a function as a special type of relation; one in which each x-coordinate was matched with only one y-coordinate. We spent most

More information

Differentiation of vectors

Differentiation of vectors Chapter 4 Differentiation of vectors 4.1 Vector-valued functions In the previous chapters we have considered real functions of several (usually two) variables f : D R, where D is a subset of R n, where

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information