# Chapter 8. Volumes of parallelograms


In this short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to be contained in some ambient Euclidean space, but each will have its own innate dimension. We shall usually denote the ambient space by $\mathbb{R}^N$, and we use its usual inner product and norm. In discussing these parallelograms, we shall always for simplicity assume one vertex is the origin $0 \in \mathbb{R}^N$. This restriction is of course easily overcome in applications by a simple translation of $\mathbb{R}^N$.

**DEFINITION.** Let $x_1, \dots, x_n$ be arbitrary points in $\mathbb{R}^N$ (notice that these subscripts do not here indicate coordinates). We then form the set of all linear combinations of these points of a certain kind:

$$P = \left\{ \sum_{i=1}^{n} t_i x_i \;\middle|\; 0 \le t_i \le 1 \right\}.$$

We call this set an *$n$-dimensional parallelogram* (with one vertex $0$). We also refer to the vectors $x_1, \dots, x_n$ as the *edges* of $P$.

For $n = 1$ this parallelogram is of course just the line segment $[0, x_1]$. For $n = 2$ we obtain a true parallelogram. For $n = 3$ the word usually employed is *parallelepiped*. However, it seems to be a good idea to pick one word to be used for all $n$, so we actually call this a 3-dimensional parallelogram. (Figures: the segment with edge $x_1$, the parallelogram with edges $x_1, x_2$, and the parallelepiped with edges $x_1, x_2, x_3$, each with one vertex at $O$.)

We are not assuming that the edge vectors $x_1, \dots, x_n$ are linearly independent, so it could happen that a 3-dimensional parallelogram lies in a plane and is itself a 2-dimensional parallelogram, for instance.

**PROBLEM 8-1.** Consider the parallelogram in $\mathbb{R}^3$ with edges equal to the three points $\hat\imath$, $\hat\jmath$, $\hat\imath - 2\hat\jmath$. Draw a sketch of it and conclude that it is actually a six-sided figure in the $x$-$y$ plane.

## A. Volumes in dimensions 1, 2, 3

We are going to work out a definition of the $n$-dimensional volume of an $n$-dimensional parallelogram $P$, and we shall denote this as $\mathrm{vol}(P)$; if we wish to designate the dimension, we write $\mathrm{vol}_n(P)$. In this section we are going to recall some formulas we have already obtained, and cast them in a somewhat different format.

**Dimension 1.** We have the line segment $[0, x_1] \subset \mathbb{R}^N$, and we simply say its 1-dimensional volume is its length, i.e. the norm of $x_1$:

$$\mathrm{vol}([0, x_1]) = \|x_1\|.$$

**Dimension 3.** We skip ahead to this dimension because of our already having obtained the pertinent formula in Section 7C. Namely, suppose $P \subset \mathbb{R}^3$, so we are dealing at first with the dimension of $P$ equal to the dimension of the ambient space. Then if we write $x_1, x_2, x_3$ as column vectors,

$$\mathrm{vol}(P) = |\det(x_1\; x_2\; x_3)|,$$

the absolute value of the determinant of the $3 \times 3$ matrix formed from the column vectors. This formula has the advantage of being easy to calculate. But it suffers from two related disadvantages: (1) it depends on the coordinates of the edge vectors and thus is not very geometric, and (2) it requires the ambient space to be $\mathbb{R}^3$. Both of these objections can be overcome by resorting to the by now familiar device of computing the matrix product

$$(x_1\; x_2\; x_3)^t (x_1\; x_2\; x_3) = \begin{pmatrix} x_1^t \\ x_2^t \\ x_3^t \end{pmatrix} (x_1\; x_2\; x_3) = \begin{pmatrix} x_1 \cdot x_1 & x_1 \cdot x_2 & x_1 \cdot x_3 \\ x_2 \cdot x_1 & x_2 \cdot x_2 & x_2 \cdot x_3 \\ x_3 \cdot x_1 & x_3 \cdot x_2 & x_3 \cdot x_3 \end{pmatrix}.$$

Notice that whereas both matrices on the left side of this equation depend on the coordinates of $x_1$, $x_2$, and $x_3$, the matrix $(x_i \cdot x_j)$ on the right side depends only on the (geometric) inner products of the vectors! The properties of the determinant produce the formula

$$(\det(x_1\; x_2\; x_3))^2 = \det(x_i \cdot x_j).$$

Thus we have obtained the formula

$$\mathrm{vol}(P) = \sqrt{\det(x_i \cdot x_j)}.$$
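The agreement of the two formulas is easy to check numerically. The following sketch (using numpy, with random edge vectors; not part of the text) compares the coordinate determinant with the coordinate-free Gram formula for $n = 3$:

```python
import numpy as np

# Compare vol(P) = |det(x1 x2 x3)| with vol(P) = sqrt(det(xi . xj))
# for a random 3-dimensional parallelogram in R^3.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))        # columns are the edges x1, x2, x3

vol_det = abs(np.linalg.det(X))        # coordinate formula
G = X.T @ X                            # Gram matrix (xi . xj)
vol_gram = np.sqrt(np.linalg.det(G))   # coordinate-free formula

print(vol_det, vol_gram)               # the two values agree
```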

This is our desired formula. It displays $\mathrm{vol}(P)$ in such a way that we no longer need the assumption $P \subset \mathbb{R}^3$. For if the ambient space is $\mathbb{R}^N$, we can simply regard $x_1, x_2, x_3$ as lying in a 3-dimensional subspace of $\mathbb{R}^N$ and use the formula we have just derived. The point is that the inner products $x_i \cdot x_j$ from $\mathbb{R}^N$ are the same inner products as in the subspace of $\mathbb{R}^N$, since the inner product has a completely geometric description.

**Dimension 2.** Here we are going to present the procedure for obtaining the formula analogous to the above, but without using a coordinate representation at all. We shall see in Section C that this procedure works for all dimensions. We suppose $P$ to be the 2-dimensional parallelogram in $\mathbb{R}^N$ with vertices $0$, $x_1$, $x_2$, and $x_1 + x_2$. We choose the base of the parallelogram to be $[0, x_1]$ and then use the classical formula for the area of $P$:

$$\mathrm{vol}(P) = \text{base} \times \text{altitude} = \|x_1\|\, h$$

(see the figure, which shows the altitude $h$ dropped from $x_2$ onto the base $[0, x_1]$). The altitude $h$ is the distance from $x_2$ to the foot $t x_1$ of the line segment from $x_2$ orthogonal to the base. The definition of $t$ is then that

$$(t x_1 - x_2) \cdot x_1 = 0.$$

(How many times have we used this technique in this book?!) The definition of $h$ then can be rewritten

$$h^2 = \|x_2 - t x_1\|^2 = (x_2 - t x_1) \cdot (x_2 - t x_1) = (x_2 - t x_1) \cdot x_2.$$

We now regard what we have obtained as two linear equations for the two unknowns $t$ and $h^2$:

$$\begin{cases} t\, x_1 \cdot x_1 + 0 \cdot h^2 = x_1 \cdot x_2, \\ t\, x_1 \cdot x_2 + h^2 = x_2 \cdot x_2. \end{cases}$$

We don't care what $t$ is in this situation, so we use Cramer's rule to compute only the desired $h^2$:

$$h^2 = \frac{\det\begin{pmatrix} x_1 \cdot x_1 & x_1 \cdot x_2 \\ x_1 \cdot x_2 & x_2 \cdot x_2 \end{pmatrix}}{\det\begin{pmatrix} x_1 \cdot x_1 & 0 \\ x_1 \cdot x_2 & 1 \end{pmatrix}} = \frac{\det(x_i \cdot x_j)}{\|x_1\|^2}.$$

Thus

$$\mathrm{vol}(P) = \|x_1\|\, h = \sqrt{\det(x_i \cdot x_j)}.$$

This is the desired analog to what we found for $n = 3$, but we have obtained it without using coordinates and without even thinking about $\mathbb{R}^2$!

**EXAMPLE.** We find the area of the triangle in $\mathbb{R}^3$ with vertices $\hat\imath$, $\hat\jmath$, $\hat k$. We handle this by finding the area of an associated parallelogram and then dividing by 2. A complication appears because we do not have the origin of $\mathbb{R}^3$ as a vertex, but we get around this by thinking of vectors emanating from one of the vertices (say $\hat\jmath$) thought of as the origin. Thus we take

$$x_1 = \hat\imath - \hat\jmath = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \qquad x_2 = \hat k - \hat\jmath = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}.$$

Then the square of the area of the parallelogram is

$$\det(x_i \cdot x_j) = \det\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = 3.$$

Thus, area of triangle $= \sqrt{3}/2$.

**PROBLEM 8-2.** Compute the area of the triangle in $\mathbb{R}^3$ whose vertices are $a\hat\imath$, $b\hat\jmath$, and $c\hat k$. Express the result in a form that is a symmetric function of $a$, $b$, $c$.

## B. The Gram determinant
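The worked example is a one-liner to verify numerically. This sketch (numpy; the edge vectors are taken straight from the example) recomputes the Gram determinant and the triangle's area:

```python
import numpy as np

# The triangle in R^3 with vertices i, j, k, via edge vectors from vertex j.
x1 = np.array([1.0, -1.0, 0.0])   # i - j
x2 = np.array([0.0, -1.0, 1.0])   # k - j

G = np.array([[x1 @ x1, x1 @ x2],
              [x2 @ x1, x2 @ x2]])       # Gram matrix of the two edges
gram = np.linalg.det(G)                   # should equal 3
area_triangle = np.sqrt(gram) / 2         # parallelogram area over 2

print(gram, area_triangle)                # 3 and sqrt(3)/2
```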

Our work thus far indicates that the matrix of dot products $(x_i \cdot x_j)$ will play a major role in these sorts of calculations. We therefore pause to consider it as an object of tremendous interest in its own right. It is named after the Danish mathematician Jørgen Pedersen Gram.

**DEFINITION.** Suppose $x_1, x_2, \dots, x_n$ are points in the Euclidean space $\mathbb{R}^N$. Their *Gram matrix* is the $n \times n$ symmetric matrix of their inner products

$$G = (x_i \cdot x_j) = \begin{pmatrix} x_1 \cdot x_1 & x_1 \cdot x_2 & \dots & x_1 \cdot x_n \\ \vdots & & & \vdots \\ x_n \cdot x_1 & x_n \cdot x_2 & \dots & x_n \cdot x_n \end{pmatrix}.$$

The *Gram determinant* of the same points is

$$\mathrm{Gram}(x_1, x_2, \dots, x_n) = \det G = \det(x_i \cdot x_j).$$

Here are the first three cases:

$$\mathrm{Gram}(x) = x \cdot x = \|x\|^2;$$

$$\mathrm{Gram}(x_1, x_2) = \det\begin{pmatrix} \|x_1\|^2 & x_1 \cdot x_2 \\ x_2 \cdot x_1 & \|x_2\|^2 \end{pmatrix} = \|x_1\|^2 \|x_2\|^2 - (x_1 \cdot x_2)^2;$$

$$\mathrm{Gram}(x_1, x_2, x_3) = \det\begin{pmatrix} \|x_1\|^2 & x_1 \cdot x_2 & x_1 \cdot x_3 \\ x_1 \cdot x_2 & \|x_2\|^2 & x_2 \cdot x_3 \\ x_1 \cdot x_3 & x_2 \cdot x_3 & \|x_3\|^2 \end{pmatrix}.$$

Notice that for $n = 1$, $\mathrm{Gram}(x) \ge 0$, and $\mathrm{Gram}(x) > 0$ unless $x = 0$. For $n = 2$, $\mathrm{Gram}(x_1, x_2) \ge 0$, and $\mathrm{Gram}(x_1, x_2) > 0$ unless $x_1$ and $x_2$ are linearly dependent; this is the Schwarz inequality. We didn't encounter $\mathrm{Gram}(x_1, x_2, x_3)$ until the preceding section, but we realize that the analogous property is valid. In fact, we shall now give an independent verification of this property in general.

**THEOREM.** If $x_1, \dots, x_n \in \mathbb{R}^N$, then their Gram matrix $G$ is positive semidefinite. In particular, $\mathrm{Gram}(x_1, \dots, x_n) \ge 0$.

Moreover,

$$G \text{ is positive definite} \iff \mathrm{Gram}(x_1, \dots, x_n) > 0 \iff x_1, \dots, x_n \text{ are linearly independent}.$$

PROOF. We first notice the formula for the norm of $\sum_{i=1}^n t_i x_i$:

$$\left\| \sum_{i=1}^{n} t_i x_i \right\|^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} t_i x_i \cdot t_j x_j = \sum_{i,j=1}^{n} (x_i \cdot x_j)\, t_i t_j.$$

In terms of the Gram matrix, the calculation we have just done can thus be written in the form

$$\left\| \sum_{i=1}^{n} t_i x_i \right\|^2 = Gt \cdot t,$$

where $t$ is the column vector with components $t_1, \dots, t_n$. We see therefore that $Gt \cdot t \ge 0$ for all $t \in \mathbb{R}^n$. That is, the matrix $G$ is positive semidefinite (p. 4-26). Thus $\det G \ge 0$. (Remember: the eigenvalues of $G$ are all nonnegative and $\det G$ is the product of the eigenvalues.) This proves the first statement of the theorem.

For the second we first note that of course $G$ is positive definite $\iff \det G > 0$. If these conditions do not hold, then there exists a nonzero $t \in \mathbb{R}^n$ such that $Gt = 0$. Then of course also $Gt \cdot t = 0$. According to the formula above, $\sum_{i=1}^n t_i x_i$ has zero norm and is therefore the zero vector. We conclude that $x_1, \dots, x_n$ are linearly dependent. Conversely, assume $\sum_{i=1}^n t_i x_i = 0$ for some $t \ne 0$. Take the dot product of this equation with each $x_j$ to conclude that $\sum_{i=1}^n t_i\, x_i \cdot x_j = 0$. This says that the $j$th component of $Gt$ is zero. As $j$ is arbitrary, $Gt = 0$. Thus $\det G = 0$.
QED

**PROBLEM 8-3.** For the record, here is the general principle used above: prove that for a general real symmetric $n \times n$ matrix $A$ which is positive semidefinite and for $t \in \mathbb{R}^n$,

$$At \cdot t = 0 \iff At = 0.$$

The following problem strikingly shows that Gram matrices provide a concrete realization of all positive semidefinite matrices.
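Both halves of the theorem can be observed numerically. This sketch (numpy, random vectors; not part of the text) checks that a Gram matrix has nonnegative eigenvalues, and that forcing a linear dependence drives the Gram determinant to zero:

```python
import numpy as np

# Three random vectors x1, x2, x3 in R^5, stored as columns of X.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
G = X.T @ X                          # their Gram matrix

eigs = np.linalg.eigvalsh(G)         # eigenvalues of the symmetric matrix G
print(eigs)                          # all nonnegative (G is pos. semidefinite)

# Force a linear dependence: replacing x3 by x1 + x2 makes det G = 0.
X_dep = np.column_stack([X[:, 0], X[:, 1], X[:, 0] + X[:, 1]])
G_dep = X_dep.T @ X_dep
print(np.linalg.det(G_dep))          # numerically zero
```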

**PROBLEM 8-4.** Let $A$ be a real symmetric positive semidefinite $n \times n$ matrix. Prove that $A$ is a Gram matrix; that is, there exist $x_1, \dots, x_n \in \mathbb{R}^n$ such that $A = (x_i \cdot x_j)$. (HINT: use $\sqrt{A}$.)

**PROBLEM 8-5.** Let $A$ be a real symmetric positive definite $n \times n$ matrix. Suppose $A$ is represented as a Gram matrix in two different ways: $A = (x_i \cdot x_j) = (y_i \cdot y_j)$. Prove that $x_i = B y_i$, $1 \le i \le n$, where $B$ is a matrix in $O(n)$. Prove the converse as well.

As we noted earlier, the inequality $\mathrm{Gram}(x_1, x_2) \ge 0$ is exactly the Schwarz inequality. We pause to demonstrate the geometrical significance of the next case, $\mathrm{Gram}(x_1, x_2, x_3) \ge 0$. Expanding the determinant, this says that

$$\|x_1\|^2 \|x_2\|^2 \|x_3\|^2 + 2 (x_1 \cdot x_2)(x_2 \cdot x_3)(x_3 \cdot x_1) - \|x_1\|^2 (x_2 \cdot x_3)^2 - \|x_2\|^2 (x_3 \cdot x_1)^2 - \|x_3\|^2 (x_1 \cdot x_2)^2 \ge 0.$$

Now consider points which lie on the unit sphere in $\mathbb{R}^N$. These are points with norm 1. For any two such points $x$ and $y$, we can think of the distance between them, measured not from the viewpoint of the ambient $\mathbb{R}^N$, but rather from the viewpoint of the sphere itself. We shall later discuss this situation. The idea is to try to find the curve of shortest length that starts at $x$, ends at $y$, and that lies entirely in the sphere. We shall actually prove that this curve is the great circle passing through $x$ and $y$. Here's a side view, where we have intersected $\mathbb{R}^N$ with the unique 2-dimensional plane containing $0$, $x$ and $y$ (minor technicality: if $y = \pm x$ then many planes will do):

(Figure: side view of the unit circle, with the angle $\theta$ at $O$ between $x$ and $y$.)

The angle $\theta$ between $x$ and $y$ is itself the length of the great circle arc joining $x$ and $y$. It is of course calculated by the dot product: $\cos\theta = x \cdot y$. As $-1 \le x \cdot y \le 1$, we then define the *intrinsic distance* from $x$ to $y$ by the formula

$$d(x, y) = \arccos(x \cdot y).$$

(Remember that $\arccos$ has values only in the interval $[0, \pi]$.) The following properties are immediate:

- $0 \le d(x, y) \le \pi$;
- $d(x, y) = 0 \iff x = y$;
- $d(x, y) = d(y, x)$.

It certainly seems reasonable that the triangle inequality for this distance is true, based on the fact that the curve of shortest length joining two points on the sphere should be the great circle arc and therefore should have length $d(x, y)$. However, we want to show that the desired inequality is intimately related to the theorem. The inequality states that

$$d(x, y) \le d(x, z) + d(z, y).$$

We now prove this inequality. It states that

$$\arccos(x \cdot y) \le \arccos(x \cdot z) + \arccos(z \cdot y).$$

If the right side of this inequality is greater than $\pi$, then we have nothing to prove. We may thus assume it is between $0$ and $\pi$. On this range cosine is a strictly decreasing function, so we must prove

$$x \cdot y \ge \cos(\arccos(x \cdot z) + \arccos(z \cdot y)).$$

The addition formula for cosine enables a computation of the right side; notice that $\sin(\arccos(x \cdot z)) = \sqrt{1 - (x \cdot z)^2}$. Thus we must prove

$$x \cdot y \ge (x \cdot z)(z \cdot y) - \sqrt{1 - (x \cdot z)^2}\,\sqrt{1 - (z \cdot y)^2}.$$

That is,

$$\sqrt{1 - (x \cdot z)^2}\,\sqrt{1 - (z \cdot y)^2} \ge (x \cdot z)(z \cdot y) - x \cdot y.$$

If the right side is negative, we have nothing to prove. Thus it suffices to square both sides and prove that

$$\bigl(1 - (x \cdot z)^2\bigr)\bigl(1 - (z \cdot y)^2\bigr) \ge \bigl((x \cdot z)(z \cdot y) - x \cdot y\bigr)^2.$$

A little calculation shows this reduces to

$$1 + 2(x \cdot z)(z \cdot y)(x \cdot y) - (x \cdot z)^2 - (z \cdot y)^2 - (x \cdot y)^2 \ge 0.$$

Aha! This is exactly the inequality $\mathrm{Gram}(x, y, z) \ge 0$.

**PROBLEM 8-6.** In the above situation, show that equality holds in $d(x, y) = d(x, z) + d(z, y)$ if and only if $z$ lies on the short great circle arc joining $x$ and $y$.
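The link between the spherical triangle inequality and the Gram determinant can be observed directly. This sketch (numpy, random unit vectors; a numerical illustration, not a proof) evaluates both sides of $d(x, y) \le d(x, z) + d(z, y)$ and the corresponding $\mathrm{Gram}(x, y, z)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def unit(v):
    return v / np.linalg.norm(v)

# Three random points on the unit sphere in R^3.
x, y, z = (unit(rng.standard_normal(3)) for _ in range(3))

# Intrinsic (great-circle) distance d(x, y) = arccos(x . y).
d = lambda u, v: np.arccos(np.clip(u @ v, -1.0, 1.0))
lhs, rhs = d(x, y), d(x, z) + d(z, y)

# Gram determinant of the same three unit vectors.
gram = np.linalg.det(np.array([[x @ x, x @ y, x @ z],
                               [y @ x, y @ y, y @ z],
                               [z @ x, z @ y, z @ z]]))

print(lhs <= rhs + 1e-12, gram >= -1e-12)   # both hold
```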

**PROBLEM 8-7.** (Euclid) Show that when three rays in $\mathbb{R}^3$ make a solid angle in space, the sum of any two of the angles between them exceeds the third. (Figure: rays from $A$ toward $B$, $C$, $D$, with angles $\alpha$, $\beta$, $\gamma$ between them.)

**PROBLEM 8-8.** Suppose $m$ rays meet in convex fashion at a point $p$ in $\mathbb{R}^3$. Show that the angles at $p$ sum to less than $2\pi$. (HINT: sum all the angles $\alpha$, $\beta$, $\theta$ over all the triangles; use the preceding problem. Figure: the rays meeting at $p$, with angles $\alpha$, $\beta$, $\theta$ marked.)

**PROBLEM 8-9.** Here is a familiar and wonderful theorem of plane Euclidean geometry. Consider the angles shown in a circle with center $O$: both angles subtend the same arc of the circle. We say $\alpha$ is a central angle, $\beta$ an inscribed angle. The theorem asserts that $\alpha = 2\beta$. Now label the three points on the circle as $x$, $y$, $z$, and show that the equation $\alpha = 2\beta$ is exactly the equation $\mathrm{Gram}(x, y, z) = 0$.

## C. Volumes in all dimensions

We are now ready to derive the general formula for volumes of parallelograms. We shall proceed inductively, and shall take an intuitive viewpoint: the $n$-dimensional parallelogram $P$ with edges $x_1, \dots, x_n$ has as its *base* the $(n-1)$-dimensional parallelogram $P'$ with edges $x_1, \dots, x_{n-1}$. We then compute the altitude $h$ as the distance to the base, and define

$$\mathrm{vol}_n(P) = \mathrm{vol}_{n-1}(P')\, h.$$

(Side view: $x_n$ above the base $P'$, with altitude $h$ dropped to the foot $y$.)

This definition is not really adequate, but the calculations are too interesting to miss. Moreover, Fubini's theorem in the next chapter provides the simple theoretical basis for the validity of the current naive approach.

To complete this analysis we simply follow the outline given in Section A for the case $n = 2$. That is, we want to investigate the foot $y = \sum_{j=1}^{n-1} t_j x_j$ of the altitude. We have

$$(y - x_n) \cdot x_i = 0 \quad \text{for } 1 \le i \le n - 1.$$

These equations should determine the $t_j$'s. Then the altitude can be computed by the equation

$$h^2 = \|x_n - y\|^2 = (x_n - y) \cdot (x_n - y) = (x_n - y) \cdot x_n.$$

We now display these $n$ equations for our unknowns $t_1, \dots, t_{n-1}, h^2$:

$$\sum_{j=1}^{n-1} t_j\, x_i \cdot x_j + 0 \cdot h^2 = x_i \cdot x_n, \quad 1 \le i \le n - 1;$$

$$\sum_{j=1}^{n-1} t_j\, x_n \cdot x_j + h^2 = x_n \cdot x_n.$$

Cramer's rule should then produce the desired formula for $h^2$:

$$h^2 = \frac{\det\begin{pmatrix} x_1 \cdot x_1 & \dots & x_1 \cdot x_{n-1} & x_1 \cdot x_n \\ x_2 \cdot x_1 & \dots & x_2 \cdot x_{n-1} & x_2 \cdot x_n \\ \vdots & & \vdots & \vdots \\ x_n \cdot x_1 & \dots & x_n \cdot x_{n-1} & x_n \cdot x_n \end{pmatrix}}{\det\begin{pmatrix} x_1 \cdot x_1 & \dots & x_1 \cdot x_{n-1} & 0 \\ x_2 \cdot x_1 & \dots & x_2 \cdot x_{n-1} & 0 \\ \vdots & & \vdots & \vdots \\ x_n \cdot x_1 & \dots & x_n \cdot x_{n-1} & 1 \end{pmatrix}}.$$

Aha! This is precisely

$$h^2 = \frac{\mathrm{Gram}(x_1, \dots, x_n)}{\mathrm{Gram}(x_1, \dots, x_{n-1})}.$$

Of course, this makes sense only if the determinant of the coefficients in the above linear system is nonzero; that is, only if $x_1, \dots, x_{n-1}$ are linearly independent. Notice then that

$$h^2 = 0 \iff x_1, \dots, x_n \text{ are linearly dependent};$$

that is, $x_n$ lies in the $(n-1)$-dimensional subspace containing the base of the parallelogram. Thanks to this formula, we now see inductively that

$$\mathrm{vol}_n(P) = \sqrt{\mathrm{Gram}(x_1, \dots, x_n)}.$$

**PROBLEM 8-10.** In the special case of an $n$-dimensional parallelogram $P$ with edges $x_1, \dots, x_n \in \mathbb{R}^n$, show that

$$\mathrm{vol}_n(P) = |\det(x_1\; x_2\; \dots\; x_n)|.$$

## D. Generalization of the cross product
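The inductive step is easy to test in any dimension. The following sketch (numpy, random edges in $\mathbb{R}^7$; not part of the text) computes the volume both directly from the Gram determinant and as base times altitude, using $h^2 = \mathrm{Gram}(x_1, \dots, x_n) / \mathrm{Gram}(x_1, \dots, x_{n-1})$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 4, 7
X = rng.standard_normal((N, n))          # edges x1..x4 as columns in R^7

def gram_det(cols):
    """Gram determinant of the vectors stored as the columns of cols."""
    return np.linalg.det(cols.T @ cols)

vol = np.sqrt(gram_det(X))               # vol_n(P) = sqrt(Gram(x1..xn))

# Inductive computation: (n-1)-dimensional base volume times altitude h.
base = np.sqrt(gram_det(X[:, :-1]))
h = np.sqrt(gram_det(X) / gram_det(X[:, :-1]))

print(vol, base * h)                     # the two values agree
```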

This is a good place to give a brief discussion of cross products on $\mathbb{R}^n$. We argue by analogy from the case $n = 3$. Suppose then that $x_1, \dots, x_{n-1}$ are linearly independent vectors in $\mathbb{R}^n$. Then we want to find a vector $z$ which is orthogonal to all of them. We definitely anticipate that $z$ will exist and will be uniquely determined up to a scalar factor. The formula we came up with for the case $n = 3$ can be presented as a formal $3 \times 3$ determinant,

$$z = \det\left( x_1 \;\; x_2 \;\; \begin{matrix} \hat\imath \\ \hat\jmath \\ \hat k \end{matrix} \right),$$

where $x_1$ and $x_2$ are column vectors and we expand the determinant along the third column. This is a slight rearrangement of the formula we derived in Section 7A. We now take the straightforward generalization for any dimension $n \ge 2$:

$$z = \det\left( x_1 \;\; x_2 \;\; \dots \;\; x_{n-1} \;\; \begin{matrix} \hat e_1 \\ \hat e_2 \\ \vdots \\ \hat e_n \end{matrix} \right).$$

This is a formal $n \times n$ determinant, in which the last column contains the unit coordinate vectors in order. The other columns are of course the usual column vectors. Notice the special case $n = 2$: we have just one vector, say

$$x = \begin{pmatrix} a \\ b \end{pmatrix},$$

and the corresponding vector $z$ is given by

$$z = \det\begin{pmatrix} a & \hat e_1 \\ b & \hat e_2 \end{pmatrix} = a \hat e_2 - b \hat e_1 = \begin{pmatrix} -b \\ a \end{pmatrix}.$$

**NOTATION.** For $n \ge 3$ we denote

$$z = x_1 \times x_2 \times \dots \times x_{n-1},$$

and we call $z$ the *cross product* of the vectors $x_1, \dots, x_{n-1}$. For $n = 2$ this terminology doesn't work, so we write $z = Jx$, where $J$ can actually be regarded as the $2 \times 2$ matrix

$$J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$

(Another way to think of this is to use complex notation $x = a + ib$, and then $Jx = ix = -b + ia$.) If we depict $\mathbb{R}^2$ as in the above figure, then $J$ is a $90°$ counterclockwise rotation. (Thus $J \in SO(2)$.)

We now list the properties of the cross product. We shall not explicitly state the corresponding easy properties for the case $n = 2$.

**PROPERTIES.**

1. $x_1 \times \dots \times x_{n-1}$ is a linear function of each factor.
2. Interchanging two factors in the cross product results in multiplying the product by $-1$.
3. The cross product is uniquely determined by the formula
$$(x_1 \times \dots \times x_{n-1}) \cdot y = \det(x_1\; x_2\; \dots\; x_{n-1}\; y) \quad \text{for all } y \in \mathbb{R}^n.$$
4. $(x_1 \times \dots \times x_{n-1}) \cdot x_i = 0$.
5. $\|x_1 \times \dots \times x_{n-1}\|^2 = \det(x_1\; \dots\; x_{n-1}\;\; x_1 \times \dots \times x_{n-1})$.
6. $x_1 \times \dots \times x_{n-1} = 0 \iff x_1, \dots, x_{n-1}$ are linearly dependent. (Proof: "$\Leftarrow$" follows from 5, as the right side of 5 is then zero. To prove the converse, assume $x_1, \dots, x_{n-1}$ are linearly independent. Choose any $y \in \mathbb{R}^n$ such that $x_1, \dots, x_{n-1}, y$ are linearly independent. Then 3 implies $(x_1 \times \dots \times x_{n-1}) \cdot y \ne 0$. Thus $x_1 \times \dots \times x_{n-1} \ne 0$.)
7. $\|x_1 \times \dots \times x_{n-1}\| = \sqrt{\mathrm{Gram}(x_1, \dots, x_{n-1})}$. (Proof: let $z = x_1 \times \dots \times x_{n-1}$. Then 5 implies
$$\|z\|^4 = (\det(x_1\; \dots\; x_{n-1}\; z))^2 = \mathrm{Gram}(x_1, \dots, x_{n-1}, z).$$
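The formal-determinant definition is directly computable: expanding along the last column, the $i$-th component of $z$ is the signed minor obtained by deleting row $i$. This sketch (numpy; the function name `cross_general` is ours, not the text's) implements that expansion and checks properties 4 and 7 numerically:

```python
import numpy as np

def cross_general(X):
    """Cross product of the n-1 columns of the n x (n-1) array X,
    via cofactor expansion of the formal determinant along the
    last (unit-vector) column: z_i = (-1)^(n+i) * minor_i (1-indexed)."""
    n = X.shape[0]
    z = np.empty(n)
    for i in range(n):                           # i is 0-indexed here
        minor = np.delete(X, i, axis=0)          # delete row i
        z[i] = (-1) ** (n + i + 1) * np.linalg.det(minor)
    return z

rng = np.random.default_rng(4)
n = 5
X = rng.standard_normal((n, n - 1))              # four random vectors in R^5
z = cross_general(X)

# Property 4: z is orthogonal to every factor.
print(np.max(np.abs(X.T @ z)))                   # numerically zero

# Property 7: ||z|| = sqrt(Gram(x1, ..., x_{n-1})).
print(np.linalg.norm(z), np.sqrt(np.linalg.det(X.T @ X)))
```

For $n = 3$ this reduces to the familiar cross product, and for $n = 2$ it reduces to $Jx = (-b, a)$.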

Now in the computation of this Gram determinant all the terms $x_i \cdot z = 0$. Thus

$$\|z\|^4 = \det\begin{pmatrix} x_1 \cdot x_1 & \dots & x_1 \cdot x_{n-1} & 0 \\ \vdots & & \vdots & \vdots \\ x_{n-1} \cdot x_1 & \dots & x_{n-1} \cdot x_{n-1} & 0 \\ 0 & \dots & 0 & z \cdot z \end{pmatrix} = \|z\|^2\, \mathrm{Gram}(x_1, \dots, x_{n-1}).)$$

8. If $x_1, \dots, x_{n-1}$ are linearly independent, then the frame $\{x_1, \dots, x_{n-1}, x_1 \times \dots \times x_{n-1}\}$ has the standard orientation. (Proof: the relevant determinant equals $\|x_1 \times \dots \times x_{n-1}\|^2 > 0$, by 5.)

This completes the geometric description of the cross product. Exactly as on p. 7-9, we have the following

**SUMMARY.** The cross product $x_1 \times \dots \times x_{n-1}$ of $n - 1$ vectors in $\mathbb{R}^n$ is uniquely determined by the geometric description:

1. $x_1 \times \dots \times x_{n-1}$ is orthogonal to $x_1, \dots, x_{n-1}$;
2. $\|x_1 \times \dots \times x_{n-1}\|$ is the volume of the parallelogram with edges $x_1, \dots, x_{n-1}$;
3. in case $x_1 \times \dots \times x_{n-1} \ne 0$, the frame $\{x_1, \dots, x_{n-1}, x_1 \times \dots \times x_{n-1}\}$ has the standard orientation.

**PROBLEM 8-11.** Conclude immediately that

$$\hat e_1 \times \dots \times \hat e_n \;(\text{with } \hat e_i \text{ omitted}) = (-1)^{n+i}\, \hat e_i.$$

**PROBLEM 8-12.** Now prove that

$$(x_1 \times \dots \times x_{n-1}) \cdot \bigl(\hat e_1 \times \dots \times \hat e_n \;(\text{with } \hat e_i \text{ omitted})\bigr) = \det\bigl((x_1 \dots x_{n-1}) \;(\text{with row } i \text{ omitted})\bigr).$$

**PROBLEM 8-13.** Now prove that

$$(x_1 \times \dots \times x_{n-1}) \cdot (y_1 \times \dots \times y_{n-1}) = \det(x_i \cdot y_j).$$

(HINT: each side of this equation is multilinear and alternating in its dependence on the $y_j$'s. Therefore it suffices to check the result when the $y_j$'s are themselves unit coordinate vectors.)

The result of the last problem is a generalization of the seventh property above, and is also a generalization of the Lagrange identity given in Problem 7-3.

**PROBLEM 8-14.** In the preceding problem we must be assuming $n \ge 3$. State and prove the correct statement corresponding to $\mathbb{R}^2$.
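The identity of Problem 8-13 is also pleasant to confirm numerically. This sketch (numpy, random vectors in $\mathbb{R}^4$; a spot-check, not a proof, and `cross_general` is our helper name) evaluates both sides:

```python
import numpy as np

def cross_general(X):
    """Cross product of the n-1 columns of the n x (n-1) array X,
    via cofactor expansion along the formal unit-vector column."""
    n = X.shape[0]
    return np.array([(-1) ** (n + i + 1) * np.linalg.det(np.delete(X, i, axis=0))
                     for i in range(n)])

rng = np.random.default_rng(5)
n = 4
X = rng.standard_normal((n, n - 1))    # columns x1, x2, x3
Y = rng.standard_normal((n, n - 1))    # columns y1, y2, y3

lhs = cross_general(X) @ cross_general(Y)   # dot of the two cross products
rhs = np.linalg.det(X.T @ Y)                # det of the matrix (xi . yj)

print(lhs, rhs)                             # the two values agree
```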


### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1982, 2008. This chapter originates from material used by author at Imperial College, University of London, between 1981 and 1990. It is available free to all individuals,

### We know a formula for and some properties of the determinant. Now we see how the determinant can be used.

Cramer s rule, inverse matrix, and volume We know a formula for and some properties of the determinant. Now we see how the determinant can be used. Formula for A We know: a b d b =. c d ad bc c a Can we

### LS.6 Solution Matrices

LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

### Euclidean Geometry. We start with the idea of an axiomatic system. An axiomatic system has four parts:

Euclidean Geometry Students are often so challenged by the details of Euclidean geometry that they miss the rich structure of the subject. We give an overview of a piece of this structure below. We start

### 5. Orthogonal matrices

L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

### MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

### Linear Programming. March 14, 2014

Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

### Vectors Math 122 Calculus III D Joyce, Fall 2012

Vectors Math 122 Calculus III D Joyce, Fall 2012 Vectors in the plane R 2. A vector v can be interpreted as an arro in the plane R 2 ith a certain length and a certain direction. The same vector can be

### ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

### Mechanics 1: Vectors

Mechanics 1: Vectors roadly speaking, mechanical systems will be described by a combination of scalar and vector quantities. scalar is just a (real) number. For example, mass or weight is characterized

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

### Linear Algebra: Vectors

A Linear Algebra: Vectors A Appendix A: LINEAR ALGEBRA: VECTORS TABLE OF CONTENTS Page A Motivation A 3 A2 Vectors A 3 A2 Notational Conventions A 4 A22 Visualization A 5 A23 Special Vectors A 5 A3 Vector

### LINE INTEGRALS OF VECTOR FUNCTIONS: GREEN S THEOREM. Contents. 2. Green s Theorem 3

LINE INTEGRALS OF VETOR FUNTIONS: GREEN S THEOREM ontents 1. A differential criterion for conservative vector fields 1 2. Green s Theorem 3 1. A differential criterion for conservative vector fields We

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### DETERMINANTS. b 2. x 2

DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

### Vector Algebra CHAPTER 13. Ü13.1. Basic Concepts

CHAPTER 13 ector Algebra Ü13.1. Basic Concepts A vector in the plane or in space is an arrow: it is determined by its length, denoted and its direction. Two arrows represent the same vector if they have

### 1 Introduction to Matrices

1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

### 4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### Determinants. Chapter Properties of the Determinant

Chapter 4 Determinants Chapter 3 entailed a discussion of linear transformations and how to identify them with matrices. When we study a particular linear transformation we would like its matrix representation

### Systems of Linear Equations

Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

### 6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu)

6. Vectors For purposes of applications in calculus and physics, a vector has both a direction and a magnitude (length), and is usually represented as an arrow. The start of the arrow is the vector s foot,

### 5 =5. Since 5 > 0 Since 4 7 < 0 Since 0 0

a p p e n d i x e ABSOLUTE VALUE ABSOLUTE VALUE E.1 definition. The absolute value or magnitude of a real number a is denoted by a and is defined by { a if a 0 a = a if a

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### Linear Algebra Notes

Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

### Figure 1.1 Vector A and Vector F

CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

### Trigonometric Functions and Triangles

Trigonometric Functions and Triangles Dr. Philippe B. Laval Kennesaw STate University August 27, 2010 Abstract This handout defines the trigonometric function of angles and discusses the relationship between

### Vector Math Computer Graphics Scott D. Anderson

Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about

### MAT 1341: REVIEW II SANGHOON BAEK

MAT 1341: REVIEW II SANGHOON BAEK 1. Projections and Cross Product 1.1. Projections. Definition 1.1. Given a vector u, the rectangular (or perpendicular or orthogonal) components are two vectors u 1 and

### Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

### 1 Orthogonal projections and the approximation

Math 1512 Fall 2010 Notes on least squares approximation Given n data points (x 1, y 1 ),..., (x n, y n ), we would like to find the line L, with an equation of the form y = mx + b, which is the best fit

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

### Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0.

Basic Terminology for Systems of Equations in a Nutshell E L Lady A system of linear equations is something like the following: x 7x +4x =0 5x +8x x = Note that the number of equations is not required

### Lattice Point Geometry: Pick s Theorem and Minkowski s Theorem. Senior Exercise in Mathematics. Jennifer Garbett Kenyon College

Lattice Point Geometry: Pick s Theorem and Minkowski s Theorem Senior Exercise in Mathematics Jennifer Garbett Kenyon College November 18, 010 Contents 1 Introduction 1 Primitive Lattice Triangles 5.1

### Math 241, Exam 1 Information.

Math 241, Exam 1 Information. 9/24/12, LC 310, 11:15-12:05. Exam 1 will be based on: Sections 12.1-12.5, 14.1-14.3. The corresponding assigned homework problems (see http://www.math.sc.edu/ boylan/sccourses/241fa12/241.html)

### AP Physics - Vector Algrebra Tutorial

AP Physics - Vector Algrebra Tutorial Thomas Jefferson High School for Science and Technology AP Physics Team Summer 2013 1 CONTENTS CONTENTS Contents 1 Scalars and Vectors 3 2 Rectangular and Polar Form

### Linear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.

Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a sub-vector space of V[n,q]. If the subspace of V[n,q]

### Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors

Chapter 9. General Matrices An n m matrix is an array a a a m a a a m... = [a ij]. a n a n a nm The matrix A has n row vectors and m column vectors row i (A) = [a i, a i,..., a im ] R m a j a j a nj col

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### Practice Math 110 Final. Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16.

Practice Math 110 Final Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16. 1. Let A = 3 1 1 3 3 2. 6 6 5 a. Use Gauss elimination to reduce A to an upper triangular