3. INNER PRODUCT SPACES

3.1. Definition

So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R² and R³. But those spaces have more structure than just that of a vector space: in R² and R³ we have the concepts of lengths and angles. In those spaces we use the dot product for this purpose, but the dot product only makes sense when we have components. In the absence of components we introduce something called an inner product to play the role of the dot product. We consider only vector spaces over C, or some subfield of C, such as R.

An inner product space is a vector space V over C together with a function (called an inner product) that associates with every pair of vectors u, v in V a complex number ⟨u, v⟩ such that:
(1) ⟨v, u⟩ is the complex conjugate of ⟨u, v⟩ for all u, v ∈ V;
(2) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ for all u, v, w ∈ V;
(3) ⟨λu, v⟩ = λ⟨u, v⟩ for all u, v ∈ V and all λ ∈ C;
(4) ⟨v, v⟩ is real and ≥ 0 for all v ∈ V;
(5) ⟨v, v⟩ = 0 if and only if v = 0.
These are known as the axioms for an inner product space (along with the usual vector space axioms).

A Euclidean space is a vector space over R, where ⟨u, v⟩ ∈ R for all u, v and where the above five axioms hold. In this case we can simplify the axioms slightly:
(1) ⟨v, u⟩ = ⟨u, v⟩ for all u, v ∈ V;
(2) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ for all u, v, w ∈ V;
(3) ⟨λu, v⟩ = λ⟨u, v⟩ for all u, v ∈ V and all λ ∈ R;
(4) ⟨v, v⟩ ≥ 0 for all v ∈ V;
(5) ⟨v, v⟩ = 0 if and only if v = 0.

Example 1: Take V = Rⁿ as a vector space over R and define ⟨u, v⟩ = u₁v₁ + … + uₙvₙ where u = (u₁, …, uₙ) and v = (v₁, …, vₙ) (the usual dot product). This makes Rⁿ into a Euclidean space. When n = 2 we can interpret this geometrically as the real Euclidean plane. When n = 3 this is the usual Euclidean space.

Example 2: Take V = Cⁿ as a vector space over C and define ⟨u, v⟩ = u₁v̄₁ + … + uₙv̄ₙ (each v̄ᵢ being the complex conjugate of vᵢ), where u = (u₁, …, uₙ) and v = (v₁, …, vₙ).

Example 3: Take V = Mₙ(R), the space of n×n matrices over R, where ⟨A, B⟩ = trace(AᵀB). Note that this becomes the usual dot product if we consider an n×n matrix as a vector with n² components, since trace(AᵀB) = Σᵢ,ⱼ aᵢⱼbᵢⱼ if A = (aᵢⱼ) and B = (bᵢⱼ).

Example 4: Show that R² can be made into a Euclidean space by defining ⟨u₁, u₂⟩ = 5x₁x₂ - x₁y₂ - x₂y₁ + 5y₁y₂ when u₁ = (x₁, y₁) and u₂ = (x₂, y₂).
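Before checking the axioms by hand it can be reassuring to test them numerically. The sketch below is only an illustration (the matrix A and the helper name `inner` are ours, not part of the notes): it writes the Example 4 product as ⟨u, v⟩ = u A vᵀ and samples random vectors.

```python
import numpy as np

# Example 4's product can be written as <u, v> = u A v^T with this symmetric matrix.
A = np.array([[5.0, -1.0],
              [-1.0, 5.0]])

def inner(u, v):
    return u @ A @ v

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v, w = rng.normal(size=(3, 2))
    lam = rng.normal()
    assert np.isclose(inner(u, v), inner(v, u))                    # axiom (1)
    assert np.isclose(inner(u + v, w), inner(u, w) + inner(v, w))  # axiom (2)
    assert np.isclose(inner(lam * u, v), lam * inner(u, v))        # axiom (3)
    assert inner(u, u) >= 0                                        # axiom (4)

# Axiom (5) follows because A is positive definite (both eigenvalues are positive).
print(np.linalg.eigvalsh(A))   # [4. 6.]
```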

Solution: We check the five axioms.
(1) ⟨u₁, u₂⟩ = 5x₁x₂ - x₁y₂ - x₂y₁ + 5y₁y₂ = ⟨u₂, u₁⟩.
(2) If u₃ = (x₃, y₃) then
⟨u₁ + u₂, u₃⟩ = 5(x₁ + x₂)x₃ - (x₁ + x₂)y₃ - x₃(y₁ + y₂) + 5(y₁ + y₂)y₃
= (5x₁x₃ - x₁y₃ - x₃y₁ + 5y₁y₃) + (5x₂x₃ - x₂y₃ - x₃y₂ + 5y₂y₃)
= ⟨u₁, u₃⟩ + ⟨u₂, u₃⟩.
(3) ⟨λu₁, u₂⟩ = 5(λx₁)x₂ - (λx₁)y₂ - x₂(λy₁) + 5(λy₁)y₂ = λ[5x₁x₂ - x₁y₂ - x₂y₁ + 5y₁y₂] = λ⟨u₁, u₂⟩.
(4) If v = (x, y) then ⟨v, v⟩ = 5x² - 2xy + 5y² = 5(x² - 2xy/5 + y²) = 5(x - y/5)² + 24y²/5 ≥ 0 for all x, y.
(5) ⟨v, v⟩ = 0 if and only if x = y/5 and y = 0, that is, if and only if v = 0.

Now we move to a rather different sort of inner product, but one that still satisfies the above axioms. Inner product spaces of this type are very important in mathematics.

Example 5: Take V to be the space of continuous functions of a real variable and define ⟨u(x), v(x)⟩ = ∫₀¹ u(x)v(x) dx.

NOTE: Axioms (2) and (3) show that, for a fixed v, the function u ↦ ⟨u, v⟩ is a linear transformation. However, for a fixed u, the function v ↦ ⟨u, v⟩ is not linear, since ⟨u, λv⟩ = λ̄⟨u, v⟩ rather than λ⟨u, v⟩.

3.2. Lengths and Distances

The length of a vector in an inner product space is defined by ‖v‖ = √⟨v, v⟩. (Remember that ⟨v, v⟩ is real and non-negative. The square root is the non-negative one.) So the zero vector is the only one with zero length. All other vectors in an inner product space have positive length.

Example 6: In R³, with the dot product as inner product, the length of (x, y, z) is √(x² + y² + z²). In the function space of Example 5, with ⟨u(x), v(x)⟩ = ∫₀¹ u(x)v(x) dx, the length of f(x) = x² is √(∫₀¹ x⁴ dx) = 1/√5.

The following properties of length are easily proved.
Theorem 1: For all vectors v and all scalars λ:
(1) ‖λv‖ = |λ|·‖v‖;
(2) ‖v‖ ≥ 0;
(3) ‖v‖ = 0 if and only if v = 0.
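A quick numerical check of the function-space length above (a minimal sketch; scipy's quad is used only to do the one-dimensional integral, and the helper name `length` is ours):

```python
import numpy as np
from scipy.integrate import quad

def length(f, a=0.0, b=1.0):
    """Length of f under <u, v> = integral of u(x) v(x) dx over [a, b]."""
    value, _ = quad(lambda x: f(x) ** 2, a, b)
    return np.sqrt(value)

print(length(lambda x: x ** 2))   # 0.4472... = 1/sqrt(5)
print(1 / np.sqrt(5))
```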

Theorem 2 (Cauchy-Schwarz Inequality): |⟨u, v⟩| ≤ ‖u‖·‖v‖. Equality holds if and only if u = (⟨u, v⟩/⟨v, v⟩)v (for v ≠ 0), that is, when one of u, v is a scalar multiple of the other.
Proof: If v = 0 both sides are zero, so suppose v ≠ 0 and let d = ⟨u, v⟩/⟨v, v⟩. Now
0 ≤ ⟨u - dv, u - dv⟩ = ⟨u, u⟩ - d̄⟨u, v⟩ - d⟨v, u⟩ + dd̄⟨v, v⟩ = ‖u‖² - |⟨u, v⟩|²/‖v‖².
Hence |⟨u, v⟩|² ≤ ‖u‖²‖v‖², and taking square roots gives the result. Equality holds precisely when u - dv = 0.

Example 7: In Rⁿ we have (x₁y₁ + … + xₙyₙ)² ≤ (x₁² + … + xₙ²)(y₁² + … + yₙ²).

Example 8: (∫₀¹ f(x)g(x) dx)² ≤ (∫₀¹ f(x)² dx)(∫₀¹ g(x)² dx).

The Triangle Inequality in the Euclidean plane states that no side of a triangle can be longer than the sum of the other two sides. It is usually proved geometrically, or by appealing to the principle that the shortest distance between two points is a straight line. In a general inner product space we must prove it from the axioms.

Theorem 3 (Triangle Inequality): For all vectors u, v: ‖u + v‖ ≤ ‖u‖ + ‖v‖.
Proof: ‖u + v‖² = ⟨u + v, u + v⟩
= ⟨u, u⟩ + ⟨v, v⟩ + ⟨u, v⟩ + ⟨v, u⟩
= ‖u‖² + ‖v‖² + 2Re⟨u, v⟩
≤ ‖u‖² + ‖v‖² + 2|⟨u, v⟩|
≤ ‖u‖² + ‖v‖² + 2‖u‖·‖v‖
= (‖u‖ + ‖v‖)².
So ‖u + v‖ ≤ ‖u‖ + ‖v‖.

We define the distance between two vectors u, v to be ‖u - v‖. The distance version of the Triangle Inequality is as follows. If u, v, w are the vertices of a triangle in an inner product space V then ‖u - w‖ ≤ ‖u - v‖ + ‖v - w‖. It follows from the length version since u - w = (u - v) + (v - w). If we take u, v, w to be the vertices of a triangle in the Euclidean plane this gives the geometric version of the Triangle Inequality.
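Both inequalities are easy to test numerically; the following sketch (an illustration only) checks them for random vectors in R⁵ with the usual dot product.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    u, v = rng.normal(size=(2, 5))
    # Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
    # Triangle inequality: ||u + v|| <= ||u|| + ||v||
    assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12
print("both inequalities hold on all samples")
```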

3.3. Orthogonality

It is not possible to define angles in a general inner product space, because inner products need not be real. But in any Euclidean space we can define these geometrical concepts, even if the vectors have no obvious geometric significance.

We can use the Cauchy-Schwarz inequality to define the angle between vectors. If u, v are non-zero vectors the angle between them is defined to be cos⁻¹(⟨u, v⟩/(‖u‖·‖v‖)). The Cauchy-Schwarz inequality ensures that ⟨u, v⟩/(‖u‖·‖v‖) lies between -1 and 1. The angle between the vectors is π/2 if and only if ⟨u, v⟩ = 0.

Example 9: Suppose we define the inner product between two continuous functions by ⟨u(x), v(x)⟩ = ∫₀^{π/2} u(x)v(x) dx. If u(x) = sin x and v(x) = x, find the angle θ between them, in degrees.
Solution: ⟨u, v⟩ = ∫₀^{π/2} x sin x dx = [sin x - x cos x]₀^{π/2} = 1 (integrating by parts).
‖u‖² = ∫₀^{π/2} sin² x dx = ∫₀^{π/2} (1 - cos 2x)/2 dx = π/4, so ‖u(x)‖ = √(π/4), which is approximately 0.886.
‖v‖² = ∫₀^{π/2} x² dx = π³/24, so ‖v(x)‖ = √(π³/24), which is approximately 1.137.
So if θ is the angle (in degrees) between these two functions then cos θ = 1/(0.886 × 1.137) ≈ 0.993. Hence θ ≈ 6.9°.

NOTE: Measuring the angle between two functions in degrees is rather useless and is done here only as a curiosity. By far the major application of angles in function spaces is to orthogonality. This is a concept that is meaningful for all inner product spaces, not just Euclidean ones.
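The same angle can be found numerically; the short sketch below (an illustration, with the helper name `inner` ours) reproduces the figure of roughly 6.9 degrees.

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    return quad(lambda x: f(x) * g(x), 0.0, np.pi / 2)[0]

u = np.sin
v = lambda x: x
cos_theta = inner(u, v) / np.sqrt(inner(u, u) * inner(v, v))
print(np.degrees(np.arccos(cos_theta)))   # about 6.9 degrees
```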

Two vectors in an inner product space are orthogonal if their inner product is zero. The same definition applies to Euclidean spaces, where angles are defined, and there orthogonality means that either the angle between the vectors is π/2 or one of the vectors is zero. So orthogonality is slightly more general than perpendicularity.

A vector v in an inner product space is a unit vector if ‖v‖ = 1.

We define a set of vectors to be orthonormal if they are all unit vectors and each one is orthogonal to each of the others. An orthonormal basis is simply a basis that is orthonormal. Note that there is no such thing as an orthonormal vector. The property applies to a whole set of vectors, not to an individual vector.

Theorem 4: An orthonormal set of vectors {v₁, …, vₙ} is linearly independent.
Proof: Suppose λ₁v₁ + … + λₙvₙ = 0. Then ⟨λ₁v₁ + … + λₙvₙ, vᵣ⟩ = 0 for each r. But
⟨λ₁v₁ + … + λₙvₙ, vᵣ⟩ = λ₁⟨v₁, vᵣ⟩ + … + λₙ⟨vₙ, vᵣ⟩
= λᵣ⟨vᵣ, vᵣ⟩, since vᵣ is orthogonal to the other vectors in the set,
= λᵣ, since ⟨vᵣ, vᵣ⟩ = ‖vᵣ‖² = 1.
Hence each λᵣ = 0.

Because of the above theorem, if we want to show that a set of vectors is an orthonormal basis we need only show that it is orthonormal and that it spans the space. Linear independence comes free. Another important consequence of the above theorem is that it is very easy to find the coordinates of a vector relative to an orthonormal basis.

Theorem 5: If {v₁, …, vₙ} is an orthonormal basis for the inner product space V, and v ∈ V, then v = x₁v₁ + … + xₙvₙ where each xᵢ = ⟨v, vᵢ⟩.
Proof: Let v = x₁v₁ + x₂v₂ + … + xₙvₙ. Then ⟨v, vᵢ⟩ = Σⱼ xⱼ⟨vⱼ, vᵢ⟩ = xᵢ, since the vⱼ are mutually orthogonal and ⟨vᵢ, vᵢ⟩ = 1.

Example 10: Consider R³ as an inner product space with the usual inner product. Show that the set {(1/3)(1, 2, 2), (1/3)(2, 1, -2), (1/3)(2, -2, 1)} is an orthonormal basis for R³.
Solution: They are clearly mutually orthogonal and, since 1² + 2² + 2² = 9, they are all unit vectors. Hence they are linearly independent and so span a 3-dimensional subspace of R³. Clearly this must be the whole of R³.

Example 11: Find the coordinates of (3, 4, 5) relative to the above orthonormal basis.
Solution: (3, 4, 5)·(1/3, 2/3, 2/3) = 7
(3, 4, 5)·(2/3, 1/3, -2/3) = 0
(3, 4, 5)·(2/3, -2/3, 1/3) = 1.
Hence the coordinates are (7, 0, 1). In other words, (3, 4, 5) = 7·(1/3)(1, 2, 2) + (1/3)(2, -2, 1).
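The coordinate calculation in Example 11 is just three dot products; a short sketch (illustration only, with the basis stored as the rows of B):

```python
import numpy as np

B = np.array([[1, 2, 2],
              [2, 1, -2],
              [2, -2, 1]]) / 3.0        # the orthonormal basis of Example 10 (rows)
v = np.array([3.0, 4.0, 5.0])

coords = B @ v                          # x_i = <v, v_i>
print(coords)                           # [7. 0. 1.]
print(coords @ B)                       # rebuilds [3. 4. 5.]
```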

Example 12: In C² as an inner product space with the inner product ⟨(x₁, y₁), (x₂, y₂)⟩ = x₁x̄₂ + y₁ȳ₂, show that the vectors (2 - i, 3 - 4i) and (3 - 4i, -2/5 + 11i/5) are orthogonal. Use them to find an orthonormal basis for C².
Solution: (2 - i)(3 + 4i) + (3 - 4i)(-2/5 - 11i/5) = 10 + 5i - 10 - 5i = 0.
‖(2 - i, 3 - 4i)‖ = √(4 + 1 + 9 + 16) = √30 and ‖(3 - 4i, -2/5 + 11i/5)‖ = √(9 + 16 + 4/25 + 121/25) = √30.
Hence {(1/√30)(2 - i, 3 - 4i), (1/√30)(3 - 4i, -2/5 + 11i/5)} is an orthonormal basis.

Theorem 6 (Gram-Schmidt): Every finite-dimensional inner product space V has an orthonormal basis.
Proof: We prove this by induction on the dimension of V. If dim(V) = 0 then the empty set is an orthonormal basis. Suppose that every inner product space of dimension n has an orthonormal basis, and suppose that V is an inner product space of dimension n + 1. Let {u₁, …, uₙ₊₁} be a basis for V and let U = ⟨u₁, …, uₙ⟩. By the induction hypothesis U has an orthonormal basis {v₁, …, vₙ}. Define
w = uₙ₊₁ - ⟨uₙ₊₁, v₁⟩v₁ - … - ⟨uₙ₊₁, vₙ⟩vₙ.
Then for each i,
⟨w, vᵢ⟩ = ⟨uₙ₊₁, vᵢ⟩ - ⟨uₙ₊₁, vᵢ⟩⟨vᵢ, vᵢ⟩, since ⟨vⱼ, vᵢ⟩ = 0 when i ≠ j,
= ⟨uₙ₊₁, vᵢ⟩ - ⟨uₙ₊₁, vᵢ⟩, since ⟨vᵢ, vᵢ⟩ = 1,
= 0.
Hence w is orthogonal to each of the vᵢ. It may not be a unit vector, but it is non-zero (because uₙ₊₁ is not in U). So we may divide it by its length without affecting orthogonality. So we define vₙ₊₁ = w/‖w‖ and so obtain an orthonormal basis {v₁, …, vₙ, vₙ₊₁} for V.

In practice it is inconvenient to normalise the vectors (divide by their lengths) as we go, because we would have to carry these lengths along into our subsequent calculations. It is much easier to produce an orthogonal basis and then to normalise at the end.

GRAM-SCHMIDT ALGORITHM
{u₁, u₂, ..., uₙ} is a given basis; {v₁, v₂, ..., vₙ} is an orthogonal basis; {w₁, w₂, ..., wₙ} is an orthonormal basis.
(1) LET v₁ = u₁.
(2) FOR r = 2 TO n, LET vᵣ = uᵣ - (⟨uᵣ, v₁⟩/⟨v₁, v₁⟩)v₁ - (⟨uᵣ, v₂⟩/⟨v₂, v₂⟩)v₂ - … - (⟨uᵣ, vᵣ₋₁⟩/⟨vᵣ₋₁, vᵣ₋₁⟩)vᵣ₋₁.
(3) Multiply each vᵣ by a convenient factor to remove fractions.
(4) FOR r = 1 TO n, LET wᵣ = vᵣ/‖vᵣ‖.
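The algorithm translates directly into code. A minimal sketch (not part of the notes; it normalises at the end rather than clearing fractions, which only changes each vector by a scalar factor):

```python
import numpy as np

def gram_schmidt(basis):
    orthogonal = []
    for u in basis:
        v = np.array(u, dtype=float)
        for w in orthogonal:
            v -= (np.dot(u, w) / np.dot(w, w)) * w   # remove the component along each earlier v
        orthogonal.append(v)
    return [v / np.linalg.norm(v) for v in orthogonal]

# e.g. an orthonormal basis for the span of three vectors in R^4
for w in gram_schmidt([(1, 1, 1, 1), (1, 2, 3, 4), (1, 0, 1, 0)]):
    print(np.round(w, 4))
```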

Example 13: Find an orthonormal basis for V = ⟨(1, 1, 1, 1), (1, 2, 3, 4), (1, 0, 1, 0)⟩.
Solution:
u (basis)       v (orthogonal basis)   ‖v‖    w (orthonormal basis)
(1, 1, 1, 1)    (1, 1, 1, 1)           2      (1/2)(1, 1, 1, 1)
(1, 2, 3, 4)    (-3, -1, 1, 3)         √20    (1/√20)(-3, -1, 1, 3)
(1, 0, 1, 0)    (2, -6, 6, -2)         √80    (1/√80)(2, -6, 6, -2)

WORKING:
v₂ = u₂ - (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁ = (1, 2, 3, 4) - (10/4)(1, 1, 1, 1) = (1, 2, 3, 4) - (5/2)(1, 1, 1, 1).
Multiply by 2, so now v₂ = (2, 4, 6, 8) - 5(1, 1, 1, 1) = (-3, -1, 1, 3).
v₃ = u₃ - (⟨u₃, v₁⟩/⟨v₁, v₁⟩)v₁ - (⟨u₃, v₂⟩/⟨v₂, v₂⟩)v₂ = (1, 0, 1, 0) - (1/2)(1, 1, 1, 1) + (1/10)(-3, -1, 1, 3).
Multiply by 10, so now v₃ = (10, 0, 10, 0) - (5, 5, 5, 5) + (-3, -1, 1, 3) = (2, -6, 6, -2).

Example 14: Let V be the function space ⟨1, x, x²⟩ made into a Euclidean space by defining ⟨u(x), v(x)⟩ = ∫₀¹ u(x)v(x) dx. Find an orthonormal basis for V.
Solution:
u (basis)   v (orthogonal basis)   ‖v‖     w (orthonormal basis)
1           1                      1       1
x           2x - 1                 1/√3    √3(2x - 1)
x²          6x² - 6x + 1           1/√5    √5(6x² - 6x + 1)

WORKING:
⟨u₂, v₁⟩ = ∫₀¹ x dx = 1/2 and ⟨v₁, v₁⟩ = ∫₀¹ 1 dx = 1.
v₂(x) = u₂(x) - (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁(x) = x - 1/2. Multiply by 2, so now v₂(x) = 2x - 1.
⟨v₂, v₂⟩ = ∫₀¹ (2x - 1)² dx = ∫₀¹ (4x² - 4x + 1) dx = 4/3 - 2 + 1 = 1/3.
⟨u₃, v₁⟩ = ∫₀¹ x² dx = 1/3 and ⟨u₃, v₂⟩ = ∫₀¹ x²(2x - 1) dx = ∫₀¹ (2x³ - x²) dx = 1/2 - 1/3 = 1/6.
v₃(x) = u₃(x) - (⟨u₃, v₁⟩/⟨v₁, v₁⟩)v₁(x) - (⟨u₃, v₂⟩/⟨v₂, v₂⟩)v₂(x) = x² - 1/3 - ((1/6)/(1/3))(2x - 1) = x² - x + 1/6.
Multiply by 6, so now v₃(x) = 6x² - 6x + 1.
⟨v₃, v₃⟩ = ∫₀¹ (6x² - 6x + 1)² dx = ∫₀¹ (36x⁴ - 72x³ + 48x² - 12x + 1) dx = 36/5 - 18 + 16 - 6 + 1 = 1/5.
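A quick check that the three polynomials in the table really are mutually orthogonal on [0, 1] (a sketch; the names p0, p1, p2 are just for this illustration):

```python
from scipy.integrate import quad

p = [lambda x: 1.0,
     lambda x: 2 * x - 1,
     lambda x: 6 * x ** 2 - 6 * x + 1]

for i in range(3):
    for j in range(3):
        value, _ = quad(lambda x: p[i](x) * p[j](x), 0.0, 1.0)
        print(f"<p{i}, p{j}> = {value:+.4f}", end="  ")
    print()
# off-diagonal entries are 0; the diagonal gives 1, 1/3 and 1/5, as in the working above
```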

3.4. Fourier Series

The most important applications of inner product spaces involve function spaces with the inner product defined by means of an integral. Fourier series are infinite series in an infinite-dimensional function space. However it is not appropriate here to give more than a cursory overview, because to discuss them properly requires not only a good knowledge of integration, but a deep understanding of the convergence of infinite series.

For any positive integer n the functions 1, cos nx and sin nx are periodic, with period 2π. [This does not mean that they do not have a smaller period, but simply that for each of them f(x + 2π) = f(x) for all x.] Take the space T spanned by all of these functions:
T = ⟨1, cos x, cos 2x, ..., sin x, sin 2x, ...⟩.
Define the inner product on T as ⟨u(x), v(x)⟩ = ∫_{-π}^{π} u(x)v(x) dx.

T is an infinite-dimensional vector space. Clearly, for every function f(x) ∈ T, f(x + 2π) = f(x). If f(x) is a continuous function for which f(x + 2π) = f(x) we may ask whether f(x) ∈ T. The answer is usually no. Such an f(x) may not be a linear combination of 1, cos x, cos 2x, ..., sin x, sin 2x, ... But remember that a linear combination is a finite linear combination. It may well be that f(x) can be expressed as an infinite series involving these functions. That is, we might have
f(x) = a₀ + Σ_{n≥1} (aₙ cos nx + bₙ sin nx).
Such a series is called a Fourier series, named after the French mathematician Joseph Fourier [1768-1830]. Of course, for this to make sense we would need this series to converge, which is why we need to know a lot about infinite series in order to study Fourier series.

But suppose we limit the values of n. Let T_N = ⟨1, sin x, sin 2x, ..., sin Nx, cos x, cos 2x, ..., cos Nx⟩. We can show that these 2N + 1 functions are linearly independent. In fact, they are mutually orthogonal. So T_N is a (2N + 1)-dimensional Euclidean space. For n ≥ 1, ‖cos nx‖² = ∫_{-π}^{π} cos² nx dx = π and ‖sin nx‖² = ∫_{-π}^{π} sin² nx dx = π. Clearly ‖1‖² = ∫_{-π}^{π} 1 dx = 2π. (Remember that here ‖1‖ is not an absolute value but rather the length of the constant function 1.)

By Theorem 5 (applied to the orthonormal basis obtained by dividing each of these functions by its length), if F(x) = a₀ + Σ_{n=1}^{N} (aₙ cos nx + bₙ sin nx) then
a₀ = ⟨F(x), 1⟩/‖1‖² = (1/2π) ∫_{-π}^{π} F(x) dx
and, if n ≥ 1,
aₙ = ⟨F(x), cos nx⟩/‖cos nx‖² = (1/π) ∫_{-π}^{π} F(x) cos nx dx and bₙ = ⟨F(x), sin nx⟩/‖sin nx‖² = (1/π) ∫_{-π}^{π} F(x) sin nx dx.

A function in T_N must be continuous and have period 2π. But by no means does every such function belong to T_N. However if F(x) is continuous and has period 2π then it can be approximated by a function in T_N, with the approximation getting better as N becomes larger. Even functions with period 2π having some discontinuities can be so approximated. (We won't go into details here as to the precise conditions, or how close the approximation will be.)

If F⁽ᴺ⁾(x) is the approximation to F(x) in T_N then F⁽ᴺ⁾(x) = a₀ + Σ_{n=1}^{N} (aₙ cos nx + bₙ sin nx) where a₀ = (1/2π) ∫_{-π}^{π} F⁽ᴺ⁾(x) dx and, if n ≥ 1, aₙ = (1/π) ∫_{-π}^{π} F⁽ᴺ⁾(x) cos nx dx and bₙ = (1/π) ∫_{-π}^{π} F⁽ᴺ⁾(x) sin nx dx. Now it can be shown that generally these integrals can be approximated by the corresponding integrals with F⁽ᴺ⁾ replaced by F(x). Letting N → ∞, it can be shown that if F(x) can be expressed as a Fourier series a₀ + Σ_{n≥1} (aₙ cos nx + bₙ sin nx) then
a₀ = (1/2π) ∫_{-π}^{π} F(x) dx and, if n ≥ 1, aₙ = (1/π) ∫_{-π}^{π} F(x) cos nx dx and bₙ = (1/π) ∫_{-π}^{π} F(x) sin nx dx.
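The claim that these functions are mutually orthogonal, with ‖cos nx‖² = ‖sin nx‖² = π and ‖1‖² = 2π, is easy to confirm numerically (a sketch, for illustration only):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    return quad(lambda x: f(x) * g(x), -np.pi, np.pi)[0]

for m in range(1, 4):
    for n in range(1, 4):
        cc = inner(lambda x: np.cos(m * x), lambda x: np.cos(n * x))
        ss = inner(lambda x: np.sin(m * x), lambda x: np.sin(n * x))
        cs = inner(lambda x: np.cos(m * x), lambda x: np.sin(n * x))
        print(m, n, round(cc, 6), round(ss, 6), round(cs, 6))
# cc and ss equal pi when m = n and 0 otherwise; cs is always 0
print(round(inner(lambda x: 1.0, lambda x: 1.0), 6))   # 2*pi
```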

Example 15: Find the Fourier series for the function
F(x) = x if 0 ≤ x ≤ π/2,
F(x) = π - x if π/2 ≤ x ≤ 3π/2,
F(x) = x - 2π if 3π/2 ≤ x ≤ 2π,
F(x + 2π) = F(x) for all x.
Answer: The solution involves a fair bit of integration by parts and, since this is not a calculus course, we omit the details and simply give the answer:
F(x) = (4/π)(sin x - sin 3x/3² + sin 5x/5² - …).
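The partial sums of this series converge to the triangular wave quite quickly. The sketch below (an illustration only; the helper names are ours) compares an N-term partial sum with F(x) at a few points.

```python
import numpy as np

def F(x):
    # the triangular wave of Example 15, reduced to one period
    x = np.mod(x + np.pi / 2, 2 * np.pi) - np.pi / 2
    return np.where(x <= np.pi / 2, x, np.pi - x)

def partial_sum(x, terms=10):
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        total += (-1) ** k * np.sin(n * x) / n ** 2
    return 4 / np.pi * total

xs = np.linspace(-np.pi, np.pi, 7)
print(np.round(F(xs), 4))
print(np.round(partial_sum(xs), 4))
```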

3.5. Orthogonal Complements

In R³ the normal to a plane through the origin is a line through the origin. Every vector in the plane is orthogonal (i.e. perpendicular, if they are non-zero) to every vector along the line. The line and the plane are said to be orthogonal complements of one another.

The orthogonal complement of a subspace U of V is defined to be:
U⊥ = {v ∈ V : ⟨u, v⟩ = 0 for all u ∈ U}.
Intuitively it would seem that U⊥⊥ should be U, but there are examples where this is not so. However for finite-dimensional subspaces it is true. This follows from the following important theorem.

Theorem 7: If U is a finite-dimensional subspace of the inner product space V then U⊥ is also a subspace of V and V = U ⊕ U⊥.
Proof: (1) U⊥ is a subspace of V. Let v, w ∈ U⊥ and let u ∈ U. Then ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩ = 0 + 0 = 0. Hence v + w ∈ U⊥. Let v ∈ U⊥ and let λ be a scalar. Then ⟨u, λv⟩ = λ̄⟨u, v⟩ = λ̄·0 = 0. Hence λv ∈ U⊥, and so we have shown that U⊥ is a subspace.
(2) U ∩ U⊥ = 0. Suppose v ∈ U ∩ U⊥. Then v is orthogonal to itself, and so ⟨v, v⟩ = 0. By the axioms of an inner product space this implies that v = 0.
(3) V = U + U⊥. Let u₁, …, uₙ be an orthonormal basis for U. Let v ∈ V, let u = ⟨v, u₁⟩u₁ + … + ⟨v, uₙ⟩uₙ and let w = v - u. Then for each i,
⟨w, uᵢ⟩ = ⟨v, uᵢ⟩ - ⟨v, uᵢ⟩⟨uᵢ, uᵢ⟩, since ⟨uⱼ, uᵢ⟩ = 0 if i ≠ j,
= 0, since ⟨uᵢ, uᵢ⟩ = 1.
Hence w ∈ U⊥. Clearly u ∈ U. So v = u + w ∈ U + U⊥.

Theorem 8: If U is a subspace of a finite-dimensional inner product space then U⊥⊥ = U.
Proof: Suppose u ∈ U and let v ∈ U⊥. Then ⟨u, v⟩ = 0. Hence ⟨v, u⟩ = 0. Since this holds for all v ∈ U⊥, u ∈ U⊥⊥. So it follows that U ⊆ U⊥⊥. Now V = U ⊕ U⊥ = U⊥ ⊕ U⊥⊥, so dim U = dim V - dim U⊥ = dim U⊥⊥. Hence U = U⊥⊥.
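The decomposition v = u + w in the proof of Theorem 7 is exactly orthogonal projection onto U. A small sketch (illustration only; it uses numpy's QR factorisation simply as a way to get an orthonormal basis of U):

```python
import numpy as np

# U = span of the rows of M (a subspace of R^4); Q's columns are an orthonormal basis of U.
M = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0, 4.0]])
Q, _ = np.linalg.qr(M.T)           # orthonormalise the spanning vectors

v = np.array([4.0, -1.0, 2.0, 7.0])
u = Q @ (Q.T @ v)                  # u = sum of <v, u_i> u_i, the part of v lying in U
w = v - u                          # the part of v in the orthogonal complement

print(np.round(M @ w, 10))         # [0. 0.]: w is orthogonal to everything in U
print(np.allclose(u + w, v))       # True: v = u + w
```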

EXERCISES FOR CHAPTER 3

Exercise 1: If u = (x₁, y₁) and v = (x₂, y₂), define ⟨u, v⟩ = x₁x₂ - 2x₁y₂ - 2x₂y₁ + 5y₁y₂ and [u, v] = x₁x₂ + 2x₁y₂ + 2x₂y₁ + y₁y₂. Show that under one of these products R² becomes a Euclidean space and under the other it is not a Euclidean space.

Exercise 2: Find an orthonormal basis for the span of two given vectors in R³.

Exercise 3: Find an orthonormal basis for the span of four given vectors in R⁵.

Exercise 4: Find an orthonormal basis for the function space ⟨1, x, x²⟩ where ⟨u(x), v(x)⟩ = ∫₀¹ u(x)v(x)·x dx.

Exercise 5: Find an orthonormal basis for the function space ⟨1, x, sin x⟩ where ⟨u(x), v(x)⟩ = ∫₀^{π/2} u(x)v(x) dx.

Exercise 6: Find the orthogonal complement of ⟨(2, 3, 6), (2, 1, 2)⟩ in R³.

Exercise 7: Find the orthogonal complement of the span of two given vectors in R⁴.

Exercise 8: Find the orthogonal complement of ⟨1, x⟩ in the vector space ⟨1, x, x²⟩, where ⟨u(x), v(x)⟩ is defined to be ∫₀¹ u(x)v(x) dx.

SOLUTIONS FOR CHAPTER 3

Exercise 1: Axioms (1), (2), (3) are easily checked for both products. The simplest way to check them is to let A = [[1, -2], [-2, 5]] and B = [[1, 2], [2, 1]]. Then ⟨u, v⟩ = uAvᵀ and [u, v] = uBvᵀ. It is now very simple to check these first three axioms.
(4) If v = (x, y) then ⟨v, v⟩ = x² - 4xy + 5y² = (x - 2y)² + y² ≥ 0 for all x, y.
If v = (x, y) then [v, v] = x² + 4xy + y² = (x + 2y)² - 3y². When x = 2 and y = -1 this is negative. Hence under the product [u, v], R² is not a Euclidean space.
(5) If ⟨v, v⟩ = 0 then x = 2y and y = 0, so v = 0. Hence under the product ⟨u, v⟩, R² is a Euclidean space.

Exercise 2: Apply the Gram-Schmidt algorithm: take v₁ = u₁ and v₂ = u₂ - (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁, multiplying by a convenient factor to remove fractions, and then normalise. The orthonormal basis is {v₁/‖v₁‖, v₂/‖v₂‖}.

Exercise 3: Exactly as for Exercise 2, but with four vectors: v₁ = u₁ and, for r = 2, 3, 4, vᵣ = uᵣ - (⟨uᵣ, v₁⟩/⟨v₁, v₁⟩)v₁ - … - (⟨uᵣ, vᵣ₋₁⟩/⟨vᵣ₋₁, vᵣ₋₁⟩)vᵣ₋₁, removing fractions at each step; then wᵣ = vᵣ/‖vᵣ‖.

Exercise 4:
u (basis)   v (orthogonal basis)   ‖v‖     w (orthonormal basis)
1           1                      1/√2    √2
x           3x - 2                 1/2     2(3x - 2)
x²          10x² - 12x + 3         1/√6    √6(10x² - 12x + 3)

WORKING:
⟨v₁, v₁⟩ = ∫₀¹ x dx = 1/2 and ⟨u₂, v₁⟩ = ∫₀¹ x·x dx = 1/3.
v₂(x) = u₂(x) - (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁(x) = x - (1/3)/(1/2) = x - 2/3. Multiply by 3, so now v₂(x) = 3x - 2.
⟨v₂, v₂⟩ = ∫₀¹ (3x - 2)²·x dx = ∫₀¹ (9x³ - 12x² + 4x) dx = 9/4 - 4 + 2 = 1/4.
⟨u₃, v₁⟩ = ∫₀¹ x²·x dx = 1/4 and ⟨u₃, v₂⟩ = ∫₀¹ x²(3x - 2)·x dx = ∫₀¹ (3x⁴ - 2x³) dx = 3/5 - 1/2 = 1/10.
v₃(x) = u₃(x) - (⟨u₃, v₁⟩/⟨v₁, v₁⟩)v₁(x) - (⟨u₃, v₂⟩/⟨v₂, v₂⟩)v₂(x) = x² - 1/2 - (2/5)(3x - 2) = x² - (6/5)x + 3/10.
Multiplying by 10 we now take v₃(x) = 10x² - 12x + 3.
⟨v₃, v₃⟩ = ∫₀¹ (10x² - 12x + 3)²·x dx = ∫₀¹ (100x⁵ - 240x⁴ + 204x³ - 72x² + 9x) dx = 50/3 - 48 + 51 - 24 + 9/2 = 1/6.

Exercise 5:
u (basis)   v (orthogonal basis)
1           1
x           x - π/4
sin x       sin x - 2/π - ((96 - 24π)/π³)(x - π/4)

WORKING:
⟨v₁, v₁⟩ = ∫₀^{π/2} 1 dx = π/2 and ⟨u₂, v₁⟩ = ∫₀^{π/2} x dx = π²/8.
v₂(x) = u₂(x) - (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁(x) = x - (π²/8)/(π/2) = x - π/4.
⟨v₂, v₂⟩ = ∫₀^{π/2} (x - π/4)² dx = [(x - π/4)³/3]₀^{π/2} = 2(π/4)³/3 = π³/96.

⟨u₃, v₁⟩ = ∫₀^{π/2} sin x dx = [-cos x]₀^{π/2} = 1.
⟨u₃, v₂⟩ = ∫₀^{π/2} (x - π/4) sin x dx = ∫₀^{π/2} x sin x dx - (π/4) ∫₀^{π/2} sin x dx = [sin x - x cos x]₀^{π/2} - π/4 = 1 - π/4.
v₃(x) = u₃(x) - (⟨u₃, v₁⟩/⟨v₁, v₁⟩)v₁(x) - (⟨u₃, v₂⟩/⟨v₂, v₂⟩)v₂(x) = sin x - 2/π - ((1 - π/4)/(π³/96))(x - π/4) = sin x - 2/π - ((96 - 24π)/π³)(x - π/4).
Finally ‖v₁‖² = π/2, ‖v₂‖² = π³/96 and, since v₃ is the component of u₃ orthogonal to v₁ and v₂,
‖v₃‖² = ⟨u₃, u₃⟩ - ⟨u₃, v₁⟩²/‖v₁‖² - ⟨u₃, v₂⟩²/‖v₂‖² = π/4 - 2/π - 6(4 - π)²/π³.
So the orthonormal basis is {v₁/‖v₁‖, v₂/‖v₂‖, v₃/‖v₃‖} = {√(2/π), √(96/π³)(x - π/4), v₃(x)/‖v₃‖}.

Exercise 6:
First Solution: Let u₁ = (2, 1, 2) and u₂ = (2, 3, 6). Suppose (x, y, z) is orthogonal to both u₁ and u₂. Then 2x + y + 2z = 0 and 2x + 3y + 6z = 0. Subtracting, 2y + 4z = 0, so if z = k then y = -2k and x = 0. Hence the orthogonal complement is ⟨(0, -2, 1)⟩.

Second Solution: We could take, as a third vector, (1, 1, 1), being outside the space spanned by u₁ and u₂, and use the Gram-Schmidt process. However we are content with an orthogonal basis.
u (basis)    v (orthogonal basis)   ⟨v, v⟩
(2, 1, 2)    (2, 1, 2)              9
(2, 3, 6)    (-5, 2, 4)             45
(1, 1, 1)    (0, 2, -1)             5

WORKING:
v₂ = u₂ - (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁ = (2, 3, 6) - (19/9)(2, 1, 2). Multiply by 9 to get the new v₂ to be
v₂ = 9(2, 3, 6) - 19(2, 1, 2) = (18, 27, 54) - (38, 19, 38) = (-20, 8, 16).
Perhaps it would now be a good idea to divide by 4 to get a new v₂ as v₂ = (-5, 2, 4).
⟨u₃, v₁⟩ = 5 and ⟨u₃, v₂⟩ = 1.
v₃ = u₃ - (⟨u₃, v₁⟩/⟨v₁, v₁⟩)v₁ - (⟨u₃, v₂⟩/⟨v₂, v₂⟩)v₂ = (1, 1, 1) - (5/9)(2, 1, 2) - (1/45)(-5, 2, 4).
Multiply by 45 to get a new v₃ as v₃ = (45, 45, 45) - (50, 25, 50) - (-5, 2, 4) = (0, 18, -9). Divide by 9 to get a new v₃ as v₃ = (0, 2, -1).
Hence the orthogonal complement is ⟨(0, 2, -1)⟩.

Third Solution: A third method, that only works in R³, is simply to find u₁ × u₂.
u₁ × u₂ = det[[i, j, k], [2, 1, 2], [2, 3, 6]] = (6 - 6)i - (12 - 4)j + (6 - 2)k = (0, -8, 4).
So the orthogonal complement is ⟨(0, -8, 4)⟩ = ⟨(0, -2, 1)⟩.
You can make up your own mind as to which is the easiest method!
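The third method is a one-liner with numpy (a sketch, for illustration only):

```python
import numpy as np

u1 = np.array([2, 1, 2])
u2 = np.array([2, 3, 6])
n = np.cross(u1, u2)
print(n)               # [ 0 -8  4], i.e. the line spanned by (0, -2, 1)
print(n @ u1, n @ u2)  # 0 0
```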

Exercise 7: Here we cannot use the vector product. Suppose (x, y, z, w) is orthogonal to both of the given vectors. Then we have a system of two homogeneous linear equations in x, y, z, w. Row-reducing the system, two of the variables (say z and w) can be taken as parameters k and h, and the other two are expressed in terms of them. Taking h = 1, k = 0 and then h = 0, k = 1 gives two vectors which form a basis for the orthogonal complement.

Exercise 8: ⟨a + bx + cx², 1⟩ = ∫₀¹ (a + bx + cx²) dx = [ax + bx²/2 + cx³/3]₀¹ = a + b/2 + c/3 and
⟨a + bx + cx², x⟩ = ∫₀¹ (ax + bx² + cx³) dx = [ax²/2 + bx³/3 + cx⁴/4]₀¹ = a/2 + b/3 + c/4.
Hence w(x) = a + bx + cx² is orthogonal to both 1 and x if a + b/2 + c/3 = 0 and a/2 + b/3 + c/4 = 0. We solve this homogeneous system: clearing fractions gives 6a + 3b + 2c = 0 and 6a + 4b + 3c = 0, so b = -c and 6a = c. Take c = 6. Then a = 1, b = -6, c = 6 and hence w(x) = 1 - 6x + 6x². Hence the orthogonal complement is ⟨6x² - 6x + 1⟩. (Notice that this is the third vector of the orthogonal basis found in Example 14, as it must be, since that basis was built from 1, x, x² under the same inner product.)
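A numerical check (a sketch, for illustration only) that 6x² - 6x + 1 really is orthogonal to both 1 and x on [0, 1]:

```python
from scipy.integrate import quad

w = lambda x: 6 * x ** 2 - 6 * x + 1
print(quad(lambda x: w(x), 0.0, 1.0)[0])       # <w, 1> = 0 (up to rounding)
print(quad(lambda x: x * w(x), 0.0, 1.0)[0])   # <w, x> = 0 (up to rounding)
```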
