3. INNER PRODUCT SPACES


 Brittney Goodwin
 1 years ago
 Views:
Transcription
1 . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space. In R and R we have the concepts of lengths and angles. In those spaces we use the dot product for this purpose, but the dot product only makes sense when we have components. In the absence of components we introduce something called an inner product to play the role of the dot product. We consider only vector spaces over C, or some subfield of C, such as R. An inner product space is a vector space V over C together with a function (called an inner product) that associates with every pair of vectors in V a complex number u v such that: () v u u v for all u, v V; () u v w u w v w for all u, v, w V; () u v u v for all u, v V and all C; (4) v v is real and for all v V; (5) v v = if and only if v =. These are known as the axioms for an inner product space (along with the usual vector space axioms). A Euclidean space is a vector space over R, where u v R for all u, v and where the above five axioms hold. In this case we can simplify the axioms slightly: () v u u v for all u, v V; () u v w u w v w for all u, v, w V; () u v u v for all u, v V and all C; (4) v v for all v V; (5) v v = if and only if v =. Example : Take V = R n as a vector space over R and define u v = u v u n v n where u = (u,, u n ) and v = (v,, v n ) (the usual dot product). This makes R n into a Euclidean space. When n = we can interpret this geometrically as the real Euclidean plane. When n = this is the usual Euclidean space. Example : Take V = C n as a vector space over C and define u = (u,, u n ) and v = (v,, v n ). u v u v... u v n n where Example : Take V = M n (R), the space of nn matrices over R where A B = trace(a T B). Note, this becomes the usual dot product if we consider an n n matrix as a vector with n components, since trace(a T B) = a b ij ij if A = (a ij ) and B = (b ij ). n i, j Example 4: Show that R can be made into a Euclidean space by defining u u = 5x x x y x y + 5y y when u = (x, y ) and u = (x, y ). Solution: We check the five axioms. 9
2 () u u = 5x x x y x y + 5y y = u u. () If u = (x, y ) then u + u u = 5(x + x )x x (y + y ) (x + x )y + 5(y + y )y = 5x x + 5x x x y x y x y x y + 5y y + 5y y = (5x x x y x y + 5y y ) + (5x x x y x y + 5y y ) = u u + u u. () u u = 5(x )x x (y ) (x )y + 5(y )y = [5x x x y x y + 5y y ] = u u. (4) If v = (x, y) then v v = 5x xy + 5y = 5(x xy/5 + y ) = 5(x y/5) + 4y /5 for all x, y. (5) v v = if and only if x = y/5 and y =, that is, if and only if v =. Now we move to a rather different sort of inner product, but one that still satisfies tha above axioms. Inner product spaces of this type are very important in mathematics. Example 5: Take V to be the space of continuous functions of a real variable and define u x v( x) NOTE: Axioms (), () show that the function u However u( x) v( x) dx u v is a linear transformation for a fixed v. u v u is not linear since v u u v u v u v v u... Lengths and Distances The length of a vector in an inner product space is defined by: (Remember that v v v. v v is real and nonnegative. The square root is the nonnegative one.) So the zero vector is the only one with zero length. All other vectors in an inner product space have positive length. Example 6: In R, with the dot product as inner product, the length of (x, y, z) is x + y + z. Example 6: If V is the space of continuous functions of a real variable and u x vx uxvx dx = f(x)g(x) dx then the length of f(x) = x is x 4 dx = 5. The following properties of length are easily proved. Theorem : For all vectors u, v and all scalars : () v =. v ; () v ; () v = if and only if v =.
3 Theorem (Cauchy Schwarz Inequality): u v u. v. Equality holds if and only if u = u v Proof: Let d = v. Now u dv = u dv u dv = u u d v u d u v + dd v v = u dd v + dd v = u d v = u u v v Since u dv, u v u v. Example 7: In R n we have y x y Example 8: f(x)g(x) dx x. i i f(x) dx g(x) dx. i i u v v v. The Triangle Inequality in the Euclidean plane states no side of a triangle can be longer than the sum of the other two sides. It is usually proved geometrically, or appealing to the principle that the shortest distance between two points is a straight line. In a general inner product space we must prove it from the axioms. Theorem (Triangle Inequality): For all vectors u, v: u + v u + v. Proof: u + v = u + v u + v = u u + v v + u v + v u = u + v + Re(u v) u + v + u v u + v + u. v ( u + v ) So u + v u + v. We define the distance between two vectors u, v to be u v. The distance version of the Triangle Inequality is as follows. If u, v, w are the vertices of a triangle in an inner product space V then u w u v + v w. It follows from the length version as u w = (u v) + (v w). If we take u, v, w to be vertices of a triangle in the Euclidean plane this gives the geometric version of the Triangle Inequality... Orthogonality It is not possible to define angles in a general inner product space, because inner products need not be real. But in any Euclidean space we can define these geometrical concepts even if the vectors have no obvious geometric significance.
4 Now we can use the Cauchy Schwarz inequality to define the angle between vectors. If u, v are nonzero vectors the angle between them is defined to be cos u v u. v. The Cauchy u v Schwarz inequality ensures that u. v lies between and. The angle between the vectors is / if and only if u v =. Example 9: Suppose we define the inner product between two continuous functions by u v / Solution: u u( x) v( x) dx. If u(x) = sin x and v(x) = x find the angle, between them in degrees. u v / x ux / / xsin xdx / = =. sin sin x x cos x (integrating by parts) xdx cos x = dx x / = sin x 4 = 4 which is approximately Hence u(x) is approximately.886. v x vx / x x = dx / = which is approximately Hence v(x)} is approximately.66. So if the angle (in degrees) between these two functions is then cos.886* Hence NOTE: Measuring the angle between two functions in degrees is rather useless and is done here only as a curiosity. By far the major application of angles in function spaces is to orthogonality. This is a concept that is meaningful for all inner product spaces, not just Euclidean ones. Two vectors in an inner product space are orthogonal if their inner product is zero. The same definition applies to Euclidean spaces, where angles are defined and there orthogonality means that either the angle between the vectors is / or one of the vectors is zero. So orthogonality is slightly more general than perpendicularity. A vector v in an inner product space is a unit vector if v =.
5 We define a set of vectors to be orthonormal if they are all unit vectors and each one is orthogonal to each of the others. An orthonormal basis is simply a basis that is orthonormal. Note that there is no such thing as an orthonormal vector. The property applies to a whole set of vectors, not to an individual vector. Theorem 4: An orthonormal set of vectors {v,, v n } is linearly independent. Proof: Suppose v + + n v n =. Then v + + n v n v r = for each r. But v + + n v n v r = v v r + + n v n v r = r v r v r since v r is orthogonal to the other vectors in the set = r since v r v r = v r =. Hence each r =. Because of the above theorem, if we want to show that a set of vectors is an orthonormal basis we need only show that it is orthonormal and that it spans the space. Linear independences come free. Another important consequence of the above theorem is that it is very easy to find the coordinates of a vector relative to an orthonormal basis. Theorem 5: If,,..., n is an orthogonal basis for the inner product V, and v V, then x v x v i where xi. i x n Proof: Let v = x + x x n n. Then v i x j j i j = x j j since the j i are mutually orthogonal Example : Consider R as an inner product space with the usual inner product. Show that the set,,,,,,,, is an orthonormal basis for R. Solution: They are clearly mutually orthogonal and, since = they are all unit vectors. Hence they are linearly independent and so span a dimensional subspace of R. Clearly this must be the whole of R. Example : Find the coordinates of (, 4, 5) relative to the above orthonormal basis. Solution: (, 4, 5) (/, /, /) = 7 (, 4, 5) (/, /, /) = (, 4, 5) (/, /, /) =. Hence the coordinates are (7,, ). In other words, (, 4, 5) = 7,,,,. Example : In C as an inner product space with the inner product (x, y ) (x, y ) = x x + y y show that the vectors ( i, 4i) and ( 4i, /5 + /5 i) are orthogonal. Use them to find an orthonormal basis for C.
6 Solution: ( i)( + 4i) + ( 4i)(/5 /5 i) = + 5i 5i =. ( i, 4i) = = and ( 4i, /5 + /5 i) = =. Hence { ( i, 4i), ( 4i, /5 + /5 i)} is an orthonormal basis. Theorem 6 (GramSchmidt): Every finitedimensional inner product space V has an orthonormal basis. Proof: We prove this by induction on the dimension of V. If dim(v) = then the empty set is an orthonormal basis. Suppose that every vector space of dimension n has an orthonormal basis of V and suppose that V is a vector space of dimension n +. Let {u,, u n+ } be a basis for V and let U = v,, v n. By the induction hypothesis U has an orthonormal basis v,, v n. Define w = u n+ u n+ v v u n+ v n v n. Then for each i, w v i = u n+ v i u n+ v i v i v i, since v j v i = when i j = u n+ v i v v i since v i v i = =. Hence w is orthogonal to each of the v i. But it may not be a unit vector, but it is nonzero. So we may divide it by its length without affecting orthogonality. So we define v n+ = w w and so obtain an orthonormal basis {v,, v n, v n+ } for V. In practice it is inconvenient to normalise the vectors (divide by their length) as we go, because we will have to carry these lengths along into our subsequent calculations. It s much easier to produce an orthogonal basis and then to normalise at the end. GRAMSCHMIDT ALGORITHM {u, u,..., u n } is a given basis; {v, v,..., v n } is an orthogonal basis; {w, w,..., w n } is an orthonormal basis. () LET v = u. () FOR r = TO n, LET v r = u r u r v v v u r v v v... u r v r v r v r. Multiply by a convenient factor to remove fractions. (4) FOR r = TO n, LET w r = v r v r. Example : Find an orthonormal basis for V = (,,, ), (,,, 4), (,,, ). Solution: u = basis v = orthogonal basis v w = orthonormal basis (,,, ) (,,, ) (,,, ) (,,, 4) (,,, ) 5 (,,, ) 5 (,,, ) (, 6, 6, ) 6 (6,, 8, ) 4
7 WORKING: v = u u v v v = (,,, 4) 4 (,,, ) = (,,, 4) 5 (,,, ). Multiply by, so now v = (, 4, 6, 8) 5(,,, ) = (,,, ). v = u u v v v u v v v = (,,, ) (,,, ) (,,, ). Multiply by, so now v = (,,, ) (5, 5, 5, 5) + (,,, ) = (, 6, 6, ). Example 4: Let V be the function space, x, x made into a Euclidean space by defining u x vx uxvx dx. Find an orthonormal basis for V. Solution: u = basis v = orthogonal basis v w = orthonormal basis x x (x ) x 6x 6x (6x 6x + ) WORKING: u x v x x dx x. x dx x v v (x) = u (x) u (x) v (x) v (x) v (x) = x. Multiply by, so now v (x) = x. u x v x x x. 4 ux vx x x dx x x dx x x 4 vx x dx 4x 4x dx x x x. v (x) = u (x) u (x) v (x) v (x) v (x) u (x) v (x) v (x) v (x) = x /6 / (x ) = x (x ) Multiply by 6, so now v (x) = 6x (x ) = 6x 6x vx 6 x 6x dx 6x 7x 48x x dx x 8x 6x 6x x
8 .4. Fourier Series The most important applications of inner product spaces involves function spaces with the inner product defined by means of an integral. Fourier series are infinite series in an infinite dimensional function space. However it is not appropriate here to give more than a cursory overview because to discuss them properly requires not only a good knowledge of integration, but a deep understanding of the convergence of infinite series. For any positive integer n the functions, cos nx and sin nx are periodic, with period. [This does not mean that they do not have smaller period, but simply that for each of them f(x + ) = f(x) for all x.] Take the space T spanned by all of these functions. Define the inner product on T as So T =, cos x, cos x,..., sin x, sin x,... u ( x) v( x) u( x) v( x) dx. T is an infinite dimensional vector space. Clearly, for every function f(x) T, f(x + ) = f(x). If f(x) is a continuous function for which f(x + ) = f(x) we may ask whether f(x) T. The answer is usually no. Such an f(x) may not be a linear combination of, cos x, cos x,..., sin x, sin x,... But remember that a linear combination is a finite linear combination. It may well be that f(x) can be expressed as an infinite series involving these functions. That is we might have f(x) = a + a cos nx b sin nx n n n. Such a series is called a Fourier series, named after the French mathematician Joseph de Fourier [768 8]. Of course, for this to make sense we would need this series to converge, which why we need to know a lot about infinite series in order to study Fourier series. But suppose we limit the values of n. Let T N =, sin x, sin x,..., cos x, cos x,... We can show that these N + functions are linearly independent. orthogonal. So T N is a N + dimensional Euclidean space. For n >, cos nx cos nx dx = and sin x = sin In fact, they are mutually nx dx =. Clearly = dx =. (Remember that here is not the absolute value but rather the length of the function.) N Fx By theorem 5, if F(x) = a + an cos nx bn sin nx then a = = F ( x) dx and, if n >, a n = F x cos nx cos nx n = F ( x)cos nx dx and b n = F x sin nx sin nx = F ( x)sin nx dx. A function in T N must be continuous and have period. But by no means does every such function belong to T N. However if F(x) is continuous and has period then it can be approximated by a function in T N, with the approximation getting better as N becomes larger. Even functions with period having some discontinuities can be so approximated. (We won t go into details here as to the precise conditions, or how close the approximation will be.) where a = If F (N) (x) is the approximation to F(x) in T N then F (N) (x) = a + a cos nx b sin nx F N ( x) dx and, if n >, a n = N n F N ( x)cos nx dx and b n = n F N ( x)sin nx dx. Now it can be shown that generally these integrals can be approximated by the corresponding integrals with F (N) replaced by F(x). Letting N it can be shown that if F[x] can be expressed as n a n n then a = a Fourier series a + cos nx b sin nx F ( x) dx and, if n >, n 6
9 a n = F ( x)cos nx dx and b n = F ( x)sin nx dx. Example 5: Find the Fourier series for the function F(x) = x if x / F(x) = x if / x / F(x) = x F(x + ) = F(x) for all x Answer: The solution involves a fair bit of integration by parts and, since this is not a calculus course, we omit the details and simply give the answer. 4 sin x sin x sin 5x F(x) =. 5 / / / /.5. Orthogonal Complements In R the normal to a plane through the origin is a plane through the origin. Every vector in the plane is orthogonal (i.e. perpendicular if they are nonzero) to every vector along the line. The line and the plane are said to be orthogonal complements of one another. The orthogonal complement of a subspace U, of V is defined to be: U = {v V u v = for all u U}. Intuitively it would seem that U should be U, but there are examples where this is not so. However for finitedimensional subspaces it is true. This follows from the following important theorem. Theorem 7: If U is a finitedimensional subspace of the vector space then U is also subspace of V and V = U U. Proof: () U is a subspace of V. Let v, w U and let u U. Then u v + w = u v + u w = + =. Hence v + w U T. Let v U and let be a scalar. Then u v =u v =. =. Hence v U and so we have shown that U is a subspace. () U U =. 7
10 Suppose v U U. Then v is orthogonal to itself, and so v v =. By the axioms of an inner product space this implies that v =. () V = U + U. Let u, u n be an orthonormal basis for U. Let v V, u = v u u v u n u n and let w = v u. Then for each i, w u i = v u i v u i u i u i since u j u i = if i j = v u i v u i since u i u i = =. Hence w U. Clearly u U. So v = u + w U + U. Theorem 8: If U is a subspace of a finitedimensional vector space then U = U. Proof: Suppose u U and let v U. Then u v =. Hence v u =. Since this holds for all v U, and so u U. So it follows that U U. Now V = U U = U U so dim U = dim U. Hence U = U. EXERCISES FOR CHAPTER Exercise : If v = (x, y ) and v = (x, y ) define u v = x x x y x y + 5y y and [u v] = x x + x y + x y + y y. Show that under one of these products R becomes a Euclidean space and under the other it is not a Euclidean space. Exercise : Find an orthonormal basis for (,, ), (,, 5). Exercise : Find an orthonormal basis for (,,,, ), (,,,, ), (,,,, ), (,,,, ). Exercise 4: Find an orthonormal basis for the function space, x, x where u x vx uxvx dx. Exercise 5: Find an orthonormal basis for the function space, x, sin x where / x vx uxvx u dx. Exercise 6: Find the orthogonal complement for (,, 6), (,, ) in R. Exercise 7: Find the orthogonal complement of (,,, ), (,,, ) in R 4. Exercise 8: Find the orthogonal complement of, x in the vector space, x, x, where u ( x) vx is defined to be u xvx dx. 8
11 SOLUTIONS FOR CHAPTER Exercise : Axioms (), (), () are easily checked for both products. The simplest way to check them is to let A = and B =. Then u v = uav T and [u v] = ubv T. It is now very simple to check these first three axioms. (4) If v = (x, y) then v v = x 4xy + 5y = (x y) + y for all x, y. If v = (x, y) then [v v] = x + 4xy + y = (x + y) y. When x = and y = this is negative. Hence under the product [u v] R is not a Euclidean space. (5) If v v = then x = y and y = so v =. Hence under the product v v, R is a Euclidean space. Exercise : u = basis v = orthogonal basis v w = orthonormal basis (,, ) (,, ) (,, ) (,, 5) (,, 6) 8 (,, 6) 8 WORKING: v = u u v v v = (,, 5) (,, ) = (,, 5) (,, ) = (,, 6). Exercise : u = basis v = orthogonal basis v w = orthonormal basis (,,,, ) (,,,, ) (,,,, ) (,,,, ) (,, 4,, ) 8 (,, 4,, ) 8 (,,,, ) (, 4, 4, 6, ) 7 (, 4, 4, 6, ) 7 4 (,,,, ) (9, 49, 56, 49, ) 6958 (9, 49, 56, 49, ) 6958 WORKING: v = u u v v v = (,,,, ) 4 (,,,, ). Multiply by 4. Now v = (4,, 4, 4, 4) (,,,, ) = (,, 4,, ). v = u u v v v u v v v = (,,,, ) 4 (,,,, ) 8 (,, 4,, ). Multiply by 8. Now v = (8, 8, 8,, 8) (,,,, ) (, 9,,, ) = (4, 6, 6, 4, 4). Divide by 4. Now v = (, 4, 4, 6, ). v 4 = u 4 u 4 v v v u 4 v v v u 4 v v v = (,,,, ) (,,,, ) 8 (,, 4,, ) 7 (, 4, 4, 6, ). Multiply by 4. Now v 4 = (4, 4, 4, 4, ) (,,,, ) (5, 45, 6, 5, 5) (6, 4, 4, 6, 6) = (9, 49, 56, 49, ). 9
12 Exercise 4: u = basis v = orthogonal basis v w = orthonormal basis x x (x ) x x x + (x x + ) WORKING: / u x v x x dx = x =. v (x) = u (x) u (x) v (x) v (x) v (x) = x /. = x. Multiply by, so now v (x) = x. v x x u dx = x x 4 9 dx 9 / = x 8x 9 = 8 4 =. x v x x dx = x =. 4x x v x x x dx u / = x dx x dx = x 5 = 6 5 = 5. 5 / x v (x) = u (x) u (x) v (x) v (x) v (x) u (x) v (x) v (x) v (x) = x / /5. / (x ) 4
13 = x 5 (x ) = x 6 5 x = x 6 5 x +. Multiplying by we now take v (x) = x x +. v (x) = x x dx = x 44x 9 4x / 6x 7 x dx = x 4x 9 4x / 7 x dx = 5/ / x x x x x = = =. Exercise 5: u = basis v = orthogonal basis v w = orthonormal basis WORKING: x x sin x sin x / / v x dx = / x v x x = / u xdx = x = v (x) = u (x) u (x) v (x) v (x) v (x) = x /4 /. = x. Multiply by, so now v (x) = x. v / x x dx / = 4x 4x dx x 6 8 sin x 8
14 4 = x x x = 6 = 6. / x v x x dx u sin / = =. cos x / / x v x x x dx u sin / = xsin x dx / sin = cos x x dx / / / x cos x dx x dx sin / / = + sin x + cos x = =. v (x) = u (x) u (x) v (x) v (x) v (x) u (x) v (x) v (x) v (x) = sin x..(cos x sin x) = sin x. Multiplying by we now take v (x) = sin x. / v (x) = sin x dx / = sin x 4 sin x 4 / dx = sin dx 4 sin x dx + 4 dx / = cos / / = sin x = 4 + / / x dx 4 sin x dx + 4 dx / x + / 4 cos x + 4
15 = 4 4. = 8 Exercise 6: First Solution: Let u = (,, ) and u = (,, 6). Suppose (x, y, z) is orthogonal to both u and u. Then x + y + z = and x + y + 6z =. so x=, z = k, y = k. 6 4 Hence the orthogonal complement is (,, ). Second Solution: We could take, as a third vector (,, ), being outside of the space spanned by a and b, and use the Gram Schmidt process. However we are content with an orthogonal basis. u = basis v = orthogonal basis v (,, ) (,, ) 9 (,, 6) (5,, 4) 45 (,, ) (,, ) WORKING: v = u u v v v = (,, 6) 9 9 (,, ). Multiply by 9 to get the new v to be v = 9(,, 6) 9(,, ) = (8, 7, 54) (8, 9, 8) = (, 8, 6). Perhaps it would now be a good idea to divide by 4 to get a new v as v = (5,, 4). u v = 5 and u v =. v = u u v v v u v v v = (,, ) 5 9 (,, ) 45 (5,, 4) Multiply by 45 to get a new v as v = (45, 45, 45) (5, 5, 5) (5,, 4) = (, 8, 9). Divide by 9 to get a new v as v = (,, ). Hence the orthogonal complement is (,, ). Third Solution: A third method, that only works for R, is to simply find u u. i j k u u = = (6 6)i ( 4)j + (6 )k = (, 8, 4). So the orthogonal complement is 6 (, 8, 4) = (,, ). You can make up your own mind as to which is the easiest method! Exercise 7: Here we cannot use the vector product. Suppose (x, y, z, w) is orthogonal to both vectors. Then we have a system of two homogeneous linear equations that is represented by 4
16 44. So w = h, z = k, y = h, x = k giving the vector (k, h, k, h). Taking h =, k = and h =, k =, we get a basis for the orthogonal complement, which is (,,, ), (,,, ). Exercise 8: dx cx bx a. = x c x b ax = a + b + c and dx x cx bx a. = 4 4 x c x b x a = a + b + c 4. Hence w(x) = a + bx + cx is orthogonal to both and x if a + b + c = and a + b + c 4 =. We solve the homogeneous system This gives c = k, b = k, 6a = 5k. Take k = 6. Then a = 5, b = 6, c = 6 and hence w(x) = 5 + 6x + 6x. Hence the orthogonal complement is 6x + 6x 5.
1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationA Modern Course on Curves and Surfaces. Richard S. Palais
A Modern Course on Curves and Surfaces Richard S. Palais Contents Lecture 1. Introduction 1 Lecture 2. What is Geometry 4 Lecture 3. Geometry of InnerProduct Spaces 7 Lecture 4. Linear Maps and the Euclidean
More informationOrthogonal Bases and the QR Algorithm
Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries
More informationAll of mathematics can be described with sets. This becomes more and
CHAPTER 1 Sets All of mathematics can be described with sets. This becomes more and more apparent the deeper into mathematics you go. It will be apparent in most of your upper level courses, and certainly
More informationChapter 5. Banach Spaces
9 Chapter 5 Banach Spaces Many linear equations may be formulated in terms of a suitable linear operator acting on a Banach space. In this chapter, we study Banach spaces and linear operators acting on
More information2 Integrating Both Sides
2 Integrating Both Sides So far, the only general method we have for solving differential equations involves equations of the form y = f(x), where f(x) is any function of x. The solution to such an equation
More information784: algebraic NUMBER THEORY (Instructor s Notes)*
Math 784: algebraic NUMBER THEORY (Instructor s Notes)* Algebraic Number Theory: What is it? The goals of the subject include: (i) to use algebraic concepts to deduce information about integers and other
More informationWHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT?
WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly
More informationMUSTHAVE MATH TOOLS FOR GRADUATE STUDY IN ECONOMICS
MUSTHAVE MATH TOOLS FOR GRADUATE STUDY IN ECONOMICS William Neilson Department of Economics University of Tennessee Knoxville September 29 289 by William Neilson web.utk.edu/~wneilson/mathbook.pdf Acknowledgments
More informationIntroduction to Differential Calculus. Christopher Thomas
Mathematics Learning Centre Introduction to Differential Calculus Christopher Thomas c 1997 University of Sydney Acknowledgements Some parts of this booklet appeared in a similar form in the booklet Review
More informationCHAPTER 1. Internal Set Theory
CHAPTER 1 Internal Set Theory Ordinarily in mathematics, when one introduces a new concept one defines it. For example, if this were a book on blobs I would begin with a definition of this new predicate:
More informationHow many numbers there are?
How many numbers there are? RADEK HONZIK Radek Honzik: Charles University, Department of Logic, Celetná 20, Praha 1, 116 42, Czech Republic radek.honzik@ff.cuni.cz Contents 1 What are numbers 2 1.1 Natural
More informationYou know from calculus that functions play a fundamental role in mathematics.
CHPTER 12 Functions You know from calculus that functions play a fundamental role in mathematics. You likely view a function as a kind of formula that describes a relationship between two (or more) quantities.
More information4. FIRST STEPS IN THE THEORY 4.1. A
4. FIRST STEPS IN THE THEORY 4.1. A Catalogue of All Groups: The Impossible Dream The fundamental problem of group theory is to systematically explore the landscape and to chart what lies out there. We
More informationBasic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011
Basic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011 A. Miller 1. Introduction. The definitions of metric space and topological space were developed in the early 1900 s, largely
More informationAn Elementary Introduction to Modern Convex Geometry
Flavors of Geometry MSRI Publications Volume 3, 997 An Elementary Introduction to Modern Convex Geometry KEITH BALL Contents Preface Lecture. Basic Notions 2 Lecture 2. Spherical Sections of the Cube 8
More informationAn Introductory Course in Elementary Number Theory. Wissam Raji
An Introductory Course in Elementary Number Theory Wissam Raji 2 Preface These notes serve as course notes for an undergraduate course in number theory. Most if not all universities worldwide offer introductory
More informationPROOFS BY DESCENT KEITH CONRAD
PROOFS BY DESCENT KEITH CONRAD As ordinary methods, such as are found in the books, are inadequate to proving such difficult propositions, I discovered at last a most singular method... that I called the
More informationNotes on finite group theory. Peter J. Cameron
Notes on finite group theory Peter J. Cameron October 2013 2 Preface Group theory is a central part of modern mathematics. Its origins lie in geometry (where groups describe in a very detailed way the
More informationReading material on the limit set of a Fuchsian group
Reading material on the limit set of a Fuchsian group Recommended texts Many books on hyperbolic geometry and Kleinian and Fuchsian groups contain material about limit sets. The presentation given here
More informationFigure 2.1: Center of mass of four points.
Chapter 2 Bézier curves are named after their inventor, Dr. Pierre Bézier. Bézier was an engineer with the Renault car company and set out in the early 196 s to develop a curve formulation which would
More informationControllability and Observability of Partial Differential Equations: Some results and open problems
Controllability and Observability of Partial Differential Equations: Some results and open problems Enrique ZUAZUA Departamento de Matemáticas Universidad Autónoma 2849 Madrid. Spain. enrique.zuazua@uam.es
More informationThe Set Data Model CHAPTER 7. 7.1 What This Chapter Is About
CHAPTER 7 The Set Data Model The set is the most fundamental data model of mathematics. Every concept in mathematics, from trees to real numbers, is expressible as a special kind of set. In this book,
More informationFoundations of Data Science 1
Foundations of Data Science John Hopcroft Ravindran Kannan Version /4/204 These notes are a first draft of a book being written by Hopcroft and Kannan and in many places are incomplete. However, the notes
More informationFoundations of Data Science 1
Foundations of Data Science John Hopcroft Ravindran Kannan Version 2/8/204 These notes are a first draft of a book being written by Hopcroft and Kannan and in many places are incomplete. However, the notes
More informationALGEBRAIC NUMBER THEORY AND QUADRATIC RECIPROCITY
ALGEBRAIC NUMBER THEORY AND QUADRATIC RECIPROCITY HENRY COHN, JOSHUA GREENE, JONATHAN HANKE 1. Introduction These notes are from a series of lectures given by Henry Cohn during MIT s Independent Activities
More informationA Course on Number Theory. Peter J. Cameron
A Course on Number Theory Peter J. Cameron ii Preface These are the notes of the course MTH6128, Number Theory, which I taught at Queen Mary, University of London, in the spring semester of 2009. There
More informationA Tutorial on Spectral Clustering
A Tutorial on Spectral Clustering Ulrike von Luxburg Max Planck Institute for Biological Cybernetics Spemannstr. 38, 7276 Tübingen, Germany ulrike.luxburg@tuebingen.mpg.de This article appears in Statistics
More informationA Tutorial on Support Vector Machines for Pattern Recognition
c,, 1 43 () Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. A Tutorial on Support Vector Machines for Pattern Recognition CHRISTOPHER J.C. BURGES Bell Laboratories, Lucent Technologies
More informationONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK
ONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK Definition 1. A random walk on the integers with step distribution F and initial state x is a sequence S n of random variables whose increments are independent,
More information