12. Inner Product Spaces

12.1. Vector spaces

A real vector space is a set of objects that you can do two things with: you can add two of them together to get another such object, and you can multiply one of them by any real number to get another such object. There is a set of axioms that a vector space must satisfy; you can find these in other textbooks. Similarly, a complex vector space is a set of objects that you can do two things with: you can add two of them together to get another such object, and you can multiply one of them by any complex number to get another such object. If you are speaking about a real vector space, you call any real number a scalar. If you are speaking about a complex vector space, you call any complex number a scalar.

If you have a (finite) list of vectors v_1, v_2, v_3, ..., v_n, the most general way that you can combine these vectors together to get another vector is to choose a list of scalars α_1, α_2, α_3, ..., α_n and form the linear combination

    α_1 v_1 + α_2 v_2 + α_3 v_3 + ⋯ + α_n v_n

Examples of real vector spaces include R^N, where the vectors are N-tuples of real numbers (most familiarly, R^2 — the plane, which is the set of ordered pairs of real numbers — and R^3 — three-dimensional space, which is the set of ordered triples of real numbers). The basic operations look like this: if x = (x_0, x_1, ..., x_{N−1}), y = (y_0, y_1, ..., y_{N−1}), and α is any real number, then

    x + y = (x_0 + y_0, x_1 + y_1, ..., x_{N−1} + y_{N−1})   and   αx = (αx_0, αx_1, ..., αx_{N−1}).

The zero vector is (0, 0, ..., 0).

Other examples include many different varieties of vector spaces whose members are real-valued functions on a certain domain. When you deal with such function spaces, you have to be careful of a few things. One is that each function must have the same domain — each function must be defined everywhere (actually, for spaces where membership is determined by some kind of integrability condition, almost everywhere is good enough). As an example, let's consider a set of real-valued functions suitable for feeding into the Laplace transform: the set of locally integrable functions on the interval [0, ∞) that don't grow too fast at ∞. Is f(t) = ln t a member of this set? Yes, although its domain isn't quite all of [0, ∞): it blows up at 0, but not so badly as to spoil its integrability there. Is f(t) = √(1 − t) a member of this set? No, because we need to know what it does between 1 and ∞. As such, it would be silly to ask what the Laplace transform of this function is.

You also must make sure that your set of functions really is a vector space. This primarily means that you have to check that the sum of any two such functions is also such a function, and that a constant times any such function is still such a function. Examples of function spaces include the following:

    Description                                                      Symbol
    Continuous functions on [a, b]                                   C[a, b]
    Periodic continuous functions                                    C(T_P)
    Continuous functions on the whole real line                      C(R)
    Functions with one continuous derivative on a set S
        (S can be [a, b], T_P, R, etc.)                              C^1(S)
    Bounded (actually, "essentially bounded") functions on S         L^∞(S)
    Absolutely integrable functions on S                             L^1(S)
    Square integrable functions on S                                 L^2(S)
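To make the R^N operations concrete, here is a minimal numerical sketch (not from the text) of a linear combination in R^3, using Python with NumPy; the particular vectors and scalars are made-up examples:

    import numpy as np

    # Two vectors in R^3 and two real scalars (arbitrary example values).
    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([-1.0, 0.0, 4.0])
    alpha1, alpha2 = 2.0, -0.5

    # Componentwise addition and scalar multiplication, exactly as defined above.
    w = alpha1 * v1 + alpha2 * v2      # the linear combination alpha1*v1 + alpha2*v2
    print(w)                           # [2.5  4.  4.]

    # The zero vector is the additive identity.
    assert np.array_equal(v1 + np.zeros(3), v1)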

Examples of complex vector spaces include C^N — the set of N-tuples of complex numbers, which is just like R^N except that both the entries (components) of the vectors and the scalars are allowed to be complex numbers — and various spaces of complex-valued functions. Be sure to understand that we are talking about having the values of the function be complex; we are not assuming that these are functions of a complex variable. As an example, e^{inx} is a complex-valued function of the real variable x. Our list of complex function spaces that we are likely to encounter is exactly the same as the list above, and when we say exactly, we mean that we are using the exact same symbols to name these spaces. When we use a symbol like C(T_P), we are not committing ourselves as to whether we mean the real-valued functions or the complex-valued functions — we either have to make that clear in the surrounding context, or else admit that in the particular case we have in mind it doesn't much matter which one we mean.

The study of vector spaces in general falls under the label linear algebra. In that study, sets of vectors (that is, sets of elements of some vector space) are probed as to what their span is and as to whether or not they are linearly independent (independent for short). For further details, see any reasonably-written textbook on the subject. Here are two definitions out of all of that study: if you have a function (or mapping, or map, etc. — the same idea has many names) T that has as its domain all of one vector space (the things that you can feed this mapping), and that has as its outputs elements of some other (well, possibly other — no rule says it can't be the same one) vector space, then T is a linear transformation (or linear mapping, or just plain linear) if:

    T(αu + βv) = αT(u) + βT(v)   for all vectors u and v and all scalars α and β.   (12.1)

A linear functional is something whose domain is some vector space (it takes vectors as input), that gives scalars as output (real numbers if it is a real vector space and complex numbers if it is a complex vector space), and that satisfies equation (12.1). By now you should be familiar with the principle that the thing that takes in functions and gives back Fourier coefficients is, by any reasonable definition, linear.

12.2. Real inner product spaces

The vector space R^N comes equipped with a very special geometrically-inspired algebraic operation called the dot product. The dot product takes in two vectors and gives you back a real number — a scalar. Synonyms for the words "dot product" include "inner product" and "scalar product". We will write the dot product of the two vectors u and v in at least two different notations: as u · v or as ⟨u, v⟩. The dot product on R^N is defined as follows: if x = (x_0, x_1, ..., x_{N−1}) and y = (y_0, y_1, ..., y_{N−1}), then

    x · y = ⟨x, y⟩ = x_0 y_0 + x_1 y_1 + ⋯ + x_{N−1} y_{N−1} = ∑_{k=0}^{N−1} x_k y_k   (12.2)

The geometric description of the dot product is the following: if θ is the angle between x and y, then

    θ = cos^{−1} ( (x · y) / (‖x‖ ‖y‖) )   (12.3)

where

    ‖x‖ = |x| = √( x_0^2 + x_1^2 + ⋯ + x_{N−1}^2 ) = √(x · x)   (12.4)

is the length or norm of the vector x. If the dot product of two vectors is zero, then the angle between them is a right angle and we call the two vectors orthogonal. We also notice in (12.4) that computations of lengths can use the dot product, as there seems to be a close relationship. (There is also the familiar idea that in any problem involving lengths and the Pythagorean Theorem, it is frequently easier to work with the square of a distance than it is to work with the distance itself.)
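As a quick illustration of equations (12.2)–(12.4), here is a small sketch (not from the text; Python with NumPy, with made-up vectors):

    import numpy as np

    x = np.array([3.0, 0.0, 4.0])
    y = np.array([1.0, 2.0, 2.0])

    dot = np.dot(x, y)                           # equation (12.2): sum of x_k * y_k
    norm_x = np.sqrt(np.dot(x, x))               # equation (12.4): ||x|| = sqrt(x . x)
    norm_y = np.sqrt(np.dot(y, y))
    theta = np.arccos(dot / (norm_x * norm_y))   # equation (12.3)
    print(dot, norm_x, norm_y, theta)            # 11.0  5.0  3.0  arccos(11/15) ~ 0.75 rad

    # Orthogonality: a zero dot product means a right angle.
    print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 7.0])))   # 0.0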

What if we have a real vector space that is not R^N — such as one of the function spaces? We may, under some circumstances, be able to define something called an inner product on that vector space. An inner product on a real vector space is something that has all of the vital properties of the dot product on R^N — if we can figure out what those properties are. The accepted line of jargon for a real inner product is that it is a positive-definite symmetric bilinear form. What does this mean? It means that it is an object that takes as input two elements of the vector space and gives back a real number; for vectors u and v, let us write the inner product as ⟨u, v⟩. The other words in this description have the following meanings:

    Symmetric:          ⟨u, v⟩ = ⟨v, u⟩   for all vectors u and v   (12.5)

    Bilinear:           ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩   and   ⟨u, αv + βw⟩ = α⟨u, v⟩ + β⟨u, w⟩
                        for all u, v, w, α and β   (12.6)

    Positive definite:  ⟨u, u⟩ ≥ 0 always, and ⟨u, u⟩ > 0 for u ≠ 0   (12.7)

The best example that we will have of something that satisfies all of these properties is to take a space whose elements are functions and to let ⟨f, g⟩ = ∫ fg, or something very much like that. For instance, for functions periodic of period P, let us define the standard real inner product of f and g as:

    ⟨f, g⟩ = (1/P) ∫_0^P f(x) g(x) dx   (12.8)

It is not hard to show that this satisfies properties (12.5), (12.6), and (12.7) — except possibly for a little fudging on the second part of (12.7), but let's not worry too much about that now. Any real vector space on which a real inner product has been defined is a real inner product space.
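Here is a numerical sketch (not from the text) of the inner product (12.8), approximating the integral by a Riemann sum; the period P and the functions f and g are made-up choices:

    import numpy as np

    P = 2.0                                   # an arbitrary period for the example
    x = np.linspace(0.0, P, 4000, endpoint=False)
    dx = P / len(x)

    def inner(f, g):
        # Riemann-sum approximation of (12.8): (1/P) * integral over [0, P] of f*g.
        return np.sum(f(x) * g(x)) * dx / P

    f = lambda t: np.sin(2 * np.pi * t / P)
    g = lambda t: 1.0 + t

    # Spot-check the defining properties on these samples:
    print(np.isclose(inner(f, g), inner(g, f)))   # symmetry, property (12.5)
    print(inner(f, f) > 0)                        # positive definiteness, (12.7)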

12.3. Complex inner product spaces

We'd like to do this for complex vector spaces, but we realize that there are going to have to be some subtle modifications of the details. In particular, if we want something positive definite — something like (12.7) — we are going to have to use something with a lot of complex conjugates and absolute values in it. Conventional wisdom has settled on just what we need, and it is the following: a complex inner product is something that takes as input two vectors from a complex vector space and gives as output a complex number, and it is a positive-definite Hermitian sesquilinear form. Actually, sesquilinear is a part of Hermitian, but I threw it in to make the phrase sound more impressive. Sesquilinear means linear in one factor and conjugate-linear in the other factor — that's something like linear, but with some stray complex conjugates hanging around. It makes no earthly difference which factor is which, but we have to come to some choice and stick to it. As accident would have it, mathematicians have fallen into the rut of always putting the complex conjugate on the second factor, while physicists have fallen into the rut of putting the complex conjugate on the first factor. In what follows, we will follow the mathematicians' convention — we suppose that a physicist will just have to read it in a mirror. Here's what these words mean (with z* denoting the complex conjugate of z):

    Hermitian:          ⟨u, v⟩ = ⟨v, u⟩*   for all vectors u and v   (12.9)

    Sesquilinear:       ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩   and   ⟨u, αv + βw⟩ = α*⟨u, v⟩ + β*⟨u, w⟩
                        for all u, v, w, α and β   (12.10)

    Positive definite:  ⟨u, u⟩ ≥ 0 always, and ⟨u, u⟩ > 0 for u ≠ 0   (12.11)

We will give two examples. The standard inner product on C^N is defined as follows: if z = (z_0, z_1, ..., z_{N−1}) and w = (w_0, w_1, ..., w_{N−1}), then

    ⟨z, w⟩ = z_0 w_0* + z_1 w_1* + ⋯ + z_{N−1} w_{N−1}* = ∑_{k=0}^{N−1} z_k w_k*   (12.12)

On a function space, the inner product of f and g will be ⟨f, g⟩ = ∫ f g*, or something very much like that; to make it specific, for two functions periodic of period P, the standard complex inner product of f and g is:

    ⟨f, g⟩ = (1/P) ∫_0^P f(x) g(x)* dx   (12.13)

Any complex vector space on which a complex inner product has been defined is called a complex inner product space.
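A small sketch (not from the text) of the standard inner product (12.12) on C^3, with made-up vectors, checking the Hermitian and positive-definite properties numerically:

    import numpy as np

    z = np.array([1 + 2j, 0.5j, -1.0 + 0j])
    w = np.array([2 - 1j, 3 + 0j, 1 + 1j])

    def ip(u, v):
        # Equation (12.12): conjugate on the second factor (the mathematicians'
        # convention adopted in the text).
        return np.sum(u * np.conj(v))

    # Hermitian property (12.9): <z, w> equals the conjugate of <w, z>.
    print(np.isclose(ip(z, w), np.conj(ip(w, z))))       # True

    # Positive definiteness (12.11): <z, z> is real and positive for z != 0.
    print(ip(z, z))                                      # (6.25+0j)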

12.4. The real Cauchy-Buniakovski-Schwarz inequality

Theorem (the Cauchy-Schwarz inequality): in any real inner product space, for any two vectors u and v,

    ⟨u, v⟩^2 ≤ ⟨u, u⟩ ⟨v, v⟩   (12.14)

with equality holding if and only if one of these vectors is a scalar multiple of the other.

The proof uses nothing but the properties of an inner product — that is, (12.5), (12.6), and (12.7) — and the technique of completing the square. If v = 0, there is nothing to prove, both sides of the inequality being zero, so we assume that v ≠ 0. Consider the vector u − λv for any real number λ. By property (12.7), the inner product of this vector with itself is always greater than or equal to zero (with equality only if it is the zero vector). Thus:

    0 ≤ ⟨u − λv, u − λv⟩ = ⟨u, u⟩ − ⟨u, λv⟩ − ⟨λv, u⟩ + ⟨λv, λv⟩        by (12.6)
                         = ⟨u, u⟩ − 2λ⟨u, v⟩ + λ^2 ⟨v, v⟩               by (12.5) and (12.6)

Since we know that ⟨v, v⟩ is positive (by (12.7) again), we can divide both sides of this inequality by it, getting:

    λ^2 − 2λ (⟨u, v⟩/⟨v, v⟩) + ⟨u, u⟩/⟨v, v⟩ ≥ 0

Now complete the square — that is, use formula (11.1) from the previous chapter:

    [ λ − ⟨u, v⟩/⟨v, v⟩ ]^2 − [ ⟨u, v⟩/⟨v, v⟩ ]^2 + ⟨u, u⟩/⟨v, v⟩ ≥ 0

    ( λ − ⟨u, v⟩/⟨v, v⟩ )^2 ≥ ( ⟨u, v⟩/⟨v, v⟩ )^2 − ⟨u, u⟩/⟨v, v⟩

This is true for all real λ. If we let λ be that value which minimizes the left hand side — that is, if we let λ = ⟨u, v⟩/⟨v, v⟩ — then the left hand side is zero and the right hand side must be less than or equal to zero. Hence,

    0 ≥ ⟨u, v⟩^2/⟨v, v⟩^2 − ⟨u, u⟩/⟨v, v⟩,    or    ⟨u, v⟩^2 ≤ ⟨u, u⟩ ⟨v, v⟩.

As it turns out, there is another notation in which this fact is usually expressed. From property (12.7), the inner product of a vector with itself is positive and hence has a real, positive square root. We call this square root the norm of the vector, by analogy with (12.4). That is, in any real inner product space we define the norm of a vector u to be

    ‖u‖ = √⟨u, u⟩   (12.15)

Using this terminology, we take the square root of both sides of the inequality (12.14) to get:

The Cauchy-Schwarz inequality (norm version):

    |⟨u, v⟩| ≤ ‖u‖ ‖v‖   (12.16)

One side effect of this is that we can form the fraction ⟨u, v⟩ / (‖u‖ ‖v‖) and be assured that it is between −1 and 1. Hence it has an arccosine, and we call that arccosine the angle between the two vectors.
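The inequality (12.16) is easy to spot-check numerically. A minimal sketch (not from the text), using random vectors in R^5:

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(1000):
        u = rng.normal(size=5)
        v = rng.normal(size=5)
        # (12.16), with a tiny tolerance for floating-point roundoff.
        assert abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12

    # Equality holds when one vector is a scalar multiple of the other.
    u = rng.normal(size=5)
    print(np.isclose(abs(np.dot(u, 3 * u)),
                     np.linalg.norm(u) * np.linalg.norm(3 * u)))   # True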

12.5. The complex Cauchy-Buniakovski-Schwarz inequality

Theorem (the Cauchy-Schwarz inequality): in any complex inner product space, for any two vectors u and v,

    |⟨u, v⟩|^2 ≤ ⟨u, u⟩ ⟨v, v⟩   (12.17)

with equality holding if and only if one of these vectors is a scalar multiple of the other.

What's the difference between this and equation (12.14)? Two things: the fact that the vectors involved belong to a complex rather than a real vector space, and the way that we had to make the left hand side the square of the absolute value of something — we can't put just the square of a complex number into an inequality and have it be meaningful.

The proof uses nothing but the properties of an inner product — that is, (12.9), (12.10), and (12.11) — and the technique of completing the square for a complex variable. If v = 0, there is nothing to prove, both sides of the inequality being zero, so we assume that v ≠ 0. Consider the vector u − λv for any complex number λ. By property (12.11), the inner product of this vector with itself is always greater than or equal to zero (with equality only if it is the zero vector). Thus:

    0 ≤ ⟨u − λv, u − λv⟩ = ⟨u, u⟩ − ⟨u, λv⟩ − ⟨λv, u⟩ + ⟨λv, λv⟩           by (12.10)
                         = ⟨u, u⟩ − λ*⟨u, v⟩ − λ⟨v, u⟩ + λλ*⟨v, v⟩
                         = ⟨u, u⟩ − λ*⟨u, v⟩ − λ⟨u, v⟩* + λλ*⟨v, v⟩         by (12.9) and (12.10)

Since we know that ⟨v, v⟩ is positive (by (12.11) again), we can divide both sides of this inequality by it, getting:

    λλ* − λ* (⟨u, v⟩/⟨v, v⟩) − λ (⟨u, v⟩*/⟨v, v⟩) + ⟨u, u⟩/⟨v, v⟩ ≥ 0

Next, employ equation (11.7) of the Completing the Square chapter:

    ( λ − ⟨u, v⟩/⟨v, v⟩ ) ( λ* − ⟨u, v⟩*/⟨v, v⟩ ) − ⟨u, v⟩⟨u, v⟩*/⟨v, v⟩^2 + ⟨u, u⟩/⟨v, v⟩ ≥ 0

that is,

    | λ − ⟨u, v⟩/⟨v, v⟩ |^2 + ( ⟨u, u⟩⟨v, v⟩ − |⟨u, v⟩|^2 ) / ⟨v, v⟩^2 ≥ 0

This is true for all possible values of λ, especially including that value which minimizes the left hand side — namely, λ = ⟨u, v⟩/⟨v, v⟩. If we choose that value of λ, then the numerator of the second fraction must be greater than or equal to zero — that is, (12.17) must be true.

We now repeat the way we finished off the previous section: we let the norm of a vector be the square root of its inner product with itself — that is, in a complex inner product space,

    ‖u‖ = √⟨u, u⟩   (12.18)

Given this convention, we take the square root of both sides of (12.17) to get

The Cauchy-Schwarz inequality (norm version, complex vector space):

    |⟨u, v⟩| ≤ ‖u‖ ‖v‖   (12.19)

12.6. Orthonormal sets in a real inner product space

The driving principle of the vast majority of mathematical work in inner product spaces is orthogonality. The definition of orthogonal is the same in both real and complex inner product spaces: two vectors are orthogonal if and only if their inner product is zero. With that in mind, we define the concepts of an orthogonal set and an orthonormal set of vectors:

Definition: Let {e_j} be a set of vectors in a (real or complex) inner product space. The variable j — the index to this list of vectors — runs through some set of possible values; we are at the moment being deliberately vague as to whether that index set is finite or infinite. Then {e_j} is an orthogonal set if and only if:

    ⟨e_j, e_k⟩ is  { = 0  if j ≠ k
                   { > 0  if j = k       (12.20)

The same set is an orthonormal set if and only if:

    ⟨e_j, e_k⟩ = { 0  if j ≠ k
                 { 1  if j = k       (12.21)

An orthonormal set is clearly a special case of an orthogonal set. On the other hand, any orthogonal set may be turned into an orthonormal set by merely dividing each element by its own norm. Note also that, although the zero vector is always orthogonal to everything, we don't want it as a member of anything that we are willing to call an orthogonal set.

Our central problem is this: suppose we have a finite orthonormal set {e_j} (now we are specifying that the set of indices — the set of possible values for j — be a finite set, although we are willing to let it be a very large finite set). Let v be any vector in the inner product space. How closely can we approximate v by a linear combination of the elements of the orthonormal set? More specifically, how can we choose the coefficients {α_j} so that ∑_j α_j e_j is as close as possible to v?

But what do we mean by "as close as possible"? Surely we mean that the size of the difference is as small as possible. But what do we mean by "size"? Well, let's see — every inner product space, real or complex, has a built-in notion of size: the norm, as defined in (12.15) and (12.18). That is, our problem is to find α_j so that ‖v − ∑_j α_j e_j‖ is minimized.

To minimize this, it is sufficient to minimize its square. This isn't a new idea, of course — it is used in nearly every calculus problem that asks that a distance be maximized or minimized. The Pythagorean theorem just makes working with the squares of distances easier than working with the distances themselves. So here we go:

    ‖v − ∑_j α_j e_j‖^2 = ⟨ v − ∑_j α_j e_j , v − ∑_k α_k e_k ⟩   (12.22)

We used two different names j and k for the index variable because we had to. If you multiply a sum of, for instance, seven terms by itself, the result will be a sum with 49 terms — you've got to take all possible products of any of the terms with any of the other terms.

So far, we are assuming that we are working in a real inner product space. Use (12.5) and (12.6) to simplify (12.22):

    ‖v − ∑_j α_j e_j‖^2 = ⟨v, v⟩ − ∑_j ⟨v, α_j e_j⟩ − ∑_j ⟨α_j e_j, v⟩ + ∑_{j,k} ⟨α_j e_j, α_k e_k⟩
                        = ⟨v, v⟩ − 2 ∑_j α_j ⟨v, e_j⟩ + ∑_{j,k} α_j α_k ⟨e_j, e_k⟩
                        = ⟨v, v⟩ − 2 ∑_j α_j ⟨v, e_j⟩ + ∑_j α_j^2                    by (12.21)
                        = ∑_j ( α_j^2 − 2 α_j ⟨v, e_j⟩ ) + ‖v‖^2

We complete the square, using formula (11.1):

    = ∑_j ( α_j^2 − 2 α_j ⟨v, e_j⟩ + ⟨v, e_j⟩^2 ) − ∑_j ⟨v, e_j⟩^2 + ‖v‖^2
    = ∑_j ( α_j − ⟨v, e_j⟩ )^2 − ∑_j ⟨v, e_j⟩^2 + ‖v‖^2   (12.23)

We're trying to minimize this quantity, and we get to choose the α_j any way we want to. It is clear that the first summation on the right hand side of (12.23), being a sum of squares, is always greater than or equal to zero — but we can make it zero if we just choose the α_j's right. The right choice is to let α_j be equal to ⟨v, e_j⟩. These values for the coefficients are called (for reasons which will eventually become clear) the generalized Fourier coefficients for v with respect to this orthonormal set.

Since the left hand side of (12.22) is the square of a norm, it must always be greater than or equal to zero. If we let α_j be equal to ⟨v, e_j⟩ for each j, the remaining portion of the right hand side of (12.23) must be nonnegative. That is, we have the following inequality, generally known as Bessel's inequality:

    ∑_j |⟨v, e_j⟩|^2 ≤ ‖v‖^2   (12.24)

If we are able to write v as a linear combination of the e_j's, then the minimum value of the left hand side of (12.22) would be zero and the inequality in (12.24) would actually be an equality.

What if the orthonormal set is actually an infinite set rather than a finite set? Then such sums as appear in (12.22) and (12.24) would have to be interpreted as infinite series. Fortunately, the very workings of this problem — notably Bessel's inequality — help to assure us that these series converge in some appropriate sense. (Actually, to get everything that we might ask for in terms of these series being meaningful, we'll have to have our inner product space be a complete metric space, which makes it a Hilbert space; unfortunately, this is not the same meaning of the word "complete" as we are about to use below.) Bessel's inequality will always be true, and the norm of the difference between v and our linear combination of the e_j's will always be minimized if we choose our coefficients to be the generalized Fourier coefficients.

A new issue now arises: does our orthonormal set have enough elements in it that we can write any vector v as the limit of linear combinations of — that is, as an infinite series based on — that set? If so, we call the orthonormal set complete. (This is also known as having an orthonormal basis.) It happens — and the calculations above provide the framework for this argument, too — that an orthonormal set is complete if and only if the only vector orthogonal to every element in it is the zero vector.

Let's summarize our findings:

    Fourier coefficients:   ‖v − ∑_j α_j e_j‖ is minimized if each α_j = ⟨v, e_j⟩   (12.25)

    Bessel's inequality:    ∑_j |⟨v, e_j⟩|^2 ≤ ‖v‖^2   (12.26)

If the orthonormal set is also complete, then:

    Generalized Fourier series:       v = ∑_j ⟨v, e_j⟩ e_j   (12.27)

    Generalized Parseval's identity:  ∑_j |⟨v, e_j⟩|^2 = ‖v‖^2   (12.28)
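Here is a minimal sketch (not from the text) of the real case in R^3: an orthonormal set of two made-up vectors, the generalized Fourier coefficients of (12.25), and Bessel's inequality (12.26):

    import numpy as np

    # An orthonormal set of two vectors in R^3 (it does not span the space).
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
    v = np.array([2.0, 1.0, -3.0])           # an arbitrary vector to approximate

    # Generalized Fourier coefficients, per (12.25).
    alphas = [np.dot(v, e) for e in (e1, e2)]
    best = alphas[0] * e1 + alphas[1] * e2

    # No other choice of coefficients does better.
    rng = np.random.default_rng(1)
    for _ in range(1000):
        a, b = rng.normal(size=2)
        assert np.linalg.norm(v - best) <= np.linalg.norm(v - (a * e1 + b * e2)) + 1e-12

    # Bessel's inequality (12.26): 2^2 + (-sqrt(2))^2 = 6 <= ||v||^2 = 14.
    print(sum(a * a for a in alphas), np.dot(v, v))      # 6.0  14.0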

12.7. Orthonormal sets in a complex inner product space

Now we suppose that {e_j} is a (finite, for the time being) orthonormal set in a complex inner product space. Without further ado, let's repeat what was in the last section:

    ‖v − ∑_j α_j e_j‖^2 = ⟨ v − ∑_j α_j e_j , v − ∑_k α_k e_k ⟩
                        = ⟨v, v⟩ − ∑_j ⟨v, α_j e_j⟩ − ∑_j ⟨α_j e_j, v⟩ + ∑_{j,k} ⟨α_j e_j, α_k e_k⟩
                        = ⟨v, v⟩ − ∑_j α_j* ⟨v, e_j⟩ − ∑_j α_j ⟨v, e_j⟩* + ∑_{j,k} α_j α_k* ⟨e_j, e_k⟩
                        = ⟨v, v⟩ − ∑_j α_j* ⟨v, e_j⟩ − ∑_j α_j ⟨v, e_j⟩* + ∑_j α_j α_j*
                        = ∑_j ( α_j α_j* − α_j* ⟨v, e_j⟩ − α_j ⟨v, e_j⟩* ) + ‖v‖^2

We complete the square, using formula (11.7):

    = ∑_j ( α_j α_j* − α_j* ⟨v, e_j⟩ − α_j ⟨v, e_j⟩* + ⟨v, e_j⟩ ⟨v, e_j⟩* ) − ∑_j |⟨v, e_j⟩|^2 + ‖v‖^2
    = ∑_j | α_j − ⟨v, e_j⟩ |^2 − ∑_j |⟨v, e_j⟩|^2 + ‖v‖^2   (12.29)

Now we are ready to draw conclusions from this, exactly as before. Formulas (12.25), (12.26), (12.27), and (12.28) are also valid without change in a complex inner product space. Of course, we were clever enough to include some well-chosen absolute values in these statements!

12.8. Square integrable functions on T_P

Consider a function f (real or complex valued) that is defined on the line so as to be periodic of period P. We call such a function square integrable, and say that it belongs to L^2(T_P), provided the following integral converges:

    (1/P) ∫_0^P |f(x)|^2 dx < ∞

Just to give you a taste of this condition: it doesn't require that the function be bounded, but it is harder to satisfy than mere integrability, or even absolute integrability. As an example, take the function f(x) = 1/√x for 0 < x ≤ P. This function is integrable on [0, P] (and hence absolutely integrable, since it is positive), but if we square it, we get 1/x, which is not integrable. We say that this function belongs to L^1 but not to L^2.

We now define the inner product. Please recognize that our decision to divide out front by P represents one of several possible notational choices, and may not necessarily be reflected in other works.

The inner product for real-valued functions: If f, g ∈ L^2(T_P), then

    ⟨f, g⟩ = (1/P) ∫_0^P f(x) g(x) dx   (12.30)

The inner product for complex-valued functions: If f, g ∈ L^2(T_P), then

    ⟨f, g⟩ = (1/P) ∫_0^P f(x) g(x)* dx   (12.31)

In either case, the norm is as follows:

    ‖f‖ = √⟨f, f⟩ = ( (1/P) ∫_0^P |f(x)|^2 dx )^{1/2}   (12.32)

The Cauchy-Schwarz inequality in this case reads as follows:

    | (1/P) ∫_0^P f(x) g(x)* dx | ≤ ( (1/P) ∫_0^P |f(x)|^2 dx )^{1/2} ( (1/P) ∫_0^P |g(x)|^2 dx )^{1/2} = ‖f‖ ‖g‖   (12.33)

A quick corollary of this is that

    ‖f‖_1 = (1/P) ∫_0^P |f(x)| dx ≤ ( (1/P) ∫_0^P |f(x)|^2 dx )^{1/2} ( (1/P) ∫_0^P 1^2 dx )^{1/2} = ‖f‖   (12.34)
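The L^1-but-not-L^2 example above is easy to see numerically. A sketch (not from the text), taking P = 1 for concreteness and shrinking the lower limit of integration toward the singularity at 0:

    import numpy as np

    P = 1.0
    N = 2_000_000
    for eps in (1e-2, 1e-4, 1e-6):
        dx = (P - eps) / N
        x = eps + dx * (np.arange(N) + 0.5)   # midpoint grid on [eps, P]
        f = 1.0 / np.sqrt(x)
        l1 = np.sum(f) * dx / P               # approximates ||f||_1
        l2sq = np.sum(f * f) * dx / P         # approximates ||f||^2
        print(f"eps={eps:.0e}   ||f||_1 ~ {l1:.4f}   ||f||^2 ~ {l2sq:.1f}")

    # ||f||_1 approaches 2 (the exact value of the integral of 1/sqrt(x) on
    # [0, 1]), while ||f||^2 grows roughly like ln(1/eps), without bound.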

12.9. Fourier series and orthogonal expansions

The set

    { 1, cos(2πx/P), sin(2πx/P), cos(2·2πx/P), sin(2·2πx/P), cos(3·2πx/P), sin(3·2πx/P), ... }

is an orthogonal set in the real inner product space L^2(T_P), and almost — but not quite — an orthonormal set. The specific problem with being orthonormal is the normalization: the function 1 has norm equal to 1, but all of the other functions in this set have norm equal to 1/√2. We could chase through all of the consequences of that factor, but rather than give you the details, we'll just give the results.

If we want to approximate a square integrable function f by an Nth degree trigonometric polynomial, then the closest we can come in the L^2 norm — the least squares or least root mean square approximation — is to let the coefficients of this polynomial be exactly the Fourier coefficients. That is, we let this trigonometric polynomial be the Nth partial sum of the Fourier series for f. One consequence of this is, by (12.26), Bessel's inequality:

    a[0]^2 + (1/2) ∑_{k=1}^N ( a[k]^2 + b[k]^2 ) ≤ ‖f‖^2   (12.35)

which can also be written as

    2 a[0]^2 + ∑_{k=1}^N ( a[k]^2 + b[k]^2 ) ≤ (2/P) ∫_0^P |f(x)|^2 dx   (12.36)

But then, it turns out that this orthogonal set is complete. We won't prove that, but basically, our previous convergence theorems for Fourier series make this inevitable. That being the case, we can say that the Fourier series of any square integrable function always converges to that function in the L^2 sense. Furthermore, by (12.28), we have Parseval's identity:

    a[0]^2 + (1/2) ∑_{k=1}^∞ ( a[k]^2 + b[k]^2 ) = ‖f‖^2   (12.37)

    2 a[0]^2 + ∑_{k=1}^∞ ( a[k]^2 + b[k]^2 ) = (2/P) ∫_0^P |f(x)|^2 dx   (12.38)

Let's try to repeat this for the complex case. This time, consider the set { e^{2πikx/P} }, k = −∞, ..., ∞, of complex valued functions on T_P. This turns out to be an orthonormal set — in fact, demonstrating that fact is far easier for this case than for the real case. This is also a complete orthonormal set — we couldn't possibly have a different result for the complex case than for the real case — so we have both a Bessel's inequality and a Parseval's identity for this case, too.

    Bessel's inequality:    ∑_{k=−N}^{N} |f̂[k]|^2 ≤ ‖f‖^2   (12.39)

    Parseval's identity:    ∑_{k=−∞}^{∞} |f̂[k]|^2 = ‖f‖^2   (12.40)
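A numerical spot-check (not from the text) of Parseval's identity (12.40), for the made-up example f(x) = x on [0, P) with P = 2π; here f̂[k] = i/k for k ≠ 0, f̂[0] = π, and both sides of (12.40) equal 4π²/3:

    import numpy as np

    P = 2 * np.pi
    n = 8192
    x = np.arange(n) * (P / n)               # uniform grid on [0, P)
    f = x.copy()                             # one period of the sawtooth f(x) = x

    def fhat(k):
        # Riemann-sum approximation of (1/P) * integral of f(x) e^{-2 pi i k x / P}.
        return np.sum(f * np.exp(-2j * np.pi * k * x / P)) / n

    norm_sq = np.sum(np.abs(f) ** 2) / n      # (1/P) * integral of |f|^2
    partial = sum(abs(fhat(k)) ** 2 for k in range(-200, 201))
    print(partial, norm_sq, 4 * np.pi ** 2 / 3)   # all three agree to about 1e-2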

12.10. The trigonometric orthogonal set on P_N

On the polygon P_N, the set of functions e_k[n] = e^{2πikn/N} forms an orthogonal set with respect to the usual complex inner product. That is,

    ⟨e_j, e_k⟩ = ∑_{n=0}^{N−1} e_j[n] e_k[n]* = ∑_{n=0}^{N−1} e^{2πijn/N} e^{−2πikn/N} = { N  if j = k
                                                                                         { 0  otherwise     (12.41)

Equation (12.41) is just a restatement of equations (4.1) and (4.6). The factor of N that appears in it means that the vectors are orthogonal but not quite orthonormal.

Let's put this another way. Suppose we have an N-dimensional complex inner product space, with the standard inner product taken with respect to some basis. (The set of all complex-valued functions on P_N is just such a space.) We can then write any vector as an N × 1 column matrix. Suppose we have a set of M such vectors. Create an N × M matrix A whose M columns are these M vectors. If M = N we will have a square matrix. How could we tell if these M vectors were independent? This would be a question about the rank of the matrix A. If its rank is M, we have an independent set. In the M = N square matrix case, the N vectors are independent if and only if the determinant of A is not zero.

Now how can we tell if the set of vectors is orthogonal? To do this, let A* be the complex conjugate of the transpose of A, and compute the matrix product A*A. This will be the M × M matrix whose (j, k)th entry is precisely the inner product of the jth and kth vectors of our set. The statement that the set is orthogonal is precisely the statement that A*A is diagonal: all of its entries are zero except those on the main diagonal, which are not zero. The statement that our set is orthonormal is precisely the statement that A*A = I, the identity matrix. If A is a square matrix such that A*A = I, we call A a unitary matrix.

Let's use this language to express (12.41). Let F be the N × N matrix whose columns are the vectors e_j named above; in other words, the (j, k)th entry of F is e^{2πijk/N} = ζ^{jk}. Then F is almost but not quite a unitary matrix: F*F = NI. So we have an orthogonal but not quite orthonormal set.

What can we say about this in general? Suppose that {e_j} is a finite orthogonal (but not necessarily orthonormal) set in a complex inner product space, and that v is an arbitrary vector in that space. We wish to find the coefficients α_j such that the norm ‖v − ∑_j α_j e_j‖ is minimized. This minimum is achieved if and only if

    α_j = ⟨v, e_j⟩ / ⟨e_j, e_j⟩   (12.42)

Bessel's inequality in this case turns out to be

    ∑_j |⟨v, e_j⟩|^2 / ⟨e_j, e_j⟩ ≤ ‖v‖^2   (12.43)

with equality (Parseval's identity) in the case of the orthogonal set being complete.

We may easily apply (12.42) and (12.43) to the case of the trigonometric basis for functions on P_N. The coefficients in (12.42) turn out to be:

    ⟨f, e_j⟩ / ⟨e_j, e_j⟩ = (1/N) ∑_{n=0}^{N−1} f[n] e^{−2πijn/N} = f̂[j]   (12.44)

The set of all N of these trigonometric functions is a set of N orthogonal, hence independent, vectors. Therefore, they span the N-dimensional vector space of all functions on the polygon and are a complete orthogonal set. Equality necessarily holds in (12.43). A careful working out of the consequences of (12.43) leads us to Parseval's identity for this instance:

    N ∑_{k=0}^{N−1} |f̂[k]|^2 = ∑_{n=0}^{N−1} |f[n]|^2 = ‖f‖^2   (12.45)
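A sketch (not from the text) expressing (12.41), (12.44), and (12.45) in NumPy, with a made-up sample vector f:

    import numpy as np

    N = 8
    n = np.arange(N)
    # F: the N x N matrix with (j, k) entry e^{2 pi i j k / N}; its columns are
    # the vectors e_j of (12.41).
    F = np.exp(2j * np.pi * np.outer(n, n) / N)

    # F is almost but not quite unitary: F*F = N I.
    print(np.allclose(F.conj().T @ F, N * np.eye(N)))    # True

    # Coefficients (12.44) and Parseval (12.45) for an arbitrary f.
    f = np.array([3.0, -1.0, 2.0, 0.0, 1.0, 4.0, -2.0, 5.0])
    fhat = (F.conj().T @ f) / N      # fhat[j] = (1/N) sum_n f[n] e^{-2 pi i j n / N}
    print(np.isclose(N * np.sum(np.abs(fhat) ** 2),
                     np.sum(np.abs(f) ** 2)))            # True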

Exercises

12.1. By imitating the derivations around equation (12.29), prove (12.42) and (12.43).

12.2. Which of the following Fourier series represent square-integrable functions on T_{2π}, also known as [−π, π]? What else can you say about the functions they represent?

    (a) ∑_{k=1}^∞ (sin kx)/k = ∑_{k=−∞, k≠0}^∞ e^{ikx}/(2ik)

    (b) ∑_{k=1}^∞ (sin kx)/k^2 = ∑_{k=−∞, k≠0}^∞ e^{ikx}/(2ik|k|)

    (c) ∑_{k=1}^∞ (cos kx)/√k = ∑_{k=−∞, k≠0}^∞ e^{ikx}/(2√|k|)

    (d) ∑_{k=−∞}^∞ r^{|k|} e^{ikx}, where 0 ≤ r < 1

Supplemental exercises:

12.3. Use the Fourier series of piecewise-polynomial functions on T and either Parseval's identity or the synthesis equation at well-chosen points to calculate ∑ 1/k^2, ∑ 1/k^4, and ∑ 1/k^6. One possibility: Start with f_1(x) = 1/2 − x on the interval (0, 1), extended to be periodic of period 1. Build a sequence of functions f_n such that f_{n+1}′(x) = f_n(x). (That is, f_{n+1} is an antiderivative of f_n.) Choose the constant of integration so that f̂_{n+1}[0] = ∫_0^1 f_{n+1}(x) dx = 0.

12.4. Find a way to automate the calculations in exercise 12.3, using a computer to help. Among other things, you'll need the ability to work with arbitrary-precision rational numbers (hence arbitrarily long integers); DERIVE, MAPLE, and MATHEMATICA have this capability. Calculate the exact value of ∑ 1/k^{26}. (Why 26? Because that was as far as Euler got, working the problem by hand.)
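For what it's worth, here is a minimal sketch (not from the text, and not the only way to do it) of the automation that exercise 12.4 asks for, in Python with exact rational arithmetic from the standard fractions module. It relies on the observation that, with the period-1 inner product, f̂_n[k] = 1/(2πik)^n for k ≠ 0, so Parseval gives ∑_{k≥1} 1/k^{2n} = 2^{2n−1} π^{2n} ∫_0^1 f_n(x)^2 dx:

    from fractions import Fraction

    def integrate(p):
        # Antiderivative of the polynomial p (p[i] is the coefficient of x**i),
        # with the constant chosen so that the mean over (0, 1) is zero.
        q = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]
        q[0] = -sum(c / (i + 1) for i, c in enumerate(q))
        return q

    def integral_of_square(p):
        # Exact value of the integral over (0, 1) of p(x)**2.
        sq = [Fraction(0)] * (2 * len(p) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(p):
                sq[i + j] += a * b
        return sum(c / (i + 1) for i, c in enumerate(sq))

    f = [Fraction(1, 2), Fraction(-1)]       # f1(x) = 1/2 - x on (0, 1)
    for n in range(1, 4):
        # The rational below is the coefficient of pi^(2n) in sum 1/k^(2n).
        coeff = Fraction(2) ** (2 * n - 1) * integral_of_square(f)
        print(f"sum 1/k^{2 * n} = {coeff} * pi^{2 * n}")
        f = integrate(f)

    # Prints 1/6 * pi^2, 1/90 * pi^4, and 1/945 * pi^6; extending the loop to
    # n = 13 recovers Euler's value for sum 1/k^26.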
