DIFFERENTIABILITY OF COMPLEX FUNCTIONS

Contents
1. Limit definition of a derivative
2. Holomorphic functions, the Cauchy-Riemann equations
3. Differentiability of real functions
4. A sufficient condition for holomorphy

1. Limit definition of a derivative

Since we want to do calculus on functions of a complex variable, we will begin by defining a derivative by mimicking the definition for real functions. Namely, if f : Ω → C is a complex function and z an interior point of Ω, we define the derivative of f at z to be the limit

    f'(z) = lim_{h → 0} (f(z + h) - f(z)) / h,

if this limit exists. If the derivative of f exists at z, we denote its value by f'(z), and we say f is holomorphic at z. If f is holomorphic at every point of an open set U, we say that f is holomorphic on U.

This definition naturally leads to several basic remarks. First, the definition formally looks identical to the limit definition of the derivative of a function of a real variable, which is inspired by trying to approximate a tangent line using secant lines. However, in the limit as h → 0, we are allowing h to vary over all complex numbers that approach 0, not just real numbers. One of the main principles of this class is that this seemingly minor difference actually makes a gigantic difference in the behavior of holomorphic functions. We insist that z be an interior point of Ω to ensure that h can approach 0 from any direction. This is similar to the fact that derivatives (at least, derivatives which are not one-sided) of real functions are only defined at interior points of intervals, not at the endpoints of closed intervals.

Why is the term holomorphic used instead of differentiable? We could use differentiable, or perhaps the more specific term complex differentiable, but the convention in mathematics is that the term holomorphic refers to differentiability of complex functions, not real functions. In particular, we will see that a complex function being holomorphic is substantially more restrictive than the corresponding real function it induces being differentiable.

Perhaps you might be wondering what exactly a limit of a complex function is. After all, when we say that h → 0 as h ranges over complex numbers, what exactly do we mean? The rigorous definition is just the natural extension of the ε-δ definition used for real functions. More precisely, we say that lim_{z → z_0} f(z) = w if for all ε > 0 there exists δ > 0 such that if 0 < |z - z_0| < δ, then |f(z) - w| < ε.
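The direction-independence of this limit is easy to probe numerically. The following sketch is not part of the original notes (the helper name difference_quotient is my own); it approximates the difference quotient of f(z) = z^2 along several complex directions and contrasts it with f(z) = conj(z), for which the answers disagree and the limit therefore cannot exist.

```python
import numpy as np

def difference_quotient(f, z, h):
    """Approximate (f(z + h) - f(z)) / h for a complex step h."""
    return (f(z + h) - f(z)) / h

z = 1.0 + 2.0j
directions = [1, 1j, (1 + 1j) / np.sqrt(2), np.exp(1j * 0.7)]  # unit complex numbers
r = 1e-6  # small step size

# For f(z) = z^2 the quotient is (almost) independent of direction: it tends to 2z.
for d in directions:
    q = difference_quotient(lambda w: w**2, z, r * d)
    print("z^2   direction", np.round(d, 3), "->", np.round(q, 6))

# For f(z) = conj(z) the quotient equals conj(d)/d, which depends on the direction d,
# so the limit defining f'(z) does not exist.
for d in directions:
    q = difference_quotient(np.conj, z, r * d)
    print("conj  direction", np.round(d, 3), "->", np.round(q, 6))
```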
In other words, if we specify any error ε, no matter how small it is, all values of f(z) for z sufficiently close to z_0 will be within that error of w.

The fact that the formal definition of derivatives of complex functions is so similar to that for real functions turns out to be handy for proving many of the basic properties of the complex derivative. In particular, many of the proofs of these properties in the real case transfer over, in almost identical fashion, to the complex case. For example, one can show the following. Suppose f, g : Ω → C are differentiable at z. Then:

- f + g is differentiable at z, and (f + g)'(z) = f'(z) + g'(z);
- if c is any complex number, then cf is differentiable at z, and (cf)'(z) = c f'(z);
- fg is differentiable at z, and (fg)'(z) = f'(z)g(z) + f(z)g'(z) (product rule);
- if f is differentiable at z and g is differentiable at f(z), then g ∘ f is differentiable at z, and (g ∘ f)'(z) = g'(f(z)) f'(z) (chain rule).

So, the good news is that all of the standard differentiation rules carry over to the complex case with no formal change. Let us use these properties and the limit definition of a derivative to calculate some derivatives.

Examples.

- Let f(z) = z. Then

      (f(z + h) - f(z)) / h = (z + h - z) / h = 1.

  Therefore f(z) = z is differentiable on all of C and has derivative f'(z) = 1. This is exactly identical to the real case.

- More generally, suppose n is a positive integer. We claim that if f(z) = z^n, then f'(z) = n z^(n-1). We can prove this either using the binomial theorem to expand (z + h)^n, or by induction. Let us do the latter. Recall that to prove a statement parameterized by positive integers using induction, we prove the base case (usually n = 1), and then prove that if the statement is true for n it is also true for n + 1. We already proved the n = 1 case in the prior example. Suppose we know that if f(z) = z^n, then f'(z) = n z^(n-1). Using the product rule (which we assume has been proven for us), if f(z) = z^(n+1), then

      f'(z) = (z^n)' z + z^n (z)' = n z^(n-1) z + z^n = (n + 1) z^n,

  as desired. Furthermore, f(z) = z^n is differentiable on all of C.

From the above calculations and the basic properties we mentioned earlier, differentiating a polynomial with complex coefficients is formally identical to differentiating real polynomials. For example, if f(z) = iz^4 + (2 - 3i)z^3 + 7z, then f'(z) = 4iz^3 + (6 - 9i)z^2 + 7. Similarly, one can show that if n is a negative integer, then f(z) = z^n has derivative f'(z) = n z^(n-1), although of course this time f(z) is differentiable on C \ {0}, since z^n is undefined at z = 0 if n < 0.

Even though we have proven the power rule for integer exponents on z, notice that we have not said anything about whether this rule is true for non-integer exponents. As a matter of fact, notice that we do not yet have any good definition of what non-integer exponents mean!
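As a quick sanity check on the polynomial example above, here is a short sympy sketch (my addition, not part of the notes) that differentiates f(z) = iz^4 + (2 - 3i)z^3 + 7z symbolically, recovers the same answer from the limit definition, and compares a numerical difference quotient at a sample point.

```python
import sympy as sp

z, h = sp.symbols('z h')
f = sp.I * z**4 + (2 - 3*sp.I) * z**3 + 7 * z

# Symbolic derivative: should be 4*I*z**3 + (6 - 9*I)*z**2 + 7.
fprime = sp.expand(sp.diff(f, z))
print(fprime)

# The limit definition gives the same answer: lim_{h -> 0} (f(z + h) - f(z)) / h.
quotient = (f.subs(z, z + h) - f) / h
print(sp.expand(sp.limit(quotient, h, 0)))

# Numerical check at a sample point z0 with a small purely imaginary step.
z0 = 1 + 2*sp.I
step = sp.Rational(1, 10**6) * sp.I
approx = (f.subs(z, z0 + step) - f.subs(z, z0)) / step
print(sp.N(approx), "vs", sp.N(fprime.subs(z, z0)))
```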
We will postpone the calculation of the derivatives of e^z and related functions until we study power series.

2. Holomorphic functions, the Cauchy-Riemann equations

Let f : Ω → C be any complex function. Since we can identify C with R^2, any such function automatically induces a related function f : Ω → R^2, where now we think of Ω as being a subset of R^2 instead of C. If f is holomorphic, does this imply anything about real differentiability of its associated real function? As a matter of fact, yes, it does, and it turns out that this connection is not merely a curiosity; many deep theorems about certain real functions can be proven by appealing to this connection!

Examples.

- Suppose f(z) = z. Then we may write f(x + yi) = x + yi, so f induces the function f : R^2 → R^2 where f(x, y) = (x, y).
- Suppose f(z) = z^2. Then we may write f(x + yi) = (x + yi)^2 = (x^2 - y^2) + 2xy i, so f induces the function f : R^2 → R^2 where f(x, y) = (x^2 - y^2, 2xy).
- Suppose f(z) = e^z. Assuming basic properties of complex exponentials (which we will prove soon), e^(x + yi) = e^x e^(yi) = e^x (cos y + i sin y). Therefore f(z) = e^z induces the function f(x, y) = (e^x cos y, e^x sin y).
- Suppose f(z) = 1/z. Then we may write f(x + yi) = 1/(x + yi) = x/(x^2 + y^2) - (y/(x^2 + y^2)) i, so f induces f(x, y) = (x/(x^2 + y^2), -y/(x^2 + y^2)), and this real function is defined on R^2 \ {(0, 0)}.
- Suppose f(z) = |z|. Then we may write f(x + yi) = √(x^2 + y^2), so f induces f(x, y) = (√(x^2 + y^2), 0).
- Suppose f(z) = Im z. Then we may write f(x + yi) = y, so f induces f(x, y) = (y, 0).

In general, if f(x + yi) = u(x, y) + v(x, y) i, where u, v are functions from some subset of R^2 to R, then f induces a real function from some subset of R^2 to R^2 given by f(x, y) = (u(x, y), v(x, y)).
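A small symbolic sketch (my addition; the helper name induced_real_function is hypothetical) can recover the (u, v) pairs listed above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def induced_real_function(f):
    """Return (u(x, y), v(x, y)) with f(x + iy) = u + i v."""
    w = sp.expand_complex(f(z))
    return sp.simplify(sp.re(w)), sp.simplify(sp.im(w))

print(induced_real_function(lambda w: w**2))   # (x**2 - y**2, 2*x*y)
print(induced_real_function(sp.exp))           # (exp(x)*cos(y), exp(x)*sin(y))
print(induced_real_function(lambda w: 1 / w))  # (x/(x**2 + y**2), -y/(x**2 + y**2))
```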
Suppose f is holomorphic at z, and write f(x + yi) = u(x, y) + v(x, y) i. Then we can compute the derivative of f at z using the limit definition by letting h approach 0 either along the real axis or along the imaginary axis. If we approach along the real axis, with h real,

    f'(z) = lim_{h → 0} (f(z + h) - f(z)) / h = lim_{h → 0} (f(x + h + yi) - f(x + yi)) / h
          = lim_{h → 0} (u(x + h, y) + v(x + h, y) i - (u(x, y) + v(x, y) i)) / h.

Separating the last expression into real and imaginary parts,

    f'(z) = lim_{h → 0} (u(x + h, y) - u(x, y)) / h + i lim_{h → 0} (v(x + h, y) - v(x, y)) / h.

These two expressions appear in multivariable calculus: they are the partial derivatives of u(x, y) and v(x, y) with respect to x. So we have shown that if f is holomorphic at z, then u_x, v_x exist at (x, y), and we have an equation which relates f'(z) to u_x, v_x: namely, f'(z) = u_x(x, y) + v_x(x, y) i.

If we repeat the same procedure, except this time we let h → 0 along the imaginary axis, writing h = ih' with h' real, we get

    f'(z) = lim_{h' → 0} (f(z + ih') - f(z)) / (ih')
          = (1/i) lim_{h' → 0} (f(x + (y + h')i) - f(x + yi)) / h'
          = (1/i) lim_{h' → 0} (u(x, y + h') + v(x, y + h') i - (u(x, y) + v(x, y) i)) / h'
          = (1/i) ( lim_{h' → 0} (u(x, y + h') - u(x, y)) / h' + i lim_{h' → 0} (v(x, y + h') - v(x, y)) / h' )
          = v_y(x, y) - u_y(x, y) i.

Therefore we have two alternate expressions for f'(z), and since they must be equal, and u_x, u_y, v_x, v_y are all real numbers, we have proven the following theorem:

Theorem 1. Suppose f(x + yi) = u(x, y) + v(x, y) i is holomorphic at z = x + iy. Then the partial derivatives u_x, u_y, v_x, v_y all exist at (x, y), and at (x, y) they satisfy the pair of partial differential equations

    u_x = v_y,   u_y = -v_x.

More generally, if f is holomorphic on an open set Ω, then u_x, u_y, v_x, v_y exist on Ω, and u_x = v_y, u_y = -v_x hold true on all of Ω.

We call this pair of partial differential equations the Cauchy-Riemann equations. At this point, we know that any holomorphic function must have real and imaginary parts which satisfy the Cauchy-Riemann equations. One use of this fact/theorem is that it lets us show that certain functions are not holomorphic:

Examples.

- Consider the complex conjugation function f(z) = conj(z) = x - yi. It induces the functions u(x, y) = x, v(x, y) = -y. Computing partial derivatives, we find that u_x = 1, v_y = -1, u_y = 0, v_x = 0, so the CR equation u_x = v_y is never satisfied at any z. Therefore f(z) is not holomorphic at any point of C. Notice that this might seem somewhat surprising, since the real and imaginary parts of the conjugation function look so simple and well-behaved. Indeed, the CR equations are highly nontrivial and will only be satisfied for very special pairs of u, v.
- Let f(z) = Im z. Then f induces the functions u(x, y) = y, v(x, y) = 0. Since u_y = 1 and v_x = 0, the CR equation u_y = -v_x is impossible at any point, so the CR equations are never satisfied. Therefore f(z) = Im z is not holomorphic at any point of C.

These examples illustrate that even if you construct a complex function f(x + yi) = u(x, y) + v(x, y) i using well-behaved u(x, y), v(x, y), f may still not be holomorphic.
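Checks like the two examples above are mechanical, so here is a short sketch (my addition, not part of the notes) that computes the CR residuals u_x - v_y and u_y + v_x with sympy; both vanish identically exactly when the CR equations hold.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def cauchy_riemann_residuals(u, v):
    """Return (u_x - v_y, u_y + v_x); both are identically 0 iff CR holds."""
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)),
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)))

# f(z) = z^2 induces (u, v) = (x^2 - y^2, 2xy): CR holds everywhere.
print(cauchy_riemann_residuals(x**2 - y**2, 2*x*y))   # (0, 0)

# f(z) = conj(z) induces (u, v) = (x, -y): the first CR equation fails.
print(cauchy_riemann_residuals(x, -y))                # (2, 0)

# f(z) = Im z induces (u, v) = (y, 0): the second CR equation fails.
print(cauchy_riemann_residuals(y, sp.Integer(0)))     # (0, 1)
```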
As a matter of fact, notice that even in perhaps the best possible situation, where u, v are polynomials, as in the above examples, f does not have to be holomorphic. That the CR equations are so special turns out (in the end) to be the source of all the strong properties that holomorphic functions satisfy.

A natural question is whether the converse of the above theorem holds: namely, if u(x, y), v(x, y) satisfy the CR equations, is f(z) = u(x, y) + v(x, y) i holomorphic? It turns out that under certain fairly weak conditions the converse is indeed true, so that the CR equations essentially completely determine whether a function f is holomorphic. However, before giving the statement and proof we need to review a few concepts from the calculus of real functions.

3. Differentiability of real functions

We spend a lot of time in single-variable calculus computing derivatives, and thinking about the geometric significance of derivatives. In multivariable calculus, we discuss attempts to generalize derivatives, like partial derivatives and directional derivatives. However, many introductory multivariable calculus classes do not discuss what the derivative of a function f : R^m → R^n is, or if they do, they do not discuss it in depth. We want to quickly review what the derivative of such a function is.

We begin by interpreting derivatives of single-variable functions in a slightly different way than what a calculus student is accustomed to. Suppose f : R → R is differentiable at x. Then

    lim_{h → 0} (f(x + h) - f(x)) / h = f'(x).

Another way of saying this is that

    lim_{h → 0} (f(x + h) - f(x) - h f'(x)) / h = 0.

In other words, if we define a new function r(h) such that f(x + h) - f(x) - h f'(x) = h r(h), then lim_{h → 0} r(h) = 0. So we can redefine what it means for f to be differentiable at x as follows: f is differentiable at x if and only if there exists a real number f'(x) such that we can write

    f(x + h) = f(x) + h f'(x) + h r(h),   where r(h) → 0 as h → 0.

Sometimes, instead of h r(h), we will use |h| r(h); evidently this does not change whether a function is differentiable or not.

Examples.

- Consider f(x) = x^2. We know that f'(x) = 2x; let us see how this new interpretation of a derivative works in this example. Let x = 3. Then f'(3) = 6, and if we write f(3 + h) = f(3) + h f'(3) + h r(h), we can solve for r(h): f(3 + h) = (3 + h)^2 = h^2 + 6h + 9, and f(3) = 9, f'(3) = 6, so h^2 + 6h + 9 = 9 + 6h + h r(h), so r(h) = h. Notice that r(h) = h → 0 as h → 0, as expected.
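A few lines of code (my addition, not part of the notes) make the same point numerically: the remainder r(h) for f(x) = x^2 at x = 3 is computed directly from its defining equation and visibly tends to 0.

```python
import numpy as np

def remainder(f, fprime, x, h):
    """r(h) defined by f(x + h) = f(x) + h*f'(x) + h*r(h)."""
    return (f(x + h) - f(x) - h * fprime(x)) / h

# For f(x) = x^2 at x = 3 the remainder is exactly r(h) = h, so it tends to 0.
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, remainder(lambda t: t**2, lambda t: 2 * t, 3.0, h))
```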
- Consider f(x) = sin x, and let x = 0. Then f(0) = 0, f'(0) = 1. We can write f(0 + h) = f(0) + h f'(0) + h r(h), which becomes sin h = 0 + h + h r(h). Therefore r(h) = (sin h)/h - 1. Notice that lim_{h → 0} r(h) = 0, as expected.

Geometrically, one should think of this new interpretation of the derivative as saying that a function f is differentiable at x if and only if there is a good linear approximation to f at x, in the sense that this linear approximation has error which is of lower order than the linear approximation itself.

How does this help with defining a derivative for functions of several real variables? Notice that we cannot imitate the limit definition in a naive way, since we cannot divide by ordered tuples of real numbers. So instead we mimic this description of a derivative, replacing the constant f'(x) by a linear function of several real variables, which is known as a linear transformation in linear algebra. A linear transformation is a function T : R^m → R^n which has the form

    T(x_1, x_2, ..., x_m) = (a_11 x_1 + a_12 x_2 + ... + a_1m x_m, a_21 x_1 + ... + a_2m x_m, ...),

where the a_ij are real numbers. We typically represent T using an n × m matrix with the a_ij as its entries. For example, the function T(x, y) = (2x + 4y, x + 3.5y, 7x) is represented by the matrix

    [ 2   4   ]
    [ 1   3.5 ]
    [ 7   0   ]

Let f : R^m → R^n be any real function. (More generally, f only needs to be defined on a subset of R^m.) Then we say that f is differentiable at a point x = (x_1, ..., x_m) if there exists a linear transformation T_x = T : R^m → R^n such that if we write

    f(x + h) = f(x) + T(h) + |h| r(h),

then r(h) → 0 as h → 0, where this limit takes place as h → 0 in R^m. We call the linear transformation T_x = T the derivative (sometimes total derivative) of f at x.

This definition is difficult to work with directly. Fortunately, in most situations it is fairly easy to compute, because it turns out that real differentiability is closely related to the existence of partial derivatives. In particular, we have the following theorem, which we will not prove, from real analysis:

Theorem 2. Let f : R^m → R^n be any function. Write f = (f_1(x_1, ..., x_m), ..., f_n(x_1, ..., x_m)). Let x = (x_1, ..., x_m) be a point of R^m. If the partial derivatives ∂_j f_i exist for all i, j in some open ball containing x and are also all continuous at x, then f is differentiable at x. Furthermore, if f is differentiable at x, then all the partial derivatives ∂_j f_i exist, and the derivative of f at x is given by the Jacobian matrix J(x_1, ..., x_m) = (∂_j f_i(x_1, ..., x_m))_ij.
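To illustrate Theorem 2 concretely, here is a sketch (my addition, using an illustrative map of my own choosing rather than one from the notes) that builds the Jacobian matrix with sympy and evaluates it at a point to get the total derivative there.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# An illustrative C^1 map f : R^2 -> R^2 (my own example, not from the notes).
f = sp.Matrix([x*y + y**3, sp.sin(x) + y**2])

# The Jacobian matrix (partial_j f_i), which Theorem 2 identifies with the derivative.
J = f.jacobian(sp.Matrix([x, y]))
print(J)                      # Matrix([[y, x + 3*y**2], [cos(x), 2*y]])
print(J.subs({x: 1, y: 2}))   # the total derivative (a linear map) at the point (1, 2)
```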
The Jacobian matrix is just the matrix you get by computing the gradient of each of the component functions of f, evaluating them at the point (x_1, ..., x_m), and taking these gradients as the rows of the Jacobian matrix.

The hypothesis that the partial derivatives must exist in a neighborhood of x is essential. It is not enough to know that the partial derivatives exist at x; there are examples of functions which have partial derivatives at a point x but are not differentiable there.

The property of a function having continuous partial derivatives appears so frequently that we give it a name. If Ω is an open set in R^m, we say that a function f : Ω → R is in C^k(Ω) if all of its partial derivatives of order at most k exist and are continuous at every point of Ω. A function f : Ω → R^n is in C^k(Ω) if each of its component functions is in C^k(Ω). A function which is in C^k(Ω) for every k ≥ 1 is in C^∞(Ω). The theorem above says, for example, that any C^1 function on an open set Ω is differentiable on Ω, with derivative given by the Jacobian matrix.

Examples.

- Let f(x, y) = (x^2 + xy, 2xy). Since the component functions are C^∞ (being polynomials), f is real differentiable everywhere, and its derivative is given by

      J(x, y) = [ 2x + y   x  ]
                [ 2y       2x ]

- Let f(x, y) = (e^x cos y, e^x sin y). (This is the real function that f(z) = e^z induces.) Then f is real differentiable everywhere, since its component functions are C^∞, and its derivative is given by

      J(x, y) = [ e^x cos y   -e^x sin y ]
                [ e^x sin y    e^x cos y ]

- Suppose f(x, y) = (u(x, y), v(x, y)), where u, v are C^1 functions that satisfy the CR equations on an open set Ω. Then f is differentiable on Ω, and has derivative given by

      J(x, y) = [ u_x   u_y ]
                [ v_x   v_y ]

  However, the CR equations tell us that u_x = v_y and u_y = -v_x. In other words, the CR equations are equivalent to saying that the Jacobian matrix of f(x, y) = (u(x, y), v(x, y)) has the special form

      [ a   -b ]
      [ b    a ]

  (equal diagonal entries, and off-diagonal entries which are negatives of each other); this is exactly the matrix of multiplication by the complex number a + bi = u_x + i v_x. For instance, in the previous example the Jacobian matrix has this form, so f(z) = e^z satisfies the CR equations.

- Let f(x) be a function of a single variable. Then the Jacobian matrix is just the 1 × 1 matrix [f'(x)], so the Jacobian matrix can be thought of as a generalization of the ordinary notion of derivative for a single-variable real function.
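The structural description of the CR equations above can be checked symbolically. The sketch below (my addition, not part of the notes) computes the Jacobian of the map induced by e^z, confirms the equal-diagonal/opposite-off-diagonal shape, and verifies that the matrix acts on (h_1, h_2) exactly as multiplication by u_x + i v_x acts on h_1 + i h_2.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# The real map induced by f(z) = e^z (from the example above).
u = sp.exp(x) * sp.cos(y)
v = sp.exp(x) * sp.sin(y)

J = sp.Matrix([u, v]).jacobian(sp.Matrix([x, y]))
print(J)

# CR structure: equal diagonal entries, opposite off-diagonal entries.
print(sp.simplify(J[0, 0] - J[1, 1]))   # 0
print(sp.simplify(J[0, 1] + J[1, 0]))   # 0

# J acts on (h1, h2) the same way multiplication by u_x + i*v_x acts on h1 + i*h2.
h1, h2 = sp.symbols('h1 h2', real=True)
fprime = J[0, 0] + sp.I * J[1, 0]
product = sp.expand_complex(fprime * (h1 + sp.I * h2))
print(sp.simplify(sp.re(product) - (J * sp.Matrix([h1, h2]))[0]))  # 0
print(sp.simplify(sp.im(product) - (J * sp.Matrix([h1, h2]))[1]))  # 0
```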
4. A sufficient condition for holomorphy

With this background on real differentiability in hand, we can now prove a sufficient condition involving the CR equations for a complex function f to be holomorphic. Before beginning, we remark that the description of a differentiable function as a function which has a good linear approximation (in the sense that f(x + h) = f(x) + h f'(x) + |h| r(h) with r(h) → 0 as h → 0) extends to holomorphic functions, where we now interpret h → 0 as a limit over complex numbers.

Theorem 3 (Theorem 2.4 of Ch. 1 in the text). Suppose f(x + yi) = u(x, y) + v(x, y) i is a complex function defined on an open set Ω in C. Suppose u, v are C^1 on Ω and satisfy the Cauchy-Riemann equations. Then f is holomorphic on Ω.

Proof. We begin by remarking that if, say, u(x, y) is C^1 on Ω, then u is differentiable on Ω. Therefore its derivative is given by the Jacobian matrix [u_x  u_y], which is just the gradient, and the definition of differentiability can be re-expressed by saying that if we write

    u(x + h_1, y + h_2) = u(x, y) + u_x(x, y) h_1 + u_y(x, y) h_2 + |h| r(h),

then r(h) → 0 as h = (h_1, h_2) → (0, 0). (In other words, the tangent plane to u(x, y) is a good linear approximation to u near (x, y).) Applying this to both u and v, we find that we can write

    u(x + h_1, y + h_2) = u(x, y) + u_x(x, y) h_1 + u_y(x, y) h_2 + |h| r_1(h)
    v(x + h_1, y + h_2) = v(x, y) + v_x(x, y) h_1 + v_y(x, y) h_2 + |h| r_2(h)

where r_1(h), r_2(h) → 0 as h → (0, 0). Since f(z) = u + vi, if we let h = h_1 + i h_2, we can write

    f(z + h) - f(z) = u(x + h_1, y + h_2) - u(x, y) + i (v(x + h_1, y + h_2) - v(x, y)).

Plug in the two expressions above for u(x + h_1, y + h_2) - u(x, y) and the analogous expression for v:

    f(z + h) - f(z) = u_x(x, y) h_1 + u_y(x, y) h_2 + |h| r_1(h) + i (v_x(x, y) h_1 + v_y(x, y) h_2 + |h| r_2(h)).

The terms |h| r_1(h) and i |h| r_2(h) sum to a term |h| r(h), where r(h) → 0 as h → 0, since r_1(h), r_2(h) → 0 as h → 0. For the remaining four terms, apply CR to convert the v_x, v_y terms to u_x, u_y terms:

    u_x(x, y) h_1 + u_y(x, y) h_2 + i (v_x(x, y) h_1 + v_y(x, y) h_2) = u_x h_1 + u_y h_2 - i u_y h_1 + i u_x h_2.

However, notice we can factor the right hand side as (u_x - i u_y)(h_1 + i h_2). Therefore, we can write

    f(z + h) - f(z) = (u_x - i u_y) h + |h| r(h),

where r(h) is a complex function satisfying r(h) → 0 as h → 0. Therefore f is differentiable, with derivative u_x - i u_y = u_x + i v_x, which is the expected value of the derivative.
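The linear-approximation statement at the heart of this proof can also be observed numerically. The following sketch (my addition, not part of the notes) takes f(z) = e^z, whose induced u, v are C^1 and satisfy CR, uses u_x + i v_x = e^z as the derivative the proof predicts, and checks that the remainder r(h) shrinks as h → 0 through random directions.

```python
import numpy as np

# For f(z) = exp(z), check numerically that the remainder
# r(h) = (f(z + h) - f(z) - f'(z) * h) / |h| tends to 0, which is the
# linear-approximation statement used in the proof above.
z = 0.3 - 1.1j
fprime = np.exp(z)  # u_x + i*v_x for this f, as the proof predicts

rng = np.random.default_rng(0)
for scale in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = scale * np.exp(1j * rng.uniform(0, 2 * np.pi))  # random direction, shrinking size
    r = (np.exp(z + h) - np.exp(z) - fprime * h) / abs(h)
    print(f"|h| = {abs(h):.0e},  |r(h)| = {abs(r):.3e}")
```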
A natural although somewhat subtle question is whether weaker conditions on u, v (weaker than being C^1) are sufficient to prove holomorphy. As a matter of fact, there are theorems with weaker hypotheses, but they are surprisingly non-trivial to prove. See the article by Grey and Morris in the American Mathematical Monthly for more information, if interested.

We conclude by proving a few more basic facts that use the CR equations. First, we show that if f is holomorphic in a region Ω, then considered as a real function it is also real differentiable there (as expected).

Theorem 4 (Proposition 2.3 in Chapter 1 of the text). Suppose f is a complex function holomorphic on a region Ω. Then considered as a real function from Ω to R^2, f is real differentiable, and the determinant of the Jacobian matrix at a point (x, y) is equal to |f'(x + yi)|^2.

Proof. To show that f is differentiable as a real function, we want to show that if (x, y) is a point in Ω, there exists a linear transformation T : R^2 → R^2 such that

    f(x + h_1, y + h_2) - f(x, y) = T(h_1, h_2) + |h| r(h),

where r(h) → 0 as h → (0, 0). Since we know that f is holomorphic, we can write f(z + h) - f(z) = h f'(z) + |h| r(h), where r(h) → 0 as h → 0 in complex numbers. Thinking of this equation as relating real functions, we obtain

    f(x + h_1, y + h_2) - f(x, y) = (h_1 + i h_2)(u_x + i v_x) + |h| r(h).

The first term on the right hand side is equal to (h_1 u_x - h_2 v_x) + i (u_x h_2 + v_x h_1). Therefore T is given by the matrix

    [ u_x   -v_x ]   =   [ u_x   u_y ]
    [ v_x    u_x ]       [ v_x   v_y ]

(We used the CR equations in the above equality.) So f is real differentiable with the Jacobian matrix above, and the determinant of this matrix is u_x^2 + v_x^2 = |u_x + i v_x|^2 = |f'(z)|^2, as desired.
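Theorem 4 is easy to confirm symbolically for a concrete example. The sketch below (my addition, using f(z) = z^3 as an illustrative choice) compares det J with |f'(z)|^2 computed as (Re f')^2 + (Im f')^2.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# Real and imaginary parts of the holomorphic function f(z) = z^3 (my choice of example).
w = sp.expand_complex(z**3)
u, v = sp.re(w), sp.im(w)

J = sp.Matrix([u, v]).jacobian(sp.Matrix([x, y]))
detJ = sp.simplify(J.det())

# |f'(z)|^2 with f'(z) = 3 z^2, computed as (Re f')^2 + (Im f')^2.
fp = sp.expand_complex(3 * z**2)
fprime_sq = sp.re(fp)**2 + sp.im(fp)**2

print(sp.simplify(detJ - fprime_sq))  # 0, confirming det J = |f'(z)|^2
```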
The next proposition shows that holomorphic functions are very closely related to a special type of function called a harmonic function, which appears frequently in physics and engineering. We begin by defining what a harmonic function is:

Definition 1. The differential operator ∂^2/∂x^2 + ∂^2/∂y^2, which operates on functions f : R^2 → R (or more generally, functions f : Ω → R, where Ω is an open subset of R^2), is called the Laplacian in R^2, and is sometimes written as Δ.

Definition 2. Let f : Ω → R be a real-valued function defined on an open set Ω in R^2. If f is C^2 and Δf = 0, then we call f a harmonic function.

Examples.

- Constant functions are harmonic, since their second order partial derivatives are always equal to 0.
- Any linear function, i.e., a function of the form f(x, y) = ax + by + c, is also harmonic.
- f(x, y) = e^x cos y is harmonic on R^2. Indeed, f_xx = e^x cos y, while f_yy = -e^x cos y, so f_xx + f_yy = 0 is true everywhere.

There are actually much more complicated functions which are harmonic, but we will not discuss them further, since they constitute a large portion of the study of partial differential equations. However, the next proposition shows that any holomorphic function produces harmonic functions automatically.

Proposition 1. Suppose f(z) = u + vi is holomorphic on an open set Ω, and suppose u, v are both C^2 (we will soon be able to remove this condition). Then u and v are both harmonic on Ω.

Proof. Since f is holomorphic, the CR equations tell us that u_x = v_y, u_y = -v_x. Differentiate the first equation with respect to x and the second with respect to y to get u_xx = v_yx and u_yy = -v_xy. Since u, v are C^2, v_yx = v_xy, so u_xx + u_yy = 0; i.e., u is harmonic on Ω. A similar argument shows that v is also harmonic on Ω.

The CR equations provide a non-obvious connection between harmonic functions (which on the surface seem to have nothing to do with complex numbers or holomorphic functions), which appear everywhere in physics and engineering, and holomorphic functions. As a consequence, several of the non-obvious theorems we prove for holomorphic functions during this class will have natural counterparts for harmonic functions. We will point some of these connections out, but one could fill many classes discussing harmonic functions and their relatives.
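As a final check tying Proposition 1 to a concrete example (my addition, again using f(z) = z^3 as the illustrative holomorphic function), the Laplacians of both the real and imaginary parts vanish identically:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# Real and imaginary parts of the holomorphic function f(z) = z^3.
w = sp.expand_complex(z**3)
u, v = sp.re(w), sp.im(w)

# Laplacian u_xx + u_yy; Proposition 1 says it vanishes for both u and v.
laplacian = lambda g: sp.diff(g, x, 2) + sp.diff(g, y, 2)
print(sp.simplify(laplacian(u)), sp.simplify(laplacian(v)))  # 0 0
```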
