Elliptic integrals, the arithmetic-geometric mean and the Brent-Salamin algorithm for π
Notes by G.J.O. Jameson

Contents
1. The integrals I(a, b) and K(b)
2. The arithmetic-geometric mean
3. The evaluation of I(a, b): Gauss's theorem
4. Estimates for I(a, 1) for large a and I(1, b) for small b
5. The integrals J(a, b) and E(b)
6. The integrals L(a, b)
7. Further evaluations in terms of the agm; the Brent-Salamin algorithm
8. Further results on K(b) and E(b)

1. The integrals I(a, b) and K(b)

For a, b > 0, define
   I(a, b) = ∫_0^{π/2} (a² cos²θ + b² sin²θ)^{−1/2} dθ.   (1)
An important equivalent form is:

1.1. We have
   I(a, b) = ∫_0^∞ (x² + a²)^{−1/2} (x² + b²)^{−1/2} dx.   (2)

Proof. With the substitution x = b tan θ, the stated integral becomes
   ∫_0^{π/2} b sec²θ / [(a² + b² tan²θ)^{1/2} b sec θ] dθ = ∫_0^{π/2} (a² + b² tan²θ)^{−1/2} sec θ dθ = ∫_0^{π/2} (a² cos²θ + b² sin²θ)^{−1/2} dθ.

We list a number of facts that follow immediately from (1) or (2):
(I1) I(a, a) = π/(2a);
(I2) I(b, a) = I(a, b);
(I3) I(ca, cb) = (1/c) I(a, b), hence a I(a, b) = I(1, b/a);
(I4) I(a, b) decreases with a and with b;
(I5) if a ≥ b, then π/(2a) ≤ I(a, b) ≤ π/(2b).
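For readers who want to experiment, definition (1) is easy to evaluate numerically. The following sketch (plain Python; the composite Simpson rule, the function name and the panel count are working choices, not from the notes) can serve as a baseline against which the much faster agm-based evaluation of section 3 can later be checked.

```python
from math import cos, sin, sqrt, pi

def I_quadrature(a, b, panels=10000):
    """I(a, b) = integral over [0, pi/2] of (a^2 cos^2 t + b^2 sin^2 t)^(-1/2) dt,
    approximated by the composite Simpson rule (the integrand is smooth)."""
    f = lambda t: 1.0 / sqrt(a * a * cos(t) ** 2 + b * b * sin(t) ** 2)
    h = (pi / 2) / panels
    total = f(0.0) + f(pi / 2)
    for k in range(1, panels):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

# (I1): I(a, a) = pi/(2a); (I5): pi/(2a) <= I(a, b) <= pi/(2b) for a >= b
print(I_quadrature(2.0, 2.0), pi / 4)   # both ~0.785398...
print(I_quadrature(2.0, 1.0))           # lies between pi/4 and pi/2
```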
Note that the substitution x = 1/y in (2) gives
   I(a, b) = ∫_0^∞ (1 + a²y²)^{−1/2} (1 + b²y²)^{−1/2} dy.   (3)

1.2. We have I(a, b) ≤ π/[2(ab)^{1/2}].

Proof. This follows from the integral Cauchy–Schwarz inequality, applied to (1).

Note further that, by the inequality of the means,
   π/[2(ab)^{1/2}] ≤ (π/4)(1/a + 1/b).
This upper bound for I(a, b) is also obtained directly by applying the inequality of the means to the integrand in (2).

Clearly, I(a, 0) is undefined, and one would expect that I(a, b) → ∞ as b → 0+. This is, indeed, true. In fact, the behaviour of I(1, b) for b close to 0 is quite interesting; we return to it in section 4.

The complete elliptic integral of the first kind is defined by
   K(b) = ∫_0^{π/2} (1 − b² sin²θ)^{−1/2} dθ.   (4)
For 0 ≤ b ≤ 1, the conjugate b' is defined by b' = (1 − b²)^{1/2}, so that b² + b'² = 1. Then cos²θ + b² sin²θ = 1 − b'² sin²θ, so that I(1, b) = K(b'). (An incomplete elliptic integral is one in which the integral is over a general interval [0, α]. We do not consider such integrals in these notes.) Clearly, K(0) = π/2 and K(b) increases with b. Another equivalent form is:

1.3. We have
   K(b) = ∫_0^1 (1 − t²)^{−1/2} (1 − b²t²)^{−1/2} dt.   (5)

Proof. Denote this integral by I. Substituting t = sin θ, we have
   I = ∫_0^{π/2} [cos θ (1 − b² sin²θ)^{1/2}]^{−1} cos θ dθ = K(b).

1.4 PROPOSITION. For b < 1,
   (2/π) K(b) = Σ_{n=0}^∞ d_n b^{2n} = 1 + (1/4)b² + (9/64)b⁴ + ⋯,   (6)
where d_0 = 1 and
   d_n = [ (1·3·…·(2n−1)) / (2·4·…·(2n)) ]².

Proof. By the binomial series,
   (1 − b² sin²θ)^{−1/2} = 1 + Σ_{n=1}^∞ a_n b^{2n} sin^{2n}θ,
where
   a_n = (1/n!) (1/2)(3/2)·…·((2n−1)/2) = (1·3·…·(2n−1)) / (2·4·…·(2n)).
For fixed b < 1, the series is uniformly convergent for 0 ≤ θ ≤ π/2. Now ∫_0^{π/2} sin^{2n}θ dθ = (π/2) a_n: identity (6) follows.

In a sense, this series can be regarded as an evaluation of I(a, b), but a more effective one will be described in section 3.

1.5 COROLLARY. For b < 1,
   1 + (1/4)b² ≤ (2/π) K(b) ≤ 1 + b²/[4(1 − b²)].

Proof. Just note that 1 + (1/4)b² ≤ Σ_{n=0}^∞ d_n b^{2n} ≤ 1 + (1/4)(b² + b⁴ + ⋯).

Hence (2/π) K(b) ≤ 1 + (1/2)b² for b² ≤ 1/2. These are convenient inequalities for K(b), but when translated into an inequality for I(1, b) = K(b'), the right-hand inequality says I(1, b) ≤ (π/2)[1 + (1 − b²)/(4b²)] = π(3b² + 1)/(8b²), and it is easily checked that this is weaker than the elementary estimate I(1, b) ≤ (π/4)(1 + 1/b) given above.
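The series (6) is equally easy to use in code. The sketch below (plain Python; the fixed number of terms and the recursion d_{n+1} = d_n ((2n+1)/(2n+2))², which follows from the definition of d_n, are my own choices) evaluates K(b) and, via I(1, b) = K(b'), can be compared with the quadrature sketch above.

```python
from math import pi, sqrt

def K_series(b, terms=200):
    """K(b) from (2/pi) K(b) = sum_{n>=0} d_n b^{2n}, valid for |b| < 1,
    with d_0 = 1 and d_{n+1} = d_n * ((2n+1)/(2n+2))^2."""
    d, total, bsq, power = 1.0, 0.0, b * b, 1.0
    for n in range(terms):
        total += d * power
        power *= bsq
        d *= ((2 * n + 1) / (2 * n + 2)) ** 2
    return 0.5 * pi * total

# I(1, b) = K(b') with b' = sqrt(1 - b^2); compare with I_quadrature(1.0, 0.6)
b = 0.6
print(K_series(sqrt(1.0 - b * b)))
```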
The special case I(1, √2)

The integral I(1, √2) equates to a beta integral, and consequently to numerous other integrals. We mention some of them, assuming familiarity with the beta and gamma functions.

1.6 PROPOSITION. We have
   I(1, √2) = ∫_0^1 (1 − x⁴)^{−1/2} dx   (7)
            = ∫_1^∞ (x⁴ − 1)^{−1/2} dx   (8)
            = (1/4) B(1/4, 1/2)   (9)
            = Γ(1/4)² / [4(2π)^{1/2}].   (10)

Proof. For (7), substitute x = sin θ, so that 1 − x⁴ = cos²θ (1 + sin²θ). Then
   ∫_0^1 (1 − x⁴)^{−1/2} dx = ∫_0^{π/2} (1 + sin²θ)^{−1/2} dθ = ∫_0^{π/2} (cos²θ + 2 sin²θ)^{−1/2} dθ = I(1, √2).
Substituting x = 1/y, we also have
   ∫_1^∞ (x⁴ − 1)^{−1/2} dx = ∫_0^1 y² (1 − y⁴)^{−1/2} y^{−2} dy = ∫_0^1 (1 − y⁴)^{−1/2} dy.
Substituting x = t^{1/4} in (7), we obtain
   I(1, √2) = ∫_0^1 (1 − t)^{−1/2} (1/4) t^{−3/4} dt = (1/4) B(1/4, 1/2).
Now using the identities B(a, b) = Γ(a)Γ(b)/Γ(a + b) and Γ(a)Γ(1 − a) = π/sin aπ, we have Γ(1/2) = π^{1/2} and Γ(1/4)Γ(3/4) = π/sin(π/4) = π√2, hence
   B(1/4, 1/2) = Γ(1/4)Γ(1/2)/Γ(3/4) = π^{1/2} Γ(1/4)²/(π√2) = Γ(1/4)²/(2π)^{1/2}.

1.7. We have
   ∫_0^{π/2} (sin θ)^{−1/2} dθ = 2 I(1, √2),   (11)
   ∫_0^1 (1 − x⁴)^{1/2} dx = ∫_0^1 (1 − x²)^{1/4} dx = (2/3) I(1, √2).   (12)

Proof. For (11), substitute x = (sin θ)^{1/2} in (7). Denote the two integrals in (12) by I_1, I_2. The substitutions x = t^{1/4} and x = t^{1/2} give I_1 = (1/4) B(1/4, 3/2) and I_2 = (1/2) B(1/2, 5/4). The identity (a + b) B(a, b + 1) = b B(a, b) now gives I_1 = I_2 = (1/6) B(1/4, 1/2).

2. The arithmetic-geometric mean

Given positive numbers a, b, the arithmetic mean is a₁ = (1/2)(a + b) and the geometric mean is b₁ = (ab)^{1/2}. If a = b, then a₁ = b₁ = a. If a > b, then a₁ > b₁, since 4(a₁² − b₁²) = (a + b)² − 4ab = (a − b)² > 0.

With a ≥ b, consider the iteration given by a₀ = a, b₀ = b and
   a_{n+1} = (1/2)(a_n + b_n),   b_{n+1} = (a_n b_n)^{1/2}.
At each stage, the two new numbers are the arithmetic and geometric means of the previous two. The following facts are obvious:
(M1) a_n ≥ b_n;
(M2) a_{n+1} ≤ a_n and b_{n+1} ≥ b_n;
(M3) a_{n+1} − b_{n+1} ≤ a_{n+1} − b_n = (1/2)(a_n − b_n), hence a_n − b_n ≤ 2^{−n}(a₀ − b₀).
It follows that (a_n) and (b_n) converge to a common limit, which is called the arithmetic-geometric mean of a and b. We will denote it by M(a, b). Convergence is even more rapid than the estimate implied by (M3), as we will show below. First, we illustrate the procedure by the calculation of M(√2, 1) and M(100, 1):

   n   b_n        a_n        |   n   b_n       a_n
   0   1.000000   1.414214   |   0   1         100
   1   1.189207   1.207107   |   1   10        50.5
   2   1.198124   1.198157   |   2   22.4722   30.25
   3   1.198140   1.198140   |   3   26.0727   26.3611
                             |   4   26.2165   26.2169
                             |   5   26.2167   26.2167

Of course, a_n and b_n never actually become equal, but more decimal places would be required to show the difference at the last stage shown. In fact, in the first example, a_3 − b_3 < 4 × 10^{−9}, as we will see shortly.

Clearly, M(a, b) = M(a_n, b_n) and b_n < M(a, b) < a_n for all n. Also, M(ca, cb) = cM(a, b) for c > 0, so M(a, b) = aM(1, b/a) = bM(a/b, 1).

For further analysis of the process, define c_n by a_n² − b_n² = c_n².

2.1. With c_n defined in this way, we have
(M4) c_{n+1} = (1/2)(a_n − b_n),
(M5) a_n = a_{n+1} + c_{n+1}, b_n = a_{n+1} − c_{n+1},
(M6) c_n² = 4 a_{n+1} c_{n+1},
(M7) c_{n+1} ≤ c_n²/(4b).

Proof. For (M4), we have c_{n+1}² = a_{n+1}² − b_{n+1}² = (1/4)(a_n + b_n)² − a_n b_n = (1/4)(a_n − b_n)². (M5) follows, since a_{n+1} = (1/2)(a_n + b_n), and (M6) is given by c_n² = (a_n + b_n)(a_n − b_n) = (2a_{n+1})(2c_{n+1}). Finally, (M7) follows from (M6), since a_{n+1} > b.

(M5) shows how to derive a_n and b_n from a_{n+1} and b_{n+1}, reversing the agm iteration.
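In code the iteration is a few lines. The following sketch (plain Python; the stopping rule is a working choice) reproduces the two illustrative values above, M(√2, 1) ≈ 1.19814 and M(100, 1) ≈ 26.2167.

```python
from math import sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean M(a, b): replace (a, b) by their arithmetic
    and geometric means until the two agree to within a relative tolerance."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2.0, sqrt(a * b)
    return (a + b) / 2.0

print(agm(sqrt(2.0), 1.0))   # ~1.198140
print(agm(100.0, 1.0))       # ~26.2167
```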
(M7) shows that c_n (hence also a_n − b_n) converges to 0 quadratically, hence very rapidly. We can derive an explicit bound for c_n as follows:

2.2. Let R > 1 and let k be such that c_k ≤ 4b/R. Then c_{n+k} ≤ 4b/R^{2^n} for all n. The same holds with b replaced by b_k.

Proof. The hypothesis equates to the statement for n = 0. Assuming it now for a general n, we have by (M7)
   c_{n+k+1} ≤ (1/(4b)) (4b/R^{2^n})² = 4b/R^{2^{n+1}}.

In the case M(√2, 1), we have c₀ = b = 1. Taking R = 4, we deduce c_n ≤ 4/4^{2^n} for all n. For a slightly stronger variant, note that c_1 = a_0 − a_1 < 1/4, hence (taking k = 1 and R = 16) c_{n+1} ≤ 4/16^{2^n} = 4/2^{2^{n+2}} for n ≥ 0, or c_n ≤ 4/2^{2^{n+1}} for n ≥ 1. In particular, c_4 < 10^{−9}.

We finish this section with a variant of the agm iteration in the form of a single sequence instead of a pair of sequences.

2.3. Let b > 0. Define a sequence (k_n) by: k₀ = b and
   k_{n+1} = 2 k_n^{1/2}/(1 + k_n).
Then
   M(1, b) = ∏_{n=0}^∞ (1 + k_n)/2.

Proof. Let (a_n), (b_n) be the sequences generated by the iteration for M(1, b) (with a₀ = 1, b₀ = b, whether or not b < 1), and let k_n = b_n/a_n. Then k₀ = b and
   k_{n+1} = b_{n+1}/a_{n+1} = 2(a_n b_n)^{1/2}/(a_n + b_n) = 2 k_n^{1/2}/(1 + k_n).
So the sequence (k_n) defined by this iteration is (b_n/a_n). Also,
   (1 + k_n)/2 = (a_n + b_n)/(2a_n) = a_{n+1}/a_n,
so ∏_{r=0}^{n−1} (1 + k_r)/2 = a_n → M(1, b) as n → ∞.

Quadratic convergence of (k_n) to 1 is easily established directly:
   1 − k_{n+1} = 1 − 2 k_n^{1/2}/(1 + k_n) = (1 − k_n^{1/2})²/(1 + k_n) ≤ (1 − k_n^{1/2})² ≤ (1 − k_n)².
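The single-sequence form 2.3 is just as easy to program; a minimal sketch (plain Python, with a fixed number of factors chosen here for illustration):

```python
from math import sqrt

def agm_one(b, steps=12):
    """M(1, b) as the product prod_{n>=0} (1 + k_n)/2 from 2.3,
    where k_0 = b and k_{n+1} = 2*sqrt(k_n)/(1 + k_n)."""
    k, product = b, 1.0
    for _ in range(steps):
        product *= (1.0 + k) / 2.0
        k = 2.0 * sqrt(k) / (1.0 + k)
    return product

print(agm_one(1.0 / sqrt(2.0)))   # ~0.847213, i.e. M(1, 1/sqrt(2)) = M(sqrt(2), 1)/sqrt(2)
```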
Another way to present this iteration is as follows. Assume now that b < 1. Recall that for 0 ≤ k ≤ 1, the conjugate k' is defined by k² + k'² = 1. Clearly, if t = 2k^{1/2}/(1 + k) (where 0 ≤ k ≤ 1), then t' = (1 − k)/(1 + k). So we have k'_{n+1} = (1 − k_n)/(1 + k_n), hence 1 + k'_{n+1} = 2/(1 + k_n), so that
   M(1, b) = ∏_{n=0}^∞ 1/(1 + k'_{n+1}).

3. The evaluation of I(a, b): Gauss's theorem

We now come to the basic theorem relating I(a, b) and M(a, b). It was established by Gauss in 1799. There are two distinct steps, which we present as separate theorems. In the notation of section 2, we show first that I(a, b) = I(a₁, b₁). It is then a straightforward deduction that I(a, b) = π/[2M(a, b)]. Early proofs of the first step, including that of Gauss, were far from simple. Here we present a proof due to Newman [New] based on an ingenious substitution.

3.1 THEOREM. Let a ≥ b > 0, and let a₁ = (1/2)(a + b), b₁ = (ab)^{1/2}. Then
   I(a, b) = I(a₁, b₁).   (1)

Proof. We write I(a₁, b₁) as an integral on (−∞, ∞), using the fact that the integrand is even:
   I(a₁, b₁) = (1/2) ∫_{−∞}^∞ (x² + a₁²)^{−1/2} (x² + b₁²)^{−1/2} dx.
Substitute
   x = (1/2)(t − ab/t).
Then x → −∞ as t → 0+ and x → ∞ as t → ∞. Also, dx/dt = (1/2)(1 + ab/t²) and
   4(x² + b₁²) = 4x² + 4ab = t² − 2ab + a²b²/t² + 4ab = (t + ab/t)²,
hence (x² + b₁²)^{1/2} = (1/2)(t + ab/t). Further,
   4(x² + a₁²) = 4x² + (a + b)² = t² + a² + b² + a²b²/t² = (1/t²)(t² + a²)(t² + b²).

Hence
   I(a₁, b₁) = (1/2) ∫_0^∞ [2t/((t² + a²)^{1/2}(t² + b²)^{1/2})] · [2t/(t² + ab)] · [(t² + ab)/(2t²)] dt = ∫_0^∞ (t² + a²)^{−1/2}(t² + b²)^{−1/2} dt = I(a, b).

Lord [Lo] gives an interesting alternative proof based on the area in polar coordinates.

3.2 THEOREM. We have
   I(a, b) = π/[2M(a, b)].   (2)

Proof. Let (a_n), (b_n) be the sequences generated by the iteration for M(a, b). By Theorem 3.1, I(a, b) = I(a_n, b_n) for all n. Hence π/(2a_n) ≤ I(a, b) ≤ π/(2b_n) for all n. Since both a_n and b_n converge to M(a, b), the statement follows.

We describe some immediate consequences; more will follow later.

3.3. We have
   π/(a + b) ≤ I(a, b) ≤ π/[2(ab)^{1/2}].   (3)

Proof. This follows from Theorem 3.1, since π/(2a₁) ≤ I(a₁, b₁) ≤ π/(2b₁).

The right-hand inequality in (3) repeats 1.2, but I do not know an elementary proof of the left-hand inequality.

From the expressions equivalent to I(1, √2) given in 1.6, we have:

3.4. Let M₁ = M(1, √2) (≈ 1.1981). Then
   ∫_0^1 (1 − x⁴)^{−1/2} dx = I(1, √2) = π/(2M₁) (≈ 1.3110),
   B(1/4, 1/2) = 2π/M₁ (≈ 5.2441),
   Γ(1/4) = (2π)^{3/4}/M₁^{1/2} (≈ 3.6256).

Example. From the value M(100, 1) ≈ 26.2167 found in section 2, we have I(100, 1) = π/(2 × 26.2167) ≈ 0.05992.
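Gauss's theorem turns the agm sketch of section 2 into a fast evaluator for I(a, b). A small check (assuming the agm and I_quadrature helpers from the earlier sketches; the sample arguments are arbitrary):

```python
from math import pi, sqrt

a, b = 3.0, 1.0
print(pi / (2.0 * agm(a, b)))   # Gauss: I(a, b) = pi / (2 M(a, b))
print(I_quadrature(a, b))       # direct quadrature of definition (1)

# the special values in 3.4
M1 = agm(sqrt(2.0), 1.0)
print(pi / (2.0 * M1))          # I(1, sqrt(2)) ~ 1.31103
```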
Finally, we present several ways in which Theorem 3.1 can be rewritten in terms of K(b). Recall that I(1, b) = K(b'), where b² + b'² = 1, and that if k = (1 − b)/(1 + b), then k' = 2b^{1/2}/(1 + b).

3.5. For 0 ≤ b < 1,
   K(b) = I(1 + b, 1 − b).   (4)

Proof. By Theorem 3.1, I(1 + b, 1 − b) = I[1, (1 − b²)^{1/2}] = I(1, b') = K(b).

3.6. For 0 < b < 1,
   K(b) = [1/(1 + b)] K(2b^{1/2}/(1 + b)),   (5)
   K(b') = [2/(1 + b)] K((1 − b)/(1 + b)).   (6)

Proof. By (4) and the property I(ca, cb) = (1/c) I(a, b),
   K(b) = I(1 + b, 1 − b) = [1/(1 + b)] I(1, (1 − b)/(1 + b)) = [1/(1 + b)] K(2b^{1/2}/(1 + b)).
Also, by Theorem 3.1,
   K(b') = I(1, b) = I((1 + b)/2, b^{1/2}) = [2/(1 + b)] I(1, 2b^{1/2}/(1 + b)) = [2/(1 + b)] K((1 − b)/(1 + b)).

Conversely, any of (4), (5), (6) implies (1) (to show this for (4), start by choosing c and k so that ca = 1 + k and cb = 1 − k). For another proof of Theorem 3.1, based on this observation and the series expansion of (1 + ke^{iθ})^{−1/2}(1 + ke^{−iθ})^{−1/2}, see [Bor].

4. Estimates for I(a, 1) for large a and I(1, b) for small b

In this section, we will show that I(1, b) is closely approximated by log(4/b) for b close to 0. Equivalently, aI(a, 1) is approximated by log 4a for large a, and consequently M(a, 1) is approximated by
   F(a) := πa/(2 log 4a).
By way of illustration, F(100) = 26.2172 to four d.p., while M(100, 1) ≈ 26.2167. This result has been known for a long time, with varying estimates of the accuracy of the approximation. For example, a version appears in [WW], published in 1927.
10 A highly effective method was introduced by Newman in [New], but only sketched rather briefly, with the statement given in the form ai(a, ) = log 4a+O(/a ). Following [Jam], we develop Newman s method in more detail to establish a more precise estimation. The method requires nothing more than some elementary integrals and the binomial and logarithmic series, and a weaker form of the result is obtained with very little effort. The starting point is: 4.. We have (ab) / dx = (x + a ) / (x + b ) / dx. (ab) (x / + a ) / (x + b ) / Proof. With the substitution x = ab/y, the left-hand integral becomes ab (ab) / ( a b + a y ) / ( a b + b y ) / y dy = dy. (ab) (b / + y ) / (a + y ) / where Hence b / I(, b) = H(x) dx, () H(x) =. () (x + ) / (x + b ) / Now ( + y) / > y for < y <, as is easily seen by multiplying out ( + y)( y), so for < x <, ( x ) (x + b ) < H(x) <. (3) / (x + b ) / Now observe that the substitution x = by gives b / /b / dx = (x + b ) / (y + ) dy = / sinh b. / 4. LEMMA. We have log b < / sinh < log b/ b + b. (4) / 4 Proof. Recall that sinh x = log[x + (x + ) / ], which is clearly greater than log x. Also, since ( + b) / < + b, ( ) / b + / b + = b [ + ( + / b)/ ] < b ( + b) = / b ( + b). / 4 The right-hand inequality in (4) now follows from the fact that log( + x) x.
11 where This is already enough to prove the following weak version of our result: 4.3 PROPOSITION. For < b and a, log 4 b b < I(, b) < log 4 b + b, (5) log 4a a < ai(a, ) < log 4a + a. (6) Proof. The two statements are equivalent, because ai(a, ) = I(, ). By () and (3), a Since (x + b ) / > x, we have sinh b / R (b) < I(, b) < sinh b /, R (b) < R (b) = b / b / x dx. (7) (x + b ) / x b / x dx = x dx = b. Both inequalities in (5) now follow from (4), since log b / = log 4 b. This result is sufficient to show that M(a, ) F (a) as a. To see this, write ai(a, ) = log 4a + r(a). Then I(a, ) which tends to as a, since ar(a). a log 4a = ar(a) log 4a[log 4a + r(a)], Some readers may be content with this version, but for those with the appetite for it, we now refine the method to establish closer estimates. All we need to do is improve our estimations by the insertion of further terms. ( + y) / <. Instead, we now use the stronger bound ( + y) / < y y The upper estimate in (3) only used for < y <. These are the first three terms of the binomial expansion, and the stated inequality holds because the terms of the expansion alternate in sign and decrease in magnitude. So we have instead of (3): ( x ) (x + b ) < H(x) < ( / x x4 ). (8) (x + b ) / We also need a further degree of accuracy in the estimates for sinh b / and R (b).
12 4.4 LEMMA. For < b, we have log b + b 3 / 4 3 b < sinh < log b/ b + b / 4 Proof. Again by the binomial series, we have So, as in Lemma 4., 6 b + 3 b3. (9) + b 8 b < ( + b) / < + b 8 b + 6 b3. () ( ) / b + / b + < b ( + b / 8 b + 6 b3 ) = b ( + b / 4 6 b + 3 b3 ). The right-hand inequality in (9) now follows from log( + x) x. Also, ( ) / b + / b + > b ( + b / 8 b ) = ( + B), b/ where B = 4 b 6 b (so B < 4 b). By the log series, log( + x) > x x for < x <, so log( + B) > B B > 4 b 6 b 3 b = 4 b 3 3 b. 4.5 LEMMA. For < b, we have: R (b) < b + 4 b b log, () b/ R (b) > b + 4 b b log b / 3 6 b3, () R (b) < b 5 b. (3) Proof: We can evaluate R (b) explicitly by the substitution x = b sinh t: R (b) = c(b) where c(b) = sinh (/b / ). Now c(b) b sinh t dt = b (cosh t ) dt = 4 b sinh c(b) b c(b), sinh c(b) = sinh c(b) cosh c(b) = ( ) / b / b + = b ( + b)/, so R (b) = b( + b)/ b c(b). Statement () now follows from ( + b) / < + b and c(b) > log. Statement () b / follows from the left-hand inequality in () and the right-hand one in (4).
13 Unfortunately, (3) does not quite follow from (). We prove it directly from the integral, as follows. Write x /(x + b ) / = g(x). Since g(x) < x, we have b / g(x) dx < b (b b ). For < x < b, we have by the binomial expansion again ) x x g(x) = ( x b( + x /b ) / b b + 3x4, 8b 4 and hence b g(x) dx < ( )b < 3 b. Together, these estimates give R (b) < b 5 b. We can now state the full version of our Theorem. 4.6 THEOREM. For < b and a, we have log 4 b < I(, b) < ( + 4 b ) log 4 b, (4) ( log 4a < ai(a, ) < + ) log 4a. (5) 4a Moreover, the following lower bounds also apply: Hence I(, b) > ( + 4 b ) log 4 b 7 6 b, (6) ( ai(a, ) > + ) log 4a 7 4a 6a. (7) I(, b) = ( + 4 b ) log 4 b + O(b ) for < b <, ( ai(a, ) = + ) log 4a + O(/a ) for a >. 4a Note. As this shows, the estimation log 4a + O(/a ) in [New] is not quite correct. Proof: lower bounds. By (8), (9) and (3), I(, b) > sinh b / R (b) > log 4 b + b 3 6 b ( b 5 b ) > log 4 b, (with a spare term 8 b ). Of course, the key fact is the cancellation of the term b. Also, using () instead of (3), we have I(, b) > log 4 b + b 3 6 b ( b + 4 b 4 b log 4 b ) = ( + 4 b ) log 4 b 7 6 b. 3
14 Upper bound: By (8), I(, b) < sinh b / R (b) R (b), where So by (9) and (), R (b) = b / x 4 b / dx < x 3 dx = (x + b ) / 4 b. I(, b) < log 4 b + b 8 b + 6 b3 ( b + 4 b 4 b log 4 b 3 6 b3 ) b = ( + 4 b ) log 4 b 3 6 b + 4 b3. If b < 3 4, then 3 6 b 4 b3, so I(, b) < ( + 4 b ) log 4 b. For b 3 4, we reason as follows. By., I(, b) π 4 (+ b ). Write h(b) = (+ 4 b ) log 4 b π ( + ). We find that h( 3 ) >. One verifies by differentiation that h(b) 4 b 4 is increasing for < b (we omit the details), hence h(b) > for 3 b. 4 By Gauss s theorem and the inequality /( + x) > x, applied with x = /(4a ), statement (5) translates into the following pair of inequalities for M(a, ): 4.7 COROLLARY. Let F (a) = (πa)/( log 4a). Then for a, ( ) F (a) < M(a, ) < F (a). (8) 4a Since M(a, b) = b M(a/b, ) we can derive the following bounds for M(a, b) (where a > b) in general: π ) a ( b < M(a, b) < π log(4a/b) 4a a log(4a/b). (9) Note: We mention very briefly an older, perhaps better known, approach (cf. [Bow, p. ]). Use the identity I(, b) = K(b ), and note that b when b. Define A(b ) = π/ b sin θ ( b sin dθ θ) / and B(b ) = K(b ) A(b ). One shows by direct integration that A(b ) = log[(+b )/b] and B() = log, so that when b, we have A(b ) log b and hence K(b ) log 4 b. This method can be developed to give an estimation with error term O(b log ), [Bor, p. b ], but the details are distinctly more laborious, and (even with some refinement) the end result does not achieve the accuracy of our Theorem
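Theorem 4.6 is easy to test numerically. The sketch below (plain Python, assuming the agm helper from section 2; the sample values of b and the form of the bounds as read here, log(4/b) < I(1, b) < (1 + b²/4) log(4/b), are my reading of the statement above) checks the two-sided estimate.

```python
from math import log, pi

# I(1, b) = pi / (2 M(1, b)) by Gauss's theorem (section 3)
for b in (0.5, 0.1, 0.01, 0.001):
    I1b = pi / (2.0 * agm(1.0, b))
    lower = log(4.0 / b)
    upper = (1.0 + b * b / 4.0) * log(4.0 / b)
    print(b, lower < I1b < upper, I1b - lower)
```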
15 By a different method, we now establish exact upper and lower bounds for I(, b) log(/b) on (, ]. The lower bound amounts to a second proof that I(, b) > log(4/b) The expression I(, b) log(/b) increases with b for < b. Hence log 4 I(, b) log b π for < b ; () π log 4 ai(a, ) log a π for a ; () a log a + π M(a, ) π a for a. () log a + log 4 Proof. We show that K(b) log(/b ) decreases with b, so that I(, b) log(/b) = K(b ) log(/b) increases with b. Once this is proved, the inequalities follow, since I(, b) log(/b) takes the value π/ at b = and (by 4.3) tends to log 4 as b +. By.4, for b <, we have K(b) = π n= d nb n, where d = and d n =.3... (n ).4... (n). Hence K (b) = π n= nd nb n. Meanwhile, log(/b ) = log( b ), so d db log b = b b = b n. By well-known inequalities for the Wallis product, nπd n <, so K (b) < d db log(/b ). n= Application to the calculation of logarithms and π. Theorem 4.6 has pleasing applications to the calculation of logarithms (assuming π known) and π (assuming logarithms known), exploiting the rapid convergence of the agm iteration. Consider first the problem of calculating log x (where x > ). Choose n, in a way to be discussed below, and let 4a = x n, so that n log x = log 4a, which is approximated by ai(a, ). We calculate M = M(a, ), and hence I(a, ) = π/(m). Then log x is approximated by (a/n)i(a, ). How accurate is this approximation? By Theorem 4.6, log 4a = ai(a, ) r(a), where < r(a) < 4a log 4a. Hence in which log x = a r(a) I(a, ) n n, < r(a) n < log x = 4 log x 4a x. n We choose n so that this is within the desired degree of accuracy. It might seem anomalous that log x, which we are trying to calculate, appears in this error estimation, but a rough estimate for it is always readily available. 5
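The logarithm calculation just described is a few lines of code. In the sketch below (plain Python; the exponent n is a working choice, and agm is the helper from section 2), 4a = x^n and log x is approximated by (a/n)·I(a, 1) = πa/(2n·M(a, 1)).

```python
from math import pi, log

def log_via_agm(x, n=20):
    """Approximate log x via the agm: choose a with 4a = x^n, so that
    n*log(x) = log(4a) ~ a*I(a,1) = pi*a/(2*M(a,1)).  By the discussion
    above, the result slightly overestimates log x, by a very small amount
    once x^n is large."""
    a = x ** n / 4.0
    return (a / n) * pi / (2.0 * agm(a, 1.0))

print(log_via_agm(2.0), log(2.0))
```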
16 For illustration, we apply this to the calculation of log, taking n =, so that a = = 4. We find that M = M(4, ) , so our approximation is π 4 M , while in fact log = to seven d.p.. The discussion above (just taking log < ) shows that the approximation overestimates log, with error no more than 4/ 4 = /. We now turn to the question of approximating π. The simple-minded method is as follows. Choose a suitably large a. By Theorems 3. and 4.6, where π = M(a, )I(a, ) = M(a, )log 4a a + s(a), s(a) = M(a, ) r(a) 4a < M(a, )log a 4a. 3 So an approximation to π is M(a, ) log 4a, with error no more than s(a). Using our a original example with a =, this gives to five d.p., with the error estimated as no more than 8 5 (the actual error is about ). For a better approximation, one would repeat with a larger a. However, there is a way to generate a sequence of increasingly close approximations from a single agm iteration, which we now describe (cf. [Bor, Proposition 3]). In the original agm iteration, let c n = (a n b n) /. Note that c n converges rapidly π to. By Gauss s theorem, = M(a, c )I(a, c ). To apply Theorem 4.6, we need to equate I(a, c ) to an expression of the form I(d n, e n ), where e n /d n tends to. This is achieved as follows. Recall from. that a n = a n+ + c n+ and c n = 4a n+ c n+, so the arithmetic mean of a n+ and c n+ is a n, while the geometric mean is c n. So by Theorem 3., I(a n, c n ) = I(a n+, c n+ ), and hence I(a, c ) = I( n a n, n c n ) = n a n I (, c ) n. a n Since c n /a n tends to, Theorem 4.6 (or even 4.3) applies to show that I(a, c ) = lim log 4a n. n n a n c n Now take a = and b =. Then c = b, so M(a, b ) = M(a, c ), and (a n ) converges to this value. Since π = M(a, c )I(a, c ), we have established the following: 4.9 PROPOSITION. With a n, b n, c n defined in this way, we have π = lim n log 4a n. n c n 6
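Proposition 4.9, as read here (the agm iteration started from a₀ = √2, b₀ = 1, c₀ = 1, with p_n = 2^{−(n−1)} log(4a_n/c_n) → π), can be checked with the following sketch; the update c_n = c_{n−1}²/(4a_n) is the one recommended in the next paragraph.

```python
from math import sqrt, log, pi

def pi_via_logs(steps=4):
    """p_n = 2^{-(n-1)} * log(4 a_n / c_n) for the agm iteration with
    a_0 = sqrt(2), b_0 = 1, c_0 = 1; c_n is updated by c_n = c_{n-1}^2/(4 a_n)
    (from (M6)) to avoid loss of accuracy through cancellation."""
    a, b, c = sqrt(2.0), 1.0, 1.0
    values = []
    for n in range(1, steps + 1):
        a, b = (a + b) / 2.0, sqrt(a * b)   # a_n, b_n
        c = c * c / (4.0 * a)               # c_n
        values.append(log(4.0 * a / c) / 2.0 ** (n - 1))
    return values

print(pi_via_logs(), pi)
```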
Denote this approximation by p_n. With enough determination, one can derive an error estimation, but we will just illustrate the speed of convergence by performing the first three steps. Values are rounded to six significant figures. To calculate c_n, we use c_n = c_{n−1}²/(4a_n) rather than (1/2)(a_{n−1} − b_{n−1}), since this would require much greater accuracy in the values of a_n and b_n.

   n   a_n       b_n       c_n            p_n
   1   1.20711   1.18921   0.207107       3.14904
   2   1.19816   1.19812   0.00894983     3.14160
   3   1.19814   1.19814   0.0000167133   3.14159

More accurate calculation gives the value p_3 = 3.14159265…, showing that it actually agrees with π to eight decimal places.

While this expression does indeed converge rapidly to π, it has the disadvantage that it requires the calculation of logarithms at each stage. This is overcome by the Gauss–Brent–Salamin algorithm, which we describe in section 7.

5. The integrals J(a, b) and E(b)

For a, b ≥ 0, define
   J(a, b) = ∫_0^{π/2} (a² cos²θ + b² sin²θ)^{1/2} dθ.   (1)
Note that 4J(a, b) is the perimeter of the ellipse x²/a² + y²/b² = 1. Some immediate facts:
(E1) J(a, a) = ∫_0^{π/2} a dθ = πa/2,
(E2) J(a, 0) = ∫_0^{π/2} a cos θ dθ = a,
(E3) for c > 0, J(ca, cb) = cJ(a, b);
(E4) J(b, a) = J(a, b) (substitute θ = π/2 − φ);
(E5) J(a, b) increases with a and with b;
(E6) if a ≥ b, then πb/2 ≤ J(a, b) ≤ πa/2.
(E6) follows from (E1) and (E5), since J(b, b) ≤ J(a, b) ≤ J(a, a).

Since x + 1/x ≥ 2 for x > 0, we have I(a, b) + J(a, b) ≥ π. For 0 < b ≤ 1, we have J(1, b) ≥ π − I(1, b).
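As with I(a, b), the defining integral (1) is easy to evaluate directly; the sketch below (plain Python, composite Simpson rule, arbitrary panel count) gives the quarter-perimeter J(a, b) and can be used to check the inequalities that follow.

```python
from math import cos, sin, sqrt, pi

def J_quadrature(a, b, panels=10000):
    """J(a, b) = integral over [0, pi/2] of (a^2 cos^2 t + b^2 sin^2 t)^(1/2) dt,
    a quarter of the perimeter of the ellipse x^2/a^2 + y^2/b^2 = 1."""
    f = lambda t: sqrt(a * a * cos(t) ** 2 + b * b * sin(t) ** 2)
    h = (pi / 2) / panels
    total = f(0.0) + f(pi / 2)
    for k in range(1, panels):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

# (E6): pi*b/2 <= J(a, b) <= pi*a/2 for a >= b; 4*J is the full perimeter
print(J_quadrature(2.0, 1.0), pi / 2, pi)
```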
18 We start with some further inequalities for J(a, b). Given that J(a, b) is the length of the quarter-ellipse, it would seem geometrically more or less obvious that (a + b ) / J(a, b) a + b, () since this says that the curve is longer than the straight-line path between the same points, but shorter than two sides of the rectangle. An analytic proof of the right-hand inequality is very easy: since (x + y ) / x + y for positive x, y, we have (a cos θ + b sin θ) / a cos θ + b sin θ for each θ. Integration on [, π/] gives J(a, b) a + b. We will improve this inequality below. For the left-hand inequality in (), and some further estimations, we use the Cauchy- Schwarz inequality, in both its discrete and its integral form. 5.. We have J(a, b) (a + b ) /. (3) Proof. By the Cauchy-Schwarz inequality, a cos θ + b sin θ = a(a cos θ) + b(b sin θ) (a + b ) / (a cos θ + b sin θ) /. Integrating on [, π/], we obtain a + b (a + b ) / J(a, b), hence (3). A slight variation of this reasoning gives a second lower bound, improving upon (E6). 5.. We have J(a, b) π (a + b). (4) 4 Proof. By the Cauchy-Schwarz inequality, a cos θ + b sin θ = (a cos θ) cos θ + (b sin θ) sin θ (a cos θ + b sin θ) / (cos θ + sin θ) / = (a cos θ + b sin θ) /. Integration gives (4), since π/ cos θ dθ = π/ sin θ dθ = π 4. Note that equality holds in (3) when b =, and in (4) when b = a. 8
19 5.3. We have Equality holds when a = b. J(a, b) π (a + b ) /. (5) Proof. By the Cauchy-Schwarz inequality for integrals (with both sides squared), ( ) π/ π/ J(a, b) (a cos θ + b sin θ) dθ = π π 4 (a + b ). Since π/( ).7, J(a, b) is actually modelled fairly well by (a + b ) /. For u = (a, b), write u = (a + b ) / : this is the Euclidean norm on R. In this notation, we have J(a, b) = π/ (a cos θ, b sin θ) dθ. The triangle inequality for vectors says that u + u u + u. We show that J(a, b) also satisfies the triangle inequality (so in fact is also a norm on R ) Let u j = (a j, b j ) (j =, ), where a j, b j. Then Proof. For each θ, we have J(u + u ) J(u ) + J(u ). (6) [(a + a ) cos θ, (b + b ) sin θ] (a cos θ, b sin θ) + (a cos θ, b sin θ). Integrating on [, π ], we deduce that J(a + a, b + b ) J(a, b ) + J(a, b ). By suitable choices of u and u, we can read off various inequalities for J(a, b). Firstly, J(a, b) J(a, ) + J(, b) = a + b, as seen in (). Secondly: Second proof of (4). Since (a + b, a + b) = (a, b) + (b, a), we have (a + b) π = J(a + b, a + b) J(a, b) + J(b, a) = J(a, b). Thirdly, we can compare J(a, b) with the linear function of b that agrees with J(a, b) at b = and b = a. The resulting upper bound improves simultaneously on those in (E6) and (). 9
20 5.5. For b a, we have J(a, b) a + Equality holds when b = and when b = a. Proof. Since (a, b) = (a b, ) + (b, b), ( π ) b. (7) J(a, b) J(a b, ) + J(b, b) = (a b) + π b. Inequality (7) reflects the fact that J(a, b) is a convex functon of b for fixed a; this also follows easily from (6). We now consider equivalent and related integrals We have J(a, b) = b (x + a ) / dx. (8) (x + b ) 3/ Proof. Substituting x = b tan θ, we obtain (x + a ) / dx. = (x + b ) 3/ π/ (a + b tan θ) / b sec θ dθ b 3 sec 3 θ = π/ (a + b tan θ) / cos θ dθ b = J(a, b). b The complete elliptic integral of the second kind is E(b) = π/ Since cos θ + b sin θ = b sin θ, we have J(, b) = E(b ). ( b sin θ) / dθ. (9) Clearly, E() = π/, E() = and E(b) decreases with b. Also, K(b) E(b) = π/ ( b sin θ) ( b sin dθ = θ) / π/ b sin θ ( b sin dθ. () θ) / We use the usual mathematical notation E (b) for the derivative d E(b). We alert the db reader to the fact that some of the literature on this subject, including [Bor], writes b where we have b and then uses E (b) to mean E(b ) We have E (b) = [E(b) K(b)] for < b <. () b
21 Proof. This follows at once by differentiating under the integral sign in (9) and comparing with () We have E(b) = ( b t ) / ( t ) / dt. () Proof. Denote this integral by I. The substitution t = sin θ gives I = π/ ( b sin θ) / cos θ cos θ dθ = E(b) For b, b π E(b) 4 b. (3) Proof. For x, we have x ( x) / x. Hence E(b) π/ and similarly for the upper bound. ( b sin θ) dθ = π ( b ), When applied to J(, b) = E(b ), (3) gives weaker inequalities than (4) and (5). We now give the power series expression. Recall from.4 that π K(b) = n= d nb n, where d = and d n =.3... (n ).4... (n). 5. PROPOSITION. For b <, π E(b) = d n n bn = 4 b 3 64 b4 + (4) n= where Proof. By the binomial series, ( b sin θ) / = + ( ) a n = ( ) n = ( )n ( n n! )( ) ( a n b n sin n θ, n=... (n 3) n + ) = (n) For fixed b <, the series is uniformly convergent for θ π. The statement follows, since π/ sin n θ dθ = π (n ) (n)
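The power series for E(b) just obtained can be coded exactly like the one for K(b). A sketch (plain Python; the d_n recursion is the same as in the earlier K sketch):

```python
from math import pi, sqrt

def E_series(b, terms=200):
    """E(b) from (2/pi) E(b) = sum_{n>=0} d_n b^{2n} / (1 - 2n)
    = 1 - b^2/4 - 3 b^4/64 - ..., valid for |b| < 1."""
    d, total, bsq, power = 1.0, 0.0, b * b, 1.0
    for n in range(terms):
        total += d * power / (1.0 - 2.0 * n)
        power *= bsq
        d *= ((2 * n + 1) / (2 * n + 2)) ** 2
    return 0.5 * pi * total

# J(1, b) = E(b'); compare with the quadrature sketch for J above
b = 0.6
print(E_series(sqrt(1.0 - b * b)), J_quadrature(1.0, 0.6))
```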
22 6. The integral L(a, b) Most of the material in this section follows the excellent exposition in [Lo]. Further results on I(a, b) and J(a, b) are most conveniently derived and expressed with the help of another integral. For a > and b, define L(a, b) = π/ cos θ (a cos θ + b sin dθ. () θ) / The first thing to note is that L(b, a) L(a, b). In fact, we have: 6.. For a and b >, L(b, a) = π/ sin θ (a cos θ + b sin dθ. () θ) / Proof. The substitution θ = π φ gives L(b, a) = π/ cos θ π/ sin φ (b cos θ + a sin dθ = θ) / (a cos φ + b sin dφ. φ) / Hence, immediately: 6.. For a, b >, L(a, b) + L(b, a) = I(a, b), (3) a L(a, b) + b L(b, a) = J(a, b). (4) 6.3 COROLLARY. For a, b >, (a b )L(a, b) = J(a, b) b I(a, b). (5) Some immediate facts: (L) L(a, ) = ; L(, b) is undefined; a (L) L(a, a) = π 4a ; (L3) L(ca, cb) = L(a, b) for c > ; c (L4) L(a, b) decreases with a and with b; (L5) L(a, b) a for all b; (L6) if a b, then π 4a L(a, b) π, and similarly for L(b, a) (by (L5) and (L)). 4b
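The identities in 6.2 are convenient numerical checks on L(a, b). A sketch (plain Python, composite Simpson rule; it assumes the I_quadrature and J_quadrature helpers from the earlier sketches):

```python
from math import cos, sin, sqrt, pi

def L_quadrature(a, b, panels=10000):
    """L(a, b) = integral over [0, pi/2] of cos^2 t * (a^2 cos^2 t + b^2 sin^2 t)^(-1/2) dt."""
    f = lambda t: cos(t) ** 2 / sqrt(a * a * cos(t) ** 2 + b * b * sin(t) ** 2)
    h = (pi / 2) / panels
    total = f(0.0) + f(pi / 2)
    for k in range(1, panels):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

# 6.2: L(a,b) + L(b,a) = I(a,b)  and  a^2 L(a,b) + b^2 L(b,a) = J(a,b)
a, b = 2.0, 1.0
L1, L2 = L_quadrature(a, b), L_quadrature(b, a)
print(L1 + L2, I_quadrature(a, b))
print(a * a * L1 + b * b * L2, J_quadrature(a, b))
```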
23 6.4. We have J(a, b) = bl(b, a). (6) b Proof. Differentiate under the integral sign in the expression for J(a, b). π/4. So if we write J (b) for J(, b), then J (b) = bl(b, ). In particular, J () = L(, ) = 6.5. We have L(a, b) = L(b, a) = b dx, (7) (x + a ) / (x + b ) 3/ x dx, (8) (x + a ) / (x + b ) 3/ Proof. With the substitution x = b tan θ, the integral in (7) becomes π/ b 3 sec θ (a + b tan θ) / b 3 sec 3 θ dθ = = π/ π/ cos θ dθ (a + b tan θ) / cos θ (a cos θ + b sin dθ. θ) / The integral in (8) now follows from (3) and the integral (.) for I(a, b) If a > b, then L(a, b) < L(b, a). Proof. By (7), applied to both L(a, b) and L(b, a), L(b, a) L(a, b) = a (x + b ) b (x + a ) (x + a ) 3/ (x + b ) 3/ dx. This is positive, since the numerator is (a b )x We have b I(a, b) = L(a, b), (9) b b(a b ) b I(a, b) = b I(a, b) J(a, b). () Proof. (9) is obtained by differentiating the integral expression (.) for I(a, b) under the integral sign and comparing with (7). (The integral expressions in terms of θ do not deliver this result so readily.) Using (5) to substitute for L(a, b), we obtain (). Hence, writing I (b) for I(, b), we have I () = L(, ) = π/4. There is a corresponding expression for K (b), but we defer it to section 8. 3
24 Identities (9) and () are an example of a relationship that is expressed more pleasantly in terms of L(a, b) rather than J(a, b). More such examples will be encountered below. The analogue of the functions K(b) and E(b) is the pair of integrals L(, b ) = π/ cos θ ( b sin θ) / dθ, L(b, ) = π/ Corresponding results apply. For example, the substitution t = sin θ gives L(, b ) = ( t ) / dt ( b t ) / sin θ ( b sin dθ. θ) / (compare.3 and 5.8), and power series in terms of b are obtained by modifying.5. However, we will not introduce a special notation for these functions. Recall Theorem 4.6, which stated: log 4 < I(, b) < ( + b 4 b ) log 4 for < b <. We b now derive a corresponding estimation for J(, b) (actually using the upper bound from 4.8). 6.8 LEMMA. For < b <, where c = log 4, c = π/4. log b + c L(b, ) log b + c, () Proof. By 4.8, we have log b + log 4 I(, b) log b + π. Also, π 4 Statement () follows, by (3). L(, b). 6.9 PROPOSITION. For < b <, J(, b) = + b log b + r(b), () where c 3 b r(b) < c 4 b, with c 3 = log 4 and c 4 = 4 + π 8. Proof. Write J (t) = J(, t). By (6), J (t) = tl(t, ), so J (b) = b tl(t, ) dt. Now b ( t log t) dt = [ t log t ] b b + t dt = b log b + 4 b. By (), a lower bound for J(, b) is obtained by adding b c t dt = c b, and an upper bound by adding c b. Since J(a, ) = aj(, ), we can derive the following estimation for a >, a where c 3 /a r (a) c 4 /a. J(a, ) = a + log a a 4 + r (a),
25 It follows from () that J () = (this is not a case of (6)). Note that J b (b) = L(b, ) as b +, so J does not have a second derivative at. By (4), we have L(, b) + b L(b, ) = J(, b). By () and (), it follows that L(, b) = b log + b O(b ). A corresponding expression for L(b, ) can be derived by combining this with Theorem 4.6. Values at (, ) Recall from.6 that I(, ) = ( x 4 ) / dx = 4 B( 4, ). 6.. We have L(, ) = x ( x 4 ) dx = B(, 3 ). (3) / 4 4 Proof. Substitute x = sin θ, so that x 4 = cos θ(+sin θ) = cos θ(cos θ + sin θ). Then Also, the substitution x = t /4 gives x dx = ( x 4 ) / x π/ sin θ dx = ( x 4 ) / (cos θ + sin θ) dθ = L(, ). / t / ( t) / 4 t 3/4 dt = 4 t /4 ( t) / dt = 4 B(, 3 4 ). we deduce: Now using the identities Γ(a+) = aγ(a), Γ( ) = π and B(a, b) = Γ(a)Γ(b)/Γ(a+b), 6.. We have L(, )I(, ) = π 4. (4) hence (4). Proof. By the identities just quoted, B(, 4 )B(, 3 4 ) = Γ( )Γ( 4 ) Γ( 3 4 ) Γ( )Γ( 3 4 ) Γ( 5 4 ) = Γ( ) Γ( 4 ) Γ( 5 4 ) = 4π, Now I(, ) = π/(m ), where M = M(, ). We deduce: 6.. Let M = M(, ). Then L(, ) = M (.5997), (5) 5
26 L(, ) = π M M J(, ) = π + M M (.7959), (6) (.999). (7) Proof. (5) is immediate. Then (6) follows from (3), and (7) from (4), in the form J(, ) = L(, ) + L(, ). Two related integrals We conclude with a digression on two related integrals that can be evaluated explicitly For < b <, π/ π/ cos θ ( b sin θ) dθ = sin b, / b sin θ ( b sin θ) dθ = / b log + b b. Proof. Denote these two integrals by F (b), F (b) respectively. The substitution x = b sin θ gives F (b) = b b ( x ) dx = sin b. / b Now let b = ( b ) /, so that b + b =. Then b sin θ = b + b cos θ. Substitute x = b cos θ to obtain F (b) = b b (b + x ) dx = b / b sinh b. Now sinh u = log[u + ( + u ) / ] and + b /b = /b, so Now observe that F (b) = b log + b b. + b = + b ( + b)/ = b ( b ) / ( b). / The integral F (b) can be used as the basis for an alternative method for the results section 4. Another application is particularly interesting. Writing t for b, note that F (t) = so that F (t) dt = n= 6 t n n +, n= (n + ).
27 By reversing the implied double integral, one obtains an attractive proof of the well-known identity n= n = π /6 [JL]. 7. Further evaluations in terms of the agm; the Brent-Salamin algorithm We continue to follow the exposition of [Lo]. Recall that Gauss s theorem (Theorem 3.) says that I(a, b ) = I(a, b), where a = (a + b) and b = (ab) /. There are corresponding identities for L and J, but they are not quite so simple. We now establish them, using the same substitution as in the proof of Gauss s theorem. 7. THEOREM. Let a b and a = (a + b), b = (ab) /. Then L(b, a ) = a + b [L(b, a) L(a, b)]. () a b L(a, b ) = [al(a, b) bl(b, a)]. () a b Proof. By (6.7), written as an integral on (, ), L(b, a ) = As in Theorem 3., we substitute As before, we find so that a dx. (x + a ) 3/ (x + b / ) x = t ab t. (x + b ) / = ( t + ab ), t 4(x + a ) = 4x + (a + b) = t + a + b + a b L(b, a ) = 8 (a + b) But by (6.7) again, = (a + b) L(b, a) L(a, b) = t = t (t + a )(t + b ), 8t 3 (t + a ) 3/ (t + b ) 3/ ab (t + ) t t dt. (t + a ) 3/ (t + b ) 3/ = (a b ) a (t + b ) b (t + a ) dt (t + a ) 3/ (t + b ) 3/ ( + ab ) t t dt. (t + a ) 3/ (t + b ) 3/ dt 7
This proves (1). To deduce (2), write I(a, b) = I, L(a, b) = L₁ and L(b, a) = L₂. Then I = L₁ + L₂, also I = I(a₁, b₁) = L(a₁, b₁) + L(b₁, a₁), so
   (a − b) L(a₁, b₁) = (a − b)[I − L(b₁, a₁)] = (a − b)(L₁ + L₂) + (a + b)(L₁ − L₂) = 2aL₁ − 2bL₂.

7.2 COROLLARY. We have
   2 J(a₁, b₁) = ab I(a, b) + J(a, b).   (3)

Proof. By 6.2(4), J(a₁, b₁) = a₁² L(a₁, b₁) + b₁² L(b₁, a₁). Continue to write I, L₁, L₂ as above. Substituting (1) and (2), we obtain
   2(a − b) J(a₁, b₁) = (a + b)²(aL₁ − bL₂) + 2ab(a + b)(L₂ − L₁)
      = (a + b)[a(a + b) − 2ab]L₁ + (a + b)[2ab − b(a + b)]L₂
      = a(a + b)(a − b)L₁ + b(a + b)(a − b)L₂,
hence
   2 J(a₁, b₁) = a(a + b)L₁ + b(a + b)L₂ = ab(L₁ + L₂) + (a²L₁ + b²L₂) = abI + J.

It is possible, but no shorter, to prove (3) directly from the integral expression in 5.6, without reference to L(a, b).

It follows from (1) and (2) that if a > b, then L(a, b) < L(b, a) (as already seen in 6.6) and aL(a, b) > bL(b, a).

Note. If a and b are interchanged, then a₁ and b₁ remain the same: they are not interchanged! The expressions in (1), (2) and (3) are not altered.

We now derive expressions for L(a, b), L(b, a) and J(a, b) in terms of the agm iteration. In this iteration, recall that c_n is defined by c_n² = a_n² − b_n²: for this to make sense when n = 0, we require a > b. We saw in 2.2 that for a certain k, we have c_n ≤ 4b/4^{2^{n−k}} for n ≥ k, from which it follows that lim_{n→∞} 2^n c_n² = 0 and the series Σ_{n=0}^∞ 2^n c_n² is convergent.

7.3 THEOREM. Let a > b > 0, and let a_n, b_n, c_n be defined by the agm iteration for M(a, b), so that c₀² = a² − b². Let S = Σ_{n=0}^∞ 2^{n−1} c_n². Then
   c₀² L(b, a) = S I(a, b),   (4)
   c₀² L(a, b) = (c₀² − S) I(a, b),   (5)
   J(a, b) = (a² − S) I(a, b).   (6)

Proof. Note first that (5) and (6) follow from (4), because L(a, b) + L(b, a) = I(a, b) and J(a, b) = a² I(a, b) − c₀² L(b, a) (see 6.3). Write I(a, b) = I, and recall that I(a_n, b_n) = I for all n. Also, write L(b_n, a_n) = L_n; note that L(a_n, b_n) = I − L_n. Recall from 2.1 that c_{n+1} = (1/2)(a_n − b_n). So by (1), applied to a_n, b_n, we have (for each n ≥ 0)
   4 c_{n+1}² L_{n+1} = (a_n − b_n)² L_{n+1} = (a_n² − b_n²)(2L_n − I) = c_n²(2L_n − I),
hence
   2^n c_n² L_n − 2^{n+1} c_{n+1}² L_{n+1} = 2^{n−1} c_n² I.
Adding these differences for 0 ≤ n ≤ N, we have
   c₀² L₀ − 2^{N+1} c_{N+1}² L_{N+1} = (Σ_{n=0}^N 2^{n−1} c_n²) I.
Taking the limit as N → ∞, we have c₀² L₀ = S I, i.e. (4).

Since I(a, b) = π/[2M(a, b)], this expresses L(a, b) and J(a, b) in terms of the agm iteration.

Recall from 6.11 that L(√2, 1) I(√2, 1) = π/4. With Theorem 7.3, we can deduce the following expressions for π:

7.4 THEOREM. Let M(√2, 1) = M₁ and let a_n, b_n, c_n be defined by the agm iteration for M₁ (so a₀ = √2, b₀ = 1 and c₀ = 1). Let S = Σ_{n=0}^∞ 2^{n−1} c_n². Then
   π = M₁²/(1 − S).   (7)
Hence π = lim_{n→∞} π_n, where
   π_n = a_{n+1}²/(1 − S_n),   (8)
where S_n = Σ_{r=0}^n 2^{r−1} c_r².

Proof. By (5), L(√2, 1) = (1 − S) I(√2, 1). So, by 6.11,
   π/4 = (1 − S) I(√2, 1)² = (1 − S) π²/(4M₁²),
hence π = M₁²/(1 − S). Clearly, (8) follows.
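Theorem 7.4, as read here, is the whole algorithm: one agm iteration started from (√2, 1), a running sum S_n of the terms 2^{n−1} c_n², and π_n = a_{n+1}²/(1 − S_n). A sketch in plain Python (the variable names and the number of steps are working choices):

```python
from math import sqrt, pi

def brent_salamin(steps=5):
    """pi_n = a_{n+1}^2 / (1 - S_n) for the agm iteration with a_0 = sqrt(2),
    b_0 = 1, c_{n+1} = (a_n - b_n)/2 and S_n = sum_{r=0}^{n} 2^(r-1) c_r^2
    (c_0 = 1, so S_0 = 1/2).  Convergence is quadratic."""
    a, b = sqrt(2.0), 1.0
    S = 0.5                     # 2^(-1) * c_0^2
    approximations = []
    for n in range(steps):
        a, b, c = (a + b) / 2.0, sqrt(a * b), (a - b) / 2.0   # a_{n+1}, b_{n+1}, c_{n+1}
        approximations.append(a * a / (1.0 - S))              # pi_n uses a_{n+1} and S_n
        S += 2.0 ** n * c * c                                 # S_{n+1} = S_n + 2^n c_{n+1}^2
    return approximations

print(brent_salamin(), pi)
```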
30 The sequence in (8) is called the Brent-Salamin iteration for π. However, the idea was already known to Gauss. A minor variant is to use a n rather than a n+. Another, favoured by some writers (including [Bor]) is to work with M = M(, ) = M /. The resulting sequences are a n, b n, c n, where a n = a n / (etc.). Then S = n= n c n and π = M /( S). Of course, the rapid convergence of the agm iteration ensures rapid convergence of (8). We illustrate this by performing the first three steps. This repeats the calculation of M in section (with slightly greater accuracy), but this time we record c n and c n. To calculate c n, we use the identity c n = c n /(4a n ). n a n b n c n c n From this, we calculate S 3 = , hence S 3 = and π 3 = to eight d.p. (while π ). We now give a reasonably simple error estimation, which is essentially equivalent to the more elaborate one in [Bor, p. 48]. Recall from. that c n+ 4/( n ) for all n With π n defined by (8), we have π n < π and Proof. We have π π n < n+9 n+. (9) where π π n = M S a n+ S n = X n ( S)( S n ), X n = M ( S n ) a n+( S) = (S S n )M ( S)(a n+ M ). Now S n < S, so S n > S. From the calculations above, it is clear that S > 5, so ( S)( S n ) < 5 4 < 3. Now consider X n. We have < a n+ M < a n+ b n+ = c n+. Meanwhile, < M < and S S n = r=n+ r c r > n c n+. 3
31 At the same time, since r c r+ < r c r, we have S S n < n+ c n+. Hence X n > (so π n < π) and X n < M (S S n ) < (S S n ) < n+ c n+ < n+6 n+. Inequality (9) follows. A slight variation of the proof that X n > shows that π n increases with n. 8. Further results on K(b) and E(b) In this section, we translate some results from sections 6 and 7 into statements in terms of K(b) and E(b). First we give expressions for the derivatives, with some some applications. We have already seen in 5.7 that E (b) = [E(b) K(b)]. The expression for b K (b) is not quite so simple: 8.. For < b <, bb K (b) = E(b) b K(b). () Proof. Recall that b = ( b ) /. By 6.7, with a =, With b replaced by b, this says bb d db I(, b) = b I(, b) J(, b). Now db /db = b/b, so by the chain rule K (b) = b b b b d db K(b) = b K(b) E(b). d db K(b) = b b b b [b K(b) E(b)] = bb [b K(b) E(b)]. Alternatively, 8. can be proved from the power series for K(b) and E(b). Note that these power series also show that K () = E () =. The expressions for K (b) and E (b) can be combined to give the following more pleasant identities: 8. COROLLARY. For < b <, b [K (b) E (b)] = be(b), () b K (b) E (b) = bk(b). (3) 3
32 Proof. We have be (b) = E(b) K(b)]. With (), this gives bb [K (b) E (b)] = ( b )E(b) = b E(b), hence (), and (3) similarly. 8.3 THEOREM (Legendre s identity). For < b <, E(b)K(b ) + E(b )K(b) K(b)K(b ) = π. (4) Proof. Write K(b) E(b) = F (b). Then (4) is equivalent to E(b)E(b ) F (b)f (b ) = π. Now E (b) = F (b)/b and, by (), F (b) = (b/b )E(b). Hence, again using the chain rule, we have and d db E(b)E(b ) = b F (b)e(b ) + E(b) b b F (b ) d db F (b)f (b ) = b b E(b)F (b ) + F (b) ( bb ) b b E(b ) = b b E(b)F (b ) b F (b)e(b ). Hence E(b)E(b ) F (b)f (b ) is constant (say c) on (, ). To find c, we consider the limit as b +. Note first that < F (b) < K(b) for < b <. Now E() = π and E() =. By.5 and 5.9, F (b) < π b for < b <. Also, by., c = E()E() = π/. F (b ) < K(b ) = I(, b) π/(b / ). Hence F (b)f (b ) as b +, so Taking b = b = and writing E( ) = E, and similarly K, M, so that K = π/(m ), this gives E K K = π, so that E = π 4K + K = M + π 4M, as found in 6. (note that M = M(, ) = M and J(, ) = E ). In 3.5 and 3.6, we translated the identity I(a, b ) = I(a, b) into identities in terms of K(b). In particular, I( + b, b) = K(b). The corresponding identity for J was given in 7.: J(a, b ) = abi(a, b) + J(a, b). (5) 3
33 We now translate this into statements involving K and E. The method given here can be compared with [Bor, p. ]. Recall that if k = ( b)/( + b), then k = b / /( + b): denote this by g(b) We have E(b) b K(b) = ( + b)e[g(b)]. (6) Proof. By (5), with a and b replaced by + b and b, b I( + b, b) + J( + b, b) = J(, b ) = E(b). As mentioned above, I( + b, b) = K(b). Further ( J( + b, b) = ( + b)j, b ) = ( + b)e[g(b)]. + b 8.5. We have E(b ) + bk(b ) = ( + b)e ( ) b. (7) + b Proof. By (5), E(b ) + bk(b ) = J(, b) + bi(, b) = J ( ( + b), b/) ) = ( + b)j (, b/ + b ( ) b = ( + b)e. + b Note that (5) and (6) can be arranged as expressions for E(b) and E(b ) respectively. Outline of further results obtained from differential equations Let us restate the identities for E (b) and K (b), writing out b as b : be (b) = E(b) K(b), (8) b( b )K (b) + ( b )K(b) = E(b). (9) We can eliminate E(b) by differentiating (9), using (8) to substitute for E (b), and then (9) again to substitute for E(b). The result is the following identity, amounting to a second-order differential equation satisfied by K(b): b( b )K (b) + ( 3b )K (b) bk(b) =. () 33
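Before returning to the differential equations, Legendre's identity (8.3) can be checked numerically with the series sketches for K and E given in earlier sections (K_series and E_series below refer to those sketches; the sample values of b are arbitrary):

```python
from math import sqrt, pi

# Legendre's identity: E(b)K(b') + E(b')K(b) - K(b)K(b') = pi/2
for b in (0.3, 0.6, 0.9):
    bp = sqrt(1.0 - b * b)
    K, Kp = K_series(b), K_series(bp)
    E, Ep = E_series(b), E_series(bp)
    print(b, E * Kp + Ep * K - K * Kp, pi / 2)
```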
34 Similarly, we find that b( b )E (b) + ( b )E (b) + be(b) =. () By following this line of enquiry further, we can obtain series expansions for K(b ) = I(, b) and E(b ) = J(, b). Denote these functions by K (b) and E (b). To derive the differential equations satisfied by them, we switch temporarily to notation more customary for differential equations. Suppose we know a second-order differential equation satisfied by y(x) for < x <, and let z(x) = y(x ), where x = ( x ) /. By the chain rule, we find that y (x ) = (x /x)z (x) and y (x ) = x x z (x) x 3 z (x). Substituting in the equation satisfied by y(x ), we derive one satisfied by z(x). In this way, we find that K (x) satisfies x( x )y (x) + ( 3x )y (x) xy(x) =. () Interestingly, this is the same as the equation () satisfied by K(x). Meanwhile, E (x) satisfies (which is not the same as ()). x( x )y (x) ( + x )y (x) + xy(x) = (3) Now look for solutions of () of the form y(x) = n= a nx n+ρ. We substitute in the equation and equate all the coefficients to zero. The coefficient of x ρ is ρ. Taking ρ =, we obtain the solution y (x) = n= d nx n, where d = and d n = (n ) (n). Of course, we knew this solution already: by.4, y (x) = K(x). A second solution can be π found by the method of Frobenius. We give no details here, but the solution obtained is where y (x) = y (x) log x + e n = n r= d n e n x n, n= ( r ). r Now relying on the appropriate uniqueness theorem for differential equations, we conclude that K (x) = c y (x) + c y (x) for some c and c. To determine these constants, we use 4.3, which can be stated as follows: K (x) = log 4 log x + r(x), where r(x) as x +. By 34
35 .4, we have < y (x) < x for x <. Hence [y (x) ] log x, therefore also y (x) log x, tends to as x +. So K (x) = c y (x) + c y (x) = c + c log x + s(x), where s(x) as x +. It follows that c = and c = log 4. We summarise the conclusion, reverting to the notation b (compare [Bor, p. 9]): 8.6 THEOREM. Let < b <. With d n, e n as above, we have I(, b) = K(b ) = π K(b) log 4 b d n e n b n (4) n= = log 4 b + d n (log 4 ) b e n b n. (5) n= It is interesting to compare (5) with Theorem 4.6. The first term of the series in (5) is 4 b (log 4 ), so we can state b I(, b) = ( + 4 b ) log 4 ( b 4 b + O b 4 log ) b for small b, which is rather more accurate than the corresponding estimate in 4.6. Also, e n < log for all n, since it is increasing with limit log. Hence log 4 b e n > for all n, so (5) gives the lower bound I(, b) > ( + 4 b ) log 4 b 4 b, slightly stronger than the lower bound in 4.6. However, the upper bound in 4.6 is ( + 4 b ) log(4/b), and some detailed work is needed to recover this from (5). For the agm, the estimation above translates into where F (a) = (πa)/( log 4a). M(a, ) = F (a) F (a) log 4a + O(/a 3 ) as a, 4a log 4a One could go through a similar procedure to find a series expansion of J(, b). However, it is less laborious (though still fairly laborious!) to use 6.7 in the form J(, b) = b I(, b) b( b ) d I(, b) db and substitute (5). The result is: 35
36 where 8.7 THEOREM. For < b <, J(, b) = + r= f n (log 4 ) b g n b n, (6) n= f n = (n 3) (n ), (n ) (n) n ( g n = r ) + r n n. The first term in the series in (6) is ( log 4 b 4 )b. Denoting this by q(b), we conclude that J(, b) > + q(b) (since all terms of the series are positive), and J(, b) = + q(b) + O(b 4 log /b) for small b. This can be compared with the estimation in 6.9. References [Bor] J.M. Borwein and P.B. Borwein, The arithmetic-geometric mean and fast computation of elementary functions, SIAM Review 6 (984), , reprinted in Pi: A Source Book (Springer, 999), [Bor] J.M. Borwein and P.B. Borwein, Pi and the AGM, (Wiley, 987). [Bow] F. Bowman, Introduction to Elliptic Functions, (Dover Publications, 96). [Jam] [JL] [Lo] G.J.O. Jameson, An approximation to the arithmetic-geometric mean, Math. Gaz. 98 (4). G.J.O. Jameson and Nick Lord, Evaluation of n= Math. Gaz. 97 (3). n by a double integral, Nick Lord, Recent calculations of π: the Gauss-Salamin algorithm, Math. Gaz. 76 (99), 3 4. [Lo] Nick Lord, Evaluating integrals using polar areas, Math. Gaz. 96 (), [New] [New] [WW] D.J. Newman, Rational approximation versus fast computer methods, in Lectures on approximation and value distribution, pp , Sém. Math. Sup. 79 (Presses Univ. Montréal, 98). D.J. Newman, A simplified version of the fast algorithm of Brent and Salamin, Math. Comp. 44 (985), 7, reprinted in Pi: A Source Book (Springer, 999), E.T. Whittaker and G.N. Watson, A Course of Modern Analysis (Cambridge Univ. Press, 97). Dept. of Mathematics and Statistics, Lancaster University, Lancaster LA 4YF 36 [email protected]
How To Know If A Domain Is Unique In An Octempo (Euclidean) Or Not (Ecl)
Subsets of Euclidean domains possessing a unique division algorithm Andrew D. Lewis 2009/03/16 Abstract Subsets of a Euclidean domain are characterised with the following objectives: (1) ensuring uniqueness
An example of a computable
An example of a computable absolutely normal number Verónica Becher Santiago Figueira Abstract The first example of an absolutely normal number was given by Sierpinski in 96, twenty years before the concept
Solutions for Review Problems
olutions for Review Problems 1. Let be the triangle with vertices A (,, ), B (4,, 1) and C (,, 1). (a) Find the cosine of the angle BAC at vertex A. (b) Find the area of the triangle ABC. (c) Find a vector
The Goldberg Rao Algorithm for the Maximum Flow Problem
The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }
Lies My Calculator and Computer Told Me
Lies My Calculator and Computer Told Me 2 LIES MY CALCULATOR AND COMPUTER TOLD ME Lies My Calculator and Computer Told Me See Section.4 for a discussion of graphing calculators and computers with graphing
Factorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
Solutions of Equations in One Variable. Fixed-Point Iteration II
Solutions of Equations in One Variable Fixed-Point Iteration II Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011
Putnam Notes Polynomials and palindromes
Putnam Notes Polynomials and palindromes Polynomials show up one way or another in just about every area of math. You will hardly ever see any math competition without at least one problem explicitly concerning
Mathematical Methods of Engineering Analysis
Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................
RESULTANT AND DISCRIMINANT OF POLYNOMIALS
RESULTANT AND DISCRIMINANT OF POLYNOMIALS SVANTE JANSON Abstract. This is a collection of classical results about resultants and discriminants for polynomials, compiled mainly for my own use. All results
Continued Fractions. Darren C. Collins
Continued Fractions Darren C Collins Abstract In this paper, we discuss continued fractions First, we discuss the definition and notation Second, we discuss the development of the subject throughout history
MATH 132: CALCULUS II SYLLABUS
MATH 32: CALCULUS II SYLLABUS Prerequisites: Successful completion of Math 3 (or its equivalent elsewhere). Math 27 is normally not a sufficient prerequisite for Math 32. Required Text: Calculus: Early
Differentiating under an integral sign
CALIFORNIA INSTITUTE OF TECHNOLOGY Ma 2b KC Border Introduction to Probability and Statistics February 213 Differentiating under an integral sign In the derivation of Maximum Likelihood Estimators, or
The Relation between Two Present Value Formulae
James Ciecka, Gary Skoog, and Gerald Martin. 009. The Relation between Two Present Value Formulae. Journal of Legal Economics 15(): pp. 61-74. The Relation between Two Present Value Formulae James E. Ciecka,
Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi
Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the
Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1
Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 J. Zhang Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing
Section 4.4. Using the Fundamental Theorem. Difference Equations to Differential Equations
Difference Equations to Differential Equations Section 4.4 Using the Fundamental Theorem As we saw in Section 4.3, using the Fundamental Theorem of Integral Calculus reduces the problem of evaluating a
Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh
Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem
n k=1 k=0 1/k! = e. Example 6.4. The series 1/k 2 converges in R. Indeed, if s n = n then k=1 1/k, then s 2n s n = 1 n + 1 +...
6 Series We call a normed space (X, ) a Banach space provided that every Cauchy sequence (x n ) in X converges. For example, R with the norm = is an example of Banach space. Now let (x n ) be a sequence
Polynomials. Key Terms. quadratic equation parabola conjugates trinomial. polynomial coefficient degree monomial binomial GCF
Polynomials 5 5.1 Addition and Subtraction of Polynomials and Polynomial Functions 5.2 Multiplication of Polynomials 5.3 Division of Polynomials Problem Recognition Exercises Operations on Polynomials
Properties of BMO functions whose reciprocals are also BMO
Properties of BMO functions whose reciprocals are also BMO R. L. Johnson and C. J. Neugebauer The main result says that a non-negative BMO-function w, whose reciprocal is also in BMO, belongs to p> A p,and
CONTINUED FRACTIONS AND FACTORING. Niels Lauritzen
CONTINUED FRACTIONS AND FACTORING Niels Lauritzen ii NIELS LAURITZEN DEPARTMENT OF MATHEMATICAL SCIENCES UNIVERSITY OF AARHUS, DENMARK EMAIL: [email protected] URL: http://home.imf.au.dk/niels/ Contents
Version 1.0. General Certificate of Education (A-level) January 2012. Mathematics MPC4. (Specification 6360) Pure Core 4. Final.
Version.0 General Certificate of Education (A-level) January 0 Mathematics MPC (Specification 660) Pure Core Final Mark Scheme Mark schemes are prepared by the Principal Eaminer and considered, together
CHAPTER 5 Round-off errors
CHAPTER 5 Round-off errors In the two previous chapters we have seen how numbers can be represented in the binary numeral system and how this is the basis for representing numbers in computers. Since any
11 Ideals. 11.1 Revisiting Z
11 Ideals The presentation here is somewhat different than the text. In particular, the sections do not match up. We have seen issues with the failure of unique factorization already, e.g., Z[ 5] = O Q(
Vilnius University. Faculty of Mathematics and Informatics. Gintautas Bareikis
Vilnius University Faculty of Mathematics and Informatics Gintautas Bareikis CONTENT Chapter 1. SIMPLE AND COMPOUND INTEREST 1.1 Simple interest......................................................................
MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform
MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish
Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay
Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding
Lectures 5-6: Taylor Series
Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,
1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
Notes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
Tim Kerins. Leaving Certificate Honours Maths - Algebra. Tim Kerins. the date
Leaving Certificate Honours Maths - Algebra the date Chapter 1 Algebra This is an important portion of the course. As well as generally accounting for 2 3 questions in examination it is the basis for many
PRE-CALCULUS GRADE 12
PRE-CALCULUS GRADE 12 [C] Communication Trigonometry General Outcome: Develop trigonometric reasoning. A1. Demonstrate an understanding of angles in standard position, expressed in degrees and radians.
Unified Lecture # 4 Vectors
Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,
Inner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
ON CERTAIN DOUBLY INFINITE SYSTEMS OF CURVES ON A SURFACE
i93 c J SYSTEMS OF CURVES 695 ON CERTAIN DOUBLY INFINITE SYSTEMS OF CURVES ON A SURFACE BY C H. ROWE. Introduction. A system of co 2 curves having been given on a surface, let us consider a variable curvilinear
Gambling Systems and Multiplication-Invariant Measures
Gambling Systems and Multiplication-Invariant Measures by Jeffrey S. Rosenthal* and Peter O. Schwartz** (May 28, 997.. Introduction. This short paper describes a surprising connection between two previously
Mathematics Review for MS Finance Students
Mathematics Review for MS Finance Students Anthony M. Marino Department of Finance and Business Economics Marshall School of Business Lecture 1: Introductory Material Sets The Real Number System Functions,
Settling a Question about Pythagorean Triples
Settling a Question about Pythagorean Triples TOM VERHOEFF Department of Mathematics and Computing Science Eindhoven University of Technology P.O. Box 513, 5600 MB Eindhoven, The Netherlands E-Mail address:
correct-choice plot f(x) and draw an approximate tangent line at x = a and use geometry to estimate its slope comment The choices were:
Topic 1 2.1 mode MultipleSelection text How can we approximate the slope of the tangent line to f(x) at a point x = a? This is a Multiple selection question, so you need to check all of the answers that
Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur
Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Module No. #01 Lecture No. #15 Special Distributions-VI Today, I am going to introduce
vector calculus 2 Learning outcomes
29 ontents vector calculus 2 1. Line integrals involving vectors 2. Surface and volume integrals 3. Integral vector theorems Learning outcomes In this Workbook you will learn how to integrate functions
GREATEST COMMON DIVISOR
DEFINITION: GREATEST COMMON DIVISOR The greatest common divisor (gcd) of a and b, denoted by (a, b), is the largest common divisor of integers a and b. THEOREM: If a and b are nonzero integers, then their
3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes
Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general
