A review of ideas from polynomial rootfinding
Mark Richardson
September 2010

Contents

1 Introduction
  1.1 Polynomial basics
  1.2 Newton Iteration
2 Horner's Method
  2.1 Polynomial Evaluation
  2.2 Derivative evaluation
  2.3 Stability modifications
  2.4 Deflation
3 The Fast Fourier Transform
  3.1 Polynomial evaluation with the FFT
  3.2 Polynomial multiplication and division with the FFT
4 Problems with Newton iteration and deflation
5 The Lindsey-Fox method
  5.1 Sampling on a disc of radius r
  5.2 A description of the method
  5.3 Implementation
6 Conclusions
A Appendix: MATLAB code
  A.1 Implementation of the LF method
  A.2 Plotting the LF roots
1 Introduction

The problem is simply stated: given the n + 1 coefficients {a_0, a_1, ..., a_n} of a degree n polynomial, what are its roots? The question looks innocent enough, yet it has provoked hundreds of years of research and literally thousands of academic papers. In spite of this, we still do not know the best general method of solving it.

In this report, we examine some well-known and some relatively recent developments in the field. We predominantly examine Newton-style techniques, which generally involve obtaining some initial estimate of a root and iterating it to convergence. The methods we examine are for computational work in the monomial basis. Though care needs to be taken in certain circumstances, polynomials are in general easy to evaluate. Indeed, polynomials are particularly amenable to Newton-style rootfinding techniques since their derivatives are also easy to compute.

Polynomial rootfinding can be an ill-conditioned problem. That is, for some polynomials, small perturbations to the coefficients may lead to large perturbations in the roots. Consider the following equation, in an example taken from [9], for which there is a double root at z = 1:

    z^2 - 2z + 1 = 0.

By perturbing the first-order coefficient by a small amount ε, we find that the double root at z = 1 splits into two roots lying a distance of roughly √ε away. A relative change of order 10^{-10} in a single coefficient can therefore shift the roots by around 10^{-5}. Thus, small perturbations can lead to large errors. As the example demonstrates, ill-conditioning is inherent to the problem itself, and is entirely independent of the algorithm. However, we may also be unfortunate enough to suffer from computational difficulties in addition to ill-conditioning. Take for example a degree 1000 monic¹ polynomial.
If we were working in double precision and happened to suspect that a root of this polynomial were at z = 3, we would not easily be able to check this suspicion. This is because 3^1000 ≈ 10^477 is far greater than the largest double precision number (approximately 1.8 × 10^308), and we would encounter overflow. As a result of these issues, many algorithms for polynomial rootfinding are forced to restrict themselves to a certain class of polynomial. For example, the Lindsey-Fox method that we describe in Section 5 works only with the class of polynomials that have roots close to the unit disc. We begin with a standard result concerning the factorisation of polynomials.

¹ Monic means that the leading order coefficient is 1.
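Both phenomena are easy to reproduce. The following sketch (in Python rather than the report's MATLAB, purely for illustration; the perturbation size ε is chosen arbitrarily here, not taken from [9]) solves the perturbed quadratic directly and shows the √ε splitting of the double root, then demonstrates the overflow that blocks naive evaluation of 3^1000 in double precision:

```python
import math

# Perturb z^2 - 2z + 1 = 0 (double root at z = 1) to z^2 - (2 + eps)z + 1 = 0.
eps = 1e-10
b = -(2.0 + eps)
disc = math.sqrt(b * b - 4.0)           # discriminant of the perturbed quadratic
r1, r2 = (-b + disc) / 2.0, (-b - disc) / 2.0
print(r1 - 1.0, 1.0 - r2)               # each root moves by about sqrt(eps) = 1e-5

# Overflow: 3^1000 cannot be represented as a double.
try:
    3.0 ** 1000                         # exceeds the largest double, ~1.8e308
except OverflowError:
    print("overflow")
```

A relative change of 10^{-10} in one coefficient has moved the roots by about 10^{-5}, five orders of magnitude larger.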
1.1 Polynomial basics

Any degree n polynomial can be written

    P(z) = \sum_{k=0}^{n} a_k z^k = a_0 + a_1 z + a_2 z^2 + \cdots + a_{n-1} z^{n-1} + a_n z^n,    (1)

where a = (a_0, a_1, ..., a_n) is the (n + 1)-vector of polynomial coefficients. By factoring out the z-terms in sequence, the polynomial can also be written in nested form,

    P(z) = a_0 + z(a_1 + z(a_2 + z(\cdots + z(a_{n-1} + z a_n) \cdots ))).    (2)

In factored form this polynomial is

    P(z) = a_n \prod_{d=1}^{D} (z - z_d)^{E_d},    (3)

where D is the number of distinct zeros of P, and the E_d are positive integers with \sum_{d=1}^{D} E_d = n.

1.2 Newton Iteration

The basic Newton iteration for obtaining a root of f is derived from the Taylor series approximation. Given a function of a single complex variable f(z), three-times differentiable in a neighbourhood of z_0, we can write

    f(z) = f(z_0) + (z - z_0) f'(z_0) + \frac{1}{2!} (z - z_0)^2 f''(z_0) + O((z - z_0)^3).    (4)

At the location of a root, f(z) = 0. Discarding the terms of O((z - z_0)^2) and higher gives the iteration

    z^{[k+1]} = z^{[k]} - \frac{f(z^{[k]})}{f'(z^{[k]})}.

Assuming f'(z_0) ≠ 0, if Newton's method, as it is known, converges to a root of the function f, then it does so quadratically. This can be shown with the following straightforward argument. Given a root z = z_r, setting f(z_r) = 0 and z_0 = z^{[k]} in (4) gives, for some ξ ∈ (z^{[k]}, z_r),

    f(z^{[k]}) + (z_r - z^{[k]}) f'(z^{[k]}) = -\frac{1}{2!} (z_r - z^{[k]})^2 f''(ξ),

so that, dividing through by f'(z^{[k]}),

    (z_r - z^{[k]}) + \frac{f(z^{[k]})}{f'(z^{[k]})} = -\frac{1}{2} \frac{f''(ξ)}{f'(z^{[k]})} (z_r - z^{[k]})^2,

    z_r - z^{[k+1]} = -\frac{1}{2} \frac{f''(ξ)}{f'(z^{[k]})} (z_r - z^{[k]})^2.
Taking absolute values, we have

    |z_r - z^{[k+1]}| \le \frac{1}{2} \left| \frac{f''(ξ)}{f'(z^{[k]})} \right| |z_r - z^{[k]}|^2.

Thus, the error at the (k + 1)st step is bounded by some factor multiplied by the square of the error at the kth step, and the convergence is quadratic. A similar argument shows that when the multiplicity of the root is greater than 1, Newton's method converges only linearly. In the next section, we describe a method for evaluating polynomials and their derivatives that can be used as part of a Newton iteration scheme.

2 Horner's Method

2.1 Polynomial Evaluation

Suppose first that we wish to evaluate the polynomial P(z) given in (1) for some z = z_0. From the nested form of P(z) in (2), we can work outwards from the innermost bracket using the following recurrence relation, known as Horner's method:

    b_k = a_k + z_0 b_{k+1}, for k = n-1, ..., 0, with b_n = a_n.    (5)

The last iterate is the value of the polynomial at z_0, b_0 = P(z_0). This can be simply implemented with the Matlab code in Figure 1, for a vector of randomly chosen polynomial coefficients.

a = rand(1,11);      % P(z) = a_0 + a_1*z + ... + a_n*z^n
nn = length(a)-1;    % degree of polynomial
f = a(nn+1);         % initialise f = b_n = a_n
zz = 0.5;            % value of z required
for k = nn:-1:1
    f = a(k) + zz*f; % Horner iteration - P(zz)
end

Figure 1: Matlab code for Horner evaluation of a polynomial

2.2 Derivative evaluation

The intermediate b_k are the values in the successive brackets of (2). Moreover, the last n coefficients b_1, ..., b_n are also the coefficients of the quotient polynomial Q(z) of degree n - 1 obtained by dividing P(z) by (z - z_0). That is, if

    Q(z) = \sum_{k=0}^{n-1} b_{k+1} z^k = b_1 + b_2 z + b_3 z^2 + \cdots + b_n z^{n-1}    (6)
         = b_1 + z(b_2 + z(b_3 + z(\cdots + z(b_{n-1} + z b_n) \cdots ))),
then

    P(z) = (z - z_0) Q(z) + R,    (7)

where R = b_0 = P(z_0). This relationship can be verified by observing that

    (z - z_0) Q(z) + R = (z - z_0)(b_1 + b_2 z + b_3 z^2 + \cdots + b_n z^{n-1}) + b_0
                       = (b_1 z + b_2 z^2 + \cdots + b_n z^n) - z_0 (b_1 + b_2 z + \cdots + b_n z^{n-1}) + b_0
                       = (b_0 - z_0 b_1) + (b_1 - z_0 b_2) z + \cdots + (b_{n-1} - z_0 b_n) z^{n-1} + b_n z^n
                       = a_0 + a_1 z + a_2 z^2 + \cdots + a_{n-1} z^{n-1} + a_n z^n
                       = P(z).

The quotient property enables us to compute the derivative P'(z_0) at the same time as the function value P(z_0). To see this, differentiate (7),

    P'(z) = (z - z_0) Q'(z) + Q(z).

Evaluating this expression at z = z_0, we obtain P'(z_0) = Q(z_0). This suggests a very neat way of computing a function and its derivative at the same time. The Matlab code in Figure 2 achieves this.

a = rand(1,11);      % P(z) = a_0 + a_1*z + ... + a_n*z^n
nn = length(a)-1;    % degree of polynomial
f = a(nn+1);         % initialise f = b_n = a_n
df = f;              % initialise df = b_n
zz = 0.5;            % value of z required
for k = nn:-1:2      % Horner iteration steps
    f = a(k) + zz*f;
    df = f + zz*df;  % builds up Q(zz) = P'(zz)
end
f = a(1) + zz*f;     % final step gives f = P(zz)

Figure 2: Matlab code for Horner evaluation of a polynomial and its derivative

2.3 Stability modifications

The recurrence relation (5) that defines Horner's method is a first-order difference equation. An equation of this form is known to be stable for computing values with |z| ≤ 1, and unstable for |z| > 1. If using Horner's method within a Newton iteration, we should expect some of the computed values of z to lie within the unstable region of the complex plane. The instability of the Horner iteration in such cases can be controlled by writing the degree n polynomial P(z) in a different nested form,
    P(z) = a_0 + a_1 z + a_2 z^2 + \cdots + a_{n-1} z^{n-1} + a_n z^n
         = z^n [ z^{-1}(z^{-(n-1)} a_0 + z^{-(n-2)} a_1 + z^{-(n-3)} a_2 + \cdots + a_{n-1}) + a_n ]
         = z^n [ z^{-1}( \cdots (z^{-1}(z^{-1} a_0 + a_1) + a_2) \cdots + a_{n-1}) + a_n ].    (8)

The corresponding recurrence relation is

    β_{k+1} = z^{-1} β_k + a_{k+1}, k = 0, ..., n-1, β_0 = a_0,    (9)

with P(z) = z^n β_n. The derivative can also be computed stably for |z| > 1. Differentiation of (1) gives

    P'(z) = a_1 + 2 a_2 z + 3 a_3 z^2 + \cdots + (n-1) a_{n-1} z^{n-2} + n a_n z^{n-1}
          = z^{n-1} [ a_1 z^{-(n-1)} + 2 a_2 z^{-(n-2)} + 3 a_3 z^{-(n-3)} + \cdots + (n-1) a_{n-1} z^{-1} + n a_n ]
          = z^{n-1} [ z^{-1}( \cdots (z^{-1}(z^{-1} a_1 + 2 a_2) + 3 a_3) \cdots + (n-1) a_{n-1}) + n a_n ].

This time, the recurrence relation is

    δ_{k+1} = z^{-1} δ_k + (k+1) a_{k+1}, k = 1, ..., n-1, δ_1 = a_1,    (10)

with P'(z) = z^{n-1} δ_n. The function and derivative can be computed in the same loop, as in Figure 3.

a = rand(50,1);                       % P(z) = a_0 + a_1*z + ... + a_n*z^n
beta = a(1); delta = a(2);            % initialise beta = a_0, delta = a_1
zz = 0.5; zz_rec = zz^(-1);           % z and 1/z
beta = zz_rec*beta + a(2);            % compute first beta value
for k = 2:length(a)-1
    beta = zz_rec*beta + a(k+1);      % builds up P(zz)
    delta = zz_rec*delta + k*a(k+1);  % builds up P'(zz)
end
f = zz^(length(a)-1)*beta;            % final values: f = P(zz)
df = zz^(length(a)-2)*delta;          % df = P'(zz)

Figure 3: Matlab code for stable Horner evaluation of a polynomial and its derivative

2.4 Deflation

The term deflation refers to the process of computing the polynomial Q that is obtained upon dividing a polynomial P by the factor (z - z_0), where z_0 is a root of P. In the context of rootfinding, this can be important in ensuring that a Newton-type iteration does not converge to a root that has already been found. One strategy for computing roots is to deflate each time a root is found before restarting the process with the deflated polynomial.
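As a concrete illustration of this strategy, here is a small Python sketch (a translation of the report's MATLAB ideas, not the report's own code; the cubic is an arbitrary test case). Horner's rule supplies P(z), P'(z) and the quotient coefficients in one pass, Newton's method finds a root, and deflation removes it before the next search:

```python
def horner(a, z):
    """p(z), p'(z) and the deflated coefficients, for a = [a_0, ..., a_n]."""
    b = [a[-1]]                     # b_n = a_n
    for c in reversed(a[:-1]):
        b.append(c + z * b[-1])     # Horner recurrence b_k = a_k + z*b_{k+1}
    b.reverse()                     # b[0] = p(z); b[1:] are the quotient coeffs
    q = b[1:]
    dp = 0.0
    for c in reversed(q):           # p'(z) = q(z), evaluated by Horner again
        dp = c + z * dp
    return b[0], dp, q

def newton(a, z, steps=60):
    for _ in range(steps):
        p, dp, _ = horner(a, z)
        if dp == 0:
            break
        z = z - p / dp
    return z

# p(z) = (z - 1)(z - 2)(z - 3) = -6 + 11z - 6z^2 + z^3, ascending coefficients
a = [-6.0, 11.0, -6.0, 1.0]
roots = []
while len(a) > 2:                   # find a root, deflate, repeat
    z = newton(a, 0.0)
    roots.append(z)
    _, _, a = horner(a, z)          # coefficients of the deflated polynomial
roots.append(-a[0] / a[1])          # remaining linear factor a_0 + a_1*z
print(sorted(round(r, 6) for r in roots))   # [1.0, 2.0, 3.0]
```

On this well-separated example the strategy works cleanly; Section 4 discusses why it can fail for harder polynomials.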
Obtaining the coefficients of a deflated polynomial is a natural consequence of evaluating the polynomial using Horner's method. If z = z_0 is a root of the polynomial P(z), then clearly P(z_0) = 0 and thus the remainder R in (7) is zero. The quotient polynomial Q therefore contains the remaining roots of P. Achieving this is simply a case of storing the b_k coefficients as we pass through the Horner evaluation algorithm. The Matlab code in Figure 4 implements this idea.

a = [4 5 1];                 % P(z) = a_0 + a_1*z + ... + a_n*z^n
                             % (example coefficients: (z+1)(z+4) = 4 + 5z + z^2)
nn = length(a)-1;            % degree of polynomial
b = zeros(nn,1);             % initialise b
b(nn) = a(nn+1);             % initialise b_n = a_n
zz = -4;                     % known root
for k = nn:-1:2
    b(k-1) = a(k) + zz*b(k); % Horner deflation
end

Figure 4: Matlab code for Horner deflation of a polynomial with a known root

3 The Fast Fourier Transform

The Fast Fourier Transform (FFT) is a well-known technique for computing the Discrete Fourier Transform (DFT) of a set of data. The FFT became a mainstay of modern scientific computing after a famous paper by Cooley & Tukey in 1965. Unknown to them at the time, they had actually unearthed an algorithm originally discovered by Gauss in 1805! The FFT exploits structure in the DFT matrix in order to reduce the operation count from the O(n^2) complexity of a standard matrix-vector multiplication to a more palatable O(n log n). Computing a DFT is necessary in all sorts of situations, and the ability to do so efficiently has made a significant impact on the efficiency of modern algorithms. In this and the following section, we describe a few of the more arcane and unexpected settings related to polynomial rootfinding in which computation of the DFT crops up.

3.1 Polynomial evaluation with the FFT

Given a degree N polynomial, defined by N + 1 monomial coefficients, a DFT of the set of coefficients will evaluate the polynomial at the N + 1 roots of unity, i.e. at equispaced points on the unit circle.
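This correspondence is easy to check directly. The Python sketch below (pure Python for illustration; a naive O(n^2) DFT stands in for the FFT, using the same sign convention as MATLAB's fft) compares a DFT of the coefficients of an arbitrary cubic with direct evaluation at the corresponding roots of unity:

```python
import cmath

a = [1.0, 2.0, 3.0, 4.0]                     # p(z) = 1 + 2z + 3z^2 + 4z^3
M = len(a)

# Naive DFT of the coefficients: X_k = sum_m a_m exp(-2*pi*i*k*m/M)
X = [sum(a[m] * cmath.exp(-2j * cmath.pi * k * m / M) for m in range(M))
     for k in range(M)]

# Direct evaluation of p at the corresponding roots of unity
w = [cmath.exp(-2j * cmath.pi * k / M) for k in range(M)]
P = [sum(c * z**m for m, c in enumerate(a)) for z in w]

for xk, pk in zip(X, P):
    assert abs(xk - pk) < 1e-9               # DFT values == polynomial values
print("DFT of coefficients = values at roots of unity")
```

The two vectors agree to rounding error, as the algebra below makes precise.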
The inverse DFT goes the other way, from function values at roots of unity to polynomial coefficients. This relationship can be demonstrated by considering the definition of the DFT. For convenience, we shall work with the definition corresponding to Matlab's FFT. Given an M-vector x, both the DFT and the FFT are defined by
    X_k = \sum_{m=1}^{M} x_m \exp\left( -\frac{2\pi i}{M} (k-1)(m-1) \right), k = 1, ..., M.    (11)

Defining the M-th roots of unity ω_{k,M} = \exp(-\frac{2\pi i}{M}(k-1)), (11) can be written

    X_k = \sum_{m=1}^{M} x_m ω_{k,M}^{m-1} = P_{M-1}(ω_{k,M}), k = 1, ..., M.    (12)

This is a degree M - 1 polynomial in the monomial basis, defined by the coefficients x_k and evaluated at ω_{k,M}. Since Matlab does not permit indexing from zero, the first coefficient is x_1 (= a_0). Note that it is possible to evaluate such a polynomial at any number of roots of unity greater than or equal to M. This is made possible by zero-padding the vector of coefficients to the desired length. If for example we wished to evaluate P_{M-1} at 2M points on the unit circle, we would define χ_k = x_k for k = 1, ..., M and χ_k = 0 for k = M+1, ..., 2M, and compute

    Z_k = \sum_{m=1}^{2M} χ_m ω_{k,2M}^{m-1}, k = 1, ..., 2M.

The inverse FFT is defined by

    x_m = \frac{1}{M} \sum_{k=1}^{M} X_k \exp\left( \frac{2\pi i}{M} (k-1)(m-1) \right), m = 1, ..., M.    (13)

To verify this formula, we observe that the original coefficients x_m can be recovered by substituting (11) into (13),

    x_m = \frac{1}{M} \sum_{k=1}^{M} \left[ \sum_{μ=1}^{M} x_μ \exp\left( -\frac{2\pi i}{M} (k-1)(μ-1) \right) \right] \exp\left( \frac{2\pi i}{M} (k-1)(m-1) \right)
        = \frac{1}{M} \sum_{μ=1}^{M} x_μ \sum_{k=1}^{M} \left[ \exp\left( \frac{2\pi i}{M} (m-μ) \right) \right]^{k-1}.

Since m and μ are integers, the term β = \exp(\frac{2\pi i}{M}(m-μ)) is an M-th root of unity, and therefore β^M - 1 = 0. If m ≠ μ, then β ≠ 1 and the inner sum is a geometric series that vanishes,

    \sum_{k=1}^{M} β^{k-1} = 1 + β + β^2 + \cdots + β^{M-1} = \frac{β^M - 1}{β - 1} = 0.
Alternatively, if m = μ, then \sum_{k=1}^{M} β^{k-1} = \sum_{k=1}^{M} 1 = M. Thus, the only term remaining is x_m. This argument demonstrates that the DFT and its inverse are one-to-one transformations. Thus, if the FFT maps coefficients to function values, then the inverse FFT maps function values to coefficients.

3.2 Polynomial multiplication and division with the FFT

An interesting corollary of this idea is that polynomial multiplication and division (deflation) can also be performed with the FFT, in O(n log n) operations. Suppose we have two polynomials, one of degree P and the other of degree Q. We know that their product will be a polynomial of degree P + Q. If we were to compute a (P + Q + 1)-term DFT of each polynomial's coefficients, this would evaluate each polynomial at the P + Q + 1 roots of unity on the unit circle. The values of the product polynomial at the same P + Q + 1 roots of unity could then be determined by point-wise multiplication of the DFT vectors. Finally, an inverse DFT of the point-wise product in Fourier space would yield the coefficients of the product polynomial. An example is given in Figure 5, where a quadratic and a linear polynomial are multiplied. The output shows the coefficients of the product in ascending order.

a = [1 2 3];                         % coeffs in ascending order
b = [1 2];
deg = length(a)+length(b)-2;         % degree of product
c = ifft(fft(a,deg+1).*fft(b,deg+1)) % multiply in Fourier space

c =
     1     4     7     6

Figure 5: Matlab code for polynomial multiplication using the FFT

Polynomial deflation with the FFT is a simple extension of this idea. We simply need to reorder the computation so that we are dividing in Fourier space, rather than multiplying. An example is given in Figure 6, using the same polynomials as in the previous example.
c = [1 4 7 6];                 % coeffs in ascending order
a = [1 2 3];
lc = length(c);
deg = lc - length(a);          % degree of quotient
bb = ifft(fft(c)./fft(a,lc));  % divide in Fourier space
b = bb(1:deg+1)                % extract coefficients

b =
     1     2

Figure 6: Matlab code for polynomial division (deflation) using the FFT
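The same computations carry over to plain Python. The sketch below (a naive O(n^2) DFT/inverse-DFT pair standing in for the FFT, purely for illustration) multiplies the polynomials of Figures 5 and 6 by pointwise multiplication in Fourier space, then recovers one factor by pointwise division:

```python
import cmath

def dft(x, M):
    x = list(x) + [0.0] * (M - len(x))                 # zero-pad to length M
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / M)
                for m in range(M)) for k in range(M)]

def idft(X):
    M = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * m / M)
                for k in range(M)) / M for m in range(M)]

a = [1, 2, 3]                      # 1 + 2z + 3z^2
b = [1, 2]                         # 1 + 2z
M = len(a) + len(b) - 1            # product has degree 3, hence 4 coefficients

# Multiply: pointwise product of the values at the M-th roots of unity
c = [round(v.real) for v in idft([p * q for p, q in zip(dft(a, M), dft(b, M))])]
print(c)                           # [1, 4, 7, 6]

# Divide (deflate): pointwise quotient recovers the factor b
q = [round(v.real) for v in idft([p / q for p, q in zip(dft(c, M), dft(a, M))])]
print(q[:len(b)])                  # [1, 2]
```

The pointwise division is safe here because a has no roots at the 4th roots of unity; in general a root of the divisor on the sampling circle would make this step break down.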
4 Problems with Newton iteration and deflation

One strategy that we have outlined for computing the factorisation of a polynomial is to use Newton iteration combined with a deflation scheme. That is: first compute a root by making some initial guess in the complex plane and iterating to it with Newton's method; then deflate the root to obtain a quotient polynomial that, mathematically at least, contains the remaining roots. Repeating this until all the roots have been found is a typical strategy. However, the effectiveness of this technique is limited by two significant computational issues.

Firstly, given an arbitrary starting guess in the complex plane, Newton's method is most certainly not guaranteed to converge to a root. Newton's method can often fall into limit cycles that do not converge to a root, or worse still, can become entirely unstable. Good strategies for choosing a starting guess are hard to come by, and a typical imperfect one is to generate pseudo-random starting values using e.g. z_k^{[0]} = cos(k) + i sin(k) for k = 1, 2, .... Starting with any particular z_k, if Newton's method either does not converge within a given number of iterations, or if any of the updated values of z become too large, it may be necessary to reset the iteration with a new starting value, z_{k+1}. As may be expected, the complexity of the problem increases with the degree of the polynomial, and quite often this method can fail entirely.

A second problem is that of introducing and accumulating errors at the deflation stage. A computed root is accurate only to within the specified tolerance; it is merely a numerical approximation to the exact root. Therefore, when deflating the polynomial with this root, an error is introduced, leading to a perturbed quotient. These errors can accumulate, particularly for large degree polynomials where there are many stages of deflation.
One may think that this is not too much of a problem, since the perturbed roots obtained from deflated polynomials will be reasonable approximations to the true roots, which can then be polished against the original, undeflated polynomial. While this is true to some extent, we may end up in the unfortunate situation where two of our roots are close together, and polishing of an approximate root leads to a root that has already been computed, a limit cycle, or divergence.
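A minimal version of the restart strategy described above is easy to sketch in Python (an illustration, not the report's code; the test polynomial z^3 - 1 and the tolerances are arbitrary choices). It tries the pseudo-random starts z = cos(k) + i sin(k), abandons an iteration whose derivative vanishes or whose iterates blow up, and collects only distinct converged roots, with no deflation at all:

```python
import math

def p(z):  return z**3 - 1              # test polynomial; roots are the cube roots of unity
def dp(z): return 3 * z**2

roots = []
for k in range(1, 40):                  # pseudo-random starts z = cos(k) + i*sin(k)
    z = complex(math.cos(k), math.sin(k))
    converged = False
    for _ in range(100):
        d = dp(z)
        if d == 0 or abs(z) > 1e6:      # reset: vanishing derivative or runaway iterate
            break
        step = p(z) / d
        z -= step
        if abs(step) < 1e-12:           # Newton update has stagnated: converged
            converged = True
            break
    if converged and all(abs(z - r) > 1e-6 for r in roots):
        roots.append(z)                 # keep only roots not already found

print(len(roots))                       # the three cube roots of unity
```

With 39 starts spread around the unit circle all three basins of attraction are hit, but nothing guarantees this in general, which is precisely the unreliability discussed above.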
5 The Lindsey-Fox method

The Lindsey-Fox (LF) method, described in [1], is a relatively recent innovation in the field of polynomial rootfinding. The LF method takes a novel approach, utilising some of the FFT ideas discussed in Section 3. It claims an overall O(n^2) operation count, and has been reported in [1] to have accurately factored polynomials of very high degree.

The polynomial rootfinding problem is in general highly ill-conditioned. The exception to this is when the roots lie close to the unit disc and few roots are located close together. The LF method operates in exactly this setting. This may seem like quite a restrictive condition, and indeed it is; however, there are practical situations in which the method is applicable. For example, in the field of digital signal processing, sampling time series data very often results in such polynomials. So too does approximating the roots of a periodic function by finding the roots of a Fourier polynomial.

The LF method works almost exclusively with the original polynomial. Working on the assumption that the roots lie near the unit disc, the LF method uses the FFT to quickly sample concentric discs around the unit disc to check for roots. Once candidate roots have been found, they are stored and later polished against the original polynomial using an iteration scheme. Though we shall not discuss it in this report, the LF method actually uses Laguerre iteration as a more reliable and more quickly convergent alternative to Newton's method. Various checks are made, including scanning for duplicate roots. Finally, once all roots have been found, the polynomial is reconstructed from its factors, and a check is made confirming that the difference between the coefficients of the refactored and original polynomial is small.

5.1 Sampling on a disc of radius r

To sample the polynomial on concentric discs around the unit disc, the LF method uses a variation of the FFT techniques discussed in Section 3.
It is a simple extension of these ideas to sample a polynomial on a disc of arbitrary radius r. To see this, consider the formula (12), which evaluates a degree M - 1 polynomial at the (k, M)-th root of unity,

    P_{M-1}(ω_{k,M}) = \sum_{m=1}^{M} x_m ω_{k,M}^{m-1}.

If we instead wish to evaluate the polynomial at the same complex argument but for some arbitrary magnitude r, we must compute P_{M-1}(r ω_{k,M}). This is found indirectly,

    P_{M-1}(r ω_{k,M}) = \sum_{m=1}^{M} x_m (r ω_{k,M})^{m-1} = \sum_{m=1}^{M} (x_m r^{m-1}) ω_{k,M}^{m-1}.
Thus, evaluation of a polynomial on a disc of radius r can be achieved by first computing the pointwise product of the coefficient vector (x_1, x_2, ..., x_M) with a vector of ascending powers of r, (1, r, ..., r^{M-1}). Taking the FFT of this product computes the required function values. The implementation is simple, and an example can be found in Figure 7.

M = 11; x = rand(M,1); r = 0.9;   % polynomial coefficients and radius
vals_ud = fft(x.*(r.^(0:M-1)'));  % values on the disc of radius r

Figure 7: Matlab code for FFT evaluation of a polynomial on a disc of radius r

5.2 A description of the method

The main idea behind the Lindsey-Fox method is to sample the polynomial over a circular mesh around the unit disc. If the grid is sufficiently fine, then deductions about the likely locations of the roots can be made by using a corollary of the maximum modulus theorem. Figure 8(a) shows an example of the type of search grid used. Note that it is only necessary to search in one half of the complex plane, since any root that is found will have an associated complex conjugate which can be trivially obtained by negating the imaginary component.

Figure 8: The LF search grid around the lower half of the unit disc

Three concentric discs are sampled simultaneously. Each of the function values corresponding to the red crosses in Figure 8(a) is examined in turn. The absolute value of the function at the central node is compared to the absolute value of the function at each of the surrounding nodes, as in Figure 8(b). If this absolute value is less than the surrounding values, then the node is likely to be in the vicinity of a root. Once a particular set of concentric discs has been scanned, and any potential root candidates identified, the process restarts. The function values on a fourth concentric disc (inside the interior disc, say) are sampled, the previous middle disc becomes the new exterior disc, and the old interior disc becomes the new middle disc.
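A toy version of this ring search fits in a few lines of Python (a naive-DFT stand-in for the FFT; the polynomial, radii, and grid size are illustrative choices, not those of [1]). It samples p(z) = z^4 - 1 on three concentric rings and flags the middle-ring nodes whose modulus is smaller than all eight surrounding nodes:

```python
import cmath

a = [-1.0, 0.0, 0.0, 0.0, 1.0]             # p(z) = z^4 - 1, roots at 1, i, -1, -i
M = 64                                     # angular grid points per ring
radii = (0.95, 1.0, 1.05)                  # inner, middle, outer rings

def ring_vals(r):
    # |p| on the ring of radius r: DFT of the scaled coefficients a_m * r^m
    scaled = [c * r**m for m, c in enumerate(a)]
    return [abs(sum(scaled[m] * cmath.exp(-2j * cmath.pi * k * m / M)
                    for m in range(len(a)))) for k in range(M)]

inner, mid, outer = (ring_vals(r) for r in radii)

candidates = []
for k in range(M):                         # compare each middle node to its 8 neighbours
    lo, hi = (k - 1) % M, (k + 1) % M
    nbrs = [inner[lo], inner[k], inner[hi],
            outer[lo], outer[k], outer[hi], mid[lo], mid[hi]]
    if all(mid[k] < v for v in nbrs):
        candidates.append(cmath.exp(-2j * cmath.pi * k / M))

print(len(candidates))                     # one candidate near each root of z^4 - 1
```

Here the grid points land exactly on the four roots, so each root yields exactly one candidate; in general the flagged nodes are only starting guesses for the subsequent polishing iteration.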
Naturally, the number of initial approximations discovered increases as the search grid becomes finer. A finer search grid not only leads to more initial candidate roots, but these guesses are also likely to be better initial approximations. The polishing step is therefore much more likely to converge to the correct root, rather than to one that has already been determined, or worse, to fall into a limit cycle or diverge. Moreover, since most, if not all, of the roots are initially approximated by the grid search, there is no need for sequential deflation steps and the accumulated error they introduce. However, it is entirely possible that the grid search will miss some roots in the initial search. If this happens, deflation becomes necessary. Often, the number of roots missed by the grid search will be relatively small (less than 100, say). The companion matrix method can then be used to find approximations to the remaining roots, which can in turn be polished with a suitable iterative scheme.

5.3 Implementation

We provide an implementation of the kernel of the method in Appendix A.1. The code performs a search over just three concentric discs, but can easily be adapted to cycle over several radii. We also omit the iteration step. In Appendix A.2, we give some code which plots the initial guesses found by the LF search method against the roots computed by the companion matrix method. Figure 9 shows sample output for two degree 99 polynomials.

Figure 9: LF search approximations to the roots

The red circles are roots computed with the companion matrix, and the blue crosses are the LF search approximations, before iteration. Note that many, but nowhere near all, of the roots have been approximated. This is because the code only checks three discs. If more discs are added and the search grid made correspondingly finer, a majority, and perhaps all, of the roots are likely to be found.
6 Conclusions

Whilst the Lindsey-Fox method does present a fresh perspective on the subject of polynomial rootfinding, care needs to be taken with regard to the claims made by the engineers of the algorithm regarding its reliability and robustness. That is not to say that the recommendation of this report is that the LF method is unreliable. However, potential users should be aware that, as far as the author is aware, the robustness of the method has not so far been tested in a rigorous comparative setting.

It is important to try to put things in perspective. In particular, the methods described in this report represent a very small cross-section of the vast literature available on polynomial rootfinding. Indeed, a recent book by McNamee [7] on the subject included a reference list with more than 8000 entries! For each of the methods described, it is easy to come up with an example that will cause it to fail. This indeed is the perennial problem faced by rootfinding algorithms. Quite apart from the inherent ill-conditioning of many problems, it is notoriously difficult to design rootfinding algorithms that are both robust and stable. In the few cases where this is achieved, the cost is often a high operation count and computational complexity.

One class of techniques that has been omitted from our discussion so far falls into this category. The companion matrix method is behind the Matlab roots() command. A particular matrix is constructed from the coefficients of the polynomial, and the eigenvalues of this matrix turn out to be exactly the roots of the polynomial. Stable algorithms exist to compute eigenvalues, and typically an iterative scheme such as the QR algorithm is used. Applied to a dense n x n companion matrix, the QR method tallies up an O(n^3) operation count. More recently, attempts have been made to exploit structure in the companion matrix in order to enable the QR method to converge in O(n^2) operations [8].
When the rootfinding problem is restricted to finding roots on a region of the real axis, rather than in a region of the complex plane, the situation is somewhat nicer. Here, several successful attempts have been made to retain the robustness of the companion matrix method whilst reducing the operation count to O(n^2). In particular, both Boyd in 2002 [4] and Battles in his 2004 Oxford DPhil thesis [5] described how using a Chebyshev basis can lead to savings by allowing recursive subdivision of the approximation interval. This technique ensures that the size of the eigenvalue problem to be solved is never too large, and it is in fact the method used for computing roots in Chebfun [6].
References

[1] Sitton, G.; Burrus, C.S.; Fox, J.W.; Treitel, S., "Factoring very high degree polynomials", IEEE Signal Processing Magazine, Vol. 20(6) (2003)
[2] Burrus, C.S., "Horner's Method for Evaluating and Deflating Polynomials", unpublished notes
[3] Higham, N.J., Accuracy and Stability of Numerical Algorithms, SIAM, second edition (2002)
[4] Boyd, J.P., "Computing zeros on a real interval through Chebyshev expansion and polynomial rootfinding", SIAM Journal on Numerical Analysis, Vol. 40 (2002)
[5] Battles, Z., Numerical Linear Algebra for Continuous Functions, DPhil thesis, Oxford University Computing Laboratory (2004)
[6] The Chebfun website
[7] McNamee, J.M., Numerical Methods for Roots of Polynomials, Elsevier, first edition (2007)
[8] Van Barel, M.; Vandebril, R.; Van Dooren, P., "Implicit double shift QR-algorithm for companion matrices", technical report, Department of Computer Science, K.U. Leuven (2008)
[9] Datta, B.N., Numerical Linear Algebra and Applications, SIAM, second edition (2010)
A Appendix: MATLAB code

A.1 Implementation of the LF method

N = 100; c = rand(N,1);            % c_1 + c_2*x + ... + c_N*x^(N-1)
mm = 200;                          % nodes in lower half plane
if mm < N/2+1
    error('require mm >= N/2+1')
else
    M = 2*(mm-1);
end
rr = [0.98 1 1.02];                % radii of three concentric discs (example values)
exps = (0:N-1)';                   % exponential factors
ffta = fft(c.*(rr(1).^exps),M);    % evaluate polynomial on the rings
fftb = fft(c.*(rr(2).^exps),M);
fftc = fft(c.*(rr(3).^exps),M);
cra = abs([ffta(end) ; ffta(1:mm+1)]); % append end terms and take abs val
crb = abs([fftb(end) ; fftb(1:mm+1)]);
crc = abs([fftc(end) ; fftc(1:mm+1)]);
k = 2:mm+1;                        % indices of middle disc without end values
Bk = crb(k);                       % function values on the middle ring
% locate the local minima
locs = (Bk < cra(k-1)) & (Bk < cra(k)) & (Bk < cra(k+1))...
     & (Bk < crc(k-1)) & (Bk < crc(k)) & (Bk < crc(k+1))...
     & (Bk < crb(k-1)) & (Bk < crb(k+1));
indx = find(locs);                 % indices of local minima
lx = length(indx);                 % number of minima found
lf_rts = [];
if (lx)                            % convert indices into complex numbers
    theta = -2*pi/M*(indx-1);
    lf_rts = exp(1i*theta);
end

A.2 Plotting the LF roots

actual_rts = roots(flipud(c));     % compute roots with companion matrix
plot(lf_rts,'*'), hold on          % plot LF approximations,
plot(conj(lf_rts),'*')             % companion matrix roots,
plot(actual_rts,'or')              % and the three concentric discs
uc = exp(1i*linspace(0,2*pi,200)');
plot(uc*rr,'k'), hold off
axis equal, axis off
The Factor Theorem and a corollary of the Fundamental Theorem of Algebra
Math 421 Fall 2010 The Factor Theorem and a corollary of the Fundamental Theorem of Algebra 27 August 2010 Copyright 2006 2010 by Murray Eisenberg. All rights reserved. Prerequisites Mathematica Aside
Zeros of Polynomial Functions
Review: Synthetic Division Find (x 2-5x - 5x 3 + x 4 ) (5 + x). Factor Theorem Solve 2x 3-5x 2 + x + 2 =0 given that 2 is a zero of f(x) = 2x 3-5x 2 + x + 2. Zeros of Polynomial Functions Introduction
Continued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
Lecture 3: Finding integer solutions to systems of linear equations
Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture
SECTION 2.5: FINDING ZEROS OF POLYNOMIAL FUNCTIONS
SECTION 2.5: FINDING ZEROS OF POLYNOMIAL FUNCTIONS Assume f ( x) is a nonconstant polynomial with real coefficients written in standard form. PART A: TECHNIQUES WE HAVE ALREADY SEEN Refer to: Notes 1.31
EVALUATING A POLYNOMIAL
EVALUATING A POLYNOMIAL Consider having a polynomial p(x) =a 0 + a 1 x + a 2 x 2 + + a n x n which you need to evaluate for many values of x. How do you evaluate it? This may seem a strange question, but
The Fourth International DERIVE-TI92/89 Conference Liverpool, U.K., 12-15 July 2000. Derive 5: The Easiest... Just Got Better!
The Fourth International DERIVE-TI9/89 Conference Liverpool, U.K., -5 July 000 Derive 5: The Easiest... Just Got Better! Michel Beaudin École de technologie supérieure 00, rue Notre-Dame Ouest Montréal
Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 [email protected].
Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 [email protected] This paper contains a collection of 31 theorems, lemmas,
Zeros of a Polynomial Function
Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we
Final Year Project Progress Report. Frequency-Domain Adaptive Filtering. Myles Friel. Supervisor: Dr.Edward Jones
Final Year Project Progress Report Frequency-Domain Adaptive Filtering Myles Friel 01510401 Supervisor: Dr.Edward Jones Abstract The Final Year Project is an important part of the final year of the Electronic
Derive 5: The Easiest... Just Got Better!
Liverpool John Moores University, 1-15 July 000 Derive 5: The Easiest... Just Got Better! Michel Beaudin École de Technologie Supérieure, Canada Email; [email protected] 1. Introduction Engineering
Recursive Algorithms. Recursion. Motivating Example Factorial Recall the factorial function. { 1 if n = 1 n! = n (n 1)! if n > 1
Recursion Slides by Christopher M Bourke Instructor: Berthe Y Choueiry Fall 007 Computer Science & Engineering 35 Introduction to Discrete Mathematics Sections 71-7 of Rosen cse35@cseunledu Recursive Algorithms
6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives
6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise
Nonlinear Iterative Partial Least Squares Method
Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for
Zeros of Polynomial Functions
Zeros of Polynomial Functions The Rational Zero Theorem If f (x) = a n x n + a n-1 x n-1 + + a 1 x + a 0 has integer coefficients and p/q (where p/q is reduced) is a rational zero, then p is a factor of
By choosing to view this document, you agree to all provisions of the copyright laws protecting it.
This material is posted here with permission of the IEEE Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services Internal
MOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao [email protected]
Integer Polynomials June 9, 007 Yufei Zhao [email protected] We will use Z[x] to denote the ring of polynomials with integer coefficients. We begin by summarizing some of the common approaches used in dealing
CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA
We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical
MBA Jump Start Program
MBA Jump Start Program Module 2: Mathematics Thomas Gilbert Mathematics Module Online Appendix: Basic Mathematical Concepts 2 1 The Number Spectrum Generally we depict numbers increasing from left to right
3.2 Sources, Sinks, Saddles, and Spirals
3.2. Sources, Sinks, Saddles, and Spirals 6 3.2 Sources, Sinks, Saddles, and Spirals The pictures in this section show solutions to Ay 00 C By 0 C Cy D 0. These are linear equations with constant coefficients
MATH 52: MATLAB HOMEWORK 2
MATH 52: MATLAB HOMEWORK 2. omplex Numbers The prevalence of the complex numbers throughout the scientific world today belies their long and rocky history. Much like the negative numbers, complex numbers
BookTOC.txt. 1. Functions, Graphs, and Models. Algebra Toolbox. Sets. The Real Numbers. Inequalities and Intervals on the Real Number Line
College Algebra in Context with Applications for the Managerial, Life, and Social Sciences, 3rd Edition Ronald J. Harshbarger, University of South Carolina - Beaufort Lisa S. Yocco, Georgia Southern University
CALIBRATION OF A ROBUST 2 DOF PATH MONITORING TOOL FOR INDUSTRIAL ROBOTS AND MACHINE TOOLS BASED ON PARALLEL KINEMATICS
CALIBRATION OF A ROBUST 2 DOF PATH MONITORING TOOL FOR INDUSTRIAL ROBOTS AND MACHINE TOOLS BASED ON PARALLEL KINEMATICS E. Batzies 1, M. Kreutzer 1, D. Leucht 2, V. Welker 2, O. Zirn 1 1 Mechatronics Research
Higher Education Math Placement
Higher Education Math Placement Placement Assessment Problem Types 1. Whole Numbers, Fractions, and Decimals 1.1 Operations with Whole Numbers Addition with carry Subtraction with borrowing Multiplication
Partial Fractions. Combining fractions over a common denominator is a familiar operation from algebra:
Partial Fractions Combining fractions over a common denominator is a familiar operation from algebra: From the standpoint of integration, the left side of Equation 1 would be much easier to work with than
1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
4.3 Lagrange Approximation
206 CHAP. 4 INTERPOLATION AND POLYNOMIAL APPROXIMATION Lagrange Polynomial Approximation 4.3 Lagrange Approximation Interpolation means to estimate a missing function value by taking a weighted average
IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction
IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible
MATH BOOK OF PROBLEMS SERIES. New from Pearson Custom Publishing!
MATH BOOK OF PROBLEMS SERIES New from Pearson Custom Publishing! The Math Book of Problems Series is a database of math problems for the following courses: Pre-algebra Algebra Pre-calculus Calculus Statistics
AP Physics 1 and 2 Lab Investigations
AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks
FFT Algorithms. Chapter 6. Contents 6.1
Chapter 6 FFT Algorithms Contents Efficient computation of the DFT............................................ 6.2 Applications of FFT................................................... 6.6 Computing DFT
t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).
1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction
Algebra and Geometry Review (61 topics, no due date)
Course Name: Math 112 Credit Exam LA Tech University Course Code: ALEKS Course: Trigonometry Instructor: Course Dates: Course Content: 159 topics Algebra and Geometry Review (61 topics, no due date) Properties
Polynomials. Dr. philippe B. laval Kennesaw State University. April 3, 2005
Polynomials Dr. philippe B. laval Kennesaw State University April 3, 2005 Abstract Handout on polynomials. The following topics are covered: Polynomial Functions End behavior Extrema Polynomial Division
Notes on Factoring. MA 206 Kurt Bryan
The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor
RESULTANT AND DISCRIMINANT OF POLYNOMIALS
RESULTANT AND DISCRIMINANT OF POLYNOMIALS SVANTE JANSON Abstract. This is a collection of classical results about resultants and discriminants for polynomials, compiled mainly for my own use. All results
General Framework for an Iterative Solution of Ax b. Jacobi s Method
2.6 Iterative Solutions of Linear Systems 143 2.6 Iterative Solutions of Linear Systems Consistent linear systems in real life are solved in one of two ways: by direct calculation (using a matrix factorization,
A simple and fast algorithm for computing exponentials of power series
A simple and fast algorithm for computing exponentials of power series Alin Bostan Algorithms Project, INRIA Paris-Rocquencourt 7815 Le Chesnay Cedex France and Éric Schost ORCCA and Computer Science Department,
Estimated Pre Calculus Pacing Timeline
Estimated Pre Calculus Pacing Timeline 2010-2011 School Year The timeframes listed on this calendar are estimates based on a fifty-minute class period. You may need to adjust some of them from time to
Numerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 [email protected] 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)
Matrices and Polynomials
APPENDIX 9 Matrices and Polynomials he Multiplication of Polynomials Let α(z) =α 0 +α 1 z+α 2 z 2 + α p z p and y(z) =y 0 +y 1 z+y 2 z 2 + y n z n be two polynomials of degrees p and n respectively. hen,
Method To Solve Linear, Polynomial, or Absolute Value Inequalities:
Solving Inequalities An inequality is the result of replacing the = sign in an equation with ,, or. For example, 3x 2 < 7 is a linear inequality. We call it linear because if the < were replaced with
10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method
578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after
Metric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
Solution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
CHAPTER SIX IRREDUCIBILITY AND FACTORIZATION 1. BASIC DIVISIBILITY THEORY
January 10, 2010 CHAPTER SIX IRREDUCIBILITY AND FACTORIZATION 1. BASIC DIVISIBILITY THEORY The set of polynomials over a field F is a ring, whose structure shares with the ring of integers many characteristics.
A SIMPLE PROCEDURE FOR EXTRACTING QUADRATICS FROM A GIVEN ALGEBRAIC POLYNOMIAL.
A SIMPLE PROCEDURE FOR EXTRACTING QUADRATICS FROM A GIVEN ALGEBRAIC POLYNOMIAL. S.N.SIVANANDAM Professor and Head: Department of CSE PSG College of Technology Coimbatore, TamilNadu, India 641 004. [email protected],
The Method of Partial Fractions Math 121 Calculus II Spring 2015
Rational functions. as The Method of Partial Fractions Math 11 Calculus II Spring 015 Recall that a rational function is a quotient of two polynomials such f(x) g(x) = 3x5 + x 3 + 16x x 60. The method
South Carolina College- and Career-Ready (SCCCR) Pre-Calculus
South Carolina College- and Career-Ready (SCCCR) Pre-Calculus Key Concepts Arithmetic with Polynomials and Rational Expressions PC.AAPR.2 PC.AAPR.3 PC.AAPR.4 PC.AAPR.5 PC.AAPR.6 PC.AAPR.7 Standards Know
DRAFT. Further mathematics. GCE AS and A level subject content
Further mathematics GCE AS and A level subject content July 2014 s Introduction Purpose Aims and objectives Subject content Structure Background knowledge Overarching themes Use of technology Detailed
The Open University s repository of research publications and other research outputs
Open Research Online The Open University s repository of research publications and other research outputs The degree-diameter problem for circulant graphs of degree 8 and 9 Journal Article How to cite:
26. Determinants I. 1. Prehistory
26. Determinants I 26.1 Prehistory 26.2 Definitions 26.3 Uniqueness and other properties 26.4 Existence Both as a careful review of a more pedestrian viewpoint, and as a transition to a coordinate-independent
Computing divisors and common multiples of quasi-linear ordinary differential equations
Computing divisors and common multiples of quasi-linear ordinary differential equations Dima Grigoriev CNRS, Mathématiques, Université de Lille Villeneuve d Ascq, 59655, France [email protected]
Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally
Recurrence Relations Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally modeled by recurrence relations. A recurrence relation is an equation which
FOREWORD. Executive Secretary
FOREWORD The Botswana Examinations Council is pleased to authorise the publication of the revised assessment procedures for the Junior Certificate Examination programme. According to the Revised National
a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
1.5. Factorisation. Introduction. Prerequisites. Learning Outcomes. Learning Style
Factorisation 1.5 Introduction In Block 4 we showed the way in which brackets were removed from algebraic expressions. Factorisation, which can be considered as the reverse of this process, is dealt with
Notes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
A three point formula for finding roots of equations by the method of least squares
A three point formula for finding roots of equations by the method of least squares Ababu Teklemariam Tiruneh 1 ; William N. Ndlela 1 ; Stanley J. Nkambule 1 1 Lecturer, Department of Environmental Health
Alum Rock Elementary Union School District Algebra I Study Guide for Benchmark III
Alum Rock Elementary Union School District Algebra I Study Guide for Benchmark III Name Date Adding and Subtracting Polynomials Algebra Standard 10.0 A polynomial is a sum of one ore more monomials. Polynomial
COMPLEX NUMBERS AND SERIES. Contents
COMPLEX NUMBERS AND SERIES MIKE BOYLE Contents 1. Complex Numbers Definition 1.1. A complex number is a number z of the form z = x + iy, where x and y are real numbers, and i is another number such that
G.A. Pavliotis. Department of Mathematics. Imperial College London
EE1 MATHEMATICS NUMERICAL METHODS G.A. Pavliotis Department of Mathematics Imperial College London 1. Numerical solution of nonlinear equations (iterative processes). 2. Numerical evaluation of integrals.
AC 2012-4561: MATHEMATICAL MODELING AND SIMULATION US- ING LABVIEW AND LABVIEW MATHSCRIPT
AC 2012-4561: MATHEMATICAL MODELING AND SIMULATION US- ING LABVIEW AND LABVIEW MATHSCRIPT Dr. Nikunja Swain, South Carolina State University Nikunja Swain is a professor in the College of Science, Mathematics,
Administrative - Master Syllabus COVER SHEET
Administrative - Master Syllabus COVER SHEET Purpose: It is the intention of this to provide a general description of the course, outline the required elements of the course and to lay the foundation for
Algebra I Credit Recovery
Algebra I Credit Recovery COURSE DESCRIPTION: The purpose of this course is to allow the student to gain mastery in working with and evaluating mathematical expressions, equations, graphs, and other topics,
Algebra 2 Year-at-a-Glance Leander ISD 2007-08. 1st Six Weeks 2nd Six Weeks 3rd Six Weeks 4th Six Weeks 5th Six Weeks 6th Six Weeks
Algebra 2 Year-at-a-Glance Leander ISD 2007-08 1st Six Weeks 2nd Six Weeks 3rd Six Weeks 4th Six Weeks 5th Six Weeks 6th Six Weeks Essential Unit of Study 6 weeks 3 weeks 3 weeks 6 weeks 3 weeks 3 weeks
6. Define log(z) so that π < I log(z) π. Discuss the identities e log(z) = z and log(e w ) = w.
hapter omplex integration. omplex number quiz. Simplify 3+4i. 2. Simplify 3+4i. 3. Find the cube roots of. 4. Here are some identities for complex conjugate. Which ones need correction? z + w = z + w,
Mean value theorem, Taylors Theorem, Maxima and Minima.
MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.
(Quasi-)Newton methods
(Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting
Section 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
Machine Learning and Data Mining. Regression Problem. (adapted from) Prof. Alexander Ihler
Machine Learning and Data Mining Regression Problem (adapted from) Prof. Alexander Ihler Overview Regression Problem Definition and define parameters ϴ. Prediction using ϴ as parameters Measure the error
New Higher-Proposed Order-Combined Approach. Block 1. Lines 1.1 App. Vectors 1.4 EF. Quadratics 1.1 RC. Polynomials 1.1 RC
New Higher-Proposed Order-Combined Approach Block 1 Lines 1.1 App Vectors 1.4 EF Quadratics 1.1 RC Polynomials 1.1 RC Differentiation-but not optimisation 1.3 RC Block 2 Functions and graphs 1.3 EF Logs
(!' ) "' # "*# "!(!' +,
MATLAB is a numeric computation software for engineering and scientific calculations. The name MATLAB stands for MATRIX LABORATORY. MATLAB is primarily a tool for matrix computations. It was developed
Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.
This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra
Dynamic Eigenvalues for Scalar Linear Time-Varying Systems
Dynamic Eigenvalues for Scalar Linear Time-Varying Systems P. van der Kloet and F.L. Neerhoff Department of Electrical Engineering Delft University of Technology Mekelweg 4 2628 CD Delft The Netherlands
How To Know If A Domain Is Unique In An Octempo (Euclidean) Or Not (Ecl)
Subsets of Euclidean domains possessing a unique division algorithm Andrew D. Lewis 2009/03/16 Abstract Subsets of a Euclidean domain are characterised with the following objectives: (1) ensuring uniqueness
FACTORING LARGE NUMBERS, A GREAT WAY TO SPEND A BIRTHDAY
FACTORING LARGE NUMBERS, A GREAT WAY TO SPEND A BIRTHDAY LINDSEY R. BOSKO I would like to acknowledge the assistance of Dr. Michael Singer. His guidance and feedback were instrumental in completing this
