Factoring Ultra-High Degree Polynomials


Abstract

The FFT is very useful in polynomial factorization. Two factorization methods, grid search and FFT argument selection, are built on the FFT. Both methods have factored sixty 150,000-degree real random coefficient polynomials and one 1,000,000-degree polynomial. Polynomial multiplication to find the polynomial associated with a set of roots should usually be done in the Fourier domain. It is often important that simultaneous deflation, i.e., polynomial division of a large number of roots, be done in the Fourier domain. For any radius, r, the FFT and a winding number function can find the approximate number of roots with magnitude less than r. Many ill-conditioned polynomials are factors of higher degree well-conditioned polynomials. It is highly desirable to work with well-conditioned polynomials, and the amplitude spectrum of an ill-conditioned polynomial may assist in finding a well-conditioned parent. The current practice in deflation is flawed but easily repaired. This paper presents improved algorithms for root polishing and polynomial evaluation.

James W. Fox, Gary A. Sitton, Joe Pat Lindsey, C. Sidney Burrus, and Sven Treitel
Rice University, and TriDecon Inc.
November 26, 2003

Contents

1 Introduction
1.1 Overview
1.2 Credibility
2 Four New Factorization Methods
2.1 The Grid Search Method
2.1.1 The Minimum Modulus Theorem and Grid Searching
2.1.2 Lindsey's Idea
2.1.3 Early Optimizations to the Grid Search Method
2.1.4 Later Optimizations to the Grid Search Method
2.1.5 Future Optimizations to the Grid Search Method
2.1.6 Advantages of the Grid Search Method
2.2 The Argument Randomization Method
2.2.1 Deflation Order is Important
2.2.2 Burrus' Idea
2.2.3 Wilkinson's Observations
2.3 Insights From the Amplitude Spectrum
2.3.1 The Spectral Significance of Deflating One Root
2.3.2 An Interesting Correlation with the Amplitude Spectrum
2.4 The FFT Argument Selection Method
2.4.1 Fox's Idea
2.4.2 A Sample Implementation
2.5 The Coefficient Pre-Whitening Method
2.5.1 A Crude Whitening Algorithm

2.5.2 An Encouraging Example
2.5.3 A Cautionary Example
2.6 Timing Comparisons
3 Applications
3.1 Ancient History
3.2 Modern Applications
3.3 Accurate phase unwrapping
3.4 Finding the minimum phase equivalent
3.5 Test Lindsey's suggestion for estimating phase rotation
3.6 Test Tanner's seismic processing ideas
3.7 Who knows what else?
References

1 Introduction

1.1 Overview

This is a long article, long enough to justify a table of contents. The patient reader will be rewarded with several new, easy to understand, and yet powerful ideas relating to polynomial factorization, unfactoring, and evaluation. There is one new idea for polynomial evaluation, one for multiplication, two for deflation, two for root polishing, and four new algorithms for ultra-high degree polynomial factorization. Programs for three of the four factorization methods, lroots, broots, and froots, are named for Lindsey, Burrus, and Fox, who conceived the main ideas of each method. Matlab or C code for every new program mentioned in this paper can be found at [1]. It is interesting how often the FFT proves to be invaluable.

Many papers focus on the great difficulties that can occur when factoring polynomials. This paper will be chiefly concerned with showing that new or improved techniques allow ultra-high degree well-conditioned polynomials to be factored. Over a 25-year period Wilkinson wrote extensively about polynomials. In his classic book he warned darkly:

A tendency to underestimate the difficulties involved in working with general polynomials is perhaps a consequence of one's experience in classical analysis. There it is natural to regard a polynomial as a very desirable function since it is bounded in any finite region and has derivatives of all orders. In numerical work, however, polynomials having coefficients which are more or less arbitrary are tiresome to deal with by entirely automatic procedures [7].
His love-hate relationship with polynomials is intimated by the title of his last article, The Perfidious Polynomial, which was published two years before his death at age 67. He concluded that work with:

(Footnote: Matlab is a registered trademark of The MathWorks Inc.)

The title of this article reflects my feelings in early encounters with solving polynomial equations. I was astonished, indeed affronted, by my experiences with simple polynomials [8].

Ultra-high degree polynomial factorization is widely thought to be impossible. Numerical Recipes [9] defines high degree as > 19. In general this may be true. Wilkinson's 20-degree polynomial [10], with roots -1, -2, ..., -20, is highly ill-conditioned and difficult to factor accurately. Ill-conditioned means small changes in the coefficients produce large changes in the roots. For example, adding 2^-23 = 1.2x10^-7 to the coefficient of z^19 in the Wilkinson polynomial causes huge changes in the roots. Before the change all roots are real integers. After the change there are 5 pairs of complex conjugate roots and they are not close to any real integers.

Nonetheless, evidence will be presented that a large and interesting class of polynomials can be factored to an extremely high polynomial degree. This includes speech data, seismic data, random coefficient polynomials, and many others. It does not include polynomials with coefficients having a large dynamic range that are asymptotic to zero at either end of the coefficient sequence. Such polynomials are often moderately or severely ill-conditioned. Examples of difficult polynomials are digital filters with a large dynamic range, polynomials with many multiple or near-multiple roots like y = (x + 1)^n, which has the bell-curve shape, and the Wilkinson polynomial, which is asymptotic to zero at the highest coefficients and has a dynamic range of 1.4x10^19.

We show that two factorization methods, grid search and FFT argument selection, can routinely factor 150,000-degree real random coefficient polynomials, RRCP. It takes almost 5 hours per factorization (on a PC), so they have not been tested on thousands of polynomials. However, sixty RRCP have been factored by both methods.
Thirty had coefficients uniformly distributed between 0 and 1, while thirty had coefficients uniformly distributed between -1 and +1. The FFT argument selection method has even factored one 250,000-degree RRCP. The coefficients and roots of this polynomial can be found at [1].

The new factorization methods have been primarily tested on synthetic seismic, strictly positive RRCP, and mean zero RRCP. The primary reason is that these classes of polynomials are extremely well-conditioned. Unlike the Wilkinson polynomial, adding mean-zero random values with magnitudes less than 1x10^-7 to every coefficient of a normalized 1,000-degree RRCP changes almost all roots by 5x10^-7 or less. Section ?? explains why random coefficient polynomials are so well-conditioned. A second reason that they have been the focus of testing is that section 2.5 will show that some ill-conditioned non-random coefficient polynomials can be factored by factoring a related polynomial that is close to random coefficient.

Although two of the new methods can factor 150,000-degree well-conditioned polynomials, none of the four can currently factor the Wilkinson 20-degree polynomial because some of its roots can seemingly only be determined to 4-digit accuracy. Being able to determine the roots to high accuracy is almost essential to the success of high degree polynomial factorization. Thus, the new methods are not the ultimate solution for general polynomial factorization problems, but they are significant steps forward. They show that some

polynomials of wide interest, z-transforms of seismic data and speech data, can be factored to an extraordinarily high degree. Section 2.5 will suggest a method that may allow these methods to be applied to an even wider class of ill-conditioned polynomials. The success of these methods would not have been possible without improvements in related areas such as deflation and root polishing.

1.2 Credibility

Around 1989 Lindsey lectured about the work he and Fox had done with polynomial factorization. He claimed to often be able to factor 400-degree seismic polynomials. A few members of the audience, including a former editor of Geophysics Magazine, were incredulous. The new claim in this article, that some 150,000-degree polynomials can be factored, does seem incredible and requires demonstration. Six tests are offered as evidence. They also introduce some of the new ideas.

a) The two methods test. Each of the sixty 150,000-degree polynomials was factored by two different methods, lroots and froots. The maximum difference between corresponding roots was 4.0x10^-16. These two methods gave virtually identical answers even though they used radically different algorithms. This is powerful validation of both methods.

b) The unfactor test. Three of the factorization methods have the option to unfactor the roots, i.e., find the polynomial that contains them, and compare it with the original polynomial. Both are normalized to have maximum absolute value 1.0 and then the maximum error is computed. For the sixty polynomials, the maximum unfactor error was x10. For the 250,000-degree polynomial it was 6.1x10. However, having a small error in the unfactored polynomial is not sufficient evidence that the alleged roots are correct. The following is an important cautionary example. Sitton provided a smooth looking 270-degree polynomial that was asymptotic to zero at both ends and had a dynamic range of 5x. It was very ill-conditioned.
Attempts to factor it were made using Matlab's routine roots and Fox's routines lroots, froots, and broots.

Factorization Method               Maximum Error in the Unfactored Polynomial
Matlab's roots()                   4.0x10^-14
Lroots with root polishing         2.6x10^-10
Froots without root polishing      8.6x10^-15
Broots without root polishing      5.6x10^-15

The first two methods provided essentially the same answer. The maximum difference between corresponding roots was 8.0x10^-8. However, there was little correlation between the common answer provided by the first two methods and that provided by the last two

methods. Furthermore, although the last two methods had several roots in common, they had several that were significantly different. There were three essentially different sets of alleged roots and each passed the unfactor test.

c) The polish test. A second test is necessary to complement the unfactor test. If each alleged root is polished, e.g. using either Newton's or Laguerre's method, it should not move far. When the roots provided by the last two methods in the above table were polished, they collapsed to a subset of the roots provided by the first two methods. They failed the polish test. Many roots polished to the same value, providing the illusion of roots with high multiplicity. Even though the alleged roots provided by the last two methods had the smallest unfactor error, as a set of roots they were badly in error. The reason this could happen was that the polynomial was severely ill-conditioned. Tiny changes in the coefficients caused large changes to the roots, just like the Wilkinson polynomial. Despite the smaller unfactor error, it is not clear that Matlab's roots in the above table were better than the lroots roots. The last two entries in the above table show that having a small unfactor error can be meaningless. When Matlab's roots were subsequently polished and unfactored, the error was 9.3x10^-10, similar to lroots. It is believed that if a set of alleged roots passes tests b) and c), they should be accepted as being approximately correct. The 250,000-degree polynomial passed these two tests. The sixty 150,000-degree polynomials passed tests a), b), and c).

d) The final polishing correction test. Four different root-polishing routines will be discussed below. They are significant variants of Newton's and Laguerre's methods. The sequence of corrections applied after starting near a well-conditioned root quickly decreases to a value < 1.4x10^-14, and the polisher terminates after one more iteration. The final correction is usually < 9x10^-17.
However, the corrections applied to an ill-conditioned root eventually decrease to some value like 1x10^-10 but never decrease further, even though 10,000 iterations are attempted. Therefore each polisher terminates after 51 iterations unless instructed to do more. The polishers optionally return the final correction applied to each root, and this is a useful approximation of the root's accuracy. In this paper, ill-conditioned usually means the final polishing correction is large. The maximum final correction to any root of one 150,000-degree polynomial was 9.998x10^-16. Thus, its roots were probably accurate to almost 16 digits. There is a remarkable correlation between each root's final polishing correction as a function of the root's argument and the amplitude spectrum. This will justify two new factorization methods and explain why certain roots are well/ill conditioned.

e) The inner part test. If a high degree polynomial is evaluated near a large magnitude root, it will overflow. Standard Newton's or Laguerre's method can not be applied because the polynomial can not be evaluated. This problem is solved by a new concept, the inner part of the polynomial evaluation. The inner part of a polynomial evaluation is bounded above by the sum of the absolute values of the coefficients. It never overflows. Mathematically, the inner part is zero at precisely the same locations where the polynomial is zero. The new root polishing methods never overflow because they find zeros of the inner part instead of zeros

of the polynomial. For one 150,000-degree polynomial the inner parts of the polynomial evaluations were computed for every root. The maximum absolute value was 6.3x10^-9. However, this test is also not conclusive. The inner parts were computed for the false root sets provided by froots and broots in the above table. The maximum absolute value was x10, so they passed this test. The reason is that the inner part becomes asymptotic to the constant term at 0, and it becomes asymptotic to the highest order term at infinity. The first and last coefficients of this unusual polynomial were 2x10^-45, so the inner part was small everywhere except very near the unit circle.

2 Four New Factorization Methods

2.1 The Grid Search Method

2.1.1 The Minimum Modulus Theorem and Grid Searching

The Minimum Modulus Theorem [11] implies that if f(z) is a non-constant complex analytic function and |f(z)| has a relative minimum inside an open disk, then the relative minimum must be a zero. Suppose such an f(z) is evaluated on a rectangular grid and the grid size is small enough to capture the function's oscillations. Every point in a rectangular grid is the center of a 3x3 subset of neighboring grid points - eight points surround the center point. If |f(z)| is computed at these nine points and is smallest at the center, then it is reasonable to conclude there is a relative minimum near the center and strictly inside the rectangle containing the outer eight points. By the Minimum Modulus Theorem we can expect that if Newton's method is begun with the center point as the initial guess, it will likely converge to a zero that is inside the rectangle formed by the outer eight points. This is the idea behind grid searching for roots.

2.1.2 Lindsey's Idea

Evaluating a high degree polynomial on a rectangular grid that is small enough to separate the roots sounds like an extremely slow algorithm. However, Lindsey proposed a wonderfully efficient way to do this.
Building on the work of others, he noticed that the FFT could be used to quickly evaluate the polynomial on a grid that is rectangular when viewed in polar coordinates, and this permits efficient grid searching for roots. Multiplying the polynomial coefficients by an exponential and computing the FFT can efficiently evaluate a polynomial at n regularly spaced points on any circle centered at the origin. Suppose f(z) is any polynomial. Pad it with zeros at the high coefficient end until the coefficient sequence has length n, which is a power of 2. Padding with zeros at the high end does not change the polynomial's roots. Computing the radix-2 FFT of these coefficients produces the same values as evaluating the polynomial at all n-th roots of unity [12]. If the polynomial coefficients are multiplied by r^n, r^(n-1), ..., r^2, r, 1, a new polynomial, g(z), is formed which satisfies g(z) = r^n f(z/r). Computing the FFT of g's coefficients is the same as evaluating g(z) at all n-th roots of unity, which (apart from the harmless factor r^n) is the same as evaluating f(z) at n regularly spaced points on the circle with radius 1/r, centered at the origin.
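The two ideas above, the 3x3 minimum test of section 2.1.1 and evaluation on circles by scaling the coefficients before a transform, can be sketched in a few lines. The following pure-Python illustration uses a naive O(n^2) DFT in place of the radix-2 FFT, and the function names (eval_on_circle, local_minima, newton_polish) are ours, not those of lroots:

```python
import cmath

def dft(a):
    # Naive O(n^2) DFT, standing in for the radix-2 FFT used in the paper.
    n = len(a)
    return [sum(a[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def eval_on_circle(coeffs, radius, n):
    # Evaluate f (constant-term-first coefficients) at n equally spaced points
    # on the circle |z| = radius: scale coefficient k by radius**k, then DFT.
    a = list(coeffs) + [0] * (n - len(coeffs))      # zero-pad the high end
    b = [a[k] * radius ** k for k in range(n)]
    return dft(b)                                   # f(radius * exp(-2*pi*i*j/n))

def local_minima(coeffs, radii, n):
    # Grid search: flag any grid point whose |f| beats all 8 polar neighbors.
    rows = [[abs(v) for v in eval_on_circle(coeffs, r, n)] for r in radii]
    hits = []
    for i in range(1, len(radii) - 1):
        for j in range(n):
            nbrs = [rows[i + di][(j + dj) % n] for di in (-1, 0, 1)
                    for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            if all(rows[i][j] < v for v in nbrs):
                hits.append(radii[i] * cmath.exp(-2j * cmath.pi * j / n))
    return hits

def newton_polish(coeffs, z, iters=30):
    # Plain Newton's method started from a grid-search hit.
    for _ in range(iters):
        f, fp = 0, 0
        for c in reversed(coeffs):        # Horner for f and f' together
            fp = fp * z + f
            f = f * z + c
        if fp == 0:
            break
        z = z - f / fp
    return z
```

For a polynomial such as z^4 - 1, the hits returned by local_minima land next to the four roots on the unit circle, and one Newton polish per hit finishes the job. A real implementation would use an FFT, the three-circle rolling buffer described below, and the radius-selection formulas of section 2.1.4.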

Lindsey was only interested in factoring seismic data. However, the plot of the roots of a 100-degree seismic trace is visually indistinguishable from the roots of a 100-degree sample of speech or a 100-degree random coefficient polynomial. Arnold [13] proved that almost all roots of random coefficient polynomials are, statistically speaking, uniformly distributed in the angular direction and contained in a narrow annulus about the unit circle. The higher the degree, the thinner the annulus. Without being precise as to the meaning of narrow, let us refer to any polynomial with a similar root distribution as an annular polynomial. Section ?? will give evidence that a very large class of polynomials is annular, not just random coefficient polynomials.

[Figures 1 and 2]

Figure 1 shows the roots of a 100-degree RRCP in rectangular coordinates; cf. Arnold's theorem. Figure 2 shows the upper half-plane roots of the same polynomial in polar coordinates. These two plots illustrate why a rectangular grid in polar coordinates is ideally suited to isolating the roots of annular polynomials. The horizontal line at 1 is the unit circle restricted to the upper half-plane, the points on or above the real axis.

If a polynomial f(z) has real coefficients, it is only necessary to find the roots in the upper half-plane. For suppose f(z) = 0. Taking the complex conjugate of both sides and using the fact that the coefficients are real, one sees that f(conj(z)) = 0. Thus if z is an upper half-plane root of a real coefficient polynomial, then conj(z) must also be a root, and it will lie in the lower half-plane.

The size of the grid cells is controlled differently in the radial and the angular directions. Since the FFT can be used to evaluate the polynomial at n regularly spaced points on any circle centered at the origin, the radial size of the grid cells can be precisely controlled. Zero padding controls the angular size of the grid cells. Suppose the coefficients have been padded to length 2n.
After computing the FFT, the grid cells will be half as large in the angular direction as they would have been if the coefficients had only been padded to length n.

2.1.3 Early Optimizations to the Grid Search Method

There is a version of the FFT that is optimized for real sequences. It is twice as fast as the complex FFT because it only evaluates the polynomial at the n-th roots of unity in the upper half-plane [14]. Using this real FFT results in a grid search restricted to the upper half-plane.

As mentioned above, if the polynomial has real coefficients, it suffices to find the upper half-plane roots; complex conjugation will supply the rest. If the real FFT is used with real coefficient polynomials, the grid search will be twice as fast.

Creating a two-dimensional array of f(z) at every grid cell would require a substantial amount of memory. Fortunately, this is unnecessary. Instead, use the FFT to compute f(z) for three consecutive radii. Check each point on the center circle to find where |f(z)| is less than at the surrounding eight points. Then reuse the space corresponding to these three circles for the next three circles. Furthermore, values for two of the next three circles are the same and do not need to be recomputed.

2.1.4 Later Optimizations to the Grid Search Method

Fox implemented Lindsey's idea in FORTRAN and the results were reported by Lindsey [15]. At that time the program could find almost all roots of 400-degree seismic polynomials. However, it often failed to find about 5 roots. Fox continued to work on the algorithm for another decade, and now lroots.m seems to consistently find all roots of 150,000-degree RRCP and synthetic seismic polynomials. The following discoveries were key to its success. The first two are additional examples of the importance of the FFT for ultra-high degree polynomial factorization.

a) Grid search typically finds almost all, but not all, the roots. These roots are unfactored and the resulting polynomial is divided into the original polynomial. The quotient contains all the missing roots. It is very important that this polynomial division be done in the Fourier domain, not the time domain. Section ?? discusses the reason.

b) Unfactoring a set of roots means multiplying together the corresponding linear factors to find the polynomial that contains them. Matlab's routine, poly, does the multiplications in the time domain. However, doing them in the Fourier domain seems to work measurably better. See function unfactor [1].
However, this rule is not absolute; it is better to unfactor the roots of the Wilkinson polynomial in the time domain. Lang [16] found one reference, Nachtigal [17], that discussed the errors that can occur when unfactoring a set of roots. If the roots are sorted by increasing argument, an unfactorization in the time domain can have large errors. Matlab does not seem to take this into account. For example, if n has been given a value, the following Matlab expression creates and unfactors the n-th roots of unity. The result should be [1,0,0,...,0,-1], which represents the polynomial x^n - 1.

y = poly(exp(i*(2*pi/n)*(0:n-1)));    (1)

For n=16, the maximum error in any coefficient is about 2.7x10^-16. For n=128 the maximum error is noticeably larger. For n=1,500 the error is Inf, infinity.
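The failure just described is easy to reproduce. The following pure-Python stand-in for Matlab's poly (the function name poly_from_roots is ours) multiplies the linear factors sequentially in the time domain, in the given order:

```python
import cmath

def poly_from_roots(roots):
    # Sequential time-domain multiplication of the linear factors (z - r),
    # in the order given, mirroring Matlab's poly(); highest-degree term first.
    c = [1.0 + 0.0j]
    for r in roots:
        c = [c[0]] + [c[k] - r * c[k - 1] for k in range(1, len(c))] + [-r * c[-1]]
    return c

# Roots of unity sorted by increasing argument, exactly as in equation (1):
n = 16
c = poly_from_roots([cmath.exp(2j * cmath.pi * k / n) for k in range(n)])
# c is close to the coefficients of z^16 - 1
```

With the 16th roots of unity in sorted order the result is close to z^16 - 1, but by n = 1,500 the intermediate coefficient growth is catastrophic, matching the Inf reported above.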

If the roots are first randomized by argument, the results are usually very good. The function permutevdc [1] uses the base-2 Van der Corput sequence to permute an arbitrary array. If an array is first sorted by argument and then permuted, it will be randomized by argument. For n=1,500,

y = poly(permutevdc(exp(i*(2*pi/n)*(0:n-1))));    (2)

has maximum error of about 1.8x10^-16, not infinity. While attempting to factor 100-degree polynomials, Lindsey independently rediscovered the necessity of randomization and passed this information on to Fox. However, Fox found that even if the roots are randomized, it is usually better to do the unfactorization in the Fourier domain.

PermuteVDC uses function bitreverse [1] to compute the bit reversal of 0, 1, 2, ..., 2^k - 1. This function uses Fox's unpublished algorithm that requires only 2k - 1 bitwise ORs and 2k + 1 bit shifts. It is expressed in 10 Matlab statements and is probably the fastest possible bit reversal algorithm.

If an un-factorization is done in the time domain, randomizing the roots by argument is very important. Conversely, sections 2.2 and 2.4 will show that in a factorization the order of the arguments of deflated roots is also important.

c) If z is a large magnitude root of a high degree polynomial, then f(w) will overflow for most values of w near z. Seemingly the Newton correction, f(w)/f'(w), can not be computed. Fortunately, variants of Newton's and Laguerre's methods allow such roots to be polished. Sections ??-?? discuss these variants and their advantages and limitations.

d) For RRCP, empirical data for 4 < degree < 150,000 suggests the probability density function for log(root radius) is very close to the Cauchy density function, where β depends on the degree:

    p(x) = β / (π (x^2 + β^2))    (3)

Therefore, the probability density for root radius is:

    p(x) = β / (x π (log^2(x) + β^2))    for x > 0    (4)

[Figure 3]

    β(degree) = 0.137 log(degree)/degree    for degree <= 2,000    (5)
    β(degree) = ... log(degree)/degree      for 2,000 < degree

Equation (4) shows that although the grid cells must be small near the unit circle, they can rapidly increase in size away from the unit circle because the roots quickly become sparse; see Figure 4.

e) As mentioned above, zero padding determines the grid size in the angular direction. It was found useful to divide the points inside the unit circle into three annular regions: next to the unit circle, moderately far from the unit circle, and very far. A different zero padding was used for each region, with the region closest to the unit circle having the largest zero padding and hence the smallest angular grid size. This helps fulfill the requirement at the end of d).

f) An empirical formula was discovered to optimize the selection of radii used in the grid search. To efficiently select radii, one needs to know the distance between a root and its nearest neighbor so the radial grid size can be set smaller than this. Empirical data suggests that if nndist(r,n) is the minimum distance to the nearest neighbor of a root of radius r < 1 of a RRCP of degree n, then a useful approximation is:

    nndist(r,n) = 1 - 10/n - r       for 0 < r < 1 - 18/n    (6)
                = n (r - .../n)^2    for 1 - 18/n < r < 1

To derive this, a 100-degree RRCP was factored. For every root inside the unit circle, the distance was computed to every other root and then the minimum distance taken. This gave a table of pairs of values (root radius, distance to that root's nearest neighbor). This was repeated for 10,000 polynomials of degree 100, and a cross-plot of nearest neighbor distance as a function of radius was produced for the combined data set. Visually, points were picked out that formed a lower bound to almost all of the data points. Then a similar cross-plot was obtained for 1,000 polynomials of degree 1,000 and 10 polynomials of degree 10,000.
The points picked as lower bounds for these three plots were fit to determine formula (6). Much later, a cross-plot for one polynomial of degree 150,000 was obtained and nndist(r,150000) was superimposed. The fit was quite good.

Figure 4. Part of the nearest-neighbor-distance versus root-radius cross-plot for the roots of 1,000 1,000-degree polynomials. Roots outside the unit circle are not shown. The solid line is a plot of (6), which is hoped to be a lower bound for almost all the roots.

It was found best to have the grid size in the radial direction be nndist(r,n)/3, i.e., 3 grid cells between neighboring roots.

g) The previous section only applied to roots inside the unit circle. Section ?? will show that if z is a non-zero root of a polynomial, then 1/z is a root of the polynomial obtained by reversing the order of the coefficients (flipping the first polynomial's coefficients end for end). Thus, techniques for finding roots inside the unit circle can also be used to find roots outside the unit circle.

h) After the grid search, the roots are polished against the original polynomial. Sometimes two starting locations incorrectly polish to the same location. With crucial assistance from Sitton, a fast algorithm to find and remove these false duplicates was developed. See function uniq [1].

i) After the roots found by grid search have been deflated, a low degree quotient will contain the missing roots. It is factored. However, due to round-off errors in the deflation and possible ill-conditioning of the quotient, the roots of the quotient can be in error as roots of the original polynomial. The error can be significant if the quotient contains several roots with nearly the same argument, because this causes ill-conditioning. Furthermore, most roots will be near the unit circle, and this means they will also be near dozens of other roots. When the roots of the quotient are polished against the original polynomial, some may polish to nearby roots that were previously found and not to the desired missing roots. This must be checked and any false duplicates removed. If factoring the quotient found at least one missing root, it is added to the known roots and the entire process is repeated.
This iterative deflation is continued until all missing roots are found or until the quotient is so ill-conditioned that it is impossible to use it to find more missing roots. See function deflatefft [1]. Functions uniq and deflatefft are useful in any factorization routine. They are also used in broots and froots, discussed in sections 2.2 and 2.4.

j) Usually, iterative deflation finds all the missing roots. If it fails, a second grid search is performed with a finer grid. If that fails, additional grid searches are performed with grids that are finer farther from the unit circle. This can be expensive.
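The core of the iterative deflation above, division by many known roots at once in the Fourier domain, can be sketched as follows. This is an illustration of the idea behind deflatefft, not the authors' code; it uses a naive DFT and assumes no root falls exactly on a sample point:

```python
import cmath

def dft(a, inverse=False):
    # Naive O(n^2) DFT / inverse DFT standing in for the FFT.
    n, s = len(a), (1 if inverse else -1)
    out = [sum(a[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def deflate_fft(p, roots):
    # Simultaneously deflate all the given roots: sample p on the unit circle,
    # divide pointwise by prod(z - r), and inverse-transform the quotient.
    n = len(p)                                # sample count >= quotient length
    vals = dft(p)                             # p at z_j = exp(-2*pi*i*j/n)
    for j in range(n):
        z = cmath.exp(-2j * cmath.pi * j / n)
        d = 1
        for r in roots:
            d *= z - r
        vals[j] /= d
    q = dft(vals, inverse=True)
    return q[: n - len(roots)]                # high-order coefficients are ~0
```

Sampling p on the unit circle, dividing each sample by the product of (z - r) over the known roots, and inverse transforming yields the quotient's coefficients directly. Roots very close to the sample circle make the pointwise division ill-conditioned, which is the failure mode the text describes.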

2.1.5 Future Optimizations to the Grid Search Method

The current version of lroots.m seems excellent for annular polynomials. However, if there are numerous roots far from the unit circle, it performs one or more additional grid searches with increasingly finer grids both close to and far from the unit circle. This is very time consuming. It would be desirable if the first grid better matched the distribution of the roots. This is to some extent possible; however, the following enhancement has not been undertaken.

By the Argument Principle [20], the number of zeros of an analytic function inside a closed curve can be computed from a contour integral of the function's logarithmic derivative. [Double-check this language.] As in section 2.1.2, the FFT can be used to evaluate the polynomial at n regularly spaced points on any circle centered about the origin. Furthermore, by zero padding, n can be made arbitrarily large, and this permits the integral to be approximated to any desired accuracy. However, it is sufficient if the error in the approximation is < 0.5, because the integral must be an integer, the number of zeros inside the circle.

However, there is an easier way. If c is any circle centered at the origin and f(z) is any polynomial, then the number of times f(c) winds about 0 is also the number of zeros of f(z) inside the circle. [Reference??] Thus, as in section 2.1.2, we can use the FFT to evaluate f(z) at a very large number, n, of points on the circle and then pass these values to a winding number function. This gives the number of zeros inside the inscribed polygon determined by the n points where the function was evaluated. If n is large, this is approximately the number of zeros inside the circumscribed circle. Matlab-callable C code, winding.c, which computes the winding number, and a Matlab program, nroots.m, which uses winding.c to compute the approximate number of roots with radius < r, can be found at [1].
By calling nroots for several different radii, the distribution of root radii could be determined and a grid designed that better matches the actual root layout. This could save a considerable amount of time when factoring non-annular polynomials. Numerical experience strongly suggests that for RRCP, the percentage of root radii that lie in the range 1 - 10/degree < radius < 1 + 10/degree is always very close to 90%. At the beginning of a factorization, a winding number function could be used to determine the percentage of roots in this range. If the percentage is significantly less than 90%, the polynomial is not annular and it would be wise to use the winding number function several more times to better estimate the distribution of root radii before designing the grid. For example, lroots.m currently spends an equal amount of time searching inside the unit circle as it does outside the unit circle. If the polynomial is minimum phase, i.e., all roots are inside the unit circle, half the effort is wasted. Nroots.m easily detects this condition, and the grid search could be adjusted accordingly.
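The root-counting idea can be sketched without the FFT: evaluate f at many points on the circle and accumulate the wrapped phase increments of the image curve. This is an illustration of what winding.c and nroots.m compute, not their actual code (winding_count is our name):

```python
import cmath, math

def winding_count(coeffs, radius, n=2048):
    # Approximate number of roots with |z| < radius: evaluate the polynomial
    # at n points around the circle and accumulate wrapped phase increments.
    # (The paper evaluates these samples with one zero-padded FFT; a direct
    # Horner evaluation is used here for clarity.)
    def f(z):
        v = 0.0
        for c in reversed(coeffs):            # coefficients constant-term first
            v = v * z + c
        return v
    total = 0.0
    prev = f(complex(radius, 0.0))
    for j in range(1, n + 1):
        cur = f(radius * cmath.exp(2j * cmath.pi * j / n))
        total += cmath.phase(cur / prev)      # increment wrapped to (-pi, pi]
        prev = cur
    return round(total / (2 * math.pi))
```

For z^3 - 0.125, whose roots all have radius 0.5, winding_count reports 3 roots inside radius 0.7 and 0 inside radius 0.3. As the text notes, radii close to a cluster of roots need many more sample points than radii well away from them.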

When evaluating the winding number at a radius where there are numerous nearby roots, such as near the unit circle, it is necessary to pad with a large number of zeros before the FFT is used to evaluate the polynomial. For 1,000-degree RRCP, it seems necessary to pad to at least length 8,192. It is occasionally necessary to pad to length 2^17 = 131,072 if the correct answer, not an approximate answer, is needed. It depends on how many roots have magnitude slightly less than the desired radius. However, radii that are significantly different from 1.0 require little or no padding.

The grid search method also pads with zeros and computes the FFT, so it might seem possible to use the resulting values both for a winding number computation and for the grid search. Unfortunately, for 1,000-degree polynomials, grid search only needs to pad to length 4,096, and that is inadequate for the winding number function near radius 1.0. However, for radii that are significantly different from 1, little or no zero padding is needed and the values could be used for both computations.

If a grid search fails to find a few roots, nroots could approximately determine the magnitudes of the missing roots. One or more small grids could be designed that focus on the radii of the missing roots. The current code blindly searches everywhere. Alternatively, the roots found by grid search could be deflated and the resulting small quotient factored. When the roots of the quotient are polished against the original polynomial, they may all converge to nearby roots previously found by grid search, not to missing roots. However, the unpolished positions should be close to the correct missing roots. They could be slightly perturbed in several directions and polished to see if this finds some of the missing roots. If this succeeds, iterative deflation might find the rest.

2.1.6 Advantages of the Grid Search Method

It is fast; it is order n^2, where n is the degree.
The eigenvalue factorization method [23] with preliminary matrix balancing is very powerful, but unfortunately it is order n^3. [Reference? Golub? The timings in section 2.6 are best fit by order n^2.5, so perhaps the correct exponent is 2.5.] See the timing comparisons in section 2.6. Matlab uses this method in its polynomial factorization function, roots. In one test on a 1.4 GHz computer, Matlab's roots required about 74 seconds to factor a 1,000-degree random coefficient polynomial. Grid search factored the same polynomial in 0.77 seconds. They obtained the same root set, but the eigenvalue method was 96 times slower. This disparity increases dramatically at larger degrees.

It requires very little workspace. By contrast, the eigenvalue method creates a (degree+1) x (degree+1) square matrix. Thus a 10,000-degree polynomial in double precision requires an 800-MB work array! Grid search should require less than 2 MB. For this reason Matlab cannot factor ultra-high degree polynomials; see section 2.6.

It is highly parallelizable. The grid search can be broken into two or more areas and attacked in parallel. After the grid search is completed, all the roots that were found can be divided into two or more groups and the polishing done in parallel. Similarly, the unfactoring in the Fourier domain can be done in parallel.
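The workspace claim above can be checked with a one-line estimate (the helper name is invented; the (degree+1) x (degree+1) shape and 8-byte doubles follow the text):

```python
def companion_workspace_bytes(degree, bytes_per_entry=8):
    # The eigenvalue method forms a (degree+1) x (degree+1) dense array
    # (as described in the text); each double-precision entry is 8 bytes.
    return (degree + 1) ** 2 * bytes_per_entry

# A 10,000-degree polynomial needs roughly 800 MB of workspace:
mb_10k = companion_workspace_bytes(10_000) / 1e6
# And the 288-MB figure quoted for degree 6,000 in section 2.6:
mb_6k = companion_workspace_bytes(6_000) / 1e6
```

Both numbers match the figures quoted in the paper (800 MB and 288 MB), which is a useful consistency check on the claimed storage model.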

It may be able to factor severely ill-conditioned polynomials. The roots of these polynomials may only be obtainable to an accuracy of 8 digits (or less), even when working in 16-digit double precision. Most factorization methods find and deflate one root at a time. If some root of an ill-conditioned polynomial can only be polished to 8-digit accuracy, the quotient after deflation will only have 8-digit accuracy. If the quotient is also ill-conditioned, then roots obtained from the quotient can be quite different from roots of the original polynomial. Grid search avoids this problem since it deflates only after it has found almost all the roots. It may have only found them to 8-digit accuracy, but that may be good enough. For example, almost all of the roots of the ill-conditioned 270-degree polynomial of Sitton's of section 1.2 b) could only be polished to 10-digit accuracy. Nonetheless, grid search was able to factor it. Because of the matrix balancing, Matlab's roots was able to decrease the dynamic range and factor it. Both methods obtained essentially the same results.

2.2 The Argument Randomization Method

Deflation Order is Important

Perhaps the three most common reasons polynomial factorizations fail are:

- Forward deflation was used for roots outside the unit circle.
- The polynomial was initially ill-conditioned, e.g., the Wilkinson polynomial.
- The polynomial was initially well-conditioned, but the roots were deflated in an order that caused ill-conditioned intermediate quotients.

The first reason will be discussed in section ??. The second reason is more difficult to deal with. If a polynomial's coefficients are asymptotic to zero at either end and it has a large dynamic range, it is often moderately or seriously ill-conditioned. The eigenvalue method with matrix pre-balancing is one way to deal with a large dynamic range. The matrix balancing reduces the dynamic range. Another alternative will be discussed in section 2.5.
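The forward/backward distinction in the first failure reason is just synthetic division by (z - r) run from the leading coefficient or from the constant term. A pure-Python sketch (the function names are ours):

```python
def deflate_forward(coeffs, r):
    # Divide p(z) by (z - r) starting from the leading coefficient.
    # Error growth is benign when |r| < 1 (roots inside the unit circle).
    q, acc = [], 0j
    for c in coeffs[:-1]:
        acc = acc * r + c
        q.append(acc)
    return q   # the final acc*r + coeffs[-1] is the remainder, ~0 for a root

def deflate_backward(coeffs, r):
    # Divide starting from the constant term instead.
    # Benign when |r| > 1 (roots outside the unit circle).
    n = len(coeffs) - 1
    q = [0j] * n
    q[n - 1] = -coeffs[n] / r
    for k in range(n - 1, 0, -1):
        q[k - 1] = (q[k] - coeffs[k]) / r
    return q

# p(z) = (z - 2)(z - 3) = z^2 - 5z + 6
quot_b = deflate_backward([1, -5, 6], 2)   # remove the root at 2 -> z - 3
quot_f = deflate_forward([1, -5, 6], 3)    # remove the root at 3 -> z - 2
```

Running both directions on the same quadratic shows they divide out the chosen root exactly; the stability difference between the two recursions only matters when the root magnitude is far from 1.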
The third reason for failure is much easier to deal with. Even if the original polynomial is well-conditioned, it is easy to deflate roots in an order that creates ill-conditioned intermediate quotients: simply deflate several roots that are close together. The roots of such polynomials can only be determined with reduced accuracy. When the inaccurate roots are deflated, the resulting quotients can be both inaccurate and ill-conditioned. The factorization can spiral down to disaster.

Burrus' Idea

Lang, section b), observed that unfactoring a set of roots can be tremendously more accurate if the roots are first randomized by argument. Years later Lang's colleague, Burrus [24], conjectured that randomizing the arguments of deflated roots might result in more accurate factorizations, and he wrote a Matlab program to test this idea. Initially the starting

point for every Newton search is on the unit circle. For the sequence of starting arguments he uses

theta = theta + 0.77 radians    (7)

which, modulo 2π, forms a pseudo-random sequence between 0 and 2π. After several roots have been found and deflated, Burrus moves the starting locations off the unit circle but uses the same pseudo-random sequence for the starting arguments. When all roots have been found, they are unfactored without polishing against the original polynomial. His program is extremely successful with 1,000-degree RRCP but has mixed success with 10,000-degree. Sitton wrote a significantly faster program using Burrus' idea. He improved the sequence of starting arguments to

theta = theta + randn(1)/    (8)

where randn generates Gaussian distributed random values. He changed the starting magnitudes to be random values similar to (4). He observed that when the roots are unfactored and compared with the original polynomial, there is less error if forward deflation had been used for roots inside the unit circle but backward deflation for roots outside the unit circle. This important discovery will be explained in sections ??-??. Sitton's program is more successful, but still often fails for degree 10,000. There are two kinds of random numbers: pseudo-random and quasi-random. The better known, pseudo-random, are not random at all. They are generated by some formula. However, they pass statistical tests that a true random sequence would pass. By contrast, quasi-random numbers fill out some range in a nearly uniformly distributed manner. In other words, for every n, the first n values are close to being uniformly distributed. In quadrature and Monte Carlo methods, quasi-random numbers are slightly better than pseudo-random numbers. Although there are better sequences, the van der Corput sequence is a well-known quasi-random sequence between 0 and 1. Fox wrote yet another program, broots.m, to implement Burrus' and Sitton's most important ideas.
It features:

- 2π*(the van der Corput sequence) is used for the starting arguments.
- Backward deflation is used for roots outside the unit circle.
- Roots are polished with a modified Laguerre's, not Newton's, method; it will not overflow.
- Magnitudes of recently deflated roots determine the starting magnitude for finding the next root.
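The van der Corput sequence used for the starting arguments is easy to generate; a pure-Python sketch (the scaling by 2π mirrors the first item above):

```python
import math

def van_der_corput(n, base=2):
    # n-th term of the base-b van der Corput sequence in [0, 1):
    # reverse the base-b digits of n across the radix point.
    v, denom = 0.0, 1.0
    while n > 0:
        n, rem = divmod(n, base)
        denom *= base
        v += rem / denom
    return v

# Quasi-uniform starting arguments on [0, 2*pi):
thetas = [2 * math.pi * van_der_corput(k) for k in range(1, 9)]
```

The first few terms (1/2, 1/4, 3/4, 1/8, ...) show the defining property: every prefix of the sequence is close to uniformly distributed, unlike a pseudo-random stream.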

Broots.m is less capable than lroots.m or froots.m. Nonetheless, it succeeded in factoring 14 of the 15 70,000-degree RRCP and 3 of the 6 100,000-degree polynomials. However, it failed on one polynomial of degree 30,000. The code is still being improved. If one wishes to write a simple factorization routine that will likely work for almost all low degree polynomials, then a good choice is to use Burrus' argument randomization (7) together with Sitton's discovery that backward deflation should be used for roots outside the unit circle.

Wilkinson's Observations

Wilkinson [25] discussed factoring f(z) = z^n - 1. He noted that if the upper half-plane roots are deflated in order of increasing argument, the intermediate quotient containing the lower half-plane roots has much larger coefficients and is more ill-conditioned. However, he said, this worsening of condition is not very important. If he had tried a significantly higher degree polynomial, he would have found that the worsening of condition becomes catastrophic. He also noted that if the zeros are found more or less at random round the unit circle, this deterioration does not usually occur (precisely the Burrus strategy). Randomizing the roots' arguments during deflation is highly useful for many other polynomials. He asserted that the roots should be deflated in order of increasing magnitude, but other orderings also have great value. Wilkinson [26] also discussed the cubic with roots z = [ , , ]. There is one extremely small root, one extremely large root, and one near the unit circle. He discussed in detail the six cases that arise depending on which root is deflated first and whether forward or backward deflation is used. He summarized the results with: If the first root of this cubic to be determined is the smallest, forward deflation gives a reduced polynomial in which both of the other roots are well preserved while backward deflation gives a useless polynomial.
Conversely, if the largest root is found first, backward deflation is completely successful and forward deflation is useless.... It is natural to consider a deflation in which some coefficients are obtained by forward deflation and some by backward deflation. Wilkinson had another observation about this cubic. If the root near the unit circle is deflated first, it did not matter whether forward or backward deflation was used; the final result was poor for one of the remaining two roots. Section ?? will show that if one is only concerned with root magnitude, the optimal ordering is to deflate the roots closest to the unit circle last.

2.3 Insights From the Amplitude Spectrum

The Spectral Significance of Deflating One Root

A root, r, corresponds to a linear factor (z - r) and to the time series [1, -r]. If this is padded with zeros and the FFT computed, its spectrum has a minimum whose index/frequency is

computable from the root's argument. The transform is the linear function taking 0 radians to 0 frequency and π radians to Nyquist frequency. This correspondence between radians and index/frequency is also used in sections and . If the complex FFT generates an amplitude spectrum with N points, the linear factor corresponding to a root with argument 2π/k, k>1, has a minimum at index 1+N/k. It is a notch filter; the closer the root is to the unit circle, the deeper the notch. When a root close to the unit circle is deflated, it removes a notch and the spectrum of the quotient has a local peak at the corresponding position. Figure 5 shows the real FFT amplitude spectrum of a 1,000-degree RRCP. It is called a flat spectrum or a white noise spectrum. Figure 6 shows the spectrum after two roots and their complex conjugates have been deflated. The roots had arguments that were extremely close to π/4 and 3π/4. Deflating a root pushes up the spectrum at a frequency/index that can be computed from its argument. This will be crucial in section 2.4.

Figure 5   Figure 6   Figure 7

Figure 7 shows the time series that resulted from deflating the root closest to 1.0, and its complex conjugate, from a 10,000-degree RRCP uniformly distributed between 0 and 1. The original polynomial was random coefficient but strictly positive and not mean zero. The quotient, plotted in Figure 7, was strictly positive, asymptotic to zero at both ends, and had a dynamic range of 1.6x10^7. We should not be surprised if it is ill-conditioned. Since the original polynomial was strictly positive, the spectrum was already large at frequency 0. Deflating two roots close to 1.0 removed two notches near frequency 0 and boosted the spectrum even higher near frequency 0. The resulting spectrum had a dynamic range of 5x10. When plotted in linear scale, the spectrum looked like a Dirac spike.
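The notch behavior of a single linear factor can be checked directly; a pure-Python sketch using a plain DFT in place of the FFT (helper name is ours; the index here is 0-based, so the paper's 1+N/k in Matlab's 1-based indexing becomes N/k):

```python
import cmath
import math

def amplitude_spectrum(x, N):
    # |DFT| of the sequence x zero-padded to length N
    # (an FFT would be used in practice).
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * n / N)
                    for n, v in enumerate(x)))
            for k in range(N)]

# A root with argument 2*pi/8 just inside the unit circle:
r = 0.98 * cmath.exp(2j * math.pi / 8)
spec = amplitude_spectrum([1, -r], 256)   # the factor's time series [1, -r]
kmin = spec.index(min(spec))              # notch lands at index N/8 = 32
```

The minimum sits exactly at the index predicted from the root's argument, its depth is |1 - 0.98| = 0.02, and the rest of the spectrum stays of order 1: the linear factor is a notch filter, deeper the closer the root is to the unit circle.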
When the original random coefficient polynomial was factored, the largest final polishing correction to any root was 9.990x10^-17. When the polynomial in Figure 7 was factored, the largest final correction to any root was 2.3x10^-12 and 86% were > 1x10^-14. Deflating one ill-chosen root from a well-conditioned positive RRCP produced a polynomial with moderately ill-conditioned roots. When the root closest to 1.0, and its conjugate, were removed from the polynomial in Figure 7, the resulting quotient had some severely ill-conditioned roots; the largest final correction was 3x10^-6. Roots near 1.0 could be obtained to 16-digit accuracy but several other roots could only be determined to 6-digit accuracy. Deflating one root changes the condition of all the remaining roots, sometimes dramatically. This phenomenon is not limited to deflating roots close to 1.0. If several roots that are very near one another are deflated, the resulting quotient seems to always be ill-conditioned. This partially explains the success of the argument randomization method. Randomizing the

starting arguments helped ensure that it did not deflate a root that was close to recently deflated roots.

An Interesting Correlation with the Amplitude Spectrum

By default, the root polishers apply corrections to an estimated root position until either the correction is extremely small or until 51 corrections have been applied. The polishers can optionally return the final correction that was applied to each root. Fox discovered Figure 8, which shows an interesting correlation between the logarithm of the final polishing corrections and the logarithm of the amplitude spectrum of the ill-conditioned polynomial in Figure 9. (Sitton provided this polynomial.) The continuous curve in Figure 8 is the real FFT amplitude spectrum in dB scale, normalized so that Nyquist frequency equals 180. Each + corresponds to an affine re-scaling of the logarithm of one final polishing correction, cross-plotted as a function of the root's argument in degrees. The re-scaling caused the most ill-conditioned roots, the ones with the largest final corrections, to be the most negative.

Figure 8   Figure 9

If a root had an argument that corresponded to a frequency where the amplitude spectrum was small, the final polishing correction was large; i.e., the root was ill-conditioned. The smaller the spectrum, the more severe the ill-conditioning. Figure 8 is a very good correlation but there is no known explanation for it. The reader is encouraged to publish his/her own explanation. However, this correlation is not the whole story, as the following shows. The closer a root is to the unit circle, the deeper the spectral notch. If the spectrum is small near some frequency due to an isolated root very close to the unit circle, it need not be ill-conditioned. For example, a mean 0 RRCP was created and factored. This forced 1.0 to be a root. The roots closest to i and -i were replaced by i and -i exactly.
When the roots were unfactored, the resulting polynomial's spectrum was 0 at the 0 and Nyquist/2 frequencies. However, both 1.0 and i were extremely well-conditioned. The correlation was not valid. Perhaps the reason is that the spectrum was small for only one frequency, not a range of frequencies.
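The mean-zero construction used in this example is a one-liner: subtracting the coefficient mean forces p(1) = (sum of the coefficients) = 0, so 1.0 is exactly a root. A pure-Python sketch:

```python
import random

random.seed(1)
coeffs = [random.uniform(-1.0, 1.0) for _ in range(101)]  # 100-degree RRCP
mean = sum(coeffs) / len(coeffs)
coeffs = [c - mean for c in coeffs]          # now sum(coeffs) == 0

def polyval(coeffs, z):
    # Horner evaluation, highest-degree coefficient first.
    acc = 0.0
    for c in coeffs:
        acc = acc * z + c
    return acc

residual = abs(polyval(coeffs, 1.0))         # p(1) = sum of coefficients
```

The residual at z = 1 is at rounding-error level, confirming that the mean-zero trick plants the root exactly (up to floating-point cancellation).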

Wilkinson's polynomial with roots -1, -2, -3, ..., -20 has spectrum 0 at Nyquist frequency because -1 is a root and -1 is an n-th root of unity. However, -1 is fairly well-conditioned, as are the smallest magnitude roots. All roots have the same argument, but it is only the larger magnitude roots that are ill-conditioned. The correlation was only partially valid.

Multiple roots are always ill-conditioned. They have a tiny spectral minimum because their spectrum is the product of two identical notch filters. The correlation is valid.

If a polynomial has most roots near the unit circle but they are not uniformly distributed in argument, the spectrum will not be flat. Each root is a notch filter, so if some range of arguments has an excessive number of roots, the spectrum will be lower at the corresponding frequencies. The more roots in a sector, the deeper the spectrum. A histogram was obtained for the arguments of the roots of the polynomial in Figure 9. The peaks in the amplitude spectrum in Figure 8 correlated with the troughs of the histogram and vice versa. The correlation was valid.

There needs to be further insight into this correlation, but it seems to explain several things:

- All roots of random coefficient polynomials are extremely well-conditioned. These polynomials have a spectrum that is flat and is not small for any continuous band of frequencies.
- Multiple roots are ill-conditioned and their spectrum is small at the corresponding frequency.
- The roots of the polynomial in Figure 7 were best-conditioned near 1 but, on average, deteriorated as their distance from 1 increased. The spectrum was very large near 0 frequency and was tiny elsewhere.
- It has been observed that deflating several nearby roots produces ill-conditioned quotients. Deflating one root pushes up the spectrum at the corresponding frequency, cf. Figure 6. Deflating several nearby roots dramatically pushes up the spectrum for a small number of nearby frequencies, leaving the spectrum tiny elsewhere.
Plots similar to Figure 8 have been obtained for other ill-conditioned polynomials. This correlation leads to two new factorization methods. Since having a small amplitude spectrum at some band of frequencies usually implies that roots with corresponding angles are ill-conditioned, we should seek to flatten the spectrum. There are two ways to do this. First, we can deflate roots whose arguments correspond to frequencies where the spectrum is small. As in section 2.3.1, deflating a root removes a notch and pushes up the spectrum at a predictable location. This is the FFT argument selection method of section 2.4. Second, we can multiply the initial polynomial by another polynomial that has roots concentrated at angles/frequencies where the first polynomial's spectrum is large. Since

each root is a notch, the second polynomial's spectrum will be small where the first polynomial's spectrum is large. Their product will have a flatter spectrum and hence better conditioned roots. This is the coefficient pre-whitening method of section 2.5. It is an open question whether this correlation is true for non-polynomials. Perhaps, if the Laplace transform is small near a root of an arbitrary analytic function, the root cannot be determined accurately. If this is true, then something analogous to the coefficient pre-whitening method of section 2.5 might improve the accuracy of root determination.

2.4 The FFT Argument Selection Method

Fox's Idea

While trying to improve Sitton's implementation of Burrus' argument randomization method, Fox discovered an even more powerful method. As Burrus discovered, the order of the arguments of deflated roots is extremely important. The argument randomization method uses a random sequence of arguments as starting points when looking for new roots to deflate. By contrast, the FFT argument selection method takes a more proactive approach. It uses each intermediate quotient to determine the argument of a root that is likely to leave a well-conditioned quotient after the root is deflated. Random coefficient polynomials are always extremely well-conditioned. Therefore, it would be highly desirable if all intermediate quotients looked like random coefficient polynomials, which are characterized by having a flat amplitude spectrum. There is a way to encourage this. As in section 2.3.1, deflating a root boosts the spectrum at a frequency that is a linear function of the root's argument. Before searching for a new root, the FFT argument selection method finds the minimum of the amplitude spectrum of the current quotient and then begins the search for a new root at the corresponding angle. If a root with nearly that angle is found and deflated, it boosts the amplitude spectrum precisely where it is smallest.
This tends to flatten the spectrum of the quotient and leave a quotient that is more like an RRCP. This method would not make the mistake of deflating the root near 1.0 that led to Figure 7. By section 2.3.2, deflating a root whose argument corresponds to the frequency where the spectrum is smallest means this method always strives to find and deflate the most ill-conditioned root. We may not be able to polish this root to high accuracy, so the quotient may not be perfectly accurate. However, the alternative can be worse. If instead a root is deflated that corresponds to a frequency where the spectrum is largest, the spectrum is decreased everywhere else and most of the remaining roots become even more ill-conditioned. This was the situation in Figure 7, where deflating one ill-chosen root from a well-conditioned positive RRCP created a moderately ill-conditioned polynomial.

A Sample Implementation

Here is Matlab code for a simplified version of froots.m. Code for rdft, laguerre2, and deflate can be found at [1]. This simplified version was used to factor the sixty 150,000-degree RRCP. In a subsequent step the roots were polished against the original polynomial and

compared with the roots obtained by lroots. They have always agreed to almost 16 digits. However, this code failed when applied to a 127-point Butterworth filter typical in seismic data. The reason is that the Butterworth was not annular; there were many roots far from the unit circle. The program below was unable to find distant roots that were close to the starting angles. However, the full version of froots on the web site could handle that Butterworth. This code is only provided to demonstrate the key ideas that allow an extremely short program to factor ultra-high degree RRCP. It can easily be translated to other languages.

    function rawroots = frootssimple(x);     % Simplified froots; no polish
    degree = length(x) - 1;                  % This fails if there are leading zeros
    rawroots = zeros(degree,1);              % Allocate space for the answer.
    nfound = 0;                              % Number of roots found so far.
    while (nfound < degree)
        ampsp = abs(rdft(x,1));              % rdft(x,1) is the real FFT of x.
        ixmin = find(ampsp == min(ampsp));   % Index of spectral minimum.
        ixmin = ixmin(1);                    % In case the minimum happens more than once.
        th = pi*(ixmin-1)/(length(ampsp)-1); % Convert index to radians.
        aroot = laguerre2(x, complex(cos(th),sin(th))); % Not Newton!
        x = deflate(x, aroot);               % Deflate the root (& its conjugate).
        nfound = nfound + 1;
        rawroots(nfound) = aroot;
        if(~isreal(aroot))                   % If the root was not real.
            nfound = nfound + 1;
            rawroots(nfound) = conj(aroot);
        end
    end

Since froots does one FFT for every root in the upper half-plane, one would assume it would be much slower than broots, which uses fast argument randomization. For one 50,000-degree polynomial, froots was indeed 21% slower. However, surprisingly, tests on low degree polynomials show froots is similar in speed to broots, despite having to do about degree/2 FFTs. See section 2.6.

2.5 The Coefficient Pre-Whitening Method

(This method is under investigation... I can load a 100-degree polynomial into Maple, but not 200-degree.)
If you start with a well-conditioned polynomial and deflate several roots that are close together, the intermediate quotient is usually seriously ill-conditioned. Is the converse true? If you start with an ill-conditioned polynomial, can you find a higher degree well-conditioned polynomial which contains the first as a factor? If you know the additional roots, it will be desirable to factor the well-conditioned polynomial even though it has higher degree. Fox discovered the coefficient pre-whitening method and noticed that the FFT can sometimes help find a well-conditioned parent where the extra roots are indeed known.
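The multiply-by-notches idea behind such a parent can be sketched in a few lines of pure Python (the helper names, the DFT length, and the fixed notch magnitude are our choices; like any crude whitening loop, repeated application is not guaranteed to converge):

```python
import cmath
import math

def polymul(a, b):
    # Coefficient convolution = polynomial multiplication.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def spectrum(coeffs, N=128):
    # |DFT| of the coefficient sequence zero-padded to length N.
    return [abs(sum(c * cmath.exp(-2j * math.pi * k * n / N)
                    for n, c in enumerate(coeffs)))
            for k in range(N)]

def add_notch_at_max(coeffs, N=128, mag=1.01):
    # Place a conjugate root pair of magnitude mag at the angle where the
    # amplitude spectrum is currently largest; returns (product, root).
    s = spectrum(coeffs, N)
    k = s.index(max(s))
    r = mag * cmath.exp(2j * math.pi * k / N)
    factor = [1.0, -2.0 * r.real, abs(r) ** 2]   # (z - r)(z - conj(r))
    return polymul(coeffs, factor), r

c0 = [1.0, 0.9]                  # spectrum peaks at frequency 0
c1, root = add_notch_at_max(c0)  # a notch is pushed onto that peak
```

Because the product's spectrum is the pointwise product of the two factors' spectra, the new polynomial's spectrum collapses at the old maximum; the added root (here 1.01 on the positive real axis) is known exactly, so it can be divided out after factoring.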

As in section 2.3.2, if the spectrum is very low at some band of frequencies, the corresponding roots seem to always be ill-conditioned. If we multiply by a second polynomial whose spectrum is small where the first is large and vice versa, the resulting product should have both a flatter spectrum and better conditioned roots. Call this second polynomial the whitening polynomial and call their product the coefficient pre-whitened polynomial. The pre-whitened polynomial should be easy to factor accurately, and its roots will be the roots of the original together with the roots of the whitening polynomial. If the roots of the whitening polynomial are known, the roots of the original polynomial can be determined.

A Crude Whitening Algorithm

Here is a crude algorithm for finding a whitening polynomial. Code which implements it, whiten.m, can be found at [1]. It continually adds roots, i.e., spectral notches, where the spectrum is currently largest. This slowly flattens the spectrum of the evolving pre-whitened polynomial.

- Compute the amplitude spectrum and find the index of the maximum.
- Use the linear transform to convert the index to radians.
- Create a complex number with this as argument and with magnitude 1.01 or 1/1.01 (successive iterations alternate between the two).
- Multiply by the appropriate linear factor to make this number a root.
- Repeat the above steps until the spectrum is everywhere > 40 dB.
- Return the set of added roots. They determine the whitening polynomial.

Unfortunately, this can be an infinite loop. The algorithm needs improvement! It was written in 20 minutes.

An Encouraging Example

Two 100-degree RRCP were factored. The 5 roots closest to 0+i, and their conjugates, from one polynomial were appended to the roots of the other polynomial. This larger set of roots was unfactored. Every root is a notch, so these 5 extra notches caused the spectrum to be small, -120 dB, at Nyquist/2.
This polynomial was transferred to Maple, where it was multiplied by f(z) = z^2 + 1.21, which has roots 1.1i and -1.1i. The multiplication was done to 34-digit accuracy. This polynomial was brought back to Matlab. Its spectrum was even smaller, -140 dB, at Nyquist/2 because two more notches had been added at Nyquist/2. From section 2.3.2 we expect that any root with argument near π/2 will be ill-conditioned. When a polish was begun starting at 1.1i, the result differed from 1.1i in the 13th decimal place of the real and imaginary parts. The absolute value of the final polishing correction was 2x10^-13. As expected, the final correction was a good estimate of the error, and the root could not be polished to 16-digit accuracy. A set of roots for a whitening polynomial was generated and unfactored. This and the

previous polynomial were transferred to Maple, where they were multiplied together with 34-digit accuracy. The result was transferred back to Matlab and a polish was begun starting at 1.1i. The result equaled 1.1i to 16 digits and the absolute value of the final polishing correction was 4x10^-17. Coefficient pre-whitening worked, at least for this one root. It seems to be necessary to do the polynomial multiplications to 34-digit accuracy in Maple. When it was done in 16-digit accuracy in Matlab, coefficient pre-whitening did not appear to help.

A Cautionary Example

Figure 10 shows the result of pre-whitening the ill-conditioned 200-degree polynomial in Figure 9. Both have 1 as their highest order coefficient. The maximum in Figure 9 was 1.4x10^7 and successive coefficients essentially alternated polarity. The maximum of the 372-degree polynomial in Figure 10 was 11.4. It is much more like an RRCP. Figure 10 was factored and the polishing routine reported that the largest final polishing correction to any root was 6x10^-16. The polynomial in Figure 10 can easily be factored accurately.

Figure 10

However, there is a subtle problem. I was unable to get a 200-degree polynomial loaded into Maple. Thus the polynomial multiplication was done in Matlab. The first and last coefficients of Figure 10 are both the product of two numbers. They are accurate. The same is not true for the center coefficients. The envelope of the whitening polynomial had a maximum of 3.4x10^6 near the center and was asymptotic to zero at both ends. The coefficients in the center of Figure 10 are the sum of 173 products of pairs of numbers.
For one of these sums, the maxima of Figure 9 and the whitening polynomial coincided and produced a sum of products where one of the products equaled 4.76x10^13. Since the final sum was < 11.4, the sum of all the positive terms was on the order of 5x10^13 and the sum of all the negative terms was on the order of 5x10^13, and they were exactly equal, except for sign, for the first 11 or 12 digits. This is a numerical analyst's nightmare. The center coefficients in Figure 10 are only accurate to 4 or 5 digits.

2.6 Timing Comparisons

The following table gives timings in seconds for factoring RRCP of various degrees on a GHz workstation. Roots is the Matlab 5.3 function which uses the order n^3 eigenvalue method. It is the clear winner for extremely low degree. However, it rapidly fell behind, and at degree 6,000 it could not create a 288-MB work array so it immediately terminated. The

workstation had 500 MB RAM and 10 GB of free hard drive space, and Windows was managing the virtual memory. The other three functions were discussed above. Roots does not polish its alleged roots against the original polynomial. Lroots always polishes the alleged roots against the original polynomial and unfactors to validate them. Froots and broots have the option to skip the polishing and validation. However, the timing values for froots and broots include both these steps. Omitting them almost doubles the speed.

    Degree     roots           lroots          froots     broots
    6,000      Out of memory
    10,000     Out of memory
    100,000    Out of memory   5,539.          7,432.     7,253.
    150,000    Out of memory   17,             ,386.      Factorization failed
    250,000    Out of memory   Not attempted   85,758.    Not attempted

3 Applications

3.1 Ancient History

Factoring polynomials is one of the oldest mathematical problems that is still being actively researched. Around 400 BCE the Babylonians could in essence factor quadratic equations [27]. Of course they did not have the concept of an equation. However, cuneiform texts [28] state and solve problems like: The area of a field is 60. Its length exceeds its width by 7. What is the length? In modern terms this problem would be expressed using the quadratic equation x(x-7)=60. The Babylonians solved such problems by completing the square, and that is how the general quadratic equation was solved centuries later. They even set up and attempted to solve cubic equations that arose from computing the volume of an underground cellar. However, that problem took another 2,000 years to solve accurately. In 1535 Niccolo of Brescia, known as Tartaglia (the stammerer), finally discovered the general formula for the cubic equation [29]. In 1540 Lodovico Ferrari discovered the solution

to the quartic equation [30]. Around 1830 Galois showed that some polynomials of degree 5 and higher had roots that could not be expressed using the polynomial's coefficients, the four elementary operations of arithmetic, and radicals and powers. Factoring higher degree polynomials requires numerical approximations, and there is a vast literature on the subject; see Pan [32] and McNamee.

3.2 Modern Applications

Because of its historical interest, studying polynomial factorization needs no further justification. However, there are some applications in signal processing where factoring the z-transform may be the only way to get the exact answer, although faster ways may exist to get approximate or least squares solutions. Roots are to polynomials what atoms are to chemistry: the fundamental building blocks. Being able to isolate them can be useful.

Accurate phase unwrapping

There are at least three applications of factorization to phase unwrapping. If a polynomial can be accurately factored, it is easy to accurately unwrap its phase. This is useful in its own right. It also provides a benchmark for testing other, faster, phase unwrappers: phase unwrapping by factorization gives the correct answer. Finally, calculating the cepstrum requires phase unwrapping. A more accurate phase unwrapping should result in a more accurate cepstrum.

However, the success of the cepstral technique depends on the reliability of the unwrapped phase spectrum which has been reported to be subject to serious problems. Eisner [33]

The instability of phase unwrapping has previously prevented any attempt to decompose phase spectra in the log/Fourier domain. We develop a fast and robust partial unwrapping algorithm into surface consistent terms. Cambois [34] (Emphasis added.)

a) Phase Unwrapping by Factorization

The unwrapped phase of a polynomial can be determined from its roots, Steiglitz [35]. Any polynomial can be written as a product of linear factors, z - r.
Polynomial multiplication is isomorphic to convolution, and the phase of a convolution is the sum of the phases of the inputs. Therefore, if each z - r is padded with zeros to the polynomial length, the unwrapped phase of the polynomial is the sum of the unwrapped phases of the padded linear terms. For minimum phase roots [36], the unwrapped phase of the linear term does not wrap; the wrapped phase is also the unwrapped phase. For maximum phase roots, the phase only wraps once, so it is easy to unwrap. Wraps once means that the last phase value is one step less than 2π. It is not equal because the FFT is determined by evaluating the polynomial at the n-th roots of unity and the last n-th root is very close to, but not equal to, 1. However, the phase value at Nyquist frequency exactly equals π.

Since there previously was no way to factor long polynomials, Steiglitz's paper was ignored or forgotten by later researchers. Shatilo, below, did not test phase unwrapping by factorization. If all the polynomials can be accurately factored, phase unwrapping by factorization is guaranteed to pass the convolution test: the unwrapped phase of a convolution equals the sum of the unwrapped phases of the inputs.

Six known methods of seismic phase unwrapping are compared. An initial validity test of the phase-unwrapping method is that the sum of the restored wavelet phase spectrum and the restored pulse-trace spectrum (assuming the convolutional model of the seismic trace) must be equal to the restored phase spectrum of the synthetic trace. Results show that none of the tested methods satisfy this test. The problem of seismic phase unwrapping has not been solved completely at the present time. Shatilo 37 (Emphasis added.)

Lindsey 38 noticed that if the polynomial has real coefficients, the unwrapped phase of the maximum phase roots, which wrap once, could be computed without unwrapping. As in section 2.1.2, if r is a complex root of a real polynomial, then its conjugate r* must also be a root. Assume r is a maximum phase root and wraps once. Let flip( ) be the function that reverses a polynomial's coefficients; i.e., the constant term becomes the highest order coefficient, and so on. Note that

(z − r)(z − r*) = |r|² · flip( (z − 1/r*)(z − 1/r) )   (9)

Multiplying both sides by (z − 1/r*)(z − 1/r),

(z − 1/r*)(z − 1/r)(z − r)(z − r*) = |r|² · (z − 1/r*)(z − 1/r) · flip( (z − 1/r*)(z − 1/r) )   (10)

The right side equals |r|² times the five-term autocorrelation of (z − 1/r*)(z − 1/r), padded with zeros. Autocorrelations are always symmetric about their center point. Experience shows that the unwrapped phase of a finite time series with a point of circular even symmetry is the negative of the linear function that circularly rotates the series left until the first point is the point of symmetry.
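Identity (9), and the symmetry of the right side of (10), are easy to verify numerically. A short sketch with an arbitrary maximum phase test root (r* denotes the complex conjugate; coefficients are listed in ascending powers of z):

```python
def conv(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0j] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

r = 1.2 + 0.5j                        # a maximum phase test root, |r| > 1
rc = r.conjugate()
m2 = abs(r) ** 2

left = conv([-r, 1], [-rc, 1])        # (z - r)(z - r*)
inner = conv([-1 / rc, 1], [-1 / r, 1])    # (z - 1/r*)(z - 1/r)
right = [m2 * c for c in reversed(inner)]  # |r|^2 * flip( ... )
err9 = max(abs(a - b) for a, b in zip(left, right))

# right side of (10): a five-term sequence symmetric about its center
prod = [m2 * c for c in conv(inner, list(reversed(inner)))]
err_sym = max(abs(prod[i] - prod[4 - i]) for i in range(5))
print(err9, err_sym)  # both ~0
```

Both residuals vanish to machine precision, confirming (9) and the palindromic (autocorrelation) structure that the argument relies on.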
Thus

phase( (z − 1/r*)(z − 1/r) ) + phase( (z − r)(z − r*) ) = a linear phase   (11)

phase( (z − r)(z − r*) ) = a linear phase − ( phase(z − 1/r*) + phase(z − 1/r) )   (12)

Since 1/r and 1/r* are minimum phase roots, no phase unwrapping is necessary for their linear terms. This allows easy computation of the left side of (12), which wraps twice due to the linear phase function that corresponds to a circular rotation by two samples. See Fox's function unwrapz 1, which uses this strategy to unwrap the phase of a real coefficient polynomial given its roots.

Assume there are no roots on the unit circle and let m = (the number of maximum phase roots). As above, m can be computed from the phase value at the Nyquist frequency. This provides an additional constraint on the unwrapped phase. Traditional phase unwrappers do not usually use this constraint, either because the authors are unaware of it or because they do not know how to compute the number of maximum phase roots. Function nroots, discussed in an earlier section, can estimate this value. If there are no roots on the unit circle, the exact value

can be obtained if the polynomial is padded with sufficiently many zeros before taking the FFT. However, there is a much better strategy.

b) Phase unwrapping by zero padding

Consider the following algorithm:
1) Pad with zeros on the right to length N × (the original polynomial length).
2) Compute the phase of the padded polynomial.
3) Apply a phase unwrapper such as Matlab's function unwrap.
4) Extract every N-th sample, starting at sample 1.

The rationale for this algorithm is simple. Traditional phase unwrappers assume that all phase jumps between consecutive samples are small. If that is not true, they fail. By padding before computing the FFT, the phase spectrum is sampled N times more finely. If N is sufficiently large, the phase jumps will be small, the assumptions of the phase unwrapper will be valid, and the phase unwrapper will work.

Ten thousand RRCPs each of degree 100 and 1,000 were generated, factored, and their unwrapped phases determined from their roots. In every case, except for tiny round-off errors, if N was large enough the unwrapped phase from factorization equaled the unwrapped phase from zero padding. However, the required value of N depended on the polynomial. For one 100-degree polynomial N = 2 was sufficient. For another, N = 545 was necessary. For 1,000-degree polynomials the range of N was 6 to 1,403. A similar test was run on 200 polynomials of degree 10,000. In one instance N was so large that Matlab responded "Out of memory" when it tried to create the padded polynomial.

What determines N? Krajnik gives a formula that depends on the maximum absolute value of the derivative of the unwrapped phase. Figure 13 is a plot of log10(N) versus log10(log10(the distance from the unit circle to the smallest root outside the unit circle)) for 10,000 1,000-degree polynomials. Each point corresponds to a different polynomial. It shows a good correlation. (There was no correlation of N with the distance to the largest root inside the unit circle.)
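The four steps above can be sketched in a few lines of pure Python (a hypothetical illustration; the experiments in this paper used Matlab's fft and unwrap):

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) DFT; enough for a small demonstration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def unwrap(ph):
    """Same idea as Matlab's unwrap: remove jumps larger than pi."""
    out = [ph[0]]
    for p in ph[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

def unwrap_by_padding(poly, N):
    n = len(poly)
    padded = list(poly) + [0.0] * (n * N - n)   # step 1: pad to N times the length
    ph = [cmath.phase(v) for v in dft(padded)]  # step 2: phase of the padded polynomial
    return unwrap(ph)[::N]                      # steps 3-4: unwrap, keep every N-th sample

# a benign test polynomial: roots 0.5 and -0.4, well inside the unit circle,
# so even a small N reproduces the unpadded result
poly = [1.0, -0.1, -0.2]   # (1 - 0.5 z^{-1})(1 + 0.4 z^{-1})
a = unwrap_by_padding(poly, 1)
b = unwrap_by_padding(poly, 4)
print(max(abs(x - y) for x, y in zip(a, b)))  # ~0
```

For a polynomial with a root very close to the unit circle, N = 1 and a large N would disagree, which is exactly the failure mode the finer sampling repairs.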
If a root is unusually close to the unit circle, the phase changes rapidly for frequencies that correspond to the root's argument; the derivative of the unwrapped phase is large.

Figure 11

The plot shows that the closer the root is to the unit circle, the larger N needed to be. It is likely there is no theoretical limit to how large N needs to be, but, for a given degree, there

is a practical limit that will usually suffice. In this case 97% of the polynomials needed N < 100 and 99% needed N < 200.

As above, the number of maximum phase roots can be deduced from the phase value at the Nyquist frequency. Function unwrapm 1 repeatedly doubles the polynomial length by zero padding to obtain better estimates of the number of maximum phase roots. When two consecutive estimates agree, the process stops. For 84% of the 1,000-degree polynomials mentioned above, this obtained the same answer as phase unwrapping by factorization. Unwrapm got the correct answer for 86% of the degree-10,000 polynomials. In most instances, unwrapm was at least twice as fast as unwrapping by factorization. However, if a large value of N was necessary, phase unwrapping by zero padding was an order of magnitude slower or more.

Figure 11 shows the result of applying Matlab's unwrap to the phase of a 1,001-point chirp. Sample 1,001 was removed before plotting. Figure 12 shows the phase of this chirp unwrapped using the polynomial's roots. This is the correct phase unwrapping; Matlab is badly in error.

Figure 12   Figure 13

When phase unwrapping by zero padding was applied with N = 2, the result was extremely similar in shape to Figure 12. However it differed from unwrapping by factorization by about 504π. N = 8 yielded a phase that differed by only 4π. N = 11 yielded a phase that agreed to 6 significant digits with unwrapping by factorization, except at sample 1 where it differed by −π/2. The reason for this difference is that the chirp had 1.0 as a root, and this is on the unit circle; see below. Except for frequency zero, the chirp phase could be accurately unwrapped by zero padding. The first value of the FFT was 0, so technically the first phase value was undefined.

c) Unwrapping the amplitude spectrum

Phase unwrapping by zero padding is an intriguing alternative to unwrapping by factorization. It may be able to correctly unwrap the phase of a polynomial that cannot be factored.
Even if it fails, unwrapm is highly likely to be in error by only a few multiples of 2π. However, we can have the greatest confidence in unwrapping by factorization. The

primary reason is the tremendous difference in unwrapped phase depending on whether the root is extremely close to but inside the unit circle, extremely close to but outside the circle, or exactly on the unit circle. Minimum phase roots do not wrap; maximum phase roots wrap once; roots on the unit circle do half a wrap; and 1.0 as a root is unique.

There are three cases of roots on the unit circle: 1, −1, or a complex root. First, suppose r = x + iy is a complex root on the unit circle. Then (z − r)(z − r*) = z² − 2xz + 1 is a quadratic factor and it is symmetric about the second term. As above, the unwrapped phase is the negative of the linear phase that shifts left by one sample. Contrast this with (12), where the linear function shifted the quartic polynomial by 2 samples. A single complex root on the unit circle corresponds to half a wrap. This is halfway between minimum and maximum phase roots, which wrap 0 times and once, respectively.

However, in order to make this work, the amplitude spectrum must also be unwrapped. This may be a new idea. The amplitude and phase spectra are the polar coordinates of the FFT. The polar coordinates of a complex number z are real numbers r and θ which satisfy z = re^{iθ} (de Moivre's relation). There is nothing in this representation that prevents r from being negative; it is merely convention not to do this. There are good reasons to allow the amplitude spectrum to be partially or completely negative.

The best reason for doing this is to remove discontinuities in the phase spectrum. For example, let s = (1 + i)/√2. There is some time series whose FFT equals [s, 0.75s, 0.5s, 0.25s, 0, −0.25s, −0.5s, −0.75s, −s]. The traditional amplitude spectrum would be [1, .75, .5, .25, 0, .25, .5, .75, 1], a perfect V shape with a non-differentiable point in the middle. The traditional phase is [π/4, π/4, π/4, π/4, 0, −3π/4, −3π/4, −3π/4, −3π/4], which has a discontinuity where the amplitude spectrum is 0.
However, if the amplitude spectrum is allowed to be negative, it can be written as [1, 0.75, 0.5, 0.25, 0, −0.25, −0.5, −0.75, −1]. It is a perfectly straight line and the non-differentiable point has been removed. The phase spectrum is [π/4, π/4, π/4, π/4, π/4, π/4, π/4, π/4, π/4]. The discontinuity has been removed.

This is not an academic exercise. This situation always happens when real coefficient polynomials have roots on the unit circle other than 1.0. The traditional phase has a discontinuity where the amplitude is closest to zero. If the polynomial's roots are used to unwrap the phase, roots on the unit circle can be detected and the amplitude spectrum unwrapped as well as the phase spectrum.

For example, consider the time series x = [1, 0, 1, 0, 0, …, 0], which has 16 samples. Figures 14 and 15 show the traditional amplitude and phase spectra. The phase is the wrapped phase, but applying Matlab's unwrapper does not alter it. It is also the result of phase unwrapping by zero padding with N = 75,000. Figures 16 and 17 are the unwrapped amplitude and unwrapped phase from the roots. Combining the first two functions using de Moivre's relation gives the same result as combining the last two. The last two are much more visually appealing, but the amplitude spectrum is both positive and negative. Function unwrapz produces unwrapped amplitude and phase.
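The example above can be checked numerically. A small sketch (s and the spectrum values are taken directly from the text; dividing each spectrum value by s recovers the signed amplitude):

```python
import cmath
import math

s = (1 + 1j) / math.sqrt(2)
spec = [a * s for a in (1, 0.75, 0.5, 0.25, 0, -0.25, -0.5, -0.75, -1)]

trad_amp = [abs(v) for v in spec]           # the V shape, kinked in the middle
trad_ph = [cmath.phase(v) for v in spec]    # jumps from pi/4 to -3*pi/4

# signed amplitude: hold the phase at pi/4 and let the amplitude change sign
signed_amp = [(v / s).real for v in spec]   # a straight line from 1 down to -1

print(trad_ph[0], trad_ph[-1])              # pi/4 and -3*pi/4: a discontinuity
print(signed_amp)                           # monotone; no kink, constant phase
```

Either pair of spectra reconstructs the same complex values, but only the signed-amplitude pair is smooth in both coordinates.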

Figure 14   Figure 15   Figure 16   Figure 17

Next suppose r = −1 is a root. The padded linear factor is [1, 1, 0, 0, …]. Intuitively, if this is circularly shifted left by half a sample, the result will possess even symmetry about the first sample. The unwrapped amplitude spectrum can be written as a function that is positive before Nyquist and negative afterwards, with the sign change at Nyquist corresponding to the root at −1. The unwrapped phase is the negative of the linear function whose slope would correspond to half a wrap.

Finally suppose r = +1 is a root. The padded linear factor is [1, −1, 0, 0, …]. The unwrapped amplitude is strictly non-positive. The unwrapped phase equals π/2 + the linear phase of the previous paragraph. Since the chirp had 1.0 as a root, its unwrapped phase from factorization equaled π/2 in the first sample. The unwrapped phase by zero padding was zero there because the first value of the FFT was 0, and so the phase was by default set to 0. This explains the difference mentioned above. For any root, the first value of the unwrapped phase equals π/2 × (the multiplicity of 1 as a root). If 1.0 is not a root, the unwrapped phase by factorization begins at zero.

This is not true for traditional unwrappers. For example, apply a traditional phase unwrapper to both x and −x. One will have 0 for the first phase value and the other will have π (or −π) for the first value. If the first phase value would be π, unwrapz subtracts π from every phase value and multiplies the amplitude spectrum by −1. Always having the unwrapped phase begin at 0 (provided 1.0 is not a root) seems natural and is another reason for allowing the amplitude spectrum to be negative.

A final reason to allow the amplitudes to be negative is to allow scaling to be a continuous operation when viewed in the amplitude-phase domain. Suppose x is a time series. What is the spectral result of re-scaling? If x is multiplied by 2, the amplitude spectrum is multiplied by 2 and the phase is left unchanged.
If x is multiplied by.001, the amplitude spectrum is multiplied by.001 and the phase is left unchanged. If a seeming tiny change is made to this last scale, if x is multiplied by -.001, then traditionally the amplitude spectrum is multiplied by and π is added to every phase value. Scaling exhibits a discontinuity at 0. If instead, the amplitude spectrum is allowed to be negative, then scaling by can be interpreted as multiplying the amplitude spectrum by and leaving the phase unchanged. Scaling becomes continuous. It always affects only the amplitude spectrum and leaves the phase unchanged. Phase becomes truly scale invariant Finding the minimum phase equivalent. Given a wavelet, w, a common problem is to find its minimum phase equivalent. The minimum phase equivalent has the same amplitude spectrum, but all its roots are minimum phase. The standard solution uses the Wiener-Levinson 41 technique. However, this is a least squares approximate solution, not the exact solution. If w can be factored, it is easy to find the exact solution. 30

A complex root, r, corresponds to a linear factor z − r and to the time series [−r, 1, 0, 0, …]. The amplitude spectrum of [−r, 1, 0, 0, …] equals |r| × (the amplitude spectrum of [−1/r*, 1, 0, 0, …]) 42. To prove this, let w_k be the k-th of the n-th roots of unity. By the definition of the FFT, the k-th value of the amplitude spectrum of [−r, 1, 0, 0, …] is

|−r + w_k| = |w_k| · |1 − r·w_k*| = |1 − r*·w_k| = |r*| · |−1/r* + w_k| = |r| × (the k-th value of the amplitude spectrum of [−1/r*, 1, 0, 0, …]),

where the middle step takes the complex conjugate, which leaves the magnitude unchanged. Therefore, to find the exact minimum phase equivalent, factor the polynomial, replace every root, r, that is on the wrong side of the unit circle with 1/r*, unfactor, and multiply by |r1| · |r2| · |r3| · … where the r_k are the roots outside the unit circle. Sven suggests Oppenheim may provide a reference for [42] above. If he did, use that and omit my proof.

Figure 18

A 127-point zero-phase Butterworth filter with characteristics typical of seismic data was generated. The solid line in Figure 18 shows the first 60 samples of the minimum phase equivalent obtained via Wiener-Levinson. The dots show the first 60 samples of the minimum phase equivalent obtained via factorization. They are visibly different, but extremely close. In actual practice, one typically wishes to find the minimum phase equivalent starting with an average of autocorrelations. The average will have noise, so in that case it would be pointless to worry about obtaining the exact minimum phase equivalent.

A similar test was done with a 1,001-point chirp, and the differences were much smaller, about one pixel in various locations on the plot. Another test was done using the center 15 samples from the Butterworth. This included the peak and most of the first troughs. In this instance there was a significant difference between the two methods. These and other tests indicate the shorter the wavelet, the greater the difference.
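The recipe — factor, reflect the offending roots to 1/r*, unfactor, rescale — can be sketched in a few lines of pure Python (hypothetical names and illustrative roots; the linear factor is written as 1 − r z⁻¹, which has the same amplitude spectrum on the unit circle as z − r):

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) DFT; enough for a small demonstration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def conv(a, b):
    out = [0j] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_from_roots(roots):
    p = [1.0 + 0j]
    for r in roots:
        p = conv(p, [1.0, -r])      # linear factor 1 - r z^{-1}
    return p

# a wavelet with a conjugate pair of maximum phase roots
roots = [0.5, 2.0 + 1.0j, 2.0 - 1.0j]
w = poly_from_roots(roots)

# reflect the maximum phase roots inside the circle and rescale by |r|
min_roots, scale = [], 1.0
for r in roots:
    if abs(r) > 1:
        min_roots.append(1 / r.conjugate())
        scale *= abs(r)
    else:
        min_roots.append(r)
w_min = [scale * c for c in poly_from_roots(min_roots)]

# the minimum phase equivalent has the same amplitude spectrum
n = 16
amp_w = [abs(v) for v in dft(w + [0j] * (n - len(w)))]
amp_m = [abs(v) for v in dft(w_min + [0j] * (n - len(w_min)))]
print(max(abs(a - b) for a, b in zip(amp_w, amp_m)))  # ~0
```

The two amplitude spectra match to machine precision, while all the roots of w_min now lie inside the unit circle.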
Finding the minimum phase equivalent via factorization may only be useful for very short wavelets.

Test Lindsey's suggestion for estimating phase rotation. <To be written later>

Test Tanner's seismic processing ideas. <To be written later>

Who knows what else? It is hoped that this paper will spur interest in the subject and lead to further algorithms that employ roots of the z-transform for signal processing.

References

1. <
2. J. H. Wilkinson, "The evaluation of the zeros of ill-conditioned polynomials, Part I," Numerische Mathematik, v 1, 1959.
3. J. H. Wilkinson, "The evaluation of the zeros of ill-conditioned polynomials, Part II," Numerische Mathematik, v 1, 1959.
4. J. H. Wilkinson, Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1963.
5. G. Peters and J. H. Wilkinson, "Practical problems arising in the solution of polynomial equations," J. Inst. Maths Applics, v 8, 1971.
6. James H. Wilkinson, "The perfidious polynomial," MAA Studies in Mathematics v 24, Studies in Numerical Analysis, ed. Gene H. Golub, The Mathematical Association of America, 1984.
7. Wilkinson, Rounding Errors.
8. Wilkinson, "The perfidious polynomial."
9. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, Cambridge Univ Press, 1986.
10. Wilkinson, "The evaluation of the zeros … Part I."
11. Eric W. Weisstein, "Minimum Modulus Principle," 24 Jan 2003, <
12. Oppenheim ???????????
13. L. Arnold, "Über die Nullstellenverteilung zufälliger Polynome," Math. Zeitschr., 92, 1966.
14. Real FFT evaluates only in upper half-plane ???????????
15. J. P. Lindsey and James W. Fox, "A method of factoring long z-transform polynomials," Computational Methods in Geosciences, SIAM, 1992. Reprinted in Seismic Source Signature Estimation and Measurement, ed. Osman Osman and Enders Robinson, Society of Exploration Geophysicists, Geophysics Reprint Series n 18, 1996.
16. Marcus Lang, "A new and efficient program for finding all polynomial roots," Technical Report No. 9308, ECE Dept., Rice University, April 15.
17. Noel M. Nachtigal, Lothar Reichel, and Lloyd N. Trefethen, "A hybrid GMRES algorithm for nonsymmetric linear systems," SIAM Journal on Matrix Analysis and Applications, 13, July 1992.
18. Arctic Region Supercomputing Center, Newsletter 137, 20 Feb 1998, 24 Jan 2003.
19. "Quasi-random numbers," 24 Jan 2003, <
20. Eric W. Weisstein, "Argument principle," 24 Jan 2003, <
21. Ray Tracing News, compiled by Eric Haines, 24 Jan 2003, <; Ray Tracing News v 3, n 4, Oct 1.
22. Eric Haines, "Point in polygon strategies," Graphics Gems IV, edited by Paul S. Heckbert, 1994.
23. Press.
24. C. Sidney Burrus, "Algorithms for factoring polynomials of a complex variable," draft manuscript, March 4.
25. Wilkinson, Rounding Errors.
26. Wilkinson, "Practical problems."
27. "Quadratic, cubic and quartic equations," Feb 1996, 24 Jan 2003, <
28. "An overview of Babylonian mathematics," Dec 2000, 24 Jan 2003, <
29. "Niccolo Fontana Tartaglia," June 1998, 24 Jan 2003, <
30. "Lodovico Ferrari," June 1998, 24 Jan 2003, <
31. "Evariste Galois," Dec 1996, 24 Jan 2003, <
32. V. Y. Pan, "Solving a polynomial equation: some history and recent progress," SIAM Review, 39(2), June 1997.
33. E. Eisner and G. Hampson, "Decomposition into minimum and maximum phase components," Geophysics, 55, n 7, July 1990.
34. G. Cambois and P. Stoffa, "Surface-consistent phase decomposition in the log/Fourier domain," Geophysics, 58, n 8, August 1993.
35. Kenneth Steiglitz and Bradley Dickinson, "Phase unwrapping by factorization," IEEE Transactions on Acoustics, Speech, and Signal Processing, 30(6), December 1982.
36. "Definition of minimum phase," 20 Feb 2003, 28 Feb 2003, <
37. A. P. Shatilo, "Seismic phase unwrapping: methods, results, problems," Geophysical Prospecting, v 40, 1992.
38. J. P. Lindsey, private conversation.
39. Eduard Krajnik, "A simple and reliable phase unwrapping algorithm," in J. Vandewalle et al. (eds.), Signal Processing VI: Theories and Applications, Elsevier, Amsterdam, 1992.
40. Eduard Krajnik, 12 Sept 2002, 28 Feb 2003, <
41. Wiener-Levinson reference ???????????????????????????????
42. Oppenheim ???????????: spectrum of [−r, 1, 0, 0, …] = |r| × spectrum of [1/conj(r), 1, 0, 0, 0, …]


More information

FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z

FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z DANIEL BIRMAJER, JUAN B GIL, AND MICHAEL WEINER Abstract We consider polynomials with integer coefficients and discuss their factorization

More information

AP Physics 1 and 2 Lab Investigations

AP Physics 1 and 2 Lab Investigations AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks

More information

NEW MEXICO Grade 6 MATHEMATICS STANDARDS

NEW MEXICO Grade 6 MATHEMATICS STANDARDS PROCESS STANDARDS To help New Mexico students achieve the Content Standards enumerated below, teachers are encouraged to base instruction on the following Process Standards: Problem Solving Build new mathematical

More information

A review of ideas from polynomial rootfinding

A review of ideas from polynomial rootfinding A review of ideas from polynomial rootfinding Mark Richardson September 2010 Contents 1 Introduction 2 1.1 Polynomial basics................................. 3 1.2 Newton Iteration.................................

More information

VISUAL ALGEBRA FOR COLLEGE STUDENTS. Laurie J. Burton Western Oregon University

VISUAL ALGEBRA FOR COLLEGE STUDENTS. Laurie J. Burton Western Oregon University VISUAL ALGEBRA FOR COLLEGE STUDENTS Laurie J. Burton Western Oregon University VISUAL ALGEBRA FOR COLLEGE STUDENTS TABLE OF CONTENTS Welcome and Introduction 1 Chapter 1: INTEGERS AND INTEGER OPERATIONS

More information

13. Write the decimal approximation of 9,000,001 9,000,000, rounded to three significant

13. Write the decimal approximation of 9,000,001 9,000,000, rounded to three significant æ If 3 + 4 = x, then x = 2 gold bar is a rectangular solid measuring 2 3 4 It is melted down, and three equal cubes are constructed from this gold What is the length of a side of each cube? 3 What is the

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 [email protected] 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)

More information

SMT 2014 Algebra Test Solutions February 15, 2014

SMT 2014 Algebra Test Solutions February 15, 2014 1. Alice and Bob are painting a house. If Alice and Bob do not take any breaks, they will finish painting the house in 20 hours. If, however, Bob stops painting once the house is half-finished, then the

More information

6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives

6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives 6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise

More information

CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Linda Shapiro Winter 2015

CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Linda Shapiro Winter 2015 CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis Linda Shapiro Today Registration should be done. Homework 1 due 11:59 pm next Wednesday, January 14 Review math essential

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

3.2 The Factor Theorem and The Remainder Theorem

3.2 The Factor Theorem and The Remainder Theorem 3. The Factor Theorem and The Remainder Theorem 57 3. The Factor Theorem and The Remainder Theorem Suppose we wish to find the zeros of f(x) = x 3 + 4x 5x 4. Setting f(x) = 0 results in the polynomial

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

Student Performance Q&A:

Student Performance Q&A: Student Performance Q&A: 2008 AP Calculus AB and Calculus BC Free-Response Questions The following comments on the 2008 free-response questions for AP Calculus AB and Calculus BC were written by the Chief

More information

Algebra and Geometry Review (61 topics, no due date)

Algebra and Geometry Review (61 topics, no due date) Course Name: Math 112 Credit Exam LA Tech University Course Code: ALEKS Course: Trigonometry Instructor: Course Dates: Course Content: 159 topics Algebra and Geometry Review (61 topics, no due date) Properties

More information

SPINDLE ERROR MOVEMENTS MEASUREMENT ALGORITHM AND A NEW METHOD OF RESULTS ANALYSIS 1. INTRODUCTION

SPINDLE ERROR MOVEMENTS MEASUREMENT ALGORITHM AND A NEW METHOD OF RESULTS ANALYSIS 1. INTRODUCTION Journal of Machine Engineering, Vol. 15, No.1, 2015 machine tool accuracy, metrology, spindle error motions Krzysztof JEMIELNIAK 1* Jaroslaw CHRZANOWSKI 1 SPINDLE ERROR MOVEMENTS MEASUREMENT ALGORITHM

More information

Information, Entropy, and Coding

Information, Entropy, and Coding Chapter 8 Information, Entropy, and Coding 8. The Need for Data Compression To motivate the material in this chapter, we first consider various data sources and some estimates for the amount of data associated

More information

Polynomials. Dr. philippe B. laval Kennesaw State University. April 3, 2005

Polynomials. Dr. philippe B. laval Kennesaw State University. April 3, 2005 Polynomials Dr. philippe B. laval Kennesaw State University April 3, 2005 Abstract Handout on polynomials. The following topics are covered: Polynomial Functions End behavior Extrema Polynomial Division

More information

correct-choice plot f(x) and draw an approximate tangent line at x = a and use geometry to estimate its slope comment The choices were:

correct-choice plot f(x) and draw an approximate tangent line at x = a and use geometry to estimate its slope comment The choices were: Topic 1 2.1 mode MultipleSelection text How can we approximate the slope of the tangent line to f(x) at a point x = a? This is a Multiple selection question, so you need to check all of the answers that

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions. Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course

More information

In this chapter, you will learn improvement curve concepts and their application to cost and price analysis.

In this chapter, you will learn improvement curve concepts and their application to cost and price analysis. 7.0 - Chapter Introduction In this chapter, you will learn improvement curve concepts and their application to cost and price analysis. Basic Improvement Curve Concept. You may have learned about improvement

More information

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

More information

Numerical Matrix Analysis

Numerical Matrix Analysis Numerical Matrix Analysis Lecture Notes #10 Conditioning and / Peter Blomgren, [email protected] Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research

More information

The continuous and discrete Fourier transforms

The continuous and discrete Fourier transforms FYSA21 Mathematical Tools in Science The continuous and discrete Fourier transforms Lennart Lindegren Lund Observatory (Department of Astronomy, Lund University) 1 The continuous Fourier transform 1.1

More information

7. Some irreducible polynomials

7. Some irreducible polynomials 7. Some irreducible polynomials 7.1 Irreducibles over a finite field 7.2 Worked examples Linear factors x α of a polynomial P (x) with coefficients in a field k correspond precisely to roots α k [1] of

More information

EE 402 RECITATION #13 REPORT

EE 402 RECITATION #13 REPORT MIDDLE EAST TECHNICAL UNIVERSITY EE 402 RECITATION #13 REPORT LEAD-LAG COMPENSATOR DESIGN F. Kağan İPEK Utku KIRAN Ç. Berkan Şahin 5/16/2013 Contents INTRODUCTION... 3 MODELLING... 3 OBTAINING PTF of OPEN

More information

(Refer Slide Time: 01:11-01:27)

(Refer Slide Time: 01:11-01:27) Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 6 Digital systems (contd.); inverse systems, stability, FIR and IIR,

More information

Sequences and Series

Sequences and Series Sequences and Series Consider the following sum: 2 + 4 + 8 + 6 + + 2 i + The dots at the end indicate that the sum goes on forever. Does this make sense? Can we assign a numerical value to an infinite

More information

DRAFT. Algebra 1 EOC Item Specifications

DRAFT. Algebra 1 EOC Item Specifications DRAFT Algebra 1 EOC Item Specifications The draft Florida Standards Assessment (FSA) Test Item Specifications (Specifications) are based upon the Florida Standards and the Florida Course Descriptions as

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding

More information

Notes on Factoring. MA 206 Kurt Bryan

Notes on Factoring. MA 206 Kurt Bryan The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

More information

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear

More information

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general

More information

Chapter 10: Network Flow Programming

Chapter 10: Network Flow Programming Chapter 10: Network Flow Programming Linear programming, that amazingly useful technique, is about to resurface: many network problems are actually just special forms of linear programs! This includes,

More information

Common Core Unit Summary Grades 6 to 8

Common Core Unit Summary Grades 6 to 8 Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations

More information

Linear Codes. Chapter 3. 3.1 Basics

Linear Codes. Chapter 3. 3.1 Basics Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

More information

Chapter 22: Electric Flux and Gauss s Law

Chapter 22: Electric Flux and Gauss s Law 22.1 ntroduction We have seen in chapter 21 that determining the electric field of a continuous charge distribution can become very complicated for some charge distributions. t would be desirable if we

More information

1 Review of Least Squares Solutions to Overdetermined Systems

1 Review of Least Squares Solutions to Overdetermined Systems cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares

More information

Math 0980 Chapter Objectives. Chapter 1: Introduction to Algebra: The Integers.

Math 0980 Chapter Objectives. Chapter 1: Introduction to Algebra: The Integers. Math 0980 Chapter Objectives Chapter 1: Introduction to Algebra: The Integers. 1. Identify the place value of a digit. 2. Write a number in words or digits. 3. Write positive and negative numbers used

More information

USE OF A SINGLE ELEMENT WATTMETER OR WATT TRANSDUCER ON A BALANCED THREE-PHASE THREE-WIRE LOAD WILL NOT WORK. HERE'S WHY.

USE OF A SINGLE ELEMENT WATTMETER OR WATT TRANSDUCER ON A BALANCED THREE-PHASE THREE-WIRE LOAD WILL NOT WORK. HERE'S WHY. USE OF A SINGLE ELEMENT WATTMETER OR WATT TRANSDUCER ON A BALANCED THREE-PHASE THREE-WIRE LOAD WILL NOT WORK. HERE'S WHY. INTRODUCTION Frequently customers wish to save money by monitoring a three-phase,

More information

Review of Fundamental Mathematics

Review of Fundamental Mathematics Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools

More information

Zeros of a Polynomial Function

Zeros of a Polynomial Function Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

More information

Gas Dynamics Prof. T. M. Muruganandam Department of Aerospace Engineering Indian Institute of Technology, Madras. Module No - 12 Lecture No - 25

Gas Dynamics Prof. T. M. Muruganandam Department of Aerospace Engineering Indian Institute of Technology, Madras. Module No - 12 Lecture No - 25 (Refer Slide Time: 00:22) Gas Dynamics Prof. T. M. Muruganandam Department of Aerospace Engineering Indian Institute of Technology, Madras Module No - 12 Lecture No - 25 Prandtl-Meyer Function, Numerical

More information

DRAFT. Further mathematics. GCE AS and A level subject content

DRAFT. Further mathematics. GCE AS and A level subject content Further mathematics GCE AS and A level subject content July 2014 s Introduction Purpose Aims and objectives Subject content Structure Background knowledge Overarching themes Use of technology Detailed

More information

FACTORING. n = 2 25 + 1. fall in the arithmetic sequence

FACTORING. n = 2 25 + 1. fall in the arithmetic sequence FACTORING The claim that factorization is harder than primality testing (or primality certification) is not currently substantiated rigorously. As some sort of backward evidence that factoring is hard,

More information

Number Patterns, Cautionary Tales and Finite Differences

Number Patterns, Cautionary Tales and Finite Differences Learning and Teaching Mathematics, No. Page Number Patterns, Cautionary Tales and Finite Differences Duncan Samson St Andrew s College Number Patterns I recently included the following question in a scholarship

More information

1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v,

1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v, 1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

The Mathematics of the RSA Public-Key Cryptosystem

The Mathematics of the RSA Public-Key Cryptosystem The Mathematics of the RSA Public-Key Cryptosystem Burt Kaliski RSA Laboratories ABOUT THE AUTHOR: Dr Burt Kaliski is a computer scientist whose involvement with the security industry has been through

More information

Application. Outline. 3-1 Polynomial Functions 3-2 Finding Rational Zeros of. Polynomial. 3-3 Approximating Real Zeros of.

Application. Outline. 3-1 Polynomial Functions 3-2 Finding Rational Zeros of. Polynomial. 3-3 Approximating Real Zeros of. Polynomial and Rational Functions Outline 3-1 Polynomial Functions 3-2 Finding Rational Zeros of Polynomials 3-3 Approximating Real Zeros of Polynomials 3-4 Rational Functions Chapter 3 Group Activity:

More information

REVIEW EXERCISES DAVID J LOWRY

REVIEW EXERCISES DAVID J LOWRY REVIEW EXERCISES DAVID J LOWRY Contents 1. Introduction 1 2. Elementary Functions 1 2.1. Factoring and Solving Quadratics 1 2.2. Polynomial Inequalities 3 2.3. Rational Functions 4 2.4. Exponentials and

More information

Answer Key for California State Standards: Algebra I

Answer Key for California State Standards: Algebra I Algebra I: Symbolic reasoning and calculations with symbols are central in algebra. Through the study of algebra, a student develops an understanding of the symbolic language of mathematics and the sciences.

More information

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( ) Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates

More information

THE COMPLEX EXPONENTIAL FUNCTION

THE COMPLEX EXPONENTIAL FUNCTION Math 307 THE COMPLEX EXPONENTIAL FUNCTION (These notes assume you are already familiar with the basic properties of complex numbers.) We make the following definition e iθ = cos θ + i sin θ. (1) This formula

More information

ABSTRACT. For example, circle orders are the containment orders of circles (actually disks) in the plane (see [8,9]).

ABSTRACT. For example, circle orders are the containment orders of circles (actually disks) in the plane (see [8,9]). Degrees of Freedom Versus Dimension for Containment Orders Noga Alon 1 Department of Mathematics Tel Aviv University Ramat Aviv 69978, Israel Edward R. Scheinerman 2 Department of Mathematical Sciences

More information

If A is divided by B the result is 2/3. If B is divided by C the result is 4/7. What is the result if A is divided by C?

If A is divided by B the result is 2/3. If B is divided by C the result is 4/7. What is the result if A is divided by C? Problem 3 If A is divided by B the result is 2/3. If B is divided by C the result is 4/7. What is the result if A is divided by C? Suggested Questions to ask students about Problem 3 The key to this question

More information