Math 56 Homework 4 Michael Downs


1. Fourier series theory. As in lecture, let $\|\cdot\|$ be the $L^2$ norm on $(0, 2\pi)$.

(a) Find a unit-norm function orthogonal to $f(x) = x$ on $0 \le x < 2\pi$. [Hint: do what you'd do in vectorland, e.g. pick some simple new function and project away its $f$ component.]

(b) Combine the Fourier series for the $2\pi$-periodic function defined by $f(x) = x$ for $0 \le x < 2\pi$ that we computed on the worksheet with Parseval's relation to evaluate $\sum_{m=1}^{\infty} \frac{1}{m^2}$.

(c) Compute the Fourier series for the $2\pi$-periodic function defined by $f(x) = x^2$ in $-\pi \le x < \pi$. [Hint: shift the domain of integration to a convenient one.] Comment on how $\hat{f}_m$ decays for this continuous (but not $C^1$) function compared to the discontinuous function in (b).

(d) Take a general $f$ written as a Fourier series, and take the derivative of both sides. What then is the $m$th Fourier coefficient of $f'$? Prove that if $f'$ is bounded, $\hat{f}_m = O(1/|m|)$, or better.

(e) Use the previous idea to prove a bound on the decay of Fourier coefficients when $f$ has $k$ bounded derivatives. What bound follows if $f \in C^\infty$ (arbitrarily smooth)? Can you give this a name?

1. (a) Let $f(x) = x$ and $g(x) = 1$, $0 \le x < 2\pi$, be two vectors in the inner product space of $2\pi$-periodic continuous functions. A function $\hat{g}(x)$ orthogonal to $f(x)$ can be obtained from $f(x)$ and $g(x)$ via the Gram-Schmidt process:

$$\hat{g}(x) = 1 - \frac{(x, 1)}{(x, x)}\, x = 1 - \frac{2\pi^2}{8\pi^3/3}\, x = 1 - \frac{3}{4\pi}\, x.$$

$\hat{g}(x)$ still has to be normalized. $\int_0^{2\pi} \hat{g}(x)\hat{g}(x)\, dx = \pi/2$, so the final unit-length function orthogonal to $f(x)$ is

$$\hat{g}(x) = \sqrt{\frac{2}{\pi}} \left( 1 - \frac{3}{4\pi}\, x \right).$$

(b) Recall that the Fourier coefficients for the $2\pi$-periodic function defined by $f(x) = x$ for $0 \le x < 2\pi$ are

$$\hat{f}_m = \begin{cases} \pi & m = 0 \\ i/m & m \neq 0 \end{cases} \qquad \text{and so} \qquad |\hat{f}_m|^2 = \begin{cases} \pi^2 & m = 0 \\ 1/m^2 & m \neq 0. \end{cases}$$

By Parseval's equality we have that $\|f\|^2 = 2\pi \sum_{m=-\infty}^{\infty} |\hat{f}_m|^2$. Now, $\|f\|^2 = \int_0^{2\pi} x^2\, dx = \frac{8\pi^3}{3}$.
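A quick numerical aside on part (a): the claimed unit-norm orthogonal function can be checked in a few lines of Python. This sketch is my own check, separate from the Matlab work later in the homework; it approximates the $L^2(0, 2\pi)$ inner products with a midpoint rule, and the function names are mine.

```python
import math

def inner(u, v, n=100000):
    # midpoint-rule approximation of (u, v) = integral_0^{2*pi} u(x) v(x) dx
    h = 2 * math.pi / n
    return h * sum(u(h * (k + 0.5)) * v(h * (k + 0.5)) for k in range(n))

f = lambda x: x
ghat = lambda x: math.sqrt(2 / math.pi) * (1 - 3 * x / (4 * math.pi))

print(inner(ghat, ghat))  # close to 1: unit norm
print(inner(f, ghat))     # close to 0: orthogonal to f(x) = x
```

Both integrals come out as expected, so the part (a) answer stands.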
Parseval's relation now allows us to produce the following chain of equalities:

$$\frac{8\pi^3}{3} = 2\pi \sum_{m=-\infty}^{\infty} |\hat{f}_m|^2 = 2\pi \left( \sum_{n=-\infty}^{-1} \frac{1}{n^2} + \pi^2 + \sum_{m=1}^{\infty} \frac{1}{m^2} \right)$$
$$\frac{4\pi^2}{3} = \pi^2 + 2 \sum_{m=1}^{\infty} \frac{1}{m^2}$$
$$\frac{\pi^2}{3} = 2 \sum_{m=1}^{\infty} \frac{1}{m^2}$$
$$\frac{\pi^2}{6} = \sum_{m=1}^{\infty} \frac{1}{m^2}$$

Thus $\sum_{m=1}^{\infty} \frac{1}{m^2} = \frac{\pi^2}{6}$.

(c) Consider the $2\pi$-periodic function defined by $f(x) = x^2$ in $-\pi \le x < \pi$. First let $y = x + \pi$. Then $0 \le y < 2\pi$ and we can write $f(x) = f(y - \pi) = (y - \pi)^2$. This allows us to compute the Fourier coefficients using the appealing integration bounds $0$ to $2\pi$. First compute $\hat{f}_0$:

$$\hat{f}_0 = \frac{1}{2\pi} \int_0^{2\pi} (y - \pi)^2\, dy = \frac{\pi^2}{3}.$$

Next compute $\hat{f}_m$ for $m \neq 0$:

$$\hat{f}_m = \frac{1}{2\pi} \int_0^{2\pi} e^{-im(y - \pi)} (y - \pi)^2\, dy = \frac{1}{2\pi} \int_0^{2\pi} e^{-imy} e^{im\pi} (y - \pi)^2\, dy = \frac{e^{im\pi}}{2\pi} \int_0^{2\pi} e^{-imy} (y - \pi)^2\, dy = \frac{(-1)^m}{2\pi} \int_0^{2\pi} e^{-imy} (y - \pi)^2\, dy.$$

Some omitted tedious calculations reveal that $\frac{1}{2\pi} \int_0^{2\pi} e^{-imy} (y - \pi)^2\, dy$ resolves to $\frac{2}{m^2}$. Thus:

$$\hat{f}_m = \begin{cases} \pi^2/3 & m = 0 \\ (-1)^m \dfrac{2}{m^2} & m \neq 0. \end{cases}$$
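Both results in (b) and (c) can be checked numerically. Below is a Python sketch (my own check, with quadrature resolution chosen by me): it sums the Basel series against $\pi^2/6$, then approximates $\hat{f}_m = \frac{1}{2\pi} \int_{-\pi}^{\pi} x^2 e^{-imx}\, dx$ by midpoint quadrature for a few $m$ and compares against $(-1)^m\, 2/m^2$.

```python
import cmath
import math

# partial sum of the Basel series vs pi^2/6 (tail after M terms is about 1/M)
M = 100000
s = sum(1 / m**2 for m in range(1, M + 1))
print(s, math.pi**2 / 6)

# quadrature for fhat_m of f(x) = x^2 on (-pi, pi), compared with (-1)^m * 2/m^2
def fhat(m, n=20000):
    h = 2 * math.pi / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = -math.pi + h * (k + 0.5)
        total += x**2 * cmath.exp(-1j * m * x)
    return total * h / (2 * math.pi)

for m in [1, 2, 5]:
    print(m, fhat(m).real, (-1)**m * 2 / m**2)
```

The quadrature values match the closed form to several digits, and the imaginary parts vanish as they should for an even function.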
This gives us the Fourier series for $f(x) = x^2 = \sum_{m=-\infty}^{\infty} \hat{f}_m e^{imx}$. It is interesting to note that the $\hat{f}_m$ for this continuous function decay as the inverse square of $m$, while the $\hat{f}_m$ for the discontinuous function in (b) only decay as the inverse of $m$. This is perhaps due to the fact that a Fourier series never really converges near the discontinuities (Gibbs phenomenon).

I'll now confirm that this infinite series actually is the desired result. It can be shown that for real coefficients with $\hat{f}_{-m} = \hat{f}_m$, the sum $\sum_{m=-\infty}^{\infty} \hat{f}_m e^{imx}$ simplifies to the entirely real sum $\hat{f}_0 + 2 \sum_{m=1}^{\infty} \hat{f}_m \cos(mx)$. Thus:

$$f(x) = x^2 = \frac{\pi^2}{3} + 4 \sum_{m=1}^{\infty} \frac{(-1)^m \cos(mx)}{m^2} \qquad (1)$$

Plotting partial sums of this series gives a graph indistinguishable from $x^2$ on $[-\pi, \pi)$, which is what we wanted. It's also interesting to note that setting $x = 0$ in $(1)$ gives us the identity $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} = \frac{\pi^2}{12}$.

(d) Take an arbitrary function $f$ written in its Fourier series: $f(x) = \sum_{m=-\infty}^{\infty} \hat{f}_m e^{imx}$. Taking derivatives of both sides:

$$\frac{d}{dx} f(x) = \frac{d}{dx} \sum_{m=-\infty}^{\infty} \hat{f}_m e^{imx} = \sum_{m=-\infty}^{\infty} \hat{f}_m \frac{d}{dx} e^{imx} = \sum_{m=-\infty}^{\infty} (im \hat{f}_m) e^{imx}.$$

From this we can write the $m$th Fourier coefficient of $f'$ as $\hat{f'}_m = im \hat{f}_m$. Suppose $f'$ is bounded. Then $\|f'\|^2$ is bounded as well.
By Parseval's equality, $\|f'\|^2 = 2\pi \sum_{m=-\infty}^{\infty} |\hat{f'}_m|^2$, so that infinite sum is bounded as well. This implies that $|\hat{f'}_m| \to 0$ as $|m| \to \infty$. Thus $|im \hat{f}_m| \to 0$ as $|m| \to \infty$. But $|im \hat{f}_m| = |\hat{f}_m|\, |i|\, |m| = |\hat{f}_m|\, |m|$, so $|\hat{f}_m|\, |m| \to 0$ as $|m| \to \infty$. Then, by definition, $\hat{f}_m = o\!\left(\frac{1}{|m|}\right)$.

(e) Now suppose $f$ has $k$ bounded derivatives. By the logic in (d), the Fourier coefficients for the $k$th derivative can be expressed as $\hat{f}^{(k)}_m = (im)^k \hat{f}_m$. Also by the logic in (d), if $f^{(k)}$ is bounded then $\hat{f}^{(k)}_m \to 0$ as $|m| \to \infty$, so $|\hat{f}_m|\, |m|^k \to 0$ as $|m| \to \infty$. Thus if $f$ has $k$ bounded derivatives then $\hat{f}_m = o\!\left(\frac{1}{|m|^k}\right)$. If $f$ is infinitely differentiable then $\hat{f}_m = o\!\left(\frac{1}{|m|^k}\right)$ for all integers $k > 0$. I call this superalgebraic convergence because I noticed parallels between this and the definition of superexponential convergence: the positions of the rate and the number of terms appear to be swapped.

2. Getting to know your DFT. Use numerical exploration followed by proof (each proof is very quick):

(a) Produce a color image of the real part of the DFT matrix $F$ for $N = 256$. Explain one of the curves seen in the image.

(b) What is $F^2$? [Careful: matrix product; also don't forget the indexing.] What does $F^2$ do to a vector? (This should be very simple!) Now, for general $N$, prove your claim. [Hint: use $\omega$.]

(c) What then is $F^4$? Prove this.

(d) What are the eigenvalues of $F$? Use your previous result to prove this.

(e) What is the condition number of $F$? Prove this using a result from lecture.

2. (a) Code:

% code to explore DFT matrices
n = 256;
F = dftmtx(n);   % the matrix

% this produces a 2D image of the real part of the DFT matrix
Z = real(F);
g = linspace(0, n-1, n);
[X, Y] = meshgrid(g, g);
surf(X, Y, Z);
imagesc(g, g, Z);
colorbar;
title(strcat('visualization of real part of dft matrix n = ', num2str(n)));
This produces the picture:

[Figure: visualization of real part of dft matrix, n = 256]

Zoomed in, we can pay particular attention to the curve featured in the corners.

This is the real part of the DFT matrix, where each entry is $F_{mj} = \omega^{mj}$ with $\omega = e^{-2\pi i/N}$; the real part of each entry is therefore $\cos\!\left(\frac{2\pi mj}{N}\right)$. As the indices increase, the frequencies become higher and higher. This can be seen in the picture: the image of the matrix suggests a sinusoidal figure with higher and higher frequencies as the indices $m$ and $j$ become larger.
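To confirm the claim about the entries, here is a tiny Python check (mine, not part of the original Matlab session) that the real part of $F_{mj} = \omega^{mj}$ is indeed $\cos(2\pi mj/N)$, using a few sample index pairs:

```python
import cmath
import math

N = 256
w = cmath.exp(-2j * cmath.pi / N)   # omega = e^{-2*pi*i/N}

for (m, j) in [(1, 1), (3, 5), (100, 200)]:
    entry = w**(m * j)
    # the two printed columns should agree to rounding
    print(entry.real, math.cos(2 * math.pi * m * j / N))
```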
(b) For $N = 256$, $F^2$ appears to be a matrix with $256$ at entry $F_{1,1}$, $256$ at every entry starting at $F_{2,256}$ and going diagonally down to the left, ending at $F_{256,2}$, and $0$ at every other entry. I note that the indices (with $1$ subtracted) for the nonzero entries sum to $0 \bmod N$. Left multiplication by $F^2$ in this particular case appears to fix the first entry, reverse the order of the remaining entries, and then scale the entire vector by $N$:

$$F^2 \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{256} \end{pmatrix} = N \begin{pmatrix} x_1 \\ x_{256} \\ \vdots \\ x_2 \end{pmatrix} \qquad (2)$$

General proof (keeping in mind that the indices here are actually $i+1$, $j+1$, and $k+1$): $F^2_{ij} = \sum_{k=0}^{N-1} F_{ik} F_{kj} = \sum_{k=0}^{N-1} \omega^{(i+j)k}$, so by the sum lemma

$$F^2_{ij} = \begin{cases} N & i + j \equiv 0 \pmod{N} \\ 0 & \text{otherwise.} \end{cases}$$

(c) Matlab shows that $F^4$ is the identity matrix scaled by $N^2$. For general $N$, it should be the $N \times N$ identity matrix scaled by $N^2$. If we left multiply a column vector by $F^2$ it fixes the first entry, reverses the order of the remaining entries, and then scales each entry by $N$. Left multiplying by $F^2$ again, we get the original vector back, except scaled by $N^2$. By this argument $F^4 = N^2 I_{N \times N}$.

(d) Via Matlab the eigenvalues appear to be $16$, $-16$, $16i$, and $-16i$ with various multiplicities (or possibly all the same; I didn't check). For general $N$, let $v$ be one of its eigenvectors. Then $F^4 v = \lambda^4 v$. But also, by (c), $F^4 v = N^2 v$, so $N^2 v = \lambda^4 v$, thus $N^2 = \lambda^4$. From this we have $\lambda = \sqrt{N},\ -\sqrt{N},\ i\sqrt{N},\ -i\sqrt{N}$.

(e) Matlab shows that the condition number is $1$. Proving this for the general case: we know that the norm of $F$ is the square root of the largest eigenvalue magnitude of $F^T F$. We also know that $F$ is symmetric ($F^T F = F^2$) and that its eigenvalues are $\lambda = \sqrt{N},\ -\sqrt{N},\ i\sqrt{N},\ -i\sqrt{N}$. Squaring $F$ squares its eigenvalues, so the eigenvalues of $F^2$ are $\lambda = N,\ -N$. Thus $\|F\| = \sqrt{N}$. Inverting $F$ inverts its eigenvalues, so the largest eigenvalue magnitude of $(F^{-1})^2$ is $\frac{1}{N}$. Therefore $\|F^{-1}\| = \frac{1}{\sqrt{N}}$. In conclusion, $\kappa(F) = \sqrt{N} \cdot \frac{1}{\sqrt{N}} = 1$.
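These claims are easy to replay outside Matlab. The following pure-Python sketch (my own, for a small $N$ of my choosing so no toolbox is needed) builds $F_{mj} = \omega^{mj}$ directly and checks that $F^2$ is $N$ times the "flip" matrix from the proof above and that $F^4 = N^2 I$:

```python
import cmath

N = 8
w = cmath.exp(-2j * cmath.pi / N)
F = [[w**(m * j) for j in range(N)] for m in range(N)]

def matmul(A, B):
    # naive N x N complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

F2 = matmul(F, F)
F4 = matmul(F2, F2)

# F^2 has N where (i + j) % N == 0 and zeros elsewhere:
# it fixes the first entry, reverses the rest, and scales by N
flip_ok = all(abs(F2[i][j] - (N if (i + j) % N == 0 else 0)) < 1e-9
              for i in range(N) for j in range(N))
# F^4 = N^2 * I
ident_ok = all(abs(F4[i][j] - (N**2 if i == j else 0)) < 1e-9
               for i in range(N) for j in range(N))
print(flip_ok, ident_ok)  # True True
```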
3. The power of trigonometric interpolation, i.e. using just a few samples of a periodic function to reconstruct the function everywhere. (Applications to modeling data, etc.)

(a) Let's interpolate $f(x) = e^{\sin x}$. For $N = 40$, by using the $1/N$-weighted samples at the nodes $x_j = 2\pi j/N$, $j = 0, \ldots, N-1$, and fft to do the DFT, find $\tilde{f}_m$. Plot their magnitudes on a log vertical scale vs $m = 0, \ldots, N-1$. Relate to 1(e). By what $m$ have the coefficients decayed to $\varepsilon_{mach}$ times the largest? (This is the effective bandlimit of the function at this tolerance.)

(b) Using $\tilde{f}_m$ as good approximations to the true Fourier coefficients in $-N/2 < m < N/2$, plot the interpolant given by this truncated Fourier series on the fine grid 0:1e-3:2*pi. Overlay the samples $N \tilde{f}_j = f(x_j)$ as blobs. [Hint: remember the negative $m$'s are wrapped into the upper half of the DFT output. Debug until the interpolant passes through the samples.]

(c) By looping over the above for different $N$, make a labeled semilog plot of the maximum error (taken over the fine grid) between the interpolant and $f$, vs $N$, for even $N$ between 2 and 40. At what $N$ is convergence to $\varepsilon_{mach}$ reached? (Pretty amazing, eh?) Relate to (a).

3. (a) Code:

% code to explore interpolation with DFTs
N = 40;                  % number of sample points
g = @(x) exp(sin(x));    % function we're sampling

% get the N samples
x = 0:2*pi/N:2*pi*(N-1)/N;
f = g(x)./N;

% take the fft and observe the magnitudes of the coefficients
ft = fft(f);
mag = abs(ft);

m = 0:1:(N-1);
semilogy(m, mag);
title('magnitudes of fft approximated coefficients vs m');
xlabel('m');
ylabel('magnitude of f_m');

outputs:
[Figure: magnitudes of fft approximated coefficients vs m, semilog scale]

$f(x) = e^{\sin(x)}$ is infinitely differentiable, so it should exhibit the kind of convergence in 1(e). Looking at the plot, it appears better than all types of algebraic convergence, too (the magnitudes appear to be concave down on the log scale). It appears that, by $m = 15$, the magnitude of the coefficients has decayed to $\varepsilon_{mach}$.

(b) Code (continuation of above):

% see how well the interpolated function works
inter = @(x) real(sum(ft((N/2+1):N).*exp(1i*x*(-N/2:1:-1))) + sum(ft(1:N/2).*exp(1i*x*(0:(N/2-1)))));

gr = 0:1e-3:2*pi;
figure;
plot(gr, arrayfun(inter, gr), 'Color', 'green');
hold on;
plot(x, f*N, 'Marker', 'o', 'Color', 'red', 'LineStyle', 'none');
hold off;
xlabel('x');
ylabel('f(x)');
title('Interpolated approximation of exp(sin(x)) passing through sample points')

outputs:
[Figure: Interpolated approximation of exp(sin(x)) passing through sample points]

(c) Code (continuation of above):

% now see which N is necessary for emach error
maxerrors = zeros(1, N/2);
n = 2:2:N;
for j = 2:2:N
    x = 0:2*pi/j:2*pi*(j-1)/j;
    f = g(x)./j;
    ft = fft(f);
    inter = @(x) real(sum(ft((j/2+1):j).*exp(1i*x*(-j/2:1:-1))) + sum(ft(1:j/2).*exp(1i*x*(0:(j/2-1)))));   % this is bad coding
    err = max(abs(g(gr) - arrayfun(inter, gr)));
    maxerrors(j/2) = err;
end
figure;
semilogy(n, maxerrors);
xlabel('N');
ylabel('max error');
title('Error in truncated N term approximation')
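As a cross-check of the whole scheme, the sample/transform/evaluate pipeline can be sketched in pure Python (my own sketch: a slow $O(N^2)$ DFT stands in for fft, and the choice $N = 32$ and all names are mine). With the negative frequencies unwrapped from the upper half of the transform, the interpolant should reproduce $e^{\sin x}$ essentially to rounding:

```python
import cmath
import math

N = 32
g = lambda t: math.exp(math.sin(t))
xs = [2 * math.pi * j / N for j in range(N)]

# 1/N-weighted DFT of the samples (what fft(g(x)/N) computes)
ft = [sum(g(xs[j]) / N * cmath.exp(-2j * cmath.pi * m * j / N) for j in range(N))
      for m in range(N)]

def interp(t):
    # unwrap: ft[k] multiplies e^{i m t} with m = k for k < N/2, m = k - N otherwise
    total = 0j
    for k in range(N):
        m = k if k < N // 2 else k - N
        total += ft[k] * cmath.exp(1j * m * t)
    return total.real

t = 0.7
print(abs(interp(t) - g(t)))          # tiny: spectral accuracy off the grid
print(abs(interp(xs[3]) - g(xs[3])))  # tiny: interpolant passes through the samples
```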
The Matlab loop outputs:

[Figure: Error in truncated N term approximation, semilog plot of max error vs N]

This seems to agree with part (a). In (a), the magnitude of the coefficients decayed to $\varepsilon_{mach}$ around $m = 15$. Here we have $\varepsilon_{mach}$ precision starting around $N = 30$, meaning the truncated approximation goes to about $m = 15$ in either direction.

4. Let's prove that, amongst all trigonometric polynomials of degree at most $N/2$, the $N/2$-truncated Fourier series for $f$ is the best approximation to $f$ in the $L^2(0, 2\pi)$ norm.

(a) Let $g$ be a general trig. poly., which can be written $g = \sum_{|n| \le N/2} (\hat{f}_n + c_n) e^{inx}$ for some coefficient deviations $c_n$. Recall $f = \sum_{n=-\infty}^{\infty} \hat{f}_n e^{inx}$. Write the squared $L^2$ norm of the error, which is the difference of $g$ and $f$:

$$\mathrm{error}^2 = \|f - g\|^2 = \left\| \sum_{|n| > N/2} \hat{f}_n e^{inx} - \sum_{|n| \le N/2} c_n e^{inx} \right\|^2$$

(b) Using the definition of the norm, expand this expression for $\mathrm{error}^2$ (you should obtain four terms, two of which will vanish). Whatever you do, do not multiply any of the big sums out. Can you use this expanded form to bound $\mathrm{error}^2$ below by $\left\| \sum_{|n| > N/2} \hat{f}_n e^{inx} \right\|^2$?

(c) What does this say about how small the error can get? From this lower bound conclude that the error is minimized precisely when each $c_n = 0$, i.e. when $g$ is the truncated Fourier series of $f$.
4. (a) Apparently this part was already completed in the question.

(b) (Note: $\bar{\alpha}$ denotes the complex conjugate of $\alpha$.) Let

$$\alpha = \sum_{|n| > N/2} \hat{f}_n e^{inx}, \qquad \beta = \sum_{|n| \le N/2} c_n e^{inx}. \qquad (3), (4)$$

Then

$$\|\alpha - \beta\|^2 = (\alpha - \beta, \alpha - \beta) = \int_0^{2\pi} (\bar{\alpha} - \bar{\beta})(\alpha - \beta)\, dx = \int_0^{2\pi} \bar{\alpha}\alpha\, dx - \int_0^{2\pi} \bar{\alpha}\beta\, dx - \int_0^{2\pi} \bar{\beta}\alpha\, dx + \int_0^{2\pi} \bar{\beta}\beta\, dx.$$

$\int_0^{2\pi} \bar{\alpha}\beta\, dx$ and $\int_0^{2\pi} \bar{\beta}\alpha\, dx$ are both $0$ because the summation indices never coincide, and $e^{inx}$, $e^{imx}$ are orthogonal on $(0, 2\pi)$ for $m \neq n$. $\int_0^{2\pi} \bar{\alpha}\alpha\, dx = \|\alpha\|^2$ and, as shown on a worksheet, $\int_0^{2\pi} \bar{\beta}\beta\, dx = 2\pi \sum_{|n| \le N/2} |c_n|^2$. So

$$\mathrm{error}^2 = \left\| \sum_{|n| > N/2} \hat{f}_n e^{inx} \right\|^2 + 2\pi \sum_{|n| \le N/2} |c_n|^2 \ \ge \ \left\| \sum_{|n| > N/2} \hat{f}_n e^{inx} \right\|^2.$$

(c) This says that the smallest the error can get is $\left\| \sum_{|n| > N/2} \hat{f}_n e^{inx} \right\|^2$, which is the two-way tail missing from the truncated Fourier series. This is achieved when each $c_n = 0$; in other words, when we use the truncated Fourier approximation.

5. A quickie on runtimes. Consider the matvec problem computing $y = Ax$ for $A$ an $n$-by-$n$ matrix. Write a little code using random $A$ and $x$ for $n = 4000$, which measures the runtime of Matlab's native A*x, and your own naive double loop to compute the same thing. Express your answers in flops (flop per sec), and give the ratio. Marvel at how fast the built-in library is.

5. Code:

% measuring speed of matlab library vs naive implementation of
% matrix-vector multiplication
n = 4000;
A = randn(n);
x = randn(n, 1);
y2 = zeros(n, 1);
% library speed
tic;
y = A*x;
ls = toc;

% naive speed
tic;
for i = 1:n
    s = 0;
    for j = 1:n
        s = s + A(i,j)*x(j,1);
    end
    y2(i) = s;
end
ns = toc;

flopcount = 2*n^2;   % a matvec costs about 2*n^2 flops (n^2 multiplies + n^2 adds)
fprintf('Naive Speed:\n');
fprintf('%g flops\n', flopcount/ns);
fprintf('Library Speed:\n');
fprintf('%g flops\n', flopcount/ls);
fprintf('The library speed is %g times faster\n', ns/ls);

outputs:

>> HW4Q5
Naive Speed:
 flops
Library Speed:
.44 flops
The library speed is  times faster
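For comparison, the same flop-rate measurement can be rewritten as a Python sketch (mine; pure-Python loops, so the absolute rate will be far below Matlab's built-in, but the $2n^2$ flop accounting is the same, and $n$ is shrunk here by me to keep the naive loop quick):

```python
import random
import time

n = 400
A = [[random.random() for _ in range(n)] for _ in range(n)]
x = [random.random() for _ in range(n)]

# naive double-loop matvec, timed
t0 = time.perf_counter()
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
elapsed = time.perf_counter() - t0

flop_rate = 2 * n * n / elapsed   # a matvec is about 2*n^2 flops
print(f"naive matvec: {flop_rate:.3g} flop/s")
```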
More informationEigenvalues and Eigenvectors
Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution
More information106 Chapter 5 Curve Sketching. If f(x) has a local extremum at x = a and. THEOREM 5.1.1 Fermat s Theorem f is differentiable at a, then f (a) = 0.
5 Curve Sketching Whether we are interested in a function as a purely mathematical object or in connection with some application to the real world, it is often useful to know what the graph of the function
More informationReview of Fundamental Mathematics
Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decisionmaking tools
More informationInduction Problems. Tom Davis November 7, 2005
Induction Problems Tom Davis tomrdavis@earthlin.net http://www.geometer.org/mathcircles November 7, 2005 All of the following problems should be proved by mathematical induction. The problems are not necessarily
More informationAu = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.
Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry
More information[1] Diagonal factorization
8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:
More informationChapter 17. Orthogonal Matrices and Symmetries of Space
Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More informationClass XI Chapter 5 Complex Numbers and Quadratic Equations Maths. Exercise 5.1. Page 1 of 34
Question 1: Exercise 5.1 Express the given complex number in the form a + ib: Question 2: Express the given complex number in the form a + ib: i 9 + i 19 Question 3: Express the given complex number in
More information1 2 3 1 1 2 x = + x 2 + x 4 1 0 1
(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which
More information2 Complex Functions and the CauchyRiemann Equations
2 Complex Functions and the CauchyRiemann Equations 2.1 Complex functions In onevariable calculus, we study functions f(x) of a real variable x. Likewise, in complex analysis, we study functions f(z)
More information4. MATRICES Matrices
4. MATRICES 170 4. Matrices 4.1. Definitions. Definition 4.1.1. A matrix is a rectangular array of numbers. A matrix with m rows and n columns is said to have dimension m n and may be represented as follows:
More informationThe Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
More informationPYTHAGOREAN TRIPLES KEITH CONRAD
PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More informationThe Fourth International DERIVETI92/89 Conference Liverpool, U.K., 1215 July 2000. Derive 5: The Easiest... Just Got Better!
The Fourth International DERIVETI9/89 Conference Liverpool, U.K., 5 July 000 Derive 5: The Easiest... Just Got Better! Michel Beaudin École de technologie supérieure 00, rue NotreDame Ouest Montréal
More informationCalculus C/Multivariate Calculus Advanced Placement G/T Essential Curriculum
Calculus C/Multivariate Calculus Advanced Placement G/T Essential Curriculum UNIT I: The Hyperbolic Functions basic calculus concepts, including techniques for curve sketching, exponential and logarithmic
More information1 Review of complex numbers
1 Review of complex numbers 1.1 Complex numbers: algebra The set C of complex numbers is formed by adding a square root i of 1 to the set of real numbers: i = 1. Every complex number can be written uniquely
More informationMathematics PreTest Sample Questions A. { 11, 7} B. { 7,0,7} C. { 7, 7} D. { 11, 11}
Mathematics PreTest Sample Questions 1. Which of the following sets is closed under division? I. {½, 1,, 4} II. {1, 1} III. {1, 0, 1} A. I only B. II only C. III only D. I and II. Which of the following
More informationThis assignment will help you to prepare for Algebra 1 by reviewing some of the things you learned in Middle School. If you cannot remember how to complete a specific problem, there is an example at the
More informationMATH 110 Spring 2015 Homework 6 Solutions
MATH 110 Spring 2015 Homework 6 Solutions Section 2.6 2.6.4 Let α denote the standard basis for V = R 3. Let α = {e 1, e 2, e 3 } denote the dual basis of α for V. We would first like to show that β =
More information3 1. Note that all cubes solve it; therefore, there are no more
Math 13 Problem set 5 Artin 11.4.7 Factor the following polynomials into irreducible factors in Q[x]: (a) x 3 3x (b) x 3 3x + (c) x 9 6x 6 + 9x 3 3 Solution: The first two polynomials are cubics, so if
More informationBindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8
Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e
More informationL12. Special Matrix Operations: Permutations, Transpose, Inverse, Augmentation 12 Aug 2014
L12. Special Matrix Operations: Permutations, Transpose, Inverse, Augmentation 12 Aug 2014 Unfortunately, no one can be told what the Matrix is. You have to see it for yourself.  Morpheus Primary concepts:
More informationState of Stress at Point
State of Stress at Point Einstein Notation The basic idea of Einstein notation is that a covector and a vector can form a scalar: This is typically written as an explicit sum: According to this convention,
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More informationAutoTuning Using Fourier Coefficients
AutoTuning Using Fourier Coefficients Math 56 Tom Whalen May 20, 2013 The Fourier transform is an integral part of signal processing of any kind. To be able to analyze an input signal as a superposition
More information