Power Spectral Density
CHAPTER 10 Power Spectral Density

INTRODUCTION

Understanding how the strength of a signal is distributed in the frequency domain, relative to the strengths of other ambient signals, is central to the design of any LTI filter intended to extract or suppress the signal. We know this well in the case of deterministic signals, and it turns out to be just as true in the case of random signals. For instance, if a measured waveform is an audio signal (modeled as a random process since the specific audio signal isn't known) with additive disturbance signals, you might want to build a lowpass LTI filter to extract the audio and suppress the disturbance signals. We would need to decide where to place the cutoff frequency of the filter.

There are two immediate challenges we confront in trying to find an appropriate frequency-domain description for a WSS random process. First, individual sample functions typically don't have transforms that are ordinary, well-behaved functions of frequency; rather, their transforms are only defined in the sense of generalized functions. Second, since the particular sample function is determined as the outcome of a probabilistic experiment, its features will actually be random, so we have to search for features of the transforms that are representative of the whole class of sample functions, i.e., of the random process as a whole.

It turns out that the key is to focus on the expected power in the signal. This is a measure of signal strength that meshes nicely with the second-moment characterizations we have for WSS processes, as we show in this chapter. For a process that is second-order ergodic, this will also correspond to the time-average power in any realization. We introduce the discussion using the case of CT WSS processes, but the DT case follows very similarly.

10.1 EXPECTED INSTANTANEOUS POWER AND POWER SPECTRAL DENSITY

Motivated by situations in which x(t) is the voltage across (or current through) a unit resistor, we refer to x²(t) as the instantaneous power in the signal x(t). When x(t) is WSS, the expected instantaneous power is given by

E[x²(t)] = R_xx(0) = (1/2π) ∫_{−∞}^{∞} S_xx(jω) dω,   (10.1)

©Alan V. Oppenheim and George C. Verghese, 2010
where S_xx(jω) is the CTFT of the autocorrelation function R_xx(τ). Furthermore, when x(t) is ergodic in correlation, so that time averages and ensemble averages are equal in correlation computations, then (10.1) also represents the time-average power in any ensemble member.

Note that since R_xx(τ) = R_xx(−τ), we know S_xx(jω) is always real and even in ω; a simpler notation such as P_xx(ω) might therefore have been more appropriate for it, but we shall stick to S_xx(jω) to avoid a proliferation of notational conventions, and to keep apparent the fact that this quantity is the Fourier transform of R_xx(τ). The integral above suggests that we might be able to consider the expected (instantaneous) power (or, assuming the process is ergodic, the time-average power) in a frequency band of width dω to be given by (1/2π) S_xx(jω) dω. To examine this thought further, consider extracting a band of frequency components of x(t) by passing x(t) through an ideal bandpass filter, shown in Figure 10.1.

FIGURE 10.1 Ideal bandpass filter to extract a band of frequencies from the input x(t); the filter H(jω) has unit gain over bands of width Δ centered at ±ω₀.

Because of the way we are obtaining y(t) from x(t), the expected power in the output y(t) can be interpreted as the expected power that x(t) has in the selected passband. Using the fact that

S_yy(jω) = |H(jω)|² S_xx(jω),   (10.2)

we see that this expected power can be computed as

E{y²(t)} = R_yy(0) = (1/2π) ∫_{−∞}^{+∞} S_yy(jω) dω = (1/2π) ∫_passband S_xx(jω) dω.   (10.3)

Thus

(1/2π) ∫_passband S_xx(jω) dω   (10.4)

is indeed the expected power of x(t) in the passband. It is therefore reasonable to call S_xx(jω) the power spectral density (PSD) of x(t). Note that the instantaneous power of y(t), and hence the expected instantaneous power E[y²(t)], is always nonnegative, no matter how narrow the passband. It follows that, in addition to being real and even in ω, the PSD is always nonnegative: S_xx(jω) ≥ 0 for all ω.
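The passband-power interpretation is easy to check numerically. The sketch below (an illustrative setup assumed here, not taken from the text) passes white Gaussian noise, whose PSD is flat at 1, through an ideal bandpass filter implemented by masking FFT coefficients; the output power should then equal the fraction of the frequency axis retained by the filter.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2**16
x = rng.standard_normal(N)          # white process, PSD flat at 1

# Ideal bandpass: keep frequencies with |f| in [0.1, 0.2) cycles/sample.
X = np.fft.fft(x)
f = np.fft.fftfreq(N)
keep = (np.abs(f) >= 0.1) & (np.abs(f) < 0.2)
y = np.fft.ifft(np.where(keep, X, 0)).real

# Since the PSD is 1 over the band, the output power should match the
# fraction of the frequency axis kept (about 0.2 for this band).
band_fraction = keep.mean()
print(y.var(), band_fraction)
```

The agreement improves as N grows, consistent with interpreting (1/2π) S_xx(jω) dω as the expected power in a band of width dω.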
While the PSD S_xx(jω) is the Fourier transform of the autocorrelation function, it
is useful to have a name for the Laplace transform of the autocorrelation function; we shall refer to S_xx(s) as the complex PSD. Exactly parallel results apply for the DT case, leading to the conclusion that S_xx(e^jΩ) is the power spectral density of x[n].

10.2 EINSTEIN-WIENER-KHINCHIN THEOREM ON EXPECTED TIME-AVERAGED POWER

The previous section defined the PSD as the transform of the autocorrelation function, and provided an interpretation of this transform. We now develop an alternative route to the PSD. Consider a random realization x(t) of a WSS process. We have already mentioned the difficulties with trying to take the CTFT of x(t) directly, so we proceed indirectly. Let x_T(t) be the signal obtained by windowing x(t), so it equals x(t) in the interval (−T, T) but is 0 outside this interval. Thus

x_T(t) = w_T(t) x(t),   (10.5)

where we define the window function w_T(t) to be 1 for |t| < T and 0 otherwise. Let X_T(jω) denote the Fourier transform of x_T(t); note that because the signal x_T(t) is nonzero only over the finite interval (−T, T), its Fourier transform is typically well defined. We know that the energy spectral density (ESD) of x_T(t) is given by

S_{x_T x_T}(jω) = |X_T(jω)|²,   (10.6)

and that this ESD is actually the Fourier transform of x_T(τ) ∗ x̄_T(τ), where x̄_T(t) = x_T(−t). We thus have the CTFT pair

x_T(τ) ∗ x̄_T(τ) = ∫_{−∞}^{∞} w_T(α) w_T(α − τ) x(α) x(α − τ) dα  ⟷  |X_T(jω)|²,   (10.7)

or, dividing both sides by 2T (which is valid, since scaling a signal by a constant scales its Fourier transform by the same amount),

(1/2T) ∫_{−∞}^{∞} w_T(α) w_T(α − τ) x(α) x(α − τ) dα  ⟷  (1/2T) |X_T(jω)|².   (10.8)

The quantity on the right is what we defined (for the DT case) as the periodogram of the finite-length signal x_T(t).

Because the Fourier transform operation is linear, the Fourier transform of the expected value of a signal is the expected value of the Fourier transform. We may therefore take expectations of both sides in the preceding equation.
Since E[x(α)x(α − τ)] = R_xx(τ), we conclude that

R_xx(τ) Λ(τ)  ⟷  (1/2T) E[|X_T(jω)|²],   (10.9)

where Λ(τ) is a triangular pulse of height 1 at the origin, decaying to 0 at |τ| = 2T, the result of carrying out the convolution w_T(τ) ∗ w̄_T(τ) and dividing by 2T.
Now taking the limit as T goes to ∞, we arrive at the result

R_xx(τ)  ⟷  S_xx(jω) = lim_{T→∞} (1/2T) E[|X_T(jω)|²].   (10.10)

This is the Einstein-Wiener-Khinchin theorem (proved by Wiener, and independently by Khinchin, in the early 1930s, but, as only recently recognized, stated by Einstein in 1914). The result is important to us because it underlies a basic method for estimating S_xx(jω): with a given T, compute the periodogram for several realizations of the random process (i.e., in several independent experiments), and average the results. Increasing the number of realizations over which the averaging is done will reduce the noise in the estimate, while repeating the entire procedure for larger T will improve the frequency resolution of the estimate.

10.2.1 System Identification Using Random Processes as Input

Consider the problem of determining or identifying the impulse response h[n] of a stable LTI system from measurements of the input x[n] and output y[n], as indicated in Figure 10.2.

FIGURE 10.2 System with impulse response h[n] to be determined.

The most straightforward approach is to choose the input to be a unit impulse x[n] = δ[n], and to measure the corresponding output y[n], which by definition is the impulse response. It is often the case in practice, however, that we do not wish to, or are unable to, pick this simple input. For instance, to obtain a reliable estimate of the impulse response in the presence of measurement errors, we may wish to use a more energetic input, one that excites the system more strongly. There are generally limits to the amplitude we can use on the input signal, so to get more energy we have to cause the input to act over a longer time. We could then compute h[n] by evaluating the inverse transform of H(e^jΩ), which in turn could be determined as the ratio Y(e^jΩ)/X(e^jΩ). Care has to be taken, however, to ensure that X(e^jΩ) ≠ 0 for any Ω; in other words, the input has to be sufficiently rich.
In particular, the input cannot be just a finite linear combination of sinusoids (unless the LTI system is such that knowledge of its frequency response at a finite number of frequencies serves to determine the frequency response at all frequencies, which would be the case with a lumped system, i.e., a finite-order system, except that one would need to know an upper bound on the order of the system so as to have a sufficient number of sinusoids combined in the input).
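The estimation method suggested by the Einstein-Wiener-Khinchin theorem can be sketched numerically. The toy experiment below (an assumed setup for illustration) averages periodograms of independent realizations of a unit-variance white process; the average should approach the flat PSD of value 1, with more realizations reducing the noise in the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
T, trials = 256, 500

# Average the periodogram (1/T)|X_T|^2 over independent realizations of a
# unit-variance white process; the average should approach S_xx = 1 (flat).
S_est = np.zeros(T)
for _ in range(trials):
    x = rng.standard_normal(T)
    S_est += np.abs(np.fft.fft(x)) ** 2 / T
S_est /= trials

print(S_est.mean(), S_est.std())
```

A larger T would refine the frequency grid of the estimate, while more trials would flatten the residual fluctuations, mirroring the trade-off described in the text.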
The above constraints might suggest using a randomly generated input signal. For instance, suppose we let the input be a Bernoulli process, with x[n] for each n taking the value +1 or −1 with equal probability, independently of the values taken at other times. This process is (strict- and) wide-sense stationary, with mean value 0 and autocorrelation function R_xx[m] = δ[m]. The corresponding power spectral density S_xx(e^jΩ) is flat at the value 1 over the entire frequency range Ω ∈ [−π, π]; evidently the expected power of x[n] is distributed evenly over all frequencies.

A process with flat power spectrum is referred to as a white process (a term that is motivated by the rough notion that white light contains all visible frequencies in equal amounts); a process that is not white is termed colored.

Now consider what the DTFT X(e^jΩ) might look like for a typical sample function of a Bernoulli process. A typical sample function is not absolutely summable or square summable, and so does not fall into either of the categories for which we know that there are nicely behaved DTFTs. We might expect that the DTFT exists in some generalized-function sense (since the sample functions are bounded, and therefore do not grow faster than polynomially with n for large |n|), and this is indeed the case, but it is not a simple generalized function, not even as nice as the impulses or impulse trains or doublets that we are familiar with.

When the input x[n] is a Bernoulli process, the output y[n] will also be a WSS random process, and Y(e^jΩ) will again not be a pleasant transform to deal with. However, recall that

R_yx[m] = h[m] ∗ R_xx[m],   (10.11)

so if we can estimate the cross-correlation of the input and output, we can determine the impulse response (for this case where R_xx[m] = δ[m]) as h[m] = R_yx[m].
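This identification scheme can be tried directly. In the sketch below, the FIR system h is a hypothetical stand-in for the unknown system; with a Bernoulli input, the time-averaged cross-correlation R_yx[m] recovers h[m].

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.0, 0.5, 0.25, 0.125])   # hypothetical "unknown" FIR system

N = 200_000
x = rng.choice([-1.0, 1.0], size=N)     # Bernoulli input, R_xx[m] = delta[m]
y = np.convolve(x, h)[:N]                # measured output

# Since R_xx[m] = delta[m], the estimate of h[m] is the time-averaged
# cross-correlation R_yx[m], i.e. the average of y[n+m] x[n] over n.
h_est = np.array([np.mean(y[m:] * x[:N - m]) for m in range(len(h))])
print(h_est)
```

The estimate converges at roughly the usual 1/sqrt(N) rate, which is why a long, energetic random input is attractive when a clean impulse test is impractical.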
For a more general random process at the input, with a more general R_xx[m], we can solve for H(e^jΩ) by taking the Fourier transform of (10.11), obtaining

H(e^jΩ) = S_yx(e^jΩ) / S_xx(e^jΩ).   (10.12)

If the input is not accessible, and only its autocorrelation (or equivalently its PSD) is known, then we can still determine the magnitude of the frequency response, as long as we can estimate the autocorrelation (or PSD) of the output. In this case, we have

|H(e^jΩ)|² = S_yy(e^jΩ) / S_xx(e^jΩ).   (10.13)

Given additional constraints or knowledge about the system, one can often determine a lot more (or even everything) about H(e^jΩ) from knowledge of its magnitude.

10.2.2 Invoking Ergodicity

How does one estimate R_yx[m] and/or R_xx[m] in an example such as the one above? The usual procedure is to assume (or prove) that the signals x and y are ergodic. What ergodicity permits, as we have noted earlier, is the replacement of an expectation or ensemble average by a time average, when computing the expected
value of various functions of random variables associated with a stationary random process. Thus a WSS process x[n] would be called mean-ergodic if

E{x[n]} = lim_{N→∞} (1/(2N+1)) Σ_{k=−N}^{N} x[k].   (10.14)

(The convergence on the right-hand side involves a sequence of random variables, so there are subtleties involved in defining it precisely, but we bypass these issues in 6.011.) Similarly, for a pair of jointly-correlation-ergodic processes, we could replace the cross-correlation E{y[n+m] x[n]} by the time average of y[n+m] x[n].

What ergodicity generally requires is that values taken by a typical sample function over time be representative of the values taken across the ensemble. Intuitively, what this requires is that the correlation between samples taken at different times falls off fast enough. For instance, a sufficient condition for a WSS process x[n] with finite variance to be mean-ergodic turns out to be that its autocovariance function C_xx[m] tends to 0 as |m| tends to ∞, which is the case with most of the examples we deal with in these notes. A more precise (necessary and sufficient) condition for mean-ergodicity is that the time-averaged autocovariance function C_xx[m], weighted by a triangular window, be 0:

lim_{L→∞} (1/(2L+1)) Σ_{m=−L}^{L} (1 − |m|/(L+1)) C_xx[m] = 0.   (10.15)

A similar statement holds in the CT case. More stringent conditions (involving fourth moments rather than just second moments) are needed to ensure that a process is second-order ergodic; however, these conditions are typically satisfied for the processes we consider, where the correlations decay exponentially with lag.

10.2.3 Modeling Filters and Whitening Filters

There are various detection and estimation problems that are relatively easy to formulate, solve, and analyze when some random process that is involved in the problem (for instance, the set of measurements) is white, i.e., has a flat spectral density.
When the process is colored rather than white, the easier results from the white case can still often be invoked in some appropriate way if:

(a) the colored process is the result of passing a white process through some LTI modeling or shaping filter, which shapes the white process at the input into one that has the spectral characteristics of the given colored process at the output; or

(b) the colored process is transformable into a white process by passing it through an LTI whitening filter, which flattens out the spectral characteristics of the colored process presented at the input into those of the white noise obtained at the output.
Thus, a modeling or shaping filter is one that converts a white process to some colored process, while a whitening filter converts a colored process to a white process.

An important result that follows from thinking in terms of modeling filters is the following (stated and justified rather informally here; a more careful treatment is beyond our scope):

Key Fact: A real function R_xx[m] is the autocorrelation function of a real-valued WSS random process if and only if its transform S_xx(e^jΩ) is real, even, and nonnegative. The transform in this case is the PSD of the process.

The necessity of these conditions on the transform of the candidate autocorrelation function follows from properties we have already established for autocorrelation functions and PSDs. To argue that these conditions are also sufficient, suppose S_xx(e^jΩ) has these properties, and assume for simplicity that it has no impulsive part. Then it has a real and even square root, which we may denote by √S_xx(e^jΩ). Now construct a (possibly noncausal) modeling filter whose frequency response H(e^jΩ) equals this square root; the unit-sample response of this filter is found by inverse-transforming H(e^jΩ) = √S_xx(e^jΩ). If we then apply to the input of this filter a (zero-mean) unit-variance white noise process, e.g., a Bernoulli process that has equal probabilities of taking +1 and −1 at each DT instant, independently of every other instant, then the output will be a WSS process with PSD given by |H(e^jΩ)|² = S_xx(e^jΩ), and hence with the specified autocorrelation function.

If the transform S_xx(e^jΩ) had an impulse at the origin, we could capture this by adding an appropriate constant (determined by the impulse strength) to the output of a modeling filter, constructed as above by using only the non-impulsive part of the transform.
For a pair of impulses at frequencies Ω = ±Ω₀ ≠ 0 in the transform, we could similarly add a term of the form A cos(Ω₀ n + Θ), where A is deterministic (and determined by the impulse strength) and Θ is independent of all other variables, and uniform in [0, 2π]. Similar statements can be made in the CT case.

We illustrate below the logic involved in designing a whitening filter for a particular example; the logic for a modeling filter is similar (actually, inverse) to this. Consider the discrete-time system shown in Figure 10.3.

FIGURE 10.3 A discrete-time whitening filter: x[n] is applied to a filter with unit-sample response h[n], producing output w[n].

Suppose that x[n] is a process with autocorrelation function R_xx[m] and PSD S_xx(e^jΩ), i.e., S_xx(e^jΩ) = F{R_xx[m]}. We would like w[n] to be a white noise output with variance σ_w².
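Both directions can be previewed numerically. The sketch below (an assumed example, not from the text) builds a colored process by the modeling filter x[n] = w[n] − (1/2) w[n−1], which gives S_xx(e^jΩ) = 5/4 − cos Ω, and then whitens it with the recursion v[n] = x[n] + (1/2) v[n−1], i.e., the inverse filter 1/(1 − (1/2) z⁻¹); the whitened output has autocorrelation approximately δ[m].

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
w = rng.standard_normal(N)            # unit-variance white noise

# Modeling filter: x[n] = w[n] - (1/2) w[n-1], giving the colored process
# with PSD S_xx(e^jW) = |1 - (1/2) e^{-jW}|^2 = 5/4 - cos(W).
x = np.empty(N)
x[0] = w[0]
x[1:] = w[1:] - 0.5 * w[:-1]

# Whitening filter 1/(1 - (1/2) z^{-1}): v[n] = x[n] + (1/2) v[n-1].
v = np.empty(N)
v[0] = x[0]
for n in range(1, N):
    v[n] = x[n] + 0.5 * v[n - 1]

# Checks: x is colored (R_xx[1] = -1/2), while v is white with unit variance.
r_xx1 = np.mean(x[1:] * x[:-1])
r_vv0 = np.mean(v * v)
r_vv1 = np.mean(v[1:] * v[:-1])
print(r_xx1, r_vv0, r_vv1)
```

Here the whitening filter exactly inverts the modeling filter, so v[n] reconstructs w[n]; in general only the magnitude of the frequency response is pinned down, as the derivation below makes precise.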
We know that

S_ww(e^jΩ) = |H(e^jΩ)|² S_xx(e^jΩ),   (10.16)

or

|H(e^jΩ)|² = σ_w² / S_xx(e^jΩ).   (10.17)

This then tells us what the squared magnitude of the frequency response of the LTI system must be to obtain a white noise output with variance σ_w². If we have S_xx(e^jΩ) available as a rational function of e^jΩ (or can model it that way), then we can obtain H(e^jΩ) by appropriate factorization of |H(e^jΩ)|².

EXAMPLE 10.1 Whitening filter

Suppose that

S_xx(e^jΩ) = 5/4 − cos Ω.   (10.18)

Then, to whiten x[n] (taking σ_w² = 1), we require a stable LTI filter for which

|H(e^jΩ)|² = 1 / [(1 − (1/2)e^{jΩ})(1 − (1/2)e^{−jΩ})],   (10.19)

or equivalently,

H(z) H(1/z) = 1 / [(1 − (1/2)z)(1 − (1/2)z^{−1})].   (10.20)

The filter is constrained to be stable in order to produce a WSS output. One choice of H(z) that results in a causal filter is

H(z) = 1 / (1 − (1/2)z^{−1}),   (10.21)

with region of convergence (ROC) given by |z| > 1/2. This system function could be multiplied by the system function A(z) of any allpass system, i.e., a system function satisfying A(z)A(z^{−1}) = 1, and still produce the same whitening action, because |A(e^jΩ)|² = 1.

10.3 SAMPLING OF BANDLIMITED RANDOM PROCESSES

A WSS random process is termed bandlimited if its PSD is bandlimited, i.e., is zero for frequencies outside some finite band. For deterministic signals that are bandlimited, we can sample at or above the Nyquist rate and recover the signal exactly. We examine here whether we can do the same with bandlimited random processes.

In the discussion of sampling and DT processing of CT signals in your prior courses, the derivations and discussion rely heavily on picturing the effect in the frequency
domain, i.e., tracking the Fourier transform of the continuous-time signal through the C/D (sampling) and D/C (reconstruction) process. While the arguments can alternatively be carried out directly in the time domain, for deterministic finite-energy signals the frequency-domain development seems more conceptually clear.

As you might expect, results similar to the deterministic case hold for the reconstruction of bandlimited random processes from samples. However, since these stochastic signals do not possess Fourier transforms except in the generalized sense, we carry out the development for random processes directly in the time domain. An essentially parallel argument could have been used in the time domain for deterministic signals (by examining the total energy in the reconstruction error rather than the expected instantaneous power in the reconstruction error, which is what we focus on below).

The basic sampling and bandlimited reconstruction process should be familiar from your prior studies in signals and systems, and is depicted in Figure 10.4 below. In this figure we have explicitly used bold upper-case symbols for the signals to underscore that they are random processes.

FIGURE 10.4 C/D and D/C for random processes: X_c(t) → C/D → X[n] = X_c(nT); X[n] → D/C → Y_c(t) = Σ_{n=−∞}^{+∞} X[n] sinc((t − nT)/T), where sinc x = sin(πx)/(πx).

For the deterministic case, we know that if x_c(t) is bandlimited to less than π/T, then with the D/C reconstruction defined as

y_c(t) = Σ_n x[n] sinc((t − nT)/T),   (10.22)

it follows that y_c(t) = x_c(t). In the case of random processes, what we show below is that, under the condition that S_{x_c x_c}(jω), the power spectral density of X_c(t), is bandlimited to less than π/T, the mean-square value of the error between X_c(t) and Y_c(t) is zero; i.e., if

S_{x_c x_c}(jω) = 0 for |ω| ≥ π/T,   (10.23)
then

E = E{[X_c(t) − Y_c(t)]²} = 0.   (10.24)

This, in effect, says that there is zero power in the error. (An alternative proof to the one below is outlined in Problem 3 at the end of this chapter.)

To develop the above result, we expand the error and use the definitions of the C/D (or sampling) and D/C (or ideal bandlimited interpolation) operations in Figure 10.4 to obtain

E = E{X_c²(t)} + E{Y_c²(t)} − 2 E{Y_c(t) X_c(t)}.   (10.25)

We first consider the last term, E{Y_c(t) X_c(t)}:

E{Y_c(t) X_c(t)} = E{ Σ_{n=−∞}^{+∞} X_c(nT) sinc((t − nT)/T) · X_c(t) }   (10.26)
                 = Σ_{n=−∞}^{+∞} R_{x_c x_c}(nT − t) sinc((nT − t)/T),   (10.27)

where, in the last expression, we have invoked the symmetry of sinc(·) to change the sign of its argument from the expression that precedes it.

Equation (10.26) can be evaluated using Parseval's relation in discrete time, which states that for real sequences v[n] and w[n],

Σ_{n=−∞}^{+∞} v[n] w[n] = (1/2π) ∫_{−π}^{π} V(e^jΩ) W*(e^jΩ) dΩ.   (10.28)

To apply Parseval's relation, note that R_{x_c x_c}(nT − t) can be viewed as the result of the C/D or sampling process depicted in Figure 10.5, in which the input is considered to be a function of the variable τ.

FIGURE 10.5 C/D applied to R_{x_c x_c}(τ − t), yielding R_{x_c x_c}(nT − t).

The CTFT (in the variable τ) of R_{x_c x_c}(τ − t) is e^{−jωt} S_{x_c x_c}(jω), and since this is bandlimited to |ω| < π/T, the DTFT of its sampled version R_{x_c x_c}(nT − t) is

(1/T) e^{−jΩt/T} S_{x_c x_c}(jΩ/T)   (10.29)
in the interval |Ω| < π. Similarly, under the condition that S_{x_c x_c}(jω) is bandlimited to |ω| < π/T, the DTFT of sinc((nT − t)/T) is e^{−jΩt/T}. Consequently,

E{Y_c(t) X_c(t)} = (1/2π) ∫_{−π}^{π} (1/T) S_{x_c x_c}(jΩ/T) dΩ
                 = (1/2π) ∫_{−π/T}^{π/T} S_{x_c x_c}(jω) dω
                 = R_{x_c x_c}(0) = E{X_c²(t)}.   (10.30)

Next, we expand the middle term in equation (10.25):

E{Y_c²(t)} = E{ Σ_n Σ_m X_c(nT) X_c(mT) sinc((t − nT)/T) sinc((t − mT)/T) }
           = Σ_n Σ_m R_{x_c x_c}((n − m)T) sinc((t − nT)/T) sinc((t − mT)/T).   (10.31)

With the substitution n − m = r, we can express (10.31) as

E{Y_c²(t)} = Σ_r R_{x_c x_c}(rT) Σ_m sinc((t − mT − rT)/T) sinc((t − mT)/T).   (10.32)

Using the identity

Σ_n sinc(n − θ₁) sinc(n − θ₂) = sinc(θ₂ − θ₁),   (10.33)

which again comes from Parseval's theorem (see Problem 2 at the end of this chapter), we have

E{Y_c²(t)} = Σ_r R_{x_c x_c}(rT) sinc(r) = R_{x_c x_c}(0) = E{X_c²(t)},   (10.34)

since sinc(r) = 1 if r = 0 and zero otherwise for integer r. Substituting (10.30) and (10.34) into (10.25), we obtain the result that E = 0, as desired.
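The zero-error result can be illustrated numerically on a periodic, finite-grid surrogate for continuous time (an assumed construction for the sketch, not from the text): white noise on a fine grid is bandlimited to satisfy the condition on S_{x_c x_c}, sampled every T grid steps, and reconstructed by ideal periodic bandlimited interpolation, implemented here by zero-padding the DFT of the samples rather than by an explicit sinc sum.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Continuous" time on a fine periodic grid of N points; bandlimit white
# noise so its spectrum is zero for |f| >= 0.5/T cycles per grid step,
# i.e. strictly inside the Nyquist band for sampling interval T = 4 steps.
N, T = 4096, 4
X = np.fft.fft(rng.standard_normal(N))
f = np.fft.fftfreq(N)
xc = np.fft.ifft(np.where(np.abs(f) < 0.45 / T, X, 0)).real

# Sample every T grid steps, then reconstruct by ideal periodic bandlimited
# interpolation: zero-pad the DFT of the samples back to length N.
xs = xc[::T]
M = N // T
Xs = np.fft.fft(xs)
Y = np.zeros(N, dtype=complex)
Y[: M // 2] = Xs[: M // 2]
Y[-(M // 2):] = Xs[-(M // 2):]
yc = np.fft.ifft(Y).real * T

# Mean-square reconstruction error: zero to machine precision, since the
# bandlimiting condition rules out aliasing.
err = np.mean((xc - yc) ** 2)
print(err)
```

If the bandlimit were widened past the Nyquist band, aliasing would make err comparable to the signal power rather than negligible, which is the sampled-process analog of violating condition (10.23) in the text.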
MIT OpenCourseWare, 6.011 Introduction to Communication, Control, and Signal Processing, Spring 2010. For information about citing these materials or our Terms of Use, visit:
Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model
Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written
Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur
Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Module No. #01 Lecture No. #15 Special Distributions-VI Today, I am going to introduce
Lectures 5-6: Taylor Series
Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,
1 Short Introduction to Time Series
ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The
This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.
Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course
Inner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 18. A Brief Introduction to Continuous Probability
CS 7 Discrete Mathematics and Probability Theory Fall 29 Satish Rao, David Tse Note 8 A Brief Introduction to Continuous Probability Up to now we have focused exclusively on discrete probability spaces
Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all.
1. Differentiation The first derivative of a function measures by how much changes in reaction to an infinitesimal shift in its argument. The largest the derivative (in absolute value), the faster is evolving.
PYKC Jan-7-10. Lecture 1 Slide 1
Aims and Objectives E 2.5 Signals & Linear Systems Peter Cheung Department of Electrical & Electronic Engineering Imperial College London! By the end of the course, you would have understood: Basic signal
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete
Chapter 10 Introduction to Time Series Analysis
Chapter 1 Introduction to Time Series Analysis A time series is a collection of observations made sequentially in time. Examples are daily mortality counts, particulate air pollution measurements, and
Design of Efficient Digital Interpolation Filters for Integer Upsampling. Daniel B. Turek
Design of Efficient Digital Interpolation Filters for Integer Upsampling by Daniel B. Turek Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements
7. Beats. sin( + λ) + sin( λ) = 2 cos(λ) sin( )
34 7. Beats 7.1. What beats are. Musicians tune their instruments using beats. Beats occur when two very nearby pitches are sounded simultaneously. We ll make a mathematical study of this effect, using
FFT Algorithms. Chapter 6. Contents 6.1
Chapter 6 FFT Algorithms Contents Efficient computation of the DFT............................................ 6.2 Applications of FFT................................................... 6.6 Computing DFT
Introduction to Complex Numbers in Physics/Engineering
Introduction to Complex Numbers in Physics/Engineering ference: Mary L. Boas, Mathematical Methods in the Physical Sciences Chapter 2 & 14 George Arfken, Mathematical Methods for Physicists Chapter 6 The
Positive Feedback and Oscillators
Physics 3330 Experiment #6 Fall 1999 Positive Feedback and Oscillators Purpose In this experiment we will study how spontaneous oscillations may be caused by positive feedback. You will construct an active
The sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1].
Probability Theory Probability Spaces and Events Consider a random experiment with several possible outcomes. For example, we might roll a pair of dice, flip a coin three times, or choose a random real
Multi-variable Calculus and Optimization
Multi-variable Calculus and Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Multi-variable Calculus and Optimization 1 / 51 EC2040 Topic 3 - Multi-variable Calculus
Sampling and Interpolation. Yao Wang Polytechnic University, Brooklyn, NY11201
Sampling and Interpolation Yao Wang Polytechnic University, Brooklyn, NY1121 http://eeweb.poly.edu/~yao Outline Basics of sampling and quantization A/D and D/A converters Sampling Nyquist sampling theorem
Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization
Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued
Representation of functions as power series
Representation of functions as power series Dr. Philippe B. Laval Kennesaw State University November 9, 008 Abstract This document is a summary of the theory and techniques used to represent functions
Lecture 1-6: Noise and Filters
Lecture 1-6: Noise and Filters Overview 1. Periodic and Aperiodic Signals Review: by periodic signals, we mean signals that have a waveform shape that repeats. The time taken for the waveform to repeat
Math 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
Notes on Continuous Random Variables
Notes on Continuous Random Variables Continuous random variables are random quantities that are measured on a continuous scale. They can usually take on any value over some interval, which distinguishes
CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Relations Between Time Domain and Frequency Domain Prediction Error Methods - Tomas McKelvey
COTROL SYSTEMS, ROBOTICS, AD AUTOMATIO - Vol. V - Relations Between Time Domain and Frequency Domain RELATIOS BETWEE TIME DOMAI AD FREQUECY DOMAI PREDICTIO ERROR METHODS Tomas McKelvey Signal Processing,
Probability Generating Functions
page 39 Chapter 3 Probability Generating Functions 3 Preamble: Generating Functions Generating functions are widely used in mathematics, and play an important role in probability theory Consider a sequence
SWISS ARMY KNIFE INDICATOR John F. Ehlers
SWISS ARMY KNIFE INDICATOR John F. Ehlers The indicator I describe in this article does all the common functions of the usual indicators, such as smoothing and momentum generation. It also does some unusual
Lock - in Amplifier and Applications
Lock - in Amplifier and Applications What is a Lock in Amplifier? In a nut shell, what a lock-in amplifier does is measure the amplitude V o of a sinusoidal voltage, V in (t) = V o cos(ω o t) where ω o
EDEXCEL NATIONAL CERTIFICATE/DIPLOMA UNIT 5 - ELECTRICAL AND ELECTRONIC PRINCIPLES NQF LEVEL 3 OUTCOME 4 - ALTERNATING CURRENT
EDEXCEL NATIONAL CERTIFICATE/DIPLOMA UNIT 5 - ELECTRICAL AND ELECTRONIC PRINCIPLES NQF LEVEL 3 OUTCOME 4 - ALTERNATING CURRENT 4 Understand single-phase alternating current (ac) theory Single phase AC
10.2 Series and Convergence
10.2 Series and Convergence Write sums using sigma notation Find the partial sums of series and determine convergence or divergence of infinite series Find the N th partial sums of geometric series and
LS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
Metric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
Frequency Response of Filters
School of Engineering Department of Electrical and Computer Engineering 332:224 Principles of Electrical Engineering II Laboratory Experiment 2 Frequency Response of Filters 1 Introduction Objectives To
MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform
MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish
Short-time FFT, Multi-taper analysis & Filtering in SPM12
Short-time FFT, Multi-taper analysis & Filtering in SPM12 Computational Psychiatry Seminar, FS 2015 Daniel Renz, Translational Neuromodeling Unit, ETHZ & UZH 20.03.2015 Overview Refresher Short-time Fourier
Design of FIR Filters
Design of FIR Filters Elena Punskaya www-sigproc.eng.cam.ac.uk/~op205 Some material adapted from courses by Prof. Simon Godsill, Dr. Arnaud Doucet, Dr. Malcolm Macleod and Prof. Peter Rayner 68 FIR as
4.5 Linear Dependence and Linear Independence
4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then
RF Measurements Using a Modular Digitizer
RF Measurements Using a Modular Digitizer Modern modular digitizers, like the Spectrum M4i series PCIe digitizers, offer greater bandwidth and higher resolution at any given bandwidth than ever before.
PHYS 331: Junior Physics Laboratory I Notes on Noise Reduction
PHYS 331: Junior Physics Laboratory I Notes on Noise Reduction When setting out to make a measurement one often finds that the signal, the quantity we want to see, is masked by noise, which is anything
1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
Continued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
Basics of Statistical Machine Learning
CS761 Spring 2013 Advanced Machine Learning Basics of Statistical Machine Learning Lecturer: Xiaojin Zhu [email protected] Modern machine learning is rooted in statistics. You will find many familiar
LAB 7 MOSFET CHARACTERISTICS AND APPLICATIONS
LAB 7 MOSFET CHARACTERISTICS AND APPLICATIONS Objective In this experiment you will study the i-v characteristics of an MOS transistor. You will use the MOSFET as a variable resistor and as a switch. BACKGROUND
The Algorithms of Speech Recognition, Programming and Simulating in MATLAB
FACULTY OF ENGINEERING AND SUSTAINABLE DEVELOPMENT. The Algorithms of Speech Recognition, Programming and Simulating in MATLAB Tingxiao Yang January 2012 Bachelor s Thesis in Electronics Bachelor s Program
SOFTWARE FOR GENERATION OF SPECTRUM COMPATIBLE TIME HISTORY
3 th World Conference on Earthquake Engineering Vancouver, B.C., Canada August -6, 24 Paper No. 296 SOFTWARE FOR GENERATION OF SPECTRUM COMPATIBLE TIME HISTORY ASHOK KUMAR SUMMARY One of the important
Maximum Likelihood Estimation
Math 541: Statistical Theory II Lecturer: Songfeng Zheng Maximum Likelihood Estimation 1 Maximum Likelihood Estimation Maximum likelihood is a relatively simple method of constructing an estimator for
Hypothesis Testing for Beginners
Hypothesis Testing for Beginners Michele Piffer LSE August, 2011 Michele Piffer (LSE) Hypothesis Testing for Beginners August, 2011 1 / 53 One year ago a friend asked me to put down some easy-to-read notes
chapter Introduction to Digital Signal Processing and Digital Filtering 1.1 Introduction 1.2 Historical Perspective
Introduction to Digital Signal Processing and Digital Filtering chapter 1 Introduction to Digital Signal Processing and Digital Filtering 1.1 Introduction Digital signal processing (DSP) refers to anything
A few words about imaginary numbers (and electronics) Mark Cohen [email protected]
A few words about imaginary numbers (and electronics) Mark Cohen mscohen@guclaedu While most of us have seen imaginary numbers in high school algebra, the topic is ordinarily taught in abstraction without
5.1 Identifying the Target Parameter
University of California, Davis Department of Statistics Summer Session II Statistics 13 August 20, 2012 Date of latest update: August 20 Lecture 5: Estimation with Confidence intervals 5.1 Identifying
NRZ Bandwidth - HF Cutoff vs. SNR
Application Note: HFAN-09.0. Rev.2; 04/08 NRZ Bandwidth - HF Cutoff vs. SNR Functional Diagrams Pin Configurations appear at end of data sheet. Functional Diagrams continued at end of data sheet. UCSP
Introduction to the Z-transform
ECE 3640 Lecture 2 Z Transforms Objective: Z-transforms are to difference equations what Laplace transforms are to differential equations. In this lecture we are introduced to Z-transforms, their inverses,
6.041/6.431 Spring 2008 Quiz 2 Wednesday, April 16, 7:30-9:30 PM. SOLUTIONS
6.4/6.43 Spring 28 Quiz 2 Wednesday, April 6, 7:3-9:3 PM. SOLUTIONS Name: Recitation Instructor: TA: 6.4/6.43: Question Part Score Out of 3 all 36 2 a 4 b 5 c 5 d 8 e 5 f 6 3 a 4 b 6 c 6 d 6 e 6 Total
Lecture 2: Universality
CS 710: Complexity Theory 1/21/2010 Lecture 2: Universality Instructor: Dieter van Melkebeek Scribe: Tyson Williams In this lecture, we introduce the notion of a universal machine, develop efficient universal
Controller Design in Frequency Domain
ECSE 4440 Control System Engineering Fall 2001 Project 3 Controller Design in Frequency Domain TA 1. Abstract 2. Introduction 3. Controller design in Frequency domain 4. Experiment 5. Colclusion 1. Abstract
T = 1 f. Phase. Measure of relative position in time within a single period of a signal For a periodic signal f(t), phase is fractional part t p
Data Transmission Concepts and terminology Transmission terminology Transmission from transmitter to receiver goes over some transmission medium using electromagnetic waves Guided media. Waves are guided
EE 179 April 21, 2014 Digital and Analog Communication Systems Handout #16 Homework #2 Solutions
EE 79 April, 04 Digital and Analog Communication Systems Handout #6 Homework # Solutions. Operations on signals (Lathi& Ding.3-3). For the signal g(t) shown below, sketch: a. g(t 4); b. g(t/.5); c. g(t
Analog and Digital Signals, Time and Frequency Representation of Signals
1 Analog and Digital Signals, Time and Frequency Representation of Signals Required reading: Garcia 3.1, 3.2 CSE 3213, Fall 2010 Instructor: N. Vlajic 2 Data vs. Signal Analog vs. Digital Analog Signals
