Chapter 7 - Correlation Functions


Let X(t) denote a random process. The autocorrelation of X is defined as

    R_X(t_1, t_2) \equiv E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2 \, f(x_1, x_2; t_1, t_2) \, dx_1 \, dx_2.    (7-1)

The autocovariance function is defined as

    C_X(t_1, t_2) \equiv E[\{X(t_1) - \eta_X(t_1)\}\{X(t_2) - \eta_X(t_2)\}] = R_X(t_1, t_2) - \eta_X(t_1)\eta_X(t_2),    (7-2)

and the correlation coefficient is defined as

    r_X(t_1, t_2) \equiv \frac{C_X(t_1, t_2)}{\sigma_X(t_1)\sigma_X(t_2)} = \frac{R_X(t_1, t_2) - \eta_X(t_1)\eta_X(t_2)}{\sigma_X(t_1)\sigma_X(t_2)}.    (7-3)

If X(t) is at least wide sense stationary (WSS), then R_X depends only on the time difference \tau = t_1 - t_2, and we write

    R_X(\tau) = E[X(t)X(t+\tau)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2 \, f(x_1, x_2; \tau) \, dx_1 \, dx_2.    (7-4)

Finally, if X(t) is ergodic, we can write

    R_X(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} X(t)X(t+\tau) \, dt.    (7-5)

The function r_X(\tau) can be thought of as a measure of the statistical similarity of X(t) and X(t+\tau). If r_X(\tau_0) = 0, the samples X(t) and X(t+\tau_0) are said to be uncorrelated.
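
Equation (7-5) also suggests a practical estimator: average the lagged product X(t)X(t+\tau) over one long record. Below is a minimal numerical sketch of this idea; the AR(1) sample path and all parameter values are assumptions made for illustration, not part of the notes.

```python
import numpy as np

# Estimate R_X(tau) via the time average (7-5), truncated to a finite record.
# Assumed example process: lowpass-filtered white noise realized as an AR(1)
# sequence with unit average power and a 1-second correlation time.
rng = np.random.default_rng(0)
dt, N = 0.01, 200_000
a = np.exp(-dt)                          # AR(1) pole, so R_X(tau) = exp(-|tau|)
x = np.zeros(N)
w = rng.standard_normal(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + np.sqrt(1 - a**2) * w[n]

def R_hat(x, k):
    """Time-average estimate of R_X(k*dt) from a single sample path."""
    return np.mean(x[:x.size - k] * x[k:])

print(R_hat(x, 0))      # average power R_X(0); close to 1
print(R_hat(x, 100))    # R_X(1 s); close to exp(-1) = 0.368
```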

Properties of Autocorrelation Functions for Real-Valued, WSS Random Processes

1. R_X(0) = E[X^2(t)] = average power.

2. R_X(\tau) = R_X(-\tau). The autocorrelation function of a real-valued, WSS process is even. Proof:

    R_X(\tau) = E[X(t)X(t+\tau)] = E[X(t-\tau)X(t)]  (due to WSS)  = R_X(-\tau).    (7-6)

3. |R_X(\tau)| \le R_X(0). The autocorrelation is maximum at the origin. Proof:

    E[\{X(t) \pm X(t+\tau)\}^2] = E[X^2(t)] \pm 2E[X(t)X(t+\tau)] + E[X^2(t+\tau)] \ge 0
    \Rightarrow 2R_X(0) \pm 2R_X(\tau) \ge 0.    (7-7)

Hence, |R_X(\tau)| \le R_X(0) as claimed.

4. Assume that WSS process X can be represented as X(t) = \eta + X_{ac}(t), where \eta is a constant and E[X_{ac}(t)] = 0. Then

    R_X(\tau) = E[\{\eta + X_{ac}(t)\}\{\eta + X_{ac}(t+\tau)\}] = E[\eta^2] + 2\eta E[X_{ac}(t)] + E[X_{ac}(t)X_{ac}(t+\tau)] = \eta^2 + R_{X_{ac}}(\tau).    (7-8)

5. If each sample function of X(t) has a periodic component of frequency \omega, then R_X(\tau) will have a periodic component of frequency \omega.

Example 7-1: Consider X(t) = A\cos(\omega t + \theta) + N(t), where A and \omega are constants, random variable \theta is uniformly distributed over (0, 2\pi), and wide sense stationary N(t) is independent of \theta for every time t. Find R_X(\tau), the autocorrelation of X(t).

    R_X(\tau) = E[\{A\cos(\omega t + \theta) + N(t)\}\{A\cos(\omega[t+\tau] + \theta) + N(t+\tau)\}]
              = \frac{A^2}{2} E[\cos(2\omega t + \omega\tau + 2\theta) + \cos(\omega\tau)]
                + E[A\cos(\omega t + \theta)N(t+\tau)] + E[N(t)A\cos(\omega[t+\tau] + \theta)] + E[N(t)N(t+\tau)]    (7-9)
              = \frac{A^2}{2}\cos(\omega\tau) + R_N(\tau),

since E[\cos(2\omega t + \omega\tau + 2\theta)] = 0 for \theta uniform over (0, 2\pi), and the two cross terms vanish because \theta is independent of N(t) and E[\cos(\omega t + \theta)] = 0. So, R_X(\tau) contains a component at \omega, the same frequency as the periodic component in X.
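
A quick ensemble (Monte Carlo) check of (7-9) is sketched below. The values of A, \omega, and the lag, and the noise model for N(t), are assumptions made for the test; the noise samples at the two times are drawn independently, so R_N(\tau) = 0 at the chosen nonzero lag.

```python
import numpy as np

# Monte-Carlo check of Example 7-1: R_X(tau) = (A^2/2)cos(omega*tau) + R_N(tau).
rng = np.random.default_rng(1)
A, omega, tau, t, M = 2.0, 2 * np.pi * 5, 0.03, 1.0, 200_000

theta = rng.uniform(0.0, 2 * np.pi, M)      # theta ~ uniform over (0, 2*pi)
n1 = rng.standard_normal(M)                 # assumed noise: N(t) and N(t+tau)
n2 = rng.standard_normal(M)                 # drawn independently, R_N(tau) = 0

x1 = A * np.cos(omega * t + theta) + n1
x2 = A * np.cos(omega * (t + tau) + theta) + n2
print(np.mean(x1 * x2))                     # ensemble estimate of R_X(tau)
print(A**2 / 2 * np.cos(omega * tau))       # theory; both approximately 1.18
```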

6. Suppose that X(t) is ergodic, has zero mean, and has no periodic components; then

    \lim_{|\tau| \to \infty} R_X(\tau) = 0.    (7-10)

That is, X(t) and X(t+\tau) become uncorrelated for large \tau.

7. Autocorrelation functions cannot have an arbitrary shape. As will be discussed in Chapter 8, for a WSS random process X(t) with autocorrelation R_X(\tau), the Fourier transform of R_X(\tau) is the power density spectrum (or simply power spectrum) of the random process X. And, the power spectrum must be non-negative. Hence, we have the additional requirement that

    \mathcal{F}[R_X(\tau)] = \int_{-\infty}^{\infty} R_X(\tau) e^{-j\omega\tau} \, d\tau = \int_{-\infty}^{\infty} R_X(\tau)\cos(\omega\tau) \, d\tau \ge 0    (7-11)

for all \omega (the even nature of R_X was used to obtain the right-hand side of (7-11)). Because of this, in applications, you will not find autocorrelation functions with flat tops, vertical sides, or any jump discontinuities in amplitude (these features cause oscillatory behavior, and negative values, in the Fourier transform). Autocorrelation R_X(\tau) must vary smoothly with \tau.

Example 7-2: Random Binary Waveform

Process X(t) takes on only two values: \pm A. Every t_a seconds a sample function of X either toggles value or it remains the same (positive constant t_a is known). Both possibilities are equally likely (i.e., P[toggle] = P[no toggle] = 1/2). The possible transitions occur at times t_0 + k t_a, where k is an integer, -\infty < k < \infty. Time t_0 is a random variable that is uniformly distributed over [0, t_a]. Hence, given an arbitrary sample function from the ensemble, a toggle can occur at any time. Starting from t = t_0, sample functions are constant over intervals of length t_a, and the constant can change sign from one t_a-interval to the next. The value of X(t) over one "t_a-interval" is independent of its value over any other "t_a-interval". Figure 7-1 depicts a typical sample function of the random binary waveform. Figure 7-2 is a timing diagram that illustrates the "t_a-intervals". The algorithm used to generate the process does not change with time; as a result, it is possible to argue that the process is stationary. Also, since +A and -A are equally likely values for X at any time t, it is obvious that X(t) has zero mean.

Fig. 7-1: Sample function of a simple binary random process.

5 " t a Intervals " t 0 -t a 0 t 0 t 0 +t a t 0 +t a t 0 +3t a t 0 +4t a t 0 +5t a Fig. 7-: Time line illustrating the independent "t a intervals". To determine the autocorrelation function R X (), we must consider two basic cases. 1) Case > t a. Then, the times t 1 and t 1 + cannot be in the same "t a interval". Hence, X(t 1 ) and X(t 1 + ) must be independent so that R( ) E[X(t 1)X(t 1)] E[X(t 1)]E[X(t 1)] 0, t a. (7-1) ) Case < t a. To calculate R() for this case, we must first determine an expression for the probability P[t 1 and t 1 + in the same t a interval ]. We do this in two parts: the first part is i) 0 < < t a, and the second part is ii) -t a < 0. i) 0 < < t a. Times t 1 and t 1 + may, or may not, be in the same "t a -interval". However, we write P[t and t in same "t interval"] P[t t t <t t ] 1 1 a a P[t t t t ] 1 a [t 1 (t 1 t a )] t a (7-13) t a ta, 0 < < t a ii) -t a < 0. Times t 1 and t 1 + may, or may not, be in the same "t a -interval". However, we write Updates at 7-5

ii) -t_a < \tau \le 0. Times t_1 and t_1+\tau may, or may not, be in the same "t_a-interval". In this case, the unique possible transition time in (t_1 + \tau, t_1 + \tau + t_a] is uniformly distributed over that interval, and the two times are in the same interval if and only if the transition time exceeds t_1. Hence, we write

    P[t_1 and t_1+\tau in same "t_a-interval"] = P[t_1 < t_0 + k t_a \le t_1 + \tau + t_a]
        = \frac{(t_1 + \tau + t_a) - t_1}{t_a} = \frac{t_a + \tau}{t_a},  -t_a < \tau \le 0.    (7-14)

Combining (7-13) and (7-14), we can write

    P[t_1 and t_1+\tau in same "t_a-interval"] = \frac{t_a - |\tau|}{t_a},  |\tau| < t_a.    (7-15)

Now, the product X(t_1)X(t_1+\tau) takes on only two values, \pm A^2. If t_1 and t_1+\tau are in the same "t_a-interval", then X(t_1)X(t_1+\tau) = A^2. If t_1 and t_1+\tau are in different "t_a-intervals", then X(t_1) and X(t_1+\tau) are independent, and X(t_1)X(t_1+\tau) = \pm A^2 equally likely. For |\tau| < t_a we can write

    R(\tau) = E[X(t_1)X(t_1+\tau)]
            = A^2 \, P[t_1 and t_1+\tau in same "t_a-interval"]
            + A^2 \, P[\{t_1 and t_1+\tau in different "t_a-intervals"\}, X(t_1)X(t_1+\tau) = A^2]
            - A^2 \, P[\{t_1 and t_1+\tau in different "t_a-intervals"\}, X(t_1)X(t_1+\tau) = -A^2].    (7-16)

However, the last two terms on the right-hand side of (7-16) cancel out (read again the two sentences after (7-15)). Hence, we can write

    R(\tau) = A^2 \, P[t_1 and t_1+\tau in same "t_a-interval"].    (7-17)

Fig. 7-3: Autocorrelation of the random binary waveform.

Finally, substitute (7-15) into (7-17) to obtain

    R(\tau) = E[X(t_1)X(t_1+\tau)] = A^2 \, \frac{t_a - |\tau|}{t_a},  |\tau| < t_a
            = 0,  |\tau| > t_a.    (7-18)

Equation (7-18) provides a formula for R(\tau) for the random binary signal described by Figure 7-1. A plot of this formula for R(\tau) is given by Figure 7-3.
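
The triangle (7-18) is easy to confirm by simulating the process on a fine time grid and forming time-average lagged products. In the sketch below, the values of A, t_a, the grid spacing, and the record length are assumptions made for the test.

```python
import numpy as np

# Simulate the random binary waveform of Example 7-2 and compare the
# time-average autocorrelation estimate with the triangle (7-18).
rng = np.random.default_rng(2)
A, t_a, dt, n_int = 1.0, 1.0, 0.01, 20_000
n_per = int(t_a / dt)                       # samples per t_a-interval

# Independent, equally likely +/-A levels on successive t_a-intervals, with a
# random offset t_0 ~ U[0, t_a) realized by dropping a random number of
# leading samples.
levels = A * rng.choice([-1.0, 1.0], size=n_int)
x = np.repeat(levels, n_per)[rng.integers(0, n_per):]

for lag in (0, 50, 100, 150):               # tau = 0, t_a/2, t_a, 3*t_a/2
    R_est = np.mean(x[:x.size - lag] * x[lag:])
    R_thy = A**2 * max(t_a - lag * dt, 0.0) / t_a
    print(f"tau = {lag * dt:4.2f}: estimate {R_est:+.4f}, theory {R_thy:+.4f}")
```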

Poisson Random Points Review

The topic of random Poisson points is discussed in Chapters 1 and 2 and in Appendix 9B. Let n(t_1, t_2) denote the number of Poisson points in the time interval (t_1, t_2). Then, these points are distributed in a Poisson manner with

    P[n(t_1, t_2) = k] = e^{-\lambda\tau} \frac{(\lambda\tau)^k}{k!},    (7-19)

where \tau \equiv |t_2 - t_1|, and \lambda > 0 is a known parameter. That is, n(t_1, t_2) is Poisson distributed with parameter \lambda\tau. Note that n(t_1, t_2) is an integer-valued random variable with

    E[n(t_1, t_2)] = \lambda|t_2 - t_1|
    VAR[n(t_1, t_2)] = \lambda|t_2 - t_1|
    E[n^2(t_1, t_2)] = VAR[n(t_1, t_2)] + (E[n(t_1, t_2)])^2 = \lambda|t_2 - t_1| + \lambda^2|t_2 - t_1|^2.

Note that E[n(t_1, t_2)] and VAR[n(t_1, t_2)] are the same, an unusual result for random quantities. If (t_1, t_2) and (t_3, t_4) are non-overlapping, then the random variables n(t_1, t_2) and n(t_3, t_4) are independent. Finally, constant \lambda is the average point density. That is, \lambda represents the average number of points in a unit-length interval.

Poisson Random Process

Define the Poisson random process

    X(t) = n(0, t),  t > 0
         = -n(t, 0),  t \le 0.    (7-21)

A typical sample function is illustrated by Figure 7-4.

Mean of Poisson Process

For any fixed t \ge 0, X(t) is a Poisson random variable with parameter \lambda t. Hence,

    E[X(t)] = \lambda t,  t \ge 0.    (7-22)

The time-varying nature of the mean implies that process X(t) is nonstationary.

Fig. 7-4: Typical sample function of the Poisson random process.
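
The equality of mean and variance is simple to verify numerically; in the sketch below, the rate \lambda and the test times are assumed values.

```python
import numpy as np

# Ensemble check for the Poisson process X(t) = n(0, t), t > 0:
# E[X(t)] = VAR[X(t)] = lambda * t, Eq. (7-22).
rng = np.random.default_rng(3)
lam, M = 4.0, 100_000
t = np.array([0.5, 1.0, 2.0])                # test times (assumed)

X = rng.poisson(lam * t, size=(M, t.size))   # X(t) ~ Poisson(lambda * t)
print(X.mean(axis=0))                        # approximately lam*t = [2, 4, 8]
print(X.var(axis=0))                         # also approximately lam*t
```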

Autocorrelation of Poisson Process

The autocorrelation is defined as R(t_1, t_2) = E[X(t_1)X(t_2)] for t_1 \ge 0 and t_2 \ge 0. First, note that

    R(t, t) = E[X^2(t)] = \lambda|t| + \lambda^2 t^2,  -\infty < t < \infty,    (7-23)

a result obtained from the known 2nd moment of a Poisson random variable. Next, we show that

    R(t_1, t_2) = \lambda t_1 + \lambda^2 t_1 t_2  for 0 \le t_1 \le t_2
                = \lambda t_2 + \lambda^2 t_1 t_2  for 0 \le t_2 \le t_1.    (7-24)

Proof: We consider the case 0 < t_1 < t_2. The random variables X(t_1) and \{X(t_2) - X(t_1)\} are independent since they count points in non-overlapping time intervals. Also, X(t_1) has mean \lambda t_1, and \{X(t_2) - X(t_1)\} has mean \lambda(t_2 - t_1). As a result,

    E[X(t_1)\{X(t_2) - X(t_1)\}] = E[X(t_1)] \, E[X(t_2) - X(t_1)] = \lambda t_1 \cdot \lambda(t_2 - t_1).    (7-25)

Use this result to obtain

    R(t_1, t_2) = E[X(t_1)X(t_2)] = E[X(t_1)\{X(t_1) + X(t_2) - X(t_1)\}]
                = E[X^2(t_1)] + E[X(t_1)\{X(t_2) - X(t_1)\}]
                = \lambda t_1 + \lambda^2 t_1^2 + \lambda t_1 \lambda(t_2 - t_1)
                = \lambda t_1 + \lambda^2 t_1 t_2,  0 < t_1 \le t_2.    (7-26)

The case 0 < t_2 < t_1 is similar to the case shown above. Hence, for the Poisson process, the autocorrelation function is

    R(t_1, t_2) = \lambda t_1 + \lambda^2 t_1 t_2  for 0 \le t_1 \le t_2
                = \lambda t_2 + \lambda^2 t_1 t_2  for 0 \le t_2 \le t_1.    (7-27)
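
Because X(t_2) = X(t_1) + n(t_1, t_2) with an independent Poisson increment, (7-27) is straightforward to verify by Monte Carlo; the values of \lambda, t_1, and t_2 below are assumed.

```python
import numpy as np

# Monte-Carlo check of (7-27): R(t1, t2) = lam*t1 + lam^2*t1*t2 for 0 < t1 <= t2.
rng = np.random.default_rng(4)
lam, t1, t2, M = 3.0, 0.7, 1.5, 500_000

X1 = rng.poisson(lam * t1, M)               # X(t1) = n(0, t1)
X2 = X1 + rng.poisson(lam * (t2 - t1), M)   # add the independent increment n(t1, t2)
print(np.mean(X1 * X2))                     # Monte-Carlo estimate of R(t1, t2)
print(lam * t1 + lam**2 * t1 * t2)          # theory: 11.55
```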

Semi-Random Telegraph Signal

The semi-random telegraph signal is defined as

    X(0) = 1
    X(t) = 1,  if the number of Poisson points in (0, t) is even
         = -1,  if the number of Poisson points in (0, t) is odd,    (7-28)

for -\infty < t < \infty. Figure 7-5 depicts a typical sample function of this process. In what follows, we find the mean and autocorrelation of the semi-random telegraph signal. First, note that

    P[X(t) = 1] = P[even number of points in (0, t)]
                = P[0 points in (0, t)] + P[2 points in (0, t)] + P[4 points in (0, t)] + \cdots
                = e^{-\lambda|t|}\left[1 + \frac{(\lambda t)^2}{2!} + \frac{(\lambda t)^4}{4!} + \cdots\right]
                = e^{-\lambda|t|}\cosh(\lambda t).    (7-29)

Fig. 7-5: A typical sample function of the semi-random telegraph signal.

Note that (7-29) is valid for t < 0 since it uses |t|. In a similar manner, we can write

    P[X(t) = -1] = P[odd number of points in (0, t)]
                 = P[1 point in (0, t)] + P[3 points in (0, t)] + P[5 points in (0, t)] + \cdots
                 = e^{-\lambda|t|}\left[\lambda|t| + \frac{(\lambda|t|)^3}{3!} + \frac{(\lambda|t|)^5}{5!} + \cdots\right]
                 = e^{-\lambda|t|}\sinh(\lambda|t|).    (7-30)

As a result of (7-29) and (7-30), the mean is

    E[X(t)] = 1 \cdot P[X(t) = 1] - 1 \cdot P[X(t) = -1]
            = e^{-\lambda|t|}\left[\cosh(\lambda|t|) - \sinh(\lambda|t|)\right]
            = e^{-2\lambda|t|}.    (7-31)

The constraint X(0) = 1 causes a nonzero mean that dies out with time. Note that X(t) is not WSS since its mean is time varying.

Now, find the autocorrelation R(t_1, t_2). First, suppose that \tau \equiv t_1 - t_2 > 0, and -\infty < t_2 < \infty. If there is an even number of points in (t_2, t_1), then X(t_1) and X(t_2) have the same sign, and

    P[X(t_1) = 1, X(t_2) = 1] = P[X(t_1) = 1 \mid X(t_2) = 1] \, P[X(t_2) = 1]
        = \{e^{-\lambda\tau}\cosh(\lambda\tau)\}\{e^{-\lambda|t_2|}\cosh(\lambda|t_2|)\}    (7-32)

    P[X(t_1) = -1, X(t_2) = -1] = P[X(t_1) = -1 \mid X(t_2) = -1] \, P[X(t_2) = -1]
        = \{e^{-\lambda\tau}\cosh(\lambda\tau)\}\{e^{-\lambda|t_2|}\sinh(\lambda|t_2|)\}    (7-33)

for \tau = t_1 - t_2 > 0, and -\infty < t_2 < \infty.

If there is an odd number of points in (t_2, t_1), then X(t_1) and X(t_2) have different signs, and we have

    P[X(t_1) = 1, X(t_2) = -1] = P[X(t_1) = 1 \mid X(t_2) = -1] \, P[X(t_2) = -1]
        = \{e^{-\lambda\tau}\sinh(\lambda\tau)\}\{e^{-\lambda|t_2|}\sinh(\lambda|t_2|)\}    (7-34)

    P[X(t_1) = -1, X(t_2) = 1] = P[X(t_1) = -1 \mid X(t_2) = 1] \, P[X(t_2) = 1]
        = \{e^{-\lambda\tau}\sinh(\lambda\tau)\}\{e^{-\lambda|t_2|}\cosh(\lambda|t_2|)\}    (7-35)

for \tau = t_1 - t_2 > 0, and -\infty < t_2 < \infty. The product X(t_1)X(t_2) is +1 with probability given by the sum of (7-32) and (7-33); it is -1 with probability given by the sum of (7-34) and (7-35). Hence, its expected value can be expressed as

    R(t_1, t_2) = E[X(t_1)X(t_2)]
                = e^{-\lambda\tau}\cosh(\lambda\tau) \, e^{-\lambda|t_2|}\{\cosh(\lambda|t_2|) + \sinh(\lambda|t_2|)\}
                - e^{-\lambda\tau}\sinh(\lambda\tau) \, e^{-\lambda|t_2|}\{\cosh(\lambda|t_2|) + \sinh(\lambda|t_2|)\}.    (7-36)

Using the identities \cosh x - \sinh x = e^{-x} and \cosh x + \sinh x = e^{x}, this result can be simplified to produce

    R(t_1, t_2) = e^{-\lambda\tau}\left[\cosh(\lambda\tau) - \sinh(\lambda\tau)\right] e^{-\lambda|t_2|}\left[\cosh(\lambda|t_2|) + \sinh(\lambda|t_2|)\right]
                = e^{-\lambda\tau} e^{-\lambda\tau} e^{-\lambda|t_2|} e^{\lambda|t_2|}
                = e^{-2\lambda\tau},  \tau = t_1 - t_2 > 0.    (7-37)

Due to symmetry (the autocorrelation function must be even in \tau), we can conclude that

    R(t_1, t_2) = R(\tau) = e^{-2\lambda|\tau|},  \tau = t_1 - t_2,    (7-38)

is the autocorrelation function of the semi-random telegraph signal, a result illustrated by Fig. 7-6. Again, note that the semi-random telegraph signal is not WSS since it has a time-varying mean.

Random Telegraph Signal

Let X(t) denote the semi-random telegraph signal discussed above. Consider the process Y(t) = \alpha X(t), where \alpha is a random variable that is independent of X(t) for all time. Furthermore, assume that \alpha takes on only the two values +1 and -1, equally likely. Then the mean of Y is E[Y] = E[\alpha X] = E[\alpha]E[X] = 0 for all time. Also, R_Y(\tau) = E[\alpha^2] R_X(\tau) = R_X(\tau), a result depicted by Figure 7-6. Y is called the random telegraph signal since it is entirely random for all time t. Note that the random telegraph signal is WSS.

Fig. 7-6: Autocorrelation function R(\tau) = \exp(-2\lambda|\tau|) for both the semi-random and random telegraph signals.
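
Equation (7-38) can be checked by sampling the point counts directly: with X(0) = 1, X(t) = (-1)^{n(0,t)}, and X(t+\tau) flips the sign of X(t) once for every point in (t, t+\tau]. The values of \lambda, t, and \tau below are assumed for the test.

```python
import numpy as np

# Ensemble check that the telegraph signal has R(tau) = exp(-2*lam*|tau|),
# independent of the absolute time t at which the lagged product is formed.
rng = np.random.default_rng(5)
lam, t, tau, M = 1.0, 2.0, 0.4, 400_000

n_t = rng.poisson(lam * t, M)               # Poisson points in (0, t]
n_inc = rng.poisson(lam * tau, M)           # points in (t, t+tau], independent
x_t = (-1.0) ** n_t                         # X(t), with X(0) = 1
x_tt = x_t * (-1.0) ** n_inc                # X(t + tau)
print(np.mean(x_t * x_tt))                  # Monte-Carlo estimate of R(t, t+tau)
print(np.exp(-2 * lam * tau))               # theory: exp(-0.8) = 0.449
```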

Autocorrelation of Wiener Process

Consider the Wiener process that was introduced in Chapter 6. If we assume that X(0) = 0 (in many textbooks, this is part of the definition of a Wiener process), then the autocorrelation of the Wiener process is R_X(t_1, t_2) = 2D\min(t_1, t_2). To see this, first recall that a Wiener process has independent increments. That is, if (t_1, t_2) and (t_3, t_4) are non-overlapping intervals (i.e., 0 \le t_1 < t_2 \le t_3 < t_4), then increment X(t_2) - X(t_1) is statistically independent of increment X(t_4) - X(t_3). Now, consider the case t_1 \ge t_2 \ge 0 and write

    R_X(t_1, t_2) = E[X(t_1)X(t_2)] = E[\{X(t_1) - X(t_2) + X(t_2)\}X(t_2)]
                  = E[\{X(t_1) - X(t_2)\}X(t_2)] + E[X^2(t_2)]
                  = 0 + 2Dt_2,    (7-39)

where the first term vanishes because the zero-mean increments X(t_1) - X(t_2) and X(t_2) - X(0) are independent. By symmetry, we can conclude that

    R_X(t_1, t_2) = 2D\min(t_1, t_2),  t_1, t_2 \ge 0,    (7-40)

for the Wiener process X(t), t \ge 0, with X(0) = 0.
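
A two-sample Monte Carlo test of (7-40) is sketched below; it generates X(t_1) and the independent increment X(t_2) - X(t_1) as zero-mean Gaussians with the variance convention E[X^2(t)] = 2Dt used above. The values of D, t_1, and t_2 are assumed.

```python
import numpy as np

# Monte-Carlo check of R_X(t1, t2) = 2*D*min(t1, t2) for a Wiener process
# with X(0) = 0 and E[X(t)^2] = 2*D*t.
rng = np.random.default_rng(6)
D, t1, t2, M = 0.5, 0.6, 1.4, 400_000

X1 = rng.normal(0.0, np.sqrt(2 * D * t1), M)              # X(t1) ~ N(0, 2*D*t1)
X2 = X1 + rng.normal(0.0, np.sqrt(2 * D * (t2 - t1)), M)  # independent increment
print(np.mean(X1 * X2))          # Monte-Carlo estimate of R_X(t1, t2)
print(2 * D * min(t1, t2))       # theory: 0.6
```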

Correlation Time

Let X(t) be a zero-mean (i.e., E[X(t)] = 0) WSS random process. The correlation time of X(t) is defined as

    \tau_x \equiv \frac{1}{R_X(0)} \int_0^{\infty} R_X(\tau) \, d\tau.    (7-41)

Intuitively, time \tau_x gives some measure of the time interval over which significant correlation exists between two samples of process X(t). For example, consider the random telegraph signal described above. For this process the correlation time is

    \tau_x = \int_0^{\infty} e^{-2\lambda\tau} \, d\tau = \frac{1}{2\lambda}.    (7-42)

In Chapter 8, we will relate correlation time to the spectral bandwidth (to be defined in Chapter 8) of a WSS process.

Crosscorrelation Functions

Let X(t) and Y(t) denote real-valued random processes. The crosscorrelation of X and Y is defined as

    R_{XY}(t_1, t_2) \equiv E[X(t_1)Y(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy \, f(x, y; t_1, t_2) \, dx \, dy.    (7-43)

The crosscovariance function is defined as

    C_{XY}(t_1, t_2) \equiv E[\{X(t_1) - \eta_X(t_1)\}\{Y(t_2) - \eta_Y(t_2)\}] = R_{XY}(t_1, t_2) - \eta_X(t_1)\eta_Y(t_2).    (7-44)

Let X(t) and Y(t) be WSS random processes. Then X(t) and Y(t) are said to be jointly stationary in the wide sense if R_{XY}(t_1, t_2) = R_{XY}(\tau), \tau = t_1 - t_2. For jointly wide sense stationary processes, the crosscorrelation is

    R_{XY}(\tau) = E[X(t+\tau)Y(t)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy \, f_{XY}(x, y; \tau) \, dx \, dy.    (7-45)

Warning: Some authors define R_{XY}(\tau) = E[X(t)Y(t+\tau)]; in the literature, there is no universal agreement over which function is shifted in the definition of R_{XY}. For R_{XY}, the order of the subscripts is significant! In general, R_{XY} \ne R_{YX}.

For jointly stationary random processes X and Y, we show some elementary properties of the crosscorrelation function.

1. R_{YX}(\tau) = R_{XY}(-\tau). To see this, note that

    R_{YX}(\tau) = E[Y(t+\tau)X(t)] = E[Y(t)X(t-\tau)] = R_{XY}(-\tau).    (7-46)

2. R_{XY}(\tau) does not necessarily have its maximum at \tau = 0; the maximum can occur anywhere. However, we can say that

    |R_{XY}(\tau)| \le \tfrac{1}{2}\left[R_X(0) + R_Y(0)\right].    (7-47)

To see this, note that

    E[\{X(t+\tau) \pm Y(t)\}^2] = E[X^2(t+\tau)] \pm 2E[X(t+\tau)Y(t)] + E[Y^2(t)] \ge 0
    \Rightarrow R_X(0) \pm 2R_{XY}(\tau) + R_Y(0) \ge 0.    (7-48)

Hence, we have |R_{XY}(\tau)| \le \tfrac{1}{2}[R_X(0) + R_Y(0)] as claimed.

Linear, Time-Invariant Systems: Expected Value of the Output

Consider a linear, time-invariant system with impulse response h(t). Given input X(t), the output Y(t) can be computed as

    Y(t) = L[X(t)] = \int_{-\infty}^{\infty} X(\alpha) h(t - \alpha) \, d\alpha.    (7-49)

The notation L[\cdot] denotes a linear operator (the convolution operator in this case). As given by (7-49), output Y(t) depends only on input X(t); initial conditions play no role here (assume that all initial conditions are zero).

Convolution and expectation are both integral operators. In applications that employ these operations, it is assumed that we can interchange the order of convolution and expectation. Hence, we can write

    E[Y(t)] = E[L[X(t)]] = E\left[\int_{-\infty}^{\infty} X(\alpha) h(t-\alpha) \, d\alpha\right]
            = \int_{-\infty}^{\infty} E[X(\alpha)] h(t-\alpha) \, d\alpha
            = \int_{-\infty}^{\infty} \eta_X(\alpha) h(t-\alpha) \, d\alpha.    (7-50)

More generally, in applications, it is assumed that we can interchange the operations of expectation and integration, so that

    E\left[\int \cdots \int f(t_1, \ldots, t_n) \, dt_1 \cdots dt_n\right] = \int \cdots \int E[f(t_1, \ldots, t_n)] \, dt_1 \cdots dt_n,    (7-51)

for example (f is a random function involving variables t_1, \ldots, t_n). As a special case, assume that input X(t) is wide sense stationary with mean \eta_x. Equation (7-50) leads to

    \eta_Y = E[Y(t)] = \eta_x \int_{-\infty}^{\infty} h(t-\alpha) \, d\alpha = \eta_x \int_{-\infty}^{\infty} h(\alpha) \, d\alpha = \eta_x H(0),    (7-52)

where H(0) is the DC response (i.e., DC gain) of the system.
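
For instance, for the one-pole impulse response h(t) = e^{-ct}U(t) used in the examples below, H(0) = 1/c, so (7-52) gives \eta_Y = \eta_x/c. A short numerical sketch (the values of c and \eta_x are assumed):

```python
import numpy as np

# Check eta_Y = eta_x * H(0), Eq. (7-52), for h(t) = exp(-c*t)U(t): H(0) = 1/c.
c, eta_x, dt = 2.0, 3.0, 1e-4
t = np.arange(0.0, 10.0, dt)
h = np.exp(-c * t)                 # samples of the impulse response
H0 = np.sum(h) * dt                # numerical DC gain; approximately 1/c = 0.5
print(eta_x * H0)                  # eta_Y; approximately eta_x / c = 1.5
```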

Linear, Time-Invariant Systems: Input/Output Cross Correlation

Let R_X(t_1, t_2) denote the autocorrelation of input random process X(t). We desire to find R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)], the crosscorrelation between input X(t) and output Y(t) of a linear, time-invariant system.

Theorem 7-1

The crosscorrelation between input X(t) and output Y(t) can be calculated as (both X and Y are assumed to be real-valued)

    R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)] = L_2[R_X(t_1, t_2)] = \int_{-\infty}^{\infty} R_X(t_1, t_2 - \alpha) h(\alpha) \, d\alpha.    (7-53)

Notation: L_2[\cdot] means operate on the t_2 variable (the second variable) and treat t_1 (the first variable) as a fixed parameter.

Proof: In (7-53), the convolution involves folding and shifting in the t_2 slot, so we write

    Y(t_2) = \int_{-\infty}^{\infty} X(t_2 - \alpha) h(\alpha) \, d\alpha
    X(t_1)Y(t_2) = \int_{-\infty}^{\infty} X(t_1) X(t_2 - \alpha) h(\alpha) \, d\alpha,    (7-54)

a result that can be used to derive

    R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)] = \int_{-\infty}^{\infty} E[X(t_1)X(t_2 - \alpha)] h(\alpha) \, d\alpha
                    = \int_{-\infty}^{\infty} R_X(t_1, t_2 - \alpha) h(\alpha) \, d\alpha = L_2[R_X(t_1, t_2)].    (7-55)

Special Case: Theorem 7-1 for WSS X(t)

Suppose that input process X(t) is WSS. Let \tau = t_1 - t_2 and write (7-55) as

    R_{XY}(\tau) = \int_{-\infty}^{\infty} R_X(\tau + \alpha) h(\alpha) \, d\alpha = R_X(\tau) \star h(-\tau).    (7-56)

Note that X(t) and Y(t) are jointly wide sense stationary.

Theorem 7-2

The autocorrelation R_Y(t_1, t_2) can be obtained from the crosscorrelation R_{XY}(t_1, t_2) by the formula

    R_Y(t_1, t_2) = L_1[R_{XY}(t_1, t_2)] = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2) h(\alpha) \, d\alpha.    (7-57)

Notation: L_1[\cdot] means operate on the t_1 variable (the first variable) and treat t_2 (the second variable) as a fixed parameter.

Proof: In (7-57), the convolution involves folding and shifting in the t_1 slot, so we write

    Y(t_1) = \int_{-\infty}^{\infty} X(t_1 - \alpha) h(\alpha) \, d\alpha
    Y(t_1)Y(t_2) = \int_{-\infty}^{\infty} X(t_1 - \alpha) Y(t_2) h(\alpha) \, d\alpha.    (7-58)

Now, take the expected value of (7-58) to obtain

    R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)] = \int_{-\infty}^{\infty} E[X(t_1 - \alpha)Y(t_2)] h(\alpha) \, d\alpha
                 = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2) h(\alpha) \, d\alpha = L_1[R_{XY}(t_1, t_2)],    (7-59)

a result that completes the proof of the theorem.

Special Case: Theorem 7-2 for WSS X(t)

Suppose that input process X(t) is WSS. Then, as seen by (7-56), X and Y are jointly WSS, so that (7-59) becomes

    R_Y(t_1, t_2) = \int_{-\infty}^{\infty} R_{XY}(t_1 - t_2 - \alpha) h(\alpha) \, d\alpha.    (7-60)

Let \tau = t_1 - t_2 and write (7-60) as

    R_Y(\tau) = R_{XY}(\tau) \star h(\tau).    (7-61)

Theorems 7-1 and 7-2 can be combined into a single formula for finding R_Y(t_1, t_2) in terms of R_X(t_1, t_2). Simply substitute (7-55) into (7-59) to obtain

    R_Y(t_1, t_2) = \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} R_X(t_1 - \beta, t_2 - \alpha) h(\alpha) \, d\alpha \right] h(\beta) \, d\beta
                 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_X(t_1 - \beta, t_2 - \alpha) h(\alpha) h(\beta) \, d\alpha \, d\beta,    (7-62)

an important "double convolution" formula for R_Y(t_1, t_2) in terms of R_X(t_1, t_2). For WSS input X(t), the output autocorrelation can be written as

    R_Y(\tau) = R_{XY}(\tau) \star h(\tau) = R_X(\tau) \star \{h(\tau) \star h(-\tau)\},    (7-63)

a result obtained by substituting (7-56) into (7-61). Figure 7-7 illustrates this fundamental result. Note that a WSS input X(t) produces a WSS output Y(t).

Fig. 7-7: Output autocorrelation in terms of input autocorrelation for the WSS input case: R_Y(\tau) = [h(\tau) \star h(-\tau)] \star R_X(\tau).

Example 7-3

A zero-mean, stationary process X(t) with autocorrelation R_X(\tau) = q\delta(\tau) (white noise) is applied to a linear system with impulse response h(t) = e^{-ct}U(t), c > 0. Find R_{XY}(\tau) and R_Y(\tau) for this system. Since X is WSS, we know that

    R_{XY}(\tau) = E[X(t+\tau)Y(t)] = R_X(\tau) \star h(-\tau) = q\delta(\tau) \star e^{c\tau}U(-\tau) = q e^{c\tau} U(-\tau).    (7-64)

Note that X and Y are jointly WSS. That R_{XY}(\tau) = 0 for \tau > 0 should be intuitive since X(t) is a white noise process. Now, the autocorrelation of Y can be computed as

    R_Y(\tau) = E[Y(t+\tau)Y(t)] = R_X(\tau) \star [h(\tau) \star h(-\tau)] = [R_X(\tau) \star h(-\tau)] \star h(\tau)
             = R_{XY}(\tau) \star h(\tau) = \{q e^{c\tau} U(-\tau)\} \star \{e^{-c\tau} U(\tau)\}
             = \frac{q}{2c} e^{-c|\tau|},  -\infty < \tau < \infty.    (7-65)

Note that output Y(t) is not white noise; samples of Y(t) are correlated with each other. Basically, system h(t) filtered the white-noise input X(t) to produce an output Y(t) that is correlated. As we will see in Chapter 8, input X(t) is modeled as having an infinite bandwidth; the system bandlimited its input to form an output Y(t) that has a finite bandwidth.
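
Example 7-3 can be reproduced numerically by driving a discretized one-pole filter with band-limited "white" noise (samples of variance q/dt). The sketch below is an approximation under assumed parameter values; the estimates approach (7-65) as the step dt shrinks and the record grows.

```python
import numpy as np

# Discrete-time approximation of Example 7-3: white noise with R_X(tau) =
# q*delta(tau) through h(t) = exp(-c*t)U(t); compare with (q/2c)exp(-c|tau|).
rng = np.random.default_rng(7)
q, c, dt, N = 1.0, 2.0, 1e-3, 1_000_000

x = rng.standard_normal(N) * np.sqrt(q / dt)   # white-noise approximation
y = np.zeros(N)
for n in range(1, N):                          # forward Euler for y' = -c*y + x
    y[n] = y[n - 1] + dt * (-c * y[n - 1] + x[n - 1])
y = y[N // 2:]                                 # discard the start-up transient

k = int(0.5 / dt)                              # lag tau = 0.5 s
print(np.mean(y * y))                          # R_Y(0); theory q/(2c) = 0.25
print(np.mean(y[:-k] * y[k:]))                 # R_Y(0.5); theory 0.25*exp(-1) = 0.092
```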

Example 7-4 (from Papoulis, 3rd Ed.)

A zero-mean, stationary process X(t) with autocorrelation R_X(\tau) = q\delta(\tau) (white noise) is applied at t = 0 to a linear system with impulse response h(t) = e^{-ct}U(t), c > 0. See Figure 7-8. Assume that the system is at rest initially (the initial conditions are zero, so that Y(t) = 0, t < 0). Find R_{XY} and R_Y for this system.

In a subtle way, this problem differs from the previous example. Since the input is applied at t = 0, the system sees a nonstationary input. Hence, we must analyze the general, nonstationary case. As shown below, processes X(t) and Y(t) are not jointly wide sense stationary, and Y(t) is not wide sense stationary. Also, it should be obvious that R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)] = 0 for t_1 < 0 or t_2 < 0 (note how this differs from Example 7-3).

Figure 7-8: System with the random input applied (switch closed) at t = 0; h(t) = e^{-ct}U(t).

R_{XY}(t_1, t_2) equals the response of the system to R_X(t_1 - t_2) = q\delta(t_1 - t_2) when t_1 is held fixed and t_2 is the independent variable (the impulsive input occurs at t_2 = t_1). For t_1 \ge 0 and t_2 \ge 0, we can write

    R_{XY}(t_1, t_2) = L_2[R_X(t_1, t_2)] = \int_{-\infty}^{\infty} q\delta(t_1 - [t_2 - \alpha]) h(\alpha) \, d\alpha = q \, h(t_2 - t_1)
                    = q \exp[-c(t_2 - t_1)] U(t_2 - t_1),  t_1 \ge 0 and t_2 \ge 0
                    = 0,  t_1 < 0 or t_2 < 0,    (7-66)

a result that is illustrated by Figure 7-9. For t_2 < t_1, output Y(t_2) is uncorrelated with input X(t_1), as expected (this should be intuitive: the white-noise sample X(t_1) has not yet entered the system at time t_2). Also, for (t_2 - t_1) > 5/c we can assume that X(t_1) and Y(t_2) are essentially uncorrelated, since the crosscorrelation (7-66) has decayed to a negligible value. Finally, note that X(t) and Y(t) are not jointly wide sense stationary since R_{XY} depends on absolute t_1 and t_2 (and not only on the difference \tau = t_1 - t_2).

Figure 7-9: Plot of (7-66), the crosscorrelation between input X and output Y, as a function of t_2 for fixed t_1 > 0 (the peak value q occurs at t_2 = t_1).

Now, find the autocorrelation of the output; there are two cases. The first case is t_2 > t_1 > 0, for which we can write

    R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)] = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2) h(\alpha) \, d\alpha
                 = \int_0^{t_1} q \, e^{-c(t_2 - [t_1 - \alpha])} e^{-c\alpha} U(t_2 - [t_1 - \alpha]) \, d\alpha
                 = \frac{q}{2c}\left(1 - e^{-2ct_1}\right) e^{-c(t_2 - t_1)},  t_2 > t_1 > 0.    (7-67)

Note that the requirements 1) h(\alpha) = 0 for \alpha < 0, and 2) t_1 - \alpha \ge 0 were used to set the limits of integration in (7-67). The second case is t_1 > t_2 > 0, for which we can write

    R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)] = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2) h(\alpha) \, d\alpha
                 = \int_0^{t_1} q \, e^{-c(t_2 - [t_1 - \alpha])} e^{-c\alpha} U(t_2 - [t_1 - \alpha]) \, d\alpha
                 = \int_{t_1 - t_2}^{t_1} q \, e^{-c(t_2 - [t_1 - \alpha])} e^{-c\alpha} \, d\alpha
                 = \frac{q}{2c}\left(1 - e^{-2ct_2}\right) e^{-c(t_1 - t_2)},  t_1 > t_2 > 0.    (7-68)

Note that output Y(t) is not stationary. The reason for this is simple (and intuitive). Input X(t) is applied at t = 0, and the system is at rest before this time (Y(t) = 0, t < 0). For a few time constants, this fact is "remembered" by the system (the system has memory). For t_1 and t_2 larger than five time constants (t_1, t_2 > 5/(2c)), steady state can be assumed, and the output autocorrelation can be approximated as

    R_Y(t_1, t_2) \approx \frac{q}{2c} e^{-c|t_2 - t_1|}.    (7-69)

Output Y(t) is approximately stationary for t_1, t_2 > 5/(2c).
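
The start-up transient in (7-67) and (7-68) is visible in the output power R_Y(t, t) = (q/2c)(1 - e^{-2ct}): it is zero at the switch-on instant and is essentially at its steady-state value q/(2c) after about 5/(2c) seconds. A Monte Carlo sketch follows; the parameter values and the Euler discretization are assumptions of the test.

```python
import numpy as np

# Monte-Carlo check of E[Y(t)^2] = (q/2c)(1 - exp(-2*c*t)) for Example 7-4,
# where the white-noise input is switched on at t = 0 and Y(0) = 0.
rng = np.random.default_rng(8)
q, c, dt, T, M = 1.0, 2.0, 1e-3, 2.0, 2000
n = int(T / dt)

x = rng.standard_normal((M, n)) * np.sqrt(q / dt)   # M independent input paths
y = np.zeros((M, n))
for k in range(1, n):                               # forward-Euler one-pole filter
    y[:, k] = y[:, k - 1] + dt * (-c * y[:, k - 1] + x[:, k - 1])

for t in (0.1, 0.5, 2.0):
    k = int(t / dt) - 1
    print(t, np.mean(y[:, k]**2), q / (2 * c) * (1 - np.exp(-2 * c * t)))
```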

Example 7-5

Let X(t) be a real-valued, WSS process with autocorrelation R(\tau). For any fixed T > 0, define the random variable

    S_T = \int_{-T}^{T} X(t) \, dt.    (7-70)

Express the second moment E[S_T^2] as a single integral involving R(\tau). First, note that

    S_T^2 = \int_{-T}^{T} X(t_1) \, dt_1 \int_{-T}^{T} X(t_2) \, dt_2 = \int_{-T}^{T}\int_{-T}^{T} X(t_1)X(t_2) \, dt_1 \, dt_2,    (7-71)

a result that leads to

    E[S_T^2] = \int_{-T}^{T}\int_{-T}^{T} E[X(t_1)X(t_2)] \, dt_1 \, dt_2 = \int_{-T}^{T}\int_{-T}^{T} R(t_1 - t_2) \, dt_1 \, dt_2.    (7-72)

The integrand in (7-72) depends only on one quantity, namely the difference t_1 - t_2. Therefore, Equation (7-72) should be expressible in terms of a single integral in the variable t_1 - t_2. To see this, use \tau = t_1 - t_2 and \beta = t_1 + t_2, and map the (t_1, t_2) plane to the (\tau, \beta) plane (this relationship has an inverse), as shown by Fig. 7-10. As discussed in Appendix 4A, the integral (7-72) can be expressed as

    \int_{-T}^{T}\int_{-T}^{T} R(t_1 - t_2) \, dt_1 \, dt_2 = \iint_{\mathcal{R}} R(\tau) \left| \frac{\partial(t_1, t_2)}{\partial(\tau, \beta)} \right| \, d\tau \, d\beta,    (7-73)

where \mathcal{R} is the rotated square region in the (\tau, \beta) plane shown on the right-hand side of Fig. 7-10. For use in (7-73), the Jacobian of the transformation is

Fig. 7-10: Geometry used to change variables from (t_1, t_2) to (\tau, \beta) in the double integral that appears in Example 7-5. The square -T \le t_1, t_2 \le T maps to the rotated square with vertices (\tau, \beta) = (\pm 2T, 0) and (0, \pm 2T); the inverse map is t_1 = \tfrac{1}{2}(\beta + \tau), t_2 = \tfrac{1}{2}(\beta - \tau).

    \frac{\partial(t_1, t_2)}{\partial(\tau, \beta)} = \det\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix} = \frac{1}{2}.    (7-74)

In the (\tau, \beta) plane, as \tau goes from -2T to 0, the variable \beta traverses the interval from -(2T + \tau) to 2T + \tau, as can be seen from examination of Fig. 7-10. Also, as \tau goes from 0 to 2T, \beta traverses the interval from -(2T - \tau) to 2T - \tau. Hence, we have

    E[S_T^2] = \int_{-T}^{T}\int_{-T}^{T} R(t_1 - t_2) \, dt_1 \, dt_2
             = \int_{-2T}^{0}\int_{-(2T+\tau)}^{2T+\tau} \tfrac{1}{2} R(\tau) \, d\beta \, d\tau + \int_{0}^{2T}\int_{-(2T-\tau)}^{2T-\tau} \tfrac{1}{2} R(\tau) \, d\beta \, d\tau
             = \int_{-2T}^{0} (2T + \tau) R(\tau) \, d\tau + \int_{0}^{2T} (2T - \tau) R(\tau) \, d\tau
             = \int_{-2T}^{2T} (2T - |\tau|) R(\tau) \, d\tau,    (7-75)

a single integral that can be evaluated given the autocorrelation function R(\tau).
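
A numerical comparison of the double integral (7-72) with the single integral (7-75) is sketched below, for an assumed autocorrelation R(\tau) = e^{-|\tau|} and T = 1.

```python
import numpy as np

# Check E[S_T^2]: double integral (7-72) versus single integral (7-75),
# using the assumed autocorrelation R(tau) = exp(-|tau|) and T = 1.
T, n = 1.0, 801
R = lambda u: np.exp(-np.abs(u))

t = np.linspace(-T, T, n)                    # grid over the (t1, t2) square
dt = t[1] - t[0]
t1, t2 = np.meshgrid(t, t)
double = np.sum(R(t1 - t2)) * dt**2          # Eq. (7-72), brute force

tau = np.linspace(-2 * T, 2 * T, 2 * n)      # grid for the single integral
dtau = tau[1] - tau[0]
single = np.sum((2 * T - np.abs(tau)) * R(tau)) * dtau   # Eq. (7-75)

print(double, single)   # both approximately 2*(1 + exp(-2)) = 2.27
```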
