# Basics of Floating-Point Quantization


Chapter 12 Basics of Floating-Point Quantization

Representation of physical quantities in terms of floating-point numbers allows one to cover a very wide dynamic range with a relatively small number of digits. Given this type of representation, roundoff errors are roughly proportional to the amplitude of the represented quantity. In contrast, roundoff errors with uniform quantization are bounded by ±q/2 and are not in any way proportional to the represented quantity.

Floating-point is in most cases so advantageous over fixed-point number representation that it is rapidly becoming ubiquitous. The movement toward usage of floating-point numbers is accelerating as the speed of floating-point calculation increases and the cost of implementation goes down. For this reason, it is essential to have a method of analysis for floating-point quantization and floating-point arithmetic.

12.1 THE FLOATING-POINT QUANTIZER

Binary numbers have become accepted as the basis for all digital computation, so we describe floating-point representation in terms of binary numbers. Other number bases are completely possible, such as base 10 or base 16, but modern digital hardware is built on the binary base.

The numbers in the table of Fig. 12.1 are chosen to provide a simple example. We begin by counting with nonnegative binary floating-point numbers as illustrated in Fig. 12.1. The counting starts with the number 0, represented here by 00000. Each number is multiplied by 2^E, where E is an exponent. Initially, let E = 0. Continuing the count, the next number is 1, represented by 00001, and so forth. The counting continues with increments of 1, past 16, represented by 10000, and goes on until the count stops with the number 31, represented by 11111. The mantissa is now reset back to 10000, and the exponent is incremented by 1. The next number will be 32, represented by 10000 with E = 1. The next number will be 34, given by 10001 with E = 1.

Figure 12.1 Counting with binary floating-point numbers with a 5-bit mantissa; mantissa and exponent E are tabulated. No sign bit is included here.

Figure 12.2 A floating-point quantizer.

This will continue with increments of 2 until the number 62 is reached, represented by 11111 with E = 1. The mantissa is again reset back to 10000, and the exponent is incremented again. The next number will be 64, given by 10000 with E = 2. And so forth.

By counting, we have defined the allowed numbers on the number scale. Each number consists of a mantissa (see page 343) multiplied by 2 raised to a power given by the exponent E. The counting process illustrated in Fig. 12.1 is done with binary numbers having 5-bit mantissas. The counting begins with binary 00000 with E = 0 and goes up to 11111, then the mantissa is recycled back to 10000 and, with E = 1, the counting resumes up to 11111; then the mantissa is recycled back to 10000 again with E = 2, and the counting proceeds.

A floating-point quantizer is represented in Fig. 12.2. The input to this quantizer is x, a variable that is generally continuous in amplitude. The output of this quantizer is x′, a variable that is discrete in amplitude and that can only take on values in accord with a floating-point number scale. The input-output relation for this quantizer is a staircase function that does not have uniform steps.

Figure 12.3 illustrates the input-output relation for a floating-point quantizer with a 3-bit mantissa. The input physical quantity is x. Its floating-point representation is x′. The smallest step size is q. With a 3-bit mantissa, four steps are taken for each cycle, except for eight steps taken for the first cycle starting at the origin. The spacings of the cycles are determined by the choice of a parameter Δ. A general relation between Δ and q, defining Δ, is given by Eq. (12.1):

Δ = 2^p q,   (12.1)

where p is the number of bits of the mantissa. With a 3-bit mantissa, Δ = 8q. Figure 12.3 is helpful in gaining an understanding of relation (12.1). Note that after the first cycle, the spacing of the cycles and the step sizes vary by a factor of 2 from cycle to cycle.
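The non-uniform staircase just described can be sketched in a few lines of Python. This is our own minimal construction, not code from the text: it rounds an input to the nearest point of a grid whose smallest step is q and whose step size doubles in each cycle above Δ = 2^p q.

```python
import math

def fp_quantize(x: float, p: int, q: float) -> float:
    """Round x to the nearest point of a floating-point number grid with a
    p-bit mantissa and smallest step size q (sketch; names are our own)."""
    Delta = (2 ** p) * q              # spacing parameter: Delta = 2^p q
    if x == 0.0:
        return 0.0
    mag = abs(x)
    if mag < Delta:
        step = q                      # first cycle: uniform steps of size q
    else:
        # cycle [2^k Delta, 2^(k+1) Delta) uses step size 2^(k+1) q
        k = math.floor(math.log2(mag / Delta))
        step = (2 ** (k + 1)) * q
    return math.copysign(round(mag / step) * step, x)
```

For a 3-bit mantissa and q = 1, the grid is uniform up to Δ = 8 and the step size then doubles with every cycle, matching the staircase described above.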
The basic ideas and figures of the next sections were first published in Widrow, B., Kollár, I. and Liu, M.-C., "Statistical theory of quantization," IEEE Transactions on Instrumentation and Measurement 45(6), and are taken with permission. © 1995 IEEE.

Figure 12.3 Input-output staircase function for a floating-point quantizer with a 3-bit mantissa.

Figure 12.4 Floating-point quantization noise.

12.2 FLOATING-POINT QUANTIZATION NOISE

The roundoff noise ν_FL of the floating-point quantizer is the difference between the quantizer output and input:

ν_FL = x′ − x.   (12.2)

Figure 12.4 illustrates the relationships between x, x′, and ν_FL.
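The opening claim that floating-point roundoff error is roughly proportional to the amplitude of the represented quantity can be checked numerically. The sketch below (our own construction, assuming the grid of the previous section) quantizes random inputs on a hypothetical 4-bit-mantissa grid and verifies that the relative error never exceeds 2^(−p) outside the first cycle.

```python
import math
import random

def fp_quantize(x, p, q):
    # nearest point of a floating-point grid (sketch; our own construction)
    Delta = (2 ** p) * q
    if x == 0.0:
        return 0.0
    mag = abs(x)
    if mag < Delta:
        step = q
    else:
        step = (2 ** (math.floor(math.log2(mag / Delta)) + 1)) * q
    return math.copysign(round(mag / step) * step, x)

random.seed(1)
p, q = 4, 1.0
Delta = (2 ** p) * q
for _ in range(5000):
    # random magnitudes above the first cycle, both signs
    x = random.uniform(Delta, 1000.0 * Delta) * random.choice([-1.0, 1.0])
    nu_fl = fp_quantize(x, p, q) - x          # roundoff noise nu_FL = x' - x
    assert abs(nu_fl) <= (2.0 ** -p) * abs(x) + 1e-9
```

Within cycle k the step is 2^(k+1) q while the magnitude is at least 2^k Δ = 2^(k+p) q, so the rounding error of at most half a step stays below 2^(−p) of the amplitude, unlike the fixed ±q/2 bound of a uniform quantizer.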

Figure 12.5 The PDF of floating-point quantization noise with a zero-mean Gaussian input, σ_x = 32Δ, and with a 2-bit mantissa.

The PDF of the quantization noise can be obtained by slicing and stacking the PDF of x, as was done for the uniform quantizer. Because the staircase steps are not uniform, the PDF of the quantization noise is not uniform. It has a pyramidal shape. With a zero-mean Gaussian input with σ_x = 32Δ, and with a mantissa having just two bits, the quantization noise PDF has been calculated. It is plotted in Fig. 12.5. The shape of this PDF is typical. The narrow segments near the top of the pyramid are caused by the occurrence of values of x that are small in magnitude (small quantization step sizes), while the wide segments near the bottom of the pyramid are caused by the occurrence of values of x that are large in magnitude (large quantization step sizes). The shape of the PDF of floating-point quantization noise resembles the silhouette of a big-city skyscraper like the famous Empire State Building of New York City. We have called functions like that of Fig. 12.5 skyscraper PDFs.

Mathematical methods will be developed next to analyze floating-point quantization noise, to find its mean and variance, and the correlation coefficient between this noise and the input x.

12.3 AN EXACT MODEL OF THE FLOATING-POINT QUANTIZER

A floating-point quantizer of the type shown in Fig. 12.3 can be modeled exactly as a cascade of a nonlinear function (a "compressor"), followed by a uniform quantizer (a "hidden quantizer"), followed by an inverse nonlinear function (an "expander"). The

overall idea is illustrated in Fig. 12.6(a). A similar idea is often used to represent compression and expansion in a data compression system (see e.g. Gersho and Gray (1992), CCITT (1984)).

Figure 12.6 A model of a floating-point quantizer: (a) block diagram, a cascade of a compressor (nonlinear function), a uniform quantizer (the "hidden quantizer"), and an expander (inverse nonlinear function); (b) definition of quantization noises.

The input-output characteristic of the compressor (y vs. x) is shown in Fig. 12.7, and the input-output characteristic of the expander (x′ vs. y′) is shown in Fig. 12.8. The hidden quantizer is conventional, having a uniform-staircase input-output characteristic (y′ vs. y). Its quantization step size is q, and its quantization noise is

ν = y′ − y.   (12.3)

Figure 12.6(b) is a diagram showing the sources of ν_FL and ν. An expression for the input-output characteristic of the compressor is

y = x, if −Δ ≤ x ≤ Δ
y = x/2 + Δ/2, if Δ ≤ x ≤ 2Δ
y = x/2 − Δ/2, if −2Δ ≤ x ≤ −Δ
y = x/4 + Δ, if 2Δ ≤ x ≤ 4Δ
y = x/4 − Δ, if −4Δ ≤ x ≤ −2Δ
...
y = x/2^(k+1) + (k+1)Δ/2, if 2^k Δ ≤ x ≤ 2^(k+1) Δ
y = x/2^(k+1) − (k+1)Δ/2, if −2^(k+1) Δ ≤ x ≤ −2^k Δ

or

y = x/2^(k+1) + sign(x)·(k+1)Δ/2, if 2^k Δ ≤ |x| ≤ 2^(k+1) Δ,   (12.4)

where k is a nonnegative integer. This is a piecewise-linear characteristic. An expression for the input-output characteristic of the expander is

x′ = y′, if −Δ ≤ y′ ≤ Δ
x′ = 2(y′ − 0.5Δ), if Δ ≤ y′ ≤ 1.5Δ
x′ = 2(y′ + 0.5Δ), if −1.5Δ ≤ y′ ≤ −Δ
x′ = 4(y′ − Δ), if 1.5Δ ≤ y′ ≤ 2Δ
x′ = 4(y′ + Δ), if −2Δ ≤ y′ ≤ −1.5Δ
...
x′ = 2^(k+1)·(y′ − (k+1)Δ/2), if (k+2)Δ/2 ≤ y′ ≤ (k+3)Δ/2
x′ = 2^(k+1)·(y′ + (k+1)Δ/2), if −(k+3)Δ/2 ≤ y′ ≤ −(k+2)Δ/2

or

x′ = 2^(k+1)·(y′ − sign(y′)·(k+1)Δ/2), if (k+2)Δ/2 ≤ |y′| ≤ (k+3)Δ/2,   (12.5)

where k is a nonnegative integer. This characteristic is also piecewise linear.

One should realize that the compressor and expander characteristics described above are universal and are applicable for any choice of mantissa size. The compressor and expander characteristics of Figs. 12.7 and 12.8 are drawn to scale. Fig. 12.9 shows the characteristic (y′ vs. y) of the hidden quantizer drawn to the same scale, assuming a mantissa of 2 bits. For this case, Δ = 4q.

If only the compressor and expander were cascaded, the result would be a perfect gain of unity, since they are inverses of each other. If the compressor is cascaded with the hidden quantizer of Fig. 12.9 and then cascaded with the expander, all in accord with the diagram of Fig. 12.6, the result is a floating-point quantizer. The one illustrated in Fig. 12.10 has a 2-bit mantissa.

The cascaded model of the floating-point quantizer shown in Fig. 12.6 becomes very useful when the quantization noise of the hidden quantizer has the properties of

Figure 12.7 The compressor's input-output characteristic (y/Δ vs. x/Δ).

Figure 12.8 The expander's input-output characteristic (x′/Δ vs. y′/Δ).

Figure 12.9 The uniform hidden quantizer (y′/Δ vs. y/Δ). The mantissa has 2 bits.

Figure 12.10 A floating-point quantizer with a 2-bit mantissa (x′/Δ vs. x/Δ).

PQN. This would happen if QT I or QT II were satisfied at the input y of the hidden quantizer. Testing for the satisfaction of these quantizing theorems is complicated by the nonlinearity of the compressor. If x were a Gaussian input to the floating-point quantizer, the input to the hidden quantizer would be x mapped through the compressor. The result would be a non-Gaussian input to the hidden quantizer.

In practice, inputs to the hidden quantizer almost never perfectly meet the conditions for satisfaction of a quantizing theorem. On the other hand, these inputs almost always satisfy these conditions approximately. The quantization noise introduced by the hidden quantizer is generally very close to uniform and very close to being uncorrelated with its input y. The PDF of y is usually sliced up so finely that the hidden quantizer produces noise having properties like PQN.

12.4 HOW GOOD IS THE PQN MODEL FOR THE HIDDEN QUANTIZER?

In previous chapters, where uniform quantization was studied, it was possible to define properties of the input CF that would be necessary and sufficient for the satisfaction of a quantizing theorem, such as QT II. Even if these conditions were not met, as for example with a Gaussian input, it was possible to determine the errors that would exist in moment prediction when using the PQN model. These errors would almost always be very small.

For floating-point quantization, the same kind of calculations for the hidden quantizer could in principle be made, but the mathematics would be far more difficult because of the action of the compressor on the input signal x. The distortion of the compressor almost never simplifies the CF of the input x, nor makes it easy to test for the satisfaction of a quantizing theorem.

To determine the statistical properties of ν, the PDF of the input x can be mapped through the piecewise-linear compressor characteristic to obtain the PDF of y. In turn, y is quantized by the hidden quantizer, and the quantization noise ν can be tested for similarity to PQN. The PDF of ν can be determined directly from the PDF of y, or it could be obtained by Monte Carlo methods, applying random x inputs to the compressor and observing corresponding values of y and ν. The moments of ν can be determined, and so can the covariance between ν and y. For the PQN model to be usable, the correlation of ν and y should be close to zero (less than a few percent), the PDF of ν should be almost uniform between ±q/2, the mean of ν should be close to zero, and the mean square of ν should be close to q²/12.

In many cases with x being Gaussian, the PDF of the compressor output y can be very ugly. Examples are shown in Figs. 12.11(a)-12.14(a). These represent cases where the input x is Gaussian with various mean values and various ratios of σ_x/Δ. With mantissas of 4 bits or more, these ugly inputs to the hidden quantizer cause it to produce quantization noise which behaves remarkably like PQN. This is difficult to prove analytically, but confirming results from slicing and stacking

these PDFs are very convincing. Similar techniques have been used with these PDFs to calculate correlation coefficients between ν and y. The noise turns out to be essentially uniform, and the correlation coefficient turns out to be essentially zero.

Figure 12.11 PDF of compressor output and of hidden quantization noise when x is zero-mean Gaussian with σ_x = 50Δ: (a) f_y(y); (b) f_ν(ν) for p = 4 (q = Δ/16); (c) f_ν(ν) for p = 8 (q = Δ/256).

Consider the ugly PDF of y shown in Fig. 12.11(a), f_y(y). The horizontal scale goes over a range of ±5Δ. If we choose a mantissa of 4 bits, the horizontal scale will cover the range ±80q. If we choose a mantissa of 8 bits, q will be much smaller and the horizontal scale will cover the range ±1280q. With the 4-bit mantissa, slicing and stacking this PDF to obtain the PDF of ν, we obtain the result shown in Fig. 12.11(b). From the PDF of ν, we obtain a mean value very close to zero, and a mean square very close to q²/12. The correlation coefficient between ν and y is essentially zero. With an 8-bit mantissa, f_ν(ν), shown in Fig. 12.11(c), is even more uniform, giving a mean value for ν of almost exactly zero, and a mean square of 1.000q²/12. The correlation

Figure 12.12 PDF of compressor output and of hidden quantization noise when x is Gaussian with σ_x = Δ/2 and μ_x = σ_x: (a) f_y(y); (b) f_ν(ν) for 4-bit mantissas (q = Δ/16); (c) f_ν(ν) for 8-bit mantissas (q = Δ/256).

Figure 12.13 PDF of compressor output and of hidden quantization noise when x is Gaussian with σ_x = 50Δ and μ_x = σ_x: (a) f_y(y); (b) f_ν(ν) for 4-bit mantissas (q = Δ/16); (c) f_ν(ν) for 8-bit mantissas (q = Δ/256).

Figure 12.14 PDF of compressor output and of hidden quantization noise when x is Gaussian with σ_x = 50Δ and μ_x = 10σ_x = 500Δ: (a) f_y(y); (b) f_ν(ν) for 4-bit mantissas (q = Δ/16); (c) f_ν(ν) for 8-bit mantissas (q = Δ/256).

coefficient between ν and y is again essentially zero. With a mantissa of four bits or more, the noise ν behaves very much like PQN.

TABLE 12.1 Properties of the hidden quantization noise for Gaussian input: E{ν}/q, E{ν²}/(q²/12), and ρ_{ν,y}, tabulated for (a) p = 4 and (b) p = 8, for the cases μ_x = 0 and μ_x = σ_x at the several values of σ_x of the preceding figures.

This process was repeated for the ugly PDFs of Figs. 12.12(a), 12.13(a), and 12.14(a). The corresponding PDFs of the quantization noise of the hidden quantizer are shown in Figs. 12.12(b), 12.13(b), and 12.14(b). With a 4-bit mantissa, Table 12.1(a) lists the values of the mean and mean square of ν, and the correlation coefficient between ν and y, for all the cases. With an 8-bit mantissa, the corresponding moments are listed in Table 12.1(b).

Similar tests have been made with uniformly distributed inputs, with triangularly distributed inputs, and with sinusoidal inputs. Input means ranged from zero to one standard deviation. The moments of Tables 12.1(a),(b) are typical for Gaussian inputs, but similar results are obtained with other forms of inputs. For every single test case, the input x had a PDF that extended over at least several multiples of Δ. With a mantissa of 8 bits or more, the noise ν of the hidden quantizer had a mean of almost zero, a mean square of almost q²/12, and a correlation coefficient with the quantizer input of almost zero.

The PQN model works so well that we assume it to be true as long as the mantissa has 8 or more bits and the PDF of x covers at least several Δ-quanta of the floating-point quantizer. It should be noted that the single-precision IEEE standard (see Section 13.8) calls for a mantissa with 24 bits. When working with this standard, the PQN model will work exceedingly well almost everywhere.
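A Monte Carlo experiment of this kind is easy to reproduce. The sketch below is our own code, assuming the piecewise-linear compressor form given earlier: it draws Gaussian samples with σ_x = 50Δ, maps them through the compressor with p = 4 (q = Δ/16), and checks that the hidden quantizer's noise has nearly zero mean, a mean square near q²/12, and nearly zero correlation with its input y.

```python
import math
import random

def compressor(x, Delta):
    # piecewise-linear compressor: identity up to Delta, then each cycle
    # [2^k Delta, 2^(k+1) Delta] is mapped with slope 2^-(k+1)
    if abs(x) <= Delta:
        return x
    k = math.floor(math.log2(abs(x) / Delta))
    return x / (2 ** (k + 1)) + math.copysign((k + 1) * Delta / 2.0, x)

random.seed(2)
p = 4
Delta = 1.0
q = Delta / (2 ** p)                            # q = Delta * 2^-p
xs = [random.gauss(0.0, 50.0 * Delta) for _ in range(100000)]
ys = [compressor(x, Delta) for x in xs]
nus = [round(y / q) * q - y for y in ys]        # hidden-quantizer noise nu

mean = sum(nus) / len(nus)
ms = sum(n * n for n in nus) / len(nus)
my = sum(ys) / len(ys)
var_y = sum((y - my) ** 2 for y in ys) / len(ys)
cov = sum((y - my) * n for y, n in zip(ys, nus)) / len(ys)
rho = cov / math.sqrt(ms * var_y)

# PQN-like behavior: near-zero mean, mean square near q^2/12,
# near-zero correlation with the quantizer input y
assert abs(mean) < 0.01 * q
assert abs(ms / (q * q / 12.0) - 1.0) < 0.05
assert abs(rho) < 0.02
```

The tolerances here are loose sampling-error margins; the text's tabulated deviations from the PQN moments are far smaller still.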

12.5 ANALYSIS OF FLOATING-POINT QUANTIZATION NOISE

We will use the model of the floating-point quantizer shown in Fig. 12.6 to determine the statistical properties of ν_FL. We will derive its mean, its mean square, and its correlation coefficient with the input x.

The hidden quantizer injects noise ν into y, resulting in y′, which propagates through the expander to make x′. Thus, the noise of the hidden quantizer propagates through the nonlinear expander into x′. The noise in x′ is ν_FL. Assume that the noise ν of the hidden quantizer is small compared to y. Then ν is a small increment to y. This causes an increment ν_FL at the expander output. Accordingly,

ν_FL = (dx′/dy)|_y · ν.   (12.6)

The derivative is a function of y. This is because the increment ν is added to y, so y is the nominal point where the derivative should be taken.

Since the PQN model was found to work so well for so many cases with regard to the behavior of the hidden quantizer, we will make the assumption that the conditions for PQN are indeed satisfied for the hidden quantizer. This greatly simplifies the statistical analysis of ν_FL.

Expression (12.6) can be used in the following way to find the crosscorrelation between ν_FL and the input x:

E{ν_FL · x} = E{ν · (dx′/dy)|_y · x} = E{ν} · E{(dx′/dy)|_y · x} = 0.   (12.7)

When deriving this result, it was possible to factor the expectation into a product of expectations because, from the point of view of moments, ν behaves as if it is independent of y and independent of any function of y, such as (dx′/dy)|_y and x. Since the expected value of ν is zero, the crosscorrelation turns out to be zero.

Expression (12.6) can also be used to find the mean of ν_FL. Accordingly,

E{ν_FL} = E{ν · (dx′/dy)|_y} = E{ν} · E{(dx′/dy)|_y} = 0.   (12.8)

So the mean of ν_FL is zero, and the crosscorrelation between ν_FL and x is zero. Both results are consequences of the hidden quantizer behaving in accord with the PQN model.

Our next objective is the determination of E{ν²_FL}. We will need to find an expression for (dx′/dy)|_y in order to obtain quantitative results. By inspection of Figs. 12.7 and 12.8, which show the characteristics of the compressor and the expander, we determine that

dx′/dy = 1 when 0.5Δ < x < Δ
dx′/dy = 1 when −Δ < x < −0.5Δ
dx′/dy = 2 when Δ < x < 2Δ
dx′/dy = 2 when −2Δ < x < −Δ
dx′/dy = 4 when 2Δ < x < 4Δ
dx′/dy = 4 when −4Δ < x < −2Δ
...
dx′/dy = 2^k when 2^(k−1) Δ < |x| < 2^k Δ,   (12.9)

where k is a nonnegative integer. These relations define the derivative as a function of x, except in the range −Δ/2 < x < Δ/2; the probability of x values in that range is assumed to be negligible. Having the derivative as a function of x is equivalent to having the derivative as a function of y, because y is a monotonic function of x (see Fig. 12.7).

It is useful now to introduce the function log2(x/Δ). This is plotted in Fig. 12.15(a). We next introduce a new notation for uniform quantization,

x′ = Q_q(x).   (12.10)

The operator Q_q represents uniform quantization with a quantum step size of q. Using this notation, we can introduce yet another function of x, shown in Fig. 12.15(b), by adding the quantity 0.5 to log2(x/Δ) and uniformly quantizing the result with a unit quantum step size. The function is, accordingly, Q_1(log2(x/Δ) + 0.5).
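The quantized-logarithm function just introduced reproduces the piecewise-constant derivative exactly, and this equivalence can be checked directly. A small sketch with our own helper names, assuming Q_1 rounds to the nearest integer:

```python
import math

def q1(v):
    # uniform quantizer with unit step size (rounding to nearest integer)
    return math.floor(v + 0.5)

def slope_formula(x, Delta):
    # 2 raised to Q1(log2(|x|/Delta) + 0.5)
    return 2.0 ** q1(math.log2(abs(x) / Delta) + 0.5)

def slope_piecewise(x, Delta):
    # dx'/dy = 1 for |x| < Delta, and 2^k for 2^(k-1) Delta < |x| < 2^k Delta
    if abs(x) < Delta:
        return 1.0
    k = math.floor(math.log2(abs(x) / Delta)) + 1
    return 2.0 ** k

# the two expressions agree away from the cycle boundaries
for t in [0.6, 0.9, 1.3, 1.9, 2.5, 3.7, 5.0, 100.3]:
    assert slope_formula(t, 1.0) == slope_piecewise(t, 1.0)
    assert slope_formula(-t, 1.0) == slope_piecewise(-t, 1.0)
```

Adding 0.5 before quantizing shifts the rounding boundaries of the logarithm onto the powers of two, which is exactly where the expander's slope doubles.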

Figure 12.15 Approximate and exact exponent characteristics: (a) the logarithmic approximation log2(x/Δ); (b) the exact expression, Q_1(log2(x/Δ) + 0.5) vs. x.

Referring back to (12.9), it is apparent that for positive values of x the derivative can be represented in terms of the new function as

(dx′/dy)|_y = 2^(Q_1(log2(x/Δ) + 0.5)),   x > Δ/2.   (12.11)

For negative values of x, the derivative is

(dx′/dy)|_y = 2^(Q_1(log2(−x/Δ) + 0.5)),   x < −Δ/2.   (12.12)

Another way to write this is

(dx′/dy)|_y = 2^(Q_1(log2(|x|/Δ) + 0.5)),   |x| > Δ/2.   (12.13)

These are exact expressions for the derivative. From Eqs. (12.13) and (12.6), we obtain ν_FL as

ν_FL = ν · 2^(Q_1(log2(|x|/Δ) + 0.5)),   |x| > Δ/2.   (12.14)

One should note that the value of ν would generally be much smaller than the value of x. At input levels of |x| < Δ/2, the floating-point quantizer would be experiencing underflow. Even smaller inputs would be possible, as for example when the input x has a zero crossing. But if the probability of x having a magnitude less than Δ is sufficiently low, Eq. (12.14) can be simply written as

ν_FL = ν · 2^(Q_1(log2(|x|/Δ) + 0.5)).   (12.15)

The exponent in Eq. (12.15) is a quantized function of x, and it can be expressed as

Q_1(log2(|x|/Δ) + 0.5) = log2(|x|/Δ) + 0.5 + ν_EXP.   (12.16)

The quantization noise is ν_EXP, the noise in the exponent. It is bounded by ±0.5. The floating-point quantization noise can now be expressed as

ν_FL = ν · 2^(log2(|x|/Δ) + 0.5 + ν_EXP) = √2 · ν · (|x|/Δ) · 2^(ν_EXP).   (12.17)

The mean square of ν_FL can be obtained from this:

E{ν²_FL} = 2 E{ν² · (x²/Δ²) · 2^(2ν_EXP)} = 2 E{ν²} · E{(x²/Δ²) · 2^(2ν_EXP)}.   (12.18)

The factorization is permissible because, by assumption, ν behaves as PQN. The noise ν_EXP is related to x, but since ν_EXP is bounded, it is possible to upper- and lower-bound E{ν²_FL} even without knowledge of the relation between ν_EXP and x. The bounds are

E{ν²} · E{x²/Δ²} ≤ E{ν²_FL} ≤ 4 E{ν²} · E{x²/Δ²}.   (12.19)

These bounds hold without exception as long as the PQN model applies for the hidden quantizer. If, in addition to this, the PQN model applies to the quantized exponent in Eq. (12.15), then a precise value for E{ν²_FL} can be obtained. With PQN, ν_EXP will have zero mean, a mean square of 1/12, a mean fourth of 1/80, and will be uncorrelated with log2(|x|/Δ). From the point of view of moments, ν_EXP will behave as if it is independent of x and of any function of x. From Eq. (12.17),

ν_FL = √2 · ν · (|x|/Δ) · 2^(ν_EXP) = √2 · ν · (|x|/Δ) · e^(ν_EXP · ln 2).   (12.20)

From this, we can obtain E{ν²_FL}:

E{ν²_FL} = 2 E{ν² · (x²/Δ²) · e^(2·ν_EXP·ln 2)}
= 2 E{ν² · (x²/Δ²) · (1 + 2ν_EXP·ln 2 + (2ν_EXP·ln 2)²/2! + (2ν_EXP·ln 2)³/3! + (2ν_EXP·ln 2)⁴/4! + ···)}
= 2 E{ν²} · E{x²/Δ²} · E{1 + 2ν_EXP·ln 2 + 2ν²_EXP·(ln 2)² + (4/3)ν³_EXP·(ln 2)³ + (2/3)ν⁴_EXP·(ln 2)⁴ + ···}.   (12.21)

Since the odd moments of ν_EXP are all zero,

E{ν²_FL} = 2 E{ν²} · E{x²/Δ²} · (1 + 2·(1/12)·(ln 2)² + (2/3)·(1/80)·(ln 2)⁴ + ···)

≈ 2.16 E{ν²} · E{x²/Δ²}.   (12.22)

This falls about halfway between the lower and upper bounds. A more useful form of this expression can be obtained by substituting the following:

E{ν²} = q²/12, and q = Δ · 2^(−p).   (12.23)

The result is a very important one:

E{ν²_FL} = 0.180 · 2^(−2p) · E{x²}.   (12.24)

Example 12.1 Roundoff noise when multiplying two floating-point numbers
Let us assume that two numbers, one of them 4.2, are multiplied using IEEE double precision arithmetic (p = 53). The exact value of the roundoff error could be determined by using the result of a more precise calculation (with p > 53) as a reference. However, this is usually not available. The mean square of the roundoff noise can be easily determined using Eq. (12.24), without the need of reference calculations. When two numbers of magnitudes similar to the above are multiplied, the roundoff noise of the product will have a mean square of E{ν²_FL} ≈ 0.180 · 2^(−2·53) · E{x²}. Although we cannot determine the precise roundoff error value for the given case, we can give its expected magnitude.

Example 12.2 Roundoff noise in 2nd-order IIR filtering with floating point
Let us assume one evaluates the recursive formula

y(n) = −a(1)·y(n−1) − a(2)·y(n−2) + b(1)·x(n−1).

When the operations (multiplications and additions) are executed in natural order, all with precision p, the following sources of arithmetic roundoff can be enumerated:

1. roundoff after the multiplication a(1)·y(n−1): var{ν_FL1} ≈ 0.180 · 2^(−2p) · a(1)² · var{y},
2. roundoff after the storage of a(1)·y(n−1): var{ν_FL2} = 0, since if the product in item 1 is rounded, the quantity to be stored is already quantized,
3. roundoff after the multiplication a(2)·y(n−2): var{ν_FL3} ≈ 0.180 · 2^(−2p) · a(2)² · var{y},
4. roundoff after the addition −a(1)·y(n−1) − a(2)·y(n−2): var{ν_FL4} ≈ 0.180 · 2^(−2p) · (a(1)²·var{y} + a(2)²·var{y} + 2a(1)a(2)·C_yy(1)),
5. roundoff after the storage of −a(1)·y(n−1) − a(2)·y(n−2): var{ν_FL5} = 0, since if the sum in item 4 is rounded, the quantity to be stored is already quantized,

6. roundoff after the multiplication b(1)·x(n−1): var{ν_FL6} ≈ 0.180 · 2^(−2p) · b(1)² · var{x},
7. roundoff after the last addition: var{ν_FL7} ≈ 0.180 · 2^(−2p) · var{y},
8. roundoff after the storage of the result to y(n): var{ν_FL8} = 0, since if the sum in item 7 is rounded, the quantity to be stored is already quantized.

These variances need to be added to obtain the total variance of the arithmetic roundoff noise injected into y in each step. The total noise variance in y can then be determined by adding the effect of each injected noise. This can be done using the response to an impulse injected into y(0): if this is given by h_yy(0), h_yy(1), ..., including the unit impulse itself at time 0, the variance of the total roundoff noise can be calculated by multiplying the above variance of the injected noise by the sum of h²_yy(n) over n ≥ 0.

For all these calculations, one also needs to determine the variance of y and the one-step covariance C_yy(1). If x is sinusoidal with amplitude A, y is also sinusoidal, with an amplitude determined by the transfer function: var{y} = |H(f)|²·var{x} = |H(f)|²·A²/2, and C_yy(1) ≈ var{y} for a low-frequency sinusoidal input. If x is zero-mean white noise, var{y} = var{x}·Σ h²(n) (summed over n ≥ 0), with h(n) being the impulse response from x to y. The covariance is about var{x}·Σ h(n)·h(n+1).

Numerically, let us use single precision (p = 24), and let a(1) = 0.9, a(2) = 0.5, and b(1) = 0.2, and let the input signal be a sine wave with unit power (A²/2 = 1) and frequency f = 0.05 f_s. With these, |H(f)| = 0.33 and var{y} = 0.11, and the covariance C_yy(1) approximately equals var{y}. Evaluating the variance of the floating-point noise,

var{ν_FL} = Σ_{n=1..8} var{ν_FLn}.   (12.25)

The use of an extended-precision accumulator and of the multiply-and-add operation

In many modern processors, an accumulator is used with extended precision p_acc, and a multiply-and-add operation (see page 374) can be applied. In this case, multiplication is usually executed without roundoff, and additions are not followed by extra storage, therefore roundoff happens only for certain items (but, in contrast to the above, var{ν_FL2} ≠ 0, since now ν_FL1 = 0). The total variance injected into y by arithmetic roundoff is

var_inj = var{ν_FL2} + var{ν_FL4} + var{ν_FL7} + var{ν_FL8}
≈ 0.180 · 2^(−2p_acc) · a(1)² · var{y}
+ 0.180 · 2^(−2p_acc) · (a(1)²·var{y} + a(2)²·var{y} + 2a(1)a(2)·C_yy(1))
+ 0.180 · 2^(−2p_acc) · var{y}
+ 0.180 · 2^(−2p) · var{y}.   (12.26)

In this expression, the last term usually dominates. With the numerical data, var{ν_FL-ext} can be evaluated in the same way.

Making the same substitution in Eq. (12.19), the bounds are

(1/12) · 2^(−2p) · E{x²} ≤ E{ν²_FL} ≤ (1/3) · 2^(−2p) · E{x²}.   (12.27)

The signal-to-noise ratio for the floating-point quantizer is defined as

SNR = E{x²} / E{ν²_FL}.   (12.28)

If both ν and ν_EXP obey PQN models, then Eq. (12.24) can be used to obtain the SNR. The result is

SNR = 5.55 · 2^(2p).   (12.29)

This is the ratio of signal power to quantization noise power. Expressed in dB, the SNR is

SNR_dB = 10·log10(5.55 · 2^(2p)) ≈ 7.44 + 6.02p.   (12.30)

If only ν obeys a PQN model, then Eq. (12.27) can be used to bound the SNR:

3 · 2^(2p) ≤ SNR ≤ 12 · 2^(2p).   (12.31)

This can be expressed in dB as

4.77 + 6.02p ≤ SNR_dB ≤ 10.79 + 6.02p.   (12.32)

From relations (12.29) and (12.31), it is clear that the SNR improves as the number of bits in the mantissa is increased. It is also clear that for the floating-point quantizer, the SNR does not depend on E{x²} (as long as ν and ν_EXP act like PQN). The quantization noise power is proportional to E{x²}. This is a very different situation from that of the uniform quantizer, where the SNR increases in proportion to E{x²}, since the quantization noise power is constant at q²/12.

When ν satisfies a PQN model, ν_FL is uncorrelated with x. The floating-point quantizer can then be replaced, for purposes of least-squares analysis, by an additive independent noise having a mean of zero and a mean square bounded by (12.27). If in addition ν_EXP satisfies a PQN model, the mean square of ν_FL will be given by Eq. (12.24). Section 12.6 discusses the question of ν_EXP satisfying a PQN model. This does happen in a surprising number of cases. But when PQN fails for ν_EXP, we cannot obtain E{ν²_FL} from Eq. (12.24), though we can still bound it by (12.27).

12.6 HOW GOOD IS THE PQN MODEL FOR THE EXPONENT QUANTIZER?

The PQN model for the hidden quantizer turns out to be a very good one when the mantissa has 8 bits or more, and: (a) the dynamic range of x extends over at least several times Δ; (b) for the Gaussian case, σ_x is at least as big as Δ/2. This model works over a very wide range of conditions encountered in practice. Unfortunately, the range of conditions where a PQN model works for the exponent quantizer is much more restricted. In this section, we will explore the issue.

12.6.1 Gaussian Input

The simplest and most important case is that of the Gaussian input. Let the input x be Gaussian with variance σ²_x, and with a mean μ_x that could vary between zero and some large multiple of σ_x, say 20σ_x. Fig. 12.16 shows calculated plots of the PDF of ν_EXP for various values of μ_x. This PDF is almost perfectly uniform between ±1/2 for μ_x = 0, 2σ_x, and 3σ_x. For higher values of μ_x, deviations from uniformity are seen. The covariance of ν_EXP and x, shown in Fig. 12.17, is essentially zero for μ_x = 0, 2σ_x, and 3σ_x. But for higher values of μ_x, significant correlation develops between ν_EXP and x.

Thus, the PQN model for the exponent quantizer appears to be intact for a range of input means from zero to 3σ_x. Beyond that, this PQN model appears to break down. The reason for this is that the input to the exponent quantizer is the variable |x|/Δ taken through the logarithm function. The greater the mean of x, the more the logarithm is saturated and the more the dynamic range of the quantizer input is compressed.
The point of breakdown of the PQN model for the exponent quantizer does not depend on the size of the mantissa; it depends only on the ratios of σ_x to Δ and of σ_x to μ_x. When the PQN model for the exponent quantizer is applicable, the PQN model for the hidden quantizer will always be applicable. The reason for this is that the input to both quantizers is of the form log2(x), but for the exponent quantizer x is divided by Δ, making its effective quantization step size Δ, while for the hidden quantizer the input is not divided by anything, so its quantization step size is q. The quantization step of the exponent quantizer is therefore coarser than that of the hidden quantizer by the factor

Δ/q = 2^p.   (12.33)
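The whole compressor, hidden quantizer, and expander cascade can be simulated to check the 0.180 · 2^(−2p) · E{x²} result numerically. The sketch below is our own code, assuming the piecewise compressor and expander forms given earlier; it uses a zero-mean Gaussian input with σ_x = 50Δ and p = 8, a case where both PQN models should hold.

```python
import math
import random

Delta = 1.0
p = 8
q = Delta / (2 ** p)

def compressor(x):
    # each cycle [2^k Delta, 2^(k+1) Delta] compressed with slope 2^-(k+1)
    if abs(x) <= Delta:
        return x
    k = math.floor(math.log2(abs(x) / Delta))
    return x / (2 ** (k + 1)) + math.copysign((k + 1) * Delta / 2.0, x)

def expander(y):
    # inverse of the compressor, defined on (k+2) Delta/2 <= |y| <= (k+3) Delta/2
    if abs(y) <= Delta:
        return y
    k = math.floor(2.0 * abs(y) / Delta) - 2
    return math.copysign((2 ** (k + 1)) * (abs(y) - (k + 1) * Delta / 2.0), y)

random.seed(3)
num = den = 0.0
for _ in range(100000):
    x = random.gauss(0.0, 50.0 * Delta)
    y = compressor(x)
    xq = expander(round(y / q) * q)   # compressor -> uniform quantizer -> expander
    num += (xq - x) ** 2              # accumulate nu_FL^2
    den += x * x
ratio = (num / den) * (2 ** (2 * p))  # estimate of E{nu_FL^2} * 2^(2p) / E{x^2}
assert 0.15 < ratio < 0.21            # should land near 0.180
```

The measured ratio sits near 0.180, between the theoretical bounds of 1/12 ≈ 0.083 and 1/3 ≈ 0.333, as the analysis predicts.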


### Advantages of High Resolution in High Bandwidth Digitizers

Advantages of High Resolution in High Bandwidth Digitizers Two of the key specifications of digitizers are bandwidth and amplitude resolution. These specifications are not independent - with increasing

### Department of Electronics and Communication Engineering 1

DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING III Year ECE / V Semester EC 6502 PRINCIPLES OF DIGITAL SIGNAL PROCESSING QUESTION BANK Department of

### TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS

TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS 1. Bandwidth: The bandwidth of a communication link, or in general any system, was loosely defined as the width of

### Outline. Random Variables. Examples. Random Variable

Outline Random Variables M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Random variables. CDF and pdf. Joint random variables. Correlated, independent, orthogonal. Correlation,

### Numerical Matrix Analysis

Numerical Matrix Analysis Lecture Notes #10 Conditioning and / Peter Blomgren, blomgren.peter@gmail.com Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research

### Filter Comparison. Match #1: Analog vs. Digital Filters

CHAPTER 21 Filter Comparison Decisions, decisions, decisions! With all these filters to choose from, how do you know which to use? This chapter is a head-to-head competition between filters; we'll select

### CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA

We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical

### CPE 323 Data Types and Number Representations

CPE 323 Data Types and Number Representations Aleksandar Milenkovic Numeral Systems: Decimal, binary, hexadecimal, and octal We ordinarily represent numbers using decimal numeral system that has 10 as

### Floating Point Numbers. Question. Learning Outcomes. Number Representation - recap. Do you have your laptop here?

Question Floating Point Numbers 6.626068 x 10-34 Do you have your laptop here? A yes B no C what s a laptop D where is here? E none of the above Eddie Edwards eedwards@doc.ic.ac.uk https://www.doc.ic.ac.uk/~eedwards/compsys

### 1 Short Introduction to Time Series

ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The

### PURSUITS IN MATHEMATICS often produce elementary functions as solutions that need to be

Fast Approximation of the Tangent, Hyperbolic Tangent, Exponential and Logarithmic Functions 2007 Ron Doerfler http://www.myreckonings.com June 27, 2007 Abstract There are some of us who enjoy using our

### Common Core Unit Summary Grades 6 to 8

Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations

### Signal Detection. Outline. Detection Theory. Example Applications of Detection Theory

Outline Signal Detection M. Sami Fadali Professor of lectrical ngineering University of Nevada, Reno Hypothesis testing. Neyman-Pearson (NP) detector for a known signal in white Gaussian noise (WGN). Matched

### Unit 2: Number Systems, Codes and Logic Functions

Unit 2: Number Systems, Codes and Logic Functions Introduction A digital computer manipulates discrete elements of data and that these elements are represented in the binary forms. Operands used for calculations

### Integer Numbers. Digital Signal Processor Data Path. Characteristics of Two Complements Representation. Integer Three Bit Representation 000

Integer Numbers Twos Complement Representation Digital Signal Processor Data Path Ingo Sander Ingo@imit.kth.se B=b N-1 b N-2...b 1 b 0 where b i {0,1} b N-1 b N-2... b 1 b 0 Sign Bit Decimal Value D(B)=-b

### Algebra 1 Course Information

Course Information Course Description: Students will study patterns, relations, and functions, and focus on the use of mathematical models to understand and analyze quantitative relationships. Through

### Data Representation Binary Numbers

Data Representation Binary Numbers Integer Conversion Between Decimal and Binary Bases Task accomplished by Repeated division of decimal number by 2 (integer part of decimal number) Repeated multiplication

### This is the 23 rd lecture on DSP and our topic for today and a few more lectures to come will be analog filter design. (Refer Slide Time: 01:13-01:14)

Digital Signal Processing Prof. S.C. Dutta Roy Department of Electrical Electronics Indian Institute of Technology, Delhi Lecture - 23 Analog Filter Design This is the 23 rd lecture on DSP and our topic

### Data Storage 3.1. Foundations of Computer Science Cengage Learning

3 Data Storage 3.1 Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: List five different data types used in a computer. Describe how

### High School Algebra 1 Common Core Standards & Learning Targets

High School Algebra 1 Common Core Standards & Learning Targets Unit 1: Relationships between Quantities and Reasoning with Equations CCS Standards: Quantities N-Q.1. Use units as a way to understand problems

### CHAPTER 5 Round-off errors

CHAPTER 5 Round-off errors In the two previous chapters we have seen how numbers can be represented in the binary numeral system and how this is the basis for representing numbers in computers. Since any

### The counterpart to a DAC is the ADC, which is generally a more complicated circuit. One of the most popular ADC circuit is the successive

The counterpart to a DAC is the ADC, which is generally a more complicated circuit. One of the most popular ADC circuit is the successive approximation converter. 1 2 The idea of sampling is fully covered

### Infinite Algebra 1 supports the teaching of the Common Core State Standards listed below.

Infinite Algebra 1 Kuta Software LLC Common Core Alignment Software version 2.05 Last revised July 2015 Infinite Algebra 1 supports the teaching of the Common Core State Standards listed below. High School

### Introduction Number Systems and Conversion

UNIT 1 Introduction Number Systems and Conversion Objectives 1. Introduction The first part of this unit introduces the material to be studied later. In addition to getting an overview of the material

### Signal Processing First Lab 08: Frequency Response: Bandpass and Nulling Filters

Signal Processing First Lab 08: Frequency Response: Bandpass and Nulling Filters Pre-Lab and Warm-Up: You should read at least the Pre-Lab and Warm-up sections of this lab assignment and go over all exercises

### Fast Logarithms on a Floating-Point Device

TMS320 DSP DESIGNER S NOTEBOOK Fast Logarithms on a Floating-Point Device APPLICATION BRIEF: SPRA218 Keith Larson Digital Signal Processing Products Semiconductor Group Texas Instruments March 1993 IMPORTANT

### Digital Logic. The Binary System is a way of writing numbers using only the digits 0 and 1. This is the method used by the (digital) computer.

Digital Logic 1 Data Representations 1.1 The Binary System The Binary System is a way of writing numbers using only the digits 0 and 1. This is the method used by the (digital) computer. The system we

### INTERNATIONAL TELECOMMUNICATION UNION ).4%2.!4)/.!,!.!,/'5% #!22)%2 3934%-3 '%.%2!, #(!2!#4%2)34)#3 #/--/. 4/!,,!.!,/'5% #!22)%242!.3-)33)/.

INTERNATIONAL TELECOMMUNICATION UNION )454 ' TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU ).4%2.!4)/.!,!.!,/'5% #!22)%2 3934%-3 '%.%2!, #(!2!#4%2)34)#3 #/--/. 4/!,,!.!,/'5% #!22)%242!.3-)33)/. 3934%-3!335-04)/.3

### PowerTeaching i3: Algebra I Mathematics

PowerTeaching i3: Algebra I Mathematics Alignment to the Common Core State Standards for Mathematics Standards for Mathematical Practice and Standards for Mathematical Content for Algebra I Key Ideas and

### FILTER CIRCUITS. A filter is a circuit whose transfer function, that is the ratio of its output to its input, depends upon frequency.

FILTER CIRCUITS Introduction Circuits with a response that depends upon the frequency of the input voltage are known as filters. Filter circuits can be used to perform a number of important functions in

### Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29.

Broadband Networks Prof. Dr. Abhay Karandikar Electrical Engineering Department Indian Institute of Technology, Bombay Lecture - 29 Voice over IP So, today we will discuss about voice over IP and internet

### The Basics of Interest Theory

Contents Preface 3 The Basics of Interest Theory 9 1 The Meaning of Interest................................... 10 2 Accumulation and Amount Functions............................ 14 3 Effective Interest

### MBA Jump Start Program

MBA Jump Start Program Module 2: Mathematics Thomas Gilbert Mathematics Module Online Appendix: Basic Mathematical Concepts 2 1 The Number Spectrum Generally we depict numbers increasing from left to right

### Indiana State Core Curriculum Standards updated 2009 Algebra I

Indiana State Core Curriculum Standards updated 2009 Algebra I Strand Description Boardworks High School Algebra presentations Operations With Real Numbers Linear Equations and A1.1 Students simplify and

### Computer Organization and Architecture

Computer Organization and Architecture Chapter 9 Computer Arithmetic Arithmetic & Logic Unit Performs arithmetic and logic operations on data everything that we think of as computing. Everything else in

### Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not.

Statistical Learning: Chapter 4 Classification 4.1 Introduction Supervised learning with a categorical (Qualitative) response Notation: - Feature vector X, - qualitative response Y, taking values in C

### Senior Secondary Australian Curriculum

Senior Secondary Australian Curriculum Mathematical Methods Glossary Unit 1 Functions and graphs Asymptote A line is an asymptote to a curve if the distance between the line and the curve approaches zero

### Digital Fundamentals

Digital Fundamentals with PLD Programming Floyd Chapter 2 29 Pearson Education Decimal Numbers The position of each digit in a weighted number system is assigned a weight based on the base or radix of

### Solutions to Exam in Speech Signal Processing EN2300

Solutions to Exam in Speech Signal Processing EN23 Date: Thursday, Dec 2, 8: 3: Place: Allowed: Grades: Language: Solutions: Q34, Q36 Beta Math Handbook (or corresponding), calculator with empty memory.

### 6. Methods 6.8. Methods related to outputs, Introduction

6. Methods 6.8. Methods related to outputs, Introduction In order to present the outcomes of statistical data collections to the users in a manner most users can easily understand, a variety of statistical

### Floating-point computation

Real values and floating point values Floating-point representation IEEE 754 standard representation rounding special values Floating-point computation 1 Real values Not all values can be represented exactly

### COGNITIVE TUTOR ALGEBRA

COGNITIVE TUTOR ALGEBRA Numbers and Operations Standard: Understands and applies concepts of numbers and operations Power 1: Understands numbers, ways of representing numbers, relationships among numbers,

### Chebyshev I Bandpass IIR Filter with 6 th Order

Chebyshev I Bandpass IIR Filter with 6 th Order Study Group: IEM2 Hand in Date: 22.07.2003 Group Members: Chao Chen Bing Li Chao Wang Professor: Prof. Dr. Schwarz 1 Contents 1. Introduction...3 2. Analysis...4

### (Refer Slide Time: 01:11-01:27)

Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 6 Digital systems (contd.); inverse systems, stability, FIR and IIR,

### NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS

NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS TEST DESIGN AND FRAMEWORK September 2014 Authorized for Distribution by the New York State Education Department This test design and framework document

### pp. 4 8: Examples 1 6 Quick Check 1 6 Exercises 1, 2, 20, 42, 43, 64

Semester 1 Text: Chapter 1: Tools of Algebra Lesson 1-1: Properties of Real Numbers Day 1 Part 1: Graphing and Ordering Real Numbers Part 1: Graphing and Ordering Real Numbers Lesson 1-2: Algebraic Expressions

### Advanced Algebra 2. I. Equations and Inequalities

Advanced Algebra 2 I. Equations and Inequalities A. Real Numbers and Number Operations 6.A.5, 6.B.5, 7.C.5 1) Graph numbers on a number line 2) Order real numbers 3) Identify properties of real numbers

### The program also provides supplemental modules on topics in geometry and probability and statistics.

Algebra 1 Course Overview Students develop algebraic fluency by learning the skills needed to solve equations and perform important manipulations with numbers, variables, equations, and inequalities. Students

### NRZ Bandwidth - HF Cutoff vs. SNR

Application Note: HFAN-09.0. Rev.2; 04/08 NRZ Bandwidth - HF Cutoff vs. SNR Functional Diagrams Pin Configurations appear at end of data sheet. Functional Diagrams continued at end of data sheet. UCSP

### The continuous and discrete Fourier transforms

FYSA21 Mathematical Tools in Science The continuous and discrete Fourier transforms Lennart Lindegren Lund Observatory (Department of Astronomy, Lund University) 1 The continuous Fourier transform 1.1

### Computer Networks and Internets, 5e Chapter 6 Information Sources and Signals. Introduction

Computer Networks and Internets, 5e Chapter 6 Information Sources and Signals Modified from the lecture slides of Lami Kaya (LKaya@ieee.org) for use CECS 474, Fall 2008. 2009 Pearson Education Inc., Upper

### Voice---is analog in character and moves in the form of waves. 3-important wave-characteristics:

Voice Transmission --Basic Concepts-- Voice---is analog in character and moves in the form of waves. 3-important wave-characteristics: Amplitude Frequency Phase Voice Digitization in the POTS Traditional

### INFERRING TRADING STRATEGIES FROM PROBABILITY DISTRIBUTION FUNCTIONS

INFERRING TRADING STRATEGIES FROM PROBABILITY DISTRIBUTION FUNCTIONS INFERRING TRADING STRATEGIES FROM PROBABILITY DISTRIBUTION FUNCTIONS BACKGROUND The primary purpose of technical analysis is to observe

### This Unit: Floating Point Arithmetic. CIS 371 Computer Organization and Design. Readings. Floating Point (FP) Numbers

This Unit: Floating Point Arithmetic CIS 371 Computer Organization and Design Unit 7: Floating Point App App App System software Mem CPU I/O Formats Precision and range IEEE 754 standard Operations Addition

### Understand the principles of operation and characterization of digital filters

Digital Filters 1.0 Aim Understand the principles of operation and characterization of digital filters 2.0 Learning Outcomes You will be able to: Implement a digital filter in MATLAB. Investigate how commercial

### LAB 4 DIGITAL FILTERING

LAB 4 DIGITAL FILTERING 1. LAB OBJECTIVE The purpose of this lab is to design and implement digital filters. You will design and implement a first order IIR (Infinite Impulse Response) filter and a second

### TTT4120 Digital Signal Processing Suggested Solution to Exam Fall 2008

Norwegian University of Science and Technology Department of Electronics and Telecommunications TTT40 Digital Signal Processing Suggested Solution to Exam Fall 008 Problem (a) The input and the input-output

### Algebra I Vocabulary Cards

Algebra I Vocabulary Cards Table of Contents Expressions and Operations Natural Numbers Whole Numbers Integers Rational Numbers Irrational Numbers Real Numbers Absolute Value Order of Operations Expression

### Probability and Random Variables. Generation of random variables (r.v.)

Probability and Random Variables Method for generating random variables with a specified probability distribution function. Gaussian And Markov Processes Characterization of Stationary Random Process Linearly

### Floating-Point Numbers. Floating-point number system characterized by four integers: base or radix precision exponent range

Floating-Point Numbers Floating-point number system characterized by four integers: β p [L, U] base or radix precision exponent range Number x represented as where x = ± ( d 0 + d 1 β + d 2 β 2 + + d p

### IEEE floating point numbers. Floating-Point Representation and Approximation. floating point numbers with base 10. Errors

EE103 (Shinnerl) Floating-Point Representation and Approximation Errors Cancellation Instability Simple one-variable examples Swamping IEEE floating point numbers floating point numbers with base 10 floating

### Performance Level Descriptors Grade 6 Mathematics

Performance Level Descriptors Grade 6 Mathematics Multiplying and Dividing with Fractions 6.NS.1-2 Grade 6 Math : Sub-Claim A The student solves problems involving the Major Content for grade/course with

### Chapter 8 - Power Density Spectrum

EE385 Class Notes 8/8/03 John Stensby Chapter 8 - Power Density Spectrum Let X(t) be a WSS random process. X(t) has an average power, given in watts, of E[X(t) ], a constant. his total average power is

### CHAPTER THREE. 3.1 Binary Addition. Binary Math and Signed Representations

CHAPTER THREE Binary Math and Signed Representations Representing numbers with bits is one thing. Doing something with them is an entirely different matter. This chapter discusses some of the basic mathematical

### Lecture 8: Signal Detection and Noise Assumption

ECE 83 Fall Statistical Signal Processing instructor: R. Nowak, scribe: Feng Ju Lecture 8: Signal Detection and Noise Assumption Signal Detection : X = W H : X = S + W where W N(, σ I n n and S = [s, s,...,

### Taking the Mystery out of the Infamous Formula, "SNR = 6.02N + 1.76dB," and Why You Should Care. by Walt Kester

ITRODUCTIO Taking the Mystery out of the Infamous Formula, "SR = 6.0 + 1.76dB," and Why You Should Care by Walt Kester MT-001 TUTORIAL You don't have to deal with ADCs or DACs for long before running across

### Experiment #1, Analyze Data using Excel, Calculator and Graphs.

Physics 182 - Fall 2014 - Experiment #1 1 Experiment #1, Analyze Data using Excel, Calculator and Graphs. 1 Purpose (5 Points, Including Title. Points apply to your lab report.) Before we start measuring

### AP Physics 1 and 2 Lab Investigations

AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks

### In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data.

MATHEMATICS: THE LEVEL DESCRIPTIONS In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data. Attainment target

### Fixed-point Mathematics

Appendix A Fixed-point Mathematics In this appendix, we will introduce the notation and operations that we use for fixed-point mathematics. For some platforms, e.g., low-cost mobile phones, fixed-point

### Divide: Paper & Pencil. Computer Architecture ALU Design : Division and Floating Point. Divide algorithm. DIVIDE HARDWARE Version 1

Divide: Paper & Pencil Computer Architecture ALU Design : Division and Floating Point 1001 Quotient Divisor 1000 1001010 Dividend 1000 10 101 1010 1000 10 (or Modulo result) See how big a number can be

### Advanced 3G and 4G Wireless Communication Prof. Aditya K. Jagannatham Department of Electrical Engineering Indian Institute of Technology, Kanpur

Advanced 3G and 4G Wireless Communication Prof. Aditya K. Jagannatham Department of Electrical Engineering Indian Institute of Technology, Kanpur Lecture - 3 Rayleigh Fading and BER of Wired Communication

### 6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives

6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise

### A Coefficient of Variation for Skewed and Heavy-Tailed Insurance Losses. Michael R. Powers[ 1 ] Temple University and Tsinghua University

A Coefficient of Variation for Skewed and Heavy-Tailed Insurance Losses Michael R. Powers[ ] Temple University and Tsinghua University Thomas Y. Powers Yale University [June 2009] Abstract We propose a

### RANDOM VIBRATION AN OVERVIEW by Barry Controls, Hopkinton, MA

RANDOM VIBRATION AN OVERVIEW by Barry Controls, Hopkinton, MA ABSTRACT Random vibration is becoming increasingly recognized as the most realistic method of simulating the dynamic environment of military

### THE ENTROPY OF THE NORMAL DISTRIBUTION

CHAPTER 8 THE ENTROPY OF THE NORMAL DISTRIBUTION INTRODUCTION The normal distribution or Gaussian distribution or Gaussian probability density function is defined by N(x; µ, σ) = /σ (πσ ) / e (x µ). (8.)

### Quiz for Chapter 3 Arithmetic for Computers 3.10

Date: Quiz for Chapter 3 Arithmetic for Computers 3.10 Not all questions are of equal difficulty. Please review the entire quiz first and then budget your time carefully. Name: Course: Solutions in RED

Academic Content Standards Grade Eight and Grade Nine Ohio Algebra 1 2008 Grade Eight STANDARDS Number, Number Sense and Operations Standard Number and Number Systems 1. Use scientific notation to express

### Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test

Math Review for the Quantitative Reasoning Measure of the GRE revised General Test www.ets.org Overview This Math Review will familiarize you with the mathematical skills and concepts that are important

### FIR Filter Design. FIR Filters and the z-domain. The z-domain model of a general FIR filter is shown in Figure 1. Figure 1

FIR Filters and the -Domain FIR Filter Design The -domain model of a general FIR filter is shown in Figure. Figure Each - box indicates a further delay of one sampling period. For example, the input to

### Jitter Measurements in Serial Data Signals

Jitter Measurements in Serial Data Signals Michael Schnecker, Product Manager LeCroy Corporation Introduction The increasing speed of serial data transmission systems places greater importance on measuring

### Mathematical Procedures

CHAPTER 6 Mathematical Procedures 168 CHAPTER 6 Mathematical Procedures The multidisciplinary approach to medicine has incorporated a wide variety of mathematical procedures from the fields of physics,

### Example/ an analog signal f ( t) ) is sample by f s = 5000 Hz draw the sampling signal spectrum. Calculate min. sampling frequency.

1 2 3 4 Example/ an analog signal f ( t) = 1+ cos(4000πt ) is sample by f s = 5000 Hz draw the sampling signal spectrum. Calculate min. sampling frequency. Sol/ H(f) -7KHz -5KHz -3KHz -2KHz 0 2KHz 3KHz

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

Should we Really Care about Building Business Cycle Coincident Indexes! Alain Hecq University of Maastricht The Netherlands August 2, 2004 Abstract Quite often, the goal of the game when developing new

### CS321. Introduction to Numerical Methods

CS3 Introduction to Numerical Methods Lecture Number Representations and Errors Professor Jun Zhang Department of Computer Science University of Kentucky Lexington, KY 40506-0633 August 7, 05 Number in

### Review of Fundamental Mathematics

Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools